threads |
---|
[
{
"msg_contents": "I can't help wondering how a couple thousand context switches per\nsecond would affect the attempt to load disk info into the L1 and\nL2 caches. That's pretty much the low end of what I see when the\nserver is under any significant load.\n\n\n",
"msg_date": "Tue, 27 Sep 2005 10:21:37 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": ">From: Josh Berkus <[email protected]>\n>ent: Sep 27, 2005 12:15 PM\n>To: Ron Peacetree <[email protected]> \n>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n>\n>I've somehow missed part of this thread, which is a shame since this is \n>an area of primary concern for me.\n>\n>Your suggested algorithm seems to be designed to relieve I/O load by \n>making more use of the CPU. (if I followed it correctly).\n>\nThe goal is to minimize all IO load. Not just HD IO load, but also RAM\nIO load. Particularly random access IO load of any type (for instance:\n\"the pointer chasing problem\").\n\nIn addition, the design replaces explicit data or explicit key manipulation\nwith the creation of a smaller, far more CPU and IO efficient data\nstructure (essentially a CPU cache friendly Btree index) of the sorted\norder of the data.\n\nThat Btree can be used to generate a physical reordering of the data\nin one pass, but that's the weakest use for it. The more powerful\nuses involve allowing the Btree to persist and using it for more\nefficient re-searches or combining it with other such Btrees (either as\na step in task distribution across multiple CPUs or as a more efficient\nway to do things like joins by manipulating these Btrees rather than\nthe actual records.)\n\n\n>However, that's not PostgreSQL's problem; currently for us external\n>sort is a *CPU-bound* operation, half of which is value comparisons.\n>(oprofiles available if anyone cares)\n>\n>So we need to look, instead, at algorithms which make better use of \n>work_mem to lower CPU activity, possibly even at the expense of I/O.\n>\nI suspect that even the highly efficient sorting code we have is\nsuffering more pessimal CPU IO behavior than what I'm presenting.\nJim Gray's external sorting contest web site points out that memory IO\nhas become a serious problem for most of the contest entries.\n\nAlso, I'll bet the current code manipulates more data.\n\nFinally, there's the possibilty of reusing the product of this work to a\ndegree and in ways that we can't with our current sorting code.\n\n\nNow all we need is resources and time to create a prototype.\nSince I'm not likely to have either any time soon, I'm hoping that\nI'll be able to explain this well enough that others can test it.\n\n*sigh* I _never_ have enough time or resources any more...\nRon\n \n \n",
"msg_date": "Tue, 27 Sep 2005 13:15:38 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Tue, 2005-09-27 at 13:15 -0400, Ron Peacetree wrote:\n\n> That Btree can be used to generate a physical reordering of the data\n> in one pass, but that's the weakest use for it. The more powerful\n> uses involve allowing the Btree to persist and using it for more\n> efficient re-searches or combining it with other such Btrees (either as\n> a step in task distribution across multiple CPUs or as a more efficient\n> way to do things like joins by manipulating these Btrees rather than\n> the actual records.)\n\nMaybe you could describe some concrete use cases. I can see what you\nare getting at, and I can imagine some advantageous uses, but I'd like\nto know what you are thinking.\n\nSpecifically I'd like to see some cases where this would beat sequential\nscan. I'm thinking that in your example of a terabyte table with a\ncolumn having only two values, all the queries I can think of would be\nbetter served with a sequential scan.\n\nPerhaps I believe this because you can now buy as much sequential I/O as\nyou want. Random I/O is the only real savings.\n\n-jwb\n\n\n",
"msg_date": "Tue, 27 Sep 2005 10:26:28 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "Hello,\n\nMy connection ADO is very, very, very slow.... \n\nMy Delphi connection saw ADO is very slow. All SQL that I execute delay big, I tested in pgadmin and the reply is instantaned, the problem this in the Delphi? \n\nTanks!\nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.344 / Virus Database: 267.11.5/110 - Release Date: 22/09/2005",
"msg_date": "Tue, 27 Sep 2005 15:01:24 -0300",
"msg_from": "\"Everton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Delphi connection ADO is slow"
}
] |
[
{
"msg_contents": "By occation, we dropped the whole production database and refreshed it from\na database backup - and all our performance problems seems to have gone. I\nsuppose this means that to keep the database efficient, one eventually does\nhave to do reindexing and/or full vacuum from time to time?\n\n-- \nNotice of Confidentiality: This email is sent unencrypted over the network,\nand may be stored on several email servers; it can be read by third parties\nas easy as a postcard. Do not rely on email for confidential information.\n",
"msg_date": "Wed, 28 Sep 2005 05:33:27 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "The need for full vacuum / reindex"
},
{
"msg_contents": "On Wed, Sep 28, 2005 at 05:33:27 +0200,\n Tobias Brox <[email protected]> wrote:\n> By occation, we dropped the whole production database and refreshed it from\n> a database backup - and all our performance problems seems to have gone. I\n> suppose this means that to keep the database efficient, one eventually does\n> have to do reindexing and/or full vacuum from time to time?\n\nNormally you only need to do that if you didn't vacuum often enough or with\nhigh enough fsm setting and bloat has gotten out of hand to the point that\nyou need to recover some space.\n",
"msg_date": "Tue, 27 Sep 2005 23:16:45 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for full vacuum / reindex"
}
] |
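Editor's note: Bruno's reply hinges on routine VACUUM having a large enough free space map to reuse the space it finds. A minimal SQL sketch of how one might check this on an 8.x-era server is below; it is not from the thread, and the table names come from whatever database it is run against.

```sql
-- Rough bloat check: on-disk pages vs. the planner's row estimate.
-- Both figures are only as fresh as the last VACUUM/ANALYZE.
SELECT relname,
       relpages,                                           -- 8 kB pages on disk
       reltuples,                                          -- estimated live rows
       round(relpages / nullif(reltuples, 0)::numeric, 4) AS pages_per_row
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 20;

-- The trailing INFO/DETAIL lines of a database-wide VACUUM VERBOSE report
-- how many free-space-map pages are needed versus the max_fsm_pages /
-- max_fsm_relations currently allocated. If "pages needed" exceeds the
-- allocation, bloat accumulates no matter how often plain VACUUM runs.
VACUUM VERBOSE;
```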
[
{
"msg_contents": "hi\nsetup:\npostgresql 8.0.3 put on debian on dual xeon, 8GB ram, hardware raid.\n\ndatabase just after recreation from dump takes 15gigabytes.\nafter some time (up to 3 weeks) it gets really slow and has to be dump'ed\nand restored.\n\nas for fsm:\nend of vacuum info:\nINFO: free space map: 248 relations, 1359140 pages stored; 1361856 total\npages needed\nDETAIL: Allocated FSM size: 1000 relations + 10000000 pages = 58659 kB\nshared memory.\n\nso it looks i have plenty of space in fsm.\n\nvacuums run constantly.\n4 different tasks, 3 of them doing:\nwhile true\nvacuum table\nsleep 15m\ndone\nwith different tables (i have chooses the most updated tables in system).\n\nand the fourth vacuum task does the same, but without specifying table - so\nit vacuumes whole database.\n\nafter last dump/restore cycle i noticed that doing reindex on all indices in\ndatabase made it drop in side from 40G to about 20G - so it might be that i\nwill be using reindex instead of drop/restore.\nanyway - i'm not using any special indices - just some (117 to be exact)\nindices of btree type. we use simple, multi-column, partial and multi-column\npartial indices. we do not have functional indices.\n\ndatabase has quite huge load of updates, but i thought that vacum will guard\nme from database bloat, but from what i observed it means that vacuuming of\nb-tree indices is somewhat faulty.\n\nany suggestions? what else can i supply you with to help you help me?\n\nbest regards\n\ndepesz\n\nhi\nsetup:\npostgresql 8.0.3 put on debian on dual xeon, 8GB ram, hardware raid.\n\ndatabase just after recreation from dump takes 15gigabytes.\nafter some time (up to 3 weeks) it gets really slow and has to be dump'ed and restored.\n\nas for fsm:\nend of vacuum info:\nINFO: free space map: 248 relations, 1359140 pages stored; 1361856 total pages needed\nDETAIL: Allocated FSM size: 1000 relations + 10000000 pages = 58659 kB shared memory.\n\nso it looks i have plenty of space in fsm.\n\nvacuums run constantly.\n4 different tasks, 3 of them doing:\nwhile true\nvacuum table\nsleep 15m\ndone\nwith different tables (i have chooses the most updated tables in system).\n\nand the fourth vacuum task does the same, but without specifying table - so it vacuumes whole database.\n\nafter last dump/restore cycle i noticed that doing reindex on all\nindices in database made it drop in side from 40G to about 20G - so it\nmight be that i will be using reindex instead of drop/restore.\nanyway - i'm not using any special indices - just some (117 to be\nexact) indices of btree type. we use simple, multi-column, partial and\nmulti-column partial indices. we do not have functional indices.\n\ndatabase has quite huge load of updates, but i thought that vacum will\nguard me from database bloat, but from what i observed it means that\nvacuuming of b-tree indices is somewhat faulty.\n\nany suggestions? what else can i supply you with to help you help me?\n\nbest regards\n\ndepesz",
"msg_date": "Wed, 28 Sep 2005 09:07:07 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "database bloat, but vacuums are done, and fsm seems to be setup ok"
},
{
"msg_contents": "Looks like it's definately an issue with index bloat. Note that it's\nnormal to have some amount of empty space depending on vacuum and update\nfrequency, so 15G -> 20G isn't terribly surprising. I would suggest\nusing pg_autovacuum instead of the continuous vacuum; it's very possible\nthat some of your tables need more frequent vacuuming than they're\ngetting now. If you go this route, you might want to change the default\nsettings a bit to make pg_autovacuum more agressive.\n\nAlso, I'd suggest posting to -hackers about the index bloat. Would you\nbe able to make a filesystem copy (ie: tar -cjf database.tar.bz2\n$PGDATA) available? It might also be useful to keep an eye on index size\nin pg_class.relpages and see exactly what indexes are bloating.\n\nOn Wed, Sep 28, 2005 at 09:07:07AM +0200, hubert depesz lubaczewski wrote:\n> hi\n> setup:\n> postgresql 8.0.3 put on debian on dual xeon, 8GB ram, hardware raid.\n> \n> database just after recreation from dump takes 15gigabytes.\n> after some time (up to 3 weeks) it gets really slow and has to be dump'ed\n> and restored.\n> \n> as for fsm:\n> end of vacuum info:\n> INFO: free space map: 248 relations, 1359140 pages stored; 1361856 total\n> pages needed\n> DETAIL: Allocated FSM size: 1000 relations + 10000000 pages = 58659 kB\n> shared memory.\n> \n> so it looks i have plenty of space in fsm.\n> \n> vacuums run constantly.\n> 4 different tasks, 3 of them doing:\n> while true\n> vacuum table\n> sleep 15m\n> done\n> with different tables (i have chooses the most updated tables in system).\n> \n> and the fourth vacuum task does the same, but without specifying table - so\n> it vacuumes whole database.\n> \n> after last dump/restore cycle i noticed that doing reindex on all indices in\n> database made it drop in side from 40G to about 20G - so it might be that i\n> will be using reindex instead of drop/restore.\n> anyway - i'm not using any special indices - just some (117 to be exact)\n> indices of btree type. we use simple, multi-column, partial and multi-column\n> partial indices. we do not have functional indices.\n> \n> database has quite huge load of updates, but i thought that vacum will guard\n> me from database bloat, but from what i observed it means that vacuuming of\n> b-tree indices is somewhat faulty.\n> \n> any suggestions? what else can i supply you with to help you help me?\n> \n> best regards\n> \n> depesz\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 30 Sep 2005 14:34:59 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database bloat, but vacuums are done,\n and fsm seems to be setup ok"
},
{
"msg_contents": "On 9/30/05, Jim C. Nasby <[email protected]> wrote:\n>\n> Looks like it's definately an issue with index bloat. Note that it's\n> normal to have some amount of empty space depending on vacuum and update\n> frequency, so 15G -> 20G isn't terribly surprising. I would suggest\n> using pg_autovacuum instead of the continuous vacuum; it's very possible\n> that some of your tables need more frequent vacuuming than they're\n> getting now. If you go this route, you might want to change the default\n> settings a bit to make pg_autovacuum more agressive.\n\n\n\nactually i have a very bad experience with autovaccum - of course it is\nbecause i dont know how to setup it correctly, but for me it's just easier\nto setup continuos vacuums. and i know which tables are frequently updated,\nso i setup additional vacuums on them.\n\nAlso, I'd suggest posting to -hackers about the index bloat. Would you\n> be able to make a filesystem copy (ie: tar -cjf database.tar.bz2\n> $PGDATA) available? It might also be useful to keep an eye on index size\n> in pg_class.relpages and see exactly what indexes are bloating.\n\n\n\ni'm watching it right now (which indices are bloating), but i cannot send\ncopy of pgdata - it contains very sensitive information.\ndepesz\n\nOn 9/30/05, Jim C. Nasby <[email protected]> wrote:\nLooks like it's definately an issue with index bloat. Note that it'snormal to have some amount of empty space depending on vacuum and updatefrequency, so 15G -> 20G isn't terribly surprising. I would suggest\nusing pg_autovacuum instead of the continuous vacuum; it's very possiblethat some of your tables need more frequent vacuuming than they'regetting now. If you go this route, you might want to change the default\nsettings a bit to make pg_autovacuum more agressive.\n\nactually i have a very bad experience with autovaccum - of course it is\nbecause i dont know how to setup it correctly, but for me it's just\neasier to setup continuos vacuums. and i know which tables are\nfrequently updated, so i setup additional vacuums on them.\n Also, I'd suggest posting to -hackers about the index bloat. Would yoube able to make a filesystem copy (ie: tar -cjf \ndatabase.tar.bz2$PGDATA) available? It might also be useful to keep an eye on index sizein pg_class.relpages and see exactly what indexes are bloating.\n\ni'm watching it right now (which indices are bloating), but i cannot\nsend copy of pgdata - it contains very sensitive information.\n depesz",
"msg_date": "Sat, 1 Oct 2005 12:01:04 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database bloat, but vacuums are done,\n and fsm seems to be setup ok"
},
{
"msg_contents": "On Wed, 2005-09-28 at 09:07 +0200, hubert depesz lubaczewski wrote:\n> database has quite huge load of updates, but i thought that vacum will\n> guard me from database bloat, but from what i observed it means that\n> vacuuming of b-tree indices is somewhat faulty.\n\nNo, thats perfectly normal.\n\nIndices are packed tighter when they are first created and they spread\nout a bit as you update the database. Blocks start at 90% full and end\nup at 50% full for non-monotonic indexes (e.g. SERIAL) or 67% for\nmonotonic.\n\nIt's a long debated design feature on any DBMS that uses b-trees.\n\nREINDEX or dump/restore should be identical.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 03 Oct 2005 22:47:02 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] database bloat, but vacuums are done, and fsm seems"
}
] |
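Editor's note: Jim's suggestion to "keep an eye on index size in pg_class.relpages" can be done with a catalog query along these lines. This is a sketch rather than anything posted in the thread, and the REINDEX target is a hypothetical name.

```sql
-- Snapshot each index's size next to its table, so a bloating index
-- stands out when the query is re-run over time.
SELECT t.relname  AS table_name,
       i.relname  AS index_name,
       i.relpages AS index_pages,
       t.relpages AS table_pages
FROM pg_index ix
JOIN pg_class i ON i.oid = ix.indexrelid
JOIN pg_class t ON t.oid = ix.indrelid
ORDER BY i.relpages DESC
LIMIT 20;

-- Rebuilding one runaway index is far cheaper than a dump/restore,
-- at the price of an exclusive lock while it runs (hypothetical name):
REINDEX INDEX some_bloated_index;
```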
[
{
"msg_contents": "Hi\n\nWhile doing some stress testing for updates in a small sized table\nwe found the following results. We are not too happy about the speed\nof the updates particularly at high concurrency (10 clients).\n\nInitially we get 119 updates / sec but it drops to 10 updates/sec\nas concurrency is increased.\n\nPostgreSQL: 8.0.3\n-------------------------------\nTABLE STRUCTURE: general.stress\n-------------------------------\n| dispatch_id | integer | not null |\n| query_id | integer | |\n| generated | timestamp with time zone | |\n| unsubscribes | integer | |\n| read_count | integer | |\n| status | character varying(10) | |\n| bounce_tracking | boolean | |\n| dispatch_hour | integer | |\n| dispatch_date_id | integer | |\n+------------------+--------------------------+-----------+\nIndexes:\n \"stress_pkey\" PRIMARY KEY, btree (dispatch_id)\n\nUPDATE STATEMENT:\nupdate general.stress set read_count=read_count+1 where dispatch_id=114\nTOOL USED: Perl/DBI , with prepared statement handlers\nCONCURRENCY METHOD: executing multiple copies of same program\nfrom different shells (linux enviornment)\nCLIENT SERVER LINK : 10/100 Mbits , LAN\n\nCLIENT CODE: stress.pl\n-------------------------------------------------------------------------\n#!/opt/perl/bin/perl -I/usr/local/masonapache/lib/perl\n################################################\n#overview: update the table as fast as possible (while(1){})\n#on every 100th commit , print the average update frequency\n#of last 100 updates\n##########################################\nuse strict;\nuse Time::HiRes qw(gettimeofday tv_interval);\nuse Utils;\nmy $dbh = &Utils::db_connect();\nmy $sth = $dbh -> prepare(\"update general.stress set\nread_count=read_count+1 where dispatch_id=114\");\nmy $cnt=0;\nmy $t0 = [ gettimeofday ];\nwhile(1) {\n $sth -> execute();\n $dbh->commit();\n $cnt++;\n if ($cnt % 100 == 0)\n {\n my $t1 = [ gettimeofday ];\n my $elapsed = tv_interval ( $t0 , $t1 );\n $t0 = $t1;\n printf \"Rate: %d updates / sec\\n\" , 100.0/$elapsed ;\n }\n}\n$sth->finish();\n$dbh->disconnect();\n--------------------------------------------------------------------------------------\n\n--------------------------------------------------------------------------------------\nRESULTS:\n--------------------------------------------------------------------------------------\n\nNumber of Copies | Update perl Sec\n\n1 --> 119\n2 ---> 59\n3 ---> 38\n4 ---> 28\n5 --> 22\n6 --> 19\n7 --> 16\n8 --> 14\n9 --> 11\n10 --> 11\n11 --> 10\n\n-------------------------------------------------------------------------------------\nNote that the table was vacuum analyzed during the tests\ntotal number of records in table: 93\n-------------------------------------------------------------------------------------\n\nRegds\nRajesh Kumar Mallah.\n",
"msg_date": "Wed, 28 Sep 2005 17:57:36 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow concurrent update of same row in a given table"
},
{
"msg_contents": "On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:\n\n> Hi\n>\n> While doing some stress testing for updates in a small sized table\n> we found the following results. We are not too happy about the speed\n> of the updates particularly at high concurrency (10 clients).\n>\n> Initially we get 119 updates / sec but it drops to 10 updates/sec\n> as concurrency is increased.\n>\n> PostgreSQL: 8.0.3\n> -------------------------------\n> TABLE STRUCTURE: general.stress\n> -------------------------------\n> | dispatch_id | integer | not null |\n> | query_id | integer | |\n> | generated | timestamp with time zone | |\n> | unsubscribes | integer | |\n> | read_count | integer | |\n> | status | character varying(10) | |\n> | bounce_tracking | boolean | |\n> | dispatch_hour | integer | |\n> | dispatch_date_id | integer | |\n> +------------------+--------------------------+-----------+\n> Indexes:\n> \"stress_pkey\" PRIMARY KEY, btree (dispatch_id)\n>\n> UPDATE STATEMENT:\n> update general.stress set read_count=read_count+1 where dispatch_id=114\n\nThis means you are updating only one row, correct?\n\n> Number of Copies | Update perl Sec\n>\n> 1 --> 119\n> 2 ---> 59\n> 3 ---> 38\n> 4 ---> 28\n> 5 --> 22\n> 6 --> 19\n> 7 --> 16\n> 8 --> 14\n> 9 --> 11\n> 10 --> 11\n> 11 --> 10\n\nSo, 11 instances result in 10 updated rows per second, database wide or\nper instance? If it is per instance, then 11 * 10 is close to the\nperformance for one connection.\n\nThat being said, when you've got 10 connections fighting over one row, I\nwouldn't be surprised if you had bad performance.\n\nAlso, at 119 updates a second, you're more than doubling the table's\ninitial size (dead tuples) each second. How often are you vacuuming and\nare you using vacuum or vacuum full?\n\nGavin\n",
"msg_date": "Wed, 28 Sep 2005 22:53:35 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow concurrent update of same row in a given table"
},
{
"msg_contents": "On 9/28/05, Gavin Sherry <[email protected]> wrote:\n> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:\n>\n> > Hi\n> >\n> > While doing some stress testing for updates in a small sized table\n> > we found the following results. We are not too happy about the speed\n> > of the updates particularly at high concurrency (10 clients).\n> >\n> > Initially we get 119 updates / sec but it drops to 10 updates/sec\n> > as concurrency is increased.\n> >\n> > PostgreSQL: 8.0.3\n> > -------------------------------\n> > TABLE STRUCTURE: general.stress\n> > -------------------------------\n> > | dispatch_id | integer | not null |\n> > | query_id | integer | |\n> > | generated | timestamp with time zone | |\n> > | unsubscribes | integer | |\n> > | read_count | integer | |\n> > | status | character varying(10) | |\n> > | bounce_tracking | boolean | |\n> > | dispatch_hour | integer | |\n> > | dispatch_date_id | integer | |\n> > +------------------+--------------------------+-----------+\n> > Indexes:\n> > \"stress_pkey\" PRIMARY KEY, btree (dispatch_id)\n> >\n> > UPDATE STATEMENT:\n> > update general.stress set read_count=read_count+1 where dispatch_id=114\n>\n> This means you are updating only one row, correct?\n\nCorrect.\n\n\n>\n> > Number of Copies | Update perl Sec\n> >\n> > 1 --> 119\n> > 2 ---> 59\n> > 3 ---> 38\n> > 4 ---> 28\n> > 5 --> 22\n> > 6 --> 19\n> > 7 --> 16\n> > 8 --> 14\n> > 9 --> 11\n> > 10 --> 11\n> > 11 --> 10\n>\n> So, 11 instances result in 10 updated rows per second, database wide or\n> per instance? If it is per instance, then 11 * 10 is close to the\n> performance for one connection.\n\n\nSorry do not understand the difference between \"database wide\"\nand \"per instance\"\n\n>\n> That being said, when you've got 10 connections fighting over one row, I\n> wouldn't be surprised if you had bad performance.\n>\n> Also, at 119 updates a second, you're more than doubling the table's\n> initial size (dead tuples) each second. How often are you vacuuming and\n> are you using vacuum or vacuum full?\n\n\nYes I realize the obvious phenomenon now, (and the uselessness of the script)\n , we should not consider it a performance degradation.\n\nI am having performance issue in my live database thats why i tried to\nsimulate the situation(may the the script was overstresser).\n\nMy original problem is that i send 100 000s of emails carrying a\nbeacon for tracking readership every tuesday and on wednesday i see\nlot of the said query in pg_stat_activity each of these query update\nthe SAME row that corresponds to the dispatch of last day and it is\nthen i face the performance problem.\n\nI think i can only post further details next wednesday , please lemme\nknow how should i be dealing with the situation if each the updates takes\n100times more time that normal update duration.\n\nBest Regards\nMallah.\n\n\n>\n> Gavin\n>\n",
"msg_date": "Wed, 28 Sep 2005 23:14:12 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow concurrent update of same row in a given table"
},
{
"msg_contents": "On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:\n\n> > > Number of Copies | Update perl Sec\n> > >\n> > > 1 --> 119\n> > > 2 ---> 59\n> > > 3 ---> 38\n> > > 4 ---> 28\n> > > 5 --> 22\n> > > 6 --> 19\n> > > 7 --> 16\n> > > 8 --> 14\n> > > 9 --> 11\n> > > 10 --> 11\n> > > 11 --> 10\n> >\n> > So, 11 instances result in 10 updated rows per second, database wide or\n> > per instance? If it is per instance, then 11 * 10 is close to the\n> > performance for one connection.\n>\n>\n> Sorry do not understand the difference between \"database wide\"\n> and \"per instance\"\n\nPer instance.\n\n>\n> >\n> > That being said, when you've got 10 connections fighting over one row, I\n> > wouldn't be surprised if you had bad performance.\n> >\n> > Also, at 119 updates a second, you're more than doubling the table's\n> > initial size (dead tuples) each second. How often are you vacuuming and\n> > are you using vacuum or vacuum full?\n>\n>\n> Yes I realize the obvious phenomenon now, (and the uselessness of the script)\n> , we should not consider it a performance degradation.\n>\n> I am having performance issue in my live database thats why i tried to\n> simulate the situation(may the the script was overstresser).\n>\n> My original problem is that i send 100 000s of emails carrying a\n> beacon for tracking readership every tuesday and on wednesday i see\n> lot of the said query in pg_stat_activity each of these query update\n> the SAME row that corresponds to the dispatch of last day and it is\n> then i face the performance problem.\n>\n> I think i can only post further details next wednesday , please lemme\n> know how should i be dealing with the situation if each the updates takes\n> 100times more time that normal update duration.\n\nI see. These problems regularly come up in database design. The best thing\nyou can do is modify your database design/application such that instead of\nincrementing a count in a single row, you insert a row into a table,\nrecording the 'dispatch_id'. Counting the number of rows for a given\ndispatch id will give you your count.\n\nThanks,\n\nGavin\n",
"msg_date": "Thu, 29 Sep 2005 08:01:55 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow concurrent update of same row in a given table"
},
{
"msg_contents": "On 9/29/05, Gavin Sherry <[email protected]> wrote:\n> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:\n>\n> > > > Number of Copies | Update perl Sec\n> > > >\n> > > > 1 --> 119\n> > > > 2 ---> 59\n> > > > 3 ---> 38\n> > > > 4 ---> 28\n> > > > 5 --> 22\n> > > > 6 --> 19\n> > > > 7 --> 16\n> > > > 8 --> 14\n> > > > 9 --> 11\n> > > > 10 --> 11\n> > > > 11 --> 10\n> > >\n> > > So, 11 instances result in 10 updated rows per second, database wide or\n> > > per instance? If it is per instance, then 11 * 10 is close to the\n> > > performance for one connection.\n> >\n> >\n> > Sorry do not understand the difference between \"database wide\"\n> > and \"per instance\"\n>\n> Per instance.\n>\n> >\n> > >\n> > > That being said, when you've got 10 connections fighting over one row, I\n> > > wouldn't be surprised if you had bad performance.\n> > >\n> > > Also, at 119 updates a second, you're more than doubling the table's\n> > > initial size (dead tuples) each second. How often are you vacuuming and\n> > > are you using vacuum or vacuum full?\n> >\n> >\n> > Yes I realize the obvious phenomenon now, (and the uselessness of the script)\n> > , we should not consider it a performance degradation.\n> >\n> > I am having performance issue in my live database thats why i tried to\n> > simulate the situation(may the the script was overstresser).\n> >\n> > My original problem is that i send 100 000s of emails carrying a\n> > beacon for tracking readership every tuesday and on wednesday i see\n> > lot of the said query in pg_stat_activity each of these query update\n> > the SAME row that corresponds to the dispatch of last day and it is\n> > then i face the performance problem.\n> >\n> > I think i can only post further details next wednesday , please lemme\n> > know how should i be dealing with the situation if each the updates takes\n> > 100times more time that normal update duration.\n>\n> I see. These problems regularly come up in database design. The best thing\n> you can do is modify your database design/application such that instead of\n> incrementing a count in a single row, you insert a row into a table,\n> recording the 'dispatch_id'. Counting the number of rows for a given\n> dispatch id will give you your count.\n>\n\nsorry i will be accumulating huge amount of rows in seperate table\nwith no extra info when i really want just the count. Do you have\na better database design in mind?\n\nAlso i encounter same problem in implementing read count of\narticles in sites and in counting banner impressions where same\nrow get updated by multiple processes frequently.\n\n\nThanks & Regds\nmallah.\n\n\n\n\n\n\n\n> Thanks,\n>\n> Gavin\n>\n",
"msg_date": "Thu, 29 Sep 2005 07:59:34 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow concurrent update of same row in a given table"
},
{
"msg_contents": "On Thu, 29 Sep 2005, Rajesh Kumar Mallah wrote:\n\n> On 9/29/05, Gavin Sherry <[email protected]> wrote:\n> > On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:\n> >\n> > > > > Number of Copies | Update perl Sec\n> > > > >\n> > > > > 1 --> 119\n> > > > > 2 ---> 59\n> > > > > 3 ---> 38\n> > > > > 4 ---> 28\n> > > > > 5 --> 22\n> > > > > 6 --> 19\n> > > > > 7 --> 16\n> > > > > 8 --> 14\n> > > > > 9 --> 11\n> > > > > 10 --> 11\n> > > > > 11 --> 10\n> > > >\n> > > > So, 11 instances result in 10 updated rows per second, database wide or\n> > > > per instance? If it is per instance, then 11 * 10 is close to the\n> > > > performance for one connection.\n> > >\n> > >\n> > > Sorry do not understand the difference between \"database wide\"\n> > > and \"per instance\"\n> >\n> > Per instance.\n> >\n> > >\n> > > >\n> > > > That being said, when you've got 10 connections fighting over one row, I\n> > > > wouldn't be surprised if you had bad performance.\n> > > >\n> > > > Also, at 119 updates a second, you're more than doubling the table's\n> > > > initial size (dead tuples) each second. How often are you vacuuming and\n> > > > are you using vacuum or vacuum full?\n> > >\n> > >\n> > > Yes I realize the obvious phenomenon now, (and the uselessness of the script)\n> > > , we should not consider it a performance degradation.\n> > >\n> > > I am having performance issue in my live database thats why i tried to\n> > > simulate the situation(may the the script was overstresser).\n> > >\n> > > My original problem is that i send 100 000s of emails carrying a\n> > > beacon for tracking readership every tuesday and on wednesday i see\n> > > lot of the said query in pg_stat_activity each of these query update\n> > > the SAME row that corresponds to the dispatch of last day and it is\n> > > then i face the performance problem.\n> > >\n> > > I think i can only post further details next wednesday , please lemme\n> > > know how should i be dealing with the situation if each the updates takes\n> > > 100times more time that normal update duration.\n> >\n> > I see. These problems regularly come up in database design. The best thing\n> > you can do is modify your database design/application such that instead of\n> > incrementing a count in a single row, you insert a row into a table,\n> > recording the 'dispatch_id'. Counting the number of rows for a given\n> > dispatch id will give you your count.\n> >\n>\n> sorry i will be accumulating huge amount of rows in seperate table\n> with no extra info when i really want just the count. Do you have\n> a better database design in mind?\n>\n> Also i encounter same problem in implementing read count of\n> articles in sites and in counting banner impressions where same\n> row get updated by multiple processes frequently.\n\nAs I said in private email, accumulating large numbers of rows is not a\nproblem. In your current application, you are write bound, not read bound.\nI've designed many similar systems which have hundred of millions of rows.\nIt takes a while to generate the count, but you just do it periodically in\nnon-busy periods.\n\nWith 8.1, constraint exclusion will give you significantly better\nperformance with this system, as well.\n\nThanks,\n\nGavin\n",
"msg_date": "Thu, 29 Sep 2005 12:35:49 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow concurrent update of same row in a given table"
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 07:59:34AM +0530, Rajesh Kumar Mallah wrote:\n> > I see. These problems regularly come up in database design. The best thing\n> > you can do is modify your database design/application such that instead of\n> > incrementing a count in a single row, you insert a row into a table,\n> > recording the 'dispatch_id'. Counting the number of rows for a given\n> > dispatch id will give you your count.\n> >\n> \n> sorry i will be accumulating huge amount of rows in seperate table\n> with no extra info when i really want just the count. Do you have\n> a better database design in mind?\n> \n> Also i encounter same problem in implementing read count of\n> articles in sites and in counting banner impressions where same\n> row get updated by multiple processes frequently.\n\nDatabases like to work on *sets* of data, not individual rows. Something\nlike this would probably perform much better than what you've got now,\nand would prevent having a huge table laying around:\n\nINSERT INTO holding_table ... -- Done for every incomming\nconnection/what-have-you\n\nCREATE OR REPLACE FUNCTION summarize() RETURNS void AS $$\nDECLARE\n v_rows int;\nBEGIN\n DELETE FROM holding_table;\n GET DIAGNOSTICS v_rows = ROW_COUNT;\n UPDATE count_table\n SET count = count + v_rows\n ;\nEND;\n$$ LANGUAGE plpgsql;\n\nPeriodically (say, once a minute):\nSELECT summarize()\nVACUUM holding_table;\nVACUUM count_table;\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 13:55:13 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow concurrent update of same row in a given table"
}
] |
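Editor's note: Gavin's "insert a row per event, count later" design, combined with Jim's summarize() roll-up shown above, could look roughly like the sketch below. Table and column names are invented for illustration; only dispatch_id 114 comes from the thread.

```sql
-- Every beacon hit becomes an INSERT, so concurrent writers never queue
-- up behind one hot row the way the read_count UPDATE does.
CREATE TABLE dispatch_read_event (
    dispatch_id integer     NOT NULL,
    read_at     timestamptz NOT NULL DEFAULT now()
);
CREATE INDEX dispatch_read_event_dispatch_id_idx
    ON dispatch_read_event (dispatch_id);

-- Per tracking hit:
INSERT INTO dispatch_read_event (dispatch_id) VALUES (114);

-- The count is computed on demand, or folded into a summary table
-- periodically by something like the summarize() function shown above.
SELECT count(*) FROM dispatch_read_event WHERE dispatch_id = 114;
```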
[
{
"msg_contents": "Hi all,\n\n I have been \"googling\" a bit searching info about a way to monitor \npostgresql (CPU & Memory, num processes, ... ) and I haven't found \nanything relevant. I'm using munin to monitor others parameters of my \nservers and I'd like to include postgresql or have a similar tool. Any \nof you is using anything like that? all kind of hints are welcome :-)\n\nCheers!\n-- \nArnau\n\n",
"msg_date": "Wed, 28 Sep 2005 16:32:02 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Monitoring Postgresql performance"
},
{
"msg_contents": "\nOn Sep 28, 2005, at 8:32 AM, Arnau wrote:\n\n> Hi all,\n>\n> I have been \"googling\" a bit searching info about a way to \n> monitor postgresql (CPU & Memory, num processes, ... )\n\nYou didn't mention your platform, but I have an xterm open pretty \nmuch continuously for my DB server that runs plain old top. I have \ncustomized my settings enough that I can pretty much see anything I \nneed to from there.\n\n-Dan\n\n",
"msg_date": "Wed, 28 Sep 2005 11:22:18 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
},
{
"msg_contents": "On 9/28/05, Arnau <[email protected]> wrote:\n> Hi all,\n>\n> I have been \"googling\" a bit searching info about a way to monitor\n> postgresql (CPU & Memory, num processes, ... ) and I haven't found\n> anything relevant. I'm using munin to monitor others parameters of my\n> servers and I'd like to include postgresql or have a similar tool. Any\n> of you is using anything like that? all kind of hints are welcome :-)\n>\n> Cheers!\n> --\n> Arnau\n\nI have a cronjob that runs every 5 minutes and checks the number of\nprocesses. When things get unruly I get a text message sent to my cell\nphone. It also creates a detailed log entry. I'll paste in an example\nof one of my scripts that does this below. This is on a dual purpose\nserver and monitors both cpu load average and postgres. You can have\nthe text message sent to multiple email addresses, just put a space\nseparated list of e-mail addresses between quotes in the CONTACTS=\nline. It's simple, but it works and its always nice to know when\nthere's a problem *before the boss discovers it* ;-)\n\n# Create some messages\nHOSTNAME=`hostname`\nWARNING_DB=\"Database connections on $HOSTNAME is rather high\"\nWARNING_CPU=\"CPU load on $HOSTNAME is rather high\"\nCONTACTS=\"[email protected] [email protected] [email protected]\"\nWARN=0\n\n#calculate the db load\nDB_LOAD=`ps -ax | grep postgres | wc -l`\nif (($DB_LOAD > 150))\nthen\n WARN=1\n echo \"$WARNING_DB ($DB_LOAD) \" | mail -s \"db_load is high\n($DB_LOAD)\" $CONTACTS\nfi\n\n#calculate the processor load\nCPU_LOAD=`cat /proc/loadavg | cut --delimiter=\" \" -f 2 | cut\n--delimiter=\".\" -f 1`\nif (($CPU_LOAD > 8))\nthen\n WARN=1\n echo \"$WARNING_CPU ($CPU_LOAD) \" | mail -s \"CPU_load is high\n($CPU_LOAD)\" $CONTACTS\nfi\n\nif (($WARN > 0))\nthen\n echo -=-=-=-=-=-=-=-=- W A R N I N G -=-=-=-=-=-=-=-=- >> /tmp/warn.txt\n NOW=`date`\n echo -=-=-=-=-=-$NOW-=-=-=-=-=- >> /tmp/warn.txt\n echo CPU LOAD: $CPU_LOAD DB LOAD: $DB_LOAD >> /tmp/warn.txt\n echo >> /tmp/warn.txt\n top -bn 1 >> /tmp/warn.txt\n echo >> /tmp/warn.txt\nfi\n\nNOW=`date`\nCPU_LOAD=`cat /proc/loadavg | cut --delimiter=\" \" -f 1,2,3\n--output-delimiter=\\|`\necho -e $NOW\\|$CPU_LOAD\\|$DB_LOAD >> ~/LOAD_MONITOR.LOG\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Wed, 28 Sep 2005 12:56:28 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
},
{
"msg_contents": "\nOn 28 Sep 2005, at 15:32, Arnau wrote:\n\n\n> Hi all,\n>\n> I have been \"googling\" a bit searching info about a way to \n> monitor postgresql (CPU & Memory, num processes, ... ) and I \n> haven't found anything relevant. I'm using munin to monitor others \n> parameters of my servers and I'd like to include postgresql or have \n> a similar tool. Any of you is using anything like that? all kind of \n> hints are welcome :-)\n>\n> Cheers!\n>\n\nHave you looked at SNMP? It's a bit complex but there's lots of tools \nfor monitoring system data / sending alerts based on SNMP already.\n\n",
"msg_date": "Wed, 28 Sep 2005 22:36:29 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
},
{
"msg_contents": "Arnau wrote:\n> Hi all,\n> \n> I have been \"googling\" a bit searching info about a way to monitor\n> postgresql (CPU & Memory, num processes, ... ) and I haven't found\n> anything relevant. I'm using munin to monitor others parameters of my\n> servers and I'd like to include postgresql or have a similar tool. Any\n> of you is using anything like that? all kind of hints are welcome :-)\n> \n> Cheers!\n\nWe use Cricket + Nagios ( new Netsaint release ).\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Thu, 29 Sep 2005 15:46:52 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
},
{
"msg_contents": "On 9/28/05, Matthew Nuzum <[email protected]> wrote:\n> On 9/28/05, Arnau <[email protected]> wrote:\n> > Hi all,\n> >\n> > I have been \"googling\" a bit searching info about a way to monitor\n> > postgresql (CPU & Memory, num processes, ... ) and I haven't found\n> > anything relevant. I'm using munin to monitor others parameters of my\n> > servers and I'd like to include postgresql or have a similar tool. Any\n> > of you is using anything like that? all kind of hints are welcome :-)\n\nWe are also using cricket + nagios.\n\nOn each DB server: Setup snmpd and use snmpd.conf to set disk quotas\nand mark processes that need to be running (like\npostmaster,syslog,sshd)\n\nOn the monitoring server(s): Use cricket for long term trends &\ngraphs. Use nagios for current status and alerting and some trending.\n(Nagios has plugins over SNMP for load,cpu,memory,disk and processes)\n\nHere's the nagios plugins I have hacked up over the past few months\nand what they do. I'd imagine some could use better names. I can\nprovide these of package them up if anyone is interested.\n\ncheck_pgconn.pl - Shows percentage of connections available. It uses\n\"SELECT COUNT(*) FROM pg_stat_activity\" / \"SHOW max_connections\". It\ncan also alert when less than a certain number of connections are\navailable.\n\ncheck_pgqueries.pl - If you have query logging enabled this summarizes\nthe types of queries running (SELECT ,INSERT ,DELETE ,UPDATE ,ALTER\n,CREATE ,TRUNCATE, VACUUM, COPY) and warns if any queries have been\nrunning longer than 5 minutes (configurable).\n\ncheck_pglocks.pl - Look for locks that block and for baselining lock activity.\n\ncheck_pgtime.pl - Makes sure that postgresql's time is in sync with\nthe monitoring server.\n\ncheck_pgqueries.pl - Whines if any queries are in the \"waiting\" state.\nThe script that runs on each DB server does \"ps auxww | grep postgres\n| grep -i \"[W]aiting\"\" and exposes that through SNMP using the exec\nfunctionality. Nagios then alerts if queries are being blocked.\n",
"msg_date": "Thu, 29 Sep 2005 13:02:03 -0700",
"msg_from": "Tony Wasson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
},
{
"msg_contents": "Hi,\n\nimpressive\n\nBut you forgot to include those scipts as attachment or they got lost\nsomehow ;-)\n\ncould you post them (again)?\n\nthanx,\nJuraj\n\nAm Donnerstag, den 29.09.2005, 13:02 -0700 schrieb Tony Wasson:\n> On 9/28/05, Matthew Nuzum <[email protected]> wrote:\n> > On 9/28/05, Arnau <[email protected]> wrote:\n> > > Hi all,\n> > >\n> > > I have been \"googling\" a bit searching info about a way to monitor\n> > > postgresql (CPU & Memory, num processes, ... ) and I haven't found\n> > > anything relevant. I'm using munin to monitor others parameters of my\n> > > servers and I'd like to include postgresql or have a similar tool. Any\n> > > of you is using anything like that? all kind of hints are welcome :-)\n> \n> We are also using cricket + nagios.\n> \n> On each DB server: Setup snmpd and use snmpd.conf to set disk quotas\n> and mark processes that need to be running (like\n> postmaster,syslog,sshd)\n> \n> On the monitoring server(s): Use cricket for long term trends &\n> graphs. Use nagios for current status and alerting and some trending.\n> (Nagios has plugins over SNMP for load,cpu,memory,disk and processes)\n> \n> Here's the nagios plugins I have hacked up over the past few months\n> and what they do. I'd imagine some could use better names. I can\n> provide these of package them up if anyone is interested.\n> \n> check_pgconn.pl - Shows percentage of connections available. It uses\n> \"SELECT COUNT(*) FROM pg_stat_activity\" / \"SHOW max_connections\". It\n> can also alert when less than a certain number of connections are\n> available.\n> \n> check_pgqueries.pl - If you have query logging enabled this summarizes\n> the types of queries running (SELECT ,INSERT ,DELETE ,UPDATE ,ALTER\n> ,CREATE ,TRUNCATE, VACUUM, COPY) and warns if any queries have been\n> running longer than 5 minutes (configurable).\n> \n> check_pglocks.pl - Look for locks that block and for baselining lock activity.\n> \n> check_pgtime.pl - Makes sure that postgresql's time is in sync with\n> the monitoring server.\n> \n> check_pgqueries.pl - Whines if any queries are in the \"waiting\" state.\n> The script that runs on each DB server does \"ps auxww | grep postgres\n> | grep -i \"[W]aiting\"\" and exposes that through SNMP using the exec\n> functionality. Nagios then alerts if queries are being blocked.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n",
"msg_date": "Thu, 29 Sep 2005 22:56:42 +0200",
"msg_from": "Juraj Holtak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
},
{
"msg_contents": "Arnau wrote:\n\n> Hi all,\n> \n> I have been \"googling\" a bit searching info about a way to monitor \n> postgresql (CPU & Memory, num processes, ... ) and I haven't found \n> anything relevant. I'm using munin to monitor others parameters of my \n> servers and I'd like to include postgresql or have a similar tool. Any \n> of you is using anything like that? all kind of hints are welcome :-)\n\nProbably, as you said, this is not so much relevant,\nas it is something at *early* stages of usability :-)\nbut have you looked at pgtop?\n\nThe basic requirement is that you enable your postmaster\nstats collector and query command strings.\n\nHere is the first announcement email:\n http://archives.postgresql.org/pgsql-announce/2005-05/msg00000.php\n\nAnd its home on the CPAN:\n http://search.cpan.org/dist/pgtop/pgtop\n\n--\nCosimo\n\n",
"msg_date": "Thu, 29 Sep 2005 23:19:35 +0000",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
},
{
"msg_contents": "On 9/29/05, Juraj Holtak <[email protected]> wrote:\n> Hi,\n>\n> impressive\n>\n> But you forgot to include those scipts as attachment or they got lost\n> somehow ;-)\n>\n> could you post them (again)?\n>\n> thanx,\n> Juraj\n\nAttached you'll find the scripts I use in nagios to monitor\npostgresql. I cleaned these up a bit but they are still rough. Please\nlet me know if you find these useful. I will see about getting these\nare part of the nagios plugins package.\n\nTony Wasson",
"msg_date": "Fri, 30 Sep 2005 13:57:16 -0700",
"msg_from": "Tony Wasson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Postgresql performance"
}
] |
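Editor's note: the checks Tony describes mostly reduce to a few catalog queries. The sketch below is a reconstruction, not the actual plugin code; column names match the 8.0/8.1-era pg_stat_activity, and it assumes stats_command_string is enabled.

```sql
-- Connection headroom, as in check_pgconn.pl: current backends vs. ceiling.
SELECT count(*) AS current_connections FROM pg_stat_activity;
SHOW max_connections;

-- With query-string collection on, long-running statements can be listed
-- directly instead of grepping ps output.
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY query_start;
```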
[
{
"msg_contents": ">From: \"Jeffrey W. Baker\" <[email protected]>\n>Sent: Sep 27, 2005 1:26 PM\n>To: Ron Peacetree <[email protected]>\n>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n>\n>On Tue, 2005-09-27 at 13:15 -0400, Ron Peacetree wrote:\n>\n>>That Btree can be used to generate a physical reordering of the data\n>>in one pass, but that's the weakest use for it. The more powerful\n>>uses involve allowing the Btree to persist and using it for more\n>>efficient re-searches or combining it with other such Btrees (either as\n>>a step in task distribution across multiple CPUs or as a more efficient\n>>way to do things like joins by manipulating these Btrees rather than\n>>the actual records.)\n>\n>Maybe you could describe some concrete use cases. I can see what\n>you are getting at, and I can imagine some advantageous uses, but\n>I'd like to know what you are thinking.\n>\n1= In a 4P box, we split the data in RAM into 4 regions and create\na CPU cache friendly Btree using the method I described for each CPU.\nThe 4 Btrees can be merged in a more time and space efficient manner\nthan the original records to form a Btree that represents the sorted order\nof the entire data set. Any of these Btrees can be allowed to persist to\nlower the cost of doing similar operations in the future (Updating the\nBtrees during inserts and deletes is cheaper than updating the original\ndata files and then redoing the same sort from scratch in the future.)\nBoth the original sort and future such sorts are made more efficient\nthan current methods.\n\n2= We use my method to sort two different tables. We now have these\nvery efficient representations of a specific ordering on these tables. A\njoin operation can now be done using these Btrees rather than the\noriginal data tables that involves less overhead than many current\nmethods.\n\n3= We have multiple such Btrees for the same data set representing\nsorts done using different fields (and therefore different Keys).\nCalculating a sorted order for the data based on a composition of\nthose Keys is now cheaper than doing the sort based on the composite\nKey from scratch. When some of the Btrees exist and some of them\ndo not, there is a tradeoff calculation to be made. Sometimes it will be\ncheaper to do the sort from scratch using the composite Key. \n\n\n>Specifically I'd like to see some cases where this would beat sequential\n>scan. 
I'm thinking that in your example of a terabyte table with a\n>column having only two values, all the queries I can think of would be\n>better served with a sequential scan.\n>\nIn my original example, a sequential scan of the 1TB of 2KB or 4KB\nrecords, => 250M or 500M records of data, being sorted on a binary\nvalue key will take ~1000x more time than reading in the ~1GB Btree\nI described that used a Key+RID (plus node pointers) representation\nof the data.\n \nJust to clarify the point further,\n1TB of 1B records => 2^40 records of at most 256 distinct values.\n1TB of 2B records => 2^39 records of at most 2^16 distinct values.\n1TB of 4B records => 2^38 records of at most 2^32 distinct values.\n1TB of 5B records => 200B records of at most 200B distinct values.\n>From here on, the number of possible distinct values is limited by the\nnumber of records.\n100B records are used in the \"Indy\" version of Jim Gray's sorting \ncontests, so 1TB => 10B records.\n2KB-4KB is the most common record size I've seen in enterprise\nclass DBMS (so I used this value to make my initial example more\nrealistic).\n\nTherefore the vast majority of the time representing a data set by Key\nwill use less space that the original record. Less space used means\nless IO to scan the data set, which means faster scan times.\n\nThis is why index files work in the first place, right?\n\n\n>Perhaps I believe this because you can now buy as much sequential I/O\n>as you want. Random I/O is the only real savings.\n>\n1= No, you can not \"buy as much sequential IO as you want\". Even if\nwith an infinite budget, there are physical and engineering limits. Long\nbefore you reach those limits, you will pay exponentially increasing costs\nfor linearly increasing performance gains. So even if you _can_ buy a\ncertain level of sequential IO, it may not be the most efficient way to\nspend money.\n\n2= Most RW IT professionals have far from an infinite budget. Just traffic\non these lists shows how severe the typical cost constraints usually are.\nOTOH, if you have an inifinite IT budget, care to help a few less fortunate\nthan yourself? After all, a even a large constant substracted from infinity\nis still infinity... ;-)\n\n3= No matter how fast you can do IO, IO remains the most expensive\npart of the performance equation. The fastest and cheapest IO you can\ndo is _no_ IO. As long as we trade cheaper RAM and even cheaoer CPU\noperations for IO correctly, more space efficient data representations will\nalways be a Win because of this.\n",
"msg_date": "Wed, 28 Sep 2005 12:03:34 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Wed, 2005-09-28 at 12:03 -0400, Ron Peacetree wrote:\n> >From: \"Jeffrey W. Baker\" <[email protected]>\n> >Sent: Sep 27, 2005 1:26 PM\n> >To: Ron Peacetree <[email protected]>\n> >Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n> >\n> >On Tue, 2005-09-27 at 13:15 -0400, Ron Peacetree wrote:\n> >\n> >>That Btree can be used to generate a physical reordering of the data\n> >>in one pass, but that's the weakest use for it. The more powerful\n> >>uses involve allowing the Btree to persist and using it for more\n> >>efficient re-searches or combining it with other such Btrees (either as\n> >>a step in task distribution across multiple CPUs or as a more efficient\n> >>way to do things like joins by manipulating these Btrees rather than\n> >>the actual records.)\n> >\n> >Maybe you could describe some concrete use cases. I can see what\n> >you are getting at, and I can imagine some advantageous uses, but\n> >I'd like to know what you are thinking.\n> >\n> >Specifically I'd like to see some cases where this would beat sequential\n> >scan. I'm thinking that in your example of a terabyte table with a\n> >column having only two values, all the queries I can think of would be\n> >better served with a sequential scan.\n> >\n> In my original example, a sequential scan of the 1TB of 2KB or 4KB\n> records, => 250M or 500M records of data, being sorted on a binary\n> value key will take ~1000x more time than reading in the ~1GB Btree\n> I described that used a Key+RID (plus node pointers) representation\n> of the data.\n\nYou are engaging in a length and verbose exercise in mental\nmasturbation, because you have not yet given a concrete example of a\nquery where this stuff would come in handy. A common, general-purpose\ncase would be the best.\n\nWe can all see that the method you describe might be a good way to sort\na very large dataset with some known properties, which would be fine if\nyou are trying to break the terasort benchmark. But that's not what\nwe're doing here. We are designing and operating relational databases.\nSo please explain the application.\n\nYour main example seems to focus on a large table where a key column has\nconstrained values. This case is interesting in proportion to the\nnumber of possible values. If I have billions of rows, each having one\nof only two values, I can think of a trivial and very fast method of\nreturning the table \"sorted\" by that key: make two sequential passes,\nreturning the first value on the first pass and the second value on the\nsecond pass. This will be faster than the method you propose.\n\nI think an important aspect you have failed to address is how much of\nthe heap you must visit after the sort is complete. If you are\nreturning every tuple in the heap then the optimal plan will be very\ndifferent from the case when you needn't. \n\n-jwb\n\nPS: Whatever mailer you use doesn't understand or respect threading nor\nattribution. Out of respect for the list's readers, please try a mailer\nthat supports these 30-year-old fundamentals of electronic mail.\n\n",
"msg_date": "Wed, 28 Sep 2005 21:27:20 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Wed, 2005-09-28 at 12:03 -0400, Ron Peacetree wrote:\n> >From: \"Jeffrey W. Baker\" <[email protected]>\n> >Perhaps I believe this because you can now buy as much sequential I/O\n> >as you want. Random I/O is the only real savings.\n> >\n> 1= No, you can not \"buy as much sequential IO as you want\". Even if\n> with an infinite budget, there are physical and engineering limits. Long\n> before you reach those limits, you will pay exponentially increasing costs\n> for linearly increasing performance gains. So even if you _can_ buy a\n> certain level of sequential IO, it may not be the most efficient way to\n> spend money.\n\nThis is just false. You can buy sequential I/O for linear money up to\nand beyond your platform's main memory bandwidth. Even 1GB/sec will\nseverely tax memory bandwidth of mainstream platforms, and you can\nachieve this rate for a modest cost. \n\nI have one array that can supply this rate and it has only 15 disks. It\nwould fit on my desk. I think your dire talk about the limits of\nscience and engineering may be a tad overblown.\n\n",
"msg_date": "Wed, 28 Sep 2005 21:33:46 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Sequential I/O Cost (was Re: A Better External Sort?)"
},
{
"msg_contents": "Jeff, Ron,\n\nFirst off, Jeff, please take it easy. We're discussing 8.2 features at \nthis point and there's no reason to get stressed out at Ron. You can \nget plenty stressed out when 8.2 is near feature freeze. ;-)\n\n\nRegarding use cases for better sorts:\n\nThe biggest single area where I see PostgreSQL external sort sucking is \n on index creation on large tables. For example, for free version of \nTPCH, it takes only 1.5 hours to load a 60GB Lineitem table on OSDL's \nhardware, but over 3 hours to create each index on that table. This \nmeans that over all our load into TPCH takes 4 times as long to create \nthe indexes as it did to bulk load the data.\n\nAnyone restoring a large database from pg_dump is in the same situation. \n Even worse, if you have to create a new index on a large table on a \nproduction database in use, because the I/O from the index creation \nswamps everything.\n\nFollowing an index creation, we see that 95% of the time required is the \nexternal sort, which averages 2mb/s. This is with seperate drives for \nthe WAL, the pg_tmp, the table and the index. I've confirmed that \nincreasing work_mem beyond a small minimum (around 128mb) had no benefit \non the overall index creation speed.\n\n\n--Josh Berkus\n\n\n",
"msg_date": "Thu, 29 Sep 2005 09:54:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Josh,\n\nOn 9/29/05 9:54 AM, \"Josh Berkus\" <[email protected]> wrote:\n\n> Following an index creation, we see that 95% of the time required is the\n> external sort, which averages 2mb/s. This is with seperate drives for\n> the WAL, the pg_tmp, the table and the index. I've confirmed that\n> increasing work_mem beyond a small minimum (around 128mb) had no benefit\n> on the overall index creation speed.\n\nYuuuup! That about sums it up - regardless of taking 1 or 2 passes through\nthe heap being sorted, 1.5 - 2 MB/s is the wrong number. This is not\nnecessarily an algorithmic problem, but is a optimization problem with\nPostgres that must be fixed before it can be competitive.\n\nWe read/write to/from disk at 240MB/s and so 2 passes would run at a net\nrate of 120MB/s through the sort set if it were that efficient.\n\nAnyone interested in tackling the real performance issue? (flame bait, but\nfor a worthy cause :-)\n\n- Luke\n\n\n",
"msg_date": "Thu, 29 Sep 2005 10:06:52 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 10:06:52AM -0700, Luke Lonergan wrote:\n> Josh,\n> \n> On 9/29/05 9:54 AM, \"Josh Berkus\" <[email protected]> wrote:\n> \n> > Following an index creation, we see that 95% of the time required\n> > is the external sort, which averages 2mb/s. This is with seperate\n> > drives for the WAL, the pg_tmp, the table and the index. I've\n> > confirmed that increasing work_mem beyond a small minimum (around\n> > 128mb) had no benefit on the overall index creation speed.\n> \n> Yuuuup! That about sums it up - regardless of taking 1 or 2 passes\n> through the heap being sorted, 1.5 - 2 MB/s is the wrong number.\n> This is not necessarily an algorithmic problem, but is a\n> optimization problem with Postgres that must be fixed before it can\n> be competitive.\n> \n> We read/write to/from disk at 240MB/s and so 2 passes would run at a\n> net rate of 120MB/s through the sort set if it were that efficient.\n> \n> Anyone interested in tackling the real performance issue? (flame\n> bait, but for a worthy cause :-)\n\nI'm not sure that it's flamebait, but what do I know? Apart from the\nnasty number (1.5-2 MB/s), what other observations do you have to\nhand? Any ideas about what things are not performing here? Parts of\nthe code that could bear extra scrutiny? Ideas on how to fix same in\na cross-platform way?\n\nCheers,\nD\n-- \nDavid Fetter [email protected] http://fetter.org/\nphone: +1 510 893 6100 mobile: +1 415 235 3778\n\nRemember to vote!\n",
"msg_date": "Thu, 29 Sep 2005 10:17:57 -0700",
"msg_from": "David Fetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:\n> Josh,\n> \n> On 9/29/05 9:54 AM, \"Josh Berkus\" <[email protected]> wrote:\n> \n> > Following an index creation, we see that 95% of the time required is the\n> > external sort, which averages 2mb/s. This is with seperate drives for\n> > the WAL, the pg_tmp, the table and the index. I've confirmed that\n> > increasing work_mem beyond a small minimum (around 128mb) had no benefit\n> > on the overall index creation speed.\n> \n> Yuuuup! That about sums it up - regardless of taking 1 or 2 passes through\n> the heap being sorted, 1.5 - 2 MB/s is the wrong number.\n\nYeah this is really bad ... approximately the speed of GNU sort.\n\nJosh, do you happen to know how many passes are needed in the multiphase\nmerge on your 60GB table?\n\nLooking through tuplesort.c, I have a couple of initial ideas. Are we\nallowed to fork here? That would open up the possibility of using the\nCPU and the I/O in parallel. I see that tuplesort.c also suffers from\nthe kind of postgresql-wide disease of calling all the way up and down a\nbig stack of software for each tuple individually. Perhaps it could be\nchanged to work on vectors.\n\nI think the largest speedup will be to dump the multiphase merge and\nmerge all tapes in one pass, no matter how large M. Currently M is\ncapped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over\nthe tape. It could be done in a single pass heap merge with N*log(M)\ncomparisons, and, more importantly, far less input and output.\n\nI would also recommend using an external processes to asynchronously\nfeed the tuples into the heap during the merge.\n\nWhat's the timeframe for 8.2?\n\n-jwb\n\n\n\n",
"msg_date": "Thu, 29 Sep 2005 10:44:23 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Jeff,\n\n> Josh, do you happen to know how many passes are needed in the multiphase\n> merge on your 60GB table?\n\nNo, any idea how to test that?\n\n> I think the largest speedup will be to dump the multiphase merge and\n> merge all tapes in one pass, no matter how large M. Currently M is\n> capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over\n> the tape. It could be done in a single pass heap merge with N*log(M)\n> comparisons, and, more importantly, far less input and output.\n\nYes, but the evidence suggests that we're actually not using the whole 1GB \nof RAM ... maybe using only 32MB of it which would mean over 200 passes \n(I'm not sure of the exact match). Just fixing our algorithm so that it \nused all of the work_mem permitted might improve things tremendously.\n\n> I would also recommend using an external processes to asynchronously\n> feed the tuples into the heap during the merge.\n>\n> What's the timeframe for 8.2?\n\nToo far out to tell yet. Probably 9mo to 1 year, that's been our history.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 29 Sep 2005 11:03:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Thu, 2005-09-29 at 11:03 -0700, Josh Berkus wrote:\n> Jeff,\n> \n> > Josh, do you happen to know how many passes are needed in the multiphase\n> > merge on your 60GB table?\n> \n> No, any idea how to test that?\n\nI would just run it under the profiler and see how many times\nbeginmerge() is called.\n\n-jwb\n\n",
"msg_date": "Thu, 29 Sep 2005 11:15:41 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Jeff,\n\n> I would just run it under the profiler and see how many times\n> beginmerge() is called.\n\nHmm, I'm not seeing it at all in the oprofile results on a 100million-row \nsort.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 29 Sep 2005 11:27:56 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Jeff,\n\nOn 9/29/05 10:44 AM, \"Jeffrey W. Baker\" <[email protected]> wrote:\n\n> On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:\n> Looking through tuplesort.c, I have a couple of initial ideas. Are we\n> allowed to fork here? That would open up the possibility of using the\n> CPU and the I/O in parallel. I see that tuplesort.c also suffers from\n> the kind of postgresql-wide disease of calling all the way up and down a\n> big stack of software for each tuple individually. Perhaps it could be\n> changed to work on vectors.\n\nYes!\n \n> I think the largest speedup will be to dump the multiphase merge and\n> merge all tapes in one pass, no matter how large M. Currently M is\n> capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over\n> the tape. It could be done in a single pass heap merge with N*log(M)\n> comparisons, and, more importantly, far less input and output.\n\nYes again, see above.\n \n> I would also recommend using an external processes to asynchronously\n> feed the tuples into the heap during the merge.\n\nSimon Riggs is working this idea a bit - it's slightly less interesting to\nus because we already have a multiprocessing executor. Our problem is that\n4 x slow is still far too slow.\n\n> What's the timeframe for 8.2?\n\nLet's test it out in Bizgres!\n \n- Luke\n\n\n",
"msg_date": "Thu, 29 Sep 2005 21:22:27 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On 9/28/05, Ron Peacetree <[email protected]> wrote:\n> 2= We use my method to sort two different tables. We now have these\n> very efficient representations of a specific ordering on these tables. A\n> join operation can now be done using these Btrees rather than the\n> original data tables that involves less overhead than many current\n> methods.\n\nIf we want to make joins very fast we should implement them using RD\ntrees. For the example cases where a join against a very large table\nwill produce a much smaller output, a RD tree will provide pretty much\nthe optimal behavior at a very low memory cost.\n\nOn the subject of high speed tree code for in-core applications, you\nshould check out http://judy.sourceforge.net/ . The performance\n(insert, remove, lookup, AND storage) is really quite impressive.\nProducing cache friendly code is harder than one might expect, and it\nappears the judy library has already done a lot of the hard work. \nThough it is *L*GPLed, so perhaps that might scare some here away from\nit. :) and good luck directly doing joins with a LC-TRIE. ;)\n",
"msg_date": "Fri, 30 Sep 2005 22:07:16 -0400",
"msg_from": "Gregory Maxwell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "\"Jeffrey W. Baker\" <[email protected]> writes:\n> I think the largest speedup will be to dump the multiphase merge and\n> merge all tapes in one pass, no matter how large M. Currently M is\n> capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over\n> the tape. It could be done in a single pass heap merge with N*log(M)\n> comparisons, and, more importantly, far less input and output.\n\nI had more or less despaired of this thread yielding any usable ideas\n:-( but I think you have one here. The reason the current code uses a\nsix-way merge is that Knuth's figure 70 (p. 273 of volume 3 first\nedition) shows that there's not much incremental gain from using more\ntapes ... if you are in the regime where number of runs is much greater\nthan number of tape drives. But if you can stay in the regime where\nonly one merge pass is needed, that is obviously a win.\n\nI don't believe we can simply legislate that there be only one merge\npass. That would mean that, if we end up with N runs after the initial\nrun-forming phase, we need to fit N tuples in memory --- no matter how\nlarge N is, or how small work_mem is. But it seems like a good idea to\ntry to use an N-way merge where N is as large as work_mem will allow.\nWe'd not have to decide on the value of N until after we've completed\nthe run-forming phase, at which time we've already seen every tuple\nonce, and so we can compute a safe value for N as work_mem divided by\nlargest_tuple_size. (Tape I/O buffers would have to be counted too\nof course.)\n\nIt's been a good while since I looked at the sort code, and so I don't\nrecall if there are any fundamental reasons for having a compile-time-\nconstant value of the merge order rather than choosing it at runtime.\nMy guess is that any inefficiencies added by making it variable would\nbe well repaid by the potential savings in I/O.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 01 Oct 2005 02:01:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort? "
},
{
"msg_contents": "On Sat, 2005-10-01 at 02:01 -0400, Tom Lane wrote:\n> \"Jeffrey W. Baker\" <[email protected]> writes:\n> > I think the largest speedup will be to dump the multiphase merge and\n> > merge all tapes in one pass, no matter how large M. Currently M is\n> > capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over\n> > the tape. It could be done in a single pass heap merge with N*log(M)\n> > comparisons, and, more importantly, far less input and output.\n> \n> I had more or less despaired of this thread yielding any usable ideas\n> :-( but I think you have one here. The reason the current code uses a\n> six-way merge is that Knuth's figure 70 (p. 273 of volume 3 first\n> edition) shows that there's not much incremental gain from using more\n> tapes ... if you are in the regime where number of runs is much greater\n> than number of tape drives. But if you can stay in the regime where\n> only one merge pass is needed, that is obviously a win.\n> \n> I don't believe we can simply legislate that there be only one merge\n> pass. That would mean that, if we end up with N runs after the initial\n> run-forming phase, we need to fit N tuples in memory --- no matter how\n> large N is, or how small work_mem is. But it seems like a good idea to\n> try to use an N-way merge where N is as large as work_mem will allow.\n> We'd not have to decide on the value of N until after we've completed\n> the run-forming phase, at which time we've already seen every tuple\n> once, and so we can compute a safe value for N as work_mem divided by\n> largest_tuple_size. (Tape I/O buffers would have to be counted too\n> of course.)\n> \n> It's been a good while since I looked at the sort code, and so I don't\n> recall if there are any fundamental reasons for having a compile-time-\n> constant value of the merge order rather than choosing it at runtime.\n> My guess is that any inefficiencies added by making it variable would\n> be well repaid by the potential savings in I/O.\n\nWell, perhaps Knuth is not untouchable!\n\nSo we merge R runs with N variable rather than N=6.\n\nPick N so that N >= 6 and N <= R, with N limited by memory, sufficient\nto allow long sequential reads from the temp file.\n\nLooking at the code, in selectnewtape() we decide on the connection\nbetween run number and tape number. This gets executed during the\nwriting of initial runs, which was OK when the run->tape mapping was\nknown ahead of time because of fixed N.\n\nTo do this it sounds like we'd be better to write each run out to its\nown personal runtape, taking the assumption that N is very large. Then\nwhen all runs are built, re-assign the run numbers to tapes for the\nmerge. That is likely to be a trivial mapping unless N isn't large\nenough to fit in memory. That idea should be easily possible because the\ntape numbers were just abstract anyway.\n\nRight now, I can't see any inefficiencies from doing this. It uses\nmemory better and Knuth shows that using more tapes is better anyhow.\nKeeping track of more tapes isn't too bad, even for hundreds or even\nthousands of runs/tapes.\n\nTom, its your idea, so you have first dibs. I'm happy to code this up if\nyou choose not to, once I've done my other immediate chores.\n\nThat just leaves these issues for a later time:\n- CPU and I/O interleaving\n- CPU cost of abstract data type comparison operator invocation\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Sat, 01 Oct 2005 10:29:52 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> The biggest single area where I see PostgreSQL external sort sucking is \n> on index creation on large tables. For example, for free version of \n> TPCH, it takes only 1.5 hours to load a 60GB Lineitem table on OSDL's \n> hardware, but over 3 hours to create each index on that table. This \n> means that over all our load into TPCH takes 4 times as long to create \n> the indexes as it did to bulk load the data.\n> ...\n> Following an index creation, we see that 95% of the time required is the \n> external sort, which averages 2mb/s. This is with seperate drives for \n> the WAL, the pg_tmp, the table and the index. I've confirmed that \n> increasing work_mem beyond a small minimum (around 128mb) had no benefit \n> on the overall index creation speed.\n\nThese numbers don't seem to add up. You have not provided any details\nabout the index key datatypes or sizes, but I'll take a guess that the\nraw data for each index is somewhere around 10GB. The theory says that\nthe runs created during the first pass should on average be about twice\nwork_mem, so at 128mb work_mem there should be around 40 runs to be\nmerged, which would take probably three passes with six-way merging.\nRaising work_mem to a gig should result in about five runs, needing only\none pass, which is really going to be as good as it gets. If you could\nnot see any difference then I see little hope for the idea that reducing\nthe number of merge passes will help.\n\nUmm ... you were raising maintenance_work_mem, I trust, not work_mem?\n\nWe really need to get some hard data about what's going on here. The\nsort code doesn't report any internal statistics at the moment, but it\nwould not be hard to whack together a patch that reports useful info\nin the form of NOTICE messages or some such.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 01 Oct 2005 11:44:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort? "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> \"Jeffrey W. Baker\" <[email protected]> writes:\n> > I think the largest speedup will be to dump the multiphase merge and\n> > merge all tapes in one pass, no matter how large M. Currently M is\n> > capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over\n> > the tape. It could be done in a single pass heap merge with N*log(M)\n> > comparisons, and, more importantly, far less input and output.\n> \n> I had more or less despaired of this thread yielding any usable ideas\n> :-( but I think you have one here. The reason the current code uses a\n> six-way merge is that Knuth's figure 70 (p. 273 of volume 3 first\n> edition) shows that there's not much incremental gain from using more\n> tapes ... if you are in the regime where number of runs is much greater\n> than number of tape drives. But if you can stay in the regime where\n> only one merge pass is needed, that is obviously a win.\n\nIs that still true when the multiple tapes are being multiplexed onto a single\nactual file on disk?\n\nThat brings up one of my pet features though. The ability to declare multiple\ntemporary areas on different spindles and then have them be used on a rotating\nbasis. So a sort could store each tape on a separate spindle and merge them\ntogether at full sequential i/o speed.\n\nThis would make the tradeoff between multiway merges and many passes even\nharder to find though. The broader the multiway merges the more sort areas\nwould be used which would increase the likelihood of another sort using the\nsame sort area and hurting i/o performance.\n\n-- \ngreg\n\n",
"msg_date": "01 Oct 2005 12:17:17 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Tom,\n\n> Raising work_mem to a gig should result in about five runs, needing only\n> one pass, which is really going to be as good as it gets. If you could\n> not see any difference then I see little hope for the idea that reducing\n> the number of merge passes will help.\n\nRight. It *should have*, but didn't seem to. Example of a simple sort \ntest of 100 million random-number records\n\n1M 3294 seconds\n 16M 1107 seconds\n 256M 1209 seconds\n 512M 1174 seconds\n 512M with 'not null' for column that is indexed 1168 seconds\n\n> Umm ... you were raising maintenance_work_mem, I trust, not work_mem?\n\nYes.\n\n>\n> We really need to get some hard data about what's going on here. The\n> sort code doesn't report any internal statistics at the moment, but it\n> would not be hard to whack together a patch that reports useful info\n> in the form of NOTICE messages or some such.\n\nYeah, I'll do this as soon as the patch is finished. Always useful to \ngear up the old TPC-H.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Oct 2005 13:40:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Hi everybody!\n\nWhich wal sync method is the fastest under linux 2.6.x?\nI'm using RAID-10 (JFS filesystem), 2xXEON, 4 Gb RAM.\n\nI've tried to switch to open_sync which seems to work \nfaster than default fdatasync, but is it crash-safe?\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Mon, 27 Feb 2006 15:08:19 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "wal sync method"
},
{
"msg_contents": "Hi Evgeny\n\n Im also testing what fsync method to use and using this program\n(http://archives.postgresql.org/pgsql-performance/2003-12/msg00191.php)\n a bit modified and i get this results:\n\n write 0.000036\n write & fsync 0.006796\n write & fdatasync 0.001001\n write (O_FSYNC) 0.005761\n write (O_DSYNC) 0.005894\n\n So fdatasync faster for me? \n\n\n> Hi everybody!\n> \n> Which wal sync method is the fastest under linux 2.6.x?\n> I'm using RAID-10 (JFS filesystem), 2xXEON, 4 Gb RAM.\n> \n> I've tried to switch to open_sync which seems to work \n> faster than default fdatasync, but is it crash-safe?\n\n\n\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax: 94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de\nsoluciones de protección contra virus e intrusos, presenta su nueva\nfamilia de soluciones. Todos los usuarios de ordenadores, desde las\nredes más grandes a los domésticos, disponen ahora de nuevos productos\ncon excelentes tecnologías de seguridad. Más información en:\nhttp://www.pandasoftware.es/productos\n\n \n\n¡Protéjase ahora contra virus e intrusos! Pruebe gratis nuestros\nproductos en http://www.pandasoftware.es/descargas/\n\n \n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\n\n\n Hi Evgeny\n\n Im also testing what fsync method to use and using this program (http://archives.postgresql.org/pgsql-performance/2003-12/msg00191.php)\n a bit modified and i get this results:\n\n write 0.000036\n write & fsync 0.006796\n write & fdatasync 0.001001\n write (O_FSYNC) 0.005761\n write (O_DSYNC) 0.005894\n\n So fdatasync faster for me? \n\n\nHi everybody!\n\nWhich wal sync method is the fastest under linux 2.6.x?\nI'm using RAID-10 (JFS filesystem), 2xXEON, 4 Gb RAM.\n\nI've tried to switch to open_sync which seems to work \nfaster than default fdatasync, but is it crash-safe?\n\n\n\n\n\n\nJavier Somoza\nOficina de Dirección Estratégica\nmailto:[email protected]\n\nPanda Software\nBuenos Aires, 12\n48001 BILBAO - ESPAÑA\nTeléfono: 902 24 365 4\nFax: 94 424 46 97\nhttp://www.pandasoftware.es\nPanda Software, una de las principales compañías desarrolladoras de soluciones de protección contra virus e intrusos, presenta su nueva familia de soluciones. Todos los usuarios de ordenadores, desde las redes más grandes a los domésticos, disponen ahora de nuevos productos con excelentes tecnologías de seguridad. Más información en: http://www.pandasoftware.es/productos\n\n\n\n¡Protéjase ahora contra virus e intrusos! Pruebe gratis nuestros productos en http://www.pandasoftware.es/descargas/",
"msg_date": "Mon, 27 Feb 2006 13:29:19 +0100",
"msg_from": "Javier Somoza <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal sync method"
},
{
"msg_contents": "\nUse whichever sync method is fastest for you. They are all reliable,\nexcept turning fsync off.\n\n---------------------------------------------------------------------------\n\nJavier Somoza wrote:\n> \n> \n> \n> Hi Evgeny\n> \n> Im also testing what fsync method to use and using this program\n> (http://archives.postgresql.org/pgsql-performance/2003-12/msg00191.php)\n> a bit modified and i get this results:\n> \n> write 0.000036\n> write & fsync 0.006796\n> write & fdatasync 0.001001\n> write (O_FSYNC) 0.005761\n> write (O_DSYNC) 0.005894\n> \n> So fdatasync faster for me? \n> \n> \n> > Hi everybody!\n> > \n> > Which wal sync method is the fastest under linux 2.6.x?\n> > I'm using RAID-10 (JFS filesystem), 2xXEON, 4 Gb RAM.\n> > \n> > I've tried to switch to open_sync which seems to work \n> > faster than default fdatasync, but is it crash-safe?\n> \n> \n> \n> \n> Javier Somoza\n> Oficina de Direcci?n Estrat?gica\n> mailto:[email protected]\n> \n> Panda Software\n> Buenos Aires, 12\n> 48001 BILBAO - ESPA?A\n> Tel?fono: 902 24 365 4\n> Fax: 94 424 46 97\n> http://www.pandasoftware.es\n> Panda Software, una de las principales compa??as desarrolladoras de\n> soluciones de protecci?n contra virus e intrusos, presenta su nueva\n> familia de soluciones. Todos los usuarios de ordenadores, desde las\n> redes m?s grandes a los dom?sticos, disponen ahora de nuevos productos\n> con excelentes tecnolog?as de seguridad. M?s informaci?n en:\n> http://www.pandasoftware.es/productos\n> \n> \n> \n> ?Prot?jase ahora contra virus e intrusos! Pruebe gratis nuestros\n> productos en http://www.pandasoftware.es/descargas/\n> \n> \n> \n> \n> \n> \n> \n> \n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n SRA OSS, Inc. http://www.sraoss.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Mon, 27 Feb 2006 20:14:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal sync method"
},
{
"msg_contents": "\n\n\tJust a stupid question about the various fsync settings.\n\tThere is fsync=off, but is there fsync=fflush ?\n\tfflush would mean only an OS crash could cause data loss,\n\tI think.it could be useful for some applications where you need a speed \nboost (like testing database import scripts...) without being as scary as \nfsync=off...\n\n",
"msg_date": "Wed, 01 Mar 2006 00:31:33 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal sync method"
},
{
"msg_contents": "PFC <[email protected]> writes:\n> \tJust a stupid question about the various fsync settings.\n> \tThere is fsync=off, but is there fsync=fflush ?\n> \tfflush would mean only an OS crash could cause data loss,\n> \tI think.it could be useful for some applications where you need a speed \n> boost (like testing database import scripts...) without being as scary as \n> fsync=off...\n\nI think you misunderstand. There aren't any scenarios where a PG crash\n(without hardware/OS crash) risks data, because we always at least\nwrite() data before commit. The only issue is how hard do we try to get\nthe OS+hardware to push that data down to disk.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Feb 2006 18:45:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal sync method "
},
{
"msg_contents": "\n\n\tHm, i seem to have mixed fwrite() (which buffers data in userspace) and \nwrite() (which apparently doesnt !)\n\tSorry !\n\n> PFC <[email protected]> writes:\n>> \tJust a stupid question about the various fsync settings.\n>> \tThere is fsync=off, but is there fsync=fflush ?\n>> \tfflush would mean only an OS crash could cause data loss,\n>> \tI think.it could be useful for some applications where you need a speed\n>> boost (like testing database import scripts...) without being as scary \n>> as\n>> fsync=off...\n>\n> I think you misunderstand. There aren't any scenarios where a PG crash\n> (without hardware/OS crash) risks data, because we always at least\n> write() data before commit. The only issue is how hard do we try to get\n> the OS+hardware to push that data down to disk.\n>\n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2006 12:58:21 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal sync method "
}
] |
[
{
"msg_contents": "Something interesting is going on. I wish I could show you the graphs,\nbut I'm sure this will not be a surprise to the seasoned veterans.\n\nA particular application server I have has been running for over a\nyear now. I've been logging cpu load since mid-april.\n\nIt took 8 months or more to fall from excellent performance to\n\"acceptable.\" Then, over the course of about 5 weeks it fell from\n\"acceptable\" to \"so-so.\" Then, in the last four weeks it's gone from\n\"so-so\" to alarming.\n\nI've been working on this performance drop since Friday but it wasn't\nuntil I replied to Arnau's post earlier today that I remembered I'd\nbeen logging the server load. I grabbed the data and charted it in\nExcel and to my surprise, the graph of the server's load average looks\nkind of like the graph of y=x^2.\n\nI've got to make a recomendation for a solution to the PHB and my\nanalysis is showing that as the dataset becomes larger, the amount of\ntime the disk spends seeking is increasing. This causes processes to\ntake longer to finish, which causes more processes to pile up, which\ncuases processes to take longer to finish, which causes more processes\nto pile up etc. It is this growing dataset that seems to be the source\nof the sharp decrease in performance.\n\nI knew this day would come, but I'm actually quite surprised that when\nit came, there was little time between the warning and the grande\nfinale. I guess this message is being sent to the list to serve as a\nwarning to other data warehouse admins that when you reach your\ncapacity, the downward spiral happens rather quickly.\n\nCrud... Outlook just froze while composing the PHB memo. I've been\nworking on that for an hour. What a bad day.\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Wed, 28 Sep 2005 15:02:51 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logarithmic change (decrease) in performance"
}
] |
[
{
"msg_contents": ">From: Matthew Nuzum <[email protected]>\n>Sent: Sep 28, 2005 4:02 PM\n>Subject: [PERFORM] Logarithmic change (decrease) in performance\n>\nSmall nit-pick: A \"logarithmic decrease\" in performance would be\na relatively good thing, being better than either a linear or\nexponential decrease in performance. What you are describing is\nthe worst kind: an _exponential_ decrease in performance.\n\n>Something interesting is going on. I wish I could show you the graphs,\n>but I'm sure this will not be a surprise to the seasoned veterans.\n>\n>A particular application server I have has been running for over a\n>year now. I've been logging cpu load since mid-april.\n>\n>It took 8 months or more to fall from excellent performance to\n>\"acceptable.\" Then, over the course of about 5 weeks it fell from\n>\"acceptable\" to \"so-so.\" Then, in the last four weeks it's gone from\n>\"so-so\" to alarming.\n>\n>I've been working on this performance drop since Friday but it wasn't\n>until I replied to Arnau's post earlier today that I remembered I'd\n>been logging the server load. I grabbed the data and charted it in\n>Excel and to my surprise, the graph of the server's load average looks\n>kind of like the graph of y=x^2.\n>\n>I've got to make a recomendation for a solution to the PHB and my\n>analysis is showing that as the dataset becomes larger, the amount of\n>time the disk spends seeking is increasing. This causes processes to\n>take longer to finish, which causes more processes to pile up, which\n>causes processes to take longer to finish, which causes more processes\n>to pile up etc. It is this growing dataset that seems to be the source\n>of the sharp decrease in performance.\n>\n>I knew this day would come, but I'm actually quite surprised that when\n>it came, there was little time between the warning and the grande\n>finale. I guess this message is being sent to the list to serve as a\n>warning to other data warehouse admins that when you reach your\n>capacity, the downward spiral happens rather quickly.\n>\nYep, definitely been where you are. Bottom line: you have to reduce\nthe sequential seeking behavior of the system to within an acceptable\nwindow and then keep it there.\n\n1= keep more of the data set in RAM\n2= increase the size of your HD IO buffers\n3= make your RAID sets wider (more parallel vs sequential IO)\n4= reduce the atomic latency of your RAID sets\n(time for Fibre Channel 15Krpm HD's vs 7.2Krpm SATA ones?)\n5= make sure your data is as unfragmented as possible\n6= change you DB schema to minimize the problem\na= overall good schema design\nb= partitioning the data so that the system only has to manipulate a\nreasonable chunk of it at a time.\n\nIn many cases, there's a number of ways to accomplish the above.\nUnfortunately, most of them require CapEx.\n\nAlso, ITRW world such systems tend to have this as a chronic\nproblem. This is not a \"fix it once and it goes away forever\". This\nis a part of the regular maintenance and upgrade plan(s). \n\nGood Luck,\nRon \n",
"msg_date": "Wed, 28 Sep 2005 18:03:03 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logarithmic change (decrease) in performance"
},
{
"msg_contents": "On Wed, Sep 28, 2005 at 06:03:03PM -0400, Ron Peacetree wrote:\n> 1= keep more of the data set in RAM\n> 2= increase the size of your HD IO buffers\n> 3= make your RAID sets wider (more parallel vs sequential IO)\n> 4= reduce the atomic latency of your RAID sets\n> (time for Fibre Channel 15Krpm HD's vs 7.2Krpm SATA ones?)\n> 5= make sure your data is as unfragmented as possible\n> 6= change you DB schema to minimize the problem\n> a= overall good schema design\n> b= partitioning the data so that the system only has to manipulate a\n> reasonable chunk of it at a time.\n\nNote that 6 can easily swamp the rest of these tweaks. A poor schema\ndesign will absolutely kill any system. Also of great importance is how\nyou're using the database. IE: are you doing any row-by-row operations?\n\n> In many cases, there's a number of ways to accomplish the above.\n> Unfortunately, most of them require CapEx.\n> \n> Also, ITRW world such systems tend to have this as a chronic\n> problem. This is not a \"fix it once and it goes away forever\". This\n> is a part of the regular maintenance and upgrade plan(s). \n\nAnd why DBA's typically make more money that other IT folks. :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 15:19:33 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logarithmic change (decrease) in performance"
}
] |
[
{
"msg_contents": "In the interest of efficiency and \"not reinventing the wheel\", does anyone know\nwhere I can find C or C++ source code for a Btree variant with the following\nproperties:\n\nA= Data elements (RIDs) are only stored in the leaves, Keys (actually\nKeyPrefixes; see \"D\" below) and Node pointers are only stored in the internal\nnodes of the Btree.\n\nB= Element redistribution is done as an alternative to node splitting in overflow\nconditions during Inserts whenever possible.\n\nC= Variable length Keys are supported.\n\nD= Node buffering with a reasonable replacement policy is supported.\n\nE= Since we will know beforehand exactly how many RID's will be stored, we\nwill know apriori how much space will be needed for leaves, and will know the\nworst case for how much space will be required for the Btree internal nodes\nas well. This implies that we may be able to use an array, rather than linked\nlist, implementation of the Btree. Less pointer chasing at the expense of more\nCPU calculations, but that's a trade-off in the correct direction. \n\nSuch source would be a big help in getting a prototype together.\n\nThanks in advance for any pointers or source,\nRon\n",
"msg_date": "Wed, 28 Sep 2005 19:49:59 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "If I've done this correctly, there should not be anywhere near\nthe number of context switches we currently see while sorting.\n\nEach unscheduled context switch represents something unexpected\noccuring or things not being where they are needed when they are\nneeded. Reducing such circumstances to the absolute minimum \nwas one of the design goals.\n\nReducing the total amount of IO to the absolute minimum should\nhelp as well. \n\nRon\n\n\n-----Original Message-----\nFrom: Kevin Grittner <[email protected]>\nSent: Sep 27, 2005 11:21 AM\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nI can't help wondering how a couple thousand context switches per\nsecond would affect the attempt to load disk info into the L1 and\nL2 caches. That's pretty much the low end of what I see when the\nserver is under any significant load.\n\n\n\n",
"msg_date": "Wed, 28 Sep 2005 20:25:31 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "I'm converting a relatively small database (2 MB) from MySQL to PostgreSQL. It \nis used to generate web pages using PHP. Although the actual website runs under \nLinux, the development is done under XP. I've completed most of the data \nconversion and rewrite of the PHP scripts, so now I'm comparing relative \nperformance.\n\nIt appears that PostgreSQL is two to three times slower than MySQL. For \nexample, some pages that have some 30,000 characters (when saved as HTML) take 1 \nto 1 1/2 seconds with MySQL but 3 to 4 seconds with PostgreSQL. I had read that \nthe former was generally faster than the latter, particularly for simple web \napplications but I was hoping that Postgres' performance would not be that \nnoticeably slower.\n\nI'm trying to determine if the difference can be attributed to anything that \nI've done or missed. I've run VACUUM ANALYZE on the two main tables and I'm \nlooking at the results of EXPLAIN on the query that drives the retrieval of \nprobably 80% of the data for the pages in question.\n\nBefore I post the EXPLAIN and the table schema I'd appreciate confirmation that \nthis list is the appropriate forum. I'm a relative newcomer to PostgreSQL (but \nnot to relational databases), so I'm not sure if this belongs in the novice or \ngeneral lists.\n\nJoe\n\n",
"msg_date": "Wed, 28 Sep 2005 22:00:04 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Comparative performance"
},
{
"msg_contents": "On Wed, 28 Sep 2005, Joe wrote:\n\n> I'm converting a relatively small database (2 MB) from MySQL to PostgreSQL. It\n> is used to generate web pages using PHP. Although the actual website runs under\n> Linux, the development is done under XP. I've completed most of the data\n> conversion and rewrite of the PHP scripts, so now I'm comparing relative\n> performance.\n>\n> It appears that PostgreSQL is two to three times slower than MySQL. For\n> example, some pages that have some 30,000 characters (when saved as HTML) take 1\n> to 1 1/2 seconds with MySQL but 3 to 4 seconds with PostgreSQL. I had read that\n> the former was generally faster than the latter, particularly for simple web\n> applications but I was hoping that Postgres' performance would not be that\n> noticeably slower.\n\nAre you comparing PostgreSQL on XP to MySQL on XP or PostgreSQL on Linux\nto MySQL on Linux? Our performance on XP is not great. Also, which version\nof PostgreSQL are you using?\n\n>\n> I'm trying to determine if the difference can be attributed to anything that\n> I've done or missed. I've run VACUUM ANALYZE on the two main tables and I'm\n> looking at the results of EXPLAIN on the query that drives the retrieval of\n> probably 80% of the data for the pages in question.\n\nGood.\n\n>\n> Before I post the EXPLAIN and the table schema I'd appreciate confirmation that\n> this list is the appropriate forum. I'm a relative newcomer to PostgreSQL (but\n> not to relational databases), so I'm not sure if this belongs in the novice or\n> general lists.\n\nYou can post the results of EXPLAIN ANALYZE here. Please including schema\ndefinitions and the query string(s) themselves.\n\nThanks,\n\nGavin\n",
"msg_date": "Thu, 29 Sep 2005 13:46:54 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "On Wed, 28 Sep 2005, Joe wrote:\n\n> Before I post the EXPLAIN and the table schema I'd appreciate\n> confirmation that this list is the appropriate forum. \n\nIt is and and useful things to show are\n\n * the slow query\n * EXPLAIN ANALYZE of the query\n * the output of \\d for each table involved in the query\n * the output of SHOW ALL;\n * The amount of memory the machine have\n\nThe settings that are the most important to tune in postgresql.conf for\nperformance is in my opinion; shared_buffers, effective_cache_size and\n(to a lesser extent) work_mem.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Thu, 29 Sep 2005 07:38:38 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "\n> It appears that PostgreSQL is two to three times slower than MySQL. For \n> example, some pages that have some 30,000 characters (when saved as \n> HTML) take 1 to 1 1/2 seconds with MySQL but 3 to 4 seconds with \n> PostgreSQL. I had read that the former was generally faster than the \n> latter, particularly for simple web applications but I was hoping that \n> Postgres' performance would not be that noticeably slower.\n\n\tFrom my experience, the postgres libraries in PHP are a piece of crap, \nand add a lot of overhead even from small queries.\n\tFor instance, a simple query like \"SELECT * FROM table WHERE \nprimary_key_id=1234\" can take the following time, on my laptop, with data \nin the filesystem cache of course :\n\nEXPLAIN ANALYZE\t<0.1 ms\npython + psycopg 2\t0.1 ms (damn fast)\nphp + mysql\t\t0.3 ms\nphp + postgres\t1-2 ms (damn slow)\n\n\tSo, if your pages are designed in The PHP Way (ie. a large number of \nsmall queries), I might suggest using a language with a decent postgres \ninterface (python, among others), or rewriting your bunches of small \nqueries as Stored Procedures or Joins, which will provide large speedups. \nDoing >50 queries on a page is always a bad idea, but it's tolerable in \nphp-mysql, not in php-postgres.\n\n\tIf it's only one large query, there is a problem, as postgres is usually \na lot smarter about query optimization.\n\n\tIf you use the usual mysql techniques (like, storing a page counter in a \nrow in a table, or storing sessions in a table) beware, these are no-nos \nfor postgres, these things should NOT be done with a database anyway, try \nmemcached for instance.\n",
"msg_date": "Thu, 29 Sep 2005 12:44:55 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "PFC wrote:\n> From my experience, the postgres libraries in PHP are a piece of \n> crap, and add a lot of overhead even from small queries.\n> For instance, a simple query like \"SELECT * FROM table WHERE \n> primary_key_id=1234\" can take the following time, on my laptop, with \n> data in the filesystem cache of course :\n> \n> EXPLAIN ANALYZE <0.1 ms\n> python + psycopg 2 0.1 ms (damn fast)\n> php + mysql 0.3 ms\n> php + postgres 1-2 ms (damn slow)\n\nAs a Trac user I was considering moving to Python, so it's good to know that, \nbut the rewrite is a longer term project.\n\n> So, if your pages are designed in The PHP Way (ie. a large number \n> of small queries), I might suggest using a language with a decent \n> postgres interface (python, among others), or rewriting your bunches of \n> small queries as Stored Procedures or Joins, which will provide large \n> speedups. Doing >50 queries on a page is always a bad idea, but it's \n> tolerable in php-mysql, not in php-postgres.\n\nThe pages do use a number of queries to collect all the data for display but \nnowhere near 50. I'd say it's probably less than a dozen. As an aside, one of \nmy tasks (before the conversion) was to analyze the queries and see where they \ncould be tweaked for performance, but with MySQL that was never a top priority.\n\nThe schema is fairly simple having two main tables: topic and entry (sort of \nlike account and transaction in an accounting scenario). There are two \nadditional tables that perhaps could be merged into the entry table (and that \nwould reduce the number of queries) but I do not want to make major changes to \nthe schema (and the app) for the PostgreSQL conversion.\n\nJoe\n\n",
"msg_date": "Thu, 29 Sep 2005 08:37:07 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "Joe wrote:\n\n> \n> \n> The pages do use a number of queries to collect all the data for display \n> but nowhere near 50. I'd say it's probably less than a dozen. \n> \n> The schema is fairly simple having two main tables: topic and entry \n> (sort of like account and transaction in an accounting scenario). There \n> are two additional tables that perhaps could be merged into the entry \n> table \n\nHm, if you only have 4 tables, why do you need 12 queries?\nTo reduce queries, join them in the query; no need to merge them \nphysically. If you have only two main tables, I'd bet you only need 1-2 \nqueries for the whole page.\n\nRegards,\nAndreas\n",
"msg_date": "Thu, 29 Sep 2005 15:22:03 +0200",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "Andreas Pflug wrote:\n> Hm, if you only have 4 tables, why do you need 12 queries?\n> To reduce queries, join them in the query; no need to merge them \n> physically. If you have only two main tables, I'd bet you only need 1-2 \n> queries for the whole page.\n\nThere are more than four tables and the queries are not functionally \noverlapping. As an example, allow me to refer to the page \nwww.freedomcircle.com/topic.php/Economists.\n\nThe top row of navigation buttons (Life, Liberty, etc.) is created from a query \nof the 'topic' table. It could've been hard-coded as a PHP array, but with less \nflexibility. The alphabetical links are from a SELECT DISTINCT substring from \ntopic. It could've been generated by a PHP for loop (originally implemented \nthat way) but again with less flexibility. The listing of economists is another \nSELECT from topic. The subheadings (Articles, Books) come from a SELECT of an \nentry_type table --which currently has 70 rows-- and is read into a PHP array \nsince we don't know what headings will be used in a given page. The detail of \nthe entries comes from that query that I posted earlier, but there are three \nadditional queries that are used for specialized entry types (relationships \nbetween topics --e.g., Prof. Williams teaches at George Mason, events, and \nmulti-author or multi-subject articles and books). And there's yet another \ntable for the specific book information. Once the data is retrieved it's sorted \ninternally with PHP, at the heading level, before display.\n\nMaybe there is some way to merge all the queries (some already fairly complex) \nthat fetch the data for the entries box but I believe it would be a monstrosity \nwith over 100 lines of SQL.\n\nThanks,\n\nJoe\n\n",
"msg_date": "Thu, 29 Sep 2005 16:39:36 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 04:39:36PM -0400, Joe wrote:\n> Andreas Pflug wrote:\n> >Hm, if you only have 4 tables, why do you need 12 queries?\n> >To reduce queries, join them in the query; no need to merge them \n> >physically. If you have only two main tables, I'd bet you only need 1-2 \n> >queries for the whole page.\n> \n> There are more than four tables and the queries are not functionally \n> overlapping. As an example, allow me to refer to the page \n> www.freedomcircle.com/topic.php/Economists.\n> \n> The top row of navigation buttons (Life, Liberty, etc.) is created from a \n> query of the 'topic' table. It could've been hard-coded as a PHP array, \n> but with less flexibility. The alphabetical links are from a SELECT \n> DISTINCT substring from topic. It could've been generated by a PHP for \n> loop (originally implemented that way) but again with less flexibility. \n> The listing of economists is another SELECT from topic. The subheadings \n> (Articles, Books) come from a SELECT of an entry_type table --which \n> currently has 70 rows-- and is read into a PHP array since we don't know \n> what headings will be used in a given page. The detail of the entries \n\nI suspect this might be something better done in a join.\n\n> comes from that query that I posted earlier, but there are three additional \n> queries that are used for specialized entry types (relationships between \n> topics --e.g., Prof. Williams teaches at George Mason, events, and \n> multi-author or multi-subject articles and books). And there's yet another \n\nLikewise...\n\n> table for the specific book information. Once the data is retrieved it's \n> sorted internally with PHP, at the heading level, before display.\n\nIt's often better to let the database sort and/or aggregate data.\n\n> Maybe there is some way to merge all the queries (some already fairly \n> complex) that fetch the data for the entries box but I believe it would be \n> a monstrosity with over 100 lines of SQL.\n\nAlso, just because no one else has mentioned it, remember that it's very\neasy to get MySQL into a mode where you have no data integrity. If\nthat's the case it's going to be faster than PostgreSQL (though I'm not\nsure how much that affects the performance of SELECTs).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 15:41:22 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "Hi Jim,\n\nJim C. Nasby wrote:\n> Also, just because no one else has mentioned it, remember that it's very\n> easy to get MySQL into a mode where you have no data integrity. If\n> that's the case it's going to be faster than PostgreSQL (though I'm not\n> sure how much that affects the performance of SELECTs).\n\nYes indeed. When I added the REFERENCES to the schema and reran the conversion \nscripts, aside from having to reorder the table creation and loading (they used \nto be in alphabetical order), I also found a few referential integrity errors in \nthe MySQL data.\n\nJoe\n\n",
"msg_date": "Tue, 04 Oct 2005 17:11:19 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 05:11:19PM -0400, Joe wrote:\n> Hi Jim,\n> \n> Jim C. Nasby wrote:\n> >Also, just because no one else has mentioned it, remember that it's very\n> >easy to get MySQL into a mode where you have no data integrity. If\n> >that's the case it's going to be faster than PostgreSQL (though I'm not\n> >sure how much that affects the performance of SELECTs).\n> \n> Yes indeed. When I added the REFERENCES to the schema and reran the \n> conversion scripts, aside from having to reorder the table creation and \n> loading (they used to be in alphabetical order), I also found a few \n> referential integrity errors in the MySQL data.\n\nData integrity != refferential integrity. :) It's very easy to\naccidentally get MyISAM tables in MySQL, which means you are nowhere\nnear ACID which also means you can't get anything close to an apples to\napples comparison to PostgreSQL.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 17:16:40 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "Postgresql uses MVCC to ensure data integrity. Server must choose the right\nversion of tuple, according to transaction ID of statement. Even for a\nselect (ACID features of postgresql, I think C and I apply here), it must\naccomplish some extra work.\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Joe\nEnviado el: martes, 04 de octubre de 2005 18:11\nPara: Jim C. Nasby\nCC: Andreas Pflug; [email protected]\nAsunto: Re: [PERFORM] Comparative performance\n\n\nHi Jim,\n\nJim C. Nasby wrote:\n> Also, just because no one else has mentioned it, remember that it's very\n> easy to get MySQL into a mode where you have no data integrity. If\n> that's the case it's going to be faster than PostgreSQL (though I'm not\n> sure how much that affects the performance of SELECTs).\n\nYes indeed. When I added the REFERENCES to the schema and reran the\nconversion\nscripts, aside from having to reorder the table creation and loading (they\nused\nto be in alphabetical order), I also found a few referential integrity\nerrors in\nthe MySQL data.\n\nJoe\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Tue, 4 Oct 2005 20:19:14 -0300",
"msg_from": "\"Dario\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
}
] |
[
{
"msg_contents": ">From: \"Jeffrey W. Baker\" <[email protected]>\n>Sent: Sep 29, 2005 12:27 AM\n>To: Ron Peacetree <[email protected]>\n>Cc: [email protected], [email protected]\n>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n>\n>You are engaging in a length and verbose exercise in mental\n>masturbation, because you have not yet given a concrete example of a\n>query where this stuff would come in handy. A common, general-purpose\n>case would be the best.\n> \n??? I posted =3= specific classes of common, general-purpose query\noperations where OES and the OES Btrees look like they should be\nsuperior to current methods:\n1= when splitting sorting or other operations across multiple CPUs\n2= when doing joins of different tables by doing the join on these Btrees\nrather than the original tables.\n3= when the opportunity arises to reuse OES Btree results of previous\nsorts for different keys in the same table. Now we can combine the\nexisting Btrees to obtain the new order based on the composite key\nwithout ever manipulating the original, much larger, table. \n\nIn what way are these examples not \"concrete\"?\n\n\n>We can all see that the method you describe might be a good way to sort\n>a very large dataset with some known properties, which would be fine if\n>you are trying to break the terasort benchmark. But that's not what\n>we're doing here. We are designing and operating relational databases.\n>So please explain the application.\n>\nThis is a GENERAL method. It's based on CPU cache efficient Btrees that\nuse variable length prefix keys and RIDs.\nIt assumes NOTHING about the data or the system in order to work.\nI gave some concrete examples for the sake of easing explanation, NOT\nas an indication of assumptions or limitations of the method. I've even\ngone out of my way to prove that no such assumptions or limitations exist.\nWhere in the world are you getting such impressions?\n \n\n>Your main example seems to focus on a large table where a key column has\n>constrained values. This case is interesting in proportion to the\n>number of possible values. If I have billions of rows, each having one\n>of only two values, I can think of a trivial and very fast method of\n>returning the table \"sorted\" by that key: make two sequential passes,\n>returning the first value on the first pass and the second value on the\n>second pass. This will be faster than the method you propose.\n>\n1= No that was not my main example. It was the simplest example used to\nframe the later more complicated examples. Please don't get hung up on it.\n\n2= You are incorrect. Since IO is the most expensive operation we can do,\nany method that makes two passes through the data at top scanning speed\nwill take at least 2x as long as any method that only takes one such pass.\n\n\n>I think an important aspect you have failed to address is how much of\n>the heap you must visit after the sort is complete. If you are\n>returning every tuple in the heap then the optimal plan will be very\n>different from the case when you needn't. \n>\nHmmm. Not sure which \"heap\" you are referring to, but the OES Btree\nindex is provably the lowest (in terms of tree height) and smallest\npossible CPU cache efficient data structure that one can make and still\nhave all of the traditional benefits associated with a Btree representation\nof a data set.\n\nNonetheless, returning a RID, or all RIDs with(out) the same Key, or all\nRIDs (not) within a range of Keys, or simply all RIDs in sorted order is\nefficient. 
Just as should be for a Btree (actually it's a B+ tree variant to\nuse Knuth's nomenclature). I'm sure someone posting from acm.org\nrecognizes how each of these Btree operations maps to various SQL\nfeatures... \n\nI haven't been talking about query plans because they are orthogonal to\nthe issue under discussion. If we use a layered model for PostgreSQL's\narchitecture, this functionality is more primal than that of a query\nplanner. ALL query plans that currently involve sorts will benefit from a\nmore efficient way to do, or avoid, sorts.\n \n\n>PS: Whatever mailer you use doesn't understand or respect threading nor\n>attribution. Out of respect for the list's readers, please try a mailer\n>that supports these 30-year-old fundamentals of electronic mail.\n>\nThat is an issue of infrastructure on the receiving side, not on the sending\n(my) side since even my web mailer seems appropriately RFC conformant.\nEverything seems to be going in the correct places and being properly \norganized on archival.postgres.org ...\n\nRon\n",
"msg_date": "Thu, 29 Sep 2005 02:21:10 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": ">> Your main example seems to focus on a large table where a key \n>> column has\n>> constrained values. This case is interesting in proportion to the\n>> number of possible values. If I have billions of rows, each \n>> having one\n>> of only two values, I can think of a trivial and very fast method of\n>> returning the table \"sorted\" by that key: make two sequential passes,\n>> returning the first value on the first pass and the second value \n>> on the\n>> second pass. This will be faster than the method you propose.\n>>\n>>\n> 1= No that was not my main example. It was the simplest example \n> used to\n> frame the later more complicated examples. Please don't get hung \n> up on it.\n>\n> 2= You are incorrect. Since IO is the most expensive operation we \n> can do,\n> any method that makes two passes through the data at top scanning \n> speed\n> will take at least 2x as long as any method that only takes one \n> such pass.\nYou do not get the point.\nAs the time you get the sorted references to the tuples, you need to \nfetch the tuples themself, check their visbility, etc. and returns \nthem to the client.\n\nSo,\nif there is only 2 values in the column of big table that is larger \nthan available RAM,\ntwo seq scans of the table without any sorting\nis the fastest solution.\n\nCordialement,\nJean-G�rard Pailloncy\n\n",
"msg_date": "Thu, 29 Sep 2005 13:11:45 +0200",
"msg_from": "Pailloncy Jean-Gerard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
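A minimal SQL rendering of the two-pass idea above, against an invented table big_table(flag, payload): each statement is a single sequential scan with a filter, and together they hand the rows back grouped by the two-valued column with no sort step at all.

SELECT payload FROM big_table WHERE flag = 0;  -- first pass
SELECT payload FROM big_table WHERE flag = 1;  -- second pass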
{
"msg_contents": "\n\tJust to add a little anarchy in your nice debate...\n\n\tWho really needs all the results of a sort on your terabyte table ?\n\n\tI guess not many people do a SELECT from such a table and want all the \nresults. So, this leaves :\n\t- Really wanting all the results, to fetch using a cursor,\n\t- CLUSTER type things, where you really want everything in order,\n\t- Aggregates (Sort->GroupAggregate), which might really need to sort the \nwhole table.\n\t- Complex queries where the whole dataset needs to be examined, in order \nto return a few values\n\t- Joins (again, the whole table is probably not going to be selected)\n\t- And the ones I forgot.\n\n\tHowever,\n\n\tMost likely you only want to SELECT N rows, in some ordering :\n\t- the first N (ORDER BY x LIMIT N)\n\t- last N (ORDER BY x DESC LIMIT N)\n\t- WHERE x>value ORDER BY x LIMIT N\n\t- WHERE x<value ORDER BY x DESC LIMIT N\n\t- and other variants\n\n\tOr, you are doing a Merge JOIN against some other table ; in that case, \nyes, you might need the whole sorted terabyte table, but most likely there \nare WHERE clauses in the query that restrict the set, and thus, maybe we \ncan get some conditions or limit values on the column to sort.\n\n\tAlso the new, optimized hash join, which is more memory efficient, might \ncover this case.\n\n\tPoint is, sometimes, you only need part of the results of your sort. And \nthe bigger the sort, the most likely it becomes that you only want part of \nthe results. So, while we're in the fun hand-waving, new algorithm trying \nmode, why not consider this right from the start ? (I know I'm totally in \nhand-waving mode right now, so slap me if needed).\n\n\tI'd say your new, fancy sort algorithm needs a few more input values :\n\n\t- Range of values that must appear in the final result of the sort :\n\t\tnone, minimum, maximum, both, or even a set of values from the other \nside of the join, hashed, or sorted.\n\t- LIMIT information (first N, last N, none)\n\t- Enhanced Limit information (first/last N values of the second column to \nsort, for each value of the first column) (the infamous \"top10 by \ncategory\" query)\n\t- etc.\n\n\tWith this, the amount of data that needs to be kept in memory is \ndramatically reduced, from the whole table (even using your compressed \nkeys, that's big) to something more manageable which will be closer to the \nsize of the final result set which will be returned to the client, and \navoid a lot of effort.\n\n\tSo, this would not be useful in all cases, but when it applies, it would \nbe really useful.\n\n\tRegards !\n\n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 29 Sep 2005 18:10:29 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
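A few hedged examples of the "only part of the sort" shapes PFC lists, written against a hypothetical table t(category, x). A bounded keep-the-best-N pass (or an index scan stopped early) can answer these without ordering the whole table:

SELECT * FROM t ORDER BY x LIMIT 10;                   -- first N
SELECT * FROM t ORDER BY x DESC LIMIT 10;              -- last N
SELECT * FROM t WHERE x > 42 ORDER BY x LIMIT 10;      -- N following a boundary value

-- the "top 10 by category" query, via a correlated count
-- (ties can return more than 10 rows for a category):
SELECT a.*
  FROM t a
 WHERE (SELECT count(*) FROM t b
         WHERE b.category = a.category AND b.x > a.x) < 10
 ORDER BY a.category, a.x DESC;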
] |
[
{
"msg_contents": "> > It appears that PostgreSQL is two to three times slower \n> than MySQL. \n> > For example, some pages that have some 30,000 characters \n> (when saved \n> > as HTML) take 1 to 1 1/2 seconds with MySQL but 3 to 4 seconds with \n> > PostgreSQL. I had read that the former was generally \n> faster than the \n> > latter, particularly for simple web applications but I was \n> hoping that \n> > Postgres' performance would not be that noticeably slower.\n> \n> Are you comparing PostgreSQL on XP to MySQL on XP or \n> PostgreSQL on Linux to MySQL on Linux? Our performance on XP \n> is not great. Also, which version of PostgreSQL are you using?\n\nThat actually depends a lot on *how* you use it. I've seen pg-on-windows\ndeployments that come within a few percent of the linux performance.\nI've also seen those that are absolutely horrible compared.\n\nOne sure way to kill the performance is to do a lot of small\nconnections. Using persistent connection is even more important on\nWindows than it is on Unix. It could easily explain a difference like\nthis.\n\n//Magnus\n",
"msg_date": "Thu, 29 Sep 2005 08:29:09 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "Magnus Hagander wrote:\n> That actually depends a lot on *how* you use it. I've seen pg-on-windows\n> deployments that come within a few percent of the linux performance.\n> I've also seen those that are absolutely horrible compared.\n> \n> One sure way to kill the performance is to do a lot of small\n> connections. Using persistent connection is even more important on\n> Windows than it is on Unix. It could easily explain a difference like\n> this.\n\nI just tried using pg_pconnect() and I didn't notice any significant \nimprovement. What bothers me most is that with Postgres I tend to see jerky \nbehavior on almost every page: the upper 1/2 or 2/3 of the page is displayed \nfirst and you can see a blank bottom (or you can see a half-filled completion \nbar). With MySQL each page is generally displayed in one swoop.\n\nJoe\n\n",
"msg_date": "Thu, 29 Sep 2005 08:16:11 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 08:16:11AM -0400, Joe wrote:\n> I just tried using pg_pconnect() and I didn't notice any significant \n> improvement.\n\nPHP persistent connections are not really persistent -- or so I've been told.\n\nAnyhow, what was discussed here was pg_query, not pg_connect. You really want\nto reduce the number of pg_query() calls in any case; you haven't told us how\nmany there are yet, but it sounds like there are a lot of them.\n\n> What bothers me most is that with Postgres I tend to see jerky behavior on\n> almost every page: the upper 1/2 or 2/3 of the page is displayed first and\n> you can see a blank bottom (or you can see a half-filled completion bar).\n> With MySQL each page is generally displayed in one swoop.\n\nThis might just be your TCP/IP stack finding out that the rest of the page\nisn't likely to come anytime soon, and start sending it out... or something\nelse. I wouldn't put too much weight on it, it's likely to go away as soon as\nyou fix the rest of the problem.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 29 Sep 2005 14:30:30 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "On Thu, 29 Sep 2005, Joe wrote:\n\n> Magnus Hagander wrote:\n> > That actually depends a lot on *how* you use it. I've seen pg-on-windows\n> > deployments that come within a few percent of the linux performance.\n> > I've also seen those that are absolutely horrible compared.\n> >\n> > One sure way to kill the performance is to do a lot of small\n> > connections. Using persistent connection is even more important on\n> > Windows than it is on Unix. It could easily explain a difference like\n> > this.\n>\n> I just tried using pg_pconnect() and I didn't notice any significant\n> improvement. What bothers me most is that with Postgres I tend to see jerky\n> behavior on almost every page: the upper 1/2 or 2/3 of the page is displayed\n> first and you can see a blank bottom (or you can see a half-filled completion\n> bar). With MySQL each page is generally displayed in one swoop.\n\nPlease post the table definitions, queries and explain analyze results so\nwe can tell you why the performance is poor.\n\nGavin\n",
"msg_date": "Thu, 29 Sep 2005 22:31:06 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "Gavin Sherry wrote:\n> Please post the table definitions, queries and explain analyze results so\n> we can tell you why the performance is poor.\n\nI did try to post that last night but apparently my reply didn't make it to the \nlist. Here it is again:\n\nMatthew Nuzum wrote:\n\n > This is the right list. Post detail and I'm sure you'll get some suggestions.\n\n\nThanks, Matthew (and Chris and Gavin).\n\nThe main table used in the query is defined as follows:\n\nCREATE TABLE entry (\n entry_id serial PRIMARY KEY,\n title VARCHAR(128) NOT NULL,\n subtitle VARCHAR(128),\n subject_type SMALLINT,\n subject_id INTEGER REFERENCES topic,\n actor_type SMALLINT,\n actor_id INTEGER REFERENCES topic,\n actor VARCHAR(64),\n actor_role VARCHAR(64),\n rel_entry_id INTEGER,\n rel_entry VARCHAR(64),\n description VARCHAR(255),\n quote text,\n url VARCHAR(255),\n entry_date CHAR(10),\n created DATE NOT NULL DEFAULT CURRENT_DATE,\n updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP)\nWITHOUT OIDS;\nCREATE INDEX entry_actor_id ON entry (actor_id);\nCREATE INDEX entry_subject_id ON entry (subject_id);\n\nIt has 3422 rows at this time.\n\nThe query for one of the pages is the following:\n\nSELECT entry_id, subject_type AS type, subject_type, subject_id, actor_type, \nactor_id, actor, actor_role, rel_entry_id, rel_entry, title, subtitle, \ndescription, url, quote AS main_quote, NULL AS rel_quote, substring(entry_date \nfrom 8) AS dom, substring(entry_date from 1) AS date_ymd, substring(entry_date \nfrom 1 for 7) AS date_ym, substring(entry_date from 1 for 4) AS date_y, created, \nupdated FROM entry WHERE subject_id = 1079\nUNION SELECT entry_id, actor_type AS type, subject_type, subject_id, actor_type, \nactor_id, actor, actor_role, rel_entry_id, rel_entry, title, subtitle, \ndescription, url, quote AS main_quote, NULL AS rel_quote, substring(entry_date \nfrom 8) AS dom, substring(entry_date from 1) AS date_ymd, substring(entry_date \nfrom 1 for 7) AS date_ym, substring(entry_date from 1 for 4) AS date_y, created, \nupdated FROM entry WHERE actor_id = 1079 ORDER BY type, title, subtitle;\n\nThe output of EXPLAIN ANALYZE is:\n\n Sort (cost=158.98..159.14 rows=62 width=568) (actual time=16.000..16.000 \nrows=59 loops=1)\n Sort Key: \"type\", title, subtitle\n -> Unique (cost=153.57..157.14 rows=62 width=568) (actual \ntime=16.000..16.000 rows=59 loops=1)\n -> Sort (cost=153.57..153.73 rows=62 width=568) (actual \ntime=16.000..16.000 rows=59 loops=1)\n Sort Key: entry_id, \"type\", subject_type, subject_id, \nactor_type, actor_id, actor, actor_role, rel_entry_id, rel_entry, title, \nsubtitle, description, url, main_quote, rel_quote, dom, date_ymd, date_ym, \ndate_y, created, updated\n -> Append (cost=0.00..151.73 rows=62 width=568) (actual \ntime=0.000..16.000 rows=59 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..17.21 rows=4 \nwidth=568) (actual time=0.000..0.000 rows=3 loops=1)\n -> Index Scan using entry_subject_id on entry \n(cost=0.00..17.17 rows=4 width=568) (actual time=0.000..0.000 rows=3 loops=1)\n Index Cond: (subject_id = 1079)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..134.52 rows=58 \nwidth=568) (actual time=0.000..16.000 rows=56 loops=1)\n -> Seq Scan on entry (cost=0.00..133.94 rows=58 \nwidth=568) (actual time=0.000..16.000 rows=56 loops=1)\n Filter: (actor_id = 1079)\n Total runtime: 16.000 ms\n(13 rows)\n\nWhat I don't quite understand is why it's doing a sequential scan on actor_id \ninstead of using the entry_actor_id index. 
Note that actor_id has 928 non-null \nvalues (27%), whereas subject_id has 3089 non-null values (90%).\n\nNote that the entry_date column was originally a MySQL date but it had partial \ndates, i.e., some days and months are set to zero. Eventually I hope to define \na PostgreSQL datatype for it and to simplify the substring retrievals. However, \nI don't think the extra computational time should affect the overall runtime \nsignificantly.\n\nGavin, I'm using PostgreSQL 8.0.3, Apache 1.3.28, PHP 4.3.4, MySQL 4.0.16 and \nI'm comparing both databases on XP (on a Pentium 4, 1.6 GHz, 256 MB RAM).\n\nThanks for any feedback.\n\nJoe\n\n",
"msg_date": "Thu, 29 Sep 2005 08:44:16 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
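One hedged aside on the plan above: the Sort/Unique pair near the top is UNION's duplicate elimination. If a single entry row can never satisfy both branches (subject_id and actor_id both equal to 1079), UNION ALL returns the same rows and leaves only the final ORDER BY sort. A sketch with the column list abbreviated; at 59 rows and 16 ms there is little to gain today, but it matters as the table grows:

SELECT entry_id, subject_type AS type, title, subtitle, created, updated
  FROM entry WHERE subject_id = 1079
UNION ALL
SELECT entry_id, actor_type AS type, title, subtitle, created, updated
  FROM entry WHERE actor_id = 1079
ORDER BY type, title, subtitle;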
{
"msg_contents": "\n\n> I just tried using pg_pconnect() and I didn't notice any significant \n> improvement. What bothers me most is that with Postgres I tend to see \n> jerky behavior on almost every page: the upper 1/2 or 2/3 of the page \n> is displayed first and you can see a blank bottom (or you can see a \n> half-filled completion bar). With MySQL each page is generally \n> displayed in one swoop.\n\n\tPersistent connections are useful when your page is fast and the \nconnection time is an important part of your page time. It is mandatory if \nyou want to serve more than 20-50 hits/s without causing unnecessary load \non the database. This is not your case, which is why you don't notice any \nimprovement...\n",
"msg_date": "Thu, 29 Sep 2005 18:12:52 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "\n> Total runtime: 16.000 ms\n\n\tEven though this query isn't that optimized, it's still only 16 \nmilliseconds.\n\tWhy does it take this long for PHP to get the results ?\n\n\tCan you try pg_query'ing this exact same query, FROM PHP, and timing it \nwith getmicrotime() ?\n\n\tYou can even do an EXPLAIN ANALYZE from pg_query and display the results \nin your webpage, to check how long the query takes on the server.\n\n\tYou can also try it on a Linux box.\n\n\tThis smells like a TCP communication problem.\n",
"msg_date": "Thu, 29 Sep 2005 18:17:05 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "PFC wrote:\n> Even though this query isn't that optimized, it's still only 16 \n> milliseconds.\n> Why does it take this long for PHP to get the results ?\n> \n> Can you try pg_query'ing this exact same query, FROM PHP, and timing \n> it with getmicrotime() ?\n\nThanks, that's what I was looking for. It's microtime(), BTW. It'll take me \nsome time to instrument it, but that way I can pinpoint what is really slow.\n\n> You can even do an EXPLAIN ANALYZE from pg_query and display the \n> results in your webpage, to check how long the query takes on the server.\n> \n> You can also try it on a Linux box.\n\nMy current host only supports MySQL. I contacted hub.org to see if they could \nassist in this transition but I haven't heard back.\n\n> This smells like a TCP communication problem.\n\nI'm puzzled by that remark. How much does TCP get into the picture in a local \nWindows client/server environment?\n\nJoe\n\n",
"msg_date": "Thu, 29 Sep 2005 16:50:54 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "PFC wrote:\n> Even though this query isn't that optimized, it's still only 16 \n> milliseconds.\n> Why does it take this long for PHP to get the results ?\n> \n> Can you try pg_query'ing this exact same query, FROM PHP, and timing \n> it with getmicrotime() ?\n\nThat query took about 27 msec in actual PHP execution time. It turns out the \nreal culprit is the following query, which interestingly enough retrieves zero \nrows in the case of the Economists page that I've been using for testing, yet it \nuses up about 1370 msec in actual runtime:\n\nSELECT topic_id1, topic_id2, topic_name, categ_id, list_name, t.title, url, \npage_type, rel_type, inverse_id, r.description AS rel_descrip, r.created, r.updated\nFROM relationship r, topic t, entry_type e\nWHERE ((topic_id1 = topic_id AND topic_id2 = 1252) OR (topic_id2 = topic_id and \ntopic_id1 = 1252)) AND rel_type = type_id AND e.class_id = 2\nORDER BY rel_type, list_name;\n\nThe EXPLAIN ANALYZE output, after I ran VACUUM ANALYZE on the three tables, is:\n\n Sort (cost=4035.55..4035.56 rows=1 width=131) (actual time=2110.000..2110.000 \nrows=0 loops=1)\n Sort Key: r.rel_type, t.list_name\n -> Nested Loop (cost=36.06..4035.54 rows=1 width=131) (actual \ntime=2110.000..2110.000 rows=0 loops=1)\n Join Filter: (((\"inner\".topic_id1 = \"outer\".topic_id) AND \n(\"inner\".topic_id2 = 1252)) OR ((\"inner\".topic_id2 = \"outer\".topic_id) AND \n(\"inner\".topic_id1 = 1252)))\n -> Seq Scan on topic t (cost=0.00..38.34 rows=1234 width=90) (actual \ntime=0.000..15.000 rows=1234 loops=1)\n -> Materialize (cost=36.06..37.13 rows=107 width=45) (actual \ntime=0.000..0.509 rows=466 loops=1234)\n -> Merge Join (cost=30.31..35.96 rows=107 width=45) (actual \ntime=0.000..0.000 rows=466 loops=1)\n Merge Cond: (\"outer\".type_id = \"inner\".rel_type)\n -> Index Scan using entry_type_pkey on entry_type e (cost\n=0.00..3.94 rows=16 width=4) (actual time=0.000..0.000 rows=15 loops=1)\n Filter: (class_id = 2)\n -> Sort (cost=30.31..31.48 rows=466 width=43) (actual \ntime=0.000..0.000 rows=466 loops=1)\n Sort Key: r.rel_type\n -> Seq Scan on relationship r (cost=0.00..9.66 \nrows=466 width=43) (actual time=0.000..0.000 rows=466 loops=1)\n Total runtime: 2110.000 ms\n(14 rows)\n\nThe tables are as follows:\n\nCREATE TABLE entry_type (\n type_id SMALLINT NOT NULL PRIMARY KEY,\n title VARCHAR(32) NOT NULL,\n rel_title VARCHAR(32),\n class_id SMALLINT NOT NULL DEFAULT 1,\n inverse_id SMALLINT,\n updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP)\nWITHOUT OIDS;\n\nCREATE TABLE topic (\n topic_id serial PRIMARY KEY,\n topic_name VARCHAR(48) NOT NULL UNIQUE,\n categ_id SMALLINT NOT NULL,\n parent_entity INTEGER,\n parent_concept INTEGER,\n crossref_id INTEGER,\n list_name VARCHAR(80) NOT NULL,\n title VARCHAR(80),\n description VARCHAR(255),\n url VARCHAR(64),\n page_type SMALLINT NOT NULL,\n dark_ind BOOLEAN NOT NULL DEFAULT FALSE,\n ad_code INTEGER,\n created DATE NOT NULL DEFAULT CURRENT_DATE,\n updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP)\nWITHOUT OIDS;\n\nCREATE TABLE relationship (\n topic_id1 INTEGER NOT NULL REFERENCES topic,\n topic_id2 INTEGER NOT NULL REFERENCES topic,\n rel_type INTEGER NOT NULL,\n description VARCHAR(255),\n created DATE NOT NULL DEFAULT CURRENT_DATE,\n updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,\n PRIMARY KEY (topic_id1, topic_id2, rel_type))\nWITHOUT OIDS;\n\nI'm thinking that perhaps I need to set up another index with topic_id2 first \nand topic_id1 second. 
In addition, an index on entry_type.class_id may improve \nthings. Another possibility would be to rewrite the query as a UNION.\n\nOf course, this doesn't explain how MySQL manages to execute the query in about \n9 msec. The only minor differences in the schema are: entry_type.title and \nrel_title are char(32) in MySQL, entry_type.class_id is a tinyint, and \ntopic.categ_id, page_type and dark_ind are also tinyints. MySQL also doesn't \nhave the REFERENCES.\n\nA couple of interesting side notes from my testing. First is that pg_connect() \ntook about 39 msec but mysql_connect() took only 4 msec, however, pg_pconnect() \ntook 0.14 msec while mysql_pconnect() took 0.99 msec (all tests were repeated \nfive times and the quoted results are averages). Second, is that PostgreSQL's \nperformance appears to be much more consistent in certain queries. For example, \nthe query that retrieves the list of subtopics (the names and description of \neconomists), took 17 msec in PG, with a low of 15 (three times) and a high of \n21, whereas MySQL took 60 msec on average but had a low of 22 and a high of 102 \nmsec.\n\nJoe\n\n",
"msg_date": "Mon, 03 Oct 2005 22:04:24 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "\nIt's more understandable if the table names are in front of the column \nnames :\n\nSELECT relationship.topic_id1, relationship.topic_id2, topic.topic_name, \ntopic.categ_id, topic.list_name, topic.title,\ntopic.url, topic.page_type, relationship.rel_type, entry_type.inverse_id, \nrelationship.description AS rel_descrip,\nrelationship.created, relationship.updated\n FROM relationship, topic, entry_type\nWHERE\n\t((relationship.topic_id1 = topic.topic_id AND relationship.topic_id2 = \n1252)\n\tOR (relationship.topic_id2 = topic.topic_id and relationship.topic_id1 = \n1252))\n\tAND\trelationship.rel_type = entry_type.type_id\n\tAND\tentry_type.class_id = 2\n\tORDER BY rel_type, list_name;\n\nI see a few problems in your schema.\n- topic_id1 and topic_id2 play the same role, there is no constraint to \ndetermine which is which, hence it is possible to define the same relation \ntwice.\n- as you search on two columns with OR, you need UNION to use indexes.\n- lack of indexes\n- I don't understand why the planner doesn't pick up hash joins...\n- if you use a version before 8, type mismatch will prevent use of the \nindexes.\n\nI'd suggest rewriting the query like this :\nSELECT topic.*, foo.* FROM\ntopic,\n(SELECT topic_id2 as fetch_id, topic_id1, topic_id2, rel_type, description \nas rel_descrip, created, updated\n\tFROM relationship\n\tWHERE\n\t\trel_type IN (SELECT type_id FROM entry_type WHERE class_id = 2)\n\tAND\ttopic_id1 = 1252\nUNION\nSELECT topic_id1 as fetch_id, topic_id1, topic_id2, rel_type, description \nas rel_descrip, created, updated\n\tFROM relationship\n\tWHERE\n\t\trel_type IN (SELECT type_id FROM entry_type WHERE class_id = 2)\n\tAND\ttopic_id2 = 1252)\nAS foo\nWHERE topic.topic_id = foo.fetch_id\n\n\nCREATE INDEX'es ON\nentry_type( class_id )\n\nrelationship( topic_id1, rel_type, topic_id2 )\twhich becomes your new \nPRIMARY KEY\nrelationship( topic_id2, rel_type, topic_id1 )\n\n> Of course, this doesn't explain how MySQL manages to execute the query \n> in about 9 msec. The only minor differences in the schema are: \n> entry_type.title and rel_title are char(32) in MySQL, \n> entry_type.class_id is a tinyint, and topic.categ_id, page_type and \n> dark_ind are also tinyints. MySQL also doesn't have the REFERENCES.\n\nCan you post the result from MySQL EXPLAIN ?\n\nYou might be interested in the following code. 
Just replace mysql_ by pg_, \nit's quite useful.\n\n$global_queries_log = array();\n\nfunction _getmicrotime()\t{\tlist($u,$s) = explode(' ',microtime());\treturn \n$u+$s;\t}\n\n/*\tFormats query, with given arguments, escaping all strings as needed.\n\tdb_quote_query( 'UPDATE junk SET a=%s WHERE b=%s', array( 1,\"po'po\" ) )\n\t=>\tUPDATE junk SET a='1 WHERE b='po\\'po'\n*/\nfunction db_quote_query( $sql, $params=false )\n{\n\t// if no params, send query raw\n\tif( !$params )\n\t\treturn $sql;\n\n\t// quote params\n\tforeach( $params as $key => $val )\n\t{\n\t\tif( is_array( $val ))\n\t\t\t$val = implode( ',', $val );\n\t\t$params[$key] = \"'\".mysql_real_escape_string( $val ).\"'\";\n\t}\n\treturn vsprintf( $sql, $params );\n}\n\n/*\tFormats query, with given arguments, escaping all strings as needed.\n\tRuns query, logging its execution time.\n\tReturns the query, or dies with error.\n*/\nfunction db_query( $sql, $params=false )\n{\n\t// it's already a query\n\tif( is_resource( $sql ))\n\t\treturn $sql;\n\n\t$sql = db_quote_query( $sql, $params );\n\n\t$t = _getmicrotime();\n\t$r = mysql_query( $sql );\n\tif( !$r )\n\t{\n\t\techo \"<div class=bigerror><b>Erreur MySQL \n:</b><br>\".mysql_error().\"<br><br><b>Requte</b> \n:<br>\".$sql.\"<br><br><b>Traceback </b>:<pre>\";\n\t\tforeach( debug_backtrace() as $t ) xdump( $t );\n\t\techo \"</pre></div>\";\n\t\tdie();\n\t}\n\tglobal $global_queries_log;\n\t$global_queries_log[] = array( _getmicrotime()-$t, $sql );\n\treturn $r;\n}\n\nAt the end of your page, display the contents of $global_queries_log.\n\n\n\n",
"msg_date": "Tue, 04 Oct 2005 10:45:06 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
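PFC's index suggestions above, spelled out as DDL. The index and constraint names here are guesses (relationship_pkey is only the default name PostgreSQL would have generated for the existing primary key), so adjust them to the actual schema before running anything:

CREATE INDEX entry_type_class_id ON entry_type (class_id);

-- replace the existing (topic_id1, topic_id2, rel_type) primary key
ALTER TABLE relationship DROP CONSTRAINT relationship_pkey;
ALTER TABLE relationship ADD PRIMARY KEY (topic_id1, rel_type, topic_id2);

CREATE INDEX relationship_topic2_rel_topic1
    ON relationship (topic_id2, rel_type, topic_id1);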
{
"msg_contents": "On Thu, Sep 29, 2005 at 08:44:16AM -0400, Joe wrote:\n> CREATE TABLE entry (\n> entry_id serial PRIMARY KEY,\n> title VARCHAR(128) NOT NULL,\n> subtitle VARCHAR(128),\n> subject_type SMALLINT,\n> subject_id INTEGER REFERENCES topic,\n> actor_type SMALLINT,\n> actor_id INTEGER REFERENCES topic,\n> actor VARCHAR(64),\n> actor_role VARCHAR(64),\n> rel_entry_id INTEGER,\n> rel_entry VARCHAR(64),\n> description VARCHAR(255),\n> quote text,\n> url VARCHAR(255),\n> entry_date CHAR(10),\n> created DATE NOT NULL DEFAULT CURRENT_DATE,\n> updated TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP)\n> WITHOUT OIDS;\n> CREATE INDEX entry_actor_id ON entry (actor_id);\n> CREATE INDEX entry_subject_id ON entry (subject_id);\n\nA few tips...\n\nFields in PostgreSQL have alignment requirements, so the smallints\naren't saving you anything right now. If you put both of them together\nthough, you'll save 4 bytes on most hardware.\n\nYou'll also get some minor gains from putting all the variable-length\nfields at the end, as well as nullable fields. If you search the\narchives for 'field order' you should be able to find some useful info.\n\nMake sure these indexes exist if you'll be updating or inserting into\nentry:\n\nCREATE INDEX topic__subject_id ON topic(subject_id);\nCREATE INDEX topic__actor_id ON topic(actor_id);\n\nAlso, the fact that subject and actor both point to topic along with\nsubject_type and actor_type make me suspect that your design is\nde-normalized. Of course there's no way to know without more info.\n\nFWIW, I usually use timestamptz for both created and updated fields.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 15:31:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
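A sketch of the column reordering Jim describes, applied to the entry table from earlier in the thread: fixed-width columns first, the two SMALLINTs adjacent so they share one 4-byte unit, variable-length and mostly-null columns at the end. Actual savings depend on the data and null patterns, so treat this as an illustration rather than a recommendation:

CREATE TABLE entry (
    entry_id      serial PRIMARY KEY,
    subject_id    INTEGER REFERENCES topic,
    updated       TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
    actor_id      INTEGER REFERENCES topic,
    rel_entry_id  INTEGER,
    created       DATE NOT NULL DEFAULT CURRENT_DATE,
    subject_type  SMALLINT,
    actor_type    SMALLINT,   -- the two smallints now pack together
    title         VARCHAR(128) NOT NULL,
    subtitle      VARCHAR(128),
    actor         VARCHAR(64),
    actor_role    VARCHAR(64),
    rel_entry     VARCHAR(64),
    description   VARCHAR(255),
    url           VARCHAR(255),
    entry_date    CHAR(10),
    quote         text
) WITHOUT OIDS;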
{
"msg_contents": "PFC wrote:\n> - if you use a version before 8, type mismatch will prevent use of the \n> indexes.\n\nI'm using 8.0.3, but the type mismatch between relationship.rel_type and \nentry_type.type_id was unintended. The current databases use SMALLINT for both. \n The PostgreSQL schema was derived from an export script stored in Subversion, \napparently before the column datatypes were changed.\n\n> CREATE INDEX'es ON\n> entry_type( class_id )\n> \n> relationship( topic_id1, rel_type, topic_id2 ) which becomes your \n> new PRIMARY KEY\n> relationship( topic_id2, rel_type, topic_id1 )\n\nCreating the second relationship index was sufficient to modify the query plan \nto cut down runtime to zero:\n\n Sort (cost=75.94..75.95 rows=2 width=381) (actual time=0.000..0.000 rows=0 \nloops=1)\n Sort Key: r.rel_type, t.list_name\n -> Nested Loop (cost=16.00..75.93 rows=2 width=381) (actual \ntime=0.000..0.000 rows=0 loops=1)\n Join Filter: (((\"outer\".topic_id1 = \"inner\".topic_id) AND \n(\"outer\".topic_id2 = 1252)) OR ((\"outer\".topic_id2 = \"inner\".topic_id) AND \n(\"outer\".topic_id1 = 1252)))\n -> Nested Loop (cost=16.00..35.11 rows=1 width=169) (actual \ntime=0.000..0.000 rows=0 loops=1)\n Join Filter: (\"inner\".rel_type = \"outer\".type_id)\n -> Seq Scan on entry_type e (cost=0.00..18.75 rows=4 width=4) \n(actual time=0.000..0.000 rows=15 loops=1)\n Filter: (class_id = 2)\n -> Materialize (cost=16.00..16.04 rows=4 width=167) (actual \ntime=0.000..0.000 rows=0 loops=15)\n -> Seq Scan on relationship r (cost=0.00..16.00 rows=4 \nwidth=167) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((topic_id2 = 1252) OR (topic_id1 = 1252))\n -> Seq Scan on topic t (cost=0.00..30.94 rows=494 width=216) (never \nexecuted)\n Total runtime: 0.000 ms\n(13 rows)\n\nThe overall execution time for the Economists page for PostgreSQL is within 4% \nof the MySQL time, so for the time being I'll leave the query in its current form.\n\nThanks for your help.\n\nJoe\n\n",
"msg_date": "Tue, 04 Oct 2005 16:57:19 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> Make sure these indexes exist if you'll be updating or inserting into\n> entry:\n> \n> CREATE INDEX topic__subject_id ON topic(subject_id);\n> CREATE INDEX topic__actor_id ON topic(actor_id);\n\nActually, topic's primary key is topic_id.\n\n> Also, the fact that subject and actor both point to topic along with\n> subject_type and actor_type make me suspect that your design is\n> de-normalized. Of course there's no way to know without more info.\n\nYes, the design is denormalized. The reason is that a book or article is \nusually by a single author (an \"actor\" topic) and it will be listed under one \nmain topic (a \"subject\" topic). There's a topic_entry table where additional \nactors and subjects can be added.\n\nIt's somewhat ironic because I used to teach and/or preach normalization and the \n\"goodness\" of a 3NF+ design (also about having the database do aggregation and \nsorting as you mentioned in your other email).\n\n> FWIW, I usually use timestamptz for both created and updated fields.\n\nIIRC 'created' ended up as a DATE because MySQL 4 has a restriction about a \nsingle TIMESTAMP column per table taking the default value of current_timestamp.\n\nJoe\n\n",
"msg_date": "Tue, 04 Oct 2005 17:37:30 -0400",
"msg_from": "Joe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparative performance"
}
] |
[
{
"msg_contents": ">From: \"Jeffrey W. Baker\" <[email protected]>\n>Sent: Sep 29, 2005 12:33 AM\n>Subject: Sequential I/O Cost (was Re: [PERFORM] A Better External Sort?)\n>\n>On Wed, 2005-09-28 at 12:03 -0400, Ron Peacetree wrote:\n>>>From: \"Jeffrey W. Baker\" <[email protected]>\n>>>Perhaps I believe this because you can now buy as much sequential I/O\n>>>as you want. Random I/O is the only real savings.\n>>>\n>> 1= No, you can not \"buy as much sequential IO as you want\". Even if\n>> with an infinite budget, there are physical and engineering limits. Long\n>> before you reach those limits, you will pay exponentially increasing costs\n>> for linearly increasing performance gains. So even if you _can_ buy a\n>> certain level of sequential IO, it may not be the most efficient way to\n>> spend money.\n>\n>This is just false. You can buy sequential I/O for linear money up to\n>and beyond your platform's main memory bandwidth. Even 1GB/sec will\n>severely tax memory bandwidth of mainstream platforms, and you can\n>achieve this rate for a modest cost. \n>\nI don't think you can prove this statement.\nA= www.pricewatch.com lists 7200rpm 320GB SATA II HDs for ~$160.\nASTR according to www.storagereview.com is ~50MBps. Average access\ntime is ~12-13ms.\nAbsolute TOTL 15Krpm 147GB U320 or FC HDs cost ~4x as much per GB,\nyet only deliver ~80-90MBps ASTR and average access times of \n~5.5-6.0ms.\nYour statement is clearly false in terms of atomic raw HD performance.\n\nB= low end RAID controllers can be obtained for a few $100's. But even\namongst them, a $600+ card does not perform 3-6x better than a\n$100-$200 card. When the low end HW is not enough, the next step in\nprice is to ~$10K+ (ie Xyratex), and the ones after that are to ~$100K+\n(ie NetApps) and ~$1M+ (ie EMC, IBM, etc). None of these ~10x steps\nin price results in a ~10x increase in performance.\nYour statement is clearly false in terms of HW based RAID performance.\n\nC= A commodity AMD64 mainboard with a dual channel DDR PC3200\nRAM subsystem has 6.4GBps of bandwidth. These are as common\nas weeds and almost as cheap: www.pricewatch.com\nYour statement about commodity systems main memory bandwidth\nbeing \"severely taxed at 1GBps\" is clearly false.\n\nD= Xyratecs makes RAID HW for NetApps and EMC. NONE of their\ncurrent HW can deliver 1GBps. More like 600-700MBps. Engino and\nDot Hill have similar limitations on their current products. No PCI or\nPCI-X based HW could ever do more than ~800-850MBps since\nthat's the RW limit of those busses. Next Gen products are likely to\n2x those limits and cross the 1GBps barrier based on ~90MBps SAS\nor FC HD's and PCI-Ex8 (2GBps max) and PCI-Ex16 (4GBps max).\nNote that not even next gen or 2 gens from now RAID HW will be\nable to match the memory bandwidth of the current commodity\nmemory subsystem mentioned in \"C\" above.\nYour statement that one can achieve a HD IO rate that will tax RAM\nbandwidth at modest cost is clearly false.\n \nQED Your statement is false on all counts and in all respects.\n\n\n>I have one array that can supply this rate and it has only 15 disks. It\n>would fit on my desk. I think your dire talk about the limits of\n>science and engineering may be a tad overblown.\n>\nName it and post its BOM, configuration specs, price and ordering\ninformation. 
Then tell us what it's plugged into and all the same\ndetails on _that_.\n\nIf all 15 HD's are being used for one RAID set, then you can't be\nusing RAID 10, which means any claims re: write performance in\nparticular should be closely examined.\n\nA 15 volume RAID 5 made of the fastest 15Krpm U320 or FC HDs,\neach with ~85.9MBps ASTR, could in theory do ~14*85.9=\n~1.2GBps raw ASTR for at least reads, but no one I know of makes\ncommodity RAID HW that can keep up with this, nor can any one\nPCI-X bus support it even if such commodity RAID HW did exist.\n\nHmmm. SW RAID on at least a PCI-Ex8 bus might be able to do it if\nwe can multiplex enough 4Gbps FC lines (4Gbps= 400MBps => max\nof 4 of the above HDs per line and 4 FC lines) with low enough latency\nand have enough CPU driving it...Won't be easy nor cheap though. \n",
"msg_date": "Thu, 29 Sep 2005 03:42:54 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential I/O Cost (was Re: A Better External Sort?)"
}
] |
[
{
"msg_contents": " \nAm trying to port a mysql statement to postgres.\n\nPlease help me in finding the error in this,\n\n\nCREATE SEQUENCE ai_id;\nCREATE TABLE badusers (\n id int DEFAULT nextval('ai_id') NOT NULL,\n UserName varchar(30),\n Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,\n Reason varchar(200),\n Admin varchar(30) DEFAULT '-',\n PRIMARY KEY (id),\n KEY UserName (UserName),\n KEY Date (Date)\n);\n\n\nAm always getting foll. Errors,\n\nERROR: relation \"ai_id\" already exists\nERROR: syntax error at or near \"(\" at character 240\n\nThanks,\nRajesh R\n",
"msg_date": "Thu, 29 Sep 2005 18:35:14 +0530",
"msg_from": "\"R, Rajesh (STSD)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query in SQL statement"
},
{
"msg_contents": "\n> CREATE SEQUENCE ai_id;\n> CREATE TABLE badusers (\n> id int DEFAULT nextval('ai_id') NOT NULL,\n> UserName varchar(30),\n> Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,\n> Reason varchar(200),\n> Admin varchar(30) DEFAULT '-',\n> PRIMARY KEY (id),\n> KEY UserName (UserName),\n> KEY Date (Date)\n> );\n> \n> \n> Am always getting foll. Errors,\n> \n> ERROR: relation \"ai_id\" already exists\n> ERROR: syntax error at or near \"(\" at character 240\n\nYou have just copied the Mysql code to Postgresql. It will in no way \nwork. Your default for 'Date' is illegal in postgresql and hence it \nmust allow NULLs. There is no such thing as a 'datetime' type. There \nis no such thing as 'Key'. Also your mixed case identifiers won't be \npreserved. You want:\n\nCREATE TABLE badusers (\n id SERIAL PRIMARY KEY,\n UserName varchar(30),\n Date timestamp,\n Reason varchar(200),\n Admin varchar(30) DEFAULT '-'\n);\n\nCREATE INDEX UserName_Idx ON badusers(Username);\nCREATE INDEX Date_Idx ON badusers(Date);\n\n",
"msg_date": "Thu, 29 Sep 2005 21:28:38 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query in SQL statement"
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 09:28:38PM +0800, Christopher Kings-Lynne wrote:\n> \n> >CREATE SEQUENCE ai_id;\n> >CREATE TABLE badusers (\n> > id int DEFAULT nextval('ai_id') NOT NULL,\n> > UserName varchar(30),\n> > Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,\n> > Reason varchar(200),\n> > Admin varchar(30) DEFAULT '-',\n> > PRIMARY KEY (id),\n> > KEY UserName (UserName),\n> > KEY Date (Date)\n> >);\n> >\n> >\n> >Am always getting foll. Errors,\n> >\n> >ERROR: relation \"ai_id\" already exists\n> >ERROR: syntax error at or near \"(\" at character 240\n> \n> You have just copied the Mysql code to Postgresql. It will in no way \n> work. Your default for 'Date' is illegal in postgresql and hence it \n> must allow NULLs. There is no such thing as a 'datetime' type. There \n> is no such thing as 'Key'. Also your mixed case identifiers won't be \n> preserved. You want:\n> \n> CREATE TABLE badusers (\n> id SERIAL PRIMARY KEY,\n> UserName varchar(30),\n> Date timestamp,\n> Reason varchar(200),\n> Admin varchar(30) DEFAULT '-'\n> );\n> \n> CREATE INDEX UserName_Idx ON badusers(Username);\n> CREATE INDEX Date_Idx ON badusers(Date);\n\nActually, to preserve the case you can wrap everything in quotes:\nCREATE ...\n \"UserName\" varchar(30)\n\nOf course that means that now you have to do that in every statement\nthat uses that field, too...\n\nSELECT username FROM badusers\nERROR\n\nSELECT \"UserName\" FROM badusers\nbad user\n\nI suggest ditching the CamelCase and going with underline_seperators.\nI'd also not use the bareword id, instead using bad_user_id. And I'd\nname the table bad_user. But that's just me. :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 30 Sep 2005 18:48:53 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query in SQL statement"
},
{
"msg_contents": "R, Rajesh (STSD) wrote:\n> \n> Am trying to port a mysql statement to postgres.\n> \n> Please help me in finding the error in this,\n\nCan I recommend the reference section of the manuals for this sort of \nthing? There is an excellent section detailing the valid SQL for the \nCREATE TABLE command.\n\nAlso - the pgsql-hackers list is for discussion of database development, \nand the performance list is for performance problems. This would be \nbetter posted on pgsql-general or -sql or -novice.\n\n> CREATE SEQUENCE ai_id;\n\nThis line is causing the first error:\n > ERROR: relation \"ai_id\" already exists\n\nThat's because you've already successfully created the sequence, so it \nalready exists. Either drop it and recreate it, or stop trying to \nrecreate it.\n\n> CREATE TABLE badusers (\n> id int DEFAULT nextval('ai_id') NOT NULL,\n> UserName varchar(30),\n> Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,\n\nWell, \"Date\" is a type-name, \"datetime\" isn't and even if it was \n\"0000-00-00\" isn't a valid date is it?\n\n> Reason varchar(200),\n> Admin varchar(30) DEFAULT '-',\n> PRIMARY KEY (id),\n> KEY UserName (UserName),\n> KEY Date (Date)\n\nThe word \"KEY\" isn't valid here either - are you trying to define an \nindex? If so, see the \"CREATE INDEX\" section of the SQL reference.\n\nhttp://www.postgresql.org/docs/8.0/static/sql-commands.html\n\nIf you reply to this message, please remove the pgsql-hackers CC:\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 05 Oct 2005 09:54:11 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query in SQL statement"
}
] |
[
{
"msg_contents": "\n> In my original example, a sequential scan of the 1TB of 2KB \n> or 4KB records, => 250M or 500M records of data, being sorted \n> on a binary value key will take ~1000x more time than reading \n> in the ~1GB Btree I described that used a Key+RID (plus node \n> pointers) representation of the data.\n\nImho you seem to ignore the final step your algorithm needs of\ncollecting the\ndata rows. After you sorted the keys the collect step will effectively\naccess the \ntuples in random order (given a sufficiently large key range).\n\nThis random access is bad. It effectively allows a competing algorithm\nto read the\nwhole data at least 40 times sequentially, or write the set 20 times\nsequentially. \n(Those are the random/sequential ratios of modern discs)\n\nAndreas\n",
"msg_date": "Thu, 29 Sep 2005 15:28:27 +0200",
"msg_from": "\"Zeugswetter Andreas DAZ SD\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 03:28:27PM +0200, Zeugswetter Andreas DAZ SD wrote:\n> \n> > In my original example, a sequential scan of the 1TB of 2KB \n> > or 4KB records, => 250M or 500M records of data, being sorted \n> > on a binary value key will take ~1000x more time than reading \n> > in the ~1GB Btree I described that used a Key+RID (plus node \n> > pointers) representation of the data.\n> \n> Imho you seem to ignore the final step your algorithm needs of\n> collecting the\n> data rows. After you sorted the keys the collect step will effectively\n> access the \n> tuples in random order (given a sufficiently large key range).\n> \n> This random access is bad. It effectively allows a competing algorithm\n> to read the\n> whole data at least 40 times sequentially, or write the set 20 times\n> sequentially. \n> (Those are the random/sequential ratios of modern discs)\n\nTrue, but there is a compromise... not shuffling full tuples around when\nsorting in memory. Do your sorting with pointers, then write the full\ntuples out to 'tape' if needed.\n\nOf course the other issue here is that as correlation improves it\nbecomes better and better to do full pointer-based sorting.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Sat, 8 Oct 2005 17:51:55 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
[
{
"msg_contents": "I think this question may be more appropriate for\[email protected].\n\nAnyrate for the below. Sounds like you maybe already have a table or\nsequence called ai_id;\n\nTry doing a DROP SEQUENCE ai_id;\n\nFirst\n\nAlso if you plan to use this sequence only for this table it would be better\nto use serial8 which will automatically create the sequence for you. Then\nyou don't even need that first part. Also you should avoid naming fields\nthings like Date which tend to be keywords in many kinds of databases.\n\nTry changing your logic to something like\n\nCREATE TABLE badusers (\n id serial8,\n UserName varchar(30),\n Date timestamp DEFAULT now() NOT NULL,\n Reason varchar(200),\n Admin varchar(30) DEFAULT '-',\n PRIMARY KEY (id)\n);\n\nCREATE INDEX badusers_username\n ON badusers\n USING btree\n (username);\n\nCREATE INDEX badusers_date\n ON badusers\n USING btree\n (date);\n\n-----Original Message-----\nFrom: R, Rajesh (STSD) [mailto:[email protected]] \nSent: Thursday, September 29, 2005 9:05 AM\nTo: [email protected]; [email protected]\nSubject: [HACKERS] Query in SQL statement\n\n\n \nAm trying to port a mysql statement to postgres.\n\nPlease help me in finding the error in this,\n\n\nCREATE SEQUENCE ai_id;\nCREATE TABLE badusers (\n id int DEFAULT nextval('ai_id') NOT NULL,\n UserName varchar(30),\n Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,\n Reason varchar(200),\n Admin varchar(30) DEFAULT '-',\n PRIMARY KEY (id),\n KEY UserName (UserName),\n KEY Date (Date)\n);\n\n\nAm always getting foll. Errors,\n\nERROR: relation \"ai_id\" already exists\nERROR: syntax error at or near \"(\" at character 240\n\nThanks,\nRajesh R\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 29 Sep 2005 09:30:17 -0400",
"msg_from": "\"Obe, Regina DND\\\\MIS\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query in SQL statement"
}
] |
[
{
"msg_contents": "Hi All,\n \n I have a SQL function like :\n \nCREATE OR REPLACE FUNCTION\nfn_get_yetkili_inisyer_listesi(int4, int4)\n RETURNS SETOF kod_adi_liste_type AS\n$BODY$\n SELECT Y.KOD,Y.ADI\n FROM T_YER Y\n WHERE EXISTS (SELECT 1\n FROM T_GUZER G\n WHERE (G.BIN_YER_KOD = $1 OR COALESCE($1,0)=0)\n AND FN_FIRMA_ISVISIBLE(G.FIRMA_NO,$2) = 1\n AND G.IN_YER_KOD = Y.KOD)\n AND Y.IPTAL = 'H';\n$BODY$\n LANGUAGE 'sql' VOLATILE;\n\n When i use like \"SELECT * FROM\nfn_get_yetkili_inisyer_listesi(1, 3474)\" and \nplanner result is \"Function Scan on\nfn_get_yetkili_inisyer_listesi (cost=0.00..12.50 rows=1000\nwidth=36) (1 row) \" and it runs very slow.\n \n But when i use like \n\n \"SELECT Y.KOD,Y.ADI\n FROM T_YER Y\n WHERE EXISTS (SELECT 1\n FROM T_GUZER G\n WHERE (G.BIN_YER_KOD\n= 1 OR COALESCE(1,0)=0)\n AND FN_FIRMA_ISVISIBLE(G.FIRMA_NO,3474) = 1\n AND G.IN_YER_KOD = Y.KOD)\n AND Y.IPTAL = 'H';\" \n\nplanner result :\n\n\" \n QUERY PLAN\n \n--------------------------------------------------------------------------------\n-----------------------------\n Seq Scan on t_yer y (cost=0.00..3307.79 rows=58 width=14)\n Filter: (((iptal)::text = 'H'::text) AND (subplan))\n SubPlan\n -> Index Scan using\nt_guzer_ucret_giris_performans_idx on t_guzer g (cost\n=0.00..28.73 rows=1 width=0)\n Index Cond: ((bin_yer_kod = 1) AND (in_yer_kod =\n$0))\n Filter: (fn_firma_isvisible(firma_no, 3474) = 1)\n(6 rows)\n\"\n and it runs very fast.\n\nAny idea ?\n\nAdnan DURSUN\nASRIN Bili�im Hiz.Ltd.\n",
"msg_date": "Thu, 29 Sep 2005 22:54:58 +0300",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL Function performance"
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 10:54:58PM +0300, [email protected] wrote:\n> Hi All,\n> \n> I have a SQL function like :\n> \n> CREATE OR REPLACE FUNCTION\n> fn_get_yetkili_inisyer_listesi(int4, int4)\n> RETURNS SETOF kod_adi_liste_type AS\n> $BODY$\n> SELECT Y.KOD,Y.ADI\n> FROM T_YER Y\n> WHERE EXISTS (SELECT 1\n> FROM T_GUZER G\n> WHERE (G.BIN_YER_KOD = $1 OR COALESCE($1,0)=0)\n> AND FN_FIRMA_ISVISIBLE(G.FIRMA_NO,$2) = 1\n> AND G.IN_YER_KOD = Y.KOD)\n> AND Y.IPTAL = 'H';\n> $BODY$\n> LANGUAGE 'sql' VOLATILE;\n> \n> When i use like \"SELECT * FROM\n> fn_get_yetkili_inisyer_listesi(1, 3474)\" and \n> planner result is \"Function Scan on\n> fn_get_yetkili_inisyer_listesi (cost=0.00..12.50 rows=1000\n> width=36) (1 row) \" and it runs very slow.\n> \n> But when i use like \n> \n> \"SELECT Y.KOD,Y.ADI\n> FROM T_YER Y\n> WHERE EXISTS (SELECT 1\n> FROM T_GUZER G\n> WHERE (G.BIN_YER_KOD\n> = 1 OR COALESCE(1,0)=0)\n> AND FN_FIRMA_ISVISIBLE(G.FIRMA_NO,3474) = 1\n> AND G.IN_YER_KOD = Y.KOD)\n> AND Y.IPTAL = 'H';\" \n> \n> planner result :\n> \n> \" \n> QUERY PLAN\n> \n> --------------------------------------------------------------------------------\n> -----------------------------\n> Seq Scan on t_yer y (cost=0.00..3307.79 rows=58 width=14)\n> Filter: (((iptal)::text = 'H'::text) AND (subplan))\n> SubPlan\n> -> Index Scan using\n> t_guzer_ucret_giris_performans_idx on t_guzer g (cost\n> =0.00..28.73 rows=1 width=0)\n> Index Cond: ((bin_yer_kod = 1) AND (in_yer_kod =\n> $0))\n> Filter: (fn_firma_isvisible(firma_no, 3474) = 1)\n> (6 rows)\n> \"\n> and it runs very fast.\n> \n> Any idea ?\n\nNeed EXPLAIN ANALYZE.\n\nI suspect this is due to a cached query plan. PostgreSQL will cache a\nquery plan for the SELECT the first time you run the function and that\nplan will be re-used. Depending on what data you call the function with,\nyou could get a very different plan.\n\nAlso, you might do better with a JOIN instead of using EXISTS. You can\nalso make this function STABLE instead of VOLATILE. Likewise, if\nFN_FIRMA_ISVISIBLE can't change any data, you can also make it STABLE\nwhich would likely improve the performance of the query. But neither of\nthese ideas would account for the difference between function\nperformance and raw query performance.\n\nOn a side note, if OR $1 IS NULL works that will be more readable (and\nprobably faster) than the OR COALESCE($1,0)=0.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 15:52:25 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Function performance"
}
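Applying two of the concrete suggestions above (STABLE, and $1 IS NULL in place of COALESCE($1,0)=0) to the function from the start of the thread gives a sketch like the following. Two caveats: STABLE is only safe if FN_FIRMA_ISVISIBLE does not modify data, and the IS NULL form drops the convention of passing 0 to mean "any value", so it only applies if callers pass NULL for that case:

CREATE OR REPLACE FUNCTION fn_get_yetkili_inisyer_listesi(int4, int4)
  RETURNS SETOF kod_adi_liste_type AS
$BODY$
  SELECT Y.KOD, Y.ADI
    FROM T_YER Y
   WHERE EXISTS (SELECT 1
                   FROM T_GUZER G
                  WHERE (G.BIN_YER_KOD = $1 OR $1 IS NULL)  -- NULL now means: do not filter on BIN_YER_KOD
                    AND FN_FIRMA_ISVISIBLE(G.FIRMA_NO, $2) = 1
                    AND G.IN_YER_KOD = Y.KOD)
     AND Y.IPTAL = 'H';
$BODY$
  LANGUAGE 'sql' STABLE;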
] |
[
{
"msg_contents": "I am running version 8.0.1 on Windows 2003. I have an application that\nsubjects PostgreSQL to sudden bursts of activity at times which cannot be\npredicted. The bursts are significant enough to cause performance\ndegradation, which can be fixed by a 'vacuum analyze'.\n\nI am aware of the existence and contents of tables like pg_class.\n\nQUESTION: I would like to trigger a vacuum analyze process on a table\nwhenever it gets a large enough burst of activity to warrant it. Using the\ndata in pg_class (like the number of pages the system found the last time it\nwas vacuumed / analyzed), I would like to compare those statistics to\ncurrent size, and trigger a vacuum/analyze on a table if needed.\n\nDoes anyone know of any available tools, or an approach I could use, to\ndetermine what the CURRENT SIZE is ?\n\n\n",
"msg_date": "Thu, 29 Sep 2005 19:05:36 -0400",
"msg_from": "\"Lane Van Ingen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to Trigger An Automtic Vacuum on Selected Tables "
},
{
"msg_contents": "Autovacuum does exactly what I understood you want :-)\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Lane Van\nIngen\nEnviado el: jueves, 29 de septiembre de 2005 20:06\nPara: [email protected]\nAsunto: [PERFORM] How to Trigger An Automtic Vacuum on Selected Tables\n\n\nI am running version 8.0.1 on Windows 2003. I have an application that\nsubjects PostgreSQL to sudden bursts of activity at times which cannot be\npredicted. The bursts are significant enough to cause performance\ndegradation, which can be fixed by a 'vacuum analyze'.\n\nI am aware of the existence and contents of tables like pg_class.\n\nQUESTION: I would like to trigger a vacuum analyze process on a table\nwhenever it gets a large enough burst of activity to warrant it. Using the\ndata in pg_class (like the number of pages the system found the last time it\nwas vacuumed / analyzed), I would like to compare those statistics to\ncurrent size, and trigger a vacuum/analyze on a table if needed.\n\nDoes anyone know of any available tools, or an approach I could use, to\ndetermine what the CURRENT SIZE is ?\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n",
"msg_date": "Thu, 29 Sep 2005 21:05:45 -0300",
"msg_from": "\"Dario\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Trigger An Automtic Vacuum on Selected Tables "
}
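On the original question about measuring current size: pg_class.relpages is only refreshed by VACUUM/ANALYZE, so the live size has to come from elsewhere. On 8.1 and later pg_relation_size() is built in; on 8.0 the contrib dbsize module provides a similar function. A sketch, assuming the default 8 KB block size, listing tables whose on-disk size has at least doubled since they were last vacuumed or analyzed:

SELECT c.relname,
       c.relpages                     AS pages_at_last_vacuum,
       pg_relation_size(c.oid) / 8192 AS pages_now
  FROM pg_class c
 WHERE c.relkind = 'r'
   AND pg_relation_size(c.oid) > 2 * c.relpages::bigint * 8192;

Note also that on 8.0 the autovacuum Dario mentions is the contrib pg_autovacuum daemon; it was only integrated into the server in 8.1.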
] |
[
{
"msg_contents": "... to do the following:\n (1) Make a table memory-resident only ?\n (2) Set up user variables in memory that are persistent across all\nsessions, for\n as long as the database is up and running ?\n (3) Assure that a disk-based table is always in memory (outside of keeping\nit in\n memory buffers as a result of frequent activity which would prevent\nLRU\n operations from taking it out) ?\n\n\n",
"msg_date": "Thu, 29 Sep 2005 19:21:08 -0400",
"msg_from": "\"Lane Van Ingen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is There Any Way ...."
},
{
"msg_contents": "1) AFAIK, no. Just in case you are thinking \"There should be a way coz I\nknow it will be used all the time\", you must know that postgresql philosophy\nis \"I'm smarter than you\". If table is used all the time, it will be in\nmemory, if not, it won't waste memory.\n2) don't know.\n3) see number 1) Of course, you could run into a pathological case where\ntable is queried just before being taken out of memory. But it means, the\ntable isn't queried all the time...\n\nGreetings...\n\n\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Lane Van\nIngen\nEnviado el: jueves, 29 de septiembre de 2005 20:21\nPara: [email protected]\nAsunto: [PERFORM] Is There Any Way ....\n\n\n... to do the following:\n (1) Make a table memory-resident only ?\n (2) Set up user variables in memory that are persistent across all\nsessions, for\n as long as the database is up and running ?\n (3) Assure that a disk-based table is always in memory (outside of keeping\nit in\n memory buffers as a result of frequent activity which would prevent\nLRU\n operations from taking it out) ?\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Thu, 29 Sep 2005 21:22:42 -0300",
"msg_from": "\"Dario\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "Quoting Lane Van Ingen <[email protected]>:\n\n> ... to do the following:\n> (1) Make a table memory-resident only ?\n\nPut it on a RAM filesystem. On Linux, shmfs. On *BSD, mfs. Solaris, tmpfs.\n\n> (2) Set up user variables in memory that are persistent across all\n> sessions, for\n> as long as the database is up and running ?\n\nThis sounds like a client thing? Dunno.\n\n> (3) Assure that a disk-based table is always in memory (outside of\n> keeping\n> it in\n> memory buffers as a result of frequent activity which would prevent\n> LRU\n> operations from taking it out) ?\n> \n\nPut on RAM fs (like question 1).\n\nBasically, RAM filesystems are on RAM, meaning you need to have enough physical\nmemory to support them. And of course their contents completely disappear\nbetween reboots, so you'll need a way to populate them on bootup and make sure\nthat your updates go to a real nonvolatile storage medium (like disks). And you\nmight get swapping on some types of memory filesystems--Solaris' tmpfs is carved\nout of virtual memory, which means it will cause swapping if tmpfs contents plus\nthe rest of your applications exceed physical memory.\n\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Thu, 29 Sep 2005 17:57:21 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "On Thu, Sep 29, 2005 at 07:21:08PM -0400, Lane Van Ingen wrote:\n> (1) Make a table memory-resident only ?\n\nYou might want to look into memcached, but it's impossible to say whether it\nwill fit your needs or not without more details.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 30 Sep 2005 03:46:12 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "Lane Van Ingen wrote:\n> (2) Set up user variables in memory that are persistent across all\n> sessions, for\n> as long as the database is up and running ?\n\nYou could probably write \"C\" functions (or possibly Perl) to store data \nin shared memory. Of course you'd have to deal with concurrency issues \nyourself.\n\nStoring the values in a table and having cached access to them during \nthe session is probably your best bet.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 30 Sep 2005 08:14:38 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "On 2005-09-30 01:21, Lane Van Ingen wrote:\n> (3) Assure that a disk-based table is always in memory (outside of keeping\n> it in\n> memory buffers as a result of frequent activity which would prevent\n> LRU\n> operations from taking it out) ?\n\nI was wondering about this too. IMO it would be useful to have a way to tell\nPG that some tables were needed frequently, and should be cached if\npossible. This would allow application developers to consider joins with\nthese tables as \"cheap\", even when querying on columns that are not indexed.\nI'm thinking about smallish tables like users, groups, *types, etc which\nwould be needed every 2-3 queries, but might be swept out of RAM by one\nlarge query in between. Keeping a table like \"users\" on a RAM fs would not\nbe an option, because the information is not volatile.\n\n\ncheers,\nstefan\n",
"msg_date": "Tue, 04 Oct 2005 12:31:42 +0200",
"msg_from": "Stefan Weiss <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "Yes, Stefan, the kind of usage you are mentioning is exactly why I was\nasking.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Stefan Weiss\nSent: Tuesday, October 04, 2005 6:32 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Is There Any Way ....\n\n\nOn 2005-09-30 01:21, Lane Van Ingen wrote:\n> (3) Assure that a disk-based table is always in memory (outside of\nkeeping\n> it in\n> memory buffers as a result of frequent activity which would prevent\n> LRU\n> operations from taking it out) ?\n\nI was wondering about this too. IMO it would be useful to have a way to tell\nPG that some tables were needed frequently, and should be cached if\npossible. This would allow application developers to consider joins with\nthese tables as \"cheap\", even when querying on columns that are not indexed.\nI'm thinking about smallish tables like users, groups, *types, etc which\nwould be needed every 2-3 queries, but might be swept out of RAM by one\nlarge query in between. Keeping a table like \"users\" on a RAM fs would not\nbe an option, because the information is not volatile.\n\n\ncheers,\nstefan\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n",
"msg_date": "Tue, 4 Oct 2005 10:45:48 -0400",
"msg_from": "\"Lane Van Ingen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 12:31:42PM +0200, Stefan Weiss wrote:\n> On 2005-09-30 01:21, Lane Van Ingen wrote:\n> > (3) Assure that a disk-based table is always in memory (outside of keeping\n> > it in\n> > memory buffers as a result of frequent activity which would prevent\n> > LRU\n> > operations from taking it out) ?\n> \n> I was wondering about this too. IMO it would be useful to have a way to tell\n> PG that some tables were needed frequently, and should be cached if\n> possible. This would allow application developers to consider joins with\n> these tables as \"cheap\", even when querying on columns that are not indexed.\n> I'm thinking about smallish tables like users, groups, *types, etc which\n> would be needed every 2-3 queries, but might be swept out of RAM by one\n> large query in between. Keeping a table like \"users\" on a RAM fs would not\n> be an option, because the information is not volatile.\n\nWhy do you think you'll know better than the database how frequently\nsomething is used? At best, your guess will be correct and PostgreSQL\n(or the kernel) will keep the table in memory. Or, your guess is wrong\nand you end up wasting memory that could have been used for something\nelse.\n\nIt would probably be better if you describe why you want to force this\ntable (or tables) into memory, so we can point you at more appropriate\nsolutions.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 15:57:20 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "Which version of PG are you using? One of the new features for 8.0 was\nan improved caching algorithm that was smart enough to avoid letting a\nsingle big query sweep everything else out of cache.\n\n-- Mark Lewis\n\n\nOn Tue, 2005-10-04 at 10:45 -0400, Lane Van Ingen wrote:\n> Yes, Stefan, the kind of usage you are mentioning is exactly why I was\n> asking.\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Stefan Weiss\n> Sent: Tuesday, October 04, 2005 6:32 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Is There Any Way ....\n> \n> \n> On 2005-09-30 01:21, Lane Van Ingen wrote:\n> > (3) Assure that a disk-based table is always in memory (outside of\n> keeping\n> > it in\n> > memory buffers as a result of frequent activity which would prevent\n> > LRU\n> > operations from taking it out) ?\n> \n> I was wondering about this too. IMO it would be useful to have a way to tell\n> PG that some tables were needed frequently, and should be cached if\n> possible. This would allow application developers to consider joins with\n> these tables as \"cheap\", even when querying on columns that are not indexed.\n> I'm thinking about smallish tables like users, groups, *types, etc which\n> would be needed every 2-3 queries, but might be swept out of RAM by one\n> large query in between. Keeping a table like \"users\" on a RAM fs would not\n> be an option, because the information is not volatile.\n> \n> \n> cheers,\n> stefan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n",
"msg_date": "Tue, 04 Oct 2005 15:23:52 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
}
] |
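On Richard Huxton's point earlier in this thread that "C" functions could keep user variables in shared memory: the fragment below shows only the operating-system half of that idea, a System V shared memory segment that every process attaching with the same key sees. It is a bare sketch, not server code; the PostgreSQL function-manager glue is omitted, the key is arbitrary, and, as Richard warns, all locking is left to the reader.

/* Minimal sketch (not PostgreSQL-specific): keep a few "user variables"
 * in a System V shared memory segment so that separate processes
 * attached with the same key all see the same values.  Concurrency
 * control is intentionally omitted; a real version needs a lock. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SHM_KEY 0x50475641          /* arbitrary key ("PGVA") */

typedef struct {
    long counter;                   /* example persistent variable */
    char label[64];                 /* another example variable    */
} SharedVars;

static SharedVars *attach_shared_vars(void)
{
    /* Create the segment if it does not exist yet, otherwise attach. */
    int id = shmget(SHM_KEY, sizeof(SharedVars), IPC_CREAT | 0600);
    if (id == -1)
        return NULL;
    void *p = shmat(id, NULL, 0);
    return (p == (void *) -1) ? NULL : (SharedVars *) p;
}

int main(void)
{
    SharedVars *vars = attach_shared_vars();
    if (vars == NULL) {
        perror("shared memory");
        return 1;
    }
    vars->counter++;                /* visible to every attached process */
    snprintf(vars->label, sizeof(vars->label), "run %ld", vars->counter);
    printf("counter=%ld label=%s\n", vars->counter, vars->label);
    shmdt(vars);
    return 0;
}

In practice the last suggestion in that message, keeping the values in an ordinary table and caching them per session, is usually the simpler and safer route.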
[
{
"msg_contents": ">From: Pailloncy Jean-Gerard <[email protected]>\n>Sent: Sep 29, 2005 7:11 AM\n>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n>>>Jeff Baker:\n>>>Your main example seems to focus on a large table where a key \n>>>column has constrained values. This case is interesting in\n>>>proportion to the number of possible values. If I have billions\n>>>of rows, each having one of only two values, I can think of a\n>>>trivial and very fast method of returning the table \"sorted\" by\n>>>that key: make two sequential passes, returning the first value\n>>>on the first pass and the second value on the second pass.\n>>> This will be faster than the method you propose.\n>>\n>>Ron Peacetree:\n>>1= No that was not my main example. It was the simplest example \n>>used to frame the later more complicated examples. Please don't\n>>get hung up on it.\n>>\n>>2= You are incorrect. Since IO is the most expensive operation we \n>>can do, any method that makes two passes through the data at top\n>>scanning speed will take at least 2x as long as any method that only\n>>takes one such pass.\n>\n>You do not get the point.\n>As the time you get the sorted references to the tuples, you need to \n>fetch the tuples themself, check their visbility, etc. and returns \n>them to the client.\n>\nAs PFC correctly points out elsewhere in this thread, =maybe= you\nhave to do all that. The vast majority of the time people are not\ngoing to want to look at a detailed record by record output of that\nmuch data.\n\nThe most common usage is to calculate or summarize some quality\nor quantity of the data and display that instead or to use the tuples\nor some quality of the tuples found as an intermediate step in a\nlonger query process such as a join.\n\nSometimes there's a need to see _some_ of the detailed records; a\nrandom sample or a region in a random part of the table or etc.\nIt's rare that there is a RW need to actually list every record in a\ntable of significant size.\n\nOn the rare occasions where one does have to return or display all\nrecords in such large table, network IO and/or display IO speeds\nare the primary performance bottleneck. Not HD IO.\n\nNonetheless, if there _is_ such a need, there's nothing stopping us\nfrom rearranging the records in RAM into sorted order in one pass\nthrough RAM (using at most space for one extra record) after\nconstructing the cache conscious Btree index. Then the sorted\nrecords can be written to HD in RAM buffer sized chunks very\nefficiently. \nRepeating this process until we have stepped through the entire\ndata set will take no more HD IO than one HD scan of the data\nand leave us with a permanent result that can be reused for\nmultiple purposes. If the sorted records are written in large\nenough chunks, rereading them at any later time can be done\nat maximum HD throughput\n\nIn a total of two HD scans (one to read the original data, one\nto write out the sorted data) we can make a permanent\nrearrangement of the data. We've essentially created a \ncluster index version of the data.\n\n\n>So, if there is only 2 values in the column of big table that is larger \n>than available RAM, two seq scans of the table without any sorting\n>is the fastest solution.\n>\nIf you only need to do this once, yes this wins. OTOH, if you have\nto do this sort even twice, my method is better.\n\nregards,\nRon \n",
"msg_date": "Thu, 29 Sep 2005 22:03:26 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
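The "Key+RID" representation Ron keeps referring to is easy to picture in miniature. In the toy sketch below (plain C written for this note, not PostgreSQL code), the wide records are never moved during the sort; only small key/row-id entries are sorted, and a single gather pass then reads the records out in key order. Whether that final gather is cheap is exactly what the next messages argue about: it is sequential over the index entries but random over the records unless, as Ron proposes, it is done one RAM buffer full at a time.

/* Illustration of the "sort keys plus row ids, not whole records" idea:
 * a toy sketch only.  The records stay where they are; small (key, rid)
 * entries are sorted instead, and one gather pass emits the records in
 * key order. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {             /* stand-in for a wide tuple */
    int  key;
    char payload[2048];
} Record;

typedef struct {             /* what actually gets sorted */
    int    key;
    size_t rid;              /* position of the record */
} IndexEntry;

static int cmp_entry(const void *a, const void *b)
{
    const IndexEntry *x = a, *y = b;
    return (x->key > y->key) - (x->key < y->key);
}

int main(void)
{
    enum { N = 10000 };
    Record *records = malloc(N * sizeof(Record));
    IndexEntry *idx = malloc(N * sizeof(IndexEntry));
    if (records == NULL || idx == NULL)
        return 1;

    for (size_t i = 0; i < N; i++) {
        records[i].key = rand();
        snprintf(records[i].payload, sizeof(records[i].payload),
                 "row %zu", i);
        idx[i].key = records[i].key;     /* copy only the key ... */
        idx[i].rid = i;                  /* ... plus a row id      */
    }

    /* Sort small (key, rid) entries instead of 2 KB records. */
    qsort(idx, N, sizeof(IndexEntry), cmp_entry);

    /* One gather pass emits (or writes out) records in sorted order. */
    for (size_t i = 0; i < 5; i++) {
        const Record *r = &records[idx[i].rid];
        printf("%d\t%s\n", r->key, r->payload);
    }

    free(records);
    free(idx);
    return 0;
}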
[
{
"msg_contents": ">From: Zeugswetter Andreas DAZ SD <[email protected]>\n>Sent: Sep 29, 2005 9:28 AM\n>Subject: RE: [HACKERS] [PERFORM] A Better External Sort?\n>\n>>In my original example, a sequential scan of the 1TB of 2KB \n>>or 4KB records, => 250M or 500M records of data, being sorted \n>>on a binary value key will take ~1000x more time than reading \n>>in the ~1GB Btree I described that used a Key+RID (plus node \n>>pointers) representation of the data.\n>\n>Imho you seem to ignore the final step your algorithm needs of\n>collecting the data rows. After you sorted the keys the collect\n>step will effectively access the tuples in random order (given a \n>sufficiently large key range).\n>\n\"Collecting\" the data rows can be done for each RAM buffer full of\nof data in one pass through RAM after we've built the Btree. Then\nif desired those data rows can be read out to HD in sorted order\nin essentially one streaming burst. This combination of index build\n+ RAM buffer rearrangement + write results to HD can be repeat\nas often as needed until we end up with an overall Btree index and\na set of sorted sublists on HD. Overall HD IO for the process is only\ntwo effectively sequential passes through the data.\n\nSubsequent retrieval of the sorted information from HD can be\ndone at full HD streaming speed and whatever we've decided to\nsave to HD can be reused later if we desire.\n\nHope this helps,\nRon\n",
"msg_date": "Thu, 29 Sep 2005 22:57:19 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": ">From: Josh Berkus <[email protected]>\n>Sent: Sep 29, 2005 12:54 PM\n>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n>\n>The biggest single area where I see PostgreSQL external\n>sort sucking is on index creation on large tables. For\n>example, for free version of TPCH, it takes only 1.5 hours to\n>load a 60GB Lineitem table on OSDL's hardware, but over 3\n>hours to create each index on that table. This means that\n>over all our load into TPCH takes 4 times as long to create \n>the indexes as it did to bulk load the data.\n>\nHmmm.\n60GB/5400secs= 11MBps. That's ssllooww. So the first\nproblem is evidently our physical layout and/or HD IO layer\nsucks.\n\nCreating the table and then creating the indexes on the table\nis going to require more physical IO than if we created the\ntable and the indexes concurrently in chunks and then\ncombined the indexes on the chunks into the overall indexes\nfor the whole table, so there's a potential speed-up.\n\nThe method I've been talking about is basically a recipe for\ncreating indexes as fast as possible with as few IO operations,\nHD or RAM, as possible and nearly no random ones, so it\ncould help as well.\n\nOTOH, HD IO rate is the fundamental performance metric.\nAs long as our HD IO rate is pessimal, so will the performance\nof everything else be. Why can't we load a table at closer to\nthe peak IO rate of the HDs? \n\n\n>Anyone restoring a large database from pg_dump is in the\n>same situation. Even worse, if you have to create a new\n>index on a large table on a production database in use,\n>because the I/O from the index creation swamps everything.\n>\nFix for this in the works ;-)\n\n\n>Following an index creation, we see that 95% of the time\n>required is the external sort, which averages 2mb/s.\n>\nAssuming decent HD HW, this is HORRIBLE.\n\nWhat's kind of instrumenting and profiling has been done of\nthe code involved?\n\n\n>This is with seperate drives for the WAL, the pg_tmp, the table\n>and the index. I've confirmed that increasing work_mem \n>beyond a small minimum (around 128mb) had no benefit on\n>the overall index creation speed.\n>\nNo surprise. The process is severely limited by the abyssmally\nslow HD IO.\n\nRon\n",
"msg_date": "Fri, 30 Sep 2005 01:24:30 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Ron,\n\n> Hmmm.\n> 60GB/5400secs= 11MBps. That's ssllooww. So the first\n> problem is evidently our physical layout and/or HD IO layer\n> sucks.\n\nActually, it's much worse than that, because the sort is only dealing \nwith one column. As I said, monitoring the iostat our top speed was \n2.2mb/s.\n\n--Josh\n",
"msg_date": "Fri, 30 Sep 2005 10:23:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "> > This smells like a TCP communication problem.\n> \n> I'm puzzled by that remark. How much does TCP get into the \n> picture in a local Windows client/server environment?\n\nWindows has no Unix Domain Sockets (no surprise there), so TCP\nconnections over the loopback interface are used to connect to the\nserver.\n\n//Magnus\n",
"msg_date": "Fri, 30 Sep 2005 07:58:23 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparative performance"
}
] |
[
{
"msg_contents": "Hi list,\ni'm a postgres noob and before designing a new Database Schema i would\nlike to ask you a simple question because preformance will be a critical\npoint.\n\nMy question regards performance comparison between a Table which\nincludes a list or a Table without list and with an external table.\n\n1 - TABLE A has several fields (most of which are types defined by me)\nfor a total of 30-40 field (counting fields in my new types) and could\ncontain 5000 rows. \n \n2 - TABLE A will be accessed several times with several views or query.\n (many SELECT,few UPDATE)\n\n\nLet's suppose i need to add an info about addresses (which includes\ncountry,city,cap....etc etc).\nAddresses can vary from 1 to 20 entries..\n\nTalking about performance is it better to include a list of addresses in\nTABLE A or is it better to create an external TABLE B?\n\nI'm afraid that including those addresses will decrease performance in\ndoing many SELECT.\n\nOn the other side i'm afraid of doing JOIN with TABLE B.\n\nWhat do you think?\nThank you!\n\nxchris\n\n",
"msg_date": "Fri, 30 Sep 2005 11:16:21 +0200",
"msg_from": "xchris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lists or external TABLE?"
},
{
"msg_contents": "xchris wrote:\n> \n> Let's suppose i need to add an info about addresses (which includes\n> country,city,cap....etc etc).\n> Addresses can vary from 1 to 20 entries..\n> \n> Talking about performance is it better to include a list of addresses in\n> TABLE A or is it better to create an external TABLE B?\n\nDon't optimise before you have to.\n\nDo the addresses belong in \"A\"? If so, put them there. On the other \nhand, if you want items in \"A\" to have more than one address, or to \nshare addresses then clearly you will want a separate address table. \nIt's difficult to say more without a clear example of your requirements.\n\nEven if you choose to alter your design for performance reasons, you \nshould make sure you run tests with realistic workloads and hardware. \nBut first, trust PG to do its job and design your database according to \nthe problem requirements.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 30 Sep 2005 10:34:35 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lists or external TABLE?"
},
{
"msg_contents": "On Fri, Sep 30, 2005 at 10:34:35AM +0100, Richard Huxton wrote:\n> xchris wrote:\n> >\n> >Let's suppose i need to add an info about addresses (which includes\n> >country,city,cap....etc etc).\n> >Addresses can vary from 1 to 20 entries..\n> >\n> >Talking about performance is it better to include a list of addresses in\n> >TABLE A or is it better to create an external TABLE B?\n> \n> Don't optimise before you have to.\n> \n> Do the addresses belong in \"A\"? If so, put them there. On the other \n> hand, if you want items in \"A\" to have more than one address, or to \n> share addresses then clearly you will want a separate address table. \n> It's difficult to say more without a clear example of your requirements.\n> \n> Even if you choose to alter your design for performance reasons, you \n> should make sure you run tests with realistic workloads and hardware. \n> But first, trust PG to do its job and design your database according to \n> the problem requirements.\n\nOn top of what Richard said, 5000 rows is pretty tiny. Even if each row\nwas 1K wide, that's still only 5MB.\n\nAlso, if from a data-model standpoint it doesn't matter which way you\ngo, I suggest looking at what it will take to write queries against both\nversions before deciding. I tend to stay away from arrays because\nthey tend to be harder to query against.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 16:01:37 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lists or external TABLE?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-hackers-\n> [email protected]] On Behalf Of PFC\n> Sent: Thursday, September 29, 2005 9:10 AM\n> To: [email protected]\n> Cc: Pg Hackers; [email protected]\n> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n> \n> \n> \tJust to add a little anarchy in your nice debate...\n> \n> \tWho really needs all the results of a sort on your terabyte\ntable ?\n\nReports with ORDER BY/GROUP BY, and many other possibilities. 40% of\nmainframe CPU cycles are spent sorting. That is because huge volumes of\ndata require lots of energy to be meaningfully categorized. Let's\nsuppose that instead of a terabyte of data (or a petabyte or whatever)\nwe have 10% of it. That's still a lot of data.\n\n> \tI guess not many people do a SELECT from such a table and want\nall\n> the\n> results. \n\nWhat happens when they do? The cases where it is already fast are not\nvery important. The cases where things go into the crapper are the ones\nthat need attention.\n\n>So, this leaves :\n> \t- Really wanting all the results, to fetch using a cursor,\n> \t- CLUSTER type things, where you really want everything in\norder,\n> \t- Aggregates (Sort->GroupAggregate), which might really need to\nsort\n> the\n> whole table.\n> \t- Complex queries where the whole dataset needs to be examined,\nin\n> order\n> to return a few values\n> \t- Joins (again, the whole table is probably not going to be\n> selected)\n> \t- And the ones I forgot.\n> \n> \tHowever,\n> \n> \tMost likely you only want to SELECT N rows, in some ordering :\n> \t- the first N (ORDER BY x LIMIT N)\n> \t- last N (ORDER BY x DESC LIMIT N)\n\nFor these, the QuickSelect algorithm is what is wanted. For example:\n#include <stdlib.h>\ntypedef double Etype;\n\nextern Etype RandomSelect(Etype * A, size_t p, size_t r, size_t i);\nextern size_t RandRange(size_t a, size_t b);\nextern size_t RandomPartition(Etype * A, size_t p, size_t r);\nextern size_t Partition(Etype * A, size_t p, size_t r);\n\n/*\n**\n** In the following code, every reference to CLR means:\n**\n** \"Introduction to Algorithms\"\n** By Thomas H. Cormen, Charles E. Leiserson, Ronald L. 
Rivest\n** ISBN 0-07-013143-0\n*/\n\n\n/*\n** CLR, page 187\n*/\nEtype RandomSelect(Etype A[], size_t p, size_t r, size_t i)\n{\n size_t q,\n k;\n if (p == r)\n return A[p];\n q = RandomPartition(A, p, r);\n k = q - p + 1;\n\n if (i <= k)\n return RandomSelect(A, p, q, i);\n else\n return RandomSelect(A, q + 1, r, i - k);\n}\n\nsize_t RandRange(size_t a, size_t b)\n{\n size_t c = (size_t) ((double) rand() / ((double) RAND_MAX +\n1) * (b - a));\n return c + a;\n}\n\n/*\n** CLR, page 162\n*/\nsize_t RandomPartition(Etype A[], size_t p, size_t r)\n{\n size_t i = RandRange(p, r);\n Etype Temp;\n Temp = A[p];\n A[p] = A[i];\n A[i] = Temp;\n return Partition(A, p, r);\n}\n\n/*\n** CLR, page 154\n*/\nsize_t Partition(Etype A[], size_t p, size_t r)\n{\n Etype x,\n temp;\n size_t i,\n j;\n\n x = A[p];\n i = p - 1;\n j = r + 1;\n\n for (;;) {\n do {\n j--;\n } while (!(A[j] <= x));\n do {\n i++;\n } while (!(A[i] >= x));\n if (i < j) {\n temp = A[i];\n A[i] = A[j];\n A[j] = temp;\n } else\n return j;\n }\n}\n\n> \t- WHERE x>value ORDER BY x LIMIT N\n> \t- WHERE x<value ORDER BY x DESC LIMIT N\n> \t- and other variants\n> \n> \tOr, you are doing a Merge JOIN against some other table ; in\nthat\n> case,\n> yes, you might need the whole sorted terabyte table, but most likely\nthere\n> are WHERE clauses in the query that restrict the set, and thus, maybe\nwe\n> can get some conditions or limit values on the column to sort.\n\nWhere clause filters are to be applied AFTER the join operations,\naccording to the SQL standard.\n\n> \tAlso the new, optimized hash join, which is more memory\nefficient,\n> might\n> cover this case.\n\nFor == joins. Not every order by is applied to joins. And not every\njoin is an equal join.\n\n> \tPoint is, sometimes, you only need part of the results of your\nsort.\n> And\n> the bigger the sort, the most likely it becomes that you only want\npart of\n> the results.\n\nThat is an assumption that will sometimes be true, and sometimes not.\nIt is not possible to predict usage patterns for a general purpose\ndatabase system.\n\n> So, while we're in the fun hand-waving, new algorithm trying\n> mode, why not consider this right from the start ? (I know I'm totally\nin\n> hand-waving mode right now, so slap me if needed).\n> \n> \tI'd say your new, fancy sort algorithm needs a few more input\nvalues\n> :\n> \n> \t- Range of values that must appear in the final result of the\nsort :\n> \t\tnone, minimum, maximum, both, or even a set of values\nfrom the\n> other\n> side of the join, hashed, or sorted.\n\nThat will already happen (or it certainly ought to)\n\n> \t- LIMIT information (first N, last N, none)\n\nThat will already happen (or it certainly ought to -- I would be pretty\nsurprised if it does not happen)\n\n> \t- Enhanced Limit information (first/last N values of the second\n> column to\n> sort, for each value of the first column) (the infamous \"top10 by\n> category\" query)\n> \t- etc.\n\nAll the filters will (at some point) be applied to the data unless they\ncannot be applied to the data by formal rule.\n \n> \tWith this, the amount of data that needs to be kept in memory is\n> dramatically reduced, from the whole table (even using your compressed\n> keys, that's big) to something more manageable which will be closer to\nthe\n> size of the final result set which will be returned to the client, and\n> avoid a lot of effort.\n\nSorting the minimal set is a good idea. Sometimes there is a big\nsavings there. 
I would be pretty surprised if a large fraction of data\nthat does not have to be included is actually processed during the\nsorts.\n \n> \tSo, this would not be useful in all cases, but when it applies,\nit\n> would\n> be really useful.\n\nNo argument there. And if an algorithm is being reworked, it is a good\nidea to look at things like filtering to see if all filtering that is\nallowed by the language standard before the sort takes place is applied.\n",
"msg_date": "Fri, 30 Sep 2005 10:44:34 -0700",
"msg_from": "\"Dann Corbit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
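Dann's "first N" case can be made concrete without the template machinery above. The sketch below (ordinary C written for this note, not taken from the message, though the partition idea is the same) rearranges the array around the N-th smallest key in expected linear time and then sorts only those N values, which is all an ORDER BY x LIMIT N needs.

/* Toy sketch of the "first N" case: quickselect-style partitioning puts
 * the N smallest values at the front in expected linear time; only those
 * N values are then fully sorted. */
#include <stdio.h>
#include <stdlib.h>

static void swap_d(double *a, double *b) { double t = *a; *a = *b; *b = t; }

/* Partition a[lo..hi] around a randomly chosen pivot; return its slot. */
static size_t partition(double *a, size_t lo, size_t hi)
{
    size_t r = lo + (size_t) rand() % (hi - lo + 1);
    swap_d(&a[r], &a[hi]);
    double pivot = a[hi];
    size_t i = lo;
    for (size_t j = lo; j < hi; j++)
        if (a[j] < pivot)
            swap_d(&a[i++], &a[j]);
    swap_d(&a[i], &a[hi]);
    return i;
}

/* Rearrange a[] so that a[0..n-1] hold the n smallest values (unordered). */
static void select_smallest(double *a, size_t len, size_t n)
{
    if (n == 0 || n >= len)
        return;
    size_t lo = 0, hi = len - 1, k = n - 1;
    while (lo < hi) {
        size_t p = partition(a, lo, hi);
        if (p == k)
            return;
        if (p < k)
            lo = p + 1;
        else
            hi = p - 1;
    }
}

static int cmp_double(const void *x, const void *y)
{
    double a = *(const double *) x, b = *(const double *) y;
    return (a > b) - (a < b);
}

int main(void)
{
    enum { LEN = 1000000, N = 10 };
    double *a = malloc(LEN * sizeof(double));
    if (a == NULL)
        return 1;
    for (size_t i = 0; i < LEN; i++)
        a[i] = (double) rand() / RAND_MAX;

    select_smallest(a, LEN, N);              /* touch everything once      */
    qsort(a, N, sizeof(double), cmp_double); /* sort only the LIMIT N rows */

    for (size_t i = 0; i < N; i++)
        printf("%f\n", a[i]);
    free(a);
    return 0;
}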
[
{
"msg_contents": "That 11MBps was your =bulk load= speed. If just loading a table\nis this slow, then there are issues with basic physical IO, not just\nIO during sort operations.\n\nAs I said, the obvious candidates are inefficient physical layout\nand/or flawed IO code.\n\nUntil the basic IO issues are addressed, we could replace the\npresent sorting code with infinitely fast sorting code and we'd\nstill be scrod performance wise.\n\nSo why does basic IO suck so badly?\n\nRon \n\n\n-----Original Message-----\nFrom: Josh Berkus <[email protected]>\nSent: Sep 30, 2005 1:23 PM\nTo: Ron Peacetree <[email protected]>\nCc: [email protected], [email protected]\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nRon,\n\n> Hmmm.\n> 60GB/5400secs= 11MBps. That's ssllooww. So the first\n> problem is evidently our physical layout and/or HD IO layer\n> sucks.\n\nActually, it's much worse than that, because the sort is only dealing \nwith one column. As I said, monitoring the iostat our top speed was \n2.2mb/s.\n\n--Josh\n\n",
"msg_date": "Fri, 30 Sep 2005 16:20:50 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "I have seen similar performance as Josh and my reasoning is as follows:\n\n* WAL is the biggest bottleneck with its default size of 16MB. Many \npeople hate to recompile the code to change its default, and \nincreasing checkpoint segments help but still there is lot of overhead \nin the rotation of WAL files (Even putting WAL on tmpfs shows that it is \nstill slow). Having an option for bigger size is helpful to a small \nextent percentagewise (and frees up CPU a bit in doing file rotation)\n\n* Growing files: Even though this is OS dependent but it does spend lot \nof time doing small 8K block increases to grow files. If we can signal \nbigger chunks to grow or \"pre-grow\" to expected size of data files \nthat will help a lot in such cases.\n\n* COPY command had restriction but that has been fixed to a large \nextent.(Great job)\n\nBut ofcourse I have lost touch with programming and can't begin to \nunderstand PostgreSQL code to change it myself.\n\nRegards,\nJignesh\n\n\n\n\nRon Peacetree wrote:\n\n>That 11MBps was your =bulk load= speed. If just loading a table\n>is this slow, then there are issues with basic physical IO, not just\n>IO during sort operations.\n>\n>As I said, the obvious candidates are inefficient physical layout\n>and/or flawed IO code.\n>\n>Until the basic IO issues are addressed, we could replace the\n>present sorting code with infinitely fast sorting code and we'd\n>still be scrod performance wise.\n>\n>So why does basic IO suck so badly?\n>\n>Ron \n>\n>\n>-----Original Message-----\n>From: Josh Berkus <[email protected]>\n>Sent: Sep 30, 2005 1:23 PM\n>To: Ron Peacetree <[email protected]>\n>Cc: [email protected], [email protected]\n>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n>\n>Ron,\n>\n> \n>\n>>Hmmm.\n>>60GB/5400secs= 11MBps. That's ssllooww. So the first\n>>problem is evidently our physical layout and/or HD IO layer\n>>sucks.\n>> \n>>\n>\n>Actually, it's much worse than that, because the sort is only dealing \n>with one column. As I said, monitoring the iostat our top speed was \n>2.2mb/s.\n>\n>--Josh\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n>\n",
"msg_date": "Fri, 30 Sep 2005 16:38:00 -0400",
"msg_from": "\"Jignesh K. Shah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Ron,\n\nOn 9/30/05 1:20 PM, \"Ron Peacetree\" <[email protected]> wrote:\n\n> That 11MBps was your =bulk load= speed. If just loading a table\n> is this slow, then there are issues with basic physical IO, not just\n> IO during sort operations.\n\nBulk loading speed is irrelevant here - that is dominated by parsing, which\nwe have covered copiously (har har) previously and have sped up by 500%,\nwhich still makes Postgres < 1/2 the loading speed of MySQL.\n \n> As I said, the obvious candidates are inefficient physical layout\n> and/or flawed IO code.\n\nYes.\n \n> Until the basic IO issues are addressed, we could replace the\n> present sorting code with infinitely fast sorting code and we'd\n> still be scrod performance wise.\n\nPostgres' I/O path has many problems that must be micro-optimized away. Too\nsmall of an operand size compared to disk caches, memory, etc etc are the\ncommon problem. Another is lack of micro-parallelism (loops) with long\nenough runs to let modern processors pipeline and superscale.\n \nThe net problem here is that a simple \"select blah from blee order\nby(blah.a);\" runs at 1/100 of the sequential scan rate.\n\n- Luke\n\n\n",
"msg_date": "Fri, 30 Sep 2005 13:38:54 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Ron,\n\n> That 11MBps was your =bulk load= speed. If just loading a table\n> is this slow, then there are issues with basic physical IO, not just\n> IO during sort operations.\n\nOh, yeah. Well, that's separate from sort. See multiple posts on this \nlist from the GreenPlum team, the COPY patch for 8.1, etc. We've been \nconcerned about I/O for a while. \n\nRealistically, you can't do better than about 25MB/s on a single-threaded \nI/O on current Linux machines, because your bottleneck isn't the actual \ndisk I/O. It's CPU. Databases which \"go faster\" than this are all, to \nmy knowledge, using multi-threaded disk I/O.\n\n(and I'd be thrilled to get a consistent 25mb/s on PostgreSQL, but that's \nanother thread ... )\n\n> As I said, the obvious candidates are inefficient physical layout\n> and/or flawed IO code.\n\nYeah, that's what I thought too. But try sorting an 10GB table, and \nyou'll see: disk I/O is practically idle, while CPU averages 90%+. We're \nCPU-bound, because sort is being really inefficient about something. I \njust don't know what yet.\n\nIf we move that CPU-binding to a higher level of performance, then we can \nstart looking at things like async I/O, O_Direct, pre-allocation etc. that \nwill give us incremental improvements. But what we need now is a 5-10x \nimprovement and that's somewhere in the algorithms or the code.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 30 Sep 2005 13:41:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "\n\n> Bulk loading speed is irrelevant here - that is dominated by parsing, \n> which\n> we have covered copiously (har har) previously and have sped up by 500%,\n> which still makes Postgres < 1/2 the loading speed of MySQL.\n\n\tLet's ask MySQL 4.0\n\n> LOAD DATA INFILE blah\n0 errors, 666 warnings\n> SHOW WARNINGS;\nnot implemented. upgrade to 4.1\n\nduhhhhhhhhhhhhhhhhhhhhh\n",
"msg_date": "Fri, 30 Sep 2005 23:45:10 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Fri, 2005-09-30 at 13:41 -0700, Josh Berkus wrote:\n> Yeah, that's what I thought too. But try sorting an 10GB table, and \n> you'll see: disk I/O is practically idle, while CPU averages 90%+. We're \n> CPU-bound, because sort is being really inefficient about something. I \n> just don't know what yet.\n> \n> If we move that CPU-binding to a higher level of performance, then we can \n> start looking at things like async I/O, O_Direct, pre-allocation etc. that \n> will give us incremental improvements. But what we need now is a 5-10x \n> improvement and that's somewhere in the algorithms or the code.\n\nI'm trying to keep an open mind about what the causes are, and I think\nwe need to get a much better characterisation of what happens during a\nsort before we start trying to write code. It is always too easy to jump\nin and tune the wrong thing, which is not a good use of time.\n\nThe actual sort algorithms looks damn fine to me and the code as it\nstands is well optimised. That indicates to me that we've come to the\nend of the current line of thinking and we need a new approach, possibly\nin a number of areas.\n\nFor myself, I don't wish to be drawn further on solutions at this stage\nbut I am collecting performance data, so any test results are most\nwelcome.\n\nBest Regards, Simon Riggs\n\n\n\n",
"msg_date": "Fri, 30 Sep 2005 23:21:16 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Fri, Sep 30, 2005 at 01:41:22PM -0700, Josh Berkus wrote:\n>Realistically, you can't do better than about 25MB/s on a single-threaded \n>I/O on current Linux machines,\n\nWhat on earth gives you that idea? Did you drop a zero?\n\nMike Stone\n",
"msg_date": "Fri, 30 Sep 2005 19:10:32 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On R, 2005-09-30 at 13:38 -0700, Luke Lonergan wrote:\n\n> \n> Bulk loading speed is irrelevant here - that is dominated by parsing, which\n> we have covered copiously (har har) previously and have sped up by 500%,\n> which still makes Postgres < 1/2 the loading speed of MySQL.\n\nIs this < 1/2 of MySQL with WAL on different spindle and/or WAL\ndisabled ?\n\n-- \nHannu Krosing <[email protected]>\n\n",
"msg_date": "Sat, 01 Oct 2005 17:03:52 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Michael,\n\n> >Realistically, you can't do better than about 25MB/s on a\n> > single-threaded I/O on current Linux machines,\n>\n> What on earth gives you that idea? Did you drop a zero?\n\nNope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A \nBig-Name Proprietary Database doesn't get much more than that either.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Oct 2005 13:34:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Mon, 2005-10-03 at 13:34 -0700, Josh Berkus wrote:\n> Michael,\n> \n> > >Realistically, you can't do better than about 25MB/s on a\n> > > single-threaded I/O on current Linux machines,\n> >\n> > What on earth gives you that idea? Did you drop a zero?\n> \n> Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A \n> Big-Name Proprietary Database doesn't get much more than that either.\n\nI find this claim very suspicious. I get single-threaded reads in\nexcess of 1GB/sec with XFS and > 250MB/sec with ext3. \n\n-jwb\n",
"msg_date": "Mon, 03 Oct 2005 13:42:31 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Jeff,\n\n> > Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A\n> > Big-Name Proprietary Database doesn't get much more than that either.\n>\n> I find this claim very suspicious. I get single-threaded reads in\n> excess of 1GB/sec with XFS and > 250MB/sec with ext3.\n\nDatabase reads? Or raw FS reads? It's not the same thing.\n\nAlso, we're talking *write speed* here, not read speed.\n\nI also find *your* claim suspicious, since there's no way XFS is 300% faster \nthan ext3 for the *general* case.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Oct 2005 14:16:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Jeff, Josh,\n\nOn 10/3/05 2:16 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n> Jeff,\n> \n>>> Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A\n>>> Big-Name Proprietary Database doesn't get much more than that either.\n>> \n>> I find this claim very suspicious. I get single-threaded reads in\n>> excess of 1GB/sec with XFS and > 250MB/sec with ext3.\n> \n> Database reads? Or raw FS reads? It's not the same thing.\n> \n> Also, we're talking *write speed* here, not read speed.\n\nI think you are both talking past each other here. I'll state what I\n*think* each of you are saying:\n\nJosh: single threaded DB writes are limited to 25MB/s\n\nMy opinion: Not if they're done better than they are now in PostgreSQL.\nPostgreSQL COPY is still CPU limited at 12MB/s on a super fast Opteron. The\ncombination of WAL and head writes while this is the case is about 50MB/s,\nwhich is far from the limit of the filesystems we test on that routinely\nperform at 250MB/s on ext2 writing in sequential 8k blocks.\n\nThere is no reason that we couldn't do triple the current COPY speed by\nreducing the CPU overhead in parsing and attribute conversion. We've talked\nthis to death, and implemented much of the code to fix it, but there's much\nmore to do.\n\nJeff: Plenty of FS bandwidth to be had on Linux, observed 250MB/s on ext3\nand 1,000MB/s on XFS.\n\nWow - can you provide a link or the results from the XFS test? Is this 8k\nblocksize sequential I/O? How many spindles and what controller are you\nusing? Inquiring minds want to know...\n\n- Luke \n\n\n",
"msg_date": "Mon, 03 Oct 2005 14:28:12 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Mon, 2005-10-03 at 14:16 -0700, Josh Berkus wrote:\n> Jeff,\n> \n> > > Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A\n> > > Big-Name Proprietary Database doesn't get much more than that either.\n> >\n> > I find this claim very suspicious. I get single-threaded reads in\n> > excess of 1GB/sec with XFS and > 250MB/sec with ext3.\n> \n> Database reads? Or raw FS reads? It's not the same thing.\n\nJust reading files off the filesystem. These are input rates I get with\na specialized sort implementation. 1GB/sec is not even especially\nwonderful, I can get that on two controllers with 24-disk stripe set.\n\nI guess database reads are different, but I remain unconvinced that they\nare *fundamentally* different. After all, a tab-delimited file (my sort\nworkload) is a kind of database.\n\n> Also, we're talking *write speed* here, not read speed.\n\nOk, I did not realize. Still you should see 250-300MB/sec\nsingle-threaded sequential output on ext3, assuming the storage can\nprovide that rate.\n\n> I also find *your* claim suspicious, since there's no way XFS is 300% faster \n> than ext3 for the *general* case.\n\nOn a single disk you wouldn't notice, but XFS scales much better when\nyou throw disks at it. I get a 50MB/sec boost from the 24th disk,\nwhereas ext3 stops scaling after 16 disks. For writes both XFS and ext3\ntop out around 8 disks, but in this case XFS tops out at 500MB/sec while\next3 can't break 350MB/sec.\n\nI'm hopeful that in the future the work being done at ClusterFS will\nmake ext3 on-par with XFS.\n\n-jwb\n",
"msg_date": "Mon, 03 Oct 2005 14:32:26 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On E, 2005-10-03 at 14:16 -0700, Josh Berkus wrote:\n> Jeff,\n> \n> > > Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A\n> > > Big-Name Proprietary Database doesn't get much more than that either.\n> >\n> > I find this claim very suspicious. I get single-threaded reads in\n> > excess of 1GB/sec with XFS and > 250MB/sec with ext3.\n> \n> Database reads? Or raw FS reads? It's not the same thing.\n\nJust FYI, I run a count(*) on a 15.6GB table on a lightly loaded db and\nit run in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in\nRAID10, reiserfs). A little less than 100MB sec.\n\nAfter this I ran count(*) over a 2.4GB file from another tablespace on\nanother device (4x142GB 10k disks in RAID10) and it run 22.5 sec on\nfirst run and 12.5 on second.\n\ndb=# show shared_buffers ;\n shared_buffers\n----------------\n 196608\n(1 row)\n\ndb=# select version();\n version\n--------------------------------------------------------------------------------------------\n PostgreSQL 8.0.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.6\n(Debian 1:3.3.6-7)\n(1 row)\n\n\n-- \nHannu Krosing <[email protected]>\n\n",
"msg_date": "Tue, 04 Oct 2005 00:43:10 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Hannu,\n\nOn 10/3/05 2:43 PM, \"Hannu Krosing\" <[email protected]> wrote:\n\n> Just FYI, I run a count(*) on a 15.6GB table on a lightly loaded db and\n> it run in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in\n> RAID10, reiserfs). A little less than 100MB sec.\n\nThis confirms our findings - sequential scan is CPU limited at about 120MB/s\nper single threaded executor. This is too slow for fast file systems like\nwe're discussing here.\n\nBizgres MPP gets 250MB/s by running multiple scanners, but we still chew up\nunnecessary amounts of CPU.\n \n> After this I ran count(*) over a 2.4GB file from another tablespace on\n> another device (4x142GB 10k disks in RAID10) and it run 22.5 sec on\n> first run and 12.5 on second.\n\nYou're getting caching effects here.\n\n- Luke\n\n\n",
"msg_date": "Mon, 03 Oct 2005 14:52:27 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Mon, Oct 03, 2005 at 01:34:01PM -0700, Josh Berkus wrote:\n>> >Realistically, you can't do better than about 25MB/s on a\n>> > single-threaded I/O on current Linux machines,\n>>\n>> What on earth gives you that idea? Did you drop a zero?\n>\n>Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A \n>Big-Name Proprietary Database doesn't get much more than that either.\n\nYou seem to be talking about database IO, which isn't what you said.\n\n",
"msg_date": "Mon, 03 Oct 2005 17:53:32 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Michael,\n\n> >Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A\n> >Big-Name Proprietary Database doesn't get much more than that either.\n>\n> You seem to be talking about database IO, which isn't what you said.\n\nRight, well, it was what I meant. I failed to specify, that's all.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Oct 2005 14:59:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Jeffrey,\n\n> I guess database reads are different, but I remain unconvinced that they\n> are *fundamentally* different. After all, a tab-delimited file (my sort\n> workload) is a kind of database.\n\nUnfortunately, they are ... because of CPU overheads. I'm basing what's \n\"reasonable\" for data writes on the rates which other high-end DBs can \nmake. From that, 25mb/s or even 40mb/s for sorts should be achievable \nbut doing 120mb/s would require some kind of breakthrough.\n\n> On a single disk you wouldn't notice, but XFS scales much better when\n> you throw disks at it. I get a 50MB/sec boost from the 24th disk,\n> whereas ext3 stops scaling after 16 disks. For writes both XFS and ext3\n> top out around 8 disks, but in this case XFS tops out at 500MB/sec while\n> ext3 can't break 350MB/sec.\n\nThat would explain it. I seldom get more than 6 disks (and 2 channels) to \ntest with.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 3 Oct 2005 15:03:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 12:43:10AM +0300, Hannu Krosing wrote:\n>Just FYI, I run a count(*) on a 15.6GB table on a lightly loaded db and\n>it run in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in\n>RAID10, reiserfs). A little less than 100MB sec.\n\nAnd none of that 15G table is in the 6G RAM?\n\nMike Stone\n",
"msg_date": "Wed, 05 Oct 2005 05:43:15 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On K, 2005-10-05 at 05:43 -0400, Michael Stone wrote:\n> On Tue, Oct 04, 2005 at 12:43:10AM +0300, Hannu Krosing wrote:\n> >Just FYI, I run a count(*) on a 15.6GB table on a lightly loaded db and\n> >it run in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in\n> >RAID10, reiserfs). A little less than 100MB sec.\n> \n> And none of that 15G table is in the 6G RAM?\n\nI believe so, as there had been another query running for some time,\ndoing a select form a 50GB table.\n\n-- \nHannu Krosing <[email protected]>\n\n",
"msg_date": "Wed, 05 Oct 2005 13:32:52 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
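Much of the disagreement above comes down to what raw, single-threaded sequential I/O a given machine can sustain versus what shows up under a sort or a COPY. A harness along the following lines (a sketch, not something anyone in the thread ran) gives the write-side baseline in PostgreSQL-sized 8 KB blocks; without the final fsync() it mostly measures the page cache, and O_DIRECT, RAID layout and the filesystem all move the number.

/* Minimal sequential-write throughput check: write 8 KB blocks to a
 * file, fsync, and report MB/s.  Only a baseline sketch; it says nothing
 * about WAL, parsing or sort overhead, just the raw I/O path. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "seqwrite.tmp";
    const size_t block = 8192;          /* PostgreSQL page size */
    const long   nblocks = 131072;      /* 1 GB total */
    char *buf = calloc(1, block);

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0 || buf == NULL) {
        perror("setup");
        return 1;
    }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (long i = 0; i < nblocks; i++) {
        if (write(fd, buf, block) != (ssize_t) block) {
            perror("write");
            return 1;
        }
    }
    fsync(fd);                          /* force it out of the page cache */
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    double mb   = (double) nblocks * block / (1024.0 * 1024.0);
    printf("%.0f MB in %.2f s = %.1f MB/s\n", mb, secs, mb / secs);

    close(fd);
    unlink(path);
    free(buf);
    return 0;
}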
[
{
"msg_contents": "I see the following routines that seem to be related to sorting.\n\nIf I were to examine these routines to consider ways to improve it, what\nroutines should I key in on? I am guessing that tuplesort.c is the hub\nof activity for database sorting.\n\nDirectory of U:\\postgresql-snapshot\\src\\backend\\access\\nbtree\n\n08/11/2005 06:22 AM 24,968 nbtsort.c\n 1 File(s) 24,968 bytes\n\n Directory of U:\\postgresql-snapshot\\src\\backend\\executor\n\n03/16/2005 01:38 PM 7,418 nodeSort.c\n 1 File(s) 7,418 bytes\n\n Directory of U:\\postgresql-snapshot\\src\\backend\\utils\\sort\n\n09/23/2005 08:36 AM 67,585 tuplesort.c\n 1 File(s) 67,585 bytes\n\n Directory of U:\\postgresql-snapshot\\src\\bin\\pg_dump\n\n06/29/2005 08:03 PM 31,620 pg_dump_sort.c\n 1 File(s) 31,620 bytes\n\n Directory of U:\\postgresql-snapshot\\src\\port\n\n07/27/2005 09:03 PM 5,077 qsort.c\n 1 File(s) 5,077 bytes\n",
"msg_date": "Fri, 30 Sep 2005 14:31:32 -0700",
"msg_from": "\"Dann Corbit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "25MBps should not be a CPU bound limit for IO, nor should it be\nan OS limit. It should be something ~100x (Single channel RAM)\nto ~200x (dual channel RAM) that.\n\nFor an IO rate of 25MBps to be pegging the CPU at 100%, the CPU\nis suffering some combination of\nA= lot's of cache misses (\"cache thrash\"), \nB= lot's of random rather than sequential IO (like pointer chasing)\nC= lot's of wasteful copying\nD= lot's of wasteful calculations\n\nIn fact, this is crappy enough performance that the whole IO layer\nshould be rethought and perhaps reimplemented from scratch.\nOptimization of the present code is unlikely to yield a 100-200x\nimprovement.\n\nOn the HD side, the first thing that comes to mind is that DBs are\n-NOT- like ordinary filesystems in a few ways:\n1= the minimum HD IO is a record that is likely to be larger than\na HD sector. Therefore, the FS we use should be laid out with\nphysical segments of max(HD sector size, record size)\n\n2= DB files (tables) are usually considerably larger than any other\nkind of files stored. Therefore the FS we should use should be laid\nout using LARGE physical pages. 64KB-256KB at a _minimum_.\n\n3= The whole \"2GB striping\" of files idea needs to be rethought.\nOur tables are significantly different in internal structure from the\nusual FS entity.\n\n4= I'm sure we are paying all sorts of nasty overhead for essentially\nemulating the pg \"filesystem\" inside another filesystem. That means\n~2x as much overhead to access a particular piece of data. \n\nThe simplest solution is for us to implement a new VFS compatible\nfilesystem tuned to exactly our needs: pgfs.\n\nWe may be able to avoid that by some amount of hacking or\nmodifying of the current FSs we use, but I suspect it would be more\nwork for less ROI.\n\nRon \n\n\n-----Original Message-----\nFrom: Josh Berkus <[email protected]>\nSent: Sep 30, 2005 4:41 PM\nTo: Ron Peacetree <[email protected]>\nCc: [email protected], [email protected]\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nRon,\n\n> That 11MBps was your =bulk load= speed. If just loading a table\n> is this slow, then there are issues with basic physical IO, not just\n> IO during sort operations.\n\nOh, yeah. Well, that's separate from sort. See multiple posts on this \nlist from the GreenPlum team, the COPY patch for 8.1, etc. We've been \nconcerned about I/O for a while. \n\nRealistically, you can't do better than about 25MB/s on a single-threaded \nI/O on current Linux machines, because your bottleneck isn't the actual \ndisk I/O. It's CPU. Databases which \"go faster\" than this are all, to \nmy knowledge, using multi-threaded disk I/O.\n\n(and I'd be thrilled to get a consistent 25mb/s on PostgreSQL, but that's \nanother thread ... )\n\n> As I said, the obvious candidates are inefficient physical layout\n> and/or flawed IO code.\n\nYeah, that's what I thought too. But try sorting an 10GB table, and \nyou'll see: disk I/O is practically idle, while CPU averages 90%+. We're \nCPU-bound, because sort is being really inefficient about something. I \njust don't know what yet.\n\nIf we move that CPU-binding to a higher level of performance, then we can \nstart looking at things like async I/O, O_Direct, pre-allocation etc. that \nwill give us incremental improvements. But what we need now is a 5-10x \nimprovement and that's somewhere in the algorithms or the code.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n",
"msg_date": "Fri, 30 Sep 2005 19:40:09 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On 9/30/05, Ron Peacetree <[email protected]> wrote:\n> 4= I'm sure we are paying all sorts of nasty overhead for essentially\n> emulating the pg \"filesystem\" inside another filesystem. That means\n> ~2x as much overhead to access a particular piece of data.\n>\n> The simplest solution is for us to implement a new VFS compatible\n> filesystem tuned to exactly our needs: pgfs.\n>\n> We may be able to avoid that by some amount of hacking or\n> modifying of the current FSs we use, but I suspect it would be more\n> work for less ROI.\n\nOn this point, Reiser4 fs already implements a number of things which\nwould be desirable for PostgreSQL. For example: write()s to reiser4\nfilesystems are atomic, so there is no risk of torn pages (this is\nenabled because reiser4 uses WAFL like logging where data is not\noverwritten but rather relocated). The filesystem is modular and\nextensible so it should be easy to add whatever additional semantics\nare needed. I would imagine that all that would be needed is some\nmore atomicity operations (single writes are already atomic, but I'm\nsure it would be useful to batch many writes into a transaction),some\nlayout and packing controls, and some flush controls. A step further\nwould perhaps integrate multiversioning directly into the FS (the\nwandering logging system provides the write side of multiversioning, a\nlittle read side work would be required.). More importantly: the file\nsystem was intended to be extensible for this sort of application.\n\nIt might make a good 'summer of code' project for someone next year,\n... presumably by then reiser4 will have made it into the mainline\nkernel by then. :)\n",
"msg_date": "Fri, 30 Sep 2005 21:44:26 -0400",
"msg_from": "Gregory Maxwell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "I have perused the tuple sort stuff.\n\nThe good:\nThe documentation of the sort algorithm from Knuth's TAOCP was\nbeautifully done. Everyone who writes an algorithm should credit the\noriginal source like this, and also where it deviates. That was done\nvery nicely.\n\nThe bad:\nWith random access, tape style merging is not necessary. A priority\nqueue based merge will be a lot faster.\n\nThe UGLY:\nPlease, someone, tell me I was hallucinating. Is that code really\nREADING AND WRITING THE WHOLE TUPLE with every sort exchange?! Maybe\nthere is a layer of abstraction that I am somehow missing. I just can't\nimagine that is what it is really doing. If (somehow) it really is\ndoing that, a pointer based sort which forms a permutation based upon\nthe keys, would be a lot better.\n\nThe fundamental algorithm itself could also be improved somewhat.\n\nHere is a {public domain} outline for an introspective quick sort that\nPete Filander and I wrote some time ago and contributed to FastDB. It\nis written as a C++ template, but it will take no effort to make it a\nsimple C routine. It assumes that e_type has comparison operators, so\nin C you would use a compare function instead.\n\n/*\n** Sorting stuff by Dann Corbit and Pete Filandr.\n** ([email protected] and [email protected])\n** Use it however you like.\n*/\n\n//\n// The insertion sort template is used for small partitions.\n//\n\ntemplate < class e_type >\nvoid insertion_sort(e_type * array, size_t nmemb)\n{\n e_type temp,\n *last,\n *first,\n *middle;\n if (nmemb > 1) {\n first = middle = 1 + array;\n last = nmemb - 1 + array;\n while (first != last) {\n ++first;\n if ((*(middle) > *(first))) {\n middle = first;\n }\n }\n if ((*(array) > *(middle))) {\n ((void) ((temp) = *(array), *(array) = *(middle), *(middle)\n= (temp)));\n }\n ++array;\n while (array != last) {\n first = array++;\n if ((*(first) > *(array))) {\n middle = array;\n temp = *middle;\n do {\n *middle-- = *first--;\n } while ((*(first) > *(&temp)));\n *middle = temp;\n }\n }\n }\n}\n\n//\n// The median estimate is used to choose pivots for the quicksort\nalgorithm\n//\n\ntemplate < class e_type >\nvoid median_estimate(e_type * array, size_t n)\n{\n e_type temp;\n long unsigned lu_seed = 123456789LU;\n const size_t k = ((lu_seed) = 69069 * (lu_seed) + 362437) % --n;\n ((void) ((temp) = *(array), *(array) = *(array + k), *(array + k) =\n(temp)));\n if ((*((array + 1)) > *((array)))) {\n (temp) = *(array + 1);\n if ((*((array + n)) > *((array)))) {\n *(array + 1) = *(array);\n if ((*(&(temp)) > *((array + n)))) {\n *(array) = *(array + n);\n *(array + n) = (temp);\n } else {\n *(array) = (temp);\n }\n } else {\n *(array + 1) = *(array + n);\n *(array + n) = (temp);\n }\n } else {\n if ((*((array)) > *((array + n)))) {\n if ((*((array + 1)) > *((array + n)))) {\n (temp) = *(array + 1);\n *(array + 1) = *(array + n);\n *(array + n) = *(array);\n *(array) = (temp);\n } else {\n ((void) (((temp)) = *((array)), *((array)) = *((array +\nn)), *((array + n)) = ((temp))));\n }\n }\n }\n}\n\n\n//\n// This is the heart of the quick sort algorithm used here.\n// If the sort is going quadratic, we switch to heap sort.\n// If the partition is small, we switch to insertion sort.\n//\n\ntemplate < class e_type >\nvoid qloop(e_type * array, size_t nmemb, size_t d)\n{\n e_type temp,\n *first,\n *last;\n while (nmemb > 50) {\n if (sorted(array, nmemb)) {\n return;\n }\n if (!d--) {\n heapsort(array, nmemb);\n return;\n }\n median_estimate(array, nmemb);\n first = 1 + array;\n last 
= nmemb - 1 + array;\n do {\n ++first;\n } while ((*(array) > *(first)));\n do {\n --last;\n } while ((*(last) > *(array)));\n while (last > first) {\n ((void) ((temp) = *(last), *(last) = *(first), *(first) =\n(temp)));\n do {\n ++first;\n } while ((*(array) > *(first)));\n do {\n --last;\n } while ((*(last) > *(array)));\n }\n ((void) ((temp) = *(array), *(array) = *(last), *(last) =\n(temp)));\n qloop(last + 1, nmemb - 1 + array - last, d);\n nmemb = last - array;\n }\n insertion_sort(array, nmemb);\n}\n\n//\n// This heap sort is better than average because it uses Lamont's heap.\n//\n\ntemplate < class e_type >\nvoid heapsort(e_type * array, size_t nmemb)\n{\n size_t i,\n child,\n parent;\n e_type temp;\n if (nmemb > 1) {\n i = --nmemb / 2;\n do {\n {\n (parent) = (i);\n (temp) = (array)[(parent)];\n (child) = (parent) * 2;\n while ((nmemb) > (child)) {\n if ((*((array) + (child) + 1) > *((array) +\n(child)))) {\n ++(child);\n }\n if ((*((array) + (child)) > *(&(temp)))) {\n (array)[(parent)] = (array)[(child)];\n (parent) = (child);\n (child) *= 2;\n } else {\n --(child);\n break;\n }\n }\n if ((nmemb) == (child) && (*((array) + (child)) >\n*(&(temp)))) {\n (array)[(parent)] = (array)[(child)];\n (parent) = (child);\n }\n (array)[(parent)] = (temp);\n }\n } while (i--);\n ((void) ((temp) = *(array), *(array) = *(array + nmemb), *(array\n+ nmemb) = (temp)));\n for (--nmemb; nmemb; --nmemb) {\n {\n (parent) = (0);\n (temp) = (array)[(parent)];\n (child) = (parent) * 2;\n while ((nmemb) > (child)) {\n if ((*((array) + (child) + 1) > *((array) +\n(child)))) {\n ++(child);\n }\n if ((*((array) + (child)) > *(&(temp)))) {\n (array)[(parent)] = (array)[(child)];\n (parent) = (child);\n (child) *= 2;\n } else {\n --(child);\n break;\n }\n }\n if ((nmemb) == (child) && (*((array) + (child)) >\n*(&(temp)))) {\n (array)[(parent)] = (array)[(child)];\n (parent) = (child);\n }\n (array)[(parent)] = (temp);\n }\n ((void) ((temp) = *(array), *(array) = *(array + nmemb),\n*(array + nmemb) = (temp)));\n }\n }\n}\n\n// \n// We use this to check to see if a partition is already sorted.\n// \n\ntemplate < class e_type >\nint sorted(e_type * array, size_t nmemb)\n{\n for (--nmemb; nmemb; --nmemb) {\n if ((*(array) > *(array + 1))) {\n return 0;\n }\n ++array;\n }\n return 1;\n}\n\n// \n// We use this to check to see if a partition is already reverse-sorted.\n// \n\ntemplate < class e_type >\nint rev_sorted(e_type * array, size_t nmemb)\n{\n for (--nmemb; nmemb; --nmemb) {\n if ((*(array + 1) > *(array))) {\n return 0;\n }\n ++array;\n }\n return 1;\n}\n\n// \n// We use this to reverse a reverse-sorted partition.\n// \n\ntemplate < class e_type >\nvoid rev_array(e_type * array, size_t nmemb)\n{\n e_type temp,\n *end;\n for (end = array + nmemb - 1; end > array; ++array) {\n ((void) ((temp) = *(array), *(array) = *(end), *(end) =\n(temp)));\n --end;\n }\n}\n\n// \n// Introspective quick sort algorithm user entry point.\n// You do not need to directly call any other sorting template.\n// This sort will perform very well under all circumstances.\n// \n\ntemplate < class e_type >\nvoid iqsort(e_type * array, size_t nmemb)\n{\n size_t d,\n n;\n if (nmemb > 1 && !sorted(array, nmemb)) {\n if (!rev_sorted(array, nmemb)) {\n n = nmemb / 4;\n d = 2;\n while (n) {\n ++d;\n n /= 2;\n }\n qloop(array, nmemb, 2 * d);\n } else {\n rev_array(array, nmemb);\n }\n }\n}\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-hackers-\n> [email protected]] On Behalf Of Jignesh K. 
Shah\n> Sent: Friday, September 30, 2005 1:38 PM\n> To: Ron Peacetree\n> Cc: Josh Berkus; [email protected]; pgsql-\n> [email protected]\n> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n> \n> I have seen similar performance as Josh and my reasoning is as\nfollows:\n> \n> * WAL is the biggest bottleneck with its default size of 16MB. Many\n> people hate to recompile the code to change its default, and\n> increasing checkpoint segments help but still there is lot of overhead\n> in the rotation of WAL files (Even putting WAL on tmpfs shows that it\nis\n> still slow). Having an option for bigger size is helpful to a small\n> extent percentagewise (and frees up CPU a bit in doing file rotation)\n> \n> * Growing files: Even though this is OS dependent but it does spend\nlot\n> of time doing small 8K block increases to grow files. If we can signal\n> bigger chunks to grow or \"pre-grow\" to expected size of data files\n> that will help a lot in such cases.\n> \n> * COPY command had restriction but that has been fixed to a large\n> extent.(Great job)\n> \n> But ofcourse I have lost touch with programming and can't begin to\n> understand PostgreSQL code to change it myself.\n> \n> Regards,\n> Jignesh\n> \n> \n> \n> \n> Ron Peacetree wrote:\n> \n> >That 11MBps was your =bulk load= speed. If just loading a table\n> >is this slow, then there are issues with basic physical IO, not just\n> >IO during sort operations.\n> >\n> >As I said, the obvious candidates are inefficient physical layout\n> >and/or flawed IO code.\n> >\n> >Until the basic IO issues are addressed, we could replace the\n> >present sorting code with infinitely fast sorting code and we'd\n> >still be scrod performance wise.\n> >\n> >So why does basic IO suck so badly?\n> >\n> >Ron\n> >\n> >\n> >-----Original Message-----\n> >From: Josh Berkus <[email protected]>\n> >Sent: Sep 30, 2005 1:23 PM\n> >To: Ron Peacetree <[email protected]>\n> >Cc: [email protected], [email protected]\n> >Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n> >\n> >Ron,\n> >\n> >\n> >\n> >>Hmmm.\n> >>60GB/5400secs= 11MBps. That's ssllooww. So the first\n> >>problem is evidently our physical layout and/or HD IO layer\n> >>sucks.\n> >>\n> >>\n> >\n> >Actually, it's much worse than that, because the sort is only dealing\n> >with one column. As I said, monitoring the iostat our top speed was\n> >2.2mb/s.\n> >\n> >--Josh\n> >\n> >\n> >---------------------------(end of\nbroadcast)---------------------------\n> >TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that\nyour\n> > message can get through to the mailing list cleanly\n> >\n> >\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n",
"msg_date": "Fri, 30 Sep 2005 16:52:42 -0700",
"msg_from": "\"Dann Corbit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
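To make Dann's point about a "pointer based sort which forms a permutation" concrete, here is a minimal sketch in plain, self-contained C: it sorts a small array of (key, row number) pairs and leaves the wide rows untouched until the final, sequential pass. The struct layout and names are invented for illustration; this is not an excerpt from tuplesort.c.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical "tuple": a wide row of which only one int column is the sort key. */
typedef struct { int key; char payload[128]; } wide_row;

/* What we actually sort: the key plus the index of the row it came from. */
typedef struct { int key; size_t row; } sort_entry;

static int cmp_entry(const void *a, const void *b)
{
    const sort_entry *x = a, *y = b;
    return (x->key > y->key) - (x->key < y->key);
}

int main(void)
{
    wide_row rows[5] = {{42, ""}, {7, ""}, {19, ""}, {7, ""}, {3, ""}};
    size_t n = 5, i;
    sort_entry *perm = malloc(n * sizeof *perm);

    for (i = 0; i < n; i++) {      /* build the (key, row) array: one read per tuple */
        perm[i].key = rows[i].key;
        perm[i].row = i;
    }
    qsort(perm, n, sizeof *perm, cmp_entry);  /* exchanges move a few bytes, not whole rows */

    for (i = 0; i < n; i++)        /* emit rows in sorted order via the permutation */
        printf("%d (row %zu)\n", perm[i].key, perm[i].row);
    free(perm);
    return 0;
}

The exchanges during the sort touch only a key and an index per element, while the wide rows are read once and written once, which is exactly the saving being argued for.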
[
{
"msg_contents": "Judy definitely rates a WOW!!\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-hackers-\n> [email protected]] On Behalf Of Gregory Maxwell\n> Sent: Friday, September 30, 2005 7:07 PM\n> To: Ron Peacetree\n> Cc: Jeffrey W. Baker; [email protected]; pgsql-\n> [email protected]\n> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n> \n> On 9/28/05, Ron Peacetree <[email protected]> wrote:\n> > 2= We use my method to sort two different tables. We now have these\n> > very efficient representations of a specific ordering on these\ntables.\n> A\n> > join operation can now be done using these Btrees rather than the\n> > original data tables that involves less overhead than many current\n> > methods.\n> \n> If we want to make joins very fast we should implement them using RD\n> trees. For the example cases where a join against a very large table\n> will produce a much smaller output, a RD tree will provide pretty much\n> the optimal behavior at a very low memory cost.\n> \n> On the subject of high speed tree code for in-core applications, you\n> should check out http://judy.sourceforge.net/ . The performance\n> (insert, remove, lookup, AND storage) is really quite impressive.\n> Producing cache friendly code is harder than one might expect, and it\n> appears the judy library has already done a lot of the hard work.\n> Though it is *L*GPLed, so perhaps that might scare some here away from\n> it. :) and good luck directly doing joins with a LC-TRIE. ;)\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: explain analyze is your friend\n",
"msg_date": "Fri, 30 Sep 2005 22:32:15 -0700",
"msg_from": "\"Dann Corbit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-hackers-\n> [email protected]] On Behalf Of Tom Lane\n> Sent: Friday, September 30, 2005 11:02 PM\n> To: Jeffrey W. Baker\n> Cc: Luke Lonergan; Josh Berkus; Ron Peacetree; pgsql-\n> [email protected]; [email protected]\n> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n> \n> \"Jeffrey W. Baker\" <[email protected]> writes:\n> > I think the largest speedup will be to dump the multiphase merge and\n> > merge all tapes in one pass, no matter how large M. Currently M is\n> > capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes\nover\n> > the tape. It could be done in a single pass heap merge with\nN*log(M)\n> > comparisons, and, more importantly, far less input and output.\n> \n> I had more or less despaired of this thread yielding any usable ideas\n> :-( but I think you have one here. \n\nI believe I made the exact same suggestion several days ago.\n\n>The reason the current code uses a\n> six-way merge is that Knuth's figure 70 (p. 273 of volume 3 first\n> edition) shows that there's not much incremental gain from using more\n> tapes ... if you are in the regime where number of runs is much\ngreater\n> than number of tape drives. But if you can stay in the regime where\n> only one merge pass is needed, that is obviously a win.\n> \n> I don't believe we can simply legislate that there be only one merge\n> pass. That would mean that, if we end up with N runs after the\ninitial\n> run-forming phase, we need to fit N tuples in memory --- no matter how\n> large N is, or how small work_mem is. But it seems like a good idea\nto\n> try to use an N-way merge where N is as large as work_mem will allow.\n> We'd not have to decide on the value of N until after we've completed\n> the run-forming phase, at which time we've already seen every tuple\n> once, and so we can compute a safe value for N as work_mem divided by\n> largest_tuple_size. (Tape I/O buffers would have to be counted too\n> of course.)\n\nYou only need to hold the sort column(s) in memory, except for the queue\nyou are exhausting at the time. [And of those columns, only the values\nfor the smallest one in a sub-list.] Of course, the more data from each\nlist that you can hold at once, the fewer the disk reads and seeks.\n\nAnother idea (not sure if it is pertinent):\nInstead of having a fixed size for the sort buffers, size it to the\nquery. Given a total pool of size M, give a percentage according to the\ndifficulty of the work to perform. So a query with 3 small columns and\na cardinality of 1000 gets a small percentage and a query with 10 GB of\ndata gets a big percentage of available sort mem.\n \n> It's been a good while since I looked at the sort code, and so I don't\n> recall if there are any fundamental reasons for having a compile-time-\n> constant value of the merge order rather than choosing it at runtime.\n> My guess is that any inefficiencies added by making it variable would\n> be well repaid by the potential savings in I/O.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: explain analyze is your friend\n",
"msg_date": "Fri, 30 Sep 2005 23:32:40 -0700",
"msg_from": "\"Dann Corbit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort? "
}
] |
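A minimal sketch of the single-pass, priority-queue style merge discussed above, written as standalone C with invented names rather than PostgreSQL's tape machinery. Each of the N runs contributes its current head value to a binary heap, so producing each output record costs one O(log N) sift-down, and every record passes through the merge exactly once. A real implementation would, as Tom describes, pick N at run time as roughly work_mem divided by the largest tuple size plus per-run I/O buffer space.

#include <stdio.h>

typedef struct {
    const int *data;   /* the run's sorted values          */
    int        len;    /* number of values in the run      */
    int        pos;    /* next value to consume            */
} run;

/* Heap of run indexes, ordered by each run's current head value. */
static int heap[64], heap_n;
static run runs[64];

static int head(int r) { return runs[r].data[runs[r].pos]; }

static void sift_down(int i)
{
    for (;;) {
        int c = 2 * i + 1;
        if (c >= heap_n) return;
        if (c + 1 < heap_n && head(heap[c + 1]) < head(heap[c])) c++;
        if (head(heap[i]) <= head(heap[c])) return;
        int t = heap[i]; heap[i] = heap[c]; heap[c] = t;
        i = c;
    }
}

int main(void)
{
    static const int r0[] = {1, 4, 9}, r1[] = {2, 3, 10}, r2[] = {0, 5, 6};
    runs[0] = (run){r0, 3, 0}; runs[1] = (run){r1, 3, 0}; runs[2] = (run){r2, 3, 0};
    heap_n = 3;
    for (int i = 0; i < heap_n; i++) heap[i] = i;
    for (int i = heap_n / 2 - 1; i >= 0; i--) sift_down(i);   /* heapify */

    while (heap_n > 0) {
        int r = heap[0];                   /* run with the smallest head value */
        printf("%d ", head(r));
        if (++runs[r].pos == runs[r].len)  /* run exhausted: shrink the heap */
            heap[0] = heap[--heap_n];
        sift_down(0);                      /* one O(log N) fix-up per output value */
    }
    putchar('\n');
    return 0;
}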
[
{
"msg_contents": "*blink* Tapes?! I thought that was a typo...\nIf our sort is code based on sorting tapes, we've made a mistake. HDs\nare not tapes, and Polyphase Merge Sort and it's brethren are not the\nbest choices for HD based sorts.\n\nUseful references to this point:\nKnuth, Vol 3 section 5.4.9, (starts p356 of 2ed) \nTharp, ISBN 0-471-60521-2, starting p352\nFolk, Zoellick, and Riccardi, ISBN 0-201-87401-6, chapter 8 (starts p289)\n\nThe winners of the \"Daytona\" version of Jim Gray's sorting contest, for\ngeneral purpose external sorting algorithms that are of high enough quality\nto be offered commercially, also demonstrate a number of better ways to\nattack external sorting using HDs.\n\nThe big take aways from all this are:\n1= As in Polyphase Merge Sort, optimum External HD Merge Sort\nperformance is obtained by using Replacement Selection and creating\nbuffers of different lengths for later merging. The values are different.\n\n2= Using multiple HDs split into different functions, IOW _not_ simply\nas RAIDs, is a big win.\nA big enough win that we should probably consider having a config option\nto pg that allows the use of HD(s) or RAID set(s) dedicated as temporary\nwork area(s).\n \n3= If the Key is small compared record size, Radix or Distribution\nCounting based algorithms are worth considering.\n\nThe good news is all this means it's easy to demonstrate that we can\nimprove the performance of our sorting functionality.\n\nAssuming we get the abyssmal physical IO performance fixed...\n(because until we do, _nothing_ is going to help us as much)\n\nRon \n\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nSent: Oct 1, 2005 2:01 AM\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort? \n\n\"Jeffrey W. Baker\" <[email protected]> writes:\n> I think the largest speedup will be to dump the multiphase merge and\n> merge all tapes in one pass, no matter how large M. Currently M is\n> capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over\n> the tape. It could be done in a single pass heap merge with N*log(M)\n> comparisons, and, more importantly, far less input and output.\n\nI had more or less despaired of this thread yielding any usable ideas\n:-( but I think you have one here. The reason the current code uses a\nsix-way merge is that Knuth's figure 70 (p. 273 of volume 3 first\nedition) shows that there's not much incremental gain from using more\ntapes ... if you are in the regime where number of runs is much greater\nthan number of tape drives. But if you can stay in the regime where\nonly one merge pass is needed, that is obviously a win.\n\nI don't believe we can simply legislate that there be only one merge\npass. That would mean that, if we end up with N runs after the initial\nrun-forming phase, we need to fit N tuples in memory --- no matter how\nlarge N is, or how small work_mem is. But it seems like a good idea to\ntry to use an N-way merge where N is as large as work_mem will allow.\nWe'd not have to decide on the value of N until after we've completed\nthe run-forming phase, at which time we've already seen every tuple\nonce, and so we can compute a safe value for N as work_mem divided by\nlargest_tuple_size. 
(Tape I/O buffers would have to be counted too\nof course.)\n\nIt's been a good while since I looked at the sort code, and so I don't\nrecall if there are any fundamental reasons for having a compile-time-\nconstant value of the merge order rather than choosing it at runtime.\nMy guess is that any inefficiencies added by making it variable would\nbe well repaid by the potential savings in I/O.\n",
"msg_date": "Sat, 1 Oct 2005 10:22:40 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
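Replacement selection, which Ron's first point relies on, is easy to show in miniature: keep work_mem records in memory, repeatedly emit the smallest record that is not less than the last record written, and set aside incoming records that are too small for the current run. On random input the runs come out roughly twice the size of memory. The toy C program below, with made-up names, a linear scan instead of a heap, and a four-record "work_mem", is only meant to show the mechanism, not Knuth's formulation or anything in PostgreSQL.

#include <stdio.h>

#define MEM 4   /* pretend work_mem only holds 4 records */

/* Write "input" out as sorted runs using replacement selection. */
static void replacement_selection(const int *input, int n)
{
    int cur[MEM], next[MEM];        /* records of the current run / the next run */
    int ncur = 0, nnext = 0, in = 0, run = 1;

    while (in < n && ncur < MEM)    /* prime memory with the first MEM records */
        cur[ncur++] = input[in++];
    printf("run %d:", run);

    while (ncur > 0) {
        /* emit the smallest record still belonging to the current run */
        int best = 0;
        for (int i = 1; i < ncur; i++)
            if (cur[i] < cur[best]) best = i;
        printf(" %d", cur[best]);

        if (in < n) {                       /* refill the freed slot from the input */
            int v = input[in++];
            if (v >= cur[best])             /* can still extend this run */
                cur[best] = v;
            else {                          /* too small: park it for the next run */
                next[nnext++] = v;
                cur[best] = cur[--ncur];
            }
        } else {
            cur[best] = cur[--ncur];
        }

        if (ncur == 0 && nnext > 0) {       /* current run done, start the next one */
            for (int i = 0; i < nnext; i++) cur[i] = next[i];
            ncur = nnext;
            nnext = 0;
            printf("\nrun %d:", ++run);
        }
    }
    putchar('\n');
}

int main(void)
{
    const int data[] = {5, 1, 9, 3, 7, 2, 8, 6, 4, 0, 11, 10};
    replacement_selection(data, sizeof data / sizeof data[0]);
    return 0;
}

On this input the first run contains six records even though only four fit in memory at once, which is the longer-than-memory-runs property being referred to.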
{
"msg_contents": "\n\nRon Peacetree wrote:\n\n>The good news is all this means it's easy to demonstrate that we can\n>improve the performance of our sorting functionality.\n>\n>Assuming we get the abyssmal physical IO performance fixed...\n>(because until we do, _nothing_ is going to help us as much)\n>\n> \n>\n\nI for one would be paying more attention if such a demonstration were \nforthcoming, in the form of a viable patch and some benchmark results.\n\ncheers\n\nandrew\n",
"msg_date": "Sat, 01 Oct 2005 11:19:26 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Sat, Oct 01, 2005 at 10:22:40AM -0400, Ron Peacetree wrote:\n> Assuming we get the abyssmal physical IO performance fixed...\n> (because until we do, _nothing_ is going to help us as much)\n\nI'm still not convinced this is the major problem. For example, in my\ntotally unscientific tests on an oldish machine I have here:\n\nDirect filesystem copy to /dev/null\n21MB/s 10% user 50% system (dual cpu, so the system is using a whole CPU)\n\nCOPY TO /dev/null WITH binary\n13MB/s 55% user 45% system (ergo, CPU bound)\n\nCOPY TO /dev/null\n4.4MB/s 60% user 40% system\n\n\\copy to /dev/null in psql\n6.5MB/s 60% user 40% system\n\nThis machine is a bit strange setup, not sure why fs copy is so slow.\nAs to why \\copy is faster than COPY, I have no idea, but it is\nrepeatable. And actually turning the tuples into a printable format is\nthe most expensive. But it does point out that the whole process is\nprobably CPU bound more than anything else.\n\nSo, I don't think physical I/O is the problem. It's something further\nup the call tree. I wouldn't be surprised at all it it had to do with\nthe creation and destruction of tuples. The cost of comparing tuples\nshould not be underestimated.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Sat, 1 Oct 2005 18:19:41 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:\n>COPY TO /dev/null WITH binary\n>13MB/s 55% user 45% system (ergo, CPU bound)\n[snip]\n>the most expensive. But it does point out that the whole process is\n>probably CPU bound more than anything else.\n\nNote that 45% of that cpu usage is system--which is where IO overhead\nwould end up being counted. Until you profile where you system time is\ngoing it's premature to say it isn't an IO problem.\n\nMike Stone\n\n",
"msg_date": "Wed, 05 Oct 2005 05:41:25 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Wed, Oct 05, 2005 at 05:41:25AM -0400, Michael Stone wrote:\n> On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:\n> >COPY TO /dev/null WITH binary\n> >13MB/s 55% user 45% system (ergo, CPU bound)\n> [snip]\n> >the most expensive. But it does point out that the whole process is\n> >probably CPU bound more than anything else.\n> \n> Note that 45% of that cpu usage is system--which is where IO overhead\n> would end up being counted. Until you profile where you system time is\n> going it's premature to say it isn't an IO problem.\n\nIt's a dual CPU system, so 50% is the limit for a single process. Since\nsystem usage < user, PostgreSQL is the limiter. Sure, the system is\ntaking a lot of time, but PostgreSQL is still the limiting factor.\n\nAnyway, the later measurements using gprof exclude system time\naltogether and it still shows CPU being the limiting factor. Fact is,\nextracting tuples from pages is expensive.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Wed, 5 Oct 2005 12:49:17 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "As I posted earlier, I'm looking for code to base a prototype on now.\nI'll test it outside pg to make sure it is bug free and performs as\npromised before I hand it off to the core pg developers.\n\nSomeone else is going to have to merge it into the pg code base\nsince I don't know the code intimately enough to make changes this\ndeep in the core functionality, nor is there enough time for me to\ndo so if we are going to be timely enough get this into 8.2\n(and no, I can't devote 24x7 to doing pg development unless\nsomeone is going to replace my current ways of paying my bills so\nthat I can.)\n\nRon\n \n\n-----Original Message-----\nFrom: Andrew Dunstan <[email protected]>\nSent: Oct 1, 2005 11:19 AM\nTo: Ron Peacetree <[email protected]>\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\n\n\nRon Peacetree wrote:\n\n>The good news is all this means it's easy to demonstrate that we can\n>improve the performance of our sorting functionality.\n>\n>Assuming we get the abyssmal physical IO performance fixed...\n>(because until we do, _nothing_ is going to help us as much)\n>\n> \n>\n\nI for one would be paying more attention if such a demonstration were \nforthcoming, in the form of a viable patch and some benchmark results.\n\ncheers\n\nandrew\n\n",
"msg_date": "Sat, 1 Oct 2005 12:38:55 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
[
{
"msg_contents": "You have not said anything about what HW, OS version, and pg version\nused here, but even at that can't you see that something Smells Wrong?\n\nThe most common CPUs currently shipping have clock rates of ~2-3GHz\nand have 8B-16B internal pathways. SPARCs and other like CPUs are\nclocked slower but have 16B-32B internal pathways. In short, these\nCPU's have an internal bandwidth of 16+ GBps.\n\nThe most common currently shipping mainboards have 6.4GBps RAM\nsubsystems. ITRW, their peak is ~80% of that, or ~5.1GBps.\n\nIn contrast, the absolute peak bandwidth of a 133MHx 8B PCI-X bus is\n1GBps, and ITRW it peaks at ~800-850MBps. Should anyone ever build\na RAID system that can saturate a PCI-Ex16 bus, that system will be\nmaxing ITRW at ~3.2GBps.\n\nCPUs should NEVER be 100% utilized during copy IO. They should be\nidling impatiently waiting for the next piece of data to finish being\nprocessed even when the RAM IO subsystem is pegged; and they\ndefinitely should be IO starved rather than CPU bound when doing\nHD IO.\n\nThose IO rates are also alarming in all but possibly the first case. A\nsingle ~50MBps HD doing 21MBps isn't bad, but for even a single\n~80MBps HD it starts to be of concern. If any these IO rates came\nfrom any reasonable 300+MBps RAID array, then they are BAD.\n\nWhat your simple experiment really does is prove We Have A\nProblem (tm) with our IO code at either or both of the OS or the pg\nlevel(s).\n\nRon\n\n \n-----Original Message-----\nFrom: Martijn van Oosterhout <[email protected]>\nSent: Oct 1, 2005 12:19 PM\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nOn Sat, Oct 01, 2005 at 10:22:40AM -0400, Ron Peacetree wrote:\n> Assuming we get the abyssmal physical IO performance fixed...\n> (because until we do, _nothing_ is going to help us as much)\n\nI'm still not convinced this is the major problem. For example, in my\ntotally unscientific tests on an oldish machine I have here:\n\nDirect filesystem copy to /dev/null\n21MB/s 10% user 50% system (dual cpu, so the system is using a whole CPU)\n\nCOPY TO /dev/null WITH binary\n13MB/s 55% user 45% system (ergo, CPU bound)\n\nCOPY TO /dev/null\n4.4MB/s 60% user 40% system\n\n\\copy to /dev/null in psql\n6.5MB/s 60% user 40% system\n\nThis machine is a bit strange setup, not sure why fs copy is so slow.\nAs to why \\copy is faster than COPY, I have no idea, but it is\nrepeatable. And actually turning the tuples into a printable format is\nthe most expensive. But it does point out that the whole process is\nprobably CPU bound more than anything else.\n\nSo, I don't think physical I/O is the problem. It's something further\nup the call tree. I wouldn't be surprised at all it it had to do with\nthe creation and destruction of tuples. The cost of comparing tuples\nshould not be underestimated.\n",
"msg_date": "Sat, 1 Oct 2005 13:42:32 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "[removed -performance, not subscribed]\n\nOn Sat, Oct 01, 2005 at 01:42:32PM -0400, Ron Peacetree wrote:\n> You have not said anything about what HW, OS version, and pg version\n> used here, but even at that can't you see that something Smells Wrong?\n\nSomewhat old machine running 7.3 on Linux 2.4. Not exactly speed\ndaemons but it's still true that the whole process would be CPU bound\n*even* if the O/S could idle while it's waiting. PostgreSQL used a\n*whole CPU* which is its limit. My point is that trying to reduce I/O\nby increasing CPU usage is not going to be benficial, we need CPU usage\ndown also.\n\nAnyway, to bring some real info I just profiled PostgreSQL 8.1beta\ndoing an index create on a 2960296 row table (3 columns, table size\n317MB).\n\nThe number 1 bottleneck with 41% of user time is comparetup_index. It\nwas called 95,369,361 times (about 2*ln(N)*N). It used 3 tapes. Another\n15% of time went to tuplesort_heap_siftup.\n\nThe thing is, I can't see anything in comparetup_index() that could\ntake much time. The actual comparisons are accounted elsewhere\n(inlineApplySortFunction) which amounted to <10% of total time. Since\nnocache_index_getattr doesn't feature I can't imagine index_getattr\nbeing a big bottleneck. Any ideas what's going on here?\n\nOther interesting features:\n- ~4 memory allocations per tuple, nearly all of which were explicitly\nfreed\n- Things I though would be expensive, like: heapgettup and\nmyFunctionCall2 didn't really count for much.\n\nHave a nice weekend,\n\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 43.63 277.81 277.81 95370055 0.00 0.00 comparetup_index\n 16.24 381.24 103.43 5920592 0.00 0.00 tuplesort_heap_siftup\n 3.76 405.17 23.93 95370055 0.00 0.00 inlineApplySortFunction\n 3.18 425.42 20.26 95370056 0.00 0.00 btint4cmp\n 2.82 443.37 17.95 11856219 0.00 0.00 AllocSetAlloc\n 2.52 459.44 16.07 95370055 0.00 0.00 myFunctionCall2\n 1.71 470.35 10.91 2960305 0.00 0.00 heapgettup\n 1.26 478.38 8.03 11841204 0.00 0.00 GetMemoryChunkSpace\n 1.14 485.67 7.29 5920592 0.00 0.00 tuplesort_heap_insert\n 1.11 492.71 7.04 2960310 0.00 0.00 index_form_tuple\n 1.09 499.67 6.96 11855105 0.00 0.00 AllocSetFree\n 0.97 505.83 6.17 23711355 0.00 0.00 AllocSetFreeIndex\n 0.84 511.19 5.36 5920596 0.00 0.00 LogicalTapeWrite\n 0.84 516.51 5.33 2960314 0.00 0.00 slot_deform_tuple\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Sat, 1 Oct 2005 23:56:07 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> Anyway, to bring some real info I just profiled PostgreSQL 8.1beta\n> doing an index create on a 2960296 row table (3 columns, table size\n> 317MB).\n\n3 columns in the index you mean? What were the column datatypes?\nAny null values?\n\n> The number 1 bottleneck with 41% of user time is comparetup_index.\n> ...\n> The thing is, I can't see anything in comparetup_index() that could\n> take much time.\n\nThe index_getattr and heap_getattr macros can be annoyingly expensive.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 01 Oct 2005 23:26:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort? "
},
{
"msg_contents": "On Sat, Oct 01, 2005 at 11:26:07PM -0400, Tom Lane wrote:\n> Martijn van Oosterhout <[email protected]> writes:\n> > Anyway, to bring some real info I just profiled PostgreSQL 8.1beta\n> > doing an index create on a 2960296 row table (3 columns, table size\n> > 317MB).\n> \n> 3 columns in the index you mean? What were the column datatypes?\n> Any null values?\n\nNope, three columns in the table, one column in the index, no nulls.\nThe indexed column was integer. I did it once with around 6500 values\nrepeated over and over, lots of duplicate kays. And once on a serial\ncolumn but it made no descernable difference either way. Although the\ncomparison function was called less (only 76 million times), presumably\nbecause it was mostly sorted already.\n\n> > The number 1 bottleneck with 41% of user time is comparetup_index.\n> > ...\n> > The thing is, I can't see anything in comparetup_index() that could\n> > take much time.\n> \n> The index_getattr and heap_getattr macros can be annoyingly expensive.\n\nAnd yet they are optimised for the common case. nocache_index_getattr\nwas only called 7 times, which is about what you expect. I'm getting\nannotated output now, to determine which line takes the time...\nActually, my previous profile overstated stuff a bit. Profiling turned\noff optimisation so I put it back and you get better results but the\norder doesn't change much. By line results are below.\n\nThe top two are the index_getattr calls in comparetup_index. Third and\nfourth are the HEAPCOMPARES in tuplesort_heap_siftup. Then comes the\ninlineApplySortFunction call (which isn't being inlined, despite\nsuggesting it should be, -Winline warns about this).\n\nLooks to me that there are no real gains to be made in this function.\nWhat is needed is an algorithmic change to call this function less\noften...\n\nHave a nice weekend,\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 9.40 22.56 22.56 comparetup_index (tuplesort.c:2042 @ 8251060)\n 5.07 34.73 12.17 comparetup_index (tuplesort.c:2043 @ 82510c0)\n 4.73 46.09 11.36 tuplesort_heap_siftup (tuplesort.c:1648 @ 825074d)\n 3.48 54.45 8.36 tuplesort_heap_siftup (tuplesort.c:1661 @ 82507a9)\n 2.80 61.18 6.73 comparetup_index (tuplesort.c:2102 @ 8251201)\n 2.68 67.62 6.44 comparetup_index (tuplesort.c:2048 @ 8251120)\n 2.16 72.82 5.20 tuplesort_heap_siftup (tuplesort.c:1652 @ 825076d)\n 1.88 77.34 4.52 76025782 0.00 0.00 comparetup_index (tuplesort.c:2016 @ 8251010)\n 1.82 81.70 4.36 76025782 0.00 0.00 inlineApplySortFunction (tuplesort.c:1833 @ 8251800)\n 1.73 85.85 4.15 readtup_heap (tuplesort.c:2000 @ 8250fd8)\n 1.67 89.86 4.01 AllocSetAlloc (aset.c:568 @ 824bec0)\n 1.61 93.72 3.86 comparetup_index (tuplesort.c:2025 @ 825102f)\n 1.47 97.25 3.53 76025785 0.00 0.00 btint4cmp (nbtcompare.c:74 @ 80924a0)\n 1.11 99.92 2.67 readtup_datum (tuplesort.c:2224 @ 82517c4)\n 1.10 102.55 2.64 comparetup_index (tuplesort.c:2103 @ 82511e7)\n\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 28.34 68.01 68.01 76025782 0.00 0.00 comparetup_index\n 13.56 100.54 32.53 7148934 0.00 0.00 tuplesort_heap_siftup\n 8.66 121.33 20.79 76025782 0.00 0.00 inlineApplySortFunction\n 4.43 131.96 10.63 13084567 0.00 0.00 AllocSetAlloc\n 3.73 140.90 8.94 76025785 0.00 0.00 btint4cmp\n 2.15 146.07 5.17 6095625 0.00 0.00 LWLockAcquire\n 2.02 150.92 4.85 2960305 0.00 0.00 heapgettup\n 1.98 155.66 4.74 7148934 0.00 0.00 tuplesort_heap_insert\n 1.78 159.94 4.28 2960312 0.00 0.00 
slot_deform_tuple\n 1.73 164.09 4.15 readtup_heap\n 1.67 168.09 4.00 6095642 0.00 0.00 LWLockRelease\n 1.53 171.76 3.68 2960308 0.00 0.00 index_form_tuple\n 1.44 175.21 3.45 13083442 0.00 0.00 AllocSetFree\n 1.28 178.28 3.07 8377285 0.00 0.00 LogicalTapeWrite\n 1.25 181.29 3.01 8377285 0.00 0.00 LogicalTapeRead\n 1.11 183.96 2.67 readtup_datum\n 1.06 186.51 2.55 1 2.55 123.54 IndexBuildHeapScan\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Sun, 2 Oct 2005 14:32:45 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
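One algorithmic way to hit comparetup_index less hard is to keep a copy of the leading key right next to the element being sorted, so the common case never follows the tuple pointer at all and the full comparison only runs on ties. The sketch below is a standalone C illustration of that idea with invented names; it is not what tuplesort.c did at the time, and strcmp merely stands in for comparing the remaining columns.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A sort element that carries the leading key next to the tuple pointer. */
typedef struct {
    int         key1;     /* copy of the leading sort key        */
    const char *tuple;    /* the full tuple, only needed on ties */
} sort_elem;

static long full_compares;    /* how often we had to look at the tuple */

static int cmp_elem(const void *pa, const void *pb)
{
    const sort_elem *a = pa, *b = pb;

    if (a->key1 != b->key1)               /* cheap path: cached keys differ */
        return (a->key1 > b->key1) - (a->key1 < b->key1);

    full_compares++;                      /* tie on the first key: fall back */
    return strcmp(a->tuple, b->tuple);    /* stand-in for the remaining keys */
}

int main(void)
{
    const char *tuples[] = {"3|carol", "1|alice", "3|bob", "2|dave", "1|zoe"};
    size_t n = 5, i;
    sort_elem *elems = malloc(n * sizeof *elems);

    for (i = 0; i < n; i++) {             /* extract the leading key once per tuple */
        elems[i].key1  = atoi(tuples[i]);
        elems[i].tuple = tuples[i];
    }
    qsort(elems, n, sizeof *elems, cmp_elem);

    for (i = 0; i < n; i++)
        printf("%s\n", elems[i].tuple);
    printf("full tuple comparisons: %ld\n", full_compares);
    free(elems);
    return 0;
}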
{
"msg_contents": "Ok, I tried two optimisations:\n\n1. By creating a special version of comparetup_index for single key\ninteger indexes. Create an index_get_attr with byval and len args. By\nusing fetch_att and specifying the values at compile time, gcc\noptimises the whole call to about 12 instructions of assembly rather\nthan the usual mess.\n\n2. By specifying: -Winline -finline-limit-1500 (only on tuplesort.c).\nThis causes inlineApplySortFunction() to be inlined, like the code\nobviously expects it to be.\n\ndefault build (baseline) 235 seconds\n-finline only 217 seconds (7% better)\ncomparetup_index_fastbyval4 only 221 seconds (6% better)\ncomparetup_index_fastbyval4 and -finline 203 seconds (13.5% better)\n\nThis is indexing the integer sequence column on a 2.7 million row\ntable. The times are as given by gprof and so exclude system call time.\n\nBasically, I recommend adding \"-Winline -finline-limit-1500\" to the\ndefault build while we discuss other options.\n\ncomparetup_index_fastbyval4 patch attached per example.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Sun, 2 Oct 2005 21:38:43 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
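The first optimisation above can be illustrated outside PostgreSQL: when the attribute width and by-value flag are fixed at compile time, the generic any-width fetch collapses into a single load that the compiler can inline into the comparator. The standalone C sketch below uses invented function names and memcpy loads to show the shape of the transformation; it is not the comparetup_index_fastbyval4 patch itself.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Generic fetch: width decided at run time, so the branches stay. */
static long fetch_generic(const char *p, int attlen)
{
    if (attlen == 4) { int v;   memcpy(&v, p, 4); return v; }
    if (attlen == 2) { short v; memcpy(&v, p, 2); return v; }
    return *p;                                    /* 1-byte attribute */
}

/* Specialised fetch: width fixed at compile time, trivially inlinable. */
static inline int fetch_int4(const char *p)
{
    int v;
    memcpy(&v, p, 4);
    return v;
}

int main(int argc, char **argv)
{
    char buf[4];
    int x = 42;
    int attlen = (argc > 1) ? atoi(argv[1]) : 4;  /* runtime value: branches survive */

    memcpy(buf, &x, sizeof x);
    printf("generic:     %ld\n", fetch_generic(buf, attlen));
    printf("specialised: %d\n", fetch_int4(buf));
    return 0;
}

The point is only that fixing the width at compile time removes the per-comparison branching and the call overhead; the measured 6% and 13.5% figures above are from the actual patch, not from this toy.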
{
"msg_contents": "On Sun, 2005-10-02 at 21:38 +0200, Martijn van Oosterhout wrote:\n> Ok, I tried two optimisations:\n> \n\n> 2. By specifying: -Winline -finline-limit-1500 (only on tuplesort.c).\n> This causes inlineApplySortFunction() to be inlined, like the code\n> obviously expects it to be.\n> \n> default build (baseline) 235 seconds\n> -finline only 217 seconds (7% better)\n> comparetup_index_fastbyval4 only 221 seconds (6% better)\n> comparetup_index_fastbyval4 and -finline 203 seconds (13.5% better)\n> \n> This is indexing the integer sequence column on a 2.7 million row\n> table. The times are as given by gprof and so exclude system call time.\n> \n> Basically, I recommend adding \"-Winline -finline-limit-1500\" to the\n> default build while we discuss other options.\n\nI add -Winline but get no warnings. Why would I use -finline-limit-1500?\n\nI'm interested, but uncertain as to what difference this makes. Surely\nusing -O3 works fine?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 03 Oct 2005 22:51:32 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Mon, Oct 03, 2005 at 10:51:32PM +0100, Simon Riggs wrote:\n> > Basically, I recommend adding \"-Winline -finline-limit-1500\" to the\n> > default build while we discuss other options.\n> \n> I add -Winline but get no warnings. Why would I use -finline-limit-1500?\n> \n> I'm interested, but uncertain as to what difference this makes. Surely\n> using -O3 works fine?\n\nDifferent versions of gcc have different ideas of when a function can\nbe inlined. From my reading of the documentation, this decision is\nindependant of optimisation level. Maybe your gcc version has a limit\nhigher than 1500 by default.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Tue, 4 Oct 2005 12:04:56 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Tue, 2005-10-04 at 12:04 +0200, Martijn van Oosterhout wrote:\n> On Mon, Oct 03, 2005 at 10:51:32PM +0100, Simon Riggs wrote:\n> > > Basically, I recommend adding \"-Winline -finline-limit-1500\" to the\n> > > default build while we discuss other options.\n> > \n> > I add -Winline but get no warnings. Why would I use -finline-limit-1500?\n> > \n> > I'm interested, but uncertain as to what difference this makes. Surely\n> > using -O3 works fine?\n\nHow did you determine the 1500 figure? Can you give some more info to\nsurround that recommendation to allow everybody to evaluate it?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 04 Oct 2005 12:24:54 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 12:24:54PM +0100, Simon Riggs wrote:\n> How did you determine the 1500 figure? Can you give some more info to\n> surround that recommendation to allow everybody to evaluate it?\n\nkleptog@vali:~/dl/cvs/pgsql-local/src/backend/utils/sort$ gcc -finline-limit-1000 -Winline -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels -fno-strict-aliasing -g -I../../../../src/include -D_GNU_SOURCE -c -o tuplesort.o tuplesort.c\ntuplesort.c: In function 'applySortFunction':\ntuplesort.c:1833: warning: inlining failed in call to 'inlineApplySortFunction'\ntuplesort.c:1906: warning: called from here\ntuplesort.c: In function 'comparetup_heap':\ntuplesort.c:1833: warning: inlining failed in call to 'inlineApplySortFunction'\ntuplesort.c:1937: warning: called from here\ntuplesort.c: In function 'comparetup_index':\ntuplesort.c:1833: warning: inlining failed in call to 'inlineApplySortFunction'\ntuplesort.c:2048: warning: called from here\ntuplesort.c: In function 'comparetup_datum':\ntuplesort.c:1833: warning: inlining failed in call to 'inlineApplySortFunction'\ntuplesort.c:2167: warning: called from here\nkleptog@vali:~/dl/cvs/pgsql-local/src/backend/utils/sort$ gcc -finline-limit-1500 -Winline -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels -fno-strict-aliasing -g -I../../../../src/include -D_GNU_SOURCE -c -o tuplesort.o tuplesort.c\n<no warnings>\n\nA quick binary search puts the cutoff between 1200 and 1300. Given\nversion variation I picked a nice round number, 1500.\n\nUgh, that's for -O2, for -O3 and above it needs to be 4100 to work.\nMaybe we should go for 5000 or so.\n\nI'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Tue, 4 Oct 2005 14:24:46 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> A quick binary search puts the cutoff between 1200 and 1300. Given\n> version variation I picked a nice round number, 1500.\n\n> Ugh, that's for -O2, for -O3 and above it needs to be 4100 to work.\n> Maybe we should go for 5000 or so.\n\n> I'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n\nI don't know what the units of this number are, but it's apparently far\ntoo gcc-version-dependent to consider putting into our build scripts.\nUsing gcc version 4.0.1 20050727 (current Fedora Core 4 compiler) on\ni386, and compiling tuplesort.c as you did, I find:\n\t-O2: warning goes away between 800 and 900\n\t-O3: warning is always there (tried values up to 10000000)\n(the latter behavior may indicate a bug, not sure).\n\nWhat's even more interesting is that the warning does not appear in\neither case if I omit -finline-limit --- so the default value is plenty.\n\nAt least on this particular compiler, the proposed switch would be\ncounterproductive.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 04 Oct 2005 10:06:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort? "
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 10:06:24AM -0400, Tom Lane wrote:\n> Martijn van Oosterhout <[email protected]> writes:\n> > I'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n> \n> I don't know what the units of this number are, but it's apparently far\n> too gcc-version-dependent to consider putting into our build scripts.\n> Using gcc version 4.0.1 20050727 (current Fedora Core 4 compiler) on\n> i386, and compiling tuplesort.c as you did, I find:\n> \t-O2: warning goes away between 800 and 900\n> \t-O3: warning is always there (tried values up to 10000000)\n> (the latter behavior may indicate a bug, not sure).\n\nFacsinating. The fact that the warning goes away if you don't specify\n-finline-limit seems to indicate they've gotten smarter. Or a bug.\nWe'd have to check the asm code to see if it's actually inlined or\nnot.\n\nTwo options:\n1. Add -Winline so we can at least be aware of when it's (not) happening.\n2. If we can't get gcc to reliably inline, maybe we need to consider\nother options?\n\nIn particular, move the isNull test statements out since they are ones\nthe optimiser can use to best effect.\n\nAdd if we put in -Winline, it would be visible to users while\ncompiling so they can tweak their own build options (if they care).\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Tue, 4 Oct 2005 16:30:42 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Tue, 2005-10-04 at 16:30 +0200, Martijn van Oosterhout wrote:\n> On Tue, Oct 04, 2005 at 10:06:24AM -0400, Tom Lane wrote:\n> > Martijn van Oosterhout <[email protected]> writes:\n> > > I'm using: gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n> > \n> > I don't know what the units of this number are, but it's apparently far\n> > too gcc-version-dependent to consider putting into our build scripts.\n> > Using gcc version 4.0.1 20050727 (current Fedora Core 4 compiler) on\n> > i386, and compiling tuplesort.c as you did, I find:\n> > \t-O2: warning goes away between 800 and 900\n> > \t-O3: warning is always there (tried values up to 10000000)\n> > (the latter behavior may indicate a bug, not sure).\n>\n> Facsinating. The fact that the warning goes away if you don't specify\n> -finline-limit seems to indicate they've gotten smarter. Or a bug.\n> We'd have to check the asm code to see if it's actually inlined or\n> not.\n\nI've been using gcc 3.4 and saw no warning when using either \"-Winline\"\nor \"-O3 -Winline\".\n\nMartijn, at the moment it sounds like this is a feature that we no\nlonger need to support - even if we should have done for previous\nreleases.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 04 Oct 2005 15:56:53 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> 1. Add -Winline so we can at least be aware of when it's (not) happening.\n\nYeah, I agree with that part, just not with adding a fixed -finline-limit\nvalue.\n\nWhile on the subject of gcc warnings ... if I touch that code, I want to\nremove -Wold-style-definition from the default flags, too. It's causing\nmuch more clutter than it's worth, because all the flex files generate\nseveral such warnings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 04 Oct 2005 11:01:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort? "
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 03:56:53PM +0100, Simon Riggs wrote:\n> I've been using gcc 3.4 and saw no warning when using either \"-Winline\"\n> or \"-O3 -Winline\".\n\nOk, I've just installed 3.4 and verified that. I examined the asm code\nand gcc is inlining it. I concede, at this point just throw in -Winline\nand monitor the situation.\n\nAs an aside, the *_getattr calls end up a bit suboptimal though. It's\nproducing code like:\n\n cmp attlen, 4\n je $elsewhere1\n cmp attlen, 2\n je $elsewhere2\n ld byte\nhere:\n\n--- much later ---\nelsewhere1:\n ld integer\n jmp $here\nelsewhere2:\n ld short\n jmp $here\n\nNo idea whether we want to go down the path of hinting to gcc which\nsize will be the most common.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Tue, 4 Oct 2005 17:23:41 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 05:23:41PM +0200, Martijn van Oosterhout wrote:\n> On Tue, Oct 04, 2005 at 03:56:53PM +0100, Simon Riggs wrote:\n> > I've been using gcc 3.4 and saw no warning when using either \"-Winline\"\n> > or \"-O3 -Winline\".\n> Ok, I've just installed 3.4 and verified that. I examined the asm code\n> and gcc is inlining it. I concede, at this point just throw in -Winline\n> and monitor the situation.\n> As an aside, the *_getattr calls end up a bit suboptimal though. It's\n> producing code like:\n> cmp attlen, 4\n> je $elsewhere1\n> cmp attlen, 2\n> je $elsewhere2\n> ld byte\n> here:\n> --- much later ---\n> elsewhere1:\n> ld integer\n> jmp $here\n> elsewhere2:\n> ld short\n> jmp $here\n> No idea whether we want to go down the path of hinting to gcc which\n> size will be the most common.\n\nIf it will very frequently be one value, and not the other values, I\ndon't see why we wouldn't want to hint? #ifdef it to a expand to just\nthe expression if not using GCC. It's important that we know that the\nvalue would be almost always a certain value, however, as GCC will try\nto make the path for the expected value as fast as possible, at the\ncost of an unexpected value being slower.\n\n __builtin_expect (long EXP, long C)\n\n You may use `__builtin_expect' to provide the compiler with branch\n prediction information. In general, you should prefer to use\n actual profile feedback for this (`-fprofile-arcs'), as\n programmers are notoriously bad at predicting how their programs\n actually perform. However, there are applications in which this\n data is hard to collect.\n\n The return value is the value of EXP, which should be an integral\n expression. The value of C must be a compile-time constant. The\n semantics of the built-in are that it is expected that EXP == C.\n For example:\n\n if (__builtin_expect (x, 0))\n foo ();\n\n would indicate that we do not expect to call `foo', since we\n expect `x' to be zero. Since you are limited to integral\n expressions for EXP, you should use constructions such as\n\n if (__builtin_expect (ptr != NULL, 1))\n error ();\n\n when testing pointer or floating-point values.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 4 Oct 2005 13:02:53 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
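If branch hints were ever added around the *_getattr width tests, the conventional way to wrap __builtin_expect is a pair of macros so the hint disappears on non-GCC compilers. This is a generic sketch with made-up names, not a proposed patch; whether the common width really is 4 bytes would have to be measured first, as Mark cautions.

#include <stdio.h>

#if defined(__GNUC__)
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#else
#define likely(x)   (x)
#define unlikely(x) (x)
#endif

/* Toy attribute fetch: tell the compiler the 4-byte case dominates so it lays
 * out that path as the straight-line, fall-through code. */
static long fetch_att_hinted(const unsigned char *p, int attlen)
{
    if (likely(attlen == 4))
        return (long)(p[0] | p[1] << 8 | p[2] << 16 | (long)p[3] << 24);
    if (unlikely(attlen == 2))
        return (long)(p[0] | p[1] << 8);
    return p[0];
}

int main(void)
{
    unsigned char four[4] = {0x2a, 0, 0, 0};    /* bytes of 42, low byte first */
    unsigned char two[2]  = {0x07, 0};          /* bytes of 7, low byte first  */

    printf("%ld %ld\n", fetch_att_hinted(four, 4), fetch_att_hinted(two, 2));
    return 0;
}

The !!(x) keeps the macros safe for pointer and floating-point expressions, per the GCC manual text quoted above.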
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Jim C. Nasby\n> Sent: Friday, September 30, 2005 4:49 PM\n> Subject: Re: [PERFORM] [HACKERS] Query in SQL statement\n \n> I suggest ditching the CamelCase and going with underline_seperators.\n> I'd also not use the bareword id, instead using bad_user_id. And I'd\n> name the table bad_user. But that's just me. :)\n\nI converted a db from MS SQL, where tables and fields were CamelCase, and \njust lowercased the ddl to create the tables.\n\nSo table and fields names were all created in lowercase, but I didn't have to change\nany of the application code: the SELECT statements worked fine with mixed case.\n\n-- sample DDL\nCREATE TABLE testtable\n(\n fieldone int4\n) \ninsert into TestTable (fieldone) values (11);\n\n-- These statements will both work:\n\n-- lowercase\nSELECT fieldone FROM testtable;\n\n-- CamelCase\nSELECT FieldOne FROM TestTable;\n\n-Roger\n\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n",
"msg_date": "Sat, 1 Oct 2005 12:51:08 -0700",
"msg_from": "\"Roger Hand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Query in SQL statement"
},
{
"msg_contents": "\"Roger Hand\" <[email protected]> writes:\n>> I suggest ditching the CamelCase and going with underline_seperators.\n>> I'd also not use the bareword id, instead using bad_user_id. And I'd\n>> name the table bad_user. But that's just me. :)\n\n> I converted a db from MS SQL, where tables and fields were CamelCase, and \n> just lowercased the ddl to create the tables.\n> So table and fields names were all created in lowercase, but I didn't have to change\n> any of the application code: the SELECT statements worked fine with mixed case.\n\nYeah, the only time this stuff really bites you is if the application\nsometimes double-quotes mixed-case names and sometimes doesn't. If it's\nconsistent then you don't have an issue ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 01 Oct 2005 17:15:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Query in SQL statement "
},
{
"msg_contents": "On Sat, Oct 01, 2005 at 12:51:08PM -0700, Roger Hand wrote:\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Jim C. Nasby\n> > Sent: Friday, September 30, 2005 4:49 PM\n> > Subject: Re: [PERFORM] [HACKERS] Query in SQL statement\n> \n> > I suggest ditching the CamelCase and going with underline_seperators.\n> > I'd also not use the bareword id, instead using bad_user_id. And I'd\n> > name the table bad_user. But that's just me. :)\n> \n> I converted a db from MS SQL, where tables and fields were CamelCase, and \n> just lowercased the ddl to create the tables.\n> \n> So table and fields names were all created in lowercase, but I didn't have to change\n> any of the application code: the SELECT statements worked fine with mixed case.\n> \n> -- sample DDL\n> CREATE TABLE testtable\n> (\n> fieldone int4\n> ) \n> insert into TestTable (fieldone) values (11);\n\nThat will usually work (see Tom's reply), but fieldone is a heck of a\nlot harder to read than field_one. But like I said, this is the coding\nconventions I've found work well; YMMV.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 15:43:43 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Query in SQL statement"
}
] |
[
{
"msg_contents": "FreeBSD or Linux , which system has better performance for PostgreSQL \n\n\n",
"msg_date": "Sat, 1 Oct 2005 23:08:31 +0300",
"msg_from": "\"AL��� ���EL���K\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which one FreeBSD or Linux"
},
{
"msg_contents": "ALÝ ÇELÝK wrote:\n> FreeBSD or Linux , which system has better performance for PostgreSQL \n\nDepends on the underlying hardware and your experience. I'd recommend \ngoing with whichever you are more familiar, so long as it will support \nthe hardware you need to buy.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n",
"msg_date": "Wed, 05 Oct 2005 09:54:58 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which one FreeBSD or Linux"
}
] |
[
{
"msg_contents": "I thought this might be interesting, not the least due to the extremely low\nprice ($150 + the price of regular DIMMs):\n\n http://www.tomshardware.com/storage/20050907/index.html\n\nAnybody know a good reason why you can't put a WAL on this, and enjoy a hefty\nspeed boost for a fraction of the price of a traditional SSD? (Yes, it's\nSATA, not PCI, so the throughput is not all that impressive -- but still,\nit's got close to zero seek time.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 3 Oct 2005 13:02:26 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ultra-cheap NVRAM device"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\non a 7.2 System (Suse-Linux) I got an error \"duplicate key in unique\nindex pg_statistic_relid_att_index\" (think it was while vacuuming)\n\nI REINDEXd the database.\nNow the table pg_statistic_relid_att_index is completely gone.\n\nHas anybody an advise?\n\ntia,\nHarald\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (MingW32)\nComment: GnuPT 2.7.2\n\niQEVAwUBQ0Ejx3fX3+fgcdKcAQIVoQf+MFnt+U65FPNxQjHwZ15eT13NwBoCsOE9\nd3nFaKTG58SmI9QziMt1Tpo+pD89LMQZacnCRDv/M3Tz6ruSQaPIsxS6m1evKjq7\n7ixSRCwD+41C2x27qSRZDOEUt6AvG5xfSv43NxJQNS/zB+/TnQ3wGXzwdRrRQiQE\nMv6DXv5s+3Wrbg9qG78Xn3mHOGGySFSG1x9ItUoST+jC6a7rOl5YL3wDCacdgve/\npzq3fe6+JYjEQyKFxEzZYJidsWvr9C7EKfc321PYBscuPNyGMU1Vohe8kDYFbyeG\nL23jXPV8c7WO2w4aQMdQr6V9YXtnBeMgGdAFjo4My29xbdepkwOUvw==\n=I8Ax\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 03 Oct 2005 14:27:54 +0200",
"msg_from": "Harald Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "URGENT: pg_statistic_relid_att_index has gone"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> on a 7.2 System (Suse-Linux) I got an error \"duplicate key in unique\n> index pg_statistic_relid_att_index\" (think it was while vacuuming)\n> \n> I REINDEXd the database.\n> Now the table pg_statistic_relid_att_index is completely gone.\n\ngo searching the internet first, man ...\nSurprisingly I'm not the first one having such a breakdown\n\nFound a solution provided by Tom Lane:\nhttp://www.xy1.org/[email protected]/msg04568.html\n\nSeems to work, many thanks\n\nSorry for the overhasty question\nHarald\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (MingW32)\nComment: GnuPT 2.7.2\n\niQEVAwUBQ0E5nXfX3+fgcdKcAQJWsgf+JvgWRjgl/RLBzGd8wNt7x6/VngGOzdpT\n4E3OgbrGuAPEC3INkMLTLRU2hVvjRqgkNaWS2YlXpFmlAff6czGeSbwXv4vDiiH7\nAYHpONACLgr8jcHohS0kmylqu/3QYSsmRBDOTOCNms1iMEmJZvpru9YJpSEjwWUL\n/n5pu5lurcpU+VGLTCikin5UnsNWmQzsegz+f2co3UuTDHIUER+W2538Fb9iiZBD\nP9TCI972U+oC2YTg+Puh22jPfS1gG7EHUxKt/XbE9klca1AnCdJX6LdsIh7vdMhw\n6u8JzaaAz9nHtqYFpClkEpnkp9KEohw/uQyDUCB7FK//MRtSWx+MPw==\n=52pe\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 03 Oct 2005 16:01:02 +0200",
"msg_from": "Harald Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: pg_statistic_relid_att_index has gone"
},
{
"msg_contents": "Harald Lau <[email protected]> writes:\n> on a 7.2 System (Suse-Linux) I got an error \"duplicate key in unique\n> index pg_statistic_relid_att_index\" (think it was while vacuuming)\n> I REINDEXd the database.\n> Now the table pg_statistic_relid_att_index is completely gone.\n> Has anybody an advise?\n\nDump, initdb, reload. You've probably got more problems than just\nthat. This might be a good time to update to something newer than\nPG 7.2, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 03 Oct 2005 10:08:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: pg_statistic_relid_att_index has gone "
},
{
"msg_contents": "\nOn Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:\n\n> I thought this might be interesting, not the least due to the \n> extremely low\n> price ($150 + the price of regular DIMMs):\n>\n>\n>\n\nThis has been posted before, and the main reason nobody got very \nexcited is that:\n\na) it only uses the PCI bus to provide power to the device, not for I/O\nb) It is limited to SATA bandwidth\nc) The benchmarks did not prove it to be noticeably faster than a \ngood single SATA drive\n\nA few of us were really excited at first too, until seeing the \nbenchmarks..\n\n-Dan\n\n\n",
"msg_date": "Mon, 3 Oct 2005 11:15:03 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ultra-cheap NVRAM device"
},
{
"msg_contents": "\nOn Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:\n\n> I thought this might be interesting, not the least due to the \n> extremely low\n> price ($150 + the price of regular DIMMs):\n>\n>\n\nReplying before my other post came through.. It looks like their \nbenchmarks are markedly improved since the last article I read on \nthis. There may be more interest now..\n\n-Dan\n\n",
"msg_date": "Mon, 3 Oct 2005 11:21:30 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ultra-cheap NVRAM device"
},
{
"msg_contents": "On Mon, 2005-10-03 at 11:15 -0600, Dan Harris wrote:\n> On Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:\n> \n> > I thought this might be interesting, not the least due to the \n> > extremely low\n> > price ($150 + the price of regular DIMMs):\n> >\n> >\n> >\n> \n> This has been posted before, and the main reason nobody got very \n> excited is that:\n> \n> a) it only uses the PCI bus to provide power to the device, not for I/O\n> b) It is limited to SATA bandwidth\n> c) The benchmarks did not prove it to be noticeably faster than a \n> good single SATA drive\n> \n> A few of us were really excited at first too, until seeing the \n> benchmarks..\n\nAlso, no ECC support. You'd be crazy to use it for anything.\n\n-jwb\n",
"msg_date": "Mon, 03 Oct 2005 10:36:11 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ultra-cheap NVRAM device"
},
{
"msg_contents": "On Oct 3, 2005, at 7:02 AM, Steinar H. Gunderson wrote:\n\n> Anybody know a good reason why you can't put a WAL on this, and \n> enjoy a hefty\n> speed boost for a fraction of the price of a traditional SSD? (Yes, \n> it's\n> SATA, not PCI, so the throughput is not all that impressive -- but \n> still,\n> it's got close to zero seek time.)\n>\n\nold news. discussed here a while back.\n\nthe board you see has no ECC. Would you trust > 1GB RAM to not have \nECC for more than 1 month? You're almost guaranteed at least 1 bit \nerror.\n\n",
"msg_date": "Mon, 3 Oct 2005 15:06:15 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ultra-cheap NVRAM device"
},
{
"msg_contents": "There was a discussion about this about 2 months ago. See the archives.\n\nOn Mon, Oct 03, 2005 at 01:02:26PM +0200, Steinar H. Gunderson wrote:\n> I thought this might be interesting, not the least due to the extremely low\n> price ($150 + the price of regular DIMMs):\n> \n> http://www.tomshardware.com/storage/20050907/index.html\n> \n> Anybody know a good reason why you can't put a WAL on this, and enjoy a hefty\n> speed boost for a fraction of the price of a traditional SSD? (Yes, it's\n> SATA, not PCI, so the throughput is not all that impressive -- but still,\n> it's got close to zero seek time.)\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 16:02:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ultra-cheap NVRAM device"
},
{
"msg_contents": "[email protected] (Dan Harris) writes:\n> On Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:\n>\n>> I thought this might be interesting, not the least due to the\n>> extremely low\n>> price ($150 + the price of regular DIMMs):\n>\n> Replying before my other post came through.. It looks like their\n> benchmarks are markedly improved since the last article I read on\n> this. There may be more interest now..\n\nIt still needs a few more generations worth of improvement.\n\n1. It's still limited to SATA speed\n2. It's not ECC smart\n\nWhat I'd love to see would be something that much smarter, or, at\nleast, that pushes the limits of SATA speed, and which has both a\nbattery on board and enough CF storage to cope with outages.\n-- \noutput = reverse(\"gro.mca\" \"@\" \"enworbbc\")\nhttp://www.ntlug.org/~cbbrowne/linuxxian.html\nWe all live in a yellow subroutine, a yellow subroutine, a yellow\nsubroutine...\n",
"msg_date": "Wed, 05 Oct 2005 13:01:04 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ultra-cheap NVRAM device"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm trying to include a custom function in my SQL-queries, which\nunfortunately leaves the server hanging...\n\nI basically search through two tables:\n* TABLE_MAPPING: lists that 'abc' is mapped to 'def'\n id1 | name1 | id2 | name2\n -------------------------\n 1 | abc | 2 | def\n 3 | uvw | 4 | xyz\nThis data means that 'abc' is mapped_to 'def', and 'uvw' is mapped_to\n'xyz'. About 1,500,000 records in total.\n\n* TABLE ALIASES: lists different aliases of the same thing\n id1 | name1 | id2 | name2\n -------------------------\n 3 | uvw | 2 | def\nThis data means that 'uvw' and 'def' are essentially the same thing.\nAbout 820,000 records in total.\n\nI have indexes on all columns of the above tables.\n\nBased on the two tables above, 'abc' is indirectly mapped_to 'xyz' as\nwell (through 'def' also-known-as 'uvw').\n\nI wrote this little function: aliases_of\nCREATE FUNCTION aliases_of(INTEGER) RETURNS SETOF integer\nAS 'SELECT $1\n UNION\n SELECT id1 FROM aliases WHERE id2 = $1\n UNION\n SELECT id2 FROM aliases WHERE id1 = $1\n '\nLANGUAGE SQL\nSTABLE;\n\nA simple SELECT aliases_of(2) shows:\n aliases_of\n ----------\n 2\n 3\n\nNow, when I want to traverse the aliases, I would write a query as\nfollows:\nSELECT m1.name1, m1.name2, m2.name1, m2.name2\nFROM mappings m1, mappings m2\nWHERE m1.name1 = 'abc'\nAND m2.name1 IN (SELECT aliases_of(m1.name2));\n\nUnfortunately, this query seems to keep running and to never stop...\n\n\nAn EXPLAIN of the above query shows:\nQUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop (cost=0.00..118379.45 rows=1384837 width=80)\n Join Filter: (subplan)\n -> Index Scan using ind_cmappings_object1_id on c_mappings m1\n(cost=0.00..7.08 rows=2 width=40)\n Index Cond: (name1 = 'abc')\n -> Seq Scan on c_mappings m2 (cost=0.00..35935.05 rows=1423805\nwidth=40)\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0)\n(7 rows)\n\nStrangely enough, I _do_ get output when I type the following query:\nSELECT m1.name1, m1.name2, m2.name1, m2.name2\nFROM mappings m1, mappings m2\nWHERE m1.name1 = 'abc'\nAND m2.name1 IN (\n SELECT m1.name2\n UNION\n SELECT name2 FROM aliases WHERE name1 = m1.name2\n UNION\n SELECT name1 FROM aliases WHERE name2 = m2.name1\n);\n\nThe EXPLAIN for this query is:\n QUERY PLAN\n-------------------------------------------------------------------------------\n Nested Loop (cost=0.00..36712030.90 rows=1384837 width=80)\n Join Filter: (subplan)\n -> Index Scan using ind_cmappings_object1_id on c_mappings m1\n(cost=0.00..7.08 rows=2 width=40)\n Index Cond: (object1_id = 16575564)\n -> Seq Scan on c_mappings m2 (cost=0.00..35935.05 rows=1423805\nwidth=40)\n SubPlan\n -> Unique (cost=13.21..13.23 rows=1 width=4)\n -> Sort (cost=13.21..13.22 rows=3 width=4)\n Sort Key: object2_id\n -> Append (cost=0.00..13.18 rows=3 width=4)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..0.01\nrows=1 width=0)\n -> Result (cost=0.00..0.01 rows=1\nwidth=0)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..5.92\nrows=1 width=4)\n -> Index Scan using\nind_caliases_object2_id on c_aliases (cost=0.00..5.92 rows=1 width=4)\n Index Cond: (object2_id = $0)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..7.25\nrows=1 width=4)\n -> Index Scan using\nind_caliases_object1_id on c_aliases (cost=0.00..7.25 rows=1 width=4)\n Index Cond: (object1_id = $0)\n(18 rows)\n\nSo my questions are:\n* Does anyone have any idea how I can integrate a function that lists\nall aliases for a given name into such a mapping query?\n* 
Does the STABLE keyword in the function definition make the function\nto read all its data into memory?\n* Is there a way to let postgres use an \"Index scan\" on that function\ninstead of a \"seq scan\"?\n\nAny help very much appreciated,\nJan Aerts\n\n",
"msg_date": "3 Oct 2005 08:14:11 -0700",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "index on custom function; explain"
},
{
"msg_contents": "Some additional thoughts: what appears to take the most time (i.e.\naccount for the highest cost in the explain), is _not_ running the\nfunction itself (cost=0.00..0.01), but comparing the result from that\nfunction with the name1 column in the mappings table\n(cost=0.00..35935.05). Am I right? (See EXPLAIN in previous post.) If\nso: that's pretty strange, because the name1-column in the mappings\ntable is indexed...\n\njan.\n\n",
"msg_date": "4 Oct 2005 03:10:28 -0700",
"msg_from": "\"Jan Aerts\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on custom function; explain"
},
{
"msg_contents": "Hi,\n\nOn Mon, Oct 03, 2005 at 08:14:11AM -0700, [email protected] wrote:\n> So my questions are:\n> * Does anyone have any idea how I can integrate a function that lists\n> all aliases for a given name into such a mapping query?\n\nwhat version are you using?\n\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\nCheers,\nYann\n",
"msg_date": "Thu, 6 Oct 2005 10:19:19 +0200",
"msg_from": "Yann Michel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on custom function; explain"
},
{
"msg_contents": "On Tue, 2005-10-04 at 03:10 -0700, Jan Aerts wrote:\n> Some additional thoughts: what appears to take the most time (i.e.\n> account for the highest cost in the explain), is _not_ running the\n> function itself (cost=0.00..0.01), but comparing the result from that\n> function with the name1 column in the mappings table\n> (cost=0.00..35935.05). Am I right? (See EXPLAIN in previous post.) If\n> so: that's pretty strange, because the name1-column in the mappings\n> table is indexed...\n\n35935.05 is for the loop, 0.01 is for the operation within the loop.\n\nWhat version of PostgreSQL is this? Some old versions were not good at\nhandling the IN ( ... ) clause.\n\nAlso, PostgreSQL doesn't always do a wonderful job of considering the\nactivities of a function into the design of the query plans. Sometimes\nthis can be a blessing, but not in this case.\n\nCheers,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\nIt is truth which you cannot contradict; you can without any difficulty\n contradict Socrates. - Plato\n\n-------------------------------------------------------------------------",
"msg_date": "Fri, 07 Oct 2005 07:50:01 +1300",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on custom function; explain"
},
{
"msg_contents": "* Yann Michel <[email protected]> wrote:\n\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\nI've got a similar problem: I have to match different datatypes,\nie. bigint vs. integer vs. oid.\n\nOf course I tried to use casted index (aka ON (foo::oid)), but \nit didn't work. \n\nWhat am I doing wrong ?\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgreSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n",
"msg_date": "Mon, 7 Nov 2005 19:07:38 +0100",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Index + mismatching datatypes [WAS: index on custom function;\n explain]"
},
{
"msg_contents": "On Mon, 2005-07-11 at 19:07 +0100, Enrico Weigelt wrote:\n> I've got a similar problem: I have to match different datatypes,\n> ie. bigint vs. integer vs. oid.\n> \n> Of course I tried to use casted index (aka ON (foo::oid)), but \n> it didn't work. \n\nDon't include the cast in the index definition, include it in the query\nitself:\n\n SELECT ... FROM foo WHERE int8col = 5::int8\n\nfor example. Alternatively, upgrade to 8.0 or better, which doesn't\nrequire this workaround.\n\n-Neil\n\n\n",
"msg_date": "Mon, 07 Nov 2005 15:45:57 -0500",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index + mismatching datatypes [WAS: index on custom"
}
] |
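A hedged sketch related to the set-returning aliases_of() function at the top of this thread: on PostgreSQL 9.3 or later (well after the 7.x/8.0 servers discussed here), LATERAL lets the function be expanded once per m1 row and joined like a table, instead of being evaluated as a correlated subplan for every candidate m2 row. The id1/id2 column names follow the simplified schema in the original post and are an assumption about how the real c_mappings/c_aliases tables are keyed.

-- Requires PostgreSQL 9.3+ for LATERAL; aliases_of() is the SQL function
-- defined earlier in this thread (RETURNS SETOF integer).
SELECT m1.name1, m1.name2, m2.name1, m2.name2
FROM mappings m1
CROSS JOIN LATERAL aliases_of(m1.id2) AS a(alias_id)
JOIN mappings m2 ON m2.id1 = a.alias_id
WHERE m1.name1 = 'abc';

On the versions actually in use in this thread, the explicit UNION subquery shown in the original post (the variant that did return results) is the closer equivalent.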
[
{
"msg_contents": "\n\n\n\nI have a PHP web-based application where a temporary list of servers and\ntheir characteristics (each represented by a unique numeric server_id) is\nextracted from a master server list based on a number of dynamic and\nuser-selected criteria (can the user view the server, is it on-line, is it\na member of a specific group, etc). Once the user selects the set of\ncriteria (and servers), it rarely change during the rest of the browser\nsession. The resulting temporary list of servers is then joined against\nother tables with different sets of information about each of the servers,\nbased on the server_id.\n\nI currently create a temporary table to hold the selected server_id's and\ncharacteristics. I then join this temp table with other data tables to\nproduce my reports. My reason for using the temporary table method is that\nthe SQL for the initial server selection is dynamically created based on\nthe user's selections, and is complex enough that it does not lend itself\nto being easily incorporated into any of the other data extraction queries\n(which may also contain dynamic filtering).\n\nUnfortunately, the PHP connection to the database does not survive from\nwebscreen to webscreen, so I have to re-generate the temporary server_id\ntable each time it is needed for a report screen. An idea I had to make\nthis process more efficient was instead of re-creating the temporary table\nover and over each time it is needed, do a one-time extraction of the list\nof user-selected server_id's, store the list in a PHP global variable, and\nthen use the list in a dynamically-created WHERE clause in the rest of the\nqueries. The resulting query would look something like\n\n SELECT *\n FROM some_data_table\n WHERE server_id IN (sid1,sid5,sid6,sid17,sid24...)\n\nSimple enough, however in rare cases the list of server_id's can range\nbetween 6,000 and 10,000.\n\nMy question to the group is, can having so many values in a WHERE/IN clause\neffect query performance? Am I being naive about this and is there a\ndifferent, better way? The server_id field is of course indexed, but it is\npossible that the list of selected sid's can contain almost all of the\nvalues in the some_data_table server_id index (in the situation where _all_\nof the records are requested I wouldn't use the WHERE clause in the query).\nThe some_data_table can contain millions of records for thousands of\nservers, so every bit of efficiency helps.\n\nIf this is not the proper group for this kind of question, please point me\nin the right direction.\n\nThanks!\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n",
"msg_date": "Mon, 3 Oct 2005 11:47:52 -0400",
"msg_from": "Steven Rosenstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Alternative to a temporary table"
},
{
"msg_contents": "On Mon, Oct 03, 2005 at 11:47:52AM -0400, Steven Rosenstein wrote:\n\n> I currently create a temporary table to hold the selected server_id's and\n> characteristics. I then join this temp table with other data tables to\n> produce my reports. My reason for using the temporary table method is that\n> the SQL for the initial server selection is dynamically created based on\n> the user's selections, and is complex enough that it does not lend itself\n> to being easily incorporated into any of the other data extraction queries\n> (which may also contain dynamic filtering).\n> \n> Unfortunately, the PHP connection to the database does not survive from\n> webscreen to webscreen, so I have to re-generate the temporary server_id\n> table each time it is needed for a report screen. An idea I had to make\n> this process more efficient was instead of re-creating the temporary table\n> over and over each time it is needed, do a one-time extraction of the list\n> of user-selected server_id's, store the list in a PHP global variable, and\n> then use the list in a dynamically-created WHERE clause in the rest of the\n> queries. The resulting query would look something like\n> \n> SELECT *\n> FROM some_data_table\n> WHERE server_id IN (sid1,sid5,sid6,sid17,sid24...)\n> \n> Simple enough, however in rare cases the list of server_id's can range\n> between 6,000 and 10,000.\n> \n> My question to the group is, can having so many values in a WHERE/IN clause\n> effect query performance? \n\nProbably, yes. As always, benchmark a test case, but last time I\nchecked (in 7.4) you'd be better creating a new temporary table for\nevery query than use an IN list that long. It's a lot better in 8.0, I\nbelieve, so you should benchmark it there.\n\n> Am I being naive about this and is there a\n> different, better way? The server_id field is of course indexed, but it is\n> possible that the list of selected sid's can contain almost all of the\n> values in the some_data_table server_id index (in the situation where _all_\n> of the records are requested I wouldn't use the WHERE clause in the query).\n> The some_data_table can contain millions of records for thousands of\n> servers, so every bit of efficiency helps.\n\nDon't use a temporary table. Instead use a permanent table that\ncontains the server ids you need and the PHP session token. Then you\ncan create your list of server_ids once and insert it into that table\nassociated with your sessionid. Then future queries can be simple\njoins against that table.\n\n SELECT some_data_table.*\n FROM some_data_table, session_table\n WHERE some_data_table.server_id = session_table.server_id\n AND session_table.session_id = 'foobar'\n\nYou'd need a reaper process to delete old data from that table to\nprevent it from growing without limit, and probably a table associating\nsession start time with sessionid to make reaping easier.\n\nCheers,\n Steve\n",
"msg_date": "Mon, 3 Oct 2005 09:20:10 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Alternative to a temporary table"
}
] |
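A minimal sketch of the permanent session table Steve Atkins describes above. The table name, columns, and the one-day retention window are invented for illustration and are not details from the thread.

-- Created once; holds each web session's selected servers.
CREATE TABLE session_servers (
    session_id text        NOT NULL,
    server_id  integer     NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (session_id, server_id)
);

-- Each report screen then joins against the session's list instead of
-- rebuilding a temp table or sending a 6,000-10,000 element IN list:
SELECT d.*
FROM some_data_table d
JOIN session_servers s ON s.server_id = d.server_id
WHERE s.session_id = 'foobar';

-- Periodic reaper (cron or similar) so the table does not grow without limit:
DELETE FROM session_servers
WHERE created_at < now() - interval '1 day';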
[
{
"msg_contents": "Nah. It's still not right. It needs:\n1= full PCI, preferably at least 64b 133MHz PCI-X, bandwidth.\nA RAM card should blow the doors off the fastest commodity\nRAID setup you can build.\n2= 8-16 DIMM slots\n3= a standard battery type that I can pick up spares for easily\n4= ECC support\n\nIf it had all those features, I'd buy it at even 2x or possibly\neven 3x it's current price.\n\n8, 16, or 32GB (using 1, 2, or 4GB DIMMs respectively in an 8 slot\nform factor) of very fast temporary work memory (sorting\n anyone ;-) ). Yum.\n\nRon\n\n-----Original Message-----\nFrom: Dan Harris <[email protected]>\nSent: Oct 3, 2005 1:21 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Ultra-cheap NVRAM device\n\n\nOn Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:\n\n> I thought this might be interesting, not the least due to the \n> extremely low\n> price ($150 + the price of regular DIMMs):\n>\n>\n\nReplying before my other post came through.. It looks like their \nbenchmarks are markedly improved since the last article I read on \nthis. There may be more interest now..\n\n-Dan\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Mon, 3 Oct 2005 14:00:13 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ultra-cheap NVRAM device"
}
] |
[
{
"msg_contents": "Jeff, are those _burst_ rates from HD buffer or _sustained_ rates from\nactual HD media? Rates from IO subsystem buffer or cache are\nusually considerably higher than Average Sustained Transfer Rate.\n\nAlso, are you measuring _raw_ HD IO (bits straight off the platters, no\nFS or other overhead) or _cooked_ HD IO (actual FS or pg IO)?\n\nBTW, it would seem Useful to measure all of raw HD IO, FS HD IO,\nand pg HD IO as this would give us an idea of just how much overhead\neach layer is imposing on the process.\n\nWe may be able to get better IO than we currently are for things like\nsorts by the simple expedient of making sure we read enough data per\nseek.\n\nFor instance, a HD with a 12ms average access time and a ASTR of\n50MBps should always read _at least_ 600KB/access or it is impossible\nfor it to achieve it's rated ASTR.\n\nThis number will vary according to the average access time and the\nASTR of your physical IO subsystem, but the concept is valid for _any_\nphysical IO subsystem.\n \n\n-----Original Message-----\nFrom: \"Jeffrey W. Baker\" <[email protected]>\nSent: Oct 3, 2005 4:42 PM\nTo: [email protected]\nCc: \nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nOn Mon, 2005-10-03 at 13:34 -0700, Josh Berkus wrote:\n> Michael,\n> \n> > >Realistically, you can't do better than about 25MB/s on a\n> > > single-threaded I/O on current Linux machines,\n> >\n> > What on earth gives you that idea? Did you drop a zero?\n> \n> Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison, A \n> Big-Name Proprietary Database doesn't get much more than that either.\n\nI find this claim very suspicious. I get single-threaded reads in\nexcess of 1GB/sec with XFS and > 250MB/sec with ext3. \n\n-jwb\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n",
"msg_date": "Mon, 3 Oct 2005 17:18:45 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
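Spelling out the arithmetic behind the 600KB figure above: 12 ms of average access time multiplied by a 50 MB/s sustained transfer rate is 0.012 s x 50 MB/s = 0.6 MB = 600 KB. Note that reading exactly 600 KB per access still spends half the elapsed time seeking, for an effective rate of about 25 MB/s; getting close to the full 50 MB/s requires transfers several times larger than that per seek.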
[
{
"msg_contents": "Let's pretend we get a 24HD HW RAID solution like that J Baker\nsays he has access to and set it up as a RAID 10. Assuming\nit uses two 64b 133MHz PCI-X busses and has the fastest HDs\navailable on it, Jeff says he can hit ~1GBps of XFS FS IO rate\nwith that set up (12*83.3MBps= 1GBps).\n\nJosh says that pg can't do more than 25MBps of DB level IO\nregardless of how fast the physical IO subsystem is because at\n25MBps, pg is CPU bound. \n\nJust how bad is this CPU bound condition? How powerful a CPU is\nneeded to attain a DB IO rate of 25MBps?\n \nIf we replace said CPU with one 2x, 10x, etc faster than that, do we\nsee any performance increase?\n\nIf a modest CPU can drive a DB IO rate of 25MBps, but that rate\ndoes not go up regardless of how much extra CPU we throw at\nit...\n\nRon \n\n-----Original Message-----\nFrom: Josh Berkus <[email protected]>\nSent: Oct 3, 2005 6:03 PM\nTo: \"Jeffrey W. Baker\" <[email protected]>\nCc: \nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nJeffrey,\n\n> I guess database reads are different, but I remain unconvinced that they\n> are *fundamentally* different. After all, a tab-delimited file (my sort\n> workload) is a kind of database.\n\nUnfortunately, they are ... because of CPU overheads. I'm basing what's \n\"reasonable\" for data writes on the rates which other high-end DBs can \nmake. From that, 25mb/s or even 40mb/s for sorts should be achievable \nbut doing 120mb/s would require some kind of breakthrough.\n\n> On a single disk you wouldn't notice, but XFS scales much better when\n> you throw disks at it. I get a 50MB/sec boost from the 24th disk,\n> whereas ext3 stops scaling after 16 disks. For writes both XFS and ext3\n> top out around 8 disks, but in this case XFS tops out at 500MB/sec while\n> ext3 can't break 350MB/sec.\n\nThat would explain it. I seldom get more than 6 disks (and 2 channels) to \ntest with.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Mon, 3 Oct 2005 20:07:02 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On 10/3/05, Ron Peacetree <[email protected]> wrote:\n[snip]\n> Just how bad is this CPU bound condition? How powerful a CPU is\n> needed to attain a DB IO rate of 25MBps?\n>\n> If we replace said CPU with one 2x, 10x, etc faster than that, do we\n> see any performance increase?\n>\n> If a modest CPU can drive a DB IO rate of 25MBps, but that rate\n> does not go up regardless of how much extra CPU we throw at\n> it...\n\nSingle threaded was mentioned.\nPlus even if it's purely cpu bound, it's seldom as trivial as throwing\nCPU at it, consider the locking in both the application, in the\nfilesystem, and elsewhere in the kernel.\n",
"msg_date": "Mon, 3 Oct 2005 20:19:56 -0400",
"msg_from": "Gregory Maxwell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
[
{
"msg_contents": "OK, change \"performance\" to \"single thread performance\" and we\nstill have a valid starting point for a discussion.\n\nRon\n\n\n-----Original Message-----\nFrom: Gregory Maxwell <[email protected]>\nSent: Oct 3, 2005 8:19 PM\nTo: Ron Peacetree <[email protected]>\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nOn 10/3/05, Ron Peacetree <[email protected]> wrote:\n[snip]\n> Just how bad is this CPU bound condition? How powerful a CPU is\n> needed to attain a DB IO rate of 25MBps?\n>\n> If we replace said CPU with one 2x, 10x, etc faster than that, do we\n> see any performance increase?\n>\n> If a modest CPU can drive a DB IO rate of 25MBps, but that rate\n> does not go up regardless of how much extra CPU we throw at\n> it...\n\nSingle threaded was mentioned.\nPlus even if it's purely cpu bound, it's seldom as trivial as throwing\nCPU at it, consider the locking in both the application, in the\nfilesystem, and elsewhere in the kernel.\n\n",
"msg_date": "Mon, 3 Oct 2005 20:32:02 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
[
{
"msg_contents": "pg is _very_ stupid about caching. Almost all of the caching is left\nto the OS, and it's that way by design (as post after post by TL has\npointed out).\n\nThat means pg has almost no ability to take application domain\nspecific knowledge into account when deciding what to cache.\nThere's plenty of papers on caching out there that show that\ncontext dependent knowledge leads to more effective caching\nalgorithms than context independent ones are capable of.\n\n(Which means said design choice is a Mistake, but unfortunately\none with too much inertia behind it currentyl to change easily.)\n\nUnder these circumstances, it is quite possible that an expert class\nhuman could optimize memory usage better than the OS + pg.\n \nIf one is _sure_ they know what they are doing, I'd suggest using\ntmpfs or the equivalent for critical read-only tables. For \"hot\"\ntables that are rarely written to and where data loss would not be\na disaster, \"tmpfs\" can be combined with an asyncronous writer\nprocess push updates to HD. Just remember that a power hit\nmeans that \n\nThe (much) more expensive alternative is to buy SSD(s) and put\nthe critical tables on it at load time.\n\nRon\n \n\n-----Original Message-----\nFrom: \"Jim C. Nasby\" <[email protected]>\nSent: Oct 4, 2005 4:57 PM\nTo: Stefan Weiss <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Is There Any Way ....\n\nOn Tue, Oct 04, 2005 at 12:31:42PM +0200, Stefan Weiss wrote:\n> On 2005-09-30 01:21, Lane Van Ingen wrote:\n> > (3) Assure that a disk-based table is always in memory (outside of keeping\n> > it in\n> > memory buffers as a result of frequent activity which would prevent\n> > LRU\n> > operations from taking it out) ?\n> \n> I was wondering about this too. IMO it would be useful to have a way to tell\n> PG that some tables were needed frequently, and should be cached if\n> possible. This would allow application developers to consider joins with\n> these tables as \"cheap\", even when querying on columns that are not indexed.\n> I'm thinking about smallish tables like users, groups, *types, etc which\n> would be needed every 2-3 queries, but might be swept out of RAM by one\n> large query in between. Keeping a table like \"users\" on a RAM fs would not\n> be an option, because the information is not volatile.\n\nWhy do you think you'll know better than the database how frequently\nsomething is used? At best, your guess will be correct and PostgreSQL\n(or the kernel) will keep the table in memory. Or, your guess is wrong\nand you end up wasting memory that could have been used for something\nelse.\n\nIt would probably be better if you describe why you want to force this\ntable (or tables) into memory, so we can point you at more appropriate\nsolutions.\n",
"msg_date": "Tue, 4 Oct 2005 19:33:47 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 07:33:47PM -0400, Ron Peacetree wrote:\n> pg is _very_ stupid about caching. Almost all of the caching is left\n> to the OS, and it's that way by design (as post after post by TL has\n> pointed out).\n> \n> That means pg has almost no ability to take application domain\n> specific knowledge into account when deciding what to cache.\n> There's plenty of papers on caching out there that show that\n> context dependent knowledge leads to more effective caching\n> algorithms than context independent ones are capable of.\n> \n> (Which means said design choice is a Mistake, but unfortunately\n> one with too much inertia behind it currentyl to change easily.)\n> \n> Under these circumstances, it is quite possible that an expert class\n> human could optimize memory usage better than the OS + pg.\n\nDo you have any examples where this has actually happened? Especially\nwith 8.x, which isn't all that 'stupid' about how it handles buffers?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 4 Oct 2005 19:01:04 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "\nRon Peacetree sounds like someone talking out of his _AZZ_.\nHe can save his unreferenced flapdoodle for his SQL Server\nclients. Maybe he will post references so that we may all\nlearn at the feet of Master Peacetree. :-)\n\n douglas\n\nOn Oct 4, 2005, at 7:33 PM, Ron Peacetree wrote:\n\n> pg is _very_ stupid about caching. Almost all of the caching is left\n> to the OS, and it's that way by design (as post after post by TL has\n> pointed out).\n>\n> That means pg has almost no ability to take application domain\n> specific knowledge into account when deciding what to cache.\n> There's plenty of papers on caching out there that show that\n> context dependent knowledge leads to more effective caching\n> algorithms than context independent ones are capable of.\n>\n> (Which means said design choice is a Mistake, but unfortunately\n> one with too much inertia behind it currentyl to change easily.)\n>\n> Under these circumstances, it is quite possible that an expert class\n> human could optimize memory usage better than the OS + pg.\n>\n> If one is _sure_ they know what they are doing, I'd suggest using\n> tmpfs or the equivalent for critical read-only tables. For \"hot\"\n> tables that are rarely written to and where data loss would not be\n> a disaster, \"tmpfs\" can be combined with an asyncronous writer\n> process push updates to HD. Just remember that a power hit\n> means that\n>\n> The (much) more expensive alternative is to buy SSD(s) and put\n> the critical tables on it at load time.\n>\n> Ron\n>\n>\n> -----Original Message-----\n> From: \"Jim C. Nasby\" <[email protected]>\n> Sent: Oct 4, 2005 4:57 PM\n> To: Stefan Weiss <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Is There Any Way ....\n>\n> On Tue, Oct 04, 2005 at 12:31:42PM +0200, Stefan Weiss wrote:\n>> On 2005-09-30 01:21, Lane Van Ingen wrote:\n>>> (3) Assure that a disk-based table is always in memory (outside of \n>>> keeping\n>>> it in\n>>> memory buffers as a result of frequent activity which would \n>>> prevent\n>>> LRU\n>>> operations from taking it out) ?\n>>\n>> I was wondering about this too. IMO it would be useful to have a way \n>> to tell\n>> PG that some tables were needed frequently, and should be cached if\n>> possible. This would allow application developers to consider joins \n>> with\n>> these tables as \"cheap\", even when querying on columns that are not \n>> indexed.\n>> I'm thinking about smallish tables like users, groups, *types, etc \n>> which\n>> would be needed every 2-3 queries, but might be swept out of RAM by \n>> one\n>> large query in between. Keeping a table like \"users\" on a RAM fs \n>> would not\n>> be an option, because the information is not volatile.\n>\n> Why do you think you'll know better than the database how frequently\n> something is used? At best, your guess will be correct and PostgreSQL\n> (or the kernel) will keep the table in memory. Or, your guess is wrong\n> and you end up wasting memory that could have been used for something\n> else.\n>\n> It would probably be better if you describe why you want to force this\n> table (or tables) into memory, so we can point you at more appropriate\n> solutions.\n\n",
"msg_date": "Tue, 4 Oct 2005 20:40:54 -0400",
"msg_from": "\"Douglas J. Trainor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "Douglas J. Trainor wrote:\n\n>\n> Ron Peacetree sounds like someone talking out of his _AZZ_.\n> He can save his unreferenced flapdoodle for his SQL Server\n> clients. Maybe he will post references so that we may all\n> learn at the feet of Master Peacetree. :-)\n\nAlthough I agree that I would definitely like to see some test cases\nfor what Ron is talking about, I don't think that resorting to insults\nis going to help the situation.\n\nRon, if you would please -- provide some test cases for what you are\ndescribing I am sure that anyone would love to see them. We are all\nfor improving PostgreSQL.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n",
"msg_date": "Tue, 04 Oct 2005 18:32:18 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "I'm sure there will be cases when some human assisted caching algorithm will\nperform better than an mathetical statistical based design, but it will also\ndepend on the \"human\". And it probably will make thing worse when workload\nchanges and human doesn't realize. It must be considered that, today,\nhardware cost is not the %90 of budget that it used to be. Throwing hardware\nat the system can be as much expensive as throwing certified \"it stuff\".\n(just think in coffee budget! :-) )\n\nIf you need to improve \"user perception\", you can do others things. Like\ncaching a table in your client (with a trigger for any change on table X\nupdating a table called \"timestamp_table_change\" and a small select to this\ntable, you can easily know when you must update your client). If it is a\napplication server, serving http request, then \"user perception\" will be\nsticked to bandwidth AND application server (some of them have cache for\nrequest).\n\nFYI, I don't recall a mechanism in MSSQL to cache a table in buffers. Oracle\nhas some structures to allow that. (you know) It uses his own buffer. Since\nversion 9i, you can set three different data buffers, one (recycled cache)\nfor low usage tables (I mean tables with blocks which don't have too much\nchance to be queried again, like a very large historical table) , one for\nhigh usage tables (keep cache), and the regular one (difference is in\nalgorithm). And you must also set a buffer cache size for tablespaces with\ndifferent block size. But there is no such thing as \"create table x keep\nentirenly in buffer\". And above all things, oracle doc always states \"first,\ntune design, then tune queries, then start tunning engine\".\n\ngreetings.\n\n\n\n",
"msg_date": "Wed, 5 Oct 2005 08:16:46 -0300",
"msg_from": "\"Dario\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
}
] |
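A rough sketch of the client-side invalidation idea Dario mentions above (a trigger on table X that bumps a row in a "timestamp_table_change"-style table, which the client polls cheaply). All object names are invented, and the syntax shown (statement-level trigger, dollar quoting, TG_TABLE_NAME) assumes roughly PostgreSQL 8.2 or later, i.e. newer than the servers discussed in this thread.

CREATE TABLE table_change_timestamp (
    table_name text PRIMARY KEY,
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION note_table_change() RETURNS trigger AS $$
BEGIN
    UPDATE table_change_timestamp
       SET changed_at = now()
     WHERE table_name = TG_TABLE_NAME;
    IF NOT FOUND THEN
        INSERT INTO table_change_timestamp (table_name, changed_at)
        VALUES (TG_TABLE_NAME, now());
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- One statement-level trigger per client-cached table ("x" here):
CREATE TRIGGER x_change_stamp
    AFTER INSERT OR UPDATE OR DELETE ON x
    FOR EACH STATEMENT EXECUTE PROCEDURE note_table_change();

-- The client polls this and refreshes its cached copy of x only when
-- the timestamp moves:
SELECT changed_at FROM table_change_timestamp WHERE table_name = 'x';

Pre-populating one row per cached table avoids a race between two concurrent first writers hitting the INSERT branch at the same time.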
[
{
"msg_contents": "\nI have an application that has a table that is both read and write intensive. \nData from iostat indicates that the write speed of the system is the factor \nthat is limiting performance. The table has around 20 columns and most of the \ncolumns are indexed. The data and the indices for the table are distributed \nover several mirrored disk partitions and pg_xlog is on another. I'm looking \nat ways to improve performance and besides the obvious one of getting an SSD \nI thought about putting the indices on a ramdisk. That means that after a \npower failure or shutdown I would have to recreate them but that is \nacceptable for this application. What I am wondering though is whether or not \nI would see much performance benefit and if there would be any startup \nproblems after a power down event due to the indices not being present. Any \ninsight would be appreciated.\n\nEmil\n",
"msg_date": "Tue, 4 Oct 2005 21:09:01 -0400",
"msg_from": "Emil Briggs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexes on ramdisk"
},
{
"msg_contents": "Talk about your IO system a bit. There might be obvious ways to improve.\n\nWhat System/Motherboard are you using?\nWhat Controller Cards are you using?\nWhat kind of Disks do you have (SATA, SCSI 7.6k 10k 15k)\nWhat denominations (9, 18, 36, 72, 143, 80, 160, 200 240Gig)?\nWhat kind of RAIDs do you have setup (How many drives what stripe sizes, how\nmany used for what).\nWhat levels of RAID are you using (0,1,10,5,50)?\n\nWith good setup, a dual PCI-X bus motherboard can hit 2GB/sec and thousands\nof transactions to disk if you have a controller/disks that can keep up.\nThat is typicaly enough for most people without resorting to SSD.\n\nAlex Turner\nNetEconomist\n\nOn 10/4/05, Emil Briggs <[email protected]> wrote:\n>\n>\n> I have an application that has a table that is both read and write\n> intensive.\n> Data from iostat indicates that the write speed of the system is the\n> factor\n> that is limiting performance. The table has around 20 columns and most of\n> the\n> columns are indexed. The data and the indices for the table are\n> distributed\n> over several mirrored disk partitions and pg_xlog is on another. I'm\n> looking\n> at ways to improve performance and besides the obvious one of getting an\n> SSD\n> I thought about putting the indices on a ramdisk. That means that after a\n> power failure or shutdown I would have to recreate them but that is\n> acceptable for this application. What I am wondering though is whether or\n> not\n> I would see much performance benefit and if there would be any startup\n> problems after a power down event due to the indices not being present.\n> Any\n> insight would be appreciated.\n>\n> Emil\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\nTalk about your IO system a bit. There might be obvious ways to improve.\n\nWhat System/Motherboard are you using?\nWhat Controller Cards are you using?\nWhat kind of Disks do you have (SATA, SCSI 7.6k 10k 15k)\nWhat denominations (9, 18, 36, 72, 143, 80, 160, 200 240Gig)?\nWhat kind of RAIDs do you have setup (How many drives what stripe sizes, how many used for what).\nWhat levels of RAID are you using (0,1,10,5,50)?\n\nWith good setup, a dual PCI-X bus motherboard can hit 2GB/sec and\nthousands of transactions to disk if you have a controller/disks\nthat can keep up. That is typicaly enough for most people without\nresorting to SSD.\n\nAlex Turner\nNetEconomistOn 10/4/05, Emil Briggs <[email protected]> wrote:\nI have an application that has a table that is both read and write intensive.Data from iostat indicates that the write speed of the system is the factorthat is limiting performance. The table has around 20 columns and most of the\ncolumns are indexed. The data and the indices for the table are distributedover several mirrored disk partitions and pg_xlog is on another. I'm lookingat ways to improve performance and besides the obvious one of getting an SSD\nI thought about putting the indices on a ramdisk. That means that after apower failure or shutdown I would have to recreate them but that isacceptable for this application. What I am wondering though is whether or not\nI would see much performance benefit and if there would be any startupproblems after a power down event due to the indices not being present. Anyinsight would be appreciated.Emil---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings",
"msg_date": "Tue, 4 Oct 2005 23:23:17 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes on ramdisk"
},
{
"msg_contents": "> Talk about your IO system a bit. There might be obvious ways to improve.\n>\n> What System/Motherboard are you using?\n> What Controller Cards are you using?\n> What kind of Disks do you have (SATA, SCSI 7.6k 10k 15k)\n> What denominations (9, 18, 36, 72, 143, 80, 160, 200 240Gig)?\n> What kind of RAIDs do you have setup (How many drives what stripe sizes,\n> how many used for what).\n> What levels of RAID are you using (0,1,10,5,50)?\n>\n\nIt's a quad opteron system. RAID controller is a 4 channel LSILogic Megaraid \n320 connected to 10 15k 36.7G SCSI disks. The disks are configured in 5 \nmirrored partitions. The pg_xlog is on one mirror and the data and indexes \nare spread over the other 4 using tablespaces. These numbers from \npg_stat_user_tables are from about 2 hours earlier today on this one table.\n\n\nidx_scan 20578690\nidx_tup_fetch 35866104841\nn_tup_ins 1940081\nn_tup_upd 1604041\nn_tup_del 1880424\n\n\n",
"msg_date": "Tue, 4 Oct 2005 23:41:23 -0400",
"msg_from": "Emil Briggs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Indexes on ramdisk"
},
{
"msg_contents": "What kind of order of improvement do you need to see?\n\nWhat period are these number for? Were they collected over 1 hour, 1 day, 1\nmonth?\n\nHow much Cache do you have on the controller?\n\nYou can certainly get more speed by adding more disk and possibly by adding\nmore controller RAM/a second controller. 10 disks isn't really that many for\na totally kick-ass DB server. You can acheive more block writes with RAID\n10s than with RAID 1s. Wether it's cost effective is dependant on lots of\nfactors like your chassis and drive enclosures etc. vs SSD. SSD will be\nfaster, but last I heard was expensive, and I checked a few websites but\ncouldn't get much price info. Normaly when you can't get price info, thats a\nbad sign ;). If you are doing large chunks of writes to a small number of\ntables, then you might be better off with a single large RAID 10 for your\ntablespace than with seperate RAID 1s. If you are writing 5 to 1 more table\ndata than index data, you are hurting yourself by seperating on to multiple\nRAID 1s instead of a single RAID 10 which could write at 2-3x for the 5, and\n2-3x for the 1 and only suffer a single seek penalty but get data onto disk\ntwice to three times as fast (depending how many RAID 1s you join). Try\nunseperating RAID 1s, and combine to a RAID 10. for indexes and tablespaces.\nThe controller will re-sequence your writes/reads to help with effeciency,\nand dbwriter is there to make things go easier.\n\nYou can at least get some idea by doing an iostat and see how many IOs and\nhow much throughput is happening. That will rappidly help determine if you\nare bound by IOs or by MB/sec.\n\nWorst case I'm wrong, but IMHO it's worth a try.\n\nAlex Turner\nNetEconomist\n\nOn 10/4/05, Emil Briggs <[email protected]> wrote:\n>\n> > Talk about your IO system a bit. There might be obvious ways to improve.\n> >\n> > What System/Motherboard are you using?\n> > What Controller Cards are you using?\n> > What kind of Disks do you have (SATA, SCSI 7.6k 10k 15k)\n> > What denominations (9, 18, 36, 72, 143, 80, 160, 200 240Gig)?\n> > What kind of RAIDs do you have setup (How many drives what stripe sizes,\n> > how many used for what).\n> > What levels of RAID are you using (0,1,10,5,50)?\n> >\n>\n> It's a quad opteron system. RAID controller is a 4 channel LSILogic\n> Megaraid\n> 320 connected to 10 15k 36.7G SCSI disks. The disks are configured in 5\n> mirrored partitions. The pg_xlog is on one mirror and the data and indexes\n> are spread over the other 4 using tablespaces. These numbers from\n> pg_stat_user_tables are from about 2 hours earlier today on this one\n> table.\n>\n>\n> idx_scan 20578690\n> idx_tup_fetch 35866104841\n> n_tup_ins 1940081\n> n_tup_upd 1604041\n> n_tup_del 1880424\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\nWhat kind of order of improvement do you need to see?\n\nWhat period are these number for? Were they collected over 1 hour, 1 day, 1 month?\n\nHow much Cache do you have on the controller?\n\nYou can certainly get more speed by adding more disk and possibly by\nadding more controller RAM/a second controller. 10 disks isn't\nreally that many for a totally kick-ass DB server. You can\nacheive more block writes with RAID 10s than with RAID 1s. Wether\nit's cost effective is dependant on lots of factors like your chassis\nand drive enclosures etc. vs SSD. 
SSD will be faster, but last I\nheard was expensive, and I checked a few websites but couldn't get much\nprice info. Normaly when you can't get price info, thats a bad\nsign ;). If you are doing large chunks of writes to a small\nnumber of tables, then you might be better off with a single large RAID\n10 for your tablespace than with seperate RAID 1s. If you are\nwriting 5 to 1 more table data than index data, you are hurting\nyourself by seperating on to multiple RAID 1s instead of a single RAID\n10 which could write at 2-3x for the 5, and 2-3x for the 1 and only\nsuffer a single seek penalty but get data onto disk twice to three\ntimes as fast (depending how many RAID 1s you join). Try\nunseperating RAID 1s, and combine to a RAID 10. for indexes and\ntablespaces. The controller will re-sequence your writes/reads to\nhelp with effeciency, and dbwriter is there to make things go easier.\n\nYou can at least get some idea by doing an iostat and see how many IOs\nand how much throughput is happening. That will rappidly help determine\nif you are bound by IOs or by MB/sec.\n\nWorst case I'm wrong, but IMHO it's worth a try.\n\nAlex Turner\nNetEconomistOn 10/4/05, Emil Briggs <[email protected]> wrote:\n> Talk about your IO system a bit. There might be obvious ways to improve.>> What System/Motherboard are you using?> What Controller Cards are you using?> What kind of Disks do you have (SATA, SCSI \n7.6k 10k 15k)> What denominations (9, 18, 36, 72, 143, 80, 160, 200 240Gig)?> What kind of RAIDs do you have setup (How many drives what stripe sizes,> how many used for what).> What levels of RAID are you using (0,1,10,5,50)?\n>It's a quad opteron system. RAID controller is a 4 channel LSILogic Megaraid320 connected to 10 15k 36.7G SCSI disks. The disks are configured in 5mirrored partitions. The pg_xlog is on one mirror and the data and indexes\nare spread over the other 4 using tablespaces. These numbers frompg_stat_user_tables are from about 2 hours earlier today on this one table.idx_scan 20578690idx_tup_fetch 35866104841\nn_tup_ins 1940081n_tup_upd 1604041n_tup_del 1880424---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives?\n http://archives.postgresql.org",
"msg_date": "Wed, 5 Oct 2005 11:03:03 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes on ramdisk"
},
{
"msg_contents": "> What kind of order of improvement do you need to see?\n>\n\nA lot since the load on the system is expected to increase by up to 100% over \nthe next 6 months.\n\n> What period are these number for? Were they collected over 1 hour, 1 day, 1\n> month?\n>\n\nI thought I mentioned that in the earlier post but it was from a 2 hour \nperiod. It's a busy system.\n\n> How much Cache do you have on the controller?\n>\n\n64Mbytes but I don't think that's an issue. As I mentioned in the first post \nthe table that is the bottleneck has indexes on 15 columns and is seeing a \nlot of inserts, deletes and updates. The indexes are spread out over the 5 \nmirrors but it's still a couple of writes per mirror for each operation. I'm \ngoing to order an SSD which should give us a lot more headroom than trying to \nrearrange the RAID setup.\n\n",
"msg_date": "Wed, 5 Oct 2005 19:46:24 -0400",
"msg_from": "Emil Briggs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Indexes on ramdisk"
}
] |
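For reference, a hedged sketch of the ramdisk idea from this thread using tablespaces (available from 8.0 on). The mount point, size, tablespace and index names are all invented; and, as Emil notes, anything placed on a RAM-backed filesystem disappears on reboot or power loss, so how much manual cleanup (recreating the tablespace directory, dropping and rebuilding the affected indexes) is needed afterwards is exactly the open question in the thread.

-- As root, something like:
--   mount -t tmpfs -o size=2g tmpfs /mnt/pg_ram && chown postgres /mnt/pg_ram
CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';

-- Move a hot index there (or create new indexes with TABLESPACE ram_space):
ALTER INDEX some_busy_index SET TABLESPACE ram_space;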
[
{
"msg_contents": "Unfortunately, no matter what I say or do, I'm not going to please\nor convince anyone who has already have made their minds up\nto the extent that they post comments like Mr Trainor's below.\nHis response style pretty much proves my earlier point that this\nis presently a religious issue within the pg community.\n\nThe absolute best proof would be to build a version of pg that does\nwhat Oracle and DB2 have done and implement it's own DB\nspecific memory manager and then compare the performance\nbetween the two versions on the same HW, OS, and schema.\n\nThe second best proof would be to set up either DB2 or Oracle so\nthat they _don't_ use their memory managers and compare their\nperformance to a set up that _does_ use said memory managers\non the same HW, OS, and schema.\n\nI don't currently have the resources for either experiment.\n\nSome might even argue that IBM (where Codd and Date worked)\nand Oracle just _might_ have had justification for the huge effort\nthey put into developing such infrastructure. \n\nThen there's the large library of research on caching strategies\nin just about every HW and SW domain, including DB theory,\nthat points put that the more context dependent, ie application\nor domain specific awareness, caching strategies are the better\nthey are.\n\nMaybe after we do all we can about physical IO and sorting\nperformance I'll take on the religious fanatics on this one.\n\nOne problem set at a time.\nRon \n \n\n-----Original Message-----\nFrom: \"Joshua D. Drake\" <[email protected]>\nSent: Oct 4, 2005 9:32 PM\nTo: \"Douglas J. Trainor\" <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Is There Any Way ....\n\nDouglas J. Trainor wrote:\n\n>\n> Ron Peacetree sounds like someone talking out of his _AZZ_.\n> He can save his unreferenced flapdoodle for his SQL Server\n> clients. Maybe he will post references so that we may all\n> learn at the feet of Master Peacetree. :-)\n\nAlthough I agree that I would definitely like to see some test cases\nfor what Ron is talking about, I don't think that resorting to insults\nis going to help the situation.\n\nRon, if you would please -- provide some test cases for what you are\ndescribing I am sure that anyone would love to see them. We are all\nfor improving PostgreSQL.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Tue, 4 Oct 2005 23:06:54 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 11:06:54PM -0400, Ron Peacetree wrote:\n\n> Some might even argue that IBM (where Codd and Date worked)\n> and Oracle just _might_ have had justification for the huge effort\n> they put into developing such infrastructure. \n\nThe OS and FS world is very, very different now than it was when\nthe Oracle and DB2 architectures were being crafted. What may have\nbeen an excellent development effort then may not provide such good\nROI now.\n\n> Then there's the large library of research on caching strategies\n> in just about every HW and SW domain, including DB theory,\n> that points put that the more context dependent, ie application\n> or domain specific awareness, caching strategies are the better\n> they are.\n> \n> Maybe after we do all we can about physical IO and sorting\n> performance I'll take on the religious fanatics on this one.\n\nActually, the main \"religious fanatic\" I've seen recently is yourself.\nWhile I have a gut feel that some of the issues you raise could\ncertainly do with further investigation, I'm not seeing that much from\nyou other than statements that muchof what postgresql does is wrong\n(not \"wrong for your Ron's use case\", but \"wrong in every respect\").\n\nA little less arrogance and a little more \"here are some possibilities\nfor improvement\", \"here is an estimate of the amount of effort that\nmight be needed\" and \"here are some rough benchmarks showing the\npotential return on that investment\" would, at the very least, make\nthe threads far less grating to read.\n\nCheers,\n Steve\n",
"msg_date": "Tue, 4 Oct 2005 20:50:13 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "On Tue, Oct 04, 2005 at 11:06:54PM -0400, Ron Peacetree wrote:\n> Unfortunately, no matter what I say or do, I'm not going to please\n> or convince anyone who has already have made their minds up\n> to the extent that they post comments like Mr Trainor's below.\n> His response style pretty much proves my earlier point that this\n> is presently a religious issue within the pg community.\n\nReligious for some. Conservative for others.\n\nSometimes people need to see the way, before they are willing to\naccept it merely on the say so of another person. In some circles, it\nis called the scientific method... :-)\n\nAlso, there is a cost to complicated specific optimizations. They can\nbe a real maintenance and portability head-ache. What is the value ratio\nof performance to maintenance or portability?\n\n> The absolute best proof would be to build a version of pg that does\n> what Oracle and DB2 have done and implement it's own DB\n> specific memory manager and then compare the performance\n> between the two versions on the same HW, OS, and schema.\n\nNot necessarily. Even if a version of PostgreSQL were to be written to\nfunction in this new model, there would be no guarantee that it was\nwritten in the most efficient manner possible. Performance could show\nPostgreSQL using its own caching, and disk space management\nimplementation, and performing poorly. The only true, and accurate way\nwould be to implement, and then invest time by those most competent to\ntest, and optimize the implementation. At this point, it would become\na moving target, as those who believe otherwise, would be free to\npursue using more efficient file systems, or modifications to the\noperating system to better co-operate with PostgreSQL.\n\nI don't think there can be a true answer to this one. The more\ninnovative, and clever people, will always be able to make their\nsolution work better. If the difference in performance was really so\nobvious, there wouldn't be doubters on either side. It would be clear\nto all. The fact is, there is reason to doubt. Perhaps not doubt that\nthe final solution would be more efficient, but rather, the reason\nto doubt that the difference in efficiency would be significant.\n\n> The second best proof would be to set up either DB2 or Oracle so\n> that they _don't_ use their memory managers and compare their\n> performance to a set up that _does_ use said memory managers\n> on the same HW, OS, and schema.\n\nSame as above. If Oracle was designed to work with the functionality,\nthen disabling the functionality, wouldn't prove that an efficient\ndesign would perform equally poorly, or even, poorly at all. I think\nit would be obvious that Oracle would have invested most of their\ndollars into the common execution paths, with the expected\nfunctionality present.\n\n> I don't currently have the resources for either experiment.\n\nThis is the real problem. :-)\n\n> Some might even argue that IBM (where Codd and Date worked)\n> and Oracle just _might_ have had justification for the huge effort\n> they put into developing such infrastructure. \n\nOr, not. 
They might just have more money to throw at the problem,\nand be entrenched into their solution to the point that they need\nto innovate to ensure that their solution appears to be the best.\n\n> Then there's the large library of research on caching strategies\n> in just about every HW and SW domain, including DB theory,\n> that points put that the more context dependent, ie application\n> or domain specific awareness, caching strategies are the better\n> they are.\n\nA lot of this is theory. It may be good theory, but there is no\nguarantee that the variables listed in these theories match, or\nproperly estimate the issues that would be found in a real\nimplementation.\n\n> Maybe after we do all we can about physical IO and sorting\n> performance I'll take on the religious fanatics on this one.\n> One problem set at a time.\n\nIn any case, I'm on your side - in theory. Intuitively, I don't\nunderstand how anybody could claim that a general solution could ever\nbe faster than a specific solution. Anybody who claimed this, would\ngo down in my books as a fool. It should be obvious to these people\nthat, as an extreme, the entire operating system caching layer, and\nthe entire file system layer could be inlined into PostgreSQL,\navoiding many of the expenses involved in switching back and forth\nbetween user space and system space, leaving a more efficient,\nalthough significantly more complicated solution.\n\nWhether by luck, or by experience of those involved, I haven't seen\nany senior PostgreSQL developers actually stating that it couldn't be\nfaster. Instead, I've seen it claimed that the PostgreSQL developers\ndon't have the resources to attack this problem, as there are other\nfar more pressing features, product defects, and more obviously\nbeneficial optimization opportunities to work on. Memory management,\nor disk management, is \"good enough\" as provided by decent operating\nsystems, and the itch just isn't bad enough to scratch yet. They\nremain unconvinced that the gain in performance, would be worth the\ncost of maintaining this extra complexity in the code base.\n\nIf you believe the case can be made, it is up to you to make it.\n\nCheers!\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 4 Oct 2005 23:50:24 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "\nHey, you can say what you want about my style, but you\nstill haven't pointed to even one article from the vast literature\nthat you claim supports your argument. And I did include a\nsmiley. Your original email that PostgreSQL is wrong and\nthat you are right led me to believe that you, like others making\nsuch statements, would not post your references. You remind\nme of Ted Nelson, who wanted the computing center at\nthe University of Illinois at Chicago to change their systems\njust for him. BTW, I'm a scientist -- I haven't made my mind\nup about anything. I really am interested in what you say,\nif there is any real work backing up your claims such that\nit would impact average cases.\n\nAny app designer can conceive of many ways to game the\nserver to their app's advantage -- I'm not interested in that\npotboiler.\n\n douglas\n\nOn Oct 4, 2005, at 11:06 PM, Ron Peacetree wrote:\n\n> Unfortunately, no matter what I say or do, I'm not going to please\n> or convince anyone who has already have made their minds up\n> to the extent that they post comments like Mr Trainor's below.\n> His response style pretty much proves my earlier point that this\n> is presently a religious issue within the pg community.\n>\n> The absolute best proof would be to build a version of pg that does\n> what Oracle and DB2 have done and implement it's own DB\n> specific memory manager and then compare the performance\n> between the two versions on the same HW, OS, and schema.\n>\n> The second best proof would be to set up either DB2 or Oracle so\n> that they _don't_ use their memory managers and compare their\n> performance to a set up that _does_ use said memory managers\n> on the same HW, OS, and schema.\n>\n> I don't currently have the resources for either experiment.\n>\n> Some might even argue that IBM (where Codd and Date worked)\n> and Oracle just _might_ have had justification for the huge effort\n> they put into developing such infrastructure.\n>\n> Then there's the large library of research on caching strategies\n> in just about every HW and SW domain, including DB theory,\n> that points put that the more context dependent, ie application\n> or domain specific awareness, caching strategies are the better\n> they are.\n>\n> Maybe after we do all we can about physical IO and sorting\n> performance I'll take on the religious fanatics on this one.\n>\n> One problem set at a time.\n> Ron\n\n",
"msg_date": "Wed, 5 Oct 2005 07:00:32 -0400",
"msg_from": "\"Douglas J. Trainor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "On Tue, 4 Oct 2005 23:06:54 -0400 (EDT)\nRon Peacetree <[email protected]> wrote:\n\n> Then there's the large library of research on caching strategies\n> in just about every HW and SW domain, including DB theory,\n> that points put that the more context dependent, ie application\n> or domain specific awareness, caching strategies are the better\n> they are.\n\n Isn't this also a very strong argument for putting your caching\n into your application and not at the database level? \n \n As you say the more \"application or domain specific\" it is the better.\n I don't see how PostgreSQL is going to magically determine what\n is perfect for everyone's differing needs and implement it for you. \n\n Even rudimentary controls such \"always keep this\n table/index/whatever in RAM\" aren't as fine grained or specific\n enough to get full benefit. \n\n My suggestion is to use something like memcached to store your\n data in, based on the particular needs of your application. This\n puts all of the control in the hands of the programmer where, in\n my opinion, it belongs. \n\n Just to clarify, I'm not entirely against the idea, but I certainly\n think there are other areas of PostgreSQL we should be focusing our\n efforts. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Wed, 5 Oct 2005 10:05:43 -0500",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
},
{
"msg_contents": "A blast from the past is forwarded below.\n\n douglas\n\nBegin forwarded message:\n\n> From: Tom Lane <[email protected]>\n> Date: August 23, 2005 3:23:43 PM EDT\n> To: Donald Courtney <[email protected]>\n> Cc: [email protected], Frank Wiles <[email protected]>, \n> gokulnathbabu manoharan <[email protected]>\n> Subject: Re: [PERFORM] Caching by Postgres\n>\n> Donald Courtney <[email protected]> writes:\n>> I am not alone in having the *expectation* that a database should have\n>> some cache size parameter and the option to skip the file system. If\n>> I use oracle, sybase, mysql and maxdb they all have the ability to\n>> size a data cache and move to 64 bits.\n>\n> And you're not alone in holding that opinion despite having no shred\n> of evidence that it's worthwhile expanding the cache that far.\n>\n> However, since we've gotten tired of hearing this FUD over and over,\n> 8.1 will have the ability to set shared_buffers as high as you want.\n> I expect next we'll be hearing from people complaining that they\n> set shared_buffers to use all of RAM and performance went into the\n> tank ...\n>\n> \t\t\tregards, tom lane\n\n\nOn Oct 4, 2005, at 11:06 PM, Ron Peacetree wrote:\n\n> Unfortunately, no matter what I say or do, I'm not going to please\n> or convince anyone who has already have made their minds up\n> to the extent that they post comments like Mr Trainor's below.\n> His response style pretty much proves my earlier point that this\n> is presently a religious issue within the pg community.\n>\n> The absolute best proof would be to build a version of pg that does\n> what Oracle and DB2 have done and implement it's own DB\n> specific memory manager and then compare the performance\n> between the two versions on the same HW, OS, and schema.\n>\n> The second best proof would be to set up either DB2 or Oracle so\n> that they _don't_ use their memory managers and compare their\n> performance to a set up that _does_ use said memory managers\n> on the same HW, OS, and schema.\n>\n> I don't currently have the resources for either experiment.\n>\n> Some might even argue that IBM (where Codd and Date worked)\n> and Oracle just _might_ have had justification for the huge effort\n> they put into developing such infrastructure.\n>\n> Then there's the large library of research on caching strategies\n> in just about every HW and SW domain, including DB theory,\n> that points put that the more context dependent, ie application\n> or domain specific awareness, caching strategies are the better\n> they are.\n>\n> Maybe after we do all we can about physical IO and sorting\n> performance I'll take on the religious fanatics on this one.\n>\n> One problem set at a time.\n> Ron",
"msg_date": "Wed, 5 Oct 2005 15:52:19 -0400",
"msg_from": "\"Douglas J. Trainor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is There Any Way ...."
}
] |
[
{
"msg_contents": "First off, Mr. Trainor's response proves nothing about anyone or\nanything except Mr. Trainor.\n \nI'm going to offer an opinion on the caching topic. I don't have\nany benchmarks; I'm offering a general sense of the issue based on\ndecades of experience, so I'll give a short summary of that.\n \nI've been earning my living by working with computers since 1972,\nand am the architect and primary author of a little-known\ndatabase product (developed in 1984) which saw tens of thousands\nof installations in various vertical markets. (Last I checked, a\ncouple years ago, it was still being used statewide by one state\ngovernment after a multi-million dollar attempt to replace it with a\npopular commercial database product failed.) I've installed and\ntuned many other database products over the years. I'm just getting\nto know PostgreSQL, and am pretty excited about it.\n \nNow on to the meat of it.\n \nMy experience is that a DBMS can improve performance by caching\ncertain types of data. In the product I developed, we had a fairly\nsmall cache which used a weighting algorithm for what to keep\n(rather than simply relying on LRU). Index pages got higher weight\nthan data pages; the higher in the index, the higher the weight.\nRecent access got higher weight than older access, although it took\nquite a while for the older access to age out entirely. This\nimproved performance quite a bit over a generalized caching\nproduct alone.\n \nHowever, there was a point of diminishing return. My sense is that\nevery 10% you add to a \"smart\" cache yields less benefit at a\nhigher cost, so beyond a certain point, taking RAM from the general\ncache to expand the smart cache degrades performance. Clever\nprogramming techniques can shift the break-even point, but I very\nmuch doubt it can be eliminated entirely, unless the ratio of\nperformance between CPU+RAM and persistent storage is much\nmore extreme than I've ever seen.\n \nThere is another issue, which has been raised from time to time in\nthese lists, but not enunciated clearly enough in my view. These\ndiscussions about caching generally address maximum throughput,\nwhile there are times when it is important that certain tables can\nbe queried very quickly, even if it hurts overall throughput. As an\nexample, there can be tables which are accessed as a user types in\na window and tabs around from one GUI control to another. The\nuser perception of the application performance is going to depend\nPRIMARILY on how quickly the GUI renders the results of these\nqueries; if the run time for a large report goes up by 10%, they\nwill probably not notice. This is a situation where removing RAM\nfrom a generalized cache, or one based on database internals, to\ncreate an \"application specific\" cache can yield big rewards.\n \nOne client has addressed this in a commercial product by defining\na named cache large enough to hold these tables, and binding those\ntables to the cache. 
One side benefit is that such caches can be\ndefined as \"relaxed LRU\" -- meaning that they eliminate the\noverhead of tracking accesses, since they can assume that data will\nrarely, if ever, be discarded from the cache.\n \nIt seems to me that in the PostgreSQL world, this would currently\nbe addressed by binding the tables to a tablespace where the file\nsystem, controller, or drive(s) would cache the data, although this\nis somewhat less flexible than the \"named cache\" approach -- unless\nthere is a file system that can layer a cache on top of a reference to\nsome other file system's space. (And let's not forget the many OS\nenvironments in which people use PostgreSQL.) So I do see that\nthere would be benefit to adding a feature to PostgreSQL to define\ncaches and bind tables or indexes to them.\n \nSo I do think that it is SMART of PostgreSQL to rely on the\nincreasingly sophisticated file systems to provide the MAIN cache.\nI suspect that a couple types of smaller \"smart\" caches in front of\nthis could boost performance, and it might be a significant boost.\nI'm not sure what the current shared memory is used for; perhaps\nthis is already caching specific types of structures for the DBMS.\nI'm pretty sure that programmers of GUI apps would appreciate the\nnamed cache feature, so they could tune the database for snappy\nGUI response, even under heavy load.\n \nI realize this is short on specifics -- I'm shooting for perspective.\nFor the record, I don't consider myself particularly religious on the\ntopic, but I do pull back a little at arguments which sound strictly\nacademic -- I've found that most of what I've drawn from those\ncircles has needed adjustment in solving real-world problems.\n(Particularly when algorithms optimize for best worst-case\nperformance. I've found users are much happier with best typical\ncase performance as long as the product of worst case performance\nand worst case frequency is low.)\n \nLike many others who have posted on the topic, I am quite\nprepared to alter my views in the face of relavent evidence.\n \nFeel free to laugh at the old fart who decided to sip his Bushmill's\nwhile reading through this thread and try to run with the young lions.\nAs someone else recently requested, though, please don't point while\nyou laugh -- that's just rude. :-)\n \n-Kevin\n \n \n>>> Ron Peacetree <[email protected]> 10/04/05 10:06 PM >>>\nUnfortunately, no matter what I say or do, I'm not going to please\nor convince anyone who has already have made their minds up\nto the extent that they post comments like Mr Trainor's below.\nHis response style pretty much proves my earlier point that this\nis presently a religious issue within the pg community.\n\n",
"msg_date": "Wed, 05 Oct 2005 01:16:23 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is There Any Way ...."
}
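As a concrete (and heavily hedged) sketch of the tablespace workaround Kevin mentions above, assuming PostgreSQL 8.0 or later and a mount point backed by fast, aggressively cached storage; the tablespace name, path, and table are hypothetical:

-- Hypothetical names and path throughout.
CREATE TABLESPACE gui_cache LOCATION '/mnt/fastcache/pgdata';

ALTER TABLE lookup_codes SET TABLESPACE gui_cache;
ALTER INDEX lookup_codes_pkey SET TABLESPACE gui_cache;

This only moves the files; unlike a true named cache there is no way to pin the pages in RAM, so the effect depends entirely on how the underlying device or file system caches them.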
] |
[
{
"msg_contents": "\nThanks. \nI've already understood that \nI need to post it in another list.\n\nSorry for wasting your precious time. \n\n--\nRajesh R\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Wednesday, October 05, 2005 2:24 PM\nTo: R, Rajesh (STSD)\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Query in SQL statement\n\nR, Rajesh (STSD) wrote:\n> \n> Am trying to port a mysql statement to postgres.\n> \n> Please help me in finding the error in this,\n\nCan I recommend the reference section of the manuals for this sort of\nthing? There is an excellent section detailing the valid SQL for the\nCREATE TABLE command.\n\nAlso - the pgsql-hackers list is for discussion of database development,\nand the performance list is for performance problems. This would be\nbetter posted on pgsql-general or -sql or -novice.\n\n> CREATE SEQUENCE ai_id;\n\nThis line is causing the first error:\n > ERROR: relation \"ai_id\" already exists\n\nThat's because you've already successfully created the sequence, so it\nalready exists. Either drop it and recreate it, or stop trying to\nrecreate it.\n\n> CREATE TABLE badusers (\n> id int DEFAULT nextval('ai_id') NOT NULL,\n> UserName varchar(30),\n> Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,\n\nWell, \"Date\" is a type-name, \"datetime\" isn't and even if it was\n\"0000-00-00\" isn't a valid date is it?\n\n> Reason varchar(200),\n> Admin varchar(30) DEFAULT '-',\n> PRIMARY KEY (id),\n> KEY UserName (UserName),\n> KEY Date (Date)\n\nThe word \"KEY\" isn't valid here either - are you trying to define an\nindex? If so, see the \"CREATE INDEX\" section of the SQL reference.\n\nhttp://www.postgresql.org/docs/8.0/static/sql-commands.html\n\nIf you reply to this message, please remove the pgsql-hackers CC:\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 5 Oct 2005 14:33:15 +0530",
"msg_from": "\"R, Rajesh (STSD)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Query in SQL statement"
},
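Pulling Richard's points together, one possible PostgreSQL translation of the MySQL DDL quoted above is sketched below. The timestamp default and the index names are assumptions (MySQL's '0000-00-00 00:00:00' has no direct PostgreSQL equivalent), and MySQL's inline KEY clauses become separate CREATE INDEX statements:

-- A possible translation; default and index names are guesses.
CREATE SEQUENCE ai_id;

CREATE TABLE badusers (
    id       integer     DEFAULT nextval('ai_id') NOT NULL,
    username varchar(30),
    "date"   timestamp   NOT NULL DEFAULT now(),  -- "datetime" is a MySQL type name
    reason   varchar(200),
    admin    varchar(30) DEFAULT '-',
    PRIMARY KEY (id)
);

CREATE INDEX badusers_username_idx ON badusers (username);
CREATE INDEX badusers_date_idx ON badusers ("date");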
{
"msg_contents": "R, Rajesh (STSD) wrote:\n> Thanks. \n> I've already understood that \n> I need to post it in another list.\n> \n> Sorry for wasting your precious time. \n\nNo time wasted. It was a perfectly reasonable question, just to the \nwrong lists.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 05 Oct 2005 10:09:20 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query in SQL statement"
}
] |
[
{
"msg_contents": "[to K C:] sorry, was out on vactation all last week. I was visualizing\nthe problem incorrectly anyways...\n\nJim wrote:\n> That function is not immutable, it should be defined as stable.\n\nThat is 100% correct: however now and then I declare stable functions as\nimmutable in some cases because the planner treats them differently with\nno side effects...this is a hack of course...see my earlier suggestion\nto try both immutable and stable versions. I can give a pretty good\nexample of when this can make a big difference.\n \n> PostgreSQL doesn't pre-compile functions, at least not until 8.1 (and\n> I'm not sure how much those are pre-compiled, though they are\n> syntax-checked at creation). Do you get the same result time when you\n> run it a second time? What time do you get from running just the\n> function versus the SQL in the function?\n\nplpgsql functions are at least partially compiled (sql functions afaik\nare not), in that a internal state is generated following the first\nexecution. This is the cause of all those infernal 'invalid table oid'\nerrors.\n \n> Also, remember that every layer you add to the cake means more work\nfor\n> the database. If speed is that highly critical you'll probably want to\n> not wrap things in functions, and possibly not use views either.\n\nThe overhead of the function/view is totally inconsequential next to the\nplanner choosing a suboptimal plan. The purpose of the function is to\ncoerce the planner into choosing the correct plan.\n\n> Also, keep in mind that getting below 1ms doesn't automatically mean\n> you'll be able to scale to 1000TPS. Things will definately change when\n> you load the system down, so if performance is that critical you\nshould\n> start testing with the system under load if you're not already.\n\n",
"msg_date": "Wed, 5 Oct 2005 09:22:48 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue"
}
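A hedged illustration of the STABLE-declared-as-IMMUTABLE trick Merlin describes above; the table and function names are invented, and the mislabelled version is only safe when, as he says, the data cannot change out from under the plan:

-- price_list / lookup_price are made-up names for illustration only.
-- Honest declaration: the result may change from statement to statement.
CREATE FUNCTION lookup_price(int) RETURNS numeric AS '
    SELECT price FROM price_list WHERE item_id = $1;
' LANGUAGE sql STABLE;

-- The hack: the same body declared IMMUTABLE, which lets the planner
-- pre-evaluate the call (for example, inside an index condition).
CREATE FUNCTION lookup_price_frozen(int) RETURNS numeric AS '
    SELECT price FROM price_list WHERE item_id = $1;
' LANGUAGE sql IMMUTABLE;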
] |
[
{
"msg_contents": "Can anyone tell me what precisely a WAL buffer contains,\nso that I can compute an appropriate setting for\nwal_buffers (in 8.0.3)?\n\nI know the documentation suggests there is little\nevidence that supports increasing wal_buffers, but we\nare inserting a large amount of data that, I believe,\neasily exceeds the default 64K in a single transaction.\nWe are also very sensitive to write latency.\n\nAs background, we are doing a sustained insert of 2.2\nbillion rows in 1.3 million transactions per day. Thats\nabout 1700 rows per transaction, at (roughly) 50 bytes\nper row.\n\n",
"msg_date": "Wed, 05 Oct 2005 09:23:45 -0400",
"msg_from": "Ian Westmacott <[email protected]>",
"msg_from_op": true,
"msg_subject": "wal_buffers"
},
{
"msg_contents": "\nOn Oct 5, 2005, at 8:23 AM, Ian Westmacott wrote:\n\n> Can anyone tell me what precisely a WAL buffer contains,\n> so that I can compute an appropriate setting for\n> wal_buffers (in 8.0.3)?\n>\n> I know the documentation suggests there is little\n> evidence that supports increasing wal_buffers, but we\n> are inserting a large amount of data that, I believe,\n> easily exceeds the default 64K in a single transaction.\n> We are also very sensitive to write latency.\n>\n> As background, we are doing a sustained insert of 2.2\n> billion rows in 1.3 million transactions per day. Thats\n> about 1700 rows per transaction, at (roughly) 50 bytes\n> per row.\n\nIan,\n\nThe WAL Configuration chapter (25.2) has a pretty good discussion of \nhow wal_buffers is used:\n\nhttp://www.postgresql.org/docs/8.0/static/wal-configuration.html\n\nYou might also take a look at Josh Berkus' recent testing on this \nsetting:\n\nhttp://www.powerpostgresql.com/\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nStrategic Open Source: Open Your i�\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n",
"msg_date": "Thu, 6 Oct 2005 01:39:20 -0500",
"msg_from": "\"Thomas F. O'Connell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_buffers"
},
{
"msg_contents": "On Thu, 2005-10-06 at 02:39, Thomas F. O'Connell wrote:\n> The WAL Configuration chapter (25.2) has a pretty good discussion of \n> how wal_buffers is used:\n> \n> http://www.postgresql.org/docs/8.0/static/wal-configuration.html\n> \n> You might also take a look at Josh Berkus' recent testing on this \n> setting:\n> \n> http://www.powerpostgresql.com/\n\nThanks; I'd seen the documentation, but not Josh Berkus'\ntesting.\n\nFor my part, I don't have a large number of concurrent\nconnections, only one. But it is doing large writes,\nand XLogInsert is number 2 on the profile (with\nLWLockAcquire and LWLockRelease close behind). I suppose\nthat is expected, but lead by the documentation I wanted\nto make sure XLogInsert always had some buffer space to\nplay with.\n\n\t--Ian\n\n\n",
"msg_date": "Thu, 06 Oct 2005 08:56:31 -0400",
"msg_from": "Ian Westmacott <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wal_buffers"
},
{
"msg_contents": "On Thu, Oct 06, 2005 at 08:56:31AM -0400, Ian Westmacott wrote:\n> On Thu, 2005-10-06 at 02:39, Thomas F. O'Connell wrote:\n> > The WAL Configuration chapter (25.2) has a pretty good discussion of \n> > how wal_buffers is used:\n> > \n> > http://www.postgresql.org/docs/8.0/static/wal-configuration.html\n> > \n> > You might also take a look at Josh Berkus' recent testing on this \n> > setting:\n> > \n> > http://www.powerpostgresql.com/\n> \n> Thanks; I'd seen the documentation, but not Josh Berkus'\n> testing.\n> \n> For my part, I don't have a large number of concurrent\n> connections, only one. But it is doing large writes,\n> and XLogInsert is number 2 on the profile (with\n> LWLockAcquire and LWLockRelease close behind). I suppose\n> that is expected, but lead by the documentation I wanted\n> to make sure XLogInsert always had some buffer space to\n> play with.\n\nIf you are using a single connection, you are wasting lots of cycles\njust waiting for the disk to spin. Were you to use multiple\nconnections, some transactions could be doing some useful work while\nothers are waiting for their transaction to be committed.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n",
"msg_date": "Thu, 6 Oct 2005 09:23:45 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_buffers"
},
{
"msg_contents": "Ian, Thomas,\n\n> Thanks; I'd seen the documentation, but not Josh Berkus'\n> testing.\n\nBTW, that's still an open question for me. I'm now theorizing that it's \nbest to set wal_buffers to the expected maximum number of concurrent write \nconnections. However, I don't have enough test systems to test that \nmeaningfully.\n\nYour test results will help.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 7 Oct 2005 11:51:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_buffers"
}
] |
[
{
"msg_contents": "** Low Priority **\n\nHuman feedback from testers and users has proven pretty effective\nat catching errors in the \"human assisted\" cache configuration. When\npeople setting up the servers have missed the named cache configuration,\nand all they had was the single general purpose cache, it has been caught\nbecause of user complaints on performance.\n \nThere was an attempt made to simulate database queries -- hitting a\nclient side cache on some of the roughly100 tables (out of 300 in the well\nnormalized schema) which fit this pattern of usage. It didn't prove very\ncost effective. It just makes more sense to allow the DBAs to tweek\ndatabase performance through database configuration changes than to\njump through that many hoops in application code to try to achieve it\nwhere it becomes an issue.\n \nAs far as I know, you can't use this technique in Microsoft SQL Server or\nOracle. They are using Sybase Adaptive Server Enterprise (ASE). I\nbelieve named caches were added in version 12.0, long after Microsoft\nsplit off with their separate code stream based on the Sybase effort.\n \n-Kevin\n \n \n>>> \"Dario\" <[email protected]> 10/05/05 6:16 AM >>>\nI'm sure there will be cases when some human assisted caching algorithm will\nperform better than an mathetical statistical based design, but it will also\ndepend on the \"human\". And it probably will make thing worse when workload\nchanges and human doesn't realize. It must be considered that, today,\nhardware cost is not the %90 of budget that it used to be. Throwing hardware\nat the system can be as much expensive as throwing certified \"it stuff\".\n(just think in coffee budget! :-) )\n\nIf you need to improve \"user perception\", you can do others things. Like\ncaching a table in your client (with a trigger for any change on table X\nupdating a table called \"timestamp_table_change\" and a small select to this\ntable, you can easily know when you must update your client). If it is a\napplication server, serving http request, then \"user perception\" will be\nsticked to bandwidth AND application server (some of them have cache for\nrequest).\n\nFYI, I don't recall a mechanism in MSSQL to cache a table in buffers. Oracle\nhas some structures to allow that. (you know) It uses his own buffer. Since\nversion 9i, you can set three different data buffers, one (recycled cache)\nfor low usage tables (I mean tables with blocks which don't have too much\nchance to be queried again, like a very large historical table) , one for\nhigh usage tables (keep cache), and the regular one (difference is in\nalgorithm). And you must also set a buffer cache size for tablespaces with\ndifferent block size. But there is no such thing as \"create table x keep\nentirenly in buffer\". And above all things, oracle doc always states \"first,\ntune design, then tune queries, then start tunning engine\".\n\ngreetings.\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 05 Oct 2005 08:51:43 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is There Any Way ...."
}
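For concreteness, here is a rough sketch of the client-side invalidation scheme Dario describes in the quoted message (all object names, including the cached table x, are hypothetical): a statement-level trigger stamps a small bookkeeping table, and the client polls that cheap one-row select to decide when its local copy is stale.

-- All names below (including table x) are hypothetical.
CREATE TABLE timestamp_table_change (
    table_name text PRIMARY KEY,
    changed_at timestamp NOT NULL DEFAULT now()
);
INSERT INTO timestamp_table_change (table_name) VALUES ('x');

CREATE FUNCTION note_x_change() RETURNS trigger AS '
BEGIN
    UPDATE timestamp_table_change SET changed_at = now()
     WHERE table_name = ''x'';
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER x_changed
    AFTER INSERT OR UPDATE OR DELETE ON x
    FOR EACH STATEMENT EXECUTE PROCEDURE note_x_change();

-- Client side: refresh the cached copy of x only when this value moves.
-- SELECT changed_at FROM timestamp_table_change WHERE table_name = 'x';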
] |
[
{
"msg_contents": "> It's a quad opteron system. RAID controller is a 4 channel LSILogic\n> Megaraid\n> 320 connected to 10 15k 36.7G SCSI disks. The disks are configured in\n5\n> mirrored partitions. The pg_xlog is on one mirror and the data and\nindexes\n> are spread over the other 4 using tablespaces. These numbers from\n> pg_stat_user_tables are from about 2 hours earlier today on this one\n> table.\n> \n> \n> idx_scan 20578690\n> idx_tup_fetch 35866104841\n> n_tup_ins 1940081\n> n_tup_upd 1604041\n> n_tup_del 1880424\n\nIs your raid controller configured to buffer your writes? How much RAM\nare you packing? Are you running 64 bit?\n\nMerlin\n",
"msg_date": "Wed, 5 Oct 2005 10:55:36 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Indexes on ramdisk"
}
] |
[
{
"msg_contents": "Nope - it would be disk wait.\n\nCOPY is CPU bound on I/O subsystems faster that 50 MB/s on COPY (in) and about 15 MB/s (out).\n\n- Luke\n\n -----Original Message-----\nFrom: \tMichael Stone [mailto:[email protected]]\nSent:\tWed Oct 05 09:58:41 2005\nTo:\tMartijn van Oosterhout\nCc:\[email protected]; [email protected]\nSubject:\tRe: [HACKERS] [PERFORM] A Better External Sort?\n\nOn Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:\n>COPY TO /dev/null WITH binary\n>13MB/s 55% user 45% system (ergo, CPU bound)\n[snip]\n>the most expensive. But it does point out that the whole process is\n>probably CPU bound more than anything else.\n\nNote that 45% of that cpu usage is system--which is where IO overhead\nwould end up being counted. Until you profile where you system time is\ngoing it's premature to say it isn't an IO problem.\n\nMike Stone\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Wed, 5 Oct 2005 11:24:07 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:\n>Nope - it would be disk wait.\n\nI said I/O overhead; i.e., it could be the overhead of calling the\nkernel for I/O's. E.g., the following process is having I/O problems:\n\ntime dd if=/dev/sdc of=/dev/null bs=1 count=10000000 \n10000000+0 records in \n10000000+0 records out \n10000000 bytes transferred in 8.887845 seconds (1125132 bytes/sec) \n \nreal 0m8.889s \nuser 0m0.877s \nsys 0m8.010s \n\nit's not in disk wait state (in fact the whole read was cached) but it's\nonly getting 1MB/s. \n\nMike Stone\n",
"msg_date": "Wed, 05 Oct 2005 11:33:49 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On 10/6/05, Michael Stone <[email protected]> wrote:\n> On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:\n> >Nope - it would be disk wait.\n>\n> I said I/O overhead; i.e., it could be the overhead of calling the\n> kernel for I/O's. E.g., the following process is having I/O problems:\n>\n> time dd if=/dev/sdc of=/dev/null bs=1 count=10000000\n> 10000000+0 records in\n> 10000000+0 records out\n> 10000000 bytes transferred in 8.887845 seconds (1125132 bytes/sec)\n>\n> real 0m8.889s\n> user 0m0.877s\n> sys 0m8.010s\n>\n> it's not in disk wait state (in fact the whole read was cached) but it's\n> only getting 1MB/s.\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\nI think you only proved that dd isn't the smartest tool out there... or\nthat using it with a blocksize of 1 byte doesn't make too much sense.\n\n\n[andrej@diggn:~]$ time dd if=/dev/sr0 of=/dev/null bs=2048 count=4883\n4883+0 records in\n4883+0 records out\n\nreal 0m6.824s\nuser 0m0.010s\nsys 0m0.060s\n[andrej@diggn:~]$ time dd if=/dev/sr0 of=/dev/null bs=1 count=10000000\n10000000+0 records in\n10000000+0 records out\n\nreal 0m18.523s\nuser 0m7.410s\nsys 0m10.310s\n[andrej@diggn:~]$ time dd if=/dev/sr0 of=/dev/null bs=8192 count=1220\n1220+0 records in\n1220+0 records out\n\nreal 0m6.796s\nuser 0m0.000s\nsys 0m0.070s\n\nThat's with caching, and all. Or did I miss the point of your post\ncompletely? Interestingly, the CPU usage with the bs=1 goes up\nto 97%, it stays at a mellow 3% with the 8192 and 2048.\n\n\nCheers,\nAndrej\n",
"msg_date": "Thu, 6 Oct 2005 08:43:54 +1300",
"msg_from": "Andrej Ricnik-Bay <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Michael,\n\nOn 10/5/05 8:33 AM, \"Michael Stone\" <[email protected]> wrote:\n\n> real 0m8.889s \n> user 0m0.877s \n> sys 0m8.010s \n> \n> it's not in disk wait state (in fact the whole read was cached) but it's\n> only getting 1MB/s.\n\nYou've proven my point completely. This process is bottlenecked in the CPU.\nThe only way to improve it would be to optimize the system (libc) functions\nlike \"fread\" where it is spending most of it's time.\n\nIn COPY, we found lots of libc functions like strlen() being called\nridiculous numbers of times, in one case it was called on every\ntimestamp/date attribute to get the length of TZ, which is constant. That\none function call was in the system category, and was responsible for\nseveral percent of the time.\n\nBy the way, system routines like fgetc/getc/strlen/atoi etc, don't appear in\ngprof profiles of dynamic linked objects, nor by default in oprofile\nresults.\n\nIf the bottleneck is in I/O, you will see the time spent in disk wait, not\nin system.\n\n- Luke\n\n\n",
"msg_date": "Wed, 05 Oct 2005 16:55:51 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Wed, Oct 05, 2005 at 04:55:51PM -0700, Luke Lonergan wrote:\n> In COPY, we found lots of libc functions like strlen() being called\n> ridiculous numbers of times, in one case it was called on every\n> timestamp/date attribute to get the length of TZ, which is constant. That\n> one function call was in the system category, and was responsible for\n> several percent of the time.\n\nWhat? strlen is definitely not in the kernel, and thus won't count as system\ntime.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 6 Oct 2005 02:12:39 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Wed, Oct 05, 2005 at 04:55:51PM -0700, Luke Lonergan wrote:\n>You've proven my point completely. This process is bottlenecked in the CPU.\n>The only way to improve it would be to optimize the system (libc) functions\n>like \"fread\" where it is spending most of it's time.\n\nOr to optimize its IO handling to be more efficient. (E.g., use larger\nblocks to reduce the number of syscalls.) \n\nMike Stone\n",
"msg_date": "Thu, 06 Oct 2005 05:49:34 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Steinar,\n\nOn 10/5/05 5:12 PM, \"Steinar H. Gunderson\" <[email protected]> wrote:\n\n> What? strlen is definitely not in the kernel, and thus won't count as system\n> time.\n\nSystem time on Linux includes time spent in glibc routines.\n\n- Luke\n\n\n\n",
"msg_date": "Fri, 07 Oct 2005 16:55:28 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Fri, Oct 07, 2005 at 04:55:28PM -0700, Luke Lonergan wrote:\n> On 10/5/05 5:12 PM, \"Steinar H. Gunderson\" <[email protected]> wrote:\n> > What? strlen is definitely not in the kernel, and thus won't count as\n> > system time.\n> System time on Linux includes time spent in glibc routines.\n\nDo you have a reference for this?\n\nI believe this statement to be 100% false.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Fri, 7 Oct 2005 20:17:04 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Mark,\n\nOn 10/7/05 5:17 PM, \"[email protected]\" <[email protected]> wrote:\n\n> On Fri, Oct 07, 2005 at 04:55:28PM -0700, Luke Lonergan wrote:\n>> On 10/5/05 5:12 PM, \"Steinar H. Gunderson\" <[email protected]> wrote:\n>>> What? strlen is definitely not in the kernel, and thus won't count as\n>>> system time.\n>> System time on Linux includes time spent in glibc routines.\n> \n> Do you have a reference for this?\n> \n> I believe this statement to be 100% false.\n\nHow about 99%? OK, you're right, I had this confused with the profiling\nproblem where glibc routines aren't included in dynamic linked profiles.\n\nBack to the statements earlier - the output of time had much of time for a\ndd spent in system, which means kernel, so where in the kernel would that be\nexactly?\n\n- Luke\n\n\n",
"msg_date": "Fri, 07 Oct 2005 21:20:59 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Fri, Oct 07, 2005 at 09:20:59PM -0700, Luke Lonergan wrote:\n> On 10/7/05 5:17 PM, \"[email protected]\" <[email protected]> wrote:\n> > On Fri, Oct 07, 2005 at 04:55:28PM -0700, Luke Lonergan wrote:\n> >> On 10/5/05 5:12 PM, \"Steinar H. Gunderson\" <[email protected]> wrote:\n> >>> What? strlen is definitely not in the kernel, and thus won't count as\n> >>> system time.\n> >> System time on Linux includes time spent in glibc routines.\n> > Do you have a reference for this?\n> > I believe this statement to be 100% false.\n> How about 99%? OK, you're right, I had this confused with the profiling\n> problem where glibc routines aren't included in dynamic linked profiles.\n\nSorry to emphasize the 100%. It wasn't meant to judge you. It was meant\nto indicate that I believe 100% of system time is accounted for, while\nthe system call is actually active, which is not possible while glibc\nis active.\n\nI believe the way it works, is that a periodic timer interrupt\nincrements a specific integer every time it wakes up. If it finds\nitself within the kernel, it increments the system time for the active\nprocess, if it finds itself outside the kernel, it incremenets the\nuser time for the active process.\n\n> Back to the statements earlier - the output of time had much of time for a\n> dd spent in system, which means kernel, so where in the kernel would that be\n> exactly?\n\nNot really an expert here. I only play around. At a minimum, their is a\ncost to switching from user context to system context and back, and then\nfilling in the zero bits. There may be other inefficiencies, however.\nPerhaps /dev/zero always fill in a whole block (8192 usually), before\nallowing the standard file system code to read only one byte.\n\nI dunno.\n\nBut, I see this oddity too:\n\n$ time dd if=/dev/zero of=/dev/zero bs=1 count=10000000\n10000000+0 records in\n10000000+0 records out\ndd if=/dev/zero of=/dev/zero bs=1 count=10000000 4.05s user 11.13s system 94% cpu 16.061 total\n\n$ time dd if=/dev/zero of=/dev/zero bs=10 count=1000000\n1000000+0 records in\n1000000+0 records out\ndd if=/dev/zero of=/dev/zero bs=10 count=1000000 0.37s user 1.37s system 100% cpu 1.738 total\n\n From my numbers, it looks like 1 byte reads are hard in both the user context\nand the system context. It looks almost linearly, even:\n\n$ time dd if=/dev/zero of=/dev/zero bs=100 count=100000\n100000+0 records in\n100000+0 records out\ndd if=/dev/zero of=/dev/zero bs=100 count=100000 0.04s user 0.15s system 95% cpu 0.199 total\n\n$ time dd if=/dev/zero of=/dev/zero bs=1000 count=10000\n10000+0 records in\n10000+0 records out\ndd if=/dev/zero of=/dev/zero bs=1000 count=10000 0.01s user 0.02s system 140% cpu 0.021 total\n\nAt least some of this gets into the very in-depth discussions as to\nwhether kernel threads, or user threads, are more efficient. Depending\non the application, user threads can switch many times faster than\nkernel threads. Other parts of this may just mean that /dev/zero isn't\nimplemented optimally.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Sat, 8 Oct 2005 09:31:06 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
[
{
"msg_contents": "I've now gotten verification from multiple working DBA's that DB2, Oracle, and\nSQL Server can achieve ~250MBps ASTR (with as much as ~500MBps ASTR in\nsetups akin to Oracle RAC) when attached to a decent (not outrageous, but\ndecent) HD subsystem...\n\nI've not yet had any RW DBA verify Jeff Baker's supposition that ~1GBps ASTR is\nattainable. Cache based bursts that high, yes. ASTR, no.\n\nThe DBA's in question run RW installations that include Solaris, M$, and Linux OS's\nfor companies that just about everyone on these lists are likely to recognize.\n\nAlso, the implication of these pg IO limits is that money spent on even moderately\npriced 300MBps SATA II based RAID HW is wasted $'s.\n\nIn total, this situation is a recipe for driving potential pg users to other DBMS. \n \n25MBps in and 15MBps out is =BAD=.\n\nHave we instrumented the code in enough detail that we can tell _exactly_ where\nthe performance drainage is?\n\nWe have to fix this.\nRon \n\n\n-----Original Message-----\nFrom: Luke Lonergan <[email protected]>\nSent: Oct 5, 2005 11:24 AM\nTo: Michael Stone <[email protected]>, Martijn van Oosterhout <[email protected]>\nCc: [email protected], [email protected]\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\nNope - it would be disk wait.\n\nCOPY is CPU bound on I/O subsystems faster that 50 MB/s on COPY (in) and about 15 MB/s (out).\n\n- Luke\n\n -----Original Message-----\nFrom: \tMichael Stone [mailto:[email protected]]\nSent:\tWed Oct 05 09:58:41 2005\nTo:\tMartijn van Oosterhout\nCc:\[email protected]; [email protected]\nSubject:\tRe: [HACKERS] [PERFORM] A Better External Sort?\n\nOn Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:\n>COPY TO /dev/null WITH binary\n>13MB/s 55% user 45% system (ergo, CPU bound)\n[snip]\n>the most expensive. But it does point out that the whole process is\n>probably CPU bound more than anything else.\n\nNote that 45% of that cpu usage is system--which is where IO overhead\nwould end up being counted. Until you profile where you system time is\ngoing it's premature to say it isn't an IO problem.\n\nMike Stone\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Wed, 5 Oct 2005 12:14:21 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "\n> We have to fix this.\n> Ron \n> \n\n\nThe source is freely available for your perusal. Please feel free to\npoint us in specific directions in the code where you may see some\nbenefit. I am positive all of us that can, would put resources into\nfixing the issue had we a specific direction to attack.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 05 Oct 2005 10:18:25 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On Wed, 2005-10-05 at 12:14 -0400, Ron Peacetree wrote:\n> I've now gotten verification from multiple working DBA's that DB2, Oracle, and\n> SQL Server can achieve ~250MBps ASTR (with as much as ~500MBps ASTR in\n> setups akin to Oracle RAC) when attached to a decent (not outrageous, but\n> decent) HD subsystem...\n> \n> I've not yet had any RW DBA verify Jeff Baker's supposition that ~1GBps ASTR is\n> attainable. Cache based bursts that high, yes. ASTR, no.\n\nI find your tone annoying. That you do not have access to this level of\nhardware proves nothing, other than pointing out that your repeated\nemails on this list are based on supposition.\n\nIf you want 1GB/sec STR you need:\n\n1) 1 or more Itanium CPUs\n2) 24 or more disks\n3) 2 or more SATA controllers\n4) Linux\n\nHave fun.\n\n-jwb\n",
"msg_date": "Wed, 05 Oct 2005 11:30:21 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
}
] |
[
{
"msg_contents": "From: Kevin Grittner <[email protected]>\nSent: Oct 5, 2005 2:16 AM\nSubject: Re: [PERFORM] Is There Any Way ....\n\n>First off, Mr. Trainor's response proves nothing about anyone or\n>anything except Mr. Trainor.\n>\nFair Enough. I apologize for the inappropriately general statement.\n\n \n>I'm going to offer an opinion on the caching topic. I don't have\n>any benchmarks; I'm offering a general sense of the issue based on\n>decades of experience, so I'll give a short summary of that.\n> \n>I've been earning my living by working with computers since 1972,\n>\n~1978 for me. So to many on this list, I also would be an \"old fart\".\n\n\n<description of qualifications snipped>\n>\nI've pretty much spent my entire career thinking about and making\nadvances in RW distributed computing and parallel processing as\nfirst a programmer and then a systems architect. \n \n\n>Now on to the meat of it. \n<excellent and fair handed overall analysis snipped>\n>\nI agree with your comments just about across the board.\n\n\nI also agree with the poster(s) who noted that the \"TLC factor\" and the\n2x every 18months pace of increasing HW performance and RAM capacity\nmake this stuff a moving target.\n\nOTOH, there are some fundamentals that don't seem to change no\nmatter how far or fast the computing field evolves.\n\nAs usual, the proper answers involve finding a sometimes nontrivial\nbalance between building on known precedent and not being trapped\nby doctrine.\n\nRon\n",
"msg_date": "Wed, 5 Oct 2005 13:08:35 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is There Any Way ...."
}
] |
[
{
"msg_contents": "First I wanted to verify that pg's IO rates were inferior to The Competition.\nNow there's at least an indication that someone else has solved similar\nproblems. Existence proofs make some things easier ;-)\n\nIs there any detailed programmer level architectual doc set for pg? I know\n\"the best doc is the code\", but the code in isolation is often the Slow Path to\nunderstanding with systems as complex as a DBMS IO layer.\n\nRon\n \n\n-----Original Message-----\nFrom: \"Joshua D. Drake\" <[email protected]>\nSent: Oct 5, 2005 1:18 PM\nSubject: Re: [HACKERS] [PERFORM] A Better External Sort?\n\n\nThe source is freely available for your perusal. Please feel free to\npoint us in specific directions in the code where you may see some\nbenefit. I am positive all of us that can, would put resources into\nfixing the issue had we a specific direction to attack.\n\nSincerely,\n\nJoshua D. Drake\n",
"msg_date": "Wed, 5 Oct 2005 13:21:04 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Ron,\n\nThis thread is getting on my nerves. Your tone in some of the other\nposts (as-well-as this one) is getting very annoying. Yes,\nPostgreSQL's storage manager (like all other open source databases),\nlacks many of the characteristics and enhancements of the commercial\ndatabases. Unlike Oracle, Microsoft, etc., the PostgreSQL Global\nDevelopment Group doesn't have the tens of millions of dollars\nrequired to pay hundreds of developers around the world for\nround-the-clock development and R&D. Making sure that every little\ntweak, on every system, is taken advantage of is expensive (in terms\nof time) for an open source project where little ROI is gained. \nBefore you make a statement like, \"I wanted to verify that pg's IO\nrates were inferior to The Competition\", think about how you'd write\nyour own RDBMS from scratch (in reality, not in theory).\n\nAs for your question regarding developer docs for the storage manager\nand related components, read the READMEs and the code... just like\neveryone else.\n\nRather than posting more assumptions and theory, please read through\nthe code and come back with actual suggestions.\n\n-Jonah\n\n2005/10/5, Ron Peacetree <[email protected]>:\n> First I wanted to verify that pg's IO rates were inferior to The Competition.\n> Now there's at least an indication that someone else has solved similar\n> problems. Existence proofs make some things easier ;-)\n>\n> Is there any detailed programmer level architectual doc set for pg? I know\n> \"the best doc is the code\", but the code in isolation is often the Slow Path to\n> understanding with systems as complex as a DBMS IO layer.\n>\n> Ron\n>\n>\n> -----Original Message-----\n> From: \"Joshua D. Drake\" <[email protected]>\n> Sent: Oct 5, 2005 1:18 PM\n> Subject: Re: [HACKERS] [PERFORM] A Better External Sort?\n>\n>\n> The source is freely available for your perusal. Please feel free to\n> point us in specific directions in the code where you may see some\n> benefit. I am positive all of us that can, would put resources into\n> fixing the issue had we a specific direction to attack.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n--\nRespectfully,\n\nJonah H. Harris, Database Internals Architect\nEnterpriseDB Corporation\nhttp://www.enterprisedb.com/\n",
"msg_date": "Wed, 5 Oct 2005 15:54:42 -0400",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On K, 2005-10-05 at 13:21 -0400, Ron Peacetree wrote:\n> First I wanted to verify that pg's IO rates were inferior to The Competition.\n> Now there's at least an indication that someone else has solved similar\n> problems. Existence proofs make some things easier ;-)\n> \n> Is there any detailed programmer level architectual doc set for pg? I know\n> \"the best doc is the code\",\n\nFor postgres it is often \"best doc's are in the code, in form of\ncomments.\"\n\n-- \nHannu Krosing <[email protected]>\n\n",
"msg_date": "Thu, 06 Oct 2005 13:37:36 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "Hello, just a little question, It's preferable to use Text Fields or\nvarchar(255) fields in a table? Are there any performance differences in the\nuse of any of them?\n\nThanks a lot for your answer!\n\n",
"msg_date": "Wed, 5 Oct 2005 12:21:35 -0600",
"msg_from": "\"Cristian Prieto\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Text/Varchar performance..."
},
{
"msg_contents": "On Wed, Oct 05, 2005 at 12:21:35PM -0600, Cristian Prieto wrote:\n> Hello, just a little question, It's preferable to use Text Fields or\n> varchar(255) fields in a table? Are there any performance differences in the\n> use of any of them?\n\nThey are essentially the same. Note that you can have varchar without length\n(well, up to about a gigabyte or so after compression), and you can have\nvarchar with a length well above 255 (say, 100000).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 5 Oct 2005 20:34:41 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Text/Varchar performance..."
},
{
"msg_contents": "Cristian,\n\n> Hello, just a little question, It's preferable to use Text Fields or\n> varchar(255) fields in a table? Are there any performance differences in\n> the use of any of them?\n\nTEXT, VARCHAR, and CHAR use the same underlying storage mechanism. This \nmeans that TEXT is actually the \"fastest\" since it doesn't check length or \nspace-pad. However, that's unlikely to affect you unless you've millions \nof records; you should use the type which makes sense given your \napplication.\n\nFor \"large text fields\" I always use TEXT. BTW, in PostgreSQL VARCHAR is \nnot limited to 255; I think we support up to 1GB of text or something \npreposterous.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 5 Oct 2005 12:00:48 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Text/Varchar performance..."
},
{
"msg_contents": "Dear Cristian,\n\nIf you need to index the field, you must know that it limit the length up to\n1000 bytes. So if you need to index the field you must limit the field type,\nex: varchar(250), than you can index the field and you can gain better\nperfomance in searching base on the fields, because the search uses the\nindex you have been created.\nIf you do not need to index the field, you can use the text field. Because\ntext field can store data up to 4 Gbytes.\n\nRegards,\nahmad fajar\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Cristian Prieto\nSent: Kamis, 06 Oktober 2005 1:22\nTo: [email protected]; [email protected]\nSubject: [PERFORM] Text/Varchar performance...\n\nHello, just a little question, It's preferable to use Text Fields or\nvarchar(255) fields in a table? Are there any performance differences in the\nuse of any of them?\n\nThanks a lot for your answer!\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Mon, 10 Oct 2005 18:28:23 +0700",
"msg_from": "\"Ahmad Fajar\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Text/Varchar performance..."
},
{
"msg_contents": "On Mon, Oct 10, 2005 at 06:28:23PM +0700, Ahmad Fajar wrote:\n> than you can index the field and you can gain better\n> perfomance in searching base on the fields, because the search uses the\n> index you have been created.\n\nThat really depends on the queries. An index will help some queries (notably\n<, = or > comparisons, or LIKE 'foo%' with the C locale), but definitely not\nall (it will help you nothing for LIKE '%foo%').\n\n> If you do not need to index the field, you can use the text field. Because\n> text field can store data up to 4 Gbytes.\n\nSo can varchar.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 10 Oct 2005 14:54:22 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Text/Varchar performance..."
}
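To put the thread's points in code, a small hedged sketch (table and index names are invented): the three column declarations below are stored identically, and a plain btree index helps anchored pattern searches but not unanchored ones.

-- Invented names; all three columns use the same underlying storage.
CREATE TABLE notes_text    (body text);
CREATE TABLE notes_varchar (body varchar);       -- no length limit; same storage as text
CREATE TABLE notes_v255    (body varchar(255));  -- adds only a length check

CREATE INDEX notes_v255_body_idx ON notes_v255 (body);

-- Can use the index (prefix match, assuming the C locale):
--   SELECT * FROM notes_v255 WHERE body LIKE 'foo%';
-- Cannot use the index (unanchored match):
--   SELECT * FROM notes_v255 WHERE body LIKE '%foo%';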
] |
[
{
"msg_contents": "Chris wrote:\n> [email protected] (Dan Harris) writes:\n> > On Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:\n> >\n> >> I thought this might be interesting, not the least due to the\n> >> extremely low\n> >> price ($150 + the price of regular DIMMs):\n> >\n> > Replying before my other post came through.. It looks like their\n> > benchmarks are markedly improved since the last article I read on\n> > this. There may be more interest now..\n> \n> It still needs a few more generations worth of improvement.\n> \n> 1. It's still limited to SATA speed\n> 2. It's not ECC smart\n\n3. Another zero (or two) on the price tag :). While it looks like a fun\ntoy to play with, for it to replace hard drives in server environments\nthey need to provide more emphasis and effort in assuring people their\ndrive is reliable.\n\nIf they really wanted it to be adopted in server environments, it would\nhave been packaged in a 3.5\" drive, not a pci card, since that's what we\nall hot swap (especially since it already uses SATA interface). They\nwould also have allowed use of 2 and 4gb DIMS, and put in a small hard\ndrive that the memory paged to when powered off, and completely isolated\nthe power supply...hard to pack all that in 60$.\n\nThat said, we are in the last days of the hard disk. I think it is only\na matter of months before we see a sub 1000$ part which have zero\nlatency in the 20-40 GB range. Once that happens economies of scale\nwill kick in and hard drives will become basically a backup device.\n\nMerlin\n",
"msg_date": "Wed, 5 Oct 2005 14:35:37 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ultra-cheap NVRAM device"
}
] |
[
{
"msg_contents": "I'm putting in as much time as I can afford thinking about pg related\nperformance issues. I'm doing it because of a sincere desire to help\nunderstand and solve them, not to annoy people.\n\nIf I didn't believe in pg, I would't be posting thoughts about how to\nmake it better. \n\nIt's probably worth some review (suggestions marked with a \"+\":\n\n+I came to the table with a possibly better way to deal with external\nsorts (that now has branched into 2 efforts: short term improvements\nto the existing code, and the original from-the-ground-up idea). That\nsuggestion was based on a great deal of prior thought and research,\ndespite what some others might think.\n\nThen we were told that our IO limit was lower than I thought.\n\n+I suggested that as a \"Quick Fix\" we try making sure we do IO\ntransfers in large enough chunks based in the average access time\nof the physical device in question so as to achieve the device's\nASTR (ie at least 600KB per access for a 50MBps ASTR device with\na 12ms average access time.) whenever circumstances allowed us.\nAs far as I know, this experiment hasn't been tried yet.\n\nI asked some questions about physical layout and format translation\noverhead being possibly suboptimal that seemed to be agreed to, but\nspecifics as to where we are taking the hit don't seem to have been\nmade explicit yet.\n\n+I made the \"from left field\" suggestion that perhaps a pg native fs\nformat would be worth consideration. This is a major project, so\nthe suggestion was to at least some extent tongue-in-cheek.\n\n+I then made some suggestions about better code instrumentation\nso that we can more accurately characterize were the bottlenecks are. \n\nWe were also told that evidently we are CPU bound far before one\nwould naively expect to be based on the performance specifications\nof the components involved.\n\nDouble checking among the pg developer community led to some\ndiffering opinions as to what the actual figures were and under what\ncircumstances they were achieved. Further discussion seems to have\nconverged on both accurate values and a better understanding as to\nthe HW and SW needed; _and_ we've gotten some RW confirmation\nas to what current reasonable expectations are within this problem\ndomain from outside the pg community.\n\n+Others have made some good suggestions in this thread as well.\nSince I seem to need to defend my tone here, I'm not detailing them\nhere. That should not be construed as a lack of appreciation of them.\n\nNow I've asked for the quickest path to detailed understanding of the\npg IO subsystem. The goal being to get more up to speed on its\ncoding details. Certainly not to annoy you or anyone else.\n\nAt least from my perspective, this for the most part seems to have\nbeen an useful and reasonable engineering discussion that has\nexposed a number of important things.\n \nRegards,\nRon\n",
"msg_date": "Wed, 5 Oct 2005 19:54:15 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "On K, 2005-10-05 at 19:54 -0400, Ron Peacetree wrote:\n\n> +I made the \"from left field\" suggestion that perhaps a pg native fs\n> format would be worth consideration. This is a major project, so\n> the suggestion was to at least some extent tongue-in-cheek.\n\nThis idea is discussed about once a year on hackers. If you are more\ninterested in this, search the archives :)\n\n-- \nHannu Krosing <[email protected]>\n\n",
"msg_date": "Thu, 06 Oct 2005 13:44:00 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Wed, Oct 05, 2005 at 07:54:15PM -0400, Ron Peacetree wrote:\n> I asked some questions about physical layout and format translation\n> overhead being possibly suboptimal that seemed to be agreed to, but\n> specifics as to where we are taking the hit don't seem to have been\n> made explicit yet.\n\nThis hit is easy to see and terribly hard to do anything about at the\nsame time. Any single row in a table stores its values but the offsets\narn't constant. If a field is NULL, it is skipped. If a field is\nvariable length, you have to look at the length before you can jump\nover to the next value.\n\nIf you have no NULLs and no variable length fields, then you can\noptimise access. This is already done and it's hard to see how you\ncould improve it further. To cut costs, many places use\nheap_deform_tuple and similar routines so that the costs are reduced,\nbut they're still there.\n\nUpping the data transfer rate from disk is a worthy goal, just some\npeople beleive it is of lower priority than improving CPU usage.\n\n> We were also told that evidently we are CPU bound far before one\n> would naively expect to be based on the performance specifications\n> of the components involved.\n\nAs someone pointed out, calls to the C library are not counted\nseperately, making it harder to see if we're overcalling some of them.\nPinpointing the performance bottleneck is hard work.\n\n> Now I've asked for the quickest path to detailed understanding of the\n> pg IO subsystem. The goal being to get more up to speed on its\n> coding details. Certainly not to annoy you or anyone else.\n\nWell, the work is all in storage/smgr and storage/file. It's not\nterribly complicated, it just sometimes takes a while to understand\n*why* it is done this way.\n\nIndeed, one of the things on my list is to remove all the lseeks in\nfavour of pread. Halving the number of kernel calls has got to be worth\nsomething right? Portability is an issue ofcourse...\n\nBut it's been a productive thread, absolutly. Progress has been made...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Thu, 6 Oct 2005 18:57:30 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> Indeed, one of the things on my list is to remove all the lseeks in\n> favour of pread. Halving the number of kernel calls has got to be worth\n> something right? Portability is an issue ofcourse...\n\nBeing sure that it's not a pessimization is another issue. I note that\nglibc will emulate these functions if the kernel doesn't have them;\nwhich means you could be replacing one kernel call with three.\n\nAnd I don't think autoconf has any way to determine whether a libc\nfunction represents a native kernel call or not ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 Oct 2005 15:57:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort? "
},
{
"msg_contents": "On Thu, Oct 06, 2005 at 03:57:38PM -0400, Tom Lane wrote:\n> Martijn van Oosterhout <[email protected]> writes:\n> > Indeed, one of the things on my list is to remove all the lseeks in\n> > favour of pread. Halving the number of kernel calls has got to be worth\n> > something right? Portability is an issue ofcourse...\n> \n> Being sure that it's not a pessimization is another issue. I note that\n> glibc will emulate these functions if the kernel doesn't have them;\n> which means you could be replacing one kernel call with three.\n\nFrom the linux pread manpage:\n\nHISTORY\n The pread and pwrite system calls were added to Linux in version\n 2.1.60; the entries in the i386 system call table were added in\n 2.1.69. The libc support (including emulation on older kernels\n without the system calls) was added in glibc 2.1.\n\nAre we awfully worried about people still using 2.0 kernels? And it\nwould replace two calls with three in the worst case, we currently\nlseek before every read.\n\nI don't know about other OSes.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Thu, 6 Oct 2005 22:14:47 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
},
{
"msg_contents": "On Thu, Oct 06, 2005 at 03:57:38PM -0400, Tom Lane wrote:\n> Martijn van Oosterhout <[email protected]> writes:\n> > Indeed, one of the things on my list is to remove all the lseeks in\n> > favour of pread. Halving the number of kernel calls has got to be worth\n> > something right? Portability is an issue ofcourse...\n> \n> Being sure that it's not a pessimization is another issue. I note that\n> glibc will emulate these functions if the kernel doesn't have them;\n> which means you could be replacing one kernel call with three.\n> \n> And I don't think autoconf has any way to determine whether a libc\n> function represents a native kernel call or not ...\n\nThe problem kernels would be Linux 2.0, which I very much doubt is going\nto be present in to-be-deployed database servers.\n\nUnless someone runs glibc on top of some other kernel, I guess. Is this\na common scenario? I've never seen it.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34\nOh, oh, las chicas galacianas, lo har�n por las perlas,\n�Y las de Arrakis por el agua! Pero si buscas damas\nQue se consuman como llamas, �Prueba una hija de Caladan! (Gurney Halleck)\n",
"msg_date": "Thu, 6 Oct 2005 16:17:21 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort?"
},
{
"msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> Are we awfully worried about people still using 2.0 kernels? And it\n> would replace two calls with three in the worst case, we currently\n> lseek before every read.\n\nThat's utterly false.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 Oct 2005 16:25:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] A Better External Sort? "
},
{
"msg_contents": "On Thu, Oct 06, 2005 at 04:25:11PM -0400, Tom Lane wrote:\n> Martijn van Oosterhout <[email protected]> writes:\n> > Are we awfully worried about people still using 2.0 kernels? And it\n> > would replace two calls with three in the worst case, we currently\n> > lseek before every read.\n> \n> That's utterly false.\n\nOops, you're right. I usually strace during a vacuum or a large query\nand my screen fills up with:\n\nlseek()\nread()\nlseek()\nread()\n...\n\nSo didn't wonder if the straight sequential read was optimised. Still,\nI think pread() would be a worthwhile improvement, at least for Linux.\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.",
"msg_date": "Thu, 6 Oct 2005 23:40:17 +0200",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] A Better External Sort?"
}
] |
[
{
"msg_contents": "I am working on a system which will be heavily dependent on functions\n(some SQL, some PL/pgSQL). I am worried about the backend caching query\nexecution plans for long running connections.\n\nGiven:\n- Processes which are connected to the database for long periods of time\n(transactions are always short).\n- These processes will use some functions to query data.\n- Lots of data is being inserted into tables that these functions query.\n- Vacuums are done frequently.\n\nAm I at risk of degrading performance after some time due to stale\nexecution plans?\n\nThanks,\n\n-Kelly\n",
"msg_date": "Thu, 06 Oct 2005 08:17:54 -0500",
"msg_from": "Kelly Burkhart <[email protected]>",
"msg_from_op": true,
"msg_subject": "functions and execution plan caching"
},
{
"msg_contents": "On Thu, Oct 06, 2005 at 08:17:54AM -0500, Kelly Burkhart wrote:\n> Given:\n> - Processes which are connected to the database for long periods of time\n> (transactions are always short).\n> - These processes will use some functions to query data.\n> - Lots of data is being inserted into tables that these functions query.\n> - Vacuums are done frequently.\n> \n> Am I at risk of degrading performance after some time due to stale\n> execution plans?\n\nYes, because plans are chosen based on the statistics that were\ncurrent when the function was first called. For example, if a\nsequential scan made sense when you first called the function, then\nsubsequent calls will also use a sequential scan. You can see this\nfor yourself with a simple test: create a table, populate it with\na handful of records, and call a function that issues a query that\ncan (but won't necessarily) use an index. Then add a lot of records\nto the table and call the function again. You'll probably notice\nthat the function runs slower than the same query run from outside\nthe function, and that the function runs fast if you recreate it\nor call it in a new session.\n\nIf you set debug_print_plan to on and client_min_messages to debug1,\nthen you'll see the plan that the function chose (but only on the\nfirst call to the function). If you have statistics enabled, then\nyou can query pg_stat_user_tables and pg_stat_user_indexes to see\nwhether subsequent calls use sequential or index scans (this should\nbe done when nobody else is querying the table so the statistics\nrepresent only what you did).\n\nYou can avoid cached plans by using EXECUTE. You'll have to run\ntests to see whether the potential gain is worth the overhead.\n\n-- \nMichael Fuhr\n",
"msg_date": "Thu, 6 Oct 2005 13:46:06 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: functions and execution plan caching"
}
] |
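A small PL/pgSQL sketch of the plan-caching behaviour and the EXECUTE workaround described above; the orders table, its status column, and the function names are hypothetical. The first function embeds a static query, so its plan is chosen on the first call in a session and reused afterwards; the second builds the statement as a string with EXECUTE, so it is re-planned on every call (at some extra cost) and can switch between sequential and index scans as the table grows:

    -- plan chosen once per session, then cached
    CREATE FUNCTION count_by_status_static(text) RETURNS bigint AS $$
    DECLARE
        n bigint;
    BEGIN
        SELECT INTO n count(*) FROM orders WHERE status = $1;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;

    -- EXECUTE forces a fresh plan on every call
    CREATE FUNCTION count_by_status_dynamic(text) RETURNS bigint AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN EXECUTE 'SELECT count(*) AS n FROM orders WHERE status = '
                         || quote_literal($1) LOOP
            RETURN r.n;
        END LOOP;
        RETURN 0;  -- not reached: count(*) always yields one row
    END;
    $$ LANGUAGE plpgsql;

(Dollar-quoted function bodies as above need PostgreSQL 8.0 or later; on older servers the body would be written as an ordinary quoted string.)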
[
{
"msg_contents": "I have an application that is prone to sudden, unscheduled high bursts of\nactivity, and\nI am finding that the application design permits me to detect the activity\nbursts within\nan existing function. The bursts only affect 3 tables, but degradation\nbecomes apparent\nafter 2,000 updates, and significant after 8,000 updates.\n\nI already know that a plain vacuum (without full, analyze, or free options)\nsolves my\nproblem. Since vacuum is classified in the documentation as an SQL command,\nI tried to\ncall it using a trigger function on one the tables (they all have roughly\nthe same insert\n/ update rate). However, I just found out that vacuum cannot be called by a\nfunction.\nVacuums done by a scheduler at 3AM in the morning are adequate to handle my\nnon-peak\nneeds otherwise.\n\nautovacuum sounds like it would do the trick, but I am on a WINDOWS 2003\nenvironment, but\nI have Googled up messages that it still has various problems (in Windows)\nwhich won't be\nresolved until 8.1 is out. But I have a problem NOW, and the application is\ndeployed\naround the world.\n\nQUESTION:\n Is there anyway anyone knows of to permit me to execute an operating\nsystem program\n(even vacuumdb) or possibly to add a C function to the library which would\nallow me to\ndo this (I am not a C programmer, but have access to some persons who are)?\n\nVery important to me for performance reasons.\n\nDoes anybody have some suggestions on the best path for me to take?\n\n\n",
"msg_date": "Thu, 6 Oct 2005 17:29:42 -0400",
"msg_from": "\"Lane Van Ingen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need Some Suggestions"
},
{
"msg_contents": "Lane Van Ingen wrote:\n> I have an application that is prone to sudden, unscheduled high bursts of\n> activity, and\n> I am finding that the application design permits me to detect the activity\n> bursts within\n> an existing function. The bursts only affect 3 tables, but degradation\n> becomes apparent\n> after 2,000 updates, and significant after 8,000 updates.\n\nHmm - assuming your free-space settings are large enough, it might be \nadequate to just run a vacuum on the 3 tables every 5 minutes or so. It \nsounds like these are quite small tables with a lot of activity, so if \nthere's not much for vacuum to do it won't place too much load on your \nsystem.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 07 Oct 2005 08:52:46 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need Some Suggestions"
},
{
"msg_contents": "You are correct, in that these tables are not large (50,000 records), but\ntheir effect on performance is noticeable. Plain VACUUM (no freeze, full,\netc)\ndoes the trick well, but I am unable to figure a way to call the 'plain\nvanilla\nversion' of VACUUM via a PostgreSQL trigger function (does not allow it).\n\nUsing the Windows scheduler (schtask, somewhat like Unix cron) is an option,\nbut not a good one, as it takes too much out of the platform to run. My\nclient\ndoes not use strong platforms, so I have to be concerned about that. VACUUM\nis\na minimum impact on performance when running. I believe it would be much\nbetter\nto be able to call VACUUM out of a function, the same way in which other SQL\ncommands are used.\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]]\nSent: Friday, October 07, 2005 3:53 AM\nTo: Lane Van Ingen\nCc: [email protected]\nSubject: Re: [PERFORM] Need Some Suggestions\n\nLane Van Ingen wrote:\n> I have an application that is prone to sudden, unscheduled high bursts of\n> activity, and I am finding that the application design permits me to\ndetect\n> the activity bursts within an existing function. The bursts only affect 3\n> tables, but degradation becomes apparent after 2,000 updates, and quite\n> significant after 8,000 updates.\n\nHmm - assuming your free-space settings are large enough, it might be\nadequate to just run a vacuum on the 3 tables every 5 minutes or so. It\nsounds like these are quite small tables with a lot of activity, so if\nthere's not much for vacuum to do it won't place too much load on your\nsystem.\n\n--\n Richard Huxton\n Archonet Ltd\n\n\n",
"msg_date": "Fri, 7 Oct 2005 09:37:44 -0400",
"msg_from": "\"Lane Van Ingen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need Some Suggestions"
}
] |
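Since VACUUM cannot be issued from inside a trigger or function, the suggestion above boils down to a tiny SQL script run by the operating system's scheduler (Windows schtasks here, cron elsewhere) every few minutes. A sketch, with placeholder table names, that could be saved as vacuum_hot_tables.sql and run with psql -f:

    -- plain VACUUM only: no FULL, so it takes only brief locks and
    -- just makes dead-row space reusable in the three busy tables
    VACUUM hot_table_one;
    VACUUM hot_table_two;
    VACUUM hot_table_three;

For this to keep the tables from growing, the free space map (max_fsm_pages) must be large enough to remember the freed space between runs.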
[
{
"msg_contents": "What's the current status of how much faster the Opteron is compared to the \nXeons? I know the Opterons used to be close to 2x faster, but is that still \nthe case? I understand much work has been done to reduce the contect \nswitching storms on the Xeon architecture, is this correct?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Thu, 6 Oct 2005 16:55:27 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Status of Opteron vs Xeon"
},
{
"msg_contents": "[email protected] (Jeff Frost) writes:\n> What's the current status of how much faster the Opteron is compared\n> to the Xeons? I know the Opterons used to be close to 2x faster,\n> but is that still the case? I understand much work has been done to\n> reduce the contect switching storms on the Xeon architecture, is\n> this correct?\n\nWork has gone into 8.1 to try to help with the context switch storms;\nthat doesn't affect previous versions.\n\nFurthermore, it does not do anything to address the consideration that\nmemory access on Opterons seem to be intrinsically faster than on Xeon\ndue to differences in the memory bus architecture. \n\nThe only evident ways to address that are:\n a) For Intel to deploy chips with better memory buses;\n b) For Intel to convince people to deploy compilers that \n optimize badly on AMD to make Intel chips look better...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"ntlug.org\")\nhttp://cbbrowne.com/info/lsf.html\nA mathematician is a machine for converting caffeine into theorems.\n",
"msg_date": "Fri, 07 Oct 2005 13:33:28 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of Opteron vs Xeon"
},
{
"msg_contents": "Chris Browne <[email protected]> writes:\n> [email protected] (Jeff Frost) writes:\n>> What's the current status of how much faster the Opteron is compared\n>> to the Xeons? I know the Opterons used to be close to 2x faster,\n>> but is that still the case? I understand much work has been done to\n>> reduce the contect switching storms on the Xeon architecture, is\n>> this correct?\n\n> Work has gone into 8.1 to try to help with the context switch storms;\n> that doesn't affect previous versions.\n\nAlso note that we've found that the current coding of the TAS macro\nseems to be very bad for at least some Opterons --- they do much better\nif the \"pre-test\" cmpb is removed. But this is not true for all x86_64\nchips. We still have an open issue about what to do about this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Oct 2005 14:44:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of Opteron vs Xeon "
},
{
"msg_contents": ">\n> Furthermore, it does not do anything to address the consideration that\n> memory access on Opterons seem to be intrinsically faster than on Xeon\n> due to differences in the memory bus architecture.\n>\n\nI have been running some tests using different numa policies on a quad Opteron \nserver and have found some significant performance differences depending on \nthe type of load the system is under. It's not clear to me yet if I can draw \nany general conclusions from the results though.\n\nEmil\n",
"msg_date": "Fri, 7 Oct 2005 15:03:47 -0400",
"msg_from": "Emil Briggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Status of Opteron vs Xeon"
}
] |
[
{
"msg_contents": "Hello all\n\nFirst of all, I do understand why pgsql with it's MVCC design has to examine tuples to evaluate \"count(*)\" and \"count(*) where (...)\" queries in environment with heavy concurrent updates.\n\nThis kind of usage IMHO isn't the average one. There are many circumstances with rather \"query often, update rarely\" character.\n\nIsn't it possible (and reasonable) for these environments to keep track of whether there is a transaction in progress with update to given table and if not, use an index scan (count(*) where) or cached value (count(*)) to perform this kind of query?\n\n(sorry for disturbing if this was already discussed)\n\nRegards,\n\nCestmir Hybl\n\n\n\n\n\n\nHello all\n \nFirst of all, I do understand why pgsql with it's \nMVCC design has to examine tuples to evaluate \"count(*)\" and \"count(*) \nwhere (...)\" queries in environment with heavy concurrent updates.\n \nThis kind of usage IMHO isn't the average one. \nThere are many circumstances with rather \"query often, update \nrarely\" character.\n \nIsn't it possible (and reasonable) for these \nenvironments to keep track of whether there is a transaction in progress with \nupdate to given table and if not, use an index scan (count(*) where) or cached \nvalue (count(*)) to perform this kind of query?\n \n(sorry for disturbing if this was already \ndiscussed)\n \nRegards,\n \nCestmir Hybl",
"msg_date": "Fri, 7 Oct 2005 11:24:05 +0200",
"msg_from": "\"Cestmir Hybl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "count(*) using index scan in \"query often, update rarely\" environment"
},
{
"msg_contents": "On 10/7/05, Cestmir Hybl <[email protected]> wrote:\n>\n> Isn't it possible (and reasonable) for these environments to keep track of\n> whether there is a transaction in progress with update to given table and if\n> not, use an index scan (count(*) where) or cached value (count(*)) to\n> perform this kind of query?\n>\n\nif i understand your problem correctly, then simple usage of triggers will\ndo the job just fine.\n\nhubert\n\nOn 10/7/05, Cestmir Hybl <[email protected]> wrote:\nIsn't it possible (and reasonable) for these \nenvironments to keep track of whether there is a transaction in progress with \nupdate to given table and if not, use an index scan (count(*) where) or cached \nvalue (count(*)) to perform this kind of query?\n\nif i understand your problem correctly, then simple usage of triggers will do the job just fine.\n\nhubert",
"msg_date": "Fri, 7 Oct 2005 11:54:23 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
},
{
"msg_contents": "Yes, I can possibly use triggers to maintanin counts of several fixed groups of records or total recordcount (but it's unpractical).\n\nNo, I can't speed-up evaluation of generic \"count(*) where ()\" queries this way.\n\nMy question was rather about general performance of count() queries in environment with infrequent updates.\n\nCestmir\n ----- Original Message ----- \n From: hubert depesz lubaczewski \n To: Cestmir Hybl \n Cc: [email protected] \n Sent: Friday, October 07, 2005 11:54 AM\n Subject: Re: [PERFORM] count(*) using index scan in \"query often, update rarely\" environment\n\n\n On 10/7/05, Cestmir Hybl <[email protected]> wrote:\n Isn't it possible (and reasonable) for these environments to keep track of whether there is a transaction in progress with update to given table and if not, use an index scan (count(*) where) or cached value (count(*)) to perform this kind of query?\n\n if i understand your problem correctly, then simple usage of triggers will do the job just fine.\n\n hubert\n\n\n\n\n\n\n\nYes, I can possibly use triggers to maintanin \ncounts of several fixed groups of records or total recordcount (but it's \nunpractical).\n \nNo, I can't speed-up evaluation of generic \n\"count(*) where ()\" queries this way.\n \nMy question was rather about general performance of \ncount() queries in environment with infrequent updates.\n \nCestmir\n\n----- Original Message ----- \nFrom:\nhubert depesz \n lubaczewski \nTo: Cestmir Hybl \nCc: [email protected]\n\nSent: Friday, October 07, 2005 11:54 \n AM\nSubject: Re: [PERFORM] count(*) using \n index scan in \"query often, update rarely\" environment\nOn 10/7/05, Cestmir Hybl <[email protected]> wrote:\n \n\nIsn't it possible (and reasonable) for these \n environments to keep track of whether there is a transaction in progress \n with update to given table and if not, use an index scan (count(*) where) or \n cached value (count(*)) to perform this kind of query?\nif i understand your problem correctly, then \n simple usage of triggers will do the job just \nfine.hubert",
"msg_date": "Fri, 7 Oct 2005 12:14:29 +0200",
"msg_from": "\"Cestmir Hybl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
},
{
"msg_contents": "On Fri, Oct 07, 2005 at 11:24:05AM +0200, Cestmir Hybl wrote:\n> Isn't it possible (and reasonable) for these environments to keep track of\n> whether there is a transaction in progress with update to given table and\n> if not, use an index scan (count(*) where) or cached value (count(*)) to\n> perform this kind of query?\n\nEven if there is no running update, there might still be dead rows in the\ntable. In any case, of course, a new update could always be occurring while\nyour counting query was still running.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 7 Oct 2005 12:48:16 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
},
{
"msg_contents": "collision: it's possible to either block updating transaction until index \nscan ends or discard index scan imediately and finish query using MVCC \ncompliant scan\n\ndead rows: this sounds like more serious counter-argument, I don't know much \nabout dead records management and whether it would be possible/worth to \nmake indexes matching live records when there's no transaction in progress \non that table\n\n----- Original Message ----- \nFrom: \"Steinar H. Gunderson\" <[email protected]>\nTo: <[email protected]>\nSent: Friday, October 07, 2005 12:48 PM\nSubject: Re: [PERFORM] count(*) using index scan in \"query often, update \nrarely\" environment\n\n\n> On Fri, Oct 07, 2005 at 11:24:05AM +0200, Cestmir Hybl wrote:\n>> Isn't it possible (and reasonable) for these environments to keep track \n>> of\n>> whether there is a transaction in progress with update to given table and\n>> if not, use an index scan (count(*) where) or cached value (count(*)) to\n>> perform this kind of query?\n>\n> Even if there is no running update, there might still be dead rows in the\n> table. In any case, of course, a new update could always be occurring \n> while\n> your counting query was still running.\n>\n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster \n\n",
"msg_date": "Fri, 7 Oct 2005 13:14:20 +0200",
"msg_from": "\"Cestmir Hybl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
},
{
"msg_contents": "On Fri, Oct 07, 2005 at 01:14:20PM +0200, Cestmir Hybl wrote:\n> collision: it's possible to either block updating transaction until \n> index scan ends or discard index scan imediately and finish query using \n> MVCC compliant scan\n\nYou can't change from one scan method to a different one on the fly.\nThere's no way to know which tuples have alreaady been returned.\n\nOur index access methods are designed to be very concurrent, and it\nworks extremely well. One index scan being able to block an update\nwould destroy that advantage.\n\n> dead rows: this sounds like more serious counter-argument, I don't know \n> much about dead records management and whether it would be \n> possible/worth to make indexes matching live records when there's no \n> transaction in progress on that table\n\nIt's not possible, because a finishing transaction would have to clean\nup every index it has used, and also any index it hasn't used but has\nbeen modified by another transaction which couldn't clean up by itself\nbut didn't do the work because the first one was looking at the index.\nIt's easy to see that it's possible to create an unbounded number of\ntransactions, each forcing the other to do some index cleanup. This is\nnot acceptable.\n\nPlus, it would be very hard to implement, and a very wide door to bugs.\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"Et put se mouve\" (Galileo Galilei)\n",
"msg_date": "Fri, 7 Oct 2005 09:07:15 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
},
{
"msg_contents": "\"Cestmir Hybl\" <[email protected]> writes:\n> Isn't it possible (and reasonable) for these environments to keep track =\n> of whether there is a transaction in progress with update to given table =\n> and if not, use an index scan (count(*) where) or cached value =\n> (count(*)) to perform this kind of query?\n\nPlease read the archives before bringing up such well-discussed issues.\n\nThere's a workable-looking design in the archives (pghackers probably)\nfor maintaining overall table counts in a separate table, with each\ntransaction adding one row of \"delta\" information just before it\ncommits. I haven't seen anything else that looks remotely attractive.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Oct 2005 09:42:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> There's a workable-looking design in the archives (pghackers probably)\n> for maintaining overall table counts in a separate table, with each\n> transaction adding one row of \"delta\" information just before it\n> commits. I haven't seen anything else that looks remotely attractive.\n\nIt might be useful if there was a way to trap certain queries and \nrewrite/replace them. That way more complex queries could be \ntransparently redirected to a summary table etc. I'm guessing that the \noverhead to check every query would quickly destroy any gains though.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 07 Oct 2005 15:11:01 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) using index scan in \"query often, update rarely\""
},
{
"msg_contents": "On 10/7/05, Cestmir Hybl <[email protected]> wrote:\n>\n> No, I can't speed-up evaluation of generic \"count(*) where ()\" queries\n> this way.\n>\n\nno you can't speed up generic where(), *but* you can check what are the most\ncommon \"where\"'s (like usually i do where on one column like:\nselect count(*) from table where some_particular_column = 'some value';\nwhere you can simply make the trigger aware of the fact that it should count\nbased on value in some_particular_column.\nworks good enough for me not to look for alternatives.\n\ndepesz\n\nOn 10/7/05, Cestmir Hybl <[email protected]> wrote:\nNo, I can't speed-up evaluation of generic \n\"count(*) where ()\" queries this way.\nno you can't speed up generic where(), *but* you can check what are the\nmost common \"where\"'s (like usually i do where on one column like:\nselect count(*) from table where some_particular_column = 'some value';\nwhere you can simply make the trigger aware of the fact that it should count based on value in some_particular_column.\nworks good enough for me not to look for alternatives.\n\ndepesz",
"msg_date": "Sat, 8 Oct 2005 12:44:09 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
},
{
"msg_contents": "On Fri, Oct 07, 2005 at 12:48:16PM +0200, Steinar H. Gunderson wrote:\n> On Fri, Oct 07, 2005 at 11:24:05AM +0200, Cestmir Hybl wrote:\n> > Isn't it possible (and reasonable) for these environments to keep track of\n> > whether there is a transaction in progress with update to given table and\n> > if not, use an index scan (count(*) where) or cached value (count(*)) to\n> > perform this kind of query?\n> Even if there is no running update, there might still be dead rows in the\n> table. In any case, of course, a new update could always be occurring while\n> your counting query was still running.\n\nI don't see this being different from count(*) as it is today.\n\nUpdating a count column is certainly clever. If using a trigger,\nperhaps it would allow the equivalent of:\n\n select count(*) from table for update;\n\n:-)\n\nCheers,\nmark\n\n(not that this is necessarily a good thing!)\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Sat, 8 Oct 2005 09:34:32 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
}
] |
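The trigger approach mentioned in the thread, sketched below in deliberately simple form. The big_table name is a placeholder, and this single-row counter will serialize concurrent writers on the row_counts row; the design Tom refers to in the archives avoids that by having each transaction insert its own small "delta" row, which is summed (and periodically collapsed) at read time:

    CREATE TABLE row_counts (
        table_name text PRIMARY KEY,
        n          bigint NOT NULL
    );

    CREATE FUNCTION track_count() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE row_counts SET n = n + 1 WHERE table_name = TG_RELNAME;
        ELSE  -- DELETE
            UPDATE row_counts SET n = n - 1 WHERE table_name = TG_RELNAME;
        END IF;
        RETURN NULL;  -- AFTER trigger: return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    -- seed the counter, then keep it current
    INSERT INTO row_counts SELECT 'big_table', count(*) FROM big_table;
    CREATE TRIGGER big_table_count AFTER INSERT OR DELETE ON big_table
        FOR EACH ROW EXECUTE PROCEDURE track_count();

    -- cheap replacement for SELECT count(*) FROM big_table
    SELECT n FROM row_counts WHERE table_name = 'big_table';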
[
{
"msg_contents": "> What's the current status of how much faster the Opteron is compared\nto\n> the\n> Xeons? I know the Opterons used to be close to 2x faster, but is that\n> still\n> the case? I understand much work has been done to reduce the contect\n> switching storms on the Xeon architecture, is this correct?\n\nUp until two days ago (Oct 5) Intel has had no answer for AMD's dual\ncore offerings...unfortunately this has allowed AMD to charge top dollar\nfor dual core Opterons. The Intel dual core solution on the P4 side\nhasn't been very impressive particularly with regard to thermals.\n\nMy 90nm athlon 3000 at home runs very cool...if I underclock it a bit I\ncan actually turn off the cooling fan :).\n\nIMO, right now it's AMD all the way, but if you are planning a big\npurchase, it might be smart to wait a couple of months for the big price\nrealignment as Intel's dual xeons hit the retail channel.\n\nMerlin\n\n",
"msg_date": "Fri, 7 Oct 2005 08:27:05 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Status of Opteron vs Xeon"
}
] |
[
{
"msg_contents": "On 10/7/05, Cestmir Hybl <[email protected]> wrote:\nIsn't it possible (and reasonable) for these environments to keep track\nof whether there is a transaction in progress with update to given table\nand if not, use an index scan (count(*) where) or cached value\n(count(*)) to perform this kind of query?\n________________________________________\n\nThe answer to the first question is subtle. Basically, the PostgreSQL\nengine is designed for high concurrency. We are definitely on the right\nside of the cost/benefit tradeoff here. SQL server does not have MVCC\n(or at least until 2005 appears) so they are on the other side of the\ntradeoff.\n\nYou can of course serialize the access yourself by materializing the\ncount in a small table and use triggers or cleverly designed\ntransactions. This is trickier than it might look however so check the\narchives for a thorough treatment of the topic.\n\nOne interesting thing is that making count(*) over large swaths of data\nis frequently an indicator of a poorly normalized database. Is it\npossible to optimize the counting by laying out your data in a different\nway?\n\nMerlin\n\n\n",
"msg_date": "Fri, 7 Oct 2005 09:50:30 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) using index scan in \"query often,\n\tupdate rarely\" environment"
}
] |
[
{
"msg_contents": "What's goin on pg-people?\n\nI have a table PRODUCTIONS that is central to the DB and ties a lot of other\ninformation together:\n\nPRODUCTIONS (table)\n----------------------------------\nprod_id\t\tprimary key\ntype_id\t\tforeign key\nlevel_id\t\tforeign key\ntour_id\t\tforeign key\nshow_id\t\tforeign key\nvenue_id\t\tforeign key\ntitle\t\t\tvarchar(255); not null indexed\nversion\t\tchar;\ndetails\t\ttext\nopen_date\t\tdate\nclose_date\t\tdate\npreview_open\tdate\npreview_close\tdate\nperform_tot\t\tint\npreview_tot\t\tint\npark_info\t\ttext\nphone_nos\t\ttext\nsome_other_info\ttext\nseating_info\ttext\nthis\t\t\ttext\nthat\t\t\ttext\ncreate_tstmp\ttimestamptz; NOW()\nmod_tstmp\t\ttimestamptz;triggered\ndelete_tstmp\ttimestamptz;default null\nis_complete\t\tbool\n\n\nAs it stands now, there are approximately 25-30 columns on the table. Since\nthis table is very central to the database, would it be more efficient to\nbreak some of the columns (especially the TEXT ones) out into a separate\nINFO table since some queries on the web will not care about all of these\ntext columns anyway? I know that pg can handle A LOT more columns and if\nthere IS no performance hit for keeping them all on the same table, I would\nlike to do that because the relation between PRODUCTIONS and the INFO will\nalways be 1-to-1.\n\nMy implementation of this INFO table would look a little somethin' like\nthis:\n\nPROD_INFO (table)\n-------------------------------\nprod_id\t\tpkey/fkey\nopen_date\t\tdate\nclose_date\t\tdate\npreview_open\tdate\npreview_close\tdate\nperform_tot\t\tint\npreview_tot\t\tint\npark_info\t\ttext\nphone_nos\t\ttext\nsome_other_info\ttext\nseating_info\ttext\nthis\t\t\ttext\nthat\t\t\ttext\n(the rest would stay in in the original PRODUCTIONS table)\n\n\nI am open to ANY suggestions, criticisms, mockery, etc.\n\nThanks,\n\nAaron\n\n",
"msg_date": "Sun, 9 Oct 2005 22:03:33 -0500",
"msg_from": "\"Announce\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "What's the cost of a few extra columns?"
},
{
"msg_contents": "What you're describing is known as vertical partitioning (think of\nsplitting a table vertically), and can be a good technique for\nincreasing performance when used properly. The key is to try and get the\naverage row size down, since that means more rows per page which means\nless I/O. Some things to consider:\n\nFirst rule of performance tuning: don't. In other words, you should be\nable to verify with benchmark numbers that a) you need to do this and b)\nhow much it's actually helping.\n\nHow will splitting the table affect *_tstmp, especially mod_tstmp?\n\nHow will you handle inserts and joining these two tables together? Will\nyou always do a left join (preferably via a view), or will you have a\ntrigger/rule that inserts into production_info whenever a row is\ninserted into productions?\n\nOn Sun, Oct 09, 2005 at 10:03:33PM -0500, Announce wrote:\n> What's goin on pg-people?\n> \n> I have a table PRODUCTIONS that is central to the DB and ties a lot of other\n> information together:\n> \n> PRODUCTIONS (table)\n> ----------------------------------\n> prod_id\t\tprimary key\n> type_id\t\tforeign key\n> level_id\t\tforeign key\n> tour_id\t\tforeign key\n> show_id\t\tforeign key\n> venue_id\t\tforeign key\n> title\t\t\tvarchar(255); not null indexed\n> version\t\tchar;\n> details\t\ttext\n> open_date\t\tdate\n> close_date\t\tdate\n> preview_open\tdate\n> preview_close\tdate\n> perform_tot\t\tint\n> preview_tot\t\tint\n> park_info\t\ttext\n> phone_nos\t\ttext\n> some_other_info\ttext\n> seating_info\ttext\n> this\t\t\ttext\n> that\t\t\ttext\n> create_tstmp\ttimestamptz; NOW()\n> mod_tstmp\t\ttimestamptz;triggered\n> delete_tstmp\ttimestamptz;default null\n> is_complete\t\tbool\n> \n> \n> As it stands now, there are approximately 25-30 columns on the table. Since\n> this table is very central to the database, would it be more efficient to\n> break some of the columns (especially the TEXT ones) out into a separate\n> INFO table since some queries on the web will not care about all of these\n> text columns anyway? I know that pg can handle A LOT more columns and if\n> there IS no performance hit for keeping them all on the same table, I would\n> like to do that because the relation between PRODUCTIONS and the INFO will\n> always be 1-to-1.\n> \n> My implementation of this INFO table would look a little somethin' like\n> this:\n> \n> PROD_INFO (table)\n> -------------------------------\n> prod_id\t\tpkey/fkey\n> open_date\t\tdate\n> close_date\t\tdate\n> preview_open\tdate\n> preview_close\tdate\n> perform_tot\t\tint\n> preview_tot\t\tint\n> park_info\t\ttext\n> phone_nos\t\ttext\n> some_other_info\ttext\n> seating_info\ttext\n> this\t\t\ttext\n> that\t\t\ttext\n> (the rest would stay in in the original PRODUCTIONS table)\n> \n> \n> I am open to ANY suggestions, criticisms, mockery, etc.\n> \n> Thanks,\n> \n> Aaron\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 10 Oct 2005 19:12:49 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the cost of a few extra columns?"
},
{
"msg_contents": "Thanks a lot.\n\nWell, if I'm understanding you correctly, then doing the vertical splitting\nfor some of the text columns WOULD decrease the average row size returned in\nmy slimmer PRODUCTIONS table. I don't plan on using any of the \"prod_info\"\ncolumns in a WHERE clause (except open_date and close_date now that I think\nof it so they would stay in the original table).\n\nThere will be a lot of queries where I just want to return quick pri-key,\nprod_name and prod_date results from a PRODUCTION search. Then, there would\nbe a detail query that would then need all of the PRODUCTION and INFO data\nfor a single row.\n\nThanks again,\n\nAaron\n-----Original Message-----\nSubject: Re: [PERFORM] What's the cost of a few extra columns?\n\nWhat you're describing is known as vertical partitioning (think of\nsplitting a table vertically), and can be a good technique for\nincreasing performance when used properly. The key is to try and get the\naverage row size down, since that means more rows per page which means\nless I/O. Some things to consider:\n\nFirst rule of performance tuning: don't. In other words, you should be\nable to verify with benchmark numbers that a) you need to do this and b)\nhow much it's actually helping.\n\nHow will splitting the table affect *_tstmp, especially mod_tstmp?\n\nHow will you handle inserts and joining these two tables together? Will\nyou always do a left join (preferably via a view), or will you have a\ntrigger/rule that inserts into production_info whenever a row is\ninserted into productions?\n\nOn Sun, Oct 09, 2005 at 10:03:33PM -0500, Announce wrote:\n\n",
"msg_date": "Mon, 10 Oct 2005 21:14:12 -0500",
"msg_from": "\"Announce\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What's the cost of a few extra columns?"
}
] |
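A sketch of the vertical split discussed above, reusing column names from the original post; exactly which columns to move is the judgment call, and the view name is invented. The bulky, rarely-filtered text columns go to a 1-to-1 side table keyed by prod_id, and a view re-assembles the wide row for detail pages:

    CREATE TABLE prod_info (
        prod_id         integer PRIMARY KEY REFERENCES productions (prod_id),
        park_info       text,
        phone_nos       text,
        some_other_info text,
        seating_info    text
    );

    -- list/search queries hit the slimmer productions table directly;
    -- detail pages can read the original shape through a view
    CREATE VIEW productions_full AS
        SELECT p.*, i.park_info, i.phone_nos, i.some_other_info,
               i.seating_info
        FROM productions p
        LEFT JOIN prod_info i USING (prod_id);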
[
{
"msg_contents": "Hi to all, \n\nI have the following configuration:\nDual Xeon 2.8 Ghz, 1G RAM and postgre 8.0.3 installed.\n\nModified configuration parameters:\n\nmax_connections = 100 \n\nshared_buffers = 64000 # 500MB = 500 x 1024 x 1024 / (8 x 1024) (8KB)\nwork_mem = 51200 # 50MB = 50 x 1024 KB\nmaintenance_work_mem = 102400 # 50MB = 100 x 1024 KB \n\ncheckpoint_segments = 10\n\neffective_cache_size = 25600 # 200MB = 50 x 1024 / 8 \n\nclient_min_messages = notice \nlog_min_messages = notice\nlog_min_duration_statement = 2000\n\n\n\nI get the feeling the server is somehow missconfigured or it does not work at full parameter. If I look at memory allocation, it never goes over 250MB whatever I do with the database. The kernel shmmax is set to 600MB. Database Size is around 550MB. \n\n\nNeed some advise. \n\nThanks. \nAndy.\n\n\n\n\n\n\n\nHi to all, \n \nI have the following configuration:\nDual Xeon 2.8 Ghz, 1G RAM and postgre 8.0.3 \ninstalled.\n \nModified configuration parameters:\n \nmax_connections = 100 \n \nshared_buffers = 64000 # 500MB = 500 x \n1024 x 1024 / (8 x 1024) (8KB)work_mem = 51200 # 50MB = 50 x 1024 \nKBmaintenance_work_mem = 102400 # 50MB = 100 x 1024 KB \n \ncheckpoint_segments = 10\n \neffective_cache_size = 25600 # 200MB = 50 x \n1024 / 8 \n \nclient_min_messages = notice \nlog_min_messages = notice\nlog_min_duration_statement = 2000\n \n \n \nI get the feeling the server is somehow \nmissconfigured or it does not work at full parameter. If I look at memory \nallocation, it never goes over 250MB whatever I do with the database. The \nkernel shmmax is set to 600MB. Database Size is around 550MB. \n \n \nNeed some advise. \n \nThanks. \nAndy.",
"msg_date": "Mon, 10 Oct 2005 11:39:45 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server misconfiguration???"
},
{
"msg_contents": "A lot of them are too large. Try:\n\nAndy wrote:\n> Hi to all,\n> \n> I have the following configuration:\n> Dual Xeon 2.8 Ghz, 1G RAM and postgre 8.0.3 installed.\n> \n> Modified configuration parameters:\n> \n> max_connections = 100\n> \n> shared_buffers = 64000 # 500MB = 500 x 1024 x 1024 / (8 x 1024) (8KB)\n\nshared_buffers = 10000\n\n> work_mem = 51200 # 50MB = 50 x 1024 KB\n\nwork_mem = 4096\n\n> maintenance_work_mem = 102400 # 50MB = 100 x 1024 KB\n> \n> checkpoint_segments = 10\n> \n> effective_cache_size = 25600 # 200MB = 50 x 1024 / 8\n> \n> client_min_messages = notice \n> log_min_messages = notice\n> log_min_duration_statement = 2000\n> \n> \n> \n> I get the feeling the server is somehow missconfigured or it does not \n> work at full parameter. If I look at memory allocation, it never \n> goes over 250MB whatever I do with the database. The kernel shmmax is \n> set to 600MB. Database Size is around 550MB.\n\nThat's because you have work_mem set massively high. Remember that's \nPER SORT. If you have 10 queries running each doing 3 sorts that's 30x \nthe work_mem right there.\n\nChris\n\n",
"msg_date": "Mon, 10 Oct 2005 16:55:39 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server misconfiguration???"
},
{
"msg_contents": "Yes you're right it really bosst a little.\nI want to improve the system performance. Are there any more tipps?\n\nOn this server runs only a webserver with php application which uses postgre \nDb. Should I give more memory to postgre? From what I noticed this is the \nmost memory \"needing\" service from this system.\n\n\nAndy.\n\n----- Original Message ----- \nFrom: \"Christopher Kings-Lynne\" <[email protected]>\nTo: \"Andy\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, October 10, 2005 11:55 AM\nSubject: Re: [PERFORM] Server misconfiguration???\n\n\n>A lot of them are too large. Try:\n>\n> Andy wrote:\n>> Hi to all,\n>> I have the following configuration:\n>> Dual Xeon 2.8 Ghz, 1G RAM and postgre 8.0.3 installed.\n>> Modified configuration parameters:\n>> max_connections = 100\n>> shared_buffers = 64000 # 500MB = 500 x 1024 x 1024 / (8 x 1024) (8KB)\n>\n> shared_buffers = 10000\n>\n>> work_mem = 51200 # 50MB = 50 x 1024 KB\n>\n> work_mem = 4096\n>\n>> maintenance_work_mem = 102400 # 50MB = 100 x 1024 KB\n>> checkpoint_segments = 10\n>> effective_cache_size = 25600 # 200MB = 50 x 1024 / 8\n>> client_min_messages = notice log_min_messages = notice\n>> log_min_duration_statement = 2000\n>> I get the feeling the server is somehow missconfigured or it does not \n>> work at full parameter. If I look at memory allocation, it never goes \n>> over 250MB whatever I do with the database. The kernel shmmax is set to \n>> 600MB. Database Size is around 550MB.\n>\n> That's because you have work_mem set massively high. Remember that's PER \n> SORT. If you have 10 queries running each doing 3 sorts that's 30x the \n> work_mem right there.\n>\n> Chris\n>\n>\n> \n\n",
"msg_date": "Mon, 10 Oct 2005 12:42:43 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server misconfiguration???"
},
{
"msg_contents": "\"Andy\" <[email protected]> writes:\n> I get the feeling the server is somehow missconfigured or it does not\n> work at full parameter. If I look at memory allocation, it never goes\n> over 250MB whatever I do with the database.\n\nThat is not wrong. Postgres expects the kernel to be doing disk\ncaching, so the amount of memory that's effectively being used for\ndatabase work includes not only what is shown as belonging to the\nPG processes, but some portion of the kernel disk buffers as well.\nYou don't really *want* the processes eating all of available RAM.\n\nI concur with Chris K-L's comments that you should reduce rather\nthan increase your settings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Oct 2005 10:18:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server misconfiguration??? "
},
{
"msg_contents": "When I ment memory allocation, I look with htop to see the process list, CPU \nload, memory, swap. So I didn't ment the a postgre process uses that amount \nof memory.\n\nI read some tuning things, I made the things that are written there, but I \nthink that there improvements can be made.\n\nregards,\nAndy.\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Andy\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, October 10, 2005 5:18 PM\nSubject: Re: [PERFORM] Server misconfiguration???\n\n\n> \"Andy\" <[email protected]> writes:\n>> I get the feeling the server is somehow missconfigured or it does not\n>> work at full parameter. If I look at memory allocation, it never goes\n>> over 250MB whatever I do with the database.\n>\n> That is not wrong. Postgres expects the kernel to be doing disk\n> caching, so the amount of memory that's effectively being used for\n> database work includes not only what is shown as belonging to the\n> PG processes, but some portion of the kernel disk buffers as well.\n> You don't really *want* the processes eating all of available RAM.\n>\n> I concur with Chris K-L's comments that you should reduce rather\n> than increase your settings.\n>\n> regards, tom lane\n>\n> \n\n",
"msg_date": "Mon, 10 Oct 2005 17:31:10 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server misconfiguration??? "
},
{
"msg_contents": "> Yes you're right it really bosst a little.\n> I want to improve the system performance. Are there any more tipps?\n\nThe rest of the numbers look vaguely ok...\n\n> On this server runs only a webserver with php application which uses \n> postgre Db. Should I give more memory to postgre? From what I noticed \n> this is the most memory \"needing\" service from this system.\n\nThe best thing you can do is use two servers so that pgsql does not \ncompete with web server for RAM... Personally I'd start looking at my \nqueries themselves next, see where I could optimise them.\n\nChris\n",
"msg_date": "Mon, 10 Oct 2005 22:56:41 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server misconfiguration???"
},
{
"msg_contents": "On Mon, Oct 10, 2005 at 05:31:10PM +0300, Andy wrote:\n> I read some tuning things, I made the things that are written there, but I \n> think that there improvements can be made.\n\nHave you tried the suggestions people made? Because if I were you,\nI'd be listing very carefully to what Chris and Tom were telling me\nabout how to tune my database.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n",
"msg_date": "Thu, 13 Oct 2005 11:01:18 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server misconfiguration???"
},
{
"msg_contents": "Yes I did, and it works better(on a test server). I had no time to put it in \nproduction.\nI will try to do small steps to see what happens.\n\nRegards,\nAndy.\n\n\n----- Original Message ----- \nFrom: \"Andrew Sullivan\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, October 13, 2005 6:01 PM\nSubject: Re: [PERFORM] Server misconfiguration???\n\n\n> On Mon, Oct 10, 2005 at 05:31:10PM +0300, Andy wrote:\n>> I read some tuning things, I made the things that are written there, but \n>> I\n>> think that there improvements can be made.\n>\n> Have you tried the suggestions people made? Because if I were you,\n> I'd be listing very carefully to what Chris and Tom were telling me\n> about how to tune my database.\n>\n> A\n>\n> -- \n> Andrew Sullivan | [email protected]\n> In the future this spectacle of the middle classes shocking the avant-\n> garde will probably become the textbook definition of Postmodernism.\n> --Brad Holland\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n> \n\n",
"msg_date": "Fri, 14 Oct 2005 10:23:18 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server misconfiguration???"
}
] |
[
{
"msg_contents": "I have a table in the databases I work with,\nthat contains two text columns with XML data \nstored inside them.\n\nThis table is by far the biggest table in the databases,\nand the text columns use up the most space. \nI saw that the default storage type for text columns is\n\"EXTENDED\" which, according to the documentation, uses up extra\nspace to make possible substring functioning faster. \n\nSuppose that the data in those columns are only really ever\n_used_ once, but may be needed in future for viewing purposes mostly,\nand I cannot really change the underlying structure of the table,\nwhat can I possibly do to maximally reduce the amount of disk space\nused by the table on disk. (There are no indexes on these two columns.)\nI've thought about compression using something like :\nztext http://www.mahalito.net/~harley/sw/postgres/\n\nbut I have to change the table structure a lot and I've already \nencountered problems unzipping the data again.\nThe other problem with this solution, is that database dumps almost double\nin size, because of double compression.\n\nAny suggestions much appreciated\n\nTIA \nStefan \n",
"msg_date": "Mon, 10 Oct 2005 14:17:21 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compression of text columns"
},
{
"msg_contents": "Stef schrieb:\n> I have a table in the databases I work with,\n> that contains two text columns with XML data \n> stored inside them.\n> \n> This table is by far the biggest table in the databases,\n> and the text columns use up the most space. \n> I saw that the default storage type for text columns is\n> \"EXTENDED\" which, according to the documentation, uses up extra\n> space to make possible substring functioning faster. \n> \n> Suppose that the data in those columns are only really ever\n> _used_ once, but may be needed in future for viewing purposes mostly,\n> and I cannot really change the underlying structure of the table,\n> what can I possibly do to maximally reduce the amount of disk space\n> used by the table on disk. (There are no indexes on these two columns.)\n> I've thought about compression using something like :\n> ztext http://www.mahalito.net/~harley/sw/postgres/\n> \n> but I have to change the table structure a lot and I've already \n> encountered problems unzipping the data again.\n> The other problem with this solution, is that database dumps almost double\n> in size, because of double compression.\n> \n> Any suggestions much appreciated\n\nWell, text columns are automatically compressed via the toast mechanism.\nThis is handled transparently for you.\n\n",
"msg_date": "Mon, 10 Oct 2005 14:27:14 +0200",
"msg_from": "Tino Wildenhain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compression of text columns"
},
{
"msg_contents": "Tino Wildenhain mentioned :\n=> Well, text columns are automatically compressed via the toast mechanism.\n=> This is handled transparently for you.\n\nOK, I misread the documentation, and I forgot to mention that\nI'm using postgres 7.3 and 8.0\nIt's actually the EXTERNAL storage type that is larger, not EXTENDED. \nWhat kind of compression is used in the EXTERNAL storage type?\nIs there any way to achieve better compression?\n",
"msg_date": "Mon, 10 Oct 2005 14:57:00 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Compression of text columns"
},
{
"msg_contents": "Stef <[email protected]> writes:\n> I saw that the default storage type for text columns is\n> \"EXTENDED\" which, according to the documentation, uses up extra\n> space to make possible substring functioning faster. \n\nYou misread it. EXTENDED does compression by default on long strings.\nEXTERNAL is the one that suppresses compression.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Oct 2005 10:38:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compression of text columns "
},
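As a concrete illustration of the above, the storage mode can be inspected and changed per column; a minimal sketch, with the table and column names invented for the example:

    -- show the storage mode: 'x' = EXTENDED (compressed), 'e' = EXTERNAL (uncompressed)
    SELECT a.attname, a.attstorage
      FROM pg_attribute a, pg_class c
     WHERE a.attrelid = c.oid
       AND c.relname = 'big_xml_table'
       AND a.attname IN ('request_xml', 'response_xml');

    -- switch a column back to the default compressed mode
    ALTER TABLE big_xml_table ALTER COLUMN request_xml SET STORAGE EXTENDED;

SET STORAGE only affects values stored from then on; rows that are already on disk keep whatever representation they were written with.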
{
"msg_contents": "On Mon, 2005-10-10 at 14:57 +0200, Stef wrote:\n> Is there any way to achieve better compression?\n\nYou can use XML schema aware compression techniques, but PostgreSQL\ndoesn't know about those. You have to do it yourself, or translate the\nXML into an infoset-preserving form that will still allow XPath and\nfriends.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 11 Oct 2005 09:02:57 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compression of text columns"
}
] |
[
{
"msg_contents": "Hello all,\n\nI'm encountering a quite strange performance problem. Look at the\nfollowing two queries and their execution times. The only difference is\nthe first query has OR operator and the second query has AND operator.\nAny ideas?\n\nThank you in advance,\nFederico\n\n\n[FIRST QUERY: EXEC TIME 0.015 SECS]\n\n explain analyze SELECT * FROM ViewHttp\n WHERE txContentType ilike '%html%' OR vchost ilike '%www.%'\n ORDER BY iDStart DESC, iS_ID DESC, iF_ID DESC, iSubID DESC\n OFFSET 0 LIMIT 201\n\n\"Limit (cost=12.75..3736.66 rows=201 width=1250) (actual\ntime=0.000..15.000 rows=201 loops=1)\"\n\" -> Nested Loop (cost=12.75..1996879.04 rows=107782 width=1250)\n(actual time=0.000..15.000 rows=201 loops=1)\"\n\" -> Nested Loop (cost=12.75..1524883.03 rows=6334 width=1106)\n(actual time=0.000..15.000 rows=201 loops=1)\"\n\" Join Filter: (\"outer\".isensorid = \"inner\".isensorid)\"\n\" -> Index Scan Backward using idx_0009_ord4 on\ndetail0009 (cost=0.00..1489241.53 rows=6334 width=1005) (actual\ntime=0.000..15.000 rows=201 loops=1)\"\n\" Filter: ((txcontenttype ~~* '%html%'::text) OR\n((vchost)::text ~~* '%www.%'::text))\"\n\" -> Materialize (cost=12.75..15.25 rows=250 width=101)\n(actual time=0.000..0.000 rows=1 loops=201)\"\n\" -> Seq Scan on sensors (cost=0.00..12.50 rows=250\nwidth=101) (actual time=0.000..0.000 rows=1 loops=1)\"\n\" -> Index Scan using connections_pkey on connections \n(cost=0.00..74.25 rows=18 width=168) (actual time=0.000..0.000 rows=1\nloops=201)\"\n\" Index Cond: ((\"outer\".is_id = connections.is_id) AND\n(\"outer\".if_id = connections.if_id))\"\n\"Total runtime: 15.000 ms\"\n\n\n\n\n[SECOND QUERY: EXEC TIME 13.844 SECS]\n\n explain analyze SELECT * FROM ViewHttp\n WHERE txContentType ilike '%html%' AND vchost ilike '%www.%'\n ORDER BY iDStart DESC, iS_ID DESC, iF_ID DESC, iSubID DESC\n OFFSET 0 LIMIT 201\n\n\"Limit (cost=22476.81..22477.31 rows=201 width=1250) (actual\ntime=13187.000..13187.000 rows=201 loops=1)\"\n\" -> Sort (cost=22476.81..22477.92 rows=443 width=1250) (actual\ntime=13187.000..13187.000 rows=201 loops=1)\"\n\" Sort Key: detail0009.idstart, detail0009.isensorid,\ndetail0009.iforensicid, detail0009.isubid\"\n\" -> Hash Join (cost=13.13..22457.34 rows=443 width=1250)\n(actual time=469.000..10966.000 rows=53559 loops=1)\"\n\" Hash Cond: (\"outer\".isensorid = \"inner\".isensorid)\"\n\" -> Nested Loop (cost=0.00..22437.57 rows=443\nwidth=1165) (actual time=469.000..10201.000 rows=53559 loops=1)\"\n\" -> Seq Scan on detail0009 (cost=0.00..20500.11\nrows=26 width=1005) (actual time=453.000..5983.000 rows=53588 loops=1)\"\n\" Filter: ((txcontenttype ~~* '%html%'::text)\nAND ((vchost)::text ~~* '%www.%'::text))\"\n\" -> Index Scan using connections_pkey on\nconnections (cost=0.00..74.25 rows=18 width=168) (actual\ntime=0.063..0.065 rows=1 loops=53588)\"\n\" Index Cond: ((\"outer\".isensorid =\nconnections.isensorid) AND (\"outer\".iforensicid = connections.iforensicid))\"\n\" -> Hash (cost=12.50..12.50 rows=250 width=101) (actual\ntime=0.000..0.000 rows=0 loops=1)\"\n\" -> Seq Scan on sensors (cost=0.00..12.50 rows=250\nwidth=101) (actual time=0.000..0.000 rows=1 loops=1)\"\n\"Total runtime: 13844.000 ms\"\n\n\n",
"msg_date": "Mon, 10 Oct 2005 15:27:09 +0200",
"msg_from": "\"Federico Simonetti (Liveye)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance on ILIKE with AND operator..."
},
{
"msg_contents": "\"Federico Simonetti (Liveye)\" <[email protected]> writes:\n> I'm encountering a quite strange performance problem.\n\nThe problem stems from the horrid misestimation of the number of rows\nfetched from detail0009:\n\n> \" -> Seq Scan on detail0009 (cost=0.00..20500.11\n> rows=26 width=1005) (actual time=453.000..5983.000 rows=53588 loops=1)\"\n> \" Filter: ((txcontenttype ~~* '%html%'::text)\n> AND ((vchost)::text ~~* '%www.%'::text))\"\n\nWhen the planner is off by a factor of two thousand about the number of\nrows involved, it's not very likely to produce a good plan :-(\n\nIn the OR case the rowcount estimate is 6334, which is somewhat closer\nto reality (only about a factor of 10 off, looks like), and that changes\nthe plan to something that works acceptably well.\n\nAssuming that this is web-log data, the prevalence of www and html\ntogether is hardly surprising, but PG's statistical mechanisms will\nnever realize it. Not sure about a good workaround. Does it make\nsense to combine the two conditions into one?\n\t(vchost || txcontenttype) ilike '%www.%html%'\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Oct 2005 10:58:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance on ILIKE with AND operator... "
},
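One way to see the correlation problem described above is to compare the planner's row estimate with reality for each predicate on its own and then combined; a rough sketch against the table from the plans in this thread:

    EXPLAIN ANALYZE SELECT count(*) FROM detail0009 WHERE txcontenttype ILIKE '%html%';
    EXPLAIN ANALYZE SELECT count(*) FROM detail0009 WHERE vchost ILIKE '%www.%';
    EXPLAIN ANALYZE SELECT count(*) FROM detail0009
     WHERE txcontenttype ILIKE '%html%' AND vchost ILIKE '%www.%';

The estimate for the AND case is roughly the product of the two individual selectivities, which is why it collapses to almost nothing when the columns are in fact strongly correlated.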
{
"msg_contents": "Sorry but this does not seem to improve performance, it takes even more\ntime, have a look at these data:\n\nexplain analyze SELECT * FROM ViewHttp\nWHERE (vchost || txcontenttype) ilike '%www.%html%'\nORDER BY iDStart DESC, iSensorID DESC, iForensicID DESC, iSubID DESC\nOFFSET 0 LIMIT 201\n\n\n\"Limit (cost=22740.77..22741.28 rows=201 width=1250) (actual\ntime=14234.000..14234.000 rows=201 loops=1)\"\n\" -> Sort (cost=22740.77..22741.89 rows=447 width=1250) (actual\ntime=14234.000..14234.000 rows=201 loops=1)\"\n\" Sort Key: detail0009.idstart, detail0009.isensorid,\ndetail0009.iforensicid, detail0009.isubid\"\n\" -> Hash Join (cost=13.13..22721.10 rows=447 width=1250)\n(actual time=469.000..12140.000 rows=54035 loops=1)\"\n\" Hash Cond: (\"outer\".isensorid = \"inner\".isensorid)\"\n\" -> Nested Loop (cost=0.00..22701.27 rows=447\nwidth=1165) (actual time=469.000..11428.000 rows=54035 loops=1)\"\n\" -> Seq Scan on detail0009 (cost=0.00..20763.77\nrows=26 width=1005) (actual time=453.000..6345.000 rows=54064 loops=1)\"\n\" Filter: (((vchost)::text || txcontenttype)\n~~* '%www.%html%'::text)\"\n\" -> Index Scan using connections_pkey on\nconnections (cost=0.00..74.25 rows=18 width=168) (actual\ntime=0.073..0.077 rows=1 loops=54064)\"\n\" Index Cond: ((\"outer\".isensorid =\nconnections.isensorid) AND (\"outer\".iforensicid = connections.iforensicid))\"\n\" -> Hash (cost=12.50..12.50 rows=250 width=101) (actual\ntime=0.000..0.000 rows=0 loops=1)\"\n\" -> Seq Scan on sensors (cost=0.00..12.50 rows=250\nwidth=101) (actual time=0.000..0.000 rows=1 loops=1)\"\n\"Total runtime: 14234.000 ms\"\n\n\nThanks for your help anyway...\n\nFederico\n\n\n\n\n\n\nTom Lane ha scritto:\n\n>\"Federico Simonetti (Liveye)\" <[email protected]> writes:\n> \n>\n>>I'm encountering a quite strange performance problem.\n>> \n>>\n>\n>The problem stems from the horrid misestimation of the number of rows\n>fetched from detail0009:\n>\n> \n>\n>>\" -> Seq Scan on detail0009 (cost=0.00..20500.11\n>>rows=26 width=1005) (actual time=453.000..5983.000 rows=53588 loops=1)\"\n>>\" Filter: ((txcontenttype ~~* '%html%'::text)\n>>AND ((vchost)::text ~~* '%www.%'::text))\"\n>> \n>>\n>\n>When the planner is off by a factor of two thousand about the number of\n>rows involved, it's not very likely to produce a good plan :-(\n>\n>In the OR case the rowcount estimate is 6334, which is somewhat closer\n>to reality (only about a factor of 10 off, looks like), and that changes\n>the plan to something that works acceptably well.\n>\n>Assuming that this is web-log data, the prevalence of www and html\n>together is hardly surprising, but PG's statistical mechanisms will\n>never realize it. Not sure about a good workaround. 
Does it make\n>sense to combine the two conditions into one?\n>\t(vchost || txcontenttype) ilike '%www.%html%'\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n>\n>\n> \n>\n",
"msg_date": "Mon, 10 Oct 2005 17:18:04 +0200",
"msg_from": "\"Federico Simonetti (Liveye)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance on ILIKE with AND operator..."
}
] |
[
{
"msg_contents": "I have a SUSE 9 box that is running Postgres 8.0.1 compiled from source. \nOver time, I see the memory usage of the box go way way up (it's got \n8GBs in it and by the end of the day, it'll be all used up) with what \nlooks like cached inodes relating to the extreme IO generated by \npostgres. We replicate about 10GBs of data every day from our AS/400 \ninto postgres, and it is the main database for our intranet portal, \nwhich will server 40,000 pages on a good day.\n\nI was wondering if there is something I'm doing wrong with my default \nsettings of postgres that is keeping all that stuff cached, or if I just \nneed to switch to XFS or if there is some setting in postgres that I can \ntweak that will make this problem go away. It's gone beyond an annoyance \nand is now into the realm of getting me in trouble if I can't keep this \nDB server up and running. Even a minute or two of downtime in a restart \nis often too much.\n\nAny help you can give in this would be extrememly helpful as I'm very \nfar from an expert on Linux filesystems and postgres tuning.\n\nThanks!\n\n-- \n\nJon Brisbin\nWebmaster\nNPC International, Inc.\n",
"msg_date": "Mon, 10 Oct 2005 16:29:42 -0500",
"msg_from": "Jon Brisbin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "> I have a SUSE 9 box that is running Postgres 8.0.1 compiled from source.\n> Over time, I see the memory usage of the box go way way up (it's got\n> 8GBs in it and by the end of the day, it'll be all used up) with what\n> looks like cached inodes relating to the extreme IO generated by\n>\n> I was wondering if there is something I'm doing wrong with my default\n> settings of postgres that is keeping all that stuff cached, or if I just\n> need to switch to XFS or if there is some setting in postgres that I can\n> tweak that will make this problem go away. It's gone beyond an annoyance\n> and is now into the realm of getting me in trouble if I can't keep this\n> DB server up and running. Even a minute or two of downtime in a restart\n> is often too much.\n>\n> Any help you can give in this would be extrememly helpful as I'm very\n> far from an expert on Linux filesystems and postgres tuning.\n\nYou may want to submit your postgresql.conf. Upgrading to the latest\nstable version may also help, although my experience is related to\nFreeBSD and postgresql 7.4.8.\n\nregards\nClaus\n",
"msg_date": "Mon, 10 Oct 2005 23:45:44 +0200",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Jon Brisbin <[email protected]> writes:\n> I have a SUSE 9 box that is running Postgres 8.0.1 compiled from source. \n> Over time, I see the memory usage of the box go way way up (it's got \n> 8GBs in it and by the end of the day, it'll be all used up) with what \n> looks like cached inodes relating to the extreme IO generated by \n> postgres. We replicate about 10GBs of data every day from our AS/400 \n> into postgres, and it is the main database for our intranet portal, \n> which will server 40,000 pages on a good day.\n\nAre you sure it's not cached data pages, rather than cached inodes?\nIf so, the above behavior is *good*.\n\nPeople often have a mistaken notion that having near-zero free RAM means\nthey have a problem. In point of fact, that is the way it is supposed\nto be (at least on Unix-like systems). This is just a reflection of the\nkernel doing what it is supposed to do, which is to use all spare RAM\nfor caching recently accessed disk pages. If you're not swapping then\nyou do not have a problem.\n\nYou should be looking at swap I/O rates (see vmstat or iostat) to\ndetermine if you have memory pressure, not \"free RAM\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Oct 2005 17:54:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs "
},
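A quick way to follow the advice above and watch for real memory pressure (iostat assumes the sysstat package is installed):

    # memory and swap activity sampled every 5 seconds;
    # sustained non-zero values in the si/so columns mean the box is actually swapping
    vmstat 5

    # per-device I/O rates, also every 5 seconds
    iostat -x 5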
{
"msg_contents": "More info:\n\napps:/home/jbrisbin # free -mo\n total used free shared buffers cached\nMem: 8116 5078 3038 0 92 4330\nSwap: 1031 0 1031\n\n\napps:/home/jbrisbin # cat /proc/meminfo\nMemTotal: 8311188 kB\nMemFree: 3111668 kB\nBuffers: 94604 kB\nCached: 4434764 kB\nSwapCached: 0 kB\nActive: 4844344 kB\nInactive: 279556 kB\nHighTotal: 7469996 kB\nHighFree: 2430976 kB\nLowTotal: 841192 kB\nLowFree: 680692 kB\nSwapTotal: 1056124 kB\nSwapFree: 1056124 kB\nDirty: 436 kB\nWriteback: 0 kB\nMapped: 581924 kB\nSlab: 48264 kB\nCommitted_AS: 651128 kB\nPageTables: 4020 kB\nVmallocTotal: 112632 kB\nVmallocUsed: 13104 kB\nVmallocChunk: 97284 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugepagesize: 2048 kB\n\napps:/home/jbrisbin # cat /proc/slabinfo\nslabinfo - version: 2.0\n...\nreiser_inode_cache 28121 28140 512 7 1 : tunables\n...\nradix_tree_node 28092 28154 276 14 1 : tunables\n...\ninode_cache 1502 1520 384 10 1 : tunables\ndentry_cache 40763 40794 152 26 1 : tunables\n...\nbuffer_head 83929 94643 52 71 1 : tunables\n\n\nClaus Guttesen wrote:\n\n> You may want to submit your postgresql.conf. Upgrading to the latest\n> stable version may also help, although my experience is related to\n> FreeBSD and postgresql 7.4.8.\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the postmaster.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\". 
Some settings, such as listen_address, require\n# a postmaster shutdown and restart to take effect.\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n# data_directory = 'ConfigDir' # use data in another directory\n# hba_file = 'ConfigDir/pg_hba.conf' # the host-based authentication file\n# ident_file = 'ConfigDir/pg_ident.conf' # the IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n# external_pid_file = '(none)' # write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = '*' # what IP interface(s) to listen on;\n # defaults to localhost, '*' = any\n#port = 5432\nmax_connections = 100\n # note: increasing max_connections costs about 500 bytes of shared\n # memory per connection slot, in addition to costs from \nshared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 1000 # min 16, at least max_connections*2, \n8KB each\n#work_mem = 1024 # min 64, size in KB\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n#bgwriter_percent = 1 # 0-100% of dirty buffers in each round\n#bgwriter_maxpages = 100 # 0-1000 buffers max per round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or \nopen_datasync\n#wal_buffers = 8 # min 4, 8KB each\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n\n# - Archiving 
-\n\n#archive_command = '' # command to use to archive a logfile \nsegment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\nlog_destination = 'syslog' # Valid values are combinations of stderr,\n # syslog and eventlog, depending on\n # platform.\n\n# This is relevant when logging to stderr:\nredirect_stderr = true # Enable capturing of stderr into log files.\n# These are only relevant if redirect_stderr is true:\nlog_directory = 'pg_log' # Directory where log files are written.\n # May be specified absolute or relative to \nPGDATA\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.\n # May include strftime() escapes\n#log_truncate_on_rotation = false # If true, any existing log file of the\n # same name as the new log file will be \ntruncated\n # rather than appended to. But such truncation\n # only occurs on time-driven rotation,\n # not on restarts or size-driven rotation.\n # Default is false, meaning append to existing\n # files in all cases.\n#log_rotation_age = 1440 # Automatic rotation of logfiles will happen \nafter\n # so many minutes. 0 to disable.\n#log_rotation_size = 10240 # Automatic rotation of logfiles will happen \nafter\n # so many kilobytes of log output. 
0 to \ndisable.\n\n# These are relevant when logging to syslog:\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\n\n# - When to Log -\n\nclient_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, notice, warning, error\n\nlog_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, \nfatal,\n # panic\n\nlog_error_verbosity = default # terse, default, or verbose messages\n\nlog_min_error_statement = notice # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, \ndebug1,\n # info, notice, warning, error, \npanic(off)\n\n#log_min_duration_statement = -1 # -1 is disabled, in milliseconds.\n\n#silent_mode = false # DO NOT USE without syslog or \nredirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n#log_connections = false\n#log_disconnections = false\n#log_duration = false\n#log_line_prefix = '' # e.g. '<%u%%%d> '\n # %u=user name %d=database name\n # %r=remote host and port\n # %p=PID %t=timestamp %i=command tag\n # %c=session id %l=session line number\n # %s=session start timestamp \n%x=transaction id\n # %q=stop here in non-session processes\n # %%='%'\n#log_statement = 'none' # none, mod, ddl, all\nlog_hostname = true\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#default_tablespace = '' # a tablespace name, or '' for default\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment \nsetting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'C' # locale for system error message strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~200*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM 
COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n#default_with_oids = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n\n-- \n\nJon Brisbin\nWebmaster\nNPC International, Inc.\n",
"msg_date": "Mon, 10 Oct 2005 17:13:53 -0500",
"msg_from": "Jon Brisbin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Are you sure it's not cached data pages, rather than cached inodes?\n> If so, the above behavior is *good*.\n> \n> People often have a mistaken notion that having near-zero free RAM means\n> they have a problem. In point of fact, that is the way it is supposed\n> to be (at least on Unix-like systems). This is just a reflection of the\n> kernel doing what it is supposed to do, which is to use all spare RAM\n> for caching recently accessed disk pages. If you're not swapping then\n> you do not have a problem.\n\nExcept for the fact that my Java App server crashes when all the \navailable memory is being used by caching and not reclaimed :-)\n\nIf it wasn't for the app server going down, I probably wouldn't care.\n\n-- \n\nJon Brisbin\nWebmaster\nNPC International, Inc.\n",
"msg_date": "Mon, 10 Oct 2005 17:15:27 -0500",
"msg_from": "Jon Brisbin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Jon,\n\n> Any help you can give in this would be extrememly helpful as I'm very\n> far from an expert on Linux filesystems and postgres tuning.\n\nSee Tom's response; it may be that you don't have an issue at all.\n\nIf you do, it's probably the kernel, not the FS. 2.6.8 and a few other \n2.6.single-digit kernels had memory leaks in shmem that would cause \ngradually escalating swappage. The solution to that one is to upgrade to \n2.6.11.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 10 Oct 2005 15:15:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Jon Brisbin wrote:\n> Tom Lane wrote:\n> >\n> >Are you sure it's not cached data pages, rather than cached inodes?\n> >If so, the above behavior is *good*.\n> >\n> >People often have a mistaken notion that having near-zero free RAM means\n> >they have a problem. In point of fact, that is the way it is supposed\n> >to be (at least on Unix-like systems). This is just a reflection of the\n> >kernel doing what it is supposed to do, which is to use all spare RAM\n> >for caching recently accessed disk pages. If you're not swapping then\n> >you do not have a problem.\n> \n> Except for the fact that my Java App server crashes when all the \n> available memory is being used by caching and not reclaimed :-)\n\nAh, so you have a different problem. What you should be asking is why\nthe appserver crashes. You still seem to have a lot of free swap,\njudging by a nearby post. But maybe the problem is that the swap is\ncompletely used too, and so the OOM killer (is this Linux?) comes around\nand kills the appserver.\n\nCertainly the problem is not the caching. You should be monitoring when\nand why the appserver dies.\n\n-- \nAlvaro Herrera Architect, http://www.EnterpriseDB.com\n\"On the other flipper, one wrong move and we're Fatal Exceptions\"\n(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)\n",
"msg_date": "Mon, 10 Oct 2005 19:29:37 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Jon Brisbin <[email protected]> writes:\n> Tom Lane wrote:\n>> If you're not swapping then you do not have a problem.\n\n> Except for the fact that my Java App server crashes when all the \n> available memory is being used by caching and not reclaimed :-)\n\nThat's a kernel bug (or possibly a Java bug ;-)). I concur with Josh's\nsuggestion that you need a newer kernel.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Oct 2005 18:35:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs "
},
{
"msg_contents": "I have a postgresql 7.4.8-server with 4 GB ram.\n\n> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n> #max_fsm_relations = 1000 # min 100, ~50 bytes each\n\nIf you do a vacuum verbose (when it's convenient) the last couple of\nlines will tell you something like this:\n\nINFO: free space map: 143 relations, 62034 pages stored; 63792 total\npages needed\nDETAIL: Allocated FSM size: 300 relations + 75000 pages = 473 kB shared memory.\n\nIt says 143 relations and 63792 total pages needed, so I up'ed my\nvalues to these settings:\n\nmax_fsm_relations = 300 # min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 75000 # min 1000, fsm is free space map, ~6 bytes\n\n> #effective_cache_size = 1000 # typically 8KB each\n\nThis is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I\nchanged it to:\n\neffective_cache_size = 27462 # typically 8KB each\n\nBear in mind that this is 7.4.8 and FreeBSD so these suggestions may\nnot apply to your environment. These suggestions could be validated by\nthe other members of this list.\n\nregards\nClaus\n",
"msg_date": "Tue, 11 Oct 2005 09:41:32 +0200",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "On Tue, 2005-10-11 at 09:41 +0200, Claus Guttesen wrote:\n> I have a postgresql 7.4.8-server with 4 GB ram.\n\n<snip>\n> \n> > #effective_cache_size = 1000 # typically 8KB each\n> \n> This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I\n> changed it to:\n> \n> effective_cache_size = 27462 # typically 8KB each\n\nApparently this formula is no longer relevant on the FreeBSD systems as\nit can cache up to almost all the available RAM. With 4GB of RAM, one\ncould specify most of the RAM as being available for caching, assuming\nthat nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM\nwould be a reasonable value to tell the planner.\n\n(This was verified by using dd: \ndd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create\na 2G file then\ndd if=/usr/local/pgsql/iotest of=/dev/null\n\nIf you run systat -vmstat 2 you will see 0% diskaccess during the read\nof the 2G file indicating that it has, in fact, been cached)\n\n\nSven\n\n",
"msg_date": "Tue, 11 Oct 2005 09:52:30 -0400",
"msg_from": "Sven Willenberger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
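For the Linux box that started this thread, the equivalent hint comes from the Cached figure already posted from /proc/meminfo (about 4.4 GB). A sketch of turning that into the 8 KB pages effective_cache_size is measured in; the value only informs the planner, so the rounding is not critical:

    # Cached: 4434764 kB  ->  4434764 / 8 = roughly 554000 8 KB pages
    # postgresql.conf
    effective_cache_size = 550000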
{
"msg_contents": "Realise also that unless you are running the 1.5 x86-64 build, java will not\nuse more than 1Gig, and if the app server requests more than 1gig, Java will\ndie (I've been there) with an out of memory error, even though there is\nplenty of free mem available. This can easily be cause by a lazy GC thread\nif the applicaiton is running high on CPU usage.\n\nThe kernel will not report memory used for caching pages as being\nunavailable, if a program calls a malloc, the kernel will just swap out the\noldest disk page and give the memory to the application.\n\nYour free -mo shows 3 gig free even with cached disk pages. It looks to me\nmore like either a Java problem, or a kernel problem...\n\nAlex Turner\nNetEconomist\n\nOn 10/10/05, Jon Brisbin <[email protected]> wrote:\n>\n> Tom Lane wrote:\n> >\n> > Are you sure it's not cached data pages, rather than cached inodes?\n> > If so, the above behavior is *good*.\n> >\n> > People often have a mistaken notion that having near-zero free RAM means\n> > they have a problem. In point of fact, that is the way it is supposed\n> > to be (at least on Unix-like systems). This is just a reflection of the\n> > kernel doing what it is supposed to do, which is to use all spare RAM\n> > for caching recently accessed disk pages. If you're not swapping then\n> > you do not have a problem.\n>\n> Except for the fact that my Java App server crashes when all the\n> available memory is being used by caching and not reclaimed :-)\n>\n> If it wasn't for the app server going down, I probably wouldn't care.\n>\n> --\n>\n> Jon Brisbin\n> Webmaster\n> NPC International, Inc.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nRealise also that unless you are running the 1.5 x86-64 build, java\nwill not use more than 1Gig, and if the app server requests more than\n1gig, Java will die (I've been there) with an out of memory error, even\nthough there is plenty of free mem available. This can easily be\ncause by a lazy GC thread if the applicaiton is running high on CPU\nusage.\n\nThe kernel will not report memory used for caching pages as being\nunavailable, if a program calls a malloc, the kernel will just swap out\nthe oldest disk page and give the memory to the application.\n\nYour free -mo shows 3 gig free even with cached disk pages. It\nlooks to me more like either a Java problem, or a kernel problem...\n\nAlex Turner\nNetEconomistOn 10/10/05, Jon Brisbin <[email protected]> wrote:\nTom Lane wrote:>> Are you sure it's not cached data pages, rather than cached inodes?> If so, the above behavior is *good*.>> People often have a mistaken notion that having near-zero free RAM means\n> they have a problem. In point of fact, that is the way it is supposed> to be (at least on Unix-like systems). This is just a reflection of the> kernel doing what it is supposed to do, which is to use all spare RAM\n> for caching recently accessed disk pages. 
If you're not swapping then> you do not have a problem.Except for the fact that my Java App server crashes when all theavailable memory is being used by caching and not reclaimed :-)\nIf it wasn't for the app server going down, I probably wouldn't care.--Jon BrisbinWebmasterNPC International, Inc.---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Tue, 11 Oct 2005 10:27:36 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Alex Turner wrote:\n\n> Realise also that unless you are running the 1.5 x86-64 build, java \n> will not use more than 1Gig, and if the app server requests more than \n> 1gig, Java will die (I've been there) with an out of memory error, \n> even though there is plenty of free mem available. This can easily be \n> cause by a lazy GC thread if the applicaiton is running high on CPU usage.\n\nOn my side of Planet Earth, the standard non-x64 1.5 JVM will happily \nuse more than 1G of memory (on linux and Solaris, can't speak for \nWindows). If you're running larger programs, it's probably a good idea \nto use the -server compiler in the JVM as well. I regularly run with \n-Xmx1800m and regularly have >1GB heap sizes.\n\nThe standard GC will not cause on OOM error if space remains for the \nrequested object. The GC thread blocks all other threads during its \nactivity, whatever else is happening on the machine. The \nnewer/experimental GC's did have some potential race conditions, but I \nbelieve those have been resolved in the 1.5 JVMs. \n\nFinally, note that the latest _05 release of the 1.5 JVM also now \nsupports large page sizes on Linux and Windows:\n-XX:+UseLargePages this can be quite beneficial depending on the \nmemory patterns in your programs.\n\n-- Alan\n",
"msg_date": "Tue, 11 Oct 2005 10:51:42 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
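For anyone wanting to try the flags mentioned above, a hypothetical launch line for the app server (the jar name and heap sizes are placeholders, not a recommendation for this particular box, and large pages also have to be enabled in the OS):

    # server compiler, ~1.8 GB heap, large pages on a 1.5.0_05+ JVM
    java -server -Xms512m -Xmx1800m -XX:+UseLargePages -jar appserver.jar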
{
"msg_contents": "Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64) but I\nwas more thinking 1.4 which many folks are still using.\n\nAlex\n\nOn 10/11/05, Alan Stange <[email protected]> wrote:\n>\n> Alex Turner wrote:\n>\n> > Realise also that unless you are running the 1.5 x86-64 build, java\n> > will not use more than 1Gig, and if the app server requests more than\n> > 1gig, Java will die (I've been there) with an out of memory error,\n> > even though there is plenty of free mem available. This can easily be\n> > cause by a lazy GC thread if the applicaiton is running high on CPU\n> usage.\n>\n> On my side of Planet Earth, the standard non-x64 1.5 JVM will happily\n> use more than 1G of memory (on linux and Solaris, can't speak for\n> Windows). If you're running larger programs, it's probably a good idea\n> to use the -server compiler in the JVM as well. I regularly run with\n> -Xmx1800m and regularly have >1GB heap sizes.\n>\n> The standard GC will not cause on OOM error if space remains for the\n> requested object. The GC thread blocks all other threads during its\n> activity, whatever else is happening on the machine. The\n> newer/experimental GC's did have some potential race conditions, but I\n> believe those have been resolved in the 1.5 JVMs.\n>\n> Finally, note that the latest _05 release of the 1.5 JVM also now\n> supports large page sizes on Linux and Windows:\n> -XX:+UseLargePages this can be quite beneficial depending on the\n> memory patterns in your programs.\n>\n> -- Alan\n>\n\nPerhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)\nbut I was more thinking 1.4 which many folks are still using.\n\nAlexOn 10/11/05, Alan Stange <[email protected]> wrote:\nAlex Turner wrote:> Realise also that unless you are running the 1.5 x86-64 build, java> will not use more than 1Gig, and if the app server requests more than> 1gig, Java will die (I've been there) with an out of memory error,\n> even though there is plenty of free mem available. This can easily be> cause by a lazy GC thread if the applicaiton is running high on CPU usage.On my side of Planet Earth, the standard non-x64 1.5\n JVM will happilyuse more than 1G of memory (on linux and Solaris, can't speak forWindows). If you're running larger programs, it's probably a good ideato use the -server compiler in the JVM as well. I regularly run with\n-Xmx1800m and regularly have >1GB heap sizes.The standard GC will not cause on OOM error if space remains for therequested object. The GC thread blocks all other threads during itsactivity, whatever else is happening on the machine. The\nnewer/experimental GC's did have some potential race conditions, but Ibelieve those have been resolved in the 1.5 JVMs.Finally, note that the latest _05 release of the 1.5 JVM also nowsupports large page sizes on Linux and Windows:\n-XX:+UseLargePages this can be quite beneficial depending on thememory patterns in your programs.-- Alan",
"msg_date": "Tue, 11 Oct 2005 10:57:10 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Alex Turner wrote:\n\n> Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64) \n> but I was more thinking 1.4 which many folks are still using.\n\nThe 1.4.x JVM's will also work just fine with much more than 1GB of \nmemory. Perhaps you'd like to try again?\n\n-- Alan\n\n>\n> On 10/11/05, *Alan Stange* <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Alex Turner wrote:\n>\n> > Realise also that unless you are running the 1.5 x86-64 build, java\n> > will not use more than 1Gig, and if the app server requests more\n> than\n> > 1gig, Java will die (I've been there) with an out of memory error,\n> > even though there is plenty of free mem available. This can\n> easily be\n> > cause by a lazy GC thread if the applicaiton is running high on\n> CPU usage.\n>\n> On my side of Planet Earth, the standard non-x64 1.5 JVM will happily\n> use more than 1G of memory (on linux and Solaris, can't speak for\n> Windows). If you're running larger programs, it's probably a good\n> idea\n> to use the -server compiler in the JVM as well. I regularly run with\n> -Xmx1800m and regularly have >1GB heap sizes.\n>\n> The standard GC will not cause on OOM error if space remains for the\n> requested object. The GC thread blocks all other threads during its\n> activity, whatever else is happening on the machine. The\n> newer/experimental GC's did have some potential race conditions, but I\n> believe those have been resolved in the 1.5 JVMs.\n>\n> Finally, note that the latest _05 release of the 1.5 JVM also now\n> supports large page sizes on Linux and Windows:\n> -XX:+UseLargePages this can be quite beneficial depending on the\n> memory patterns in your programs.\n>\n> -- Alan\n>\n>\n\n",
"msg_date": "Tue, 11 Oct 2005 11:28:30 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
},
{
"msg_contents": "Well - to each his own I guess - we did extensive testing on 1.4, and it\nrefused to allocate much past 1gig on both Linux x86/x86-64 and Windows.\n\nAlex\n\nOn 10/11/05, Alan Stange <[email protected]> wrote:\n>\n> Alex Turner wrote:\n>\n> > Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)\n> > but I was more thinking 1.4 which many folks are still using.\n>\n> The 1.4.x JVM's will also work just fine with much more than 1GB of\n> memory. Perhaps you'd like to try again?\n>\n> -- Alan\n>\n> >\n> > On 10/11/05, *Alan Stange* <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Alex Turner wrote:\n> >\n> > > Realise also that unless you are running the 1.5 x86-64 build, java\n> > > will not use more than 1Gig, and if the app server requests more\n> > than\n> > > 1gig, Java will die (I've been there) with an out of memory error,\n> > > even though there is plenty of free mem available. This can\n> > easily be\n> > > cause by a lazy GC thread if the applicaiton is running high on\n> > CPU usage.\n> >\n> > On my side of Planet Earth, the standard non-x64 1.5 JVM will happily\n> > use more than 1G of memory (on linux and Solaris, can't speak for\n> > Windows). If you're running larger programs, it's probably a good\n> > idea\n> > to use the -server compiler in the JVM as well. I regularly run with\n> > -Xmx1800m and regularly have >1GB heap sizes.\n> >\n> > The standard GC will not cause on OOM error if space remains for the\n> > requested object. The GC thread blocks all other threads during its\n> > activity, whatever else is happening on the machine. The\n> > newer/experimental GC's did have some potential race conditions, but I\n> > believe those have been resolved in the 1.5 JVMs.\n> >\n> > Finally, note that the latest _05 release of the 1.5 JVM also now\n> > supports large page sizes on Linux and Windows:\n> > -XX:+UseLargePages this can be quite beneficial depending on the\n> > memory patterns in your programs.\n> >\n> > -- Alan\n> >\n> >\n>\n>\n\nWell - to each his own I guess - we did extensive testing on 1.4, and\nit refused to allocate much past 1gig on both Linux x86/x86-64 and\nWindows.\n\nAlexOn 10/11/05, Alan Stange <[email protected]> wrote:\nAlex Turner wrote:> Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)> but I was more thinking 1.4 which many folks are still using.The 1.4.x JVM's will also work just fine with much more than 1GB of\nmemory. Perhaps you'd like to try again?-- Alan>> On 10/11/05, *Alan Stange* <[email protected]> <mailto:\[email protected]>> wrote:>> Alex Turner wrote:>> > Realise also that unless you are running the 1.5 x86-64 build, java> > will not use more than 1Gig, and if the app server requests more\n> than> > 1gig, Java will die (I've been there) with an out of memory error,> > even though there is plenty of free mem available. This can> easily be> > cause by a lazy GC thread if the applicaiton is running high on\n> CPU usage.>> On my side of Planet Earth, the standard non-x64 1.5 JVM will happily> use more than 1G of memory (on linux and Solaris, can't speak for> Windows). If you're running larger programs, it's probably a good\n> idea> to use the -server compiler in the JVM as well. I regularly run with> -Xmx1800m and regularly have >1GB heap sizes.>> The standard GC will not cause on OOM error if space remains for the\n> requested object. The GC thread blocks all other threads during its> activity, whatever else is happening on the machine. 
The> newer/experimental GC's did have some potential race conditions, but I\n> believe those have been resolved in the 1.5 JVMs.>> Finally, note that the latest _05 release of the 1.5 JVM also now> supports large page sizes on Linux and Windows:> -XX:+UseLargePages this can be quite beneficial depending on the\n> memory patterns in your programs.>> -- Alan>>",
"msg_date": "Tue, 11 Oct 2005 14:11:02 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on SUSE w/ reiserfs"
}
] |
[
{
"msg_contents": "Fellow Postgresql users,\n\nI have a Pg 7.4.8 data on XFS RAID10 6-disc (U320 SCSI on LSI MegaRAID\nw/bbu). The pg_xlog is on its own RAID1 with nothing else.\n\nI don't have room for more drives, but I am considering moving the XFS\nexternal log of the data directory to the RAID1 where the pg_xlog\nexists. Unfortunately, I don't have room on the RAID1 that the OS exists\non(Centos Linux 4.1).\n\nAnyone have any experience moving the XFS log to the pg_xlog? The\nguessing the the benefit / cost will cancel each other out. \n\nThanks.\n\nSteve Poe\n\n",
"msg_date": "Mon, 10 Oct 2005 15:28:42 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "XFS External Log on Pg 7.4.8 Pg_xlog drives?"
},
{
"msg_contents": "On Mon, Oct 10, 2005 at 03:28:42PM -0700, Steve Poe wrote:\n>I don't have room for more drives, but I am considering moving the XFS\n>external log \n\nThere is absolutely no reason to move the xfs log on a system that\nsmall. \n\nMike Stone\n",
"msg_date": "Mon, 10 Oct 2005 18:48:34 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS External Log on Pg 7.4.8 Pg_xlog drives?"
}
] |
[
{
"msg_contents": "Hi to all, \n\nI have the following problem: I have a client to which we send every night a \"dump\" with a the database in which there are only their data's. It is a stupid solution but I choose this solution because I couldn't find any better. The target machine is a windows 2003. \n\nSo, I have a replication only with the tables that I need to send, then I make a copy of this replication, and from this copy I delete all the data's that are not needed. \n\nHow can I increase this DELETE procedure because it is really slow??? There are of corse a lot of data's to be deleted. \n\n\n\nOr is there any other solution for this? \nDB -> (replication) RE_DB -> (copy) -> COPY_DB -> (Delete unnecesary data) -> CLIENT_DB -> (ISDN connection) -> Data's to the client. \n\nRegards, \nAndy.\n\n\n\n\n\n\n\n\n\nHi to all, \n \nI have the following problem: I have a client to \nwhich we send every night a \"dump\" with a the database in which there are only \ntheir data's. It is a stupid solution but I choose this solution because I \ncouldn't find any better. The target machine is a windows 2003. \n \n \nSo, I have a replication only with the tables that \nI need to send, then I make a copy of this replication, and from this copy I \ndelete all the data's that are not needed. \n \nHow can I increase this DELETE procedure because it \nis really slow??? There are of corse a lot of data's to be deleted. \n\n \n \n \nOr is there any other solution for this? \n\nDB -> (replication) RE_DB -> (copy) -> \nCOPY_DB -> (Delete unnecesary data) -> CLIENT_DB -> (ISDN connection) \n-> Data's to the client. \n \nRegards, \nAndy.",
"msg_date": "Tue, 11 Oct 2005 10:47:03 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive delete performance"
},
{
"msg_contents": "On 10/11/05 3:47 AM, \"Andy\" <[email protected]> wrote:\n\n> Hi to all, \n> \n> I have the following problem: I have a client to which we send every night a\n> \"dump\" with a the database in which there are only their data's. It is a\n> stupid solution but I choose this solution because I couldn't find any better.\n> The target machine is a windows 2003.\n> \n> So, I have a replication only with the tables that I need to send, then I make\n> a copy of this replication, and from this copy I delete all the data's that\n> are not needed. \n> \n> How can I increase this DELETE procedure because it is really slow??? There\n> are of corse a lot of data's to be deleted.\n\nDo you have foreign key relationships that must be followed for cascade\ndelete? If so, make sure that you have indices on them. Are you running\nany type of vacuum after the whole process? What kind?\n\nSean\n\n",
"msg_date": "Tue, 11 Oct 2005 07:54:39 -0400",
"msg_from": "Sean Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive delete performance"
},
{
"msg_contents": "> Do you have foreign key relationships that must be followed for cascade\n> delete? If so, make sure that you have indices on them.\nYes I have such things. Indexes are on these fields. >> To be onest this \ndelete is taking the longest time, but it involves about 10 tables.\n\n> Are you running\n> any type of vacuum after the whole process? What kind?\nFull vacuum. (cmd: vacuumdb -f)\n\nIs there any configuration parameter for delete speed up?\n\n\n----- Original Message ----- \nFrom: \"Sean Davis\" <[email protected]>\nTo: \"Andy\" <[email protected]>; <[email protected]>\nSent: Tuesday, October 11, 2005 2:54 PM\nSubject: Re: [PERFORM] Massive delete performance\n\n\n> On 10/11/05 3:47 AM, \"Andy\" <[email protected]> wrote:\n>\n>> Hi to all,\n>>\n>> I have the following problem: I have a client to which we send every \n>> night a\n>> \"dump\" with a the database in which there are only their data's. It is a\n>> stupid solution but I choose this solution because I couldn't find any \n>> better.\n>> The target machine is a windows 2003.\n>>\n>> So, I have a replication only with the tables that I need to send, then I \n>> make\n>> a copy of this replication, and from this copy I delete all the data's \n>> that\n>> are not needed.\n>>\n>> How can I increase this DELETE procedure because it is really slow??? \n>> There\n>> are of corse a lot of data's to be deleted.\n>\n> Do you have foreign key relationships that must be followed for cascade\n> delete? If so, make sure that you have indices on them. Are you running\n> any type of vacuum after the whole process? What kind?\n>\n> Sean\n>\n>\n> \n\n",
"msg_date": "Tue, 11 Oct 2005 15:05:52 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive delete performance"
},
{
"msg_contents": "On 10/11/05 8:05 AM, \"Andy\" <[email protected]> wrote:\n\n>> Do you have foreign key relationships that must be followed for cascade\n>> delete? If so, make sure that you have indices on them.\n> Yes I have such things. Indexes are on these fields. >> To be onest this\n> delete is taking the longest time, but it involves about 10 tables.\n\nCan you post explain analyze output of the next delete?\n\nSean\n\n",
"msg_date": "Tue, 11 Oct 2005 08:12:11 -0400",
"msg_from": "Sean Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive delete performance"
},
{
"msg_contents": "On Tue, Oct 11, 2005 at 10:47:03AM +0300, Andy wrote:\n> So, I have a replication only with the tables that I need to send, then I\n> make a copy of this replication, and from this copy I delete all the data's\n> that are not needed. \n> \n> How can I increase this DELETE procedure because it is really slow???\n> There are of corse a lot of data's to be deleted. \n\nInstead of copying and then deleting, could you try just selecting out what\nyou wanted in the first place?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 11 Oct 2005 14:19:42 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive delete performance"
},
{
"msg_contents": "We run the DB on a linux system. The client has a windows system. The\napplication is almost the same (so the database structure is 80% the same).\nThe difference is that the client does not need all the tables.\n\nSo, in the remaining tables there are a lot of extra data's that don't\nbelong to this client. We have to send every night a updated \"info\" to the\nclient database. Our (have to admin) \"fast and not the best\" solution was so\nreplicated the needed tables, and delete from these the info that is not\nneeded.\n\nSo, I send to this client a \"dump\" from the database.\n\nI also find the ideea \"not the best\", but couldn't find in two days another\nfast solution. And it works this way for 4 months.\n\nOut database is not THAT big (500MB), the replication about (300MB)...\neverything works fast enough except this delete....\n\n\nHow can I evidence the cascade deletes also on explain analyze?\n\nThe answer for Sean Davis <[email protected]>:\n\nEXPLAIN ANALYZE\nDELETE FROM report WHERE id_order IN\n(SELECT o.id FROM orders o WHERE o.id_ag NOT IN (SELECT cp.id_ag FROM users \nu INNER JOIN\ncontactpartner cp ON cp.id_user=u.id WHERE u.name in ('dc') ORDER BY \ncp.id_ag))\n\nHash IN Join (cost=3532.83..8182.33 rows=32042 width=6) (actual \ntime=923.456..2457.323 rows=59557 loops=1)\n Hash Cond: (\"outer\".id_order = \"inner\".id)\n -> Seq Scan on report (cost=0.00..2613.83 rows=64083 width=10) (actual \ntime=33.269..1159.024 rows=64083 loops=1)\n -> Hash (cost=3323.31..3323.31 rows=32608 width=4) (actual \ntime=890.021..890.021 rows=0 loops=1)\n -> Seq Scan on orders o (cost=21.12..3323.31 rows=32608 width=4) \n(actual time=58.428..825.306 rows=60596 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Sort (cost=21.11..21.12 rows=3 width=4) (actual \ntime=47.612..47.612 rows=1 loops=1)\n Sort Key: cp.id_ag\n -> Nested Loop (cost=0.00..21.08 rows=3 width=4) \n(actual time=47.506..47.516 rows=1 loops=1)\n -> Index Scan using users_name_idx on users u \n(cost=0.00..5.65 rows=1 width=4) (actual time=20.145..20.148 rows=1 loops=1)\n Index Cond: ((name)::text = 'dc'::text)\n -> Index Scan using contactpartner_id_user_idx \non contactpartner cp (cost=0.00..15.38 rows=4 width=8) (actual \ntime=27.348..27.352 rows=1 loops=1)\n Index Cond: (cp.id_user = \"outer\".id)\nTotal runtime: 456718.658 ms\n\n\n\n\n----- Original Message ----- \nFrom: \"Steinar H. Gunderson\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, October 11, 2005 3:19 PM\nSubject: Re: [PERFORM] Massive delete performance\n\n\n> On Tue, Oct 11, 2005 at 10:47:03AM +0300, Andy wrote:\n>> So, I have a replication only with the tables that I need to send, then I\n>> make a copy of this replication, and from this copy I delete all the\n>> data's\n>> that are not needed.\n>>\n>> How can I increase this DELETE procedure because it is really slow???\n>> There are of corse a lot of data's to be deleted.\n>\n> Instead of copying and then deleting, could you try just selecting out\n> what\n> you wanted in the first place?\n>\n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n\n",
"msg_date": "Tue, 11 Oct 2005 16:27:48 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive delete performance"
},
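A rough sketch of Steinar's suggestion, using the table names from the DELETE that Andy posted; the column list and the exact filter are assumptions, so treat it as an illustration rather than a drop-in replacement. Instead of copying everything and then deleting the other clients' rows, the per-client copy is built directly from a filtered SELECT:

    CREATE TABLE report_for_client AS
    SELECT r.*
      FROM report r
      JOIN orders o ON o.id = r.id_order
     WHERE o.id_ag IN (SELECT cp.id_ag
                         FROM users u
                         JOIN contactpartner cp ON cp.id_user = u.id
                        WHERE u.name = 'dc');

The same pattern would be repeated for each table sent to the client, which avoids both the mass DELETE and the FK checks it triggers.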
{
"msg_contents": "\"Andy\" <[email protected]> writes:\n> EXPLAIN ANALYZE\n> DELETE FROM report WHERE id_order IN\n> ...\n\n> Hash IN Join (cost=3532.83..8182.33 rows=32042 width=6) (actual \n> time=923.456..2457.323 rows=59557 loops=1)\n> ...\n> Total runtime: 456718.658 ms\n\nSo the runtime is all in the delete triggers. The usual conclusion from\nthis is that there is a foreign key column pointing at this table that\ndoes not have an index, or is not the same datatype as the column it\nreferences. Either condition will force a fairly inefficient way of\nhandling the FK deletion check.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Oct 2005 10:17:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive delete performance "
},
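To make Tom's point concrete (the referencing table and column below are made up, since they were not posted): if some table has a foreign key pointing at report and the referencing column has no index, every deleted report row forces a scan of that table during the FK check. The fix is an index on the referencing side, with an exactly matching data type:

    -- hypothetical referencing table and column
    CREATE INDEX report_detail_report_id_idx ON report_detail (report_id);
    -- report_detail.report_id should have exactly the same type as the
    -- referenced column in report, or the check can still be slow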
{
"msg_contents": "Ups folks,\n\nIndeed there were 2 important indexes missing. Now it runs about 10 times \nfaster. Sorry for the caused trouble :) and thanx for help.\n\n\nHash IN Join (cost=3307.49..7689.47 rows=30250 width=6) (actual \ntime=227.666..813.786 rows=56374 loops=1)\n Hash Cond: (\"outer\".id_order = \"inner\".id)\n -> Seq Scan on report (cost=0.00..2458.99 rows=60499 width=10) (actual \ntime=0.035..269.422 rows=60499 loops=1)\n -> Hash (cost=3109.24..3109.24 rows=30901 width=4) (actual \ntime=227.459..227.459 rows=0 loops=1)\n -> Seq Scan on orders o (cost=9.73..3109.24 rows=30901 width=4) \n(actual time=0.429..154.219 rows=57543 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Sort (cost=9.71..9.72 rows=3 width=4) (actual \ntime=0.329..0.330 rows=1 loops=1)\n Sort Key: cp.id_ag\n -> Nested Loop (cost=0.00..9.69 rows=3 width=4) \n(actual time=0.218..0.224 rows=1 loops=1)\n -> Index Scan using users_name_idx on users u \n(cost=0.00..5.61 rows=1 width=4) (actual time=0.082..0.084 rows=1 loops=1)\n Index Cond: ((name)::text = 'dc'::text)\n -> Index Scan using contactpartner_id_user_idx \non contactpartner cp (cost=0.00..4.03 rows=3 width=8) (actual \ntime=0.125..0.127 rows=1 loops=1)\n Index Cond: (cp.id_user = \"outer\".id)\nTotal runtime: 31952.811 ms\n\n\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Andy\" <[email protected]>\nCc: \"Steinar H. Gunderson\" <[email protected]>; \n<[email protected]>\nSent: Tuesday, October 11, 2005 5:17 PM\nSubject: Re: [PERFORM] Massive delete performance\n\n\n> \"Andy\" <[email protected]> writes:\n>> EXPLAIN ANALYZE\n>> DELETE FROM report WHERE id_order IN\n>> ...\n>\n>> Hash IN Join (cost=3532.83..8182.33 rows=32042 width=6) (actual\n>> time=923.456..2457.323 rows=59557 loops=1)\n>> ...\n>> Total runtime: 456718.658 ms\n>\n> So the runtime is all in the delete triggers. The usual conclusion from\n> this is that there is a foreign key column pointing at this table that\n> does not have an index, or is not the same datatype as the column it\n> references. Either condition will force a fairly inefficient way of\n> handling the FK deletion check.\n>\n> regards, tom lane\n>\n> \n\n",
"msg_date": "Tue, 11 Oct 2005 17:27:40 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive delete performance "
},
{
"msg_contents": "* Andy <[email protected]> wrote:\n\n<snip>\n> I have the following problem: I have a client to which we send every\n> night a \"dump\" with a the database in which there are only their\n> data's. It is a stupid solution but I choose this solution because I\n> couldn't find any better. The target machine is a windows 2003.\n> \n> So, I have a replication only with the tables that I need to send,\n> then I make a copy of this replication, and from this copy I delete\n> all the data's that are not needed.\n\nWhy not filtering out as much unnecessary stuff as possible on copying ?\n\n<snip>\n\n> How can I increase this DELETE procedure because it is really slow???\n> There are of corse a lot of data's to be deleted.\n\nHave you set up the right indices ?\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgreSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n",
"msg_date": "Tue, 11 Oct 2005 23:48:50 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive delete performance"
}
] |
[
{
"msg_contents": "> > I have a postgresql 7.4.8-server with 4 GB ram.\n> > #effective_cache_size = 1000 # typically 8KB each\n> >\n> > This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I\n> > changed it to:\n> >\n> > effective_cache_size = 27462 # typically 8KB each\n>\n> Apparently this formula is no longer relevant on the FreeBSD systems as\n> it can cache up to almost all the available RAM. With 4GB of RAM, one\n> could specify most of the RAM as being available for caching, assuming\n> that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM\n> would be a reasonable value to tell the planner.\n>\n> (This was verified by using dd:\n> dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create\n> a 2G file then\n> dd if=/usr/local/pgsql/iotest of=/dev/null\n>\n> If you run systat -vmstat 2 you will see 0% diskaccess during the read\n> of the 2G file indicating that it has, in fact, been cached)\n\nThank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on\namd64 (or both)?\n\nregards\nClaus\n",
"msg_date": "Tue, 11 Oct 2005 16:54:31 +0200",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": true,
"msg_subject": "effective cache size on FreeBSD (WAS: Performance on SUSE w/\n reiserfs)"
},
{
"msg_contents": "On Tue, 2005-10-11 at 16:54 +0200, Claus Guttesen wrote:\n> > > I have a postgresql 7.4.8-server with 4 GB ram.\n> > > #effective_cache_size = 1000 # typically 8KB each\n> > >\n> > > This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I\n> > > changed it to:\n> > >\n> > > effective_cache_size = 27462 # typically 8KB each\n> >\n> > Apparently this formula is no longer relevant on the FreeBSD systems as\n> > it can cache up to almost all the available RAM. With 4GB of RAM, one\n> > could specify most of the RAM as being available for caching, assuming\n> > that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM\n> > would be a reasonable value to tell the planner.\n> >\n> > (This was verified by using dd:\n> > dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create\n> > a 2G file then\n> > dd if=/usr/local/pgsql/iotest of=/dev/null\n> >\n> > If you run systat -vmstat 2 you will see 0% diskaccess during the read\n> > of the 2G file indicating that it has, in fact, been cached)\n> \n> Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on\n> amd64 (or both)?\n> \n\nNot sure about 6.0 (but I don't know why it would change) but definitely\non 5.4 amd64 (and I would imagine i386 as well).\n\nSven\n\n",
"msg_date": "Tue, 11 Oct 2005 11:04:24 -0400",
"msg_from": "Sven Willenberger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective cache size on FreeBSD (WAS: Performance on SUSE w/"
},
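A back-of-the-envelope setting following the "roughly half the RAM" suggestion above, for the 4 GB box in question (the exact number is an assumption, not a measured value): 2 GB divided by the 8 kB page size is 262144. The parameter lives in postgresql.conf, and the active value can be checked from psql after a reload:

    -- postgresql.conf: effective_cache_size = 262144   (262144 x 8 kB = 2 GB)
    SHOW effective_cache_size;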
{
"msg_contents": "> > > Apparently this formula is no longer relevant on the FreeBSD systems as\n> > > it can cache up to almost all the available RAM. With 4GB of RAM, one\n> > > could specify most of the RAM as being available for caching, assuming\n> > > that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM\n> > > would be a reasonable value to tell the planner.\n> > >\n> > > (This was verified by using dd:\n> > > dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create\n> > > a 2G file then\n> > > dd if=/usr/local/pgsql/iotest of=/dev/null\n> > >\n> > > If you run systat -vmstat 2 you will see 0% diskaccess during the read\n> > > of the 2G file indicating that it has, in fact, been cached)\n> >\n> > Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on\n> > amd64 (or both)?\n> >\n>\n> Not sure about 6.0 (but I don't know why it would change) but definitely\n> on 5.4 amd64 (and I would imagine i386 as well).\n\nWorks on FreeBSD 6.0 RC1 as well. Tried using count=4096 on a 1 GB ram\nbox. Same behaviour as you describe above.\n\nregards\nClaus\n",
"msg_date": "Tue, 11 Oct 2005 19:23:59 +0200",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: effective cache size on FreeBSD (WAS: Performance on SUSE w/"
},
{
"msg_contents": "On Oct 11, 2005, at 10:54 AM, Claus Guttesen wrote:\n\n> Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on\n> amd64 (or both)?\n>\n\nIt applies to FreeBSD >= 5.0.\n\nHowever, I have not been able to get a real answer from the FreeBSD \nhacker community on what the max buffer space usage will be to \nproperly set this. The `sysctl -n vfs.hibufspace` / 8192 estimation \nworks very well for me, still, and I continue to use it.\n\n",
"msg_date": "Tue, 11 Oct 2005 16:09:56 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective cache size on FreeBSD (WAS: Performance on SUSE w/\n\treiserfs)"
}
] |
[
{
"msg_contents": "KC wrote:\n> \n> So I guess it all comes back to the basic question:\n> \n> For the query select distinct on (PlayerID) * from Player a where\n> PlayerID='22220' order by PlayerId Desc, AtDate Desc;\n> can the optimizer recognise the fact the query is selecting by the\nprimary\n> key (PlayerID,AtDate), so it can skip the remaining rows for that\n> PlayerID,\n> as if LIMIT 1 is implied?\n> \n> Best regards, KC.\n\nHi KC, have you tried:\nselect * from player where playerid = '22220' and atdate < 9999999999\norder by platerid desc, atdate desc limit 1;\n\n??\nMerlin\n",
"msg_date": "Wed, 12 Oct 2005 08:14:44 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue"
},
{
"msg_contents": "Dear Merlin and all,\n\nThat direct SQL returns in 0 ms. The problem only appears when a view is used.\n\nWhat we've done to work around this problem is to modify the table to add a \nfield DataStatus which is set to 1 for the latest record for each player, \nand reset to 0 when it is superceded.\n\nA partial index is then created as:\nCREATE INDEX IDX_CurPlayer on Player (PlayerID) where DataStatus = 1;\n\nThe VCurPlayer view is changed to:\nCREATE or REPLACE VIEW VCurPlayer as select * from Player where DataStatus = 1;\nand it now returns in 0 ms.\n\nThis is not the best solution, but until (if ever) the original problem is \nfixed, we have not found an alternative work around.\n\nThe good news is that even with the additional overhead of maintaining an \nextra index and the problem of vacuuming, pg 8.0.3 still performs \nsignificantly faster on Windows than MS Sql 2000 in our OLTP application \ntesting so far.\n\nThanks to all for your help.\n\nBest regards,\nKC.\n\nAt 20:14 05/10/12, you wrote:\n>KC wrote:\n> >\n> > So I guess it all comes back to the basic question:\n> >\n> > For the query select distinct on (PlayerID) * from Player a where\n> > PlayerID='22220' order by PlayerId Desc, AtDate Desc;\n> > can the optimizer recognise the fact the query is selecting by the\n>primary\n> > key (PlayerID,AtDate), so it can skip the remaining rows for that\n> > PlayerID,\n> > as if LIMIT 1 is implied?\n> >\n> > Best regards, KC.\n>\n>Hi KC, have you tried:\n>select * from player where playerid = '22220' and atdate < 9999999999\n>order by platerid desc, atdate desc limit 1;\n>\n>??\n>Merlin\n\n",
"msg_date": "Wed, 12 Oct 2005 21:00:15 +0800",
"msg_from": "K C Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT LIMIT 1 VIEW Performance Issue"
}
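A minimal sketch of the DataStatus scheme KC describes, with a made-up column list (the real Player table certainly has more columns); it only shows one possible way of keeping the flag current when a new row for a player arrives, not how KC's application actually does it:

    CREATE TABLE Player (
        PlayerID   varchar(32),
        AtDate     varchar(32),
        DataStatus integer DEFAULT 1,
        PRIMARY KEY (PlayerID, AtDate)
    );
    CREATE INDEX IDX_CurPlayer ON Player (PlayerID) WHERE DataStatus = 1;
    CREATE OR REPLACE VIEW VCurPlayer AS SELECT * FROM Player WHERE DataStatus = 1;

    BEGIN;
    UPDATE Player SET DataStatus = 0 WHERE PlayerID = '22220' AND DataStatus = 1;
    INSERT INTO Player (PlayerID, AtDate) VALUES ('22220', '20051012');
    COMMIT;

With the partial index in place, the view lookup for one player touches only the single flagged row instead of the whole history.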
] |
[
{
"msg_contents": "Hi all,\n\nAfter a long time of reading the general list it's time to subscribe to\nthis one...\n\nWe have adapted our application (originally written for oracle) to\npostgres, and switched part of our business to a postgres data base.\n\nThe data base has in the main tables around 150 million rows, the whole\ndata set takes ~ 30G after the initial migration. After ~ a month of\nusage that bloated to ~ 100G. We installed autovacuum after ~ 2 weeks.\n\nThe main table is heavily updated during the active periods of usage,\nwhich is coming in bursts.\n\nNow Oracle on the same hardware has no problems handling it (the load),\nbut postgres comes to a crawl. Examining the pg_stats_activity table I\nsee the updates on the main table as being the biggest problem, they are\nvery slow. The table has a few indexes on it, I wonder if they are\nupdated too on an update ? The index fields are not changing. In any\ncase, I can't explain why the updates are so much slower on postgres.\n\nSorry for being fuzzy a bit, I spent quite some time figuring out what I\ncan do and now I have to give up and ask for help.\n\nThe machine running the DB is a debian linux, details:\n\n$ cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 11\nmodel name : Intel(R) Pentium(R) III CPU family 1266MHz\nstepping : 1\ncpu MHz : 1263.122\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 mmx fxsr sse\nbogomips : 2490.36\n \nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 11\nmodel name : Intel(R) Pentium(R) III CPU family 1266MHz\nstepping : 1\ncpu MHz : 1263.122\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 mmx fxsr sse\nbogomips : 2514.94\n \n\n$ uname -a\nLinux *** 2.6.12.3 #1 SMP Tue Oct 11 13:13:00 CEST 2005 i686 GNU/Linux\n\n\n$ cat /proc/meminfo\nMemTotal: 4091012 kB\nMemFree: 118072 kB\nBuffers: 18464 kB\nCached: 3393436 kB\nSwapCached: 0 kB\nActive: 947508 kB\nInactive: 2875644 kB\nHighTotal: 3211264 kB\nHighFree: 868 kB\nLowTotal: 879748 kB\nLowFree: 117204 kB\nSwapTotal: 0 kB\nSwapFree: 0 kB\nDirty: 13252 kB\nWriteback: 0 kB\nMapped: 829300 kB\nSlab: 64632 kB\nCommitLimit: 2045504 kB\nCommitted_AS: 1148064 kB\nPageTables: 75916 kB\nVmallocTotal: 114680 kB\nVmallocUsed: 96 kB\nVmallocChunk: 114568 kB\n\n\nThe disk used for the data is an external raid array, I don't know much\nabout that right now except I think is some relatively fast IDE stuff.\nIn any case the operations should be cache friendly, we don't scan over\nand over the big tables...\n\nThe postgres server configuration is attached.\n\nI have looked in the postgres statistics tables, looks like most of the\nneeded data is always cached, as in the most accessed tables the\nload/hit ratio is mostly something like 1/100, or at least 1/30.\n\n\nIs anything in the config I got very wrong for the given machine, or\nwhat else should I investigate further ? If I can't make this fly, the\nobvious solution will be to move back to Oracle, cash out the license\nand forget about postgres forever...\n\nTIA,\nCsaba.",
"msg_date": "Wed, 12 Oct 2005 16:45:15 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help tuning postgres"
},
{
"msg_contents": "> Hi all,\n>\n> After a long time of reading the general list it's time to subscribe to\n> this one...\n>\n> We have adapted our application (originally written for oracle) to\n> postgres, and switched part of our business to a postgres data base.\n>\n> The data base has in the main tables around 150 million rows, the whole\n> data set takes ~ 30G after the initial migration. After ~ a month of\n> usage that bloated to ~ 100G. We installed autovacuum after ~ 2 weeks.\n>\n\nHave you tried reindexing your active tables?\n\nEmil\n",
"msg_date": "Wed, 12 Oct 2005 11:12:11 -0400",
"msg_from": "Emil Briggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "[snip]\n> Have you tried reindexing your active tables?\n> \nNot yet, the db is in production use and I have to plan for a down-time\nfor that... or is it not impacting the activity on the table ?\n\n> Emil\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n",
"msg_date": "Wed, 12 Oct 2005 17:17:53 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "> [snip]\n>\n> > Have you tried reindexing your active tables?\n>\n> Not yet, the db is in production use and I have to plan for a down-time\n> for that... or is it not impacting the activity on the table ?\n>\n\nIt will cause some performance hit while you are doing it. It sounds like \nsomething is bloating rapidly on your system and the indexes is one possible \nplace that could be happening.\n",
"msg_date": "Wed, 12 Oct 2005 11:33:56 -0400",
"msg_from": "Emil Briggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "On 10/12/05, Csaba Nagy <[email protected]> wrote:\n\n> We have adapted our application (originally written for oracle) to\n> postgres, and switched part of our business to a postgres data base.\n\n> The data base has in the main tables around 150 million rows, the whole\n> data set takes ~ 30G after the initial migration. After ~ a month of\n> usage that bloated to ~ 100G. We installed autovacuum after ~ 2 weeks.\n>\n> The main table is heavily updated during the active periods of usage,\n> which is coming in bursts.\n>\n> Now Oracle on the same hardware has no problems handling it (the load),\n> but postgres comes to a crawl. Examining the pg_stats_activity table I\n> see the updates on the main table as being the biggest problem, they are\n> very slow. The table has a few indexes on it, I wonder if they are\n> updated too on an update ? The index fields are not changing. In any\n> case, I can't explain why the updates are so much slower on postgres.\n\nI'm not the most experience person on this list, but I've got some big\ntables I work with. Doing an update on these big tables often involves\na sequential scan which can be quite slow.\n\nI would suggest posting the explain analyze output for one of your\nslow updates. I'll bet it is much more revealing and takes out a lot\nof the guesswork.\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Wed, 12 Oct 2005 11:39:50 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "Ok, that was the first thing I've done, checking out the explain of the\nquery. I don't really need the analyze part, as the plan is going for\nthe index, which is the right decision. The updates are simple one-row\nupdates of one column, qualified by the primary key condition.\nThis part is OK, the query is not taking extremely long, but the problem\nis that we execute 500 in a transaction, and that takes too long and\nblocks other activities.\nActually I've done an iostat run in the meantime (just learned how to\nuse it), and looks like the disk is 100 saturated. So it clearly is a\ndisk issue in this case. And it turns out the Oracle hardware has an\nedge of 3 years over what I have for postgres, so that might very well\nexplain the performance difference... Oh well.\n\nNext we'll upgrade the postgres hardware, and then I'll come back to\nreport if it's working better... sorry for the noise for now.\n\nCheers,\nCsaba.\n\nBTW, is the config file good enough for the kind of machine I have ?\nCause it's the first time I had to make a production configuration and\nmost of the stuff is according to the commented config guid from varlena\nwith some guesswork added...\n\n> I'm not the most experience person on this list, but I've got some big\n> tables I work with. Doing an update on these big tables often involves\n> a sequential scan which can be quite slow.\n> \n> I would suggest posting the explain analyze output for one of your\n> slow updates. I'll bet it is much more revealing and takes out a lot\n> of the guesswork.\n> \n> --\n> Matthew Nuzum\n> www.bearfruit.org\n\n",
"msg_date": "Wed, 12 Oct 2005 18:55:30 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "Emil Briggs <[email protected]> writes:\n>> Not yet, the db is in production use and I have to plan for a down-time\n>> for that... or is it not impacting the activity on the table ?\n\n> It will cause some performance hit while you are doing it.\n\nIt'll also lock out writes on the table until the index is rebuilt,\nso he does need to schedule downtime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Oct 2005 13:21:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres "
},
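For reference, the statement under discussion (the names are placeholders); while it runs, writers to the table block, which is why Tom suggests scheduling downtime:

    REINDEX TABLE some_big_table;
    -- or, one index at a time:
    REINDEX INDEX some_big_table_pkey;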
{
"msg_contents": "\nWould it not be faster to do a dump/reload of the table than reindex or\nis it about the same? \n\nSteve Poe\n\nOn Wed, 2005-10-12 at 13:21 -0400, Tom Lane wrote:\n> Emil Briggs <[email protected]> writes:\n> >> Not yet, the db is in production use and I have to plan for a down-time\n> >> for that... or is it not impacting the activity on the table ?\n> \n> > It will cause some performance hit while you are doing it.\n> \n> It'll also lock out writes on the table until the index is rebuilt,\n> so he does need to schedule downtime.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 12 Oct 2005 10:32:05 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "On Wed, Oct 12, 2005 at 06:55:30PM +0200, Csaba Nagy wrote:\n> Ok, that was the first thing I've done, checking out the explain of the\n> query. I don't really need the analyze part, as the plan is going for\n> the index, which is the right decision. The updates are simple one-row\n\nHow do you know? You _do_ need the ANALYSE, because it'll tell you\nwhat the query _actually did_ as opposed to what the planner thought\nit was going to do. \n\nNote that EXPLAIN ANALYSE actually performs the work, so you better\ndo it in a transaction and ROLLBACK if it's a production system.\n\n> Actually I've done an iostat run in the meantime (just learned how to\n> use it), and looks like the disk is 100 saturated. So it clearly is a\n> disk issue in this case. And it turns out the Oracle hardware has an\n\nYes, but it could be a disk issue because you're doing more work than\nyou need to. If your UPDATEs are chasing down a lot of dead tuples,\nfor instance, you'll peg your I/O even though you ought to have I/O\nto burn.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe whole tendency of modern prose is away from concreteness.\n\t\t--George Orwell\n",
"msg_date": "Wed, 12 Oct 2005 18:00:36 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
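A sketch of what Andrew is suggesting, with a made-up table and column. EXPLAIN ANALYZE really executes the UPDATE, so wrapping it in a transaction and rolling back keeps the production data untouched while still showing what the statement actually did:

    BEGIN;
    EXPLAIN ANALYZE
    UPDATE big_table SET some_col = some_col + 1 WHERE id = 12345;
    ROLLBACK;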
{
"msg_contents": "[snip]\n> Yes, but it could be a disk issue because you're doing more work than\n> you need to. If your UPDATEs are chasing down a lot of dead tuples,\n> for instance, you'll peg your I/O even though you ought to have I/O\n> to burn.\n\nOK, this sounds interesting, but I don't understand: why would an update\n\"chase down a lot of dead tuples\" ? Should I read up on some docs, cause\nI obviously don't know enough about how updates work on postgres...\n\nAnd how would the analyze help in finding this out ? I thought it would\nonly show me additionally the actual timings, not more detail in what\nwas done...\n\nThanks,\nCsaba.\n\n\n",
"msg_date": "Thu, 13 Oct 2005 10:15:03 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "On Thu, Oct 13, 2005 at 10:15:03AM +0200, Csaba Nagy wrote:\n> \n> OK, this sounds interesting, but I don't understand: why would an update\n> \"chase down a lot of dead tuples\" ? Should I read up on some docs, cause\n> I obviously don't know enough about how updates work on postgres...\n\nRight. Here's the issue:\n\nMVCC does not replace rows when you update. Instead, it marks the\nold row as expired, and sets the new values. The old row is still\nthere, and it's available for other transactions who need to see it. \nAs the docs say (see\n<http://www.postgresql.org/docs/8.0/interactive/transaction-iso.html>),\n\"In effect, a SELECT query sees a snapshot of the database as of the\ninstant that that query begins to run.\" And that can be true because\nthe original data is still there, although marked as expired for\nsubsequent transactions.\n\nUPDATE works the same was as SELECT in terms of searching for rows\n(so does any command that searches for data). \n\nNow, when you select data, you actually have to traverse all the\nexisting versions of the tuple in order to get the one that's live\nfor you. This is normally not a problem: VACUUM goes around and\ncleans out old, expired data that is not live for _anyone_. It does\nthis by looking for the oldest transaction that is open. (As far as\nI understand it, this is actually the oldest transaction in the\nentire back end; but I've never understood why that should the the\ncase, and I'm too incompetent/dumb to understand the code, so I may\nbe wrong on this point.) If you have very long-running transactions,\nthen, you can end up with a lot of versions of dead tuples on the\ntable, and so reading the few records you want can turn out actually\nto be a very expensive operation, even though it ought to be cheap.\n\nYou can see this by using the VERBOSE option to VACUUM:\n\ntest=# VACUUM VERBOSE eval1 ;\nINFO: vacuuming \"public.eval1\"\nINFO: \"eval1\": found 0 removable, 0 nonremovable row versions in 0\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_18831\"\nINFO: index \"pg_toast_18831_index\" now contains 0 row versions in 1\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_18831\": found 0 removable, 0 nonremovable row\nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\nNote those \"removable\" and \"nonremovable\" row versions. It's the\nunremovable ones that can hurt. WARNING: doing VACUUM on a big table\non a disk that's already pegged is going to cause you performance\npain, because it scans the whole table. In some cases, though, you\nhave no choice: if the winds are already out of your sails, and\nyou're effectively stopped, anything that might get you moving again\nis an improvement.\n\n> And how would the analyze help in finding this out ? I thought it would\n> only show me additionally the actual timings, not more detail in what\n> was done...\n\nYes, it shows the actual timings, and the actual number of rows. But\nif the estimates that the planner makes are wildly different than the\nactual results, then you know your statistics are wrong, and that the\nplanner is going about things the wrong way. ANALYSE is a big help. 
\nThere's also a verbose option to it, but it's usually less useful in\nproduction situations.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIt is above all style through which power defers to reason.\n\t\t--J. Robert Oppenheimer\n",
"msg_date": "Thu, 13 Oct 2005 08:40:07 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "Thanks Andrew, this explanation about the dead rows was enlightening.\nMight be the reason for the slowdown I see on occasions, but not for the\ncase which I was first observing. In that case the updated rows are\ndifferent for each update. It is possible that each row has a few dead\nversions, but not too many, each row is updated just a limited number of\ntimes.\n\nHowever, we have other updates which access the same row 1000s of times\n(up to millions of times), and that could hurt if it's like you said,\ni.e. if each update has to crawl over all the dead rows... I have now\nautovacuum in place, and I'm sure it will kick in at ~ a few 10000s of\nupdates, but in the meantime it could get bad.\nIn any case, I suppose that those disk pages should be in OS cache\npretty soon and stay there, so I still don't understand why the disk\nusage is 100% in this case (with very low CPU activity, the CPUs are\nmostly waiting/idle)... the amount of actively used data is not that\nbig.\n\nI'll try to vacuum through cron jobs the most exposed tables to this\nmultiple-dead-row-versions symptom, cause autovacuum probably won't do\nit often enough. Let's see if it helps...\n\nThanks,\nCsaba.\n\n\nOn Thu, 2005-10-13 at 14:40, Andrew Sullivan wrote:\n> On Thu, Oct 13, 2005 at 10:15:03AM +0200, Csaba Nagy wrote:\n> > \n> > OK, this sounds interesting, but I don't understand: why would an update\n> > \"chase down a lot of dead tuples\" ? Should I read up on some docs, cause\n> > I obviously don't know enough about how updates work on postgres...\n> \n> Right. Here's the issue:\n> \n> MVCC does not replace rows when you update. Instead, it marks the\n> old row as expired, and sets the new values. The old row is still\n> there, and it's available for other transactions who need to see it. \n> As the docs say (see\n> <http://www.postgresql.org/docs/8.0/interactive/transaction-iso.html>),\n> \"In effect, a SELECT query sees a snapshot of the database as of the\n> instant that that query begins to run.\" And that can be true because\n> the original data is still there, although marked as expired for\n> subsequent transactions.\n> \n> UPDATE works the same was as SELECT in terms of searching for rows\n> (so does any command that searches for data). \n> \n> Now, when you select data, you actually have to traverse all the\n> existing versions of the tuple in order to get the one that's live\n> for you. This is normally not a problem: VACUUM goes around and\n> cleans out old, expired data that is not live for _anyone_. It does\n> this by looking for the oldest transaction that is open. (As far as\n> I understand it, this is actually the oldest transaction in the\n> entire back end; but I've never understood why that should the the\n> case, and I'm too incompetent/dumb to understand the code, so I may\n> be wrong on this point.) 
If you have very long-running transactions,\n> then, you can end up with a lot of versions of dead tuples on the\n> table, and so reading the few records you want can turn out actually\n> to be a very expensive operation, even though it ought to be cheap.\n> \n> You can see this by using the VERBOSE option to VACUUM:\n> \n> test=# VACUUM VERBOSE eval1 ;\n> INFO: vacuuming \"public.eval1\"\n> INFO: \"eval1\": found 0 removable, 0 nonremovable row versions in 0\n> pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: vacuuming \"pg_toast.pg_toast_18831\"\n> INFO: index \"pg_toast_18831_index\" now contains 0 row versions in 1\n> pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"pg_toast_18831\": found 0 removable, 0 nonremovable row\n> versions in 0 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> VACUUM\n> \n> Note those \"removable\" and \"nonremovable\" row versions. It's the\n> unremovable ones that can hurt. WARNING: doing VACUUM on a big table\n> on a disk that's already pegged is going to cause you performance\n> pain, because it scans the whole table. In some cases, though, you\n> have no choice: if the winds are already out of your sails, and\n> you're effectively stopped, anything that might get you moving again\n> is an improvement.\n> \n> > And how would the analyze help in finding this out ? I thought it would\n> > only show me additionally the actual timings, not more detail in what\n> > was done...\n> \n> Yes, it shows the actual timings, and the actual number of rows. But\n> if the estimates that the planner makes are wildly different than the\n> actual results, then you know your statistics are wrong, and that the\n> planner is going about things the wrong way. ANALYSE is a big help. \n> There's also a verbose option to it, but it's usually less useful in\n> production situations.\n> \n> A\n\n",
"msg_date": "Thu, 13 Oct 2005 15:14:44 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
},
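The per-table vacuum Csaba mentions can be as simple as the statement below, run from a cron job against the most heavily updated tables (the table name is illustrative). Unlike VACUUM FULL, a plain VACUUM does not take an exclusive lock, so it can run during normal traffic:

    VACUUM ANALYZE frequently_updated_table;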
{
"msg_contents": "On 10/13/05, Csaba Nagy <[email protected]> wrote:\n> On Thu, 2005-10-13 at 14:40, Andrew Sullivan wrote:\n> > On Thu, Oct 13, 2005 at 10:15:03AM +0200, Csaba Nagy wrote:\n> > > And how would the analyze help in finding this out ? I thought it would\n> > > only show me additionally the actual timings, not more detail in what\n> > > was done...\n> >\n> > Yes, it shows the actual timings, and the actual number of rows. But\n> > if the estimates that the planner makes are wildly different than the\n> > actual results, then you know your statistics are wrong, and that the\n> > planner is going about things the wrong way. ANALYSE is a big help.\n> > There's also a verbose option to it, but it's usually less useful in\n> > production situations.\n\nThis is the point I was trying to make. I've seen special instances\nwhere people have posted an explain annalyze for a select/update to\nthe list and suggestions have arisen allowing major performance\nimprovements.\n\nIf this task is where your database is performing its worst then it is\nthe best place to start with optimizing, short of the obvious stuff,\nwhich it sounds like you've covered.\n\nSometimes, and I think this has often been true for databases that are\neither very large or very small, statistics can be tweaked to get\nbetter performance. One good example is when a sequential scan is\nbeing chosen when an index scan may be better; something like this\nwould definately peg your disk i/o.\n\nThrowing more hardware at your problem will definately help, but I'm a\nperformance freak and I like to optimize everything to the max.\n*Sometimes* you can get drastic improvements without adding any\nhardware. I have seen some truly miraculus turn-arounds by tweaking\nsome non-obvious settings based on suggestions made on this list.\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Thu, 13 Oct 2005 09:50:40 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "On Thu, Oct 13, 2005 at 03:14:44PM +0200, Csaba Nagy wrote:\n> In any case, I suppose that those disk pages should be in OS cache\n> pretty soon and stay there, so I still don't understand why the disk\n> usage is 100% in this case (with very low CPU activity, the CPUs are\n> mostly waiting/idle)... the amount of actively used data is not that\n> big.\n\nAh, but if the sum of all the dead rows is large enough that they\nstart causing your shared memory (== Postgres buffers) to thrash,\nthen you start causing the memory subsystem to thrash on the box,\nwhich means less RAM is available for disk buffers because the OS is\ndoing more work; and the disk buffers are full of a lot of garbage\n_anyway_, so then you may find that you're ending up hitting the disk\nfor some of these reads after all. Around the office I have called\nthis the \"buffer death spiral\". And note that once you've managed to\nget into a vacuum-starvation case, your free space map might be\nexceeded, at which point your database performance really won't\nrecover until you've done VACUUM FULL (prior to 7.4 there's also an\nindex problem that's even worse, and that needs occasional REINDEX to\nsolve; I forget which version you said you were using).\n\nThe painful part about tuning a production system is really that you\nhave to keep about 50 variables juggling in your head, just so you\ncan uncover the one thing that you have to put your finger on to make\nit all play nice.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n",
"msg_date": "Thu, 13 Oct 2005 12:07:14 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "reindex should be faster, since you're not dumping/reloading the table\ncontents on top of rebuilding the index, you're just rebuilding the\nindex. \n\n\nRobert Treat\nemdeon Practice Services\nAlachua, Florida\n\nOn Wed, 2005-10-12 at 13:32, Steve Poe wrote:\n> \n> Would it not be faster to do a dump/reload of the table than reindex or\n> is it about the same? \n> \n> Steve Poe\n> \n> On Wed, 2005-10-12 at 13:21 -0400, Tom Lane wrote:\n> > Emil Briggs <[email protected]> writes:\n> > >> Not yet, the db is in production use and I have to plan for a down-time\n> > >> for that... or is it not impacting the activity on the table ?\n> > \n> > > It will cause some performance hit while you are doing it.\n> > \n> > It'll also lock out writes on the table until the index is rebuilt,\n> > so he does need to schedule downtime.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "18 Oct 2005 11:18:41 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "In the light of what you've explained below about \"nonremovable\" row\nversions reported by vacuum, I wonder if I should worry about the\nfollowing type of report:\n\nINFO: vacuuming \"public.some_table\"\nINFO: \"some_table\": removed 29598 row versions in 452 pages\nDETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.\nINFO: \"some_table\": found 29598 removable, 39684 nonremovable row\nversions in 851 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.02s/0.07u sec elapsed 23.16 sec.\nVACUUM\n\n\nDoes that mean that 39684 nonremovable pages are actually the active\nlive pages in the table (as it reports 0 dead) ? I'm sure I don't have\nany long running transaction, at least according to pg_stats_activity\n(backed by the linux ps too). Or I should run a vacuum full...\n\nThis table is one of which has frequently updated rows.\n\nTIA,\nCsaba.\n\n\nOn Thu, 2005-10-13 at 14:40, Andrew Sullivan wrote:\n> On Thu, Oct 13, 2005 at 10:15:03AM +0200, Csaba Nagy wrote:\n> > \n> > OK, this sounds interesting, but I don't understand: why would an update\n> > \"chase down a lot of dead tuples\" ? Should I read up on some docs, cause\n> > I obviously don't know enough about how updates work on postgres...\n> \n> Right. Here's the issue:\n> \n> MVCC does not replace rows when you update. Instead, it marks the\n> old row as expired, and sets the new values. The old row is still\n> there, and it's available for other transactions who need to see it. \n> As the docs say (see\n> <http://www.postgresql.org/docs/8.0/interactive/transaction-iso.html>),\n> \"In effect, a SELECT query sees a snapshot of the database as of the\n> instant that that query begins to run.\" And that can be true because\n> the original data is still there, although marked as expired for\n> subsequent transactions.\n> \n> UPDATE works the same was as SELECT in terms of searching for rows\n> (so does any command that searches for data). \n> \n> Now, when you select data, you actually have to traverse all the\n> existing versions of the tuple in order to get the one that's live\n> for you. This is normally not a problem: VACUUM goes around and\n> cleans out old, expired data that is not live for _anyone_. It does\n> this by looking for the oldest transaction that is open. (As far as\n> I understand it, this is actually the oldest transaction in the\n> entire back end; but I've never understood why that should the the\n> case, and I'm too incompetent/dumb to understand the code, so I may\n> be wrong on this point.) 
If you have very long-running transactions,\n> then, you can end up with a lot of versions of dead tuples on the\n> table, and so reading the few records you want can turn out actually\n> to be a very expensive operation, even though it ought to be cheap.\n> \n> You can see this by using the VERBOSE option to VACUUM:\n> \n> test=# VACUUM VERBOSE eval1 ;\n> INFO: vacuuming \"public.eval1\"\n> INFO: \"eval1\": found 0 removable, 0 nonremovable row versions in 0\n> pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: vacuuming \"pg_toast.pg_toast_18831\"\n> INFO: index \"pg_toast_18831_index\" now contains 0 row versions in 1\n> pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"pg_toast_18831\": found 0 removable, 0 nonremovable row\n> versions in 0 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> VACUUM\n> \n> Note those \"removable\" and \"nonremovable\" row versions. It's the\n> unremovable ones that can hurt. WARNING: doing VACUUM on a big table\n> on a disk that's already pegged is going to cause you performance\n> pain, because it scans the whole table. In some cases, though, you\n> have no choice: if the winds are already out of your sails, and\n> you're effectively stopped, anything that might get you moving again\n> is an improvement.\n> \n> > And how would the analyze help in finding this out ? I thought it would\n> > only show me additionally the actual timings, not more detail in what\n> > was done...\n> \n> Yes, it shows the actual timings, and the actual number of rows. But\n> if the estimates that the planner makes are wildly different than the\n> actual results, then you know your statistics are wrong, and that the\n> planner is going about things the wrong way. ANALYSE is a big help. \n> There's also a verbose option to it, but it's usually less useful in\n> production situations.\n> \n> A\n\n",
"msg_date": "Tue, 18 Oct 2005 17:21:37 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
},
{
"msg_contents": "First of all thanks all for the input.\n\nI probably can't afford even the reindex till Christmas, when we have\nabout 2 weeks of company holiday... but I guess I'll have to do\nsomething until Christmas.\n\nThe system should at least look like working all the time. I can have\ndowntime, but only for short periods preferably less than 1 minute. The\ntables we're talking about have ~10 million rows the smaller ones and\n~150 million rows the bigger ones, and I guess reindex will take quite\nsome time.\n\nI wonder if I could device a scheme like:\n \n - create a temp table exactly like the production table, including\nindexes and foreign keys;\n - create triggers on the production table which log all inserts,\ndeletes, updates to a log table;\n - activate these triggers;\n - copy all data from the production table to a temp table (this will\ntake the bulk of the time needed for the whole operation);\n - replay the log on the temp table repeatedly if necessary, until the\ntemp table is sufficiently close to the original;\n - rename the original table to something else, and then rename the temp\ntable to the original name, all this in a transaction - this would be\nideally the only visible delay for the user, and if the system is not\nbusy, it should be quick I guess;\n - replay on more time the log;\n\nAll this should happen in a point in time when there's little traffic to\nthe data base.\n\nReplaying could be as simple as a few delete triggers on the log table,\nwhich replay the deleted record on the production table, and the replay\nthen consisting in a delete operation on the log table. This is so that\nnew log entries can be replayed later without replaying again what was\nalready replayed.\n\nThe big tables I should do this procedure on have low probability of\nconflicting operations (like insert and immediate delete of the same\nrow, or multiple insert of the same row, multiple conflicting updates of\nthe same row, etc.), this is why I think replaying the log will work\nfine... of course this whole set up will be a lot more work than just\nreindex...\n\nI wonder if somebody tried anything like this and if it has chances to\nwork ?\n\nThanks,\nCsaba.\n\nOn Tue, 2005-10-18 at 17:18, Robert Treat wrote:\n> reindex should be faster, since you're not dumping/reloading the table\n> contents on top of rebuilding the index, you're just rebuilding the\n> index. \n> \n> \n> Robert Treat\n> emdeon Practice Services\n> Alachua, Florida\n> \n> On Wed, 2005-10-12 at 13:32, Steve Poe wrote:\n> > \n> > Would it not be faster to do a dump/reload of the table than reindex or\n> > is it about the same? \n> > \n> > Steve Poe\n> > \n> > On Wed, 2005-10-12 at 13:21 -0400, Tom Lane wrote:\n> > > Emil Briggs <[email protected]> writes:\n> > > >> Not yet, the db is in production use and I have to plan for a down-time\n> > > >> for that... 
or is it not impacting the activity on the table ?\n> > > \n> > > > It will cause some performance hit while you are doing it.\n> > > \n> > > It'll also lock out writes on the table until the index is rebuilt,\n> > > so he does need to schedule downtime.\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to [email protected] so that your\n> > > message can get through to the mailing list cleanly\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Tue, 18 Oct 2005 17:49:36 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
},
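A very rough sketch of the scheme above for a single table, with made-up names, assuming plpgsql is installed in the database; it only illustrates the moving parts (change log, trigger, final rename) and leaves out the bulk copy, the replay loop and all error handling:

    CREATE TABLE big_table (            -- stand-in for the real table
        id      bigint PRIMARY KEY,
        payload text
    );

    CREATE TABLE big_table_log (
        op     char(1),                 -- 'I', 'U' or 'D'
        row_id bigint,                  -- primary key of the affected row
        logged timestamp DEFAULT now()
    );

    CREATE OR REPLACE FUNCTION big_table_log_fn() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''DELETE'' THEN
            INSERT INTO big_table_log (op, row_id) VALUES (''D'', OLD.id);
            RETURN OLD;
        END IF;
        INSERT INTO big_table_log (op, row_id) VALUES (substr(TG_OP, 1, 1), NEW.id);
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER big_table_log_trg
        AFTER INSERT OR UPDATE OR DELETE ON big_table
        FOR EACH ROW EXECUTE PROCEDURE big_table_log_fn();

    -- ... bulk-copy big_table into big_table_new, replay big_table_log against
    -- it until it is nearly caught up, then swap the names in one short
    -- transaction:
    BEGIN;
    ALTER TABLE big_table RENAME TO big_table_old;
    ALTER TABLE big_table_new RENAME TO big_table;
    COMMIT;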
{
"msg_contents": "On Tue, Oct 18, 2005 at 05:21:37PM +0200, Csaba Nagy wrote:\n> INFO: vacuuming \"public.some_table\"\n> INFO: \"some_table\": removed 29598 row versions in 452 pages\n> DETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.\n> INFO: \"some_table\": found 29598 removable, 39684 nonremovable row\n> versions in 851 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n\n> Does that mean that 39684 nonremovable pages are actually the active\n> live pages in the table (as it reports 0 dead) ? I'm sure I don't have\n> any long running transaction, at least according to pg_stats_activity\n> (backed by the linux ps too). Or I should run a vacuum full...\n> \n> This table is one of which has frequently updated rows.\n\nNo, you should be ok there. What that should tell you is that you\nhave about 40,000 rows in the table. But notice that your vacuum\nprocess just removed about 75% of the live table rows. Moreover,\nyour 39684 rows are taking 851 pages. On a standard installation,\nthat's usually 8Kb/page. So that's about 6,808 Kb of physical\nstorage space you're using. Is that consistent with the size of your\ndata? If it's very large compared to the data you have stored in\nthere, you may want to ask if you're \"leaking\" space from the free\nspace map (because of that table turnover, which seems pretty\nsevere).\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe whole tendency of modern prose is away from concreteness.\n\t\t--George Orwell\n",
"msg_date": "Tue, 18 Oct 2005 12:48:06 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help tuning postgres"
}
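A quick way to do the arithmetic Andrew is doing here straight from the catalog; relpages and reltuples are only as fresh as the last VACUUM or ANALYZE, so treat the numbers as estimates:

    SELECT relname, relpages, reltuples, relpages * 8 AS approx_kb
      FROM pg_class
     WHERE relname = 'some_table';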
] |
[
{
"msg_contents": "> The disk used for the data is an external raid array, I don't know\nmuch\n> about that right now except I think is some relatively fast IDE stuff.\n> In any case the operations should be cache friendly, we don't scan\nover\n> and over the big tables...\n\nMaybe you are I/O bound. Do you know if your RAID array is caching your\nwrites? Easy way to check is to run fsync off and look for obvious\nperformance differences. Maybe playing with sync method could help\nhere.\n\nMerlin\n\n",
"msg_date": "Wed, 12 Oct 2005 11:54:17 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
}
] |
[
{
"msg_contents": "> \n> Would it not be faster to do a dump/reload of the table than reindex\nor\n> is it about the same?\n> \nreindex is probably faster, but that's not the point. you can reindex a\nrunning system whereas dump/restore requires downtime unless you work\neverything into a transaction, which is headache, and dangerous.\n\nreindex locking is very granular, in that it only acquires a excl. lock\non one index at a time and while doing so reading is possible (writes\nwill wait).\n\nin 8.1 we get a fire and forget reindex database xyz which is about as\ngood as it gets without a dump/load or full vacuum.\n\nMerlin\n",
"msg_date": "Wed, 12 Oct 2005 14:17:47 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
}
] |
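For reference, a sketch of the commands being discussed; the object names are placeholders:

    REINDEX INDEX some_index;   -- rebuilds one index, holding an exclusive lock on it
    REINDEX TABLE some_table;   -- rebuilds each index on the table, one at a time
    REINDEX DATABASE somedb;    -- the database-wide 8.1 form mentioned above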
[
{
"msg_contents": "All,\n\nI can see why the query below is slow. The lead table is 34 million rows,\nand a sequential scan always takes 3+ minutes. Mailing_id is the PK for\nmailing and is constrained as a foreign key (NULLS allowed) in lead. \nThere is an index on lead.mailing_id. I've just run VACUUM ANALYZE on\nlead. I don't understand why it isn't being used.\n\nThanks for your help,\nMartin Nickel\n\nSELECT m.mailcode, l.lead_id\n FROM mailing m \n INNER JOIN lead l ON m.mailing_id = l.mailing_id \n WHERE (m.maildate >= '2005-7-01'::date \n AND m.maildate < '2005-8-01'::date) \n-- takes 510,145 ms\n\nEXPLAIN SELECT m.mailcode, l.lead_id\n FROM mailing m \n INNER JOIN lead l ON m.mailing_id = l.mailing_id \n WHERE (m.maildate >= '2005-7-01'::date \n AND m.maildate < '2005-8-01'::date) \n\nHash Join (cost=62.13..2001702.55 rows=2711552 width=20)\n Hash Cond: (\"outer\".mailing_id = \"inner\".mailing_id)\n -> Seq Scan on lead l (cost=0.00..1804198.60 rows=34065260 width=8)\n -> Hash (cost=61.22..61.22 rows=362 width=20)\n -> Index Scan using mailing_maildate_idx on mailing m (cost=0.00..61.22 rows=362 width=20)\n Index Cond: ((maildate >= '2005-07-01'::date) AND (maildate < '2005-08-01'::date))\n\n",
"msg_date": "Wed, 12 Oct 2005 15:40:24 -0500",
"msg_from": "Martin Nickel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequential scan on FK join"
},
{
"msg_contents": "Martin Nickel wrote:\n> EXPLAIN SELECT m.mailcode, l.lead_id\n> FROM mailing m \n> INNER JOIN lead l ON m.mailing_id = l.mailing_id \n> WHERE (m.maildate >= '2005-7-01'::date \n> AND m.maildate < '2005-8-01'::date) \n> \n> Hash Join (cost=62.13..2001702.55 rows=2711552 width=20)\n> Hash Cond: (\"outer\".mailing_id = \"inner\".mailing_id)\n> -> Seq Scan on lead l (cost=0.00..1804198.60 rows=34065260 width=8)\n> -> Hash (cost=61.22..61.22 rows=362 width=20)\n> -> Index Scan using mailing_maildate_idx on mailing m (cost=0.00..61.22 rows=362 width=20)\n> Index Cond: ((maildate >= '2005-07-01'::date) AND (maildate < '2005-08-01'::date))\n\nWell the reason *why* is that the planner expects 2.71 million rows to \nbe matched. If that was the case, then a seq-scan of 34 million rows \nmight well make sense. The output from EXPLAIN ANALYSE would show us \nwhether that estimate is correct - is it?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 17 Oct 2005 09:45:00 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan on FK join"
},
{
"msg_contents": "Subject: Re: Sequential scan on FK join\nFrom: Martin Nickel <[email protected]>\nNewsgroups: pgsql.performance\nDate: Wed, 12 Oct 2005 15:53:35 -0500\n\nRichard, here's the EXPLAIN ANALYZE. I see your point re: the 2.7M\nexpected vs the 2 actual, but I've run ANALYZE on the lead table and it\nhasn't changed the plan. Suggestions?\n\n\"Hash Join (cost=62.13..2001702.55 rows=2711552 width=20) (actual\ntime=40.659..244709.315 rows=2\t125270 loops=1)\" \" Hash Cond:\n(\"outer\".mailing_id = \"inner\".mailing_id)\" \" -> Seq Scan on lead l\n(cost=0.00..1804198.60 rows=34065260 width=8) (actual\ntime=8.621..180281.094 rows=34060373 loops=1)\" \" -> Hash\n(cost=61.22..61.22 rows=362 width=20) (actual time=28.718..28.718 rows=0\nloops=1)\" \" -> Index Scan using mailing_maildate_idx on mailing m\n(cost=0.00..61.22 rows=362 width=20) (actual time=16.571..27.793 rows=430\nloops=1)\" \" Index Cond: ((maildate >= '2005-07-01'::date) AND\n(maildate < '2005-08-01'::date))\" \"Total runtime: 248104.339 ms\"\n\n\n",
"msg_date": "Mon, 17 Oct 2005 08:07:54 -0500",
"msg_from": "Martin Nickel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan on FK join"
},
{
"msg_contents": "Martin Nickel wrote:\n> Subject: Re: Sequential scan on FK join\n> From: Martin Nickel <[email protected]>\n> Newsgroups: pgsql.performance\n> Date: Wed, 12 Oct 2005 15:53:35 -0500\n> \n> Richard, here's the EXPLAIN ANALYZE. I see your point re: the 2.7M\n> expected vs the 2 actual, but I've run ANALYZE on the lead table and it\n> hasn't changed the plan. Suggestions?\n> \n> Hash Join (cost=62.13..2001702.55 rows=2711552 width=20) \n> (actual time=40.659..244709.315 rows=2 125270 loops=1)\n ^^^\nHmm - is that not just a formatting gap there? Is it not 2,125,270 rows \nmatching which would suggest PG is getting it more right than wrong.\n\nTry issuing \"SET enable_seqscan=false\" before running the explain \nanalyse - that will force the planner to use any indexes it can find and \nshould show us whether the index would help.\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 17 Oct 2005 18:45:38 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan on FK join"
},
{
"msg_contents": "When I turn of seqscan it does use the index - and it runs 20 to 30%\nlonger. Based on that, the planner is correctly choosing a sequential\nscan - but that's just hard for me to comprehend. I'm joining on an int4\nkey, 2048 per index page - I guess that's a lot of reads - then the data\n-page reads. Still, the 8-minute query time seems excessive. \n\nOn Mon, 17 Oct 2005 18:45:38 +0100, Richard Huxton wrote:\n\n> Martin Nickel wrote:\n>> Subject: Re: Sequential scan on FK join From: Martin Nickel\n>> <[email protected]> Newsgroups: pgsql.performance\n>> Date: Wed, 12 Oct 2005 15:53:35 -0500\n>> \n>> Richard, here's the EXPLAIN ANALYZE. I see your point re: the 2.7M\n>> expected vs the 2 actual, but I've run ANALYZE on the lead table and it\n>> hasn't changed the plan. Suggestions?\n>> \n>> Hash Join (cost=62.13..2001702.55 rows=2711552 width=20) (actual\n>> time=40.659..244709.315 rows=2 125270 loops=1)\n> ^^^\n> Hmm - is that not just a formatting gap there? Is it not 2,125,270 rows\n> matching which would suggest PG is getting it more right than wrong.\n> \n> Try issuing \"SET enable_seqscan=false\" before running the explain analyse\n> - that will force the planner to use any indexes it can find and should\n> show us whether the index would help. --\n> Richard Huxton\n> Archonet Ltd\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Mon, 17 Oct 2005 14:56:43 -0500",
"msg_from": "Martin Nickel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan on FK join"
},
{
"msg_contents": "Martin Nickel wrote:\n> When I turn of seqscan it does use the index - and it runs 20 to 30%\n> longer. Based on that, the planner is correctly choosing a sequential\n> scan - but that's just hard for me to comprehend. I'm joining on an int4\n> key, 2048 per index page - I guess that's a lot of reads - then the data\n> -page reads. Still, the 8-minute query time seems excessive. \n\nYou'll be getting (many) fewer than 2048 index entries per page. There's \na page header and various pointers involved too, and index pages aren't \ngoing to be full. So - it needs to search the table on dates, fetch the \nid's and then assemble them for the hash join. Of course, if you have \ntoo many to join then all this will spill to disk slowing you further.\n\nNow, you'd rather get down below 8 minutes. There are a number of options:\n 1. Make sure your disk i/o is being pushed to its limit\n 2. Look into increasing the sort memory for this one query \"set \nwork_mem...\" (see the runtime configuration section of the manual)\n 3. Actually - are you happy that your general configuration is OK?\n 4. Perhaps use a cursor - I'm guessing you want to process these \nmailings in some way and only want them one at a time in any case.\n 5. Try the query one day at a time and see if the balance tips the \nother way - you'll be dealing with substantially less data per query \nwhich might match your system better. Of course, this may not be \npractical for your applicaton.\n 6. If your lead table is updated only rarely, you could try a CLUSTER \non the table by mailing_id - that should speed the scan. Read the manual \nfor the cluster command first though.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 18 Oct 2005 08:52:15 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan on FK join"
},
{
"msg_contents": "On Tue, 18 Oct 2005 08:52:15 +0100, Richard Huxton wrote:\n\n> Martin Nickel wrote:\n>> When I turn of seqscan it does use the index - and it runs 20 to 30%\n>> longer. Based on that, the planner is correctly choosing a sequential\n>> scan - but that's just hard for me to comprehend. I'm joining on an\n>> int4 key, 2048 per index page - I guess that's a lot of reads - then the\n>> data -page reads. Still, the 8-minute query time seems excessive.\n> \n> You'll be getting (many) fewer than 2048 index entries per page. There's a\n> page header and various pointers involved too, and index pages aren't\n> going to be full. So - it needs to search the table on dates, fetch the\n> id's and then assemble them for the hash join. Of course, if you have too\n> many to join then all this will spill to disk slowing you further.\n> \n> Now, you'd rather get down below 8 minutes. There are a number of options:\n> 1. Make sure your disk i/o is being pushed to its limit \nWe are completely peaked out on disk io. iostat frequently shows 60%\niowait time. This is quite an issue for us and I don't have any\ngreat ideas. Data is on a 3ware sata raid at raid 10 across 4 disks. I\ncan barely even run vacuums on our largest table (lead) since it runs for\na day and a half and kills our online performance while running.\n\n> 2. Look into increasing the sort memory for this one query \"set\n> work_mem...\" (see the runtime configuration section of the manual)\nI haven't tried this, and I will. Thanks for the idea.\n\n> 3. Actually - are you happy that your general configuration is OK? \nI'm not at all. Most of the configuration changes I've tried have made\nalmost no discernable difference. I'll post the relevant numbers in a\ndifferent post - possibly you'll have some suggestions.\n\n> 4. Perhaps use a cursor - I'm guessing you want to process these\n> mailings in some way and only want them one at a time in any case.\nWhere this first came up was in trying to get aggregate totals per\nmailing. I gave up on that and created a nightly job to create a summary\ntable since Postgres wasn't up to the job in real time. Still, I\nfrequently need to do the join and limit it by other criteria - and it is\nincredibly slow - even when the result set is smallish.\n\n> 5. Try the query one day at a time and see if the balance tips the\n> other way - you'll be dealing with substantially less data per query\n> which might match your system better. Of course, this may not be\n> practical for your applicaton.\nIt is not useful.\n\n> 6. If your lead table is updated only rarely, you could try a CLUSTER\n> on the table by mailing_id - that should speed the scan. Read the manual\n> for the cluster command first though.\nThe lead table is one of the most volatle in our system. Each day we\ninsert tens or hundreds of thousands of rows, update almost that many, and\ndelete a few. It is growing, and could reach 100 million rows in 8 or 9\nmonths. We're redesigning the data structure a little so lead is not\nupdated (updates are just too slow), but it will continue to have inserts\nand deletes, and we'll have to join it with the associated table being\nupdated, which already promises to be a slow operation.\n\nWe're looking at 15K rpm scsi drives for a replacement raid array. We are\ngetting the place where it may be cheaper to convert to Oracle or DB2 than\nto try and make Posgres work.\n\n",
"msg_date": "Fri, 21 Oct 2005 02:59:14 -0500",
"msg_from": "Martin Nickel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan on FK join"
},
{
"msg_contents": "On Tue, 18 Oct 2005 08:52:15 +0100, Richard Huxton wrote:\n> 3. Actually - are you happy that your general configuration is OK? \nWe're running dual Opteron 244s with 4G of memory. The platform is\nSuse 9.3, 64 bit. The database is on a 3ware 9500S-8 sata raid controller\nconfigured raid 10 with 4 drives plus a hot swap. Drives are\n7400 rpm (don't remember model or size).\n\nI'm running Postgres 8.0.3. Here are some of the relevant conf file\nparameters:\nshared_buffers = 50000\nsort_mem = 8192\nwork_mem = 256000\nvacuum_mem = 32768\nmax_fsm_pages = 40000\nmax_fsm_relations = 1000\n\nI realize shared_buffers is too high. Not sure on the others. Thanks for\nany help you can suggest. I've moved most of these around some and\nrestarted without any clear changes for the better or worse (just\nseat-of-the-pants feel - I haven't tried to benchmark the result of\nchanges at all).\n\nThanks,\nMartin\n\n\n",
"msg_date": "Fri, 21 Oct 2005 03:51:56 -0500",
"msg_from": "Martin Nickel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan on FK join"
},
{
"msg_contents": "Martin Nickel wrote:\n> On Tue, 18 Oct 2005 08:52:15 +0100, Richard Huxton wrote:\n> > 3. Actually - are you happy that your general configuration is OK? \n> We're running dual Opteron 244s with 4G of memory. The platform is\n> Suse 9.3, 64 bit. The database is on a 3ware 9500S-8 sata raid controller\n> configured raid 10 with 4 drives plus a hot swap. Drives are\n> 7400 rpm (don't remember model or size).\n> \n> I'm running Postgres 8.0.3. Here are some of the relevant conf file\n> parameters:\n> shared_buffers = 50000\n> sort_mem = 8192\n> work_mem = 256000\n\nInteresting that you set both sort_mem and work_mem. Do you realize\nthat the former is an obsolete name, and currently a synonym for the\nlatter? Maybe the problem is that you are using too much memory for\nsorts, forcing swap usage, etc.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J\n\"La persona que no quer�a pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n",
"msg_date": "Sat, 22 Oct 2005 10:25:12 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan on FK join"
}
] |
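A sketch of the two experiments suggested in this thread, reusing Martin's table and column names; the work_mem value is illustrative and, as Alvaro notes at the end of the thread, setting it too high can force the box into swap:

    SET enable_seqscan = false;   -- force the index plan, for comparison only
    EXPLAIN ANALYZE
    SELECT m.mailcode, l.lead_id
      FROM mailing m
     INNER JOIN lead l ON m.mailing_id = l.mailing_id
     WHERE m.maildate >= '2005-7-01'::date
       AND m.maildate < '2005-8-01'::date;
    RESET enable_seqscan;

    SET work_mem = 131072;        -- per-session value in kB, sized so the hash fits in RAM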
[
{
"msg_contents": "Hello, \n\nI have a strange effect on upcoming structure :\n\n\nDEX_OBJ ---< DEX_STRUCT >--- DEX_LIT\n\nDEX_OBJ : 100 records (#DOO_ID, DOO_NAME)\nDEX_STRUCT : 2,5 million records (#(DST_SEQ, FK_DOO_ID, FK_LIT_ID))\nDEX_LIT : 150K records (#LIT_ID, LIT_TEXT)\n\n(# marks primary key)\n\ni'd like to count all LIT occurences in struct for a set of LITs.\n\nso i indexed DEX_STRUCT using (FK_LIT_ID, FK_DOO_ID)\nand i indexed DEX_LIT using BTREE (LIT_TEXT, LIT_ID)\n\nbut if i query\n\nSELECT DOO_ID\n ,\t COUNT(FK_LIT_ID) AS occurences\n FROM DEX_STRUCT STR\n ,\t DEX_LITERAL LIT\nWHERE STR.FK_LIT_ID = LIT.LIT_ID\n AND LIT_TEXT IN ('foo', 'bar', 'foobar')\n GROUP BY DOO_ID\n\npostgresql always runs a seq scan on DEX_STRUCT. I tried several indices and also very different kinds of queries (from EXISTS via INNER JOIN up to subqueries), but Pgsql does not use any index on dex_struct.\n\nWhat can I do ? Is this a optimizer misconfiguration (hence, it is still in default config) ?\n\nHow can I make Pg using the indices on doc_struct ? The index on LIT is used :-(\n\nI expect 30 - 60 millions of records in the struct table, so I urgently need indexed access.\n\nThanks a lot !\n\nMarcus\n\n\n\n\n\n\nOptimizer misconfigured ?\n\n\n\n\n\nHello, \n\nI have a strange effect on upcoming structure :\n\n\nDEX_OBJ ---< DEX_STRUCT >--- DEX_LIT\n\nDEX_OBJ : 100 records (#DOO_ID, DOO_NAME)\nDEX_STRUCT : 2,5 million records (#(DST_SEQ, FK_DOO_ID, FK_LIT_ID))\nDEX_LIT : 150K records (#LIT_ID, LIT_TEXT)\n\n(# marks primary key)\n\ni'd like to count all LIT occurences in struct for a set of LITs.\n\nso i indexed DEX_STRUCT using (FK_LIT_ID, FK_DOO_ID)\nand i indexed DEX_LIT using BTREE (LIT_TEXT, LIT_ID)\n\nbut if i query\n\nSELECT DOO_ID\n , COUNT(FK_LIT_ID) AS occurences\n FROM DEX_STRUCT STR\n , DEX_LITERAL LIT\nWHERE STR.FK_LIT_ID = LIT.LIT_ID\n AND LIT_TEXT IN ('foo', 'bar', 'foobar')\n GROUP BY DOO_ID\n\npostgresql always runs a seq scan on DEX_STRUCT. I tried several indices and also very different kinds of queries (from EXISTS via INNER JOIN up to subqueries), but Pgsql does not use any index on dex_struct.\nWhat can I do ? Is this a optimizer misconfiguration (hence, it is still in default config) ?\n\nHow can I make Pg using the indices on doc_struct ? The index on LIT is used :-(\n\nI expect 30 - 60 millions of records in the struct table, so I urgently need indexed access.\n\nThanks a lot !\n\nMarcus",
"msg_date": "Thu, 13 Oct 2005 11:53:57 +0200",
"msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer misconfigured ?"
},
{
"msg_contents": "Nörder-Tuitje wrote:\n> \n> Hello, \n> \n> I have a strange effect on upcoming structure :\n\nPeople will be wanting the output of EXPLAIN ANALYSE on that query.\n\nThey'll also ask whether you've VACUUMed, ANALYSEd and configured your \npostgresql.conf correctly.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n",
"msg_date": "Thu, 13 Oct 2005 11:21:58 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer misconfigured ?"
}
] |
[
{
"msg_contents": "[email protected] wrote:\n\n>>> Have you tried reindexing your active tables?\n\n> It will cause some performance hit while you are doing it. It\n> sounds like something is bloating rapidly on your system and\n> the indexes is one possible place that could be happening.\n\nYou might consider using contrib/oid2name to monitor physical growth of\ntables and indexes. There have been some issues with bloat in PostgreSQL\nversions prior to 8.0, however there might still be some issues under\ncertain circumstances even now, so it does pay to cast an eye on what's\ngoing on. If you haven't run vaccum regularly, this might lead to\nregular vacuums not reclaiming enough dead tuples in one go, so if\nyou've had quite a lot of UPDATE/DELETE activity going onin the past and\nonly just started to use pg_autovacuum after the DB has been in\nproduction for quite a while, you might indeed have to run a VACUUM FULL\nand/or REINDEX on the affected tables, both of which will more or less\nlock out any client access to the tables als long as they're running.\n\nKind regards\n\n Markus\n",
"msg_date": "Thu, 13 Oct 2005 12:16:49 +0200",
"msg_from": "\"Markus Wollny\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
}
] |
[
{
"msg_contents": "[email protected] wrote:\n\n> Next we'll upgrade the postgres hardware, and then I'll come\n> back to report if it's working better... sorry for the noise for now.\n\nThere have been some discussions about which hardware suits PostgreSQL's\nneeds best under certain load-characteristics. We have experienced quite\na write-performance burst just from switching from a RAID5-config to a\nRAID10 (mirroring&striping), even though we had been using some\nsupposedly sufficiently powerful dedicated battery-backuped\nSCSI-RAID-adapters with lots of on-board cache. You can't beat simple,\nalthough it will cost disk-space. Anyway, you might want to search the\narchives for discussion on RAID-configurations.\n",
"msg_date": "Thu, 13 Oct 2005 12:24:31 +0200",
"msg_from": "\"Markus Wollny\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help tuning postgres"
}
] |
[
{
"msg_contents": "\nPg 7.4.5\nRH 7.3\nQuad Xeon 3Gz\n12G ram\n\nTrying to do a update of fields on 23M row database.\nIs it normal for this process to take 16hrs and still clocking? Both join\nfields are indexed and I have removed any indexes on the updated columns.\nAlso both tables are vacuumed regularly.\nI'm weary to cancel the job for fear that it is just slow and I'll have to\nrepeat the 16hr job.\nAny suggestions of what I can check for the bottleneck?\n\nBelow is my update statement and table structure:\n\nupdate cdm.cdm_ddw_tran_item\nset dept_id = dept,\nvend_id = vend,\nmkstyl = mstyle\nfrom flbasics\nwhere flbasics.upc = cdm.cdm_ddw_tran_item.item_upc;\n\n\nCREATE TABLE cdm.cdm_ddw_tran_item\n(\n appl_xref varchar(22),\n intr_xref varchar(13),\n tran_typ_id char(1),\n tran_ship_amt numeric(8,2),\n fill_store_div int4,\n soldto_cust_id int8,\n soldto_cust_seq int4,\n shipto_cust_id int8,\n shipto_cust_seq int4,\n itm_qty int4,\n itm_price numeric(8,2),\n item_id int8,\n item_upc int8,\n item_pid varchar(20),\n item_desc varchar(30),\n nrf_color_name varchar(10),\n nrf_size_name varchar(10),\n dept_id int4,\n vend_id int4,\n mkstyl int4,\n ddw_tran_key bigserial NOT NULL,\n price_type_id int2 DEFAULT 999,\n last_update date DEFAULT ('now'::text)::date,\n CONSTRAINT ddw_tritm_pk PRIMARY KEY (ddw_tran_key)\n)\nWITHOUT OIDS;\n\nCREATE TABLE flbasics\n(\n upc int8,\n dept int4,\n vend int4,\n mstyle int4,\n xcolor int4,\n size int4,\n owned float8,\n cost float8,\n xclass int2,\n firstticket float8,\n status char(2),\n last_receipt date,\n description varchar(50),\n pack_qty int2,\n discontinue_date date,\n std_rcv_units int4,\n std_rcv_cost float8,\n std_rcv_retail float8,\n first_receipt date,\n last_pchange varchar(9),\n ticket float8,\n std_mkd_units int4,\n std_mkd_dollars float8\n)\nWITHOUT OIDS;\n\nPatrick Hatcher\nDevelopment Manager Analytics/MIO\nMacys.com\n\n\n",
"msg_date": "Thu, 13 Oct 2005 09:34:39 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow update"
},
{
"msg_contents": "Patrick Hatcher <[email protected]> writes:\n> Pg 7.4.5\n\n> Trying to do a update of fields on 23M row database.\n> Is it normal for this process to take 16hrs and still clocking?\n\nAre there foreign keys pointing at the table being updated? If so,\nfailure to index the referencing columns could create this sort of\nperformance problem. Also, in 7.4 you'd better be sure the referencing\ncolumns are the same datatype as the referenced column.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Oct 2005 14:34:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow update "
},
{
"msg_contents": "Thanks. No foreign keys and I've been bitten by the mismatch datatypes and\nchecked that before sending out the message :)\n\nPatrick Hatcher\nDevelopment Manager Analytics/MIO\nMacys.com\n\n\n\n\n \n Tom Lane \n <[email protected] \n s> To \n Patrick Hatcher \n 10/13/2005 11:34 <[email protected]> \n AM cc \n postgres performance list \n <[email protected]> \n Subject \n Re: [PERFORM] slow update \n \n \n \n \n \n \n\n\n\n\nPatrick Hatcher <[email protected]> writes:\n> Pg 7.4.5\n\n> Trying to do a update of fields on 23M row database.\n> Is it normal for this process to take 16hrs and still clocking?\n\nAre there foreign keys pointing at the table being updated? If so,\nfailure to index the referencing columns could create this sort of\nperformance problem. Also, in 7.4 you'd better be sure the referencing\ncolumns are the same datatype as the referenced column.\n\n regards, tom lane\n\n\n",
"msg_date": "Thu, 13 Oct 2005 11:37:09 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow update"
}
] |
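One way to make the check Tom suggests, finding foreign keys that point at the table being updated, is to query the catalog; a sketch using the table name from the thread:

    SELECT conname, conrelid::regclass AS referencing_table
      FROM pg_constraint
     WHERE contype = 'f'
       AND confrelid = 'cdm.cdm_ddw_tran_item'::regclass;

Any table listed should have an index on its referencing column, and on 7.4 that column should have exactly the same type as the referenced column.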
[
{
"msg_contents": "Hi,\n\nmeanwhile I have received the hint to make postgres use the index via\n\nSET ENABLE_SEQSCAN=FALSE;\n\nwhich fits perfectly. The execution plan now indicates full use of index.\n\nNevertheless this is merely a workaround. Maybe the io-costs are configured to cheap.\n\nthanks :-)\n\n\n-----Ursprüngliche Nachricht-----\nVon: Richard Huxton [mailto:[email protected]]\nGesendet: Donnerstag, 13. Oktober 2005 12:22\nAn: Nörder-Tuitje, Marcus\nCc: [email protected]\nBetreff: Re: [PERFORM] Optimizer misconfigured ?\n\n\nNörder-Tuitje wrote:\n> \n> Hello, \n> \n> I have a strange effect on upcoming structure :\n\nPeople will be wanting the output of EXPLAIN ANALYSE on that query.\n\nThey'll also ask whether you've VACUUMed, ANALYSEd and configured your \npostgresql.conf correctly.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n\n\n",
"msg_date": "Fri, 14 Oct 2005 09:31:34 +0200",
"msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer misconfigured ?"
},
{
"msg_contents": "Nörder-Tuitje wrote:\n> Hi,\n> \n> meanwhile I have received the hint to make postgres use the index via\n> \n> \n> SET ENABLE_SEQSCAN=FALSE;\n> \n> which fits perfectly. The execution plan now indicates full use of\n> index.\n\nWhat execution plan? I still only see one message on the list.\n\n> Nevertheless this is merely a workaround. Maybe the io-costs are\n> configured to cheap.\n\nPossibly - the explain analyse will show you.\n--\n Richard Huxton\n Archonet Ltd\n\n",
"msg_date": "Fri, 14 Oct 2005 08:38:39 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer misconfigured ?"
}
] |
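Rather than leaving enable_seqscan off, the usual knobs when sequential I/O looks too cheap to the planner are the cost settings; a sketch with illustrative values only, reusing the query from this thread:

    SET random_page_cost = 2;           -- default is 4; lower values favour index scans
    SET effective_cache_size = 100000;  -- in 8 kB pages; reflect the cache you really have
    EXPLAIN ANALYZE
    SELECT DOO_ID, COUNT(FK_LIT_ID) AS occurrences
      FROM DEX_STRUCT STR, DEX_LITERAL LIT
     WHERE STR.FK_LIT_ID = LIT.LIT_ID
       AND LIT_TEXT IN ('foo', 'bar', 'foobar')
     GROUP BY DOO_ID;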
[
{
"msg_contents": "Ok, since my question got no answer on the general list, I'm reposting\nit here since this list seems in fact better suited to it.\n \nDoes anyone here know what is the most efficient way to list all\ndifferent values of a given column with low cardinality ? For instance\nI have a table with columns DAY, NAME, ID, etc. The table is updated\nabout each week with thousands of records with the same (current) date.\nNow I would like to list all values for DAY, only if possible without\nscanning all the table each time I submit the request.\n \nI can think of:\n \nSolution 1: SELECT DAY FROM TABLE GROUP BY DAY;\n \nSolution 2: SELECT DISTINCT DAY FROM TABLE;\n \n(BTW why do those two yield such different performances, the later being\nseemingly *much* slower than the former ?)\n \nSolution 3: Improve performance through an index scan by using DAY as\nthe first element of the PK, (PRIMARY KEY (DAY, ID) ), although DAY has\na low cardinality ?\n \nSolution 4: Create a separate index on column DAY ?\n \nSolution 5: Use some kind of view / stored procedure that would be\nprecomputed when TABLE is updated or cached when called for the first\ntime ? Does something like that exist ?\n \nSolution 6: Store the values in a separate table, recreated each time\nTABLE is updated.\n \nThis looks to me as a very common problem. Is there an obvious / best /\nstandard solution there ? What would be the expected performance of the\ndifferent solutions above ? (I guess some are probably non-sense)\n \nThank you all !\nChristian\n \n",
"msg_date": "Fri, 14 Oct 2005 18:02:56 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Best way to get all different values in a column"
},
{
"msg_contents": "On Fri, Oct 14, 2005 at 06:02:56PM +0200, [email protected] wrote:\n> Does anyone here know what is the most efficient way to list all\n> different values of a given column with low cardinality ? For instance\n> I have a table with columns DAY, NAME, ID, etc. The table is updated\n> about each week with thousands of records with the same (current) date.\n> Now I would like to list all values for DAY, only if possible without\n> scanning all the table each time I submit the request.\n> I can think of:\n> ...\n> Solution 6: Store the values in a separate table, recreated each time\n> TABLE is updated.\n\nI've found a variant on 6 to work well for this problem domain.\n\nWhy not insert into the separate table, when you insert into the table?\nEither as a trigger, or in your application.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Fri, 14 Oct 2005 12:38:19 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best way to get all different values in a column"
},
{
"msg_contents": "On Fri, Oct 14, 2005 at 06:02:56PM +0200, [email protected] wrote:\n> Ok, since my question got no answer on the general list, I'm reposting\n> it here since this list seems in fact better suited to it.\n> \n> Does anyone here know what is the most efficient way to list all\n> different values of a given column with low cardinality ? For instance\n> I have a table with columns DAY, NAME, ID, etc. The table is updated\n> about each week with thousands of records with the same (current) date.\n> Now I would like to list all values for DAY, only if possible without\n> scanning all the table each time I submit the request.\n> \n> I can think of:\n> \n> Solution 1: SELECT DAY FROM TABLE GROUP BY DAY;\n> \n> Solution 2: SELECT DISTINCT DAY FROM TABLE;\n> \n> (BTW why do those two yield such different performances, the later being\n> seemingly *much* slower than the former ?)\n> \n> Solution 3: Improve performance through an index scan by using DAY as\n> the first element of the PK, (PRIMARY KEY (DAY, ID) ), although DAY has\n> a low cardinality ?\n> \n> Solution 4: Create a separate index on column DAY ?\n> \n> Solution 5: Use some kind of view / stored procedure that would be\n> precomputed when TABLE is updated or cached when called for the first\n> time ? Does something like that exist ?\n> \n> Solution 6: Store the values in a separate table, recreated each time\n> TABLE is updated.\n> \n> This looks to me as a very common problem. Is there an obvious / best /\n> standard solution there ? What would be the expected performance of the\n> different solutions above ? (I guess some are probably non-sense)\n> \n\nThere's not going to be a single \"best\" solution, as it'll depend on\nyour requirements, and on your application level constraints.\n\nYou say that the table is seldom updated (a few thousand a week is \"almost\nnever\"). If it's updated in a single batch you could simply generate\na table of the distinct values after each update pretty easily (solution\n6).\n\nIf you don't have such a well-defined update then using a trigger on\ninserts, updates and deletes of the table to update a separate table\nto keep track of the counts of each distinct values, then you can\njust select any row with a non-zero count from that table (solution 5).\n(You need the counts to be able to deal with deletes efficiently). That\nwould increase the cost of updating the main table significantly, but\nyou're putting very little traffic through it, so that's unlikely to\nbe a problem.\n\nI doubt that solutions 3 or 4 are worth looking at at all, and the first\ntwo are what they are and you know their performance already.\n\nYou could probably do this far more efficiently with some of the work\nbeing done in the application layer, rather than in the database - for\ninstance you could update the counts table one time per transaction,\nrather than one time per operation - but that would lose you the\nconvenience of maintaining the counts correctly when you futz with\nthe data manually or using tools not aware of the count table.\n\nCheers,\n Steve\n",
"msg_date": "Fri, 14 Oct 2005 10:04:24 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best way to get all different values in a column"
}
] |
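A minimal sketch of the trigger-maintained counts table Steve describes (solution 5); "big_table" and the column name "day" stand in for the real schema, and updates that change the day column, as well as two sessions inserting a brand-new day at once, are left out for brevity:

    CREATE TABLE day_counts (day date PRIMARY KEY, n bigint NOT NULL);

    CREATE FUNCTION maintain_day_counts() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE day_counts SET n = n + 1 WHERE day = NEW.day;
            IF NOT FOUND THEN
                INSERT INTO day_counts VALUES (NEW.day, 1);
            END IF;
        ELSE  -- DELETE
            UPDATE day_counts SET n = n - 1 WHERE day = OLD.day;
        END IF;
        RETURN NULL;  -- AFTER trigger, return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER big_table_day_counts
        AFTER INSERT OR DELETE ON big_table
        FOR EACH ROW EXECUTE PROCEDURE maintain_day_counts();

    -- listing the distinct values is then a scan of the tiny side table:
    SELECT day FROM day_counts WHERE n > 0;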
[
{
"msg_contents": "We are indexing about 5 million small documents using tsearch2/GIST. Each \"document\" contains 2 to 50 words. This is a \"write once, read many\" situation. Write performance is unimportant, and the database contents are static. (We build it offline.)\n\nWe're having problems with inconsistent performance, and it's very hard to separate the effects of various factors. Here are the things we think may be relevant.\n\n1. Total number of words\n\nOur documents currently contain about 110,000 unique words. Oleg wrote: \"[The limit is] 100K, but it's very fuzzy limit.\" By trial and error, we've learned that 50,000 works well, and 150,000 works poorly, so Oleg's comment appears to be a good rule-of-thumb. (With SIGLENINT enlarged, as mentioned above.) But there may be other factors that affect this conclusion (such as shared memory, total memory, etc.).\n\n\n2. Total size of the table\n\n8 million documents is not a very big database (each document is a few to a few hundred bytes), so we don't think this is relevant.\n\n\n3. Number of documents per word\n\nThere seems to be a VERY strong effect related to \"common\" words. When a word occurs in more than about 1% of the documents (say 50,000 to 150,000 documents), performance goes WAY down. Not just for that specific query, but it screws up tsearch2/GIST completely.\n\nWe have a test of 100 queries that return 382,000 documents total. The first time we run it, it's slow, about 20 minutes (as expected). The second time we run it, it's very fast, about 72 seconds -- very fast!! As long as we avoid queries with common words, performance is very good.\n\nBut, if we run just one query that contains a common word (a word that's in more than about 2% of the documents, roughly 150,000 documents), then the next time we run the 100 test queries, it will take 20 minutes again.\n\nWe can't simply eliminate these common words. First of all, they can be very significant. Second, it doesn't seem like 2% is \"common\". I can understand that a words like \"the\" which occur in most documents shouldn't be indexed. But a word that occurs in 2% of the database seems like a very good word to index, yet it causes us great problems.\n\nI've read a bit about tsearchd, and wonder if it would solve our problem. For our application, consistent performance is VERY important. If we could lock the GIST index into memory, I think it would fix our problem.\n\nI tried copying the GIST indexes (which are in a separate tablespace) to a 1 GB RAM disk, and it made the initial query faster, but overall performance seemed worse, probably because the RAM disk was using memory that could have been used by the file-system cache.\n\n\n4. Available RAM and Disk drives\n\nWould more RAM help? How would we tell Postgres to use it effectively? The GIST indexes are currently about 2.6 GB on the disk.\n\nWould more disks help? I know they would make it faster -- the 20-minute initial query would be reduce with a RAID drive, etc. 
But I'm not concerned about the 20-minute initial query, I'm concerned about keeping the system in that super-fast state where the GIST indexes are all in memory.\n\n\nHardware:\n Dual-CPU Xeon Dell server with 4 GB memory and a single SATA 7200 RPM 150GB disk.\n\ntsearch2/gistidx.h\n modified as: #define SIGLENINT 120\n\nSystem configuration:\n echo 2147483648 >/proc/sys/kernel/shmmax\n echo 4096 >/proc/sys/kernel/shmmni\n echo 2097152 >/proc/sys/kernel/shmall\n\nPostgres Configuration:\n shared_buffers = 20000\t\n work_mem = 32768\n effective_cache_size = 300000\n\nThanks very much for any comments and advice.\n\nCraig\n\n\n",
"msg_date": "Fri, 14 Oct 2005 15:31:59 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "tsearch2/GIST performance factors?"
}
] |
[
{
"msg_contents": "Hello,\n\n I am trying to select form table with bytea field. And queries runs very\nslow.\nMy table:\nCREATE TABLE files (file bytea, nr serial NOT NULL) WITH OIDS;\n\nQuery:\nselect * from files where nr > 1450\n\n(I have total 1500 records in it, every holds picture of 23kB size)\nQuery runs very long:\nTotal query runtime: 23625 ms.\nData retrieval runtime: 266 ms.\n50 rows retrieved.\n\nexplain:\nIndex Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\nIndex Cond: (nr > 1450)\n\n Is it possible to do something with it? or it is normal? Our server is\nfast, and all other tables work fine..\n\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content, and is believed to be clean.\n\n",
"msg_date": "Sat, 15 Oct 2005 16:20:54 +0300 (EEST)",
"msg_from": "\"NSO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bytea poor performance"
},
{
"msg_contents": "On 10/15/05 9:20 AM, \"NSO\" <[email protected]> wrote:\n\n> Hello,\n> \n> I am trying to select form table with bytea field. And queries runs very\n> slow.\n> My table:\n> CREATE TABLE files (file bytea, nr serial NOT NULL) WITH OIDS;\n> \n> Query:\n> select * from files where nr > 1450\n> \n> (I have total 1500 records in it, every holds picture of 23kB size)\n> Query runs very long:\n> Total query runtime: 23625 ms.\n> Data retrieval runtime: 266 ms.\n> 50 rows retrieved.\n> \n> explain:\n> Index Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\n> Index Cond: (nr > 1450)\n> \n> Is it possible to do something with it? or it is normal? Our server is\n> fast, and all other tables work fine..\n\nHow about some explain analyze output? Have you done a full vacuum lately?\nHow about reindexing?\n\nSean\n\n",
"msg_date": "Sat, 15 Oct 2005 09:40:28 -0400",
"msg_from": "Sean Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "Hello,\n\nHow about some explain analyze output?\n Explain analyse select * from files where nr > 1450\n\n \"Index Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\n (actual time=0.000..0.000 rows=50 loops=1)\"\n\nHave you done a full vacuum lately? How about reindexing?\n Yes, I did reindexing and vacuum full just before query..\n\n\n\n\n> On 10/15/05 9:20 AM, \"NSO\" <[email protected]> wrote:\n>\n>> Hello,\n>>\n>> I am trying to select form table with bytea field. And queries runs very\n>> slow.\n>> My table:\n>> CREATE TABLE files (file bytea, nr serial NOT NULL) WITH OIDS;\n>>\n>> Query:\n>> select * from files where nr > 1450\n>>\n>> (I have total 1500 records in it, every holds picture of 23kB size)\n>> Query runs very long:\n>> Total query runtime: 23625 ms.\n>> Data retrieval runtime: 266 ms.\n>> 50 rows retrieved.\n>>\n>> explain:\n>> Index Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\n>> Index Cond: (nr > 1450)\n>>\n>> Is it possible to do something with it? or it is normal? Our server is\n>> fast, and all other tables work fine..\n>\n> How about some explain analyze output? Have you done a full vacuum\n> lately?\n> How about reindexing?\n>\n> Sean\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n> --\n> This message has been scanned for viruses and\n> dangerous content, and is believed to be clean.\n>\n>\n\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content, and is believed to be clean.\n\n",
"msg_date": "Sat, 15 Oct 2005 17:00:09 +0300 (EEST)",
"msg_from": "\"NSO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "On 10/15/05 10:00 AM, \"NSO\" <[email protected]> wrote:\n\n> Hello,\n> \n> How about some explain analyze output?\n> Explain analyse select * from files where nr > 1450\n> \n> \"Index Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\n> (actual time=0.000..0.000 rows=50 loops=1)\"\n\nI may not be understanding the output, but your actual time reports 0 for\nthe query. And the total runtime is 23 seconds?\n\nSean\n\n\n>> On 10/15/05 9:20 AM, \"NSO\" <[email protected]> wrote:\n>> \n>>> Hello,\n>>> \n>>> I am trying to select form table with bytea field. And queries runs very\n>>> slow.\n>>> My table:\n>>> CREATE TABLE files (file bytea, nr serial NOT NULL) WITH OIDS;\n>>> \n>>> Query:\n>>> select * from files where nr > 1450\n>>> \n>>> (I have total 1500 records in it, every holds picture of 23kB size)\n>>> Query runs very long:\n>>> Total query runtime: 23625 ms.\n>>> Data retrieval runtime: 266 ms.\n>>> 50 rows retrieved.\n>>> \n>>> explain:\n>>> Index Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\n>>> Index Cond: (nr > 1450)\n>>> \n>>> Is it possible to do something with it? or it is normal? Our server is\n>>> fast, and all other tables work fine..\n>> \n>> How about some explain analyze output? Have you done a full vacuum\n>> lately?\n>> How about reindexing?\n>> \n>> Sean\n>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>> \n>> --\n>> This message has been scanned for viruses and\n>> dangerous content, and is believed to be clean.\n>> \n>> \n> \n> \n\n",
"msg_date": "Sat, 15 Oct 2005 10:08:20 -0400",
"msg_from": "Sean Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "\n Yes, it takes even up to 35 seconds.. I did the same query on the server\n(not PC with was connected directly to server with 100mbit net), and /I\ngot better result it is 3.5 - 4 seconds, but it still not good.. Why it\nis slow? and why the difference is so big? I mean from 4 to 35 seconds?\n\nthx\n\n> On 10/15/05 10:00 AM, \"NSO\" <[email protected]> wrote:\n>\n>> Hello,\n>>\n>> How about some explain analyze output?\n>> Explain analyse select * from files where nr > 1450\n>>\n>> \"Index Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\n>> (actual time=0.000..0.000 rows=50 loops=1)\"\n>\n> I may not be understanding the output, but your actual time reports 0 for\n> the query. And the total runtime is 23 seconds?\n>\n> Sean\n>\n>\n>>> On 10/15/05 9:20 AM, \"NSO\" <[email protected]> wrote:\n>>>\n>>>> Hello,\n>>>>\n>>>> I am trying to select form table with bytea field. And queries runs\n>>>> very\n>>>> slow.\n>>>> My table:\n>>>> CREATE TABLE files (file bytea, nr serial NOT NULL) WITH OIDS;\n>>>>\n>>>> Query:\n>>>> select * from files where nr > 1450\n>>>>\n>>>> (I have total 1500 records in it, every holds picture of 23kB size)\n>>>> Query runs very long:\n>>>> Total query runtime: 23625 ms.\n>>>> Data retrieval runtime: 266 ms.\n>>>> 50 rows retrieved.\n>>>>\n>>>> explain:\n>>>> Index Scan using pk on files (cost=0.00..3.67 rows=50 width=36)\n>>>> Index Cond: (nr > 1450)\n>>>>\n>>>> Is it possible to do something with it? or it is normal? Our server is\n>>>> fast, and all other tables work fine..\n>>>\n>>> How about some explain analyze output? Have you done a full vacuum\n>>> lately?\n>>> How about reindexing?\n>>>\n>>> Sean\n>>>\n>>>\n>>> ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>>\n>>> --\n>>> This message has been scanned for viruses and\n>>> dangerous content, and is believed to be clean.\n>>>\n>>>\n>>\n>>\n>\n>\n> --\n> This message has been scanned for viruses and\n> dangerous content, and is believed to be clean.\n>\n>\n\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content, and is believed to be clean.\n\n",
"msg_date": "Sat, 15 Oct 2005 17:48:54 +0300 (EEST)",
"msg_from": "\"NSO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "\"NSO\" <[email protected]> writes:\n> Query runs very long:\n> Total query runtime: 23625 ms.\n> Data retrieval runtime: 266 ms.\n> 50 rows retrieved.\n\nNotice that the query itself took 266ms. The rest of the time was\nwasted by your client app trying to format a 23Kb by 50 row table\nfor display. You need to replace your client-side code with something\nless inefficient about handling wide values.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Oct 2005 11:54:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance "
},
{
"msg_contents": "Hello,\n\n Yes, I can understand that, but then why the same app on the server\nmachine is done in 4 seconds? (big difference from 20-30 seconds). I\ntryed to monitor network traffic and it is used only for 1-2% of total\n100mbit.\n\n\n> \"NSO\" <[email protected]> writes:\n>> Query runs very long:\n>> Total query runtime: 23625 ms.\n>> Data retrieval runtime: 266 ms.\n>> 50 rows retrieved.\n>\n> Notice that the query itself took 266ms. The rest of the time was\n> wasted by your client app trying to format a 23Kb by 50 row table\n> for display. You need to replace your client-side code with something\n> less inefficient about handling wide values.\n>\n> \t\t\tregards, tom lane\n>\n> --\n> This message has been scanned for viruses and\n> dangerous content, and is believed to be clean.\n>\n>\n\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content, and is believed to be clean.\n\n",
"msg_date": "Sat, 15 Oct 2005 19:16:30 +0300 (EEST)",
"msg_from": "\"NSO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "\"NSO\" <[email protected]> writes:\n> Yes, I can understand that, but then why the same app on the server\n> machine is done in 4 seconds? (big difference from 20-30 seconds).\n\nThat would suggest a networking problem, which is a bit outside my\nexpertise. If the client machine is running Windows, we have seen\nproblems of that sort before from (IIRC) various third-party add-ons\nthat fool around with networking behavior. Try searching the archives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Oct 2005 12:24:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance "
},
{
"msg_contents": "NSO wrote:\n> Hello,\n> \n> Yes, I can understand that, but then why the same app on the server\n> machine is done in 4 seconds? (big difference from 20-30 seconds). I\n> tryed to monitor network traffic and it is used only for 1-2% of total\n> 100mbit.\n> \n\nIs this a web app? If so, then check you are using the same browser \nsettings on the server and client (or the same browser for that matter).\n\nNote that some browsers really suck for large (wide or long) table display!\n\ncheers\n\nMark\n\n",
"msg_date": "Sun, 16 Oct 2005 17:53:10 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "Hello,\n\n No it is not web app, I tested on simple delphi app and with PGAdmin\nIII.. same results.. Query from PGAdmin takes up to 30seconds...\n\n> NSO wrote:\n>> Hello,\n>>\n>> Yes, I can understand that, but then why the same app on the server\n>> machine is done in 4 seconds? (big difference from 20-30 seconds). I\n>> tryed to monitor network traffic and it is used only for 1-2% of total\n>> 100mbit.\n>>\n>\n> Is this a web app? If so, then check you are using the same browser\n> settings on the server and client (or the same browser for that matter).\n>\n> Note that some browsers really suck for large (wide or long) table\n> display!\n>\n> cheers\n>\n> Mark\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n> --\n> This message has been scanned for viruses and\n> dangerous content, and is believed to be clean.\n>\n>\n\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content, and is believed to be clean.\n\n",
"msg_date": "Sun, 16 Oct 2005 14:01:46 +0300 (EEST)",
"msg_from": "\"NSO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "NSO wrote:\n\n>Hello,\n>\n> No it is not web app, I tested on simple delphi app and with PGAdmin\n>III.. same results.. Query from PGAdmin takes up to 30seconds...\n> \n>\nDisplaying the data can take a long time on several platforms for \npgAdmin; complex controls tend to be dead slow on larger data sets. \nWe're waiting for a better wxWidgets solution, I doubt delphi is better...\n\nRegards,\nAndreas\n\n",
"msg_date": "Sun, 16 Oct 2005 23:07:31 +0200",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "\n\nWell, no. Delphi isn't better, same time just for downloading data... But\nas I told before, if for ex. pgAdminIII is running on server machine it is\na lot faster, I do not know why, I was monitoring network connection\nbetween client and server and it is using only up to 2% of full speed.. is\nserver can't send faster? or client is not accepting data faster?\n\n> NSO wrote:\n>\n>>Hello,\n>>\n>> No it is not web app, I tested on simple delphi app and with PGAdmin\n>>III.. same results.. Query from PGAdmin takes up to 30seconds...\n>>\n>>\n> Displaying the data can take a long time on several platforms for\n> pgAdmin; complex controls tend to be dead slow on larger data sets.\n> We're waiting for a better wxWidgets solution, I doubt delphi is better...\n>\n> Regards,\n> Andreas\n>\n>\n> --\n> This message has been scanned for viruses and\n> dangerous content, and is believed to be clean.\n>\n>\n\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content, and is believed to be clean.\n\n",
"msg_date": "Mon, 17 Oct 2005 01:02:00 +0300 (EEST)",
"msg_from": "\"NSO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "NSO wrote:\n\n>Well, no. Delphi isn't better, same time just for downloading data... But\n>as I told before, if for ex. pgAdminIII is running on server machine it is\n>a lot faster, I do not know why, I was monitoring network connection\n>between client and server and it is using only up to 2% of full speed.. is\n>server can't send faster? or client is not accepting data faster?\n> \n>\n\nOnly the first number is relevant and subject to network/db/server \nissues. The second is GUI only.\n\nRegards,\nAndreas\n\n",
"msg_date": "Mon, 17 Oct 2005 10:31:35 +0200",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance"
},
{
"msg_contents": "NSO wrote:\n> \n> Well, no. Delphi isn't better, same time just for downloading data... But\n> as I told before, if for ex. pgAdminIII is running on server machine it is\n> a lot faster, I do not know why, I was monitoring network connection\n> between client and server and it is using only up to 2% of full speed.. is\n> server can't send faster? or client is not accepting data faster?\n> \n> \n\nThat difference is suspiciously high - you need to get one of your \nnetwork boys to check that the NIC in your client box is operating at \nfull speed (and/or does not clash with whatever network device it is \nplugged into). The other thing to check that that your client box is \nreasonably spec'ed : e.g. not running out of ram or disk in particular - \nor suffering from massively fragmented disk (the latter if its win32).\n\nWith respect to the Delphi, you can probably narrow where it has issues \nby running test versions of your app that have bits of functionality \nremoved:\n\n- retrieves the bytea but does not display it\n- retrieves the bytea but displays it unformatted, or truncated\n- does not retrieve the bytea at all\n\nThe difference between these should tell you where your issue is!\n\nBy way of comparison, I have a Php page (no Delphi sorry) that \nessentially shows 50 rows from your files table over a 100Mbit network. \nSome experiments with that show:\n\n- takes 2 seconds to display in Firefox\n- takes 0.2 seconds to complete a request (i.e. \"display\") using httperf\n\nThis indicates that (in my case) most of the 2 seconds is being used by \nFirefox (not being very good at) formatting the wide output for display.\n\nThe figure of about 2-5 seconds seems about right, so your 20-30 seconds \ncertainly seems high!\n\n\ncheers\n\nMark\n",
"msg_date": "Tue, 18 Oct 2005 14:58:48 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea poor performance"
}
] |
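One way to separate server time from transfer and rendering time for the query in this thread, run from psql on the client machine; length() reports the bytea size in bytes:

    \timing
    EXPLAIN ANALYZE SELECT * FROM files WHERE nr > 1450;          -- runs on the server, ships no images
    SELECT nr, length(file) AS bytes FROM files WHERE nr > 1450;  -- ships only small rows
    SELECT * FROM files WHERE nr > 1450;                          -- ships the full 23 kB bytea values

If only the last statement is slow, the time is going into moving and displaying the images rather than into the query itself, which matches the diagnosis above.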
[
{
"msg_contents": "We are indexing about 5 million small documents using tsearch2/GIST. Each \"document\" contains 2 to 50 words. This is a \"write once, read many\" situation. Write performance is unimportant, and the database contents are static. (We build it offline.)\n\nWe're having problems with inconsistent performance, and it's very hard to separate the effects of various factors. Here are the things we think may be relevant.\n\n1. Total number of words\n\nOur documents currently contain about 110,000 unique words. Oleg wrote: \"[The limit is] 100K, but it's very fuzzy limit.\" By trial and error, we've learned that 50,000 works well, and 150,000 works poorly, so Oleg's comment appears to be a good rule-of-thumb. (With SIGLENINT enlarged, see below.) But there may be other factors that affect this conclusion (such as shared memory, total memory, etc.).\n\n\n2. Total size of the table\n\n5 million documents is not a very big database (each document is a few to a few hundred bytes), so we don't think this is relevant.\n\n\n3. Number of documents per word\n\nThere seems to be a VERY strong effect related to \"common\" words. When a word occurs in more than about 1% of the documents (say 50,000 to 150,000 documents), performance goes WAY down. Not just for that specific query, but it screws up tsearch2/GIST completely.\n\nWe have a test of 100 queries that return 382,000 documents total. The first time we run it, it's slow, about 20 minutes (as expected). The second time we run it, it's very fast, about 72 seconds -- very fast!! As long as we avoid queries with common words, performance is very good.\n\nBut, if we run just one query that contains a common word (a word that's in more than about 2% of the documents, roughly 150,000 documents), then the next time we run the 100 test queries, it will take 20 minutes again.\n\nWe can't simply eliminate these common words. First of all, they can be very significant. Second, it doesn't seem like 2% is \"common\". I can understand that a words like \"the\" which occur in most documents shouldn't be indexed. But a word that occurs in 2% of the database seems like a very good word to index, yet it causes us great problems.\n\nI've read a bit about tsearchd, and wonder if it would solve our problem. For our application, consistent performance is VERY important. If we could lock the GIST index into memory, I think it would fix our problem.\n\nI tried copying the GIST indexes (which are in a separate tablespace) to a 1 GB RAM disk, and it made the initial query faster, but overall performance seemed worse, probably because the RAM disk was using memory that could have been used by the file-system cache.\n\n\n4. Available RAM and Disk drives\n\nWould more RAM help? How would we tell Postgres to use it effectively? The GIST indexes are currently about 2.6 GB on the disk.\n\nWould more disks help? I know they would make it faster -- the 20-minute initial query would be reduce with a RAID drive, etc. 
But I'm not concerned about the 20-minute initial query, I'm concerned about keeping the system in that super-fast state where the GIST indexes are all in memory.\n\n\nHardware:\n Dual-CPU Xeon Dell server with 4 GB memory and a single SATA 7200 RPM 150GB disk.\n\ntsearch2/gistidx.h\n modified as: #define SIGLENINT 120\n\nSystem configuration:\n echo 2147483648 >/proc/sys/kernel/shmmax\n echo 4096 >/proc/sys/kernel/shmmni\n echo 2097152 >/proc/sys/kernel/shmall\n\nPostgres Configuration:\n shared_buffers = 20000\t\n work_mem = 32768\n effective_cache_size = 300000\n\nI feel like I'm shooting in the dark -- Linux, Postgres and tsearch2/GIST are interacting in ways that I can't predict or analyze. Thanks very much for any comments and advice.\n\nCraig\n\n\n\n\n",
"msg_date": "Sat, 15 Oct 2005 07:47:08 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "tsearch2/GIST performance factors?"
},
{
"msg_contents": "On Sat, 15 Oct 2005, Craig A. James wrote:\n\n> We are indexing about 5 million small documents using tsearch2/GIST. Each \n> \"document\" contains 2 to 50 words. This is a \"write once, read many\" \n> situation. Write performance is unimportant, and the database contents are \n> static. (We build it offline.)\n>\n> We're having problems with inconsistent performance, and it's very hard to \n> separate the effects of various factors. Here are the things we think may be \n> relevant.\n>\n> 1. Total number of words\n>\n> Our documents currently contain about 110,000 unique words. Oleg wrote: \n> \"[The limit is] 100K, but it's very fuzzy limit.\" By trial and error, we've \n> learned that 50,000 works well, and 150,000 works poorly, so Oleg's comment \n> appears to be a good rule-of-thumb. (With SIGLENINT enlarged, see below.) \n> But there may be other factors that affect this conclusion (such as shared \n> memory, total memory, etc.).\n>\n\nDid you consider *decreasing* SIGLENINT ? Size of index will diminish\nand performance could be increased. I use in current project SIGLENINT=15\n\n>\n> 2. Total size of the table\n>\n> 5 million documents is not a very big database (each document is a few to a \n> few hundred bytes), so we don't think this is relevant.\n>\n>\n> 3. Number of documents per word\n>\n> There seems to be a VERY strong effect related to \"common\" words. When a \n> word occurs in more than about 1% of the documents (say 50,000 to 150,000 \n> documents), performance goes WAY down. Not just for that specific query, but \n> it screws up tsearch2/GIST completely.\n>\n> We have a test of 100 queries that return 382,000 documents total. The first \n> time we run it, it's slow, about 20 minutes (as expected). The second time \n> we run it, it's very fast, about 72 seconds -- very fast!! As long as we \n> avoid queries with common words, performance is very good.\n>\n> But, if we run just one query that contains a common word (a word that's in \n> more than about 2% of the documents, roughly 150,000 documents), then the \n> next time we run the 100 test queries, it will take 20 minutes again.\n>\n\n> We can't simply eliminate these common words. First of all, they can be very \n> significant. Second, it doesn't seem like 2% is \"common\". I can understand \n> that a words like \"the\" which occur in most documents shouldn't be indexed. \n> But a word that occurs in 2% of the database seems like a very good word to \n> index, yet it causes us great problems.\n>\n\ntsearch2's index is a lossy index, read http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\nso search results should be rechecked !\n\n\n> I've read a bit about tsearchd, and wonder if it would solve our problem. \n> For our application, consistent performance is VERY important. If we could \n> lock the GIST index into memory, I think it would fix our problem.\n\nI think so, tsearchd was designed for static contents in mind and it's\nindex doesn't require rechecking !\n\n>\n> I tried copying the GIST indexes (which are in a separate tablespace) to a 1 \n> GB RAM disk, and it made the initial query faster, but overall performance \n> seemed worse, probably because the RAM disk was using memory that could have \n> been used by the file-system cache.\n>\n>\n> 4. Available RAM and Disk drives\n>\n> Would more RAM help? How would we tell Postgres to use it effectively? 
The \n> GIST indexes are currently about 2.6 GB on the disk.\n\ntry to decrease signature size, say, \n#define SIGLENINT 15\n\n\n> I feel like I'm shooting in the dark -- Linux, Postgres and tsearch2/GIST are \n> interacting in ways that I can't predict or analyze. Thanks very much for \n> any comments and advice.\n\nWe have our TODO http://www.sai.msu.su/~megera/oddmuse/index.cgi/todo\nand hope to find sponsorhips for fts project for 8.2 release.\nUnfortunately, I didn't find spare time to package tsearchd for you,\nit should certainly help you.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n",
"msg_date": "Mon, 17 Oct 2005 21:43:13 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2/GIST performance factors?"
},
{
"msg_contents": "Oleg wrote:\n> Did you consider *decreasing* SIGLENINT ? Size of index will diminish\n> and performance could be increased. I use in current project SIGLENINT=15\n\nThe default value for SIGLENINT actually didn't work at all. It was only by increasing it that I got any performance at all. An examination of the GIST indexes showed that most of the first level and many of the second level bitmaps were saturated.\n\n> tsearch2's index is a lossy index, read \n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n> so search results should be rechecked !\n\nYes, thanks. We do indeed recheck the actual results. The tests I'm running are just on the raw index performance - how long does it take to \"select ... where dockeys @@ to_tsquery(...)\".\n\n> We have our TODO http://www.sai.msu.su/~megera/oddmuse/index.cgi/todo\n> and hope to find sponsorhips for fts project for 8.2 release.\n> Unfortunately, I didn't find spare time to package tsearchd for you,\n> it should certainly help you.\n\nAt this point we may not have time to try tsearchd, and unfortunately we're not in a position to sponsor anything yet.\n\nMy original question is still bothering me. Is it normal for a keyword that occurs in more than about 2% of the documents to cause such inconsistent performance? Is there any single thing I might look at that would help improve performance (like, do I need more memory? More shared memory? Different config parameters?)\n\nThanks,\nCraig\n",
"msg_date": "Mon, 17 Oct 2005 23:07:55 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tsearch2/GIST performance factors?"
},
{
"msg_contents": "Craig,\n\ncould you prepare excerption from your db (if possible), so I could\nplay myself ?\n\n \tOleg\nOn Mon, 17 Oct 2005, Craig A. James wrote:\n\n> Oleg wrote:\n>> Did you consider *decreasing* SIGLENINT ? Size of index will diminish\n>> and performance could be increased. I use in current project SIGLENINT=15\n>\n> The default value for SIGLENINT actually didn't work at all. It was only by \n> increasing it that I got any performance at all. An examination of the GIST \n> indexes showed that most of the first level and many of the second level \n> bitmaps were saturated.\n>\n>> tsearch2's index is a lossy index, read \n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>> so search results should be rechecked !\n>\n> Yes, thanks. We do indeed recheck the actual results. The tests I'm running \n> are just on the raw index performance - how long does it take to \"select ... \n> where dockeys @@ to_tsquery(...)\".\n>\n>> We have our TODO http://www.sai.msu.su/~megera/oddmuse/index.cgi/todo\n>> and hope to find sponsorhips for fts project for 8.2 release.\n>> Unfortunately, I didn't find spare time to package tsearchd for you,\n>> it should certainly help you.\n>\n> At this point we may not have time to try tsearchd, and unfortunately we're \n> not in a position to sponsor anything yet.\n>\n> My original question is still bothering me. Is it normal for a keyword that \n> occurs in more than about 2% of the documents to cause such inconsistent \n> performance? Is there any single thing I might look at that would help \n> improve performance (like, do I need more memory? More shared memory? \n> Different config parameters?)\n>\n> Thanks,\n> Craig\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n",
"msg_date": "Tue, 18 Oct 2005 10:20:43 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2/GIST performance factors?"
}
] |
[
{
"msg_contents": "Hello there,\n\nThis is my first post in the list. I have a deep low-level background on\ncomputer programming, but I am totally newbie to sql databases. I am using\npostgres because of its commercial license.\n\nMy problem is with storing large values. I have a database that stores large\nammounts of data (each row consisting of up to 5MB). After carefully reading\nthe Postgres 8.0 manual (the version I'm using), I was told that the best\noption was to create a bytea field.\n\nLarge objects are out of the line here since we have lots of tables.\n\nAs I understand it, the client needs to put the data into the server using a\ntextual-based command. This makes the 5MB data grow up-to 5x, making it 25MB\nin the worst case. (Example: 0x01 -> \\\\001).\n\nMy question is:\n\n1) Is there any way for me to send the binary field directly without needing\nescape codes?\n2) Will this mean that the client actually wastes my network bandwidth\nconverting binary data to text? Or does the client transparently manage\nthis?\n\nThanks for any light on the subject,\nRodrigo\n\nHello there,\n\nThis is my first post in the list. I have a deep low-level background\non computer programming, but I am totally newbie to sql databases. I am\nusing postgres because of its commercial license.\n\nMy problem is with storing large values. I have a database that stores\nlarge ammounts of data (each row consisting of up to 5MB). After\ncarefully reading the Postgres 8.0 manual (the version I'm using), I\nwas told that the best option was to create a bytea field.\n\nLarge objects are out of the line here since we have lots of tables.\n\nAs I understand it, the client needs to put the data into the server\nusing a textual-based command. This makes the 5MB data grow up-to 5x,\nmaking it 25MB in the worst case. (Example: 0x01 -> \\\\001).\n\nMy question is:\n\n1) Is there any way for me to send the binary field directly without needing escape codes?\n2) Will this mean that the client actually wastes my network bandwidth\nconverting binary data to text? Or does the client transparently manage\nthis?\n\nThanks for any light on the subject,\nRodrigo",
"msg_date": "Tue, 18 Oct 2005 18:07:12 +0000",
"msg_from": "Rodrigo Madera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inefficient escape codes."
},
{
"msg_contents": "On Tue, Oct 18, 2005 at 06:07:12PM +0000, Rodrigo Madera wrote:\n> 1) Is there any way for me to send the binary field directly without needing\n> escape codes?\n\nIn 7.4 and later the client/server protocol supports binary data\ntransfer. If you're programming with libpq you can use PQexecParams()\nto send and/or retrieve values in binary instead of text.\n\nhttp://www.postgresql.org/docs/8.0/interactive/libpq-exec.html#LIBPQ-EXEC-MAIN\n\nAPIs built on top of libpq or that implement the protcol themselves\nmight provide hooks to this capability; check your documentation.\nWhat language and API are you using?\n\nSee also COPY BINARY:\n\nhttp://www.postgresql.org/docs/8.0/interactive/sql-copy.html\n\n> 2) Will this mean that the client actually wastes my network bandwidth\n> converting binary data to text? Or does the client transparently manage\n> this?\n\nBinary transfer sends data in binary, not by automatically converting\nto and from text.\n\n-- \nMichael Fuhr\n",
"msg_date": "Tue, 18 Oct 2005 13:03:24 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient escape codes."
},
{
"msg_contents": "[Please copy the mailing list on replies so others can participate\nin and learn from the discussion.]\n\nOn Tue, Oct 18, 2005 at 07:09:08PM +0000, Rodrigo Madera wrote:\n> > What language and API are you using?\n> \n> I'm using libpqxx. A nice STL-style library for C++ (I am 101% C++).\n\nI've only dabbled with libpqxx; I don't know if or how you can make\nit send data in binary instead of text. See the documentation or\nask in a mailing list like libpqxx-general or pgsql-interfaces.\n\n> > Binary transfer sends data in binary, not by automatically converting\n> > to and from text.\n> \n> Uh, I'm sorry I didn't get that... If I send: insert into foo\n> values('\\\\001\\\\002') will libpq send 0x01, 0x02 or \"\\\\\\\\001\\\\\\\\002\"??\n\nIf you do it that way libpq will send the string as text with escape\nsequences; you can use a sniffer like tcpdump or ethereal to see this\nfor yourself. To send the data in binary you'd call PQexecParams()\nwith a query like \"INSERT INTO foo VALUES ($1)\". The $1 is a\nplaceholder; the other arguments to PQexecParams() provide the data\nitself, the data type and length, and specify whether the data is in\ntext format or binary. See the libpq documentation for details.\n\n-- \nMichael Fuhr\n",
"msg_date": "Tue, 18 Oct 2005 14:47:27 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient escape codes."
},
{
"msg_contents": "On 18/10/05, Michael Fuhr <[email protected]> wrote:\n> [Please copy the mailing list on replies so others can participate\n> in and learn from the discussion.]\n>\n> On Tue, Oct 18, 2005 at 07:09:08PM +0000, Rodrigo Madera wrote:\n> > > What language and API are you using?\n> >\n> > I'm using libpqxx. A nice STL-style library for C++ (I am 101% C++).\n>\n> I've only dabbled with libpqxx; I don't know if or how you can make\n> it send data in binary instead of text. See the documentation or\n> ask in a mailing list like libpqxx-general or pgsql-interfaces.\n>\n> > > Binary transfer sends data in binary, not by automatically converting\n> > > to and from text.\n> >\n> > Uh, I'm sorry I didn't get that... If I send: insert into foo\n> > values('\\\\001\\\\002') will libpq send 0x01, 0x02 or \"\\\\\\\\001\\\\\\\\002\"??\n>\n> If you do it that way libpq will send the string as text with escape\n> sequences; you can use a sniffer like tcpdump or ethereal to see this\n> for yourself. To send the data in binary you'd call PQexecParams()\n> with a query like \"INSERT INTO foo VALUES ($1)\". The $1 is a\n> placeholder; the other arguments to PQexecParams() provide the data\n> itself, the data type and length, and specify whether the data is in\n> text format or binary. See the libpq documentation for details.\n>\n\nYou could base64 encode your data admitiaddly increasing it by 1/3 but\nit does at least convert it to text which means that its more\nunserstandable. base64 is also pritty standard being whats used in\nEMails for mime attachments.\n\nPeter\n",
"msg_date": "Wed, 19 Oct 2005 08:19:04 +0100",
"msg_from": "Peter Childs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient escape codes."
},
{
"msg_contents": "HI!\n\n \n\nI am having a confusion to the memory handling of postgreSQL.\n\n \n\nHere is the Scenario.\n\nI rebooted my Server which is a PostgreSQL 8.0 Running on Redhat 9, which is\na Dual Xeon Server and 6 gig of memory.\n\nOf course there is not much memory still used since it is just restarted.\n\nBut after a number of access to the tables the memory is being used and it\nis not being free up. Actually after this access to the database and the\nserver is just idle\n\nThe memory is still used up. I am monitoring this using the \"free\" command\nwhich gives me about 5.5 gig of used memory and the rest free.\n\n \n\nIs there something that I should do to minimize and free up the used memory?\n\n \n\nThanks You.\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html\n\n\n\n\n\n\n\n\nHI!\n \nI am having a confusion to the memory\nhandling of postgreSQL.\n \nHere is the Scenario.\nI rebooted my Server which is a PostgreSQL\n8.0 Running on Redhat 9, which is a Dual Xeon Server and 6 gig of memory.\nOf course there is not much memory still\nused since it is just restarted.\nBut after a number of access to the tables\nthe memory is being used and it is not being free up. Actually after this\naccess to the database and the server is just idle\nThe memory is still used up. I am\nmonitoring this using the “free” command which gives me about 5.5\ngig of used memory and the rest free.\n \nIs there something that I should do to minimize\nand free up the used memory?\n \nThanks You.\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html",
"msg_date": "Fri, 21 Oct 2005 03:40:47 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Used Memory"
},
{
"msg_contents": "--On Freitag, Oktober 21, 2005 03:40:47 +0000 \"Christian Paul B. Cosinas\" \n<[email protected]> wrote:\n> I am having a confusion to the memory handling of postgreSQL.\n> I rebooted my Server which is a PostgreSQL 8.0 Running on Redhat 9, which\n> is a Dual Xeon Server and 6 gig of memory.\n>\n> Of course there is not much memory still used since it is just restarted.\n>\n> But after a number of access to the tables the memory is being used and\n> it is not being free up. Actually after this access to the database and\n> the server is just idle\n>\n> The memory is still used up. I am monitoring this using the \"free\"\n> command which gives me about 5.5 gig of used memory and the rest free.\nI suppose you looked at the top row of the free output?\n\nBecause there the disk-cache is counted as \"used\"... Have a look at the \nsecond row where buffers are counted as free, which they more or less are.\n\n> Is there something that I should do to minimize and free up the used\n> memory?\nNo, the buffers make your database faster because they reduce direct disk \naccess\n\n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html\nI don't :)\n\nMit freundlichem Gruß,\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n",
"msg_date": "Fri, 21 Oct 2005 09:23:28 +0200",
"msg_from": "Jens-Wolfhard Schicke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
},
{
"msg_contents": "\n\n\nBut as long as the memory is in the cache my database became much slower.\nWhat could probably be the cause of this? But When I restarted the database\nis back to normal processing.\n-----Original Message-----\nFrom: Jens-Wolfhard Schicke [mailto:[email protected]]\nSent: Friday, October 21, 2005 7:23 AM\nTo: Christian Paul B. Cosinas; [email protected]\nSubject: Re: [PERFORM] Used Memory\n\n--On Freitag, Oktober 21, 2005 03:40:47 +0000 \"Christian Paul B. Cosinas\" \n<[email protected]> wrote:\n> I am having a confusion to the memory handling of postgreSQL.\n> I rebooted my Server which is a PostgreSQL 8.0 Running on Redhat 9, \n> which is a Dual Xeon Server and 6 gig of memory.\n>\n> Of course there is not much memory still used since it is just restarted.\n>\n> But after a number of access to the tables the memory is being used \n> and it is not being free up. Actually after this access to the \n> database and the server is just idle\n>\n> The memory is still used up. I am monitoring this using the \"free\"\n> command which gives me about 5.5 gig of used memory and the rest free.\nI suppose you looked at the top row of the free output?\n\nBecause there the disk-cache is counted as \"used\"... Have a look at the\nsecond row where buffers are counted as free, which they more or less are.\n\n> Is there something that I should do to minimize and free up the used \n> memory?\nNo, the buffers make your database faster because they reduce direct disk\naccess\n\n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html\nI don't :)\n\nMit freundlichem Gruß,\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n",
"msg_date": "Fri, 21 Oct 2005 08:02:40 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
},
{
"msg_contents": "\nAlso Does Creating Temporary table in a function and not dropping them\naffects the performance of the database?\n\n\n-----Original Message-----\nFrom: Jens-Wolfhard Schicke [mailto:[email protected]]\nSent: Friday, October 21, 2005 7:23 AM\nTo: Christian Paul B. Cosinas; [email protected]\nSubject: Re: [PERFORM] Used Memory\n\n--On Freitag, Oktober 21, 2005 03:40:47 +0000 \"Christian Paul B. Cosinas\" \n<[email protected]> wrote:\n> I am having a confusion to the memory handling of postgreSQL.\n> I rebooted my Server which is a PostgreSQL 8.0 Running on Redhat 9, \n> which is a Dual Xeon Server and 6 gig of memory.\n>\n> Of course there is not much memory still used since it is just restarted.\n>\n> But after a number of access to the tables the memory is being used \n> and it is not being free up. Actually after this access to the \n> database and the server is just idle\n>\n> The memory is still used up. I am monitoring this using the \"free\"\n> command which gives me about 5.5 gig of used memory and the rest free.\nI suppose you looked at the top row of the free output?\n\nBecause there the disk-cache is counted as \"used\"... Have a look at the\nsecond row where buffers are counted as free, which they more or less are.\n\n> Is there something that I should do to minimize and free up the used \n> memory?\nNo, the buffers make your database faster because they reduce direct disk\naccess\n\n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html\nI don't :)\n\nMit freundlichem Gruß,\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n",
"msg_date": "Fri, 21 Oct 2005 08:08:16 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
},
{
"msg_contents": "On Fri, 21 Oct 2005 03:40:47 -0000\n\"Christian Paul B. Cosinas\" <[email protected]> wrote:\n\n>\n> But after a number of access to the tables the memory is being used\n> and it is not being free up. Actually after this access to the\n> database and the server is just idle\n\nI noticed this behavior on my SUSE linux box as well. I thought it was\na memory leak in something (I think there was an actual memory leak in\nthe kernel shared memory stuff, which I fixed by upgrading my kernel\nto 2.6.13-ck8). It turns out that some file systems are better than\nothers when it comes to increasing the performance of I/O on Linux.\nReiserFS was what I put on originally and by the end of the day, the\nbox would be using all of it's available memory in caching inodes.\n\nI kept rebooting and trying to get the memory usage to go down, but it\nnever did. All but 500MB of it was disk cache. I let my apps just run\nand when the application server needed more memory, it reclaimed it from\nthe disk cache, so there weren't side effects to the fact that top and\nfree always reported full memory usage.\n\nThey tell me that this is a good thing, as it reduces disk I/O and\nincreases performance. That's all well and good, but it's entirely\nunnecessary in our situation. Despite that, I can't turn it off because\nmy research into the issue has shown that kernel developers don't want\nusers to be able to turn off disk caching. There is a value\nin /proc/sys/vm/vfs_cache_pressure that can be changed, which will\naffect the propensity of the kernel to cache files in RAM (google it\nto find the suggestions on what value to set it to), but there isn't a\nsetting to turn that off on purpose.\n\nAfter rolling my own CK-based kernel, switching to XFS, and tweaking\nthe nice and CPU affinity of my database process (I use schedtool in my\nCK kernel to run it at SCHED_FIFO, nice -15, and CPU affinity confined\nto the second processor in my dual Xeon eServer) has got me to the\npoint that the perpetually high memory usage doesn't affect my\napplication server.\n\nHope any of this helps.\n\nJon Brisbin\nWebmaster\nNPC International, Inc.\n",
"msg_date": "Fri, 21 Oct 2005 10:29:35 -0500",
"msg_from": "Jon Brisbin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
},
{
"msg_contents": "[snip]\n>\n> to the second processor in my dual Xeon eServer) has got me to the\n> point that the perpetually high memory usage doesn't affect my\n> application server.\n\n\nI'm curious - how does the high memory usage affect your application server?\n\nAlex\n\n[snip]to the second processor in my dual Xeon eServer) has got me to thepoint that the perpetually high memory usage doesn't affect my\napplication server.\nI'm curious - how does the high memory usage affect your application server?\n\nAlex",
"msg_date": "Fri, 21 Oct 2005 11:41:34 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
},
{
"msg_contents": "It affect my application since the database server starts to slow down.\nHence a very slow in return of functions.\n\n \n\nAny more ideas about this everyone?\n\n \n\nPlease..\n\n _____ \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Alex Turner\nSent: Friday, October 21, 2005 3:42 PM\nTo: Jon Brisbin\nCc: [email protected]\nSubject: Re: [PERFORM] Used Memory\n\n \n\n[snip]\n\nto the second processor in my dual Xeon eServer) has got me to the\npoint that the perpetually high memory usage doesn't affect my \napplication server.\n\n\nI'm curious - how does the high memory usage affect your application server?\n\nAlex \n\n \n\n \n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html\n\n\n\n\n\n\n\n\n\n \nIt affect my application since the\ndatabase server starts to slow down. Hence a very slow in return of functions.\n \nAny more ideas about this everyone?\n \nPlease….\n\n\n\n\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of Alex Turner\nSent: Friday, October 21, 2005\n3:42 PM\nTo: Jon Brisbin\nCc:\[email protected]\nSubject: Re: [PERFORM] Used Memory\n\n \n[snip]\n\n\nto the second processor in my dual Xeon eServer) has got me to the\npoint that the perpetually high memory usage doesn't affect my \napplication server.\n\n\n\nI'm curious - how does the high memory usage affect your application server?\n\nAlex \n\n \n\n \n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html",
"msg_date": "Mon, 24 Oct 2005 01:42:33 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
},
{
"msg_contents": "total used free shared\nbuffers cached\n\nMem: 6192460 6137424 55036\n0 85952 5828844\n\n-/+ buffers/cache: 222628 5969832\n\nSwap: 2096472 0 2096472\n\n \n\n \n\nHere is the result of \"free\" command\" I am talking about.\n\nWhat does this result mean?\n\n \n\nI just noticed that as long as the free memory in the first row (which is\n55036 as of now) became low, the slower is the response of the database\nserver.\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html\n\n\n\n\n\n\n\n\n \ntotal used free shared \n buffers cached\nMem: 6192460 6137424 \n 55036 0 85952 5828844\n-/+ buffers/cache: 222628 \n 5969832\nSwap: 2096472 0 \n 2096472\n \n \nHere is the result of “free”\ncommand” I am talking about.\nWhat does this result mean?\n \nI just noticed that as long as the free\nmemory in the first row (which is 55036 as of now) became low, the slower is\nthe response of the database server.\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html",
"msg_date": "Mon, 24 Oct 2005 01:47:05 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
},
{
"msg_contents": "Christian Paul B. Cosinas wrote:\n> \n> \n> Here is the result of �free� command� I am talking about.\n> \n> What does this result mean?\n> \n\nI seem to recall the Linux man page for 'free' being most \nunenlightening, so have a look at:\n\nhttp://gentoo-wiki.com/FAQ_Linux_Memory_Management\n\n(For Gentoo, but should be applicable to RHEL).\n\nThe basic idea is that modern operating systems try to make as much use \nof the memory as possible. Postgresql depends on this behavior - e.g. a \npage that has previously been fetched from disk, will be cached, so it \ncan be read from memory next time, as this is faster(!)\n\n> \n> \n> I just noticed that as long as the free memory in the first row (which \n> is 55036 as of now) became low, the slower is the response of the \n> database server.\n> \n\nWell, you could be swapping - what does the swap line of 'free' show then?\n\nAlso, how about posting your postgresql.conf (or just the non-default \nparameters) to this list?\n\nSome other stuff that could be relevant:\n\n- Is the machine just a database server, or does it run (say) Apache + Php?\n- When the slowdown is noticed, does this coincide with certain \nactivities - e.g, backup , daily maintenance, data load(!) etc.\n\n\nregards\n\nMark\n\n> \n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html\n\nNope, not me either.\n\n",
"msg_date": "Mon, 24 Oct 2005 16:14:20 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Used Memory"
}
] |
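The escape-codes thread above describes, but does not show, the binary-parameter path through libpq. A minimal, self-contained C sketch of that PQexecParams() approach follows; it assumes a hypothetical table images(name text, data bytea), and the connection string, table, and column names are illustrative only, not taken from the thread.

```c
/* Hedged sketch: inserting a bytea value in binary format with
 * PQexecParams(), so the payload crosses the wire as raw bytes with
 * no backslash escaping. Table/column names are invented. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Some binary payload; in the thread's case this would be the ~5 MB value. */
    const unsigned char blob[] = { 0x01, 0x02, 0x00, 0xff };

    const char *paramValues[2];
    int         paramLengths[2];
    int         paramFormats[2];

    paramValues[0]  = "example";            /* ordinary text parameter */
    paramLengths[0] = 0;                    /* ignored for text-format params */
    paramFormats[0] = 0;                    /* 0 = text */

    paramValues[1]  = (const char *) blob;  /* raw bytes, no escaping */
    paramLengths[1] = (int) sizeof(blob);   /* length is required for binary */
    paramFormats[1] = 1;                    /* 1 = binary */

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO images (name, data) VALUES ($1, $2)",
                                 2,         /* number of parameters */
                                 NULL,      /* types inferred from the target columns */
                                 paramValues,
                                 paramLengths,
                                 paramFormats,
                                 0);        /* text-format result is fine here */

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

Because paramFormats[1] is 1, the bytea value is transmitted as-is: the client does no escaping and the 5 MB payload does not grow, which is the behaviour the original poster was after.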
[
{
"msg_contents": "I want to correlate two index rows of different tables to find an\noffset so that\n\ntable1.value = table2.value AND table1.id = table2.id + offset\n\nis true for a maximum number of rows.\n\nTo achieve this, I have the two tables and a table with possible\noffset values and execute a query:\n\nSELECT value,(SELECT COUNT(*) FROM table1,table2\n WHERE table1.value = table2.value AND\n table1.id = table2.id + offset)\n AS matches FROM offsets ORDER BY matches;\n\nThe query is very inefficient, however, because the planner doesn't\nuse my indexes and executes seqscans instead. I can get it to execute\nfast by setting ENABLE_SEQSCAN to OFF, but I have read this will make\nthe performance bad on other query types so I want to know how to\ntweak the planner costs or possibly other stats so the planner will\nplan the query correctly and use index scans. There must be something\nwrong in the planning parameters after all if a plan that is slower by\na factor of tens or hundreds becomes estimated better than the fast\nvariant.\n\nI have already issued ANALYZE commands on the tables.\n\nThanks for your help,\nKatherine Stoovs\n",
"msg_date": "Wed, 19 Oct 2005 14:51:55 +0000 (UTC)",
"msg_from": "Katherine Stoovs <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuning seqscan costs"
},
{
"msg_contents": "Katherine Stoovs <[email protected]> writes:\n> There must be something\n> wrong in the planning parameters after all if a plan that is slower by\n> a factor of tens or hundreds becomes estimated better than the fast\n> variant.\n\nInstead of handwaving, how about showing us EXPLAIN ANALYZE results for\nboth cases? You didn't even explain how the index you expect it to use\nis defined...\n\nSpecifying what PG version you are using is also minimum required\ninformation for this sort of question.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Oct 2005 10:15:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] tuning seqscan costs "
},
{
"msg_contents": "\nOn Oct 19, 2005, at 9:51 AM, Katherine Stoovs wrote:\n\n> I want to correlate two index rows of different tables to find an\n> offset so that\n>\n> table1.value = table2.value AND table1.id = table2.id + offset\n>\n> is true for a maximum number of rows.\n>\n> To achieve this, I have the two tables and a table with possible\n> offset values and execute a query:\n>\n> SELECT value,(SELECT COUNT(*) FROM table1,table2\n> WHERE table1.value = table2.value AND\n> table1.id = table2.id + offset)\n> AS matches FROM offsets ORDER BY matches;\n>\n> The query is very inefficient, however, because the planner doesn't\n> use my indexes and executes seqscans instead. I can get it to execute\n> fast by setting ENABLE_SEQSCAN to OFF, but I have read this will make\n> the performance bad on other query types so I want to know how to\n> tweak the planner costs or possibly other stats so the planner will\n> plan the query correctly and use index scans. There must be something\n> wrong in the planning parameters after all if a plan that is slower by\n> a factor of tens or hundreds becomes estimated better than the fast\n> variant.\n>\n> I have already issued ANALYZE commands on the tables.\n>\n> Thanks for your help,\n> Katherine Stoovs\n\nKatherine,\n\nIf offset is a column in offsets, can you add an index on the \nexpresion table2.id + offset?\n\nhttp://www.postgresql.org/docs/8.0/static/indexes-expressional.html\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\n\nOpen Source Solutions. Optimized Web Development.\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n",
"msg_date": "Wed, 26 Oct 2005 17:48:17 -0500",
"msg_from": "\"Thomas F. O'Connell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning seqscan costs"
}
] |
[
{
"msg_contents": "Rodrigo wrote:\n$$\nAs I understand it, the client needs to put the data into the server\nusing a textual-based command. This makes the 5MB data grow up-to 5x,\nmaking it 25MB in the worst case. (Example: 0x01 -> \\\\001).\n\nMy question is:\n\n1) Is there any way for me to send the binary field directly without\nneeding escape codes?\n2) Will this mean that the client actually wastes my network bandwidth\nconverting binary data to text? Or does the client transparently manage\nthis?\n$$ [snip]\n\nI think the fastest, most efficient binary transfer of data to\nPostgreSQL via C++ is a STL wrapper to libpq. Previously I would not\nhave recommended libqpxx for this purpose although this may have changed\nwith the later releases. As others have commented you most certainly\nwant to do this with the ExecParams/ExecPrepared interface. If you want\nto exclusively use libqxx then you need to find out if it exposes/wraps\nthis function (IIRC libpqxx build on top of libpq).\n\nYou can of course 'roll your own' libpq wrapper via STL pretty easily.\nFor example, here is how I am making my SQL calls from my COBOL apps:\n\ntypedef vector<string> stmt_param_strings;\ntypedef vector<int> stmt_param_lengths;\ntypedef vector<const char*> stmt_param_values;\ntypedef vector<int> stmt_param_formats;\n\n[...]\n\nres = PQexecPrepared(\t_connection, \n\t\t\t\tstmt.c_str(), \n\t\t\t\tnum_params, \n\t\t\t\t¶m_values[0], \n\t\t\t\t¶m_lengths[0], \n\t\t\t\t¶m_formats[0], \n\t\t\t\tresult_format);\n\nExecuting data this way is a direct data injection to/from the server,\nno parsing/unparsing, no plan generating, etc. Also STL vectors play\nvery nice with the libpq interface because it takes unterminated stings.\n\n\nMerlin\n\n\n\n\n",
"msg_date": "Wed, 19 Oct 2005 10:54:21 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficient escape codes."
}
] |
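As a companion to the PQexecPrepared() fragment quoted above, here is a speculative, self-contained C sketch of the same prepare/execute pattern on the retrieval side, asking libpq to hand back the bytea column in binary. The statement name, table, and column names are invented for illustration and do not come from the original message.

```c
/* Hedged sketch: prepare once, execute with PQexecPrepared(), and fetch
 * a bytea column in binary format. Names are illustrative only. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Plan the statement once; it can then be executed many times. */
    PGresult *res = PQprepare(conn, "fetch_image",
                              "SELECT data FROM images WHERE name = $1",
                              1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    const char *paramValues[1] = { "example" };

    res = PQexecPrepared(conn, "fetch_image",
                         1,            /* one parameter */
                         paramValues,
                         NULL,         /* lengths: ignored for text-format params */
                         NULL,         /* NULL means all parameters are text */
                         1);           /* ask for the result set in binary */

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0) {
        /* With a binary result, the bytea field arrives as raw bytes, so
         * PQgetlength() is the true size of the stored value and no
         * unescaping pass is needed on the client. */
        int         len   = PQgetlength(res, 0, 0);
        const char *bytes = PQgetvalue(res, 0, 0);  /* points at the raw bytes */

        printf("fetched %d bytes (first byte: 0x%02x)\n",
               len, len > 0 ? (unsigned char) bytes[0] : 0);
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

Requesting a binary result avoids the textual expansion of bytea output on the way back as well, which is the same cost the earlier escape-codes discussion was concerned with on the insert path.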
[
{
"msg_contents": "Hi,\n\nI'm using PostgreSQL 8.1 beta 3 (packages from Debian experimental), and I\nhave a (rather complex) query that seems to take forever -- when the database\nwas just installed, it took about 1200ms (which is quite good, considering\nthat the 7.4 system this runs on today uses about the same time, but has\ntwice as much CPU power and runs sequential scans up to eight times as fast),\nbut now I can never even get it to complete. I've tried running it for half\nan hour, but it still doesn't complete, so I'm a bit unsure what's going on.\n\nThere's a _lot_ of tables and views in here, several hundres lines of SQL,\nbut experience tells me that posting more is better than posting less, so\nhere goes. (The data is unfortunately not public since it contains PIN codes\nand such, but if anybody asks I can probably send it off-list. It's ~30MB in\nplain pg_dump, though.) There might be a few tables that aren't referenced,\nbut I don't really know a good way to figure out such dependencies\nautomatically, and I'd guess most of them _are_ used :-) Apologies in advance\nfor the Norwegian in the names.\n\n=== cut here ===\n\nCREATE TABLE gruppetype (\n gruppetype_id integer NOT NULL PRIMARY KEY,\n gruppetype varchar\n);\nCREATE TABLE gruppe (\n gruppe_id serial NOT NULL PRIMARY KEY,\n gruppe varchar NOT NULL,\n beskrivelse varchar,\n gruppetype_id integer DEFAULT 1 NOT NULL REFERENCES gruppetype,\n adminacl varchar,\n aktiv boolean default 't' NOT NULL\n);\nCREATE TABLE adgangsskjema (\n adgangsskjema_id serial NOT NULL PRIMARY KEY,\n navn varchar NOT NULL,\n rita_navn varchar NOT NULL\n);\nCREATE TABLE adgangsskjema_gruppe_kobling (\n gruppe_id integer NOT NULL REFERENCES gruppe (gruppe_id),\n adgangsskjema_id integer NOT NULL REFERENCES adgangsskjema (adgangsskjema_id),\n PRIMARY KEY (adgangsskjema_id, gruppe_id)\n);\nCREATE TABLE kortstatus (\n kortstatus_id smallint NOT NULL PRIMARY KEY,\n kortstatus varchar\n);\nCREATE TABLE korttype (\n korttype_id serial NOT NULL PRIMARY KEY,\n korttype varchar NOT NULL,\n beskrivelse varchar\n);\nCREATE TABLE medlemstatus (\n medlemstatus_id serial NOT NULL PRIMARY KEY,\n medlemstatus varchar NOT NULL,\n beskrivelse varchar\n);\nCREATE TABLE oblattype (\n oblattype_id serial NOT NULL PRIMARY KEY,\n oblattype varchar NOT NULL,\n varighet interval NOT NULL\n);\nCREATE TABLE skole (\n skole_id serial NOT NULL PRIMARY KEY,\n skole varchar NOT NULL,\n beskrivelse varchar\n);\nCREATE TABLE studie (\n studie_id serial NOT NULL PRIMARY KEY,\n studie varchar NOT NULL,\n beskrivelse varchar\n);\nCREATE TABLE poststed (\n postnummer smallint NOT NULL PRIMARY KEY CHECK (postnummer >= 0 AND postnummer <= 9999),\n poststed varchar\n);\nCREATE TABLE gruppekobling (\n overgruppe_id integer NOT NULL REFERENCES gruppe (gruppe_id),\n undergruppe_id integer NOT NULL REFERENCES gruppe (gruppe_id),\n PRIMARY KEY (overgruppe_id, undergruppe_id)\n);\nCREATE TABLE medlem (\n medlem_id serial NOT NULL PRIMARY KEY CHECK (medlem_id > 0),\n fornavn varchar NOT NULL,\n etternavn varchar NOT NULL,\n hjemadresse varchar,\n hjem_postnummer smallint REFERENCES poststed (postnummer),\n studieadresse varchar,\n studie_postnummer smallint REFERENCES poststed (postnummer),\n fodselsdato date,\n telefon varchar,\n mail varchar UNIQUE,\n passord character(32) NOT NULL,\n registrert date DEFAULT now(),\n oppdatert date DEFAULT now(),\n skole_id integer REFERENCES skole,\n studie_id integer REFERENCES studie,\n medlemstatus_id integer DEFAULT 1 NOT NULL REFERENCES medlemstatus,\n 
pinkode smallint CHECK ((pinkode >= 0 AND pinkode <= 9999) OR pinkode IS NULL),\n UNIQUE ( LOWER(mail) )\n);\nCREATE TABLE kort (\n kortnummer integer NOT NULL PRIMARY KEY CHECK (kortnummer > 0),\n medlem_id integer REFERENCES medlem DEFERRABLE,\n korttype_id integer DEFAULT 1 NOT NULL REFERENCES korttype,\n serie_registrert date DEFAULT now() NOT NULL,\n bruker_registrert date,\n kortstatus_id integer DEFAULT 1 NOT NULL REFERENCES kortstatus\n);\nCREATE TABLE oblat (\n oblatnummer integer NOT NULL PRIMARY KEY CHECK (oblatnummer > 0),\n oblattype_id integer NOT NULL REFERENCES oblattype,\n \"start\" date NOT NULL,\n kortnummer integer REFERENCES kort,\n bruker_registrert date,\n serie_registrert date DEFAULT NOW() NOT NULL\n);\nCREATE TABLE verv (\n medlem_id integer NOT NULL REFERENCES medlem,\n gruppe_id integer NOT NULL REFERENCES gruppe,\n \"start\" date DEFAULT now() NOT NULL,\n stopp date,\n\n CHECK ( stopp >= start ),\n PRIMARY KEY ( medlem_id, gruppe_id, \"start\" )\n);\nCREATE TABLE nytt_passord (\n medlem_id integer NOT NULL REFERENCES medlem,\n hash varchar NOT NULL,\n tidspunkt date DEFAULT now() NOT NULL\n);\nCREATE VIEW gyldige_medlemskap AS\n\tSELECT medlem_id,MAX(\"start\"+varighet) AS stopp\n\t FROM kort\n\t JOIN oblat ON kort.kortnummer=oblat.kortnummer\n\t NATURAL JOIN oblattype\n\tWHERE kortstatus_id=1\n\tAND medlem_id IS NOT NULL\n\tGROUP BY medlem_id\n\tHAVING MAX(\"start\"+varighet) >= current_date;\n\nCREATE SCHEMA kortsys2;\n\nCREATE FUNCTION kortsys2.effektiv_dato(date) RETURNS date\nAS\n\t'SELECT CASE WHEN $1 < CURRENT_DATE THEN CURRENT_DATE ELSE $1 END'\nLANGUAGE SQL STABLE;\n\nCREATE VIEW kortsys2.mdb_personer AS\n SELECT * FROM (\n SELECT DISTINCT ON (medlem_id) medlem_id,fornavn,etternavn,mail,pinkode,kort.kortnummer AS kortnummer\n FROM medlem\n NATURAL JOIN kort -- the member must have an ID card\n WHERE\n kortstatus_id=1 -- the card must be active\n AND korttype_id IN (2,3) -- the card must be an ID card or UKA ID card\n AND pinkode IS NOT NULL -- the member must have a PIN\n AND medlem_id IN ( -- the member must be active in at least one group\n SELECT medlem_id\n \t FROM verv\n \t WHERE stopp IS NULL OR stopp >= current_date\n )\n AND medlem_id IN ( -- the member must have a valid membership\n SELECT medlem_id FROM gyldige_medlemskap\n )\n ORDER BY\n medlem_id, -- needed for the DISTINCT\n korttype_id -- prioritize ID cards over UKA ID cards\n ) AS t1\n UNION ALL\n SELECT * FROM eksterne_kort.eksterne_personer;\n \nCREATE TABLE kortsys2.rita_personer (\n\tmedlem_id integer PRIMARY KEY NOT NULL,\n\tfornavn varchar NOT NULL,\n\tetternavn varchar NOT NULL,\n\tmail varchar NOT NULL,\n\tpinkode smallint NOT NULL CHECK (pinkode >= 0 AND pinkode <= 9999),\n\tkortnummer integer UNIQUE NOT NULL\n);\nCREATE TABLE kortsys2.personer_tving_sletting (\n\tmedlem_id integer PRIMARY KEY NOT NULL\n);\nCREATE VIEW kortsys2.personer_skal_slettes AS\n\tSELECT medlem_id\n\t FROM kortsys2.rita_personer\n\t WHERE (medlem_id,pinkode,kortnummer) NOT IN (\n\t SELECT medlem_id,pinkode,kortnummer\n\t FROM kortsys2.mdb_personer\n\t )\n\tUNION\n\t SELECT medlem_id\n\t FROM kortsys2.personer_tving_sletting;\n\nCREATE TABLE kortsys2.personer_nylig_slettet (\n\tmedlem_id integer PRIMARY KEY NOT NULL\n);\n\nCREATE VIEW kortsys2.personer_skal_eksporteres AS\n SELECT *\n FROM kortsys2.mdb_personer\n WHERE medlem_id NOT IN (\n SELECT medlem_id FROM kortsys2.rita_personer\n )\n\t AND medlem_id NOT IN (\n\t SELECT medlem_id FROM kortsys2.personer_nylig_slettet\n\t );\n\nCREATE TABLE 
kortsys2.mdb_gruppekobling_temp (\n\tovergruppe_id INTEGER NOT NULL,\n\tundergruppe_id INTEGER NOT NULL\n);\n\nCREATE OR REPLACE FUNCTION kortsys2.mdb_gruppekobling_transitiv_tillukning() RETURNS SETOF gruppekobling AS\n'\nDECLARE\n\tr RECORD;\nBEGIN\n\tINSERT INTO kortsys2.mdb_gruppekobling_temp\n\tSELECT overgruppe_id,undergruppe_id FROM gruppekobling gk\n\t\tJOIN gruppe g1 ON gk.overgruppe_id=g1.gruppe_id\n\t\tJOIN gruppe g2 ON gk.overgruppe_id=g2.gruppe_id\n\t\tWHERE g1.aktiv AND g2.aktiv;\n\tLOOP\n\t\tINSERT INTO kortsys2.mdb_gruppekobling_temp\n\t\tSELECT g1.overgruppe_id, g2.undergruppe_id\n\t\t FROM kortsys2.mdb_gruppekobling_temp g1\n\t\t JOIN kortsys2.mdb_gruppekobling_temp g2\n\t\t ON g1.undergruppe_id=g2.overgruppe_id\n\t\tWHERE (g1.overgruppe_id, g2.undergruppe_id) NOT IN (\n\t\t\tSELECT * FROM kortsys2.mdb_gruppekobling_temp\n\t\t);\n\t\tEXIT WHEN NOT FOUND;\n\tEND LOOP;\n\tFOR r IN SELECT * from kortsys2.mdb_gruppekobling_temp LOOP\n\t\tRETURN NEXT r;\n\tEND LOOP;\n DELETE FROM kortsys2.mdb_gruppekobling_temp;\n\tRETURN;\nEND;\n' LANGUAGE plpgsql;\n\nCREATE VIEW kortsys2.mdb_gruppetilgang AS\n\tSELECT DISTINCT\n\t\tgk.undergruppe_id AS gruppe_id,\n\t\trita_navn\n\tFROM (\n\t\tSELECT * FROM mdb_gruppekobling_transitiv_tillukning()\n\t\tUNION SELECT gruppe_id,gruppe_id FROM gruppe WHERE aktiv \n\t) gk \n\tJOIN adgangsskjema_gruppe_kobling ak ON gk.overgruppe_id=ak.gruppe_id\n\tNATURAL JOIN adgangsskjema; \n\nCREATE VIEW kortsys2.mdb_tilgang AS\n SELECT\n t1.medlem_id AS medlem_id,\n rita_navn,\n \"start\",\n CASE WHEN m_stopp < stopp OR stopp IS NULL THEN m_stopp ELSE stopp END AS stopp \n FROM ( \n SELECT\n medlem_id,\n gruppe_id,\n ms.stopp AS m_stopp,\n MIN(\"start\") AS start,\n MAX(v.stopp) AS stopp\n FROM (\n SELECT * FROM verv\n UNION ALL\n SELECT * FROM eksterne_kort.vervekvivalens\n ) v\n JOIN (\n SELECT * FROM gyldige_medlemskap ms\n UNION ALL\n SELECT medlem_id,stopp FROM eksterne_kort.vervekvivalens\n ) ms\n USING (medlem_id)\n \n WHERE ( v.stopp IS NULL OR v.stopp >= current_date ) \n GROUP BY medlem_id,gruppe_id,ms.stopp\n ) t1\n JOIN mdb_gruppetilgang gt ON t1.gruppe_id=gt.gruppe_id\n WHERE medlem_id IN (\n SELECT medlem_id FROM mdb_personer\n )\n;\n\nCREATE VIEW kortsys2.mdb_effektiv_tilgang AS\n\tSELECT\n\t medlem_id,\n\t rita_navn,\n\t MIN(\"start\") AS \"start\",\n\t MAX(stopp) AS stopp\n\tFROM kortsys2.mdb_tilgang\n\tGROUP BY medlem_id,rita_navn\n\tHAVING MAX(stopp) >= current_date;\n\nCREATE TABLE kortsys2.rita_tilgang (\n\tmedlem_id integer NOT NULL REFERENCES kortsys2.rita_personer,\n\trita_navn varchar NOT NULL,\n\t\"start\" date NOT NULL,\n\tstopp date NOT NULL,\n\n\tPRIMARY KEY ( medlem_id, rita_navn )\n);\n\nCREATE VIEW kortsys2.tilganger_skal_slettes AS\n\tSELECT * FROM kortsys2.rita_tilgang\n\t WHERE medlem_id NOT IN (\n\t SELECT medlem_id FROM kortsys2.personer_nylig_slettet\n\t )\n\t AND (medlem_id,rita_navn,kortsys2.effektiv_dato(\"start\"),stopp) NOT IN (\n\t SELECT medlem_id,rita_navn,kortsys2.effektiv_dato(\"start\"),stopp FROM kortsys2.mdb_effektiv_tilgang\n\t );\n\nCREATE VIEW kortsys2.tilganger_skal_gis AS\n\tSELECT medlem_id,rita_navn,\"start\",stopp\n\tFROM kortsys2.mdb_effektiv_tilgang\n\t WHERE medlem_id NOT IN (\n\t SELECT medlem_id FROM kortsys2.personer_nylig_slettet\n\t )\n\t AND (medlem_id,rita_navn,kortsys2.effektiv_dato(\"start\"),stopp) NOT IN (\n\t SELECT medlem_id,rita_navn,kortsys2.effektiv_dato(\"start\"),stopp FROM kortsys2.rita_tilgang\n\t );\n\n=== cut here ===\n\nNow for the simple query:\n\n mdb2_jodal=# explain select 
* from kortsys2.tilganger_skal_gis ;\n\nand the monster of a query plan (no EXPLAIN ANALYZE because, well, it never\nfinishes):\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan mdb_effektiv_tilgang (cost=19821.69..4920621.69 rows=10000 width=48)\n Filter: ((NOT (hashed subplan)) AND (NOT (subplan)))\n -> HashAggregate (cost=19238.48..20838.48 rows=40000 width=52)\n Filter: (max(CASE WHEN ((m_stopp < (stopp)::timestamp without time zone) OR (stopp IS NULL)) THEN m_stopp ELSE (stopp)::timestamp without time zone END) >= (('now'::text)::date)::timestamp without time zone)\n -> Merge Join (cost=12231.86..16091.27 rows=251777 width=52)\n Merge Cond: (\"outer\".gruppe_id = \"inner\".gruppe_id)\n -> Unique (cost=483.64..514.68 rows=4138 width=30)\n -> Sort (cost=483.64..493.99 rows=4138 width=30)\n Sort Key: gk.undergruppe_id, adgangsskjema.rita_navn\n -> Merge Join (cost=149.81..235.06 rows=4138 width=30)\n Merge Cond: (\"outer\".overgruppe_id = \"inner\".gruppe_id)\n -> Unique (cost=92.52..101.21 rows=1159 width=8)\n -> Sort (cost=92.52..95.41 rows=1159 width=8)\n Sort Key: overgruppe_id, undergruppe_id\n -> Append (cost=0.00..33.53 rows=1159 width=8)\n -> Function Scan on mdb_gruppekobling_transitiv_tillukning (cost=0.00..12.50 rows=1000 width=8)\n -> Seq Scan on gruppe (cost=0.00..9.44 rows=159 width=4)\n Filter: aktiv\n -> Sort (cost=57.29..59.08 rows=714 width=30)\n Sort Key: ak.gruppe_id\n -> Hash Join (cost=1.60..23.45 rows=714 width=30)\n Hash Cond: (\"outer\".adgangsskjema_id = \"inner\".adgangsskjema_id)\n -> Seq Scan on adgangsskjema_gruppe_kobling ak (cost=0.00..11.14 rows=714 width=8)\n -> Hash (cost=1.48..1.48 rows=48 width=30)\n -> Seq Scan on adgangsskjema (cost=0.00..1.48 rows=48 width=30)\n -> Sort (cost=11748.21..11778.64 rows=12169 width=24)\n Sort Key: t1.gruppe_id\n -> Hash Join (cost=8975.45..10922.49 rows=12169 width=24)\n Hash Cond: (\"outer\".medlem_id = \"inner\".medlem_id)\n -> HashAggregate (cost=5180.87..6093.55 rows=60845 width=24)\n -> Merge Join (cost=3496.19..4420.31 rows=60845 width=24)\n Merge Cond: (\"outer\".medlem_id = \"inner\".medlem_id)\n -> Sort (cost=2743.39..2749.11 rows=2290 width=12)\n Sort Key: ms.medlem_id\n -> Subquery Scan ms (cost=2483.70..2615.60 rows=2290 width=12)\n -> Append (cost=2483.70..2592.70 rows=2290 width=12)\n -> HashAggregate (cost=2483.70..2545.82 rows=2259 width=24)\n Filter: (max((\"start\" + varighet)) >= (('now'::text)::date)::timestamp without time zone)\n -> Hash Join (cost=662.54..2427.49 rows=7494 width=24)\n Hash Cond: (\"outer\".oblattype_id = \"inner\".oblattype_id)\n -> Hash Join (cost=661.50..2314.03 rows=7494 width=12)\n Hash Cond: (\"outer\".kortnummer = \"inner\".kortnummer)\n -> Seq Scan on oblat (cost=0.00..632.17 rows=37817 width=12)\n -> Hash (cost=614.81..614.81 rows=18673 width=8)\n -> Seq Scan on kort (cost=0.00..614.81 rows=18673 width=8)\n Filter: ((kortstatus_id = 1) AND (medlem_id IS NOT NULL))\n -> Hash (cost=1.04..1.04 rows=4 width=20)\n -> Seq Scan on oblattype (cost=0.00..1.04 rows=4 width=20)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.70 rows=31 width=4)\n -> Seq Scan on eksterne_kort (cost=0.00..1.39 rows=31 width=4)\n -> Sort (cost=752.80..766.08 rows=5314 width=16)\n Sort Key: v.medlem_id\n -> Append (cost=0.00..370.84 rows=5314 width=16)\n -> Seq Scan on verv (cost=0.00..316.31 
rows=5283 width=16)\n Filter: ((stopp IS NULL) OR (stopp >= ('now'::text)::date))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.01..1.70 rows=31 width=8)\n -> Result (cost=0.01..1.39 rows=31 width=8)\n One-Time Filter: (('2030-01-01'::date IS NULL) OR ('2030-01-01'::date >= ('now'::text)::date))\n -> Seq Scan on eksterne_kort (cost=0.00..1.31 rows=31 width=8)\n -> Hash (cost=3794.48..3794.48 rows=40 width=4)\n -> HashAggregate (cost=3794.08..3794.48 rows=40 width=4)\n -> Append (cost=3791.65..3793.58 rows=40 width=106)\n -> Subquery Scan t1 (cost=3791.65..3791.79 rows=9 width=106)\n -> Unique (cost=3791.65..3791.70 rows=9 width=60)\n -> Sort (cost=3791.65..3791.68 rows=9 width=60)\n Sort Key: medlem.medlem_id, public.kort.korttype_id\n -> Nested Loop (cost=2922.47..3791.51 rows=9 width=60)\n Join Filter: (\"outer\".medlem_id = \"inner\".medlem_id)\n -> Hash Join (cost=2918.46..3454.13 rows=42 width=60)\n Hash Cond: (\"outer\".medlem_id = \"inner\".medlem_id)\n -> Hash Join (cost=2574.06..3106.62 rows=538 width=56)\n Hash Cond: (\"outer\".medlem_id = \"inner\".medlem_id)\n -> Seq Scan on medlem (cost=0.00..500.01 rows=3623 width=52)\n Filter: (pinkode IS NOT NULL)\n -> Hash (cost=2568.41..2568.41 rows=2259 width=4)\n -> HashAggregate (cost=2483.70..2545.82 rows=2259 width=24)\n Filter: (max((\"start\" + varighet)) >= (('now'::text)::date)::timestamp without time zone)\n -> Hash Join (cost=662.54..2427.49 rows=7494 width=24)\n Hash Cond: (\"outer\".oblattype_id = \"inner\".oblattype_id)\n -> Hash Join (cost=661.50..2314.03 rows=7494 width=12)\n Hash Cond: (\"outer\".kortnummer = \"inner\".kortnummer)\n -> Seq Scan on oblat (cost=0.00..632.17 rows=37817 width=12)\n -> Hash (cost=614.81..614.81 rows=18673 width=8)\n -> Seq Scan on kort (cost=0.00..614.81 rows=18673 width=8)\n Filter: ((kortstatus_id = 1) AND (medlem_id IS NOT NULL))\n -> Hash (cost=1.04..1.04 rows=4 width=20)\n -> Seq Scan on oblattype (cost=0.00..1.04 rows=4 width=20)\n -> Hash (cost=341.42..341.42 rows=1191 width=4)\n -> HashAggregate (cost=329.51..341.42 rows=1191 width=4)\n -> Seq Scan on verv (cost=0.00..316.31 rows=5283 width=4)\n Filter: ((stopp IS NULL) OR (stopp >= ('now'::text)::date))\n -> Bitmap Heap Scan on kort (cost=4.01..8.02 rows=1 width=12)\n Recheck Cond: (((\"outer\".medlem_id = kort.medlem_id) AND (kort.korttype_id = 2)) OR ((\"outer\".medlem_id = kort.medlem_id) AND (kort.korttype_id = 3)))\n Filter: (kortstatus_id = 1)\n -> BitmapOr (cost=4.01..4.01 rows=1 width=0)\n -> Bitmap Index Scan on maksimalt_ett_aktivt_kort_per_medlem (cost=0.00..2.01 rows=1 width=0)\n Index Cond: ((\"outer\".medlem_id = kort.medlem_id) AND (kort.korttype_id = 2))\n -> Bitmap Index Scan on maksimalt_ett_aktivt_kort_per_medlem (cost=0.00..2.01 rows=1 width=0)\n Index Cond: ((\"outer\".medlem_id = kort.medlem_id) AND (kort.korttype_id = 3))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.70 rows=31 width=25)\n -> Seq Scan on eksterne_kort (cost=0.00..1.39 rows=31 width=25)\n SubPlan\n -> Materialize (cost=546.45..742.37 rows=19592 width=38)\n -> Seq Scan on rita_tilgang (cost=0.00..526.86 rows=19592 width=38)\n -> Seq Scan on personer_nylig_slettet (cost=0.00..31.40 rows=2140 width=4)\n(105 rows)\n\nThere's two oddities here at first sight:\n\n 1. Why does it materialize the sequential scan? What use would that have?\n 2. Why does it estimate four million disk page fetches in the top node? 
\n I can't find anything like that in the bottom nodes...\n\nAll the obvious things are taken care of: The tables are freshly loaded,\nVACUUM ANALYZE just ran, sort_mem/shared_buffers/effective_cache_size is the\nsame as on the 7.4 machine with the same amount of RAM (1GB).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n",
"msg_date": "Wed, 19 Oct 2005 19:45:44 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Materializing a sequential scan"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> I'm using PostgreSQL 8.1 beta 3 (packages from Debian experimental), and I\n> have a (rather complex) query that seems to take forever -- when the database\n> was just installed, it took about 1200ms (which is quite good, considering\n> that the 7.4 system this runs on today uses about the same time, but has\n> twice as much CPU power and runs sequential scans up to eight times as fast),\n> but now I can never even get it to complete. I've tried running it for half\n> an hour, but it still doesn't complete, so I'm a bit unsure what's going on.\n\nThat mdb_gruppekobling_transitiv_tillukning function looks awfully\ngrotty ... how many rows does it return, and how long does it take to\nrun by itself? How often does its temp table get vacuumed? A quick\nband-aid might be to use TRUNCATE instead of DELETE FROM to clean the\ntable ... but if I were you I'd try to rewrite the function entirely.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Oct 2005 00:37:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Materializing a sequential scan "
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> BEGIN\n> \tINSERT INTO kortsys2.mdb_gruppekobling_temp\n> \tSELECT overgruppe_id,undergruppe_id FROM gruppekobling gk\n> \t\tJOIN gruppe g1 ON gk.overgruppe_id=g1.gruppe_id\n> \t\tJOIN gruppe g2 ON gk.overgruppe_id=g2.gruppe_id\n> \t\tWHERE g1.aktiv AND g2.aktiv;\n> \tLOOP\n\nBTW, it sure looks like that second JOIN ought to be\n\t\tJOIN gruppe g2 ON gk.undergruppe_id=g2.gruppe_id\n\nAs-is, it's not doing anything for you ... certainly not enforcing\nthat the undergruppe_id be aktiv.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Oct 2005 00:58:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Materializing a sequential scan "
},
{
"msg_contents": "On Thu, Oct 20, 2005 at 12:37:25AM -0400, Tom Lane wrote:\n> That mdb_gruppekobling_transitiv_tillukning function looks awfully\n> grotty ... how many rows does it return, and how long does it take to\n> run by itself? How often does its temp table get vacuumed? A quick\n> band-aid might be to use TRUNCATE instead of DELETE FROM to clean the\n> table ... but if I were you I'd try to rewrite the function entirely.\n\nIt returns 752 rows, and the table is autovacuumed. If I run the queries\nmanually, they take ~15ms in all -- for some odd reason, the function in itself\nvaries between 40 and 500ms, though...\n\nI tried using TRUNCATE earlier, but if anything, it made the function slower\n(might just have been zero difference, though). I also had written the\nfunction differently (using a series of depth-first searches), but it was\nawfully slow even after a lot of tweaking, so it was not really worth it...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n",
"msg_date": "Thu, 20 Oct 2005 12:16:34 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Materializing a sequential scan"
},
{
"msg_contents": "On Thu, Oct 20, 2005 at 12:58:51AM -0400, Tom Lane wrote:\n> As-is, it's not doing anything for you ... certainly not enforcing\n> that the undergruppe_id be aktiv.\n\nOops, yes, that's a bug -- thanks for noticing. (It does not matter\nparticularily with the current data set, though.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n",
"msg_date": "Thu, 20 Oct 2005 12:18:47 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Materializing a sequential scan"
},
{
"msg_contents": "On Thu, Oct 20, 2005 at 12:37:25AM -0400, Tom Lane wrote:\n> That mdb_gruppekobling_transitiv_tillukning function looks awfully\n> grotty ... how many rows does it return, and how long does it take to\n> run by itself? How often does its temp table get vacuumed? A quick\n> band-aid might be to use TRUNCATE instead of DELETE FROM to clean the\n> table ... but if I were you I'd try to rewrite the function entirely.\n\nI've verified that it indeed does use 20ms more for every run without a\nVACUUM, but it shouldn't really matter -- and I guess it will go away once\nsomebody teaches plpgsql about not caching OIDs for CREATE TEMPORARY TABLE.\n:-)\n\nIn any case, I still can't understand why it picks the plan it does; what's\nup with the materialized seqscan, and where do the four million rows come\nfrom? 7.4 estimates ~52000 disk page fetches for the same query, so surely\nthere must be a better plan than four million :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n",
"msg_date": "Sun, 23 Oct 2005 12:23:33 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Materializing a sequential scan"
},
{
"msg_contents": "Hi,\n\nI finally found what I believe is the root cause for the hopeless\nperformance, after a lot of query rewriting:\n\n> Subquery Scan mdb_effektiv_tilgang (cost=19821.69..4920621.69 rows=10000 width=48)\n> Filter: ((NOT (hashed subplan)) AND (NOT (subplan)))\n\nThe problem here is simply that 8.1 refuses to hash this part of the plan:\n\n> -> Materialize (cost=546.45..742.37 rows=19592 width=38)\n> -> Seq Scan on rita_tilgang (cost=0.00..526.86 rows=19592 width=38)\n> -> Seq Scan on personer_nylig_slettet (cost=0.00..31.40 rows=2140 width=4)\n\nprobably because of the NOT IN with a function inside; I rewrote it to an\nEXCEPT (which is not equivalent, but good enough for my use), and it\ninstantly hashed the other subplan, and the query went speedily. Well, at\nleast in four seconds and not several hours...\n\nAny good ideas why 8.1 would refuse to do this, when 7.4 would do it? It does\nnot matter how high I set my work_mem; even at 2.000.000 it refused to hash\nthe subplan.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 26 Oct 2005 23:37:30 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Materializing a sequential scan"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> Any good ideas why 8.1 would refuse to do this, when 7.4 would do it? It does\n> not matter how high I set my work_mem; even at 2.000.000 it refused to hash\n> the subplan.\n\nAFAICS, subplan_is_hashable() is testing the same conditions in 7.4 and\nHEAD, so this isn't clear. Want to step through it and see where it's\ndeciding not to hash?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Oct 2005 19:06:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Materializing a sequential scan "
},
{
"msg_contents": "On Wed, Oct 26, 2005 at 07:06:15PM -0400, Tom Lane wrote:\n> AFAICS, subplan_is_hashable() is testing the same conditions in 7.4 and\n> HEAD, so this isn't clear. Want to step through it and see where it's\n> deciding not to hash?\n\nLine 639, ie.:\n\n635 if (!optup->oprcanhash || optup->oprcom != opid ||\n636 !func_strict(optup->oprcode))\n637 {\n638 ReleaseSysCache(tup);\n639 return false;\n640 }\n\ngdb gives\n\n(gdb) print *optup\n$2 = {oprname = {\n data = \"\\220Ü2\\b\\000\\000\\000\\000\\000\\000\\000\\000\\005\\230-\\b\", '\\0' <repeats 16 times>, \"X\\0305\\b\\020\\000\\000\\000\\000\\000\\000\\000ئ>\\b\\020\\000\\000\\000\\000\\000\\000\\000ð\\213>\\b\\020\\000\\000\", alignmentDummy = 137550992}, oprnamespace = 137542808, oprowner = 64, oprkind = 8 '\\b', oprcanhash = -112 '\\220', oprleft = 2, oprright = 0, \n oprresult = 0, oprcom = 0, oprnegate = 0, oprlsortop = 0, oprrsortop = 0, oprltcmpop = 0, oprgtcmpop = 0, oprcode = 0, oprrest = 0, oprjoin = 0}\n\n(gdb) print opid \n$3 = 2373\n\nSo it's complaining about the optup->oprcom != opid part. This is of course\non the third run through the loop, ie. it's complaining about the argument\nwhich is run through the function kortsys2.effektiv_dato(date)... For\nconvenience, I've listed it again here:\n\nCREATE FUNCTION kortsys2.effektiv_dato(date) RETURNS date\nAS\n 'SELECT CASE WHEN $1 < CURRENT_DATE THEN CURRENT_DATE ELSE $1 END'\nLANGUAGE SQL STABLE;\n\t\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 27 Oct 2005 01:22:19 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Materializing a sequential scan"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Wed, Oct 26, 2005 at 07:06:15PM -0400, Tom Lane wrote:\n>> AFAICS, subplan_is_hashable() is testing the same conditions in 7.4 and\n>> HEAD, so this isn't clear. Want to step through it and see where it's\n>> deciding not to hash?\n\n> (gdb) print opid \n> $3 = 2373\n\nI don't think you're getting a correct reading for optup, but OID\n2373 is timestamp = date:\n\nregression=# select * from pg_operator where oid = 2373;\n oprname | oprnamespace | oprowner | oprkind | oprcanhash | oprleft | oprright | oprresult | oprcom | oprnegate | oprlsortop | oprrsortop | oprltcmpop | oprgtcmpop | oprcode | oprrest | oprjoin\n---------+--------------+----------+---------+------------+---------+----------+-----------+--------+-----------+------------+------------+------------+------------+-------------------+---------+-----------\n = | 11 | 10 | b | f | 1114 | 1082 | 16 | 2347 | 2376 | 2062 | 1095 | 2371 | 2375 | timestamp_eq_date | eqsel | eqjoinsel\n(1 row)\n\nwhich is marked not hashable, quite correctly since the input datatypes\naren't even the same.\n\nMy recollection is that there was no such operator in 7.4; probably in\n7.4 the IN ended up using timestamp = timestamp which is hashable.\n\nWhat's not clear though is why you're getting that operator --- aren't\nboth sides of the IN of type \"date\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Oct 2005 19:53:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Materializing a sequential scan "
},
{
"msg_contents": "On Wed, Oct 26, 2005 at 07:53:02PM -0400, Tom Lane wrote:\n> I don't think you're getting a correct reading for optup, but OID\n> 2373 is timestamp = date:\n>\n> [...]\n> \n> My recollection is that there was no such operator in 7.4; probably in\n> 7.4 the IN ended up using timestamp = timestamp which is hashable.\n\nYou are quite correct, there is no such operator (whether by oid or by\ndescription) in my 7.4 installation.\n\n> What's not clear though is why you're getting that operator --- aren't\n> both sides of the IN of type \"date\"?\n\nAha!\n\nFigured out the \"start\" column wasn't the problem after all. The problem was\nthe \"stopp\" column, which was timestamp on one side and date on the other...\n\nSo, it can be fixed for this instance, but this feels a bit like the pre-8.0\njoins on differing data types -- is there any way to fix it? :-)\n\n/* QSteinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 27 Oct 2005 02:13:48 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Materializing a sequential scan"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> Aha!\n\n> Figured out the \"start\" column wasn't the problem after all. The problem was\n> the \"stopp\" column, which was timestamp on one side and date on the other...\n\nAh-hah.\n\n> So, it can be fixed for this instance, but this feels a bit like the pre-8.0\n> joins on differing data types -- is there any way to fix it? :-)\n\nI have some ideas in the back of my head about supporting\ncross-data-type hashing. Essentially this would require that the hash\nfunctions for two types be compatible in that they generate the same\nhash value for two values that would be considered equal. (For\ninstance, the integer hash functions already have the property that\n42::int2, 42::int4, and 42::int8 will all generate the same hash code.\nThe date and timestamp hash functions don't have such a property ATM,\nbut probably could be made to.) For types that share a hash coding\nconvention, cross-type equality functions could be marked hashable.\nThis is all pretty handwavy at the moment though, and I don't know\nhow soon it will get done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Oct 2005 20:51:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Materializing a sequential scan "
},
{
"msg_contents": "On Wed, Oct 26, 2005 at 08:51:03PM -0400, Tom Lane wrote:\n> I have some ideas in the back of my head about supporting\n> cross-data-type hashing. Essentially this would require that the hash\n> functions for two types be compatible in that they generate the same\n> hash value for two values that would be considered equal.\n\nOK, another entry for the TODO then.\n\nAnyhow, my query is now on about the same performance level with 8.1 as it\nwas with 7.4 (or rather, a bit faster), so it's no longer a 8.1 blocker for\nus. Thanks. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 27 Oct 2005 02:57:15 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Materializing a sequential scan"
}
] |
[
{
"msg_contents": "I guess, You should check, if a blob field and large object access is suitable for you - no escaping etc, just raw binary large objects.\n\nAFAIK, PQExecParams is not the right solution for You. Refer the \"Large object\" section:\n\n\"28.3.5. Writing Data to a Large Object\nThe function\nint lo_write(PGconn *conn, int fd, const char *buf, size_t len);writes len bytes from buf to large object descriptor fd. The fd argument must have been returned by a previous lo_open. The number of bytes actually written is returned. In the event of an error, the return value is negative.\"\n\nregards,\nNarcus\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected]\n[mailto:[email protected]]Im Auftrag von Michael\nFuhr\nGesendet: Dienstag, 18. Oktober 2005 22:47\nAn: Rodrigo Madera\nCc: [email protected]\nBetreff: Re: [PERFORM] Inefficient escape codes.\n\n\n[Please copy the mailing list on replies so others can participate\nin and learn from the discussion.]\n\nOn Tue, Oct 18, 2005 at 07:09:08PM +0000, Rodrigo Madera wrote:\n> > What language and API are you using?\n> \n> I'm using libpqxx. A nice STL-style library for C++ (I am 101% C++).\n\nI've only dabbled with libpqxx; I don't know if or how you can make\nit send data in binary instead of text. See the documentation or\nask in a mailing list like libpqxx-general or pgsql-interfaces.\n\n> > Binary transfer sends data in binary, not by automatically converting\n> > to and from text.\n> \n> Uh, I'm sorry I didn't get that... If I send: insert into foo\n> values('\\\\001\\\\002') will libpq send 0x01, 0x02 or \"\\\\\\\\001\\\\\\\\002\"??\n\nIf you do it that way libpq will send the string as text with escape\nsequences; you can use a sniffer like tcpdump or ethereal to see this\nfor yourself. To send the data in binary you'd call PQexecParams()\nwith a query like \"INSERT INTO foo VALUES ($1)\". The $1 is a\nplaceholder; the other arguments to PQexecParams() provide the data\nitself, the data type and length, and specify whether the data is in\ntext format or binary. See the libpq documentation for details.\n\n-- \nMichael Fuhr\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n",
"msg_date": "Thu, 20 Oct 2005 09:15:17 +0200",
"msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficient escape codes."
},
{
"msg_contents": "Hi!\n\nI'm experiencing a very slow deletion of records. Which I thin is not right.\nI have a Dual Xeon Server with 6gig Memory.\nI am only deleting about 22,000 records but it took me more than 1 hour to\nfinish this.\n\nWhat could possibly I do so that I can make this fast?\n\nHere is the code inside my function:\n\n\tFOR temp_rec IN SELECT * FROM item_qc_doer LOOP\n\t\tDELETE FROM qc_session WHERE item_id = temp_rec.item_id;\n\t\tDELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;\n\tEND LOOP;\n\nItem_qc_oder table contains 22,000 records.\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n",
"msg_date": "Thu, 20 Oct 2005 08:43:34 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Deleting Records"
},
{
"msg_contents": "Christian,\n\nDo you have foreign keys pointing to your table with ON CASCADE... ?\nCause in that case you're not only deleting your 22000 records, but the\nwhole tree of cascades. And if you don't have an index on one of those\nforeign keys, then you might have a sequential scan of the child table\non each deleted row... I would check the foreign keys.\n\nHTH,\nCsaba.\n\n\nOn Thu, 2005-10-20 at 10:43, Christian Paul B. Cosinas wrote:\n> Hi!\n> \n> I'm experiencing a very slow deletion of records. Which I thin is not right.\n> I have a Dual Xeon Server with 6gig Memory.\n> I am only deleting about 22,000 records but it took me more than 1 hour to\n> finish this.\n> \n> What could possibly I do so that I can make this fast?\n> \n> Here is the code inside my function:\n> \n> \tFOR temp_rec IN SELECT * FROM item_qc_doer LOOP\n> \t\tDELETE FROM qc_session WHERE item_id = temp_rec.item_id;\n> \t\tDELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;\n> \tEND LOOP;\n> \n> Item_qc_oder table contains 22,000 records.\n> \n> \n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n",
"msg_date": "Thu, 20 Oct 2005 10:49:04 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Records"
},
{
"msg_contents": "Hi,\n\n> What could possibly I do so that I can make this fast?\n> \n> Here is the code inside my function:\n> \n> \tFOR temp_rec IN SELECT * FROM item_qc_doer LOOP\n> \t\tDELETE FROM qc_session WHERE item_id = temp_rec.item_id;\n> \t\tDELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;\n> \tEND LOOP;\n\nQhat about just using:\n\nDELETE FROM gc_session WHERE item_id IN\n\t(SELECT item_id FROM item_qc_doer)\nDELETE FROM item_qc_doer;\n\nIt doesn't make sense to run 2 x 22.000 separate delete statements \ninstead that only two...\n\nAnd... What about using a foreing key?\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n",
"msg_date": "Thu, 20 Oct 2005 10:52:29 +0200",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Records"
},
{
"msg_contents": "> What could possibly I do so that I can make this fast?\n> \n> Here is the code inside my function:\n> \n> \tFOR temp_rec IN SELECT * FROM item_qc_doer LOOP\n> \t\tDELETE FROM qc_session WHERE item_id = temp_rec.item_id;\n> \t\tDELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;\n> \tEND LOOP;\n> \n> Item_qc_oder table contains 22,000 records.\n\nI'd check to see if i have foreign keys on those tables and if the \ncolumns that refer to them are properly indexed. (For cascade delete or \neven just checking restrict)\n\nChris\n\n",
"msg_date": "Thu, 20 Oct 2005 16:57:31 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Records"
},
{
"msg_contents": "> Here is the code inside my function:\n> \n> \tFOR temp_rec IN SELECT * FROM item_qc_doer LOOP\n> \t\tDELETE FROM qc_session WHERE item_id = temp_rec.item_id;\n> \t\tDELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;\n> \tEND LOOP;\n> \n> Item_qc_oder table contains 22,000 records.\n\nAlso, chekc you have an index on both those item_id columns.\n\nAlso, why don't you just not use the loop and do this instead:\n\nDELETE FROM qc_session WHERE item_id IN (SELECT item_id FROM item_qc_doer);\nDELETE FROM item_qc_doer;\n\nChris\n\n",
"msg_date": "Thu, 20 Oct 2005 16:59:09 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Records"
},
{
"msg_contents": "> I guess, You should check, if a blob field and large object access is\n> suitable for you - no escaping etc, just raw binary large objects.\n>\n> AFAIK, PQExecParams is not the right solution for You. Refer the \"Large\n> object\" section:\n>\n> \"28.3.5. Writing Data to a Large Object\n> The function\n> int lo_write(PGconn *conn, int fd, const char *buf, size_t len);writes len\n> bytes from buf to large object descriptor fd. The fd argument must have been\n> returned by a previous lo_open. The number of bytes actually written is\n> returned. In the event of an error, the return value is negative.\"\n\n\nWell, I read that large objects are kept in only ONE table. No matter what,\nonly the LOID would be kept. I can't affor that since I hav lots of tables\n(using the image-album-app analogy, imagine that we have pictures from\nseveral cities, each one corresponding to a city, like Memphis_Photos,\nChicago_Photos, etc.).\n\nThis is one major drawback, isn't it?\n\nRodrigo\n\n\nI\nguess, You should check, if a blob field and large object access is\nsuitable for you - no escaping etc, just raw binary large objects.AFAIK, PQExecParams is not the right solution for You. Refer the \"Large object\" section:\"28.3.5. Writing Data to a Large Object\nThe functionint\nlo_write(PGconn *conn, int fd, const char *buf, size_t len);writes len\nbytes from buf to large object descriptor fd. The fd argument must have\nbeen returned by a previous lo_open. The number of bytes actually\nwritten is returned. In the event of an error, the return value is\nnegative.\"\nWell, I read that large objects are kept in only ONE table. No matter\nwhat, only the LOID would be kept. I can't affor that since I hav lots\nof tables (using the image-album-app analogy, imagine that we have\npictures from several cities, each one corresponding to a city, like\nMemphis_Photos, Chicago_Photos, etc.).\n\nThis is one major drawback, isn't it?\n\nRodrigo",
"msg_date": "Fri, 21 Oct 2005 16:59:30 +0000",
"msg_from": "Rodrigo Madera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient escape codes."
},
{
"msg_contents": "I guess, You should check, if a blob field and large object access is suitable for you - no escaping etc, just raw binary large objects.\n\t\n\tAFAIK, PQExecParams is not the right solution for You. Refer the \"Large object\" section:\n\t\n\t\"28.3.5. Writing Data to a Large Object \n\tThe function\n\tint lo_write(PGconn *conn, int fd, const char *buf, size_t len);writes len bytes from buf to large object descriptor fd. The fd argument must have been returned by a previous lo_open. The number of bytes actually written is returned. In the event of an error, the return value is negative.\"\n\n\nWell, I read that large objects are kept in only ONE table. No matter what, only the LOID would be kept. I can't affor that since I hav lots of tables (using the image-album-app analogy, imagine that we have pictures from several cities, each one corresponding to a city, like Memphis_Photos, Chicago_Photos, etc.).\n\nThis is one major drawback, isn't it?\n\nRodrigo\n\n\n\n\n\nI\nguess, You should check, if a blob field and large object access is\nsuitable for you - no escaping etc, just raw binary large objects.AFAIK, PQExecParams is not the right solution for You. Refer the \"Large object\" section:\"28.3.5. Writing Data to a Large Object\nThe functionint\nlo_write(PGconn *conn, int fd, const char *buf, size_t len);writes len\nbytes from buf to large object descriptor fd. The fd argument must have\nbeen returned by a previous lo_open. The number of bytes actually\nwritten is returned. In the event of an error, the return value is\nnegative.\"\nWell, I read that large objects are kept in only ONE table. No matter\nwhat, only the LOID would be kept. I can't affor that since I hav lots\nof tables (using the image-album-app analogy, imagine that we have\npictures from several cities, each one corresponding to a city, like\nMemphis_Photos, Chicago_Photos, etc.).\n\nThis is one major drawback, isn't it?\n\nRodrigo",
"msg_date": "Fri, 21 Oct 2005 14:15:58 -0300",
"msg_from": "\"Rodrigo Madera\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient escape codes."
}
] |
[
{
"msg_contents": "\nwhat about firing a \n\nDELETE FROM qc_session S \n WHERE EXISTS (SELECT * \n FROM item_qc_doer i\n WHERE i.item_id = s.item_id);\n\nand \n\nDELETE FROM item_qc_doer S \n WHERE EXISTS (SELECT * \n FROM item_qc_doer i\n WHERE i.item_id = s.item_id);\n\n\nthis might be faster.\n\nanother way to speed up deletes might be disabling foreign keys.\n\nalso a SET ENABLE_SEQSCAN=FALSE; can speed up queries (force use of indices for access)\n\n\ndo you have a EXPLAIN for us ? do you have a index on item_id on your tables ?\n\nquestions by questions ;-)\n\n\n\nmfg\n\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected]\n[mailto:[email protected]]Im Auftrag von Christian\nPaul B. Cosinas\nGesendet: Donnerstag, 20. Oktober 2005 10:44\nAn: [email protected]\nBetreff: [PERFORM] Deleting Records\n\n\nHi!\n\nI'm experiencing a very slow deletion of records. Which I thin is not right.\nI have a Dual Xeon Server with 6gig Memory.\nI am only deleting about 22,000 records but it took me more than 1 hour to\nfinish this.\n\nWhat could possibly I do so that I can make this fast?\n\nHere is the code inside my function:\n\n\tFOR temp_rec IN SELECT * FROM item_qc_doer LOOP\n\t\tDELETE FROM qc_session WHERE item_id = temp_rec.item_id;\n\t\tDELETE FROM item_qc_doer WHERE item_id = temp_rec.item_id;\n\tEND LOOP;\n\nItem_qc_oder table contains 22,000 records.\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n",
"msg_date": "Thu, 20 Oct 2005 10:50:50 +0200",
"msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deleting Records"
}
] |
[
{
"msg_contents": "Hi,\n\nis there an easy way to flush all cached query plans in pl/pgsql \nfunctions? I've long running sessions where data are changing and the \nplans become inaccurate after a while. I can imagine something like \nrecreating all pl/pgsql functions. I can recreate them from sql source \nfiles but I'd prefer recreating them inside the database without \naccessing files outside. I can think only of one way - reconstructing \nfunction source code from pg_proc and EXECUTEing it. But it's not the \ncleanest solution (there isn't saved the actual source code anywhere so \nthere could be problems with quoting etc.). Can you see any other \npossibility? How do you solve this problem? [And yes, I don't want to \nre-connect...]\n\nThanks,\n\nKuba\n\n\n\n",
"msg_date": "Thu, 20 Oct 2005 16:07:22 +0200",
"msg_from": "Kuba Ouhrabka <[email protected]>",
"msg_from_op": true,
"msg_subject": "cached plans in plpgsql"
}
] |
[
{
"msg_contents": "Kuba wrote:\n\n> is there an easy way to flush all cached query plans in pl/pgsql\n> functions? I've long running sessions where data are changing and the\n> plans become inaccurate after a while. I can imagine something like\n> recreating all pl/pgsql functions. I can recreate them from sql source\n> files but I'd prefer recreating them inside the database without\n> accessing files outside. I can think only of one way - reconstructing\n> function source code from pg_proc and EXECUTEing it. But it's not the\n> cleanest solution (there isn't saved the actual source code anywhere\nso\n> there could be problems with quoting etc.). Can you see any other\n> possibility? How do you solve this problem? [And yes, I don't want to\n> re-connect...]\n\nStart here:\nhttp://archives.postgresql.org/pgsql-hackers/2005-09/msg00690.php\n\nMerlin\n",
"msg_date": "Thu, 20 Oct 2005 11:03:58 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cached plans in plpgsql"
},
{
"msg_contents": "> [howto recreate plpgsql functions]\n> \n> Start here:\n> http://archives.postgresql.org/pgsql-hackers/2005-09/msg00690.php\n\nGreat, thanks!\n\nI slighltly modified the function - it was not working for overloaded \nfunctions (same name, different arguments) and for functions with named \narguments. Modified version attached for anyone interested - not perfect \nbut works for me...\n\nKuba",
"msg_date": "Thu, 20 Oct 2005 18:25:12 +0200",
"msg_from": "Kuba Ouhrabka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cached plans in plpgsql"
},
{
"msg_contents": "Kuba Ouhrabka <[email protected]> writes:\n> IF Var_datos.pronargs > 0 THEN\n> \tVar_args := '';\n> \tFOR i IN 0..Var_datos.pronargs-1 LOOP\n> \t\tSELECT typname::varchar INTO Var_nameArg FROM pg_type WHERE oid = Var_datos.proargtypes[i];\n\t\t\t\n> \t\tVar_args := Var_args|| COALESCE(Var_datos.proargnames[i+1], '') || ' ' || Var_nameArg||', ';\n> \tEND LOOP;\n\nThis will not work at all; it makes far too many incorrect assumptions,\nlike proargnames always being non-null and having subscripts that match\nproargtypes. (It'll mess things up completely for anything that has OUT\narguments, too.)\n\nIt's pretty much the hard way to form a function reference anyway ---\nyou can just cast the function OID to regprocedure, which aside from\navoiding a lot of subtle assumptions about the catalog contents,\nwill deal with schema naming issues, something the above likewise\nfails at.\n\nTo avoid having to reconstruct argument names/types, I'd suggest using\nan ALTER FUNCTION command instead of CREATE OR REPLACE FUNCTION, maybe\n\n\tDECLARE fullproname text := a_oid::regprocedure;\n\t...\n\tEXECUTE 'ALTER FUNCTION ' || fullproname || ' RENAME TO ' || Var_datos.proname;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Oct 2005 12:50:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cached plans in plpgsql "
},
{
"msg_contents": "Tom,\n\nmany thanks. Perfect advice as usual...\n\nCorrected version attached for the archives.\n\nKuba\n\nTom Lane napsal(a):\n> Kuba Ouhrabka <[email protected]> writes:\n> \n>> IF Var_datos.pronargs > 0 THEN\n>>\tVar_args := '';\n>>\tFOR i IN 0..Var_datos.pronargs-1 LOOP\n>>\t\tSELECT typname::varchar INTO Var_nameArg FROM pg_type WHERE oid = Var_datos.proargtypes[i];\n> \n> \t\t\t\n> \n>>\t\tVar_args := Var_args|| COALESCE(Var_datos.proargnames[i+1], '') || ' ' || Var_nameArg||', ';\n>>\tEND LOOP;\n> \n> \n> This will not work at all; it makes far too many incorrect assumptions,\n> like proargnames always being non-null and having subscripts that match\n> proargtypes. (It'll mess things up completely for anything that has OUT\n> arguments, too.)\n> \n> It's pretty much the hard way to form a function reference anyway ---\n> you can just cast the function OID to regprocedure, which aside from\n> avoiding a lot of subtle assumptions about the catalog contents,\n> will deal with schema naming issues, something the above likewise\n> fails at.\n> \n> To avoid having to reconstruct argument names/types, I'd suggest using\n> an ALTER FUNCTION command instead of CREATE OR REPLACE FUNCTION, maybe\n> \n> \tDECLARE fullproname text := a_oid::regprocedure;\n> \t...\n> \tEXECUTE 'ALTER FUNCTION ' || fullproname || ' RENAME TO ' || Var_datos.proname;\n> \n> \t\t\tregards, tom lane",
"msg_date": "Fri, 21 Oct 2005 09:33:17 +0200",
"msg_from": "Kuba Ouhrabka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cached plans in plpgsql"
}
] |
[
{
"msg_contents": "If I turn on stats_command_string, how much impact would it have on\nPostgreSQL server's performance during a period of massive data\nINSERTs? I know that the answer to the question I'm asking will\nlargely depend upon different factors so I would like to know in which\nsituations it would be negligible or would have a signifcant impact.\n\n",
"msg_date": "20 Oct 2005 13:33:07 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "impact of stats_command_string"
},
{
"msg_contents": "On Thu, Oct 20, 2005 at 01:33:07PM -0700, [email protected] wrote:\n> If I turn on stats_command_string, how much impact would it have on\n> PostgreSQL server's performance during a period of massive data\n> INSERTs?\n\nDo you really need to be doing \"massive data INSERTs\"? Can you use\nCOPY, which is much more efficient for bulk loads?\n\nhttp://www.postgresql.org/docs/8.0/interactive/populate.html\n\n-- \nMichael Fuhr\n",
"msg_date": "Tue, 25 Oct 2005 10:04:08 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: impact of stats_command_string"
}
] |
[
{
"msg_contents": "I was reading a comment in another posting and it started me thinking\nabout this. Let's say I startup an Oracle server. All my queries are a\nlittle bit (sometimes a lot bit) slow until it gets its \"normal\" things in\nmemory, then it's up to speed. The \"normal\" things would include some\nsmall lookup tables and the indexes for the most frequently used tables.\n\nLet's say I do the same thing in Postgres. I'm likely to have my very\nfastest performance for the first few queries until memory gets filled up.\n The only time Postgres seems to take advantage of cached data is when I\n repeat the same (or substantially the same) query. I don't know of any\n way to view what is actually cached at any point in time, but it seems\n like \"most recently used\" rather than \"most frequently used\". \n\nDoes this seem true?\n s\n",
"msg_date": "Fri, 21 Oct 2005 07:34:30 -0500",
"msg_from": "Martin Nickel <[email protected]>",
"msg_from_op": true,
"msg_subject": "What gets cached?"
},
{
"msg_contents": "On Fri, Oct 21, 2005 at 07:34:30AM -0500, Martin Nickel wrote:\n> Let's say I do the same thing in Postgres. I'm likely to have my very\n> fastest performance for the first few queries until memory gets filled up.\n> The only time Postgres seems to take advantage of cached data is when I\n> repeat the same (or substantially the same) query. I don't know of any\n> way to view what is actually cached at any point in time, but it seems\n> like \"most recently used\" rather than \"most frequently used\". \n\nWhat version are you using? There have been significant improvements to the\nbuffer manager in the last few versions. Most of the caching is done by your\nOS, though, so that would probably influence the results quite a bit.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n",
"msg_date": "Fri, 21 Oct 2005 14:40:38 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
},
{
"msg_contents": "Oracle uses LRU caching algorithm also, not LFU.\n\nAlex\n\nOn 10/21/05, Martin Nickel <[email protected]> wrote:\n>\n> I was reading a comment in another posting and it started me thinking\n> about this. Let's say I startup an Oracle server. All my queries are a\n> little bit (sometimes a lot bit) slow until it gets its \"normal\" things in\n> memory, then it's up to speed. The \"normal\" things would include some\n> small lookup tables and the indexes for the most frequently used tables.\n>\n> Let's say I do the same thing in Postgres. I'm likely to have my very\n> fastest performance for the first few queries until memory gets filled up.\n> The only time Postgres seems to take advantage of cached data is when I\n> repeat the same (or substantially the same) query. I don't know of any\n> way to view what is actually cached at any point in time, but it seems\n> like \"most recently used\" rather than \"most frequently used\".\n>\n> Does this seem true?\n> s\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\nOracle uses LRU caching algorithm also, not LFU.\n\nAlexOn 10/21/05, Martin Nickel <[email protected]> wrote:\nI was reading a comment in another posting and it started me thinkingabout this. Let's say I startup an Oracle server. All my queries are alittle bit (sometimes a lot bit) slow until it gets its \"normal\" things in\nmemory, then it's up to speed. The \"normal\" things would include somesmall lookup tables and the indexes for the most frequently used tables.Let's say I do the same thing in Postgres. I'm likely to have my very\nfastest performance for the first few queries until memory gets filled up. The only time Postgres seems to take advantage of cached data is when I repeat the same (or substantially the same) query. I don't know of any\n way to view what is actually cached at any point in time, but it seems like \"most recently used\" rather than \"most frequently used\".Does this seem true? s---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives? http://archives.postgresql.org",
"msg_date": "Fri, 21 Oct 2005 11:19:10 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
},
{
"msg_contents": "On Fri, Oct 21, 2005 at 07:34:30AM -0500, Martin Nickel wrote:\n> I don't know of any way to view what is actually cached at any point in time\n\nIn 8.1 (currently in beta) you can use contrib/pg_buffercache. Code\nfor older versions is available on PgFoundry:\n\nhttp://pgfoundry.org/projects/pgbuffercache/\n\nNote that pg_buffercache shows only pages in PostgreSQL's buffer\ncache; it doesn't show your operating system's cache.\n\n-- \nMichael Fuhr\n",
"msg_date": "Fri, 21 Oct 2005 09:59:43 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
},
{
"msg_contents": "On Fri, 2005-21-10 at 07:34 -0500, Martin Nickel wrote:\n> Let's say I do the same thing in Postgres. I'm likely to have my very\n> fastest performance for the first few queries until memory gets filled up.\n\nNo, you're not: if a query doesn't hit the cache (both the OS cache and\nthe Postgres userspace cache), it will run slower. If the caches are\nempty when Postgres starts up (which is true for the userspace cache and\nmight be true of the OS cache), the first queries that are run should be\nslower, not faster.\n\n> The only time Postgres seems to take advantage of cached data is when I\n> repeat the same (or substantially the same) query.\n\nCaching is done on a page-by-page basis -- the source text of the query\nitself is not relevant. If two different queries happen to hit a similar\nset of pages, they will probably both benefit from the same set of\ncached pages.\n\n> I don't know of any way to view what is actually cached at any point\n> in time, but it seems like \"most recently used\" rather than \"most\n> frequently used\".\n\nThe cache replacement policy in 7.4 and older releases is simple LRU.\nThe policy in 8.0 is ARC (essentially a version of LRU modified to try\nto retain hot pages more accurately). The policy in 8.1 is a clock-based\nalgorithm.\n\n-Neil\n\n\n",
"msg_date": "Fri, 21 Oct 2005 17:44:47 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
},
{
"msg_contents": "Just to play devils advocate here for as second, but if we have an algorithm\nthat is substational better than just plain old LRU, which is what I believe\nthe kernel is going to use to cache pages (I'm no kernel hacker), then why\ndon't we apply that and have a significantly larger page cache a la Oracle?\n\nAlex\n\nOn 10/21/05, Neil Conway <[email protected]> wrote:\n>\n> On Fri, 2005-21-10 at 07:34 -0500, Martin Nickel wrote:\n> > Let's say I do the same thing in Postgres. I'm likely to have my very\n> > fastest performance for the first few queries until memory gets filled\n> up.\n>\n> No, you're not: if a query doesn't hit the cache (both the OS cache and\n> the Postgres userspace cache), it will run slower. If the caches are\n> empty when Postgres starts up (which is true for the userspace cache and\n> might be true of the OS cache), the first queries that are run should be\n> slower, not faster.\n>\n> > The only time Postgres seems to take advantage of cached data is when I\n> > repeat the same (or substantially the same) query.\n>\n> Caching is done on a page-by-page basis -- the source text of the query\n> itself is not relevant. If two different queries happen to hit a similar\n> set of pages, they will probably both benefit from the same set of\n> cached pages.\n>\n> > I don't know of any way to view what is actually cached at any point\n> > in time, but it seems like \"most recently used\" rather than \"most\n> > frequently used\".\n>\n> The cache replacement policy in 7.4 and older releases is simple LRU.\n> The policy in 8.0 is ARC (essentially a version of LRU modified to try\n> to retain hot pages more accurately). The policy in 8.1 is a clock-based\n> algorithm.\n>\n> -Neil\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\nJust to play devils advocate here for as second, but if we have an\nalgorithm that is substational better than just plain old LRU, which is\nwhat I believe the kernel is going to use to cache pages (I'm no kernel\nhacker), then why don't we apply that and have a significantly larger\npage cache a la Oracle?\n\nAlexOn 10/21/05, Neil Conway <[email protected]> wrote:\nOn Fri, 2005-21-10 at 07:34 -0500, Martin Nickel wrote:> Let's say I do the same thing in Postgres. I'm likely to have my very> fastest performance for the first few queries until memory gets filled up.\nNo, you're not: if a query doesn't hit the cache (both the OS cache andthe Postgres userspace cache), it will run slower. If the caches areempty when Postgres starts up (which is true for the userspace cache and\nmight be true of the OS cache), the first queries that are run should beslower, not faster.> The only time Postgres seems to take advantage of cached data is when I> repeat the same (or substantially the same) query.\nCaching is done on a page-by-page basis -- the source text of the queryitself is not relevant. If two different queries happen to hit a similarset of pages, they will probably both benefit from the same set of\ncached pages.> I don't know of any way to view what is actually cached at any point> in time, but it seems like \"most recently used\" rather than \"most> frequently used\".\nThe cache replacement policy in 7.4 and older releases is simple LRU.The policy in 8.0 is ARC (essentially a version of LRU modified to tryto retain hot pages more accurately). 
The policy in 8.1 is a clock-based\nalgorithm.-Neil---------------------------(end of broadcast)---------------------------TIP 6: explain analyze is your friend",
"msg_date": "Mon, 24 Oct 2005 11:09:55 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
},
{
"msg_contents": "On Mon, Oct 24, 2005 at 11:09:55AM -0400, Alex Turner wrote:\n> Just to play devils advocate here for as second, but if we have an algorithm\n> that is substational better than just plain old LRU, which is what I believe\n> the kernel is going to use to cache pages (I'm no kernel hacker), then why\n> don't we apply that and have a significantly larger page cache a la Oracle?\n\nThere have (AFAIK) been reports of setting huge amounts of shared_buffers\n(close to the total amount of RAM) performing much better in 8.1 than in\nearlier versions, so this might actually be okay these days.\n\nI haven't heard of anybody reporting increase setting such values, though.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n",
"msg_date": "Mon, 24 Oct 2005 17:32:48 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
},
{
"msg_contents": "Thank each of you for your replies. I'm just beginning to understand the \nscope of my opportunities.\n\nSomeone (I apologize, I forgot who) recently posted this query:\n SELECT oid::regclass, reltuples, relpages\n FROM pg_class\n ORDER BY 3 DESC\n\nThough the application is a relatively low-volume TP system, it is \nstructured a lot like a data warehouse with one primary table that \neverything else hangs off. What the query above shows is that my largest \ntable, at 34 million rows, takes almost 1.4 million pages or 10+ Gb if my \nmath is good. The same table has 14 indexes, totaling another 12Gb. All \nthis is running on a box with 4Gb of memory.\n\nSo what I believe I see happening is that almost every query is clearing out \nmemory to load the particular index it needs. Hence my \"first queries are \nthe fastest\" observation at the beginning of this thread.\n\nThere are certainly design improvements to be done, but I've already started \nthe process of getting the memory increased on our production db server. We \nare btw running 8.1 beta 3.\n\n\"\"Steinar H. Gunderson\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Mon, Oct 24, 2005 at 11:09:55AM -0400, Alex Turner wrote:\n>> Just to play devils advocate here for as second, but if we have an \n>> algorithm\n>> that is substational better than just plain old LRU, which is what I \n>> believe\n>> the kernel is going to use to cache pages (I'm no kernel hacker), then \n>> why\n>> don't we apply that and have a significantly larger page cache a la \n>> Oracle?\n>\n> There have (AFAIK) been reports of setting huge amounts of shared_buffers\n> (close to the total amount of RAM) performing much better in 8.1 than in\n> earlier versions, so this might actually be okay these days.\n>\n> I haven't heard of anybody reporting increase setting such values, though.\n>\n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Thu, 27 Oct 2005 15:41:10 -0500",
"msg_from": "\"PostgreSQL\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
},
{
"msg_contents": "Did the patch that allows multiple seqscans to piggyback on each other\nmake it into 8.1? It might help in this situation.\n\nBTW, if a query requires loading more than a few percent of an index\nPostgreSQL will usually go with a sequential scan instead. You should\ncheck explain/explain analyze on your queries and see what's actually\nhappening. If you've got stats turned on you can also look at\npg_stat_user_indexes to get a better idea of what indexes are and aren't\nbeing used.\n\nOn Thu, Oct 27, 2005 at 03:41:10PM -0500, PostgreSQL wrote:\n> Thank each of you for your replies. I'm just beginning to understand the \n> scope of my opportunities.\n> \n> Someone (I apologize, I forgot who) recently posted this query:\n> SELECT oid::regclass, reltuples, relpages\n> FROM pg_class\n> ORDER BY 3 DESC\n> \n> Though the application is a relatively low-volume TP system, it is \n> structured a lot like a data warehouse with one primary table that \n> everything else hangs off. What the query above shows is that my largest \n> table, at 34 million rows, takes almost 1.4 million pages or 10+ Gb if my \n> math is good. The same table has 14 indexes, totaling another 12Gb. All \n> this is running on a box with 4Gb of memory.\n> \n> So what I believe I see happening is that almost every query is clearing out \n> memory to load the particular index it needs. Hence my \"first queries are \n> the fastest\" observation at the beginning of this thread.\n> \n> There are certainly design improvements to be done, but I've already started \n> the process of getting the memory increased on our production db server. We \n> are btw running 8.1 beta 3.\n> \n> \"\"Steinar H. Gunderson\"\" <[email protected]> wrote in message \n> news:[email protected]...\n> > On Mon, Oct 24, 2005 at 11:09:55AM -0400, Alex Turner wrote:\n> >> Just to play devils advocate here for as second, but if we have an \n> >> algorithm\n> >> that is substational better than just plain old LRU, which is what I \n> >> believe\n> >> the kernel is going to use to cache pages (I'm no kernel hacker), then \n> >> why\n> >> don't we apply that and have a significantly larger page cache a la \n> >> Oracle?\n> >\n> > There have (AFAIK) been reports of setting huge amounts of shared_buffers\n> > (close to the total amount of RAM) performing much better in 8.1 than in\n> > earlier versions, so this might actually be okay these days.\n> >\n> > I haven't heard of anybody reporting increase setting such values, though.\n> >\n> > /* Steinar */\n> > -- \n> > Homepage: http://www.sesse.net/\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Thu, 27 Oct 2005 17:48:56 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What gets cached?"
}
] |