threads | listlengths
---|
[
{
"msg_contents": "I wrote: \n> Another question:\n> How are you benchmarking your queries? Are you running them from\nwithin\n> psql? Do you plan to run these queries from within an application?\n> \n> psql introduces some overhead because it has to scan the result set to\n> determine the widths of the columns for formatting purposes. Try\n> returning a result set inside the libpq library if you know C and\ncompare\n> the times. Of course, if you are already using libpq, this moot. If\nyou\n> do know libpq, try setting up a loop that fetches the data in 10k\nblock in\n> a loop...I will wager that you can get this to run in under two\nminutes\n> (Q4).\n\nWhoop, didn't see the aggregate...sorry :)\n\nMerlin\n",
"msg_date": "Thu, 5 Aug 2004 09:26:29 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Tuning queries on large database"
}
] |
[
{
"msg_contents": "The box:\nLinux 2.4.24-ck1 \n8 Intel(R) Xeon(TM) MP CPU 2.80GHz\n4 gb RAM.\nPostgresql 7.4.2\n\nThe problem: \nShort in disk space. (waiting new hard)\n\nThe real problem:\nDevelopers usually write queries involving the creation of temporary tables. \n\nThe BF question:\nIs a good idea to link this tmp tables to another partition?\nIf so, how can I link this tmp tables to another partition?\nSuggestions?\n\nThanks in advance!\nGuido\n\n\n\n",
"msg_date": "Thu, 5 Aug 2004 11:33:20 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Temporary tables"
},
{
"msg_contents": "G u i d o B a r o s i o wrote:\n\n> The box:\n> Linux 2.4.24-ck1 \n> 8 Intel(R) Xeon(TM) MP CPU 2.80GHz\n> 4 gb RAM.\n> Postgresql 7.4.2\n> \n> The problem: \n> Short in disk space. (waiting new hard)\n> \n> The real problem:\n> Developers usually write queries involving the creation of temporary tables. \n\nI seen too this behavior, till I explained that this is a valid sql:\n\nselect T.* from ( select * from table t where a = 5 ) AS T join foo using ( bar );\n\n\nshow us a typical function that use temporary tables.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n",
"msg_date": "Thu, 05 Aug 2004 17:37:50 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nG u i d o B a r o s i o wrote:\n\n| The box:\n| Linux 2.4.24-ck1\n| 8 Intel(R) Xeon(TM) MP CPU 2.80GHz\n| 4 gb RAM.\n| Postgresql 7.4.2\n|\n| The problem:\n| Short in disk space. (waiting new hard)\n|\n| The real problem:\n| Developers usually write queries involving the creation of temporary tables.\n\nI seen too this behavior, till I explained that this is a valid sql:\n\nselect T.* from ( select * from table t where a = 5 ) AS T join foo using ( bar );\n\n\nshow us a typical function that use temporary tables.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD4DBQFBElSx7UpzwH2SGd4RAhnkAKDABtA1fZSCCF/WAP5TUBJnHdWHYACWLjjQ\nLgncGg4+b0lPCQbafXVG6w==\n=1f1i\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Thu, 05 Aug 2004 17:41:13 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables"
}
] |
[
{
"msg_contents": "I couldn't track down recent info in the archives, so I figured I'd ask here.\n\nDoes the order of columns still have an impact on table speed? Back in\nthe olden days, it used to be that fixed width columns (integer,\ntinyint, etc.) should be the first (\"left\") columns in the table and\nvariable width ones should be towards the end (\"right\"). This allowed\na database to line up the columns better on disk and give you a speed\nboost.\n\nSo, does Postgres still care about it? And, if so, how much? The posts\nI found were from 2 years ago, and indicated that there is a minor\nincrease, but not a lot. Incidentally, could anyone quantify that in\nany fashion?\n\nThanks,\n\n-Jim....\n",
"msg_date": "Thu, 5 Aug 2004 13:10:01 -0500",
"msg_from": "Jim Thomason <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance with column orders"
}
] |
[
{
"msg_contents": "in the docummentation about the planer it says:\n\n\"It first combines all possible ways of scanning and joining the relations \nthat appear in a query\"\n\nI would like to know if there's a time limit to do that or if it just scans \nALL the posibilities until it finishes..no matter the time it takes..\n\n\nthanks in advance.\n\n_________________________________________________________________\nMSN Amor: busca tu � naranja http://latam.msn.com/amor/\n\n",
"msg_date": "Fri, 06 Aug 2004 19:28:38 -0500",
"msg_from": "\"sandra ruiz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about Generating Possible Plans by the planner/optimizer"
},
{
"msg_contents": "On Fri, Aug 06, 2004 at 07:28:38PM -0500, sandra ruiz wrote:\n> in the docummentation about the planer it says:\n> \n> \"It first combines all possible ways of scanning and joining the relations \n> that appear in a query\"\n> \n> I would like to know if there's a time limit to do that or if it just scans \n> ALL the posibilities until it finishes..no matter the time it takes..\n\nDepends; if you join a lot of tables, it stops doing an exhaustive search and\ngoes for genetic optimization instead:\n\n http://www.postgresql.org/docs/7.4/static/geqo.html\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 7 Aug 2004 03:11:48 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about Generating Possible Plans by the planner/optimizer"
}
] |
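A minimal sketch of the parameters behind that cutover, assuming the 7.4-era GEQO settings described in the linked documentation; the value shown is illustrative only:

-- geqo controls whether the genetic optimizer is available at all;
-- geqo_threshold is the number of FROM items at which the planner
-- abandons exhaustive search in favor of GEQO.
SHOW geqo;
SHOW geqo_threshold;
SET geqo_threshold = 12;  -- illustrative: raise it to keep exhaustive planning for larger joins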
[
{
"msg_contents": "If you use tablespaces to put a high-update, non-critical table on a\nramdisk, will updates to that table will still cause the WAL files to\nsync?\n\nI'm looking for a way to turn off syncing completely for a table.\nTemporary tables do this, but they can only be accessed from a single\nbackend.\n\nMerlin\n",
"msg_date": "Mon, 9 Aug 2004 11:29:57 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "tablespaces and ramdisks"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> If you use tablespaces to put a high-update, non-critical table on a\n> ramdisk, will updates to that table will still cause the WAL files to\n> sync?\n\nSure. Postgres has no way of knowing that there's anything special\nabout such a tablespace.\n\n> I'm looking for a way to turn off syncing completely for a table.\n\nThere isn't one, and I'm not eager to invent one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Aug 2004 12:35:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablespaces and ramdisks "
}
] |
[
{
"msg_contents": "Having trouble with one table (see time to count records below!).\n\nFairly new to postgres so any help much appreciated.\n\nIt only contains 9,106 records - as you can see from:\n\n\nselect count(id) from project\n\ncount\n9106\n1 row(s)\nTotal runtime: 45,778.813 ms\n\n\nThere are only 3 fields:\n\nid\ninteger\nnextval('id'::text)\n\nprojectnumber\ntext\n\ndescription\ntext\n\n\nThere is one index:\n\nid_project_ukey\nCREATE UNIQUE INDEX id_project_ukey ON project USING btree (id)\n\n... the database is regularly vaccuumed.",
"msg_date": "Mon, 9 Aug 2004 17:54:00 +0100",
"msg_from": "Paul Langard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow select, insert, update"
},
{
"msg_contents": "Paul Langard <[email protected]> writes:\n> select count(id) from project\n> count\n> 9106\n> 1 row(s)\n> Total runtime: 45,778.813 ms\n\nYipes. The only explanation I can think of is tremendous table bloat.\nWhat do you get from \"vacuum verbose project\" --- in particular, how\nmany pages in the table?\n\n> ... the database is regularly vaccuumed.\n\nNot regularly enough, perhaps ... or else you need to increase the free\nspace map size parameters. In any case you'll probably need to do one\nround of \"vacuum full\" to get this table back within bounds.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Aug 2004 14:18:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow select, insert, update "
},
{
"msg_contents": "Paul,\n\nPaul Langard wrote:\n\n> Having trouble with one table (see time to count records below!).\n>\n> Fairly new to postgres so any help much appreciated.\n>\n> It only contains 9,106 records - as you can see from:\n>\n>\n> select count(id) from project\n>\n> *count\n> *9106\n> 1 row(s)\n> Total runtime: 45,778.813 ms\n\n<snip>\n\n> ... the database is regularly vaccuumed. \n\n\nHave you tried doing a VACUUM FULL, CLUSTER, or drop/restore on the \ntable? This sounds symptomatic of a table with a bunch of dead tuples \nnot in the FSM (free space map). Only tuples in the FSM are reclaimed by \na regular VACUUM. If your FSM parameters in postgresql.conf are not big \nenough for your ratio of UPDATE/DELETE operations to VACUUM frequency, \nyou will end up with dead tuples that will only be reclaimed by a VACUUM \nFULL.\n\nTo prevent this problem in the future, look at increasing your FSM size \nand possibly vacuuming more frequently or using pg_autovacuum.\n\nGood Luck,\n\nBill Montgomery\n",
"msg_date": "Tue, 10 Aug 2004 14:56:28 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow select, insert, update"
},
{
"msg_contents": "Paul Langard <[email protected]> writes:\n\n> Having trouble with one table (see time to count records below!).\n>\n> Fairly new to postgres so any help much appreciated.\n>\n> It only contains 9,106 records - as you can see from:\n>\n>\n> select count(id) from project\n>\n> count\n> 9106\n> 1 row(s)\n> Total runtime: 45,778.813 ms\n\n> ... the database is regularly vaccuumed.\n\nHmm. You might try a VACUUM FULL and a REINDEX on the table (you\ndon't say what version you are running--REINDEX is sometimes needed on\n7.3 and below).\n\nAlso, use EXPLAIN ANALYZE on your query and post the result--that's\nhelpful diagnostic information.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n\n",
"msg_date": "Tue, 10 Aug 2004 16:48:01 -0400",
"msg_from": "Doug McNaught <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow select, insert, update"
},
{
"msg_contents": "Does that mean reindex is not needed\nfor PG version 7.4?\n\nIn what kind situations under PG 7.4, \nreindex is worthwhile?\n\nThanks,\n \n\nHere is doc from 7.3:\nPostgreSQL is unable to reuse B-tree index pages in\ncertain cases. The problem is that if indexed rows are\ndeleted, those index pages can only be reused by rows\nwith similar values. For example, if indexed rows are\ndeleted and newly inserted/updated rows have much\nhigher values, the new rows can't use the index space\nmade available by the deleted rows. Instead, such new\nrows must be placed on new index pages. In such cases,\ndisk space used by the index will grow indefinitely,\neven if VACUUM is run frequently. \n\nAs a solution, you can use the REINDEX command\nperiodically to discard pages used by deleted rows.\nThere is also contrib/reindexdb which can reindex an\nentire database. \n\nThe counterpart of 7.4 is:\nIn some situations it is worthwhile to rebuild indexes\nperiodically with the REINDEX command. (There is also\ncontrib/reindexdb which can reindex an entire\ndatabase.) However, PostgreSQL 7.4 has substantially\nreduced the need for this activity compared to earlier\nreleases. \n\n\n--- Doug McNaught <[email protected]> wrote:\n\n> Paul Langard <[email protected]> writes:\n> \n> > Having trouble with one table (see time to count\n> records below!).\n> >\n> > Fairly new to postgres so any help much\n> appreciated.\n> >\n> > It only contains 9,106 records - as you can see\n> from:\n> >\n> >\n> > select count(id) from project\n> >\n> > count\n> > 9106\n> > 1 row(s)\n> > Total runtime: 45,778.813 ms\n> \n> > ... the database is regularly vaccuumed.\n> \n> Hmm. You might try a VACUUM FULL and a REINDEX on\n> the table (you\n> don't say what version you are running--REINDEX is\n> sometimes needed on\n> 7.3 and below).\n> \n> Also, use EXPLAIN ANALYZE on your query and post the\n> result--that's\n> helpful diagnostic information.\n> \n> -Doug\n> -- \n> Let us cross over the river, and rest under the\n> shade of the trees.\n> --T. J. Jackson, 1863\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nRead only the mail you want - Yahoo! Mail SpamGuard.\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Tue, 10 Aug 2004 14:44:00 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow select, insert, update"
}
] |
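A condensed sketch of the recovery steps suggested in this thread, using the table name from the original post; the FSM numbers are illustrative placeholders, not tuned values:

VACUUM VERBOSE project;   -- report how many pages the table really occupies
VACUUM FULL project;      -- reclaim dead tuples the free space map missed
REINDEX TABLE project;    -- mainly worthwhile on 7.3 and earlier
-- in postgresql.conf, if VACUUM VERBOSE shows the free space map overflowing:
-- max_fsm_pages = 50000
-- max_fsm_relations = 1000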
[
{
"msg_contents": "I'm thinking of upgrading an existing dedicated server and colocating my\nown server.\n\nThe server is used for prototype systems running Postgresql, php and\napache. The largest database is presently under 10GB and I haven't had\nany major performance problems. We expect to have to host larger data\nsets in the next year and anticipate one or two databases reaching the\n30GB mark in size. We write a lot of our webapps functionality using\npl/pgsql.\n\nThe present server is a 2GHz Pentium 4/512 KB cache with 2\nsoftware-raided ide disks (Maxtors) and 1GB of RAM.\n\n\nI have been offered the following 1U server which I can just about\nafford:\n\n1U server\nIntel Xeon 2.8GHz 512K cache 1\n512MB PC2100 DDR ECC Registered 2\n80Gb SATA HDD 4\n4 port SATA card, 3 ware 8506-4 1\n3 year next-day hardware warranty 1\n\nThere is an option for dual CPUs.\n\nI intend to install the system (Debian testing) on the first disk and\nrun the other 3 disks under RAID5 and ext3.\n\nI'm fairly ignorant about the issues relating to SATA vs SCSI and what\nthe best sort of RAM is for ensuring good database performance. I don't\nrequire anything spectacular, just good speedy general performance.\n\nI imagine dedicating around 25% of RAM to Shared Memory and 2-4% for\nSort memory.\n\nComments and advice gratefully received.\nRory\n\n-- \nRory Campbell-Lange \n<[email protected]>\n<www.campbell-lange.net>\n",
"msg_date": "Mon, 9 Aug 2004 17:56:57 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help specifying new machine"
},
{
"msg_contents": "Rory,\n\nRory Campbell-Lange wrote:\n\n>I'm thinking of upgrading an existing dedicated server and colocating my\n>own server.\n>\n>The server is used for prototype systems running Postgresql, php and\n>apache. The largest database is presently under 10GB and I haven't had\n>any major performance problems. We expect to have to host larger data\n>sets in the next year and anticipate one or two databases reaching the\n>30GB mark in size. We write a lot of our webapps functionality using\n>pl/pgsql.\n>\n>The present server is a 2GHz Pentium 4/512 KB cache with 2\n>software-raided ide disks (Maxtors) and 1GB of RAM.\n>\n>\n>I have been offered the following 1U server which I can just about\n>afford:\n>\n>1U server\n>Intel Xeon 2.8GHz 512K cache 1\n>512MB PC2100 DDR ECC Registered 2\n>80Gb SATA HDD 4\n>4 port SATA card, 3 ware 8506-4 1\n>3 year next-day hardware warranty 1\n>\n>There is an option for dual CPUs.\n>\n>I intend to install the system (Debian testing) on the first disk and\n>run the other 3 disks under RAID5 and ext3.\n> \n>\nIf you are going to spend your money anywhere, spend it on your disk \nsubsystem. More or less, depending on the application, the bottleneck in \nyour database performance will be the number of random IO operations per \nsecond (IOPS) your disk subsystem can execute. If you've got the money, \nget the largest (i.e. most spindles) RAID 10 (striped and mirrored) you \ncan buy. If your budget doesn't permit RAID 10, RAID 5 is probably your \nnext best bet.\n\n>I'm fairly ignorant about the issues relating to SATA vs SCSI and what\n>the best sort of RAM is for ensuring good database performance. I don't\n>require anything spectacular, just good speedy general performance.\n> \n>\nBe sure that if you go with SATA over SCSI, the disk firmware does not \nlie about fsync(). Most consumer grade IDE disks will report to the OS \nthat data is written to disk while it is still in the drive's cache. I \ndon't know much about SATA disks, but I suspect they behave the same \nway. The problem with this is that if there is data that PostgreSQL \nthinks is written safely to disk, but is really still in the drive \ncache, and you lose power at that instant, you can find yourself with an \ninconsistent set of data that cannot be recovered from. SCSI disks, \nAFAIK, will always be truthful about fsync(), and you will never end up \nwith data that you *thought* was written to disk, but gets lost on power \nfailure.\n\n>I imagine dedicating around 25% of RAM to Shared Memory and 2-4% for\n>Sort memory.\n> \n>\nThat is probably too much RAM to allocate to shm. Start with 10000 \nbuffers, benchmark your app, and slowly work up from there. NOTE: This \nadvice will likely change with the introduction of ARC (adaptive \ncaching) in 8.0; for now what will happen is that a large read of an \ninfrequently accessed index or table will blow away all your shared \nmemory buffers, and you'll end up with 25% of your memory filled with \nuseless data. Better to let the smarter filesystem cache handle the bulk \nof your caching needs.\n\n>Comments and advice gratefully received.\n>Rory\n>\n> \n>\nBest Luck,\n\nBill Montgomery\n",
"msg_date": "Mon, 09 Aug 2004 16:08:40 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
},
{
"msg_contents": "Rory Campbell-Lange wrote:\n\n> The present server is a 2GHz Pentium 4/512 KB cache with 2\n> software-raided ide disks (Maxtors) and 1GB of RAM.\n> \n> \n> I have been offered the following 1U server which I can just about\n> afford:\n> \n> 1U server\n> Intel Xeon 2.8GHz 512K cache 1\n> 512MB PC2100 DDR ECC Registered 2\n> 80Gb SATA HDD 4\n> 4 port SATA card, 3 ware 8506-4 1\n> 3 year next-day hardware warranty 1\n\nYou're not getting much of a bump with this server. The CPU is \nincrementally faster -- in the absolutely best case scenario where your \nqueries are 100% cpu-bound, that's about ~25%-30% faster.\n\nIf you could use that money instead to upgrade your current server, \nyou'd get a much bigger impact. Go for more memory and scsi (raid \ncontrollers w/ battery-backed cache).\n",
"msg_date": "Thu, 12 Aug 2004 15:12:43 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
},
{
"msg_contents": "William Yu wrote:\n\n> Rory Campbell-Lange wrote:\n> \n>> The present server is a 2GHz Pentium 4/512 KB cache with 2\n>> software-raided ide disks (Maxtors) and 1GB of RAM.\n>> \n>> \n>> I have been offered the following 1U server which I can just about\n>> afford:\n>> \n>> 1U server\n>> Intel Xeon 2.8GHz 512K cache 1\n>> 512MB PC2100 DDR ECC Registered 2\n>> 80Gb SATA HDD 4\n>> 4 port SATA card, 3 ware 8506-4 1\n>> 3 year next-day hardware warranty 1\n> \n> You're not getting much of a bump with this server. The CPU is\n> incrementally faster -- in the absolutely best case scenario where your\n> queries are 100% cpu-bound, that's about ~25%-30% faster.\n\nWhat about using Dual Athlon MP instead of a Xeon? Would be much less expensive,\nbut have higher performance (I think).\n\n> \n> If you could use that money instead to upgrade your current server,\n> you'd get a much bigger impact. Go for more memory and scsi (raid\n> controllers w/ battery-backed cache).\n\n",
"msg_date": "Fri, 13 Aug 2004 08:29:15 +0200",
"msg_from": "Raoul Buzziol <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
},
{
"msg_contents": ">>You're not getting much of a bump with this server. The CPU is\n>>incrementally faster -- in the absolutely best case scenario where your\n>>queries are 100% cpu-bound, that's about ~25%-30% faster.\n> \n> \n> What about using Dual Athlon MP instead of a Xeon? Would be much less expensive,\n> but have higher performance (I think).\n\nYou're not going to be able to get a Dual Athlon MP for the same price \nas a single Xeon. A few years back, this was the case because Xeon CPUs \n& MBs had a huge premium over Athlon. This is no longer true mainly \nbecause the number of people carrying Athlon MP motherboards has dropped \ndown drastically. Go to pricewatch.com and do a search for 760MPX -- you \nget a mere 8 entries. Not surprisingly because who would not want to \nspend a few pennies more for a much superior Dual Opteron? The few \nsellers you see now just keep stuff in inventory for people who need \nreplacement parts for emergencies and are willing to pay up the nose \nbecause it is an emergency.\n",
"msg_date": "Sun, 15 Aug 2004 13:17:13 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
},
{
"msg_contents": "> You're not going to be able to get a Dual Athlon MP for the same price\n> as a single Xeon. A few years back, this was the case because Xeon CPUs\n> & MBs had a huge premium over Athlon. This is no longer true mainly\n> because the number of people carrying Athlon MP motherboards has dropped\n> down drastically. Go to pricewatch.com and do a search for 760MPX -- you\n> get a mere 8 entries. Not surprisingly because who would not want to\n> spend a few pennies more for a much superior Dual Opteron? The few\n> sellers you see now just keep stuff in inventory for people who need\n> replacement parts for emergencies and are willing to pay up the nose\n> because it is an emergency.\n\nI saw pricewatch.com and you're right. \n\nI looked for some benchmarks, and I would know if I'm right on:\n- Dual Opteron 246 have aproximately the same performance of a Dual Xeon 3Gh\n(Opteron a little better)\n- Opteron system equal or cheeper than Xeon system.\n\nAs I'm not a hardware expert I would know if my impressions were right.\n\nThanx, Raoul\n",
"msg_date": "Wed, 18 Aug 2004 17:18:23 +0200",
"msg_from": "Raoul Buzziol <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
},
{
"msg_contents": "On Wed, 2004-08-18 at 11:18, Raoul Buzziol wrote:\n> > You're not going to be able to get a Dual Athlon MP for the same price\n> > as a single Xeon. A few years back, this was the case because Xeon CPUs\n> > & MBs had a huge premium over Athlon. This is no longer true mainly\n> > because the number of people carrying Athlon MP motherboards has dropped\n> > down drastically. Go to pricewatch.com and do a search for 760MPX -- you\n> > get a mere 8 entries. Not surprisingly because who would not want to\n> > spend a few pennies more for a much superior Dual Opteron? The few\n> > sellers you see now just keep stuff in inventory for people who need\n> > replacement parts for emergencies and are willing to pay up the nose\n> > because it is an emergency.\n> \n> I saw pricewatch.com and you're right. \n> \n> I looked for some benchmarks, and I would know if I'm right on:\n> - Dual Opteron 246 have aproximately the same performance of a Dual Xeon 3Gh\n> (Opteron a little better)\n> - Opteron system equal or cheeper than Xeon system.\n\nFor PostgreSQL, Opteron might be a touch worse than Xeon for single\nprocessor, little better for Dual, and a whole heck of a bunch better\nfor Quads -- but this depends on your specific work load as memory\nbound, cpu bound, lots of float math, etc. work loads will perform\ndifferently.\n\nIn general, an Opteron is a better bet simply because you can shove more\nram onto it (without workarounds), and you can't beat an extra 8GB ram\non an IO bound database (consider your datasize in 1 year).\n\n\n",
"msg_date": "Wed, 18 Aug 2004 12:12:23 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
},
{
"msg_contents": "Raoul Buzziol wrote:\n> I looked for some benchmarks, and I would know if I'm right on:\n> - Dual Opteron 246 have aproximately the same performance of a Dual Xeon 3Gh\n> (Opteron a little better)\n> - Opteron system equal or cheeper than Xeon system.\n\nIn terms of general database performance, top of the line dual opteron \nwill perform roughly the same as top of the line dual xeon. Assuming you \njust run in 32-bit mode. Throw in 64-bit mode, NUMA, etc, all bets are off.\n\nIn terms of Postgres database performance, Opteron *may* be the better \nCPU for this app but there's not enough data points yet. Here's a recent \nreview at Anandtech showing Opteron 150 (2.4ghz) versus 64-bit Prescott \n3.6hgz with some very simple MySQL and Postgres benchmarks:\n\nhttp://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2\n\nWhat is a slight lead in MySQL becomes a blowout in Postgres. Of course, \nthis is just the 1MB cache model. I'm sure if you went with the 2MB or \n4MB models, the Xeons would come up much closer.\n\nThe really good sign in the above numbers though is that somebody \nfinally included Postgres in their benchmark suite. :) We may be seeing \nmore and more data points to evaluate hardware for Postgres in the near \nfuture.\n",
"msg_date": "Wed, 18 Aug 2004 21:26:45 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
},
{
"msg_contents": "William Yu <[email protected]> writes:\n> In terms of Postgres database performance, Opteron *may* be the better \n> CPU for this app but there's not enough data points yet. Here's a recent \n> review at Anandtech showing Opteron 150 (2.4ghz) versus 64-bit Prescott \n> 3.6hgz with some very simple MySQL and Postgres benchmarks:\n\n> http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2\n\n> What is a slight lead in MySQL becomes a blowout in Postgres.\n\nThis is really interesting. We had previously seen some evidence that\nthe Xeon sucks at running Postgres, but I thought that the issues only\nmaterialized with multiple CPUs (because what we were concerned about\nwas the cost of transferring cache lines across CPUs). AFAICS this test\nis using single CPUs. Maybe there is more going on than we realized.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Aug 2004 01:49:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine "
},
{
"msg_contents": "Tom,\n\n> This is really interesting. We had previously seen some evidence that\n> the Xeon sucks at running Postgres, but I thought that the issues only\n> materialized with multiple CPUs (because what we were concerned about\n> was the cost of transferring cache lines across CPUs). AFAICS this test\n> is using single CPUs. Maybe there is more going on than we realized.\n\nAside from the fact that the Xeon architecture is a kludge? \n\nIntel really screwed up the whole Xeon line through some bad architectural \ndecisions, and instead of starting over from scratch, \"patched\" the problem. \nThe result has been sub-optimal Xeon performance for the last two years.\n\nThis is why AMD is still alive. Better efficiency, lower cost.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 19 Aug 2004 09:18:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help specifying new machine"
}
] |
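A rough postgresql.conf starting point sketched from the shared-buffer advice above; all values are illustrative and assume a 7.4-era server with 1-2 GB of RAM:

shared_buffers = 10000          # about 80 MB; benchmark and work up from here
sort_mem = 4096                 # per-sort working memory, in kB
effective_cache_size = 100000   # roughly 800 MB; a planner hint about the OS cache, not an allocation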
[
{
"msg_contents": "Hi all,\n\nWe have table q_20040805 and a delete trigger on\nit. The delete trigger is: \nupdate table q_summary set count=count-1...\n\nWhen we delete from q_20040805, we also insert into\nrelated info q_process within the same \ntransaction. There is a PK on q_process, but no\ntrigger on it. No FK on either of the 3 tables.\n\nHere is info from pg_lock:\n relname | pid | mode |\ngranted | current_query \n \n-------------------+-------+------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n q_process | 14643 | RowExclusiveLock | t |\nDELETE FROM q_20040805 WHERE domain_id='20237906' AND\nmodule='spam'\n q_summary | 14643 | RowExclusiveLock | t |\nDELETE FROM q_20040805 WHERE domain_id='20237906' AND\nmodule='spam'\n q_20040805 | 14643 | RowExclusiveLock | t |\nDELETE FROM q_20040805 WHERE domain_id='20237906' AND\nmodule='spam'\n q_process | 18951 | RowExclusiveLock | t |\nINSERT INTO q_process (...) SELECT ... FROM q_20040805\n WHERE domain_id='20237906' AND module='spam'\n\n From ps command, it is easy to see another\ninsert is waiting:\n\nps -elfww|grep 18951\n040 S postgres 18951 870 0 69 0 - 81274\nsemtim 16:34 ? 00:00:00 postgres: postgres mxl\nxxx.xxx.x.xxx:49986 INSERT waiting\nps -elfww|grep 14643\n040 S postgres 14643 870 79 70 0 - 81816\nsemtim 15:56 ? 00:44:02 postgres: postgres mxl\nxxx.xxx.x.xxx:47236 DELETE\n\nI do not understand why process 18951 (insert)\nis waiting (subqery SELECT of INSERT INTO \nis not a problem as I know)\n\nPG version is: 7.3.2\n\nCan someone explain?\n\nThanks,\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Mon, 9 Aug 2004 17:05:59 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": true,
"msg_subject": "insert waits for delete with trigger"
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> Here is info from pg_lock:\n\nAll those locks are already granted, so they are not much help in\nunderstanding what PID 18951 is waiting for. What row does it have\nwith granted = 'f' ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Aug 2004 20:31:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert waits for delete with trigger "
},
{
"msg_contents": "Hi Tom, \n\nNo row has granted='f'.\nThe result shown in the original email is from:\nselect c.relname, l.pid, l.mode, l.granted,\ncurrent_query\nfrom pg_locks l, pg_class c, pg_stat_activity a\nwhere relation is not null\n AND l.relation = c.oid\n AND l.pid = a.procpid\n AND l.mode != 'AccessShareLock'\norder by l.pid;\n\nAfter the above result, I went to OS\nto get ps status.\n\nDid I miss something?\n\nSince the lock was granted to pid (18951), that\ncause me confuse why OS ps shows it is waiting.\n\nAlso, I ntoiced that insert will be finished \nalmost immediately after delete is done.\n\nThanks,\n\n\n--- Tom Lane <[email protected]> wrote:\n\n> Litao Wu <[email protected]> writes:\n> > Here is info from pg_lock:\n> \n> All those locks are already granted, so they are not\n> much help in\n> understanding what PID 18951 is waiting for. What\n> row does it have\n> with granted = 'f' ?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map\n> settings\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail is new and improved - Check it out!\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Mon, 9 Aug 2004 21:35:55 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert waits for delete with trigger "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> Did I miss something?\n\nYour join omits all transaction locks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Aug 2004 00:41:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert waits for delete with trigger "
},
{
"msg_contents": "Thank you.\n\nHow about:\n\nselect c.relname, l.pid, l.mode, l.granted,\na.current_query\nfrom pg_locks l, pg_class c, pg_stat_activity a\nwhere\n l.relation = c.oid\n AND l.pid = a.procpid\norder by l.granted, l.pid;\n\n\n relname | pid | \nmode | granted |\n \n current_query\n\n-----------------------------------+-------+------------------+---------+-----------------------------------------------\n------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------------------\n q_20040810 | 488 | AccessShareLock \n| t | <IDLE>\n q_20040810 | 488 | RowExclusiveLock\n| t | <IDLE>\n q_process | 3729 | AccessShareLock \n| t | DELETE FROM q_20040805 WHERE domain_id\n='2005761066' AND module='spam'\n q_process | 3729 | RowExclusiveLock\n| t | DELETE FROM q_20040805 WHERE domain_id\n='2005761066' AND module='spam'\n q_20040805 | 3729 | AccessShareLock \n| t | DELETE FROM q_20040805 WHERE domain_id\n='2005761066' AND module='spam'\n q_20040805 | 3729 | RowExclusiveLock\n| t | DELETE FROM q_20040805 WHERE domain_id\n='2005761066' AND module='spam'\n q_summary | 3729 | AccessShareLock \n| t | DELETE FROM q_20040805 WHERE domain_id\n='2005761066' AND module='spam'\n q_summary | 3729 | RowExclusiveLock\n| t | DELETE FROM q_20040805 WHERE domain_id\n='2005761066' AND module='spam'\n q_summary_did_dir_idx | 3729 | AccessShareLock \n| t | DELETE FROM q_20040805 WHERE domain_id\n='2005761066' AND module='spam'\n pg_shadow | 7660 |\nAccessShareLock | t | <IDLE>\n pg_locks | 7660 |\nAccessShareLock | t | <IDLE>\n pg_database | 7660 |\nAccessShareLock | t | <IDLE>\n pg_class | 7660 |\nAccessShareLock | t | <IDLE>\n pg_stat_activity | 7660 |\nAccessShareLock | t | <IDLE>\n pg_class_oid_index | 7660 |\nAccessShareLock | t | <IDLE>\n q_process | 8593 | AccessShareLock \n| t | DELETE FROM q_20040810 WHERE domain_id\n='2002300623' AND module='spam'\n q_process | 8593 | RowExclusiveLock\n| t | DELETE FROM q_20040810 WHERE domain_id\n='2002300623' AND module='spam'\n q_20040810 | 8593 | AccessShareLock \n| t | DELETE FROM q_20040810 WHERE domain_id\n='2002300623' AND module='spam'\n q_20040810 | 8593 | RowExclusiveLock\n| t | DELETE FROM q_20040810 WHERE domain_id\n='2002300623' AND module='spam'\n q_summary | 8593 | AccessShareLock \n| t | DELETE FROM q_20040810 WHERE domain_id\n='2002300623' AND module='spam'\n q_summary | 8593 | RowExclusiveLock\n| t | DELETE FROM q_20040810 WHERE domain_id\n='2002300623' AND module='spam'\n q_summary_did_dir_idx | 8593 | AccessShareLock \n| t | DELETE FROM q_20040810 WHERE domain_id\n='2002300623' AND module='spam'\n q_process | 19027 | AccessShareLock \n| t | INSERT INTO q_process (...) SELECT ...\nFROM q_20040805 WHERE domain_id='2005761066' AND\nmodule='spam'\n q_process | 19027 | RowExclusiveLock\n| t | INSERT INTO q_process (...) SELECT ...\nFROM q_20040805 WHERE domain_id='2005761066' AND\nmodule='spam'\n q_20040805 | 19027 | AccessShareLock \n| t | INSERT INTO q_process (...) SELECT ...\nFROM q_20040805 WHERE domain_id='2005761066' AND\nmodule='spam'\n q_did_mod_dir_20040805_idx | 19027 | AccessShareLock \n| t | INSERT INTO q_process (...) SELECT ...\nFROM q_20040805 WHERE domain_id='2005761066' AND\nmodule='spam'\n(26 rows)\n\n\nps -elfww|grep 19027\n040 S postgres 19027 870 1 69 0 - 81290\nsemtim 07:31 ? 
00:00:51 postgres: postgres mxl\n192.168.0.177:38266 INSERT waiting\n\n--- Tom Lane <[email protected]> wrote:\n\n> Litao Wu <[email protected]> writes:\n> > Did I miss something?\n> \n> Your join omits all transaction locks.\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - 50x more storage than other providers!\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Tue, 10 Aug 2004 07:48:38 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert waits for delete with trigger "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> How about:\n\n> select c.relname, l.pid, l.mode, l.granted,\n> a.current_query\n> from pg_locks l, pg_class c, pg_stat_activity a\n> where\n> l.relation = c.oid\n> AND l.pid = a.procpid\n> order by l.granted, l.pid;\n\nYou can't join to pg_class without eliminating the transaction lock rows\n(because they have NULLs in the relation field).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Aug 2004 12:53:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert waits for delete with trigger "
}
] |
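A sketch of the lock query with the fix Tom Lane describes, using outer joins so transaction-lock rows (where relation is NULL) are not discarded; column names assume the 7.3/7.4 pg_locks and pg_stat_activity layout:

SELECT c.relname, l.transaction, l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
LEFT JOIN pg_class c ON l.relation = c.oid
LEFT JOIN pg_stat_activity a ON l.pid = a.procpid
ORDER BY l.granted, l.pid;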
[
{
"msg_contents": "Does the order of columns of varying size have any effect on \nSELECT/INSERT/UPDATE/and/or/DELETE performance? Take the example where \nan integer primary key is listed first in the table and alternatively \nlisted after some large varchar or text columns? For example, is this \ndifferent performance-wise:\n\nCREATE TABLE foo\n(\n foo_id serial,\n foo_data varchar(8000),\n primary key (foo_id)\n);\n\nfrom this?\n\nCREATE TABLE bar\n(\n bar_data varchar(8000),\n bar_id serial,\n primary key (bar_id)\n);\n\nMy suspicion is it would never make a difference since the index will be \nsearched when querying \"WHERE [foo|bar]_id=?\" (as long as the planner \ndecides to use the index).\n\nWhat about a case where a sequential scan _must_ be performed? Could the \norder of columns make a difference in the number of pages read/written \nif there is a mix of small and large columns?\n\nThanks for your help.\n\nBest Regards,\n\nBill Montgomery\n",
"msg_date": "Tue, 10 Aug 2004 09:39:20 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Column order performance"
},
{
"msg_contents": "Bill,\n\n> Does the order of columns of varying size have any effect on\n> SELECT/INSERT/UPDATE/and/or/DELETE performance? Take the example where\n> an integer primary key is listed first in the table and alternatively\n> listed after some large varchar or text columns?\n\nNo, the \"order\" of the columns in the table makes no difference. They are not \nphysically stored in the metadata order, anyway; on the data pages, \nfixed-length fields (e.g. INT, BOOLEAN, etc.) are stored first and \nvariable-length fields (CHAR, TEXT, NUMERIC) after them, AFAIK.\n\nThe only thing I have seen elusive reports of is that *display* speed can be \nafffected by column order (e.g. when you call the query to the command line \nwith many rows) but I've not seen this proven in a test case.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 10 Aug 2004 10:23:28 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Column order performance"
},
{
"msg_contents": "Josh Berkus wrote:\n\n>>Does the order of columns of varying size have any effect on\n>>SELECT/INSERT/UPDATE/and/or/DELETE performance? Take the example where\n>>an integer primary key is listed first in the table and alternatively\n>>listed after some large varchar or text columns?\n>> \n>>\n>\n>No, the \"order\" of the columns in the table makes no difference. They are not \n>physically stored in the metadata order, anyway; on the data pages, \n>fixed-length fields (e.g. INT, BOOLEAN, etc.) are stored first and \n>variable-length fields (CHAR, TEXT, NUMERIC) after them, AFAIK.\n> \n>\n\nIs this true even after a table is altered to \"append\" say, an integer \ncolumn, after there are already variable-length columns in the table?\n\n-Bill\n",
"msg_date": "Tue, 10 Aug 2004 14:49:29 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Column order performance"
}
] |
[
{
"msg_contents": "> Anyway, with fsync enabled using standard fsync(), I get roughly\n300-400\n> inserts per second. With fsync disabled, I get about 7000 inserts per\n> second. When I re-enable fsync but use the open_sync option, I can get\n> about 2500 inserts per second.\n\nYou are getting 300-400 inserts/sec with fsync on? If you don't mind me\nasking, what's your hardware? (also, have you checked fsync on #s with\nthe new bgwriter in 7.5?)\n\nMerlin\n\n\n",
"msg_date": "Tue, 10 Aug 2004 11:25:15 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] fsync vs open_sync"
},
{
"msg_contents": ">> Anyway, with fsync enabled using standard fsync(), I get roughly\n> 300-400\n>> inserts per second. With fsync disabled, I get about 7000 inserts per\n>> second. When I re-enable fsync but use the open_sync option, I can get\n>> about 2500 inserts per second.\n>\n> You are getting 300-400 inserts/sec with fsync on? If you don't mind me\n> asking, what's your hardware? (also, have you checked fsync on #s with\n> the new bgwriter in 7.5?)\n>\n\n300 inserts persecond with fsync on using fdatasync. 2500 inserts per\nsecond with fsync on using open_sync.\n\n[mwoodward@penguin-021 mwoodward]$ cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) CPU 2.40GHz\nstepping : 5\ncpu MHz : 2399.373\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe cid\nbogomips : 4784.12\n\nLinux node1 2.4.25 #1 Mon Mar 22 13:33:41 EST 2004 i686 i686 i386 GNU/Linux\n\n ide2: BM-DMA at 0xc400-0xc407, BIOS settings: hde:pio, hdf:pio\nhde: Maxtor 6Y200P0, ATA DISK drive\nhde: attached ide-disk driver.\nhde: host protected area => 1\nhde: 398297088 sectors (203928 MB) w/7936KiB Cache, CHS=24792/255/63,\nUDMA(100)\n\nPDC20268: IDE controller at PCI slot 06:05.0\n\n",
"msg_date": "Tue, 10 Aug 2004 11:54:28 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsync vs open_sync"
},
{
"msg_contents": "Guys, just so you know:\n\nOSDL did some testing and found Ext3 to be perhaps the worst FS for PostgreSQL \n-- although this testing was with the default options. Ext3 involved an \nalmost 40% write performance penalty compared with Ext2, whereas the penalty \nfor ReiserFS and JFS was less than 10%. \n\nThis concurs with my personal experience.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 10 Aug 2004 10:18:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsync vs open_sync"
},
{
"msg_contents": "> Guys, just so you know:\n>\n> OSDL did some testing and found Ext3 to be perhaps the worst FS for\n> PostgreSQL\n> -- although this testing was with the default options. Ext3 involved an\n> almost 40% write performance penalty compared with Ext2, whereas the\n> penalty\n> for ReiserFS and JFS was less than 10%.\n>\n> This concurs with my personal experience.\n\nI had exactly the same experience\n",
"msg_date": "Tue, 10 Aug 2004 14:04:52 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsync vs open_sync"
},
{
"msg_contents": "On Tue, 2004-08-10 at 10:18 -0700, Josh Berkus wrote:\n> Guys, just so you know:\n> \n> OSDL did some testing and found Ext3 to be perhaps the worst FS for PostgreSQL \n> -- although this testing was with the default options. Ext3 involved an \n> almost 40% write performance penalty compared with Ext2, whereas the penalty \n> for ReiserFS and JFS was less than 10%. \n> \n> This concurs with my personal experience.\n> \n\nYes, I have been wondering about the relative trade offs between\nunderlying file systems and pgsql.\n\nFor metadata journalled filesystems, wouldn't fdatasync be a better\noption, since the fs is journalling the metadata anyway?\n\nWith its default settings (data=ordered), ext3 is making a guaranty that\nafter a crash, the filesystem will not only be in a consistent state,\nbut the files (including the WAL) will not contain garbage, though their\ncontents may not be the latest. With reiserfs and JFS, files can\ncontain garbage. (I'm not sure what the implications of all this for\npgsql are.)\n\nAnd wouldn't the following comparisons with ext3 be more interesting:\n\next3,data=writeback,fdatasync vs Other_Journalled_FS,fdatasync\n\nor \n\next3,data=journal,open_sync vs Other_Journalled_FS,fdatasync\n\nJust wondering.\n\n-Steve\n\n",
"msg_date": "Wed, 11 Aug 2004 15:51:14 -0500",
"msg_from": "Steve Bergman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsync vs open_sync"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\[email protected] wrote:\n|>>Anyway, with fsync enabled using standard fsync(), I get roughly\n|>\n|>300-400\n|>\n|>>inserts per second. With fsync disabled, I get about 7000 inserts per\n|>>second. When I re-enable fsync but use the open_sync option, I can get\n|>>about 2500 inserts per second.\n|>\n|>You are getting 300-400 inserts/sec with fsync on? If you don't mind me\n|>asking, what's your hardware? (also, have you checked fsync on #s with\n|>the new bgwriter in 7.5?)\n|>\n|\n|\n| 300 inserts persecond with fsync on using fdatasync. 2500 inserts per\n| second with fsync on using open_sync.\n|\n| [mwoodward@penguin-021 mwoodward]$ cat /proc/cpuinfo\n| processor : 0\n| vendor_id : GenuineIntel\n| cpu family : 15\n| model : 2\n| model name : Intel(R) Xeon(TM) CPU 2.40GHz\n| stepping : 5\n| cpu MHz : 2399.373\n| cache size : 512 KB\n| fdiv_bug : no\n| hlt_bug : no\n| f00f_bug : no\n| coma_bug : no\n| fpu : yes\n| fpu_exception : yes\n| cpuid level : 2\n| wp : yes\n| flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\n| cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe cid\n| bogomips : 4784.12\n|\n| Linux node1 2.4.25 #1 Mon Mar 22 13:33:41 EST 2004 i686 i686 i386 GNU/Linux\n|\n| ide2: BM-DMA at 0xc400-0xc407, BIOS settings: hde:pio, hdf:pio\n| hde: Maxtor 6Y200P0, ATA DISK drive\n| hde: attached ide-disk driver.\n| hde: host protected area => 1\n| hde: 398297088 sectors (203928 MB) w/7936KiB Cache, CHS=24792/255/63,\n| UDMA(100)\n|\n| PDC20268: IDE controller at PCI slot 06:05.0\n\n\nI did some experiments too:\n\ninserting 10000 rows in a table with an integer column:\n\nfsync=false ====> ~7.5 secs 1300 insert/sec\n\nwal_sync_method=fsync ====> ~15.5 secs 645 insert/sec\nwal_sync_method=fdatasync ====> ~15.5 secs 645 insert/sec\nwal_sync_method=open_sync ====> ~10.0 secs 1000 insert/sec\nwal_sync_method=open_datasync ====> <the server doesn't start>\n\n\n\n\n\nTest bed: Postgresql 8.0beta1, linux kernel 2.4.22,\n~ hda: IC35L060AVVA07-0, ATA DISK drive\n~ hda: 120103200 sectors (61493 MB) w/1863KiB Cache, CHS=7476/255/63, UDMA(100)\n\n\n# cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 8\nmodel name : Pentium III (Coppermine)\nstepping : 6\ncpu MHz : 877.500\ncache size : 256 KB\nphysical id : 0\nsiblings : 1\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse\nrunqueue : 0\n\nbogomips : 1749.81\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 8\nmodel name : Pentium III (Coppermine)\nstepping : 6\ncpu MHz : 877.500\ncache size : 256 KB\nphysical id : 0\nsiblings : 1\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse\nrunqueue : 1\n\nbogomips : 1749.81\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBGrSE7UpzwH2SGd4RAoXnAKCHhuw/pWKgY+OD3JcWYMTPDbmgZwCgyqfT\n+OugUEvUF8usYYrWSGDAnn4=\n=FAaI\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Thu, 12 Aug 2004 02:06:31 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] fsync vs open_sync"
}
] |
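The settings being compared in this thread, sketched as postgresql.conf entries; open_datasync is platform-dependent (the server refuses to start with it on the Linux box above):

fsync = true
wal_sync_method = open_sync   # alternatives benchmarked here: fsync, fdatasync, open_datasync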
[
{
"msg_contents": "Hi All,\n\nWe're currently running Postgres 7.4.1 on FreeBSD 5.2, a Dual Xeon 2.4, 2GB\nECC, 3Ware Serial ATA RAID 5 w/ 4 disks (SLOW!!).\n\nOur database is about 20GB on disk, we have some quite large tables - 2M\nrows with TEXT fields in a sample table, accessed constantly. We average\nabout 4,000 - 5,000 queries per second - all from web traffic. As you can\nimagine, we're quite disk limited and checkpoints can be killer.\nAdditionally, we see queries and connections getting serialized due to\nqueries that take a long time (5 sec or so) while waiting on disk access.\nNo fun at all.\n\nWe've tweaked everything long and hard, and at the end of the day, the disk\nis killing us.\n\nWe're looking to upgrade our server - or rather, replace it as it has no\nupgrade path to SCSI. I'm considering going Opteron (though right now we\ndon't need more CPU time), and am looking for suggestions on what an optimal\nRAID configuration may look like (disks, controller, cache setting). We're\nin the market to buy right now - any good vendor suggestions?\n\nI'd appreciate any input, thanks!\n\nJason \n\n",
"msg_date": "Tue, 10 Aug 2004 15:17:32 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "> Our database is about 20GB on disk, we have some quite large tables - 2M\n> rows with TEXT fields in a sample table, accessed constantly. We average\n> about 4,000 - 5,000 queries per second - all from web traffic. As you can\n\n99% is reads? and probably the same data over and over again? You might\nwant to think about a small code change to cache sections of page output\nin memory for the most commonly generated pages (there are usually 3 or\n4 that account for 25% to 50% of web traffic -- starting pages).\n\nThe fact you're getting 5k queries/second off IDE drives tells me most\nof the active data is in memory -- so your actual working data set is\nprobably quite small (less than 10% of the 20GB).\n\n\nIf the above is all true (mostly reads, smallish dataset, etc.) and the\ndatabase is not growing very quickly, you might want to look into RAM\nand RAM bandwidth over disk. An Opteron with 8GB ram using the same old\nIDE drives. Get a mobo with a SCSI raid controller in it, so the disk\ncomponent can be upgraded in the future (when necessary).\n\n\n",
"msg_date": "Tue, 10 Aug 2004 19:06:52 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "On Tue, 2004-08-10 at 13:17, Jason Coene wrote:\n> Hi All,\n> \n> We're currently running Postgres 7.4.1 on FreeBSD 5.2, a Dual Xeon 2.4, 2GB\n> ECC, 3Ware Serial ATA RAID 5 w/ 4 disks (SLOW!!).\n> \n> Our database is about 20GB on disk, we have some quite large tables - 2M\n> rows with TEXT fields in a sample table, accessed constantly. We average\n> about 4,000 - 5,000 queries per second - all from web traffic. As you can\n> imagine, we're quite disk limited and checkpoints can be killer.\n> Additionally, we see queries and connections getting serialized due to\n> queries that take a long time (5 sec or so) while waiting on disk access.\n> No fun at all.\n> \n> We've tweaked everything long and hard, and at the end of the day, the disk\n> is killing us.\n> \n> We're looking to upgrade our server - or rather, replace it as it has no\n> upgrade path to SCSI. I'm considering going Opteron (though right now we\n> don't need more CPU time), and am looking for suggestions on what an optimal\n> RAID configuration may look like (disks, controller, cache setting). We're\n> in the market to buy right now - any good vendor suggestions?\n\nI've had very good luck with LSI MegaRAID controllers with battery\nbacked cache. The amount of cache doesn't seem as important as having\nit, and having it set for write back.\n\nAfter that, 2 gigs or more of memory is the next improvement.\n\nAfter that, the speed of the memory.\n\n\n",
"msg_date": "Tue, 10 Aug 2004 18:00:01 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "Hi Rod,\n\nActually, we're already using a substantial caching system in code for\nnearly all pages delivered - we've exhausted that option. Our system uses a\nlogin/session table for about 1/8 of our page views (those visitors who are\nlogged in), and has tracking features. While I'd love to scrap them and\ngive the database server a vacation, it's a requirement for us.\n\nYou're correct about the query caching (stored in memory) being used - most\nof our queries are run once and then come from memory (or, based on speed of\nconsecutive executions, that seems to be the case). Once a user hits a page\nfor the first time in an hour or so, it seems to cache their session query.\n\nThe issue that I think we're seeing is that the performance on the 3Ware\nRAID is quite bad, watching FreeBSD systat will show it at \"100% busy\" at\naround \"3.5 MB/s\". When it needs to seek across a table (for, say, an\naggregate function - typically a COUNT()), it slows the entire server down\nwhile working on the disk. Additionally, VACUUM's make the server\npractically useless. We have indexes on everything that's used in queries,\nand the planner is using them.\n\nThe server has 2GB of physical memory, however it's only uses between 130MB\nand 200MB of it. Postgres is the only application running on the server.\n\nOur pertinent settings look like this:\n\nmax_connections = 512\n\nshared_buffers = 20000\nsort_mem = 2000\nvacuum_mem = 20000\neffective_cache_size = 300000\n\nfsync = false\nwal_sync_method = fsync\nwal_buffers = 32\n\ncheckpoint_segments = 2\ncheckpoint_timeout = 30\ncommit_delay = 10000\n\nTypically, we don't use anywhere near the 512 connections - however there\nare peak hours where we come close, and other times that we eclipse it and\nrun out (should some connections become serialized due to a slowdown). It's\nnot something that we can comfortably lower.\n\nThe non-standard checkpoint settings have helped making it less likely that\na large (in disk time) query will conflict with a checkpoint write.\n\nI'm a programmer - definitely not a DBA by any stretch - though I am forced\ninto the role. From reading this list, it seems to me that our settings are\nreasonable given our usage, and that a disk upgrade is likely in order.\n\nI'd love to hear any suggestions.\n\nThanks,\n\nJason\n \n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]] \nSent: Tuesday, August 10, 2004 7:07 PM\nTo: Jason Coene\nCc: Postgresql Performance\nSubject: Re: [PERFORM] Hardware upgrade for a high-traffic database\n\n> Our database is about 20GB on disk, we have some quite large tables - 2M\n> rows with TEXT fields in a sample table, accessed constantly. We average\n> about 4,000 - 5,000 queries per second - all from web traffic. As you can\n\n99% is reads? and probably the same data over and over again? You might\nwant to think about a small code change to cache sections of page output\nin memory for the most commonly generated pages (there are usually 3 or\n4 that account for 25% to 50% of web traffic -- starting pages).\n\nThe fact you're getting 5k queries/second off IDE drives tells me most\nof the active data is in memory -- so your actual working data set is\nprobably quite small (less than 10% of the 20GB).\n\n\nIf the above is all true (mostly reads, smallish dataset, etc.) and the\ndatabase is not growing very quickly, you might want to look into RAM\nand RAM bandwidth over disk. An Opteron with 8GB ram using the same old\nIDE drives. 
Get a mobo with a SCSI raid controller in it, so the disk\ncomponent can be upgraded in the future (when necessary).\n\n\n\n",
"msg_date": "Tue, 10 Aug 2004 21:06:37 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "\n> We're currently running Postgres 7.4.1 on FreeBSD 5.2, a Dual Xeon 2.4, \n> 2GB\n> ECC, 3Ware Serial ATA RAID 5 w/ 4 disks (SLOW!!).\n\n\tCheap solution while you look for another server :\n\n\tTry to use something other than RAID5.\n\tYou have 4 disks, so you could use a striping+mirroring RAID which would \nboost performance.\n\tYou can switch with a minimum downtime (copy files to other HDD, change \nRAID parameters, copy again...) maybe 1 hour ?\n\tIf your hardware supports it of course.\n\tAnd tell us how it works !\n",
"msg_date": "Wed, 11 Aug 2004 08:43:36 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a question on bulk checking, inserting into a table and\nhow best to use an index for performance.\n\nThe data I have to work with is a monthly CD Rom csv data dump of\n300,000 property owners from one area/shire.\n\nSo every CD has 300,000 odd lines, each line of data which fills the \n'property' table.\n\nBeginning with the first CD each line should require one SELECT and\none INSERT as it will be the first property with this address.\n\nThe SELECT uses fields like 'street' and 'suburb', to check for an \nexisting property,\nso I have built an index on those fields.\n\nMy question is does each INSERT rebuild the index on the 'street' and \n'suburb' fields?\nI believe it does but I'm asking to be sure.\n\nIf this is the case I guess performance will suffer when I have, say, \n200,000\nrows in the table.\n\nWould it be like:\n\na) Use index to search on 'street' and 'suburb'\nb) No result? Insert new record\nc) Rebuild index on 'street' and 'suburb'\n\nfor each row?\nWould this mean that after 200,000 rows each INSERT will require\nthe index of 000's of rows to be re-indexed?\n\nSo far I believe my only options are to use either and index\nor sequential scan and see which is faster.\n\nA minute for your thoughts and/or suggestions would be great.\n\nThanks.\nRegards,\nRudi.\n\n",
"msg_date": "Wed, 11 Aug 2004 09:04:02 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bulk Insert and Index use"
},
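Since the DDL was not posted, here is a hypothetical sketch of the table and index being described, only to make the follow-up suggestions concrete; the table name, column names and types are all assumptions:

CREATE TABLE property (
    id     serial PRIMARY KEY,
    street text NOT NULL,
    suburb text NOT NULL,
    owner  text
);

-- One composite index covers the duplicate check on (street, suburb).
-- Each INSERT adds entries to this index; it is not rebuilt per row.
CREATE INDEX property_street_suburb_idx ON property (street, suburb);

-- The per-row existence check described above:
SELECT id FROM property
 WHERE street = '1 Example St' AND suburb = 'Sampleville';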
{
"msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (Rudi Starcevic) transmitted:\n> A minute for your thoughts and/or suggestions would be great.\n\nCould you give a more concrete example? E.g. - the DDL for the\ntable(s), most particularly.\n\nAt first guess, I think you're worrying about a nonissue. Each insert\nwill lead to a _modification_ of the various indices, which costs\n_something_, but which is WAY less expensive than creating each index\nfrom scratch.\n\nBut perhaps I'm misreading things; DDL for the intended tables and\nindices would be real handy.\n-- \noutput = (\"cbbrowne\" \"@\" \"cbbrowne.com\")\nhttp://www.ntlug.org/~cbbrowne/linux.html\nRules of the Evil Overlord #21. \"I will hire a talented fashion\ndesigner to create original uniforms for my Legions of Terror, as\nopposed to some cheap knock-offs that make them look like Nazi\nstormtroopers, Roman footsoldiers, or savage Mongol hordes. All were\neventually defeated and I want my troops to have a more positive\nmind-set.\" <http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 10 Aug 2004 22:24:31 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk Insert and Index use"
},
{
"msg_contents": "If the bulk load has the possibility of duplicating data, then you need \nto change methods. Try bulk loading into a temp table, index it like \nthe original, eliminate the dups and merge the tables.\n\nIt is also possible to do an insert from the temp table into the final \ntable like:\ninsert into original (x,x,x) (select temp.1, temp.2, etc from temp left \njoin original on temp.street=original.street where original.street is null)\n\nGood Luck\nJim\n \nRudi Starcevic wrote:\n\n> Hi,\n>\n> I have a question on bulk checking, inserting into a table and\n> how best to use an index for performance.\n>\n> The data I have to work with is a monthly CD Rom csv data dump of\n> 300,000 property owners from one area/shire.\n>\n> So every CD has 300,000 odd lines, each line of data which fills the \n> 'property' table.\n>\n> Beginning with the first CD each line should require one SELECT and\n> one INSERT as it will be the first property with this address.\n>\n> The SELECT uses fields like 'street' and 'suburb', to check for an \n> existing property,\n> so I have built an index on those fields.\n>\n> My question is does each INSERT rebuild the index on the 'street' and \n> 'suburb' fields?\n> I believe it does but I'm asking to be sure.\n>\n> If this is the case I guess performance will suffer when I have, say, \n> 200,000\n> rows in the table.\n>\n> Would it be like:\n>\n> a) Use index to search on 'street' and 'suburb'\n> b) No result? Insert new record\n> c) Rebuild index on 'street' and 'suburb'\n>\n> for each row?\n> Would this mean that after 200,000 rows each INSERT will require\n> the index of 000's of rows to be re-indexed?\n>\n> So far I believe my only options are to use either and index\n> or sequential scan and see which is faster.\n>\n> A minute for your thoughts and/or suggestions would be great.\n>\n> Thanks.\n> Regards,\n> Rudi.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n",
"msg_date": "Tue, 10 Aug 2004 21:31:48 -0500",
"msg_from": "Jim J <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk Insert and Index use"
},
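A slightly more concrete version of the staging-table approach suggested above, reusing the hypothetical property table from the earlier sketch; the file path is a placeholder and it assumes the CD itself contains no duplicate addresses:

CREATE TEMP TABLE property_load (
    street text,
    suburb text,
    owner  text
);

-- 7.4-era COPY understands a delimiter but not quoted CSV fields,
-- so the dump may need pre-processing before the load.
COPY property_load FROM '/tmp/shire_dump.csv' WITH DELIMITER ',';

-- Add only addresses that are not in the live table yet:
INSERT INTO property (street, suburb, owner)
SELECT l.street, l.suburb, l.owner
  FROM property_load l
  LEFT JOIN property p
    ON p.street = l.street AND p.suburb = l.suburb
 WHERE p.street IS NULL;

-- And refresh the ones that already exist:
UPDATE property
   SET owner = l.owner
  FROM property_load l
 WHERE property.street = l.street
   AND property.suburb = l.suburb;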
{
"msg_contents": "Hi Jim,\n\nThanks for your time.\n\n > If the bulk load has the possibility of duplicating data\n\nYes, each row will require either:\n\na) One SELECT + One INSERT\nor\nb) One SELECT + One UPDATE\n\nI did think of using more than one table, ie. temp table.\nAs each month worth of data is added I expect to see\na change from lots of INSERTS to lots of UPDATES.\n\nPerhaps when the UPDATES become more dominant it would\nbe best to start using Indexes.\n\nWhile INSERTS are more prevelant perhaps a seq. scan is better.\n\nI guess of all the options available it boils down to which\nis quicker for my data: index or sequential scan.\n\nMany thanks.\n\nJim J wrote:\n\n> If the bulk load has the possibility of duplicating data, then you need \n> to change methods. Try bulk loading into a temp table, index it like \n> the original, eliminate the dups and merge the tables.\n> \n> It is also possible to do an insert from the temp table into the final \n> table like:\n> insert into original (x,x,x) (select temp.1, temp.2, etc from temp left \n> join original on temp.street=original.street where original.street is null)\n> \n> Good Luck\n> Jim\n> \n> Rudi Starcevic wrote:\n> \n>> Hi,\n>>\n>> I have a question on bulk checking, inserting into a table and\n>> how best to use an index for performance.\n>>\n>> The data I have to work with is a monthly CD Rom csv data dump of\n>> 300,000 property owners from one area/shire.\n>>\n>> So every CD has 300,000 odd lines, each line of data which fills the \n>> 'property' table.\n>>\n>> Beginning with the first CD each line should require one SELECT and\n>> one INSERT as it will be the first property with this address.\n>>\n>> The SELECT uses fields like 'street' and 'suburb', to check for an \n>> existing property,\n>> so I have built an index on those fields.\n>>\n>> My question is does each INSERT rebuild the index on the 'street' and \n>> 'suburb' fields?\n>> I believe it does but I'm asking to be sure.\n>>\n>> If this is the case I guess performance will suffer when I have, say, \n>> 200,000\n>> rows in the table.\n>>\n>> Would it be like:\n>>\n>> a) Use index to search on 'street' and 'suburb'\n>> b) No result? Insert new record\n>> c) Rebuild index on 'street' and 'suburb'\n>>\n>> for each row?\n>> Would this mean that after 200,000 rows each INSERT will require\n>> the index of 000's of rows to be re-indexed?\n>>\n>> So far I believe my only options are to use either and index\n>> or sequential scan and see which is faster.\n>>\n>> A minute for your thoughts and/or suggestions would be great.\n>>\n>> Thanks.\n>> Regards,\n>> Rudi.\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n\n\n-- \n\n\nRegards,\nRudi.\n\nInternet Media Productions\n",
"msg_date": "Wed, 11 Aug 2004 13:33:33 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk Insert and Index use"
},
{
"msg_contents": "Hi,\n\n> In an attempt to throw the authorities off his trail, [email protected] (Rudi Starcevic) transmitted:\n> A minute for your thoughts and/or suggestions would be great.\n\nHeh heh ....\n\n> Could you give a more concrete example? E.g. - the DDL for the\n> table(s), most particularly.\n\nThanks, I didn't add the DDL as I though it may make my question too\nlong. I have the DDL at another office so I'll pick up this email\nthread when I get there in a couple hours.\n\n> At first guess, I think you're worrying about a nonissue. Each insert\n> will lead to a _modification_ of the various indices, which costs\n> _something_, but which is WAY less expensive than creating each index\n> from scratch.\n\nVery interesting, modification and creation.\nI will post another email later today.\n\nMany thanks.\n\n-- \n\nRegards,\nRudi.\n\nInternet Media Productions\n",
"msg_date": "Wed, 11 Aug 2004 14:10:52 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk Insert and Index use"
}
] |
[
{
"msg_contents": "\nUsualy any bulk load is faster with indexes dropped and the rebuilt ... failing that (like you really need the indexes while loading, say into a \"hot\" table) be sure to wrap all the SQL into one transaction (BEGIN;...COMMIT;) ... if any data failes it all fails, which is usually easier to deal with than partial data loads, and it is *much* faster than having each insert being its own transaction.\n\nHTH,\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\tRudi Starcevic [mailto:[email protected]]\nSent:\tTue 8/10/2004 4:04 PM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] Bulk Insert and Index use\nHi,\n\nI have a question on bulk checking, inserting into a table and\nhow best to use an index for performance.\n\nThe data I have to work with is a monthly CD Rom csv data dump of\n300,000 property owners from one area/shire.\n\nSo every CD has 300,000 odd lines, each line of data which fills the \n'property' table.\n\nBeginning with the first CD each line should require one SELECT and\none INSERT as it will be the first property with this address.\n\nThe SELECT uses fields like 'street' and 'suburb', to check for an \nexisting property,\nso I have built an index on those fields.\n\nMy question is does each INSERT rebuild the index on the 'street' and \n'suburb' fields?\nI believe it does but I'm asking to be sure.\n\nIf this is the case I guess performance will suffer when I have, say, \n200,000\nrows in the table.\n\nWould it be like:\n\na) Use index to search on 'street' and 'suburb'\nb) No result? Insert new record\nc) Rebuild index on 'street' and 'suburb'\n\nfor each row?\nWould this mean that after 200,000 rows each INSERT will require\nthe index of 000's of rows to be re-indexed?\n\nSo far I believe my only options are to use either and index\nor sequential scan and see which is faster.\n\nA minute for your thoughts and/or suggestions would be great.\n\nThanks.\nRegards,\nRudi.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n",
"msg_date": "Tue, 10 Aug 2004 19:27:48 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk Insert and Index use"
}
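For the very first load, where no duplicate checking is needed yet, the drop-index and single-transaction advice above might look roughly like this (same hypothetical names as before). The explicit BEGIN/COMMIT matters most when the load is a stream of individual INSERTs rather than one COPY, since it avoids a commit per row:

DROP INDEX property_street_suburb_idx;

BEGIN;
-- One transaction for all 300,000 rows: a single commit, and any bad
-- row rolls the whole load back instead of leaving a partial import.
COPY property (street, suburb, owner)
    FROM '/tmp/shire_dump.csv' WITH DELIMITER ',';
COMMIT;

-- Rebuild the index once, which is cheaper than maintaining it row by
-- row, then refresh planner statistics.
CREATE INDEX property_street_suburb_idx ON property (street, suburb);
ANALYZE property;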
] |
[
{
"msg_contents": "If it has to read a majority (or even a good percentage) of the rows in question a sequential scan is probably faster ... and as Jim pointed out, a temp table can often be a useful medium for getting speed in a load and then allowing you to clean/alter data for a final (easy) push.\n\nG\n-----Original Message-----\nFrom:\tRudi Starcevic [mailto:[email protected]]\nSent:\tTue 8/10/2004 8:33 PM\nTo:\[email protected]\nCc:\t\nSubject:\tRe: [PERFORM] Bulk Insert and Index use\nHi Jim,\n\nThanks for your time.\n\n > If the bulk load has the possibility of duplicating data\n\nYes, each row will require either:\n\na) One SELECT + One INSERT\nor\nb) One SELECT + One UPDATE\n\nI did think of using more than one table, ie. temp table.\nAs each month worth of data is added I expect to see\na change from lots of INSERTS to lots of UPDATES.\n\nPerhaps when the UPDATES become more dominant it would\nbe best to start using Indexes.\n\nWhile INSERTS are more prevelant perhaps a seq. scan is better.\n\nI guess of all the options available it boils down to which\nis quicker for my data: index or sequential scan.\n\nMany thanks.\n\nJim J wrote:\n\n> If the bulk load has the possibility of duplicating data, then you need \n> to change methods. Try bulk loading into a temp table, index it like \n> the original, eliminate the dups and merge the tables.\n> \n> It is also possible to do an insert from the temp table into the final \n> table like:\n> insert into original (x,x,x) (select temp.1, temp.2, etc from temp left \n> join original on temp.street=original.street where original.street is null)\n> \n> Good Luck\n> Jim\n> \n> Rudi Starcevic wrote:\n> \n>> Hi,\n>>\n>> I have a question on bulk checking, inserting into a table and\n>> how best to use an index for performance.\n>>\n>> The data I have to work with is a monthly CD Rom csv data dump of\n>> 300,000 property owners from one area/shire.\n>>\n>> So every CD has 300,000 odd lines, each line of data which fills the \n>> 'property' table.\n>>\n>> Beginning with the first CD each line should require one SELECT and\n>> one INSERT as it will be the first property with this address.\n>>\n>> The SELECT uses fields like 'street' and 'suburb', to check for an \n>> existing property,\n>> so I have built an index on those fields.\n>>\n>> My question is does each INSERT rebuild the index on the 'street' and \n>> 'suburb' fields?\n>> I believe it does but I'm asking to be sure.\n>>\n>> If this is the case I guess performance will suffer when I have, say, \n>> 200,000\n>> rows in the table.\n>>\n>> Would it be like:\n>>\n>> a) Use index to search on 'street' and 'suburb'\n>> b) No result? 
Insert new record\n>> c) Rebuild index on 'street' and 'suburb'\n>>\n>> for each row?\n>> Would this mean that after 200,000 rows each INSERT will require\n>> the index of 000's of rows to be re-indexed?\n>>\n>> So far I believe my only options are to use either and index\n>> or sequential scan and see which is faster.\n>>\n>> A minute for your thoughts and/or suggestions would be great.\n>>\n>> Thanks.\n>> Regards,\n>> Rudi.\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n\n\n-- \n\n\nRegards,\nRudi.\n\nInternet Media Productions\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n\n",
"msg_date": "Tue, 10 Aug 2004 20:53:11 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk Insert and Index use"
}
] |
[
{
"msg_contents": "\n\n\n\n\n\nGreetings.\n\nI have a question regarding performance of certain datatypes:\n\nI have a field where I will store my clients phone numbers. I know that\nthis field will never exceed 15 characters, and I will store only\nnumbers here (no dashes, dots, etc...), so I was wondering:\n\nWich type is faster: NUMERIC(15,0) or VARCHAR(15)? Are there any\nstorage differences between them?\n\nTIA,\n\n-- \nEr Galvão Abbott\nDesenvolvedor Web\nhttp://www.galvao.eti.br/\[email protected]\n\n\n",
"msg_date": "Wed, 11 Aug 2004 02:42:33 -0300",
"msg_from": "=?ISO-8859-1?Q?Er_Galv=E3o_Abbott?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "NUMERIC x VARCHAR"
},
{
"msg_contents": "On Tue, 2004-08-10 at 23:42, Er Galvão Abbott wrote:\n> Greetings.\n> \n> I have a question regarding performance of certain datatypes:\n> \n> I have a field where I will store my clients phone numbers. I know\n> that this field will never exceed 15 characters, and I will store only\n> numbers here (no dashes, dots, etc...), so I was wondering:\n> \n> Wich type is faster: NUMERIC(15,0) or VARCHAR(15)? Are there any\n> storage differences between them?\n\nSince numerics are stored as text strings, the storage would be\nsimilar. Numerics, however, may be slower since they have more\nconstraints built in. If you throw a check constraint on the\nvarchar(15) then it will likely be about the same speed for updating.\n\ntext type with a check contraint it what i'd use. That way if you want\nto change it at a later date you just drop and recreate your constraint.\n\n",
"msg_date": "Wed, 11 Aug 2004 00:21:48 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC x VARCHAR"
},
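A minimal sketch of the text-plus-check-constraint idea, assuming digits-only values of at most 15 characters; the table, column and constraint names are made up, and naming the constraint is what makes the later drop-and-recreate easy:

CREATE TABLE clients (
    id    serial PRIMARY KEY,
    phone text NOT NULL
          CONSTRAINT phone_digits_chk CHECK (phone ~ '^[0-9]{1,15}$')
);

-- Unlike numeric, this keeps leading zeros:
INSERT INTO clients (phone) VALUES ('012345678911234');

-- Loosening the rule later is just a drop and re-add:
ALTER TABLE clients DROP CONSTRAINT phone_digits_chk;
ALTER TABLE clients ADD CONSTRAINT phone_digits_chk
    CHECK (phone ~ '^[0-9+ -]{1,20}$');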
{
"msg_contents": "\nNumeric won't store that :\n\n\t(+33) 4 01 23 45 67\n\nOn Wed, 11 Aug 2004 02:42:33 -0300, Er Galvᅵo Abbott \n<[email protected]> wrote:\n\n> Greetings.\n>\n> I have a question regarding performance of certain datatypes:\n>\n> I have a field where I will store my clients phone numbers. I know that \n> this\n> field will never exceed 15 characters, and I will store only numbers \n> here (no\n> dashes, dots, etc...), so I was wondering:\n>\n> Wich type is faster: NUMERIC(15,0) or VARCHAR(15)? Are there any storage\n> differences between them?\n>\n> TIA,\n>\n\n\n",
"msg_date": "Wed, 11 Aug 2004 08:47:10 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC x VARCHAR"
},
{
"msg_contents": "\n\n\n\n\n\nIt will. As I've said I wont be storing any\nsymbols.\n\n\n-- \nEr Galvᅵo Abbott\nDesenvolvedor Web\nhttp://www.galvao.eti.br/\[email protected]\n\n\nPierre-Frᅵdᅵric Caillaud wrote:\n\nNumeric won't store that :\n \n\nᅵᅵᅵᅵ(+33) 4 01 23 45 67\n \n\nOn Wed, 11 Aug 2004 02:42:33 -0300, Er Galvᅵo Abbottᅵ\n<[email protected]> wrote:\n \n\nGreetings.\n \n\nI have a question regarding performance of certain datatypes:\n \n\nI have a field where I will store my clients phone numbers. I know\nthatᅵ this\n \nfield will never exceed 15 characters, and I will store only numbersᅵ\nhere (no\n \ndashes, dots, etc...), so I was wondering:\n \n\nWich type is faster: NUMERIC(15,0) or VARCHAR(15)? Are there any\nstorage\n \ndifferences between them?\n \n\nTIA,\n \n\n\n\n\n\n---------------------------(end of\nbroadcast)---------------------------\n \nTIP 6: Have you searched our list archives?\n \n\nᅵᅵᅵᅵᅵᅵᅵᅵᅵᅵᅵᅵᅵ http://archives.postgresql.org\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Wed, 11 Aug 2004 04:27:31 -0300",
"msg_from": "=?ISO-8859-15?Q?Er_Galv=E3o_Abbott?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC x VARCHAR"
},
{
"msg_contents": "\nOn Aug 11, 2004, at 4:27 PM, Er Galvão Abbott wrote:\n\n> It will. As I've said I wont be storing any symbols.\n\nIt won't store leading zeros, however. This may or may not be an issue \nfor you.\n\n\ntest=# create table tel (name_id integer not null, tel_numeric \nnumeric(15) not null, tel_varchar varchar(15) not null);\nCREATE TABLE\ntest=# insert into tel (name_id, tel_numeric, tel_varchar) values \n(1,012345678911234, '012345678911234');\nINSERT 17153 1\ntest=# select * from tel;\n name_id | tel_numeric | tel_varchar\n---------+----------------+-----------------\n 1 | 12345678911234 | 012345678911234\n(1 row)\n\nI would do as another poster suggested: create a telephone number \ndomain as text with the check constraints you desire.\n\nMichael Glaesemann\ngrzm myrealbox com\n\n",
"msg_date": "Wed, 11 Aug 2004 16:41:25 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC x VARCHAR"
},
{
"msg_contents": "\n\n\n\n\n\nThanks, Michael.\n\nYou and \"Evil Azrael\" (lol) got me. Never thought about leading zeros.\n\nVarchar it is!\n\nThanks a lot,\n-- \nEr Galvão Abbott\nDesenvolvedor Web\nhttp://www.galvao.eti.br/\[email protected]\n\n\nMichael Glaesemann wrote:\n\nOn Aug 11, 2004, at 4:27 PM, Er Galvão Abbott wrote:\n \n\nIt will. As I've said I wont be storing any\nsymbols.\n \n\n\nIt won't store leading zeros, however. This may or may not be an issue\nfor you.\n \n\n\ntest=# create table tel (name_id integer not null, tel_numeric\nnumeric(15) not null, tel_varchar varchar(15) not null);\n \nCREATE TABLE\n \ntest=# insert into tel (name_id, tel_numeric, tel_varchar) values\n(1,012345678911234, '012345678911234');\n \nINSERT 17153 1\n \ntest=# select * from tel;\n \n name_id | tel_numeric | tel_varchar\n \n---------+----------------+-----------------\n \n 1 | 12345678911234 | 012345678911234\n \n(1 row)\n \n\nI would do as another poster suggested: create a telephone number\ndomain as text with the check constraints you desire.\n \n\nMichael Glaesemann\n \ngrzm myrealbox com\n \n\n\n\n",
"msg_date": "Wed, 11 Aug 2004 14:13:47 -0300",
"msg_from": "=?ISO-8859-1?Q?Er_Galv=E3o_Abbott?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NUMERIC x VARCHAR"
},
{
"msg_contents": "On 8/11/2004 2:21 AM, Scott Marlowe wrote:\n\n> On Tue, 2004-08-10 at 23:42, Er Galvão Abbott wrote:\n>> Greetings.\n>> \n>> I have a question regarding performance of certain datatypes:\n>> \n>> I have a field where I will store my clients phone numbers. I know\n>> that this field will never exceed 15 characters, and I will store only\n>> numbers here (no dashes, dots, etc...), so I was wondering:\n>> \n>> Wich type is faster: NUMERIC(15,0) or VARCHAR(15)? Are there any\n>> storage differences between them?\n> \n> Since numerics are stored as text strings, the storage would be\n> similar. Numerics, however, may be slower since they have more\n> constraints built in. If you throw a check constraint on the\n> varchar(15) then it will likely be about the same speed for updating.\n\nThey are stored as an array of signed small integers holding digits in \nbase-10000, plus a precision, scale and sign. That's somewhat different \nfrom text strings, isn't it?\n\n\nJan\n\n> \n> text type with a check contraint it what i'd use. That way if you want\n> to change it at a later date you just drop and recreate your constraint.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Wed, 11 Aug 2004 15:19:56 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC x VARCHAR"
}
] |
[
{
"msg_contents": "> The issue that I think we're seeing is that the performance on the\n3Ware\n> RAID is quite bad, watching FreeBSD systat will show it at \"100% busy\"\nat\n> around \"3.5 MB/s\". When it needs to seek across a table (for, say, an\n> aggregate function - typically a COUNT()), it slows the entire server\ndown\n> while working on the disk. Additionally, VACUUM's make the server\n> practically useless. We have indexes on everything that's used in\n> queries,\n> and the planner is using them.\n\nIt sounds to me like your application is CPU bound, except when\nvacuuming...then your server is just overloaded. A higher performance\ni/o system will help when vacuuming and checkpointing but will not solve\nthe overall problem. \n\nWith a (good & well supported) battery backed raid controller you can\nturn fsync back on which will help you with your i/o storm issues (plus\nthe safety issue). This will be particularly important if you follow\nmy next suggestion.\n\nOne thing you might consider is materialized views. Your aggregate\nfunctions are killing you...try to avoid using them (except min/max on\nan index). Just watch out for mutable functions like now().\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/matviews.html\n\nAn application specific approach is to use triggers to keep the data you\nneed in as close to query form as possible...you can reap enormous\nsavings particularly if your queries involve 3 or more tables or have\nlarge aggregate scans.\n\nMerlin\n\n \n",
"msg_date": "Wed, 11 Aug 2004 08:21:33 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "On Wed, 2004-08-11 at 17:51, Merlin Moncure wrote:\n\n> One thing you might consider is materialized views. Your aggregate\n> functions are killing you...try to avoid using them (except min/max on\n> an index). Just watch out for mutable functions like now().\n> \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/matviews.html\n> \n> An application specific approach is to use triggers to keep the data you\n> need in as close to query form as possible...you can reap enormous\n> savings particularly if your queries involve 3 or more tables or have\n> large aggregate scans.\n\nI thought materialized views support in pgsql was experimental as yet.\nAre the pg mat-view code upto production servers? Also, do you have to\ndelete mat-views before you dump the db or does dump automatically not\ndump the mat-views data?\n\nSanjay.\n\n\n",
"msg_date": "11 Aug 2004 21:17:10 +0530",
"msg_from": "Sanjay Arora <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
}
] |
[
{
"msg_contents": "Hi. \n\nPlease be a bit patient.. I'm quite new to PostgreSQL. \n\nI'd like some advise on storing binary data in the database. \n\nCurrently I have about 300.000 320.000 Bytes \"Bytea\" records in the\ndatabase. It works quite well but I have a feeling that it actually is\nslowing the database down on queries only related to the surrounding\nattributes. \n\nThe \"common\" solution, I guess would be to store them in the filesystem\ninstead, but I like to have them just in the database it is nice clean\ndatabase and application design and if I can get PostgreSQL to \"not\ncache\" them then it should be excactly as fast i assume. \n\nThe binary data is not a part of most queries in the database only a few\nexplicitly written to fetch them and they are not accessed very often. \n\nWhat do people normally do? \n\n\nThanks, Jesper\n\n-- \n./Jesper Krogh, [email protected]\nJabber ID: [email protected]\n\n\n",
"msg_date": "Wed, 11 Aug 2004 14:29:46 +0000 (UTC)",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Storing binary data."
},
{
"msg_contents": "On Wednesday 11 Aug 2004 7:59 pm, Jesper Krogh wrote:\n> The \"common\" solution, I guess would be to store them in the filesystem\n> instead, but I like to have them just in the database it is nice clean\n> database and application design and if I can get PostgreSQL to \"not\n> cache\" them then it should be excactly as fast i assume.\n\nYou can normalize them so that a table contains an id and a bytea column only. \nTe main table will contain all the other attributes and a mapping id. That \nway you will have only the main table cached.\n\nYou don't have to go to filesystem for this, I hope.\n\nHTH\n\n Shridhar\n",
"msg_date": "Wed, 11 Aug 2004 20:42:50 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Storing binary data."
},
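A sketch of the split being suggested, with hypothetical names: the frequently scanned attributes live in a narrow table and the binary payload sits in a side table that is only read when explicitly asked for. (As the next reply points out, TOAST often makes this unnecessary, but the layout is shown here for completeness.)

CREATE TABLE item (
    id      serial PRIMARY KEY,
    created timestamptz NOT NULL DEFAULT now(),
    title   text
);

CREATE TABLE item_blob (
    item_id integer PRIMARY KEY REFERENCES item (id),
    payload bytea   NOT NULL
);

-- Queries over the surrounding attributes never touch the bytea:
SELECT id, title FROM item WHERE created > '2004-08-01';

-- The payload is fetched explicitly, one row at a time:
SELECT payload FROM item_blob WHERE item_id = 42;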
{
"msg_contents": "Jesper Krogh <[email protected]> writes:\n> I'd like some advise on storing binary data in the database. \n\n> Currently I have about 300.000 320.000 Bytes \"Bytea\" records in the\n> database. It works quite well but I have a feeling that it actually is\n> slowing the database down on queries only related to the surrounding\n> attributes. \n\n> The \"common\" solution, I guess would be to store them in the filesystem\n> instead, but I like to have them just in the database it is nice clean\n> database and application design and if I can get PostgreSQL to \"not\n> cache\" them then it should be excactly as fast i assume. \n\n> The binary data is not a part of most queries in the database only a few\n> explicitly written to fetch them and they are not accessed very often. \n\n> What do people normally do? \n\nNothing. If the bytea values are large enough to be worth splitting\nout, Postgres will actually do that for you automatically. Wide field\nvalues get pushed into a separate \"toast\" table, and are not fetched by\na query unless the value is specifically demanded.\n\nYou can control this behavior to some extent by altering the storage\noption for the bytea column (see ALTER TABLE), but the default choice\nis usually fine.\n\nIf you just want to see whether anything is happening, do a VACUUM\nVERBOSE on that table and note the amount of storage in the toast table\nas compared to the main table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Aug 2004 12:06:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Storing binary data. "
},
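Two commands that go with the explanation above, reusing the hypothetical names from the previous sketch; the per-column storage modes (PLAIN, MAIN, EXTERNAL, EXTENDED) are set with ALTER TABLE:

-- Shows how much of the data actually lives in the toast relation:
VACUUM VERBOSE item_blob;

-- Keep the bytea out of line but skip the compression attempt,
-- e.g. when the stored bytes are already compressed images:
ALTER TABLE item_blob ALTER COLUMN payload SET STORAGE EXTERNAL;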
{
"msg_contents": "I gmane.comp.db.postgresql.performance, skrev Shridhar Daithankar:\n> On Wednesday 11 Aug 2004 7:59 pm, Jesper Krogh wrote:\n> > The \"common\" solution, I guess would be to store them in the filesystem\n> > instead, but I like to have them just in the database it is nice clean\n> > database and application design and if I can get PostgreSQL to \"not\n> > cache\" them then it should be excactly as fast i assume.\n> \n> You can normalize them so that a table contains an id and a bytea column only. \n> Te main table will contain all the other attributes and a mapping id. That \n> way you will have only the main table cached.\n> \n> You don't have to go to filesystem for this, I hope.\n\nFurther benchmarking. \n\nI tried to create a table with the excact same attributes but without\nthe binary attribute. It didn't change anything, so my idea that it\nshould be the binary-stuff that sloved it down was wrong. \n\nI have a timestamp column in the table that I sort on. Data is ordered\nover the last 4 years and I select based on a timerange, I cannot make\nthe query-planner use the index on the timestamp by itself but if I \"set\nenable_seqscan = false\" the query time drops by 1/3 (from 1.200 msec to\nabout 400 msec). \n\nI cannot figure out why the query-planner chooses wrong. \nNB: It's postgresql 7.4.3\n\n-- \n./Jesper Krogh, [email protected]\nJabber ID: [email protected]\n\n\n",
"msg_date": "Wed, 11 Aug 2004 16:29:33 +0000 (UTC)",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Storing binary data."
},
{
"msg_contents": "On Thu, 12 Aug 2004 02:29 am, Jesper Krogh wrote:\n> I gmane.comp.db.postgresql.performance, skrev Shridhar Daithankar:\n> > On Wednesday 11 Aug 2004 7:59 pm, Jesper Krogh wrote:\n> > > The \"common\" solution, I guess would be to store them in the filesystem\n> > > instead, but I like to have them just in the database it is nice clean\n> > > database and application design and if I can get PostgreSQL to \"not\n> > > cache\" them then it should be excactly as fast i assume.\n> > \n> > You can normalize them so that a table contains an id and a bytea column only. \n> > Te main table will contain all the other attributes and a mapping id. That \n> > way you will have only the main table cached.\n> > \n> > You don't have to go to filesystem for this, I hope.\n> \n> Further benchmarking. \n> \n> I tried to create a table with the excact same attributes but without\n> the binary attribute. It didn't change anything, so my idea that it\n> should be the binary-stuff that sloved it down was wrong. \n> \n> I have a timestamp column in the table that I sort on. Data is ordered\n> over the last 4 years and I select based on a timerange, I cannot make\n> the query-planner use the index on the timestamp by itself but if I \"set\n> enable_seqscan = false\" the query time drops by 1/3 (from 1.200 msec to\n> about 400 msec). \n> \n> I cannot figure out why the query-planner chooses wrong. \n> NB: It's postgresql 7.4.3\n> \nPlease post explain analyze of the query.\n\nI would guess you are using now() is your query, which is not optimized perfectly\nby the planner, so you end up with problems. But if you post explain analyze \npeople will be able to tell you what the problem is.\n\nMaybe on with seqscan on, and one with it off.\n\nRegards\n\nRussell Smith\n",
"msg_date": "Thu, 12 Aug 2004 09:44:56 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Storing binary data."
}
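To capture the two plans being asked for, something along these lines works, with hypothetical table and column names standing in for the real ones:

EXPLAIN ANALYZE
SELECT * FROM item
 WHERE created BETWEEN '2004-01-01' AND '2004-02-01'
 ORDER BY created;

SET enable_seqscan = false;
EXPLAIN ANALYZE
SELECT * FROM item
 WHERE created BETWEEN '2004-01-01' AND '2004-02-01'
 ORDER BY created;
RESET enable_seqscan;

-- If the index is being skipped because the row estimate is far off,
-- refreshed statistics sometimes change the planner's choice:
ANALYZE item;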
] |
[
{
"msg_contents": "> OSDL did some testing and found Ext3 to be perhaps the worst FS for\n> PostgreSQL\n> -- although this testing was with the default options. Ext3 involved\nan\n> almost 40% write performance penalty compared with Ext2, whereas the\n> penalty\n> for ReiserFS and JFS was less than 10%.\n> \n> This concurs with my personal experience.\n\nI'm really curious to see if you guys have compared insert performance\nresults between 7.4 and 8.0. As you probably know the system sync()\ncall was replaced with a looping fsync on open file handles. This may\nhave some interesting interactions with the WAL sync method.\n\nWhat caught my attention initially was the 300+/sec insert performance.\nOn 8.0/NTFS/fsync=on, I can't break 100/sec on a 10k rpm ATA disk. My\nhardware seems to be more or less in the same league as psql's, so I was\nnaturally curious if this was a NT/Unix issue, a 7.4/8.0 issue, or a\ncombination of both.\n\nA 5ms seek time disk would be limited to 200 transaction commits/sec if\neach transaction commit has at least 1 seek. Are there some\ncircumstances where a transaction commit does not generate a physical\nseek? \n\nMaybe ext3 is not the worst filesystem after all!\n\nMerlin\n",
"msg_date": "Wed, 11 Aug 2004 12:37:10 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": ">> OSDL did some testing and found Ext3 to be perhaps the worst FS for\n>> PostgreSQL\n>> -- although this testing was with the default options. Ext3 involved\n> an\n>> almost 40% write performance penalty compared with Ext2, whereas the\n>> penalty\n>> for ReiserFS and JFS was less than 10%.\n>>\n>> This concurs with my personal experience.\n>\n> I'm really curious to see if you guys have compared insert performance\n> results between 7.4 and 8.0. As you probably know the system sync()\n> call was replaced with a looping fsync on open file handles. This may\n> have some interesting interactions with the WAL sync method.\n>\n> What caught my attention initially was the 300+/sec insert performance.\n> On 8.0/NTFS/fsync=on, I can't break 100/sec on a 10k rpm ATA disk. My\n> hardware seems to be more or less in the same league as psql's, so I was\n> naturally curious if this was a NT/Unix issue, a 7.4/8.0 issue, or a\n> combination of both.\n\nThe system on which I can get 300 inserts per second is a battery backed\nup XEON system with 512M RAM, a Promise PDC DMA ATA card, and some fast\ndisks with write caching enabled.\n\n(We are not worried about write caching because we have a UPS. Since all\nnon-redundent systems are evaluated on probability of error, we decided\nthat the probability of power failure and UPS failure was sufficiently\nmore rare than system crash with file system corruption or hard disk\nfailure.)\n>\n> A 5ms seek time disk would be limited to 200 transaction commits/sec if\n> each transaction commit has at least 1 seek. Are there some\n> circumstances where a transaction commit does not generate a physical\n> seek?\n>\n> Maybe ext3 is not the worst filesystem after all!\n>\n> Merlin\n>\n\n",
"msg_date": "Wed, 11 Aug 2004 13:14:05 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "\n>> What caught my attention initially was the 300+/sec insert performance.\n>> On 8.0/NTFS/fsync=on, I can't break 100/sec on a 10k rpm ATA disk. My\n>> hardware seems to be more or less in the same league as psql's, so I was\n>> naturally curious if this was a NT/Unix issue, a 7.4/8.0 issue, or a\n>> combination of both.\n\n\tThere is also the fact that NTFS is a very slow filesystem, and Linux is \na lot better than Windows for everything disk, caching and IO related. Try \nto copy some files in NTFS and in ReiserFS...\n",
"msg_date": "Fri, 13 Aug 2004 18:06:59 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
}
] |
[
{
"msg_contents": "> On Wed, 2004-08-11 at 17:51, Merlin Moncure wrote:\n> \n> > One thing you might consider is materialized views. Your aggregate\n> > functions are killing you...try to avoid using them (except min/max\non\n> > an index). Just watch out for mutable functions like now().\n> >\n> > http://www.varlena.com/varlena/GeneralBits/Tidbits/matviews.html\n> >\n> > An application specific approach is to use triggers to keep the data\nyou\n> > need in as close to query form as possible...you can reap enormous\n> > savings particularly if your queries involve 3 or more tables or\nhave\n> > large aggregate scans.\n> \n> I thought materialized views support in pgsql was experimental as yet.\n> Are the pg mat-view code upto production servers? Also, do you have to\n> delete mat-views before you dump the db or does dump automatically not\n> dump the mat-views data?\n\nI think you are thinking about 100% 'true' materialized views. In that\ncase the answer is no, the server does not have them. The GeneralBits\narticle describes how to emulate them through pl/sql triggers. I just\nbumped into the article yesterday and was very impressed by it...I have\nto admin though Note: I have never tried the method, but it should work.\nI cc'd the author who perhaps might chime in and tell you more about\nthem.\n\nMaterialized views can give performance savings so good that the tpc\npeople had to ban them from benchmarks because they skewed results...:)\nIn postgres, they can help a lot with aggregates...there are many\ngotchas tho, for example keeping a count() up to date can get kind of\ntricky. If you can get them to work, the filesystem cache efficiency\nwill rocket upwards...YMMV.\n\nGetting back on topic, I missed the original post where the author\nstated his problems were i/o related, not cpu (contrary to my\nspeculation). I wonder what his insert/update load is?\n\nMerlin\n\n\n",
"msg_date": "Wed, 11 Aug 2004 13:04:00 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
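Since keeping a count() current is named above as one of the trickier cases, here is a minimal sketch of a trigger-maintained summary in the spirit of the GeneralBits article. All names are hypothetical, plpgsql must already be installed in the database, and the insert-if-missing branch can race if two sessions add a user's first row at the same moment; it is an illustration, not a drop-in implementation.

CREATE TABLE comment_counts (
    userid integer PRIMARY KEY,
    n      integer NOT NULL
);

CREATE OR REPLACE FUNCTION comment_counts_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE comment_counts SET n = n + 1 WHERE userid = NEW.userid;
        IF NOT FOUND THEN
            INSERT INTO comment_counts (userid, n) VALUES (NEW.userid, 1);
        END IF;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE comment_counts SET n = n - 1 WHERE userid = OLD.userid;
        RETURN OLD;
    END IF;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER comment_counts_maint
    AFTER INSERT OR DELETE ON comments
    FOR EACH ROW EXECUTE PROCEDURE comment_counts_trig();

-- The aggregate scan turns into a single-row lookup:
SELECT n FROM comment_counts WHERE userid = 11111;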
{
"msg_contents": "Thanks for all the feedback. To clear it up, we are definitely not CPU\nbound at the moment. Any slowdown seems to be disk dependant, or from to\nserialization due to a long query (due to disk).\n\nWe do have a lot of INSERT/UPDATE calls, specifically on tables that track\nuser sessions, then of course things like comments, etc (where we'll see\n10-30 INSERT's per second, with TEXT field, and hundreds of reads per\nsecond). Additionally, our system does use a lot of aggregate functions.\nI'll look into materialized views, it sounds like it may be worth\nimplementing.\n\nOne question I do have though - you specifically mentioned NOW() as\nsomething to watch out for, in that it's mutable. We typically use COUNT()\nas a subselect to retrieve the number of associated rows to the current\nquery. Additionally, we use NOW a lot, primarily to detect the status of a\ndate, i.e.:\n\nSELECT id FROM subscriptions WHERE userid = 11111 AND timeend > NOW();\n\nIs there a better way to do this? I was under the impression that NOW() was\npretty harmless, just to return a current timestamp.\n\nBased on feedback, I'm looking at a minor upgrade of our RAID controller to\na 3ware 9000 series (SATA with cache, battery backup optional), and\nre-configuring it for RAID 10. It's a damn cheap upgrade at around $350 and\nan hour of downtime, so I figure that it's worth it for us to give it a\nshot.\n\nThanks,\n\nJason\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Merlin Moncure\nSent: Wednesday, August 11, 2004 1:04 PM\nTo: [email protected]\nCc: Postgresql Performance; [email protected]\nSubject: Re: [PERFORM] Hardware upgrade for a high-traffic database\n\n> On Wed, 2004-08-11 at 17:51, Merlin Moncure wrote:\n> \n> > One thing you might consider is materialized views. Your aggregate\n> > functions are killing you...try to avoid using them (except min/max\non\n> > an index). Just watch out for mutable functions like now().\n> >\n> > http://www.varlena.com/varlena/GeneralBits/Tidbits/matviews.html\n> >\n> > An application specific approach is to use triggers to keep the data\nyou\n> > need in as close to query form as possible...you can reap enormous\n> > savings particularly if your queries involve 3 or more tables or\nhave\n> > large aggregate scans.\n> \n> I thought materialized views support in pgsql was experimental as yet.\n> Are the pg mat-view code upto production servers? Also, do you have to\n> delete mat-views before you dump the db or does dump automatically not\n> dump the mat-views data?\n\nI think you are thinking about 100% 'true' materialized views. In that\ncase the answer is no, the server does not have them. The GeneralBits\narticle describes how to emulate them through pl/sql triggers. I just\nbumped into the article yesterday and was very impressed by it...I have\nto admin though Note: I have never tried the method, but it should work.\nI cc'd the author who perhaps might chime in and tell you more about\nthem.\n\nMaterialized views can give performance savings so good that the tpc\npeople had to ban them from benchmarks because they skewed results...:)\nIn postgres, they can help a lot with aggregates...there are many\ngotchas tho, for example keeping a count() up to date can get kind of\ntricky. 
If you can get them to work, the filesystem cache efficiency\nwill rocket upwards...YMMV.\n\nGetting back on topic, I missed the original post where the author\nstated his problems were i/o related, not cpu (contrary to my\nspeculation). I wonder what his insert/update load is?\n\nMerlin\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n",
"msg_date": "Wed, 11 Aug 2004 15:08:42 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "Jason,,\n\nOne suggestion i have, stay away from adaptec ZCR RAID products, we've \nbeen doing testing on them, and they don't perform well at all.\n\n--brian\n\nOn Aug 11, 2004, at 1:08 PM, Jason Coene wrote:\n\n> Thanks for all the feedback. To clear it up, we are definitely not CPU\n> bound at the moment. Any slowdown seems to be disk dependant, or from \n> to\n> serialization due to a long query (due to disk).\n\n",
"msg_date": "Wed, 11 Aug 2004 13:39:38 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "[snip]\n> \n> One question I do have though - you specifically mentioned NOW() as\n> something to watch out for, in that it's mutable. We typically use COUNT()\n> as a subselect to retrieve the number of associated rows to the current\n> query. Additionally, we use NOW a lot, primarily to detect the status of a\n> date, i.e.:\n> \n> SELECT id FROM subscriptions WHERE userid = 11111 AND timeend > NOW();\n> \n> Is there a better way to do this? I was under the impression that NOW() was\n> pretty harmless, just to return a current timestamp.\n> \nNOW() will trigger unnessecary sequence scans. As it is unknown with prepared\nquery and function when the statement is run, the planner plans the query with\nnow as a variable. This can push the planner to a seq scan over and index scan.\nI have seen this time and time again.\n\nYou can create your own immutable now, but don't use it in functions or prepared queries\nor you will get wrong results.\n\n> Based on feedback, I'm looking at a minor upgrade of our RAID controller to\n> a 3ware 9000 series (SATA with cache, battery backup optional), and\n> re-configuring it for RAID 10. It's a damn cheap upgrade at around $350 and\n> an hour of downtime, so I figure that it's worth it for us to give it a\n> shot.\n> \n> Thanks,\n> \n> Jason\n\nRussell Smith\n",
"msg_date": "Thu, 12 Aug 2004 09:52:20 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
}
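For completeness, the kind of wrapper being referred to looks like the sketch below, with the same warning repeated: declaring it immutable lets the planner treat the timestamp as a plan-time constant (which is what makes the index usable), but the value is then frozen into any cached plan, so it must not be used in prepared statements or inside functions. The function name is made up; passing the current time from the application as a literal achieves the same effect without lying to the planner.

-- Caution: IMMUTABLE is a deliberate lie to the planner here.
CREATE OR REPLACE FUNCTION immediate_now() RETURNS timestamptz AS '
    SELECT now();
' LANGUAGE sql IMMUTABLE;

SELECT id FROM subscriptions
 WHERE userid = 11111 AND timeend > immediate_now();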
] |
[
{
"msg_contents": "> We do have a lot of INSERT/UPDATE calls, specifically on tables that\ntrack\n> user sessions, then of course things like comments, etc (where we'll\nsee\n> 10-30 INSERT's per second, with TEXT field, and hundreds of reads per\n> second). Additionally, our system does use a lot of aggregate\nfunctions.\n> I'll look into materialized views, it sounds like it may be worth\n> implementing.\n\nRight. The point is: is your i/o bottle neck on the read side or the\nwrite side. With 10-30 inserts/sec and fsync off, it's definitely on\nthe read side. What's interesting is that such a low insert load is\ncausing i/o storm problems. How does your app run with fsync on?\n\nWith read-bound i/o problems, might want to consider upgrading memory\nfirst to get better cache efficiency. You may want to consider Opteron\nfor > 4GB allocations (yummy!).\n\nThe good news is that read problems are usually solvable by being\nclever, whereas write problems require hardware.\n\n> One question I do have though - you specifically mentioned NOW() as\n> something to watch out for, in that it's mutable. We typically use\n\nThis is specifically with regards to materialized views. Mutable\nfunctions cause problems because when they are pushed unto the view,\nthey are refreshed...something to watch out for.\n\nThe trick with MVs is to increase your filesystem cache efficiency. The\nbig picture is to keep frequently read data in a single place to make\nbetter benefit of cache. Aggregates naturally read multiple rows to\nreturn a single row's worth of data so you want to target them first.\nThis all comes at a cost of update I/O time and some application\ncomplexity.\n\n> as a subselect to retrieve the number of associated rows to the\ncurrent\n> query. Additionally, we use NOW a lot, primarily to detect the status\nof\n> a\n> date, i.e.:\n\nMight want to check if your application middleware (php?) exposes\nPQntuples()...this is a zero cost way to get the same information.\n \n> Based on feedback, I'm looking at a minor upgrade of our RAID\ncontroller\n> to\n> a 3ware 9000 series (SATA with cache, battery backup optional), and\n> re-configuring it for RAID 10. It's a damn cheap upgrade at around\n$350\n> and\n> an hour of downtime, so I figure that it's worth it for us to give it\na\n> shot.\n\np.s. you can also increase cache efficiency by reducing database size,\nfor example use int2/int4 vs. numerics.\n\nGood luck!\n",
"msg_date": "Wed, 11 Aug 2004 15:49:48 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "> \n> Right. The point is: is your i/o bottle neck on the read side or the\n> write side. With 10-30 inserts/sec and fsync off, it's definitely on\n> the read side. What's interesting is that such a low insert load is\n> causing i/o storm problems. How does your app run with fsync on?\n> \n> With read-bound i/o problems, might want to consider upgrading memory\n> first to get better cache efficiency. You may want to consider Opteron\n> for > 4GB allocations (yummy!).\n> \n> The good news is that read problems are usually solvable by being\n> clever, whereas write problems require hardware.\n> \n\nThe difference with fsync being off makes seems to be that it allows the\nserver to write in groups instead of scattering our INSERT/UPDATE calls all\nover - it helps keep things going. When a checkpoint occurs, reads slow\ndown there. Normal reads are usually quite fast, aside from some reads.\n\nA good example, a comments table where users submit TEXT data. A common\nquery is to find the last 5 comments a user has submitted. The scan, while\nusing an index, takes a considerable amount of time (> 0.5 sec is about as\ngood as it gets). Again, it's using an index on the single WHERE clause\n(userid = int). The field that's used to ORDER BY (timestamp) is also\nindexed.\n\nI'm wondering why our PG server is using so little memory... The system has\n2GB of memory, though only around 200MB of it are used. Is there a PG\nsetting to force more memory usage towards the cache? Additionally, we use\nFreeBSD. I've heard that Linux may manage that memory better, any truth\nthere? Sorry if I'm grabbing at straws here :)\n\n> > One question I do have though - you specifically mentioned NOW() as\n> > something to watch out for, in that it's mutable. We typically use\n> \n> This is specifically with regards to materialized views. Mutable\n> functions cause problems because when they are pushed unto the view,\n> they are refreshed...something to watch out for.\n> \n> The trick with MVs is to increase your filesystem cache efficiency. The\n> big picture is to keep frequently read data in a single place to make\n> better benefit of cache. Aggregates naturally read multiple rows to\n> return a single row's worth of data so you want to target them first.\n> This all comes at a cost of update I/O time and some application\n> complexity.\n> \n> > as a subselect to retrieve the number of associated rows to the\n> current\n> > query. Additionally, we use NOW a lot, primarily to detect the status\n> of\n> > a\n> > date, i.e.:\n> \n> Might want to check if your application middleware (php?) exposes\n> PQntuples()...this is a zero cost way to get the same information.\n> \n\nThanks, I'll look into it. We use C and PHP.\n\n> > Based on feedback, I'm looking at a minor upgrade of our RAID\n> controller\n> > to\n> > a 3ware 9000 series (SATA with cache, battery backup optional), and\n> > re-configuring it for RAID 10. It's a damn cheap upgrade at around\n> $350\n> > and\n> > an hour of downtime, so I figure that it's worth it for us to give it\n> a\n> > shot.\n> \n> p.s. you can also increase cache efficiency by reducing database size,\n> for example use int2/int4 vs. numerics.\n> \n\nI've gone through and optimized data types as much as possible. I'll see\nwhat else we can do w/o causing downtime once PG 8 is ready to go and we can\nchange data types on the fly.\n\nThanks,\n\nJason\n\n",
"msg_date": "Wed, 11 Aug 2004 17:18:27 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
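For the "last five comments by one user" pattern described above, separate single-column indexes on the user id and on the timestamp leave the planner scanning and sorting far more rows than needed; a combined index that matches both the WHERE clause and the sort order lets it stop after five entries. A sketch with assumed table and column names; note that on releases of this vintage the ORDER BY usually has to name both index columns descending for the backward index scan to be chosen:

CREATE INDEX comments_user_time_idx ON comments (userid, posted);

-- Walks comments_user_time_idx backwards and stops after 5 rows:
SELECT *
  FROM comments
 WHERE userid = 11111
 ORDER BY userid DESC, posted DESC
 LIMIT 5;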
{
"msg_contents": "\nOn Aug 11, 2004, at 3:18 PM, Jason Coene wrote:\n>\n> I'm wondering why our PG server is using so little memory... The \n> system has\n> 2GB of memory, though only around 200MB of it are used. Is there a PG\n> setting to force more memory usage towards the cache? Additionally, \n> we use\n> FreeBSD. I've heard that Linux may manage that memory better, any \n> truth\n> there? Sorry if I'm grabbing at straws here :)\n>\n\ni don't know about freebsd, but linux is very aggressive about using \nunused memory for disk cache. we have dedicated linux box running pg \nwith 2gb of memory, about 250mb of memory is being used by processes \n(system+postgres) and shared memory (postgres only), and there is \n1.75gb of disk buffers in use in the kernel. this particular database \nis only about 4gb, so a good portion of the db resides in memory, \ncertainly most of the active set. the disk subsystem is a 6 disk scsi \nu160 raid array which performs pretty well when there is io.\n\n",
"msg_date": "Wed, 11 Aug 2004 15:31:48 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "> I'm wondering why our PG server is using so little memory... The system has\n> 2GB of memory, though only around 200MB of it are used. Is there a PG\n\nThis is the second time you've said this. Surely you're not implying\nthere is 1.8GB Free Memory -- rather than 1.8GB in Buffers or Cache.\n\nSend output of the below:\n\nsysctl vm\n\nsysctl -a | grep buffers\n\ntop | grep -E \"(Mem|Swap):\"\n\n\n",
"msg_date": "Wed, 11 Aug 2004 17:46:24 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "On Wed, 2004-08-11 at 17:31, Brian Hirt wrote:\n> On Aug 11, 2004, at 3:18 PM, Jason Coene wrote:\n> >\n> > I'm wondering why our PG server is using so little memory... The \n> > system has\n> > 2GB of memory, though only around 200MB of it are used. Is there a PG\n> > setting to force more memory usage towards the cache? Additionally, \n> > we use\n> > FreeBSD. I've heard that Linux may manage that memory better, any \n> > truth\n> > there? Sorry if I'm grabbing at straws here :)\n> >\n> \n> i don't know about freebsd, but linux is very aggressive about using \n> unused memory for disk cache. we have dedicated linux box running pg \n\nAggressive indeed.. I'm stuck with the version that has a tendency to\nswap out active processes rather than abandon disk cache -- it gets very\nannoying!\n\n\n",
"msg_date": "Wed, 11 Aug 2004 17:50:10 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "> -----Original Message-----\n> From: Rod Taylor [mailto:[email protected]]\n> Sent: Wednesday, August 11, 2004 5:46 PM\n> To: Jason Coene\n> Cc: 'Merlin Moncure'; Postgresql Performance\n> Subject: Re: [PERFORM] Hardware upgrade for a high-traffic database\n> \n> > I'm wondering why our PG server is using so little memory... The system\n> has\n> > 2GB of memory, though only around 200MB of it are used. Is there a PG\n> \n> This is the second time you've said this. Surely you're not implying\n> there is 1.8GB Free Memory -- rather than 1.8GB in Buffers or Cache.\n\nHi Rod,\n\nI was looking at top and vmstat - which always show under 300MB \"Active\".\nWe may hit 400MB at peak. Everything I see (though this isn't my area of\nexpertise) points to most of the memory simply being unused. Results below,\nam I missing something?\n\nJason\n\n> \n> Send output of the below:\n> \n> sysctl vm\n\nd01> sysctl vm\nvm.vmtotal:\nSystem wide totals computed every five seconds: (values in kilobytes)\n===============================================\nProcesses: (RUNQ: 1 Disk Wait: 0 Page Wait: 0 Sleep: 149)\nVirtual Memory: (Total: 2101614K, Active 440212K)\nReal Memory: (Total: 2023532K Active 327032K)\nShared Virtual Memory: (Total: 14356K Active: 3788K)\nShared Real Memory: (Total: 4236K Active: 2456K)\nFree Memory Pages: 88824K\n\nvm.loadavg: { 0.46 0.41 0.42 }\nvm.v_free_min: 3312\nvm.v_free_target: 13997\nvm.v_free_reserved: 749\nvm.v_inactive_target: 20995\nvm.v_cache_min: 13997\nvm.v_cache_max: 27994\nvm.v_pageout_free_min: 34\nvm.pageout_algorithm: 0\nvm.swap_enabled: 1\nvm.swap_async_max: 4\nvm.dmmax: 32\nvm.nswapdev: 1\nvm.swap_idle_threshold1: 2\nvm.swap_idle_threshold2: 10\nvm.v_free_severe: 2030\nvm.stats.sys.v_swtch: 627853362\nvm.stats.sys.v_trap: 3622664114\nvm.stats.sys.v_syscall: 1638589210\nvm.stats.sys.v_intr: 3250875036\nvm.stats.sys.v_soft: 1930666043\nvm.stats.vm.v_vm_faults: 3197534554\nvm.stats.vm.v_cow_faults: 2999625102\nvm.stats.vm.v_cow_optim: 10093309\nvm.stats.vm.v_zfod: 3603956919\nvm.stats.vm.v_ozfod: 3104475907\nvm.stats.vm.v_swapin: 3353\nvm.stats.vm.v_swapout: 3382\nvm.stats.vm.v_swappgsin: 3792\nvm.stats.vm.v_swappgsout: 7213\nvm.stats.vm.v_vnodein: 14675\nvm.stats.vm.v_vnodeout: 140671\nvm.stats.vm.v_vnodepgsin: 24330\nvm.stats.vm.v_vnodepgsout: 245840\nvm.stats.vm.v_intrans: 3643\nvm.stats.vm.v_reactivated: 35038\nvm.stats.vm.v_pdwakeups: 26984\nvm.stats.vm.v_pdpages: 335769007\nvm.stats.vm.v_dfree: 8\nvm.stats.vm.v_pfree: 1507856856\nvm.stats.vm.v_tfree: 430723755\nvm.stats.vm.v_page_size: 4096\nvm.stats.vm.v_page_count: 512831\nvm.stats.vm.v_free_reserved: 749\nvm.stats.vm.v_free_target: 13997\nvm.stats.vm.v_free_min: 3312\nvm.stats.vm.v_free_count: 968\nvm.stats.vm.v_wire_count: 62039\nvm.stats.vm.v_active_count: 44233\nvm.stats.vm.v_inactive_target: 20995\nvm.stats.vm.v_inactive_count: 343621\nvm.stats.vm.v_cache_count: 21237\nvm.stats.vm.v_cache_min: 13997\nvm.stats.vm.v_cache_max: 27994\nvm.stats.vm.v_pageout_free_min: 34\nvm.stats.vm.v_interrupt_free_min: 2\nvm.stats.vm.v_forks: 45205536\nvm.stats.vm.v_vforks: 74315\nvm.stats.vm.v_rforks: 0\nvm.stats.vm.v_kthreads: 2416\nvm.stats.vm.v_forkpages: 1464383994\nvm.stats.vm.v_vforkpages: 4259727\nvm.stats.vm.v_rforkpages: 0\nvm.stats.vm.v_kthreadpages: 0\nvm.stats.misc.zero_page_count: 709\nvm.stats.misc.cnt_prezero: -972664922\nvm.max_proc_mmap: 34952\nvm.msync_flush_flags: 3\nvm.idlezero_enable: 1\nvm.idlezero_maxrun: 16\nvm.max_launder: 32\nvm.pageout_stats_max: 
13997\nvm.pageout_full_stats_interval: 20\nvm.pageout_stats_interval: 5\nvm.pageout_stats_free_max: 5\nvm.swap_idle_enabled: 0\nvm.defer_swapspace_pageouts: 0\nvm.disable_swapspace_pageouts: 0\nvm.pageout_lock_miss: 0\nvm.zone:\nITEM SIZE LIMIT USED FREE REQUESTS\n\nFFS2 dinode: 256, 0, 30156, 4389, 20093512\nFFS1 dinode: 128, 0, 0, 0, 0\nFFS inode: 140, 0, 30156, 4340, 20093512\nSWAPMETA: 276, 121576, 16, 264, 44599\nripcb: 180, 32780, 0, 132, 289\nhostcache: 88, 15390, 6, 309, 741\nsyncache: 104, 15390, 0, 418, 44592418\ntcptw: 56, 6603, 3, 1204, 224900\ntcpcb: 368, 32769, 136, 4264, 44594153\ninpcb: 180, 32780, 139, 4437, 44594153\nudpcb: 180, 32780, 10, 144, 85953\nunpcb: 140, 32788, 6, 246, 143982\nsocket: 240, 32768, 152, 4248, 44824378\nKNOTE: 64, 0, 0, 434, 7561\nPIPE: 172, 0, 8, 222, 352848\nNFSNODE: 460, 0, 1596, 92, 2419\nNFSMOUNT: 424, 0, 1, 17, 1\nDIRHASH: 1024, 0, 238, 86, 287\nL VFS Cache: 291, 0, 165, 160, 11956\nS VFS Cache: 68, 0, 38283, 3430, 3795133\nNAMEI: 1024, 0, 0, 240, 907013101\nVNODEPOLL: 60, 0, 1, 131, 2\nVNODE: 260, 0, 34104, 36, 34104\ng_bio: 136, 0, 0, 5887, 551700514\nVMSPACE: 236, 0, 152, 987, 45279840\nUPCALL: 44, 0, 0, 0, 0\nKSE: 64, 0, 1224, 202, 1224\nKSEGRP: 120, 0, 1224, 109, 1224\nTHREAD: 312, 0, 1224, 84, 1224\nPROC: 452, 0, 261, 963, 45282231\nFiles: 68, 0, 782, 5413, 719968279\n4096: 4096, 0, 441, 1935, 90066743\n2048: 2048, 0, 237, 423, 25077\n1024: 1024, 0, 23, 157, 448114\n512: 512, 0, 108, 140, 770519\n256: 256, 0, 458, 1102, 70685682\n128: 128, 0, 1904, 1041, 186085712\n64: 64, 0, 5124, 13042, 1404464781\n32: 32, 0, 1281, 1302, 839881182\n16: 16, 0, 842, 1548, 1712031683\nDP fakepg: 72, 0, 0, 0, 0\nPV ENTRY: 28, 2166780, 157829, 769251, 56650653911\nMAP ENTRY: 60, 0, 6716, 33280, 2270740046\nKMAP ENTRY: 60, 65538, 24, 702, 152938\nMAP: 160, 0, 9, 41, 2\nVM OBJECT: 132, 0, 21596, 10654, 1136467083\n128 Bucket: 524, 0, 3115, 0, 0\n64 Bucket: 268, 0, 200, 10, 0\n32 Bucket: 140, 0, 191, 5, 0\n16 Bucket: 76, 0, 49, 3, 0\nUMA Hash: 128, 0, 0, 31, 0\nUMA Slabs: 34, 0, 3095, 95, 0\nUMA Zones: 432, 0, 52, 2, 0\n\nvm.kvm_size: 1069543424\nvm.kvm_free: 364900352\n\n> \n> sysctl -a | grep buffers\n\nd01 > sysctl -a | grep buffers\nvfs.numdirtybuffers: 52\nvfs.lodirtybuffers: 909\nvfs.hidirtybuffers: 1819\nvfs.numfreebuffers: 7146\nvfs.lofreebuffers: 404\nvfs.hifreebuffers: 808\n> \n> top | grep -E \"(Mem|Swap):\"\n> \n\nd01 > top | grep -E \"(Mem|Swap):\"\nMem: 173M Active, 1346M Inact, 242M Wired, 77M Cache, 112M Buf, 5784K Free\nSwap: 4096M Total, 124K Used, 4096M Free\n\n",
"msg_date": "Wed, 11 Aug 2004 18:03:40 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "On Wed, 2004-08-11 at 18:03, Jason Coene wrote:\n> > -----Original Message-----\n> > From: Rod Taylor [mailto:[email protected]]\n> > Sent: Wednesday, August 11, 2004 5:46 PM\n> > To: Jason Coene\n> > Cc: 'Merlin Moncure'; Postgresql Performance\n> > Subject: Re: [PERFORM] Hardware upgrade for a high-traffic database\n> > \n> > > I'm wondering why our PG server is using so little memory... The system\n> > has\n> > > 2GB of memory, though only around 200MB of it are used. Is there a PG\n> > \n> > This is the second time you've said this. Surely you're not implying\n> > there is 1.8GB Free Memory -- rather than 1.8GB in Buffers or Cache.\n> \n> Hi Rod,\n> \n> I was looking at top and vmstat - which always show under 300MB \"Active\".\n> We may hit 400MB at peak. Everything I see (though this isn't my area of\n> expertise) points to most of the memory simply being unused. Results below,\n> am I missing something?\n\nThis looks fine. The memory is not unused (only 5MB is actually empty)\nbut is being used for disk cache.\n\nActive is memory used by programs and would need to be swapped if this\nspace was needed.\n\nInactive is memory that is generally dirty. Disk cache is often here. In\nyour case, you likely write to the same pages you're reading from --\nwhich is why this number is so big. It also explains why a checkpoint is\na killer; a large chunk of this memory set needs to be pushed to disk.\n\nCache is memory used generally for disk cache that is not dirty. It's\nbeen read from the disk and could be cleared immediately if necessary.\n\nWired is memory that cannot be swapped. In your case, Shared Memory is\nprobably Wired (this is good). There is another sysctl to check and set\nwhether it is wired or swappable.\n\n\n\nInteresting (if dry) read:\nhttp://www.freebsd.org/doc/en_US.ISO8859-1/articles/vm-design/index.html\n\n\n",
"msg_date": "Wed, 11 Aug 2004 18:26:41 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "> > Hi Rod,\n> >\n> > I was looking at top and vmstat - which always show under 300MB\n> \"Active\".\n> > We may hit 400MB at peak. Everything I see (though this isn't my area\n> of\n> > expertise) points to most of the memory simply being unused. Results\n> below,\n> > am I missing something?\n> \n> This looks fine. The memory is not unused (only 5MB is actually empty)\n> but is being used for disk cache.\n> \n> Active is memory used by programs and would need to be swapped if this\n> space was needed.\n> \n> Inactive is memory that is generally dirty. Disk cache is often here. In\n> your case, you likely write to the same pages you're reading from --\n> which is why this number is so big. It also explains why a checkpoint is\n> a killer; a large chunk of this memory set needs to be pushed to disk.\n> \n> Cache is memory used generally for disk cache that is not dirty. It's\n> been read from the disk and could be cleared immediately if necessary.\n> \n> Wired is memory that cannot be swapped. In your case, Shared Memory is\n> probably Wired (this is good). There is another sysctl to check and set\n> whether it is wired or swappable.\n> \n> \n> \n> Interesting (if dry) read:\n> http://www.freebsd.org/doc/en_US.ISO8859-1/articles/vm-design/index.html\n> \n\nAh, thanks - I didn't know that Inactive was still being used. I'm glad to\nknow that at least the OS is using up the free memory for disk cache.\nShared memory is Wired, set via sysctl. Thanks for the info! It sounds\nlike adding more memory would help cache more data - I'll look into the\nupgrade.\n\nJason\n\n",
"msg_date": "Wed, 11 Aug 2004 18:53:52 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database"
},
{
"msg_contents": "\"Jason Coene\" <[email protected]> writes:\n> A good example, a comments table where users submit TEXT data. A common\n> query is to find the last 5 comments a user has submitted. The scan, while\n> using an index, takes a considerable amount of time (> 0.5 sec is about as\n> good as it gets). Again, it's using an index on the single WHERE clause\n> (userid = int). The field that's used to ORDER BY (timestamp) is also\n> indexed.\n\nYou mean you are doing\n SELECT ... WHERE userid = 42 ORDER BY timestamp DESC LIMIT 5;\nand hoping that separate indexes on userid and timestamp will get the\njob done? They won't. There are only two possible plans for this,\nneither very good: select all of user 42's posts and sort them, or\nscan timewise backwards through *all* posts looking for the last 5 from\nuser 42.\n\nIf you do this enough to justify a specialized index, I would suggest a\ntwo-column index on (userid, timestamp). You will also need to tweak\nthe query, because the planner is not quite smart enough to deduce that\nsuch an index is applicable to the given sort order:\n SELECT ... WHERE userid = 42 ORDER BY userid DESC, timestamp DESC LIMIT 5;\nThis should generate an index-scan-backwards plan that will execute nigh\ninstantaneously, because it will only fetch the rows you really want.\n\nYou might or might not be able to drop the separate indexes on userid\nand timestamp, depending on what other queries you might have that need\nthem. But you should be paying attention to what plans you are really\ngetting (see EXPLAIN) rather than just assuming that some indexes chosen\nat random will do what you need.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Aug 2004 19:20:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
},
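A minimal sketch of the two-column index Tom describes, using the comments(userid, timestamp) names from this thread (the index name matches the one Jason creates later in the discussion):

-- composite index: the equality column first, then the ordering column
CREATE INDEX comments_ix_userid_timestamp ON comments (userid, "timestamp");

-- restating the leading column in the ORDER BY lets the (7.4-era) planner
-- see that a backward index scan already returns rows in the desired order
SELECT id
  FROM comments
 WHERE userid = 42
 ORDER BY userid DESC, "timestamp" DESC
 LIMIT 5;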
{
"msg_contents": "> You mean you are doing\n> SELECT ... WHERE userid = 42 ORDER BY timestamp DESC LIMIT 5;\n> and hoping that separate indexes on userid and timestamp will get the\n> job done? They won't. There are only two possible plans for this,\n> neither very good: select all of user 42's posts and sort them, or\n> scan timewise backwards through *all* posts looking for the last 5 from\n> user 42.\n\nWow! I did try the method you state below (including the WHERE restricted\ncolumn in the sort by, and creating a two-column index), and it did execute\nmuch faster (even on odd userid's to avoid cached results as much as\npossible).\n\nWe have a lot of:\n\nSELECT whatever\n\tFROM ourtable\n\tWHERE field1 = X\n\tAND field2 = Y\n\tAND field3 = Z\n\tORDER BY id DESC\n\tLIMIT 5\n\nWith indexes:\n\nourtable(id)\nourtable(field1, field2, field3)\n\nIs it standard procedure with postgres to include any fields listed in WHERE\nin the ORDER BY, and create a single index for only the ORDER BY fields (in\norder of appearance, of course)?\n\n> \n> If you do this enough to justify a specialized index, I would suggest a\n> two-column index on (userid, timestamp). You will also need to tweak\n> the query, because the planner is not quite smart enough to deduce that\n> such an index is applicable to the given sort order:\n> SELECT ... WHERE userid = 42 ORDER BY userid DESC, timestamp DESC\n> LIMIT 5;\n> This should generate an index-scan-backwards plan that will execute nigh\n> instantaneously, because it will only fetch the rows you really want.\n> \n> You might or might not be able to drop the separate indexes on userid\n> and timestamp, depending on what other queries you might have that need\n> them. But you should be paying attention to what plans you are really\n> getting (see EXPLAIN) rather than just assuming that some indexes chosen\n> at random will do what you need.\n> \n> \t\t\tregards, tom lane\n> \n\nWe do many varied queries on nearly every table - our data is highly\nrelational, and we have a lot of indexes. I thought the planner would pick\nup the right index via constraints and not require them in ORDER BY...\nEXPLAIN ANALYZE says that the indexes are being used, ala:\n\ngf=# EXPLAIN ANALYZE SELECT id FROM comments WHERE userid = 51 ORDER BY\ntimestamp DESC LIMIT 5;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------------------------------\n Limit (cost=1608.43..1608.45 rows=5 width=8) (actual time=0.292..0.317\nrows=5 loops=1)\n -> Sort (cost=1608.43..1609.45 rows=407 width=8) (actual\ntime=0.287..0.295 rows=5 loops=1)\n Sort Key: \"timestamp\"\n -> Index Scan using comments_ix_userid on comments\n(cost=0.00..1590.79 rows=407 width=8) (actual time=0.031..0.190 rows=35\nloops=1)\n Index Cond: (userid = 51)\n Total runtime: 0.375 ms\n(6 rows)\n\nIs this the wrong procedure? 
Your suggested syntax seems much more\nefficient, but I don't quite understand exactly why, as PG is using our\nexisting indexes...\n\ngf=# EXPLAIN ANALYZE SELECT id FROM comments WHERE userid = 51 ORDER BY\nuserid DESC, timestamp DESC LIMIT 5;\n \nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----\n Limit (cost=0.00..19.90 rows=5 width=12) (actual time=0.040..0.076 rows=5\nloops=1)\n -> Index Scan Backward using comments_ix_userid_timestamp on comments\n(cost=0.00..1620.25 rows=407 width=12) (actual time=0.035..0.054 rows=5\nloops=1)\n Index Cond: (userid = 51)\n Total runtime: 0.134 ms\n(4 rows)\n\nNote: This was done after adding an index on comments (userid, timestamp)\n\nRegards,\n\nJason\n\n",
"msg_date": "Wed, 11 Aug 2004 20:29:04 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
},
{
"msg_contents": "\"Jason Coene\" <[email protected]> writes:\n> We have a lot of:\n\n> SELECT whatever\n> \tFROM ourtable\n> \tWHERE field1 = X\n> \tAND field2 = Y\n> \tAND field3 = Z\n> \tORDER BY id DESC\n> \tLIMIT 5\n\n> With indexes:\n\n> ourtable(id)\n> ourtable(field1, field2, field3)\n\n> Is it standard procedure with postgres to include any fields listed in WHERE\n> in the ORDER BY, and create a single index for only the ORDER BY fields (in\n> order of appearance, of course)?\n\nIt depends. If the X/Y/Z constraint is already pretty selective, then\nit seems sufficient to me to pick up the matching rows (using the\n3-field index), sort them by id, and take the first 5. The case where\nthe extra-index-column trick is useful is where the WHERE clause *isn't*\nreal selective and so a lot of rows would have to be sorted. In your\nprevious example, I imagine you have a lot of prolific posters and so\n\"all posts by userid 42\" can be a nontrivial set. The double-column\nindex lets you skip the sort and just pull out the required rows by\nscanning from the end of the range of userid = 42 entries.\n\n> gf=# EXPLAIN ANALYZE SELECT id FROM comments WHERE userid = 51 ORDER BY\n> timestamp DESC LIMIT 5;\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> -------------------------------------------------------------------\n> Limit (cost=1608.43..1608.45 rows=5 width=8) (actual time=0.292..0.317\n> rows=5 loops=1)\n> -> Sort (cost=1608.43..1609.45 rows=407 width=8) (actual\n> time=0.287..0.295 rows=5 loops=1)\n> Sort Key: \"timestamp\"\n> -> Index Scan using comments_ix_userid on comments\n> (cost=0.00..1590.79 rows=407 width=8) (actual time=0.031..0.190 rows=35\n> loops=1)\n> Index Cond: (userid = 51)\n> Total runtime: 0.375 ms\n> (6 rows)\n\nThis example looks fine, but since userid 51 evidently only has 35\nposts, there's not much time needed to read 'em all and sort 'em. The\nplace where the double-column index will win big is on userids with\nhundreds of posts.\n\nYou have to keep in mind that each index costs time to maintain during\ninserts/updates. So adding an index just because it makes a few queries\na little faster probably isn't a win. You need to make tradeoffs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Aug 2004 22:09:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
},
{
"msg_contents": "On Wed, 11 Aug 2004 20:29:04 -0400, Jason Coene <[email protected]> wrote:\n\n\n\n\n> gf=# EXPLAIN ANALYZE SELECT id FROM comments WHERE userid = 51 ORDER BY\n> timestamp DESC LIMIT 5;\n> QUERY \n> PLAN\n> ----------------------------------------------------------------------------\n> -------------------------------------------------------------------\n> Limit (cost=1608.43..1608.45 rows=5 width=8) (actual time=0.292..0.317\n> rows=5 loops=1)\n> -> Sort (cost=1608.43..1609.45 rows=407 width=8) (actual\n> time=0.287..0.295 rows=5 loops=1)\n> Sort Key: \"timestamp\"\n> -> Index Scan using comments_ix_userid on comments\n> (cost=0.00..1590.79 rows=407 width=8) (actual time=0.031..0.190 rows=35\n> loops=1)\n> Index Cond: (userid = 51)\n> Total runtime: 0.375 ms\n> (6 rows)\n\n\tWell, you have to read it from the bottom.\n\t- Index Scan using comments_ix_userid :\n\tIt selects all records for your user.\n\trows=407 : there are 407 rows.\n\n\t-> Sort (cost=1608.43..1609.45 rows=407 width=8)\n\tIt sorts them to find the 5 more recent.\n\n\tSo basically you grab 407 rows to return only 5, so you do 80x more disk \nI/O than necessary. It is likely that posts from all users are interleaved \nin the table, so this probably translates directly into 407 page fetches.\n\n\tNote : EXPLAIN ANALYZE will only give good results the first time you run \nit. The second time, all data is in the cache, so it looks really faster \nthan it is.\n\n> gf=# EXPLAIN ANALYZE SELECT id FROM comments WHERE userid = 51 ORDER BY\n> userid DESC, timestamp DESC LIMIT 5;\n> QUERY PLAN\n> ----\n> Limit (cost=0.00..19.90 rows=5 width=12) (actual time=0.040..0.076 \n> rows=5\n> loops=1)\n> -> Index Scan Backward using comments_ix_userid_timestamp on comments\n> (cost=0.00..1620.25 rows=407 width=12) (actual time=0.035..0.054 rows=5\n> loops=1)\n> Index Cond: (userid = 51)\n> Total runtime: 0.134 ms\n> (4 rows)\n>\n> Note: This was done after adding an index on comments (userid, timestamp)\n\n\tWell, this one correctly uses the index, fetches 5 rows, and returns them.\n\n\tSo, excluding index page hits, your unoptimized query has >400 page \nfetches, and your optimized one has 5 page fetches. Still wonder why it's \nfaster ?\n\n\tSeq scan is fast when locality of reference is good. In your case, it's \nvery bad.\n",
"msg_date": "Thu, 12 Aug 2004 08:49:37 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
}
] |
[
{
"msg_contents": "Hi,\n\nThis email is picking up a thread from yesterday on INSERTS and INDEXES.\n\nIn this case the question is to use and index or a sequential scan.\n\nI have included the db DDL and SELECT query.\n\nFor each month I have a csv data dump of council property data.\n\nSo the First CD will have almost all unique records.\n From there most properties will already be in the db.\n\nEach line of the csv file populates three tables.\n\ngc_prop\ngc_value\ngc_owner\n\ngc_prop has a one to many relationship with gc_value and gc_owner.\n\nFor-each line we first SELECT from the gc_prop table.\nIf there is no record we INSERT into the gc_prop,gc_value and gc_owner \ntables.\n\nIf there is a matching property we do not need to INSERT into the\ngc_prop table only the gc_value and gc_owner tables.\n\nSo each row will require either:\n\na) One SELECT + three INSERTS\nor\nb) One SELECT + two INSERTS\n\nCREATE TABLE gc_prop\n(\ngc_id serial PRIMARY KEY,\nacc_no numeric(9,0),\nmaster_code varchar(32),\nno_prop numeric(4,0),\nlot_no numeric(9,0),\nplan_no varchar(32),\nvolume_no numeric(8,0),\nfolio_no numeric(8,0),\nstreet_no varchar(32),\nstreet_name varchar(30),\nsuburb_name varchar(30),\nsuburb_postcode numeric(4,0),\naddress1_txt varchar(80),\naddress2_txt varchar(80),\naddress3_txt varchar(80),\naddress4_txt varchar(80),\naddress5_txt varchar(80),\naddress6_txt varchar(80),\nowner_postcode text,\nplanning_zone varchar(80),\nwaterfront_code varchar(80),\nvacant_land varchar(32),\nproperty_area text,\nimprovemeny_type varchar(20),\nbup_indicator varchar(4),\nproperty_name varchar(30),\nlot_quantity text,\npool_indicator varchar(32),\nowner_occupied varchar(32),\nprsh_text varchar(30),\nvg_no varchar(20),\nproperty_part varchar(20),\nlot_no_txt varchar(32),\nplan_qualifier1 varchar(80),\nplan_qualifier2 varchar(80),\nowner_trans_date date,\nlegal_entity varchar(32),\n\"1st_surname_com\" varchar(80),\n\"1st_given_name\" varchar(64),\n\"2nd_surname\" varchar(64),\n\"2nd_given_name\" varchar(64),\neasement_flag varchar(32),\ntrans_code varchar(32)\n);\n\nCREATE TABLE gc_value\n(\ngc_v_id serial PRIMARY KEY,\ngc_id integer NOT NULL, -- foreign key to gc_prop table\ncurrent_valuation numeric(10,0),\nvaluation_date date,\nlast_sale_date date,\nannual_rates numeric(7,0),\nsale_amount numeric(9,0),\ntype_sale varchar(32),\ndate_sale date,\npre_val_date date,\npre_val_amount numeric(10,0),\npre_vg_no varchar(20),\nfuture_val_date varchar(32),\nfut_val_amount numeric(10,0),\nfut_val_no varchar(32)\n);\n\nCREATE TABLE gc_owner\n(\ngc_o_id serial PRIMARY KEY,\ngc_id integer NOT NULL, -- foreign key to gc_prop table\npre_legal_entity varchar(32),\npre_surname_com varchar(76),\npre_given_name varchar(32),\npre_2nd_surname varchar(40),\npre_2nd_given varchar(32),\norig_lot_no numeric(9,0),\norig_lot_txt varchar(32),\norig_prop_txt text,\norig_plan_qualifier1 varchar(32),\norig_plan_qualifier2 varchar(32),\norig_plan_no varchar(32)\n);\n\nCREATE INDEX gc_prop_check ON gc_prop ( \nacc_no,no_prop,lot_no,plan_no,street_no,street_name,suburb_postcode );\n\n//check if this property already exists using gc_prop_check index\n$sql = \"\nSELECT\ngp.gc_id,\ngp.acc_no,\ngp.no_prop,\ngp.lot_no,\ngp.plan_no,\ngp.street_no,\ngp.street_name,\ngp.suburb_postcode,\ngp.owner_trans_date\nFROM gc_prop gp\nWHERE\ngp.acc_no = $acc_no,\nAND\ngp.no_prop = $no_prop,\nAND\ngp.lot_no = $lot_no\nAND\ngp.plan_no = '$plan_no'\nAND\ngp.street_no = '$street_no'\nAND\ngp.street_name = '$street_name'\nAND\ngp.suburb_postcode = 
$suburb_postcode\n\";\n\nDo you think an Index or Seq. scan should be used?\n\nThanks.\nRegards,\nRudi.\n",
"msg_date": "Thu, 12 Aug 2004 10:00:02 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Buld Insert and Index use."
},
{
"msg_contents": "Rudi,\n\n> Do you think an Index or Seq. scan should be used?\n\nThat was way too much data for way too simple a question. ;-)\nThe answer is: depends on how many rows you have. With any significant \nnumber of rows, yes.\n\nHowever, you probably only need to index the first 3-5 columns; that's \nselective enough.\n\n-- \n-Josh Berkus\n \"A developer of Very Little Brain\"\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 11 Aug 2004 17:25:55 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Buld Insert and Index use."
}
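A sketch of Josh's suggestion against the gc_prop DDL above; which three to five columns are selective enough is an assumption that should be checked against the real data:

-- a narrower existence-check index on the leading, most selective columns
CREATE INDEX gc_prop_check ON gc_prop (acc_no, lot_no, plan_no);
ANALYZE gc_prop;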
] |
[
{
"msg_contents": "> This example looks fine, but since userid 51 evidently only has 35\n> posts, there's not much time needed to read 'em all and sort 'em. The\n> place where the double-column index will win big is on userids with\n> hundreds of posts.\n> \n> You have to keep in mind that each index costs time to maintain during\n> inserts/updates. So adding an index just because it makes a few\nqueries\n> a little faster probably isn't a win. You need to make tradeoffs.\n\nIMNSHO, in Jason's case he needs to do everything possible to get his\nfrequently run queries going as quick as possible. ISTM he can give up\na little on the update side, especially since he is running fsync=false.\nA .3-.5 sec query multiplied over 50-100 users running concurrently adds\nup quick. Ideally, you are looking up records based on a key that takes\nyou directly to the first record you want and is pointing to the next\nnumber of records in ascending order. I can't stress enough how\nimportant this is so long as you can deal with the index/update\noverhead. \n\nI don't have a huge amount of experience with this in pg, but one of the\ntricks we do in the ISAM world is a 'reverse date' system, so that you\ncan scan forwards on the key to pick up datetimes in descending order.\nThis is often a win because the o/s cache may assume read/forwards\ngiving you more cache hits. There are a few different ways to do this,\nbut imagine:\n\ncreate table t\n(\n id int,\n ts timestamp default now(),\n iv interval default ('01/01/2050'::timestamp - now())\n);\n\ncreate index t_idx on t(id, iv);\nselect * from t where id = k order by id, iv limit 5;\n\nThe above query should do a much better job pulling up data and should\nbe easier on your cache. A further win might be to cluster the table on\nthis key if the table is really big.\n\nnote: interval is poor type to do this with, because it's a 12 byte type\n(just used it here for demonstration purposes because it's easy). With\na little trickery you can stuff it into a time type or an int4 type\n(even better!). If you want to be really clever you can do it without\nadding any data to your table at all through functional indexes.\n\nSince the planner can use the same index in the extraction and ordering,\nyou get some savings...not much, but worthwhile when applied over a lot\nof users. Knowing when and how to apply multiple key/functional indexes\nwill make you feel like you have 10 times the database you are using\nright now.\n\nMerlin\n",
"msg_date": "Thu, 12 Aug 2004 09:39:46 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
}
] |
[
{
"msg_contents": "> I don't have a huge amount of experience with this in pg, but one of\nthe\n> tricks we do in the ISAM world is a 'reverse date' system, so that you\n> can scan forwards on the key to pick up datetimes in descending order.\n> This is often a win because the o/s cache may assume read/forwards\n> giving you more cache hits. There are a few different ways to do\nthis,\n> but imagine:\n\nI've been thinking more about this and there is even a more optimal way\nof doing this if you are willing to beat on your data a bit. It\ninvolves the use of sequences. Lets revisit your id/timestamp query\ncombination for a message board. The assumption is you are using\ninteger keys for all tables. You probably have something like:\n\ncreate table messages\n(\n user_id \tint4 references users,\n topic_id int4 references topics,\n message_id serial,\n message_time timestamp default now(),\n [...]\n);\n\nThe following suggestion works in two principles: one is that instead of\nusing timestamps for ordering, integers are quicker, and sequences have\na built in ability for reverse-ordering.\n\nLets define:\ncreate sequence message_seq increment -1 start 2147483647 minvalue 0\nmaxvalue 2147483647;\n\nnow we define our table:\ncreate table messages\n(\n user_id int4 references users,\n topic_id int4 references topics,\n message_id int4 default nextval('message_seq') primary key,\n message_time timestamp default now(),\n [...]\n);\n\ncreate index user_message_idx on messages(user_id, message_id);\n-- optional\ncluster user_message_idx messages;\n\nSince the sequence is in descending order, we don't have to do any\ntricks to logically reverse order the table.\n\n-- return last k posts made by user u in descending order;\n\nselect * from messages where user_id = u order by user_id, message_id\nlimit k;\n\n-- return last k posts on a topic\ncreate index topic_message_idx on messages(topic_id, user_id);\nselect * from messages where topic_id = t order by topic_id, message_id\n\na side benefit of clustering is that there is little penalty for\nincreasing k because of read ahead optimization whereas in normal\nscenarios your read time scales with k (forcing small values for k). If\nwe tended to pull up messages by topic more frequently than user, we\nwould cluster on topic_message_idx instead. (if we couldn't decide, we\nmight cluster on message_id or not at all).\n\nThe crucial point is that we are making this one index run really fast\nat the expense of other operations. The other major point is we can use\na sequence in place of a timestamp for ordering. Using int4 vs.\ntimestamp is a minor efficiency win, if you are worried about > 4B rows,\nthen stick with timestamp.\n\nThis all boils down to a central unifying principle: organize your\nindices around your expected access patterns to the data. Sorry if I'm\nbleating on and on about this...I just think there is plenty of\noptimization room left in there :)\n\nMerlin\n\n",
"msg_date": "Thu, 12 Aug 2004 12:58:10 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
},
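Merlin mentions that the same effect is possible without storing an extra column, via a functional (expression) index; a minimal sketch of that variant, reusing the hypothetical messages table from his post (worth confirming with EXPLAIN that the planner matches the expression to the sort):

-- reverse-order an ordinary ascending key by indexing an expression;
-- a forward scan of this index returns each user's newest rows first
CREATE INDEX user_message_rev_idx
    ON messages (user_id, (2147483647 - message_id));

SELECT *
  FROM messages
 WHERE user_id = 42
 ORDER BY user_id, (2147483647 - message_id)
 LIMIT 5;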
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> The following suggestion works in two principles: one is that instead of\n> using timestamps for ordering, integers are quicker,\n\nThe difference would be pretty marginal --- especially if you choose to\nuse bigints instead of ints. (A timestamp is just a float8 or bigint\nunder the hood, and is no more expensive to compare than those datatypes.\nTimestamps *are* expensive to convert for I/O, but comparison does not\nhave to do that.) I wouldn't recommend kluging up your data schema just\nfor that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Aug 2004 13:09:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> wrote:\n\n-- optional\ncluster user_message_idx messages;\n\nwould one not have to repeat this operation regularly, to keep\nany advantage of this ? my impression was that this is a relatively\nheavy operation on a large table.\n\ngnari\n\n\n\n",
"msg_date": "Thu, 12 Aug 2004 18:59:28 -0000",
"msg_from": "\"gnari\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
}
] |
[
{
"msg_contents": "Tom Lane wrote:\n> The difference would be pretty marginal --- especially if you choose\nto\n> use bigints instead of ints. (A timestamp is just a float8 or bigint\n> under the hood, and is no more expensive to compare than those\ndatatypes.\n> Timestamps *are* expensive to convert for I/O, but comparison does not\n> have to do that.) I wouldn't recommend kluging up your data schema\njust\n> for that.\n\nRight (int4 use was assumed). I agree, but it's kind of a 'two birds\nwith one stone' kind of thing, because it's easier to work with reverse\nordering integers than time values. So I claim a measurable win (the\nreal gainer of course being able to select and sort on the same key,\nwhich works on any type), based on the int4-int8 difference, which is a\n33% reduction in key size.\n\nOne claim I don't have the data for is that read-forward is better than\nread-back, but my gut tells me he'll get a better cache hit ratio that\nway. This will be very difficult to measure.\n\nAs for kludging, using a decrementing sequence is not a bad idea if the\ngeneral tendency is to read the table backwards, even if just for\nconceptual reasons. The main kludge is the int4 assumption, which (IMO)\nisn't so bad. He would just have to rebuild the existing p-key in\nreverse order (10$ says his keys are all already int4s), and hopefully\nnot mess with the application code too much.\n\nAt least, it's what I would try if I was in his shoes :)\n\nYMMV\nMerlin\n\n\n\n\n\n\n",
"msg_date": "Thu, 12 Aug 2004 13:48:38 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
}
] |
[
{
"msg_contents": "> would one not have to repeat this operation regularly, to keep\n> any advantage of this ? my impression was that this is a relatively\n> heavy operation on a large table.\n\nYeah, it requires an exclusive lock and a table rebuild. It might be\nuseful to a message board type database since (one presumes) the reads\nwould be concentrated over recently created data, entered after the\ncluster and losing any benefit.\n\nAs far as table size, bigger tables are a larger operation but take\nlonger to get all out of whack. Question is: what percentage of the\ndata turns over between maintenance periods? Plus, there has to be a\nmaintenance period...nobody does anything while the table is clustering.\n\nAlso, a particular method of reading the table has to really dominate as\nfar as user usage pattern. So, it's pretty rare to user cluster, but it\ncan help in some cases.\n\nMerlin\n",
"msg_date": "Thu, 12 Aug 2004 15:27:01 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware upgrade for a high-traffic database "
}
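For reference, the periodic maintenance being discussed looks roughly like this (7.4/8.0 CLUSTER syntax); it takes an exclusive lock on the table for the duration, which is why the maintenance window matters:

-- re-cluster on the index that matches the dominant read pattern
CLUSTER user_message_idx ON messages;
-- refresh planner statistics afterwards
ANALYZE messages;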
] |
[
{
"msg_contents": "Hi All,\n\nWe are having a performance problem with our database. The problem\nexists when we include a constraint in GCTBALLOT. The constraint is as\nfollows:\n\nalter table GCTBALLOT\n add constraint FK_GCTBALLOT_GCTWEBU foreign key (GCTWEBU_SRL)\n references GCTWEBU (SRL)\n on delete restrict on update restrict;\n\nThe two tables that we insert into are the following:\n\nGCTBALLOT:\n\n Table \"cbcca.gctballot\"\n\n Column | Type | \n Modifiers\n------------------+-----------------------------+-----------------------------------------------------------\n srl | integer | not null default\nnextval('cbcca.gctballot_srl_seq'::text)\n gctbwindow_srl | numeric(12,0) | not null\n gctcandidate_srl | numeric(12,0) | not null\n gctwebu_srl | numeric(12,0) |\n gctphoneu_srl | numeric(12,0) |\n ballot_time | timestamp without time zone | not null\n ip_addr | character varying(15) |\nIndexes:\n \"pk_gctballot\" primary key, btree (srl)\n \"i1_gctballot_webusrl\" btree (gctwebu_srl)\nForeign-key constraints:\n \"fk_gctbwindow_gctballot\" FOREIGN KEY (gctbwindow_srl) REFERENCES\ngctbwindow(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"fk_gctcandidate_gctballot\" FOREIGN KEY (gctcandidate_srl)\nREFERENCES gctcandidate(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"fk_gctphoneu_gctballot\" FOREIGN KEY (gctphoneu_srl) REFERENCES\ngctphoneu(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n\nwith the extra constraint:\n\n\"fk_gctballot_gctwebu\" FOREIGN KEY (gctwebu_srl) REFERENCES\ngctwebu(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n\n\n\nGCTWEBU:\n\n Table \"cbcca.gctwebu\"\n Column | Type | \nModifiers\n-----------------+-----------------------------+---------------------------------------------------------\n srl | integer | not null default\nnextval('cbcca.gctwebu_srl_seq'::text)\n gctlocation_srl | numeric(12,0) | not null\n gctagerange_srl | numeric(12,0) | not null\n email | character varying(255) | not null\n uhash | character varying(255) | not null\n sex | character varying(1) | not null\n created_time | timestamp without time zone | not null\nIndexes:\n \"pk_gctwebu\" primary key, btree (srl)\n \"i1_gctwebu_email\" unique, btree (email)\nForeign-key constraints:\n \"fk_gctagerang_gctwebu\" FOREIGN KEY (gctagerange_srl) REFERENCES\ngctagerange(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"fk_gctwebu_gctlocation\" FOREIGN KEY (gctlocation_srl) REFERENCES\ngctlocation(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n\n\nTo begin, GCTBALLOT has 6122546 rows and GCTWEBU has 231444 rows.\n\nNow when we try and insert 100 entries into GCTBALLOT with the extra\nconstraint it \ntakes: 37981 milliseconds\n\nAlso, when we try and insert 100 entries into GCTBALLOT with the extra\nconstraint, \nbut insert 'null' into the column gctwebu_srl it takes: 286\nmilliseconds\n\nHowever when we try and insert 100 entries into GCTBALLOT without the\nextra constraint (no foreign key between GCTBALLOT & GCTWEBU)\nit takes: 471 milliseconds\n\n\nIn summary, inserting into GCTBALLOT without the constraint or\ninserting null for \ngctwebu_srl in GCTBALLOT gives us good performance. 
However, inserting\ninto GCTBALLOT\nwith the constraint and valid gctwebu_srl values gives us poor\nperformance.\n\nAlso, the insert we use is as follows:\n\nINSERT INTO GCTBALLOT (gctbwindow_srl, gctcandidate_srl, gctwebu_srl,\ngctphoneu_srl, \nballot_time, ip_addr) VALUES (CBCCA.gcf_getlocation(?), ?,\nCBCCA.gcf_validvoter(?,?), \nnull, ?, ?);\n\nNOTE: \"gcf_validvoter\" find 'gctweb_srl' value\n\n\"\nCREATE OR REPLACE FUNCTION gcf_validvoter (VARCHAR, VARCHAR) \n RETURNS NUMERIC AS '\nDECLARE\n arg1 ALIAS FOR $1;\n arg2 ALIAS FOR $2;\n return_val NUMERIC;\nBEGIN\n SELECT SRL INTO return_val\n FROM gctwebu\n WHERE EMAIL = arg1\n AND UHASH = arg2;\n\n RETURN return_val;\nEND;\n' LANGUAGE plpgsql;\n\"\n\n\nWhere the question marks are filled in with values in our java code.\n\nWe are puzzled as to why there is this difference in performance when\ninserting b/c we \nbelieve that we have indexed all columns used by this constraint. And\nwe realize that \ninserting 'null' into GCTBALLOT doesn't use this constraint b/c no look\nup is necessary.\nSo this causes good performance. Why is it that when we use this\nconstraint that\nthe performance is effected so much?\n\nAny help would be much appreciated.\nThanks\n\n\nP.S. Even we added an index on 'gctwebu_srl' column and did \n1- \"Analyzed ALL TABLES\"\n2- \"analyze GCTBALLOT(gctwebu_srl);\"\n\nbut still have the same problem!\n\n",
"msg_date": "Thu, 12 Aug 2004 16:09:48 -0400",
"msg_from": "\"Arash Zaryoun\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Problem With Postgresql!"
},
{
"msg_contents": "Arash,\n\n> We are having a performance problem with our database. The problem\n> exists when we include a constraint in GCTBALLOT. The constraint is as\n> follows:\n\nYou posted twice, to three different mailing lists each time. This is \ndiscourteous. Please do not do so again, as people may not help you if they \nfeel you are being rude.\n\nRichard H has posted the solution to your problem.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 13 Aug 2004 10:19:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Performance Problem With Postgresql!"
}
] |
[
{
"msg_contents": "Hi,\n\nis there anything I can doo to speed up inserts? One of my tables gets \nabout 100 new rows every five minutes. And somehow the inserts tend to \ntake more and more time.\n\nAny suggestions welcome.\n\nTIA\n\nUlrich\n\n",
"msg_date": "Fri, 13 Aug 2004 09:47:17 +0200",
"msg_from": "Ulrich Wisser <[email protected]>",
"msg_from_op": true,
"msg_subject": "insert"
}
] |
[
{
"msg_contents": "Tips!\n*Delete indexes and recreate them after the insert.\n*Disable auto-commit\n*Perform a copy will be faster, sure.\n\nBest wishes,\nGuido\n\n> Hi,\n> \n> is there anything I can doo to speed up inserts? One of my tables gets \n> about 100 new rows every five minutes. And somehow the inserts tend to \n> take more and more time.\n> \n> Any suggestions welcome.\n> \n> TIA\n> \n> Ulrich\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Fri, 13 Aug 2004 07:08:00 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert"
},
{
"msg_contents": "\"G u i d o B a r o s i o\" <[email protected]> wrote:\n\n[speeding up 100 inserts every 5 minutes]\n\n> Tips!\n> *Delete indexes and recreate them after the insert.\n\nsounds a bit extreme, for only 100 inserts\n \ngnari\n\n\n\n\n",
"msg_date": "Fri, 13 Aug 2004 10:49:05 -0000",
"msg_from": "\"gnari\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "gnari wrote:\n\n> \"G u i d o B a r o s i o\" <[email protected]> wrote:\n> \n> [speeding up 100 inserts every 5 minutes]\n> \n> \n>>Tips!\n>>*Delete indexes and recreate them after the insert.\n> \n> \n> sounds a bit extreme, for only 100 inserts\n\nwhich fsync method are you using ?\nchange it and see what happen\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Sat, 14 Aug 2004 16:24:08 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
}
] |
[
{
"msg_contents": "As I see it's 100 inserts every 5 minutes, not only 100 inserts.\n\nSure it's extreme for only 100 inserts.\n\nCheers,\nGuido\n\n> \"G u i d o B a r o s i o\" <[email protected]> wrote:\n> \n> [speeding up 100 inserts every 5 minutes]\n> \n> > Tips!\n> > *Delete indexes and recreate them after the insert.\n> \n> sounds a bit extreme, for only 100 inserts\n> \n> gnari\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Fri, 13 Aug 2004 08:16:49 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert"
},
{
"msg_contents": "From: \"G u i d o B a r o s i o\" <[email protected]>:\n\n\n> As I see it's 100 inserts every 5 minutes, not only 100 inserts.\n>\n> Sure it's extreme for only 100 inserts.\n\nI am sorry, I do not quite grasp what you are saying.\nmy understanding was that there are constantly new inserts,\ncoming in bursts of 100 , every 5 minutes.\nI imagined that the indexes were needed in between.\n\nif this is the case, the bunches of 100 inserts should\nbe done inside a transaction (or by 1 COPY statement)\n\nif, on the other hand, the inserts happen independently,\nat a rate of 100 inserts / 5 minutes, then this will not help\n\ngnari\n\n\n\n>\n> Cheers,\n> Guido\n>\n> > \"G u i d o B a r o s i o\" <[email protected]> wrote:\n> >\n> > [speeding up 100 inserts every 5 minutes]\n> >\n> > > Tips!\n> > > *Delete indexes and recreate them after the insert.\n> >\n> > sounds a bit extreme, for only 100 inserts\n> >\n> > gnari\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if\nyour\n> > joining column's datatypes do not match\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Fri, 13 Aug 2004 11:47:27 -0000",
"msg_from": "\"gnari\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "Hi,\n\nmy inserts are done in one transaction, but due to some foreign key \nconstraints and five indexes sometimes the 100 inserts will take more \nthan 5 minutes.\n\n/Ulrich\n\n",
"msg_date": "Fri, 13 Aug 2004 14:10:11 +0200",
"msg_from": "Ulrich Wisser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "\nOn 13/08/2004 13:10 Ulrich Wisser wrote:\n> Hi,\n> \n> my inserts are done in one transaction, but due to some foreign key \n> constraints and five indexes sometimes the 100 inserts will take more \n> than 5 minutes.\n\nTwo possibilities come to mind:\n\na) you need an index on the referenced FK field\nb) you have an index but a type mis-match (e.g, an int4 field referencing \nan int8 field)\n\nEither of these will cause a sequential table scan and poor performance.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Fri, 13 Aug 2004 13:41:15 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "On Fri, 2004-08-13 at 08:10, Ulrich Wisser wrote:\n> Hi,\n> \n> my inserts are done in one transaction, but due to some foreign key \n> constraints and five indexes sometimes the 100 inserts will take more \n> than 5 minutes.\n\nIt is likely that you are missing an index on one of those foreign key'd\nitems.\n\nDo an EXPLAIN ANALYZE SELECT * FROM foreign_table WHERE foreign_col =\n'<insert value>';\n\nFix them until they're quick.\n\n\n",
"msg_date": "Fri, 13 Aug 2004 08:57:56 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
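A concrete form of the check Rod describes, with hypothetical table and column names; the aim is to confirm that each foreign-key lookup is an index scan rather than a sequential scan:

-- check the referenced side for each key used by the inserts
EXPLAIN ANALYZE SELECT * FROM parent_table WHERE parent_id = 12345;

-- for deletes/updates on the parent, check the referencing side as well
EXPLAIN ANALYZE SELECT * FROM child_table WHERE parent_id = 12345;

-- a sequential scan here usually means a missing index on the referencing column
CREATE INDEX child_table_parent_id_idx ON child_table (parent_id);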
{
"msg_contents": "On Fri, Aug 13, 2004 at 08:57:56 -0400,\n Rod Taylor <[email protected]> wrote:\n> On Fri, 2004-08-13 at 08:10, Ulrich Wisser wrote:\n> > Hi,\n> > \n> > my inserts are done in one transaction, but due to some foreign key \n> > constraints and five indexes sometimes the 100 inserts will take more \n> > than 5 minutes.\n> \n> It is likely that you are missing an index on one of those foreign key'd\n> items.\n\nI don't think that is too likely as a foreign key reference must be a\nunique key which would have an index. I think the type mismatch\nsuggestion is probably what the problem is.\nThe current solution is to make the types match. In 8.0.0 it would probably\nwork efficiently as is, though it isn't normal for foreign keys to have a type\nmismatch and he may want to change that anyway.\n",
"msg_date": "Fri, 13 Aug 2004 10:02:43 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "> > It is likely that you are missing an index on one of those foreign \n> > key'd items.\n> \n> I don't think that is too likely as a foreign key reference \n> must be a unique key which would have an index. \n\nI think you must be thinking of primary keys, not foreign keys. All\none-to-many relationships have non-unique foreign keys.\n\n",
"msg_date": "Fri, 13 Aug 2004 17:17:10 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "On Fri, Aug 13, 2004 at 17:17:10 +0100,\n Matt Clark <[email protected]> wrote:\n> > > It is likely that you are missing an index on one of those foreign \n> > > key'd items.\n> > \n> > I don't think that is too likely as a foreign key reference \n> > must be a unique key which would have an index. \n> \n> I think you must be thinking of primary keys, not foreign keys. All\n> one-to-many relationships have non-unique foreign keys.\n\nThe target of the reference needs to have at least a unique index.\nI am not sure if it needs to actually be declared as either a unique\nor primary key, though that is the intention.\n\nThe records doing the referencing don't need (and normally aren't)\nunique.\n",
"msg_date": "Fri, 13 Aug 2004 11:31:34 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert"
},
{
"msg_contents": "Bruno Wolff III <[email protected]> writes:\n> Rod Taylor <[email protected]> wrote:\n>> It is likely that you are missing an index on one of those foreign key'd\n>> items.\n\n> I don't think that is too likely as a foreign key reference must be a\n> unique key which would have an index. I think the type mismatch\n> suggestion is probably what the problem is.\n\nI agree. It is possible to have a lack-of-index problem on the\nreferencing column (as opposed to the referenced column), but that\nnormally only hurts you for deletes from the referenced table.\n\n> The current solution is to make the types match. In 8.0.0 it would probably\n> work efficiently as is, though it isn't normal for foreign keys to have a type\n> mismatch and he may want to change that anyway.\n\n8.0 will not fix this particular issue, as I did not add any numeric-vs-int\ncomparison operators. If we see a lot of complaints we could think\nabout adding such, but for 8.0 only the more common cases such as\nint-vs-bigint are covered.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Aug 2004 14:01:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert "
}
] |
[
{
"msg_contents": "Hi,\n\nWe are having a performance problem with our database. The problem\nexists when we include a constraint in GCTBALLOT. The constraint is as\nfollows:\n\nalter table GCTBALLOT\n add constraint FK_GCTBALLOT_GCTWEBU foreign key (GCTWEBU_SRL)\n references GCTWEBU (SRL)\n on delete restrict on update restrict;\n\nThe two tables that we insert into are the following:\n\nGCTBALLOT:\n\n Table \"cbcca.gctballot\"\n\n Column | Type | \n Modifiers\n------------------+-----------------------------+-----------------------------------------------------------\n srl | integer | not null default\nnextval('cbcca.gctballot_srl_seq'::text)\n gctbwindow_srl | numeric(12,0) | not null\n gctcandidate_srl | numeric(12,0) | not null\n gctwebu_srl | numeric(12,0) |\n gctphoneu_srl | numeric(12,0) |\n ballot_time | timestamp without time zone | not null\n ip_addr | character varying(15) |\nIndexes:\n \"pk_gctballot\" primary key, btree (srl)\n \"i1_gctballot_webusrl\" btree (gctwebu_srl)\nForeign-key constraints:\n \"fk_gctbwindow_gctballot\" FOREIGN KEY (gctbwindow_srl) REFERENCES\ngctbwindow(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"fk_gctcandidate_gctballot\" FOREIGN KEY (gctcandidate_srl)\nREFERENCES gctcandidate(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"fk_gctphoneu_gctballot\" FOREIGN KEY (gctphoneu_srl) REFERENCES\ngctphoneu(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n\nwith the extra constraint:\n\n\"fk_gctballot_gctwebu\" FOREIGN KEY (gctwebu_srl) REFERENCES\ngctwebu(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n\n\n\nGCTWEBU:\n\n Table \"cbcca.gctwebu\"\n Column | Type | \nModifiers\n-----------------+-----------------------------+---------------------------------------------------------\n srl | integer | not null default\nnextval('cbcca.gctwebu_srl_seq'::text)\n gctlocation_srl | numeric(12,0) | not null\n gctagerange_srl | numeric(12,0) | not null\n email | character varying(255) | not null\n uhash | character varying(255) | not null\n sex | character varying(1) | not null\n created_time | timestamp without time zone | not null\nIndexes:\n \"pk_gctwebu\" primary key, btree (srl)\n \"i1_gctwebu_email\" unique, btree (email)\nForeign-key constraints:\n \"fk_gctagerang_gctwebu\" FOREIGN KEY (gctagerange_srl) REFERENCES\ngctagerange(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"fk_gctwebu_gctlocation\" FOREIGN KEY (gctlocation_srl) REFERENCES\ngctlocation(srl) ON UPDATE RESTRICT ON DELETE RESTRICT\n\n\nTo begin, GCTBALLOT has 6122546 rows and GCTWEBU has 231444 rows.\n\nNow when we try and insert 100 entries into GCTBALLOT with the extra\nconstraint it \ntakes: 37981 milliseconds\n\nAlso, when we try and insert 100 entries into GCTBALLOT with the extra\nconstraint, \nbut insert 'null' into the column gctwebu_srl it takes: 286\nmilliseconds\n\nHowever when we try and insert 100 entries into GCTBALLOT without the\nextra constraint (no foreign key between GCTBALLOT & GCTWEBU)\nit takes: 471 milliseconds\n\n\nIn summary, inserting into GCTBALLOT without the constraint or\ninserting null for \ngctwebu_srl in GCTBALLOT gives us good performance. 
However, inserting\ninto GCTBALLOT\nwith the constraint and valid gctwebu_srl values gives us poor\nperformance.\n\nAlso, the insert we use is as follows:\n\nINSERT INTO GCTBALLOT (gctbwindow_srl, gctcandidate_srl, gctwebu_srl,\ngctphoneu_srl, \nballot_time, ip_addr) VALUES (CBCCA.gcf_getlocation(?), ?,\nCBCCA.gcf_validvoter(?,?), \nnull, ?, ?);\n\nNOTE: \"gcf_validvoter\" find 'gctweb_srl' value\n\n\"\nCREATE OR REPLACE FUNCTION gcf_validvoter (VARCHAR, VARCHAR) \n RETURNS NUMERIC AS '\nDECLARE\n arg1 ALIAS FOR $1;\n arg2 ALIAS FOR $2;\n return_val NUMERIC;\nBEGIN\n SELECT SRL INTO return_val\n FROM gctwebu\n WHERE EMAIL = arg1\n AND UHASH = arg2;\n\n RETURN return_val;\nEND;\n' LANGUAGE plpgsql;\n\"\n\n\nWhere the question marks are filled in with values in our java code.\n\nWe are puzzled as to why there is this difference in performance when\ninserting b/c we \nbelieve that we have indexed all columns used by this constraint. And\nwe realize that \ninserting 'null' into GCTBALLOT doesn't use this constraint b/c no look\nup is necessary.\nSo this causes good performance. Why is it that when we use this\nconstraint that\nthe performance is effected so much?\n\n\nThanks\n\n\nP.S. Even we added an index on 'gctwebu_srl' column and did \n1- \"Analyzed ALL TABLES\"\n2- \"analyze GCTBALLOT(gctwebu_srl);\"\n\nbut still have the same problem!\n\n",
"msg_date": "Fri, 13 Aug 2004 10:43:43 -0400",
"msg_from": "\"Arash Zaryoun\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird Database Performance problem!"
},
{
"msg_contents": "Arash Zaryoun wrote:\n> Hi,\n> \n> We are having a performance problem with our database. The problem\n> exists when we include a constraint in GCTBALLOT. The constraint is as\n> follows:\n> \n> alter table GCTBALLOT\n> add constraint FK_GCTBALLOT_GCTWEBU foreign key (GCTWEBU_SRL)\n> references GCTWEBU (SRL)\n> on delete restrict on update restrict;\n> \n> The two tables that we insert into are the following:\n\n> GCTBALLOT:\n> gctwebu_srl | numeric(12,0) |\n\n> GCTWEBU:\n> srl | integer | not null default\n\nYour types don't match. You have a numeric referencing an integer. PG \nprobably isn't using the index (it's smarter about this in 8.0 iirc).\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 13 Aug 2004 16:20:04 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird Database Performance problem!"
},
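A sketch of one way to act on Richard's diagnosis: make the referencing column the same type as the referenced key. The ALTER ... TYPE form shown is 8.0 syntax; on 7.4 the column would have to be rebuilt instead (add a new integer column, copy the data, drop and rename):

ALTER TABLE gctballot
    ALTER COLUMN gctwebu_srl TYPE integer;
-- re-gather statistics after the change
ANALYZE gctballot;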
{
"msg_contents": "\n\n\tReiserFS 4 is (will be) a filesystem that implements transactions.\n\n\tAre there any plans in a future Postgresql version to support a special \nfsync method for Reiser4 which will use the filesystem's transaction \nengine, instead of an old kludge like fsync(), with a possibility of \nvastly enhanced performance ?\n\n\tIs there also a possibility to tell Postgres : \"I don't care if I lose 30 \nseconds of transactions on this table if the power goes out, I just want \nto be sure it's still ACID et al. compliant but you can fsync less often \nand thus be faster\" (with a possibility of setting that on a per-table \nbasis) ?\n\n\tThanks.\n",
"msg_date": "Fri, 13 Aug 2004 18:12:34 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Reiser4"
},
{
"msg_contents": "Pierre,\n\n> \tAre there any plans in a future Postgresql version to support a special \n> fsync method for Reiser4 which will use the filesystem's transaction \n> engine, instead of an old kludge like fsync(), with a possibility of \n> vastly enhanced performance ?\n\nI don't know of any such in progress right now. Why don't you start it? It \nwould have to be an add-in since we support 28 operating systems and Reiser \nis AFAIK Linux-only, but it sounds like an interesting experiment.\n\n> \tIs there also a possibility to tell Postgres : \"I don't care if I lose 30 \n> seconds of transactions on this table if the power goes out, I just want \n> to be sure it's still ACID et al. compliant but you can fsync less often \n> and thus be faster\" (with a possibility of setting that on a per-table \n> basis) ?\n\nNot per-table, no, but otherwise take a look at the Background Writer feature \nof 8.0.\n\n-- \n-Josh Berkus\n \"A developer of Very Little Brain\"\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 13 Aug 2004 11:01:52 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reiser4"
},
{
"msg_contents": "Pierre-Fr�d�ric Caillaud wrote:\n> \tIs there also a possibility to tell Postgres : \"I don't care if I lose 30 \n> seconds of transactions on this table if the power goes out, I just want \n> to be sure it's still ACID et al. compliant but you can fsync less often \n> and thus be faster\" (with a possibility of setting that on a per-table \n> basis) ?\n\nI have been thinking about this. Informix calls it buffered logging and\nit would be a good feature.\n\nAdded to TODO:\n\n* Allow buffered WAL writes and fsync\n\n Instead of guaranteeing recovery of all committed transactions, this\n would provide improved performance by delaying WAL writes and fsync\n so an abrupt operating system restart might lose a few seconds of\n committed transactions but still be consistent. We could perhaps\n remove the 'fsync' parameter (which results in an an inconsistent\n database) in favor of this capability.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 13 Aug 2004 21:30:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Reiser4"
},
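The trade-off in this TODO item later surfaced as the synchronous_commit setting (added well after this thread, in 8.3); shown here only to illustrate how the behaviour is expressed:

-- commits return before their WAL is flushed; an abrupt crash can lose the
-- last moments of committed transactions, but the database stays consistent
-- (unlike running with fsync = off)
SET synchronous_commit = off;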
{
"msg_contents": "Josh Berkus wrote:\n> Pierre,\n> \n> > \tAre there any plans in a future Postgresql version to support a special \n> > fsync method for Reiser4 which will use the filesystem's transaction \n> > engine, instead of an old kludge like fsync(), with a possibility of \n> > vastly enhanced performance ?\n> \n> I don't know of any such in progress right now. Why don't you start it? It \n> would have to be an add-in since we support 28 operating systems and Reiser \n> is AFAIK Linux-only, but it sounds like an interesting experiment.\n> \n> > \tIs there also a possibility to tell Postgres : \"I don't care if I lose 30 \n> > seconds of transactions on this table if the power goes out, I just want \n> > to be sure it's still ACID et al. compliant but you can fsync less often \n> > and thus be faster\" (with a possibility of setting that on a per-table \n> > basis) ?\n> \n> Not per-table, no, but otherwise take a look at the Background Writer feature \n> of 8.0.\n\nActually the fsync of WAL is the big performance issue here. I added a\nTODO item about it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 13 Aug 2004 21:31:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reiser4"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Pierre-Fr�d�ric Caillaud wrote:\n> > \tIs there also a possibility to tell Postgres : \"I don't care if I\n> > lose 30 seconds of transactions on this table if the power goes\n> > out, I just want to be sure it's still ACID et al. compliant but\n> > you can fsync less often and thus be faster\" (with a possibility of\n> > setting that on a per-table basis) ?\n\nThen it would be \"ACI\" compliant.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n\n",
"msg_date": "Sat, 14 Aug 2004 12:57:37 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Reiser4"
}
] |
[
{
"msg_contents": "Arash Zaryoun wrote:\n> Hi Richard,\n> \n> Thanks for your prompt reply. It fixed the problem. \n> Just one more question: Do I need to create an index for FKs? \n\nYou don't _need_ to, but on the referring side (e.g. table GCTBALLOT in \nyour example) PostgreSQL won't create one automatically.\n\nOf course, the primary-key side will already have an index being used as \npart of the constraint.\n\nI've cc:ed the list on this, the question pops up quite commonly.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 13 Aug 2004 20:23:52 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird Database Performance problem!"
}
] |
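A short sketch of the point above: PostgreSQL builds an index for the primary-key side of a foreign-key constraint automatically, but it does not index the referencing column, so that index has to be created by hand if joins or referential-integrity checks need it. The table and column names below are illustrative only (the original GCTBALLOT schema is not shown in this thread):

CREATE TABLE ballot (
    ballot_id  integer PRIMARY KEY,                        -- unique index created automatically for the PK
    voter_id   integer NOT NULL REFERENCES voter (voter_id)
);

-- No index exists on ballot.voter_id yet; add one explicitly so that joins
-- against voter and cascaded delete/update checks can use an index scan.
CREATE INDEX ballot_voter_id_idx ON ballot (voter_id);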
[
{
"msg_contents": "> \tThere is also the fact that NTFS is a very slow filesystem, and\n> Linux is\n> a lot better than Windows for everything disk, caching and IO related. Try\n> to copy some files in NTFS and in ReiserFS...\n\nI'm not so sure I would agree with such a blanket generalization. I find NTFS to be very fast, my main complaint is fragmentation issues...I bet NTFS is better than ext3 at most things (I do agree with you about the cache, thoughO.\n\nI think in very general sense the open source stuff is higher quality but Microsoft benefits from a very tight vertical integration of the system. They added ReadFileScatter and WriteFileScatter to the win32 api specifically to make SQL Server run faster and SQL server is indeed very, very good at i/o.\n\nSQL Server keeps a one file database with blocks collected and written asynchronously. It's a very tight system because they have control over every layer of the system.\n\nKnow your enemy.\n\nThat said, I think transaction based file I/O is 'the way' and if implemented on Reiser4 faster than I/O methodology than offered on windows/ntfs. \n\nMerlin\n",
"msg_date": "Fri, 13 Aug 2004 15:58:25 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fsync vs open_sync"
}
] |
[
{
"msg_contents": "select count(*) FROM items_2004_07_29 as items WHERE true AND index @@ to_tsquery('default', '( audiovox)') ;\n count \n-------\n 4\n(1 row)\n\naers=# reindex index idx_title_2004_07_29;\nREINDEX\naers=# select count(*) FROM items_2004_07_29 as items WHERE true AND index @@ to_tsquery('default', '( audiovox)') ;\n count \n-------\n 2342\n(1 row)\n\n\nHere are 2 searches using a gist index, the first one has a gist index which is not properly created but does not though an error in creation. After doing a few searches I noticed the numbers were way off so I reindexed. The problem was fixed and the search works properly.\n\nIs there a problem with indexing? Do you know of this problem?\nIs there a way to fix it?\n\nWe are getting this error for other index's including primary key indexs where we get OID page not found errors and the index corruption as posted above.\nSometimes when the index is broken it will cause run away searches which eat up all the memory and the server needs to be restarted.\n\nAnyways this all looks to be in index problem, vacuum/analyze does not fix it only reindexing does.\n\nThank you,\n\nAaron\n\n\n\n\n\n\n\n\nselect count(*) FROM items_2004_07_29 as items \nWHERE true AND index @@ to_tsquery('default', '( audiovox)') ; count \n------- 4(1 row)\n \naers=# reindex index \nidx_title_2004_07_29;REINDEXaers=# select count(*) FROM items_2004_07_29 \nas items WHERE true AND index @@ to_tsquery('default', '( audiovox)') \n; count ------- 2342(1 row)\n \nHere are 2 searches using a gist index, the first \none has a gist index which is not properly created but does not though an error \nin creation. After doing a few searches I noticed the numbers were way off \nso I reindexed. The problem was fixed and the search works \nproperly.\n \nIs there a problem with indexing? Do you know \nof this problem?\nIs there a way to fix it?\n \nWe are getting this error for other index's including primary key indexs \nwhere we get OID page not found errors and the index corruption as posted \nabove.\nSometimes when the index is broken it will cause run away searches which \neat up all the memory and the server needs to be restarted.\n \nAnyways this all looks to be in index problem, vacuum/analyze does not fix \nit only reindexing does.\n \nThank you,\n \nAaron",
"msg_date": "Fri, 13 Aug 2004 14:40:26 -0700",
"msg_from": "\"borajetta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "REINDEX needed because of index corruption need help ASAP"
},
{
"msg_contents": "Aaron,\n\n> We are getting this error for other index's including primary key indexs \nwhere we get OID page not found errors and the index corruption as posted \nabove.\n> Sometimes when the index is broken it will cause run away searches which eat \nup all the memory and the server needs to be restarted.\n\nWhat version of PostgreSQL are you using?\nDo you have frequent power-outs on the machine?\nHave you tested for bad hard drive, controller, or memory?\n\nCorruption of any kind on PostgreSQL is not normal. It's usually indicative \nof a hardware problem.\n\n-- \n-Josh Berkus\n \"A developer of Very Little Brain\"\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 13 Aug 2004 15:18:41 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX needed because of index corruption need help ASAP"
},
{
"msg_contents": "\"borajetta\" <[email protected]> writes:\n> We are getting this error for other index's including primary key indexs wh=\n> ere we get OID page not found errors and the index corruption as posted abo=\n> ve.\n> Sometimes when the index is broken it will cause run away searches which ea=\n> t up all the memory and the server needs to be restarted.\n\nIt sounds to me like you have got hardware problems. Get out your\nmemory and disk tests ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Aug 2004 18:23:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX needed because of index corruption need help ASAP "
}
] |
[
{
"msg_contents": "HI all, I'm trying to implement a highly-scalable, high-performance,\nreal-time database replication system to back-up my Postgres database\nas data gets written.\n\nSo far, Mammoth Replicator is looking pretty good but it costs $1000+ . \n\nHas anyone tried Slony-I and other replication systems? Slony-I is\npretty new so I'm a little unsure if it's ready for a prime-time\ncommercial system yet.\n\nSo... wanted to put this out to the experts. Has anyone got any\nrecommendations or had experiences with real-time database replication\nsolutions that don't rely on RAID? The reason why I don't want to\nrely on a hardware solution is because we are renting dedicated\nservers and we don't have access to the boxes, only to software that\ngets installed on the boxes.\n\nThanks,\nChris\n",
"msg_date": "Fri, 13 Aug 2004 18:27:04 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replication: Slony-I vs. Mammoth Replicator vs. ?"
},
{
"msg_contents": "Chris Cheston wrote:\n> HI all, I'm trying to implement a highly-scalable, high-performance,\n> real-time database replication system to back-up my Postgres database\n> as data gets written.\n> \n> So far, Mammoth Replicator is looking pretty good but it costs $1000+ . \n\nYes but it includes 30 days of support and 12 months of upgrades/updates :)\n\n\n> Has anyone tried Slony-I and other replication systems? Slony-I is\n> pretty new so I'm a little unsure if it's ready for a prime-time\n> commercial system yet.\n\nIt really depends on your needs. They are both good systems. Slony-I is \na bit more of a beast to get up and running, and it is a batch \nreplication system that uses triggers. Once it is up and running it \nworks well though.\n\nMammoth Replicator is easy to setup and is integrated into PostgreSQL.\nHowever replicator is 1000+ and doesn't support promoting of slaves \nautomatically (you can do it by hand) like Slony does. Replicator is\nalso live replication.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> So... wanted to put this out to the experts. Has anyone got any\n> recommendations or had experiences with real-time database replication\n> solutions that don't rely on RAID? The reason why I don't want to\n> rely on a hardware solution is because we are renting dedicated\n> servers and we don't have access to the boxes, only to software that\n> gets installed on the boxes.\n> \n> Thanks,\n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL",
"msg_date": "Fri, 13 Aug 2004 18:39:19 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication: Slony-I vs. Mammoth Replicator vs. ?"
},
{
"msg_contents": "On 8/13/2004 9:39 PM, Joshua D. Drake wrote:\n\n> Chris Cheston wrote:\n>> HI all, I'm trying to implement a highly-scalable, high-performance,\n>> real-time database replication system to back-up my Postgres database\n>> as data gets written.\n>> \n>> So far, Mammoth Replicator is looking pretty good but it costs $1000+ . \n> \n> Yes but it includes 30 days of support and 12 months of upgrades/updates :)\n\nThe lead developer of Slony (me) is working for a company that has it \ndeployed in production already and will move more mission critical \nsystems to it very shortly. Guess what will be my top work priority if \nyou spot a bug?\n\n> \n> \n>> Has anyone tried Slony-I and other replication systems? Slony-I is\n>> pretty new so I'm a little unsure if it's ready for a prime-time\n>> commercial system yet.\n> \n> It really depends on your needs. They are both good systems. Slony-I is \n> a bit more of a beast to get up and running, and it is a batch \n> replication system that uses triggers. Once it is up and running it \n> works well though.\n> \n> Mammoth Replicator is easy to setup and is integrated into PostgreSQL.\n> However replicator is 1000+ and doesn't support promoting of slaves \n> automatically (you can do it by hand) like Slony does. Replicator is\n> also live replication.\n\nOnce again, Joshua, would you please explain what you mean with \"batch\" \nand \"live\" replication system? Slony does group multiple \"master\" \ntransactions into one replication transaction to improve performance \n(fewer commits on the slaves). The interval of these groups is \nconfigurable and for high volume DBs it is recommended to use about one \nsecond, which means that all commits that fall into an interval of one \nsecond are replicated in one transaction on the slave. On normal running \nsystems this results in a replication lag of 600 to 800 milliseconds in \naverage. On overloaded systems the asynchronous nature of course allows \nthe slaves to fall behind.\n\nWhat is a usual average replication lag of Mammoth Replicator?\n\nWhat happens to the other existing slaves when you promote by hand? In \nSlony they accept the new master and continue replicating without the \nneed of rebuilding from scratch. Slony has mechanisms to ensure the new \nmaster will be ahead or equal in the replication process at the time it \ntakes over and allows client application updates.\n\nThe Slony documentation is an issue at the moment and the administrative \ntools around it are immature. The replication engine itself exceeds my \nown expectations and performs very robust.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Sat, 14 Aug 2004 11:33:24 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication: Slony-I vs. Mammoth Replicator vs. ?"
},
{
"msg_contents": "\n>\n> Once again, Joshua, would you please explain what you mean with \n> \"batch\" and \"live\" replication system? Slony does group multiple \n> \"master\" transactions into one replication transaction to improve \n> performance (fewer commits on the slaves). The interval of these \n> groups is configurable and for high volume DBs it is recommended to \n> use about one second, which means that all commits that fall into an \n> interval of one second are replicated in one transaction on the slave. \n> On normal running systems this results in a replication lag of 600 to \n> 800 milliseconds in average. On overloaded systems the asynchronous \n> nature of course allows the slaves to fall behind.\n\n\nYour description above is what I considered batch... you are taking a \n\"batch\" of transactions and replicating them versus each transaction. I \nam not saying it is bad in any way. I am just saying it is different \nthat replicator.\n\n> What is a usual average replication lag of Mammoth Replicator?\n>\nObviously it depends on the system, the network connectivity between the \nsystems etc... In our test systems it takes less than 100 ms to \nreplicate the data. Again it depends on the size of the transaction (the \ndata being moved).\n\n> What happens to the other existing slaves when you promote by hand? \n\nThis is something that Slony has over replicator. Currently the new \nmaster will force a full dump to the slaves. Of course this is already \non the road map, thanks to Slony :) and should be resolved by months end.\n\n> The Slony documentation is an issue at the moment and the \n> administrative tools around it are immature. The replication engine \n> itself exceeds my own expectations and performs very robust.\n>\nI have never suggested otherwise. My only comment about maturity is that \ntheir are actually many companies using replicator in production. We \nhave already dealt with the 1.0 blues as they say.\n\nI hope you understand that I, in no way have ever suggested (purposely) \nanything negative about Slony. Only that I believe they serve different \ntechnical solutions.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>\n> Jan\n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n",
"msg_date": "Sat, 14 Aug 2004 09:22:19 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication: Slony-I vs. Mammoth Replicator vs. ?"
},
{
"msg_contents": "On 8/14/2004 12:22 PM, Joshua D. Drake wrote:\n\n> I hope you understand that I, in no way have ever suggested (purposely) \n> anything negative about Slony. Only that I believe they serve different \n> technical solutions.\n\nYou know I never took anything you said negative. I think People here \nneed to know that we two have communicated and collaborated outside of \nthe public mailing lists, that we still do it and that we do not \nconsider each other as opponents. There is a natural overlap in the \nsystems we offer and therefore there is some competition. Software \ndevelopment is a sport. For some professionals this sport happens to be \nthe main source of income. But that shall not spoil the spirit here.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Sat, 14 Aug 2004 13:56:31 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication: Slony-I vs. Mammoth Replicator vs. ?"
},
{
"msg_contents": "Centuries ago, Nostradamus foresaw when [email protected] (\"Joshua D. Drake\") would write:\n> I hope you understand that I, in no way have ever suggested\n> (purposely) anything negative about Slony. Only that I believe they\n> serve different technical solutions.\n\nStipulating that I may have some bias ;-), I still don't find it at\nall clear what the different situations are \"shaped like\" that lead to\nMammoth being forcibly preferable to Slony-I.\n\n(Note that I have a pretty decent understanding about how ERS and\nSlony work, so I'm not too frightened of technicalities... I set up\ninstances of both on Thursday, so I'm pretty up to speed :-).)\n\nWin32 support may be true at the moment, although I have to discount\nthat as we only just got the start of a beta release of native Win32\nsupport for PostgreSQL proper. For that very reason, I had to point\nmy youngest brother who needed \"something better than Access\" to\nFirebird last Saturday; I played with my niece while he was doing the\ninstall. And there is little reason to think that Slony-I won't be\nportable to Win32 given a little interest and effort, particularly\nonce work to make it play well with \"pgxs\" gets done.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"ntlug.org\")\nhttp://www3.sympatico.ca/cbbrowne/multiplexor.html\n\"At Microsoft, it doesn't matter which file you're compiling, only\nwhich flags you #define.\" -- Colin Plumb\n",
"msg_date": "Sat, 14 Aug 2004 15:58:18 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication: Slony-I vs. Mammoth Replicator vs. ?"
},
{
"msg_contents": "Christopher Browne wrote:\n\n>Centuries ago, Nostradamus foresaw when [email protected] (\"Joshua D. Drake\") would write:\n> \n>\n>>I hope you understand that I, in no way have ever suggested\n>>(purposely) anything negative about Slony. Only that I believe they\n>>serve different technical solutions.\n>> \n>>\n>\n>Stipulating that I may have some bias ;-), I still don't find it at\n>all clear what the different situations are \"shaped like\" that lead to\n>Mammoth being forcibly preferable to Slony-I.\n> \n>\nI would choose replicator if:\n\n1. You want ease of setup\n2. You want your each transaction to be replicated at time of commit\n3. Your database is already laden with triggers\n4. You are pushing a very high transactional load*\n\n* Caveat I have no idea how well Slony performs on a system that does \nsay 200,000 transactions\nan hours that are heavily geared toward updates. Replicator performs \nvery well in this scenario.\n\n5. Replicators administrative tools are more mature than Slony (for \nexample you know exactly what state your slaves are in with Replicator).\n\nI would choose Slony if:\n\n1. The fact that it is Open Source matters to you\n2. The auto promotion of slaves is important*\n\n*This will be fixed in a couple of weeks with Replicator\n\nTo be fair, in the real world ---\n\nIt doesn't make a bit of difference which one you choose it really comes \ndown to this:\n\nReplicator is dumb simple to setup. Any halfway talented person can \nsetup replicator\nin 30 minutes with a single master / slave configuration.\n\nSlony is Open Source and thus a little easier on the pocket book initially.\n\nCommand Prompt, will support either one -- so the Replicator is \ncommercially supported\nargument is a little weak here.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n>(Note that I have a pretty decent understanding about how ERS and\n>Slony work, so I'm not too frightened of technicalities... I set up\n>instances of both on Thursday, so I'm pretty up to speed :-).)\n>\n>Win32 support may be true at the moment, although I have to discount\n>that as we only just got the start of a beta release of native Win32\n>support for PostgreSQL proper. For that very reason, I had to point\n>my youngest brother who needed \"something better than Access\" to\n>Firebird last Saturday; I played with my niece while he was doing the\n>install. And there is little reason to think that Slony-I won't be\n>portable to Win32 given a little interest and effort, particularly\n>once work to make it play well with \"pgxs\" gets done.\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\n\n\nChristopher Browne wrote:\n\nCenturies ago, Nostradamus foresaw when [email protected] (\"Joshua D. Drake\") would write:\n \n\nI hope you understand that I, in no way have ever suggested\n(purposely) anything negative about Slony. Only that I believe they\nserve different technical solutions.\n \n\n\nStipulating that I may have some bias ;-), I still don't find it at\nall clear what the different situations are \"shaped like\" that lead to\nMammoth being forcibly preferable to Slony-I.\n \n\nI would choose replicator if:\n\n1. You want ease of setup\n2. 
You want your each transaction to be replicated at time of commit\n3. Your database is already laden with triggers\n4. You are pushing a very high transactional load*\n\n* Caveat I have no idea how well Slony performs on a system that does\nsay 200,000 transactions\nan hours that are heavily geared toward updates. Replicator performs\nvery well in this scenario.\n\n5. Replicators administrative tools are more mature than Slony (for\nexample you know exactly what state your slaves are in with Replicator).\n\nI would choose Slony if:\n\n1. The fact that it is Open Source matters to you\n2. The auto promotion of slaves is important*\n\n*This will be fixed in a couple of weeks with Replicator\n\nTo be fair, in the real world --- \n\nIt doesn't make a bit of difference which one you choose it really\ncomes down to this:\n\nReplicator is dumb simple to setup. Any halfway talented person can\nsetup replicator\nin 30 minutes with a single master / slave configuration.\n\nSlony is Open Source and thus a little easier on the pocket book\ninitially.\n\nCommand Prompt, will support either one -- so the Replicator is\ncommercially supported\nargument is a little weak here. \n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n\n(Note that I have a pretty decent understanding about how ERS and\nSlony work, so I'm not too frightened of technicalities... I set up\ninstances of both on Thursday, so I'm pretty up to speed :-).)\n\nWin32 support may be true at the moment, although I have to discount\nthat as we only just got the start of a beta release of native Win32\nsupport for PostgreSQL proper. For that very reason, I had to point\nmy youngest brother who needed \"something better than Access\" to\nFirebird last Saturday; I played with my niece while he was doing the\ninstall. And there is little reason to think that Slony-I won't be\nportable to Win32 given a little interest and effort, particularly\nonce work to make it play well with \"pgxs\" gets done.\n \n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL",
"msg_date": "Sat, 14 Aug 2004 17:55:05 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication: Slony-I vs. Mammoth Replicator vs. ?"
},
{
"msg_contents": "One more point for your list:\n\nChoose Slony if Replicator doesn't support your platform. :-)\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Mon, 16 Aug 2004 11:12:37 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication: Slony-I vs. Mammoth Replicator vs. ?"
}
] |
[
{
"msg_contents": "I thought this could generate some interesting discussion. Essentially, \nthere are three queries below, two using sub-queries to change the way \nthe randomized information (works first by author and then by work) and \nthe original which simply randomizes out of all works available.\n\nThe one not using sub-queries under EXPLAIN ANALYZE proves itself to be \nless efficient and have a far higher cost then those with the penalty of \na sub-query. Since this seems to be counter to what I have been told \nin the past, I thought I would bring this forward and get some \nenlightenment.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n-----------------------------------------------------------------------\n\nSELECT\n g.GalleryID,\n w.WorkID,\n w.WorkName,\n w.WorkImageThumbnail,\n g.GalleryRating,\n g.GalleryPenName\nFROM ethereal.Work w, ethereal.Gallery g\nWHERE w.GalleryID = g.GalleryID\n AND g.GalleryPrivacy = 'no'\n AND w.WorkImageThumbnail IS NOT NULL\n AND g.PuppeteerLogin = (SELECT PuppeteerLogin\n FROM ethereal.Gallery\n WHERE GalleryType='image'\n GROUP BY PuppeteerLogin\n ORDER BY RANDOM() LIMIT 1)\nORDER BY RANDOM() LIMIT 1\n\n Limit (cost=60.70..60.70 rows=1 width=100) (actual time=1.013..1.013 \nrows=0 loops=1)\n InitPlan\n -> Limit (cost=6.36..6.37 rows=1 width=11) (actual \ntime=0.711..0.713 rows=1 loops=1)\n -> Sort (cost=6.36..6.45 rows=33 width=11) (actual \ntime=0.708..0.708 rows=1 loops=1)\n Sort Key: random()\n -> HashAggregate (cost=5.45..5.53 rows=33 width=11) \n(actual time=0.420..0.553 rows=46 loops=1)\n -> Seq Scan on gallery (cost=0.00..5.30 \nrows=60 width=11) (actual time=0.007..0.227 rows=59 loops=1)\n Filter: ((gallerytype)::text = 'image'::text)\n -> Sort (cost=54.33..54.37 rows=16 width=100) (actual \ntime=1.009..1.009 rows=0 loops=1)\n Sort Key: random()\n -> Nested Loop (cost=0.00..54.01 rows=16 width=100) (actual \ntime=0.981..0.981 rows=0 loops=1)\n -> Seq Scan on gallery g (cost=0.00..5.56 rows=2 \nwidth=24) (actual time=0.855..0.888 rows=1 loops=1)\n Filter: (((galleryprivacy)::text = 'no'::text) AND \n((puppeteerlogin)::text = ($0)::text))\n -> Index Scan using pkwork on \"work\" w \n(cost=0.00..24.10 rows=8 width=80) (actual time=0.080..0.080 rows=0 loops=1)\n Index Cond: (w.galleryid = \"outer\".galleryid)\n Filter: (workimagethumbnail IS NOT NULL)\n Total runtime: 1.211 ms\n\n-----------------------------------------------------------------------\n\nSELECT\n g.GalleryID,\n w.WorkID,\n w.WorkName,\n w.WorkImageThumbnail,\n g.GalleryRating,\n g.GalleryPenName\nFROM ethereal.Work w, ethereal.Gallery g\nWHERE w.GalleryID = g.GalleryID\n AND g.GalleryPrivacy = 'no'\n AND w.WorkImageThumbnail IS NOT NULL\n AND g.GalleryPenName = (SELECT GalleryPenName\n FROM ethereal.Gallery\n WHERE GalleryType='image'\n GROUP BY GalleryPenName\n ORDER BY RANDOM() LIMIT 1)\nORDER BY RANDOM() LIMIT 1\n\n Limit (cost=59.92..59.92 rows=1 width=100) (actual time=0.904..0.906 \nrows=1 loops=1)\n InitPlan\n -> Limit (cost=6.69..6.69 rows=1 width=14) (actual \ntime=0.731..0.733 rows=1 loops=1)\n -> Sort (cost=6.69..6.79 rows=42 width=14) (actual \ntime=0.729..0.729 rows=1 loops=1)\n Sort Key: random()\n -> HashAggregate (cost=5.45..5.56 rows=42 width=14) \n(actual time=0.431..0.568 rows=48 loops=1)\n -> Seq Scan on gallery (cost=0.00..5.30 \nrows=60 width=14) (actual time=0.011..0.233 rows=59 loops=1)\n Filter: ((gallerytype)::text = 'image'::text)\n -> Sort (cost=53.23..53.27 rows=16 width=100) (actual \ntime=0.899..0.899 rows=1 
loops=1)\n Sort Key: random()\n -> Nested Loop (cost=0.00..52.91 rows=16 width=100) (actual \ntime=0.808..0.862 rows=6 loops=1)\n -> Index Scan using idxgallery_pen on gallery g \n(cost=0.00..4.45 rows=2 width=24) (actual time=0.767..0.769 rows=1 loops=1)\n Index Cond: ((gallerypenname)::text = ($0)::text)\n Filter: ((galleryprivacy)::text = 'no'::text)\n -> Index Scan using pkwork on \"work\" w \n(cost=0.00..24.10 rows=8 width=80) (actual time=0.020..0.042 rows=6 loops=1)\n Index Cond: (w.galleryid = \"outer\".galleryid)\n Filter: (workimagethumbnail IS NOT NULL)\n Total runtime: 1.117 ms\n\n-----------------------------------------------------------------------\n\nSELECT\n g.GalleryID,\n w.WorkID,\n w.WorkName,\n w.WorkImageThumbnail,\n g.GalleryRating,\n g.GalleryPenName\nFROM ethereal.Work w, ethereal.Gallery g\nWHERE w.GalleryID = g.GalleryID\n AND g.GalleryType = 'image'\n AND g.GalleryPrivacy = 'no'\n AND w.WorkImageThumbnail IS NOT NULL\nORDER BY RANDOM() LIMIT 1\n\n--------\n Limit (cost=111.73..111.73 rows=1 width=100) (actual \ntime=13.021..13.023 rows=1 loops=1)\n -> Sort (cost=111.73..113.70 rows=786 width=100) (actual \ntime=13.017..13.017 rows=1 loops=1)\n Sort Key: random()\n -> Hash Join (cost=5.55..73.93 rows=786 width=100) (actual \ntime=1.081..8.320 rows=803 loops=1)\n Hash Cond: (\"outer\".galleryid = \"inner\".galleryid)\n -> Seq Scan on \"work\" w (cost=0.00..54.47 rows=817 \nwidth=80) (actual time=0.006..2.207 rows=817 loops=1)\n Filter: (workimagethumbnail IS NOT NULL)\n -> Hash (cost=5.30..5.30 rows=100 width=24) (actual \ntime=0.669..0.669 rows=0 loops=1)\n -> Seq Scan on gallery g (cost=0.00..5.30 \nrows=100 width=24) (actual time=0.020..0.402 rows=100 loops=1)\n Filter: ((galleryprivacy)::text = 'no'::text)\n Total runtime: 13.252 ms\n\n",
"msg_date": "Sun, 15 Aug 2004 03:03:28 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Faster with a sub-query then without"
},
{
"msg_contents": "Martin Foster <[email protected]> writes:\n> The one not using sub-queries under EXPLAIN ANALYZE proves itself to be \n> less efficient and have a far higher cost then those with the penalty of \n> a sub-query. Since this seems to be counter to what I have been told \n> in the past, I thought I would bring this forward and get some \n> enlightenment.\n\nThe ones with the subqueries are not having to form the full join of W\nand G; they just pick a few rows out of G and look up the matching W\nrows.\n\nThe \"subquery penalty\" is nonexistent in this case because the\nsubqueries are not dependent on any variables from the outer query, and\nso they need be evaluated only once, rather than once per outer-query\nrow which is what I suppose you were expecting. This is reflected in\nthe EXPLAIN output: notice they are shown as InitPlans not SubPlans.\nThe outputs of the InitPlans are essentially treated as constants (shown\nas $0 in the EXPLAIN output) and the outer plan is approximately what\nit would be if you'd written WHERE g.field = 'constant' instead of\nWHERE g.field = (select ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Aug 2004 23:55:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Faster with a sub-query then without "
},
{
"msg_contents": "Tom Lane wrote:\n> Martin Foster <[email protected]> writes:\n> \n>>The one not using sub-queries under EXPLAIN ANALYZE proves itself to be \n>>less efficient and have a far higher cost then those with the penalty of \n>>a sub-query. Since this seems to be counter to what I have been told \n>>in the past, I thought I would bring this forward and get some \n>>enlightenment.\n> \n> \n> The ones with the subqueries are not having to form the full join of W\n> and G; they just pick a few rows out of G and look up the matching W\n> rows.\n> \n> The \"subquery penalty\" is nonexistent in this case because the\n> subqueries are not dependent on any variables from the outer query, and\n> so they need be evaluated only once, rather than once per outer-query\n> row which is what I suppose you were expecting. This is reflected in\n> the EXPLAIN output: notice they are shown as InitPlans not SubPlans.\n> The outputs of the InitPlans are essentially treated as constants (shown\n> as $0 in the EXPLAIN output) and the outer plan is approximately what\n> it would be if you'd written WHERE g.field = 'constant' instead of\n> WHERE g.field = (select ...)\n> \n> \t\t\tregards, tom lane\n\nThat would explain it overall. Still, it does seem unusual when one \nputs in additional code, which most literature warns you about and you \nactually gain a speed boost.\n\nThanks!\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sun, 15 Aug 2004 12:01:26 -0400",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Faster with a sub-query then without"
}
] |
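The InitPlan-versus-SubPlan distinction explained in this thread can be sketched roughly as follows, reusing the table and column names from the posted queries; only the shape of the sub-query matters here. In the first statement the sub-query references nothing from the outer query, so it is evaluated once and its result is treated like a constant ($0); in the second it references g.GalleryID from the outer row, so it has to be re-evaluated per row:

-- Uncorrelated sub-query: planned as an InitPlan, run a single time.
SELECT * FROM ethereal.Gallery g
WHERE g.PuppeteerLogin = (SELECT PuppeteerLogin
                          FROM ethereal.Gallery
                          ORDER BY RANDOM() LIMIT 1);

-- Correlated sub-query: planned as a SubPlan, re-run for each outer row.
SELECT * FROM ethereal.Gallery g
WHERE EXISTS (SELECT 1 FROM ethereal.Work w
              WHERE w.GalleryID = g.GalleryID);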
[
{
"msg_contents": "PostgreSQL versions: 7.4.3, 8.0.0beta1\n\nJoins against set-returning functions can be slow. Here's a simple\nexample (in 8.0.0beta1, the gen_series function can be replaced\nwith generate_series):\n\n CREATE FUNCTION gen_series(INTEGER, INTEGER) RETURNS SETOF INTEGER AS '\n DECLARE\n xstart ALIAS FOR $1;\n xend ALIAS FOR $2;\n x INTEGER;\n BEGIN\n FOR x IN xstart .. xend LOOP\n RETURN NEXT x;\n END LOOP;\n \n RETURN;\n END;\n ' LANGUAGE plpgsql IMMUTABLE STRICT;\n \n CREATE TABLE stuff (\n id INTEGER NOT NULL PRIMARY KEY,\n item VARCHAR(32) NOT NULL\n );\n \n INSERT INTO stuff (id, item)\n SELECT id, 'Item ' || id FROM gen_series(1, 100000) AS f(id);\n \n ANALYZE stuff;\n\nHere are two queries; notice how the second query, which uses higher\nnumbers for the join key, is much slower than the first, apparently\ndue to an inefficient index scan:\n\n EXPLAIN ANALYZE\n SELECT stuff.* FROM stuff JOIN gen_series(1, 10) AS f(id) USING (id);\n \n Merge Join (cost=62.33..2544.33 rows=1001 width=17) (actual time=1.398..1.950 rows=10 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Index Scan using stuff_pkey on stuff (cost=0.00..2217.00 rows=100000 width=17) (actual time=0.667..0.860 rows=11 loops=1)\n -> Sort (cost=62.33..64.83 rows=1000 width=4) (actual time=0.646..0.670 rows=10 loops=1)\n Sort Key: f.id\n -> Function Scan on gen_series f (cost=0.00..12.50 rows=1000 width=4) (actual time=0.403..0.478 rows=10 loops=1)\n Total runtime: 2.529 ms\n (7 rows)\n \n EXPLAIN ANALYZE\n SELECT stuff.* FROM stuff JOIN gen_series(99991, 100000) AS f(id) USING (id);\n \n Merge Join (cost=62.33..2544.33 rows=1001 width=17) (actual time=2907.078..2907.618 rows=10 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Index Scan using stuff_pkey on stuff (cost=0.00..2217.00 rows=100000 width=17) (actual time=0.107..2270.722 rows=100000 loops=1)\n -> Sort (cost=62.33..64.83 rows=1000 width=4) (actual time=0.630..0.654 rows=10 loops=1)\n Sort Key: f.id\n -> Function Scan on gen_series f (cost=0.00..12.50 rows=1000 width=4) (actual time=0.392..0.469 rows=10 loops=1)\n Total runtime: 2908.205 ms\n (7 rows)\n\nIf I turn off enable_mergejoin then both queries are fast:\n\n SET enable_mergejoin TO off;\n \n EXPLAIN ANALYZE\n SELECT stuff.* FROM stuff JOIN gen_series(1, 10) AS f(id) USING (id);\n \n Nested Loop (cost=0.00..3038.50 rows=1001 width=17) (actual time=0.600..1.912 rows=10 loops=1)\n -> Function Scan on gen_series f (cost=0.00..12.50 rows=1000 width=4) (actual time=0.395..0.482 rows=10 loops=1)\n -> Index Scan using stuff_pkey on stuff (cost=0.00..3.01 rows=1 width=17) (actual time=0.091..0.107 rows=1 loops=10)\n Index Cond: (stuff.id = \"outer\".id)\n Total runtime: 2.401 ms\n (5 rows)\n \n EXPLAIN ANALYZE\n SELECT stuff.* FROM stuff JOIN gen_series(99991, 100000) AS f(id) USING (id);\n \n Nested Loop (cost=0.00..3038.50 rows=1001 width=17) (actual time=0.586..1.891 rows=10 loops=1)\n -> Function Scan on gen_series f (cost=0.00..12.50 rows=1000 width=4) (actual time=0.394..0.479 rows=10 loops=1)\n -> Index Scan using stuff_pkey on stuff (cost=0.00..3.01 rows=1 width=17) (actual time=0.089..0.105 rows=1 loops=10)\n Index Cond: (stuff.id = \"outer\".id)\n Total runtime: 2.374 ms\n (5 rows)\n\nIs the planner doing something wrong here?\n\nThanks.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Sun, 15 Aug 2004 10:00:14 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow joins against set-returning functions"
},
{
"msg_contents": "Michael Fuhr <[email protected]> writes:\n> Is the planner doing something wrong here?\n\nHard to see how it can be very smart with no idea about what the\nfunction is going to return :-(.\n\nI'd say that the mergejoin plan is actually a good choice given the\nlimited amount of info, because it has the least degradation when the\ninput varies from what you expected. Those \"better\" nestloop plans\ncould easily be very far worse, if the function returned more than a\ntrivial number of rows.\n\nThe reason the two mergejoin cases differ so much is that the scan of\nthe other relation can stop as soon as we've exhausted the function\noutput. Evidently scanning to key 10 doesn't traverse much of\nstuff_pkey while scanning to key 100000 does. The planner is aware of\nthat effect, but with no information about the maximum key value to be\nexpected from the function scan, it can't apply the knowledge.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Aug 2004 13:21:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow joins against set-returning functions "
}
] |
[
{
"msg_contents": "Using this SQL:\n\nEXPLAIN ANALYZE\nSELECT DISTINCT\n sessionid,\n '2004-33' AS \"yearweek\",\n nd.niveau\nINTO TEMP\n distinct_session\nFROM\n httplog h ,niveaudir nd\nWHERE\n hitDateTime>('now'::timestamp with time zone-'1440 min'::interval)\n and h.hostid=(select hostnameid from hostname where hostname='www.forbrug.dk')\n and h.statusid!=404\n and h.pathid=nd.pathid\n;\n\n\nI get this output:\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=5680.00..5766.70 rows=8670 width=26) (actual \ntime=17030.802..20044.614 rows=68213 loops=1)\n InitPlan\n -> Index Scan using hostname_hostname_key on hostname \n(cost=0.00..5.42 rows=1 width=4) (actual time=0.101..0.106 rows=1 loops=1)\n Index Cond: (hostname = 'www.forbrug.dk'::text)\n -> Sort (cost=5674.58..5696.25 rows=8670 width=26) (actual \ntime=17030.792..17689.650 rows=174714 loops=1)\n Sort Key: h.sessionid, '2004-33'::text, nd.niveau\n -> Merge Join (cost=4500.70..5107.48 rows=8670 width=26) \n(actual time=3226.955..3966.011 rows=174714 loops=1)\n Merge Cond: (\"outer\".pathid = \"inner\".pathid)\n -> Index Scan using niveaudir_pathid on niveaudir nd \n(cost=0.00..465.59 rows=22715 width=26) (actual time=0.181..52.248 \nrows=22330 loops=1)\n -> Sort (cost=4500.70..4522.38 rows=8670 width=8) (actual \ntime=3226.666..3443.092 rows=174714 loops=1)\n Sort Key: h.pathid\n -> Index Scan using httplog_hitdatetime on httplog h \n(cost=0.00..3933.61 rows=8670 width=8) (actual time=0.425..1048.428 \nrows=174714 loops=1)\n Index Cond: (hitdatetime > '2004-08-14 \n16:41:16.855829+02'::timestamp with time zone)\n Filter: ((hostid = $0) AND (statusid <> 404))\n Total runtime: 20478.174 ms\n(15 rows)\n\nAs I read it the output tells me what was done during the milliseconds:\n\n0.101..0.106\n0.181..52.248\n0.425..1048.428\n3226.666..3443.092\n3226.955..3966.011\n17030.792..17689.650\n17030.802..20044.614\n\nHowever, there are a few large gaps. What is happening during:\n\n1048.428..3226.666 (2 secs)\n3966.011..17030.792 (13 secs!)\n\nThis is the major part of the time but this is not accounted for in the\nexplain analyze output. It seems PostgreSQL is doing stuff that is not\npart of the query plan. How do I get to know what this \"stuff\" is?\n\n\n/Ole\n",
"msg_date": "Sun, 15 Aug 2004 19:47:53 +0200 (CEST)",
"msg_from": "Ole Tange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help interpreting explain analyze output"
},
{
"msg_contents": "Ole Tange <[email protected]> writes:\n> As I read it the output tells me what was done during the milliseconds:\n\nNo, you have a fundamental misconception here. The notation means that\nthe first output row from a plan step was delivered after X\nmilliseconds, and the last row after Y milliseconds.\n\nThe \"gap\" you are looking at is the time to do the Sort (since a sort\ncan't deliver the first output row until it's finished the sort).\n\nIt is gonna take a while to sort 175000 rows ... but possibly increasing\nsort_mem would help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Aug 2004 14:13:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help interpreting explain analyze output "
},
{
"msg_contents": "On Sun, Aug 15, 2004 at 07:47:53PM +0200, Ole Tange wrote:\n\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=5680.00..5766.70 rows=8670 width=26) (actual \n> time=17030.802..20044.614 rows=68213 loops=1)\n> InitPlan\n> -> Index Scan using hostname_hostname_key on hostname \n> (cost=0.00..5.42 rows=1 width=4) (actual time=0.101..0.106 rows=1 loops=1)\n> Index Cond: (hostname = 'www.forbrug.dk'::text)\n> -> Sort (cost=5674.58..5696.25 rows=8670 width=26) (actual \n> time=17030.792..17689.650 rows=174714 loops=1)\n> Sort Key: h.sessionid, '2004-33'::text, nd.niveau\n> -> Merge Join (cost=4500.70..5107.48 rows=8670 width=26) \n> (actual time=3226.955..3966.011 rows=174714 loops=1)\n> Merge Cond: (\"outer\".pathid = \"inner\".pathid)\n> -> Index Scan using niveaudir_pathid on niveaudir nd \n> (cost=0.00..465.59 rows=22715 width=26) (actual time=0.181..52.248 \n> rows=22330 loops=1)\n> -> Sort (cost=4500.70..4522.38 rows=8670 width=8) (actual \n> time=3226.666..3443.092 rows=174714 loops=1)\n> Sort Key: h.pathid\n> -> Index Scan using httplog_hitdatetime on httplog h \n> (cost=0.00..3933.61 rows=8670 width=8) (actual time=0.425..1048.428 \n> rows=174714 loops=1)\n> Index Cond: (hitdatetime > '2004-08-14 \n> 16:41:16.855829+02'::timestamp with time zone)\n> Filter: ((hostid = $0) AND (statusid <> 404))\n> Total runtime: 20478.174 ms\n> (15 rows)\n> \n> As I read it the output tells me what was done during the milliseconds:\n\nThe first time given is not the time when this stage of the plan starts\nto execute, but the time when it returns its first row. So most of the\ntime in this query is being spent doing the two sorts - in a sort, of\ncourse, most of the work has to be done before any rows can be returned.\n\n\nRichard\n",
"msg_date": "Sun, 15 Aug 2004 19:24:08 +0100",
"msg_from": "Richard Poole <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help interpreting explain analyze output"
},
{
"msg_contents": "On Sun, 15 Aug 2004, Tom Lane wrote:\n\n> Ole Tange <[email protected]> writes:\n> > As I read it the output tells me what was done during the milliseconds:\n> \n> No, you have a fundamental misconception here. The notation means that\n> the first output row from a plan step was delivered after X\n> milliseconds, and the last row after Y milliseconds.\n\nThanks. For a novice tuner like me it would be nice if you could see more\neasily where the time was spent. However, the output is _far_ more\nintuitive that MySQL's.\n\n> It is gonna take a while to sort 175000 rows ... but possibly increasing\n> sort_mem would help.\n\nIt didn't. However, I could reformulate the DISTINCT query as a GROUP BY\non all the selected fields and this uses Hash aggregate which is far\nfaster.\n\nNow I am curious: Why isn't DISTINCT implemented using a Hash aggregate?\n\n\n/Ole\n",
"msg_date": "Sun, 15 Aug 2004 21:08:19 +0200 (CEST)",
"msg_from": "Ole Tange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help interpreting explain analyze output "
},
{
"msg_contents": "Ole Tange <[email protected]> writes:\n> Now I am curious: Why isn't DISTINCT implemented using a Hash aggregate?\n\nPartly lack of round tuits, partly the fact that it is closely\nintertwined with ORDER BY and I'm not sure what side-effects would\narise from separating them. In particular, the DISTINCT ON special\ncase stops making any sense at all if it's not tied to a sort/uniq\nunderlying implementation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Aug 2004 15:40:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help interpreting explain analyze output "
}
] |
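The rewrite Ole Tange mentions in this thread (DISTINCT reformulated as GROUP BY so that a HashAggregate can replace the sort/unique step) would look roughly like this against the httplog query from the start of the thread; the statement is abbreviated to the join and the grouped columns, since only the shape of the change is the point:

-- DISTINCT form: duplicate removal is done with a Sort followed by Unique.
SELECT DISTINCT sessionid, '2004-33' AS yearweek, nd.niveau
FROM httplog h, niveaudir nd
WHERE h.pathid = nd.pathid;

-- Equivalent GROUP BY form: the planner may use a HashAggregate instead,
-- avoiding the expensive sort of ~175000 rows.
SELECT sessionid, '2004-33' AS yearweek, nd.niveau
FROM httplog h, niveaudir nd
WHERE h.pathid = nd.pathid
GROUP BY sessionid, nd.niveau;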
[
{
"msg_contents": "Hi all, \n\nI'm running postgres 7.3.4 on a quad Xeon 2.8 GHz with \nMem: 1057824768 309108736 748716032 0 12242944 256413696\nSwap: 518053888 8630272 509423616\n\non Linux version 2.4.26-custom \n\nData directory is mounted with noatime.\n\nNothing else but one 11GB database is running on this machine.\nWhen the database was created, I changed the following defaults :\nshared_buffers = 24415\nsort_mem = 5120\nvacuum_mem = 10240\ncommit_delay = 5000\ncommit_siblings = 100\n\nThese settings worked fine, but were not optimal, I thought, and processing\nstuff on this database was a bit slow. The machine is not nearly used to it's\ncapacity, and I realized that disk IO is what's slowing me down. So I\ndecided to give postgres more shared memory and much more sort memory,\nas it does a lot of \"group by'\"s and \"order by\"'s during the nightly processing.\nThese were the new settings I tried :\nshared_buffers = 61035\nsort_mem = 97657\n\nI thought because it's only one process that runs queries exclusively at night,\nI should be able to set the sort_mem this high without worrying about running\nout of memory. \n\nIt seems I was mistaking, as I started getting these kind of errors in dmesg :\nVM: killing process postmaster\n__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)\n__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)\nVM: killing process postmaster\n\nand I kept on getting these postgres errors :\nERROR: Index ???? is not a btree\n\nI systematically reduced the shared buffers back down to 24415, and this kept\non happening. As soon as I reduced sort_mem back to under 10000,the problem\nstopped. But the database is just as slow as before. (By slow I mean not as fast as it should\nbe on such a powerful machine compared to much worse machines running the same processes)\n\nWhat can I do to make this database run faster on this machine.\nCan anyone suggest how I would go about speeding up this database. \n\nI need to prepare a database three times the size of this one, running the same processes,\nand I don't know what improvements I can do on hardware to make this possible.\n\nOn the current machine I can easily get another 1GB or 2GB of memory, but will that help at all?\nWithout going into the details of exactly the queries that run on this machine, what would be needed to\nmake postgres run very fast on this machine?\n\nPlease help.\n\nKind Regards\nStefan\n",
"msg_date": "Mon, 16 Aug 2004 17:12:34 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange problems with more memory."
},
{
"msg_contents": "Stef <[email protected]> writes:\n> It seems I was mistaking, as I started getting these kind of errors in dmesg :\n> VM: killing process postmaster\n> __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)\n> __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)\n> VM: killing process postmaster\n\nThis looks like the infamous OOM-kill kernel bug^H^H^Hfeature. Turn off\nmemory overallocation in your kernel to get more stable behavior when\npushing the limits of available memory.\n\n> But the database is just as slow as before. (By slow I mean not as\n> fast as it should be on such a powerful machine compared to much worse\n> machines running the same processes)\n\nIf your concern is with a single nightly process, then that quad Xeon is\ndoing squat for you, because only one of the processors will be working.\nSee if you can divide up the processing into several jobs that can run\nin parallel. (Of course, if the real problem is that you are disk I/O\nbound, nothing will help except better disk hardware. Way too many\npeople think they should buy a super-fast CPU and attach it to\nconsumer-grade IDE disks. For database work you're usually better off\nspending your money on good disks...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Aug 2004 11:41:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange problems with more memory. "
},
{
"msg_contents": "Tom Lane mentioned :\n=> Turn off\n=> memory overallocation in your kernel to get more stable behavior when\n=> pushing the limits of available memory.\n\nI think this will already help a lot.\nThanks!!\n\n=> If your concern is with a single nightly process, then that quad Xeon is\n=> doing squat for you, because only one of the processors will be working.\n=> See if you can divide up the processing into several jobs that can run\n=> in parallel. (Of course, if the real problem is that you are disk I/O\n=> bound, nothing will help except better disk hardware. Way too many\n=> people think they should buy a super-fast CPU and attach it to\n=> consumer-grade IDE disks. For database work you're usually better off\n=> spending your money on good disks...)\n\nGot 3 10000 rpm SCSI raid5 on here. I doubt I will get much better than that\nwithout losing both arms and legs... \n\nI think I'll try and even out the disk IO a bit and get 4 processes running in parallel.\nAt least I can move forward again.\n\nThanks again!\n\nKind Regards\nStefan\n",
"msg_date": "Mon, 16 Aug 2004 18:23:14 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange problems with more memory."
},
{
"msg_contents": "If your nightly process is heavily read-only, then raid5 is probably \nfine. If however, there is a significant write component then it would \nperhaps be worth getting another disk and converting to raid10 \n(alternatively - see previous postings about raid cards with on-board \ncache). Are you seeing a lot of write activity?\n\nNote that it is possible for a SELECT only workload to generate \nsignificant write activity - if the resulting datasets are too large for \nmemory sorting or hashing. I'm *guessing* that with an 11G database and \n1G (or was that 2G?) of ram that it is possible to overflow whatever \nyour sort_mem is set to.\n\n\ncheers\n\nMark\n\nStef wrote:\n\n>Got 3 10000 rpm SCSI raid5 on here. I doubt I will get much better than that\n>without losing both arms and legs... \n>\n> \n>\n",
"msg_date": "Tue, 17 Aug 2004 18:26:15 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange problems with more memory."
}
] |
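One way to get the larger sorts Stef wanted without raising sort_mem for the whole server (which is what triggered the out-of-memory kills described in this thread) is to raise it only in the session that runs the nightly batch, since in 7.3/7.4 sort_mem can be set per connection. A rough sketch; the value below is purely illustrative, and each sort or hash step in a query can claim up to this amount on its own, so it still has to leave headroom:

-- In the nightly job's connection only:
SET sort_mem = 65536;    -- in kB; the server-wide default in postgresql.conf stays untouched

-- ... run the heavy GROUP BY / ORDER BY queries here ...

RESET sort_mem;          -- back to the postgresql.conf value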
[
{
"msg_contents": "Hello,\n\nI have a dedicated server for my posgresql database :\n\nP4 2.4 GHZ\nHDD IDE 7200 rpm\n512 DDR 2700\n\nI have a problem whith one table of my database :\n\nCREATE SEQUENCE \"base_aveugle_seq\" START 1;\nCREATE TABLE \"base_aveugle\" (\n \"record_id\" integer DEFAULT nextval('\"base_aveugle_seq\"'::text) NOT NULL,\n \"dunsnumber\" integer NOT NULL,\n \"cp\" text NOT NULL,\n \"tel\" text NOT NULL,\n \"fax\" text NOT NULL,\n \"naf\" text NOT NULL,\n \"siege/ets\" text NOT NULL,\n \"effectif\" integer NOT NULL,\n \"ca\" integer NOT NULL,\n Constraint \"base_aveugle_pkey\" Primary Key (\"record_id\")\n);\nCREATE INDEX base_aveugle_dunsnumber_key ON base_aveugle USING btree (dunsnumber);\nCREATE INDEX base_aveugle_cp_key ON base_aveugle USING btree (cp);\nCREATE INDEX base_aveugle_naf_key ON base_aveugle USING btree (naf);\nCREATE INDEX base_aveugle_effectif_key ON base_aveugle USING btree (effectif);\n\n\nThis table contains 5 000 000 records\n\nI have a PHP application which often makes queries on this table (especially on the \"cp\",\"naf\",\"effectif\" fields)\n\nQuerries are like :\n select (distint cp) from base_aveugle where cp='201A' and effectif between 1 and 150\n select (*) from base_aveugle where naf in ('721A','213F','421K') and cp in ('54210','21459','201A') and effectif < 150\n\nI think it is possible to optimize the performance of this queries before changing the hardware (I now I will...) but I don't know how, even after having read lot of things about postgresql ...\n\nThanks ;) \n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.737 / Virus Database: 491 - Release Date: 11/08/2004\n\n\n\n\n\n\nHello,\n \nI have a dedicated server for my \nposgresql database :\n \nP4 2.4 GHZ\nHDD IDE 7200 rpm\n512 DDR 2700\n \nI have a problem whith one table of \nmy database :\n \nCREATE SEQUENCE \"base_aveugle_seq\" START \n1;CREATE TABLE \"base_aveugle\" ( \"record_id\" integer DEFAULT \nnextval('\"base_aveugle_seq\"'::text) NOT NULL, \"dunsnumber\" integer NOT \nNULL, \"cp\" text NOT NULL, \"tel\" text NOT NULL, \"fax\" \ntext NOT NULL, \"naf\" text NOT NULL, \"siege/ets\" text NOT \nNULL, \"effectif\" integer NOT NULL, \"ca\" integer NOT \nNULL, Constraint \"base_aveugle_pkey\" Primary Key \n(\"record_id\"));CREATE INDEX base_aveugle_dunsnumber_key ON base_aveugle \nUSING btree (dunsnumber);CREATE INDEX base_aveugle_cp_key ON base_aveugle \nUSING btree (cp);CREATE INDEX base_aveugle_naf_key ON base_aveugle USING \nbtree (naf);CREATE INDEX base_aveugle_effectif_key ON base_aveugle USING \nbtree (effectif);\n \n \nThis table contains 5 000 000 \nrecords\n \nI have a PHP application which \noften makes queries on this table (especially on the \"cp\",\"naf\",\"effectif\" \nfields)\n \nQuerries are like :\n \nselect (distint cp) from base_aveugle where cp='201A' and effectif between 1 and \n150\n \nselect (*) from base_aveugle where naf in ('721A','213F','421K') and cp in \n('54210','21459','201A') and effectif < 150\n \nI think it is possible to optimize \nthe performance of this queries before changing the hardware (I now I \nwill...) but I don't know how, even after having read lot of things about \npostgresql ...\n \nThanks ;) \n \n---Outgoing mail is \ncertified Virus Free.Checked by AVG anti-virus system (http://www.grisoft.com).Version: 6.0.737 / \nVirus Database: 491 - Release Date: 11/08/2004",
"msg_date": "Tue, 17 Aug 2004 15:30:29 +0200",
"msg_from": "\"olivier HARO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "General performance problem!"
},
{
"msg_contents": "olivier HARO wrote:\n\n> Hello,\n> \n> I have a dedicated server for my posgresql database :\n> \n> P4 2.4 GHZ\n> HDD IDE 7200 rpm\n> 512 DDR 2700\n> \n> I have a problem whith one table of my database :\n> \n> CREATE SEQUENCE \"base_aveugle_seq\" START 1;\n> CREATE TABLE \"base_aveugle\" (\n> \"record_id\" integer DEFAULT nextval('\"base_aveugle_seq\"'::text) NOT NULL,\n> \"dunsnumber\" integer NOT NULL,\n> \"cp\" text NOT NULL,\n> \"tel\" text NOT NULL,\n> \"fax\" text NOT NULL,\n> \"naf\" text NOT NULL,\n> \"siege/ets\" text NOT NULL,\n> \"effectif\" integer NOT NULL,\n> \"ca\" integer NOT NULL,\n> Constraint \"base_aveugle_pkey\" Primary Key (\"record_id\")\n> );\n> CREATE INDEX base_aveugle_dunsnumber_key ON base_aveugle USING btree \n> (dunsnumber);\n> CREATE INDEX base_aveugle_cp_key ON base_aveugle USING btree (cp);\n> CREATE INDEX base_aveugle_naf_key ON base_aveugle USING btree (naf);\n> CREATE INDEX base_aveugle_effectif_key ON base_aveugle USING btree \n> (effectif);\n> \n> \n> This table contains 5 000 000 records\n> \n> I have a PHP application which often makes queries on this table \n> (especially on the \"cp\",\"naf\",\"effectif\" fields)\n> \n> Querries are like :\n> select (distint cp) from base_aveugle where cp='201A' and effectif \n> between 1 and 150\n> select (*) from base_aveugle where naf in ('721A','213F','421K') \n> and cp in ('54210','21459','201A') and effectif < 150\n> \n> I think it is possible to optimize the performance of this queries \n> before changing the hardware (I now I will...) but I don't know how, \n> even after having read lot of things about postgresql ...\n\nShow us a explain analyze for that queries.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 17 Aug 2004 15:48:29 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance problem!"
},
{
"msg_contents": "olivier HARO wrote:\n> This table contains 5 000 000 records\n> \n> I have a PHP application which often makes queries on this table (especially on the \"cp\",\"naf\",\"effectif\" fields)\n> \n> Querries are like :\n> select (distint cp) from base_aveugle where cp='201A' and effectif between 1 and 150\n> select (*) from base_aveugle where naf in ('721A','213F','421K') and cp in ('54210','21459','201A') and effectif < 150\n\nWe'll need to know what version of PostgreSQL you're using and also what \nthe output of EXPLAIN ANALYZE shows for your example queries.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 17 Aug 2004 15:16:14 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance problem!"
}
] |
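The replies above ask for EXPLAIN ANALYZE output. A minimal sketch of how to capture it in psql for the two queries in question follows; it assumes the table exactly as posted, and the original shorthand "select (distint cp)" / "select (*)" is written out here as DISTINCT and count(*):

    EXPLAIN ANALYZE
        SELECT DISTINCT cp FROM base_aveugle
        WHERE cp = '201A' AND effectif BETWEEN 1 AND 150;

    EXPLAIN ANALYZE
        SELECT count(*) FROM base_aveugle
        WHERE naf IN ('721A','213F','421K')
          AND cp IN ('54210','21459','201A')
          AND effectif < 150;

EXPLAIN ANALYZE runs the query and reports actual row counts and per-node timings, which is what the follow-ups in this thread work from.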
[
{
"msg_contents": "Hi,\n \nMake multi-column indexes, using the columns from your most typical queries, putting the most selective columns first (ie; you don't need to make indexes with columns in the same order as they are used in the query).\n \nFor instance, an index on cp, effectif could likely benefit both queries; same for an index on cp, effectif, naf. (You'd need only one of these indexes I think, not both. Experiment to find out which one gives you most benefit in your queries, vs. the slowdown in inserts).\nPerhaps some of the single-column keys can be dropped.\n \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of olivier HARO\nSent: dinsdag 17 augustus 2004 15:30\nTo: [email protected]\nSubject: [PERFORM] General performance problem!\n\n\nHello,\n \nI have a dedicated server for my posgresql database :\n \nP4 2.4 GHZ\nHDD IDE 7200 rpm\n512 DDR 2700\n \nI have a problem whith one table of my database :\n \nCREATE SEQUENCE \"base_aveugle_seq\" START 1;\nCREATE TABLE \"base_aveugle\" (\n \"record_id\" integer DEFAULT nextval('\"base_aveugle_seq\"'::text) NOT NULL,\n \"dunsnumber\" integer NOT NULL,\n \"cp\" text NOT NULL,\n \"tel\" text NOT NULL,\n \"fax\" text NOT NULL,\n \"naf\" text NOT NULL,\n \"siege/ets\" text NOT NULL,\n \"effectif\" integer NOT NULL,\n \"ca\" integer NOT NULL,\n Constraint \"base_aveugle_pkey\" Primary Key (\"record_id\")\n);\nCREATE INDEX base_aveugle_dunsnumber_key ON base_aveugle USING btree (dunsnumber);\nCREATE INDEX base_aveugle_cp_key ON base_aveugle USING btree (cp);\nCREATE INDEX base_aveugle_naf_key ON base_aveugle USING btree (naf);\nCREATE INDEX base_aveugle_effectif_key ON base_aveugle USING btree (effectif);\n \n \nThis table contains 5 000 000 records\n \nI have a PHP application which often makes queries on this table (especially on the \"cp\",\"naf\",\"effectif\" fields)\n \nQuerries are like :\n select (distint cp) from base_aveugle where cp='201A' and effectif between 1 and 150\n select (*) from base_aveugle where naf in ('721A','213F','421K') and cp in ('54210','21459','201A') and effectif < 150\n \nI think it is possible to optimize the performance of this queries before changing the hardware (I now I will...) but I don't know how, even after having read lot of things about postgresql ...\n \nThanks ;) \n \n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system ( http://www.grisoft.com).\nVersion: 6.0.737 / Virus Database: 491 - Release Date: 11/08/2004\n\n\n\n\n\n\n\n\nHi,\n \nMake \nmulti-column indexes, using the columns from your most typical queries, putting \nthe most selective columns first (ie; you don't need to make indexes with \ncolumns in the same order as they are used in the query).\n \nFor \ninstance, an index on cp, effectif could likely benefit both queries; same for \nan index on cp, effectif, naf. (You'd need only one of these indexes I think, \nnot both. Experiment to find out which one gives you most benefit in your \nqueries, vs. 
the slowdown in inserts).\nPerhaps some of the single-column keys can be \ndropped.\n \n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]]On Behalf Of olivier \n HAROSent: dinsdag 17 augustus 2004 15:30To: \n [email protected]: [PERFORM] General \n performance problem!\nHello,\n \nI have a dedicated server for my \n posgresql database :\n \nP4 2.4 GHZ\nHDD IDE 7200 rpm\n512 DDR 2700\n \nI have a problem whith one table \n of my database :\n \nCREATE SEQUENCE \"base_aveugle_seq\" START \n 1;CREATE TABLE \"base_aveugle\" ( \"record_id\" integer DEFAULT \n nextval('\"base_aveugle_seq\"'::text) NOT NULL, \"dunsnumber\" integer \n NOT NULL, \"cp\" text NOT NULL, \"tel\" text NOT \n NULL, \"fax\" text NOT NULL, \"naf\" text NOT \n NULL, \"siege/ets\" text NOT NULL, \"effectif\" integer NOT \n NULL, \"ca\" integer NOT NULL, Constraint \"base_aveugle_pkey\" \n Primary Key (\"record_id\"));CREATE INDEX base_aveugle_dunsnumber_key ON \n base_aveugle USING btree (dunsnumber);CREATE INDEX base_aveugle_cp_key ON \n base_aveugle USING btree (cp);CREATE INDEX base_aveugle_naf_key ON \n base_aveugle USING btree (naf);CREATE INDEX base_aveugle_effectif_key ON \n base_aveugle USING btree (effectif);\n \n \nThis table contains 5 000 000 \n records\n \nI have a PHP application which \n often makes queries on this table (especially on the \"cp\",\"naf\",\"effectif\" \n fields)\n \nQuerries are like :\n \n select (distint cp) from base_aveugle where cp='201A' and effectif between 1 \n and 150\n \n select (*) from base_aveugle where naf in ('721A','213F','421K') and cp in \n ('54210','21459','201A') and effectif < 150\n \nI think it is possible to \n optimize the performance of this queries before changing the hardware (I \n now I will...) but I don't know how, even after having read lot of things \n about postgresql ...\n \nThanks ;) \n \n---Outgoing mail is \n certified Virus Free.Checked by AVG anti-virus system (http://www.grisoft.com).Version: 6.0.737 \n / Virus Database: 491 - Release Date: \n11/08/2004",
"msg_date": "Tue, 17 Aug 2004 15:57:25 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance problem!"
}
] |
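A sketch of the composite indexes suggested above, against the table from the earlier post; the index names are illustrative, and which variant wins (and whether the single-column indexes can then be dropped) is something to verify with EXPLAIN ANALYZE, as the message says:

    CREATE INDEX base_aveugle_cp_eff_idx ON base_aveugle USING btree (cp, effectif);
    -- or, to also cover the naf filter of the second query:
    CREATE INDEX base_aveugle_cp_eff_naf_idx ON base_aveugle USING btree (cp, effectif, naf);

    -- refresh planner statistics, then re-check the plans
    ANALYZE base_aveugle;
    EXPLAIN ANALYZE
        SELECT cp FROM base_aveugle
        WHERE cp = '201A' AND effectif BETWEEN 1 AND 150;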
[
{
"msg_contents": "Hi verybody!\n\nI can't make use of indexes even I tried the same test by changing different settings in postgres.conf like geqo to off/on & geqo related parameters, enable_seqscan off/on & so on. Result is the same. \n\nHere is test itself:\n\nI've created simplest table test and executed the same statement \"explain analyze select id from test where id = 50000;\" Few times I added 100,000 records, applied vacuum full; and issued above explain command. \nPostgres uses sequential scan instead of index one. \nOf cause Time to execute the same statement constantly grows. In my mind index should not allow time to grow so much. \n\nWhy Postgres does not utilizes primary unique index?\nWhat I'm missing? It continue growing even there are 1,200,000 records. It should at least start using index at some point.\n\n\nDetails are below:\n100,000 records:\nQUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..2427.00 rows=2 width=8) (actual time=99.626..199.835 rows=1 loops=1)\n Filter: (id = 50000)\n Total runtime: 199.990 ms\n\n200,000 records:\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..4853.00 rows=2 width=8) (actual time=100.389..402.770 rows=1 loops=1)\n Filter: (id = 50000)\n Total runtime: 402.926 ms\n\n\n300,000 records:\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..7280.00 rows=1 width=8) (actual time=100.563..616.064 rows=1 loops=1)\n Filter: (id = 50000)\n Total runtime: 616.224 ms\n(3 rows)\n\nI've created test table by script:\n\nCREATE TABLE test\n(\n id int8 NOT NULL DEFAULT nextval('next_id_seq'::text) INIQUE,\n description char(50),\n CONSTRAINT users_pkey PRIMARY KEY (id)\n);\n\nCREATE SEQUENCE next_id_seq\n INCREMENT 1\n MINVALUE 1\n MAXVALUE 10000000000\n START 1\n CACHE 5\n CYCLE;\n\nI use postgres 7.4.2\n\n\n\n",
"msg_date": "Tue, 17 Aug 2004 10:22:59 -0400",
"msg_from": "\"Igor Artimenko\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "I could not get postgres to utilizy indexes"
},
{
"msg_contents": "Igor Artimenko wrote:\n\n>Hi verybody!\n>\n>I can't make use of indexes even I tried the same test by changing different settings in postgres.conf like geqo to off/on & geqo related parameters, enable_seqscan off/on & so on. Result is the same. \n>\n>Here is test itself:\n>\n>I've created simplest table test and executed the same statement \"explain analyze select id from test where id = 50000;\" Few times I added 100,000 records, applied vacuum full; and issued above explain command. \n>Postgres uses sequential scan instead of index one. \n>Of cause Time to execute the same statement constantly grows. In my mind index should not allow time to grow so much. \n>\n>Why Postgres does not utilizes primary unique index?\n>What I'm missing? It continue growing even there are 1,200,000 records. It should at least start using index at some point.\n>\n> \n>\nIgor, you may want to run \"vacuum analyze\" and see if your results change.\n\nThomas\n\n",
"msg_date": "Wed, 18 Aug 2004 12:12:26 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I could not get postgres to utilizy indexes"
},
{
"msg_contents": "\"Thomas Swan\" <[email protected]> says:\n\n\n> Igor Artimenko wrote:\n>\n\n[ snipped question that was almost exactly a repeat\n of one we saw yesterday ]\n\n> >\n> >\n> Igor, you may want to run \"vacuum analyze\" and see if your results change.\n\nActually, I think it was determined that the problem was due to the\nint index\n\nMichal Taborsky suggested this solution:\n\n select id from test where id = 50000::int8\n\ndid this not help ?\n\ngnari\n\n\n\n",
"msg_date": "Wed, 18 Aug 2004 17:26:52 -0000",
"msg_from": "\"gnari\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I could not get postgres to utilizy indexes"
},
{
"msg_contents": "I (\"gnari\" <[email protected]>) miswrote:\n\n> Actually, I think it was determined that the problem was due to the\n> int index\n\nof course, i meant int8 index\n\ngnari\n\n\n\n",
"msg_date": "Wed, 18 Aug 2004 17:31:24 -0000",
"msg_from": "\"gnari\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I could not get postgres to utilizy indexes"
}
] |
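The cast gnari refers to, spelled out against the posted test table: in 7.4 and earlier the planner will not use an int8 index for a bare integer literal, so the constant has to be cast (or quoted, which lets it be resolved to the column's type):

    EXPLAIN ANALYZE SELECT id FROM test WHERE id = 50000;        -- literal is int4: sequential scan
    EXPLAIN ANALYZE SELECT id FROM test WHERE id = 50000::int8;  -- matches the int8 index
    EXPLAIN ANALYZE SELECT id FROM test WHERE id = '50000';      -- quoted literal works as well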
[
{
"msg_contents": "Hi everybody!\n\nI can���t make use of indexes even I tried the same test by changing different settings in\npostgres.conf like geqo to off/on & geqo related parameters, enable_seqscan off/on & so on. Result\nis the same. \n\nHere is test itself:\n\nI���ve created simplest table test and executed the same statement ���explain analyze select id from\ntest where id = 50000;��� Few times I added 100,000 records, applied vacuum full; and issued above\nexplain command. \nPostgres uses sequential scan instead of index one. \nOf cause Time to execute the same statement constantly grows. In my mind index should not allow\ntime to grow so much. \n\nWhy Postgres does not utilizes primary unique index?\nWhat I���m missing? It continue growing even there are 1,200,000 records. It should at least start\nusing index at some point.\n\n\nDetails are below:\n100,000 records:\nQUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..2427.00 rows=2 width=8) (actual time=99.626..199.835 rows=1\nloops=1)\n Filter: (id = 50000)\n Total runtime: 199.990 ms\n\n200,000 records:\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..4853.00 rows=2 width=8) (actual time=100.389..402.770 rows=1\nloops=1)\n Filter: (id = 50000)\n Total runtime: 402.926 ms\n\n\n300,000 records:\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..7280.00 rows=1 width=8) (actual time=100.563..616.064 rows=1\nloops=1)\n Filter: (id = 50000)\n Total runtime: 616.224 ms\n(3 rows)\n\nI've created test table by script:\n\nCREATE TABLE test\n(\n id int8 NOT NULL DEFAULT nextval('next_id_seq'::text) INIQUE,\n description char(50),\n CONSTRAINT users_pkey PRIMARY KEY (id)\n);\n\nCREATE SEQUENCE next_id_seq\n INCREMENT 1\n MINVALUE 1\n MAXVALUE 10000000000\n START 1\n CACHE 5\n CYCLE;\n\nI use postgres 7.4.2\n\n\n\n\n\n\n=====\nThanks a lot\nIgor Artimenko\nI specialize in \nJava, J2EE, Unix, Linux, HP, AIX, Solaris, Progress, Oracle, DB2, Postgres, Data Modeling\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Tue, 17 Aug 2004 08:35:55 -0700 (PDT)",
"msg_from": "Artimenko Igor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres does not utilize indexes. Why?"
},
{
"msg_contents": "Artimenko Igor wrote:\n> id int8 NOT NULL DEFAULT nextval('next_id_seq'::text) INIQUE,\n\nID column is bigint, but '50000' is int, therefore the index does not \nmatch. You need to cast your clause like this:\n\nselect id from test where id = 50000::int8\n\nAlso, issue VACUUM ANALYZE, so Postgres knows about the structure of the \ndata.\n\n-- \nMichal Taborsky\nhttp://www.taborsky.cz\n\n",
"msg_date": "Tue, 17 Aug 2004 17:45:34 +0200",
"msg_from": "Michal Taborsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not utilize indexes. Why?"
},
{
"msg_contents": "> test where id = 50000;” Few times I added 100,000 records, applied\n\n\tcast the 50000 to int8 and it will use the index\n",
"msg_date": "Tue, 17 Aug 2004 20:33:21 +0200",
"msg_from": "=?utf-8?Q?Pierre-Fr=C3=A9d=C3=A9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not utilize indexes. Why?"
}
] |
[
{
"msg_contents": "Thanks for the tip for the index on multiple columns ! (I never do inserts\non this table so insert time doesn't matter)\n\nMys posgresql version is : PostgreSQL 7.2.1\n\nHere are the results of the EXPLAIN ANALYZE you asked me to execute.\n\n\nexplain analyse select cp from base_aveugle where cp='69740' and effectif\nbetween 1 and 50;\n NOTICE: QUERY PLAN:\n Index Scan using base_aveugle_cp_eff on base_aveugle\n(cost=0.00..508.69 rows=126 width=32)\n (actual time=0.27..11.56 rows=398 loops=1)\n Total runtime: 11.77 msec\n EXPLAIN\n\nexplain analyse select cp from base_aveugle where cp like '69%' and effectif\nbetween 1 and 50 and naf like '24%' or naf like '25%';\n NOTICE: QUERY PLAN:\n Index Scan using base_aveugle_cp_eff_naf, base_aveugle_naf on\nbase_aveugle (cost=0.00..100001.89 rows=25245 width=32)\n (actual time=4.40..353.69 rows=6905 loops=1)\n Total runtime: 355.82 msec\n EXPLAIN\n\n\nthx ;)\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.737 / Virus Database: 491 - Release Date: 11/08/2004\n\n",
"msg_date": "Tue, 17 Aug 2004 17:41:41 +0200",
"msg_from": "\"olivier HARO\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance problem!"
}
] |
[
{
"msg_contents": "An index on cp and effectif would help your first query. An index on naf,\ncp and effectif would help your second query.\n \nSomething like this:\n \nCREATE INDEX base_aveugle_cp_key2 ON base_aveugle USING btree (cp,\neffectif);\nCREATE INDEX base_aveugle_naf_key2 ON base_aveugle USING btree (naf, cp,\neffectif);\n \nAnother thing, why include \"distinct cp\" when you are only selecting\n\"cp='201A'\"? You will only retrieve one record regardless of how many may\ncontain cp='201A'.\n \nIf you could make these UNIQUE indexes that would help also but it's not a\nrequirement.\n \nGood luck,\nDuane\n \n \n-----Original Message-----\nFrom: olivier HARO [mailto:[email protected]]\nSent: Tuesday, August 17, 2004 6:30 AM\nTo: [email protected]\nSubject: [PERFORM] General performance problem!\n \nHello,\n \nI have a dedicated server for my posgresql database :\n \nP4 2.4 GHZ\nHDD IDE 7200 rpm\n512 DDR 2700\n \nI have a problem whith one table of my database :\n \nCREATE SEQUENCE \"base_aveugle_seq\" START 1;\nCREATE TABLE \"base_aveugle\" (\n \"record_id\" integer DEFAULT nextval('\"base_aveugle_seq\"'::text) NOT NULL,\n \"dunsnumber\" integer NOT NULL,\n \"cp\" text NOT NULL,\n \"tel\" text NOT NULL,\n \"fax\" text NOT NULL,\n \"naf\" text NOT NULL,\n \"siege/ets\" text NOT NULL,\n \"effectif\" integer NOT NULL,\n \"ca\" integer NOT NULL,\n Constraint \"base_aveugle_pkey\" Primary Key (\"record_id\")\n);\nCREATE INDEX base_aveugle_dunsnumber_key ON base_aveugle USING btree\n(dunsnumber);\nCREATE INDEX base_aveugle_cp_key ON base_aveugle USING btree (cp);\nCREATE INDEX base_aveugle_naf_key ON base_aveugle USING btree (naf);\nCREATE INDEX base_aveugle_effectif_key ON base_aveugle USING btree\n(effectif);\n \n \nThis table contains 5 000 000 records\n \nI have a PHP application which often makes queries on this table (especially\non the \"cp\",\"naf\",\"effectif\" fields)\n \nQuerries are like :\n select (distint cp) from base_aveugle where cp='201A' and effectif\nbetween 1 and 150\n select (*) from base_aveugle where naf in ('721A','213F','421K') and\ncp in ('54210','21459','201A') and effectif < 150\n \nI think it is possible to optimize the performance of this queries before\nchanging the hardware (I now I will...) but I don't know how, even after\nhaving read lot of things about postgresql ...\n \nThanks ;) \n \n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system ( http://www.grisoft.com\n<http://www.grisoft.com> ).\nVersion: 6.0.737 / Virus Database: 491 - Release Date: 11/08/2004\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAn index\non cp and effectif would help your first query. An index on naf, cp and effectif would help your second\nquery.\n \nSomething\nlike this:\n \nCREATE INDEX base_aveugle_cp_key2 ON\nbase_aveugle USING btree (cp, effectif);\nCREATE INDEX base_aveugle_naf_key2 ON base_aveugle USING btree (naf, cp,\neffectif);\n \nAnother thing, why include “distinct cp”\nwhen you are only selecting “cp=’201A’”? 
\nYou will only retrieve one record regardless of how many may contain cp=’201A’.\n \nIf you could make these UNIQUE indexes that\nwould help also but it’s not a requirement.\n \nGood luck,\nDuane\n \n \n-----Original\nMessage-----\nFrom: olivier HARO\n[mailto:[email protected]]\nSent: Tuesday, August 17, 2004\n6:30 AM\nTo:\[email protected]\nSubject: [PERFORM] General\nperformance problem!\n \nHello,\n \nI\nhave a dedicated server for my posgresql database :\n \nP4\n2.4 GHZ\nHDD\nIDE 7200 rpm\n512\nDDR 2700\n \nI\nhave a problem whith one table of my database :\n \nCREATE\nSEQUENCE \"base_aveugle_seq\" START 1;\nCREATE TABLE \"base_aveugle\" (\n \"record_id\" integer DEFAULT\nnextval('\"base_aveugle_seq\"'::text) NOT NULL,\n \"dunsnumber\" integer NOT NULL,\n \"cp\" text NOT NULL,\n \"tel\" text NOT NULL,\n \"fax\" text NOT NULL,\n \"naf\" text NOT NULL,\n \"siege/ets\" text NOT NULL,\n \"effectif\" integer NOT NULL,\n \"ca\" integer NOT NULL,\n Constraint \"base_aveugle_pkey\" Primary Key\n(\"record_id\")\n);\nCREATE INDEX base_aveugle_dunsnumber_key ON base_aveugle USING btree\n(dunsnumber);\nCREATE INDEX base_aveugle_cp_key ON base_aveugle USING btree (cp);\nCREATE INDEX base_aveugle_naf_key ON base_aveugle USING btree (naf);\nCREATE INDEX base_aveugle_effectif_key ON base_aveugle USING btree (effectif);\n \n \nThis\ntable contains 5 000 000 records\n \nI\nhave a PHP application which often makes queries on this table (especially on\nthe \"cp\",\"naf\",\"effectif\" fields)\n \nQuerries\nare like :\n \nselect (distint cp) from base_aveugle where cp='201A' and effectif between 1\nand 150\n \nselect (*) from base_aveugle where naf in ('721A','213F','421K') and cp in\n('54210','21459','201A') and effectif < 150\n \nI\nthink it is possible to optimize the performance of this queries before\nchanging the hardware (I now I will...) but I don't know how, even after\nhaving read lot of things about postgresql ...\n \nThanks\n;) \n \n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.737 / Virus Database: 491 - Release Date: 11/08/2004",
"msg_date": "Tue, 17 Aug 2004 09:58:59 -0700",
"msg_from": "Duane Lee - EGOVX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance problem!"
}
] |
[
{
"msg_contents": "I'm doing a nightly vacuum... so I don't think that's it, although \nshould I be doing a FULL vacuum instead? The size of my data directory \nis only about 389 MB. I'll take a closer look at file sizes going \nforward.\n\necho \"VACUUM ANALYZE VERBOSE;\" | /Library/PostgreSQL/bin/psql -U \npostgres officelink 2>> vacuum.log\n\nThanks.\n\n\nFrom: \"Scott Marlowe\"\nYour shared buffers are almost certainly not the problem here. 2000\nshared buffers is only 16 Megs of ram, max. More than likely, the\ndatabase filled up the data directory / partition because it wasn't\nbeing vacuumed.\n\nOn Sat, 2004-07-31 at 10:25, Joe Lester wrote:\n > I've been running a postgres server on a Mac (10.3, 512MB RAM) with \n200\n > clients connecting for about 2 months without a crash. However just\n > yesterday the database and all the clients hung. When I looked at the\n > Mac I'm using as the postgres server it had a window up that said that\n > there was no more disk space available to write memory too. I ended up\n > having to restart the whole machine. I would like to configure \npostgres\n > so that is does not rely so heavily on disk-based memory but, rather,\n > tries to stay within the scope of the 512MB of physical memory in the\n > Mac.",
"msg_date": "Tue, 17 Aug 2004 15:02:49 -0500",
"msg_from": "Joe Lester <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers Question"
}
] |
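A sketch of how to check whether the nightly VACUUM is keeping table sizes under control; the catalog figures are only as fresh as the last VACUUM/ANALYZE, and the table name in the VACUUM FULL line is a placeholder, not one from this thread:

    -- largest tables by on-disk pages (8 kB each)
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY relpages DESC
    LIMIT 10;

    -- if a table has bloated far beyond what its live rows need, compact it once
    VACUUM FULL VERBOSE some_bloated_table;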
[
{
"msg_contents": "Hi,\n\nI'm seeing the following behaviour with the table and functions given below:\n\ndb=# insert into f select * from full_sequence(1, 1000);\nINSERT 0 1000\nTime: 197,507 ms\ndb=# insert into f select * from full_sequence(1, 1000);\nINSERT 0 1000\nTime: 341,880 ms\ndb=# insert into f select * from full_sequence(1, 1000);\nINSERT 0 1000\nTime: 692,603 ms\ndb=# insert into f select * from full_sequence(1, 1000);\nINSERT 0 1000\nTime: 985,253 ms\ndb=# insert into f select * from full_sequence(1, 1000);\nINSERT 0 1000\nTime: 1241,334 ms\n\nOr even worse (fresh drop/create of the table and functions):\n\ndb=# insert into f select id from full_sequence(1, 10000);\nINSERT 0 10000\nTime: 22255,767 ms\ndb=# insert into f select id from full_sequence(1, 10000);\nINSERT 0 10000\nTime: 45398,433 ms\ndb=# insert into f select id from full_sequence(1, 10000);\nINSERT 0 10000\nTime: 67993,476 ms\n\nWrapping the commands in a transaction only accumulates the penalty at commit.\n\nIt seems in this case the time needed for a single deferred trigger somehow \ndepends on the number of dead tuples in the table, because a vacuum of the \ntable will 'reset' the query-times. However, even if I wanted to, vacuum is \nnot allowed from within a function.\n\nWhat is happening here? And more importantly, what can I do to prevent this?\n\nNB. My real-world application 'collects' id's in need for deferred work, but \nthis work is both costly and only needed once per base record. So I use an \n'update' table whose content I join with the actual tables in order to do the \nwork for _all_ the base records involved upon the first execution of the \ndeferred trigger. At the end of the trigger, this 'update' table is emptied \nso any additional deferred triggers on the same table will hardly lose any \ntime. Or at least, that was the intention....\n\n*********** demo script ***********\ndrop table f cascade;\ndrop function tr_f_def() cascade;\ndrop function full_sequence(integer, integer);\ndrop type full_sequence_type;\n\ncreate table f (id int);\ncreate function tr_f_def() RETURNS trigger LANGUAGE 'plpgsql' STABLE STRICT\nSECURITY INVOKER AS '\n DECLARE\n BEGIN\n\t\t-- do stuff with all the ids in the table\n\n\t\t-- delete the contents\n--\t\tdelete from f;\n\t\tIF EXISTS (SELECT 1 FROM f) THEN\n\t\t\tDELETE FROM F;\n\t\t\tVACUUM F;\n\t\tEND IF;\n\n RETURN NULL;\n END;';\ncreate type full_sequence_type as (id int);\ncreate function full_sequence(integer, integer)\n\tRETURNS SETOF full_sequence_type\n\tLANGUAGE 'plpgsql'\n\tIMMUTABLE\n\tSTRICT\n\tSECURITY INVOKER\n\tAS '\tDECLARE\n\t\t\tmy_from ALIAS FOR $1;\n\t\t\tmy_to ALIAS FOR $2;\n\t\t\tresult full_sequence_type%ROWTYPE;\n\t\tBEGIN\n\t\t\t-- just loop\n\t\t\tFOR i IN my_from..my_to LOOP\n\t\t\t\tresult.id = i;\n\t\t\t\tRETURN NEXT result;\n\t\t\tEND LOOP;\n\n\t\t\t-- finish\n\t\t\tRETURN;\n\t\tEND;';\nCREATE CONSTRAINT TRIGGER f_def AFTER INSERT ON f DEFERRABLE INITIALLY \nDEFERRED FOR EACH ROW EXECUTE PROCEDURE tr_f_def();\n*********** demo script ***********\n\ndb=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.4.3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n\n\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Tue, 17 Aug 2004 23:29:52 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is the number of dead tuples causing the performance of deferred\n\ttriggers to degrading so rapidly (exponentionally)?"
},
{
"msg_contents": "Frank,\n\n> It seems in this case the time needed for a single deferred trigger somehow\n> depends on the number of dead tuples in the table, because a vacuum of the\n> table will 'reset' the query-times. However, even if I wanted to, vacuum is\n> not allowed from within a function.\n>\n> What is happening here? And more importantly, what can I do to prevent\n> this?\n\nI'm not clear on all of the work you're doing in the trigger. However, it \nseems obvious that you're deleting and/or updating a large number of rows. \nThe escalating execution times would be consistent with that.\n\n> NB. My real-world application 'collects' id's in need for deferred work,\n> but this work is both costly and only needed once per base record. So I use\n> an 'update' table whose content I join with the actual tables in order to\n> do the work for _all_ the base records involved upon the first execution of\n> the deferred trigger. At the end of the trigger, this 'update' table is\n> emptied so any additional deferred triggers on the same table will hardly\n> lose any time. Or at least, that was the intention....\n\nI think you're doing a lot more than is wise to do in triggers. Deferrable \ntriggers aren't really intended for running long procedures with the creation \nof types and temporary tables (your post got a bit garbled, so pardon me if \nI'm misreading it). I'd suggest reconsidering your approach to this \napplication problem.\n\nAt the very least, increase max_fsm_relations to some high value, which may \nhelp (or not). \n\n-Josh\n\n-- \n__Aglio Database Solutions_______________\nJosh Berkus\t\t Consultant\[email protected]\t www.agliodbs.com\nPh: 415-752-2500\tFax: 415-752-2387\n2166 Hayes Suite 200\tSan Francisco, CA\n",
"msg_date": "Tue, 17 Aug 2004 16:03:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is the number of dead tuples causing the performance of\n\tdeferred triggers to degrading so rapidly (exponentionally)?"
},
{
"msg_contents": "Hi Josh,\n\n> > It seems in this case the time needed for a single deferred trigger\n> > somehow depends on the number of dead tuples in the table\n\nAfter further investigation I think I have a better grasp of what's going on.\n\nThe thing biting me here is indeed the 'delete from' on a table with a number \nof dead rows, possibly made worse in some cases where not everything can be \nhandled in memory.\n\n> I'm not clear on all of the work you're doing in the trigger.\n> > NB. My real-world application 'collects' id's in need for deferred work\n> I think you're doing a lot more than is wise to do in triggers.\n\nI probably wasn't clear enough on this. I'm not creating types and/or \ntemporary tables or anything of that kind.\n\nThe ratio is probably explained better by this example:\n\n- the database has knowledge on 'parts' and 'sets', the sets have a few fields \nwhose content depend on the parts, but the proper value for these fields can \nonly be determined by looking at all the parts of the particular set together \n(i.e. it's not a plain 'part-count' that one could update by a trigger on the \npart)\n\n- during a transaction, a number of things will happen to various parts of \nvarious sets, so I have after triggers on the parts that will insert the ids \nof the sets that need an update into a set_update holding table; in turn, \nthis set_update table has a deferred trigger\n\n- upon execution of the deferred triggers, I now know that all the work on the \nparts is finished, so the deferred trigger initiates an update for the sets \nwhose ids are in the update table and it will delete these ids afterwards\n\nNow, because multiple updates to parts of the same set will result in multiple \ninserts in the update table, I want to avoid doing the set-update more that \nonce. \n\nObviously, it would be better to be able to 'cancel' the rest of the calls to \nthe deferred trigger after it has been executed for the first time, but that \ndoesn't seem possible.\n\nEven better would be to use a 'for each statement' trigger on the set_update \nholding table instead, but it is not possible to create a deferred 'for each \nstatement' trigger..... ;(\n\nSo, I seem to be a bit between a rock and a hard place here, I must use \ndeferred triggers in order to avoid a costly set update on each part update, \nbut in such a deferred trigger I cannot avoid doing the update multiple \ntimes....(due to the growing cost of a 'delete from' in the trigger)\n\nMmm, it seems that by hacking pg_trigger I am able to create a for each \nstatement trigger that is 'deferrable initially deferred'.\n\nThis probably solves my problem, I will ask on 'general' whether this has any \nunforseen side effects and whether or not a 'regular' deferrable for each \nstatement trigger is incorporated in v8.0.\n\nThanks for you reply!\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Wed, 18 Aug 2004 11:24:56 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is the number of dead tuples causing the performance of\n\tdeferred triggers to degrade so rapidly (exponentionally)?"
},
{
"msg_contents": "Since you are updating all of the sets with the specified part number \nwhy not just ensure that a transaction never inserts the same part \nnumber more than once (an INSERT ...SELECT ... WHERE NOT EXISTS(...) \ncomes to mind), then delete the row before the end of transaction.\n\nFrank van Vugt wrote:\n\n>Hi Josh,\n>\n> \n>\n>>>It seems in this case the time needed for a single deferred trigger\n>>>somehow depends on the number of dead tuples in the table\n>>> \n>>>\n>\n>After further investigation I think I have a better grasp of what's going on.\n>\n>The thing biting me here is indeed the 'delete from' on a table with a number \n>of dead rows, possibly made worse in some cases where not everything can be \n>handled in memory.\n>\n> \n>\n>>I'm not clear on all of the work you're doing in the trigger.\n>> \n>>\n>>>NB. My real-world application 'collects' id's in need for deferred work\n>>> \n>>>\n>>I think you're doing a lot more than is wise to do in triggers.\n>> \n>>\n>\n>I probably wasn't clear enough on this. I'm not creating types and/or \n>temporary tables or anything of that kind.\n>\n>The ratio is probably explained better by this example:\n>\n>- the database has knowledge on 'parts' and 'sets', the sets have a few fields \n>whose content depend on the parts, but the proper value for these fields can \n>only be determined by looking at all the parts of the particular set together \n>(i.e. it's not a plain 'part-count' that one could update by a trigger on the \n>part)\n>\n>- during a transaction, a number of things will happen to various parts of \n>various sets, so I have after triggers on the parts that will insert the ids \n>of the sets that need an update into a set_update holding table; in turn, \n>this set_update table has a deferred trigger\n>\n>- upon execution of the deferred triggers, I now know that all the work on the \n>parts is finished, so the deferred trigger initiates an update for the sets \n>whose ids are in the update table and it will delete these ids afterwards\n>\n>Now, because multiple updates to parts of the same set will result in multiple \n>inserts in the update table, I want to avoid doing the set-update more that \n>once. \n>\n>Obviously, it would be better to be able to 'cancel' the rest of the calls to \n>the deferred trigger after it has been executed for the first time, but that \n>doesn't seem possible.\n>\n>Even better would be to use a 'for each statement' trigger on the set_update \n>holding table instead, but it is not possible to create a deferred 'for each \n>statement' trigger..... ;(\n>\n>So, I seem to be a bit between a rock and a hard place here, I must use \n>deferred triggers in order to avoid a costly set update on each part update, \n>but in such a deferred trigger I cannot avoid doing the update multiple \n>times....(due to the growing cost of a 'delete from' in the trigger)\n>\n>Mmm, it seems that by hacking pg_trigger I am able to create a for each \n>statement trigger that is 'deferrable initially deferred'.\n>\n>This probably solves my problem, I will ask on 'general' whether this has any \n>unforseen side effects and whether or not a 'regular' deferrable for each \n>statement trigger is incorporated in v8.0.\n>\n>Thanks for you reply!\n>\n>\n>\n> \n>\n\n",
"msg_date": "Wed, 18 Aug 2004 18:02:08 -0500",
"msg_from": "DeJuan Jackson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is the number of dead tuples causing the performance"
}
] |
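A sketch of the de-duplicating insert suggested in the last message, as it might look in the per-row AFTER trigger on the parts table; the object names (parts, set_update, set_id) follow the thread's description and are illustrative, not code taken from it:

    CREATE TABLE set_update (set_id integer);

    CREATE OR REPLACE FUNCTION queue_set_update() RETURNS trigger LANGUAGE 'plpgsql' AS '
    BEGIN
        -- queue each affected set only once per transaction
        INSERT INTO set_update (set_id)
        SELECT NEW.set_id
        WHERE NOT EXISTS (SELECT 1 FROM set_update WHERE set_id = NEW.set_id);
        RETURN NULL;
    END;';

    CREATE TRIGGER parts_queue_set_update AFTER INSERT OR UPDATE ON parts
        FOR EACH ROW EXECUTE PROCEDURE queue_set_update();

With the holding table kept free of duplicate ids, the deferred trigger's final DELETE touches far fewer rows, which is the cost the thread is trying to avoid.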
[
{
"msg_contents": "Obviously,\n\nthis part of tr_f_def():\n\n******************************\n\t\t-- delete the contents\n--\t\tdelete from f;\n\t\tIF EXISTS (SELECT 1 FROM f) THEN\n\t\t\tDELETE FROM F;\n\t\t\tVACUUM F;\n\t\tEND IF;\n******************************\n\n\nshould simply read:\n\n******************************\n\t\t-- delete the contents\n\t\tdelete from f;\n******************************\n\n\n\n--\nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Tue, 17 Aug 2004 23:33:42 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is the number of dead tuples causing the performance of deferred\n\ttriggers to degrading so rapidly (exponentionally)?"
}
] |
[
{
"msg_contents": "Hi,\nI am working on a project which explore postgresql to\nstore multimedia data.\nIn details, i am trying to work with the buffer\nmanagement part of postgres source code. And try to\nimprove the performance. I had search on the web but\ncould not find much usefull information. \nIt would be great if anyone knows any developer groups\nthat working on similar things ? or where can i find\nmore information on this issue?\nThank you very much for your help\nregards,\nMT Ho\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - 50x more storage than other providers!\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Tue, 17 Aug 2004 17:44:38 -0700 (PDT)",
"msg_from": "my thi ho <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql performance with multimedia"
},
{
"msg_contents": "On 8/17/2004 8:44 PM, my thi ho wrote:\n\n> Hi,\n> I am working on a project which explore postgresql to\n> store multimedia data.\n> In details, i am trying to work with the buffer\n> management part of postgres source code. And try to\n> improve the performance. I had search on the web but\n> could not find much usefull information. \n\nWhat version of PostgreSQL are you looking at? Note that the buffer \ncache replacement strategy was completely changed for version 8.0, which \nis currently in BETA test. A description of the algorithm can be found \nin the README file in src/backend/storage/bufmgr.\n\n\nJan\n\n> It would be great if anyone knows any developer groups\n> that working on similar things ? or where can i find\n> more information on this issue?\n> Thank you very much for your help\n> regards,\n> MT Ho\n> \n> \n> \t\t\n> __________________________________\n> Do you Yahoo!?\n> Yahoo! Mail - 50x more storage than other providers!\n> http://promotions.yahoo.com/new_mail\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Wed, 18 Aug 2004 09:10:27 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql performance with multimedia"
},
{
"msg_contents": "\n--- Jan Wieck <[email protected]> wrote:\n\n> On 8/17/2004 8:44 PM, my thi ho wrote:\n> \n> > Hi,\n> > I am working on a project which explore postgresql\n> to\n> > store multimedia data.\n> > In details, i am trying to work with the buffer\n> > management part of postgres source code. And try\n> to\n> > improve the performance. I had search on the web\n> but\n> > could not find much usefull information. \n> \n> What version of PostgreSQL are you looking at? Note\n> that the buffer \n> cache replacement strategy was completely changed\n> for version 8.0, which \n> is currently in BETA test. A description of the\n> algorithm can be found \n> in the README file in src/backend/storage/bufmgr.\n\noki, Thanks for the information. I have a look at 8.0\nbeta, but cannot start the statistic collector. (I had\npost this err message before for help, but havent\nreally got any clue to fix it)\n> LOG: could not create IPv6 socket: Address family\nnot\n> supported by protocol\n> LOG: could not bind socket for statistics\ncollector:\n> Cannot assign requested address\n> LOG: disabling statistics collector for lack of\n> working socket\n\nbtw, what i want to ask here is does postgreSQL have\nany kind of read-ahead buffer implemented? 'cos it\nwould be useful in multimedia case when we always scan\nthe large table for continous data.\nThanks \nHo\n\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - 100MB free storage!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Mon, 23 Aug 2004 22:08:42 -0700 (PDT)",
"msg_from": "my ho <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql performance with multimedia"
},
{
"msg_contents": "On 8/24/2004 1:08 AM, my ho wrote:\n\n> --- Jan Wieck <[email protected]> wrote:\n> \n>> On 8/17/2004 8:44 PM, my thi ho wrote:\n>> \n>> > Hi,\n>> > I am working on a project which explore postgresql\n>> to\n>> > store multimedia data.\n>> > In details, i am trying to work with the buffer\n>> > management part of postgres source code. And try\n>> to\n>> > improve the performance. I had search on the web\n>> but\n>> > could not find much usefull information. \n>> \n>> What version of PostgreSQL are you looking at? Note\n>> that the buffer \n>> cache replacement strategy was completely changed\n>> for version 8.0, which \n>> is currently in BETA test. A description of the\n>> algorithm can be found \n>> in the README file in src/backend/storage/bufmgr.\n> \n> oki, Thanks for the information. I have a look at 8.0\n> beta, but cannot start the statistic collector. (I had\n> post this err message before for help, but havent\n> really got any clue to fix it)\n>> LOG: could not create IPv6 socket: Address family\n> not\n>> supported by protocol\n>> LOG: could not bind socket for statistics\n> collector:\n>> Cannot assign requested address\n>> LOG: disabling statistics collector for lack of\n>> working socket\n\nTom Lane answered to that question. The code in question does resolve \n\"localhost\" with getaddrinfo() and then tries to create and bind a UDP \nsocket to all returned addresses. For some reason \"localhost\" on your \nsystem resolves to an address that is not available for bind(2).\n\n> \n> btw, what i want to ask here is does postgreSQL have\n> any kind of read-ahead buffer implemented? 'cos it\n> would be useful in multimedia case when we always scan\n> the large table for continous data.\n\nSince there is no mechanism to control that data is stored contiguously \nin the tables, what would that be good for?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Tue, 24 Aug 2004 08:00:39 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql performance with multimedia"
},
{
"msg_contents": "> Tom Lane answered to that question. The code in\n> question does resolve \n> \"localhost\" with getaddrinfo() and then tries to\n> create and bind a UDP \n> socket to all returned addresses. For some reason\n> \"localhost\" on your \n> system resolves to an address that is not available\n> for bind(2).\n\nI tried to put my_ip instead of \"localhost\" in\nbufmng.c and it seems to work (no more complaining).\nHowever i check the pg_statio_all_tables and dont see\nany recorded statistic at all. (all the columns are\n'0')\nsome time postmaster shut down with this err msg: \nLOG: statistics collector process (<process_id>)\nexited with exit code 1\ni starts postmaster with this command:\npostmaster -i -p $PORT -D $PGDATA -k $PGDATA -N 32 -B\n64 -o -s\n\n> > btw, what i want to ask here is does postgreSQL\n> have\n> > any kind of read-ahead buffer implemented? 'cos it\n> > would be useful in multimedia case when we always\n> scan\n> > the large table for continous data.\n> \n> Since there is no mechanism to control that data is\n> stored contiguously \n> in the tables, what would that be good for?\n\ni thought that rows in the table will be stored\ncontiguously? in that case, if the user is requesting\n1 row, we make sure that the continue rows are ready\nin the buffer pool so that when they next requested,\nthey wont be asked to read from disk. For multimedia\ndata, this is important 'cos data needs to be\npresented continuously without any waiting.\n\nthanks again for your help\nMT Ho\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Tue, 24 Aug 2004 23:54:05 -0700 (PDT)",
"msg_from": "my ho <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql performance with multimedia"
},
{
"msg_contents": "On 8/25/2004 2:54 AM, my ho wrote:\n>> Tom Lane answered to that question. The code in\n>> question does resolve \n>> \"localhost\" with getaddrinfo() and then tries to\n>> create and bind a UDP \n>> socket to all returned addresses. For some reason\n>> \"localhost\" on your \n>> system resolves to an address that is not available\n>> for bind(2).\n> \n> I tried to put my_ip instead of \"localhost\" in\n> bufmng.c and it seems to work (no more complaining).\n> However i check the pg_statio_all_tables and dont see\n> any recorded statistic at all. (all the columns are\n> '0')\n\nThe block level statistics are disabled by default in the \npostgresql.conf file.\n\n> some time postmaster shut down with this err msg: \n> LOG: statistics collector process (<process_id>)\n> exited with exit code 1\n\nFix your operating systems network settings instead of curing the \nsymptoms by breaking PostgreSQL.\n\n> i starts postmaster with this command:\n> postmaster -i -p $PORT -D $PGDATA -k $PGDATA -N 32 -B\n> 64 -o -s\n> \n>> > btw, what i want to ask here is does postgreSQL\n>> have\n>> > any kind of read-ahead buffer implemented? 'cos it\n>> > would be useful in multimedia case when we always\n>> scan\n>> > the large table for continous data.\n>> \n>> Since there is no mechanism to control that data is\n>> stored contiguously \n>> in the tables, what would that be good for?\n> \n> i thought that rows in the table will be stored\n> contiguously? in that case, if the user is requesting\n> 1 row, we make sure that the continue rows are ready\n> in the buffer pool so that when they next requested,\n> they wont be asked to read from disk. For multimedia\n> data, this is important 'cos data needs to be\n> presented continuously without any waiting.\n\nThey are only stored in that way on initial load and if the load is done \nwith a single process. And don't you rely on this for the future. Right \nnow, if you ever update or delete tuples, that order changes already.\n\nAlso keep in mind that large values are not stored inline, but in an \nextra \"TOAST\" relation.\n\nFor your \"streaming\" purposes I strongly recommend you do it in your \napplication with the appropriate thread model. A relational database \nmanagement system is not a multimedia cache.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Wed, 25 Aug 2004 07:03:13 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql performance with multimedia"
},
{
"msg_contents": "Hi,\n> For your \"streaming\" purposes I strongly recommend\n> you do it in your \n> application with the appropriate thread model. A\n> relational database \n> management system is not a multimedia cache.\n\nThat's actually what i plan to do with postgreSQL,\nmaybe tailor it to suit with a multimedia streaming\ndatabase. Well, i could do it in the application level\nbut i think it's also worth a try with the database\nitself.\n\n> They are only stored in that way on initial load and\n> if the load is done \n> with a single process. And don't you rely on this\n> for the future. Right \n> now, if you ever update or delete tuples, that order\n> changes already.\n\ndoes the buffer manager have any idea what table that\nbuf belongs to? (can we add 'rel' variable to sbufdesc\nin buf_internals.h and update it everytime we add new\nentry to the buffer cahe?) And then we take in to\naccount which relation the data in the buffer belongs\nto in the buf replacement algorithm or in the\nread-ahead policy.\n\n> Also keep in mind that large values are not stored\n> inline, but in an \n> extra \"TOAST\" relation.\nThis is how i store my video file: break them in to\nsmall chunks and store each part in a row of a table. \n\nregards,\nMThi \n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - 100MB free storage!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Thu, 26 Aug 2004 23:27:16 -0700 (PDT)",
"msg_from": "my ho <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql performance with multimedia"
}
] |
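A minimal sketch of the chunked layout described in the last message; the names are illustrative, bytea is only one possible column type for the raw data (the post does not say which was used), and the chunk size is whatever the application slices the file into:

    CREATE TABLE video_chunk (
        video_id integer NOT NULL,
        chunk_no integer NOT NULL,   -- 0, 1, 2, ... in playback order
        data     bytea   NOT NULL,
        PRIMARY KEY (video_id, chunk_no)
    );

    -- streaming one clip is then an ordered scan over its chunks
    SELECT data FROM video_chunk WHERE video_id = 1 ORDER BY chunk_no;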
[
{
"msg_contents": "Hello,\n\nThis is not strictly PostgreSQL performance hint, but may be\nhelpful to someone with problems like mine.\n\nAs I earlier posted, I was experiencing very high load average\non one of my Linux database servers (IBM eServer 345, SCSI disks on LSI \nLogic controller) caused by I/O bottleneck.\n\nINSERTs were really slow, even after many days of\ntweaking PostgreSQL configuration. The problem appeared to be\nin the Linux kernel itself - using acpi=ht and noapic boot parameters\nsolved my performance problems. Load average dropped below 1.0\n(before, it was as high as ten in peak) and the database\nworks much, much faster.\n\n-- \n11.\n",
"msg_date": "Wed, 18 Aug 2004 10:18:19 +0200",
"msg_from": "eleven <[email protected]>",
"msg_from_op": true,
"msg_subject": "high load caused by I/O - a hint"
},
{
"msg_contents": "...and on Wed, Aug 18, 2004 at 10:18:19AM +0200, eleven used the keyboard:\n> Hello,\n> \n> This is not strictly PostgreSQL performance hint, but may be\n> helpful to someone with problems like mine.\n> \n> As I earlier posted, I was experiencing very high load average\n> on one of my Linux database servers (IBM eServer 345, SCSI disks on LSI \n> Logic controller) caused by I/O bottleneck.\n> \n> INSERTs were really slow, even after many days of\n> tweaking PostgreSQL configuration. The problem appeared to be\n> in the Linux kernel itself - using acpi=ht and noapic boot parameters\n> solved my performance problems. Load average dropped below 1.0\n> (before, it was as high as ten in peak) and the database\n> works much, much faster.\n\nHello,\n\nDid you try with acpi=noidle? This proved to be of help on many an\noccasion before, and you don't have to give up any functionality over\nit. It's just that the ACPI BIOS is broken and overloads the system\nwith idle calls.\n\nOther than that, general guidelines would be, don't combine APM and\nACPI, and rather use proper SMP code for hyperthreaded machines than\njust the ACPI CPU enumeration feature.\n\nThere's also a new option with 2.6.8.1, called CONFIG_SCHED_SMT that\nis supposed to handle some cases SMP code had problems with better,\nat the cost of slight overhead in other areas.\n\nMy advice would be, if you have an option to choose between APM and\nACPI, go for ACPI. It's the future, it's being developed actively,\nit does a whole lot more than APM (that was really only about power\nmanagement), and last but not least, I've been using it for four\nyears on over fifty SMP machines and I never ever had a problem\nbeyond the scope of what noidle could fix (knocks-on-wood). :)\n\nHTH,\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/",
"msg_date": "Wed, 18 Aug 2004 10:51:54 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: high load caused by I/O - a hint"
},
{
"msg_contents": "eleven wrote:\n\n> Hello,\n> \n> This is not strictly PostgreSQL performance hint, but may be\n> helpful to someone with problems like mine.\n> \n> As I earlier posted, I was experiencing very high load average\n> on one of my Linux database servers (IBM eServer 345, SCSI disks on LSI \n> Logic controller) caused by I/O bottleneck.\n> \n> INSERTs were really slow, even after many days of\n> tweaking PostgreSQL configuration. The problem appeared to be\n> in the Linux kernel itself - using acpi=ht and noapic boot parameters\n> solved my performance problems. Load average dropped below 1.0\n> (before, it was as high as ten in peak) and the database\n> works much, much faster.\n\nI suggest you to investigate why noapic did the work for you, do you have\nnot well supported device ? At your place also I'd try removing the noapic\noption and using acpi=noidle\n\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Wed, 18 Aug 2004 12:54:35 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: high load caused by I/O - a hint"
},
{
"msg_contents": "\nOn Aug 18, 2004, at 4:18 AM, eleven wrote:\n\n> Hello,\n>\n> This is not strictly PostgreSQL performance hint, but may be\n> helpful to someone with problems like mine.\n>\n> As I earlier posted, I was experiencing very high load average\n> on one of my Linux database servers (IBM eServer 345, SCSI disks on \n> LSI Logic controller) caused by I/O bottleneck.\n>\n\nWe have some 335's (I think they are 335s) and until April or so there \nwas a bug in the Fusion MPT driver that would cause it to revert to \nasync narrow mode if hardware RAID was enabled on it. (Performance was \nhorrible - NFS on a 100meg network was 10x faster than local disk!) And \non the upside, when I originally researched the problem they hadn't \nfound the bug yet so there were no others around having issues like \nmine so trying to figure it out was quite difficult.\n\nI may see if using that acpi=ht makes any difference as well.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Wed, 18 Aug 2004 09:10:24 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: high load caused by I/O - a hint"
}
] |
[
{
"msg_contents": "Hi, \n\nI noticed an interesting phenomenon when loding (COPY) some tables\nfrom a file. For some reason, one load was taking longer than I\nassumed it would. As it turns out, one of the columns was an array\ncontaining elements that were of a user defined type. Using strace\n(on linux) and truss (on solaris), most of the time was spent on\nstat64() calls on the shared library that contained the in/out\nfunctions for the user defined type.\n\nIn a nutshell, it looks like whenever COPY is invoked, and when a user\ndefined type is used in an array, then stat64() will be called for\neach row accessed on the shared library relevant for the user defined\ntype.\n\nAs an example, I created a simple unsigned 2 byte integer type (called\nuint2) as follows:\n...\ntypedef uint16 uint2;\n\nPG_FUNCTION_INFO_V1(uint2_in);\nDatum\nuint2_in (PG_FUNCTION_ARGS)\n{\n int val;\n val = atoi(PG_GETARG_CSTRING(0));\n if (val > 65535 || val < 0)\n elog (ERROR, \"Value %i is not in range for uint2\", val);\n PG_RETURN_INT16((uint2) val);\n}\n\nPG_FUNCTION_INFO_V1(uint2_out);\nDatum\nuint2_out (PG_FUNCTION_ARGS)\n{\n uint2 e = (uint2) PG_GETARG_INT16(0);\n char * ret_string = (char *) palloc(9);\n \n snprintf(ret_string, 9, \"%i\", (int) e);\n PG_RETURN_CSTRING(ret_string);\n}\n....\nI compiled this, making shared library uint2.so, and in psql created the type.\n\nI then created two test tables: \n\nCREATE TABLE scratch0 (value uint2);\nINSERT INTO scratch0 values('1');\nINSERT INTO scratch0 values('2');\nINSERT INTO scratch0 values('3');\nINSERT INTO scratch0 values('4');\n \nCREATE TABLE scratch1 (value uint2[]);\nINSERT INTO scratch1 values('{1,2,3}');\nINSERT INTO scratch1 values('{10,20,30}');\nINSERT INTO scratch1 values('{100,200,300}');\nINSERT INTO scratch1 values('{1000,2000,300\n\nNow, I ran strace (and truss) tattached to the postmaster process\nlooking for stat64, and here's what I found:\n\n------\nSELECT * from scratch0 LIMIT 1;\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\n \nSELECT * from scratch0 LIMIT 2;\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\n \nSELECT * from scratch0 LIMIT 3;\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\n------\n.\n.so each SELECT stats the uint2.so file once, regardless of the number\nof fows. fair enough. now for the array case:\n\n------\nSELECT * from scratch1 LIMIT 1;\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\n \nSELECT * from scratch1 LIMIT 2;\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\n\nSELECT * from scratch1 LIMIT 3;\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\nstat64(\"/pg/lib/postgresql/uint2\", 0xFFBFE808) Err#2 ENOENT\nstat64(\"/pg/lib/postgresql/uint2.so\", 0xFFBFE808) = 0\n------\n\n..so for *every row* involving an array, a stat64 is called. This is\ntrue for COPY too.. we were loading 2.5 billion rows, so that's quite\na bit of overhead! 
Now, we used strace (on linux) and truss (on\nsolaris) to see how much this was actually affecting us.:\n\nOutput from truss COPYING large file with millions of {1,2,3} unint2 arrays:\nsyscall seconds calls errors\nread .055 527\nwrite 1.507 11358\nclose .000 6\ntime .000 3\ngetpid .000 1\nkill .000 1\nsemop .002 116\nfdsync .000 3\nllseek .081 5042\nstat64 24.205 1078764 539382\nopen64 .000 6\n -------- ------ ----\nsys totals: 25.853 1095827 539382\nusr time: 21.567\nelapsed: 113.310\n\nso in other words, almost 25% of the total time and 50+% of the\nexecution time time was wasted on stat64. on Linux, the proportion of\ntime relative to other calls in strace was similar.\n\nSo.. Is this a bug or are the stat64 calls necessary? I doubt arrays\nof user-defined types (in C) are common, and I couldn't find anything\nthat looked relevent in the lists.\n",
"msg_date": "Wed, 18 Aug 2004 15:39:29 -0400",
"msg_from": "Aaron Birkland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Array types and loading"
},
{
"msg_contents": "Aaron Birkland <[email protected]> writes:\n> In a nutshell, it looks like whenever COPY is invoked, and when a user\n> defined type is used in an array, then stat64() will be called for\n> each row accessed on the shared library relevant for the user defined\n> type.\n\nLet me guess ... PG 7.3 or older?\n\n7.4 should avoid the problem because array_in() caches function lookup\ninformation for the element type's input function across multiple calls.\n\nIn 8.0 there's also a cache at the fmgr_info() level to eliminate\nrepeated searches for a dynamically loaded function.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Aug 2004 17:39:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array types and loading "
},
{
"msg_contents": "You got it.. 7.3 (should have mentioned that). We're planning to\nupgrade to 8.0 anyway in the future, so it's good to know. Thanks!\n\n -Aaron\n\nOn Wed, 18 Aug 2004 17:39:21 -0400, Tom Lane <[email protected]> wrote:\n> Aaron Birkland <[email protected]> writes:\n> > In a nutshell, it looks like whenever COPY is invoked, and when a user\n> > defined type is used in an array, then stat64() will be called for\n> > each row accessed on the shared library relevant for the user defined\n> > type.\n> \n> Let me guess ... PG 7.3 or older?\n> \n> 7.4 should avoid the problem because array_in() caches function lookup\n> information for the element type's input function across multiple calls.\n> \n> In 8.0 there's also a cache at the fmgr_info() level to eliminate\n> repeated searches for a dynamically loaded function.\n> \n> regards, tom lane\n>\n",
"msg_date": "Wed, 18 Aug 2004 17:47:50 -0400",
"msg_from": "Aaron Birkland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Array types and loading"
}
] |
[
{
"msg_contents": "Thanks a lot. This issue has been resolved by casting to int8. I thought Postgres would do those casts\nby itself implicitly.\n\n=====\nThanks a lot\nIgor Artimenko\nI specialize in \nJava, J2EE, Unix, Linux, HP, AIX, Solaris, Progress, Oracle, DB2, Postgres, Data Modeling\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail is new and improved - Check it out!\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Wed, 18 Aug 2004 16:11:37 -0700 (PDT)",
"msg_from": "Artimenko Igor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres does not utilyze indexes"
}
] |
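A minimal sketch of the fix Igor describes, assuming his test table (an int8 id column with a primary-key index, as in the script requoted in the following thread): on 7.4 and earlier a bare integer literal is typed int4, so the planner will not match it against the int8 index until the literal is cast, or quoted so its type is resolved against the column.

EXPLAIN ANALYZE SELECT id FROM test WHERE id = 50000;          -- literal is int4: sequential scan
EXPLAIN ANALYZE SELECT id FROM test WHERE id = 50000::int8;    -- cast literal: index scan
EXPLAIN ANALYZE SELECT id FROM test WHERE id = '50000';        -- quoted literal works too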
[
{
"msg_contents": "Hi,\n\nYou asked the very same question yesterday, and I believe you got some useful answers. Why do you post the question again?\n\nYou don't even mention your previous post, and you didn't continue the thread which you started yesterday.\n\nDid you try out any of the suggestions which you got yesterday? Do you have further questions about, for instance, how to do casting of values? If so, please continue posting with the previous thread, rather than reposting the same question with a different subject.\n\nregards,\n\n--Tim\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Igor\nArtimenko\nSent: dinsdag 17 augustus 2004 16:23\nTo: [email protected]\nSubject: [PERFORM] I could not get postgres to utilizy indexes\n\n\nHi verybody!\n\nI can't make use of indexes even I tried the same test by changing different settings in postgres.conf like geqo to off/on & geqo related parameters, enable_seqscan off/on & so on. Result is the same. \n\nHere is test itself:\n\nI've created simplest table test and executed the same statement \"explain analyze select id from test where id = 50000;\" Few times I added 100,000 records, applied vacuum full; and issued above explain command. \nPostgres uses sequential scan instead of index one. \nOf cause Time to execute the same statement constantly grows. In my mind index should not allow time to grow so much. \n\nWhy Postgres does not utilizes primary unique index?\nWhat I'm missing? It continue growing even there are 1,200,000 records. It should at least start using index at some point.\n\n\nDetails are below:\n100,000 records:\nQUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..2427.00 rows=2 width=8) (actual time=99.626..199.835 rows=1 loops=1)\n Filter: (id = 50000)\n Total runtime: 199.990 ms\n\n200,000 records:\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..4853.00 rows=2 width=8) (actual time=100.389..402.770 rows=1 loops=1)\n Filter: (id = 50000)\n Total runtime: 402.926 ms\n\n\n300,000 records:\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..7280.00 rows=1 width=8) (actual time=100.563..616.064 rows=1 loops=1)\n Filter: (id = 50000)\n Total runtime: 616.224 ms\n(3 rows)\n\nI've created test table by script:\n\nCREATE TABLE test\n(\n id int8 NOT NULL DEFAULT nextval('next_id_seq'::text) INIQUE,\n description char(50),\n CONSTRAINT users_pkey PRIMARY KEY (id)\n);\n\nCREATE SEQUENCE next_id_seq\n INCREMENT 1\n MINVALUE 1\n MAXVALUE 10000000000\n START 1\n CACHE 5\n CYCLE;\n\nI use postgres 7.4.2\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Thu, 19 Aug 2004 09:54:47 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: I could not get postgres to utilizy indexes"
},
{
"msg_contents": "On Thu, 19 Aug 2004 09:54:47 +0200, \"Leeuw van der, Tim\"\n<[email protected]> wrote:\n>You asked the very same question yesterday, and I believe you got some useful answers. Why do you post the question again?\n\nTim, no need to be rude here. We see this effect from time to time when\na new user sends a message to a mailing list while not subscribed. The\nsender gets an automated reply from majordomo, subscribes to the list\nand sends his mail again. One or two days later the original message is\napproved (by Marc, AFAIK) and forwarded to the list. Look at the\ntimestamps in these header lines:\n|Received: from postgresql.org (svr1.postgresql.org [200.46.204.71])\n|\tby svr4.postgresql.org (Postfix) with ESMTP id 32B1F5B04F4;\n|\tWed, 18 Aug 2004 15:54:13 +0000 (GMT)\n|Received: from localhost (unknown [200.46.204.144])\n|\tby svr1.postgresql.org (Postfix) with ESMTP id E6B2B5E4701\n|\tfor <[email protected]>; Tue, 17 Aug 2004 11:23:07 -0300 (ADT)\n\n>[more instructions]\n\nAnd while we are teaching netiquette, could you please stop top-posting\nand full-quoting.\n\nIgor, welcome to the list! Did the suggestions you got solve your\nproblem?\n\nServus\n Manfred\n",
"msg_date": "Fri, 20 Aug 2004 15:37:40 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: I could not get postgres to utilizy indexes"
}
] |
[
{
"msg_contents": "RT uses a query like:\n\nSELECT distinct main.oid,main.* FROM Tickets main\nWHERE\n(main.EffectiveId = main.id)\nAND\n(main.Status != 'deleted')\nAND\n ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\nAND\n ( (main.Queue = '9') )\nAND ((\n ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n OR\n ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n or\n (main.id = '17417')\n )\n );\n\n\nwhich produces a query plan:\n\nNested Loop (cost=0.00..813.88 rows=1 width=169)\n Join Filter: ((((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id\n= 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"inner\"\n.localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR (\"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\n\"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"inner\".loca\nlbase = 17417) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)))\n -> Index Scan using tickets1 on tickets main (cost=0.00..657.61 rows=1 width=169)\n Index Cond: (queue = 9)\n Filter: ((effectiveid = id) AND ((status)::text <> 'deleted'::text) AND (((\"type\")::text = 'ticket'::text) OR ((\"type\")::text = 'subticket'::text)))\n -> Seq Scan on links (cost=0.00..46.62 rows=1462 width=20)\n\nIf I rewrite the query as:\n\nSELECT main.* FROM Tickets main\nWHERE\n(main.EffectiveId = main.id)\nAND\n(main.Status != 'deleted')\nAND\n ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\nAND\n ( (main.Queue = '9') )\nAND (\n 17417 in (select links.localtarget from links where links.type='MemberOf' and main.id=links.localbase)\n or\n 17417 in ( select links.localbase from links where links.type='MemberOf' and main.id=links.localtarget)\n or\n main.id = '17417'\n )\n ;\n\nThe time for the query goes from 1500ms to 15ms. The two OR clauses \n\n ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n OR\n ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n\ndon't contribute to the result set in this particular dataset, which is why the speed increases so dramatically.\n\nIs there a way to rewrite the top query to get the same results? I have already talked to Best Practical, \nand subqueries are not easily embraced.\n\nDave\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n",
"msg_date": "Thu, 19 Aug 2004 09:17:13 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "help with query"
},
{
"msg_contents": "You're doing a join except not, is the trouble, looks like. The query is really\n\"FROM Tickets main, Links\", but when Tickets.id is 17417, you've got no join\nto the Links table. So you end up getting every row in Links for each row in\nTickets with id = 17417.\n\nI'd think this wants to be two queries or a union:\n\nSELECT distinct main.oid,main.* FROM Tickets main\nWHERE (main.EffectiveId = main.id)\nAND (main.Status != 'deleted')\nAND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\nAND ( (main.Queue = '9') )\nAND ( (main.id = '17417'))\nunion\nSELECT distinct main.oid,main.* FROM Tickets main, Links\nWHERE (main.EffectiveId = main.id)\nAND (main.Status != 'deleted')\nAND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\nAND ( (main.Queue = '9') )\nAND ( (Links.Type = 'MemberOf') )\nAND ( (Links.LocalTarget = '17417') )\nAND ( (main.id = Links.LocalBase) ) OR (main.id = Links.LocalTarget) )\n;\n\nor else, yah, a subquery:\n\n[...]\nAND (\n main.id = '17417'\n or\n exists(\n select true from Links\n where Type = 'MemberOf' and LocalTarget = '17417'\n and (LocalBase = main.id or LocalTarget = main.id)\n )\n)\n\nThose are the only things I can think of to make it work, anyways.\n\nDave Cramer wrote:\n\n> RT uses a query like:\n> \n> SELECT distinct main.oid,main.* FROM Tickets main\n> WHERE\n> (main.EffectiveId = main.id)\n> AND\n> (main.Status != 'deleted')\n> AND\n> ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> AND\n> ( (main.Queue = '9') )\n> AND ((\n> ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n> OR\n> ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n> or\n> (main.id = '17417')\n> )\n> );\n> \n> \n> which produces a query plan:\n> \n> Nested Loop (cost=0.00..813.88 rows=1 width=169)\n> Join Filter: ((((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id\n> = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"inner\"\n> .localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR (\"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\n> \"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"inner\".loca\n> lbase = 17417) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)))\n> -> Index Scan using tickets1 on tickets main (cost=0.00..657.61 rows=1 width=169)\n> Index Cond: (queue = 9)\n> Filter: ((effectiveid = id) AND ((status)::text <> 'deleted'::text) AND (((\"type\")::text = 'ticket'::text) OR ((\"type\")::text = 'subticket'::text)))\n> -> Seq Scan on links (cost=0.00..46.62 rows=1462 width=20)\n> \n> If I rewrite the query as:\n> \n> SELECT main.* FROM Tickets main\n> WHERE\n> (main.EffectiveId = main.id)\n> AND\n> (main.Status != 'deleted')\n> AND\n> ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> AND\n> ( (main.Queue = '9') )\n> AND (\n> 17417 in (select links.localtarget from links where links.type='MemberOf' and 
main.id=links.localbase)\n> or\n> 17417 in ( select links.localbase from links where links.type='MemberOf' and main.id=links.localtarget)\n> or\n> main.id = '17417'\n> )\n> ;\n> \n> The time for the query goes from 1500ms to 15ms. The two OR clauses \n> \n> ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n> OR\n> ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n> \n> don't contribute to the result set in this particular dataset, which is why the speed increases so dramatically.\n> \n> Is there a way to rewrite the top query to get the same results? I have already talked to Best Practical, \n> and subqueries are not easily embraced.\n> \n> Dave\n",
"msg_date": "Thu, 19 Aug 2004 06:38:48 -0700",
"msg_from": "Brad Bulger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help with query"
},
{
"msg_contents": "Brad,\n\nThanks, that runs on the same order of magnitude as the subqueries.\n\nDAve\nOn Thu, 2004-08-19 at 09:38, Brad Bulger wrote:\n> You're doing a join except not, is the trouble, looks like. The query is really\n> \"FROM Tickets main, Links\", but when Tickets.id is 17417, you've got no join\n> to the Links table. So you end up getting every row in Links for each row in\n> Tickets with id = 17417.\n> \n> I'd think this wants to be two queries or a union:\n> \n> SELECT distinct main.oid,main.* FROM Tickets main\n> WHERE (main.EffectiveId = main.id)\n> AND (main.Status != 'deleted')\n> AND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> AND ( (main.Queue = '9') )\n> AND ( (main.id = '17417'))\n> union\n> SELECT distinct main.oid,main.* FROM Tickets main, Links\n> WHERE (main.EffectiveId = main.id)\n> AND (main.Status != 'deleted')\n> AND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> AND ( (main.Queue = '9') )\n> AND ( (Links.Type = 'MemberOf') )\n> AND ( (Links.LocalTarget = '17417') )\n> AND ( (main.id = Links.LocalBase) ) OR (main.id = Links.LocalTarget) )\n> ;\n> \n> or else, yah, a subquery:\n> \n> [...]\n> AND (\n> main.id = '17417'\n> or\n> exists(\n> select true from Links\n> where Type = 'MemberOf' and LocalTarget = '17417'\n> and (LocalBase = main.id or LocalTarget = main.id)\n> )\n> )\n> \n> Those are the only things I can think of to make it work, anyways.\n> \n> Dave Cramer wrote:\n> \n> > RT uses a query like:\n> > \n> > SELECT distinct main.oid,main.* FROM Tickets main\n> > WHERE\n> > (main.EffectiveId = main.id)\n> > AND\n> > (main.Status != 'deleted')\n> > AND\n> > ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> > AND\n> > ( (main.Queue = '9') )\n> > AND ((\n> > ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n> > OR\n> > ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n> > or\n> > (main.id = '17417')\n> > )\n> > );\n> > \n> > \n> > which produces a query plan:\n> > \n> > Nested Loop (cost=0.00..813.88 rows=1 width=169)\n> > Join Filter: ((((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id\n> > = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"inner\"\n> > .localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR (\"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\n> > \"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"inner\".loca\n> > lbase = 17417) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)))\n> > -> Index Scan using tickets1 on tickets main (cost=0.00..657.61 rows=1 width=169)\n> > Index Cond: (queue = 9)\n> > Filter: ((effectiveid = id) AND ((status)::text <> 'deleted'::text) AND (((\"type\")::text = 'ticket'::text) OR ((\"type\")::text = 'subticket'::text)))\n> > -> Seq Scan on links (cost=0.00..46.62 rows=1462 width=20)\n> > \n> > If I rewrite the query as:\n> > \n> > SELECT main.* FROM Tickets main\n> > 
WHERE\n> > (main.EffectiveId = main.id)\n> > AND\n> > (main.Status != 'deleted')\n> > AND\n> > ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> > AND\n> > ( (main.Queue = '9') )\n> > AND (\n> > 17417 in (select links.localtarget from links where links.type='MemberOf' and main.id=links.localbase)\n> > or\n> > 17417 in ( select links.localbase from links where links.type='MemberOf' and main.id=links.localtarget)\n> > or\n> > main.id = '17417'\n> > )\n> > ;\n> > \n> > The time for the query goes from 1500ms to 15ms. The two OR clauses \n> > \n> > ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n> > OR\n> > ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n> > \n> > don't contribute to the result set in this particular dataset, which is why the speed increases so dramatically.\n> > \n> > Is there a way to rewrite the top query to get the same results? I have already talked to Best Practical, \n> > and subqueries are not easily embraced.\n> > \n> > Dave\n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n",
"msg_date": "Thu, 19 Aug 2004 09:50:38 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: help with query"
},
{
"msg_contents": ">From what I can figure, queries like this run much quicker on other\ndatabases, is this something that can be improved ?\n\nDave\nOn Thu, 2004-08-19 at 09:38, Brad Bulger wrote:\n> You're doing a join except not, is the trouble, looks like. The query is really\n> \"FROM Tickets main, Links\", but when Tickets.id is 17417, you've got no join\n> to the Links table. So you end up getting every row in Links for each row in\n> Tickets with id = 17417.\n> \n> I'd think this wants to be two queries or a union:\n> \n> SELECT distinct main.oid,main.* FROM Tickets main\n> WHERE (main.EffectiveId = main.id)\n> AND (main.Status != 'deleted')\n> AND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> AND ( (main.Queue = '9') )\n> AND ( (main.id = '17417'))\n> union\n> SELECT distinct main.oid,main.* FROM Tickets main, Links\n> WHERE (main.EffectiveId = main.id)\n> AND (main.Status != 'deleted')\n> AND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> AND ( (main.Queue = '9') )\n> AND ( (Links.Type = 'MemberOf') )\n> AND ( (Links.LocalTarget = '17417') )\n> AND ( (main.id = Links.LocalBase) ) OR (main.id = Links.LocalTarget) )\n> ;\n> \n> or else, yah, a subquery:\n> \n> [...]\n> AND (\n> main.id = '17417'\n> or\n> exists(\n> select true from Links\n> where Type = 'MemberOf' and LocalTarget = '17417'\n> and (LocalBase = main.id or LocalTarget = main.id)\n> )\n> )\n> \n> Those are the only things I can think of to make it work, anyways.\n> \n> Dave Cramer wrote:\n> \n> > RT uses a query like:\n> > \n> > SELECT distinct main.oid,main.* FROM Tickets main\n> > WHERE\n> > (main.EffectiveId = main.id)\n> > AND\n> > (main.Status != 'deleted')\n> > AND\n> > ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> > AND\n> > ( (main.Queue = '9') )\n> > AND ((\n> > ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n> > OR\n> > ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n> > or\n> > (main.id = '17417')\n> > )\n> > );\n> > \n> > \n> > which produces a query plan:\n> > \n> > Nested Loop (cost=0.00..813.88 rows=1 width=169)\n> > Join Filter: ((((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id\n> > = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"inner\"\n> > .localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR (\"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\n> > \"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"inner\".loca\n> > lbase = 17417) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)))\n> > -> Index Scan using tickets1 on tickets main (cost=0.00..657.61 rows=1 width=169)\n> > Index Cond: (queue = 9)\n> > Filter: ((effectiveid = id) AND ((status)::text <> 'deleted'::text) AND (((\"type\")::text = 'ticket'::text) OR ((\"type\")::text = 'subticket'::text)))\n> > -> Seq Scan on links (cost=0.00..46.62 rows=1462 width=20)\n> > \n> > If I rewrite the query as:\n> > 
\n> > SELECT main.* FROM Tickets main\n> > WHERE\n> > (main.EffectiveId = main.id)\n> > AND\n> > (main.Status != 'deleted')\n> > AND\n> > ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n> > AND\n> > ( (main.Queue = '9') )\n> > AND (\n> > 17417 in (select links.localtarget from links where links.type='MemberOf' and main.id=links.localbase)\n> > or\n> > 17417 in ( select links.localbase from links where links.type='MemberOf' and main.id=links.localtarget)\n> > or\n> > main.id = '17417'\n> > )\n> > ;\n> > \n> > The time for the query goes from 1500ms to 15ms. The two OR clauses \n> > \n> > ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n> > OR\n> > ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n> > \n> > don't contribute to the result set in this particular dataset, which is why the speed increases so dramatically.\n> > \n> > Is there a way to rewrite the top query to get the same results? I have already talked to Best Practical, \n> > and subqueries are not easily embraced.\n> > \n> > Dave\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n",
"msg_date": "Thu, 19 Aug 2004 10:17:51 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: help with query"
},
{
"msg_contents": "how about:\n\nSELECT distinct main.oid,main.* FROM Tickets main\nWHERE main.EffectiveId = main.id\nAND main.Status != 'deleted'\nAND ( main.Type = 'ticket' OR main.Type = 'subticket' )\nAND ( main.Queue = '9' )\nAND ( main.id = '17417'\n OR main.id IN (\n SELECT DISTINCT LocalTarget from Links\n where Type = 'MemberOf' and LocalTarget = '17417')\n OR main.id IN (\n SELECT DISTINCT LocalBase from Links\n where Type = 'MemberOf' and LocalTarget = '17417'))\n\n\n\n\nDave Cramer wrote:\n\n> Brad,\n> \n> Thanks, that runs on the same order of magnitude as the subqueries.\n> \n> DAve\n> On Thu, 2004-08-19 at 09:38, Brad Bulger wrote:\n> \n>>You're doing a join except not, is the trouble, looks like. The query is really\n>>\"FROM Tickets main, Links\", but when Tickets.id is 17417, you've got no join\n>>to the Links table. So you end up getting every row in Links for each row in\n>>Tickets with id = 17417.\n>>\n>>I'd think this wants to be two queries or a union:\n>>\n>>SELECT distinct main.oid,main.* FROM Tickets main\n>>WHERE (main.EffectiveId = main.id)\n>>AND (main.Status != 'deleted')\n>>AND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n>>AND ( (main.Queue = '9') )\n>>AND ( (main.id = '17417'))\n>>union\n>>SELECT distinct main.oid,main.* FROM Tickets main, Links\n>>WHERE (main.EffectiveId = main.id)\n>>AND (main.Status != 'deleted')\n>>AND ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n>>AND ( (main.Queue = '9') )\n>>AND ( (Links.Type = 'MemberOf') )\n>>AND ( (Links.LocalTarget = '17417') )\n>>AND ( (main.id = Links.LocalBase) ) OR (main.id = Links.LocalTarget) )\n>>;\n>>\n>>or else, yah, a subquery:\n>>\n>>[...]\n>>AND (\n>> main.id = '17417'\n>> or\n>> exists(\n>> select true from Links\n>> where Type = 'MemberOf' and LocalTarget = '17417'\n>> and (LocalBase = main.id or LocalTarget = main.id)\n>> )\n>>)\n>>\n>>Those are the only things I can think of to make it work, anyways.\n>>\n>>Dave Cramer wrote:\n>>\n>>\n>>>RT uses a query like:\n>>>\n>>>SELECT distinct main.oid,main.* FROM Tickets main\n>>>WHERE\n>>>(main.EffectiveId = main.id)\n>>>AND\n>>>(main.Status != 'deleted')\n>>>AND\n>>> ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n>>>AND\n>>> ( (main.Queue = '9') )\n>>>AND ((\n>>> ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n>>> OR\n>>> ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n>>> or\n>>> (main.id = '17417')\n>>> )\n>>> );\n>>>\n>>>\n>>>which produces a query plan:\n>>>\n>>>Nested Loop (cost=0.00..813.88 rows=1 width=169)\n>>> Join Filter: ((((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id\n>>>= 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR ((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"inner\"\n>>>.localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"inner\".localbase = 17417) OR (\"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND ((\"outer\".id = \"inner\".localtarget) OR (\n>>>\"inner\".localtarget = 17417) OR (\"outer\".id = 17417)) AND (((\"inner\".\"type\")::text = 'MemberOf'::text) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"inner\".loca\n>>>lbase = 17417) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)) AND ((\"outer\".id = 
\"inner\".localtarget) OR (\"outer\".id = \"inner\".localbase) OR (\"outer\".id = 17417)))\n>>> -> Index Scan using tickets1 on tickets main (cost=0.00..657.61 rows=1 width=169)\n>>> Index Cond: (queue = 9)\n>>> Filter: ((effectiveid = id) AND ((status)::text <> 'deleted'::text) AND (((\"type\")::text = 'ticket'::text) OR ((\"type\")::text = 'subticket'::text)))\n>>> -> Seq Scan on links (cost=0.00..46.62 rows=1462 width=20)\n>>>\n>>>If I rewrite the query as:\n>>>\n>>>SELECT main.* FROM Tickets main\n>>>WHERE\n>>>(main.EffectiveId = main.id)\n>>>AND\n>>>(main.Status != 'deleted')\n>>>AND\n>>> ( (main.Type = 'ticket') OR (main.Type = 'subticket') )\n>>>AND\n>>> ( (main.Queue = '9') )\n>>>AND (\n>>> 17417 in (select links.localtarget from links where links.type='MemberOf' and main.id=links.localbase)\n>>> or\n>>> 17417 in ( select links.localbase from links where links.type='MemberOf' and main.id=links.localtarget)\n>>> or\n>>> main.id = '17417'\n>>> )\n>>> ;\n>>>\n>>>The time for the query goes from 1500ms to 15ms. The two OR clauses \n>>>\n>>> ( (Links.Type = 'MemberOf') AND (Links.LocalTarget = '17417') AND (main.id = Links.LocalBase) )\n>>> OR\n>>> ( (Links.Type = 'MemberOf') AND (Links.LocalBase = '17417') AND (main.id = Links.LocalTarget) )\n>>>\n>>>don't contribute to the result set in this particular dataset, which is why the speed increases so dramatically.\n>>>\n>>>Is there a way to rewrite the top query to get the same results? I have already talked to Best Practical, \n>>>and subqueries are not easily embraced.\n>>>\n>>>Dave\n>>\n",
"msg_date": "Thu, 19 Aug 2004 10:22:45 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help with query"
}
] |
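Spelling out Brad's EXISTS suggestion as one complete statement against the Tickets/Links schema used above; this is only a sketch assuming the same column meanings, and it should return the same tickets as Dave's IN-subquery rewrite (the ticket itself plus MemberOf links in either direction).

SELECT main.* FROM Tickets main
WHERE main.EffectiveId = main.id
  AND main.Status != 'deleted'
  AND (main.Type = 'ticket' OR main.Type = 'subticket')
  AND main.Queue = '9'
  AND ( main.id = '17417'
        OR EXISTS (SELECT 1 FROM Links
                   WHERE Links.Type = 'MemberOf'
                     AND ( (Links.LocalTarget = '17417' AND Links.LocalBase = main.id)
                        OR (Links.LocalBase = '17417' AND Links.LocalTarget = main.id))));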
[
{
"msg_contents": "Hi all,\nI'm trying to optimize the following query:\n\nhttp://rafb.net/paste/results/YdO9vM69.html\n\nAs you can see from the explain output, after defining the\nindex the performance is worse.\n\nIf I raise the default_statistic_target to 200\nthen the performance is worse than before:\n\n\nWithout index: 1.140 ms\nWith index: 1.400 ms\nWith default_statistic_target = 200: 1.800 ms\n\n\nThoughts, anyone?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n",
"msg_date": "Thu, 19 Aug 2004 19:26:01 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "using an index worst performances"
},
{
"msg_contents": "Gaetano Mendola wrote:\n> Hi all,\n> I'm tring to optimize the following query:\n> \n> http://rafb.net/paste/results/YdO9vM69.html\n> \n> as you can see from the explain after defining the\n> index the performance is worst.\n> \n> If I raise the default_statistic_target to 200\n> then the performance are worst then before:\n> \n> \n> Without index: 1.140 ms\n> With index: 1.400 ms\n> With default_statistic_targer = 200: 1.800 ms\n\nCan I just check that 1.800ms means 1.8 secs (You're using . as the \nthousands separator)?\n\nIf it means 1.8ms then frankly the times are too short to mean anything \nwithout running them 100 times and averaging.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 19 Aug 2004 19:09:55 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using an index worst performances"
},
{
"msg_contents": "Richard Huxton wrote:\n\n> Gaetano Mendola wrote:\n> \n>> Hi all,\n>> I'm tring to optimize the following query:\n>>\n>> http://rafb.net/paste/results/YdO9vM69.html\n>>\n>> as you can see from the explain after defining the\n>> index the performance is worst.\n>>\n>> If I raise the default_statistic_target to 200\n>> then the performance are worst then before:\n>>\n>>\n>> Without index: 1.140 ms\n>> With index: 1.400 ms\n>> With default_statistic_targer = 200: 1.800 ms\n> \n> \n> Can I just check that 1.800ms means 1.8 secs (You're using . as the \n> thousands separator)?\n> \n> If it means 1.8ms then frankly the times are too short to mean anything \n> without running them 100 times and averaging.\n\n\nIt means 1.8 ms, and the execution time stays at that value even\nover 1000 runs.\n\n\nRegards\nGaetano Mendola\n",
"msg_date": "Thu, 19 Aug 2004 21:56:38 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: using an index worst performances"
},
{
"msg_contents": ">>> Without index: 1.140 ms\n>>> With index: 1.400 ms\n>>> With default_statistic_targer = 200: 1.800 ms\n>>\n>>\n>>\n>> Can I just check that 1.800ms means 1.8 secs (You're using . as the \n>> thousands separator)?\n>>\n>> If it means 1.8ms then frankly the times are too short to mean \n>> anything without running them 100 times and averaging.\n> \n> \n> \n> It mean 1.8 ms and that execution time is sticky to that value even\n> with 1000 times.\n\nGiven the almost irrelevant difference in the speed of those queries, I'd \nsay that with the stats so high, postgres simply takes longer to check \nthe statistics to come to the same conclusion. i.e. it has to loop over \n200 rows instead of just 10.\n\nChris\n\n",
"msg_date": "Fri, 20 Aug 2004 09:39:41 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using an index worst performances"
},
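One way to see what Chris describes is to look at the per-column statistics the planner keeps; the most_common_vals and histogram_bounds arrays grow with the statistics target, so a higher target means more entries to consider at plan time. A sketch only, with the table name taken from the plans posted later in this thread:

SELECT attname, n_distinct, most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'store_nodes';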
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nChristopher Kings-Lynne wrote:\n\n|>>> Without index: 1.140 ms\n|>>> With index: 1.400 ms\n|>>> With default_statistic_targer = 200: 1.800 ms\n|>>\n|>>\n|>>\n|>>\n|>> Can I just check that 1.800ms means 1.8 secs (You're using . as the\n|>> thousands separator)?\n|>>\n|>> If it means 1.8ms then frankly the times are too short to mean\n|>> anything without running them 100 times and averaging.\n|>\n|>\n|>\n|>\n|> It mean 1.8 ms and that execution time is sticky to that value even\n|> with 1000 times.\n|\n|\n| Given the almost irrelvant difference in the speed of those queries, I'd\n| say that with the stats so high, postgres simply takes longer to check\n| the statistics to come to the same conclusion. ie. it has to loop over\n| 200 rows instead of just 10.\n\nThe time increase seems too much.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBJcZW7UpzwH2SGd4RAuiMAJ971EAtr1RrHu2QMi0YYk0kKeuQmACg9bd3\nCFcmq5MRG/Eq3RXdNOdu43Y=\n=Bvo8\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Fri, 20 Aug 2004 11:37:27 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: using an index worst performances"
},
{
"msg_contents": "On Fri, 2004-08-20 at 05:37, Gaetano Mendola wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Christopher Kings-Lynne wrote:\n> \n> |>>> Without index: 1.140 ms\n> |>>> With index: 1.400 ms\n> |>>> With default_statistic_targer = 200: 1.800 ms\n> |>>\n> |>>\n> |>>\n> |>>\n> |>> Can I just check that 1.800ms means 1.8 secs (You're using . as the\n> |>> thousands separator)?\n> |>>\n> |>> If it means 1.8ms then frankly the times are too short to mean\n> |>> anything without running them 100 times and averaging.\n> |>\n> |>\n> |>\n> |>\n> |> It mean 1.8 ms and that execution time is sticky to that value even\n> |> with 1000 times.\n> |\n> |\n> | Given the almost irrelvant difference in the speed of those queries, I'd\n> | say that with the stats so high, postgres simply takes longer to check\n> | the statistics to come to the same conclusion. ie. it has to loop over\n> | 200 rows instead of just 10.\n> \n> The time increase seems too much.\n\nWe can test this.\n\nWhat are the times without the index, with the index and with the higher\nstatistics value when using a prepared query?\n\n\n",
"msg_date": "Fri, 20 Aug 2004 08:55:49 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using an index worst performances"
},
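A sketch of the prepared-query timing Rod asks for; the statement name, query body and parameter below are placeholders standing in for the real query behind the paste link, and the point is only the PREPARE / EXPLAIN ANALYZE EXECUTE workflow.

PREPARE test_stmt (int) AS
  SELECT * FROM store_nodes WHERE object = $1;

EXPLAIN ANALYZE EXECUTE test_stmt(42);   -- repeat a few times and average the Total runtime
EXPLAIN ANALYZE EXECUTE test_stmt(42);

DEALLOCATE test_stmt;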
{
"msg_contents": "Rod Taylor wrote:\n\n> On Fri, 2004-08-20 at 05:37, Gaetano Mendola wrote:\n> \n>>-----BEGIN PGP SIGNED MESSAGE-----\n>>Hash: SHA1\n>>\n>>Christopher Kings-Lynne wrote:\n>>\n>>|>>> Without index: 1.140 ms\n>>|>>> With index: 1.400 ms\n>>|>>> With default_statistic_targer = 200: 1.800 ms\n>>|>>\n>>|>>\n>>|>>\n>>|>>\n>>|>> Can I just check that 1.800ms means 1.8 secs (You're using . as the\n>>|>> thousands separator)?\n>>|>>\n>>|>> If it means 1.8ms then frankly the times are too short to mean\n>>|>> anything without running them 100 times and averaging.\n>>|>\n>>|>\n>>|>\n>>|>\n>>|> It mean 1.8 ms and that execution time is sticky to that value even\n>>|> with 1000 times.\n>>|\n>>|\n>>| Given the almost irrelvant difference in the speed of those queries, I'd\n>>| say that with the stats so high, postgres simply takes longer to check\n>>| the statistics to come to the same conclusion. ie. it has to loop over\n>>| 200 rows instead of just 10.\n>>\n>>The time increase seems too much.\n> \n> \n> We can test this.\n> \n> What are the times without the index, with the index and with the higher\n> statistics value when using a prepared query?\n\nUsing a prepared query:\n\nWithout index and default stat 10 : 1.12 ms\nWithout index and default stat 1000 : 1.25 ms\nWith index and default stat 10: 1.35 ms\nWith index and default stat 1000: 1.6 ms\n\nThose values are the average obtained after the very first one,\nover 20 executions.\n\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Fri, 20 Aug 2004 17:52:33 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: using an index worst performances"
},
{
"msg_contents": "Gaetano Mendola <[email protected]> writes:\n> Using a prepared query:\n\n> Without index and default stat 10 : 1.12 ms\n> Without index and default stat 1000 : 1.25 ms\n> With index and default stat 10: 1.35 ms\n> With index and default stat 1000: 1.6 ms\n\nCould we see EXPLAIN ANALYZE EXECUTE output for each case?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Aug 2004 12:19:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using an index worst performances "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nTom Lane wrote:\n| Gaetano Mendola <[email protected]> writes:\n|\n|>Using a prepared query:\n|\n|\n|>Without index and default stat 10 : 1.12 ms\n\nariadne=# explain analyze execute test_ariadne;\n~ QUERY PLAN\n- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Limit (cost=46.15..46.17 rows=1 width=760) (actual time=0.926..1.035 rows=3 loops=1)\n~ -> Unique (cost=46.15..46.17 rows=1 width=760) (actual time=0.904..0.969 rows=3 loops=1)\n~ -> Sort (cost=46.15..46.15 rows=1 width=760) (actual time=0.891..0.909 rows=3 loops=1)\n~ Sort Key: store_nodes.parent, store_nodes.priority, store_nodes.\"path\", store_objects.id, store_objects.\"type\", store_objects.object, date_part('epoch'::text, store_objects.lastchanged), store_objects.vtype\n~ -> Hash Join (cost=1.74..46.14 rows=1 width=760) (actual time=0.342..0.825 rows=3 loops=1)\n~ Hash Cond: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n~ -> Nested Loop (cost=0.00..44.38 rows=1 width=760) (actual time=0.198..0.618 rows=3 loops=1)\n~ -> Nested Loop (cost=0.00..38.93 rows=1 width=104) (actual time=0.157..0.447 rows=3 loops=1)\n~ -> Seq Scan on store_prop_article (cost=0.00..1.75 rows=7 width=8) (actual time=0.030..0.119 rows=7 loops=1)\n~ Filter: ((ar_start <= 1092924200) AND (ar_end >= 1092924200) AND ((ar_display)::text = 'default'::text))\n~ -> Index Scan using store_nodes_object on store_nodes (cost=0.00..5.30 rows=1 width=96) (actual time=0.019..0.023 rows=0 loops=7)\n~ Index Cond: (\"outer\".object = store_nodes.object)\n~ Filter: ((\"path\")::text ~~ '/sites/broadsat/news/%'::text)\n~ -> Index Scan using store_objects_pkey on store_objects (cost=0.00..5.43 rows=1 width=672) (actual time=0.013..0.020 rows=1 loops=3)\n~ Index Cond: (\"outer\".object = store_objects.id)\n~ -> Hash (cost=1.74..1.74 rows=2 width=11) (actual time=0.085..0.085 rows=0 loops=1)\n~ -> Seq Scan on store_types (cost=0.00..1.74 rows=2 width=11) (actual time=0.038..0.064 rows=1 loops=1)\n~ Filter: ((implements)::text = 'particle'::text)\n~ Total runtime: 1.199 ms\n(19 rows)\n\n\n|>Without index and default stat 1000 : 1.25 ms\n\nariadne=# explain analyze execute test_ariadne;\n~ QUERY PLAN\n- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Limit (cost=46.14..46.16 rows=1 width=760) (actual time=1.027..1.126 rows=3 loops=1)\n~ -> Unique (cost=46.14..46.16 rows=1 width=760) (actual time=1.014..1.077 rows=3 loops=1)\n~ -> Sort (cost=46.14..46.14 rows=1 width=760) (actual time=1.001..1.019 rows=3 loops=1)\n~ Sort Key: store_nodes.parent, store_nodes.priority, store_nodes.\"path\", store_objects.id, store_objects.\"type\", store_objects.object, date_part('epoch'::text, store_objects.lastchanged), store_objects.vtype\n~ -> Nested Loop (cost=0.00..46.13 rows=1 width=760) (actual time=0.278..0.933 rows=3 loops=1)\n~ Join Filter: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n~ -> Nested Loop (cost=0.00..44.38 rows=1 width=760) (actual time=0.208..0.591 rows=3 loops=1)\n~ -> Nested Loop (cost=0.00..38.93 rows=1 width=104) (actual time=0.168..0.417 rows=3 loops=1)\n~ -> Seq Scan on store_prop_article (cost=0.00..1.75 rows=7 width=8) (actual 
time=0.038..0.118 rows=7 loops=1)\n~ Filter: ((ar_start <= 1092924200) AND (ar_end >= 1092924200) AND ((ar_display)::text = 'default'::text))\n~ -> Index Scan using store_nodes_object on store_nodes (cost=0.00..5.30 rows=1 width=96) (actual time=0.016..0.020 rows=0 loops=7)\n~ Index Cond: (\"outer\".object = store_nodes.object)\n~ Filter: ((\"path\")::text ~~ '/sites/broadsat/news/%'::text)\n~ -> Index Scan using store_objects_pkey on store_objects (cost=0.00..5.43 rows=1 width=672) (actual time=0.012..0.022 rows=1 loops=3)\n~ Index Cond: (\"outer\".object = store_objects.id)\n~ -> Seq Scan on store_types (cost=0.00..1.74 rows=1 width=11) (actual time=0.029..0.060 rows=1 loops=3)\n~ Filter: ((implements)::text = 'particle'::text)\n~ Total runtime: 1.288 ms\n(18 rows)\n\n\n|>With index and default stat 10: 1.35 ms\n\nariadne=# explain analyze execute test_ariadne;\n~ QUERY PLAN\n- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Limit (cost=14.95..14.97 rows=1 width=760) (actual time=1.066..1.165 rows=3 loops=1)\n~ -> Unique (cost=14.95..14.97 rows=1 width=760) (actual time=1.052..1.116 rows=3 loops=1)\n~ -> Sort (cost=14.95..14.95 rows=1 width=760) (actual time=1.036..1.054 rows=3 loops=1)\n~ Sort Key: store_nodes.parent, store_nodes.priority, store_nodes.\"path\", store_objects.id, store_objects.\"type\", store_objects.object, date_part('epoch'::text, store_objects.lastchanged), store_objects.vtype\n~ -> Hash Join (cost=3.51..14.94 rows=1 width=760) (actual time=0.614..0.968 rows=3 loops=1)\n~ Hash Cond: (\"outer\".id = \"inner\".object)\n~ -> Hash Join (cost=1.74..13.15 rows=1 width=768) (actual time=0.281..0.651 rows=5 loops=1)\n~ Hash Cond: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n~ -> Nested Loop (cost=0.00..11.39 rows=1 width=768) (actual time=0.070..0.406 rows=6 loops=1)\n~ -> Index Scan using test_index on store_nodes (cost=0.00..5.95 rows=1 width=96) (actual time=0.027..0.084 rows=6 loops=1)\n~ Index Cond: (((\"path\")::text ~>=~ '/sites/broadsat/news/'::character varying) AND ((\"path\")::text ~<~ '/sites/broadsat/news0'::character varying))\n~ Filter: ((\"path\")::text ~~ '/sites/broadsat/news/%'::text)\n~ -> Index Scan using store_objects_pkey on store_objects (cost=0.00..5.43 rows=1 width=672) (actual time=0.012..0.020 rows=1 loops=6)\n~ Index Cond: (store_objects.id = \"outer\".object)\n~ -> Hash (cost=1.74..1.74 rows=2 width=11) (actual time=0.093..0.093 rows=0 loops=1)\n~ -> Seq Scan on store_types (cost=0.00..1.74 rows=2 width=11) (actual time=0.029..0.054 rows=1 loops=1)\n~ Filter: ((implements)::text = 'particle'::text)\n~ -> Hash (cost=1.75..1.75 rows=7 width=8) (actual time=0.182..0.182 rows=0 loops=1)\n~ -> Seq Scan on store_prop_article (cost=0.00..1.75 rows=7 width=8) (actual time=0.041..0.121 rows=7 loops=1)\n~ Filter: ((ar_start <= 1092924200) AND (ar_end >= 1092924200) AND ((ar_display)::text = 'default'::text))\n~ Total runtime: 1.358 ms\n(21 rows)\n\n|>With index and default stat 1000: 1.6 ms\n\nariadne=# explain analyze execute test_ariadne;\n~ QUERY PLAN\n- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Limit (cost=14.94..14.96 rows=1 width=760) (actual time=1.346..1.445 rows=3 loops=1)\n~ -> 
Unique (cost=14.94..14.96 rows=1 width=760) (actual time=1.329..1.393 rows=3 loops=1)\n~ -> Sort (cost=14.94..14.94 rows=1 width=760) (actual time=1.317..1.335 rows=3 loops=1)\n~ Sort Key: store_nodes.parent, store_nodes.priority, store_nodes.\"path\", store_objects.id, store_objects.\"type\", store_objects.object, date_part('epoch'::text, store_objects.lastchanged), store_objects.vtype\n~ -> Hash Join (cost=1.77..14.93 rows=1 width=760) (actual time=0.663..1.249 rows=3 loops=1)\n~ Hash Cond: (\"outer\".id = \"inner\".object)\n~ -> Nested Loop (cost=0.00..13.14 rows=1 width=768) (actual time=0.268..0.936 rows=5 loops=1)\n~ Join Filter: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n~ -> Nested Loop (cost=0.00..11.39 rows=1 width=768) (actual time=0.070..0.412 rows=6 loops=1)\n~ -> Index Scan using test_index on store_nodes (cost=0.00..5.95 rows=1 width=96) (actual time=0.027..0.093 rows=6 loops=1)\n~ Index Cond: (((\"path\")::text ~>=~ '/sites/broadsat/news/'::character varying) AND ((\"path\")::text ~<~ '/sites/broadsat/news0'::character varying))\n~ Filter: ((\"path\")::text ~~ '/sites/broadsat/news/%'::text)\n~ -> Index Scan using store_objects_pkey on store_objects (cost=0.00..5.43 rows=1 width=672) (actual time=0.013..0.020 rows=1 loops=6)\n~ Index Cond: (store_objects.id = \"outer\".object)\n~ -> Seq Scan on store_types (cost=0.00..1.74 rows=1 width=11) (actual time=0.025..0.051 rows=1 loops=6)\n~ Filter: ((implements)::text = 'particle'::text)\n~ -> Hash (cost=1.75..1.75 rows=7 width=8) (actual time=0.181..0.181 rows=0 loops=1)\n~ -> Seq Scan on store_prop_article (cost=0.00..1.75 rows=7 width=8) (actual time=0.040..0.122 rows=7 loops=1)\n~ Filter: ((ar_start <= 1092924200) AND (ar_end >= 1092924200) AND ((ar_display)::text = 'default'::text))\n~ Total runtime: 1.616 ms\n(20 rows)\n\n\n\n| Could we see EXPLAIN ANALYZE EXECUTE output for each case?\n|\n\nSee above.\n\n\nBTW I dont know if this is a known issue:\n\nAfter the prepare statement:\n\nariadne=# drop index test_index;\nDROP INDEX\nariadne=# explain analyze execute test_ariadne;\nERROR: could not open relation with OID 53695\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBJlRb7UpzwH2SGd4RAn+/AJ9QEyedv6ZQNQse5uhhCpasF65dugCfUzW7\ntDuDEVFNgb42NbX2/GJ+joQ=\n=gaO/\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Fri, 20 Aug 2004 21:43:24 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: using an index worst performances"
},
{
"msg_contents": "Gaetano Mendola <[email protected]> writes:\n> Tom Lane wrote:\n> | Could we see EXPLAIN ANALYZE EXECUTE output for each case?\n\n> [snip]\n> See above.\n\nOkay, so the issue here is choosing between a nestloop or a hash join\nthat have very nearly equal estimated costs:\n\n> ~ -> Hash Join (cost=1.74..46.14 rows=1 width=760) (actual time=0.342..0.825 rows=3 loops=1)\n> ~ Hash Cond: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n\n> ~ -> Nested Loop (cost=0.00..46.13 rows=1 width=760) (actual time=0.278..0.933 rows=3 loops=1)\n> ~ Join Filter: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n\nIn the indexed case it's the same choice, but at a different level of joining:\n\n> ~ -> Hash Join (cost=1.74..13.15 rows=1 width=768) (actual time=0.281..0.651 rows=5 loops=1)\n> ~ Hash Cond: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n\n> ~ -> Nested Loop (cost=0.00..13.14 rows=1 width=768) (actual time=0.268..0.936 rows=5 loops=1)\n> ~ Join Filter: ((\"outer\".vtype)::text = (\"inner\".\"type\")::text)\n\nWith only 0.01 unit of difference in the costs, it's perfectly plausible\nfor a change in the statistics to change the estimated cost just enough\nto give one plan or the other the edge in estimated cost.\n\nGiven that the runtimes are in fact pretty similar, it doesn't bother me\nthat the planner is estimating them as essentially identical.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Aug 2004 16:04:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using an index worst performances "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nRod Taylor wrote:\n\n|>>What are the times without the index, with the index and with the higher\n|>>statistics value when using a prepared query?\n|>\n|>Using a prepared query:\n|>\n|>Without index and default stat 10 : 1.12 ms\n|>Without index and default stat 1000 : 1.25 ms\n|>With index and default stat 10: 1.35 ms\n|>With index and default stat 1000: 1.6 ms\n|>\n|>that values are the average obtained after the very first one,\n|>on 20 execution.\n|\n|\n| Most interesting. And the plans chosen with the 2 different default stat\n| targets are the same? Sorry if I missed the post indicating they were.\n|\n| If the plans are the same, it would be interesting to get a profile on\n| the 2 different cases with that index in place across 100k iterations of\n| the prepared query.\n\nDo you have an advice on the profiler to use ?\n\n\nRegards\nGaetano Mendola\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBKHBg7UpzwH2SGd4RAmGCAKDOZ3xXNPFhhGSMN89MssR7UZnY3ACg6sAY\nmWKo4uAZzv1ZtmBsfQZ2SBc=\n=NQf/\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sun, 22 Aug 2004 12:08:47 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: using an index worst performances"
}
] |
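A side note on the statistics experiments in this thread: the target can also be raised for a single column instead of changing default_statistic_target globally, which leaves the rest of the database at the default. This is only a sketch, and which column is worth bumping (path is a guess taken from the plans above) is an assumption.

ALTER TABLE store_nodes ALTER COLUMN path SET STATISTICS 200;   -- larger sample for this column
ANALYZE store_nodes;

ALTER TABLE store_nodes ALTER COLUMN path SET STATISTICS -1;    -- -1 reverts to the default target
ANALYZE store_nodes;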
[
{
"msg_contents": "Hi list,\n\nI have a database with 1M \"people\" in it. Each person has about 20\nattributes, such as height, weight, eye color, etc. I need to be able to\nsearch for people based on these attributes. A search can be conducted\non one attribute, all attributes, or any number in between. How would\n_you_ do this?\n\nI have already attempted to answer this. My attempts are detailed here:\n\nhttp://sh.nu/email.txt\n\nThis is the email I was originally going to send to this list. Since\nit's so large, I decided to link to it instead. If you feel that it\nbelongs in a post to the list, let me know, and I'll post again.\n\nI've discussed these attempts with people in #postgresql on\nirc.freenode.net. Agliodbs (I presume you know who this is) was very\nhelpful, but in the end was at a loss. I find myself in the same position\nat this time. He suggested I contact this list.\n\nMy ultimate goal is performance. This _must_ be fast. And by fast, I\nmean, < 1 second, for every permutation of the number of attributes\nsearched for. Flexibility would be a bonus, but at this point I'll\nsettle for something that's harder to maintain if I can get the speed\ngain I need.\n\nThanks,\n\nDaniel Ceregatti\n",
"msg_date": "Thu, 19 Aug 2004 11:03:04 -0700",
"msg_from": "Daniel Ceregatti <[email protected]>",
"msg_from_op": true,
"msg_subject": "What is the best way to do attribute/values?"
},
{
"msg_contents": "Folks,\n\n> I've discussed these attempts with people in #postgresql on\n> irc.freenode.net. Agliodbs (I presume you know who this is) was very\n> helpful, but in end was at a loss. I find myself in the same postition\n> at this time. He suggested I contact this list.\n\nThere's a couple of issues here to attack:\n\n1) PostgreSQL is not using the most optimal plan. First, it's ignoring the \nfact that all referenced columns are indexed and only using the first column, \nthen filtering based on the other criteria. Second, testing has shown that \na hash join would actually be faster. We've tried upping the statistics, \nbut it doesn't seem to have an effect on the planner's erroneous estimates.\n\n2) Even were it using the most optimal plan, it's still too slow. As you can \nsee from the plan, each merge join takes about 1.5 to 2 seconds. (hash \njoins are only about 0.5 seconds slower). Mysteriously, a big chunk of this \ntime is spent *in between* planner steps, as if there was some hold-up in \nretrieving the index or table pages. There may be, but Daniel and I have \nnot been able to diagnose the cause. It's particularly mysterious since a \nfilter-and-sort on a *single* criteria set, without join, takes < 400ms.\n\nThings we've already tried to avoid going over old ground:\n1) increasing statistics;\n2) increasing sort_mem (to 256MB, which is overkill)\n3) testing on 8.0 beta, which does not affect the issue.\n\nAt this point I'm looking for ideas. Suggestions, anyone?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 24 Aug 2004 13:30:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the best way to do attribute/values?"
},
{
"msg_contents": "Daniel Ceregatti wrote:\n> Hi list,\n> \n> I have a database with 1M \"people\" in it. Each person has about 20\n> attributes, such as height, weight, eye color, etc. I need to be able to\n> search for people based on these attributes. A search can be conducted\n> on one attribute, all attributes, or any number in between. How would\n> _you_ do this?\n> \n> I have already attempted to answer this. My attempts are detailed here:\n> \n> http://sh.nu/email.txt\n\nHmm... interesting.\n\nShot in the dark - try a tsearch2 full-text index. Your problem could be \ntranslated into searching strings of the form\n \"hair=black eyes=blue age=117\"\n\nNot pretty, but might give you the speed you want.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 24 Aug 2004 22:00:48 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the best way to do attribute/values?"
},
{
"msg_contents": "\n\nJosh Berkus wrote:\n\n> Things we've already tried to avoid going over old ground:\n>\n>1) increasing statistics;\n>2) increasing sort_mem (to 256MB, which is overkill)\n>3) testing on 8.0 beta, which does not affect the issue.\n>\n>At this point I'm looking for ideas. Suggestions, anyone?\n>\n> \n>\nwith respect to query design:\n\nconsider instead of:\n\nselect\n\tpav1.person_id\nfrom\n\tperson_attributes_vertical pav1,\n\tperson_attributes_vertical pav2\nwhere\n\tpav1.attribute_id = 1\n\tand pav1.value_id in (2,3)\n\tand pav2.attribute_id = 2\n\tand pav2.value_id in (2,3)\n\tand pav1.person_id = pav2.person_id\n\ntry:\n\nselect\n\tpav1.person_id\nfrom\n\tperson_attributes_vertical pav1\nwhere\n\t ( pav1.attribute_id = 1\n\t and pav1.value_id in (2,3))\n\tor ( pav1.attribute_id = 2\n\t and pav1.value_id in (2,3))\n\t\nI am gambling that the 'or's' might be less expensive than the multiple self joins (particularly in the more general cases!).\n\nTo make access work well you might want to have *several* concatenated indexes of 2 -> 4 attributes - to work around Pg inability to use more than 1 in a given query.\nFor this query indexing (attribute_id, value_id) is probably good.\n\nConsider playing with 'random_page_cost' and maybe 'effective_cache_size' to encourage the planner to use 'em.\n\nregards\n\nMark\n\n\n\n\n",
"msg_date": "Wed, 25 Aug 2004 20:22:18 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the best way to do attribute/values?"
},
{
"msg_contents": "\nOn Aug 25, 2004, at 4:22 AM, Mark Kirkwood wrote:\n\n> select\n> \tpav1.person_id\n> from\n> \tperson_attributes_vertical pav1\n> where\n> \t ( pav1.attribute_id = 1\n> \t and pav1.value_id in (2,3))\n> \tor ( pav1.attribute_id = 2\n> \t and pav1.value_id in (2,3))\n>\n\nYou know..\nIt may help if you toss in a group by\nie\n\nselect pav1.person_id, count(*) from person_attributes_vertical pav1\nwhere (pav1.attribute_id = 1 and pav1.value_id in (2,3)) or ( ... ) or \n(...)\ngroup by pav1.person_id\norder by count(*) desc\n\nthat should give you the person_id's that matched the most \ncriteria........\nI've used similar things before now that I've thought about it.\n\nIf you want an exact match you could put\n\"having count(*) = $myNumAttributes\" in there too.. By definition an \nexact match would match that definition..\n\nit has an added side effect of producing \"closest matches\" when an \nexact match cannot be found... granted you may not want that for a \ndating site : )\n\n\"You asked for a blond female, blue eyes.. but I couldn't find any... \nbut I *DID* find a brown haired male with brown eyes! Is that good \nenough?\"\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Wed, 25 Aug 2004 08:36:07 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the best way to do attribute/values?"
},
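The group-by idea above, written out against the table and column names used earlier in the thread; count(DISTINCT ...) guards against a person matching the same attribute more than once. A sketch, not a tested plan:

    SELECT pav.person_id
    FROM person_attributes_vertical pav
    WHERE (pav.attribute_id = 1 AND pav.value_id IN (2,3))
       OR (pav.attribute_id = 2 AND pav.value_id IN (2,3))
    GROUP BY pav.person_id
    HAVING count(DISTINCT pav.attribute_id) = 2;

Dropping the HAVING clause and adding ORDER BY count(*) DESC gives the "closest match" ranking described above.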
{
"msg_contents": "Mark, Tim,\n\n> select\n> \tpav1.person_id\n> from\n> \tperson_attributes_vertical pav1\n> where\n> \t ( pav1.attribute_id = 1\n> \t and pav1.value_id in (2,3))\n> \tor ( pav1.attribute_id = 2\n> \t and pav1.value_id in (2,3))\n\nNot the same query, sorry. Daniel's query yields all the person_id's which \nhave criteria A AND criteria B. Yours gives all the person_id's which have \ncriteria A OR criteria B.\n\n> There will be a lot of attributes with the same ID; there will also be a\n> lot of attributes with the same value. However, there should be much less\n> attributes with a specific combination of (ID/Value). Right now I think it\n> will be very hard to determine which field has a better selectivity:\n> attribute_id or value_id.\n\nGiven that there is already an index on ( attribute_id, value_id ) I don't \nquite see what difference this makes. Unless you're suggesting this as a \nworkaround for the PG Planner's poor use of the index?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 25 Aug 2004 09:59:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the best way to do attribute/values?"
},
{
"msg_contents": "\nTwo more unusual suggestions:\n\n1. Drop all the indexes and just do sequential scans (full table scans),\naiming as hard as possible to get the whole people table to fit in memory\n(size says that should be easy - various ways) - and make sure you're using\n8.0 so you have the best cache manager. This will at least give you\nconsistent performance on whatever attribute values searched on in user\nqueries. Dropping all the indexes will allow the query to optimize faster,\nsince it has only one path choice. Work out how many attributes it takes to\nreduce the list of candidates to a manageable number, and include only those\nfactors into the table, effectively vertically partitioning the table,\nthereby reducing the volume and increasing scan speed. Then redesign the\nuser interface so that they see a two-stage process, first stage is top N\ncommon attributes, second stage is to further reduce that down using rarer\nattributes, as well as using the output from the first table to index into\nthe second. Users then won't mind additional wait time.\n(Experiment with: Horizontally partition the table into N pieces. Issue N\nsimultaneous queries to simulate a parallel query. Try N == 2 on your box)\n\n2. You can try forcing the use of a Star Join optimization here:\nConcatenate the attribute values into a single column, then index it. This\nwill be nearly unique. Cluster the table.\nPermute the values of the attributes, to give you a list of concatenated\nkeys that would match, then join that list to the main table, using a join\nvia the index.\nYou can do this permutation by using a reference table per attribute, then\ndoing an unconstrained product join between all of the attribute tables\n(avoid any indexes on them) and assembling the candidate keys into a single\ntemporary table. Then join the temp table to the main people table. This\nwill only work effectively if people's attributes are selected with some\ndiscrimination, otherwise this optimisation will fail. You'd need to\nconstrain the user interface to \"pick 20 out of the following 100 attribute\nvalues\" or some other heuristic to ensure a relatively low count, or use a\nLIMIT on the query into the temp table.\nThis sounds long winded, but is essentially the route the Oracle optimizer\ntakes in performing a star join....you clearly know you're Oracle, so look\nthat up to confirm what I'm saying. (May not work as well if you use a\nsub-select on PostgreSQL....)\n\nAlso, I'd categorise the Age, Height, Weight and Salary attributes and\neverything else based upon most common ranges, so it will be just an\nequality search on an integer assigned to that category, rather than a >\nsearch. Order by the distance, don't search on it, it'll be quicker since\nyou'll only need to calculate it for the records that match...even if you do\nget a few too many, it would be a shame to avoid somebody because they lived\n1 mile outside of the stated radius.\n\nThe database sounds < 1 Gb in total logical volume, so 4Gb of RAM should be\neasily sufficient.\n\nBest Regards, Simon Riggs\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Daniel\n> Ceregatti\n> Sent: 19 August 2004 19:03\n> To: [email protected]\n> Subject: [PERFORM] What is the best way to do attribute/values?\n>\n>\n> Hi list,\n>\n> I have a database with 1M \"people\" in it. Each person has about 20\n> attributes, such as height, weight, eye color, etc. I need to be able to\n> search for people based on these attributes. 
A search can be conducted\n> on one attribute, all attributes, or any number in between. How would\n> _you_ do this?\n>\n> I have already attempted to answer this. My attempts are detailed here:\n>\n> http://sh.nu/email.txt\n>\n> This is the email I was originally going to send to this list. Since\n> it's so large, I decided to link to it instead. If you feel that it\n> belongs in a post to the list, let me know, and I'll post again.\n>\n> I've discussed these attempts with people in #postgresql on\n> irc.freenode.net. Agliodbs (I presume you know who this is) was very\n> helpful, but in end was at a loss. I find myself in the same postition\n> at this time. He suggested I contact this list.\n>\n> My ultimate goal is performance. This _must_ be fast. And by fast, I\n> mean, < 1 second, for every permutation of the number of attributes\n> searched for. Flexibility would be a bonus, but at this point I'll\n> settle for something that's harder to maintain if I can get the speed\n> gain I need.\n>\n> Thanks,\n>\n> Daniel Ceregatti\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Thu, 26 Aug 2004 01:38:16 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the best way to do attribute/values?"
},
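One possible spelling of the concatenated-key idea in suggestion 2; every table, column and code value below is hypothetical, and the per-attribute reference tables are assumed to be tiny:

    -- people.attr_key holds the concatenated attribute codes, indexed once:
    CREATE INDEX people_attr_key_idx ON people (attr_key);

    -- permute the wanted values into candidate keys...
    CREATE TEMP TABLE candidate_keys AS
    SELECT h.code || e.code || a.code AS attr_key
    FROM hair_colours h, eye_colours e, age_bands a
    WHERE h.code IN ('BLK','BRN') AND e.code = 'BLU' AND a.code = '30';

    -- ...then join them back to the main table via the index:
    SELECT p.person_id
    FROM people p JOIN candidate_keys c ON p.attr_key = c.attr_key;

The unconstrained product of the reference tables only stays small if the user picks a handful of values per attribute, which is what the "pick 20 out of 100" heuristic or a LIMIT is there to guarantee.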
{
"msg_contents": "\nJosh Berkus wrote:\n\n>Mark, Tim,\n>\n> \n>\n>>select\n>>\tpav1.person_id\n>>from\n>>\tperson_attributes_vertical pav1\n>>where\n>>\t ( pav1.attribute_id = 1\n>>\t and pav1.value_id in (2,3))\n>>\tor ( pav1.attribute_id = 2\n>>\t and pav1.value_id in (2,3))\n>> \n>>\n>\n>Not the same query, sorry. Daniel's query yields all the person_id's which \n>have criteria A AND criteria B. Yours gives all the person_id's which have \n>criteria A OR criteria B.\n>\n> \n>\nApologies, not thinking clearly enough there...\n\n\nMaybe try out intersection :\n\n\nselect\n pav1.person_id\nfrom\n person_attributes_vertical pav1\nwhere\n ( pav1.attribute_id = 1\n and pav1.value_id in (2,3))\nintersect\nselect\n pav1.person_id\nfrom\n person_attributes_vertical pav1\nwhere ( pav1.attribute_id = 2\n and pav1.value_id in (2,3))\n\n\nIn the advent that is unhelpful, I wonder about simplifying the \nsituation and investigating how\n\n\nselect\n pav1.person_id\nfrom\n person_attributes_vertical pav1\nwhere\n pav1.attribute_id = 1\n\n\nperforms, compared to\n\n\nselect\n pav1.person_id\nfrom\n person_attributes_vertical pav1\nwhere\n ( pav1.attribute_id = 1\n and pav1.value_id in (2,3))\n\n\nIf the first performs ok and the second does not, It may be possible to \nget better times by doing some horrible re-writes :e.g:\n\n\nselect\n pav1.person_id\nfrom\n person_attributes_vertical pav1\nwhere\n ( pav1.attribute_id = 1\n and pav1.value_id||null in (2,3))\n\n\netc.\n\n\nregards\n\nMark\n\n\n\n \n",
"msg_date": "Thu, 26 Aug 2004 16:51:50 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the best way to do attribute/values?"
}
] |
[
{
"msg_contents": "Hi,\nI'm migrating data from 7.4.2 to 8.0.0beta1 and the process is slow (10 15 tuples per second)\n\nCan be a type conversion issue?\n\nRedS\n\n\n\n\n\n\nHi,\nI'm migrating data from 7.4.2 to 8.0.0beta1 and the \nprocess is slow (10 15 tuples per second)\n \nCan be a type conversion issue?\n \nRedS",
"msg_date": "Fri, 20 Aug 2004 14:59:08 +0200",
"msg_from": "\"Bonnin S.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_restore very slow in 8.0.0beta1"
}
] |
[
{
"msg_contents": "Hi all,\n\nI offered apologies to Igor Artimenko in private mail already; I'll apologize again here.\n\nAbout top-posting: Outlook Exchange teaches bad habits. Can you set Outlook Exchange to prefix lines with \"> \" only when mail is in text-only format but not when mail arrives in html / rtf format?\n\nAbout full quoting: my apologies.\n\n-----Original Message-----\nFrom: Manfred Koizar [mailto:[email protected]]\nSent: vrijdag 20 augustus 2004 15:38\nTo: Leeuw van der, Tim\nCc: Igor Artimenko; [email protected]\nSubject: Re: [PERFORM] I could not get postgres to utilizy indexes\n\n\nOn Thu, 19 Aug 2004 09:54:47 +0200, \"Leeuw van der, Tim\"\n<[email protected]> wrote:\n>You asked the very same question yesterday, and I believe you got some useful answers. Why do you post the question again?\n\nTim, no need to be rude here. \n[...]\n>[more instructions]\n\nAnd while we are teaching netiquette, could you please stop top-posting\nand full-quoting.\n\nIgor, welcome to the list! Did the suggestions you got solve your\nproblem?\n\nServus\n Manfred\n",
"msg_date": "Fri, 20 Aug 2004 15:45:05 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: I could not get postgres to utilizy indexes"
}
] |
[
{
"msg_contents": "HI All,\n \nI have a big performance issue concerning a PostgreSQL database.\n \nI have the following server configuration:\n \nPentium 4 2.4 GHz\n1 GB RAM\n36 GB SCSI\n \nAnd the following tables:\n TABLES\n------------------------------------------------------------------------\n--\n================== r_cliente: 75816 records\n============================\nCREATE TABLE \"public\".\"r_cliente\" (\n \"pkcliente\" INTEGER NOT NULL, \n \"cpfcnpj\" VARCHAR(20) NOT NULL, \n PRIMARY KEY(\"pkcliente\")\n) WITH OIDS;\n \nCREATE UNIQUE INDEX \"un_cliente_cpfcnpj\" ON \"public\".\"r_cliente\"\nUSING btree (\"cpfcnpj\");\n \n================== sav_cliente_lg: 65671 records\n=======================\nCREATE TABLE \"public\".\"sav_cliente_lg\" (\n \"codigo\" INTEGER NOT NULL, \n \"cpfcnpj\" VARCHAR(15) NOT NULL, \n PRIMARY KEY(\"codigo\")\n) WITH OIDS;\n \nCREATE INDEX \"ix_savclientelg_cpfcnpj\" ON \"public\".\"sav_cliente_lg\"\nUSING btree (\"cpfcnpj\");\n \n \n \nWhich I would like to run the following query:\n \n QUERY\n------------------------------------------------------------------------\n--\nSELECT\n rc.pkcliente\nFROM r_cliente AS rc\nINNER JOIN sav_cliente_lg AS sc ON sc.cpfcnpj = sc.cpfcnpj;\n \n \n \n \nThe problem is, it takes a long time to run, I wait up to half an hour\nand I get no result.\n \nSo, I executed the explain on the query and got the following results:\n \n \n \n \n QUERY PLAN\n------------------------------------------------------------------------\n--\n Nested Loop (cost=0.00..16696.87 rows=75816 width=4)\n -> Seq Scan on sav_cliente_cf sc (cost=0.00..3047.55 rows=1\nwidth=0)\n Filter: ((cpfcnpj)::text = (cpfcnpj)::text)\n -> Seq Scan on r_cliente rc (cost=0.00..12891.16 rows=75816\nwidth=4)\n \n \n \n \nAnd made the following modifications on my POSTGRESQL.CONF file:\n \n POSTGRESQL.CONF\n------------------------------------------------------------------------\n--\n### VERSION: Postgresql 7.4.2 ###\nshared_buffers = 7800\nsort_mem = 4096\ncheckpoint_segments = 5 \neffective_cache_size = 12000\ncpu_operator_cost = 0.0015\nstats_start_collector = false\n \n \nHope you can help me, I really need to get this running faster, and I am\nout of ideas.\n \nSince now, thanks a lot for your attention,\n \nDanilo Mota\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHI All,\n \nI have a big performance issue concerning a PostgreSQL\ndatabase.\n \nI have the following server configuration:\n \nPentium 4 2.4 GHz\n1 GB RAM\n36 GB SCSI\n \nAnd the following tables:\n TABLES\n--------------------------------------------------------------------------\n================== r_cliente: 75816 records \n============================\nCREATE TABLE \"public\".\"r_cliente\"\n(\n \"pkcliente\" INTEGER NOT NULL, \n \"cpfcnpj\" VARCHAR(20) NOT\nNULL, \n PRIMARY KEY(\"pkcliente\")\n) WITH OIDS;\n \nCREATE UNIQUE INDEX \"un_cliente_cpfcnpj\" ON \"public\".\"r_cliente\"\nUSING btree (\"cpfcnpj\");\n \n================== sav_cliente_lg: 65671 records \n=======================\nCREATE TABLE \"public\".\"sav_cliente_lg\"\n(\n \"codigo\" INTEGER NOT NULL, \n \"cpfcnpj\" VARCHAR(15) NOT\nNULL, \n PRIMARY KEY(\"codigo\")\n) WITH OIDS;\n \nCREATE INDEX \"ix_savclientelg_cpfcnpj\" ON \"public\".\"sav_cliente_lg\"\nUSING btree (\"cpfcnpj\");\n \n \n \nWhich I would like to run the following query:\n \n QUERY\n--------------------------------------------------------------------------\nSELECT\n rc.pkcliente\nFROM r_cliente AS rc\nINNER JOIN sav_cliente_lg AS sc ON\nsc.cpfcnpj = sc.cpfcnpj;\n \n \n \n \nThe problem 
is, it takes a long time to run, I wait up to\nhalf an hour and I get no result.\n \nSo, I executed the explain on the\nquery and got the following results:\n \n \n \n \n QUERY PLAN\n--------------------------------------------------------------------------\n Nested Loop (cost=0.00..16696.87\nrows=75816 width=4)\n -> Seq Scan on sav_cliente_cf sc \n(cost=0.00..3047.55 rows=1 width=0)\n Filter: ((cpfcnpj)::text = (cpfcnpj)::text)\n -> Seq Scan on r_cliente rc (cost=0.00..12891.16 rows=75816 width=4)\n \n \n \n \nAnd made the following modifications\non my POSTGRESQL.CONF file:\n \n POSTGRESQL.CONF\n--------------------------------------------------------------------------\n### VERSION: Postgresql 7.4.2 ###\nshared_buffers = 7800\nsort_mem = 4096\ncheckpoint_segments = 5 \neffective_cache_size = 12000\ncpu_operator_cost = 0.0015\nstats_start_collector = false\n \n \nHope you can help me, I really need to get this running\nfaster, and I am out of ideas.\n \nSince now, thanks a lot for your attention,\n \nDanilo Mota",
"msg_date": "Fri, 20 Aug 2004 13:25:30 -0300",
"msg_from": "\"Danilo Mota\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance problem"
},
{
"msg_contents": "\"Danilo Mota\" <[email protected]> writes:\n> SELECT\n> rc.pkcliente\n> FROM r_cliente AS rc\n> INNER JOIN sav_cliente_lg AS sc ON sc.cpfcnpj = sc.cpfcnpj;\n\nSurely you meant\n INNER JOIN sav_cliente_lg AS sc ON rc.cpfcnpj = sc.cpfcnpj;\n\nI would also venture that your statistics are desperately out of date,\nbecause if the planner's estimates are close to reality, even this\nunconstrained-cross-product join shouldn't have taken that long.\n \n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Aug 2004 12:34:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problem "
},
{
"msg_contents": "On Fri, Aug 20, 2004 at 13:25:30 -0300,\n Danilo Mota <[email protected]> wrote:\n> \n> And the following tables:\n> TABLES\n> ------------------------------------------------------------------------\n> --\n> ================== r_cliente: 75816 records\n> ============================\n> CREATE TABLE \"public\".\"r_cliente\" (\n> \"pkcliente\" INTEGER NOT NULL, \n> \"cpfcnpj\" VARCHAR(20) NOT NULL, \n> PRIMARY KEY(\"pkcliente\")\n> ) WITH OIDS;\n> \n> CREATE UNIQUE INDEX \"un_cliente_cpfcnpj\" ON \"public\".\"r_cliente\"\n> USING btree (\"cpfcnpj\");\n> \n> ================== sav_cliente_lg: 65671 records\n> =======================\n> CREATE TABLE \"public\".\"sav_cliente_lg\" (\n> \"codigo\" INTEGER NOT NULL, \n> \"cpfcnpj\" VARCHAR(15) NOT NULL, \n> PRIMARY KEY(\"codigo\")\n> ) WITH OIDS;\n> \n> CREATE INDEX \"ix_savclientelg_cpfcnpj\" ON \"public\".\"sav_cliente_lg\"\n> USING btree (\"cpfcnpj\");\n> \n> \n> \n> Which I would like to run the following query:\n> \n> QUERY\n> ------------------------------------------------------------------------\n> --\n> SELECT\n> rc.pkcliente\n> FROM r_cliente AS rc\n> INNER JOIN sav_cliente_lg AS sc ON sc.cpfcnpj = sc.cpfcnpj;\n\nI am going to assume that one of the sc.cpfcnpj's above is really rc.cpfcnpj\nsince that corresponds to the explain below.\n\nsc.cpfcnpj and rc.cpfcnpj are different length varchars. You made need\nan explicit cast to allow the use of indexes. (Unless there is a real\nbusiness rule that mandates the limits you have used, you probably want\nto make them both type 'text'.)\n\nAnother potential problem is not having analyzed the tables. I don't think\nthis can be ruled out based on what you have showed us so far.\n\n> \n> So, I executed the explain on the query and got the following results:\n\nGenerally you want to run EXPLAIN ANALYZE results when submitting questions\nabout performance problems rather than just EXPLAIN results.\n",
"msg_date": "Fri, 20 Aug 2004 11:48:21 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problem"
},
{
"msg_contents": "Bruno Wolff III <[email protected]> writes:\n> I am going to assume that one of the sc.cpfcnpj's above is really rc.cpfcnpj\n> since that corresponds to the explain below.\n\nNo, actually the explain plan corresponds to the sc.cpfcnpj = sc.cpfcnpj\ncondition. I didn't twig to the typo until I started to wonder why the\nplan had the condition in the wrong place (attached to the seqscan and\nnot the join step).\n\n> sc.cpfcnpj and rc.cpfcnpj are different length varchars. You made need\n> an explicit cast to allow the use of indexes.\n\nAFAIK the cross-type issues only apply to crossing actual types, not\nlengths. That does look like an error in the database schema, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Aug 2004 12:50:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problem "
}
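Putting the two fixes together, the corrected join condition and fresh statistics, using the table names from the original post:

    ANALYZE r_cliente;
    ANALYZE sav_cliente_lg;

    EXPLAIN ANALYZE
    SELECT rc.pkcliente
    FROM r_cliente AS rc
    INNER JOIN sav_cliente_lg AS sc ON rc.cpfcnpj = sc.cpfcnpj;

EXPLAIN ANALYZE shows actual row counts next to the planner's estimates, which makes stale statistics easy to spot.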
] |
[
{
"msg_contents": "Hi all,\n \n the following query is working well without the AND on WHERE clause, so\nI need suggestions about how could I rewrite the query to get the same\nresult with less cost of time and resources.\n \nI've already created indexes on all foreign key columns.\n \nThanks in advance.\n \nDanilo Mota\n \n========================================================================\n============\n SELECT\n sn.notafiscalnumero,\n sn.notafiscalserie,\n CASE sn.notafiscaldata WHEN '00000000' THEN NULL ELSE\nto_date(sn.notafiscaldata,'YYYYMMDD') END,\n sn.modalidade,\n rcm.pkclientemarca,\n sn.notafiscalvalor/100,\n sn.entrada/100,\n sn.cliente\n FROM r_clientemarca AS rcm\n INNER JOIN r_cliente AS rc ON rc.pkcliente = rcm.fkcliente\n INNER JOIN sav_cliente_lg AS sc ON sc.cpfcnpj = rc.cpfcnpj\n INNER JOIN sav_nota_lg AS sn ON sn.cliente = sc.codigo\n WHERE rcm.fkmarca = 1\n AND sn.notafiscalnumero||sn.notafiscalserie||sn.cliente NOT IN (\nSELECT numero||serie||codigo\n \nFROM r_contrato AS rcon\n \nWHERE savfonte = 'lg')\n \n========================================================================\n============\n \n \n TABLES\n------------------------------------------------------------------------\n-----------------------------------------------------\nr_cliente: 75820 records\nr_clientemarca: 97719 records\nr_contrato: 782058 records\nsav_cliente_lg: 65671 records\nsav_nota_lg: 297329 rcords\n MY SERVER\n------------------------------------------------------------------------\n-----------------------------------------------------\nPentium 4 2.4 GHz\n1 GB RAM\n36 GB SCSI\nPostgresql 7.4.2\n \n POSTGRESQL.CONF\n------------------------------------------------------------------------\n-----------------------------------------------------\nshared_buffers = 7800\nsort_mem = 4096\ncheckpoint_segments = 5 \neffective_cache_size = 12000\ncpu_operator_cost = 0.0015\nstats_start_collector = false\n \n QUERY PLAN\n------------------------------------------------------------------------\n-----------------------------------------------------\n Hash Join (cost=27149.61..3090289650.24 rows=128765 width=4)\n Hash Cond: (\"outer\".cliente = \"inner\".codigo)\n -> Seq Scan on sav_nota_lg sn (cost=0.00..3090258517.99 rows=148665\nwidth=8)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on r_contrato rcon (cost=0.00..20362.47\nrows=282845 width=19)\n Filter: ((savfonte)::text = 'lg'::text)\n -> Hash (cost=26869.29..26869.29 rows=56880 width=4)\n -> Hash Join (cost=22473.95..26869.29 rows=56880 width=4)\n Hash Cond: (\"outer\".fkcliente = \"inner\".pkcliente)\n -> Index Scan using ix_r_clientemarca_fkmarca on\nr_clientemarca rcm (cost=0.00..2244.46 rows=65665 width=4)\n Index Cond: (fkmarca = 1)\n -> Hash (cost=22118.44..22118.44 rows=65672 width=8)\n -> Hash Join (cost=6613.22..22118.44 rows=65672\nwidth=8)\n Hash Cond: ((\"outer\".cpfcnpj)::text =\n(\"inner\".cpfcnpj)::text)\n -> Seq Scan on r_cliente rc\n(cost=0.00..12891.16 rows=75816 width=23)\n -> Hash (cost=6129.71..6129.71 rows=65671\nwidth=23)\n -> Seq Scan on sav_cliente_lg sc\n(cost=0.00..6129.71 rows=65671 width=23)\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi all,\n \n the following\nquery is working well without the AND on WHERE clause, so I need suggestions\nabout how could I rewrite the query to get the same result with less cost of\ntime and resources.\n \nI’ve already created indexes on all foreign key\ncolumns.\n \nThanks in advance.\n \nDanilo Mota\n 
\n====================================================================================\n SELECT\n sn.notafiscalnumero,\n sn.notafiscalserie,\n CASE sn.notafiscaldata WHEN '00000000' THEN NULL ELSE to_date(sn.notafiscaldata,'YYYYMMDD') END,\n sn.modalidade,\n rcm.pkclientemarca,\n sn.notafiscalvalor/100,\n sn.entrada/100,\n sn.cliente\n FROM r_clientemarca AS rcm\n INNER JOIN r_cliente AS rc\nON rc.pkcliente = rcm.fkcliente\n \nINNER JOIN sav_cliente_lg AS sc ON sc.cpfcnpj = rc.cpfcnpj\n INNER\nJOIN sav_nota_lg AS sn ON sn.cliente = sc.codigo\n WHERE rcm.fkmarca = 1\n AND sn.notafiscalnumero||sn.notafiscalserie||sn.cliente NOT IN ( SELECT numero||serie||codigo\n \n FROM\nr_contrato AS rcon\n \n WHERE\nsavfonte = 'lg')\n \n====================================================================================\n \n \n \nTABLES\n-----------------------------------------------------------------------------------------------------------------------------\nr_cliente: 75820 records\nr_clientemarca: 97719 records\nr_contrato: 782058 records\nsav_cliente_lg: 65671 records\nsav_nota_lg: 297329 rcords\n \nMY SERVER\n-----------------------------------------------------------------------------------------------------------------------------\nPentium 4 2.4 GHz\n1 GB RAM\n36 GB SCSI\nPostgresql 7.4.2\n \n \nPOSTGRESQL.CONF\n-----------------------------------------------------------------------------------------------------------------------------\nshared_buffers = 7800\nsort_mem = 4096\ncheckpoint_segments = 5 \neffective_cache_size = 12000\ncpu_operator_cost = 0.0015\nstats_start_collector = false\n \n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=27149.61..3090289650.24\nrows=128765 width=4)\n Hash Cond: (\"outer\".cliente\n= \"inner\".codigo)\n -> Seq Scan on sav_nota_lg sn \n(cost=0.00..3090258517.99 rows=148665 width=8)\n \nFilter: (NOT (subplan))\n \nSubPlan\n \n-> Seq Scan on r_contrato rcon \n(cost=0.00..20362.47 rows=282845 width=19)\n \nFilter: ((savfonte)::text\n= 'lg'::text)\n -> Hash (cost=26869.29..26869.29 rows=56880\nwidth=4)\n \n-> Hash\nJoin (cost=22473.95..26869.29\nrows=56880 width=4)\n \nHash Cond: (\"outer\".fkcliente\n= \"inner\".pkcliente)\n \n-> \nIndex Scan using ix_r_clientemarca_fkmarca\non r_clientemarca rcm (cost=0.00..2244.46 rows=65665 width=4)\n \nIndex Cond: (fkmarca\n= 1)\n \n-> Hash (cost=22118.44..22118.44 rows=65672\nwidth=8)\n -> Hash\nJoin (cost=6613.22..22118.44\nrows=65672 width=8)\n \nHash Cond: ((\"outer\".cpfcnpj)::text = (\"inner\".cpfcnpj)::text)\n \n-> Seq Scan on r_cliente rc \n(cost=0.00..12891.16 rows=75816 width=23)\n \n-> Hash (cost=6129.71..6129.71 rows=65671\nwidth=23)\n \n-> Seq Scan on sav_cliente_lg\nsc (cost=0.00..6129.71 rows=65671\nwidth=23)",
"msg_date": "Fri, 20 Aug 2004 21:03:54 -0300",
"msg_from": "\"Danilo Mota\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Performance"
},
{
"msg_contents": "Have you tried\n\nAND (sn.notafiscalnumero, sn.notafiscalserie, sn.cliente) NOT IN (\n SELECT numero, serie, codigo FROM r_contrato WHERE savfonte = 'lg')\n\nor\n\nand not exists(select true from r_contrato where savfonte = 'lg' and numero = \nsn.notafiscalnumero and serie = sn.notafiscalserie and codigo = sn.cliente)\n\n\nDanilo Mota wrote:\n\n> Hi all,\n> \n> \n> \n> the following query is working well without the AND on WHERE clause, so \n> I need suggestions about how could I rewrite the query to get the same \n> result with less cost of time and resources.\n> \n> \n> \n> I�ve already created indexes on all foreign key columns.\n> \n> \n> \n> Thanks in advance.\n> \n> \n> \n> Danilo Mota\n> \n> \n> \n> ====================================================================================\n> \n> SELECT\n> \n> sn.notafiscalnumero,\n> \n> sn.notafiscalserie,\n> \n> CASE sn.notafiscaldata WHEN '00000000' THEN NULL ELSE \n> to_date(sn.notafiscaldata,'YYYYMMDD') END,\n> \n> sn.modalidade,\n> \n> rcm.pkclientemarca,\n> \n> sn.notafiscalvalor/100,\n> \n> sn.entrada/100,\n> \n> sn.cliente\n> \n> FROM r_clientemarca AS rcm\n> \n> INNER JOIN r_cliente AS rc ON rc.pkcliente = rcm.fkcliente\n> \n> INNER JOIN sav_cliente_lg AS sc ON sc.cpfcnpj = rc.cpfcnpj\n> \n> INNER JOIN sav_nota_lg AS sn ON sn.cliente = sc.codigo\n> \n> WHERE rcm.fkmarca = 1\n> \n> AND sn.notafiscalnumero||sn.notafiscalserie||sn.cliente NOT IN ( \n> SELECT numero||serie||codigo\n> \n> \n> FROM r_contrato AS rcon\n> \n> \n> WHERE savfonte = 'lg')\n> \n> \n> \n> ====================================================================================\n> \n> \n> \n> \n> \n> TABLES\n> \n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> r_cliente: 75820 records\n> \n> r_clientemarca: 97719 records\n> \n> r_contrato: 782058 records\n> \n> sav_cliente_lg: 65671 records\n> \n> sav_nota_lg: 297329 rcords\n> \n> MY SERVER\n> \n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> Pentium 4 2.4 GHz\n> \n> 1 GB RAM\n> \n> 36 GB SCSI\n> \n> Postgresql 7.4.2\n> \n> \n> \n> POSTGRESQL.CONF\n> \n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> shared_buffers = 7800\n> \n> sort_mem = 4096\n> \n> checkpoint_segments = 5 \n> \n> effective_cache_size = 12000\n> \n> cpu_operator_cost = 0.0015\n> \n> stats_start_collector = false\n> \n> \n> \n> QUERY PLAN\n> \n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> Hash Join (cost=27149.61..3090289650.24 rows=128765 width=4)\n> \n> Hash Cond: (\"outer\".cliente = \"inner\".codigo)\n> \n> -> Seq Scan on sav_nota_lg sn (cost=0.00..3090258517.99 rows=148665 \n> width=8)\n> \n> Filter: (NOT (subplan))\n> \n> SubPlan\n> \n> -> Seq Scan on r_contrato rcon (cost=0.00..20362.47 \n> rows=282845 width=19)\n> \n> Filter: ((savfonte)::text = 'lg'::text)\n> \n> -> Hash (cost=26869.29..26869.29 rows=56880 width=4)\n> \n> -> Hash Join (cost=22473.95..26869.29 rows=56880 width=4)\n> \n> Hash Cond: (\"outer\".fkcliente = \"inner\".pkcliente)\n> \n> -> Index Scan using ix_r_clientemarca_fkmarca on \n> r_clientemarca rcm (cost=0.00..2244.46 rows=65665 width=4)\n> \n> Index Cond: (fkmarca = 1)\n> \n> -> Hash (cost=22118.44..22118.44 rows=65672 width=8)\n> \n> -> Hash Join 
(cost=6613.22..22118.44 rows=65672 \n> width=8)\n> \n> Hash Cond: ((\"outer\".cpfcnpj)::text = \n> (\"inner\".cpfcnpj)::text)\n> \n> -> Seq Scan on r_cliente rc \n> (cost=0.00..12891.16 rows=75816 width=23)\n> \n> -> Hash (cost=6129.71..6129.71 rows=65671 \n> width=23)\n> \n> -> Seq Scan on sav_cliente_lg sc \n> (cost=0.00..6129.71 rows=65671 width=23)\n> \n> \n> \n> \n> \n",
"msg_date": "Fri, 20 Aug 2004 18:02:48 -0700",
"msg_from": "Brad Bulger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance"
}
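The NOT EXISTS variant written out in full, showing only the anti-join part of the original query; the supporting index is a guess at what would help, not something taken from the thread:

    CREATE INDEX ix_r_contrato_num_serie_cod ON r_contrato (numero, serie, codigo);

    SELECT sn.notafiscalnumero, sn.notafiscalserie, sn.cliente
    FROM sav_nota_lg AS sn
    WHERE NOT EXISTS (
        SELECT 1
        FROM r_contrato AS rcon
        WHERE rcon.savfonte = 'lg'
          AND rcon.numero = sn.notafiscalnumero
          AND rcon.serie = sn.notafiscalserie
          AND rcon.codigo = sn.cliente);

Unlike NOT IN, NOT EXISTS keeps its usual behaviour when any of the compared columns contain NULLs, and it avoids building the concatenated strings for every row.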
] |
[
{
"msg_contents": "Hi, \nI got this error message: \n\nLOG: could not create IPv6 socket: Address family not\nsupported by protocol\nLOG: could not bind socket for statistics collector:\nCannot assign requested address\nLOG: disabling statistics collector for lack of\nworking socket\nLOG: database system was shut down at 2004-08-24\n00:07:21 EST\nLOG: checkpoint record is at 0/A45360\nLOG: redo record is at 0/A45360; undo record is at\n0/0; shutdown TRUE\nLOG: next transaction ID: 492; next OID: 17228\nLOG: database system is ready\n\n\nI already have a look at pgstat.c but dun know where\nto fix it ( with 7.3.1 i just change the line\ninet_aton(\"127.0.0.1\", &(pgStatAddr.sin_addr));\nto inet_aton(\"<myip>\", &(pgStatAddr.sin_addr)); then\nit works\nthis is the config of the computer:\n$ uname -a\nLinux grieg 2.4.26-general-64G #1 SMP Tue May 18\n09:31:45 EST 2004 i686 GNU/Linux\n\nAny help would be really appreciated.\nMT \n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - 100MB free storage!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Mon, 23 Aug 2004 07:33:10 -0700 (PDT)",
"msg_from": "my thi ho <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql 8.0 beta - fail to collect statsistic"
},
{
"msg_contents": "my thi ho <[email protected]> writes:\n> I got this error message: \n\n> LOG: could not create IPv6 socket: Address family not\n> supported by protocol\n> LOG: could not bind socket for statistics collector:\n> Cannot assign requested address\n> LOG: disabling statistics collector for lack of\n> working socket\n\nYou have a broken networking setup, or possibly a broken DNS setup.\nThe machine should accept connections to 127.0.0.1 ... if it does not,\nfind out why not.\n\n> I already have a look at pgstat.c but dun know where\n> to fix it ( with 7.3.1 i just change the line\n> inet_aton(\"127.0.0.1\", &(pgStatAddr.sin_addr));\n> to inet_aton(\"<myip>\", &(pgStatAddr.sin_addr)); then\n> it works\n\nYou were fixing the symptom and not the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Aug 2004 15:01:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 8.0 beta - fail to collect statsistic "
},
{
"msg_contents": "If IPv6 doesn't work, shouldn't it fall back to IPv4, or check IPv4\nfirst, or something? Just wondering.\n\n-Steve Bergman\n\n",
"msg_date": "Mon, 23 Aug 2004 16:52:47 -0500",
"msg_from": "Steve Bergman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 8.0 beta - fail to collect statsistic"
},
{
"msg_contents": "Steve Bergman <[email protected]> writes:\n> If IPv6 doesn't work, shouldn't it fall back to IPv4,\n\nIt does. That was all debugged in 7.4 --- we have not seen any cases\nsince 7.4 beta in which failures of this kind did not mean a\nmisconfigured networking setup.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Aug 2004 18:30:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 8.0 beta - fail to collect statsistic "
}
] |
[
{
"msg_contents": "I'm still having trouble with slow cascading DELETEs. What commands can I issue to see the sequence of events that occurs after I execute \n\nDELETE FROM x WHERE p;\n\nso that I can see if indexes being used correctly, or I have a constraint I don't want, etc.\n",
"msg_date": "Mon, 23 Aug 2004 17:02:18 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "How do I see what triggers are called on cascade?"
},
{
"msg_contents": "[email protected] writes:\n> I'm still having trouble with slow cascading DELETEs. What commands can I issue to see the sequence of events that occurs after I execute \n> DELETE FROM x WHERE p;\n> so that I can see if indexes being used correctly, or I have a constraint I don't want, etc.\n\nI'd suggest starting a fresh backend, turning on log_statement, and then\nissuing the DELETE. log_statement will log the queries generated by the\nforeign-key triggers ... but only the first time through, because those\ntriggers cache the query plans.\n\nThe queries you will see will be parameterized (they'll use $1,$2,etc).\nYou can use PREPARE and EXPLAIN ANALYZE EXECUTE to investigate what sort\nof plans result.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Aug 2004 22:50:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do I see what triggers are called on cascade? "
}
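A sketch of the second step described above, assuming the log showed a foreign-key trigger query along the lines of DELETE FROM ONLY child WHERE parent_id = $1 (table, column and parameter type are hypothetical):

    BEGIN;
    PREPARE fk_probe (int) AS
        DELETE FROM ONLY child WHERE parent_id = $1;
    EXPLAIN ANALYZE EXECUTE fk_probe (42);
    ROLLBACK;

Wrapping it in a transaction matters because EXPLAIN ANALYZE really executes the DELETE; the ROLLBACK undoes it. A sequential scan in this plan usually means the referencing column is missing an index.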
] |
[
{
"msg_contents": "Hi all,\n\nI've attached all the query in query.sql\n\nI'm using postgres 7.3.4 on Linux version 2.4.26-custom \n( /proc/sys/vm/overcommit_memory = 0 this time ) \n\nfree :\n total used free shared buffers cached\nMem: 1810212 1767384 42828 0 5604 1663908\n-/+ buffers/cache: 97872 1712340\nSwap: 505912 131304 374608\n\nAfter I rebuilt the database, the query was fast (28255.12 msec).\nAfter one night's insertion into the tables that the query select from,\nthe query all of a sudden uses up all resources , and the kernel\nstarts swapping, and I haven't seen the query actually finish when\nthis happens. I did vacuum analyze AND reindex, but that doesn't \nhelp.\n\nI attached the explain analyze of the query before this happens, and\nthe explain plan from when it actually happens that the query doesn't finish.\n\nThe one noticeable difference, was that before, it used merge joins, and\nafter, it used hash joins.\n\nWhen the query was slow, I tried to : set enable_hashjoin to off\nfor this query, and the query finished relatively fast again (316245.16 msec)\n\nI attached the output of that explain analyze as well, as well as the postgres\nsettings.\n\nCan anyone shed some light on what's happening here. I can't figure it out.\n\nKind Regards\nStefan",
"msg_date": "Tue, 24 Aug 2004 10:32:40 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query kills machine."
},
{
"msg_contents": "For starters,\n\n\n> ------------------------------------------------------------------------\n> \n> shared_buffers = 110592\n> wal_buffers = 400\n> sort_mem = 30720\n> vacuum_mem = 10240\n> checkpoint_segments = 30\n> commit_delay = 5000\n> commit_siblings = 100\n> effective_cache_size = 201413\n\nTry more like this:\n\nshared_buffers = 30000\nwal_buffers = <default>\nsort_mem = 4096\nvacuum_mem = 10240\ncheckpoint_segments = 30\ncommit_delay = <default>\ncommit_siblings = <default>\neffective_cache_size = 100000\n\nChris\n\n",
"msg_date": "Tue, 24 Aug 2004 17:08:50 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query kills machine."
},
{
"msg_contents": "Christopher Kings-Lynne mentioned :\n=> sort_mem = 4096\n\nReducing sort_mem to 4096 seems to make it run in a reasonable time\nagain. Any idea why? The database does a whole lot of huge sorts\nevery day, so I thought upping this parameter would help.\n\nA couple of queries do seem to run slower now that I reduced\nthe sort_mem. \n\nThe shared buffers still makes a significant difference when I increase it.\n\nKind Regards\nStefan\n\n",
"msg_date": "Tue, 24 Aug 2004 15:30:14 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query kills machine."
},
{
"msg_contents": "Stef wrote:\n\n> Christopher Kings-Lynne mentioned :\n> => sort_mem = 4096\n> \n> Reducing sort_mem to 4096 seems to make it run in a reasonable time\n> again. Any idea why? The database does a whole lot of huge sorts\n> every day, so I thought upping this parameter would help.\n> \n> A couple of queries do seem to run slower now that I reduced\n> the sort_mem. \n> \n> The shared buffers still makes a significant difference when I increase it.\n> \n\nWell you have to take in account that sort_mem is not the total memory \nallocated for sorting but per connection and in complex expressions \nserveral times that too.\n\nSo if you sort a lot it can push your operating system off the cliff and \nit might start reaping things that shouldn't be reaped and start swapping.\n\nIf that happens _everything_ on that box will get slow...\n\nShared buffers on the other hand is only allocated once.\n\nRegards,\nMagnus\n",
"msg_date": "Tue, 24 Aug 2004 16:08:33 +0200",
"msg_from": "\"Magnus Naeslund(t)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query kills machine."
},
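Rough arithmetic with the value quoted above: sort_mem = 30720 is about 30 MB per sort, per backend, so

    30 MB x 10 concurrent large sorts ~= 300 MB
    30 MB x 30 concurrent large sorts ~= 900 MB

on a machine with roughly 1.8 GB of RAM, most of it already holding the kernel page cache. That is easily enough to push the box into swap, which matches the behaviour described at the start of the thread.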
{
"msg_contents": "Stef wrote:\n\n > Christopher Kings-Lynne mentioned :\n > => sort_mem = 4096\n >\n > Reducing sort_mem to 4096 seems to make it run in a reasonable time\n > again. Any idea why? The database does a whole lot of huge sorts\n > every day, so I thought upping this parameter would help.\n >\n > A couple of queries do seem to run slower now that I reduced\n > the sort_mem.\n > The shared buffers still makes a significant difference when I \nincrease it.\n >\n\nWell you have to take in account that sort_mem is not the total memory \nallocated for sorting but per connection and in complex expressions \nserveral times that too.\n\nSo if you sort a lot it can push your operating system off the cliff and \nit might start reaping things that shouldn't be reaped and start swapping.\n\nIf that happens _everything_ on that box will get slow...\n\nShared buffers on the other hand is only allocated once.\n\nRegards,\nMagnus\n\n",
"msg_date": "Tue, 24 Aug 2004 16:15:06 +0200",
"msg_from": "\"Magnus Naeslund(pg)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query kills machine."
},
{
"msg_contents": "Stef <[email protected]> writes:\n> Reducing sort_mem to 4096 seems to make it run in a reasonable time\n> again. Any idea why? The database does a whole lot of huge sorts\n> every day, so I thought upping this parameter would help.\n\nNot if you haven't got the RAM to support it :-(\n\nAnother thing you might look at is ANALYZEing the tables again after\nyou've loaded all the new data. The row-count estimates seem way off\nin these plans. You might need to increase the statistics target,\ntoo, to get better plans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Aug 2004 11:23:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query kills machine. "
},
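Two concrete knobs for the suggestions above; the table and column names here are placeholders:

    -- raise the statistics target for the badly estimated column(s), then re-analyze:
    ALTER TABLE big_table ALTER COLUMN busy_column SET STATISTICS 100;
    ANALYZE big_table;

    -- keep the global sort_mem modest and raise it per session only where needed:
    SET sort_mem = 30720;

SET only affects the current session, so a nightly batch job can still run its huge sorts with a large sort_mem while ordinary connections stay at the safer default.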
{
"msg_contents": "Tom Lane mentioned :\n=> Not if you haven't got the RAM to support it :-(\n=> \n=> Another thing you might look at is ANALYZEing the tables again after\n=> you've loaded all the new data. The row-count estimates seem way off\n=> in these plans. You might need to increase the statistics target,\n=> too, to get better plans.\n\nThanks Tom, Christopher and Magnus!\n\nI tested this, and found the correct sort_mem setting for my situation.\nI'm testing a new default_statistics_target setting.\nThis is something I never considered.\n\nKind Regards\nStefan\n",
"msg_date": "Wed, 25 Aug 2004 16:30:41 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query kills machine."
}
] |
[
{
"msg_contents": "Do you think that adopting the \"chip tuning\" product\npostgresql could increase the performances as well ?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Tue, 24 Aug 2004 18:17:29 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "[FUN] Performance increase?"
}
] |
[
{
"msg_contents": "Coming from the MSSQL world, I'm used to the first step in optimization\nto be, choose your clustered index and choose it well.\n\nI see that PG has a one-shot CLUSTER command, but doesn't support\ncontinuously-updated clustered indexes.\n\nWhat I infer from newsgroup browsing is, such an index is impossible,\ngiven the MVCC versioning of records (happy to learn I'm wrong).\n\nI'd be curious to know what other people, who've crossed this same\nbridge from MSSQL or Oracle or Sybase to PG, have devised,\nfaced with the same kind of desired performance gain for retrieving\nblocks of rows with the same partial key.\n",
"msg_date": "Wed, 25 Aug 2004 05:28:42 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "> I see that PG has a one-shot CLUSTER command, but doesn't support\n> continuously-updated clustered indexes.\n> \n> What I infer from newsgroup browsing is, such an index is impossible,\n> given the MVCC versioning of records (happy to learn I'm wrong).\n> \n> I'd be curious to know what other people, who've crossed this same\n> bridge from MSSQL or Oracle or Sybase to PG, have devised,\n> faced with the same kind of desired performance gain for retrieving\n> blocks of rows with the same partial key.\n\nI just run clusterdb each weekend in a cron job...\n\nChris\n",
"msg_date": "Thu, 26 Aug 2004 09:16:17 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "On Tue, 2004-08-24 at 22:28, Mischa Sandberg wrote:\n> I see that PG has a one-shot CLUSTER command, but doesn't support\n> continuously-updated clustered indexes.\n> \n> What I infer from newsgroup browsing is, such an index is impossible,\n> given the MVCC versioning of records (happy to learn I'm wrong).\n\n\nIt is possible to have MVCC and ordered/indexed heaps, but it isn't\nsomething you can just tack onto the currently supported types -- I\nlooked into this myself. It would take substantial additional code\ninfrastructure to support it, basically an alternative heap system and\nadding support for tables with odd properties to many parts of the\nsystem. Pretty non-trivial.\n\nThis is probably my #1 \"I wish postgres had this feature\" feature. It\nis a serious scalability enhancer for big systems and a pain to work\naround not having.\n\n\n> I'd be curious to know what other people, who've crossed this same\n> bridge from MSSQL or Oracle or Sybase to PG, have devised,\n> faced with the same kind of desired performance gain for retrieving\n> blocks of rows with the same partial key.\n\n\nThe CLUSTER command is often virtually useless for precisely the kinds\nof tables that need to be clustered. My databases are on-line 24x7, and\nthe tables that are ideal candidates for clustering are in the range of\n50-100 million rows. I can afford to lock these tables up for no more\nthan 5-10 minutes during off-peak in the hopes that no one notices, and\nCLUSTER does not work remotely in the ballpark of that fast for tables\nof that size. People who can run CLUSTER in a cron job must either have\nrelatively small tables or regular large maintenance windows.\n\n\nMy solution, which may or may not work for you, was to write a table\npartitioning system using the natural flexibility and programmability of\npostgresql (e.g. table inheritance). From this I automatically get a\nroughly ordered heap according to the index I would cluster on, with\nonly slightly funky SQL access. The end result works much better with\nCLUSTER too, though CLUSTER is much less necessary at that point\nbecause, at least for my particular purposes, the rows are mostly\nordered due to how the data was partitioned.\n\nSo there are ways to work around CLUSTER, but you'll have to be clever\nand it will require tailoring the solution to your particular\nrequirements.\n\n\nJ. Andrew Rogers\n\n\n\n",
"msg_date": "26 Aug 2004 11:14:32 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
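A bare-bones sketch of the inheritance-based partitioning being described, with made-up table names. Note that in 7.4/8.0 the planner does not use the CHECK constraints to skip children (constraint exclusion only arrived in a later release), so queries against the parent still visit every child table:

    CREATE TABLE readings (
        reading_time timestamptz NOT NULL,
        sensor_id    integer     NOT NULL,
        value        numeric
    );

    CREATE TABLE readings_2004_08 (
        CHECK (reading_time >= '2004-08-01' AND reading_time < '2004-09-01')
    ) INHERITS (readings);

    CREATE INDEX readings_2004_08_idx ON readings_2004_08 (sensor_id, reading_time);

The application inserts into the child for the current period, so each child stays roughly ordered by time; SELECT ... FROM readings automatically scans the parent plus all children.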
{
"msg_contents": "\nHow do vendors actually implement auto-clustering? I assume they move\nrows around during quiet periods or have lots of empty space in each\nvalue bucket.\n\n---------------------------------------------------------------------------\n\nJ. Andrew Rogers wrote:\n> On Tue, 2004-08-24 at 22:28, Mischa Sandberg wrote:\n> > I see that PG has a one-shot CLUSTER command, but doesn't support\n> > continuously-updated clustered indexes.\n> > \n> > What I infer from newsgroup browsing is, such an index is impossible,\n> > given the MVCC versioning of records (happy to learn I'm wrong).\n> \n> \n> It is possible to have MVCC and ordered/indexed heaps, but it isn't\n> something you can just tack onto the currently supported types -- I\n> looked into this myself. It would take substantial additional code\n> infrastructure to support it, basically an alternative heap system and\n> adding support for tables with odd properties to many parts of the\n> system. Pretty non-trivial.\n> \n> This is probably my #1 \"I wish postgres had this feature\" feature. It\n> is a serious scalability enhancer for big systems and a pain to work\n> around not having.\n> \n> \n> > I'd be curious to know what other people, who've crossed this same\n> > bridge from MSSQL or Oracle or Sybase to PG, have devised,\n> > faced with the same kind of desired performance gain for retrieving\n> > blocks of rows with the same partial key.\n> \n> \n> The CLUSTER command is often virtually useless for precisely the kinds\n> of tables that need to be clustered. My databases are on-line 24x7, and\n> the tables that are ideal candidates for clustering are in the range of\n> 50-100 million rows. I can afford to lock these tables up for no more\n> than 5-10 minutes during off-peak in the hopes that no one notices, and\n> CLUSTER does not work remotely in the ballpark of that fast for tables\n> of that size. People who can run CLUSTER in a cron job must either have\n> relatively small tables or regular large maintenance windows.\n> \n> \n> My solution, which may or may not work for you, was to write a table\n> partitioning system using the natural flexibility and programmability of\n> postgresql (e.g. table inheritance). From this I automatically get a\n> roughly ordered heap according to the index I would cluster on, with\n> only slightly funky SQL access. The end result works much better with\n> CLUSTER too, though CLUSTER is much less necessary at that point\n> because, at least for my particular purposes, the rows are mostly\n> ordered due to how the data was partitioned.\n> \n> So there are ways to work around CLUSTER, but you'll have to be clever\n> and it will require tailoring the solution to your particular\n> requirements.\n> \n> \n> J. Andrew Rogers\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 26 Aug 2004 14:18:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Bruce,\n\n> How do vendors actually implement auto-clustering? I assume they move\n> rows around during quiet periods or have lots of empty space in each\n> value bucket.\n\nThat's how SQL Server does it. In old versions (6.5) you had to manually \nsend commands to update the cluster, same as PG. Also, when you create a \ncluster (or an index or table for that matter) you can manually set an amount \nof \"space\" to be held open on each data page for updates.\n\nAlso keep in mind that SQL Server, as a \"single-user database\" has a much \neasier time with this. They don't have to hold several versions of an index \nin memory and collapse it into a single version at commit time.\n\nAll that being said, we could do a better job of \"auto-balancing\" clustered \ntables. I believe that someone was working on this in Hackers through what \nthey called \"B-Tree Tables\". What happened to that?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 26 Aug 2004 11:32:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "On Thu, 2004-08-26 at 11:18, Bruce Momjian wrote:\n> How do vendors actually implement auto-clustering? I assume they move\n> rows around during quiet periods or have lots of empty space in each\n> value bucket.\n\n\nAs far as I know, Oracle does it by having a B-Tree organized heap (a\nfeature introduced around v8 IIRC), basically making the primary key\nindex and the heap the same physical structure. Any non-index columns\nare stored in the index along with the index columns. Implementing it\nis slightly weird because searching the index and selecting the rows\nfrom the heap are not separate operations.\n\nThe major caveat to having tables of this type is that you can only have\na primary key index. No other indexes are possible because the \"heap\"\nconstantly undergoes local reorganizations if you have a lot of write\ntraffic, the same kind of reorganization you would normally expect in a\nBTree index.\n\nThe performance improvements come from two optimizations. First, you\nhave to touch significantly fewer blocks to get all the rows, even\ncompared to a CLUSTERed heap. Second, the footprint is smaller and\nplays nicely with the buffer cache.\n\nWhen I've used these types of heaps in Oracle 8 on heavily used tables\nwith tens of millions of rows, we frequently got a 10x or better\nperformance improvement on queries against those tables. It is only\nreally useful for tables with vast quantities of relatively small rows,\nbut it can be a lifesaver in those cases.\n\n\nJ. Andrew Rogers\n\n\n",
"msg_date": "26 Aug 2004 12:04:48 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Ummm ... not quite. In MSSQL/Sybase/Oracle, a clustered index maintains \nits space saturation as part of each update operation. High activity \ndoes indeed result in less-full pages (typically 60-80% full for tables \nwith heavy deletions or rowsize changes). To bring the percentage back \nup, you run DBCC INDEXDEFRAG, which also does what you'd expect of a \nnormal file defragmenter -- put related disk pages together on the platter.\n\nBut the performance difference is hardly as severe as I gather it can be \nif you neglect to vacuum.\n\nAs for SQL Server being a 'single-user database' ... ummm ... no, I \ndon't think so. I'm REALLY happy to be shut of the Microsoft world, but \nMSSQL 7/2000/2005 is a serious big DB engine. It also has some serious \nbright heads behind it. They hired Goetz Graefe and Paul (aka Per-Ake) \nLarsen away from academia, and it shows, in the join and aggregate \nprocessing. I'll be a happy camper if I manage to contribute something\nto PG that honks the way their stuff does. Happy to discuss, too.\n\nJosh Berkus wrote:\n> Bruce,\n> \n> \n>>How do vendors actually implement auto-clustering? I assume they move\n>>rows around during quiet periods or have lots of empty space in each\n>>value bucket.\n> \n> \n> That's how SQL Server does it. In old versions (6.5) you had to manually \n> send commands to update the cluster, same as PG. Also, when you create a \n> cluster (or an index or table for that matter) you can manually set an amount \n> of \"space\" to be held open on each data page for updates.\n> \n> Also keep in mind that SQL Server, as a \"single-user database\" has a much \n> easier time with this. They don't have to hold several versions of an index \n> in memory and collapse it into a single version at commit time.\n> \n> All that being said, we could do a better job of \"auto-balancing\" clustered \n> tables. I believe that someone was working on this in Hackers through what \n> they called \"B-Tree Tables\". What happened to that?\n> \n",
"msg_date": "Thu, 26 Aug 2004 21:00:47 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> How do vendors actually implement auto-clustering? I assume they move\n> rows around during quiet periods or have lots of empty space in each\n> value bucket.\n> \n> ---------------------------------------------------------------------------\n\nIIRC informix doesn't have it, and you have to recluster periodically\nthe table. After having clustered the table with an index in order to\nrecluster the table with another index you have to release the previous\none ( ALTER index TO NOT CLUSTER ), the CLUSTER is an index attribute and\neach table can have only one index with that attribute ON.\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Thu, 26 Aug 2004 23:09:35 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Sheer nitpick here...\n\nA B-tree is where the records (data) live at all levels of the tree;\nB+ tree is where the records are only at the leaf level.\nThat's what Knuth calls them, anyway.\n\nClustered indexes for all known dbs are true B+ trees.\nNonclustered indexes could be B-trees (probably aren't),\nsince there's no big fanout penalty for storing the little\n(heap) row locators everywhere at all levels.\n\nJ. Andrew Rogers wrote:\n> As far as I know, Oracle does it by having a B-Tree organized heap (a\n> feature introduced around v8 IIRC), basically making the primary key\n> index and the heap the same physical structure. \n...\n",
"msg_date": "Thu, 26 Aug 2004 21:46:13 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Mishca,\n\n> Ummm ... not quite. In MSSQL/Sybase/Oracle, a clustered index maintains\n> its space saturation as part of each update operation. High activity\n> does indeed result in less-full pages (typically 60-80% full for tables\n> with heavy deletions or rowsize changes). To bring the percentage back\n> up, you run DBCC INDEXDEFRAG, which also does what you'd expect of a\n> normal file defragmenter -- put related disk pages together on the platter.\n\nSure, it does now, which is a nice thing. It didn't in the first version \n(6.5) where this cluster maint needed to be done manually and asynchronously, \nas I recall. \n\n> As for SQL Server being a 'single-user database' ... ummm ... no, I\n> don't think so. \n\nHmmm ... perhaps it would be better if I said \"serial-only database\". MSSQL \n(like earlier versions of Sybase) handles transactions by spooling everything \nout to a serial log, effectively making all transcations SERIAL isolation. \nThis has some significant benefits in performance for OLAP and data \nwarehousing, but really kills you on high-concurrency transaction processing.\n\n> I'm REALLY happy to be shut of the Microsoft world, but \n> MSSQL 7/2000/2005 is a serious big DB engine. It also has some serious\n> bright heads behind it. They hired Goetz Graefe and Paul (aka Per-Ake)\n> Larsen away from academia, and it shows, in the join and aggregate\n> processing. I'll be a happy camper if I manage to contribute something\n> to PG that honks the way their stuff does. Happy to discuss, too.\n\nYeah, they also have very speedy cursor handling. I can do things with \ncursors in MSSQL that I wouldn't consider with other DBs. Not that that \nmakes up for the lack of other functionality, but it is nice when you need \nit.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 29 Aug 2004 11:59:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "On Thu, Aug 26, 2004 at 12:04:48PM -0700, J. Andrew Rogers wrote:\n> The major caveat to having tables of this type is that you can only have\n> a primary key index. No other indexes are possible because the \"heap\"\n> constantly undergoes local reorganizations if you have a lot of write\n> traffic, the same kind of reorganization you would normally expect in a\n> BTree index.\n\nThis isn't true, at least in 9i. You can create whatever indexes you\nwant on an index-organized table. I believe that the index stores the PK\nvalue instead of the ROWID.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 31 Aug 2004 16:50:13 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Mischa Sandberg wrote:\n> Coming from the MSSQL world, I'm used to the first step in optimization\n> to be, choose your clustered index and choose it well.\n> I see that PG has a one-shot CLUSTER command, but doesn't support\n> continuously-updated clustered indexes.\n> What I infer from newsgroup browsing is, such an index is impossible,\n> given the MVCC versioning of records (happy to learn I'm wrong).\n> I'd be curious to know what other people, who've crossed this same\n> bridge from MSSQL or Oracle or Sybase to PG, have devised,\n> faced with the same kind of desired performance gain for retrieving\n> blocks of rows with the same partial key.\n\nJust to let people know, after trying various options, this looks the \nmost promising:\n\n- segment the original table into four tables (call them A,B,C,D)\n\n- all insertions go into A.\n- longterm data lives in B.\n\n- primary keys of all requests to delete rows from (B) go into D -- no \nactual deletions are done against B. Deletions against A happen as normal.\n\n- all queries are made against a view: a union of A and B and (not \nexists) D.\n\n- daily merge A,B and (where not exists...) D, into C\n- run cluster on C, then swap names on B and C, truncate A and D.\n\nNot rocket science, but it seems to give the payback of normal \nclustering without locking the table for long periods of time. It also \nsaves on VACUUM FULL time.\n\nAt present, we're only at 1M rows in B on this. More when I know it.\nAdvance warning on any gotchas with this approach would be much \nappreciated. Making a complete copy of (B) is a bit of an ouch.\n",
"msg_date": "Wed, 08 Sep 2004 15:38:51 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
}
] |
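A minimal SQL sketch of the A/B/C/D segmentation scheme Mischa Sandberg describes in the last message of the thread above. Every name here (seg_a, seg_b, seg_c, seg_d, the id/grp/payload columns) is an assumption for illustration only, and the 7.4-era CLUSTER syntax is used:

    -- New rows land in seg_a, long-term rows live in seg_b,
    -- and primary keys of rows "deleted" from seg_b are queued in seg_d.
    CREATE TABLE seg_a (id int PRIMARY KEY, grp int, payload text);
    CREATE TABLE seg_b (id int PRIMARY KEY, grp int, payload text);
    CREATE TABLE seg_d (id int PRIMARY KEY);

    -- All queries go through a union view that hides the queued deletions.
    CREATE VIEW seg_all AS
        SELECT * FROM seg_a
        UNION ALL
        SELECT b.* FROM seg_b b
         WHERE NOT EXISTS (SELECT 1 FROM seg_d d WHERE d.id = b.id);

    -- Nightly merge: build a clustered copy, then swap it in.
    -- (Assumes writers are paused; rows inserted into seg_a after the copy
    --  below would otherwise be lost by the TRUNCATE.)
    CREATE TABLE seg_c AS SELECT * FROM seg_all;
    CREATE INDEX seg_c_grp ON seg_c (grp);
    CLUSTER seg_c_grp ON seg_c;        -- 7.4-era syntax: CLUSTER indexname ON table
    BEGIN;
      ALTER TABLE seg_b RENAME TO seg_old;
      ALTER TABLE seg_c RENAME TO seg_b;
      -- Views bind to table OIDs rather than names, so re-point the view
      -- at the freshly clustered table.
      CREATE OR REPLACE VIEW seg_all AS
          SELECT * FROM seg_a
          UNION ALL
          SELECT b.* FROM seg_b b
           WHERE NOT EXISTS (SELECT 1 FROM seg_d d WHERE d.id = b.id);
    COMMIT;
    TRUNCATE seg_a;
    TRUNCATE seg_d;
    DROP TABLE seg_old;

The view is rebuilt during the swap because a plain rename would leave it pointing at the old heap; this is the main gotcha with the rename-swap approach the message outlines.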
[
{
"msg_contents": "\nNot sure about the overall performance, etc. but I think that in order to collect statistics you need to set some values in the postgresql.conf config file, to wit:\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\n#stats_reset_on_server_start = true\n\nIf the appropriate values aren't set this could account for why no entries are found in the pg_stat tables. The manual has details on these; you'll need to reload postgres to make any changes effective.\n\nGreg\n\n\n-----Original Message-----\nFrom:\tmy ho [mailto:[email protected]]\nSent:\tTue 8/24/2004 11:54 PM\nTo:\[email protected]\nCc:\tJan Wieck\nSubject:\tRe: [PERFORM] postgresql performance with multimedia\n> Tom Lane answered to that question. The code in\n> question does resolve \n> \"localhost\" with getaddrinfo() and then tries to\n> create and bind a UDP \n> socket to all returned addresses. For some reason\n> \"localhost\" on your \n> system resolves to an address that is not available\n> for bind(2).\n\nI tried to put my_ip instead of \"localhost\" in\nbufmng.c and it seems to work (no more complaining).\nHowever i check the pg_statio_all_tables and dont see\nany recorded statistic at all. (all the columns are\n'0')\nsome time postmaster shut down with this err msg: \nLOG: statistics collector process (<process_id>)\nexited with exit code 1\ni starts postmaster with this command:\npostmaster -i -p $PORT -D $PGDATA -k $PGDATA -N 32 -B\n64 -o -s\n\n> > btw, what i want to ask here is does postgreSQL\n> have\n> > any kind of read-ahead buffer implemented? 'cos it\n> > would be useful in multimedia case when we always\n> scan\n> > the large table for continous data.\n> \n> Since there is no mechanism to control that data is\n> stored contiguously \n> in the tables, what would that be good for?\n\ni thought that rows in the table will be stored\ncontiguously? in that case, if the user is requesting\n1 row, we make sure that the continue rows are ready\nin the buffer pool so that when they next requested,\nthey wont be asked to read from disk. For multimedia\ndata, this is important 'cos data needs to be\npresented continuously without any waiting.\n\nthanks again for your help\nMT Ho\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n",
"msg_date": "Wed, 25 Aug 2004 00:14:01 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql performance with multimedia"
}
] |
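A few follow-up queries for the statistics-collector settings discussed in the thread above. The views and setting names are the standard pg_stat/pg_statio ones; the LIMITs and orderings are only illustrative:

    -- Confirm the collector settings actually took effect after the reload:
    SHOW stats_start_collector;
    SHOW stats_block_level;
    SHOW stats_row_level;

    -- Block-level I/O per table (needs stats_block_level = true); all-zero
    -- columns usually mean the setting is still off or nothing ran since reset.
    SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
      FROM pg_statio_user_tables
     ORDER BY heap_blks_read + heap_blks_hit DESC
     LIMIT 10;

    -- Row-level activity per table (needs stats_row_level = true):
    SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
      FROM pg_stat_user_tables
     ORDER BY seq_scan DESC
     LIMIT 10;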
[
{
"msg_contents": "Hi,\n\nWe're now getting very much off-topic about configuration of networking, but:\n\n- What is your OS?\n- What output do you get when you type 'ping localhost' in any command-prompt?\n\n\n-----Original Message-----\n\n[...]\n> I tried to put my_ip instead of \"localhost\" in\n> bufmng.c and it seems to work (no more complaining).\n[...]\n\nregards,\n\n--Tim\n",
"msg_date": "Wed, 25 Aug 2004 13:26:00 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "OT: Network config (WAS: RE: postgresql performance with multimedia)"
}
] |
[
{
"msg_contents": "Hi,\n\nOn Aug 25, 2004, at 4:22 AM, Mark Kirkwood wrote:\n\n> select\n> \tpav1.person_id\n> from\n> \tperson_attributes_vertical pav1\n> where\n> \t ( pav1.attribute_id = 1\n> \t and pav1.value_id in (2,3))\n> \tor ( pav1.attribute_id = 2\n> \t and pav1.value_id in (2,3))\n>\n\n[...]\n\nWhy not combine attribute_id and value_id? Then you have nothing but an OR (or IN).\n\nIt should, AFAICS, give you much better selectivity on your indexes:\n\nThere will be a lot of attributes with the same ID; there will also be a lot of attributes with the same value. However, there should be much less attributes with a specific combination of (ID/Value).\nRight now I think it will be very hard to determine which field has a better selectivity: attribute_id or value_id.\n\n\nThe combined attribute/value field could be an int8 or so, where the upper 4 bytes are for attribute_id and the lower 4 bytes for value_id.\nDepending on the number of attributes and possible values a smaller datatype and / or a different split can be made. A smaller datatype will result in faster access.\n\nWhat difference does that make?\n\nregards,\n\n--Tim\n",
"msg_date": "Wed, 25 Aug 2004 15:05:56 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What is the best way to do attribute/values?"
}
] |
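A rough sketch of the combined attribute/value key suggested in the thread above, against the person_attributes_vertical table from the quoted query. The packed int8 column, its name, and the example IDs are assumptions; a plain two-column index on (attribute_id, value_id) is the simpler relational alternative:

    -- Pack attribute_id into the upper 32 bits and value_id into the lower 32 bits.
    ALTER TABLE person_attributes_vertical ADD COLUMN attr_value int8;

    UPDATE person_attributes_vertical
       SET attr_value = (attribute_id::int8 << 32) | value_id;

    CREATE INDEX pav_attr_value ON person_attributes_vertical (attr_value);

    -- The OR of (attribute_id, value_id) pairs becomes one IN list over a
    -- single, much more selective index:
    SELECT person_id
      FROM person_attributes_vertical
     WHERE attr_value IN ((1::int8 << 32) | 2, (1::int8 << 32) | 3,
                          (2::int8 << 32) | 2, (2::int8 << 32) | 3);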
[
{
"msg_contents": "I have 2 servers both with the exact same data, the same O.S., the same\nversion of Postgres (7.4.5) and the exact same db schema's (one production\nserver, one development server). One server is using the correct index for\nSQL queries resulting in extremely slow performance, the other server is\nproperly selecting the index to use and performance is many times better. I\nhave tried vacuum, but that did not work. I finally resorted to dumping the\ndata, removing the database completely, creating a new database and\nimporting the data only to have to problem resurface. The table has\n5,000,000+ rows on both the systems.\n\nWhen I run 'analyze verbose' on the correctly working system, the following\nis displayed:\n {INDEXSCAN\n :startup_cost 0.00\n :total_cost 465.10\n :plan_rows 44\n :plan_width 118\n :targetlist (\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 1\n :restype 23\n :restypmod -1\n :resname trn_integer\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 1\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 2\n :restype 23\n :restypmod -1\n :resname trn_patno\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 2\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 3\n :restype 1042\n :restypmod 5\n :resname trn_bill_inc\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 3\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 4\n :restype 1043\n :restypmod 13\n :resname trn_userid\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 4\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 1043\n :vartypmod 13\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 5\n :restype 23\n :restypmod -1\n :resname trn_location\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 5\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 6\n :restype 1082\n :restypmod -1\n :resname trn_date\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 6\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 7\n :restype 23\n :restypmod -1\n :resname trn_sercode\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 7\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 8\n :restype 1043\n :restypmod 28\n :resname trn_descr\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 8\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 8\n :vartype 1043\n :vartypmod 28\n :varlevelsup 0\n :varnoold 1\n :varoattno 8\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 9\n :restype 23\n :restypmod -1\n :resname trn_employr\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 9\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 9\n :vartype 23\n :vartypmod -1\n 
:varlevelsup 0\n :varnoold 1\n :varoattno 9\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 10\n :restype 23\n :restypmod -1\n :resname trn_prof\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 10\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 10\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 10\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 11\n :restype 1700\n :restypmod 720902\n :resname trn_amount\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 11\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 11\n :vartype 1700\n :vartypmod 720902\n :varlevelsup 0\n :varnoold 1\n :varoattno 11\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 12\n :restype 1043\n :restypmod 7\n :resname trn_tooth\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 12\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 12\n :vartype 1043\n :vartypmod 7\n :varlevelsup 0\n :varnoold 1\n :varoattno 12\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 13\n :restype 1043\n :restypmod 10\n :resname trn_surface\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 13\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 13\n :vartype 1043\n :vartypmod 10\n :varlevelsup 0\n :varnoold 1\n :varoattno 13\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 14\n :restype 1042\n :restypmod 5\n :resname trn_flag\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 14\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 14\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 14\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 15\n :restype 23\n :restypmod -1\n :resname trn_counter\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 15\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 15\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 15\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 16\n :restype 23\n :restypmod -1\n :resname trn_guarantr\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 16\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 16\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 16\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 17\n :restype 1042\n :restypmod 5\n :resname trn_lab\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 17\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 17\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 17\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 18\n :restype 1082\n :restypmod -1\n :resname trn_old_date\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 18\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 18\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 19\n :restype 1042\n :restypmod 5\n :resname trn_hist_flag\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 19\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 19\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 19\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 20\n :restype 23\n :restypmod -1\n :resname trn_check_no\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 20\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 20\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 20\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 21\n :restype 1043\n :restypmod 7\n 
:resname trn_commcode\n :ressortgroupref 0\n :resorigtbl 789839\n :resorigcol 21\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 21\n :vartype 1043\n :vartypmod 7\n :varlevelsup 0\n :varnoold 1\n :varoattno 21\n }\n }\n )\n :qual (\n {OPEXPR\n :opno 1098\n :opfuncid 1090\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 18\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n\n {CONST\n :consttype 1082\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 91 -8 -1 -1 ]\n }\n )\n }\n {OPEXPR\n :opno 1096\n :opfuncid 1088\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 18\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n\n {CONST\n :consttype 1082\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ -96 6 0 0 ]\n }\n )\n }\n\n {OPEXPR\n :opno 1054\n :opfuncid 1048\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 3\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n\n {CONST\n :consttype 1042\n :constlen -1\n :constbyval false\n :constisnull false\n :constvalue 5 [ 5 0 0 0 66 ]\n }\n )\n }\n )\n\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam ()\n\n :allParam ()\n\n :nParamExec 0\n :scanrelid 1\n :indxid ( 7725589)\n\n :indxqual ((\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 63 13 3 0 ]\n }\n )\n }\n )\n )\n\n :indxqualorig ((\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 63 13 3 0 ]\n }\n )\n }\n )\n )\n\n :indxorderdir 1\n }\n\n Index Scan using trptserc on trans (cost=0.00..465.10 rows=44 width=118)\n Index Cond: (trn_patno = 199999)\n Filter: ((trn_old_date >= '1994-08-23'::date) AND (trn_old_date <=\n'2004-08-23'::date) AND (trn_bill_inc = 'B'::bpchar))\n(687 rows)\n\n\nNow, when I run 'analyze verbose' on the INCORRECTLY working system, the\nfollowing is displayed:\n {INDEXSCAN\n :startup_cost 0.00\n :total_cost 105165.74\n :plan_rows 1\n :plan_width 143\n :targetlist (\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 1\n :restype 23\n :restypmod -1\n :resname trn_integer\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 1\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 2\n :restype 23\n :restypmod -1\n :resname trn_patno\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 2\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 3\n :restype 1042\n :restypmod 5\n :resname trn_bill_inc\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 3\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 4\n :restype 1043\n :restypmod 13\n :resname 
trn_userid\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 4\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 1043\n :vartypmod 13\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 5\n :restype 23\n :restypmod -1\n :resname trn_location\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 5\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 6\n :restype 1082\n :restypmod -1\n :resname trn_date\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 6\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 7\n :restype 23\n :restypmod -1\n :resname trn_sercode\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 7\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 8\n :restype 1043\n :restypmod 28\n :resname trn_descr\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 8\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 8\n :vartype 1043\n :vartypmod 28\n :varlevelsup 0\n :varnoold 1\n :varoattno 8\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 9\n :restype 23\n :restypmod -1\n :resname trn_employer\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 9\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 9\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 9\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 10\n :restype 23\n :restypmod -1\n :resname trn_prof\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 10\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 10\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 10\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 11\n :restype 1700\n :restypmod 720902\n :resname trn_amount\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 11\n :resjunk false\n }\n :expr\n {VAR\n :varno 1\n :varattno 11\n :vartype 1700\n :vartypmod 720902\n :varlevelsup 0\n :varnoold 1\n :varoattno 11\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 12\n :restype 1043\n :restypmod 7\n :resname trn_tooth\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 12\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 12\n :vartype 1043\n :vartypmod 7\n :varlevelsup 0\n :varnoold 1\n :varoattno 12\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 13\n :restype 1043\n :restypmod 10\n :resname trn_surface\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 13\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 13\n :vartype 1043\n :vartypmod 10\n :varlevelsup 0\n :varnoold 1\n :varoattno 13\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 14\n :restype 1042\n :restypmod 5\n :resname trn_flag\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 14\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 14\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 14\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 15\n :restype 23\n :restypmod -1\n :resname trn_counter\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 15\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 15\n 
:vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 15\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 16\n :restype 23\n :restypmod -1\n :resname trn_guarantr\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 16\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 16\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 16\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 17\n :restype 1042\n :restypmod 5\n :resname trn_lab\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 17\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 17\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 17\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 18\n :restype 1082\n :restypmod -1\n :resname trn_old_date\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 18\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 18\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 19\n :restype 1042\n :restypmod 5\n :resname trn_hist_flag\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 19\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 19\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 19\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 20\n :restype 23\n :restypmod -1\n :resname trn_check_no\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 20\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 20\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 20\n }\n }\n\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 21\n :restype 1043\n :restypmod 7\n :resname trn_commcode\n :ressortgroupref 0\n :resorigtbl 2487466\n :resorigcol 21\n :resjunk false\n }\n\n :expr\n {VAR\n :varno 1\n :varattno 21\n :vartype 1043\n :vartypmod 7\n :varlevelsup 0\n :varnoold 1\n :varoattno 21\n }\n }\n )\n :qual (\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 63 13 3 0 ]\n }\n )\n }\n\n {OPEXPR\n :opno 1054\n :opfuncid 1048\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 3\n :vartype 1042\n :vartypmod 5\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n\n {CONST\n :consttype 1042\n :constlen -1\n :constbyval false\n :constisnull false\n :constvalue 5 [ 5 0 0 0 66 ]\n }\n )\n }\n )\n\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam ()\n\n :allParam ()\n\n :nParamExec 0\n :scanrelid 1\n :indxid ( 7762034)\n\n :indxqual ((\n {OPEXPR\n :opno 1098\n :opfuncid 1090\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 1\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n\n {CONST\n :consttype 1082\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 91 -8 -1 -1 ]\n }\n )\n }\n {OPEXPR\n :opno 1096\n :opfuncid 1088\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 1\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n\n {CONST\n :consttype 1082\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ -96 6 0 0 ]\n }\n )\n }\n )\n )\n\n :indxqualorig ((\n {OPEXPR\n :opno 1098\n :opfuncid 1090\n :opresulttype 16\n :opretset false\n :args (\n 
{VAR\n :varno 1\n :varattno 18\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n {CONST\n :consttype 1082\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 91 -8 -1 -1 ]\n }\n )\n }\n\n {OPEXPR\n :opno 1096\n :opfuncid 1088\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 18\n :vartype 1082\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 18\n }\n\n {CONST\n :consttype 1082\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ -96 6 0 0 ]\n }\n )\n }\n )\n )\n\n :indxorderdir 1\n }\n\n Index Scan using todate on trans (cost=0.00..105165.74 rows=1 width=143)\n Index Cond: ((trn_old_date >= '1994-08-23'::date) AND (trn_old_date <=\n'2004-08-23'::date))\n Filter: ((trn_patno = 199999) AND (trn_bill_inc = 'B'::bpchar))\n(713 rows)\n\n\nSo, you see the query optimizer has choosen different indices the two\nsystems - one correctly and the other incorrectly on the exact same set of\ndata???? I can change the query to reduce the number of arguments and then\nperform a subquery (in my java code) but I am afraid there is an internal\nproblem that will crop up somewhere else.\n\n\n",
"msg_date": "Wed, 25 Aug 2004 11:07:27 -0400",
"msg_from": "\"David Price\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer Selecting Incorrect Index"

},
{
"msg_contents": "David Price wrote:\n> I have 2 servers both with the exact same data, the same O.S., the same\n> version of Postgres (7.4.5) and the exact same db schema's (one production\n> server, one development server). One server is using the correct index for\n> SQL queries resulting in extremely slow performance, the other server is\n> properly selecting the index to use and performance is many times better. I\n> have tried vacuum, but that did not work. I finally resorted to dumping the\n> data, removing the database completely, creating a new database and\n> importing the data only to have to problem resurface. The table has\n> 5,000,000+ rows on both the systems.\n> \n> When I run 'analyze verbose' on the correctly working system, the following\n> is displayed:\n\nEXPLAIN ANALYZE is usually considered enough\n\n> Index Scan using trptserc on trans (cost=0.00..465.10 rows=44 width=118)\n> Index Cond: (trn_patno = 199999)\n> Filter: ((trn_old_date >= '1994-08-23'::date) AND (trn_old_date <=\n> '2004-08-23'::date) AND (trn_bill_inc = 'B'::bpchar))\n> (687 rows)\n> \n> \n> Now, when I run 'analyze verbose' on the INCORRECTLY working system, the\n> following is displayed:\n\n> Index Scan using todate on trans (cost=0.00..105165.74 rows=1 width=143)\n> Index Cond: ((trn_old_date >= '1994-08-23'::date) AND (trn_old_date <=\n> '2004-08-23'::date))\n> Filter: ((trn_patno = 199999) AND (trn_bill_inc = 'B'::bpchar))\n> (713 rows)\n\nThese queries are different. The first returns 687 rows and the second \n713 rows. You need to check your systems if they are supposed to be \nidentical.\n\nThings to check:\n1. postgresql.conf settings match - different costs could cause this\n2. statistics on the two columns (trn_patno,trn_old_date) - if they \ndiffer considerably between systems that would also explain it.\n\nI suspect the second one, at a wild guess the working system happens to \nknow 199999 is fairly rare wheras the second just estimates an average.\n\nIf the stats don't help, people are going to want to see the entire \nquery+plan. Could you repost with the query and explain analyse on both \nsystem. Oh, and some idea on how many rows/unique values are involved in \nthe important columns.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 25 Aug 2004 16:47:19 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer Selecting Incorrect Index"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> Things to check:\n> 1. postgresql.conf settings match - different costs could cause this\n> 2. statistics on the two columns (trn_patno,trn_old_date) - if they \n> differ considerably between systems that would also explain it.\n\nThe different estimated row counts could only come from #2. I suspect\nDavid has forgotten to run ANALYZE on the second system.\n\nI agree that EXPLAIN VERBOSE output is not helpful...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Aug 2004 15:08:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer Selecting Incorrect Index "
},
{
"msg_contents": "On Wed, 25 Aug 2004, Richard Huxton wrote:\n\n> > Index Scan using trptserc on trans (cost=0.00..465.10 rows=44 width=118)\n> > Index Cond: (trn_patno = 199999)\n> > Filter: ((trn_old_date >= '1994-08-23'::date) AND (trn_old_date <=\n> > '2004-08-23'::date) AND (trn_bill_inc = 'B'::bpchar))\n> > (687 rows)\n> \n> > Index Scan using todate on trans (cost=0.00..105165.74 rows=1 width=143)\n> > Index Cond: ((trn_old_date >= '1994-08-23'::date) AND (trn_old_date <=\n> > '2004-08-23'::date))\n> > Filter: ((trn_patno = 199999) AND (trn_bill_inc = 'B'::bpchar))\n> > (713 rows)\n> \n> These queries are different. The first returns 687 rows and the second \n> 713 rows.\n\nThe 687 and 713 are the number of rows in the plan, not the number of rows \nthe queries return.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Thu, 26 Aug 2004 08:15:52 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer Selecting Incorrect Index"
},
{
"msg_contents": "Dennis Bjorklund wrote:\n> On Wed, 25 Aug 2004, Richard Huxton wrote:\n>>\n>>These queries are different. The first returns 687 rows and the second \n>>713 rows.\n> \n> \n> The 687 and 713 are the number of rows in the plan, not the number of rows \n> the queries return.\n\nD'OH! Thanks Dennis\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 26 Aug 2004 11:17:34 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer Selecting Incorrect Index"
},
{
"msg_contents": "Tom, your suspicions were correct - ANALYZE was not being run.\n\nI run vacuumdb via a cron script during off hours. After checking the\nscripts on both systems, I found that on the system that was not functioning\ncorrectly that the '-z' (analyze) command line option to vacuumdb was\nmissing. After correcting it and re-running the script, the poorly\nperforming SQL query takes only a few seconds as opposed to 15 minutes.\n\nThank you for your help!\n- David\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Tom Lane\nSent: Wednesday, August 25, 2004 3:08 PM\nTo: Richard Huxton\nCc: David Price; [email protected]\nSubject: Re: [PERFORM] Optimizer Selecting Incorrect Index\n\n\nRichard Huxton <[email protected]> writes:\n> Things to check:\n> 1. postgresql.conf settings match - different costs could cause this\n> 2. statistics on the two columns (trn_patno,trn_old_date) - if they\n> differ considerably between systems that would also explain it.\n\nThe different estimated row counts could only come from #2. I suspect\nDavid has forgotten to run ANALYZE on the second system.\n\nI agree that EXPLAIN VERBOSE output is not helpful...\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n",
"msg_date": "Thu, 26 Aug 2004 07:05:26 -0400",
"msg_from": "\"David Price\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer Selecting Incorrect Index "
}
] |
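For reference, the checks the thread above converges on, written out against the trans table from the posted plans. The literal values are the ones in the thread; statistics can also be kept current from cron via vacuumdb's analyze option, as David describes:

    -- Refresh the planner statistics that were missing on the slow system.
    ANALYZE VERBOSE trans;

    -- Compare estimated vs. actual row counts for the problem query.
    EXPLAIN ANALYZE
    SELECT *
      FROM trans
     WHERE trn_patno = 199999
       AND trn_old_date BETWEEN '1994-08-23' AND '2004-08-23'
       AND trn_bill_inc = 'B';

    -- The per-column statistics the planner bases its index choice on.
    SELECT attname, n_distinct, correlation
      FROM pg_stats
     WHERE tablename = 'trans'
       AND attname IN ('trn_patno', 'trn_old_date');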
[
{
"msg_contents": "Just curious if folks have ever used this for a postgresql server and if\nthey used it with OSX/BSD/Linux. Even if you haven't used it, if you\nknow of something comparable I'd be interested. TIA\n\nhttp://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore.woa/72103/wo/oC2xGlPM9M2i3UsLG0f1PaalTlE/0.0.9.1.0.6.13.0.3.1.3.0.7.12.1.1.0\n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "25 Aug 2004 17:09:37 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "Robert,\n\n> Just curious if folks have ever used this for a postgresql server and if\n> they used it with OSX/BSD/Linux. Even if you haven't used it, if you\n> know of something comparable I'd be interested. TIA\n\nLast I checked Apple was still shipping the XServes with SATA drives and a \nPROMISE controller, both very consumer-grade (and not server-grade) hardware. \nI can't recommend the XServe as a database platform. SCSI still makes a \ndifference for databases, more because of the controllers than anything else.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 25 Aug 2004 14:22:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n\n> Robert,\n>\n>> Just curious if folks have ever used this for a postgresql server and if\n>> they used it with OSX/BSD/Linux. Even if you haven't used it, if you\n>> know of something comparable I'd be interested. TIA\n>\n\\> Last I checked Apple was still shipping the XServes with SATA drives\n> and a PROMISE controller, both very consumer-grade (and not\n> server-grade) hardware. I can't recommend the XServe as a database\n> platform. SCSI still makes a difference for databases, more because\n> of the controllers than anything else.\n\nThe XServe RAID is fibre-channel. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n\n",
"msg_date": "Wed, 25 Aug 2004 18:52:05 -0400",
"msg_from": "Doug McNaught <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "we checked a XServe/XRaid system some months ago and\nespecially the relation price/space/performance was OK\ncompared to a HP/Intel maschine. Tomorrow I'll try to\nfind the performance charts on my harddisc and post the\nlinks to the list. You get a huge amount of raid-space\nfor a good price.\n\nWe plan to get one to do our web-statistics there with\nabout 150 MegaPageImpressions a month.\n\nRalf Schramm\n\n\nAm 25.08.2004 um 23:09 schrieb Robert Treat:\n\n> Just curious if folks have ever used this for a postgresql server and \n> if\n> they used it with OSX/BSD/Linux. Even if you haven't used it, if you\n> know of something comparable I'd be interested. TIA\n>\n> http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore.woa/72103/ \n> wo/oC2xGlPM9M2i3UsLG0f1PaalTlE/0.0.9.1.0.6.13.0.3.1.3.0.7.12.1.1.0\n>\n>\n> Robert Treat\n> -- \n> Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "Thu, 26 Aug 2004 00:58:24 +0200",
"msg_from": "Ralf Schramm <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "the XServe/XRaid comes with FibreChannel\n\nHere some infos:\nhttp://www.apple.com/xserve/raid/architecture.html\nhttp://www.apple.com/xserve/raid/fibre_channel.html\nhttp://www.apple.com/xserve/architecture.html\n\nRalf Schramm\n\n\nAm 25.08.2004 um 23:22 schrieb Josh Berkus:\n\n> Robert,\n>\n>> Just curious if folks have ever used this for a postgresql server and \n>> if\n>> they used it with OSX/BSD/Linux. Even if you haven't used it, if you\n>> know of something comparable I'd be interested. TIA\n>\n> Last I checked Apple was still shipping the XServes with SATA drives \n> and a\n> PROMISE controller, both very consumer-grade (and not server-grade) \n> hardware.\n> I can't recommend the XServe as a database platform. SCSI still makes \n> a\n> difference for databases, more because of the controllers than \n> anything else.\n>\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n",
"msg_date": "Thu, 26 Aug 2004 01:03:52 +0200",
"msg_from": "Ralf Schramm <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "Guys,\n\n> the XServe/XRaid comes with FibreChannel\n\nI stand corrected. That should help things some; it makes it more of a small \ntradeoff between performance and storage size for the drives.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 25 Aug 2004 16:49:02 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "\nJust starting to work with one now, so I'll let people know what I \nfind. There has been\nsome talk that the XServe RAID seems more optimized for streaming \napplications rather than\nheavy random-access type applications, which really wouldn't surprise \nme given where they\nprobably expect to sell most of them (music/film). They gave us a very \ngood price break, as\nwe are in an industry they wanted exposure in (financial services). If \nyou want a pile of\nstorage at a good price point, its certainly worth considering.\n\nThe unit itself is built very well, and the admin tools are very good \n(OS X only, though). It and the\ncards that come in the XServes use copper SFP cables/connections, which \nis good or\nbad depending upon you're point of view. The switch Apple sells off of \ntheir web site\nis a Vixel (recently bought by Emulex).\n\nI have XServes hooked up at the moment, which work fine. My production \nDB machine\nis a slackware box, which has tested out fine in initial tests with a \nQLogic HBA and the stock in-kernel\ndrivers. They're also 'certified' to work with Emulex cards, but IIRC \nEmulex doesn't do copper.\nEmulex did open-source their driver code last year (right after I had \nto change an client's install\nfrom my beloved Slack to RHAS because Emulex only had version-specific \ndrivers....).\n\nMore as it happens.\n\nOn Aug 25, 2004, at 6:52 PM, Doug McNaught wrote:\n\n> Josh Berkus <[email protected]> writes:\n>\n>> Robert,\n>>\n>>> Just curious if folks have ever used this for a postgresql server \n>>> and if\n>>> they used it with OSX/BSD/Linux. Even if you haven't used it, if you\n>>> know of something comparable I'd be interested. TIA\n>>\n> \\> Last I checked Apple was still shipping the XServes with SATA drives\n>> and a PROMISE controller, both very consumer-grade (and not\n>> server-grade) hardware. I can't recommend the XServe as a database\n>> platform. SCSI still makes a difference for databases, more because\n>> of the controllers than anything else.\n>\n> The XServe RAID is fibre-channel.\n>\n> -Doug\n> -- \n> Let us cross over the river, and rest under the shade of the trees.\n> --T. J. Jackson, 1863\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Thu, 26 Aug 2004 08:07:42 -0400",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "On Aug 26, 2004, at 14:07, Andrew Rawnsley wrote:\n\n> The unit itself is built very well, and the admin tools are very good \n> (OS X only, though). It and the\n\nThe admin tools are supposed to work cross platform. From Apples \nwebsite: \"This Java-based application provides an intuitive interface \nfor creating protected storage volumes, managing preferences and \nmonitoring storage hardware from any virtually any networked computer \nover TCP/IP. That means you don’t have to use a Mac to administer your \ndeployment, though, we’d like you to, of course.\"\n\nhttp://www.apple.com/xserve/raid/management.html\n\nRegards,\n - Tore.\n",
"msg_date": "Thu, 26 Aug 2004 17:13:24 +0200",
"msg_from": "Tore Halset <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "\nOops. My bad.\n\nThey must really want to sell those things if they're making them \ncompletely platform independent...\n\nOn Aug 26, 2004, at 11:13 AM, Tore Halset wrote:\n\n> On Aug 26, 2004, at 14:07, Andrew Rawnsley wrote:\n>\n>> The unit itself is built very well, and the admin tools are very good \n>> (OS X only, though). It and the\n>\n> The admin tools are supposed to work cross platform. From Apples \n> website: \"This Java-based application provides an intuitive interface \n> for creating protected storage volumes, managing preferences and \n> monitoring storage hardware from any virtually any networked computer \n> over TCP/IP. That means you don’t have to use a Mac to administer your \n> deployment, though, we’d like you to, of course.\"\n>\n> http://www.apple.com/xserve/raid/management.html\n>\n> Regards,\n> - Tore.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Thu, 26 Aug 2004 13:40:41 -0400",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "Actually you are both are right and wrong. The XRaid uses FibreChannel \nto communicate to the host machine(s). The Raid controller is a \nFibreChannel controller. After that there is a FibreChannel to UltraATA \nconversion for each drive, separate ATA bus for each drive.\n\nWhat I am curious about is if this setup gets around ATA fsync problems, \nwhere the drive reports the write before it is actually performed.\n\n\nJosh Berkus wrote:\n\n>Guys,\n>\n> \n>\n>>the XServe/XRaid comes with FibreChannel\n>> \n>>\n>\n>I stand corrected. That should help things some; it makes it more of a small \n>tradeoff between performance and storage size for the drives.\n>\n> \n>\n\n-- \nKevin Barnard\nSpeed Fulfillment and Call Center\[email protected]\n214-258-0120\n\n\n\n\n\n\n\n\nActually you are both are right and wrong. The XRaid uses FibreChannel\nto communicate to the host machine(s). The Raid controller is a\nFibreChannel controller. After that there is a FibreChannel to\nUltraATA conversion for each drive, separate ATA bus for each drive.\n\nWhat I am curious about is if this setup gets around ATA fsync\nproblems, where the drive reports the write before it is actually\nperformed.\n\n\nJosh Berkus wrote:\n\nGuys,\n\n \n\nthe XServe/XRaid comes with FibreChannel\n \n\n\nI stand corrected. That should help things some; it makes it more of a small \ntradeoff between performance and storage size for the drives.\n\n \n\n\n-- \nKevin Barnard\nSpeed Fulfillment and Call Center\[email protected]\n214-258-0120",
"msg_date": "Thu, 26 Aug 2004 14:06:32 -0500",
"msg_from": "Kevin Barnard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "Kevin Barnard <[email protected]> writes:\n\n> Actually you are both are right and wrong. The XRaid uses\n> FibreChannel to communicate to the host machine(s). The Raid\n> controller is a FibreChannel controller. After that there is a\n> FibreChannel to UltraATA conversion for each drive, separate ATA bus\n> for each drive.\n> What I am curious about is if this setup gets around ATA fsync\n> problems, where the drive reports the write before it is actually\n> performed.\n\nGood point.\n\n(a) The FC<->ATA unit hopefully has a battery-backed cache, which\n would make the whole thing more robust against power loss.\n(b) Since Apple is the vendor for the drive units, they can buy ATA\n drives that don't lie about cache flushes. Whether they do or not\n is definitely a question. ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n\n",
"msg_date": "Thu, 26 Aug 2004 15:54:04 -0400",
"msg_from": "Doug McNaught <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "Doug McNaught wrote:\n\n>Kevin Barnard <[email protected]> writes:\n>\n> \n>\n>> Actually you are both are right and wrong. The XRaid uses\n>> FibreChannel to communicate to the host machine(s). The Raid\n>> controller is a FibreChannel controller. After that there is a\n>> FibreChannel to UltraATA conversion for each drive, separate ATA bus\n>> for each drive.\n>> What I am curious about is if this setup gets around ATA fsync\n>> problems, where the drive reports the write before it is actually\n>> performed.\n>> \n>>\n>\n>Good point.\n>\n>(a) The FC<->ATA unit hopefully has a battery-backed cache, which\n> would make the whole thing more robust against power loss.\n>(b) Since Apple is the vendor for the drive units, they can buy ATA\n> drives that don't lie about cache flushes. Whether they do or not\n> is definitely a question. ;)\n> \n>\n\nFYI: http://developer.apple.com/technotes/tn/pdf/tn1040.pdf a tech \nnote on write cache flushing.\n\nA bit dated now, but perhaps some other tech note from Apple has more \nrecent information.\n\n-- Alan\n",
"msg_date": "Thu, 26 Aug 2004 16:01:15 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": "\nOn Aug 26, 2004, at 3:54 PM, Doug McNaught wrote:\n\n> Kevin Barnard <[email protected]> writes:\n>\n>> Actually you are both are right and wrong. The XRaid uses\n>> FibreChannel to communicate to the host machine(s). The Raid\n>> controller is a FibreChannel controller. After that there is a\n>> FibreChannel to UltraATA conversion for each drive, separate ATA \n>> bus\n>> for each drive.\n>> What I am curious about is if this setup gets around ATA fsync\n>> problems, where the drive reports the write before it is actually\n>> performed.\n>\n> Good point.\n>\n> (a) The FC<->ATA unit hopefully has a battery-backed cache, which\n> would make the whole thing more robust against power loss.\n\nEach controller is battery backed (pretty beefy batteries too). \nActually, they are optional,\nbut if you spend the money for the unit and leave off the battery you \nshould\nhave your head examined.\n\n\n> (b) Since Apple is the vendor for the drive units, they can buy ATA\n> drives that don't lie about cache flushes. Whether they do or not\n> is definitely a question. ;)\n\nGiven what they charge for them I would like to think so...but who \nknows...\n\nThe ones in mine are from Hitachi, model #HDS722525VLAT80.\n\n>\n> -Doug\n> -- \n> Let us cross over the river, and rest under the shade of the trees.\n> --T. J. Jackson, 1863\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Thu, 26 Aug 2004 16:12:06 -0400",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
},
{
"msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> Guys,\n>> the XServe/XRaid comes with FibreChannel\n\nJB> I stand corrected. That should help things some; it makes it more\nJB> of a small tradeoff between performance and storage size for the\nJB> drives.\n\n\nit is fibre channel to the host. the internals are still IDE drives\nwith possibly multiple controllers inside the enclosure.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 27 Aug 2004 16:34:29 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone familiar with Apple Xserve RAID"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm a little beginner with Tsearch2 ....\n\nI have simples tables like this :\n\n# \\d article\n Table \"public.article\"\n Column | Type | \nModifiers\n------------+-----------------------------+-----------------------------------------------------------------\n id | integer | not null default \nnextval('public.article_rss_id_rss_seq'::text)\n id_site | integer | not null\n title | text |\n url | text |\n desc | text |\n r_date | timestamp without time zone | default now()\n r_update | timestamp without time zone | default now()\n idxfti | tsvector |\nIndexes:\n \"article_id_key\" unique, btree (id)\n \"idxfti_idx\" gist (idxfti)\n \"ix_article_update\" btree (r_update)\n \"ix_article_url\" btree (url)\n \"ix_id_site\" btree (id_site)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (id_site) REFERENCES site (id_site)\nTriggers:\n tsvectorupdate BEFORE INSERT OR UPDATE ON article FOR EACH ROW EXECUTE \nPROCEDURE tsearch2('idxfti', 'title', 'desc')\n\n# \\d site_rss\n Table \"public.site\"\n Column | Type | Modifiers\n--------------+---------+---------------------------------------------------------------\n id_site | integer | not null default \nnextval('public.site_id_site_seq'::text)\n site_name | text |\n site_url | text |\nurl | text |\n language | text |\n datecrea | date | default now()\n id_category | integer |\n time_refresh | integer |\n active | integer |\n error | integer |\nIndexes:\n \"site_id_site_key\" unique, btree (id_site)\n \"ix_site_id_category\" btree (id_category)\n \"ix_site_url\" btree (url)\n\n# \\d user_choice\n Table \"public.user_choice\"\n Column | Type | Modifiers\n---------+---------+-----------\n id_user | integer |\n id_site | integer |\nIndexes:\n \"ix_user_choice_all\" unique, btree (id_user, id_site)\n\nI have done a simple request, looking for title or description having Postgres \ninside order by rank and date, like this :\nSELECT a.title, a.id, a.url, to_char(a.r_date, 'DD/MM/YYYY HH24:MI:SS') as dt, \ns.site_name, s.id_site, case when exists (select id_user from user_choice u \nwhere u.id_site=s.id_site and u.id_user = 1) then 1 else 0 end as bookmarked\n FROM article a, site s\n WHERE s.id_site = a.id_site\n AND idxfti @@ to_tsquery('postgresql')\n ORDER BY rank(idxfti, to_tsquery('postgresql')) DESC, a.r_date DESC;\n\nThe request takes about 4 seconds ... I have about 1 400 000 records in \narticle and 36 000 records in site table ... it's a Bi-Pentium III 933 MHz \nserver with 1 Gb memory ... I'm using Postgresql 7.4.5\nFor me this result is very very slow I really need a quicker result with less \nthan 1 second ...\nThe next time I call the same request I have got the result in 439 ms ... but \nIf I replace \"Postgresql\" in my find with \"Linux\" for example I will get the \nnext result in 5 seconds ... :o(\n\nIs it a bad use of Tsearch2 ... or a bad table structure ... or from my \nrequest ... ? 
I have no idea how to optimise this ...\n\nExplain gives me this result :\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Sort (cost=10720.91..10724.29 rows=1351 width=191)\n Sort Key: rank(a.idxfti, '\\'postgresql\\''::tsquery), a.r_date\n -> Merge Join (cost=4123.09..10650.66 rows=1351 width=191)\n Merge Cond: (\"outer\".id_site = \"inner\".id_site)\n -> Index Scan using site_id_site_key on site s (cost=0.00..2834.96 \nrows=35705 width=28)\n -> Sort (cost=4123.09..4126.47 rows=1351 width=167)\n Sort Key: a.id_site\n -> Index Scan using idxfti_idx on article a \n(cost=0.00..4052.84 rows=1351 width=167)\n Index Cond: (idxfti @@ '\\'postgresql\\''::tsquery)\n Filter: (idxfti @@ '\\'postgresql\\''::tsquery)\n SubPlan\n -> Seq Scan on user_choice u (cost=0.00..2.69 rows=1 width=4)\n Filter: ((id_site = $0) AND (id_user = 1))\n(13 rows)\n\nAny idea are well done ;o)\n\nRegards,\n-- \nBill Footcow\n\n",
"msg_date": "Thu, 26 Aug 2004 00:48:32 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "TSearch2 and optimisation ..."
},
{
"msg_contents": "Herve'\n\n> The request takes about 4 seconds ... I have about 1 400 000 records in\n> article and 36 000 records in site table ... it's a Bi-Pentium III 933 MHz\n> server with 1 Gb memory ... I'm using Postgresql 7.4.5\n> For me this result is very very slow I really need a quicker result with\n> less than 1 second ...\n> The next time I call the same request I have got the result in 439 ms ...\n> but If I replace \"Postgresql\" in my find with \"Linux\" for example I will\n> get the next result in 5 seconds ... :o(\n\nHmmm. It sounds like your system is unable to keep all of the data cached in \nmemory. What else do you have going on on that machine?\n\n> Explain gives me this result :\n\nPlease do \"EXPLAIN ANALYZE\" so that we can see where time is actually spent.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 25 Aug 2004 16:50:53 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TSearch2 and optimisation ..."
},
{
"msg_contents": "Josh,\n\nLe Jeudi 26 Aoᅵt 2004 01:50, Josh Berkus a ᅵcrit :\n> > The request takes about 4 seconds ... I have about 1 400 000 records in\n> > article and 36 000 records in site table ... it's a Bi-Pentium III 933\n> > MHz server with 1 Gb memory ... I'm using Postgresql 7.4.5\n> > For me this result is very very slow I really need a quicker result with\n> > less than 1 second ...\n> > The next time I call the same request I have got the result in 439 ms ...\n> > but If I replace \"Postgresql\" in my find with \"Linux\" for example I will\n> > get the next result in 5 seconds ... :o(\n>\n> Hmmm. It sounds like your system is unable to keep all of the data cached\n> in memory. What else do you have going on on that machine?\n\nThere is an Apache + PHP running in same time ... \n\n> > Explain gives me this result :\n>\n> Please do \"EXPLAIN ANALYZE\" so that we can see where time is actually\n> spent.\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Sort (cost=10740.35..10743.73 rows=1351 width=190) (actual \ntime=7054.603..7054.707 rows=139 loops=1)\n Sort Key: rank(a.idxfti, '\\'postgresql\\''::tsquery), a.r_date\n -> Merge Join (cost=4123.09..10670.10 rows=1351 width=190) (actual \ntime=5476.749..7052.766 rows=139 loops=1)\n Merge Cond: (\"outer\".id_site = \"inner\".id_site)\n -> Index Scan using site_id_site_key on site s (cost=0.00..2846.52 \nrows=35705 width=28) (actual time=43.985..1548.903 rows=34897 loops=1)\n -> Sort (cost=4123.09..4126.47 rows=1351 width=166) (actual \ntime=5416.836..5416.983 rows=139 loops=1)\n Sort Key: a.id_site\n -> Index Scan using idxfti_idx on article a \n(cost=0.00..4052.84 rows=1351 width=166) (actual time=109.766..5415.108 \nrows=139 loops=1)\n Index Cond: (idxfti @@ '\\'postgresql\\''::tsquery)\n Filter: (idxfti @@ '\\'postgresql\\''::tsquery)\n SubPlan\n -> Seq Scan on user_choice u (cost=0.00..2.69 rows=1 width=4) \n(actual time=0.146..0.146 rows=0 loops=139)\n Filter: ((id_site = $0) AND (id_user = 1))\n Total runtime: 7056.126 ms\n\nThanks for your help ...\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n",
"msg_date": "Thu, 26 Aug 2004 09:22:01 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TSearch2 and optimisation ..."
},
{
"msg_contents": "Herve'\n\n> (cost=0.00..4052.84 rows=1351 width=166) (actual time=109.766..5415.108\n> rows=139 loops=1)\n> Index Cond: (idxfti @@ '\\'postgresql\\''::tsquery)\n> Filter: (idxfti @@ '\\'postgresql\\''::tsquery)\n\n From this, it looks like your FTI index isn't fitting in your sort_mem. \nWhat's sort_mem at now? Can you increase it?\n\nOverall, though, I'm not sure you can get this sub-1s without a faster \nmachine. Although I'm doing FTI on about 25MB of FTI text on a \nsingle-processor machine, and getting 40ms response times, so maybe we can \n...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 26 Aug 2004 10:48:58 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TSearch2 and optimisation ..."
},
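For reference, sort_mem on 7.4 can be raised either per session or globally; a minimal sketch follows, and the 32768 KB (32 MB) figure is only illustrative, not a tuned recommendation for this workload:

    -- per-session, from psql (the value is in kilobytes on 7.4)
    SET sort_mem = 32768;
    SHOW sort_mem;

    -- or persistently in postgresql.conf, followed by a server reload:
    --   sort_mem = 32768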
{
"msg_contents": "Le Jeudi 26 Août 2004 19:48, Josh Berkus a écrit :\n> Herve'\n>\n> > (cost=0.00..4052.84 rows=1351 width=166) (actual time=109.766..5415.108\n> > rows=139 loops=1)\n> > Index Cond: (idxfti @@ '\\'postgresql\\''::tsquery)\n> > Filter: (idxfti @@ '\\'postgresql\\''::tsquery)\n> >\n> >From this, it looks like your FTI index isn't fitting in your sort_mem.\n>\n> What's sort_mem at now? Can you increase it?\n\nshared_buffers = 3000\nsort_mem = 10240\n\n> Overall, though, I'm not sure you can get this sub-1s without a faster\n> machine. Although I'm doing FTI on about 25MB of FTI text on a\n> single-processor machine, and getting 40ms response times, so maybe we can\n> ...\n\nSorry I missed understand what you mean here ... \nYou tell me to upgrade the hardware but you manage a 25 Mb with a single \nprocessor ?? What you mean ?\nMy database is about 450 Mb ...\n\nRegards,\n-- \nBill Footcow\n\n",
"msg_date": "Thu, 26 Aug 2004 21:30:40 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TSearch2 and optimisation ..."
}
] |
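A quick way to judge whether the table and its GiST index can plausibly stay cached in 1 Gb of RAM is to look at their size in pages; a rough sketch against the relation names visible in the plan above (relpages is only refreshed by VACUUM/ANALYZE, and the default 8 KB block size is assumed):

    SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
    FROM pg_class
    WHERE relname IN ('article', 'site', 'idxfti_idx');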
[
{
"msg_contents": "Hi, I have just installed 8.0.0beta1 and I noticed that some query are slower than 7.4.2 queries.\n\nAfter a FULL VACUUM ANALYZE\n\n***With 7.4.2***\n\nexplain analyze SELECT count(*) FROM \"SNS_DATA\" WHERE \"Data_Arrivo_Campione\" BETWEEN '2004-01-01 00:00:00' AND '2004-01-31 23:59:59' AND \"Cod_Par\" = '17476'\n\ngives\n\n Aggregate (cost=46817.89..46817.89 rows=1 width=0) (actual time=401.216..401.217 rows=1 loops=1)\n -> Index Scan using snsdata_codpar on \"SNS_DATA\" (cost=0.00..46817.22 rows=268 width=0) (actual time=165.948..400.258 rows=744 loops=1)\n Index Cond: ((\"Cod_Par\")::text = '17476'::text)\n Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone))\n Total runtime: 401.302 ms\n\n***while on 8.0.0***\n\nthe same query gives\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=93932.91..93932.91 rows=1 width=0) (actual time=14916.371..14916.371 rows=1 loops=1)\n -> Seq Scan on \"SNS_DATA\" (cost=0.00..93930.14 rows=1108 width=0) (actual time=6297.152..14915.330 rows=744 loops=1)\n Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone) AND ((\"Cod_Par\")::text = '17476'::text))\n Total runtime: 14916.935 ms\n\nAnd I if disable the seqscan\nSET enable_seqscan = false;\n\nI get the following Aggregate (cost=158603.19..158603.19 rows=1 width=0) (actual time=4605.862..4605.863 rows=1 loops=1)\n -> Index Scan using snsdata_codpar on \"SNS_DATA\" (cost=0.00..158600.41 rows=1108 width=0) (actual time=2534.422..4604.865 rows=744 loops=1)\n Index Cond: ((\"Cod_Par\")::text = '17476'::text)\n Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone))\n Total runtime: 4605.965 ms\n\nThe total runtime is bigger (x10 !!) 
than the old one.\n\nThe memory runtime parameters are \nshared_buffer = 2048\nwork_mem = sort_mem = 2048\n\nSNS_DATA shema is the following:\n\n Table \"public.SNS_DATA\"\n Column | Type | Modifiers\n----------------------+-----------------------------+--------------------\n Ordine | integer | not null default 0\n Cod_Par | character varying(100) | not null\n Cod_Ana | character varying(100) | not null\n Valore | character varying(255) |\n Descriz | character varying(512) |\n Un_Mis | character varying(70) |\n hash | integer |\n valid | boolean | default true\n alarm | boolean | default false\n Cod_Luogo | character varying(30) |\n Data_Arrivo_Campione | timestamp without time zone |\n site_id | integer |\n Cod_Luogo_v | character varying(30) |\n repeated_val | boolean | default false\nIndexes:\n \"sns_data2_pkey\" PRIMARY KEY, btree (\"Ordine\", \"Cod_Ana\", \"Cod_Par\")\n \"sns_datacodluogo2\" btree (\"Cod_Luogo\")\n \"sns_datatimefield2\" btree (\"Data_Arrivo_Campione\")\n \"sns_siteid2\" btree (site_id)\n \"sns_valid2\" btree (\"valid\")\n \"snsdata_codana\" btree (\"Cod_Ana\")\n \"snsdata_codpar\" btree (\"Cod_Par\")\nForeign-key constraints:\n \"$2\" FOREIGN KEY (\"Cod_Ana\") REFERENCES \"SNS_ANA\"(\"Cod_Ana\") ON DELETE CASCADE\nTriggers:\n sns_action_tr BEFORE INSERT OR UPDATE ON \"SNS_DATA\" FOR EACH ROW EXECUTE PROCEDURE sns_action()\n\n\nCan it be a datatype conversion problem?\nThanks in advance!\nReds",
"msg_date": "Thu, 26 Aug 2004 15:36:20 +0200",
"msg_from": "\"Stefano Bonnin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance problem in 8.0.0beta1"
},
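The 8.0 plan switches to a sequential scan on the strength of its row estimate (1108 rows versus 268 under 7.4, 744 actual), so one usual diagnostic avenue is to give the planner more detailed statistics for the filtered column and re-analyze before comparing plans again; a sketch, with an illustrative statistics target of 100:

    ALTER TABLE "SNS_DATA" ALTER COLUMN "Cod_Par" SET STATISTICS 100;
    ANALYZE "SNS_DATA";
    EXPLAIN ANALYZE SELECT count(*) FROM "SNS_DATA"
     WHERE "Data_Arrivo_Campione" BETWEEN '2004-01-01 00:00:00' AND '2004-01-31 23:59:59'
       AND "Cod_Par" = '17476';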
{
"msg_contents": "Hi,\n\nI assume you have reposted because you have just signed up to the list.\n\nIf this is the case, can you please read the archives or replies to your original post about this question.\nIt did make it onto the archives and myself and others did reply with a few ideas and questions.\n\nIf you could address those in a reply mail that would help everybody with your problem\n\nRegards\n\nRussell Smith\n",
"msg_date": "Mon, 30 Aug 2004 08:31:30 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problem in 8.0.0beta1"
}
] |
[
{
"msg_contents": "Bill Footcow wrote:\n\n...\n> I have done a simple request, looking for title or description having Postgres \n> inside order by rank and date, like this :\n> SELECT a.title, a.id, a.url, to_char(a.r_date, 'DD/MM/YYYY HH24:MI:SS') as dt, \n> s.site_name, s.id_site, case when exists (select id_user from user_choice u \n> where u.id_site=s.id_site and u.id_user = 1) then 1 else 0 end as bookmarked\n> FROM article a, site s\n> WHERE s.id_site = a.id_site\n> AND idxfti @@ to_tsquery('postgresql')\n> ORDER BY rank(idxfti, to_tsquery('postgresql')) DESC, a.r_date DESC;\n> \n> The request takes about 4 seconds ... I have about 1 400 000 records in \n> article and 36 000 records in site table ... it's a Bi-Pentium III 933 MHz \n> server with 1 Gb memory ... I'm using Postgresql 7.4.5\n> For me this result is very very slow I really need a quicker result with less \n> than 1 second ...\n> The next time I call the same request I have got the result in 439 ms ... but \n...\n\nThe first query is slow because the relevant index pages are not cached in memory. Everyone\nexperiences this. GiST indexes on tsvector columns can get really big. You have done nothing\nwrong. When you have a lot of records, tsearch2 will not run fast without extensive performance\ntuning. \n\nRead the following:\n\nOptimization\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/optimization.html\n\nstat function\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/stat.html\n\nStop words\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-intro.html\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/stop_words.html\n\nMulticolumn GiST index\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/multi_column_index.html\n\nopenfts-general mailing list archive\nhttp://sourceforge.net/mailarchive/forum.php?forum=openfts-general\n\nTry some of things out and let me know how it goes.\n\nGeorge Essig\n\n\n",
"msg_date": "Thu, 26 Aug 2004 10:58:33 -0700 (PDT)",
"msg_from": "George Essig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TSearch2 and optimisation ..."
},
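The stat() function mentioned above shows which lexemes dominate the index and are therefore stop-word candidates; a sketch, assuming the tsearch2 module is installed and idxfti is the tsvector column (it scans the whole column, so expect it to take a while on 1.4 million rows):

    SELECT word, ndoc, nentry
    FROM stat('SELECT idxfti FROM article')
    ORDER BY ndoc DESC, nentry DESC
    LIMIT 20;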
{
"msg_contents": "George,\n\nLe Jeudi 26 Aoᅵt 2004 19:58, George Essig a ᅵcrit :\n> Bill Footcow wrote:\n>\n> ...\n>\n> > I have done a simple request, looking for title or description having\n> > Postgres inside order by rank and date, like this :\n> > SELECT a.title, a.id, a.url, to_char(a.r_date, 'DD/MM/YYYY HH24:MI:SS')\n> > as dt, s.site_name, s.id_site, case when exists (select id_user from\n> > user_choice u where u.id_site=s.id_site and u.id_user = 1) then 1 else 0\n> > end as bookmarked FROM article a, site s\n> > WHERE s.id_site = a.id_site\n> > AND idxfti @@ to_tsquery('postgresql')\n> > ORDER BY rank(idxfti, to_tsquery('postgresql')) DESC, a.r_date DESC;\n> >\n> > The request takes about 4 seconds ... I have about 1 400 000 records in\n> > article and 36 000 records in site table ... it's a Bi-Pentium III 933\n> > MHz server with 1 Gb memory ... I'm using Postgresql 7.4.5\n> > For me this result is very very slow I really need a quicker result with\n> > less than 1 second ...\n> > The next time I call the same request I have got the result in 439 ms ...\n> > but\n>\n> ...\n>\n> The first query is slow because the relevant index pages are not cached in\n> memory. Everyone experiences this. GiST indexes on tsvector columns can\n> get really big. You have done nothing wrong. When you have a lot of\n> records, tsearch2 will not run fast without extensive performance tuning.\n>\n> Read the following:\n>\n> Optimization\n> http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/\n>optimization.html\n> \n> ...\n\nI have well read many pages about this subject ... but I have not found any \nthing for the moment to really help me ...\nWhat can I do to optimize my PostgreSQL configuration for a special use of \nTsearch2 ...\nI'm a little dispointed looking the Postgresql Russian search engine using \nTsearch2 is really quick ... why I can't haev the same result with a \nbi-pentium III 933 and 1Gb of RAM with the text indexation of 1 500 000 \nrecords ?\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n",
"msg_date": "Thu, 9 Sep 2004 16:56:01 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TSearch2 and optimisation ..."
}
] |
[
{
"msg_contents": "I am using a simple PostgreSQL 7.3 database in a soft-realtime\napplication.\n\nI have a problem where an update on a record within a (fully indexed)\ntable containing less than ten records needs to occur as fast as\npossible.\n\nImmediately after performing a vaccum, updates take upto 50 milliseconds\nto occur, however the update performance degrades over time, such that\nafter a few hours of continuous updates, each update takes about half a\nsecond. Regular vacuuming improves the performance temporarily, but\nduring the vacuum operation (which takes upto 2 minutes), performance of\nconcurrent updates falls below an acceptable level (sometimes > 2\nseconds per update).\n\nAccording to the documentation, PostgreSQL keeps the old versions of the\ntuples in case of use by other transactions (i.e. each update is\nactually extending the table). I believe this behaviour is what is\ncausing my performance problem.\n\nIs there a way to disable this behaviour such that an update operation\nwould overwrite the current record and does not generate an outdated\ntuple each time? (My application does not need transactional support).\n \nI believe this would give me the performance gain I need, and would\neliminate the need for regular vacuuming too.\n \nThanks in advance,\n\nNeil Cooper.\n\nThis communication (including any attachments) is intended for the use of the intended recipient(s) only and may contain information that is confidential, privileged or legally protected. Any unauthorized use or dissemination of this communication is strictly prohibited. If you have received this communication in error, please immediately notify the sender by return e-mail message and delete all copies of the original communication. Thank you for your cooperation.\n",
"msg_date": "Thu, 26 Aug 2004 14:02:55 -0400",
"msg_from": "Neil Cooper <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disabling transaction/outdated-tuple behaviour"
},
{
"msg_contents": "> Immediately after performing a vaccum, updates take upto 50 \n> milliseconds to occur, however the update performance \n> degrades over time, such that after a few hours of continuous \n> updates, each update takes about half a second. Regular \n> vacuuming improves the performance temporarily, but during \n> the vacuum operation (which takes upto 2 minutes), \n> performance of concurrent updates falls below an acceptable \n> level (sometimes > 2 seconds per update).\n\nYou must be doing an enormous number of updates! You can vacuum as often as\nyou like, and should usually do so at least as often as the time it takes\nfor 'all' tuples to be updated. So, in your case, every 10 updates. OK,\nthat seems unnecessary, how about every 100 updates? \n\n> According to the documentation, PostgreSQL keeps the old \n> versions of the tuples in case of use by other transactions \n> (i.e. each update is actually extending the table). I believe \n> this behaviour is what is causing my performance problem.\n\nYes, it probably is.\n\n> Is there a way to disable this behaviour such that an update \n> operation would overwrite the current record and does not \n> generate an outdated tuple each time? (My application does \n> not need transactional support).\n\nNo, I don't believe there is. If you really don't need transaction support\nthen you might want to reconsider whether postgres is really the right tool.\n\nM\n\n",
"msg_date": "Thu, 26 Aug 2004 19:18:52 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling transaction/outdated-tuple behaviour"
},
{
"msg_contents": "Neil,\n\n> I am using a simple PostgreSQL 7.3 database in a soft-realtime\n> application.\n\nThen you're not going to like the answer I have for you, see below.\n\n> I have a problem where an update on a record within a (fully indexed)\n> table containing less than ten records needs to occur as fast as\n> possible.\n\nHave you considered dropping the indexes? On such a small table, they won't \nbe used, and they are detracting significantly from your update speed.\n\n> Immediately after performing a vaccum, updates take upto 50 milliseconds\n> to occur, however the update performance degrades over time, such that\n> after a few hours of continuous updates, each update takes about half a\n> second. Regular vacuuming improves the performance temporarily, but\n> during the vacuum operation (which takes upto 2 minutes), performance of\n> concurrent updates falls below an acceptable level (sometimes > 2\n> seconds per update).\n\nThis is \"normal\" depending on your platform and concurrent activity. More \nfrequent vacuums would take less time each. What is your max_fsm_pages set \nto? Increasing this may decrease the necessity of vacuums as well as \nspeeding them up. Also, are you vacuuming the whole DB or just that table? \n2 mintues seems like a long time; I can vacuum a 100GB database in less than \n4.\n\n> Is there a way to disable this behaviour such that an update operation\n> would overwrite the current record and does not generate an outdated\n> tuple each time? (My application does not need transactional support).\n\nNo. Our ACID Transaction compliance depends on \"that behaviour\" (MVCC). We \ndon't offer PostgreSQL in a \"non-ACID mode\". If your application truly does \nnot need transactional support, you may want to consider an embedded database \ninstead, such as BerkeleyDB or SQLite. PostgreSQL has a *lot* of \"baggage\" \nassociated with having 99.99% incorruptable transactions.\n\nAlternately, you may also want to take a look at TelegraphCG, a derivative of \nPostgreSQL designed to handle \"streaming data\". They may have already \nconquered some of your difficulties for you. \nhttp://telegraph.cs.berkeley.edu/\n\nWere I you, I would start with tuning the database first through \nPostgreSQL.conf and a careful look at my hardware usage and DB maintenance. \nThen I would consider testing 8.0, which has some specific improvements \ndesigned to address some of the problems you are having. Particularly, \nJan's Background Writer and Lazy Vacuum.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 26 Aug 2004 11:20:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling transaction/outdated-tuple behaviour"
}
] |
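For a table of only a few rows, the usual workaround is a frequent, targeted vacuum rather than a database-wide one, plus enough free-space-map headroom; a sketch with a hypothetical table name and illustrative settings:

    -- cheap when run every few thousand updates, against just the hot table
    VACUUM ANALYZE hot_status_table;

    -- postgresql.conf (illustrative values; size max_fsm_pages to the whole cluster's update rate)
    --   max_fsm_pages     = 200000
    --   max_fsm_relations = 1000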
[
{
"msg_contents": ">> How do vendors actually implement auto-clustering? I assume \n>they move\n>> rows around during quiet periods or have lots of empty space in each\n>> value bucket.\n>\n>\n>As far as I know, Oracle does it by having a B-Tree organized heap (a\n>feature introduced around v8 IIRC), basically making the primary key\n>index and the heap the same physical structure. Any non-index columns\n>are stored in the index along with the index columns. Implementing it\n>is slightly weird because searching the index and selecting the rows\n>from the heap are not separate operations.\n\nAlmost the same for MSSQL. The clustered index is always forced unique.\nIf you create a non-unique clustered index, SQLServer will internally\npad it with random (or is it sequential? Can't remember right now) data\nto make each key unique. The clustered index contains all the data\nfields - both the index key and the other columns from the database.\n\nIt does support non-clustered indexes as well on the same table. Any\n\"secondary index\" will then contain the index key and the primary key\nvalue. This means a lookup in a non-clustered index means a two-step\nindex lookup: First look in the non-clustered index for the clustered\nkey. Then look in the clustered index for the rest of the data. \n\nNaturally a non-clustered index needs better selectivity before it's\nactually used than a clustered index does.\n\nIIRC, SQL Server always creates clustered indexes by default for primary\nkeys.\n\n\n//Magnus\n",
"msg_date": "Thu, 26 Aug 2004 21:30:24 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
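For comparison, PostgreSQL's nearest built-in praxis is the one-shot CLUSTER command, which rewrites the heap in index order but does not maintain that order as rows are later updated; a sketch in 7.4/8.0 syntax with hypothetical names:

    CREATE INDEX orders_customer_idx ON orders (customer_id);
    CLUSTER orders_customer_idx ON orders;  -- rewrites the table in index order, takes an exclusive lock
    ANALYZE orders;                         -- refresh planner statistics after the rewrite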
{
"msg_contents": "Magnus,\n\n> IIRC, SQL Server always creates clustered indexes by default for primary\n> keys.\n\nI think that's a per-database setting; certainly the ones I admin do not.\n\nHowever, since SQL Server orders its data pages, those data pages tend to be \nin the order of the primary key regardless if there is no clustered index.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 26 Aug 2004 12:48:25 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "On Thu, 2004-08-26 at 12:30, Magnus Hagander wrote:\n> Almost the same for MSSQL. The clustered index is always forced unique.\n> If you create a non-unique clustered index, SQLServer will internally\n> pad it with random (or is it sequential? Can't remember right now) data\n> to make each key unique. The clustered index contains all the data\n> fields - both the index key and the other columns from the database.\n> \n> It does support non-clustered indexes as well on the same table. Any\n> \"secondary index\" will then contain the index key and the primary key\n> value. This means a lookup in a non-clustered index means a two-step\n> index lookup: First look in the non-clustered index for the clustered\n> key. Then look in the clustered index for the rest of the data.\n\n\nAh, okay. I see how that would work for a secondary index, though it\nwould make for a slow secondary index. Neat workaround. For all I\nknow, current versions of Oracle may support secondary indexes on\nindex-organized tables; all this Postgres usage over the last couple\nyears has made my Oracle knowledge rusty.\n\n\n> IIRC, SQL Server always creates clustered indexes by default for primary\n> keys.\n\n\nThat would surprise me actually. For some types of tables, e.g. ones\nwith multiple well-used indexes or large rows, index-organizing the heap\ncould easily give worse performance than a normal index/heap pair\ndepending on access patterns. It also tends to be more prone to having\nlocking contention under some access patterns. This is one of those\noptions that needs to be used knowledgeably; it is not a general\narchitectural improvement that you would want to apply to every table\nall the time.\n\n\nJ. Andrew Rogers\n\n\n\n",
"msg_date": "26 Aug 2004 13:44:03 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "J. Andrew Rogers wrote:\n> On Thu, 2004-08-26 at 12:30, Magnus Hagander wrote:\n>>IIRC, SQL Server always creates clustered indexes by default for primary\n>>keys.\n> \n> That would surprise me actually.\n\nYaz, it should. It doesn't ALWAYS create clustered (unique) index for \nprimary keys, but clustered is the default if you just specify\n\nCREATE TABLE Foo (col1, ...\n\t,PRIMARY KEY(col1, ...)\n)\n\nSaying PRIMARY KEY NONCLUSTERED(...) is how you override the default.\n\t\n((Weird to be discussing so much MSSQL here))\n",
"msg_date": "Thu, 26 Aug 2004 21:41:04 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "\nUpdated TODO item:\n\n o Automatically maintain clustering on a table\n\n This would require some background daemon to maintain clustering\n during periods of low usage. It might also require tables to be only\n paritally filled for easier reorganization. It also might require\n creating a merged heap/index data file so an index lookup would\n automatically access the heap data too.\n\n---------------------------------------------------------------------------\n\nJ. Andrew Rogers wrote:\n> On Thu, 2004-08-26 at 12:30, Magnus Hagander wrote:\n> > Almost the same for MSSQL. The clustered index is always forced unique.\n> > If you create a non-unique clustered index, SQLServer will internally\n> > pad it with random (or is it sequential? Can't remember right now) data\n> > to make each key unique. The clustered index contains all the data\n> > fields - both the index key and the other columns from the database.\n> > \n> > It does support non-clustered indexes as well on the same table. Any\n> > \"secondary index\" will then contain the index key and the primary key\n> > value. This means a lookup in a non-clustered index means a two-step\n> > index lookup: First look in the non-clustered index for the clustered\n> > key. Then look in the clustered index for the rest of the data.\n> \n> \n> Ah, okay. I see how that would work for a secondary index, though it\n> would make for a slow secondary index. Neat workaround. For all I\n> know, current versions of Oracle may support secondary indexes on\n> index-organized tables; all this Postgres usage over the last couple\n> years has made my Oracle knowledge rusty.\n> \n> \n> > IIRC, SQL Server always creates clustered indexes by default for primary\n> > keys.\n> \n> \n> That would surprise me actually. For some types of tables, e.g. ones\n> with multiple well-used indexes or large rows, index-organizing the heap\n> could easily give worse performance than a normal index/heap pair\n> depending on access patterns. It also tends to be more prone to having\n> locking contention under some access patterns. This is one of those\n> options that needs to be used knowledgeably; it is not a general\n> architectural improvement that you would want to apply to every table\n> all the time.\n> \n> \n> J. Andrew Rogers\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 26 Aug 2004 21:45:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "\nBruce Momjian <[email protected]> writes:\n\n> Updated TODO item:\n> \n> o Automatically maintain clustering on a table\n> \n> This would require some background daemon to maintain clustering\n> during periods of low usage. It might also require tables to be only\n> paritally filled for easier reorganization. It also might require\n> creating a merged heap/index data file so an index lookup would\n> automatically access the heap data too.\n\nFwiw, I would say the first \"would\" is also a \"might\". None of the previous\ndiscussions here presumed a maintenance daemon. The discussions before talked\nabout a mechanism to try to place new tuples as close as possible to the\nproper index position.\n\nI would also suggest making some distinction between a cluster system similar\nto what we have now but improved to maintain the clustering continuously, and\nan actual index-organized-table where the tuples are actually only stored in a\nbtree structure.\n\nThey're two different approaches to similar problems. But they might both be\nuseful to have, and have markedly different implementation details.\n\n-- \ngreg\n\n",
"msg_date": "26 Aug 2004 23:39:42 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "\nOK, new wording:\n\n o Automatically maintain clustering on a table\n\n This might require some background daemon to maintain clustering\n during periods of low usage. It might also require tables to be only\n paritally filled for easier reorganization. Another idea would\n be to create a merged heap/index data file so an index lookup would\n automatically access the heap data too.\n\n\n---------------------------------------------------------------------------\n\nGreg Stark wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> \n> > Updated TODO item:\n> > \n> > o Automatically maintain clustering on a table\n> > \n> > This would require some background daemon to maintain clustering\n> > during periods of low usage. It might also require tables to be only\n> > paritally filled for easier reorganization. It also might require\n> > creating a merged heap/index data file so an index lookup would\n> > automatically access the heap data too.\n> \n> Fwiw, I would say the first \"would\" is also a \"might\". None of the previous\n> discussions here presumed a maintenance daemon. The discussions before talked\n> about a mechanism to try to place new tuples as close as possible to the\n> proper index position.\n> \n> I would also suggest making some distinction between a cluster system similar\n> to what we have now but improved to maintain the clustering continuously, and\n> an actual index-organized-table where the tuples are actually only stored in a\n> btree structure.\n> \n> They're two different approaches to similar problems. But they might both be\n> useful to have, and have markedly different implementation details.\n> \n> -- \n> greg\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 Aug 2004 00:35:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Greg Stark wrote:\n\n> The discussions before talked about a mechanism to try to place new \n > tuples as close as possible to the proper index position.\n\nMeans this that an index shall have a \"fill factor\" property, similar to\nInformix one ?\n\n From the manual:\n\n\nThe FILLFACTOR option takes effect only when you build an index on a table\nthat contains more than 5,000 rows and uses more than 100 table pages, when\nyou create an index on a fragmented table, or when you create a fragmented\nindex on a nonfragmented table.\nUse the FILLFACTOR option to provide for expansion of an index at a later\ndate or to create compacted indexes.\nWhen the index is created, the database server initially fills only that\npercentage of the nodes specified with the FILLFACTOR value.\n\n# Providing a Low Percentage Value\nIf you provide a low percentage value, such as 50, you allow room for growth\nin your index. The nodes of the index initially fill to a certain percentage and\ncontain space for inserts. The amount of available space depends on the\nnumber of keys in each page as well as the percentage value.\nFor example, with a 50-percent FILLFACTOR value, the page would be half\nfull and could accommodate doubling in size. A low percentage value can\nresult in faster inserts and can be used for indexes that you expect to grow.\n\n\n# Providing a High Percentage Value\nIf you provide a high percentage value, such as 99, your indexes are\ncompacted, and any new index inserts result in splitting nodes. The\nmaximum density is achieved with 100 percent. With a 100-percent\nFILLFACTOR value, the index has no room available for growth; any\nadditions to the index result in splitting the nodes.\nA 99-percent FILLFACTOR value allows room for at least one insertion per\nnode. A high percentage value can result in faster selects and can be used for\nindexes that you do not expect to grow or for mostly read-only indexes.\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n",
"msg_date": "Fri, 27 Aug 2004 10:26:26 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "\nI had FILLFACTOR in the TODO list until just a few months ago, but\nbecause no one had discussed it in 3-4 years, I removed the item. I\nhave added mention now in the auto-cluster section because that actually\nseems like the only good reason for a non-100% fillfactor. I don't\nthink our ordinary btrees have enough of a penalty for splits to make a\nnon-full fillfactor worthwhile, but having a non-full fillfactor for\nautocluster controls how often items have to be shifted around.\n\n---------------------------------------------------------------------------\n\nGaetano Mendola wrote:\n> Greg Stark wrote:\n> \n> > The discussions before talked about a mechanism to try to place new \n> > tuples as close as possible to the proper index position.\n> \n> Means this that an index shall have a \"fill factor\" property, similar to\n> Informix one ?\n> \n> From the manual:\n> \n> \n> The FILLFACTOR option takes effect only when you build an index on a table\n> that contains more than 5,000 rows and uses more than 100 table pages, when\n> you create an index on a fragmented table, or when you create a fragmented\n> index on a nonfragmented table.\n> Use the FILLFACTOR option to provide for expansion of an index at a later\n> date or to create compacted indexes.\n> When the index is created, the database server initially fills only that\n> percentage of the nodes specified with the FILLFACTOR value.\n> \n> # Providing a Low Percentage Value\n> If you provide a low percentage value, such as 50, you allow room for growth\n> in your index. The nodes of the index initially fill to a certain percentage and\n> contain space for inserts. The amount of available space depends on the\n> number of keys in each page as well as the percentage value.\n> For example, with a 50-percent FILLFACTOR value, the page would be half\n> full and could accommodate doubling in size. A low percentage value can\n> result in faster inserts and can be used for indexes that you expect to grow.\n> \n> \n> # Providing a High Percentage Value\n> If you provide a high percentage value, such as 99, your indexes are\n> compacted, and any new index inserts result in splitting nodes. The\n> maximum density is achieved with 100 percent. With a 100-percent\n> FILLFACTOR value, the index has no room available for growth; any\n> additions to the index result in splitting the nodes.\n> A 99-percent FILLFACTOR value allows room for at least one insertion per\n> node. A high percentage value can result in faster selects and can be used for\n> indexes that you do not expect to grow or for mostly read-only indexes.\n> \n> \n> \n> \n> Regards\n> Gaetano Mendola\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 Aug 2004 12:23:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Bruce,\n\nWhat happened to the B-Tree Table patch discussed on Hackers ad nauseum last \nwinter?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 27 Aug 2004 09:31:06 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Josh Berkus wrote:\n> Bruce,\n> \n> What happened to the B-Tree Table patch discussed on Hackers ad nauseum last \n> winter?\n\nI don't remember that. The only issue I remember is sorting btree index\nby heap tid on creation. We eventually got that into CVS for 8.0.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 Aug 2004 12:32:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Greetings,\n\nI am not sure if this applies only to clustering but for storage in \ngeneral,\n\nIIRC Oracle has 2 parameters that can be set at table creation :\nfrom Oracle docs\n\nPCTFREE integer :\nSpecify the percentage of space in each data block of the table, object \ntable OID index, or partition reserved for future updates to the \ntable's rows. The value of PCTFREE must be a value from 0 to 99. A \nvalue of 0 allows the entire block to be filled by inserts of new rows. \nThe default value is 10. This value reserves 10% of each block for \nupdates to existing rows and allows inserts of new rows to fill a \nmaximum of 90% of each block.\nPCTFREE has the same function in the PARTITION description and in the \nstatements that create and alter clusters, indexes, materialized views, \nand materialized view logs. The combination of PCTFREE and PCTUSED \ndetermines whether new rows will be inserted into existing data blocks \nor into new blocks.\n\nPCTUSED integer\nSpecify the minimum percentage of used space that Oracle maintains for \neach data block of the table, object table OID index, or \nindex-organized table overflow data segment. A block becomes a \ncandidate for row insertion when its used space falls below PCTUSED. \nPCTUSED is specified as a positive integer from 0 to 99 and defaults to \n40.\nPCTUSED has the same function in the PARTITION description and in the \nstatements that create and alter clusters, materialized views, and \nmaterialized view logs.\nPCTUSED is not a valid table storage characteristic for an \nindex-organized table (ORGANIZATION INDEX).\nThe sum of PCTFREE and PCTUSED must be equal to or less than 100. You \ncan use PCTFREE and PCTUSED together to utilize space within a table \nmore efficiently.\n\nPostgreSQL could take some hints from the above.\n\nOn Aug 27, 2004, at 1:26 AM, Gaetano Mendola wrote:\n\n> Greg Stark wrote:\n>\n>> The discussions before talked about a mechanism to try to place new\n> > tuples as close as possible to the proper index position.\n>\n> Means this that an index shall have a \"fill factor\" property, similar \n> to\n> Informix one ?\n>\n> From the manual:\n>\n>\n> The FILLFACTOR option takes effect only when you build an index on a \n> table\n> that contains more than 5,000 rows and uses more than 100 table pages, \n> when\n> you create an index on a fragmented table, or when you create a \n> fragmented\n> index on a nonfragmented table.\n> Use the FILLFACTOR option to provide for expansion of an index at a \n> later\n> date or to create compacted indexes.\n> When the index is created, the database server initially fills only \n> that\n> percentage of the nodes specified with the FILLFACTOR value.\n>\n> # Providing a Low Percentage Value\n> If you provide a low percentage value, such as 50, you allow room for \n> growth\n> in your index. The nodes of the index initially fill to a certain \n> percentage and\n> contain space for inserts. The amount of available space depends on the\n> number of keys in each page as well as the percentage value.\n> For example, with a 50-percent FILLFACTOR value, the page would be half\n> full and could accommodate doubling in size. A low percentage value can\n> result in faster inserts and can be used for indexes that you expect \n> to grow.\n>\n>\n> # Providing a High Percentage Value\n> If you provide a high percentage value, such as 99, your indexes are\n> compacted, and any new index inserts result in splitting nodes. The\n> maximum density is achieved with 100 percent. 
With a 100-percent\n> FILLFACTOR value, the index has no room available for growth; any\n> additions to the index result in splitting the nodes.\n> A 99-percent FILLFACTOR value allows room for at least one insertion \n> per\n> node. A high percentage value can result in faster selects and can be \n> used for\n> indexes that you do not expect to grow or for mostly read-only indexes.\n>\n>\n>\n>\n> Regards\n> Gaetano Mendola\n>\n>\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n>\n--\nAdi Alurkar (DBA sf.NET) <[email protected]>\n1024D/79730470 A491 5724 74DE 956D 06CB D844 6DF1 B972 7973 0470\n\n\n",
"msg_date": "Fri, 27 Aug 2004 09:51:00 -0700",
"msg_from": "Adi Alurkar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
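For illustration, both parameters are declared with the table (or partition) storage clause at creation time; a sketch with hypothetical names and illustrative values:

    CREATE TABLE orders (
        order_id NUMBER PRIMARY KEY,
        status   VARCHAR2(20)
    )
    PCTFREE 20   -- keep 20% of each block free for rows that grow on update
    PCTUSED 40;  -- a block rejoins the insert freelist once usage falls below 40%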
{
"msg_contents": "\nBut what is the advantage of non-full pages in Oracle?\n\n---------------------------------------------------------------------------\n\nAdi Alurkar wrote:\n> Greetings,\n> \n> I am not sure if this applies only to clustering but for storage in \n> general,\n> \n> IIRC Oracle has 2 parameters that can be set at table creation :\n> from Oracle docs\n> \n> PCTFREE integer :\n> Specify the percentage of space in each data block of the table, object \n> table OID index, or partition reserved for future updates to the \n> table's rows. The value of PCTFREE must be a value from 0 to 99. A \n> value of 0 allows the entire block to be filled by inserts of new rows. \n> The default value is 10. This value reserves 10% of each block for \n> updates to existing rows and allows inserts of new rows to fill a \n> maximum of 90% of each block.\n> PCTFREE has the same function in the PARTITION description and in the \n> statements that create and alter clusters, indexes, materialized views, \n> and materialized view logs. The combination of PCTFREE and PCTUSED \n> determines whether new rows will be inserted into existing data blocks \n> or into new blocks.\n> \n> PCTUSED integer\n> Specify the minimum percentage of used space that Oracle maintains for \n> each data block of the table, object table OID index, or \n> index-organized table overflow data segment. A block becomes a \n> candidate for row insertion when its used space falls below PCTUSED. \n> PCTUSED is specified as a positive integer from 0 to 99 and defaults to \n> 40.\n> PCTUSED has the same function in the PARTITION description and in the \n> statements that create and alter clusters, materialized views, and \n> materialized view logs.\n> PCTUSED is not a valid table storage characteristic for an \n> index-organized table (ORGANIZATION INDEX).\n> The sum of PCTFREE and PCTUSED must be equal to or less than 100. You \n> can use PCTFREE and PCTUSED together to utilize space within a table \n> more efficiently.\n> \n> PostgreSQL could take some hints from the above.\n> \n> On Aug 27, 2004, at 1:26 AM, Gaetano Mendola wrote:\n> \n> > Greg Stark wrote:\n> >\n> >> The discussions before talked about a mechanism to try to place new\n> > > tuples as close as possible to the proper index position.\n> >\n> > Means this that an index shall have a \"fill factor\" property, similar \n> > to\n> > Informix one ?\n> >\n> > From the manual:\n> >\n> >\n> > The FILLFACTOR option takes effect only when you build an index on a \n> > table\n> > that contains more than 5,000 rows and uses more than 100 table pages, \n> > when\n> > you create an index on a fragmented table, or when you create a \n> > fragmented\n> > index on a nonfragmented table.\n> > Use the FILLFACTOR option to provide for expansion of an index at a \n> > later\n> > date or to create compacted indexes.\n> > When the index is created, the database server initially fills only \n> > that\n> > percentage of the nodes specified with the FILLFACTOR value.\n> >\n> > # Providing a Low Percentage Value\n> > If you provide a low percentage value, such as 50, you allow room for \n> > growth\n> > in your index. The nodes of the index initially fill to a certain \n> > percentage and\n> > contain space for inserts. The amount of available space depends on the\n> > number of keys in each page as well as the percentage value.\n> > For example, with a 50-percent FILLFACTOR value, the page would be half\n> > full and could accommodate doubling in size. 
A low percentage value can\n> > result in faster inserts and can be used for indexes that you expect \n> > to grow.\n> >\n> >\n> > # Providing a High Percentage Value\n> > If you provide a high percentage value, such as 99, your indexes are\n> > compacted, and any new index inserts result in splitting nodes. The\n> > maximum density is achieved with 100 percent. With a 100-percent\n> > FILLFACTOR value, the index has no room available for growth; any\n> > additions to the index result in splitting the nodes.\n> > A 99-percent FILLFACTOR value allows room for at least one insertion \n> > per\n> > node. A high percentage value can result in faster selects and can be \n> > used for\n> > indexes that you do not expect to grow or for mostly read-only indexes.\n> >\n> >\n> >\n> >\n> > Regards\n> > Gaetano Mendola\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 7: don't forget to increase your free space map settings\n> >\n> >\n> --\n> Adi Alurkar (DBA sf.NET) <[email protected]>\n> 1024D/79730470 A491 5724 74DE 956D 06CB D844 6DF1 B972 7973 0470\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 Aug 2004 13:27:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Bruce Momjian\n> Sent: Friday, August 27, 2004 1:27 PM\n> To: Adi Alurkar\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Equivalent praxis to CLUSTERED INDEX?\n> \n> \n> \n> But what is the advantage of non-full pages in Oracle?\n> \n\nOne advantage has to do with updates of variable-length columns, e.g.\nvarchars.\n\nIf the block is fully packed with data, an update to a varchar column\nthat makes the column wider, causes \"row-chaining\". This means that a\nportion of the row is stored in a different data block, which may be\nsomewhere completely different in the storage array. Retrieving that\nrow (or even just that column from that row) as a unit may now require\nadditional disk seek(s).\n\nLeaving some space for updates in each data block doesn't prevent this\nproblem completely, but mitigates it to a certain extent. If for\ninstance a row is typically inserted with a null value for a varchar\ncolumn, but the application developer knows it will almost always get\nupdated with some value later on, then leaving a certain percentage of\nempty space in each block allocated to that table makes sense.\n\nConversely, if you know that your data is never going to get updated\n(e.g. a data warehousing application), you might specify to pack the\nblocks as full as possible. This makes for the most efficient data\nretrieval performance.\n\n- Jeremy\n\n",
"msg_date": "Fri, 27 Aug 2004 13:39:35 -0400",
"msg_from": "\"Jeremy Dunn\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "IIRC it it to reduce the \"overflow\" of data or what oracle calls \nchained rows. i.e if a table has variable length columns and 10 rows \nget inserted into a datapage, if this datapage is full and one of the \nvariable length field gets updated the row will now \"overflow\" into \nanother datapage, but if the datapage is created with an appropriate \namount of free space the updated row will be stored in one single \ndatapage.\n\nOn Aug 27, 2004, at 10:27 AM, Bruce Momjian wrote:\n\n>\n> But what is the advantage of non-full pages in Oracle?\n>\n> ----------------------------------------------------------------------- \n> ----\n>\n> Adi Alurkar wrote:\n>> Greetings,\n>>\n>> I am not sure if this applies only to clustering but for storage in\n>> general,\n>>\n>> IIRC Oracle has 2 parameters that can be set at table creation :\n>> from Oracle docs\n>>\n>> PCTFREE integer :\n>> Specify the percentage of space in each data block of the table, \n>> object\n>> table OID index, or partition reserved for future updates to the\n>> table's rows. The value of PCTFREE must be a value from 0 to 99. A\n>> value of 0 allows the entire block to be filled by inserts of new \n>> rows.\n>> The default value is 10. This value reserves 10% of each block for\n>> updates to existing rows and allows inserts of new rows to fill a\n>> maximum of 90% of each block.\n>> PCTFREE has the same function in the PARTITION description and in the\n>> statements that create and alter clusters, indexes, materialized \n>> views,\n>> and materialized view logs. The combination of PCTFREE and PCTUSED\n>> determines whether new rows will be inserted into existing data blocks\n>> or into new blocks.\n>>\n>> PCTUSED integer\n>> Specify the minimum percentage of used space that Oracle maintains for\n>> each data block of the table, object table OID index, or\n>> index-organized table overflow data segment. A block becomes a\n>> candidate for row insertion when its used space falls below PCTUSED.\n>> PCTUSED is specified as a positive integer from 0 to 99 and defaults \n>> to\n>> 40.\n>> PCTUSED has the same function in the PARTITION description and in the\n>> statements that create and alter clusters, materialized views, and\n>> materialized view logs.\n>> PCTUSED is not a valid table storage characteristic for an\n>> index-organized table (ORGANIZATION INDEX).\n>> The sum of PCTFREE and PCTUSED must be equal to or less than 100. 
You\n>> can use PCTFREE and PCTUSED together to utilize space within a table\n>> more efficiently.\n>>\n>> PostgreSQL could take some hints from the above.\n>>\n>> On Aug 27, 2004, at 1:26 AM, Gaetano Mendola wrote:\n>>\n>>> Greg Stark wrote:\n>>>\n>>>> The discussions before talked about a mechanism to try to place new\n>>>> tuples as close as possible to the proper index position.\n>>>\n>>> Means this that an index shall have a \"fill factor\" property, similar\n>>> to\n>>> Informix one ?\n>>>\n>>> From the manual:\n>>>\n>>>\n>>> The FILLFACTOR option takes effect only when you build an index on a\n>>> table\n>>> that contains more than 5,000 rows and uses more than 100 table \n>>> pages,\n>>> when\n>>> you create an index on a fragmented table, or when you create a\n>>> fragmented\n>>> index on a nonfragmented table.\n>>> Use the FILLFACTOR option to provide for expansion of an index at a\n>>> later\n>>> date or to create compacted indexes.\n>>> When the index is created, the database server initially fills only\n>>> that\n>>> percentage of the nodes specified with the FILLFACTOR value.\n>>>\n>>> # Providing a Low Percentage Value\n>>> If you provide a low percentage value, such as 50, you allow room for\n>>> growth\n>>> in your index. The nodes of the index initially fill to a certain\n>>> percentage and\n>>> contain space for inserts. The amount of available space depends on \n>>> the\n>>> number of keys in each page as well as the percentage value.\n>>> For example, with a 50-percent FILLFACTOR value, the page would be \n>>> half\n>>> full and could accommodate doubling in size. A low percentage value \n>>> can\n>>> result in faster inserts and can be used for indexes that you expect\n>>> to grow.\n>>>\n>>>\n>>> # Providing a High Percentage Value\n>>> If you provide a high percentage value, such as 99, your indexes are\n>>> compacted, and any new index inserts result in splitting nodes. The\n>>> maximum density is achieved with 100 percent. With a 100-percent\n>>> FILLFACTOR value, the index has no room available for growth; any\n>>> additions to the index result in splitting the nodes.\n>>> A 99-percent FILLFACTOR value allows room for at least one insertion\n>>> per\n>>> node. A high percentage value can result in faster selects and can be\n>>> used for\n>>> indexes that you do not expect to grow or for mostly read-only \n>>> indexes.\n>>>\n>>>\n>>>\n>>>\n>>> Regards\n>>> Gaetano Mendola\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 7: don't forget to increase your free space map settings\n>>>\n>>>\n>> --\n>> Adi Alurkar (DBA sf.NET) <[email protected]>\n>> 1024D/79730470 A491 5724 74DE 956D 06CB D844 6DF1 B972 7973 0470\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 6: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>\n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania \n> 19073\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n>\n--\nAdi Alurkar (DBA sf.NET) <[email protected]>\n1024D/79730470 A491 5724 74DE 956D 06CB D844 6DF1 B972 7973 0470\n\n\n",
"msg_date": "Fri, 27 Aug 2004 10:39:38 -0700",
"msg_from": "Adi Alurkar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Adi Alurkar wrote:\n> IIRC it it to reduce the \"overflow\" of data or what oracle calls \n> chained rows. i.e if a table has variable length columns and 10 rows \n> get inserted into a datapage, if this datapage is full and one of the \n> variable length field gets updated the row will now \"overflow\" into \n> another datapage, but if the datapage is created with an appropriate \n> amount of free space the updated row will be stored in one single \n> datapage.\n\nAgreed. What I am wondering is with our system where every update gets\na new row, how would this help us? I know we try to keep an update on\nthe same row as the original, but is there any significant performance\nbenefit to doing that which would offset the compaction advantage?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 Aug 2004 13:48:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Agreed. What I am wondering is with our system where every update gets\n> a new row, how would this help us? I know we try to keep an update on\n> the same row as the original, but is there any significant performance\n> benefit to doing that which would offset the compaction advantage?\n\nBecause Oracle uses overwrite-in-place (undoing from an UNDO log on\ntransaction abort), while we always write a whole new row, it would take\nmuch larger PCTFREE wastage to get a useful benefit in PG than it does\nin Oracle. That wastage translates directly into increased I/O costs,\nso I'm a bit dubious that we should assume there is a win to be had here\njust because Oracle offers the feature.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Aug 2004 14:19:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX? "
},
{
"msg_contents": "I think you've probably fingered the kicker of why PG doesn't have this \nkind of clustering already. Hence perhaps the need for other approaches\nto the issue (the disk-IO efficiency of reading groups of rows related \nby a common key) that other DB's (with in-place update) address with\nsynchronous clustering ('heap rebalancing' ?).\n\nBruce Momjian wrote:\n> Adi Alurkar wrote:\n> \n>>IIRC it it to reduce the \"overflow\" of data or what oracle calls \n>>chained rows. i.e if a table has variable length columns and 10 rows \n>>get inserted into a datapage, if this datapage is full and one of the \n>>variable length field gets updated the row will now \"overflow\" into \n>>another datapage, but if the datapage is created with an appropriate \n>>amount of free space the updated row will be stored in one single \n>>datapage.\n> \n> \n> Agreed. What I am wondering is with our system where every update gets\n> a new row, how would this help us? I know we try to keep an update on\n> the same row as the original, but is there any significant performance\n> benefit to doing that which would offset the compaction advantage?\n> \n",
"msg_date": "Fri, 27 Aug 2004 18:26:39 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> Agreed. What I am wondering is with our system where every update gets\n> a new row, how would this help us? I know we try to keep an update on\n> the same row as the original, but is there any significant performance\n> benefit to doing that which would offset the compaction advantage?\n\nHm. Posit a system where all transactions are short updates executed in\nautocommit mode. \n\nIn such a system as soon as a transaction commits it would take a very short\ntime before the previous record was a dead tuple.\n\nIf every backend kept a small list of tuples it had marked deleted and\nwhenever it was idle checked to see if they were dead yet, it might avoid much\nof the need for vacuum. And in such a circumstance I think you wouldn't need\nmore than a pctfree of 50% even on a busy table. Every tuple would need about\none extra slot.\n\nThis would only be a reasonable idea if a) if the list of potential dead\ntuples is short and if it overflows it just forgets them leaving them for\nvacuum to deal with. and b) It only checks the potentially dead tuples when\nthe backend is otherwise idle.\n\nEven so it would be less efficient than a batch vacuum, and it would be taking\nup i/o bandwidth (to maintain indexes even if the heap buffer is in ram), even\nif that backend is idle it doesn't mean other backends couldn't have used that\ni/o bandwidth.\n\nBut I think it would deal with a lot of the complaints about vacuum and it\nwould make it more feasible to use a pctfree parameter to make clustering more\neffective.\n\n-- \ngreg\n\n",
"msg_date": "27 Aug 2004 15:31:22 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "\nBruce Momjian <[email protected]> writes:\n\n> but is there any significant performance benefit to doing that which would\n> offset the compaction advantage?\n\nJust as a side comment. Setting PCTFREE 0 PCTUSED 100 on tables that have no\nupdates on them has an astonishingly big effect on speed. So the penalty for\nleaving some space free really is substantial.\n\nI think the other poster is right. Oracle really needs pctfree because of the\nway it handles updates. Postgres doesn't really need as much because it\ndoesn't try to squeeze the new tuple in the space the old one took up. If it\ndoesn't fit on the page the worst that happens is it has to store it on some\nother page, whereas oracle has to do its strange row chaining thing.\n\n-- \ngreg\n\n",
"msg_date": "27 Aug 2004 15:34:57 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Greg Stark wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> \n> > but is there any significant performance benefit to doing that which would\n> > offset the compaction advantage?\n> \n> Just as a side comment. Setting PCTFREE 0 PCTUSED 100 on tables that have no\n> updates on them has an astonishingly big effect on speed. So the penalty for\n> leaving some space free really is substantial.\n> \n> I think the other poster is right. Oracle really needs pctfree because of the\n> way it handles updates. Postgres doesn't really need as much because it\n> doesn't try to squeeze the new tuple in the space the old one took up. If it\n> doesn't fit on the page the worst that happens is it has to store it on some\n> other page, whereas oracle has to do its strange row chaining thing.\n\nOracle also does that chain thing so moving updates to different pages\nmight have more of an impact than it does on PostgreSQL. We have chains\ntoo but just for locking. Not sure on Oracle.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 Aug 2004 15:42:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "This discussion is starting to sound like the split in HEAP memory \nmanagement evolution, into garbage-collecting (e.g. Java) and \nnon-garbage-collecting (e.g. C++).\n\nReclamation by GC's these days has become seriously sophisticated.\nCLUSTER resembles the first generation of GC's, which were \nsingle-big-pass hold-everything-else threads.\n\nPerhaps the latest in incremental GC algorithms would be worth scouting, \nfor the next step in PG page management.\n\nGreg Stark wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> \n>>but is there any significant performance benefit to doing that which would\n>>offset the compaction advantage?\n> \n> Just as a side comment. Setting PCTFREE 0 PCTUSED 100 on tables that have no\n> updates on them has an astonishingly big effect on speed. So the penalty for\n> leaving some space free really is substantial.\n> \n> I think the other poster is right. Oracle really needs pctfree because of the\n> way it handles updates. Postgres doesn't really need as much because it\n> doesn't try to squeeze the new tuple in the space the old one took up. If it\n> doesn't fit on the page the worst that happens is it has to store it on some\n> other page, whereas oracle has to do its strange row chaining thing.\n",
"msg_date": "Fri, 27 Aug 2004 19:50:12 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Tom Lane wrote:\n\n > Bruce Momjian <[email protected]> writes:\n >\n >>Agreed. What I am wondering is with our system where every update gets\n >>a new row, how would this help us? I know we try to keep an update on\n >>the same row as the original, but is there any significant performance\n >>benefit to doing that which would offset the compaction advantage?\n >\n >\n > Because Oracle uses overwrite-in-place (undoing from an UNDO log on\n > transaction abort), while we always write a whole new row, it would take\n > much larger PCTFREE wastage to get a useful benefit in PG than it does\n > in Oracle. That wastage translates directly into increased I/O costs,\n > so I'm a bit dubious that we should assume there is a win to be had here\n > just because Oracle offers the feature.\n\nMmmm. Consider this scenario:\n\nctid datas\n(0,1) yyy-xxxxxxxxxxxxxxxxxxx\n(0,2) -------- EMPTY --------\n(0,3) -------- EMPTY --------\n(0,4) -------- EMPTY --------\n(0,5) -------- EMPTY --------\n(0,6) yyy-xxxxxxxxxxxxxxxxxxx\n(0,7) -------- EMPTY --------\n.... -------- EMPTY --------\n(0,11) yyy-xxxxxxxxxxxxxxxxxxx\n\n\nthe row (0,2) --> (0,5) are space available for the (0,1) updates.\nThis will help a table clustered ( for example ) to mantain his\nown correct cluster order.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 28 Aug 2004 03:02:47 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "On Thu, Aug 26, 2004 at 11:39:42PM -0400, Greg Stark wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> \n> > Updated TODO item:\n> > \n> > o Automatically maintain clustering on a table\n> > \n> > This would require some background daemon to maintain clustering\n> > during periods of low usage. It might also require tables to be only\n> > paritally filled for easier reorganization. It also might require\n> > creating a merged heap/index data file so an index lookup would\n> > automatically access the heap data too.\n> \n> Fwiw, I would say the first \"would\" is also a \"might\". None of the previous\n> discussions here presumed a maintenance daemon. The discussions before talked\n> about a mechanism to try to place new tuples as close as possible to the\n> proper index position.\n> \n> I would also suggest making some distinction between a cluster system similar\n> to what we have now but improved to maintain the clustering continuously, and\n> an actual index-organized-table where the tuples are actually only stored in a\n> btree structure.\n> \n> They're two different approaches to similar problems. But they might both be\n> useful to have, and have markedly different implementation details.\n\nThere's a third approach that I think is worth considering. Half of the\nbenefit to clustered tables is limiting the number of pages you need to\naccess when scanning the primary key. The order of tuples in the pages\nthemselves isn't nearly as important as ordering of the pages. This\nmeans you can get most of the benefit of an index-organized table just\nby being careful about what page you place a tuple on. What I'm thinking\nof is some means to ensure all the tuples on a page are within some PK\nrange, but not worrying about the exact order within the page since it's\nrelatively cheap to scan through the page in memory.\n\nSome pros:\nThis would probably mean less change to the code that inserts tuples.\n\nNo need for a background daemon.\n\nNo need to create a new B-Tree table structure.\n\nIdeally, there won't be need to move tuples around, which should mean\nthat current indexing code doesn't need to change.\n\nCons:\nNeed to have some way to deal with pages that fill up.\n\nTo gain full benefit some means of indicating what range of PK values\nare on a page might be needed.\n\nIt's not as beneficial as a true IOT since you don't get the benefit of\nstoring your tuples inline with your B-Tree.\n\nI'm sure there's a ton of things I'm missing, especially since I'm not\nfamiliar with the postgresql code, but hopefully others can explore this\nfurther.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 31 Aug 2004 17:05:03 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
}
] |
[
{
"msg_contents": "FWIW,\n\nInformix does allow the fragmentation of data over named dbspaces by round-robin and expression; this is autosupporting as long as the dba keeps enough space available. You may also fragment the index although there are some variations depending on type of Informix (XPS, etc.); this is available in at least 9.3 ... I have never used the index fragmentation as its own beast, but the fragmenting of data works like a charm for spreadling load over more disks.\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom: Gaetano Mendola [mailto:[email protected]]\nSent: Thursday, August 26, 2004 2:10 PM\nTo: Bruce Momjian; [email protected]\nSubject: Re: [PERFORM] Equivalent praxis to CLUSTERED INDEX?\n\n\nBruce Momjian wrote:\n> How do vendors actually implement auto-clustering? I assume they move\n> rows around during quiet periods or have lots of empty space in each\n> value bucket.\n> \n> ---------------------------------------------------------------------------\n\nIIRC informix doesn't have it, and you have to recluster periodically\nthe table. After having clustered the table with an index in order to\nrecluster the table with another index you have to release the previous\none ( ALTER index TO NOT CLUSTER ), the CLUSTER is an index attribute and\neach table can have only one index with that attribute ON.\n\n\nRegards\nGaetano Mendola\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Thu, 26 Aug 2004 15:36:21 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
}
] |
[
{
"msg_contents": "Hi, I have just installed 8.0.0beta1 and I noticed that some query are slower than 7.4.2 queries.\n\nAfter:\npg_dump my_database >mydb.sql (from 7.4.2)\npsql my_new_database <mydb.sql (to 8.0.0 with COPY instead of INSERT)\nFULL VACUUM ANALYZE\n\n***With the old db on 7.4.2***\n\nexplain analyze SELECT count(*) FROM \"SNS_DATA\" WHERE \"Data_Arrivo_Campione\" BETWEEN '2004-01-01 00:00:00' AND '2004-01-31 23:59:59' AND \"Cod_Par\" = '17476'\n\ngives\n\n Aggregate (cost=46817.89..46817.89 rows=1 width=0) (actual time=401.216..401.217 rows=1 loops=1)\n -> Index Scan using snsdata_codpar on \"SNS_DATA\" (cost=0.00..46817.22 rows=268 width=0) (actual time=165.948..400.258 rows=744 loops=1)\n Index Cond: ((\"Cod_Par\")::text = '17476'::text)\n Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone))\n Total runtime: 401.302 ms\n\n***while on 8.0.0***\n\nthe same query gives\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=93932.91..93932.91 rows=1 width=0) (actual time=14916.371..14916.371 rows=1 loops=1)\n -> Seq Scan on \"SNS_DATA\" (cost=0.00..93930.14 rows=1108 width=0) (actual time=6297.152..14915.330 rows=744 loops=1)\n Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone) AND ((\"Cod_Par\")::text = '17476'::text))\n Total runtime: 14916.935 ms\n\nAnd I if disable the seqscan\nSET enable_seqscan = false;\n\nI get the following:\n\nAggregate (cost=158603.19..158603.19 rows=1 width=0) (actual time=4605.862..4605.863 rows=1 loops=1)\n -> Index Scan using snsdata_codpar on \"SNS_DATA\" (cost=0.00..158600.41 rows=1108 width=0) (actual time=2534.422..4604.865 rows=744 loops=1)\n Index Cond: ((\"Cod_Par\")::text = '17476'::text)\n Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone))\n Total runtime: 4605.965 ms\n\nThe total runtime is bigger (x10 !!) 
than the old one.\n\nThe memory runtime parameters are \nshared_buffer = 2048\nwork_mem = sort_mem = 2048\n\nSNS_DATA shema is the following:\n\n Table \"public.SNS_DATA\"\n Column | Type | Modifiers\n----------------------+-----------------------------+--------------------\n Ordine | integer | not null default 0\n Cod_Par | character varying(100) | not null\n Cod_Ana | character varying(100) | not null\n Valore | character varying(255) |\n Descriz | character varying(512) |\n Un_Mis | character varying(70) |\n hash | integer |\n valid | boolean | default true\n alarm | boolean | default false\n Cod_Luogo | character varying(30) |\n Data_Arrivo_Campione | timestamp without time zone |\n site_id | integer |\n Cod_Luogo_v | character varying(30) |\n repeated_val | boolean | default false\nIndexes:\n \"sns_data2_pkey\" PRIMARY KEY, btree (\"Ordine\", \"Cod_Ana\", \"Cod_Par\")\n \"sns_datacodluogo2\" btree (\"Cod_Luogo\")\n \"sns_datatimefield2\" btree (\"Data_Arrivo_Campione\")\n \"sns_siteid2\" btree (site_id)\n \"sns_valid2\" btree (\"valid\")\n \"snsdata_codana\" btree (\"Cod_Ana\")\n \"snsdata_codpar\" btree (\"Cod_Par\")\nForeign-key constraints:\n \"$2\" FOREIGN KEY (\"Cod_Ana\") REFERENCES \"SNS_ANA\"(\"Cod_Ana\") ON DELETE CASCADE\nTriggers:\n sns_action_tr BEFORE INSERT OR UPDATE ON \"SNS_DATA\" FOR EACH ROW EXECUTE PROCEDURE sns_action()\n\nThe table has 2M of records\nCan it be a datatype conversion issue?\nCan it be depend on the the type of restore (with COPY commands)?\nI have no idea.\n\nThanks in advance!\nReds\n",
"msg_date": "Fri, 27 Aug 2004 08:57:43 +0200",
"msg_from": "\"Stefano Bonnin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance issue with 8.0.0beta1"
},
{
"msg_contents": "7.4.2\n> Aggregate (cost=46817.89..46817.89 rows=1 width=0) (actual time=401.216..401.217 rows=1 loops=1)\n> -> Index Scan using snsdata_codpar on \"SNS_DATA\" (cost=0.00..46817.22 rows=268 width=0) (actual time=165.948..400.258 rows=744 loops=1)\n> Index Cond: ((\"Cod_Par\")::text = '17476'::text)\n> Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone))\n> Total runtime: 401.302 ms\n> \nRow counts are out by a factor of 3, on the low side. so the planner will guess index is better, which it is.\n\n> ***while on 8.0.0***\n> Aggregate (cost=93932.91..93932.91 rows=1 width=0) (actual time=14916.371..14916.371 rows=1 loops=1)\n> -> Seq Scan on \"SNS_DATA\" (cost=0.00..93930.14 rows=1108 width=0) (actual time=6297.152..14915.330 rows=744 loops=1)\n> Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone) AND ((\"Cod_Par\")::text = '17476'::text))\n> Total runtime: 14916.935 ms\nPlanner guesses that 1108 row should be returned, which is out by less, but on the high side.\nBig question is given there are 2M rows, why does returning 1108 rows, less than 1% result in a sequence scan.\nUsually the selectivity on the index is bad, try increasing the stats target on the column.\n\nI know 8.0 has new stats anaylsis code, which could be effecting how it choses the plan. But it would still\nrequire a good amount of stats to get it to guess correctly.\n\nIncrease stats and see if the times improve.\n\n> \n> And I if disable the seqscan\n> SET enable_seqscan = false;\n> \n> I get the following:\n> \n> Aggregate (cost=158603.19..158603.19 rows=1 width=0) (actual time=4605.862..4605.863 rows=1 loops=1)\n> -> Index Scan using snsdata_codpar on \"SNS_DATA\" (cost=0.00..158600.41 rows=1108 width=0) (actual time=2534.422..4604.865 rows=744 loops=1)\n> Index Cond: ((\"Cod_Par\")::text = '17476'::text)\n> Filter: ((\"Data_Arrivo_Campione\" >= '2004-01-01 00:00:00'::timestamp without time zone) AND (\"Data_Arrivo_Campione\" <= '2004-01-31 23:59:59'::timestamp without time zone))\n> Total runtime: 4605.965 ms\n> \n> The total runtime is bigger (x10 !!) than the old one.\nDid you run this multiple times, or is this the first time. If it had to get the data off disk it will be slower.\nAre you sure that it's coming from disk in this and the 7.4 case? or both from memory.\nIf 7.4 is from buffer_cache, or kernel_cache, and 8.0 is from disk you are likely to get A LOT slower.\n\n> \n> The memory runtime parameters are \n> shared_buffer = 2048\n> work_mem = sort_mem = 2048\n> \n[ snip ]\n\n> The table has 2M of records\n> Can it be a datatype conversion issue?\nThat should not be an issue in 8.0, at least for the simple type conversions. like int8 to int4.\nI'm not 100% sure which ones were added, and which were not, but the query appears to cast everything correctly anyway.\n\n> Can it be depend on the the type of restore (with COPY commands)?\nShouldn't and VACUUM FULL ANALYZE will make the table as small as possible. The row order may be different\non disk, but the planner won't know that, and it's a bad plan causing the problem.\n\n> I have no idea.\n> \n> Thanks in advance!\n> Reds\n> \nRegards\n\nRussell Smith.\n",
"msg_date": "Fri, 27 Aug 2004 18:48:08 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue with 8.0.0beta1"
},
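A concrete form of the "increase the stats target" suggestion above, using the column names from this thread; 200 is just a value to experiment with, and ANALYZE has to be re-run for the new target to take effect:

    ALTER TABLE "SNS_DATA" ALTER COLUMN "Cod_Par" SET STATISTICS 200;
    ALTER TABLE "SNS_DATA" ALTER COLUMN "Data_Arrivo_Campione" SET STATISTICS 200;
    ANALYZE "SNS_DATA";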
{
"msg_contents": "Stefano,\n\n> Hi, I have just installed 8.0.0beta1 and I noticed that some query are\n> slower than 7.4.2 queries.\n\nSeems unlikely. How have you configured postgresql.conf? DID you \nconfigure it for the 8.0 database?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 27 Aug 2004 10:14:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue with 8.0.0beta1"
},
{
"msg_contents": "Server HP: Intel(R) Pentium(R) 4 CPU 2.26GHz\nRAM 1GB\nOS: RedHat 8\nAnd the disk:\n\n kernel: megaide: driver version 05.04f (Date: Nov 11, 2002; 18:15 EST)\n kernel: megaide: bios version 02.06.07221605\n kernel: megaide: LD 0 RAID1 status=ONLINE sectors=156297343\ncapacity=76317 MB drives=2\n kernel: scsi0 : LSI Logic MegaIDE RAID BIOS Version 2.6.07221605, 8\ntargs 1 chans 8 luns\n kernel: Vendor: LSILOGIC Model: LD 0 IDERAID Rev:\n kernel: Type: Direct-Access ANSI SCSI revision:\n02\n\nThanks.\nRedS\n\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Stefano Bonnin\" <[email protected]>\nSent: Monday, August 30, 2004 6:54 PM\nSubject: Re: [PERFORM] Query performance issue with 8.0.0beta1\n\n\n> Stefano,\n>\n> > This is my postgres.conf, I have changed only the work_mem and\n> > shared_buffers parameters.\n>\n> And not very much, I see.\n>\n> > >DID you\n> > > configure it for the 8.0 database?\n> >\n> > What does it mean? Is in 8.0 some important NEW configation parameter ?\n>\n> Well, there are but they shouldn't affect your case. However, there\nare a\n> lot of other settings that need to be adjusted. Please post your\nhardware\n> plaform information: OS, CPU, RAM, and disk setup.\n>\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\n",
"msg_date": "Wed, 1 Sep 2004 11:18:52 +0200",
"msg_from": "\"Stefano Bonnin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance issue with 8.0.0beta1"
}
] |
[
{
"msg_contents": "Is it possible (to mix two threads) that you had CLUSTERed the table on the old database in a way that retrieved the records in this query faster?\n",
"msg_date": "Fri, 27 Aug 2004 09:52:15 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Query performance issue with 8.0.0beta1"
}
] |
[
{
"msg_contents": "The query:\n select count(*) from billing where timestamp > now()-60\n\nshould obviously use the index\n\n CREATE INDEX billing_timestamp_idx ON billing USING btree (\"timestamp\"\ntimestamp_ops);\n\non a table with 1400000 rows.\n\nBut it uses a Seq Scan. If I set enable_seqscan=no, it indicates a queryplan\ncould not be calculated.\n\nWhy does this simple query not use the timestamp index, and how can I get it\nto?\n\nThanks, Jack\n\n Jack Kerkhof\n Research & Development\n [email protected]\n www.guest-tek.com\n 1.866.509.1010 3480\n\n--------------------------------------------------------------------------\n\n Guest-Tek is a leading provider of broadband technology solutions for\nthe hospitality industry. Guest-Tek's GlobalSuite� high-speed Internet\nsolution enables hotels to offer their guests the convenience of wired\nand/or wireless broadband Internet access from guest rooms, meeting rooms\nand public areas.\n\n\n\n\n\n\n\nThe \nquery:\n\n select count(*) from billing where timestamp > \nnow()-60\nshould obviously use \nthe index \n CREATE INDEX billing_timestamp_idx ON billing USING btree (\"timestamp\" timestamp_ops);\non a table with \n1400000 rows.\nBut it uses a Seq \nScan. If I set enable_seqscan=no, it indicates a queryplan could not be \ncalculated. \nWhy does this simple query not use the timestamp index, and how can I get \nit to?\nThanks, Jack\n\n\n\n\n\nJack \n KerkhofResearch & Developmentjack.kerkhof@guest-tek.comwww.guest-tek.com1.866.509.1010 \n 3480\n\n\n\n\nGuest-Tek is a leading \n provider of broadband technology solutions for the hospitality industry. \n Guest-Tek's GlobalSuite high-speed Internet solution enables hotels to \n offer their guests the convenience of wired and/or wireless broadband \n Internet access from guest rooms, meeting rooms and public areas.",
"msg_date": "Fri, 27 Aug 2004 11:12:13 -0600",
"msg_from": "\"Jack Kerkhof\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why does a simple query not use an obvious index?"
},
{
"msg_contents": "Strangely enough, I don't find that result surprising.\n\nif the vast bulk of the data is in the past and now()-60 represents a very small slice of the data\nwe might expect that using an index is optimal, but there could be many reasons why it doesn't get\nused. \n\nAFAIK postgres doesn't peek at values used in a query when optimizing so any query with a \">\" type\ncondition is gonna have a seq scan as the plan since the best guess is that you are gonna match\n50% of the table. That's one possible explanation.\n\nAnother is that if the condition data types don't match then an indes won't be used you could try:\n\n select count(*) from billing where timestamp > (now()-60)::timestamp\n\nMight make a difference, I dunno, it's a case of testing amd seing what happens.\n\nYou could try lowering the random page cost, it might help, but I don't like your chances.\n\nIf your problem is that you want to access the most recent data from a large table with fast\nresponse, then you could consider:\n\n1. a \"recent\" index. If the data is within the \"recent\" time from set a flag to true, other wise\nnull. Reset the flags periodically. Nulls aren't indexed so the selectivity of such an index is\nmuch higher. Can work wonders.\n\n2, duplicate recent data in another table that is purged when data passes the age limit. This is\nbasic archiving.\n\nSomething like that. Hopefully someone with more knowlege of the optimaizer will have a brighter\nsuggestion for you. \n\nWhat version are you using by the way?\n \nRegards\nMr Pink\n \n--- Jack Kerkhof <[email protected]> wrote:\n\n> The query:\n> select count(*) from billing where timestamp > now()-60\n> \n> should obviously use the index\n> \n> CREATE INDEX billing_timestamp_idx ON billing USING btree (\"timestamp\"\n> timestamp_ops);\n> \n> on a table with 1400000 rows.\n> \n> But it uses a Seq Scan. If I set enable_seqscan=no, it indicates a queryplan\n> could not be calculated.\n> \n> Why does this simple query not use the timestamp index, and how can I get it\n> to?\n> \n> Thanks, Jack\n> \n> Jack Kerkhof\n> Research & Development\n> [email protected]\n> www.guest-tek.com\n> 1.866.509.1010 3480\n> \n> --------------------------------------------------------------------------\n> \n> Guest-Tek is a leading provider of broadband technology solutions for\n> the hospitality industry. Guest-Tek's GlobalSuite���Ehigh-speed Internet\n> solution enables hotels to offer their guests the convenience of wired\n> and/or wireless broadband Internet access from guest rooms, meeting rooms\n> and public areas.\n> \n> \n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nTake Yahoo! Mail with you! Get it on your mobile phone.\nhttp://mobile.yahoo.com/maildemo \n",
"msg_date": "Sun, 29 Aug 2004 11:04:48 -0700 (PDT)",
"msg_from": "Mr Pink <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
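Suggestion 1 above can also be realised with a partial index, which gives the same effect as the NULL-flag trick (a small, highly selective index over only the recent rows). A minimal sketch against the billing table from this thread; the boolean column and the job that periodically clears it are hypothetical:

    -- hypothetical flag column, cleared for old rows by a periodic maintenance job
    ALTER TABLE billing ADD COLUMN recent boolean;

    -- the index covers only flagged rows, so it stays small
    CREATE INDEX billing_recent_idx ON billing ("timestamp") WHERE recent;

    -- queries must repeat the predicate for the planner to consider the partial index
    SELECT count(*) FROM billing WHERE recent AND "timestamp" > '2004-08-29 12:00:00';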
{
"msg_contents": "On Fri, 2004-08-27 at 11:12, Jack Kerkhof wrote:\n> The query:\n> \n> select count(*) from billing where timestamp > now()-60\n> \n> should obviously use the index \n> \n> CREATE INDEX billing_timestamp_idx ON billing USING btree\n> (\"timestamp\" timestamp_ops);\n> \n> on a table with 1400000 rows.\n> \n> But it uses a Seq Scan. If I set enable_seqscan=no, it indicates a\n> queryplan could not be calculated. \n\nHave you tried this:\n\nmarlowe=> select now()-60;\nERROR: operator does not exist: timestamp with time zone - integer\nHINT: No operator matches the given name and argument type(s). You may\nneed to add explicit type casts.\n\nyou likely need:\n\nsmarlowe=> select now()-'60 seconds'::interval;\n ?column?\n-------------------------------\n 2004-08-29 12:25:38.249564-06\n\ninside there. \n\nAlso, count(*) is likely to always generate a seq scan due to the way\naggregates are implemented currently in pgsql. you might want to try:\n\nselect somefield from sometable where timestampfield > now()-'60\nseconds'::interval\n\nand count the number of returned rows. If there's a lot, it won't be\nany faster, if there's a few, it should be a win.\n\n",
"msg_date": "Sun, 29 Aug 2004 12:28:48 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
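Putting the two points above together against the query that started the thread gives something like the sketch below. It fixes the type error, although, as Tom Lane explains later in the thread, the planner may still prefer a sequential scan because now() minus an interval is not treated as a plan-time constant:

    SELECT count(*) FROM billing
    WHERE "timestamp" > now() - interval '60 seconds';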
{
"msg_contents": "On Sun, Aug 29, 2004 at 11:04:48AM -0700, Mr Pink wrote:\n> Another is that if the condition data types don't match then an indes won't be used you could try:\n> \n> select count(*) from billing where timestamp > (now()-60)::timestamp\n\nIn fact, I've had success with code like\n\n select count(*) from billing where timestamp > ( select now() - interval '1 minute' )\n\nAt least in my case (PostgreSQL 7.2, though), it made PostgreSQL magically do\nan index scan. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 29 Aug 2004 20:57:50 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "\nMr Pink <[email protected]> writes:\n\n> AFAIK postgres doesn't peek at values used in a query when optimizing \n\nOf course it does.\n\nHowever sometimes things don't work perfectly.\nTo get good answers rather than just guesses we'll need two things:\n\n. What version of postgres are you using.\n. The output of EXPLAIN ANALYZE select ...\n\n-- \ngreg\n\n",
"msg_date": "29 Aug 2004 17:10:37 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "\n\"Scott Marlowe\" <[email protected]> writes:\n\n> Also, count(*) is likely to always generate a seq scan due to the way\n> aggregates are implemented currently in pgsql. you might want to try:\n\nHuh? I'm curious to know what you're talking about here.\n\n> select somefield from sometable where timestampfield > now()-'60\n> seconds'::interval\n> \n> and count the number of returned rows. If there's a lot, it won't be\n> any faster, if there's a few, it should be a win.\n\nWhy would this ever be faster? And how could postgres ever calculate that\nwithout doing a sequential scan when count(*) would force it to do a\nsequential scan?\n\n-- \ngreg\n\n",
"msg_date": "29 Aug 2004 17:12:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "On Sun, 2004-08-29 at 15:12, Greg Stark wrote:\n> \"Scott Marlowe\" <[email protected]> writes:\n> \n> > Also, count(*) is likely to always generate a seq scan due to the way\n> > aggregates are implemented currently in pgsql. you might want to try:\n> \n> Huh? I'm curious to know what you're talking about here.\n\nThis has been discussed ad infinitum on the lists in the past. And\nexplained by better minds than mine, but I'll give it a go.\n\nPostgreSQL has a \"generic\" aggregate method. Imagine instead doing a\nselect count(id1+id2-id3) from table where ... In that instance, it's\nnot a simple shortcut to just grab the number of rows anymore. Since\nPostgreSQL uses a generic aggregate method that can be expanded by the\nuser with custom aggregates et. al., it has no optimizations to make\nsimple count(*) fast, like many other databases.\n\nAdd to that the fact that even when postgresql uses an index it still\nhas to hit the data store to get the actual value of the tuple, and\nyou've got very few instances in which an index scan of more than some\nsmall percentage of the table is worth while. I.e. a sequential scan\ntends to \"win\" over an index scan quicker in postgresql than in other\ndatabases like Oracle, where the data store is serialized and the\nindexes have the correct information along with the application of the\ntransaction / roll back segments.\n\n> > select somefield from sometable where timestampfield > now()-'60\n> > seconds'::interval\n> > \n> > and count the number of returned rows. If there's a lot, it won't be\n> > any faster, if there's a few, it should be a win.\n> \n> Why would this ever be faster? And how could postgres ever calculate that\n> without doing a sequential scan when count(*) would force it to do a\n> sequential scan?\n\nBecause, count(*) CANNOT use an index. So, if you're hitting, say,\n0.01% of the table (let's say 20 out of 20,000,000 rows or something\nlike that) then the second should be MUCH faster.\n\n",
"msg_date": "Sun, 29 Aug 2004 15:38:00 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
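As an illustration of the "user-expandable aggregate" point above, here is a toy aggregate built from a plain SQL transition function, written in the CREATE AGGREGATE syntax of the 7.x/8.0 era; the names are invented. It is exactly this extensibility that keeps count(*) from being special-cased the way some other databases do it:

    -- transition function: add the absolute value of each input to the running total
    CREATE FUNCTION sum_abs_step(integer, integer) RETURNS integer AS
        'SELECT $1 + abs($2)' LANGUAGE sql IMMUTABLE STRICT;

    -- a STRICT transition function means NULL inputs are simply skipped
    CREATE AGGREGATE sum_abs (
        basetype = integer,
        sfunc    = sum_abs_step,
        stype    = integer,
        initcond = '0'
    );

    SELECT sum_abs(id1 + id2 - id3) FROM sometable;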
{
"msg_contents": "On Sun, 2004-08-29 at 15:38, Scott Marlowe wrote:\n> On Sun, 2004-08-29 at 15:12, Greg Stark wrote:\n> > \"Scott Marlowe\" <[email protected]> writes:\n> > \n> > > Also, count(*) is likely to always generate a seq scan due to the way\n> > > aggregates are implemented currently in pgsql. you might want to try:\n> > \n> > Huh? I'm curious to know what you're talking about here.\n> \n> This has been discussed ad infinitum on the lists in the past. And\n> explained by better minds than mine, but I'll give it a go.\n> \n> PostgreSQL has a \"generic\" aggregate method. Imagine instead doing a\n> select count(id1+id2-id3) from table where ... \n\nthat should be avg(id1+id2-id3)... doh\n\n",
"msg_date": "Sun, 29 Aug 2004 15:44:15 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": ">> select somefield from sometable where timestampfield > now()-'60\n>> seconds'::interval\n\nThis is a FAQ, but since the archives don't seem to be up at the moment,\nhere's the answer once again:\n\nThe expression \"now() - something\" is not a constant, so the planner\nis faced with \"timestampfield > unknownvalue\". Its default assumption\nabout the number of rows that will match is much too high to make an\nindexscan look profitable (from memory, I think it guesses that about\na third of the table will match...).\n\nThere are a couple of hacks you can use to deal with this. Plan A\nis just \"set enable_seqscan = false\" for this query. This is ugly and\nnot really recommended, but you should try it first to verify that you\ndo get an indexscan that way, just to be sure that lack of statistics\nis the problem and not something else.\n\nPlan B is to add an extra WHERE clause to make the problem look like a\nrange query, eg\n\n\twhere timestampfield > now() - ... AND timestampfield <= now();\n\nThe planner still doesn't know the exact values involved, so it still\ncan't make use of any statistics, but it can see that this is a range\nconstraint on timestampfield. The default guess about the selectivity\nwill be a lot smaller than in the case of the one-sided inequality,\nand in most cases you should get an indexscan out of it. This isn't\ncompletely guaranteed though. Also, it's got a severe problem in that\nif you sometimes do queries with a large interval, it'll still do an\nindexscan even though that may be quite inappropriate.\n\nPlan C is to fix things so that the compared-to value *does* look like a\nconstant; then the planner will correctly observe that only a small part\nof the table is to be scanned, and do the right thing (given reasonably\nup-to-date ANALYZE statistics, anyway). The most trustworthy way of\ndoing that is to compute the \"now() - interval\" value on the client side\nand send over a timestamp constant. If that's not convenient for some\nreason, people frequently use a hack like this:\n\n\tcreate function ago(interval) returns timestamptz as\n\t'select now() - $1' language sql strict immutable;\n\n\tselect ... where timestampfield > ago('60 seconds');\n\nThis is a kluge because you are lying when you say that the result of\nago() is immutable; it obviously isn't. But the planner will believe\nyou, fold the function call to a constant during planning, and use the\nresult. CAUTION: this works nicely for interactively-issued SQL\nqueries, but it will come back to bite you if you try to use ago() in\nprepared queries or plpgsql functions, because the premature collapsing\nof the now() result will become significant.\n\nWe have speculated about ways to get the planner to treat expressions\ninvolving now() and similar functions as pseudo-constants, so that it\nwould do the right thing in this sort of situation without any kluges.\nIt's not been done yet though.\n\nBTW, the above discussion applies to PG 7.3 and later; if you're dealing\nwith an old version then there are some different considerations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Aug 2004 18:03:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index? "
},
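Applying Plan B and Plan C from the message above to the billing query that started this thread looks like the following; the second statement assumes the ago() function defined above has been created:

    -- Plan B: a two-sided range, so the default selectivity guess is much smaller
    SELECT count(*) FROM billing
    WHERE "timestamp" > now() - interval '60 seconds'
      AND "timestamp" <= now();

    -- Plan C: the ago() kluge, folded to a constant at plan time
    SELECT count(*) FROM billing
    WHERE "timestamp" > ago('60 seconds');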
{
"msg_contents": "On Sun, Aug 29, 2004 at 03:38:00PM -0600, Scott Marlowe wrote:\n>>> select somefield from sometable where timestampfield > now()-'60\n>>> seconds'::interval\n>>> \n>>> and count the number of returned rows. If there's a lot, it won't be\n>>> any faster, if there's a few, it should be a win.\n>> Why would this ever be faster? And how could postgres ever calculate that\n>> without doing a sequential scan when count(*) would force it to do a\n>> sequential scan?\n> Because, count(*) CANNOT use an index. So, if you're hitting, say,\n> 0.01% of the table (let's say 20 out of 20,000,000 rows or something\n> like that) then the second should be MUCH faster.\n\nOf course count(*) can use an index:\n\nimages=# explain analyze select count(*) from images where event='test';\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=168.97..168.97 rows=1 width=0) (actual time=68.211..68.215 rows=1 loops=1)\n -> Index Scan using unique_filenames on images (cost=0.00..168.81 rows=63 width=0) (actual time=68.094..68.149 rows=8 loops=1)\n Index Cond: ((event)::text = 'test'::text)\n Total runtime: 68.369 ms\n(4 rows)\n\nHowever, it cannot rely on an index _alone_; it has to go fetch the relevant\npages, but of course, so must \"select somefield from\" etc..\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 30 Aug 2004 00:04:13 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "\n\"Scott Marlowe\" <[email protected]> writes:\n\n> PostgreSQL has a \"generic\" aggregate method. Imagine instead doing a\n> select count(id1+id2-id3) from table where ... In that instance, it's\n> not a simple shortcut to just grab the number of rows anymore. Since\n> PostgreSQL uses a generic aggregate method that can be expanded by the\n> user with custom aggregates et. al., it has no optimizations to make\n> simple count(*) fast, like many other databases.\n\nPeople expect count(*) _without a where clause_ to be cached in a single\nglobal variable. Postgres can't do this, but the reason has everything to do\nwith MVCC, not with postgres's generalized aggregates. Because of MVCC\nPostgres can't just store a single cached value, because there is no single\ncached value. It would have to store a complete history back to the oldest\nextant transaction.\n\nHowever in this case the user has a where clause. No database is going to\ncache values of count(*) for random where clauses. But that doesn't stop\nPostgres from using an index to fetch the records.\n\n\n> > > select somefield from sometable where timestampfield > now()-'60\n> > > seconds'::interval\n> > > \n> > > and count the number of returned rows. If there's a lot, it won't be\n> > > any faster, if there's a few, it should be a win.\n> > \n> > Why would this ever be faster? And how could postgres ever calculate that\n> > without doing a sequential scan when count(*) would force it to do a\n> > sequential scan?\n> \n> Because, count(*) CANNOT use an index. So, if you're hitting, say,\n> 0.01% of the table (let's say 20 out of 20,000,000 rows or something\n> like that) then the second should be MUCH faster.\n\nI think you've applied these past discussions and come up with some bogus\nconclusions.\n\nThe problem here has nothing to do with the count(*) and everything to do with\nthe WHERE clause. To fetch the records satisfying that where clause postgres\nhas to do exactly the same thing regardless of whether it's going to feed the\ndata to count(*) or return some or all of it to the client.\n\nIf postgres decides the where clause isn't selective enough it'll choose to\nuse a sequential scan. However it would do that regardless of whether you're\ncalling count(*) or not. If the number is records is substantial then you\nwould get the overhead of the scan plus the time it takes to transfer all that\nunnecessary data to the user.\n\nWhat you're probably thinking of when you talk about general purpose aggregate\ninterfaces is the difficulty of making min()/max() use indexes. That's a whole\nother case entirely. That's where postgres's generalized aggregates leaves it\nwithout enough information about what records the aggregate functions are\ninterested in and what index scans might make them faster.\n\nNone of these common cases end up making it a good idea to read the records\ninto the clients and do the work in the client. The only cases where that\nwould make sense would be if the function requires doing some manipulation of\nthe data that's awkward to express in sql. The \"top n\" type of query is the\nusual culprit, but with postgres's new array functions even that becomes\ntractable.\n\n-- \ngreg\n\n",
"msg_date": "29 Aug 2004 18:10:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "> People expect count(*) _without a where clause_ to be cached in a single\n> global variable. Postgres can't do this, but the reason has everything to do\n\nSomeone should write an approx_count('table') function that reads\nreltuples from pg_class and tell them to use it in combination with\nautovac.\n\nI've yet to see someone use count(*) across a table and not round the\nresult themselves (approx 9 million clients).\n\n\n",
"msg_date": "Sun, 29 Aug 2004 18:18:20 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
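A rough sketch of the approx_count() idea floated above. It simply reads the row estimate that the last VACUUM or ANALYZE (or autovacuum run) left in pg_class, so it is only as accurate as those runs, and this simplistic version ignores schemas entirely:

    CREATE FUNCTION approx_count(text) RETURNS real AS
        'SELECT reltuples FROM pg_class WHERE relname = $1::name'
    LANGUAGE sql STABLE;

    SELECT approx_count('billing');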
{
"msg_contents": "\n--- Greg Stark <[email protected]> wrote:\n\n> \n> Mr Pink <[email protected]> writes:\n> \n> > AFAIK postgres doesn't peek at values used in a query when optimizing \n> \n> Of course it does.\n\nBut not ones returned by a function such as now(), or when you use bind variables, as Tom aptly\nexplained.\n\nThat's what I meant by 'peek'. Interestingly enough Oracle does that, it's inline with their\npolicy of recommending the use of bind variables. Perhaps postgres could use such a feature some\nday.\n\n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - 50x more storage than other providers!\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Sun, 29 Aug 2004 19:09:54 -0700 (PDT)",
"msg_from": "Mr Pink <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "Mr Pink <[email protected]> writes:\n>>> AFAIK postgres doesn't peek at values used in a query when optimizing \n>> \n>> Of course it does.\n\n> But not ones returned by a function such as now(), or when you use\n> bind variables, as Tom aptly explained.\n\nFWIW, 8.0 does have the ability to use the values of bind variables for\nplanning estimation (Oliver Jowett did the work for this). The main\nissue in the way of dealing with now() is that whatever we did to now()\nwould apply to all functions marked \"stable\", and it's a bit\nnervous-making to assume that they should all be treated this way.\nOr we could introduce more function volatility categories, but that's\nnot much fun either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Aug 2004 22:23:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index? "
},
{
"msg_contents": "Yeah! Bind variable peeking is great news. I did actually read the guff, but forgot about that.\n\nVersion 8 is looking great on paper, I hope I'll get a chance to play wth it soon.\n\nI can kind of appreciate your point about peeking stable functions, however, I would have thought\nthat if it was possible to do for bind variables (which could change many times in a transaction)\nthen it would make even more sense for a stable function which doesn't change for the life of the\ntransaction. No doubt this is an oversimplification the situation.\n\nregards\nMr Pink\n\n\n\t\t\n_______________________________\nDo you Yahoo!?\nWin 1 of 4,000 free domain names from Yahoo! Enter now.\nhttp://promotions.yahoo.com/goldrush\n",
"msg_date": "Mon, 30 Aug 2004 00:38:41 -0700 (PDT)",
"msg_from": "Mr Pink <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index? "
},
{
"msg_contents": "On Sun, Aug 29, 2004 at 06:03:43PM -0400, Tom Lane wrote:\n> >> select somefield from sometable where timestampfield > now()-'60\n> >> seconds'::interval\n> \n> This is a FAQ, but since the archives don't seem to be up at the moment,\n> here's the answer once again:\n> \n> The expression \"now() - something\" is not a constant, so the planner\n> is faced with \"timestampfield > unknownvalue\". Its default assumption\n> about the number of rows that will match is much too high to make an\n> indexscan look profitable (from memory, I think it guesses that about\n> a third of the table will match...).\n\n\nOk; this explains some really wierd stuff I've been seeing.\n\nHowever, I'm seeing breakage of the form mentioned by the original poster\neven when the query uses a _constant_ timestamp: [Postgres 7.4.3]\n\n ntais# \\d detect.stats\n Table \"detect.stats\"\n Column | Type | Modifiers \n --------------+--------------------------+-------------------------------------------------------------\n anomaly_id | integer | not null\n at | timestamp with time zone | not null default ('now'::text)::timestamp(6) with time zone\n resolution | real | default 1.0\n values | real[] | \n stat_type_id | integer | not null\n Indexes:\n \"stats_pkey\" primary key, btree (anomaly_id, stat_type_id, \"at\")\n \"stats__ends_at\" btree (stats__ends_at(\"at\", resolution, \"values\"))\n Foreign-key constraints:\n \"$1\" FOREIGN KEY (anomaly_id) REFERENCES anomalies(anomaly_id) ON DELETE CASCADE\n \"$2\" FOREIGN KEY (stat_type_id) REFERENCES stat_types(stat_type_id)\n\n\n ntais=# SET enable_seqscan = on;\n SET\n ntais=# EXPLAIN ANALYZE \n SELECT anomaly_id, stat_type_id\n FROM detect.stats\n WHERE detect.stats__ends_at(at, resolution, values) > '2004-08-30 16:21:09+12'::timestamptz\n ORDER BY anomaly_id, stat_type_id\n ;\n\n QUERY PLAN \n -----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=602473.59..608576.72 rows=2441254 width=8) (actual time=198577.407..198579.136 rows=6152 loops=1)\n Sort Key: anomaly_id, stat_type_id\n -> Seq Scan on stats (cost=0.00..248096.42 rows=2441254 width=8) (actual time=198299.685..198551.460 rows=6152 loops=1)\n Filter: (stats__ends_at(\"at\", resolution, \"values\") > '2004-08-30 16:21:09+12'::timestamp with time zone)\n Total runtime: 198641.649 ms\n (5 rows)\n\n\n ntais=# EXPLAIN ANALYZE\n SELECT anomaly_id, stat_type_id\n FROM detect.stats\n WHERE detect.stats__ends_at(at, resolution, values) > '2004-08-30\n 16:21:09+12'::timestamptz\n ORDER BY anomaly_id, stat_type_id\n ;\n\n QUERY PLAN \n --------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=10166043.26..10172146.40 rows=2441254 width=8) (actual time=44.710..46.661 rows=6934 loops=1)\n Sort Key: anomaly_id, stat_type_id\n -> Index Scan using stats__ends_at on stats (cost=0.00..9811666.09 rows=2441254 width=8) (actual time=0.075..24.702 rows=6934 loops=1)\n Index Cond: (stats__ends_at(\"at\", resolution, \"values\") > '2004-08-30 16:21:09+12'::timestamp with time zone)\n Total runtime: 50.354 ms\n (5 rows)\n\n\n ntais=# SELECT count(*) FROM detect.stats;\n count \n ---------\n 7326151\n (1 row)\n\nIve done repeated ANALYZE's, both table-specific and database-wide, and get\nthe same result every time.\n\nFor us, a global 'enable_seqscan = off' in postgresql.conf is the way to go.\nYou occasionally see an odd plan while developing a 
query (eg: scanning an\nindex with no contraint to simply get ORDER BY). Usually thats a broken\nquery/index, and I simply fix it.\n\n\nGuy Thornley\n",
"msg_date": "Mon, 30 Aug 2004 20:19:59 +1200",
"msg_from": "Guy Thornley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "Guy Thornley <[email protected]> writes:\n> However, I'm seeing breakage of the form mentioned by the original poster\n> even when the query uses a _constant_ timestamp: [Postgres 7.4.3]\n\n> Indexes:\n> \"stats_pkey\" primary key, btree (anomaly_id, stat_type_id, \"at\")\n> \"stats__ends_at\" btree (stats__ends_at(\"at\", resolution, \"values\"))\n\n> ntais=# EXPLAIN ANALYZE \n> SELECT anomaly_id, stat_type_id\n> FROM detect.stats\n> WHERE detect.stats__ends_at(at, resolution, values) > '2004-08-30 16:21:09+12'::timestamptz\n> ORDER BY anomaly_id, stat_type_id\n> ;\n\nHere I'm afraid you're just stuck until 8.0 comes out (or you're feeling\nbrave enough to use a beta). Releases before 8.0 do not maintain any\nstatistics about the contents of functional indexes, so the planner is\nflying blind here in any case, and you end up with the very same 1/3rd\ndefault assumption no matter what the right-hand side looks like.\nYou'll have to fall back to Plan A or Plan B to get this case to work\nin 7.4.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 2004 11:41:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index? "
},
{
"msg_contents": "\nGuy Thornley <[email protected]> writes:\n\n> \"stats__ends_at\" btree (stats__ends_at(\"at\", resolution, \"values\"))\n\nPostgres 7.4 doesn't have any stats on functional indexes. So it's back to\njust guessing at the selectivity of this. 8.0 does gather stats for functional\nindexes so it should be better off.\n\n-- \ngreg\n\n",
"msg_date": "30 Aug 2004 11:41:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
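For anyone stuck on this limitation, a minimal sketch of what changes on 8.0 or later, reusing the expression-index definition from the post above (this is illustrative only, not specific advice for the poster's data): ANALYZE then collects statistics on the indexed expression itself, so the row estimate for the > comparison is no longer the default guess.

    -- on 8.0+, ANALYZE also gathers statistics for expression indexes
    CREATE INDEX stats__ends_at
        ON detect.stats (detect.stats__ends_at("at", resolution, "values"));
    ANALYZE detect.stats;
    -- the planner can then estimate the > comparison from real statistics
    EXPLAIN ANALYZE
    SELECT anomaly_id, stat_type_id
      FROM detect.stats
     WHERE detect.stats__ends_at("at", resolution, "values")
           > '2004-08-30 16:21:09+12'::timestamptz;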
{
"msg_contents": "On Sun, Aug 29, 2004 at 06:03:43PM -0400, Tom Lane wrote:\n> The expression \"now() - something\" is not a constant, so the planner\n> is faced with \"timestampfield > unknownvalue\". Its default assumption\n> about the number of rows that will match is much too high to make an\n> indexscan look profitable (from memory, I think it guesses that about\n> a third of the table will match...).\n\nOut of curiosity, does the subselect query I presented earlier in the thread\ncount as \"a constant\"? It gives the correct query plan, but this could of\ncourse just be correct by accident...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 30 Aug 2004 18:02:28 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Sun, Aug 29, 2004 at 06:03:43PM -0400, Tom Lane wrote:\n>> The expression \"now() - something\" is not a constant, so the planner\n>> is faced with \"timestampfield > unknownvalue\".\n\n> Out of curiosity, does the subselect query I presented earlier in the thread\n> count as \"a constant\"? It gives the correct query plan, but this could of\n> course just be correct by accident...\n\nThat was on 7.2, wasn't it? I don't remember any longer exactly how 7.2\ndoes this stuff, but it's different from 7.3 and later (and certainly\nnot any more \"right\").\n\nYou did at one time need to hide now() in a subselect to get the planner\nto consider an indexscan at all --- that was before we made the\ndistinction between immutable and stable functions, and so now() had\nto be treated as unsafe to index against (just as random() still is).\nI think 7.2 behaved that way but I'm not totally sure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 2004 12:47:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index? "
},
{
"msg_contents": "\n\tMost likely your table has a SERIAL PRIMARY KEY in it, in this case, do \nthe following :\n\n \tmy_limit = select primary_key_field from billing where timestamp > \n(now()-60)::timestamp ORDER BY timestamp ASC LIMIT 1;\n\n\tthen\n\n\tSELECT count(*) FROM billing WHERE primary_key_field>=my_limit;\n\n\tI don't know if it'll work better, but you can try.\n\n\tWhen you insert records in the table, they are appended at the end, so \nthis type of recent records query only requires reading the tail of the \ntable. It should be fast if planned correctly.\n\n> Strangely enough, I don't find that result surprising.\n>\n> if the vast bulk of the data is in the past and now()-60 represents a \n> very small slice of the data\n> we might expect that using an index is optimal, but there could be many \n> reasons why it doesn't get\n> used.\n>\n> AFAIK postgres doesn't peek at values used in a query when optimizing so \n> any query with a \">\" type\n> condition is gonna have a seq scan as the plan since the best guess is \n> that you are gonna match\n> 50% of the table. That's one possible explanation.\n>\n> Another is that if the condition data types don't match then an indes \n> won't be used you could try:\n>\n> select count(*) from billing where timestamp > (now()-60)::timestamp\n>\n> Might make a difference, I dunno, it's a case of testing amd seing what \n> happens.\n>\n> You could try lowering the random page cost, it might help, but I don't \n> like your chances.\n>\n> If your problem is that you want to access the most recent data from a \n> large table with fast\n> response, then you could consider:\n>\n> 1. a \"recent\" index. If the data is within the \"recent\" time from set a \n> flag to true, other wise\n> null. Reset the flags periodically. Nulls aren't indexed so the \n> selectivity of such an index is\n> much higher. Can work wonders.\n>\n> 2, duplicate recent data in another table that is purged when data \n> passes the age limit. This is\n> basic archiving.\n>\n> Something like that. Hopefully someone with more knowlege of the \n> optimaizer will have a brighter\n> suggestion for you.\n>\n> What version are you using by the way?\n> Regards\n> Mr Pink\n> --- Jack Kerkhof <[email protected]> wrote:\n>\n>> The query:\n>> select count(*) from billing where timestamp > now()-60\n>>\n>> should obviously use the index\n>>\n>> CREATE INDEX billing_timestamp_idx ON billing USING btree \n>> (\"timestamp\"\n>> timestamp_ops);\n>>\n>> on a table with 1400000 rows.\n>>\n>> But it uses a Seq Scan. If I set enable_seqscan=no, it indicates a \n>> queryplan\n>> could not be calculated.\n>>\n>> Why does this simple query not use the timestamp index, and how can I \n>> get it\n>> to?\n>>\n>> Thanks, Jack\n>>\n>> Jack Kerkhof\n>> Research & Development\n>> [email protected]\n>> www.guest-tek.com\n>> 1.866.509.1010 3480\n>>\n>> --------------------------------------------------------------------------\n>>\n>> Guest-Tek is a leading provider of broadband technology solutions \n>> for\n>> the hospitality industry. Guest-Tek's GlobalSuite�Ehigh-speed Internet\n>> solution enables hotels to offer their guests the convenience of wired\n>> and/or wireless broadband Internet access from guest rooms, meeting \n>> rooms\n>> and public areas.\n>>\n>>\n>>\n>\n>\n>\n> \t\t\n> __________________________________\n> Do you Yahoo!?\n> Take Yahoo! Mail with you! 
Get it on your mobile phone.\n> http://mobile.yahoo.com/maildemo\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n",
"msg_date": "Mon, 30 Aug 2004 21:17:06 +0200",
"msg_from": "=?utf-8?Q?Pierre-Fr=C3=A9d=C3=A9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
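Spelled out as plain SQL, a sketch of the suggestion above against the billing table from the original post; the primary key column name (billing_id) is invented here, and whether the inner LIMIT 1 probe actually gets an index scan still depends on the planner's estimates, as discussed earlier in the thread:

    -- count only the tail of the table, using the primary key
    SELECT count(*) FROM billing
     WHERE billing_id >= (
           SELECT billing_id FROM billing
            WHERE "timestamp" > now() - interval '60 seconds'
            ORDER BY "timestamp" ASC LIMIT 1);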
{
"msg_contents": ">> Also, count(*) is likely to always generate a seq scan due to the way\n>> aggregates are implemented currently in pgsql. you might want to try:\n\n\n\tBy the way, in an ideal world, count(*) should only read the index on the \ntimetamp column, not the rows. I guess this is not the case. Would this be \nan useful optimization ?\n",
"msg_date": "Mon, 30 Aug 2004 21:21:26 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "\n\tAnother primary key trick :\n\n\tIf you insert records with a serial primary key, and rarely delete them \nor update the timestamp, you can use the primary key to compute an \napproximate number of rows.\n\n\ta := SELECT pkey FROM table WHERE timestamp() > threshold ORDER BY \ntimestamp ASC LIMIT 1;\n\tb := SELECT pkey FROM table WHERE ORDER BY pkey DESC LIMIT 1;\n\n\t(b-a) is an approximate count.\n\n\tPerformance is great because you only fetch two rows. Index scan is \nguaranteed (LIMIT 1). On the downside, you get an approximation, and this \nonly works for tables where timestamp is a date of INSERT, timestamp \nworrelated wiht pkey) not when timestamp is a date of UPDATE (uncorrelated \nwith pkey).\n",
"msg_date": "Mon, 30 Aug 2004 21:32:28 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index? "
},
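Written as a single statement, a sketch of the approximation (again with an invented serial key name, billing_id, and assuming insert-only data so that key order follows insertion order):

    -- two single-row probes; the difference approximates the row count
    SELECT (SELECT billing_id FROM billing
             ORDER BY billing_id DESC LIMIT 1)
         - (SELECT billing_id FROM billing
             WHERE "timestamp" > now() - interval '60 seconds'
             ORDER BY "timestamp" ASC LIMIT 1) AS approx_count;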
{
"msg_contents": "On Mon, Aug 30, 2004 at 21:21:26 +0200,\n Pierre-Fr�d�ric Caillaud <[email protected]> wrote:\n> >>Also, count(*) is likely to always generate a seq scan due to the way\n> >>aggregates are implemented currently in pgsql. you might want to try:\n> \n> \n> \tBy the way, in an ideal world, count(*) should only read the index \n> \ton the timetamp column, not the rows. I guess this is not the case. Would \n> this be an useful optimization ?\n\nIt's in the archives. The short answer is that no, postgres has to check\nthe heap to check tuple visibility to the current transaction.\n",
"msg_date": "Mon, 30 Aug 2004 14:40:50 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "\n\n[I'm actually responding to the previous post from Tom Lane, but I've deleted\nit and the archives seem to be down again.]\n\n\nThe assumption being made is that the first provided result is representative\nof all future results. I don't see any reason that making this assumption of\nall stable functions should be less scary than making the assumption about\nuser provided parameters.\n\nHowever I have the complementary reaction. I find peeking at the first\nbind parameter to be scary as hell. Functions seem slightly less scary.\n\nOn Oracle Peeking at bind parameters is a feature explicitly intended for DSS\ndata warehouse type systems. The use of placeholders there was purely for\nsecurity and programming ease, not efficiency, since the queries are only\nplanned executed a small number of times per plan. These are systems that\nsuffered enormously without the parameter values. They often involved full\ntable scans or bitmap index scans and without the statistics produced awful\nplans.\n\nFor OLTP systems peeking at placeholders is more a danger than a benefit. The\nquery will be executed thousands of times and if it's planned based on a\nsingle unusual value initially the entire system could fail.\n\nConsider the following scenario which isn't farfetched at all. In fact I think\nit well describes my current project:\n\nI have a table with a few million records. 99% of the time users are working\nwith only a few hundred records at most. There's an index on the column\nthey're keying off of. 1% of the key values have an unusually large number of\nrecords.\n\nWithout peeking at placeholders the system should see that virtually all the\nkey values are well under the threshold for an index scan to be best. So it\nalways uses an index scan. 1% of the time it takes longer than that it would\nhave with a sequential scan, but only by a small factor. (On the whole we're\nprobably still better off avoiding the cache pollution anyways.)\n\nWith peeking at placeholders 99% of the backends would perform the same way.\nHowever 1 backend in 100 sees one of these unusual values for its first query.\nThis backend will use a sequential scan for *every* request. Executing a\nsequential table scan of this big table once a second this backend will drive\nthe entire system into the ground.\n\nThis means every time I start the system up I stand a small but significant\nchance of it just completely failing to perform properly. Worse, apache is\ndesigned to periodically start new processes, so at any given time the system\ncould just randomly fall over and die.\n\nI would rather incur a 10% penalty on every query than have a 1% chance of it\nkeeling over and dieing. Given this I would when I upgrade to 8.0 have to\nensure that my application driver is either not using placeholders at all (at\nthe protocol level -- I always prefer them at the api level) or ensure that\npostgres is *not* peeking at the value.\n\nI like the feature but I just want to be sure that it's optional.\n\n-- \ngreg\n\n",
"msg_date": "30 Aug 2004 16:18:39 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> However I have the complementary reaction. I find peeking at the first\n> bind parameter to be scary as hell. Functions seem slightly less scary.\n\nFWIW, we only do it in the context of unnamed parameterized queries.\nAs the protocol docs say, those are optimized on the assumption that\nthey will be executed only once. It seems entirely legitimate to me \nto use the parameter values in such a case.\n\nWe might in future get braver about using sample parameter values,\nbut 8.0 is conservative about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 2004 17:00:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index? "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > However I have the complementary reaction. I find peeking at the first\n> > bind parameter to be scary as hell. Functions seem slightly less scary.\n> \n> FWIW, we only do it in the context of unnamed parameterized queries.\n\nI knew that. That's why I hadn't been jumping up and down screaming. I was\nwatching though to insist on an option to disable it if it became more\nwidespread.\n\n> As the protocol docs say, those are optimized on the assumption that\n> they will be executed only once. It seems entirely legitimate to me \n> to use the parameter values in such a case.\n\nSure. It's a great feature to have; it means people can be more aggressive\nabout using placeholders for other reasons without worrying about performance\nimpacts.\n\n> We might in future get braver about using sample parameter values,\n> but 8.0 is conservative about it.\n\nIf they're used for named parameters I would strongly recommend guc variable\nto control the default on a server-wide basis. It could be a variable that\nindividual sessions could override since there's no security or resource\nimplications. It's purely a protocol interface issue.\n\nFor that matter, would it be possible for the default selectivity estimates to\nbe a guc variable? It's something that the DBA -- or even programmer on a\nper-session basis -- might be able to provide a better value for his\napplications than any hard coded default.\n\nOr perhaps it would be one valid use of hints to provide selectivity estimates\nfor blind placeholders. It would be nice to be able to say for example:\n\n select * from foo where col > $0 /*+ 5% */ AND col2 > $1 /*+ 10% */\n\nWould there be any hope of convincing you that this is a justifiable use of\nhints; providing information that the optimizer has absolutely no possibility\nof ever being able to calculate on its own?\n\n-- \ngreg\n\n",
"msg_date": "31 Aug 2004 02:00:22 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
},
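For anyone wanting to see which side of this trade-off they are on, the difference is easy to reproduce at the SQL level: a named prepared statement is planned without knowing the parameter value, while a one-shot query with a literal is planned with it. A sketch against the billing table from the original post (the statement name and the literal value are just examples):

    PREPARE recent_count(timestamptz) AS
        SELECT count(*) FROM billing WHERE "timestamp" > $1;
    -- generic plan: the planner falls back to its default selectivity guess
    EXPLAIN ANALYZE EXECUTE recent_count(now() - interval '60 seconds');
    -- literal: the planner can use the column statistics directly
    EXPLAIN ANALYZE
    SELECT count(*) FROM billing WHERE "timestamp" > '2004-08-29 18:00:00+12';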
{
"msg_contents": "Hi Greg, Tom, etal\n\nIt's true that oracle only peeks during a hard parse, and this can have good or bad results\ndepending on the situation. Basically, the first value used in that query will determine the plan\nuntil that query is bumped from the sql cache or the server is restarted. As far as I know, there\nis no option to disable that feature in Oracle, I don't know about postgres. \n\nOverall, I think it's a good feature because it helps us in the goal of reducing hardparsing (that\nwas it's real purpose in oracle). The trick as with all good features is to use it cleverly. For\nexample, you could run scripts on server startup that run such queries with optimal values before\nany one gets back on. If your application has optimal use of bind variables allowing re-use of\nquery plan, and the sql cache has enough memory then the query plans you created at server startup\ncould be expected to be current for the life of that instance.\n\nI write all this from my knowlegdge of Oracle, but I can't be sure how it applies to postgres.\nCome to think about it, I don't think I've seen a good discussion of plan caching, hard parsing\nand such like specifically related to pg. I'd really like to know more about how pg treats that\nstuff.\n\nregards\nMr Pink\n\n--- Greg Stark <[email protected]> wrote:\n\n> \n> \n> [I'm actually responding to the previous post from Tom Lane, but I've deleted\n> it and the archives seem to be down again.]\n> \n> \n> The assumption being made is that the first provided result is representative\n> of all future results. I don't see any reason that making this assumption of\n> all stable functions should be less scary than making the assumption about\n> user provided parameters.\n> \n> However I have the complementary reaction. I find peeking at the first\n> bind parameter to be scary as hell. Functions seem slightly less scary.\n> \n> On Oracle Peeking at bind parameters is a feature explicitly intended for DSS\n> data warehouse type systems. The use of placeholders there was purely for\n> security and programming ease, not efficiency, since the queries are only\n> planned executed a small number of times per plan. These are systems that\n> suffered enormously without the parameter values. They often involved full\n> table scans or bitmap index scans and without the statistics produced awful\n> plans.\n> \n> For OLTP systems peeking at placeholders is more a danger than a benefit. The\n> query will be executed thousands of times and if it's planned based on a\n> single unusual value initially the entire system could fail.\n> \n> Consider the following scenario which isn't farfetched at all. In fact I think\n> it well describes my current project:\n> \n> I have a table with a few million records. 99% of the time users are working\n> with only a few hundred records at most. There's an index on the column\n> they're keying off of. 1% of the key values have an unusually large number of\n> records.\n> \n> Without peeking at placeholders the system should see that virtually all the\n> key values are well under the threshold for an index scan to be best. So it\n> always uses an index scan. 1% of the time it takes longer than that it would\n> have with a sequential scan, but only by a small factor. 
(On the whole we're\n> probably still better off avoiding the cache pollution anyways.)\n> \n> With peeking at placeholders 99% of the backends would perform the same way.\n> However 1 backend in 100 sees one of these unusual values for its first query.\n> This backend will use a sequential scan for *every* request. Executing a\n> sequential table scan of this big table once a second this backend will drive\n> the entire system into the ground.\n> \n> This means every time I start the system up I stand a small but significant\n> chance of it just completely failing to perform properly. Worse, apache is\n> designed to periodically start new processes, so at any given time the system\n> could just randomly fall over and die.\n> \n> I would rather incur a 10% penalty on every query than have a 1% chance of it\n> keeling over and dieing. Given this I would when I upgrade to 8.0 have to\n> ensure that my application driver is either not using placeholders at all (at\n> the protocol level -- I always prefer them at the api level) or ensure that\n> postgres is *not* peeking at the value.\n> \n> I like the feature but I just want to be sure that it's optional.\n> \n> -- \n> greg\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - 50x more storage than other providers!\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Wed, 1 Sep 2004 05:50:02 -0700 (PDT)",
"msg_from": "Mr Pink <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does a simple query not use an obvious index?"
}
] |
[
{
"msg_contents": "Hi everybody!\n\nHere is my queries:\n\n1. explain SELECT * FROM messageinfo WHERE user_id = CAST( 20000 AS BIGINT ) and msgstatus = CAST(\n0 AS smallint );\n\n2. explain SELECT * FROM messageinfo WHERE messageinfo.user_id = 20000::int8 and msgstatus =\n0::smallint;\n\nIn both cases Explain command shows:\n1. Sequential search and very high cost if set enable_seqscan to on;\nSeq scan on messageinfo ( cost=0.00..24371.30, rows =36802 )\n\n2. Index scan but even bigger cost if set enable_seqscan to off;\nIndex ���messagesStatus��� on messageinfo ( Cost=0.00..27220.72, rows=36802 )\n\nmessageinfo table has 200 records which meet this criteria and 662420 in total:\n\nCREATE TABLE messageinfo\n(\n user_id int8 NOT NULL,\n msgstatus int2 NOT NULL DEFAULT (0)::smallint,\n receivedtime timestamp NOT NULL DEFAULT now(),\n ���\n msgread bool DEFAULT false,\n ���\n CONSTRAINT \"$1\" FOREIGN KEY (user_id) REFERENCES users (id) ON UPDATE CASCADE ON DELETE CASCADE,\n ) \nWITH OIDS;\n\nCREATE INDEX msgstatus\n ON messageinfo\n USING btree\n (user_id, msgstatus);\n\nCREATE INDEX \"messagesStatus\"\n ON messageinfo\n USING btree\n (msgstatus);\n\nCREATE INDEX msgread\n ON messageinfo\n USING btree\n (user_id, msgread);\n\nCREATE INDEX \"receivedTime\"\n ON messageinfo\n USING btree\n (receivedtime);\n\n\nMY QUESTIONS ARE:\n\n1.\tShould I afraid of high cost indexes? Or query will still be very efficient?\n\n2.\tPostgres does not use the index I need. For my data sets it���s always msgstatus index is\nnarrowest compare with ���messagesStatus��� one. Is any way to ���enforce��� to use a particular index?\nWhat���s the logic when Postgres chooses one index compare with the other.\n\n3.\tI can change db structure to utilize Postgres specifics if you can tell them to me.\n\n4.\tAlso, originally I had ���messagesStatus��� index having 2 components ( ���msgstatus���, ���user_id��� ).\nBut query SELECT * FROM messageinfo WHERE msgstatus = 0 did not utilize indexes in this case. It\nonly worked if both index components are in WHERE part. So I have to remove 2-nd component\n���user_id��� from messagesStatus index even I wanted it. Is any way that where clause has only 1-st\ncomponent but index is utilized? \n\nIgor Artimenko\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail Address AutoComplete - You start. We finish.\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Fri, 27 Aug 2004 12:29:11 -0700 (PDT)",
"msg_from": "Artimenko Igor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why those queries do not utilize indexes?"
},
{
"msg_contents": "On Fri, 27 Aug 2004, Artimenko Igor wrote:\n\n> 1. Sequential search and very high cost if set enable_seqscan to on;\n> Seq scan on messageinfo ( cost=0.00..24371.30, rows =36802 )\n> \n> 2. Index scan but even bigger cost if set enable_seqscan to off;\n> Index �messagesStatus� on messageinfo ( Cost=0.00..27220.72, rows=36802 )\n\nSo pg thinks that a sequential scan will be a little bit faster (The cost \nis a little bit smaller). If you compare the actual runtimes maybe you \nwill see that pg was right. In this case the cost is almost the same so \nthe runtime is probably almost the same.\n\nWhen you have more data pg will start to use the index since then it will \nbe faster to use an index compared to a seq. scan.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Fri, 27 Aug 2004 21:49:12 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why those queries do not utilize indexes?"
},
{
"msg_contents": "I could force Postgres to use the best index by removing condition \"msgstatus = CAST( 0 AS\nsmallint );\" from WHERE clause & set enable_seqscan to off;\nTotal runtime in this case dropped from 1883 ms ( sequential reads ) to 1.598 ms ( best index ).\n\nBut unfortunatelly It does not resolve my problem. I can not remove above condition. I need to\nfind a way to use whole condition \"WHERE user_id = CAST( 20000 AS BIGINT ) and msgstatus = CAST( 0\nAS smallint );\" and still utilyze index. \n\nYes you are right. Using \"messagesStatus\" index is even worse for my data set then sequential\nscan.\n\nIgor Artimenko\n\n--- Dennis Bjorklund <[email protected]> wrote:\n\n> On Fri, 27 Aug 2004, Artimenko Igor wrote:\n> \n> > 1. Sequential search and very high cost if set enable_seqscan to on;\n> > Seq scan on messageinfo ( cost=0.00..24371.30, rows =36802 )\n> > \n> > 2. Index scan but even bigger cost if set enable_seqscan to off;\n> > Index ���messagesStatus��� on messageinfo ( Cost=0.00..27220.72, rows=36802 )\n> \n> So pg thinks that a sequential scan will be a little bit faster (The cost \n> is a little bit smaller). If you compare the actual runtimes maybe you \n> will see that pg was right. In this case the cost is almost the same so \n> the runtime is probably almost the same.\n> \n> When you have more data pg will start to use the index since then it will \n> be faster to use an index compared to a seq. scan.\n> \n> -- \n> /Dennis Bj���rklund\n> \n> \n\n\n\n\t\t\n_______________________________\nDo you Yahoo!?\nWin 1 of 4,000 free domain names from Yahoo! Enter now.\nhttp://promotions.yahoo.com/goldrush\n",
"msg_date": "Fri, 27 Aug 2004 14:08:58 -0700 (PDT)",
"msg_from": "Artimenko Igor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why those queries do not utilize indexes?"
},
{
"msg_contents": "First things first: try vacuum full analyze on all the tables involved.\n\n> 1.\tShould I afraid of high cost indexes? Or query will still be very efficient?\n\nNot necessarily. However, EXPLAIN output is pretty much useless for us \nfor helping you. You need to post EXPLAIN ANALYZE output.\n\nThen, you need to use explain analyze to check the speed difference \nbetween the index and seq scan versions. Is the seq scan actually slower?\n\n> 2.\tPostgres does not use the index I need. For my data sets it�s always msgstatus index is\n> narrowest compare with �messagesStatus� one. Is any way to �enforce� to use a particular index?\n> What�s the logic when Postgres chooses one index compare with the other.\n\nIt's complicated, but it's based on teh statistics in pg_statistic that \nthe vacuum analyze command gathers.\n\n> 3.\tI can change db structure to utilize Postgres specifics if you can tell them to me.\n\nI avoid using int8 and int2 in the first place :) In PostgreSQL 8.0, \nthey will be less troublesome, but I've never seen a need for them!\n\n> 4.\tAlso, originally I had �messagesStatus� index having 2 components ( �msgstatus�, �user_id� ).\n> But query SELECT * FROM messageinfo WHERE msgstatus = 0 did not utilize indexes in this case. It\n> only worked if both index components are in WHERE part. So I have to remove 2-nd component\n> �user_id� from messagesStatus index even I wanted it. Is any way that where clause has only 1-st\n> component but index is utilized? \n\nSo long as your where clause matches a subset of the columns in the \nindex in left to right order, the index can be used. For example, if \nyour index is over (a, b, c) then select * where a=1 and b=2; can use \nthe index.\n\nChris\n\n",
"msg_date": "Sat, 28 Aug 2004 16:50:26 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why those queries do not utilize indexes?"
}
] |
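To make the left-to-right rule concrete with the tables from this thread (a sketch, not tested against the poster's data):

    -- the msgstatus index is on (user_id, msgstatus)
    -- can use it: the clauses cover a left prefix of the index
    SELECT * FROM messageinfo WHERE user_id = 20000::int8;
    SELECT * FROM messageinfo WHERE user_id = 20000::int8 AND msgstatus = 0::smallint;
    -- cannot use it: msgstatus alone is not a left prefix
    SELECT * FROM messageinfo WHERE msgstatus = 0::smallint;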
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nBruce Momjian wrote:\n\n| Gaetano Mendola wrote:\n|\n|>Tom Lane wrote:\n|>\n|> > Bruce Momjian <[email protected]> writes:\n|> >\n|> >>Agreed. What I am wondering is with our system where every update gets\n|> >>a new row, how would this help us? I know we try to keep an update on\n|> >>the same row as the original, but is there any significant performance\n|> >>benefit to doing that which would offset the compaction advantage?\n|> >\n|> >\n|> > Because Oracle uses overwrite-in-place (undoing from an UNDO log on\n|> > transaction abort), while we always write a whole new row, it would take\n|> > much larger PCTFREE wastage to get a useful benefit in PG than it does\n|> > in Oracle. That wastage translates directly into increased I/O costs,\n|> > so I'm a bit dubious that we should assume there is a win to be had here\n|> > just because Oracle offers the feature.\n|>\n|>Mmmm. Consider this scenario:\n|>\n|>ctid datas\n|>(0,1) yyy-xxxxxxxxxxxxxxxxxxx\n|>(0,2) -------- EMPTY --------\n|>(0,3) -------- EMPTY --------\n|>(0,4) -------- EMPTY --------\n|>(0,5) -------- EMPTY --------\n|>(0,6) yyy-xxxxxxxxxxxxxxxxxxx\n|>(0,7) -------- EMPTY --------\n|>.... -------- EMPTY --------\n|>(0,11) yyy-xxxxxxxxxxxxxxxxxxx\n|>\n|>\n|>the row (0,2) --> (0,5) are space available for the (0,1) updates.\n|>This will help a table clustered ( for example ) to mantain his\n|>own correct cluster order.\n|\n|\n| Right. My point was that non-full fill is valuable for us only when\n| doing clustering, while for Oracle it is a win even in non-cluster cases\n| because of the way they update in place.\n\nDon't you think this will permit also to avoid extra disk seek and cache\ninvalidation? If you are updating the row (0,1) I think is less expensive\nput the new version in (0,2) instead of thousand line far from that point.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBL+kA7UpzwH2SGd4RAp6fAJ9rSs5xmTXsy4acUGcnCRTbEUCwrwCgo/o6\n0JPtziuf1E/EGLaqjbPMV44=\n=pIgX\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 28 Aug 2004 04:08:01 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Gaetano Mendola wrote:\n> | Right. My point was that non-full fill is valuable for us only when\n> | doing clustering, while for Oracle it is a win even in non-cluster cases\n> | because of the way they update in place.\n> \n> Don't you think this will permit also to avoid extra disk seek and cache\n> invalidation? If you are updating the row (0,1) I think is less expensive\n> put the new version in (0,2) instead of thousand line far from that point.\n\nIt would, but does that outweigh the decreased I/O by having things more\ndensely packed? I would think not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 Aug 2004 23:54:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "\nBruce Momjian <[email protected]> writes:\n\n> > Don't you think this will permit also to avoid extra disk seek and cache\n> > invalidation? If you are updating the row (0,1) I think is less expensive\n> > put the new version in (0,2) instead of thousand line far from that point.\n\nWell if the other buffer \"a thousand lines far from that point\" is already in\nram, then no, there's no penalty at the time for storing it there.\n\nHowever it destroys the clustering, which was the original point.\n\n> It would, but does that outweigh the decreased I/O by having things more\n> densely packed? I would think not.\n\nWell the dense packing is worth something. But so is the clustering. There's\ndefinitely a trade-off.\n\nI always found my largest tables are almost always insert-only tables anyways.\nSo in Oracle I would have pctused 100 pctfree 0 on them and get the\nperformance gain.\n\nThe tables that would benefit from this would be tables always accessed by\nindexes in index scans of more than one record. The better the clustering the\nfewer pages the index scan would have to read in. If the data took 10% more\nspace but the index scan only needs 1/4 as many buffers it could be a big net\nwin.\n\n-- \ngreg\n\n",
"msg_date": "28 Aug 2004 04:15:01 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
},
{
"msg_contents": "Greg Stark wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> \n> \n>>>Don't you think this will permit also to avoid extra disk seek and cache\n>>>invalidation? If you are updating the row (0,1) I think is less expensive\n>>>put the new version in (0,2) instead of thousand line far from that point.\n> \n> \n> Well if the other buffer \"a thousand lines far from that point\" is already in\n> ram, then no, there's no penalty at the time for storing it there.\n\nI was wandering about the cache invalidation, may be the ram is big enough but I\ndoubt about the cache, the recommendation in this case is to modify adjacent\nmemory address instead of jumping.\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Sat, 28 Aug 2004 13:39:31 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Equivalent praxis to CLUSTERED INDEX?"
}
] |
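For completeness, the closest thing PostgreSQL currently offers to the behaviour discussed in this thread is the CLUSTER command, which rewrites a table in index order but has to be re-run as new rows arrive. A minimal sketch, with table and index names invented purely for illustration:

    -- one-time physical reordering of the heap in index order
    CLUSTER orders_customer_idx ON orders;
    -- later runs can reuse the remembered index
    CLUSTER orders;
    ANALYZE orders;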
[
{
"msg_contents": "Hi all,\ndo you know any clean workaround at ill-planned queries inside a stored procedure?\nLet me explain with an example:\n\n\nempdb=# select count(*) from user_logs;\n count\n---------\n 5223837\n(1 row)\n\nempdb=# select count(*) from user_logs where id_user = 5024;\n count\n--------\n 239453\n(1 row)\n\nempdb=# explain analyze select login_time from user_logs where id_user = 5024 order by id_user_log desc limit 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..22.62 rows=1 width=12) (actual time=3.921..3.922 rows=1 loops=1)\n -> Index Scan Backward using user_logs_pkey on user_logs (cost=0.00..5355619.65 rows=236790 width=12) (actual time=3.918..3.918 rows=1 loops=1)\n Filter: (id_user = 5024)\n Total runtime: 3.963 ms\n(4 rows)\n\n\nsame select in a prepared query ( I guess the stored procedure use same plan ):\n\nempdb=# explain analyze execute test(5024);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=759.60..759.61 rows=1 width=12) (actual time=45065.755..45065.756 rows=1 loops=1)\n -> Sort (cost=759.60..760.78 rows=470 width=12) (actual time=45065.748..45065.748 rows=1 loops=1)\n Sort Key: id_user_log\n -> Index Scan using idx_user_user_logs on user_logs (cost=0.00..738.75 rows=470 width=12) (actual time=8.936..44268.087 rows=239453 loops=1)\n Index Cond: (id_user = $1)\n Total runtime: 45127.256 ms\n(6 rows)\n\n\nThere is a way to say: replan this query at execution time ?\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n",
"msg_date": "Sat, 28 Aug 2004 13:14:38 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "ill-planned queries inside a stored procedure"
}
] |
[
{
"msg_contents": "I use \"EXECUTE\" inside a stored procedure for just this purpose. This is not the same as PREPARE/EXECUTE, it lets you send an arbitrary string as SQL within the procedure. You have to write the query text on the fly in the procedure, which can be a little messy with quoting and escaping.\n\nGaetano Mendola <[email protected]> wrote ..\n> Hi all,\n> do you know any clean workaround at ill-planned queries inside a stored\n> procedure?\n",
"msg_date": "Sat, 28 Aug 2004 08:13:37 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: ill-planned queries inside a stored procedure"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\[email protected] wrote:\n\n| Gaetano Mendola <[email protected]> wrote ..\n|\n|>Hi all,\n|>do you know any clean workaround at ill-planned queries inside a stored\n|>procedure?\n| I use \"EXECUTE\" inside a stored procedure for just this purpose. This is\n| not the same as PREPARE/EXECUTE, it lets you send an arbitrary string as\n| SQL within the procedure. You have to write the query text on the fly in\n| the procedure, which can be a little messy with quoting and escaping.\n|\n\nYes I knew, I wrote \"clean workaround\" :-)\n\nI hate write in function piece of code like this:\n\n~ [...]\n\n~ my_stm := ''SELECT '' || my_operation || ''( '' || a_id_transaction;\n~ my_stm := my_stm || '', '' || a_id_contract;\n~ my_stm := my_stm || '', '' || quote_literal(a_date) || '') AS res'';\n\n~ FOR my_record IN EXECUTE my_stm LOOP\n~ IF my_record.res < 0 THEN\n~ RETURN my_record.res;\n~ END IF;\n\n~ EXIT;\n~ END LOOP;\n\n~ [...]\n\nnote also that useless loop that is needed to retrieve the value!\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBMLRE7UpzwH2SGd4RAv0TAJ9+IokZjaXIhgV5dOH86FCvzSnewQCgwqxD\nnuW9joHmPxOnlRWrvhsKaag=\n=Axb7\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 28 Aug 2004 18:35:18 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ill-planned queries inside a stored procedure"
}
] |
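Pulling the two threads together, a 7.4-style sketch of the dynamic-SQL workaround for the user_logs query from the original post; the function name and its argument/return types are assumptions, and later releases make this tidier (EXECUTE ... INTO removes the need for the loop):

    CREATE OR REPLACE FUNCTION last_login_time(integer) RETURNS timestamp AS '
    DECLARE
        rec record;
        result timestamp;
    BEGIN
        -- building the statement as a string forces a fresh plan on every call
        FOR rec IN EXECUTE ''SELECT login_time FROM user_logs ''
                        || ''WHERE id_user = '' || $1::text
                        || '' ORDER BY id_user_log DESC LIMIT 1'' LOOP
            result := rec.login_time;
            EXIT;
        END LOOP;
        RETURN result;
    END;
    ' LANGUAGE plpgsql;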
[
{
"msg_contents": "I have a problem with certain queries performance. Trouble is that\nwhile their execution plan is pretty good and mostly their execution\nis great as well, their FIRST execution time (that is after you mount\nthe database) is abysmal.\n\nI realize that it happens due to the loading of data from the HD to\nthe memory/swap and it wouldn't be too bad if I just could make the\ndata stay in the memory, sadly, after a few minutes the data is back\non the HD and running the query again results the same bad\nperformance.\n\nHere's a query for example, though as I said, this problem occurs in\ndifferent queries.\n\n---------------------------------------------------------------------------------------\n\n SELECT *\n FROM bv_bookgenres, bv_books\n WHERE bv_books.book_id = bv_bookgenres.book_id and genre_id = 987 \nORDER BY vote_avg limit 10\n\n---------------------------------------------------------------------------------------\n\n---------------------------------------------------------------------------------------\n\nQUERY PLAN\nLimit (cost=2601.16..2601.18 rows=10 width=193) (actual\ntime=4735.097..4735.107 rows=10 loops=1)\n -> Sort (cost=2601.16..2601.70 rows=219 width=193) (actual\ntime=4735.092..4735.095 rows=10 loops=1)\n Sort Key: bv_books.vote_avg\n -> Nested Loop (cost=0.00..2592.64 rows=219 width=193)\n(actual time=74.615..4719.147 rows=1877 loops=1)\n -> Index Scan using i_bookgenres_genre_id on\nbv_bookgenres (cost=0.00..1707.03 rows=218 width=4) (actual\ntime=74.540..2865.366 rows=1877 loops=1)\n Index Cond: (genre_id = 987)\n -> Index Scan using bv_books_pkey on bv_books \n(cost=0.00..4.05 rows=1 width=193) (actual time=0.968..0.971 rows=1\nloops=1877)\n Index Cond: (bv_books.book_id = \"outer\".book_id)\nTotal runtime: 4735.726 ms\n\n---------------------------------------------------------------------------------------\n\nIf I run the query again after it just finished running I would get\nthe following timing:\n\n---------------------------------------------------------------------------------------\n\nLimit (cost=3937.82..3937.84 rows=10 width=204)\n -> Sort (cost=3937.82..3938.38 rows=223 width=204)\n Sort Key: bv_books.vote_avg\n -> Nested Loop (cost=0.00..3929.12 rows=223 width=204)\n -> Index Scan using i_bookgenres_genre_id on\nbv_bookgenres (cost=0.00..1731.94 rows=222 width=8)\n Index Cond: (genre_id = 987)\n -> Index Scan using bv_books_pkey on bv_books \n(cost=0.00..9.88 rows=1 width=196)\n Index Cond: (bv_books.book_id = \"outer\".book_id)\n\n---------------------------------------------------------------------------------------\n\nBefore going on, I should say that I am running PostgreSQL on CoLinux\nunder Windows 2000. From what I read/tested, the CoLinux performance\non CoLinux are matching to the performance of VMWare. 
Yet, I'm still\nwondering if it is a side effect of my development setup or if some of\nmy settings are indeed wrong.\n\nWith that said, here is the information of the tables:\n\n---------------------------------------------------------------------------------------\n\nCREATE TABLE bv_books\n(\n book_id serial NOT NULL,\n book_name varchar(255) NOT NULL,\n series_id int4,\n series_index int2,\n annotation_desc_id int4,\n description_desc_id int4,\n book_picture varchar(255) NOT NULL,\n reviews_error int4 NOT NULL,\n vote_avg float4 NOT NULL,\n vote_count int4 NOT NULL,\n book_genre int4[],\n book_name_fulltext tsearch2.tsvector,\n book_name_fulltext2 tsearch2.tsvector,\n CONSTRAINT bv_books_pkey PRIMARY KEY (book_id),\n CONSTRAINT fk_books_annotation_desc_id FOREIGN KEY\n(annotation_desc_id) REFERENCES bv_descriptions (description_id) ON\nUPDATE RESTRICT ON DELETE SET NULL,\n CONSTRAINT fk_books_description_desc_id FOREIGN KEY\n(description_desc_id) REFERENCES bv_descriptions (description_id) ON\nUPDATE RESTRICT ON DELETE SET NULL,\n CONSTRAINT fk_books_series_id FOREIGN KEY (series_id) REFERENCES\nbv_series (series_id) ON UPDATE RESTRICT ON DELETE RESTRICT\n) \nWITH OIDS;\n\nCREATE TABLE bv_bookgenres\n(\n book_id int4 NOT NULL,\n genre_id int4 NOT NULL,\n CONSTRAINT bv_bookgenres_pkey PRIMARY KEY (book_id, genre_id),\n CONSTRAINT fk_bookgenres_book_id FOREIGN KEY (book_id) REFERENCES\nbv_books (book_id) ON UPDATE RESTRICT ON DELETE CASCADE,\n CONSTRAINT fk_bookgenres_genre_id FOREIGN KEY (genre_id) REFERENCES\nbv_genres (genre_id) ON UPDATE RESTRICT ON DELETE RESTRICT\n) \nWITH OIDS;\n\n---------------------------------------------------------------------------------------\n\nAs far as the data is concerned, there are around 170,000 rows in\nbv_books and 940,000 in bv_bookgenres. There are also btree index on\nall the relevant (to the query) fields.\n\nI can live up with the fact that the data has to be loaded the first\ntime it is accessed, but is it possible to make it stick longer in the\nmemory? Is it the fact that CoLinux gets only 128MB of RAM? Or one of\nmy settings should be fixed?\n\nThanks\n",
"msg_date": "Sat, 28 Aug 2004 20:41:51 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance hit on loading from HD"
},
{
"msg_contents": "Vitaly,\n\n> I have a problem with certain queries performance. Trouble is that\n> while their execution plan is pretty good and mostly their execution\n> is great as well, their FIRST execution time (that is after you mount\n> the database) is abysmal.\n\nThis is a well-known problem. The general approach to this is to run a \nscript to do select * queries against all important tables on system \nstart-up.\n\n> I realize that it happens due to the loading of data from the HD to\n> the memory/swap and it wouldn't be too bad if I just could make the\n> data stay in the memory, sadly, after a few minutes the data is back\n> on the HD and running the query again results the same bad\n> performance.\n\nThis could be for a variety of reasons. On a standard platform (which yours \nmost definitely is not), this would be due to database vacuuming, commits of \nlarge updates to your data, or another application using most of the system \nmemory.\n\n> Before going on, I should say that I am running PostgreSQL on CoLinux\n> under Windows 2000. From what I read/tested, the CoLinux performance\n> on CoLinux are matching to the performance of VMWare. Yet, I'm still\n> wondering if it is a side effect of my development setup or if some of\n> my settings are indeed wrong.\n\nProbably you will continue to get worse-than-normal performance from both. \nYou simply can't expect performance PostgreSQL running on an emulation \nenvironment. If you could, we wouldn't have bothered with a Windows port. \nSpeaking of which, have you started testing the Windows port? I'd be \ninterested in your comparison of it against running on CoLinux.\n\n> I can live up with the fact that the data has to be loaded the first\n> time it is accessed, but is it possible to make it stick longer in the\n> memory? Is it the fact that CoLinux gets only 128MB of RAM? Or one of\n> my settings should be fixed?\n\nWell, mostly it's that you should start testing 8.0, and the Windows port. \nNot only should running native be better, but 8.0 (thanks to the work of Jan \nWieck) is now able to take advantage of a large chunk of dedicated memory, \nwhich earlier versions were not. Also, \"lazy vacuum\" and the \"background \nwriter\", also features of 8.0 and Jan's work, should prevent PostgreSQL from \ncleaning out its own cache completely. You should test this, \n*particularly* on Windows where we could use some more performance testing.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 28 Aug 2004 14:34:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance hit on loading from HD"
}
] |
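A trivial rendering of the warm-up idea, using the two tables from the original post. It could be run from the machine's startup script via psql (the database name here is invented); count(*) is used instead of select * because it forces every heap page to be read without dragging the whole result set into the client:

    -- e.g.:  psql -d bookdb -f warmup.sql
    SELECT count(*) FROM bv_books;
    SELECT count(*) FROM bv_bookgenres;

Whether the pages then stay cached is a separate question; with only 128MB available to the guest, as in the original post, they may well be evicted again.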
[
{
"msg_contents": "Dear all:\n\nIm having a weird problem here. I have a table w/ ~180.000 rows and I\nwant to select those where c > 0 or d > 0 (there only a few of those on\nthe table)\nI indexed columns c and d (separately) but this query used the slow\nseqscan instead of the index scan:\n\nselect * from t where c<>0 or d<>0;\n\nAfter playing some time, I noticed that if I change the \"or\" for an\n\"and\", pg used the fast index scan (but the query w/ 'and' was not what\nI as looking for).\n\nThen, I thought I could do the following:\nCreating an index with the expression (c+d) and selecting the rows where\nc+d > 0:\nselect * from t where c + d <> 0;\n\nAgain, this used a seqscan. Asking in #postgresql in freenode, somebody\ntold me to try to disable seqscan (set enable_seqscan false) and\nsuprisingly, Pg started using the index scan and it was -fast-.\n\nNow: I've no idea why it chooses to use a seq scan instead of the index\nscan (yes, I've just vacuum analyzed the table before running the\nquery).\n\nSome more info:\nc and d are both bigint. I've tried the queries casting the constant (0)\nto bigint but nothing changed.\n\nIm using debian's pg 7.4.1-2.\n\n\nThanks in advance",
"msg_date": "Mon, 30 Aug 2004 14:46:37 -0300",
"msg_from": "Martin Sarsale <[email protected]>",
"msg_from_op": true,
"msg_subject": "seqscan instead of index scan"
},
{
"msg_contents": "On Mon, Aug 30, 2004 at 14:46:37 -0300,\n Martin Sarsale <[email protected]> wrote:\n> Dear all:\n> \n> Im having a weird problem here. I have a table w/ ~180.000 rows and I\n> want to select those where c > 0 or d > 0 (there only a few of those on\n> the table)\n> I indexed columns c and d (separately) but this query used the slow\n> seqscan instead of the index scan:\n\nPostgres doesn't 'or' bitmaps derived from two indexes. You might have more\nluck using a combined index.\n\n> \n> select * from t where c<>0 or d<>0;\n> \n> After playing some time, I noticed that if I change the \"or\" for an\n> \"and\", pg used the fast index scan (but the query w/ 'and' was not what\n> I as looking for).\n> \n> Then, I thought I could do the following:\n> Creating an index with the expression (c+d) and selecting the rows where\n> c+d > 0:\n> select * from t where c + d <> 0;\n> \n> Again, this used a seqscan. Asking in #postgresql in freenode, somebody\n> told me to try to disable seqscan (set enable_seqscan false) and\n> suprisingly, Pg started using the index scan and it was -fast-.\n> \n> Now: I've no idea why it chooses to use a seq scan instead of the index\n> scan (yes, I've just vacuum analyzed the table before running the\n> query).\n> \n> Some more info:\n> c and d are both bigint. I've tried the queries casting the constant (0)\n> to bigint but nothing changed.\n> \n> Im using debian's pg 7.4.1-2.\n> \n> \n> Thanks in advance\n> \n\n\n",
"msg_date": "Mon, 30 Aug 2004 13:02:27 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "On Mon, 2004-08-30 at 15:02, Bruno Wolff III wrote:\n> On Mon, Aug 30, 2004 at 14:46:37 -0300,\n\n> > Im having a weird problem here. I have a table w/ ~180.000 rows and I\n> > want to select those where c > 0 or d > 0 (there only a few of those on\n> > the table)\n> > I indexed columns c and d (separately) but this query used the slow\n> > seqscan instead of the index scan:\n> \n> Postgres doesn't 'or' bitmaps derived from two indexes. You might have more\n> luck using a combined index.\n\nWith combined index, you mean a multiple column index?\nFrom\nhttp://www.postgresql.org/docs/7.4/interactive/indexes-multicolumn.html\n \n\"Multicolumn indexes can only be used if the clauses involving the\nindexed columns are joined with AND. For instance,\n\nSELECT name FROM test2 WHERE major = constant OR minor = constant;\n\ncannot make use of the index test2_mm_idx defined above to look up both\ncolumns. (It can be used to look up only the major column, however.) \"\n\nBut I need something like:\n\nselect * from t where c<>0 or d<>0;",
"msg_date": "Mon, 30 Aug 2004 15:07:15 -0300",
"msg_from": "Martin Sarsale <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "On Mon, Aug 30, 2004 at 15:07:15 -0300,\n Martin Sarsale <[email protected]> wrote:\n> \n> With combined index, you mean a multiple column index?\n> From\n> http://www.postgresql.org/docs/7.4/interactive/indexes-multicolumn.html\n\nYou are right, a multicolumn index doesn't help for 'or'.\n",
"msg_date": "Mon, 30 Aug 2004 14:18:05 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "Martin Sarsale <[email protected]> writes:\n> I indexed columns c and d (separately) but this query used the slow\n> seqscan instead of the index scan:\n\n> select * from t where c<>0 or d<>0;\n\n> After playing some time, I noticed that if I change the \"or\" for an\n> \"and\", pg used the fast index scan (but the query w/ 'and' was not what\n> I as looking for).\n\nI don't think so. <> is not an indexable operator --- it appears\nnowhere in the index operator classes. It would help if you showed us\n*exactly* what you did instead of a not-very-accurate filtered version.\nI'm going to assume that you meant > ...\n\n> Now: I've no idea why it chooses to use a seq scan instead of the index\n> scan (yes, I've just vacuum analyzed the table before running the\n> query).\n\nBecause 7.4 doesn't have statistics about expression indexes, so it has\nno idea that there are only a few rows with c+d > 0.\n\nWhat I'd suggest is\n\n\tselect * from t where c>0 union select * from t where d>0\n\nwith separate indexes on c and d.\n\nAnother possibility is a partial index on exactly the condition you\nwant:\n\n\tcreate index nonzero on t(c) where c>0 or d>0;\n\nalthough I'm not certain if 7.4 has enough stats to recognize this as a win.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 2004 16:48:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan "
},
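Spelled out (with > as assumed above, and with invented index names), the two suggestions look like this; as noted, it is not certain that 7.4's statistics will make the partial-index plan look cheap enough to be chosen:

    -- variant 1: one ordinary index per column, one indexable arm per branch
    CREATE INDEX t_c_idx ON t (c);
    CREATE INDEX t_d_idx ON t (d);
    SELECT * FROM t WHERE c > 0
    UNION
    SELECT * FROM t WHERE d > 0;

    -- variant 2: a partial index; the query's WHERE clause must imply the predicate
    CREATE INDEX t_nonzero ON t (c) WHERE c > 0 OR d > 0;
    SELECT * FROM t WHERE c > 0 OR d > 0;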
{
"msg_contents": "On Mon, 30 Aug 2004, Martin Sarsale wrote:\n> \"Multicolumn indexes can only be used if the clauses involving the\n> indexed columns are joined with AND. For instance,\n>\n> SELECT name FROM test2 WHERE major = constant OR minor = constant;\n\nYou can use DeMorgan's Theorem to transform an OR clause to an AND clause.\n\nIn general:\n\tA OR B <=> NOT ((NOT A) AND (NOT B))\n\nSo:\n\n> But I need something like:\n>\n> select * from t where c<>0 or d<>0;\n\n\tselect * from t where not (c=0 and d=0);\n\nI haven't actually tried to see if postgresql would do anything\ninteresting after such a transformation.\n\n\n\n",
"msg_date": "Wed, 1 Sep 2004 11:47:32 -0400 (EDT)",
"msg_from": "Chester Kustarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
}
] |
[
{
"msg_contents": "> Im having a weird problem here. I have a table w/ ~180.000 rows and I\n> want to select those where c > 0 or d > 0 (there only a few of those\non\n> the table)\n> I indexed columns c and d (separately) but this query used the slow\n> seqscan instead of the index scan:\n\ncreate function is_somethingable (ctype, dtype) returns boolean as\n'\n\treturn case when $1 > 0 and $2 > 0 then true else false end;\n' language sql immutable;\n\ncreate index t_idx on t(is_somethingable(c,d));\n\nanalyze t;\n\nselect * from t where is_somethingable(t.c, t.d) = true;\n\nMerlin\n\n",
"msg_date": "Mon, 30 Aug 2004 14:06:48 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "On Mon, 2004-08-30 at 15:06, Merlin Moncure wrote:\n> create function is_somethingable (ctype, dtype) returns boolean as\n\nThanks, but I would prefer a simpler solution.\n\nI would like to know why this uses a seqscan instead of an index scan:\n\ncreate index t_idx on t((c+d));\nselect * from t where c+d > 0;",
"msg_date": "Mon, 30 Aug 2004 15:17:29 -0300",
"msg_from": "Martin Sarsale <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "On Mon, 30 Aug 2004, Martin Sarsale wrote:\n\n> On Mon, 2004-08-30 at 15:06, Merlin Moncure wrote:\n> > create function is_somethingable (ctype, dtype) returns boolean as\n>\n> Thanks, but I would prefer a simpler solution.\n>\n> I would like to know why this uses a seqscan instead of an index scan:\n>\n> create index t_idx on t((c+d));\n> select * from t where c+d > 0;\n\nAs a geuss, since 7.4 and earlier have no statistics on the distribution\nof c+d it has to guess about how likely that is to be true and is probably\noverestimating. 8.0beta might handle this better.\n",
"msg_date": "Mon, 30 Aug 2004 13:04:00 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "\nAnother option here is to use a partial index. You can index on some other\ncolumn -- perhaps the column you want the results ordered by where the where\nclause is true.\n\nSomething like:\n\ncreate index t_idx on t (name) where c>0 and d>0;\n\nthen any select with a matching where clause can use the index:\n\nselect * from t where c>0 and d>0 order by name\n\nCould scan the index and not even have to sort on name.\n\n-- \ngreg\n\n",
"msg_date": "30 Aug 2004 16:36:30 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
}
] |
[
{
"msg_contents": "> On Mon, 2004-08-30 at 15:06, Merlin Moncure wrote:\n> > create function is_somethingable (ctype, dtype) returns boolean as\n> \n> Thanks, but I would prefer a simpler solution.\n> \n> I would like to know why this uses a seqscan instead of an index scan:\n> \n> create index t_idx on t((c+d));\n> select * from t where c+d > 0;\n> \n\nhmmm, please define simple. \n\nUsing a functional index you can define an index around the way you\naccess the data. There is no faster or better way to do it...this is a\nmathematical truth, not a problem with the planner. Why not use the\nright tool for the job? A boolean index is super-efficient both in disk\nspace and cache utilization.\n\nMultiple column indexes are useless for 'or' combinations! (however they\nare a huge win for 'and' combinations because you don't have to merge).\n\nWith an 'or' expression, the planner must use one index or the other, or\nuse both and merge the results. When and what the planner uses is an\neducated guess based on statistics.\n\nAlso, your function can be changed...why fill all your queries with\nBoolean cruft when you can abstract it into the database and reap the\nspeed savings at the same time? I think it's time to rethink the\nconcept of 'simple'.\n\nConstructive criticism all,\nMerlin\n\n\n",
"msg_date": "Mon, 30 Aug 2004 14:29:24 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "\n>> create index t_idx on t((c+d));\n>> select * from t where c+d > 0;\n\n\tWhy not :\n\n\tselect ((select * from t where c<>0::bigint) UNION (select * from t where \nd<>0::bigint))\n\tgroup by whatever;\n\n\tor someting ?\n",
"msg_date": "Mon, 30 Aug 2004 21:39:17 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
},
{
"msg_contents": "> Using a functional index you can define an index around the way you\n> access the data. There is no faster or better way to do it...this is a\n> mathematical truth, not a problem with the planner. Why not use the\n> right tool for the job? A boolean index is super-efficient both in disk\n> space and cache utilization.\n\nThanks for your constructive criticism, you're absolutely right.\n\nI had to modify your \"return\" for a \"select\":\n\ncreate function rankeable (bigint, bigint) returns boolean as '\n select case when $1 > 0 or $2 > 0 then true else false end;'\nlanguage sql immutable;\n\nand it works great.",
"msg_date": "Tue, 31 Aug 2004 12:14:37 -0300",
"msg_from": "Martin Sarsale <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan instead of index scan"
}
] |
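A minimal sketch of how the immutable function above pairs with an expression index, reusing the names from this thread (table t, columns c and d, function rankeable); the query has to test the same expression (the = true form is the safest match for the planner), and the index only pays off when few rows satisfy the condition:

  create index t_rankeable_idx on t (rankeable(c, d));
  analyze t;
  select * from t where rankeable(c, d) = true;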
[
{
"msg_contents": "hi,\nhas anyone compile Postgres with Intel compiler ?\nDoes it exist a substantial gain of performance ?\n\n\n-- \nJean-Max Reymond\nCKR Solutions\nhttp://www.ckr-solutions.com\n",
"msg_date": "Tue, 31 Aug 2004 17:27:41 +0200",
"msg_from": "Jean-Max Reymond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance with Intel Compiler"
}
] |
[
{
"msg_contents": "Hi all!\n\nI'm new here, so hello to everybody!\n\nI'm in a deep truble using postgesSQL 7.2.0 on a low-end pc with SUSE 8. I'm\nusing some databases from that pc through odbc (7.3.200). Until now i had no\nproblems with this solution, everithing worked fine. But today i wrote a\nsmall app, that converts/copies some data from a database to an other\ndatabase.\n\nDuring this work i wrote a simple query as follows:\nselect pers_driving_license from person where pers_id=23456\n\nThis should return a single varchar(20) field. Running this query over\nADO/ODBC from a Delphi app tooks 50-100 secs. If i run this from pgAdmin II.\nit takes some msecs.\n\nThe output of explain is:\nIndex Scan using person_id_index on person (cost=0.00..3.14 rows=1 width=4)\n\nAny idea?\n\nThanks in advance: steve\n\n",
"msg_date": "Tue, 31 Aug 2004 19:26:15 +0200",
"msg_from": "=?iso-8859-2?Q?Kroh_Istv=E1n?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "odbc/ado problems"
},
{
"msg_contents": "hello,\n\ndid you use a read only / forward only cursor (adOpenForwardOnly) ?\nthis is the fastest way to retreive data, other cursor types are much slower\nwith postgres odbc.\n\nbye,\nMarkus Donath.\n\n\n\"Kroh Istv�n\" <[email protected]> schrieb im Newsbeitrag\nnews:005a01c48f7f$9ff8ae70$0102a8c0@pomme001...\n> Hi all!\n>\n> I'm new here, so hello to everybody!\n>\n> I'm in a deep truble using postgesSQL 7.2.0 on a low-end pc with SUSE 8.\nI'm\n> using some databases from that pc through odbc (7.3.200). Until now i had\nno\n> problems with this solution, everithing worked fine. But today i wrote a\n> small app, that converts/copies some data from a database to an other\n> database.\n>\n> During this work i wrote a simple query as follows:\n> select pers_driving_license from person where pers_id=23456\n>\n> This should return a single varchar(20) field. Running this query over\n> ADO/ODBC from a Delphi app tooks 50-100 secs. If i run this from pgAdmin\nII.\n> it takes some msecs.\n>\n> The output of explain is:\n> Index Scan using person_id_index on person (cost=0.00..3.14 rows=1\nwidth=4)\n>\n> Any idea?\n>\n> Thanks in advance: steve\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n\n",
"msg_date": "Wed, 1 Sep 2004 10:19:48 +0200",
"msg_from": "\"Markus Donath\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odbc/ado problems"
},
{
"msg_contents": "On 31 Aug 2004 at 19:26, Kroh Istv n wrote:\n\n> This should return a single varchar(20) field. Running this query over\n> ADO/ODBC from a Delphi app tooks 50-100 secs. If i run this from\n> pgAdmin II. it takes some msecs.\n\nI have found instantiating and using the ADO objects directly to be\nmuch faster than using the TADOConnection and TADOQuery components\nthat come with Delphi. - This is in web apps where I'd construct the\nSQL and execute it with the Connection object.\n\n--Ray.\n\n-------------------------------------------------------------\nRaymond O'Donnell http://www.galwaycathedral.org/recitals\[email protected] Galway Cathedral Recitals\n-------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 02 Sep 2004 11:46:02 +0100",
"msg_from": "\"Raymond O'Donnell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odbc/ado problems"
}
] |
[
{
"msg_contents": "> I'm new here, so hello to everybody!\n> \n> I'm in a deep truble using postgesSQL 7.2.0 on a low-end pc with SUSE 8.\n> I'm\n> using some databases from that pc through odbc (7.3.200). Until now i had\n> no\n> problems with this solution, everithing worked fine. But today i wrote a\n> small app, that converts/copies some data from a database to an other\n> database.\n> \n> During this work i wrote a simple query as follows:\n> select pers_driving_license from person where pers_id=23456\n> \n> This should return a single varchar(20) field. Running this query over\n> ADO/ODBC from a Delphi app tooks 50-100 secs. If i run this from pgAdmin\n> II.\n> it takes some msecs.\n\nQuestion: what is your Delphi database driver? If you are using the BDE, that might be your problem. Try installing and using the ZeosLib toolkit.\n\nhttp://www.zeoslib.org\n\nMerlin\n",
"msg_date": "Tue, 31 Aug 2004 13:33:07 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odbc/ado problems"
}
] |
[
{
"msg_contents": " We have a web based application with data that is updated daily. The \nbiggest bottleneck occurs when we try to update\n one of the tables. This table contains 58,000 rows and 62 columns, and \nEVERY column is indexed. Every column is\n queryable (?) by the users through the web interface so we are \nreluctant to remove the indexes (recreating them would\n be time consuming too). The primary key is an INT and the rest of the \ncolumns are a mix of NUMERIC, TEXT, and DATEs.\n A typical update is:\n UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',\n field04='foo', ... , field60='2004-08-30', field61='2004-08-29'\n WHERE id = 1234;\n\n Also of note is that the update is run about 10 times per day; we get \nblocks of data from 10 different sources, so we pre-process the\n data and then update the table. We also run VACUUM FULL ANALYZE on a \nnightly basis.\n \n Does anyone have some idea on how we can increase speed, either by \nchanging the updates, designing the database\n differently, etc, etc? This is currently a big problem for us.\n \n Other notables:\n The UPDATE is run from a within a function: FOR rec IN SELECT ...LOOP \nRETURN NEXT rec; UPDATE dataTable.....\n Postgres 7.4.3\n debian stable\n 2 GB RAM\n 80 DB IDE drive (we can't change it)\n \n shared_buffers = 2048\n sort_mem = 1024 \n max_fsm_pages = 40000\n checkpoint_segments = 5\n random_page_cost = 3\n \n \n Thanks\n \n Ron\n\n\n",
"msg_date": "Tue, 31 Aug 2004 11:11:02 -0700",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table UPDATE is too slow"
},
{
"msg_contents": "What is the datatype of the id column?\n\n-tfo\n\nOn Aug 31, 2004, at 1:11 PM, Ron St-Pierre wrote:\n\n> We have a web based application with data that is updated daily. The \n> biggest bottleneck occurs when we try to update\n> one of the tables. This table contains 58,000 rows and 62 columns, and \n> EVERY column is indexed. Every column is\n> queryable (?) by the users through the web interface so we are \n> reluctant to remove the indexes (recreating them would\n> be time consuming too). The primary key is an INT and the rest of the \n> columns are a mix of NUMERIC, TEXT, and DATEs.\n> A typical update is:\n> UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',\n> field04='foo', ... , field60='2004-08-30', field61='2004-08-29'\n> WHERE id = 1234;\n>\n> Also of note is that the update is run about 10 times per day; we get \n> blocks of data from 10 different sources, so we pre-process the\n> data and then update the table. We also run VACUUM FULL ANALYZE on a \n> nightly basis.\n> Does anyone have some idea on how we can increase speed, either by \n> changing the updates, designing the database\n> differently, etc, etc? This is currently a big problem for us.\n> Other notables:\n> The UPDATE is run from a within a function: FOR rec IN SELECT \n> ...LOOP RETURN NEXT rec; UPDATE dataTable.....\n> Postgres 7.4.3\n> debian stable\n> 2 GB RAM\n> 80 DB IDE drive (we can't change it)\n> shared_buffers = 2048\n> sort_mem = 1024 max_fsm_pages = 40000\n> checkpoint_segments = 5\n> random_page_cost = 3\n> Thanks\n> Ron\n\n",
"msg_date": "Tue, 31 Aug 2004 13:16:22 -0500",
"msg_from": "Thomas F.O'Connell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "On Tue, Aug 31, 2004 at 11:11:02AM -0700, Ron St-Pierre wrote:\n> We have a web based application with data that is updated daily. The \n> biggest bottleneck occurs when we try to update\n> one of the tables. This table contains 58,000 rows and 62 columns, and \n> EVERY column is indexed.\n\nThat is usually a very bad idea; for every update, PostgreSQL has to update\n62 indexes. Do you really do queries on all those 62 columns?\n\n> A typical update is:\n> UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',\n> field04='foo', ... , field60='2004-08-30', field61='2004-08-29'\n> WHERE id = 1234;\n\nThat looks like poor database normalization, really. Are you sure you don't\nwant to split this into multiple tables instead of having 62 columns?\n\n> Other notables:\n> The UPDATE is run from a within a function: FOR rec IN SELECT ...LOOP \n> RETURN NEXT rec; UPDATE dataTable.....\n> Postgres 7.4.3\n> debian stable\n> 2 GB RAM\n> 80 DB IDE drive (we can't change it)\n\nAre you doing all this in multiple transactions, or in a sngle one? Wrapping\nthe FOR loop in a transaction might help speed.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 31 Aug 2004 20:18:15 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "Thomas F. O'Connell wrote:\n\n> What is the datatype of the id column?\n>\nThe id column is INTEGER.\n\nRon\n\n",
"msg_date": "Tue, 31 Aug 2004 11:23:39 -0700",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n\n>On Tue, Aug 31, 2004 at 11:11:02AM -0700, Ron St-Pierre wrote:\n> \n>\n>>We have a web based application with data that is updated daily. The \n>>biggest bottleneck occurs when we try to update\n>>one of the tables. This table contains 58,000 rows and 62 columns, and \n>>EVERY column is indexed.\n>> \n>>\n>\n>That is usually a very bad idea; for every update, PostgreSQL has to update\n>62 indexes. Do you really do queries on all those 62 columns?\n> \n>\nYes, I know that it's not a very good idea, however queries are allowed \nagainst all of those columns. One option is to disable some or all of the\nindexes when we update, run the update, and recreate the indexes, \nhowever it may slow down user queries. Because there are so many indexes,\nit is time consuming to recreate them after the update.\n\n> \n>\n>>A typical update is:\n>> UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',\n>> field04='foo', ... , field60='2004-08-30', field61='2004-08-29'\n>> WHERE id = 1234;\n>> \n>>\n>\n>That looks like poor database normalization, really. Are you sure you don't\n>want to split this into multiple tables instead of having 62 columns?\n>\nNo, it is properly normalized. The data in this table is stock \nfundamentals, stuff like 52 week high, ex-dividend date, etc, etc.\n\n>\n> \n>\n>>Other notables:\n>> The UPDATE is run from a within a function: FOR rec IN SELECT ...LOOP \n>>RETURN NEXT rec; UPDATE dataTable.....\n>> Postgres 7.4.3\n>> debian stable\n>> 2 GB RAM\n>> 80 DB IDE drive (we can't change it)\n>> \n>>\n>\n>Are you doing all this in multiple transactions, or in a sngle one? Wrapping\n>the FOR loop in a transaction might help speed.\n>\nWe're doing it in multiple transactions within the function. Could we do \nsomething like this?:\n\n....\nBEGIN\n FOR rec IN SELECT field01, field02, ... FROM otherTable LOOP \n RETURN NEXT rec;\n UPDATE dataTable SET field01=rec.field01, field02=rec.field02, rec.field03=field03, ...\n WHERE id = rec.id;\nCOMMIT;\n....\n\n\nIf we can do it this way, are there any other gotcha's we should be \naware of?\n\n\nRon\n\n",
"msg_date": "Tue, 31 Aug 2004 11:35:38 -0700",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "On Tue, 31 Aug 2004 11:11:02 -0700\nRon St-Pierre <[email protected]> wrote:\n\n\n> Postgres 7.4.3\n> debian stable\n> 2 GB RAM\n> 80 DB IDE drive (we can't change it)\n> \n> shared_buffers = 2048\n> sort_mem = 1024 \n> max_fsm_pages = 40000\n> checkpoint_segments = 5\n> random_page_cost = 3\n\n I agree with all of the follow ups that having indexes on every \n column is a bad idea. I would remove the indexes from the\n least searched upon 10-20 columns, as I'm sure this will help\n your performance.\n\n You mention that not having indexes on some of the columns because it\n \"may slow down user queries\". I think you should investigate this and\n make sure they are necessary. I've seen many an application, with far\n more rows than you're dealing with, only need 1 or 2 indexes, even\n when all (or most) columns could be searched. \n\n Also, you should consider increasing your shared_buffers and probably\n your sort memory a touch as well. This will help your overall\n performance. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Tue, 31 Aug 2004 13:46:19 -0500",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "On Tue, Aug 31, 2004 at 11:35:38AM -0700, Ron St-Pierre wrote:\n> We're doing it in multiple transactions within the function. Could we do \n> something like this?:\n> \n> ....\n> BEGIN\n> FOR rec IN SELECT field01, field02, ... FROM otherTable LOOP \n> RETURN NEXT rec;\n> UPDATE dataTable SET field01=rec.field01, field02=rec.field02, \n> rec.field03=field03, ...\n> WHERE id = rec.id;\n> COMMIT;\n> ....\n> \n> \n> If we can do it this way, are there any other gotcha's we should be \n> aware of?\n\nAFAIK you should be able to do this, yes (although I have no experience with\nPL/SQL); I'm not sure how much it buys you, but you might want to test it, at\nleast.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 31 Aug 2004 20:48:45 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "> >That looks like poor database normalization, really. Are you \n> sure you \n> >don't want to split this into multiple tables instead of having 62 \n> >columns?\n> >\n> No, it is properly normalized. The data in this table is stock \n> fundamentals, stuff like 52 week high, ex-dividend date, etc, etc.\n\nHmm, the two examples you gave there are actually ripe for breaking out into\nanother table. It's not quite 'normalisation', but if you have data that\nchanges very rarely, why not group it into a separate table? You could have\nthe highly volatile data in one table, the semi-volatile stuff in another,\nand the pretty static stuff in a third. Looked at another way, if you have\nsets of fields that tend to change together, group them into tables\ntogether. That way you will radically reduce the number of indexes that are\naffected by each update.\n\nBut as someone else pointed out, you should at the very least wrap your\nupdates in a big transaction.\n\nM\n\n",
"msg_date": "Tue, 31 Aug 2004 19:59:55 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
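A rough illustration of the split Matt describes, with hypothetical column names; the daily feed would then only touch the volatile table and its handful of indexes:

  CREATE TABLE stock_static (
      id            integer PRIMARY KEY,
      symbol        text NOT NULL,
      company_name  text,
      listing_date  date
  );

  CREATE TABLE stock_daily (
      id            integer PRIMARY KEY REFERENCES stock_static (id),
      price         numeric,
      high_52wk     numeric,
      ex_dividend   date,
      updated_on    date
  );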
{
"msg_contents": "Ron St-Pierre wrote:\n> Yes, I know that it's not a very good idea, however queries are allowed \n> against all of those columns. One option is to disable some or all of the\n> indexes when we update, run the update, and recreate the indexes, \n> however it may slow down user queries. Because there are so many indexes,\n> it is time consuming to recreate them after the update.\n\nJust because a query can run against any column does not mean all \ncolumns should be indexed. Take a good look at the column types and \ntheir value distribution.\n\nLet's say I have a table of addresses but every address I collect is in \nthe 94116 zip code. That would mean indexes on city, state and zip are \nnot only useless but could decrease performance.\n\nAlso, if a search always includes a unique key (or a column with highly \nunique values), eliminating the other indexes would force the planner to \nalways use that index first.\n",
"msg_date": "Tue, 31 Aug 2004 12:15:22 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "Thanks for everyone's comments (Thomas, Steinar, Frank, Matt, William). \nRight now I'm bench-marking the time it takes for each step\nin the end of day update process and then I am going to test a few things:\n- dropping most indexes, and check the full processing time and see if \nthere is any noticeable performance degradation on the web-end\n- wrapping the updates in a transaction, and check the timing\n- combining the two\n- reviewing my shared_buffers and sort_memory settings\n\nThanks\nRon\n\n",
"msg_date": "Tue, 31 Aug 2004 17:18:14 -0700",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "Ron St-Pierre <[email protected]> writes:\n> Does anyone have some idea on how we can increase speed, either by \n> changing the updates, designing the database\n> differently, etc, etc? This is currently a big problem for us.\n \n> Other notables:\n> The UPDATE is run from a within a function: FOR rec IN SELECT ...LOOP \n> RETURN NEXT rec; UPDATE dataTable.....\n\nOne point that I don't think was made before: by doing a collection of\nupdates in this serial one-at-a-time fashion, you are precluding any\npossibility of query optimization over the collection of updates. It\nmight win to dump the update data into a temp table and do a single\nUPDATE command joining to the temp table. Or not --- quite possibly not\n--- but I think it's something to think about.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Sep 2004 01:32:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow "
},
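A minimal sketch of the single-statement variant Tom describes, assuming a staging table and showing only two of the sixty-two columns; the pre-processing step fills the staging table (COPY or batched INSERTs) and one UPDATE ... FROM then applies the whole block:

  CREATE TEMP TABLE staging (id integer, field01 numeric, field02 numeric);
  -- populate staging here from the pre-processed feed

  UPDATE dataTable
     SET field01 = staging.field01,
         field02 = staging.field02
    FROM staging
   WHERE dataTable.id = staging.id;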
{
"msg_contents": "Ron St-Pierre wrote:\n\n> We have a web based application with data that is updated daily. The \n> biggest bottleneck occurs when we try to update\n> one of the tables. This table contains 58,000 rows and 62 columns, and \n> EVERY column is indexed.\n\nHave you thought of / tried using 2 separate databases or tables and \nswitching between them? Since you seem to be updating all the values, it \nmight be a lot faster to re-create the table from scratch without \nindexes and add those later (maybe followed by a VACUUM ANALYZE) ...\n\nThat said, I'm not entirely sure how well postgres' client libraries can \ndeal with tables being renamed while in use, perhaps someone can shed \nsome light on this.\n\nRegards,\n Marinos\n-- \nDipl.-Ing. Marinos Yannikos, CEO\nPreisvergleich Internet Services AG\nObere Donaustra�e 63/2, A-1020 Wien\nTel./Fax: (+431) 5811609-52/-55\n",
"msg_date": "Mon, 06 Sep 2004 01:28:04 +0200",
"msg_from": "\"Marinos J. Yannikos\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
{
"msg_contents": "Do all of the commands to swap tables in a transaction. The table\ngets locked briefly but should have a lot less impact then the update\ncommand.\n\n\nOn Mon, 06 Sep 2004 01:28:04 +0200, Marinos J. Yannikos <[email protected]> wrote:\n> \n> That said, I'm not entirely sure how well postgres' client libraries can\n> deal with tables being renamed while in use, perhaps someone can shed\n> some light on this.\n>\n",
"msg_date": "Sun, 5 Sep 2004 20:19:35 -0500",
"msg_from": "Kevin Barnard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table UPDATE is too slow"
},
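A hedged sketch of the rebuild-and-swap idea, assuming the table name from the thread and that clients only ever refer to it by name; the DDL is transactional, so the swap itself holds its lock only briefly, as Kevin notes:

  BEGIN;
  CREATE TABLE dataTable_new AS SELECT * FROM dataTable;  -- or load the fresh feed directly
  CREATE UNIQUE INDEX dataTable_new_id_idx ON dataTable_new (id);
  -- recreate only the indexes that are really needed
  ALTER TABLE dataTable RENAME TO dataTable_old;
  ALTER TABLE dataTable_new RENAME TO dataTable;
  COMMIT;
  DROP TABLE dataTable_old;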
{
"msg_contents": "\n\tHello,\n\n\tI have this table :\nCREATE TABLE apparts\n(\n id \t\t SERIAL NOT NULL PRIMARY KEY,\n\n price FLOAT NOT NULL,\n surface INTEGER NOT NULL,\n price_sq FLOAT NOT NULL,\n\n rooms INTEGER NULL,\n vente BOOL NOT NULL,\n category TEXT NOT NULL,\n zipcode INTEGER NOT NULL,\n departement INTEGER NOT NULL\n) WITHOUT OIDS;\n\nThere is a BTREE index on 'departement'.\nThe table fits in RAM.\n\nWhen I want to SELECT according to my indexed field, postgres chooses a \nsequential scan unless the number of rows to be returned is very, very \nsmall :\n\napparts=> explain analyze select * from apparts where departement=42;\n Seq Scan on apparts (cost=0.00..853.12 rows=1403 width=47) (actual \ntime=5.094..52.026 rows=1516 loops=1)\n Filter: (departement = 42)\n Total runtime: 52.634 ms\n\nOK, it returns 1516 rows, so maybe the seq scan is right.\n\napparts=> SET enable_seqscan = 0;\napparts=> explain analyze select * from apparts where departement=42;\n Index Scan using apparts_dept on apparts (cost=0.00..1514.59 rows=1403 \nwidth=47) (actual time=0.045..2.770 rows=1516 loops=1)\n Index Cond: (departement = 42)\n Total runtime: 3.404 ms\n\nUm, 15 times faster...\n\nIndex scan is called only when there are few rows. With other values for \n'departement' where there are few rows, the Index is used automatically. \nThis is logical, even if I should adjust the page costs. I wish I could \ntell postgres \"this table will fit in RAM and be accessed often, so for \nthis table, the page seek cost should be very low\".\n\nEverything is vacuum full analyze.\n\nNow, if I LIMIT the query to 10 rows, the index should be used all the \ntime, because it will always return few rows... well, it doesn't !\n\napparts=> SET enable_seqscan = 1;\napparts=> explain analyze select * from apparts where departement=42 LIMIT \n10;\n Limit (cost=0.00..6.08 rows=10 width=47) (actual time=5.003..5.023 \nrows=10 loops=1)\n -> Seq Scan on apparts (cost=0.00..853.12 rows=1403 width=47) (actual \ntime=4.998..5.013 rows=10 loops=1)\n Filter: (departement = 42)\n Total runtime: 5.107 ms\n\n\nNow, let's try :\n\napparts=> SET enable_seqscan = 0;\napparts=> explain analyze select * from apparts where departement=42 LIMIT \n10;\n Limit (cost=0.00..10.80 rows=10 width=47) (actual time=0.047..0.072 \nrows=10 loops=1)\n -> Index Scan using apparts_dept on apparts (cost=0.00..1514.59 \nrows=1403 width=47) (actual time=0.044..0.061 rows=10 loops=1)\n Index Cond: (departement = 42)\n Total runtime: 0.157 ms\n\nSo, by itself, Postgres will select a very bad query plan (32x slower) on \na query which would be executed very fast using indexes. 
If I use OFFSET \n+ LIMIT, it only gets worse because the seq scan has to scan more rows :\n\napparts=> SET enable_seqscan = 1;\napparts=> explain analyze select * from apparts where departement=42 LIMIT \n10 OFFSET 85;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Limit (cost=51.69..57.77 rows=10 width=47) (actual time=10.224..10.246 \nrows=10 loops=1)\n -> Seq Scan on apparts (cost=0.00..853.12 rows=1403 width=47) (actual \ntime=5.254..10.200 rows=95 loops=1)\n Filter: (departement = 42)\n Total runtime: 10.326 ms\n\n\napparts=> SET enable_seqscan = 1;\napparts=> explain analyze select * from apparts where departement=42 LIMIT \n10 OFFSET 1000;\n Limit (cost=608.07..614.15 rows=10 width=47) (actual time=43.993..44.047 \nrows=10 loops=1)\n -> Seq Scan on apparts (cost=0.00..853.12 rows=1403 width=47) (actual \ntime=5.328..43.791 rows=1010 loops=1)\n Filter: (departement = 42)\n Total runtime: 44.128 ms\n\napparts=> SET enable_seqscan = 0;\napparts=> explain analyze select * from apparts where departement=42 LIMIT \n10 OFFSET 1000;\n Limit (cost=1079.54..1090.33 rows=10 width=47) (actual time=2.147..2.170 \nrows=10 loops=1)\n -> Index Scan using apparts_dept on apparts (cost=0.00..1514.59 \nrows=1403 width=47) (actual time=0.044..1.860 rows=1010 loops=1)\n Index Cond: (departement = 42)\n Total runtime: 2.259 ms\n\n\n\tWhy is it that way ? The planner should use the LIMIT values when \nplanning the query, should it not ?\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 06 Sep 2004 14:15:44 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "The usual sequential scan, but with LIMIT !"
},
{
"msg_contents": "\nUpdate :\n\nselect * from apparts where departement=69 order by departement limit 10;\n\ndoes use an index scan (because of the ORDER BY), even with OFFSET, and \nit's a lot faster.\n\n",
"msg_date": "Mon, 06 Sep 2004 14:27:06 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The usual sequential scan, but with LIMIT !"
},
{
"msg_contents": "On Mon, 6 Sep 2004, [iso-8859-15] Pierre-Fr�d�ric Caillaud wrote:\n\n> \tWhy is it that way ? The planner should use the LIMIT values when \n> planning the query, should it not ?\n\nAnd it do use limit values, the estimated cost was lower when you had the \nlimit,\n\nWhat you need to do is to tune pg for your computer. For example the \nfollowing settings:\n\n * effective_cache - this setting tells pg how much the os are caching\n (for example use top to find out during a normal work load). You said \n that the tables fit in memory and by telling pg how much is cached it \n might adjust it's plans accordingly.\n\n* random_page_cost - how expensive is a random access compared to seq. \n access. This is dependent on the computer and disk system you have.\n If the setting above does not help, maybe you need to lower this to\n variable to 2 or something.\n\nAnd don't forget the shared_buffer setting. But most people usually have\nit tuned in my experience (but usually too high). Here is an article that\nmight help you:\n\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Mon, 6 Sep 2004 14:45:30 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The usual sequential scan, but with LIMIT !"
},
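Illustrative numbers only, along the lines Dennis suggests for a machine with 1 GB of RAM; in 7.4 both of the first two settings are counted in 8 KB pages, and the right effective_cache_size is roughly what top shows the OS holding in its file cache:

  shared_buffers       = 10000   # ~80 MB
  effective_cache_size = 80000   # ~640 MB
  random_page_cost     = 2       # lower it when the working set fits in RAM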
{
"msg_contents": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?= <[email protected]> writes:\n> Now, if I LIMIT the query to 10 rows, the index should be used all the \n> time, because it will always return few rows... well, it doesn't !\n\nNot at all. From the planner's point of view, the LIMIT is going to\nreduce the cost by about a factor of 10/1403, since the underlying plan\nstep will only be run partway through. That's not going to change the\ndecision about which underlying plan step is cheapest: 10/1403 of a\ncheaper plan is still always less than 10/1403 of a more expensive plan.\n\nLater, you note that LIMIT with ORDER BY does affect the plan choice\n--- that's because in that situation one plan alternative has a much\nhigher startup cost than the other (namely the cost of a sort step).\nA small LIMIT can allow the fast-startup plan to be chosen even though\nit would be estimated to be the loser if run to completion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Sep 2004 12:40:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The usual sequential scan, but with LIMIT ! "
},
{
"msg_contents": "\n\tOK, thanks a lot for your explanations. Knowing how the planner \"thinks\", \nmakes it pretty logical. Thank you.\n\n\tNow another question...\n\n\tI have a table of records representing forum posts with a primary key \n(id), a topic_id, a timestamp, and other fields which I won't detail. I \nwant to split them into pages (like forums usually do), with N posts per \npage. In that case :\n\n\tSELECT * FROM table WHERE topic_id=... ORDER BY post_timestamp asc LIMIT \nN OFFSET N*page;\n\n\tAlso it's almost the same to order by id rather than post_timestamp (id \nbeing a serial).\n\n\tSELECT * FROM table WHERE topic_id=... ORDER BY id asc LIMIT N OFFSET \nN*page;\n\n\tThis query runs slower and slower as the OFFSET grows, which is a problem \nbecause the most accessed page in a forum is the last one.\n\n\tSo, for the last page, I tried :\n\tSELECT * FROM table WHERE topic_id=... ORDER BY id desc LIMIT N;\n\tBut this does not use the index at all (seq scan + sort + limit).\n\n\tMy solution is simple : build an index on (-id), or on (some \ndate)-post_timestamp, then :\n\tSELECT * FROM table WHERE topic_id=... ORDER BY (-id) desc LIMIT N;\n\n\tThen the last page is the fastest one, but it always shows N posts. \nThat's not a problem, so I guess I'll use that. I don't like forums which \nshow 1 post on the last page because the number of posts modulo N is 1.\n\tI may store the number of posts in a forum (updated by a trigger) to \navoid costly COUNT queries to count the pages, so I could use ORDER BY id \nfor the first half of the pages, and ORDER BY (-id) for the rest, so it \nwill always be fastest.\n\n\tI could even create a pages table to store the id of the first post on \nthat page and then :\n\tSELECT * FROM table WHERE topic_id=... AND id>id_of_first_post_in_page \nORDER BY id asc LIMIT N;\n\tthen all pages would be aqually fast.\n\n\tOr, I could cache the query results for all pages but the last one.\n\n\tFinally, the question : having a multiple field btree, it is not harder \nto scan it in \"desc order\" than in \"asc order\". So why does not Postgres \ndo it ? Here is a btree example :\n\n\ttopic_id\tid\n\t1\t\t1\n\t1\t\t10\n\t2\t\t2\n\t2\t\t5\n\t2\t\t17\n\t3\t\t4\n\t3\t\t6\n\n\tsuppose I SELECT WHERE topic_id=2 ORDER BY topic_id ASC,id ASC.\n\tPostgres simply finds the first row with topic_id=2 and goes from there.\n\t\n\tsuppose I SELECT WHERE topic_id=2 ORDER BY topic_id ASC,id DESC.\n\tPostgres does a seq scan, but it could think a bit more and start at \n\"first index node which has topic_id>2\" (simple to find in a btree) then \ngo backwards in the index. This can ge beneralized to any combination of \n(asc,desc).\n\n\tI made some more experiments, and saw Postgres does an 'Index Scan' when \nORDER BY clauses are all ASC, and an 'Index Scan Backwards' when all ORDER \nBY are DESC. However, it does not handle a combination of ASC and DESC?\n\n\tWhat do you think of this ?\n\n\nOn Mon, 06 Sep 2004 12:40:41 -0400, Tom Lane <[email protected]> wrote:\n\n> =?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?= \n> <[email protected]> writes:\n>> Now, if I LIMIT the query to 10 rows, the index should be used all the\n>> time, because it will always return few rows... well, it doesn't !\n>\n> Not at all. From the planner's point of view, the LIMIT is going to\n> reduce the cost by about a factor of 10/1403, since the underlying plan\n> step will only be run partway through. 
That's not going to change the\n> decision about which underlying plan step is cheapest: 10/1403 of a\n> cheaper plan is still always less than 10/1403 of a more expensive plan.\n>\n> Later, you note that LIMIT with ORDER BY does affect the plan choice\n> --- that's because in that situation one plan alternative has a much\n> higher startup cost than the other (namely the cost of a sort step).\n> A small LIMIT can allow the fast-startup plan to be chosen even though\n> it would be estimated to be the loser if run to completion.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Tue, 07 Sep 2004 08:51:54 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The usual sequential scan, but with LIMIT ! "
},
{
"msg_contents": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?= <[email protected]> writes:\n> \tsuppose I SELECT WHERE topic_id=2 ORDER BY topic_id ASC,id DESC.\n> \tPostgres does a seq scan, but it could think a bit more and start at \n> \"first index node which has topic_id>2\" (simple to find in a btree) then \n> go backwards in the index.\n\nIf you write it as\n\tSELECT WHERE topic_id=2 ORDER BY topic_id DESC,id DESC.\nthen an index on (topic_id, id) will work fine. The mixed ASC/DESC\nordering is not compatible with the index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 2004 09:47:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The usual sequential scan, but with LIMIT ! "
},
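Putting Tom's fix together with the forum example, assuming a posts table and 25 posts per page; the same two-column index serves the first page (forward scan) and the last page (backward scan):

  CREATE INDEX posts_topic_id_idx ON posts (topic_id, id);

  -- first page, oldest first
  SELECT * FROM posts WHERE topic_id = 2 ORDER BY topic_id ASC, id ASC LIMIT 25;

  -- last page, newest first
  SELECT * FROM posts WHERE topic_id = 2 ORDER BY topic_id DESC, id DESC LIMIT 25;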
{
"msg_contents": "\n\tYes, you're right as usual.\n\tI had not thought about playing with ORDER BY on a field which has only \none value in the result set.\n\n\n> If you write it as\n> \tSELECT WHERE topic_id=2 ORDER BY topic_id DESC,id DESC.\n> then an index on (topic_id, id) will work fine. The mixed ASC/DESC\n> ordering is not compatible with the index.\n\n",
"msg_date": "Tue, 07 Sep 2004 16:30:54 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The usual sequential scan, but with LIMIT ! "
},
{
"msg_contents": "Ron St-Pierre wrote:\n\n> We have a web based application with data that is updated daily. The \n> biggest bottleneck occurs when we try to update\n> one of the tables. This table contains 58,000 rows and 62 columns, and \n> EVERY column is indexed. Every column is\n> queryable (?) by the users through the web interface so we are \n> reluctant to remove the indexes (recreating them would\n> be time consuming too). The primary key is an INT and the rest of the \n> columns are a mix of NUMERIC, TEXT, and DATEs.\n> A typical update is:\n> UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',\n> field04='foo', ... , field60='2004-08-30', field61='2004-08-29'\n> WHERE id = 1234;\n>\n> Also of note is that the update is run about 10 times per day; we get \n> blocks of data from 10 different sources, so we pre-process the\n> data and then update the table. We also run VACUUM FULL ANALYZE on a \n> nightly basis. \n\nIt now appears that VACUUM wasn't running properly. A manual VACUUM FULL \nANALYZE VEBOSE told us that\napproximately 275000 total pages were needed. I increased the \nmax_fsm_pages to 300000, VACUUMED, renamed the\ndatabase and re-created it from backup, vacuumed numerous times, and the \ntotal fsm_pages needed continued to remain in\nthe 235000 -> 270000 range. This morning I deleted the original \n(renamed) database, and a VACUUM FULL ANALYZE\nVEBOSE now says that only about 9400 pages are needed.\n\nOne question about redirecting VACUUMs output to file though. When I run:\n psql -d imperial -c \"vacuum full verbose analyze;\" > vac.info\nvac.info contains only the following line:\n VACUUM\nI've been unable to capture the VERBOSE output to file. Any suggestions?\n\n<snip>\n\n>\nAlso, thanks for everyone's input about my original posting, I am \ninvestigating some of the options mentioned to further increase\nperformance.\n\nRon\n\n",
"msg_date": "Tue, 07 Sep 2004 09:20:07 -0700",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Table UPDATE is too slow"
},
{
"msg_contents": "Pierre-Frᅵdᅵric Caillaud wrote:\n\n> \n> Yes, you're right as usual.\n\nAs usual ? Do you think your father can be wrong on you ? :-)\n\n\n\nGaetano\n\n\n\n\n\n\n",
"msg_date": "Tue, 07 Sep 2004 18:54:15 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The usual sequential scan, but with LIMIT !"
},
{
"msg_contents": "Ron St-Pierre <[email protected]> writes:\n> One question about redirecting VACUUMs output to file though. When I run:\n> psql -d imperial -c \"vacuum full verbose analyze;\" > vac.info\n> vac.info contains only the following line:\n> VACUUM\n> I've been unable to capture the VERBOSE output to file. Any suggestions?\n\nYou need to catch stderr not only stdout.\n\n(I'd be less vague if I knew which shell you were running, but sh- and\ncsh-derived shells do it differently.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 2004 14:00:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Table UPDATE is too slow "
},
{
"msg_contents": "Tom Lane wrote:\n\n>Ron St-Pierre <[email protected]> writes:\n> \n>\n>>One question about redirecting VACUUMs output to file though. When I run:\n>> psql -d imperial -c \"vacuum full verbose analyze;\" > vac.info\n>>vac.info contains only the following line:\n>> VACUUM\n>>I've been unable to capture the VERBOSE output to file. Any suggestions?\n>> \n>>\n>\n>You need to catch stderr not only stdout.\n>\n>(I'd be less vague if I knew which shell you were running, but sh- and\n>csh-derived shells do it differently.)\n>\n>\t\n>\nOops, I'm running bash. I just redirected stderr to the file\n psql -d imperial -c \"vacuum full verbose analyze;\" 2> \n/usr/local/pgsql/vac.info\nwhich gives me exactly what I want.\n\nThanks again Tom\n\nRon\n\n",
"msg_date": "Tue, 07 Sep 2004 11:42:13 -0700",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Table UPDATE is too slow"
}
] |
[
{
"msg_contents": "hi,\nI have the following structure in my base 7.4.2\n\nCREATE TABLE \"public\".\"article\" (\n \"art_id\" INTEGER NOT NULL,\n \"rub_id\" INTEGER DEFAULT '0' NOT NULL,\n \"art_titre\" VARCHAR(100) DEFAULT '' NOT NULL,\n \"art_texte\" TEXT NOT NULL,\n \"art_date\" DATE NOT NULL,\n \"aut_id\" INTEGER,\n CONSTRAINT \"article_pkey\" PRIMARY KEY(\"art_id\")\n) WITH OIDS;\n\nCREATE INDEX \"article_art_date_index\" ON \"public\".\"article\"\nUSING btree (\"art_date\");\n\n\nCREATE INDEX \"article_aut_id_index\" ON \"public\".\"article\"\nUSING btree (\"aut_id\");\n\n\nCREATE INDEX \"article_rub_id_index\" ON \"public\".\"article\"\nUSING btree (\"rub_id\");\n\n\nCREATE INDEX \"article_titre\" ON \"public\".\"article\"\nUSING btree (\"art_id\", \"art_titre\");\n\n\nCREATE TABLE \"public\".\"auteur\" (\n \"aut_id\" INTEGER NOT NULL,\n \"aut_name\" VARCHAR(100) DEFAULT '' NOT NULL,\n CONSTRAINT \"auteur_pkey\" PRIMARY KEY(\"aut_id\")\n) WITH OIDS;\n\n\nCREATE TABLE \"public\".\"rubrique\" (\n \"rub_id\" INTEGER NOT NULL,\n \"rub_titre\" VARCHAR(100) DEFAULT '' NOT NULL,\n \"rub_parent\" INTEGER DEFAULT '0' NOT NULL,\n \"rub_date\" DATE,\n CONSTRAINT \"rubrique_pkey\" PRIMARY KEY(\"rub_id\")\n) WITH OIDS;\n\nCREATE INDEX \"rub_rub\" ON \"public\".\"rubrique\"\nUSING btree (\"rub_parent\");\n\nCREATE INDEX \"rubrique_rub_date_index\" ON \"public\".\"rubrique\"\nUSING btree (\"rub_date\");\n\nCREATE INDEX \"rubrique_rub_titre_index\" ON \"public\".\"rubrique\"\nUSING btree (\"rub_titre\");\n\nI want to optimize the following request and avoid the seq scan on the\ntable article (10000000 rows).\n\n\n\nexplain SELECT art_id, art_titre, art_texte, rub_titre\nFROM article inner join rubrique on article.rub_id = rubrique.rub_id\nwhere rub_parent = 8;\n\nHash Join (cost=8.27..265637.59 rows=25 width=130)\n Hash Cond: (\"outer\".rub_id = \"inner\".rub_id)\n -> Seq Scan on article (cost=0.00..215629.00 rows=10000000 width=108)\n -> Hash (cost=8.26..8.26 rows=3 width=22)\n -> Index Scan using rubrique_parent on rubrique \n(cost=0.00..8.26 rows=3 width=22)\n Index Cond: (rub_parent = 8)\n\n\nthanks for your answers,\n\n-- \nJean-Max Reymond\nCKR Solutions\nhttp://www.ckr-solutions.com\n",
"msg_date": "Tue, 31 Aug 2004 20:59:11 +0200",
"msg_from": "Jean-Max Reymond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing a request"
},
{
"msg_contents": "Jean,\n\n> I have the following structure in my base 7.4.2\n\nUpgrade to 7.4.5. The version you're using has several known issues with data \nrestore in the event of system failure.\n\n> Hash Join (cost=8.27..265637.59 rows=25 width=130)\n> Hash Cond: (\"outer\".rub_id = \"inner\".rub_id)\n> -> Seq Scan on article (cost=0.00..215629.00 rows=10000000 width=108)\n> -> Hash (cost=8.26..8.26 rows=3 width=22)\n> -> Index Scan using rubrique_parent on rubrique\n> (cost=0.00..8.26 rows=3 width=22)\n> Index Cond: (rub_parent = 8)\n\nThose look suspiciously like stock estimates. When was the last time you ran \nANALYZE?\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 31 Aug 2004 12:15:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a request"
},
{
"msg_contents": "On Tue, 31 Aug 2004 12:15:40 -0700, Josh Berkus <[email protected]> wrote:\n\n> Those look suspiciously like stock estimates. When was the last time you ran\n> ANALYZE?\n\nthe vacuum analyze ran just before the explain\n\n-- \nJean-Max Reymond\nCKR Solutions\nhttp://www.ckr-solutions.com\n",
"msg_date": "Tue, 31 Aug 2004 21:19:09 +0200",
"msg_from": "Jean-Max Reymond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing a request"
},
{
"msg_contents": "On 31 Aug 2004 at 20:59, Jean-Max Reymond wrote:\n\n> hi,\n> \n> I want to optimize the following request and avoid the seq scan on the\n> table article (10000000 rows).\n> \n> explain SELECT art_id, art_titre, art_texte, rub_titre\n> FROM article inner join rubrique on article.rub_id = rubrique.rub_id\n> where rub_parent = 8;\n> \n> Hash Join (cost=8.27..265637.59 rows=25 width=130)\n> Hash Cond: (\"outer\".rub_id = \"inner\".rub_id)\n> -> Seq Scan on article (cost=0.00..215629.00 rows=10000000 width=108)\n> -> Hash (cost=8.26..8.26 rows=3 width=22)\n> -> Index Scan using rubrique_parent on rubrique \n> (cost=0.00..8.26 rows=3 width=22)\n> Index Cond: (rub_parent = 8)\n> \n> \n> thanks for your answers,\n> \n> -- \n\nHave you run ANALYZE on this database after creating the indexes or \nloading the data?\n\nWhat percentage of rows in the \"article\" table are likely to match the \nkeys selected from the \"rubrique\" table?\n\nIf it is likely to fetch a high proportion of the rows from article then it \nmay be best that a seq scan is performed.\n\nWhat are your non-default postgresql.conf settings? It may be better to \nincrease the default_statistics_target (to say 100 to \n200) before running ANALYZE and then re-run the \nquery.\n\nCheers,\nGary.\n\n\n\n\n\n\n\nOn 31 Aug 2004 at 20:59, Jean-Max Reymond wrote:\n\n\n> hi,\n> \n> I want to optimize the following request and avoid the seq scan on the\n> table article (10000000 rows).\n> \n> explain SELECT art_id, art_titre, art_texte, rub_titre\n> FROM article inner join rubrique on article.rub_id = rubrique.rub_id\n> where rub_parent = 8;\n> \n> Hash Join (cost=8.27..265637.59 rows=25 width=130)\n> Hash Cond: (\"outer\".rub_id = \"inner\".rub_id)\n> -> Seq Scan on article (cost=0.00..215629.00 rows=10000000 \nwidth=108)\n> -> Hash (cost=8.26..8.26 rows=3 width=22)\n> -> Index Scan using rubrique_parent \non rubrique \n> (cost=0.00..8.26 rows=3 width=22)\n> Index \nCond: (rub_parent = 8)\n> \n> \n> thanks for your answers,\n> \n> -- \n\nHave you run ANALYZE on this database after creating the indexes or \nloading the data?\n\n\nWhat percentage of rows in the \"article\" table are likely to match the \nkeys selected from the \"rubrique\" table?\n\n\nIf it is likely to fetch a high proportion of the rows from article then it \nmay be best that a seq scan is performed.\n\n\nWhat are your non-default postgresql.conf settings? It may be better to \nincrease the default_statistics_target (to say 100 to \n200) before running ANALYZE and then re-run the \nquery.\n\n\nCheers,\nGary.",
"msg_date": "Tue, 31 Aug 2004 20:21:49 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a request"
},
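One way to apply Gary's suggestion without changing the global default, assuming the join column is the one with the skewed distribution:

  ALTER TABLE article ALTER COLUMN rub_id SET STATISTICS 200;
  ANALYZE article;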
{
"msg_contents": "----- Original Message -----\nFrom: Gary Doades <[email protected]>\nDate: Tue, 31 Aug 2004 20:21:49 +0100\nSubject: Re: [PERFORM] Optimizing a request\nTo: [email protected]\n\n \n\n> Have you run ANALYZE on this database after creating the indexes or loading the data? \n \nthe indexes are created and the data loaded and then, I run vacuum analyze.\n\n>What percentage of rows in the \"article\" table are likely to match\nthe keys selected from the \"rubrique\" table?\n \nonly 1 record.\n\nIf it is likely to fetch a high proportion of the rows from article\nthen it may be best that a seq scan is performed.\n \nWhat are your non-default postgresql.conf settings? It may be better\nto increase the default_statistics_target (to say 100 to 200) before\nrunning ANALYZE and then re-run the query.\n \nyes, default_statistics_target is set to the default_value.\nI have just increased shared_buffers and effective_cache_size to give\nadvantage of 1 Mb RAM\n \n\n\n\n-- \nJean-Max Reymond\nCKR Solutions\nhttp://www.ckr-solutions.com\n",
"msg_date": "Tue, 31 Aug 2004 21:42:56 +0200",
"msg_from": "Jean-Max Reymond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing a request"
},
{
"msg_contents": "Jean-Max Reymond <[email protected]> writes:\n> explain SELECT art_id, art_titre, art_texte, rub_titre\n> FROM article inner join rubrique on article.rub_id = rubrique.rub_id\n> where rub_parent = 8;\n\n> Hash Join (cost=8.27..265637.59 rows=25 width=130)\n> Hash Cond: (\"outer\".rub_id = \"inner\".rub_id)\n> -> Seq Scan on article (cost=0.00..215629.00 rows=10000000 width=108)\n> -> Hash (cost=8.26..8.26 rows=3 width=22)\n> -> Index Scan using rubrique_parent on rubrique \n> (cost=0.00..8.26 rows=3 width=22)\n> Index Cond: (rub_parent = 8)\n\nThat seems like a very strange plan choice given those estimated row\ncounts. I'd have expected it to use a nestloop with inner index scan\non article_rub_id_index. You haven't done anything odd like disable\nnestloop, have you?\n\nWhat plan do you get if you turn off enable_hashjoin? (If it's a merge\njoin, then turn off enable_mergejoin and try again.) Also, could we see\nEXPLAIN ANALYZE not just EXPLAIN output for all these cases?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Aug 2004 16:13:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a request "
},
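The kind of session Tom is asking for, using the query from the start of the thread:

  SET enable_hashjoin = off;
  EXPLAIN ANALYZE
  SELECT art_id, art_titre, art_texte, rub_titre
  FROM article INNER JOIN rubrique ON article.rub_id = rubrique.rub_id
  WHERE rub_parent = 8;
  SET enable_hashjoin = on;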
{
"msg_contents": "On 31 Aug 2004 at 21:42, Jean-Max Reymond wrote:\n\n> ----- Original Message -----\n> From: Gary Doades <[email protected]>\n> Date: Tue, 31 Aug 2004 20:21:49 +0100\n> Subject: Re: [PERFORM] Optimizing a request\n> To: [email protected]\n> \n> \n> \n> > Have you run ANALYZE on this database after creating the indexes or loading the data? \n> \n> the indexes are created and the data loaded and then, I run vacuum analyze.\n> \n> >What percentage of rows in the \"article\" table are likely to match\n> the keys selected from the \"rubrique\" table?\n> \n> only 1 record.\n> \n> If it is likely to fetch a high proportion of the rows from article\n> then it may be best that a seq scan is performed.\n> \n> What are your non-default postgresql.conf settings? It may be better\n> to increase the default_statistics_target (to say 100 to 200) before\n> running ANALYZE and then re-run the query.\n> \n> yes, default_statistics_target is set to the default_value.\n> I have just increased shared_buffers and effective_cache_size to give\n> advantage of 1 Mb RAM\n> \n\nI can only presume you mean 1 GB RAM. What exactly are your \nsettings for shared buffers and effective_cache_size?\n\nCan you increase default_statistics_target and re-test? It is possible \nthat with such a large table that the distribution of values is skewed and \npostgres does not realise that an index scan would be better.\n\nIt seems very odd otherwise that only on row out of 10,000,000 could \nmatch and postgres does not realise this.\n\nCan you post an explain analyse (not just explain) for this query?\n\nCheers,\nGary.\n\n",
"msg_date": "Tue, 31 Aug 2004 21:16:46 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a request"
},
{
"msg_contents": "On Tue, 31 Aug 2004 21:16:46 +0100, Gary Doades <[email protected]> wrote:\n\n> I can only presume you mean 1 GB RAM. What exactly are your\n> settings for shared buffers and effective_cache_size?\n\nfor 1 GB RAM,\nshared_buffers = 65536\neffective_cache_size = 16384 \n\n> \n> Can you increase default_statistics_target and re-test? It is possible\n> that with such a large table that the distribution of values is skewed and\n> postgres does not realise that an index scan would be better.\n\nOK, tomorrow, I'll try with the new value of default_statistics_target\n\n> It seems very odd otherwise that only on row out of 10,000,000 could\n> match and postgres does not realise this.\n> \n> Can you post an explain analyse (not just explain) for this query?\n\nyes, of course\n\n\n-- \nJean-Max Reymond\nCKR Solutions\nhttp://www.ckr-solutions.com\n",
"msg_date": "Tue, 31 Aug 2004 22:24:51 +0200",
"msg_from": "Jean-Max Reymond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing a request"
},
{
"msg_contents": "On Tue, 31 Aug 2004 16:13:58 -0400, Tom Lane <[email protected]> wrote:\n\n> That seems like a very strange plan choice given those estimated row\n> counts. I'd have expected it to use a nestloop with inner index scan\n> on article_rub_id_index. You haven't done anything odd like disable\n> nestloop, have you?\n> \n\nno optimizer disabled.\n\n> What plan do you get if you turn off enable_hashjoin? (If it's a merge\n> join, then turn off enable_mergejoin and try again.) Also, could we see\n> EXPLAIN ANALYZE not just EXPLAIN output for all these cases?\n> \n> regards, tom lane\n> \n\nOK, TOM Thanks for your help\n\n-- \nJean-Max Reymond\nCKR Solutions\nhttp://www.ckr-solutions.com\n",
"msg_date": "Tue, 31 Aug 2004 22:26:05 +0200",
"msg_from": "Jean-Max Reymond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing a request"
},
{
"msg_contents": "On 31 Aug 2004 at 22:24, Jean-Max Reymond wrote:\n\n> On Tue, 31 Aug 2004 21:16:46 +0100, Gary Doades <[email protected]> wrote:\n> \n> > I can only presume you mean 1 GB RAM. What exactly are your\n> > settings for shared buffers and effective_cache_size?\n> \n> for 1 GB RAM,\n> shared_buffers = 65536\n> effective_cache_size = 16384 \n\nThis seems like the wrong way round also.\n\nYou might try:\n\nshared_buffers = 10000\neffective_cache_size = 60000\n\nCheers,\nGary.\n\n",
"msg_date": "Tue, 31 Aug 2004 21:31:44 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a request"
},
{
"msg_contents": "Hi,\n\nLe Mardi 31 Août 2004 20:59, Jean-Max Reymond a écrit :\n> explain SELECT art_id, art_titre, art_texte, rub_titre\n> FROM article inner join rubrique on article.rub_id = rubrique.rub_id\n> where rub_parent = 8;\n>\n> Hash Join (cost=8.27..265637.59 rows=25 width=130)\n> Hash Cond: (\"outer\".rub_id = \"inner\".rub_id)\n> -> Seq Scan on article (cost=0.00..215629.00 rows=10000000 width=108)\n> -> Hash (cost=8.26..8.26 rows=3 width=22)\n> -> Index Scan using rubrique_parent on rubrique\n> (cost=0.00..8.26 rows=3 width=22)\n> Index Cond: (rub_parent = 8)\n>\n\nWhat are the values in rub_parent ... is their many disparity in the values ?\nMay be you have most of the value set to 8 ... and may be the optimizer think \na seq scan is better than the use of an index ...\n\nCould you do a simple :\nSELECT rub_parent, count(rub_id)\n FROM rubrique \n GROUP BY rub_parent;\n\nJust to see the disparity of the values ...\n\nregards,\n-- \nBill Footcow\n\n",
"msg_date": "Tue, 31 Aug 2004 22:41:15 +0200",
"msg_from": "=?iso-8859-1?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a request"
}
] |
[
{
"msg_contents": "> thanks for the quick answer! My db driver is the native MS ADO, and\nfrom\n> Delphi i use the AODExpress components which are wrapper classes to\nreach\n> the ActiveX components from delhpi. The strange behaviour of that\nquery\n> is,\n> that all other queries executed in this environment are running fast,\n> except\n> this one. First there was any index on that table (appr. 40.000\nrecords),\n> and i thought that maybe this is my problem. Then i made this index,\nbut\n> this hasn't solved my problem.\n> Now i think that maybe the odbc driver makes something with my query?\nCan\n> this happen?\n\nPossible. I would turn on statement logging on the server and make sure\nthe query is the way you wrote it in the app. The driver might be doing\nsomething like pulling all the data and attempting a client side filter.\nOtherwise you may be looking at a casting problem of some sort.\n\nTo turn on logging, set statement log to 'all' in your postgresql.conf\nfile. You may need to start the server manually so you can determine\nwhere the log goes (logging to terminal is often the easiest to work\nwith).\n\nIf you are writing Delphi applications, you really should check out\nZeos. It utilizes native drivers to connect to the database...it's\nreally, really fast and supports all the Delphi controls (and free!).\n\nMerlin\n",
"msg_date": "Tue, 31 Aug 2004 15:00:14 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odbc/ado problems"
}
] |
[
{
"msg_contents": "Hi ,\n\n I am sorry that my question is out of line with this\ngroup(performance) but I need\n\nan urgent help :-( .pls .. I need to know how to change the length of the\ncolumn.\n\n \n\nThanks and hoping that u will not ignore my question\n\n \n\n\n\n\n\n\n\n\n\n\nHi ,\n I am sorry that my question is out of line with\nthis group(performance) but I need\nan urgent help L …pls .. I need\nto know how to change the length of the column.\n \nThanks and hoping that u will not ignore my question",
"msg_date": "Wed, 1 Sep 2004 14:42:00 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changing the column length"
},
{
"msg_contents": "From: \"Michael Ryan S. Puncia\" <[email protected]>\n\n> I am sorry that my question is out of line with this\n> group(performance) but I need\n\n-general might be more appropriate\n\n>\n> an urgent help :-( .pls .. I need to know how to change the length of the\n> column.\n\nadd a new column, use update to copy values from old column,\nuse alter table to rename columns\n\ngnari\n\n\n",
"msg_date": "Wed, 1 Sep 2004 08:47:38 -0000",
"msg_from": "\"gnari\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing the column length"
},
{
"msg_contents": "Michael,\n\n> I am sorry that my question is out of line with this\n> group(performance) but I need\n>\n> an urgent help :-( .pls .. I need to know how to change the length of the\n> column.\n\nIn the future, try to provide more detail on your problem. Fortunately, I \nthink I know what it is.\n\nPostgreSQL does not support changing the length of VARCHAR columns in-place \nuntil version 8.0 (currently in beta). Instead, you need to:\n\n1) Add a new column of the correct length;\n2) Copy the data in the old column to the new column;\n3) Drop the old column;\n4) Rename the new column to the same name as the old column.\n\nI realize that this approach can be quite painful if you have dependant views, \ncontstraints, etc. It's why we fixed it for 8.0. You can also:\n\n1) pg_dump the database in text format;\n2) edit the table definition in the pg_dump file;\n3) re-load the database\n\nWhile it *is* possible to change the column size by updating the system \ntables, doing so is NOT recommended as it can cause unexpected database \nerrors.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 1 Sep 2004 09:32:24 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing the column length"
}
] |
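A minimal sketch of the add/copy/drop/rename sequence Josh lists, using hypothetical names (table t, column name widened to varchar(200)); from 8.0 onwards the same change can be made in place:

  -- pre-8.0 workaround (7.3 or later, for DROP COLUMN)
  BEGIN;
  ALTER TABLE t ADD COLUMN name_tmp varchar(200);
  UPDATE t SET name_tmp = name;
  ALTER TABLE t DROP COLUMN name;
  ALTER TABLE t RENAME COLUMN name_tmp TO name;
  COMMIT;

  -- 8.0 and later
  ALTER TABLE t ALTER COLUMN name TYPE varchar(200);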
[
{
"msg_contents": "\n----- Original Message ----- \nFrom: \"Stefano Bonnin\" <[email protected]>\nTo: \"Josh Berkus\" <[email protected]>\nSent: Monday, August 30, 2004 4:13 PM\nSubject: Re: [PERFORM] Query performance issue with 8.0.0beta1\n\n\n> This is my postgres.conf, I have changed only the work_mem and\n> shared_buffers parameters.\n>\n> >DID you\n> > configure it for the 8.0 database?\n>\n> What does it mean? Is in 8.0 some important NEW configation parameter ?\n>\n> # pgdata = '/usr/local/pgsql/data' # use data in another\n> directory\n> # hba_conf = '/etc/pgsql/pg_hba.conf' # use hba info in another\n> directory\n> # ident_conf = '/etc/pgsql/pg_ident.conf' # use ident info in\nanother\n> directory\n> # external_pidfile= '/var/run/postgresql.pid' # write an extra pid file\n> #listen_addresses = 'localhost' # what IP interface(s) to listen on;\n> # defaults to localhost, '*' = any\n> #port = 5432\n> max_connections = 100\n> #superuser_reserved_connections = 2\n> #unix_socket_directory = ''\n> #unix_socket_group = ''\n> #unix_socket_permissions = 0777 # octal\n> #rendezvous_name = '' # defaults to the computer name\n> #authentication_timeout = 60 # 1-600, in seconds\n> #ssl = false\n> #password_encryption = true\n> #krb_server_keyfile = ''\n> #db_user_namespace = false\n>\n> shared_buffers = 2048 # min 16, at least max_connections*2, 8KB\n> each\n> work_mem = 2048 # min 64, size in KB\n> #maintenance_work_mem = 16384 # min 1024, size in KB\n> #max_stack_depth = 2048 # min 100, size in KB\n>\n> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n> #max_fsm_relations = 1000 # min 100, ~50 bytes each\n>\n> #max_files_per_process = 1000 # min 25\n> #preload_libraries = ''\n>\n> #vacuum_cost_delay = 0 # 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> #vacuum_cost_limit = 200 # 0-10000 credits\n>\n> #bgwriter_delay = 200 # 10-5000 milliseconds\n> #bgwriter_percent = 1 # 1-100% of dirty buffers\n> #bgwriter_maxpages = 100 # 1-1000 buffers max at once\n>\n> #fsync = true # turns forced synchronization on or off\n> #wal_sync_method = fsync # the default varies across platforms:\n> # fsync, fdatasync, open_sync, or\n> open_datasync\n> #wal_buffers = 8 # min 4, 8KB each\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5 # range 1-100\n> #checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n> #checkpoint_timeout = 300 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # 0 is off, in seconds\n>\n> #archive_command = '' # command to use to archive a logfile\n> segment\n>\n> #enable_hashagg = true\n> #enable_hashjoin = true\n> #enable_indexscan = true\n> #enable_mergejoin = true\n> #enable_nestloop = true\n> #enable_seqscan = true\n> #enable_sort = true\n> #enable_tidscan = true\n>\n> #effective_cache_size = 1000 # typically 8KB each\n> #random_page_cost = 4 # units are one sequential page fetch cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n>\n> #geqo = true\n> #geqo_threshold = 12\n> #geqo_effort = 5 # range 1-10\n> #geqo_pool_size = 0 # selects default based on effort\n> #geqo_generations = 0 # selects default based on effort\n> #geqo_selection_bias = 2.0 # range 1.5-2.0\n>\n> default_statistics_target = 20 # range 1-1000\n> #from_collapse_limit = 8\n> #join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n>\n> #log_destination = 
'stderr' # Valid values are combinations of stderr,\n> # syslog and eventlog, depending on\n> # platform.\n>\n> # This is relevant when logging to stderr:\n> #redirect_stderr = false # Enable capturing of stderr into log files.\n> # These are only relevant if redirect_stderr is true:\n> #log_directory = 'pg_log' # Directory where logfiles are written.\n> # May be specified absolute or relative to\n> PGDATA\n> #log_filename_prefix = 'postgresql_' # Prefix for logfile names.\n> #log_rotation_age = 1440 # Automatic rotation of logfiles will happen\n> after\n> # so many minutes. 0 to disable.\n> #log_rotation_size = 10240 # Automatic rotation of logfiles will happen\n> after\n> # so many kilobytes of log output. 0 to\n> disable.\n>\n> # These are relevant when logging to syslog:\n> #syslog_facility = 'LOCAL0'\n> #syslog_ident = 'postgres'\n>\n>\n> # - When to Log -\n>\n> #client_min_messages = notice # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2,\ndebug1,\n> # log, notice, warning, error\n>\n> #log_min_messages = notice # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2,\ndebug1,\n> # info, notice, warning, error, log,\n> fatal,\n> # panic\n>\n> #log_error_verbosity = default # terse, default, or verbose messages\n>\n> #log_min_error_statement = panic # Values in order of increasing severity:\n> # debug5, debug4, debug3, debug2,\ndebug1,\n> # info, notice, warning, error,\n> panic(off)\n>\n> #log_min_duration_statement = -1 # -1 is disabled, in milliseconds.\n>\n> #silent_mode = false # DO NOT USE without syslog or\n> redirect_stderr\n>\n> # - What to Log -\n>\n> #debug_print_parse = false\n> #debug_print_rewritten = false\n> #debug_print_plan = false\n> #debug_pretty_print = false\n> #log_connections = false\n> #log_disconnections = false\n> #log_duration = false\n> #log_line_prefix = '%t %u %d ' # e.g. 
'<%u%%%d> '\n> # %u=user name %d=database name\n> # %r=remote host and port\n> # %p=PID %t=timestamp %i=command tag\n> # %c=session id %l=session line number\n> # %s=session start timestamp\n> # %x=stop here in non-session processes\n> # %%='%'\n> log_statement = 'all' # none, mod, ddl, all\n> #log_hostname = false\n>\n>\n>\n#---------------------------------------------------------------------------\n> # RUNTIME STATISTICS\n>\n#---------------------------------------------------------------------------\n>\n> # - Statistics Monitoring -\n>\n> #log_parser_stats = false\n> #log_planner_stats = false\n> #log_executor_stats = false\n> #log_statement_stats = false\n>\n> # - Query/Index Statistics Collector -\n>\n> #stats_start_collector = true\n> #stats_command_string = false\n> #stats_block_level = false\n> #stats_row_level = false\n> #stats_reset_on_server_start = true\n>\n#---------------------------------------------------------------------------\n> # CLIENT CONNECTION DEFAULTS\n>\n#---------------------------------------------------------------------------\n>\n> # - Statement Behavior -\n>\n> #search_path = '$user,public' # schema names\n> #check_function_bodies = true\n> #default_transaction_isolation = 'read committed'\n> #default_transaction_read_only = false\n> #statement_timeout = 0 # 0 is disabled, in milliseconds\n>\n> # - Locale and Formatting -\n>\n> #datestyle = 'iso, mdy'\n> #timezone = unknown # actually, defaults to TZ environment\n> setting\n> #australian_timezones = false\n> #extra_float_digits = 0 # min -15, max 2\n> #client_encoding = sql_ascii # actually, defaults to database encoding\n>\n> # These settings are initialized by initdb -- they might be changed\n> lc_messages = 'it_IT.UTF-8' # locale for system error message\n> strings\n> lc_monetary = 'it_IT.UTF-8' # locale for monetary formatting\n> lc_numeric = 'it_IT.UTF-8' # locale for number formatting\n> lc_time = 'it_IT.UTF-8' # locale for time formatting\n>\n> # - Other Defaults -\n>\n> #explain_pretty_print = true\n> #dynamic_library_path = '$libdir'\n>\n>\n>\n#---------------------------------------------------------------------------\n> # LOCK MANAGEMENT\n>\n#---------------------------------------------------------------------------\n>\n> #deadlock_timeout = 1000 # in milliseconds\n> #max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each\n>\n>\n>\n#---------------------------------------------------------------------------\n> # VERSION/PLATFORM COMPATIBILITY\n>\n#---------------------------------------------------------------------------\n>\n> # - Previous Postgres Versions -\n> #add_missing_from = true\n> #regex_flavor = advanced # advanced, extended, or basic\n> #sql_inheritance = true\n> #default_with_oids = true\n>\n> # - Other Platforms & Clients -\n>\n> #transform_null_equals = false\n>\n>\n>\n> ************************\n> Thanks\n>\n> PS. I'm sorry for the late answer but I was not in office.\n>\n> ----- Original Message ----- \n> From: \"Josh Berkus\" <[email protected]>\n> To: \"Stefano Bonnin\" <[email protected]>;\n> <[email protected]>\n> Sent: Friday, August 27, 2004 7:14 PM\n> Subject: Re: [PERFORM] Query performance issue with 8.0.0beta1\n>\n>\n> > Stefano,\n> >\n> > > Hi, I have just installed 8.0.0beta1 and I noticed that some query are\n> > > slower than 7.4.2 queries.\n> >\n> > Seems unlikely. How have you configured postgresql.conf? DID you\n> > configure it for the 8.0 database?\n> >\n> > -- \n> > Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n> >\n>\n\n",
"msg_date": "Wed, 1 Sep 2004 09:56:30 +0200",
"msg_from": "\"Stefano Bonnin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: Query performance issue with 8.0.0beta1"
}
] |
[
{
"msg_contents": "Dear all,\n\n I am currently experiencing troubles with the performance of my critical's database.\n\n The problem is the time that the postgres takes to perform/return a query. For example, trying the \\d <tablename> command takes between 4 or 5 seconds. This table is very big, but I am not asking for the rows, only asking the table schema, so...why is this so slow?!?!? My last administrative action into this table was a reindex to all the indexes via the BKI in standalone mode. I thought I suceed, but this was las saturday. Today I am in the same situation again.\n\n The only change that I've done was a highest level of debug in the conf file (loggin lot of stuff). \n I understand that this could lack on performance, but when I've changed the .conf file to the usual .conf file (with less debug), and pg_ctl reload(ed) it, it goes on debuging as in the first state, in the higher level. Is this a known issue? \n\n My conclusion is that I can aquire high levels of debug while the server is running, editing the .conf file, and pg_reload(ing) it, but I can go back then, unless I pg_restart the server. Is this ok?\n\nSome info\n-----------------------------------------------------------\nPostgreSQL 7.4.2\n[postgres@lmnukmis02 data]$ pg_config --configure\n'--enable-thread-safety' '--with-perl'\nIntel(R) Xeon(TM) MP CPU 2.80GHz\nLinux 2.4.24-ck1 #5 SMP Fri Mar 12 23:41:51 GMT 2004 i686 unknown\nRAM 4 Gb.\n-----------------------------------------------------------\n\n\nThanks, Guido.\n\n\n\n",
"msg_date": "Wed, 1 Sep 2004 07:06:22 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "slower every day"
},
{
"msg_contents": "On Wednesday 01 Sep 2004 3:36 pm, G u i d o B a r o s i o wrote:\n> Dear all,\n>\n> I am currently experiencing troubles with the performance of my\n> critical's database.\n>\n> The problem is the time that the postgres takes to perform/return a\n> query. For example, trying the \\d <tablename> command takes between 4 or 5\n> seconds. This table is very big, but I am not asking for the rows, only\n> asking the table schema, so...why is this so slow?!?!? My last\n> administrative action into this table was a reindex to all the indexes via\n> the BKI in standalone mode. I thought I suceed, but this was las saturday.\n> Today I am in the same situation again.\n\nIs this database vacuumed and analyzed recently? I would suggest database-wide \nvacuum full analyze.\n\nIf your queries are getting slower, then checking the explain analyze output \nis a good starting point. To see queries issued by psql, start it as psql -E.\n\nHTH\n\n Shridhar\n",
"msg_date": "Wed, 1 Sep 2004 15:55:36 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slower every day"
},
{
"msg_contents": "Am Mittwoch, 1. September 2004 12:06 schrieb G u i d o B a r o s i o:\n> The problem is the time that the postgres takes to perform/return a\n> query. For example, trying the \\d <tablename> command takes between 4 or 5\n> seconds. This table is very big, but I am not asking for the rows, only\n> asking the table schema, so...why is this so slow?!?!? My last\n> administrative action into this table was a reindex to all the indexes via\n> the BKI in standalone mode. I thought I suceed, but this was las saturday.\n\nDo you regularly vacuum and analyze the database?\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Wed, 1 Sep 2004 12:28:48 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slower every day"
}
] |
[
{
"msg_contents": "The solution appeared as something I didn't know\n\n On the .conf file\n\nPrevious situation:\n\n#log_something=false\nlog_something=true\n\nWorst situation \n#log_something=false\n#log_something=true \n\nNice situation\nlog_something=false\n#log_something=true\n\n\nOk, the problem was that I assumed that commenting a value on\nthe conf file will set it up to a default (false?). I was wrong.\nMy server was writting tons of log's.\n\nIs this the normal behavior for pg_ctl reload? It seems that looks for new values, remembering the last state on the ones that actually are commented. Although it's my fault to have 2 (tow) lines for the same issue, and that I should realize that this is MY MISTAKE, the log defaults on a reload, if commented, tend to be the last value entered?\n\nRegards,\nGuido\n\n\n> Am Mittwoch, 1. September 2004 12:06 schrieb G u i d o B a r o s i o:\n> > The problem is the time that the postgres takes to perform/return a\n> > query. For example, trying the \\d <tablename> command takes between 4 or 5\n> > seconds. This table is very big, but I am not asking for the rows, only\n> > asking the table schema, so...why is this so slow?!?!? My last\n> > administrative action into this table was a reindex to all the indexes via\n> > the BKI in standalone mode. I thought I suceed, but this was las saturday.\n> \n> Do you regularly vacuum and analyze the database?\n> \n> -- \n> Peter Eisentraut\n> http://developer.postgresql.org/~petere/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Wed, 1 Sep 2004 07:57:15 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slower every day"
},
{
"msg_contents": "This issue was resently discussed on hackers. It is a known issue, not very\nconvinient for the user. Nevertheless it is not fixed in 8.0, but will\nperhaps be addressed in the next major release.\n(Remembering, it was a non-trivial thing to change.)\n\nBest Regards,\nMichael Paesold\n\nG u i d o B a r o s i o wrote:\n\n> The solution appeared as something I didn't know\n>\n> On the .conf file\n>\n> Previous situation:\n>\n> #log_something=false\n> log_something=true\n>\n> Worst situation\n> #log_something=false\n> #log_something=true\n>\n> Nice situation\n> log_something=false\n> #log_something=true\n>\n>\n> Ok, the problem was that I assumed that commenting a value on\n> the conf file will set it up to a default (false?). I was wrong.\n> My server was writting tons of log's.\n>\n> Is this the normal behavior for pg_ctl reload? It seems that looks for new\nvalues, remembering the last state on the ones that actually are commented.\nAlthough it's my fault to have 2 (tow) lines for the same issue, and that I\nshould realize that this is MY MISTAKE, the log defaults on a reload, if\ncommented, tend to be the last value entered?\n\n",
"msg_date": "Wed, 1 Sep 2004 16:28:15 +0200",
"msg_from": "\"Michael Paesold\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slower every day"
}
] |
[
{
"msg_contents": "Again me, \n\n To make it easier.\n\nSituation A:\nlog_something = true\n\nSituation B: \n# log_something = <anything>\n\nSituation C:\nlog_something = false\n\nAfter the pg_ctl reload:\n\nSituation B = Situation A\nSituation C <> (Situation A || Situation B)\n\nIs this the expected behavior?\n\nConclusion:\n\nIf you comment a line on the conf file, and reload it, will remain in the last state. (either wast true or false, while I expected a default)\n\nRegards\n\n> The solution appeared as something I didn't know\n> \n> On the .conf file\n> \n> Previous situation:\n> \n> #log_something=false\n> log_something=true\n> \n> Worst situation \n> #log_something=false\n> #log_something=true \n> \n> Nice situation\n> log_something=false\n> #log_something=true\n> \n> \n> Ok, the problem was that I assumed that commenting a value on\n> the conf file will set it up to a default (false?). I was wrong.\n> My server was writting tons of log's.\n> \n> Is this the normal behavior for pg_ctl reload? It seems that looks for new values, remembering the last state on the ones that actually are commented. Although it's my fault to have 2 (tow) lines for the same issue, and that I should realize that this is MY MISTAKE, the log defaults on a reload, if commented, tend to be the last value entered?\n> \n> Regards,\n> Guido\n> \n> \n> > Am Mittwoch, 1. September 2004 12:06 schrieb G u i d o B a r o s i o:\n> > > The problem is the time that the postgres takes to perform/return a\n> > > query. For example, trying the \\d <tablename> command takes between 4 or 5\n> > > seconds. This table is very big, but I am not asking for the rows, only\n> > > asking the table schema, so...why is this so slow?!?!? My last\n> > > administrative action into this table was a reindex to all the indexes via\n> > > the BKI in standalone mode. I thought I suceed, but this was las saturday.\n> > \n> > Do you regularly vacuum and analyze the database?\n> > \n> > -- \n> > Peter Eisentraut\n> > http://developer.postgresql.org/~petere/\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 8: explain analyze is your friend\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Wed, 1 Sep 2004 08:58:53 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slower every day"
},
{
"msg_contents": "G u i d o B a r o s i o wrote:\n> Conclusion:\n> \n> If you comment a line on the conf file, and reload it, will remain in\n> the last state. (either wast true or false, while I expected a\n> default)\n\nYes, that's correct. No, you're not the only one to have been caught out \nby this.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 01 Sep 2004 13:27:54 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] slower every day"
}
] |
[
{
"msg_contents": "Thanks for the reply,\n\n Been reading hackers of Aug 2004 and found the threads. It's a common habit to create two lines on the configuration files, in order to maintain the copy of the default conf file. I guess this should be the worst scenery for a freshly incoming DBA trying to put things in order. \n\n A temporary patch, will be updating documentation, encouraging administrators to use the SHOW ALL; command in the psql env, to confirm that changes where made.\n\n In my case, a 1.2 gig file was written, performance was on the floor. And my previous situation, a reindex force task last saturday, confused me. This is not a trivial problem, but in conjunction with other small problems could become a big one.\n\n Good habits when touching conf files & using the SHOW ALL to confirm that changes where made will help until this is patched. \n\n Thanks for Postgres, \n\nRegards, Guido.\n\n\n> This issue was resently discussed on hackers. It is a known issue, not very\n> convinient for the user. Nevertheless it is not fixed in 8.0, but will\n> perhaps be addressed in the next major release.\n> (Remembering, it was a non-trivial thing to change.)\n> \n> Best Regards,\n> Michael Paesold\n> \n> G u i d o B a r o s i o wrote:\n> \n> > The solution appeared as something I didn't know\n> >\n> > On the .conf file\n> >\n> > Previous situation:\n> >\n> > #log_something=false\n> > log_something=true\n> >\n> > Worst situation\n> > #log_something=false\n> > #log_something=true\n> >\n> > Nice situation\n> > log_something=false\n> > #log_something=true\n> >\n> >\n> > Ok, the problem was that I assumed that commenting a value on\n> > the conf file will set it up to a default (false?). I was wrong.\n> > My server was writting tons of log's.\n> >\n> > Is this the normal behavior for pg_ctl reload? It seems that looks for new\n> values, remembering the last state on the ones that actually are commented.\n> Although it's my fault to have 2 (tow) lines for the same issue, and that I\n> should realize that this is MY MISTAKE, the log defaults on a reload, if\n> commented, tend to be the last value entered?\n> \n\n",
"msg_date": "Wed, 1 Sep 2004 12:07:39 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slower every day"
}
] |
[
{
"msg_contents": "> On Mon, 30 Aug 2004, Martin Sarsale wrote:\n> > \"Multicolumn indexes can only be used if the clauses involving the\n> > indexed columns are joined with AND. For instance,\n> >\n> > SELECT name FROM test2 WHERE major = constant OR minor = constant;\n> \n> You can use DeMorgan's Theorem to transform an OR clause to an AND\nclause.\n> \n> In general:\n> \tA OR B <=> NOT ((NOT A) AND (NOT B))\n> \n> So:\n> \n> > But I need something like:\n> >\n> > select * from t where c<>0 or d<>0;\n> \n> \tselect * from t where not (c=0 and d=0);\n> \n> I haven't actually tried to see if postgresql would do anything\n> interesting after such a transformation.\n\nThat made me really curious. I ran a quick test and it turns out the\nserver used dm's theorem to convert the expression back to 'or' case.\n\nExplain output (see below to set up the test case for this stmnt):\nesp=# explain analyze select * from millions where not (value1 <> 500000\nand value2 <> 200000);\n QUERY\nPLAN\n\n------------------------------------------------------------------------\n----------------------------\n--------------------------------------\n Index Scan using millions_1_idx, millions_2_idx on millions\n(cost=0.00..12.01 rows=2 width=8) (act\nual time=0.000..0.000 rows=2 loops=1)\n Index Cond: ((value1 = 500000) OR (value2 = 200000))\n Total runtime: 0.000 ms\n(3 rows)\n\ndrop table tens;\ndrop table millions;\n\ncreate table tens(value int);\ncreate table millions(value1 int, value2 int);\ninsert into tens values (0);\ninsert into tens values (1);\ninsert into tens values (2);\ninsert into tens values (3);\ninsert into tens values (4);\ninsert into tens values (5);\ninsert into tens values (6);\ninsert into tens values (7);\ninsert into tens values (8);\ninsert into tens values (9);\n\ninsert into millions \n select ones.value + \n (tens.value * 10) +\n (hundreds.value * 100) +\n (thousands.value * 1000) +\n (tenthousands.value * 10000) +\n (hundredthousands.value * 100000) \n from tens ones, \n tens tens,\n tens hundreds,\n tens thousands,\n tens tenthousands,\n tens hundredthousands;\n \nupdate millions set value2 = value1;\n\ncreate index millions_idx1 on millions(value1);\ncreate index millions_idx2 on millions(value2);\ncreate index millions_idx12 on millions(value1, value2);\nvacuum analyze millions;\n\n",
"msg_date": "Wed, 1 Sep 2004 13:53:18 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan instead of index scan"
}
] |
[
{
"msg_contents": "Hello,\n\nToday, we stumbled about the following query plan on PostGreSQL 7.4.1:\n\nlogigis=# explain select count(id) from (select distinct id from (select distinct ref_in_id as id from streets union select distinct nref_in_id as id from streets) as blubb) as blabb; \n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=33246086.32..33246086.32 rows=1 width=8)\n -> Subquery Scan blabb (cost=32970061.50..33246085.82 rows=200 width=8)\n -> Unique (cost=32970061.50..33246083.82 rows=200 width=8)\n -> Sort (cost=32970061.50..33108072.66 rows=55204464 width=8)\n Sort Key: id\n -> Subquery Scan blubb (cost=23697404.79..24525471.75 rows=55204464 width=8)\n -> Unique (cost=23697404.79..23973427.11 rows=55204464 width=8)\n -> Sort (cost=23697404.79..23835415.95 rows=55204464 width=8)\n Sort Key: id\n -> Append (cost=7212374.04..15252815.03 rows=55204464 width=8)\n -> Subquery Scan \"*SELECT* 1\" (cost=7212374.04..7626407.52 rows=27602232 width=8)\n -> Unique (cost=7212374.04..7350385.20 rows=27602232 width=8)\n -> Sort (cost=7212374.04..7281379.62 rows=27602232 width=8)\n Sort Key: ref_in_id\n -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n -> Subquery Scan \"*SELECT* 2\" (cost=7212374.04..7626407.52 rows=27602232 width=8)\n -> Unique (cost=7212374.04..7350385.20 rows=27602232 width=8)\n -> Sort (cost=7212374.04..7281379.62 rows=27602232 width=8)\n Sort Key: nref_in_id\n -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n(20 rows)\n\nI might have to add that this query is not an every-day query, it's\nmerely part of some statistics that a colleague needs for some\nestimations as he has to create a tool that works on this data.\n\nBoth ref_in_id and nref_in_id are non-indexed columns with type int8.\n\nI was somehow irritated by the fact that the Query Plan contains 4 Uniques.\n\nThen, I tried the following query:\n\nlogigis=# explain select count(id) from (select distinct id from (select ref_in_id as id from streets union select nref_in_id as id from streets) as blubb) as blabb;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=24803496.57..24803496.57 rows=1 width=8)\n -> Subquery Scan blabb (cost=24527471.75..24803496.07 rows=200 width=8)\n -> Unique (cost=24527471.75..24803494.07 rows=200 width=8)\n -> Sort (cost=24527471.75..24665482.91 rows=55204464 width=8)\n Sort Key: id\n -> Subquery Scan blubb (cost=15254815.03..16082881.99 rows=55204464 width=8)\n -> Unique (cost=15254815.03..15530837.35 rows=55204464 width=8)\n -> Sort (cost=15254815.03..15392826.19 rows=55204464 width=8)\n Sort Key: id\n -> Append (cost=0.00..6810225.28 rows=55204464 width=8)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..3405112.64 rows=27602232 width=8)\n -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..3405112.64 rows=27602232 width=8)\n -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n(14 rows)\n\nAnd after re-parsing the documentation about UNION, the following one:\n\nlogigis=# explain select count(id) from (select ref_in_id as id from streets union select nref_in_id as id from streets) as blubb;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Aggregate 
(cost=16220893.16..16220893.16 rows=1 width=8)\n -> Subquery Scan blubb (cost=15254815.03..16082881.99 rows=55204464 width=8)\n -> Unique (cost=15254815.03..15530837.35 rows=55204464 width=8)\n -> Sort (cost=15254815.03..15392826.19 rows=55204464 width=8)\n Sort Key: id\n -> Append (cost=0.00..6810225.28 rows=55204464 width=8)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..3405112.64 rows=27602232 width=8)\n -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..3405112.64 rows=27602232 width=8)\n -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n(10 rows)\n\nThose queries should give the same result.\n\nSo, now my question is, why does the query optimizer not recognize that\nit can throw away those \"non-unique\" Sort/Unique passes?\n\nIs PostGreSQL 8 capable of this optimization?\n\n\nThanks,\nMarkus\n\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Thu, 2 Sep 2004 15:43:40 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multiple Uniques"
},
{
"msg_contents": "Markus Schaber <[email protected]> writes:\n> Today, we stumbled about the following query plan on PostGreSQL 7.4.1:\n\n> logigis=# explain select count(id) from (select distinct id from (select distinct ref_in_id as id from streets union select distinct nref_in_id as id from streets) as blubb) as blabb; \n\n> I was somehow irritated by the fact that the Query Plan contains 4 Uniques.\n\nWell, if you write a silly query, you can get a silly plan ...\n\nAs you appear to have realized later, given the definition of UNION,\nall three of the explicit DISTINCTs are redundant.\n\n> So, now my question is, why does the query optimizer not recognize that\n> it can throw away those \"non-unique\" Sort/Unique passes?\n\nBecause the issue doesn't come up often enough to justify expending\ncycles to check for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 2004 10:22:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Uniques "
},
{
"msg_contents": "\nMarkus Schaber <[email protected]> writes:\n\n> logigis=# explain select count(id) from (select ref_in_id as id from streets union select nref_in_id as id from streets) as blubb;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------\n> Aggregate (cost=16220893.16..16220893.16 rows=1 width=8)\n> -> Subquery Scan blubb (cost=15254815.03..16082881.99 rows=55204464 width=8)\n> -> Unique (cost=15254815.03..15530837.35 rows=55204464 width=8)\n> -> Sort (cost=15254815.03..15392826.19 rows=55204464 width=8)\n> Sort Key: id\n> -> Append (cost=0.00..6810225.28 rows=55204464 width=8)\n> -> Subquery Scan \"*SELECT* 1\" (cost=0.00..3405112.64 rows=27602232 width=8)\n> -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n> -> Subquery Scan \"*SELECT* 2\" (cost=0.00..3405112.64 rows=27602232 width=8)\n> -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n\nYou can actually go one step further and do:\n\n select count(distinct id) from (select ... union all select ...) as blubb;\n\nI'm not sure why this is any faster since it still has to do all the same\nwork, but it's a different code path and it seems to be about 20% faster for\nme.\n\n-- \ngreg\n\n",
"msg_date": "02 Sep 2004 15:33:38 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Uniques"
},
{
"msg_contents": "Hi, Greg,\n\nOn 02 Sep 2004 15:33:38 -0400\nGreg Stark <[email protected]> wrote:\n\n> Markus Schaber <[email protected]> writes:\n> \n> > logigis=# explain select count(id) from (select ref_in_id as id from streets union select nref_in_id as id from streets) as blubb;\n> > QUERY PLAN \n> > ---------------------------------------------------------------------------------------------------------\n> > Aggregate (cost=16220893.16..16220893.16 rows=1 width=8)\n> > -> Subquery Scan blubb (cost=15254815.03..16082881.99 rows=55204464 width=8)\n> > -> Unique (cost=15254815.03..15530837.35 rows=55204464 width=8)\n> > -> Sort (cost=15254815.03..15392826.19 rows=55204464 width=8)\n> > Sort Key: id\n> > -> Append (cost=0.00..6810225.28 rows=55204464 width=8)\n> > -> Subquery Scan \"*SELECT* 1\" (cost=0.00..3405112.64 rows=27602232 width=8)\n> > -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n> > -> Subquery Scan \"*SELECT* 2\" (cost=0.00..3405112.64 rows=27602232 width=8)\n> > -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n> \n> You can actually go one step further and do:\n> \n> select count(distinct id) from (select ... union all select ...) as blubb;\n\nHmm, as query plan, it gives me:\n\n> logigis=# explain select count(distinct id) from (select ref_in_id as id from streets union all select nref_in_id as id from streets) as blubb;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------\n> Aggregate (cost=7500281.08..7500281.08 rows=1 width=8)\n> -> Subquery Scan blubb (cost=0.00..7362269.92 rows=55204464 width=8)\n> -> Append (cost=0.00..6810225.28 rows=55204464 width=8)\n> -> Subquery Scan \"*SELECT* 1\" (cost=0.00..3405112.64 rows=27602232 width=8)\n> -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n> -> Subquery Scan \"*SELECT* 2\" (cost=0.00..3405112.64 rows=27602232 width=8)\n> -> Seq Scan on streets (cost=0.00..3129090.32 rows=27602232 width=8)\n> (7 rows)\n\nThis leaves off the sort and unique completely, and leaves all work to\nthe count aggrecation. Maybe this is implemented by hashing and thus\nsomehow more efficient...\n\n> I'm not sure why this is any faster since it still has to do all the same\n> work, but it's a different code path and it seems to be about 20% faster for\n> me.\n\nWhen comparing the actual timings, I get:\n\n> logigis=# \\timing\n> Timing is on.\n> logigis=# select count(distinct id) from (select ref_in_id as id from streets union all select nref_in_id as id from streets) as blubb;\n> count \n> ----------\n> 20225596\n> (1 row)\n> \n> Time: 1339098.226 ms\n> logigis=# select count(id) from (select ref_in_id as id from streets union select nref_in_id as id from streets) as blubb;\n> count \n> ----------\n> 20225596\n> (1 row)\n> \n> Time: 1558920.784 ms\n> logigis=# \n\nSo you're right, its faster this way. There seems to be some room to\nplay with optimizer enhancements.\n\nBut simply removing all distincts from subselects would not be the best\nway to go, as reducing intermediate datasets can also improve the\nperformance.\n\n\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Mon, 6 Sep 2004 17:56:18 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multiple Uniques"
},
{
"msg_contents": "Tom Lane wrote:\n> Markus Schaber <[email protected]> writes:\n>>So, now my question is, why does the query optimizer not recognize that\n>>it can throw away those \"non-unique\" Sort/Unique passes?\n> \n> Because the issue doesn't come up often enough to justify expending\n> cycles to check for it.\n\nHow many cycles are we really talking about, though? I have a patch \nwhich I'll send along in a few days which implements a similar \noptimization: if a subselect is referenced by EXISTS or IN, we can \ndiscard DISTINCT and ORDER BY clauses in the subquery (actually, we \ncan't discard ORDER BY in the case of IN if LIMIT is also specified, but \nthe point remains). It's very cheap computationally for the planner to \ndo this simplification, and I'd imagine doing the equivalent \nsimplifications for UNION is similarly cheap.\n\nWhile I understand what you're saying WRT to it being a silly query, in \nthe real world people make mistakes...\n\n-Neil\n",
"msg_date": "Fri, 10 Sep 2004 09:06:23 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Uniques"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> Tom Lane wrote:\n>> Because the issue doesn't come up often enough to justify expending\n>> cycles to check for it.\n\n> How many cycles are we really talking about, though? I have a patch \n> which I'll send along in a few days which implements a similar \n> optimization: if a subselect is referenced by EXISTS or IN, we can \n> discard DISTINCT and ORDER BY clauses in the subquery\n\nI don't think either of those is worth doing. ORDER BY in a sub-select\nisn't even legal SQL, much less probable, so why should we expend even\na nanosecond to optimize it? The DISTINCT is more of a judgment call,\nbut my thought when I looked at it originally is that it would give\npeople a possible optimization knob. If you write DISTINCT in an IN\nclause then you can get a different plan (the IN reduces to an ordinary\njoin) that might or might not be better than without it. We shouldn't\ntake away that possibility just on the grounds of nanny-ism.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Sep 2004 23:07:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Uniques "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Neil Conway <[email protected]> writes:\n> \n> > How many cycles are we really talking about, though? I have a patch \n> > which I'll send along in a few days which implements a similar \n> > optimization: if a subselect is referenced by EXISTS or IN, we can \n> > discard DISTINCT and ORDER BY clauses in the subquery\n> \n> I don't think either of those is worth doing. ORDER BY in a sub-select\n> isn't even legal SQL, much less probable, so why should we expend even\n> a nanosecond to optimize it? The DISTINCT is more of a judgement call,\n> but my thought when I looked at it originally is that it would give\n> people a possible optimization knob. If you write DISTINCT in an IN\n> clause then you can get a different plan (the IN reduces to an ordinary\n> join) that might or might not be better than without it. We shouldn't\n> take away that possibility just on the grounds of nanny-ism.\n\nJust one user's 2c: Consider the plight of dynamically constructed queries.\nThe queries within \"IN\" clauses are particularly likely to be constructed this\nway. The query in the IN clause could be a constructed in an entirely separate\nfunction without any idea that it will be used within an IN clause.\n\nE.g. something like:\n\n$accessible_ids = $security_manager->get_accessible_ids_query($this->userid);\n$selected_columns = $this->selected_columns_parameters();\n$query = \"select $selected_columns where id IN ($accessible_ids)\"\n\nIn an ideal world functionally equivalent queries should always generate\nidentical plans. Of course there are limitations, it's not an ideal world, but\nas much as possible it should be possible to write code without having to\nworry whether the optimizer will be able to figure it out.\n\n-- \ngreg\n\n",
"msg_date": "10 Sep 2004 02:34:28 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Uniques"
},
{
"msg_contents": "Hello,\n\n* I need information on the size of pg ARRAY[]'s :\n\nI did not find any info in the Docs on this.\nHow many bytes does an array take on disk ?\n\nIs there a difference between an array of fixed size elements like \nintegers, and an array of variable length elements like text ? is there a \npointer table ? Or are the elements packed together ?\n\nIs there any advantage in using a smallint[] over an integer[] regarding \nsize ?\n\nDoes a smallint[] with 2 elements really take 12 bytes ?\n\n* On Alignment :\n\nThe docs say fields are aligned on 4-bytes boundaries.\nDoes this mean that several consecutive smallint fields will take 4 bytes \neach ?\nWhat about seleral consecutive \"char\" fields ? 4 bytes each too ?\n\nI ask this because I'll have a lot of columns with small values to store \nin a table, and\nwould like it to be small and to fit in the cache.\n\n\nThanks for any info.\n",
"msg_date": "Fri, 10 Sep 2004 08:36:16 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Question on Byte Sizes"
}
] |