[
{
"msg_contents": "Hi,\n\nI've been trying to spec a new server for my company's database for a\nfew weeks and one of the biggest problems I've had is trying to find\nmeaningful performance information about how PostgreSQL will perfom\nunder various disk configurations.\n\nBut, we have now taken the plunge and I'm in a position to do some\nbenchmarking to actually get some data. Basically I was wondering if\nanyone else had any particular recommendations (or requests) about the\nmost useful kinds of benchmarks to do.\n\n\nThe hardware I'll be benchmarking on is...\n\nserver 1: single 2.8Ghz Xeon, 2Gb RAM. Adaptec 2410SA SATA hardware\nRAID, with 4 x 200Gb 7200rpm WD SATA drives. RAID in both RAID5 and\nRAID10 (currently RAID5, but want to experiment with write performance\nin RAID10). Gentoo Linux\n\nserver 2: single 2.6Ghz Xeon, 2Gb RAM, single 80Gb IDE drive. Redhat\nLinux\n\nserver 3: dual 2.6Ghz Xeon, 6Gb RAM, software RAID10 with 4 x 36Gb\n10kRPM U320 SCSI drives, RedHat Linux\n\n\nI realise the boxes aren't all identical - but some benchmarks on those\nshould give some ballpark figures for anyone else speccing out a\nlow-mid range box and wanting some performance figures on IDE vs IDE\nRAID vs SCSI RAID\n\nI'd be more than happy to post any results back to the list, and if\nanyone else can contribute any other data points that'd be great.\n\nOtherwise, any pointers to a quick/easy setup for some vaguely useful\nbenchmarks would be great. At the moment I'm thinking just along the\nlines of 'pgbench -c 10 -s 100 -v'.\n\nCheers\n\nShane\n\n",
"msg_date": "3 Sep 2004 09:44:36 -0700",
"msg_from": "\"Shane Wright\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "disk performance benchmarks"
},
{
"msg_contents": ">>>>> \"SW\" == Shane Wright <Shane> writes:\n\nSW> But, we have now taken the plunge and I'm in a position to do some\nSW> benchmarking to actually get some data. Basically I was wondering if\nSW> anyone else had any particular recommendations (or requests) about the\nSW> most useful kinds of benchmarks to do.\n\nI did a bunch of benchmarking on a 14 disk SCSI RAID array comparing\nRAID 5, 10, and 50. My tests consisted of doing a full restore of a\n30Gb database (including indexes) and comparing the times to do the\nrestore, the time to make the indexes, and the time to vacuum. Then I\nran a bunch of queries.\n\nIt was damn near impossible to pick a 'better' RAID config, so I just\nwent with RAID5.\n\nYou can find many of my posts on this topic on the list archives from\nabout august - october of last year.\n\nBasically, you have to approach it holistically to tune the system: Pg\nconfig parameters, memory, and disk speed are the major factors.\n\nThat and your schema needs to be not idiotic. :-)\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 14 Sep 2004 13:28:59 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "On Tue, 2004-09-14 at 10:28, Vivek Khera wrote:\n> >>>>> \"SW\" == Shane Wright <Shane> writes:\n> \n> SW> But, we have now taken the plunge and I'm in a position to do some\n> SW> benchmarking to actually get some data. Basically I was wondering if\n> SW> anyone else had any particular recommendations (or requests) about the\n> SW> most useful kinds of benchmarks to do.\n> \n> I did a bunch of benchmarking on a 14 disk SCSI RAID array comparing\n> RAID 5, 10, and 50. My tests consisted of doing a full restore of a\n> 30Gb database (including indexes) and comparing the times to do the\n> restore, the time to make the indexes, and the time to vacuum. Then I\n> ran a bunch of queries.\n> \n> It was damn near impossible to pick a 'better' RAID config, so I just\n> went with RAID5.\n> \n> You can find many of my posts on this topic on the list archives from\n> about august - october of last year.\n> \n> Basically, you have to approach it holistically to tune the system: Pg\n> config parameters, memory, and disk speed are the major factors.\n> \n> That and your schema needs to be not idiotic. :-)\n\nI've recently bee frustrated by this topic, because it seems like you\ncan design the hell out of a system, getting everything tuned with micro\nand macro benchmarks, but when you put it in production the thing falls\napart.\n\nCurrent issue:\n\nA dual 64-bit Opteron 244 machine with 8GB main memory, two 4-disk RAID5\narrays (one for database, one for xlogs). PG's config is extremely\ngenerous, and in isolated benchmarks it's very fast.\n\nBut, in reality, performance is abyssmal. There's something about what\nPG does inside commits and checkpoints that sends Linux into a catatonic\nstate. For instance here's a snapshot of vmstat during a parallel heavy\nselect/insert load:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 3 0 216 13852 39656 7739724 0 0 820 2664 2868 2557 16 2 74 7\n 0 0 216 17580 39656 7736460 0 0 3024 4700 3458 4313 42 6 52 0\n 0 0 216 16428 39676 7737324 0 0 840 4248 3930 4516 0 4 89 8\n 0 1 216 18620 39672 7736920 0 0 7576 516 2738 3347 1 4 55 39\n 0 0 216 14972 39672 7738960 0 0 1992 2532 2509 2288 2 3 93 3\n 0 0 216 13564 39672 7740592 0 0 1640 2656 2581 2066 1 3 97 0\n 0 0 216 12028 39672 7742292 0 0 1688 3576 2072 1626 1 2 96 0\n 0 0 216 18364 39680 7736164 0 0 1804 3372 1836 1379 1 4 96 0\n 0 0 216 16828 39684 7737588 0 0 1432 2756 2256 1720 1 3 94 2\n 0 0 216 15452 39684 7738812 0 0 1188 2184 2384 1830 1 2 97 0\n 0 1 216 15388 39684 7740104 0 0 1336 2628 2490 1974 2 3 94 2\n 6 0 216 15424 39684 7740240 0 0 104 3472 2757 1940 3 2 92 2\n 0 0 216 14784 39700 7741856 0 0 1668 3320 2718 2332 0 3 97 0\n\nYou can see there's not much progress being made there. In the\npresence of a farily pathetic writeout, there's a tiny trickle of disk\nreads, userspace isn't making any progress, the kernel isn't busy, and\nfew processes are in iowait. So what the heck is going on?\n\nThis state of non-progress persists as long as the checkpoint subprocess\nis active. I'm sure there's some magic way to improve this but I\nhaven't found it yet.\n\nPS this is with Linux 2.6.7.\n\nRegards,\njwb\n",
"msg_date": "Tue, 14 Sep 2004 11:11:38 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "On Tue, Sep 14, 2004 at 11:11:38AM -0700, Jeffrey W. Baker wrote:\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 3 0 216 13852 39656 7739724 0 0 820 2664 2868 2557 16 2 74 7\n> 0 0 216 17580 39656 7736460 0 0 3024 4700 3458 4313 42 6 52 0\n> 0 0 216 16428 39676 7737324 0 0 840 4248 3930 4516 0 4 89 8\n> 0 1 216 18620 39672 7736920 0 0 7576 516 2738 3347 1 4 55 39\n> 0 0 216 14972 39672 7738960 0 0 1992 2532 2509 2288 2 3 93 3\n> 0 0 216 13564 39672 7740592 0 0 1640 2656 2581 2066 1 3 97 0\n> 0 0 216 12028 39672 7742292 0 0 1688 3576 2072 1626 1 2 96 0\n> 0 0 216 18364 39680 7736164 0 0 1804 3372 1836 1379 1 4 96 0\n> 0 0 216 16828 39684 7737588 0 0 1432 2756 2256 1720 1 3 94 2\n> 0 0 216 15452 39684 7738812 0 0 1188 2184 2384 1830 1 2 97 0\n> 0 1 216 15388 39684 7740104 0 0 1336 2628 2490 1974 2 3 94 2\n> 6 0 216 15424 39684 7740240 0 0 104 3472 2757 1940 3 2 92 2\n> 0 0 216 14784 39700 7741856 0 0 1668 3320 2718 2332 0 3 97 0\n> \n> You can see there's not much progress being made there. In the\n\nThose IO numbers look pretty high for nothing going on. Are you sure\nyou're not IO bound?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 14 Sep 2004 16:45:40 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "On Tue, 2004-09-14 at 14:45, Jim C. Nasby wrote:\n> On Tue, Sep 14, 2004 at 11:11:38AM -0700, Jeffrey W. Baker wrote:\n> > procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> > r b swpd free buff cache si so bi bo in cs us sy id wa\n> > 3 0 216 13852 39656 7739724 0 0 820 2664 2868 2557 16 2 74 7\n> > 0 0 216 17580 39656 7736460 0 0 3024 4700 3458 4313 42 6 52 0\n> > 0 0 216 16428 39676 7737324 0 0 840 4248 3930 4516 0 4 89 8\n> > 0 1 216 18620 39672 7736920 0 0 7576 516 2738 3347 1 4 55 39\n> > 0 0 216 14972 39672 7738960 0 0 1992 2532 2509 2288 2 3 93 3\n> > 0 0 216 13564 39672 7740592 0 0 1640 2656 2581 2066 1 3 97 0\n> > 0 0 216 12028 39672 7742292 0 0 1688 3576 2072 1626 1 2 96 0\n> > 0 0 216 18364 39680 7736164 0 0 1804 3372 1836 1379 1 4 96 0\n> > 0 0 216 16828 39684 7737588 0 0 1432 2756 2256 1720 1 3 94 2\n> > 0 0 216 15452 39684 7738812 0 0 1188 2184 2384 1830 1 2 97 0\n> > 0 1 216 15388 39684 7740104 0 0 1336 2628 2490 1974 2 3 94 2\n> > 6 0 216 15424 39684 7740240 0 0 104 3472 2757 1940 3 2 92 2\n> > 0 0 216 14784 39700 7741856 0 0 1668 3320 2718 2332 0 3 97 0\n> > \n> > You can see there's not much progress being made there. In the\n> \n> Those IO numbers look pretty high for nothing going on. Are you sure\n> you're not IO bound?\n\nJust for the list to get an idea of the kinds of performance problems\nI'm trying to eliminate, check out these vmstat captures:\n\nhttp://saturn5.com/~jwb/pg.html\n\nPerformance is okay-ish for about three minutes at a stretch and then\nextremely bad during the fourth minute, and the cycle repeats all day. \nDuring the bad periods everything involving the database just blocks.\n\n-jwb\n",
"msg_date": "Tue, 14 Sep 2004 15:48:51 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "\n>You can see there's not much progress being made there. In the\n>presence of a farily pathetic writeout, there's a tiny trickle of disk\n>reads, userspace isn't making any progress, the kernel isn't busy, and\n>few processes are in iowait. So what the heck is going on?\n>\n>This state of non-progress persists as long as the checkpoint subprocess\n>is active. I'm sure there's some magic way to improve this but I\n>haven't found it yet.\n>\n> \n>\nHello,\n\nIt is my experience that RAID 5 is not that great for heavy write \nsituations and that RAID 10 is better.\nAlso as you are on linux you may want to take a look at what file system \nyou are using. EXT3 for example is\nknown to be stable, if a very slow piggy.\n\nJ\n\n\n\n\n\n>PS this is with Linux 2.6.7.\n>\n>Regards,\n>jwb\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n",
"msg_date": "Tue, 14 Sep 2004 18:49:21 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "On Sep 14, 2004, at 9:49 PM, Joshua D. Drake wrote:\n\n> It is my experience that RAID 5 is not that great for heavy write \n> situations and that RAID 10 is better.\n>\nIt is my experience that this depends entirely on how many spindles you \nhave in your RAID. For 4 or 5 spindles, I find RAID10 faster. With 14 \nspindles, it was more or less a toss-up for me.",
"msg_date": "Tue, 14 Sep 2004 22:20:23 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "\nVivek Khera <[email protected]> writes:\n\n> On Sep 14, 2004, at 9:49 PM, Joshua D. Drake wrote:\n> \n> > It is my experience that RAID 5 is not that great for heavy write situations\n> > and that RAID 10 is better.\n> >\n> It is my experience that this depends entirely on how many spindles you have in\n> your RAID. For 4 or 5 spindles, I find RAID10 faster. With 14 spindles, it\n> was more or less a toss-up for me.\n\nI think this depends massively on the hardware involved and the applications\ninvolved. \n\nFor write heavy application I would expect RAID5 to be a lose on any\nsoftware-raid based solution. Only with good hardware raid systems with very\nlarge battery-backed cache would it begin to be effective.\n\n-- \ngreg\n\n",
"msg_date": "15 Sep 2004 02:07:08 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "Jeffrey W. Baker wrote:\n\n> Current issue:\n>\n> A dual 64-bit Opteron 244 machine with 8GB main memory, two 4-disk RAID5\n> arrays (one for database, one for xlogs). PG's config is extremely\n> generous, and in isolated benchmarks it's very fast.\n\nIt depends on the controller, but usually I would expect a better\nperformance if xlogs are just on a two-disk mirror and the rest of the disks\nfor data (6 splindles instead of 4 then).\n\nI don't think RAID5 is a benefit for xlogs.\n\nRegards,\nMichael Paesold\n\n> But, in reality, performance is abyssmal. There's something about what\n> PG does inside commits and checkpoints that sends Linux into a catatonic\n> state. For instance here's a snapshot of vmstat during a parallel heavy\n> select/insert load:\n...\n\n",
"msg_date": "Wed, 15 Sep 2004 11:39:42 +0200",
"msg_from": "\"Michael Paesold\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "On Wed, 2004-09-15 at 02:39, Michael Paesold wrote:\n> Jeffrey W. Baker wrote:\n> \n> > Current issue:\n> >\n> > A dual 64-bit Opteron 244 machine with 8GB main memory, two 4-disk RAID5\n> > arrays (one for database, one for xlogs). PG's config is extremely\n> > generous, and in isolated benchmarks it's very fast.\n> \n> It depends on the controller, but usually I would expect a better\n> performance if xlogs are just on a two-disk mirror and the rest of the disks\n> for data (6 splindles instead of 4 then).\n> \n> I don't think RAID5 is a benefit for xlogs.\n\nAll these replies are really interesting, but the point is not that my\nRAIDs are too slow, or that my CPUs are too slow. My point is that, for\nlong stretches of time, by database doesn't come anywhere near using the\ncapacity of the hardware. And I think that's odd and would like to\nconfig it to \"false\".\n\n-jwb\n",
"msg_date": "Wed, 15 Sep 2004 09:11:37 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": ">>>>> \"GS\" == Greg Stark <[email protected]> writes:\n\nGS> For write heavy application I would expect RAID5 to be a lose on\nGS> any software-raid based solution. Only with good hardware raid\nGS> systems with very large battery-backed cache would it begin to be\nGS> effective.\n\nWho in their right mind would run a 14 spindle RAID in software? :-)\n\nBattery backed write-back cache is definitely mandatory for performance.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Wed, 15 Sep 2004 13:51:38 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": ">>>>> \"JWB\" == Jeffrey W Baker <[email protected]> writes:\n\nJWB> All these replies are really interesting, but the point is not that my\nJWB> RAIDs are too slow, or that my CPUs are too slow. My point is that, for\nJWB> long stretches of time, by database doesn't come anywhere near using the\nJWB> capacity of the hardware. And I think that's odd and would like to\nJWB> config it to \"false\".\n\nHave you tried to increase your checkpoing_segments? I get the\nsuspicion that you're checkpointing every 3 minutes constantly.\nYou'll have to restart the postmaster for this setting to take effect,\nI believe.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Wed, 15 Sep 2004 13:53:18 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "Jeffrey W. Baker wrote:\n\n>All these replies are really interesting, but the point is not that my\n>RAIDs are too slow, or that my CPUs are too slow. My point is that, for\n>long stretches of time, by database doesn't come anywhere near using the\n>capacity of the hardware. And I think that's odd and would like to\n>config it to \"false\".\n> \n>\nWhat motherboard are you using, and what distro? Earlier you mentioned \nthat you're on linux 2.6.7 and\na 64-bit Opteron 244 machine with 8GB main memory, two 4-disk RAID5 \narrays (one for\ndatabase, one for xlogs).\n\nAlso, did you have a chance to test performance before you implemented RAID?\n\nRon\n\n\n",
"msg_date": "Wed, 15 Sep 2004 10:55:21 -0700",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] disk performance benchmarks"
},
{
"msg_contents": "On Wed, 2004-09-15 at 10:53, Vivek Khera wrote:\n> >>>>> \"JWB\" == Jeffrey W Baker <[email protected]> writes:\n> \n> JWB> All these replies are really interesting, but the point is not that my\n> JWB> RAIDs are too slow, or that my CPUs are too slow. My point is that, for\n> JWB> long stretches of time, by database doesn't come anywhere near using the\n> JWB> capacity of the hardware. And I think that's odd and would like to\n> JWB> config it to \"false\".\n> \n> Have you tried to increase your checkpoing_segments? I get the\n> suspicion that you're checkpointing every 3 minutes constantly.\n> You'll have to restart the postmaster for this setting to take effect,\n> I believe.\n\nI have checkpoint_segments set to 24, but I get the feeling that making\nit larger may have the opposite effect of what I want, by extending the\nperiod during which the DB makes no progress.\n\n-jwb\n",
"msg_date": "Wed, 15 Sep 2004 11:36:18 -0700",
"msg_from": "\"Jeffrey W. Baker\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "On Wed, Sep 15, 2004 at 11:36:18AM -0700, Jeffrey W. Baker wrote:\n> On Wed, 2004-09-15 at 10:53, Vivek Khera wrote:\n> > >>>>> \"JWB\" == Jeffrey W Baker <[email protected]> writes:\n> > \n> > JWB> All these replies are really interesting, but the point is not that my\n> > JWB> RAIDs are too slow, or that my CPUs are too slow. My point is that, for\n> > JWB> long stretches of time, by database doesn't come anywhere near using the\n> > JWB> capacity of the hardware. And I think that's odd and would like to\n> > JWB> config it to \"false\".\n> > \n> > Have you tried to increase your checkpoing_segments? I get the\n> > suspicion that you're checkpointing every 3 minutes constantly.\n> > You'll have to restart the postmaster for this setting to take effect,\n> > I believe.\n> \n> I have checkpoint_segments set to 24, but I get the feeling that making\n> it larger may have the opposite effect of what I want, by extending the\n> period during which the DB makes no progress.\n\nIt sounds strange that the DB stops doing anything while the checkpoint\nis in progress. Have you tried poking at pg_locks during that interval?\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"La naturaleza, tan fr�gil, tan expuesta a la muerte... y tan viva\"\n\n",
"msg_date": "Wed, 15 Sep 2004 14:47:39 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
},
{
"msg_contents": "On Wed, 15 Sep 2004 09:11:37 -0700, Jeffrey W. Baker <[email protected]> wrote:\n> On Wed, 2004-09-15 at 02:39, Michael Paesold wrote:\n> > Jeffrey W. Baker wrote:\n> >\n> > > Current issue:\n> > >\n> > > A dual 64-bit Opteron 244 machine with 8GB main memory, two 4-disk RAID5\n> > > arrays (one for database, one for xlogs). PG's config is extremely\n> > > generous, and in isolated benchmarks it's very fast.\n> >\n> > It depends on the controller, but usually I would expect a better\n> > performance if xlogs are just on a two-disk mirror and the rest of the disks\n> > for data (6 splindles instead of 4 then).\n> >\n> > I don't think RAID5 is a benefit for xlogs.\n> \n> All these replies are really interesting, but the point is not that my\n> RAIDs are too slow, or that my CPUs are too slow. My point is that, for\n> long stretches of time, by database doesn't come anywhere near using the\n> capacity of the hardware. And I think that's odd and would like to\n> config it to \"false\".\n\nUmh, I don't think you have shown any numbers to show if the database\nis using the capacity of the hardware or not...\n\nIf this is a seek heavy operation, the raw throughput is irrelevant;\nyou are limited by the number of seeks your disks can do. Run some\niostats and look at the number of transactions per second.\n\nUsing raid 5 can just destroy the number of write transactions per\nsecond you can do, especially if it is software raid or a cheap raid\ncontroller.\n\nYou can't just say \"the hardware is fine and not stressed so I don't\nwant to discuss that, but everything is too slow so please make it\nfaster\".\n",
"msg_date": "Wed, 15 Sep 2004 12:13:54 -0700",
"msg_from": "Marc Slemko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk performance benchmarks"
}
]
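The checkpoint-stall discussion above ends with two concrete suggestions: Vivek's (raise checkpoint_segments) and Alvaro's (poke at pg_locks while the stall is happening). Below is a minimal sketch of both checks, assuming a 7.4/8.0-era server where these setting and catalog names apply; the segment value in the comment is an illustration, not a recommendation.

```sql
-- Current checkpoint spacing (read-only view of postgresql.conf settings).
SHOW checkpoint_segments;
SHOW checkpoint_timeout;

-- While the database appears frozen, list lock requests that have not been
-- granted; an empty result suggests the stall is I/O-related, not locking.
SELECT l.relation::regclass AS relation, l.pid, l.mode, l.granted
FROM pg_locks l
WHERE NOT l.granted;

-- Raising the spacing between checkpoints is done in postgresql.conf,
-- e.g.  checkpoint_segments = 48  (illustrative value), followed by a
-- postmaster restart, as noted in the thread.
```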
[
{
"msg_contents": "> > \tThere is also the fact that NTFS is a very slow filesystem, and\n> > Linux is\n> > a lot better than Windows for everything disk, caching and IO related.\n> Try\n> > to copy some files in NTFS and in ReiserFS...\n> \n> I'm not so sure I would agree with such a blanket generalization. I find\n> NTFS to be very fast, my main complaint is fragmentation issues...I bet\n> NTFS is better than ext3 at most things (I do agree with you about the\n> cache, thoughO.\n\nOk, you were right. I made some tests and NTFS is just not very good in the general case. I've seen some benchmarks for Reiser4 that are just amazing.\n\nMerlin\n",
"msg_date": "Fri, 3 Sep 2004 13:08:24 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": ">> > \tThere is also the fact that NTFS is a very slow filesystem, and\n>> > Linux is\n>> > a lot better than Windows for everything disk, caching and IO related.\n>> Try\n>> > to copy some files in NTFS and in ReiserFS...\n>>\n>> I'm not so sure I would agree with such a blanket generalization. I \n>> find\n>> NTFS to be very fast, my main complaint is fragmentation issues...I bet\n>> NTFS is better than ext3 at most things (I do agree with you about the\n>> cache, thoughO.\n>\n> Ok, you were right. I made some tests and NTFS is just not very good in \n> the general case. I've seen some benchmarks for Reiser4 that are just \n> amazing.\n\n\tAs a matter of fact I was again amazed today.\n\tI was looking into a way to cache database queries for a website (not \nyet) written in Python. The purpose was to cache long queries like those \nused to render forum pages (which is the typical slow query, selecting \n from a big table where records are rather random and LIMIT is used to cut \nthe result in pages).\n\tI wanted to save a serialized (python pickled) representation of the data \nto disk to avoid reissuing the query every time.\n\tIn the end it took about 1 ms to load or save the data for a page with 40 \nposts... then I wondered, how much does it take just to read or write the \nfile ?\n\n\tReiserFS 3.6, Athlon XP 2.5G+, 512Mb DDR400\n\t7200 RPM IDE Drive with 8MB Cache\n\tThis would be considered a very underpowered server...\n\n\t22 KB files, 1000 of them :\n\topen(), read(), close() : 10.000 files/s\n\topen(), write(), close() : 4.000 files/s\n\n\tThis is quite far from database FS activity, but it's still amazing, \nalthough the disk doesn't even get used. Which is what I like in Linux. \nYou can write 10000 files in one second and the HDD is still idle... then \nwhen it decides to flush it all goes to disk in one burst.\n\n\tI did make benchmarks some time ago and found that what sets Linux apart \n from Windows in terms of filesystems is :\n\t- very high performance filesystems like ReiserFS\n\tThis is the obvious part ; although with a huuuuge amount of data in \nsmall files accessed randomly, ReiserFS is faster but not 10x, maybe \nsomething like 2x NTFS. I trust Reiser4 to offer better performance, but \nnot right now. Also ReiserFS lacks a defragmenter, and it gets slower \nafter 1-2 years (compared to 1-2 weeks with NTFS this is still not that \nbad, but I'd like to defragment and I cant). Reiser4 will fix that \napparently with background defragger etc.\n\n\t- caching.\n\tLinux disk caching is amazing. When copying a large file to the same disk \non Windows, the drive head swaps a lot, like the OS can't decide between \nreading and writing. Linux, on the other hand, reads and writes by large \nchunks and loses a lot less time seekng. Even when reading two files at \nthe same time, Linux reads ahead in large chunks (very little performance \nloss) whereas Windows seeks a lot. The read-ahead and write-back thus gets \nit a lot faster than 2x NTFS for everyday tasks like copying files, \nbacking up, making archives, grepping, serving files, etc...\n\tMy windows box was able to saturate a 100Mbps ethernet while serving one \nlarge FTP file on the LAN (not that impressive, it's only 10 MB/s hey!). \nHowever, when several simultaneous clients were trying to download \ndifferent files which were not in the disk cache, all hell broke loose : \nlots of seeking, and bandwidth dropped to 30 Mbits/s. 
Not enough \nread-ahead...\n\tThe Linux box, serving FTP, with half the RAM (256 Mb), had no problem \npushing the 100 Mbits/s with something like 10 simultaneous connections. \nThe amusing part is that I could not use the Windows box to test it \nbecause it would choke at such a \"high\" IO concurrency (writing 10 \nMBytes/s to several files at once, my god).\n\tOf course the files which had been downloaded to the Windows box were cut \nin as many fragments as the number of disk seeks during the download... \nseveral hundred fragments each... my god...\n\n\tWhat amazes me is that it must just be some parameter somewhere and the \nMicrosoft guys probably could have easily changed the read-ahead \nthresholds and time between seeks when in a multitasking environment, but \nthey didn't. Why ?\n\n\tThus people are forced to buy 10000RPM SCSI drives for their LAN servers \nwhen an IDE raid, used with Linux, could push nearly a Gigabit...\n\n\tFor database, this is different, as we're concerned about large files, \nand fsync() times... but it seems reiserfs still wins over ext3 so...\n\n\tAbout NTFS vs EXT3 : ext3 dies if you put a lot of files in the same \ndirectory. It's fast but still outperformed by reiser.\n\n\tI saw XFS fry eight 7 harddisk RAID bays. The computer was rebooted with \nthe Reset button a few times because a faulty SCSI cable in the eighth \nRAID bay was making it hang. The 7 bays had no problem. When it went back \nup, all the bays were in mayhem. XFSrepair just vomited over itself and we \ngot plenty of files with random data in them. Fortunately there was a \ncatalog of files with their checksums so at least we could know which \nfiles were okay. Have you tried restoring that amount of data from a \nbackup ?\n\n\tNow maybe this was just bad luck and crap hardware, but I still won't \ntouch XFS. Amazing performance on large files though...\n\n\tI've had my computers shutdown violently by power failures and no \nreiserfs problems so far. NTFS is very crash proof too. My windows machine \nbluescreens twice a day and still no data loss ;)\n\n\tUpside : an junkyard UPS with dead batteries, powered with two brand new \n12V car batteries, costs 70 euro and powers a computer for more than 5 \nhours...\n\tDownside :\n\t- it's ugly (I hide it under my desk)\n\t- you \"borrow\" a battery to start your friend's car, and just five \nminutes later, the UPS wants to test itself, discovered it has no more \nbatteries, and switches everything off... argh.\n\n\tGood evening...\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 03 Sep 2004 20:24:27 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "Pierre-Frᅵdᅵric Caillaud wrote:\n\n> 22 KB files, 1000 of them :\n> open(), read(), close() : 10.000 files/s\n> open(), write(), close() : 4.000 files/s\n> \n> This is quite far from database FS activity, but it's still \n> amazing, although the disk doesn't even get used. Which is what I like \n> in Linux. You can write 10000 files in one second and the HDD is still \n> idle... then when it decides to flush it all goes to disk in one burst.\n\nYou can not trust your data in this.\n\n\n> I've had my computers shutdown violently by power failures and no \n> reiserfs problems so far. NTFS is very crash proof too. My windows \n> machine bluescreens twice a day and still no data loss ;)\n\nIf you have the BSOD twice a day then you have a broken driver or broken\nHW. CPU overclocked ?\n\n\nI understood from your email that you are a Windows haters, try to post\nsomething here:\n\nhttp://ihatelinux.blogspot.com/\n\n:-)\n\n\nRegards\nGaetano Mendola\n",
"msg_date": "Sat, 04 Sep 2004 12:43:33 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "Another possibly useless datapoint on this thread for anyone who's\ncurious ... open_sync absolutely stinks over NFS at least on Linux. :)\n\n\n\n\n",
"msg_date": "Sat, 04 Sep 2004 19:46:41 -0700",
"msg_from": "Cott Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "The world rejoiced as [email protected] (\"Merlin Moncure\") wrote:\n> Ok, you were right. I made some tests and NTFS is just not very\n> good in the general case. I've seen some benchmarks for Reiser4\n> that are just amazing.\n\nReiser4 has been sounding real interesting.\n\nThe killer problem is thus:\n\n \"We must caution that just as Linux 2.6 is not yet as stable as\n Linux 2.4, it will also be some substantial time before V4 is as\n stable as V3.\"\n\nIn practice, there's a further problem.\n\nWe have some systems at work we need to connect to EMC disk arrays;\nthat's something that isn't supported by EMC unless you're using a\nwhole set of pieces that are \"officially supported.\"\n\nRHAT doesn't want to talk to you about support for anything other than\next3.\n\nI'm not sure what all SuSE supports; they're about the only other Linx\nvendor that EMC would support, and I don't expect that Reiser4 yet\nfits into the \"supportable\" category :-(.\n\nThe upshot of that is that this means that we'd only consider using\nstuff like Reiser4 on \"toy\" systems, and, quite frankly, that means\nthat they'll have \"toy\" disk as opposed to the good stuff :-(.\n\nAnd frankly, we're too busy with issues nearer to our hearts than\ntesting out ReiserFS. :-(\n-- \noutput = (\"cbbrowne\" \"@\" \"cbbrowne.com\")\nhttp://cbbrowne.com/info/emacs.html\n\"Linux! Guerrilla Unix Development Venimus, Vidimus, Dolavimus.\"\n-- <[email protected]> Mark A. Horton KA4YBR\n",
"msg_date": "Sat, 04 Sep 2004 23:47:55 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "On Sat, 2004-09-04 at 23:47 -0400, Christopher Browne wrote:\n> The world rejoiced as [email protected] (\"Merlin Moncure\") wrote:\n> > Ok, you were right. I made some tests and NTFS is just not very\n> > good in the general case. I've seen some benchmarks for Reiser4\n> > that are just amazing.\n> \n> Reiser4 has been sounding real interesting.\n> \n\nAre these independent benchmarks, or the benchmarketing at namesys.com?\nNote that the APPEND, MODIFY, and OVERWRITE phases have been turned off\non the mongo tests and the other tests have been set to a lexical (non\ndefault for mongo) mode. I've done some mongo benchmarking myself and\nreiser4 loses to ext3 (data=ordered) in the excluded tests. APPEND\nphase performance is absolutely *horrible*. So they just turned off the\nphases in which reiser4 lost and published the remaining results as\nproof that \"resier4 is the fastest filesystem\".\n\nSee: http://marc.theaimsgroup.com/?l=reiserfs&m=109363302000856\n\n\n-Steve Bergman\n\n\n",
"msg_date": "Sun, 05 Sep 2004 00:16:42 -0500",
"msg_from": "Steve Bergman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "Christopher Browne wrote:\n\n> I'm not sure what all SuSE supports; they're about the only other Linx\n> vendor that EMC would support, and I don't expect that Reiser4 yet\n> fits into the \"supportable\" category :-(.\n\nI use quite a bit of SuSE, and although I don't know their official \nposition on Reiser file systems, I do know that it is the default when \ninstalling, so I'd suggest you might check into it.\n\n\n-- \nUntil later, Geoffrey Registered Linux User #108567\n AT&T Certified UNIX System Programmer - 1995\n",
"msg_date": "Sun, 05 Sep 2004 07:41:29 -0400",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "\n\tWere you upset by my message ? I'll try to clarify.\n\n> I understood from your email that you are a Windows haters\n\n\tWell, no, not really. I use Windows everyday and it has its strengths. I \nstill don't think the average (non-geek) person can really use Linux as a \nDesktop OS. The problem I have with Windows is that I think it could be \nmade much faster, without too much effort (mainly some tweaking in the \nDisk IO field), but Microsoft doesn't do it. Why ? I can't understand this.\n\n>> in Linux. You can write 10000 files in one second and the HDD is still \n>> idle... then when it decides to flush it all goes to disk in one burst.\n>\n> You can not trust your data in this.\n\n\tThat's why I mentioned that it did not relate to database type \nperformance. If the computer crashes while writing these files, some may \nbe partially written, some not at all, some okay... the only certainty is \nabout filesystem integrity. But it's exactly the same on all Journaling \nfilesystems (including NTFS). Thus, with equal reliability, the faster \nwins. Maybe, with Reiser4, we will see real filesystem transactions and \nmaybe this will translate in higher postgres performance...\n>\n>> I've had my computers shutdown violently by power failures and no \n>> reiserfs problems so far. NTFS is very crash proof too. My windows \n>> machine bluescreens twice a day and still no data loss ;)\n>\n> If you have the BSOD twice a day then you have a broken driver or broken\n> HW. CPU overclocked ?\n\n\tI think this machine has crap hardware. In fact this example was to \nemphasize the reliability of NTFS : it is indeed remarkable that no data \nloss occurs even on such a crap machine. I know Windows has got quite \nreliable now.\n\n\n\n\n\n",
"msg_date": "Sun, 05 Sep 2004 20:01:30 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "\n\tI trust ReiserFS 3.\n\tI wouldn't trust the 4 before maybe 1-2 years.\n\nOn Sun, 05 Sep 2004 07:41:29 -0400, Geoffrey <[email protected]> wrote:\n\n> Christopher Browne wrote:\n>\n>> I'm not sure what all SuSE supports; they're about the only other Linx\n>> vendor that EMC would support, and I don't expect that Reiser4 yet\n>> fits into the \"supportable\" category :-(.\n>\n> I use quite a bit of SuSE, and although I don't know their official \n> position on Reiser file systems, I do know that it is the default when \n> installing, so I'd suggest you might check into it.\n>\n>\n\n\n",
"msg_date": "Sun, 05 Sep 2004 20:03:04 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
},
{
"msg_contents": "On Sun, Sep 05, 2004 at 12:16:42AM -0500, Steve Bergman wrote:\n> On Sat, 2004-09-04 at 23:47 -0400, Christopher Browne wrote:\n> > The world rejoiced as [email protected] (\"Merlin Moncure\") wrote:\n> > > Ok, you were right. I made some tests and NTFS is just not very\n> > > good in the general case. I've seen some benchmarks for Reiser4\n> > > that are just amazing.\n> > \n> > Reiser4 has been sounding real interesting.\n> > \n> \n> Are these independent benchmarks, or the benchmarketing at namesys.com?\n> Note that the APPEND, MODIFY, and OVERWRITE phases have been turned off\n> on the mongo tests and the other tests have been set to a lexical (non\n> default for mongo) mode. I've done some mongo benchmarking myself and\n> reiser4 loses to ext3 (data=ordered) in the excluded tests. APPEND\n> phase performance is absolutely *horrible*. So they just turned off the\n> phases in which reiser4 lost and published the remaining results as\n> proof that \"resier4 is the fastest filesystem\".\n> \n> See: http://marc.theaimsgroup.com/?l=reiserfs&m=109363302000856\n> \n> \n> -Steve Bergman\n> \n> \n> \n\nReiser4 also isn't optmized for lots of fsyncs (unless it's been done\nrecently.) I believe the mention fsync performance in their release\nnotes. I've seen this dramatically hurt performance with our OLTP\nworkload.\n\n-- \nMark Wong - - [email protected]\nOpen Source Development Lab Inc - A non-profit corporation\n12725 SW Millikan Way - Suite 400 - Beaverton, OR 97005\n(503) 626-2455 x 32 (office)\n(503) 626-2436 (fax)\nhttp://developer.osdl.org/markw/\n",
"msg_date": "Thu, 9 Sep 2004 10:12:55 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fsync vs open_sync"
}
]
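Most of the thread above is filesystem comparison, but the subject line and Cott Lang's NFS datapoint come down to which WAL sync method the server is using. A small sketch for checking the relevant settings, assuming the option names as they existed around 7.4/8.0; changing wal_sync_method means editing postgresql.conf and restarting.

```sql
-- What the server is currently doing for WAL flushes.
SHOW fsync;
SHOW wal_sync_method;   -- typically fsync, fdatasync, open_sync or open_datasync
```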
[
{
"msg_contents": "I was just copying a database that was in UNICODE encoding into a new\ndb for some testing. I hand't realized it was UNICODE and when it hit\nsome funky chinese data (from some spam that came in...) it errored\nout with a string too long for a varchar(255).\n\nThe dump was created on PG 7.4.3 with \"pg_dump -Fc\"\n\nThe db was created with \"createdb rt3\"\n\nThe restore was to PG 7.4.5 with \"pg_restore --verbose -d rt3 rt3.dump\"\n\n\nIs there some way for the dump to notice that the encoding is wrong in\nthe db into which it is being restored? Once I created the rt3 db\nwith encoding='UNICODE' it worked just fine. Should there be some\nkind of check like that?\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 03 Sep 2004 15:42:44 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": true,
"msg_subject": "restoring to wrong encoding db"
}
]
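Vivek's restore failed because the target database was created with the default encoding rather than the dump's UNICODE encoding. A minimal sketch of the check and the fix he describes, assuming the 7.4-era spelling of the encoding name and the rt3 database name from his message:

```sql
-- See what encoding each database was created with.
SELECT datname, pg_encoding_to_char(encoding) AS encoding
FROM pg_database;

-- Recreate the restore target with the encoding the dump expects
-- before running pg_restore against it.
CREATE DATABASE rt3 WITH ENCODING 'UNICODE';
```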
[
{
"msg_contents": "Greetings,\n\nI have observed that in a dump/restore scenario the longest time is \nspent on index creation for larger tables, I have a suggestion of how \nthe performance could be improved thus reducing the time to recover \nfrom a crash. Not sure if this is possible but would definitely be a \nnice addition to the TODO list.\n\n1) Add a new config paramter e.g work_maintanence_max_mem this will \nthe max memory postgresql *can* claim if need be.\n\n2) During the dump phase of the DB postgresql estimates the \n\"work_maintenance_mem\" that would be required to create the index in \nmemory(if possible) and add's a\nSET work_maintenance_mem=\"the value calculated\" (IF this value is less \nthan work_maintanence_max_mem. )\n\n3) During the restore phase the appropriate memory is allocated in RAM \nand the index creation takes less time since PG does not have to sort \non disk.\n\n--\nAdi Alurkar (DBA sf.NET) <[email protected]>\n1024D/79730470 A491 5724 74DE 956D 06CB D844 6DF1 B972 7973 0470\n\n",
"msg_date": "Sat, 4 Sep 2004 21:07:26 -0700",
"msg_from": "Adi Alurkar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dump/Restore performance improvement"
},
{
"msg_contents": "Adi Alurkar <[email protected]> writes:\n> 1) Add a new config paramter e.g work_maintanence_max_mem this will \n> the max memory postgresql *can* claim if need be.\n\n> 2) During the dump phase of the DB postgresql estimates the \n> \"work_maintenance_mem\" that would be required to create the index in \n> memory(if possible) and add's a\n> SET work_maintenance_mem=\"the value calculated\" (IF this value is less \n> than work_maintanence_max_mem. )\n\nThis seems fairly pointless to me. How is this different from just\nsetting maintenance_work_mem as large as you can stand before importing\nthe dump?\n\nMaking any decisions at dump time seems wrong to me in the first place;\npg_dump should not be expected to know what conditions the restore will\nbe run under. I'm not sure that's what you're proposing, but I don't\nsee what the point is in practice. It's already the case that\nmaintenance_work_mem is treated as the maximum memory you can use,\nrather than what you will use even if you don't need it all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Sep 2004 13:07:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore performance improvement "
}
]
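Tom's answer is that no dump-time bookkeeping is needed: just give the restoring session as much maintenance memory as you can afford. A hedged sketch, assuming the 8.0-style maintenance_work_mem setting (in 7.4 the corresponding knobs are sort_mem/vacuum_mem) and a plain-SQL dump restored through psql; for pg_restore the value would instead go into postgresql.conf or PGOPTIONS.

```sql
-- Raise index-build memory for this session only; 524288 KB (512 MB)
-- is an example value, not a recommendation.
SET maintenance_work_mem = 524288;

-- ...then run the dump (or at least the CREATE INDEX statements) in this
-- same session, e.g. via  \i dumpfile.sql  in psql.
```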
[
{
"msg_contents": "I have been experimenting with the 'IPC::Shareable' module under the \nnative implementation of Perl 5 for OpenBSD 3.5. While it is not loaded \nby default it is a pure pure implementation.\n\nI have tested this module under two machines, one which used to run \nPostgreSQL and has a higher then normal amount of SYSV semaphores. The \nother has a normal amount, when testing under the former database server \nthings load up fine, clients can connect and all information is as it \nshould.\n\nWhen I test under the normal setup the machine tanks. No core dumps, \nno errors produced, just a near instant lock-up of the server itself and \nthat is with a non-privileged user.\n\nWhile I know this is a Perl issue, but figured I might be able to gain \nsome insight on how a server could drop without at least generating a \npanic. Any ideas?\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n",
"msg_date": "Mon, 06 Sep 2004 01:15:24 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tanking a server with shared memory"
},
{
"msg_contents": "Martin Foster <[email protected]> writes:\n> While I know this is a Perl issue, but figured I might be able to gain \n> some insight on how a server could drop without at least generating a \n> panic. Any ideas?\n\nThe standard spelling for this is \"kernel bug\". Send a reproducible\nexample to the OpenBSD kernel maintainers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Sep 2004 22:18:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tanking a server with shared memory "
}
]
[
{
"msg_contents": "Sorry for crossposting, didn't know where to post.\n\nAny hint/help on this?!\n\ndb_postgres1=# truncate ref_v2_drs_valid_product ;\nERROR: expected both swapped tables to have TOAST tables\n\nI need to truncate this table, this is the first time I see this error.\n\nRegards,\nGuido\n\n",
"msg_date": "Tue, 7 Sep 2004 07:41:07 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "TOAST tables, cannot truncate"
}
]
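The TRUNCATE error above is resolved in the following thread, but a quick catalog check makes the symptom visible. This is a hypothetical diagnostic (not taken from the thread) to see whether the table currently has a TOAST table attached:

```sql
-- reltoastrelid = 0 means the table has no TOAST table; the LEFT JOIN
-- shows the TOAST relation's name when one exists.
SELECT c.relname, c.reltoastrelid, t.relname AS toast_table
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'ref_v2_drs_valid_product';
```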
[
{
"msg_contents": "Ok, problem solved.\n\nA previous ALTER TABLE DROP COLUMN over this table was performed.\nBy some reason, truncate didn't work then. Would like to know why...it does not seems to be a very frequent problem, due to the fact I only found one chat on the mailing lists talking about this issue, and was hard to find :( !\n\nA dump of the table schema, passed to the psql command \n(cat table_dump.sql | psql xxx) worked fine.\n\nTRUNCATE is now available.\n\nThanks.\n\nGuido\n\n> Sorry for crossposting, didn't know where to post.\n> \n> Any hint/help on this?!\n> \n> db_postgres1=# truncate ref_v2_drs_valid_product ;\n> ERROR: expected both swapped tables to have TOAST tables\n> \n> I need to truncate this table, this is the first time I see this error.\n> \n> Regards,\n> Guido\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Tue, 7 Sep 2004 08:12:58 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TOAST tables, cannot truncate"
},
{
"msg_contents": "G u i d o B a r o s i o <[email protected]> writes:\n> Ok, problem solved.\n> A previous ALTER TABLE DROP COLUMN over this table was performed.\n> By some reason, truncate didn't work then. Would like to know why...it does not seems to be a very frequent problem, due to the fact I only found one chat on the mailing lists talking about this issue, and was hard to find :( !\n\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/commands/cluster.c\n\n(note: cvsweb seems mighty slow today, but it is working...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Sep 2004 10:16:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] TOAST tables, cannot truncate "
}
]
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI have a problem where I have the table format listed below. I have the \nprimary key tsyslog_id and the index built against it. However, when I \nselect a unique row, it will only ever do a seq scan even after I turn off \nall other types except indexscan. I understand you cannot fully turn off seq \nscan. \n\nSyslog_TArchive size: 1,426,472,960 bytes\nsyslog_tarchive_pkey size: 132,833,280 bytes\narchhost_idx size: 300,802,048 bytes\ntarchdatetime_idx size: 159,293,440 bytes\ntarchhostid_idx size: 362,323,968 bytes\n\nI cannot run vacuum more than once a day because of its heavy IO penalty. I \nrun analyze once an hour. However, if I run analyze then explain, I see no \ndifference in the planners decisions. What am I missing?\n\n\nTSyslog=# \\d syslog_tarchive;\n Table \"public.syslog_tarchive\"\n Column | Type | \nModifiers\n- ------------+------------------------+-------------------------------------------------------------------------\n tsyslog_id | bigint | not null default \nnextval('public.syslog_tarchive_tsyslog_id_seq'::text)\n facility | integer |\n severity | integer |\n date | date |\n time | time without time zone |\n host | character varying(128) |\n message | text |\nIndexes:\n \"syslog_tarchive_pkey\" primary key, btree (tsyslog_id)\n \"archhost_idx\" btree (host)\n \"tarchdatetime_idx\" btree (date, \"time\")\n \"tarchhostid_idx\" btree (tsyslog_id, host)\n\nTSyslog=# explain select * from tsyslog where tsyslog_id=431650835;\n QUERY PLAN\n- -------------------------------------------------------------------------\n Seq Scan on tsyslog (cost=100000000.00..100000058.20 rows=2 width=187)\n Filter: (tsyslog_id = 431650835)\n(2 rows)\n\n- -- \n\n- --------------------------------------------------\nJeremy M. Guthrie [email protected]\nSenior Network Engineer Phone: 608-298-1061\nBerbee Fax: 608-288-3007\n5520 Research Park Drive NOC: 608-298-1102\nMadison, WI 53711\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\n\niD8DBQFBPijTqtjaBHGZBeURAndgAJ4rT2NpG9aGAdogoZaV+BvUfF6TjACfaexf\nLrBzhDQK72u8dCUuPOSHB+Y=\n=DSxi\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 7 Sep 2004 16:32:03 -0500",
"msg_from": "\"Jeremy M. Guthrie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stuck using Sequential Scan"
},
{
"msg_contents": "On Tue, 2004-09-07 at 22:32, Jeremy M. Guthrie wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> I have a problem where I have the table format listed below. I have the \n> primary key tsyslog_id and the index built against it. However, when I \n> select a unique row, it will only ever do a seq scan even after I turn off \n> all other types except indexscan. I understand you cannot fully turn off seq \n> scan.\n...\n> I cannot run vacuum more than once a day because of its heavy IO penalty. I \n> run analyze once an hour. However, if I run analyze then explain, I see no \n> difference in the planners decisions. What am I missing?\n> \n> \n> TSyslog=# \\d syslog_tarchive;\n> Table \"public.syslog_tarchive\"\n> Column | Type | \n> Modifiers\n> - ------------+------------------------+-------------------------------------------------------------------------\n> tsyslog_id | bigint | not null default \n...\n> \n> TSyslog=# explain select * from tsyslog where tsyslog_id=431650835;\n\nThat constant is INTEGER, whereas the column is BIGINT; there is no\nautomatic conversion in this case, so the planner does not realise the\nindex is usable for this query (I think 8.0 solves this).\n\nTry: select * from tsyslog where tsyslog_id=431650835::BIGINT;\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/A54310EA 92C8 39E7 280E 3631 3F0E 1EC0 5664 7A2F A543 10EA\n ========================================\n \"I am crucified with Christ; nevertheless I live; yet \n not I, but Christ liveth in me; and the life which I \n now live in the flesh I live by the faith of the Son \n of God, who loved me, and gave himself for me.\" \n Galatians 2:20 \n\n",
"msg_date": "Sat, 11 Sep 2004 07:07:42 +0100",
"msg_from": "Oliver Elphick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stuck using Sequential Scan"
}
]
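Oliver's diagnosis is the classic pre-8.0 gotcha: an integer literal is not matched against a bigint index unless it is cast or quoted. A sketch of both forms of his fix, reusing the query from the thread:

```sql
-- Explicit cast, as suggested in the reply.
EXPLAIN SELECT * FROM tsyslog WHERE tsyslog_id = 431650835::bigint;

-- Quoting also works on 7.x: the untyped literal is resolved against the
-- bigint column, so the index on tsyslog_id can be used.
EXPLAIN SELECT * FROM tsyslog WHERE tsyslog_id = '431650835';
```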
[
{
"msg_contents": "Hi. I hope I'm not asking a too trivial question here...\n\nI'm having trouble with a (quite big) query, and can't find a way to make it \nfaster.\n\nHere is the information :\n\nTables :\n============================\nsces_vte -> 2753539 rows\nsces_art -> 602327\nsces_fsf -> 8126\nsces_frc -> 7763\nsces_tps -> 38\nsces_gtr -> 35\n\n\nQuery :\n===========================\nSELECT\nsces_gtr_art.gtr_cod,\nsces_gtr_art.gtr_lib,\nsces_frc_art.fou_cod,\nsces_frc_art.fou_lib,\nsces_tps.tps_annee_mois,\nTO_NUMBER('200401','999999'),\nTO_NUMBER('200405','999999'),\nsces_tps.tps_libc,\nsum(sces_vte.vte_mnt),\nsum(sces_vte.vte_qte),\nsum(sces_vte.vte_ton),\nsces_famille.fsf_codfam,\nsces_famille.fsf_lib,\nsces_s_famille.fsf_codsfm,\nsces_s_famille.fsf_lib\nFROM\nsces_vte,\nsces_art,\nsces_fsf sces_famille,\nsces_fsf sces_s_famille,\nsces_frc sces_frc_art,\nsces_tps,\nsces_gtr sces_gtr_art\nWHERE\n( sces_famille.fsf_codfam=sces_s_famille.fsf_codfam )\nAND ( sces_famille.fsf_codseg= 0 and sces_famille.fsf_codsfm = 0 )\nAND ( sces_vte.tps_annee_mois=sces_tps.tps_annee_mois )\nAND ( sces_vte.art_cod=sces_art.art_cod and \nsces_vte.dos_cod=sces_art.dos_cod )\nAND ( sces_gtr_art.gtr_cod=sces_frc_art.gtr_cod )\nAND ( sces_frc_art.gtr_cod=sces_art.gtr_cod and \nsces_frc_art.fou_cod=sces_art.fou_cod )\nAND ( sces_s_famille.fsf_codfam=sces_art.fsf_codfam and \nsces_s_famille.fsf_codsfm=sces_art.fsf_codsfm )\nAND ( sces_s_famille.fsf_codseg = 0 )\nAND (\n( ( ( sces_tps.tps_annee_mois ) >= ( TO_NUMBER('200401','999999') ) and \n( sces_tps.tps_annee_mois ) <= (\nTO_NUMBER('200405','999999') )\n)\nOR\n(\n( sces_tps.tps_annee_mois ) >= ( TO_NUMBER('200401','999999') )-100 and \n( sces_tps.tps_annee_mois ) <= (\nTO_NUMBER('200405','999999') )-100\n) )\nAND ( sces_gtr_art.gtr_cod in (2))\n)\nGROUP BY\nsces_gtr_art.gtr_cod,\nsces_gtr_art.gtr_lib,\nsces_frc_art.fou_cod,\nsces_frc_art.fou_lib,\nsces_tps.tps_annee_mois,\nTO_NUMBER('200401','999999'),\nTO_NUMBER('200405','999999'),\nsces_tps.tps_libc,\nsces_famille.fsf_codfam,\nsces_famille.fsf_lib,\nsces_s_famille.fsf_codsfm,\nsces_s_famille.fsf_lib\n\nExplain Analyze Plan : \n====================================\n GroupAggregate (cost=27161.91..27938.72 rows=16354 width=280) (actual time=484509.210..544436.148 rows=4115 loops=1)\n -> Sort (cost=27161.91..27202.79 rows=16354 width=280) (actual time=484496.188..485334.151 rows=799758 loops=1)\n Sort Key: sces_gtr_art.gtr_cod, sces_gtr_art.gtr_lib, sces_frc_art.fou_cod, sces_frc_art.fou_lib, sces_tps.tps_annee_mois, 200401::numeric, 200405::numeric, sces_tps.tps_libc, sces_famille.fsf_codfam, sces_famille.fsf_lib, sces_s_famille.fsf_codsfm, sces_s_famille.fsf_lib\n -> Merge Join (cost=25727.79..26017.34 rows=16354 width=280) (actual time=58945.821..69321.146 rows=799758 loops=1)\n Merge Cond: ((\"outer\".fsf_codfam = \"inner\".fsf_codfam) AND (\"outer\".fsf_codsfm = \"inner\".fsf_codsfm))\n -> Sort (cost=301.36..304.60 rows=1298 width=83) (actual time=27.926..28.256 rows=332 loops=1)\n Sort Key: sces_s_famille.fsf_codfam, sces_s_famille.fsf_codsfm\n -> Seq Scan on sces_fsf sces_s_famille (cost=0.00..234.24 rows=1298 width=83) (actual time=0.042..19.124 rows=1341 loops=1)\n Filter: (fsf_codseg = 0::numeric)\n -> Sort (cost=25426.43..25448.05 rows=8646 width=225) (actual time=58917.106..59693.810 rows=799758 loops=1)\n Sort Key: sces_art.fsf_codfam, sces_art.fsf_codsfm\n -> Merge Join (cost=24726.32..24861.08 rows=8646 width=225) (actual time=19036.709..29404.943 rows=799758 loops=1)\n Merge Cond: 
(\"outer\".tps_annee_mois = \"inner\".tps_annee_mois)\n -> Sort (cost=2.49..2.53 rows=17 width=23) (actual time=0.401..0.428 rows=20 loops=1)\n Sort Key: sces_tps.tps_annee_mois\n -> Seq Scan on sces_tps (cost=0.00..2.14 rows=17 width=23) (actual time=0.068..0.333 rows=20 loops=1)\n Filter: (((tps_annee_mois >= 200301::numeric) OR (tps_annee_mois >= 200401::numeric)) AND ((tps_annee_mois <= 200305::numeric) OR (tps_annee_mois >= 200401::numeric)) AND ((tps_annee_mois >= 200301::numeric) OR (tps_annee_mois <= 200405::numeric)) AND ((tps_annee_mois <= 200305::numeric) OR (tps_annee_mois <= 200405::numeric)))\n -> Sort (cost=24723.83..24747.97 rows=9656 width=214) (actual time=19036.223..19917.214 rows=799757 loops=1)\n Sort Key: sces_vte.tps_annee_mois\n -> Nested Loop (cost=21825.09..24084.74 rows=9656 width=214) (actual time=417.603..8644.294 rows=399879 loops=1)\n -> Nested Loop (cost=21825.09..21837.50 rows=373 width=195) (actual time=417.444..672.741 rows=14158 loops=1)\n -> Seq Scan on sces_gtr sces_gtr_art (cost=0.00..1.44 rows=1 width=40) (actual time=0.026..0.085 rows=1 loops=1)\n Filter: (gtr_cod = 2::numeric)\n -> Merge Join (cost=21825.09..21832.34 rows=373 width=165) (actual time=417.400..568.247 rows=14158 loops=1)\n Merge Cond: (\"outer\".fsf_codfam = \"inner\".fsf_codfam)\n -> Sort (cost=255.24..255.30 rows=24 width=74) (actual time=16.597..16.692 rows=106 loops=1)\n Sort Key: sces_famille.fsf_codfam\n -> Seq Scan on sces_fsf sces_famille (cost=0.00..254.69 rows=24 width=74) (actual time=0.029..15.971 rows=155 loops=1)\n Filter: ((fsf_codseg = 0::numeric) AND (fsf_codsfm = 0::numeric))\n -> Sort (cost=21569.85..21571.64 rows=715 width=91) (actual time=400.631..416.871 rows=14162 loops=1)\n Sort Key: sces_art.fsf_codfam\n -> Nested Loop (cost=0.00..21535.95 rows=715 width=91) (actual time=1.320..230.975 rows=14162 loops=1)\n -> Seq Scan on sces_frc sces_frc_art (cost=0.00..182.75 rows=728 width=51) (actual time=1.195..14.316 rows=761 loops=1)\n Filter: (2::numeric = gtr_cod)\n -> Index Scan using ind_art_02 on sces_art (cost=0.00..29.24 rows=7 width=61) (actual time=0.040..0.160 rows=19 loops=761)\n Index Cond: ((2::numeric = sces_art.gtr_cod) AND (\"outer\".fou_cod = sces_art.fou_cod))\n -> Index Scan using idx_vte_02 on sces_vte (cost=0.00..6.01 rows=1 width=62) (actual time=0.037..0.259 rows=28 loops=14158)\n Index Cond: ((sces_vte.art_cod = \"outer\".art_cod) AND (sces_vte.dos_cod = \"outer\".dos_cod))\n Total runtime: 545435.989 ms\n\n\n\n From what I understand from the plan, the worst part of it is the sort. Is there a way I can improve this query ?\n(Obviously, as it has many rows, it will still be a slow query, but here it's too slow for us...).\n\nI allready extended the sort_mem (up to 500 MB to be sure, the server has plenty of RAM),\nthe query has become faster, but I don't know what else to do, to speed up the sort.\n\nBTW the query was generated, not written. We allready are trying to write something better, but we are still facing a\ngigantic sort at the end (we need the group by, and there are many lines from the main table (sces_vte) to be retrieved and aggregated)...\n\nThanks in advance...\n",
"msg_date": "Wed, 8 Sep 2004 15:49:49 +0200",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with large query"
},
{
"msg_contents": "Marc Cousin <[email protected]> writes:\n> I'm having trouble with a (quite big) query, and can't find a way to make it \n> faster.\n\nSeems like it might help if the thing could use a HashAggregate instead\nof sort/group. Numeric is not hashable, so having those TO_NUMBER\nconstants in GROUP BY destroys this option instantly ... but why in the\nworld are you grouping by constants anyway? You didn't say what the\ndatatypes of the other columns were...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 2004 10:40:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with large query "
},
{
"msg_contents": "by the way, this reminds me: I just ran a performance study at a company doing\n an oracle-to-postgres conversion, and FYI converting from numeric and decimal\n to integer/bigint/real saved roughly 3x on space and 2x on performance.\n Obviously, YMMV.\n\nadam\n\n\nTom Lane wrote:\n\n> Marc Cousin <[email protected]> writes:\n> \n>>I'm having trouble with a (quite big) query, and can't find a way to make it \n>>faster.\n> \n> \n> Seems like it might help if the thing could use a HashAggregate instead\n> of sort/group. Numeric is not hashable, so having those TO_NUMBER\n> constants in GROUP BY destroys this option instantly ... but why in the\n> world are you grouping by constants anyway? You didn't say what the\n> datatypes of the other columns were...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> \n",
"msg_date": "Wed, 08 Sep 2004 07:47:33 -0700",
"msg_from": "Adam Sah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with large query"
},
{
"msg_contents": "The query has been generated by business objects ... i'ill try to suggest to the developpers to remove this constant (if they can)...\nThe fields used by the sort are of type numeric(6,0) or (10,0) ...\nCould it be better if the fields were integer or anything else ?\n\n\nOn Wednesday 08 September 2004 16:40, you wrote:\n> Marc Cousin <[email protected]> writes:\n> > I'm having trouble with a (quite big) query, and can't find a way to make it \n> > faster.\n> \n> Seems like it might help if the thing could use a HashAggregate instead\n> of sort/group. Numeric is not hashable, so having those TO_NUMBER\n> constants in GROUP BY destroys this option instantly ... but why in the\n> world are you grouping by constants anyway? You didn't say what the\n> datatypes of the other columns were...\n> \n> regards, tom lane\n> \n",
"msg_date": "Wed, 8 Sep 2004 16:49:59 +0200",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with large query"
},
{
"msg_contents": "Marc Cousin <[email protected]> writes:\n> The query has been generated by business objects ... i'ill try to suggest to the developpers to remove this constant (if they can)...\n> The fields used by the sort are of type numeric(6,0) or (10,0) ...\n> Could it be better if the fields were integer or anything else ?\n\ninteger or bigint would be a WHOLE lot faster. I'd venture that\ncomparing two numerics is order of a hundred times slower than\ncomparing two integers.\n\nEven if you don't want to change the fields on-disk, you might think\nabout casting them all to int/bigint in the query.\n\nAnother thing that might or might not be easy is to change the order of\nthe GROUP BY items so that the fields with the largest number of\ndistinct values are listed first. If two rows are distinct at the first\ncolumn, the sorting comparison doesn't even have to look at the\nremaining columns ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 2004 10:56:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with large query "
},
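A minimal sketch of the rewrite Tom Lane suggests above, on a hypothetical toy table (the table and column names are invented for illustration, not taken from the real schema): the TO_NUMBER constants are dropped from the GROUP BY, the numeric keys are cast to integer so comparisons are cheap, and the column assumed to have the most distinct values is listed first.

-- 'sales' and its columns are hypothetical stand-ins for the real schema
CREATE TEMP TABLE sales (
    art_cod        numeric(10,0),
    fou_cod        numeric(6,0),
    tps_annee_mois numeric(6,0),
    vte_mnt        numeric(12,2)
);

SELECT art_cod::integer,          -- assumed highest-cardinality column first
       fou_cod::integer,
       tps_annee_mois::integer,
       sum(vte_mnt)
FROM sales
GROUP BY art_cod::integer, fou_cod::integer, tps_annee_mois::integer;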
{
"msg_contents": "On Wednesday 08 September 2004 16:56, you wrote:\n> Marc Cousin <[email protected]> writes:\n> > The query has been generated by business objects ... i'ill try to suggest to the developpers to remove this constant (if they can)...\n> > The fields used by the sort are of type numeric(6,0) or (10,0) ...\n> > Could it be better if the fields were integer or anything else ?\n> \n> integer or bigint would be a WHOLE lot faster. I'd venture that\n> comparing two numerics is order of a hundred times slower than\n> comparing two integers.\n> \n> Even if you don't want to change the fields on-disk, you might think\n> about casting them all to int/bigint in the query.\n> \n> Another thing that might or might not be easy is to change the order of\n> the GROUP BY items so that the fields with the largest number of\n> distinct values are listed first. If two rows are distinct at the first\n> column, the sorting comparison doesn't even have to look at the\n> remaining columns ...\n> \n> regards, tom lane\n> \nThanks. I've just had confirmation that they can remove the two constants (allready won 100 seconds thanks to that)\nI've tried the cast, and got down to 72 seconds.\nSo now we're going to try to convert the fields to int or bigint.\n\nThanks a lot for your help and time.\n",
"msg_date": "Wed, 8 Sep 2004 17:17:47 +0200",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with large query"
}
] |
[
{
"msg_contents": "What would be performance of pgSQL text search vs MySQL vs Lucene (flat \nfile) for a 2 terabyte db?\nthanks for any comments.\n.V\n-- \nPlease post on Rich Internet Applications User Interface (RiA/SoA) \n<http://www.portalvu.com>\n",
"msg_date": "Thu, 09 Sep 2004 07:56:20 -0500",
"msg_from": "Vic Cekvenich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Text Search vs MYSQL vs Lucene"
},
{
"msg_contents": "On Thursday 09 Sep 2004 6:26 pm, Vic Cekvenich wrote:\n> What would be performance of pgSQL text search vs MySQL vs Lucene (flat\n> file) for a 2 terabyte db?\n\nWell, it depends upon lot of factors. There are few questions to be asked \nhere..\n- What is your hardware and OS configuration?\n- What type of data you are dealing with? Mostly static or frequently updated?\n- What type of query you are doing. Aggregates or table scan or selective \nretreival etc.\n\nUnfortunately there is no one good answer. If you could provide details, it \nwould help a lot..\n\n Shridhar\n",
"msg_date": "Thu, 9 Sep 2004 19:09:33 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Text Search vs MYSQL vs Lucene"
},
{
"msg_contents": "It be at least dual opteron 64 w 4 gigs of ram runing fedora with a huge \nraid striped drives as single volume.\nA similar system and types of querries would be this:\nhttp://marc.theaimsgroup.com\n\nSo I guess a table scan.\n\n\n.V\n\nShridhar Daithankar wrote:\n\n>On Thursday 09 Sep 2004 6:26 pm, Vic Cekvenich wrote:\n> \n>\n>>What would be performance of pgSQL text search vs MySQL vs Lucene (flat\n>>file) for a 2 terabyte db?\n>> \n>>\n>\n>Well, it depends upon lot of factors. There are few questions to be asked \n>here..\n>- What is your hardware and OS configuration?\n>- What type of data you are dealing with? Mostly static or frequently updated?\n>- What type of query you are doing. Aggregates or table scan or selective \n>retreival etc.\n>\n>Unfortunately there is no one good answer. If you could provide details, it \n>would help a lot..\n>\n> Shridhar\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n>\n> \n>\n\n\n-- \nPlease post on Rich Internet Applications User Interface (RiA/SoA) \n<http://www.portalvu.com>\n",
"msg_date": "Thu, 09 Sep 2004 09:14:34 -0500",
"msg_from": "Vic Cekvenich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Text Search vs MYSQL vs Lucene"
},
{
"msg_contents": "On Thu, Sep 09, 2004 at 07:56:20AM -0500, Vic Cekvenich wrote:\n\n> What would be performance of pgSQL text search vs MySQL vs Lucene (flat \n> file) for a 2 terabyte db?\n> thanks for any comments.\n\nMy experience with tsearch2 has been that indexing even moderately\nlarge chunks of data is too slow to be feasible. Moderately large\nmeaning tens of megabytes.\n\nYour milage might well vary, but I wouldn't rely on postgresql full\ntext search of that much data being functional, let alone fast enough\nto be useful. Test before making any decisions.\n\nIf it's a static or moderately static text corpus you're probably\nbetter using a traditional FTS system anyway (tsearch2 has two\nadvantages - tight integration with pgsql and good support for\nincremental indexing).\n\nTwo terabytes is a lot of data. I'd suggest you do some research on\nFTS algorithms rather than just picking one of the off-the-shelf FTS\nsystems without understanding what they actually do. \"Managing\nGigabytes\" ISBN 1-55860-570-3 covers some approaches.\n\nCheers,\n Steve\n",
"msg_date": "Thu, 9 Sep 2004 07:20:06 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Text Search vs MYSQL vs Lucene"
},
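For anyone wanting to run the test Steve recommends, a rough sketch of what a tsearch2 setup looked like with the 7.4-era contrib module. This assumes contrib/tsearch2 has already been installed into the database; the table and column names are hypothetical, and the exact function and trigger names should be checked against the shipped tsearch2.sql.

-- hypothetical document table
CREATE TABLE docs (id serial PRIMARY KEY, body text, fts tsvector);

-- build vectors for existing rows and index them
UPDATE docs SET fts = to_tsvector('default', body);
CREATE INDEX docs_fts_idx ON docs USING gist (fts);

-- keep the vector current as rows are added or changed (incremental indexing)
CREATE TRIGGER docs_fts_trig BEFORE INSERT OR UPDATE ON docs
    FOR EACH ROW EXECUTE PROCEDURE tsearch2(fts, body);

-- a sample query
SELECT id FROM docs WHERE fts @@ to_tsquery('default', 'postgres & performance');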
{
"msg_contents": "Steve Atkins wrote:\n>>What would be performance of pgSQL text search vs MySQL vs Lucene (flat \n>>file) for a 2 terabyte db?\n>>thanks for any comments.\n> \n> My experience with tsearch2 has been that indexing even moderately\n> large chunks of data is too slow to be feasible. Moderately large\n> meaning tens of megabytes.\n\nMy experience with MySQL's full text search as well as the various \nMySQL-based text indexing programs (forgot the names, it's been a while) \nfor some 10-20GB of mail archives has been pretty disappointing too. My \nbiggest gripe is with the indexing speed. It literally takes days to \nindex less than a million documents.\n\nI ended up using Swish++. Microsoft's CHM compiler also has pretty \namazing indexing speed (though it crashes quite often when encountering \nbad HTML).\n\n-- \ndave\n",
"msg_date": "Thu, 09 Sep 2004 22:33:11 +0700",
"msg_from": "David Garamond <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Text Search vs MYSQL vs Lucene"
},
{
"msg_contents": "I'd say indexing of 2 TB of data would be a very costly even for\nstandalone solution ( no relational database ).\nIdeal solution would be to have tsearch2 for current documents and\nstandalone solution for archive documents. If these solutions share\ncommon parsers,dictionaries and ranking schemes it would be easy to\ncombine results from two queries. We have prototype for standalone\nsolution - it's based on OpenFTS, which is already tsearch2 compatible.\n\n\n\tOleg\nOn Thu, 9 Sep 2004, Steve Atkins wrote:\n\n> On Thu, Sep 09, 2004 at 07:56:20AM -0500, Vic Cekvenich wrote:\n>\n> > What would be performance of pgSQL text search vs MySQL vs Lucene (flat\n> > file) for a 2 terabyte db?\n> > thanks for any comments.\n>\n> My experience with tsearch2 has been that indexing even moderately\n> large chunks of data is too slow to be feasible. Moderately large\n> meaning tens of megabytes.\n>\n> Your milage might well vary, but I wouldn't rely on postgresql full\n> text search of that much data being functional, let alone fast enough\n> to be useful. Test before making any decisions.\n>\n> If it's a static or moderately static text corpus you're probably\n> better using a traditional FTS system anyway (tsearch2 has two\n> advantages - tight integration with pgsql and good support for\n> incremental indexing).\n>\n> Two terabytes is a lot of data. I'd suggest you do some research on\n> FTS algorithms rather than just picking one of the off-the-shelf FTS\n> systems without understanding what they actually do. \"Managing\n> Gigabytes\" ISBN 1-55860-570-3 covers some approaches.\n>\n> Cheers,\n> Steve\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n",
"msg_date": "Thu, 9 Sep 2004 21:56:59 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Text Search vs MYSQL vs Lucene"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm tuning my PostgreSQL DB (7.3.4) and have come across a query that\ndoesn't use an index I created specially for it, and consequently takes\ncirca 2 seconds to run. :(\n\nThe ugly query looks like this (the important part is really at the\nvery end - order by piece):\n\nselect userinfo1_.id as id0_, servicepla3_.id as id1_, account2_.id as\nid2_, passwordhi4_.id as id3_, userdemogr5_.id as id4_,\nuserinfo1_.first_name as first_name0_, userinfo1_.last_name as\nlast_name0_, userinfo1_.email as email0_, userinfo1_.href as href0_,\nuserinfo1_.last_login_date as last_log6_0_, userinfo1_.login_count as\nlogin_co7_0_, userinfo1_.password_hint_answer as password8_0_,\nuserinfo1_.create_date as create_d9_0_, userinfo1_.exp_date as\nexp_date0_, userinfo1_.type as type0_, userinfo1_.account_id as\naccount_id0_, userinfo1_.plan_id as plan_id0_,\nuserinfo1_.password_hint_id as passwor14_0_,\nuserinfo1_.user_demographic_id as user_de15_0_, servicepla3_.name as\nname1_, servicepla3_.max_links as max_links1_, account2_.username as\nusername2_, account2_.password as password2_, account2_.status as\nstatus2_, passwordhi4_.question as question3_, userdemogr5_.city as\ncity4_, userdemogr5_.postal_code as postal_c3_4_,\nuserdemogr5_.country_id as country_id4_,\nuserdemogr5_.state_id as state_id4_, userdemogr5_.gender_id as\ngender_id4_ from user_preference userprefer0_ inner join user_info\nuserinfo1_ on userprefer0_.user_id=userinfo1_.id inner join account\naccount2_ on userinfo1_.account_id=account2_.id inner join service_plan\nservicepla3_ on userinfo1_.plan_id=servicepla3_.id left outer join\npassword_hint passwordhi4_ on\nuserinfo1_.password_hint_id=passwordhi4_.id inner join user_demographic\nuserdemogr5_ on userinfo1_.user_demographic_id=userdemogr5_.id,\npreference preference6_, preference_value preference7_ where\n(preference6_.name='allow_subscribe' and\nuserprefer0_.preference_id=preference6_.id)AND(preference7_.value=1 \nand userprefer0_.preference_value_id=preference7_.id) order by \nuserinfo1_.create_date desc limit 10;\n\n\nThe output of EXPLAIN ANALYZE follows. Note how 99% of the total cost\ncomes from \"Sort Key: userinfo1_.create_date\". When I saw this, I\ncreated an index for this:\n\nCREATE INDEX ix_user_info_create_date ON user_info(create_date);\n\nBut that didn't seem to make much of a difference. 
The total cost did\ngo down from about 1250 to 1099, but that's still too high.\n\n---------------------------------------------------------\n Limit (cost=1099.35..1099.38 rows=10 width=222) (actual\ntime=1914.13..1914.17 rows=10 loops=1)\n -> Sort (cost=1099.35..1099.43 rows=31 width=222) (actual\ntime=1914.12..1914.14 rows=11 loops=1)\n Sort Key: userinfo1_.create_date\n -> Hash Join (cost=90.71..1098.60 rows=31 width=222) (actual\ntime=20.34..1908.41 rows=767 loops=1)\n Hash Cond: (\"outer\".preference_value_id = \"inner\".id)\n -> Hash Join (cost=89.28..1092.58 rows=561 width=218)\n(actual time=19.92..1886.59 rows=768 loops=1)\n Hash Cond: (\"outer\".preference_id = \"inner\".id)\n -> Hash Join (cost=88.10..1045.14 rows=7850\nwidth=214) (actual time=19.44..1783.47 rows=9984 loops=1)\n Hash Cond: (\"outer\".user_demographic_id =\n\"inner\".id)\n -> Hash Join (cost=72.59..864.51 rows=8933\nwidth=190) (actual time=14.83..1338.15 rows=9984 loops=1)\n Hash Cond: (\"outer\".password_hint_id =\n\"inner\".id)\n -> Hash Join (cost=71.50..726.87\nrows=8933 width=161) (actual time=14.53..1039.69 rows=9984 loops=1)\n Hash Cond: (\"outer\".plan_id =\n\"inner\".id)\n -> Hash Join \n(cost=70.42..569.46 rows=8933 width=144) (actual time=14.26..700.80\nrows=9984 loops=1)\n Hash Cond:\n(\"outer\".account_id = \"inner\".id)\n -> Hash Join \n(cost=53.83..390.83 rows=10073 width=116) (actual time=9.67..373.71\nrows=9984 loops=1)\n Hash Cond:\n(\"outer\".user_id = \"inner\".id)\n -> Seq Scan on\nuser_preference userprefer0_ (cost=0.00..160.73 rows=10073 width=12)\n(actual time=0.09..127.64 rows=9984 loops=1)\n -> Hash \n(cost=51.66..51.66 rows=866 width=104) (actual time=9.40..9.40 rows=0\nloops=1)\n -> Seq Scan\non user_info userinfo1_ (cost=0.00..51.66 rows=866 width=104) (actual\ntime=0.12..7.15 rows=768 loops=1)\n -> Hash \n(cost=14.68..14.68 rows=768 width=28) (actual time=4.45..4.45 rows=0\nloops=1)\n -> Seq Scan on\naccount account2_ (cost=0.00..14.68 rows=768 width=28) (actual\ntime=0.10..2.56 rows=768 loops=1)\n -> Hash (cost=1.06..1.06\nrows=6 width=17) (actual time=0.13..0.13 rows=0 loops=1)\n -> Seq Scan on\nservice_plan servicepla3_ (cost=0.00..1.06 rows=6 width=17) (actual\ntime=0.10..0.11 rows=6 loops=1)\n -> Hash (cost=1.07..1.07 rows=7\nwidth=29) (actual time=0.15..0.15 rows=0 loops=1)\n -> Seq Scan on password_hint\npasswordhi4_ (cost=0.00..1.07 rows=7 width=29) (actual\ntime=0.11..0.13 rows=7 loops=1)\n -> Hash (cost=13.61..13.61 rows=761\nwidth=24) (actual time=4.46..4.46 rows=0 loops=1)\n -> Seq Scan on user_demographic\nuserdemogr5_ (cost=0.00..13.61 rows=761 width=24) (actual\ntime=0.10..2.73 rows=769 loops=1)\n -> Hash (cost=1.18..1.18 rows=1 width=4) (actual\ntime=0.16..0.16 rows=0 loops=1)\n -> Seq Scan on preference preference6_ \n(cost=0.00..1.18 rows=1 width=4) (actual time=0.14..0.15\nrows=1 loops=1)\n Filter: (name =\n'allow_subscribe'::character varying)\n -> Hash (cost=1.43..1.43 rows=2 width=4) (actual\ntime=0.23..0.23 rows=0 loops=1)\n -> Seq Scan on preference_value preference7_ \n(cost=0.00..1.43 rows=2 width=4) (actual time=0.17..0.21\nrows=3 loops=1)\n Filter: ((value)::text = '1'::text)\n Total runtime: 1914.91 msec\n(35 rows)\n\n\n\nThere are a few Seq Scan's, but they are benign, as their low/no cost\nshows - they are very small, 'static' tables (e.g. country list, state\nlist, preference names list).\n\nDoes anyone have any ideas how I could speed up this query?\n\nThanks,\nOtis\n\n",
"msg_date": "Thu, 9 Sep 2004 15:51:37 -0700 (PDT)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Costly \"Sort Key\" on indexed timestamp column"
},
{
"msg_contents": "<[email protected]> writes:\n> I'm tuning my PostgreSQL DB (7.3.4) and have come across a query that\n> doesn't use an index I created specially for it, and consequently takes\n> circa 2 seconds to run. :(\n> ...\n> The output of EXPLAIN ANALYZE follows. Note how 99% of the total cost\n> comes from \"Sort Key: userinfo1_.create_date\".\n\nNo, you are misreading the output. 99% of the cost comes from the join\nsteps.\n\nI think the problem is that you have forced a not-very-appropriate join\norder by use of INNER JOIN syntax, and so the plan is creating\nintermediate join outputs that are larger than they need be. See\nhttp://www.postgresql.org/docs/7.3/static/explicit-joins.html\n\n7.4 is a bit more forgiving about this; compare\nhttp://www.postgresql.org/docs/7.4/static/explicit-joins.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 2004 00:17:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Costly \"Sort Key\" on indexed timestamp column "
}
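A sketch of the rewrite the explicit-joins page describes, shown on a cut-down piece of the query above (most columns trimmed for brevity): in 7.3 the JOIN syntax pins the join order, while a plain FROM list lets the planner choose its own order. The LEFT OUTER JOIN to password_hint has to stay written as a join, since only inner joins can be moved into the WHERE clause this way.

-- join order forced by the syntax (7.3 behaviour):
SELECT ui.id, a.username
FROM user_preference up
     INNER JOIN user_info ui ON up.user_id = ui.id
     INNER JOIN account a ON ui.account_id = a.id;

-- join order left to the planner:
SELECT ui.id, a.username
FROM user_preference up, user_info ui, account a
WHERE up.user_id = ui.id
  AND ui.account_id = a.id;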
] |
[
{
"msg_contents": "#postgresql on Freenode recommended I post this here.\n\nI'm seeing some odd behaviour with LIMIT. The query plans are included\nhere, as are the applicable table and index definitions. All table,\nindex, and query information can be found in a standard dbmail 1.2.6\ninstall, if anyone wants to try setting up an exactly similar system.\n\nVersion: PostgreSQL 7.4.3 on i386-pc-linux-gnu, compiled by GCC\ni386-linux-gcc (GCC) 3.3.4 (Debian 1:3.3.4-3)\nOS: Debian Linux, \"unstable\" tree\n\nSome settings that I was told to include (as far as I am aware, these\nare debian default values):\nshared_buffers = 1000\nsort_mem = 1024\neffective_cache_size = 1000\n\n\nTable/index definitions:\n\n Table \"public.messages\"\n Column | Type | Modifiers\n---------------+--------------------------------+----------------------------------------------------\n message_idnr | bigint | not null default\nnextval('message_idnr_seq'::text)\n mailbox_idnr | bigint | not null default 0\n messagesize | bigint | not null default 0\n seen_flag | smallint | not null default 0\n answered_flag | smallint | not null default 0\n deleted_flag | smallint | not null default 0\n flagged_flag | smallint | not null default 0\n recent_flag | smallint | not null default 0\n draft_flag | smallint | not null default 0\n unique_id | character varying(70) | not null\n internal_date | timestamp(6) without time zone |\n status | smallint | not null default 0\n rfcsize | bigint | not null default 0\n queue_id | character varying(40) | not null default\n''::character varying\nIndexes:\n \"messages_pkey\" primary key, btree (message_idnr)\n \"idx_mailbox_idnr_queue_id\" btree (mailbox_idnr, queue_id)\nForeign-key constraints:\n \"ref141\" FOREIGN KEY (mailbox_idnr) REFERENCES\nmailboxes(mailbox_idnr) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n\n\nEXPLAIN ANALYZE results:\n\n\n EXPLAIN ANALYZE SELECT message_idnr FROM messages WHERE mailbox_idnr\n= 1746::bigint AND status<2::smallint AND seen_flag = 0 AND unique_id\n!= '' ORDER BY message_idnr ASC LIMIT 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..848.36 rows=1 width=8) (actual\ntime=1173.949..1173.953 rows=1 loops=1)\n -> Index Scan using messages_pkey on messages \n(cost=0.00..367338.15 rows=433 width=8) (actual\ntime=1173.939..1173.939 rows=1 loops=1)\n Filter: ((mailbox_idnr = 1746::bigint) AND (status <\n2::smallint) AND (seen_flag = 0) AND ((unique_id)::text <> ''::text))\n Total runtime: 1174.012 ms\n \n \nEXPLAIN ANALYZE SELECT message_idnr FROM messages WHERE mailbox_idnr =\n1746::bigint AND status<2::smallint AND seen_flag = 0 AND unique_id !=\n'' ORDER BY message_idnr ASC ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2975.42..2976.50 rows=433 width=8) (actual\ntime=2.357..2.545 rows=56 loops=1)\n Sort Key: message_idnr\n -> Index Scan using idx_mailbox_idnr_queue_id on messages \n(cost=0.00..2956.46 rows=433 width=8) (actual time=0.212..2.124\nrows=56 loops=1)\n Index Cond: (mailbox_idnr = 1746::bigint)\n Filter: ((status < 2::smallint) AND (seen_flag = 0) AND\n((unique_id)::text <> ''::text))\n Total runtime: 2.798 ms\n \n \nI see a similar speedup (and change in query plan) using \"LIMIT 1\nOFFSET <anything besides 0>\".\n",
"msg_date": "Fri, 10 Sep 2004 15:01:42 -0600",
"msg_from": "Joey Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interesting performance behaviour"
},
{
"msg_contents": "Joey,\n\n> shared_buffers = 1000\n> sort_mem = 1024\n> effective_cache_size = 1000\n\neffective_cache_size should be much higher, like 3/4 of your available RAM. \nThis is probably the essence of your planner problem; the planner thinks you \nhave no RAM.\n\n> I see a similar speedup (and change in query plan) using \"LIMIT 1\n> OFFSET <anything besides 0>\".\n\nSo what's your problem?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 10 Sep 2004 14:46:53 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interesting performance behaviour"
},
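effective_cache_size is measured in 8 kB disk pages in this release, so 3/4 of a 2 GB machine works out to roughly 196608 pages. A sketch of the relevant postgresql.conf lines, with example values only (not universal recommendations):

# postgresql.conf -- example values for a machine with 2 GB of RAM
shared_buffers       = 10000      # ~80 MB
sort_mem             = 8192       # 8 MB per sort, in kB
effective_cache_size = 196608     # ~1.5 GB, in 8 kB pages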
{
"msg_contents": "> > shared_buffers = 1000\n> > sort_mem = 1024\n> > effective_cache_size = 1000\n> \n> effective_cache_size should be much higher, like 3/4 of your available RAM.\n> This is probably the essence of your planner problem; the planner thinks you\n> have no RAM.\n\nI set effective_cache_size to 64000 on a machine with 2GB of physical\nRAM, and the behaviour is exactly the same.\n",
"msg_date": "Fri, 10 Sep 2004 16:01:21 -0600",
"msg_from": "Joey Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interesting performance behaviour"
},
{
"msg_contents": "Accidentally sent directly to Josh.\n\n\n---------- Forwarded message ----------\nFrom: Joey Smith <[email protected]>\nDate: Fri, 10 Sep 2004 15:57:49 -0600\nSubject: Re: [PERFORM] Interesting performance behaviour\nTo: [email protected]\n\n> > I see a similar speedup (and change in query plan) using \"LIMIT 1\n> > OFFSET <anything besides 0>\".\n>\n> So what's your problem?\n\nThe problem is that \"LIMIT 1 OFFSET 0\" has such poor performance. I'm\nnot so much worried about the query time (it's still low enough to be\nacceptable), but the fact that it behaves oddly raised the question of\nwhether this was correct behaviour or not. I'll try it with a saner\nvalue for effective_cache_size.\n",
"msg_date": "Fri, 10 Sep 2004 16:02:05 -0600",
"msg_from": "Joey Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Interesting performance behaviour"
},
{
"msg_contents": "Joey Smith <[email protected]> writes:\n> EXPLAIN ANALYZE SELECT message_idnr FROM messages WHERE mailbox_idnr\n> = 1746::bigint AND status<2::smallint AND seen_flag = 0 AND unique_id\n> != '' ORDER BY message_idnr ASC LIMIT 1;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..848.36 rows=1 width=8) (actual\n> time=1173.949..1173.953 rows=1 loops=1)\n> -> Index Scan using messages_pkey on messages \n> (cost=0.00..367338.15 rows=433 width=8) (actual\n> time=1173.939..1173.939 rows=1 loops=1)\n> Filter: ((mailbox_idnr = 1746::bigint) AND (status <\n> 2::smallint) AND (seen_flag = 0) AND ((unique_id)::text <> ''::text))\n> Total runtime: 1174.012 ms\n\nThe planner is correctly estimating that this plan is very expensive\noverall --- but it is guessing that the indexscan will only need to be\nrun 1/433'd of the way to completion before the single required row is\nfound. So that makes it look like a slightly better bet than the more\nconventional indexscan-on-mailbox_idnr-and-then-sort plan. If you ask\nfor a few more than one row, though, it stops looking like a good bet,\nsince each additional row is estimated to cost another 1/433'd of the\ntotal cost.\n\nPart of the estimation error is that there are only 56 matching rows\nnot 433, so the real cost-per-row ought to be 1/56'th of the total\nindexscan cost. I suspect also that there is some correlation between\nmessage_idnr and mailbox_idnr, which results in having to scan much\nmore than the expected 1/56'th of the index before finding a matching\nrow.\n\nThe planner has no stats about intercolumn correlation so it's not going\nto be able to recognize the correlation risk, but if you could get the\nrowcount estimate closer to reality that would be enough to tilt the\nscales to the better plan. Increasing ANALYZE's stats target for\nmailbox_idnr would be worth trying. Also, I suspect that there is a\nstrong correlation between seen_flag and status, no? This again is\nsomething you can't expect the planner to realize directly, but you\nmight be able to finesse the problem (and save some storage as well)\nif you could merge the seen_flag into the status column and do just one\ncomparison to cover both conditions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 2004 18:09:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interesting performance behaviour "
}
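What raising the statistics target for that column looks like; the target of 100 is an arbitrary example (the default is 10), and ANALYZE has to be re-run for the new target to take effect:

ALTER TABLE messages ALTER COLUMN mailbox_idnr SET STATISTICS 100;
ANALYZE messages;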
] |
[
{
"msg_contents": "Hello,\n\nI saw a few mentions of 'effective_cache_size' parameter. Is this a\nnew PG 7.4 option? I have PG 7.3.4 and didn't see that parameter in my\npostgresql.conf.\n\nThanks,\nOtis\n\n",
"msg_date": "Fri, 10 Sep 2004 15:00:07 -0700 (PDT)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "effective_cache_size in 7.3.4?"
},
{
"msg_contents": "Otis,\n\n> I saw a few mentions of 'effective_cache_size' parameter. Is this a\n> new PG 7.4 option? I have PG 7.3.4 and didn't see that parameter in my\n> postgresql.conf.\n\nNope. AFAIK, it's been around since 7.0. Maybe you accidentally cut it out \nof your postgresql.conf?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 10 Sep 2004 15:04:09 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size in 7.3.4?"
}
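A quick way to check whether a running 7.3 server knows the parameter at all, and the line to put back into postgresql.conf if it was trimmed out of the file (the value is only an example, counted in 8 kB pages):

SHOW effective_cache_size;

# postgresql.conf
effective_cache_size = 100000    # ~780 MB of expected OS cache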
] |
[
{
"msg_contents": "Here's the query:\n\n---------------------------------------------------------------------------\nSELECT * FROM bv_reviews r, bv_votes v \nWHERE r.vote_id = v.vote_id \nAND v.book_id = 113\n---------------------------------------------------------------------------\n\nbv_votes has around 7000 rows with the given book_id and bv_reviews\nhas 10 reviews. Thus the resulting table consists of only 10 rows.\n\nThat's the regular EXPLAIN of the query:\n\n---------------------------------------------------------------------------\nQUERY PLAN\nHash Join (cost=169.36..49635.37 rows=2117 width=897) (actual\ntime=13533.550..15107.987 rows=10 loops=1)\n Hash Cond: (\"outer\".vote_id = \"inner\".vote_id)\n -> Seq Scan on bv_reviews r (cost=0.00..45477.42 rows=396742\nwidth=881) (actual time=12.020..13305.055 rows=396742 loops=1)\n -> Hash (cost=151.96..151.96 rows=6960 width=16) (actual\ntime=24.673..24.673 rows=0 loops=1)\n -> Index Scan using i_votes_book_id on bv_votes v \n(cost=0.00..151.96 rows=6960 width=16) (actual time=0.035..14.970\nrows=7828 loops=1)\n Index Cond: (book_id = 113)\nTotal runtime: 15109.126 ms\n---------------------------------------------------------------------------\n\nAnd here is what happens when I turn the hashjoin to off:\n\n---------------------------------------------------------------------------\nQUERY PLAN\nNested Loop (cost=0.00..53799.79 rows=2117 width=897) (actual\ntime=4.260..79.721 rows=10 loops=1)\n -> Index Scan using i_votes_book_id on bv_votes v \n(cost=0.00..151.96 rows=6960 width=16) (actual time=0.071..14.100\nrows=7828 loops=1)\n Index Cond: (book_id = 113)\n -> Index Scan using i_bv_reviews_vote_id on bv_reviews r \n(cost=0.00..7.70 rows=1 width=881) (actual time=0.007..0.007 rows=0\nloops=7828)\n Index Cond: (r.vote_id = \"outer\".vote_id)\nTotal runtime: 79.830 ms\n---------------------------------------------------------------------------\n\nWhat am I to do? Are there hints (like in Oracle) in PostgreSQL to\nforce it to use the i_bv_reviews_vote_id index instead of doing a\nseq.scan? Or is something wrong with my Postgresql settings?\n\n-- \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n",
"msg_date": "Sat, 11 Sep 2004 15:45:42 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad performance with hashjoin"
},
{
"msg_contents": "Vitaly Belman <[email protected]> writes:\n> What am I to do?\n\nReduce random_page_cost and/or increase effective_cache_size.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Sep 2004 11:28:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad performance with hashjoin "
}
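Both settings can be tried per session before touching postgresql.conf, which makes it easy to see at what point the planner switches from the hash join to the nested loop; the values below are just starting points to experiment with, not recommendations:

SET effective_cache_size = 100000;   -- in 8 kB pages
SET random_page_cost = 2;            -- default is 4
EXPLAIN ANALYZE
SELECT * FROM bv_reviews r, bv_votes v
WHERE r.vote_id = v.vote_id AND v.book_id = 113;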
] |
[
{
"msg_contents": "Hi all,\nI had a difficult time deciding which list to post\nthis to, so please forgive me if this list doesn't\nperfectly match my questions. My decision will not\nsolely be based on performance, but it is the primary\nconcern. I would be very appreciative if you all\ncould comment on my test plan. Real world examples of\na Postgres implementation of >=600G with a web\nfront-end would be great, or any data warehouse with\nsome size to it.\n\nThe dilemma:\nThe time has come to reevaluate/rearchitect an\napplication which I built about 3 years ago. There\nare no performance concerns with MySQL, but it would\nbenefit greatly from stored procedures, views, etc. \nIt is a very large rolling data warehouse that inserts\nabout 4.5 million rows every 2 hours and subsequently\nrolls this data off the back end of a 90 day window. \nA web interface has been designed for querying the\ndata warehouse. \n\nMigration planning is much easier with views and\nstored procedures and this is my primary reason for\nevaluating Postgres once again. As the application\ngrows I want to have the ability to provide backward\ncompatible views for those who are accustomed to the\ncurrent structure. This is not possible in MySQL. \n\nSome of the mining that we do could benefit from\nstored procedures as well. MySQL may have these in\nthe works, but we won't be able to move to a version\nof MySQL that supports stored procs for another year\nor two.\n\nRequirements:\nMerge table definition equivalent. We use these\nextensively.\n\nMerge table equivalent with all tables containing over\n100M rows(and about 40 columns, some quite wide) will\nneed to do index scans in at least 5 seconds(MySQL\ncurrently does 2, but we can live with 5) and return\n~200 rows.\n\nUm, gonna sound silly, but the web interface has to\nremain \"snappy\" under load. I don't see this as a\nmajor concern since you don't require table locking.\n\nIf business logic is moved to the database(likely with\nPostgres) performance for inserting with light logic\non each insert has to keep up with the 4.5M inserts\nper 2 hours(which MySQL completes in ~35min\ncurrently). Acceptable numbers for this aggregation\nwould be 45-55min using stored procedures.\n\nAbout 3 years ago I did some performance\ncharacterizations of Postgres vs. MySQL and didn't\nfeel Postgres was the best solution. 3 years later\nwe've won runner-up for MySQL application of the\nyear(behind Saabre). Oddly enough this reevaluting\ndatabase strategy is right on the coattails of this\naward. I'll begin writing my business logic within\nthe next week and start migrating test data shortly\nthereafter. Case studies would be very beneficial as\nI put together my analysis.\n\nAlso, this is for a Fortune 500 company that uses this\ndata warehouse extensively. It is an internal\napplication that is widely used and gets about 4 hits\nper employee per day. Much of customer care, data\nengineering, plant engineering(it's a cable company),\nand marketing use the interface. I've done a great\ndeal of press for MySQL and would be equally willing\nto tout the benefits of Postgres to trade rags,\nmagazines, etc provided the results are favorable.\n\nHere's our case study if you're interested . . . \nhttp://www.mysql.com/customers/customer.php?id=16\n\nThoughts, suggestions?\n\n'njoy,\nMark\n",
"msg_date": "Sat, 11 Sep 2004 21:24:42 -0700 (PDT)",
"msg_from": "Mark Cotner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Data Warehouse Reevaluation - MySQL vs Postgres"
},
{
"msg_contents": "On Sat, 11 Sep 2004, Mark Cotner wrote:\n\n> There are no performance concerns with MySQL, but it would benefit\n> greatly from stored procedures, views, etc. It is a very large rolling\n> data warehouse that inserts about 4.5 million rows every 2 hours and\n> subsequently rolls this data off the back end of a 90 day window.\n\nWhile it is impossible to know without testing, postgresql has the benefit\nof readers and writers that does not block each other. So in situations\nwhere you do lots of concurrent inserts and selects postgresql should\nbehave well.\n\n> Merge table definition equivalent. We use these extensively.\n\nAs far as I can tell a merge table in mysql is the same as a view over a \nnumber of unions of other tables. And possibly a rule that defines how \ninserts will be done if you do inserts in the merged table.\n\n> Merge table equivalent with all tables containing over 100M rows(and\n> about 40 columns, some quite wide) will need to do index scans in at\n> least 5 seconds(MySQL currently does 2, but we can live with 5) and\n> return ~200 rows.\n\nSince each table that are merged will have it's own index the speed should \nbe proportional to the number of tables. Index scans in them self are very \nfast, and of you have 30 tables you need 30 index scans.\n\nAlso, are you sure you really need merge tables? With pg having row locks\nand mvcc, maybe you could go for a simpler model with just one big table. \nOften you can also combine that with partial indexes to get a smaller\nindex to use for lots of your queries.\n\n> Thoughts, suggestions?\n\nI see nothing in what you have written that indicates that pg can not do \nthe job, and do it well. It's however very hard to know exactly what is \nthe bottleneck before one tries. There are lots of cases where people have \nconverted mysql applications to postgresql and have gotten a massive \nspeedup. You could be lucky and have such a case, who knows..\n\nI spend some time each day supporting people using postgresql in the\n#postgresql irc channel (on the freenode.net network). There I talk to\npeople doing both small and big conversions and the majority is very happy\nwith the postgresql performance. Postgresql have gotten faster and faster \nwith each release and while speed was a fair argument a number of years \nago it's not like that today.\n\nThat said, in the end it depends on the application.\n\nWe are all interested in how it goes (well, at least me :-), so feel free\nto send more mails keeping us posted. Good luck.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Sun, 12 Sep 2004 07:22:48 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres"
},
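A sketch of the partial-index idea Dennis mentions, on a hypothetical logging table: the index only covers the recent rows that most queries touch, so it stays small, and the planner can use it whenever the query repeats the predicate.

-- hypothetical table standing in for one big logging table
CREATE TABLE samples (sampled_at timestamp, node_id integer, bytes bigint);

CREATE INDEX samples_recent_idx ON samples (node_id)
    WHERE sampled_at >= '2004-09-01';

-- a query that can use the partial index
SELECT sum(bytes) FROM samples
WHERE node_id = 42 AND sampled_at >= '2004-09-01';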
{
"msg_contents": "Mark Cotner wrote:\n\n> Requirements:\n> Merge table definition equivalent. We use these\n> extensively.\n\nWhat do you mean with \"merge table definition equivalent\"?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n",
"msg_date": "Sun, 12 Sep 2004 11:07:34 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres"
},
{
"msg_contents": "\nMark Cotner wrote:\n> Hi all,\n> I had a difficult time deciding which list to post\n> this to, so please forgive me if this list doesn't\n> perfectly match my questions. My decision will not\n> solely be based on performance, but it is the primary\n> concern. I would be very appreciative if you all\n> could comment on my test plan. Real world examples of\n> a Postgres implementation of >=600G with a web\n> front-end would be great, or any data warehouse with\n> some size to it.\n\nI'm only in the 30GB range of database, in case that's a consideration \nfor my comments that follow.\n\nAt this time, I'm working out the best ROLAP options for our PG \ntransaction store. The transaction store is highly volatile -- longest a \ntxn stays in it is 15 days ... so you imagine the need for historic \nsummaries :-)\n\nI've also implemented multiple data servers, including\na federated server that had to make the best of existing engines\n(like MySQL, PG and everything from MSJet to RedBrick in the commercial \nworld).\n\n> The time has come to reevaluate/rearchitect an\n> application which I built about 3 years ago. There\n> are no performance concerns with MySQL, but it would\n> benefit greatly from stored procedures, views, etc. \n\nIf your company is currently happy with MySQL, there probably are other \n(nontechnical) reasons to stick with it. I'm impressed that you'd \nconsider reconsidering PG.\n\n> Some of the mining that we do could benefit from\n> stored procedures as well. MySQL may have these in\n> the works, but we won't be able to move to a version\n> of MySQL that supports stored procs for another year\n> or two.\n\nAnd PG lets you back-end with some powerful pattern- and \naggregate-handling languages, like Perl. This was definitely a plus for \ndata mining of web traffic, for example. The power of server-side \nextensibility for bailing you out of a design dead-end is not \ninconsequential.\n\nPG doesn't have PIVOT operators (qv Oracle and MSSQL), but it makes the \ntranslation from data to column fairly painless otherwise.\n\n> Requirements:\n> Merge table definition equivalent. We use these\n> extensively.\n\nLooked all over mysql.com etc, and afaics merge table\nis indeed exactly a view of a union-all. Is that right?\n\nPG supports views, of course, as well (now) as tablespaces, allowing you \nto split tables/tablesets across multiple disk systems.\nPG is also pretty efficient in query plans on such views, where (say) \nyou make one column a constant (identifier, sort of) per input table.\n\n> Merge table equivalent with all tables containing over\n> 100M rows(and about 40 columns, some quite wide) will\n> need to do index scans in at least 5 seconds(MySQL\n> currently does 2, but we can live with 5) and return\n> ~200 rows.\n\nPG has TOAST for handling REALLY BIG columns, and the generic TEXT type \nis as efficient as any size-specific VARCHAR() type ... should make \nthings easier for you.\n\n> Um, gonna sound silly, but the web interface has to\n> remain \"snappy\" under load. I don't see this as a\n> major concern since you don't require table locking.\n\nAgreed. It's more in your warehouse design, and intelligent bounding of \nqueries. I'd say PG's query analyzer is a few years ahead of MySQL for \nlarge and complex queries.\n\n> If business logic is moved to the database(likely with\n> Postgres) performance for inserting with light logic\n> on each insert has to keep up with the 4.5M inserts\n> per 2 hours(which MySQL completes in ~35min\n> currently). 
Acceptable numbers for this aggregation\n> would be 45-55min using stored procedures.\n\nAgain, it's a matter of pipeline design. The tools for creating an \nefficient pipeline are at least as good in PG as MySQL.\n\nIf you try to insert and postprocess information one row at a time,\nprocedures or no, there's no offhand way to guarantee your performance \nwithout a test/prototype.\n\nOn the other hand, if you do warehouse-style loading (Insert, or PG \nCOPY, into a temp table; and then 'upsert' into the perm table), I can \nguarantee 2500 inserts/sec is no problem.\n\n> Here's our case study if you're interested . . . \n> http://www.mysql.com/customers/customer.php?id=16\n",
"msg_date": "Sun, 12 Sep 2004 20:47:17 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
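A sketch of the load pattern Mischa describes, with hypothetical table, column, and file names: COPY fills a temporary staging table, then one UPDATE and one INSERT move the rows into the permanent table ('facts' here stands in for whatever the real target is).

BEGIN;
CREATE TEMP TABLE stage (id integer, val numeric);
COPY stage FROM '/tmp/batch.dat';        -- path is hypothetical

-- 'upsert': update rows that already exist, insert the rest
UPDATE facts SET val = s.val FROM stage s WHERE facts.id = s.id;
INSERT INTO facts (id, val)
    SELECT s.id, s.val FROM stage s
    WHERE NOT EXISTS (SELECT 1 FROM facts WHERE facts.id = s.id);
COMMIT;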
{
"msg_contents": "The world rejoiced as Mischa Sandberg <[email protected]> wrote:\n> Mark Cotner wrote:\n>> Requirements:\n>> Merge table definition equivalent. We use these\n>> extensively.\n\n> Looked all over mysql.com etc, and afaics merge table is indeed\n> exactly a view of a union-all. Is that right?\n\n> PG supports views, of course, as well (now) as tablespaces, allowing\n> you to split tables/tablesets across multiple disk systems. PG is\n> also pretty efficient in query plans on such views, where (say) you\n> make one column a constant (identifier, sort of) per input table.\n\nThe thing that _doesn't_ work well with these sorts of UNION views are\nwhen you do self-joins. Supposing you have 10 members, a self-join\nleads to a 100-way join, which is not particularly pretty.\n\nI'm quite curious as to how MySQL(tm) copes with this, although it may\nnot be able to take place; they may not support that...\n\n>> Um, gonna sound silly, but the web interface has to remain \"snappy\"\n>> under load. I don't see this as a major concern since you don't\n>> require table locking.\n\n> Agreed. It's more in your warehouse design, and intelligent bounding\n> of queries. I'd say PG's query analyzer is a few years ahead of\n> MySQL for large and complex queries.\n\nThe challenge comes in if the application has had enormous amounts of\neffort put into it to attune it exactly to MySQL(tm)'s feature set.\n\nThe guys working on RT/3 have found this a challenge; they had rather\na lot of dependancies on its case-insensitive string comparisons,\ncausing considerable grief.\n\n> On the other hand, if you do warehouse-style loading (Insert, or PG\n> COPY, into a temp table; and then 'upsert' into the perm table), I\n> can guarantee 2500 inserts/sec is no problem.\n\nThe big wins are thus:\n\n 1. Group plenty of INSERTs into a single transaction.\n\n 2. Better still, use COPY to cut parsing costs plenty more.\n\n 3. Adding indexes _after_ the COPY are a further win.\n\nAnother possibility is to do clever things with stored procs; load\nincoming data using the above optimizations, and then run stored\nprocedures to use some more or less fancy logic to put the data where\nit's ultimately supposed to be. Having the logic running inside the\nengine is the big optimization.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','linuxfinances.info').\nhttp://linuxfinances.info/info/spreadsheets.html\nRules of the Evil Overlord #198. \"I will remember that any\nvulnerabilities I have are to be revealed strictly on a need-to-know\nbasis. I will also remember that no one needs to know.\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Sun, 12 Sep 2004 22:29:00 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
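The three wins listed above, as one sketch with hypothetical table and file names: row-by-row INSERTs would all go inside a single BEGIN/COMMIT, but COPY already replaces them here, and the index build and ANALYZE come only after the data is in.

BEGIN;
COPY big_log FROM '/tmp/batch.dat';                    -- instead of per-row INSERTs
COMMIT;

CREATE INDEX big_log_ts_idx ON big_log (logged_at);    -- build indexes after the load
ANALYZE big_log;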
{
"msg_contents": "See comments . . . thanks for the feedback.\n\n'njoy,\nMark\n\n--- Christopher Browne <[email protected]> wrote:\n\n> The world rejoiced as Mischa Sandberg\n> <[email protected]> wrote:\n> > Mark Cotner wrote:\n> >> Requirements:\n> >> Merge table definition equivalent. We use these\n> >> extensively.\n> \n> > Looked all over mysql.com etc, and afaics merge\n> table is indeed\n> > exactly a view of a union-all. Is that right?\n> \n> > PG supports views, of course, as well (now) as\n> tablespaces, allowing\n> > you to split tables/tablesets across multiple disk\n> systems. PG is\n> > also pretty efficient in query plans on such\n> views, where (say) you\n> > make one column a constant (identifier, sort of)\n> per input table.\n> \n> The thing that _doesn't_ work well with these sorts\n> of UNION views are\n> when you do self-joins. Supposing you have 10\n> members, a self-join\n> leads to a 100-way join, which is not particularly\n> pretty.\n> \n> I'm quite curious as to how MySQL(tm) copes with\n> this, although it may\n> not be able to take place; they may not support\n> that...\n> \n> >> Um, gonna sound silly, but the web interface has\n> to remain \"snappy\"\n> >> under load. I don't see this as a major concern\n> since you don't\n> >> require table locking.\n> \n> > Agreed. It's more in your warehouse design, and\n> intelligent bounding\n> > of queries. I'd say PG's query analyzer is a few\n> years ahead of\n> > MySQL for large and complex queries.\n> \n> The challenge comes in if the application has had\n> enormous amounts of\n> effort put into it to attune it exactly to\n> MySQL(tm)'s feature set.\n> \n> The guys working on RT/3 have found this a\n> challenge; they had rather\n> a lot of dependancies on its case-insensitive string\n> comparisons,\n> causing considerable grief.\n>\n\nNot so much, I've tried to be as agnostic as possible.\n Much of the more advanced mining that I've written is\nkinda MySQL specific, but needs to be rewritten as\nstored procedures anyway.\n \n> > On the other hand, if you do warehouse-style\n> loading (Insert, or PG\n> > COPY, into a temp table; and then 'upsert' into\n> the perm table), I\n> > can guarantee 2500 inserts/sec is no problem.\n> \n> The big wins are thus:\n> \n> 1. Group plenty of INSERTs into a single\n> transaction.\n> \n> 2. Better still, use COPY to cut parsing costs\n> plenty more.\n> \n> 3. Adding indexes _after_ the COPY are a further\n> win.\n> \n> Another possibility is to do clever things with\n> stored procs; load\n> incoming data using the above optimizations, and\n> then run stored\n> procedures to use some more or less fancy logic to\n> put the data where\n> it's ultimately supposed to be. Having the logic\n> running inside the\n> engine is the big optimization.\n\nAgreed, I did some preliminary testing today and am\nvery impressed. I wasn't used to running analyze\nafter a data load, but once I did that everything was\nsnappy.\n\nMy best results from MySQL bulk inserts was around 36k\nrows per second on a fairly wide table. Today I got\n42k using the COPY command, but with the analyze post\ninsert the results were similar. These are excellent\nnumbers. It basically means we could have our\ncake(great features) and eat it too(performance that's\ngood enough to run the app).\n\nQueries from my test views were equally pleasing. I\nwon't bore you with the details just yet, but\nPostgreSQL is doing great. Not that you all are\nsurprised. 
;)\n\n\n> -- \n> wm(X,Y):-write(X),write('@'),write(Y).\n> wm('cbbrowne','linuxfinances.info').\n> http://linuxfinances.info/info/spreadsheets.html\n> Rules of the Evil Overlord #198. \"I will \n> remember that any\n> vulnerabilities I have are to be revealed strictly \n> on a need-to-know\n> basis. I will also remember that no one needs to\n> know.\"\n> <http://www.eviloverlord.com/>\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n",
"msg_date": "Mon, 13 Sep 2004 00:57:52 -0700 (PDT)",
"msg_from": "Mark Cotner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "A long time ago, in a galaxy far, far away, [email protected] (Mark Cotner) wrote:\n> Agreed, I did some preliminary testing today and am very impressed.\n> I wasn't used to running analyze after a data load, but once I did\n> that everything was snappy.\n\nSomething worth observing is that this is true for _any_ of the\ndatabase systems supporting a \"cost-based\" optimization system,\nincluding Oracle and DB2.\n\nWhen working with SAP R/3 Payroll, on one project, we found that when\nthe system was empty of data, the first few employee creates were\nquick enough, but it almost immediately got excruciatingly slow. One\nof the DBAs told the Oracle instance underneath to collect statistics\non the main table, and things _immediately_ got snappy again. But it\ndidn't get snappy until the conversion folk had run the conversion\nprocess for several minutes, to the point to which it would get\npainfully slow :-(. There, with MILLIONS of dollars worth of license\nfees being paid, across the various vendors, it still took a fair bit\nof manual fiddling.\n\nMySQL(tm) is just starting to get into cost-based optimization; in\nthat area, they're moving from where the \"big DBs\" were about 10 years\nago. It was either version 7 or 8 where Oracle started moving to\ncost-based optimization, and (as with the anecdote above) it took a\nrelease or two for people to get accustomed to the need to 'feed' the\noptimizer with statistics. This is a \"growing pain\" that bites users\nwith any database where this optimization gets introduced. It's\nworthwhile, but is certainly not costless.\n\nI expect some forseeable surprises will be forthcoming for MySQL AB's\ncustomers in this regard...\n\n> My best results from MySQL bulk inserts was around 36k rows per\n> second on a fairly wide table. Today I got 42k using the COPY\n> command, but with the analyze post insert the results were similar.\n> These are excellent numbers. It basically means we could have our\n> cake(great features) and eat it too(performance that's good enough\n> to run the app).\n\nIn the end, performance for inserts is always fundamentally based on\nhow much disk I/O there is, and so it should come as no shock that\nwhen roughly the same amount of data is getting laid down on disk,\nperformance won't differ much on these sorts of essentials.\n\nThere are a few places where there's some need for cleverness; if you\nsee particular queries running unusually slowly, it's worth doing an\nEXPLAIN or EXPLAIN ANALYZE on them, to see how the query plans are\nbeing generated. There's some collected wisdom out here on how to\nencourage the right plans.\n\nThere are also unexpected results that are OK. We did a system\nupgrade a few days ago that led to one of the tables starting out\ntotally empty. A summary report that looks at that table wound up\nwith a pretty wacky looking query plan (compared to what's usual)\nbecause the postmaster knew that the query would be reading in\nessentially the entire table. You'd normally expect an index scan,\nlooking for data for particular dates. In this case, it did a \"scan\nthe whole table; filter out a few irrelevant entries\" plan. \n\nIt looked wacky, compared to what's usual, but it ran in about 2\nseconds, which was way FASTER than what's usual. So the plan was\nexactly the right one.\n\nTelling the difference between the right plan and a poor one is a bit\nof an art; we quite regularly take a look at query plans on this list\nto figure out what might be not quite right. 
If you find slow ones,\nmake sure you have run ANALYZE on the tables recently, to be sure that\nthe plans are sane, and you may want to consider posting some of them\nto see if others can point to improvements that can be made.\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://linuxfinances.info/info/linuxdistributions.html\n\"I can't believe my room doesn't have Ethernet! Why wasn't it wired\nwhen the house was built?\"\n\"The house was built in 1576.\" \n-- Alex Kamilewicz on the Oxford breed of `conference American.'\n",
"msg_date": "Mon, 13 Sep 2004 05:21:42 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "\nMark,\n\nI thought some additional comments on top of Christopher's excellent notes\nmight help you.\n\n> Christopher Browne\n> The world rejoiced as Mischa Sandberg\n> <[email protected]> wrote:\n> > Mark Cotner wrote:\n> >> Requirements:\n> >> Merge table definition equivalent. We use these\n> >> extensively.\n>\n> > Looked all over mysql.com etc, and afaics merge table is indeed\n> > exactly a view of a union-all. Is that right?\n>\n\nPostgreSQL's functionality is in many ways similar to Oracle Partitioning.\n\nLoading up your data in many similar tables, then creating a view like:\n\nCREATE VIEW BIGTABLE (idate, col1, col2, col3...) AS\nSELECT 200409130800, col1, col2, col3... FROM table200409130800\nUNION ALL\nSELECT 200409131000, col1, col2, col3... FROM table200409131000\nUNION ALL\nSELECT 200409131200, col1, col2, col3... FROM table200409131200\n...etc...\n\nwill allow the PostgreSQL optimizer to eliminate partitions from the query\nwhen you run queries which include a predicate on the partitioning_col, e.g.\n\nselect count(*) from bigtable where idate >= 200409131000\n\nwill scan the last two partitions only...\n\nThere are a few other ways of creating the view that return the same answer,\nbut only using constants in that way will allow the partitions to be\neliminated from the query, and so run for much longer.\n\nSo you can give different VIEWS to different user groups, have different\nindexes on different tables etc.\n\nHowever, I haven't managed to get this technique to work when performing a\nstar join to a TIME dimension table, since the parition elimination relies\non comparison of constant expressions. You'll need to check out each main\njoin type to make sure it works for you in your environment.\n\n> > PG supports views, of course, as well (now) as tablespaces, allowing\n> > you to split tables/tablesets across multiple disk systems. PG is\n> > also pretty efficient in query plans on such views, where (say) you\n> > make one column a constant (identifier, sort of) per input table.\n>\n> The thing that _doesn't_ work well with these sorts of UNION views are\n> when you do self-joins. Supposing you have 10 members, a self-join\n> leads to a 100-way join, which is not particularly pretty.\n>\n\nWell, that only happens when you forget to include the partitioning constant\nin the self join.\n\ne.g. select count(*) from bigtable a, bigtable b where a.idate =\n.idate; --works just fine\n\nThe optimizer really is smart enough to handle that too, but I'm sure such\nlarge self-joins aren't common for you anyhow.\n\n> I'm quite curious as to how MySQL(tm) copes with this, although it may\n> not be able to take place; they may not support that...\n>\n\nIt doesn't, AFAIK.\n\n> Christopher Browne wrote\n> A long time ago, in a galaxy far, far away, [email protected]\n> (Mark Cotner) wrote:\n> > Agreed, I did some preliminary testing today and am very impressed.\n> > I wasn't used to running analyze after a data load, but once I did\n> > that everything was snappy.\n>\n> Something worth observing is that this is true for _any_ of the\n> database systems supporting a \"cost-based\" optimization system,\n> including Oracle and DB2.\n\nAgreed. 
You can reduce the time for the ANALYZE by ignoring some of the\n(measures) columns not used in WHERE clauses.\n\nAlso, if you're sure that each load is very similar to the last, you might\neven consider directly updating pg_statistic rows with the statistical\nvalues produced from an earlier ANALYZE...scary, but it can work.\n\nTo create a set of tables of > 600Gb, you will benefit from creating each\ntable WITHOUT OIDS.\n\nHope some of that helps you...\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 13 Sep 2004 23:07:35 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
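A compact sketch of the scheme described above, with hypothetical partition tables and columns; the WITHOUT OIDS clause follows the tip for very large tables, and the self-join includes the partitioning constant as recommended. Whether branches are actually skipped should be confirmed with EXPLAIN, since the post notes it depends on using constants in exactly this way:

    CREATE TABLE table200409131000 (col1 int, col2 text, col3 numeric) WITHOUT OIDS;
    CREATE TABLE table200409131200 (col1 int, col2 text, col3 numeric) WITHOUT OIDS;

    CREATE VIEW bigtable (idate, col1, col2, col3) AS
    SELECT 200409131000, col1, col2, col3 FROM table200409131000
    UNION ALL
    SELECT 200409131200, col1, col2, col3 FROM table200409131200;

    -- the constant predicate is what allows the first branch to be skipped
    EXPLAIN SELECT count(*) FROM bigtable WHERE idate >= 200409131200;

    -- self-join with the partitioning constant included, as in the post
    SELECT count(*) FROM bigtable a, bigtable b
    WHERE a.idate = b.idate AND a.col1 = b.col1;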
{
"msg_contents": "You all have been so very helpful so far and I really\nappreciate it.\n\nThe data in these tables is thankfully static since\nthey are logging tables and an analyze only takes\nabout 4 minutes for the largest of them.\n\nI've finished porting the schema and am importing the\ndata now. My estimates for just two-thirds(60 of the\n90 days) of one of our 30 cable systems(MySQL dbs) is\nestimated to take about 16 hours. This may seem like\na lot, but I'm satisfied with the performance. I've\ncreated a slightly normalized version and some stored\nprocedures to help me normalize the data. When this\nfinishes I'm going to query the data as is with the\nviews as you suggested, and I'm going to create views\nfor the normalized version to test that as well. This\nwill then be contrasted to the MySQL query results and\nI plan to write a white paper of my findings.\n\nI don't have any concerns that Postgres will do fine,\nbut if I run into any performance problems I'll be\nsure and post them here first.\n\nIt should be noted that our development life cycle is\ncurrently severely hindered by lack of features in\nMySQL like views and stored procedures. Frankly I've\nimplemented some pretty ugly SQL using as many as 5\ntemp tables to generate a result set with MySQL. \nHaving stored procedures and views is going to help us\ntremendously. This performance evaluation is to\nverify that Postgres can handle what we're going to\nthrow at it, not to find out if it's faster in\nmilliseconds than MySQL. We love the speed and ease\nof maintenance with MySQL, but have simply outgrown\nit. This will be reflected in the white paper.\n\nI have already imported our customer tables, which\naren't too small(2.4M rows x 3 tables), and stuck a\nview in front of it. The view queried faster than\nMySQL would query a pre-joined flat table.\n\nGetting carried away . . . needless to say I'm really\nexcited about the possiblity of Postgres, but I won't\nbore you with the details just yet. I'll send the\nlink out to the white paper so you all can review it\nbefore I send it anywhere else. If anything could\nhave been optimized more please let me know and I'll\nsee that it gets updated before it's widely published.\n\nThanks again for all the great feedback!\n\n'njoy,\nMark\n\n--- Christopher Browne <[email protected]> wrote:\n\n> A long time ago, in a galaxy far, far away,\n> [email protected] (Mark Cotner) wrote:\n> > Agreed, I did some preliminary testing today and\n> am very impressed.\n> > I wasn't used to running analyze after a data\n> load, but once I did\n> > that everything was snappy.\n> \n> Something worth observing is that this is true for\n> _any_ of the\n> database systems supporting a \"cost-based\"\n> optimization system,\n> including Oracle and DB2.\n> \n> When working with SAP R/3 Payroll, on one project,\n> we found that when\n> the system was empty of data, the first few employee\n> creates were\n> quick enough, but it almost immediately got\n> excruciatingly slow. One\n> of the DBAs told the Oracle instance underneath to\n> collect statistics\n> on the main table, and things _immediately_ got\n> snappy again. But it\n> didn't get snappy until the conversion folk had run\n> the conversion\n> process for several minutes, to the point to which\n> it would get\n> painfully slow :-(. 
There, with MILLIONS of dollars\n> worth of license\n> fees being paid, across the various vendors, it\n> still took a fair bit\n> of manual fiddling.\n> \n> MySQL(tm) is just starting to get into cost-based\n> optimization; in\n> that area, they're moving from where the \"big DBs\"\n> were about 10 years\n> ago. It was either version 7 or 8 where Oracle\n> started moving to\n> cost-based optimization, and (as with the anecdote\n> above) it took a\n> release or two for people to get accustomed to the\n> need to 'feed' the\n> optimizer with statistics. This is a \"growing pain\"\n> that bites users\n> with any database where this optimization gets\n> introduced. It's\n> worthwhile, but is certainly not costless.\n> \n> I expect some forseeable surprises will be\n> forthcoming for MySQL AB's\n> customers in this regard...\n> \n> > My best results from MySQL bulk inserts was around\n> 36k rows per\n> > second on a fairly wide table. Today I got 42k\n> using the COPY\n> > command, but with the analyze post insert the\n> results were similar.\n> > These are excellent numbers. It basically means\n> we could have our\n> > cake(great features) and eat it too(performance\n> that's good enough\n> > to run the app).\n> \n> In the end, performance for inserts is always\n> fundamentally based on\n> how much disk I/O there is, and so it should come as\n> no shock that\n> when roughly the same amount of data is getting laid\n> down on disk,\n> performance won't differ much on these sorts of\n> essentials.\n> \n> There are a few places where there's some need for\n> cleverness; if you\n> see particular queries running unusually slowly,\n> it's worth doing an\n> EXPLAIN or EXPLAIN ANALYZE on them, to see how the\n> query plans are\n> being generated. There's some collected wisdom out\n> here on how to\n> encourage the right plans.\n> \n> There are also unexpected results that are OK. We\n> did a system\n> upgrade a few days ago that led to one of the tables\n> starting out\n> totally empty. A summary report that looks at that\n> table wound up\n> with a pretty wacky looking query plan (compared to\n> what's usual)\n> because the postmaster knew that the query would be\n> reading in\n> essentially the entire table. You'd normally expect\n> an index scan,\n> looking for data for particular dates. In this\n> case, it did a \"scan\n> the whole table; filter out a few irrelevant\n> entries\" plan. \n> \n> It looked wacky, compared to what's usual, but it\n> ran in about 2\n> seconds, which was way FASTER than what's usual. So\n> the plan was\n> exactly the right one.\n> \n> Telling the difference between the right plan and a\n> poor one is a bit\n> of an art; we quite regularly take a look at query\n> plans on this list\n> to figure out what might be not quite right. If you\n> find slow ones,\n> make sure you have run ANALYZE on the tables\n> recently, to be sure that\n> the plans are sane, and you may want to consider\n> posting some of them\n> to see if others can point to improvements that can\n> be made.\n> -- \n> If this was helpful,\n> <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\n>\nhttp://linuxfinances.info/info/linuxdistributions.html\n> \"I can't believe my room doesn't have Ethernet! 
Why\n> wasn't it wired\n> when the house was built?\"\n> \"The house was built in 1576.\" \n> -- Alex Kamilewicz on the Oxford breed of\n> `conference American.'\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map\n> settings\n> \n\n",
"msg_date": "Tue, 14 Sep 2004 00:39:43 -0700 (PDT)",
"msg_from": "Mark Cotner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "\n\n\tPerformance hint :\n\n\tFor static data, do not normalize too much.\n\tFor instance if you have a row which can be linked to several other rows, \nyou can do this :\n\ncreate table parents (\n\tid\tserial primary key,\n\tvalues... )\n\ncreate table children (\n\tid serial primary key,\n\tparent_id references parents(id),\n\tinteger slave_value )\n\n\n\tOr you can do this, using an array :\n\ncreate table everything (\n\tid\tserial primary key,\n\tinteger[] children_values,\n\tvalues... )\n\n\tPros :\n\tNo Joins. Getting the list of chilndren_values from table everything is \njust a select.\n\tOn an application with several million rows, a query lasting 150 ms with \na Join takes 30 ms with an array.\n\tYou can build the arrays from normalized tables by using an aggregate \nfunction.\n\tYou can index the array elements with a GIST index...\n\n\tCons :\n\tNo joins, thus your queries are a little bit limited ; problems if the \narray is too long ;\n\n\n\n\n",
"msg_date": "Tue, 14 Sep 2004 11:07:59 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
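A rough sketch of the array variant above, populated from the normalized tables; it assumes the parents/children tables as defined in the post, uses the ARRAY(subquery) constructor, and the GiST indexing mentioned would come from the contrib/intarray module:

    CREATE TABLE everything (
        id              integer PRIMARY KEY,
        children_values integer[]
    );

    INSERT INTO everything (id, children_values)
    SELECT p.id,
           ARRAY(SELECT c.slave_value FROM children c WHERE c.parent_id = p.id)
    FROM parents p;

Fetching the child values is then a plain single-table select, e.g. SELECT children_values FROM everything WHERE id = 1;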
{
"msg_contents": "Hi, Mischa,\n\nOn Sun, 12 Sep 2004 20:47:17 GMT\nMischa Sandberg <[email protected]> wrote:\n\n> On the other hand, if you do warehouse-style loading (Insert, or PG \n> COPY, into a temp table; and then 'upsert' into the perm table), I can \n> guarantee 2500 inserts/sec is no problem.\n\nAs we can forsee that we'll have similar insert rates to cope with in\nthe not-so-far future, what do you mean with 'upsert'? Do you mean a\nstored procedure that iterates over the temp table?\n\nGenerally, what is the fastest way for doing bulk processing of \nupdate-if-primary-key-matches-and-insert-otherwise operations?\n\nThanks,\nMarkus Schaber\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Tue, 14 Sep 2004 14:14:52 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "> Mark Cotner wrote:\n\n> > The time has come to reevaluate/rearchitect an\n> > application which I built about 3 years ago. There\n> > are no performance concerns with MySQL, but it would\n> > benefit greatly from stored procedures, views, etc.\n>\n\nFrom: \"Mischa Sandberg\" <[email protected]>\n\n> If your company is currently happy with MySQL, there probably are other\n> (nontechnical) reasons to stick with it. I'm impressed that you'd\n> consider reconsidering PG.\n\nI'd like to second Mischa on that issue. In general, if you migrate an\n*existing* application from one RDBMS to another, you should expect\nperformance to decrease significantly. This is always true in a well\nperforming system even if the replacement technology is more sophisticated.\nThis is because of several factors.\n\nEven if you try to develop in totally agnostic generic SQL, you are always\ncustomizing to a feature set, namely the ones in the current system. Any\nexisting application has had substantial tuning and tweaking, and the new\none is at a disadvantage. Moreover, an existing system is a Skinnerian\nreward/punishment system to the developers and DBAs, rewarding or punishing\nthem for very environment specific choices - resulting in an application,\ndbms, OS, and platform that are both explicitly and unconsciously customized\nto work together in a particular manner.\n\nThe net effect is a rule of thumb that I use:\n\nNEVER reimplement an existing system unless the project includes substantial\nfunctional imporovement.\n\nEvery time I've broken that rule, I've found that users expectations, based\non the application they are used to, are locked in. Any place where the new\nsystem is slower, the users are dissatisfied; where it exceeds expectations\nit isn't appreciated: the users are used to the old system quirks, and the\nimprovements only leave them uncomforable since the system \"acts\ndifferently\". (I've broken the rule on occation for standardization\nconversions.)\n\nMy expectation is that pg will not get a fair shake here. If you do it - I'd\nlike to see the results anyway.\n\n/Aaron\n",
"msg_date": "Tue, 14 Sep 2004 09:24:35 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "* Markus Schaber ([email protected]) wrote:\n> Generally, what is the fastest way for doing bulk processing of \n> update-if-primary-key-matches-and-insert-otherwise operations?\n\nThis is a very good question, and I havn't seen much of an answer to it\nyet. I'm curious about the answer myself, actually. In the more recent\nSQL specs, from what I understand, this is essentially what the 'MERGE'\ncommand is for. This was recently added and unfortunately is not yet\nsupported in Postgres. Hopefully it will be added soon.\n\nOtherwise, what I've done is basically an update followed by an insert\nusing outer joins. If there's something better, I'd love to hear about\nit. The statements looks something like:\n\nupdate X\n set colA = a.colA,\n colB = a.colB\n from Y a\n where keyA = a.keyA and\n keyB = a.keyB;\n\ninsert into X\n select a.keyA,\n a.keyB,\n\t a.colA,\n\t a.colB\n from Y a left join X b\n using (keyA, keyB)\n where b.keyA is NULL and\n b.keyB is NULL;\n\nWith the appropriate indexes, this is pretty fast but I think a merge\nwould be much faster.\n\n\t\tThanks,\n\n\t\t\tStephen",
"msg_date": "Tue, 14 Sep 2004 10:33:03 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> From: \"Mischa Sandberg\" <[email protected]>\n> \n> > If your company is currently happy with MySQL, there probably are\n> > other (nontechnical) reasons to stick with it. I'm impressed that\n> > you'd consider reconsidering PG.\n> \n> I'd like to second Mischa on that issue.\n\nThough both of you are right from my point of view, I don't think\nit's very useful to discuss this item here.\n\nHaving once migrated a MySQL-DB to PG I can confirm, that in fact\nchances are good you will be unhappy if you adopt the MySQL\ndata-model and the SQL 1:1.\nAs well as PG has to be much more configured and optimized than\nMySQL.\nAs well as the client-application is supposed to be modified to a\ncertain extend, particularly if you want to take over some -or some\nmore- business-logic from client to database.\n\nBut, from what Mark stated so far I'm sure he is not going to migrate\nhis app just for fun, resp. without having considered this.\n\n> NEVER reimplement an existing system unless the project includes\n> substantial functional imporovement.\n\nor monetary issues\nI know one big database that was migrated from Oracle to PG and\nanother from SQLServer to PG because of licence-costs. Definitely\nthere are some more.\nThat applies to MySQL, too; licence policy is somewhat obscure to me,\nbut under certain circumstances you have to pay\n\nregards Harald\n\n-----BEGIN PGP SIGNATURE-----\nVersion: PGPfreeware 6.5.3 for non-commercial use <http://www.pgp.com>\n\niQA/AwUBQUb+O8JpD/drhCuMEQJCZACgqdJsrWjOwdP779PFaFMjxdgvqkwAoIPc\njPONy6urLRLf3vylVjVlEyci\n=/1Ka\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Tue, 14 Sep 2004 17:20:43 +0200",
"msg_from": "\"Harald Lau (Sector-X)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": ">>>>> \"MC\" == Mark Cotner <[email protected]> writes:\n\nMC> I've finished porting the schema and am importing the\nMC> data now. My estimates for just two-thirds(60 of the\nMC> 90 days) of one of our 30 cable systems(MySQL dbs) is\nMC> estimated to take about 16 hours. This may seem like\nMC> a lot, but I'm satisfied with the performance. I've\n\nbe sure to load your data without indexes defined for your initial\nimport.\n\ncheck your logs to see if increasing checkpoint_segments is\nrecommended. I found that bumping it up to 50 helped speed up my\ndata loads (restore from dump) significantly.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 14 Sep 2004 14:27:46 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
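A hedged outline of that load sequence with made-up object names; the checkpoint_segments value is simply the one reported useful above, and belongs in postgresql.conf rather than in SQL:

    -- postgresql.conf:  checkpoint_segments = 50   (reload/restart before the load)

    -- load with no indexes defined, then build them and analyze
    COPY bigtable_200409 FROM '/tmp/bigtable_200409.dat';
    CREATE INDEX bigtable_200409_idx1 ON bigtable_200409 (sample_time);
    ANALYZE bigtable_200409;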
{
"msg_contents": "> Stephen Frost\n> * Markus Schaber ([email protected]) wrote:\n> > Generally, what is the fastest way for doing bulk processing of\n> > update-if-primary-key-matches-and-insert-otherwise operations?\n>\n> This is a very good question, and I havn't seen much of an answer to it\n> yet. I'm curious about the answer myself, actually. In the more recent\n> SQL specs, from what I understand, this is essentially what the 'MERGE'\n> command is for. This was recently added and unfortunately is not yet\n> supported in Postgres. Hopefully it will be added soon.\n>\n\nYes, I think it is an important feature for both Data Warehousing (used in\nset-operation mode for bulk processing) and OLTP (saves a round-trip to the\ndatabase, so faster on single rows also). It's in my top 10 for 2005.\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Tue, 14 Sep 2004 21:38:49 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "On Mon, Sep 13, 2004 at 11:07:35PM +0100, Simon Riggs wrote:\n> PostgreSQL's functionality is in many ways similar to Oracle Partitioning.\n> \n> Loading up your data in many similar tables, then creating a view like:\n> \n> CREATE VIEW BIGTABLE (idate, col1, col2, col3...) AS\n> SELECT 200409130800, col1, col2, col3... FROM table200409130800\n> UNION ALL\n> SELECT 200409131000, col1, col2, col3... FROM table200409131000\n> UNION ALL\n> SELECT 200409131200, col1, col2, col3... FROM table200409131200\n> ...etc...\n> \n> will allow the PostgreSQL optimizer to eliminate partitions from the query\n> when you run queries which include a predicate on the partitioning_col, e.g.\n> \n> select count(*) from bigtable where idate >= 200409131000\n> \n> will scan the last two partitions only...\n> \n> There are a few other ways of creating the view that return the same answer,\n> but only using constants in that way will allow the partitions to be\n> eliminated from the query, and so run for much longer.\n\nIs there by any chance a set of functions to manage adding and removing\npartitions? Certainly this can be done by hand, but having a set of\ntools would make life much easier. I just looked but didn't see anything\non GBorg.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 14 Sep 2004 17:33:33 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "Googling 'upsert' (an Oraclism, I believe) will get you hits on Oracle \nand DB2's implementation of MERGE, which does what AMOUNTS to what is\ndescribed below (one mass UPDATE...FROM, one mass INSERT...WHERE NOT \nEXISTS).\n\nNo, you shouldn't iterate row-by-row through the temp table.\nWhenever possible, try to do updates in one single (mass) operation.\nDoing it that way gives the optimizer the best chance at amortizing\nfixed costs, and batching operations.\n\n---------\nIn any database other than Postgres, I would recommend doing the\nINSERT /followed by/ the UPDATE. That order looks wonky --- your update\nends up pointlessly operating on the rows just INSERTED. The trick is, \nUPDATE acquires and holds write locks (the rows were previously visible \nto other processes), while INSERT's write locks refer to rows that no \nother process could try to lock.\n\nStephen Frost wrote:\n> * Markus Schaber ([email protected]) wrote:\n> \n>>Generally, what is the fastest way for doing bulk processing of \n>>update-if-primary-key-matches-and-insert-otherwise operations?\n> \n> \n> This is a very good question, and I havn't seen much of an answer to it\n> yet. I'm curious about the answer myself, actually. In the more recent\n> SQL specs, from what I understand, this is essentially what the 'MERGE'\n> command is for. This was recently added and unfortunately is not yet\n> supported in Postgres. Hopefully it will be added soon.\n> \n> Otherwise, what I've done is basically an update followed by an insert\n> using outer joins. If there's something better, I'd love to hear about\n> it. The statements looks something like:\n> \n> update X\n> set colA = a.colA,\n> colB = a.colB\n> from Y a\n> where keyA = a.keyA and\n> keyB = a.keyB;\n> \n> insert into X\n> select a.keyA,\n> a.keyB,\n> \t a.colA,\n> \t a.colB\n> from Y a left join X b\n> using (keyA, keyB)\n> where b.keyA is NULL and\n> b.keyB is NULL;\n> \n> With the appropriate indexes, this is pretty fast but I think a merge\n> would be much faster.\n> \n> \t\tThanks,\n> \n> \t\t\tStephen\n",
"msg_date": "Tue, 14 Sep 2004 22:58:20 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
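Putting the two posts together, a sketch of the Postgres-friendly ordering (mass UPDATE first, then the anti-join INSERT) run as a single transaction over a staging table; table, key and column names are illustrative only:

    BEGIN;

    UPDATE X
       SET colA = s.colA, colB = s.colB
      FROM staging s
     WHERE X.keyA = s.keyA AND X.keyB = s.keyB;

    INSERT INTO X (keyA, keyB, colA, colB)
    SELECT s.keyA, s.keyB, s.colA, s.colB
      FROM staging s LEFT JOIN X USING (keyA, keyB)
     WHERE X.keyA IS NULL;

    COMMIT;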
{
"msg_contents": "> Jim C. Nasby\n> On Mon, Sep 13, 2004 at 11:07:35PM +0100, Simon Riggs wrote:\n> > PostgreSQL's functionality is in many ways similar to Oracle\n> Partitioning.\n> >\n> > Loading up your data in many similar tables, then creating a view like:\n> >\n> > CREATE VIEW BIGTABLE (idate, col1, col2, col3...) AS\n> > SELECT 200409130800, col1, col2, col3... FROM table200409130800\n> > UNION ALL\n> > SELECT 200409131000, col1, col2, col3... FROM table200409131000\n> > UNION ALL\n> > SELECT 200409131200, col1, col2, col3... FROM table200409131200\n> > ...etc...\n> >\n> > will allow the PostgreSQL optimizer to eliminate partitions\n> from the query\n> > when you run queries which include a predicate on the\n> partitioning_col, e.g.\n> >\n> > select count(*) from bigtable where idate >= 200409131000\n> >\n> > will scan the last two partitions only...\n> >\n> > There are a few other ways of creating the view that return the\n> same answer,\n> > but only using constants in that way will allow the partitions to be\n> > eliminated from the query, and so run for much longer.\n>\n> Is there by any chance a set of functions to manage adding and removing\n> partitions? Certainly this can be done by hand, but having a set of\n> tools would make life much easier. I just looked but didn't see anything\n> on GBorg.\n\nWell, its fairly straightforward to auto-generate the UNION ALL view, and\nimportant as well, since it needs to be re-specified each time a new\npartition is loaded or an old one is cleared down. The main point is that\nthe constant placed in front of each table must in some way relate to the\ndata, to make it useful in querying. If it is just a unique constant, chosen\nat random, it won't do much for partition elimination. So, that tends to\nmake the creation of the UNION ALL view an application/data specific thing.\n\nThe \"partitions\" are just tables, so no need for other management tools.\nOracle treats the partitions as sub-tables, so you need a range of commands\nto add, swap etc the partitions of the main table.\n\nI guess a set of tools that emulates that functionality would be generically\na good thing, if you can see a way to do that.\n\nOracle partitions were restricted in only allowing a single load statement\ninto a single partition at any time, whereas multiple COPY statements can\naccess a single partition table on PostgreSQL.\n\nBTW, multi-dimensional partitioning is also possible using the same general\nscheme....\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 15 Sep 2004 00:32:11 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
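One possible way to auto-generate the view, sketched as a PL/pgSQL function; it assumes the partitions follow a 'tableYYYYMMDDHH' naming convention so the embedded constant can be derived from the table name -- that naming rule is exactly the application-specific part mentioned above, so adjust to taste:

    CREATE OR REPLACE FUNCTION rebuild_bigtable_view() RETURNS void AS '
    DECLARE
        r   record;
        sql text := '''';
    BEGIN
        FOR r IN SELECT tablename FROM pg_tables
                  WHERE tablename LIKE ''table20%'' ORDER BY tablename LOOP
            IF sql <> '''' THEN
                sql := sql || '' UNION ALL '';
            END IF;
            -- strip the leading "table" to recover the constant, e.g. 200409131000
            sql := sql || ''SELECT '' || substr(r.tablename, 6)
                       || '' AS idate, col1, col2, col3 FROM '' || r.tablename;
        END LOOP;
        EXECUTE ''CREATE OR REPLACE VIEW bigtable AS '' || sql;
        RETURN;
    END;
    ' LANGUAGE plpgsql;

The function would simply be re-run after each partition is loaded or cleared down, as described above.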
{
"msg_contents": "Simon Riggs wrote:\n>>Jim C. Nasby\n>>On Mon, Sep 13, 2004 at 11:07:35PM +0100, Simon Riggs wrote:\n>>\n>>>PostgreSQL's functionality is in many ways similar to Oracle\n>>Partitioning.\n>>\n>>>Loading up your data in many similar tables, then creating a view like:\n>>>\n>>>CREATE VIEW BIGTABLE (idate, col1, col2, col3...) AS\n>>>SELECT 200409130800, col1, col2, col3... FROM table200409130800\n>>>UNION ALL\n>>>SELECT 200409131000, col1, col2, col3... FROM table200409131000\n>>>UNION ALL\n>>>SELECT 200409131200, col1, col2, col3... FROM table200409131200\n>>>...etc...\n>>>\n>>>will allow the PostgreSQL optimizer to eliminate partitions\n>>from the query\n>>>when you run queries which include a predicate on the\n>>partitioning_col, e.g.\n>>\n>>>select count(*) from bigtable where idate >= 200409131000\n> \n> The \"partitions\" are just tables, so no need for other management tools.\n> Oracle treats the partitions as sub-tables, so you need a range of commands\n> to add, swap etc the partitions of the main table.\n\nA few years ago I wrote a federated query engine (wrapped as an ODBC \ndriver) that had to handle thousands of contributors (partitions) to a \npseudotable / VIEWofUNIONs. Joins did require some special handling in \nthe optimizer, because of the huge number of crossproducts between \ndifferent tables. It was definitely worth the effort at the time, \nbecause you need different strategies for: joining a partition to \nanother partition on the same subserver; joining two large partitions on \ndifferent servers; and joining a large partition on one server to a \nsmall one on another.\n\nThe differences may not be so great for a solitary server;\nbut they're still there, because of disparity in subtable sizes. The \nsimplistic query plans tend to let you down, when you're dealing with \nhonking warehouses.\n\nI'm guessing that Oracle keeps per-subtable AND cross-all-subtables \nstatistics, rather than building the latter from scratch in the course \nof evaluating the query plan. That's the one limitation I see in \nemulating their partitioned tables with Views.\n",
"msg_date": "Tue, 14 Sep 2004 23:32:48 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "\nOn Sep 15, 2004, at 8:32 AM, Simon Riggs wrote:\n\n> The \"partitions\" are just tables, so no need for other management \n> tools.\n> Oracle treats the partitions as sub-tables, so you need a range of \n> commands\n> to add, swap etc the partitions of the main table.\n>\n> I guess a set of tools that emulates that functionality would be \n> generically\n> a good thing, if you can see a way to do that.\n>\n> Oracle partitions were restricted in only allowing a single load \n> statement\n> into a single partition at any time, whereas multiple COPY statements \n> can\n> access a single partition table on PostgreSQL.\n\nHow does this compare to DB2 partitioning?\n\nMichael Glaesemann\ngrzm myrealbox com\n\n",
"msg_date": "Wed, 15 Sep 2004 10:01:57 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "[email protected] (\"Simon Riggs\") writes:\n> Well, its fairly straightforward to auto-generate the UNION ALL view, and\n> important as well, since it needs to be re-specified each time a new\n> partition is loaded or an old one is cleared down. The main point is that\n> the constant placed in front of each table must in some way relate to the\n> data, to make it useful in querying. If it is just a unique constant, chosen\n> at random, it won't do much for partition elimination. So, that tends to\n> make the creation of the UNION ALL view an application/data specific thing.\n\nAh, that's probably a good thought.\n\nWhen we used big \"UNION ALL\" views, it was with logging tables, where\nthere wasn't really any meaningful distinction between partitions.\n\nSo you say that if the VIEW contains, within it, meaningful constraint\ninformation, that can get applied to chop out irrelevant bits? \n\nThat suggests a way of resurrecting the idea...\n\nMight we set up the view as:\n\ncreate view combination_of_logs as\n select * from table_1 where txn_date between 'this' and 'that' \n union all\n select * from table_2 where txn_date between 'this2' and 'that2' \n union all\n select * from table_3 where txn_date between 'this3' and 'that3' \n union all\n select * from table_4 where txn_date between 'this4' and 'that4' \n union all\n ... ad infinitum\n union all\n select * from table_n where txn_date > 'start_of_partition_n';\n\nand expect that to help, as long as the query that hooks up to this\nhas date constraints?\n\nWe'd have to regenerate the view with new fixed constants each time we\nset up the tables, but that sounds like it could work...\n-- \n\"cbbrowne\",\"@\",\"acm.org\"\nhttp://www3.sympatico.ca/cbbrowne/x.html\nBut what can you do with it? -- ubiquitous cry from Linux-user\npartner. -- Andy Pearce, <[email protected]>\n",
"msg_date": "Tue, 14 Sep 2004 22:34:53 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "[email protected] (\"Simon Riggs\") wrote:\n> The main point is that the constant placed in front of each table\n> must in some way relate to the data, to make it useful in\n> querying. If it is just a unique constant, chosen at random, it\n> won't do much for partition elimination.\n\nIt just struck me - this is much the same notion as that of \"cutting\nplanes\" used in Integer Programming.\n\nThe approach, there, is that you take a linear program, which can give\nfractional results, and throw on as many additional constraints as you\nneed in order to force the likelihood of particular variable falling\non integer values. The constraints may appear redundant, but\ndeclaring them allows the answers to be pushed in the right\ndirections.\n\nIn this particular case, the (arguably redundant) constraints let the\nquery optimizer have criteria for throwing out unnecessary tables.\nThanks for pointing this out; it may turn a fowl into a feature, when\nI can get some \"round tuits\" :-). That should allow me to turn an\n81-way evil join into something that's 4-way at the worst.\n\nCheers!\n-- \n\"cbbrowne\",\"@\",\"linuxfinances.info\"\nhttp://linuxfinances.info/info/nonrdbms.html\nImplementing systems is 95% boredom and 5% sheer terror.\n",
"msg_date": "Tue, 14 Sep 2004 23:33:49 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "Chris Browne wrote:\n> Might we set up the view as:\n> \n> create view combination_of_logs as\n> select * from table_1 where txn_date between 'this' and 'that' \n> union all\n> select * from table_2 where txn_date between 'this2' and 'that2' \n> union all\n> select * from table_3 where txn_date between 'this3' and 'that3' \n> union all\n> select * from table_4 where txn_date between 'this4' and 'that4' \n> union all\n> ... ad infinitum\n> union all\n> select * from table_n where txn_date > 'start_of_partition_n';\n> \n> and expect that to help, as long as the query that hooks up to this\n> has date constraints?\n> \n> We'd have to regenerate the view with new fixed constants each time we\n> set up the tables, but that sounds like it could work...\n\nThat's exactly what we're doing, but using inherited tables instead of a \nunion view. With inheritance, there is no need to rebuild the view each \ntime a table is added or removed. Basically, in our application, tables \nare partitioned by either month or week, depending on the type of data \ninvolved, and queries are normally date qualified.\n\nWe're not completely done with our data conversion (from a commercial \nRDBMSi), but so far the results have been excellent. Similar to what \nothers have said in this thread, the conversion involved restructuring \nthe data to better suit Postgres, and the application (data \nanalysis/mining vs. the source system which is operational). As a result \nwe've compressed a > 1TB database down to ~0.4TB, and seen at least one \ntypical query reduced from ~9 minutes down to ~40 seconds.\n\nJoe\n\n",
"msg_date": "Tue, 14 Sep 2004 21:30:24 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Hi Joe,\n\n> That's exactly what we're doing, but using inherited tables instead of a\n> union view. With inheritance, there is no need to rebuild the view each\n> time a table is added or removed. Basically, in our application, tables\n> are partitioned by either month or week, depending on the type of data\n> involved, and queries are normally date qualified.\n\nThat sounds interesting. I have to admit that I havn't touched iheritance in\npg at all yet so I find it hard to imagine how this would work. If you have\na chance, would you mind elaborating on it just a little?\n\nRegards\nIain\n\n",
"msg_date": "Wed, 15 Sep 2004 13:54:18 +0900",
"msg_from": "\"Iain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "On Tue, Sep 14, 2004 at 05:33:33PM -0500, Jim C. Nasby wrote:\n> On Mon, Sep 13, 2004 at 11:07:35PM +0100, Simon Riggs wrote:\n> > PostgreSQL's functionality is in many ways similar to Oracle Partitioning.\n> > \n> > Loading up your data in many similar tables, then creating a view like:\n> > \n> > CREATE VIEW BIGTABLE (idate, col1, col2, col3...) AS\n> > SELECT 200409130800, col1, col2, col3... FROM table200409130800\n> > UNION ALL\n> > SELECT 200409131000, col1, col2, col3... FROM table200409131000\n> > UNION ALL\n> > SELECT 200409131200, col1, col2, col3... FROM table200409131200\n> > ...etc...\n[...]\n> \n> Is there by any chance a set of functions to manage adding and removing\n> partitions? Certainly this can be done by hand, but having a set of\n> tools would make life much easier. I just looked but didn't see anything\n> on GBorg.\n\nI've done a similar thing with time-segregated data by inheriting\nall the partition tables from an (empty) parent table.\n\nAdding a new partition is just a \"create table tablefoo () inherits(bigtable)\"\nand removing a partition just \"drop table tablefoo\".\n\nCheers,\n Steve\n\n",
"msg_date": "Tue, 14 Sep 2004 22:10:04 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
},
{
"msg_contents": "In the last exciting episode, [email protected] (Joe Conway) wrote:\n> That's exactly what we're doing, but using inherited tables instead of\n> a union view. With inheritance, there is no need to rebuild the view\n> each time a table is added or removed. Basically, in our application,\n> tables are partitioned by either month or week, depending on the type\n> of data involved, and queries are normally date qualified.\n\nSounds interesting, and possibly usable.\n\nWhere does the constraint come in that'll allow most of the data to be\nexcluded?\n\nOr is this just that the entries are all part of \"bigtable\" so that\nthe self join is only 2-way?\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://linuxfinances.info/info/advocacy.html\n\"Be humble. A lot happened before you were born.\" - Life's Little\nInstruction Book\n",
"msg_date": "Wed, 15 Sep 2004 02:15:08 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Hi,\n\nOn Tue, 14 Sep 2004 22:10:04 -0700\nSteve Atkins <[email protected]> wrote:\n\n> > Is there by any chance a set of functions to manage adding and removing\n> > partitions? Certainly this can be done by hand, but having a set of\n> > tools would make life much easier. I just looked but didn't see anything\n> > on GBorg.\n> \n> I've done a similar thing with time-segregated data by inheriting\n> all the partition tables from an (empty) parent table.\n> \n> Adding a new partition is just a \"create table tablefoo () inherits(bigtable)\"\n> and removing a partition just \"drop table tablefoo\".\n\nBut you have to add table constraints restricting the time after adding\nthe partition?\n\nThanks,\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Wed, 15 Sep 2004 11:16:44 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "On Tue, 2004-09-14 at 21:30, Joe Conway wrote:\n> That's exactly what we're doing, but using inherited tables instead of a \n> union view. With inheritance, there is no need to rebuild the view each \n> time a table is added or removed. Basically, in our application, tables \n> are partitioned by either month or week, depending on the type of data \n> involved, and queries are normally date qualified.\n\n\n\nWe do something very similar, also using table inheritance and a lot of \ntriggers to automatically generate partitions and so forth. It works\npretty well, but it is a custom job every time I want to implement a\npartitioned table. You can save a lot on speed and space if you use it\nto break up large tables with composite indexes, since you can drop\ncolumns from the table depending on how you use it. A big part of\nperformance gain is that the resulting partitions end up being more\nwell-ordered than the non-partitioned version, since inserts are hashed\nto different partition according to the key and hash function. It is\nkind of like a cheap and dirty real-time CLUSTER operation. It also\nlets you truncate, lock, and generally be heavy-handed with subsets of\nthe table without affecting the rest of the table.\n\n\nI think generic table partitioning could pretty much be built on top of\nexisting capabilities with a small number of tweaks.\n\nThe main difference would be the ability to associate a partitioning\nhash function with a table (probably defined inline at CREATE TABLE\ntime). Something with syntax like:\n\n\t...PARTITION ON 'date_trunc(''hour'',ts)'...\n\nThere would also probably need to be some type of metadata table to\nassociate specific hashes with partition table names. Other than that,\nthe capabilities largely already exist, and managing the partition\nhashing and association is the ugly part when rolling your own. \nIntercepting DML when necessary and making it behave correctly is\nalready pretty easy, but could probably be streamlined.\n\n\nj. andrew rogers\n\n\n\n",
"msg_date": "15 Sep 2004 14:09:31 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Partitioning"
},
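A guess at the bookkeeping table implied above -- ordinary SQL, since nothing like the proposed PARTITION ON syntax exists; names and columns are purely illustrative:

    CREATE TABLE partition_map (
        parent_table  name NOT NULL,
        hash_value    text NOT NULL,   -- output of the partitioning hash function
        child_table   name NOT NULL,
        PRIMARY KEY (parent_table, hash_value)
    );

    -- e.g. a row like ('measurements', '2004-09-15 14:00:00', 'measurements_2004091514')

Intercepted DML would look up (or create) the child table for a row's hash value in this map before redirecting the statement.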
{
"msg_contents": "\n\"J. Andrew Rogers\" <[email protected]> writes:\n\n> We do something very similar, also using table inheritance\n\nI have a suspicion postgres's table inheritance will end up serving as a good\nbase for a partitioned table feature. Is it currently possible to query which\nsubtable a record came from though?\n\n> A big part of performance gain is that the resulting partitions end up being\n> more well-ordered than the non-partitioned version, since inserts are hashed\n> to different partition according to the key and hash function. It is kind of\n> like a cheap and dirty real-time CLUSTER operation. \n\nThere is also one particular performance gain that cannot be obtained via\nother means: A query that accesses a large percentage of a single partition\ncan use a sequential table scan of just that partition. This can be several\ntimes faster than using an index scan which is the only option if all the data\nis stored in a single large table.\n\nThis isn't an uncommon occurrence. Consider an accounting table partitioned by\naccounting period. Any aggregate reports for a single accounting period fall\ninto this category. If you define your partitions well that can often by most\nor all of your reports.\n\nOf course this applies equally if the query is accessing a small number of\npartitions. A further refinement is to leverage the partitioning in GROUP BY\nor ORDER BY clauses. If you're grouping by the partition key you can avoid a\nlarge sort without having to resort to an index scan or even a hash. And of\ncourse it's tempting to think about parallelization of such queries,\nespecially if the partitions are stored in separate table spaces on different\ndrives.\n\n> It also lets you truncate, lock, and generally be heavy-handed with subsets\n> of the table without affecting the rest of the table.\n\nThe biggest benefit by far is this management ability of being able to swap in\nand out partitions in a single atomic transaction that doesn't require\nextensive i/o.\n\nIn the application we used them on Oracle 8i they were an absolute life-saver.\nThey took a huge batch job that took several days to run in off-peak hours and\nturned it into a single quick cron job that could run at peak hours. We were\nable to cut the delay for our OLTP data showing up in the data warehouse from\nabout a week after extensive manual work to hours after a daily cron job.\n\n> \t...PARTITION ON 'date_trunc(''hour'',ts)'...\n> \n> There would also probably need to be some type of metadata table to\n> associate specific hashes with partition table names. Other than that,\n> the capabilities largely already exist, and managing the partition\n> hashing and association is the ugly part when rolling your own. \n> Intercepting DML when necessary and making it behave correctly is\n> already pretty easy, but could probably be streamlined.\n\nI would suggest you look at the Oracle syntax to handle this. They've already\ngone through several iterations of implementations. The original Oracle 7\nimplementation was much as people have been describing where you had to define\na big UNION ALL view and enable an option to have the optimizer look for such\nviews and attempt to eliminate partitions.\n\nIn Oracle 8i they introduced first class partitions with commands to define\nand manipulate them. You defined a high bound for each partition.\n\nIn Oracle 9 (or thereabouts, sometime after 8i at any rate) they introduced a\nnew form where you specify a specific constant value for each partition. 
This\nseems to be more akin to how you're thinking about things.\n\nThe optimizer has several plan nodes specific for partitioned tables. It can\nselect a single known partition based on information present in the query. It\ncan also detect cases where it can be sure the query will only access a single\npartition but won't be able to determine which until execution time based on\nplaceholder parameters for cases like \"WHERE partition_key = ?\". It can also\ndetect cases like \"WHERE partition_key between ? and ?\" and \"WHERE\npartition_key IN (?,?,?)\" Or join clauses on partitions. It can also do some\nmagic things with \"GROUP BY partition_key\" and \"ORDER BY partition_key\".\n\nThe work in the optimizer will be the most challenging part. In an ideal world\nif the optimizer is very solid it will be possible to bring some partitions to\nslow or even near-line storage media. As long as no live queries accidentally\naccess the wrong partitions the rest of the database need never know that the\ndata isn't readily available.\n\n\n-- \ngreg\n\n",
"msg_date": "15 Sep 2004 23:55:24 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
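On the question of telling which subtable a row came from: the tableoid system column already provides that, e.g. (table names borrowed from the inheritance examples elsewhere in the thread):

    SELECT tableoid::regclass AS source_table, *
    FROM foo
    WHERE f2 = '2004-02-15';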
{
"msg_contents": "Iain wrote:\n>>That's exactly what we're doing, but using inherited tables instead of a\n>>union view. With inheritance, there is no need to rebuild the view each\n>>time a table is added or removed. Basically, in our application, tables\n>>are partitioned by either month or week, depending on the type of data\n>>involved, and queries are normally date qualified.\n> \n> That sounds interesting. I have to admit that I havn't touched iheritance in\n> pg at all yet so I find it hard to imagine how this would work. If you have\n> a chance, would you mind elaborating on it just a little?\n\nOK, see below:\n=====================\n\ncreate table foo(f1 int, f2 date, f3 float8);\n\ncreate table foo_2004_01() inherits (foo);\ncreate table foo_2004_02() inherits (foo);\ncreate table foo_2004_03() inherits (foo);\n\ncreate index foo_2004_01_idx1 on foo_2004_01(f2);\ncreate index foo_2004_02_idx1 on foo_2004_02(f2);\ncreate index foo_2004_03_idx1 on foo_2004_03(f2);\n\ninsert into foo_2004_02 values(1,'2004-feb-15',3.14);\n\n\n -- needed just for illustration since these are toy tables\nset enable_seqscan to false;\nexplain analyze select * from foo where f2 = '2004-feb-15';\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=100000000.00..100000061.32 rows=16 width=16) (actual \ntime=0.224..0.310 rows=1 loops=1)\n -> Append (cost=100000000.00..100000061.32 rows=16 width=16) \n(actual time=0.214..0.294 rows=1 loops=1)\n -> Seq Scan on foo (cost=100000000.00..100000022.50 rows=5 \nwidth=16) (actual time=0.004..0.004 rows=0 loops=1)\n Filter: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_01_idx1 on foo_2004_01 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.101..0.101 rows=0 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_02_idx1 on foo_2004_02 foo \n(cost=0.00..4.68 rows=1 width=16) (actual time=0.095..0.101 rows=1 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_03_idx1 on foo_2004_03 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.066..0.066 rows=0 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n Total runtime: 0.582 ms\n(11 rows)\n\ncreate table foo_2004_04() inherits (foo);\ncreate index foo_2004_04_idx1 on foo_2004_04(f2);\n\nexplain analyze select * from foo where f2 = '2004-feb-15';\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=100000000.00..100000078.38 rows=21 width=16) (actual \ntime=0.052..0.176 rows=1 loops=1)\n -> Append (cost=100000000.00..100000078.38 rows=21 width=16) \n(actual time=0.041..0.159 rows=1 loops=1)\n -> Seq Scan on foo (cost=100000000.00..100000022.50 rows=5 \nwidth=16) (actual time=0.004..0.004 rows=0 loops=1)\n Filter: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_01_idx1 on foo_2004_01 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.012..0.012 rows=0 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_02_idx1 on foo_2004_02 foo \n(cost=0.00..4.68 rows=1 width=16) (actual time=0.016..0.022 rows=1 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_03_idx1 on foo_2004_03 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_04_idx1 on 
foo_2004_04 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.095..0.095 rows=0 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n Total runtime: 0.443 ms\n(13 rows)\n\nFor loading data, we COPY into foo, and have a trigger that redirects \nthe rows to the appropriate partition.\n\nNotice that the partitions which do not contain any data of interest are \nstill probed for data, but since they have none it is very quick. In a \nreal life example I got the following results just this afternoon:\n\n - aggregate row count = 471,849,665\n - total number inherited tables = 216\n (many are future dated and therefore contain no data)\n - select one month's worth of data for one piece of equipment by serial\n number (49,257 rows) = 526.015 ms\n\nNot too bad -- quick enough for my needs. BTW, this is using NFS mounted \nstorage (NetApp NAS).\n\nJoe\n",
"msg_date": "Wed, 15 Sep 2004 21:07:34 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
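The redirection trigger itself isn't shown above, so here is a hedged guess at its shape for the toy foo tables; a BEFORE INSERT row trigger fires for COPY as well as INSERT, and returning NULL keeps the row out of the parent table:

    CREATE FUNCTION foo_redirect() RETURNS trigger AS '
    BEGIN
        IF NEW.f2 >= ''2004-02-01'' AND NEW.f2 < ''2004-03-01'' THEN
            INSERT INTO foo_2004_02 VALUES (NEW.f1, NEW.f2, NEW.f3);
        ELSIF NEW.f2 >= ''2004-03-01'' AND NEW.f2 < ''2004-04-01'' THEN
            INSERT INTO foo_2004_03 VALUES (NEW.f1, NEW.f2, NEW.f3);
        ELSE
            RAISE EXCEPTION ''no partition for date %'', NEW.f2;
        END IF;
        RETURN NULL;   -- suppress the insert into the parent table
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER foo_redirect_trig BEFORE INSERT ON foo
        FOR EACH ROW EXECUTE PROCEDURE foo_redirect();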
{
"msg_contents": "On Wed, Sep 15, 2004 at 11:16:44AM +0200, Markus Schaber wrote:\n> Hi,\n> \n> On Tue, 14 Sep 2004 22:10:04 -0700\n> Steve Atkins <[email protected]> wrote:\n> \n> > > Is there by any chance a set of functions to manage adding and removing\n> > > partitions? Certainly this can be done by hand, but having a set of\n> > > tools would make life much easier. I just looked but didn't see anything\n> > > on GBorg.\n> > \n> > I've done a similar thing with time-segregated data by inheriting\n> > all the partition tables from an (empty) parent table.\n> > \n> > Adding a new partition is just a \"create table tablefoo () inherits(bigtable)\"\n> > and removing a partition just \"drop table tablefoo\".\n> \n> But you have to add table constraints restricting the time after adding\n> the partition?\n\nUhm... unless I'm confused that's not a meaningful thing in this context.\nThere's no rule that's putting insertions into an inherited table - the\ndecision of which inherited table to insert into is made at application\nlevel.\n\nAs I was using it to segregate data based on creation timestamp the\napplication just inserts into the 'newest' inherited table until it's\nfull, then creates a new inherited table.\n\nI've no doubt you could set up rules to scatter inserted data across\na number of tables, but that's not something that's been applicaable\nfor the problems I tend to work with, so I've not looked at it.\n\nCheers,\n Steve\n\n",
"msg_date": "Wed, 15 Sep 2004 21:17:03 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
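For reference, the rule-based scatter alluded to above could look like the following, reusing the foo tables from the inheritance example; note that rules, unlike triggers, are not applied to rows loaded with COPY:

    CREATE RULE foo_insert_feb AS ON INSERT TO foo
        WHERE NEW.f2 >= '2004-02-01' AND NEW.f2 < '2004-03-01'
        DO INSTEAD INSERT INTO foo_2004_02 VALUES (NEW.f1, NEW.f2, NEW.f3);

    CREATE RULE foo_insert_mar AS ON INSERT TO foo
        WHERE NEW.f2 >= '2004-03-01' AND NEW.f2 < '2004-04-01'
        DO INSTEAD INSERT INTO foo_2004_03 VALUES (NEW.f1, NEW.f2, NEW.f3);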
{
"msg_contents": ">> insert into X\n>> select a.keyA,\n>> a.keyB,\n>> a.colA,\n>> a.colB\n>> from Y a left join X b\n>> using (keyA, keyB)\n>> where b.keyA is NULL and\n>> b.keyB is NULL;\n>>\n>> With the appropriate indexes, this is pretty fast but I think a merge\n>> would be much faster.\n\nProblem is it's subject to race conditions if another process is \ninserting stuff at the same time...\n\nChris\n",
"msg_date": "Thu, 16 Sep 2004 12:29:04 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
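One blunt way to close that race, at the cost of write concurrency, is to take a lock that blocks other writers for the duration of the update/insert pair; a sketch using the same illustrative staging table as before (MERGE would eventually make this unnecessary):

    BEGIN;
    LOCK TABLE X IN SHARE ROW EXCLUSIVE MODE;  -- blocks concurrent writers, allows readers

    UPDATE X
       SET colA = s.colA, colB = s.colB
      FROM staging s
     WHERE X.keyA = s.keyA AND X.keyB = s.keyB;

    INSERT INTO X (keyA, keyB, colA, colB)
    SELECT s.keyA, s.keyB, s.colA, s.colB
      FROM staging s LEFT JOIN X USING (keyA, keyB)
     WHERE X.keyA IS NULL;

    COMMIT;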
{
"msg_contents": "Hi Joe,\n\nYou went to quite a bit of effort, thanks I have the picture now.\n\nUsing inheritence seems to be a useful refinement on top of the earlier\noutlined aproach using the UNION ALL view with appropriate predicates on the\ncondition used to do the partitioning. Having the individual partitions\nderived from a parent table makes a lot of sense.\n\nregards\nIain\n\n\n----- Original Message ----- \nFrom: \"Joe Conway\" <[email protected]>\nTo: \"Iain\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, September 16, 2004 1:07 PM\nSubject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --\n\n\n> Iain wrote:\n> >>That's exactly what we're doing, but using inherited tables instead of a\n> >>union view. With inheritance, there is no need to rebuild the view each\n> >>time a table is added or removed. Basically, in our application, tables\n> >>are partitioned by either month or week, depending on the type of data\n> >>involved, and queries are normally date qualified.\n> >\n> > That sounds interesting. I have to admit that I havn't touched\niheritance in\n> > pg at all yet so I find it hard to imagine how this would work. If you\nhave\n> > a chance, would you mind elaborating on it just a little?\n>\n> OK, see below:\n> =====================\n>\n> create table foo(f1 int, f2 date, f3 float8);\n>\n> create table foo_2004_01() inherits (foo);\n> create table foo_2004_02() inherits (foo);\n> create table foo_2004_03() inherits (foo);\n>\n> create index foo_2004_01_idx1 on foo_2004_01(f2);\n> create index foo_2004_02_idx1 on foo_2004_02(f2);\n> create index foo_2004_03_idx1 on foo_2004_03(f2);\n>\n> insert into foo_2004_02 values(1,'2004-feb-15',3.14);\n>\n>\n> -- needed just for illustration since these are toy tables\n> set enable_seqscan to false;\n> explain analyze select * from foo where f2 = '2004-feb-15';\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------\n----------------------------------------------------------------------\n> Result (cost=100000000.00..100000061.32 rows=16 width=16) (actual\n> time=0.224..0.310 rows=1 loops=1)\n> -> Append (cost=100000000.00..100000061.32 rows=16 width=16)\n> (actual time=0.214..0.294 rows=1 loops=1)\n> -> Seq Scan on foo (cost=100000000.00..100000022.50 rows=5\n> width=16) (actual time=0.004..0.004 rows=0 loops=1)\n> Filter: (f2 = '2004-02-15'::date)\n> -> Index Scan using foo_2004_01_idx1 on foo_2004_01 foo\n> (cost=0.00..17.07 rows=5 width=16) (actual time=0.101..0.101 rows=0\nloops=1)\n> Index Cond: (f2 = '2004-02-15'::date)\n> -> Index Scan using foo_2004_02_idx1 on foo_2004_02 foo\n> (cost=0.00..4.68 rows=1 width=16) (actual time=0.095..0.101 rows=1\nloops=1)\n> Index Cond: (f2 = '2004-02-15'::date)\n> -> Index Scan using foo_2004_03_idx1 on foo_2004_03 foo\n> (cost=0.00..17.07 rows=5 width=16) (actual time=0.066..0.066 rows=0\nloops=1)\n> Index Cond: (f2 = '2004-02-15'::date)\n> Total runtime: 0.582 ms\n> (11 rows)\n>\n> create table foo_2004_04() inherits (foo);\n> create index foo_2004_04_idx1 on foo_2004_04(f2);\n>\n> explain analyze select * from foo where f2 = '2004-feb-15';\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------\n----------------------------------------------------------------------\n> Result (cost=100000000.00..100000078.38 rows=21 width=16) (actual\n> time=0.052..0.176 rows=1 loops=1)\n> -> Append (cost=100000000.00..100000078.38 rows=21 width=16)\n> (actual time=0.041..0.159 rows=1 loops=1)\n> 
-> Seq Scan on foo (cost=100000000.00..100000022.50 rows=5\n> width=16) (actual time=0.004..0.004 rows=0 loops=1)\n> Filter: (f2 = '2004-02-15'::date)\n> -> Index Scan using foo_2004_01_idx1 on foo_2004_01 foo\n> (cost=0.00..17.07 rows=5 width=16) (actual time=0.012..0.012 rows=0\nloops=1)\n> Index Cond: (f2 = '2004-02-15'::date)\n> -> Index Scan using foo_2004_02_idx1 on foo_2004_02 foo\n> (cost=0.00..4.68 rows=1 width=16) (actual time=0.016..0.022 rows=1\nloops=1)\n> Index Cond: (f2 = '2004-02-15'::date)\n> -> Index Scan using foo_2004_03_idx1 on foo_2004_03 foo\n> (cost=0.00..17.07 rows=5 width=16) (actual time=0.008..0.008 rows=0\nloops=1)\n> Index Cond: (f2 = '2004-02-15'::date)\n> -> Index Scan using foo_2004_04_idx1 on foo_2004_04 foo\n> (cost=0.00..17.07 rows=5 width=16) (actual time=0.095..0.095 rows=0\nloops=1)\n> Index Cond: (f2 = '2004-02-15'::date)\n> Total runtime: 0.443 ms\n> (13 rows)\n>\n> For loading data, we COPY into foo, and have a trigger that redirects\n> the rows to the appropriate partition.\n>\n> Notice that the partitions which do not contain any data of interest are\n> still probed for data, but since they have none it is very quick. In a\n> real life example I got the following results just this afternoon:\n>\n> - aggregate row count = 471,849,665\n> - total number inherited tables = 216\n> (many are future dated and therefore contain no data)\n> - select one month's worth of data for one piece of equipment by serial\n> number (49,257 rows) = 526.015 ms\n>\n> Not too bad -- quick enough for my needs. BTW, this is using NFS mounted\n> storage (NetApp NAS).\n>\n> Joe\n\n",
"msg_date": "Thu, 16 Sep 2004 14:08:34 +0900",
"msg_from": "\"Iain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Christopher Browne wrote:\n> In the last exciting episode, [email protected] (Joe Conway) wrote:\n>>That's exactly what we're doing, but using inherited tables instead of\n>>a union view. With inheritance, there is no need to rebuild the view\n>>each time a table is added or removed. Basically, in our application,\n>>tables are partitioned by either month or week, depending on the type\n>>of data involved, and queries are normally date qualified.\n\n> Where does the constraint come in that'll allow most of the data to be\n> excluded?\n\nNot sure I follow this.\n\n> Or is this just that the entries are all part of \"bigtable\" so that\n> the self join is only 2-way?\n\nWe don't have a need for self-joins in our application. We do use a \ncrosstab function to materialize some transposed views of the data, \nhowever. That allows us to avoid self-joins in the cases where we might \notherwise need them.\n\nJoe\n",
"msg_date": "Wed, 15 Sep 2004 22:17:45 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Joe, Christopher,\n\nJoe's example wasn't excluding partions, as he didn't use a predicated UNION\nALL view to select from. His queries use an indexed column that allow the\nvarious partitions to be probed at low cost, and he was satisfied wth that.\n\nMy point in my previous post was that you could still do all that that if\nyou wanted to, by building the predicated view with UNION ALL of each of the\nchild tables.\n\nregards\nIain\n----- Original Message ----- \nFrom: \"Joe Conway\" <[email protected]>\nTo: \"Christopher Browne\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, September 16, 2004 2:17 PM\nSubject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --\n\n\n> Christopher Browne wrote:\n> > In the last exciting episode, [email protected] (Joe Conway) wrote:\n> >>That's exactly what we're doing, but using inherited tables instead of\n> >>a union view. With inheritance, there is no need to rebuild the view\n> >>each time a table is added or removed. Basically, in our application,\n> >>tables are partitioned by either month or week, depending on the type\n> >>of data involved, and queries are normally date qualified.\n>\n> > Where does the constraint come in that'll allow most of the data to be\n> > excluded?\n>\n> Not sure I follow this.\n>\n> > Or is this just that the entries are all part of \"bigtable\" so that\n> > the self join is only 2-way?\n>\n> We don't have a need for self-joins in our application. We do use a\n> crosstab function to materialize some transposed views of the data,\n> however. That allows us to avoid self-joins in the cases where we might\n> otherwise need them.\n>\n> Joe\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Thu, 16 Sep 2004 18:08:37 +0900",
"msg_from": "\"Iain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Hi, Mischa,\n\nOn Tue, 14 Sep 2004 22:58:20 GMT\nMischa Sandberg <[email protected]> wrote:\n\n> Googling 'upsert' (an Oraclism, I believe) will get you hits on Oracle \n> and DB2's implementation of MERGE, which does what AMOUNTS to what is\n> described below (one mass UPDATE...FROM, one mass INSERT...WHERE NOT \n> EXISTS).\n> \n> No, you shouldn't iterate row-by-row through the temp table.\n> Whenever possible, try to do updates in one single (mass) operation.\n> Doing it that way gives the optimizer the best chance at amortizing\n> fixed costs, and batching operations.\n\nBut when every updated row has a different value for the column(s) to be\nupdated, then I still have to use one update statement per row, which I\nexpect to be faster when done via a stored procedure than having the\nwhole client-server roundtrip including parsing every time. Or did I\nmiss some nice SQL statement?\n\n\nHave a nice day,\nMarkus\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Thu, 16 Sep 2004 12:39:04 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
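The two-statement bulk upsert Mischa describes might look roughly like the following; the target and staging table names are hypothetical, and because the new values are read from the staging table rather than supplied as literals, a single UPDATE covers rows that each need a different value:

create temp table staging (id integer primary key, val text);
-- ... bulk load staging with COPY ...

update target
   set val = s.val
  from staging s
 where target.id = s.id;

insert into target (id, val)
select s.id, s.val
  from staging s
 where not exists (select 1 from target t where t.id = s.id);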
{
"msg_contents": "On 15 Sep 2004 23:55:24 -0400, Greg Stark <[email protected]> wrote:\n> \n> \"J. Andrew Rogers\" <[email protected]> writes:\n> \n> > We do something very similar, also using table inheritance\n> \n> I have a suspicion postgres's table inheritance will end up serving as a good\n> base for a partitioned table feature. Is it currently possible to query which\n> subtable a record came from though?\n\n From the docs on http://www.postgresql.org/docs/7.4/static/ddl-inherit.html :\n\n... In some cases you may wish to know which table a particular row\noriginated from. There is a system column called TABLEOID in each\ntable which can tell you the originating table:\n\nSELECT c.tableoid, c.name, c.altitude\nFROM cities c\nWHERE c.altitude > 500;\n\nwhich returns:\n\n tableoid | name | altitude\n----------+-----------+----------\n 139793 | Las Vegas | 2174\n 139793 | Mariposa | 1953\n 139798 | Madison | 845\n\n(If you try to reproduce this example, you will probably get different\nnumeric OIDs.) By doing a join with pg_class you can see the actual\ntable names:\n\nSELECT p.relname, c.name, c.altitude\nFROM cities c, pg_class p\nWHERE c.altitude > 500 and c.tableoid = p.oid;\n\nwhich returns:\n\n relname | name | altitude\n----------+-----------+----------\n cities | Las Vegas | 2174\n cities | Mariposa | 1953\n capitals | Madison | 845\n\n--miker\n",
"msg_date": "Thu, 16 Sep 2004 06:58:32 -0400",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "Hi, Steve,\n\nOn Wed, 15 Sep 2004 21:17:03 -0700\nSteve Atkins <[email protected]> wrote:\n\n> On Wed, Sep 15, 2004 at 11:16:44AM +0200, Markus Schaber wrote:\n> > But you have to add table constraints restricting the time after adding\n> > the partition?\n> \n> Uhm... unless I'm confused that's not a meaningful thing in this context.\n> There's no rule that's putting insertions into an inherited table - the\n> decision of which inherited table to insert into is made at application\n> level.\n\nI thought of the query optimizer. I thought it could use the table\nconstraints to drop tables when creating the union. But now I think that\nan index gives enough win, because the tree-based indices are rather\nquick at returning zero rows when the queried value is out of the\nindexed range.\n\nGreetings,\nMarkus\n\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Thu, 16 Sep 2004 15:38:21 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Iain wrote:\n> Joe's example wasn't excluding partions, as he didn't use a predicated UNION\n> ALL view to select from. His queries use an indexed column that allow the\n> various partitions to be probed at low cost, and he was satisfied wth that.\n\nRight.\n\n> My point in my previous post was that you could still do all that that if\n> you wanted to, by building the predicated view with UNION ALL of each of the\n> child tables.\n\nRight. It doesn't look that much different:\n\ncreate or replace view foo_vw as\nselect * from foo_2004_01 where f2 >= '2004-jan-01' and f2 <= '2004-jan-31'\nunion all\nselect * from foo_2004_02 where f2 >= '2004-feb-01' and f2 <= '2004-feb-29'\nunion all\nselect * from foo_2004_03 where f2 >= '2004-mar-01' and f2 <= '2004-mar-31'\n;\n\n -- needed just for illustration since these are toy tables\nset enable_seqscan to false;\n\nexplain analyze select * from foo_vw where f2 = '2004-feb-15';\n QUERY PLAN\n----------------------------------------------------------------------------------\n Subquery Scan foo_vw (cost=0.00..14.54 rows=3 width=16) (actual \ntime=0.022..0.027 rows=1 loops=1)\n -> Append (cost=0.00..14.51 rows=3 width=16) (actual \ntime=0.019..0.022 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..4.84 rows=1 \nwidth=16) (actual time=0.004..0.004 rows=0 loops=1)\n -> Index Scan using foo_2004_01_idx2 on foo_2004_01 \n(cost=0.00..4.83 rows=1 width=16) (actual time=0.003..0.003 rows=0 loops=1)\n Index Cond: ((f2 >= '2004-01-01'::date) AND (f2 <= \n'2004-01-31'::date) AND (f2 = '2004-02-15'::date))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..4.84 rows=1 \nwidth=16) (actual time=0.013..0.015 rows=1 loops=1)\n -> Index Scan using foo_2004_02_idx2 on foo_2004_02 \n(cost=0.00..4.83 rows=1 width=16) (actual time=0.009..0.010 rows=1 loops=1)\n Index Cond: ((f2 >= '2004-02-01'::date) AND (f2 <= \n'2004-02-29'::date) AND (f2 = '2004-02-15'::date))\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..4.84 rows=1 \nwidth=16) (actual time=0.001..0.001 rows=0 loops=1)\n -> Index Scan using foo_2004_03_idx2 on foo_2004_03 \n(cost=0.00..4.83 rows=1 width=16) (actual time=0.001..0.001 rows=0 loops=1)\n Index Cond: ((f2 >= '2004-03-01'::date) AND (f2 <= \n'2004-03-31'::date) AND (f2 = '2004-02-15'::date))\n Total runtime: 0.188 ms\n(12 rows)\n\nregression=# explain analyze select * from foo where f2 = '2004-feb-15';\n QUERY PLAN\n----------------------------------------------------------------------------------\n Result (cost=100000000.00..100000073.70 rows=20 width=16) (actual \ntime=0.059..0.091 rows=1 loops=1)\n -> Append (cost=100000000.00..100000073.70 rows=20 width=16) \n(actual time=0.055..0.086 rows=1 loops=1)\n -> Seq Scan on foo (cost=100000000.00..100000022.50 rows=5 \nwidth=16) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_01_idx2 on foo_2004_01 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.045..0.045 rows=0 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_02_idx2 on foo_2004_02 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.008..0.009 rows=1 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n -> Index Scan using foo_2004_03_idx2 on foo_2004_03 foo \n(cost=0.00..17.07 rows=5 width=16) (actual time=0.029..0.029 rows=0 loops=1)\n Index Cond: (f2 = '2004-02-15'::date)\n Total runtime: 0.191 ms\n(11 rows)\n\n\nThe main difference being that the view needs to be recreated every time \na table is added or dropped, whereas with the 
inherited tables method \nthat isn't needed.\n\nJoe\n",
"msg_date": "Thu, 16 Sep 2004 08:36:31 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "On Wed, Sep 15, 2004 at 02:09:31PM -0700, J. Andrew Rogers wrote:\n> On Tue, 2004-09-14 at 21:30, Joe Conway wrote:\n> > That's exactly what we're doing, but using inherited tables instead of a \n> > union view. With inheritance, there is no need to rebuild the view each \n> > time a table is added or removed. Basically, in our application, tables \n> > are partitioned by either month or week, depending on the type of data \n> > involved, and queries are normally date qualified.\n> \n> \n> \n> We do something very similar, also using table inheritance and a lot of \n> triggers to automatically generate partitions and so forth. It works\n> pretty well, but it is a custom job every time I want to implement a\n> partitioned table. You can save a lot on speed and space if you use it\n> to break up large tables with composite indexes, since you can drop\n> columns from the table depending on how you use it. A big part of\n\nForgive my ignorance, but I didn't think you could have a table that\ninherits from a parent not have all the columns. Or is that not what you\nmean by 'you can drop columns from the table...'?\n\nThis is one advantage I see to a big UNION ALL view; if you're doing\npartitioning based on unique values, you don't actually have to store\nthat value in the partition tables. For example,\nhttp://stats.distributed.net has a table that details how much work each\nparticipant did each day for each project. Storing project_id in that\ntable is an extra 4 bytes... doesn't sound like much until you consider\nthat the table has over 130M rows right now. So it would be nice to have\nan easy way to partition the table based on unique project_id's and not\nwaste space in the partition tables on a field that will be the same for\nevery row (in each partition).\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 16 Sep 2004 15:39:51 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "On Thu, 2004-09-16 at 13:39, Jim C. Nasby wrote:\n> Forgive my ignorance, but I didn't think you could have a table that\n> inherits from a parent not have all the columns. Or is that not what you\n> mean by 'you can drop columns from the table...'?\n> \n> This is one advantage I see to a big UNION ALL view; if you're doing\n> partitioning based on unique values, you don't actually have to store\n> that value in the partition tables. For example,\n> http://stats.distributed.net has a table that details how much work each\n> participant did each day for each project. Storing project_id in that\n> table is an extra 4 bytes... doesn't sound like much until you consider\n> that the table has over 130M rows right now. So it would be nice to have\n> an easy way to partition the table based on unique project_id's and not\n> waste space in the partition tables on a field that will be the same for\n> every row (in each partition).\n\n\nYeah, it is harder to do this automagically, though in theory it should\nbe possible. Since we have to roll our own partitioning anyway, we've\nbroken up composite primary keys so that one of the key columns hashes\nto a partition, using the key itself in the partition table name rather\nthan replicating that value several million times. Ugly as sin, but you\ncan make it work in some cases.\n\nI do just enough work for our queries to behave correctly, and a lot of\ntimes I actually hide the base table and its descendents underneath a\nsort of metadata table that is grafted to the base tables by a lot of\nrules/triggers/functions/etc, and then do queries against that or a view\nof that. As I said, ugly as sin and probably not universal, but you need\na lot of abstraction to make it look halfway normal. I'm going to think\nabout this some more and see if I can't construct a generic solution.\n\n\ncheers,\n\nj. andrew rogers\n\n\n",
"msg_date": "16 Sep 2004 14:59:12 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "> Iain\n> Joe's example wasn't excluding partions, as he didn't use a\n> predicated UNION\n> ALL view to select from. His queries use an indexed column that allow the\n> various partitions to be probed at low cost, and he was satisfied\n> wth that.\n\nAgreed - very very interesting design though.\n\n> My point in my previous post was that you could still do all that that if\n> you wanted to, by building the predicated view with UNION ALL of\n> each of the\n> child tables.\n>\n\nAFAICS of all the designs proposed there is still only one design *using\ncurrent PostgreSQL* that allows partitions to be excluded from queries as a\nway of speeding up queries against very large tables: UNION ALL with\nappended constants.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Fri, 17 Sep 2004 08:39:10 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
}
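One way to read "UNION ALL with appended constants" is a view in which each branch selects a literal identifying its partition, so a query qualified on that literal hands the planner a contradiction it can use to ignore the other branches; this is only a sketch against Joe's toy tables, and it is worth checking with EXPLAIN on your own release that the non-matching branches really do collapse to trivial cost:

create view foo_parts as
select 200401 as part, * from foo_2004_01
union all
select 200402 as part, * from foo_2004_02
union all
select 200403 as part, * from foo_2004_03;

select * from foo_parts
 where part = 200402
   and f2 = '2004-02-15';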
] |
[
{
"msg_contents": "Hi, I have downloaded the new postgresql (version 8.0 beta2) and I was\nwondering what performance features I can take advantage of before I start\nto dump my 3/4 terrabyte database into the new format. More specifically\nI am interested in tablespaces--what exactly is this feature, some sort of\norganizational addition (?) and howcan I best take advantage of this....? \nAnything else?\n\nThanks.\n",
"msg_date": "Sun, 12 Sep 2004 20:35:06 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "tblspaces integrated in new postgresql (version 8.0)"
},
{
"msg_contents": "[email protected] writes:\n> I am interested in tablespaces--what exactly is this feature, some sort of\n> organizational addition (?) and howcan I best take advantage of this....? \n\nSee\nhttp://developer.postgresql.org/docs/postgres/manage-ag-tablespaces.html\n\nIt doesn't talk a lot yet about *why* you'd want to use this ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Sep 2004 00:52:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tblspaces integrated in new postgresql (version 8.0) "
}
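For what it's worth, basic 8.0 tablespace usage looks like the sketch below; the directory and object names are made up, and the directory has to exist already and be owned by the postgres user:

create tablespace fastdisk location '/mnt/raid10/pgdata';

create table archive_2004 (id integer, payload text) tablespace fastdisk;

create index archive_2004_id_idx on archive_2004 (id) tablespace fastdisk;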
] |
[
{
"msg_contents": "--- Herv���<inputPiedvache <[email protected]> wrote:\n\n> George,\n> \n> I have well read many pages about this subject ... but I have not found any \n> thing for the moment to really help me ...\n> What can I do to optimize my PostgreSQL configuration for a special use of \n> Tsearch2 ...\n> I'm a little dispointed looking the Postgresql Russian search engine using \n> Tsearch2 is really quick ... why I can't haev the same result with a \n> bi-pentium III 933 and 1Gb of RAM with the text indexation of 1 500 000 \n> records ?\n> \n> Regards,\n> -- \n> Herv���<inputPiedvache\n> \n> Elma Ing���<inputierie Informatique\n> 6 rue du Faubourg Saint-Honor���<input> F-75008 - Paris - France\n> Pho. 33-144949901\n> Fax. 33-144949902\n> \n\nTsearch does not scale indefinitely. It was designed for fast online updates and to be integrated\ninto PostgreSQL. My understanding is that it uses a bloom filter together with bit string\nsignatures. Typically, full text searches use inverted indexes, scale better, but are slower to\nupdate.\n\nMy understanding is that tsearch has a practical limit of 100,000 distinct word stems or lexemes. \nNote that word stems are not words. Word stems are what are actually stored in a tsvector after\nparsing and dictionary processing.\n\nThe key to making tsearch fast is to keep the number of word stems low. You decrease the number\nof word stems by using stop words, various dictionaries, synonyms, and preprocessing text before\nit gets to tsearch. You can find what word stems are stored in a tsvector column by using the\nstat function. For examples of how to use the stat function, see:\n\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_Notes\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/stat.html\n\nNote that the stat function will take a long time to run on large tables.\n\nPerformance tuning must be done on a case by case basis. It can take some time to try different\nthings and see the change in performance. Each time you try something new, use the stat function\nto see how the number of word stems has changed.\n\nThe largest project I used tsearch2 on contained 900,000 records. Without performance tuning,\nthere were 275,000 distinct word stems. After performance tuning, I got it down to 14,000\ndistinct word stems. \n\nBy using the stat function, I noticed some obvious stop words that were very frequent that nobody\nwould ever search for. For how to use stop words, see:\n\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/stop_words.html\n\nAlso I noticed some strange patterns by looking through all of the word stems. \n\nIn one case, strings of 3 to 7 words were joined together with hyphens to indicate category\nnesting. Tsearch would store these long hyphenated words intact and also store the stem of each\nindividual word. I made a judgment call that no one would ever search for the long hyphenated\nwords, so I preprocessed the text to remove the hyphens. \n\nI also noticed that many of the word stems were alphanumeric IDs that were designed to be unique. \nThere were many of these IDs in the tsvector column although each ID would occur only once or\ntwice. I again preprocessed the text to remove these IDs, but created a btree index on a varchar\ncolumn representing the IDs. My search form allows users to either search full text using\ntsearch2 or search IDs using 'LIKE' queries which use a btree index. 
For 'LIKE' queries, it was\nanother matter to get postgres to use the btree index and not use a sequential scan. For this,\nsee:\n\nhttp://www.postgresql.org/docs/7.4/static/indexes-opclass.html\n\nLast, I noticed that most users wanted to restrict the full text search to a subset determined by\nanother column in the table. As a result, I created a multicolumn gist index on an integer column\nand a tsvector column. For how to setup a multicolumn gist index, see:\n\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/multi_column_index.html\n\nThere are no easy answers. Like I said, performance tuning must be done on a case by case basis.\n\nHope this helps,\n\nGeorge Essig\n",
"msg_date": "Sun, 12 Sep 2004 20:03:57 -0700 (PDT)",
"msg_from": "George Essig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TSearch2 and optimisation ..."
}
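The stat() checks George refers to look roughly like this; the table and tsvector column names are placeholders, and as he warns, the query can run for a long time on a large table:

select word, ndoc, nentry
  from stat('select idxfti from my_documents')
 order by ndoc desc, nentry desc
 limit 50;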
] |
[
{
"msg_contents": "Hi i have four sample tables ename, esal, edoj and esum\nAll of them have 1000000 records. Im running the following\nquery : select ename.eid, name, sal, doj, summary from\nename,esal,edoj,esum where ename.eid=esal.eid and ename.eid=edoj.eid\nand ename.eid=esum.eid. Its a join of all four tables which returns\nall 1 million records. The eid field in ename is a Primary Key and the\neid in all other tables are Foreign Keys. I have created an index for\nall Foreign Keys. This query takes around 16 MINUTES to complete. Can\nthis time be reduced?\nThanks\nVijay\n\n----------------------------------------------------------------\n\nEXPLAIN OUTPUT\n\nQUERY PLAN \nMerge Join (cost=647497.97..163152572.97 rows=25000025000000 width=80) \n Merge Cond: (\"outer\".eid = \"inner\".eid) \n -> Merge Join (cost=356059.69..75361059.69 rows=5000000000 width=44) \n Merge Cond: (\"outer\".eid = \"inner\".eid) \n -> Sort (cost=150295.84..152795.84 rows=1000000 width=8) \n Sort Key: edoj.eid \n -> Seq Scan on edoj (cost=0.00..15568.00 rows=1000000 width=8) \n -> Sort (cost=205763.84..208263.84 rows=1000000 width=36) \n Sort Key: esum.eid \n -> Seq Scan on esum (cost=0.00..31976.00 rows=1000000 width=36) \n -> Sort (cost=291438.28..293938.29 rows=1000002 width=48) \n Sort Key: ename.eid \n -> Hash Join (cost=26683.01..107880.23 rows=1000002 width=48) \n Hash Cond: (\"outer\".eid = \"inner\".eid) \n -> Seq Scan on esal (cost=0.00..21613.01 rows=1000001 width=12) \n -> Hash (cost=16370.01..16370.01 rows=1000001 width=36) \n -> Seq Scan on ename (cost=0.00..16370.01\nrows=1000001 width=36)\n\n17 row(s)\n\nTotal runtime: 181.021 ms\n\n----------------------------------------------------------------\n\nEXPLAIN ANALYZE OUTPUT\n\nQUERY PLAN \n\nMerge Join (cost=647497.97..163152572.97 rows=25000025000000\nwidth=80) (actual time=505418.965..584981.013 rows=1000000 loops=1)\n Merge Cond: (\"outer\".eid = \"inner\".eid) \n -> Merge Join (cost=356059.69..75361059.69 rows=5000000000\nwidth=44) (actual time=110394.376..138177.569 rows=1000000 loops=1)\n Merge Cond: (\"outer\".eid = \"inner\".eid) \n -> Sort (cost=150295.84..152795.84 rows=1000000 width=8)\n(actual time=27587.622..31077.077 rows=1000000 loops=1)\n Sort Key: edoj.eid \n -> Seq Scan on edoj (cost=0.00..15568.00 rows=1000000\nwidth=8) (actual time=144.000..10445.145 rows=1000000 loops=1)\n -> Sort (cost=205763.84..208263.84 rows=1000000 width=36)\n(actual time=82806.646..90322.943 rows=1000000 loops=1)\n Sort Key: esum.eid \n -> Seq Scan on esum (cost=0.00..31976.00 rows=1000000\nwidth=36) (actual time=20.312..29030.247 rows=1000000 loops=1)\n -> Sort (cost=291438.28..293938.29 rows=1000002 width=48) (actual\ntime=395024.482..426870.491 rows=1000001 loops=1)\n Sort Key: ename.eid \n -> Hash Join (cost=26683.01..107880.23 rows=1000002\nwidth=48) (actual time=29234.472..198064.105 rows=1000001 loops=1)\n Hash Cond: (\"outer\".eid = \"inner\".eid) \n -> Seq Scan on esal (cost=0.00..21613.01 rows=1000001\nwidth=12) (actual time=32.257..23999.163 rows=1000001 loops=1)\n -> Hash (cost=16370.01..16370.01 rows=1000001\nwidth=36) (actual time=19362.095..19362.095 rows=0 loops=1)\n -> Seq Scan on ename (cost=0.00..16370.01\nrows=1000001 width=36) (actual time=26.744..13878.410 rows=1000001\nloops=1)\n\nTotal runtime: 586226.831 ms \n\n18 row(s)\n\nTotal runtime: 586,435.978 ms\n\n----------------------------------------------------------------\n",
"msg_date": "Mon, 13 Sep 2004 10:00:41 +0530",
"msg_from": "Vijay Moses <[email protected]>",
"msg_from_op": true,
"msg_subject": "Four table join with million records - performance improvement?"
},
{
"msg_contents": "Vijay Moses <[email protected]> writes:\n> Hi i have four sample tables ename, esal, edoj and esum\n> All of them have 1000000 records. Im running the following\n> query : select ename.eid, name, sal, doj, summary from\n> ename,esal,edoj,esum where ename.eid=esal.eid and ename.eid=edoj.eid\n> and ename.eid=esum.eid. Its a join of all four tables which returns\n> all 1 million records. The eid field in ename is a Primary Key and the\n> eid in all other tables are Foreign Keys. I have created an index for\n> all Foreign Keys. This query takes around 16 MINUTES to complete. Can\n> this time be reduced?\n\nThe indexes will be completely useless for that sort of query; the\nreasonable choices are sort/merge or hashjoin. For either one, your\nbest way to speed it up is to increase sort_mem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Sep 2004 00:57:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Four table join with million records - performance improvement? "
}
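As a hedged illustration of Tom's suggestion, sort_mem (per sort operation, in kilobytes on 7.4; renamed work_mem in 8.0) can be raised just for the session running the big join, for example:

set sort_mem = 262144;  -- 256 MB, assuming the machine can spare it

select ename.eid, name, sal, doj, summary
  from ename, esal, edoj, esum
 where ename.eid = esal.eid
   and ename.eid = edoj.eid
   and ename.eid = esum.eid;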
] |
[
{
"msg_contents": "Hi, I'd like to help with the topic in the Subject: line. It seems to be a \nTODO item. I've reviewed some threads discussing the matter, so I hope I've \nacquired enough history concerning it. I've taken an initial swipe at \nfiguring out how to optimize sync'ing methods. It's based largely on \nrecommendations I've read on previous threads about fsync/O_SYNC and so on. \nAfter reviewing, if anybody has recommendations on how to proceed then I'd \nlove to hear them. \n \nAttached is a little program that basically does a bunch of sequential writes \nto a file. All of the sync'ing methods supported by PostgreSQL WAL can be \nused. Results are printed in microseconds. Size and quanity of writes are \nconfigurable. The documentation is in the code (how to configure, build, run, \netc.). I realize that this program doesn't reflect all of the possible \nactivities of a production database system, but I hope it's a step in the \nright direction for this task. I've used it to see differences in behavior \nbetween the various sync'ing methods on various platforms. \n \nHere's what I've found running the benchmark on some systems to which \nI have access. The differences in behavior between platforms is quite vast. \n \nSummary first... \n \n<halfjoke> \nPostgreSQL should be run on an old Apple MacIntosh attached to \nits own Hitachi disk array with 2GB cache or so. Use any sync method \nexcept for fsync(). \n</halfjoke> \n \nAnyway, there is *a lot* of variance in file synching behavior across \ndifferent hardware and O/S platforms. It's probably not safe \nto conclude much. That said, here are some findings so far based on \ntests I've run: \n \n1. under no circumstances do fsync() or fdatasync() seem to perform \nbetter than opening files with O_SYNC or O_DSYNC \n2. where there are differences, opening files with O_SYNC or O_DSYNC \ntends to be quite faster. \n3. fsync() seems to be the slowest where there are differences. And \nO_DSYNC seems to be the fastest where results differ. \n4. the safest thing to assert at this point is that \nSolaris systems ought to use the O_DSYNC method for WAL. \n \n----------- \n \nTest system(s) \n \nAthlon Linux: \nAMD Athlon XP2000, 512MB RAM, single (54 or 7200?) RPM 20GB IDE disk, \nreiserfs filesystem (3 something I think) \nSuSE Linux kernel 2.4.21-99 \n \nMac Linux: \nI don't know the specific model. 400MHz G3, 512MB, single IDE disk, \next2 filesystem \nDebian GNU/Linux 2.4.16-powerpc \n \nHP Intel Linux: \nProlient HPDL380G3, 2 x 3GHz Xeon, 2GB RAM, SmartArray 5i 64MB cache, \n2 x 15,000RPM 36GB U320 SCSI drives mirrored. I'm not sure if \nwrites are cached or not. There's no battery backup. \next3 filesystem. \nRedhat Enterprise Linux 3.0 kernel based on 2.4.21 \n \nDell Intel OpenBSD: \nPoweredge ?, single 1GHz PIII, 128MB RAM, single 7200RPM 80GB IDE disk, \nffs filesystem \nOpenBSD 3.2 GENERIC kernel \n \nSUN Ultra2: \nUltra2, 2 x 296MHz UltraSPARC II, 2GB RAM, 2 x 10,000RPM 18GB U160 \nSCSI drives mirrored with Solstice DiskSuite. UFS filesystem. \nSolaris 8. \n \nSUN E4500 + HDS Thunder 9570v \nE4500, 8 x 400MHz UltraSPARC II, 3GB RAM, \nHDS Thunder 9570v, 2GB mirrored battery-backed cache, RAID5 with a \nbunch of 146GB 10,000RPM FC drives. LUN is on single 2GB FC fabric \nconnection. \nVeritas filesystem (VxFS) \nSolaris 8. \n \nTest methodology: \n \nAll test runs were done with CHUNKSIZE 8 * 1024, CHUNKS 2 * 1024, \nFILESIZE_MULTIPLIER 2, and SLEEP 5. So a total of 16MB was sequentially \nwritten for each benchmark. 
\n \nResults are in microseconds. \n \nPLATFORM: Athlon Linux \nbuffered: 48220 \nfsync: 74854397 \nfdatasync: 75061357 \nopen_sync: 73869239 \nopen_datasync: 74748145 \nNotes: System mostly idle. Even during tests, top showed about 95% \nidle. Something's not right on this box. All sync methods similarly \nhorrible on this system. \n \nPLATFORM: Mac Linux \nbuffered: 58912 \nfsync: 1539079 \nfdatasync: 769058 \nopen_sync: 767094 \nopen_datasync: 763074 \nNotes: system mostly idle. fsync seems worst. Otherwise, they seem \npretty equivalent. This is the fastest system tested. \n \nPLATFORM: HP Intel Linux \nbuffered: 33026 \nfsync: 29330067 \nfdatasync: 28673880 \nopen_sync: 8783417 \nopen_datasync: 8747971 \nNotes: system idle. O_SYNC and O_DSYNC methods seem to be a lot \nbetter on this platform than fsync & fdatasync. \n \nPLATFORM: Dell Intel OpenBSD \nbuffered: 511890 \nfsync: 1769190 \nfdatasync: -------- \nopen_sync: 1748764 \nopen_datasync: 1747433 \nNotes: system idle. I couldn't locate fdatasync() on this box, so I \ncouldn't test it. All sync methods seem equivalent and are very fast -- \nthough still trail the old Mac. \n \nPLATFORM: SUN Ultra2 \nbuffered: 1814824 \nfsync: 73954800 \nfdatasync: 52594532 \nopen_sync: 34405585 \nopen_datasync: 13883758 \nNotes: system mostly idle, with occasional spikes from 1-10% utilization. \nIt looks like substantial difference between each sync method, with \nO_DSYNC the best and fsync() the worst. There is substantial \ndifference between the open* and f* methods. \n \nPLATFORM: SUN E4500 + HDS Thunder 9570v \nbuffered: 233947 \nfsync: 57802065 \nfdatasync: 56631013 \nopen_sync: 2362207 \nopen_datasync: 1976057 \nNotes: host about 30% idle, but the array tested on was completely idle. \nSomething looks seriously not right about fsync and fdatasync -- write \ncache seems to have no effect on them. As for write cache, that \nprobably explains the 2 seconds or so for the open_sync and \nopen_datasync methods. \n \n-------------- \n \nThanks for reading...I look forward to feedback, and hope to be helpful in \nthis effort! \n \nMark",
"msg_date": "Sun, 12 Sep 2004 23:11:06 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options "
},
{
"msg_contents": "\nHave you seen /src/tools/fsync?\n\n---------------------------------------------------------------------------\n\[email protected] wrote:\n> Hi, I'd like to help with the topic in the Subject: line. It seems to be a \n> TODO item. I've reviewed some threads discussing the matter, so I hope I've \n> acquired enough history concerning it. I've taken an initial swipe at \n> figuring out how to optimize sync'ing methods. It's based largely on \n> recommendations I've read on previous threads about fsync/O_SYNC and so on. \n> After reviewing, if anybody has recommendations on how to proceed then I'd \n> love to hear them. \n> \n> Attached is a little program that basically does a bunch of sequential writes \n> to a file. All of the sync'ing methods supported by PostgreSQL WAL can be \n> used. Results are printed in microseconds. Size and quanity of writes are \n> configurable. The documentation is in the code (how to configure, build, run, \n> etc.). I realize that this program doesn't reflect all of the possible \n> activities of a production database system, but I hope it's a step in the \n> right direction for this task. I've used it to see differences in behavior \n> between the various sync'ing methods on various platforms. \n> \n> Here's what I've found running the benchmark on some systems to which \n> I have access. The differences in behavior between platforms is quite vast. \n> \n> Summary first... \n> \n> <halfjoke> \n> PostgreSQL should be run on an old Apple MacIntosh attached to \n> its own Hitachi disk array with 2GB cache or so. Use any sync method \n> except for fsync(). \n> </halfjoke> \n> \n> Anyway, there is *a lot* of variance in file synching behavior across \n> different hardware and O/S platforms. It's probably not safe \n> to conclude much. That said, here are some findings so far based on \n> tests I've run: \n> \n> 1. under no circumstances do fsync() or fdatasync() seem to perform \n> better than opening files with O_SYNC or O_DSYNC \n> 2. where there are differences, opening files with O_SYNC or O_DSYNC \n> tends to be quite faster. \n> 3. fsync() seems to be the slowest where there are differences. And \n> O_DSYNC seems to be the fastest where results differ. \n> 4. the safest thing to assert at this point is that \n> Solaris systems ought to use the O_DSYNC method for WAL. \n> \n> ----------- \n> \n> Test system(s) \n> \n> Athlon Linux: \n> AMD Athlon XP2000, 512MB RAM, single (54 or 7200?) RPM 20GB IDE disk, \n> reiserfs filesystem (3 something I think) \n> SuSE Linux kernel 2.4.21-99 \n> \n> Mac Linux: \n> I don't know the specific model. 400MHz G3, 512MB, single IDE disk, \n> ext2 filesystem \n> Debian GNU/Linux 2.4.16-powerpc \n> \n> HP Intel Linux: \n> Prolient HPDL380G3, 2 x 3GHz Xeon, 2GB RAM, SmartArray 5i 64MB cache, \n> 2 x 15,000RPM 36GB U320 SCSI drives mirrored. I'm not sure if \n> writes are cached or not. There's no battery backup. \n> ext3 filesystem. \n> Redhat Enterprise Linux 3.0 kernel based on 2.4.21 \n> \n> Dell Intel OpenBSD: \n> Poweredge ?, single 1GHz PIII, 128MB RAM, single 7200RPM 80GB IDE disk, \n> ffs filesystem \n> OpenBSD 3.2 GENERIC kernel \n> \n> SUN Ultra2: \n> Ultra2, 2 x 296MHz UltraSPARC II, 2GB RAM, 2 x 10,000RPM 18GB U160 \n> SCSI drives mirrored with Solstice DiskSuite. UFS filesystem. \n> Solaris 8. \n> \n> SUN E4500 + HDS Thunder 9570v \n> E4500, 8 x 400MHz UltraSPARC II, 3GB RAM, \n> HDS Thunder 9570v, 2GB mirrored battery-backed cache, RAID5 with a \n> bunch of 146GB 10,000RPM FC drives. 
LUN is on single 2GB FC fabric \n> connection. \n> Veritas filesystem (VxFS) \n> Solaris 8. \n> \n> Test methodology: \n> \n> All test runs were done with CHUNKSIZE 8 * 1024, CHUNKS 2 * 1024, \n> FILESIZE_MULTIPLIER 2, and SLEEP 5. So a total of 16MB was sequentially \n> written for each benchmark. \n> \n> Results are in microseconds. \n> \n> PLATFORM: Athlon Linux \n> buffered: 48220 \n> fsync: 74854397 \n> fdatasync: 75061357 \n> open_sync: 73869239 \n> open_datasync: 74748145 \n> Notes: System mostly idle. Even during tests, top showed about 95% \n> idle. Something's not right on this box. All sync methods similarly \n> horrible on this system. \n> \n> PLATFORM: Mac Linux \n> buffered: 58912 \n> fsync: 1539079 \n> fdatasync: 769058 \n> open_sync: 767094 \n> open_datasync: 763074 \n> Notes: system mostly idle. fsync seems worst. Otherwise, they seem \n> pretty equivalent. This is the fastest system tested. \n> \n> PLATFORM: HP Intel Linux \n> buffered: 33026 \n> fsync: 29330067 \n> fdatasync: 28673880 \n> open_sync: 8783417 \n> open_datasync: 8747971 \n> Notes: system idle. O_SYNC and O_DSYNC methods seem to be a lot \n> better on this platform than fsync & fdatasync. \n> \n> PLATFORM: Dell Intel OpenBSD \n> buffered: 511890 \n> fsync: 1769190 \n> fdatasync: -------- \n> open_sync: 1748764 \n> open_datasync: 1747433 \n> Notes: system idle. I couldn't locate fdatasync() on this box, so I \n> couldn't test it. All sync methods seem equivalent and are very fast -- \n> though still trail the old Mac. \n> \n> PLATFORM: SUN Ultra2 \n> buffered: 1814824 \n> fsync: 73954800 \n> fdatasync: 52594532 \n> open_sync: 34405585 \n> open_datasync: 13883758 \n> Notes: system mostly idle, with occasional spikes from 1-10% utilization. \n> It looks like substantial difference between each sync method, with \n> O_DSYNC the best and fsync() the worst. There is substantial \n> difference between the open* and f* methods. \n> \n> PLATFORM: SUN E4500 + HDS Thunder 9570v \n> buffered: 233947 \n> fsync: 57802065 \n> fdatasync: 56631013 \n> open_sync: 2362207 \n> open_datasync: 1976057 \n> Notes: host about 30% idle, but the array tested on was completely idle. \n> Something looks seriously not right about fsync and fdatasync -- write \n> cache seems to have no effect on them. As for write cache, that \n> probably explains the 2 seconds or so for the open_sync and \n> open_datasync methods. \n> \n> -------------- \n> \n> Thanks for reading...I look forward to feedback, and hope to be helpful in \n> this effort! \n> \n> Mark \n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 13 Sep 2004 10:38:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options"
},
{
"msg_contents": "Quoting Bruce Momjian <[email protected]>:\n\n> \n> Have you seen /src/tools/fsync?\n> \n\nI have now. Thanks.\n",
"msg_date": "Mon, 13 Sep 2004 12:32:44 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Have you seen /src/tools/fsync?\n> \n\nNow that the argument is already open, why postgres choose\non linux fdatasync? I'm understanding from other posts that\non this platform open_sync is better than fdatasync.\n\nHowever I choose open_sync. During initdb why don't detect\nthis parameter ?\n\nRegards\nGaetano Mendola\n\n\n\n\n\nThese are my times:\n\n\n\nkernel 2.4.9-e.24smp ( RAID SCSI ):\n\nSimple write timing:\n write 0.011544\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 1.233312\n write, close, fsync 1.242086\n\nCompare one o_sync write to two:\n one 16k o_sync write 0.517633\n two 8k o_sync writes 0.824603\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable)\n open o_sync, write 0.438580\n write, fdatasync 1.239377\n write, fsync, 1.178017\n\nCompare file sync methods with 2 8k writes:\n (o_dsync unavailable)\n open o_sync, write 0.818720\n write, fdatasync 1.395602\n write, fsync, 1.351214\n\n\n\n\nkernel 2.4.22-1.2199.nptlsmp (single EIDE disk):\n\nSimple write timing:\n write 0.023697\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 0.688765\n write, close, fsync 0.702166\n\nCompare one o_sync write to two:\n one 16k o_sync write 0.498296\n two 8k o_sync writes 0.543956\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable)\n open o_sync, write 0.259664\n write, fdatasync 0.971712\n write, fsync, 1.006096\n\nCompare file sync methods with 2 8k writes:\n (o_dsync unavailable)\n open o_sync, write 0.536882\n write, fdatasync 1.160347\n write, fsync, 1.189699\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 13 Sep 2004 22:28:05 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options"
},
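For reference, the setting under discussion is wal_sync_method; it lives in postgresql.conf, the accepted values (fsync, fdatasync, open_sync, open_datasync) depend on what the platform's C library exposes, and the active value can be checked from psql:

-- in postgresql.conf:
--   wal_sync_method = open_sync

show wal_sync_method;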
{
"msg_contents": "Gaetano,\n\n> Now that the argument is already open, why postgres choose\n> on linux fdatasync? I'm understanding from other posts that\n> on this platform open_sync is better than fdatasync.\n\nNot necessarily. For example, here's my test results, on Linux 2.6.7, \nwriting to a ReiserFS mount on a Software RAID 1 slave of 2 IDE disks, on an \nAthalon 1600mhz single-processor machine. I ran the loop 10,000 times \ninstead of 1000 because tests with 1,000 varied too much.\n\nSimple write timing:\n write 0.088701\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 3.593958\n write, close, fsync 3.556978\n\nCompare one o_sync write to two:\n one 16k o_sync write 42.951595\n two 8k o_sync writes 11.251389\n\nCompare file sync methods with one 8k write:\n (o_dsync unavailable)\n open o_sync, write 6.807060\n write, fdatasync 7.207879\n write, fsync, 7.209087\n\nCompare file sync methods with 2 8k writes:\n (o_dsync unavailable)\n open o_sync, write 13.120305\n write, fdatasync 7.583871\n write, fsync, 7.801748\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 13 Sep 2004 14:15:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options"
},
{
"msg_contents": "Gaetano Mendola <[email protected]> writes:\n> Now that the argument is already open, why postgres choose\n> on linux fdatasync? I'm understanding from other posts that\n> on this platform open_sync is better than fdatasync.\n\nAFAIR, we've seen *one* test from *one* person alleging that.\nAnd it was definitely not that way when we tested the behavior\noriginally, several releases back. I'd like to see more evidence,\nor better some indication that the Linux kernel changed algorithms,\nbefore changing the default.\n\nThe tests that started this thread are pretty unconvincing in my eyes,\nbecause they are comparing open_sync against code that fsyncs after each\none-block write. Under those circumstances, *of course* fsync will lose\n(or at least do no better), because it's forcing the same number of\nwrites through a same-or-less-efficient API. The reason that this isn't\na trivial choice is that Postgres doesn't necessarily need to fsync\nafter every block of WAL. In particular, when doing large transactions\nthere could be many blocks written between fsyncs, and in that case you\ncould come out ahead with fsync because the kernel would have more\nfreedom to schedule disk writes.\n\nSo, the only test I put a whole lot of faith in is testing your own\nworkload on your own Postgres server. But if we want to set up a toy\ntest program to test this stuff, it's at least got to have an easily\nadjustable (and preferably randomizable) distance between fsyncs.\n\nAlso, tests on IDE drives have zero credibility to start with, unless\nyou can convince me you know how to turn off write buffering on the\ndrive...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2004 17:18:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options "
},
{
"msg_contents": "Tom Lane wrote:\n> The tests that started this thread are pretty unconvincing in my eyes,\n> because they are comparing open_sync against code that fsyncs after each\n> one-block write. Under those circumstances, *of course* fsync will lose\n> (or at least do no better), because it's forcing the same number of\n> writes through a same-or-less-efficient API. The reason that this isn't\n> a trivial choice is that Postgres doesn't necessarily need to fsync\n> after every block of WAL. In particular, when doing large transactions\n> there could be many blocks written between fsyncs, and in that case you\n> could come out ahead with fsync because the kernel would have more\n> freedom to schedule disk writes.\n\nMy guess is that the majority of queries do not fill more than one WAL\nblock. Sure some do, but in those cases the fsync is probably small\ncompared to the duration of the query. If we had a majority of queries\nfilling more than one block we would be checkpointing like crazy and we\ndon't normally get reports about that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 13 Sep 2004 21:28:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nTom Lane wrote:\n\n| Gaetano Mendola <[email protected]> writes:\n|\n|>Now that the argument is already open, why postgres choose\n|>on linux fdatasync? I'm understanding from other posts that\n|>on this platform open_sync is better than fdatasync.\n|\n|\n| AFAIR, we've seen *one* test from *one* person alleging that.\n| And it was definitely not that way when we tested the behavior\n| originally, several releases back. I'd like to see more evidence,\n| or better some indication that the Linux kernel changed algorithms,\n| before changing the default.\n\nI remember more then one person claim that open_sync *apparently*\nwas working better then fdatasync, however I trust you ( here is\n3:00 AM ).\n\n\n| The tests that started this thread are pretty unconvincing in my eyes,\n| because they are comparing open_sync against code that fsyncs after each\n| one-block write. Under those circumstances, *of course* fsync will lose\n| (or at least do no better), because it's forcing the same number of\n| writes through a same-or-less-efficient API.\n|\n| The reason that this isn't a trivial choice is that Postgres doesn't\n| necessarily need to fsync after every block of WAL. In particular,\n| when doing large transactions there could be many blocks written between\n| fsyncs, and in that case you could come out ahead with fsync because the\n| kernel would have more freedom to schedule disk writes.\n\nAre you suggesting that postgres shall use more the one sync method and use\none or the other depending on the activity is performing ?\n\n| So, the only test I put a whole lot of faith in is testing your own\n| workload on your own Postgres server. But if we want to set up a toy\n| test program to test this stuff, it's at least got to have an easily\n| adjustable (and preferably randomizable) distance between fsyncs.\n|\n| Also, tests on IDE drives have zero credibility to start with, unless\n| you can convince me you know how to turn off write buffering on the\n| drive...\n\nI reported the IDE times just for info; however my SAN works better\nwith open_sync. Can we trust on numbers given by tools/fsync ? I seen\nsome your objections in the past but I don't know if there was some fix\nfrom that time.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBRkvW7UpzwH2SGd4RAia1AKD2L5JLhpRNvBzPq9Lv5bAfFJvRmwCffjC5\nhg7V0Sfm2At7yR1C+gBCzPE=\n=RsSy\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Tue, 14 Sep 2004 03:39:35 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> If we had a majority of queries filling more than one block we would\n> be checkpointing like crazy and we don't normally get reports about\n> that.\n\n[ raised eyebrow... ] And of course the 30-second-checkpoint-warning\nstuff is a useless feature that no one ever exercises.\n\nBut your logic doesn't hold up anyway. People may be doing large\ntransactions without necessarily doing them back-to-back-to-back;\nthere could be idle time in between. For instance, I'd think an average\ntransaction size of 100 blocks would be more than enough to make fsync a\nwinner. There are 2K blocks per WAL segment, so 20 of these would fit\nin a segment. With the default WAL parameters you could do sixty such\ntransactions per five minutes, or one every five seconds, without even\ncausing more-frequent-than-default checkpoints; and you could do two a\nsecond without setting off the checkpoint-warning alarm. The lack of\ncheckpoint complaints doesn't prove that this isn't a common real-world\nload.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2004 22:00:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options "
},
{
"msg_contents": ">>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\nTL> Bruce Momjian <[email protected]> writes:\n>> If we had a majority of queries filling more than one block we would\n>> be checkpointing like crazy and we don't normally get reports about\n>> that.\n\nTL> [ raised eyebrow... ] And of course the 30-second-checkpoint-warning\nTL> stuff is a useless feature that no one ever exercises.\n\nWell, last year about this time I discovered in my testing I was\nexcessively checkpointing; I found that the error message was\nconfusing, and Bruce cleaned it up. So at least one person excercised\nthat feature, namely me. :-)\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Tue, 14 Sep 2004 14:19:33 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options"
}
] |
[
{
"msg_contents": "\nHi All,\n\nI am having a performance problem extracting a large volume of data from\nPostgres 7.4.2, and was wondering if there was a more cunning way to get\nthe data out of the DB...\n\nThis isn't a performance problem with any particular PgSQL operation,\nits more a strategy for getting large volumes of related tables out of\nthe DB whilst perserving the relations between them.\n\n\nBasically we have a number of tables, which are exposed as 2 public\nviews (say PvA and PvB). For each row in PvA, there are a number of\nrelated rows in PvB (this number is arbitrary, which is one of the\nreasons why it cant be expressed as additional columns in PvA - so we\nreally need 2 sets of tables - which leads to two sets of extract calls\n- interwoven to associate PvA with PvB).\n\n\nThe problem is that for extraction, we ultimately want to grab a row\nfrom PvA, and then all the related rows from PvB and store them together\noffline (e.g. in XML).\n\nHowever, the number of rows at any time on the DB is likely to be in the\nmillions, with perhaps 25% of them being suitable for extraction at any\ngiven batch run (ie several hundred thousand to several million).\n\n\nCurrently, the proposal is to grab several hundred rows from PvA (thus\navoiding issues with the resultset being very large), and then process\neach of them by selecting the related rows in PvB (again, several\nhundred rows at a time to avoid problems with large result sets).\n\nSo the algorithm is basically:\n\n\n\tDo\n\n\t\tSelect the next 200 rows from PvA\n\n\t\tFor each PvA row Do\n\t\t\tWrite current PvA row as XML\n\n\t\t\tDo \n\t\t\t\tSelect the next 200 rows from PvB\n\t\t\t\n\t\t\t\tFor each PvB row Do\n\t\t\t\t\tWrite current PvB row as XML\nwithin the parent PvA XML Element\n\t\t\t\tEnd For\n\t\t\tWhile More Rows\n\t\tEnd For\n\tWhile More Rows\n\n\nHowever, this has a fairly significant performance impact, and I was\nwondering if there was a faster way to do it (something like taking a\ndump of the tables so they can be worked on offline - but a basic dump\nmeans we have lost the 1:M relationships between PvA and PvB).\n\n\nAre there any tools/tricks/tips with regards to extracting large volumes\nof data across related tables from Postgres? It doesnt have to export\ninto XML, we can do post-processing on the extracted data as needed -\nthe important thing is to keep the relationship between PvA and PvB on a\nrow-by-row basis.\n\n\nMany thanks,\n\nDamien\n\n",
"msg_date": "Mon, 13 Sep 2004 12:38:05 +0100",
"msg_from": "\"Damien Dougan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help with extracting large volumes of records across related tables"
},
{
"msg_contents": "\nOn 13/09/2004 12:38 Damien Dougan wrote:\n> [snip]\n> Are there any tools/tricks/tips with regards to extracting large volumes\n> of data across related tables from Postgres? It doesnt have to export\n> into XML, we can do post-processing on the extracted data as needed -\n> the important thing is to keep the relationship between PvA and PvB on a\n> row-by-row basis.\n\nHave you considered using cursors?\n\n-- \nPaul Thomas\n+------------------------------+-------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for Business |\n| Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+-------------------------------------------+\n",
"msg_date": "Mon, 13 Sep 2004 13:10:23 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with extracting large volumes of records across related\n\ttables"
},
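In plain SQL, the cursor approach Paul suggests might look like the sketch below; PvA is from the original post, while "id" stands in for whatever column actually keys the rows:

begin;

declare pva_cur cursor for
    select * from PvA order by id;

fetch 200 from pva_cur;  -- repeat until it returns no rows
-- for each PvA row fetched, fetch the related PvB rows the same way

close pva_cur;
commit;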
{
"msg_contents": "Damien Dougan wrote:\n> Basically we have a number of tables, which are exposed as 2 public\n> views (say PvA and PvB). For each row in PvA, there are a number of\n> related rows in PvB (this number is arbitrary, which is one of the\n> reasons why it cant be expressed as additional columns in PvA - so we\n> really need 2 sets of tables - which leads to two sets of extract calls\n> - interwoven to associate PvA with PvB).\n> \n> Are there any tools/tricks/tips with regards to extracting large volumes\n> of data across related tables from Postgres? It doesnt have to export\n> into XML, we can do post-processing on the extracted data as needed -\n> the important thing is to keep the relationship between PvA and PvB on a\n> row-by-row basis.\n\nJust recently had to come up with an alternative to MSSQL's \"SQL..FOR \nXML\", for some five-level nested docs, that turned out to be faster (!)\nand easier to understand:\n\nUse SQL to organize each of the row types into a single text field, plus \na single key field, as well as any filter fields you . Sort the union, \nand have the reading process break them into documents.\n\nFor example, if PvA has key (account_id, order_id) and \nfields(order_date, ship_date) and PvB has key (order_id, product_id) and \nfields (order_qty, back_order)\n\nCREATE VIEW PvABxml AS\nSELECT\taccount_id::text + order_id::text AS quay\n\t,'order_date=\"' + order_date::text\n\t+ '\" ship_date=\"' + ship_date::text + '\"' AS info\n\t,ship_date\nFROM\tPvA\n\tUNION ALL\nSELECT\taccount_id::text + order_id::text + product_id::text\n\t,'order_qty=\"' + order_qty::text +'\"'\n\t,ship_date\nFROM\tPvA JOIN PvB USING (order_id)\n\nThen:\n\nSELECT quay, info\nFROM pvABxml\nWHERE ship_date = '...'\nORDER BY quay\n\ngives you a stream of info in the (parent,child,child... \nparent,child,child...) order you want, that assemble very easily into \nXML documents. If you need to pick out, say, orders where there are \nbackordered items, you probably need to work with a temp table with \nwhich to prefilter.\n",
"msg_date": "Mon, 13 Sep 2004 12:58:57 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with extracting large volumes of records across related"
},
{
"msg_contents": "\n\n\tThere's a very simple solution using cursors.\n\n\tAs an example :\n\ncreate table categories ( id serial primary key, name text );\ncreate table items ( id serial primary key, cat_id integer references \ncategories(id), name text );\ncreate index items_cat_idx on items( cat_id );\n\ninsert stuff...\n\nselect * from categories;\n id | name\n----+----------\n 1 | tools\n 2 | supplies\n 3 | food\n(3 lignes)\n\nselect * from items;\n id | cat_id | name\n----+--------+--------------\n 1 | 1 | hammer\n 2 | 1 | screwdriver\n 3 | 2 | nails\n 4 | 2 | screws\n 5 | 1 | wrench\n 6 | 2 | bolts\n 7 | 2 | cement\n 8 | 3 | beer\n 9 | 3 | burgers\n 10 | 3 | french fries\n(10 lignes)\n\n\tNow (supposing you use Python) you use the extremely simple sample \nprogram below :\n\nimport psycopg\ndb = psycopg.connect(\"host=localhost dbname=rencontres user=rencontres \npassword=.........\")\n\n#\tSimple. Let's make some cursors.\ncursor = db.cursor()\ncursor.execute( \"BEGIN;\" )\ncursor.execute( \"declare cat_cursor no scroll cursor without hold for \nselect * from categories order by id for read only;\" )\ncursor.execute( \"declare items_cursor no scroll cursor without hold for \nselect * from items order by cat_id for read only;\" )\n\n# set up some generators\ndef qcursor( cursor, psql_cursor_name ):\n\twhile True:\n\t\tcursor.execute( \"fetch 2 from %s;\" % psql_cursor_name )guess\n\t\tif not cursor.rowcount:\n\t\t\tbreak\n#\t\tprint \"%s fetched %d rows.\" % (psql_cursor_name, cursor.rowcount)\n\t\tfor row in cursor.dictfetchall():\n\t\t\tyield row\n\tprint \"%s exhausted.\" % psql_cursor_name\n\n# use the generators\ncategories = qcursor( cursor, \"cat_cursor\" )\nitems = qcursor( cursor, \"items_cursor\" )\n\ncurrent_item = items.next()\nfor cat in categories:\n\tprint \"Category : \", cat\n\t\n\t# if no items (or all items in category are done) skip to next category\n\tif cat['id'] < current_item['cat_id']:\n\t\tcontinue\n\t\n\t# case of items without category (should not happen)\n\twhile cat['id'] > current_item['cat_id']:\n\t\tcurrent_item = items.next()\n\t\n\twhile current_item['cat_id'] == cat['id']:\n\t\tprint \"\\t\", current_item\n\t\tcurrent_item = items.next()\n\n\nIt produces the following output :\n\nCategory : {'id': 1, 'name': 'tools'}\n {'cat_id': 1, 'id': 1, 'name': 'hammer'}\n {'cat_id': 1, 'id': 2, 'name': 'screwdriver'}\n {'cat_id': 1, 'id': 5, 'name': 'wrench'}\nCategory : {'id': 2, 'name': 'supplies'}\n {'cat_id': 2, 'id': 3, 'name': 'nails'}\n {'cat_id': 2, 'id': 4, 'name': 'screws'}\n {'cat_id': 2, 'id': 6, 'name': 'bolts'}\n {'cat_id': 2, 'id': 7, 'name': 'cement'}\nCategory : {'id': 3, 'name': 'food'}\n {'cat_id': 3, 'id': 8, 'name': 'beer'}\n {'cat_id': 3, 'id': 9, 'name': 'burgers'}\n {'cat_id': 3, 'id': 10, 'name': 'french fries'}\n\nThis simple code, with \"fetch 1000\" instead of \"fetch 2\", dumps a database \nof several million rows, where each categories contains generally 1 but \noften 2-4 items, at the speed of about 10.000 items/s.\n\nSatisfied ?\n\n\n\n\n\n\n",
"msg_date": "Mon, 13 Sep 2004 15:01:49 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with extracting large volumes of records across related\n\ttables"
},
{
"msg_contents": "\nMy simple python program dumps 1653992 items in 1654000 categories in :\n\nreal 3m12.029s\nuser 1m36.720s\nsys 0m2.220s\n\nIt was running on the same machine as postgresql (AthlonXP 2500).\nI Ctrl-C'd it before it dumped all the database but you get an idea.\n\nIf you don't know Python and Generators, have a look !\n",
"msg_date": "Mon, 13 Sep 2004 15:05:43 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with extracting large volumes of records across related\n\ttables"
},
{
"msg_contents": "Pierre-Frederic, Paul,\n\nThanks for your fast response (especially for the python code and\nperformance figure) - I'll chase this up as a solution - looks most\npromising!\n\nCheers,\n\nDamien\n\n",
"msg_date": "Mon, 13 Sep 2004 14:44:45 +0100",
"msg_from": "\"Damien Dougan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with extracting large volumes of records across related\n\ttables"
},
{
"msg_contents": "\n\tThanks for the thanks !\n\n\tGenerally, when grouping stuff together, it is a good idea to have two \nsorted lists, and to scan them simultaneously. I have already used this \nsolution several times outside of Postgres, and it worked very well (it \nwas with Berkeley DB and there were 3 lists to scan in order). The fact \nthat Python can very easily virtualize these lists using generators makes \nit possible to do it without consuming too much memory.\n\n> Pierre-Frederic, Paul,\n>\n> Thanks for your fast response (especially for the python code and\n> performance figure) - I'll chase this up as a solution - looks most\n> promising!\n>\n> Cheers,\n>\n> Damien\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n",
"msg_date": "Mon, 13 Sep 2004 16:00:35 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with extracting large volumes of records across related\n\ttables"
}
] |
[
{
"msg_contents": "Hi, I have downloaded the new postgresql (version 8.0 beta2) and I was\nwondering what performance features I can take advantage of before I start\nto dump my 3/4 terrabyte database into the new database. More \nspecifically\nI am interested in tablespaces--what exactly is this feature, some sort of\norganizational addition (?) and how can I best take advantage of this....? \nAnything else? Furthermore, if I compile from source will I be able to \nrevert to using the packaged version of postgresql 8.0 stable later on \nwithout modifying the database(I use debian)�.?\n\nThanks.\n\n",
"msg_date": "Mon, 13 Sep 2004 10:48:46 -0500 (CDT)",
"msg_from": "Bill Fefferman <[email protected]>",
"msg_from_op": true,
"msg_subject": "tblspace"
}
] |
[
{
"msg_contents": "Does postgres cache the entire result set before it begins returning\ndata to the client?\n\nI have a table with ~8 million rows and I am executing a query which\nshould return about ~800,000 rows. The problem is that as soon as I\nexecute the query it absolutely kills my machine and begins swapping\nfor 5 or 6 minutes before it begins returning results. Is postgres\ntrying to load the whole query into memory before returning anything?\nAlso, why would it choose not to use the index? It is properly\nestimating the # of rows returned. If I set enable_seqscan to off it\nis just as slow.\n\nRunning postgres 8.0 beta2 dev2\n\nexplain select * from island_history where date='2004-09-07' and stock='QQQ';\n QUERY PLAN\n---------------------------------------------------------------------------\n Seq Scan on island_history (cost=0.00..266711.23 rows=896150 width=83)\n Filter: ((date = '2004-09-07'::date) AND ((stock)::text = 'QQQ'::text))\n(2 rows)\n\nAny help would be appreciated\n\n--Stephen\n\n Table \"public.island_history\"\n Column | Type | Modifiers\n------------------+------------------------+-----------\n date | date | not null\n stock | character varying(6) |\n time | time without time zone | not null\n reference_number | numeric(9,0) | not null\n message_type | character(1) | not null\n buy_sell_ind | character(1) |\n shares | numeric(6,0) |\n remaining_shares | numeric(6,0) |\n price | numeric(10,4) |\n display | character(1) |\n match_number | numeric(9,0) | not null\nIndexes:\n \"island_history_pkey\" PRIMARY KEY, btree (date, reference_number,\nmessage_type, \"time\", match_number)\n \"island_history_date_stock_time\" btree (date, stock, \"time\")\n \"island_history_oid\" btree (oid)\n",
"msg_date": "Mon, 13 Sep 2004 19:51:22 -0500",
"msg_from": "Stephen Crowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large # of rows in query extremely slow, not using index"
},
{
"msg_contents": "Stephen Crowley <[email protected]> writes:\n> Does postgres cache the entire result set before it begins returning\n> data to the client?\n\nThe backend doesn't, but libpq does, and I think JDBC does too.\n\nI'd recommend using a cursor so you can FETCH a reasonable number of\nrows at a time.\n\n> Also, why would it choose not to use the index?\n\nSelecting 1/10th of a table is almost always a poor candidate for an\nindex scan. You've got about 100 rows per page (assuming the planner's\nwidth estimate is credible) and so on average every page of the table\nhas about ten rows that need to be picked up and returned. You might as\nwell just seqscan and be sure you don't read any page more than once.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2004 21:11:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using index "
},
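Tom's cursor suggestion can be exercised straight from SQL, with no driver support needed; a minimal sketch against the island_history table from this thread (the fetch size of 1000 is arbitrary):

BEGIN;
DECLARE hist_cur NO SCROLL CURSOR FOR
    SELECT * FROM island_history
    WHERE date = '2004-09-07' AND stock = 'QQQ';
FETCH FORWARD 1000 FROM hist_cur;   -- repeat until no rows come back
CLOSE hist_cur;
COMMIT;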
{
"msg_contents": "On Mon, 2004-09-13 at 20:51, Stephen Crowley wrote:\n> Does postgres cache the entire result set before it begins returning\n> data to the client?\n\nSometimes you need to be careful as to how the clients treat the data. \n\nFor example psql will resize columns width on the length (width) of the\ndata returned.\n\nPHP and Perl will retrieve and cache all of the rows if you request a\nrow count ($sth->rows() or pg_num_rows($rset))\n\n\nYou may find that using a cursor will help you out.\n\n> I have a table with ~8 million rows and I am executing a query which\n> should return about ~800,000 rows. The problem is that as soon as I\n> execute the query it absolutely kills my machine and begins swapping\n> for 5 or 6 minutes before it begins returning results. Is postgres\n> trying to load the whole query into memory before returning anything?\n> Also, why would it choose not to use the index? It is properly\n> estimating the # of rows returned. If I set enable_seqscan to off it\n> is just as slow.\n> \n> Running postgres 8.0 beta2 dev2\n> \n> explain select * from island_history where date='2004-09-07' and stock='QQQ';\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> Seq Scan on island_history (cost=0.00..266711.23 rows=896150 width=83)\n> Filter: ((date = '2004-09-07'::date) AND ((stock)::text = 'QQQ'::text))\n> (2 rows)\n> \n> Any help would be appreciated\n> \n> --Stephen\n> \n> Table \"public.island_history\"\n> Column | Type | Modifiers\n> ------------------+------------------------+-----------\n> date | date | not null\n> stock | character varying(6) |\n> time | time without time zone | not null\n> reference_number | numeric(9,0) | not null\n> message_type | character(1) | not null\n> buy_sell_ind | character(1) |\n> shares | numeric(6,0) |\n> remaining_shares | numeric(6,0) |\n> price | numeric(10,4) |\n> display | character(1) |\n> match_number | numeric(9,0) | not null\n> Indexes:\n> \"island_history_pkey\" PRIMARY KEY, btree (date, reference_number,\n> message_type, \"time\", match_number)\n> \"island_history_date_stock_time\" btree (date, stock, \"time\")\n> \"island_history_oid\" btree (oid)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc",
"msg_date": "Mon, 13 Sep 2004 21:11:13 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
{
"msg_contents": "On Mon, 13 Sep 2004 21:11:07 -0400, Tom Lane <[email protected]> wrote:\n> Stephen Crowley <[email protected]> writes:\n> > Does postgres cache the entire result set before it begins returning\n> > data to the client?\n> \n> The backend doesn't, but libpq does, and I think JDBC does too.\n> \n> I'd recommend using a cursor so you can FETCH a reasonable number of\n> rows at a time.\n\nThat is incredible. Why would libpq do such a thing? JDBC as well? I\nknow oracle doesn't do anything like that, not sure about mysql. Is\nthere any way to turn it off? In this case I was just using psql but\nwill be using JDBC for the app. About cursors, I thought a jdbc\nResultSet WAS a cursor, am I mistaken?\n\nThanks,\nStephen\n",
"msg_date": "Mon, 13 Sep 2004 20:22:19 -0500",
"msg_from": "Stephen Crowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large # of rows in query extremely slow, not using index"
},
{
"msg_contents": "Stephen Crowley <[email protected]> writes:\n> On Mon, 13 Sep 2004 21:11:07 -0400, Tom Lane <[email protected]> wrote:\n>> Stephen Crowley <[email protected]> writes:\n>>> Does postgres cache the entire result set before it begins returning\n>>> data to the client?\n>> \n>> The backend doesn't, but libpq does, and I think JDBC does too.\n\n> That is incredible. Why would libpq do such a thing?\n\nBecause the API it presents doesn't allow for the possibility of query\nfailure after having given you back a PGresult: either you have the\nwhole result available with no further worries, or you don't.\nIf you think it's \"incredible\", let's see you design an equally\neasy-to-use API that doesn't make this assumption.\n\n(Now having said that, I would have no objection to someone extending\nlibpq to offer an alternative streaming API for query results. It\nhasn't got to the top of anyone's to-do list though ... and I'm\nunconvinced that psql could use it if it did exist.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2004 21:49:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using index"
},
{
"msg_contents": "Problem solved.. I set the fetchSize to a reasonable value instead of\nthe default of unlimited in the PreparedStatement and now the query\nis . After some searching it seeems this is a common problem, would it\nmake sense to change the default value to something other than 0 in\nthe JDBC driver?\n\nIf I get some extra time I'll look into libpq and see what is required\nto fix the API. Most thirdparty programs and existing JDBC apps won't\nwork with the current paradigm when returning large result sets.\n\nThanks,\nStephen\n\n\n\nOn Mon, 13 Sep 2004 21:49:14 -0400, Tom Lane <[email protected]> wrote:\n> Stephen Crowley <[email protected]> writes:\n> > On Mon, 13 Sep 2004 21:11:07 -0400, Tom Lane <[email protected]> wrote:\n> >> Stephen Crowley <[email protected]> writes:\n> >>> Does postgres cache the entire result set before it begins returning\n> >>> data to the client?\n> >>\n> >> The backend doesn't, but libpq does, and I think JDBC does too.\n> \n> > That is incredible. Why would libpq do such a thing?\n> \n> Because the API it presents doesn't allow for the possibility of query\n> failure after having given you back a PGresult: either you have the\n> whole result available with no further worries, or you don't.\n> If you think it's \"incredible\", let's see you design an equally\n> easy-to-use API that doesn't make this assumption.\n> \n> (Now having said that, I would have no objection to someone extending\n> libpq to offer an alternative streaming API for query results. It\n> hasn't got to the top of anyone's to-do list though ... and I'm\n> unconvinced that psql could use it if it did exist.)\n",
"msg_date": "Tue, 14 Sep 2004 01:04:33 -0500",
"msg_from": "Stephen Crowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large # of rows in query extremely slow, not using index"
},
{
"msg_contents": "Hi, Stephen,\n\nOn Mon, 13 Sep 2004 19:51:22 -0500\nStephen Crowley <[email protected]> wrote:\n\n> Does postgres cache the entire result set before it begins returning\n> data to the client?\n> \n> I have a table with ~8 million rows and I am executing a query which\n> should return about ~800,000 rows. The problem is that as soon as I\n> execute the query it absolutely kills my machine and begins swapping\n> for 5 or 6 minutes before it begins returning results. Is postgres\n> trying to load the whole query into memory before returning anything?\n> Also, why would it choose not to use the index? It is properly\n> estimating the # of rows returned. If I set enable_seqscan to off it\n> is just as slow.\n\nAs you get about 10% of all rows in the table, the query will hit every\npage of the table.\n\nMaybe it helps to CLUSTER the table using the index on your query\nparameters, and then set enable_seqscan to off.\n\nBut beware, that you have to re-CLUSTER after modifications.\n\nHTH,\nMarkus\n\n\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Tue, 14 Sep 2004 18:43:58 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
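For reference, a sketch of the CLUSTER suggestion using the table and index names from this thread; CLUSTER takes an exclusive lock and rewrites the table, and the syntax shown is the form used by the 7.x/8.0 releases under discussion:

CLUSTER island_history_date_stock_time ON island_history;
ANALYZE island_history;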
{
"msg_contents": "\n>> I have a table with ~8 million rows and I am executing a query which\n>> should return about ~800,000 rows. The problem is that as soon as I\n>> execute the query it absolutely kills my machine and begins swapping\n>> for 5 or 6 minutes before it begins returning results. Is postgres\n>> trying to load the whole query into memory before returning anything?\n>> Also, why would it choose not to use the index? It is properly\n>> estimating the # of rows returned. If I set enable_seqscan to off it\n>> is just as slow.\n\n\t1; EXPLAIN ANALYZE.\n\n\tNote the time it takes. It should not swap, just read data from the disk \n(and not kill the machine).\n\n\t2; Run the query in your software\n\n\tNote the time it takes. Watch RAM usage. If it's vastly longer and you're \nswimming in virtual memory, postgres is not the culprit... rather use a \ncursor to fetch a huge resultset bit by bit.\n\n\tTell us what you find ?\n\n\tRegards.\n",
"msg_date": "Tue, 14 Sep 2004 21:27:55 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
{
"msg_contents": "Here are some results of explain analyze, I've included the LIMIT 10\nbecause otherwise the resultset would exhaust all available memory.\n\n\nexplain analyze select * from history where date='2004-09-07' and\nstock='ORCL' LIMIT 10;\n\n\"Limit (cost=0.00..17.92 rows=10 width=83) (actual\ntime=1612.000..1702.000 rows=10 loops=1)\"\n\" -> Index Scan using island_history_date_stock_time on\nisland_history (cost=0.00..183099.72 rows=102166 width=83) (actual\ntime=1612.000..1702.000 rows=10 loops=1)\"\n\" Index Cond: ((date = '2004-09-07'::date) AND ((stock)::text =\n'ORCL'::text))\"\n\"Total runtime: 1702.000 ms\"\n\n\nOk, so for 100,000 rows it decides to use the index and returns very\nquicktly.. now for\n\n explain analyze select * from history where date='2004-09-07' and\nstock='MSFT' LIMIT 10;\n\n\"Limit (cost=0.00..14.30 rows=10 width=83) (actual\ntime=346759.000..346759.000 rows=10 loops=1)\"\n\" -> Seq Scan on island_history (cost=0.00..417867.13 rows=292274\nwidth=83) (actual time=346759.000..346759.000 rows=10 loops=1)\"\n\" Filter: ((date = '2004-09-07'::date) AND ((stock)::text =\n'MSFT'::text))\"\n\"Total runtime: 346759.000 ms\"\n\nNearly 8 minutes.. Why would it take this long? Is there anything else\nI can do to debug this?\n\nWhen I set enable_seqscan to OFF and force everything to use the index\nevery stock I query returns within 100ms, but turn seqscan back ON and\nits back up to taking several minutes for non-index using plans.\n\nAny ideas?\n--Stephen\n\n\nOn Tue, 14 Sep 2004 21:27:55 +0200, Pierre-Frédéric Caillaud\n<[email protected]> wrote:\n> \n> >> I have a table with ~8 million rows and I am executing a query which\n> >> should return about ~800,000 rows. The problem is that as soon as I\n> >> execute the query it absolutely kills my machine and begins swapping\n> >> for 5 or 6 minutes before it begins returning results. Is postgres\n> >> trying to load the whole query into memory before returning anything?\n> >> Also, why would it choose not to use the index? It is properly\n> >> estimating the # of rows returned. If I set enable_seqscan to off it\n> >> is just as slow.\n> \n> 1; EXPLAIN ANALYZE.\n> \n> Note the time it takes. It should not swap, just read data from the disk\n> (and not kill the machine).\n",
"msg_date": "Thu, 16 Sep 2004 20:51:11 -0500",
"msg_from": "Stephen Crowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
{
"msg_contents": "> When I set enable_seqscan to OFF and force everything to use the index\n> every stock I query returns within 100ms, but turn seqscan back ON and\n> its back up to taking several minutes for non-index using plans.\n> \n> Any ideas?\n> --Stephen\n\nTry increasing your statistics target and re-running analyze. Try say 100?\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n> \n> \n> On Tue, 14 Sep 2004 21:27:55 +0200, Pierre-Frédéric Caillaud\n> <[email protected]> wrote:\n> \n>>>>I have a table with ~8 million rows and I am executing a query which\n>>>>should return about ~800,000 rows. The problem is that as soon as I\n>>>>execute the query it absolutely kills my machine and begins swapping\n>>>>for 5 or 6 minutes before it begins returning results. Is postgres\n>>>>trying to load the whole query into memory before returning anything?\n>>>>Also, why would it choose not to use the index? It is properly\n>>>>estimating the # of rows returned. If I set enable_seqscan to off it\n>>>>is just as slow.\n>>\n>> 1; EXPLAIN ANALYZE.\n>>\n>> Note the time it takes. It should not swap, just read data from the disk\n>>(and not kill the machine).\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL",
"msg_date": "Thu, 16 Sep 2004 20:14:16 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
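The statistics target is set per column; a possible rendering of that suggestion for the table in this thread (100 is simply the value proposed above):

ALTER TABLE island_history ALTER COLUMN stock SET STATISTICS 100;
ALTER TABLE island_history ALTER COLUMN "date" SET STATISTICS 100;
ANALYZE island_history;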
{
"msg_contents": "Stephen,\n\n> \" -> Seq Scan on island_history (cost=0.00..417867.13 rows=292274\n> width=83) (actual time=346759.000..346759.000 rows=10 loops=1)\"\n\nTake a look at your row comparisons. When was the last time you ran ANALYZE?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 17 Sep 2004 10:40:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
{
"msg_contents": "On Thu, 16 Sep 2004 20:51:11 -0500, Stephen Crowley\n<[email protected]> wrote:\n>explain analyze select * from history where date='2004-09-07' and\n>stock='ORCL' LIMIT 10;\n\n>\" -> Index Scan using island_history_date_stock_time on\n>island_history (cost=0.00..183099.72 rows=102166 width=83) (actual\n>time=1612.000..1702.000 rows=10 loops=1)\"\n ^^\nLIMIT 10 hides what would be the most interesting info here. I don't\nbelieve that\n\tEXPLAIN ANALYSE SELECT * FROM history WHERE ...\nconsumes lots of memory. Please try it.\n\nAnd when you post the results please include your Postgres version, some\ninfo about hardware and OS, and your non-default settings, especially\nrandom_page_cost and effective_cache_size.\n\nMay I guess that the correlation of the physical order of tuples in your\ntable to the contents of the date column is pretty good (examine\ncorrelation in pg_stats) and that island_history_date_stock_time is a\n3-column index?\n\nIt is well known that the optimizer overestimates the cost of index\nscans in those situations. This can be compensated to a certain degree\nby increasing effective_cache_size and/or decreasing random_page_cost\n(which might harm other planner decisions).\n\nYou could also try\n\tCREATE INDEX history_date_stock ON history(\"date\", stock);\n\nThis will slow down INSERTs and UPDATEs, though.\n\nServus\n Manfred\n",
"msg_date": "Fri, 17 Sep 2004 22:44:05 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
{
"msg_contents": "Ok.. now I ran \"VACUUM FULL' and things seem to be working as they should.. \n\nexplain analyze select * from history where date='2004-09-07' and stock='MSFT';\n\nSeq Scan on island_history (cost=0.00..275359.13 rows=292274\nwidth=83) (actual time=50.000..411683.000 rows=265632 loops=1)\n Filter: ((date = '2004-09-07'::date) AND ((stock)::text = 'MSFT'::text))\nTotal runtime: 412703.000 ms\n\nrandom_page_cost and effective_cache_size are both default, 8 and 1000\n\nexplain analyze select * from history where date='2004-09-07' and stock='ORCL';\n\n\"Index Scan using island_history_date_stock_time on island_history \n(cost=0.00..181540.07 rows=102166 width=83) (actual\ntime=551.000..200268.000 rows=159618 loops=1)\"\n\" Index Cond: ((date = '2004-09-07'::date) AND ((stock)::text = 'ORCL'::text))\"\n\"Total runtime: 201009.000 ms\"\n\nSo now this in all in proportion and works as expected.. the question\nis, why would the fact that it needs to be vaccumed cause such a huge\nhit in performance? When i vacuumed it did free up nearly 25% of the\nspace.\n\n--Stephen\n\nOn Fri, 17 Sep 2004 22:44:05 +0200, Manfred Koizar <[email protected]> wrote:\n> On Thu, 16 Sep 2004 20:51:11 -0500, Stephen Crowley\n> <[email protected]> wrote:\n> >explain analyze select * from history where date='2004-09-07' and\n> >stock='ORCL' LIMIT 10;\n> \n> >\" -> Index Scan using island_history_date_stock_time on\n> >island_history (cost=0.00..183099.72 rows=102166 width=83) (actual\n> >time=1612.000..1702.000 rows=10 loops=1)\"\n> ^^\n> LIMIT 10 hides what would be the most interesting info here. I don't\n> believe that\n> EXPLAIN ANALYSE SELECT * FROM history WHERE ...\n> consumes lots of memory. Please try it.\n> \n> And when you post the results please include your Postgres version, some\n> info about hardware and OS, and your non-default settings, especially\n> random_page_cost and effective_cache_size.\n> \n> May I guess that the correlation of the physical order of tuples in your\n> table to the contents of the date column is pretty good (examine\n> correlation in pg_stats) and that island_history_date_stock_time is a\n> 3-column index?\n> \n> It is well known that the optimizer overestimates the cost of index\n> scans in those situations. This can be compensated to a certain degree\n> by increasing effective_cache_size and/or decreasing random_page_cost\n> (which might harm other planner decisions).\n> \n> You could also try\n> CREATE INDEX history_date_stock ON history(\"date\", stock);\n> \n> This will slow down INSERTs and UPDATEs, though.\n> \n> Servus\n> Manfred\n>\n",
"msg_date": "Fri, 17 Sep 2004 19:23:44 -0500",
"msg_from": "Stephen Crowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
{
"msg_contents": "On Fri, 17 Sep 2004 19:23:44 -0500, Stephen Crowley\n<[email protected]> wrote:\n>Seq Scan [...] rows=265632\n> Filter: ((date = '2004-09-07'::date) AND ((stock)::text = 'MSFT'::text))\n>Total runtime: 412703.000 ms\n>\n>random_page_cost and effective_cache_size are both default, 8 and 1000\n\nUsually random_page_cost is 4.0 by default. And your\neffective_cache_size setting is far too low for a modern machine.\n\n>\"Index Scan [...] rows=159618\n>\" Index Cond: ((date = '2004-09-07'::date) AND ((stock)::text = 'ORCL'::text))\"\n>\"Total runtime: 201009.000 ms\"\n\nExtrapolating this to 265000 rows you should be able to get the MSFT\nresult in ca. 330 seconds, if you can persuade the planner to choose an\nindex scan. Fiddling with random_page_cost and effective_cache_size\nmight do the trick.\n\n>So now this in all in proportion and works as expected.. the question\n>is, why would the fact that it needs to be vaccumed cause such a huge\n>hit in performance? When i vacuumed it did free up nearly 25% of the\n>space.\n\nSo before the VACCUM a seq scan would have taken ca. 550 seconds. Your\nMSFT query with LIMIT 10 took ca. 350 seconds. It's not implausible to\nassume that more than half of the table had to be scanned to find the\nfirst ten rows matching the filter condition.\n\nServus\n Manfred\n",
"msg_date": "Mon, 20 Sep 2004 09:31:11 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
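To experiment with these settings in a single session before touching postgresql.conf, something along these lines should do; the numbers are only illustrative (effective_cache_size is expressed in 8 kB pages in this era), and the index mirrors Manfred's earlier suggestion applied to the island_history table:

SET random_page_cost = 2;
SET effective_cache_size = 100000;   -- roughly 800 MB of OS cache; adjust to the machine
CREATE INDEX history_date_stock ON island_history ("date", stock);
EXPLAIN ANALYZE SELECT * FROM island_history
WHERE date = '2004-09-07' AND stock = 'MSFT';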
{
"msg_contents": "\n\nOn Tue, 14 Sep 2004, Stephen Crowley wrote:\n\n> Problem solved.. I set the fetchSize to a reasonable value instead of\n> the default of unlimited in the PreparedStatement and now the query\n> is . After some searching it seeems this is a common problem, would it\n> make sense to change the default value to something other than 0 in\n> the JDBC driver?\n\nIn the JDBC driver, setting the fetch size to a non-zero value means that \nthe query will be run using what the frontend/backend protocol calls a \nnamed statement. What this means on the backend is that the planner will \nnot be able to use the values from the query parameters to generate the \noptimum query plan and must use generic placeholders and create a generic \nplan. For this reason we have decided not to default to a non-zero \nfetch size. This is something whose default value could be set by a URL \nparameter if you think that is something that is really required.\n\nKris Jurka\n\n",
"msg_date": "Thu, 23 Sep 2004 18:22:15 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
},
{
"msg_contents": "Thanks for the explanation. So what sort of changes need to be made to\nthe client/server protocol to fix this problem?\n\n\n\nOn Thu, 23 Sep 2004 18:22:15 -0500 (EST), Kris Jurka <[email protected]> wrote:\n> \n> \n> On Tue, 14 Sep 2004, Stephen Crowley wrote:\n> \n> > Problem solved.. I set the fetchSize to a reasonable value instead of\n> > the default of unlimited in the PreparedStatement and now the query\n> > is . After some searching it seeems this is a common problem, would it\n> > make sense to change the default value to something other than 0 in\n> > the JDBC driver?\n> \n> In the JDBC driver, setting the fetch size to a non-zero value means that\n> the query will be run using what the frontend/backend protocol calls a\n> named statement. What this means on the backend is that the planner will\n> not be able to use the values from the query parameters to generate the\n> optimum query plan and must use generic placeholders and create a generic\n> plan. For this reason we have decided not to default to a non-zero\n> fetch size. This is something whose default value could be set by a URL\n> parameter if you think that is something that is really required.\n> \n> Kris Jurka\n> \n>\n",
"msg_date": "Thu, 23 Sep 2004 18:36:49 -0500",
"msg_from": "Stephen Crowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large # of rows in query extremely slow, not using index"
},
{
"msg_contents": "\n\nOn Thu, 23 Sep 2004, Stephen Crowley wrote:\n\n> Thanks for the explanation. So what sort of changes need to be made to\n> the client/server protocol to fix this problem?\n\nThe problem is that there is no way to indicate why you are using a \nparticular statement in the extended query protocol. For the JDBC driver \nthere are two potential reasons, streaming a ResultSet and using a server \nprepared statement. For the streaming as default case you desire there \nneeds to be a way to indicate that you don't want to create a generic \nserver prepared statement and that this query is really just for one time \nuse, so it can generate the best plan possible.\n\nAdditionally you can only stream ResultSets that are of type FORWARD_ONLY. \nIt would also be nice to be able to specify scrollability and holdability \nwhen creating a statement and the offset/direction when streaming data \nfrom a scrollable one.\n\nKris Jurka\n",
"msg_date": "Thu, 23 Sep 2004 18:47:59 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large # of rows in query extremely slow, not using"
}
] |
[
{
"msg_contents": "Hi,\n\nI found bulk-insert to perform very slow, when compared to MySQL / Oracle. All inserts were done in 1 transaction. However, mitigating factors here were:\n- Application was a .Net application using ODBC drivers\n- PostgreSQL 7.3 running on CYGWIN with cygipc daemon\n- Probably very bad tuning in the config file, if any tuning done at all\n- The application was issuing 'generic' SQL since it was generally used with Oracle and MySQL databases. So no tricks like using COPY or multiple rows with 1 INSERT statement. No stored procedures either.\n- When doing queries, most of the time the results were comparable to or better than MySQL (the only other database that I tested with myself).\n\n\nSo what I can say is, that if you want fast INSERT performance from PostgreSQL then you'll probably have to do some trickery that you wouldn't have to do with a default MySQL installation.\n\nregards,\n\n--Tim\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Markus Schaber\nSent: Tuesday, September 14, 2004 2:15 PM\nTo: PostgreSQL Performance List\nSubject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --\n\n\nHi, Mischa,\n\nOn Sun, 12 Sep 2004 20:47:17 GMT\nMischa Sandberg <[email protected]> wrote:\n\n> On the other hand, if you do warehouse-style loading (Insert, or PG \n> COPY, into a temp table; and then 'upsert' into the perm table), I can \n> guarantee 2500 inserts/sec is no problem.\n\nAs we can forsee that we'll have similar insert rates to cope with in\nthe not-so-far future, what do you mean with 'upsert'? Do you mean a\nstored procedure that iterates over the temp table?\n\nGenerally, what is the fastest way for doing bulk processing of \nupdate-if-primary-key-matches-and-insert-otherwise operations?\n\nThanks,\nMarkus Schaber\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Tue, 14 Sep 2004 14:42:20 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
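For completeness, the COPY trick mentioned above is plain SQL/psql; a minimal sketch with made-up table and file names:

-- server-side COPY (file must be readable by the backend; superuser only):
COPY orders FROM '/tmp/orders.tab';

-- or client-side from psql, batched inside one transaction:
BEGIN;
\copy orders from 'orders.tab'
COMMIT;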
{
"msg_contents": "On Tue, Sep 14, 2004 at 02:42:20PM +0200, Leeuw van der, Tim wrote:\n> - PostgreSQL 7.3 running on CYGWIN with cygipc daemon\n\nIsn't this doomed to kill your performance anyhow?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 14 Sep 2004 15:33:05 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "In article <BF88DF69D9E2884B9BE5160DB2B97A85010D6D5F@nlshl-exch1.eu.uis.unisys.com>,\n\"Leeuw van der, Tim\" <[email protected]> writes:\n\n> So what I can say is, that if you want fast INSERT performance from\n> PostgreSQL then you'll probably have to do some trickery that you\n> wouldn't have to do with a default MySQL installation.\n\nI think the word \"INSERT\" is superfluous in the above sentence ;-)\n\nContrary to MySQL, you can't expect decent PostgreSQL performance on\ndecent hardware without some tuning.\n\n",
"msg_date": "14 Sep 2004 16:15:43 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
}
] |
[
{
"msg_contents": "\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Steinar H. Gunderson\nSent: Tuesday, September 14, 2004 3:33 PM\nTo: PostgreSQL Performance List\nSubject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --\n\n\n> On Tue, Sep 14, 2004 at 02:42:20PM +0200, Leeuw van der, Tim wrote:\n> > - PostgreSQL 7.3 running on CYGWIN with cygipc daemon\n> \n> Isn't this doomed to kill your performance anyhow?\n\nYes and no, therefore I mentioned it explicitly as one of the caveats. When doing selects I could get performance very comparable to MySQL, so I don't want to blame poor insert-performance on cygwin/cygipc per se.\nI'm not working on this app. anymore and don't have a working test-environment for it anymore so I cannot retest now with more recent versions.\n\nregards,\n\n--Tim\n\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n",
"msg_date": "Tue, 14 Sep 2004 16:00:58 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "What MySQl-table-type did you use?\nWas it \"MyISAM\" which don't supports transactions ?\nYes I read about that bulk-inserts with this table-type are very fast.\nIn Data Warehouse one often don't need transactions.\n\n\nLeeuw van der, Tim schrieb:\n> \n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]]On Behalf Of Steinar H. \n> Gunderson\n> Sent: Tuesday, September 14, 2004 3:33 PM\n> To: PostgreSQL Performance List\n> Subject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --\n> \n> \n> > On Tue, Sep 14, 2004 at 02:42:20PM +0200, Leeuw van der, Tim wrote:\n> > > - PostgreSQL 7.3 running on CYGWIN with cygipc daemon\n> >\n> > Isn't this doomed to kill your performance anyhow?\n> \n> Yes and no, therefore I mentioned it explicitly as one of the caveats. \n> When doing selects I could get performance very comparable to MySQL, so \n> I don't want to blame poor insert-performance on cygwin/cygipc per se.\n> I'm not working on this app. anymore and don't have a working \n> test-environment for it anymore so I cannot retest now with more recent \n> versions.\n> \n> regards,\n> \n> --Tim\n> \n> >\n> > /* Steinar */\n> > --\n> > Homepage: http://www.sesse.net/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n",
"msg_date": "Tue, 14 Sep 2004 16:23:02 +0200",
"msg_from": "Michael Kleiser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
}
] |
[
{
"msg_contents": ">From: \"Harald Lau (Sector-X)\" <[email protected]>\n...\n> > From: \"Mischa Sandberg\" <[email protected]>\n> >\n> > > If your company is currently happy with MySQL, there probably are\n> > > other (nontechnical) reasons to stick with it. I'm impressed that\n> > > you'd consider reconsidering PG.\n> >\n> > I'd like to second Mischa on that issue.\n>\n>Though both of you are right from my point of view, I don't think\n>it's very useful to discuss this item here.\n>\n\nIt is kinda windy for the list, but the point is that a big part of \nperformance is developer expectation and user expectation. I'd hope to lower \nexpectations before we see an article in eWeek. Perhaps this thread should \nmove to the advocacy list until the migration needs specific advice.\n\n_________________________________________________________________\nGet ready for school! Find articles, homework help and more in the Back to \nSchool Guide! http://special.msn.com/network/04backtoschool.armx\n\n",
"msg_date": "Tue, 14 Sep 2004 14:20:50 -0400",
"msg_from": "\"aaron werman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables"
}
] |
[
{
"msg_contents": "Hi,\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Michael Kleiser\nSent: Tuesday, September 14, 2004 4:23 PM\nTo: Leeuw van der, Tim\nCc: Steinar H. Gunderson; PostgreSQL Performance List\nSubject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --\n\n\n> What MySQl-table-type did you use?\n> Was it \"MyISAM\" which don't supports transactions ?\n> Yes I read about that bulk-inserts with this table-type are very fast.\n> In Data Warehouse one often don't need transactions.\n\nAlthough totally beyond the scope of this thread, we used InnoDB tables with MySQL because of the transaction-support.\n\nregards,\n\n--Tim\n",
"msg_date": "Wed, 15 Sep 2004 08:51:00 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
}
] |
[
{
"msg_contents": "\nJoe Conway <[email protected]> wrote on 15.09.2004, 06:30:24:\n> Chris Browne wrote:\n> > Might we set up the view as:\n> > \n> > create view combination_of_logs as\n> > select * from table_1 where txn_date between 'this' and 'that' \n> > union all\n> > select * from table_2 where txn_date between 'this2' and 'that2' \n> > union all\n> > select * from table_3 where txn_date between 'this3' and 'that3' \n> > union all\n> > select * from table_4 where txn_date between 'this4' and 'that4' \n> > union all\n> > ... ad infinitum\n> > union all\n> > select * from table_n where txn_date > 'start_of_partition_n';\n> > \n> > and expect that to help, as long as the query that hooks up to this\n> > has date constraints?\n> > \n> > We'd have to regenerate the view with new fixed constants each time we\n> > set up the tables, but that sounds like it could work...\n> \n> That's exactly what we're doing, but using inherited tables instead of a \n> union view. With inheritance, there is no need to rebuild the view each \n> time a table is added or removed. Basically, in our application, tables \n> are partitioned by either month or week, depending on the type of data \n> involved, and queries are normally date qualified.\n> \n> We're not completely done with our data conversion (from a commercial \n> RDBMSi), but so far the results have been excellent. Similar to what \n> others have said in this thread, the conversion involved restructuring \n> the data to better suit Postgres, and the application (data \n> analysis/mining vs. the source system which is operational). As a result \n> we've compressed a > 1TB database down to ~0.4TB, and seen at least one \n> typical query reduced from ~9 minutes down to ~40 seconds.\n\nSounds interesting.\n\nThe performance gain comes from partition elimination of the inherited\ntables under the root?\n\nI take it the compression comes from use of arrays, avoiding the need\nfor additional rows and key overhead?\n\nBest Regards, Simon Riggs\n",
"msg_date": "Wed, 15 Sep 2004 11:10:01 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?iso-8859-1?Q?Re:_Re:__Data_Warehouse_Reevaluation_-_MySQL_vs_Postgres_--?="
},
{
"msg_contents": "[email protected] wrote:\n> Joe Conway <[email protected]> wrote on 15.09.2004, 06:30:24:\n>>We're not completely done with our data conversion (from a commercial \n>>RDBMSi), but so far the results have been excellent. Similar to what \n>>others have said in this thread, the conversion involved restructuring \n>>the data to better suit Postgres, and the application (data \n>>analysis/mining vs. the source system which is operational). As a result \n>>we've compressed a > 1TB database down to ~0.4TB, and seen at least one \n>>typical query reduced from ~9 minutes down to ~40 seconds.\n> \n> Sounds interesting.\n> \n> The performance gain comes from partition elimination of the inherited\n> tables under the root?\n> \n> I take it the compression comes from use of arrays, avoiding the need\n> for additional rows and key overhead?\n\nSorry, in trying to be concise I was not very clear. I'm using the term \ncompression very generally here. I'll try to give a bit more background,\n\nThe original data source is a database schema designed for use by an \noperational application that my company sells to provide enhanced \nmanagement of equipment that we also sell. The application needs to be \nvery flexible in exactly what data it stores in order to be useful \nacross a wide variety of equipment models and versions. In order to do \nthat there is a very large central \"transaction\" table that stores \nname->value pairs in varchar columns. The name->value pairs come from \nparsed output of the equipment, and as such there is a fair amount of \nredundancy and unneeded data that ends up getting stored. At each \ninstallation in the field this table can get very large (> billion \nrows). Additionally the application prematerializes a variety of \nsummaries for use by the operators using the GUI.\n\nWe collect the data exported from each of the systems in the field and \naccumulate it in a single central database for data mining and analysis. \nThis is the database that is actually being converted. By compression I \nreally mean that unneeded and redundant data is being stripped out, and \ndata known to be of a certain datatype is stored in that type instead of \nvarchar (e.g. values known to be int are stored as int). Also the \nsummaries are not being converted (although we do some post processing \nto create new materialized summaries).\n\nMy points in telling this were:\n - the use of inherited tables to partition this huge number of rows and\n yet allow simple query access to it seems to work well, at least in\n early validation tests\n - had we simply taken the original database and \"slammed\" it into\n Postgres with no further thought, we would not have seen the big\n improvements, and thus the project might have been seen as a failure\n (even though it saves substantial $)\n\nHope that's a bit more clear. I'm hoping to write up a more detailed \ncase study once we've cut the Postgres system into production and the \ndust settles a bit.\n\nJoe\n",
"msg_date": "Wed, 15 Sep 2004 08:15:50 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
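A bare-bones sketch of the monthly inherited-table scheme described above, with invented table and column names (the real schema naturally carries more columns, indexes and constraints):

CREATE TABLE txn (txn_date date NOT NULL, amount numeric);   -- parent; holds no rows itself
CREATE TABLE txn_2004_09 () INHERITS (txn);                  -- one child per month
CREATE INDEX txn_2004_09_date_idx ON txn_2004_09 (txn_date);

INSERT INTO txn_2004_09 VALUES ('2004-09-15', 19.95);        -- loads target the current child

SELECT count(*) FROM txn                                     -- queries on the parent see all children
WHERE txn_date BETWEEN '2004-09-01' AND '2004-09-30';

DROP TABLE txn_2003_09;   -- rolling off an old child is a DROP, not a mass DELETE plus VACUUM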
{
"msg_contents": "Joe,\n\n> - the use of inherited tables to partition this huge number of rows and\n> yet allow simple query access to it seems to work well, at least in\n> early validation tests\n> - had we simply taken the original database and \"slammed\" it into\n> Postgres with no further thought, we would not have seen the big\n> improvements, and thus the project might have been seen as a failure\n> (even though it saves substantial $)\n\nAny further thoughts on developing this into true table partitioning?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 15 Sep 2004 10:28:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Josh Berkus wrote:\n>> - the use of inherited tables to partition this huge number of rows and\n>> yet allow simple query access to it seems to work well, at least in\n>> early validation tests\n>> - had we simply taken the original database and \"slammed\" it into\n>> Postgres with no further thought, we would not have seen the big\n>> improvements, and thus the project might have been seen as a failure\n>> (even though it saves substantial $)\n> \n> \n> Any further thoughts on developing this into true table partitioning?\n> \n\nJust that I'd love to see it happen ;-)\n\nMaybe someday I'll be able to find the time to work on it myself, but \nfor the moment I'm satisfied with the workarounds we've made.\n\nJoe\n",
"msg_date": "Wed, 15 Sep 2004 10:34:53 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "Simon Riggs wrote:\n> Joe,\n> \n> Your application is very interesting. I've just read your OSCON paper. I'd\n> like to talk more about that. Very similar to Kalido.\n> \n> ...but back to partitioning momentarily: Does the performance gain come from\n> partition elimination of the inherited tables under the root?\n\nI think the major part of the peformance gain comes from the fact that \nthe source database has different needs in terms of partitioning \ncriteria because of it's different purpose. The data is basically \npartitioned by customer installation instead of by date. Our converted \nscheme partitions by date, which is in line with the analytical queries \nrun at the corporate office. Again, this is an argument in favor of not \nsimply porting what you're handed.\n\nWe might get similar query performance with a single large table and \nmultiple partial indexes (e.g. one per month), but there would be one \ntradeoff and one disadvantage to that:\n1) The indexes would need to be generated periodically -- this is a \ntradeoff since we currently need to create inherited tables at the same \nperiodicity\n2) It would be much more difficult to \"roll off\" a month's worth of data \nwhen needed. The general idea is that each month we create a new monthly \ntable, then archive and drop the oldest monthly table. If all the data \nwere in one big table we would have to delete many millions of rows from \na (possibly) multibillion row table, and then vacuum that table -- no \nthanks ;-)\n\nJoe\n",
"msg_date": "Wed, 15 Sep 2004 13:56:17 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
},
{
"msg_contents": "\nJoe,\n\nYour application is very interesting. I've just read your OSCON paper. I'd\nlike to talk more about that. Very similar to Kalido.\n\n...but back to partitioning momentarily: Does the performance gain come from\npartition elimination of the inherited tables under the root?\n\nBest Regards, Simon Riggs\n\n\n\n",
"msg_date": "Wed, 15 Sep 2004 21:59:06 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres --"
}
] |
[
{
"msg_contents": "Why would postgres use a different query plan for declared cursors than \nwithout?\n\nI have a relatively simple query that takes about 150ms using explain \nanalyze. However, when I wrap the same query in a declared cursor \nstatement, the subsequent fetch statement takes almost 30seconds. For \nsome reason, the planner decided to do a nested loop left join instead \nof a hash left join. Does anyone know why the planner would choose this \ncourse?\n\nFor those interested, the results of the planner are:\n\nEXPLAIN ANALYZE SELECT a.wb_id, a.group_code, a.area, a.type, a.source, \na.fcode, asbinary((a.the_geom), 'XDR'), c.name, b.gnis_id FROM \ncsn_waterbodies a LEFT JOIN (csn_named_waterbodies as b JOIN \nall_gnis_info as c ON b.gnis_id = c.gnis_id) on a.wb_id = b.wb_id WHERE \nthe_geom && GeometryFromText('POLYGON ((998061.4211119856 \n820217.228917891, 1018729.3748344192 820217.228917891, \n1018729.3748344192 827989.3006519538, 998061.4211119856 \n827989.3006519538, 998061.4211119856 820217.228917891))', 42102);\n\n \nQUERY \nPLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nHash Left Join (cost=1554.46..1611.26 rows=5 width=1048) (actual \ntime=144.620..150.277 rows=208 loops=1)\n Hash Cond: (\"outer\".wb_id = \"inner\".wb_id)\n -> Index Scan using csn_waterbodies_the_geom_idx on csn_waterbodies \na (cost=0.00..6.40 rows=5 width=1026) (actual time=0.192..2.838 \nrows=208 loops=1)\n Index Cond: (the_geom && 'SRID=42102;POLYGON((998061.421111986 \n820217.228917891,1018729.37483442 820217.228917891,1018729.37483442 \n827989.300651954,998061.421111986 827989.300651954,998061.421111986 \n820217.228917891))'::geometry)\n Filter: (the_geom && 'SRID=42102;POLYGON((998061.421111986 \n820217.228917891,1018729.37483442 820217.228917891,1018729.37483442 \n827989.300651954,998061.421111986 827989.300651954,998061.421111986 \n820217.228917891))'::geometry)\n -> Hash (cost=1535.13..1535.13 rows=7734 width=26) (actual \ntime=143.717..143.717 rows=0 loops=1)\n -> Merge Join (cost=0.00..1535.13 rows=7734 width=26) (actual \ntime=6.546..134.906 rows=7203 loops=1)\n Merge Cond: (\"outer\".gnis_id = \"inner\".gnis_id)\n -> Index Scan using csn_named_waterbodies_gnis_id_idx on \ncsn_named_waterbodies b (cost=0.00..140.37 rows=7215 width=8) (actual \ntime=0.035..10.796 rows=7204 loops=1)\n -> Index Scan using all_gnis_info_gnis_id_idx on \nall_gnis_info c (cost=0.00..1210.19 rows=41745 width=22) (actual \ntime=0.014..60.387 rows=42757 loops=1)\nTotal runtime: 150.713 ms\n(11 rows)\n\n\nDECLARE thread_33000912 CURSOR FOR SELECT ...\n\n \nQUERY \nPLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nNested Loop Left Join (cost=0.00..8165.43 rows=5 width=1048)\n Join Filter: (\"outer\".wb_id = \"inner\".wb_id)\n -> Index Scan using csn_waterbodies_the_geom_idx on csn_waterbodies \na (cost=0.00..6.40 rows=5 width=1026)\n Index Cond: (the_geom && 'SRID=42102;POLYGON((998061.421111986 \n820217.228917891,1018729.37483442 820217.228917891,1018729.37483442 \n827989.300651954,998061.421111986 827989.300651954,998061.421111986 \n820217.228917891))'::geometry)\n Filter: (the_geom && 
'SRID=42102;POLYGON((998061.421111986 \n820217.228917891,1018729.37483442 820217.228917891,1018729.37483442 \n827989.300651954,998061.421111986 827989.300651954,998061.421111986 \n820217.228917891))'::geometry)\n -> Merge Join (cost=0.00..1535.13 rows=7734 width=26)\n Merge Cond: (\"outer\".gnis_id = \"inner\".gnis_id)\n -> Index Scan using csn_named_waterbodies_gnis_id_idx on \ncsn_named_waterbodies b (cost=0.00..140.37 rows=7215 width=8)\n -> Index Scan using all_gnis_info_gnis_id_idx on all_gnis_info \nc (cost=0.00..1210.19 rows=41745 width=22)\n(9 rows)\n\n\n\nCheers,\nKevin\n\n-- \nKevin Neufeld,\nRefractions Research Inc.,\[email protected]\nPhone: (250) 383-3022 \nFax: (250) 383-2140 \n\n",
"msg_date": "Wed, 15 Sep 2004 13:08:30 -0700",
"msg_from": "Kevin Neufeld <[email protected]>",
"msg_from_op": true,
"msg_subject": "declared cursor uses slow plan"
},
{
"msg_contents": "Kevin Neufeld <[email protected]> writes:\n> I have a relatively simple query that takes about 150ms using explain \n> analyze. However, when I wrap the same query in a declared cursor \n> statement, the subsequent fetch statement takes almost 30seconds. For \n> some reason, the planner decided to do a nested loop left join instead \n> of a hash left join. Does anyone know why the planner would choose this \n> course?\n\nPlans for cursors are optimized partly for startup speed as opposed to\ntotal time, on the assumption that you'd rather get some of the rows\nsooner so you can crunch on them.\n\nProbably there should be a knob you can fool with to adjust the strength\nof the effect, but at present I think it's hard-wired.\n\nThe real problem here of course is that the total cost of the nestloop\nis being underestimated so badly (the estimate is only 5x more than the\nhash join where reality is 200x more). It looks like this is mainly\nbecause the number of matching rows from csn_waterbodies is badly\nunderestimated, which comes from the fact that we have no useful\nstatistics for geometric operators :-(. I think that the PostGIS crew\nis working that problem but I have no idea how far along they are...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Sep 2004 14:07:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: declared cursor uses slow plan "
}
] |
[
{
"msg_contents": "\nChris Browne <[email protected]> wrote on 15.09.2004, 04:34:53:\n> [email protected] (\"Simon Riggs\") writes:\n> > Well, its fairly straightforward to auto-generate the UNION ALL view,\nand\n> > important as well, since it needs to be re-specified each time a new\n> > partition is loaded or an old one is cleared down. The main point is\nthat\n> > the constant placed in front of each table must in some way relate to\nthe\n> > data, to make it useful in querying. If it is just a unique constant,\nchosen\n> > at random, it won't do much for partition elimination. So, that tends to\n> > make the creation of the UNION ALL view an application/data specific\nthing.\n>\n> Ah, that's probably a good thought.\n>\n> When we used big \"UNION ALL\" views, it was with logging tables, where\n> there wasn't really any meaningful distinction between partitions.\n>\n> So you say that if the VIEW contains, within it, meaningful constraint\n> information, that can get applied to chop out irrelevant bits?\n>\n> That suggests a way of resurrecting the idea...\n>\n> Might we set up the view as:\n>\n> create view combination_of_logs as\n> select * from table_1 where txn_date between 'this' and 'that'\n> union all\n> select * from table_2 where txn_date between 'this2' and 'that2'\n> union all\n> select * from table_3 where txn_date between 'this3' and 'that3'\n> union all\n> select * from table_4 where txn_date between 'this4' and 'that4'\n> union all\n> ... ad infinitum\n> union all\n> select * from table_n where txn_date > 'start_of_partition_n';\n>\n> and expect that to help, as long as the query that hooks up to this\n> has date constraints?\n>\n\nThat way of phrasing the view can give you the right answer to the\nquery, but does not exclude partitions.\n\nWith the above way of doing things, you end up with predicate phrases of\nthe form ((PARTLIMITLO < partcol) AND (PARTLIMITHI > partcol) AND\n(partcol > QUERYLIMITLO) AND (partcol < QUERYLIMITHI))\n...if the values in capitals are constants, then this should evaluate to\na true or false value for each partition table. The optimizer doesn't\nyet do this....\n\nIf you specify the view the way I showed, then the predicate query\nbecomes a comparison of constants, which DOES get evaluated prior to\nfull execution....you will see this as a \"one time test: false\" in the\nEXPLAIN.\n\nThe way you've phrased the view is the more obvious way to phrase it,\nand I'd spent a few days trying to work out how to solve the algebra\nabove in code....but that was wasted effort.\n\nAnyway, if you use constants you can still specify ranges and betweens\nand have them work... hence my example showed date-like integers - but\nI don't think it just applies to one datatype.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 15 Sep 2004 21:54:22 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Warehouse Reevaluation - MySQL vs Postgres -- merge tables?"
}
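A small concrete rendering of what Simon describes, with invented table names and date-like integer labels; the essential point is that part_key is a literal constant in each branch, so a WHERE clause on it folds to a constant test per partition:

CREATE VIEW combination_of_logs AS
SELECT 200407 AS part_key, * FROM logs_2004_07
UNION ALL
SELECT 200408 AS part_key, * FROM logs_2004_08
UNION ALL
SELECT 200409 AS part_key, * FROM logs_2004_09;

-- inside each branch the predicate below becomes e.g. "200407 = 200409",
-- and EXPLAIN shows the non-matching branches with a one-time test of false,
-- so those tables are never scanned:
SELECT * FROM combination_of_logs
WHERE part_key = 200409 AND txn_date >= '2004-09-10';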
] |
[
{
"msg_contents": "I'm working on a dating/personals/match-making site, that has used many \ndifferent methods of \"match-making\", that all seem to be very slow. One I am \nattempting now that seems to be an efficient method of storage, but not the \nbest for indexing, is using bitwise operators to compare one person's profile \nto another's.\n\nThis method seems to be very fast on a small scale, but I am dealing with a \nlarge user-base here, in excess of 200,000 users that will be executing this \nsearch function every time they login (the search results of their profile \nwill appear on the main page after they have logged in). I've opted to use \n\"label tables\" for each possible set of answers. (i.e: Marital Status)\n\nFor this table, the set of bits -- bit(5) -- are represented as such:\n\n+-----+------------+\n| Bit | Meaning |\n+-----+------------+\n| 1 | single |\n| 2 | separated |\n| 3 | married |\n| 4 | divorced |\n| 5 | widowed |\n+-----+------------+\n\nHere's the structure of the marital status table:\n\n# \\d marital_matrix \nTable \"public.marital_matrix\"\n Column | Type | Modifiers \n-----------+----------------+-----------------------------------------------------------------------\n member_id | integer | not null default \nnextval('public.marital_matrix_member_id_seq'::text)\n status | bit varying(5) | not null default (0)::bit(5)\n p_status | bit varying(5) | not null default (0)::bit(5)\nIndexes:\n \"marital_matrix_pkey\" PRIMARY KEY, btree (member_id)\n \"idx_marital_matrix\" btree ((status::\"bit\" & p_status::\"bit\"))\n \"idx_marital_matrix_single\" btree ((status::\"bit\" & p_status::\"bit\"))\n \"idx_marital_p_status\" btree (p_status)\n \"idx_marital_status\" btree (status)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (member_id) REFERENCES members(member_id) ON DELETE \nCASCADE DEFERRABLE INITIALLY DEFERRED\n\nTo give you an idea of the selectivity (NOTE: there are only 50,000 rows, a \nsmaller sample than what I will actually be using):\n\ndatingsite=> select count(*),status,p_status from marital_matrix group by \nstatus,p_status;\n count | status | p_status \n-------+--------+----------\n 89 | 00001 | 00000\n 1319 | 00010 | 00000\n 2465 | 00100 | 00000\n 1 | 00100 | 11111\n 46117 | 10000 | 00000\n\nhere is the user I'll be comparing against, which has selected that he be \nmatched with any but married people:\n\ndatingsite=> SELECT * FROM marital_matrix WHERE member_id = 21;\n member_id | status | p_status \n-----------+--------+----------\n 21 | 10000 | 11011\n(1 row)\n\n\n\n\nHere are a few possible comparison methods I can think of (NOTE: tests were \nrun on a 2.4Ghz Intel CPU w/ 256M RAM on FreeBSD 4.10:\n\n\nMETHOD 1: Any bit that is set in p_status (prefered marital status) of the \nsearching user should be set in the potential match's marital status. This is \nthe method I'd like to improve, if possible. 
Running the query twice didn't \nproduce a different runtime.\n\nEXPLAIN ANALYZE\nSELECT\n m2.member_id\nFROM\n marital_matrix m1, marital_matrix m2\nWHERE\n m1.member_id = 21 AND\n m2.status & m1.p_status != B'00000';\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2357.79 rows=49742 width=4) (actual \ntime=18.062..708.469 rows=47525 loops=1)\n Join Filter: (((\"inner\".status)::\"bit\" & (\"outer\".p_status)::\"bit\") <> \nB'00000'::\"bit\")\n -> Index Scan using marital_matrix_pkey on marital_matrix m1 \n(cost=0.00..5.01 rows=1 width=9) (actual time=0.035..0.045 rows=1 loops=1)\n Index Cond: (member_id = 21)\n -> Seq Scan on marital_matrix m2 (cost=0.00..1602.91 rows=49991 width=13) \n(actual time=17.966..255.529 rows=49991 loops=1)\n Total runtime: 905.694 ms\n(6 rows)\n\n\nMETHOD 2: Specifying the value (I don't think this would make a difference, \nbut I'll post anyways):\n\nEXPLAIN ANALYZE\nSELECT\n member_id\nFROM\n marital_matrix\nWHERE\n status & B'11011' != B'00000';\n\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Seq Scan on marital_matrix (cost=0.00..1852.87 rows=49742 width=4) (actual \ntime=18.113..281.101 rows=47525 loops=1)\n Filter: (((status)::\"bit\" & B'11011'::\"bit\") <> B'00000'::\"bit\")\n Total runtime: 480.836 ms\n(3 rows)\n\n\nMETHOD 3: Checking for one bit only. This is definitely not a \"real world\" \nexample and unacceptable since the p_status column can and will have multiple \nbits. For categories other than \"Marital Status\", such as \"Prefered Hair \nColor\", the users are likely to select multiple bits (they choose all that \napply). This query does use the index, but is still not very fast at all:\n\nEXPLAIN ANALYZE\nSELECT\n member_id\nFROM\n marital_matrix m1\nWHERE\n status & B'10000' = B'10000';\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_marital_matrix_single on marital_matrix m1 \n(cost=0.00..903.59 rows=250 width=4) (actual time=0.042..258.907 rows=46117 \nloops=1)\n Index Cond: (((status)::\"bit\" & B'10000'::\"bit\") = B'10000'::\"bit\")\n Total runtime: 451.162 ms\n(3 rows)\n\nMETHOD 4: Using an IN statement. This method seems to be very fussy about \nusing the index, and I have at some point made it use the index when there \nare less than 3 possibilites. 
Also, for fields other than Marital Status, \nusers will be able to select many bits for their own profile, which means \nthere would be many permutations:\n\nEXPLAIN ANALYZE\nSELECT\n member_id\nFROM\n marital_matrix\nWHERE\n status & B'11011' IN (B'10000',B'01000');\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on marital_matrix (cost=0.00..2602.73 rows=993 width=4) (actual \ntime=17.845..288.279 rows=47525 loops=1)\n Filter: ((((status)::\"bit\" & B'11011'::\"bit\") = B'10000'::\"bit\") OR \n(((status)::\"bit\" & B'11011'::\"bit\") = B'01000'::\"bit\") OR (((status)::\"bit\" \n& B'11011'::\"bit\") = B'00010'::\"bit\") OR (((status)::\"bit\" & B'11011'::\"bit\") \n= B'00001'::\"bit\"))\n Total runtime: 488.651 ms\n(3 rows)\n\n\nMethod 3 is the only one that used the index, but the only really acceptable \nmethod here is Method 1.\n\nMy questions are...\n- Is there any hope in getting this to use an efficient index?\n- Any mathmaticians know if there is a way to reorder my bitwise comparison to \nhave the operator use = and not an != (perhaps to force an index)? (AFAIK, \nthe answer to the second question is no)\n\nIf anyone could offer any performance tips here I'd really appreciate it. I \nimagine that having this schema wouldn't last an hour with the amount of CPU \ncycles it would be consuming on math operations.\n\nAlso, I have read the thread that was posted here by Daniel in August:\nhttp://archives.postgresql.org/pgsql-performance/2004-08/msg00328.php\n\nI have spoke with Daniel on this issue and we both agree it's very difficult \nto find a solution that can scale to very large sites.\n\nI would very much appreciate any advice that some experienced users may have \nto offer me for such a situation. TIA\n\nPatrick\n",
"msg_date": "Thu, 16 Sep 2004 01:41:37 -0600",
"msg_from": "Patrick Clery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Comparing user attributes with bitwise operators"
},
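One untested alternative, sketched here purely as an idea and not something from the thread: a partial index per status bit sidesteps the "!=" problem for a fixed single-bit test, because the planner can use a partial index whenever the query's predicate matches the index predicate. The index name below is made up, this only helps if the preference is expanded into constant per-bit tests on the application side, and whether the planner actually picks the index still depends on how selective each bit is.

-- Sketch only: one partial index per marital-status bit.
CREATE INDEX idx_marital_status_bit1 ON marital_matrix (member_id)
WHERE (status & B'10000') <> B'00000';

-- A query that repeats exactly the same expression can then use it:
SELECT member_id
FROM marital_matrix
WHERE (status & B'10000') <> B'00000';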
{
"msg_contents": "Sounds like you want a many-to-many table that maps user_ids to match_ids\n\nThen you can put an index over (user_id, match_id) and the search will \nbe very fast.\n\nChris\n\nPatrick Clery wrote:\n> I'm working on a dating/personals/match-making site, that has used many \n> different methods of \"match-making\", that all seem to be very slow. One I am \n> attempting now that seems to be an efficient method of storage, but not the \n> best for indexing, is using bitwise operators to compare one person's profile \n> to another's.\n> \n> This method seems to be very fast on a small scale, but I am dealing with a \n> large user-base here, in excess of 200,000 users that will be executing this \n> search function every time they login (the search results of their profile \n> will appear on the main page after they have logged in). I've opted to use \n> \"label tables\" for each possible set of answers. (i.e: Marital Status)\n> \n> For this table, the set of bits -- bit(5) -- are represented as such:\n> \n> +-----+------------+\n> | Bit | Meaning |\n> +-----+------------+\n> | 1 | single |\n> | 2 | separated |\n> | 3 | married |\n> | 4 | divorced |\n> | 5 | widowed |\n> +-----+------------+\n> \n> Here's the structure of the marital status table:\n> \n> # \\d marital_matrix \n> Table \"public.marital_matrix\"\n> Column | Type | Modifiers \n> -----------+----------------+-----------------------------------------------------------------------\n> member_id | integer | not null default \n> nextval('public.marital_matrix_member_id_seq'::text)\n> status | bit varying(5) | not null default (0)::bit(5)\n> p_status | bit varying(5) | not null default (0)::bit(5)\n> Indexes:\n> \"marital_matrix_pkey\" PRIMARY KEY, btree (member_id)\n> \"idx_marital_matrix\" btree ((status::\"bit\" & p_status::\"bit\"))\n> \"idx_marital_matrix_single\" btree ((status::\"bit\" & p_status::\"bit\"))\n> \"idx_marital_p_status\" btree (p_status)\n> \"idx_marital_status\" btree (status)\n> Foreign-key constraints:\n> \"$1\" FOREIGN KEY (member_id) REFERENCES members(member_id) ON DELETE \n> CASCADE DEFERRABLE INITIALLY DEFERRED\n> \n> To give you an idea of the selectivity (NOTE: there are only 50,000 rows, a \n> smaller sample than what I will actually be using):\n> \n> datingsite=> select count(*),status,p_status from marital_matrix group by \n> status,p_status;\n> count | status | p_status \n> -------+--------+----------\n> 89 | 00001 | 00000\n> 1319 | 00010 | 00000\n> 2465 | 00100 | 00000\n> 1 | 00100 | 11111\n> 46117 | 10000 | 00000\n> \n> here is the user I'll be comparing against, which has selected that he be \n> matched with any but married people:\n> \n> datingsite=> SELECT * FROM marital_matrix WHERE member_id = 21;\n> member_id | status | p_status \n> -----------+--------+----------\n> 21 | 10000 | 11011\n> (1 row)\n> \n> \n> \n> \n> Here are a few possible comparison methods I can think of (NOTE: tests were \n> run on a 2.4Ghz Intel CPU w/ 256M RAM on FreeBSD 4.10:\n> \n> \n> METHOD 1: Any bit that is set in p_status (prefered marital status) of the \n> searching user should be set in the potential match's marital status. This is \n> the method I'd like to improve, if possible. 
Running the query twice didn't \n> produce a different runtime.\n> \n> EXPLAIN ANALYZE\n> SELECT\n> m2.member_id\n> FROM\n> marital_matrix m1, marital_matrix m2\n> WHERE\n> m1.member_id = 21 AND\n> m2.status & m1.p_status != B'00000';\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..2357.79 rows=49742 width=4) (actual \n> time=18.062..708.469 rows=47525 loops=1)\n> Join Filter: (((\"inner\".status)::\"bit\" & (\"outer\".p_status)::\"bit\") <> \n> B'00000'::\"bit\")\n> -> Index Scan using marital_matrix_pkey on marital_matrix m1 \n> (cost=0.00..5.01 rows=1 width=9) (actual time=0.035..0.045 rows=1 loops=1)\n> Index Cond: (member_id = 21)\n> -> Seq Scan on marital_matrix m2 (cost=0.00..1602.91 rows=49991 width=13) \n> (actual time=17.966..255.529 rows=49991 loops=1)\n> Total runtime: 905.694 ms\n> (6 rows)\n> \n> \n> METHOD 2: Specifying the value (I don't think this would make a difference, \n> but I'll post anyways):\n> \n> EXPLAIN ANALYZE\n> SELECT\n> member_id\n> FROM\n> marital_matrix\n> WHERE\n> status & B'11011' != B'00000';\n> \n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------\n> Seq Scan on marital_matrix (cost=0.00..1852.87 rows=49742 width=4) (actual \n> time=18.113..281.101 rows=47525 loops=1)\n> Filter: (((status)::\"bit\" & B'11011'::\"bit\") <> B'00000'::\"bit\")\n> Total runtime: 480.836 ms\n> (3 rows)\n> \n> \n> METHOD 3: Checking for one bit only. This is definitely not a \"real world\" \n> example and unacceptable since the p_status column can and will have multiple \n> bits. For categories other than \"Marital Status\", such as \"Prefered Hair \n> Color\", the users are likely to select multiple bits (they choose all that \n> apply). This query does use the index, but is still not very fast at all:\n> \n> EXPLAIN ANALYZE\n> SELECT\n> member_id\n> FROM\n> marital_matrix m1\n> WHERE\n> status & B'10000' = B'10000';\n> QUERY \n> PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_marital_matrix_single on marital_matrix m1 \n> (cost=0.00..903.59 rows=250 width=4) (actual time=0.042..258.907 rows=46117 \n> loops=1)\n> Index Cond: (((status)::\"bit\" & B'10000'::\"bit\") = B'10000'::\"bit\")\n> Total runtime: 451.162 ms\n> (3 rows)\n> \n> METHOD 4: Using an IN statement. This method seems to be very fussy about \n> using the index, and I have at some point made it use the index when there \n> are less than 3 possibilites. 
Also, for fields other than Marital Status, \n> users will be able to select many bits for their own profile, which means \n> there would be many permutations:\n> \n> EXPLAIN ANALYZE\n> SELECT\n> member_id\n> FROM\n> marital_matrix\n> WHERE\n> status & B'11011' IN (B'10000',B'01000');\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on marital_matrix (cost=0.00..2602.73 rows=993 width=4) (actual \n> time=17.845..288.279 rows=47525 loops=1)\n> Filter: ((((status)::\"bit\" & B'11011'::\"bit\") = B'10000'::\"bit\") OR \n> (((status)::\"bit\" & B'11011'::\"bit\") = B'01000'::\"bit\") OR (((status)::\"bit\" \n> & B'11011'::\"bit\") = B'00010'::\"bit\") OR (((status)::\"bit\" & B'11011'::\"bit\") \n> = B'00001'::\"bit\"))\n> Total runtime: 488.651 ms\n> (3 rows)\n> \n> \n> Method 3 is the only one that used the index, but the only really acceptable \n> method here is Method 1.\n> \n> My questions are...\n> - Is there any hope in getting this to use an efficient index?\n> - Any mathmaticians know if there is a way to reorder my bitwise comparison to \n> have the operator use = and not an != (perhaps to force an index)? (AFAIK, \n> the answer to the second question is no)\n> \n> If anyone could offer any performance tips here I'd really appreciate it. I \n> imagine that having this schema wouldn't last an hour with the amount of CPU \n> cycles it would be consuming on math operations.\n> \n> Also, I have read the thread that was posted here by Daniel in August:\n> http://archives.postgresql.org/pgsql-performance/2004-08/msg00328.php\n> \n> I have spoke with Daniel on this issue and we both agree it's very difficult \n> to find a solution that can scale to very large sites.\n> \n> I would very much appreciate any advice that some experienced users may have \n> to offer me for such a situation. TIA\n> \n> Patrick\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Thu, 16 Sep 2004 16:11:00 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
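A minimal sketch of the mapping table Chris is suggesting, assuming the matches are computed ahead of time by some batch process; the table and column names are assumptions:

CREATE TABLE member_matches (
    member_id integer NOT NULL REFERENCES members (member_id) ON DELETE CASCADE,
    match_id  integer NOT NULL REFERENCES members (member_id) ON DELETE CASCADE,
    PRIMARY KEY (member_id, match_id)
);

-- The primary key doubles as the (member_id, match_id) index, so pulling one
-- member's matches is a plain index scan:
SELECT match_id FROM member_matches WHERE member_id = 21;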
{
"msg_contents": "\nPatrick Clery <[email protected]> writes:\n\n> Method 3 is the only one that used the index, but the only really acceptable \n> method here is Method 1.\n> \n> My questions are...\n> - Is there any hope in getting this to use an efficient index?\n> - Any mathmaticians know if there is a way to reorder my bitwise comparison to \n> have the operator use = and not an != (perhaps to force an index)? (AFAIK, \n> the answer to the second question is no)\n\nThe only kind of index that is capable of indexing this type of data structure\nfor arbitrary searches would be a GiST index. I'm not aware of any\nimplementation for bitfields, though it would be an appropriate use.\n\nWhat there is now is the contrib/intarray package. You would have to store\nmore than just the bitfields, you would have to store an array of integer\nflags. That might be denser actually if you end up with many flags few of\nwhich are set.\n\nGiST indexes allow you to search arbitrary combinations of set and unset\nflags. using the \"@@\" operator\n\n int[] @@ query_int - returns TRUE if array satisfies query (like '1&(2|3)') \n\nYou might be able to look at the code there and adapt it to apply to bit\nfields. If so I think it would be a useful tool. But GiST indexing is pretty\nesoteric stuff.\n\n-- \ngreg\n\n",
"msg_date": "16 Sep 2004 04:44:29 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
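A short sketch of what the contrib/intarray route looks like in practice, assuming the bit flags are re-stored as an int[] column; the column name, index name and flag values here are illustrative only:

ALTER TABLE marital_matrix ADD COLUMN status_flags int[];

CREATE INDEX idx_marital_status_flags ON marital_matrix
    USING gist (status_flags gist__int_ops);

-- "single, separated, divorced or widowed" (anything but married),
-- expressed as a query_int against the GiST-indexed array:
SELECT member_id
FROM marital_matrix
WHERE status_flags @@ '1|2|4|5'::query_int;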
{
"msg_contents": "\nPatrick Clery <[email protected]> writes:\n\n> Here's the structure of the marital status table:\n\nAlso I find it very odd that you have a \"marital status table\". marital status\nis just one attribute of member. Do you expect to have more than one marital\nstatus bitfield per member? How would you distinguish which one to use?\n\nIt's going to make it very hard to combine criteria against other attributes\neven if you do manage to get a GiST index to work against marital status and\nyou do the same with the other, then postgres will have to do some sort of\nmerge join between them. It also means you'll have to write the same code over\nand over for each of these tables.\n\nI think you're much more likely to want to merge all these attributes into a\nsingle \"member_attributes\" table, or even into the member table itself. Then\nyour goal would be to match all the member_attribute bits against all the\nmember_preferred bits in the right way.\n\nThe more conventional approach is to break them out into a fact separate\ntable:\n\nmember_id, attribute_id\n\nAnd then just have a list of pairs that apply. This kind of normalized data is\nmuch more flexible for writing all kinds of queries against. But like you've\nfound, it's hard to optimize this to be fast enough for transactional use.\n\nI think the normal approach with dating sites is to leave this for a batch job\nthat populates a match table for everyone and just have the web site display\nthe contents out of that table.\n\n-- \ngreg\n\n",
"msg_date": "16 Sep 2004 04:53:49 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
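For the conventional pair-table layout Greg describes, a sketch (names assumed) plus the usual way to ask for members that hold every one of a set of attribute ids:

CREATE TABLE member_attribute_pairs (
    member_id    integer NOT NULL,
    attribute_id integer NOT NULL,
    PRIMARY KEY (member_id, attribute_id)
);

-- Members holding all three of attributes 10, 20 and 30:
SELECT member_id
FROM member_attribute_pairs
WHERE attribute_id IN (10, 20, 30)
GROUP BY member_id
HAVING count(*) = 3;

As Greg says, the flexibility of this layout comes at the cost of making it hard to keep fast enough for transactional use.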
{
"msg_contents": "Christopher Kings-Lynne wrote:\n\n> Sounds like you want a many-to-many table that maps user_ids to match_ids\n>\n> Then you can put an index over (user_id, match_id) and the search will \n> be very fast.\n>\n> Chris\n>\nIf I understand you correctly, I believe I've tried this approach. While \nmatching on a single attribute and a single value was indeed very fast \nand used an index, as soon as I tried to match on more than one value \n(where valueid in (1, 2, 3)) the index was no longer used. Since my \napproach used ints, I used in(), which is effectively \"or\", which is \npresumably why the index is no longer used. With the bit, one would do a \nbitwise \"or\" (where value & search = value). This cannot be easily \nindexed, afaik.\n\nThe other problem I had with a 1:many table, where there was a row for \nevery person's attributes (~20M rows) was that somehow time was lost in \neither sorting or somewhere else. Individual queries against a single \nattribute would be very fast, but as soon as I tried to join another \nattribute, the query times got really bad. See http://sh.nu/w/email.txt \nline 392 (Don't worry, there are line numbers in the page).\n\nSo far I've stuck with my original plan, which is to maintain a 1:1 \ntable of people:attributes where each attribute is in its own column. \nStill, no index is used, but it's been the best performer up to now.\n\nI'm still looking for a better plan though.\n\nDaniel\n\n-- \n\nDaniel Ceregatti - Programmer\nOmnis Network, LLC\n\nYou are fighting for survival in your own sweet and gentle way.\n\n",
"msg_date": "Thu, 16 Sep 2004 09:46:43 -0700",
"msg_from": "Daniel Ceregatti <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
{
"msg_contents": "I have currently implemented a schema for my \"Dating Site\" that is storing \nuser search preferences and user attributes in an int[] array using the \ncontrib/intarray package (suggested by Greg Stark). But there are a few \nproblems.\n\ta) query_int can't be cast to int4.\n\tb) query_int can't be indexed.\n\ndatingsite=> alter table people_attributes add column bla query_int;\nALTER TABLE\ndatingsite=> create index idx_query_int on people_attributes (bla);\nERROR: data type query_int has no default operator class for access method \n\"btree\"\nHINT: You must specify an operator class for the index or define a default \noperator class for the data type.\ndatingsite=> create index idx_query_int on people_attributes (bla \ngist__int_ops);\nERROR: operator class \"gist__int_ops\" does not exist for access method \n\"btree\"\ndatingsite=> alter table people_attributes drop column bla;\nALTER TABLE\n\n\tc) query_int can only be used in one operation against int[]:\n\n\tREADME.intarray:\n\tint[] @@ query_int - returns TRUE if array satisfies query (like '1&(2|3)')\n\n \tIt is not possible to use >=, <=, =, etc. Also, this operator does not work \nlike example says:\n\ndatingsite=> select '{2,3}'::int[] @@ '1'::query_int;\n ?column? \n----------\n f\n(1 row)\n\n\td) I can't find a way to simply check if an integer is an array without \ndeclaring it as an array; Therefore, I need to use an int[] type for a column \nthat will only be storing one int4 if I want to compare it to an int[] array:\n\n\tREADME.intarray:\n\tint[] && int[] - overlap - returns TRUE if arrays has at least one common \nelements.\n\n\te) int[] and query_int are somewhat ugly to deal with since query_int needs \nto be quoted as a string, and int[] is returned as '{1,2,3}'. 
Or maybe I'm \njust being anal :)\n\n\nBecause of these limitations, I've chosen to declare the attribute columns as \nint[] arrays (even though they will only contain one value) so that I can use \n'{1,2,3}'::int[] && column_name:\n\n\tREADME.intarray:\n\tint[] && int[] - overlap - returns TRUE if arrays has at least one common \nelements.\n\nHere is the schema:\n\ncreate table people (\n person_id serial,\n datecreated timestamp with time zone default now (),\n signup_ip cidr not null,\n username character varying(30) not null,\n password character varying(28) not null,\n email character varying(65) not null,\n dob date not null,\n primary key (person_id)\n);\n\ncreate table people_attributes (\n person_id int references people (person_id) on delete cascade initially \ndeferred,\n askmecount int not null default 0,\n age int[] not null default '{1}'::int[],\n gender int[] not null default '{1}'::int[],\n orientation int[] not null default '{1}'::int[],\n bodytype int[] not null default '{1}'::int[],\n children int[] not null default '{1}'::int[],\n drinking int[] not null default '{1}'::int[],\n education int[] not null default '{1}'::int[],\n ethnicity int[] not null default '{1}'::int[],\n eyecolor int[] not null default '{1}'::int[],\n haircolor int[] not null default '{1}'::int[],\n hairstyle int[] not null default '{1}'::int[],\n height int[] not null default '{1}'::int[],\n income int[] not null default '{1}'::int[],\n occupation int[] not null default '{1}'::int[],\n relation int[] not null default '{1}'::int[], /* multiple answer */\n religion int[] not null default '{1}'::int[],\n seeking int[] not null default '{1}'::int[], /* multiple answer */\n smoking int[] not null default '{1}'::int[],\n want_children int[] not null default '{1}'::int[],\n weight int[] not null default '{1}'::int[],\n\n primary key (person_id)\n)\nwithout oids;\n\ncreate index people_attributes_search on people_attributes using gist (\n age gist__int_ops,\n gender gist__int_ops,\n orientation gist__int_ops,\n bodytype gist__int_ops,\n children gist__int_ops,\n drinking gist__int_ops,\n education gist__int_ops,\n ethnicity gist__int_ops,\n eyecolor gist__int_ops,\n haircolor gist__int_ops,\n hairstyle gist__int_ops,\n height gist__int_ops,\n income gist__int_ops,\n occupation gist__int_ops,\n relation gist__int_ops,\n religion gist__int_ops,\n seeking gist__int_ops,\n smoking gist__int_ops,\n want_children gist__int_ops,\n weight gist__int_ops\n );\n\n/* These will be compared against the people_attributes table */\ncreate table people_searchprefs (\n person_id int references people (person_id) on delete cascade initially \ndeferred,\n age int[] not null default \n'{18,19,20,21,22,23,24,25,26,27,28,29,30}'::int[],\n gender int[] not null default '{1,2,4}'::int[],\n orientation int[] not null default '{1,2,8}'::int[],\n bodytype int[] not null default '{1,2,3,4,5,6}'::int[],\n children int[] not null default '{0}'::int[],\n drinking int[] not null default '{0}'::int[],\n education int[] not null default '{0}'::int[],\n ethnicity int[] not null default '{0}'::int[],\n eyecolor int[] not null default '{0}'::int[],\n haircolor int[] not null default '{0}'::int[],\n hairstyle int[] not null default '{0}'::int[],\n height int[] not null default '{0}'::int[],\n income int[] not null default '{0}'::int[],\n occupation int[] not null default '{0}'::int[],\n relation int[] not null default '{0}'::int[],\n religion int[] not null default '{0}'::int[],\n seeking int[] not null default '{0}'::int[],\n smoking int[] not 
null default '{0}'::int[],\n want_children int[] not null default '{0}'::int[],\n weight int[] not null default '{0}'::int[],\n\n primary key (person_id)\n)\nwithout oids;\n\n\nAnd now, the moment you've all been waiting for: performance!\n\n\n(Number of profiles)\n\ndatingsite=> select count(*) from people_attributes ;\n count \n-------\n 96146\n(1 row)\n\n(age, gender and sexual orientation will always be a part of the query, and \nare necessary to be invoke the index. The query is to show females, age 30-40 \nof any orientation. But first, without the index)\n\nexplain analyze\nselect person_id, gender\nfrom people_attributes\nwhere '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age\nand '{2}'::int[] && gender\nand '{1,2,4}'::int[] && orientation;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on people_attributes (cost=0.00..9078.56 rows=1 width=36) (actual \ntime=0.044..299.537 rows=937 loops=1)\n Filter: (('{30,31,32,33,34,35,36,37,38,39,40}'::integer[] && age) AND \n('{2}'::integer[] && gender) AND ('{1,2,4}'::integer[] && orientation))\n Total runtime: 304.707 ms\n(3 rows)\n\n\n( with the index )\n\nexplain analyze\nselect person_id, gender\nfrom people_attributes\nwhere '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age\nand '{2}'::int[] && gender\nand '{1,2,4}'::int[] && orientation;\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using people_attributes_search on people_attributes \n(cost=0.00..6.02 rows=1 width=36) (actual time=0.064..52.383 rows=937 \nloops=1)\n Index Cond: (('{30,31,32,33,34,35,36,37,38,39,40}'::integer[] && age) AND \n('{2}'::integer[] && gender) AND ('{1,2,4}'::integer[] && orientation))\n Total runtime: 57.032 ms\n(3 rows)\n\n(more realistically, it will have a limit of 10)\n\nexplain analyze\nselect person_id, gender\nfrom people_attributes\nwhere '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age\nand '{2}'::int[] && gender\nand '{1,2,4}'::int[] && orientation limit 10;\n\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..6.02 rows=1 width=36) (actual time=0.235..0.651 rows=10 \nloops=1)\n -> Index Scan using people_attributes_search on people_attributes \n(cost=0.00..6.02 rows=1 width=36) (actual time=0.224..0.550 rows=10 loops=1)\n Index Cond: (('{30,31,32,33,34,35,36,37,38,39,40}'::integer[] && age) \nAND ('{2}'::integer[] && gender) AND ('{1,2,4}'::integer[] && orientation))\n Total runtime: 0.817 ms\n(4 rows)\n\n(slower with an sort key)\n\nexplain analyze\nselect person_id, gender\nfrom people_attributes\nwhere '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age\nand '{2}'::int[] && gender\nand '{1,2,4}'::int[] && orientation order by age;\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=6.03..6.03 rows=1 width=68) (actual time=62.572..66.338 rows=937 \nloops=1)\n Sort Key: age\n -> Index Scan using people_attributes_search on people_attributes \n(cost=0.00..6.02 rows=1 width=68) (actual time=0.223..55.999 rows=937 \nloops=1)\n Index Cond: (('{30,31,32,33,34,35,36,37,38,39,40}'::integer[] && age) \nAND ('{2}'::integer[] && 
gender) AND ('{1,2,4}'::integer[] && orientation))\n Total runtime: 71.206 ms\n(5 rows)\n\n(no better with a limit)\n\nexplain analyze\nselect person_id, gender\nfrom people_attributes\nwhere '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age\nand '{2}'::int[] && gender\nand '{1,2,4}'::int[] && orientation order by age limit 10;\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=6.03..6.03 rows=1 width=68) (actual time=69.391..69.504 rows=10 \nloops=1)\n -> Sort (cost=6.03..6.03 rows=1 width=68) (actual time=69.381..69.418 \nrows=10 loops=1)\n Sort Key: age\n -> Index Scan using people_attributes_search on people_attributes \n(cost=0.00..6.02 rows=1 width=68) (actual time=0.068..61.648 rows=937 \nloops=1)\n Index Cond: (('{30,31,32,33,34,35,36,37,38,39,40}'::integer[] \n&& age) AND ('{2}'::integer[] && gender) AND ('{1,2,4}'::integer[] && \norientation))\n Total runtime: 69.899 ms\n(6 rows)\n\nThe last query is the most likely since I will need to be sorting by some key. \nIf I wasn't sorting it looks like it wouldn't be too bad, but sorting is \ninevitable I think. I've only imported 96,146 of the 150,000 profiles. This \nseems a bit slow now, and it doesn't look like it will scale. \n\nMy questions are:\n\n- Is there a way of speeding up the sort?\n- Will using queries like \" WHERE orientation IN (1,2,4) \" be any \nbetter/worse?\n- The queries with the GiST index are faster, but is it of any benefit when \nthe int[] arrays all contain a single value?\n- Is there any hope for this structure?\n\nThanks for the suggestion Greg, and thanks to those who responded to this \nthread. \n\n\n\nOn Thursday 16 September 2004 02:44, Greg Stark wrote:\n> The only kind of index that is capable of indexing this type of data\n> structure for arbitrary searches would be a GiST index. I'm not aware of\n> any implementation for bitfields, though it would be an appropriate use.\n>\n> What there is now is the contrib/intarray package. You would have to store\n> more than just the bitfields, you would have to store an array of integer\n> flags. That might be denser actually if you end up with many flags few of\n> which are set.\n>\n> GiST indexes allow you to search arbitrary combinations of set and unset\n> flags. using the \"@@\" operator\n>\n> int[] @@ query_int - returns TRUE if array satisfies query (like\n> '1&(2|3)')\n>\n> You might be able to look at the code there and adapt it to apply to bit\n> fields. If so I think it would be a useful tool. But GiST indexing is\n> pretty esoteric stuff.\n\n\n\n primary key (person_id)\n)\nwithout oids;\n\n",
"msg_date": "Sat, 18 Sep 2004 21:26:13 -0600",
"msg_from": "Patrick Clery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
{
"msg_contents": "\nPatrick Clery <[email protected]> writes:\n\n> PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=6.03..6.03 rows=1 width=68) (actual time=69.391..69.504 rows=10 loops=1)\n> -> Sort (cost=6.03..6.03 rows=1 width=68) (actual time=69.381..69.418 rows=10 loops=1)\n> Sort Key: age\n> -> Index Scan using people_attributes_search on people_attributes (cost=0.00..6.02 rows=1 width=68) (actual time=0.068..61.648 rows=937 loops=1)\n> Index Cond: (('{30,31,32,33,34,35,36,37,38,39,40}'::integer[] && age) AND ('{2}'::integer[] && gender) AND ('{1,2,4}'::integer[] && orientation))\n> Total runtime: 69.899 ms\n> (6 rows)\n...\n> - Is there a way of speeding up the sort?\n\nThe sort seems to have only taken 8ms out of 69ms or just over 10%. As long as\nthe index scan doesn't match too many records the sort should never be any\nslower so it shouldn't be the performance bottleneck. You might consider\nputting a subquery inside the order by with a limit to ensure that the sort\nnever gets more than some safe maximum. Something like:\n\nselect * from (select * from people_attributes where ... limit 1000) order by age limit 10\n\nThis means if the query matches more than 1000 it won't be sorted properly by\nage; you'll get the top 10 out of some random subset. But you're protected\nagainst ever having to sort more than 1000 records.\n\n> - Will using queries like \" WHERE orientation IN (1,2,4) \" be any better/worse?\n\nWell they won't use the GiST index, so no. If there was a single column with a\nbtree index then this would be the cleanest way to go.\n\n> - The queries with the GiST index are faster, but is it of any benefit when\n> the int[] arrays all contain a single value?\n\nWell you've gone from 5 minutes to 60ms. You'll have to do more than one test\nto be sure but it sure seems like it's of some benefit.\n\nIf they're always a single value you could make it an expression index instead\nand not have to change your data model.\n\nJust have the fields be integers individually and make an index as:\n\ncreate index idx on people_attributes using gist (\n (array[age]) gist__int_ops, \n (array[gender]) gist__int_ops,\n...\n)\n\n\nHowever I would go one step further. I would make the index simply:\n\ncreate index idx on people_attributes using gist (\n (array[age,gender,orientation,...]) gist__int_ops\n)\n\nAnd ensure that all of these attributes have distinct domains. Ie, that they\ndon't share any values. There are 4 billion integer values available so that\nshouldn't be an issue.\n\nThen you could use query_int to compare them the way you want. You\nmisunderstood how query_int is supposed to work. You index an array column and\nthen you can check it against a query_int just as you're currently checking\nfor overlap. Think of @@ as a more powerful version of the overlap operator\nthat can do complex logical expressions.\n\nThe equivalent of \n\n where '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age\n and '{2}'::int[] && gender\n and '{1,2,4}'::int[] && orientation\n\nwould then become:\n\n WHERE array[age,gender,orientation] @@ '(30|31|32|33|34|35|36|37|38|39|40)&(2)&(1|2|4)'\n\nexcept you would have to change orientation and gender to not both have a\nvalue of 2. \n\nYou might consider doing the expression index a bit of overkill actually. 
You\nmight consider just storing a column \"attributes\" with an integer array\ndirectly in the table.\n\nYou would also want a table that lists the valid attributes to be sure not to\nhave any overlaps:\n\n1 age 1\n2 age 2\n...\n101 gender male\n102 gender female\n103 orientation straight\n104 orientation gay\n105 orientation bi\n106 bodytype scrawny\n...\n\n\n> - Is there any hope for this structure?\n\nYou'll have to test this carefully. I tried using GiST indexes for my project\nand found that I couldn't load the data and build the GiST indexes fast\nenough. You have to test the costs of building and maintaining this index,\nespecially since it has so many columns in it.\n\nBut it looks like your queries are in trouble without it so hopefully it'll be\nok on the insert/update side for you.\n\n-- \ngreg\n\n",
"msg_date": "19 Sep 2004 01:07:56 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
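Written out against the people_attributes example above (purely as a sketch; the 1000-row cap is an arbitrary choice, and the derived table needs an alias):

SELECT person_id, age
FROM (
    SELECT person_id, age
    FROM people_attributes
    WHERE '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age
      AND '{2}'::int[] && gender
      AND '{1,2,4}'::int[] && orientation
    LIMIT 1000
) AS candidates
ORDER BY age
LIMIT 10;

As Greg notes, if more than 1000 rows match, the top 10 come from an arbitrary subset, so the cap has to be chosen with that trade-off in mind.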
{
"msg_contents": "Sorry I have taken this long to reply, Greg, but here are the results of the \npersonals site done with contrib/intarray:\n\nThe first thing I did was add a serial column to the attributes table. So \ninstead of having a unique constraint on (attribute_id,value_id), every row \nhas a unique value:\n\ndatingsite=> \\d attribute_names\n Table \"public.attribute_names\"\n Column | Type | \nModifiers \n----------------+-----------------------+---------------------------------------------------------------------------\n attribute_id | integer | not null default \nnextval('public.attribute_names_attribute_id_seq'::text)\n attribute_name | character varying(50) | not null\nIndexes:\n \"attribute_names_pkey\" PRIMARY KEY, btree (attribute_id)\n \"attribute_names_attribute_id_key\" UNIQUE, btree (attribute_id, \nattribute_name\n\nan example insert:\ninsert into attribute_names (attribute_name) values ('languages');\n\n\n\ndatingsite=> \\d attribute_values\n Table \"public.attribute_values\"\n Column | Type | \nModifiers \n--------------+------------------------+------------------------------------------------------------------------\n attribute_id | integer | not null\n order_id | integer | not null default \n(nextval('order_id_seq'::text) - 1)\n label | character varying(255) | not null\n value_id | integer | not null default \nnextval('public.attribute_values_value_id_seq'::text)\nIndexes:\n \"attribute_values_pkey\" PRIMARY KEY, btree (value_id)\nForeign-key constraints:\n \"attribute_values_attribute_id_fkey\" FOREIGN KEY (attribute_id) REFERENCES \nattribute_names(attribute_id)\n\nan example insert (22 is the attribute_id of \"languages\"):\ninsert into attribute_values (attribute_id, label) values (22, 'English');\n\n\nThe \"value_id\" column is where the integers inside the int[] arrays will \nreference. 
Even age (between 18-99) and height (between 48-84) have rows for \nevery possible choice, as well as \"Ask me!\" where a user could choose to \nleave that blank.\n\nHere is \"the int[] table\":\n\ncreate table people_attributes (\n person_id int references people (person_id) on delete cascade initially \ndeferred,\n askmecount int not null default 0,\n age int not null references attribute_values(value_id) on delete restrict,\n gender int not null references attribute_values(value_id) on delete \nrestrict,\n bodytype int not null references attribute_values(value_id) on delete \nrestrict,\n children int not null references attribute_values(value_id) on delete \nrestrict,\n drinking int not null references attribute_values(value_id) on delete \nrestrict,\n education int not null references attribute_values(value_id) on delete \nrestrict,\n ethnicity int not null references attribute_values(value_id) on delete \nrestrict,\n eyecolor int not null references attribute_values(value_id) on delete \nrestrict,\n haircolor int not null references attribute_values(value_id) on delete \nrestrict,\n hairstyle int not null references attribute_values(value_id) on delete \nrestrict,\n height int not null references attribute_values(value_id) on delete \nrestrict,\n income int not null references attribute_values(value_id) on delete \nrestrict,\n languages int[] not null,\n occupation int not null references attribute_values(value_id) on delete \nrestrict,\n orientation int not null references attribute_values(value_id) on delete \nrestrict,\n relation int not null references attribute_values(value_id) on delete \nrestrict,\n religion int not null references attribute_values(value_id) on delete \nrestrict,\n smoking int not null references attribute_values(value_id) on delete \nrestrict,\n want_children int not null references attribute_values(value_id) on delete \nrestrict,\n weight int not null references attribute_values(value_id) on delete \nrestrict,\n\n seeking int[] not null,\n\n primary key (person_id)\n)\nwithout oids;\n\n\nIf you'll notice that \"seeking\" and \"languages\" are both int[] types. I did \nthis because those will be multiple choice. The index was created like so:\n\ncreate index people_attributes_search on people_attributes using gist (\n (array[\n age,\n gender,\n orientation,\n children,\n drinking,\n education,\n ethnicity,\n eyecolor,\n haircolor,\n hairstyle,\n height,\n income,\n occupation,\n relation,\n religion,\n smoking,\n want_children,\n weight\n ] + seeking + languages) gist__int_ops\n );\n\nseeking and languages are appended with the intarray + op.\n\n\nI'm not going to go too in depth on how this query was generated since that \nwas mostly done with the PHP side of things, but from the structure it should \nbe obvious. I did, however, have to generate a few SQL functions using Smarty \ntemplates since it would be way to tedious to map all these values by hand. \n\nThere are 96,000 rows (people) in the people_attributes table. 
Here is what is \ngoing on in the following query: \"Show me all single (48) females (88) who \nare heterosexual (92) age between 18 and 31 (95|96|97|98|99|100|101|102|103|\n104|105|106|107|108)\"\n\nEXPLAIN ANALYZE SELECT *\nFROM people_attributes pa\nWHERE person_id <> 1\nAND (ARRAY[age, gender, orientation, children, drinking, education, \nethnicity, eyecolor, haircolor, hairstyle, height, income, occupation, \nrelation, religion, smoking, want_children, weight] + seeking + languages) @@ \n'48 & 89 & 92 & ( 95 | 96 | 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 \n| 106 | 107 | 108 )'\n\n\nIndex Scan using people_attributes_search on people_attributes pa \n(cost=0.00..386.45 rows=96 width=140) (actual time=0.057..19.266 rows=516 \nloops=1)\n Index Cond: (((ARRAY[age, gender, orientation, children, drinking, \neducation, ethnicity, eyecolor, haircolor, hairstyle, height, income, \noccupation, relation, religion, smoking, want_children, weight] + seeking) + \nlanguages) @@ '48 & 89 & 92 & ( ( ( ( ( ( ( ( ( ( ( ( ( 95 | 96 ) | 97 ) | \n98 ) | 99 ) | 100 ) | 101 ) | 102 ) | 103 ) | 104 ) | 105 ) | 106 ) | 107 ) | \n108 )'::query_int)\n Filter: (person_id <> 1)\n Total runtime: 21.646 ms\n\n\nThe speed only seems to vary significant on very broad searches, e.g: \"All \nfemales.\" But once the threshold of about 2 or more attributes is met, the \ntimes are very acceptable.\n\nIf we get a little more specific by adding \"non-smokers and non-drinkers \nbetween 18 and 22\", slight improvements:\n\n\nEXPLAIN ANALYZE SELECT *\nFROM people_attributes pa\nWHERE person_id <> 1\nAND (ARRAY[age, gender, orientation, children, drinking, education, \nethnicity, eyecolor, haircolor, hairstyle, height, income, occupation, \nrelation, religion, smoking, want_children, weight] + seeking + languages) @@ \n'48 & 89 & 92 & ( 95 | 96 | 97 | 98) & 67 & 2'\n\nIndex Scan using people_attributes_search on people_attributes pa \n(cost=0.00..386.45 rows=96 width=140) (actual time=0.077..13.090 rows=32 \nloops=1)\n Index Cond: (((ARRAY[age, gender, orientation, children, drinking, \neducation, ethnicity, eyecolor, haircolor, hairstyle, height, income, \noccupation, relation, religion, smoking, want_children, weight] + seeking) + \nlanguages) @@ '48 & 89 & 92 & ( ( ( 95 | 96 ) | 97 ) | 98 ) & 67 & \n2'::query_int)\n Filter: (person_id <> 1)\n Total runtime: 13.393 ms\n\n\nAll in all, my final thoughts on this are that it is \"hella faster\" than the \nprevious methods. Vertical tables for your user attributes will not work for \na personals site -- there are just too many conditions to be able to \nefficiently use an index. Out of all the methods I have tried, verticle table \nwas not even remotely scalable on large amounts of data. Horizontal table is \nthe way to go, but it wouldn't perform like this if not for the intarray \nmodule. The array method works quite nicely, especially for the columns like \n\"languages\" and \"seeking\" that are multiple choice. However, even though this \nmethod is fast, I still might opt for caching the results because the \"real \nworld\" search query involves a lot more and will be executed non-stop. But to \nhave it run this fast the first time certainly helps.\n\nThe only drawback I can think of is that the attributes no longer have values \nlike 1,2,3 -- instead they could be any integer value. This puts a spin on \nthe programming side of things, which required me to write \"code that writes \ncode\" on a few occassions during the attribute \"mapping\" process. 
For \nexample, keeping an associative array of all the attributes without fetching \nthat data from the database each time. My advice: if you're not a masochist, \nuse a template engine (or simply parse out a print_r() ) to create these PHP \narrays or SQL functions.\n\nGreg, thanks a lot for the advice. I owe you a beer ;)\n\n\nOn Saturday 18 September 2004 23:07, you wrote:\n> Patrick Clery <[email protected]> writes:\n> > PLAN\n> > -------------------------------------------------------------------------\n> >--------------------------------------------------------------------------\n> >-------------- Limit (cost=6.03..6.03 rows=1 width=68) (actual\n> > time=69.391..69.504 rows=10 loops=1) -> Sort (cost=6.03..6.03 rows=1\n> > width=68) (actual time=69.381..69.418 rows=10 loops=1) Sort Key: age\n> > -> Index Scan using people_attributes_search on\n> > people_attributes (cost=0.00..6.02 rows=1 width=68) (actual\n> > time=0.068..61.648 rows=937 loops=1) Index Cond:\n> > (('{30,31,32,33,34,35,36,37,38,39,40}'::integer[] && age) AND\n> > ('{2}'::integer[] && gender) AND ('{1,2,4}'::integer[] && orientation))\n> > Total runtime: 69.899 ms\n> > (6 rows)\n>\n> ...\n>\n> > - Is there a way of speeding up the sort?\n>\n> The sort seems to have only taken 8ms out of 69ms or just over 10%. As long\n> as the index scan doesn't match too many records the sort should never be\n> any slower so it shouldn't be the performance bottleneck. You might\n> consider putting a subquery inside the order by with a limit to ensure that\n> the sort never gets more than some safe maximum. Something like:\n>\n> select * from (select * from people_attributes where ... limit 1000) order\n> by age limit 10\n>\n> This means if the query matches more than 1000 it won't be sorted properly\n> by age; you'll get the top 10 out of some random subset. But you're\n> protected against ever having to sort more than 1000 records.\n>\n> > - Will using queries like \" WHERE orientation IN (1,2,4) \" be any\n> > better/worse?\n>\n> Well they won't use the GiST index, so no. If there was a single column\n> with a btree index then this would be the cleanest way to go.\n>\n> > - The queries with the GiST index are faster, but is it of any benefit\n> > when the int[] arrays all contain a single value?\n>\n> Well you've gone from 5 minutes to 60ms. You'll have to do more than one\n> test to be sure but it sure seems like it's of some benefit.\n>\n> If they're always a single value you could make it an expression index\n> instead and not have to change your data model.\n>\n> Just have the fields be integers individually and make an index as:\n>\n> create index idx on people_attributes using gist (\n> (array[age]) gist__int_ops,\n> (array[gender]) gist__int_ops,\n> ...\n> )\n>\n>\n> However I would go one step further. I would make the index simply:\n>\n> create index idx on people_attributes using gist (\n> (array[age,gender,orientation,...]) gist__int_ops\n> )\n>\n> And ensure that all of these attributes have distinct domains. Ie, that\n> they don't share any values. There are 4 billion integer values available\n> so that shouldn't be an issue.\n>\n> Then you could use query_int to compare them the way you want. You\n> misunderstood how query_int is supposed to work. You index an array column\n> and then you can check it against a query_int just as you're currently\n> checking for overlap. 
Think of @@ as a more powerful version of the overlap\n> operator that can do complex logical expressions.\n>\n> The equivalent of\n>\n> where '{30,31,32,33,34,35,36,37,38,39,40}'::int[] && age\n> and '{2}'::int[] && gender\n> and '{1,2,4}'::int[] && orientation\n>\n> would then become:\n>\n> WHERE array[age,gender,orientation] @@\n> '(30|31|32|33|34|35|36|37|38|39|40)&(2)&(1|2|4)'\n>\n> except you would have to change orientation and gender to not both have a\n> value of 2.\n>\n> You might consider doing the expression index a bit of overkill actually.\n> You might consider just storing a column \"attributes\" with an integer array\n> directly in the table.\n>\n> You would also want a table that lists the valid attributes to be sure not\n> to have any overlaps:\n>\n> 1 age 1\n> 2 age 2\n> ...\n> 101 gender male\n> 102 gender female\n> 103 orientation straight\n> 104 orientation gay\n> 105 orientation bi\n> 106 bodytype scrawny\n> ...\n>\n> > - Is there any hope for this structure?\n>\n> You'll have to test this carefully. I tried using GiST indexes for my\n> project and found that I couldn't load the data and build the GiST indexes\n> fast enough. You have to test the costs of building and maintaining this\n> index, especially since it has so many columns in it.\n>\n> But it looks like your queries are in trouble without it so hopefully it'll\n> be ok on the insert/update side for you.\n",
"msg_date": "Tue, 05 Oct 2004 00:39:23 -0600",
"msg_from": "Patrick Clery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
{
"msg_contents": "Patrick,\n\nFirst off, thanks for posting this solution! I love to see a new demo of The \nPower of Postgres(tm) and have been wondering about this particular problem \nsince it came up on IRC.\n\n> The array method works quite nicely, especially for the\n> columns like \"languages\" and \"seeking\" that are multiple choice. However,\n> even though this method is fast, I still might opt for caching the results\n> because the \"real world\" search query involves a lot more and will be\n> executed non-stop. But to have it run this fast the first time certainly\n> helps.\n\nNow, for the bad news: you need to test having a large load of users updating \ntheir data. The drawback to GiST indexes is that they are low-concurrency, \nbecause the updating process needs to lock the whole index (this has been on \nour TODO list for about a decade, but it's a hard problem). \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 5 Oct 2004 09:32:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
{
"msg_contents": "Another problem I should note is that when I first insert all the data into \nthe people_attributes table (\"the int[] table\"), the GiST index is not used:\n\nTHE INDEX:\n\"people_attributes_search\" gist ((ARRAY[age, gender, orientation, children, \ndrinking, education, \nethnicity, eyecolor, haircolor, hairstyle, height, income, occupation, \nrelation, religion, smoking, w\nant_children, weight] + seeking + languages))\n\nPART OF THE QUERY PLAN:\nSeq Scan on people_attributes pa (cost=0.00..0.00 rows=1 width=20)\n Filter: (((ARRAY[age, gender, orientation, children, \ndrinking, education, ethnicity, eyecolor, haircolor, hairstyle, height, \nincome, occupation, relation, religion, smoking, want_children, weight] + \nseeking) + languages) @@ '( ( 4 | 5 ) | 6 ) & 88 & 48 & ( 69 | 70 ) & 92 & \n( ( ( ( ( ( ( ( ( ( ( ( ( 95 | 96 ) | 97 ) | 98 ) | 99 ) | 100 ) | 101 ) | \n102 ) | 103 ) | 104 ) | 105 ) | 106 ) | 107 ) | 108 ) & \n( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( 190 \n| 191 ) | 192 ) | 193 ) | 194 ) | 195 ) | 196 ) | 197 ) | 198 ) | 199 ) | \n200 ) | 201 ) | 202 ) | 203 ) | 204 ) | 205 ) | 206 ) | 207 ) | 208 ) | 209 ) \n| 210 ) | 211 ) | 212 ) | 213 ) | 214 ) | 215 ) | 216 ) | 217 ) | 218 ) | \n219 ) | 220 ) | 221 ) | 222 ) | 223 ) | 224 ) | 225 ) | 226 ) | 227 ) | 228 ) \n| 229 ) | 230 ) | 231 ) | 232 ) | 233 ) | 234 ) | 235 ) | 236 ) | 237 ) | \n238 ) | 239 ) | 240 ) | 241 ) | 242 ) | 243 )'::query_int)\n\n\nSo I run \"VACUUM ANALYZE people_attributes\", then run again:\n\nPART OF THE QUERY PLAN:\nIndex Scan using people_attributes_pkey on people_attributes pa \n(cost=0.00..5.32 rows=1 width=20)\n Index Cond: (pa.person_id = \"outer\".person_id)\n Filter: (((ARRAY[age, gender, orientation, children, drinking, \neducation, ethnicity, eyecolor, haircolor, hairstyle, height, income, \noccupation, relation, religion, smoking, want_children, weight] + seeking) + \nlanguages) @@ '( ( 4 | 5 ) | 6 ) & 88 & 48 & ( 69 | 70 ) & 92 & \n( ( ( ( ( ( ( ( ( ( ( ( ( 95 | 96 ) | 97 ) | 98 ) | 99 ) | 100 ) | 101 ) | \n102 ) | 103 ) | 104 ) | 105 ) | 106 ) | 107 ) | 108 ) & \n( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( 190 \n| 191 ) | 192 ) | 193 ) | 194 ) | 195 ) | 196 ) | 197 ) | 198 ) | 199 ) | \n200 ) | 201 ) | 202 ) | 203 ) | 204 ) | 205 ) | 206 ) | 207 ) | 208 ) | 209 ) \n| 210 ) | 211 ) | 212 ) | 213 ) | 214 ) | 215 ) | 216 ) | 217 ) | 218 ) | \n219 ) | 220 ) | 221 ) | 222 ) | 223 ) | 224 ) | 225 ) | 226 ) | 227 ) | 228 ) \n| 229 ) | 230 ) | 231 ) | 232 ) | 233 ) | 234 ) | 235 ) | 236 ) | 237 ) | \n238 ) | 239 ) | 240 ) | 241 ) | 242 ) | 243 )'::query_int)\n\nStill not using the index. I'm trying to DROP INDEX and recreate it, but the \nquery just stalls. I remember last time this situation happened that I just \ndropped and recreated the index, and voila it was using the index again. Now \nI can't seem to get this index to drop. 
Here's the table structure:\n\n\n Column | Type | Modifiers \n---------------+-----------+--------------------\n person_id | integer | not null\n askmecount | integer | not null default 0\n age | integer | not null\n gender | integer | not null\n bodytype | integer | not null\n children | integer | not null\n drinking | integer | not null\n education | integer | not null\n ethnicity | integer | not null\n eyecolor | integer | not null\n haircolor | integer | not null\n hairstyle | integer | not null\n height | integer | not null\n income | integer | not null\n languages | integer[] | not null\n occupation | integer | not null\n orientation | integer | not null\n relation | integer | not null\n religion | integer | not null\n smoking | integer | not null\n want_children | integer | not null\n weight | integer | not null\n seeking | integer[] | not null\nIndexes:\n \"people_attributes_pkey\" PRIMARY KEY, btree (person_id)\n \"people_attributes_search\" gist ((ARRAY[age, gender, orientation, \nchildren, drinking, education, \nethnicity, eyecolor, haircolor, hairstyle, height, income, occupation, \nrelation, religion, smoking, w\nant_children, weight] + seeking + languages))\nForeign-key constraints:\n \"people_attributes_weight_fkey\" FOREIGN KEY (weight) REFERENCES \nattribute_values(value_id) ON DEL\nETE RESTRICT\n \"people_attributes_person_id_fkey\" FOREIGN KEY (person_id) REFERENCES \npeople(person_id) ON DELETE\n CASCADE DEFERRABLE INITIALLY DEFERRED\n \"people_attributes_age_fkey\" FOREIGN KEY (age) REFERENCES \nattribute_values(value_id) ON DELETE RE\nSTRICT\n \"people_attributes_gender_fkey\" FOREIGN KEY (gender) REFERENCES \nattribute_values(value_id) ON DEL\nETE RESTRICT\n \"people_attributes_bodytype_fkey\" FOREIGN KEY (bodytype) REFERENCES \nattribute_values(value_id) ON\n DELETE RESTRICT\n \"people_attributes_children_fkey\" FOREIGN KEY (children) REFERENCES \nattribute_values(value_id) ON\n DELETE RESTRICT\n \"people_attributes_drinking_fkey\" FOREIGN KEY (drinking) REFERENCES \nattribute_values(value_id) ON\n DELETE RESTRICT\n \"people_attributes_education_fkey\" FOREIGN KEY (education) REFERENCES \nattribute_values(value_id) \nON DELETE RESTRICT\n \"people_attributes_ethnicity_fkey\" FOREIGN KEY (ethnicity) REFERENCES \nattribute_values(value_id) \nON DELETE RESTRICT\n \"people_attributes_eyecolor_fkey\" FOREIGN KEY (eyecolor) REFERENCES \nattribute_values(value_id) ON\n DELETE RESTRICT\n \"people_attributes_haircolor_fkey\" FOREIGN KEY (haircolor) REFERENCES \nattribute_values(value_id) \nON DELETE RESTRICT\n \"people_attributes_hairstyle_fkey\" FOREIGN KEY (hairstyle) REFERENCES \nattribute_values(value_id) \nON DELETE RESTRICT\n \"people_attributes_height_fkey\" FOREIGN KEY (height) REFERENCES \nattribute_values(value_id) ON DELETE RESTRICT\n \"people_attributes_income_fkey\" FOREIGN KEY (income) REFERENCES \nattribute_values(value_id) ON DELETE RESTRICT\n \"people_attributes_occupation_fkey\" FOREIGN KEY (occupation) REFERENCES \nattribute_values(value_id\n) ON DELETE RESTRICT\n \"people_attributes_orientation_fkey\" FOREIGN KEY (orientation) REFERENCES \nattribute_values(value_\nid) ON DELETE RESTRICT\n \"people_attributes_relation_fkey\" FOREIGN KEY (relation) REFERENCES \nattribute_values(value_id) ON\n DELETE RESTRICT\n \"people_attributes_religion_fkey\" FOREIGN KEY (religion) REFERENCES \nattribute_values(value_id) ON\n DELETE RESTRICT\n \"people_attributes_smoking_fkey\" FOREIGN KEY (smoking) REFERENCES \nattribute_values(value_id) ON D\nELETE RESTRICT\n 
\"people_attributes_want_children_fkey\" FOREIGN KEY (want_children) \nREFERENCES attribute_values(va\nlue_id) ON DELETE RESTRICT\n\n\nIs it all the foreign keys that are stalling the drop? I have done VACUUM \nANALYZE on the entire db. Could anyone offer some insight as to why this \nindex is not being used or why the index is not dropping easily?\n\n\nOn Tuesday 05 October 2004 10:32, you wrote:\n> Patrick,\n>\n> First off, thanks for posting this solution! I love to see a new demo of\n> The Power of Postgres(tm) and have been wondering about this particular\n> problem since it came up on IRC.\n>\n> > The array method works quite nicely, especially for the\n> > columns like \"languages\" and \"seeking\" that are multiple choice. However,\n> > even though this method is fast, I still might opt for caching the\n> > results because the \"real world\" search query involves a lot more and\n> > will be executed non-stop. But to have it run this fast the first time\n> > certainly helps.\n>\n> Now, for the bad news: you need to test having a large load of users\n> updating their data. The drawback to GiST indexes is that they are\n> low-concurrency, because the updating process needs to lock the whole index\n> (this has been on our TODO list for about a decade, but it's a hard\n> problem).\n",
"msg_date": "Wed, 06 Oct 2004 12:55:02 -0600",
"msg_from": "Patrick Clery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
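A note on the "index is not being used" part of the question above: on the 7.4-era releases discussed here, GiST expression indexes give the planner very little selectivity information, so it can refuse them even when they would win. A rough way to check whether the index is usable at all is to disable sequential scans for one scratch session and re-run the search. This is only a sketch against the people_attributes definition shown above; the '4 & 88'::query_int value is a made-up placeholder, not a real attribute id.

    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT person_id
      FROM people_attributes
     WHERE (ARRAY[age, gender, orientation, children, drinking, education,
                  ethnicity, eyecolor, haircolor, hairstyle, height, income,
                  occupation, relation, religion, smoking, want_children, weight]
            + seeking + languages) @@ '4 & 88'::query_int;
    SET enable_seqscan = on;

If the index scan appears here but not in the normal plan, the problem is the cost estimate rather than the index itself.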
{
"msg_contents": "Err... I REINDEX'ed it and it is now using the index. :)\n\nI'd still appreciate if anyone could tell me why this needs to be\nreindexed. Is the index not updated when the records are inserted?\n\n> On Wednesday 06 October 2004 12:55, I wrote:\n> > Another problem I should note is that when I first insert all the data\n> > into the people_attributes table (\"the int[] table\"), the GiST index is\n> > not used:\n> >\n> > THE INDEX:\n> > \"people_attributes_search\" gist ((ARRAY[age, gender, orientation,\n> > children, drinking, education,\n> > ethnicity, eyecolor, haircolor, hairstyle, height, income, occupation,\n> > relation, religion, smoking, w\n> > ant_children, weight] + seeking + languages))\n> >\n> > PART OF THE QUERY PLAN:\n> > Seq Scan on people_attributes pa (cost=0.00..0.00 rows=1 width=20)\n> > Filter: (((ARRAY[age, gender, orientation, children,\n> > drinking, education, ethnicity, eyecolor, haircolor, hairstyle, height,\n> > income, occupation, relation, religion, smoking, want_children, weight] +\n> > seeking) + languages) @@ '( ( 4 | 5 ) | 6 ) & 88 & 48 & ( 69 | 70 ) & 92\n> > & ( ( ( ( ( ( ( ( ( ( ( ( ( 95 | 96 ) | 97 ) | 98 ) | 99 ) | 100 ) | 101\n> > ) | 102 ) | 103 ) | 104 ) | 105 ) | 106 ) | 107 ) | 108 ) &\n> > ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (\n> > ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( 190\n> >\n> > | 191 ) | 192 ) | 193 ) | 194 ) | 195 ) | 196 ) | 197 ) | 198 ) | 199 ) |\n> >\n> > 200 ) | 201 ) | 202 ) | 203 ) | 204 ) | 205 ) | 206 ) | 207 ) | 208 ) |\n> > 209 )\n> >\n> > | 210 ) | 211 ) | 212 ) | 213 ) | 214 ) | 215 ) | 216 ) | 217 ) | 218 ) |\n> >\n> > 219 ) | 220 ) | 221 ) | 222 ) | 223 ) | 224 ) | 225 ) | 226 ) | 227 ) |\n> > 228 )\n> >\n> > | 229 ) | 230 ) | 231 ) | 232 ) | 233 ) | 234 ) | 235 ) | 236 ) | 237 ) |\n> >\n> > 238 ) | 239 ) | 240 ) | 241 ) | 242 ) | 243 )'::query_int)\n> >\n> >\n> > So I run \"VACUUM ANALYZE people_attributes\", then run again:\n> >\n> > PART OF THE QUERY PLAN:\n> > Index Scan using people_attributes_pkey on people_attributes pa\n> > (cost=0.00..5.32 rows=1 width=20)\n> > Index Cond: (pa.person_id = \"outer\".person_id)\n> > Filter: (((ARRAY[age, gender, orientation, children, drinking,\n> > education, ethnicity, eyecolor, haircolor, hairstyle, height, income,\n> > occupation, relation, religion, smoking, want_children, weight] +\n> > seeking) + languages) @@ '( ( 4 | 5 ) | 6 ) & 88 & 48 & ( 69 | 70 ) & 92\n> > & ( ( ( ( ( ( ( ( ( ( ( ( ( 95 | 96 ) | 97 ) | 98 ) | 99 ) | 100 ) | 101\n> > ) | 102 ) | 103 ) | 104 ) | 105 ) | 106 ) | 107 ) | 108 ) &\n> > ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (\n> > ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( 190\n> >\n> > | 191 ) | 192 ) | 193 ) | 194 ) | 195 ) | 196 ) | 197 ) | 198 ) | 199 ) |\n> >\n> > 200 ) | 201 ) | 202 ) | 203 ) | 204 ) | 205 ) | 206 ) | 207 ) | 208 ) |\n> > 209 )\n> >\n> > | 210 ) | 211 ) | 212 ) | 213 ) | 214 ) | 215 ) | 216 ) | 217 ) | 218 ) |\n> >\n> > 219 ) | 220 ) | 221 ) | 222 ) | 223 ) | 224 ) | 225 ) | 226 ) | 227 ) |\n> > 228 )\n> >\n> > | 229 ) | 230 ) | 231 ) | 232 ) | 233 ) | 234 ) | 235 ) | 236 ) | 237 ) |\n> >\n> > 238 ) | 239 ) | 240 ) | 241 ) | 242 ) | 243 )'::query_int)\n> >\n> > Still not using the index. I'm trying to DROP INDEX and recreate it, but\n> > the query just stalls. I remember last time this situation happened that\n> > I just dropped and recreated the index, and voila it was using the index\n> > again. Now I can't seem to get this index to drop. 
Here's the table\n> > structure:\n> >\n> >\n> > Column | Type | Modifiers\n> > ---------------+-----------+--------------------\n> > person_id | integer | not null\n> > askmecount | integer | not null default 0\n> > age | integer | not null\n> > gender | integer | not null\n> > bodytype | integer | not null\n> > children | integer | not null\n> > drinking | integer | not null\n> > education | integer | not null\n> > ethnicity | integer | not null\n> > eyecolor | integer | not null\n> > haircolor | integer | not null\n> > hairstyle | integer | not null\n> > height | integer | not null\n> > income | integer | not null\n> > languages | integer[] | not null\n> > occupation | integer | not null\n> > orientation | integer | not null\n> > relation | integer | not null\n> > religion | integer | not null\n> > smoking | integer | not null\n> > want_children | integer | not null\n> > weight | integer | not null\n> > seeking | integer[] | not null\n> > Indexes:\n> > \"people_attributes_pkey\" PRIMARY KEY, btree (person_id)\n> > \"people_attributes_search\" gist ((ARRAY[age, gender, orientation,\n> > children, drinking, education,\n> > ethnicity, eyecolor, haircolor, hairstyle, height, income, occupation,\n> > relation, religion, smoking, w\n> > ant_children, weight] + seeking + languages))\n> > Foreign-key constraints:\n> > \"people_attributes_weight_fkey\" FOREIGN KEY (weight) REFERENCES\n> > attribute_values(value_id) ON DEL\n> > ETE RESTRICT\n> > \"people_attributes_person_id_fkey\" FOREIGN KEY (person_id) REFERENCES\n> > people(person_id) ON DELETE\n> > CASCADE DEFERRABLE INITIALLY DEFERRED\n> > \"people_attributes_age_fkey\" FOREIGN KEY (age) REFERENCES\n> > attribute_values(value_id) ON DELETE RE\n> > STRICT\n> > \"people_attributes_gender_fkey\" FOREIGN KEY (gender) REFERENCES\n> > attribute_values(value_id) ON DEL\n> > ETE RESTRICT\n> > \"people_attributes_bodytype_fkey\" FOREIGN KEY (bodytype) REFERENCES\n> > attribute_values(value_id) ON\n> > DELETE RESTRICT\n> > \"people_attributes_children_fkey\" FOREIGN KEY (children) REFERENCES\n> > attribute_values(value_id) ON\n> > DELETE RESTRICT\n> > \"people_attributes_drinking_fkey\" FOREIGN KEY (drinking) REFERENCES\n> > attribute_values(value_id) ON\n> > DELETE RESTRICT\n> > \"people_attributes_education_fkey\" FOREIGN KEY (education) REFERENCES\n> > attribute_values(value_id)\n> > ON DELETE RESTRICT\n> > \"people_attributes_ethnicity_fkey\" FOREIGN KEY (ethnicity) REFERENCES\n> > attribute_values(value_id)\n> > ON DELETE RESTRICT\n> > \"people_attributes_eyecolor_fkey\" FOREIGN KEY (eyecolor) REFERENCES\n> > attribute_values(value_id) ON\n> > DELETE RESTRICT\n> > \"people_attributes_haircolor_fkey\" FOREIGN KEY (haircolor) REFERENCES\n> > attribute_values(value_id)\n> > ON DELETE RESTRICT\n> > \"people_attributes_hairstyle_fkey\" FOREIGN KEY (hairstyle) REFERENCES\n> > attribute_values(value_id)\n> > ON DELETE RESTRICT\n> > \"people_attributes_height_fkey\" FOREIGN KEY (height) REFERENCES\n> > attribute_values(value_id) ON DELETE RESTRICT\n> > \"people_attributes_income_fkey\" FOREIGN KEY (income) REFERENCES\n> > attribute_values(value_id) ON DELETE RESTRICT\n> > \"people_attributes_occupation_fkey\" FOREIGN KEY (occupation)\n> > REFERENCES attribute_values(value_id\n> > ) ON DELETE RESTRICT\n> > \"people_attributes_orientation_fkey\" FOREIGN KEY (orientation)\n> > REFERENCES attribute_values(value_\n> > id) ON DELETE RESTRICT\n> > \"people_attributes_relation_fkey\" FOREIGN KEY (relation) REFERENCES\n> > attribute_values(value_id) ON\n> > 
DELETE RESTRICT\n> > \"people_attributes_religion_fkey\" FOREIGN KEY (religion) REFERENCES\n> > attribute_values(value_id) ON\n> > DELETE RESTRICT\n> > \"people_attributes_smoking_fkey\" FOREIGN KEY (smoking) REFERENCES\n> > attribute_values(value_id) ON D\n> > ELETE RESTRICT\n> > \"people_attributes_want_children_fkey\" FOREIGN KEY (want_children)\n> > REFERENCES attribute_values(va\n> > lue_id) ON DELETE RESTRICT\n> >\n> >\n> > Is it all the foreign keys that are stalling the drop? I have done VACUUM\n> > ANALYZE on the entire db. Could anyone offer some insight as to why this\n> > index is not being used or why the index is not dropping easily?\n> >\n> > On Tuesday 05 October 2004 10:32, you wrote:\n> > > Patrick,\n> > >\n> > > First off, thanks for posting this solution! I love to see a new demo\n> > > of The Power of Postgres(tm) and have been wondering about this\n> > > particular problem since it came up on IRC.\n> > >\n> > > > The array method works quite nicely, especially for the\n> > > > columns like \"languages\" and \"seeking\" that are multiple choice.\n> > > > However, even though this method is fast, I still might opt for\n> > > > caching the results because the \"real world\" search query involves a\n> > > > lot more and will be executed non-stop. But to have it run this fast\n> > > > the first time certainly helps.\n> > >\n> > > Now, for the bad news: you need to test having a large load of users\n> > > updating their data. The drawback to GiST indexes is that they are\n> > > low-concurrency, because the updating process needs to lock the whole\n> > > index (this has been on our TODO list for about a decade, but it's a\n> > > hard problem).\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n",
"msg_date": "Wed, 06 Oct 2004 13:27:55 -0600",
"msg_from": "Patrick Clery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
},
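Since a plain REINDEX made the planner happy again, one plausible explanation is that the index had simply become bloated after the bulk load. A cheap check, assuming the relation names from this thread, is to compare the on-disk size figures in pg_class before and after the REINDEX (run ANALYZE or VACUUM first so the numbers are reasonably current):

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('people_attributes',
                       'people_attributes_pkey',
                       'people_attributes_search');

If relpages for people_attributes_search drops sharply after the REINDEX, bloat rather than a broken index was the likely culprit.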
{
"msg_contents": "\nPatrick Clery <[email protected]> writes:\n\n> PART OF THE QUERY PLAN:\n> Index Scan using people_attributes_pkey on people_attributes pa (cost=0.00..5.32 rows=1 width=20)\n> Index Cond: (pa.person_id = \"outer\".person_id)\n> Filter: (((ARRAY[age, gender, orientation, children, drinking, \n\nYou'll probably have to show the rest of the plan for anyone to have much idea\nwhat's going on. It seems to be part of a join of some sort and the planner is\nchoosing to drive the join from the wrong table. This may make it awkward to\nforce the right plan using enable_seqscan or anything like that. But GiST\nindexes don't have very good selectivity estimates so I'm not sure you can\nhope for the optimizer to guess right on its own.\n\n> Is it all the foreign keys that are stalling the drop? I have done VACUUM \n> ANALYZE on the entire db. Could anyone offer some insight as to why this \n> index is not being used or why the index is not dropping easily?\n\nI don't think foreign keys cause problems dropping indexes. Foreign key\nconstraints are just checked whenever there's an insert/update/delete. Perhaps\nyou're just underestimating the size of this index and the amount of time\nit'll take to delete it? Or are there queries actively executing using the\nindex while you're trying to delete it? Or a vacuum running?\n\n-- \ngreg\n\n",
"msg_date": "06 Oct 2004 15:38:56 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comparing user attributes with bitwise operators"
}
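On the stalled DROP INDEX: the drop has to take an exclusive lock on the table, so it will queue silently behind any open transaction that still holds even a shared lock, including an idle-in-transaction session that ran a plain SELECT, or a running VACUUM. A rough way to see who is in the way, assuming the relation names from this thread, is:

    SELECT l.pid, l.mode, l.granted, c.relname
      FROM pg_locks l
      LEFT JOIN pg_class c ON c.oid = l.relation
     WHERE c.relname IN ('people_attributes', 'people_attributes_search')
     ORDER BY l.granted, l.pid;

An ungranted row for the session doing the DROP, next to granted rows held by other pids, points at a blocking transaction rather than anything to do with the foreign keys.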
] |
[
{
"msg_contents": "Ricardo,\n\nHello. I've moved your query to a more appropriate mailing list; on \nPERFORMANCE we discuss RAID all the time. If you don't mind wading through a \nhost of opinions, you'll get plenty here. I've also cc'd our Brazillian \nPostgreSQL community.\n\nEveryone, please note that Ricardo is NOT subscribed so cc him on your \nresponses.\n\nHere's Ricardo's question. My response is below it.\n\n===============================================\nLet me introduce, I'm Ricardo Rezende and I'm SQL Magazine subeditor, from \nBrazil (http://www.sqlmagazine.com.br.).\n\nMy goal in this first contact is to solve a doubt about PostgreSQL RDBMS.\n\nI'm writing an article about redundant storage technology, called RAID. \nThe first part of the article can be found in \nhttp://www.sqlmagazine.com.br/colunistas.asp?artigo=Colunistas/RicardoRezende/06_Raid_P1.asp\n\nMy ideia is to put, in the end of the article, a note about the better \nconfiguration of RAID to use with PostgreSQL and the reasons, including \nthe reference to the autor/link to this information.\n\nCould you send me this information?\n\nOur magazine is being a reference between DBAs and Database Developers in \nBrazil and that is the reason to write \"oficial\" papers about PostgreSQL\n\nThank you very much and I'm waiting for a return of this e-mail.\n=========================================================\n\nThe first and most important step for RAID performance with PostgreSQL is to \nget a card with onboard battery back-up and enable the write cache for the \ncard. You do not want to enable the write cache *without* battery back-up \nbecause of the risk of data corruption after a power failure.\n\nIf you can't afford this hardware, I would advise using software RAID over \nusing a cheaper (< $300US) RAID card.\n\nThe second step is to have lots of disks; 5 drives is a minimum for really \ngood performance. 3-drive RAID5, in particular, is a poor performer for \nPostgreSQL, often resulting in I/O that is 40% or less as efficient as a \nsingle disk due to extremely slow random seeks and little parallelization.\n\nOnce you have 6 drives or more, opinions are divided on whether RAID 10 or \nRAID 5 is better. I think it partly depends on your access pattern.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Sep 2004 10:50:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "\n>The first part of the article can be found in \n>http://www.sqlmagazine.com.br/colunistas.asp?artigo=Colunistas/RicardoRezende/06_Raid_P1.asp\n> \n>\nThe site seems to be down. I was looking forward to reading it. :(\n\n>The first and most important step for RAID performance with PostgreSQL is to \n>get a card with onboard battery back-up and enable the write cache for the \n>card. You do not want to enable the write cache *without* battery back-up \n>because of the risk of data corruption after a power failure.\n> \n>\nHere is a small example of the performance difference with write cache:\n\nhttp://sh.nu/bonnie.txt\n\n-- \n\nDaniel Ceregatti - Programmer\nOmnis Network, LLC\n\nA little suffering is good for the soul.\n\t\t-- Kirk, \"The Corbomite Maneuver\", stardate 1514.0\n\n",
"msg_date": "Thu, 16 Sep 2004 11:10:13 -0700",
"msg_from": "Daniel Ceregatti <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "On Thu, Sep 16, 2004 at 11:10:13AM -0700, Daniel Ceregatti wrote:\n> Here is a small example of the performance difference with write cache:\n> \n> http://sh.nu/bonnie.txt\n\nAm I missing something here? I can't find any tests with the same machine\nshowing the difference between writeback and write-through -- one machine\nalways uses write-through and the other always uses writeback. (Yes, the\nhardware looks more or less the same, but the kernels and systems are way\ndifferent.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 16 Sep 2004 20:57:32 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "Primer,\n\n> The site seems to be down. I was looking forward to reading it. :(\n\nI didn't have a problem. The site *is* in Portuguese, though.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Sep 2004 12:00:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "Josh Berkus wrote:\n\n>Primer,\n>\n> \n>\n>>The site seems to be down. I was looking forward to reading it. :(\n>> \n>>\n>\n>I didn't have a problem. The site *is* in Portuguese, though.\n>\n> \n>\nYes, it came up finally. Fortunately I'm Brazilian. :)\n\n-- \n\nDaniel Ceregatti - Programmer\nOmnis Network, LLC\n\nToo clever is dumb.\n\t\t-- Ogden Nash\n\n\n\n\n\n\n\n\nJosh Berkus wrote:\n\nPrimer,\n\n \n\nThe site seems to be down. I was looking forward to reading it. :(\n \n\n\nI didn't have a problem. The site *is* in Portuguese, though.\n\n \n\nYes, it came up finally. Fortunately I'm Brazilian. :)\n-- \n\nDaniel Ceregatti - Programmer\nOmnis Network, LLC\n\nToo clever is dumb.\n\t\t-- Ogden Nash",
"msg_date": "Thu, 16 Sep 2004 12:01:42 -0700",
"msg_from": "Daniel Ceregatti <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "Hi, there,\n\nI am running PostgreSQL 7.3.4 on MAC OS X G5 with dual processors and\n8GB memory. The shared buffer was set as 512MB.\n\nThe database has been running great until about 10 days ago when our\ndevelopers decided to add some indexes to some tables to speed up\ncertain uploading ops.\n\nNow the CPU usage reaches 100% constantly when there are a few users\naccessing their information by SELECT tables in databases. If I REINEX\nall the indexes, the database performance improves a bit but before \nlong,\nit goes back to bad again.\n\nMy suspicion is that since now a few indexes are added, every ops are\nrun by PostgreSQL with the indexes being used when calculating cost.\nThis leads to the downgrade of performance.\n\nWhat do you think of this? What is the possible solution?\n\nThanks!\n\nQing\n\nThe following is the output from TOP command:\n\nProcesses: 92 total, 4 running, 88 sleeping... 180 threads \n13:09:18\nLoad Avg: 2.81, 2.73, 2.50 CPU usage: 95.2% user, 4.8% sys, 0.0% \nidle\nSharedLibs: num = 116, resident = 11.5M code, 1.66M data, 4.08M \nLinkEdit\nMemRegions: num = 12132, resident = 148M + 2.82M private, 403M shared\nPhysMem: 435M wired, 5.04G active, 2.22G inactive, 7.69G used, 316M \nfree\nVM: 32.7G + 81.5M 5281127(13) pageins, 8544145(0) pageouts\n\n PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE \nVSIZE\n27314 postgres 92.2% 2:14.75 1 9 49 12.8M+ 396M 75.0M+ \n849M\n26099 postgres 91.1% 19:28.04 1 9 67 15.9M+ 396M 298M+ \n850M\n24754 top 2.8% 4:48.33 1 29 26 272K 404K 648K \n27.1M\n 0 kernel_tas 1.9% 2:12:05 40 2 8476 67.1M 0K 281M \n1.03G\n 294 hwmond 0.5% 2:26:34 8 75 57 240K 544K 1.09M \n31.0M\n 347 lookupd 0.3% 1:52:28 2 35 73 3.05M 648K 3.14M \n33.6M\n 89 configd 0.1% 53:05.16 3 126 151 304K 644K 832K \n29.2M\n26774 servermgrd 0.1% 0:02.93 1 10 40 344K- 1.17M+ 1.86M \n28.2M\n 170 coreservic 0.1% 0:09.04 1 40 93 152K 532K 2.64M \n28.5M\n 223 DirectoryS 0.1% 19:42.47 8 84 135 880K+ 1.44M 4.60M+ \n37.1M+\n 125 dynamic_pa 0.0% 0:26.79 1 12 17 16K 292K 28K \n17.7M\n 87 kextd 0.0% 0:01.23 2 17 21 0K 292K 36K \n28.2M\n 122 update 0.0% 14:27.71 1 9 15 16K 300K 44K \n17.6M\n 1 init 0.0% 0:00.03 1 12 16 28K 320K 76K \n17.6M\n 2 mach_init 0.0% 3:36.18 2 95 18 76K 320K 148K \n18.2M\n 81 syslogd 0.0% 0:19.96 1 10 17 96K 320K 148K \n17.7M\n\n",
"msg_date": "Thu, 16 Sep 2004 13:10:39 -0700",
"msg_from": "Qing Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "CPU maximized out!"
},
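Before blaming the new indexes, it may help to see exactly what the two busy backends from the top output above (pids 27314 and 26099) are actually running. On a 7.3 server this needs stats_start_collector and stats_command_string enabled in postgresql.conf; with that in place, something along these lines shows the live queries, which can then be fed to EXPLAIN ANALYZE:

    SELECT procpid, usename, current_query
      FROM pg_stat_activity
     WHERE current_query NOT LIKE '<IDLE>%';

This is a sketch, not a diagnosis: the point is simply to identify the offending statements before deciding which indexes to keep.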
{
"msg_contents": "\nHi, there,\n\nI am running PostgreSQL 7.3.4 on MAC OS X G5 with dual processors and\n8GB memory. The shared buffer was set as 512MB.\n\nThe database has been running great until about 10 days ago when our\ndevelopers decided to add some indexes to some tables to speed up\ncertain uploading ops.\n\nNow the CPU usage reaches 100% constantly when there are a few users\naccessing their information by SELECT tables in databases. If I REINEX\nall the indexes, the database performance improves a bit but before \nlong,\nit goes back to bad again.\n\nMy suspicion is that since now a few indexes are added, every ops are\nrun by PostgreSQL with the indexes being used when calculating cost.\nThis leads to the downgrade of performance.\n\nWhat do you think of this? What is the possible solution?\n\nThanks!\n\nQing\n\nThe following is the output from TOP command:\n\nProcesses: 92 total, 4 running, 88 sleeping... 180 threads \n13:09:18\nLoad Avg: 2.81, 2.73, 2.50 CPU usage: 95.2% user, 4.8% sys, 0.0% \nidle\nSharedLibs: num = 116, resident = 11.5M code, 1.66M data, 4.08M \nLinkEdit\nMemRegions: num = 12132, resident = 148M + 2.82M private, 403M shared\nPhysMem: 435M wired, 5.04G active, 2.22G inactive, 7.69G used, 316M \nfree\nVM: 32.7G + 81.5M 5281127(13) pageins, 8544145(0) pageouts\n\n PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE \nVSIZE\n27314 postgres 92.2% 2:14.75 1 9 49 12.8M+ 396M 75.0M+ \n849M\n26099 postgres 91.1% 19:28.04 1 9 67 15.9M+ 396M 298M+ \n850M\n24754 top 2.8% 4:48.33 1 29 26 272K 404K 648K \n27.1M\n 0 kernel_tas 1.9% 2:12:05 40 2 8476 67.1M 0K 281M \n1.03G\n 294 hwmond 0.5% 2:26:34 8 75 57 240K 544K 1.09M \n31.0M\n 347 lookupd 0.3% 1:52:28 2 35 73 3.05M 648K 3.14M \n33.6M\n 89 configd 0.1% 53:05.16 3 126 151 304K 644K 832K \n29.2M\n26774 servermgrd 0.1% 0:02.93 1 10 40 344K- 1.17M+ 1.86M \n28.2M\n 170 coreservic 0.1% 0:09.04 1 40 93 152K 532K 2.64M \n28.5M\n 223 DirectoryS 0.1% 19:42.47 8 84 135 880K+ 1.44M 4.60M+ \n37.1M+\n 125 dynamic_pa 0.0% 0:26.79 1 12 17 16K 292K 28K \n17.7M\n 87 kextd 0.0% 0:01.23 2 17 21 0K 292K 36K \n28.2M\n 122 update 0.0% 14:27.71 1 9 15 16K 300K 44K \n17.6M\n 1 init 0.0% 0:00.03 1 12 16 28K 320K 76K \n17.6M\n 2 mach_init 0.0% 3:36.18 2 95 18 76K 320K 148K \n18.2M\n 81 syslogd 0.0% 0:19.96 1 10 17 96K 320K 148K \n17.7M\n\n\n\n\n",
"msg_date": "Thu, 16 Sep 2004 13:43:58 -0700",
"msg_from": "Qing Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "On Thu, Sep 16, 2004 at 10:50:33AM -0700, Josh Berkus wrote:\n> The second step is to have lots of disks; 5 drives is a minimum for really \n> good performance. 3-drive RAID5, in particular, is a poor performer for \n> PostgreSQL, often resulting in I/O that is 40% or less as efficient as a \n> single disk due to extremely slow random seeks and little parallelization.\n> \n> Once you have 6 drives or more, opinions are divided on whether RAID 10 or \n> RAID 5 is better. I think it partly depends on your access pattern.\n\nWhat about benefits from putting WAL and pg_temp on seperate drives?\nSpecifically, we have a box with 8 drives, 2 in a mirror with the OS and\nWAL and pg_temp; the rest in a raid10 with the database on it. Do you\nthink it would have been better to make one big raid10? What if it was\nraid5? And what if it was only 6 drives total?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 16 Sep 2004 15:48:53 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "Qing,\n\nPlease don't start a new question by replying to someone else's e-mail. It \nconfuses people and makes it unlikely for you to get help.\n\n> My suspicion is that since now a few indexes are added, every ops are\n> run by PostgreSQL with the indexes being used when calculating cost.\n> This leads to the downgrade of performance.\n\nThat seems rather unlikely to me. Unless you've *really* complex queries \nand some unusual settings, you can't swamp the CPU through query planning. \n\nOn the other hand, your mention of REINDEX indicates that the table is being \nupdated very frequently. If that's the case, then the solution is probably \nfor you to cut back on the number of indexes.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Sep 2004 14:05:28 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about PG on OSX"
},
{
"msg_contents": "Jim,\n\n> What about benefits from putting WAL and pg_temp on seperate drives?\n> Specifically, we have a box with 8 drives, 2 in a mirror with the OS and\n> WAL and pg_temp; the rest in a raid10 with the database on it. Do you\n> think it would have been better to make one big raid10? What if it was\n> raid5? And what if it was only 6 drives total?\n\nOSDL's finding was that even with a large RAID array, it still benefits you to \nhave WAL on a seperate disk resource ... substantially, like 10% total \nperformance. However, your setup doesn't get the full possible benefit, \nsince WAL is sharing the array with other resources.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Sep 2004 14:07:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "Josh:\n\nSorry for the reply to the existing subject!\n\nThe newly added indexes have made all other queries much slower except \nthe uploading ops.\nAs a result, all the CPU's are running crazy but not much is getting \nfinished and our Application\nServer waits for certain time and then times out. Customers thought the \nsystem hung.\n\nMy guess is that all the queries that involves the columns that are \nbeing indexed need to\nbe rewritten to use the newly created indexes to avoid the performance \nissues. The reason\nis that REINDEX does not help either. Does it make sense?\n\nThanks!\n\nQing\n\nOn Sep 16, 2004, at 2:05 PM, Josh Berkus wrote:\n\n> Qing,\n>\n> Please don't start a new question by replying to someone else's \n> e-mail. It\n> confuses people and makes it unlikely for you to get help.\n>\n>> My suspicion is that since now a few indexes are added, every ops are\n>> run by PostgreSQL with the indexes being used when calculating cost.\n>> This leads to the downgrade of performance.\n>\n> That seems rather unlikely to me. Unless you've *really* complex \n> queries\n> and some unusual settings, you can't swamp the CPU through query \n> planning.\n>\n> On the other hand, your mention of REINDEX indicates that the table is \n> being\n> updated very frequently. If that's the case, then the solution is \n> probably\n> for you to cut back on the number of indexes.\n>\n> -- \n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\n",
"msg_date": "Thu, 16 Sep 2004 14:20:29 -0700",
"msg_from": "Qing Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "indexes make other queries slow!"
},
{
"msg_contents": "Qing,\n\n> My guess is that all the queries that involves the columns that are\n> being indexed need to\n> be rewritten to use the newly created indexes to avoid the performance\n> issues. The reason\n> is that REINDEX does not help either. Does it make sense?\n\nWhat's the rate of updates on the newly indexed tables? If you have a lot \nof updates, the work that the database does to keep the indexes current would \nput a big load on your server. This is far more likely to be the cause of \nyour issues.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Sep 2004 14:24:28 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: indexes make other queries slow!"
},
{
"msg_contents": "> My guess is that all the queries that involves the columns that are\n> being indexed need to\n> be rewritten to use the newly created indexes to avoid the performance\n> issues. The reason\n> is that REINDEX does not help either. Does it make sense?\n> \n\nQing,\n\nGenerally, adding new indexes blindly will hurt performance, not help it.\n\nMore indexes mean more work during INSERT/UPDATE. That could easily be\nhampering your performance if you have a high INSERT/UPDATE volume.\n\nRun your queries through EXPLAIN ANALYZE to make sure they're using the\nright indexes. Take a look at the pg_stat_user_indexes table to see what\nindexes are simply not being used.\n\nJason\n\n",
"msg_date": "Thu, 16 Sep 2004 17:36:46 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: indexes make other queries slow!"
},
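A concrete way to act on the pg_stat_user_indexes suggestion above, assuming the statistics collector is running, is to list the indexes that have never been scanned since the counters were last reset; those are pure overhead on every INSERT and UPDATE and are the first candidates for dropping:

    SELECT relname, indexrelname, idx_scan, idx_tup_read
      FROM pg_stat_user_indexes
     ORDER BY idx_scan, relname;

Indexes still sitting at idx_scan = 0 after a representative period of load deserve a hard look.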
{
"msg_contents": "On Thu, Sep 16, 2004 at 02:07:37PM -0700, Josh Berkus wrote:\n> Jim,\n> \n> > What about benefits from putting WAL and pg_temp on seperate drives?\n> > Specifically, we have a box with 8 drives, 2 in a mirror with the OS and\n> > WAL and pg_temp; the rest in a raid10 with the database on it. Do you\n> > think it would have been better to make one big raid10? What if it was\n> > raid5? And what if it was only 6 drives total?\n> \n> OSDL's finding was that even with a large RAID array, it still benefits you to \n> have WAL on a seperate disk resource ... substantially, like 10% total \n> performance. However, your setup doesn't get the full possible benefit, \n> since WAL is sharing the array with other resources.\n \nYes, but if a 3 drive raid array is 40% slower than a single disk it\nseems like the 10% benefit for having WAL on a seperate drive would\nstill be a losing proposition.\n\nBTW, my experience with our setup is that the raid10 is almost always\nthe IO bottleneck, and not the mirror with everything else on it.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 16 Sep 2004 17:19:43 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
},
{
"msg_contents": "Quoting Josh Berkus <[email protected]>:\n\n> The first and most important step for RAID performance with PostgreSQL is to\n> \n> get a card with onboard battery back-up and enable the write cache for the \n> card. You do not want to enable the write cache *without* battery back-up\n> \n\nI'm curious about this -- how do you avoid losing data if a cache stick dies? \nWithout redundancy, whatever hasn't been destaged to the physical media vanishes\n Dual-controller external arrays (HDS, EMC, LSI, etc.) tend to mirror (though\nalgorithms vary) the cache in addition to battery backup. But do onboard arrays\ntend to do this as well?\n\nMark\n",
"msg_date": "Thu, 16 Sep 2004 16:00:19 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Article about PostgreSQL and RAID in Brazil"
}
] |
[
{
"msg_contents": "Our product (Sophos PureMessage) runs on a Postgres database.\n\nSome of our Solaris customers have Oracle licenses, and they've \ncommented on the performance difference between Oracle and Postgresql\non such boxes. In-house, we've noticed the 2:1 (sometimes 5:1)\nperformance difference in inserting rows (mostly 2-4K), between\nPostgresql on Solaris 8 and on Linux, for machines with comparable\nCPU's and RAM.\n\nThese (big) customers are starting to ask, why don't we just port our \ndataserver to Oracle for them? I'd like to avoid that, if possible :-)\n\nWhat we can test on, in-house are leetle Sun workstations, while some of \nour customers have BIG Sun iron --- so I have no means to-date to \nreproduce what their bottleneck is :-( Yes, it has been recommended that \nwe talk to Sun about their iForce test lab ... that's in the pipe.\n\nIn the meantime, what I gather from browsing mail archives is that \npostgresql on Solaris seems to get hung up on IO rather than CPU.\nFurthermore, I notice that Oracle and now MySQL use directio to bypass \nthe system cache, when doing heavy writes to the disk; and Postgresql \ndoes not.\n\nNot wishing to alter backend/store/file for this test, I figured I could \nget a customer to mount the UFS volume for pg_xlog with the option \n\"forcedirectio\".\n\nAny comment on this? No consideration of what the wal_sync_method is at \nthis point. Presumably it's defaulting to fdatasync on Solaris.\n\nBTW this is Postgres 7.4.1, and our customers are Solaris 8 and 9.\n",
"msg_date": "Fri, 17 Sep 2004 19:23:08 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tryint to match Solaris-Oracle performance with directio?"
},
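One thing worth ruling out before touching mount options: if the insert workload commits each 2-4K row in its own transaction, the Solaris numbers will be dominated by per-commit WAL flushes rather than by the filesystem cache. A hedged sketch (the quarantine_msgs table name is invented purely for illustration) of batching inserts so the WAL is flushed once per batch instead of once per row:

    BEGIN;
    INSERT INTO quarantine_msgs (body) VALUES ('...');
    INSERT INTO quarantine_msgs (body) VALUES ('...');
    -- ... a few hundred rows per batch ...
    COMMIT;

COPY is faster still if the data can be staged. If batching is already in place and the 2:1 gap remains, then the I/O path really is the place to look.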
{
"msg_contents": "Mischa Sandberg wrote:\n> In the meantime, what I gather from browsing mail archives is that \n> postgresql on Solaris seems to get hung up on IO rather than CPU.\n> Furthermore, I notice that Oracle and now MySQL use directio to bypass \n> the system cache, when doing heavy writes to the disk; and Postgresql \n> does not.\n> \n> Not wishing to alter backend/store/file for this test, I figured I could \n> get a customer to mount the UFS volume for pg_xlog with the option \n> \"forcedirectio\".\n> \n> Any comment on this? No consideration of what the wal_sync_method is at \n> this point. Presumably it's defaulting to fdatasync on Solaris.\n> \n> BTW this is Postgres 7.4.1, and our customers are Solaris 8 and 9.\n\nIf you care your data upgrade to more recent 7.4.5\n\nTest your better sync method using /src/tools/fsync however do some\nexperiment changing the sync method, you can also avoid to update the\nacces time for the inodes mounting the partition with noatime option\n( this however have more impact on performance for read activities )\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Sat, 18 Sep 2004 01:42:25 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tryint to match Solaris-Oracle performance with directio?"
},
{
"msg_contents": "I fully agree with Gaetano about testing sync methods. From testing I've done\non two different Solaris 8 boxes, the O_DSYNC option on Solaris 8 beats fsync\nand fdatasync easily. Test it yourself though. There's probably some\nopportuntiy there for better performance for you.\n\n> > BTW this is Postgres 7.4.1, and our customers are Solaris 8 and 9.\n> \n> If you care your data upgrade to more recent 7.4.5\n> \n> Test your better sync method using /src/tools/fsync however do some\n> experiment changing the sync method, you can also avoid to update the\n> acces time for the inodes mounting the partition with noatime option\n> ( this however have more impact on performance for read activities )\n> \n> \n> Regards\n> Gaetano Mendola\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n\n",
"msg_date": "Fri, 17 Sep 2004 18:22:43 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Tryint to match Solaris-Oracle performance with directio?"
},
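For reference, the O_DSYNC behaviour recommended above corresponds to wal_sync_method = open_datasync, which opens the WAL with O_DSYNC where the platform supports it. The parameter is spelled wal_sync_method and is normally set in postgresql.conf; the current value can be checked from psql:

    SHOW wal_sync_method;
    -- typical choices: fsync, fdatasync, open_sync, open_datasync

This is only a pointer to the knob being discussed; as noted in the thread, the winning method varies by platform and is worth measuring rather than assuming.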
{
"msg_contents": "Mischa Sandberg <[email protected]> writes:\n> Our product (Sophos PureMessage) runs on a Postgres database.\n> Some of our Solaris customers have Oracle licenses, and they've \n> commented on the performance difference between Oracle and Postgresql\n> on such boxes. In-house, we've noticed the 2:1 (sometimes 5:1)\n> performance difference in inserting rows (mostly 2-4K), between\n> Postgresql on Solaris 8 and on Linux, for machines with comparable\n> CPU's and RAM.\n\nYou haven't given any evidence at all to say that I/O is where the\nproblem is. I think it would be good first to work through the\nconventional issues such as configuration parameters, foreign key\nproblems, etc. Give us some more detail about the slow INSERT\nqueries ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 2004 12:35:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tryint to match Solaris-Oracle performance with directio? "
},
{
"msg_contents": "Mischa Sandberg wrote:\n\n> In the meantime, what I gather from browsing mail archives is that \n> postgresql on Solaris seems to get hung up on IO rather than CPU.\n\nWell, people more knowledgeable in the secrets of postgres seem \nconfident that this is not your problem. Fortunetly, however, there is a \nsimple way to find out.\n\nJust download the utinyint var type from pgfoundry \n(http://pgfoundry.org/projects/sql2pg/). There are some stuff there you \nwill need to compile yourself from CVS. I'm sorry, but I haven't done a \nproper release just yet. In any case, the utinyint type should provide \nyou with the data type you seek, and thus allow you to find out whether \nthis is, indeed, the problem.\n\n-- \nShachar Shemesh\nLingnu Open Source Consulting ltd.\nhttp://www.lingnu.com/\n\n",
"msg_date": "Sat, 18 Sep 2004 20:02:48 +0300",
"msg_from": "Shachar Shemesh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tryint to match Solaris-Oracle performance with directio?"
},
{
"msg_contents": "Hi Mischa,\n\nYou probably need to determine whether the bottleneck is cpu or disk (should be\neasy enough!)\n\nHaving said that, assuming your application is insert/update intensive I would\nrecommend:\n\n- mount the ufs filesystems Pg uses *without* logging\n- use postgresql.conf setting fsync_method=fdatasync\n\nThese changes made my Pgbench results improve by a factor or 4 (enough to catch\nthe big O maybe...)\n\nThen you will need to have a look at your other postgresql.conf parameters!\n(posting this file to the list might be a plan)\n\nCheers\n\nMark\n\n\n\nQuoting Mischa Sandberg <[email protected]>:\n\n> Our product (Sophos PureMessage) runs on a Postgres database.\n>\n> Some of our Solaris customers have Oracle licenses, and they've\n> commented on the performance difference between Oracle and Postgresql\n> on such boxes. In-house, we've noticed the 2:1 (sometimes 5:1)\n> performance difference in inserting rows (mostly 2-4K), between\n> Postgresql on Solaris 8 and on Linux, for machines with comparable\n> CPU's and RAM.\n>\n>\n\n",
"msg_date": "Sun, 19 Sep 2004 22:04:41 +1200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Tryint to match Solaris-Oracle performance with"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm using PostgreSQL 7.4 on a table with ~700.000 rows looking like this:\n\n Table \"public.enkeltsalg\"\n Column | Type | Modifiers \n------------+--------------------------+-------------------------------------------------------\n id | integer | not null default nextval('\"enkeltsalg_id_seq\"'::text)\n kommentar | text | not null default ''::text\n antall | numeric(14,4) | not null\n belop | numeric(10,0) | not null\n type | character(1) | not null\n tid | timestamp with time zone | default now()\n eksternid | integer | \n kasseid | integer | \n baraapning | integer | \n salgspris | integer | \n firma | integer | \n bongid | integer | \nIndexes:\n \"enkeltsalg_pkey\" primary key, btree (id)\n \"enkeltsalg_aapn\" btree (baraapning)\n \"enkeltsalg_aapn_pris\" btree (baraapning, salgspris)\n \"enkeltsalg_aapn_type\" btree (baraapning, \"type\")\n \"enkeltsalg_pris\" btree (salgspris)\nCheck constraints:\n \"enkeltsalg_type_valid\" CHECK (\"type\" = 'K'::bpchar OR \"type\" = 'B'::bpchar OR \"type\" = 'M'::bpchar OR \"type\" = 'T'::bpchar)\n\nAnd I'm doing the query (after VACUUM ANALYZE)\n\nsmt=# explain analyze select sum(belop) as omsetning,date_trunc('day',tid) as dato from enkeltsalg group by date_trunc('day',tid);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=108062.34..114477.98 rows=172735 width=17) (actual time=20977.544..23890.020 rows=361 loops=1)\n -> Sort (cost=108062.34..109912.99 rows=740263 width=17) (actual time=20947.372..21627.107 rows=710720 loops=1)\n Sort Key: date_trunc('day'::text, tid)\n -> Seq Scan on enkeltsalg (cost=0.00..18010.29 rows=740263 width=17) (actual time=0.091..7180.528 rows=710720 loops=1)\n Total runtime: 23908.538 ms\n(5 rows)\n\nNow, as you can see, the GroupAggregate here is _way_ off, so the planner\nmakes the wrong choice (it should do a hash aggregate). 
If I set sort_mem to\n131072 instead of 16384, it does a hash aggregate (which is 10 seconds\ninstead of 24), but I can't have sort_mem that high generally.\n\nNow, my first notion was creating a functional index to help the planner:\n\nsmt=# create index enkeltsalg_dag on enkeltsalg ( date_trunc('day',tid) );\nCREATE INDEX \nsmt=# vacuum analyze;\nVACUUM\n\nHowever, this obviously didn't help the planner (this came as a surprise to\nme, but probably won't come as a surprise to the more seasoned users here :-)\n):\n\nsmt=# explain analyze select sum(belop) as omsetning,date_trunc('day',tid) as dato from enkeltsalg group by date_trunc('day',tid);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=103809.15..110017.11 rows=175512 width=17) (actual time=21061.357..23917.370 rows=361 loops=1)\n -> Sort (cost=103809.15..105585.95 rows=710720 width=17) (actual time=21032.239..21695.674 rows=710720 loops=1)\n Sort Key: date_trunc('day'::text, tid)\n -> Seq Scan on enkeltsalg (cost=0.00..17641.00 rows=710720 width=17) (actual time=0.091..7231.387 rows=710720 loops=1)\n Total runtime: 23937.791 ms\n(5 rows)\n\nI also tried to increase the statistics on the \"tid\" column:\n\nsmt=# alter table enkeltsalg alter column tid set statistics 500;\nALTER TABLE\nsmt=# analyze enkeltsalg;\nANALYZE\n\nHowever, this made the planner only do a _worse_ estimate:\n\nsmt=# explain analyze select sum(belop) as omsetning,date_trunc('day',tid) as dato from enkeltsalg group by date_trunc('day',tid);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=107906.59..114449.09 rows=199715 width=17) (actual time=20947.197..23794.389 rows=361 loops=1)\n -> Sort (cost=107906.59..109754.56 rows=739190 width=17) (actual time=20918.001..21588.735 rows=710720 loops=1)\n Sort Key: date_trunc('day'::text, tid)\n -> Seq Scan on enkeltsalg (cost=0.00..17996.88 rows=739190 width=17) (actual time=0.092..7166.488 rows=710720 loops=1)\n Total runtime: 23814.624 ms\n(5 rows)\n\nActually, it seems that the higher I set statistics on \"tid\", the worse the\nestimate becomes.\n\nAlso, I was told (on #postgresql :-) ) to include the following information:\n\nsmt=# select n_distinct from pg_stats where attname='tid';\n n_distinct \n ------------\n -0.270181\n(1 row)\n\nAny ideas for speeding this up?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 18 Sep 2004 19:01:17 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner having way wrong estimate for group aggregate"
},
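For what it is worth, the size of the misestimate is easy to quantify: the planner expects roughly 172,000 groups where only a few hundred distinct days exist. A quick check of the real group count (slow, since it scans the table, but useful for comparison against the plan's row estimate):

    SELECT count(DISTINCT date_trunc('day', tid)) AS distinct_days
      FROM enkeltsalg;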
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> Now, my first notion was creating a functional index to help the planner:\n> ...\n> However, this obviously didn't help the planner (this came as a surprise to\n> me, but probably won't come as a surprise to the more seasoned users here :-)\n\n7.4 doesn't have any statistics on expression indexes. 8.0 will do what\nyou want though. (I just fixed an oversight that prevented it from\ndoing so...)\n\n> Actually, it seems that the higher I set statistics on \"tid\", the worse the\n> estimate becomes.\n\nI believe that the estimate of number of groups will be exactly the same\nas the estimate of the number of values of tid --- there's no knowledge\nthat date_trunc() might reduce the number of distinct values.\n\n> Any ideas for speeding this up?\n\nIn 7.4, the only way I can see to force this to use a hash aggregate is\nto temporarily set enable_sort false or raise sort_mem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Sep 2004 15:48:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner having way wrong estimate for group aggregate "
},
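A minimal sketch of the 7.4 workaround suggested above, applied so that sort_mem is not raised globally: bump it only for the transaction that runs the report, which gives the planner enough memory to pick the hash aggregate.

    BEGIN;
    SET LOCAL sort_mem = 131072;   -- reverts automatically at end of transaction
    SELECT sum(belop) AS omsetning, date_trunc('day', tid) AS dato
      FROM enkeltsalg
     GROUP BY date_trunc('day', tid);
    COMMIT;

SET LOCAL enable_sort = off is the other lever mentioned, at the cost of distorting any other sorts in the same transaction.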
{
"msg_contents": "On Sat, Sep 18, 2004 at 03:48:13PM -0400, Tom Lane wrote:\n> 7.4 doesn't have any statistics on expression indexes. 8.0 will do what\n> you want though. (I just fixed an oversight that prevented it from\n> doing so...)\n\nOK, so I'll have to wait for 8.0.0beta3 or 8.0.0 (I tried 8.0.0beta2, it gave\nme zero difference) -- fortunately, I can probably wait at the rate\neverything else is progressing here. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 18 Sep 2004 22:03:36 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner having way wrong estimate for group aggregate"
}
] |
[
{
"msg_contents": "A recent comment on this (or perhaps another?) mailing list about Sun boxen\nand the directio mount option has prompted me to read about O_DIRECT on the\nopen() manpage.\n\nHas anybody tried this option? Ever taken any performance measurements?\nI assume the way postgres manages its buffer memory (dealing with 8kB pages)\nwould be compatible with the restrictions:\n\n Under Linux 2.4 transfer sizes, and the alignment of user buffer\n and file offset must all be multiples of the logical block size of\n the file system.\n\nAccording to the manpage, O_DIRECT implies O_SYNC:\n\n File I/O is done directly to/from user space buffers. The I/O is\n synchronous, i.e., at the completion of the read(2) or write(2)\n system call, data is guaranteed to have been transferred.\n\nAt the moment I am fairly interested in trying this, and I would spend some\ntime with it, but I have my hands full with other projects. I'd imagine this\nis more use with the revamped buffer manager in PG8.0 than the 7.x line, but\nwe are not using PG8.0 here yet.\n\nWould people be interested in a performance benchmark? I need some benchmark\ntips :)\n\nIncidentally, postgres heap files suffer really, really bad fragmentation,\nwhich affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)\nquite drastically. We have in-house patches that somewhat alleiviate this,\nbut they are not release quality. Has anybody else suffered this?\n\nGuy Thornley\n\n",
"msg_date": "Mon, 20 Sep 2004 19:57:34 +1200",
"msg_from": "Guy Thornley <[email protected]>",
"msg_from_op": true,
"msg_subject": "O_DIRECT setting"
},
{
"msg_contents": "On Mon, 2004-09-20 at 17:57, Guy Thornley wrote:\n> According to the manpage, O_DIRECT implies O_SYNC:\n> \n> File I/O is done directly to/from user space buffers. The I/O is\n> synchronous, i.e., at the completion of the read(2) or write(2)\n> system call, data is guaranteed to have been transferred.\n\nThis seems like it would be a rather large net loss. PostgreSQL already\nstructures writes so that the writes we need to hit disk immediately\n(WAL records) are fsync()'ed -- the kernel is given more freedom to\nschedule how other writes are flushed from the cache. Also, my\nrecollection is that O_DIRECT also disables readahead -- if that's\ncorrect, that's not what we want either.\n\nBTW, using O_DIRECT has been discussed a few times in the past. Have you\nchecked the list archives? (for both -performance and -hackers)\n\n> Would people be interested in a performance benchmark?\n\nSure -- I'd definitely be curious, although as I said I'm skeptical it's\na win.\n\n> I need some benchmark tips :)\n\nSome people have noted that it can be difficult to use contrib/pgbench\nto get reproducible results -- you might want to look at Jan's TPC-W\nimplementation or the OSDL database benchmarks:\n\nhttp://pgfoundry.org/projects/tpc-w-php/\nhttp://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/\n\n> Incidentally, postgres heap files suffer really, really bad fragmentation,\n> which affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)\n> quite drastically. We have in-house patches that somewhat alleiviate this,\n> but they are not release quality.\n\nCan you elaborate on these \"in-house patches\"?\n\n-Neil\n\n\n",
"msg_date": "Thu, 23 Sep 2004 13:58:34 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT setting"
},
{
"msg_contents": "\nTODO has:\n\n\t* Consider use of open/fcntl(O_DIRECT) to minimize OS caching\n\nShould the item be removed?\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> On Mon, 2004-09-20 at 17:57, Guy Thornley wrote:\n> > According to the manpage, O_DIRECT implies O_SYNC:\n> > \n> > File I/O is done directly to/from user space buffers. The I/O is\n> > synchronous, i.e., at the completion of the read(2) or write(2)\n> > system call, data is guaranteed to have been transferred.\n> \n> This seems like it would be a rather large net loss. PostgreSQL already\n> structures writes so that the writes we need to hit disk immediately\n> (WAL records) are fsync()'ed -- the kernel is given more freedom to\n> schedule how other writes are flushed from the cache. Also, my\n> recollection is that O_DIRECT also disables readahead -- if that's\n> correct, that's not what we want either.\n> \n> BTW, using O_DIRECT has been discussed a few times in the past. Have you\n> checked the list archives? (for both -performance and -hackers)\n> \n> > Would people be interested in a performance benchmark?\n> \n> Sure -- I'd definitely be curious, although as I said I'm skeptical it's\n> a win.\n> \n> > I need some benchmark tips :)\n> \n> Some people have noted that it can be difficult to use contrib/pgbench\n> to get reproducible results -- you might want to look at Jan's TPC-W\n> implementation or the OSDL database benchmarks:\n> \n> http://pgfoundry.org/projects/tpc-w-php/\n> http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/\n> \n> > Incidentally, postgres heap files suffer really, really bad fragmentation,\n> > which affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)\n> > quite drastically. We have in-house patches that somewhat alleiviate this,\n> > but they are not release quality.\n> \n> Can you elaborate on these \"in-house patches\"?\n> \n> -Neil\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 23 Sep 2004 09:35:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT setting"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> TODO has:\n> \t* Consider use of open/fcntl(O_DIRECT) to minimize OS caching\n> Should the item be removed?\n\nI think it's fine ;-) ... it says \"consider it\", not \"do it\". The point\nis that we could do with more research in this area, even if O_DIRECT\nper se is not useful. Maybe you could generalize the entry to\n\"investigate ways of fine-tuning OS caching behavior\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 2004 10:57:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT setting "
},
{
"msg_contents": "On Mon, Sep 20, 2004 at 07:57:34PM +1200, Guy Thornley wrote:\n[snip]\n> \n> Incidentally, postgres heap files suffer really, really bad fragmentation,\n> which affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)\n> quite drastically. We have in-house patches that somewhat alleiviate this,\n> but they are not release quality. Has anybody else suffered this?\n> \n\nAny chance I could give those patches a try? I'm interested in seeing\nhow they may affect our DBT-3 workload, which execute DSS type queries.\n\nThanks,\nMark\n",
"msg_date": "Fri, 24 Sep 2004 08:07:03 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT setting"
},
{
"msg_contents": "On Thu, Sep 23, 2004 at 10:57:41AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > TODO has:\n> > \t* Consider use of open/fcntl(O_DIRECT) to minimize OS caching\n> > Should the item be removed?\n> \n> I think it's fine ;-) ... it says \"consider it\", not \"do it\". The point\n> is that we could do with more research in this area, even if O_DIRECT\n> per se is not useful. Maybe you could generalize the entry to\n> \"investigate ways of fine-tuning OS caching behavior\".\n> \n> \t\t\tregards, tom lane\n> \n\nI talked to Jan a little about this during OSCon since Linux filesystems\n(ext2, ext3, etc) let you use O_DIRECT. He felt the only place where\nPostgreSQL may benefit from this now, without managing its own buffer first,\nwould be with the log writer. I'm probably going to get this wrong, but\nhe thought it would be interesting to try an experiment by taking X number\nof pages to be flushed, sort them (by age? where they go on disk?) and\nwrite them out. He thought this would be a relatively easy thing to try,\na day or two of work. We'd really love to experiment with it.\n\nMark\n",
"msg_date": "Wed, 29 Sep 2004 18:45:10 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT setting"
},
{
"msg_contents": "Mark Wong <[email protected]> writes:\n> I talked to Jan a little about this during OSCon since Linux filesystems\n> (ext2, ext3, etc) let you use O_DIRECT. He felt the only place where\n> PostgreSQL may benefit from this now, without managing its own buffer first,\n> would be with the log writer. I'm probably going to get this wrong, but\n> he thought it would be interesting to try an experiment by taking X number\n> of pages to be flushed, sort them (by age? where they go on disk?) and\n> write them out.\n\nHmm. Most of the time the log writer has little choice about page write\norder --- certainly if all your transactions are small it's not going to\nhave any choice. I think this would mainly be equivalent to O_SYNC with\nthe extra feature of stopping the kernel from buffering the WAL data in\nits own buffer cache. Which is probably useful, but I doubt it's going\nto make a huge difference.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Sep 2004 23:03:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT setting "
},
{
"msg_contents": "Sorry about the belated reply, its been busy around here.\n\n> > Incidentally, postgres heap files suffer really, really bad fragmentation,\n> > which affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)\n> > quite drastically. We have in-house patches that somewhat alleiviate this,\n> > but they are not release quality. Has anybody else suffered this?\n> > \n> \n> Any chance I could give those patches a try? I'm interested in seeing\n> how they may affect our DBT-3 workload, which execute DSS type queries.\n\nLike I said, the patches are not release quality... if you run them on a\nmetadata journalling filesystem, without an 'ordered write' mode, its\npossible to end up with corrupt heaps after a crash because of garbage data\nin the extended files.\n\nIf/when we move to postgres 8 I'll try to ensure the patches get re-done\nwith releasable quality\n\nGuy Thornley\n",
"msg_date": "Thu, 30 Sep 2004 19:02:32 +1200",
"msg_from": "Guy Thornley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT setting"
},
{
"msg_contents": "On Thu, Sep 30, 2004 at 07:02:32PM +1200, Guy Thornley wrote:\n> Sorry about the belated reply, its been busy around here.\n> \n> > > Incidentally, postgres heap files suffer really, really bad fragmentation,\n> > > which affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)\n> > > quite drastically. We have in-house patches that somewhat alleiviate this,\n> > > but they are not release quality. Has anybody else suffered this?\n> > > \n> > \n> > Any chance I could give those patches a try? I'm interested in seeing\n> > how they may affect our DBT-3 workload, which execute DSS type queries.\n> \n> Like I said, the patches are not release quality... if you run them on a\n> metadata journalling filesystem, without an 'ordered write' mode, its\n> possible to end up with corrupt heaps after a crash because of garbage data\n> in the extended files.\n> \n> If/when we move to postgres 8 I'll try to ensure the patches get re-done\n> with releasable quality\n> \n> Guy Thornley\n\nThat's ok, we like to help test and proof things, we don't need patches to be\nrelease quality.\n\nMark\n",
"msg_date": "Thu, 30 Sep 2004 09:33:34 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT setting"
}
] |
[
{
"msg_contents": "Hello.\nCouple of questions:\n\n\n- Q1: Today I decided to do a vacuum full verbose analyze on a large table that has been giving me slow performance. And then I did it again. I noticed that after each run the values in my indexes and estimate row version changed. What really got me wondering is the fact my indexes report more rows than are in the table and then the estimated rows is less than the actual amount.\n\nThe table is a read-only table that is updated 1/wk. After updating it is vacuumed full. I've also tried reindexing but the numbers still change.\nIs this normal? Below is a partial output for 4 consecutive vacuum full analyzes. No data was added nor was there anyone in the table.\n\n- Q2: I have about a dozen 5M plus row tables. I currently have my max_fsm_pages set to 300,000. As you can see in vacuum full output I supplied, one table is already over this amount. Is there a limit on the size of max_fsm_pages?\n\n\nCONF settings:\n# - Memory -\n\nshared_buffers = 2000 # min 16, at least max_connections*2, 8KB each\nsort_mem = 12288 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 300000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 500 # min 100, ~50 bytes each\n\n\nVacuum full information\n#after second vacuum full\nINFO: index \"emaildat_fkey\" now contains 8053743 row versions in 25764 pages\nDETAIL: 1895 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.38s/0.42u sec elapsed 11.11 sec.\nINFO: analyzing \"cdm.cdm_email_data\"\nINFO: \"cdm_email_data\": 65882 pages, 3000 rows sampled, 392410 estimated total rows\n\n\n#after third vacuum full\nINFO: index \"emaildat_fkey\" now contains 8052738 row versions in 25769 pages\nDETAIL: 890 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.08s/0.32u sec elapsed 4.36 sec.\nINFO: analyzing \"cdm.cdm_email_data\"\nINFO: \"cdm_email_data\": 65874 pages, 3000 rows sampled, 392363 estimated total rows\n\n\n#after REINDEX and vacuum full\nINFO: index \"emaildat_fkey\" now contains 8052369 row versions in 25771 pages\nDETAIL: 521 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.37s/0.35u sec elapsed 4.79 sec.\nINFO: analyzing \"cdm.cdm_email_data\"\nINFO: \"cdm_email_data\": 65869 pages, 3000 rows sampled, 392333 estimated total rows\n\n#After vacuum full(s)\nmdc_oz=# select count(*) from cdm.cdm_email_data;\n count\n---------\n 5433358\n(1 row)\n\n\nTIA\nPatrick\n\n\n\n\n\n\n \nHello.\nCouple of questions:\n \n \n\n- Q1: Today I decided to do a vacuum full verbose \nanalyze on a large table that has been giving me slow performance. And \nthen I did it again. I noticed that after each run the values in my \nindexes and estimate row version changed. What really got me \nwondering is the fact my indexes report more rows than are in the table and then \nthe estimated rows is less than the actual amount.\n \nThe table is a read-only table that is updated \n1/wk. After updating it is vacuumed full. I've also tried reindexing \nbut the numbers still change.\nIs this normal? Below is a partial output for \n4 consecutive vacuum full analyzes. No data was added nor was there anyone \nin the table.\n \n- Q2: I have about a dozen 5M plus row \ntables. I currently have my max_fsm_pages set to 300,000. As you can \nsee in vacuum full output I supplied, one table is already over this \namount. 
Is there a limit on the size of max_fsm_pages?\n \n \nCONF settings:\n# - Memory -\n \nshared_buffers = \n2000 # min 16, at \nleast max_connections*2, 8KB eachsort_mem = \n12288 \n# min 64, size in KB#vacuum_mem = \n8192 \n# min 1024, size in KB\n \n# - Free Space Map -\n \nmax_fsm_pages = \n300000 # min \nmax_fsm_relations*16, 6 bytes eachmax_fsm_relations = \n500 # min 100, ~50 bytes \neach\n \nVacuum full information\n#after second vacuum full\nINFO: index \"emaildat_fkey\" now contains \n8053743 row versions in 25764 pagesDETAIL: 1895 index row versions \nwere removed.0 index pages have been deleted, 0 are currently \nreusable.CPU 2.38s/0.42u sec elapsed 11.11 sec.INFO: analyzing \n\"cdm.cdm_email_data\"INFO: \"cdm_email_data\": 65882 pages, 3000 rows \nsampled, 392410 estimated total rows\n \n \n#after third vacuum full\nINFO: index \"emaildat_fkey\" now contains \n8052738 row versions in 25769 pagesDETAIL: 890 index row versions were \nremoved.0 index pages have been deleted, 0 are currently reusable.CPU \n2.08s/0.32u sec elapsed 4.36 sec.INFO: analyzing \n\"cdm.cdm_email_data\"INFO: \"cdm_email_data\": 65874 pages, 3000 rows \nsampled, 392363 estimated total rows\n \n \n#after REINDEX and vacuum full\nINFO: index \"emaildat_fkey\" now contains \n8052369 row versions in 25771 pagesDETAIL: 521 index row versions were \nremoved.0 index pages have been deleted, 0 are currently reusable.CPU \n1.37s/0.35u sec elapsed 4.79 sec.INFO: analyzing \n\"cdm.cdm_email_data\"INFO: \"cdm_email_data\": 65869 pages, 3000 rows \nsampled, 392333 estimated total rows\n \n#After vacuum full(s)\nmdc_oz=# select count(*) from \ncdm.cdm_email_data; count--------- 5433358(1 \nrow)\n \nTIA\nPatrick",
"msg_date": "Mon, 20 Sep 2004 21:01:26 -0700",
"msg_from": "\"Patrick Hatcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum full & max_fsm_pages question"
},
{
"msg_contents": "On Tuesday 21 September 2004 00:01, Patrick Hatcher wrote:\n> Hello.\n> Couple of questions:>\n> - Q1: Today I decided to do a vacuum full verbose analyze on a large table\n> that has been giving me slow performance. And then I did it again. I\n> noticed that after each run the values in my indexes and estimate row\n> version changed. What really got me wondering is the fact my indexes\n> report more rows than are in the table and then the estimated rows is less\n> than the actual amount.\n>\n> The table is a read-only table that is updated 1/wk. After updating it is\n> vacuumed full. I've also tried reindexing but the numbers still change. Is\n> this normal? Below is a partial output for 4 consecutive vacuum full\n> analyzes. No data was added nor was there anyone in the table.\n>\n\nThis looks normal to me for a pre 7.4 database, if I am right your running on \n7.2? Basically your indexes are overgrown, so each time you run vacuum you \nare shrinking the number of pages involved, which will change the row counts, \nand correspondingly change the count on the table as the sampled pages \nchange. \n\n\n> - Q2: I have about a dozen 5M plus row tables. I currently have my\n> max_fsm_pages set to 300,000. As you can see in vacuum full output I\n> supplied, one table is already over this amount. Is there a limit on the\n> size of max_fsm_pages?\n>\n\nThe limit is based on your memory... each page = 6 bytes. But according to \nthe output below you are not over 300000 pages yet on that table (though you \nmight be on some other tables.)\n\n>\n> CONF settings:\n> # - Memory -\n>\n> shared_buffers = 2000 # min 16, at least max_connections*2, 8KB\n> each sort_mem = 12288 # min 64, size in KB\n> #vacuum_mem = 8192 # min 1024, size in KB\n>\n> # - Free Space Map -\n>\n> max_fsm_pages = 300000 # min max_fsm_relations*16, 6 bytes each\n> max_fsm_relations = 500 # min 100, ~50 bytes each\n>\n>\n> Vacuum full information\n> #after second vacuum full\n> INFO: index \"emaildat_fkey\" now contains 8053743 row versions in 25764\n> pages DETAIL: 1895 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 2.38s/0.42u sec elapsed 11.11 sec.\n> INFO: analyzing \"cdm.cdm_email_data\"\n> INFO: \"cdm_email_data\": 65882 pages, 3000 rows sampled, 392410 estimated\n> total rows\n>\n>\n> #after third vacuum full\n> INFO: index \"emaildat_fkey\" now contains 8052738 row versions in 25769\n> pages DETAIL: 890 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 2.08s/0.32u sec elapsed 4.36 sec.\n> INFO: analyzing \"cdm.cdm_email_data\"\n> INFO: \"cdm_email_data\": 65874 pages, 3000 rows sampled, 392363 estimated\n> total rows\n>\n>\n> #after REINDEX and vacuum full\n> INFO: index \"emaildat_fkey\" now contains 8052369 row versions in 25771\n> pages DETAIL: 521 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 1.37s/0.35u sec elapsed 4.79 sec.\n> INFO: analyzing \"cdm.cdm_email_data\"\n> INFO: \"cdm_email_data\": 65869 pages, 3000 rows sampled, 392333 estimated\n> total rows\n>\n> #After vacuum full(s)\n> mdc_oz=# select count(*) from cdm.cdm_email_data;\n> count\n> ---------\n> 5433358\n> (1 row)\n>\n\nI do think the count(*) seems a bit off based on the vacuum output above. I'm \nguessing you either have blocking transactions in the way or your not giving \nus a complete copy/paste of the session involved. 
\n\n-- \nRobert Treat\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Tue, 21 Sep 2004 02:12:33 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full & max_fsm_pages question"
},
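A rough way to sanity-check the 6-bytes-per-page arithmetic mentioned above is to total relpages across the plain tables and indexes in the database and compare that with the max_fsm_pages setting. A minimal psql sketch, assuming a 7.4-era catalog; it is only an upper-bound estimate, since the free space map only needs slots for pages that actually have reusable space:

    -- Sum the on-disk pages of tables ('r') and indexes ('i').
    SELECT sum(relpages)     AS total_pages,
           sum(relpages) * 6 AS approx_fsm_bytes   -- ~6 bytes per tracked page
    FROM pg_class
    WHERE relkind IN ('r', 'i');

    -- Compare with the current setting.
    SHOW max_fsm_pages;

If total_pages is far above max_fsm_pages, the free space map can overflow between vacuums even though no single table exceeds the limit on its own.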
{
"msg_contents": "Sorry. I wrote PG 7.4.2 and then I erased it to write something else and\nthen forgot to add it back.\n\nAnd thanks for the Page info. I was getting frustrated and looked in the\nwrong place.\n\nSo it's probably best to drop and readd the indexes then?\n\n\n----- Original Message ----- \nFrom: \"Robert Treat\" <[email protected]>\nTo: \"Patrick Hatcher\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, September 20, 2004 11:12 PM\nSubject: Re: [PERFORM] vacuum full & max_fsm_pages question\n\n\n> On Tuesday 21 September 2004 00:01, Patrick Hatcher wrote:\n> > Hello.\n> > Couple of questions:>\n> > - Q1: Today I decided to do a vacuum full verbose analyze on a large\ntable\n> > that has been giving me slow performance. And then I did it again. I\n> > noticed that after each run the values in my indexes and estimate row\n> > version changed. What really got me wondering is the fact my indexes\n> > report more rows than are in the table and then the estimated rows is\nless\n> > than the actual amount.\n> >\n> > The table is a read-only table that is updated 1/wk. After updating it\nis\n> > vacuumed full. I've also tried reindexing but the numbers still change.\nIs\n> > this normal? Below is a partial output for 4 consecutive vacuum full\n> > analyzes. No data was added nor was there anyone in the table.\n> >\n>\n> This looks normal to me for a pre 7.4 database, if I am right your running\non\n> 7.2? Basically your indexes are overgrown, so each time you run vacuum you\n> are shrinking the number of pages involved, which will change the row\ncounts,\n> and correspondingly change the count on the table as the sampled pages\n> change.\n>\n>\n> > - Q2: I have about a dozen 5M plus row tables. I currently have my\n> > max_fsm_pages set to 300,000. As you can see in vacuum full output I\n> > supplied, one table is already over this amount. Is there a limit on\nthe\n> > size of max_fsm_pages?\n> >\n>\n> The limit is based on your memory... each page = 6 bytes. 
But according\nto\n> the output below you are not over 300000 pages yet on that table (though\nyou\n> might be on some other tables.)\n>\n> >\n> > CONF settings:\n> > # - Memory -\n> >\n> > shared_buffers = 2000 # min 16, at least max_connections*2,\n8KB\n> > each sort_mem = 12288 # min 64, size in KB\n> > #vacuum_mem = 8192 # min 1024, size in KB\n> >\n> > # - Free Space Map -\n> >\n> > max_fsm_pages = 300000 # min max_fsm_relations*16, 6 bytes each\n> > max_fsm_relations = 500 # min 100, ~50 bytes each\n> >\n> >\n> > Vacuum full information\n> > #after second vacuum full\n> > INFO: index \"emaildat_fkey\" now contains 8053743 row versions in 25764\n> > pages DETAIL: 1895 index row versions were removed.\n> > 0 index pages have been deleted, 0 are currently reusable.\n> > CPU 2.38s/0.42u sec elapsed 11.11 sec.\n> > INFO: analyzing \"cdm.cdm_email_data\"\n> > INFO: \"cdm_email_data\": 65882 pages, 3000 rows sampled, 392410\nestimated\n> > total rows\n> >\n> >\n> > #after third vacuum full\n> > INFO: index \"emaildat_fkey\" now contains 8052738 row versions in 25769\n> > pages DETAIL: 890 index row versions were removed.\n> > 0 index pages have been deleted, 0 are currently reusable.\n> > CPU 2.08s/0.32u sec elapsed 4.36 sec.\n> > INFO: analyzing \"cdm.cdm_email_data\"\n> > INFO: \"cdm_email_data\": 65874 pages, 3000 rows sampled, 392363\nestimated\n> > total rows\n> >\n> >\n> > #after REINDEX and vacuum full\n> > INFO: index \"emaildat_fkey\" now contains 8052369 row versions in 25771\n> > pages DETAIL: 521 index row versions were removed.\n> > 0 index pages have been deleted, 0 are currently reusable.\n> > CPU 1.37s/0.35u sec elapsed 4.79 sec.\n> > INFO: analyzing \"cdm.cdm_email_data\"\n> > INFO: \"cdm_email_data\": 65869 pages, 3000 rows sampled, 392333\nestimated\n> > total rows\n> >\n> > #After vacuum full(s)\n> > mdc_oz=# select count(*) from cdm.cdm_email_data;\n> > count\n> > ---------\n> > 5433358\n> > (1 row)\n> >\n>\n> I do think the count(*) seems a bit off based on the vacuum output above.\nI'm\n> guessing you either have blocking transactions in the way or your not\ngiving\n> us a complete copy/paste of the session involved.\n>\n> -- \n> Robert Treat\n> Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "Tue, 21 Sep 2004 06:24:18 -0700",
"msg_from": "\"Patrick Hatcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum full & max_fsm_pages question"
},
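Before dropping and recreating anything, it can help to compare what the catalog thinks the index and its table look like after the last vacuum/analyze. A small sketch using the names from the earlier output (the assumption that emaildat_fkey lives in the cdm schema is mine; adjust as needed):

    -- Rows and pages as last recorded for the table and its index.
    SELECT c.relname, c.relkind, c.reltuples, c.relpages
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'cdm'
      AND c.relname IN ('cdm_email_data', 'emaildat_fkey');

    -- If the index carries far more row versions than the table has rows,
    -- rebuilding it compacts it; a plain VACUUM will not shrink it much.
    REINDEX INDEX cdm.emaildat_fkey;

REINDEX rebuilds the index from the heap, so it has roughly the same effect as a drop and re-create without having to restate the index definition.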
{
"msg_contents": "\n\n\n\nNope. It's been running like a champ for while now.\n\nPatrick Hatcher\nMacys.Com\nLegacy Integration Developer\n415-422-1610 office\nHatcherPT - AIM\n\n\n \n Josh Berkus \n <[email protected] \n m> To \n Sent by: \"Patrick Hatcher\" \n pgsql-performance <[email protected]> \n -owner@postgresql cc \n .org \"Robert Treat\" \n <[email protected]>, \n <[email protected]> \n 09/21/2004 10:49 Subject \n AM Re: [PERFORM] vacuum full & \n max_fsm_pages question \n \n \n \n \n \n \n\n\n\n\nPatrick,\n\n> Sorry. I wrote PG 7.4.2 and then I erased it to write something else and\n> then forgot to add it back.\n\nOdd. You shouldn't be having to re-vacuum on 7.4.\n\n> And thanks for the Page info. I was getting frustrated and looked in the\n> wrong place.\n>\n> So it's probably best to drop and readd the indexes then?\n\nWell, I have to wonder if you've not run afoul of the known 7.4.2 bug\nregarding indexes. This system hasn't had an improper database shutdown\nor\npower-out in the last few weeks, has it?\n\n--\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\n",
"msg_date": "Tue, 21 Sep 2004 10:45:37 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full & max_fsm_pages question"
},
{
"msg_contents": "Patrick,\n\n> Sorry. I wrote PG 7.4.2 and then I erased it to write something else and\n> then forgot to add it back.\n\nOdd. You shouldn't be having to re-vacuum on 7.4.\n\n> And thanks for the Page info. I was getting frustrated and looked in the\n> wrong place.\n>\n> So it's probably best to drop and readd the indexes then?\n\nWell, I have to wonder if you've not run afoul of the known 7.4.2 bug \nregarding indexes. This system hasn't had an improper database shutdown or \npower-out in the last few weeks, has it?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 21 Sep 2004 10:49:02 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full & max_fsm_pages question"
},
{
"msg_contents": "I upgraded to 7.4.3 this morning and did a vacuum full analyze on the \nproblem table and now the indexes show the correct number of records\n\n\nPatrick Hatcher\nMacys.Com\n\n\n\n\nJosh Berkus <[email protected]> \nSent by: [email protected]\n09/21/04 10:49 AM\n\nTo\n\"Patrick Hatcher\" <[email protected]>\ncc\n\"Robert Treat\" <[email protected]>, \n<[email protected]>\nSubject\nRe: [PERFORM] vacuum full & max_fsm_pages question\n\n\n\n\n\n\nPatrick,\n\n> Sorry. I wrote PG 7.4.2 and then I erased it to write something else \nand\n> then forgot to add it back.\n\nOdd. You shouldn't be having to re-vacuum on 7.4.\n\n> And thanks for the Page info. I was getting frustrated and looked in \nthe\n> wrong place.\n>\n> So it's probably best to drop and readd the indexes then?\n\nWell, I have to wonder if you've not run afoul of the known 7.4.2 bug \nregarding indexes. This system hasn't had an improper database shutdown \nor \npower-out in the last few weeks, has it?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\nI upgraded to 7.4.3 this morning and\ndid a vacuum full analyze on the problem table and now the indexes show\nthe correct number of records\n\n\nPatrick Hatcher\nMacys.Com\n\n\n\n\n\n\nJosh Berkus <[email protected]>\n\nSent by: [email protected]\n09/21/04 10:49 AM\n\n\n\n\nTo\n\"Patrick Hatcher\"\n<[email protected]>\n\n\ncc\n\"Robert Treat\"\n<[email protected]>, <[email protected]>\n\n\nSubject\nRe: [PERFORM] vacuum full\n& max_fsm_pages question\n\n\n\n\n\n\n\n\nPatrick,\n\n> Sorry. I wrote PG 7.4.2 and then I erased it to write something\nelse and\n> then forgot to add it back.\n\nOdd. You shouldn't be having to re-vacuum on 7.4.\n\n> And thanks for the Page info. I was getting frustrated and looked\nin the\n> wrong place.\n>\n> So it's probably best to drop and readd the indexes then?\n\nWell, I have to wonder if you've not run afoul of the known 7.4.2 bug \nregarding indexes. This system hasn't had an improper database shutdown\nor \npower-out in the last few weeks, has it?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match",
"msg_date": "Thu, 23 Sep 2004 09:32:50 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full & max_fsm_pages question"
}
] |
[
{
"msg_contents": "Hi all,\n\nI searched list archives, but did not found anything about HT of Pentium \n4/Xeon processors. I wonder if hyperthreading can boost or decrease \nperformance. AFAIK for other commercial servers (msssql, oracle) official \ndocuments state something like \"faster, but not always, so probably slower, \nunless faster\". User opinions are generaly more clear: better swhitch off HT.\n\nDo you have any experiance or test results regarding hyperthreading? Or what \nadditional conditions can make HT useful or pointless?\n\nTIA,\n\nMariusz\n\n",
"msg_date": "Tue, 21 Sep 2004 10:54:48 +0200",
"msg_from": "Mariusz =?iso-8859-2?q?Czu=B3ada?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hyper threading?"
},
{
"msg_contents": "On Tue, 2004-09-21 at 03:54, Mariusz Czułada wrote:\n> Hi all,\n> \n> I searched list archives, but did not found anything about HT of Pentium \n> 4/Xeon processors. I wonder if hyperthreading can boost or decrease \n> performance. AFAIK for other commercial servers (msssql, oracle) official \n> documents state something like \"faster, but not always, so probably slower, \n> unless faster\". User opinions are generaly more clear: better swhitch off HT.\n> \n> Do you have any experiance or test results regarding hyperthreading? Or what \n> additional conditions can make HT useful or pointless?\n> \n\nI think you'll find that HT is very sensitive to both the OS and the\napplication. Generally speaking, most consider HT to actually slow\nthings down, unless you can prove that your OS/application combination\nis faster with HT enabled. Last I heard, most vendors specifically\ndisable HT in the BIOS because the defacto is to expect HT to inflict a\nnegative performance hit.\n\nIIRC, one of critical paths for good HT performance is an OS that\nunderstands how to schedule processes in a HT friendly manner (as in,\ndoesn't push processes from a virtual CPU to a different physical CPU,\netc). Secondly, applications which experience a lot of bad branch\npredictions tend to do well. I don't recall what impact SSE\ninstructions have on the pipeline; but memory seems to recall that\napplications which use a lot of SSE may be more HT friendly. At any\nrate, the notion is, if you are HT'ing, and one application/thread\nrequires the pipeline to be flushed, the other HT'ing thread is free to\nrun while the new branch is populating cache, etc. Thusly, you get a\nperformance gain for the other thread when the CPU makes a bad guess.\n\nAlong these lines, I understand that Intel is planning better HT\nimplementation in the future, but as a general rule, people simply\nexpect too much from the current HT implementations. Accordingly, for\nmost applications, performance generally suffers because they don't tend\nto fall into the corner cases where HT helps.\n\nLong story short, the general rule is, slower unless you having proven\nit to be faster.\n\n\nCheers,\n\n-- \nGreg Copeland, Owner\[email protected]\nCopeland Computer Consulting\n940.206.8004\n\n\n",
"msg_date": "Thu, 23 Sep 2004 14:01:57 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyper threading?"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi all,\nI'm having performance degradation with a view upgrading from\n7.3 to 7.4, the view is a not so complex, one of his field\nis the result from a function.\nIf I remove the function ( or I use a void function ) the 7.4\nout perform the 7.3:\n\nOn 7.4 I get:\n\nxxxxx=# explain analyze select * from v_ivan_2;\n~ QUERY PLAN\n- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Hash Left Join (cost=7028.36..16780.89 rows=65613 width=288) (actual time=2059.923..9340.043 rows=79815 loops=1)\n~ Hash Cond: (\"outer\".id_baa_loc = \"inner\".id_baa_loc)\n~ -> Hash Left Join (cost=6350.62..15134.25 rows=65613 width=258) (actual time=1816.013..7245.085 rows=65609 loops=1)\n~ Hash Cond: (\"outer\".id_localita = \"inner\".id_localita)\n~ -> Hash Left Join (cost=6252.93..14786.74 rows=65613 width=247) (actual time=1777.072..6533.316 rows=65609 loops=1)\n~ Hash Cond: (\"outer\".id_frazione = \"inner\".id_frazione)\n~ -> Hash Left Join (cost=6226.61..14362.74 rows=65613 width=235) (actual time=1768.273..5837.104 rows=65609 loops=1)\n~ Hash Cond: (\"outer\".id_baa = \"inner\".id_baa)\n~ -> Hash Left Join (cost=5092.24..12342.65 rows=65594 width=197) (actual time=1354.059..4562.398 rows=65592 loops=1)\n~ Hash Cond: (\"outer\".id_pratica = \"inner\".id_pratica)\n~ -> Hash Left Join (cost=3597.52..10010.84 rows=65594 width=173) (actual time=785.775..3278.372 rows=65592 loops=1)\n~ Hash Cond: (\"outer\".id_pratica = \"inner\".id_pratica)\n~ -> Hash Join (cost=1044.77..6605.97 rows=65594 width=149) (actual time=274.316..2070.788 rows=65592 loops=1)\n~ Hash Cond: (\"outer\".id_stato_pratica = \"inner\".id_stato_pratica)\n~ -> Hash Join (cost=1043.72..5850.59 rows=65593 width=141) (actual time=273.478..1421.274 rows=65592 loops=1)\n~ Hash Cond: (\"outer\".id_pratica = \"inner\".id_pratica)\n~ -> Seq Scan on t_pratica p (cost=0.00..3854.27 rows=65927 width=137) (actual time=7.275..533.281 rows=65927 loops=1)\n~ -> Hash (cost=1010.92..1010.92 rows=65592 width=8) (actual time=265.615..265.615 rows=0 loops=1)\n~ -> Seq Scan on t_baa_pratica bp (cost=0.00..1010.92 rows=65592 width=8) (actual time=0.209..164.761 rows=65592 loops=1)\n~ -> Hash (cost=1.05..1.05 rows=5 width=22) (actual time=0.254..0.254 rows=0 loops=1)\n~ -> Seq Scan on lookup_stato_pratica s (cost=0.00..1.05 rows=5 width=22) (actual time=0.190..0.210 rows=5 loops=1)\n~ -> Hash (cost=2519.82..2519.82 rows=65865 width=28) (actual time=511.104..511.104 rows=0 loops=1)\n~ -> Seq Scan on t_persona (cost=0.00..2519.82 rows=65865 width=28) (actual time=0.068..381.586 rows=65864 loops=1)\n~ Filter: (is_rich = true)\n~ -> Hash (cost=1462.53..1462.53 rows=64356 width=28) (actual time=567.919..567.919 rows=0 loops=1)\n~ -> Index Scan using idx_t_persona_is_inte on t_persona (cost=0.00..1462.53 rows=64356 width=28) (actual time=12.953..432.697 rows=64356 loops=1)\n~ Index Cond: (is_inte = true)\n~ -> Hash (cost=1113.65..1113.65 rows=41444 width=46) (actual time=413.782..413.782 rows=0 loops=1)\n~ -> Hash Join (cost=4.33..1113.65 rows=41444 width=46) (actual time=2.687..333.746 rows=41444 loops=1)\n~ Hash Cond: (\"outer\".id_comune = \"inner\".id_comune)\n~ -> Seq Scan on t_baa_loc bl (cost=0.00..653.44 rows=41444 width=20) (actual time=0.422..94.803 rows=41444 loops=1)\n~ -> Hash (cost=4.22..4.22 rows=222 width=34) (actual time=1.735..1.735 rows=0 loops=1)\n~ -> 
Seq Scan on t_comune co (cost=0.00..4.22 rows=222 width=34) (actual time=0.521..1.277 rows=222 loops=1)\n~ -> Hash (cost=25.59..25.59 rows=1459 width=20) (actual time=8.343..8.343 rows=0 loops=1)\n~ -> Seq Scan on t_frazione f (cost=0.00..25.59 rows=1459 width=20) (actual time=0.554..5.603 rows=1459 loops=1)\n~ -> Hash (cost=94.94..94.94 rows=5494 width=19) (actual time=38.504..38.504 rows=0 loops=1)\n~ -> Seq Scan on t_localita l (cost=0.00..94.94 rows=5494 width=19) (actual time=8.499..28.216 rows=5494 loops=1)\n~ -> Hash (cost=660.61..660.61 rows=34261 width=38) (actual time=198.663..198.663 rows=0 loops=1)\n~ -> Seq Scan on t_affaccio af (cost=0.00..660.61 rows=34261 width=38) (actual time=5.875..133.336 rows=34261 loops=1)\n~ Total runtime: 9445.263 ms\n(40 rows)\n\n\nOn 7.3 I get:\n\n\nxxxxx=# explain analyze select * from v_ivan_2;\n~ QUERY PLAN\n- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Hash Join (cost=5597.02..15593.91 rows=65610 width=354) (actual time=2169.37..13102.64 rows=79815 loops=1)\n~ Hash Cond: (\"outer\".id_baa_loc = \"inner\".id_baa_loc)\n~ -> Hash Join (cost=4919.28..13953.00 rows=65610 width=316) (actual time=1966.38..10568.69 rows=65609 loops=1)\n~ Hash Cond: (\"outer\".id_localita = \"inner\".id_localita)\n~ -> Hash Join (cost=4821.59..13596.30 rows=65610 width=297) (actual time=1934.29..9151.45 rows=65609 loops=1)\n~ Hash Cond: (\"outer\".id_frazione = \"inner\".id_frazione)\n~ -> Hash Join (cost=4795.27..13157.36 rows=65610 width=277) (actual time=1925.29..7795.71 rows=65609 loops=1)\n~ Hash Cond: (\"outer\".id_baa = \"inner\".id_baa)\n~ -> Hash Join (cost=3640.17..11149.38 rows=65592 width=223) (actual time=1375.66..5870.74 rows=65592 loops=1)\n~ Hash Cond: (\"outer\".id_pratica = \"inner\".id_pratica)\n~ -> Hash Join (cost=3597.53..10237.66 rows=65592 width=195) (actual time=835.95..4332.46 rows=65592 loops=1)\n~ Hash Cond: (\"outer\".id_pratica = \"inner\".id_pratica)\n~ -> Hash Join (cost=1044.78..6800.07 rows=65592 width=167) (actual time=307.55..2903.04 rows=65592 loops=1)\n~ Hash Cond: (\"outer\".id_pratica = \"inner\".id_pratica)\n~ -> Merge Join (cost=1.06..4770.96 rows=65927 width=159) (actual time=1.41..1898.12 rows=65927 loops=1)\n~ Merge Cond: (\"outer\".id_stato_pratica = \"inner\".id_stato_pratica)\n~ -> Index Scan using idx_t_pratica on t_pratica p (cost=0.00..4044.70 rows=65927 width=137) (actual time=0.58..894.95 rows=65927 loops=1)\n~ -> Sort (cost=1.06..1.06 rows=5 width=22) (actual time=0.78..58.49 rows=63528 loops=1)\n~ Sort Key: s.id_stato_pratica\n~ -> Seq Scan on lookup_stato_pratica s (cost=0.00..1.05 rows=5 width=22) (actual time=0.11..0.13 rows=5 loops=1)\n~ -> Hash (cost=1010.92..1010.92 rows=65592 width=8) (actual time=305.40..305.40 rows=0 loops=1)\n~ -> Seq Scan on t_baa_pratica bp (cost=0.00..1010.92 rows=65592 width=8) (actual time=0.23..192.88 rows=65592 loops=1)\n~ -> Hash (cost=2519.82..2519.82 rows=65864 width=28) (actual time=527.88..527.88 rows=0 loops=1)\n~ -> Seq Scan on t_persona (cost=0.00..2519.82 rows=65864 width=28) (actual time=0.07..394.51 rows=65864 loops=1)\n~ Filter: (is_rich = true)\n~ -> Hash (cost=10.46..10.46 rows=64356 width=28) (actual time=539.27..539.27 rows=0 loops=1)\n~ -> Index Scan using idx_t_persona_is_inte on t_persona (cost=0.00..10.46 rows=64356 width=28) (actual time=0.61..403.48 rows=64356 loops=1)\n~ Index Cond: (is_inte = true)\n~ -> Hash 
(cost=1134.38..1134.38 rows=41444 width=54) (actual time=549.25..549.25 rows=0 loops=1)\n~ -> Hash Join (cost=4.33..1134.38 rows=41444 width=54) (actual time=2.19..470.20 rows=41444 loops=1)\n~ Hash Cond: (\"outer\".id_comune = \"inner\".id_comune)\n~ -> Seq Scan on t_baa_loc bl (cost=0.00..653.44 rows=41444 width=20) (actual time=0.15..179.24 rows=41444 loops=1)\n~ -> Hash (cost=4.22..4.22 rows=222 width=34) (actual time=1.55..1.55 rows=0 loops=1)\n~ -> Seq Scan on t_comune co (cost=0.00..4.22 rows=222 width=34) (actual time=0.22..1.08 rows=222 loops=1)\n~ -> Hash (cost=25.59..25.59 rows=1459 width=20) (actual time=8.37..8.37 rows=0 loops=1)\n~ -> Seq Scan on t_frazione f (cost=0.00..25.59 rows=1459 width=20) (actual time=0.22..5.46 rows=1459 loops=1)\n~ -> Hash (cost=94.94..94.94 rows=5494 width=19) (actual time=31.46..31.46 rows=0 loops=1)\n~ -> Seq Scan on t_localita l (cost=0.00..94.94 rows=5494 width=19) (actual time=0.22..20.41 rows=5494 loops=1)\n~ -> Hash (cost=660.61..660.61 rows=34261 width=38) (actual time=199.96..199.96 rows=0 loops=1)\n~ -> Seq Scan on t_affaccio af (cost=0.00..660.61 rows=34261 width=38) (actual time=0.21..130.67 rows=34261 loops=1)\n~ Total runtime: 13190.70 msec\n(41 rows)\n\n\n\nAs you can see the 7.3 do a index scan on the table t_pratica and the 7.4 perform a sequential scan,\nthe plans however are very close to each other.\n\nSo I identify the performance issue on the function call, indeed:\n\n\n7.4:\n\nxxxxx=# explain analyze select sp_foo(id_pratica) from t_pratica;\n~ QUERY PLAN\n- ------------------------------------------------------------------------------------------------------------------\n~ Seq Scan on t_pratica (cost=0.00..3887.23 rows=65927 width=4) (actual time=4.013..45240.015 rows=65927 loops=1)\n~ Total runtime: 45499.123 ms\n(2 rows)\n\n\n7.3:\n\nxxxxx=# explain analyze select sp_foo(id_pratica) from t_pratica;\n~ QUERY PLAN\n- ----------------------------------------------------------------------------------------------------------------\n~ Seq Scan on t_pratica (cost=0.00..3854.27 rows=65927 width=4) (actual time=0.58..18446.99 rows=65927 loops=1)\n~ Total runtime: 18534.41 msec\n(2 rows)\n\n\n\nThis is the sp_foo:\n\nCREATE FUNCTION sp_foo (integer) RETURNS text\n~ AS '\nDECLARE\n~ a_id_pratica ALIAS FOR $1;\n\n~ my_parere TEXT;\nBEGIN\n~ a_id_pratica := $1;\n\n~ SELECT INTO my_parere le.nome\n~ FROM t_evento e,\n~ lookup_tipo_evento le\n~ WHERE e.id_tipo_evento = le.id_tipo_evento AND\n~ e.id_pratica = a_id_pratica AND\n~ e.id_tipo_evento in (5,6,7,8 )\n~ ORDER by e.id_evento desc\n~ LIMIT 1;\n\n~ RETURN my_parere;\nEND;\n' LANGUAGE plpgsql;\n\n\nPreparing a statement this is the plan used by 7.4:\n\nxxxxx=# explain analyze execute foo_body( 5 );\n~ QUERY PLAN\n- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Limit (cost=10.30..10.30 rows=1 width=24) (actual time=0.538..0.538 rows=0 loops=1)\n~ -> Sort (cost=10.30..10.30 rows=1 width=24) (actual time=0.534..0.534 rows=0 loops=1)\n~ Sort Key: e.id_evento\n~ -> Hash Join (cost=9.11..10.30 rows=1 width=24) (actual time=0.512..0.512 rows=0 loops=1)\n~ Hash Cond: (\"outer\".id_tipo_evento = \"inner\".id_tipo_evento)\n~ -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16 width=32) (actual time=0.010..0.041 rows=16 loops=1)\n~ -> Hash (cost=9.11..9.11 rows=1 width=16) (actual time=0.144..0.144 rows=0 loops=1)\n~ -> Index Scan 
using t_evento_id_pratica_key on t_evento e (cost=0.00..9.11 rows=1 width=16) (actual time=0.140..0.140 rows=0 loops=1)\n~ Index Cond: (id_pratica = $1)\n~ Filter: (((id_tipo_evento)::text = '5'::text) OR ((id_tipo_evento)::text = '6'::text) OR ((id_tipo_evento)::text = '7'::text) OR ((id_tipo_evento)::text = '8'::text))\n~ Total runtime: 0.824 ms\n(11 rows)\n\n\n\nThe table t_pratica have 65927 rows so 0.824 ms * 65927 is almost the total time execution for\neach t_pratica row ~ 45000 ms\n\n\nUnfortunately I can not see the plan used by the 7.3 engine due the lack of explain execute,\nhowever I did an explain analyze on the select:\n\nxxxxx=# explain analyze SELECT le.nome\nxxxxx-# FROM t_evento e,lookup_tipo_evento le\nxxxxx-# WHERE e.id_tipo_evento = le.id_tipo_evento\nxxxxx-# AND e.id_pratica = 5\nxxxxx-# AND e.id_tipo_evento in (5,6,7,8 )\nxxxxx-# ORDER by e.id_evento desc\nxxxxx-# LIMIT 1;\n~ QUERY PLAN\n- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n~ Limit (cost=10.27..10.27 rows=1 width=48) (actual time=0.19..0.19 rows=0 loops=1)\n~ -> Sort (cost=10.27..10.27 rows=1 width=48) (actual time=0.18..0.18 rows=0 loops=1)\n~ Sort Key: e.id_evento\n~ -> Merge Join (cost=10.24..10.27 rows=1 width=48) (actual time=0.09..0.09 rows=0 loops=1)\n~ Merge Cond: (\"outer\".id_tipo_evento = \"inner\".id_tipo_evento)\n~ -> Sort (cost=9.02..9.02 rows=1 width=16) (actual time=0.09..0.09 rows=0 loops=1)\n~ Sort Key: e.id_tipo_evento\n~ -> Index Scan using t_evento_id_pratica_key on t_evento e (cost=0.00..9.02 rows=1 width=16) (actual time=0.06..0.06 rows=0 loops=1)\n~ Index Cond: (id_pratica = 5)\n~ Filter: (((id_tipo_evento)::text = '5'::text) OR ((id_tipo_evento)::text = '6'::text) OR ((id_tipo_evento)::text = '7'::text) OR ((id_tipo_evento)::text = '8'::text))\n~ -> Sort (cost=1.22..1.23 rows=16 width=32) (never executed)\n~ Sort Key: le.id_tipo_evento\n~ -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16 width=32) (never executed)\n~ Total runtime: 0.31 msec\n(14 rows)\n\n\nDisabling the hashjoin on the 7.4 I got best performance that 7.3:\n\nxxxxx=# set enable_hashjoin = off;\nSET\nxxxxx=# explain analyze select sp_get_ultimo_parere(id_pratica) from t_pratica;\n~ QUERY PLAN\n- -------------------------------------------------------------------------------------------------------------------\n~ Seq Scan on t_pratica (cost=0.00..3887.23 rows=65927 width=4) (actual time=12.384..12396.136 rows=65927 loops=1)\n~ Total runtime: 12485.548 ms\n(2 rows)\n\n\nNow my question is why the 7.4 choose the hash join ? :-(\nI can provide further details if you ask\n\nBTW with the hash_join = off the 7.4 choose the same 7.3 plan for this function body.\n\nOf course both engines are running on the same machine with the same settings.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBULCU7UpzwH2SGd4RAt2ZAKC9FjAKiljRqgaZSZa+p/7N65Cl7ACePWBV\nTaR2VH1kDSBS7b+kNK4deFo=\n=X+th\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Wed, 22 Sep 2004 00:52:06 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.4 vs 7.3 ( hash join issue )"
},
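One application-level workaround for the generic parameter plan shown above, rather than changing planner settings, is to build the inner query with EXECUTE so it is planned with the literal id_pratica value on each call. This is only a sketch under assumptions: sp_foo_dyn is a made-up name, it reuses the tables and columns from the posted function, uses 7.4-era doubled-quote escaping, and whether the per-call planning overhead is worth the better plan would need measuring:

    CREATE OR REPLACE FUNCTION sp_foo_dyn (integer) RETURNS text
      AS '
    DECLARE
      a_id_pratica ALIAS FOR $1;
      my_parere TEXT;
      r RECORD;
    BEGIN
      -- EXECUTE plans the statement at call time, so the planner sees the
      -- actual id_pratica value instead of an unknown parameter.
      FOR r IN EXECUTE ''SELECT le.nome
                           FROM t_evento e, lookup_tipo_evento le
                          WHERE e.id_tipo_evento = le.id_tipo_evento
                            AND e.id_pratica = '' || a_id_pratica || ''
                            AND e.id_tipo_evento IN (5,6,7,8)
                          ORDER BY e.id_evento DESC
                          LIMIT 1'' LOOP
        my_parere := r.nome;
      END LOOP;
      RETURN my_parere;
    END;
    ' LANGUAGE plpgsql;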
{
"msg_contents": "On Wed, 22 Sep 2004, Gaetano Mendola wrote:\n\n> Now my question is why the 7.4 choose the hash join ? :-(\n\nIt looks to me that the marge join is faster because there wasn't really \nanything to merge, it resulted in 0 rows. Maybe the hash join that is \nchoosen in 7.4 would have been faster had there been a couple of result \nrows (just a guess).\n\nIt would be interesting to compare the plans in 7.4 with and without \nhash_join active and see what costs it estimates for a merge join compared \nto a hash join.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Wed, 22 Sep 2004 08:48:09 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "Dennis Bjorklund wrote:\n> On Wed, 22 Sep 2004, Gaetano Mendola wrote:\n> \n> \n>>Now my question is why the 7.4 choose the hash join ? :-(\n> \n> \n> It looks to me that the marge join is faster because there wasn't really \n> anything to merge, it resulted in 0 rows. Maybe the hash join that is \n> choosen in 7.4 would have been faster had there been a couple of result \n> rows (just a guess).\n> \n> It would be interesting to compare the plans in 7.4 with and without \n> hash_join active and see what costs it estimates for a merge join compared \n> to a hash join.\n\nHere they are:\n\nhash_join = on\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0 loops=1)\n -> Sort (cost=10.21..10.21 rows=1 width=24) (actual time=0.880..0.880 rows=0 loops=1)\n Sort Key: e.id_evento\n -> Hash Join (cost=9.02..10.21 rows=1 width=24) (actual time=0.687..0.687 rows=0 loops=1)\n Hash Cond: (\"outer\".id_tipo_evento = \"inner\".id_tipo_evento)\n -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16 width=32) (actual time=0.017..0.038 rows=16 loops=1)\n -> Hash (cost=9.02..9.02 rows=1 width=16) (actual time=0.212..0.212 rows=0 loops=1)\n -> Index Scan using t_evento_id_pratica_key on t_evento e (cost=0.00..9.02 rows=1 width=16) (actual time=0.208..0.208 rows=0 loops=1)\n Index Cond: (id_pratica = 5)\n Filter: (((id_tipo_evento)::text = '5'::text) OR ((id_tipo_evento)::text = '6'::text) OR ((id_tipo_evento)::text = '7'::text) OR ((id_tipo_evento)::text = '8'::text))\n Total runtime: 1.244 ms\n(11 rows)\n\nhash_join = off\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0 loops=1)\n -> Sort (cost=10.28..10.28 rows=1 width=24) (actual time=0.425..0.425 rows=0 loops=1)\n Sort Key: e.id_evento\n -> Merge Join (cost=10.25..10.27 rows=1 width=24) (actual time=0.218..0.218 rows=0 loops=1)\n Merge Cond: (\"outer\".id_tipo_evento = \"inner\".id_tipo_evento)\n -> Sort (cost=9.02..9.02 rows=1 width=16) (actual time=0.214..0.214 rows=0 loops=1)\n Sort Key: e.id_tipo_evento\n -> Index Scan using t_evento_id_pratica_key on t_evento e (cost=0.00..9.02 rows=1 width=16) (actual time=0.110..0.110 rows=0 loops=1)\n Index Cond: (id_pratica = 5)\n Filter: (((id_tipo_evento)::text = '5'::text) OR ((id_tipo_evento)::text = '6'::text) OR ((id_tipo_evento)::text = '7'::text) OR ((id_tipo_evento)::text = '8'::text))\n -> Sort (cost=1.22..1.23 rows=16 width=32) (never executed)\n Sort Key: le.id_tipo_evento\n -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16 width=32) (never executed)\n Total runtime: 0.721 ms\n(14 rows)\n\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n",
"msg_date": "Wed, 22 Sep 2004 10:22:05 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "On Wed, 22 Sep 2004, Gaetano Mendola wrote:\n\n> Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0 loops=1)\n> Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0 loops=1)\n\nThese estimated costs are almost the same, but the runtime differs a bit. \nThis means that maybe you need to alter settings like random_page_cost, \neffective_cache and maybe some others to make the cost reflect the runtime \nbetter.\n\nSince the costs are so close to each other very small changes can make it \nchoose the other plan. It's also very hard to make an estimate that is \ncorrect in all situations. That's why it's called an estimate after all.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Wed, 22 Sep 2004 11:32:19 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "Dennis Bjorklund wrote:\n > On Wed, 22 Sep 2004, Gaetano Mendola wrote:\n >\n >\n >> Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0 loops=1)\n >> Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0 loops=1)\n >\n >\n > These estimated costs are almost the same, but the runtime differs a bit.\n > This means that maybe you need to alter settings like random_page_cost,\n > effective_cache and maybe some others to make the cost reflect the runtime\n > better.\n >\n > Since the costs are so close to each other very small changes can make it\n > choose the other plan. It's also very hard to make an estimate that is\n > correct in all situations. That's why it's called an estimate after all.\n\nIs not feseable.\n\nThat values are obtained with random_page_cost = 2, effective_cache_size = 20000,\ncpu_tuple_cost = 0.01\nincreasing or decreasing random_page_cost this means increase or decrease both\ncosts:\n\n\nrandom_page_cost = 1.5\n\thashjoin on => 8.47\n hashjoin off => 8.53\n\n\nrandom_page_cost = 3\n\thashjoin on => 13.70\n hashjoin off => 13.76\n\n\nso is choosen the hasjoin method in both cases.\n\nIn the other side the effective_cache_size doesn't affect this costs.\n\nDecreasing the cpu_tuple_cost have the same effect\n\ncpu_tuple_cost = 0.005\n\thashjoin on => 10.11\n hashjoin off => 10.17\n\ncpu_tuple_cost = 0.001\n\thashjoin on => 10.03\n hashjoin off => 10.03\n\ncpu_tuple_cost = 0.0005\n\thashjoin on => 10.01\n hashjoin off => 10.01\n\t\n\tAnd when the two costs are the same the hashjoin path is choosen.\n\nI think cpu_tuple_cost less then 0.001 is not a good idea\n\nI think the only way is set the hashjoin = off. Any other suggestion ?\n\nRegards\nGaetano Mendola\n\n\n\n\n",
"msg_date": "Wed, 22 Sep 2004 12:28:04 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
},
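If disabling hash joins really is the only practical lever, it does not have to be session- or cluster-wide: the setting can be scoped to the statements that need it. A minimal sketch, assuming SET LOCAL is available on this server (it reverts automatically at transaction end), with the caveat that the plpgsql function caches its plan on the first call in a session, so the setting needs to be in effect before that first call:

    BEGIN;
    SET LOCAL enable_hashjoin = off;            -- applies only to this transaction
    SELECT sp_foo(id_pratica) FROM t_pratica;   -- planned without hash joins
    COMMIT;                                     -- enable_hashjoin is back to on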
{
"msg_contents": "\nGaetano Mendola <[email protected]> writes:\n\n> hash_join = on\n> -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16 width=32) (actual time=0.017..0.038 rows=16 loops=1)\n>\n> hash_join = off\n> -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16 width=32) (never executed)\n\n\nActually this looks like it's arguably a bug to me. Why does the hash join\nexecute the sequential scan at all? Shouldn't it also like the merge join\nrecognize that the other hashed relation is empty and skip the sequential scan\nentirely?\n\n\n-- \ngreg\n\n",
"msg_date": "22 Sep 2004 09:43:24 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "On 22 Sep 2004, Greg Stark wrote:\n\n> Actually this looks like it's arguably a bug to me. Why does the hash\n> join execute the sequential scan at all? Shouldn't it also like the\n> merge join recognize that the other hashed relation is empty and skip\n> the sequential scan entirely?\n\nI'm not sure you can classify that as a bug. It's just that he in one of \nthe plans started with the empty scan and bacause of that didn't need \nthe other, but with the hash join it started with the table that had 16 \nrows and then got to the empty one.\n\nWhile I havn't checked, I assume that if it had started with the empty \ntable there then it would have skipped the other.\n\nI don't know what criteria is used to select which part to start with when\ndoing a hash join. Looks like it started with the one that had the highest\nestimate of rows here, doing it the other way around might be a good idea\nbecause you in some cases are lucky to find an empty scans and can omit\nthe other.\n\nThe above are just observations of the behaviour, I've not seen the source \nat all.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Wed, 22 Sep 2004 17:22:42 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "\nDennis Bjorklund <[email protected]> writes:\n\n> On 22 Sep 2004, Greg Stark wrote:\n> \n> > Actually this looks like it's arguably a bug to me. Why does the hash\n> > join execute the sequential scan at all? Shouldn't it also like the\n> > merge join recognize that the other hashed relation is empty and skip\n> > the sequential scan entirely?\n> \n> I'm not sure you can classify that as a bug. It's just that he in one of \n> the plans started with the empty scan and bacause of that didn't need \n> the other, but with the hash join it started with the table that had 16 \n> rows and then got to the empty one.\n\nNo, postgres didn't do things in reverse order. It hashed the empty table and\nthen went ahead and checked every record of the non-empty table against the\nempty hash table.\n\nReading the code there's no check for this, and it seems like it would be a\nuseful low-cost little optimization.\n\nI think postgres normally hashes the table it thinks is smaller, so you do\njoin against an empty relation it should end up on the hash side of the hash\njoin and allow postgres to avoid the scan of the outer table.\n\n-- \ngreg\n\n",
"msg_date": "22 Sep 2004 13:38:00 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n\n> Dennis Bjorklund <[email protected]> writes:\n> \n> > On 22 Sep 2004, Greg Stark wrote:\n> > \n> > > Actually this looks like it's arguably a bug to me. Why does the hash\n> > > join execute the sequential scan at all? Shouldn't it also like the\n> > > merge join recognize that the other hashed relation is empty and skip\n> > > the sequential scan entirely?\n> > \n> > I'm not sure you can classify that as a bug. It's just that he in one of \n> > the plans started with the empty scan and bacause of that didn't need \n> > the other, but with the hash join it started with the table that had 16 \n> > rows and then got to the empty one.\n> \n> No, postgres didn't do things in reverse order. It hashed the empty table and\n> then went ahead and checked every record of the non-empty table against the\n> empty hash table.\n\nAlright, attached is a simple patch that changes this. I don't really know\nenough of the overall code to be sure this is safe. But from what I see of the\nhash join code it never returns any rows unless there's a match except for\nouter joins. So I think it should be safe.\n\ntest=# create table a (a integer);\nCREATE TABLE\ntest=# create table b (a integer);\nCREATE TABLE\ntest=# set enable_mergejoin = off;\nSET\ntest=# explain analyze select * from a natural join b;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------\n Hash Join (cost=22.50..345.00 rows=5000 width=4) (actual time=0.022..0.022 rows=0 loops=1)\n Hash Cond: (\"outer\".a = \"inner\".a)\n -> Seq Scan on a (cost=0.00..20.00 rows=1000 width=4) (never executed)\n -> Hash (cost=20.00..20.00 rows=1000 width=4) (actual time=0.005..0.005 rows=0 loops=1)\n -> Seq Scan on b (cost=0.00..20.00 rows=1000 width=4) (actual time=0.002..0.002 rows=0 loops=1)\n Total runtime: 0.089 ms\n(6 rows)\n\nBy comparison, note the sequential scan doesn't show \"never executed\" on 7.4.3\n(sorry, I didn't think to run the query against 8.0 before I compiled the\npatched version):\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------\n Hash Join (cost=22.50..345.00 rows=5000 width=4) (actual time=0.881..0.881 rows=0 loops=1)\n Hash Cond: (\"outer\".a = \"inner\".a)\n -> Seq Scan on a (cost=0.00..20.00 rows=1000 width=4) (actual time=0.001..0.001 rows=0 loops=1)\n -> Hash (cost=20.00..20.00 rows=1000 width=4) (actual time=0.008..0.008 rows=0 loops=1)\n -> Seq Scan on b (cost=0.00..20.00 rows=1000 width=4) (actual time=0.004..0.004 rows=0 loops=1)\n Total runtime: 1.105 ms\n(6 rows)\n\n\n\n\n\n-- \ngreg",
"msg_date": "22 Sep 2004 13:56:13 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> No, postgres didn't do things in reverse order. It hashed the empty table and\n> then went ahead and checked every record of the non-empty table against the\n> empty hash table.\n\n> Reading the code there's no check for this, and it seems like it would be a\n> useful low-cost little optimization.\n\nYeah, I was just looking at doing that.\n\nIt would also be interesting to prefetch one row from the outer table and fall\nout immediately (without building the hash table) if the outer table is\nempty. This seems to require some contortion of the code though :-(\n\n> I think postgres normally hashes the table it thinks is smaller,\n\nRight, it will prefer to put the physically smaller table (estimated\nwidth*rows) on the inside.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 2004 13:56:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue ) "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Yeah, I was just looking at doing that.\n\nWell I imagine it takes you as long to read my patch as it would for you to\nwrite it. But anyways it's still useful to me as exercises.\n\n> It would also be interesting to prefetch one row from the outer table and fall\n> out immediately (without building the hash table) if the outer table is\n> empty. This seems to require some contortion of the code though :-(\n\nWhy is it any more complicated than just moving the hash build down lower?\nThere's one small special case needed in ExecHashJoinOuterGetTuple but it's\npretty non-intrusive.\n\nIt seems to work for me but I can't test multiple batches easily. I think I've\nconvinced myself that they would work fine but...\n\ntest=# explain analyze select * from a natural join b;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------\n Hash Join (cost=22.50..345.00 rows=5000 width=4) (actual time=0.005..0.005 rows=0 loops=1)\n Hash Cond: (\"outer\".a = \"inner\".a)\n -> Seq Scan on a (cost=0.00..20.00 rows=1000 width=4) (actual time=0.002..0.002 rows=0 loops=1)\n -> Hash (cost=20.00..20.00 rows=1000 width=4) (never executed)\n -> Seq Scan on b (cost=0.00..20.00 rows=1000 width=4) (never executed)\n Total runtime: 0.070 ms\n(6 rows)\n\n\n\n\n\n-- \ngreg",
"msg_date": "22 Sep 2004 14:46:12 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 7.4 vs 7.3 ( hash join issue )"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n>> It would also be interesting to prefetch one row from the outer table and fall\n>> out immediately (without building the hash table) if the outer table is\n>> empty. This seems to require some contortion of the code though :-(\n\n> Why is it any more complicated than just moving the hash build down lower?\n\nHaving to inject the consideration into ExecHashJoinOuterGetTuple seems\nmessy to me.\n\nOn reflection I'm not sure it would be a win anyway, for a couple of reasons.\n(1) Assuming that the planner has gotten things right and put the larger\nrelation on the outside, the case of an empty outer relation and a\nnonempty inner one should rarely arise.\n(2) Doing this would lose some of the benefit from the optimization to\ndetect an empty inner relation. If the outer subplan is a slow-start\none (such as another hashjoin), it would lose a lot of the benefit :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 2004 15:00:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 7.4 vs 7.3 ( hash join issue ) "
},
{
"msg_contents": "Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> \n>>No, postgres didn't do things in reverse order. It hashed the empty table and\n>>then went ahead and checked every record of the non-empty table against the\n>>empty hash table.\n> \n> \n>>Reading the code there's no check for this, and it seems like it would be a\n>>useful low-cost little optimization.\n> \n> \n> Yeah, I was just looking at doing that.\n> \n> It would also be interesting to prefetch one row from the outer table and fall\n> out immediately (without building the hash table) if the outer table is\n> empty. This seems to require some contortion of the code though :-(\n> \n> \n>>I think postgres normally hashes the table it thinks is smaller,\n> \n> \n> Right, it will prefer to put the physically smaller table (estimated\n> width*rows) on the inside.\n\nDo you plan to do a patch for the 7.4, so I'll wait for a 7.4.6 ( that IIRC have already\ntwo important patches pending ) or is 8.0 stuff ?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Thu, 23 Sep 2004 16:04:20 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.4 vs 7.3 ( hash join issue )"
}
] |
[
{
"msg_contents": "I couldn't find anything in the docs or in the mailing list on this,\nbut it is something that Oracle appears to do as does MySQL.\nThe idea, I believe, is to do a quick (hash) string lookup of the\nquery and if it's exactly the same as another query that has been done\nrecently to re-use the old parse tree.\nIt should save the time of doing the parsing of the SQL and looking up\nthe object in the system tables.\nIt should probably go through the planner again because values passed\nas parameters may have changed. Although, for extra points it could\nlook at the previous query plan as a hint.\nOn the surface it looks like an easy enhancement, but what do I know?\nI suppose it would benefit mostly those programs that use a lot of\nPQexecParams() with simple queries where a greater percentage of the\ntime is spent parsing the SQL rather than building the execute plan.\nWhat do you think?\n",
"msg_date": "Wed, 22 Sep 2004 16:50:26 -0300",
"msg_from": "Scott Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Caching of Queries"
},
{
"msg_contents": "Scott Kirkwood <[email protected]> writes:\n> What do you think?\n\nI think this would allow the problems of cached plans to bite\napplications that were previously not subject to them :-(.\nAn app that wants plan re-use can use PREPARE to identify the\nqueries that are going to be re-executed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 2004 15:59:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
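For anyone not using a driver that prepares statements automatically, the explicit form referred to here looks like this at the SQL level (the table and statement names are only illustrative):

    PREPARE get_item(int) AS
        SELECT * FROM items WHERE item_id = $1;   -- parsed and planned once per session

    EXECUTE get_item(42);    -- re-uses the stored plan
    EXECUTE get_item(43);

    DEALLOCATE get_item;     -- optional; prepared statements disappear at session end anyway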
{
"msg_contents": "There is a difference between MySQL and Oracle here.\n\nOracle, to reduce parse/planner costs, hashes statements to see if it can\nmatch an existing optimizer plan. This is optional and there are a few\nflavors that range from a characher to characyter match through parse tree\nmatches through replacing of literals in the statements with parameters.\nThis dramatically improves performance in almost all high transaction rate\nsystems.\n\nMySQL stores a statement with its results. This is optional and when a\nclient allows this type of processing, the SQL is hashed and matched to the\nstatement - and the stored *result* is returned. The point is that a lot of\nsystems do lots of static queries, such as a pick list on a web page - but\nif the data changes the prior result is returned. This (plus a stable jdbc\ndriver) was the reason MySQL did well in the eWeek database comparison.\n\n/Aaron\n\n\n\n----- Original Message ----- \nFrom: \"Scott Kirkwood\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, September 22, 2004 3:50 PM\nSubject: [PERFORM] Caching of Queries\n\n\n> I couldn't find anything in the docs or in the mailing list on this,\n> but it is something that Oracle appears to do as does MySQL.\n> The idea, I believe, is to do a quick (hash) string lookup of the\n> query and if it's exactly the same as another query that has been done\n> recently to re-use the old parse tree.\n> It should save the time of doing the parsing of the SQL and looking up\n> the object in the system tables.\n> It should probably go through the planner again because values passed\n> as parameters may have changed. Although, for extra points it could\n> look at the previous query plan as a hint.\n> On the surface it looks like an easy enhancement, but what do I know?\n> I suppose it would benefit mostly those programs that use a lot of\n> PQexecParams() with simple queries where a greater percentage of the\n> time is spent parsing the SQL rather than building the execute plan.\n> What do you think?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n",
"msg_date": "Wed, 22 Sep 2004 17:43:16 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Thu, 2004-09-23 at 05:59, Tom Lane wrote:\n> I think this would allow the problems of cached plans to bite\n> applications that were previously not subject to them :-(.\n> An app that wants plan re-use can use PREPARE to identify the\n> queries that are going to be re-executed.\n\nI agree; if you want to do some work in this area, making improvements\nto PREPARE would IMHO be the best bet. For example, some people have\ntalked about having PREPARE store queries in shared memory. Another idea\nwould be to improve the quality of the plan we generate at PREPARE time:\nfor instance you could generate 'n' plans for various combinations of\ninput parameters, and then choose the best query plan at EXECUTE time.\nIt's a difficult problem to solve, however (consider multiple parameters\nto PREPARE, for example).\n\n-Neil\n\n\n",
"msg_date": "Thu, 23 Sep 2004 12:14:24 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On 22 Sep 2004 at 15:59, Tom Lane wrote:\n\n> Scott Kirkwood <[email protected]> writes:\n> > What do you think?\n> \n> I think this would allow the problems of cached plans to bite\n> applications that were previously not subject to them :-(.\n> An app that wants plan re-use can use PREPARE to identify the\n> queries that are going to be re-executed.\n> \n> \t\t\tregards, tom lane\n> \n\nAnd then there are the people that would like to upgrade and get a \nperformance gain without having to change their programs. A simple \nconf flag could turn query/plan caching off for all those that rely on each \nstatement being re-planned.\n\nThis is where SQLServer etc. tend to get big wins. I know from direct \ncomparisons that SQLServer often takes quite a bit longer to parse/plan \na select statement than Postgres, but wins out overall from its \nquery/plan caching.\n\nRegards,\nGary.\n\n",
"msg_date": "Thu, 23 Sep 2004 07:36:11 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "Neil Conway wrote:\n> Another idea would be to improve the quality of the plan we generate at PREPARE time:\n> for instance you could generate 'n' plans for various combinations of\n> input parameters, and then choose the best query plan at EXECUTE time.\n> It's a difficult problem to solve, however (consider multiple parameters\n> to PREPARE, for example).\n\nDo you mean store different plans for each different histogram segment ?\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Thu, 23 Sep 2004 10:52:21 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "In article <[email protected]>,\nScott Kirkwood <[email protected]> writes:\n\n> I couldn't find anything in the docs or in the mailing list on this,\n> but it is something that Oracle appears to do as does MySQL.\n> The idea, I believe, is to do a quick (hash) string lookup of the\n> query and if it's exactly the same as another query that has been done\n> recently to re-use the old parse tree.\n\nThat's not was MySQL is doing. MySQL caches not the query plan, but\nthe result set for the (hashed) query string. If the same query comes\nagain, it is not executed at all (unless one of the tables involved\nhave been changed meanwhile).\n\n",
"msg_date": "23 Sep 2004 16:24:46 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Not knowing anything about the internals of pg, I don't know how this relates, but in theory, \nquery plan caching is not just about saving time re-planning queries, it's about scalability.\nOptimizing queries requires shared locks on the database metadata, which, as I understand it\ncauses contention and serialization, which kills scalability. \n\nI read this thread from last to first, and I'm not sure if I missed something, but if pg isnt\ncaching plans, then I would say plan caching should be a top priority for future enhancements. It\nneedn't be complex either: if the SQL string is the same, and none of the tables involved in the\nquery have changed (in structure), then re-use the cached plan. Basically, DDL and updated\nstatistics would have to invalidate plans for affected tables. \n\nPreferably, it should work equally for prepared statements and those not pre-prepared. If you're\nnot using prepare (and bind variables) though, your plan caching down the drain anyway...\n\nI don't think that re-optimizing based on values of bind variables is needed. It seems like it\ncould actually be counter-productive and difficult to asses it's impact.\n\nThat's the way I see it anyway.\n\n:)\n\n--- Scott Kirkwood <[email protected]> wrote:\n\n> I couldn't find anything in the docs or in the mailing list on this,\n> but it is something that Oracle appears to do as does MySQL.\n> The idea, I believe, is to do a quick (hash) string lookup of the\n> query and if it's exactly the same as another query that has been done\n> recently to re-use the old parse tree.\n> It should save the time of doing the parsing of the SQL and looking up\n> the object in the system tables.\n> It should probably go through the planner again because values passed\n> as parameters may have changed. Although, for extra points it could\n> look at the previous query plan as a hint.\n> On the surface it looks like an easy enhancement, but what do I know?\n> I suppose it would benefit mostly those programs that use a lot of\n> PQexecParams() with simple queries where a greater percentage of the\n> time is spent parsing the SQL rather than building the execute plan.\n> What do you think?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n\n\n\t\t\n_______________________________\nDo you Yahoo!?\nDeclare Yourself - Register online to vote today!\nhttp://vote.yahoo.com\n",
"msg_date": "Thu, 23 Sep 2004 08:29:25 -0700 (PDT)",
"msg_from": "Mr Pink <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "I'm not an expert, but I've been hunting down a killer performance problem\nfor a while now. It seems this may be the cause.\n\nAt peak load, our database slows to a trickle. The CPU and disk utilization\nare normal - 20-30% used CPU and disk performance good.\n\nAll of our \"postgres\" processes end up in the \"semwai\" state - seemingly\nwaiting on other queries to complete. If the system isn't taxed in CPU or\ndisk, I have a good feeling that this may be the cause. I didn't know that\nplanning queries could create such a gridlock, but based on Mr Pink's\nexplanation, it sounds like a very real possibility.\n\nWe're running on SELECT's, and the number of locks on our \"high traffic\"\ntables grows to the hundreds. If it's not the SELECT locking (and we don't\nget that many INSERT/UPDATE on these tables), could the planner be doing it?\n\nAt peak load (~ 1000 queries/sec on highest traffic table, all very\nsimilar), the serialized queries pile up and essentially create a DoS on our\nservice - requiring a restart of the PG daemon. Upon stop & start, it's\nback to normal.\n\nI've looked at PREPARE, but apparently it only lasts per-session - that's\nworthless in our case (web based service, one connection per data-requiring\nconnection).\n\nDoes this sound plausible? Is there an alternative way to do this that I\ndon't know about? Additionally, in our case, I personally don't see any\ndownside to caching and using the same query plan when the only thing\nsubstituted are variables. In fact, I'd imagine it would help performance\nsignificantly in high-volume web applications.\n\nThanks,\n\nJason\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Mr Pink\n> Sent: Thursday, September 23, 2004 11:29 AM\n> To: Scott Kirkwood; [email protected]\n> Subject: Re: [PERFORM] Caching of Queries\n> \n> Not knowing anything about the internals of pg, I don't know how this\n> relates, but in theory,\n> query plan caching is not just about saving time re-planning queries, it's\n> about scalability.\n> Optimizing queries requires shared locks on the database metadata, which,\n> as I understand it\n> causes contention and serialization, which kills scalability.\n> \n> I read this thread from last to first, and I'm not sure if I missed\n> something, but if pg isnt\n> caching plans, then I would say plan caching should be a top priority for\n> future enhancements. It\n> needn't be complex either: if the SQL string is the same, and none of the\n> tables involved in the\n> query have changed (in structure), then re-use the cached plan. Basically,\n> DDL and updated\n> statistics would have to invalidate plans for affected tables.\n> \n> Preferably, it should work equally for prepared statements and those not\n> pre-prepared. If you're\n> not using prepare (and bind variables) though, your plan caching down the\n> drain anyway...\n> \n> I don't think that re-optimizing based on values of bind variables is\n> needed. It seems like it\n> could actually be counter-productive and difficult to asses it's impact.\n> \n> That's the way I see it anyway.\n> \n> :)\n> \n\n",
"msg_date": "Thu, 23 Sep 2004 12:53:25 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\"Jason Coene\" <[email protected]> writes:\n> All of our \"postgres\" processes end up in the \"semwai\" state - seemingly\n> waiting on other queries to complete. If the system isn't taxed in CPU or\n> disk, I have a good feeling that this may be the cause.\n\nWhatever that is, I'll bet lunch that it's got 0 to do with caching\nquery plans. Can you get stack tracebacks from some of the stuck\nprocesses? What do they show in \"ps\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 2004 13:05:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "Jason Coene wrote:\n> I'm not an expert, but I've been hunting down a killer performance problem\n> for a while now. It seems this may be the cause.\n> \n> At peak load, our database slows to a trickle. The CPU and disk utilization\n> are normal - 20-30% used CPU and disk performance good.\n\nFor a peak load 20-30% used CPU this mean you reached your IO bottleneck.\n\n> All of our \"postgres\" processes end up in the \"semwai\" state - seemingly\n> waiting on other queries to complete. If the system isn't taxed in CPU or\n> disk, I have a good feeling that this may be the cause. I didn't know that\n> planning queries could create such a gridlock, but based on Mr Pink's\n> explanation, it sounds like a very real possibility.\n> \n> We're running on SELECT's, and the number of locks on our \"high traffic\"\n> tables grows to the hundreds. If it's not the SELECT locking (and we don't\n> get that many INSERT/UPDATE on these tables), could the planner be doing it?\n> \n> At peak load (~ 1000 queries/sec on highest traffic table, all very\n> similar), the serialized queries pile up and essentially create a DoS on our\n> service - requiring a restart of the PG daemon. Upon stop & start, it's\n> back to normal.\n\nGive us informations on this queries, a explain analyze could be a good start\npoint.\n\n> I've looked at PREPARE, but apparently it only lasts per-session - that's\n> worthless in our case (web based service, one connection per data-requiring\n> connection).\n\nTrust me the PREPARE is not doing miracle in shenarios like yours . If you use postgres\nin a web service environment what you can use is a connection pool ( look for pgpoll IIRC ),\nif you use a CMS then try to enable the cache in order to avoid to hit the DB for each\nrequest.\n\n\n\n\nRegards\nGaetano Mendola\n",
"msg_date": "Thu, 23 Sep 2004 19:12:16 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Hi Tom,\n\nEasily recreated with Apache benchmark, \"ab -n 30000 -c 3000\nhttp://webserver \". This runs 1 query per page, everything else is cached\non webserver. \n\nThe lone query:\n\nSELECT \n id, \n gameid, \n forumid, \n subject \n FROM threads \n WHERE nuked = 0 \n ORDER BY nuked DESC, \n lastpost DESC LIMIT 8\n\nLimit (cost=0.00..1.99 rows=8 width=39) (actual time=27.865..28.027 rows=8\nloops=1)\n -> Index Scan Backward using threads_ix_nuked_lastpost on threads\n(cost=0.0 0..16824.36 rows=67511 width=39) (actual time=27.856..27.989\nrows=8 loops=1)\n Filter: (nuked = 0)\n Total runtime: 28.175 ms\n\nI'm not sure how I go about getting the stack traceback you need. Any info\non this? Results of \"ps\" below. System is dual xeon 2.6, 2gb ram, hardware\nraid 10 running FreeBSD 5.2.1.\n\nJason\n\nlast pid: 96094; load averages: 0.22, 0.35, 0.38\nup 19+20:50:37 13:10:45\n161 processes: 2 running, 151 sleeping, 8 lock\nCPU states: 12.2% user, 0.0% nice, 16.9% system, 1.6% interrupt, 69.4%\nidle\nMem: 120M Active, 1544M Inact, 194M Wired, 62M Cache, 112M Buf, 2996K Free\nSwap: 4096M Total, 4096M Free\n\n PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND\n50557 pgsql 98 0 95276K 4860K select 0 24:00 0.59% 0.59% postgres\n95969 pgsql 4 0 96048K 34272K sbwait 0 0:00 2.10% 0.29% postgres\n95977 pgsql -4 0 96048K 29620K semwai 2 0:00 1.40% 0.20% postgres\n96017 pgsql 4 0 96048K 34280K sbwait 0 0:00 2.05% 0.20% postgres\n95976 pgsql -4 0 96048K 30564K semwai 3 0:00 1.05% 0.15% postgres\n95970 pgsql -4 0 96048K 24404K semwai 1 0:00 1.05% 0.15% postgres\n95972 pgsql -4 0 96048K 21060K semwai 1 0:00 1.05% 0.15% postgres\n96053 pgsql -4 0 96048K 24140K semwai 3 0:00 1.54% 0.15% postgres\n96024 pgsql -4 0 96048K 22192K semwai 3 0:00 1.54% 0.15% postgres\n95985 pgsql -4 0 96048K 15208K semwai 3 0:00 1.54% 0.15% postgres\n96033 pgsql 98 0 95992K 7812K *Giant 2 0:00 1.54% 0.15% postgres\n95973 pgsql -4 0 96048K 30936K semwai 3 0:00 0.70% 0.10% postgres\n95966 pgsql 4 0 96048K 34272K sbwait 0 0:00 0.70% 0.10% postgres\n95983 pgsql 4 0 96048K 34272K sbwait 2 0:00 1.03% 0.10% postgres\n95962 pgsql 4 0 96048K 34268K sbwait 2 0:00 0.70% 0.10% postgres\n95968 pgsql -4 0 96048K 26232K semwai 2 0:00 0.70% 0.10% postgres\n95959 pgsql 4 0 96048K 34268K sbwait 2 0:00 0.70% 0.10% postgres\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Thursday, September 23, 2004 1:06 PM\n> To: Jason Coene\n> Cc: 'Mr Pink'; 'Scott Kirkwood'; [email protected]\n> Subject: Re: [PERFORM] Caching of Queries\n> \n> \"Jason Coene\" <[email protected]> writes:\n> > All of our \"postgres\" processes end up in the \"semwai\" state - seemingly\n> > waiting on other queries to complete. If the system isn't taxed in CPU\n> or\n> > disk, I have a good feeling that this may be the cause.\n> \n> Whatever that is, I'll bet lunch that it's got 0 to do with caching\n> query plans. Can you get stack tracebacks from some of the stuck\n> processes? What do they show in \"ps\"?\n> \n> \t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 23 Sep 2004 13:22:30 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "\n\"Jason Coene\" <[email protected]> writes:\n\n> All of our \"postgres\" processes end up in the \"semwai\" state - seemingly\n> waiting on other queries to complete. If the system isn't taxed in CPU or\n> disk, I have a good feeling that this may be the cause. \n\nWell, it's possible contention of some sort is an issue but it's not clear\nthat it's planning related contention.\n\n> We're running on SELECT's, and the number of locks on our \"high traffic\"\n> tables grows to the hundreds. \n\nWhere are you seeing this? What information do you have about these locks?\n\n> I've looked at PREPARE, but apparently it only lasts per-session - that's\n> worthless in our case (web based service, one connection per data-requiring\n> connection).\n\nWell the connection time in postgres is pretty quick. But a lot of other\nthings, including prepared queries but also including other factors are a lot\nmore effective if you have long-lived sessions.\n\nI would strongly recommend you consider some sort of persistent database\nconnection for your application. Most web based services run queries from a\nsingle source base where all the queries are written in-house. In that\nsituation you can ensure that one request never leaves the session in an\nunusual state (like setting guc variables strangely, or leaving a transaction\nopen, or whatever).\n\nThat saves you the reconnect time, which as I said is actually small, but\ncould still be contributing to your problem. I think it also makes the buffer\ncache more effective as well. And It also means you can prepare all your\nqueries and reuse them on subsequent requests.\n\nThe nice thing about web based services is that while each page only executes\neach query once, you tend to get the same pages over and over thousands of\ntimes. So if they prepare their queries the first time around they can reuse\nthose prepared queries thousands of times.\n\nUsing a text cache of the query string on the server side is just a\nwork-around for failing to do that on the client side. It's much more\nefficient and more flexible to do it on the client-side.\n\n-- \ngreg\n\n",
"msg_date": "23 Sep 2004 13:25:25 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\"Jason Coene\" <[email protected]> writes:\n> I'm not sure how I go about getting the stack traceback you need. Any info\n> on this? Results of \"ps\" below. System is dual xeon 2.6, 2gb ram, hardware\n> raid 10 running FreeBSD 5.2.1.\n\nHmm. Dual Xeon sets off alarm bells ...\n\nI think you are probably looking at the same problem previously reported\nby Josh Berkus among others. Does the rate of context swaps shown by\nvmstat go through the roof when this happens? If you strace or ktrace\none of the backends, do you see lots of semop()s and little else?\n\nCheck the archives for this thread among others:\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00249.php\nThe test case you are talking about is a tight indexscan loop, which\nis pretty much the same scenario as here:\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n\nThe fundamental problem is heavy contention for access to a shared data\nstructure. We're still looking for good solutions, but in the context\nof this thread it's worth pointing out that a shared query-plan cache\nwould itself be subject to heavy contention, and arguably would make\nthis sort of problem worse not better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 2004 13:35:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "> I've looked at PREPARE, but apparently it only lasts \n> per-session - that's worthless in our case (web based \n> service, one connection per data-requiring connection).\n\nThat's a non-sequitur. Most 'normal' high volume web apps have persistent\nDB connections, one per http server process. Are you really dropping DB\nconnections and reconnecting each time a new HTTP request comes in?\n\nM\n\n",
"msg_date": "Thu, 23 Sep 2004 18:38:32 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Jason Coene wrote:\n> Hi Tom,\n> \n> Easily recreated with Apache benchmark, \"ab -n 30000 -c 3000\n> http://webserver \". This runs 1 query per page, everything else is cached\n> on webserver. \n\nThat test require 30000 access with 3000 connections that is not a normal\nload. Describe us your HW.\n\n3000 connections means a very huge load, may you provide also the result of\n\"vmstat 5\" my webserver trash already with -c 120 !\n\nhow many connection your postgres can manage ?\n\nYou have to consider to use a connection pool with that ammount of connections.\n\n\nRegards\nGaetano Mendola\n",
"msg_date": "Thu, 23 Sep 2004 19:40:32 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Hi, Jason,\n\nOn Thu, 23 Sep 2004 12:53:25 -0400\n\"Jason Coene\" <[email protected]> wrote:\n\n> I've looked at PREPARE, but apparently it only lasts per-session - that's\n> worthless in our case (web based service, one connection per data-requiring\n> connection).\n\nThis sounds like the loads of connection init and close may be the\nreason for the slowdown. Can you use connection pooling in your service?\n\nHTH,\nMarkus\n\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Thu, 23 Sep 2004 19:46:26 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Tom,\n\n> I think you are probably looking at the same problem previously reported\n> by Josh Berkus among others. Does the rate of context swaps shown by\n> vmstat go through the roof when this happens? If you strace or ktrace\n> one of the backends, do you see lots of semop()s and little else?\n\nThat would be interesting. Previously we've only demonstrated the problem on \nlong-running queries, but I suppose it could also affect massive concurrent \nquery access.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 23 Sep 2004 11:56:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> I think you are probably looking at the same problem previously reported\n>> by Josh Berkus among others.\n\n> That would be interesting. Previously we've only demonstrated the\n> problem on long-running queries, but I suppose it could also affect\n> massive concurrent query access.\n\nWell, the test cases we used were designed to get the system into a\ntight loop of grabbing and releasing shared buffers --- a long-running\nindex scan is certainly one of the best ways to do that, but there are\nothers.\n\nI hadn't focused before on the point that Jason is launching a new\nconnection for every query. In that scenario I think the bulk of the\ncycles are going to go into loading the per-backend catalog caches with\nthe system catalog rows that are needed to parse and plan the query.\nThe catalog fetches to get those rows are effectively mini-queries\nwith preset indexscan plans, so it's not hard to believe that they'd be\nhitting the BufMgrLock nearly as hard as a tight indexscan loop. Once\nall the pages needed are cached in shared buffers, there's no I/O delays\nto break the loop, and so you could indeed get into the context swap\nstorm regime we saw before.\n\nI concur with the thought that using persistent connections might go a\nlong way towards alleviating his problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Sep 2004 15:06:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "Hi All,\n\nIt does sound like we should be pooling connections somehow. I'll be\nlooking at implementing that shortly. I'd really like to understand what\nthe actual problem is, though.\n\nSorry, I meant 30,000 with 300 connections - not 3,000. The 300 connections\n/ second is realistic, if not underestimated. As is the nature of our site\n(realtime information about online gaming), there's a huge fan base and as a\nbig upset happens, we'll do 50,000 page views in a span of 3-5 minutes.\n\nI get the same results with:\n\nab -n 10000 -c 150 http://www.gotfrag.com/portal/news/\n\nI've attached results from the above test, showing open locks, top output,\nand vmstat 5.\n\nTom, I've run the test described in:\n\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n\nResults attached in mptest.txt. The box did experience the same problems as\nwe've seen before. I ran it under a separate database (test), and it still\ncaused our other queries to slow significantly from our production database\n(gf) - semwait again.\n\nIt does look like the \"cs\" column under CPU (which I'd assume is Context\nSwap) does bump up significantly (10x or better) during both my ab test, and\nthe test you suggested in that archived message.\n\nReading the first thread you pointed out (2004-04/msg00249.php), Josh Berkus\nwas questioning the ServerWorks chipsets. We're running on the Intel E7501\nChipset (MSI board). Our CPU's are 2.66 GHz with 533MHz FSB, Hyperthreading\nenabled. Unfortunately, I don't have physical access to the machine to turn\nHT off.\n\n\nThanks,\n\nJason\n\n\n\n> -----Original Message-----\n> From: Gaetano Mendola [mailto:[email protected]]\n> Sent: Thursday, September 23, 2004 1:41 PM\n> To: Jason Coene\n> Subject: Re: Caching of Queries\n> \n> Jason Coene wrote:\n> > Hi Tom,\n> >\n> > Easily recreated with Apache benchmark, \"ab -n 30000 -c 3000\n> > http://webserver \". This runs 1 query per page, everything else is\n> cached\n> > on webserver.\n> \n> That test require 30000 access with 3000 connections that is not a normal\n> load. Describe us your HW.\n> \n> 3000 connections means a very huge load, may you provide also the result\n> of\n> \"vmstat 5\" my webserver trash already with -c 120 !\n> \n> how many connection your postgres can manage ?\n> \n> You have to consider to use a connection pool with that ammount of\n> connections.\n> \n> \n> Regards\n> Gaetano Mendola",
"msg_date": "Thu, 23 Sep 2004 15:07:55 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Update:\n\nI just tried running the same test (ab with 150 concurrent connections)\nwhile connecting to postgres through 35 persistent connections (PHP\nlibrary), and had roughly the same type of results. This should eliminate\nthe \"new connection\" overhead. I've attached top and vmstat. I let it run\nuntil it had completed 800 requests. Unless I'm missing something, there's\nmore than the \"new connection\" IO load here.\n\nJason\n\n> -----Original Message-----\n> From: Jason Coene [mailto:[email protected]]\n> Sent: Thursday, September 23, 2004 3:08 PM\n> To: [email protected]\n> Cc: [email protected]; [email protected]; [email protected]\n> Subject: RE: Caching of Queries\n> \n> Hi All,\n> \n> It does sound like we should be pooling connections somehow. I'll be\n> looking at implementing that shortly. I'd really like to understand what\n> the actual problem is, though.\n> \n> Sorry, I meant 30,000 with 300 connections - not 3,000. The 300\n> connections\n> / second is realistic, if not underestimated. As is the nature of our\n> site\n> (realtime information about online gaming), there's a huge fan base and as\n> a\n> big upset happens, we'll do 50,000 page views in a span of 3-5 minutes.\n> \n> I get the same results with:\n> \n> ab -n 10000 -c 150 http://www.gotfrag.com/portal/news/\n> \n> I've attached results from the above test, showing open locks, top output,\n> and vmstat 5.\n> \n> Tom, I've run the test described in:\n> \n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n> \n> Results attached in mptest.txt. The box did experience the same problems\n> as\n> we've seen before. I ran it under a separate database (test), and it\n> still\n> caused our other queries to slow significantly from our production\n> database\n> (gf) - semwait again.\n> \n> It does look like the \"cs\" column under CPU (which I'd assume is Context\n> Swap) does bump up significantly (10x or better) during both my ab test,\n> and\n> the test you suggested in that archived message.\n> \n> Reading the first thread you pointed out (2004-04/msg00249.php), Josh\n> Berkus\n> was questioning the ServerWorks chipsets. We're running on the Intel\n> E7501\n> Chipset (MSI board). Our CPU's are 2.66 GHz with 533MHz FSB,\n> Hyperthreading\n> enabled. Unfortunately, I don't have physical access to the machine to\n> turn\n> HT off.\n> \n> \n> Thanks,\n> \n> Jason\n> \n> \n> \n> > -----Original Message-----\n> > From: Gaetano Mendola [mailto:[email protected]]\n> > Sent: Thursday, September 23, 2004 1:41 PM\n> > To: Jason Coene\n> > Subject: Re: Caching of Queries\n> >\n> > Jason Coene wrote:\n> > > Hi Tom,\n> > >\n> > > Easily recreated with Apache benchmark, \"ab -n 30000 -c 3000\n> > > http://webserver \". This runs 1 query per page, everything else is\n> > cached\n> > > on webserver.\n> >\n> > That test require 30000 access with 3000 connections that is not a\n> normal\n> > load. Describe us your HW.\n> >\n> > 3000 connections means a very huge load, may you provide also the result\n> > of\n> > \"vmstat 5\" my webserver trash already with -c 120 !\n> >\n> > how many connection your postgres can manage ?\n> >\n> > You have to consider to use a connection pool with that ammount of\n> > connections.\n> >\n> >\n> > Regards\n> > Gaetano Mendola",
"msg_date": "Thu, 23 Sep 2004 15:24:33 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Jason,\n\n> Sorry, I meant 30,000 with 300 connections - not 3,000. The 300\n> connections\n> / second is realistic, if not underestimated. As is the nature of\n> our site\n> (realtime information about online gaming), there's a huge fan base\n> and as a\n> big upset happens, we'll do 50,000 page views in a span of 3-5\n> minutes.\n\nFirst, your posts show no evidences of the CS storm bug.\n\nSecond, 300 *new* connections a second is a lot. Each new connection\nrequires a significant amount of both database and OS overhead. This\nis why all the other web developers use a connection pool.\n\nIn fact, I wouldn't be surprised if your lockups are on the OS level,\neven; I don't recall that you cited what OS you're using, but I can\nimagine locking up Linux 2.4 trying to spawn 300 new processes a\nsecond.\n\n--Josh\n",
"msg_date": "Thu, 23 Sep 2004 17:05:58 -0700",
"msg_from": "\"Josh Berkus\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": ">>Sorry, I meant 30,000 with 300 connections - not 3,000. The 300\n>>connections\n>>/ second is realistic, if not underestimated. As is the nature of\n>>our site\n>>(realtime information about online gaming), there's a huge fan base\n>>and as a\n>>big upset happens, we'll do 50,000 page views in a span of 3-5\n>>minutes.\n>> \n>>\n>\n>First, your posts show no evidences of the CS storm bug.\n>\n>Second, 300 *new* connections a second is a lot. Each new connection\n>requires a significant amount of both database and OS overhead. This\n>is why all the other web developers use a connection pool.\n>\n> \n>\nI would second this. You need to be running a connection pool and \nprobably multiple web servers in\nfront of that. You are talking about a huge amount of connections in \nthat amount of time.\n\nJosh Drake\n\n\n\n>In fact, I wouldn't be surprised if your lockups are on the OS level,\n>even; I don't recall that you cited what OS you're using, but I can\n>imagine locking up Linux 2.4 trying to spawn 300 new processes a\n>second.\n>\n>--Josh\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\n\n\n\n\n\nSorry, I meant 30,000 with 300 connections - not 3,000. The 300\nconnections\n/ second is realistic, if not underestimated. As is the nature of\nour site\n(realtime information about online gaming), there's a huge fan base\nand as a\nbig upset happens, we'll do 50,000 page views in a span of 3-5\nminutes.\n \n\n\nFirst, your posts show no evidences of the CS storm bug.\n\nSecond, 300 *new* connections a second is a lot. Each new connection\nrequires a significant amount of both database and OS overhead. This\nis why all the other web developers use a connection pool.\n\n \n\nI would second this. You need to be running a connection pool and\nprobably multiple web servers in \nfront of that. You are talking about a huge amount of connections in\nthat amount of time.\n\nJosh Drake\n\n\n\n\nIn fact, I wouldn't be surprised if your lockups are on the OS level,\neven; I don't recall that you cited what OS you're using, but I can\nimagine locking up Linux 2.4 trying to spawn 300 new processes a\nsecond.\n\n--Josh\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n \n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL",
"msg_date": "Thu, 23 Sep 2004 17:37:30 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Hi Josh,\n\nI just tried using pgpool to pool the connections, and ran:\n\nab -n 1000 -c 50 http://wstg.int/portal/news/\n\nI ran some previous queries to get pgpool to pre-establish all the\nconnections, and ab ran for a few minutes (with one query per page, eek!).\nIt was still exhibiting the same problems as before. While so many new\nconnections at once can surely make the problem worse (and pgpool will\nsurely help there), shouldn't this prove that it's not the only issue?\n\nWe're running FreeBSD 5.2.1\n\nI've attached open locks, running queries, query plans, top output and\nvmstat 5 output for while ab was running, from start to finish.\n\nAny ideas?\n\nJason\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Josh Berkus\n> Sent: Thursday, September 23, 2004 8:06 PM\n> To: Jason Coene; [email protected]\n> Cc: [email protected]; [email protected]; [email protected]\n> Subject: Re: [PERFORM] Caching of Queries\n> \n> Jason,\n> \n> > Sorry, I meant 30,000 with 300 connections - not 3,000. The 300\n> > connections\n> > / second is realistic, if not underestimated. As is the nature of\n> > our site\n> > (realtime information about online gaming), there's a huge fan base\n> > and as a\n> > big upset happens, we'll do 50,000 page views in a span of 3-5\n> > minutes.\n> \n> First, your posts show no evidences of the CS storm bug.\n> \n> Second, 300 *new* connections a second is a lot. Each new connection\n> requires a significant amount of both database and OS overhead. This\n> is why all the other web developers use a connection pool.\n> \n> In fact, I wouldn't be surprised if your lockups are on the OS level,\n> even; I don't recall that you cited what OS you're using, but I can\n> imagine locking up Linux 2.4 trying to spawn 300 new processes a\n> second.\n> \n> --Josh\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend",
"msg_date": "Thu, 23 Sep 2004 21:23:51 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries (now with pgpool)"
},
{
"msg_contents": "On Thu, Sep 23, 2004 at 09:23:51PM -0400, Jason Coene wrote:\n> I ran some previous queries to get pgpool to pre-establish all the\n> connections, and ab ran for a few minutes (with one query per page, eek!).\n> It was still exhibiting the same problems as before. While so many new\n> connections at once can surely make the problem worse (and pgpool will\n> surely help there), shouldn't this prove that it's not the only issue?\n\n> Any ideas?\n\nNow that your connections are persistent, you may benefit from using\nPREPAREd queries.\n\n-Mike\n",
"msg_date": "Thu, 23 Sep 2004 22:00:59 -0400",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries (now with pgpool)"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJoshua D. Drake wrote:\n|\n|>>Sorry, I meant 30,000 with 300 connections - not 3,000. The 300\n|>>connections\n|>>/ second is realistic, if not underestimated. As is the nature of\n|>>our site\n|>>(realtime information about online gaming), there's a huge fan base\n|>>and as a\n|>>big upset happens, we'll do 50,000 page views in a span of 3-5\n|>>minutes.\n|>>\n|>>\n|>\n|>First, your posts show no evidences of the CS storm bug.\n|>\n|>Second, 300 *new* connections a second is a lot. Each new connection\n|>requires a significant amount of both database and OS overhead. This\n|>is why all the other web developers use a connection pool.\n|>\n|>\n|>\n| I would second this. You need to be running a connection pool and\n| probably multiple web servers in\n| front of that. You are talking about a huge amount of connections in\n| that amount of time.\n|\n| Josh Drake\n|\n|\n|\n|>In fact, I wouldn't be surprised if your lockups are on the OS level,\n|>even; I don't recall that you cited what OS you're using, but I can\n|>imagine locking up Linux 2.4 trying to spawn 300 new processes a\n|>second.\n\nNot to mention that a proxy squid mounted in reverse proxy mode will\nhelp a lot.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBVaOg7UpzwH2SGd4RAnW4AJ9TYV0oSjYcv8Oxt4Ot/T/nJikoRgCg1Egx\nr4KKm14ziu/KWFb3SnTK/U8=\n=xgmw\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 25 Sep 2004 18:58:09 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMichael Adler wrote:\n| On Thu, Sep 23, 2004 at 09:23:51PM -0400, Jason Coene wrote:\n|\n|>I ran some previous queries to get pgpool to pre-establish all the\n|>connections, and ab ran for a few minutes (with one query per page, eek!).\n|>It was still exhibiting the same problems as before. While so many new\n|>connections at once can surely make the problem worse (and pgpool will\n|>surely help there), shouldn't this prove that it's not the only issue?\n|\n|\n|>Any ideas?\n|\n|\n| Now that your connections are persistent, you may benefit from using\n| PREPAREd queries.\n|\n| -Mike\n\nWith his load will not change anything.\n\n\nRegards\nGaetano Mendola\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBVany7UpzwH2SGd4RAj9UAJ0SO3VE7zMbwrgdwPQc+HP5PHClMACgtTvn\nKIp1TK2lVbmXZ+s62fpJ46U=\n=sjT0\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 25 Sep 2004 19:25:08 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries (now with pgpool)"
},
{
"msg_contents": "On Thu, 2004-09-23 at 07:43, Aaron Werman wrote:\n> MySQL stores a statement with its results. This is optional and when a\n> client allows this type of processing, the SQL is hashed and matched to the\n> statement - and the stored *result* is returned. The point is that a lot of\n> systems do lots of static queries, such as a pick list on a web page - but\n> if the data changes the prior result is returned. This (plus a stable jdbc\n> driver) was the reason MySQL did well in the eWeek database comparison.\n\nI think the conclusion of past discussions about this feature is that\nit's a bad idea. Last I checked, MySQL has to clear the *entire* query\ncache when a single DML statement modifying the table in question is\nissued. Not to mention that the feature is broken for non-deterministic\nqueries (like now(), ORDER BY random(), or nextval('some_seq'), and so\non). That makes the feature close to useless for a lot of situations,\nalbeit not every situation.\n\n-Neil\n\n\n",
"msg_date": "Mon, 27 Sep 2004 15:03:01 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> I think the conclusion of past discussions about this feature is that\n> it's a bad idea. Last I checked, MySQL has to clear the *entire* query\n> cache when a single DML statement modifying the table in question is\n> issued.\n\nDo they actually make a rigorous guarantee that the cached result is\nstill accurate when/if it is returned to the client? (That's an honest\nquestion --- I don't know how MySQL implements this.)\n\nIIRC, in our past threads on this topic, it was suggested that if you\ncan tolerate not-necessarily-up-to-date results, you should be doing\nthis sort of caching on the client side and not in the DB server at all.\nI wouldn't try that in a true \"client\" scenario, but when the DB client\nis application-server middleware, it would make some sense to cache in\nthe application server.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Sep 2004 01:18:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n\n> I think the conclusion of past discussions about this feature is that\n> it's a bad idea. Last I checked, MySQL has to clear the *entire* query\n> cache when a single DML statement modifying the table in question is\n> issued. Not to mention that the feature is broken for non-deterministic\n> queries (like now(), ORDER BY random(), or nextval('some_seq'), and so\n> on). That makes the feature close to useless for a lot of situations,\n> albeit not every situation.\n\nWell there's no reason to assume that just because other implementations are\nweak that postgres would have to slavishly copy them.\n\nI've often wondered whether it would make sense to cache the intermediate\nresults in queries. Any time there's a Materialize node, the database is\nstoring all those data somewhere; it could note the plan and parameters that\ngenerated the data and reuse them if it sees the same plan and parameters --\nincluding keeping track of whether the source tables have changed or whether\nthere were any non-immutable functions of course.\n\nThis could be quite helpful as people often do a series of queries on the same\nbasic data. Things like calculating the total number of records matching the\nuser's query then fetching only the records that fit on the current page. Or\nfetching records for a report then having to calculate subtotals and totals\nfor that same report. Or even generating multiple reports breaking down the\nsame data along different axes.\n\n-- \ngreg\n\n",
"msg_date": "27 Sep 2004 08:59:27 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Mon, 27 Sep 2004 15:03:01 +1000, Neil Conway <[email protected]> wrote:\n> I think the conclusion of past discussions about this feature is that\n> it's a bad idea. Last I checked, MySQL has to clear the *entire* query\n> cache when a single DML statement modifying the table in question is\n> issued. Not to mention that the feature is broken for non-deterministic\n> queries (like now(), ORDER BY random(), or nextval('some_seq'), and so\n> on). That makes the feature close to useless for a lot of situations,\n> albeit not every situation.\n\nI think it's important to demark three levels of possible caching:\n1) Caching of the parsed query tree\n2) Caching of the query execute plan\n3) Caching of the query results\n\nI think caching the query results (3) is pretty dangerous and\ndifficult to do correctly.\n\nCaching of the the execute plan (2) is not dangerous but may actually\nexecute more slowly by caching a bad plan (i.e. a plan not suited to\nthe current data)\n\nCaching of the query tree (1) to me has very little downsides (except\nextra coding). But may not have a lot of win either, depending how\nmuch time/resources are required to parse the SQL and lookup the\nobjects in the system tables (something I've never gotten a\nsatisfactory answer about). Also, some of the query cache would have\nto be cleared when DDL statements are performed.\n\n-Scott\n",
"msg_date": "Mon, 27 Sep 2004 10:00:14 -0300",
"msg_from": "Scott Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Mon, 27 Sep 2004 01:18:56 -0400, Tom Lane <[email protected]> wrote:\n> IIRC, in our past threads on this topic, it was suggested that if you\n> can tolerate not-necessarily-up-to-date results, you should be doing\n> this sort of caching on the client side and not in the DB server at all.\n> I wouldn't try that in a true \"client\" scenario, but when the DB client\n> is application-server middleware, it would make some sense to cache in\n> the application server.\n\nI'd also like to add that when one of the Mambo community members\nstarted running benchmarks of popular Content Management Systems\n(CMS), the ones that implemented page-level caching were significantly\nmore scalable as a result of the decreased load on the database (and\napplication server, as a result):\n\nhttp://forum.mamboserver.com/showthread.php?t=11782\n\nCaching at the database level provides the smallest possible\nperformance boost (at least regarding caching), as caching the query\non the webserver (via ADOdb's query cache) avoids the database server\naltogether; and page-level caching gives you the biggest possible\nbenefit.\n\nYes, you have to be careful how you cache your data, but for many\napplications it is easy to implement a trigger that clears the cache\nwhen certain data is updated.\n\n-- Mitch\n",
"msg_date": "Mon, 27 Sep 2004 09:29:58 -0400",
"msg_from": "Mitch Pirtle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\nAdded to TODO:\n\n* Consider automatic caching of queries at various levels:\n o Parsed query tree\n o Query execute plan\n o Query results\n\n\n---------------------------------------------------------------------------\n\nScott Kirkwood wrote:\n> On Mon, 27 Sep 2004 15:03:01 +1000, Neil Conway <[email protected]> wrote:\n> > I think the conclusion of past discussions about this feature is that\n> > it's a bad idea. Last I checked, MySQL has to clear the *entire* query\n> > cache when a single DML statement modifying the table in question is\n> > issued. Not to mention that the feature is broken for non-deterministic\n> > queries (like now(), ORDER BY random(), or nextval('some_seq'), and so\n> > on). That makes the feature close to useless for a lot of situations,\n> > albeit not every situation.\n> \n> I think it's important to demark three levels of possible caching:\n> 1) Caching of the parsed query tree\n> 2) Caching of the query execute plan\n> 3) Caching of the query results\n> \n> I think caching the query results (3) is pretty dangerous and\n> difficult to do correctly.\n> \n> Caching of the the execute plan (2) is not dangerous but may actually\n> execute more slowly by caching a bad plan (i.e. a plan not suited to\n> the current data)\n> \n> Caching of the query tree (1) to me has very little downsides (except\n> extra coding). But may not have a lot of win either, depending how\n> much time/resources are required to parse the SQL and lookup the\n> objects in the system tables (something I've never gotten a\n> satisfactory answer about). Also, some of the query cache would have\n> to be cleared when DDL statements are performed.\n> \n> -Scott\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 27 Sep 2004 10:17:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "In article <[email protected]>,\nNeil Conway <[email protected]> writes:\n\n> I think the conclusion of past discussions about this feature is that\n> it's a bad idea. Last I checked, MySQL has to clear the *entire* query\n> cache when a single DML statement modifying the table in question is\n> issued.\n\nNope, it deletes only queries using that table.\n\n> Not to mention that the feature is broken for non-deterministic\n> queries (like now(), ORDER BY random(), or nextval('some_seq'), and so\n> on).\n\nQueries containing now(), rand(), or similar functions aren't cached by MySQL.\n\n",
"msg_date": "27 Sep 2004 18:15:35 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "From: \"Scott Kirkwood\" <[email protected]>\n\n> On Mon, 27 Sep 2004 15:03:01 +1000, Neil Conway <[email protected]> wrote:\n> > I think the conclusion of past discussions about this feature is that\n> > it's a bad idea. Last I checked, MySQL has to clear the *entire* query\n> > cache when a single DML statement modifying the table in question is\n> > issued. Not to mention that the feature is broken for non-deterministic\n> > queries (like now(), ORDER BY random(), or nextval('some_seq'), and so\n> > on). That makes the feature close to useless for a lot of situations,\n> > albeit not every situation.\n\nOnly the cache of changed tables are cleared. MySQL sanely doesn't cache\nstatements with unstable results. The vast majority of statements are\nstable. The feature is likely to dramatically improve performance of most\napplications; ones with lots of queries are obvious, but even data\nwarehouses have lots of (expensive) repetitious queries against static data.\n\n>\n> I think it's important to demark three levels of possible caching:\n> 1) Caching of the parsed query tree\n> 2) Caching of the query execute plan\n> 3) Caching of the query results\n>\n> I think caching the query results (3) is pretty dangerous and\n> difficult to do correctly.\n\nI think it's very hard to cache results on the client side without guidance\nbecause it is expensive to notify the client of change events. A changing\ntable couldn't be cached on client side without a synchronous check to the\ndb - defeating the purpose.\n\nGuidance should work, though - I also think an optional client configuration\ntable which specified static tables would work and the cost of a sparse XOR\nhash of statements to find match candidate statements would be negligible.\nThe list of tables would be a contract that they won't change. The fact is\nthat there often are a lot of completely static tables in high volume\ntransaction systems, and the gain of SQUID style proxying could be an\nenormous performance gain (effort, network overhead, latency, DB server cont\next switching, ...) especially in web farm and multi tiered applications\n(and middleware doing caching invests so many cycles to do so).\n\nCaching results on the server would also dramatically improve performance of\nhigh transaction rate applications, but less than at the client. The\nalgorithm of only caching small result sets for tables that haven't changed\nrecently is trivial, and the cost of first pass filtering of candidate\nstatements to use a cache result through sparse XOR hashes is low. The\nstatement/results cache would need to be invalidated when any referenced\ntable is changed. This option seems like a big win.\n\n>\n> Caching of the the execute plan (2) is not dangerous but may actually\n> execute more slowly by caching a bad plan (i.e. a plan not suited to\n> the current data)\n\nThis concern could be resolved by aging plans out of cache.\n\nThis concern relates to an idiosyncrasy of pg, that vacuum has such a\nprofound effect. Anyone who has designed very high transaction rate systems\nappreciates DB2 static binding, where a plan is determined and stored in the\ndatabase, and precompiled code uses those plans - and is both stable and\nfree of plan cost. The fact is that at a high transaction rate, we often see\nthe query parse and optimization as the most expensive activity. 
The planner\ndesign has to be \"dumbed down\" to reduce overhead (and even forced to geqo\nchoice).\n\nThe common development philosophy in pg is expecting explicit prepares and\nexecutes against bind variables (relatively rare, but useful in high volume\nsituations), and otherwise (commonly) using explicit literals in statements.\nThe problem here is the prepare/execute only works in monolithic\napplications, and the chance of reuse of SQL statements with literals is\nmuch lower.\n\n(On a blue sky note, I would love to see a planner that dynamically changed\nsearch depth of execution paths, so it could exhaustively build best plans\nat low usage times and be less sophisticated when the load was higher... or\nbetter yet, try alternatively for very high transaction frequency plans\nuntil it found the best one in practice! The identified correct plan would\nbe used subsequently.)\n\n>\n> Caching of the query tree (1) to me has very little downsides (except\n> extra coding). But may not have a lot of win either, depending how\n> much time/resources are required to parse the SQL and lookup the\n> objects in the system tables (something I've never gotten a\n> satisfactory answer about). Also, some of the query cache would have\n> to be cleared when DDL statements are performed.\n\nParse cache is obviously easy - just store the parse tree with a hash and\nthe SQL string. This would only help some very specific types of transaction\nmixes. The issue is why go through all this trouble without caching the\nplan? The same issues exist in both - the cost of matching, the need to\ninvalidate if objects definitions change, but the win would be so much less.\n\n>\n> -Scott\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n\n\n",
"msg_date": "Mon, 27 Sep 2004 12:43:38 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "> I think it's very hard to cache results on the client side \n> without guidance because it is expensive to notify the client \n> of change events. A changing table couldn't be cached on \n> client side without a synchronous check to the db - defeating \n> the purpose.\n\nThis is very true. Client side caching is an enormous win for apps, but it\nrequires quite a lot of logic, triggers to update last-modified fields on\nrelevant tables, etc etc. Moving some of this logic to the DB would perhaps\nnot usually be quite as efficient as a bespoke client caching solution, but\nit will above all be a lot easier for the application developer!\n\nThe other reason why it is god for the DB to support this feature is that in\ntypical web apps there are multiple web/app servers in a farm, but mostly\njust one live DB instance, so effective client side caching requires a\ndistributed cache, or a SQL proxy, both of which are the kind of middleware\nthat tends to give cautious people cause to fret.\n\nAs a side effect, this would also satisfy the common gotcha of count(),\nmax() and other aggregates always needing a scan. There are _so_ many\noccasions where 'select count(*) from bar' really does not need to be that\naccurate.\n\nSo yeah, here's another vote for this feature. It doesn't even need to\nhappen automagically to be honest, so long as it's really simple for the\nclient to turn on (preferably per-statement or per-table).\n\nActually, that gives me an implementation idea. How about cacheable views?\nSo you might do:\n\nCREATE [ CACHEABLE ] VIEW view \n\t[ MAXSTALEDATA seconds ]\n\t[ MAXSTALEPLAN seconds ]\nAS ... \n\nThat would be tidy I think...\n\nM\n\n",
"msg_date": "Mon, 27 Sep 2004 18:20:48 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "> So yeah, here's another vote for this feature. It doesn't even need to\n> happen automagically to be honest, so long as it's really simple for the\n> client to turn on (preferably per-statement or per-table).\n\nIt might be easiest to shove the caching logic into pgpool instead.\n\nCreate an extension of EXPLAIN which returns data in an easy to\nunderstand format for computers so that pgpool can retrieve information\nsuch as a list of tables involved, \n\nExtend LISTEN to be able to listen for a SELECT on a table --\nreplacement for dynamically adding triggers to send a notify on inserts,\nupdates, deletes.\n\nCreate some kind of generic LISTEN for structural changes. I know SLONY\ncould make use of triggers on ALTER TABLE, and friends as well.\n\n\nWhen pg_pool is told to cache a query, it can get a table list and\nmonitor for changes. When it gets changes, simply dumps the cache.\n\n",
"msg_date": "Mon, 27 Sep 2004 13:37:51 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "[ discussion of server side result caching ]\n\nand lets not forget PG's major fork it will throw into things: MVCC\nThe results of query A may hold true for txn 1, but not txn 2 and so on \n.\nThat would have to be taken into account as well and would greatly \ncomplicate things.\n\nIt is always possible to do a \"poor man\"'s query cache with triggers.. \nwhich would just leave you with basically a materialized view.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Mon, 27 Sep 2004 14:25:55 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Mon, 27 Sep 2004 18:20:48 +0100, Matt Clark <[email protected]> wrote:\n> This is very true. Client side caching is an enormous win for apps, but it\n> requires quite a lot of logic, triggers to update last-modified fields on\n> relevant tables, etc etc. Moving some of this logic to the DB would perhaps\n> not usually be quite as efficient as a bespoke client caching solution, but\n> it will above all be a lot easier for the application developer!\n\nIn the world of PHP it is trivial thanks to PEAR's Cache_Lite. The\nproject lead for Mambo implemented page-level caching in a day, and\nhad all the triggers for clearing the cache included in the content\nmanagement interface - not difficult at all.\n\nBasically you set a default in seconds for the HTML results to be\ncached, and then have triggers set that force the cache to regenerate\n(whenever CRUD happens to the content, for example).\n\nCan't speak for Perl/Python/Ruby/.Net/Java, but Cache_Lite sure made a\nbeliever out of me!\n\n-- Mitch\n",
"msg_date": "Mon, 27 Sep 2004 14:59:13 -0400",
"msg_from": "Mitch Pirtle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Thu, Sep 23, 2004 at 08:29:25AM -0700, Mr Pink wrote:\n> Not knowing anything about the internals of pg, I don't know how this relates, but in theory, \n> query plan caching is not just about saving time re-planning queries, it's about scalability.\n> Optimizing queries requires shared locks on the database metadata, which, as I understand it\n> causes contention and serialization, which kills scalability. \n\nOne of the guru's can correct me if I'm wrong here, but AFAIK metadata\nlookups use essentially the same access methods as normal queries. This\nmeans MVCC is used and no locking is required. Even if locks were\nrequired, they would be shared read locks which wouldn't block each\nother.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 27 Sep 2004 14:18:36 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "The context of the discussion was a hack to speed queries against static\ntables, so MVCC is not relevent. As soon as any work unit against a\nreferenced table commits, the cache is invalid, and in fact the table\nshouldn't be a candidate for this caching for a while. In fact, this cache\nwould reduce some the MVCC 'select count(*) from us_states' type of horrors.\n\n(The attraction of a server side cache is obviously that it could *with no\nserver or app changes* dramatically improve performance. A materialized view\nis a specialized denormalization-ish mechanism to optimize a category of\nqueries and requires the DBA to sweat the details. It is very hard to cache\nthings stochastically without writing a server. Trigger managed extracts\nwon't help you execute 1,000 programs issuing the query \"select sec_level\nfrom sec where division=23\" each second or a big table loaded monthly.)\n\n\n\n----- Original Message ----- \nFrom: \"Jeff\" <[email protected]>\nTo: \"Mitch Pirtle\" <[email protected]>\nCc: \"Aaron Werman\" <[email protected]>; \"Scott Kirkwood\"\n<[email protected]>; \"Neil Conway\" <[email protected]>;\n<[email protected]>; \"Tom Lane\" <[email protected]>\nSent: Monday, September 27, 2004 2:25 PM\nSubject: Re: [PERFORM] Caching of Queries\n\n\n> [ discussion of server side result caching ]\n>\n> and lets not forget PG's major fork it will throw into things: MVCC\n> The results of query A may hold true for txn 1, but not txn 2 and so on\n> .\n> That would have to be taken into account as well and would greatly\n> complicate things.\n>\n> It is always possible to do a \"poor man\"'s query cache with triggers..\n> which would just leave you with basically a materialized view.\n>\n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n",
"msg_date": "Mon, 27 Sep 2004 16:11:53 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\n>Basically you set a default in seconds for the HTML results to be\n>cached, and then have triggers set that force the cache to regenerate\n>(whenever CRUD happens to the content, for example).\n>\n>Can't speak for Perl/Python/Ruby/.Net/Java, but Cache_Lite sure made a\n>believer out of me!\n>\n> \n>\nNice to have it in a library, but if you want to be that simplistic then \nit's easy in any language. What if a process on server B modifies a n \nimportant value that server A has cached though? Coherency (albeit that \nthe client may choose to not use it) is a must for a general solution.\n",
"msg_date": "Mon, 27 Sep 2004 21:19:12 +0100",
"msg_from": "Matt Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\n>It might be easiest to shove the caching logic into pgpool instead.\n>\n>...\n>\n>When pg_pool is told to cache a query, it can get a table list and\n>monitor for changes. When it gets changes, simply dumps the cache.\n>\n>\n> \n>\nIt's certainly the case that the typical web app (which, along with \nwarehouses, seems to be one half of the needy apps), could probably do \nworse than use pooling as well. I'm not well up enough on pooling to \nknow how bulletproof it is though, which is why I included it in my list \nof things that make me go 'hmm....'. It would be really nice not to \nhave to take both things together.\n\nMore to the point though, I think this is a feature that really really \nshould be in the DB, because then it's trivial for people to use. \nTaking an existing production app and justifying a switch to an extra \nlayer of pooling software is relatively hard compared with grabbing data \nfrom a view instead of a table (or setting a variable, or adding a tweak \nto a query, or however else it might be implemented).\n\nEminiently doable in pgpool though, and just the right thing for anyone \nalready using it.\n\nM\n",
"msg_date": "Mon, 27 Sep 2004 21:30:31 +0100",
"msg_from": "Matt Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "> More to the point though, I think this is a feature that really really \n> should be in the DB, because then it's trivial for people to use. \n\nHow does putting it into PGPool make it any less trivial for people to\nuse?\n\n",
"msg_date": "Mon, 27 Sep 2004 16:37:34 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Mon, Sep 27, 2004 at 09:19:12PM +0100, Matt Clark wrote:\n\n> >Basically you set a default in seconds for the HTML results to be\n> >cached, and then have triggers set that force the cache to regenerate\n> >(whenever CRUD happens to the content, for example).\n> >\n> >Can't speak for Perl/Python/Ruby/.Net/Java, but Cache_Lite sure made a\n> >believer out of me!\n> >\n> > \n> >\n> Nice to have it in a library, but if you want to be that simplistic then \n> it's easy in any language. What if a process on server B modifies a n \n> important value that server A has cached though? Coherency (albeit that \n> the client may choose to not use it) is a must for a general solution.\n\nmemcached is one solution designed for that situation. Easy to use\nfrom most languages. Works. Lets you use memory on systems where you\nhave it, rather than using up valuable database server RAM that's\nbetter spent caching disk sectors.\n\nAny competently written application where caching results would be a\nsuitable performance boost can already implement application or\nmiddleware caching fairly easily, and increase performance much more\nthan putting result caching into the database would.\n\nI don't see caching results in the database as much of a win for most\nwell written applications. Toy benchmarks, sure, but for real apps it\nseems it would add a lot of complexity, and violate the whole point of\nusing an ACID database.\n\n(Caching parse trees or query plans, though? It'd be interesting to\n model what effect that'd have.)\n\nCheers,\n Steve\n\n",
"msg_date": "Mon, 27 Sep 2004 13:53:45 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": ">>More to the point though, I think this is a feature that really really \n>>should be in the DB, because then it's trivial for people to use. \n>> \n>>\n>\n>How does putting it into PGPool make it any less trivial for people to\n>use?\n>\nThe answers are at http://www2b.biglobe.ne.jp/~caco/pgpool/index-e.html \n. Specifically, it's a separate application that needs configuration, \nthe homepage has no real discussion of the potential pitfalls of pooling \nand what this implementation does to get around them, you get the idea. \nI'm sure it's great software, but it doesn't come as part of the DB \nserver, so 95% of people who would benefit from query caching being \nimplemented in it never will. If it shipped with and was turned on by \ndefault in SUSE or RedHat that would be a different matter. Which I \nrealise makes me look like one of those people who doesn't appreciate \ncode unless it's 'popular', but I hope I'm not *that* bad...\n\nOh OK, I'll say it, this is a perfect example of why My*** has so much \nmore mindshare. It's not better, but it sure makes the average Joe \n_feel_ better. Sorry, I've got my corporate hat on today, I'm sure I'll \nfeel a little less cynical tomorrow.\n\nM\n\n\n\n\n\n\n\n\n\n\nMore to the point though, I think this is a feature that really really \nshould be in the DB, because then it's trivial for people to use. \n \n\n\nHow does putting it into PGPool make it any less trivial for people to\nuse?\n\nThe answers are at \nhttp://www2b.biglobe.ne.jp/~caco/pgpool/index-e.html . Specifically,\nit's a separate application that needs configuration, the homepage has\nno real discussion of the potential pitfalls of pooling and what this\nimplementation does to get around them, you get the idea. I'm sure\nit's great software, but it doesn't come as part of the DB server, so\n95% of people who would benefit from query caching being implemented in\nit never will. If it shipped with and was turned on by default in SUSE\nor RedHat that would be a different matter. Which I realise makes me\nlook like one of those people who doesn't appreciate code unless it's\n'popular', but I hope I'm not *that* bad...\n\nOh OK, I'll say it, this is a perfect example of why My*** has so much\nmore mindshare. It's not better, but it sure makes the average Joe\n_feel_ better. Sorry, I've got my corporate hat on today, I'm sure\nI'll feel a little less cynical tomorrow.\n\nM",
"msg_date": "Mon, 27 Sep 2004 22:35:42 +0100",
"msg_from": "Matt Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\n>Any competently written application where caching results would be a\n>suitable performance boost can already implement application or\n>middleware caching fairly easily, and increase performance much more\n>than putting result caching into the database would.\n>\n> \n>\nI guess the performance increase is that you can spend $10,000 on a \ndeveloper, or $10,000 on hardware, and for the most part get a more \nreliable result the second way. MemcacheD is fine(ish), but it's not a \npanacea, and it's more than easy to shoot yourself in the foot with it. \nCaching is hard enough that lots of people do it badly - I'd rather use \nan implementation from the PG team than almost anywhere else.\n\n>I don't see caching results in the database as much of a win for most\n>well written applications. Toy benchmarks, sure, but for real apps it\n>seems it would add a lot of complexity, and violate the whole point of\n>using an ACID database.\n>\n> \n>\nWell the point surely is to _remove_ complexity from the application, \nwhich is written by God Knows Who, and put it in the DB, which is \nwritten by God And You. And you can still have ACID (cached data is not \nthe same as stale data, although once you have the former, the latter \ncan begin to look tempting sometimes).\n\nM\n",
"msg_date": "Mon, 27 Sep 2004 22:41:52 +0100",
"msg_from": "Matt Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Jim,\n\nI can only tell you (roughly) how it works wth Oracle, and it's a very well\ndocumented and laboured point over there - it's the cornerstone of Oracle's\nscalability architecture, so if you don't believe me, or my explanation is\njust plain lacking, then it wouldn't be a bad idea to check it out. The\n\"other Tom\" aka Tomas Kyte runs the Ask Tom site which is a great source of\ninfo on this. It's also very well explained in his book \"Expert one on one\nOracle\" I think it was called. I havn't seen any reason yet as to why the\nsame issues shouldn't, don't or wouldn't apply to pg.\n\nYour comment is both right and wrong. Yes, metadata lookups are essentially\nthe same as as access methods for normal queries. Any time you read data in\nthe DB you have to place a shared lock, often called a latch - it's a\nlightweight type of lock. The trouble is that while a data page can have\nmultiple latches set at any time, only 1 process can be placing a a latch\non a page at a time. This doesn't sound so serious so far, latches are\n\"lightweight\" afterall, however... even in a database of a billion rows and\n100+ tables, the database metadata is a very _small_ area. You must put\nlatches on the metadata tables to do optimization, so for example, if you\nare optimizing a 10 table join, you must queue up 10 times to place your\nlatchs. You then do your optimization and queue up 10 more times to remove\nyour latches. In fact it is worse than this, because you won't queue up 10\ntimes it's more likely to be a hundred times since it is far more complex\nthan 1 latch per table being optimized (you will be looking up statistics\nand other things).\n\nAs I already said, even in a huge DB of a billion rows, these latches are\nhappening on a realatively small and concentrated data set - the metadata.\nEven if there is no contention for the application data, the contention for\nthe metadata may be furious. Consider this scenario, you have a 1000 users\nconstantly submitting queries that must not only be soft parsed (SQL\nstatement syntax) but hard parsed (optimized) because you have no query\ncache. Even if they are looking at completely different data, they'll all be\nqueuing up for latches on the same little patch of metadata. Doubling your\nCPU speed or throwing in a fibre channel disk array will not help here, the\nsystem smply won't scale.\n\nTom Lane noted that since the query cache would be in shared memory the\ncontention issue does not go away. This is true, but I don't think that it's\nhard to see that the amount of contention is consderably less in any system\nthat is taking advantage of the caching facility - ie applications using\nbind variables to reduce hard parsing. However, badly written applications\n(from the point of view of query cache utilization) could very well\nexperience a degradation in performance. This could be handled with an\noption to disable caching - or even better to disable caching of any sql not\nusing binds. I don't think even the mighty Oracle has that option.\n\nAs you may have guessed, my vote is for implementing a query cache that\nincludes plans.\n\nI have no specific preference as to data caching. 
It doesn't seem to be so\nimportant to me.\n\nRegards\nIain\n\n\n\n> On Thu, Sep 23, 2004 at 08:29:25AM -0700, Mr Pink wrote:\n> > Not knowing anything about the internals of pg, I don't know how this\nrelates, but in theory,\n> > query plan caching is not just about saving time re-planning queries,\nit's about scalability.\n> > Optimizing queries requires shared locks on the database metadata,\nwhich, as I understand it\n> > causes contention and serialization, which kills scalability.\n>\n> One of the guru's can correct me if I'm wrong here, but AFAIK metadata\n> lookups use essentially the same access methods as normal queries. This\n> means MVCC is used and no locking is required. Even if locks were\n> required, they would be shared read locks which wouldn't block each\n> other.\n> -- \n> Jim C. Nasby, Database Consultant [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n>\n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Tue, 28 Sep 2004 11:06:17 +0900",
"msg_from": "\"Iain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\"Iain\" <[email protected]> writes:\n> I can only tell you (roughly) how it works wth Oracle,\n\nWhich unfortunately has little to do with how it works with Postgres.\nThis \"latches\" stuff is irrelevant to us.\n\nIn practice, any repetitive planning in PG is going to be consulting\ncatalog rows that it draws from the backend's local catalog caches.\nAfter the first read of a given catalog row, the backend won't need\nto re-read it unless the associated table has a schema update. (There\nare some other cases, like a VACUUM FULL of the catalog the rows came\nfrom, but in practice catalog cache entries don't change often in most\nscenarios.) We need place only one lock per table referenced in order\nto interlock against schema updates; not one per catalog row used.\n\nThe upshot of all this is that any sort of shared plan cache is going to\ncreate substantially more contention than exists now --- and that's not\neven counting the costs of managing the cache, ie deciding when to throw\naway entries.\n\nA backend-local plan cache would avoid the contention issues, but would\nof course not allow amortizing planning costs across multiple backends.\n\nI'm personally dubious that sharing planning costs is a big deal.\nSimple queries generally don't take that long to plan. Complicated\nqueries do, but I think the reusability odds go down with increasing\nquery complexity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Sep 2004 23:17:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
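As a concrete footnote to the backend-local point: PREPARE/EXECUTE already amortizes parse and plan cost within a single connection (table name invented; the stored plan is built once, without knowledge of the actual parameter values):

    PREPARE get_account(integer) AS
        SELECT * FROM accounts WHERE account_id = $1;

    EXECUTE get_account(42);   -- planned once, at PREPARE time
    EXECUTE get_account(43);   -- reuses the stored plan, no re-parse/re-plan

    DEALLOCATE get_account;

What it cannot do is share that plan with the other backends in a connection pool, which is precisely the trade-off under discussion.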
{
"msg_contents": "Hi Tom,\n\n> This \"latches\" stuff is irrelevant to us.\n\nWell, that's good to know anyway, thanks for setting me straight. Maybe\nOracle could take a leaf out of PGs book instead of the other way around. I\nrecall that you mentioned the caching of the schema before, so even though I\nassumed PG was latching the metadata, I had begun to wonder if it was\nactually neccessary.\n\nWhile it7s obviously not as critical as I thought, I think there may still\nbe some potential for query caching by pg. It would be nice to have the\noption anyway, as different applications have different needs.\n\nI think that re-use of SQL in applications (ie controlling the proliferation\nof SQL statements that are minor variants of each other) is a good goal for\nmaintainability, even if it doesn't have a major impact on performance as it\nseems you are suggesting in the case of pg. Even complex queries that must\nbe constructed dynamically typically only have a finite number of options\nand can still use bind variables, so in a well tuned system, they should\nstill be viable candidates for caching (ie, if they aren't being bumped out\nof the cache by thousands of little queries not using binds).\n\nI'll just finish by saying that, developing applications in a way that would\ntake advantage of any query caching still seems like good practice to me,\neven if the target DBMS has no query caching. For now, that's what I plan to\ndo with future PG/Oracle/Hypersonic (my 3 favourite DBMSs) application\ndevelopment anyway.\n\nRegards\nIain\n\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Iain\" <[email protected]>\nCc: \"Jim C. Nasby\" <[email protected]>; <[email protected]>\nSent: Tuesday, September 28, 2004 12:17 PM\nSubject: Re: [PERFORM] Caching of Queries\n\n\n> \"Iain\" <[email protected]> writes:\n> > I can only tell you (roughly) how it works wth Oracle,\n>\n> Which unfortunately has little to do with how it works with Postgres.\n> This \"latches\" stuff is irrelevant to us.\n>\n> In practice, any repetitive planning in PG is going to be consulting\n> catalog rows that it draws from the backend's local catalog caches.\n> After the first read of a given catalog row, the backend won't need\n> to re-read it unless the associated table has a schema update. (There\n> are some other cases, like a VACUUM FULL of the catalog the rows came\n> from, but in practice catalog cache entries don't change often in most\n> scenarios.) We need place only one lock per table referenced in order\n> to interlock against schema updates; not one per catalog row used.\n>\n> The upshot of all this is that any sort of shared plan cache is going to\n> create substantially more contention than exists now --- and that's not\n> even counting the costs of managing the cache, ie deciding when to throw\n> away entries.\n>\n> A backend-local plan cache would avoid the contention issues, but would\n> of course not allow amortizing planning costs across multiple backends.\n>\n> I'm personally dubious that sharing planning costs is a big deal.\n> Simple queries generally don't take that long to plan. Complicated\n> queries do, but I think the reusability odds go down with increasing\n> query complexity.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Tue, 28 Sep 2004 13:47:30 +0900",
"msg_from": "\"Iain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Iain\" <[email protected]>\nCc: \"Jim C. Nasby\" <[email protected]>; <[email protected]>\nSent: Monday, September 27, 2004 11:17 PM\nSubject: Re: [PERFORM] Caching of Queries\n\n\n> \"Iain\" <[email protected]> writes:\n> > I can only tell you (roughly) how it works wth Oracle,\n>\n> Which unfortunately has little to do with how it works with Postgres.\n> This \"latches\" stuff is irrelevant to us.\n\nLatches are the Oracle term for semaphores. Both Oracle and pg use\nsemaphores and spin locks to serialize activity in critical sections. I\nbelieve that the point that blocking/queuing reduces scalability is valid.\n\n>\n> In practice, any repetitive planning in PG is going to be consulting\n> catalog rows that it draws from the backend's local catalog caches.\n> After the first read of a given catalog row, the backend won't need\n> to re-read it unless the associated table has a schema update. (There\n> are some other cases, like a VACUUM FULL of the catalog the rows came\n> from, but in practice catalog cache entries don't change often in most\n> scenarios.) We need place only one lock per table referenced in order\n> to interlock against schema updates; not one per catalog row used.\n>\n> The upshot of all this is that any sort of shared plan cache is going to\n> create substantially more contention than exists now --- and that's not\n> even counting the costs of managing the cache, ie deciding when to throw\n> away entries.\n\nI imagine a design where a shared plan cache would consist of the plans,\nindexed by a statement hash and again by dependant objects. A statement to\nbe planned would be hashed and matched to the cache. DDL would need to\nsynchronously destroy all dependant plans. If each plan maintains a validity\nflag, changing the cache wouldn't have to block so I don't see where there\nwould be contention.\n\n>\n> A backend-local plan cache would avoid the contention issues, but would\n> of course not allow amortizing planning costs across multiple backends.\n>\n> I'm personally dubious that sharing planning costs is a big deal.\n> Simple queries generally don't take that long to plan. Complicated\n> queries do, but I think the reusability odds go down with increasing\n> query complexity.\n>\n\nI think both the parse and planning are major tasks if the transaction rate\nis high. Simple queries can easily take much longer to plan than execute, so\nthis is a scalability concern. Caching complicated queries is valuable -\napps seem to have lots of similar queries because they are intimately\nrelated to the data model.\n\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n",
"msg_date": "Tue, 28 Sep 2004 09:04:34 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\"Aaron Werman\" <[email protected]> writes:\n> I imagine a design where a shared plan cache would consist of the plans,\n> indexed by a statement hash and again by dependant objects. A statement to\n> be planned would be hashed and matched to the cache. DDL would need to\n> synchronously destroy all dependant plans. If each plan maintains a validity\n ^^^^^^^^^^^^^\n> flag, changing the cache wouldn't have to block so I don't see where there\n ^^^^^^^^^^^^^^^^^^^^^^\n> would be contention.\n\nYou have contention to access a shared data structure *at all* -- for\ninstance readers must lock out writers. Or didn't you notice the self-\ncontradictions in what you just said?\n\nOur current scalability problems dictate reducing such contention, not\nadding whole new sources of it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Sep 2004 09:58:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Aaron Werman\" <[email protected]>\nCc: \"Iain\" <[email protected]>; \"Jim C. Nasby\" <[email protected]>;\n<[email protected]>\nSent: Tuesday, September 28, 2004 9:58 AM\nSubject: Re: [PERFORM] Caching of Queries\n\n\n> \"Aaron Werman\" <[email protected]> writes:\n> > I imagine a design where a shared plan cache would consist of the plans,\n> > indexed by a statement hash and again by dependant objects. A statement\nto\n> > be planned would be hashed and matched to the cache. DDL would need to\n> > synchronously destroy all dependant plans. If each plan maintains a\nvalidity\n> ^^^^^^^^^^^^^\n> > flag, changing the cache wouldn't have to block so I don't see where\nthere\n> ^^^^^^^^^^^^^^^^^^^^^^\n> > would be contention.\n>\n> You have contention to access a shared data structure *at all* -- for\n> instance readers must lock out writers. Or didn't you notice the self-\n> contradictions in what you just said?\n>\n> Our current scalability problems dictate reducing such contention, not\n> adding whole new sources of it.\n\nYou're right - that seems unclear. What I meant is that there can be a\nglobal hash table that is never locked, and the hashes point to chains of\nplans that are only locally locked for maintenance, such as gc and chaining\nhash collisions. If maintenance was relatively rare and only local, my\nassumption is that it wouldn't have global impact.\n\nThe nice thing about plan caching is that it can be sloppy, unlike block\ncache, because it is only an optimization tweak. So, for example, if the\nplan has atomic refererence times or counts there is no need to block, since\noverwriting is not so bad. If the multiprocessing planner chains the same\nplan twice, the second one would ultimately age out....\n\n/Aaron\n\n>\n> regards, tom lane\n>\n",
"msg_date": "Tue, 28 Sep 2004 10:36:03 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "I could spend a week or two tweaking the performance of my database servers\nand probably make some sizeable improvements, but I'm not going to.\n\nWhy? Because PostgreSQL screams as it is.\n\nI would make sure that if the consensus is to add some sort of caching that\nit be done only if there is no hit to current performance and stability.\nThat being said, I think that server side caching has major buzz and there's\nnothing wrong with adding features that sell.\n\nI will disagree with 3 points made on the argument against caching.\nSpecifically, the benefit of doing caching on the db server is that the\nbenefits may be reaped by multiple clients where as caching on the client\nside must be done by each client and may not be as effective.\n\nSo what if the caching has a slight chance of returning stale results? Just\nmake sure people know about it in advance. There are some things where\nstale results are no big deal and if I can easily benefit from an aggressive\ncaching system, I will (and I do now with the adodb caching library, but\nlike I said, caching has to be done for each client). In fact, I'm all for\nusing a low-tech cache expiration algorithm to keep complexity down.\n\nFinally, if the caching is not likely to help (or may even hurt) simple\nqueries but is likely to help complex queries then fine, make sure people\nknow about it and let them decide if they can benefit. \n\nSorry if I'm beating a dead horse or playing the devil's advocate. Just\nfelt compelled to chime in.\n\n-- \nMatthew Nuzum + \"Man was born free, and everywhere\nwww.bearfruit.org : he is in chains,\" Rousseau\n+~~~~~~~~~~~~~~~~~~+ \"Then you will know the truth, and \nthe TRUTH will set you free,\" Jesus Christ (John 8:32 NIV)\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent: Monday, September 27, 2004 1:19 AM\nTo: Neil Conway\nCc: Aaron Werman; Scott Kirkwood; [email protected]\nSubject: Re: [PERFORM] Caching of Queries\n\nNeil Conway <[email protected]> writes:\n> I think the conclusion of past discussions about this feature is that\n> it's a bad idea. Last I checked, MySQL has to clear the *entire* query\n> cache when a single DML statement modifying the table in question is\n> issued.\n\nDo they actually make a rigorous guarantee that the cached result is\nstill accurate when/if it is returned to the client? (That's an honest\nquestion --- I don't know how MySQL implements this.)\n\nIIRC, in our past threads on this topic, it was suggested that if you\ncan tolerate not-necessarily-up-to-date results, you should be doing\nthis sort of caching on the client side and not in the DB server at all.\nI wouldn't try that in a true \"client\" scenario, but when the DB client\nis application-server middleware, it would make some sense to cache in\nthe application server.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Tue, 28 Sep 2004 21:36:43 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Mon, Sep 27, 2004 at 09:30:31PM +0100, Matt Clark wrote:\n> It's certainly the case that the typical web app (which, along with \n> warehouses, seems to be one half of the needy apps), could probably do \n> worse than use pooling as well. I'm not well up enough on pooling to \n> know how bulletproof it is though, which is why I included it in my list \n> of things that make me go 'hmm....'. It would be really nice not to \n> have to take both things together.\n \nIf you're not using a connection pool of some kind then you might as\nwell forget query plan caching, because your connect overhead will swamp\nthe planning cost. This does not mean you have to use something like\npgpool (which makes some rather questionable claims IMO); any decent web\napplication language/environment will support connection pooling.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Thu, 30 Sep 2004 17:11:07 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\n>If you're not using a connection pool of some kind then you might as\n>well forget query plan caching, because your connect overhead will swamp\n>the planning cost. This does not mean you have to use something like\n>pgpool (which makes some rather questionable claims IMO); any decent web\n>application language/environment will support connection pooling.\n> \n>\nHmm, a question of definition - there's a difference between a pool and \na persistent connection. Pretty much all web apps have one connection \nper process, which is persistent (i.e. not dropped and remade for each \nrequest), but not shared between processes, therefore not pooled.\n",
"msg_date": "Fri, 01 Oct 2004 06:43:42 +0100",
"msg_from": "Matt Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Fri, Oct 01, 2004 at 06:43:42AM +0100, Matt Clark wrote:\n> \n> >If you're not using a connection pool of some kind then you might as\n> >well forget query plan caching, because your connect overhead will swamp\n> >the planning cost. This does not mean you have to use something like\n> >pgpool (which makes some rather questionable claims IMO); any decent web\n> >application language/environment will support connection pooling.\n> > \n> >\n> Hmm, a question of definition - there's a difference between a pool and \n> a persistent connection. Pretty much all web apps have one connection \n> per process, which is persistent (i.e. not dropped and remade for each \n> request), but not shared between processes, therefore not pooled.\n\nOK, that'd work too... the point is if you're re-connecting all the time\nit doesn't really matter what else you do for performance.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 1 Oct 2004 10:13:03 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "> OK, that'd work too... the point is if you're re-connecting \n> all the time it doesn't really matter what else you do for \n> performance.\n\nYeah, although there is the chap who was asking questions on the list\nrecently who had some very long-running code on his app servers, so was best\noff closing the connection because he had far too many postmaster processes\njust sitting there idle all the time!\n\nBut you're right, it's a killer usually.\n\nM \n\n",
"msg_date": "Fri, 1 Oct 2004 16:46:59 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "People:\n\nTransparent \"query caching\" is the \"industry standard\" for how these things \nare handled. However, Postgres' lack of this feature has made me consider \nother approaches, and I'm starting to wonder if the \"standard\" query caching \n-- where a materialized query result, or some reduction thereof, is cached in \ndatabase memory -- isn't the best way to cache things. I'm going to \nabbreviate it \"SQC\" for the rest of this e-mail.\n\nObviously, the draw of SQC is its transparency to developers. With it, the \nJava/Perl/PHP programmers and the DBA don't have to communicate at all -- you \nset it up, give it some RAM, and it \"just works\". As someone who frequently \nhas to consult based on limited knowledge, I can understand the appeal.\n\nHowever, one of the problems with SQC, aside from the ones already mentioned \nof stale data and/or cache-clearing, is that (at least in applications like \nMySQL's) it is indiscriminate and caches, at least breifly, unique queries as \nreadily as common ones. Possibly Oracle's implementation is more \nsophisticated; I've not had an opportunity. \n\nThe other half of that problem is that an entire query is cached, rather than \njust the relevant data to uniquely identify the request to the application. \nThis is bad in two respects; one that the entire query needs to be parsed to \nsee if a new query is materially equivalent, and that two materially \ndifferent queries which could utilize overlapping ranges of the same \nunderlying result set must instead cache their results seperately, eating up \nyet more memory.\n\nTo explain what I'm talking about, let me give you a counter-example of \nanother approach.\n\nI have a data-warehousing application with a web front-end. The data in the \napplication is quite extensive and complex, and only a summary is presented \nto the public users -- but that summary is a query involving about 30 lines \nand 16 joins. This summary information is available in 3 slightly different \nforms. Further, the client has indicated that an up to 1/2 hour delay in \ndata \"freshness\" is acceptable.\n\nThe first step is forcing that \"materialized\" view of the data into memory. \nRight now I'm working on a reliable way to do that without using Memcached, \nwhich won't install on our Solaris servers. Temporary tables have the \nannoying property of being per-connection, which doesn't work in a pool of 60 \nconnections.\n\nThe second step, which I completed first due to the lack of technical \nobstacles, is to replace all queries against this data with calls to a \nSet-Returning Function (SRF). This allowed me to re-direct where the data \nwas coming from -- presumably the same thing could be done through RULES, but \nit would have been considerably harder to implement.\n\nThe first thing the SRF does is check the criteria passed to it against a set \nof cached (in a table) criteria with that user's permission level which is < \n1/2 hour old. If the same criteria are found, then the SRF is returned a \nset of row identifiers for the materialized view (MV), and looks up the rows \nin the MV and returns those to the web client. 
\n\nIf no identical set of criteria are found, then the query is run to get a set \nof identifiers which are then cached, and the SRF returns the queried rows.\n\nOnce I surmount the problem of storing all the caching information in \nprotected memory, the advantages of this approach over SQC are several:\n\n1) The materialized data is available in 3 different forms; a list, a detail \nview, and a spreadsheet. Each form as somewhat different columns and \ndifferent rules about ordering, which would likely confuse an SQC planner. \nIn this implementation, all 3 forms are able to share the same cache.\n\n2) The application is comparing only sets of unambguous criteria rather than \nlong queries which would need to be compared in planner form in order to \ndetermine query equivalence. \n\n3) With the identifier sets, we are able to cache other information as well, \nsuch as a count of rows, further limiting the number of queries we must run.\n\n4) This approach is ideally suited to the pagination and re-sorting common to \na web result set. As only the identifiers are cached, the results can be \nre-sorted and broken in to pages after the cache read, a fast, all-in-memory \noperation.\n\nIn conclusion, what I'm saying is that while forms of transparent query \ncaching (plan, materialized or whatever) may be desirable for other reasons, \nit's quite possible to acheive a superior level of \"query caching\" through \ntight integration with the front-end application. \n\nIf people are interested in this, I'd love to see some suggestions on ways to \nforce the materialized view into dedicated memory.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 1 Oct 2004 10:10:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
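Not the poster's actual code, but the shape of the SRF-plus-criteria-cache approach might be sketched roughly as follows. All names are invented, a trivial WHERE clause stands in for the real 16-join summary query, and RETURN QUERY assumes a later PostgreSQL release:

    -- The materialized summary and the criteria cache (much simplified).
    CREATE TABLE mv_summary (id integer PRIMARY KEY, payload text);
    CREATE TABLE criteria_cache (
        criteria  text,
        cached_at timestamptz DEFAULT now(),
        id        integer
    );

    CREATE OR REPLACE FUNCTION summary_rows(p_criteria text)
    RETURNS SETOF mv_summary AS $$
    BEGIN
        -- Cache miss or stale entry: recompute the id set and remember it.
        IF NOT EXISTS (SELECT 1 FROM criteria_cache
                        WHERE criteria = p_criteria
                          AND cached_at > now() - interval '30 minutes') THEN
            DELETE FROM criteria_cache WHERE criteria = p_criteria;
            INSERT INTO criteria_cache (criteria, id)
                SELECT p_criteria, id
                  FROM mv_summary
                 WHERE payload LIKE p_criteria;  -- stand-in for the expensive query
        END IF;
        -- Serve the rows through the cached id list; ordering, pagination
        -- and counts can then be done cheaply over this set.
        RETURN QUERY
            SELECT m.*
              FROM mv_summary m
              JOIN criteria_cache c ON c.id = m.id
             WHERE c.criteria = p_criteria;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;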
{
"msg_contents": "I'm not sure I understand your req fully. If the same request is repeatedly\ndone with same parameters, you could implement a proxy web server with a\ncroned script to purge stale pages. If there is substantially the same data\nbeing summarized, doing your own summary tables works; if accessed enough,\nthey're in memory. I interleaved some notes into your posting.\n\n----- Original Message ----- \n\nFrom: \"Josh Berkus\" <[email protected]>\n\nTo: \"Postgresql Performance\" <[email protected]>\n\nSent: Friday, October 01, 2004 1:10 PM\n\nSubject: Re: [PERFORM] Caching of Queries\n\n\n\n\n> People:\n>\n> Transparent \"query caching\" is the \"industry standard\" for how these\nthings\n> are handled. However, Postgres' lack of this feature has made me\nconsider\n> other approaches, and I'm starting to wonder if the \"standard\" query\ncaching\n> -- where a materialized query result, or some reduction thereof, is cached\nin\n> database memory -- isn't the best way to cache things. I'm going to\n> abbreviate it \"SQC\" for the rest of this e-mail.\n>\n> Obviously, the draw of SQC is its transparency to developers. With it,\nthe\n> Java/Perl/PHP programmers and the DBA don't have to communicate at all -- \nyou\n> set it up, give it some RAM, and it \"just works\". As someone who\nfrequently\n> has to consult based on limited knowledge, I can understand the appeal.\n\nMy sense is that pg is currently unique among popular dbmses in having the\nmajority of applications being homegrown (a chicken / egg / advocacy issue -\nif I install a CMS, I'm not the DBA or the PHP programmer - and I don't want\nto change the code; we'll see more about this when native WinPg happens).\n\n\n\n\n>\n> However, one of the problems with SQC, aside from the ones already\nmentioned\n> of stale data and/or cache-clearing, is that (at least in applications\nlike\n> MySQL's) it is indiscriminate and caches, at least breifly, unique queries\nas\n> readily as common ones. Possibly Oracle's implementation is more\n> sophisticated; I've not had an opportunity.\n\nI'm not sure I agree here. Stale data and caching choice are\noptimizer/buffer manager choices and implementation can decide whether to\nallow stale data. These are design choices involving development effort and\nchoices of where to spend server cycles and memory. All buffering choices\ncache unique objects, I'm not sure why this is bad (but sensing you want\ncontrol of the choices). FWIW, this is my impression of other dbmses.\n\nIn MySQL, a global cache can be specified with size and globally, locally,\nor through statement hints in queries to suggest caching results. I don't\nbelieve that these could be used as common subexpressions (with an exception\nof MERGE table component results). The optimizer knows nothing about the\ncached results - SQL select statements are hashed, and can be replaced by\nthe the cached statement/results on a match.\n\nIn DB2 and Oracle result sets are not cached. They have rich sets of\nmaterialized view features (that match your requirements). They allow a\nmaterialized view to be synchronous with table updates or asynchronous.\nSynchronous is often an unrealistic option, and asynchronous materialized\nviews are refreshed at a specified schedule. The optimizers allow \"query\nrewrite\" (in Oracle it is a session option) so one can connect to the\ndatabase and specify that the optimizer is allowed to replace subexpressions\nwith data from (possibly stale) materialized views. 
SQL Server 2K has more\nrestrictive synchronous MVs, but I've never used them.\n\nSo, in your example use in Oracle, you would need to define appropriate MVs\nwith a � hour refresh frequency, and hope that the planner would use them in\nyour queries. The only change in the app is on connection you would allow\nuse of asynchronous stale data.\n\nYou're suggesting an alternative involving identifying common, but\nexpensive, subexpressions and generating MVs for them. This is a pretty\nsophisticated undertaking, and probably requires some theory research to\ndetermine if it's viable.\n\n\n>\n> The other half of that problem is that an entire query is cached, rather\nthan\n> just the relevant data to uniquely identify the request to the\napplication.\n> This is bad in two respects; one that the entire query needs to be parsed\nto\n> see if a new query is materially equivalent, and that two materially\n> different queries which could utilize overlapping ranges of the same\n> underlying result set must instead cache their results separately, eating\nup\n> yet more memory.\n\nThere are two separate issues. The cost of parse/optimization and the cost\nof results retrieval. Other dbmses hash statement text. This is a good\nthing, and probably 3 orders of magnitude faster than parse and\noptimization. (Oracle also has options to replace literals with parameters\nand match parse trees instead of text, expecting parse costs to be less than\nplanning costs.) MySQL on a match simply returns the result set. Oracle and\nDB2 attempt to rewrite queries to use the DBA selected extracts. The MySQL\napproach seems to be almost what you're describing: all it needs is the\nstatement hash, statement, and result set. The rest of your wish list,\nidentifying and caching data to satisfy multiple request is what query\nrewrite does - as long as you've created the appropriate MV.\n\n\n\n\n>\n> To explain what I'm talking about, let me give you a counter-example of\n> another approach.\n>\n> I have a data-warehousing application with a web front-end. The data in\nthe\n> application is quite extensive and complex, and only a summary is\npresented\n> to the public users -- but that summary is a query involving about 30\nlines\n> and 16 joins. This summary information is available in 3 slightly\ndifferent\n> forms. Further, the client has indicated that an up to 1/2 hour delay in\n> data \"freshness\" is acceptable.\n\nThis sounds like a requirement for a summary table - if the data can be\nsummarized appropriately, and a regular refresh process.\n\n\n\n\n>\n> The first step is forcing that \"materialized\" view of the data into\nmemory.\n> Right now I'm working on a reliable way to do that without using\nMemcached,\n> which won't install on our Solaris servers. Temporary tables have the\n> annoying property of being per-connection, which doesn't work in a pool of\n60\n> connections.\n\n\n\n\nI'm not clear on your desire to keep the data in memory. If it is because of\nI/O cost of the summary table, database buffers should be caching it. If you\nwant to store calculated results, again - why not use a summary table? The\ncon of summary tables is the customization / denormalization of the data,\nand the need to have programs use them instead of source data - you seem to\nbe willing to do each of these things.\n\n\n>\n> The second step, which I completed first due to the lack of technical\n> obstacles, is to replace all queries against this data with calls to a\n> Set-Returning Function (SRF). 
This allowed me to re-direct where the\ndata\n> was coming from -- presumably the same thing could be done through RULES,\nbut\n> it would have been considerably harder to implement.\n>\n> The first thing the SRF does is check the criteria passed to it against a\nset\n> of cached (in a table) criteria with that user's permission level which is\n<\n> 1/2 hour old. If the same criteria are found, then the SRF is returned a\n> set of row identifiers for the materialized view (MV), and looks up the\nrows\n> in the MV and returns those to the web client.\n>\n> If no identical set of criteria are found, then the query is run to get a\nset\n> of identifiers which are then cached, and the SRF returns the queried\nrows.\n>\n> Once I surmount the problem of storing all the caching information in\n> protected memory, the advantages of this approach over SQC are several:\n\nYou are creating summary data on demand. I have had problems with this\napproach, mostly because it tends to cost more than doing it in batch and\nadds latency (unfortunately adding to peak load - so I tend to prefer\nperiodic extract/summarize programs). In either approach why don't you want\npg to cache the data? The result also feels more like persisted object data\nthan typical rdbms processing.\n\n>\n> 1) The materialized data is available in 3 different forms; a list, a\ndetail\n> view, and a spreadsheet. Each form as somewhat different columns and\n> different rules about ordering, which would likely confuse an SQC planner.\n> In this implementation, all 3 forms are able to share the same cache.\n\n\n\n\nI'm not clear what the issue here is. Are you summarizing data differently\nor using some business rules to identify orthogonal queries?\n\n\n>\n> 2) The application is comparing only sets of unambiguous criteria rather\nthan\n> long queries which would need to be compared in planner form in order to\n> determine query equivalence.\n\n>\n> 3) With the identifier sets, we are able to cache other information as\nwell,\n> such as a count of rows, further limiting the number of queries we must\nrun.\n>\n> 4) This approach is ideally suited to the pagination and re-sorting common\nto\n> a web result set. As only the identifiers are cached, the results can be\n> re-sorted and broken in to pages after the cache read, a fast,\nall-in-memory\n> operation.\n>\n> In conclusion, what I'm saying is that while forms of transparent query\n> caching (plan, materialized or whatever) may be desirable for other\nreasons,\n> it's quite possible to achieve a superior level of \"query caching\" through\n> tight integration with the front-end application.\n\nThis looks like you're building an object store to support a custom app that\nperiodically or on demand pulls rdbms data mart data. The description of the\nuse seems either static, suggesting summary tables or dynamic, suggesting\nthat you're mimicking some function of a periodically extracted OLAP cube.\n\n\n\n\n>\n> If people are interested in this, I'd love to see some suggestions on ways\nto\n> force the materialized view into dedicated memory.\n\n\n\n\nCan you identify your objections to summarizing the data and letting pg\nbuffer it?\n\n/Aaron\n",
"msg_date": "Fri, 1 Oct 2004 23:13:28 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
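The batch alternative argued for here - a periodically refreshed summary table rather than on-demand caching - is simple to sketch (names invented; the refresh interval is whatever staleness the application tolerates):

    CREATE TABLE sales_summary (
        region    text,
        sale_day  date,
        total     numeric,
        refreshed timestamptz
    );

    CREATE OR REPLACE FUNCTION refresh_sales_summary() RETURNS void AS $$
    BEGIN
        DELETE FROM sales_summary;
        INSERT INTO sales_summary (region, sale_day, total, refreshed)
            SELECT region, sale_date, sum(amount), now()
              FROM sales                 -- stand-in for the expensive source query
             GROUP BY region, sale_date;
    END;
    $$ LANGUAGE plpgsql;

    -- e.g. every 30 minutes from cron:
    --   psql -c "SELECT refresh_sales_summary();" mydb

Readers then hit sales_summary directly and ordinary buffer caching keeps it hot, which is the "let pg buffer it" argument being made above.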
{
"msg_contents": "Aaron,\n\n> I'm not sure I understand your req fully.\n\nI'm not surprised. I got wrapped up in an overly involved example and \ncompletely left off the points I was illustrating. So here's the points, in \nbrief:\n\n1) Query caching is not a single problem, but rather several different \nproblems requiring several different solutions.\n\n2) Of these several different solutions, any particular query result caching \nimplementation (but particularly MySQL's) is rather limited in its \napplicability, partly due to the tradeoffs required. Per your explanation, \nOracle has improved this by offering a number of configurable options.\n\n3) Certain other caching problems would be solved in part by the ability to \nconstruct \"in-memory\" tables which would be non-durable and protected from \ncache-flushing. This is what I'm interested in chatting about.\n\nBTW, I AM using a summary table.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 2 Oct 2004 13:04:21 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Josh Berkus wrote:\n\n> 1) Query caching is not a single problem, but rather several different \n> problems requiring several different solutions.\n> \n> 2) Of these several different solutions, any particular query result caching \n> implementation (but particularly MySQL's) is rather limited in its \n> applicability, partly due to the tradeoffs required. Per your explanation, \n> Oracle has improved this by offering a number of configurable options.\n> \n> 3) Certain other caching problems would be solved in part by the ability to \n> construct \"in-memory\" tables which would be non-durable and protected from \n> cache-flushing. This is what I'm interested in chatting about.\n\nJust my 2 cents on this whole issue. I would lean towards having result \ncaching in pgpool versus the main backend. I want every ounce of memory \non a database server devoted to the database. Caching results would \ndouble the effect of cache flushing ... ie, now both the results and the \npages used to build the results are in memory pushing out other stuff to \ndisk that may be just as important.\n\nIf it was in pgpool or something similar, I could devote a separate \nmachine just for caching results leaving the db server untouched.\n",
"msg_date": "Sat, 02 Oct 2004 13:47:26 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "William,\n\n> Just my 2 cents on this whole issue. I would lean towards having result\n> caching in pgpool versus the main backend. I want every ounce of memory\n> on a database server devoted to the database. Caching results would\n> double the effect of cache flushing ... ie, now both the results and the\n> pages used to build the results are in memory pushing out other stuff to\n> disk that may be just as important.\n>\n> If it was in pgpool or something similar, I could devote a separate\n> machine just for caching results leaving the db server untouched.\n\nOddly, Joe Conway just mentioned the same idea to me.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 2 Oct 2004 15:50:58 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\n> pgpool (which makes some rather questionable claims IMO); any decent web\n> application language/environment will support connection pooling.\n\n\tThat's why it should not be tied to something specific as pgpool.\n\n\tIf you want performance, which is the case here, usually you have a \nwebserver serving static files, and an application server serving dynamic \npages.\n\tThis is not necessarily a huge application server, it can be as simple as \nan Apache instance serving static files, with a special path mod_proxy'ed \nto another instance of apache acting as an application server.\n\tIMHO this is a nice way to do it, because you have a light weight static \nfiles server which can spawn many processes without using precious \nresources like memory and postgres connections, and a specialized server \nwhich has a lot less processes, each one having more size, a db \nconnection, etc. The connexions are permanent, of course, so there is no \nconnection overhead. The proxy has an extra advantage buffering the data \n from the \"app server\" and sending it back slowly to the client, so the app \nserver can then very quickly process the next request instead of hogging a \ndb connection while the html is slowly trickled back to the client.\n\tIMHO the standard PHP way of doing things (just one server) is wrong \nbecause every server process, even if it's serving static files, hogs a \nconnection and thus needs an extra layer for pooling.\n\tThus, I see query result caching as a way to pushing further \narchitectures which are already optimized for performance, not as a \nband-aid for poor design solutions like the one-apache server with pooling.\n\n\tNow, a proposition :\n\n\tHere is where we are now, a typical slow query :\n\tPREPARE myquery(text,integer)\n\tEXECUTE myquery('john',2)\n\n\tMy proposition :\n\tPREPARE myquery(text,integer)\n\tPLANNED USING ('john',2)\n\tCACHED IF $1 IS NOT NULL AND $2 IS NOT NULL\n\t\tDEPENDS ON $1, $2\n\t\tMAXIMUM CACHE TIME '5 minute'::interval\n\t\tMINIMUM CACHE TIME '1 minute'::interval\n\t\tMAXIMUM CACHE SIZE 2000000\n\tAS SELECT count(*) as number FROM mytable WHERE myname=$2 AND myfield>=$1;\n\n\tEXECUTE myquery('john',2)\n\n\tExplainations :\n\t-----------\n\tPLANNED USING ('john',2)\n\tTells the planner to compute the stored query plan using the given \nparameters. This is independent from caching but could be a nice feature \nas it would avoid the possibility of storing a bad query plan.\n\n\t-----------\n\tCACHED IF $1 IS NOT NULL AND $2 IS NOT NULL\n\tSpecifies that the result is to be cached. There is an optional condition \n(here, IF ...) telling postgres of when and where it should cache, or not \ncache. It could be useful to avoid wasting cache space.\n\t-----------\n\t\tDEPENDS ON $1, $2\n\tDefines the cache key. I don't know if this is useful, as the query \nparameters make a pretty obvious cache key so why repeat them. It could be \nused to add other data as a cache key, like :\n\t\tDEPENDS ON (SELECT somefunction($1))\n\tAlso a syntax for specifying which tables should be watched for updates, \nand which should be ignored, could be interesting.\n\t-----------\n\t\tMAXIMUM CACHE TIME '5 minute'::interval\n\tPretty obvious.\n\t-----------\n\t\tMINIMUM CACHE TIME '1 minute'::interval\n\tThis query is a count and I want a fast but imprecise count. Thus, I \nspecify a minimum cache time of 1 minute, meaning that the result will \nstay in the cache even if the tables change. 
This is dangerous, so I'd \nsuggest the following :\n\n\t\tMINIMUM CACHE TIME CASE WHEN result.number>10 THEN '1 minute'::interval \nELSE '5 second'::interval\n\n\tThus the cache time is an expression ; it is evaluated after performed \nthe query. There needs to be a way to access the 'count' result, which I \ncalled 'result.number' because of the SELECT count() as number.\n\tThe result could also be used in the CACHE IF.\n\n\tThe idea here is that the count will vary over time, but we accept some \nimprecision to gain speed. SWho cares if there are 225 or 227 messages in \na forum thread counter anyway ? However, if there are 2 messages, first \ncaching the query is less necessary because it's fast, and second a \nvariation in the count will be much easier to spot, thus we specify a \nshorter cache duration for small counts and a longer duration for large \ncounts.\n\n\tFor queries returning result sets, this is not usable of course, but a \nspecial feature for speeding count() queries would be welcome !\n\n\t-----------\n\t\tMAXIMUM CACHE SIZE 2000000\n\tPretty obvious. Size in bytes.\n\n\tFor queries returning several rows, MIN/MAX on result rows could be \nuseful also :\n\t\tMAXIMUM RESULT ROWS nnn\n\tOr maybe :\n\t\tCACHE IF (select count(*) from result) > nnn\n\n\n\n\tThinking about it, using prepared queries seems a bad idea ; maybe the \ncache should act on the result of functions. This would force the \napplication programmers to put the queries they want to optimize in \nfunctions, but as function code is shared between connections and prepared \nstatements are not, maybe it would be easier to implement, and would \nshield against some subtle bugs, like PREPARing the different queries \nunder the same name...\n\n\tIn that case the cache manager would also know if the function returns \nSETOF or not, which would be interesting.\n\n\tWhat do you think of these optimizations ?\n\n\tRight now, a count() query cache could be implemented as a simple plsql \nfunction with a table as the cache, by the way.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sun, 03 Oct 2004 15:20:49 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
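For reference, a minimal sketch of the idea mentioned at the end of the message above, i.e. a count() query cache done today as a plain PL/pgSQL function with an ordinary table as the cache. The table name query_cache, the key format, the five-minute expiry and the mytable/myname/myfield columns are illustrative assumptions, not an existing PostgreSQL feature, and the sketch ignores concurrent writers:

CREATE TABLE query_cache (
    cache_key text PRIMARY KEY,
    result    bigint NOT NULL,
    cached_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION cached_count(text, integer) RETURNS bigint AS '
DECLARE
    k text := $1 || ''/'' || $2::text;   -- cache key built from the parameters
    r bigint;
BEGIN
    -- serve from the cache while the entry is younger than five minutes
    SELECT result INTO r
      FROM query_cache
     WHERE cache_key = k
       AND cached_at > now() - interval ''5 minutes'';
    IF FOUND THEN
        RETURN r;
    END IF;

    -- otherwise run the real count and refresh the cache entry
    SELECT count(*) INTO r FROM mytable WHERE myname = $1 AND myfield >= $2;
    DELETE FROM query_cache WHERE cache_key = k;
    INSERT INTO query_cache (cache_key, result) VALUES (k, r);
    RETURN r;
END;
' LANGUAGE plpgsql;

-- usage: SELECT cached_count('john', 2);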
{
"msg_contents": "\n> 1) The materialized data is available in 3 different forms; a list, a \n> detail\n> view, and a spreadsheet. Each form as somewhat different columns and\n> different rules about ordering, which would likely confuse an SQC \n> planner.\n> In this implementation, all 3 forms are able to share the same cache.\n\n\tSee my proposal to cache function results.\n\tYou can create a cached function and :\n\n\tSELECT your rows FROM cached_function(parameters) WHERE ... ORDER BY... \nGROUP BY...\n\n\twill only fetch the function result from the cache, and then the only \nadditional costs are the ORDER and GROUP BY... the query parsing is very \nsimple, it's just a select, and a \"cached function scan\"\n\n\tI think caching can be made much more powerful if it is made usable like \nthis. I mean, not only cache a query and its result, but being able to use \ncached queries internally like this and manipulaing them, adds value to \nthe cached data and allows storing less data in the cache because \nduplicates are avoided. Thus we could use cached results in CHECK() \nconditions, inside plsql functions, anywhere...\n",
"msg_date": "Sun, 03 Oct 2004 15:27:46 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\n> If it was in pgpool or something similar, I could devote a separate \n> machine just for caching results leaving the db server untouched.\n\n\tBUT you would be limited to caching complete queries. There is a more \nefficient strategy...\n\n\n\n",
"msg_date": "Sun, 03 Oct 2004 15:30:05 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "> >>More to the point though, I think this is a feature that really really \n> >>should be in the DB, because then it's trivial for people to use. \n> >> \n> >>\n> >\n> >How does putting it into PGPool make it any less trivial for people to\n> >use?\n> >\n> The answers are at http://www2b.biglobe.ne.jp/~caco/pgpool/index-e.html \n> . Specifically, it's a separate application that needs configuration, \n> the homepage has no real discussion of the potential pitfalls of pooling \n> and what this implementation does to get around them, you get the idea. \n\nI don't know what you are exactly referring to in above URL when you\nare talking about \"potential pitfalls of pooling\". Please explain\nmore.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 04 Oct 2004 00:28:23 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "On Fri, Oct 01, 2004 at 10:10:40AM -0700, Josh Berkus wrote:\n> Transparent \"query caching\" is the \"industry standard\" for how these things \n> are handled. However, Postgres' lack of this feature has made me consider \n> other approaches, and I'm starting to wonder if the \"standard\" query caching \n> -- where a materialized query result, or some reduction thereof, is cached in \n> database memory -- isn't the best way to cache things. I'm going to \n> abbreviate it \"SQC\" for the rest of this e-mail.\n \nNot to quibble, but are you sure that's the standard? Oracle and DB2\ndon't do this, and I didn't think MSSQL did either. What they do do is\ncache query *plans*. This is a *huge* deal in Oracle; search\nhttp://asktom.oracle.com for 'soft parse'.\n\nIn any case, I think a means of marking some specific queries as being\ncachable is an excellent idea; perfect for 'static data' scenarios. What\nI don't know is how much will be saved.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 4 Oct 2004 14:18:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "\n> I don't know what you are exactly referring to in above URL \n> when you are talking about \"potential pitfalls of pooling\". \n> Please explain more.\n\nSorry, I wasn't implying that pgpool doesn't deal with the issues, just that\nsome people aren't necessarily aware of them up front. For instance, pgpool\ndoes an 'abort transaction' and a 'reset all' in lieu of a full reconnect\n(of course, since a full reconnect is exactly what we are trying to avoid).\nIs this is enough to guarantee that a given pooled connection behaves\nexactly as a non-pooled connection would from a client perspective? For\ninstance, temporary tables are usually dropped at the end of a session, so a\nclient (badly coded perhaps) that does not already use persistent\nconnections might be confused when the sequence 'connect, create temp table\nfoo ..., disconnect, connect, create temp table foo ...' results in the\nerror 'Relation 'foo' already exists'.\n\n\n",
"msg_date": "Tue, 5 Oct 2004 16:35:28 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
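A concrete version of the temp-table pitfall described in the message above, written as the SQL a badly coded client effectively runs when the pool hands the same backend out twice:

-- first logical session over the pooled connection
CREATE TEMP TABLE foo (id integer);
-- the client "disconnects", but the pool keeps the backend alive

-- second logical session, which is really the same backend
CREATE TEMP TABLE foo (id integer);
-- ERROR:  relation "foo" already exists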
{
"msg_contents": "> > I don't know what you are exactly referring to in above URL \n> > when you are talking about \"potential pitfalls of pooling\". \n> > Please explain more.\n> \n> Sorry, I wasn't implying that pgpool doesn't deal with the issues, just that\n> some people aren't necessarily aware of them up front. For instance, pgpool\n> does an 'abort transaction' and a 'reset all' in lieu of a full reconnect\n> (of course, since a full reconnect is exactly what we are trying to avoid).\n> Is this is enough to guarantee that a given pooled connection behaves\n> exactly as a non-pooled connection would from a client perspective? For\n> instance, temporary tables are usually dropped at the end of a session, so a\n> client (badly coded perhaps) that does not already use persistent\n> connections might be confused when the sequence 'connect, create temp table\n> foo ..., disconnect, connect, create temp table foo ...' results in the\n> error 'Relation 'foo' already exists'.\n\nFirst, it's not a particular problem with pgpool. As far as I know any\nconnection pool solution has exactly the same problem. Second, it's\neasy to fix if PostgreSQL provides a functionarity such as:\"drop all\ntemporary tables if any\". I think we should implement it if we agree\nthat connection pooling should be implemented outside the PostgreSQL\nengine itself. I think cores agree with this.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 07 Oct 2004 10:08:47 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> First, it's not a particular problem with pgpool. As far as I know any\n> connection pool solution has exactly the same problem. Second, it's\n> easy to fix if PostgreSQL provides a functionarity such as:\"drop all\n> temporary tables if any\".\n\nI don't like that definition exactly --- it would mean that every time\nwe add more backend-local state, we expect client drivers to know to\nissue the right incantation to reset that kind of state.\n\nI'm thinking we need to invent a command like \"RESET CONNECTION\" that\nresets GUC variables, drops temp tables, forgets active NOTIFYs, and\ngenerally does whatever else needs to be done to make the session state\nappear virgin. When we add more such state, we can fix it inside the\nbackend without bothering clients.\n\nI now realize that our \"RESET ALL\" command for GUC variables was not\nfully thought out. We could possibly redefine it as doing the above,\nbut that might break some applications ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Oct 2004 23:12:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > First, it's not a particular problem with pgpool. As far as I know any\n> > connection pool solution has exactly the same problem. Second, it's\n> > easy to fix if PostgreSQL provides a functionarity such as:\"drop all\n> > temporary tables if any\".\n> \n> I don't like that definition exactly --- it would mean that every time\n> we add more backend-local state, we expect client drivers to know to\n> issue the right incantation to reset that kind of state.\n> \n> I'm thinking we need to invent a command like \"RESET CONNECTION\" that\n> resets GUC variables, drops temp tables, forgets active NOTIFYs, and\n> generally does whatever else needs to be done to make the session state\n> appear virgin. When we add more such state, we can fix it inside the\n> backend without bothering clients.\n\nGreat. It's much better than I propose.\n\n> I now realize that our \"RESET ALL\" command for GUC variables was not\n> fully thought out. We could possibly redefine it as doing the above,\n> but that might break some applications ...\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Thu, 07 Oct 2004 16:18:42 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries "
},
{
"msg_contents": "\nAdded to TODO:\n\n* Add RESET CONNECTION command to reset all session state\n\n This would include resetting of all variables (RESET ALL), dropping of\n all temporary tables, removal of any NOTIFYs, etc. This could be used\n for connection pooling. We could also change RESET ALL to have this\n functionality.\n\n\n---------------------------------------------------------------------------\n\nTatsuo Ishii wrote:\n> > Tatsuo Ishii <[email protected]> writes:\n> > > First, it's not a particular problem with pgpool. As far as I know any\n> > > connection pool solution has exactly the same problem. Second, it's\n> > > easy to fix if PostgreSQL provides a functionarity such as:\"drop all\n> > > temporary tables if any\".\n> > \n> > I don't like that definition exactly --- it would mean that every time\n> > we add more backend-local state, we expect client drivers to know to\n> > issue the right incantation to reset that kind of state.\n> > \n> > I'm thinking we need to invent a command like \"RESET CONNECTION\" that\n> > resets GUC variables, drops temp tables, forgets active NOTIFYs, and\n> > generally does whatever else needs to be done to make the session state\n> > appear virgin. When we add more such state, we can fix it inside the\n> > backend without bothering clients.\n> \n> Great. It's much better than I propose.\n> \n> > I now realize that our \"RESET ALL\" command for GUC variables was not\n> > fully thought out. We could possibly redefine it as doing the above,\n> > but that might break some applications ...\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 12 Oct 2004 21:02:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
},
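As a side note on the TODO item above: until such a command exists, a pool is limited to something like the sequence below at connection check-in, which is what pgpool was described as doing earlier in the thread. The comments mark what it does and does not cover; later PostgreSQL releases added DISCARD ALL for much the same purpose, but that is outside this thread.

-- issued by the pooler when a client releases a connection
ABORT;      -- roll back any transaction left open (only a warning if none is open)
RESET ALL;  -- put GUC variables back to their defaults
-- temporary tables, prepared statements and pending NOTIFYs still survive,
-- which is exactly the gap the proposed RESET CONNECTION command would close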
{
"msg_contents": "\n>> I've looked at PREPARE, but apparently it only lasts per-session - \n>> that's\n>> worthless in our case (web based service, one connection per \n>> data-requiring\n>> connection).\n\n\tYou don't use persistent connections ???????????\n\tYour problem might simply be the connection time overhead (also including \na few TCP roudtrips).\n",
"msg_date": "Fri, 24 Dec 2004 03:19:27 +0100",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching of Queries"
}
] |
[
{
"msg_contents": "\n\n\n\nGentlefolk,\n\nI'm not sure if this is the proper forum for this question, and it might\nhave been answered in a previous thread, but I'm new to PostgreSQL and the\nresearch I did in the archives did not turn up anything addressing this\nissue. Please direct me to the proper forum is this is not the correct\nvenue.\n\nEnvironment: Red Hat Enterprise Linux 3 Workstation, PostgreSQL V7.3.6\n(stock with the RHEL distribution)\n\nThe two tables I used in the example are tbl_device and tbl_sad_event:\n\nvsa=# \\d vsa.tbl_device;\n Table \"vsa.tbl_device\"\n Column | Type |\nModifiers\n----------------+--------------------------+---------------------------------------------------------\n id | integer | not null default\nnextval('vsa.tbl_device_id_seq'::text)\n name | character varying(100) | not null\n account_id | bigint | not null\n vss_site_id | bigint | not null\n org_site_id | bigint | not null default 0\n device_type_id | integer | not null default 1\n os_family_id | integer | not null default 0\n status_id | integer | not null default 0\n timezone | character varying(80) |\n clientkey | character varying(2048) | not null\n record_created | timestamp with time zone | default now()\nIndexes: pk_tbl_device primary key btree (id),\n idx_d_uniq_name_site_account_key unique btree (name, vss_site_id,\naccount_id, clientkey),\n tbl_device_clientkey_key unique btree (clientkey),\n idx_d_account_id btree (account_id),\n idx_d_account_site_name btree (account_id, vss_site_id, name),\n idx_d_device_type_id btree (device_type_id),\n idx_d_name btree (name),\n idx_d_org_site_id btree (org_site_id),\n idx_d_os_family_id btree (os_family_id),\n idx_d_status_id btree (status_id),\n idx_d_vss_site_id btree (vss_site_id)\nForeign Key constraints: fk_d_va FOREIGN KEY (account_id) REFERENCES\nvsa.tbl_vsa_account(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_vs FOREIGN KEY (vss_site_id) REFERENCES\nvsa.tbl_vss_site(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_dof FOREIGN KEY (os_family_id) REFERENCES\nvsa.enum_device_os_family(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_dsc FOREIGN KEY (status_id) REFERENCES\nvsa.enum_device_status_code(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_dt FOREIGN KEY (device_type_id) REFERENCES\nvsa.enum_device_type(id) ON UPDATE NO ACTION ON DELETE NO ACTION\nTriggers: trg_clean_device_name\n\nvsa=# \\d vsa.tbl_sad_event\n Table \"vsa.tbl_sad_event\"\n Column | Type |\nModifiers\n----------------+-----------------------------+------------------------------------------------------------\n id | integer | not null default\nnextval('vsa.tbl_sad_event_id_seq'::text)\n device_id | bigint | not null\n log_type | integer |\n severity | character varying(20) |\n time_logged | timestamp without time zone |\n user_name | character varying(50) |\n remote_user | character varying(50) |\n remote_host | character varying(100) |\n source_tag | character varying(30) |\n event_code | character varying(50) |\n type | character varying(6) |\n record_created | timestamp with time zone | default now()\nIndexes: pk_tbl_sad_event primary key btree (id),\n idx_se_dev_time_type btree (device_id, time_logged, \"type\"),\n idx_se_device_id btree (device_id),\n idx_se_time_logged btree (time_logged),\n idx_se_type btree (\"type\"),\n sjr_se_id_time_type btree (device_id, time_logged, \"type\")\nForeign Key constraints: fk_sade_d FOREIGN KEY (device_id) REFERENCES\nvsa.tbl_device(id) ON UPDATE NO ACTION ON DELETE CASCADE\n\n\nHere is my original query, and 
the query plan generated by the planner:\n\nvsa=# explain\nSELECT dev.name, dev.vss_site_id, tbl.log_type, tbl.severity, tbl.count\nFROM vsa.tbl_device AS dev\nLEFT OUTER JOIN\n (SELECT stbl.device_id, stbl.log_type, stbl.severity, count(*)\n FROM vsa.dtbl_logged_event_20040922 AS stbl\n WHERE stbl.log_type IN (2, 3, 4, 5)\n GROUP BY stbl.device_id, stbl.log_type, stbl.severity) AS tbl\n ON (dev.id=tbl.device_id)\nORDER BY dev.name;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=40893.18..40960.93 rows=27100 width=79)\n Sort Key: dev.name\n -> Merge Join (cost=38417.13..38897.77 rows=27100 width=79)\n Merge Cond: (\"outer\".id = \"inner\".device_id)\n -> Sort (cost=869.52..872.70 rows=1275 width=26)\n Sort Key: dev.id\n -> Seq Scan on tbl_device dev (cost=0.00..803.75 rows=1275 width=26)\n -> Sort (cost=37547.62..37615.37 rows=27100 width=26)\n Sort Key: tbl.device_id\n -> Subquery Scan tbl (cost=0.00..35552.21 rows=27100 width=26)\n -> Aggregate (cost=0.00..35552.21 rows=27100 width=26)\n -> Group (cost=0.00..34874.70 rows=271005 width=26)\n -> Index Scan using idx_le_id_type_severity_evtcode_20040922 on dtbl_logged_event_20040922 stbl\n(cost=0.00..32842.16 rows=271005 width=26)\n Filter: ((log_type = 2) OR (log_type = 3) OR (log_type = 4) OR (log_type = 5))\n(14 rows)\n\nTime: 1.43 ms\n\n\nLate in the development I realized that we had created an inconsistency in\nour design by having vsa.tbl_device.id defined as \"int\", and\nvsa.tbl_sad_event.device_id defined as \"bigint\". These two fields are used\nin the ON clause (ON (dev.id=tbl.device_id)), and my understanding is that\nthey should be of the same type cast. Trying to remedy this situation, I\nexplicitly tried casting vsa.tbl_sad_event.device_id as \"int\" (::int):\n\nvsa=# explain\nSELECT dev.name, dev.vss_site_id, tbl.log_type, tbl.severity, tbl.count\nFROM vsa.tbl_device AS dev\nLEFT OUTER JOIN\n (SELECT stbl.device_id, stbl.log_type, stbl.severity, count(*)\n FROM vsa.dtbl_logged_event_20040922 AS stbl\n WHERE stbl.log_type IN (2, 3, 4, 5) GROUP BY stbl.device_id,\nstbl.log_type, stbl.severity) AS tbl\n ON (dev.id=tbl.device_id::int)\nORDER BY dev.name;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..45848850.65 rows=27100 width=79)\n Join Filter: (\"outer\".id = (\"inner\".device_id)::integer)\n -> Index Scan using idx_d_name on tbl_device dev (cost=0.00..1490.19 rows=1275 width=26)\n -> Subquery Scan tbl (cost=0.00..35552.21 rows=27100 width=26)\n -> Aggregate (cost=0.00..35552.21 rows=27100 width=26)\n -> Group (cost=0.00..34874.70 rows=271005 width=26)\n -> Index Scan using idx_le_id_type_severity_evtcode_20040922 on dtbl_logged_event_20040922 stbl (cost=0.00..32842.16\nrows=271005 width=26)\n Filter: ((log_type = 2) OR (log_type = 3) OR (log_type = 4) OR (log_type = 5))\n(8 rows)\n\nTime: 1.62 ms\n\n\nNotice that the query plan changes completely when I cast device_id as int.\nWhat is worse (and why I'm writing) is that when I run the second query, it\ngoes into an infinite CPU loop. The original query completed in under 4\nseconds. 
I've left the second query running for 30 minutes or more, and\nTOP show 100% CPU utilization and 0% disk I/O (0% iowait).\n\nWe are starting to see this phenomenon in other queries which do *not* have\nany explicit type casting, but in which something like\n\"cast(vsa.tbl_sad_event.time_logged AS date)\" is used in a WHERE clause.\nIt's becoming a show-stopper until we understand what is happening.\n\nAny information or suggestions about this problem or making the query more\nefficient will be greatly appreciated.\n\nThanks!\n--- Steve\n\n",
"msg_date": "Wed, 22 Sep 2004 16:09:10 -0400",
"msg_from": "Steven Rosenstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Infinite CPU loop due to field ::type casting"
}
] |
[
{
"msg_contents": "\n\n\n\nI just realized in my haste to send this email out I provided the wrong\ntable in my example. Below is the same email, but with\nvsa.dtbl_logged_event_20040922 substituted for vsa.tbl_sad_event.\n\nSorry for the inconvenience.\n\n--- Steve\n\n\nGentlefolk,\n\nI'm not sure if this is the proper forum for this question, and it might\nhave been answered in a previous thread, but I'm new to PostgreSQL and the\nresearch I did in the archives did not turn up anything addressing this\nissue. Please direct me to the proper forum is this is not the correct\nvenue.\n\nEnvironment: Red Hat Enterprise Linux 3 Workstation, PostgreSQL V7.3.6\n(stock with the RHEL distribution)\n\nThe two tables I used in the example are tbl_device and\ndtbl_logged_event_20040922:\n\nvsa=# \\d vsa.tbl_device;\n Table \"vsa.tbl_device\"\n Column | Type |\nModifiers\n----------------+--------------------------+---------------------------------------------------------\n id | integer | not null default\nnextval('vsa.tbl_device_id_seq'::text)\n name | character varying(100) | not null\n account_id | bigint | not null\n vss_site_id | bigint | not null\n org_site_id | bigint | not null default 0\n device_type_id | integer | not null default 1\n os_family_id | integer | not null default 0\n status_id | integer | not null default 0\n timezone | character varying(80) |\n clientkey | character varying(2048) | not null\n record_created | timestamp with time zone | default now()\nIndexes: pk_tbl_device primary key btree (id),\n idx_d_uniq_name_site_account_key unique btree (name, vss_site_id,\naccount_id, clientkey),\n tbl_device_clientkey_key unique btree (clientkey),\n idx_d_account_id btree (account_id),\n idx_d_account_site_name btree (account_id, vss_site_id, name),\n idx_d_device_type_id btree (device_type_id),\n idx_d_name btree (name),\n idx_d_org_site_id btree (org_site_id),\n idx_d_os_family_id btree (os_family_id),\n idx_d_status_id btree (status_id),\n idx_d_vss_site_id btree (vss_site_id)\nForeign Key constraints: fk_d_va FOREIGN KEY (account_id) REFERENCES\nvsa.tbl_vsa_account(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_vs FOREIGN KEY (vss_site_id) REFERENCES\nvsa.tbl_vss_site(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_dof FOREIGN KEY (os_family_id) REFERENCES\nvsa.enum_device_os_family(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_dsc FOREIGN KEY (status_id) REFERENCES\nvsa.enum_device_status_code(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n fk_d_dt FOREIGN KEY (device_type_id) REFERENCES\nvsa.enum_device_type(id) ON UPDATE NO ACTION ON DELETE NO ACTION\nTriggers: trg_clean_device_name\n\nvsa=# \\d vsa.dtbl_logged_event_20040922\n Table\n\"vsa.dtbl_logged_event_20040922\"\n Column | Type |\nModifiers\n-----------------+-----------------------------+-------------------------------------------------------------------------\n id | integer | not null default\nnextval('vsa.dtbl_logged_event_20040922_id_seq'::text)\n device_id | bigint | not null\n report_datetime | timestamp without time zone |\n time_logged | timestamp without time zone |\n log_type | integer | not null\n type | character varying(50) |\n severity | character varying(30) |\n source_tag | character varying(30) |\n remote_host | character varying(100) |\n user_name | character varying(50) |\n event_code | character varying(10) |\n description | text |\n record_created | timestamp with time zone | default now()\n event_code_new | character varying(30) |\n remote_user | character varying(50) |\nIndexes: 
pk_dtbl_logged_event_20040922 primary key btree (id),\n idx_le_device_id_20040922 btree (device_id),\n idx_le_id_source_event_20040922 btree (device_id, source_tag,\nevent_code),\n idx_le_id_src_20040922 btree (device_id, source_tag),\n idx_le_id_type_severity_evtcode_20040922 btree (device_id,\nlog_type, severity, event_code),\n idx_le_log_type_20040922 btree (log_type),\n idx_le_source_tag_20040922 btree (source_tag),\n idx_le_time_logged_20040922 btree (time_logged),\n idx_le_time_type_20040922 btree (time_logged, log_type)\nForeign Key constraints: fk_le_lelt_20040922 FOREIGN KEY (log_type)\nREFERENCES vsa.enum_le_log_type(id) ON UPDATE NO ACTION ON DELETE NO\nACTION,\n fk_le_d_20040922 FOREIGN KEY (device_id)\nREFERENCES vsa.tbl_device(id) ON UPDATE NO ACTION ON DELETE CASCADE\n\n\nHere is my original query, and the query plan generated by the planner:\n\nvsa=# explain\nSELECT dev.name, dev.vss_site_id, tbl.log_type, tbl.severity, tbl.count\nFROM vsa.tbl_device AS dev\nLEFT OUTER JOIN\n (SELECT stbl.device_id, stbl.log_type, stbl.severity, count(*)\n FROM vsa.dtbl_logged_event_20040922 AS stbl\n WHERE stbl.log_type IN (2, 3, 4, 5)\n GROUP BY stbl.device_id, stbl.log_type, stbl.severity) AS tbl\n ON (dev.id=tbl.device_id)\nORDER BY dev.name;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=40893.18..40960.93 rows=27100 width=79)\n Sort Key: dev.name\n -> Merge Join (cost=38417.13..38897.77 rows=27100 width=79)\n Merge Cond: (\"outer\".id = \"inner\".device_id)\n -> Sort (cost=869.52..872.70 rows=1275 width=26)\n Sort Key: dev.id\n -> Seq Scan on tbl_device dev (cost=0.00..803.75 rows=1275 width=26)\n -> Sort (cost=37547.62..37615.37 rows=27100 width=26)\n Sort Key: tbl.device_id\n -> Subquery Scan tbl (cost=0.00..35552.21 rows=27100 width=26)\n -> Aggregate (cost=0.00..35552.21 rows=27100 width=26)\n -> Group (cost=0.00..34874.70 rows=271005 width=26)\n -> Index Scan using idx_le_id_type_severity_evtcode_20040922 on dtbl_logged_event_20040922 stbl\n(cost=0.00..32842.16 rows=271005 width=26)\n Filter: ((log_type = 2) OR (log_type = 3) OR (log_type = 4) OR (log_type = 5))\n(14 rows)\n\nTime: 1.43 ms\n\n\nLate in the development I realized that we had created an inconsistency in\nour design by having vsa.tbl_device.id defined as \"int\", and\nvsa.dtbl_logged_event_20040922.device_id defined as \"bigint\". These two\nfields are used in the ON clause (ON (dev.id=tbl.device_id)), and my\nunderstanding is that they should be of the same type cast. 
Trying to\nremedy this situation, I explicitly tried casting\nvsa.dtbl_logged_event_20040922.device_id as \"int\" (::int):\n\nvsa=# explain\nSELECT dev.name, dev.vss_site_id, tbl.log_type, tbl.severity, tbl.count\nFROM vsa.tbl_device AS dev\nLEFT OUTER JOIN\n (SELECT stbl.device_id, stbl.log_type, stbl.severity, count(*)\n FROM vsa.dtbl_logged_event_20040922 AS stbl\n WHERE stbl.log_type IN (2, 3, 4, 5) GROUP BY stbl.device_id,\nstbl.log_type, stbl.severity) AS tbl\n ON (dev.id=tbl.device_id::int)\nORDER BY dev.name;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..45848850.65 rows=27100 width=79)\n Join Filter: (\"outer\".id = (\"inner\".device_id)::integer)\n -> Index Scan using idx_d_name on tbl_device dev (cost=0.00..1490.19 rows=1275 width=26)\n -> Subquery Scan tbl (cost=0.00..35552.21 rows=27100 width=26)\n -> Aggregate (cost=0.00..35552.21 rows=27100 width=26)\n -> Group (cost=0.00..34874.70 rows=271005 width=26)\n -> Index Scan using idx_le_id_type_severity_evtcode_20040922 on dtbl_logged_event_20040922 stbl (cost=0.00..32842.16\nrows=271005 width=26)\n Filter: ((log_type = 2) OR (log_type = 3) OR (log_type = 4) OR (log_type = 5))\n(8 rows)\n\nTime: 1.62 ms\n\n\nNotice that the query plan changes completely when I cast device_id as int.\nWhat is worse (and why I'm writing) is that when I run the second query, it\ngoes into an infinite CPU loop. The original query completed in under 4\nseconds. I've left the second query running for 30 minutes or more, and\nTOP show 100% CPU utilization and 0% disk I/O (0% iowait).\n\nWe are starting to see this phenomenon in other queries which do *not* have\nany explicit type casting, but in which something like\n\"cast(vsa.dtbl_logged_event_20040922.time_logged AS date)\" is used in a\nWHERE clause. It's becoming a show-stopper until we understand what is\nhappening.\n\nAny information or suggestions about this problem or making the query more\nefficient will be greatly appreciated.\n\nThanks!\n--- Steve\n\n",
"msg_date": "Wed, 22 Sep 2004 16:33:16 -0400",
"msg_from": "Steven Rosenstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: Infinite CPU loop due to field ::type casting, Take II :-)"
},
{
"msg_contents": "Steven Rosenstein <[email protected]> writes:\n> Environment: Red Hat Enterprise Linux 3 Workstation, PostgreSQL V7.3.6\n\n> vsa=# explain\n> SELECT dev.name, dev.vss_site_id, tbl.log_type, tbl.severity, tbl.count\n> FROM vsa.tbl_device AS dev\n> LEFT OUTER JOIN\n> (SELECT stbl.device_id, stbl.log_type, stbl.severity, count(*)\n> FROM vsa.dtbl_logged_event_20040922 AS stbl\n> WHERE stbl.log_type IN (2, 3, 4, 5) GROUP BY stbl.device_id,\n> stbl.log_type, stbl.severity) AS tbl\n> ON (dev.id=tbl.device_id::int)\n> ORDER BY dev.name;\n> QUERY PLAN\n\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..45848850.65 rows=27100 width=79)\n> Join Filter: (\"outer\".id = (\"inner\".device_id)::integer)\n> -> Index Scan using idx_d_name on tbl_device dev (cost=0.00..1490.19 rows=1275 width=26)\n> -> Subquery Scan tbl (cost=0.00..35552.21 rows=27100 width=26)\n> -> Aggregate (cost=0.00..35552.21 rows=27100 width=26)\n> -> Group (cost=0.00..34874.70 rows=271005 width=26)\n> -> Index Scan using idx_le_id_type_severity_evtcode_20040922 on dtbl_logged_event_20040922 stbl (cost=0.00..32842.16\n> rows=271005 width=26)\n> Filter: ((log_type = 2) OR (log_type = 3) OR (log_type = 4) OR (log_type = 5))\n> (8 rows)\n\n> Time: 1.62 ms\n\n\n> Notice that the query plan changes completely when I cast device_id as int.\n> What is worse (and why I'm writing) is that when I run the second query, it\n> goes into an infinite CPU loop.\n\n\"Bad plan\" and \"infinite loop\" are two very different things.\n\nIn 7.3 you'd be better off without the cast, as you just found out. The\n7.3 backend can only handle merge or hash joins that use a join clause\nof the form \"variable = variable\" --- anything more complicated falls\nback to a nested loop join. It does handle mergejoins between unlike\ndata types, though, so you were doing okay with the undecorated query.\n\n7.4 is smarter; dunno if you want to upgrade at this point.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 2004 18:03:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Infinite CPU loop due to field ::type casting, Take II :-) "
}
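A sketch of the practical follow-ups to the explanation above, using the poster's table name; none of this comes from the thread itself. The quick fix on 7.3 is simply to drop the ::int cast so the join clause stays a plain "variable = variable"; making the column types agree is the cleaner long-term fix.

-- 8.0 syntax for making the schema consistent:
ALTER TABLE vsa.dtbl_logged_event_20040922
    ALTER COLUMN device_id TYPE integer;
-- On 7.3 itself the same effect needs a manual rebuild: add an integer
-- column, UPDATE it from device_id, drop the old column, rename the new
-- one, and recreate the index and foreign key that referenced it, since
-- ALTER COLUMN ... TYPE does not exist there.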
] |
[
{
"msg_contents": "Hello,\n\n \n\nI'll be moving a DB from internal RAID-10 SCSI storage to an EMC CX300\nFC RAID-10 LUN, bound to the host. I've setup a test host machine and a\ntest LUN. The /var/lib/pgsql/data folder is sym-linked to a partition on\nthe LUN. \n\n \n\nOther than the shared_buffers, effective cache size, and sort memory, I\nam not sure if I need to change any other parameters in the\npostgresql.conf file for getting maximum performance from the EMC box.\n\n \n\nIs there a general guideline for setting up postgres database and the\ntunable parameters on a SAN, especially for EMC?\n\n \n\nAppreciate any help,\n\n \n\nThanks,\nAnjan\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI’ll be moving a DB from internal RAID-10 SCSI storage\nto an EMC CX300 FC RAID-10 LUN, bound to the host. I’ve setup a test host\nmachine and a test LUN. The /var/lib/pgsql/data folder is sym-linked to a partition\non the LUN. \n \nOther than the shared_buffers, effective cache size, and\nsort memory, I am not sure if I need to change any other parameters in the\npostgresql.conf file for getting maximum performance from the EMC box.\n \nIs there a general guideline for setting up postgres database\nand the tunable parameters on a SAN, especially for EMC?\n \nAppreciate any help,\n \nThanks,\nAnjan",
"msg_date": "Wed, 22 Sep 2004 17:49:02 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "SAN performance"
},
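Not an answer from the thread, only an illustration of the postgresql.conf entries that usually get revisited alongside shared_buffers, effective_cache_size and sort memory when the data directory moves onto an array with a large write cache. The numbers are placeholders to be sized against the host's RAM and workload, not recommendations:

# postgresql.conf -- illustrative starting points only
shared_buffers = 10000          # in 8 KB pages, so roughly 80 MB
sort_mem = 8192                 # per-sort memory in KB (7.x name; work_mem in 8.0)
effective_cache_size = 100000   # in 8 KB pages; how much the OS/array will cache
random_page_cost = 3            # often lowered a little when the array cache is large
wal_buffers = 32
checkpoint_segments = 16        # more WAL between checkpoints, so they happen less often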
{
"msg_contents": "Hi,\n\nI expect you mean RAID 1/0 or 1+0 since the CX300 didn't support RAID 10 last time I looked.\n\nWhether you are using a SAN or not, you should consider putting the WAL files (pg_xlog folder) on\nseperate diskes from the DB. Since the log files are mostly written to, not read from you could\njust use RAID 1. \n\nIt's a pity pg doesn't have a way to use a cluster of servers to get the most out of your\nexpensive SAN.\n\nI read a comment earlier about setting block sizes to 8k to math pg's block size. Seems to make\nsense, you should check it out.\n\nHave fun,\nMr Pink\n\n--- Anjan Dave <[email protected]> wrote:\n\n> Hello,\n> \n> \n> \n> I'll be moving a DB from internal RAID-10 SCSI storage to an EMC CX300\n> FC RAID-10 LUN, bound to the host. I've setup a test host machine and a\n> test LUN. The /var/lib/pgsql/data folder is sym-linked to a partition on\n> the LUN. \n> \n> \n> \n> Other than the shared_buffers, effective cache size, and sort memory, I\n> am not sure if I need to change any other parameters in the\n> postgresql.conf file for getting maximum performance from the EMC box.\n> \n> \n> \n> Is there a general guideline for setting up postgres database and the\n> tunable parameters on a SAN, especially for EMC?\n> \n> \n> \n> Appreciate any help,\n> \n> \n> \n> Thanks,\n> Anjan\n> \n> \n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - 100MB free storage!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Thu, 23 Sep 2004 08:39:31 -0700 (PDT)",
"msg_from": "Mr Pink <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SAN performance"
}
] |
[
{
"msg_contents": "Scott: \n\nWe have seen similar issues when we have had massive load on our web\nserver. My determination was that simply the act of spawning and\nstopping postgres sessions was very heavy on the box, and by\nimplementing connection pooling (sqlrelay), we got much higher\nthroughput, and better response on the server then we would get any\nother way. \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jason Coene\nSent: Thursday, September 23, 2004 10:53 AM\nTo: 'Mr Pink'; 'Scott Kirkwood'\nCc: [email protected]\nSubject: Re: [PERFORM] Caching of Queries\n\nI'm not an expert, but I've been hunting down a killer performance\nproblem\nfor a while now. It seems this may be the cause.\n\nAt peak load, our database slows to a trickle. The CPU and disk\nutilization\nare normal - 20-30% used CPU and disk performance good.\n\nAll of our \"postgres\" processes end up in the \"semwai\" state - seemingly\nwaiting on other queries to complete. If the system isn't taxed in CPU\nor\ndisk, I have a good feeling that this may be the cause. I didn't know\nthat\nplanning queries could create such a gridlock, but based on Mr Pink's\nexplanation, it sounds like a very real possibility.\n\nWe're running on SELECT's, and the number of locks on our \"high traffic\"\ntables grows to the hundreds. If it's not the SELECT locking (and we\ndon't\nget that many INSERT/UPDATE on these tables), could the planner be doing\nit?\n\nAt peak load (~ 1000 queries/sec on highest traffic table, all very\nsimilar), the serialized queries pile up and essentially create a DoS on\nour\nservice - requiring a restart of the PG daemon. Upon stop & start, it's\nback to normal.\n\nI've looked at PREPARE, but apparently it only lasts per-session -\nthat's\nworthless in our case (web based service, one connection per\ndata-requiring\nconnection).\n\nDoes this sound plausible? Is there an alternative way to do this that\nI\ndon't know about? Additionally, in our case, I personally don't see any\ndownside to caching and using the same query plan when the only thing\nsubstituted are variables. In fact, I'd imagine it would help\nperformance\nsignificantly in high-volume web applications.\n\nThanks,\n\nJason\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Mr Pink\n> Sent: Thursday, September 23, 2004 11:29 AM\n> To: Scott Kirkwood; [email protected]\n> Subject: Re: [PERFORM] Caching of Queries\n> \n> Not knowing anything about the internals of pg, I don't know how this\n> relates, but in theory,\n> query plan caching is not just about saving time re-planning queries,\nit's\n> about scalability.\n> Optimizing queries requires shared locks on the database metadata,\nwhich,\n> as I understand it\n> causes contention and serialization, which kills scalability.\n> \n> I read this thread from last to first, and I'm not sure if I missed\n> something, but if pg isnt\n> caching plans, then I would say plan caching should be a top priority\nfor\n> future enhancements. It\n> needn't be complex either: if the SQL string is the same, and none of\nthe\n> tables involved in the\n> query have changed (in structure), then re-use the cached plan.\nBasically,\n> DDL and updated\n> statistics would have to invalidate plans for affected tables.\n> \n> Preferably, it should work equally for prepared statements and those\nnot\n> pre-prepared. 
If you're\n> not using prepare (and bind variables) though, your plan caching down\nthe\n> drain anyway...\n> \n> I don't think that re-optimizing based on values of bind variables is\n> needed. It seems like it\n> could actually be counter-productive and difficult to asses it's\nimpact.\n> \n> That's the way I see it anyway.\n> \n> :)\n> \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Thu, 23 Sep 2004 11:18:14 -0600",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Caching of Queries"
}
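Not part of the original exchange: when backends stack up in the "semwai" state like this, a quick way to see whether they are really waiting on locks is to look at pg_locks and pg_stat_activity while the stall is happening. Both views exist in 7.3/7.4; current_query is only populated when stats_command_string is enabled.

-- which locks exist, and whether anyone is waiting (granted = false)
SELECT relation::regclass AS rel, mode, granted, count(*)
  FROM pg_locks
 GROUP BY relation, mode, granted
 ORDER BY count(*) DESC;

-- what each backend is running right now
SELECT procpid, usename, current_query
  FROM pg_stat_activity;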
] |
[
{
"msg_contents": "I believe 1/0 or 1+0 is aka RAID-10. CX300 doesn't support 0+1.\r\n \r\nSo far i am aware of two things, the cache page size is 8KB (can be increased or decreased), and the stripe element size of 128 sectors default.\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Mr Pink [mailto:[email protected]] \r\n\tSent: Thu 9/23/2004 11:39 AM \r\n\tTo: Anjan Dave; [email protected] \r\n\tCc: \r\n\tSubject: Re: [PERFORM] SAN performance\r\n\t\r\n\t\r\n\r\n\tHi, \r\n\r\n\tI expect you mean RAID 1/0 or 1+0 since the CX300 didn't support RAID 10 last time I looked. \r\n\r\n\tWhether you are using a SAN or not, you should consider putting the WAL files (pg_xlog folder) on \r\n\tseperate diskes from the DB. Since the log files are mostly written to, not read from you could \r\n\tjust use RAID 1. \r\n\r\n\tIt's a pity pg doesn't have a way to use a cluster of servers to get the most out of your \r\n\texpensive SAN. \r\n\r\n\tI read a comment earlier about setting block sizes to 8k to math pg's block size. Seems to make \r\n\tsense, you should check it out. \r\n\r\n\tHave fun, \r\n\tMr Pink \r\n\r\n\t--- Anjan Dave <[email protected]> wrote: \r\n\r\n\t> Hello, \r\n\t> \r\n\t> \r\n\t> \r\n\t> I'll be moving a DB from internal RAID-10 SCSI storage to an EMC CX300 \r\n\t> FC RAID-10 LUN, bound to the host. I've setup a test host machine and a \r\n\t> test LUN. The /var/lib/pgsql/data folder is sym-linked to a partition on \r\n\t> the LUN. \r\n\t> \r\n\t> \r\n\t> \r\n\t> Other than the shared_buffers, effective cache size, and sort memory, I \r\n\t> am not sure if I need to change any other parameters in the \r\n\t> postgresql.conf file for getting maximum performance from the EMC box. \r\n\t> \r\n\t> \r\n\t> \r\n\t> Is there a general guideline for setting up postgres database and the \r\n\t> tunable parameters on a SAN, especially for EMC? \r\n\t> \r\n\t> \r\n\t> \r\n\t> Appreciate any help, \r\n\t> \r\n\t> \r\n\t> \r\n\t> Thanks, \r\n\t> Anjan \r\n\t> \r\n\t> \r\n\r\n\r\n\r\n\t \r\n\t \r\n\t__________________________________ \r\n\tDo you Yahoo!? \r\n\tNew and Improved Yahoo! Mail - 100MB free storage! \r\n\thttp://promotions.yahoo.com/new_mail \r\n\r\n",
"msg_date": "Thu, 23 Sep 2004 14:05:05 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SAN performance"
},
{
"msg_contents": "I'm about to do a whole bunch of testing here on various DA element \nsizes, and datablock sizes and how the affect pg performance. It doesn't \nappear possible to get > 4kb filesystem blocks under linux due to the \nlimitation of the pagesize. We're running AMD64 for these tests, but the \nDA configuration should be pretty much identical for IA32.\n\nMy best guess right now is that recompiling pg with a 4kb datablock \nsize, and using 4kb filesystem blocks with an 8 sector (4kb) element \nsize is probably the way to go for an active database.\n\nContact me off-list if you want a copy of the EMC CLARiiON \"Best \nPractices for Fiber Channel Storage\" white paper. Haven't read it since \nI only got my copy this morning, but... looks promising.\n\nDrew\n\n\nAnjan Dave wrote:\n\n> I believe 1/0 or 1+0 is aka RAID-10. CX300 doesn't support 0+1.\n> \n> So far i am aware of two things, the cache page size is 8KB (can be increased or decreased), and the stripe element size of 128 sectors default.\n> \n> Thanks,\n> Anjan\n> \n> \t-----Original Message----- \n> \tFrom: Mr Pink [mailto:[email protected]] \n> \tSent: Thu 9/23/2004 11:39 AM \n> \tTo: Anjan Dave; [email protected] \n> \tCc: \n> \tSubject: Re: [PERFORM] SAN performance\n> \t\n> \t\n> \n> \tHi, \n> \n> \tI expect you mean RAID 1/0 or 1+0 since the CX300 didn't support RAID 10 last time I looked. \n> \n> \tWhether you are using a SAN or not, you should consider putting the WAL files (pg_xlog folder) on \n> \tseperate diskes from the DB. Since the log files are mostly written to, not read from you could \n> \tjust use RAID 1. \n> \n> \tIt's a pity pg doesn't have a way to use a cluster of servers to get the most out of your \n> \texpensive SAN. \n> \n> \tI read a comment earlier about setting block sizes to 8k to math pg's block size. Seems to make \n> \tsense, you should check it out. \n> \n> \tHave fun, \n> \tMr Pink \n> \n> \t--- Anjan Dave <[email protected]> wrote: \n> \n> \t> Hello, \n> \t> \n> \t> \n> \t> \n> \t> I'll be moving a DB from internal RAID-10 SCSI storage to an EMC CX300 \n> \t> FC RAID-10 LUN, bound to the host. I've setup a test host machine and a \n> \t> test LUN. The /var/lib/pgsql/data folder is sym-linked to a partition on \n> \t> the LUN. \n> \t> \n> \t> \n> \t> \n> \t> Other than the shared_buffers, effective cache size, and sort memory, I \n> \t> am not sure if I need to change any other parameters in the \n> \t> postgresql.conf file for getting maximum performance from the EMC box. \n> \t> \n> \t> \n> \t> \n> \t> Is there a general guideline for setting up postgres database and the \n> \t> tunable parameters on a SAN, especially for EMC? \n> \t> \n> \t> \n> \t> \n> \t> Appreciate any help, \n> \t> \n> \t> \n> \t> \n> \t> Thanks, \n> \t> Anjan \n> \t> \n> \t> \n> \n> \n> \n> \t \n> \t \n> \t__________________________________ \n> \tDo you Yahoo!? \n> \tNew and Improved Yahoo! Mail - 100MB free storage! \n> \thttp://promotions.yahoo.com/new_mail \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Sun, 26 Sep 2004 16:37:21 -0400",
"msg_from": "Andrew Hammond <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SAN performance"
}
] |
[
{
"msg_contents": "My database was converted from MySQL a while back and has maintained all \nof the indexes which were previously used. Tt the time however, there \nwere limitations on the way PostgreSQL handled the indexes compared to \nMySQL.\n\nMeaning that under MySQL, it would make use of a multi-column index even \nif the rows within did not match. When the conversion was made more \nindexes were created overall to correct this and proceed with the \nconversion.\n\nNow the time has come to clean up the used indexes. Essentially, I \nwant to know if there is a way in which to determine which indexes are \nbeing used and which are not. This will allow me to drop off the \nunneeded ones and reduce database load as a result.\n\nAnd have things changed as to allow for mismatched multi-column indexes \nin version 7.4.x or even the upcoming 8.0.x?\n\n\tMartin Foster\n\[email protected]\n",
"msg_date": "Thu, 23 Sep 2004 22:16:28 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cleaning up indexes"
},
{
"msg_contents": "Martin Foster wrote:\n> My database was converted from MySQL a while back and has maintained all \n> of the indexes which were previously used. Tt the time however, there \n> were limitations on the way PostgreSQL handled the indexes compared to \n> MySQL.\n> \n> Meaning that under MySQL, it would make use of a multi-column index even \n> if the rows within did not match. When the conversion was made more \n> indexes were created overall to correct this and proceed with the \n> conversion.\n> \n> Now the time has come to clean up the used indexes. Essentially, I \n> want to know if there is a way in which to determine which indexes are \n> being used and which are not. This will allow me to drop off the \n> unneeded ones and reduce database load as a result.\n\nJust for clarification, PostgreSQL will use an a,b,c index for a, (a,b),\nand (a,b,c), but not for (a,c). Are you saying MySQL uses the index for\n(a,c)? This item is on our TODO list:\n\t\n\t* Use index to restrict rows returned by multi-key index when used with\n\t non-consecutive keys to reduce heap accesses\n\t\n\t For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and\n\t col3 = 9, spin though the index checking for col1 and col3 matches,\n\t rather than just col1\n\t\n> And have things changed as to allow for mismatched multi-column indexes \n> in version 7.4.x or even the upcoming 8.0.x?\n\nAs someone already pointed out, the pg_stat* tables will show you what\nindexes are used.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 25 Sep 2004 21:28:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up indexes"
}
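A small worked example of the rule Bruce describes, with an invented table; it is only meant to make the a / (a,b) / (a,b,c) versus (a,c) distinction concrete.

CREATE TABLE t (a integer, b integer, c integer);
CREATE INDEX t_a_b_c_idx ON t (a, b, c);

-- these can use t_a_b_c_idx directly:
SELECT * FROM t WHERE a = 1;
SELECT * FROM t WHERE a = 1 AND b = 2;
SELECT * FROM t WHERE a = 1 AND b = 2 AND c = 3;

-- these cannot use both keys: the first has no condition on the leading
-- column, and in the second the index only narrows on a = 1 while c = 3
-- is checked row by row -- exactly the case the TODO item describes
SELECT * FROM t WHERE b = 2;
SELECT * FROM t WHERE a = 1 AND c = 3;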
] |
[
{
"msg_contents": "\nIf you have set up the postgres instance to write stats, the tables pg_stat_user_indexes, pg_statio_all_indexes and so (use the \\dS option at the psql prompt to see these system tables); also check the pg_stat_user_tables table and similar beasts for information on total access, etc. Between these you can get a good idea of what indexes are not being used, and from the sequentail scan info on tables perhaps some idea of what may need some indexes.\n\nHTH,\n\nGreg Williamson\nDBA\nGlobeXplorer LLC \n\n-----Original Message-----\nFrom:\tMartin Foster [mailto:[email protected]]\nSent:\tThu 9/23/2004 3:16 PM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] Cleaning up indexes\nMy database was converted from MySQL a while back and has maintained all \nof the indexes which were previously used. Tt the time however, there \nwere limitations on the way PostgreSQL handled the indexes compared to \nMySQL.\n\nMeaning that under MySQL, it would make use of a multi-column index even \nif the rows within did not match. When the conversion was made more \nindexes were created overall to correct this and proceed with the \nconversion.\n\nNow the time has come to clean up the used indexes. Essentially, I \nwant to know if there is a way in which to determine which indexes are \nbeing used and which are not. This will allow me to drop off the \nunneeded ones and reduce database load as a result.\n\nAnd have things changed as to allow for mismatched multi-column indexes \nin version 7.4.x or even the upcoming 8.0.x?\n\n\tMartin Foster\n\[email protected]\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\n\n",
"msg_date": "Thu, 23 Sep 2004 16:05:22 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up indexes"
}
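A sketch of the sort of query this advice leads to, assuming the stats collector has been enabled (stats_start_collector / stats_row_level) and has been running long enough to be representative; the counters go back to zero when the stats are reset.

-- indexes that have not been used since the counters were last reset
SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 WHERE idx_scan = 0
 ORDER BY relname, indexrelname;

-- tables read mostly by sequential scan, i.e. possible missing indexes
SELECT relname, seq_scan, seq_tup_read, idx_scan
  FROM pg_stat_user_tables
 ORDER BY seq_tup_read DESC;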
] |
[
{
"msg_contents": "Hi,\n\nWe have been running PostgreSQL 7.3.4 on 64 bit MAC OS X G5 dual \nprocessors with 8GB of RAM for a while.\nLately, we realized that consistently only about 4GB of RAM is used \neven when CPUs have maxed out\nfor postgtres processes and pageouts starts to happen. Here is a \nportion of the output from TOP:\n\nMemRegions: num = 3761, resident = 41.5M + 7.61M private, 376M shared\nPhysMem: 322M wired, 1.83G active, 1.41G inactive, 3.56G used, 4.44G \nfree\nVM: 14.0G + 69.9M 277034(0) pageins, 1461(0) pageouts\n\nIs it because PostgreSQL 7.3.4 can't take advantage of the 64 bit \nhardware or is it something else?\n\nThanks a lot!\n\nQing\n\n",
"msg_date": "Fri, 24 Sep 2004 12:23:03 -0700",
"msg_from": "Qing Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance of PostgreSQL on 64 bit MAC OS X G5!"
},
{
"msg_contents": "Oops! [email protected] (Qing Zhao) was seen spray-painting on a wall:\n> Is it because PostgreSQL 7.3.4 can't take advantage of the 64 bit\n> hardware or is it something else?\n\nPostgreSQL has been able to take advantage of 64 bit hardware WHEN THE\nOS SUPPORTS IT, for quite a few versions now.\n\nAs far as I was aware, Mac OS-X was still deployed as a 32 bit\noperating system. Apple's advertising material seems deliberately\nambiguous in this regard; it seems to imply there is no difference\nbetween 32- and 64-bit applications. To wit:\n\n \"... This means that 32-bit applications that run on Mac OS X today\n will run natively on 64-bit PowerPC G5 Processor-based Macintosh\n computers, without the need for recompiling or additional\n optimizations.\"\n\nThere are quite likely to be some additional \"compiler incantations\"\nrequired in order to compile applications as 64 bit apps that will be\nable to reference, internally, more than 4GB of RAM.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').\nhttp://www3.sympatico.ca/cbbrowne/sgml.html\nTell a man that there are 400 billion stars, and he'll believe you.\nTell him a bench has wet paint, and he has to touch it. \n",
"msg_date": "Sun, 26 Sep 2004 19:31:21 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance of PostgreSQL on 64 bit MAC OS X G5!"
}
] |
[
{
"msg_contents": "I set nested_loop = off, which is why I have the high cost.\n\n@ is a postgis operator between 2 geomotries (both polygons). It's the @\noperator which is expensive. Is there a way to force a cheaper way of\ndoing that join?\n\n -> Nested Loop (cost=100001905.94..100001906.08 rows=1\nwidth=68) (actual time=1739.368..17047.422 rows=100 loops=1)\n Join Filter: ((COALESCE(\"outer\".geom, \"outer\".geom) @\nCOALESCE(\"inner\".geom, \"inner\".geom)) AND (\"outer\".region_id <>\n\"inner\".region_id))\n\n\n",
"msg_date": "Fri, 24 Sep 2004 20:57:15 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting rid of nested loop"
}
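An aside not taken from the thread: with PostGIS the usual way to cheapen this kind of join is a GiST index on the geometry column plus a bounding-box && test next to the exact @ test, so the expensive operator only runs on candidates the index can find. Table and index names below are assumed, and older PostGIS releases may want the gist_geometry_ops opclass spelled out. Note also that with nested loops disabled the planner is being steered away from the index-driven nested loop that could exploit such an index, and that wrapping geom in COALESCE() hides the column from a plain index.

CREATE INDEX region_geom_gist ON region USING gist (geom);

-- in the join condition, let && do the indexable pre-filter:
--   ... ON a.geom && b.geom
--      AND a.geom @ b.geom
--      AND a.region_id <> b.region_id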
] |
[
{
"msg_contents": "\nIs everyone still building interactively? I'm looking for nice ways to \nautomate building on Windows without any human action required, as part \nof the buildfarm project. Ideas on how to do this nicely for Windows \nwould be appreciated. Can one run the MSys shell without it firing up an \nemulated xterm?\n\ncheers\n\nandrew\n",
"msg_date": "Sat, 25 Sep 2004 15:39:01 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "automated builds?"
},
{
"msg_contents": "I've been using postgres 8.0 beta for win32 for quite a while now, and I \nam very happy with it. However, I am having an odd problem. Basically, I \nhave a large query which is a bunch of UNION SELECTs from a bunch of \ndifferent tables. I have all the necessary columns indexed, and when I \nrun the query by hand, it runs very fast (20ms). However, if I try to \nbundle this query up into a server side function, it runs very slow (10 \nseconds). I'm trying to figure out why, but since I can't run EXPLAIN \nANALYZE inside a function, I don't really know what else to do.\n\nThe layout of my database is a bunch of tables that all have an object \nid associated with them. There is a main object table that defines per \nobject permissions, and then all of the tables refer to eachother by the \nunique id. What I'm trying to do is get a list of objects that might \nrefer to a given id.\n\nHere is the query. 48542 is just one of the object ids. Some of these \ntables have 500,000 rows, but most are quite small, and the result is \nonly 3 rows.\n\nSELECT * FROM object WHERE id in (\n\tSELECT id FROM data_t WHERE project_id = 48542\n\tUNION SELECT id FROM analyzeset_t\n\t\tWHERE subject_id = 48542\n\t\t OR img_id = 48542\n\t\t OR hdr_id = 48542\n\tUNION SELECT id FROM bdi_t WHERE dcmstudy_id = 48542\n\tUNION SELECT id FROM crq_t WHERE dcmstudy_id = 48542\n\tUNION SELECT id FROM dcmfile_t WHERE dcmseries_id = 48542\n\tUNION SELECT id FROM dcmseries_t WHERE dcmstudy_id = 48542\n\tUNION SELECT id FROM dcmstudy_t\n\t\tWHERE dcmsub_id = 48542\n\t\t OR consent_id = 48542\n\tUNION SELECT id FROM hsq_t WHERE dcmstudy_id = 48542\n\tUNION SELECT id FROM job_t WHERE claimed_id = 48542\n\tUNION SELECT id FROM loc_t WHERE contact_id = 48542\n\tUNION SELECT id FROM pathslide_t WHERE plane_id = 48542\n\tUNION SELECT id FROM pft_t WHERE dcmstudy_id = 48542\n\tUNION SELECT id FROM pftblood_t WHERE pft_id = 48542\n\tUNION SELECT id FROM pftdata_t WHERE pft_id = 48542\n\tUNION SELECT id FROM pftpred_t WHERE pft_id = 48542\n\tUNION SELECT id FROM run_t WHERE job_id = 48542\n\tUNION SELECT id FROM scanread_t\n\t\tWHERE readby_id = 48542\n\t\t OR dcmstudy_id = 48542\n\tUNION SELECT id FROM service_t WHERE comp_id = 48542\n\tUNION SELECT id FROM sliceplane_t WHERE tissue_id = 48542\n\tUNION SELECT id FROM store_t WHERE loc_id = 48542\n\tUNION SELECT id FROM subject_t WHERE supersub_id = 48542\n\tUNION SELECT id FROM vc_t WHERE dcmstudy_id = 48542\n\tUNION SELECT id FROM vcdata_t WHERE vc_id = 48542\n\tUNION SELECT id FROM vcdyn_t WHERE vc_id = 48542\n\tUNION SELECT id FROM vcstatic_t WHERE vc_id = 48542\n\tUNION SELECT child_id as id FROM datapar_t WHERE par_id = 48542\n\tUNION SELECT par_id as id FROM datapar_t WHERE child_id = 48542\n\tUNION SELECT store_id as id FROM finst_t WHERE file_id = 48542\n\tUNION SELECT file_id as id FROM finst_t WHERE store_id = 48542\n\tUNION SELECT from_id as id FROM link_t WHERE to_id = 48542\n\tUNION SELECT to_id as id FROM link_t WHERE from_id = 48542\n\tUNION SELECT data_id as id FROM proc_t WHERE filter_id = 48542\n\tUNION SELECT filter_id as id FROM proc_t WHERE data_id = 48542\n\tUNION SELECT subject_id as id FROM subdata_t WHERE data_id = 48542\n\tUNION SELECT data_id as id FROM subdata_t WHERE subject_id = 48542\n)\n;\n\nIf I run this exact query, it takes 21 ms.\n\nI tried to wrap it into a function with:\n\ncreate function getrefs(int) returns setof object as '\n...\n' language sql;\n\nWhere the ... 
is just the same query with 48542 replaced with $1.\nselect getrefs(48542);\ntakes 10356.000ms\n\nI have also tried some other things such as:\n\nCREATE OR REPLACE FUNCTION mf_getrefobjs(int) RETURNS boolean AS '\nDECLARE\n\toldid alias for $1;\nBEGIN\n\tDROP TABLE refobjs;\n CREATE TEMPORARY TABLE refobjs AS\n SELECT * FROM object WHERE id in (\n SELECT id FROM data_t WHERE project_id = oldid\n...\n );\n\tRETURN 1;\nend;\n' LANGUAGE plpgSQL;\n\nI have tried returning cursors (they return fast, but the first FETCH \nNEXT is very slow.)\n\nDoes anyone know why this would be? Is it a problem that in a function \nit doesn't notice that all of the '=' are the same number, and it cannot \noptimize the query? Is there some way I could force an EXPLAIN ANALYZE? \n(If I run it on SELECT getrefs() I just get that it made 1 function call.)\n\nI've tried adding oldid::int in case it was a casting problem.\n\nActually, I've also tried stripping it all the way down to one query:\ncreate or replace function getrefs(int4) returns setof object as '\n SELECT * FROM object WHERE id in (\n SELECT id FROM data_t WHERE project_id = $1::int\n );\n' language sql;\n\nAnd that takes 3ms to return 0 rows. It actually seems like it is \nignoring the index on project_id in this case.\n\nIt is true that project_id can be NULL. It seems that if I make this \nquery on other tables that have \"not null\", they don't have the same \nperformance hit.\n\nAny help would be much appreciated, even if it is just giving me a \nbetter place to ask.\n\nJohn\n=:->",
"msg_date": "Tue, 28 Sep 2004 22:14:23 -0500",
"msg_from": "John Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Poor Performance for large queries in functions"
},
{
"msg_contents": "John Meinel <[email protected]> writes:\n> ... However, if I try to \n> bundle this query up into a server side function, it runs very slow (10 \n> seconds). I'm trying to figure out why, but since I can't run EXPLAIN \n> ANALYZE inside a function, I don't really know what else to do.\n\nA parameterized query inside a function is basically the same as a\nPREPARE'd query with parameters at the SQL level. So you can\ninvestigate what's happening here with\n\n\tPREPARE foo(int) AS\n\t\tSELECT * FROM object WHERE id in (\n\t\t\tSELECT id FROM data_t WHERE project_id = $1\n\t\tUNION SELECT ... ;\n\n\tEXPLAIN ANALYZE EXECUTE foo(48542);\n\nI'm not sure where the problem is either, so please do send along the\nresults.\n\n\t\t\tregards, tom lane\n\nPS: pgsql-performance would be a more appropriate venue for this\ndiscussion.\n",
"msg_date": "Wed, 29 Sep 2004 01:10:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor Performance for large queries in functions "
},
{
"msg_contents": "Tom Lane wrote:\n> John Meinel <[email protected]> writes:\n> \n>>... However, if I try to \n>>bundle this query up into a server side function, it runs very slow (10 \n>>seconds). I'm trying to figure out why, but since I can't run EXPLAIN \n>>ANALYZE inside a function, I don't really know what else to do.\n> \n> \n> A parameterized query inside a function is basically the same as a\n> PREPARE'd query with parameters at the SQL level. So you can\n> investigate what's happening here with\n> \n> \tPREPARE foo(int) AS\n> \t\tSELECT * FROM object WHERE id in (\n> \t\t\tSELECT id FROM data_t WHERE project_id = $1\n> \t\tUNION SELECT ... ;\n> \n> \tEXPLAIN ANALYZE EXECUTE foo(48542);\n> \n> I'm not sure where the problem is either, so please do send along the\n> results.\n> \n> \t\t\tregards, tom lane\n> \n> PS: pgsql-performance would be a more appropriate venue for this\n> discussion.\n\nWell, I think I tracked the problem down to the fact that the column \ndoes not have a \"not null\" constraint on it. Here is a demonstration. \nBasically, I have 3 tables, tobjects, tdata, and tproject. tdata \nbasically just links between tobjects and tproject, but isn't required \nto link to tproject. Yes, the real data has more columns, but this shows \nthe problem.\n\njfmeinel=> \\d tobjects\n Table \"public.tobjects\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer | not null\nIndexes:\n \"tobjects_pkey\" primary key, btree (id)\n\njfmeinel=> \\d tproject\n Table \"public.tproject\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer | not null\nIndexes:\n \"tproject_pkey\" primary key, btree (id)\n\njfmeinel=> \\d tdata\n Table \"public.tdata\"\n Column | Type | Modifiers\n------------+---------+-----------\n id | integer | not null\n project_id | integer |\nIndexes:\n \"tdata_pkey\" primary key, btree (id)\n \"tdata_project_id_idx\" btree (project_id)\nForeign-key constraints:\n \"tdata_id_fkey\" FOREIGN KEY (id) REFERENCES tobjects(id) ON UPDATE \nCASCADE ON DELETE CASCADE\n \"tdata_project_id_fkey\" FOREIGN KEY (project_id) REFERENCES \t\t\n\t\ttproject(id) ON UPDATE CASCADE ON DELETE SET DEFAULT\n\njfmeinel=> select count(*) from tdata;\n count\n--------\n 545768\n\njfmeinel=> select count(*) - count(project_id) from tdata;\n ?column?\n----------\n 240\n\nSo tdata(project_id) is almost completely full, of the 540000+ entries, \nonly 240 are null.\n\n\njfmeinel=> prepare myget(int) as select id from tdata\njfmeinel-> where project_id = $1;\nPREPARE\n\njfmeinel=> explain analyze execute myget(30000);\n QUERY PLAN\n--------------------------------------------------------------------\n Seq Scan on tdata (cost=0.00..9773.10 rows=181923 width=4)\n\t(actual time=1047.000..1047.000 rows=0 loops=1)\n Filter: (project_id = $1)\n Total runtime: 1047.000 ms\n\njfmeinel=> explain analyze select id from tdata where project_id = 30000;\n QUERY PLAN\n\n-------------------------------------------------------------------------\n Index Scan using tdata_project_id_idx on tdata (cost=0.00..4.20 \nrows=1 width=4) (actual time=0.000..0.000 rows=0 loops =1)\n Index Cond: (project_id = 30000)\n Total runtime: 0.000 ms\n\nSo notice that when doing the actual select it is able to do the index \nquery. But for some reason with a prepared statement, it is not able to \ndo it.\n\nAny ideas?\n\nSince I only have the integers now, I can send the data to someone if \nthey care to investigate it. 
It comes to 2.2M as a .tar.bz2, so \nobviously I'm not going to spam the list.\n\nIf I rewrite myget as:\nprepare myget(int) as select id from tdata where project_id = 30000; it \ndoes the right thing again. So it's something about how a variable \ninteracts with an indexed column with null values.\n\nNote: I've tried creating a script that generates dummy data to show \nthis problem and I have failed (it always performed the query correctly.)\n\nBut this test data definitely shows the problem. And yes, I've vacuum \nanalyzed all over the place.\n\nJohn\n=:->\n\nPS> I tested this on PostgreSQL 7.4.3, and it did not demonstrate this \nproblem. I am using PostgreSQL 8.0.0beta2 (probably -dev1)",
"msg_date": "Wed, 29 Sep 2004 01:34:07 -0500",
"msg_from": "John Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor Performance for large queries in functions"
},
{
"msg_contents": "John Meinel wrote:\n> \n> So notice that when doing the actual select it is able to do the index \n> query. But for some reason with a prepared statement, it is not able to \n> do it.\n> \n> Any ideas?\n\nIn the index-using example, PG knows the value you are comparing to. So, \nit can make a better estimate of how many rows will be returned. With \nthe prepared/compiled version it has to come up with a plan that makes \nsense for any value.\n\nIf you look back at the explain output you'll see PG is guessing 181,923 \nrows will match with the prepared query but only 1 for the second query. \nIf in fact you returned that many rows, you wouldn't want to use the \nindex - it would mean fetching values twice.\n\nThe only work-around if you are using plpgsql functions is to use \nEXECUTE to make sure your queries are planned for each value provided.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 29 Sep 2004 09:40:11 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Poor Performance for large queries"
},
{
"msg_contents": "Richard Huxton wrote:\n> John Meinel wrote:\n> \n>>\n>> So notice that when doing the actual select it is able to do the index \n>> query. But for some reason with a prepared statement, it is not able \n>> to do it.\n>>\n>> Any ideas?\n> \n> \n> In the index-using example, PG knows the value you are comparing to. So, \n> it can make a better estimate of how many rows will be returned. With \n> the prepared/compiled version it has to come up with a plan that makes \n> sense for any value.\n> \n> If you look back at the explain output you'll see PG is guessing 181,923 \n> rows will match with the prepared query but only 1 for the second query. \n> If in fact you returned that many rows, you wouldn't want to use the \n> index - it would mean fetching values twice.\n> \n> The only work-around if you are using plpgsql functions is to use \n> EXECUTE to make sure your queries are planned for each value provided.\n> \nI suppose that make sense. If the number was small (< 100) then there \nprobably would be a lot of responses. Because the tproject table is all \nsmall integers.\n\nBut for a large number, it probably doesn't exist on that table at all.\n\nThanks for the heads up.\n\nJohn\n=:->",
"msg_date": "Wed, 29 Sep 2004 09:56:27 -0500",
"msg_from": "John Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Poor Performance for large queries"
},
{
"msg_contents": "[ enlarging on Richard's response a bit ]\n\nJohn Meinel <[email protected]> writes:\n> jfmeinel=> explain analyze execute myget(30000);\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Seq Scan on tdata (cost=0.00..9773.10 rows=181923 width=4)\n> \t(actual time=1047.000..1047.000 rows=0 loops=1)\n> Filter: (project_id = $1)\n> Total runtime: 1047.000 ms\n\n> jfmeinel=> explain analyze select id from tdata where project_id = 30000;\n> QUERY PLAN\n\n> -------------------------------------------------------------------------\n> Index Scan using tdata_project_id_idx on tdata (cost=0.00..4.20 \n> rows=1 width=4) (actual time=0.000..0.000 rows=0 loops =1)\n> Index Cond: (project_id = 30000)\n> Total runtime: 0.000 ms\n\n> So notice that when doing the actual select it is able to do the index \n> query. But for some reason with a prepared statement, it is not able to \n> do it.\n\nThis isn't a \"can't do it\" situation, it's a \"doesn't want to do it\"\nsituation, and it's got nothing whatever to do with null or not null.\nThe issue is the estimated row count, which in the first case is so high\nas to make the seqscan approach look cheaper. So the real question here\nis what are the statistics on the column that are making the planner\nguess such a large number when it has no specific information about the\ncompared-to value. Do you have one extremely common value in the column?\nHave you done an ANALYZE recently on the table, and if so can you show\nus the pg_stats row for the column?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Sep 2004 12:18:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor Performance for large queries in functions "
},
{
"msg_contents": "Tom Lane wrote:\n> [ enlarging on Richard's response a bit ]\n> \n> John Meinel <[email protected]> writes:\n> \n>>jfmeinel=> explain analyze execute myget(30000);\n>> QUERY PLAN\n>>--------------------------------------------------------------------\n>> Seq Scan on tdata (cost=0.00..9773.10 rows=181923 width=4)\n>>\t(actual time=1047.000..1047.000 rows=0 loops=1)\n>> Filter: (project_id = $1)\n>> Total runtime: 1047.000 ms\n> \n> \n>>jfmeinel=> explain analyze select id from tdata where project_id = 30000;\n>> QUERY PLAN\n> \n> \n>>-------------------------------------------------------------------------\n>> Index Scan using tdata_project_id_idx on tdata (cost=0.00..4.20 \n>>rows=1 width=4) (actual time=0.000..0.000 rows=0 loops =1)\n>> Index Cond: (project_id = 30000)\n>> Total runtime: 0.000 ms\n> \n> \n>>So notice that when doing the actual select it is able to do the index \n>>query. But for some reason with a prepared statement, it is not able to \n>>do it.\n> \n> \n> This isn't a \"can't do it\" situation, it's a \"doesn't want to do it\"\n> situation, and it's got nothing whatever to do with null or not null.\n> The issue is the estimated row count, which in the first case is so high\n> as to make the seqscan approach look cheaper. So the real question here\n> is what are the statistics on the column that are making the planner\n> guess such a large number when it has no specific information about the\n> compared-to value. Do you have one extremely common value in the column?\n> Have you done an ANALYZE recently on the table, and if so can you show\n> us the pg_stats row for the column?\n> \n> \t\t\tregards, tom lane\n> \n\nThe answer is \"yes\" that particular column has very common numbers in \nit. Project id is a number from 1->21. I ended up modifying my query \nsuch that I do the bulk of the work in a regular UNION SELECT so that \nall that can be optimized, and then I later do another query for this \nrow in an 'EXECUTE ...' so that unless I'm actually requesting a small \nnumber, the query planner can notice that it can do an indexed query.\n\nI'm pretty sure this is just avoiding worst case scenario. Because it is \ntrue that if I use the number 18, it will return 500,000 rows. Getting \nthose with an indexed lookup would be very bad. But typically, I'm doing \nnumbers in a very different range, and so the planner was able to know \nthat it would not likely find that number.\n\nThanks for pointing out what the query planner was thinking, I was able \nto work around it.\n\nJohn\n=:->",
"msg_date": "Wed, 29 Sep 2004 11:44:14 -0500",
"msg_from": "John Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [pgsql-hackers-win32] Poor Performance for large queries"
}
] |
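The thread above settles on per-call planning as the remedy: a query parameterized inside a function is planned once with a generic parameter, so a skewed column like project_id ends up with a seqscan plan. Below is a minimal sketch (not from the thread) of the EXECUTE workaround Richard Huxton mentions, assuming a table shaped like John's tdata example; the function name and loop variable are invented for illustration:

    CREATE OR REPLACE FUNCTION getdata_ids(int) RETURNS SETOF integer AS '
    DECLARE
        rec record;
    BEGIN
        -- Building the statement as a string makes the planner see the actual
        -- literal value, so it can choose the index scan when the value is rare.
        FOR rec IN EXECUTE ''SELECT id FROM tdata WHERE project_id = ''
                           || quote_literal($1::text) LOOP
            RETURN NEXT rec.id;
        END LOOP;
        RETURN;
    END;
    ' LANGUAGE plpgsql;

Tom's PREPARE foo(int) ... / EXPLAIN ANALYZE EXECUTE foo(48542) recipe earlier in the thread remains the way to confirm which plan the generic, non-EXECUTE version actually gets.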
[
{
"msg_contents": "I finished the write-up I mentioned a few weeks ago.\n\nFeedback is welcome.\n\nThanks to everyone on this list and especially to all\nthe folks in IRC who helped me get up to speed on\nPostgreSQL in a very short period of time.\n\nI've already migrated one of the applications I\ncowrote with CableLabs to PostgreSQL. It isn't quite\nas demanding as the app in the review, but will serve\nas a great test bed for the migration work that is\njust beginning.\n\nHere's the write-up . . . \nhttp://www.opensourcecable.org/PostgreSQL_Performance_Analysis_Cox.pdf\n\nI think that you'll find a lot of people like myself\nwho develop applications using MySQL because their\nskillset requires a simple database, and want more\nwhen the application and their skills grow. I was(and\nstill am to an extent) a little frustrated by the\nperceived ease of use of the PG client compared to\nMySQL's client. I really think it would be a good\nidea to at least implement \"quit\" and \"exit\". The \\\ncommands take some getting used to and have more\nflexibility, but will never be as easy as MySQL show\ncommands. With that said, after only three weeks, I'm\nnot put-off by the client interface at all now.\n\n'njoy,\nMark\n",
"msg_date": "Mon, 27 Sep 2004 01:50:37 -0700 (PDT)",
"msg_from": "Mark Cotner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance/Functional Analysis Complete"
}
] |
[
{
"msg_contents": "Hi all,\ndon't you think the best statistic target for a boolean\ncolumn is something like 2? Or in general the is useless\nhave a statistics target > data type cardinality ?\n\n\n\nRegards\nGaetano Mendola\n",
"msg_date": "Mon, 27 Sep 2004 20:09:46 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "best statistic target for boolean columns"
},
{
"msg_contents": "Gaetano,\n\n> don't you think the best statistic target for a boolean\n> column is something like 2? Or in general the is useless\n> have a statistics target > data type cardinality ?\n\nIt depends, really, on the proportionality of the boolean values; if they're \nabout equal, I certainly wouldn't raise Stats from the default of 10. If, \nhowever, it's very dispraportionate -- like 2% true and 98% false -- then it \nmay pay to have better statistics so that the planner doesn't assume 50% \nhits, which it otherwise might.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 27 Sep 2004 11:31:09 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best statistic target for boolean columns"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJosh Berkus wrote:\n| Gaetano,\n|\n|\n|>don't you think the best statistic target for a boolean\n|>column is something like 2? Or in general the is useless\n|>have a statistics target > data type cardinality ?\n|\n|\n| It depends, really, on the proportionality of the boolean values; if they're\n| about equal, I certainly wouldn't raise Stats from the default of 10. If,\n| however, it's very dispraportionate -- like 2% true and 98% false -- then it\n| may pay to have better statistics so that the planner doesn't assume 50%\n| hits, which it otherwise might.\n\nSo, I didn't understand how the statistics hystogram works.\nI'm going to take a look at analyze.c\n\n\nRegards\nGaetano Mendola\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBWHr07UpzwH2SGd4RAi8nAJoDOa7j+5IjDEcqBvB4ATXRzRPB+wCfWZ0p\nOCmUew9zlyqVkxB9iWKoGAw=\n=7lkZ\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Mon, 27 Sep 2004 22:41:25 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best statistic target for boolean columns"
}
] |
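One way to see the proportionality Josh is talking about, i.e. how skewed the stored statistics for a boolean column really are, is to look at pg_stats after an ANALYZE. A small sketch; the table and column names here (orders.is_void) are invented for illustration:

    ANALYZE orders;

    SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'orders'
       AND attname = 'is_void';

If the two frequencies come back close to 50/50, the default target is fine; a 2%/98% split is the case where better statistics pay off.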
[
{
"msg_contents": "\n> Gaetano,\n> \n> > don't you think the best statistic target for a boolean\n> > column is something like 2? Or in general the is useless\n> > have a statistics target > data type cardinality ?\n> \n> It depends, really, on the proportionality of the boolean values; if they're \n> about equal, I certainly wouldn't raise Stats from the default of 10. If, \n> however, it's very dispraportionate -- like 2% true and 98% false -- then it \n> may pay to have better statistics so that the planner doesn't assume 50% \n> hits, which it otherwise might.\n\nNo, actually the stats table keeps the n most common values and their\nfrequency (usually in percentage). So really a target of 2 ought to be enough\nfor boolean values. In fact that's all I see in pg_statistic; I'm assuming\nthere's a full histogram somewhere but I don't see it. Where would it be?\n\nHowever the target also dictates how large a sample of the table to take. A\ntarget of two represents a very small sample. So the estimations could be\nquite far off.\n\nI ran the experiment and for a table with 2036 false rows out of 204,624 the\nestimate was 1720. Not bad. But then I did vacuum full analyze and got an\nestimate of 688. Which isn't so good.\n\n-- \ngreg\n\n",
"msg_date": "27 Sep 2004 15:13:45 -0400",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best statistic target for boolean columns"
},
{
"msg_contents": "Gregory Stark <[email protected]> writes:\n> No, actually the stats table keeps the n most common values and their\n> frequency (usually in percentage). So really a target of 2 ought to be enough\n> for boolean values. In fact that's all I see in pg_statistic; I'm assuming\n> there's a full histogram somewhere but I don't see it. Where would it be?\n\nIt's not going to be there. The histogram only covers values that are\nnot in the most-frequent-values list, and therefore it won't exist for a\ncolumn that is completely describable by most-frequent-values.\n\n> However the target also dictates how large a sample of the table to take. A\n> target of two represents a very small sample. So the estimations could be\n> quite far off.\n\nRight. The real point of stats target for such columns is that it\ndetermines how many rows to sample, and thereby indirectly implies\nthe accuracy of the statistics. For a heavily skewed boolean column\nyou'd want a high target so that the number of occurrences of the\ninfrequent value would be estimated accurately.\n\nIt's also worth noting that the number of rows sampled is driven by the\nlargest per-column stats target in the table, and so reducing stats\ntarget to 2 for a boolean column will save *zero* effort unless all the\ncolumns in the table are booleans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Sep 2004 15:26:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best statistic target for boolean columns "
},
{
"msg_contents": "Tom Lane wrote:\n> Gregory Stark <[email protected]> writes:\n> \n>>No, actually the stats table keeps the n most common values and their\n>>frequency (usually in percentage). So really a target of 2 ought to be enough\n>>for boolean values. In fact that's all I see in pg_statistic; I'm assuming\n>>there's a full histogram somewhere but I don't see it. Where would it be?\n> \n> \n> It's not going to be there. The histogram only covers values that are\n> not in the most-frequent-values list, and therefore it won't exist for a\n> column that is completely describable by most-frequent-values.\n> \n> \n>>However the target also dictates how large a sample of the table to take. A\n>>target of two represents a very small sample. So the estimations could be\n>>quite far off.\n> \n> \n> Right. The real point of stats target for such columns is that it\n> determines how many rows to sample, and thereby indirectly implies\n> the accuracy of the statistics. For a heavily skewed boolean column\n> you'd want a high target so that the number of occurrences of the\n> infrequent value would be estimated accurately.\n> \n> It's also worth noting that the number of rows sampled is driven by the\n> largest per-column stats target in the table, and so reducing stats\n> target to 2 for a boolean column will save *zero* effort unless all the\n> columns in the table are booleans.\n\nThank you all, now I have more clear how it works.\nBtw last time I was thinking: why during an explain analyze we can not use\nthe information on about the really extracted rows vs the extimated rows ?\n\nNow I'm reading an article, written by the same author that ispired the magic \"300\"\non analyze.c, about \"Self-tuning Histograms\". If this is implemented, I understood\nwe can take rid of \"vacuum analyze\" for mantain up to date the statistics.\nHave someone in his plans to implement it ?\nAfter all the idea is simple: compare during normal selects the extimated rows and\nthe actual extracted rows then use this \"free\" information to refine the histograms.\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Tue, 28 Sep 2004 00:42:06 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best statistic target for boolean columns"
},
{
"msg_contents": "On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:\n> Now I'm reading an article, written by the same author that ispired the magic \"300\"\n> on analyze.c, about \"Self-tuning Histograms\". If this is implemented, I understood\n> we can take rid of \"vacuum analyze\" for mantain up to date the statistics.\n> Have someone in his plans to implement it ?\n\nhttp://www.mail-archive.com/[email protected]/msg17477.html\n\nTom's reply is salient. I still think self-tuning histograms would be\nworth looking at for the multi-dimensional case.\n\n-Neil\n\n\n",
"msg_date": "Tue, 28 Sep 2004 09:43:45 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best statistic target for boolean columns"
},
{
"msg_contents": "Neil Conway wrote:\n> On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:\n> \n>>Now I'm reading an article, written by the same author that ispired the magic \"300\"\n>>on analyze.c, about \"Self-tuning Histograms\". If this is implemented, I understood\n>>we can take rid of \"vacuum analyze\" for mantain up to date the statistics.\n>>Have someone in his plans to implement it ?\n> \n> \n> http://www.mail-archive.com/[email protected]/msg17477.html\n> \n> Tom's reply is salient. I still think self-tuning histograms would be\n> worth looking at for the multi-dimensional case.\n\nI see.\n\n\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Tue, 28 Sep 2004 02:16:28 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best statistic target for boolean columns"
}
] |
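Tom's point above is that the per-column target mainly buys a larger sample, which is what matters for a heavily skewed boolean column. For reference, the knob is a per-column statistics target followed by a fresh ANALYZE; the table and column names below are again invented for illustration:

    -- Sample more rows so the rare value's frequency is estimated accurately.
    ALTER TABLE orders ALTER COLUMN is_void SET STATISTICS 200;
    ANALYZE orders;

And, per Tom, dropping a boolean column's target to 2 saves nothing unless every column in the table has an equally small target, since the number of rows sampled follows the largest per-column target.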
[
{
"msg_contents": "Help?\n\nNormally, this query takes from 5 minutes to 2 hours to run. On this update, it's been running for more than 10 hours.\n\nCan it be helped?\n\nUPDATE obs_v\nSET mag = obs_v.imag + zp.zero_v + cg.color_v * (obs_v.imag - i.imag),\n use = true\nFROM color_groups AS cg, zero_pair AS zp, obs_i AS i, files AS f, groups AS g\nWHERE obs_v.star_id = i.star_id\n AND obs_v.file_id = f.file_id\n AND cg.group_id = g.group_id\n AND g.night_id = f.night_id\n AND g.group_id = $group_id\n AND zp.pair_id = f.pair_id\n\nHash Join (cost=130079.22..639663.94 rows=1590204 width=63)\n Hash Cond: (\"outer\".star_id = \"inner\".star_id)\n -> Seq Scan on obs_i i (cost=0.00..213658.19 rows=10391319 width=8)\n -> Hash (cost=129094.19..129094.19 rows=77211 width=59)\n -> Nested Loop (cost=250.69..129094.19 rows=77211 width=59)\n -> Hash Join (cost=250.69..307.34 rows=67 width=12)\n Hash Cond: (\"outer\".pair_id = \"inner\".pair_id)\n -> Seq Scan on zero_pair zp (cost=0.00..43.32 rows=2532 width=8)\n -> Hash (cost=250.40..250.40 rows=118 width=12)\n -> Hash Join (cost=4.80..250.40 rows=118 width=12)\n Hash Cond: (\"outer\".night_id = \"inner\".night_id)\n -> Seq Scan on files f (cost=0.00..199.28 rows=9028 width=12)\n -> Hash (cost=4.80..4.80 rows=1 width=8)\n -> Nested Loop (cost=0.00..4.80 rows=1 width=8)\n -> Seq Scan on color_groups cg (cost=0.00..2.84 rows=1 width=8)\n Filter: (171 = group_id)\n -> Seq Scan on groups g (cost=0.00..1.95 rows=1 width=8)\n Filter: (group_id = 171)\n -> Index Scan using obs_v_file_id_index on obs_v (cost=0.00..1893.23 rows=2317 width=51)\n Index Cond: (obs_v.file_id = \"outer\".file_id)\n\nTable definitions:\n\ntassiv=# \\d color_groups\n Table \"public.color_groups\"\n Column | Type | Modifiers \n--------------+---------+---------------------------------------------------------------\n group_id | integer | not null default nextval('\"color_groups_group_id_seq\"'::text)\n color_u | real | \n color_b | real | \n color_v | real | \n color_r | real | \n color_i | real | \n max_residual | real | \nIndexes:\n \"color_groups_pkey\" primary key, btree (group_id)\n \"color_group_group_id_index\" btree (group_id)\n\ntassiv=# \\d zero_pair\n Table \"public.zero_pair\"\n Column | Type | Modifiers \n---------+---------+-----------\n pair_id | integer | not null\n zero_u | real | default 0\n zero_b | real | default 0\n zero_v | real | default 0\n zero_r | real | default 0\n zero_i | real | default 0\nIndexes:\n \"zero_pair_pkey\" primary key, btree (pair_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (pair_id) REFERENCES pairs(pair_id) ON DELETE CASCADE\n\ntassiv=# \\d obs_v\n Table \"public.obs_v\"\n Column | Type | Modifiers \n---------+---------+------------------------------------------------\n x | real | not null\n y | real | not null\n imag | real | not null\n smag | real | not null\n loc | spoint | not null\n obs_id | integer | not null default nextval('\"obs_id_seq\"'::text)\n file_id | integer | not null\n use | boolean | default false\n solve | boolean | default false\n star_id | integer | \n mag | real | \nIndexes:\n \"obs_v_file_id_index\" btree (file_id)\n \"obs_v_loc_index\" gist (loc)\n \"obs_v_obs_id_index\" btree (obs_id)\n \"obs_v_star_id_index\" btree (star_id)\n \"obs_v_use_index\" btree (use)\nForeign-key constraints:\n \"obs_v_files_constraint\" FOREIGN KEY (file_id) REFERENCES files(file_id) ON DELETE CASCADE\n \"obs_v_star_id_constraint\" FOREIGN KEY (star_id) REFERENCES catalog(star_id) ON DELETE SET NULL\nTriggers:\n obs_v_trig BEFORE INSERT 
OR DELETE OR UPDATE ON obs_v FOR EACH ROW EXECUTE PROCEDURE observations_trigger\n()\n\ntassiv=# \\d files\n Table \"public.files\"\n Column | Type | Modifiers \n----------+-----------------------------+-------------------------------------------------------\n file_id | integer | not null default nextval('\"files_file_id_seq\"'::text)\n night_id | integer | \n pair_id | integer | \n name | character varying | not null\n date | timestamp without time zone | \nIndexes:\n \"files_pkey\" primary key, btree (file_id)\n \"files_name_key\" unique, btree (name)\n \"files_id_index\" btree (file_id, night_id, pair_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (night_id) REFERENCES nights(night_id) ON UPDATE CASCADE ON DELETE CASCADE\n \"$2\" FOREIGN KEY (pair_id) REFERENCES pairs(pair_id) ON DELETE CASCADE\n\ntassiv=# \\d groups\n Table \"public.groups\"\n Column | Type | Modifiers \n----------+---------+-----------\n group_id | integer | not null\n night_id | integer | not null\nIndexes:\n \"groups_pkey\" primary key, btree (group_id, night_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (group_id) REFERENCES color_groups(group_id) ON DELETE CASCADE\n \"$2\" FOREIGN KEY (night_id) REFERENCES nights(night_id) ON DELETE CASCADE\n\nServer is a dual AMD2600+ with 2Gb mem:\n\nshared_buffers = 20000 # min 16, at least max_connections*2, 8KB each\nsort_mem = 16000 # min 64, size in KB\nmax_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 5000 # min 100, ~50 bytes each\neffective_cache_size = 100000 # typically 8KB each\nrandom_page_cost = 2 # units are one sequential page\ndefault_statistics_target = 500 # range 1-1000\n\nThanks,\nRob\n\n-- \n 08:06:34 up 5 days, 10:33, 2 users, load average: 3.13, 3.29, 3.61\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004",
"msg_date": "Tue, 28 Sep 2004 08:19:57 -0600",
"msg_from": "Robert Creager <[email protected]>",
"msg_from_op": true,
"msg_subject": "This query is still running after 10 hours..."
},
{
"msg_contents": "What does observations_trigger do?\n\n\n\nOn Tue, 28 Sep 2004 08:19:57 -0600, Robert Creager\n<[email protected]> wrote:\n>\n> Help?\n>\n> Normally, this query takes from 5 minutes to 2 hours to run. On this update, it's been running for more than 10 hours.\n>\n> Can it be helped?\n>\n> UPDATE obs_v\n> SET mag = obs_v.imag + zp.zero_v + cg.color_v * (obs_v.imag - i.imag),\n> use = true\n> FROM color_groups AS cg, zero_pair AS zp, obs_i AS i, files AS f, groups AS g\n> WHERE obs_v.star_id = i.star_id\n> AND obs_v.file_id = f.file_id\n> AND cg.group_id = g.group_id\n> AND g.night_id = f.night_id\n> AND g.group_id = $group_id\n> AND zp.pair_id = f.pair_id\n>\n> Hash Join (cost=130079.22..639663.94 rows=1590204 width=63)\n> Hash Cond: (\"outer\".star_id = \"inner\".star_id)\n> -> Seq Scan on obs_i i (cost=0.00..213658.19 rows=10391319 width=8)\n> -> Hash (cost=129094.19..129094.19 rows=77211 width=59)\n> -> Nested Loop (cost=250.69..129094.19 rows=77211 width=59)\n> -> Hash Join (cost=250.69..307.34 rows=67 width=12)\n> Hash Cond: (\"outer\".pair_id = \"inner\".pair_id)\n> -> Seq Scan on zero_pair zp (cost=0.00..43.32 rows=2532 width=8)\n> -> Hash (cost=250.40..250.40 rows=118 width=12)\n> -> Hash Join (cost=4.80..250.40 rows=118 width=12)\n> Hash Cond: (\"outer\".night_id = \"inner\".night_id)\n> -> Seq Scan on files f (cost=0.00..199.28 rows=9028 width=12)\n> -> Hash (cost=4.80..4.80 rows=1 width=8)\n> -> Nested Loop (cost=0.00..4.80 rows=1 width=8)\n> -> Seq Scan on color_groups cg (cost=0.00..2.84 rows=1 width=8)\n> Filter: (171 = group_id)\n> -> Seq Scan on groups g (cost=0.00..1.95 rows=1 width=8)\n> Filter: (group_id = 171)\n> -> Index Scan using obs_v_file_id_index on obs_v (cost=0.00..1893.23 rows=2317 width=51)\n> Index Cond: (obs_v.file_id = \"outer\".file_id)\n>\n> Table definitions:\n>\n> tassiv=# \\d color_groups\n> Table \"public.color_groups\"\n> Column | Type | Modifiers\n> --------------+---------+---------------------------------------------------------------\n> group_id | integer | not null default nextval('\"color_groups_group_id_seq\"'::text)\n> color_u | real |\n> color_b | real |\n> color_v | real |\n> color_r | real |\n> color_i | real |\n> max_residual | real |\n> Indexes:\n> \"color_groups_pkey\" primary key, btree (group_id)\n> \"color_group_group_id_index\" btree (group_id)\n>\n> tassiv=# \\d zero_pair\n> Table \"public.zero_pair\"\n> Column | Type | Modifiers\n> ---------+---------+-----------\n> pair_id | integer | not null\n> zero_u | real | default 0\n> zero_b | real | default 0\n> zero_v | real | default 0\n> zero_r | real | default 0\n> zero_i | real | default 0\n> Indexes:\n> \"zero_pair_pkey\" primary key, btree (pair_id)\n> Foreign-key constraints:\n> \"$1\" FOREIGN KEY (pair_id) REFERENCES pairs(pair_id) ON DELETE CASCADE\n>\n> tassiv=# \\d obs_v\n> Table \"public.obs_v\"\n> Column | Type | Modifiers\n> ---------+---------+------------------------------------------------\n> x | real | not null\n> y | real | not null\n> imag | real | not null\n> smag | real | not null\n> loc | spoint | not null\n> obs_id | integer | not null default nextval('\"obs_id_seq\"'::text)\n> file_id | integer | not null\n> use | boolean | default false\n> solve | boolean | default false\n> star_id | integer |\n> mag | real |\n> Indexes:\n> \"obs_v_file_id_index\" btree (file_id)\n> \"obs_v_loc_index\" gist (loc)\n> \"obs_v_obs_id_index\" btree (obs_id)\n> \"obs_v_star_id_index\" btree (star_id)\n> \"obs_v_use_index\" btree (use)\n> Foreign-key constraints:\n> 
\"obs_v_files_constraint\" FOREIGN KEY (file_id) REFERENCES files(file_id) ON DELETE CASCADE\n> \"obs_v_star_id_constraint\" FOREIGN KEY (star_id) REFERENCES catalog(star_id) ON DELETE SET NULL\n> Triggers:\n> obs_v_trig BEFORE INSERT OR DELETE OR UPDATE ON obs_v FOR EACH ROW EXECUTE PROCEDURE observations_trigger\n> ()\n>\n> tassiv=# \\d files\n> Table \"public.files\"\n> Column | Type | Modifiers\n> ----------+-----------------------------+-------------------------------------------------------\n> file_id | integer | not null default nextval('\"files_file_id_seq\"'::text)\n> night_id | integer |\n> pair_id | integer |\n> name | character varying | not null\n> date | timestamp without time zone |\n> Indexes:\n> \"files_pkey\" primary key, btree (file_id)\n> \"files_name_key\" unique, btree (name)\n> \"files_id_index\" btree (file_id, night_id, pair_id)\n> Foreign-key constraints:\n> \"$1\" FOREIGN KEY (night_id) REFERENCES nights(night_id) ON UPDATE CASCADE ON DELETE CASCADE\n> \"$2\" FOREIGN KEY (pair_id) REFERENCES pairs(pair_id) ON DELETE CASCADE\n>\n> tassiv=# \\d groups\n> Table \"public.groups\"\n> Column | Type | Modifiers\n> ----------+---------+-----------\n> group_id | integer | not null\n> night_id | integer | not null\n> Indexes:\n> \"groups_pkey\" primary key, btree (group_id, night_id)\n> Foreign-key constraints:\n> \"$1\" FOREIGN KEY (group_id) REFERENCES color_groups(group_id) ON DELETE CASCADE\n> \"$2\" FOREIGN KEY (night_id) REFERENCES nights(night_id) ON DELETE CASCADE\n>\n> Server is a dual AMD2600+ with 2Gb mem:\n>\n> shared_buffers = 20000 # min 16, at least max_connections*2, 8KB each\n> sort_mem = 16000 # min 64, size in KB\n> max_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\n> max_fsm_relations = 5000 # min 100, ~50 bytes each\n> effective_cache_size = 100000 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page\n> default_statistics_target = 500 # range 1-1000\n>\n> Thanks,\n> Rob\n>\n> --\n> 08:06:34 up 5 days, 10:33, 2 users, load average: 3.13, 3.29, 3.61\n> Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004\n>\n>\n>\n>\n",
"msg_date": "Tue, 28 Sep 2004 09:28:47 -0500",
"msg_from": "Kevin Barnard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: This query is still running after 10 hours..."
},
{
"msg_contents": "Robert Creager wrote:\n> Help?\n> \n> Normally, this query takes from 5 minutes to 2 hours to run. On this update, it's been running for more than 10 hours.\n> \n> Can it be helped?\n\n\nWhen I see this usually means that tables are full of\ndead rows. Did you vacuum you DB. Which version are you\nusing ?\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Tue, 28 Sep 2004 16:55:13 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: This query is still running after 10 hours..."
},
{
"msg_contents": "Robert Creager <[email protected]> writes:\n> Normally, this query takes from 5 minutes to 2 hours to run. On this update, it's been running for more than 10 hours.\n\n> ...\n> -> Nested Loop (cost=250.69..129094.19 rows=77211 width=59)\n> -> Hash Join (cost=250.69..307.34 rows=67 width=12)\n> Hash Cond: (\"outer\".pair_id = \"inner\".pair_id)\n> ...\n\nIt chose a nested loop here because it was only expecting 67 rows out of\nthe next-lower join, and so it thought it would only need 67 repetitions\nof the index probe into obs_v_file_id_index. I'm suspicious that that\nestimate was way low and so the nestloop is taking forever. You might\ntry \"SET enable_nestloop = off\" as a crude way of avoiding that trap.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Sep 2004 11:04:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: This query is still running after 10 hours... "
},
{
"msg_contents": "When grilled further on (Tue, 28 Sep 2004 09:28:47 -0500),\nKevin Barnard <[email protected]> confessed:\n\n> What does observations_trigger do?\n> \n\nThe trigger keeps another table (catalog) up to date with the information from the obs_v and obs_i tables. There are no direct insert/update/delete's on the catalog table, only though the trigger.\n\n-- \n 19:56:54 up 5 days, 22:23, 2 users, load average: 2.46, 2.27, 2.15\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004",
"msg_date": "Tue, 28 Sep 2004 20:21:40 -0600",
"msg_from": "Robert Creager <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: This query is still running after 10 hours..."
},
{
"msg_contents": "When grilled further on (Tue, 28 Sep 2004 16:55:13 +0200),\nGaetano Mendola <[email protected]> confessed:\n\n> Robert Creager wrote:\n> > Help?\n> > \n> > Normally, this query takes from 5 minutes to 2 hours to run. On this\n> > update, it's been running for more than 10 hours.\n> > \n> > Can it be helped?\n> \n> \n> When I see this usually means that tables are full of\n> dead rows. Did you vacuum you DB. Which version are you\n> using ?\n> \n\nGee, the two questions I realized I forgot to answer going into work ;-) I run\npg_autovacuum, and it's working. Even ran a FULL ANALYZE, no help. The version\nis 7.4.1.\n\nCheers,\nRob \n\n-- \n 20:22:11 up 5 days, 22:48, 2 users, load average: 2.16, 2.18, 2.15\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004",
"msg_date": "Tue, 28 Sep 2004 20:25:07 -0600",
"msg_from": "Robert Creager <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: This query is still running after 10 hours..."
},
{
"msg_contents": "On Tue, 28 Sep 2004 20:21:40 -0600, Robert Creager\n<[email protected]> wrote:\n> \n> The trigger keeps another table (catalog) up to date with the information from the obs_v and obs_i tables. There are no direct insert/update/delete's on the catalog table, only though the trigger.\n> \n\nIt's possible that the update to catalog is what is really taking a\nlong time. You might wish to try and explain that query just to make\nsure. You might also wish to disable to trigger just to rule it out. \nDoes catalog have any triggers on it? Does it have any foreign keys?\n\nI've shot myself in the foot on this before which is the only reason I\nask about it.\n",
"msg_date": "Tue, 28 Sep 2004 21:41:50 -0500",
"msg_from": "Kevin Barnard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: This query is still running after 10 hours..."
},
{
"msg_contents": "When grilled further on (Tue, 28 Sep 2004 11:04:23 -0400),\nTom Lane <[email protected]> confessed:\n\n> Robert Creager <[email protected]> writes:\n> > Normally, this query takes from 5 minutes to 2 hours to run. On this\n> > update, it's been running for more than 10 hours.\n> \n> > ...\n> > -> Nested Loop (cost=250.69..129094.19 rows=77211 width=59)\n> > -> Hash Join (cost=250.69..307.34 rows=67 width=12)\n> > Hash Cond: (\"outer\".pair_id = \"inner\".pair_id)\n> > ...\n> \n> It chose a nested loop here because it was only expecting 67 rows out of\n> the next-lower join, and so it thought it would only need 67 repetitions\n> of the index probe into obs_v_file_id_index. I'm suspicious that that\n> estimate was way low and so the nestloop is taking forever. You might\n> try \"SET enable_nestloop = off\" as a crude way of avoiding that trap.\n\nI tried your suggestion. Did generate a different plan (below), but the\nestimation is blown as it still used a nested loop. The query is currently\nrunning(42 minutes so far). For the query in question, there are 151 different\npair_id's in the pairs table, which equates to 302 entries in the files table\n(part of the query), which moves on to 533592 entries in the obs_v table and\n533699 entries in the obs_i table.\n\nThe groups table has 76 total entries, files 9028, zero_pair 2532, color_groups\n147. Only the obs_v and obs_i tables have data of any significant quantities\nwith 10M rows apiece. The trigger hitting the catalog table (875499 entries) is\nsearching for single entries to match (one fire per obs_v/obs_i update) on an\nindex (took 54ms on the first query of a random id just now).\n\nThere is no significant disk activity (read 0), one CPU is pegged, and that\nprocess is consuming 218M Resident memory, 168M Shared (10% available memory\ntotal). All reasonable, except for the fact it doesn't come back...\n\nHash Join (cost=100267870.17..100751247.13 rows=1578889 width=63)\n Hash Cond: (\"outer\".star_id = \"inner\".star_id)\n -> Seq Scan on obs_i i (cost=0.00..213658.19 rows=10391319 width=8)\n -> Hash (cost=100266886.39..100266886.39 rows=77113 width=59)\n -> Hash Join (cost=100000307.51..100266886.39 rows=77113 width=59)\n Hash Cond: (\"outer\".file_id = \"inner\".file_id)\n -> Seq Scan on obs_v (cost=0.00..213854.50 rows=10390650 width=5\n1) -> Hash (cost=100000307.34..100000307.34 rows=67 width=12)\n -> Hash Join (cost=100000250.69..100000307.34 rows=67\nwidth=12) Hash Cond: (\"outer\".pair_id =\n\"inner\".pair_id) -> Seq Scan on zero_pair zp \n(cost=0.00..43.32 rows=2532 width=8) -> Hash \n(cost=100000250.40..100000250.40 rows=118 width=12) \n -> Hash Join (cost=100000004.80..100000250.40 rows=118 width=12) \n Hash Cond: (\"outer\".night_id = \"inner\".night_id) \n -> Seq Scan on files f (cost=0.00..199.28\nrows=9028 width=12) -> Hash \n(cost=100000004.80..100000004.80rows=1 width=8) \n -> Nested Loop (cost=100000000.00..100000004.80 rows=1 width=8) \n -> Seq Scan on color_groups cg \n(cost=0.00..2.84 rows=1 width=8) \n Filter: (175 = group_id) \n-> Seq Scan on groups g (cost=0.00..1.95 rows=1 width=8) \n Filter: (group_id = 175)\n\n\n\n-- \n 20:48:23 up 5 days, 23:14, 2 users, load average: 2.56, 2.91, 2.78\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004",
"msg_date": "Tue, 28 Sep 2004 21:44:24 -0600",
"msg_from": "Robert Creager <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: This query is still running after 10 hours..."
},
{
"msg_contents": "When grilled further on (Tue, 28 Sep 2004 21:41:50 -0500),\nKevin Barnard <[email protected]> confessed:\n\n> On Tue, 28 Sep 2004 20:21:40 -0600, Robert Creager\n> <[email protected]> wrote:\n> > \n> > The trigger keeps another table (catalog) up to date with the information\n> > from the obs_v and obs_i tables. There are no direct insert/update/delete's\n> > on the catalog table, only though the trigger.\n> > \n> \n> It's possible that the update to catalog is what is really taking a\n> long time. You might wish to try and explain that query just to make\n> sure. You might also wish to disable to trigger just to rule it out. \n> Does catalog have any triggers on it? Does it have any foreign keys?\n\nA select on the catalog is really quick (54ms on a random query - ~1M entries). The updates use the index. The catalog table has no triggers or foreign keys. The trigger on the obs_? tables manages the catalog table.\n\ntassiv=# \\d catalog\n Table \"public.catalog\"\n Column | Type | Modifiers \n------------------+------------------+-------------------------------------------------\n star_id | integer | not null default nextval('\"star_id_seq\"'::text)\n loc_count | integer | default 0\n loc | spoint | not null\n ra_sum | double precision | default 0\n ra_sigma | real | default 0\n ra_sum_square | double precision | default 0\n dec_sum | double precision | default 0\n dec_sigma | real | default 0\n dec_sum_square | double precision | default 0\n mag_u_count | integer | default 0\n mag_u | real | default 99\n mag_u_sum | double precision | default 0\n mag_u_sigma | real | default 0\n mag_u_sum_square | double precision | default 0\n mag_b_count | integer | default 0\n mag_b | real | default 99\n mag_b_sum | double precision | default 0\n mag_b_sigma | real | default 0\n mag_b_sum_square | double precision | default 0\n mag_v_count | integer | default 0\n mag_v | real | default 99\n mag_v_sum | double precision | default 0\n mag_v_sigma | real | default 0\n mag_v_sum_square | double precision | default 0\n mag_r_count | integer | default 0\n mag_r | real | default 99\n mag_r_sum | double precision | default 0\n mag_r_sigma | real | default 0\n mag_r_sum_square | double precision | default 0\n mag_i_count | integer | default 0\n mag_i | real | default 99\n mag_i_sum | double precision | default 0\n mag_i_sigma | real | default 0\n mag_i_sum_square | double precision | default 0\nIndexes:\n \"catalog_pkey\" primary key, btree (star_id)\n \"catalog_ra_decl_index\" gist (loc)\n\n\n-- \n 21:44:49 up 6 days, 11 min, 2 users, load average: 2.03, 2.17, 2.39\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004",
"msg_date": "Tue, 28 Sep 2004 21:51:56 -0600",
"msg_from": "Robert Creager <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: This query is still running after 10 hours..."
},
{
"msg_contents": "Hi Robert,\n\n\"There is no significant disk activity (read 0), one CPU is pegged, and\nthat process is consuming 218M Resident memory, 168M Shared (10% available\nmemory total). All reasonable, except for the fact it doesn't come back...\"\n\nJust to let you know, I've observed the identical phenomenon on my RHEL3-WS\nserver running PostgreSQL V7.3.4: One of the CPU's pegged at 100% (2-way\nSMP with hyperthreading, so 4 apparent CPU's), virtually zero disk I/O\nactivity, high memory usage, etc. I thought it might be due to a casting\nproblem in a JOIN's ON clause, but that did not turn out to be the case. I\n*have* recently observed that if I run a vacuum analyze on the entire\ndatabase, the amount of time spent in this looping state decreases greatly,\nbut it has *not* disappeared in all cases.\n\nNext week I hope to be able to run some directed test with stats collection\nturned on, to try to see if I can find out what's causing this to occur.\nI'll post the results if I find anything significant.\n\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nSenior IT Architect/Specialist | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n\n \n Robert Creager \n <Robert_Creager@L \n ogicalChaos.org> To \n Sent by: Tom Lane <[email protected]> \n pgsql-performance cc \n -owner@postgresql PGPerformance \n .org <[email protected]> \n Subject \n Re: [PERFORM] This query is still \n 09/28/2004 11:44 running after 10 hours... \n PM \n \n \n \n \n \n\n\n\n\nWhen grilled further on (Tue, 28 Sep 2004 11:04:23 -0400),\nTom Lane <[email protected]> confessed:\n\n> Robert Creager <[email protected]> writes:\n> > Normally, this query takes from 5 minutes to 2 hours to run. On this\n> > update, it's been running for more than 10 hours.\n>\n> > ...\n> > -> Nested Loop (cost=250.69..129094.19 rows=77211 width=59)\n> > -> Hash Join (cost=250.69..307.34 rows=67 width=12)\n> > Hash Cond: (\"outer\".pair_id = \"inner\".pair_id)\n> > ...\n>\n> It chose a nested loop here because it was only expecting 67 rows out of\n> the next-lower join, and so it thought it would only need 67 repetitions\n> of the index probe into obs_v_file_id_index. I'm suspicious that that\n> estimate was way low and so the nestloop is taking forever. You might\n> try \"SET enable_nestloop = off\" as a crude way of avoiding that trap.\n\nI tried your suggestion. Did generate a different plan (below), but the\nestimation is blown as it still used a nested loop. The query is currently\nrunning(42 minutes so far). For the query in question, there are 151\ndifferent\npair_id's in the pairs table, which equates to 302 entries in the files\ntable\n(part of the query), which moves on to 533592 entries in the obs_v table\nand\n533699 entries in the obs_i table.\n\nThe groups table has 76 total entries, files 9028, zero_pair 2532,\ncolor_groups\n147. Only the obs_v and obs_i tables have data of any significant\nquantities\nwith 10M rows apiece. 
The trigger hitting the catalog table (875499\nentries) is\nsearching for single entries to match (one fire per obs_v/obs_i update) on\nan\nindex (took 54ms on the first query of a random id just now).\n\nThere is no significant disk activity (read 0), one CPU is pegged, and that\nprocess is consuming 218M Resident memory, 168M Shared (10% available\nmemory\ntotal). All reasonable, except for the fact it doesn't come back...\n\nHash Join (cost=100267870.17..100751247.13 rows=1578889 width=63)\n Hash Cond: (\"outer\".star_id = \"inner\".star_id)\n -> Seq Scan on obs_i i (cost=0.00..213658.19 rows=10391319 width=8)\n -> Hash (cost=100266886.39..100266886.39 rows=77113 width=59)\n -> Hash Join (cost=100000307.51..100266886.39 rows=77113\nwidth=59)\n Hash Cond: (\"outer\".file_id = \"inner\".file_id)\n -> Seq Scan on obs_v (cost=0.00..213854.50 rows=10390650\nwidth=5\n1) -> Hash (cost=100000307.34..100000307.34 rows=67\nwidth=12)\n -> Hash Join (cost=100000250.69..100000307.34 rows=67\nwidth=12) Hash Cond: (\"outer\".pair_id =\n\"inner\".pair_id) -> Seq Scan on zero_pair zp\n(cost=0.00..43.32 rows=2532 width=8) -> Hash\n(cost=100000250.40..100000250.40 rows=118 width=12)\n\n -> Hash Join (cost=100000004.80..100000250.40 rows=118 width=12)\n\n Hash Cond: (\"outer\".night_id = \"inner\".night_id)\n\n -> Seq Scan on files f (cost=0.00..199.28\nrows=9028 width=12) -> Hash\n(cost=100000004.80..100000004.80rows=1 width=8)\n\n -> Nested Loop (cost=100000000.00..100000004.80 rows=1 width=8)\n\n -> Seq Scan on color_groups cg\n\n(cost=0.00..2.84 rows=1 width=8)\n\n Filter: (175 = group_id)\n\n-> Seq Scan on groups g (cost=0.00..1.95 rows=1 width=8)\n\n Filter: (group_id = 175)\n\n\n\n--\n 20:48:23 up 5 days, 23:14, 2 users, load average: 2.56, 2.91, 2.78\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004\n(See attached file: color.explain)(See attached file: attlakjy.dat)",
"msg_date": "Wed, 29 Sep 2004 09:28:48 -0400",
"msg_from": "Steven Rosenstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: This query is still running after 10 hours..."
}
] |
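For reference, Tom's "SET enable_nestloop = off" suggestion can be scoped so it only affects the problem statement. Below is a sketch using the UPDATE from the start of this thread, with group_id 171 substituted for $group_id as in the posted plan; since EXPLAIN ANALYZE really executes an UPDATE, it is wrapped in a transaction that rolls back:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- reverts automatically at transaction end
    EXPLAIN ANALYZE
    UPDATE obs_v
       SET mag = obs_v.imag + zp.zero_v + cg.color_v * (obs_v.imag - i.imag),
           use = true
      FROM color_groups AS cg, zero_pair AS zp, obs_i AS i, files AS f, groups AS g
     WHERE obs_v.star_id = i.star_id
       AND obs_v.file_id = f.file_id
       AND cg.group_id = g.group_id
       AND g.night_id = f.night_id
       AND g.group_id = 171
       AND zp.pair_id = f.pair_id;
    ROLLBACK;

The later posts suggest the row estimate, not the join method, is the real problem, so this is a diagnostic step rather than a fix.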
[
{
"msg_contents": "Folks,\n\nI'm beginning a series of tests on OSDL's Scalable Test Platform in order to \ndetermine some recommended settings for many of the new PostgreSQL.conf \nparameters as well as pg_autovacuum. \n\nIs anyone else interested in helping me with this? \n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 28 Sep 2004 10:53:53 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interest in perf testing?"
},
{
"msg_contents": "Josh Berkus wrote:\n > Folks,\n >\n > I'm beginning a series of tests on OSDL's Scalable Test Platform in order to\n > determine some recommended settings for many of the new PostgreSQL.conf\n > parameters as well as pg_autovacuum.\n >\n > Is anyone else interested in helping me with this?\n >\n\nWhat do you need ?\n\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Thu, 30 Sep 2004 00:00:44 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interest in perf testing?"
}
] |
[
{
"msg_contents": "What is involved, rather what kind of help do you require? \n\nDan.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Josh Berkus\nSent: Tuesday, September 28, 2004 1:54 PM\nTo: [email protected]\nSubject: [PERFORM] Interest in perf testing?\n\n\nFolks,\n\nI'm beginning a series of tests on OSDL's Scalable Test Platform in order to \ndetermine some recommended settings for many of the new PostgreSQL.conf \nparameters as well as pg_autovacuum. \n\nIs anyone else interested in helping me with this? \n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n",
"msg_date": "Wed, 29 Sep 2004 10:44:12 -0400",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interest in perf testing?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a query where I do not understand that the rows number that \nexplain analyze finds differs so much from what explain estimates (3rd \nnested loop estimates 1 row but in real it is 4222 rows). I did analyze \nthe tables (pgsql 7.4.1).\n\nHere is the query:\n\nexplain analyze\nSELECT fts.val_1, max(fts.val_2) AS val_2\n\nFROM docobjflat AS fts,\n boxinfo,\n docobjflat AS ftw0,\n docobjflat AS ftw, envspec_map\n\nWHERE boxinfo.member=158096693\nAND boxinfo.envelope=ftw.envelope\nAND boxinfo.community=169964332\nAND boxinfo.hide=FALSE\nAND ftw0.flatid=ftw.flatid\nAND fts.flatid=ftw.flatid\nAND fts.docstart=1\nAND envspec_map.spec=169964482\nAND envspec_map.community=boxinfo.community\nAND envspec_map.envelope=boxinfo.envelope\n\nAND ftw0.val_14='IN-A01'\n\nGROUP BY fts.val_1;\n\nQuery plan is attached.\n\nRegards Dirk",
"msg_date": "Wed, 29 Sep 2004 17:55:45 +0200",
"msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "why does explain analyze differ so much from estimated explain?"
}
] |
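This thread ends without a resolution in the archive. The usual first step when estimates are off by three orders of magnitude (1 row estimated vs. 4222 actual) is to give the planner more detailed statistics on the join and filter columns and re-run EXPLAIN ANALYZE. A sketch, not from the thread itself, using the column names from Dirk's query:

    ALTER TABLE docobjflat ALTER COLUMN flatid   SET STATISTICS 200;
    ALTER TABLE docobjflat ALTER COLUMN val_14   SET STATISTICS 200;
    ALTER TABLE boxinfo    ALTER COLUMN envelope SET STATISTICS 200;
    ANALYZE docobjflat;
    ANALYZE boxinfo;

Whether that helps depends on the actual data distribution; it only widens the sample the estimates are built from.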
[
{
"msg_contents": "Dear Gurus,\n\nHere is this strange query that can't find the optimum plan unless I disable\nsome scan modes or change the costs.\n\n(A) is a 2x2.4GHz server with hw raid5 and v7.3.4 database. It chooses\nhashjoin.\n(B) is a 300MHz server with 7200rpm ide and v7.4.2 database. It chooses\nseqscan.\n\nIf I disable hashjoin/seqscan+hashjoin+mergejoin, both choose index scan.\n(A) goes from 1000ms to 55ms\n(B) goes from 5000+ms to 300ms\n\nIf your expert eyes could catch something missing (an index, analyze or\nsomething), I'd be greatly honoured :) Also, tips about which optimizer\ncosts may be too high or too low are highly appreciated.\n\nAs far as I fumbled with (B), disabling plans step by step got worse until\nafter disabled all tree. Reducing random_page_cost from 2 to 1.27 or lower\ninstantly activated the index scan, but I fear that it hurt most of our\nother queries. The faster server did not respond to any changes, even with\nrpc=1 and cpu_index_tuple_cost=0.0001, it chose hash join.\n\nAll that I discovered is that both servers fail to find the right index\n(szlltlvl_ttl_szlltlvl) unless forced to.\n\nIn hope of an enlightening answer,\nYours,\nG.\n%----------------------- cut here -----------------------%\n-- QUERY:\nexplain analyze -- 5000msec. rpc1.27-: 300\nSELECT coalesce(szallitolevel,0) AS scope_kov_szallitolevel,\nCASE 'raktáros' WHEN 'raktáros' THEN szallitolevel_bejovo_e(szallitolevel)\nWHEN 'sofőr' THEN 1027=(SELECT coalesce(sofor,0) FROM szallitolevel WHERE\naz=szallitolevel) ELSE true END\nFROM\n(SELECT l.az AS szallitolevel\nFROM szallitolevel l, szallitolevel_tetele t\nWHERE szallitas=1504 AND allapot NOT IN (6,7,8)\n-- pakolandó tételekkel\n AND t.szallitolevel = l.az AND NOT t.archiv\n-- ha archív van, de most nincs, legföljebb köv körben kibukik\n AND t.fajta IN (4,90,100)\nGROUP BY t.szallitolevel, l.az\nHAVING count(t.*)>0) t1\nNATURAL FULL OUTER JOIN\n(SELECT szallitolevel, az AS pakolas FROM pakolas WHERE szallitasba=1504 AND\nsztornozott_pakolas IS NULL) t2\nWHERE pakolas IS NULL ORDER BY 2 DESC LIMIT 1;\n%----------------------- cut here -----------------------%\n-- plan of (A), hashjoin --\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------------------------------------------------------\n Limit (cost=2795.58..2795.58 rows=1 width=12) (actual\ntime=1089.72..1089.72 rows=1 loops=1)\n -> Sort (cost=2795.58..2804.26 rows=3472 width=12) (actual\ntime=1089.72..1089.72 rows=2 loops=1)\n Sort Key: szallitolevel_bejovo_e(szallitolevel)\n -> Merge Join (cost=2569.48..2591.39 rows=3472 width=12) (actual\ntime=1086.72..1089.67 rows=2 loops=1)\n Merge Cond: (\"outer\".szallitolevel = \"inner\".szallitolevel)\n Filter: (\"inner\".az IS NULL)\n -> Sort (cost=1613.43..1614.15 rows=288 width=12) (actual\ntime=1054.21..1054.26 rows=80 loops=1)\n Sort Key: t1.szallitolevel\n -> Subquery Scan t1 (cost=1572.82..1601.65 rows=288\nwidth=12) (actual time=1050.72..1054.09 rows=80 loops=1)\n -> Aggregate (cost=1572.82..1601.65 rows=288\nwidth=12) (actual time=1050.70..1053.93 rows=80 loops=1)\n Filter: (count(\"*\") > 0)\n -> Group (cost=1572.82..1594.44 rows=2883\nwidth=12) (actual time=1050.64..1052.98 rows=824 loops=1)\n -> Sort (cost=1572.82..1580.03\nrows=2883 width=12) (actual time=1050.63..1051.24 rows=824 loops=1)\n Sort Key: t.szallitolevel, l.az\n -> Hash Join\n(cost=531.09..1407.13 rows=2883 width=12) (actual 
time=8.13..1048.89\nrows=824 loops=1)\n Hash Cond:\n(\"outer\".szallitolevel = \"inner\".az)\n -> Index Scan using\nszallitolevel_tetele_me on szallitolevel_tetele t (cost=0.00..2.25\nrows=167550 width=8) (actual time=0.18..871.77 rows=167888 loops=1)\n Filter: ((NOT\narchiv) AND ((fajta = 4) OR (fajta = 90) OR (fajta = 100)))\n -> Hash\n(cost=530.06..530.06 rows=411 width=4) (actual time=7.92..7.92 rows=0\nloops=1)\n -> Index Scan\nusing szlltlvl_szllts on szallitolevel l (cost=0.00..530.06 rows=411\nwidth=4) (actual time=0.04..7.81 rows=92 loops=1)\n Index Cond:\n(szallitas = 1504)\n Filter:\n((allapot <> 6) AND (allapot <> 7) AND (allapot <> 8))\n -> Sort (cost=956.05..964.73 rows=3472 width=8) (actual\ntime=27.80..30.24 rows=3456 loops=1)\n Sort Key: pakolas.szallitolevel\n -> Index Scan using pakolas_szallitasba on pakolas\n(cost=0.00..751.87 rows=3472 width=8) (actual time=0.10..21.68 rows=3456\nloops=1)\n Index Cond: (szallitasba = 1504)\n Filter: (sztornozott_pakolas IS NULL)\n Total runtime: 1090.30 msec\n(28 rows)\n\n-- plan of (A), disabled hashjoin --\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n---------------------------------------------\n Limit (cost=10616.86..10616.86 rows=1 width=12) (actual time=45.04..45.04\nrows=1 loops=1)\n -> Sort (cost=10616.86..10625.54 rows=3472 width=12) (actual\ntime=45.04..45.04 rows=2 loops=1)\n Sort Key: szallitolevel_bejovo_e(szallitolevel)\n -> Merge Join (cost=10390.77..10412.67 rows=3472 width=12)\n(actual time=42.06..44.99 rows=2 loops=1)\n Merge Cond: (\"outer\".szallitolevel = \"inner\".szallitolevel)\n Filter: (\"inner\".az IS NULL)\n -> Sort (cost=9434.71..9435.43 rows=288 width=12) (actual\ntime=19.83..19.89 rows=80 loops=1)\n Sort Key: t1.szallitolevel\n -> Subquery Scan t1 (cost=9394.10..9422.93 rows=288\nwidth=12) (actual time=16.39..19.71 rows=80 loops=1)\n -> Aggregate (cost=9394.10..9422.93 rows=288\nwidth=12) (actual time=16.39..19.56 rows=80 loops=1)\n Filter: (count(\"*\") > 0)\n -> Group (cost=9394.10..9415.72 rows=2883\nwidth=12) (actual time=16.33..18.62 rows=824 loops=1)\n -> Sort (cost=9394.10..9401.31\nrows=2883 width=12) (actual time=16.33..16.93 rows=824 loops=1)\n Sort Key: t.szallitolevel, l.az\n -> Nested Loop\n(cost=0.00..9228.41 rows=2883 width=12) (actual time=0.07..14.79 rows=824\nloops=1)\n -> Index Scan using\nszlltlvl_szllts on szallitolevel l (cost=0.00..530.06 rows=411 width=4)\n(actual time=0.04..7.97 rows=92 loops=1)\n Index Cond:\n(szallitas = 1504)\n Filter: ((allapot\n<> 6) AND (allapot <> 7) AND (allapot <> 8))\n -> Index Scan using\nszlltlvl_ttl_szlltlvl on szallitolevel_tetele t (cost=0.00..20.99 rows=12\nwidth=8) (actual time=0.01..0.05 rows=9 loops=92)\n Index Cond:\n(t.szallitolevel = \"outer\".az)\n Filter: ((NOT\narchiv) AND ((fajta = 4) OR (fajta = 90) OR (fajta = 100)))\n -> Sort (cost=956.05..964.73 rows=3472 width=8) (actual tim\ne=17.76..20.18 rows=3456 loops=1)\n Sort Key: pakolas.szallitolevel\n -> Index Scan using pakolas_szallitasba on pakolas\n(cost=0.00..751.87 rows=3472 width=8) (actual time=0.02..11.56 rows=3456\nloops=1)\n Index Cond: (szallitasba = 1504)\n Filter: (sztornozott_pakolas IS NULL)\n Total runtime: 45.62 msec\n(27 rows)\n\n%----------------------- cut here -----------------------%\n-- plan of (B), seqscan --\n\nQUERY 
PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------\n Limit (cost=9132.84..9132.84 rows=1 width=8) (actual\ntime=3958.144..3958.144 rows=0 loops=1)\n -> Sort (cost=9132.84..9142.66 rows=3928 width=8) (actual\ntime=3958.132..3958.132 rows=0 loops=1)\n Sort Key: szallitolevel_bejovo_e(COALESCE(t1.szallitolevel,\npakolas.szallitolevel))\n -> Merge Full Join (cost=8834.40..8898.34 rows=3928 width=8)\n(actual time=3958.099..3958.099 rows=0 loops=1)\n Merge Cond: (\"outer\".szallitolevel = \"inner\".szallitolevel)\n Filter: (\"inner\".az IS NULL)\n -> Sort (cost=7737.61..7743.49 rows=2352 width=4) (actual\ntime=3798.439..3798.594 rows=85 loops=1)\n Sort Key: t1.szallitolevel\n -> Subquery Scan t1 (cost=7570.63..7605.91 rows=2352\nwidth=4) (actual time=3796.553..3797.981 rows=85 loops=1)\n -> HashAggregate (cost=7570.63..7582.39\nrows=2352 width=12) (actual time=3796.535..3797.493 rows=85 loops=1)\n Filter: (count(\"*\") > 0)\n -> Hash Join (cost=628.69..7552.99\nrows=2352 width=12) (actual time=62.874..3785.188 rows=899 loops=1)\n Hash Cond: (\"outer\".szallitolevel =\n\"inner\".az)\n -> Seq Scan on szallitolevel_tetele\nt (cost=0.00..6062.06 rows=167743 width=8) (actual time=0.236..3072.882\nrows=167637 loops=1)\n Filter: ((NOT archiv) AND\n((fajta = 4) OR (fajta = 90) OR (fajta = 100)))\n -> Hash (cost=627.86..627.86\nrows=335 width=4) (actual time=54.973..54.973 rows=0 loops=1)\n -> Index Scan using\nszlltlvl_szllts on szallitolevel l (cost=0.00..627.86 rows=335 width=4)\n(actual time=0.592..54.298 rows=91 loops=1)\n Index Cond: (szallitas =\n1504)\n Filter: ((allapot <> 6)\nAND (allapot <> 7) AND (allapot <> 8))\n -> Sort (cost=1096.79..1106.61 rows=3928 width=8) (actual\ntime=137.216..143.009 rows=3458 loops=1)\n Sort Key: pakolas.szallitolevel\n -> Index Scan using pakolas_szallitasba on pakolas\n(cost=0.00..862.30 rows=3928 width=8) (actual time=0.581..107.542 rows=3458\nloops=1)\n Index Cond: (szallitasba = 1504)\n Filter: (sztornozott_pakolas IS NULL)\n Total runtime: 3971.008 ms\n(25 rows)\n\n-- plan of (B), disabled seqscan, hashjoin, mergejoin --\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n------------------------------------\n Limit (cost=100012622.51..100012622.51 rows=1 width=8) (actual\ntime=295.102..295.102 rows=0 loops=1)\n -> Sort (cost=100012622.51..100012632.33 rows=3928 width=8) (actual\ntime=295.091..295.091 rows=0 loops=1)\n Sort Key: szallitolevel_bejovo_e(COALESCE(t1.szallitolevel,\npakolas.szallitolevel))\n -> Merge Full Join (cost=100012324.08..100012388.02 rows=3928\nwidth=8) (actual time=295.057..295.057 rows=0 loops=1)\n Merge Cond: (\"outer\".szallitolevel = \"inner\".szallitolevel)\n Filter: (\"inner\".az IS NULL)\n -> Sort (cost=11227.29..11233.17 rows=2352 width=4) (actual\ntime=139.055..139.200 rows=85 loops=1)\n Sort Key: t1.szallitolevel\n -> Subquery Scan t1 (cost=11060.30..11095.58\nrows=2352 width=4) (actual time=137.166..138.588 rows=85 loops=1)\n -> HashAggregate (cost=11060.30..11072.06\nrows=2352 width=12) (actual time=137.150..138.106 rows=85 loops=1)\n Filter: (count(\"*\") > 0)\n -> Nested Loop (cost=0.00..11042.66\nrows=2352 width=12) (actual time=14.451..127.809 rows=899 loops=1)\n -> Index Scan using szlltlvl_szllts\non szallitolevel l (cost=0.00..627.86 rows=335 width=4) 
(actual\ntime=0.579..53.503 rows=91 loops=1)\n Index Cond: (szallitas = 1504)\n Filter: ((allapot <> 6) AND\n(allapot <> 7) AND (allapot <> 8))\n -> Index Scan using\nszlltlvl_ttl_szlltlvl on szallitolevel_tetele t (cost=0.00..30.91 rows=14\nwidth=8) (actual time=0.232..0.714 rows=10 loops=91)\n Index Cond: (t.szallitolevel =\n\"outer\".az)\n Filter: ((NOT archiv) AND\n((fajta = 4) OR (fajta = 90) OR (fajta = 100)))\n -> Sort (cost=1096.79..1106.61 rows=3928 width=8) (actual\ntime=133.517..139.338 rows=3458 loops=1)\n Sort Key: pakolas.szallitolevel\n -> Index Scan using pakolas_szallitasba on pakolas\n(cost=0.00..862.30 rows=3928 width=8) (actual time=0.526..103.313 rows=3458\nloops=1)\n Index Cond: (szallitasba = 1504)\n Filter: (sztornozott_pakolas IS NULL)\n Total runtime: 297.604 ms\n(24 rows)\n\n\n%----------------------- cut here -----------------------%\n\n",
"msg_date": "Wed, 29 Sep 2004 19:44:13 +0200",
"msg_from": "=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "stubborn query confuses two different servers"
}
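The experiments described above (disabling plan types, lowering random_page_cost) can be confined to a single session so they do not disturb other queries. Below is a minimal sketch, not from the original post, using a cut-down version of the inner join from the query above just to show the mechanics; the two settings exist in both 7.3 and 7.4, and the values are things to compare plans with, not recommendations.

    SET enable_hashjoin TO off;    -- steer the planner away from the hash join
    SET random_page_cost TO 1.5;   -- experiment value only

    EXPLAIN ANALYZE
    SELECT l.az, count(*)
    FROM   szallitolevel l, szallitolevel_tetele t
    WHERE  l.szallitas = 1504
      AND  t.szallitolevel = l.az
      AND  NOT t.archiv
      AND  t.fajta IN (4, 90, 100)
    GROUP  BY l.az;                -- cut-down inner join from the query above

    RESET enable_hashjoin;
    RESET random_page_cost;

Comparing the estimated costs and actual times with and without the two SETs shows whether the misestimate comes from the join method itself or from the random-I/O cost assumption.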
] |
[
{
"msg_contents": "Hi all, a small question:\n\nI've got this table \"songs\" and an index on column artist. Since there's about\none distinct artist for every 10 rows, it would be nice if it could use this\nindex when counting artists. It doesn't however:\n\nlyrics=> EXPLAIN ANALYZE SELECT count(DISTINCT artist) FROM songs;\n Aggregate (cost=31961.26..31961.26 rows=1 width=14) (actual time=808.863..808.864 rows=1 loops=1)\n -> Seq Scan on songs (cost=0.00..31950.41 rows=4341 width=14) (actual time=26.801..607.172 rows=25207 loops=1)\n Total runtime: 809.106 ms\n\nEven with enable_seqscan to off, it just can't seem to use the index. The same\nquery without the count() works just fine:\n\nlyrics=> EXPLAIN ANALYZE SELECT DISTINCT artist FROM songs;\n Unique (cost=0.00..10814.96 rows=828 width=14) (actual time=0.029..132.903 rows=3280 loops=1)\n -> Index Scan using songs_artist_key on songs (cost=0.00..10804.11 rows=4341 width=14) (actual time=0.027..103.448 rows=25207 loops=1)\n Total runtime: 135.697 ms\n\nOf course I can just take the number of rows from the latter query, but I'm\nstill wondering why it can't use indexes with functions.\n\nThanks\n-- \nShiar - http://www.shiar.org\n> Faktoj estas malamik del verajh\n",
"msg_date": "Wed, 29 Sep 2004 21:41:31 +0200",
"msg_from": "Shiar <[email protected]>",
"msg_from_op": true,
"msg_subject": "index not used when using function"
},
{
"msg_contents": "\n\tMaybe add an order by artist to force a groupaggregate ?\n\n\n> Hi all, a small question:\n>\n> I've got this table \"songs\" and an index on column artist. Since \n> there's about\n> one distinct artist for every 10 rows, it would be nice if it could use \n> this\n> index when counting artists. It doesn't however:\n>\n> lyrics=> EXPLAIN ANALYZE SELECT count(DISTINCT artist) FROM songs;\n> Aggregate (cost=31961.26..31961.26 rows=1 width=14) (actual \n> time=808.863..808.864 rows=1 loops=1)\n> -> Seq Scan on songs (cost=0.00..31950.41 rows=4341 width=14) \n> (actual time=26.801..607.172 rows=25207 loops=1)\n> Total runtime: 809.106 ms\n>\n> Even with enable_seqscan to off, it just can't seem to use the index. \n> The same\n> query without the count() works just fine:\n>\n> lyrics=> EXPLAIN ANALYZE SELECT DISTINCT artist FROM songs;\n> Unique (cost=0.00..10814.96 rows=828 width=14) (actual \n> time=0.029..132.903 rows=3280 loops=1)\n> -> Index Scan using songs_artist_key on songs (cost=0.00..10804.11 \n> rows=4341 width=14) (actual time=0.027..103.448 rows=25207 loops=1)\n> Total runtime: 135.697 ms\n>\n> Of course I can just take the number of rows from the latter query, but \n> I'm\n> still wondering why it can't use indexes with functions.\n>\n> Thanks\n\n\n",
"msg_date": "Sun, 03 Oct 2004 17:29:37 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index not used when using function"
}
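For the record, count(DISTINCT artist) is de-duplicated inside the aggregate itself (an internal sort of the fed rows), so the index on artist does not help it. The usual workaround, a sketch only, is to count the rows of the DISTINCT subquery, which can reuse the sorted index scan shown in the second plan above; the GROUP BY form is in the same spirit as the GroupAggregate suggestion.

    SELECT count(*) AS artist_count
    FROM  (SELECT DISTINCT artist FROM songs) AS distinct_artists;

    -- or, equivalently:
    SELECT count(*) AS artist_count
    FROM  (SELECT artist FROM songs GROUP BY artist) AS grouped_artists;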
] |
[
{
"msg_contents": "OK, I have a situation that might be a performance problem, a bug, or an\nunavoidable consequence of using prepared statements. The short version\nis that I am getting function executions for rows not returned in a\nresult set when they are in a prepared statement.\n\nIn other words, I have a query:\nselect f(t.c) from t where [boolean expr on t] limit 1;\n\nbecause of the limit phrase, obviously, at most one record is returned\nand f executes at most once regardless of the plan used (in practice,\nsometimes index, sometimes seq_scan.\n\nNow, if the same query is executed as a prepared statement,\nprepare ps(...) as select f(t.c) from t where [expr] limit 1;\nexecute ps;\n\nnow, if ps ends up using a index scan on t, everything is ok. However,\nif ps does a seqscan, f executes for every row on t examined until the\n[expr] criteria is met. Is this a bug? If necessary I should be able\nto set up a reproducible example. The easy workaround is to not use\nprepared statements in these situations, but I need to be able to\nguarantee that f only executes once (even if that means exploring\nsubqueries).\n\nMerlin\n",
"msg_date": "Thu, 30 Sep 2004 09:45:51 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "spurious function execution in prepared statements."
},
{
"msg_contents": "On Thu, Sep 30, 2004 at 09:45:51AM -0400, Merlin Moncure wrote:\n> Now, if the same query is executed as a prepared statement,\n> prepare ps(...) as select f(t.c) from t where [expr] limit 1;\n> execute ps;\n> \n> now, if ps ends up using a index scan on t, everything is ok. However,\n> if ps does a seqscan, f executes for every row on t examined until the\n> [expr] criteria is met. Is this a bug? If necessary I should be able\n> to set up a reproducible example. The easy workaround is to not use\n> prepared statements in these situations, but I need to be able to\n> guarantee that f only executes once (even if that means exploring\n> subqueries).\n\n\nHere's another workaround that may let you use a prepared statement:\n\nprepare ps(...) as \nselect f(c) from (select c from t where [expr] limit 1) as t1\n\n-Mike\n",
"msg_date": "Thu, 30 Sep 2004 10:13:24 -0400",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] spurious function execution in prepared statements."
},
{
"msg_contents": "\nOn Thu, 30 Sep 2004, Merlin Moncure wrote:\n\n> OK, I have a situation that might be a performance problem, a bug, or an\n> unavoidable consequence of using prepared statements. The short version\n> is that I am getting function executions for rows not returned in a\n> result set when they are in a prepared statement.\n>\n> In other words, I have a query:\n> select f(t.c) from t where [boolean expr on t] limit 1;\n\nAn actual boolean expr on t? Or on a column in t?\n\n> because of the limit phrase, obviously, at most one record is returned\n> and f executes at most once regardless of the plan used (in practice,\n> sometimes index, sometimes seq_scan.\n>\n> Now, if the same query is executed as a prepared statement,\n> prepare ps(...) as select f(t.c) from t where [expr] limit 1;\n> execute ps;\n>\n> now, if ps ends up using a index scan on t, everything is ok. However,\n> if ps does a seqscan, f executes for every row on t examined until the\n> [expr] criteria is met. Is this a bug? If necessary I should be able\n> to set up a reproducible example. The easy workaround is to not use\n> prepared statements in these situations, but I need to be able to\n> guarantee that f only executes once (even if that means exploring\n> subqueries).\n\nI think a reproducible example would be good. Simple attempts to duplicate\nthis on 8.0b2 have failed for me, unless I'm using order by.\n",
"msg_date": "Thu, 30 Sep 2004 07:45:40 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: spurious function execution in prepared statements."
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> now, if ps ends up using a index scan on t, everything is ok. However,\n> if ps does a seqscan, f executes for every row on t examined until the\n> [expr] criteria is met. Is this a bug?\n\nWorks for me.\n\nregression=# create function f(int) returns int as '\nregression'# begin\nregression'# raise notice ''f(%)'', $1;\nregression'# return $1;\nregression'# end' language plpgsql;\nCREATE FUNCTION\nregression=# select f(unique2) from tenk1 where unique2%2 = 1 limit 2;\nNOTICE: f(1)\nNOTICE: f(3)\n f\n---\n 1\n 3\n(2 rows)\n\nregression=# prepare ps as\nregression-# select f(unique2) from tenk1 where unique2%2 = 1 limit 2;\nPREPARE\nregression=# execute ps;\nNOTICE: f(1)\nNOTICE: f(3)\n f\n---\n 1\n 3\n(2 rows)\n\nregression=#\n\nYou sure you aren't using f() in the WHERE clause?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Sep 2004 10:54:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] spurious function execution in prepared statements. "
}
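The "unless I'm using order by" remark above is the key detail: when an ORDER BY has to be satisfied by an explicit sort, the select list (including f()) is computed for every qualifying row before the sort and LIMIT run. A sketch continuing Tom's regression-database example; since nothing indexes unique2 % 3, one would expect a NOTICE per qualifying row here rather than per returned row.

    SELECT f(unique2)
    FROM   tenk1
    WHERE  unique2 % 2 = 1
    ORDER  BY unique2 % 3
    LIMIT  2;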
] |
[
{
"msg_contents": "> Here's another workaround that may let you use a prepared statement:\n> \n> prepare ps(...) as\n> select f(c) from (select c from t where [expr] limit 1) as t1\n> \n> -Mike\n\nI was just exploring that. In fact, the problem is not limited to\nprepared statements...it's just that they are more likely to run a\nseqscan so I noticed it there first. Since I am in a situation where I\nneed very strict control over when and why f gets executed, I pretty\nmuch have to go with the subquery option. \n\nThat said, it just seems that out of result set excecutions of f should\nbe in violation of something... \n\nMerlin\n\n",
"msg_date": "Thu, 30 Sep 2004 10:19:12 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] spurious function execution in prepared statements."
}
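The generic shape of the subquery workaround both posts converge on, shown here with placeholder names (t, c, f are illustrative, not from the original schema): a subquery with its own LIMIT is not flattened into the outer query, so f() is applied only to the row(s) the subquery actually returns.

    PREPARE ps_one_row(integer) AS
    SELECT f(sub.c)
    FROM  (SELECT c
           FROM   t
           WHERE  c > $1
           ORDER  BY c
           LIMIT  1) AS sub;

    EXECUTE ps_one_row(42);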
] |
[
{
"msg_contents": "Sometimes when you click on a link on my site to access my postgres database\nit takes forever for it to connect. You can click this link and see how long\nit takes.\n <http://www.3idiots.com:8080/examples/jsp/movies/wantedlist.jsp>\nhttp://www.3idiots.com:8080/example.../wantedlist.jsp\n\nIt doesn't do it all the time. Sometimes its really fast. I can't figure out\nwhat is wrong. If I go on the server where the database is I can connect and\nrun queries with no problem. I can also access the database from Access and\nrun queries with no problems. It is only a problem when you come from the\nweb. Tomcat and apache are both working. You can see this by going here \n <http://www.3idiots.com:8080/examples/jsp/movies/moviesearch.jsp>\nhttp://www.3idiots.com:8080/example...moviesearch.jsp\n\nI am using \npsql (PostgreSQL) 7.2.1\ntomcat 4.0.1\napache-1.3.9-4\nRedhat linux 6.2\n\nDoes anyone know what I can check to see what is causing this. It seemed to\nhave happened all of sudden one day. I don't know what could have changed to\ncause this. I have rebooted the server and it still happens. Any help will\nbe greatly appreciated.\n\nAlso I need to add that I never get an error message. It just sits there for\nquite some time. Eventually it will work.\n\n \nScott \n \nScott Dunn\nThe Software House, Inc.\n513.563.7780\n \n\n\n\n\n\n\nSometimes \nwhen you click on a link on my site to access my postgres database it takes \nforever for it to connect. You can click this link and see how long it \ntakes.http://www.3idiots.com:8080/example.../wantedlist.jspIt doesn't do it all the time. Sometimes its really fast. I can't \nfigure out what is wrong. If I go on the server where the database is I can \nconnect and run queries with no problem. I can also access the database from \nAccess and run queries with no problems. It is only a problem when you come from \nthe web. Tomcat and apache are both working. You can see this by going here \nhttp://www.3idiots.com:8080/example...moviesearch.jspI am using psql (PostgreSQL) 7.2.1tomcat \n4.0.1apache-1.3.9-4Redhat linux 6.2Does anyone know what I can \ncheck to see what is causing this. It seemed to have happened all of sudden one \nday. I don't know what could have changed to cause this. I have rebooted the \nserver and it still happens. Any help will be greatly appreciated.Also I \nneed to add that I never get an error message. It just sits there for quite some \ntime. Eventually it will work.\n \n\nScott \n\n \n\nScott Dunn\nThe Software House, Inc.\n513.563.7780",
"msg_date": "Thu, 30 Sep 2004 10:46:27 -0400",
"msg_from": "\"Scott Dunn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Web server to Database Taking forever"
},
{
"msg_contents": "Scott,\n\n> Sometimes when you click on a link on my site to access my postgres\n> database it takes forever for it to connect. You can click this link and\n> see how long it takes.\n> <http://www.3idiots.com:8080/examples/jsp/movies/wantedlist.jsp>\n> http://www.3idiots.com:8080/example.../wantedlist.jsp\n\nSounds like it's a problem with your Tomcat and/or JDBC setup. Try the \npgsql-jdbc mailing list for help.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 30 Sep 2004 09:39:04 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Web server to Database Taking forever"
},
{
"msg_contents": "It is all working now. The thing is I didn't change anything. So do you\nstill think its Tomcat or the jdbc driver? \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Josh Berkus\nSent: Thursday, September 30, 2004 12:39 PM\nTo: Scott Dunn\nCc: [email protected]\nSubject: Re: [PERFORM] Web server to Database Taking forever\n\nScott,\n\n> Sometimes when you click on a link on my site to access my postgres \n> database it takes forever for it to connect. You can click this link \n> and see how long it takes.\n> <http://www.3idiots.com:8080/examples/jsp/movies/wantedlist.jsp>\n> http://www.3idiots.com:8080/example.../wantedlist.jsp\n\nSounds like it's a problem with your Tomcat and/or JDBC setup. Try the\npgsql-jdbc mailing list for help.\n\n--\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n",
"msg_date": "Thu, 30 Sep 2004 14:58:27 -0400",
"msg_from": "\"Scott Dunn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Web server to Database Taking forever"
},
{
"msg_contents": "On Thu, 30 Sep 2004 14:58:27 -0400, Scott Dunn <[email protected]> wrote:\n> It is all working now. The thing is I didn't change anything. So do you\n> still think its Tomcat or the jdbc driver?\n> \n\na suspect might be the nature of JSP. on the first hit,\nJSP is converted to a Servlet, the compiled and loaded\nby Tomcat. consequent hits would be fast. first one is always\nslow.\n\nhow did you connect to the database?\n-- \nstp,\neyan\n\ninhale... inhale... hold... expectorate!\n",
"msg_date": "Fri, 1 Oct 2004 03:17:10 +0800",
"msg_from": "Edwin Eyan Moragas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Web server to Database Taking forever"
},
{
"msg_contents": "On Thu, 2004-09-30 at 12:58, Scott Dunn wrote:\n> It is all working now. The thing is I didn't change anything. So do you\n> still think its Tomcat or the jdbc driver? \n\nAre getting an unnaturally large number of processes or threads or\npooled connections or what-not somewhere maybe? Have you tried logging\noutput from something like top or iostat to see what's going crazy when\nthis happens? Just wondering.\n\n",
"msg_date": "Thu, 30 Sep 2004 14:19:21 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Web server to Database Taking forever"
}
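One way to test the "pile-up of connections" theory from the database side, while the site is in one of its slow spells: a sketch only; pg_stat_activity exists in 7.2, though seeing query text additionally needs stats_command_string = true in postgresql.conf.

    SELECT datname, count(*) AS backends
    FROM   pg_stat_activity
    GROUP  BY datname
    ORDER  BY backends DESC;

Combined with top/iostat on the web and database hosts, this shows whether the stall is in obtaining a connection or in running the query.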
] |
[
{
"msg_contents": "Stephan Szabo wrote:\n> On Thu, 30 Sep 2004, Merlin Moncure wrote:\n> \n> > OK, I have a situation that might be a performance problem, a bug,\nor an\n> > unavoidable consequence of using prepared statements. The short\nversion\n> > is that I am getting function executions for rows not returned in a\n> > result set when they are in a prepared statement.\n> An actual boolean expr on t? Or on a column in t?\n[...]\n> I think a reproducible example would be good. Simple attempts to\nduplicate\n> this on 8.0b2 have failed for me, unless I'm using order by.\n\nNote: I confirmed that breaking out the 'where' part of the query into\nsubquery suppresses the behavior.\n\nHere is the actual query:\nselect lock_cuid(id), *\n\tfrom data3.wclaim_line_file\n\twhere wcl_vin_no >= '32-MHAB-C-X-7243' and \n\t\t(wcl_vin_no > '32-MHAB-C-X-7243' or wcl_claim_no >=\n001) and \n\t\t(wcl_vin_no > '32-MHAB-C-X-7243' or wcl_claim_no >\n001 or id > 2671212) \n\torder by wcl_vin_no, wcl_claim_no, id\n\tlimit 1\n\n\nHere is the prepared statement declaration:\nprepare data3_read_next_wclaim_line_file_1_lock (character varying,\nnumeric, int8, numeric)\n\tas select lock_cuid(id), *\n\tfrom data3.wclaim_line_file\n\twhere wcl_vin_no >= $1 and \n\t\t(wcl_vin_no > $1 or wcl_claim_no >= $2) and \n\t\t(wcl_vin_no > $1 or wcl_claim_no > $2 or id > $3) \n\torder by wcl_vin_no, wcl_claim_no, id limit $4\n\n\nHere is the plan when it runs lock_cuid repeatedly (aside: disabling\nseqscans causes an index plan, but that's not the point):\n\nesp=# explain execute data3_read_next_wclaim_line_file_1_lock\n('32-MHAB-C-X-7243', 001, 2671212, 1);\n\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n----------------------------\n------------------------------------------------------------------------\n----------------------------\n--------------------------------\n Limit (cost=13108.95..13162.93 rows=21592 width=260)\n -> Sort (cost=13108.95..13162.93 rows=21592 width=260)\n Sort Key: wcl_vin_no, wcl_claim_no, id\n -> Seq Scan on wclaim_line_file (cost=0.00..11554.52\nrows=21592 width=260)\n Filter: (((wcl_vin_no)::text >= ($1)::text) AND\n(((wcl_vin_no)::text > ($1)::text) OR\n ((wcl_claim_no)::numeric >= $2)) AND (((wcl_vin_no)::text > ($1)::text)\nOR ((wcl_claim_no)::numeric\n > $2) OR ((id)::bigint > $3)))\n(5 rows)\n",
"msg_date": "Thu, 30 Sep 2004 10:58:34 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: spurious function execution in prepared statements."
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Here is the actual query:\n> select lock_cuid(id), *\n> ...\n> \torder by wcl_vin_no, wcl_claim_no, id\n> \tlimit 1\n\nLooks like Stephan made the right guess.\n\nLogically the LIMIT executes after the ORDER BY, so the sorted result\nhas to be formed completely. The fact that we are able to optimize\nthis in some cases does not represent a promise that we can do it in\nall cases. Ergo, it's not a bug.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Sep 2004 11:02:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: spurious function execution in prepared statements. "
}
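Given that explanation, the practical rewrite is the one suggested earlier in the thread: do the ORDER BY/LIMIT inside a subquery and take the lock outside it, so lock_cuid() only ever sees the returned row. A sketch using the same predicates and parameters as the statement above, shown under the original name purely for comparison.

    PREPARE data3_read_next_wclaim_line_file_1_lock
            (character varying, numeric, int8, numeric) AS
    SELECT lock_cuid(sub.id), sub.*
    FROM  (SELECT *
           FROM   data3.wclaim_line_file
           WHERE  wcl_vin_no >= $1
             AND (wcl_vin_no > $1 OR wcl_claim_no >= $2)
             AND (wcl_vin_no > $1 OR wcl_claim_no > $2 OR id > $3)
           ORDER  BY wcl_vin_no, wcl_claim_no, id
           LIMIT  $4) AS sub;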
] |
[
{
"msg_contents": "\n\tTo save some time, let me start by saying\n\nPostgreSQL 7.4.3 on powerpc-apple-darwin7.4.0, compiled by GCC gcc \n(GCC) 3.3 20030304 (Apple Computer, Inc. build 1640)\n\n\tOK, now on to details...\n\n\tI'm trying to implement oracle style ``partitions'' in postgres. I've \nrun into my first snag on what should be a fairly quick query. \nBasically, I started with the following schema and split the \n``samples'' table into one table for each year (1999-2004).\n\n\n-- BEGIN SCHEMA\n\ncreate table sensor_types (\n sensor_type_id serial,\n sensor_type text not null,\n units varchar(10) not null,\n primary key(sensor_type_id)\n);\n\ncreate table sensors (\n sensor_id serial,\n sensor_type_id integer not null,\n serial char(16) not null,\n name text not null,\n low smallint not null,\n high smallint not null,\n active boolean default true,\n primary key(sensor_id),\n foreign key(sensor_type_id) references sensor_types(sensor_type_id)\n);\ncreate unique index sensors_byserial on sensors(serial);\n\ncreate table samples (\n ts datetime not null,\n sensor_id integer not null,\n sample float not null,\n foreign key(sensor_id) references sensors(sensor_id)\n);\ncreate index samples_bytime on samples(ts);\ncreate unique index samples_bytimeid on samples(ts, sensor_id);\n\n-- END SCHEMA\n\n\tEach samples_[year] table looks, and is indexed exactly as the above \nsamples table was by using the following commands:\n\ncreate index samples_1999_bytime on samples_1999(ts);\ncreate index samples_2000_bytime on samples_2000(ts);\ncreate index samples_2001_bytime on samples_2001(ts);\ncreate index samples_2002_bytime on samples_2002(ts);\ncreate index samples_2003_bytime on samples_2003(ts);\ncreate index samples_2004_bytime on samples_2004(ts);\n\ncreate unique index samples_1999_bytimeid on samples_1999(ts, \nsensor_id);\ncreate unique index samples_2000_bytimeid on samples_2000(ts, \nsensor_id);\ncreate unique index samples_2001_bytimeid on samples_2001(ts, \nsensor_id);\ncreate unique index samples_2002_bytimeid on samples_2002(ts, \nsensor_id);\ncreate unique index samples_2003_bytimeid on samples_2003(ts, \nsensor_id);\ncreate unique index samples_2004_bytimeid on samples_2004(ts, \nsensor_id);\n\n\tThe tables contain the following number of rows:\n\nsamples_1999\t311030\nsamples_2000\t2142245\nsamples_2001\t2706571\nsamples_2002\t3111602\nsamples_2003\t3149316\nsamples_2004\t2375972\n\n\tThe following view creates the illusion of the old ``single-table'' \nmodel:\n\ncreate view samples as\n select * from samples_1999\n union select * from samples_2000\n union select * from samples_2001\n union select * from samples_2002\n union select * from samples_2003\n union select * from samples_2004\n\n\t...along with the following rule on the view for the applications \nperforming inserts:\n\ncreate rule sample_rule as on insert to samples\n do instead\n insert into samples_2004 (ts, sensor_id, sample)\n values(new.ts, new.sensor_id, new.sample)\n\n\n\tOK, now that that's over with, I have this one particular query that I \nattempt to run for a report from my phone that no longer works because \nit tries to do a table scan on *some* of the tables. Why it chooses \nthis table scan, I can't imagine. 
The query is as follows:\n\nselect\n s.serial as serial_num,\n s.name as name,\n date(ts) as day,\n min(sample) as min_temp,\n avg(sample) as avg_temp,\n stddev(sample) as stddev_temp,\n max(sample) as max_temp\n from\n samples inner join sensors s using (sensor_id)\n where\n ts > current_date - 7\n group by\n serial_num, name, day\n order by\n serial_num, day desc\n\n\n\texplain analyze reports the following (sorry for the horrible \nwrapping):\n\n Sort (cost=1185281.45..1185285.95 rows=1800 width=50) (actual \ntime=82832.106..82832.147 rows=56 loops=1)\n Sort Key: s.serial, date(samples.ts)\n -> HashAggregate (cost=1185161.62..1185184.12 rows=1800 width=50) \n(actual time=82830.624..82831.601 rows=56 loops=1)\n -> Hash Join (cost=1063980.21..1181539.96 rows=206952 \nwidth=50) (actual time=80408.123..81688.590 rows=66389 loops=1)\n Hash Cond: (\"outer\".sensor_id = \"inner\".sensor_id)\n -> Subquery Scan samples (cost=1063979.10..1155957.38 \nrows=4598914 width=20) (actual time=80392.477..80922.764 rows=66389 \nloops=1)\n -> Unique (cost=1063979.10..1109968.24 \nrows=4598914 width=20) (actual time=80392.451..80646.761 rows=66389 \nloops=1)\n -> Sort (cost=1063979.10..1075476.39 \nrows=4598914 width=20) (actual time=80392.437..80442.787 rows=66389 \nloops=1)\n Sort Key: ts, sensor_id, sample\n -> Append (cost=0.00..312023.46 \nrows=4598914 width=20) (actual time=79014.428..80148.396 rows=66389 \nloops=1)\n -> Subquery Scan \"*SELECT* 1\" \n(cost=0.00..9239.37 rows=103677 width=20) (actual \ntime=4010.181..4010.181 rows=0 loops=1)\n -> Seq Scan on \nsamples_1999 (cost=0.00..8202.60 rows=103677 width=20) (actual \ntime=4010.165..4010.165 rows=0 loops=1)\n Filter: (ts > \n((('now'::text)::date - 7))::timestamp without time zone)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..28646.17 rows=714082 width=20) (actual time=44.827..44.827 \nrows=0 loops=1)\n -> Index Scan using \nsamples_2000_bytime on samples_2000 (cost=0.00..21505.35 rows=714082 \nwidth=20) (actual time=44.818..44.818 rows=0 loops=1)\n Index Cond: (ts > \n((('now'::text)::date - 7))::timestamp without time zone)\n -> Subquery Scan \"*SELECT* 3\" \n(cost=0.00..80393.33 rows=902191 width=20) (actual \ntime=34772.377..34772.377 rows=0 loops=1)\n -> Seq Scan on \nsamples_2001 (cost=0.00..71371.42 rows=902191 width=20) (actual \ntime=34772.366..34772.366 rows=0 loops=1)\n Filter: (ts > \n((('now'::text)::date - 7))::timestamp without time zone)\n -> Subquery Scan \"*SELECT* 4\" \n(cost=0.00..92424.05 rows=1037201 width=20) (actual \ntime=40072.103..40072.103 rows=0 loops=1)\n -> Seq Scan on \nsamples_2002 (cost=0.00..82052.04 rows=1037201 width=20) (actual \ntime=40072.090..40072.090 rows=0 loops=1)\n Filter: (ts > \n((('now'::text)::date - 7))::timestamp without time zone)\n -> Subquery Scan \"*SELECT* 5\" \n(cost=0.00..42380.58 rows=1049772 width=20) (actual time=49.455..49.455 \nrows=0 loops=1)\n -> Index Scan using \nsamples_2003_bytime on samples_2003 (cost=0.00..31882.86 rows=1049772 \nwidth=20) (actual time=49.448..49.448 rows=0 loops=1)\n Index Cond: (ts > \n((('now'::text)::date - 7))::timestamp without time zone)\n -> Subquery Scan \"*SELECT* 6\" \n(cost=0.00..58939.96 rows=791991 width=20) (actual \ntime=65.458..1124.363 rows=66389 loops=1)\n -> Index Scan using \nsamples_2004_bytime on samples_2004 (cost=0.00..51020.05 rows=791991 \nwidth=20) (actual time=65.430..750.336 rows=66389 loops=1)\n Index Cond: (ts > \n((('now'::text)::date - 7))::timestamp without time zone)\n -> Hash (cost=1.09..1.09 rows=9 width=38) (actual 
\ntime=15.295..15.295 rows=0 loops=1)\n -> Seq Scan on sensors s (cost=0.00..1.09 rows=9 \nwidth=38) (actual time=15.122..15.187 rows=9 loops=1)\n Total runtime: 82865.119 ms\n\n\n\tEssentially, what you can see here is that it's doing an index scan on \nsamples_2000, samples_2003, and samples_2004, but a sequential scan on \nsamples_1999, samples_2001, and samples_2002. It's very strange to me \nthat it would make these choices. If I disable sequential scans \naltogether for this session, the query runs in under 4 seconds.\n\n\tThis is a very cool solution for long-term storage, and isn't terribly \nhard to manage. I actually have other report queries that seem to be \nmaking pretty good index selection currently...but I always want more! \n:) Does anyone have any suggestions as to how to get this to do what I \nwant?\n\n\tOf course, ideally, it would ignore five of the tables altogether. :)\n\n--\nSPY My girlfriend asked me which one I like better.\npub 1024/3CAE01D5 1994/11/03 Dustin Sallings <[email protected]>\n| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE\nL_______________________ I hope the answer won't upset her. ____________\n\n",
"msg_date": "Thu, 30 Sep 2004 23:30:49 -0700",
"msg_from": "Dustin Sallings <[email protected]>",
"msg_from_op": true,
"msg_subject": "inconsistent/weird index usage"
},
{
"msg_contents": "Dustin Sallings wrote:\n> \n\n[...]\n\n> OK, now that that's over with, I have this one particular query that \n> I attempt to run for a report from my phone that no longer works because \n> it tries to do a table scan on *some* of the tables. Why it chooses \n> this table scan, I can't imagine. The query is as follows:\n> \n> select\n> s.serial as serial_num,\n> s.name as name,\n> date(ts) as day,\n> min(sample) as min_temp,\n> avg(sample) as avg_temp,\n> stddev(sample) as stddev_temp,\n> max(sample) as max_temp\n> from\n> samples inner join sensors s using (sensor_id)\n> where\n> ts > current_date - 7\n> group by\n> serial_num, name, day\n> order by\n> serial_num, day desc\n> \n> \n\n[ next section heavily clipped for clarity ]\n\n-> Seq Scan on samples_1999 (cost rows=103677) (actual rows=0 loops=1)\n\n-> Index Scan using samples_2000_bytime on samples_2000 (cost \nrows=714082 (actual rows=0 loops=1)\n\n\n-> Seq Scan on samples_2001 (cost rows=902191) (actual rows=0 loops=1)\n\n-> Seq Scan on samples_2002 (cost rows=1037201) (actual rows=0 loops=1)\n\n-> Index Scan using samples_2003_bytime on samples_2003 (cost \nrows=1049772) (actual rows=0 loops=1)\n\n-> Index Scan using samples_2004_bytime on samples_2004 (cost \nrows=791991) (actual rows=66389 loops=1)\n\n[...]\n> \n> \n> Essentially, what you can see here is that it's doing an index scan \n> on samples_2000, samples_2003, and samples_2004, but a sequential scan \n> on samples_1999, samples_2001, and samples_2002. It's very strange to \n> me that it would make these choices. If I disable sequential scans \n> altogether for this session, the query runs in under 4 seconds.\n> \n> This is a very cool solution for long-term storage, and isn't \n> terribly hard to manage. I actually have other report queries that seem \n> to be making pretty good index selection currently...but I always want \n> more! :) Does anyone have any suggestions as to how to get this to do \n> what I want?\n> \n> Of course, ideally, it would ignore five of the tables altogether. :)\n> \n> -- \n> SPY My girlfriend asked me which one I like better.\n> pub 1024/3CAE01D5 1994/11/03 Dustin Sallings <[email protected]>\n> | Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE\n> L_______________________ I hope the answer won't upset her. ____________\n> \n> \n\nJust as a heads up. You have run vacuum analyze before running this \nquery, correct?\n\nBecause you'll notice that the query planner is thinking that it will \nhave 103677 rows from 1999, 700,000 rows from 2000, 900,000 rows from \n2001, etc, etc. Obviously the query planner is not planning well \nconsidering it there are only 60,000 rows from 2004, and no rows from \nanything else.\n\nIt just seems like it hasn't updated it's statistics to be aware of when \nthe time is on most of the tables.\n\n(By the way, an indexed scan returning 0 entries is *really* fast, so I \nwouldn't worry about ignoring the extra tables. :)\n\nI suppose the other question is whether this is a prepared or stored \nquery. Because sometimes the query planner cannot do enough optimization \nin a stored query. (I ran into this problem where I had 1 column with \n500,000+ entries referencing 1 number. If I ran manually, the time was \nmuch better because I wasn't using *that* number. 
With a stored query, \nit had to take into account that I *might* use that number, and didn't \nwant to do 500,000+ indexed lookups)\n\nThe only other thing I can think of is that there might be some \ncollision between datetime and date. Like it is thinking it is looking \nat the time of day when it plans the queries (hence why so many rows), \nbut really it is looking at the date. Perhaps a cast is in order to make \nit work right. I don't really know.\n\nInteresting problem, though.\nJohn\n=:->",
"msg_date": "Fri, 01 Oct 2004 08:53:17 -0500",
"msg_from": "John Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistent/weird index usage"
},
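Whether or not stale statistics are the whole story here (the follow-ups point at the non-constant date arithmetic as well), they are cheap to rule out. A sketch: re-analyze the partition tables and look at what the planner knows about ts via the standard pg_stats view.

    ANALYZE samples_2003;
    ANALYZE samples_2004;    -- repeat for the other years

    SELECT tablename, attname, null_frac, n_distinct, correlation
    FROM   pg_stats
    WHERE  tablename LIKE 'samples_%'
      AND  attname = 'ts';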
{
"msg_contents": "Dustin Sallings <[email protected]> writes:\n> \tThe following view creates the illusion of the old ``single-table'' \n> model:\n\n> create view samples as\n> select * from samples_1999\n> union select * from samples_2000\n> union select * from samples_2001\n> union select * from samples_2002\n> union select * from samples_2003\n> union select * from samples_2004\n\nYou really, really, really want to use UNION ALL not UNION here.\n\n> \tOK, now that that's over with, I have this one particular query that I \n> attempt to run for a report from my phone that no longer works because \n> it tries to do a table scan on *some* of the tables. Why it chooses \n> this table scan, I can't imagine.\n\nMost of the problem here comes from the fact that \"current_date - 7\"\nisn't reducible to a constant and so the planner is making bad guesses\nabout how much of each table will be scanned. If possible, do the date\narithmetic on the client side and send over a simple literal constant.\nIf that's not practical you can fake it with a mislabeled IMMUTABLE\nfunction --- see the list archives for previous discussions of the\nsame issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Oct 2004 10:38:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistent/weird index usage "
},
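The "mislabeled IMMUTABLE function" trick Tom refers to usually looks something like the sketch below; the function name is made up for the example. The IMMUTABLE label is deliberately false: the point is that the planner then folds the call to a constant at plan time and can produce sane row estimates for the ts comparison. The flip side is that the value gets frozen into any long-lived cached plan, so it should not be used inside prepared statements or plpgsql plans that survive across days.

    CREATE OR REPLACE FUNCTION seven_days_ago()
    RETURNS timestamp without time zone AS '
        SELECT (current_date - 7)::timestamp without time zone;
    ' LANGUAGE sql IMMUTABLE;

    -- then in the report query:
    --   WHERE ts > seven_days_ago()
    -- instead of:
    --   WHERE ts > current_date - 7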
{
"msg_contents": "Dustin Sallings wrote:\n> The following view creates the illusion of the old ``single-table'' \n> model:\n> \n> create view samples as\n> select * from samples_1999\n> union select * from samples_2000\n> union select * from samples_2001\n> union select * from samples_2002\n> union select * from samples_2003\n> union select * from samples_2004\n\nTry this with UNION ALL (you know there won't be any duplicates) and \npossibly with some limits too:\n\nSELECT * FROM samples_1999 WHERE ts BETWEEN '1999-01-01 00:00:00+00' AND \n'1999-12-31 11:59:59+00'\nUNION ALL ...\n\n> select\n> s.serial as serial_num,\n> s.name as name,\n> date(ts) as day,\n> min(sample) as min_temp,\n> avg(sample) as avg_temp,\n> stddev(sample) as stddev_temp,\n> max(sample) as max_temp\n> from\n> samples inner join sensors s using (sensor_id)\n> where\n> ts > current_date - 7\n> group by\n> serial_num, name, day\n> order by\n> serial_num, day desc\n\nTry restricting the timestamp too\n\nWHERE\n ts BETWEEN (current_date -7) AND current_timestamp\n\nHopefully that will give the planner enough smarts to know it can skip \nmost of the sample_200x tables.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 01 Oct 2004 15:43:05 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistent/weird index usage"
},
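A sketch of what the combined advice (UNION ALL plus explicit per-table ranges) looks like as a view definition; only two of the six branches are spelled out, the rest follow the same pattern. UNION ALL alone already removes the Sort/Unique step visible in the plan above; whether the 7.4 planner can additionally prove the old branches empty for a given WHERE clause is a separate question, tied to the date-constant discussion that follows.

    CREATE OR REPLACE VIEW samples AS
          SELECT * FROM samples_2003
           WHERE ts >= '2003-01-01' AND ts < '2004-01-01'
    UNION ALL
          SELECT * FROM samples_2004
           WHERE ts >= '2004-01-01' AND ts < '2005-01-01';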
{
"msg_contents": "Tom,\n\n> Most of the problem here comes from the fact that \"current_date - 7\"\n> isn't reducible to a constant and so the planner is making bad guesses\n> about how much of each table will be scanned. \n\nI thought this was fixed in 7.4. No?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 1 Oct 2004 09:29:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistent/weird index usage"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> Most of the problem here comes from the fact that \"current_date - 7\"\n>> isn't reducible to a constant and so the planner is making bad guesses\n>> about how much of each table will be scanned. \n\n> I thought this was fixed in 7.4. No?\n\nNo. It's not fixed as of CVS tip either, although there was some talk\nof doing something in time for 8.0.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Oct 2004 12:34:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistent/weird index usage "
},
{
"msg_contents": "On R, 2004-10-01 at 19:34, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> >> Most of the problem here comes from the fact that \"current_date - 7\"\n> >> isn't reducible to a constant and so the planner is making bad guesses\n> >> about how much of each table will be scanned. \n> \n> > I thought this was fixed in 7.4. No?\n> \n> No. It's not fixed as of CVS tip either, although there was some talk\n> of doing something in time for 8.0.\n\nThat's weird - my 7.4.2 databases did not consider (now()-'15\nmin'::interval) to be a constant whereas 7.4.5 does (i.e. it does use\nindex scan on index on datetime column)\n\nIs this somehow different for date types ?\n\n--------------\nHannu\n",
"msg_date": "Sun, 03 Oct 2004 02:20:41 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistent/weird index usage"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n>> No. It's not fixed as of CVS tip either, although there was some talk\n>> of doing something in time for 8.0.\n\n> That's weird - my 7.4.2 databases did not consider (now()-'15\n> min'::interval) to be a constant whereas 7.4.5 does (i.e. it does use\n> index scan on index on datetime column)\n\nThe question isn't whether it can use it as an indexscan bound; the\nquestion is whether it can derive an accurate rowcount estimate.\nThe issue is exactly that STABLE functions work for one but not the\nother.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Oct 2004 19:44:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistent/weird index usage "
}
] |
[
{
"msg_contents": "Pg: 7.4.5\nRH 7.3\n8g Ram\n200 g drive space\nRAID0+1\nTables vacuum on a nightly basis\n\nThe following process below takes 8 hours to run on 90k records and I'm \nnot sure where to being to look for the bottleneck. This isn't the only \nupdating on this database that seems to take a long time to complete. Is \nthere something I should be looking for in my conf settings? \n\nTIA\nPatrick\n\n\nSQL:\n---Bring back only selected records to run through the update process.\n--Without the function the SQL takes < 10secs to return 90,000 records\nSELECT count(pm.pm_delta_function_amazon(upc.keyp_upc,'amazon'))\nFROM mdc_upc upc\nJOIN public.mdc_products prod ON upc.keyf_products = prod.keyp_products\nJOIN public.mdc_price_post_inc price ON prod.keyp_products = \nprice.keyf_product\nJOIN public.mdc_attribute_product ap on ap.keyf_products = \nprod.keyp_products and keyf_attribute=22\nWHERE \nupper(trim(ap.attributevalue)) NOT IN ('ESTEE LAUDER', \n'CLINIQUE','ORGINS','PRESCRIPTIVES','LANC?ME','CHANEL','ARAMIS','M.A.C','TAG \nHEUER')\nAND keyf_producttype<>222\nAND prod.action_publish = 1;\n\n\nFunction:\n\nCREATE OR REPLACE FUNCTION pm.pm_delta_function_amazon(int4, \"varchar\")\n RETURNS bool AS\n'DECLARE\n varkeyf_upc ALIAS FOR $1;\n varPassword ALIAS FOR $2;\n varRealMD5 varchar;\n varDeltaMD5 varchar;\n varLastTouchDate date;\n varQuery text;\n varQuery1 text;\n varQueryMD5 text;\n varQueryRecord record;\n varFuncStatus boolean := false;\n \nBEGIN\n\n-- Check the password\n IF varPassword <> \\'amazon\\' THEN\n Return false;\n END IF;\n\n\n-- Get the md5 hash for this product\n SELECT into varQueryRecord md5(upc.keyp_upc || prod.description || \npm.pm_price_post_inc(prod.keyp_products)) AS md5\n FROM public.mdc_upc upc\n JOIN public.mdc_products prod ON upc.keyf_products = \nprod.keyp_products\n JOIN public.mdc_price_post_inc price ON price.keyf_product = \nprod.keyp_products\n WHERE upc.keyp_upc = varkeyf_upc LIMIT 1 ;\n \n \n IF NOT FOUND THEN\n RAISE EXCEPTION \\'varRealMD5 is NULL. 
UPC ID is %\\', varkeyf_upc;\n ELSE\n varRealMD5:=varQueryRecord.md5;\n END IF;\n\n-- Check that the product is in the delta table and return its hash for \ncomparison \n SELECT into varQueryRecord md5_hash,last_touch_date \n FROM pm.pm_delta_master_amazon\n WHERE keyf_upc = varkeyf_upc LIMIT 1;\n\n IF NOT FOUND THEN\n -- ADD and exit\n INSERT INTO pm.pm_delta_master_amazon \n(keyf_upc,status,md5_hash,last_touch_date)\n values (varkeyf_upc,\\'add\\',varRealMD5,CURRENT_DATE);\n varFuncStatus:=true;\n RETURN varFuncStatus;\n ELSE\n --Update the record \n --- If the hash matches then set the record to HOLD\n IF varRealMD5 = varQueryRecord.md5_hash THEN\n UPDATE pm.pm_delta_master_amazon\n SET status= \\'hold\\',\n last_touch_date = CURRENT_DATE\n WHERE keyf_upc = varkeyf_upc AND last_touch_date <> CURRENT_DATE; \n\n varFuncStatus:=true;\n ELSE\n -- ELSE mark the item as ADD\n UPDATE pm.pm_delta_master_amazon\n SET status= \\'add\\',\n last_touch_date = CURRENT_DATE\n WHERE keyf_upc = varkeyf_upc;\n varFuncStatus:=true;\n END IF;\n END IF;\n\n RETURN varFuncStatus;\nEND;'\n LANGUAGE 'plpgsql' IMMUTABLE;\n\n\n\nTableDef\nCREATE TABLE pm.pm_delta_master_amazon ( \n keyf_upc int4 ,\n status varchar(6) ,\n md5_hash varchar(40) ,\n last_touch_date date \n )\nGO\n\nCREATE INDEX status_idx\n ON pm.pm_delta_master_amazon(status)\nGO\n\n\n\n\n\nCONF\n--------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or \nopen_datasync\nwal_buffers = 32 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 50 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 600 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\nPatrick Hatcher\nMacys.Com",
"msg_date": "Fri, 1 Oct 2004 11:14:08 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow update/insert process"
},
{
"msg_contents": "Some quick notes:\n\n- Using a side effect of a function to update the database feels bad to me\n- how long does the SELECT into varQueryRecord md5(upc.keyp....\n function take / what does it's explain look like?\n- There are a lot of non-indexed columns on that delta master table, such as keyf_upc. \n I'm guessing you're doing 90,000 x {a lot of slow scans}\n- My temptation would be to rewrite the processing to do a pass of updates, a pass of inserts, \n and then the SELECT\n ----- Original Message ----- \n From: Patrick Hatcher \n To: [email protected] \n Sent: Friday, October 01, 2004 2:14 PM\n Subject: [PERFORM] Slow update/insert process\n\n\n\n Pg: 7.4.5 \n RH 7.3 \n 8g Ram \n 200 g drive space \n RAID0+1 \n Tables vacuum on a nightly basis \n\n The following process below takes 8 hours to run on 90k records and I'm not sure where to being to look for the bottleneck. This isn't the only updating on this database that seems to take a long time to complete. Is there something I should be looking for in my conf settings? \n\n TIA \n Patrick \n\n\n SQL: \n ---Bring back only selected records to run through the update process. \n --Without the function the SQL takes < 10secs to return 90,000 records \n SELECT count(pm.pm_delta_function_amazon(upc.keyp_upc,'amazon')) \n FROM mdc_upc upc \n JOIN public.mdc_products prod ON upc.keyf_products = prod.keyp_products \n JOIN public.mdc_price_post_inc price ON prod.keyp_products = price.keyf_product \n JOIN public.mdc_attribute_product ap on ap.keyf_products = prod.keyp_products and keyf_attribute=22 \n WHERE \n upper(trim(ap.attributevalue)) NOT IN ('ESTEE LAUDER', 'CLINIQUE','ORGINS','PRESCRIPTIVES','LANC?ME','CHANEL','ARAMIS','M.A.C','TAG HEUER') \n AND keyf_producttype<>222 \n AND prod.action_publish = 1; \n\n\n Function: \n\n CREATE OR REPLACE FUNCTION pm.pm_delta_function_amazon(int4, \"varchar\")\n RETURNS bool AS\n 'DECLARE\n varkeyf_upc ALIAS FOR $1;\n varPassword ALIAS FOR $2;\n varRealMD5 varchar;\n varDeltaMD5 varchar;\n varLastTouchDate date;\n varQuery text;\n varQuery1 text;\n varQueryMD5 text;\n varQueryRecord record;\n varFuncStatus boolean := false;\n \n BEGIN\n\n -- Check the password\n IF varPassword <> \\'amazon\\' THEN\n Return false;\n END IF;\n\n\n -- Get the md5 hash for this product\n SELECT into varQueryRecord md5(upc.keyp_upc || prod.description || pm.pm_price_post_inc(prod.keyp_products)) AS md5\n FROM public.mdc_upc upc\n JOIN public.mdc_products prod ON upc.keyf_products = prod.keyp_products\n JOIN public.mdc_price_post_inc price ON price.keyf_product = prod.keyp_products\n WHERE upc.keyp_upc = varkeyf_upc LIMIT 1 ;\n \n\n IF NOT FOUND THEN\n RAISE EXCEPTION \\'varRealMD5 is NULL. 
UPC ID is %\\', varkeyf_upc;\n ELSE\n varRealMD5:=varQueryRecord.md5;\n END IF;\n\n -- Check that the product is in the delta table and return its hash for comparison \n SELECT into varQueryRecord md5_hash,last_touch_date \n FROM pm.pm_delta_master_amazon\n WHERE keyf_upc = varkeyf_upc LIMIT 1;\n\n IF NOT FOUND THEN\n -- ADD and exit\n INSERT INTO pm.pm_delta_master_amazon (keyf_upc,status,md5_hash,last_touch_date)\n values (varkeyf_upc,\\'add\\',varRealMD5,CURRENT_DATE);\n varFuncStatus:=true;\n RETURN varFuncStatus;\n ELSE\n --Update the record \n --- If the hash matches then set the record to HOLD\n IF varRealMD5 = varQueryRecord.md5_hash THEN\n UPDATE pm.pm_delta_master_amazon\n SET status= \\'hold\\',\n last_touch_date = CURRENT_DATE\n WHERE keyf_upc = varkeyf_upc AND last_touch_date <> CURRENT_DATE; \n varFuncStatus:=true;\n ELSE\n -- ELSE mark the item as ADD\n UPDATE pm.pm_delta_master_amazon\n SET status= \\'add\\',\n last_touch_date = CURRENT_DATE\n WHERE keyf_upc = varkeyf_upc;\n varFuncStatus:=true;\n END IF;\n END IF;\n\n RETURN varFuncStatus;\n END;'\n LANGUAGE 'plpgsql' IMMUTABLE;\n\n\n\n TableDef \n CREATE TABLE pm.pm_delta_master_amazon ( \n keyf_upc int4 , \n status varchar(6) , \n md5_hash varchar(40) , \n last_touch_date date \n ) \n GO \n\n CREATE INDEX status_idx \n ON pm.pm_delta_master_amazon(status) \n GO \n\n\n\n\n CONF \n -------- \n # WRITE AHEAD LOG \n #--------------------------------------------------------------------------- \n\n # - Settings - \n\n #fsync = true # turns forced synchronization on or off \n #wal_sync_method = fsync # the default varies across platforms: \n # fsync, fdatasync, open_sync, or open_datasync \n wal_buffers = 32 # min 4, 8KB each \n\n # - Checkpoints - \n\n checkpoint_segments = 50 # in logfile segments, min 1, 16MB each \n checkpoint_timeout = 600 # range 30-3600, in seconds \n #checkpoint_warning = 30 # 0 is off, in seconds \n #commit_delay = 0 # range 0-100000, in microseconds \n #commit_siblings = 5 # range 1-1000 \n\n\n Patrick Hatcher\n Macys.Com",
"msg_date": "Fri, 1 Oct 2004 15:48:50 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update/insert process"
},
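Aaron's suggestion above, a pass of updates followed by a pass of inserts plus an index on keyf_upc, might look roughly like the sketch below. The table and column names come from the post, but the hash expression is abbreviated (the real one also folds in pm.pm_price_post_inc()) and the original function's last_touch_date guard is ignored, so treat this as an illustration of the set-based approach rather than a drop-in replacement.

    -- hypothetical index so the per-UPC lookups stop scanning the delta table
    CREATE INDEX pm_delta_master_amazon_keyf_upc_idx
        ON pm.pm_delta_master_amazon (keyf_upc);

    -- pass 1: one UPDATE covering every UPC already in the delta table
    UPDATE pm.pm_delta_master_amazon
    SET status = CASE WHEN md5_hash = c.new_md5 THEN 'hold' ELSE 'add' END,
        last_touch_date = CURRENT_DATE
    FROM ( SELECT upc.keyp_upc,
                  md5(upc.keyp_upc || prod.description) AS new_md5  -- abbreviated hash
           FROM public.mdc_upc upc
           JOIN public.mdc_products prod ON upc.keyf_products = prod.keyp_products ) c
    WHERE pm_delta_master_amazon.keyf_upc = c.keyp_upc;

    -- pass 2: one INSERT for every UPC not yet in the delta table
    INSERT INTO pm.pm_delta_master_amazon (keyf_upc, status, md5_hash, last_touch_date)
    SELECT c.keyp_upc, 'add', c.new_md5, CURRENT_DATE
    FROM ( SELECT upc.keyp_upc,
                  md5(upc.keyp_upc || prod.description) AS new_md5
           FROM public.mdc_upc upc
           JOIN public.mdc_products prod ON upc.keyf_products = prod.keyp_products ) c
    WHERE NOT EXISTS ( SELECT 1 FROM pm.pm_delta_master_amazon d
                       WHERE d.keyf_upc = c.keyp_upc );

Two set-based statements like these replace 90,000 separate function calls, which is usually where the eight hours go.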
{
"msg_contents": "Thanks for the help.\nI found the culprit. The user had created a function within the function \n( pm.pm_price_post_inc(prod.keyp_products)). Once this was fixed the time \ndropped dramatically.\n\n\nPatrick Hatcher\nMacys.Com\nLegacy Integration Developer\n415-422-1610 office\nHatcherPT - AIM\n\n\n\nPatrick Hatcher <[email protected]> \nSent by: [email protected]\n10/01/04 11:14 AM\n\nTo\n<[email protected]>\ncc\n\nSubject\n[PERFORM] Slow update/insert process\n\n\n\n\n\n\n\nPg: 7.4.5 \nRH 7.3 \n8g Ram \n200 g drive space \nRAID0+1 \nTables vacuum on a nightly basis \n\nThe following process below takes 8 hours to run on 90k records and I'm \nnot sure where to being to look for the bottleneck. This isn't the only \nupdating on this database that seems to take a long time to complete. Is \nthere something I should be looking for in my conf settings? \n\nTIA \nPatrick \n\n\nSQL: \n---Bring back only selected records to run through the update process. \n--Without the function the SQL takes < 10secs to return 90,000 records \nSELECT count(pm.pm_delta_function_amazon(upc.keyp_upc,'amazon')) \nFROM mdc_upc upc \nJOIN public.mdc_products prod ON upc.keyf_products = prod.keyp_products \nJOIN public.mdc_price_post_inc price ON prod.keyp_products = \nprice.keyf_product \nJOIN public.mdc_attribute_product ap on ap.keyf_products = \nprod.keyp_products and keyf_attribute=22 \nWHERE \nupper(trim(ap.attributevalue)) NOT IN ('ESTEE LAUDER', \n'CLINIQUE','ORGINS','PRESCRIPTIVES','LANC?ME','CHANEL','ARAMIS','M.A.C','TAG \nHEUER') \nAND keyf_producttype<>222 \nAND prod.action_publish = 1; \n\n\nFunction: \n\nCREATE OR REPLACE FUNCTION pm.pm_delta_function_amazon(int4, \"varchar\")\n RETURNS bool AS\n'DECLARE\n varkeyf_upc ALIAS FOR $1;\n varPassword ALIAS FOR $2;\n varRealMD5 varchar;\n varDeltaMD5 varchar;\n varLastTouchDate date;\n varQuery text;\n varQuery1 text;\n varQueryMD5 text;\n varQueryRecord record;\n varFuncStatus boolean := false;\n \nBEGIN\n\n-- Check the password\n IF varPassword <> \\'amazon\\' THEN\n Return false;\n END IF;\n\n\n-- Get the md5 hash for this product\n SELECT into varQueryRecord md5(upc.keyp_upc || prod.description || \npm.pm_price_post_inc(prod.keyp_products)) AS md5\n FROM public.mdc_upc upc\n JOIN public.mdc_products prod ON upc.keyf_products = prod.keyp_products\n JOIN public.mdc_price_post_inc price ON price.keyf_product = \nprod.keyp_products\n WHERE upc.keyp_upc = varkeyf_upc LIMIT 1 ;\n \n\n IF NOT FOUND THEN\n RAISE EXCEPTION \\'varRealMD5 is NULL. 
UPC ID is %\\', varkeyf_upc;\n ELSE\n varRealMD5:=varQueryRecord.md5;\n END IF;\n\n-- Check that the product is in the delta table and return its hash for \ncomparison \n SELECT into varQueryRecord md5_hash,last_touch_date \n FROM pm.pm_delta_master_amazon\n WHERE keyf_upc = varkeyf_upc LIMIT 1;\n\n IF NOT FOUND THEN\n -- ADD and exit\n INSERT INTO pm.pm_delta_master_amazon \n(keyf_upc,status,md5_hash,last_touch_date)\n values (varkeyf_upc,\\'add\\',varRealMD5,CURRENT_DATE);\n varFuncStatus:=true;\n RETURN varFuncStatus;\n ELSE\n --Update the record \n --- If the hash matches then set the record to HOLD\n IF varRealMD5 = varQueryRecord.md5_hash THEN\n UPDATE pm.pm_delta_master_amazon\n SET status= \\'hold\\',\n last_touch_date = CURRENT_DATE\n WHERE keyf_upc = varkeyf_upc AND last_touch_date <> CURRENT_DATE; \n varFuncStatus:=true;\n ELSE\n -- ELSE mark the item as ADD\n UPDATE pm.pm_delta_master_amazon\n SET status= \\'add\\',\n last_touch_date = CURRENT_DATE\n WHERE keyf_upc = varkeyf_upc;\n varFuncStatus:=true;\n END IF;\n END IF;\n\n RETURN varFuncStatus;\nEND;'\n LANGUAGE 'plpgsql' IMMUTABLE;\n\n\n\nTableDef \nCREATE TABLE pm.pm_delta_master_amazon ( \n keyf_upc int4 , \n status varchar(6) , \n md5_hash varchar(40) , \n last_touch_date date \n ) \nGO \n\nCREATE INDEX status_idx \n ON pm.pm_delta_master_amazon(status) \nGO \n\n\n\n\nCONF \n-------- \n# WRITE AHEAD LOG \n#--------------------------------------------------------------------------- \n\n\n# - Settings - \n\n#fsync = true # turns forced synchronization on or off \n#wal_sync_method = fsync # the default varies across platforms: \n # fsync, fdatasync, open_sync, or \nopen_datasync \nwal_buffers = 32 # min 4, 8KB each \n\n# - Checkpoints - \n\ncheckpoint_segments = 50 # in logfile segments, min 1, 16MB each \ncheckpoint_timeout = 600 # range 30-3600, in seconds \n#checkpoint_warning = 30 # 0 is off, in seconds \n#commit_delay = 0 # range 0-100000, in microseconds \n#commit_siblings = 5 # range 1-1000 \n\n\nPatrick Hatcher\nMacys.Com",
"msg_date": "Mon, 4 Oct 2004 10:14:13 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update/insert process"
}
] |
[
{
"msg_contents": "Okay, I've got two queries that I think the planner should reduce to be \nlogically equivalent but it's not happening. The example queries below have \nbeen simplified as much as possible while still producing the problem. \n\nWhat I'm trying to do is create a single prepared statement that can handle \nnull parameters rather than have to dynamically generate the statement in my \napp code based on supplied parameters.\n\nBasically the date constants below would be substituted with parameters \nsupplied on a web search form (or nulls).\n\n\nHere is the query and EXPLAIN that runs quickly:\n SELECT case_id FROM case_data \n WHERE case_filed_date > '2004-09-16' \n AND case_filed_date < '2004-09-20'\n\n QUERY PLAN\n-------------------------------------------------------------\nIndex Scan using case_data_case_filed_date on case_data \n(cost=0.00..13790.52 rows=3614 width=18)\n Index Cond: ((case_filed_date > '2004-09-16'::date) \n AND (case_filed_date < '2004-09-20'::date))\n\n\nAnd here is the query and EXPLAIN from the version that I believe the planner \nshould reduce to be logically equivalent:\n SELECT case_id FROM case_data \n WHERE (('2004-09-16' IS NULL) OR (case_filed_date > '2004-09-16'))\n AND (('2004-09-20' IS NULL) OR (case_filed_date < '2004-09-20'))\n\n QUERY PLAN\n-------------------------------------------------------------\nSeq Scan on case_data (cost=0.00..107422.02 rows=27509 width=18)\n Filter: ((('2004-09-16' IS NULL) OR (case_filed_date > '2004-09-16'::date))\n AND (('2004-09-20' IS NULL) OR (case_filed_date < '2004-09-20'::date)))\n\n\nI was hoping that the null comparisons would get folded out by the planner \nrelatively cheaply. But as you can see, the first query uses indexes and the \nsecond one uses sequence scans, thereby taking much longer. I guess my \nquestion is - is there a better way to accomplish what I'm doing in SQL or am \nI going to have to dynamically generate the statement based on supplied \nparameters?\n\nThanks,\nRyan\n",
"msg_date": "Fri, 1 Oct 2004 17:06:54 -0500",
"msg_from": "Ryan VanMiddlesworth <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner problem"
},
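One workaround for the optional-parameter pattern Ryan describes is to fold the NULL test into the bound itself with COALESCE and sentinel dates, which keeps the condition in a form the planner can use as an index qualification. This is only a sketch (the statement name and sentinel values are invented), and a prepared statement still gets a generic plan, so the index scan is not guaranteed:

    -- hypothetical prepared statement; pass NULL to mean 'no bound'
    PREPARE case_search(date, date) AS
    SELECT case_id
    FROM case_data
    WHERE case_filed_date > COALESCE($1, DATE '0001-01-01')
      AND case_filed_date < COALESCE($2, DATE '9999-12-31');

    EXECUTE case_search('2004-09-16', '2004-09-20');
    EXECUTE case_search(NULL, '2004-09-20');   -- open-ended lower bound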
{
"msg_contents": "On Sat, 2 Oct 2004 08:06 am, Ryan VanMiddlesworth wrote:\n\n[snip]\n> \n> \n> Here is the query and EXPLAIN that runs quickly:\n> SELECT case_id FROM case_data \n> WHERE case_filed_date > '2004-09-16' \n> AND case_filed_date < '2004-09-20'\n> \n> QUERY PLAN\n> -------------------------------------------------------------\n> Index Scan using case_data_case_filed_date on case_data \n> (cost=0.00..13790.52 rows=3614 width=18)\n> Index Cond: ((case_filed_date > '2004-09-16'::date) \n> AND (case_filed_date < '2004-09-20'::date))\n> \n> \n> And here is the query and EXPLAIN from the version that I believe the planner \n> should reduce to be logically equivalent:\n> SELECT case_id FROM case_data \n> WHERE (('2004-09-16' IS NULL) OR (case_filed_date > '2004-09-16'))\n> AND (('2004-09-20' IS NULL) OR (case_filed_date < '2004-09-20'))\n> \n> QUERY PLAN\n> -------------------------------------------------------------\n> Seq Scan on case_data (cost=0.00..107422.02 rows=27509 width=18)\n> Filter: ((('2004-09-16' IS NULL) OR (case_filed_date > '2004-09-16'::date))\n> AND (('2004-09-20' IS NULL) OR (case_filed_date < '2004-09-20'::date)))\n> \n> \n> I was hoping that the null comparisons would get folded out by the planner \n> relatively cheaply. But as you can see, the first query uses indexes and the \n> second one uses sequence scans, thereby taking much longer. I guess my \n> question is - is there a better way to accomplish what I'm doing in SQL or am \n> I going to have to dynamically generate the statement based on supplied \n> parameters?\n> \nThe Index does not store NULL values, so you have to do a tables scan to find NULL values.\nThat means the second query cannot use an Index, even if it wanted to.\n\nRegards\n\nRussell Smith\n\n",
"msg_date": "Sun, 3 Oct 2004 08:50:28 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner problem"
},
{
"msg_contents": "Ryan VanMiddlesworth <[email protected]> writes:\n> And here is the query and EXPLAIN from the version that I believe the planner\n> should reduce to be logically equivalent:\n> SELECT case_id FROM case_data \n> WHERE (('2004-09-16' IS NULL) OR (case_filed_date > '2004-09-16'))\n> AND (('2004-09-20' IS NULL) OR (case_filed_date < '2004-09-20'))\n\n> I was hoping that the null comparisons would get folded out by the planner \n> relatively cheaply.\n\nYou could teach eval_const_expressions about simplifying NullTest nodes\nif you think it's important enough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Oct 2004 19:21:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner problem "
},
{
"msg_contents": "\nRussell Smith <[email protected]> writes:\n\n> The Index does not store NULL values\n\nThis is false.\n\nThough the fact that NULL values are indexed in postgres doesn't help with\nthis poster's actual problem.\n\n-- \ngreg\n\n",
"msg_date": "03 Oct 2004 17:56:42 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner problem"
}
] |
[
{
"msg_contents": "Steve,\n\n> I'm used to performance tuning on a select-heavy database, but this\n> will have a very different impact on the system. Does anyone have any\n> experience with an update heavy system, and have any performance hints\n> or hardware suggestions?\n\nMinimal/no indexes on the table(s). Raise checkpoint_segments and consider \nusing commit_siblings/commit_delay if it's a multi-stream application. \nFigure out ways to do inserts instead of updates where possible, and COPY \ninstead of insert, where possible. Put your WAL on its own disk resource.\n\nI'm a little curious as to what kind of app would be 95% writes. A log?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 4 Oct 2004 10:38:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance suggestions for an update-mostly database?"
},
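To make the 'COPY instead of insert' advice above concrete: batching rows into one transaction, or better still one COPY, removes most of the per-statement and per-commit overhead. The table and file names here are invented.

    -- many small INSERTs: wrap them in one transaction instead of one commit each
    BEGIN;
    INSERT INTO readings (sensor_id, taken_at, value) VALUES (42, '2004-10-04 10:00:00', 0.97);
    INSERT INTO readings (sensor_id, taken_at, value) VALUES (42, '2004-10-04 10:00:05', 1.02);
    COMMIT;

    -- or one COPY per batch (server-side file, superuser only;
    -- psql's \copy does the same thing from the client side)
    COPY readings (sensor_id, taken_at, value) FROM '/tmp/readings.tsv';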
{
"msg_contents": "I'm putting together a system where the operation mix is likely to be\n>95% update, <5% select on primary key.\n\nI'm used to performance tuning on a select-heavy database, but this\nwill have a very different impact on the system. Does anyone have any\nexperience with an update heavy system, and have any performance hints\nor hardware suggestions?\n\nCheers,\n Steve\n",
"msg_date": "Mon, 4 Oct 2004 10:40:19 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Performance suggestions for an update-mostly database?"
},
{
"msg_contents": "On Mon, Oct 04, 2004 at 10:38:14AM -0700, Josh Berkus wrote:\n> Steve,\n> \n> > I'm used to performance tuning on a select-heavy database, but this\n> > will have a very different impact on the system. Does anyone have any\n> > experience with an update heavy system, and have any performance hints\n> > or hardware suggestions?\n> \n> Minimal/no indexes on the table(s). Raise checkpoint_segments and consider \n> using commit_siblings/commit_delay if it's a multi-stream application. \n> Figure out ways to do inserts instead of updates where possible, and COPY \n> instead of insert, where possible. Put your WAL on its own disk resource.\n\nThanks.\n\n> I'm a little curious as to what kind of app would be 95% writes. A log?\n\nIt's the backend to a web application. The applications mix of queries\nis pretty normal, but it uses a large, in-core, write-through cache\nbetween the business logic and the database. It has more than usual\nlocality on queries over short time periods, so the vast majority of\nreads should be answered out of the cache and not touch the database.\n\nIn some ways something like Berkeley DB might be a better match to the\nfrontend, but I'm comfortable with PostgreSQL and prefer to have the\npower of SQL commandline for when I need it.\n\nCheers,\n Steve\n",
"msg_date": "Mon, 4 Oct 2004 11:00:36 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance suggestions for an update-mostly database?"
},
{
"msg_contents": "Steve,\n\n> In some ways something like Berkeley DB might be a better match to the\n> frontend, but I'm comfortable with PostgreSQL and prefer to have the\n> power of SQL commandline for when I need it.\n\nWell, if data corruption is not a concern, you can always turn off \ncheckpointing. This will save you a fair amount of overhead.\n\nYou could also look at telegraphCQ. It's not prodcucton yet, but their idea \nof \"streams\" as data sources really seems to fit with what you're talking \nabout.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 4 Oct 2004 12:02:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance suggestions for an update-mostly database?"
},
{
"msg_contents": "And obviously make sure you're vacuuming frequently.\n\nOn Mon, Oct 04, 2004 at 10:38:14AM -0700, Josh Berkus wrote:\n> Steve,\n> \n> > I'm used to performance tuning on a select-heavy database, but this\n> > will have a very different impact on the system. Does anyone have any\n> > experience with an update heavy system, and have any performance hints\n> > or hardware suggestions?\n> \n> Minimal/no indexes on the table(s). Raise checkpoint_segments and consider \n> using commit_siblings/commit_delay if it's a multi-stream application. \n> Figure out ways to do inserts instead of updates where possible, and COPY \n> instead of insert, where possible. Put your WAL on its own disk resource.\n> \n> I'm a little curious as to what kind of app would be 95% writes. A log?\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 4 Oct 2004 14:22:54 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance suggestions for an update-mostly database?"
}
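For an update-mostly table this is worth spelling out: every UPDATE leaves a dead row version behind, so a frequent plain (non-FULL) vacuum of the hot tables, for example from cron, keeps them from bloating. The table name below is a placeholder.

    -- cheap routine maintenance; VERBOSE reports how many dead tuples were found
    VACUUM ANALYZE hot_table;
    VACUUM VERBOSE hot_table;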
] |
[
{
"msg_contents": "would the number of fields in a table significantly affect the\nsearch-query time?\n\n(meaning: less fields = much quicker response?)\n\nI have this database table of items with LOTS of properties per-item,\nthat takes a LONG time to search.\n\nSo as I was benchmarking it against SQLite, MySQL and some others, I\nexported just a few fields for testing, into all three databases.\n\nWhat surprised me the most is that the subset, even in the original\ndatabase, gave search results MUCH faster than the full table!\n\nI know I'm being vague, but does anyone know if this is just common\nknowledge (\"duh! of course!\") or if I should be looking at is as a\nproblem to fix?\n",
"msg_date": "Mon, 4 Oct 2004 16:27:51 -0700",
"msg_from": "Miles Keaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "would number of fields in a table affect search-query time?"
},
{
"msg_contents": "Miles Keaton <[email protected]> writes:\n> What surprised me the most is that the subset, even in the original\n> database, gave search results MUCH faster than the full table!\n\nThe subset table's going to be physically much smaller, so it could just\nbe that this reflects smaller I/O load. Hard to tell without a lot more\ndetail about what case you were testing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Oct 2004 19:32:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: would number of fields in a table affect search-query time? "
},
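A quick way to see the physical-size difference Tom is pointing at is to compare page counts for the two tables (the names are placeholders; relpages and reltuples are only refreshed by VACUUM or ANALYZE):

    -- pages occupied by each table; one page is 8KB with the default build
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('items_full', 'items_subset');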
{
"msg_contents": "On Mon, Oct 04, 2004 at 04:27:51PM -0700, Miles Keaton wrote:\n> would the number of fields in a table significantly affect the\n> search-query time?\n\nMore fields = larger records = fewer records per page = if you read in\neverything, you'll need more I/O.\n\n> I have this database table of items with LOTS of properties per-item,\n> that takes a LONG time to search.\n\nIt's a bit hard to say anything without seeing your actual tables and\nqueries; I'd guess you either have a lot of matches or you're doing a\nsequential scan.\n\nYou might want to normalize your tables, but again, it's hard to say anything\nwithout seeing your actual data.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 5 Oct 2004 01:34:36 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: would number of fields in a table affect search-query time?"
},
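A sketch of the kind of split Steinar hints at: keep the handful of columns that actually appear in WHERE clauses in a narrow table, and move the rarely-filtered properties into a companion table that is joined only for the rows that match. All names below are invented.

    -- narrow table: just the searchable columns
    CREATE TABLE item_search (
        item_id   integer PRIMARY KEY,
        name      text,
        category  integer,
        price     numeric
    );
    CREATE INDEX item_search_category_idx ON item_search (category);

    -- wide companion table: everything else, fetched by id after the search
    CREATE TABLE item_detail (
        item_id      integer PRIMARY KEY REFERENCES item_search,
        long_desc    text,
        other_prop_1 text,
        other_prop_2 text
    );

    SELECT s.item_id, s.name, d.long_desc
    FROM item_search s
    JOIN item_detail d USING (item_id)
    WHERE s.category = 7 AND s.price < 20;

The search then scans short rows (many per 8KB page), and the wide rows are only read for the final result set.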
{
"msg_contents": "Miles,\n\n> would the number of fields in a table significantly affect the\n> search-query time?\n\nYes.\n\nIn addition to the issues mentioned previously, there is the issue of \ncriteria; an OR query on 8 fields is going to take longer to filter than an \nOR query on 2 fields.\n\nAnyway, I think maybe you should tell us more about your database design. \nOften the fastest solution involves a more sophisticated approach toward \nquerying your tables.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 4 Oct 2004 19:06:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: would number of fields in a table affect search-query time?"
}
] |
[
{
"msg_contents": "Running a trivial query in v7.4.2 (installed with fedora core2) using\nEXPLAIN ANALYZE is taking considerably longer than just running the query\n(2mins vs 6 secs). I was using this query to quickly compare a couple of\nsystems after installing a faster disk.\n\nIs this sort of slowdown to be expected?\n\nHere's the query:\n----------------------------------------\n[chris@fedora tmp]$ time psql dbt << ENDSQL\n> select count(*) from etab;\n> ENDSQL\n count\n---------\n 9646782\n(1 row)\n\n\nreal 0m6.532s\nuser 0m0.005s\nsys 0m0.002s\n[chris@fedora tmp]$ time psql dbt << ENDSQL\n> explain analyze select count(*) from etab;\n> ENDSQL\n QUERY PLAN\n\n----------------------------------------------------------------------------\n---------------\n-----------------------------\n Aggregate (cost=182029.78..182029.78 rows=1 width=0) (actual\ntime=112701.488..112701.493\nrows=1 loops=1)\n -> Seq Scan on etab (cost=0.00..157912.82 rows=9646782 width=0) (actual\ntime=0.053..578\n59.120 rows=9646782 loops=1)\n Total runtime: 112701.862 ms\n(3 rows)\n\n\nreal 1m52.716s\nuser 0m0.003s\nsys 0m0.005s\n---------------------------------------\n\nThanks in advance for any clues.\n\nChris Hutchinson\n\n",
"msg_date": "Tue, 5 Oct 2004 16:49:26 +1000",
"msg_from": "\"Chris Hutchinson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXPLAIN ANALYZE much slower than running query normally"
},
{
"msg_contents": "Am Dienstag, 5. Oktober 2004 08:49 schrieb Chris Hutchinson:\n> Running a trivial query in v7.4.2 (installed with fedora core2) using\n> EXPLAIN ANALYZE is taking considerably longer than just running the query\n> (2mins vs 6 secs). I was using this query to quickly compare a couple of\n> systems after installing a faster disk.\n>\n> Is this sort of slowdown to be expected?\n\nno.\n\ndid you run VACCUM ANALYZE before? you should do it after pg_restore your db \nto a new filesystem\n\nin which order did you ran the queries. If you start your server and run two \nequal queries, the second one will be much faster because of some or even all \ndata needed to answer the query is still in the shared buffers. \n\njanning\n\n\n",
"msg_date": "Mon, 11 Oct 2004 14:15:56 +0200",
"msg_from": "Janning Vygen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE much slower than running query normally"
},
{
"msg_contents": "\"Chris Hutchinson\" <[email protected]> writes:\n> Running a trivial query in v7.4.2 (installed with fedora core2) using\n> EXPLAIN ANALYZE is taking considerably longer than just running the query\n> (2mins vs 6 secs). I was using this query to quickly compare a couple of\n> systems after installing a faster disk.\n\nTurning on EXPLAIN ANALYZE will incur two gettimeofday() kernel calls\nper row (in this particular plan), which is definitely nontrivial\noverhead if there's not much I/O going on. I couldn't duplicate your\nresults exactly, but I did see a test case with 2.5 million one-column\nrows go from <4 seconds to 21 seconds, which makes the cost of a\ngettimeofday about 3.4 microseconds on my machine (Fedora Core 3, P4\nrunning at something over 1Ghz). When I widened the rows to a couple\nhundred bytes, the raw runtime went up to 30 seconds and the analyzed\ntime to 50, so the overhead per row is pretty constant, as you'd expect.\n\nSome tests with a simple loop around a gettimeofday call yielded a value\nof 2.16 microsec/gettimeofday, so there's some overhead attributable to\nthe EXPLAIN mechanism as well, but the kernel call is clearly the bulk\nof it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Oct 2004 20:28:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE much slower than running query normally "
}
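Plugging Chris's own numbers into Tom's explanation: the instrumented run took roughly 106 seconds longer than the plain one over 9,646,782 rows, i.e. on the order of 11 microseconds of timing overhead per row, which is consistent with a gettimeofday() pair per row costing several microseconds per call on that box. As a rough check:

    -- per-row instrumentation overhead implied by the two timings in the original post
    SELECT (112.701 - 6.532) / 9646782 * 1000000 AS approx_overhead_us_per_row;
    -- comes out near 11 microseconds per row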
] |
[
{
"msg_contents": "All,\n\nI realize the excessive-context-switching-on-xeon issue has been \ndiscussed at length in the past, but I wanted to follow up and verify my \nconclusion from those discussions:\n\nOn a 2-way or 4-way Xeon box, there is no way to avoid excessive \n(30,000-60,000 per second) context switches when using PostgreSQL 7.4.5 \nto query a data set small enough to fit into main memory under a \nsignificant load.\n\nI am experiencing said symptom on two different dual-Xeon boxes, both \nDells with ServerWorks chipsets, running the latest RH9 and RHEL3 \nkernels, respectively. The databases are 90% read, 10% write, and are \nsmall enough to fit entirely into main memory, between pg shared buffers \nand kernel buffers.\n\nWe recently invested in an solid-state storage device \n(http://www.superssd.com/products/ramsan-320/) to help write \nperformance. Our entire pg data directory is stored on it. Regrettably \n(and in retrospect, unsurprisingly) we found that opening up the I/O \nbottleneck does little for write performance when the server is under \nload, due to the bottleneck created by excessive context switching. Is \nthe only solution then to move to a different SMP architecture such as \nItanium 2 or Opteron? If so, should we expect to see an additional \nbenefit from running PostgreSQL on a 64-bit architecture, versus 32-bit, \ncontext switching aside? Alternatively, are there good 32-bit SMP \narchitectures to consider other than Xeon, given the high cost of \nItanium 2 and Opteron systems?\n\nMore generally, how have others scaled \"up\" their PostgreSQL \nenvironments? We will eventually have to invent some \"outward\" \nscalability within the logic of our application (e.g. do read-only \ntransactions against a pool of Slony-I subscribers), but in the short \nterm we still have an urgent need to scale upward. Thoughts? General wisdom?\n\nBest Regards,\n\nBill Montgomery\n",
"msg_date": "Tue, 05 Oct 2004 12:21:40 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Bill,\n\n> I realize the excessive-context-switching-on-xeon issue has been\n> discussed at length in the past, but I wanted to follow up and verify my\n> conclusion from those discussions:\n\nFirst off, the good news: Gavin Sherry and OSDL may have made some progress \non this. We'll be testing as soon as OSDL gets the Scalable Test Platform \nrunning again. If you have the CS problem (which I don't think you do, see \nbelow) and a test box, I'd be thrilled to have you test it.\n\n> On a 2-way or 4-way Xeon box, there is no way to avoid excessive\n> (30,000-60,000 per second) context switches when using PostgreSQL 7.4.5\n> to query a data set small enough to fit into main memory under a\n> significant load.\n\nHmmm ... some clarification:\n1) I don't really consider a CS of 30,000 to 60,000 on Xeon to be excessive. \nPeople demonstrating the problem on dual or quad Xeon reported CS levels of \n150,000 or more. So you probably don't have this issue at all -- depending \non the load, your level could be considered \"normal\".\n\n2) The problem is not limited to Xeon, Linux, or x86 architecture. It has \nbeen demonstrated, for example, on 8-way Solaris machines. It's just worse \n(and thus more noticable) on Xeon.\n\n> I am experiencing said symptom on two different dual-Xeon boxes, both\n> Dells with ServerWorks chipsets, running the latest RH9 and RHEL3\n> kernels, respectively. The databases are 90% read, 10% write, and are\n> small enough to fit entirely into main memory, between pg shared buffers\n> and kernel buffers.\n\nAh. Well, you do have the worst possible architecture for PostgreSQL-SMP \nperformance. The ServerWorks chipset is badly flawed (the company is now, I \nbelieve, bankrupt from recalled products) and Xeons have several performance \nissues on databases based on online tests.\n\n> We recently invested in an solid-state storage device\n> (http://www.superssd.com/products/ramsan-320/) to help write\n> performance. Our entire pg data directory is stored on it. Regrettably\n> (and in retrospect, unsurprisingly) we found that opening up the I/O\n> bottleneck does little for write performance when the server is under\n> load, due to the bottleneck created by excessive context switching. \n\nWell, if you're CPU-bound, improved I/O won't help you, no.\n\n> Is \n> the only solution then to move to a different SMP architecture such as\n> Itanium 2 or Opteron? If so, should we expect to see an additional\n> benefit from running PostgreSQL on a 64-bit architecture, versus 32-bit,\n> context switching aside? \n\nYour performance will almost certainly be better for a variety of reasons on \nOpteron/Itanium. However, I'm still not convinced that you have the CS \nbug.\n\n> Alternatively, are there good 32-bit SMP \n> architectures to consider other than Xeon, given the high cost of\n> Itanium 2 and Opteron systems?\n\nAthalonMP appears to be less suseptible to the CS bug than Xeon, and the \neffect of the bug is not as severe. However, a quad-Opteron box can be \nbuilt for less than $6000; what's your standard for \"expensive\"? If you \ndon't have that much money, then you may be stuck for options.\n\n> More generally, how have others scaled \"up\" their PostgreSQL\n> environments? We will eventually have to invent some \"outward\"\n> scalability within the logic of our application (e.g. do read-only\n> transactions against a pool of Slony-I subscribers), but in the short\n> term we still have an urgent need to scale upward. Thoughts? 
General\n> wisdom?\n\nAs long as you're on x86, scaling outward is the way to go. If you want to \ncontinue to scale upwards, ask Andrew Sullivan about his experiences running \nPostgreSQL on big IBM boxes. But if you consider an quad-Opteron server \nexpensive, I don't think that's an option for you.\n\nOverall, though, I'm not convinced that you have the CS bug and I think it's \nmore likely that you have a few \"bad queries\" which are dragging down the \nwhole system. Troubleshoot those and your CPU-bound problems may go away.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 5 Oct 2004 09:47:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
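If the 'few bad queries' theory is worth testing, 7.4 gives a couple of low-effort ways to look for them; the one-second threshold below is arbitrary, and pg_stat_activity only shows query text when stats_command_string is enabled.

    -- log any statement slower than 1000 ms (postgresql.conf, or per session as a superuser)
    SET log_min_duration_statement = 1000;

    -- snapshot of what the non-idle backends are running right now
    SELECT procpid, usename, current_query
    FROM pg_stat_activity
    WHERE current_query NOT LIKE '<IDLE>%';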
{
"msg_contents": "Bill Montgomery wrote:\n> All,\n> \n> I realize the excessive-context-switching-on-xeon issue has been \n> discussed at length in the past, but I wanted to follow up and verify my \n> conclusion from those discussions:\n> \n> On a 2-way or 4-way Xeon box, there is no way to avoid excessive \n> (30,000-60,000 per second) context switches when using PostgreSQL 7.4.5 \n> to query a data set small enough to fit into main memory under a \n> significant load.\n> \n> I am experiencing said symptom on two different dual-Xeon boxes, both \n> Dells with ServerWorks chipsets, running the latest RH9 and RHEL3 \n> kernels, respectively. The databases are 90% read, 10% write, and are \n> small enough to fit entirely into main memory, between pg shared buffers \n> and kernel buffers.\n> \n\nI don't know if my box is not loaded enough but I have a dual-Xeon box,\nby DELL with the HT enabled and I'm not experiencing this kind of CS\nproblem, normaly hour CS is around 100000 per second.\n\n# cat /proc/version\nLinux version 2.4.9-e.24smp ([email protected]) (gcc version 2.96 20000731 (Red Hat Linux 7.2 2.96-118.7.2)) #1 SMP Tue May 27 16:07:39 EDT 2003\n\n\n# cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) CPU 2.80GHz\nstepping : 7\ncpu MHz : 2787.139\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm\nbogomips : 5557.45\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) CPU 2.80GHz\nstepping : 7\ncpu MHz : 2787.139\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm\nbogomips : 5570.56\n\nprocessor : 2\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) CPU 2.80GHz\nstepping : 7\ncpu MHz : 2787.139\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm\nbogomips : 5570.56\n\nprocessor : 3\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) CPU 2.80GHz\nstepping : 7\ncpu MHz : 2787.139\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm\nbogomips : 5570.56\n\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 05 Oct 2004 23:08:23 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Thanks for the helpful response.\n\nJosh Berkus wrote:\n\n> First off, the good news: Gavin Sherry and OSDL may have made some \n> progress\n>\n>on this. We'll be testing as soon as OSDL gets the Scalable Test Platform \n>running again. If you have the CS problem (which I don't think you do, see \n>below) and a test box, I'd be thrilled to have you test it.\n>\n\nI'd be thrilled to test it too, if for no other reason that to determine \nwhether what I'm experiencing really is the \"CS problem\".\n\n>1) I don't really consider a CS of 30,000 to 60,000 on Xeon to be excessive. \n>People demonstrating the problem on dual or quad Xeon reported CS levels of \n>150,000 or more. So you probably don't have this issue at all -- depending \n>on the load, your level could be considered \"normal\".\n>\n\nFair enough. I never see nearly this much context switching on my dual \nXeon boxes running dozens (sometimes hundreds) of concurrent apache \nprocesses, but I'll concede this could just be due to the more parallel \nnature of a bunch of independent apache workers.\n\n>>I am experiencing said symptom on two different dual-Xeon boxes, both\n>>Dells with ServerWorks chipsets, running the latest RH9 and RHEL3\n>>kernels, respectively. The databases are 90% read, 10% write, and are\n>>small enough to fit entirely into main memory, between pg shared buffers\n>>and kernel buffers.\n>>\n>\n>Ah. Well, you do have the worst possible architecture for PostgreSQL-SMP \n>performance. The ServerWorks chipset is badly flawed (the company is now, I \n>believe, bankrupt from recalled products) and Xeons have several performance \n>issues on databases based on online tests.\n>\n\nHence my desire for recommendations on alternate architectures ;-)\n\n>AthalonMP appears to be less suseptible to the CS bug than Xeon, and the \n>effect of the bug is not as severe. However, a quad-Opteron box can be \n>built for less than $6000; what's your standard for \"expensive\"? If you \n>don't have that much money, then you may be stuck for options.\n>\n\nBeing a 24x7x365 shop, and these servers being mission critical, I \nrequire vendors that can offer 24x7 4-hour part replacement, like Dell \nor IBM. I haven't seen 4-way 64-bit boxes meeting that requirement for \nless than $20,000, and that's for a very minimally configured box. A \nsuitably configured pair will likely end up costing $50,000 or more. I \nwould like to avoid an unexpected expense of that size, unless there's \nno other good alternative. That said, I'm all ears for a cheaper \nalternative that meets my support and performance requirements.\n\n>Overall, though, I'm not convinced that you have the CS bug and I think it's \n>more likely that you have a few \"bad queries\" which are dragging down the \n>whole system. 
Troubleshoot those and your CPU-bound problems may go away.\n>\n\nYou may be right, but to compare apples to apples, here's some vmstat \noutput from a pgbench run:\n\n[billm@xxx billm]$ pgbench -i -s 20 pgbench\n<snip>\n[billm@xxx billm]$ pgbench -s 20 -t 500 -c 100 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 20\nnumber of clients: 100\nnumber of transactions per client: 500\nnumber of transactions actually processed: 50000/50000\ntps = 369.717832 (including connections establishing)\ntps = 370.852058 (excluding connections establishing)\n\nand some of the vmstat output...\n\n[billm@poe billm]$ vmstat 1\nprocs memory swap io \nsystem cpu\n r b swpd free buff cache si so bi bo in cs us sy \nwa id\n 0 1 0 863108 220620 1571924 0 0 4 64 34 50 1 \n0 0 98\n 0 1 0 863092 220620 1571932 0 0 0 3144 171 2037 3 \n3 47 47\n 0 1 0 863084 220620 1571956 0 0 0 5840 202 3702 6 \n3 46 45\n 1 1 0 862656 220620 1572420 0 0 0 12948 631 42093 69 \n22 5 5\n11 0 0 862188 220620 1572828 0 0 0 12644 531 41330 70 \n23 2 5\n 9 0 0 862020 220620 1573076 0 0 0 8396 457 28445 43 \n17 17 22\n 9 0 0 861620 220620 1573556 0 0 0 13564 726 44330 72 \n22 2 5\n 8 1 0 861248 220620 1573980 0 0 0 12564 660 43667 65 \n26 2 7\n 3 1 0 860704 220624 1574236 0 0 0 14588 646 41176 62 \n25 5 8\n 0 1 0 860440 220624 1574476 0 0 0 42184 865 31704 44 \n23 15 18\n 8 0 0 860320 220624 1574628 0 0 0 10796 403 19971 31 \n10 29 29\n 0 1 0 860040 220624 1574884 0 0 0 23588 654 36442 49 \n20 13 17\n 0 1 0 859984 220624 1574932 0 0 0 4940 229 3884 5 \n3 45 46\n 0 1 0 859940 220624 1575004 0 0 0 12140 355 13454 20 \n10 35 35\n 0 1 0 859904 220624 1575044 0 0 0 5044 218 6922 11 \n5 41 43\n 1 1 0 859868 220624 1575052 0 0 0 4808 199 2029 3 \n3 47 48\n 0 1 0 859720 220624 1575180 0 0 0 21596 485 18075 28 \n13 29 30\n11 1 0 859372 220624 1575532 0 0 0 24520 609 41409 62 \n33 2 3\n\nWhile pgbench does not generate quite as high a number of CS as our app, \nit is an apples-to-apples comparison, and rules out the possibility of \npoorly written queries in our app. Still, 40k CS/sec seems high to me. \nWhile pgbench is just a synthetic benchmark, and not necessarily the \nbest benchmark, yada yada, 370 tps seems like pretty poor performance. \nI've benchmarked the IO subsystem at 70MB/s of random 8k writes, yet \npgbench typically doesn't use more than 10MB/s of that bandwidth (a \nlittle more at checkpoints).\n\nSo I guess the question is this: now that I've opened up the IO \nbottleneck that exists on most database servers, am I really truly CPU \nbound now, and not just suffering from poorly handled spinlocks on my \nXeon/ServerWorks platform? If so, is the expense of a 64-bit system \nworth it, or is the price/performance for PostgreSQL still better on an \nalternative 32-bit platform, like AthlonMP?\n\nBest Regards,\n\nBill Montgomery\n",
"msg_date": "Tue, 05 Oct 2004 17:08:32 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Bill,\n\n> I'd be thrilled to test it too, if for no other reason that to determine\n> whether what I'm experiencing really is the \"CS problem\".\n\nHmmm ... Gavin's patch is built against 8.0, and any version of the patch \nwould require linux 2.6, probably 2.6.7 minimum. Can you test on that linux \nversion? Do you have the resources to back-port Gavin's patch? \n\n> Fair enough. I never see nearly this much context switching on my dual\n> Xeon boxes running dozens (sometimes hundreds) of concurrent apache\n> processes, but I'll concede this could just be due to the more parallel\n> nature of a bunch of independent apache workers.\n\nCertainly could be. Heavy CSes only happen when you have a number of \nlong-running processes with contention for RAM in my experience. If Apache \nis dispatching thing quickly enough, they'd never arise.\n\n> Hence my desire for recommendations on alternate architectures ;-)\n\nWell, you could certainly stay on Xeon if there's better support availability. \nJust get off Dell *650's. \n\n> Being a 24x7x365 shop, and these servers being mission critical, I\n> require vendors that can offer 24x7 4-hour part replacement, like Dell\n> or IBM. I haven't seen 4-way 64-bit boxes meeting that requirement for\n> less than $20,000, and that's for a very minimally configured box. A\n> suitably configured pair will likely end up costing $50,000 or more. I\n> would like to avoid an unexpected expense of that size, unless there's\n> no other good alternative. That said, I'm all ears for a cheaper\n> alternative that meets my support and performance requirements.\n\nNo, you're going to pay through the nose for that support level. It's how \nthings work.\n\n> tps = 369.717832 (including connections establishing)\n> tps = 370.852058 (excluding connections establishing)\n\nDoesn't seem too bad to me. Have anything to compare it to?\n\nWhat's in your postgresql.conf?\n\n--Josh\n",
"msg_date": "Tue, 5 Oct 2004 15:38:51 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "A few quick random observations on the Xeon v. Opteron comparison:\n\n- running a dual Xeon with hyperthreading turned on really isn't the \nsame as having a quad cpu system. I haven't seen postgresql specific \nbenchmarks, but the general case has been that HT is a benefit in a few \nparticular work loads but with no benefit in general.\n\n- We're running postgresql 8 (in production!) on a dual Opteron 250, \nLinux 2.6, 8GB memory, 1.7TB of attached fiber channel disk, etc. This \nmachine is fast. A dual 2.8 Ghz Xeon with 512K caches (with or \nwithout HT enabled) simlpy won't be in the same performance league as \nthis dual Opteron system (assuming identical disk systems, etc). We run \na Linux 2.6 kernel because it scales under load so much better than the \n2.4 kernels.\n\nThe units we're using (and we have a lot of them) are SunFire v20z. You \ncan get a dualie Opteron 250 for $7K with 4GB memory from Sun. My \npersonal experience with this setup in a mission critical config is to \nnot depend on 4 hour spare parts, but to spend the money and install the \nspare in the rack. Naturally, one can go cheaper with slower cpus, \ndifferent vendors, etc.\n\n\nI don't care to go into the whole debate of Xeon v. Opteron here. We \nalso have a lot of dual Xeon systems. In every comparison I've done with \nour codes, the dual Opteron clearly outperforms the dual Xeon, when \nrunning on one and both cpus.\n\n\n-- Alan\n\n\n\nJosh Berkus wrote:\n\n>Bill,\n>\n> \n>\n>>I'd be thrilled to test it too, if for no other reason that to determine\n>>whether what I'm experiencing really is the \"CS problem\".\n>> \n>>\n>\n>Hmmm ... Gavin's patch is built against 8.0, and any version of the patch \n>would require linux 2.6, probably 2.6.7 minimum. Can you test on that linux \n>version? Do you have the resources to back-port Gavin's patch? \n>\n> \n>\n>>Fair enough. I never see nearly this much context switching on my dual\n>>Xeon boxes running dozens (sometimes hundreds) of concurrent apache\n>>processes, but I'll concede this could just be due to the more parallel\n>>nature of a bunch of independent apache workers.\n>> \n>>\n>\n>Certainly could be. Heavy CSes only happen when you have a number of \n>long-running processes with contention for RAM in my experience. If Apache \n>is dispatching thing quickly enough, they'd never arise.\n>\n> \n>\n>>Hence my desire for recommendations on alternate architectures ;-)\n>> \n>>\n>\n>Well, you could certainly stay on Xeon if there's better support availability. \n>Just get off Dell *650's. \n>\n> \n>\n>>Being a 24x7x365 shop, and these servers being mission critical, I\n>>require vendors that can offer 24x7 4-hour part replacement, like Dell\n>>or IBM. I haven't seen 4-way 64-bit boxes meeting that requirement for\n>>less than $20,000, and that's for a very minimally configured box. A\n>>suitably configured pair will likely end up costing $50,000 or more. I\n>>would like to avoid an unexpected expense of that size, unless there's\n>>no other good alternative. That said, I'm all ears for a cheaper\n>>alternative that meets my support and performance requirements.\n>> \n>>\n>\n>No, you're going to pay through the nose for that support level. It's how \n>things work.\n>\n> \n>\n>>tps = 369.717832 (including connections establishing)\n>>tps = 370.852058 (excluding connections establishing)\n>> \n>>\n>\n>Doesn't seem too bad to me. 
Have anything to compare it to?\n>\n>What's in your postgresql.conf?\n>\n>--Josh\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n",
"msg_date": "Tue, 05 Oct 2004 23:59:10 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "\nAlan Stange <[email protected]> writes:\n\n> A few quick random observations on the Xeon v. Opteron comparison:\n> \n> - running a dual Xeon with hyperthreading turned on really isn't the same as\n> having a quad cpu system. I haven't seen postgresql specific benchmarks, but\n> the general case has been that HT is a benefit in a few particular work\n> loads but with no benefit in general.\n\nPart of the FUD with hyperthreading did have a kernel of truth that lied in\nolder kernels' schedulers. For example with Linux until recently the kernel\ncan easily end up scheduling two processes on the two virtual processors of\none single physical processor, leaving the other physical processor totally\nidle.\n\nWith modern kernels' schedulers I would expect hyperthreading to live up to\nits billing of adding 10% to 20% performance. Ie., a dual Xeon machine with\nhyperthreading won't be as fast as four processors, but it should be 10-20%\nfaster than a dual Xeon without hyperthreading.\n\nAs with all things that will only help if you're bound by the right limited\nresource to begin with. If you're I/O bound it isn't going to help. I would\nexpect Postgres with its heavy demand on memory bandwidth and shared memory\ncould potentially benefit more than usual from being able to context switch\nduring pipeline stalls.\n\n-- \ngreg\n\n",
"msg_date": "06 Oct 2004 03:44:02 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Greg Stark wrote:\n\n>Alan Stange <[email protected]> writes:\n> \n>\n>>A few quick random observations on the Xeon v. Opteron comparison:\n>>\n>>- running a dual Xeon with hyperthreading turned on really isn't the same as\n>>having a quad cpu system. I haven't seen postgresql specific benchmarks, but\n>>the general case has been that HT is a benefit in a few particular work\n>>loads but with no benefit in general.\n>> \n>>\n>Part of the FUD with hyperthreading did have a kernel of truth that lied in\n>older kernels' schedulers. For example with Linux until recently the kernel\n>can easily end up scheduling two processes on the two virtual processors of\n>one single physical processor, leaving the other physical processor totally\n>idle.\n>\n>With modern kernels' schedulers I would expect hyperthreading to live up to\n>its billing of adding 10% to 20% performance. Ie., a dual Xeon machine with\n>hyperthreading won't be as fast as four processors, but it should be 10-20%\n>faster than a dual Xeon without hyperthreading.\n>\n>As with all things that will only help if you're bound by the right limited\n>resource to begin with. If you're I/O bound it isn't going to help. I would\n>expect Postgres with its heavy demand on memory bandwidth and shared memory\n>could potentially benefit more than usual from being able to context switch\n>during pipeline stalls.\n> \n>\nAll true. I'd be surprised if HT on an older 2.8 Ghz Xeon with only a \n512K cache will see any real benefit. The dual Xeon is already memory \nstarved, now further increase the memory pressure on the caches (because \nthe 512K is now \"shared\" by two virtual processors) and you probably \nwon't see a gain. It's memory stalls all around. To be clear, the \ncontext switch in this case isn't a kernel context switch but a \"virtual \ncpu\" context switch.\n\nThe probable reason we see dual Opteron boxes way outperforming dual \nXeons boxes is exactly because of Postgresql's heavy demand on memory. \nThe Opteron's have a much better memory system.\n\nA quick search on google or digging around in the comp.arch archives \nwill provide lots of details. HP's web site has (had?) some \nbenchmarks comparing these systems. HP sells both Xeon and Opteron \nsystems, so the comparison were quite \"fair\". Their numbers showed the \nOpteron handily outperfoming the Xeons.\n\n-- Alan\n",
"msg_date": "Wed, 06 Oct 2004 09:11:50 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Josh Berkus wrote:\n\n>>I'd be thrilled to test it too, if for no other reason that to determine\n>>whether what I'm experiencing really is the \"CS problem\".\n>> \n>>\n>\n>Hmmm ... Gavin's patch is built against 8.0, and any version of the patch \n>would require linux 2.6, probably 2.6.7 minimum. Can you test on that linux \n>version? Do you have the resources to back-port Gavin's patch? \n> \n>\n\nI don't currently have any SMP Xeon systems running a 2.6 kernel, but it \ncould be arranged. As for back-porting the patch to 7.4.5, probably so, \nbut I'd have to see it first.\n\n>>tps = 369.717832 (including connections establishing)\n>>tps = 370.852058 (excluding connections establishing)\n>> \n>>\n>\n>Doesn't seem too bad to me. Have anything to compare it to?\n> \n>\n\nYes, about 280 tps on the same machine with the data directory on a \n3-disk RAID 5 w/ a 128MB cache, rather than the SSD. I was expecting a \nmuch larger increase, given that the RAID does about 3MB/s of random 8k \nwrites, and the SSD device does about 70MB/s of random 8k writes. Said \ndifferently, I thought my CPU bottleneck would be much higher, as to \nallow for more than a 30% increase in pgbench TPS when I took the IO \nbottleneck out of the equation. (That said, I'm not tuning for pgbench, \nbut it is a useful comparison that everyone on the list is familiar \nwith, and takes out the possibility that my app just has a bunch of \npoorly written queries).\n\n>What's in your postgresql.conf?\n> \n>\n\nSome relevant parameters:\nshared_buffers = 16384\nsort_mem = 2048\nvacuum_mem = 16384\nmax_fsm_pages = 200000\nmax_fsm_relations = 10000\nfsync = true\nwal_sync_method = fsync\nwal_buffers = 32\ncheckpoint_segments = 6\neffective_cache_size = 262144\nrandom_page_cost = 0.25\n\nEverything else is left at the default (or not relevant to this post). \nAnything blatantly stupid in there for my setup?\n\nThanks,\n\nBill Montgomery\n",
"msg_date": "Wed, 06 Oct 2004 11:45:30 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Hmmm...\n\nI may be mistaken (I think last time I read about optimization params was in\n7.3 docs), but doesn't RPC < 1 mean that random read is faster than\nsequential read? In your case, do you really think reading randomly is 4x\nfaster than reading sequentially? Doesn't seem to make sense, even with a\nzillion-disk array. Theoretically.\n\nAlso not sure, but sort_mem and vacuum_mem seem to be too small to me.\n\nG.\n%----------------------- cut here -----------------------%\n\\end\n\n----- Original Message ----- \nFrom: \"Bill Montgomery\" <[email protected]>\nSent: Wednesday, October 06, 2004 5:45 PM\n\n\n> Some relevant parameters:\n> shared_buffers = 16384\n> sort_mem = 2048\n> vacuum_mem = 16384\n> max_fsm_pages = 200000\n> max_fsm_relations = 10000\n> fsync = true\n> wal_sync_method = fsync\n> wal_buffers = 32\n> checkpoint_segments = 6\n> effective_cache_size = 262144\n> random_page_cost = 0.25\n\n",
"msg_date": "Wed, 6 Oct 2004 19:28:45 +0200",
"msg_from": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Alan Stange wrote:\n> A few quick random observations on the Xeon v. Opteron comparison:\n\n[SNIP]\n\n> I don't care to go into the whole debate of Xeon v. Opteron here. We \n> also have a lot of dual Xeon systems. In every comparison I've done with \n> our codes, the dual Opteron clearly outperforms the dual Xeon, when \n> running on one and both cpus.\n\nHere http://www6.tomshardware.com/cpu/20030422/ both were tested and there is\na database performance section, unfortunatelly they used MySQL.\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Thu, 07 Oct 2004 01:48:22 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Here's a few numbers from the Opteron 250. If I get some time I'll post \na more comprehensive comparison including some other systems.\n\nThe system is a Sun v20z. Dual Opteron 250, 2.4Ghz, Linux 2.6, 8 GB \nmemory. I did a compile and install of pg 8.0 beta 3. I created a \ndata base on a tmpfs file system and ran pgbench. Everything was \"out \nof the box\", meaning I did not tweak any config files.\n\nI used this for pgbench:\n$ pgbench -i -s 32\n\nand this for pgbench invocations:\n$ pgbench -s 32 -c 1 -t 10000 -v\n\n\nclients tps \n1 1290 \n2 1780 \n4 1760 \n8 1680 \n16 1376 \n32 904\n\n\nHow are these results useful? In some sense, this is a speed of light \nnumber for the Opteron 250. You'll never go faster on this system with \na real storage subsystem involved instead of a tmpfs file system. It's \nalso a set of numbers that anyone else can reproduce as we don't have to \ndeal with any differences in file systems, disk subsystems, networking, \netc. Finally, it's a set of results that anyone else can compute on \nXeon's or other systems and make a simple (and naive) comparisons.\n\n\nJust to stay on topic: vmstat reported about 30K cs / second while \nthis was running the 1 and 2 client cases.\n\n-- Alan\n\n",
"msg_date": "Wed, 06 Oct 2004 23:14:20 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Alan Stange wrote:\n\n> Here's a few numbers from the Opteron 250. If I get some time I'll \n> post a more comprehensive comparison including some other systems.\n>\n> The system is a Sun v20z. Dual Opteron 250, 2.4Ghz, Linux 2.6, 8 GB \n> memory. I did a compile and install of pg 8.0 beta 3. I created a \n> data base on a tmpfs file system and ran pgbench. Everything was \"out \n> of the box\", meaning I did not tweak any config files.\n>\n> I used this for pgbench:\n> $ pgbench -i -s 32\n>\n> and this for pgbench invocations:\n> $ pgbench -s 32 -c 1 -t 10000 -v\n>\n>\n> clients tps 1 1290 2 \n> 1780 4 1760 8 1680 \n> 16 1376 32 904\n\n\nThe same test on a Dell PowerEdge 1750, Dual Xeon 3.2 GHz, 512k cache, \nHT on, Linux 2.4.21-20.ELsmp (RHEL 3), 4GB memory, pg 7.4.5:\n\n$ pgbench -i -s 32 pgbench\n$ pgbench -s 32 -c 1 -t 10000 -v\n\nclients tps avg CS/sec\n------- ----- ----------\n 1 601 48,000\n 2 889 77,000\n 4 1006 80,000\n 8 985 59,000\n 16 966 47,000\n 32 913 46,000\n\nFar less performance that the Dual Opterons with a low number of \nclients, but the gap narrows as the number of clients goes up. Anyone \nsmarter than me care to explain?\n\nAnyone have a 4-way Opteron to run the same benchmark on?\n\n-Bill\n\n> How are these results useful? In some sense, this is a speed of light \n> number for the Opteron 250. You'll never go faster on this system \n> with a real storage subsystem involved instead of a tmpfs file \n> system. It's also a set of numbers that anyone else can reproduce as \n> we don't have to deal with any differences in file systems, disk \n> subsystems, networking, etc. Finally, it's a set of results that \n> anyone else can compute on Xeon's or other systems and make a simple \n> (and naive) comparisons.\n>\n>\n> Just to stay on topic: vmstat reported about 30K cs / second while \n> this was running the 1 and 2 client cases.\n>\n> -- Alan\n\n\n",
"msg_date": "Thu, 07 Oct 2004 11:48:41 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "On Thu, Oct 07, 2004 at 11:48:41AM -0400, Bill Montgomery wrote:\n> Alan Stange wrote:\n> \n> The same test on a Dell PowerEdge 1750, Dual Xeon 3.2 GHz, 512k cache, \n> HT on, Linux 2.4.21-20.ELsmp (RHEL 3), 4GB memory, pg 7.4.5:\n> \n> Far less performance that the Dual Opterons with a low number of \n> clients, but the gap narrows as the number of clients goes up. Anyone \n> smarter than me care to explain?\n\nYou'll have to wait for someone smarter than you, but I will posit\nthis: Did you use a tmpfs filesystem like Alan? You didn't mention\neither way. Alan did that as an attempt remove IO as a variable.\n\n-Mike\n",
"msg_date": "Thu, 7 Oct 2004 14:15:38 -0400",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Bill Montgomery wrote:\n\n> Alan Stange wrote:\n>\n>> Here's a few numbers from the Opteron 250. If I get some time I'll \n>> post a more comprehensive comparison including some other systems.\n>>\n>> The system is a Sun v20z. Dual Opteron 250, 2.4Ghz, Linux 2.6, 8 GB \n>> memory. I did a compile and install of pg 8.0 beta 3. I created a \n>> data base on a tmpfs file system and ran pgbench. Everything was \n>> \"out of the box\", meaning I did not tweak any config files.\n>>\n>> I used this for pgbench:\n>> $ pgbench -i -s 32\n>>\n>> and this for pgbench invocations:\n>> $ pgbench -s 32 -c 1 -t 10000 -v\n>>\n>>\n>> clients tps 1 1290 2 \n>> 1780 4 1760 8 1680 \n>> 16 1376 32 904\n>\n>\n>\n> The same test on a Dell PowerEdge 1750, Dual Xeon 3.2 GHz, 512k cache, \n> HT on, Linux 2.4.21-20.ELsmp (RHEL 3), 4GB memory, pg 7.4.5:\n>\n> $ pgbench -i -s 32 pgbench\n> $ pgbench -s 32 -c 1 -t 10000 -v\n>\n> clients tps avg CS/sec\n> ------- ----- ----------\n> 1 601 48,000\n> 2 889 77,000\n> 4 1006 80,000\n> 8 985 59,000\n> 16 966 47,000\n> 32 913 46,000\n>\n> Far less performance that the Dual Opterons with a low number of \n> clients, but the gap narrows as the number of clients goes up. Anyone \n> smarter than me care to explain?\n\nboy, did Thunderbird ever botch the format of the table I entered...\n\nI thought the falloff at 32 clients was a bit steep as well. One \nthought that crossed my mind is that \"pgbench -s 32 -c 32 ...\" might not \nbe valid. From the pgbench README:\n\n -s scaling_factor\n this should be used with -i (initialize) option.\n number of tuples generated will be multiple of the\n scaling factor. For example, -s 100 will imply 10M\n (10,000,000) tuples in the accounts table.\n default is 1. NOTE: scaling factor should be at least\n as large as the largest number of clients you intend\n to test; else you'll mostly be measuring update contention.\n\nAnother possible cause is the that pgbench process is cpu starved and \nisn't able to keep driving the postgresql processes. So I ran pgbench \nfrom another system with all else the same. The numbers were a bit \nsmaller but otherwise similar.\n\n\nI then reran everything using -s 64:\n\nclients tps\n1 1254\n2 1645\n4 1713\n8 1548\n16 1396\n32 1060\n\nStill starting to head down a bit. In the 32 client case, the system \nwas ~60% user time, ~25% sytem and ~15% idle. Anyway, the machine is \nclearly hitting some contention somewhere. It could be in the tmpfs \ncode, VM system, etc.\n\n-- Alan\n\n\n\n\n",
"msg_date": "Thu, 07 Oct 2004 14:28:54 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "Michael Adler wrote:\n\n>On Thu, Oct 07, 2004 at 11:48:41AM -0400, Bill Montgomery wrote:\n> \n>\n>>Alan Stange wrote:\n>>\n>>The same test on a Dell PowerEdge 1750, Dual Xeon 3.2 GHz, 512k cache, \n>>HT on, Linux 2.4.21-20.ELsmp (RHEL 3), 4GB memory, pg 7.4.5:\n>>\n>>Far less performance that the Dual Opterons with a low number of \n>>clients, but the gap narrows as the number of clients goes up. Anyone \n>>smarter than me care to explain?\n>> \n>>\n>\n>You'll have to wait for someone smarter than you, but I will posit\n>this: Did you use a tmpfs filesystem like Alan? You didn't mention\n>either way. Alan did that as an attempt remove IO as a variable.\n>\n>-Mike\n> \n>\n\nYes, I should have been more explicit. My goal was to replicate his \nexperiment as closely as possible in my environment, so I did run my \npostgres data directory on a tmpfs.\n\n-Bill Montgomery\n",
"msg_date": "Thu, 07 Oct 2004 14:48:45 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
},
{
"msg_contents": "On Tue, Oct 05, 2004 at 09:47:36AM -0700, Josh Berkus wrote:\n> As long as you're on x86, scaling outward is the way to go. If you want to \n> continue to scale upwards, ask Andrew Sullivan about his experiences running \n> PostgreSQL on big IBM boxes. But if you consider an quad-Opteron server \n> expensive, I don't think that's an option for you.\n\nWell, they're not that big, and both Chris Browne and Andrew Hammond\nare at least as qualified to talk about this as I. But since Josh\nmentioned it, I'll put some anecdotal rablings here just in case\nanyone is interested.\n\nWe used to run our systems on Solaris 7, then 8, on Sun E4500s. We\nfound the performance on those boxes surprisingly bad under certain\npathological loads. I ultimately traced this to some remarkably poor\nshared memory handling by Solaris: during relatively heavy load\n(in particular, thousands of selects per second on the same set of\ntuples) we'd see an incredible number of semaphore operations, and\nconcluded that the buffer handling was killing us. \n\nI think we could have tuned this away, but for independent reasons we\ndecided to dump Sun gear (the hardware had become unreliable, and we\nwere not satisfied with the service we were getting). We ended up\nchoosing IBM P650s as a replacement.\n\nThe 650s are not cheap, but boy are they fast. I don't have any\nnumbers I can share, but I can tell you that we recently had a few\ndays in which our write load was as large as the entire write load\nfor last year, and you couldn't tell. It is too early for us to say\nwhether the P series lives up to its billing in terms of relibility:\nthe real reason we use these machines is reliability, so if\napproaching 100% uptime isn't important to you, the speed may not be\nworth it.\n\nWe're also, for the record, doing experiments with Opterons. So far,\nwe're impressed, and you can buy a lot of Opteron for the cost of one\nP650.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n",
"msg_date": "Mon, 11 Oct 2004 13:38:19 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "IBM P-series machines (was: Excessive context switching on SMP Xeons)"
},
{
"msg_contents": "On Mon, 2004-10-11 at 13:38, Andrew Sullivan wrote:\n> On Tue, Oct 05, 2004 at 09:47:36AM -0700, Josh Berkus wrote:\n> > As long as you're on x86, scaling outward is the way to go. If you want to \n> > continue to scale upwards, ask Andrew Sullivan about his experiences running \n> > PostgreSQL on big IBM boxes. But if you consider an quad-Opteron server \n> > expensive, I don't think that's an option for you.\n\n\n> The 650s are not cheap, but boy are they fast. I don't have any\n> numbers I can share, but I can tell you that we recently had a few\n> days in which our write load was as large as the entire write load\n> for last year, and you couldn't tell. It is too early for us to say\n> whether the P series lives up to its billing in terms of relibility:\n> the real reason we use these machines is reliability, so if\n> approaching 100% uptime isn't important to you, the speed may not be\n> worth it.\n\nAgreed completely, and the 570 knocks the 650 out of the water -- nearly\ndouble the performance for math heavy queries. Beware vendor support for\nLinux on these things though -- we ran into many of the same issues with\nvendor support on the IBM machines as we did with the Opterons.\n\n",
"msg_date": "Mon, 11 Oct 2004 14:20:52 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IBM P-series machines (was: Excessive context"
},
{
"msg_contents": "[email protected] (Rod Taylor) wrote:\n> On Mon, 2004-10-11 at 13:38, Andrew Sullivan wrote:\n>> On Tue, Oct 05, 2004 at 09:47:36AM -0700, Josh Berkus wrote:\n>> > As long as you're on x86, scaling outward is the way to go. If\n>> > you want to continue to scale upwards, ask Andrew Sullivan about\n>> > his experiences running PostgreSQL on big IBM boxes. But if you\n>> > consider an quad-Opteron server expensive, I don't think that's\n>> > an option for you.\n>\n>> The 650s are not cheap, but boy are they fast. I don't have any\n>> numbers I can share, but I can tell you that we recently had a few\n>> days in which our write load was as large as the entire write load\n>> for last year, and you couldn't tell. It is too early for us to\n>> say whether the P series lives up to its billing in terms of\n>> relibility: the real reason we use these machines is reliability,\n>> so if approaching 100% uptime isn't important to you, the speed may\n>> not be worth it.\n>\n> Agreed completely, and the 570 knocks the 650 out of the water --\n> nearly double the performance for math heavy queries. Beware vendor\n> support for Linux on these things though -- we ran into many of the\n> same issues with vendor support on the IBM machines as we did with\n> the Opterons.\n\nThe 650s are running AIX, not Linux.\n\nBased on the \"Signal 11\" issue, I'm not certain what would be the\n'best' answer. It appears that the problem relates to proprietary\nbits of AIX libc. In theory, that would have been more easily\nresolvable with a source-available GLIBC.\n\nOn the other hand, I'm not sure what happens to support for any of the\ninteresting hardware extensions. I'm not sure, for instance, that we\ncould run HACMP on Linux on this hardware.\n\nAs for \"vendor support\" for Opteron, that sure looks like a\ntrainwreck... If you're going through IBM, then they won't want to\nrespond to any issues if you're not running a \"bog-standard\" RHAS/RHES\nrelease from Red Hat. And that, on Opteron, is preposterous, because\nthere's plenty of the bits of Opteron support that only ever got put\nin Linux 2.6, whilst RHAT is still back in the 2.4 days.\n\nIn a way, that's just as well, at this point. There's plenty of stuff\nsurrounding this that is still pretty experimental; the longer RHAT\nwaits to support 2.6, the greater the likelihood that Linux support\nfor Opteron will have settled down to the point that the result will\nactually be supportable by RHAT, and by proxy, by IBM and others.\n\nThere is some risk that if RHAT waits _too_ long for 2.6, people will\nhave already jumped ship to SuSE. No benefits without risks...\n-- \n(reverse (concatenate 'string \"gro.mca\" \"@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\nIf at first you don't succeed, then you didn't do it right!\nIf at first you don't succeed, then skydiving definitely isn't for you.\n",
"msg_date": "Mon, 11 Oct 2004 21:34:44 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IBM P-series machines (was: Excessive context"
},
{
"msg_contents": "\n>As for \"vendor support\" for Opteron, that sure looks like a\n>trainwreck... If you're going through IBM, then they won't want to\n>respond to any issues if you're not running a \"bog-standard\" RHAS/RHES\n>release from Red Hat. And that, on Opteron, is preposterous, because\n>there's plenty of the bits of Opteron support that only ever got put\n>in Linux 2.6, whilst RHAT is still back in the 2.4 days.\n>\n> \n>\n\nTo be fair, they have backported a boatload of 2.6 features to their kernel:\nhttp://www.redhat.com/software/rhel/kernel26/\n\nAnd that page certainly isn't an exhaustive list...\n\nM\n",
"msg_date": "Tue, 12 Oct 2004 07:53:00 +0100",
"msg_from": "Matt Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IBM P-series machines"
},
{
"msg_contents": "[email protected] (Matt Clark) writes:\n>>As for \"vendor support\" for Opteron, that sure looks like a\n>>trainwreck... If you're going through IBM, then they won't want to\n>>respond to any issues if you're not running a \"bog-standard\" RHAS/RHES\n>>release from Red Hat. And that, on Opteron, is preposterous, because\n>>there's plenty of the bits of Opteron support that only ever got put\n>>in Linux 2.6, whilst RHAT is still back in the 2.4 days.\n>\n> To be fair, they have backported a boatload of 2.6 features to their kernel:\n> http://www.redhat.com/software/rhel/kernel26/\n>\n> And that page certainly isn't an exhaustive list...\n\nTo be fair, we keep on actually running into things that _can't_ be\nbackported, like fibrechannel drivers that were written to take\nadvantage of changes in the SCSI support in 2.6.\n\nThis sort of thing will be particularly problematic with Opteron,\nwhere the porting efforts for AMD64 have taken place alongside the\ncreation of 2.6.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/linuxxian.html\nA VAX is virtually a computer, but not quite.\n",
"msg_date": "Wed, 13 Oct 2004 11:52:28 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Opteron vs RHAT"
},
{
"msg_contents": "> >>trainwreck... If you're going through IBM, then they won't want to \n> >>respond to any issues if you're not running a \n> \"bog-standard\" RHAS/RHES \n> >>release from Red Hat. \n...> To be fair, we keep on actually running into things that \n> _can't_ be backported, like fibrechannel drivers that were \n> written to take advantage of changes in the SCSI support in 2.6.\n\nI thought IBM had good support for SUSE? I don't know why I thought that...\n\n",
"msg_date": "Wed, 13 Oct 2004 17:47:20 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs RHAT"
},
{
"msg_contents": "Bill,\n\nIn order to manifest the context switch problem you will definitely\nrequire clients to be set to more than one in pgbench. It only occurs\nwhen 2 or more backends need access to shared memory.\n\nIf you want help backpatching Gavin's patch I'll be glad to do it for\nyou, but you do need a recent kernel.\n\nDave\n\n\nOn Thu, 2004-10-07 at 14:48, Bill Montgomery wrote:\n> Michael Adler wrote:\n> \n> >On Thu, Oct 07, 2004 at 11:48:41AM -0400, Bill Montgomery wrote:\n> > \n> >\n> >>Alan Stange wrote:\n> >>\n> >>The same test on a Dell PowerEdge 1750, Dual Xeon 3.2 GHz, 512k cache, \n> >>HT on, Linux 2.4.21-20.ELsmp (RHEL 3), 4GB memory, pg 7.4.5:\n> >>\n> >>Far less performance that the Dual Opterons with a low number of \n> >>clients, but the gap narrows as the number of clients goes up. Anyone \n> >>smarter than me care to explain?\n> >> \n> >>\n> >\n> >You'll have to wait for someone smarter than you, but I will posit\n> >this: Did you use a tmpfs filesystem like Alan? You didn't mention\n> >either way. Alan did that as an attempt remove IO as a variable.\n> >\n> >-Mike\n> > \n> >\n> \n> Yes, I should have been more explicit. My goal was to replicate his \n> experiment as closely as possible in my environment, so I did run my \n> postgres data directory on a tmpfs.\n> \n> -Bill Montgomery\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\nwww.postgresintl.com\n\n",
"msg_date": "Thu, 14 Oct 2004 15:59:14 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive context switching on SMP Xeons"
}
] |
[
{
"msg_contents": "Hi,\n\n(pg_version 7.4.2, i do run vacuum analyze on the whole database frequently \nand just before executing statements below)\n\ni dont know if anyone can help me because i dont know really where the problem \nis, but i try. If any further information is needed i'll be glad to send.\n\nmy real rule much longer (more calculation instead of \"+ 1\") but this shortcut \nhas the same disadvantages in performance:\n\nCREATE RULE ru_sp_update AS ON UPDATE TO Spiele\nDO\n UPDATE punktecache SET pc_punkte = pc_punkte + 1\n\n FROM Spieletipps AS stip\n NATURAL JOIN tippspieltage2spiele AS tspt2sp\n\n WHERE punktecache.tr_kurzname = stip.tr_kurzname\n AND punktecache.mg_name = stip.mg_name\n AND punktecache.tspt_name = tspt2sp.tspt_name\n AND stip.sp_id = OLD.sp_id\n;\n\npunktecache is a materialized view which should be updated by this rule\n\n# \\d punktecache\n Table \"public.punktecache\"\n Column | Type | Modifiers\n-------------+----------+-----------\n tr_kurzname | text | not null\n mg_name | text | not null\n tspt_name | text | not null\n pc_punkte | smallint | not null\nIndexes:\n \"pk_punktecache\" primary key, btree (tr_kurzname, mg_name, tspt_name)\nForeign-key constraints:\n \"fk_mitglieder\" FOREIGN KEY (tr_kurzname, mg_name) REFERENCES \nmitglieder(tr_kurzname, mg_name) ON UPDATE CASCADE ON DELETE CASCADE\n \"fk_tippspieltage\" FOREIGN KEY (tr_kurzname, tspt_name) REFERENCES \ntippspieltage(tr_kurzname, tspt_name) ON UPDATE CASCADE ON DELETE CASCADE\n\n\nmy update statement:\n\nexplain analyze UPDATE spiele\nSET sp_heimtore = spup.spup_heimtore,\n sp_gasttore = spup.spup_gasttore,\n sp_abpfiff = spup.spup_abpfiff\nFROM spieleupdates AS spup\nWHERE spiele.sp_id = spup.sp_id;\n\nand output from explain\n[did i post explain's output right? i just copied it, but i wonder if there is \na more pretty print like method to post explain's output?]\n\n\n Nested Loop (cost=201.85..126524.78 rows=1 width=45) (actual \ntime=349.694..290491.442 rows=100990 loops=1)\n -> Nested Loop (cost=201.85..126518.97 rows=1 width=57) (actual \ntime=349.623..288222.145 rows=100990 loops=1)\n -> Hash Join (cost=201.85..103166.61 rows=4095 width=64) (actual \ntime=131.376..8890.220 rows=102472 loops=1)\n Hash Cond: ((\"outer\".tspt_name = \"inner\".tspt_name) AND \n(\"outer\".tr_kurzname = \"inner\".tr_kurzname))\n -> Seq Scan on punktecache (cost=0.00..40970.20 rows=2065120 \nwidth=45) (actual time=0.054..4356.321 rows=2065120 loops=1)\n -> Hash (cost=178.16..178.16 rows=4738 width=35) (actual \ntime=102.259..102.259 rows=0 loops=1)\n -> Nested Loop (cost=0.00..178.16 rows=4738 width=35) \n(actual time=17.262..88.076 rows=10519 loops=1)\n -> Seq Scan on spieleupdates spup \n(cost=0.00..0.00 rows=1 width=4) (actual time=0.015..0.024 rows=1 loops=1)\n -> Index Scan using ix_tspt2sp_fk_spiele on \ntippspieltage2spiele tspt2sp (cost=0.00..118.95 rows=4737 width=31) (actual \ntime=17.223..69.486 rows=10519 loops=1)\n Index Cond: (\"outer\".sp_id = tspt2sp.sp_id)\n -> Index Scan using pk_spieletipps on spieletipps stip \n(cost=0.00..5.69 rows=1 width=25) (actual time=2.715..2.717 rows=1 \nloops=102472)\n Index Cond: ((\"outer\".tr_kurzname = stip.tr_kurzname) AND \n(\"outer\".mg_name = stip.mg_name) AND (\"outer\".sp_id = stip.sp_id))\n -> Index Scan using pk_spiele on spiele (cost=0.00..5.78 rows=1 width=4) \n(actual time=0.012..0.014 rows=1 loops=100990)\n Index Cond: (spiele.sp_id = \"outer\".sp_id)\n Total runtime: 537319.321 ms\n\n\nCan this be made any faster? 
Can you give me a hint where to start research? \n\nMy guess is that the update statement inside the rule doesn't really use the \nindex on punktecache, but I don't know why and I don't know how to change it.\n\nAny hint or help is very appreciated.\n\nkind regards\njanning\n",
"msg_date": "Wed, 6 Oct 2004 00:55:04 +0200",
"msg_from": "Janning Vygen <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow rule on update"
}
] |
[
{
"msg_contents": "Hello!\n\nI'm using Postgres 7.4.5, sort_mem is 8192. Tables analyzed / vacuumed.\n\nHere's a function I'm using to get an age from the user's birthday:\n\nagey(date) -> SELECT date_part('year', age($1::timestamp))\n\n\nThe problem is, why do the plans differ so much between Q1 & Q3 below? Something with age() being a non-IMMUTABLE function?\n\n\nQ1: explain analyze SELECT al.pid, al.owner, al.title, al.front, al.created_at, al.n_images, u.username as owner_str, u.image as owner_image, u.puid as owner_puid FROM albums al , users u WHERE u.uid = al.owner AND al.security='a' AND al.n_images > 0 AND date_part('year', age(u.born)) > 17 AND date_part('year', age(u.born)) < 20 AND city = 1 ORDER BY al.id DESC LIMIT 9;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5700.61..5700.63 rows=9 width=183) (actual time=564.291..564.299 rows=9 loops=1)\n -> Sort (cost=5700.61..5700.82 rows=83 width=183) (actual time=564.289..564.291 rows=9 loops=1)\n Sort Key: al.id\n -> Nested Loop (cost=0.00..5697.97 rows=83 width=183) (actual time=30.029..526.211 rows=4510 loops=1)\n -> Seq Scan on users u (cost=0.00..5311.05 rows=86 width=86) (actual time=5.416..421.264 rows=3021 loops=1)\n Filter: ((date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone)) > 17::double precision) AND (date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone)) < 20::double precision) AND (city = 1))\n -> Index Scan using albums_owner_key on albums al (cost=0.00..4.47 rows=2 width=101) (actual time=0.014..0.025 rows=1 loops=3021)\n Index Cond: (\"outer\".uid = al.\"owner\")\n Filter: ((\"security\" = 'a'::bpchar) AND (n_images > 0))\n Total runtime: 565.120 ms\n(10 rows)\n\n\nResult when removing the second age-check (AND date_part('year', age(u.born)) < 20):\n\nQ2: explain analyze SELECT al.pid, al.owner, al.title, al.front, al.created_at, al.n_images, u.username as owner_str, u.image as owner_image, u.puid as owner_puid FROM albums al, users u WHERE u.uid = al.owner AND al.security='a' AND al.n_images > 0 AND date_part('year', age(u.born)) > 17 AND city = 1 ORDER BY al.id DESC LIMIT 9;\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..140.95 rows=9 width=183) (actual time=0.217..2.474 rows=9 loops=1)\n -> Nested Loop (cost=0.00..86200.99 rows=5504 width=183) (actual time=0.216..2.464 rows=9 loops=1)\n -> Index Scan Backward using albums_id_key on albums al (cost=0.00..2173.32 rows=27610 width=101) (actual time=0.086..1.080 rows=40 loops=1)\n Filter: ((\"security\" = 'a'::bpchar) AND (n_images > 0))\n -> Index Scan using users_pkey on users u (cost=0.00..3.03 rows=1 width=86) (actual time=0.031..0.031 rows=0 loops=40)\n Index Cond: (u.uid = \"outer\".\"owner\")\n Filter: ((date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone)) > 17::double precision) AND (city = 1))\n Total runtime: 2.611 ms\n(8 rows)\n\nTrying another approach: adding a separate \"stale\" age-column to the users-table:\n\nalter table users 
add column age smallint;\nupdate users set age=date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone));\nanalyze users;\n\nResult with separate column:\nQ3: explain analyze SELECT al.pid, al.owner, al.title, al.front, al.created_at, al.n_images, u.username as owner_str, u.image as owner_image, u.puid as owner_puid FROM albums al , users u WHERE u.uid = al.owner AND al.security='a' AND al.n_images > 0 AND age > 17 AND age < 20 AND city = 1 ORDER BY al.id DESC LIMIT 9;\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..263.40 rows=9 width=183) (actual time=0.165..2.832 rows=9 loops=1)\n -> Nested Loop (cost=0.00..85925.69 rows=2936 width=183) (actual time=0.163..2.825 rows=9 loops=1)\n -> Index Scan Backward using albums_id_key on albums al (cost=0.00..2173.32 rows=27610 width=101) (actual time=0.043..1.528 rows=56 loops=1)\n Filter: ((\"security\" = 'a'::bpchar) AND (n_images > 0))\n -> Index Scan using users_pkey on users u (cost=0.00..3.02 rows=1 width=86) (actual time=0.020..0.020 rows=0 loops=56)\n Index Cond: (u.uid = \"outer\".\"owner\")\n Filter: ((age > 17) AND (age < 20) AND (city = 1))\n Total runtime: 2.973 ms\n(8 rows)\n\nMy question is, why doesn't the planner pick the same plan for Q1 & Q3?\n\n/Nichlas\n",
"msg_date": "Wed, 6 Oct 2004 02:42:04 +0200",
"msg_from": "Nichlas =?iso-8859-1?Q?L=F6fdahl?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner picks the wrong plan?"
},
{
"msg_contents": "Nichlas =?iso-8859-1?Q?L=F6fdahl?= <[email protected]> writes:\n> My question is, why doesn't the planner pick the same plan for Q1 & Q3?\n\nI think it's mostly that after you've added and ANALYZEd the \"age\"\ncolumn, the planner has a pretty good idea of how many rows will pass\nthe \"age > 17 AND age < 20\" condition. It can't do very much with the\nequivalent condition in the original form, though, and in fact ends up\ndrastically underestimating the number of matching rows (86 vs reality\nof 3021). That leads directly to a bad plan choice :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Oct 2004 23:29:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner picks the wrong plan? "
}
] |
[
{
"msg_contents": "please ignore if this goes through. They've been bouncing and I'm trying to\nfind out why.\n\n-m\n",
"msg_date": "Tue, 5 Oct 2004 23:24:46 -0400",
"msg_from": "Max Baker <[email protected]>",
"msg_from_op": true,
"msg_subject": "test post"
}
] |
[
{
"msg_contents": "postgres=# explain ANALYZE select * from test where today < '2004-01-01';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..19.51 rows=334 width=44) (actual \ntime=0.545..2.429 rows=721 loops=1)\n Filter: (today < '2004-01-01 00:00:00'::timestamp without time zone)\n Total runtime: 3.072 ms\n(3 rows)\n\npostgres=# explain ANALYZE select * from test where today > '2003-01-01' \nand today < '2004-01-01';\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_today on test (cost=0.00..18.89 rows=6 width=44) \n(actual time=0.055..1.098 rows=365 loops=1)\n Index Cond: ((today > '2003-01-01 00:00:00'::timestamp without time \nzone) AND (today < '2004-01-01 00:00:00'::timestamp without time zone))\n Total runtime: 1.471 ms\n(3 rows)\n\nhello\n\nI was expected 1st query should using index, but it doesn't\n2nd query doing perfect as you see.\n\ncan you explain to me why it's not doing that i expected??\nnow I'm currently using postgresql 8.0pre3 on linux\n\n/hyunsung jang.\n",
"msg_date": "Wed, 06 Oct 2004 16:31:08 +0900",
"msg_from": "HyunSung Jang <[email protected]>",
"msg_from_op": true,
"msg_subject": "why my query is not using index??"
},
{
"msg_contents": "Am Mittwoch, 6. Oktober 2004 09:31 schrieben Sie:\n> postgres=# explain ANALYZE select * from test where today < '2004-01-01';\n> QUERY PLAN\n>------------------------- Seq Scan on test (cost=0.00..19.51 rows=334\n> width=44) (actual\n> time=0.545..2.429 rows=721 loops=1)\n> Filter: (today < '2004-01-01 00:00:00'::timestamp without time zone)\n> Total runtime: 3.072 ms\n> (3 rows)\n>\n> postgres=# explain ANALYZE select * from test where today > '2003-01-01'\n> and today < '2004-01-01';\n> QUERY\n> PLAN\n> --------------------------------------------------------------- Index\n> Scan using idx_today on test (cost=0.00..18.89 rows=6 width=44) (actual\n> time=0.055..1.098 rows=365 loops=1)\n> Index Cond: ((today > '2003-01-01 00:00:00'::timestamp without time\n> zone) AND (today < '2004-01-01 00:00:00'::timestamp without time zone))\n> Total runtime: 1.471 ms\n> (3 rows)\n>\n> hello\n>\n> I was expected 1st query should using index, but it doesn't\n> 2nd query doing perfect as you see.\n\npostgres uses a seq scan if its faster. In your case postgres seems to know \nthat most of your rows have a date < 2004-01-01 and so doesn't need to \nconsult the index if it has to read every page anyway. seq scan can be faster \non small tables. try (in psql) \"SET enable_seqscan TO off;\" before running \nyour query and see how postgres plans it without using seq scan.\n\njanning\n\n\n\n",
"msg_date": "Mon, 11 Oct 2004 14:25:02 +0200",
"msg_from": "Janning Vygen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index??"
},
{
"msg_contents": "HyunSung Jang <[email protected]> writes:\n> can you explain to me why it's not doing that i expected??\n\nHave you ANALYZEd this table recently? The estimated row counts seem\nway off.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Oct 2004 10:26:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index?? "
},
{
"msg_contents": "On Mon, 11 Oct 2004, Janning Vygen wrote:\n\n> postgres uses a seq scan if its faster. In your case postgres seems to know\n> that most of your rows have a date < 2004-01-01 and so doesn't need to\n> consult the index if it has to read every page anyway. seq scan can be faster\n> on small tables. try (in psql) \"SET enable_seqscan TO off;\" before running\n> your query and see how postgres plans it without using seq scan.\n\n\nI was about to post and saw this message.\nI have a query that was using sequential scans. Upon turning seqscan to \noff it changed to using the index. What does that mean?\nThe tables are under 5k records so I wonder if that is why the optimizer \nis option, on it's default state, to do sequential scans.\n\nI was also wondering if there is a relation between the sequential scans \nand the fact that my entire query is a series of left joins:\n\n(1)FROM Accounts\n(2)LEFT JOIN Equity_Positions ON Accounts.Account_ID = \n(3)Equity_Positions.Account_ID\n(4)LEFT JOIN Equities USING( Equity_ID )\n(5)LEFT JOIN Benchmarks USING( Benchmark_ID )\n(6)LEFT JOIN Equity_Prices ON Equities.equity_id = Equity_Prices.equity_id\n(7) AND Equity_Positions.Equity_Date = Equity_Prices.Date\n(8)LEFT JOIN Benchmark_Positions ON Equities.Benchmark_ID = \n(9)Benchmark_Positions.Benchmark_ID\n(10) AND Equity_Positions.Equity_Date = \n(11)Benchmark_Positions.Benchmark_Date\n(12)WHERE Client_ID =32\n\nWhen I saw the default explain I was surprised to see that indexes were \nnot been used. For example the join on lines 4,5 are exactly the primary \nkey of the tables yet a sequential scan was used.\n\nThe default explain was:\n\nSort (cost=382.01..382.15 rows=56 width=196)\n Sort Key: accounts.account_group, accounts.account_name, \nequities.equity_description, equity_positions.equity_date\n -> Hash Left Join (cost=357.36..380.39 rows=56 width=196)\n Hash Cond: ((\"outer\".benchmark_id = \"inner\".benchmark_id) AND (\"outer\".equity_date = \"inner\".benchmark_date))\n -> Hash Left Join (cost=353.41..375.46 rows=56 width=174)\n Hash Cond: ((\"outer\".equity_id = \"inner\".equity_id) AND (\"outer\".equity_date = \"inner\".date))\n -> Hash Left Join (cost=292.22..296.90 rows=56 width=159)\n Hash Cond: (\"outer\".benchmark_id = \"inner\".benchmark_id)\n -> Merge Right Join (cost=290.40..294.51 rows=56 width=137)\n Merge Cond: (\"outer\".equity_id = \"inner\".equity_id)\n -> Sort (cost=47.19..48.83 rows=655 width=70)\n Sort Key: equities.equity_id\n -> Seq Scan on equities (cost=0.00..16.55 rows=655 width=70)\n -> Sort (cost=243.21..243.35 rows=56 width=67)\n Sort Key: equity_positions.equity_id\n -> Nested Loop Left Join (cost=0.00..241.58 rows=56 width=67)\n -> Seq Scan on accounts (cost=0.00..5.80 rows=3 width=44)\n Filter: (client_id = 32)\n -> Index Scan using positions_acct_equity_date on equity_positions (cost=0.00..78.30 rows=23 width=27)\n Index Cond: (\"outer\".account_id = equity_positions.account_id)\n -> Hash (cost=1.66..1.66 rows=66 width=22)\n -> Seq Scan on benchmarks (cost=0.00..1.66 rows=66 width=22)\n -> Hash (cost=50.79..50.79 rows=2079 width=23)\n -> Seq Scan on equity_prices (cost=0.00..50.79 rows=2079 width=23)\n -> Hash (cost=3.30..3.30 rows=130 width=30)\n -> Seq Scan on benchmark_positions (cost=0.00..3.30 rows=130 width=30)\n\n\nAfter set enable_seqscan to off;\nIt becomes\n\nSort (cost=490.82..490.96 rows=56 width=196)\n Sort Key: accounts.account_group, accounts.account_name, \nequities.equity_description, equity_positions.equity_date\n -> Merge Left Join 
(cost=309.75..489.20 rows=56 width=196)\n Merge Cond: (\"outer\".benchmark_id = \"inner\".benchmark_id)\n Join Filter: (\"outer\".equity_date = \"inner\".benchmark_date)\n -> Nested Loop Left Join (cost=309.75..644.88 rows=56 width=174)\n -> Merge Left Join (cost=309.75..315.90 rows=56 width=159)\n Merge Cond: (\"outer\".benchmark_id = \"inner\".benchmark_id)\n -> Sort (cost=309.75..309.89 rows=56 width=137)\n Sort Key: equities.benchmark_id\n -> Merge Right Join (cost=254.43..308.12 rows=56 width=137)\n Merge Cond: (\"outer\".equity_id = \"inner\".equity_id)\n -> Index Scan using equities_pkey on equities (cost=0.00..51.21 rows=655 width=70)\n -> Sort (cost=254.43..254.57 rows=56 width=67)\n Sort Key: equity_positions.equity_id\n -> Nested Loop Left Join (cost=0.00..252.81 rows=56 width=67)\n -> Index Scan using accounts_pkey on accounts (cost=0.00..17.02 rows=3 width=44)\n Filter: (client_id = 32)\n -> Index Scan using positions_acct_equity_date on equity_positions (cost=0.00..78.30 rows=23 width=27)\n Index Cond: (\"outer\".account_id = equity_positions.account_id)\n -> Index Scan using benchmarks_pkey on benchmarks (cost=0.00..5.57 rows=66 width=22)\n -> Index Scan using equity_prices_equity_date on equity_prices (cost=0.00..5.86 rows=1 width=23)\n Index Cond: ((\"outer\".equity_id = equity_prices.equity_id) AND (\"outer\".equity_date = equity_prices.date))\n -> Index Scan using benchpositions_acct_equity_date on benchmark_positions (cost=0.00..10.82 rows=130 width=30)\n",
"msg_date": "Mon, 11 Oct 2004 16:49:44 -0400 (EDT)",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index??"
},
{
"msg_contents": "Am Montag, 11. Oktober 2004 22:49 schrieb Francisco Reyes:\n> On Mon, 11 Oct 2004, Janning Vygen wrote:\n> > postgres uses a seq scan if its faster. In your case postgres seems to\n> > know that most of your rows have a date < 2004-01-01 and so doesn't need\n> > to consult the index if it has to read every page anyway. seq scan can be\n> > faster on small tables. try (in psql) \"SET enable_seqscan TO off;\" \n> > before running your query and see how postgres plans it without using seq\n> > scan.\n>\n> I was about to post and saw this message.\n> I have a query that was using sequential scans. Upon turning seqscan to\n> off it changed to using the index. What does that mean?\n\nenable_seqscan off means that postgres is not allowed to use seqscan.\ndefault is on and postgres decides for each table lookup which method is \nfaster: seq scan or index scan. thats what the planner does: deciding which \naccess method might be the fastest.\n\n> The tables are under 5k records so I wonder if that is why the optimizer\n> is option, on it's default state, to do sequential scans.\n\nif you have small tables, postgres is using seqscan to reduce disk lookups. \npostgresql reads disk pages in 8k blocks. if your whole table is under 8k \nthere is no reason for postgres to load an index from another disk page \nbecause it has to load the whole disk anyway. \n\nnot sure, but i think postgres also analyzes the table to see which values are \nin there. if you have a huge table with a column of integers and postgres \nknows that 99% are of value 1 and you are looking for a row with a value of \n1, why should it use an index just to see that it has to load the whole table \nto find a matching row.\n\nAnd that's why you can't make performance tests with small tables. you need \ntest data which is as close as possible to real data.\n\n> I was also wondering if there is a relation between the sequential scans\n> and the fact that my entire query is a series of left joins:\n\nno.\n\njanning\n",
"msg_date": "Mon, 11 Oct 2004 23:26:02 +0200",
"msg_from": "Janning Vygen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index??"
},
{
"msg_contents": "Francisco Reyes wrote:\n> On Mon, 11 Oct 2004, Janning Vygen wrote:\n> \n[...]\n> When I saw the default explain I was surprised to see that indexes were \n> not been used. For example the join on lines 4,5 are exactly the primary \n> key of the tables yet a sequential scan was used.\n> \n\nNote this:\n> The default explain was:\n> \n> Sort (cost=382.01..382.15 rows=56 width=196)\n> Sort Key: accounts.account_group, accounts.account_name, \n\n[...]\n\nVersus this:\n> \n> After set enable_seqscan to off;\n> It becomes\n> \n> Sort (cost=490.82..490.96 rows=56 width=196)\n> Sort Key: accounts.account_group, accounts.account_name, \n\n[...]\n\nPostgres believes that it will cost 382 to do a sequential scan, versus \n490 for an indexed scan. Hence why it prefers to do the sequential scan. \nTry running explain analyze to see if how accurate it is.\n\nAs Janning mentioned, sometimes sequential scans *are* faster. If the \nnumber of entries that will be found is large compared to the number of \ntotal entries (I don't know the percentages, but probably >30-40%), then \nit is faster to just load the data and scan through it, rather than \ndoing a whole bunch of indexed lookups.\n\nJohn\n=:->",
"msg_date": "Mon, 11 Oct 2004 19:12:45 -0500",
"msg_from": "John Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index??"
},
{
"msg_contents": "\nJohn Meinel <[email protected]> writes:\n\n> As Janning mentioned, sometimes sequential scans *are* faster. If the number of\n> entries that will be found is large compared to the number of total entries (I\n> don't know the percentages, but probably >30-40%), \n\nActually 30%-40% is unrealistic. The traditional rule of thumb for the\nbreak-even point was 10%. In POstgres the actual percentage varies based on\nhow wide the records are and how correlated the location of the records is\nwith the index. Usually it's between 5%-10% but it can be even lower than that\nsometimes.\n\n-- \ngreg\n\n",
"msg_date": "11 Oct 2004 21:13:27 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index??"
},
{
"msg_contents": "On Mon, 11 Oct 2004, John Meinel wrote:\n\n> Postgres believes that it will cost 382 to do a sequential scan, versus 490 \n> for an indexed scan. Hence why it prefers to do the sequential scan. Try \n> running explain analyze to see if how accurate it is.\n\nWith explain analyze I have with sequential scan on\nSort (cost=382.01..382.15 rows=56 width=196)\n(actual time=64.346..64.469 rows=24 loops=1)\n\n\nAnd with seqscan off\nSort (cost=490.82..490.96 rows=56 width=196)\n(actual time=56.668..56.789 rows=24 loops=1)\n\nSo I guess that for this particular query I am better off setting the \nseqscan off.\n",
"msg_date": "Tue, 12 Oct 2004 00:56:15 -0400 (EDT)",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index??"
},
{
"msg_contents": "Francisco Reyes <[email protected]> writes:\n> With explain analyze I have with sequential scan on\n> Sort (cost=382.01..382.15 rows=56 width=196)\n> (actual time=64.346..64.469 rows=24 loops=1)\n\n> And with seqscan off\n> Sort (cost=490.82..490.96 rows=56 width=196)\n> (actual time=56.668..56.789 rows=24 loops=1)\n\n> So I guess that for this particular query I am better off setting the \n> seqscan off.\n\nFor that kind of margin, you'd be a fool to do any such thing.\n\nYou might want to look at making some adjustment to random_page_cost\nto bring the estimated costs in line with reality (though I'd counsel\ntaking more than one example into account while you tweak it). But\nsetting seqscan off as a production setting is just a recipe for\nshooting yourself in the foot.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2004 01:26:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why my query is not using index?? "
}
] |
[
{
"msg_contents": "Hi,\n\nI'm using Postgres 7.4.5. Tables are analyzed & vacuumed.\n\nI am wondering why postgresql never uses an index on queries of the type\n'select distinct ...' while e.g. mysql uses the index on the same query.\nSee the following explains:\n\n\npostgresql:\n\nexplain analyze select distinct \"land\" from \"customer_dim\";\n---------------------------------------------------------------------------------------------------------------------------------------+\n QUERY PLAN |\n---------------------------------------------------------------------------------------------------------------------------------------+\n Unique (cost=417261.85..430263.66 rows=18 width=15) (actual time=45875.235..67204.694 rows=103 loops=1) |\n -> Sort (cost=417261.85..423762.75 rows=2600362 width=15) (actual time=45875.226..54114.473 rows=2600362 loops=1) |\n Sort Key: land |\n -> Seq Scan on customer_dim (cost=0.00..84699.62 rows=2600362 width=15) (actual time=0.048..10733.227 rows=2600362 loops=1) |\n Total runtime: 67246.465 ms |\n---------------------------------------------------------------------------------------------------------------------------------------+\n\n\nmysql:\n\nexplain select DISTINCT `customer_dim`.`land` from `customer_dim`;\n--------------+-------+---------------+---------------+---------+--------+---------+-------------+\n table | type | possible_keys | key | key_len | ref | rows | Extra |\n--------------+-------+---------------+---------------+---------+--------+---------+-------------+\n customer_dim | index | [NULL] | IDX_cstd_land | 81 | [NULL] | 2600362 | Using index |\n--------------+-------+---------------+---------------+---------+--------+---------+-------------+\n1 row in result (first row: 8 msec; total: 9 msec)\n\n\n\nThe result set contains 103 rows (but i get this behavior with every query of\nthis kind). My tables consist of at least a million rows.\n\nThe indexes on the column 'land' are standard indexes, so in case of\npostgresql, it's a btree-index. I've tried to change the index type, but to no\navail.\n\nSo, why doesn't postgresql use the index, and (how) could i persuade postgresql\nto use an index for this type of query?\n\nTiA\n\n-- \nOle Langbehn\n\nfreiheit.com technologies gmbh\nTheodorstr. 42-90 / 22761 Hamburg, Germany\nfon +49 (0)40 / 890584-0\nfax +49 (0)40 / 890584-20\n\nFreie Software durch Bücherkauf fördern | http://bookzilla.de/\n",
"msg_date": "Wed, 6 Oct 2004 11:30:58 +0200",
"msg_from": "Ole Langbehn <[email protected]>",
"msg_from_op": true,
"msg_subject": "sequential scan on select distinct"
},
{
"msg_contents": "\n\tYou could try :\n\n\texplain analyze select \"land\" from \"customer_dim\" group by \"land\";\n\tIt will be a lot faster but I can't make it use the index on my machine...\n\n\tExample :\n\n\tcreate table dummy as (select id, id%255 as number from a large table \nwith 1M rows);\n\tso we have a table with 256 (0-255) disctinct \"number\" values.\n\n--------------------------------------------------------------------------------\n=> explain analyze select distinct number from dummy;\n Unique (cost=69.83..74.83 rows=200 width=4) (actual \ntime=13160.490..14414.004 rows=255 loops=1)\n -> Sort (cost=69.83..72.33 rows=1000 width=4) (actual \ntime=13160.483..13955.792 rows=1000000 loops=1)\n Sort Key: number\n -> Seq Scan on dummy (cost=0.00..20.00 rows=1000 width=4) \n(actual time=0.052..1759.145 rows=1000000 loops=1)\n Total runtime: 14442.872 ms\n\n=>\tHorribly slow because it has to sort 1M rows for the Unique.\n\n--------------------------------------------------------------------------------\n=> explain analyze select number from dummy group by number;\n HashAggregate (cost=22.50..22.50 rows=200 width=4) (actual \ntime=1875.214..1875.459 rows=255 loops=1)\n -> Seq Scan on dummy (cost=0.00..20.00 rows=1000 width=4) (actual \ntime=0.107..1021.014 rows=1000000 loops=1)\n Total runtime: 1875.646 ms\n\n=>\tA lot faster because it HashAggregates instead of sorting (but still \nseq scan)\n\n--------------------------------------------------------------------------------\nNow :\ncreate index dummy_idx on dummy(number);\nLet's try again.\n--------------------------------------------------------------------------------\nexplain analyze select distinct number from dummy;\n Unique (cost=0.00..35301.00 rows=200 width=4) (actual \ntime=0.165..21781.732 rows=255 loops=1)\n -> Index Scan using dummy_idx on dummy (cost=0.00..32801.00 \nrows=1000000 width=4) (actual time=0.162..21154.752 rows=1000000 loops=1)\n Total runtime: 21782.270 ms\n\n=> Index scan the whole table. argh. 
I should have ANALYZized.\n--------------------------------------------------------------------------------\nexplain analyze select number from dummy group by number;\n HashAggregate (cost=17402.00..17402.00 rows=200 width=4) (actual \ntime=1788.425..1788.668 rows=255 loops=1)\n -> Seq Scan on dummy (cost=0.00..14902.00 rows=1000000 width=4) \n(actual time=0.048..960.063 rows=1000000 loops=1)\n Total runtime: 1788.855 ms\n=>\tStill the same...\n--------------------------------------------------------------------------------\nLet's make a function :\nThe function starts at the lowest number and advances to the next number \nin the index until they are all exhausted.\n\n\nCREATE OR REPLACE FUNCTION sel_distinct()\n\tRETURNS SETOF INTEGER\n\tLANGUAGE plpgsql\n\tAS '\nDECLARE\n\tpos INTEGER;\nBEGIN\n\tSELECT INTO pos number FROM dummy ORDER BY number ASC LIMIT 1;\n\tIF NOT FOUND THEN\n\t\tRAISE NOTICE ''no records.'';\n\t\tRETURN;\n\tEND IF;\n\t\n\tLOOP\n\t\tRETURN NEXT pos;\n\t\tSELECT INTO pos number FROM dummy WHERE number>pos ORDER BY number ASC \nLIMIT 1;\n\t\tIF NOT FOUND THEN\n\t\t\tRETURN;\n\t\tEND IF;\n\tEND LOOP;\nEND;\n';\n\nexplain analyze select * from sel_distinct();\n Function Scan on sel_distinct (cost=0.00..12.50 rows=1000 width=4) \n(actual time=215.472..215.696 rows=255 loops=1)\n Total runtime: 215.839 ms\n\n\tThat's better !\n--------------------------------------------------------------------------------\nWhy not use DESC instead of ASC ?\n\nCREATE OR REPLACE FUNCTION sel_distinct()\n\tRETURNS SETOF INTEGER\n\tLANGUAGE plpgsql\n\tAS '\nDECLARE\n\tpos INTEGER;\nBEGIN\n\tSELECT INTO pos number FROM dummy ORDER BY number DESC LIMIT 1;\n\tIF NOT FOUND THEN\n\t\tRAISE NOTICE ''no records.'';\n\t\tRETURN;\n\tEND IF;\n\t\n\tLOOP\n\t\tRETURN NEXT pos;\n\t\tSELECT INTO pos number FROM dummy WHERE number<pos ORDER BY number DESC \nLIMIT 1;\n\t\tIF NOT FOUND THEN\n\t\t\tRETURN;\n\t\tEND IF;\n\tEND LOOP;\nEND;\n';\n\nexplain analyze select * from sel_distinct();\n Function Scan on sel_distinct (cost=0.00..12.50 rows=1000 width=4) \n(actual time=13.500..13.713 rows=255 loops=1)\n Total runtime: 13.857 ms\n\n\tHum hum ! Again, a lot better !\n\tIndex scan backwards seems a lot faster than index scan forwards. Why, I \ndon't know, but here you go from 15 seconds to 14 milliseconds...\n\n\tI don't know WHY (oh why) postgres does not use this kind of strategy \nwhen distinct'ing an indexed field... Anybody got an idea ?\n\n\n",
"msg_date": "Wed, 06 Oct 2004 12:19:24 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct"
},
{
"msg_contents": "Am Mittwoch, 6. Oktober 2004 12:19 schrieb Pierre-Frédéric Caillaud:\n> You could try :\n>\n> explain analyze select \"land\" from \"customer_dim\" group by \"land\";\n> It will be a lot faster but I can't make it use the index on my machine...\nthis already speeds up my queries to about 1/4th of the time, which is about \nthe range of mysql and oracle.\n>\n> Example :\n>\n> [..]\n>\n> Hum hum ! Again, a lot better !\n> Index scan backwards seems a lot faster than index scan forwards. Why, I\n> don't know, but here you go from 15 seconds to 14 milliseconds...\nthanks for this very extensive answer, it helped me a lot.\n>\n> I don't know WHY (oh why) postgres does not use this kind of strategy\n> when distinct'ing an indexed field... Anybody got an idea ?\nThat's the big question I still would like to see answered too. Can anyone \ntell us?\n\nTiA\n-- \nOle Langbehn\n",
"msg_date": "Wed, 6 Oct 2004 18:09:43 +0200",
"msg_from": "Ole Langbehn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequential scan on select distinct"
},
{
"msg_contents": "\nPierre-Fr�d�ric Caillaud <[email protected]> writes:\n\n> \tI don't know WHY (oh why) postgres does not use this kind of strategy\n> when distinct'ing an indexed field... Anybody got an idea ?\n\nWell there are two questions here. Why given the current plans available does\npostgres choose a sequential scan instead of an index scan. And why isn't\nthere this kind of \"skip index scan\" available.\n\nPostgres chooses a sequential scan with a sort (or hash aggregate) over an\nindex scan because it expects it to be faster. sequential scans are much\nfaster than random access scans of indexes, plus index scans need to read many\nmore blocks. If you're finding the index scan to be just as fast as sequential\nscans you might consider lowering random_page_cost closer to 1.0. But note\nthat you may be getting fooled by a testing methodology where more things are\ncached than would be in production.\n\nwhy isn't a \"skip index scan\" plan available? Well, nobody's written the code\nyet. It would part of the same code needed to get an index scan used for:\n\n select y,min(x) from bar group by y\n\nAnd possibly also related to the TODO item:\n\n Use index to restrict rows returned by multi-key index when used with\n non-consecutive keys to reduce heap accesses\n\n For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and col3 =\n 9, spin though the index checking for col1 and col3 matches, rather than\n just col1\n\n\nNote that the optimizer would have to make a judgement call based on the\nexpected number of distinct values. If you had much more than 256 distinct\nvalues then the your plpgsql function wouldn't have performed well at all.\n\n-- \ngreg\n\n",
"msg_date": "06 Oct 2004 12:41:12 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct"
},
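To try the random_page_cost suggestion above without touching postgresql.conf, it can be set per session while comparing plans; the value below is only an example, and as noted, a fully cached test box will flatter low settings:

    SET random_page_cost = 2;    -- default is 4
    EXPLAIN ANALYZE SELECT number FROM dummy GROUP BY number;
    RESET random_page_cost;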
{
"msg_contents": "\n\tThere are even three questions here :\n\n\t- given that 'SELECT DISTINCT field FROM table' is exactly\nthe same as 'SELECT field FROM table GROUP BY field\", postgres could\ntransform the first into the second and avoid itself a (potentially\nkiller) sort.\n\n\tOn my example the table was not too large but on a very large table, \nsorting all the values and then discinct'ing them does not look too \nappealing.\n\n\tCurrently Postgres does Sort+Unique, but there could be a DistinctSort \ninstead of a Sort, that is a thing that sorts and removes the duplicates \nat the same time. Not that much complicated to code than a sort, and much \nfaster in this case.\n\tOr there could be a DistinctHash, which would be similar or rather \nidentical to a HashAggregate and would again skip the sort.\n\n\tIt would (as a bonus) speed up queries like UNION (not ALL), that kind of \nthings. For example :\n\n explain (select number from dummy) union (select number from dummy);\n Unique (cost=287087.62..297087.62 rows=2000000 width=4)\n -> Sort (cost=287087.62..292087.62 rows=2000000 width=4)\n Sort Key: number\n -> Append (cost=0.00..49804.00 rows=2000000 width=4)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..24902.00 \nrows=1000000 width=4)\n -> Seq Scan on dummy (cost=0.00..14902.00 \nrows=1000000 width=4)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..24902.00 \nrows=1000000 width=4)\n -> Seq Scan on dummy (cost=0.00..14902.00 \nrows=1000000 width=4)\n\n\tThis is scary !\n\nI can rewrite it as such (and the planner could, too) :\n\nexplain select * from ((select number from dummy) union all (select number \n from dummy)) as foo group by number;\n HashAggregate (cost=74804.00..74804.00 rows=200 width=4)\n -> Subquery Scan foo (cost=0.00..69804.00 rows=2000000 width=4)\n -> Append (cost=0.00..49804.00 rows=2000000 width=4)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..24902.00 \nrows=1000000 width=4)\n -> Seq Scan on dummy (cost=0.00..14902.00 \nrows=1000000 width=4)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..24902.00 \nrows=1000000 width=4)\n -> Seq Scan on dummy (cost=0.00..14902.00 \nrows=1000000 width=4)\n\nwhich avoids a large sort...\n\nHowever there must be cases in which performing a sort is faster, like \nwhen there are a lot of distinct values and the HashAggregate becomes huge \ntoo.\n\n> Well there are two questions here. Why given the current plans available \n> does\n> postgres choose a sequential scan instead of an index scan. And why isn't\n\n\tWell because it needs to get all the rows in the table in order.\n\tin this case seq scan+sort is about twice as fast as index scan.\n\tInterestingly, once I ANALYZED the table, postgres will chooses to \nindex-scan, which is slower.\n\n> there this kind of \"skip index scan\" available.\n\n\tIt would be really nice to have a skip index scan available.\n\n\tI have an other idea, lets call it the indexed sequential scan :\n\tWhen pg knows there are a lot of rows to access, it will ignore the index \nand seqscan. This is because index access is very random, thus slow. 
\nHowever postgres could implement an \"indexed sequential scan\" where :\n\t- the page numbers for the matching rows are looked up in the index\n\t(this is fast as an index has good locality)\n\t- the page numbers are grouped so we have a list of pages with one and \nonly one instance of each page number\n\t- the list is then sorted so we have page numbers in-order\n\t- the pages are loaded in sorted order (doing a kind of partial \nsequential scan) which would be faster than reading them randomly.\n\n\tOther ideas later\n\t\n\n> Postgres chooses a sequential scan with a sort (or hash aggregate) over \n> an\n> index scan because it expects it to be faster. sequential scans are much\n> faster than random access scans of indexes, plus index scans need to \n> read many\n> more blocks. If you're finding the index scan to be just as fast as \n> sequential\n> scans you might consider lowering random_page_cost closer to 1.0. But \n> note\n> that you may be getting fooled by a testing methodology where more \n> things are\n> cached than would be in production.\n>\n> why isn't a \"skip index scan\" plan available? Well, nobody's written the \n> code\n> yet. It would part of the same code needed to get an index scan used for:\n>\n> select y,min(x) from bar group by y\n>\n> And possibly also related to the TODO item:\n>\n> Use index to restrict rows returned by multi-key index when used with\n> non-consecutive keys to reduce heap accesses\n>\n> For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and \n> col3 =\n> 9, spin though the index checking for col1 and col3 matches, rather \n> than\n> just col1\n>\n>\n> Note that the optimizer would have to make a judgement call based on the\n> expected number of distinct values. If you had much more than 256 \n> distinct\n> values then the your plpgsql function wouldn't have performed well at \n> all.\n>\n\n\n",
"msg_date": "Wed, 06 Oct 2004 19:34:22 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> why isn't a \"skip index scan\" plan available? Well, nobody's written the code\n> yet.\n\nI don't really think it would be a useful plan anyway. What *would* be\nuseful is to support HashAggregate as an implementation alternative for\nDISTINCT --- currently I believe we only consider that for GROUP BY.\nThe DISTINCT planning code is fairly old and crufty and hasn't been\nredesigned lately.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Oct 2004 15:38:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > why isn't a \"skip index scan\" plan available? Well, nobody's written the code\n> > yet.\n> \n> I don't really think it would be a useful plan anyway. \n\nWell it would clearly be useful in this test case, where has a small number of\ndistinct values in a large table, and an index on the column. His plpgsql\nfunction that emulates such a plan is an order of magnitude faster than the\nhash aggregate plan even though it has to do entirely separate index scans for\neach key value.\n\nI'm not sure where the break-even point would be, but it would probably be\npretty low. Probably somewhere around the order of 1% distinct values in the\ntable. That might be uncommon, but certainly not impossible.\n\nBut regardless of how uncommon it is, it could be considered important in\nanother sense: when you need it there really isn't any alternative. It's an\nalgorithmic improvement with no bound on the performance difference. Nothing\nshort of using a manually maintained materialized view would bring the\nperformance into the same ballpark.\n\nSo even if it's only useful occasionally, not having the plan available can\nleave postgres with no effective plan for what should be an easy query.\n\n\n-- \ngreg\n\n",
"msg_date": "06 Oct 2004 16:02:22 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> But regardless of how uncommon it is, it could be considered important in\n> another sense: when you need it there really isn't any alternative. It's an\n> algorithmic improvement with no bound on the performance difference.\n\n[ shrug... ] There are an infinite number of special cases for which\nthat claim could be made. The more we load down the planner with\nseldom-useful special cases, the *lower* the overall performance will\nbe, because we'll waste cycles checking for the special cases in every\ncase ...\n\nIn this particular case, it's not merely a matter of the planner, either.\nYou'd need some new type of plan node in the executor, so there's a\npretty fair amount of added code bulk that will have to be written and\nthen maintained.\n\nI'm open to being persuaded that this is worth doing, but the bar is\ngoing to be high; I think there are a lot of other more-profitable ways\nto invest our coding effort and planning cycles.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Oct 2004 16:20:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct "
},
{
"msg_contents": "\n> I don't really think it would be a useful plan anyway. What *would* be\n> useful is to support HashAggregate as an implementation alternative for\n> DISTINCT --- currently I believe we only consider that for GROUP BY.\n> The DISTINCT planning code is fairly old and crufty and hasn't been\n> redesigned lately.\n>\n> \t\t\tregards, tom lane\n\n\tI see this as a minor annoyance only because I can write GROUP BY\ninstead of DISTINCT and get the speed boost. It probably annoys people\ntrying to port applications to postgres though, forcing them to rewrite\ntheir queries.\n\n* SELECT DISTINCT : 21442.296 ms (by default, uses an index scan)\ndisabling index_scan => Sort + Unique : 14512.105 ms\n\n* GROUP BY : 1793.651 ms using HashAggregate\n* skip index scan by function : 13.833 ms\n\n\tThe HashAggregate speed boost is good, but rather pathetic compared\nto a \"skip index scan\" ; but it's still worth having if updating the\nDISTINCT code is easy.\n\n\tNote that it would also benefit UNION queries which apparently use\nDISTINCT\ninternally and currently produce this :\n------------------------------------------------------------------------------\nexplain analyze select number from\n\t((select number from dummy) union (select number from dummy)) as foo;\n\n Subquery Scan foo (cost=287087.62..317087.62 rows=2000000 width=4)\n(actual time=33068.776..35575.330 rows=255 loops=1)\n -> Unique (cost=287087.62..297087.62 rows=2000000 width=4) (actual\ntime=33068.763..35574.126 rows=255 loops=1)\n -> Sort (cost=287087.62..292087.62 rows=2000000 width=4)\n(actual time=33068.757..34639.180 rows=2000000 loops=1)\n Sort Key: number\n -> Append (cost=0.00..49804.00 rows=2000000 width=4)\n(actual time=0.055..7412.551 rows=2000000 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..24902.00\nrows=1000000 width=4) (actual time=0.054..3104.165 rows=1000000 loops=1)\n -> Seq Scan on dummy (cost=0.00..14902.00\nrows=1000000 width=4) (actual time=0.051..1792.348 rows=1000000 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..24902.00\nrows=1000000 width=4) (actual time=0.048..3034.462 rows=1000000 loops=1)\n -> Seq Scan on dummy (cost=0.00..14902.00\nrows=1000000 width=4) (actual time=0.044..1718.682 rows=1000000 loops=1)\n Total runtime: 36265.662 ms\n------------------------------------------------------------------------------\n\nBut could instead do this :\nexplain analyze select number from\n\t((select number from dummy) union all (select number from dummy)) as foo\ngroup by number;\n\n HashAggregate (cost=74804.00..74804.00 rows=200 width=4) (actual\ntime=10753.648..10753.890 rows=255 loops=1)\n -> Subquery Scan foo (cost=0.00..69804.00 rows=2000000 width=4)\n(actual time=0.059..8992.084 rows=2000000 loops=1)\n -> Append (cost=0.00..49804.00 rows=2000000 width=4) (actual\ntime=0.055..6688.639 rows=2000000 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..24902.00\nrows=1000000 width=4) (actual time=0.054..2749.708 rows=1000000 loops=1)\n -> Seq Scan on dummy (cost=0.00..14902.00\nrows=1000000 width=4) (actual time=0.052..1640.427 rows=1000000 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..24902.00\nrows=1000000 width=4) (actual time=0.038..2751.916 rows=1000000 loops=1)\n -> Seq Scan on dummy (cost=0.00..14902.00\nrows=1000000 width=4) (actual time=0.034..1637.818 rows=1000000 loops=1)\n Total runtime: 10754.120 ms\n------------------------------------------------------------------------------\n\tA 3x speedup, but still a good thing to have.\n\tWhen I LIMIT the two 
subqueries to 100k rows instead of a million, the\ntimes are about equal.\n\tWhen I LIMIT one of the subqueries to 100k and leave the other to 1M,\n\t\tUNION ALL\t\t17949.609 ms\n\t\tUNION + GROUP BY\t6130.417 ms\n\n\tStill some performance to be gained...\n\n------------------------------------------------------------------------------\n\tOf course it can't use a skip index scan on a subquery, but I could\ninstead :\n\tI know it's pretty stupid to use the same table twice but it's just an\nexample. However, if you think about table partitions and views, a \"select\ndistinct number\" from a view having multiple partitions would yield this\ntype of query, and that table partitioning seems like a hot subject lately.\n\nlet's create a dummy example view :\ncreate view dummy_view as (select * from dummy) union all (select * from\ndummy);\n\nexplain analyze select number from dummy_view group by number;\n HashAggregate (cost=74804.00..74804.00 rows=200 width=4) (actual\ntime=10206.456..10206.713 rows=255 loops=1)\n -> Subquery Scan dummy_view (cost=0.00..69804.00 rows=2000000\nwidth=4) (actual time=0.060..8431.776 rows=2000000 loops=1)\n -> Append (cost=0.00..49804.00 rows=2000000 width=8) (actual\ntime=0.055..6122.125 rows=2000000 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..24902.00\nrows=1000000 width=8) (actual time=0.054..2456.566 rows=1000000 loops=1)\n -> Seq Scan on dummy (cost=0.00..14902.00\nrows=1000000 width=8) (actual time=0.048..1107.151 rows=1000000 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..24902.00\nrows=1000000 width=8) (actual time=0.036..2471.748 rows=1000000 loops=1)\n -> Seq Scan on dummy (cost=0.00..14902.00\nrows=1000000 width=8) (actual time=0.031..1104.482 rows=1000000 loops=1)\n Total runtime: 10206.945 ms\n\nA smarter planner could rewrite it into this :\nselect number from ((select distinct number from dummy) union (select\ndistinct number from dummy)) as foo;\n\nand notice it would index-skip-scan the two partitions (here, example with\nmy function)\n\nexplain analyze select number from ((select sel_distinct as number from\nsel_distinct()) union all (select sel_distinct as number\n from sel_distinct())) as foo group by number;\n HashAggregate (cost=70.00..70.00 rows=200 width=4) (actual\ntime=29.078..29.332 rows=255 loops=1)\n -> Subquery Scan foo (cost=0.00..65.00 rows=2000 width=4) (actual\ntime=13.378..28.587 rows=510 loops=1)\n -> Append (cost=0.00..45.00 rows=2000 width=4) (actual\ntime=13.373..28.003 rows=510 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..22.50 rows=1000\nwidth=4) (actual time=13.373..13.902 rows=255 loops=1)\n -> Function Scan on sel_distinct (cost=0.00..12.50\nrows=1000 width=4) (actual time=13.367..13.619 rows=255 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..22.50 rows=1000\nwidth=4) (actual time=13.269..13.800 rows=255 loops=1)\n -> Function Scan on sel_distinct (cost=0.00..12.50\nrows=1000 width=4) (actual time=13.263..13.512 rows=255 loops=1)\n Total runtime: 29.569 ms\n\n\tSo, if a query with UNION or UNION ALL+DISTINCT tries to put DISTINCT\ninside the subqueries and yields an index skip scan, here is a massive\nspeedup.\n\n\tYou will tell me \"but if the UNION ALL has 10 subqueries, planning is\ngoing to take forever !\"\n\tWell not necessarily. 
The above query with 10 subqueries UNIONALLed then\nGROUPed takes :\n\tUNION : 320509.522 ms (the Sort + Unique truly becomes humongous).\n\tUNION ALL + GROUP : 54586.759 ms (you see there is already interest in\nrewiring DISTINCT/UNION)\n\tskip scan + UNION : 147.941 ms\n\tskip scan + UNION ALL + group : 147.313 ms\n\n> Well it would clearly be useful in this test case, where has a small \n> number of distinct values in a large table, and an index on the column. \n> His plpgsql function that emulates such a plan is an order of magnitude\n> faster than the hash aggregate plan even though it has to do entirely \n> separate index scans for each key value.\n\n\tActually, it is more like two orders of magnitude (100x faster) :\nin fact the time for a seq scan is O(N rows) whereas the time for the skip\nindex scan should be, if I'm not mistaken, something like\nO((N distinct values) * (log N rows)) ; in my case there are 256 distinct\nvalues for 1M rows and a speedup of 100x, so if there were 10M rows the\nspeedup would be like 300x (depending on the base of the log which I assume\nis 2). And if the skip index scan is implemented in postgres instead of in\na function, it could be much, much faster...\n\n> [ shrug... ] There are an infinite number of special cases for which\n> that claim could be made. The more we load down the planner with\n> seldom-useful special cases, the *lower* the overall performance will\n> be, because we'll waste cycles checking for the special cases in every\n> case ...\n\n\tIn a general way, you are absolutely right... special-casing a case\nfor a speedup of 2x for instance would be worthless... but we are\nconsidering\na HUGE speedup here. And, if this mode is only used for DISTINCT and\nGROUP BY queries, no planning cycles will be wasted at all on queries which\ndo not use DISTINCT nor GROUP BY.\n\n\tPresent state is that DISTINCT and UNION are slow with or without using\nthe GROUP BY trick. Including the index skip scan in the planning options\nwould only happen when appropriate cases are detected. This detection\nwould be very fast. The index skip scan would then speed up the query so\nmuch that the additional planning cost would not matter. If there are many\ndistinct values, so that seq scan is faster than skip scan, the query will\nbe slow enough anyway so that the additional planning cost does not\nmatter. The only problem cases are queries with small tables where startup\ntime is important, but in that case the planner has stats about the number\nof rows in the table, and again excluding skip scan from the start would\nbe fast.\n\n\tLateral thought :\n\tCreate a new index type which only indexes one row for each value. This\nindex would use very little space and would be very fast to update (on my\ntable it would index only 256 values). 
Keep the Index Scan code and all,\nbut use this index type when you can.\n\tThis solution is less general and also has a few drawbacks.\n\n\tAnother thought :\n\\d dummy\n Table ᅵpublic.dummyᅵ\n Colonne | Type | Modificateurs\n---------+---------+---------------\n id | integer |\n number | integer |\nIndex :\n ᅵdummy_idxᅵ btree (number)\n ᅵdummy_idx_2ᅵ btree (number, id)\n\t\nexplain analyze select * from dummy where id=1;\n\n Seq Scan on dummy (cost=0.00..17402.00 rows=1 width=8) (actual\ntime=274.480..1076.092 rows=1 loops=1)\n Filter: (id = 1)\n Total runtime: 1076.168 ms\n\nexplain analyze select * from dummy where number between 0 and 256 and\nid=1;\n Index Scan using dummy_idx_2 on dummy (cost=0.00..6.02 rows=1 width=8)\n(actual time=1.449..332.020 rows=1 loops=1)\n Index Cond: ((number >= 0) AND (number <= 256) AND (id = 1))\n Total runtime: 332.112 ms\n\n\tIn this case we have no index on id, but using a skip index scan,\nemulated by the \"between\" to force use of the (number,id) index, even\nthough it must look in all the 256 possible values for number, still\nspeeds it up by 3x. Interestingly, with only 16 distinct values, the time\nis quite the same. Thus, the \"skip index scan\" could be used in cases\nwhere there is a multicolumn index, but the WHERE misses a column.\n\nThis would not waste planning cycles because :\n- If the index we need exists and there is no \"distinct\" or \"group by\"\nwithout aggregate, the planner does not even consider using the skip index\nscan.\n- If the index we need does not exist, the planner only loses the cycles\nneeded to check if there is a multicolumn index which may be used. In this\ncase, either there is no such index, and a seq scan is chosen, which will\nbe slow, so the time wasted for the check is negligible ; or an index is\nfound and can be used, and the time gained by the skip index scan is well\namortized.\n\n\tCurrently one has to carefully consider which queries will be used\nfrequently and need indexes, and which ones are infrequent and don't\njustify an index (but these queries will be very slow). With the skip\nindex scan, these less frequent queries don't always mean a seq scan. Thus\npeople will need to create less infrequently used indexes, and will have a\nhigher INSERT/UPDATE speed/\n\n------------------------------------------------------------------------------------------------\n\n\tThe skip scan would also be a winner on this type of query which is a\nkiller, a variant of the famous 'TOP 10' query :\nEXPLAIN SELECT max(id), number FROM dummy GROUP BY number; -> 2229.141 ms\n\n\tPostgres uses a Seq scan + HashAggregate. Come on, we have an index btree\n(number, id), use it ! A simple customization on my skip scan emulation\nfunction takes 13.683 ms...\n\n\tI know that Postgres does not translate max() on on indexed column to\nORDER BY column DESC LIMIT 1, because it would be extremely hard to\nimplement due to the general nature of aggregates which is a very good\nthing. It does not bother me because I can still write ORDER BY column\nDESC LIMIT 1.\n\n\tWhere it does bother me is if I want the highest ID from each number,\nwhich can only be expressed by\nSELECT max(id), number FROM dummy GROUP BY number;\n\tand not with LIMITs.\n\n\tSuppose I want the first 10 higher id's for each number, which is another\nvariant on the \"killer top 10 query\". 
I'm stuck, I cannot even use max(),\nI have to write a custom aggregate which would keep the 10 highest values,\nwhich would be very slow, so I have to use my function and put a LIMIT 10\ninstead of a LIMIT 1 in each query, along with a FOR and some other\nconditions to check if there are less than 10 id's for a number, etc,\nwhich more or less amounts to \"select the next number, then select the\nassociated id's\". It'll still be fast a lot faster than seq scan, but it\ngets more and more complicated.\n\n\tHowever I'd like to write :\n\n\tselect number,id from dummy ORDER BY number DESC, id DESC MULTILIMIT\n50,10;\n\tThe MULTILIMIT means \"I want 50 numbers and 10 id's for each number.\"\n\tMULTILIMIT NULL,10 would mean \"I want all numbers and 10 id's for each\nnumber.\"\n\tNULL is not mandatory, it could also be -1, a keyword or something.\nMULTILIMIT could simply be LIMIT too, because LIMIT takes one parameter.\n\tThe OFFSET clause could also evolve accordingly.\n\n\tAnd this would naturally use a skip index scan, and benefit a whole class\nof queries which have traditionnaly been difficult to get right...\n\n\tConclusion :\n\n\tsmarting up the DISTINCT planner has the following benefits :\n\t- speedup on DISTINCT\n\t- speedup on UNION which seems to use DISTINCT internally\n\n\tindex skip scan has the following benefits :\n\t- massive speedup (x100) on queries involving DISTINCT or its GROUP BY\nvariant\n\t- same thing (x300) on UNION queries if the parser tries to rewrite the\nquery and put the DISTINCT inside the subqueries\n\t- paves the way for a MULTILIMIT which gives an elegant, and very\nefficient way of expressing traditionnaly difficult queries like the \"Top\n10 by category\" which are used quite often and give headaches to dba's.\n\t- Possibility to use a multicolumn index with a WHERE not including all\nleft columns\n\n\tindex skip scan has the following drawbacks :\n\t- more complexity\n\t- additional planning time\n\n\tThis last drawback is in fact, limited because :\n\t- It is easy and fast to know when the index skip scan will never be\nused, so in most queries which won't need it, the possibility can be\neliminated without wasting cycles in planning\n\t- When it is used, the performance gains are so massive that it is\njustified\n\t- People who use many queries where planning time is significant\ncomparing to execution time are probably using SQL functions or prepared\nqueries.\n\n\tEnough arguments, maybe not to convince you, but to have a second thought\non it ?\n\n---------------------------------------------------------------\n\n\tSide Note :\n\n\tWhat do you think about the idea of an \"UniqueSort\" which would do\nsort+unique in one pass ? This could also be simple to code, and would also\noffer advantages to all queries using UNION. The sort would be faster and\nconsume less storage space because the data size would diminish as\nduplicates\nare eliminated along the way.\n",
"msg_date": "Thu, 07 Oct 2004 14:01:13 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct "
},
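For the "highest id per number" case above, PostgreSQL's DISTINCT ON extension already expresses it directly. It is still planned with a sort in this version, so it does not deliver the skip-scan speedup being argued for, and the top-10-per-group variant still has no equally direct form, but it avoids writing a custom aggregate or function:

    SELECT DISTINCT ON (number) number, id
      FROM dummy
     ORDER BY number DESC, id DESC;   -- keeps the row with the highest id for each number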
{
"msg_contents": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?= <[email protected]> writes:\n> \tPresent state is that DISTINCT and UNION are slow with or without using\n> the GROUP BY trick. Including the index skip scan in the planning options\n> would only happen when appropriate cases are detected. This detection\n> would be very fast.\n\nYou have no basis whatever for making that last assertion; and since\nit's the critical point, I don't intend to let you slide by without\nbacking it up. I think that looking for relevant indexes would be\nnontrivial; the more so in cases like you've been armwaving about just\nabove, where you have to find a relevant index for each of several\nsubqueries. The fact that the optimization wins a lot when it wins\nis agreed, but the time spent trying to apply it when it doesn't work\nis a cost that has to be set against that. I don't accept your premise\nthat every query for which skip-index isn't relevant is so slow that\nplanning time does not matter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Oct 2004 11:35:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct "
},
{
"msg_contents": "\nPierre-Fr�d�ric Caillaud <[email protected]> writes:\n\n> \tI see this as a minor annoyance only because I can write GROUP BY\n> instead of DISTINCT and get the speed boost. It probably annoys people\n> trying to port applications to postgres though, forcing them to rewrite\n> their queries.\n\nYeah, really DISTINCT and DISTINCT ON are just special cases of GROUP BY. It\nseems it makes more sense to put the effort into GROUP BY and just have\nDISTINCT and DISTINCT ON go through the same code path. Effectively rewriting\nit internally as a GROUP BY.\n\nThe really tricky part is that a DISTINCT ON needs to know about a first()\naggregate. And to make optimal use of indexes, a last() aggregate as well. And\nideally the planner/executor needs to know something is magic about\nfirst()/last() (and potentially min()/max() at some point) and that they don't\nneed the complete set of tuples to calculate their results.\n\n-- \ngreg\n\n",
"msg_date": "07 Oct 2004 13:08:11 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct"
},
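A user-level first() can already be faked with a custom aggregate; the sketch below is only an approximation (invented names, and it just keeps the first non-NULL value fed to it, with none of the planner awareness being asked for above):

    CREATE FUNCTION first_agg(anyelement, anyelement) RETURNS anyelement
        AS 'SELECT $1' LANGUAGE sql IMMUTABLE STRICT;

    CREATE AGGREGATE first (
        BASETYPE = anyelement,
        SFUNC    = first_agg,
        STYPE    = anyelement
    );

    -- e.g. the first id seen for each number, in whatever order rows arrive
    SELECT number, first(id) FROM dummy GROUP BY number;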
{
"msg_contents": "Am Donnerstag, 7. Oktober 2004 14:01 schrieb Pierre-Frédéric Caillaud:\n> Side Note :\n>\n> What do you think about the idea of an \"UniqueSort\" which would do\n> sort+unique in one pass ? \nThis is what oracle does and it is quite fast with it...\n-- \nOle Langbehn\n\nfreiheit.com technologies gmbh\nTheodorstr. 42-90 / 22761 Hamburg, Germany\nfon +49 (0)40 / 890584-0\nfax +49 (0)40 / 890584-20\n\nFreie Software durch Bücherkauf fördern | http://bookzilla.de/\n",
"msg_date": "Thu, 7 Oct 2004 19:21:08 +0200",
"msg_from": "Ole Langbehn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequential scan on select distinct"
},
{
"msg_contents": "Ole Langbehn <[email protected]> writes:\n>> What do you think about the idea of an \"UniqueSort\" which would do\n>> sort+unique in one pass ? \n\n> This is what oracle does and it is quite fast with it...\n\nHashing is at least as fast, if not faster.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Oct 2004 13:26:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct "
},
{
"msg_contents": "Tom Lane wrote:\n> Ole Langbehn <[email protected]> writes:\n> \n>>>What do you think about the idea of an \"UniqueSort\" which would do\n>>>sort+unique in one pass ? \n> \n>>This is what oracle does and it is quite fast with it...\n\n> Hashing is at least as fast, if not faster.\n> \n> \t\t\tregards, tom lane\n\nI got good mileage in a different SQL engine, by combining the \nhash-aggregate and sort nodes into a single operator.\nThe hash table was just an index into the equivalent of the heap used \nfor generating runs. That gave me partially aggregated data,\nor eliminated duplicate keys, without extra memory overhead of the \nhash-aggregation node below the sort. Memory was scarce then ... :-)\n\nBTW I'm really puzzled that Oracle is pushing 'index skip scan' as a new \nfeature. Wasn't this in the original Oracle Rdb --- one of Gennady \nAntoshenkov's tweaks?\n",
"msg_date": "Thu, 07 Oct 2004 20:00:04 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct"
},
{
"msg_contents": "\n> The really tricky part is that a DISTINCT ON needs to know about a \n> first()\n> aggregate. And to make optimal use of indexes, a last() aggregate as \n> well. And\n> ideally the planner/executor needs to know something is magic about\n> first()/last() (and potentially min()/max() at some point) and that they \n> don't\n> need the complete set of tuples to calculate their results.\n\n\tI'm going to be accused of hand-waving again, but please pardon me, I'm \nenthusiastic, and I like to propose new idead, you can kick me if you \ndon't like them or if I put out too much uninformed bull !\n\n\tIdea :\n\n\tThe aggregate accumulation function could have a way to say :\n\"stop ! I've had enough of these values ! Get on with the next item in the \nGROUP BY clause !\"\n\tI don't know how, or if, the planner could use this (guess: no) or the \nindex scan use this (guess: no) but it would at least save the function \ncalls. I'd guess this idea is quite useless.\n\n\tAggregates could have an additional attribute saying how much values it \nwill need ('max_rows' maybe). This would prevent the creation of \"magic\" \naggregates for max() (which is a kind of special-casing), keep it generic \n(so users can create magic aggregates like this).\n\tAggregates already consist of a bunch of functions (start, accumulate, \nreturn retuls) so this could be just another element in this set.\n\tThis information would be known ahead of time and could influence the \nquery plans too. I'm going to wave my hand and say \"not too much planning \ncost\" because I guess the aggregate details are fetched during planning so \nfetching one more attribute would not be that long...\n\tFor instance first() would have max_rows=1, and users could code a \"first \nN accumulator-in-array\" which would have max_rows=N...\n\tThis does not solve the problem of min() and max() which need max_rows=1 \nonly if the result is sorted... hum... maybe another attribute like \nmax_rows_sorted = 1 for max() and -1 for min() meaning 'first 1' or 'last \n1' (or first N or last N)... according to the \"order by\" clause it would \nbe known that the 'first N' of an 'order by ... asc' is the same as the \n'last N' from an 'order by ... desc'\n\n\t???\n\n\n\t\n\n\n\n",
"msg_date": "Fri, 08 Oct 2004 10:54:59 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct"
},
{
"msg_contents": "\n> Hashing is at least as fast, if not faster.\n> \t\t\tregards, tom lane\n\n\tProbably quite faster if the dataset is not huge...\n\tUniqueSort would be useful for GROUP BY x ORDER BY x though\n\n",
"msg_date": "Fri, 08 Oct 2004 11:57:13 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan on select distinct "
},
{
"msg_contents": "Hi, \nIf anyone can help pls, I have a question abt the\nexecution of cursor create/fetch/move , in particular\nabout disk cost. When a cursor is created, is the\nwhole table (with the required columns) got put into\nmemory? otherwise how does it work? (in term of disk\nread and transfer?) after user issues command\nmove/fetch, how does postgre speed up the query in\ncompare to normal selection?\nThanks a lot, \nregards,\nMT Ho \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail Address AutoComplete - You start. We finish.\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Tue, 12 Oct 2004 04:43:43 -0700 (PDT)",
"msg_from": "my ho <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: execute cursor fetch"
},
{
"msg_contents": "\n\tI just discovered this :\n\nhttp://www.postgresql.org/docs/7.4/static/jdbc-query.html#AEN24298\n\n\nOn Tue, 12 Oct 2004 04:43:43 -0700 (PDT), my ho <[email protected]> \nwrote:\n\n> Hi,\n> If anyone can help pls, I have a question abt the\n> execution of cursor create/fetch/move , in particular\n> about disk cost. When a cursor is created, is the\n> whole table (with the required columns) got put into\n> memory? otherwise how does it work? (in term of disk\n> read and transfer?) after user issues command\n> move/fetch, how does postgre speed up the query in\n> compare to normal selection?\n> Thanks a lot,\n> regards,\n> MT Ho\n>\n>\n>\n> \t\t\n> __________________________________\n> Do you Yahoo!?\n> Yahoo! Mail Address AutoComplete - You start. We finish.\n> http://promotions.yahoo.com/new_mail\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\n",
"msg_date": "Tue, 12 Oct 2004 14:36:10 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: execute cursor fetch"
},
{
"msg_contents": "Pierre-Frédéric Caillaud mentioned :\n=> http://www.postgresql.org/docs/7.4/static/jdbc-query.html#AEN24298\n\nMy question is :\nIs this only true for postgres versions >= 7.4 ?\n\nI see the same section about \"Setting fetch size to turn cursors on and off\"\nis not in the postgres 7.3.7 docs. Does this mean 7.3 the JDBC driver\nfor postgres < 7.4 doesn't support this ?\n\nKind Regards\nStefan\n",
"msg_date": "Tue, 12 Oct 2004 15:05:15 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: execute cursor fetch"
},
{
"msg_contents": "my ho <[email protected]> writes:\n> If anyone can help pls, I have a question abt the\n> execution of cursor create/fetch/move , in particular\n> about disk cost. When a cursor is created, is the\n> whole table (with the required columns) got put into\n> memory?\n\nNo. The plan is set up and then incrementally executed each time you\nsay FETCH.\n\n> how does postgre speed up the query in\n> compare to normal selection?\n\nThe only difference from a SELECT is that the planner will prefer\n\"fast-start\" plans, on the theory that you may not be intending\nto retrieve the whole result. For instance it might prefer an\nindexscan to a seqscan + sort, when it otherwise wouldn't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2004 10:07:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: execute cursor fetch "
},
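In plain SQL, the incremental execution described above looks like this (the table name is reused from the earlier thread purely as an illustration; outside a transaction the cursor would need WITH HOLD):

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM dummy ORDER BY number;
    FETCH 100 FROM big_cur;    -- runs the plan only far enough to produce 100 rows
    MOVE 1000 IN big_cur;      -- advances the position without returning rows
    FETCH 100 FROM big_cur;
    CLOSE big_cur;
    COMMIT;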
{
"msg_contents": "\n\nOn Tue, 12 Oct 2004, Stef wrote:\n\n> Pierre-Frédéric Caillaud mentioned :\n> => http://www.postgresql.org/docs/7.4/static/jdbc-query.html#AEN24298\n> \n> My question is :\n> Is this only true for postgres versions >= 7.4 ?\n> \n> I see the same section about \"Setting fetch size to turn cursors on and off\"\n> is not in the postgres 7.3.7 docs. Does this mean 7.3 the JDBC driver\n> for postgres < 7.4 doesn't support this ?\n> \n\nYou need the 7.4 JDBC driver, but can run it against a 7.3 (or 7.2) \ndatabase. Also note the 8.0 JDBC driver can only do this against a 7.4 or \n8.0 database and not older versions.\n\nKris Jurka\n",
"msg_date": "Fri, 15 Oct 2004 17:30:17 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: execute cursor fetch"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm looking for the statistic of memory, CPU, filesystem access while \nexecuting some regular SQL query, and I want to compare them to\nsame kind of results while executing a cursor function.\n\nThe stat collector give me good results (sequencial scans , acceded \ntuple .....) for regular query but nor for cursor (as explain in the \ndocumentation)\n\nFor more results, i have activated some log level in the postgresql.conf :\n show_query_stats = true\n\nBut I have some trouble in the results interpretation such as :\n\n! system usage stats:\n\n! 2.776053 elapsed 1.880000 user 0.520000 system sec\n\n! [1.910000 user 0.540000 sys total]\n\n! 0/0 [0/0] filesystem blocks in/out\n\n! 5/1 [319/148] page faults/reclaims, 0 [0] swaps\n\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n\n! 0/0 [0/0] voluntary/involuntary context switches\n\n! postgres usage stats:\n\n! Shared blocks: 3877 read, 0 written, buffer hit rate = 0.00%\n\n! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n\n! Direct blocks: 0 read, 0 written\n\nHere is result done after fetching ALL a row with 178282 records in the \ntable.\nlooking at the i/o stat of linux I saw a filesystem access while \nexecuting this request but not in the previous log !!!\n\n! 0/0 [0/0] filesystem blocks in/out\n\n\nI'm running postgresql 7.2.4 under redhat 7.2\n\nDoes am i wrong in my interpretation ?\n\nDoes any newest postgresql version could told me execution paln for a \nfetch AND better stats ?\n\nthx\n\nPs: please excuse my poor english\n\n-- \nAlban Médici\nR&D software engineer\n------------------------------\nyou can contact me @ :\nhttp://www.netcentrex.net\n------------------------------\n\n\n\n\n\n\n\nHi,\n\nI'm looking for the statistic of memory, CPU, filesystem access while\nexecuting some regular SQL query, and I want to compare them to \nsame kind of results while executing a cursor function.\n\nThe stat collector give me good results (sequencial scans , acceded\ntuple .....) for regular query but nor for cursor (as explain in the\ndocumentation)\n\nFor more results, i have activated some log level in the\npostgresql.conf :\n show_query_stats = true\n\nBut I have some trouble in the results interpretation such as :\n\n! system usage stats:\n! 2.776053 elapsed 1.880000 user 0.520000 system sec\n! [1.910000 user 0.540000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 5/1 [319/148] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 3877 read, 0 written, buffer hit rate = 0.00%\n! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n! Direct blocks: 0\nread, 0 written\n\nHere is result done after fetching ALL a row with 178282 records in\nthe table.\nlooking at the i/o stat of linux I saw a filesystem access while\nexecuting this request but not in the previous log !!!\n! 0/0 [0/0] filesystem blocks in/out\n\n\nI'm running postgresql 7.2.4 under redhat 7.2 \n\nDoes am i wrong in my interpretation ?\n\nDoes any newest postgresql version could told me execution paln for a\nfetch AND better stats ?\n\nthx\n\nPs: please excuse my poor english\n\n-- \nAlban Médici\nR&D software engineer\n------------------------------\nyou can contact me @ :\nhttp://www.netcentrex.net\n------------------------------",
"msg_date": "Wed, 06 Oct 2004 12:15:31 +0200",
"msg_from": "=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "stats on cursor and query execution troubleshooting"
},
{
"msg_contents": "=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?= <[email protected]> writes:\n> I'm looking for the statistic of memory, CPU, filesystem access while=20\n> executing some regular SQL query, and I want to compare them to\n> same kind of results while executing a cursor function.\n\nI think your second query is finding all the disk pages it needs in\nkernel disk cache, because they were all read in by the first query.\nThis has little to do with cursor versus non cursor, and everything\nto do with hitting recently-read data again.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Oct 2004 10:16:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] stats on cursor and query execution troubleshooting "
},
{
"msg_contents": "Thanks for your repply, but I still don\"t understand why the statistic \nlogs :\n\n! 0/0 [0/0] filesystem blocks in/out\n\n\nit told me there is no hard disk access, I'm sure there is, I heard my \nHDD, and see activity using gkrellm (even using my first query ; big \nselect *) ?\n\n2004-10-08 10:40:05 DEBUG: query: select * from \"LINE_Line\";\n2004-10-08 10:40:53 DEBUG: QUERY STATISTICS\n! system usage stats:\n! 48.480196 elapsed 42.010000 user 0.700000 system sec\n! [42.030000 user 0.720000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 6/23 [294/145] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 3902 read, 0 written, buffer hit \nrate = 11.78%\n! Local blocks: 0 read, 0 written, buffer hit \nrate = 0.00%\n! Direct blocks: 0 read, 0 written\n\n\nlooking at the web some logs, I saw those fields filled (i/o filesystem)\nDoes my postgresql.conf missing an option or is therer a known bug of my \npostgresql server 7.2.4 ?\n\n\n\nthx\nregards\n\nAlban Médici\n\n\non 06/10/2004 16:16 Tom Lane said the following:\n\n>=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?= <[email protected]> writes:\n> \n>\n>>I'm looking for the statistic of memory, CPU, filesystem access while=20\n>>executing some regular SQL query, and I want to compare them to\n>>same kind of results while executing a cursor function.\n>> \n>>\n>\n>I think your second query is finding all the disk pages it needs in\n>kernel disk cache, because they were all read in by the first query.\n>This has little to do with cursor versus non cursor, and everything\n>to do with hitting recently-read data again.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n-- \nAlban Médici\nR&D software engineer\n------------------------------\nyou can contact me @ :\nhttp://www.netcentrex.net\n------------------------------\n\n\n\n\n\n\n\nThanks for your repply, but I still don\"t understand why the\nstatistic logs :\n\n! 0/0 [0/0] filesystem blocks in/out\n\n\nit told me there is no hard disk access, I'm sure there is, I\nheard my HDD, and see activity using gkrellm (even using my first\nquery ; big select *) ?\n\n2004-10-08 10:40:05 DEBUG: query: select * from \"LINE_Line\";\n2004-10-08 10:40:53 DEBUG: QUERY STATISTICS\n! system usage stats:\n! 48.480196 elapsed 42.010000 user 0.700000 system sec\n! [42.030000 user 0.720000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 6/23 [294/145] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 3902 read, 0 written, buffer hit\nrate = 11.78%\n! Local blocks: 0 read, 0 written, buffer hit\nrate = 0.00%\n! 
Direct blocks: 0 read, 0 written\n\n\nlooking at the web some logs, I saw those fields filled (i/o\nfilesystem) \nDoes my postgresql.conf missing an option or is therer a known bug of\nmy postgresql server 7.2.4 ?\n\n\n\nthx \nregards\n\nAlban Médici\n\n\non 06/10/2004 16:16 Tom Lane said the following:\n\n=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?= <[email protected]> writes:\n \n\nI'm looking for the statistic of memory, CPU, filesystem access while=20\nexecuting some regular SQL query, and I want to compare them to\nsame kind of results while executing a cursor function.\n \n\n\nI think your second query is finding all the disk pages it needs in\nkernel disk cache, because they were all read in by the first query.\nThis has little to do with cursor versus non cursor, and everything\nto do with hitting recently-read data again.\n\n\t\t\tregards, tom lane\n\n \n\n\n-- \nAlban Médici\nR&D software engineer\n------------------------------\nyou can contact me @ :\nhttp://www.netcentrex.net\n------------------------------",
"msg_date": "Fri, 08 Oct 2004 10:47:42 +0200",
"msg_from": "=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] stats on cursor and query execution troubleshooting"
},
{
"msg_contents": "=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?= <[email protected]> writes:\n> Thanks for your repply, but I still don\"t understand why the statistic\n> logs :\n> ! 0/0 [0/0] filesystem blocks in/out\n> it told me there is no hard disk access, I'm sure there is,\n\nComplain to your friendly local kernel hacker. We just report what\ngetrusage() tells us; so if the number is wrong then it's a kernel bug.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 2004 09:55:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] stats on cursor and query execution troubleshooting "
},
{
"msg_contents": "Ok thanks tom, what shall we do without U ?\n\nby the way I have look at my kernel and getrusage() is well configure \nand return good results.\ni/o stats too.\n\nI test an other version of postgresql and now, it works fine.\nIt' seems to be an install bug.\n\nthx\nregards, Alban Médici\n\n\non 08/10/2004 15:55 Tom Lane said the following:\n\n>=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?= <[email protected]> writes:\n> \n>\n>>Thanks for your repply, but I still don\"t understand why the statistic\n>>logs :\n>>! 0/0 [0/0] filesystem blocks in/out\n>>it told me there is no hard disk access, I'm sure there is,\n>> \n>>\n>\n>Complain to your friendly local kernel hacker. We just report what\n>getrusage() tells us; so if the number is wrong then it's a kernel bug.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n-- \nAlban Médici\nR&D software engineer\n------------------------------\nyou can contact me @ :\nIPPhone : +33 (0)2 31 46 37 68\[email protected]\nhttp://www.netcentrex.net\n------------------------------\n\n\n\n\n\n\n\nOk thanks tom, what shall we do without U ?\n\nby the way I have look at my kernel and getrusage() is well configure\nand return good results.\ni/o stats too.\n\nI test an other version of postgresql and now, it works fine.\nIt' seems to be an install bug.\n\nthx \nregards, Alban Médici\n\n\non 08/10/2004 15:55 Tom Lane said the following:\n\n=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?= <[email protected]> writes:\n \n\nThanks for your repply, but I still don\"t understand why the statistic\nlogs :\n! 0/0 [0/0] filesystem blocks in/out\nit told me there is no hard disk access, I'm sure there is,\n \n\n\nComplain to your friendly local kernel hacker. We just report what\ngetrusage() tells us; so if the number is wrong then it's a kernel bug.\n\n\t\t\tregards, tom lane\n\n \n\n\n-- \nAlban Médici\nR&D software engineer\n------------------------------\nyou can contact me @ :\nIPPhone : +33 (0)2 31 46 37 68\[email protected]\nhttp://www.netcentrex.net\n------------------------------",
"msg_date": "Mon, 11 Oct 2004 17:39:01 +0200",
"msg_from": "=?ISO-8859-1?Q?=22Alban_M=E9dici_=28NetCentrex=29=22?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] stats on cursor and query execution"
}
] |
[
{
"msg_contents": "Hello,\n We recently upgraded os from rh 7.2 (2.4 kernel) to Suse 9.1 (2.6 \nkernel), and psql from 7.3.4 to 7.4.2\n\n One of the quirks I've noticed is how the queries don't always have the \nsame explain plans on the new psql... but that's a different email I think.\n\n My main question is I'm trying to convince the powers that be to let me \nuse persistent DB connections (from apache 2 / php), and my research has \nyielded conflicting documentation about the shared_buffers setting... real \nshocker there :)\n\n For idle persistent connections, do each of them allocate the memory \nspecified by this setting (shared_buffers * 8k), or is it one pool used by \nall the connection (which seems the logical conclusion based on the name \nSHARED_buffers)? Personally I'm more inclined to think the latter choice, \nbut I've seen references that alluded to both cases, but never a definitive \nanswer.\n\n For what its worth, shared_buffers is currently set to 50000 (on a 4G \nsystem). Also, effective_cache_size is 125000. max_connections is 256, so I \ndon't want to end up with a possible 100G (50k * 8k * 256) of memory tied \nup... not that it would be possible, but you never know.\n\n I typically never see more than a dozen or so concurrent connections to \nthe db (serving 3 web servers), so I'm thinking of actually using something \nlike pgpool to keep about 10 per web server, rather than use traditional \npersistent connections of 1 per Apache child, which would probably average \nabout 50 per web server.\n\nThanks.\n\n",
"msg_date": "Wed, 06 Oct 2004 16:04:52 -0400",
"msg_from": "Doug Y <[email protected]>",
"msg_from_op": true,
"msg_subject": "The never ending quest for clarity on shared_buffers"
},
{
"msg_contents": "Doug Y wrote:\n\n> For idle persistent connections, do each of them allocate the memory \n> specified by this setting (shared_buffers * 8k), or is it one pool used \n> by all the connection (which seems the logical conclusion based on the \n> name SHARED_buffers)? Personally I'm more inclined to think the latter \n> choice, but I've seen references that alluded to both cases, but never a \n> definitive answer.\n\nThe shared_buffers are shared (go figure) :). It is all one pool shared \nby all connections. The sort_mem and vacuum_mem are *per*connection* \nhowever, so when allocating that size you have to take into account your \nexpected number of concurrent connections.\n\nPaul\n",
"msg_date": "Wed, 06 Oct 2004 15:26:47 -0700",
"msg_from": "Paul Ramsey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The never ending quest for clarity on shared_buffers"
},
{
"msg_contents": "On Thu, 2004-10-07 at 08:26, Paul Ramsey wrote:\n> The shared_buffers are shared (go figure) :). It is all one pool shared \n> by all connections.\n\nYeah, I thought this was pretty clear. Doug, can you elaborate on where\nyou saw the misleading docs?\n\n> The sort_mem and vacuum_mem are *per*connection* however, so when\n> allocating that size you have to take into account your \n> expected number of concurrent connections.\n\nAllocations of size `sort_mem' can actually can actually happen several\ntimes within a *single* connection (if the query plan happens to involve\na number of sort steps or hash tables) -- the limit is on the amount of\nmemory that will be used for a single sort/hash table. So choosing the\nright figure is actually a little more complex than that.\n\n-Neil\n\n\n",
"msg_date": "Thu, 07 Oct 2004 11:29:07 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The never ending quest for clarity on shared_buffers"
}
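Putting the two answers together, the poster's setup corresponds to postgresql.conf entries along these lines (shared_buffers and max_connections are the figures from the original post; the sort_mem value is just an illustrative placeholder):

    shared_buffers = 50000     # 8 kB blocks in ONE pool shared by all backends: about 400 MB total, not per connection
    sort_mem = 8192            # in kB, and can be allocated more than once per query (once per sort/hash step),
                               # so the realistic worst case is several times this figure per busy connection
    max_connections = 256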
] |
[
{
"msg_contents": "Hi guys,\n\n I just discussed about my problem on IRC. I am building a Web usage \nmining system based on Linux, PostgreSQL and C++ made up of an OLTP \ndatabase which feeds several and multi-purpose data warehouses about users' \nbehaviour on HTTP servers.\n\n I modelled every warehouse using the star schema, with a fact table and \nthen 'n' dimension tables linked using a surrogate ID.\n\n Discussing with the guys of the chat, I came up with these conclusions, \nregarding the warehouse's performance:\n\n1) don't use referential integrity in the facts table\n2) use INTEGER and avoid SMALLINT and NUMERIC types for dimensions' IDs\n3) use an index for every dimension's ID in the fact table\n\n As far as administration is concerned: run VACUUM ANALYSE daily and \nVACUUM FULL periodically.\n\n Is there anything else I should keep in mind?\n\n Also, I was looking for advice regarding hardware requirements for a \ndata warehouse system that needs to satisfy online queries. I have indeed \nno idea at the moment. I can only predict 4 million about records a month \nin the fact table, does it make sense or not? is it too much?\n\n Data needs to be easily backed up and eventually replicated.\n\n Having this in mind, what hardware architecture should I look for? How \nmany hard disks do I need, what kind and what RAID solution do you suggest \nme to adopt (5 or 10 - I think)?\n\nThank you so much,\n-Gabriele\n--\nGabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check \nmaintainer\nCurrent Location: Prato, Toscana, Italia\[email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The \nInferno\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.773 / Virus Database: 520 - Release Date: 05/10/2004",
"msg_date": "Wed, 06 Oct 2004 23:36:05 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Data warehousing requirements"
},
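To make the three numbered points above concrete, a fact/dimension pair for this kind of warehouse could look roughly like the sketch below (invented names; note the plain INTEGER surrogate keys, the indexes on the dimension keys, and the deliberate absence of REFERENCES clauses on the fact table):

    CREATE TABLE dim_category (
        id_category integer PRIMARY KEY,
        name        text NOT NULL
    );

    CREATE TABLE facts (
        id_time     integer NOT NULL,   -- surrogate key into a time dimension
        id_category integer NOT NULL,   -- surrogate key; no foreign key constraint on purpose
        hits        integer NOT NULL
    );

    CREATE INDEX facts_id_time_idx     ON facts (id_time);
    CREATE INDEX facts_id_category_idx ON facts (id_category);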
{
"msg_contents": "Consider how the fact table is going to be used, and review hacking it up\nbased on usage. Fact tables should be fairly narrow, so if there are extra\ncolumns beyond keys and dimension keys consider breaking it into parallel\ntables (vertical partitioning).\n\nHorizontal partitioning is your friend; especially if it is large - consider\nslicing the data into chunks. If the fact table is date driven it might be\nworthwhile to break it into separate tables based on date key. This wins in\nreducing the working set of queries and in buffering. If there is a real\nhotspot, such as current month's activity, you might want to keep a separate\ntable with just the (most) active data.Static tables of unchanged data can\nsimplify backups, etc., as well.\n\nConsider summary tables if you know what type of queries you'll hit.\nEspecially here, MVCC is not your friend because it has extra work to do for\naggregate functions.\n\nCluster helps if you bulk load.\n\nIn most warehouses, the data is downstream data from existing operational\nsystems. Because of that you're not able to use database features to\npreserve integrity. In most cases, the data goes through an\nextract/transform/load process - and the output is considered acceptable.\nSo, no RI is correct for star or snowflake design. Pretty much no anything\nelse that adds intelligence - no triggers, no objects, no constraints of any\nsort. Many designers try hard to avoid nulls.\n\nOn the hardware side - RAID5 might work here because of the low volume if\nyou can pay the write performance penalty. To size hardware you need to\nestimate load in terms of transaction type (I usually make bucket categories\nof small, medium, and large effort needs) and transaction rate. Then try to\nestimate how much CPU and I/O they'll use.\n\n/Aaron\n\n\"Let us not speak of them; but look, and pass on.\"\n\n----- Original Message ----- \nFrom: \"Gabriele Bartolini\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, October 06, 2004 5:36 PM\nSubject: [PERFORM] Data warehousing requirements\n\n\n> Hi guys,\n>\n> I just discussed about my problem on IRC. I am building a Web usage\n> mining system based on Linux, PostgreSQL and C++ made up of an OLTP\n> database which feeds several and multi-purpose data warehouses about\nusers'\n> behaviour on HTTP servers.\n>\n> I modelled every warehouse using the star schema, with a fact table\nand\n> then 'n' dimension tables linked using a surrogate ID.\n>\n> Discussing with the guys of the chat, I came up with these\nconclusions,\n> regarding the warehouse's performance:\n>\n> 1) don't use referential integrity in the facts table\n> 2) use INTEGER and avoid SMALLINT and NUMERIC types for dimensions' IDs\n> 3) use an index for every dimension's ID in the fact table\n>\n> As far as administration is concerned: run VACUUM ANALYSE daily and\n> VACUUM FULL periodically.\n>\n> Is there anything else I should keep in mind?\n>\n> Also, I was looking for advice regarding hardware requirements for a\n> data warehouse system that needs to satisfy online queries. I have indeed\n> no idea at the moment. I can only predict 4 million about records a month\n> in the fact table, does it make sense or not? is it too much?\n>\n> Data needs to be easily backed up and eventually replicated.\n>\n> Having this in mind, what hardware architecture should I look for? 
How\n> many hard disks do I need, what kind and what RAID solution do you suggest\n> me to adopt (5 or 10 - I think)?\n>\n> Thank you so much,\n> -Gabriele\n> --\n> Gabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check\n> maintainer\n> Current Location: Prato, Toscana, Italia\n> [email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n> > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The\n> Inferno\n>\n\n\n----------------------------------------------------------------------------\n----\n\n\n>\n> ---\n> Outgoing mail is certified Virus Free.\n> Checked by AVG anti-virus system (http://www.grisoft.com).\n> Version: 6.0.773 / Virus Database: 520 - Release Date: 05/10/2004\n>\n\n\n----------------------------------------------------------------------------\n----\n\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n",
"msg_date": "Thu, 7 Oct 2004 07:30:07 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data warehousing requirements"
},
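A minimal sketch of the date-driven horizontal partitioning suggested above, assuming a simple web-usage fact table; the table and column names are illustrative only and do not come from Gabriele's schema:

    -- One chunk per month, all with the same shape.
    CREATE TABLE facts_2004_08 (
        date_key     date    NOT NULL,
        id_category  integer NOT NULL,
        id_user      integer NOT NULL,
        hits         integer NOT NULL
    );
    CREATE TABLE facts_2004_09 (LIKE facts_2004_08);

    -- Each chunk gets its own (much smaller) indexes, so queries that only
    -- touch the hot month stay inside one table and its index.
    CREATE INDEX facts_2004_08_cat_ix ON facts_2004_08 (id_category);
    CREATE INDEX facts_2004_09_cat_ix ON facts_2004_09 (id_category);

Querying across several chunks is discussed further down the thread (the UNION ALL view in Aaron's later reply).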
{
"msg_contents": "At 13.30 07/10/2004, Aaron Werman wrote:\n>Consider how the fact table is going to be used, and review hacking it up\n>based on usage. Fact tables should be fairly narrow, so if there are extra\n>columns beyond keys and dimension keys consider breaking it into parallel\n>tables (vertical partitioning).\n\nHmm ... I have only an extra column. Sorry if I ask you to confirm this, \nbut practically vertical partitioning allows me to divide a table into 2 \ntables (like if I cut them vertically, right?) having the same key. If I \nhad 2 extra columns, that could be the case, couldn't it?\n\n>Horizontal partitioning is your friend; especially if it is large - consider\n>slicing the data into chunks. If the fact table is date driven it might be\n>worthwhile to break it into separate tables based on date key. This wins in\n>reducing the working set of queries and in buffering. If there is a real\n>hotspot, such as current month's activity, you might want to keep a separate\n>table with just the (most) active data.Static tables of unchanged data can\n>simplify backups, etc., as well.\n\nIn this case, you mean I can chunk data into: \"facts_04_08\" for the august \n2004 facts. Is this the case?\n\nOtherwise, is it right my point of view that I can get good results by \nusing a different approach, based on mixing vertical partitioning and the \nCLUSTER facility of PostgreSQL? Can I vertically partition also dimension \nkeys from the fact table or not?\n\nHowever, this subject is awesome and interesting. Far out ... data \nwarehousing seems to be really continous modeling, doesn't it! :-)\n\n>Consider summary tables if you know what type of queries you'll hit.\n\nAt this stage, I can't predict it yet. But of course I need some sort of \nsummary. I will keep it in mind.\n\n>Especially here, MVCC is not your friend because it has extra work to do for\n>aggregate functions.\n\nWhy does it have extra work? Do you mind being more precise, Aaron? It is \nreally interesting. (thanks)\n\n>Cluster helps if you bulk load.\n\nIs it maybe because I can update or build them once the load operation has \nfinished?\n\n>In most warehouses, the data is downstream data from existing operational\n>systems.\n\nThat's my case too.\n\n>Because of that you're not able to use database features to\n>preserve integrity. In most cases, the data goes through an\n>extract/transform/load process - and the output is considered acceptable.\n>So, no RI is correct for star or snowflake design. Pretty much no anything\n>else that adds intelligence - no triggers, no objects, no constraints of any\n>sort. Many designers try hard to avoid nulls.\n\nThat's another interesting argument. Again, I had in mind the space \nefficiency principle and I decided to use null IDs for dimension tables if \nI don't have the information. I noticed though that in those cases I can't \nuse any index and performances result very poor.\n\nI have a dimension table 'categories' referenced through the 'id_category' \nfield in the facts table. I decided to set it to NULL in case I don't have \nany category to associate to it. I believe it is better to set a '0' value \nif I don't have any category, allowing me not to use a \"SELECT * from facts \nwhere id_category IS NULL\" which does not use the INDEX I had previously \ncreated on that field.\n\n>On the hardware side - RAID5 might work here because of the low volume if\n>you can pay the write performance penalty. 
To size hardware you need to\n>estimate load in terms of transaction type (I usually make bucket categories\n>of small, medium, and large effort needs) and transaction rate. Then try to\n>estimate how much CPU and I/O they'll use.\n\nThank you so much again Aaron. Your contribution has been really important \nto me.\n\nCiao,\n-Gabriele\n\n>\"Let us not speak of them; but look, and pass on.\"\n\nP.S.: Dante rules ... :-)\n\n--\nGabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check \nmaintainer\nCurrent Location: Prato, Toscana, Italia\[email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The \nInferno\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.773 / Virus Database: 520 - Release Date: 05/10/2004",
"msg_date": "Thu, 07 Oct 2004 19:07:04 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data warehousing requirements"
},
{
"msg_contents": "Gabriele,\n\n> That's another interesting argument. Again, I had in mind the space\n> efficiency principle and I decided to use null IDs for dimension tables if\n> I don't have the information. I noticed though that in those cases I can't\n> use any index and performances result very poor.\n\nFor one thing, this is false optimization; a NULL isn't saving you any table \nsize on an INT or BIGINT column. NULLs are only smaller on variable-width \ncolumns. If you're going to start counting bytes, make sure it's an informed \ncount.\n\nMore importantly, you should never, ever allow null FKs on a star-topology \ndatabase. LEFT OUTER JOINs are vastly less efficient than INNER JOINs in a \nquery, and the difference between having 20 outer joins for your data view, \nvs 20 regular joins, can easily be a difference of 100x in execution time.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 7 Oct 2004 15:50:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data warehousing requirements"
},
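A common way to act on this advice (keep the dimension keys in the fact table NOT NULL so plain inner joins and ordinary equality predicates keep working) is to add an explicit "unknown" member to the dimension instead of storing NULL; a sketch with illustrative names:

    -- Reserve a surrogate key for "no category known".
    INSERT INTO categories (id_category, name) VALUES (0, 'unknown');

    -- Re-point existing NULLs at it, then forbid NULLs going forward.
    UPDATE facts SET id_category = 0 WHERE id_category IS NULL;
    ALTER TABLE facts ALTER COLUMN id_category SET NOT NULL;

    -- Lookups stay simple, indexable equality tests / inner joins:
    SELECT count(*) FROM facts WHERE id_category = 0;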
{
"msg_contents": "\n----- Original Message ----- \nFrom: \"Gabriele Bartolini\" <[email protected]>\nTo: \"Aaron Werman\" <[email protected]>;\n<[email protected]>\nSent: Thursday, October 07, 2004 1:07 PM\nSubject: Re: [PERFORM] Data warehousing requirements\n\n\n> At 13.30 07/10/2004, Aaron Werman wrote:\n> >Consider how the fact table is going to be used, and review hacking it up\n> >based on usage. Fact tables should be fairly narrow, so if there are\nextra\n> >columns beyond keys and dimension keys consider breaking it into parallel\n> >tables (vertical partitioning).\n>\n> Hmm ... I have only an extra column. Sorry if I ask you to confirm this,\n> but practically vertical partitioning allows me to divide a table into 2\n> tables (like if I cut them vertically, right?) having the same key. If I\n> had 2 extra columns, that could be the case, couldn't it?\n\nYes - it's splitting a table's columns and copying the PK. If you have only\none column and it's narrow - partitioning becomes harder to justify.\n\n>\n> >Horizontal partitioning is your friend; especially if it is large -\nconsider\n> >slicing the data into chunks. If the fact table is date driven it might\nbe\n> >worthwhile to break it into separate tables based on date key. This wins\nin\n> >reducing the working set of queries and in buffering. If there is a real\n> >hotspot, such as current month's activity, you might want to keep a\nseparate\n> >table with just the (most) active data.Static tables of unchanged data\ncan\n> >simplify backups, etc., as well.\n>\n> In this case, you mean I can chunk data into: \"facts_04_08\" for the august\n> 2004 facts. Is this the case?\n\nExactly. The problem is when you need to query across the chunks. There was\na discussion here of creating views ala\n\ncreate view facts as\n select * from facts_04_07 where datekey between '01/07/2004' and\n'31/07/2004'\n union all\n select * from facts_04_08 where datekey between '01/08/2004' and\n'31/08/2004'\n union all\n select * from facts_04_09 where datekey between '01/09/2004' and\n'30/09/2004'\n ...\n\nhoping the restrictions would help the planner prune chunks out. Has anyone\ntried this?\n\n>\n> Otherwise, is it right my point of view that I can get good results by\n> using a different approach, based on mixing vertical partitioning and the\n> CLUSTER facility of PostgreSQL? Can I vertically partition also dimension\n> keys from the fact table or not?\n\nIf you can do that, you probably should beyond a star schema. The standard\ndefinition of a star schema is a single very large fact table with very\nsmall dimension tables. The point of a star is that it can be used to\nefficiantly restrict results out by merging the dimensional restrictions and\nonly extracting matches from the fact table. E.g.,\n\nselect\n count(*)\nfrom\n people_fact, /* 270M */\n states_dim, /* only 50 something */\n gender_dim, /* 2 */\n age_dim /* say 115 */\nwhere\n age_dim.age > 65\n and\n gender_dim.gender = 'F'\n and\n states_dim.state_code in ('PR', 'ME')\n and\n age_dim.age_key = people_fact.age_key\n and\n gender_dim.gender_key = people_fact.gender_key\n and\n states_dim.state_key = people_fact.state_key\n\n(I had to write out this trivial query because most DBAs don't realize going\nin how ugly star queries are.) 
If you split the fact table so ages were in a\nvertical partition you would optimize queries which didn't use the age data,\nbut if you needed the age data, you would have to join two large tables -\nwhich is not a star query.\n\nWhat you're thinking about on the cluster front is fun. You can split groups\nof dimension keys off to seperate vertical partitions, but you can only\ncluster each on a single key. So you need to split each one off, which\nresults in your inventing the index! (-:\n\n>\n> However, this subject is awesome and interesting. Far out ... data\n> warehousing seems to be really continous modeling, doesn't it! :-)\n>\n> >Consider summary tables if you know what type of queries you'll hit.\n>\n> At this stage, I can't predict it yet. But of course I need some sort of\n> summary. I will keep it in mind.\n>\n> >Especially here, MVCC is not your friend because it has extra work to do\nfor\n> >aggregate functions.\n>\n> Why does it have extra work? Do you mind being more precise, Aaron? It is\n> really interesting. (thanks)\n\nThe standard reasons - that a lot of queries that seem intuitively to be\nresolvable statically or through indices have to walk the data to find\ncurrent versions. Keeping aggregates (especially if you can allow them to be\nslightly stale) can reduce lots of reads. A big goal of horizontal\npartitioning is to give the planner some way of reducing the query scope.\n\n>\n> >Cluster helps if you bulk load.\n>\n> Is it maybe because I can update or build them once the load operation has\n> finished?\n\nIf you have streaming loads, clustering can be a pain to implement well.\n\n>\n> >In most warehouses, the data is downstream data from existing operational\n> >systems.\n>\n> That's my case too.\n>\n> >Because of that you're not able to use database features to\n> >preserve integrity. In most cases, the data goes through an\n> >extract/transform/load process - and the output is considered acceptable.\n> >So, no RI is correct for star or snowflake design. Pretty much no\nanything\n> >else that adds intelligence - no triggers, no objects, no constraints of\nany\n> >sort. Many designers try hard to avoid nulls.\n>\n> That's another interesting argument. Again, I had in mind the space\n> efficiency principle and I decided to use null IDs for dimension tables if\n> I don't have the information. I noticed though that in those cases I can't\n> use any index and performances result very poor.\n>\n> I have a dimension table 'categories' referenced through the 'id_category'\n> field in the facts table. I decided to set it to NULL in case I don't have\n> any category to associate to it. I believe it is better to set a '0' value\n> if I don't have any category, allowing me not to use a \"SELECT * from\nfacts\n> where id_category IS NULL\" which does not use the INDEX I had previously\n> created on that field.\n\n(Sorry for being a pain in the neck, but BTW - that is not a star query; it\nshould be\n\nSELECT\n facts.*\nfrom\n facts,\n id_dim\nwhere\n facts.id_key = id_dim.id_key\n and\n id_dim.id_category IS NULL\n\n[and it really gets to the whole problem of indexing low cardinality\nfields])\n\n\n>\n> >On the hardware side - RAID5 might work here because of the low volume if\n> >you can pay the write performance penalty. To size hardware you need to\n> >estimate load in terms of transaction type (I usually make bucket\ncategories\n> >of small, medium, and large effort needs) and transaction rate. 
Then try\nto\n> >estimate how much CPU and I/O they'll use.\n>\n> Thank you so much again Aaron. Your contribution has been really important\n> to me.\n>\n> Ciao,\n> -Gabriele\n>\n> >\"Let us not speak of them; but look, and pass on.\"\n>\n> P.S.: Dante rules ... :-)\n\n:-)\n\nthat quote was not a reference to anyone in this group!\n\nGood luck,\n/Aaron\n\n>\n> --\n> Gabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check\n> maintainer\n> Current Location: Prato, Toscana, Italia\n> [email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n> > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The\n> Inferno\n>\n",
"msg_date": "Thu, 7 Oct 2004 21:19:44 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data warehousing requirements"
},
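A sketch of the summary-table idea Aaron mentions, again with illustrative names; the aggregate is rebuilt after each bulk load rather than maintained row by row, which sidesteps the MVCC cost of counting live rows at query time:

    CREATE TABLE facts_daily_summary (
        date_key     date    NOT NULL,
        id_category  integer NOT NULL,
        total_hits   bigint  NOT NULL,
        PRIMARY KEY (date_key, id_category)
    );

    -- Refresh after the nightly load; slightly stale but very cheap to read.
    DELETE FROM facts_daily_summary;
    INSERT INTO facts_daily_summary (date_key, id_category, total_hits)
    SELECT date_key, id_category, sum(hits)
    FROM facts
    GROUP BY date_key, id_category;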
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> For one thing, this is false optimization; a NULL isn't saving you any table \n> size on an INT or BIGINT column. NULLs are only smaller on variable-width \n> columns.\n\nUh ... not true. The column will not be stored, either way. Now if you\nhad a row that otherwise had no nulls, the first null in the column will\ncause a null-columns-bitmap to be added, which might more than eat up\nthe savings from storing a single int or bigint. But after the first\nnull, each additional null in a row is a win, free-and-clear, whether\nit's fixed-width or not.\n\n(There are also some alignment considerations that might cause the\nsavings to vanish.)\n\n> More importantly, you should never, ever allow null FKs on a star-topology \n> database. LEFT OUTER JOINs are vastly less efficient than INNER JOINs in a \n> query, and the difference between having 20 outer joins for your data view, \n> vs 20 regular joins, can easily be a difference of 100x in execution time.\n\nIt's not so much that they are necessarily inefficient as that they\nconstrain the planner's freedom of action. You need to think a lot more\ncarefully about the order of joining than when you use inner joins.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Oct 2004 22:43:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data warehousing requirements "
},
{
"msg_contents": "Tom,\n\nWell, I sit corrected. Obviously I misread that.\n\n> It's not so much that they are necessarily inefficient as that they\n> constrain the planner's freedom of action. You need to think a lot more\n> carefully about the order of joining than when you use inner joins.\n\nI've also found that OUTER JOINS constrain the types of joins that can/will be \nused as well as the order. Maybe you didn't intend it that way, but (for \nexample) OUTER JOINs seem much more likely to use expensive merge joins.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 7 Oct 2004 22:53:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data warehousing requirements"
}
] |
[
{
"msg_contents": "I have a large table with a column:\nids integer[] not null\n\nmost of these entries (over 95%) contain only one array element, some\ncan contain up to 10 array elements. seqscan is naturally slow. GIST\non int_array works nice, but GIST isn't exactly a speed daemon when it\ncomes to updating.\n\nSo I thought, why not create partial indexes?\n\nCREATE INDEX one_element_array_index ON table ((ids[1])) WHERE icount(ids) <= 1;\nCREATE INDEX many_element_array_index ON table USING GIST (ids) WHERE\nicount(ids) > 1;\n\nNow, if I select WHERE icount(ids) <= 1 AND ids[1] = 33 I get\nlightning fast results.\nIf I select WHERE icount(ids) > 1 AND ids && '{33}' -- I get them even faster.\n\nBut when I phrase the query:\n\nSELECT * FROM table WHERE (icount(ids) <= 1 AND ids[1] = 33) OR\n(icount(ids) > 1 AND ids && '{33}');\n\nPlanner insists on using seqscan. Even with enable_seqscan = off;\n\nAny hints, comments? :) [ I think thsese partial indexes take best of\ntwo worlds, only if planner wanted to take advantage of it... :) ]\n\n Regards,\n Dawid\n",
"msg_date": "Fri, 8 Oct 2004 11:11:16 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": true,
"msg_subject": "integer[] indexing."
},
{
"msg_contents": "\n\tdisclaimer : brainless proposition\n\n(SELECT * FROM table WHERE (icount(ids) <= 1 AND ids[1] = 33)\nUNION ALL\n(SELECT * FROM table WHERE (icount(ids) > 1 AND ids && '{33}'));\n\n\n",
"msg_date": "Fri, 08 Oct 2004 11:29:35 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integer[] indexing."
},
{
"msg_contents": "In article <opsfjonlc0cq72hf@musicbox>,\n=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?= <[email protected]> writes:\n\n> \tdisclaimer : brainless proposition\n\n> (SELECT * FROM table WHERE (icount(ids) <= 1 AND ids[1] = 33)\n> UNION ALL\n> (SELECT * FROM table WHERE (icount(ids) > 1 AND ids && '{33}'));\n\nI guess my proposition is even more brainless :-)\n\nIf 95% of all records have only one value, how about putting the first\n(and most often only) value into a separate column with a btree index\non it? Something like that:\n\n CREATE TABLE tbl (\n -- other columns\n id1 INT NOT NULL,\n idN INT[] NULL\n );\n\n CREATE INDEX tbl_id1_ix ON tbl (id1);\n\nIf id1 is selective enough, you probably don't need another index on idn.\n\n",
"msg_date": "08 Oct 2004 14:28:52 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integer[] indexing."
},
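A sketch of how the existing table could be migrated to the layout Harald describes, keeping his column names; it assumes contrib/intarray (icount, &&, gist__int_ops) is installed, as it already is for the queries earlier in the thread, and that a given id occurs at most once per row:

    ALTER TABLE tbl ADD COLUMN id1 integer;
    ALTER TABLE tbl ADD COLUMN idn integer[];

    UPDATE tbl
       SET id1 = ids[1],
           idn = CASE WHEN icount(ids) > 1
                      THEN ids[2:array_upper(ids, 1)]
                 END;

    CREATE INDEX tbl_id1_ix ON tbl (id1);
    CREATE INDEX tbl_idn_ix ON tbl USING gist (idn gist__int_ops)
        WHERE idn IS NOT NULL;

    -- Each arm can use its own index, sidestepping the OR limitation
    -- Tom describes in the next message:
    SELECT * FROM tbl WHERE id1 = 33
    UNION ALL
    SELECT * FROM tbl WHERE idn && '{33}';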
{
"msg_contents": "Dawid Kuroczko <[email protected]> writes:\n> But when I phrase the query:\n\n> SELECT * FROM table WHERE (icount(ids) <= 1 AND ids[1] = 33) OR\n> (icount(ids) > 1 AND ids && '{33}');\n\n> Planner insists on using seqscan. Even with enable_seqscan = off;\n\nThe OR-index-scan mechanism isn't currently smart enough to use partial\nindexes that are only valid for some of the OR'd clauses rather than all\nof them. Feel free to fix it ;-). (This might not even be very hard;\nI haven't looked.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 2004 10:03:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: integer[] indexing. "
}
] |
[
{
"msg_contents": "\nI just ran a COPY of a million records several times, and each time I\nran it it ran apparently exponentially slower. If I do an insert of\n10 million records, even with 2 indexes (same table) it doesn't appear\nto slow down at all. Any ideas?\n\n- Mike H.\n\n(I apologize for the ^Hs)\n\nScript started on Wed Oct 6 08:37:32 2004\nbash-3.00$ psql\nWelcome to psql 7.4.5, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nmvh=# \\timing\nTiming is on.\nmvh=# \\timing\b\b\b\b\b\b\breindex table bgtest;mvh=# \u001b[2Pdelete from bgtest;mvh=# \u001b[4hcopy bgtest from '/home/mvh/database\u001b[4lstuff/dbdmp/bgdump';\nCOPY\nTime: 69796.130 ms\nmvh=# vacuum analyze;\nVACUUM\nTime: 19148.621 ms\nmvh=# vacuum analyze;mvh=# \u001b[4hcopy bgtest from '/home/mvh/databasestuff\u001b[4l/dbdmp/bgdump';\nCOPY\nTime: 89189.939 ms\nmvh=# copy bgtest from '/home/mvh/databasestuff/dbdmp/bgdump';mvh=# vacuum analyze;\u001b[K\nVACUUM\nTime: 26814.670 ms\nmvh=# vacuum analyze;mvh=# \u001b[4hcopy bgtest from '/home/mvh/databasestuff\u001b[4l/dbdmp/bgdump';\nCOPY\nTime: 131131.982 ms\nmvh=# copy bgtest from '/home/mvh/databasestuff/dbdmp/bgdump';mvh=# vacuum analyze;\u001b[K\nVACUUM\nTime: 64997.264 ms\nmvh=# vacuum analyze;mvh=# \u001b[4hcopy bgtest from '/home/mvh/databasestuff\u001b[4l/dbdmp/bgdump';\nCOPY\nTime: 299977.697 ms\nmvh=# copy bgtest from '/home/mvh/databasestuff/dbdmp/bgdump';mvh=# vacuum analyze;\u001b[K\nVACUUM\nTime: 103541.716 ms\nmvh=# vacuum analyze;mvh=# \u001b[4hcopy bgtest from '/home/mvh/databasestuff\u001b[4l/dbdmp/bgdump';\nCOPY\nTime: 455292.600 ms\nmvh=# copy bgtest from '/home/mvh/databasestuff/dbdmp/bgdump';mvh=# vacuum analyze;\u001b[K\nVACUUM\nTime: 138910.015 ms\nmvh=# vacuum analyze;mvh=# \u001b[4hcopy bgtest from '/home/mvh/databasestuff\u001b[4l/dbdmp/bgdump';\nCOPY\nTime: 612119.661 ms\nmvh=# copy bgtest from '/home/mvh/databasestuff/dbdmp/bgdump';mvh=# vacuum analyze;\u001b[K\nVACUUM\nTime: 151331.243 ms\nmvh=# \\q\nbash-3.00$ exit\n\nScript done on Wed Oct 6 10:43:04 2004\n\n",
"msg_date": "Fri, 8 Oct 2004 05:10:29 -0700 (PDT)",
"msg_from": "Mike Harding <[email protected]>",
"msg_from_op": true,
"msg_subject": "COPY slows down?"
},
{
"msg_contents": "Mike Harding <[email protected]> writes:\n> I just ran a COPY of a million records several times, and each time I\n> ran it it ran apparently exponentially slower.\n\nTell us about indexes, foreign keys involving this table, triggers, rules?\n\nSome mention of your PG version would be appropriate, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Oct 2004 10:22:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY slows down? "
}
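One guess at what is happening here (only a guess, since the index/trigger details Tom asks about are not in the thread): each DELETE + COPY cycle leaves the previously loaded million rows behind as dead tuples, so every reload works against a steadily larger table and larger indexes, and plain VACUUM ANALYZE only marks that space reusable after the fact. If the whole table is being replaced each time, something along these lines keeps the timings flat between runs:

    -- Reclaim the space for real between loads...
    VACUUM FULL ANALYZE bgtest;
    REINDEX TABLE bgtest;

    -- ...or simply throw the old contents away before reloading:
    TRUNCATE TABLE bgtest;
    COPY bgtest FROM '/home/mvh/databasestuff/dbdmp/bgdump';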
] |
[
{
"msg_contents": "\nJosh Berkus <[email protected]> wrote on 08.10.2004, 07:53:26:\n> \n> > It's not so much that they are necessarily inefficient as that they\n> > constrain the planner's freedom of action. You need to think a lot more\n> > carefully about the order of joining than when you use inner joins.\n> \n> I've also found that OUTER JOINS constrain the types of joins that can/will be \n> used as well as the order. Maybe you didn't intend it that way, but (for \n> example) OUTER JOINs seem much more likely to use expensive merge joins.\n> \n\nUnfortunately, yes thats true - thats is for correctness, not an\noptimization decision. Outer joins constrain you on both join order AND\non join type. Nested loops and hash joins avoid touching all rows in\nthe right hand table, which is exactly what you don't want when you\nhave a right outer join to perform, since you wish to include rows in\nthat table when there is no match. Thus, we MUST choose a merge join\neven when (if it wasn't an outer join) we would have chosen a nested\nloops or hash.\n\nBest Regards, Simon Riggs\n",
"msg_date": "Fri, 8 Oct 2004 14:38:01 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "=?iso-8859-1?Q?Re:_Re:__Data_warehousing_requirements?="
},
{
"msg_contents": "<[email protected]> writes:\n> Unfortunately, yes thats true - thats is for correctness, not an\n> optimization decision. Outer joins constrain you on both join order AND\n> on join type. Nested loops and hash joins avoid touching all rows in\n> the right hand table, which is exactly what you don't want when you\n> have a right outer join to perform, since you wish to include rows in\n> that table when there is no match. Thus, we MUST choose a merge join\n> even when (if it wasn't an outer join) we would have chosen a nested\n> loops or hash.\n\nThe alternative of course is to flip it around to be a left outer join\nso that we can use those plan types. But depending on the relative\nsizes of the two tables this may be a loser.\n\nIf you are using a FULL join then it is indeed true that mergejoin is\nthe only supported plan type. I don't think that was at issue here\nthough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 2004 10:22:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?iso-8859-1?Q?Re:_Re:__Data_warehousing_requirements?= "
}
] |
[
{
"msg_contents": "Hi,\n\n I have a problem with the below query, when i do explain on the \nbelow query on my live database it doesnt use any index specified on the \ntables and does seq scan on the table which is 400k records. But if i \ncopy the same table onto a different database on a different machine it \nuses all the indexes specified and query runs much quicker. I ran \nanalyze, vacuum analyze and rebuilt indexes on the live database but \nstill there is no difference in the performance. Can anyone tell why \nthis odd behavior ?\n\nThanks!\n\nQuery\n--------\n\nSELECT a.total as fsbos, b.total as foreclosures, c.total as \nauctions, d.latestDate as lastUpdated\nFROM ((SELECT count(1) as total\n FROM Properties p INNER JOIN Datasources ds\n ON p.datasource = ds.sourceId\n WHERE p.countyState = 'GA'\n AND ds.sourceType = 'fsbo'\n AND p.status in (1,2)\n )) a,\n ((SELECT count(1) as total\n FROM Properties p INNER JOIN Datasources ds\n ON p.datasource = ds.sourceId\n WHERE p.countyState = 'GA'\n AND ds.sourceType = 'foreclosure'\n AND (p.status in (1,2)\n OR (p.status = 0 AND p.LastReviewed2 >= current_timestamp - \nINTERVAL '14 days') )\n )) b,\n ((SELECT count(1) as total\n FROM Properties p\n WHERE p.datasource = 1087\n AND p.countyState = 'GA'\n AND p.status in (1,2)\n )) c,\n ((SELECT to_char(max(entryDate2), 'MM/DD/YYYY HH24:MI') as latestDate\n FROM Properties p\n WHERE p.countyState = 'GA'\n)) d\n\nExplain from the Live database\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1334730.95..1334731.02 rows=1 width=56)\n -> Nested Loop (cost=1026932.25..1026932.30 rows=1 width=24)\n -> Nested Loop (cost=704352.11..704352.14 rows=1 width=16)\n -> Subquery Scan b (cost=375019.89..375019.90 rows=1 \nwidth=8)\n -> Aggregate (cost=375019.89..375019.89 rows=1 \nwidth=0)\n -> Hash Join (cost=308.72..374844.49 \nrows=70158 width=0)\n Hash Cond: (\"outer\".datasource = \n\"inner\".sourceid)\n -> Seq Scan on properties p \n(cost=0.00..373289.10 rows=72678 width=4)\n Filter: ((countystate = \n'GA'::bpchar) AND ((status = 0) OR (status = 1) OR (status = 2)) AND \n((lastreviewed2 >= (('now'::text)::timestamp(6) with time zone - '14 \ndays'::interval)) OR (status = 1) OR (status = 2)))\n -> Hash (cost=288.05..288.05 \nrows=8267 width=4)\n -> Seq Scan on datasources ds \n(cost=0.00..288.05 rows=8267 width=4)\n Filter: ((sourcetype)::text \n= 'foreclosure'::text)\n -> Subquery Scan c (cost=329332.22..329332.23 rows=1 \nwidth=8)\n -> Aggregate (cost=329332.22..329332.22 rows=1 \nwidth=0)\n -> Seq Scan on properties p \n(cost=0.00..329321.06 rows=4464 width=0)\n Filter: ((datasource = 1087) AND \n(countystate = 'GA'::bpchar) AND ((status = 1) OR (status = 2)))\n -> Subquery Scan a (cost=322580.14..322580.15 rows=1 width=8)\n -> Aggregate (cost=322580.14..322580.14 rows=1 width=0)\n -> Hash Join (cost=288.24..322579.28 rows=344 \nwidth=0)\n Hash Cond: (\"outer\".datasource = \n\"inner\".sourceid)\n -> Seq Scan on properties p \n(cost=0.00..321993.05 rows=39273 width=4)\n Filter: ((countystate = 'GA'::bpchar) \nAND ((status = 1) OR (status = 2)))\n -> Hash (cost=288.05..288.05 rows=75 width=4)\n -> Seq Scan on datasources ds \n(cost=0.00..288.05 rows=75 width=4)\n Filter: ((sourcetype)::text = \n'fsbo'::text)\n -> Subquery Scan d (cost=307798.70..307798.72 rows=1 width=32)\n -> 
Aggregate (cost=307798.70..307798.71 rows=1 width=8)\n -> Seq Scan on properties p (cost=0.00..307337.04 \nrows=184666 width=8)\n Filter: (countystate = 'GA'::bpchar)\n\nExplain on the Copy of the Live database for the same query\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=5380.81..5380.88 rows=1 width=56)\n -> Nested Loop (cost=3714.30..3714.35 rows=1 width=48)\n -> Nested Loop (cost=2687.15..2687.18 rows=1 width=40)\n -> Subquery Scan a (cost=1022.76..1022.77 rows=1 width=8)\n -> Aggregate (cost=1022.76..1022.76 rows=1 width=0)\n -> Nested Loop (cost=0.00..1022.75 rows=2 \nwidth=0)\n -> Seq Scan on datasources ds \n(cost=0.00..4.44 rows=2 width=4)\n Filter: ((sourcetype)::text = \n'fsbo'::text)\n -> Index Scan using \nidx_properties_datasourcestateauctiondate on properties p \n(cost=0.00..509.14 rows=2 width=4)\n Index Cond: (p.datasource = \n\"outer\".sourceid)\n Filter: ((countystate = \n'GA'::bpchar) AND ((status = 1) OR (status = 2)))\n -> Subquery Scan d (cost=1664.39..1664.40 rows=1 width=32)\n -> Aggregate (cost=1664.39..1664.39 rows=1 width=8)\n -> Index Scan using properties_idx_search on \nproperties p (cost=0.00..1663.35 rows=416 width=8)\n Index Cond: (countystate = 'GA'::bpchar)\n -> Subquery Scan b (cost=1027.15..1027.16 rows=1 width=8)\n -> Aggregate (cost=1027.15..1027.15 rows=1 width=0)\n -> Nested Loop (cost=0.00..1027.14 rows=3 width=0)\n -> Seq Scan on datasources ds \n(cost=0.00..4.44 rows=2 width=4)\n Filter: ((sourcetype)::text = \n'foreclosure'::text)\n -> Index Scan using \nidx_properties_datasourcestateauctiondate on properties p \n(cost=0.00..511.32 rows=3 width=4)\n Index Cond: (p.datasource = \n\"outer\".sourceid)\n Filter: ((countystate = 'GA'::bpchar) \nAND ((status = 0) OR (status = 1) OR (status = 2)) AND ((lastreviewed2 \n >= (('now'::text)::timestamp(6) with time zone - '14 days'::interval)) \nOR (status = 1) OR (status = 2)))\n -> Subquery Scan c (cost=1666.51..1666.52 rows=1 width=8)\n -> Aggregate (cost=1666.51..1666.51 rows=1 width=0)\n -> Index Scan using properties_idx_search on properties \np (cost=0.00..1666.46 rows=18 width=0)\n Index Cond: (countystate = 'GA'::bpchar)\n Filter: ((datasource = 1087) AND ((status = 1) OR \n(status = 2)))\n\n\n\n",
"msg_date": "Fri, 08 Oct 2004 10:49:12 -0400",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Tuning"
},
{
"msg_contents": "Pallav Kalva <[email protected]> writes:\n> I have a problem with the below query, when i do explain on the \n> below query on my live database it doesnt use any index specified on the \n> tables and does seq scan on the table which is 400k records. But if i \n> copy the same table onto a different database on a different machine it \n> uses all the indexes specified and query runs much quicker.\n\nIt looks to me like you've never vacuumed/analyzed the copy, and so you\nget a different plan there. The fact that that plan is better than the\none made with statistics is unhappy making :-( ... but when you only\nshow us EXPLAIN output rather than EXPLAIN ANALYZE, it's impossible to\nspeculate about why. Also, what PG version is this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 2004 12:42:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Tuning "
},
{
"msg_contents": "Tom Lane wrote:\n\n>Pallav Kalva <[email protected]> writes:\n> \n>\n>> I have a problem with the below query, when i do explain on the \n>>below query on my live database it doesnt use any index specified on the \n>>tables and does seq scan on the table which is 400k records. But if i \n>>copy the same table onto a different database on a different machine it \n>>uses all the indexes specified and query runs much quicker.\n>> \n>>\n>\n>It looks to me like you've never vacuumed/analyzed the copy, and so you\n>get a different plan there. The fact that that plan is better than the\n>one made with statistics is unhappy making :-( ... but when you only\n>show us EXPLAIN output rather than EXPLAIN ANALYZE, it's impossible to\n>speculate about why. Also, what PG version is this?\n>\n>\t\t\tregards, tom lane\n>\n> \n>\nThanks! for the quick reply. I cant run the EXPLAIN ANALYZE on the live \ndatabase because, it takes lot of time and hols up lot of other queries \non the table. The postgres version I am using is 7.4 . when you say \" i \nnever vacuum/analyxed the copy\" you mean the Live database ? or the copy \nof the live database ? . I run vacuum database daily on my live database \nas a part of daily maintanence.\n\n\n",
"msg_date": "Fri, 08 Oct 2004 13:29:22 -0400",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Tuning"
}
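If EXPLAIN ANALYZE on the full query is too disruptive on the live box, it can be run piecewise on the individual subqueries, and the statistics for the skewed columns can be made finer before re-analyzing; a sketch (the statistics target of 100 is just an example value):

    -- Time one of the four subqueries on its own.
    EXPLAIN ANALYZE
    SELECT count(1)
    FROM properties p
    WHERE p.datasource = 1087
      AND p.countystate = 'GA'
      AND p.status IN (1, 2);

    -- Give the planner more detail on the columns it is mis-estimating,
    -- then refresh the stats.
    ALTER TABLE properties ALTER COLUMN countystate SET STATISTICS 100;
    ALTER TABLE properties ALTER COLUMN status SET STATISTICS 100;
    ANALYZE properties;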
] |
[
{
"msg_contents": "I'm looking at one of my standard queries and have encountered some strange \nperformance problems.\n\nThe query below is to search for vacant staff member date/time slots given a series of \ntarget date/times. The data contained in the booking_plan/staff_booking tables contain \nthe existing bookings, so I'm looking for \"clashing\" bookings to eliminate them from a \ncandidate list.\n\nThe query is:\n\nselect distinct b.staff_id from staff_booking b, booking_plan bp, t_search_reqt_dates rd\nwhere b.booking_id = bp.booking_id\nand rd.datetime_from <= bp.datetime_to and rd.datetime_to >= bp.datetime_from\nAND bp.booking_date between rd.reqt_date-1 and rd.reqt_date+1\nand rd.search_id = 13\nand rd.reqt_date between '2004-09-30' AND '2005-12-31'\n\nThere are 197877 rows in staff_booking, 573416 rows in booking_plan and 26 rows in \nt_search_reqt_dates.\n\nThe t_search reqt_dates is a temp table created and populated with the target \ndate/times. The temp table is *not* analyzed, all the other are.\n\nThe \"good\" query plan comes with the criteria on search_id and reqt_date given in the \nlast two lines in the query. Note all the rows in the temp table are search_id = 13 and all \nthe rows are between the two dates, so the whole 26 rows is always pulled out.\n\nIn this case it is doing exactly what I expect. It is pulling all rows from the \nt_search_reqt_dates table, then pulling the relevant records from the booking_plan and \nthen hashing with staff_booking. Excellent performance.\n\nThe problem is I don't need the clauses for search_id and reqt_dates as the whole table \nis always read anyway. The good plan is because the planner thinks just one row will be \nread from t_search_reqt_dates.\n\nIf I remove the redundant clauses, the planner now estimates 1000 rows returned from \nthe table, not unreasonable since it has no statistics. But *why* in that case, with *more* \nestimated rows does it choose to materialize that table (26 rows) 573416 times!!!\n\nwhenever it estimates more than one row it chooses the bad plan.\n\nI really want to remove the redundant clauses, but I can't. 
If I analyse the table, then it \nknows there are 26 rows and chooses the \"bad\" plan whatever I do.\n\nAny ideas???\n\nCheers,\nGary.\n\n-------------------- Plans for above query ------------------------\n\nGood QUERY PLAN\nUnique (cost=15440.83..15447.91 rows=462 width=4) (actual time=1342.000..1342.000 \nrows=110 loops=1)\n -> Sort (cost=15440.83..15444.37 rows=7081 width=4) (actual \ntime=1342.000..1342.000 rows=2173 loops=1)\n Sort Key: b.staff_id\n -> Hash Join (cost=10784.66..15350.26 rows=7081 width=4) (actual \ntime=601.000..1331.000 rows=2173 loops=1)\n Hash Cond: (\"outer\".booking_id = \"inner\".booking_id)\n -> Seq Scan on staff_booking b (cost=0.00..4233.39 rows=197877 width=8) \n(actual time=0.000..400.000 rows=197877 loops=1)\n -> Hash (cost=10781.12..10781.12 rows=7080 width=4) (actual \ntime=591.000..591.000 rows=0 loops=1)\n -> Nested Loop (cost=0.00..10781.12 rows=7080 width=4) (actual \ntime=10.000..581.000 rows=2173 loops=1)\n Join Filter: ((\"outer\".datetime_from <= \"inner\".datetime_to) AND \n(\"outer\".datetime_to >= \"inner\".datetime_from))\n -> Seq Scan on t_search_reqt_dates rd (cost=0.00..16.50 rows=1 \nwidth=20) (actual time=0.000..0.000 rows=26 loops=1)\n Filter: ((search_id = 13) AND (reqt_date >= '2004-09-30'::date) AND \n(reqt_date <= '2005-12-31'::date))\n -> Index Scan using booking_plan_idx2 on booking_plan bp \n(cost=0.00..10254.91 rows=63713 width=24) (actual time=0.000..11.538 rows=5871 \nloops=26)\n Index Cond: ((bp.booking_date >= (\"outer\".reqt_date - 1)) AND \n(bp.booking_date <= (\"outer\".reqt_date + 1)))\nTotal runtime: 1342.000 ms\n\n\nBad QUERY PLAN\nUnique (cost=7878387.29..7885466.50 rows=462 width=4) (actual \ntime=41980.000..41980.000 rows=110 loops=1)\n -> Sort (cost=7878387.29..7881926.90 rows=7079211 width=4) (actual \ntime=41980.000..41980.000 rows=2173 loops=1)\n Sort Key: b.staff_id\n -> Nested Loop (cost=5314.32..7480762.73 rows=7079211 width=4) (actual \ntime=6579.000..41980.000 rows=2173 loops=1)\n Join Filter: ((\"inner\".datetime_from <= \"outer\".datetime_to) AND \n(\"inner\".datetime_to >= \"outer\".datetime_from) AND (\"outer\".booking_date >= \n(\"inner\".reqt_date - 1)) AND (\"outer\".booking_date <= (\"inner\".reqt_date + 1)))\n -> Hash Join (cost=5299.32..26339.73 rows=573416 width=24) (actual \ntime=2413.000..7832.000 rows=573416 loops=1)\n Hash Cond: (\"outer\".booking_id = \"inner\".booking_id)\n -> Seq Scan on booking_plan bp (cost=0.00..7646.08 rows=573416 \nwidth=24) (actual time=0.000..1201.000 rows=573416 loops=1)\n -> Hash (cost=4233.39..4233.39 rows=197877 width=8) (actual \ntime=811.000..811.000 rows=0 loops=1)\n -> Seq Scan on staff_booking b (cost=0.00..4233.39 rows=197877 \nwidth=8) (actual time=0.000..430.000 rows=197877 loops=1)\n -> Materialize (cost=15.00..20.00 rows=1000 width=20) (actual \ntime=0.001..0.019 rows=26 loops=573416)\n -> Seq Scan on t_search_reqt_dates rd (cost=0.00..15.00 rows=1000 \nwidth=20) (actual time=0.000..0.000 rows=26 loops=1)\nTotal runtime: 41980.000 ms",
"msg_date": "Fri, 08 Oct 2004 20:32:22 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd planner choice?"
},
{
"msg_contents": "Oops, forgot to mention:\n\nPostgreSQL 8.0 beta 2 Windows.\n\nThanks,\nGary.\n\nOn 8 Oct 2004 at 20:32, Gary Doades wrote:\n\n> \n> I'm looking at one of my standard queries and have encountered some strange performance \n> problems.\n> \n> The query below is to search for vacant staff member date/time slots given a series of target \n> date/times. The data contained in the booking_plan/staff_booking tables contain the existing \n> bookings, so I'm looking for \"clashing\" bookings to eliminate them from a candidate list.\n> \n> The query is:\n> \n> select distinct b.staff_id from staff_booking b, booking_plan bp, t_search_reqt_dates rd\n> where b.booking_id = bp.booking_id\n> and rd.datetime_from <= bp.datetime_to and rd.datetime_to >= bp.datetime_from\n> AND bp.booking_date between rd.reqt_date-1 and rd.reqt_date+1\n> and rd.search_id = 13\n> and rd.reqt_date between '2004-09-30' AND '2005-12-31'\n> \n> There are 197877 rows in staff_booking, 573416 rows in booking_plan and 26 rows in \n> t_search_reqt_dates.\n> \n> The t_search reqt_dates is a temp table created and populated with the target date/times. The \n> temp table is *not* analyzed, all the other are.\n> \n> The \"good\" query plan comes with the criteria on search_id and reqt_date given in the last two \n> lines in the query. Note all the rows in the temp table are search_id = 13 and all the rows are \n> between the two dates, so the whole 26 rows is always pulled out.\n> \n> In this case it is doing exactly what I expect. It is pulling all rows from the t_search_reqt_dates \n> table, then pulling the relevant records from the booking_plan and then hashing with \n> staff_booking. Excellent performance.\n> \n> The problem is I don't need the clauses for search_id and reqt_dates as the whole table is \n> always read anyway. The good plan is because the planner thinks just one row will be read from \n> t_search_reqt_dates.\n> \n> If I remove the redundant clauses, the planner now estimates 1000 rows returned from the table, \n> not unreasonable since it has no statistics. But *why* in that case, with *more* estimated rows \n> does it choose to materialize that table (26 rows) 573416 times!!!\n> \n> whenever it estimates more than one row it chooses the bad plan.\n> \n> I really want to remove the redundant clauses, but I can't. 
If I analyse the table, then it knows \n> there are 26 rows and chooses the \"bad\" plan whatever I do.\n> \n> Any ideas???\n> \n> Cheers,\n> Gary.\n> \n> -------------------- Plans for above query ------------------------\n> \n> Good QUERY PLAN\n> Unique (cost=15440.83..15447.91 rows=462 width=4) (actual time=1342.000..1342.000 \n> rows=110 loops=1)\n> -> Sort (cost=15440.83..15444.37 rows=7081 width=4) (actual time=1342.000..1342.000 \n> rows=2173 loops=1)\n> Sort Key: b.staff_id\n> -> Hash Join (cost=10784.66..15350.26 rows=7081 width=4) (actual \n> time=601.000..1331.000 rows=2173 loops=1)\n> Hash Cond: (\"outer\".booking_id = \"inner\".booking_id)\n> -> Seq Scan on staff_booking b (cost=0.00..4233.39 rows=197877 width=8) (actual \n> time=0.000..400.000 rows=197877 loops=1)\n> -> Hash (cost=10781.12..10781.12 rows=7080 width=4) (actual \n> time=591.000..591.000 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..10781.12 rows=7080 width=4) (actual \n> time=10.000..581.000 rows=2173 loops=1)\n> Join Filter: ((\"outer\".datetime_from <= \"inner\".datetime_to) AND \n> (\"outer\".datetime_to >= \"inner\".datetime_from))\n> -> Seq Scan on t_search_reqt_dates rd (cost=0.00..16.50 rows=1 width=20) \n> (actual time=0.000..0.000 rows=26 loops=1)\n> Filter: ((search_id = 13) AND (reqt_date >= '2004-09-30'::date) AND \n> (reqt_date <= '2005-12-31'::date))\n> -> Index Scan using booking_plan_idx2 on booking_plan bp \n> (cost=0.00..10254.91 rows=63713 width=24) (actual time=0.000..11.538 rows=5871 loops=26)\n> Index Cond: ((bp.booking_date >= (\"outer\".reqt_date - 1)) AND \n> (bp.booking_date <= (\"outer\".reqt_date + 1)))\n> Total runtime: 1342.000 ms\n> \n> \n> Bad QUERY PLAN\n> Unique (cost=7878387.29..7885466.50 rows=462 width=4) (actual time=41980.000..41980.000 \n> rows=110 loops=1)\n> -> Sort (cost=7878387.29..7881926.90 rows=7079211 width=4) (actual \n> time=41980.000..41980.000 rows=2173 loops=1)\n> Sort Key: b.staff_id\n> -> Nested Loop (cost=5314.32..7480762.73 rows=7079211 width=4) (actual \n> time=6579.000..41980.000 rows=2173 loops=1)\n> Join Filter: ((\"inner\".datetime_from <= \"outer\".datetime_to) AND (\"inner\".datetime_to >= \n> \"outer\".datetime_from) AND (\"outer\".booking_date >= (\"inner\".reqt_date - 1)) AND \n> (\"outer\".booking_date <= (\"inner\".reqt_date + 1)))\n> -> Hash Join (cost=5299.32..26339.73 rows=573416 width=24) (actual \n> time=2413.000..7832.000 rows=573416 loops=1)\n> Hash Cond: (\"outer\".booking_id = \"inner\".booking_id)\n> -> Seq Scan on booking_plan bp (cost=0.00..7646.08 rows=573416 width=24) \n> (actual time=0.000..1201.000 rows=573416 loops=1)\n> -> Hash (cost=4233.39..4233.39 rows=197877 width=8) (actual \n> time=811.000..811.000 rows=0 loops=1)\n> -> Seq Scan on staff_booking b (cost=0.00..4233.39 rows=197877 width=8) \n> (actual time=0.000..430.000 rows=197877 loops=1)\n> -> Materialize (cost=15.00..20.00 rows=1000 width=20) (actual time=0.001..0.019 \n> rows=26 loops=573416)\n> -> Seq Scan on t_search_reqt_dates rd (cost=0.00..15.00 rows=1000 width=20) \n> (actual time=0.000..0.000 rows=26 loops=1)\n> Total runtime: 41980.000 ms\n> \n\n\n",
"msg_date": "Fri, 08 Oct 2004 20:46:03 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd planner choice?"
},
{
"msg_contents": "\"Gary Doades\" <[email protected]> writes:\n> If I remove the redundant clauses, the planner now estimates 1000 rows returned from \n> the table, not unreasonable since it has no statistics. But *why* in that case, with *more* \n> estimated rows does it choose to materialize that table (26 rows) 573416 times!!!\n\nIt isn't. It's materializing that once and scanning it 573416 times,\nonce for each row in the outer relation. And this is not a bad plan\ngiven the estimates. If it had stuck to what you call the good plan,\nand there *had* been 1000 rows in the temp table, that plan would have\nrun 1000 times longer than it did.\n\nAs a general rule, if your complaint is that you get a bad plan for an\nunanalyzed table, the response is going to be \"so analyze the table\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 2004 16:04:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd planner choice? "
},
{
"msg_contents": "On 8 Oct 2004 at 16:04, Tom Lane wrote:\n\n> \"Gary Doades\" <[email protected]> writes:\n> > If I remove the redundant clauses, the planner now estimates 1000 rows returned from \n> > the table, not unreasonable since it has no statistics. But *why* in that case, with *more* \n> > estimated rows does it choose to materialize that table (26 rows) 573416 times!!!\n> \n> It isn't. It's materializing that once and scanning it 573416 times,\n> once for each row in the outer relation. And this is not a bad plan\n> given the estimates. If it had stuck to what you call the good plan,\n> and there *had* been 1000 rows in the temp table, that plan would have\n> run 1000 times longer than it did.\n> \n> As a general rule, if your complaint is that you get a bad plan for an\n> unanalyzed table, the response is going to be \"so analyze the table\".\n> \n\nThe problem is in this case is that if I *do* analyse the table I *always* get the bad plan. \nBad in this case meaning the query takes a lot longer. I'm still not sure why it can't \nchoose the better plan by just reading the 26 rows once and index scan the \nbooking_plan table 26 times (as in the \"good\" plan).\n\nOK, with 1000 row estimate I can see that index scanning 1000 times into the \nbooking_plan table would take some time, but the even if planner estimates 5 rows it still \nproduces the same slow query.\n\nIf I analyze the table it then knows there are 26 rows and therefore always goes slow.\n\nThis is why I am not analyzing this table, to fool the planner into thinking there is only \none row and produce a much faster access plan. Not ideal I know.\n\nJust using one redundant clause I now get:\n\nselect distinct b.staff_id from staff_booking b, booking_plan bp, t_search_reqt_dates rd\nwhere b.booking_id = bp.booking_id\nand rd.datetime_from <= bp.datetime_to and rd.datetime_to >= bp.datetime_from\nAND bp.booking_date between rd.reqt_date-1 and rd.reqt_date+1\nand rd.search_id = 13\n\nQUERY PLAN\nUnique (cost=50885.97..50921.37 rows=462 width=4) (actual \ntime=35231.000..35241.000 rows=110 loops=1)\n -> Sort (cost=50885.97..50903.67 rows=35397 width=4) (actual \ntime=35231.000..35241.000 rows=2173 loops=1)\n Sort Key: b.staff_id\n -> Hash Join (cost=44951.32..50351.07 rows=35397 width=4) (actual \ntime=34530.000..35231.000 rows=2173 loops=1)\n Hash Cond: (\"outer\".booking_id = \"inner\".booking_id)\n -> Seq Scan on staff_booking b (cost=0.00..4233.39 rows=197877 width=8) \n(actual time=0.000..351.000 rows=197877 loops=1)\n -> Hash (cost=44933.62..44933.62 rows=35397 width=4) (actual \ntime=34530.000..34530.000 rows=0 loops=1)\n -> Nested Loop (cost=15.50..44933.62 rows=35397 width=4) (actual \ntime=8342.000..34520.000 rows=2173 loops=1)\n Join Filter: ((\"inner\".datetime_from <= \"outer\".datetime_to) AND \n(\"inner\".datetime_to >= \"outer\".datetime_from) AND (\"outer\".booking_date >= \n(\"inner\".reqt_date - 1)) AND (\"outer\".booking_date <= (\"inner\".reqt_date + 1)))\n -> Seq Scan on booking_plan bp (cost=0.00..7646.08 rows=573416 \nwidth=24) (actual time=0.000..1053.000 rows=573416 loops=1)\n -> Materialize (cost=15.50..15.53 rows=5 width=20) (actual \ntime=0.001..0.019 rows=26 loops=573416)\n -> Seq Scan on t_search_reqt_dates rd (cost=0.00..15.50 rows=5 \nwidth=20) (actual time=0.000..0.000 rows=26 loops=1)\n Filter: (search_id = 13)\nTotal runtime: 35241.000 ms\n\nIf this is the only answer for now, then fair enough I will just have to do more testing.\n\nRegards,\nGary.\n\n",
"msg_date": "Fri, 08 Oct 2004 21:29:57 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd planner choice? "
}
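A sketch of the diagnostic route Tom suggests: analyze the temp table once it is populated, and if the materialize/nested-loop plan still wins on estimated cost, compare it against the forced alternative to see how far the cost model is off (toggling enable_nestloop here is only a measurement aid, not a production setting):

    ANALYZE t_search_reqt_dates;

    EXPLAIN ANALYZE
    SELECT DISTINCT b.staff_id
    FROM staff_booking b, booking_plan bp, t_search_reqt_dates rd
    WHERE b.booking_id = bp.booking_id
      AND rd.datetime_from <= bp.datetime_to
      AND rd.datetime_to >= bp.datetime_from
      AND bp.booking_date BETWEEN rd.reqt_date - 1 AND rd.reqt_date + 1;

    -- For comparison only: price the non-nested-loop plan, then put it back.
    SET enable_nestloop = off;
    -- ... re-run the same EXPLAIN ANALYZE here ...
    RESET enable_nestloop;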
] |
[
{
"msg_contents": "Folks,\n\nI'm hoping that some of you can shed some light on this.\n\nI've been trying to peg the \"sweet spot\" for shared memory using OSDL's \nequipment. With Jan's new ARC patch, I was expecting that the desired \namount of shared_buffers to be greatly increased. This has not turned out to \nbe the case.\n\nThe first test series was using OSDL's DBT2 (OLTP) test, with 150 \n\"warehouses\". All tests were run on a 4-way Pentium III 700mhz 3.8GB RAM \nsystem hooked up to a rather high-end storage device (14 spindles). Tests \nwere on PostgreSQL 8.0b3, Linux 2.6.7.\n\nHere's a top-level summary:\n\nshared_buffers\t\t% RAM\tNOTPM20*\n1000\t\t\t\t0.2%\t\t1287\n23000\t\t\t5%\t\t1507\n46000\t\t\t10%\t\t1481\n69000\t\t\t15%\t\t1382\n92000\t\t\t20%\t\t1375\n115000\t\t\t25%\t\t1380\n138000\t\t\t30%\t\t1344\n\n* = New Order Transactions Per Minute, last 20 Minutes\n Higher is better. The maximum possible is 1800.\n\nAs you can see, the \"sweet spot\" appears to be between 5% and 10% of RAM, \nwhich is if anything *lower* than recommendations for 7.4! \n\nThis result is so surprising that I want people to take a look at it and tell \nme if there's something wrong with the tests or some bottlenecking factor \nthat I've not seen.\n\nin order above:\nhttp://khack.osdl.org/stp/297959/\nhttp://khack.osdl.org/stp/297960/\nhttp://khack.osdl.org/stp/297961/\nhttp://khack.osdl.org/stp/297962/\nhttp://khack.osdl.org/stp/297963/\nhttp://khack.osdl.org/stp/297964/\nhttp://khack.osdl.org/stp/297965/\n\nPlease note that many of the Graphs in these reports are broken. For one \nthing, some aren't recorded (flat lines) and the CPU usage graph has \nmislabeled lines.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n",
"msg_date": "Fri, 8 Oct 2004 14:43:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "First set of OSDL Shared Mem scalability results, some wierdness ..."
},
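For reference, the buffer counts above map back onto memory as follows, assuming the default 8KB block size, so the 5-10% "sweet spot" on this 3.8GB box corresponds to roughly 180-360MB of shared_buffers in postgresql.conf:

    SELECT 23000 * 8192 / (1024 * 1024) AS mb_at_5_percent,   -- ~180 MB
           46000 * 8192 / (1024 * 1024) AS mb_at_10_percent;  -- ~360 MB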
{
"msg_contents": "I have an idea that makes some assumptions about internals that I think\nare correct.\n\nWhen you have a huge number of buffers in a list that has to be\ntraversed to look for things in cache, e.g. 100k, you will generate an\nalmost equivalent number of cache line misses on the processor to jump\nthrough all those buffers. As I understand it (and I haven't looked so\nI could be wrong), the buffer cache is searched by traversing it\nsequentially. OTOH, it seems reasonable to me that the OS disk cache\nmay actually be using a tree structure that would generate vastly fewer\ncache misses by comparison to find a buffer. This could mean a\nsubstantial linear search cost as a function of the number of buffers,\nbig enough to rise above the noise floor when you have hundreds of\nthousands of buffers.\n\nCache misses start to really add up when a code path generates many,\nmany thousands of them, and differences in the access path between the\nbuffer cache and disk cache would be reflected when you have that many\nbuffers. I've seen these types of unexpected performance anomalies\nbefore that got traced back to code patterns and cache efficiency and\ngotten integer factors improvements by making some seemingly irrelevant\ncode changes.\n\nSo I guess my question would be 1) are my assumptions about the\ninternals correct, and 2) if they are, is there a way to optimize\nsearching the buffer cache so that a search doesn't iterate over a\nreally long buffer list that is bottlenecked on cache line replacement. \n\nMy random thought of the day,\n\nj. andrew rogers\n\n\n",
"msg_date": "08 Oct 2004 15:13:00 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Here's a top-level summary:\n\n> shared_buffers\t% RAM\t\tNOTPM20*\n> 1000\t\t\t0.2%\t\t1287\n> 23000\t\t\t5%\t\t1507\n> 46000\t\t\t10%\t\t1481\n> 69000\t\t\t15%\t\t1382\n> 92000\t\t\t20%\t\t1375\n> 115000\t\t25%\t\t1380\n> 138000\t\t30%\t\t1344\n\n> As you can see, the \"sweet spot\" appears to be between 5% and 10% of RAM, \n> which is if anything *lower* than recommendations for 7.4! \n\nThis doesn't actually surprise me a lot. There are a number of aspects\nof Postgres that will get slower the more buffers there are.\n\nOne thing that I hadn't focused on till just now, which is a new\noverhead in 8.0, is that StrategyDirtyBufferList() scans the *entire*\nbuffer list *every time it's called*, which is to say once per bgwriter\nloop. And to add insult to injury, it's doing that with the BufMgrLock\nheld (not that it's got any choice).\n\nWe could alleviate this by changing the API between this function and\nBufferSync, such that StrategyDirtyBufferList can stop as soon as it's\nfound all the buffers that are going to be written in this bgwriter\ncycle ... but AFAICS that means abandoning the \"bgwriter_percent\" knob\nsince you'd never really know how many dirty pages there were\naltogether.\n\nBTW, what is the actual size of the test database (disk footprint wise)\nand how much of that do you think is heavily accessed during the run?\nIt's possible that the test conditions are such that adjusting\nshared_buffers isn't going to mean anything anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 2004 18:21:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "\"J. Andrew Rogers\" <[email protected]> writes:\n> As I understand it (and I haven't looked so I could be wrong), the\n> buffer cache is searched by traversing it sequentially.\n\nYou really should look first.\n\nThe main-line code paths use hashed lookups. There are some cases that\ndo linear searches through the buffer headers or the CDB lists; in\ntheory those are supposed to be non-performance-critical cases, though\nI am suspicious that some are not (see other response). In any case,\nthose structures are considerably more compact than the buffers proper,\nand I doubt that cache misses per se are the killer factor.\n\nThis does raise a question for Josh though, which is \"where's the\noprofile results?\" If we do have major problems at the level of cache\nmisses then oprofile would be able to prove it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Oct 2004 18:32:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, "
},
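To make the preceding point concrete: looking up a block in shared buffers is a hash probe on a buffer tag, so a cache hit costs about the same whether there are one thousand buffers or one hundred thousand. The toy chained hash table below is only a sketch of the idea -- the real structure lives in src/backend/storage/buffer/buf_table.c and differs in detail -- but it shows why the main-line lookup path is O(1) rather than O(N) in shared_buffers.

    /* toy_buftable.c -- simplified sketch of a buffer-tag hash table.
     * Illustrative only; not PostgreSQL's actual code. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct BufferTag { unsigned rel; unsigned blk; } BufferTag;

    typedef struct HashEnt {
        BufferTag tag;
        int buf_id;                      /* index into the (imaginary) buffer pool */
        struct HashEnt *next;
    } HashEnt;

    #define NBUCKETS 4096                /* scaled along with the buffer count */
    static HashEnt *buckets[NBUCKETS];

    static unsigned hash_tag(BufferTag t)
    {
        return (t.rel * 2654435761u ^ t.blk * 40503u) % NBUCKETS;
    }

    static void buf_insert(BufferTag t, int buf_id)
    {
        HashEnt *e = malloc(sizeof(HashEnt));
        e->tag = t;
        e->buf_id = buf_id;
        e->next = buckets[hash_tag(t)];
        buckets[hash_tag(t)] = e;
    }

    /* Returns a buffer id, or -1 on a cache miss.  The cost is the length of one
     * bucket chain, which stays short as long as NBUCKETS scales with the number
     * of buffers -- it does not grow with the size of the pool itself. */
    static int buf_lookup(BufferTag t)
    {
        for (HashEnt *e = buckets[hash_tag(t)]; e != NULL; e = e->next)
            if (e->tag.rel == t.rel && e->tag.blk == t.blk)
                return e->buf_id;
        return -1;
    }

    int main(void)
    {
        BufferTag t = {16384, 42};
        BufferTag miss = {16384, 43};

        buf_insert(t, 7);
        printf("lookup (16384,42) -> buffer %d\n", buf_lookup(t));     /* hit  */
        printf("lookup (16384,43) -> buffer %d\n", buf_lookup(miss));  /* miss */
        return 0;
    }

The linear scans Tom mentions are over the buffer headers and CDB lists in the supposedly non-critical paths, which is a separate issue from this lookup.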
{
"msg_contents": "On Fri, Oct 08, 2004 at 06:32:32PM -0400, Tom Lane wrote:\n> This does raise a question for Josh though, which is \"where's the\n> oprofile results?\" If we do have major problems at the level of cache\n> misses then oprofile would be able to prove it.\n\nOr cachegrind. I've found it to be really effective at pinpointing cache\nmisses in the past (one CPU-intensive routine was sped up by 30% just by\navoiding a memory clear). :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 9 Oct 2004 00:39:45 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,"
},
{
"msg_contents": "Tom,\n\n> This does raise a question for Josh though, which is \"where's the\n> oprofile results?\" If we do have major problems at the level of cache\n> misses then oprofile would be able to prove it.\n\nMissing, I'm afraid. OSDL has been having technical issues with STP all week.\n\nHopefully the next test run will have them.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 8 Oct 2004 16:08:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,"
},
{
"msg_contents": "Tom,\n\n> BTW, what is the actual size of the test database (disk footprint wise)\n> and how much of that do you think is heavily accessed during the run?\n> It's possible that the test conditions are such that adjusting\n> shared_buffers isn't going to mean anything anyway.\n\nThe raw data is 32GB, but a lot of the activity is incremental, that is \ninserts and updates to recent inserts. Still, according to Mark, most of \nthe data does get queried in the course of filling orders.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 8 Oct 2004 16:31:41 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "[email protected] (Josh Berkus) wrote:\n> I've been trying to peg the \"sweet spot\" for shared memory using\n> OSDL's equipment. With Jan's new ARC patch, I was expecting that\n> the desired amount of shared_buffers to be greatly increased. This\n> has not turned out to be the case.\n\nThat doesn't surprise me.\n\nMy primary expectation would be that ARC would be able to make small\nbuffers much more effective alongside vacuums and seq scans than they\nused to be. That does not establish anything about the value of\nincreasing the size buffer caches...\n\n> This result is so surprising that I want people to take a look at it\n> and tell me if there's something wrong with the tests or some\n> bottlenecking factor that I've not seen.\n\nI'm aware of two conspicuous scenarios where ARC would be expected to\n_substantially_ improve performance:\n\n 1. When it allows a VACUUM not to throw useful data out of \n the shared cache in that VACUUM now only 'chews' on one\n page of the cache;\n\n 2. When it allows a Seq Scan to not push useful data out of\n the shared cache, for much the same reason.\n\nI don't imagine either scenario are prominent in the OSDL tests.\n\nIncreasing the number of cache buffers _is_ likely to lead to some\nslowdowns:\n\n - Data that passes through the cache also passes through kernel\n cache, so it's recorded twice, and read twice...\n\n - The more cache pages there are, the more work is needed for\n PostgreSQL to manage them. That will notably happen anywhere\n that there is a need to scan the cache.\n\n - If there are any inefficiencies in how the OS kernel manages shared\n memory, as their size scales, well, that will obviously cause a\n slowdown.\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www.ntlug.org/~cbbrowne/internet.html\n\"One World. One Web. One Program.\" -- MICROS~1 hype\n\"Ein Volk, ein Reich, ein Fuehrer\" -- Nazi hype\n(One people, one country, one leader)\n",
"msg_date": "Fri, 08 Oct 2004 22:10:19 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Christopher Browne wrote:\n> Increasing the number of cache buffers _is_ likely to lead to some\n> slowdowns:\n> \n> - Data that passes through the cache also passes through kernel\n> cache, so it's recorded twice, and read twice...\n\nEven worse, memory that's used for the PG cache is memory that's not\navailable to the kernel's page cache. Even if the overall memory\nusage in the system isn't enough to cause some paging to disk, most\nmodern kernels will adjust the page/disk cache size dynamically to fit\nthe memory demands of the system, which in this case means it'll be\nsmaller if running programs need more memory for their own use.\n\nThis is why I sometimes wonder whether or not it would be a win to use\nmmap() to access the data and index files -- doing so under a truly\nmodern OS would surely at the very least save a buffer copy (from the\npage/disk cache to program memory) because the OS could instead\ndirecly map the buffer cache pages directly to the program's memory\nspace.\n\nSince PG often has to have multiple files open at the same time, and\nin a production database many of those files will be rather large, PG\nwould have to limit the size of the mmap()ed region on 32-bit\nplatforms, which means that things like the order of mmap() operations\nto access various parts of the file can become just as important in\nthe mmap()ed case as it is in the read()/write() case (if not more\nso!). I would imagine that the use of mmap() on a 64-bit platform\nwould be a much, much larger win because PG would most likely be able\nto mmap() entire files and let the OS work out how to order disk reads\nand writes.\n\nThe biggest problem as I see it is that (I think) mmap() would have to\nbe made to cooperate with malloc() for virtual address space. I\nsuspect issues like this have already been worked out by others,\nhowever...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Sat, 9 Oct 2004 04:20:48 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Christopher Browne wrote:\n\n>[email protected] (Josh Berkus) wrote:\n> \n>\n>>This result is so surprising that I want people to take a look at it\n>>and tell me if there's something wrong with the tests or some\n>>bottlenecking factor that I've not seen.\n>> \n>>\n>I'm aware of two conspicuous scenarios where ARC would be expected to\n>_substantially_ improve performance:\n>\n> 1. When it allows a VACUUM not to throw useful data out of \n> the shared cache in that VACUUM now only 'chews' on one\n> page of the cache;\n>\n\nRight, Josh, I assume you didn't run these test with pg_autovacuum \nrunning, which might be interesting. \n\nAlso how do these numbers compare to 7.4? They may not be what you \nexpected, but they might still be an improvment.\n\nMatthew\n\n",
"msg_date": "Sat, 09 Oct 2004 11:02:09 -0400",
"msg_from": "Matthew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n> This is why I sometimes wonder whether or not it would be a win to use\n> mmap() to access the data and index files -- \n\nmmap() is Right Out because it does not afford us sufficient control\nover when changes to the in-memory data will propagate to disk. The\naddress-space-management problems you describe are also a nasty\nheadache, but that one is the showstopper.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Oct 2004 11:07:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > This is why I sometimes wonder whether or not it would be a win to use\n> > mmap() to access the data and index files -- \n> \n> mmap() is Right Out because it does not afford us sufficient control\n> over when changes to the in-memory data will propagate to disk. The\n> address-space-management problems you describe are also a nasty\n> headache, but that one is the showstopper.\n\nHuh? Surely fsync() or fdatasync() of the file descriptor associated\nwith the mmap()ed region at the appropriate times would accomplish\nmuch of this? I'm particularly confused since PG's entire approach to\ndisk I/O is predicated on the notion that the OS, and not PG, is the\nbest arbiter of when data hits the disk. Otherwise it would be using\nraw partitions for the highest-speed data store, yes?\n\nAlso, there isn't any particular requirement to use mmap() for\neverything -- you can use traditional open/write/close calls for the\nWAL and mmap() for the data/index files (but it wouldn't surprise me\nif this would require some extensive code changes).\n\nThat said, if it's typical for many changes to made to a page\ninternally before PG needs to commit that page to disk, then your\nargument makes sense, and that's especially true if we simply cannot\nhave the page written to disk in a partially-modified state (something\nI can easily see being an issue for the WAL -- would the same hold\ntrue of the index/data files?).\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Sat, 9 Oct 2004 13:37:12 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "I wrote:\n> That said, if it's typical for many changes to made to a page\n> internally before PG needs to commit that page to disk, then your\n> argument makes sense, and that's especially true if we simply cannot\n> have the page written to disk in a partially-modified state (something\n> I can easily see being an issue for the WAL -- would the same hold\n> true of the index/data files?).\n\nAlso, even if multiple changes would be made to the page, with the\npage being valid for a disk write only after all such changes are\nmade, the use of mmap() (in conjunction with an internal buffer that\nwould then be copied to the mmap()ed memory space at the appropriate\ntime) would potentially save a system call over the use of write()\n(even if write() were used to write out multiple pages). However,\nthere is so much lower-hanging fruit than this that an mmap()\nimplementation almost certainly isn't worth pursuing for this alone.\n\nSo: it seems to me that mmap() is worth pursuing only if most internal\nbuffers tend to be written to only once or if it's acceptable for a\npartially modified data/index page to be written to disk (which I\nsuppose could be true for data/index pages in the face of a rock-solid\nWAL).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Sat, 9 Oct 2004 14:01:02 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n> Tom Lane wrote:\n>> mmap() is Right Out because it does not afford us sufficient control\n>> over when changes to the in-memory data will propagate to disk.\n\n> ... that's especially true if we simply cannot\n> have the page written to disk in a partially-modified state (something\n> I can easily see being an issue for the WAL -- would the same hold\n> true of the index/data files?).\n\nYou're almost there. Remember the fundamental WAL rule: log entries\nmust hit disk before the data changes they describe. That means that we\nneed not only a way of forcing changes to disk (fsync) but a way of\nbeing sure that changes have *not* gone to disk yet. In the existing\nimplementation we get that by just not issuing write() for a given page\nuntil we know that the relevant WAL log entries are fsync'd down to\ndisk. (BTW, this is what the LSN field on every page is for: it tells\nthe buffer manager the latest WAL offset that has to be flushed before\nit can safely write the page.)\n\nmmap provides msync which is comparable to fsync, but AFAICS it\nprovides no way to prevent an in-memory change from reaching disk too\nsoon. This would mean that WAL entries would have to be written *and\nflushed* before we could make the data change at all, which would\nconvert multiple updates of a single page into a series of write-and-\nwait-for-WAL-fsync steps. Not good. fsync'ing WAL once per transaction\nis bad enough, once per atomic action is intolerable.\n\nThere is another reason for doing things this way. Consider a backend\nthat goes haywire and scribbles all over shared memory before crashing.\nWhen the postmaster sees the abnormal child termination, it forcibly\nkills the other active backends and discards shared memory altogether.\nThis gives us fairly good odds that the crash did not affect any data on\ndisk. It's not perfect of course, since another backend might have been\nin process of issuing a write() when the disaster happens, but it's\npretty good; and I think that that isolation has a lot to do with PG's\ngood reputation for not corrupting data in crashes. If we had a large\nfraction of the address space mmap'd then this sort of crash would be\njust about guaranteed to propagate corruption into the on-disk files.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Oct 2004 19:05:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
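A toy simulation of the ordering rule described above may help; every name in it is invented for illustration (this is not PostgreSQL code). Each dirty page remembers the WAL position of the newest record describing it, and the page cannot be written until the log has been flushed at least that far -- which is exactly the hold-back point that write() gives and mmap does not.

    /* wal_rule.c -- toy model of "WAL must reach disk before the data page".
     * All names are made up for illustration. */
    #include <stdio.h>

    typedef unsigned long long Lsn;

    static Lsn wal_insert_ptr = 0;       /* how much WAL has been generated   */
    static Lsn wal_flush_ptr = 0;        /* how much WAL is durably on disk   */

    typedef struct Page {
        int id;
        int dirty;
        Lsn lsn;                         /* newest WAL record touching the page */
    } Page;

    static Lsn wal_insert(const char *what)
    {
        wal_insert_ptr += 100;           /* pretend each record is 100 bytes */
        printf("WAL insert %-8s -> LSN %llu\n", what, wal_insert_ptr);
        return wal_insert_ptr;
    }

    static void wal_flush_upto(Lsn lsn)
    {
        if (wal_flush_ptr < lsn) {
            wal_flush_ptr = lsn;         /* stands in for fsync() of the WAL */
            printf("  fsync WAL up to LSN %llu\n", wal_flush_ptr);
        }
    }

    static void page_modify(Page *p, const char *what)
    {
        p->lsn = wal_insert(what);       /* log the change first...           */
        p->dirty = 1;                    /* ...then change the in-memory page */
    }

    static void page_write(Page *p)
    {
        wal_flush_upto(p->lsn);          /* the rule: flush WAL before writing */
        printf("write page %d (covered through LSN %llu)\n", p->id, p->lsn);
        p->dirty = 0;
    }

    int main(void)
    {
        Page p = {1, 0, 0};

        page_modify(&p, "insert");       /* several changes accumulate in memory */
        page_modify(&p, "update");
        page_modify(&p, "delete");
        page_write(&p);                  /* one WAL flush, one page write        */
        return 0;
    }

Three modifications cost one deferred WAL flush here because the buffer manager chooses when the page goes out; if the page lived in a MAP_SHARED mapping, any of those modifications could have reached disk before its WAL record did.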
{
"msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> I'm hoping that some of you can shed some light on this.\n> \n> I've been trying to peg the \"sweet spot\" for shared memory using OSDL's \n> equipment. With Jan's new ARC patch, I was expecting that the desired \n> amount of shared_buffers to be greatly increased. This has not turned out to \n> be the case.\n> \n> The first test series was using OSDL's DBT2 (OLTP) test, with 150 \n> \"warehouses\". All tests were run on a 4-way Pentium III 700mhz 3.8GB RAM \n> system hooked up to a rather high-end storage device (14 spindles). Tests \n> were on PostgreSQL 8.0b3, Linux 2.6.7.\n\nI'd like to see these tests running using the cpu affinity capability in order\nto oblige a backend to not change CPU during his life, this could drastically\nincrease the cache hit.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n",
"msg_date": "Sun, 10 Oct 2004 11:25:23 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some wierdness"
},
{
"msg_contents": "On Fri, 8 Oct 2004, Josh Berkus wrote:\n\n> As you can see, the \"sweet spot\" appears to be between 5% and 10% of RAM, \n> which is if anything *lower* than recommendations for 7.4! \n\nWhat recommendation is that? To have shared buffers being about 10% of the\nram sounds familiar to me. What was recommended for 7.4? In the past we\nused to say that the worst value is 50% since then the same things might\nbe cached both by pg and the os disk cache.\n\nWhy do we excpect the shared buffer size sweet spot to change because of\nthe new arc stuff? And why would it make it better to have bigger shared \nmem?\n\nWouldn't it be the opposit, that now we don't invalidate as much of the\ncache for vacuums and seq. scan so now we can do as good caching as \nbefore but with less shared buffers.\n\nThat said, testing and getting some numbers of good sizes for shared mem\nis good.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Sun, 10 Oct 2004 23:48:48 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "On 10/8/2004 10:10 PM, Christopher Browne wrote:\n\n> [email protected] (Josh Berkus) wrote:\n>> I've been trying to peg the \"sweet spot\" for shared memory using\n>> OSDL's equipment. With Jan's new ARC patch, I was expecting that\n>> the desired amount of shared_buffers to be greatly increased. This\n>> has not turned out to be the case.\n> \n> That doesn't surprise me.\n\nNeither does it surprise me.\n\n> \n> My primary expectation would be that ARC would be able to make small\n> buffers much more effective alongside vacuums and seq scans than they\n> used to be. That does not establish anything about the value of\n> increasing the size buffer caches...\n\nThe primary goal of ARC is to prevent total cache eviction caused by \nsequential scans. Which means it is designed to avoid the catastrophic \nimpact of a pg_dump or other, similar access in parallel to the OLTP \ntraffic. It would be much more interesting to see how a half way into a \n2 hour measurement interval started pg_dump affects the response times.\n\nOne also has to take a closer look at the data of the DBT2. What amount \nof that 32GB is high-frequently accessed, and therefore a good thing to \nlive in the PG shared cache? A cache significantly larger than that \ndoesn't make sense to me, under no cache strategy.\n\n\nJan\n\n\n> \n>> This result is so surprising that I want people to take a look at it\n>> and tell me if there's something wrong with the tests or some\n>> bottlenecking factor that I've not seen.\n> \n> I'm aware of two conspicuous scenarios where ARC would be expected to\n> _substantially_ improve performance:\n> \n> 1. When it allows a VACUUM not to throw useful data out of \n> the shared cache in that VACUUM now only 'chews' on one\n> page of the cache;\n> \n> 2. When it allows a Seq Scan to not push useful data out of\n> the shared cache, for much the same reason.\n> \n> I don't imagine either scenario are prominent in the OSDL tests.\n> \n> Increasing the number of cache buffers _is_ likely to lead to some\n> slowdowns:\n> \n> - Data that passes through the cache also passes through kernel\n> cache, so it's recorded twice, and read twice...\n> \n> - The more cache pages there are, the more work is needed for\n> PostgreSQL to manage them. That will notably happen anywhere\n> that there is a need to scan the cache.\n> \n> - If there are any inefficiencies in how the OS kernel manages shared\n> memory, as their size scales, well, that will obviously cause a\n> slowdown.\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Wed, 13 Oct 2004 20:49:03 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "On 10/9/2004 7:20 AM, Kevin Brown wrote:\n\n> Christopher Browne wrote:\n>> Increasing the number of cache buffers _is_ likely to lead to some\n>> slowdowns:\n>> \n>> - Data that passes through the cache also passes through kernel\n>> cache, so it's recorded twice, and read twice...\n> \n> Even worse, memory that's used for the PG cache is memory that's not\n> available to the kernel's page cache. Even if the overall memory\n\nWhich underlines my previous statement, that a PG shared cache much \nlarger than the high-frequently accessed data portion of the DB is \ncounterproductive. Double buffering (kernel-disk-buffer plus shared \nbuffer) only makes sense for data that would otherwise cause excessive \nmemory copies in and out of the shared buffer. After that, in only \nlowers the memory available for disk buffers.\n\n\nJan\n\n> usage in the system isn't enough to cause some paging to disk, most\n> modern kernels will adjust the page/disk cache size dynamically to fit\n> the memory demands of the system, which in this case means it'll be\n> smaller if running programs need more memory for their own use.\n> \n> This is why I sometimes wonder whether or not it would be a win to use\n> mmap() to access the data and index files -- doing so under a truly\n> modern OS would surely at the very least save a buffer copy (from the\n> page/disk cache to program memory) because the OS could instead\n> direcly map the buffer cache pages directly to the program's memory\n> space.\n> \n> Since PG often has to have multiple files open at the same time, and\n> in a production database many of those files will be rather large, PG\n> would have to limit the size of the mmap()ed region on 32-bit\n> platforms, which means that things like the order of mmap() operations\n> to access various parts of the file can become just as important in\n> the mmap()ed case as it is in the read()/write() case (if not more\n> so!). I would imagine that the use of mmap() on a 64-bit platform\n> would be a much, much larger win because PG would most likely be able\n> to mmap() entire files and let the OS work out how to order disk reads\n> and writes.\n> \n> The biggest problem as I see it is that (I think) mmap() would have to\n> be made to cooperate with malloc() for virtual address space. I\n> suspect issues like this have already been worked out by others,\n> however...\n> \n> \n> \n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Wed, 13 Oct 2004 20:52:43 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n\n> On 10/8/2004 10:10 PM, Christopher Browne wrote:\n> \n> > [email protected] (Josh Berkus) wrote:\n> >> I've been trying to peg the \"sweet spot\" for shared memory using\n> >> OSDL's equipment. With Jan's new ARC patch, I was expecting that\n> >> the desired amount of shared_buffers to be greatly increased. This\n> >> has not turned out to be the case.\n> > That doesn't surprise me.\n> \n> Neither does it surprise me.\n\nThere's been some speculation that having a large shared buffers be about 50%\nof your RAM is pessimal as it guarantees the OS cache is merely doubling up on\nall the buffers postgres is keeping. I wonder whether there's a second sweet\nspot where the postgres cache is closer to the total amount of RAM.\n\nThat configuration would have disadvantages for servers running other jobs\nbesides postgres. And I was led to believe earlier that postgres starts each\nbackend with a fairly fresh slate as far as the ARC algorithm, so it wouldn't\nwork well for a postgres server that had lots of short to moderate life\nsessions.\n\nBut if it were even close it could be interesting. Reading the data with\nO_DIRECT and having a single global cache could be interesting experiments. I\nknow there are arguments against each of these, but ...\n\nI'm still pulling for an mmap approach to eliminate postgres's buffer cache\nentirely in the long term, but it seems like slim odds now. But one way or the\nother having two layers of buffering seems like a waste.\n\n-- \ngreg\n\n",
"msg_date": "13 Oct 2004 23:52:27 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "On 10/13/2004 11:52 PM, Greg Stark wrote:\n\n> Jan Wieck <[email protected]> writes:\n> \n>> On 10/8/2004 10:10 PM, Christopher Browne wrote:\n>> \n>> > [email protected] (Josh Berkus) wrote:\n>> >> I've been trying to peg the \"sweet spot\" for shared memory using\n>> >> OSDL's equipment. With Jan's new ARC patch, I was expecting that\n>> >> the desired amount of shared_buffers to be greatly increased. This\n>> >> has not turned out to be the case.\n>> > That doesn't surprise me.\n>> \n>> Neither does it surprise me.\n> \n> There's been some speculation that having a large shared buffers be about 50%\n> of your RAM is pessimal as it guarantees the OS cache is merely doubling up on\n> all the buffers postgres is keeping. I wonder whether there's a second sweet\n> spot where the postgres cache is closer to the total amount of RAM.\n\nWhich would require that shared memory is not allowed to be swapped out, \nand that is allowed in Linux by default IIRC, not to completely distort \nthe entire test.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Thu, 14 Oct 2004 00:17:34 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Jan Wieck <[email protected]> writes:\n\n> Which would require that shared memory is not allowed to be swapped out, and\n> that is allowed in Linux by default IIRC, not to completely distort the entire\n> test.\n\nWell if it's getting swapped out then it's clearly not being used effectively.\n\nThere are APIs to bar swapping out pages and the tests could be run without\nswap. I suggested it only as an experiment though, there are lots of details\nbetween here and having it be a good configuration for production use.\n\n-- \ngreg\n\n",
"msg_date": "14 Oct 2004 00:22:40 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "On 10/14/2004 12:22 AM, Greg Stark wrote:\n\n> Jan Wieck <[email protected]> writes:\n> \n>> Which would require that shared memory is not allowed to be swapped out, and\n>> that is allowed in Linux by default IIRC, not to completely distort the entire\n>> test.\n> \n> Well if it's getting swapped out then it's clearly not being used effectively.\n\nIs it really that easy if 3 different cache algorithms (PG cache, kernel \nbuffers and swapping) are competing for the same chips?\n\n\nJan\n\n> \n> There are APIs to bar swapping out pages and the tests could be run without\n> swap. I suggested it only as an experiment though, there are lots of details\n> between here and having it be a good configuration for production use.\n> \n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Thu, 14 Oct 2004 00:29:19 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > Tom Lane wrote:\n> >> mmap() is Right Out because it does not afford us sufficient control\n> >> over when changes to the in-memory data will propagate to disk.\n> \n> > ... that's especially true if we simply cannot\n> > have the page written to disk in a partially-modified state (something\n> > I can easily see being an issue for the WAL -- would the same hold\n> > true of the index/data files?).\n> \n> You're almost there. Remember the fundamental WAL rule: log entries\n> must hit disk before the data changes they describe. That means that we\n> need not only a way of forcing changes to disk (fsync) but a way of\n> being sure that changes have *not* gone to disk yet. In the existing\n> implementation we get that by just not issuing write() for a given page\n> until we know that the relevant WAL log entries are fsync'd down to\n> disk. (BTW, this is what the LSN field on every page is for: it tells\n> the buffer manager the latest WAL offset that has to be flushed before\n> it can safely write the page.)\n> \n> mmap provides msync which is comparable to fsync, but AFAICS it\n> provides no way to prevent an in-memory change from reaching disk too\n> soon. This would mean that WAL entries would have to be written *and\n> flushed* before we could make the data change at all, which would\n> convert multiple updates of a single page into a series of write-and-\n> wait-for-WAL-fsync steps. Not good. fsync'ing WAL once per transaction\n> is bad enough, once per atomic action is intolerable.\n\nHmm...something just occurred to me about this.\n\nWould a hybrid approach be possible? That is, use mmap() to handle\nreads, and use write() to handle writes?\n\nAny code that wishes to write to a page would have to recognize that\nit's doing so and fetch a copy from the storage manager (or\nsomething), which would look to see if the page already exists as a\nwriteable buffer. If it doesn't, it creates it by allocating the\nmemory and then copying the page from the mmap()ed area to the new\nbuffer, and returning it. If it does, it just returns a pointer to\nthe buffer. There would obviously have to be some bookkeeping\ninvolved: the storage manager would have to know how to map a mmap()ed\npage back to a writeable buffer and vice-versa, so that once it\ndecides to write the buffer it can determine which page in the\noriginal file the buffer corresponds to (so it can do the appropriate\nseek()).\n\nIn a write-heavy database, you'll end up with a lot of memory copy\noperations, but with the scheme we currently use you get that anyway\n(it just happens in kernel code instead of user code), so I don't see\nthat as much of a loss, if any. Where you win is in a read-heavy\ndatabase: you end up being able to read directly from the pages in the\nkernel's page cache and thus save a memory copy from kernel space to\nuser space, not to mention the context switch that happens due to\nissuing the read().\n\n\nObviously you'd want to mmap() the file read-only in order to prevent\nthe issues you mention regarding an errant backend, and then reopen\nthe file read-write for the purpose of writing to it. In fact, you\ncould decouple the two: mmap() the file, then close the file -- the\nmmap()ed region will remain mapped. Then, as long as the file remains\nmapped, you need to open the file again only when you want to write to\nit.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Thu, 14 Oct 2004 13:25:31 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "\nFirst off, I'd like to get involved with these tests - pressure of other\nwork only has prevented me.\n\nHere's my take on the results so far:\n\nI think taking the ratio of the memory allocated to shared_buffers against\nthe total memory available on the server is completely fallacious. That is\nwhy they cannnot be explained - IMHO the ratio has no real theoretical\nbasis.\n\nThe important ratio for me is the amount of shared_buffers against the total\nsize of the database in the benchmark test. Every database workload has a\ndiffering percentage of the total database size that represents the \"working\nset\", or the memory that can be beneficially cached. For the tests that\nDBT-2 is performing, I say that there is only so many blocks that are worth\nthe trouble caching. If you cache more than this, you are wasting your time.\n\nFor me, these tests don't show that there is a \"sweet spot\" that you should\nset your shared_buffers to, only that for that specific test, you have\nlocated the correct size for shared_buffers. For me, it would be an\nincorrect inference that this could then be interpreted that this was the\npercentage of the available RAM where the \"sweet spot\" lies for all\nworkloads.\n\nThe theoretical basis for my comments is this: DBT-2 is essentially a static\nworkload. That means, for a long test, we can work out with reasonable\ncertainty the probability that a block will be requested, for every single\nblock in the database. Given a particular size of cache, you can work out\nwhat your overall cache hit ratio is and therfore what your speed up is\ncompared with retrieving every single block from disk (the no cache\nscenario). If you draw a graph of speedup (y) against cache size as a % of\ntotal database size, the graph looks like an upside-down \"L\" - i.e. the\ngraph rises steeply as you give it more memory, then turns sharply at a\nparticular point, after which it flattens out. The \"turning point\" is the\n\"sweet spot\" we all seek - the optimum amount of cache memory to allocate -\nbut this spot depends upon the worklaod and database size, not on available\nRAM on the system under test.\n\nClearly, the presence of the OS disk cache complicates this. Since we have\ntwo caches both allocated from the same pot of memory, it should be clear\nthat if we overallocate one cache beyond its optimium effectiveness, while\nthe second cache is still in its \"more is better\" stage, then we will get\nreduced performance. That seems to be the case here. I wouldn't accept that\na fixed ratio between the two caches exists for ALL, or even the majority of\nworkloads - though clearly broad brush workloads such as \"OLTP\" and \"Data\nWarehousing\" do have similar-ish requirements.\n\nAs an example, lets look at an example:\nAn application with two tables: SmallTab has 10,000 rows of 100 bytes each\n(so table is ~1 Mb)- one row per photo in a photo gallery web site. LargeTab\nhas large objects within it and has 10,000 photos, average size 10 Mb (so\ntable is ~100Gb). Assuming all photos are requested randomly, you can see\nthat an optimum cache size for this workload is 1Mb RAM, 100Gb disk. Trying\nto up the cache doesn't have much effect on the probability that a photo\n(from LargeTab) will be in cache, unless you have a large % of 100Gb of RAM,\nwhen you do start to make gains. (Please don't be picky about indexes,\ncatalog, block size etc). 
That clearly has absolutely nothing at all to do\nwith the RAM of the system on which it is running.\n\nI think Jan has said this also in far fewer words, but I'll leave that to\nJan to agree/disagree...\n\nI say this: ARC in 8.0 PostgreSQL allows us to sensibly allocate as large a\nshared_buffers cache as is required by the database workload, and this\nshould not be constrained to a small percentage of server RAM.\n\nBest Regards,\n\nSimon Riggs\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Josh Berkus\n> Sent: 08 October 2004 22:43\n> To: [email protected]\n> Cc: [email protected]\n> Subject: [PERFORM] First set of OSDL Shared Mem scalability results,\n> some wierdness ...\n>\n>\n> Folks,\n>\n> I'm hoping that some of you can shed some light on this.\n>\n> I've been trying to peg the \"sweet spot\" for shared memory using OSDL's\n> equipment. With Jan's new ARC patch, I was expecting that the desired\n> amount of shared_buffers to be greatly increased. This has not\n> turned out to\n> be the case.\n>\n> The first test series was using OSDL's DBT2 (OLTP) test, with 150\n> \"warehouses\". All tests were run on a 4-way Pentium III 700mhz\n> 3.8GB RAM\n> system hooked up to a rather high-end storage device (14\n> spindles). Tests\n> were on PostgreSQL 8.0b3, Linux 2.6.7.\n>\n> Here's a top-level summary:\n>\n> shared_buffers\t\t% RAM\tNOTPM20*\n> 1000\t\t\t\t0.2%\t\t1287\n> 23000\t\t\t5%\t\t1507\n> 46000\t\t\t10%\t\t1481\n> 69000\t\t\t15%\t\t1382\n> 92000\t\t\t20%\t\t1375\n> 115000\t\t\t25%\t\t1380\n> 138000\t\t\t30%\t\t1344\n>\n> * = New Order Transactions Per Minute, last 20 Minutes\n> Higher is better. The maximum possible is 1800.\n>\n> As you can see, the \"sweet spot\" appears to be between 5% and 10% of RAM,\n> which is if anything *lower* than recommendations for 7.4!\n>\n> This result is so surprising that I want people to take a look at\n> it and tell\n> me if there's something wrong with the tests or some bottlenecking factor\n> that I've not seen.\n>\n> in order above:\n> http://khack.osdl.org/stp/297959/\n> http://khack.osdl.org/stp/297960/\n> http://khack.osdl.org/stp/297961/\n> http://khack.osdl.org/stp/297962/\n> http://khack.osdl.org/stp/297963/\n> http://khack.osdl.org/stp/297964/\n> http://khack.osdl.org/stp/297965/\n>\n> Please note that many of the Graphs in these reports are broken. For one\n> thing, some aren't recorded (flat lines) and the CPU usage graph has\n> mislabeled lines.\n>\n> --\n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Thu, 14 Oct 2004 23:36:22 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
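Simon's photo-gallery example can be put into numbers. The sketch below is only a rough model under his own simplifications (uniformly random photo requests, a steady-state cache holding SmallTab plus a random subset of LargeTab, indexes and catalog ignored); it is not drawn from any benchmark kit.

    /* working_set.c -- expected cache behaviour for the SmallTab/LargeTab example. */
    #include <stdio.h>

    int main(void)
    {
        const double small_mb = 1.0;             /* SmallTab: 10,000 x 100 bytes */
        const double large_mb = 100.0 * 1024.0;  /* LargeTab: 10,000 x ~10 MB    */
        const double photo_mb = 10.0;            /* one requested photo          */
        const double cache_mb[] = {0.5, 1, 2, 10, 100, 1024, 10240, 51200};

        printf("%10s  %12s  %18s\n", "cache (MB)", "photo hit %", "disk MB / request");
        for (int i = 0; i < 8; i++) {
            double c = cache_mb[i];
            double cached_small = c < small_mb ? c : small_mb;
            double p_photo = (c - cached_small) / large_mb;  /* fraction of photos cached */

            if (p_photo < 0.0) p_photo = 0.0;
            if (p_photo > 1.0) p_photo = 1.0;

            double disk_mb = (1.0 - cached_small / small_mb) * small_mb
                           + (1.0 - p_photo) * photo_mb;
            printf("%10.1f  %12.2f  %18.2f\n", c, p_photo * 100.0, disk_mb);
        }
        return 0;
    }

The knee of the curve sits at roughly 1MB -- the SmallTab working set -- and beyond that each additional gigabyte of cache improves the photo hit rate by only about 1%, which is the sense in which the sweet spot is a property of the workload and database size rather than of installed RAM.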
{
"msg_contents": "Simon,\n\n<lots of good stuff clipped>\n\n> If you draw a graph of speedup (y) against cache size as a \n> % of total database size, the graph looks like an upside-down \"L\" - i.e.\n> the graph rises steeply as you give it more memory, then turns sharply at a\n> particular point, after which it flattens out. The \"turning point\" is the\n> \"sweet spot\" we all seek - the optimum amount of cache memory to allocate -\n> but this spot depends upon the worklaod and database size, not on available\n> RAM on the system under test.\n\nHmmm ... how do you explain, then the \"camel hump\" nature of the real \nperformance? That is, when we allocated even a few MB more than the \n\"optimum\" ~190MB, overall performance stated to drop quickly. The result is \nthat allocating 2x optimum RAM is nearly as bad as allocating too little \n(e.g. 8MB). \n\nThe only explanation I've heard of this so far is that there is a significant \nloss of efficiency with larger caches. Or do you see the loss of 200MB out \nof 3500MB would actually affect the Kernel cache that much?\n\nAnyway, one test of your theory that I can run immediately is to run the exact \nsame workload on a bigger, faster server and see if the desired quantity of \nshared_buffers is roughly the same. I'm hoping that you're wrong -- not \nbecause I don't find your argument persuasive, but because if you're right it \nleaves us without any reasonable ability to recommend shared_buffer settings.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 14 Oct 2004 16:57:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "On Thu, 2004-10-14 at 16:57 -0700, Josh Berkus wrote:\n> Simon,\n> \n> <lots of good stuff clipped>\n> \n> > If you draw a graph of speedup (y) against cache size as a \n> > % of total database size, the graph looks like an upside-down \"L\" - i.e.\n> > the graph rises steeply as you give it more memory, then turns sharply at a\n> > particular point, after which it flattens out. The \"turning point\" is the\n> > \"sweet spot\" we all seek - the optimum amount of cache memory to allocate -\n> > but this spot depends upon the worklaod and database size, not on available\n> > RAM on the system under test.\n> \n> Hmmm ... how do you explain, then the \"camel hump\" nature of the real \n> performance? That is, when we allocated even a few MB more than the \n> \"optimum\" ~190MB, overall performance stated to drop quickly. The result is \n> that allocating 2x optimum RAM is nearly as bad as allocating too little \n> (e.g. 8MB). \n> \n> The only explanation I've heard of this so far is that there is a significant \n> loss of efficiency with larger caches. Or do you see the loss of 200MB out \n> of 3500MB would actually affect the Kernel cache that much?\n> \n In a past life there seemed to be a sweet spot around the\napplications\nworking set. Performance went up until you got just a little larger\nthan\nthe cache needed to hold the working set and then went down. Most of\nthe time a nice looking hump. It seems to have to do with the\nadditional pages\nnot increasing your hit ratio but increasing the amount of work to get a\nhit in cache. This seemed to be independent of the actual database\nsoftware being used. (I observed this running Oracle, Informix, Sybase\nand Ingres.)\n\n> Anyway, one test of your theory that I can run immediately is to run the exact \n> same workload on a bigger, faster server and see if the desired quantity of \n> shared_buffers is roughly the same. I'm hoping that you're wrong -- not \n> because I don't find your argument persuasive, but because if you're right it \n> leaves us without any reasonable ability to recommend shared_buffer settings.\n> \n-- \nTimothy D. Witham - Chief Technology Officer - [email protected]\nOpen Source Development Lab Inc - A non-profit corporation\n12725 SW Millikan Way - Suite 400 - Beaverton OR, 97005\n(503)-626-2455 x11 (office) (503)-702-2871 (cell)\n(503)-626-2436 (fax)\n\n",
"msg_date": "Thu, 14 Oct 2004 17:09:05 -0700",
"msg_from": "\"Timothy D. Witham\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Mem"
},
{
"msg_contents": "Quoth [email protected] (\"Simon Riggs\"):\n> I say this: ARC in 8.0 PostgreSQL allows us to sensibly allocate as\n> large a shared_buffers cache as is required by the database\n> workload, and this should not be constrained to a small percentage\n> of server RAM.\n\nI don't think that this particularly follows from \"what ARC does.\"\n\n\"What ARC does\" is to prevent certain conspicuous patterns of\nsequential accesses from essentially trashing the contents of the\ncache.\n\nIf a particular benchmark does not include conspicuous vacuums or\nsequential scans on large tables, then there is little reason to\nexpect ARC to have a noticeable impact on performance.\n\nIt _could_ be that this implies that ARC allows you to get some use\nout of a larger shared cache, as it won't get blown away by vacuums\nand Seq Scans. But it is _not_ obvious that this is a necessary\ntruth.\n\n_Other_ truths we know about are:\n\n a) If you increase the shared cache, that means more data that is\n represented in both the shared cache and the OS buffer cache,\n which seems rather a waste;\n\n b) The larger the shared cache, the more pages there are for the\n backend to rummage through before it looks to the filesystem,\n and therefore the more expensive cache misses get. Cache hits\n get more expensive, too. Searching through memory is not\n costless.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://linuxfinances.info/info/linuxdistributions.html\n\"The X-Files are too optimistic. The truth is *not* out there...\"\n-- Anthony Ord <[email protected]>\n",
"msg_date": "Thu, 14 Oct 2004 20:10:59 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Kevin Brown <[email protected]> writes:\n> Hmm...something just occurred to me about this.\n\n> Would a hybrid approach be possible? That is, use mmap() to handle\n> reads, and use write() to handle writes?\n\nNope. Have you read the specs regarding mmap-vs-stdio synchronization?\nBasically it says that there are no guarantees whatsoever if you try\nthis. The SUS text is a bit weaselly (\"the application must ensure\ncorrect synchronization\") but the HPUX mmap man page, among others,\nlays it on the line:\n\n It is also unspecified whether write references to a memory region\n mapped with MAP_SHARED are visible to processes reading the file and\n whether writes to a file are visible to processes that have mapped the\n modified portion of that file, except for the effect of msync().\n\nIt might work on particular OSes but I think depending on such behavior\nwould be folly...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 01:13:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": ">Timothy D. Witham\n> On Thu, 2004-10-14 at 16:57 -0700, Josh Berkus wrote:\n> > Simon,\n> >\n> > <lots of good stuff clipped>\n> >\n> > > If you draw a graph of speedup (y) against cache size as a\n> > > % of total database size, the graph looks like an upside-down\n> \"L\" - i.e.\n> > > the graph rises steeply as you give it more memory, then\n> turns sharply at a\n> > > particular point, after which it flattens out. The \"turning\n> point\" is the\n> > > \"sweet spot\" we all seek - the optimum amount of cache memory\n> to allocate -\n> > > but this spot depends upon the worklaod and database size,\n> not on available\n> > > RAM on the system under test.\n> >\n> > Hmmm ... how do you explain, then the \"camel hump\" nature of the real\n> > performance? That is, when we allocated even a few MB more than the\n> > \"optimum\" ~190MB, overall performance stated to drop quickly.\n> The result is\n> > that allocating 2x optimum RAM is nearly as bad as allocating\n> too little\n> > (e.g. 8MB).\n\nTwo ways of explaining this:\n1. Once you've hit the optimum size of shared_buffers, you may not yet have\nhit the optimum size of the OS cache. If that is true, every extra block\ngiven to shared_buffers is wasted, yet detracts from the beneficial effect\nof the OS cache. I don't see how the small drop in size of the OS cache\ncould have the effect you have measured, so I suggest that this possible\nexplanation doesn't fit the results well.\n\n2. There is some algorithmic effect within PostgreSQL that makes larger\nshared_buffers much worse than smaller ones. Imagine that each extra block\nwe hold in cache has the positive benefit from caching, minus a postulated\nnegative drag effect. With that model we would get: Once the optimal size of\nthe cache has been reached the positive benefit tails off to almost zero and\nwe are just left with the situation that each new block added to\nshared_buffers acts as a further drag on performance. That model would fit\nthe results, so we can begin to look at what the drag effect might be.\n\nSpeculating wildly because I don't know that portion of the code this might\nbe:\nCONJECTURE 1: the act of searching for a block in cache is an O(n)\noperation, not an O(1) or O(log n) operation - so searching a larger cache\nhas an additional slowing effect on the application, via a buffer cache lock\nthat is held while the cache is searched - larger caches are locked for\nlonger than smaller caches, so this causes additional contention in the\nsystem, which then slows down performance.\n\nThe effect might show up by examining the oprofile results for the test\ncases. What we would be looking for is something that is being called more\nfrequently with larger shared_buffers - this could be anything....but my\nguess is the oprofile results won't be similar and could lead us to a better\nunderstanding.\n\n> >\n> > The only explanation I've heard of this so far is that there is\n> a significant\n> > loss of efficiency with larger caches. Or do you see the loss\n> of 200MB out\n> > of 3500MB would actually affect the Kernel cache that much?\n> >\n> In a past life there seemed to be a sweet spot around the\n> applications\n> working set. Performance went up until you got just a little larger\n> than\n> the cache needed to hold the working set and then went down. Most of\n> the time a nice looking hump. It seems to have to do with the\n> additional pages\n> not increasing your hit ratio but increasing the amount of work to get a\n> hit in cache. 
This seemed to be independent of the actual database\n> software being used. (I observed this running Oracle, Informix, Sybase\n> and Ingres.)\n\nGood, our experiences seems to be similar.\n\n>\n> > Anyway, one test of your theory that I can run immediately is\n> to run the exact\n> > same workload on a bigger, faster server and see if the desired\n> quantity of\n> > shared_buffers is roughly the same.\n\nI agree that you could test this by running on a bigger or smaller server,\ni.e. one with more or less RAM. Running on a faster/slower server at the\nsame time might alter the results and confuse the situation.\n\n> I'm hoping that you're wrong -- not\n> because I don't find your argument persuasive, but because if\n> you're right it\n> > leaves us without any reasonable ability to recommend\n> shared_buffer settings.\n>\n\nFor the record, what I think we need is dynamically resizable\nshared_buffers, not a-priori knowledge of what you should set shared_buffers\nto. I've been thinking about implementing a scheme that helps you decide how\nbig the shared_buffers SHOULD BE, by making the LRU list bigger than the\ncache itself, so you'd be able to see whether there is beneficial effect in\nincreasing shared_buffers.\n\n...remember that this applies to other databases too, and with those we find\nthat they have dynamically resizable memory.\n\nHaving said all that, there are still a great many other performance tests\nto run so that we CAN recommend other settings, such as the optimizer cost\nparameters, bg writer defaults etc.\n\nBest Regards,\n\nSimon Riggs\n2nd Quadrant\n\n",
"msg_date": "Fri, 15 Oct 2004 08:55:59 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "Tom Lane wrote:\n> Kevin Brown <[email protected]> writes:\n> > Hmm...something just occurred to me about this.\n> \n> > Would a hybrid approach be possible? That is, use mmap() to handle\n> > reads, and use write() to handle writes?\n> \n> Nope. Have you read the specs regarding mmap-vs-stdio synchronization?\n> Basically it says that there are no guarantees whatsoever if you try\n> this. The SUS text is a bit weaselly (\"the application must ensure\n> correct synchronization\") but the HPUX mmap man page, among others,\n> lays it on the line:\n> \n> It is also unspecified whether write references to a memory region\n> mapped with MAP_SHARED are visible to processes reading the file and\n> whether writes to a file are visible to processes that have mapped the\n> modified portion of that file, except for the effect of msync().\n> \n> It might work on particular OSes but I think depending on such behavior\n> would be folly...\n\nYeah, and at this point it can't be considered portable in any real\nway because of this. Thanks for the perspective. I should have\nexpected the general specification to be quite broken in this regard,\nnot to mention certain implementations. :-)\n\nGood thing there's a lot of lower-hanging fruit than this...\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Fri, 15 Oct 2004 02:19:40 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Tom Lane wrote:\n\n>Kevin Brown <[email protected]> writes:\n> \n>\n>>Hmm...something just occurred to me about this.\n>> \n>>\n>>Would a hybrid approach be possible? That is, use mmap() to handle\n>>reads, and use write() to handle writes?\n>> \n>>\n>\n>Nope. Have you read the specs regarding mmap-vs-stdio synchronization?\n>Basically it says that there are no guarantees whatsoever if you try\n>this. The SUS text is a bit weaselly (\"the application must ensure\n>correct synchronization\") but the HPUX mmap man page, among others,\n>lays it on the line:\n>\n> It is also unspecified whether write references to a memory region\n> mapped with MAP_SHARED are visible to processes reading the file and\n> whether writes to a file are visible to processes that have mapped the\n> modified portion of that file, except for the effect of msync().\n>\n>It might work on particular OSes but I think depending on such behavior\n>would be folly...\n>\nWe have some anecdotal experience along these lines: There was a set \nof kernel bugs in Solaris 2.6 or 7 related to this as well. We had \nseveral kernel panics and it took a bit to chase down, but the basic \nfeedback was \"oops. we're screwed\". I've forgotten most of the \ndetails right now; the basic problem was a file was being read+written \nvia mmap and read()/write() at (essentially) the same time from the same \npid. It would panic the system quite reliably. I believe the bugs \nrelated to this have been resolved in Solaris, but it was unpleasant to \nchase that problem down...\n\n-- Alan\n",
"msg_date": "Fri, 15 Oct 2004 08:53:19 -0400",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> Speculating wildly because I don't know that portion of the code this might\n> be:\n> CONJECTURE 1: the act of searching for a block in cache is an O(n)\n> operation, not an O(1) or O(log n) operation\n\nI'm not sure how this meme got into circulation, but I've seen a couple\nof people recently either conjecturing or asserting that. Let me remind\npeople of the actual facts:\n\n1. We use a hashtable to keep track of which blocks are currently in\nshared buffers. Either a cache hit or a cache miss should be O(1),\nbecause the hashtable size is scaled proportionally to shared_buffers,\nand so the number of hash entries examined should remain constant.\n\n2. There are some allegedly-not-performance-critical operations that do\nscan through all the buffers, and therefore are O(N) in shared_buffers.\n\nI just eyeballed all the latter, and came up with this list of O(N)\noperations and their call points:\n\nAtEOXact_Buffers\n\ttransaction commit or abort\nUnlockBuffers\n\ttransaction abort, backend exit\nStrategyDirtyBufferList\n\tbackground writer's idle loop\nFlushRelationBuffers\n\tVACUUM\n\tDROP TABLE, DROP INDEX\n\tTRUNCATE, CLUSTER, REINDEX\n\tALTER TABLE SET TABLESPACE\nDropRelFileNodeBuffers\n\tTRUNCATE (only for ON COMMIT TRUNC temp tables)\n\tREINDEX (inplace case only)\n\tsmgr_internal_unlink (ie, the tail end of DROP TABLE/INDEX)\nDropBuffers\n\tDROP DATABASE\n\nThe fact that the first two are called during transaction commit/abort\nis mildly alarming. The constant factors are going to be very tiny\nthough, because what these routines actually do is scan backend-local\nstatus arrays looking for locked buffers, which they're not going to\nfind very many of. For instance AtEOXact_Buffers looks like\n\n int i;\n\n for (i = 0; i < NBuffers; i++)\n {\n if (PrivateRefCount[i] != 0)\n {\n\t // some code that should never be executed at all in the commit\n\t // case, and not that much in the abort case either\n }\n }\n\nI suppose with hundreds of thousands of shared buffers this might get to\nthe point of being noticeable, but I've never seen it show up at all in\nprofiling with more-normal buffer counts. Not sure if it's worth\ndevising a more complex data structure to aid in finding locked buffers.\n(To some extent this code is intended to be belt-and-suspenders stuff\nfor catching omissions elsewhere, and so a more complex data structure\nthat could have its own bugs is not especially attractive.)\n\nThe one that's bothering me at the moment is StrategyDirtyBufferList,\nwhich is a new overhead in 8.0. It wouldn't directly affect foreground\nquery performance, but indirectly it would hurt by causing the bgwriter\nto suck more CPU cycles than one would like (and it holds the BufMgrLock\nwhile it's doing it, too :-(). One easy way you could see whether this\nis an issue in the OSDL test is to see what happens if you double all\nthree bgwriter parameters (delay, percent, maxpages). This should\nresult in about the same net I/O demand from the bgwriter, but\nStrategyDirtyBufferList will be executed half as often.\n\nI doubt that the other ones are issues. 
We could improve them by\ndevising a way to quickly find all buffers for a given relation, but\nI am just about sure that complicating the buffer management to do so\nwould be a net loss for normal workloads.\n\n> For the record, what I think we need is dynamically resizable\n> shared_buffers, not a-priori knowledge of what you should set\n> shared_buffers to.\n\nThis isn't likely to happen because the SysV shared memory API isn't\nconducive to it. Absent some amazingly convincing demonstration that\nwe have to have it, the effort of making it happen in a portable way\nisn't going to get spent.\n\n> I've been thinking about implementing a scheme that helps you decide how\n> big the shared_buffers SHOULD BE, by making the LRU list bigger than the\n> cache itself, so you'd be able to see whether there is beneficial effect in\n> increasing shared_buffers.\n\nARC already keeps such a list --- couldn't you learn what you want to\nknow from the existing data structure? It'd be fairly cool if we could\nput out warnings \"you ought to increase shared_buffers\" analogous to the\nexisting facility for noting excessive checkpointing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 12:48:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "Tom Lane wrote:\n> > I've been thinking about implementing a scheme that helps you decide how\n> > big the shared_buffers SHOULD BE, by making the LRU list bigger than the\n> > cache itself, so you'd be able to see whether there is beneficial effect in\n> > increasing shared_buffers.\n> \n> ARC already keeps such a list --- couldn't you learn what you want to\n> know from the existing data structure? It'd be fairly cool if we could\n> put out warnings \"you ought to increase shared_buffers\" analogous to the\n> existing facility for noting excessive checkpointing.\n\nAgreed. ARC already keeps a list of buffers it had to push out recently\nso if it needs them again soon it knows its sizing of recent/frequent\nmight be off (I think). Anyway, such a log report would be super-cool,\nsay if you pushed out a buffer and needed it very soon, and the ARC\nbuffers are already at their maximum for that buffer pool.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 15 Oct 2004 12:57:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability"
},
{
"msg_contents": "> Bruce Momjian\n> Tom Lane wrote:\n> > > I've been thinking about implementing a scheme that helps you\n> decide how\n> > > big the shared_buffers SHOULD BE, by making the LRU list\n> bigger than the\n> > > cache itself, so you'd be able to see whether there is\n> beneficial effect in\n> > > increasing shared_buffers.\n> >\n> > ARC already keeps such a list --- couldn't you learn what you want to\n> > know from the existing data structure? It'd be fairly cool if we could\n> > put out warnings \"you ought to increase shared_buffers\" analogous to the\n> > existing facility for noting excessive checkpointing.\n\nFirst off, many thanks for taking the time to provide the real detail on the\ncode.\n\nThat gives us some much needed direction in interpreting the oprofile\noutput.\n\n>\n> Agreed. ARC already keeps a list of buffers it had to push out recently\n> so if it needs them again soon it knows its sizing of recent/frequent\n> might be off (I think). Anyway, such a log report would be super-cool,\n> say if you pushed out a buffer and needed it very soon, and the ARC\n> buffers are already at their maximum for that buffer pool.\n>\n\nOK, I guess I hadn't realised we were half-way there.\n\nThe \"increase shared_buffers\" warning would be useful, but it would be much\ncooler to have some guidance as to how big to set it, especially since this\nrequires a restart of the server.\n\nWhat I had in mind was a way of keeping track of how the buffer cache hit\nratio would look at various sizes of shared_buffers, for example 50%, 80%,\n120%, 150%, 200% and 400% say. That way you'd stand a chance of plotting the\ncurve and thereby assessing how much memory could be allocated. I've got a\nfew ideas, but I need to check out the code first.\n\nI'll investigate both simple/complex options as an 8.1 feature.\n\nBest Regards, Simon Riggs\n\n\n\n",
"msg_date": "Fri, 15 Oct 2004 20:42:39 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "People:\n\n> First off, many thanks for taking the time to provide the real detail on\n> the code.\n>\n> That gives us some much needed direction in interpreting the oprofile\n> output.\n\nI have some oProfile output; however, it's in 2 out of 20 tests I ran recently \nand I need to get them sorted out.\n\n--Josh\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 15 Oct 2004 12:44:52 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "Tom, Simon:\n\nFirst off, two test runs with OProfile are available at:\nhttp://khack.osdl.org/stp/298124/\nhttp://khack.osdl.org/stp/298121/\n\n> AtEOXact_Buffers\n> transaction commit or abort\n> UnlockBuffers\n> transaction abort, backend exit\n\nActually, this might explain the \"hump\" shape of the curve for this test. \nDBT2 is an OLTP test, which means that (at this scale level) it's attempting \nto do approximately 30 COMMITs per second as well as one ROLLBACK every 3 \nseconds. When I get the tests on DBT3 running, if we see a more gentle \ndropoff on overallocated memory, it would indicate that the above may be a \nfactor.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 15 Oct 2004 13:13:13 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "> this. The SUS text is a bit weaselly (\"the application must ensure\n> correct synchronization\") but the HPUX mmap man page, among others,\n> lays it on the line:\n>\n> It is also unspecified whether write references to a memory region\n> mapped with MAP_SHARED are visible to processes reading the file \n> and\n> whether writes to a file are visible to processes that have \n> mapped the\n> modified portion of that file, except for the effect of msync().\n>\n> It might work on particular OSes but I think depending on such behavior\n> would be folly...\n\nAgreed. Only OSes with a coherent file system buffer cache should ever \nuse mmap(2). In order for this to work on HPUX, msync(2) would need to \nbe used. -sc\n\n-- \nSean Chittenden\n\n",
"msg_date": "Fri, 15 Oct 2004 13:16:27 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> First off, two test runs with OProfile are available at:\n> http://khack.osdl.org/stp/298124/\n> http://khack.osdl.org/stp/298121/\n\nHmm. The stuff above 1% in the first of these is\n\nCounted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000\nsamples % app name symbol name\n8522858 19.7539 vmlinux default_idle\n3510225 8.1359 vmlinux recalc_sigpending_tsk\n1874601 4.3449 vmlinux .text.lock.signal\n1653816 3.8331 postgres SearchCatCache\n1080908 2.5053 postgres AllocSetAlloc\n920369 2.1332 postgres AtEOXact_Buffers\n806218 1.8686 postgres OpernameGetCandidates\n803125 1.8614 postgres StrategyDirtyBufferList\n746123 1.7293 vmlinux __copy_from_user_ll\n651978 1.5111 vmlinux __copy_to_user_ll\n640511 1.4845 postgres XLogInsert\n630797 1.4620 vmlinux rm_from_queue\n607833 1.4088 vmlinux next_thread\n436682 1.0121 postgres LWLockAcquire\n419672 0.9727 postgres yyparse\n\nIn the second test AtEOXact_Buffers is much lower (down around 0.57\npercent) but the other suspects are similar. Since the only difference\nin parameters is shared_buffers (36000 vs 9000), it does look like we\nare approaching the point where AtEOXact_Buffers is a problem, but so\nfar it's only a 2% drag.\n\nI suspect the reason recalc_sigpending_tsk is so high is that the\noriginal coding of PG_TRY involved saving and restoring the signal mask,\nwhich led to a whole lot of sigsetmask-type kernel calls. Is this test\nwith beta3, or something older?\n\nAnother interesting item here is the costs of __copy_from_user_ll/\n__copy_to_user_ll:\n\n36000 buffers:\n746123 1.7293 vmlinux __copy_from_user_ll\n651978 1.5111 vmlinux __copy_to_user_ll\n\n9000 buffers:\n866414 2.0810 vmlinux __copy_from_user_ll\n852620 2.0479 vmlinux __copy_to_user_ll\n\nPresumably the higher costs for 9000 buffers reflect an increased amount\nof shuffling of data between kernel and user space. So 36000 is not\nenough to make the working set totally memory-resident, but even if we\ndrove this cost to zero we'd only be buying a couple percent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 16:34:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "Tom,\n\n> I suspect the reason recalc_sigpending_tsk is so high is that the\n> original coding of PG_TRY involved saving and restoring the signal mask,\n> which led to a whole lot of sigsetmask-type kernel calls. Is this test\n> with beta3, or something older?\n\nBeta3, *without* Gavin or Neil's Futex patch.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 15 Oct 2004 13:38:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> I suspect the reason recalc_sigpending_tsk is so high is that the\n>> original coding of PG_TRY involved saving and restoring the signal mask,\n>> which led to a whole lot of sigsetmask-type kernel calls. Is this test\n>> with beta3, or something older?\n\n> Beta3, *without* Gavin or Neil's Futex patch.\n\nHmm, in that case the cost deserves some further investigation. Can we\nfind out just what that routine does and where it's being called from?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 17:27:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "On Fri, Oct 15, 2004 at 05:27:29PM -0400, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> >> I suspect the reason recalc_sigpending_tsk is so high is that the\n> >> original coding of PG_TRY involved saving and restoring the signal mask,\n> >> which led to a whole lot of sigsetmask-type kernel calls. Is this test\n> >> with beta3, or something older?\n> \n> > Beta3, *without* Gavin or Neil's Futex patch.\n> \n> Hmm, in that case the cost deserves some further investigation. Can we\n> find out just what that routine does and where it's being called from?\n> \n\nThere's a call-graph feature with oprofile as of version 0.8 with\nthe opstack tool, but I'm having a terrible time figuring out why the\noutput isn't doing the graphing part. Otherwise, I'd have that\navailable already...\n\nMark\n",
"msg_date": "Fri, 15 Oct 2004 14:32:06 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "Mark Wong <[email protected]> writes:\n> On Fri, Oct 15, 2004 at 05:27:29PM -0400, Tom Lane wrote:\n>> Hmm, in that case the cost deserves some further investigation. Can we\n>> find out just what that routine does and where it's being called from?\n\n> There's a call-graph feature with oprofile as of version 0.8 with\n> the opstack tool, but I'm having a terrible time figuring out why the\n> output isn't doing the graphing part. Otherwise, I'd have that\n> available already...\n\nI was wondering if this might be associated with do_sigaction.\ndo_sigaction is only 0.23 percent of the runtime according to the\noprofile results:\nhttp://khack.osdl.org/stp/298124/oprofile/DBT_2_Profile-all.oprofile.txt\nbut the profile results for the same run:\nhttp://khack.osdl.org/stp/298124/profile/DBT_2_Profile-tick.sort\nshow do_sigaction very high and recalc_sigpending_tsk nowhere at all.\nSomething funny there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 17:44:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "On Fri, Oct 15, 2004 at 05:44:34PM -0400, Tom Lane wrote:\n> Mark Wong <[email protected]> writes:\n> > On Fri, Oct 15, 2004 at 05:27:29PM -0400, Tom Lane wrote:\n> >> Hmm, in that case the cost deserves some further investigation. Can we\n> >> find out just what that routine does and where it's being called from?\n> \n> > There's a call-graph feature with oprofile as of version 0.8 with\n> > the opstack tool, but I'm having a terrible time figuring out why the\n> > output isn't doing the graphing part. Otherwise, I'd have that\n> > available already...\n> \n> I was wondering if this might be associated with do_sigaction.\n> do_sigaction is only 0.23 percent of the runtime according to the\n> oprofile results:\n> http://khack.osdl.org/stp/298124/oprofile/DBT_2_Profile-all.oprofile.txt\n> but the profile results for the same run:\n> http://khack.osdl.org/stp/298124/profile/DBT_2_Profile-tick.sort\n> show do_sigaction very high and recalc_sigpending_tsk nowhere at all.\n> Something funny there.\n> \n\nI have always attributed those kind of differences based on how\nreadprofile and oprofile collect their data. Granted I don't exactly\nunderstand it. Anyone familiar with the two differences?\n\nMark\n",
"msg_date": "Fri, 15 Oct 2004 15:10:22 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "I wrote:\n> Josh Berkus <[email protected]> writes:\n>> First off, two test runs with OProfile are available at:\n>> http://khack.osdl.org/stp/298124/\n>> http://khack.osdl.org/stp/298121/\n\n> Hmm. The stuff above 1% in the first of these is\n\n> Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000\n> samples % app name symbol name\n> ...\n> 920369 2.1332 postgres AtEOXact_Buffers\n> ...\n\n> In the second test AtEOXact_Buffers is much lower (down around 0.57\n> percent) but the other suspects are similar. Since the only difference\n> in parameters is shared_buffers (36000 vs 9000), it does look like we\n> are approaching the point where AtEOXact_Buffers is a problem, but so\n> far it's only a 2% drag.\n\nIt occurs to me that given the 8.0 resource manager mechanism, we could\nin fact dispense with AtEOXact_Buffers, or perhaps better turn it into a\nno-op unless #ifdef USE_ASSERT_CHECKING. We'd just get rid of the\nspecial case for transaction termination in resowner.c and let the\nresource owner be responsible for releasing locked buffers always. The\nOSDL results suggest that this won't matter much at the level of 10000\nor so shared buffers, but for 100000 or more buffers the linear scan in\nAtEOXact_Buffers is going to become a problem.\n\nWe could also get rid of the linear search in UnlockBuffers(). The only\nthing it's for anymore is to release a BM_PIN_COUNT_WAITER flag, and\nsince a backend could not be doing more than one of those at a time,\nwe don't really need an array of flags for that, only a single variable.\nThis does not show in the OSDL results, which I presume means that their\ntest case is not exercising transaction aborts; but I think we need to\nzap both routines to make the world safe for large shared_buffers\nvalues. (See also\nhttp://archives.postgresql.org/pgsql-performance/2004-10/msg00218.php)\n\nAny objection to doing this for 8.0?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Oct 2004 12:54:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Getting rid of AtEOXact_Buffers (was Re: [Testperf-general] Re:\n\t[PERFORM] First set of OSDL Shared Memscalability results,\n\tsome wierdness ...)"
},
{
"msg_contents": "Tom,\n\n> We could also get rid of the linear search in UnlockBuffers(). The only\n> thing it's for anymore is to release a BM_PIN_COUNT_WAITER flag, and\n> since a backend could not be doing more than one of those at a time,\n> we don't really need an array of flags for that, only a single variable.\n> This does not show in the OSDL results, which I presume means that their\n> test case is not exercising transaction aborts;\n\nIn the test, one out of every 100 new order transactions is aborted (about 1 \nout of 150 transactions overall).\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 16 Oct 2004 14:18:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Getting rid of AtEOXact_Buffers (was Re: [Testperf-general] Re:\n\t[PERFORM] First set of OSDL Shared Memscalability results,\n\tsome wierdness ...)"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> This does not show in the OSDL results, which I presume means that their\n>> test case is not exercising transaction aborts;\n\n> In the test, one out of every 100 new order transactions is aborted (about 1 \n> out of 150 transactions overall).\n\nOkay, but that just ensures that any bottlenecks in xact abort will be\ndown in the noise in this test case ...\n\nIn any case, those changes are in CVS now if you want to try them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Oct 2004 17:19:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Getting rid of AtEOXact_Buffers (was Re: [Testperf-general] Re:\n\t[PERFORM] First set of OSDL Shared Memscalability results,\n\tsome wierdness ...)"
},
{
"msg_contents": "Tom,\n\n> In any case, those changes are in CVS now if you want to try them.\n\nOK. Will have to wait until OSDL gives me a dedicated testing machine \nsometime mon/tues/wed.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 16 Oct 2004 14:36:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Getting rid of AtEOXact_Buffers (was Re: [Testperf-general] Re:\n\t[PERFORM] First set of OSDL Shared Memscalability results,\n\tsome wierdness ...)"
},
{
"msg_contents": "On 10/14/2004 6:36 PM, Simon Riggs wrote:\n\n> [...]\n> I think Jan has said this also in far fewer words, but I'll leave that to\n> Jan to agree/disagree...\n\nI do agree. The total DB size has as little to do with the optimum \nshared buffer cache size as the total available RAM of the machine.\n\nAfter reading your comments it appears more clear to me. All what those \ntests did show is the amount of high frequently accessed data in this \ndatabase population and workload combination.\n\n> \n> I say this: ARC in 8.0 PostgreSQL allows us to sensibly allocate as large a\n> shared_buffers cache as is required by the database workload, and this\n> should not be constrained to a small percentage of server RAM.\n\nRight.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Mon, 18 Oct 2004 15:17:11 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "On 10/14/2004 8:10 PM, Christopher Browne wrote:\n\n> Quoth [email protected] (\"Simon Riggs\"):\n>> I say this: ARC in 8.0 PostgreSQL allows us to sensibly allocate as\n>> large a shared_buffers cache as is required by the database\n>> workload, and this should not be constrained to a small percentage\n>> of server RAM.\n> \n> I don't think that this particularly follows from \"what ARC does.\"\n\nThe combination of ARC together with the background writer is supposed \nto allow us to allocate the optimum even if that is large. The former \nimplementation of the LRU without background writer would just hang the \nserver for a long time during a checkpoint, which is absolutely \ninacceptable for any OLTP system.\n\n\nJan\n\n> \n> \"What ARC does\" is to prevent certain conspicuous patterns of\n> sequential accesses from essentially trashing the contents of the\n> cache.\n> \n> If a particular benchmark does not include conspicuous vacuums or\n> sequential scans on large tables, then there is little reason to\n> expect ARC to have a noticeable impact on performance.\n> \n> It _could_ be that this implies that ARC allows you to get some use\n> out of a larger shared cache, as it won't get blown away by vacuums\n> and Seq Scans. But it is _not_ obvious that this is a necessary\n> truth.\n> \n> _Other_ truths we know about are:\n> \n> a) If you increase the shared cache, that means more data that is\n> represented in both the shared cache and the OS buffer cache,\n> which seems rather a waste;\n> \n> b) The larger the shared cache, the more pages there are for the\n> backend to rummage through before it looks to the filesystem,\n> and therefore the more expensive cache misses get. Cache hits\n> get more expensive, too. Searching through memory is not\n> costless.\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Mon, 18 Oct 2004 15:37:43 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Simon,\n\n> I agree that you could test this by running on a bigger or smaller server,\n> i.e. one with more or less RAM. Running on a faster/slower server at the\n> same time might alter the results and confuse the situation.\n\nUnfortunately, a faster server is the only option I have that also has more \nRAM. If I double the RAM and double the processors at the same time, what \nwould you expect to happen to the shared_buffers curve?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 18 Oct 2004 14:00:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Testperf-general] Re: First set of OSDL Shared Memscalability\n\tresults, some wierdness ..."
},
{
"msg_contents": "Simon, Folks,\n\nI've put links to all of my OSDL-STP test results up on the TestPerf project:\nhttp://pgfoundry.org/forum/forum.php?thread_id=164&forum_id=160\n\nSHare&Enjoy!\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Oct 2004 13:28:46 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Links to OSDL test results up"
},
{
"msg_contents": "On Sat, 9 Oct 2004, Tom Lane wrote:\n\n> mmap provides msync which is comparable to fsync, but AFAICS it\n> provides no way to prevent an in-memory change from reaching disk too\n> soon. This would mean that WAL entries would have to be written *and\n> flushed* before we could make the data change at all, which would\n> convert multiple updates of a single page into a series of write-and-\n> wait-for-WAL-fsync steps. Not good. fsync'ing WAL once per transaction\n> is bad enough, once per atomic action is intolerable.\n\nBack when I was working out how to do this, I reckoned that you could\nuse mmap by keeping a write queue for each modified page. Reading,\nyou'd have to read the datum from the page and then check the write\nqueue for that page to see if that datum had been updated, using the\nnew value if it's there. Writing, you'd add the modified datum to the\nwrite queue, but not apply the write queue to the page until you'd had\nconfirmation that the corresponding transaction log entry had been\nwritten. So multiple writes are no big deal; they just all queue up in\nthe write queue, and at any time you can apply as much of the write\nqueue to the page itself as the current log entry will allow.\n\nThere are several different strategies available for mapping and\nunmapping the pages, and in fact there might need to be several\navailable to get the best performance out of different systems. Most\nOSes do not seem to be optimized for having thousands or tens of\nthousands of small mappings (certainly NetBSD isn't), but I've never\ndone any performance tests to see what kind of strategies might work\nwell or not.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.NetBSD.org\n Make up enjoying your city life...produced by BIC CAMERA\n",
"msg_date": "Sat, 23 Oct 2004 16:33:40 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Curt Sampson <[email protected]> writes:\n> Back when I was working out how to do this, I reckoned that you could\n> use mmap by keeping a write queue for each modified page. Reading,\n> you'd have to read the datum from the page and then check the write\n> queue for that page to see if that datum had been updated, using the\n> new value if it's there. Writing, you'd add the modified datum to the\n> write queue, but not apply the write queue to the page until you'd had\n> confirmation that the corresponding transaction log entry had been\n> written. So multiple writes are no big deal; they just all queue up in\n> the write queue, and at any time you can apply as much of the write\n> queue to the page itself as the current log entry will allow.\n\nSeems to me the overhead of any such scheme would swamp the savings from\navoiding kernel/userspace copies ... the locking issues alone would be\npainful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Oct 2004 14:11:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "On Sat, 23 Oct 2004, Tom Lane wrote:\n\n> Seems to me the overhead of any such scheme would swamp the savings from\n> avoiding kernel/userspace copies ...\n\nWell, one really can't know without testing, but memory copies are\nextremely expensive if they go outside of the cache.\n\n> the locking issues alone would be painful.\n\nI don't see why they would be any more painful than the current locking\nissues. In fact, I don't see any reason to add more locking than we\nalready use when updating pages.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.NetBSD.org\n Make up enjoying your city life...produced by BIC CAMERA\n",
"msg_date": "Sun, 24 Oct 2004 14:46:16 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Curt Sampson <[email protected]> writes:\n> On Sat, 23 Oct 2004, Tom Lane wrote:\n>> Seems to me the overhead of any such scheme would swamp the savings from\n>> avoiding kernel/userspace copies ...\n\n> Well, one really can't know without testing, but memory copies are\n> extremely expensive if they go outside of the cache.\n\nSure, but what about all the copying from write queue to page?\n\n>> the locking issues alone would be painful.\n\n> I don't see why they would be any more painful than the current locking\n> issues.\n\nBecause there are more locks --- the write queue data structure will\nneed to be locked separately from the page. (Even with a separate write\nqueue per page, there will need to be a shared data structure that\nallows you to allocate and find write queues, and that thing will be a\nsubject of contention. See BufMgrLock, which is not held while actively\ntwiddling the contents of pages, but is a serious cause of contention\nanyway.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 2004 10:39:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some "
},
{
"msg_contents": "On Sun, 24 Oct 2004, Tom Lane wrote:\n\n> > Well, one really can't know without testing, but memory copies are\n> > extremely expensive if they go outside of the cache.\n>\n> Sure, but what about all the copying from write queue to page?\n\nThere's a pretty big difference between few-hundred-bytes-on-write and\neight-kilobytes-with-every-read memory copy.\n\nAs for the queue allocation, again, I have no data to back this up, but\nI don't think it would be as bad as BufMgrLock. Not every page will have\na write queue, and a \"hot\" page is only going to get one once. (If a\npage has a write queue, you might as well leave it with the page after\nflushing it, and get rid of it only when the page leaves memory.)\n\nI see the OS issues related to mapping that much memory as a much bigger\npotential problem.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.NetBSD.org\n Make up enjoying your city life...produced by BIC CAMERA\n",
"msg_date": "Mon, 25 Oct 2004 09:30:56 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Curt Sampson <[email protected]> writes:\n> I see the OS issues related to mapping that much memory as a much bigger\n> potential problem.\n\nI see potential problems everywhere I look ;-)\n\nConsidering that the available numbers suggest we could win just a few\npercent (and that's assuming that all this extra mechanism has zero\ncost), I can't believe that the project is worth spending manpower on.\nThere is a lot of much more attractive fruit hanging at lower levels.\nThe bitmap-indexing stuff that was recently being discussed, for\ninstance, would certainly take less effort than this; it would create\nno new portability issues; and at least for the queries where it helps,\nit could offer integer-multiple speedups, not percentage points.\n\nMy engineering professors taught me that you put large effort where you\nhave a chance at large rewards. Converting PG to mmap doesn't seem to\nmeet that test, even if I believed it would work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 2004 21:18:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some "
},
{
"msg_contents": "On Sun, 24 Oct 2004, Tom Lane wrote:\n\n> Considering that the available numbers suggest we could win just a few\n> percent...\n\nI must confess that I was completely unaware of these \"numbers.\" Where\ndo I find them?\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.NetBSD.org\n Make up enjoying your city life...produced by BIC CAMERA\n",
"msg_date": "Mon, 25 Oct 2004 10:32:55 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some"
},
{
"msg_contents": "Curt Sampson <[email protected]> writes:\n> On Sun, 24 Oct 2004, Tom Lane wrote:\n>> Considering that the available numbers suggest we could win just a few\n>> percent...\n\n> I must confess that I was completely unaware of these \"numbers.\" Where\n> do I find them?\n\nThe only numbers I've seen that directly bear on the question is\nthe oprofile results that Josh recently put up for the DBT-3 benchmark,\nwhich showed the kernel copy-to-userspace and copy-from-userspace\nsubroutines eating a percent or two apiece of the total runtime.\nI don't have the URL at hand but it was posted just a few days ago.\n(Now that covers all such copies and not only our datafile reads/writes,\nbut it's probably fair to assume that the datafile I/O is the bulk of it.)\n\nThis is, of course, only one benchmark ... but lacking any measurements\nin opposition, I'm inclined to believe it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 2004 21:49:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some "
},
{
"msg_contents": "I wrote:\n> I don't have the URL at hand but it was posted just a few days ago.\n\n... actually, it was the beginning of this here thread ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 2004 21:50:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: First set of OSDL Shared Mem scalability results, some "
}
] |
[
{
"msg_contents": "Hi all,\nI'm wondering if setting the $PG_DATA directory\nas synchronous directory in order to make a crash\nevent more safe will penalyze the performances.\n\nIf you run a kernel 2.6 the command is:\n\nchattr +S $PG_DATA\n\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Sun, 10 Oct 2004 11:19:59 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "kernel 2.6 synchronous directory"
}
] |
[
{
"msg_contents": "I've been wondering...\n\nSuppose we have two tables\nCREATE TABLE messages (\n message_id serial PRIMARY KEY,\n message text NOT NULL\n);\nCREATE TABLE entries (\n entry_id serial PRIMARY KEY,\n message_id integer NOT NULL REFERENCES messages\n);\n\nAnd we have a join:\nSELECT entry_id,message FROM entries NATURAL JOIN messages ORDER BY\nentry_id DESC LIMIT 10;\n\nThe typical planners order of doing things is -- join the tables,\nperform sort, perform limit.\n\nBut in the above case (which I guess is quite common) few things can be assumed.\n1) to perform ORDER BY we don't need any join (entry_id is in our\nentries table).\n2) entries.entry_id references PRIMARY KEY, which is unique, so we\nwill have not less, not more but exactly one row per join (one row\nfrom messages per one row from entries)\n3) Knowing above, instead of performing join on each of thousands of\nentries rows, we could perform ORDER BY and LIMIT before JOINing.\n4) And then, after LIMITing we could JOIN those 5 rows.\n\nThis I guess would be quite benefitial for VIEWs. :)\n\nOther thing that would be, I guess, benefitial for views would be\nspecial handling of lines like this:\n\nSELECT entry_id,message_id FROM entries NATURAL JOIN messages;\n\nHere there is no reason to perform JOIN at all -- the data will not be used.\nAs above, since entries.message_id IS NOT NULL REFERENCES messages\nand messages is UNIQUE (PRIMARY KEY) we are sure there will be one-to-one(*)\nmapping between two tables. And since these keys are not used, no need to\nwaste time and perform JOIN.\n\nI wonder what you all think about it. :)\n\n Regards,\n Dawid\n\n(*) not exactly one-to-one, because same messages.message_id can be\nreferences many times from entries.message_id, but the join will\nreturn exactly the same number of lines as would select * from\nentries;\n",
"msg_date": "Mon, 11 Oct 2004 11:54:41 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Views, joins and LIMIT"
},
{
"msg_contents": "Dawid Kuroczko <[email protected]> writes:\n> This I guess would be quite benefitial for VIEWs. :)\n\nHave you tried it?\n\nregression-# SELECT entry_id,message FROM entries NATURAL JOIN messages ORDER BY entry_id DESC LIMIT 10;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------\n Limit (cost=0.00..48.88 rows=10 width=36)\n -> Nested Loop (cost=0.00..4887.52 rows=1000 width=36)\n -> Index Scan Backward using entries_pkey on entries (cost=0.00..52.00 rows=1000 width=8)\n -> Index Scan using messages_pkey on messages (cost=0.00..4.82 rows=1 width=36)\n Index Cond: (\"outer\".message_id = messages.message_id)\n(5 rows)\n\n> Other thing that would be, I guess, benefitial for views would be\n> special handling of lines like this:\n\n> SELECT entry_id,message_id FROM entries NATURAL JOIN messages;\n\n> Here there is no reason to perform JOIN at all -- the data will not be used.\n> As above, since entries.message_id IS NOT NULL REFERENCES messages\n> and messages is UNIQUE (PRIMARY KEY) we are sure there will be one-to-one(*)\n> mapping between two tables. And since these keys are not used, no need to\n> waste time and perform JOIN.\n\nThe bang-for-the-buck ratio on that seems much too low.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Oct 2004 10:14:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views, joins and LIMIT "
}
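Since Dawid's motivation was VIEWs, a small follow-on sketch may help (it uses the thread's tables plus a made-up view name, and is not from the original exchange): simple views are flattened into the calling query before planning, so the Limit-over-nested-loop plan Tom shows should carry over unchanged when the join is hidden behind a view.

CREATE VIEW entry_messages AS
  SELECT entry_id, message FROM entries NATURAL JOIN messages;

EXPLAIN SELECT * FROM entry_messages ORDER BY entry_id DESC LIMIT 10;
-- expected shape: Limit over a nested loop driven by a backward index scan
-- on entries_pkey, so only about ten rows of messages are ever fetched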
] |
[
{
"msg_contents": "Is there a tutorial or reference to the different terms that appear on the \nexplain output?\n\n\nItems such as \"Nested Loop\", \"Hash\"..\n\nAlso is there a way to easily tell which of two explains is \"worse\". \nExample I am running a query with \"set enable_seqscan to off;\" and i see \nthe explain now shows index scans, but not sure if is any faster now.\n\nI tried \"explain analyze\" and the \"total runtime\" for the one with \nseq_scan off was faster, but after repeathing them they both dropped in \ntime, likely due to data getting cached. Even after the time drops for \nboth the one with seqscan off was always faster.\n\nIs there any disadvantage of having the enable_seqscan off?\n",
"msg_date": "Mon, 11 Oct 2004 17:04:16 -0400 (EDT)",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Understanding explains"
},
{
"msg_contents": "while you weren't looking, Francisco Reyes wrote:\n\n> Is there any disadvantage of having the enable_seqscan off?\n\nPlenty.\n\nThe planner will choose whichever plan looks \"cheapest\", based on the\ninformation it has available (table size, statistics, &c). If a\nsequential scan looks cheaper, and in your case above it clearly is,\nthe planner will choose that query plan. Setting enable_seqscan =\nfalse doesn't actually disable sequential scans; it merely makes them\nseem radically more expensive to the planner, in hopes of biasing its\nchoice towards another query plan. In your case, that margin made an\nindex scan look less expensive than sequential scan, but your query\nruntimes clearly suggest otherwise.\n\nIn general, it's best to let the planner make the appropriate choice\nwithout any artificial constraints. I've seen pathalogical cases\nwhere the planner makes the wrong choice(s), but upon analysis,\nthey're almost always attributable to poor statistics, long\nun-vacuumed tables, &c.\n\n/rls\n\n-- \n:wq\n",
"msg_date": "Mon, 11 Oct 2004 17:03:07 -0500",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding explains"
},
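A low-impact way to run the comparison Francisco describes is to scope the planner setting to a single transaction rather than the whole session. A sketch (the SELECT is only a placeholder for whatever query is being compared):

BEGIN;
SET LOCAL enable_seqscan = off;   -- visible only inside this transaction
EXPLAIN ANALYZE SELECT ...;       -- substitute the query under test
ROLLBACK;                         -- the setting reverts automatically here

This keeps the artificial cost penalty from leaking into other queries while still letting both plans be timed back to back.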
{
"msg_contents": "On Mon, 11 Oct 2004, Rosser Schwarz wrote:\n\n> In general, it's best to let the planner make the appropriate choice\n> without any artificial constraints.\n\nAs someone suggested ran with Explain analyze.\nWith seqscan_off was better.\nRan a vacuum analyze this afternoon so the stats were up to date.\nAlthough I will leave the setting as it's default for most of everything I \ndo, it seems that for some reason in this case it mases sense to turn it \noff.\n",
"msg_date": "Tue, 12 Oct 2004 00:59:37 -0400 (EDT)",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding explains"
}
] |
[
{
"msg_contents": "Hi guys,\n\n please consider this scenario. I have this table:\n\nCREATE TABLE ip2location (\n ip_address_from BIGINT NOT NULL,\n ip_address_to BIGINT NOT NULL,\n id_location BIGINT NOT NULL,\n PRIMARY KEY (ip_address_from, ip_address_to)\n);\n\nI created a cluster on its primary key, by running:\nCLUSTER ip2location_ip_address_from_key ON ip2location;\n\nThis allowed me to organise data in a more efficient way: the data that is \ncontained are ranges of IP addresses with empty intersections; for every IP \nclass there is a related location's ID. The total number of entries is 1392443.\n\nFor every IP address I have, an application retrieves the corresponding \nlocation's id from the above table, by running a query like:\n\nSELECT id_location FROM ip2location WHERE '11020000111' >= ip_address_from \nAND '11020000111' <= ip_address_to;\n\nFor instance, by running the 'EXPLAIN ANALYSE' command, I get this \"funny\" \nresult:\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8) \n(actual time=5338.120..40237.283 rows=1 loops=1)\n Filter: ((1040878301::bigint >= ip_address_from) AND \n(1040878301::bigint <= ip_address_to))\n Total runtime: 40237.424 ms\n\n\nWith other data, that returns an empty set, I get:\n\nexplain SELECT id_location FROM ip2location WHERE '11020000111' >= \nip_address_from AND '11020000111' <= ip_address_to;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Index Scan using ip2location_ip_address_from_key on \nip2location (cost=0.00..419.16 rows=140 width=8)\n Index Cond: ((11020000111::bigint >= ip_address_from) AND \n(11020000111::bigint <= ip_address_to))\n\n\nI guess the planner chooses the best of the available options for the first \ncase, the sequential scan. This is not confirmed though by the fact that, \nafter I ran \"SET enable_scan TO off\", I got this:\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ip2location_ip_address_from_key on \nip2location (cost=0.00..31505.73 rows=124781 width=8) (actual \ntime=2780.172..2780.185 rows=1 loops=1)\n Index Cond: ((1040878301::bigint >= ip_address_from) AND \n(1040878301::bigint <= ip_address_to))\n Total runtime: 2780.359 ms\n\n\nIs this a normal case or should I worry? What am I missing? Do you have any \nsuggestion or comment to do (that would be extremely appreciated)? Is the \nCLUSTER I created worthwhile or not?\n\nThank you,\n-Gabriele\n\n--\nGabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check \nmaintainer\nCurrent Location: Prato, Toscana, Italia\[email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The \nInferno\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.773 / Virus Database: 520 - Release Date: 05/10/2004",
"msg_date": "Mon, 11 Oct 2004 23:05:59 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Normal case or bad query plan?"
},
{
"msg_contents": "\n\nOn Mon, 11 Oct 2004, Gabriele Bartolini wrote:\n\n> ---------------------------------------------------------------------------------------------------------------------\n> Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8) \n> (actual time=5338.120..40237.283 rows=1 loops=1)\n> Filter: ((1040878301::bigint >= ip_address_from) AND \n> (1040878301::bigint <= ip_address_to))\n> Total runtime: 40237.424 ms\n> \n\nI believe the problem is that pg's lack of cross-column statistics is \nproducing the poor number of rows estimate. The number of rows mataching \njust the first 1040878301::bigint >= ip_address_from condition is 122774 \nwhich is roughtly 10% of the table. I imagine the query planner \nbelieves that the other condition alone will match the other 90% of the \ntable. The problem is that it doesn't know that these two ranges'\nintersection is actually tiny. The planner assumes a complete or nearly \ncomplete overlap so it thinks it will need to fetch 10% of the rows from \nboth the index and the heap and chooses a seqscan.\n\nKris Jurka\n",
"msg_date": "Mon, 11 Oct 2004 16:17:24 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Normal case or bad query plan?"
},
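One way to see the effect Kris describes is to ask the planner about each bound on its own; because the two conditions are treated as independent, their individual selectivities combine into the inflated estimate for the range query. A sketch against the thread's table:

EXPLAIN SELECT id_location FROM ip2location
 WHERE 1040878301::bigint >= ip_address_from;

EXPLAIN SELECT id_location FROM ip2location
 WHERE 1040878301::bigint <= ip_address_to;

-- each one-sided condition gets a plausible estimate on its own; the planner
-- has no cross-column statistics to tell it the intersection is a single row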
{
"msg_contents": "Gabriele Bartolini <[email protected]> writes:\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8) \n> (actual time=5338.120..40237.283 rows=1 loops=1)\n> Filter: ((1040878301::bigint >= ip_address_from) AND \n> (1040878301::bigint <= ip_address_to))\n> Total runtime: 40237.424 ms\n\n> Is this a normal case or should I worry? What am I missing?\n\nThe striking thing about that is the huge difference between estimated\nrowcount (124781) and actual (1). The planner would certainly have\npicked an indexscan if it thought the query would select only one row.\n\nI suspect that you haven't ANALYZEd this table in a long time, if ever.\nYou really need reasonably up-to-date ANALYZE stats if you want the\nplanner to do an adequate job of planning range queries. It may well be\nthat you need to increase the analyze statistics target for this table,\nalso --- in BIGINT terms the distribution is probably pretty irregular,\nwhich will mean you need finer-grain statistics to get good estimates.\n\n(BTW, have you looked at the inet datatype to see if that would fit your\nneeds?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Oct 2004 17:33:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Normal case or bad query plan? "
},
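The per-column statistics-target change Tom mentions takes effect at the next ANALYZE; a sketch using the value Gabriele reports trying later in the thread:

ALTER TABLE ip2location ALTER COLUMN ip_address_from SET STATISTICS 1000;
ALTER TABLE ip2location ALTER COLUMN ip_address_to SET STATISTICS 1000;
ANALYZE ip2location;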
{
"msg_contents": "Hi Tom,\n\n thanks for your interest.\n\nAt 23.33 11/10/2004, Tom Lane wrote:\n\n>Gabriele Bartolini <[email protected]> writes:\n> > QUERY PLAN\n> > \n> ---------------------------------------------------------------------------------------------------------------------\n> > Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8)\n> > (actual time=5338.120..40237.283 rows=1 loops=1)\n> > Filter: ((1040878301::bigint >= ip_address_from) AND\n> > (1040878301::bigint <= ip_address_to))\n> > Total runtime: 40237.424 ms\n>\n> > Is this a normal case or should I worry? What am I missing?\n>\n>The striking thing about that is the huge difference between estimated\n>rowcount (124781) and actual (1). The planner would certainly have\n>picked an indexscan if it thought the query would select only one row.\n>\n>I suspect that you haven't ANALYZEd this table in a long time, if ever.\n>You really need reasonably up-to-date ANALYZE stats if you want the\n>planner to do an adequate job of planning range queries.\n\nThat's the thing ... I had just peformed a VACUUM ANALYSE :-(\n\n\n> It may well be that you need to increase the analyze statistics target \n> for this table,\n>also --- in BIGINT terms the distribution is probably pretty irregular,\n>which will mean you need finer-grain statistics to get good estimates.\n\nYou mean ... SET STATISTICS for the two columns, don't you?\n\n>(BTW, have you looked at the inet datatype to see if that would fit your\n>needs?)\n\nYes, I know. In other cases I use it. But this is a type of data coming \nfrom an external source (www.ip2location.com) and I can't change it.\n\nThank you so much. I will try to play with the grain of the statistics, \notherwise - if worse comes to worst - I will simply disable the seq scan \nafter connecting.\n\n-Gabriele\n--\nGabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check \nmaintainer\nCurrent Location: Prato, Toscana, Italia\[email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The \nInferno\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.773 / Virus Database: 520 - Release Date: 05/10/2004",
"msg_date": "Tue, 12 Oct 2004 07:26:12 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Normal case or bad query plan? "
},
{
"msg_contents": "Gabriele Bartolini <[email protected]> writes:\n> Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8)\n> (actual time=5338.120..40237.283 rows=1 loops=1)\n> Filter: ((1040878301::bigint >= ip_address_from) AND\n> (1040878301::bigint <= ip_address_to))\n> Total runtime: 40237.424 ms\n>> \n>> I suspect that you haven't ANALYZEd this table in a long time, if ever.\n>> You really need reasonably up-to-date ANALYZE stats if you want the\n>> planner to do an adequate job of planning range queries.\n\n> That's the thing ... I had just peformed a VACUUM ANALYSE :-(\n\nIn that case I think Kris Jurka had it right: the problem is the planner\ndoesn't know enough about the relationship of the ip_address_from and\nip_address_to columns to realize that this is a very selective query.\nBut actually, even *had* it realized that, it would have had little\nchoice but to use a seqscan, because neither of the independent\nconditions is really very useful as an index condition by itself.\n\nAssuming that this problem is representative of your query load, you\nreally need to recast the data representation to make it more readily\nsearchable. I think you might be able to get somewhere by combining\nip_address_from and ip_address_to into a single CIDR column and then\nusing the network-overlap operator to probe for matches to your query\naddress. (This assumes that the from/to pairs are actually meant to\nrepresent CIDR subnets; if not you need some other idea.) Another\npossibility is to convert to a geometric type and use an rtree index\nwith an \"overlaps\" operator. I'm too tired to work out the details,\nbut try searching for \"decorrelation\" in the list archives to see some\nrelated problems.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2004 01:45:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Normal case or bad query plan? "
},
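One concrete form Tom's geometric-type idea could take is to store each address range as a flat box and probe it with a point. This is a speculative sketch, not something tested in the thread, and it is written with current operator and index-method names rather than the 8.0-era rtree Tom refers to:

CREATE TABLE ip2location_geo AS
  SELECT box(point(ip_address_from, 0), point(ip_address_to, 0)) AS ip_range,
         id_location
    FROM ip2location;

CREATE INDEX ip2location_geo_idx ON ip2location_geo USING gist (ip_range);

-- "which range contains this address?" becomes an indexable containment test
SELECT id_location
  FROM ip2location_geo
 WHERE ip_range @> point(1040878301, 0);

IPv4 addresses stored as bigints are far below 2^53, so they convert to the double precision point coordinates without loss.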
{
"msg_contents": "Makes sense. See DB2 8.2 info on their new implementation of cross column\nstatistics. If this is common and you're willing to change code, you can\nfake that by adding a operation index on some hash function of both columns,\nand search for both columns and the hash.\n\n----- Original Message ----- \nFrom: \"Kris Jurka\" <[email protected]>\nTo: \"Gabriele Bartolini\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, October 11, 2004 5:17 PM\nSubject: Re: [PERFORM] Normal case or bad query plan?\n\n\n>\n>\n> On Mon, 11 Oct 2004, Gabriele Bartolini wrote:\n>\n>\n> --------------------------------------------------------------------------\n-------------------------------------------\n> > Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8)\n> > (actual time=5338.120..40237.283 rows=1 loops=1)\n> > Filter: ((1040878301::bigint >= ip_address_from) AND\n> > (1040878301::bigint <= ip_address_to))\n> > Total runtime: 40237.424 ms\n> >\n>\n> I believe the problem is that pg's lack of cross-column statistics is\n> producing the poor number of rows estimate. The number of rows mataching\n> just the first 1040878301::bigint >= ip_address_from condition is 122774\n> which is roughtly 10% of the table. I imagine the query planner\n> believes that the other condition alone will match the other 90% of the\n> table. The problem is that it doesn't know that these two ranges'\n> intersection is actually tiny. The planner assumes a complete or nearly\n> complete overlap so it thinks it will need to fetch 10% of the rows from\n> both the index and the heap and chooses a seqscan.\n>\n> Kris Jurka\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n",
"msg_date": "Tue, 12 Oct 2004 08:20:52 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Normal case or bad query plan?"
},
{
"msg_contents": "Hi Kris,\n\n>I believe the problem is that pg's lack of cross-column statistics is\n>producing the poor number of rows estimate. The number of rows mataching\n\nI got your point now. I had not understood it last night but it makes really\nsense.\n\n>which is roughtly 10% of the table. I imagine the query planner\n>believes that the other condition alone will match the other 90% of the\n\n>table. The problem is that it doesn't know that these two ranges'\n>intersection is actually tiny. The planner assumes a complete or nearly\n>complete overlap so it thinks it will need to fetch 10% of the rows from\n\nYep, because it performs those checks separately and it gets 10% for one\ncheck and 90% for the other.\n\nAs Tom says, I should somehow make PostgreSQL see my data as a single entity\nin order to perform a real range check. I will study some way to obtain\nit.\n\nHowever, I got better results by specifying the grane of the statistics\nthrough \"ALTER TABLE ... SET STATISTICS\".\n\nFYI I set it to 1000 (the maximum) and I reduced the query's estimated time\nby the 90% (from 40000ms to 4000ms) although much slower than the index\nscan (200ms).\n\nI will play a bit with data types as Tom suggested.\n\nFor now, thanks anyone who tried and helped me.\n\nCiao,\n-Gabriele\n\n",
"msg_date": "Tue, 12 Oct 2004 16:29:36 +0200",
"msg_from": "\"Gabriele Bartolini\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Normal case or bad query plan?"
},
{
"msg_contents": "On Tue, Oct 12, 2004 at 04:29:36PM +0200, Gabriele Bartolini wrote:\n> FYI I set it to 1000 (the maximum) and I reduced the query's estimated time\n> by the 90% (from 40000ms to 4000ms) although much slower than the index\n> scan (200ms).\n\nNote that the estimated times are _not_ in ms. They are in multiples of a\ndisk fetch (?). Thus, you can't compare estimated and real times like that.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 12 Oct 2004 16:40:54 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Normal case or bad query plan?"
},
{
"msg_contents": "This may sound more elaborate than it's worth, but I don't know of\na better way to avoid a table scan.\n\nYou want to index on a computed value that is a common prefix of your\nFROM and TO fields.\n\nThe next step is to search on a fixed SET of prefixes of different \nlengths. For example, some of your ranges might be common in the first 3 \nbytes of ipaddr, some in two, some in only one.\n\nYou create and index on one common prefix of either 1,2 or 3 bytes, for \neach row.\n\nYour query then looks something like (pardon my ignorance in PGSQL)\n\n\tselect\t*\n\tfrom\tip2location\n\twhere\tip2prefix in (\n\t\tnetwork(:myaddr || '/8'),\n\t\tnetwork(:myaddr || '/16'),\n\t\tnetwork(:myaddr || '/24'),\n\t\t:myaddr --- assuming single-address ranges are possible\n\t\t)\n\tand :myaddr between ip_address_from and ip_address_to\n\nAlthough this looks a little gross, it hits very few records.\nIt also adapts cleanly to a join between ip2location and a table of\nip addrs.\n\nGabriele Bartolini wrote:\n> Hi guys,\n> \n> please consider this scenario. I have this table:\n> \n> CREATE TABLE ip2location (\n> ip_address_from BIGINT NOT NULL,\n> ip_address_to BIGINT NOT NULL,\n> id_location BIGINT NOT NULL,\n> PRIMARY KEY (ip_address_from, ip_address_to)\n> );\n> \n> I created a cluster on its primary key, by running:\n> CLUSTER ip2location_ip_address_from_key ON ip2location;\n> \n> This allowed me to organise data in a more efficient way: the data that \n> is contained are ranges of IP addresses with empty intersections; for \n> every IP class there is a related location's ID. The total number of \n> entries is 1392443.\n> \n> For every IP address I have, an application retrieves the corresponding \n> location's id from the above table, by running a query like:\n> \n> SELECT id_location FROM ip2location WHERE '11020000111' >= \n> ip_address_from AND '11020000111' <= ip_address_to;\n> \n> For instance, by running the 'EXPLAIN ANALYSE' command, I get this \n> \"funny\" result:\n> \n> \n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------- \n> \n> Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8) \n> (actual time=5338.120..40237.283 rows=1 loops=1)\n> Filter: ((1040878301::bigint >= ip_address_from) AND \n> (1040878301::bigint <= ip_address_to))\n> Total runtime: 40237.424 ms\n> \n> \n> With other data, that returns an empty set, I get:\n> \n> explain SELECT id_location FROM ip2location WHERE '11020000111' >= \n> ip_address_from AND '11020000111' <= ip_address_to;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------- \n> \n> Index Scan using ip2location_ip_address_from_key on ip2location \n> (cost=0.00..419.16 rows=140 width=8)\n> Index Cond: ((11020000111::bigint >= ip_address_from) AND \n> (11020000111::bigint <= ip_address_to))\n> \n> \n> I guess the planner chooses the best of the available options for the \n> first case, the sequential scan. 
This is not confirmed though by the \n> fact that, after I ran \"SET enable_scan TO off\", I got this:\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------ \n> \n> Index Scan using ip2location_ip_address_from_key on ip2location \n> (cost=0.00..31505.73 rows=124781 width=8) (actual \n> time=2780.172..2780.185 rows=1 loops=1)\n> Index Cond: ((1040878301::bigint >= ip_address_from) AND \n> (1040878301::bigint <= ip_address_to))\n> Total runtime: 2780.359 ms\n> \n> \n> Is this a normal case or should I worry? What am I missing? Do you have \n> any suggestion or comment to do (that would be extremely appreciated)? \n> Is the CLUSTER I created worthwhile or not?\n> \n> Thank you,\n> -Gabriele\n> \n> -- \n> Gabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, \n> ht://Check maintainer\n> Current Location: Prato, Toscana, Italia\n> [email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n> > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, \n> The Inferno\n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---\n> Outgoing mail is certified Virus Free.\n> Checked by AVG anti-virus system (http://www.grisoft.com).\n> Version: 6.0.773 / Virus Database: 520 - Release Date: 05/10/2004\n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n",
"msg_date": "Tue, 12 Oct 2004 16:56:53 GMT",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Normal case or bad query plan?"
}
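For anyone wanting to try the prefix idea above against the BIGINT columns used in this thread, one possible concretisation is sketched below. The block_start column, the index name and the :addr placeholder (a client-side parameter in the same style as Mischa's :myaddr) are invented for the example, and only the /24 population step is shown; it is a sketch, not a tested recipe.

    -- store, per row, the start of an aligned /8, /16 or /24 block that
    -- contains the whole range, and index it
    ALTER TABLE ip2location ADD COLUMN block_start BIGINT;

    -- e.g. ranges that fit entirely inside one /24 (only the low 8 bits vary);
    -- analogous UPDATEs would use 65536 for /16 and 16777216 for /8
    UPDATE ip2location
       SET block_start = (ip_address_from / 256) * 256
     WHERE (ip_address_from / 256) = (ip_address_to / 256);

    CREATE INDEX idx_ip2location_block ON ip2location (block_start);

    -- the lookup probes only the fixed set of blocks that could contain :addr,
    -- so the final range test touches a handful of rows instead of the table
    SELECT id_location
      FROM ip2location
     WHERE block_start IN ((:addr / 16777216) * 16777216,  -- its /8 block
                           (:addr / 65536) * 65536,        -- its /16 block
                           (:addr / 256) * 256,            -- its /24 block
                           :addr)                          -- single-address rows
       AND :addr BETWEEN ip_address_from AND ip_address_to;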
] |
[
{
"msg_contents": "Folks,\n\nIn order to have a place for scripts, graphs, results, etc., I've started the \nTestPerf project on pgFoundry:\nhttp://pgfoundry.org/projects/testperf/\n\nIf you are interested in doing performance testing for PostgreSQL, please join \nthe mailing list for the project. I could certainly use some help \nscripting.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 11 Oct 2004 16:52:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "TestPerf Project started"
}
] |
[
{
"msg_contents": "Hello\n\nI am doing a comparison between MySQL and PostgreSQL.\n\nIn the MySQL manual it says that MySQL performs best with Linux 2.4 with\nReiserFS on x86. Can anyone official, or in the know, give similar\ninformation regarding PostgreSQL?\n\nAlso, any links to benchmarking tests available on the internet between\nMySQL and PostgreSQL would be appreciated.\n\nThank you!\n\nTim\n\n\n",
"msg_date": "Tue, 12 Oct 2004 17:45:01 +0200 (CEST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Which plattform do you recommend I run PostgreSQL for best\n\tperformance?"
},
{
"msg_contents": "\n>In the MySQL manual it says that MySQL performs best with Linux 2.4 with\n>ReiserFS on x86. Can anyone official, or in the know, give similar\n>information regarding PostgreSQL?\n> \n>\nI'm neither official, nor in the know, but I do have a spare moment! I \ncan tell you that any *NIX variant on any modern hardware platform will \ngive you good performance, except for Cygwin/x86. Any differences \nbetween OSes on the same hardware are completely swamped by far more \ndirect concerns like IO systems, database design, OS tuning etc. Pick \nthe OS you're most familiar with is usually a good recommendation (and \nnot just for Postgres).\n",
"msg_date": "Tue, 12 Oct 2004 16:47:34 +0100",
"msg_from": "Matt Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which plattform do you recommend I run PostgreSQL for"
},
{
"msg_contents": "On Tue, 12 Oct 2004 [email protected] wrote:\n\n> In the MySQL manual it says that MySQL performs best with Linux 2.4 with\n> ReiserFS on x86. Can anyone official, or in the know, give similar\n> information regarding PostgreSQL?\n\nDon't know which OS/filesystem PostgreSQL runs best on, but you should \ntest on whatever OS and filesystem you are most experienced.\n\nWhatever speed gain you may get from \"best setups\" will mean little if \nthe machine crashes and you don't know how to fix it and get it back \nup quickly.\n",
"msg_date": "Tue, 12 Oct 2004 12:27:14 -0400 (EDT)",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which plattform do you recommend I run PostgreSQL for"
},
{
"msg_contents": "In an attempt to throw the authorities off his trail, [email protected] transmitted:\n> I am doing a comparison between MySQL and PostgreSQL.\n>\n> In the MySQL manual it says that MySQL performs best with Linux 2.4 with\n> ReiserFS on x86. Can anyone official, or in the know, give similar\n> information regarding PostgreSQL?\n\nThe fastest I have ever seen PostgreSQL run is on an IBM pSeries 650\nsystem using AIX 5.1 and JFS2. There aren't many Linux systems that\nare anywhere _near_ as fast as that.\n\nThere's some indication that FreeBSD 4.9, running the Berkeley FFS\nfilesystem might be the quickest way of utilizing pedestrian IA-32\nhardware, although it is _much_ more important to have a system for\nwhich you have a competent sysadmin than it is to have some\n\"tweaked-out\" OS configuration.\n\nIn practice, competent people generally prefer to have systems that\nhum along nicely as opposed to systems that have ben \"tweaked out\"\nsuch that any little change will cause them to cave in. \n\nBenchmarks are useful in determining:\n\n a) Whether or not it is realistic to attempt a project, and\n\n b) Whether or not you have made conspicuous errors in configuring\n your systems.\n\nThey are notoriously BAD as predictive tools, as the benchmarks\nsponsored by vendors get tweaked to make the vendors' products look\ngood, as opposed to being written to be useful for prediction.\n\nSee if you see anything useful from MySQL in this regard:\n<http://www.mysql.com/it-resources/benchmarks/>\n\n> Also, any links to benchmarking tests available on the internet\n> between MySQL and PostgreSQL would be appreciated.\n\nMost database vendors have licenses that specifically forbid\npublishing benchmarks.\n-- \n(reverse (concatenate 'string \"gro.mca\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/oses.html\nDo you know where your towel is?\n",
"msg_date": "Tue, 12 Oct 2004 16:02:53 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which plattform do you recommend I run PostgreSQL for best\n\tperformance?"
},
{
"msg_contents": "Tim,\n\n> In the MySQL manual it says that MySQL performs best with Linux 2.4 with\n> ReiserFS on x86. Can anyone official, or in the know, give similar\n> information regarding PostgreSQL?\n\nPostgreSQL runs on a lot more platforms than MySQL; it's not even reasonable \nto compare some of them, like rtLinux, AIX or Cygwin. The only reasonable \ncomparative testing that's been done seems to indicate that:\nLinux 2.6 is more efficient than FreeBSD which is more efficient than Linux \n2.4 all of which are significantly more efficient than Solaris, and\nReiserFS, XFS and JFS *seem* to outperform other Linux journalling FSes.\n\nHowever, as others have said, configuration and hardware will probably make \nmore difference than your choice of OS except in extreme cases. And all of \nthe above is being further tested, particularly the filesystems.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 12 Oct 2004 13:32:13 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which plattform do you recommend I run PostgreSQL for best\n\tperformance?"
},
{
"msg_contents": "hi,\n\[email protected] wrote:\n> Hello\n> \n> I am doing a comparison between MySQL and PostgreSQL.\n> \n> In the MySQL manual it says that MySQL performs best with Linux 2.4 with\n> ReiserFS on x86. Can anyone official, or in the know, give similar\n> information regarding PostgreSQL?\n> \n> Also, any links to benchmarking tests available on the internet between\n> MySQL and PostgreSQL would be appreciated.\n\nhttp://www.potentialtech.com/wmoran/postgresql.php\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://candle.pha.pa.us/main/writings/pgsql/hw_performance/\nhttp://database.sarang.net/database/postgres/optimizing_postgresql.html\n\nC.\n",
"msg_date": "Wed, 13 Oct 2004 10:46:59 +0200",
"msg_from": "CoL <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which plattform do you recommend I run PostgreSQL for best "
},
{
"msg_contents": "Thank you.\n\nTim\n\n> hi,\n>\n> [email protected] wrote:\n>> Hello\n>>\n>> I am doing a comparison between MySQL and PostgreSQL.\n>>\n>> In the MySQL manual it says that MySQL performs best with Linux 2.4 with\n>> ReiserFS on x86. Can anyone official, or in the know, give similar\n>> information regarding PostgreSQL?\n>>\n>> Also, any links to benchmarking tests available on the internet between\n>> MySQL and PostgreSQL would be appreciated.\n>\n> http://www.potentialtech.com/wmoran/postgresql.php\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> http://candle.pha.pa.us/main/writings/pgsql/hw_performance/\n> http://database.sarang.net/database/postgres/optimizing_postgresql.html\n>\n> C.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n",
"msg_date": "Wed, 20 Oct 2004 14:54:12 +0200 (CEST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Which plattform do you recommend I run PostgreSQL "
}
] |
[
{
"msg_contents": "Hi,\n\nI sent this to general earlier but I was redirected to performance.\n\nThe query have been running ok for quite some time, but after I did a\nvacuum on the database, it's very very slow. This IN-query is only 2\nids. Before the problem that in was a subselect-query returning around\n6-7 ids. The tables included in the query are described in database.txt.\n\nstatus=# select count(id) from data;\n count\n---------\n 1577621\n(1 row)\n\nstatus=# select count(data_id) from data_values;\n count\n---------\n 9680931\n(1 row)\n\nI did run a new explain analyze on the query and found the attached\nresult. The obvious problem I see is a full index scan in\n\"idx_dv_data_id\". I tried dropping and adding the index again, thats why\nis't called \"idx_data_values_data_id\" in the dump.\n\nstatus=# EXPLAIN ANALYZE\nstatus-# SELECT\nstatus-# data.entered,\nstatus-# data.machine_id,\nstatus-# datatemplate_intervals.template_id,\nstatus-# data_values.value\nstatus-# FROM\nstatus-# data, data_values, datatemplate_intervals\nstatus-# WHERE\nstatus-# datatemplate_intervals.id = data_values.template_id AND\nstatus-# data_values.data_id = data.id AND\nstatus-# data.machine_id IN (2,3) AND\nstatus-# current_timestamp::timestamp - interval '60 seconds' <\ndata.entered;\n\n\n\nRegards,\nRobin\n\n\n-- \nRobin Ericsson <[email protected]>\nProfecta HB",
"msg_date": "Wed, 13 Oct 2004 11:21:11 +0200",
"msg_from": "Robin Ericsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "query problem"
},
{
"msg_contents": "On Wed, 2004-10-13 at 02:21, Robin Ericsson wrote:\n> Hi,\n> \n> I sent this to general earlier but I was redirected to performance.\n> \n> The query have been running ok for quite some time, but after I did a\n> vacuum on the database, it's very very slow. \n\nDid you do a VACUUM FULL ANALYZE on the database or just a VACUUM? It\nlooks like your statistics in your query are all off which ANALYZE\nshould fix.\n\n\n\n> This IN-query is only 2\n> ids. Before the problem that in was a subselect-query returning around\n> 6-7 ids. The tables included in the query are described in database.txt.\n> \n> status=# select count(id) from data;\n> count\n> ---------\n> 1577621\n> (1 row)\n> \n> status=# select count(data_id) from data_values;\n> count\n> ---------\n> 9680931\n> (1 row)\n> \n> I did run a new explain analyze on the query and found the attached\n> result. The obvious problem I see is a full index scan in\n> \"idx_dv_data_id\". I tried dropping and adding the index again, thats why\n> is't called \"idx_data_values_data_id\" in the dump.\n> \n> status=# EXPLAIN ANALYZE\n> status-# SELECT\n> status-# data.entered,\n> status-# data.machine_id,\n> status-# datatemplate_intervals.template_id,\n> status-# data_values.value\n> status-# FROM\n> status-# data, data_values, datatemplate_intervals\n> status-# WHERE\n> status-# datatemplate_intervals.id = data_values.template_id AND\n> status-# data_values.data_id = data.id AND\n> status-# data.machine_id IN (2,3) AND\n> status-# current_timestamp::timestamp - interval '60 seconds' <\n> data.entered;\n> \n> \n> \n> Regards,\n> Robin\n> \n\n",
"msg_date": "Wed, 13 Oct 2004 07:37:43 -0700",
"msg_from": "ken <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query problem"
},
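For reference, the difference ken is pointing at looks like this at the psql prompt (table names taken from the post above):

    VACUUM ANALYZE data;          -- marks dead rows reusable and refreshes statistics
    VACUUM FULL ANALYZE data;     -- also compacts the table, but takes an exclusive lock
    ANALYZE data_values;          -- statistics only, no space reclaimed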
{
"msg_contents": "Robin Ericsson <[email protected]> writes:\n> I sent this to general earlier but I was redirected to performance.\n\nActually, I think I suggested that you consult the pgsql-performance\narchives, where this type of problem has been hashed out before.\nSee for instance this thread:\nhttp://archives.postgresql.org/pgsql-performance/2004-07/msg00169.php\nparticularly\nhttp://archives.postgresql.org/pgsql-performance/2004-07/msg00175.php\nhttp://archives.postgresql.org/pgsql-performance/2004-07/msg00184.php\nhttp://archives.postgresql.org/pgsql-performance/2004-07/msg00185.php\nwhich show three different ways of getting the planner to do something\nsane with an index range bound like \"now() - interval\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Oct 2004 11:03:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query problem "
},
{
"msg_contents": "On Wed, 2004-10-13 at 11:03 -0400, Tom Lane wrote:\n> Robin Ericsson <[email protected]> writes:\n> > I sent this to general earlier but I was redirected to performance.\n> \n> Actually, I think I suggested that you consult the pgsql-performance\n> archives, where this type of problem has been hashed out before.\n> See for instance this thread:\n> http://archives.postgresql.org/pgsql-performance/2004-07/msg00169.php\n> particularly\n> http://archives.postgresql.org/pgsql-performance/2004-07/msg00175.php\n> http://archives.postgresql.org/pgsql-performance/2004-07/msg00184.php\n> http://archives.postgresql.org/pgsql-performance/2004-07/msg00185.php\n> which show three different ways of getting the planner to do something\n> sane with an index range bound like \"now() - interval\".\n\nUsing exact timestamp makes the query go back as it should in speed (see\nexplain below). However I still have the problem using a stored\nprocedure or even using the \"ago\"-example from above.\n\n\n\n\nregards,\nRobin\n\nstatus=# explain analyse\nstatus-# SELECT\nstatus-# data.entered,\nstatus-# data.machine_id,\nstatus-# datatemplate_intervals.template_id,\nstatus-# data_values.value\nstatus-# FROM\nstatus-# data, data_values, datatemplate_intervals\nstatus-# WHERE\nstatus-# datatemplate_intervals.id =\ndata_values.template_id AND\nstatus-# data_values.data_id = data.id AND\nstatus-# data.machine_id IN (SELECT machine_id FROM\nmachine_group_xref WHERE group_id = 1) AND\nstatus-# '2004-10-13 17:47:36.902062' < data.entered\nstatus-# ;\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3.09..481.28 rows=777 width=24) (actual\ntime=0.637..1.804 rows=57 loops=1)\n Hash Cond: (\"outer\".template_id = \"inner\".id)\n -> Nested Loop (cost=1.17..467.71 rows=776 width=24) (actual\ntime=0.212..1.012 rows=57 loops=1)\n -> Hash IN Join (cost=1.17..9.56 rows=146 width=16) (actual\ntime=0.165..0.265 rows=9 loops=1)\n Hash Cond: (\"outer\".machine_id = \"inner\".machine_id)\n -> Index Scan using idx_d_entered on data\n(cost=0.00..6.14 rows=159 width=16) (actual time=0.051..0.097 rows=10\nloops=1)\n Index Cond: ('2004-10-13\n17:47:36.902062'::timestamp without time zone < entered)\n -> Hash (cost=1.14..1.14 rows=11 width=4) (actual\ntime=0.076..0.076 rows=0 loops=1)\n -> Seq Scan on machine_group_xref\n(cost=0.00..1.14 rows=11 width=4) (actual time=0.017..0.054 rows=11\nloops=1)\n Filter: (group_id = 1)\n -> Index Scan using idx_data_values_data_id on data_values\n(cost=0.00..3.07 rows=5 width=16) (actual time=0.018..0.047 rows=6\nloops=9)\n Index Cond: (data_values.data_id = \"outer\".id)\n -> Hash (cost=1.74..1.74 rows=74 width=8) (actual time=0.382..0.382\nrows=0 loops=1)\n -> Seq Scan on datatemplate_intervals (cost=0.00..1.74\nrows=74 width=8) (actual time=0.024..0.248 rows=74 loops=1)\n Total runtime: 2.145 ms\n(15 rows)\n\n\n",
"msg_date": "Wed, 13 Oct 2004 18:01:10 +0200",
"msg_from": "Robin Ericsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query problem"
},
{
"msg_contents": "On Wed, 2004-10-13 at 18:01 +0200, Robin Ericsson wrote:\n> Using exact timestamp makes the query go back as it should in speed (see\n> explain below). However I still have the problem using a stored\n> procedure or even using the \"ago\"-example from above.\n\nWell, changing ago() to use timestamp without time zone it goes ok in\nthe query. This query now takes ~2ms.\n\nSELECT\n data.entered,\n data.machine_id,\n datatemplate_intervals.template_id,\n data_values.value\nFROM\n data, data_values, datatemplate_intervals\nWHERE\n datatemplate_intervals.id = data_values.template_id AND\n data_values.data_id = data.id AND\n data.machine_id IN (SELECT machine_id FROM machine_group_xref\nWHERE group_id = 1) AND\n ago('60 seconds') < data.entered\n\nUsing it in this procedure.\nselect * from get_current_machine_status('60 seconds', 1);\ntakes ~100s. Maybe there's some obvious wrong I do about it?\n\nCREATE TYPE public.mstatus_holder AS\n (entered timestamp,\n machine_id int4,\n template_id int4,\n value varchar);\nCREATE OR REPLACE FUNCTION public.get_current_machine_status(interval,\nint4)\n RETURNS SETOF mstatus_holder AS\n'\n\tSELECT\n\t\tdata.entered,\n\t\tdata.machine_id,\n\t\tdatatemplate_intervals.template_id,\n\t\tdata_values.value\n\tFROM\n\t\tdata, data_values, datatemplate_intervals\n\tWHERE\n\t\tdatatemplate_intervals.id = data_values.template_id AND\n\t\tdata_values.data_id = data.id AND\n\t\tdata.machine_id IN (SELECT machine_id FROM machine_group_xref WHERE\ngroup_id = $2) AND\n\t\tago($1) < data.entered\n'\n LANGUAGE 'sql' VOLATILE;\n\n\nRegards,\n\tRobin\n\n\n",
"msg_date": "Wed, 13 Oct 2004 18:27:20 +0200",
"msg_from": "Robin Ericsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] query problem"
},
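One likely reason the same SELECT is fast on its own but slow inside the function: within the SQL function both the interval and group_id are parameters, so the statement is planned without knowing their values and the index on "entered" may not be picked. A workaround often used with 7.4/8.0 is a plpgsql version that builds the query with EXECUTE, so it is planned with the actual values. The sketch below is untested against this schema and assumes dollar-quoting (new in 8.0); on older releases the body would need doubled single quotes instead.

    CREATE OR REPLACE FUNCTION get_current_machine_status(interval, int4)
      RETURNS SETOF mstatus_holder AS $$
    DECLARE
        r mstatus_holder;
    BEGIN
        FOR r IN EXECUTE
            'SELECT data.entered, data.machine_id, '
         || '       datatemplate_intervals.template_id, data_values.value '
         || '  FROM data, data_values, datatemplate_intervals '
         || ' WHERE datatemplate_intervals.id = data_values.template_id '
         || '   AND data_values.data_id = data.id '
         || '   AND data.machine_id IN (SELECT machine_id FROM machine_group_xref '
         || '                            WHERE group_id = ' || $2::text || ') '
         || '   AND data.entered > ' || quote_literal((now()::timestamp - $1)::text)
        LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;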
{
"msg_contents": "Sorry, this should have been going to performance.\n\n\n\nRegards,\n\tRobin\n\n",
"msg_date": "Wed, 13 Oct 2004 18:36:26 +0200",
"msg_from": "Robin Ericsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] query problem"
}
] |
[
{
"msg_contents": "All,\n\tMy company (Chariot Solutions) is sponsoring a day of free\nPostgreSQL training by Bruce Momjian (one of the core PostgreSQL\ndevelopers). The day is split into 2 sessions (plus a Q&A session):\n\n * Mastering PostgreSQL Administration\n * PostgreSQL Performance Tuning\n\n\tRegistration is required, and space is limited. The location is\nMalvern, PA (suburb of Philadelphia) and it's on Saturday Oct 30. For\nmore information or to register, see\n\nhttp://chariotsolutions.com/postgresql.jsp\n\nThanks,\n\tAaron\n",
"msg_date": "Wed, 13 Oct 2004 12:21:27 -0400 (EDT)",
"msg_from": "Aaron Mulder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "> All,\n> \tMy company (Chariot Solutions) is sponsoring a day of free\n> PostgreSQL training by Bruce Momjian (one of the core PostgreSQL\n> developers). The day is split into 2 sessions (plus a Q&A session):\n>\n> * Mastering PostgreSQL Administration\n> * PostgreSQL Performance Tuning\n>\n> \tRegistration is required, and space is limited. The location is\n> Malvern, PA (suburb of Philadelphia) and it's on Saturday Oct 30. For\n> more information or to register, see\n>\n> http://chariotsolutions.com/postgresql.jsp\n>\n> Thanks,\n> \tAaron\n\nWow, that's good stuff, too bad there's no one doing stuff like that in the\nLos Angeles area.\n\n-b\n\n",
"msg_date": "Wed, 13 Oct 2004 09:23:47 -0700",
"msg_from": "\"Bryan Encina\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "On Wed, 13 Oct 2004 09:23:47 -0700, Bryan Encina\n<[email protected]> wrote:\n>\n> Wow, that's good stuff, too bad there's no one doing stuff like that in the\n> Los Angeles area.\n>\n> -b\n\nThat makes two of us. Hanging out with Tom, Bruce, and others at OSCON\n2002 was one of the most informative and fun times I've had. That and\nI could really stand to brush up on my Postgres basics\n\n\naaron.glenn\n",
"msg_date": "Wed, 13 Oct 2004 13:10:48 -0700",
"msg_from": "Aaron Glenn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "Aaron,\n\n> That makes two of us. Hanging out with Tom, Bruce, and others at OSCON\n> 2002 was one of the most informative and fun times I've had. That and\n> I could really stand to brush up on my Postgres basics\n\nYou're thinking of Jan. Tom wasn't at OSCON. ;-)\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 13 Oct 2004 15:36:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "Josh Berkus wrote:\n> Aaron,\n> \n> > That makes two of us. Hanging out with Tom, Bruce, and others at OSCON\n> > 2002 was one of the most informative and fun times I've had. That and\n> > I could really stand to brush up on my Postgres basics\n> \n> You're thinking of Jan. Tom wasn't at OSCON. ;-)\n\nAh, but he said 2002 and I think Tom was there that year.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 13 Oct 2004 23:47:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "On 10/13/2004 11:47 PM, Bruce Momjian wrote:\n\n> Josh Berkus wrote:\n>> Aaron,\n>> \n>> > That makes two of us. Hanging out with Tom, Bruce, and others at OSCON\n>> > 2002 was one of the most informative and fun times I've had. That and\n>> > I could really stand to brush up on my Postgres basics\n>> \n>> You're thinking of Jan. Tom wasn't at OSCON. ;-)\n> \n> Ah, but he said 2002 and I think Tom was there that year.\n> \n\nAnd I wasn't, which makes it rather difficult to hang out with me.\n\nI will however be in Malvern too, since it's just around the corner for me.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Mon, 18 Oct 2004 14:44:33 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "On Wed, Oct 13, 2004 at 12:21:27PM -0400, Aaron Mulder wrote:\n> All,\n> \tMy company (Chariot Solutions) is sponsoring a day of free\n> PostgreSQL training by Bruce Momjian (one of the core PostgreSQL\n> developers). The day is split into 2 sessions (plus a Q&A session):\n> \n> * Mastering PostgreSQL Administration\n> * PostgreSQL Performance Tuning\n> \n> \tRegistration is required, and space is limited. The location is\n> Malvern, PA (suburb of Philadelphia) and it's on Saturday Oct 30. For\n> more information or to register, see\n> \n> http://chariotsolutions.com/postgresql.jsp\n\nI'm up in New York City and would be taking the train down to Philly. Is\nanyone coming from Philly or New York that would be able to give me a lift\nto/from the train station? Sounds like a great event.\n\nCheers,\n-m\n",
"msg_date": "Tue, 19 Oct 2004 14:43:29 -0400",
"msg_from": "Max Baker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "I'm driving from Tenafly NJ and going to both sessions. If you're able\nto get to the George Washington Bridge (A train to 178th Street [Port\nAuthority North] and a bus over the bridge), I can drive you down. I'm\nnot sure right now about the return because I have confused plans to\nmeet someone.\n\n/Aaron\n\n\nOn Tue, 19 Oct 2004 14:43:29 -0400, Max Baker <[email protected]> wrote:\n> On Wed, Oct 13, 2004 at 12:21:27PM -0400, Aaron Mulder wrote:\n> > All,\n> > My company (Chariot Solutions) is sponsoring a day of free\n> > PostgreSQL training by Bruce Momjian (one of the core PostgreSQL\n> > developers). The day is split into 2 sessions (plus a Q&A session):\n> >\n> > * Mastering PostgreSQL Administration\n> > * PostgreSQL Performance Tuning\n> >\n> > Registration is required, and space is limited. The location is\n> > Malvern, PA (suburb of Philadelphia) and it's on Saturday Oct 30. For\n> > more information or to register, see\n> >\n> > http://chariotsolutions.com/postgresql.jsp\n> \n> I'm up in New York City and would be taking the train down to Philly. Is\n> anyone coming from Philly or New York that would be able to give me a lift\n> to/from the train station? Sounds like a great event.\n> \n> Cheers,\n> -m\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n\nRegards,\n/Aaron\n",
"msg_date": "Wed, 20 Oct 2004 07:10:35 -0400",
"msg_from": "Aaron Werman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "\tIf anyone is going to take the train all the way, please e-mail me\noffline. There is a train station relatively close to the event (NY to\nPhilly then the R5 to Malvern), but it's not within walking distance, so\nwe'll figure out some way to pick people up from there.\n\nThanks,\n\tAaron\n\nOn Wed, 20 Oct 2004, Aaron Werman wrote:\n> I'm driving from Tenafly NJ and going to both sessions. If you're able\n> to get to the George Washington Bridge (A train to 178th Street [Port\n> Authority North] and a bus over the bridge), I can drive you down. I'm\n> not sure right now about the return because I have confused plans to\n> meet someone.\n> \n> /Aaron\n> \n> \n> On Tue, 19 Oct 2004 14:43:29 -0400, Max Baker <[email protected]> wrote:\n> > On Wed, Oct 13, 2004 at 12:21:27PM -0400, Aaron Mulder wrote:\n> > > All,\n> > > My company (Chariot Solutions) is sponsoring a day of free\n> > > PostgreSQL training by Bruce Momjian (one of the core PostgreSQL\n> > > developers). The day is split into 2 sessions (plus a Q&A session):\n> > >\n> > > * Mastering PostgreSQL Administration\n> > > * PostgreSQL Performance Tuning\n> > >\n> > > Registration is required, and space is limited. The location is\n> > > Malvern, PA (suburb of Philadelphia) and it's on Saturday Oct 30. For\n> > > more information or to register, see\n> > >\n> > > http://chariotsolutions.com/postgresql.jsp\n> > \n> > I'm up in New York City and would be taking the train down to Philly. Is\n> > anyone coming from Philly or New York that would be able to give me a lift\n> > to/from the train station? Sounds like a great event.\n> > \n> > Cheers,\n> > -m\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> > \n> \n> \n> -- \n> \n> Regards,\n> /Aaron\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n",
"msg_date": "Wed, 20 Oct 2004 09:11:25 -0400 (EDT)",
"msg_from": "Aaron Mulder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "Nobody got a plane to came from europe :-) ???\nAs a poor frenchie I will not come ...\nHave a good time\n\nAlban\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Aaron Mulder\nSent: mercredi 20 octobre 2004 15:11\nTo: [email protected]\nSubject: Re: [PERFORM] Free PostgreSQL Training, Philadelphia, Oct 30\n\n\tIf anyone is going to take the train all the way, please e-mail me\noffline. There is a train station relatively close to the event (NY to\nPhilly then the R5 to Malvern), but it's not within walking distance, so\nwe'll figure out some way to pick people up from there.\n\nThanks,\n\tAaron\n\nOn Wed, 20 Oct 2004, Aaron Werman wrote:\n> I'm driving from Tenafly NJ and going to both sessions. If you're able \n> to get to the George Washington Bridge (A train to 178th Street [Port \n> Authority North] and a bus over the bridge), I can drive you down. I'm \n> not sure right now about the return because I have confused plans to \n> meet someone.\n> \n> /Aaron\n> \n> \n> On Tue, 19 Oct 2004 14:43:29 -0400, Max Baker <[email protected]> wrote:\n> > On Wed, Oct 13, 2004 at 12:21:27PM -0400, Aaron Mulder wrote:\n> > > All,\n> > > My company (Chariot Solutions) is sponsoring a day of free \n> > > PostgreSQL training by Bruce Momjian (one of the core PostgreSQL \n> > > developers). The day is split into 2 sessions (plus a Q&A session):\n> > >\n> > > * Mastering PostgreSQL Administration\n> > > * PostgreSQL Performance Tuning\n> > >\n> > > Registration is required, and space is limited. The \n> > > location is Malvern, PA (suburb of Philadelphia) and it's on \n> > > Saturday Oct 30. For more information or to register, see\n> > >\n> > > http://chariotsolutions.com/postgresql.jsp\n> > \n> > I'm up in New York City and would be taking the train down to \n> > Philly. Is anyone coming from Philly or New York that would be able \n> > to give me a lift to/from the train station? Sounds like a great event.\n> > \n> > Cheers,\n> > -m\n> > \n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> > \n> \n> \n> --\n> \n> Regards,\n> /Aaron\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Thu, 21 Oct 2004 10:24:04 +0200",
"msg_from": "\"Alban Medici (NetCentrex)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "On Wed, 13 Oct 2004 12:21:27 -0400 (EDT)\nAaron Mulder <[email protected]> wrote:\n> All,\n> \tMy company (Chariot Solutions) is sponsoring a day of free\n> PostgreSQL training by Bruce Momjian (one of the core PostgreSQL\n> developers). The day is split into 2 sessions (plus a Q&A session):\n\nIs there anyone else from the Toronto area going down that would like to\nshare the driving? I am planning to drive down Friday morning and drive\nback Sunday. I'm not looking for expense sharing. I just don't want to\ndrive for eight hours straight.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 27 Oct 2004 07:23:46 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Free PostgreSQL Training, Philadelphia, Oct 30"
},
{
"msg_contents": "\nMany thanks to Chariot Solutions, http://chariotsolutions.com, for hosting\nBruce Momjian giving one of his PostgreSQL seminars outside of\nPhiladelphia, PA yesterday. There were about sixty folks there, one person\ndriving from Toronto and another coming from California (!).\n\nI found it very enlightening and learned some new things about PostgreSQL,\neven if I *did* doze off for a few minutes after lunch when all my energy\nwas concentrated in my stomach.\n\nBut after I got back home I dreamt about PostgreSQL all night long!!!\n\nThanks Bruce and Chariot Solutions.\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Musician's Online Database Exchange (The MODE Pages)\n http://www.TheMode.com\n ==========================================================================\n\n",
"msg_date": "Sun, 31 Oct 2004 13:38:55 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Thanks Chariot Solutions"
},
{
"msg_contents": "On Sun, 31 Oct 2004 13:38:55 -0500 (EST)\[email protected] wrote:\n> \n> Many thanks to Chariot Solutions, http://chariotsolutions.com, for\n> hosting Bruce Momjian giving one of his PostgreSQL seminars outside of\n> Philadelphia, PA yesterday. There were about sixty folks there, one\n> person driving from Toronto and another coming from California (!).\n\nSeconded. It was definitely worth the drive from Toronto.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 1 Nov 2004 07:59:49 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thanks Chariot Solutions"
}
] |
[
{
"msg_contents": "Sent this to wrong list.\n\n-------- Forwarded Message --------\nFrom: Robin Ericsson <[email protected]>\nTo: [email protected]\nSubject: Re: [GENERAL] [PERFORM] query problem\nDate: Wed, 13 Oct 2004 18:27:20 +0200\nOn Wed, 2004-10-13 at 18:01 +0200, Robin Ericsson wrote:\n> Using exact timestamp makes the query go back as it should in speed (see\n> explain below). However I still have the problem using a stored\n> procedure or even using the \"ago\"-example from above.\n\nWell, changing ago() to use timestamp without time zone it goes ok in\nthe query. This query now takes ~2ms.\n\nSELECT\n data.entered,\n data.machine_id,\n datatemplate_intervals.template_id,\n data_values.value\nFROM\n data, data_values, datatemplate_intervals\nWHERE\n datatemplate_intervals.id = data_values.template_id AND\n data_values.data_id = data.id AND\n data.machine_id IN (SELECT machine_id FROM machine_group_xref\nWHERE group_id = 1) AND\n ago('60 seconds') < data.entered\n\nUsing it in this procedure.\nselect * from get_current_machine_status('60 seconds', 1);\ntakes ~100s. Maybe there's some obvious wrong I do about it?\n\nCREATE TYPE public.mstatus_holder AS\n (entered timestamp,\n machine_id int4,\n template_id int4,\n value varchar);\nCREATE OR REPLACE FUNCTION public.get_current_machine_status(interval,\nint4)\n RETURNS SETOF mstatus_holder AS\n'\n\tSELECT\n\t\tdata.entered,\n\t\tdata.machine_id,\n\t\tdatatemplate_intervals.template_id,\n\t\tdata_values.value\n\tFROM\n\t\tdata, data_values, datatemplate_intervals\n\tWHERE\n\t\tdatatemplate_intervals.id = data_values.template_id AND\n\t\tdata_values.data_id = data.id AND\n\t\tdata.machine_id IN (SELECT machine_id FROM machine_group_xref WHERE\ngroup_id = $2) AND\n\t\tago($1) < data.entered\n'\n LANGUAGE 'sql' VOLATILE;\n\n\nRegards,\n\tRobin\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n-- \nRobin Ericsson <[email protected]>\nProfecta HB\n\n",
"msg_date": "Wed, 13 Oct 2004 18:36:50 +0200",
"msg_from": "Robin Ericsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: Re: [GENERAL] query problem]"
}
] |
[
{
"msg_contents": "Hi guys,\n\nI have some DBT-3 (decision support) results using Gavin's original\nfutex patch fix. It's on out 8-way Pentium III Xeon systems\nin our STP environment. Definitely see some overall throughput\nperformance on the tests, about 15% increase, but not change with\nrespect to the number of context switches.\n\nPerhaps it doesn't really address what's causing the incredible number\nof context switches, but still helps. I think I'm seeing what Gavin\nhas, that it seems to solves some concurrency problems on x86 platforms.\n\nHere's results without futexes:\n\thttp://khack.osdl.org/stp/298114/\n\nResults with futexes:\n\thttp://khack.osdl.org/stp/298115/\n\nMark\n",
"msg_date": "Wed, 13 Oct 2004 11:57:44 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": true,
"msg_subject": "futex results with dbt-3"
},
{
"msg_contents": "On Thu, 2004-10-14 at 04:57, Mark Wong wrote: \n> I have some DBT-3 (decision support) results using Gavin's original\n> futex patch fix.\n\nI sent an initial description of the futex patch to the mailing lists\nlast week, but it never appeared (from talking to Marc I believe it\nexceeded the size limit on -performance). In any case, the \"futex patch\"\nuses the Linux 2.6 futex API to implement PostgreSQL spinlocks. The hope\nis that using futexes will lead to better performance when there is\ncontention for spinlocks (e.g. on a busy SMP system). The original patch\nwas written by Stephen Hemminger at OSDL; Gavin and myself have done a\nbunch of additional bugfixing and optimization, as well as added IA64\nsupport.\n\nI've attached a WIP copy of the patch to this email (it supports x86,\nx86-64 (untested) and IA64 -- more architectures can be added at\nrequest). I'll post a longer writeup when I submit the patch to\n-patches.\n\n> Definitely see some overall throughput performance on the tests, about\n> 15% increase, but not change with respect to the number of context\n> switches.\n\nI'm glad to see that there is a performance improvement; in my own\ntesting on an 8-way P3 system provided by OSDL, I saw a similar\nimprovement in pgbench performance (50 concurrent clients, 1000\ntransactions each, scale factor 75; without the patch, TPS/sec was\nbetween 180 and 185, with the patch TPS/sec was between 200 and 215).\n\nAs for context switching, there was some earlier speculation that the\npatch might improve or even resolve the \"CS storm\" issue that some\npeople have experienced with SMP Xeon P4 systems. I don't think we have\nenough evidence to answer this one way or the other at this point.\n\n-Neil",
"msg_date": "Thu, 14 Oct 2004 12:30:05 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: futex results with dbt-3"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are experiencing slow performance on 8 Beta 2 Dev3 on Win32 and are\ntrying to determine why. Any info is appreciated.\n\nWe have a Web Server and a DB server both running Win2KServer with all\nservice packs and critical updates.\n\nAn ASP page on the Web Server hits the DB Server with a simple query that\nreturns 205 rows and makes the ASP page delivered to the user about 350K.\n\nOn an ethernet lan a client pc perceives just under 1 sec performance with\nthe following DB Server configuration:\n PIII 550Mhz\n 256MB RAM\n 7200 RPM HD\n cygwin\n Postgresql 7.1.3\n PGODBC 7.3.2\n\nWe set up another DB Server with 8 beta (same Web Server, same network, same\nclient pc) and now the client pc perceives response of just over 3 sec with\nthe following DB server config:\n PIII 700 Mhz\n 448MB RAM\n 7200 RPM HD\n 8 Beta 2 Dev3 on Win32 running as a service\n\nIs the speed decrease because it's a beta?\nIs the speed decrease because it's running on Win instead of cygwin?\n\nWe did not install cygwin on the new DB Server.\n\nThanks,\n\nMike\n\n\n\n",
"msg_date": "Thu, 14 Oct 2004 12:01:38 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance on Win32 vs Cygwin"
},
{
"msg_contents": "Have you looked at the 7.3 configuration file vs. the 8.0. It's\npossible that the 7.3 file is tweakled better then the 8.0. Have you\nanaylzed the tables after loading the data into 8.0\n\n\nOn Thu, 14 Oct 2004 12:01:38 -0500, [email protected]\n<[email protected]> wrote:\n> Hi,\n> \n> We are experiencing slow performance on 8 Beta 2 Dev3 on Win32 and are\n> trying to determine why. Any info is appreciated.\n> \n> We have a Web Server and a DB server both running Win2KServer with all\n> service packs and critical updates.\n> \n> An ASP page on the Web Server hits the DB Server with a simple query that\n> returns 205 rows and makes the ASP page delivered to the user about 350K.\n> \n> On an ethernet lan a client pc perceives just under 1 sec performance with\n> the following DB Server configuration:\n> PIII 550Mhz\n> 256MB RAM\n> 7200 RPM HD\n> cygwin\n> Postgresql 7.1.3\n> PGODBC 7.3.2\n> \n> We set up another DB Server with 8 beta (same Web Server, same network, same\n> client pc) and now the client pc perceives response of just over 3 sec with\n> the following DB server config:\n> PIII 700 Mhz\n> 448MB RAM\n> 7200 RPM HD\n> 8 Beta 2 Dev3 on Win32 running as a service\n> \n> Is the speed decrease because it's a beta?\n> Is the speed decrease because it's running on Win instead of cygwin?\n> \n> We did not install cygwin on the new DB Server.\n> \n> Thanks,\n> \n> Mike\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Tue, 19 Oct 2004 17:09:21 -0500",
"msg_from": "Kevin Barnard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Performance on Win32 vs Cygwin"
}
] |
[
{
"msg_contents": ">Hi,\n>\n>We are experiencing slow performance on 8 Beta 2 Dev3 on Win32 and are\n>trying to determine why. Any info is appreciated.\n>\n>We have a Web Server and a DB server both running Win2KServer with all\n>service packs and critical updates.\n>\n>An ASP page on the Web Server hits the DB Server with a simple \n>query that\n>returns 205 rows and makes the ASP page delivered to the user \n>about 350K.\n>\n>On an ethernet lan a client pc perceives just under 1 sec \n>performance with\n>the following DB Server configuration:\n> PIII 550Mhz\n> 256MB RAM\n> 7200 RPM HD\n> cygwin\n> Postgresql 7.1.3\n> PGODBC 7.3.2\n>\n>We set up another DB Server with 8 beta (same Web Server, same \n>network, same\n>client pc) and now the client pc perceives response of just \n>over 3 sec with\n>the following DB server config:\n> PIII 700 Mhz\n> 448MB RAM\n> 7200 RPM HD\n> 8 Beta 2 Dev3 on Win32 running as a service\n>\n>Is the speed decrease because it's a beta?\n>Is the speed decrease because it's running on Win instead of cygwin?\n>\n>We did not install cygwin on the new DB Server.\n\nIIRC, previous versions of postgresql (< 8.0) did not correctly sync\ndisks when running on Cygwin. I'm not 100% sure, can someone confirm?\n8.0 does, and I beleive it does both under native win32 and cygwin.\n\nIt's been my experience that the native version is slightly faster than\nthe cygwin one, but I've only compared 8.0 to 8.0.\n\n\n//Magnus\n\n",
"msg_date": "Thu, 14 Oct 2004 19:29:25 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on Win32 vs Cygwin"
},
{
"msg_contents": "Magnus Hagander schrieb:\n> IIRC, previous versions of postgresql (< 8.0) did not correctly sync\n> disks when running on Cygwin. I'm not 100% sure, can someone confirm?\n> 8.0 does, and I beleive it does both under native win32 and cygwin.\n\nyes, sync is a NOOP on cygwin.\n\n> It's been my experience that the native version is slightly faster than\n> the cygwin one, but I've only compared 8.0 to 8.0.\n\nSure. This is expected. Cygwin's interim's layer costs a lot of time. \n(process handling, path resolution)\n-- \nReini Urban\nhttp://xarch.tu-graz.ac.at/home/rurban/\n",
"msg_date": "Thu, 14 Oct 2004 21:40:59 +0200",
"msg_from": "Reini Urban <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on Win32 vs Cygwin"
},
{
"msg_contents": "Thanks Magnus,\n\nSo are we correct to rely on\n- 8 being slower than 7.x in general and\n- 8 on Win32 being a little faster than 8 on Cygwin?\n\nWill the final release of 8 be faster than the beta?\n\nThanks,\n\nMike\n\n\n----- Original Message ----- \nFrom: \"Magnus Hagander\" <[email protected]>\nTo: \"[email protected]\" <[email protected]>;\n<[email protected]>; <[email protected]>\nSent: Thursday, October 14, 2004 12:29 PM\nSubject: SV: [pgsql-hackers-win32] Performance on Win32 vs Cygwin\n\n\n>Hi,\n>\n>We are experiencing slow performance on 8 Beta 2 Dev3 on Win32 and are\n>trying to determine why. Any info is appreciated.\n>\n>We have a Web Server and a DB server both running Win2KServer with all\n>service packs and critical updates.\n>\n>An ASP page on the Web Server hits the DB Server with a simple\n>query that\n>returns 205 rows and makes the ASP page delivered to the user\n>about 350K.\n>\n>On an ethernet lan a client pc perceives just under 1 sec\n>performance with\n>the following DB Server configuration:\n> PIII 550Mhz\n> 256MB RAM\n> 7200 RPM HD\n> cygwin\n> Postgresql 7.1.3\n> PGODBC 7.3.2\n>\n>We set up another DB Server with 8 beta (same Web Server, same\n>network, same\n>client pc) and now the client pc perceives response of just\n>over 3 sec with\n>the following DB server config:\n> PIII 700 Mhz\n> 448MB RAM\n> 7200 RPM HD\n> 8 Beta 2 Dev3 on Win32 running as a service\n>\n>Is the speed decrease because it's a beta?\n>Is the speed decrease because it's running on Win instead of cygwin?\n>\n>We did not install cygwin on the new DB Server.\n\nIIRC, previous versions of postgresql (< 8.0) did not correctly sync\ndisks when running on Cygwin. I'm not 100% sure, can someone confirm?\n8.0 does, and I beleive it does both under native win32 and cygwin.\n\nIt's been my experience that the native version is slightly faster than\nthe cygwin one, but I've only compared 8.0 to 8.0.\n\n\n//Magnus\n\n",
"msg_date": "Thu, 14 Oct 2004 16:45:51 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on Win32 vs Cygwin"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> So are we correct to rely on\n> - 8 being slower than 7.x in general and\n\nI think this is a highly unlikely claim ... *especially* if you are\ncomparing against 7.1. The point about sync() being a no-op is real,\nbut offhand I think it would only come into play at checkpoints. We\nhave never issued sync() during regular queries.\n\nWhat seems more likely to me is that you have neglected to do any\nperformance tuning on the new installation. Have you vacuumed/analyzed\nall your tables? Checked the postgresql.conf settings for sanity?\n\nIf you'd like to do an apples-to-apples comparison to prove whether\n7.1's failure to sync() is relevant, then turn off fsync in the 8.0\nconfiguration and see how much difference that makes.\n\nIf you can identify specific queries that are slower in 8.0 than 7.1,\nI'd be interested to see the EXPLAIN ANALYZE details from each.\n(Actually, I'm not sure 7.1 had EXPLAIN ANALYZE; you may have to\nsettle for EXPLAIN from it.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 11:37:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on Win32 vs Cygwin "
}
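For anyone reproducing Tom's comparison, the concrete steps on the 8.0 box are roughly the following; the table and query are placeholders for whatever the ASP page actually runs, and fsync itself has to be changed in postgresql.conf (fsync = false) followed by a server restart/reload rather than with SET:

    VACUUM ANALYZE;        -- make sure the planner has statistics for every table

    -- capture the plan of the query that got slower (placeholder query shown)
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_column = 42;

    -- sanity-check a couple of settings from psql
    SHOW shared_buffers;
    SHOW fsync;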
] |
[
{
"msg_contents": "Hi all,\n\nI recently migrated my database from schema 'public' to multiple schema.\nI have around 100 tables, and divided them in 14 different schemas, and then adapted my application to use schemas as well.\nI could percept that the query / insert / update times get pretty much faster then when I was using the old unique schema, and I'd just like to confirm with you if using schemas speed up the things. Is that true ?\n\nWhat else I can do to speed up the query processing, best pratices, recommendations ... ? What about indexed views, does postgresql supports it?\n\nRegards,\nIgor\n--\[email protected]\n\n\n\n\n\n\n\nHi all,\n \nI recently migrated my database from schema \n'public' to multiple schema.\nI have around 100 tables, and divided them in \n14 different schemas, and then adapted my application to use schemas as \nwell.\nI could percept that the query / insert / \nupdate times get pretty much faster then when I was using the old unique schema, \nand I'd just like to confirm with you if using schemas speed up the things. Is \nthat true ?\n \nWhat else I can do to speed up the query \nprocessing, best pratices, recommendations ... ? What about indexed views, does \npostgresql supports it?\n \nRegards,\[email protected]",
"msg_date": "Thu, 14 Oct 2004 15:38:52 -0300",
"msg_from": "\"Igor Maciel Macaubas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance vs Schemas"
},
{
"msg_contents": "On Fri, 2004-10-15 at 04:38, Igor Maciel Macaubas wrote:\n> I have around 100 tables, and divided them in 14 different schemas,\n> and then adapted my application to use schemas as well.\n> I could percept that the query / insert / update times get pretty much\n> faster then when I was using the old unique schema, and I'd just like\n> to confirm with you if using schemas speed up the things. Is that true\n> ?\n\nSchemas are a namespacing technique; AFAIK they shouldn't significantly\naffect performance (either positively or negatively).\n\n> What about indexed views, does postgresql supports it?\n\nNo, you'll need to create indexes on the view's base tables.\n\n-Neil\n\n\n",
"msg_date": "Fri, 15 Oct 2004 10:52:35 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance vs Schemas"
},
{
"msg_contents": "Hi Igor,\n\nI expect that when you moved your tables to different schemas that you \neffectively did a physical re-organization (ie unload/reload of the tables). \nIt's nothing to do with the use of schemas as such. If you had reloaded your \ntables into the same system schema you would have experienced the same \nspeedup as the data tables would be more compact.\n\nregards\nIain\n ----- Original Message ----- \n From: Igor Maciel Macaubas\n To: [email protected]\n Sent: Friday, October 15, 2004 3:38 AM\n Subject: [PERFORM] Performance vs Schemas\n\n\n Hi all,\n\n I recently migrated my database from schema 'public' to multiple schema.\n I have around 100 tables, and divided them in 14 different schemas, and \nthen adapted my application to use schemas as well.\n I could percept that the query / insert / update times get pretty much \nfaster then when I was using the old unique schema, and I'd just like to \nconfirm with you if using schemas speed up the things. Is that true ?\n\n What else I can do to speed up the query processing, best pratices, \nrecommendations ... ? What about indexed views, does postgresql supports it?\n\n Regards,\n Igor\n --\n [email protected]\n\n\n\n\n\n\n\nHi Igor,\n \nI expect that when you moved your tables \nto different schemas that you effectively did a physical re-organization (ie \nunload/reload of the tables). It's nothing to do with the use of \nschemas as such. If you had reloaded your tables into the same system \nschema you would have experienced the same speedup as the data tables would \nbe more compact.\n \nregards\nIain\n\n----- Original Message ----- \nFrom:\nIgor \n Maciel Macaubas \nTo: [email protected]\n\nSent: Friday, October 15, 2004 \n 3:38 AM\nSubject: [PERFORM] Performance vs \n Schemas\n\nHi all,\n \nI recently migrated my database from schema \n 'public' to multiple schema.\nI have around 100 tables, and divided them in \n 14 different schemas, and then adapted my application to use schemas as \n well.\nI could percept that the query / insert / \n update times get pretty much faster then when I was using the old unique \n schema, and I'd just like to confirm with you if using schemas speed up the \n things. Is that true ?\n \nWhat else I can do to speed up the query \n processing, best pratices, recommendations ... ? What about indexed views, \n does postgresql supports it?\n \nRegards,\[email protected]",
"msg_date": "Fri, 15 Oct 2004 10:41:35 +0900",
"msg_from": "\"Iain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance vs Schemas"
}
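The physical re-organisation Iain describes can also be reproduced in place, without moving tables between schemas. A short sketch, where mytable and mytable_pkey stand in for a real table and its primary-key index:

    VACUUM FULL ANALYZE mytable;       -- compact the table and refresh statistics

    CLUSTER mytable_pkey ON mytable;   -- or rewrite it in index order
                                       -- (index-name-first syntax, as used in this era)

    REINDEX TABLE mytable;             -- rebuild indexes that have bloated over time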
] |
[
{
"msg_contents": "Igor,\n\nI'm not sure if it is proper to state that schemas are themselves speeding things up.\n\nAs an example, we have data that is usually accessed by county; when we put all of the data into one big table and select from it using a code for a county of interest, the process is fairly slow as there are several hundred thousand candidate rows from that county in a table with many millions of rows. When we broke out certain aspects of the data into schemas (one per county) the searches become very fast indeed because we can skip the searching for a specific county code with the relevant tables and there is less (unneeded) data in the table being searched. \n\nAs always, \"EXPLAIN ANALYZE ...\" is your friend in understanding what the planner is doing with a given query.\n\nSee <http://www.varlena.com/varlena/GeneralBits/Tidbits/> for some useful information, especially under the performance tips section.\n\nHTH,\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\tIgor Maciel Macaubas [mailto:[email protected]]\nSent:\tThu 10/14/2004 11:38 AM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] Performance vs Schemas\nHi all,\n\nI recently migrated my database from schema 'public' to multiple schema.\nI have around 100 tables, and divided them in 14 different schemas, and then adapted my application to use schemas as well.\nI could percept that the query / insert / update times get pretty much faster then when I was using the old unique schema, and I'd just like to confirm with you if using schemas speed up the things. Is that true ?\n\nWhat else I can do to speed up the query processing, best pratices, recommendations ... ? What about indexed views, does postgresql supports it?\n\nRegards,\nIgor\n--\[email protected]\n\n\n\n",
"msg_date": "Thu, 14 Oct 2004 11:45:10 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance vs Schemas"
},
{
"msg_contents": "Right - if you split a table to a lot of more selective tables, it can often\ndramatically change the plan options (e.g. - in a single table, selectivity\nfor a query may be 1% and require an expensive nested loop while in the more\nrestrictive table it may match 14% of the data and do a cheaper scan).\n\nAlso - don't forget that just rebuilding a database cleanly can dramatically\nimprove performance.\n\nThe only dbms I know that indexes views is MS SQL Server 2000, where it is a\nlimited form of materialized queries. pg doesn't do MQs, but check out\nfunctional indices.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Gregory S. Williamson\" <[email protected]>\nTo: \"Igor Maciel Macaubas\" <[email protected]>;\n<[email protected]>\nSent: Thursday, October 14, 2004 2:45 PM\nSubject: Re: [PERFORM] Performance vs Schemas\n\n\nIgor,\n\nI'm not sure if it is proper to state that schemas are themselves speeding\nthings up.\n\nAs an example, we have data that is usually accessed by county; when we put\nall of the data into one big table and select from it using a code for a\ncounty of interest, the process is fairly slow as there are several hundred\nthousand candidate rows from that county in a table with many millions of\nrows. When we broke out certain aspects of the data into schemas (one per\ncounty) the searches become very fast indeed because we can skip the\nsearching for a specific county code with the relevant tables and there is\nless (unneeded) data in the table being searched.\n\nAs always, \"EXPLAIN ANALYZE ...\" is your friend in understanding what the\nplanner is doing with a given query.\n\nSee <http://www.varlena.com/varlena/GeneralBits/Tidbits/> for some useful\ninformation, especially under the performance tips section.\n\nHTH,\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom: Igor Maciel Macaubas [mailto:[email protected]]\nSent: Thu 10/14/2004 11:38 AM\nTo: [email protected]\nCc:\nSubject: [PERFORM] Performance vs Schemas\nHi all,\n\nI recently migrated my database from schema 'public' to multiple schema.\nI have around 100 tables, and divided them in 14 different schemas, and then\nadapted my application to use schemas as well.\nI could percept that the query / insert / update times get pretty much\nfaster then when I was using the old unique schema, and I'd just like to\nconfirm with you if using schemas speed up the things. Is that true ?\n\nWhat else I can do to speed up the query processing, best pratices,\nrecommendations ... ? What about indexed views, does postgresql supports it?\n\nRegards,\nIgor\n--\[email protected]\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Thu, 14 Oct 2004 15:27:51 -0400",
"msg_from": "\"Aaron Werman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance vs Schemas"
}
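Aaron's pointer to functional indexes, spelled out with a hypothetical table and column (none of these names appear in the thread). PostgreSQL can index the result of an expression over a column, which covers many of the cases indexed views are used for elsewhere:

    -- index the lowercased value so case-insensitive lookups stay indexed
    CREATE INDEX items_lower_name_idx ON items (lower(item_name));

    -- a predicate written against the same expression can use the index
    SELECT * FROM items WHERE lower(item_name) = 'widget';

For genuinely precomputed result sets, the usual substitute is a summary table maintained by triggers, since PostgreSQL has no built-in materialized views.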
] |
[
{
"msg_contents": "Hi all,\n\nI'm trying to find smarter ways to dig data from my database, and have the following scenario:\n\ntable1\n-- id\n-- name\n.\n.\n.\n.\n.\n.\n\ntable2\n-- id\n-- number\n.\n.\n.\n.\n.\n.\n\nI want to create a view to give me back just what I want:\nThe id, the name and the number.\nI tought in doing the following:\ncreate view my_view as select t1.id, t1.name, t2.number from table1 as t1, table2 as t2 where t1.id = t2.id;\n\nWill this be enough fast ? Are there a faster way to make it work ?!\nThis table is mid-big, around 100K registers .. \n\nRegards,\nIgor\n--\[email protected]\n\n\n\n\n\n\n\n\n\nHi all,\n \nI'm trying to find smarter ways to dig data from \nmy database, and have the following scenario:\n \ntable1\n-- id\n-- name\n.\n.\n.\n.\n.\n.\n \ntable2\n-- id\n-- number\n.\n.\n.\n.\n.\n.\n \nI want to create a view to give me back just \nwhat I want:\nThe id, the name and the number.\nI tought in doing the following:\ncreate view my_view as select t1.id, \nt1.name, t2.number from table1 as t1, table2 as t2 where t1.id = \nt2.id;\n \nWill this be enough fast ? Are there a faster \nway to make it work ?!\nThis table is mid-big, around 100K registers .. \n\n \nRegards,\[email protected]",
"msg_date": "Thu, 14 Oct 2004 18:02:41 -0300",
"msg_from": "\"Igor Maciel Macaubas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "View & Query Performance"
},
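A sketch of what the view from the question needs underneath it to perform well, assuming id is table1's primary key and a plain foreign key in table2 (the real definitions are not shown):

    CREATE TABLE table1 (id integer PRIMARY KEY, name varchar(240));
    CREATE TABLE table2 (id integer REFERENCES table1 (id), number numeric(20,10));

    -- the referencing column is not indexed automatically
    CREATE INDEX table2_id_idx ON table2 (id);

    CREATE VIEW my_view AS
        SELECT t1.id, t1.name, t2.number
        FROM table1 t1
        JOIN table2 t2 ON t1.id = t2.id;

The view itself adds essentially no cost: selecting everything from it still reads both tables, and the indexes only start to matter once the view is filtered, as the replies below point out.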
{
"msg_contents": "Can you tell us more about the structure of your tables,\nwitch sort of index did you set on witch fields ?\n \nDid you really need to get ALL records at once, instead you may be could use\npaging (cursor or SELECT LIMIT OFFSET ) ?\n \nAnd did you well configure your .conf ?\n \nRegards\n \nAlban Médici\n\n _____ \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Igor Maciel\nMacaubas\nSent: jeudi 14 octobre 2004 23:03\nTo: [email protected]\nSubject: [PERFORM] View & Query Performance\n\n\nHi all,\n \nI'm trying to find smarter ways to dig data from my database, and have the\nfollowing scenario:\n \ntable1\n-- id\n-- name\n.\n.\n.\n.\n.\n.\n \ntable2\n-- id\n-- number\n.\n.\n.\n.\n.\n.\n \nI want to create a view to give me back just what I want:\nThe id, the name and the number.\nI tought in doing the following:\ncreate view my_view as select t1.id, t1.name, t2.number from table1 as t1,\ntable2 as t2 where t1.id = t2.id;\n \nWill this be enough fast ? Are there a faster way to make it work ?!\nThis table is mid-big, around 100K registers .. \n \nRegards,\nIgor\n--\[email protected]\n \n \n \n\n\n\n\n\n\n\n\nCan you tell us more about the structure of your \ntables,\nwitch sort of index did you set on witch fields \n?\n \nDid you really need to get ALL records at \nonce, instead you may be could use paging \n(cursor or SELECT LIMIT OFFSET ) ?\n \nAnd did you well \nconfigure your .conf ?\n \nRegards\n \nAlban \nMédici\n\n\nFrom: [email protected] \n[mailto:[email protected]] On Behalf Of Igor Maciel \nMacaubasSent: jeudi 14 octobre 2004 23:03To: \[email protected]: [PERFORM] View & Query \nPerformance\n\nHi all,\n \nI'm trying to find smarter ways to dig data from \nmy database, and have the following scenario:\n \ntable1\n-- id\n-- name\n.\n.\n.\n.\n.\n.\n \ntable2\n-- id\n-- number\n.\n.\n.\n.\n.\n.\n \nI want to create a view to give me back just \nwhat I want:\nThe id, the name and the number.\nI tought in doing the following:\ncreate view my_view as select t1.id, \nt1.name, t2.number from table1 as t1, table2 as t2 where t1.id = \nt2.id;\n \nWill this be enough fast ? Are there a faster \nway to make it work ?!\nThis table is mid-big, around 100K registers .. \n\n \nRegards,\[email protected]",
"msg_date": "Fri, 15 Oct 2004 09:28:18 +0200",
"msg_from": "\"Alban Medici (NetCentrex)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View & Query Performance"
},
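The two paging approaches Alban mentions, sketched against the my_view definition from the original post:

    -- LIMIT/OFFSET paging: simple, but the cost of skipping rows grows with the offset
    SELECT id, name, number FROM my_view ORDER BY id LIMIT 50 OFFSET 200;

    -- cursor paging inside a transaction: the backend remembers the position
    BEGIN;
    DECLARE page_cur CURSOR FOR SELECT id, name, number FROM my_view ORDER BY id;
    FETCH 50 FROM page_cur;
    FETCH 50 FROM page_cur;
    CLOSE page_cur;
    COMMIT;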
{
"msg_contents": "Igor Maciel Macaubas wrote:\n> Hi all,\n> \n> I'm trying to find smarter ways to dig data from my database, and\n> have the following scenario:\n> \n> table1 -- id -- name . . . . . .\n> \n> table2 -- id -- number . . . . . .\n> \n> I want to create a view to give me back just what I want: The id, the\n> name and the number. I tought in doing the following: create view\n> my_view as select t1.id, t1.name, t2.number from table1 as t1, table2\n> as t2 where t1.id = t2.id;\n> \n> Will this be enough fast ? Are there a faster way to make it work ?! \n> This table is mid-big, around 100K registers ..\n\nThat's as simple a way as you will find. If you apply further \nconditions, e.g.\n SELECT * FROM my_view WHERE id = 123;\nthen you should see any index on \"id\" being used.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 15 Oct 2004 08:37:06 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View & Query Performance"
}
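Richard's point is easy to check with EXPLAIN; the commented plan below is only an illustration of the shape to look for, not output captured from Igor's database:

    EXPLAIN SELECT * FROM my_view WHERE id = 123;
    -- hope to see index scans on both underlying tables, roughly:
    --   Nested Loop
    --     -> Index Scan using table1_pkey on table1 t1
    --     -> Index Scan using table2_id_idx on table2 t2

If either side comes back as a sequential scan, check that the join columns have matching types and that both tables have been ANALYZEd recently.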
] |
[
{
"msg_contents": "pg to my mind is unique in not trying to avoid OS buffering. Other\ndbmses spend a substantial effort to create a virtual OS (task\nmanagement, I/O drivers, etc.) both in code and support. Choosing mmap\nseems such a limiting an option - it adds OS dependency and limits\nkernel developer options (2G limits, global mlock serializations,\nporting problems, inability to schedule or parallelize I/O, still\nhaving to coordinate writers and readers).\n\nMore to the point, I think it is very hard to effectively coordinate\nmultithreaded I/O, and mmap seems used mostly to manage relatively\nsimple scenarios. If the I/O options are:\n- OS (which has enormous investment and is stable, but is general\npurpose with overhead)\n- pg (direct I/O would be costly and potentially destabilizing, but\nwith big possible performance rewards)\n- mmap (a feature mostly used to reduce buffer copies in less\nconcurrent apps such as image processing that has major architectural\nrisk including an order of magnitude more semaphores, but can reduce\nsome extra block copies)\nmmap doesn't look that promising.\n\n/Aaron\n\n----- Original Message ----- \nFrom: \"Kevin Brown\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, October 14, 2004 4:25 PM\nSubject: Re: [PERFORM] First set of OSDL Shared Mem scalability\nresults, some wierdness ...\n\n\n> Tom Lane wrote:\n> > Kevin Brown <[email protected]> writes:\n> > > Tom Lane wrote:\n> > >> mmap() is Right Out because it does not afford us sufficient control\n> > >> over when changes to the in-memory data will propagate to disk.\n> > \n> > > ... that's especially true if we simply cannot\n> > > have the page written to disk in a partially-modified state (something\n> > > I can easily see being an issue for the WAL -- would the same hold\n> > > true of the index/data files?).\n> > \n> > You're almost there. Remember the fundamental WAL rule: log entries\n> > must hit disk before the data changes they describe. That means that we\n> > need not only a way of forcing changes to disk (fsync) but a way of\n> > being sure that changes have *not* gone to disk yet. In the existing\n> > implementation we get that by just not issuing write() for a given page\n> > until we know that the relevant WAL log entries are fsync'd down to\n> > disk. (BTW, this is what the LSN field on every page is for: it tells\n> > the buffer manager the latest WAL offset that has to be flushed before\n> > it can safely write the page.)\n> > \n> > mmap provides msync which is comparable to fsync, but AFAICS it\n> > provides no way to prevent an in-memory change from reaching disk too\n> > soon. This would mean that WAL entries would have to be written *and\n> > flushed* before we could make the data change at all, which would\n> > convert multiple updates of a single page into a series of write-and-\n> > wait-for-WAL-fsync steps. Not good. fsync'ing WAL once per transaction\n> > is bad enough, once per atomic action is intolerable.\n> \n> Hmm...something just occurred to me about this.\n> \n> Would a hybrid approach be possible? That is, use mmap() to handle\n> reads, and use write() to handle writes?\n> \n> Any code that wishes to write to a page would have to recognize that\n> it's doing so and fetch a copy from the storage manager (or\n> something), which would look to see if the page already exists as a\n> writeable buffer. If it doesn't, it creates it by allocating the\n> memory and then copying the page from the mmap()ed area to the new\n> buffer, and returning it. 
If it does, it just returns a pointer to\n> the buffer. There would obviously have to be some bookkeeping\n> involved: the storage manager would have to know how to map a mmap()ed\n> page back to a writeable buffer and vice-versa, so that once it\n> decides to write the buffer it can determine which page in the\n> original file the buffer corresponds to (so it can do the appropriate\n> seek()).\n> \n> In a write-heavy database, you'll end up with a lot of memory copy\n> operations, but with the scheme we currently use you get that anyway\n> (it just happens in kernel code instead of user code), so I don't see\n> that as much of a loss, if any. Where you win is in a read-heavy\n> database: you end up being able to read directly from the pages in the\n> kernel's page cache and thus save a memory copy from kernel space to\n> user space, not to mention the context switch that happens due to\n> issuing the read().\n> \n> \n> Obviously you'd want to mmap() the file read-only in order to prevent\n> the issues you mention regarding an errant backend, and then reopen\n> the file read-write for the purpose of writing to it. In fact, you\n> could decouple the two: mmap() the file, then close the file -- the\n> mmap()ed region will remain mapped. Then, as long as the file remains\n> mapped, you need to open the file again only when you want to write to\n> it.\n> \n> \n> -- \n> Kevin Brown [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n-- \n\nRegards,\n/Aaron\n",
"msg_date": "Thu, 14 Oct 2004 20:25:36 -0400",
"msg_from": "Aaron Werman <[email protected]>",
"msg_from_op": true,
"msg_subject": "mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Aaron Werman wrote:\n> pg to my mind is unique in not trying to avoid OS buffering. Other\n> dbmses spend a substantial effort to create a virtual OS (task\n> management, I/O drivers, etc.) both in code and support. Choosing mmap\n> seems such a limiting an option - it adds OS dependency and limits\n> kernel developer options (2G limits, global mlock serializations,\n> porting problems, inability to schedule or parallelize I/O, still\n> having to coordinate writers and readers).\n\nI'm not sure I entirely agree with this. Whether you access a file\nvia mmap() or via read(), the end result is that you still have to\naccess it, and since PG has significant chunks of system-dependent\ncode that it heavily relies on as it is (e.g., locking mechanisms,\nshared memory), writing the I/O subsystem in a similar way doesn't\nseem to me to be that much of a stretch (especially since PG already\nhas the storage manager), though it might involve quite a bit of work.\n\nAs for parallelization of I/O, the use of mmap() for reads should\nsignficantly improve parallelization -- now instead of issuing read()\nsystem calls, possibly for the same set of blocks, all the backends\nwould essentially be examining the same data directly. The\nperformance improvements as a result of accessing the kernel's cache\npages directly instead of having it do buffer copies to process-local\nmemory should increase as concurrency goes up. But see below.\n\n> More to the point, I think it is very hard to effectively coordinate\n> multithreaded I/O, and mmap seems used mostly to manage relatively\n> simple scenarios. \n\nPG already manages and coordinates multithreaded I/O. The mechanisms\nused to coordinate writes needn't change at all. But the way reads\nare done relative to writes might have to be rethought, since an\nmmap()ed buffer always reflects what's actually in kernel space at the\ntime the buffer is accessed, while a buffer retrieved via read()\nreflects the state of the file at the time of the read(). If it's\nnecessary for the state of the buffers to be fixed at examination\ntime, then mmap() will be at best a draw, not a win.\n\n> mmap doesn't look that promising.\n\nThis ultimately depends on two things: how much time is spent copying\nbuffers around in kernel memory, and how much advantage can be gained\nby freeing up the memory used by the backends to store the\nbackend-local copies of the disk pages they use (and thus making that\nmemory available to the kernel to use for additional disk buffering).\nThe gains from the former are likely small. The gains from the latter\nare probably also small, but harder to estimate.\n\nThe use of mmap() is probably one of those optimizations that should\nbe done when there's little else left to optimize, because the\npotential gains are possibly (if not probably) relatively small and\nthe amount of work involved may be quite large.\n\n\nSo I agree -- compared with other, much lower-hanging fruit, mmap()\ndoesn't look promising.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Thu, 14 Oct 2004 18:08:41 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": ">> pg to my mind is unique in not trying to avoid OS buffering. Other\n>> dbmses spend a substantial effort to create a virtual OS (task\n>> management, I/O drivers, etc.) both in code and support. Choosing mmap\n>> seems such a limiting an option - it adds OS dependency and limits\n>> kernel developer options (2G limits, global mlock serializations,\n>> porting problems, inability to schedule or parallelize I/O, still\n>> having to coordinate writers and readers).\n\n2G limits? That must be a Linux limitation, not a limitation with \nmmap(2). On OS-X and FreeBSD it's anywhere from 4GB to ... well, \nwhatever the 64bit limit is (which is bigger than any data file in \n$PGDATA). An mlock(2) serialization problem is going to be cheaper \nthan hitting the disk in nearly all cases and should be no worse than a \ncontext switch or semaphore (what we use for the current locking \nscheme), of which PostgreSQL causes plenty of 'em because it's \nmulti-process, not multi-threaded. Coordination of data isn't \nnecessary if you mmap(2) data as a private block, which takes a \nsnapshot of the page at the time you make the mmap(2) call and gets \ncopied only when the page is written to. More on that later.\n\n> I'm not sure I entirely agree with this. Whether you access a file\n> via mmap() or via read(), the end result is that you still have to\n> access it, and since PG has significant chunks of system-dependent\n> code that it heavily relies on as it is (e.g., locking mechanisms,\n> shared memory), writing the I/O subsystem in a similar way doesn't\n> seem to me to be that much of a stretch (especially since PG already\n> has the storage manager), though it might involve quite a bit of work.\n\nObviously you have to access the file on the hard drive, but you're \nforgetting an enormous advantage of mmap(2). With a read(2) system \ncall, the program has to allocate space for the read(2), then it copies \ndata from the kernel into the allocated memory in the userland's newly \nallocated memory location. With mmap(2) there is no second copy.\n\nLet's look at what happens with a read(2) call. To read(2) data you \nhave to have a block of memory to copy data into. Assume your OS of \nchoice has a good malloc(3) implementation and it only needs to call \nbrk(2) once to extend the process's memory address after the first \nmalloc(3) call. There's your first system call, which guarantees one \ncontext switch. The second hit, a much larger hit, is the actual \nread(2) call itself, wherein the kernel has to copy the data twice: \nonce into a kernel buffer, then from the kernel buffer into the \nuserland's memory space. Yuk. Webserver's figured this out long ago \nthat read(2) is slow and evil in terms of performance. Apache uses \nmmap(2) to send static files at performance levels that don't suck and \nis actually quite fast (in terms of responsiveness, I'm not talking \nabout Apache's parallelism/concurrency performance levels... which in \n1.X aren't great).\n\nmmap(2) is a totally different animal in that you don't ever need to \nmake calls to read(2): mmap(2) is used in place of those calls (With \n#ifdef and a good abstraction, the rest of PostgreSQL wouldn't know it \nwas working with a page of mmap(2)'ed data or need to know that it is). \n Instead you mmap(2) a file descriptor and the kernel does some heavy \nlifting/optimized magic in its VM. 
The kernel reads the file \ndescriptor and places the data it reads into its buffer (exactly the \nsame as what happens with read(2)), but, instead of copying the data to \nthe userspace, mmap(2) adjusts the process's address space and maps the \naddress of the kernel buffer into the process's address space. No \ncopying necessary. The savings here are *huge*!\n\nDepending on the mmap(2) implementation, the VM may not even get a page \nfrom disk until its actually needed. So, lets say you mmap(2) a 16M \nfile. The address space picks up an extra 16M of bits that the process \n*can* use, but doesn't necessarily use. So if a user reads only ten \npages out of a 16MB file, only 10 pages (10 * getpagesize()), or \nusually 40,960K, which is 0.24% the amount of disk access (((4096 * 10) \n/ (16 *1024 * 1024)) * 100). Did I forget to mention that if the file \nis already in the kernel's buffers, there's no need for the kernel to \naccess the hard drive? Another big win for data that's hot/frequently \naccessed.\n\nThere's another large savings if the machine is doing network IO too...\n\n> As for parallelization of I/O, the use of mmap() for reads should\n> signficantly improve parallelization -- now instead of issuing read()\n> system calls, possibly for the same set of blocks, all the backends\n> would essentially be examining the same data directly. The\n> performance improvements as a result of accessing the kernel's cache\n> pages directly instead of having it do buffer copies to process-local\n> memory should increase as concurrency goes up. But see below.\n\nThat's kinda true... though not quite correct. The improvement in IO \nconcurrency comes from zero-socket-copy operations from the disk to the \nnetwork controller. If a write(2) system call is issued on a page of \nmmap(2)'ed data (and your operating system supports it, I know FreeBSD \ndoes, but don't think Linux does), then the page of data is DMA'ed by \nthe network controller and sent out without the data needing to be \ncopied into the network controller's buffer. So, instead of the CPU \ncopying data from the OS's buffer to a kernel buffer, the network card \ngrabs the chunk of data in one interrupt because of the DMA (direct \nmemory access). This is a pretty big deal for web serving, but if \nyou've got a database sending large sets of data over the network, \nassuming the network isn't the bottle neck, this results in a heafty \nperformance boost (that won't be noticed by most until they're running \nhuge, very busy installations). This optimization comes for free and \nwithout needing to add one line of code to an application once mmap(2) \nhas been added to an application.\n\n>> More to the point, I think it is very hard to effectively coordinate\n>> multithreaded I/O, and mmap seems used mostly to manage relatively\n>> simple scenarios.\n>\n> PG already manages and coordinates multithreaded I/O. The mechanisms\n> used to coordinate writes needn't change at all. But the way reads\n> are done relative to writes might have to be rethought, since an\n> mmap()ed buffer always reflects what's actually in kernel space at the\n> time the buffer is accessed, while a buffer retrieved via read()\n> reflects the state of the file at the time of the read(). If it's\n> necessary for the state of the buffers to be fixed at examination\n> time, then mmap() will be at best a draw, not a win.\n\nHere's where things can get interesting from a transaction stand point. 
\n Your statement is correct up until you make the assertion that a page \nneeds to be fixed. If you're doing a read(2) transaction, mmap(2) a \nregion and set the MAP_PRIVATE flag so the ground won't change \nunderneath you. No copying of this page is done by the kernel unless \nit gets written to. If you're doing a write(2) or are directly \nscribbling on an mmap(2)'ed page[1], you need to grab some kind of an \nexclusive lock on the page/file (mlock(2) is going to be no more \nexpensive than a semaphore, but probably less expensive). We already \ndo that with semaphores, however. So for databases that don't have \nhigh contention for the same page/file of data, there are no additional \ncopies made. When a piece of data is written, a page is duplicated \nbefore it gets scribbled on, but the application never knows this \nhappens. The next time a process mmap(2)'s a region of memory that's \nbeen written to, it'll get the updated data without any need to flush a \ncache or mark pages as dirty: the operating system does all of this for \nus (and probably faster too). mmap(2) implementations are, IMHO, more \noptimized that shared memory implementations (mmap(2) is a VM function, \nwhich gets many eyes to look it over and is always being tuned, whereas \nshared mem is a bastardized subsystem that works, but isn't integral to \nany performance areas in the kernel so it gets neglected. Just my \nobservations from the *BSD commit lists. Linux it may be different).\n\n[1] I forgot to mention earlier, you don't have to write(2) data to a \nfile if it's mmap(2)'ed, you can change the contents of an mmap(2)'ed \nregion, then msync(2) it back to disk (to ensure it gets written out) \nor let the last munmap(2) call do that for you (which would be just as \ndangerous as running without fsync... but would result in an additional \nperformance boost).\n\n>> mmap doesn't look that promising.\n>\n> This ultimately depends on two things: how much time is spent copying\n> buffers around in kernel memory, and how much advantage can be gained\n> by freeing up the memory used by the backends to store the\n> backend-local copies of the disk pages they use (and thus making that\n> memory available to the kernel to use for additional disk buffering).\n\nSomeone on IRC pointed me to some OSDL benchmarks, which broke down \nwhere time is being spent. Want to know what the most expensive part \nof PostgreSQL is? *drum roll*\n\nhttp://khack.osdl.org/stp/297960/profile/DBT_2_Profile-tick.sort\n\n3967393 total 1.7735\n2331284 default_idle 36426.3125\n825716 do_sigaction 1290.1813\n133126 __copy_from_user_ll 1040.0469\n 97780 __copy_to_user_ll 763.9062\n 43135 finish_task_switch 269.5938\n 30973 do_anonymous_page 62.4456\n 24175 scsi_request_fn 22.2197\n 23355 __do_softirq 121.6406\n 17039 __wake_up 133.1172\n 16527 __make_request 10.8730\n 9823 try_to_wake_up 13.6431\n 9525 generic_unplug_device 66.1458\n 8799 find_get_page 78.5625\n 7878 scsi_end_request 30.7734\n\nCopying data to/from userspace and signal handling!!!! Let's hear it \nfor the need for mmap(2)!!! *crowd goes wild*\n\n> The gains from the former are likely small. 
The gains from the latter\n> are probably also small, but harder to estimate.\n\nI disagree.\n\n> The use of mmap() is probably one of those optimizations that should\n> be done when there's little else left to optimize, because the\n> potential gains are possibly (if not probably) relatively small and\n> the amount of work involved may be quite large.\n\nIf system/kernel time is where most of your database spends its time, \nthen mmap(2) is a huge optimization that is very much worth pursuing. \nIt's stable (nearly all webservers use it, notably Apache), widely \ndeployed, POSIX specified (granted not all implementations are 100% \nconsistent, but that's an OS bug and mmap(2) doesn't have to be turned \non for those platforms: it's no worse than where we are now), and well \noptimized by operating system hackers. I guarantee that your operating \nsystem of choice has a faster VM and disk cache than PostgreSQL's \nuserland cache, nevermind using the OSs buffers leads to many \nperformance boosts as the OS can short-circuit common pathways that \nwould require data copying (ex: zero-socket-copy operations and copying \ndata to/from userland).\n\nmmap(2) isn't a panacea or replacement for good software design, but it \ncertainly does make IO operations vastly faster, which is what \nPostgreSQL does a lot of (hence its need for a userland cache). \nRemember, back when PostgreSQL had its architecture thunk up, mmap(2) \nhardly existed in anyone's eyes, nevermind it being widely used or a \nPOSIX function. It wasn't until Apache started using it that Operating \nSystem vendors felt the need to implement it or make it work well. Now \nit's integral to nearly all virtual memory implementations and a modern \nOS can't live without it or have it broken in any way. It would be \nlargely beneficial to PostgreSQL to heavily utilize mmap(2).\n\nA few places it should be used include:\n\n*) Storage. It is a good idea to mmap(2) all files instead of \nread(2)'ing files. mmap(2) doesn't fetch a page from disk until its \nactually needed, which is a nifty savings. Sure it causes a fault in \nthe kernel, but it won't the second time that page is accessed. \nChanges are necessary to src/backend/storage/file/, possibly \nsrc/backend/storage/freespace/ (why is it using fread(3) and not \nread(2)?), src/backend/storage/large_object/ can remain gimpy since \npeople should use BYTEA instead (IMHO), src/backend/storage/page/ \ndoesn't need changes (I don't think), src/backend/storage/smgr/ \nshouldn't need any modifications either.\n\n*) ARC. Why unmmap(2) data if you don't need to? With ARC, it's \npossible for the database to coach the operating system in what pages \nshould be persistent. ARC's a smart algorithm for handling the needs \nof a database. Instead of having a cache of pages in userland, \nPostgreSQL would have a cache of mmap(2)'ed pages. It's shared between \nprocesses, the changes are public to external programs read(2)'ing \ndata, and its quick. The needs for shared memory by the kernel drops \nto nearly nothing. The needs for mmap(2)'able space in the kernel, \nhowever, does go up. Unlike SysV shared mem, this can normally be \nchanged on the fly. The end result would be, if a page is needed, it \nchecks to see if its in the cache. If it is, the mmap(2)'ed page is \nreturned. If it isn't, the page gets read(2)/mmap(2) like it currently \nis loaded (except in the mmap(2) case where after the data has been \nloaded, the page gets munmap(2)'ed). 
If ARC decides to keep the page, \nthe page doesn't get munmap(2)'ed. I don't think any changes need to \nbe made though to take advantage of mmap(2) if the changes are made in \nthe places mentioned above in the Storage point.\n\n\nA few other perks:\n\n*) DIRECTIO can be used without much of a cache coherency headache \nsince the cache of data is in the kernel, not userland.\n\n*) NFS. I'm not suggesting multiple clients use the same data \ndirectory via NFS (unless read only), but if there were a single client \naccessing a data directory over NFS, performance would be much better \nthan it is today because data consistency is handled by the kernel so \nin flight packets for writes that get dropped or lost won't cause a \nslow down (mmap(2) behaves differently with NFS pages) or corruption.\n\n*) mmap(2) is conditional on the operating system's abilities, but \ndoesn't require any architectural changes. It does change the location \nof the cache, from being in the userland, down in to the kernel. This \nis a change for database administrators, but a good one, IMHO. \nPreviously, the operating system would be split 25% kernel, 75% user \nbecause PostgreSQL would need the available RAM for its cache. Now, \nthat can be moved closer to the opposite, 75% kernel, 25% user because \nmost of the memory is mmap(2)'ed pages instead of actual memory in the \nuserland.\n\n*) Pages can be protected via PROT_(EXEC|READ|WRITE). For backends \nthat aren't making changes to the DDL or system catalogs (permissions, \netc.), pages that are loaded from the catalogs could be loaded with the \nprotection PROT_READ, which would prevent changes to the catalogs. All \nDDL and permission altering commands (anything that touches the system \ncatalogs) would then load the page with the PROT_WRITE bit set, make \ntheir changes, then PROT_READ the page again. This would provide a \nfirst line of defense against buggy programs or exploits.\n\n*) Eliminates the double caching done currently (caching in PostgreSQL \nand the kernel) by pushing the cache into the kernel... but without \nPostgreSQL knowing it's working on a page that's in the kernel.\n\nPlease ask questions if you have them.\n\n-sc\n\n-- \nSean Chittenden\n\n",
"msg_date": "Fri, 15 Oct 2004 13:09:01 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "On Fri, Oct 15, 2004 at 01:09:01PM -0700, Sean Chittenden wrote:\n[snip]\n> >\n> > This ultimately depends on two things: how much time is spent copying\n> > buffers around in kernel memory, and how much advantage can be gained\n> > by freeing up the memory used by the backends to store the\n> > backend-local copies of the disk pages they use (and thus making that\n> > memory available to the kernel to use for additional disk buffering).\n> \n> Someone on IRC pointed me to some OSDL benchmarks, which broke down \n> where time is being spent. Want to know what the most expensive part \n> of PostgreSQL is? *drum roll*\n> \n> http://khack.osdl.org/stp/297960/profile/DBT_2_Profile-tick.sort\n> \n> 3967393 total 1.7735\n> 2331284 default_idle 36426.3125\n> 825716 do_sigaction 1290.1813\n> 133126 __copy_from_user_ll 1040.0469\n> 97780 __copy_to_user_ll 763.9062\n> 43135 finish_task_switch 269.5938\n> 30973 do_anonymous_page 62.4456\n> 24175 scsi_request_fn 22.2197\n> 23355 __do_softirq 121.6406\n> 17039 __wake_up 133.1172\n> 16527 __make_request 10.8730\n> 9823 try_to_wake_up 13.6431\n> 9525 generic_unplug_device 66.1458\n> 8799 find_get_page 78.5625\n> 7878 scsi_end_request 30.7734\n> \n> Copying data to/from userspace and signal handling!!!! Let's hear it \n> for the need for mmap(2)!!! *crowd goes wild*\n> \n[snip]\n\nI know where the do_sigaction is coming from in this particular case.\nManfred Spraul tracked it to a pair of pgsignal calls in libpq.\nCommenting out those two calls out virtually eliminates do_sigaction from\nthe kernel profile for this workload. I've lost track of the discussion\nover the past year, but I heard a rumor that it was finally addressed to\nsome degree. I did understand it touched on a lot of other things, but\ncan anyone summarize where that discussion has gone?\n\nMark\n",
"msg_date": "Fri, 15 Oct 2004 13:39:47 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Sean Chittenden <[email protected]> writes:\n> Coordination of data isn't \n> necessary if you mmap(2) data as a private block, which takes a \n> snapshot of the page at the time you make the mmap(2) call and gets \n> copied only when the page is written to. More on that later.\n\nWe cannot move to a model where different backends have different\nviews of the same page, which seems to me to be inherent in the idea of\nusing MAP_PRIVATE for anything. To take just one example, a backend\nthat had mapped one btree index page some time ago could get completely\nconfused if that page splits, because it might see the effects of the\nsplit in nearby index pages but not in the one that was split. Or it\ncould follow an index link to a heap entry that isn't there anymore,\nor miss an entry it should have seen. MVCC doesn't save you from this\nbecause btree adjustments happen below the level of transactions.\n\nHowever the really major difficulty with using mmap is that it breaks\nthe scheme we are currently using for WAL, because you don't have any\nway to restrict how soon a change in an mmap'd page will go to disk.\n(No, I don't believe that mlock guarantees this. It says that the\npage will not be removed from main memory; it does not specify that,\nsay, the syncer won't write the contents out anyway.)\n\n> Let's look at what happens with a read(2) call. To read(2) data you \n> have to have a block of memory to copy data into. Assume your OS of \n> choice has a good malloc(3) implementation and it only needs to call \n> brk(2) once to extend the process's memory address after the first \n> malloc(3) call. There's your first system call, which guarantees one \n> context switch.\n\nWrong. Our reads occur into shared memory allocated at postmaster\nstartup, remember?\n\n> mmap(2) is a totally different animal in that you don't ever need to \n> make calls to read(2): mmap(2) is used in place of those calls (With \n> #ifdef and a good abstraction, the rest of PostgreSQL wouldn't know it \n> was working with a page of mmap(2)'ed data or need to know that it is). \n\nInstead, you have to worry about address space management and keeping a\nconsistent view of the data.\n\n> ... If a write(2) system call is issued on a page of \n> mmap(2)'ed data (and your operating system supports it, I know FreeBSD \n> does, but don't think Linux does), then the page of data is DMA'ed by \n> the network controller and sent out without the data needing to be \n> copied into the network controller's buffer.\n\nPerfectly irrelevant to Postgres, since there is no situation where we'd\never write directly from a disk buffer to a socket; in the present\nimplementation there are at least two levels of copy needed in between\n(datatype-specific output function and protocol message assembly). And\nthat's not even counting the fact that any data item large enough to\nmake the savings interesting would have been sliced, diced, and\ncompressed by TOAST.\n\n> ... If you're doing a write(2) or are directly \n> scribbling on an mmap(2)'ed page[1], you need to grab some kind of an \n> exclusive lock on the page/file (mlock(2) is going to be no more \n> expensive than a semaphore, but probably less expensive).\n\nMore incorrect information. The locking involved here is done by\nLWLockAcquire, which is significantly *less* expensive than a kernel\ncall in the case where there is no need to block. 
(If you have to\nblock, any kernel call to do so is probably about as bad as any other.)\nSwitching over to mlock would likely make things considerably slower.\nIn any case, you didn't actually mean to say mlock did you? It doesn't\nlock pages against writes by other processes AFAICS.\n\n> shared mem is a bastardized subsystem that works, but isn't integral to \n> any performance areas in the kernel so it gets neglected.\n\nWhat performance issues do you think shared memory needs to have fixed?\nWe don't issue any shmem kernel calls after the initial shmget, so\ncomparing the level of kernel tenseness about shmget to the level of\ntenseness about mmap is simply irrelevant. Perhaps the reason you don't\nsee any traffic about this on the kernel lists is that shared memory\nalready works fine and doesn't need any fixing.\n\n> Please ask questions if you have them.\n\nDo you have any arguments that are actually convincing? What I just\nread was a proposal to essentially throw away not only the entire\nlow-level data access model, but the entire low-level locking model,\nand start from scratch. There is no possible way we could support both\nthis approach and the current one, which means that we'd be permanently\ndropping support for all platforms without high-quality mmap\nimplementations; and despite your enthusiasm I don't think that that\ncategory includes every interesting platform. Furthermore, you didn't\ngive any really convincing reasons to think that the enormous effort\ninvolved would be repaid. Those oprofile reports Josh just put up\nshowed 3% of the CPU time going into userspace/kernelspace copying.\nEven assuming that that number consists entirely of reads and writes of\nshared buffers (and of course no other kernel call ever transfers any\ndata across that boundary ;-)), there's no way we are going to buy into\nthis sort of project in hopes of a 3% win.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 17:22:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Mark Wong <[email protected]> writes:\n> I know where the do_sigaction is coming from in this particular case.\n> Manfred Spraul tracked it to a pair of pgsignal calls in libpq.\n> Commenting out those two calls out virtually eliminates do_sigaction from\n> the kernel profile for this workload.\n\nHmm, I suppose those are the ones associated with suppressing SIGPIPE\nduring send(). It looks to me like those should go away in 8.0 if you\nhave compiled with ENABLE_THREAD_SAFETY ... exactly how is PG being\nbuilt in the current round of tests?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 17:37:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "On Fri, Oct 15, 2004 at 05:37:50PM -0400, Tom Lane wrote:\n> Mark Wong <[email protected]> writes:\n> > I know where the do_sigaction is coming from in this particular case.\n> > Manfred Spraul tracked it to a pair of pgsignal calls in libpq.\n> > Commenting out those two calls out virtually eliminates do_sigaction from\n> > the kernel profile for this workload.\n> \n> Hmm, I suppose those are the ones associated with suppressing SIGPIPE\n> during send(). It looks to me like those should go away in 8.0 if you\n> have compiled with ENABLE_THREAD_SAFETY ... exactly how is PG being\n> built in the current round of tests?\n> \n\nAh, yes. Ok. It's not being configured with any options. That'll be easy to\nrememdy though. I'll get that change made and we can try again.\n\nMark\n",
"msg_date": "Fri, 15 Oct 2004 14:56:30 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Tom Lane wrote:\n> Mark Wong <[email protected]> writes:\n> > I know where the do_sigaction is coming from in this particular case.\n> > Manfred Spraul tracked it to a pair of pgsignal calls in libpq.\n> > Commenting out those two calls out virtually eliminates do_sigaction from\n> > the kernel profile for this workload.\n> \n> Hmm, I suppose those are the ones associated with suppressing SIGPIPE\n> during send(). It looks to me like those should go away in 8.0 if you\n> have compiled with ENABLE_THREAD_SAFETY ... exactly how is PG being\n> built in the current round of tests?\n\nYes, those calls are gone in 8.0 with --enable-thread-safety and were\nadded specifically because of Manfred's reports.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 15 Oct 2004 21:22:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,"
},
{
"msg_contents": "On Fri, Oct 15, 2004 at 09:22:03PM -0400, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Mark Wong <[email protected]> writes:\n> > > I know where the do_sigaction is coming from in this particular case.\n> > > Manfred Spraul tracked it to a pair of pgsignal calls in libpq.\n> > > Commenting out those two calls out virtually eliminates do_sigaction from\n> > > the kernel profile for this workload.\n> > \n> > Hmm, I suppose those are the ones associated with suppressing SIGPIPE\n> > during send(). It looks to me like those should go away in 8.0 if you\n> > have compiled with ENABLE_THREAD_SAFETY ... exactly how is PG being\n> > built in the current round of tests?\n> \n> Yes, those calls are gone in 8.0 with --enable-thread-safety and were\n> added specifically because of Manfred's reports.\n> \n\nOk, I had the build commands changed for installing PostgreSQL in STP.\nThe do_sigaction call isn't at the top of the profile anymore, here's\na reference for those who are interested; it should have the same test\nparameters as the one Tom referenced a little earlier:\n\thttp://khack.osdl.org/stp/298230/\n\nMark\n",
"msg_date": "Mon, 18 Oct 2004 08:28:10 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "> However the really major difficulty with using mmap is that it breaks\n> the scheme we are currently using for WAL, because you don't have any\n> way to restrict how soon a change in an mmap'd page will go to disk.\n> (No, I don't believe that mlock guarantees this. It says that the\n> page will not be removed from main memory; it does not specify that,\n> say, the syncer won't write the contents out anyway.)\n\nI had to think about this for a minute (now nearly a week) and reread \nthe docs on WAL before I groked what could happen here. You're \nabsolutely right in that WAL needs to be taken into account first. How \ndoes this execution path sound to you?\n\nBy default, all mmap(2)'ed pages are MAP_SHARED. There are no \ncomplications with regards to reads.\n\nWhen a backend wishes to write a page, the following steps are taken:\n\n1) Backend grabs a lock from the lockmgr to write to the page (exactly \nas it does now)\n\n2) Backend mmap(2)'s a second copy of the page(s) being written to, \nthis time with the MAP_PRIVATE flag set. Mapping a copy of the page \nagain is wasteful in terms of address space, but does not require any \nmore memory than our current scheme. The re-mapping of the page with \nMAP_PRIVATE prevents changes to the data that other backends are \nviewing.\n\n3) The writing backend, can then scribble on its private copy of the \npage(s) as it sees fit.\n\n4) Once completed making changes and a transaction is to be committed, \nthe backend WAL logs its changes.\n\n5) Once the WAL logging is complete and it has hit the disk, the \nbackend msync(2)'s its private copy of the pages to disk (ASYNC or \nSYNC, it doesn't really matter too much to me).\n\n6) Optional(?). I'm not sure whether or not the backend would need to \nalso issues an msync(2) MS_INVALIDATE, but, I suspect it would not need \nto on systems with unified buffer caches such as FreeBSD or OS-X. On \nHPUX, or other older *NIX'es, it may be necessary. *shrug* I could be \ntrying to be overly protective here.\n\n7) Backend munmap(2)'s its private copy of the written on page(s).\n\n8) Backend releases its lock from the lockmgr.\n\nAt this point, the remaining backends now are able to see the updated \npages of data.\n\n>> Let's look at what happens with a read(2) call. To read(2) data you\n>> have to have a block of memory to copy data into. Assume your OS of\n>> choice has a good malloc(3) implementation and it only needs to call\n>> brk(2) once to extend the process's memory address after the first\n>> malloc(3) call. There's your first system call, which guarantees one\n>> context switch.\n>\n> Wrong. Our reads occur into shared memory allocated at postmaster\n> startup, remember?\n\nDoh. Fair enough. In most programs that involve read(2), a call to \nalloc(3) needs to be made.\n\n>> mmap(2) is a totally different animal in that you don't ever need to\n>> make calls to read(2): mmap(2) is used in place of those calls (With\n>> #ifdef and a good abstraction, the rest of PostgreSQL wouldn't know it\n>> was working with a page of mmap(2)'ed data or need to know that it \n>> is).\n>\n> Instead, you have to worry about address space management and keeping a\n> consistent view of the data.\n\nWhich is largely handled by mmap() and the VM.\n\n>> ... 
If a write(2) system call is issued on a page of\n>> mmap(2)'ed data (and your operating system supports it, I know FreeBSD\n>> does, but don't think Linux does), then the page of data is DMA'ed by\n>> the network controller and sent out without the data needing to be\n>> copied into the network controller's buffer.\n>\n> Perfectly irrelevant to Postgres, since there is no situation where \n> we'd\n> ever write directly from a disk buffer to a socket; in the present\n> implementation there are at least two levels of copy needed in between\n> (datatype-specific output function and protocol message assembly). And\n> that's not even counting the fact that any data item large enough to\n> make the savings interesting would have been sliced, diced, and\n> compressed by TOAST.\n\nThe biggest winners will be columns whos storage type is PLAIN or \nEXTERNAL. writev(2) from mmap(2)'ed pages and non-mmap(2)'ed pages \nwould be a nice perk too (not sure if PostgreSQL uses this or not). \nSince compression isn't happening on most tuples under 1K in size and \nmost tuples in a database are going to be under that, most tuples are \ngoing to be uncompressed. Total pages for the database, however, is \nlikely a different story. For large tuples that are uncompressed and \nlarger than a page, it is probably beneficial to use sendfile(2) \ninstead of mmap(2) + write(2)'ing the page/file.\n\nIf a large tuple is compressed, it'd be interesting to see if it'd be \nworthwhile to have the data uncompressed onto an anonymously mmap(2)'ed \npage(s) that way the benefits of zero-socket-copies could be used.\n\n>> shared mem is a bastardized subsystem that works, but isn't integral \n>> to\n>> any performance areas in the kernel so it gets neglected.\n>\n> What performance issues do you think shared memory needs to have fixed?\n> We don't issue any shmem kernel calls after the initial shmget, so\n> comparing the level of kernel tenseness about shmget to the level of\n> tenseness about mmap is simply irrelevant. Perhaps the reason you \n> don't\n> see any traffic about this on the kernel lists is that shared memory\n> already works fine and doesn't need any fixing.\n\nI'm gunna get flamed for this, but I think its improperly used as a \nsecond level cache on top of the operating system's cache. mmap(2) \nwould consolidate all caching into the kernel.\n\n>> Please ask questions if you have them.\n>\n> Do you have any arguments that are actually convincing?\n\nThree things come to mind.\n\n1) A single cache for pages\n2) Ability to give access hints to the kernel regarding future IO\n3) On the fly memory use for a cache. There would be no need to \npreallocate slabs of shared memory on startup.\n\nAnd a more minor point would be:\n\n4) Not having shared pages get lost when the backend dies (mmap(2) uses \nrefcounts and cleans itself up, no need for ipcs/ipcrm/ipcclean). 
This \nisn't too practical in production though, but it sucks doing PostgreSQL \ndevelopment on OS-X because there is no ipcs/ipcrm command.\n\n> What I just read was a proposal to essentially throw away not only the \n> entire\n> low-level data access model, but the entire low-level locking model,\n> and start from scratch.\n\n From the above list, steps 2, 3, 5, 6, and 7 would be different than \nour current approach, all of which could be safely handled with some \n#ifdef's on platforms that don't have mmap(2).\n\n> There is no possible way we could support both\n> this approach and the current one, which means that we'd be permanently\n> dropping support for all platforms without high-quality mmap\n> implementations;\n\nArchitecturally, I don't see anything different or incompatibilities \nthat aren't solved with an #ifdef USE_MMAP/#else/#endif.\n\n> Furthermore, you didn't\n> give any really convincing reasons to think that the enormous effort\n> involved would be repaid.\n\nSteven's has a great reimplementaion of cat(1) that uses mmap(1) and \nbenchmarks the two. I did my own version of that here:\n\nhttp://people.freebsd.org/~seanc/mmap_test/\n\nWhen read(2)'ing/write(2)'ing /etc/services 100,000 times without \nmmap(2), it takes 82 seconds. With mmap(2), it takes anywhere from 1.1 \nto 18 seconds. Worst case scenario with mmap(2) yields a speedup by a \nfactor of four. Best case scenario... *shrug* something better than \n4x. I doubt PostgreSQL would see 4x speedups in the IO department, but \nI do think it would be vastly greater than the 3% suggested.\n\n> Those oprofile reports Josh just put up\n> showed 3% of the CPU time going into userspace/kernelspace copying.\n> Even assuming that that number consists entirely of reads and writes of\n> shared buffers (and of course no other kernel call ever transfers any\n> data across that boundary ;-)), there's no way we are going to buy into\n> this sort of project in hopes of a 3% win.\n\nWould it be helpful if I created a test program that demonstrated that \nthe execution path for writing mmap(2)'ed pages as outlined above?\n\n-sc\n\n-- \nSean Chittenden\n\n",
"msg_date": "Thu, 21 Oct 2004 13:29:34 -0700",
"msg_from": "Sean Chittenden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
},
{
"msg_contents": "Sean Chittenden <[email protected]> writes:\n> When a backend wishes to write a page, the following steps are taken:\n> ...\n> 2) Backend mmap(2)'s a second copy of the page(s) being written to, \n> this time with the MAP_PRIVATE flag set.\n> ...\n> 5) Once the WAL logging is complete and it has hit the disk, the \n> backend msync(2)'s its private copy of the pages to disk (ASYNC or \n> SYNC, it doesn't really matter too much to me).\n\nMy man page for mmap says that changes in a MAP_PRIVATE region are\nprivate; they do not affect the file at all, msync or no. So I don't\nthink the above actually works.\n\nIn any case, this scheme still forces you to flush WAL records to disk\nbefore making the changed page visible to other backends, so I don't\nsee how it improves the situation. In the existing scheme we only have\nto fsync WAL at (1) transaction commit, (2) when we are forced to write\na page out from shared buffers because we are short of buffers, or (3)\ncheckpoint. Anything that implies an fsync per atomic action is going\nto be a loser. It does not matter how great your kernel API is if you\nonly get to perform one atomic action per disk rotation :-(\n\nThe important point here is that you can't postpone making changes at\nthe page level visible to other backends; there's no MVCC at this level.\nConsider for example two backends wanting to insert a new row. If they\nboth MAP_PRIVATE the same page, they'll probably choose the same tuple\nslot on the page to insert into (certainly there is nothing to stop that\nfrom happening). Now you have conflicting definitions for the same\nCTID, not to mention probably conflicting uses of the page's physical\nfree space; disaster ensues. So \"atomic action\" really means \"lock\npage, make changes, add WAL record to in-memory WAL buffers, unlock\npage\" with the understanding that as soon as you unlock the page the\nchanges you've made in it are visible to all other backends. You\n*can't* afford to put a WAL fsync in this sequence.\n\nYou could possibly buy back most of the lossage in this scenario if\nthere were some efficient way for a backend to hold the low-level lock\non a page just until some other backend wanted to modify the page;\nwhereupon the previous owner would have to do what's needed to make his\nchanges visible before releasing the lock. Given the right access\npatterns you don't have to fsync very often (though given the wrong\naccess patterns you're still in deep trouble). But we don't have any\nsuch mechanism and I think the communication costs of one would be\nforbidding.\n\n> [ much snipped ]\n> 4) Not having shared pages get lost when the backend dies (mmap(2) uses \n> refcounts and cleans itself up, no need for ipcs/ipcrm/ipcclean).\n\nActually, that is not a bug that's a feature. One of the things that\nscares me about mmap is that a crashing backend is able to scribble all\nover live disk buffers before it finally SEGV's (think about memcpy gone\nwrong and similar cases). In our existing scheme there's a pretty good\nchance that we will be able to commit hara-kiri before any of the\ntrashed data gets written out. In an mmap scheme, it's time to dig out\nyour backup tapes, because there simply is no distinction between\ntransient and permanent data --- the kernel has no way to know that you\ndidn't mean it.\n\nIn short, I remain entirely unconvinced that mmap is of any interest to us.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Oct 2004 00:12:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mmap (was First set of OSDL Shared Mem scalability results,\n\tsome wierdness ..."
}
] |
[
{
"msg_contents": "Hi,\n\nwe are working on a product which was originally developed against an Oracle \ndatabase and which should be changed to also work with postgres. \n\nOverall the changes we had to make are very small and we are very pleased with \nthe good performance of postgres - but we also found queries which execute \nmuch faster on Oracle. Since I am not yet familiar with tuning queries for \npostgres, it would be great if someone could give me a hint on the following \ntwo issues. (We are using PG 8.0.0beta3 on Linux kernel 2.4.27):\n\n1/ The following query takes about 5 sec. with postrgres whereas on Oracle it \nexecutes in about 30 ms (although both tables only contain 200 k records in \nthe postgres version).\n\nSQL:\n\nSELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION \n\tFROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con \n\tWHERE cmp.BARCODE=con.BARCODE \n\t\tAND cmp.WELL_INDEX=con.WELL_INDEX \n\t\tAND cmp.MAT_ID=con.MAT_ID \n\t\tAND cmp.MAT_ID = 3 \n\t\tAND cmp.BARCODE='910125864' \n\t\tAND cmp.ID_LEVEL = 1;\n\nTable-def:\n Table \"public.scr_well_compound\"\n Column | Type | Modifiers\n------------+------------------------+-----------\n mat_id | numeric(10,0) | not null\n barcode | character varying(240) | not null\n well_index | numeric(5,0) | not null\n id_level | numeric(3,0) | not null\n compound | character varying(240) | not null\nIndexes:\n \"scr_wcm_pk\" PRIMARY KEY, btree (id_level, mat_id, barcode, well_index)\nForeign-key constraints:\n \"scr_wcm_mat_fk\" FOREIGN KEY (mat_id) REFERENCES scr_mapping_table(mat_id) \nON DELETE CASCADE\n\n Table \"public.scr_well_concentration\"\n Column | Type | Modifiers\n---------------+------------------------+-----------\n mat_id | numeric(10,0) | not null\n barcode | character varying(240) | not null\n well_index | numeric(5,0) | not null\n concentration | numeric(20,10) | not null\nIndexes:\n \"scr_wco_pk\" PRIMARY KEY, btree (mat_id, barcode, well_index)\nForeign-key constraints:\n \"scr_wco_mat_fk\" FOREIGN KEY (mat_id) REFERENCES scr_mapping_table(mat_id) \nON DELETE CASCADE\n\nI tried several variants of the query (including the SQL 92 JOIN ON syntax) \nbut with no success. I have also rebuilt the underlying indices.\n\nA strange observation is that the same query runs pretty fast without the \nrestriction to a certain MAT_ID, i. e. omitting the MAT_ID=3 part.\n\nAlso fetching the data for both tables separately is pretty fast and a \npossible fallback would be to do the actual join in the application (which is \nof course not as beautiful as doing it using SQL ;-)\n\n2/ Batch-inserts using jdbc (maybe this should go to the jdbc-mailing list - \nbut it is also performance related ...):\nPerforming many inserts using a PreparedStatement and batch execution makes a \nsignificant performance improvement in Oracle. In postgres, I did not observe \nany performance improvement using batch execution. Are there any special \ncaveats when using batch execution with postgres?\n\nThanks and regards\n\nBernd\n\n\n\n",
"msg_date": "Fri, 15 Oct 2004 12:25:26 +0200",
"msg_from": "Bernd <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select with qualified join condition / Batch inserts"
},
{
"msg_contents": "> SELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION \n> \tFROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con \n> \tWHERE cmp.BARCODE=con.BARCODE \n> \t\tAND cmp.WELL_INDEX=con.WELL_INDEX \n> \t\tAND cmp.MAT_ID=con.MAT_ID \n> \t\tAND cmp.MAT_ID = 3 \n> \t\tAND cmp.BARCODE='910125864' \n> \t\tAND cmp.ID_LEVEL = 1;\n\nQuick guess - type mismatch forcing sequential scan. Try some quotes:\n \t\tAND cmp.MAT_ID = '3' \n \t\tAND cmp.BARCODE='910125864' \n \t\tAND cmp.ID_LEVEL = '1';\n\nM\n\n",
"msg_date": "Fri, 15 Oct 2004 11:36:35 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select with qualified join condition / Batch inserts"
},
{
"msg_contents": "On Fri, 15 Oct 2004, Bernd wrote:\n\n> Hi,\n>\n> we are working on a product which was originally developed against an Oracle\n> database and which should be changed to also work with postgres.\n>\n> Overall the changes we had to make are very small and we are very pleased with\n> the good performance of postgres - but we also found queries which execute\n> much faster on Oracle. Since I am not yet familiar with tuning queries for\n> postgres, it would be great if someone could give me a hint on the following\n> two issues. (We are using PG 8.0.0beta3 on Linux kernel 2.4.27):\n>\n> 1/ The following query takes about 5 sec. with postrgres whereas on Oracle it\n> executes in about 30 ms (although both tables only contain 200 k records in\n> the postgres version).\n>\n> SQL:\n>\n> SELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION\n> \tFROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con\n> \tWHERE cmp.BARCODE=con.BARCODE\n> \t\tAND cmp.WELL_INDEX=con.WELL_INDEX\n> \t\tAND cmp.MAT_ID=con.MAT_ID\n> \t\tAND cmp.MAT_ID = 3\n> \t\tAND cmp.BARCODE='910125864'\n> \t\tAND cmp.ID_LEVEL = 1;\n>\n> Table-def:\n> Table \"public.scr_well_compound\"\n> Column | Type | Modifiers\n> ------------+------------------------+-----------\n> mat_id | numeric(10,0) | not null\n> barcode | character varying(240) | not null\n> well_index | numeric(5,0) | not null\n> id_level | numeric(3,0) | not null\n> compound | character varying(240) | not null\n> Indexes:\n> \"scr_wcm_pk\" PRIMARY KEY, btree (id_level, mat_id, barcode, well_index)\n\nI presume you've VACUUM FULL'd and ANALYZE'd? Can we also see a plan?\nEXPLAIN ANALYZE <query>.\nhttp://www.postgresql.org/docs/7.4/static/sql-explain.html.\n\nYou may need to create indexes with other primary columns. Ie, on mat_id\nor barcode.\n\n\n> 2/ Batch-inserts using jdbc (maybe this should go to the jdbc-mailing list -\n> but it is also performance related ...):\n> Performing many inserts using a PreparedStatement and batch execution makes a\n> significant performance improvement in Oracle. In postgres, I did not observe\n> any performance improvement using batch execution. Are there any special\n> caveats when using batch execution with postgres?\n\nThe JDBC people should be able to help with that.\n\nGavin\n",
"msg_date": "Fri, 15 Oct 2004 20:47:56 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select with qualified join condition / Batch inserts"
},
{
"msg_contents": "Bernd <[email protected]> writes:\n> 1/ The following query takes about 5 sec. with postrgres whereas on Oracle it\n> executes in about 30 ms (although both tables only contain 200 k records in \n> the postgres version).\n\nWhat does EXPLAIN ANALYZE have to say about it? Have you ANALYZEd the\ntables involved in the query?\n\nYou would in any case be very well advised to change the \"numeric\"\ncolumns to integer, bigint, or smallint when appropriate. There is\na substantial performance advantage to using the simple integral\ndatatypes instead of the general numeric type.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2004 13:08:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select with qualified join condition / Batch inserts "
},
{
"msg_contents": "On Fri, 15 Oct 2004 08:47 pm, Gavin Sherry wrote:\n> On Fri, 15 Oct 2004, Bernd wrote:\n> \n> > Hi,\n[snip]\n\n> > Table-def:\n> > Table \"public.scr_well_compound\"\n> > Column | Type | Modifiers\n> > ------------+------------------------+-----------\n> > mat_id | numeric(10,0) | not null\n> > barcode | character varying(240) | not null\n> > well_index | numeric(5,0) | not null\n> > id_level | numeric(3,0) | not null\n> > compound | character varying(240) | not null\n> > Indexes:\n> > \"scr_wcm_pk\" PRIMARY KEY, btree (id_level, mat_id, barcode, well_index)\n> \nnumeric is not optimized by postgresql like it is by Oracle. You will get much better\nperformance by changing the numeric types to int, big int, or small int.\n\nThat should get the query time down to somewhere near what Oracle is giving you.\n\nRegards\n\nRussell Smith.\n\n\n\n[snip]\n",
"msg_date": "Wed, 20 Oct 2004 09:18:57 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select with qualified join condition / Batch inserts"
}
] |
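
A minimal sketch of the two pieces of advice in the thread above: run EXPLAIN ANALYZE on the slow join, and move the numeric(n,0) key columns to plain integral types as Tom Lane and Russell Smith suggest. The table and column names come from Bernd's mail, but the ALTER TABLE statements and the integer/smallint choices are assumptions for illustration, not something posted in the thread.

    -- Get fresh statistics, then look at the actual plan for the slow join.
    ANALYZE scr_well_compound;
    ANALYZE scr_well_concentration;

    EXPLAIN ANALYZE
    SELECT cmp.well_index, cmp.compound, con.concentration
      FROM scr_well_compound cmp, scr_well_concentration con
     WHERE cmp.barcode    = con.barcode
       AND cmp.well_index = con.well_index
       AND cmp.mat_id     = con.mat_id
       AND cmp.mat_id     = 3
       AND cmp.barcode    = '910125864'
       AND cmp.id_level   = 1;

    -- Replace the numeric(10,0)/numeric(5,0)/numeric(3,0) key columns with
    -- integral types. 8.0 supports ALTER COLUMN ... TYPE; it rewrites the
    -- table and rebuilds the indexes. The referenced scr_mapping_table.mat_id
    -- column would need the same change to keep the foreign keys on
    -- matching types.
    ALTER TABLE scr_well_compound
      ALTER COLUMN mat_id     TYPE integer,
      ALTER COLUMN well_index TYPE integer,
      ALTER COLUMN id_level   TYPE smallint;

    ALTER TABLE scr_well_concentration
      ALTER COLUMN mat_id     TYPE integer,
      ALTER COLUMN well_index TYPE integer;

With matching integer types on both sides of the join, the comparisons are cheaper and the primary-key indexes are easier for the planner to use for the literal predicates.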
[
{
"msg_contents": "But he's testing with v8 beta3, so you'd expect the typecast problem not to appear?\n\nAre all tables fully vacuumed? Should the statistics-target be raised for some columns, perhaps? What about the config file?\n\n--Tim\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Matt Clark\nSent: Friday, October 15, 2004 12:37 PM\nTo: 'Bernd'; [email protected]\nSubject: Re: [PERFORM] Select with qualified join condition / Batch inserts\n\n\n> SELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION \n> \tFROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con \n> \tWHERE cmp.BARCODE=con.BARCODE \n> \t\tAND cmp.WELL_INDEX=con.WELL_INDEX \n> \t\tAND cmp.MAT_ID=con.MAT_ID \n> \t\tAND cmp.MAT_ID = 3 \n> \t\tAND cmp.BARCODE='910125864' \n> \t\tAND cmp.ID_LEVEL = 1;\n\nQuick guess - type mismatch forcing sequential scan. Try some quotes:\n \t\tAND cmp.MAT_ID = '3' \n \t\tAND cmp.BARCODE='910125864' \n \t\tAND cmp.ID_LEVEL = '1';\n\nM\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n",
"msg_date": "Fri, 15 Oct 2004 12:44:37 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Select with qualified join condition / Batch inserts"
}
] |
] |
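
In the same spirit, a hedged sketch of the checks Tim raises above (vacuum state and per-column statistics targets) together with Matt's quoted-literal workaround; the SET STATISTICS value of 100 and the choice of the barcode column are illustrative assumptions:

    -- Refresh statistics, and give the planner more detail on the
    -- selective column before re-analyzing.
    VACUUM ANALYZE scr_well_compound;
    ALTER TABLE scr_well_compound ALTER COLUMN barcode SET STATISTICS 100;
    ANALYZE scr_well_compound;

    -- Matt's workaround: quoted literals are resolved to the column's own
    -- numeric type, avoiding the cross-type comparisons that kept pre-8.0
    -- planners off the index.
      AND cmp.mat_id   = '3'
      AND cmp.barcode  = '910125864'
      AND cmp.id_level = '1';

Tim's point stands, though: on 8.0beta3 the cross-type literal case should already be handled by the planner, so stale statistics or a missing ANALYZE is the more likely culprit there.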