[
{
"msg_contents": "Hello all,\n\nI've been struggling with some performance questions regarding our\nPostgres databases. Here's the background:\n\nWe run 4 ~25-30Gb databases which cache information from eBay. These\ndatabases have had performance issues since before I joined the company.\nThe databases have gone through a number of iterations. Initially, they\nwere deployed as one huge database - performance was apparently\nunacceptable. They were split, and tried on a variety of hardware\nplatforms. When I joined the company last year, the databases were\ndeployed on 12-disk RAID5 arrays on dual-proc AMD machines with 4Gb of\nRAM, running Debian Woody and Postgres 7.2. These systems seemed to\nsuffer a gradually decreasing performance accompanied by a gradually\ngrowing disk space usage. The DBA had come to the conclusion that the\nVACUUM command did/does not work on these systems, because even after a\nVACUUM FULL, the size of the database was continually increasing. So, as\nthings stand with the PG7.2 machines, vacuuming is run nightly, and\nwhenever the database size reaches 40Gb on disk (the point at which\nperformance has degraded below tolerance), the DBA exports the data,\ndeletes the database, and then imports the data, shrinking it to the\nactual size of the dataset.\n\nThis process is time-consuming, costly, and the servers that we are\ndeployed on do not meet our stability requirements. So, after much\npushing, I was able to deploy a 12-disk RAID5 dual-proc AMD64 machine\nwith 16Gb of RAM running FreeBSD and Postgres 8.1.\n\nThe performance increase was immediate, obvious, and dramatic, as you\nmight expect from such a large boost in the underlying hardware.\n\nThis upgrade appears to have solved the VACUUM issue - regular VACUUM\ncommands now seem able to maintain the database size at a steady-state\n(taking into account fluctuations associated with actual changes in the\ndataset size!). We are now planning on moving the other three databases\nto the new platform and hardware.\n\nHowever, we still are suffering a gradual decrease in performance over\ntime - or so the application engineers claim. The DBA and I have been\nbanging our heads against this for a month.\n\nWhich brings me to the questions:\n\n1) How does one define 'performance' anyway? Is it average time to\ncomplete a query? If so, what kind of query? Is it some other metric?\n\n2) I've combed the archives and seen evidence that people out there are\nrunning much much larger databases on comparable hardware with decent\nperformance. Is this true, or is my dataset at about the limit of my\nhardware?\n\n3) Though this may seem irrelevant, since we are moving away from the\nplatform, it would still be good to know - was VACUUM actually\ncompletely useless on PG7.2 or is there some other culprit on these\nlegacy machines?\n\n4) Much of my reading of the PG docs and list archives seems to suggest\nthat much of performance tuning is done at the query level - you have to\nknow how to ask for information in an efficient way. To that end, I took\na look at some of the queries we get on a typical day. On average, 24\ntimes per minute, our application causes a unique key violation. This\nstruck me as strange, but the VP of Engineering says this is a\nperformance ENHANCEMENT - the code doesn't bother checking for the\nunique key because it depends on the database to enforce that. My\ninterpretation of reading the archives & docs seems to indicate that\nthis could be causing the constantly increasing database size... 
so now\nthat I've rambled about it, does an INSERT transaction that is rolled\nback due to a unique key violation leave dead rows in the table? If so, why?\n\nI really appreciate any information you guys can give me. I'm convinced\nthat PG is the best database for our needs, but I need to be able to get\nthis database performing well enough to convince the bigwigs.\n\nRegards,\nPaul Lathrop\nSystems Administrator",
"msg_date": "Thu, 30 Nov 2006 14:59:27 -0800",
"msg_from": "Paul Lathrop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Defining performance."
},
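A quick way to quantify the rollback rate and row churn Paul describes is the statistics views. A minimal sketch, assuming an 8.1-era server with the statistics collector (stats_row_level) enabled; these counters are cumulative since the stats were last reset, and exact column sets vary by version:

```sql
-- How often does the application roll back compared to committing?
SELECT datname, xact_commit, xact_rollback
FROM pg_stat_database
WHERE datname = current_database();

-- Per-table write churn that regular VACUUM has to keep up with
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY n_tup_upd + n_tup_del DESC
LIMIT 10;
```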
{
"msg_contents": "[Paul Lathrop - Thu at 02:59:27PM -0800]\n> growing disk space usage. The DBA had come to the conclusion that the\n> VACUUM command did/does not work on these systems, because even after a\n> VACUUM FULL, the size of the database was continually increasing. So, as\n> things stand with the PG7.2 machines, vacuuming is run nightly, and\n> whenever the database size reaches 40Gb on disk (the point at which\n> performance has degraded below tolerance), the DBA exports the data,\n> deletes the database, and then imports the data, shrinking it to the\n> actual size of the dataset.\n\nWe found one reason why vacuuming didn't always work for us - we had\nlong running transactions - in addition to killing the vacuum, it did\nreally nasty things to the performance in general.\n\nTo check for those transactions, I think it's needed to turn on\nstats_command_string in the config.\n\nI use this query to check:\n\nselect * from pg_stat_activity where current_query<>'<IDLE>' order by\nquery_start ;\n\nIf you spot any \"<IDLE> in transaction\" with an old query_start\ntimestamp, then that's most probably the reason.\n\nLong running transactions doesn't have to be idle ... check the pg_locks\nview for the lowest transactionid and compare (through the pid) with the\npg_stat_activity view to find the actual backend.\n\n> However, we still are suffering a gradual decrease in performance over\n> time - or so the application engineers claim. The DBA and I have been\n> banging our heads against this for a month.\n\nWe're having the same issues, so we do the dumping and restoring every\nnow and then to be sure everything is properly cleaned up. With 8.1.\n\n> 1) How does one define 'performance' anyway? Is it average time to\n> complete a query? If so, what kind of query? Is it some other metric?\n\nWe have the same kind of problem, and the project leader (I sometimes\nrefer him as the \"bottleneck\" ;-) is most concerned about iowait at our\ncpu graphs. Anyway, we do have other measures:\n\n - our applications does log the duration of each request towards the\n application as well as each query towards the database. If the\n request (this is web servers) is taking \"too long\" time, it's logged\n as error instead of debug. If a significant number of such errors\n is due to database calls taking too much time, then the performance\n is bad. Unfortunately, we have no way to automate such checking.\n\n - I've setting up two scripts pinging that pg_stat_activity view every\n now and then, logging how much \"gruff\" it finds there. Those two\n scripts are eventually to be merged. One is simply logging what it\n finds, the other is a plugin system to the Munin graphing package. \n\nI've thrown the scripts we use out here:\n\nhttp://oppetid.no/~tobixen/pg_activity_log.txt\nhttp://oppetid.no/~tobixen/pg_activity.munin.txt\n\n(I had to rename them to .txt to get the web server to play along).\n\nThose are very as-is, should certainly be modified a bit to fit to any\nother production environment. :-)\n\nThe pg_activity_log dumps a single number indicating the \"stress level\"\nof the database to a file. I think this stress number, when taking out\ni.e. the 20% worst numbers from the file for each day, can indicate\nsomething about the performance of the database server. However, I\nhaven't had the chance to discuss it with the bottleneck yet.\n\n",
"msg_date": "Fri, 1 Dec 2006 01:05:37 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defining performance."
},
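A minimal sketch of the checks Tobias describes, assuming an 8.1-era catalog (stats_command_string on; the procpid and current_query columns were renamed in later releases). The second query orders lock-holding backends by query_start rather than by the raw transaction id, which is awkward to sort on:

```sql
-- Sessions sitting "<IDLE> in transaction"; an old query_start is the red flag
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;

-- Long-running transactions that are not idle: tie lock holders back to their backend
SELECT a.procpid, a.usename, a.query_start, count(*) AS locks_held
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.procpid
GROUP BY a.procpid, a.usename, a.query_start
ORDER BY a.query_start
LIMIT 10;
```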
{
"msg_contents": "Paul Lathrop <[email protected]> writes:\n> ... When I joined the company last year, the databases were\n> deployed on 12-disk RAID5 arrays on dual-proc AMD machines with 4Gb of\n> RAM, running Debian Woody and Postgres 7.2. These systems seemed to\n> suffer a gradually decreasing performance accompanied by a gradually\n> growing disk space usage. The DBA had come to the conclusion that the\n> VACUUM command did/does not work on these systems, because even after a\n> VACUUM FULL, the size of the database was continually increasing.\n\nThe very first thing you need to do is get off 7.2.\n\nAfter that, I'd recommend looking at *not* using VACUUM FULL. FULL is\nactually counterproductive in a lot of scenarios, because it shrinks the\ntables at the price of bloating the indexes. And 7.2's poor ability to\nreuse index space turns that into a double whammy. Have you checked\ninto the relative sizes of tables and indexes and tracked the trend over\ntime?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Nov 2006 19:26:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defining performance. "
},
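A sketch of one way to track the table-versus-index size trend Tom asks about, assuming 8.1 or later (pg_relation_size() appeared in 8.1; on 7.2 you are limited to relpages, which is only refreshed by VACUUM/ANALYZE):

```sql
-- Largest tables ('r') and indexes ('i') by on-disk size; snapshot this periodically
SELECT n.nspname, c.relname, c.relkind, c.relpages,
       pg_relation_size(c.oid) / (1024 * 1024) AS size_mb
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'i')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 20;
```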
{
"msg_contents": "On Thu, 2006-11-30 at 18:26, Tom Lane wrote:\n> Paul Lathrop <[email protected]> writes:\n> > ... When I joined the company last year, the databases were\n> > deployed on 12-disk RAID5 arrays on dual-proc AMD machines with 4Gb of\n> > RAM, running Debian Woody and Postgres 7.2. These systems seemed to\n> > suffer a gradually decreasing performance accompanied by a gradually\n> > growing disk space usage. The DBA had come to the conclusion that the\n> > VACUUM command did/does not work on these systems, because even after a\n> > VACUUM FULL, the size of the database was continually increasing.\n> \n> The very first thing you need to do is get off 7.2.\n> \n> After that, I'd recommend looking at *not* using VACUUM FULL. FULL is\n> actually counterproductive in a lot of scenarios, because it shrinks the\n> tables at the price of bloating the indexes. And 7.2's poor ability to\n> reuse index space turns that into a double whammy. Have you checked\n> into the relative sizes of tables and indexes and tracked the trend over\n> time?\n\nAnd if you cant get off 7.2, look into scheduling some downtime to run\nreindex on the bloated indexes.\n\nIn all honesty, a simple single processor workstation with a gig of ram\nand a couple of good sized SATA drives and a modern linux distro can\nprobably outrun your 7.2 server if it's running on 8.1 / 8.2 \n\nIt's that much faster now.\n\nFor the love of all that's holy, as well as your data, start planning\nyour migration now, and if you can, have it done by the end of next week\nor so.\n\nAnd backup every night religiously.\n",
"msg_date": "Thu, 30 Nov 2006 18:43:54 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defining performance."
},
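For the REINDEX suggestion, a small illustration; the table and index names here are placeholders, and REINDEX blocks writes to the table (and reads that would use the index being rebuilt) while it runs, hence the scheduled downtime:

```sql
-- Rebuild every index on one table
REINDEX TABLE ebay_items;

-- Or rebuild just one suspect index
REINDEX INDEX ebay_items_pkey;
```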
{
"msg_contents": "On Fri, 2006-12-01 at 01:05 +0100, Tobias Brox wrote:\n> > However, we still are suffering a gradual decrease in performance over\n> > time - or so the application engineers claim. The DBA and I have been\n> > banging our heads against this for a month.\n> \n> We're having the same issues, so we do the dumping and restoring every\n> now and then to be sure everything is properly cleaned up. With 8.1.\n> \n\nWhat's causing that? Is it index bloat?\n\nI would think a REINDEX would avoid having to dump/restore, right? A\nCLUSTER might also be necessary, depending on what kind of performance\ndegradation you're experiencing.\n\nAm I missing something?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 30 Nov 2006 16:57:54 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defining performance."
},
{
"msg_contents": "[Jeff Davis - Thu at 04:57:54PM -0800]\n> > We're having the same issues, so we do the dumping and restoring every\n> > now and then to be sure everything is properly cleaned up. With 8.1.\n> > \n> \n> What's causing that? Is it index bloat?\n> \n> I would think a REINDEX would avoid having to dump/restore, right? A\n> CLUSTER might also be necessary, depending on what kind of performance\n> degradation you're experiencing.\n> \n> Am I missing something?\n\nJust as with Paul Lathrops case, the performance degradation is\nsomething perceived by the application developers. We haven't had time\nto actually verify reliably that the performance is actually beeing\ndegraded, neither that the reload beeing done helps (after we resolved\nthe pending transaction issue, anyway), nor look into what the possible\nreasons of this percieved degradation could be.\n\n",
"msg_date": "Fri, 1 Dec 2006 02:07:54 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defining performance."
},
{
"msg_contents": "Paul Lathrop wrote:\n> We run 4 ~25-30Gb databases which cache information from eBay. These\n> databases have had performance issues since before I joined the company.\n> The databases have gone through a number of iterations. Initially, they\n> were deployed as one huge database - performance was apparently\n> unacceptable. They were split, and tried on a variety of hardware\n> platforms. When I joined the company last year, the databases were\n> deployed on 12-disk RAID5 arrays on dual-proc AMD machines with 4Gb of\n> RAM, running Debian Woody and Postgres 7.2. \n\nWell, first of all you need to upgrade. 7.2 is old and not supported \nanymore.\n\n> These systems seemed to\n> suffer a gradually decreasing performance accompanied by a gradually\n> growing disk space usage. The DBA had come to the conclusion that the\n> VACUUM command did/does not work on these systems, because even after a\n> VACUUM FULL, the size of the database was continually increasing. So, as\n> things stand with the PG7.2 machines, vacuuming is run nightly, and\n> whenever the database size reaches 40Gb on disk (the point at which\n> performance has degraded below tolerance), the DBA exports the data,\n> deletes the database, and then imports the data, shrinking it to the\n> actual size of the dataset.\n\nVacuum didn't reclaim empty index pages until 7.4, so you might be \nsuffering from index bloat. A nightly reindex would help with that.\n\n> This process is time-consuming, costly, and the servers that we are\n> deployed on do not meet our stability requirements. So, after much\n> pushing, I was able to deploy a 12-disk RAID5 dual-proc AMD64 machine\n> with 16Gb of RAM running FreeBSD and Postgres 8.1.\n\nYou should give 8.2 (now in beta stage) a try as well. There's some \nsignificant performance enhancements, for example vacuums should run faster.\n\n> 1) How does one define 'performance' anyway? Is it average time to\n> complete a query? If so, what kind of query? Is it some other metric?\n\nPerformance is really an umbrella term that can mean a lot of things. \nYou'll have to come up with a metric that's most meaningful to you and \nthat you can easily measure. Some typical metrics are:\n\n* Average response time to a query/queries\n* Max or 90% percentile response time to a query\n* throughput, transactions per second\n\nYou'll have to start measuring performance somehow. You might find out \nthat actually your performance is bad only during some hour of day for \nexample. Take a look at the log_min_duration_statement parameter in more \nrecent versions for starter.\n\n> 2) I've combed the archives and seen evidence that people out there are\n> running much much larger databases on comparable hardware with decent\n> performance. Is this true, or is my dataset at about the limit of my\n> hardware?\n\nIt depends on your load, really. A dataset of ~ 40 GB is certainly not \nthat big compared to what some people have.\n\n> 3) Though this may seem irrelevant, since we are moving away from the\n> platform, it would still be good to know - was VACUUM actually\n> completely useless on PG7.2 or is there some other culprit on these\n> legacy machines?\n\nIt's certainly better nowadays..\n\n> 4) Much of my reading of the PG docs and list archives seems to suggest\n> that much of performance tuning is done at the query level - you have to\n> know how to ask for information in an efficient way. To that end, I took\n> a look at some of the queries we get on a typical day. 
On average, 24\n> times per minute, our application causes a unique key violation. This\n> struck me as strange, but the VP of Engineering says this is a\n> performance ENHANCEMENT - the code doesn't bother checking for the\n> unique key because it depends on the database to enforce that. My\n> interpretation of reading the archives & docs seems to indicate that\n> this could be causing the constantly increasing database size... so now\n> that I've rambled about it, does an INSERT transaction that is rolled\n> back due to a unique key violation leave dead rows in the table? If so, why?\n\nThe way unique checking works in PostgreSQL is:\n\n1. The row is inserted into heap.\n2. The corresponding index tuple is inserted to index. While doing that, \n we check that there's no duplicate key there already.\n\nSo yes, a unique key violation will leave the dead tuple in the heap, \nand it will be removed by vacuum later on.\n\nI think it's a valid and sane approach to leave the uniqueness check to \nthe database. Unless a very large proportion of your transactions abort \ndue to unique key violations, the dead rows won't be a problem. The \nspace will be reclaimed by vacuum.\n\nIn general, it's normal that there's some dead rows in the database. As \nlong as you vacuum regularly, the database size should eventually reach \na steady-state where it doesn't grow anymore, unless the real live \ndataset size increases.\n\n--\n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 01 Dec 2006 10:36:27 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defining performance."
}
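A tiny sketch that makes Heikki's point visible; it assumes any recent 8.x server, and the exact VACUUM VERBOSE wording varies by version:

```sql
-- A failed INSERT still writes a heap tuple; the abort just leaves it dead
CREATE TABLE uniq_demo (id integer PRIMARY KEY);

INSERT INTO uniq_demo VALUES (1);   -- succeeds
INSERT INTO uniq_demo VALUES (1);   -- fails with a unique-key violation

-- Should report a removable dead row version left behind by the failed insert
VACUUM VERBOSE uniq_demo;

DROP TABLE uniq_demo;
```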
]
[
{
"msg_contents": "> 4) Much of my reading of the PG docs and list archives seems to suggest\nthat much of performance tuning is done at the query level - you have to\nknow how to ask for information in an efficient way.\n\nPerformance does not exist in a vacuum (ho ho, PostgreSQL joke). The\nperson writing the queries has to understand what is actually happening\ninside of the database. When I ported my app from MS SQL over to\nPostgreSQL several years ago, there were many queries that were previously\nzippy that ran like turds on PostgreSQL, and vice-versa!\n\nAs my dataset has gotten larger I have had to throw more metal at the\nproblem, but I have also had to rethink my table and query design. Just\nbecause your data set grows linearly does NOT mean that the performance of\nyour query is guaranteed to grow linearly! A sloppy query that runs OK\nwith 3000 rows in your table may choke horribly when you hit 50000.\n\nJohn\n\n\n\n",
"msg_date": "Thu, 30 Nov 2006 18:37:12 -0600 (CST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Defining performance."
}
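One concrete way to see whether a query that was fine at 3,000 rows still behaves at 50,000 is to compare its plans as the table grows; the table and columns below are made-up placeholders:

```sql
-- Re-run as the data grows; watch for plan changes, bad row estimates and sorts spilling to disk
EXPLAIN ANALYZE
SELECT *
FROM listings
WHERE seller_id = 42
ORDER BY end_time
LIMIT 50;
```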
]
[
{
"msg_contents": "Hi,\n\nI was called to find out why one of our PostgreSQL servers has not a\nsatisfatory response time. The server has only two SCSI disks configurated\nas a RAID1 software.\n\nWhile collecting performance data I discovered very bad numbers in the I/O\nsubsystem and I would like to know if I�m thinking correctly.\n\nHere is a typical iostat -x:\n\navg-cpu: %user %nice %system %iowait %idle\n\n 50.40 0.00 0.50 1.10 48.00\n\n\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\n\nsda 0.00 7.80 0.40 6.40 41.60 113.60 20.80 56.80\n22.82 570697.50 10.59 147.06 100.00\n\nsdb 0.20 7.80 0.60 6.40 40.00 113.60 20.00 56.80\n21.94 570697.50 9.83 142.86 100.00\n\nmd1 0.00 0.00 1.20 13.40 81.60 107.20 40.80 53.60\n12.93 0.00 0.00 0.00 0.00\n\nmd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00 0.00\n\n\n\nAre they not saturated?\n\n\n\nWhat kind of parameters should I pay attention when comparing SCSI\ncontrollers and disks? I would like to discover how much cache is present in\nthe controller, how can I find this value from Linux?\n\n\n\nThank you in advance!\n\n\n\ndmesg output:\n\n...\n\nSCSI subsystem initialized\n\nACPI: PCI Interrupt 0000:04:02.0[A] -> GSI 18 (level, low) -> IRQ 18\n\nscsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 3.0\n\n <Adaptec (Dell OEM) 39320 Ultra320 SCSI adapter>\n\n aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI 33 or 66Mhz, 512\nSCBs\n\n\n\n Vendor: SEAGATE Model: ST336607LW Rev: DS10\n\n Type: Direct-Access ANSI SCSI revision: 03\n\n target0:0:0: asynchronous\n\nscsi0:A:0:0: Tagged Queuing enabled. Depth 4\n\n target0:0:0: Beginning Domain Validation\n\n target0:0:0: wide asynchronous\n\n target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RDSTRM RTI WRFLOW\nPCOMP (6.25 ns, offset 63)\n\n target0:0:0: Ending Domain Validation\n\nSCSI device sda: 71132959 512-byte hdwr sectors (36420 MB)\n\nsda: Write Protect is off\n\nsda: Mode Sense: ab 00 10 08\n\nSCSI device sda: drive cache: write back w/ FUA\n\nSCSI device sda: 71132959 512-byte hdwr sectors (36420 MB)\n\nsda: Write Protect is off\n\nsda: Mode Sense: ab 00 10 08\n\nSCSI device sda: drive cache: write back w/ FUA\n\n sda: sda1 sda2 sda3\n\nsd 0:0:0:0: Attached scsi disk sda\n\n Vendor: SEAGATE Model: ST336607LW Rev: DS10\n\n Type: Direct-Access ANSI SCSI revision: 03\n\n target0:0:1: asynchronous\n\nscsi0:A:1:0: Tagged Queuing enabled. Depth 4\n\n target0:0:1: Beginning Domain Validation\n\n target0:0:1: wide asynchronous\n\n target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RDSTRM RTI WRFLOW\nPCOMP (6.25 ns, offset 63)\n\n target0:0:1: Ending Domain Validation\n\nSCSI device sdb: 71132959 512-byte hdwr sectors (36420 MB)\n\nsdb: Write Protect is off\n\nsdb: Mode Sense: ab 00 10 08\n\nSCSI device sdb: drive cache: write back w/ FUA\n\nSCSI device sdb: 71132959 512-byte hdwr sectors (36420 MB)\n\nsdb: Write Protect is off\n\nsdb: Mode Sense: ab 00 10 08\n\nSCSI device sdb: drive cache: write back w/ FUA\n\n sdb: sdb1 sdb2 sdb3\n\nsd 0:0:1:0: Attached scsi disk sdb\n\nACPI: PCI Interrupt 0000:04:02.1[B] -> GSI 19 (level, low) -> IRQ 19\n\nscsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 3.0\n\n <Adaptec (Dell OEM) 39320 Ultra320 SCSI adapter>\n\n aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI 33 or 66Mhz, 512\nSCBs\n\n...\n\n\n\nReimer\n\n\n\n\n\nHi,\n \nI was called to \nfind out why one of our PostgreSQL servers has not a satisfatory response \ntime. 
The server has only two SCSI disks configurated as a RAID1 \nsoftware.\n \nWhile collecting performance data I discovered \nvery bad numbers in the I/O subsystem and I would like to know if I´m thinking \ncorrectly.\n \nHere is a typical \niostat -x:\n \n\navg-cpu: %user %nice %system %iowait %idle\n \n50.40 0.00 0.50 1.10 48.00\n \nDevice: rrqm/s \nwrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util\nsda \n0.00 7.80 0.40 6.40 41.60 113.60 20.80 56.80 22.82 570697.50 10.59 147.06 \n100.00\nsdb \n0.20 7.80 0.60 6.40 40.00 113.60 20.00 56.80 21.94 570697.50 9.83 142.86 \n100.00\nmd1 \n0.00 0.00 1.20 13.40 81.60 107.20 40.80 53.60 12.93 0.00 0.00 0.00 0.00\nmd0 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n \nAre they not \nsaturated?\n \nWhat kind of parameters should I pay \nattention when comparing SCSI controllers and disks? I would like to discover \nhow much cache is present in the controller, how can I find this value from \nLinux?\n \nThank you in \nadvance!\n \ndmesg output:\n...\nSCSI subsystem \ninitialized\nACPI: PCI Interrupt 0000:04:02.0[A] -> \nGSI 18 (level, low) -> IRQ 18\nscsi0 : Adaptec AIC79XX PCI-X SCSI HBA \nDRIVER, Rev 3.0\n \n<Adaptec (Dell OEM) 39320 Ultra320 SCSI adapter>\n \naic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI 33 or 66Mhz, 512 \nSCBs\n \n Vendor: SEAGATE Model: \nST336607LW Rev: \nDS10\n Type: \nDirect-Access \nANSI SCSI revision: 03\n target0:0:0: \nasynchronous\nscsi0:A:0:0: Tagged Queuing enabled. \nDepth 4\n target0:0:0: Beginning Domain \nValidation\n target0:0:0: wide \nasynchronous\n target0:0:0: FAST-160 WIDE SCSI \n320.0 MB/s DT IU QAS RDSTRM RTI WRFLOW PCOMP (6.25 ns, offset \n63)\n target0:0:0: Ending Domain \nValidation\nSCSI device sda: 71132959 512-byte hdwr \nsectors (36420 MB)\nsda: Write Protect is \noff\nsda: Mode Sense: ab 00 10 \n08\nSCSI device sda: drive cache: write back \nw/ FUA\nSCSI device sda: 71132959 512-byte hdwr \nsectors (36420 MB)\nsda: Write Protect is \noff\nsda: Mode Sense: ab 00 10 \n08\nSCSI device sda: drive cache: write back \nw/ FUA\n sda: sda1 sda2 \nsda3\nsd 0:0:0:0: Attached scsi disk \nsda\n Vendor: SEAGATE Model: \nST336607LW Rev: \nDS10\n Type: \nDirect-Access \nANSI SCSI revision: 03\n target0:0:1: \nasynchronous\nscsi0:A:1:0: Tagged Queuing enabled. \nDepth 4\n target0:0:1: Beginning Domain \nValidation\n target0:0:1: wide \nasynchronous\n target0:0:1: FAST-160 WIDE SCSI \n320.0 MB/s DT IU QAS RDSTRM RTI WRFLOW PCOMP (6.25 ns, offset \n63)\n target0:0:1: Ending Domain \nValidation\nSCSI device sdb: 71132959 512-byte hdwr \nsectors (36420 MB)\nsdb: Write Protect is \noff\nsdb: Mode Sense: ab 00 10 \n08\nSCSI device sdb: drive cache: write back \nw/ FUA\nSCSI device sdb: 71132959 512-byte hdwr \nsectors (36420 MB)\nsdb: Write Protect is \noff\nsdb: Mode Sense: ab 00 10 \n08\nSCSI device sdb: drive cache: write back \nw/ FUA\n sdb: sdb1 sdb2 \nsdb3\nsd 0:0:1:0: Attached scsi disk \nsdb\nACPI: PCI Interrupt 0000:04:02.1[B] -> \nGSI 19 (level, low) -> IRQ 19\nscsi1 : Adaptec AIC79XX PCI-X SCSI HBA \nDRIVER, Rev 3.0\n \n<Adaptec (Dell OEM) 39320 Ultra320 SCSI adapter>\n \naic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI 33 or 66Mhz, 512 \nSCBs\n...\n \nReimer",
"msg_date": "Thu, 30 Nov 2006 22:44:25 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad iostat numbers"
},
{
"msg_contents": "Carlos H. Reimer wrote:\n> While collecting performance data I discovered very bad numbers in the \n> I/O subsystem and I would like to know if I�m thinking correctly.\n> \n> Here is a typical iostat -x:\n> \n> \n> avg-cpu: %user %nice %system %iowait %idle\n> \n> 50.40 0.00 0.50 1.10 48.00\n> \n> \n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s \n> avgrq-sz avgqu-sz await svctm %util\n> \n> sda 0.00 7.80 0.40 6.40 41.60 113.60 20.80 \n> 56.80 22.82 570697.50 10.59 147.06 100.00\n> \n> sdb 0.20 7.80 0.60 6.40 40.00 113.60 20.00 \n> 56.80 21.94 570697.50 9.83 142.86 100.00\n> \n> md1 0.00 0.00 1.20 13.40 81.60 107.20 40.80 \n> 53.60 12.93 0.00 0.00 0.00 0.00\n> \n> md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00\n> \n> \n> \n> Are they not saturated?\n> \n\nThey look it (if I'm reading your typical numbers correctly) - %util 100 \nand svctime in the region of 100 ms!\n\nOn the face of it, looks like you need something better than a RAID1 \nsetup - probably RAID10 (RAID5 is probably no good as you are writing \nmore than you are reading it seems). However read on...\n\nIf this is a sudden change in system behavior, then it is probably worth \ntrying to figure out what is causing it (i.e which queries) - for \ninstance it might be that you have some new queries that are doing disk \nbased sorts (this would mean you really need more memory rather than \nbetter disk...)\n\nCheers\n\nMark\n\n\n",
"msg_date": "Fri, 01 Dec 2006 14:47:24 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "Carlos H. Reimer wrote:\n\n> \n>\n> avg-cpu: %user %nice %system %iowait %idle\n>\n> 50.40 0.00 0.50 1.10 48.00\n>\n> \n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s \n> avgrq-sz avgqu-sz await svctm %util\n>\n> sda 0.00 7.80 0.40 6.40 41.60 113.60 20.80 \n> 56.80 22.82 570697.50 10.59 147.06 100.00\n>\n> sdb 0.20 7.80 0.60 6.40 40.00 113.60 20.00 \n> 56.80 21.94 570697.50 9.83 142.86 100.00\n>\n> md1 0.00 0.00 1.20 13.40 81.60 107.20 40.80 \n> 53.60 12.93 0.00 0.00 0.00 0.00\n>\n> md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> \n>\n> Are they not saturated?\n>\n> \n>\n> What kind of parameters should I pay attention when comparing SCSI \n> controllers and disks? I would like to discover how much cache is \n> present in the controller, how can I find this value from Linux?\n>\n>\nThese number look a bit strange. I am wondering if there is a hardware \nproblem on one of the drives\nor on the controller. Check in syslog for messages about disk timeouts \netc. 100% util but 6 writes/s\nis just wrong (unless the drive is a 1980's vintage floppy).\n\n\n",
"msg_date": "Thu, 30 Nov 2006 19:24:34 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "David Boreham wrote:\n\n>\n> These number look a bit strange. I am wondering if there is a hardware \n> problem on one of the drives\n> or on the controller. Check in syslog for messages about disk timeouts \n> etc. 100% util but 6 writes/s\n> is just wrong (unless the drive is a 1980's vintage floppy).\n> \n\nAgreed - good call, I was misreading the wkB/s as wMB/s...\n",
"msg_date": "Fri, 01 Dec 2006 16:00:05 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "Hi,\n\nI�ve taken a look in the /var/log/messages and found some temperature\nmessages about the disk drives:\n\nNov 30 11:08:07 totall smartd[1620]: Device: /dev/sda, Temperature changed 2\nCelsius to 51 Celsius since last report\n\nCan this temperature influence in the performance?\n\nReimer\n\n> -----Mensagem original-----\n> De: David Boreham [mailto:[email protected]]\n> Enviada em: sexta-feira, 1 de dezembro de 2006 00:25\n> Para: [email protected]\n> Cc: [email protected]\n> Assunto: Re: [PERFORM] Bad iostat numbers\n>\n>\n> Carlos H. Reimer wrote:\n>\n> >\n> >\n> > avg-cpu: %user %nice %system %iowait %idle\n> >\n> > 50.40 0.00 0.50 1.10 48.00\n> >\n> >\n> >\n> > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\n> > avgrq-sz avgqu-sz await svctm %util\n> >\n> > sda 0.00 7.80 0.40 6.40 41.60 113.60 20.80\n> > 56.80 22.82 570697.50 10.59 147.06 100.00\n> >\n> > sdb 0.20 7.80 0.60 6.40 40.00 113.60 20.00\n> > 56.80 21.94 570697.50 9.83 142.86 100.00\n> >\n> > md1 0.00 0.00 1.20 13.40 81.60 107.20 40.80\n> > 53.60 12.93 0.00 0.00 0.00 0.00\n> >\n> > md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00 0.00 0.00\n> >\n> >\n> >\n> > Are they not saturated?\n> >\n> >\n> >\n> > What kind of parameters should I pay attention when comparing SCSI\n> > controllers and disks? I would like to discover how much cache is\n> > present in the controller, how can I find this value from Linux?\n> >\n> >\n> These number look a bit strange. I am wondering if there is a hardware\n> problem on one of the drives\n> or on the controller. Check in syslog for messages about disk timeouts\n> etc. 100% util but 6 writes/s\n> is just wrong (unless the drive is a 1980's vintage floppy).\n>\n>\n>\n>\n\n",
"msg_date": "Fri, 1 Dec 2006 12:29:11 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Bad iostat numbers"
},
{
"msg_contents": "Hi,\n\nIf you look the iostat data it shows that the system is doing much more\nwrites than reads. It is strange, because if you look in the pg_stat tables\nwe see a complete different scenario. Much more reads than writes. I was\nmonitoring the presence of temporary files in the data directory what could\ndenote big sorts, but nothing there too.\n\nBut I think it is explained because of the high number of indexes present in\nthose tables. One write in the base table, many others in the indexes.\n\nWell, about the server behaviour, it has not changed suddenly but the\nperformance is becoming worse day by day.\n\n\nReimer\n\n\n> -----Mensagem original-----\n> De: Mark Kirkwood [mailto:[email protected]]\n> Enviada em: quinta-feira, 30 de novembro de 2006 23:47\n> Para: [email protected]\n> Cc: [email protected]\n> Assunto: Re: [PERFORM] Bad iostat numbers\n>\n>\n> Carlos H. Reimer wrote:\n> > While collecting performance data I discovered very bad numbers in the\n> > I/O subsystem and I would like to know if I�m thinking correctly.\n> >\n> > Here is a typical iostat -x:\n> >\n> >\n> > avg-cpu: %user %nice %system %iowait %idle\n> >\n> > 50.40 0.00 0.50 1.10 48.00\n> >\n> >\n> >\n> > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\n> > avgrq-sz avgqu-sz await svctm %util\n> >\n> > sda 0.00 7.80 0.40 6.40 41.60 113.60 20.80\n> > 56.80 22.82 570697.50 10.59 147.06 100.00\n> >\n> > sdb 0.20 7.80 0.60 6.40 40.00 113.60 20.00\n> > 56.80 21.94 570697.50 9.83 142.86 100.00\n> >\n> > md1 0.00 0.00 1.20 13.40 81.60 107.20 40.80\n> > 53.60 12.93 0.00 0.00 0.00 0.00\n> >\n> > md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00 0.00 0.00\n> >\n> >\n> >\n> > Are they not saturated?\n> >\n>\n> They look it (if I'm reading your typical numbers correctly) - %util 100\n> and svctime in the region of 100 ms!\n>\n> On the face of it, looks like you need something better than a RAID1\n> setup - probably RAID10 (RAID5 is probably no good as you are writing\n> more than you are reading it seems). However read on...\n>\n> If this is a sudden change in system behavior, then it is probably worth\n> trying to figure out what is causing it (i.e which queries) - for\n> instance it might be that you have some new queries that are doing disk\n> based sorts (this would mean you really need more memory rather than\n> better disk...)\n>\n> Cheers\n>\n> Mark\n>\n>\n>\n>\n>\n\n",
"msg_date": "Fri, 1 Dec 2006 12:47:28 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Bad iostat numbers"
},
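A sketch of how to sanity-check that hypothesis from the standard statistics views (requires the stats collector to be enabled): indexes that are never scanned still have to be updated on every insert or update of their table, so they add write load without helping reads.

```sql
-- Indexes sorted by how rarely they are used for reads; idx_scan = 0 is pure write overhead
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan, indexrelname
LIMIT 20;
```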
{
"msg_contents": "Carlos H. Reimer wrote:\n\n>I�ve taken a look in the /var/log/messages and found some temperature\n>messages about the disk drives:\n>\n>Nov 30 11:08:07 totall smartd[1620]: Device: /dev/sda, Temperature changed 2\n>Celsius to 51 Celsius since last report\n>\n>Can this temperature influence in the performance?\n> \n>\nit can influence 'working-ness' which I guess in turn affects performance ;)\n\nBut I'm not sure if 50C is too high for a disk drive, it might be ok.\n\nIf you are able to, I'd say just replace the drives and see if that \nimproves things.\n\n\n",
"msg_date": "Fri, 01 Dec 2006 08:32:03 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Bad iostat numbers"
},
{
"msg_contents": "On Thu, 30 Nov 2006, Carlos H. Reimer wrote:\n\n> I would like to discover how much cache is present in\n> the controller, how can I find this value from Linux?\n\nAs far as I know there is no cache on an Adaptec 39320. The write-back \ncache Linux was reporting on was the one in the drives, which is 8MB; see \nhttp://www.seagate.com/cda/products/discsales/enterprise/tech/1,1593,541,00.html \nBe warned that running your database with the combination of an uncached \ncontroller plus disks with write caching is dangerous to your database \nintegrity.\n\nThere is a common problem with the Linux driver for this card (aic7902) \nwhere it enters what's they're calling an \"Infinite Interrupt Loop\". \nThat seems to match your readings:\n\n> Here is a typical iostat -x:\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\n> sda 0.00 7.80 0.40 6.40 41.60 113.60 20.80 56.80\n> avgrq-sz avgqu-sz await svctm %util\n> 22.82 570697.50 10.59 147.06 100.00\n\nAn avgqu-sz of 570697.50 is extremely large. That explains why the \nutilization is 100%, because there's a massive number of I/O operations \nqueued up that aren't getting flushed out. The read and write data says \nthese drives are barely doing anything, as 20kB/s and 57KB/s are \npractically idle; they're not even remotely close to saturated.\n\nSee http://lkml.org/lkml/2005/10/1/47 for a suggested workaround that may \nreduce the magnitude of this issue; lower the card's speed to U160 in the \nBIOS was also listed as a useful workaround. You might get better results \nby upgrading to a newer Linux kernel, and just rebooting to clear out the \ngarbage might help if you haven't tried that yet.\n\nOn the pessimistic side, other people reporting issues with this \ncontroller are:\n\nhttp://lkml.org/lkml/2005/12/17/55\nhttp://www.ussg.iu.edu/hypermail/linux/kernel/0512.2/0390.html\nhttp://www.linuxforums.org/forum/peripherals-hardware/59306-scsi-hangs-boot.html\nand even under FreeBSD at \nhttp://lists.freebsd.org/pipermail/aic7xxx/2003-August/003973.html\n\nThis Adaptec card just barely works under Linux, which happens regularly \nwith their controllers, and my guess is that you've run into one of the \nways it goes crazy sometimes. I just chuckled when checking \nhttp://linux.adaptec.com/ again and noticing they can't even be bothered \nto keep that server up at all. According to \nhttp://www.adaptec.com/en-US/downloads/linux_source/linux_source_code?productId=ASC-39320-R&dn=Adaptec+SCSI+Card+39320-R \nthe driver for your card is \"*minimally tested* for Linux Kernel v2.6 on \nall platforms.\" Adaptec doesn't care about Linux support on their \nproducts; if you want a SCSI controller that actually works under Linux, \nget an LSI MegaRAID.\n\nIf this were really a Postgres problem, I wouldn't expect %iowait=1.10. \nWere the database engine waiting to read/write data, that number would be \ndramatically higher. Whatever is generating all these I/O requests, it's \nnot waiting for them to complete like the database would be. 
Besides the \ndriver problems that I'm very suspicious of, I'd suspect a runaway process \nwriting garbage to the disks might also cause this behavior.\n\n> Ive taken a look in the /var/log/messages and found some temperature\n> messages about the disk drives:\n> Nov 30 11:08:07 totall smartd[1620]: Device: /dev/sda, Temperature changed 2\n> Celsius to 51 Celsius since last report\n> Can this temperature influence in the performance?\n\nThat's close to the upper tolerance for this drive (55 degrees), which \nmeans the drive is being cooked and will likely wear out quickly. But \nthat won't slow it down, and you'd get much scarier messages out of smartd \nif the drives had a real problem. You should improve cooling in this case \nif you want the drives to have a healthy life; odds are low this is \nrelevant to your performance issue though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 4 Dec 2006 00:44:47 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "People recommend LSI MegaRAID controllers on here regularly, but I have\nfound that they do not work that well. I have bonnie++ numbers that show\nthe controller is not performing anywhere near the disk's saturation level\nin a simple RAID 1 on RedHat Linux EL4 on two seperate machines provided by\ntwo different hosting companies. In one case I asked them to replace the\ncard, and the numbers got a bit better, but still not optimal.\n\nLSI MegaRAID has proved to be a bit of a disapointment. I have seen better\nnumbers from the HP SmartArray 6i, and from 3ware cards with 7200RPM SATA\ndrives.\n\nfor the output: http://www.infoconinc.com/test/bonnie++.html (the first line\nis a six drive RAID 10 on a 3ware 9500S, the next three are all RAID 1s on\nLSI MegaRAID controllers, verified by lspci).\n\nAlex.\n\nOn 12/4/06, Greg Smith <[email protected]> wrote:\n>\n> On Thu, 30 Nov 2006, Carlos H. Reimer wrote:\n>\n> > I would like to discover how much cache is present in\n> > the controller, how can I find this value from Linux?\n>\n> As far as I know there is no cache on an Adaptec 39320. The write-back\n> cache Linux was reporting on was the one in the drives, which is 8MB; see\n>\n> http://www.seagate.com/cda/products/discsales/enterprise/tech/1,1593,541,00.html\n> Be warned that running your database with the combination of an uncached\n> controller plus disks with write caching is dangerous to your database\n> integrity.\n>\n> There is a common problem with the Linux driver for this card (aic7902)\n> where it enters what's they're calling an \"Infinite Interrupt Loop\".\n> That seems to match your readings:\n>\n> > Here is a typical iostat -x:\n> > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\n> > sda 0.00 7.80 0.40 6.40 41.60 113.60 20.80 56.80\n> > avgrq-sz avgqu-sz await svctm %util\n> > 22.82 570697.50 10.59 147.06 100.00\n>\n> An avgqu-sz of 570697.50 is extremely large. That explains why the\n> utilization is 100%, because there's a massive number of I/O operations\n> queued up that aren't getting flushed out. The read and write data says\n> these drives are barely doing anything, as 20kB/s and 57KB/s are\n> practically idle; they're not even remotely close to saturated.\n>\n> See http://lkml.org/lkml/2005/10/1/47 for a suggested workaround that may\n> reduce the magnitude of this issue; lower the card's speed to U160 in the\n> BIOS was also listed as a useful workaround. You might get better results\n> by upgrading to a newer Linux kernel, and just rebooting to clear out the\n> garbage might help if you haven't tried that yet.\n>\n> On the pessimistic side, other people reporting issues with this\n> controller are:\n>\n> http://lkml.org/lkml/2005/12/17/55\n> http://www.ussg.iu.edu/hypermail/linux/kernel/0512.2/0390.html\n>\n> http://www.linuxforums.org/forum/peripherals-hardware/59306-scsi-hangs-boot.html\n> and even under FreeBSD at\n> http://lists.freebsd.org/pipermail/aic7xxx/2003-August/003973.html\n>\n> This Adaptec card just barely works under Linux, which happens regularly\n> with their controllers, and my guess is that you've run into one of the\n> ways it goes crazy sometimes. I just chuckled when checking\n> http://linux.adaptec.com/ again and noticing they can't even be bothered\n> to keep that server up at all. 
According to\n>\n> http://www.adaptec.com/en-US/downloads/linux_source/linux_source_code?productId=ASC-39320-R&dn=Adaptec+SCSI+Card+39320-R\n> the driver for your card is \"*minimally tested* for Linux Kernel v2.6 on\n> all platforms.\" Adaptec doesn't care about Linux support on their\n> products; if you want a SCSI controller that actually works under Linux,\n> get an LSI MegaRAID.\n>\n> If this were really a Postgres problem, I wouldn't expect %iowait=1.10.\n> Were the database engine waiting to read/write data, that number would be\n> dramatically higher. Whatever is generating all these I/O requests, it's\n> not waiting for them to complete like the database would be. Besides the\n> driver problems that I'm very suspicious of, I'd suspect a runaway process\n> writing garbage to the disks might also cause this behavior.\n>\n> > Ive taken a look in the /var/log/messages and found some temperature\n> > messages about the disk drives:\n> > Nov 30 11:08:07 totall smartd[1620]: Device: /dev/sda, Temperature\n> changed 2\n> > Celsius to 51 Celsius since last report\n> > Can this temperature influence in the performance?\n>\n> That's close to the upper tolerance for this drive (55 degrees), which\n> means the drive is being cooked and will likely wear out quickly. But\n> that won't slow it down, and you'd get much scarier messages out of smartd\n> if the drives had a real problem. You should improve cooling in this case\n> if you want to drives to have a healthy life, odds are low this is\n> relevant to your performance issue though.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n",
"msg_date": "Mon, 4 Dec 2006 02:17:48 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Mon, 2006-12-04 at 01:17, Alex Turner wrote:\n> People recommend LSI MegaRAID controllers on here regularly, but I\n> have found that they do not work that well. I have bonnie++ numbers\n> that show the controller is not performing anywhere near the disk's\n> saturation level in a simple RAID 1 on RedHat Linux EL4 on two\n> seperate machines provided by two different hosting companies. In one\n> case I asked them to replace the card, and the numbers got a bit\n> better, but still not optimal. \n> \n> LSI MegaRAID has proved to be a bit of a disapointment. I have seen\n> better numbers from the HP SmartArray 6i, and from 3ware cards with\n> 7200RPM SATA drives.\n> \n> for the output: http://www.infoconinc.com/test/bonnie++.html (the\n> first line is a six drive RAID 10 on a 3ware 9500S, the next three are\n> all RAID 1s on LSI MegaRAID controllers, verified by lspci).\n\nWait, you're comparing a MegaRAID running a RAID 1 against another\ncontroller running a 6 disk RAID10? That's hardly fair.\n\nMy experience with the LSI was that with the 1.18 series drivers, they\nwere slow but stable.\n\nWith the version 2.x drivers, I found that the performance was very good\nwith RAID-5 and fair with RAID-1 and that layered RAID was not any\nbetter than unlayered (i.e. layering RAID0 over RAID1 resulted in basic\nRAID-1 performance).\n\nOTOH, with the choice at my last place of employment being LSI or\nAdaptec, LSI was a much better choice. :)\n\nI'd ask which LSI megaraid you've tested, and what driver was used. \nDoes RHEL4 have the megaraid 2 driver? \n",
"msg_date": "Mon, 04 Dec 2006 10:25:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Mon, 2006-12-04 at 10:25, Scott Marlowe wrote:\n> \n> OTOH, with the choice at my last place of employment being LSI or\n> Adaptec, LSI was a much better choice. :)\n> \n> I'd ask which LSI megaraid you've tested, and what driver was used. \n> Does RHEL4 have the megaraid 2 driver? \n\nJust wanted to add that what we used our database for at my last company\nwas for lots of mostly small writes / reads. I.e. sequential throughput\ndidn't really matter, but random write speed did. for that application,\nthe LSI Megaraid with battery backed cache was great.\n\nLast point, bonnie++ is a good benchmarking tool, but until you test\nyour app / postgresql on top of the hardware, you can't really say how\nwell it will perform.\n\nA controller that looks fast under a single bonnie++ thread might\nperform poorly when there are 100+ pending writes, and vice versa, a\ncontroller that looks mediocre under bonnie++ might shine when there's\nheavy parallel write load to handle.\n",
"msg_date": "Mon, 04 Dec 2006 10:37:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "The RAID 10 was in there merely for filling in, not really as a compare,\nindeed it would be ludicrous to compare a RAID 1 to a 6 drive RAID 10!!\n\nHow do I find out if it has version 2 of the driver?\n\nThis discussion I think is important, as I think it would be useful for this\nlist to have a list of RAID cards that _do_ work well under Linux/BSD for\npeople as recommended hardware for Postgresql. So far, all I can recommend\nis what I've found to be good, which is 3ware 9500 series cards with 10k\nSATA drives. Throughput was great until you reached higher levels of RAID\n10 (the bonnie++ mark I posted showed write speed is a bit slow). But that\ndoesn't solve the problem for SCSI. What cards in the SCSI arena solve the\nproblem optimally? Why should we settle for sub-optimal performance in SCSI\nwhen there are a number of almost optimally performing cards in the SATA\nworld (Areca, 3Ware/AMCC, LSI).\n\nThanks,\n\nAlex\n\nOn 12/4/06, Scott Marlowe <[email protected]> wrote:\n>\n> On Mon, 2006-12-04 at 01:17, Alex Turner wrote:\n> > People recommend LSI MegaRAID controllers on here regularly, but I\n> > have found that they do not work that well. I have bonnie++ numbers\n> > that show the controller is not performing anywhere near the disk's\n> > saturation level in a simple RAID 1 on RedHat Linux EL4 on two\n> > seperate machines provided by two different hosting companies. In one\n> > case I asked them to replace the card, and the numbers got a bit\n> > better, but still not optimal.\n> >\n> > LSI MegaRAID has proved to be a bit of a disapointment. I have seen\n> > better numbers from the HP SmartArray 6i, and from 3ware cards with\n> > 7200RPM SATA drives.\n> >\n> > for the output: http://www.infoconinc.com/test/bonnie++.html (the\n> > first line is a six drive RAID 10 on a 3ware 9500S, the next three are\n> > all RAID 1s on LSI MegaRAID controllers, verified by lspci).\n>\n> Wait, you're comparing a MegaRAID running a RAID 1 against another\n> controller running a 6 disk RAID10? That's hardly fair.\n>\n> My experience with the LSI was that with the 1.18 series drivers, they\n> were slow but stable.\n>\n> With the version 2.x drivers, I found that the performance was very good\n> with RAID-5 and fair with RAID-1 and that layered RAID was not any\n> better than unlayered (i.e. layering RAID0 over RAID1 resulted in basic\n> RAID-1 performance).\n>\n> OTOH, with the choice at my last place of employment being LSI or\n> Adaptec, LSI was a much better choice. :)\n>\n> I'd ask which LSI megaraid you've tested, and what driver was used.\n> Does RHEL4 have the megaraid 2 driver?\n>\n\nThe RAID 10 was in there merely for filling in, not really as a compare, indeed it would be ludicrous to compare a RAID 1 to a 6 drive RAID 10!!How do I find out if it has version 2 of the driver?This discussion I think is important, as I think it would be useful for this list to have a list of RAID cards that _do_ work well under Linux/BSD for people as recommended hardware for Postgresql. So far, all I can recommend is what I've found to be good, which is 3ware 9500 series cards with 10k SATA drives. Throughput was great until you reached higher levels of RAID 10 (the bonnie++ mark I posted showed write speed is a bit slow). But that doesn't solve the problem for SCSI. What cards in the SCSI arena solve the problem optimally? 
Why should we settle for sub-optimal performance in SCSI when there are a number of almost optimally performing cards in the SATA world (Areca, 3Ware/AMCC, LSI).\nThanks,AlexOn 12/4/06, Scott Marlowe <[email protected]> wrote:\nOn Mon, 2006-12-04 at 01:17, Alex Turner wrote:> People recommend LSI MegaRAID controllers on here regularly, but I> have found that they do not work that well. I have bonnie++ numbers> that show the controller is not performing anywhere near the disk's\n> saturation level in a simple RAID 1 on RedHat Linux EL4 on two> seperate machines provided by two different hosting companies. In one> case I asked them to replace the card, and the numbers got a bit\n> better, but still not optimal.>> LSI MegaRAID has proved to be a bit of a disapointment. I have seen> better numbers from the HP SmartArray 6i, and from 3ware cards with> 7200RPM SATA drives.\n>> for the output: http://www.infoconinc.com/test/bonnie++.html (the> first line is a six drive RAID 10 on a 3ware 9500S, the next three are\n> all RAID 1s on LSI MegaRAID controllers, verified by lspci).Wait, you're comparing a MegaRAID running a RAID 1 against anothercontroller running a 6 disk RAID10? That's hardly fair.My experience with the LSI was that with the \n1.18 series drivers, theywere slow but stable.With the version 2.x drivers, I found that the performance was very goodwith RAID-5 and fair with RAID-1 and that layered RAID was not anybetter than unlayered (\ni.e. layering RAID0 over RAID1 resulted in basicRAID-1 performance).OTOH, with the choice at my last place of employment being LSI orAdaptec, LSI was a much better choice. :)I'd ask which LSI megaraid you've tested, and what driver was used.\nDoes RHEL4 have the megaraid 2 driver?",
"msg_date": "Mon, 4 Dec 2006 12:37:29 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:\n>This discussion I think is important, as I think it would be useful for this\n>list to have a list of RAID cards that _do_ work well under Linux/BSD for\n>people as recommended hardware for Postgresql. So far, all I can recommend\n>is what I've found to be good, which is 3ware 9500 series cards with 10k\n>SATA drives. Throughput was great until you reached higher levels of RAID\n>10 (the bonnie++ mark I posted showed write speed is a bit slow). But that\n>doesn't solve the problem for SCSI. What cards in the SCSI arena solve the\n>problem optimally? Why should we settle for sub-optimal performance in SCSI\n>when there are a number of almost optimally performing cards in the SATA\n>world (Areca, 3Ware/AMCC, LSI).\n\nWell, one factor is to be more precise about what you're looking for; a \nHBA != RAID controller, and you may be comparing apples and oranges. (If \nyou have an external array with an onboard controller you probably want \na simple HBA rather than a RAID controller.)\n\nMike Stone\n",
"msg_date": "Mon, 04 Dec 2006 12:43:22 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "http://en.wikipedia.org/wiki/RAID_controller\n\nAlex\n\nOn 12/4/06, Michael Stone <[email protected]> wrote:\n>\n> On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:\n> >This discussion I think is important, as I think it would be useful for\n> this\n> >list to have a list of RAID cards that _do_ work well under Linux/BSD for\n> >people as recommended hardware for Postgresql. So far, all I can\n> recommend\n> >is what I've found to be good, which is 3ware 9500 series cards with 10k\n> >SATA drives. Throughput was great until you reached higher levels of\n> RAID\n> >10 (the bonnie++ mark I posted showed write speed is a bit slow). But\n> that\n> >doesn't solve the problem for SCSI. What cards in the SCSI arena solve\n> the\n> >problem optimally? Why should we settle for sub-optimal performance in\n> SCSI\n> >when there are a number of almost optimally performing cards in the SATA\n> >world (Areca, 3Ware/AMCC, LSI).\n>\n> Well, one factor is to be more precise about what you're looking for; a\n> HBA != RAID controller, and you may be comparing apples and oranges. (If\n> you have an external array with an onboard controller you probably want\n> a simple HBA rather than a RAID controller.)\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\nhttp://en.wikipedia.org/wiki/RAID_controllerAlexOn 12/4/06, Michael Stone <\[email protected]> wrote:On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:\n>This discussion I think is important, as I think it would be useful for this>list to have a list of RAID cards that _do_ work well under Linux/BSD for>people as recommended hardware for Postgresql. So far, all I can recommend\n>is what I've found to be good, which is 3ware 9500 series cards with 10k>SATA drives. Throughput was great until you reached higher levels of RAID>10 (the bonnie++ mark I posted showed write speed is a bit slow). But that\n>doesn't solve the problem for SCSI. What cards in the SCSI arena solve the>problem optimally? Why should we settle for sub-optimal performance in SCSI>when there are a number of almost optimally performing cards in the SATA\n>world (Areca, 3Ware/AMCC, LSI).Well, one factor is to be more precise about what you're looking for; aHBA != RAID controller, and you may be comparing apples and oranges. (Ifyou have an external array with an onboard controller you probably want\na simple HBA rather than a RAID controller.)Mike Stone---------------------------(end of broadcast)---------------------------TIP 6: explain analyze is your friend",
"msg_date": "Mon, 4 Dec 2006 12:52:46 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Mon, Dec 04, 2006 at 12:52:46PM -0500, Alex Turner wrote:\n>http://en.wikipedia.org/wiki/RAID_controller\n\nWhat is the wikipedia quote supposed to prove? Pray tell, if you \nconsider RAID==HBA, what would you call a SCSI (e.g.) controller that \nhas no RAID functionality? If you'd call it an HBA, then there is a \nuseful distinction to be made, no?\n\nMike Stone\n",
"msg_date": "Mon, 04 Dec 2006 13:03:17 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Mon, 2006-12-04 at 11:43, Michael Stone wrote:\n> On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:\n> >This discussion I think is important, as I think it would be useful for this\n> >list to have a list of RAID cards that _do_ work well under Linux/BSD for\n> >people as recommended hardware for Postgresql. So far, all I can recommend\n> >is what I've found to be good, which is 3ware 9500 series cards with 10k\n> >SATA drives. Throughput was great until you reached higher levels of RAID\n> >10 (the bonnie++ mark I posted showed write speed is a bit slow). But that\n> >doesn't solve the problem for SCSI. What cards in the SCSI arena solve the\n> >problem optimally? Why should we settle for sub-optimal performance in SCSI\n> >when there are a number of almost optimally performing cards in the SATA\n> >world (Areca, 3Ware/AMCC, LSI).\n> \n> Well, one factor is to be more precise about what you're looking for; a \n> HBA != RAID controller, and you may be comparing apples and oranges. (If \n> you have an external array with an onboard controller you probably want \n> a simple HBA rather than a RAID controller.)\n\nI think he's been pretty clear. He's just talking about SCSI based RAID\ncontrollers is all.\n",
"msg_date": "Mon, 04 Dec 2006 12:05:15 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Mon, 2006-12-04 at 11:37, Alex Turner wrote:\n> The RAID 10 was in there merely for filling in, not really as a\n> compare, indeed it would be ludicrous to compare a RAID 1 to a 6 drive\n> RAID 10!!\n> \n> How do I find out if it has version 2 of the driver?\n\nGo to the directory it lives in (on my Fedora Core 2 box, it's in\nsomething like: /lib/modules/2.6.10-1.9_FC2/kernel/drivers/scsi )\nand run modinfo on the driver:\n\nmodinfo megaraid.ko\nauthor: LSI Logic Corporation\ndescription: LSI Logic MegaRAID driver\nlicense: GPL\nversion: 2.00.3\n\nSNIPPED extra stuff\n\n> This discussion I think is important, as I think it would be useful\n> for this list to have a list of RAID cards that _do_ work well under\n> Linux/BSD for people as recommended hardware for Postgresql. So far,\n> all I can recommend is what I've found to be good, which is 3ware 9500\n> series cards with 10k SATA drives. Throughput was great until you\n> reached higher levels of RAID 10 (the bonnie++ mark I posted showed\n> write speed is a bit slow). But that doesn't solve the problem for\n> SCSI. What cards in the SCSI arena solve the problem optimally? Why\n> should we settle for sub-optimal performance in SCSI when there are a\n> number of almost optimally performing cards in the SATA world (Areca,\n> 3Ware/AMCC, LSI). \n\nWell, I think the LSI works VERY well under linux. And I've always made\nit quite clear in my posts that while I find it an acceptable performer,\nmy main recommendation is based on it's stability, not speed, and that\nthe Areca and 3Ware cards are generally regarded as faster. And all\nthree beat the adaptecs which are observed as being rather unstable.\n\nDoes this LSI have battery backed cache? Are you testing it under heavy\nparallel load versus single threaded to get an idea how it scales with\nmultiple processes hitting it at once?\n\nDon't get me wrong, I'm a big fan of running tools like bonnie to get a\nbasic idea of how good the hardware is, but benchmarks that simulate\nreal production loads are the only ones worth putting your trust in.\n",
"msg_date": "Mon, 04 Dec 2006 12:13:12 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Mon, 4 Dec 2006, Alex Turner wrote:\n\n> People recommend LSI MegaRAID controllers on here regularly, but I have \n> found that they do not work that well. I have bonnie++ numbers that \n> show the controller is not performing anywhere near the disk's \n> saturation level in a simple RAID 1 on RedHat Linux EL4 on two seperate \n> machines provided by two different hosting companies. \n> http://www.infoconinc.com/test/bonnie++.html\n\nI don't know what's going on with your www-september-06 machine, but the \nother two are giving 32-40MB/s writes and 53-68MB/s reads. For a RAID-1 \nvolume, these aren't awful numbers, but I agree they're not great.\n\nMy results are no better. For your comparison, here's a snippet of \nbonnie++ results from one of my servers: RHEL 4, P4 3GHz, MegaRAID \nfirmware 1L37, write-thru cache setup, RAID 1; I think the drives are 10K \nRPM Seagate Cheetahs. This is from the end of the drive where performance \nis the worst (I partitioned the important stuff at the beginning where \nit's fastest and don't have enough free space to run bonnie there):\n\n------Sequential Output------ --Sequential Input- --Random-\n-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nK/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n20708 50 21473 9 9603 3 34419 72 55799 7 467.1 1\n\n21Mb/s writes, 56MB/s reads. Not too different from yours (especially if \nyour results were from the beginning of the disk), and certainly nothing \nspecial. I might be able to tune the write performance higher if I cared; \nthe battery backed cache sits unused and everything is tuned for paranoia \nrather than performance. On this machine it doesn't matter.\n\nThe thing is, even though it's rarely the top performing card even when \nsetup perfectly, the LSI SCSI Megaraid just works. The driver is stable, \ncaching behavior is well defined, it's a pleasure to administer. I'm \nnever concerned that it's lying to me or doing anything to put data at \nrisk. The command-line tools for Linux work perfectly, let me look at or \ncontrol whatever I want, and it was straighforward for me to make my own \ncustomized monitoring script using them.\n\n> LSI MegaRAID has proved to be a bit of a disapointment. I have seen \n> better numbers from the HP SmartArray 6i, and from 3ware cards with \n> 7200RPM SATA drives.\n\nWhereas although I use 7200RPM SATA drives, I always try to keep an eye on \nthem because I never really trust them. The performance list archives \nhere also have plenty of comments about people having issues with the \nSmartArray controllers; search the archives for \"cciss\" and you'll see \nwhat I'm talking about.\n\nThe Megaraid controller is very boring. That's why I like it. As a Linux \ndistribution, RedHat has similar characteristics. If I were going for a \nperformance setup, I'd dump that, too, for something sexier with a newish \nkernel. It all depends on which side of the performance/stability \ntradeoff you're aiming at.\n\nOn Mon, 4 Dec 2006, Scott Marlowe wrote:\n> Does RHEL4 have the megaraid 2 driver?\n\nThis is from the moderately current RHEL4 installation I had results from \nabove. Redhat has probably done a kernel rev since I last updated back in \nSeptember, haven't needed or wanted to reboot since then:\n\nmegaraid cmm: 2.20.2.6 (Release Date: Mon Mar 7 00:01:03 EST 2005)\nmegaraid: 2.20.4.6-rh2 (Release Date: Wed Jun 28 12:27:22 EST 2006)\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 4 Dec 2006 23:10:00 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "I agree, that MegaRAID is very stable, and it's very appealing from that\nperspective. And two years ago I would have never even mentioned cciss\nbased cards on this list, because they sucked wind big time, but I believe\nsome people have started seeing better number from the 6i. 20MB/sec write,\nwhen the number should be closer to 60.... thats off by a factor of 3. For\nmy data wharehouse application, thats a big difference, and if I can get a\nbetter number from 7200RPM drives and a good SATA controller, I'm gonna do\nthat because my data isn't OLTP, and I don't care if the whole system shits\nitself and I have to restore from backup one day.\n\nMy other and most important point is that I can't find any solid\nrecommendations for a SCSI card that can perform optimally in Linux or\n*BSD. Off by a factor of 3x is pretty sad IMHO. (and yes, we know the\nAdaptec cards suck worse, that doesn't bring us to a _good_ card).\n\nAlex.\n\nOn 12/4/06, Greg Smith <[email protected]> wrote:\n>\n> On Mon, 4 Dec 2006, Alex Turner wrote:\n>\n> > People recommend LSI MegaRAID controllers on here regularly, but I have\n> > found that they do not work that well. I have bonnie++ numbers that\n> > show the controller is not performing anywhere near the disk's\n> > saturation level in a simple RAID 1 on RedHat Linux EL4 on two seperate\n> > machines provided by two different hosting companies.\n> > http://www.infoconinc.com/test/bonnie++.html\n>\n> I don't know what's going on with your www-september-06 machine, but the\n> other two are giving 32-40MB/s writes and 53-68MB/s reads. For a RAID-1\n> volume, these aren't awful numbers, but I agree they're not great.\n>\n> My results are no better. For your comparison, here's a snippet of\n> bonnie++ results from one of my servers: RHEL 4, P4 3GHz, MegaRAID\n> firmware 1L37, write-thru cache setup, RAID 1; I think the drives are 10K\n> RPM Seagate Cheetahs. This is from the end of the drive where performance\n> is the worst (I partitioned the important stuff at the beginning where\n> it's fastest and don't have enough free space to run bonnie there):\n>\n> ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> 20708 50 21473 9 9603 3 34419 72 55799 7 467.1 1\n>\n> 21Mb/s writes, 56MB/s reads. Not too different from yours (especially if\n> your results were from the beginning of the disk), and certainly nothing\n> special. I might be able to tune the write performance higher if I cared;\n> the battery backed cache sits unused and everything is tuned for paranoia\n> rather than performance. On this machine it doesn't matter.\n>\n> The thing is, even though it's rarely the top performing card even when\n> setup perfectly, the LSI SCSI Megaraid just works. The driver is stable,\n> caching behavior is well defined, it's a pleasure to administer. I'm\n> never concerned that it's lying to me or doing anything to put data at\n> risk. The command-line tools for Linux work perfectly, let me look at or\n> control whatever I want, and it was straighforward for me to make my own\n> customized monitoring script using them.\n>\n> > LSI MegaRAID has proved to be a bit of a disapointment. I have seen\n> > better numbers from the HP SmartArray 6i, and from 3ware cards with\n> > 7200RPM SATA drives.\n>\n> Whereas although I use 7200RPM SATA drives, I always try to keep an eye on\n> them because I never really trust them. 
The performance list archives\n> here also have plenty of comments about people having issues with the\n> SmartArray controllers; search the archives for \"cciss\" and you'll see\n> what I'm talking about.\n>\n> The Megaraid controller is very boring. That's why I like it. As a Linux\n> distribution, RedHat has similar characteristics. If I were going for a\n> performance setup, I'd dump that, too, for something sexier with a newish\n> kernel. It all depends on which side of the performance/stability\n> tradeoff you're aiming at.\n>\n> On Mon, 4 Dec 2006, Scott Marlowe wrote:\n> > Does RHEL4 have the megaraid 2 driver?\n>\n> This is from the moderately current RHEL4 installation I had results from\n> above. Redhat has probably done a kernel rev since I last updated back in\n> September, haven't needed or wanted to reboot since then:\n>\n> megaraid cmm: 2.20.2.6 (Release Date: Mon Mar 7 00:01:03 EST 2005)\n> megaraid: 2.20.4.6-rh2 (Release Date: Wed Jun 28 12:27:22 EST 2006)\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>",
"msg_date": "Tue, 5 Dec 2006 01:21:38 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex Turner wrote:\n>My other and most important point is that I can't find any solid\n>recommendations for a SCSI card that can perform optimally in Linux or\n>*BSD. Off by a factor of 3x is pretty sad IMHO. (and yes, we know the\n>Adaptec cards suck worse, that doesn't bring us to a _good_ card).\n\nThis gets back to my point about terminology. As a SCSI HBA the Adaptec \nis decent: I can sustain about 300MB/s off a single channel of the \n39320A using an external RAID controller. As a RAID controller I can't \neven imagine using the Adaptec; I'm fairly certain they put that \n\"functionality\" on there just so they could charge more for the card. It \nmay be that there's not much market for on-board SCSI RAID controllers; \nbetween SATA on the low end and SAS & FC on the high end, there isn't a\nwhole lotta space left for SCSI. I definitely don't think much\nR&D is going into SCSI controllers any more, compared to other solutions \nlike SATA or SAS RAID (the 39320 hasn't change in at least 3 years, \nIIRC). Anyway, since the Adaptec part is a decent SCSI controller and a \nlousy RAID controller, have you tried just using software RAID? \n\nMike Stone\n",
"msg_date": "Tue, 05 Dec 2006 07:15:09 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "The problem I see with software raid is the issue of a battery backed unit:\nIf the computer loses power, then the 'cache' which is held in system\nmemory, goes away, and fubars your RAID.\n\nAlex\n\nOn 12/5/06, Michael Stone <[email protected]> wrote:\n>\n> On Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex Turner wrote:\n> >My other and most important point is that I can't find any solid\n> >recommendations for a SCSI card that can perform optimally in Linux or\n> >*BSD. Off by a factor of 3x is pretty sad IMHO. (and yes, we know the\n> >Adaptec cards suck worse, that doesn't bring us to a _good_ card).\n>\n> This gets back to my point about terminology. As a SCSI HBA the Adaptec\n> is decent: I can sustain about 300MB/s off a single channel of the\n> 39320A using an external RAID controller. As a RAID controller I can't\n> even imagine using the Adaptec; I'm fairly certain they put that\n> \"functionality\" on there just so they could charge more for the card. It\n> may be that there's not much market for on-board SCSI RAID controllers;\n> between SATA on the low end and SAS & FC on the high end, there isn't a\n> whole lotta space left for SCSI. I definitely don't think much\n> R&D is going into SCSI controllers any more, compared to other solutions\n> like SATA or SAS RAID (the 39320 hasn't change in at least 3 years,\n> IIRC). Anyway, since the Adaptec part is a decent SCSI controller and a\n> lousy RAID controller, have you tried just using software RAID?\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\nThe problem I see with software raid is the issue of a battery backed unit: If the computer loses power, then the 'cache' which is held in system memory, goes away, and fubars your RAID.Alex\nOn 12/5/06, Michael Stone <[email protected]> wrote:\nOn Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex Turner wrote:>My other and most important point is that I can't find any solid>recommendations for a SCSI card that can perform optimally in Linux or>*BSD. Off by a factor of 3x is pretty sad IMHO. (and yes, we know the\n>Adaptec cards suck worse, that doesn't bring us to a _good_ card).This gets back to my point about terminology. As a SCSI HBA the Adaptecis decent: I can sustain about 300MB/s off a single channel of the\n39320A using an external RAID controller. As a RAID controller I can'teven imagine using the Adaptec; I'm fairly certain they put that\"functionality\" on there just so they could charge more for the card. It\nmay be that there's not much market for on-board SCSI RAID controllers;between SATA on the low end and SAS & FC on the high end, there isn't awhole lotta space left for SCSI. I definitely don't think much\nR&D is going into SCSI controllers any more, compared to other solutionslike SATA or SAS RAID (the 39320 hasn't change in at least 3 years,IIRC). Anyway, since the Adaptec part is a decent SCSI controller and a\nlousy RAID controller, have you tried just using software RAID?Mike Stone---------------------------(end of broadcast)---------------------------TIP 5: don't forget to increase your free space map settings",
"msg_date": "Tue, 5 Dec 2006 07:57:43 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "Alex Turner wrote:\n> The problem I see with software raid is the issue of a battery backed \n> unit: If the computer loses power, then the 'cache' which is held in \n> system memory, goes away, and fubars your RAID.\n\nI'm not sure I see the difference. If data are cached, they're not written whether it is software or hardware RAID. I guess if you're writing RAID 1, the N disks could be out of sync, but the system can synchronize them once the array is restored, so that's no different than a single disk or a hardware RAID. If you're writing RAID 5, then the blocks are inherently error detecting/correcting, so you're still OK if a partial write occurs, right?\n\nI'm not familiar with the inner details of software RAID, but the only circumstance I can see where things would get corrupted is if the RAID driver writes a LOT of blocks to one disk of the array before synchronizing the others, but my guess (and it's just a guess) is that the writes to the N disks are tightly coupled.\n\nIf I'm wrong about this, I'd like to know, because I'm using software RAID 1 and 1+0, and I'm pretty happy with it.\n\nCraig\n",
"msg_date": "Tue, 05 Dec 2006 06:46:34 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Tue, Dec 05, 2006 at 07:57:43AM -0500, Alex Turner wrote:\n>The problem I see with software raid is the issue of a battery backed unit:\n>If the computer loses power, then the 'cache' which is held in system\n>memory, goes away, and fubars your RAID.\n\nSince the Adaptec doesn't have a BBU, it's a lateral move. Also, this is \nless an issue of data integrity than performance; you can get exactly \nthe same level of integrity, you just have to wait for the data to sync \nto disk. If you're read-mostly that's irrelevant. \n\nMike Stone\n",
"msg_date": "Tue, 05 Dec 2006 09:49:41 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
{
"msg_contents": "On Tue, 5 Dec 2006, Craig A. James wrote:\n\n> I'm not familiar with the inner details of software RAID, but the only \n> circumstance I can see where things would get corrupted is if the RAID driver \n> writes a LOT of blocks to one disk of the array before synchronizing the \n> others...\n\nYou're talking about whether the discs in the RAID are kept consistant. \nWhile it's helpful with that, too, that's not the main reason a the \nbattery-backed cache is so helpful. When PostgreSQL writes to the WAL, it \nwaits until that data has really been placed on the drive before it enters \nthat update into the database. In a normal situation, that means that you \nhave to pause until the disk has physically written the blocks out, and \nthat puts a fairly low upper limit on write performance that's based on \nhow fast your drives rotate. RAID 0, RAID 1, none of that will speed up \nthe time it takes to complete a single synchronized WAL write.\n\nWhen your controller has a battery-backed cache, it can immediately tell \nPostgres that the WAL write completed succesfully, while actually putting \nit on the disk later. On my systems, this results in simple writes going \n2-4X as fast as they do without a cache. Should there be a PC failure, as \nlong as power is restored before the battery runs out that transaction \nwill be preserved.\n\nWhat Alex is rightly pointing out is that a software RAID approach doesn't \nhave this feature. In fact, in this area performance can be even worse \nunder SW RAID than what you get from a single disk, because you may have \nto wait for multiple discs to spin to the correct position and write data \nout before you can consider the transaction complete.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 5 Dec 2006 23:54:58 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
},
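A minimal SQL sketch of the effect described above, with a table and column invented purely for illustration: each statement below is its own transaction, so every COMMIT has to wait for its WAL record to reach stable storage. Timed with psql's \timing, the per-statement latency tracks how quickly the controller acknowledges that flush: roughly a platter rotation without a write-back cache, far less with a battery-backed one.

-- Hypothetical probe table; run the INSERTs with \timing turned on.
CREATE TABLE wal_latency_probe (
    id      serial PRIMARY KEY,
    payload text
);

-- Each single-statement transaction forces a synchronous WAL flush at COMMIT.
-- Without a battery-backed cache the latency is bounded by disk rotation;
-- with one, the controller acknowledges the flush from cache almost at once.
INSERT INTO wal_latency_probe (payload) VALUES ('ping');
INSERT INTO wal_latency_probe (payload) VALUES ('ping');
INSERT INTO wal_latency_probe (payload) VALUES ('ping');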
{
"msg_contents": "\nOn Dec 5, 2006, at 8:54 PM, Greg Smith wrote:\n\n> On Tue, 5 Dec 2006, Craig A. James wrote:\n>\n>> I'm not familiar with the inner details of software RAID, but the \n>> only circumstance I can see where things would get corrupted is if \n>> the RAID driver writes a LOT of blocks to one disk of the array \n>> before synchronizing the others...\n>\n> You're talking about whether the discs in the RAID are kept \n> consistant. While it's helpful with that, too, that's not the main \n> reason a the battery-backed cache is so helpful. When PostgreSQL \n> writes to the WAL, it waits until that data has really been placed \n> on the drive before it enters that update into the database. In a \n> normal situation, that means that you have to pause until the disk \n> has physically written the blocks out, and that puts a fairly low \n> upper limit on write performance that's based on how fast your \n> drives rotate. RAID 0, RAID 1, none of that will speed up the time \n> it takes to complete a single synchronized WAL write.\n>\n> When your controller has a battery-backed cache, it can immediately \n> tell Postgres that the WAL write completed succesfully, while \n> actually putting it on the disk later. On my systems, this results \n> in simple writes going 2-4X as fast as they do without a cache. \n> Should there be a PC failure, as long as power is restored before \n> the battery runs out that transaction will be preserved.\n>\n> What Alex is rightly pointing out is that a software RAID approach \n> doesn't have this feature. In fact, in this area performance can \n> be even worse under SW RAID than what you get from a single disk, \n> because you may have to wait for multiple discs to spin to the \n> correct position and write data out before you can consider the \n> transaction complete.\n\nSo... the ideal might be a RAID1 controller with BBU for the WAL and \nsomething else, such as software RAID, for the main data array?\n\nCheers,\n Steve\n\n",
"msg_date": "Wed, 6 Dec 2006 08:19:18 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad iostat numbers"
}
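A hedged sketch of how that split can look on the PostgreSQL side; the mount points and names below are assumptions, not anything from the thread. Table data can be steered onto the software RAID volume with a tablespace, while pg_xlog is not tablespace-aware and is normally relocated at the filesystem level (server stopped, directory moved to the battery-backed mirror, symlinked back).

-- Assumes /raid10_sw is the mount point of the software RAID array and is
-- owned by the postgres OS user; the path and tablespace name are made up.
CREATE TABLESPACE bulk_data LOCATION '/raid10_sw/pgdata';

-- New objects can then be placed on that array explicitly:
CREATE TABLE big_history (
    id       bigserial PRIMARY KEY,
    recorded timestamp DEFAULT now()
) TABLESPACE bulk_data;

-- pg_xlog itself cannot be moved with a tablespace; with the server stopped
-- it is typically moved to the BBU-backed mirror and symlinked, e.g.:
--   mv $PGDATA/pg_xlog /raid1_bbu/pg_xlog
--   ln -s /raid1_bbu/pg_xlog $PGDATA/pg_xlog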
] |
[
{
"msg_contents": "[[email protected] - Thu at 06:37:12PM -0600]\n> As my dataset has gotten larger I have had to throw more metal at the\n> problem, but I have also had to rethink my table and query design. Just\n> because your data set grows linearly does NOT mean that the performance of\n> your query is guaranteed to grow linearly! A sloppy query that runs OK\n> with 3000 rows in your table may choke horribly when you hit 50000.\n\nThen some limit is hit ... either the memory cache, or that the planner\nis doing an unlucky change of strategy when hitting 50000.\n\nAnyway, it's very important when testing queries that they actually are\ntested on a (copy of the) production database, and not on an empty\ndatabase or a database containing some random test data. If testing\nqueries off the production database, it's important to have equal\nhardware and configuration on both servers.\n\n",
"msg_date": "Fri, 1 Dec 2006 02:15:54 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defining performance."
},
{
"msg_contents": "Tobias Brox wrote:\n> [[email protected] - Thu at 06:37:12PM -0600]\n>> As my dataset has gotten larger I have had to throw more metal at the\n>> problem, but I have also had to rethink my table and query design. Just\n>> because your data set grows linearly does NOT mean that the performance of\n>> your query is guaranteed to grow linearly! A sloppy query that runs OK\n>> with 3000 rows in your table may choke horribly when you hit 50000.\n> \n> Then some limit is hit ... either the memory cache, or that the planner\n> is doing an unlucky change of strategy when hitting 50000.\n\nNot really. A bad query is a bad query (eg missing a join element). It \nwon't show up for 3000 rows, but will very quickly if you increase that \nby a reasonable amount. Even as simple as a missing index on a join \ncolumn won't show up for a small dataset but will for a larger one.\n\nIt's a pretty common mistake to assume that a small dataset will behave \nexactly the same as a larger one - not always the case.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Fri, 01 Dec 2006 14:32:05 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defining performance."
}
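As a concrete illustration of the kind of sloppy query being described (table and column names are invented for the sketch): a join whose join condition was forgotten still finishes quickly at a few thousand rows, but its intermediate result grows with the product of the row counts, so it collapses long before the tables themselves are large. Running it under EXPLAIN ANALYZE against a production-sized copy is what makes the difference visible.

-- Intended query: orders joined to their customers.
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created > now() - interval '1 day';

-- Sloppy variant: the join condition is missing, so every order row is
-- paired with every customer row.  At 3,000 rows apiece that is 9 million
-- intermediate rows and still "works"; at 50,000 apiece it is 2.5 billion.
SELECT o.id, c.name
FROM orders o, customers c
WHERE o.created > now() - interval '1 day';

-- Run both under EXPLAIN ANALYZE on a copy of production-sized data to see
-- the estimated and actual row counts diverge.
EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o, customers c
WHERE o.created > now() - interval '1 day';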
] |
[
{
"msg_contents": "[Chris - Fri at 02:32:05PM +1100]\n> Not really. A bad query is a bad query (eg missing a join element). It \n> won't show up for 3000 rows, but will very quickly if you increase that \n> by a reasonable amount. Even as simple as a missing index on a join \n> column won't show up for a small dataset but will for a larger one.\n\nOk, you're talking about O(n^2) and such stuff :-)\n\n> It's a pretty common mistake to assume that a small dataset will behave \n> exactly the same as a larger one - not always the case.\n\nNo. :-) We had the worst experience when launching our product - it had\nbeen stress tested, but only by increasing the number of customers, not\nby increasing the overall size of the data set available for browsing.\nWhen opening the web pages for the public, this data set was already\nsome ten-hundred times bigger than in the version enduring the stress\ntests - and the servers had no chances processing all the traffic.\n\nThe worst bottle neck was not the database this time, but some horror\nalgorithm one of the programmers had sneaked in ... poorly documented,\ncryptically programmed, slow ... and since I never understood that\nalgorithm, I can only guess it must have been of O(n^2).\n\n",
"msg_date": "Fri, 1 Dec 2006 05:03:26 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defining performance."
}
] |
[
{
"msg_contents": "Does anyone have any performance experience with the Dell Perc 5i\ncontrollers in RAID 10/RAID 5?\n\nThanks,\n\nAlex\n\nDoes anyone have any performance experience with the Dell Perc 5i controllers in RAID 10/RAID 5?Thanks,Alex",
"msg_date": "Fri, 1 Dec 2006 11:34:33 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of Perc 5i"
},
{
"msg_contents": "\n>Does anyone have any performance experience with the Dell Perc 5i\ncontrollers in \n>RAID 10/RAID 5?\n\nCheck the archives for details- I posted some numbers a while ago. I was\ngetting around 250 MB/s sequential write (dd) on Raid5x6, and about 220\nMB/s on Raid 10x4 (keep in mind that's dd- RAID10 should do better for\nrandom io). Default config and pgbench I've seen io > 130MB/s\nread/write, which is likely due to not using the full resources of the\nbox due to default config.\n\nI'm going to try and post some more detailed numbers when I get a chance\nto test the box again, but that probably won't be for a few more weeks. \n\nIf anyone's interested or has comments, here's my test plan:\n\nBonnie++ 1.03\ndd\npgbench\n\n(all of the above with sizes 2x RAM) against:\nRAID5x6\nRAID5x4\nRAID10x6\nRAID10x4\n\nIf I get time, I'd also like to test putting pg_xlog on RAID1x2 data on\nRAID10x4, vs. everything on RAID10x4 and RAID10x6, since I know people\nhave asked about this.\n\nHTH,\n\nBucky\n",
"msg_date": "Fri, 1 Dec 2006 12:06:42 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of Perc 5i"
}
] |
[
{
"msg_contents": "Hello Performance,\n\nYesterday, with help from the admin list, we took care of a problem we \nwere having with creating tables/views/indexes following a server \noverheat & crash (an index on pg_attribute was corrupted, causing the \ncreate to hang and go through the roof on memory usage until it failed \nout - a reindex fixed it like charm). Following the repair of creates \n(we also took the system into single user to run reindex system), and \nwith no other known problems with the db itself, we immediately began a \ndump of the database.\n\nTypical dump time: ~12 hours, we dump overnight but there is still \ndecently heavy activity. However, this dump has the box to itself and \nafter 10 hours we are only about 20% done just with pulling schema for \nthe indexes - something that typically takes it 4-6 hours to complete \nall schema entirely. Load on the machine is minimal, along with memory \nusage by the dump process itself (864M, not large for this system). It \nis definitely moving, but just very slowly. At this point we are \nattempting to determine if this is a machine level problem (which we \nhaven't seen sign of yet) or still a potential problem in postgres. The \ndump is currently doing I/O at 10mbps, but in testing our sys admin \nreports he has no problem getting stronger I/O stats from other processes.\n\nThe current dump query running:\nSELECT t.tableoid, t.oid, t.relname as indexname, \npg_catalog.pg_get_indexdef(i.indexrelid) as indexdef, t.relnatts as \nindnkeys, i.indkey, i.indisclustered, c.contype, c.conname, c.tableoid \nas contableoid, c.oid as conoid, (SELECT spcname FROM \npg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) as tablespace \nFROM pg_catalog.pg_index i JOIN pg_catalog.pg_class t ON (t.oid = \ni.indexrelid) LEFT JOIN pg_catalog.pg_depend d ON (d.classid = \nt.tableoid AND d.objid = t.oid AND d.deptype = 'i') LEFT JOIN \npg_catalog.pg_constraint c ON (d.refclassid = c.tableoid AND d.refobjid \n= c.oid) WHERE i.indrelid = '44240'::pg_catalog.oid ORDER BY indexname\n\nAmount of time it took me to run the query from console: ~5secs (I'm \ncounting in my head, sophisticated, eh?)\n\nWe tend to respond to slow queries with vacs and analyzes, but \nconsidering these are system tables that have recently been reindexed, \nhow likely is it that we could improve things by doing a vac? At this \npoint we plan to halt the dump and run a vac full on the db, but any \nideas you may have as to why the dump is sluggish on getting this \ninformation, I'd appreciate them.\n\nspec info -\nPostgres 8.1.4\ndb size: 200+ GB\n101,745 tables\n314,821 indexes\n1,569 views\nmaintenance_work_mem = 262144\n\nServer:\nOS: Solaris 10\nSunfire X4100 XL\n2x AMD Opteron Model 275 dual core procs\n8GB of ram\n(this server overheated & crashed due to a cooling problem at the \nhosting service)\n\nOn top of a:\nSun Storedge 6130\n14x 146GB Drives in a Raid 5\nBrocade 200E switches\nEmulex 4gb HBAs\n(this server had no known problems)\n\nThanks in advance,\nKim Hatcher <http://www.myemma.com>\n\n\n\n\n\n\nHello Performance,\n\n\nYesterday, with help from the admin list, we took care of a problem we\nwere having with creating tables/views/indexes following a server\noverheat & crash (an index on pg_attribute was corrupted, causing\nthe create to hang and go through the roof on memory usage until it\nfailed out - a reindex fixed it like charm). 
Following the repair of\ncreates (we also took the system into single user to run reindex\nsystem), and with no other known problems with the db itself, we\nimmediately began a dump of the database.\n\n\nTypical dump time: ~12 hours, we dump overnight but there is still\ndecently heavy activity. However, this dump has the box to itself and\nafter 10 hours we are only about 20% done just with pulling schema for\nthe indexes - something that typically takes it 4-6 hours to complete\nall schema entirely. Load on the machine is minimal, along with memory\nusage by the dump process itself (864M, not large for this system). It\nis definitely moving, but just very slowly. At this point we are\nattempting to determine if this is a machine level problem (which we\nhaven't seen sign of yet) or still a potential problem in postgres. The\ndump is currently doing I/O at 10mbps, but in testing our sys admin\nreports he has no problem getting stronger I/O stats from other\nprocesses.\n\n\nThe current dump query running:\n\nSELECT t.tableoid, t.oid, t.relname as indexname,\npg_catalog.pg_get_indexdef(i.indexrelid) as indexdef, t.relnatts as\nindnkeys, i.indkey, i.indisclustered, c.contype, c.conname, c.tableoid\nas contableoid, c.oid as conoid, (SELECT spcname FROM\npg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) as tablespace\nFROM pg_catalog.pg_index i JOIN pg_catalog.pg_class t ON (t.oid =\ni.indexrelid) LEFT JOIN pg_catalog.pg_depend d ON (d.classid =\nt.tableoid AND d.objid = t.oid AND d.deptype = 'i') LEFT JOIN\npg_catalog.pg_constraint c ON (d.refclassid = c.tableoid AND d.refobjid\n= c.oid) WHERE i.indrelid = '44240'::pg_catalog.oid ORDER BY indexname\n\n\nAmount of time it took me to run the query from console: ~5secs (I'm\ncounting in my head, sophisticated, eh?)\n\n\nWe tend to respond to slow queries with vacs and analyzes, but\nconsidering these are system tables that have recently been reindexed,\nhow likely is it that we could improve things by doing a vac? At this\npoint we plan to halt the dump and run a vac full on the db, but any\nideas you may have as to why the dump is sluggish on getting this\ninformation, I'd appreciate them.\n\n\nspec info -\n\nPostgres 8.1.4\n\ndb size: 200+ GB\n\n101,745 tables\n\n314,821 indexes\n\n1,569 views\n\nmaintenance_work_mem = 262144\n\n\nServer:\n\nOS: Solaris 10\n\nSunfire X4100 XL\n\n2x AMD Opteron Model 275 dual core procs\n\n8GB of ram\n\n(this server overheated & crashed due to a cooling problem at the\nhosting service)\n\nOn top of a:\n\nSun Storedge 6130\n\n14x 146GB Drives in a Raid 5\n\nBrocade 200E switches\n\nEmulex 4gb HBAs\n\n(this server had no known problems)\n\nThanks in advance,\nKim Hatcher",
"msg_date": "Sat, 02 Dec 2006 09:50:20 -0600",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dump performance problems following server crash"
},
{
"msg_contents": "Kim <[email protected]> writes:\n> The current dump query running:\n> SELECT t.tableoid, t.oid, t.relname as indexname, \n> pg_catalog.pg_get_indexdef(i.indexrelid) as indexdef, t.relnatts as \n> indnkeys, i.indkey, i.indisclustered, c.contype, c.conname, c.tableoid \n> as contableoid, c.oid as conoid, (SELECT spcname FROM \n> pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) as tablespace \n> FROM pg_catalog.pg_index i JOIN pg_catalog.pg_class t ON (t.oid = \n> i.indexrelid) LEFT JOIN pg_catalog.pg_depend d ON (d.classid = \n> t.tableoid AND d.objid = t.oid AND d.deptype = 'i') LEFT JOIN \n> pg_catalog.pg_constraint c ON (d.refclassid = c.tableoid AND d.refobjid \n> = c.oid) WHERE i.indrelid = '44240'::pg_catalog.oid ORDER BY indexname\n\n> Amount of time it took me to run the query from console: ~5secs (I'm \n> counting in my head, sophisticated, eh?)\n\nEven 5 seconds is way too long. You've apparently still got something\ncorrupted somewhere. Did you reindex *all* the system catalogs?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Dec 2006 12:13:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump performance problems following server crash "
},
{
"msg_contents": "We dropped into single user mode and ran reindex system - it was my \nunderstanding this would reindex them all, including shared catalogs - \nbut perhaps not?\n\nKim\n\nTom Lane wrote:\n\n>Kim <[email protected]> writes:\n> \n>\n>>The current dump query running:\n>>SELECT t.tableoid, t.oid, t.relname as indexname, \n>>pg_catalog.pg_get_indexdef(i.indexrelid) as indexdef, t.relnatts as \n>>indnkeys, i.indkey, i.indisclustered, c.contype, c.conname, c.tableoid \n>>as contableoid, c.oid as conoid, (SELECT spcname FROM \n>>pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) as tablespace \n>>FROM pg_catalog.pg_index i JOIN pg_catalog.pg_class t ON (t.oid = \n>>i.indexrelid) LEFT JOIN pg_catalog.pg_depend d ON (d.classid = \n>>t.tableoid AND d.objid = t.oid AND d.deptype = 'i') LEFT JOIN \n>>pg_catalog.pg_constraint c ON (d.refclassid = c.tableoid AND d.refobjid \n>>= c.oid) WHERE i.indrelid = '44240'::pg_catalog.oid ORDER BY indexname\n>> \n>>\n>\n> \n>\n>>Amount of time it took me to run the query from console: ~5secs (I'm \n>>counting in my head, sophisticated, eh?)\n>> \n>>\n>\n>Even 5 seconds is way too long. You've apparently still got something\n>corrupted somewhere. Did you reindex *all* the system catalogs?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n>\n> \n>\n\n-- \n\n*kim hatcher*\nsenior developer, emma�\ne: [email protected] <mailto:[email protected]>\np: 800 595 4401\nw: www.myemma.com <http://www.myemma.com>\n\n\n\n\n\n\nWe dropped into single user mode and ran reindex system - it was my\nunderstanding this would reindex them all, including shared catalogs -\nbut perhaps not? \n\nKim\n\nTom Lane wrote:\n\nKim <[email protected]> writes:\n \n\nThe current dump query running:\nSELECT t.tableoid, t.oid, t.relname as indexname, \npg_catalog.pg_get_indexdef(i.indexrelid) as indexdef, t.relnatts as \nindnkeys, i.indkey, i.indisclustered, c.contype, c.conname, c.tableoid \nas contableoid, c.oid as conoid, (SELECT spcname FROM \npg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) as tablespace \nFROM pg_catalog.pg_index i JOIN pg_catalog.pg_class t ON (t.oid = \ni.indexrelid) LEFT JOIN pg_catalog.pg_depend d ON (d.classid = \nt.tableoid AND d.objid = t.oid AND d.deptype = 'i') LEFT JOIN \npg_catalog.pg_constraint c ON (d.refclassid = c.tableoid AND d.refobjid \n= c.oid) WHERE i.indrelid = '44240'::pg_catalog.oid ORDER BY indexname\n \n\n\n \n\nAmount of time it took me to run the query from console: ~5secs (I'm \ncounting in my head, sophisticated, eh?)\n \n\n\nEven 5 seconds is way too long. You've apparently still got something\ncorrupted somewhere. Did you reindex *all* the system catalogs?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n \n\n\n-- \n\nkim hatcher\n\nsenior developer, emma®\ne: [email protected]\np: 800 595 4401\nw: www.myemma.com",
"msg_date": "Sat, 02 Dec 2006 11:29:30 -0600",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dump performance problems following server crash"
}
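One way to narrow this down, sketched below under hedged assumptions rather than as a prescription: reindex just the catalogs the slow pg_dump query touches, vacuum their heaps (catalog heaps can be bloated even when their indexes are sound), and re-time a trimmed version of the query; the OID is the one from the earlier example. Per the REINDEX documentation, REINDEX SYSTEM does cover the shared catalogs, so if this changes nothing the catalogs may simply be very large given the 100,000+ tables involved.

-- Hedged sketch: reindex and vacuum the catalogs the dump query reads
-- (superuser; on 8.1, single-user mode remains the safest place for
-- reindexing system catalogs).
REINDEX TABLE pg_catalog.pg_class;
REINDEX TABLE pg_catalog.pg_index;
REINDEX TABLE pg_catalog.pg_depend;
REINDEX TABLE pg_catalog.pg_constraint;

VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_index;
VACUUM ANALYZE pg_catalog.pg_depend;
VACUUM ANALYZE pg_catalog.pg_constraint;

-- Re-time a trimmed version of the per-table query pg_dump issues.
EXPLAIN ANALYZE
SELECT t.relname
FROM pg_catalog.pg_index i
JOIN pg_catalog.pg_class t ON t.oid = i.indexrelid
WHERE i.indrelid = '44240'::pg_catalog.oid;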
] |
[
{
"msg_contents": "Hello..\n\nI have a low performance problem with regexp.\n\nHere are the details:\n\nasterisk=> explain analyze SELECT * FROM destlist WHERE '0039051248787' ~ \nprefix AND id_ent='-2' AND dir=0 ORDER by length(prefix) DESC;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=7925.07..7925.15 rows=31 width=67) (actual \ntime=857.715..857.716 rows=2 loops=1)\n Sort Key: length((prefix)::text)\n -> Bitmap Heap Scan on destlist (cost=60.16..7924.30 rows=31 width=67) \n(actual time=2.156..857.686 rows=2 loops=1)\n Recheck Cond: ((id_ent = -2) AND (dir = 0))\n Filter: ('0039051248787'::text ~ (prefix)::text)\n -> Bitmap Index Scan on destlist_indx2 (cost=0.00..60.16 \nrows=6193 width=0) (actual time=1.961..1.961 rows=5205 loops=1)\n Index Cond: ((id_ent = -2) AND (dir = 0))\n Total runtime: 857.804 ms\n(8 rows)\n\n\nThe main problem is the query time.\nAs you can see , the use of index destlist_indx2 is pretty quick (1.9 ms) \n, but the regexp operation takes a lot of time (857 ms).\n\nHow can i improve that ?\n\nRegards\n Alex\n\n\nPS: Additional info:\n\n[root@voce1 billing]# uname -a\nLinux voce1 2.6.11-1.1369_FC4smp #1 SMP Thu Jun 2 23:16:33 EDT 2005 x86_64 \nx86_64 x86_64 GNU/Linux\n\n[root@voce1 billing]# free\n total used free shared buffers cached\nMem: 1023912 977788 46124 0 80900 523868\n-/+ buffers/cache: 373020 650892\nSwap: 3172728 8488 3164240\n\n\nWelcome to psql 8.1.2, the PostgreSQL interactive terminal.\n\nasterisk=> \\d destlist;\n Table \"public.destlist\"\n Column | Type | Modifiers\n---------+------------------------+-----------------------------------------------------------\n id | bigint | not null default \nnextval(('destlist_id'::text)::regclass)\n id_ent | integer |\n dir | integer |\n prefix | character varying(255) |\n country | character varying(255) |\n network | character varying(255) |\n tip | integer |\n\n\nIndexes:\n \"destlist_unique\" UNIQUE, btree (id_ent, dir, prefix)\n \"destlist_indx2\" btree (id_ent, dir)\n \"destlist_indx3\" btree (id_ent, dir, prefix)\n \"mmumu\" btree (prefix varchar_pattern_ops)\n\n\nasterisk=> select count(*) from destlist;\n count\n--------\n 576424\n(1 row)\n\n\n\n\n\n\n[root@voce1 billing]# cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 4\nmodel name : Intel(R) Xeon(TM) CPU 3.00GHz\nstepping : 3\ncpu MHz : 2992.658\ncache size : 2048 KB\nphysical id : 0\nsiblings : 2\ncore id : 0\ncpu cores : 1\nfpu : yes\nfpu_exception : yes\ncpuid level : 5\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca \ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm \nconstant_tsc pni monitor ds_cpl cid cx16 xtpr\nbogomips : 5914.62\nclflush size : 64\ncache_alignment : 128\naddress sizes : 36 bits physical, 48 bits virtual\npower management:\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 4\nmodel name : Intel(R) Xeon(TM) CPU 3.00GHz\nstepping : 3\ncpu MHz : 2992.658\ncache size : 2048 KB\nphysical id : 0\nsiblings : 2\ncore id : 0\ncpu cores : 1\nfpu : yes\nfpu_exception : yes\ncpuid level : 5\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca \ncmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm \nconstant_tsc pni monitor ds_cpl cid cx16 xtpr\nbogomips : 5980.16\nclflush size : 64\ncache_alignment : 128\naddress sizes : 36 bits physical, 48 bits virtual\npower 
management:\n\n\n",
"msg_date": "Sat, 2 Dec 2006 21:00:49 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regex performance issue"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] On Behalf Of Alexandru Coseru\n> asterisk=> explain analyze SELECT * FROM destlist WHERE \n> '0039051248787' ~ \n> prefix AND id_ent='-2' AND dir=0 ORDER by length(prefix) DESC;\n>\n> \n> QUERY PLAN\n> --------------------------------------------------------------\n> ----------------------------------------------------------------------\n> Sort (cost=7925.07..7925.15 rows=31 width=67) (actual \n> time=857.715..857.716 rows=2 loops=1)\n> Sort Key: length((prefix)::text)\n> -> Bitmap Heap Scan on destlist (cost=60.16..7924.30 \n> rows=31 width=67) \n> (actual time=2.156..857.686 rows=2 loops=1)\n> Recheck Cond: ((id_ent = -2) AND (dir = 0))\n> Filter: ('0039051248787'::text ~ (prefix)::text)\n> -> Bitmap Index Scan on destlist_indx2 (cost=0.00..60.16 \n> rows=6193 width=0) (actual time=1.961..1.961 rows=5205 loops=1)\n> Index Cond: ((id_ent = -2) AND (dir = 0))\n> Total runtime: 857.804 ms\n> (8 rows)\n> \n>\n> \"mmumu\" btree (prefix varchar_pattern_ops)\n> \n\nI'm surpised Postgres isn't using the index on prefix seeing as the index\nuses the varchar_pattern_ops operator class. It could be that the index\nisn't selective enough, or is Postgres not able to use an index with Posix\nregular expressions? The docs seem to say that it can, but I'd be curious\nto see what happens if you use LIKE instead of ~. \n\nDave\n \n\n",
"msg_date": "Sat, 2 Dec 2006 14:36:49 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regex performance issue"
},
{
"msg_contents": "Hello...\n\nI cannot use LIKE , because the order of the match is reversed.\nThe prefix column is containing telephone destinations.\nIE: ^001 - US , ^0039 Italy , etc..\n\nHere is a small sample:\n\nasterisk=> select * from destlist LIMIT 10;\n id | id_ent | dir | prefix | country | network | tip\n----+--------+-----+------------+-------------+--------------------+-----\n 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5\n\n\nNow , I have to match a dialednumber (let's say 00213618833) and find \nit's destination...(It's algeria mobile).\nI tried to make with a query of using LIKE , but i was not able to get \nsomething..\n\n\nRegards\n Alex\n\n\n\n\n\n----- Original Message ----- \nFrom: \"Dave Dutcher\" <[email protected]>\nTo: \"'Alexandru Coseru'\" <[email protected]>; \n<[email protected]>\nSent: Saturday, December 02, 2006 10:36 PM\nSubject: RE: [PERFORM] Regex performance issue\n\n\n> -----Original Message-----\n> From: [email protected] On Behalf Of Alexandru Coseru\n> asterisk=> explain analyze SELECT * FROM destlist WHERE\n> '0039051248787' ~\n> prefix AND id_ent='-2' AND dir=0 ORDER by length(prefix) DESC;\n>\n>\n> QUERY PLAN\n> --------------------------------------------------------------\n> ----------------------------------------------------------------------\n> Sort (cost=7925.07..7925.15 rows=31 width=67) (actual\n> time=857.715..857.716 rows=2 loops=1)\n> Sort Key: length((prefix)::text)\n> -> Bitmap Heap Scan on destlist (cost=60.16..7924.30\n> rows=31 width=67)\n> (actual time=2.156..857.686 rows=2 loops=1)\n> Recheck Cond: ((id_ent = -2) AND (dir = 0))\n> Filter: ('0039051248787'::text ~ (prefix)::text)\n> -> Bitmap Index Scan on destlist_indx2 (cost=0.00..60.16\n> rows=6193 width=0) (actual time=1.961..1.961 rows=5205 loops=1)\n> Index Cond: ((id_ent = -2) AND (dir = 0))\n> Total runtime: 857.804 ms\n> (8 rows)\n>\n>\n> \"mmumu\" btree (prefix varchar_pattern_ops)\n>\n\nI'm surpised Postgres isn't using the index on prefix seeing as the index\nuses the varchar_pattern_ops operator class. It could be that the index\nisn't selective enough, or is Postgres not able to use an index with Posix\nregular expressions? The docs seem to say that it can, but I'd be curious\nto see what happens if you use LIKE instead of ~.\n\nDave\n\n\n\n\n\n-- \nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.1.409 / Virus Database: 268.15.4/563 - Release Date: 12/2/2006\n\n\n",
"msg_date": "Sat, 2 Dec 2006 22:48:45 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regex performance issue"
},
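For what it's worth, the reversed match can be written with LIKE by putting the wildcard on the prefix side rather than on the dialed number. A sketch, assuming a cleaned-up digits-only column (called prefix_digits here, i.e. the prefix without the (^...) wrapper):

    -- "this row's prefix is a leading substring of the dialed number", without a regex
    SELECT *
    FROM destlist
    WHERE id_ent = -2 AND dir = 0
      AND '00213618833' LIKE prefix_digits || '%'
    ORDER BY length(prefix_digits) DESC
    LIMIT 1;

This avoids the regex engine but still tests every candidate row, because the pattern varies per row; on its own it does not make the predicate indexable.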
{
"msg_contents": "I may miss something but I'd use tsearch2. Check\nintdict dictionary for basic idea - http://www.sai.msu.su/~megera/wiki/Gendict\n\nOleg\nOn Sat, 2 Dec 2006, Alexandru Coseru wrote:\n\n> Hello...\n>\n> I cannot use LIKE , because the order of the match is reversed.\n> The prefix column is containing telephone destinations.\n> IE: ^001 - US , ^0039 Italy , etc..\n>\n> Here is a small sample:\n>\n> asterisk=> select * from destlist LIMIT 10;\n> id | id_ent | dir | prefix | country | network | tip\n> ----+--------+-----+------------+-------------+--------------------+-----\n> 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n> 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n> 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n> 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n> 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n> 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n> 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n> 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n> 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n> 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5\n>\n>\n> Now , I have to match a dialednumber (let's say 00213618833) and find \n> it's destination...(It's algeria mobile).\n> I tried to make with a query of using LIKE , but i was not able to get \n> something..\n>\n>\n> Regards\n> Alex\n>\n>\n>\n>\n>\n> ----- Original Message ----- From: \"Dave Dutcher\" <[email protected]>\n> To: \"'Alexandru Coseru'\" <[email protected]>; \n> <[email protected]>\n> Sent: Saturday, December 02, 2006 10:36 PM\n> Subject: RE: [PERFORM] Regex performance issue\n>\n>\n>> -----Original Message-----\n>> From: [email protected] On Behalf Of Alexandru Coseru\n>> asterisk=> explain analyze SELECT * FROM destlist WHERE\n>> '0039051248787' ~\n>> prefix AND id_ent='-2' AND dir=0 ORDER by length(prefix) DESC;\n>> \n>> \n>> QUERY PLAN\n>> --------------------------------------------------------------\n>> ----------------------------------------------------------------------\n>> Sort (cost=7925.07..7925.15 rows=31 width=67) (actual\n>> time=857.715..857.716 rows=2 loops=1)\n>> Sort Key: length((prefix)::text)\n>> -> Bitmap Heap Scan on destlist (cost=60.16..7924.30\n>> rows=31 width=67)\n>> (actual time=2.156..857.686 rows=2 loops=1)\n>> Recheck Cond: ((id_ent = -2) AND (dir = 0))\n>> Filter: ('0039051248787'::text ~ (prefix)::text)\n>> -> Bitmap Index Scan on destlist_indx2 (cost=0.00..60.16\n>> rows=6193 width=0) (actual time=1.961..1.961 rows=5205 loops=1)\n>> Index Cond: ((id_ent = -2) AND (dir = 0))\n>> Total runtime: 857.804 ms\n>> (8 rows)\n>> \n>>\n>> \"mmumu\" btree (prefix varchar_pattern_ops)\n>> \n>\n> I'm surpised Postgres isn't using the index on prefix seeing as the index\n> uses the varchar_pattern_ops operator class. It could be that the index\n> isn't selective enough, or is Postgres not able to use an index with Posix\n> regular expressions? The docs seem to say that it can, but I'd be curious\n> to see what happens if you use LIKE instead of ~.\n>\n> Dave\n>\n>\n>\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Sat, 2 Dec 2006 23:54:59 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regex performance issue"
},
{
"msg_contents": "Alexandru Coseru wrote:\n> I cannot use LIKE , because the order of the match is reversed.\n> The prefix column is containing telephone destinations.\n> IE: ^001 - US , ^0039 Italy , etc..\n\nMaybe you could create a functional index on substr(<minimum length of \nprefix>)? It might restrict the result set prior to applying the regex \njust enough to make the performance acceptable.\n\n> asterisk=> select * from destlist LIMIT 10;\n> id | id_ent | dir | prefix | country | network | tip\n> ----+--------+-----+------------+-------------+--------------------+-----\n> 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n> 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n> 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n> 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n> 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n> 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n> 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n> 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n> 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n> 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5\n> \n> Now , I have to match a dialednumber (let's say 00213618833) and find it's destination...(It's algeria mobile).\n> I tried to make with a query of using LIKE , but i was not able to get something..\n\nAnother idea would be to add some extra rows so that you could use \nnormal inequality searches. For example, let's take the Albanian rows:\n\n 3 | -1 | 0 | 00355\n 4 | -1 | 0 | 0035538\n* 3 | -1 | 0 | 0035539\n 5 | -1 | 0 | 0035568\n 6 | -1 | 0 | 0035569\n* 3 | -1 | 0 | 0035570\n\nNow you can do \"SELECT * FROM destlist WHERE ? >= prefix ORDER BY prefix \nLIMIT 1\".\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sat, 02 Dec 2006 22:04:06 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regex performance issue"
},
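One concrete form the substr() idea could take, sketched under two assumptions: the regex decoration has been stripped into a digits-only column (prefix_digits), and no stored prefix is shorter than three digits (the shortest shown in the thread is ^001). The expression index narrows the candidates before the per-row starts-with test runs:

    CREATE INDEX destlist_prefix3 ON destlist (substr(prefix_digits, 1, 3));
    ANALYZE destlist;

    SELECT *
    FROM destlist
    WHERE substr(prefix_digits, 1, 3) = substr('00213618833', 1, 3)  -- matched via the expression index
      AND '00213618833' LIKE prefix_digits || '%'                    -- exact starts-with test
    ORDER BY length(prefix_digits) DESC
    LIMIT 1;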
{
"msg_contents": "Hello..\n\nI cannot use the first advice , because i'm not aware of the prefix length \nin the database...\nThis is why i'm ordering after length(prefix)..\n\n\nOn the 2nd one , i'm not sure that i can follow you..\n\nRegards\n Alex\n----- Original Message ----- \nFrom: \"Heikki Linnakangas\" <[email protected]>\nTo: \"Alexandru Coseru\" <[email protected]>\nCc: \"Dave Dutcher\" <[email protected]>; <[email protected]>\nSent: Sunday, December 03, 2006 12:04 AM\nSubject: Re: [PERFORM] Regex performance issue\n\n\n> Alexandru Coseru wrote:\n>> I cannot use LIKE , because the order of the match is reversed.\n>> The prefix column is containing telephone destinations.\n>> IE: ^001 - US , ^0039 Italy , etc..\n>\n> Maybe you could create a functional index on substr(<minimum length of \n> prefix>)? It might restrict the result set prior to applying the regex \n> just enough to make the performance acceptable.\n>\n>> asterisk=> select * from destlist LIMIT 10;\n>> id | id_ent | dir | prefix | country | network | tip\n>> ----+--------+-----+------------+-------------+--------------------+-----\n>> 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n>> 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n>> 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n>> 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n>> 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n>> 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n>> 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n>> 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n>> 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n>> 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5\n>>\n>> Now , I have to match a dialednumber (let's say 00213618833) and find \n>> it's destination...(It's algeria mobile).\n>> I tried to make with a query of using LIKE , but i was not able to get \n>> something..\n>\n> Another idea would be to add some extra rows so that you could use normal \n> inequality searches. For example, let's take the Albanian rows:\n>\n> 3 | -1 | 0 | 00355\n> 4 | -1 | 0 | 0035538\n> * 3 | -1 | 0 | 0035539\n> 5 | -1 | 0 | 0035568\n> 6 | -1 | 0 | 0035569\n> * 3 | -1 | 0 | 0035570\n>\n> Now you can do \"SELECT * FROM destlist WHERE ? >= prefix ORDER BY prefix \n> LIMIT 1\".\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.409 / Virus Database: 268.15.4/563 - Release Date: 12/2/2006\n>\n> \n\n",
"msg_date": "Sun, 3 Dec 2006 00:13:27 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regex performance issue"
},
{
"msg_contents": "Alexandru Coseru wrote:\n> Hello..\n> \n> I cannot use the first advice , because i'm not aware of the prefix \n> length in the database...\n> This is why i'm ordering after length(prefix)..\n> \n> \n> On the 2nd one , i'm not sure that i can follow you..\n\nOk, let me try again :)\n\n> asterisk=> select * from destlist LIMIT 10;\n> id | id_ent | dir | prefix | country | network | tip\n> ----+--------+-----+------------+-------------+--------------------+-----\n> 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n> 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n> 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n> 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n> 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n> 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n> 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n> 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n> 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n> 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5 \n\nStore the prefix in a character column, without the regex stuff. Like \nbelow. I've removed the columns that are not relevant, in fact it would \nmake sense to store them in another table, and have just the id and \nprefix in this table.\n\nid | prefix | network\n---+---------+--------------------\n 1 | 0093 | AFGHANISTAN\n 2 | 00937 | AFGHANISTAN Mobile\n 3 | 00355 | ALBANIA\n 4 | 0035538 | ALBANIA Mobile\n 5 | 0035568 | ALBANIA Mobile\n 6 | 0035569 | ALBANIA Mobile\n 7 | 00213 | ALGERIA\n 8 | 0021361 | ALGERIA Mobile\n 9 | 0021362 | ALGERIA Mobile\n10 | 0021363 | ALGERIA Mobile\n\nNow, add the rows marked with start below:\n\n id | prefix | network\n----+---------+--------------------\n 1 | 0093 | AFGHANISTAN\n 2 | 00937 | AFGHANISTAN Mobile\n* 1 | 00938 | AFGHANISTAN\n 3 | 00355 | ALBANIA\n 4 | 0035538 | ALBANIA Mobile\n* 3 | 0035539 | ALBANIA\n 5 | 0035568 | ALBANIA Mobile\n 6 | 0035569 | ALBANIA Mobile\n* 3 | 003557 | ALBANIA\n 7 | 00213 | ALGERIA\n 8 | 0021361 | ALGERIA Mobile\n 9 | 0021362 | ALGERIA Mobile\n 10 | 0021363 | ALGERIA Mobile\n* 7 | 0021364 | ALGERIA\n\nThe added rows make it unnecessary to use regex for the searches. You \ncan use just the >= operator like this: (using the example number you gave)\n\nSELECT id FROM destlist WHERE '00213618833' >= prefix ORDER BY prefix \nLIMIT 1\n\nWhich would return id 7. As another example, a query of \"00213654321\" \nwould match the last row, which again has id 7.\n\nI'm too tired to figure out the exact algorithm for adding the rows, but \nI'm pretty sure it can be automated... The basic idea is that when \nthere's a row with id A and prefix XXXX, and another row with id B and \nprefix XXXXY, we add another row with id A and prefix XXXX(Y+1).\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sat, 02 Dec 2006 22:35:18 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regex performance issue"
},
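A related way to get a longest-prefix match that needs neither the regex nor the extra boundary rows, again sketched on a cleaned digits-only column carrying a plain btree index: enumerate every leading substring of the dialed number and probe with ordinary equality.

    -- one equality probe per possible prefix length of '00213618833'; the longest hit wins
    SELECT d.*
    FROM destlist d
    JOIN (SELECT substr('00213618833', 1, n) AS candidate
          FROM generate_series(1, length('00213618833')) AS n) AS c
      ON d.prefix_digits = c.candidate
    ORDER BY length(d.prefix_digits) DESC
    LIMIT 1;

With only a dozen or so btree lookups per call, this should stay in the low milliseconds no matter how many destinations the table holds.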
{
"msg_contents": "Hello..\n\nThanks for the tip , i think i have got the ideea..\n\nI'm too tired too , and i will try it tommorow.\n\n\nAnyway , anybody has a clue why this regex is that CPU intensive ? I did \nnot saw the light on my drives blinking , and also vmstat doesn't yeld any \nblocks in or out...\nAnd how can it be optimized ?\n\nIs there a way to trace the system calls ?\nstrace doesn't give me anything else but some lseeks and reads...\n\n\nPS: Tried it with a 8.2 snaphsot and the result is the same..\n\nRegards\n Alex\n----- Original Message ----- \nFrom: \"Heikki Linnakangas\" <[email protected]>\nTo: \"Alexandru Coseru\" <[email protected]>\nCc: <[email protected]>\nSent: Sunday, December 03, 2006 12:35 AM\nSubject: Re: [PERFORM] Regex performance issue\n\n\n> Alexandru Coseru wrote:\n>> Hello..\n>>\n>> I cannot use the first advice , because i'm not aware of the prefix \n>> length in the database...\n>> This is why i'm ordering after length(prefix)..\n>>\n>>\n>> On the 2nd one , i'm not sure that i can follow you..\n>\n> Ok, let me try again :)\n>\n>> asterisk=> select * from destlist LIMIT 10;\n>> id | id_ent | dir | prefix | country | network | tip\n>> ----+--------+-----+------------+-------------+--------------------+-----\n>> 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n>> 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n>> 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n>> 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n>> 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n>> 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n>> 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n>> 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n>> 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n>> 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5\n>\n> Store the prefix in a character column, without the regex stuff. Like \n> below. I've removed the columns that are not relevant, in fact it would \n> make sense to store them in another table, and have just the id and prefix \n> in this table.\n>\n> id | prefix | network\n> ---+---------+--------------------\n> 1 | 0093 | AFGHANISTAN\n> 2 | 00937 | AFGHANISTAN Mobile\n> 3 | 00355 | ALBANIA\n> 4 | 0035538 | ALBANIA Mobile\n> 5 | 0035568 | ALBANIA Mobile\n> 6 | 0035569 | ALBANIA Mobile\n> 7 | 00213 | ALGERIA\n> 8 | 0021361 | ALGERIA Mobile\n> 9 | 0021362 | ALGERIA Mobile\n> 10 | 0021363 | ALGERIA Mobile\n>\n> Now, add the rows marked with start below:\n>\n> id | prefix | network\n> ----+---------+--------------------\n> 1 | 0093 | AFGHANISTAN\n> 2 | 00937 | AFGHANISTAN Mobile\n> * 1 | 00938 | AFGHANISTAN\n> 3 | 00355 | ALBANIA\n> 4 | 0035538 | ALBANIA Mobile\n> * 3 | 0035539 | ALBANIA\n> 5 | 0035568 | ALBANIA Mobile\n> 6 | 0035569 | ALBANIA Mobile\n> * 3 | 003557 | ALBANIA\n> 7 | 00213 | ALGERIA\n> 8 | 0021361 | ALGERIA Mobile\n> 9 | 0021362 | ALGERIA Mobile\n> 10 | 0021363 | ALGERIA Mobile\n> * 7 | 0021364 | ALGERIA\n>\n> The added rows make it unnecessary to use regex for the searches. You can \n> use just the >= operator like this: (using the example number you gave)\n>\n> SELECT id FROM destlist WHERE '00213618833' >= prefix ORDER BY prefix \n> LIMIT 1\n>\n> Which would return id 7. As another example, a query of \"00213654321\" \n> would match the last row, which again has id 7.\n>\n> I'm too tired to figure out the exact algorithm for adding the rows, but \n> I'm pretty sure it can be automated... 
The basic idea is that when there's \n> a row with id A and prefix XXXX, and another row with id B and prefix \n> XXXXY, we add another row with id A and prefix XXXX(Y+1).\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.409 / Virus Database: 268.15.4/563 - Release Date: 12/2/2006\n> \n\n",
"msg_date": "Sun, 3 Dec 2006 02:53:53 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regex performance issue"
},
{
"msg_contents": "Hello..\n\nI have never used tsearch2 , but at a first glance , i would not see any \nmajor improvement , because the main advantage of tsearch is the splitting \nin words of a phrase..\nBut here , i only have one word (no spaces).\n\n\nRegards\n Alex\n----- Original Message ----- \nFrom: \"Oleg Bartunov\" <[email protected]>\nTo: \"Alexandru Coseru\" <[email protected]>\nCc: \"Dave Dutcher\" <[email protected]>; <[email protected]>\nSent: Saturday, December 02, 2006 10:54 PM\nSubject: Re: [PERFORM] Regex performance issue\n\n\n>I may miss something but I'd use tsearch2. Check\n> intdict dictionary for basic idea - \n> http://www.sai.msu.su/~megera/wiki/Gendict\n>\n> Oleg\n> On Sat, 2 Dec 2006, Alexandru Coseru wrote:\n>\n>> Hello...\n>>\n>> I cannot use LIKE , because the order of the match is reversed.\n>> The prefix column is containing telephone destinations.\n>> IE: ^001 - US , ^0039 Italy , etc..\n>>\n>> Here is a small sample:\n>>\n>> asterisk=> select * from destlist LIMIT 10;\n>> id | id_ent | dir | prefix | country | network | tip\n>> ----+--------+-----+------------+-------------+--------------------+-----\n>> 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n>> 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n>> 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n>> 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n>> 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n>> 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n>> 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n>> 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n>> 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n>> 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5\n>>\n>>\n>> Now , I have to match a dialednumber (let's say 00213618833) and find \n>> it's destination...(It's algeria mobile).\n>> I tried to make with a query of using LIKE , but i was not able to get \n>> something..\n>>\n>>\n>> Regards\n>> Alex\n>>\n>>\n>>\n>>\n>>\n>> ----- Original Message ----- From: \"Dave Dutcher\" <[email protected]>\n>> To: \"'Alexandru Coseru'\" <[email protected]>; \n>> <[email protected]>\n>> Sent: Saturday, December 02, 2006 10:36 PM\n>> Subject: RE: [PERFORM] Regex performance issue\n>>\n>>\n>>> -----Original Message-----\n>>> From: [email protected] On Behalf Of Alexandru \n>>> Coseru\n>>> asterisk=> explain analyze SELECT * FROM destlist WHERE\n>>> '0039051248787' ~\n>>> prefix AND id_ent='-2' AND dir=0 ORDER by length(prefix) DESC;\n>>>\n>>>\n>>> QUERY PLAN\n>>> --------------------------------------------------------------\n>>> ----------------------------------------------------------------------\n>>> Sort (cost=7925.07..7925.15 rows=31 width=67) (actual\n>>> time=857.715..857.716 rows=2 loops=1)\n>>> Sort Key: length((prefix)::text)\n>>> -> Bitmap Heap Scan on destlist (cost=60.16..7924.30\n>>> rows=31 width=67)\n>>> (actual time=2.156..857.686 rows=2 loops=1)\n>>> Recheck Cond: ((id_ent = -2) AND (dir = 0))\n>>> Filter: ('0039051248787'::text ~ (prefix)::text)\n>>> -> Bitmap Index Scan on destlist_indx2 (cost=0.00..60.16\n>>> rows=6193 width=0) (actual time=1.961..1.961 rows=5205 loops=1)\n>>> Index Cond: ((id_ent = -2) AND (dir = 0))\n>>> Total runtime: 857.804 ms\n>>> (8 rows)\n>>>\n>>> \"mmumu\" btree (prefix varchar_pattern_ops)\n>>>\n>>\n>> I'm surpised Postgres isn't using the index on prefix seeing as the index\n>> uses the varchar_pattern_ops operator class. 
It could be that the index\n>> isn't selective enough, or is Postgres not able to use an index with \n>> Posix\n>> regular expressions? The docs seem to say that it can, but I'd be \n>> curious\n>> to see what happens if you use LIKE instead of ~.\n>>\n>> Dave\n>>\n>>\n>>\n>>\n>>\n>>\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.409 / Virus Database: 268.15.4/563 - Release Date: 12/2/2006\n> \n\n",
"msg_date": "Sun, 3 Dec 2006 02:55:27 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regex performance issue"
},
{
"msg_contents": "On Sun, 2006-12-03 at 02:53 +0200, Alexandru Coseru wrote:\n> Hello..\n> \n> Thanks for the tip , i think i have got the ideea..\n> \n> I'm too tired too , and i will try it tommorow.\n> \n> \n> Anyway , anybody has a clue why this regex is that CPU intensive ? I did \n> not saw the light on my drives blinking , and also vmstat doesn't yeld any \n> blocks in or out...\n> And how can it be optimized ?\n\nI think it's just that you're running the regex across 5400 or so\nelements. at 850 or so milliseconds, that comes out to 150uS for each\ncomparison. I'd say it's just the number of comparisons that have to be\nmade that's killing you here.\n",
"msg_date": "Sat, 02 Dec 2006 19:31:37 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regex performance issue"
},
{
"msg_contents": "\"Alexandru Coseru\" <[email protected]> writes:\n> Anyway , anybody has a clue why this regex is that CPU intensive ?\n\nThe EXPLAIN result you posted offers *no* evidence that the regexp is\nCPU intensive. All you know is that it took 850+ msec to fetch 5200\nrows from disk and apply the regexp filter to them. There's no evidence\nhere that that was CPU time and not I/O time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Dec 2006 23:05:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regex performance issue "
},
{
"msg_contents": "On Sun, 3 Dec 2006, Alexandru Coseru wrote:\n\n> Hello..\n>\n> I have never used tsearch2 , but at a first glance , i would not see any \n> major improvement , because the main advantage of tsearch is the splitting in \n> words of a phrase..\n> But here , i only have one word (no spaces).\n\nOh, yes, I was confused :) What if you consider you prefix as \n1.2.3.4.5.6, then you could try our contrib/ltree module.\n\n\nOleg\n\n\n>\n>\n> Regards\n> Alex\n> ----- Original Message ----- From: \"Oleg Bartunov\" <[email protected]>\n> To: \"Alexandru Coseru\" <[email protected]>\n> Cc: \"Dave Dutcher\" <[email protected]>; <[email protected]>\n> Sent: Saturday, December 02, 2006 10:54 PM\n> Subject: Re: [PERFORM] Regex performance issue\n>\n>\n>> I may miss something but I'd use tsearch2. Check\n>> intdict dictionary for basic idea - \n>> http://www.sai.msu.su/~megera/wiki/Gendict\n>> \n>> Oleg\n>> On Sat, 2 Dec 2006, Alexandru Coseru wrote:\n>> \n>>> Hello...\n>>> \n>>> I cannot use LIKE , because the order of the match is reversed.\n>>> The prefix column is containing telephone destinations.\n>>> IE: ^001 - US , ^0039 Italy , etc..\n>>> \n>>> Here is a small sample:\n>>> \n>>> asterisk=> select * from destlist LIMIT 10;\n>>> id | id_ent | dir | prefix | country | network | tip\n>>> ----+--------+-----+------------+-------------+--------------------+-----\n>>> 1 | -1 | 0 | (^0093) | AFGHANISTAN | AFGHANISTAN | 6\n>>> 2 | -1 | 0 | (^00937) | AFGHANISTAN | AFGHANISTAN Mobile | 5\n>>> 3 | -1 | 0 | (^00355) | ALBANIA | ALBANIA | 6\n>>> 4 | -1 | 0 | (^0035538) | ALBANIA | ALBANIA Mobile | 5\n>>> 5 | -1 | 0 | (^0035568) | ALBANIA | ALBANIA Mobile | 5\n>>> 6 | -1 | 0 | (^0035569) | ALBANIA | ALBANIA Mobile | 5\n>>> 7 | -1 | 0 | (^00213) | ALGERIA | ALGERIA | 6\n>>> 8 | -1 | 0 | (^0021361) | ALGERIA | ALGERIA Mobile | 5\n>>> 9 | -1 | 0 | (^0021362) | ALGERIA | ALGERIA Mobile | 5\n>>> 10 | -1 | 0 | (^0021363) | ALGERIA | ALGERIA Mobile | 5\n>>> \n>>> \n>>> Now , I have to match a dialednumber (let's say 00213618833) and find \n>>> it's destination...(It's algeria mobile).\n>>> I tried to make with a query of using LIKE , but i was not able to get \n>>> something..\n>>> \n>>> \n>>> Regards\n>>> Alex\n>>> \n>>> \n>>> \n>>> \n>>> \n>>> ----- Original Message ----- From: \"Dave Dutcher\" <[email protected]>\n>>> To: \"'Alexandru Coseru'\" <[email protected]>; \n>>> <[email protected]>\n>>> Sent: Saturday, December 02, 2006 10:36 PM\n>>> Subject: RE: [PERFORM] Regex performance issue\n>>> \n>>> \n>>>> -----Original Message-----\n>>>> From: [email protected] On Behalf Of Alexandru \n>>>> Coseru\n>>>> asterisk=> explain analyze SELECT * FROM destlist WHERE\n>>>> '0039051248787' ~\n>>>> prefix AND id_ent='-2' AND dir=0 ORDER by length(prefix) DESC;\n>>>> \n>>>> \n>>>> QUERY PLAN\n>>>> --------------------------------------------------------------\n>>>> ----------------------------------------------------------------------\n>>>> Sort (cost=7925.07..7925.15 rows=31 width=67) (actual\n>>>> time=857.715..857.716 rows=2 loops=1)\n>>>> Sort Key: length((prefix)::text)\n>>>> -> Bitmap Heap Scan on destlist (cost=60.16..7924.30\n>>>> rows=31 width=67)\n>>>> (actual time=2.156..857.686 rows=2 loops=1)\n>>>> Recheck Cond: ((id_ent = -2) AND (dir = 0))\n>>>> Filter: ('0039051248787'::text ~ (prefix)::text)\n>>>> -> Bitmap Index Scan on destlist_indx2 (cost=0.00..60.16\n>>>> rows=6193 width=0) (actual time=1.961..1.961 rows=5205 loops=1)\n>>>> Index Cond: ((id_ent = -2) AND (dir = 0))\n>>>> Total runtime: 
857.804 ms\n>>>> (8 rows)\n>>>>\n>>>> \"mmumu\" btree (prefix varchar_pattern_ops)\n>>>> \n>>> \n>>> I'm surpised Postgres isn't using the index on prefix seeing as the index\n>>> uses the varchar_pattern_ops operator class. It could be that the index\n>>> isn't selective enough, or is Postgres not able to use an index with Posix\n>>> regular expressions? The docs seem to say that it can, but I'd be curious\n>>> to see what happens if you use LIKE instead of ~.\n>>> \n>>> Dave\n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>\n>> Regards,\n>> Oleg\n>> _____________________________________________________________\n>> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n>> Sternberg Astronomical Institute, Moscow University, Russia\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(495)939-16-83, +007(495)939-23-83\n>> \n>> \n>> \n>> -- \n>> No virus found in this incoming message.\n>> Checked by AVG Free Edition.\n>> Version: 7.1.409 / Virus Database: 268.15.4/563 - Release Date: 12/2/2006\n>> \n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Sun, 3 Dec 2006 10:05:17 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regex performance issue"
},
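A sketch of what the contrib/ltree variant could look like, assuming the module is installed and the digits are stored dot-separated in a column called prefix_tree here (the application would convert the dialed number the same way):

    -- prefix 00213 stored as '0.0.2.1.3'; dialed 00213618833 becomes '0.0.2.1.3.6.1.8.8.3.3'
    CREATE INDEX destlist_prefix_tree_gist ON destlist USING gist (prefix_tree);

    SELECT *
    FROM destlist
    WHERE prefix_tree @> text2ltree('0.0.2.1.3.6.1.8.8.3.3')  -- prefix is an ancestor of the dialed number
    ORDER BY nlevel(prefix_tree) DESC                         -- deepest, i.e. longest, match first
    LIMIT 1;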
{
"msg_contents": "Hello..\n\nI presumed CPU intensive because:\n 1) I have no hdd lights turn on during a series of queries (about 50 of \nthem)\n 2) vmstat doesn't give me blocks in and just a couple of blocks out.\n 3) top reports between 95 and 100 user cpu . sometimes ,i can see \nsome hi and si work (max of 2%).\n\n\nI'm pretty sure that the whole table is in cache..\nThe table size is about 50 Megs , and I have 1 Gb of RAM..\nI have no other RAM eating procesess.\n\nCould it be related to CPU <-> RAM speed ?\n\nRegards\n Alex\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Alexandru Coseru\" <[email protected]>\nCc: \"Heikki Linnakangas\" <[email protected]>; \n<[email protected]>\nSent: Sunday, December 03, 2006 6:05 AM\nSubject: Re: [PERFORM] Regex performance issue\n\n\n> \"Alexandru Coseru\" <[email protected]> writes:\n>> Anyway , anybody has a clue why this regex is that CPU intensive ?\n>\n> The EXPLAIN result you posted offers *no* evidence that the regexp is\n> CPU intensive. All you know is that it took 850+ msec to fetch 5200\n> rows from disk and apply the regexp filter to them. There's no evidence\n> here that that was CPU time and not I/O time.\n>\n> regards, tom lane\n>\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.409 / Virus Database: 268.15.4/563 - Release Date: 12/2/2006\n>\n> \n\n",
"msg_date": "Sun, 3 Dec 2006 13:11:29 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regex performance issue "
}
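One way to check the "whole table is in cache" claim from inside the database rather than from top and vmstat, sketched on the assumption that block-level statistics are enabled (stats_block_level = on in this release series):

    -- run the slow query a few times, then see whether the counters for blocks
    -- fetched from outside PostgreSQL's own buffer cache keep growing
    SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
    FROM pg_statio_user_tables
    WHERE relname = 'destlist';

If heap_blks_read stays flat between runs while each run still takes ~850 ms, the time is going into evaluating the regex once per candidate row, which is consistent with the per-row estimate given earlier in the thread.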
] |
[
{
"msg_contents": "Hi,\n\nI am struggling with the performance of a JBoss based J2EE application\nwith CMP 2.1. beans and using PostgreSQL as database back-end.\n\nBecause JBoss is largely responsible for the SQL queries that are send\nto the back-end , I would like to see the queries that are actually\nreceived by PostgreSQL (insert, select, update and delete), together\nwith the number of times they are called, the average execution time,\ntotal execution time etc.\n\nI have tried PQA (http://pgfoundry.org/projects/pqa/) but that does not\nseem to work with PostgreSQL 8.1.5 on Debian Etch: I get output saying\n\"Continuation for no previous query\" and no statistics. As I don't know\nanything about Ruby, I am lost here.\n\nCan I \"repair\" PQA somehow (without resorting to a crash course \"Ruby\nfor *extreme* Ruby dummies\") or are there any other, preferably \"Open\nSource\" (or extremely cheap ;-)) and multi-platform (Linux and Windows\n2000 +), tools that can gather the statistics that I want? \n\nOne last question: can I get the select queries in PostgreSQL *without*\nall the internal PostgreSQL selects that appear in the log files if I\nset log_statement to \"ddl\" or \"all\" or should I try to catch these by a\njudiciously chosen log_min_duration_statement ?\n\nTIA \n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n",
"msg_date": "Sun, 03 Dec 2006 09:59:25 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which query analiser tools are available?"
},
{
"msg_contents": "Joost Kraaijeveld <[email protected]> schrieb:\n> Because JBoss is largely responsible for the SQL queries that are send\n> to the back-end , I would like to see the queries that are actually\n> received by PostgreSQL (insert, select, update and delete), together\n> with the number of times they are called, the average execution time,\n> total execution time etc.\n\nYou can use pgfouine for such job: \nhttp://pgfouine.projects.postgresql.org/\n\nAnd you can play with config-parameters like:\n\n- log_statement = [none, mod, ddl, all]\n- log_min_duration_statement = X (logs all statements runs longer then X\n ms)\n- stats_command_string = on\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Sun, 3 Dec 2006 10:57:36 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which query analiser tools are available?"
}
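Besides the log-based route, the stats_command_string setting mentioned above also lets you watch what JBoss is actually sending in real time; a small sketch using the 8.1-era column names:

    -- requires stats_command_string = on; shows the statement each backend is running right now
    SELECT procpid, usename, current_query, query_start
    FROM pg_stat_activity
    WHERE current_query NOT LIKE '<IDLE>%'
    ORDER BY query_start;

This only gives a live snapshot; for call counts and total or average execution times, logging with log_min_duration_statement plus an analyzer such as pgfouine remains the way to go.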
] |
[
{
"msg_contents": "The following left outer join plan puzzles me:\n\nEXPLAIN ANALYZE SELECT * from t28 LEFT OUTER JOIN (t1 JOIN t11 ON\n(t11.o = '<http://example.org>' AND t11.s = t1.o)) ON t28.s = t1.s\nWHERE t28.o = '\"spec\"';\n\nt28, t1, and t11 all have indexed columns named 's' and 'o' that contain 'text';\n\n Nested Loop Left Join (cost=794249.26..3289704.61 rows=1 width=301)\n(actual time=581293.390..581293.492 rows=1 loops=1)\n Join Filter: (t28.s = t1.s)\n -> Index Scan using t28_o on t28 (cost=0.00..9.22 rows=1\nwidth=89) (actual time=0.073..0.077 rows=1 loops=1)\n Index Cond: (o = '\"spec\"'::text)\n -> Merge Join (cost=794249.26..3267020.66 rows=1813979 width=212)\n(actual time=230365.522..577078.266 rows=1894969 loops=1)\n Merge Cond: (t1.o = t11.s)\n -> Index Scan using t1_o on t1 (cost=0.00..2390242.10\nrows=22225696 width=109) (actual time=0.209..162586.801 rows=22223925\nloops=1)\n -> Sort (cost=794249.26..798784.21 rows=1813979 width=103)\n(actual time=230365.175..237409.474 rows=1894969 loops=1)\n Sort Key: t11.s\n -> Bitmap Heap Scan on t11 (cost=78450.82..605679.55\nrows=1813979 width=103) (actual time=3252.103..22782.271 rows=1894969\nloops=1)\n Recheck Cond: (o = '<http://example.org>'::text)\n -> Bitmap Index Scan on t11_o\n(cost=0.00..78450.82 rows=1813979 width=0) (actual\ntime=2445.422..2445.422 rows=1894969 loops=1)\n Index Cond: (o = '<http://example.org>'::text)\n\n\nIt seems to me that this plan is not very desirable, since the outer\npart of the nested loop left join (the merge join node) is very\nexpensive. Is is possible to generate a plan that looks like this:\n\n Nested Loop Left Join (cost=???)\n -> Index Scan using t28_o on t28 (cost=0.00..9.11 rows=1 width=89)\n Index Cond: (o = '\"spec\"'::text)\n -> Nested Loop (cost=???)\n -> Index Scan using t1_s on t1 (cost=???)\n Index Cond: (s = t28.s)\n -> Bitmap Heap Scan on t11 (cost=???)\n Recheck Cond: (t11.s = t1.o)\n Filter: (o = '<http://example.org>'::text)\n -> Bitmap Index Scan on t11_s (cost=??? )\n Index Cond: (t11.s = t1.o)\n\nI *think* this plan is equivalent to the above if I'm assuming the\nbehaviour of the 'nested loop left join' node correctly. So far, I\nhave been tweaking the statistics, cost estimates, and\nenabling.disabling certain plans to see if I can get it to propagate\nthe join condition t1.s = t28.s to the outer node of the left join..\nbut so far, I cannot. So, my questions are:\n\n1) Is my 'desired' query plan logic correct\n2) Can the executor execute a plan such as my 'desired' plan\n3) If (1) and (2) are 'yes', then how may I get the planner to\ngenerate such a plan, or do I just need to look harder into tweaking\nthe statistics and cost estimates\n\n -Aaron\n",
"msg_date": "Sun, 3 Dec 2006 10:11:30 -0500",
"msg_from": "\"Aaron Birkland\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Propagating outer join conditions"
},
{
"msg_contents": "How about trying:\n\nSelect *\nFrom\n(Select * from t28 where t28.0='spec') t28a\nLeft out join (t1 JOIN t11 ON\n> (t11.o = '<http://example.org>' AND t11.s = t1.o)) ON t28a.s = t1.s\n\nIn this way, I think, the where clause on t28 would be performed before the\njoin rather than after.\n\nJonathan Blitz\n\n\n> -----Original Message-----\n> From: Aaron Birkland [mailto:[email protected]]\n> Sent: Sunday, December 03, 2006 5:12 PM\n> To: [email protected]\n> Subject: [PERFORM] Propagating outer join conditions\n> \n> The following left outer join plan puzzles me:\n> \n> EXPLAIN ANALYZE SELECT * from t28 LEFT OUTER JOIN (t1 JOIN t11 ON\n> (t11.o = '<http://example.org>' AND t11.s = t1.o)) ON t28.s = t1.s\n> WHERE t28.o = '\"spec\"';\n> \n> t28, t1, and t11 all have indexed columns named 's' and 'o' that contain\n'text';\n> \n> Nested Loop Left Join (cost=794249.26..3289704.61 rows=1 width=301)\n> (actual time=581293.390..581293.492 rows=1 loops=1)\n> Join Filter: (t28.s = t1.s)\n> -> Index Scan using t28_o on t28 (cost=0.00..9.22 rows=1\n> width=89) (actual time=0.073..0.077 rows=1 loops=1)\n> Index Cond: (o = '\"spec\"'::text)\n> -> Merge Join (cost=794249.26..3267020.66 rows=1813979 width=212)\n> (actual time=230365.522..577078.266 rows=1894969 loops=1)\n> Merge Cond: (t1.o = t11.s)\n> -> Index Scan using t1_o on t1 (cost=0.00..2390242.10\n> rows=22225696 width=109) (actual time=0.209..162586.801 rows=22223925\n> loops=1)\n> -> Sort (cost=794249.26..798784.21 rows=1813979 width=103)\n> (actual time=230365.175..237409.474 rows=1894969 loops=1)\n> Sort Key: t11.s\n> -> Bitmap Heap Scan on t11 (cost=78450.82..605679.55\n> rows=1813979 width=103) (actual time=3252.103..22782.271 rows=1894969\n> loops=1)\n> Recheck Cond: (o = '<http://example.org>'::text)\n> -> Bitmap Index Scan on t11_o\n> (cost=0.00..78450.82 rows=1813979 width=0) (actual\n> time=2445.422..2445.422 rows=1894969 loops=1)\n> Index Cond: (o = '<http://example.org>'::text)\n> \n> \n> It seems to me that this plan is not very desirable, since the outer\n> part of the nested loop left join (the merge join node) is very\n> expensive. Is is possible to generate a plan that looks like this:\n> \n> Nested Loop Left Join (cost=???)\n> -> Index Scan using t28_o on t28 (cost=0.00..9.11 rows=1 width=89)\n> Index Cond: (o = '\"spec\"'::text)\n> -> Nested Loop (cost=???)\n> -> Index Scan using t1_s on t1 (cost=???)\n> Index Cond: (s = t28.s)\n> -> Bitmap Heap Scan on t11 (cost=???)\n> Recheck Cond: (t11.s = t1.o)\n> Filter: (o = '<http://example.org>'::text)\n> -> Bitmap Index Scan on t11_s (cost=??? )\n> Index Cond: (t11.s = t1.o)\n> \n> I *think* this plan is equivalent to the above if I'm assuming the\n> behaviour of the 'nested loop left join' node correctly. So far, I\n> have been tweaking the statistics, cost estimates, and\n> enabling.disabling certain plans to see if I can get it to propagate\n> the join condition t1.s = t28.s to the outer node of the left join..\n> but so far, I cannot. 
So, my questions are:\n> \n> 1) Is my 'desired' query plan logic correct\n> 2) Can the executor execute a plan such as my 'desired' plan\n> 3) If (1) and (2) are 'yes', then how may I get the planner to\n> generate such a plan, or do I just need to look harder into tweaking\n> the statistics and cost estimates\n> \n> -Aaron\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n> --\n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.409 / Virus Database: 268.15.6/565 - Release Date: 12/2/2006\n> \n> \n> --\n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.409 / Virus Database: 268.15.6/565 - Release Date: 12/02/2006\n> \n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.1.409 / Virus Database: 268.15.6/565 - Release Date: 12/02/2006\n \n\n",
"msg_date": "Sun, 3 Dec 2006 17:25:15 +0200",
"msg_from": "\"Jonathan Blitz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Propagating outer join conditions"
},
{
"msg_contents": "First, I forgot to mention - this is 8.2 RC1 I was trying on\n\nThe suggested change produces an identical 'bad' query plan. The main\nissue (I think) is that the query node that processes \"t1 JOIN t11 ON\n..' is not aware of the join condition 't28.s = t1.s'.. even though\nthe value of t28.s (as determined by the inner index scan where t28.o\n= 'spec') could(should?) theoretically be known to it. If it did, then\nI imagine it would realize that a nested loop join starting with t1.s\n= t28.s (which is very selective) would be much cheaper than doing the\nbig merge join.\n\n -Aaron\n\nOn 12/3/06, Jonathan Blitz <[email protected]> wrote:\n> How about trying:\n>\n> Select *\n> From\n> (Select * from t28 where t28.0='spec') t28a\n> Left out join (t1 JOIN t11 ON\n> > (t11.o = '<http://example.org>' AND t11.s = t1.o)) ON t28a.s = t1.s\n>\n> In this way, I think, the where clause on t28 would be performed before the\n> join rather than after.\n",
"msg_date": "Sun, 3 Dec 2006 11:30:28 -0500",
"msg_from": "\"Aaron Birkland\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Propagating outer join conditions"
},
{
"msg_contents": "Hello..\n\nI'm waiting for my new system , and meanwhile , i have some questions.\nFirst , here are the specs:\n\n\nThe server will have kernel 2.1.19 and it will be use only as a postgresql \nserver (nothing else... no named,dhcp,web,mail , etc).\nPostgresql version will be 8.2.\nIt will be heavily used on inserts , triggers on each insert/update and \noccasionally selects.\n\nSystem: SuperMicro 7045B-3\nCPU: 1 Dual Core Woodcrest ,2.66 Ghz , 4Mb cache , FSB 1333Mhz\nRAM: 4 Gb (4 x 1 Gb modules) at 667Mhz\nRAID CTRL: LSI MegaRAID SAS 8408E\nDISKS: 8 x SATA II 7200 rpm , NCQ , 8 Mb cache with Supermicro 8 \nSas enclosure\n\nBased on the needs , i'm planning an update of the drives to 15.000 rpms \nSAS. (pretty expensive now)\n\n\nQuestion 1:\n The RAID layout should be:\n a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in raid10 \nfor data ?\n b) 8 hdd in raid10 for all ?\n c) 2 hdd in raid1 for system , 2 hdd in raid1 for pg_xlog , 4 \nhdd in raid10 for data ?\n Obs: I'm going for setup a) , but i want to hear your thoughts as well.\n\n\nQuestion 2: (Don't want to start a flame here..... but here is goes)\n What filesystem should i run for data ? ext3 or xfs ?\n The tables have ~ 15.000 rel_pages each. The biggest table has now \nover 30.000 pages.\n\nQuestion 3:\n The block size in postgresql is 8kb. The strip size in the raid \nctrl is 64k.\n Should i increase the pgsql block size to 16 or 32 or even 64k ?\n\n\n\nAs soon as the system will be delivered , i'm planning some benchmarks.\n\nRegards\n Alex \n\n",
"msg_date": "Sun, 3 Dec 2006 19:29:02 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Hardware advice"
},
{
"msg_contents": "\"Aaron Birkland\" <[email protected]> writes:\n> ... Is is possible to generate a plan that looks like this:\n\n> Nested Loop Left Join (cost=???)\n> -> Index Scan using t28_o on t28 (cost=0.00..9.11 rows=1 width=89)\n> Index Cond: (o = '\"spec\"'::text)\n> -> Nested Loop (cost=???)\n> -> Index Scan using t1_s on t1 (cost=???)\n> Index Cond: (s = t28.s)\n> -> Bitmap Heap Scan on t11 (cost=???)\n> Recheck Cond: (t11.s = t1.o)\n> Filter: (o = '<http://example.org>'::text)\n> -> Bitmap Index Scan on t11_s (cost=??? )\n> Index Cond: (t11.s = t1.o)\n\nNo. It'd take massive work in both the planner and executor to support\nsomething like that --- they are designed around the assumption that\nnestloop with inner index scan is exactly that, just one indexscan on\nthe inside of the loop. (As of 8.2 there's some klugery to handle an\nAppend of indexscans too, but it won't generalize readily.)\n\nIt might be something to work on for the future, but I can see a couple\nof risks to trying to support this type of plan:\n* exponential growth in the number of plans the planner has to consider;\n* greatly increased runtime penalty if the planner has underestimated\n the number of rows produced by the outer side of the nestloop.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Dec 2006 13:24:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Propagating outer join conditions "
},
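Given that the planner will not build that plan shape, one manual workaround is to decompose the left join by hand into its matched and unmatched halves. A sketch with made-up column lists (t1 and t11 are assumed here to have just the columns s and o, so a real query would spell out the full lists and matching NULLs):

    SELECT t28.s, t28.o, t1.s, t1.o, t11.s, t11.o
    FROM t28
    JOIN t1  ON t1.s = t28.s                                    -- can be driven by an index on t1.s
    JOIN t11 ON t11.s = t1.o AND t11.o = '<http://example.org>'
    WHERE t28.o = '"spec"'
    UNION ALL
    SELECT t28.s, t28.o, NULL, NULL, NULL, NULL                 -- NULL-extend the unmatched t28 rows
    FROM t28
    WHERE t28.o = '"spec"'
      AND NOT EXISTS (
            SELECT 1
            FROM t1
            JOIN t11 ON t11.s = t1.o AND t11.o = '<http://example.org>'
            WHERE t1.s = t28.s);

Whether this actually wins depends on how selective t28.o = '"spec"' is; with a single qualifying t28 row, both halves reduce to a few index probes instead of the ~1.9M-row merge join in the posted plan.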
{
"msg_contents": "Alexandru,\n\n> The server will have kernel 2.1.19 and it will be use only as a postgresql\n\nAssuming you're talking Linux, I think you mean 2.6.19?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Sun, 3 Dec 2006 12:11:00 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice"
},
{
"msg_contents": "Hello..\n\nYes , sorry for the mistype..\n\nRegards\n Alex\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: <[email protected]>\nCc: \"Alexandru Coseru\" <[email protected]>\nSent: Sunday, December 03, 2006 10:11 PM\nSubject: Re: [PERFORM] Hardware advice\n\n\nAlexandru,\n\n> The server will have kernel 2.1.19 and it will be use only as a postgresql\n\nAssuming you're talking Linux, I think you mean 2.6.19?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n\n\n\n-- \nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.1.409 / Virus Database: 268.15.4/563 - Release Date: 12/2/2006\n\n\n",
"msg_date": "Sun, 3 Dec 2006 23:38:14 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice"
},
{
"msg_contents": "Hi Alexandru,\n\nAlexandru Coseru schrieb:\n> [...]\n> Question 1:\n> The RAID layout should be:\n> a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in\n> raid10 for data ?\n> b) 8 hdd in raid10 for all ?\n> c) 2 hdd in raid1 for system , 2 hdd in raid1 for pg_xlog ,\n> 4 hdd in raid10 for data ?\n> Obs: I'm going for setup a) , but i want to hear your thoughts as well.\n\nThis depends on you data size. I think, option a and c are good.\nThe potential bottleneck may the RAID 1 for pg_xlog if you have huge\namount of updates and insert.\nWhat is about another setup\n\n4 hdd in RAID 10 for System and pg_xlog - System partitions are normally\nnot in heavy use and pg_xlog should be fast for writing.\n4 hdd in RAID 10 for data.\n\n> \n> \n> Question 2: (Don't want to start a flame here..... but here is goes)\n> What filesystem should i run for data ? ext3 or xfs ?\n> The tables have ~ 15.000 rel_pages each. The biggest table has\n> now over 30.000 pages.\n\nWe have a database running with 60,000+ tables. The tables size is\nbetween a few kByte for the small tables and up to 30 GB for the largest\none. We had no issue with ext3 in the past.\n\n> \n> Question 3:\n> The block size in postgresql is 8kb. The strip size in the\n> raid ctrl is 64k.\n> Should i increase the pgsql block size to 16 or 32 or even 64k ?\n\nYou should keep in mind that the file system has also a block size. Ext3\nhas as maximum 4k.\nI would set up the partitions aligned to the stripe size to prevent\nunaligned reads. I guess, you can imagine that a larger block size of\npostgresql may also end up in unaligned reads because the file system\nhas a smaller block size.\n\nRAID Volume and File system set up\n1. Make all partitions aligned to the RAID strip size.\n The first partition should be start at 128 kByte.\n You can do this with fdisk. after you created the partition switch\n to the expert mode (type x) and modify the begin of the partition\n (type b). You should change this value to 128 (default is 63).\n All other partition should also start on a multiple of 128 kByte.\n\n2. Give the file system a hint that you work with larger block sizes.\n Ext3: mke2fs -b 4096 -j -R stride=2 /dev/sda1 -L LABEL\n I made a I/O test with PostgreSQL on a RAID system with stripe size\n of 64kByte and block size of 8 kByte in the RAID system.\n Stride=2 was the best value.\n\n\nPS: You should have a second XEON in your budget plan.\n\nSven.\n\n",
"msg_date": "Tue, 05 Dec 2006 10:57:11 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice"
},
{
"msg_contents": "Hello..\n\nThanks for the advices..\n\nActually , i'm waiting for the clovertown to show up on the market...\n\nRegards\n Alex\n\n----- Original Message ----- \nFrom: \"Sven Geisler\" <[email protected]>\nTo: \"Alexandru Coseru\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, December 05, 2006 11:57 AM\nSubject: Re: [PERFORM] Hardware advice\n\n\n> Hi Alexandru,\n>\n> Alexandru Coseru schrieb:\n>> [...]\n>> Question 1:\n>> The RAID layout should be:\n>> a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in\n>> raid10 for data ?\n>> b) 8 hdd in raid10 for all ?\n>> c) 2 hdd in raid1 for system , 2 hdd in raid1 for pg_xlog ,\n>> 4 hdd in raid10 for data ?\n>> Obs: I'm going for setup a) , but i want to hear your thoughts as \n>> well.\n>\n> This depends on you data size. I think, option a and c are good.\n> The potential bottleneck may the RAID 1 for pg_xlog if you have huge\n> amount of updates and insert.\n> What is about another setup\n>\n> 4 hdd in RAID 10 for System and pg_xlog - System partitions are normally\n> not in heavy use and pg_xlog should be fast for writing.\n> 4 hdd in RAID 10 for data.\n>\n>>\n>>\n>> Question 2: (Don't want to start a flame here..... but here is goes)\n>> What filesystem should i run for data ? ext3 or xfs ?\n>> The tables have ~ 15.000 rel_pages each. The biggest table has\n>> now over 30.000 pages.\n>\n> We have a database running with 60,000+ tables. The tables size is\n> between a few kByte for the small tables and up to 30 GB for the largest\n> one. We had no issue with ext3 in the past.\n>\n>>\n>> Question 3:\n>> The block size in postgresql is 8kb. The strip size in the\n>> raid ctrl is 64k.\n>> Should i increase the pgsql block size to 16 or 32 or even 64k ?\n>\n> You should keep in mind that the file system has also a block size. Ext3\n> has as maximum 4k.\n> I would set up the partitions aligned to the stripe size to prevent\n> unaligned reads. I guess, you can imagine that a larger block size of\n> postgresql may also end up in unaligned reads because the file system\n> has a smaller block size.\n>\n> RAID Volume and File system set up\n> 1. Make all partitions aligned to the RAID strip size.\n> The first partition should be start at 128 kByte.\n> You can do this with fdisk. after you created the partition switch\n> to the expert mode (type x) and modify the begin of the partition\n> (type b). You should change this value to 128 (default is 63).\n> All other partition should also start on a multiple of 128 kByte.\n>\n> 2. Give the file system a hint that you work with larger block sizes.\n> Ext3: mke2fs -b 4096 -j -R stride=2 /dev/sda1 -L LABEL\n> I made a I/O test with PostgreSQL on a RAID system with stripe size\n> of 64kByte and block size of 8 kByte in the RAID system.\n> Stride=2 was the best value.\n>\n>\n> PS: You should have a second XEON in your budget plan.\n>\n> Sven.\n>\n>\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.1.409 / Virus Database: 268.15.7/569 - Release Date: 12/5/2006\n>\n> \n\n",
"msg_date": "Tue, 5 Dec 2006 23:44:33 +0200",
"msg_from": "\"Alexandru Coseru\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice"
},
{
"msg_contents": "The test that I did - which was somewhat limited, showed no benefit\nsplitting disks into seperate partitions for large bulk loads.\n\nThe program read from one very large file and wrote the input out to two\nother large files.\n\nThe totaly throughput on a single partition was close to the maximum\ntheoretical for that logical drive, even though the process was reading and\nwriting to three seperate places on the disk. I don't know what this means\nfor postgresql setups directly, but I would postulate that the benefit from\nsplitting pg_xlog onto a seperate spindle is not as great as it might once\nhave been for large bulk transactions. I am therefore going to be going to\na single 6 drive RAID 5 for my data wharehouse application because I want\nthe read speed to be availalbe. I can benefit from fast reads when I want\nto do large data scans at the expense of slightly slower insert speed.\n\nAlex.\n\nOn 12/5/06, Alexandru Coseru <[email protected]> wrote:\n>\n> Hello..\n>\n> Thanks for the advices..\n>\n> Actually , i'm waiting for the clovertown to show up on the market...\n>\n> Regards\n> Alex\n>\n> ----- Original Message -----\n> From: \"Sven Geisler\" <[email protected]>\n> To: \"Alexandru Coseru\" <[email protected]>\n> Cc: <[email protected]>\n> Sent: Tuesday, December 05, 2006 11:57 AM\n> Subject: Re: [PERFORM] Hardware advice\n>\n>\n> > Hi Alexandru,\n> >\n> > Alexandru Coseru schrieb:\n> >> [...]\n> >> Question 1:\n> >> The RAID layout should be:\n> >> a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in\n> >> raid10 for data ?\n> >> b) 8 hdd in raid10 for all ?\n> >> c) 2 hdd in raid1 for system , 2 hdd in raid1 for pg_xlog\n> ,\n> >> 4 hdd in raid10 for data ?\n> >> Obs: I'm going for setup a) , but i want to hear your thoughts as\n> >> well.\n> >\n> > This depends on you data size. I think, option a and c are good.\n> > The potential bottleneck may the RAID 1 for pg_xlog if you have huge\n> > amount of updates and insert.\n> > What is about another setup\n> >\n> > 4 hdd in RAID 10 for System and pg_xlog - System partitions are normally\n> > not in heavy use and pg_xlog should be fast for writing.\n> > 4 hdd in RAID 10 for data.\n> >\n> >>\n> >>\n> >> Question 2: (Don't want to start a flame here..... but here is goes)\n> >> What filesystem should i run for data ? ext3 or xfs ?\n> >> The tables have ~ 15.000 rel_pages each. The biggest table has\n> >> now over 30.000 pages.\n> >\n> > We have a database running with 60,000+ tables. The tables size is\n> > between a few kByte for the small tables and up to 30 GB for the largest\n> > one. We had no issue with ext3 in the past.\n> >\n> >>\n> >> Question 3:\n> >> The block size in postgresql is 8kb. The strip size in the\n> >> raid ctrl is 64k.\n> >> Should i increase the pgsql block size to 16 or 32 or even 64k\n> ?\n> >\n> > You should keep in mind that the file system has also a block size. Ext3\n> > has as maximum 4k.\n> > I would set up the partitions aligned to the stripe size to prevent\n> > unaligned reads. I guess, you can imagine that a larger block size of\n> > postgresql may also end up in unaligned reads because the file system\n> > has a smaller block size.\n> >\n> > RAID Volume and File system set up\n> > 1. Make all partitions aligned to the RAID strip size.\n> > The first partition should be start at 128 kByte.\n> > You can do this with fdisk. after you created the partition switch\n> > to the expert mode (type x) and modify the begin of the partition\n> > (type b). 
You should change this value to 128 (default is 63).\n> > All other partition should also start on a multiple of 128 kByte.\n> >\n> > 2. Give the file system a hint that you work with larger block sizes.\n> > Ext3: mke2fs -b 4096 -j -R stride=2 /dev/sda1 -L LABEL\n> > I made a I/O test with PostgreSQL on a RAID system with stripe size\n> > of 64kByte and block size of 8 kByte in the RAID system.\n> > Stride=2 was the best value.\n> >\n> >\n> > PS: You should have a second XEON in your budget plan.\n> >\n> > Sven.\n> >\n> >\n> >\n> >\n> > --\n> > No virus found in this incoming message.\n> > Checked by AVG Free Edition.\n> > Version: 7.1.409 / Virus Database: 268.15.7/569 - Release Date:\n> 12/5/2006\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n",
"msg_date": "Tue, 5 Dec 2006 23:24:19 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice"
},
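A quick way to relate the 8 kB PostgreSQL page size discussed above to the RAID stripe size is to read it straight from the server and see how many pages the larger tables occupy. This is a generic sketch, not taken from the thread; it assumes nothing beyond the standard pg_class catalog and the read-only block_size setting available on reasonably recent servers.

```sql
-- Page size the cluster was compiled with (8 kB unless changed at build time)
SHOW block_size;

-- How many of those pages each table occupies (relpages is only refreshed
-- by VACUUM/ANALYZE, so it is an estimate, not a live count)
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 10;
```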
{
"msg_contents": "Hi Alex,\n\nPlease check out <http://www.powerpostgresql.com/PerfList> before you\nuse RAID 5 for PostgreSQL.\n\nAnyhow, In a larger scale you end up in the response time of the I/O\nsystem for an read or write. The read is in modern RAID and SAN\nenvironments the part where you have to focus when you want to tune your\nsystem because most RAID and SAN system can buffer write.\nPostgreSQL does use the Linux file system cache which is normally much\nlarger then the RAID or SAN cache for reading. This means whenever a\nPostgreSQL read goes to the RAID or SAN sub system the response time of\nthe hard disk will become interesting.\nI guess you can imagine that multiple reads to the same spins are\ncausing an delay in the response time.\n\n\nAlexandru,\n\nYou should have two XEONs, what every your core count is.\nThis would use the full benefit of the memory architecture.\nYou know two FSBs and two memory channels.\n\nCheers\nSven\n\nAlex Turner schrieb:\n> The test that I did - which was somewhat limited, showed no benefit\n> splitting disks into seperate partitions for large bulk loads.\n> \n> The program read from one very large file and wrote the input out to two\n> other large files.\n> \n> The totaly throughput on a single partition was close to the maximum\n> theoretical for that logical drive, even though the process was reading\n> and writing to three seperate places on the disk. I don't know what\n> this means for postgresql setups directly, but I would postulate that\n> the benefit from splitting pg_xlog onto a seperate spindle is not as\n> great as it might once have been for large bulk transactions. I am\n> therefore going to be going to a single 6 drive RAID 5 for my data\n> wharehouse application because I want the read speed to be availalbe. I\n> can benefit from fast reads when I want to do large data scans at the\n> expense of slightly slower insert speed.\n> \n> Alex.\n> \n> On 12/5/06, *Alexandru Coseru* <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Hello..\n> \n> Thanks for the advices..\n> \n> Actually , i'm waiting for the clovertown to show up on the market...\n> \n> Regards\n> Alex\n> \n> ----- Original Message -----\n> From: \"Sven Geisler\" <[email protected] <mailto:[email protected]>>\n> To: \"Alexandru Coseru\" < [email protected]\n> <mailto:[email protected]>>\n> Cc: <[email protected]\n> <mailto:[email protected]>>\n> Sent: Tuesday, December 05, 2006 11:57 AM\n> Subject: Re: [PERFORM] Hardware advice\n> \n> \n> > Hi Alexandru,\n> >\n> > Alexandru Coseru schrieb:\n> >> [...]\n> >> Question 1:\n> >> The RAID layout should be:\n> >> a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in\n> >> raid10 for data ?\n> >> b) 8 hdd in raid10 for all ?\n> >> c) 2 hdd in raid1 for system , 2 hdd in raid1 for\n> pg_xlog ,\n> >> 4 hdd in raid10 for data ?\n> >> Obs: I'm going for setup a) , but i want to hear your\n> thoughts as\n> >> well.\n> >\n> > This depends on you data size. I think, option a and c are good.\n> > The potential bottleneck may the RAID 1 for pg_xlog if you have huge\n> > amount of updates and insert.\n> > What is about another setup\n> >\n> > 4 hdd in RAID 10 for System and pg_xlog - System partitions are\n> normally\n> > not in heavy use and pg_xlog should be fast for writing.\n> > 4 hdd in RAID 10 for data.\n> >\n> >>\n> >>\n> >> Question 2: (Don't want to start a flame here..... but here is goes)\n> >> What filesystem should i run for data ? ext3 or xfs ?\n> >> The tables have ~ 15.000 rel_pages each. 
The biggest\n> table has\n> >> now over 30.000 pages.\n> >\n> > We have a database running with 60,000+ tables. The tables size is\n> > between a few kByte for the small tables and up to 30 GB for the\n> largest\n> > one. We had no issue with ext3 in the past.\n> >\n> >>\n> >> Question 3:\n> >> The block size in postgresql is 8kb. The strip size\n> in the\n> >> raid ctrl is 64k.\n> >> Should i increase the pgsql block size to 16 or 32 or\n> even 64k ?\n> >\n> > You should keep in mind that the file system has also a block\n> size. Ext3\n> > has as maximum 4k.\n> > I would set up the partitions aligned to the stripe size to prevent\n> > unaligned reads. I guess, you can imagine that a larger block size of\n> > postgresql may also end up in unaligned reads because the file system\n> > has a smaller block size.\n> >\n> > RAID Volume and File system set up\n> > 1. Make all partitions aligned to the RAID strip size.\n> > The first partition should be start at 128 kByte.\n> > You can do this with fdisk. after you created the partition switch\n> > to the expert mode (type x) and modify the begin of the partition\n> > (type b). You should change this value to 128 (default is 63).\n> > All other partition should also start on a multiple of 128 kByte.\n> >\n> > 2. Give the file system a hint that you work with larger block sizes.\n> > Ext3: mke2fs -b 4096 -j -R stride=2 /dev/sda1 -L LABEL\n> > I made a I/O test with PostgreSQL on a RAID system with stripe size\n> > of 64kByte and block size of 8 kByte in the RAID system.\n> > Stride=2 was the best value.\n> >\n> >\n> > PS: You should have a second XEON in your budget plan.\n> >\n> > Sven.\n> >\n> >\n> >\n> >\n> > --\n> > No virus found in this incoming message.\n> > Checked by AVG Free Edition.\n> > Version: 7.1.409 / Virus Database: 268.15.7/569 - Release Date:\n> 12/5/2006\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> <http://archives.postgresql.org>\n> \n> \n\n-- \n/This email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you are not the intended recipient, you should not\ncopy it, re-transmit it, use it or disclose its contents, but should\nreturn it to the sender immediately and delete your copy from your\nsystem. Thank you for your cooperation./\n\nSven Geisler <[email protected]> Tel +49.30.5362.1627 Fax .1638\nSenior Developer, AEC/communications GmbH Berlin, Germany\n",
"msg_date": "Wed, 06 Dec 2006 10:09:25 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice"
},
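Since Sven's point is that the reads which miss PostgreSQL's own buffers are the ones that reach the RAID or SAN, the stock buffer statistics are worth watching. The sketch below uses the standard pg_stat_database view; on 8.1-era servers block-level stats have to be enabled for these counters to move, and a "read" here may still be satisfied by the OS page cache rather than the disks.

```sql
-- Share of block requests served from shared_buffers, per database.
-- blks_read counts requests PostgreSQL had to ask the kernel for; those may
-- still come from the OS cache instead of the RAID/SAN.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database
ORDER BY blks_read DESC;
```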
{
"msg_contents": "If your data is valuable I'd recommend against RAID5 ... see <http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt>\n\nperformance aside, I'd advise against RAID5 in almost all circumstances. Why take chances ?\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Sven Geisler\nSent:\tWed 12/6/2006 1:09 AM\nTo:\tAlex Turner\nCc:\tAlexandru Coseru; [email protected]\nSubject:\tRe: [PERFORM] Hardware advice\n\nHi Alex,\n\nPlease check out <http://www.powerpostgresql.com/PerfList> before you\nuse RAID 5 for PostgreSQL.\n\nAnyhow, In a larger scale you end up in the response time of the I/O\nsystem for an read or write. The read is in modern RAID and SAN\nenvironments the part where you have to focus when you want to tune your\nsystem because most RAID and SAN system can buffer write.\nPostgreSQL does use the Linux file system cache which is normally much\nlarger then the RAID or SAN cache for reading. This means whenever a\nPostgreSQL read goes to the RAID or SAN sub system the response time of\nthe hard disk will become interesting.\nI guess you can imagine that multiple reads to the same spins are\ncausing an delay in the response time.\n\n\nAlexandru,\n\nYou should have two XEONs, what every your core count is.\nThis would use the full benefit of the memory architecture.\nYou know two FSBs and two memory channels.\n\nCheers\nSven\n\nAlex Turner schrieb:\n> The test that I did - which was somewhat limited, showed no benefit\n> splitting disks into seperate partitions for large bulk loads.\n> \n> The program read from one very large file and wrote the input out to two\n> other large files.\n> \n> The totaly throughput on a single partition was close to the maximum\n> theoretical for that logical drive, even though the process was reading\n> and writing to three seperate places on the disk. I don't know what\n> this means for postgresql setups directly, but I would postulate that\n> the benefit from splitting pg_xlog onto a seperate spindle is not as\n> great as it might once have been for large bulk transactions. I am\n> therefore going to be going to a single 6 drive RAID 5 for my data\n> wharehouse application because I want the read speed to be availalbe. I\n> can benefit from fast reads when I want to do large data scans at the\n> expense of slightly slower insert speed.\n> \n> Alex.\n> \n> On 12/5/06, *Alexandru Coseru* <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Hello..\n> \n> Thanks for the advices..\n> \n> Actually , i'm waiting for the clovertown to show up on the market...\n> \n> Regards\n> Alex\n> \n> ----- Original Message -----\n> From: \"Sven Geisler\" <[email protected] <mailto:[email protected]>>\n> To: \"Alexandru Coseru\" < [email protected]\n> <mailto:[email protected]>>\n> Cc: <[email protected]\n> <mailto:[email protected]>>\n> Sent: Tuesday, December 05, 2006 11:57 AM\n> Subject: Re: [PERFORM] Hardware advice\n> \n> \n> > Hi Alexandru,\n> >\n> > Alexandru Coseru schrieb:\n> >> [...]\n> >> Question 1:\n> >> The RAID layout should be:\n> >> a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in\n> >> raid10 for data ?\n> >> b) 8 hdd in raid10 for all ?\n> >> c) 2 hdd in raid1 for system , 2 hdd in raid1 for\n> pg_xlog ,\n> >> 4 hdd in raid10 for data ?\n> >> Obs: I'm going for setup a) , but i want to hear your\n> thoughts as\n> >> well.\n> >\n> > This depends on you data size. 
I think, option a and c are good.\n> > The potential bottleneck may the RAID 1 for pg_xlog if you have huge\n> > amount of updates and insert.\n> > What is about another setup\n> >\n> > 4 hdd in RAID 10 for System and pg_xlog - System partitions are\n> normally\n> > not in heavy use and pg_xlog should be fast for writing.\n> > 4 hdd in RAID 10 for data.\n> >\n> >>\n> >>\n> >> Question 2: (Don't want to start a flame here..... but here is goes)\n> >> What filesystem should i run for data ? ext3 or xfs ?\n> >> The tables have ~ 15.000 rel_pages each. The biggest\n> table has\n> >> now over 30.000 pages.\n> >\n> > We have a database running with 60,000+ tables. The tables size is\n> > between a few kByte for the small tables and up to 30 GB for the\n> largest\n> > one. We had no issue with ext3 in the past.\n> >\n> >>\n> >> Question 3:\n> >> The block size in postgresql is 8kb. The strip size\n> in the\n> >> raid ctrl is 64k.\n> >> Should i increase the pgsql block size to 16 or 32 or\n> even 64k ?\n> >\n> > You should keep in mind that the file system has also a block\n> size. Ext3\n> > has as maximum 4k.\n> > I would set up the partitions aligned to the stripe size to prevent\n> > unaligned reads. I guess, you can imagine that a larger block size of\n> > postgresql may also end up in unaligned reads because the file system\n> > has a smaller block size.\n> >\n> > RAID Volume and File system set up\n> > 1. Make all partitions aligned to the RAID strip size.\n> > The first partition should be start at 128 kByte.\n> > You can do this with fdisk. after you created the partition switch\n> > to the expert mode (type x) and modify the begin of the partition\n> > (type b). You should change this value to 128 (default is 63).\n> > All other partition should also start on a multiple of 128 kByte.\n> >\n> > 2. Give the file system a hint that you work with larger block sizes.\n> > Ext3: mke2fs -b 4096 -j -R stride=2 /dev/sda1 -L LABEL\n> > I made a I/O test with PostgreSQL on a RAID system with stripe size\n> > of 64kByte and block size of 8 kByte in the RAID system.\n> > Stride=2 was the best value.\n> >\n> >\n> > PS: You should have a second XEON in your budget plan.\n> >\n> > Sven.\n> >\n> >\n> >\n> >\n> > --\n> > No virus found in this incoming message.\n> > Checked by AVG Free Edition.\n> > Version: 7.1.409 / Virus Database: 268.15.7/569 - Release Date:\n> 12/5/2006\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> <http://archives.postgresql.org>\n> \n> \n\n-- \n/This email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you are not the intended recipient, you should not\ncopy it, re-transmit it, use it or disclose its contents, but should\nreturn it to the sender immediately and delete your copy from your\nsystem. 
Thank you for your cooperation./\n\nSven Geisler <[email protected]> Tel +49.30.5362.1627 Fax .1638\nSenior Developer, AEC/communications GmbH Berlin, Germany\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Wed, 6 Dec 2006 01:21:22 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware advice"
}
] |
[
{
"msg_contents": "I am trying to optimize queries on one of the large table we have by\npartitioning it. To test it I created a sample script. When I use Explain\nAnalyze on one of the queries the query planer shows sequence scan on all\nthe child partitions instead of only one child containing the required data.\nI am using PostgreSQL 8.1.5 on i686-pc-mingw32.\n\n\n\nHere is my sample script:\n\n\n\n\nCREATE TABLE parent (\n monthdate date NOT NULL,\n id int4 NOT NULL,\n CONSTRAINT parent_idx PRIMARY KEY (monthdate,id )\n);\n\nCREATE TABLE child1\n(\nCONSTRAINT child1_idx PRIMARY KEY (monthdate,id),\nCONSTRAINT child1_chk CHECK (monthdate >= '2006-01-01 00:00:00'::timestamp\nwithout time zone AND monthdate < '2006-02-01 00:00:00'::timestamp without\ntime zone)\n)INHERITS (parent)\nWITHOUT OIDS;\n\nCREATE TABLE child2\n(\nCONSTRAINT child2_idx PRIMARY KEY (monthdate,id),\nCONSTRAINT child2_chk CHECK (monthdate >= '2006-02-01 00:00:00'::timestamp\nwithout time zone AND monthdate < '2006-03-01 00:00:00'::timestamp without\ntime zone)\n)INHERITS (parent)\nWITHOUT OIDS;\n\nCREATE TABLE child3\n(\nCONSTRAINT child3_idx PRIMARY KEY (monthdate,id),\nCONSTRAINT child3_chk CHECK (monthdate >= '2006-03-01 00:00:00'::timestamp\nwithout time zone AND monthdate < '2006-04-01 00:00:00'::timestamp without\ntime zone)\n)INHERITS (parent)\nWITHOUT OIDS;\n\nCREATE RULE child1rule AS\nON INSERT TO parent WHERE\n ( monthdate >= DATE '2006-01-01' AND monthdate < DATE '2006-02-01' )\nDO INSTEAD\n INSERT INTO child1 VALUES ( NEW.monthdate,NEW.id);\nCREATE RULE child2rule AS\nON INSERT TO parent WHERE\n ( monthdate >= DATE '2006-02-01' AND monthdate < DATE '2006-03-01' )\nDO INSTEAD\n INSERT INTO child2 VALUES ( NEW.monthdate,NEW.id);\n\nCREATE RULE child3rule AS\nON INSERT TO parent WHERE\n ( monthdate >= DATE '2006-03-01' AND monthdate < DATE '2006-04-01' )\nDO INSTEAD\n INSERT INTO child3 VALUES ( NEW.monthdate,NEW.id);\n\ninsert into parent values('2006-01-02',12);\ninsert into parent values('2006-02-02',13);\ninsert into parent values('2006-03-02',14);\n\nSET constraint_exclusion = on;\nSHOW constraint_exclusion;\n\nEXPLAIN ANALYZE select monthdate, id from parent where monthdate =\n'2006-03-11' and id = 13\n\n\"Result (cost=0.00..7.87 rows=4 width=8) (actual time=0.063..0.063 rows=0\nloops=1)\"\n\" -> Append (cost=0.00..7.87 rows=4 width=8) (actual\ntime=0.055..0.055rows=0 loops=1)\"\n\" -> Index Scan using parent_idx on parent (cost=0.00..4.83 rows=1\nwidth=8) (actual time=0.013..0.013 rows=0 loops=1)\"\n\" Index Cond: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\n\" -> Seq Scan on child1 parent (cost=0.00..1.01 rows=1 width=8)\n(actual time=0.012..0.012 rows=0 loops=1)\"\n\" Filter: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\n\" -> Seq Scan on child2 parent (cost=0.00..1.01 rows=1 width=8)\n(actual time=0.005..0.005 rows=0 loops=1)\"\n\" Filter: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\n\" -> Seq Scan on child3 parent (cost=0.00..1.01 rows=1 width=8)\n(actual time=0.005..0.005 rows=0 loops=1)\"\n\" Filter: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\n\"Total runtime: 0.225 ms\"\n\n\n\nI am interested to now what I am doing wrong in above scenario because of\nwhich planner is not optimizing this simple query. Any insight will be\nappreciated\n\n\n\nThank you,\n\n\n\n- Fayza\n\n\nI am trying to optimize queries on one of the large table we have by partitioning it. To test it I created a sample script. 
When I use Explain Analyze on one of the queries the query planer shows sequence scan on all the child partitions instead of only one child containing the required data. I am using PostgreSQL \n8.1.5 on i686-pc-mingw32. \n \nHere is my sample script:\n \n \nCREATE TABLE parent ( monthdate date NOT NULL, id int4 NOT NULL, CONSTRAINT parent_idx PRIMARY KEY (monthdate,id ));\nCREATE TABLE child1(CONSTRAINT child1_idx PRIMARY KEY (monthdate,id),CONSTRAINT child1_chk CHECK (monthdate >= '2006-01-01 00:00:00'::timestamp without time zone AND monthdate < '2006-02-01 00:00:00'::timestamp without time zone)\n)INHERITS (parent)WITHOUT OIDS;\nCREATE TABLE child2(CONSTRAINT child2_idx PRIMARY KEY (monthdate,id),CONSTRAINT child2_chk CHECK (monthdate >= '2006-02-01 00:00:00'::timestamp without time zone AND monthdate < '2006-03-01 00:00:00'::timestamp without time zone)\n)INHERITS (parent)WITHOUT OIDS;\nCREATE TABLE child3(CONSTRAINT child3_idx PRIMARY KEY (monthdate,id),CONSTRAINT child3_chk CHECK (monthdate >= '2006-03-01 00:00:00'::timestamp without time zone AND monthdate < '2006-04-01 00:00:00'::timestamp without time zone)\n)INHERITS (parent)WITHOUT OIDS;\n\nCREATE RULE child1rule ASON INSERT TO parent WHERE ( monthdate >= DATE '2006-01-01' AND monthdate < DATE '2006-02-01' )DO INSTEAD INSERT INTO child1 VALUES ( NEW.monthdate,NEW.id);\nCREATE RULE child2rule ASON INSERT TO parent WHERE ( monthdate >= DATE '2006-02-01' AND monthdate < DATE '2006-03-01' )DO INSTEAD INSERT INTO child2 VALUES ( NEW.monthdate,NEW.id);\nCREATE RULE child3rule ASON INSERT TO parent WHERE ( monthdate >= DATE '2006-03-01' AND monthdate < DATE '2006-04-01' )DO INSTEAD INSERT INTO child3 VALUES ( NEW.monthdate,NEW.id);\ninsert into parent values('2006-01-02',12);insert into parent values('2006-02-02',13);insert into parent values('2006-03-02',14);\nSET constraint_exclusion = on;SHOW constraint_exclusion;\nEXPLAIN ANALYZE select monthdate, id from parent where monthdate = '2006-03-11' and id = 13\n \n\"Result (cost=0.00..7.87 rows=4 width=8) (actual time=0.063..0.063 rows=0 loops=1)\"\" -> Append (cost=0.00..7.87 rows=4 width=8) (actual time=0.055..0.055 rows=0 loops=1)\"\" -> Index Scan using parent_idx on parent (cost=\n0.00..4.83 rows=1 width=8) (actual time=0.013..0.013 rows=0 loops=1)\"\" Index Cond: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\" -> Seq Scan on child1 parent (cost=\n0.00..1.01 rows=1 width=8) (actual time=0.012..0.012 rows=0 loops=1)\"\" Filter: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\" -> Seq Scan on child2 parent (cost=0.00..1.01\n rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=1)\"\" Filter: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\" -> Seq Scan on child3 parent (cost=0.00..1.01 rows=1 width=8) (actual time=\n0.005..0.005 rows=0 loops=1)\"\" Filter: ((monthdate = '2006-03-11'::date) AND (id = 13))\"\"Total runtime: 0.225 ms\"\n \nI am interested to now what I am doing wrong in above scenario because of which planner is not optimizing this simple query. Any insight will be appreciated\n\n \nThank you,\n \n- Fayza",
"msg_date": "Mon, 4 Dec 2006 03:21:51 +0500",
"msg_from": "\"Fayza Sultan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Enabling constraint_exclusion does not avoid scanning all child\n\tpartitions"
},
{
"msg_contents": "\"Fayza Sultan\" <[email protected]> writes:\n> CREATE TABLE parent (\n> monthdate date NOT NULL,\n> id int4 NOT NULL,\n> CONSTRAINT parent_idx PRIMARY KEY (monthdate,id )\n> );\n\n> CREATE TABLE child1\n> (\n> CONSTRAINT child1_idx PRIMARY KEY (monthdate,id),\n> CONSTRAINT child1_chk CHECK (monthdate >= '2006-01-01 00:00:00'::timestamp\n> without time zone AND monthdate < '2006-02-01 00:00:00'::timestamp without\n> time zone)\n\nmonthdate is date, not timestamp. See the caveat in the documentation\nabout avoiding cross-type comparisons when formulating constraints for\nconstraint exclusion to use.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Dec 2006 17:25:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Enabling constraint_exclusion does not avoid scanning all child\n\tpartitions"
},
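Following Tom's point, one way the child constraints might be restated so the comparison stays date against date (letting constraint exclusion prune the other children) is sketched below. The table and constraint names come from the original script, and the same change would apply to child2 and child3.

```sql
-- Compare the date column against date literals, not timestamp casts
ALTER TABLE child1 DROP CONSTRAINT child1_chk;
ALTER TABLE child1 ADD CONSTRAINT child1_chk
    CHECK (monthdate >= DATE '2006-01-01' AND monthdate < DATE '2006-02-01');

-- Re-check the plan: with constraint_exclusion = on it should now append
-- only the parent and the single matching child
SET constraint_exclusion = on;
EXPLAIN SELECT monthdate, id
FROM parent
WHERE monthdate = DATE '2006-03-11' AND id = 13;
```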
{
"msg_contents": "Your point solved my problem.\n\nThank you\n\n-Fayza\n\n\nOn 12/4/06, Tom Lane <[email protected]> wrote:\n>\n> \"Fayza Sultan\" <[email protected]> writes:\n> > CREATE TABLE parent (\n> > monthdate date NOT NULL,\n> > id int4 NOT NULL,\n> > CONSTRAINT parent_idx PRIMARY KEY (monthdate,id )\n> > );\n>\n> > CREATE TABLE child1\n> > (\n> > CONSTRAINT child1_idx PRIMARY KEY (monthdate,id),\n> > CONSTRAINT child1_chk CHECK (monthdate >= '2006-01-01\n> 00:00:00'::timestamp\n> > without time zone AND monthdate < '2006-02-01 00:00:00'::timestamp\n> without\n> > time zone)\n>\n> monthdate is date, not timestamp. See the caveat in the documentation\n> about avoiding cross-type comparisons when formulating constraints for\n> constraint exclusion to use.\n>\n> regards, tom lane\n>\n\nYour point solved my problem.\n \nThank you\n \n-Fayza \nOn 12/4/06, Tom Lane <[email protected]> wrote:\n\"Fayza Sultan\" <[email protected]> writes:\n> CREATE TABLE parent (> monthdate date NOT NULL,> id int4 NOT NULL,> CONSTRAINT parent_idx PRIMARY KEY (monthdate,id )> );> CREATE TABLE child1> (> CONSTRAINT child1_idx PRIMARY KEY (monthdate,id),\n> CONSTRAINT child1_chk CHECK (monthdate >= '2006-01-01 00:00:00'::timestamp> without time zone AND monthdate < '2006-02-01 00:00:00'::timestamp without> time zone)monthdate is date, not timestamp. See the caveat in the documentation\nabout avoiding cross-type comparisons when formulating constraints forconstraint exclusion to use. regards, tom lane",
"msg_date": "Mon, 4 Dec 2006 03:37:37 +0500",
"msg_from": "\"Fayza Sultan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Enabling constraint_exclusion does not avoid scanning all child\n\tpartitions"
}
] |
[
{
"msg_contents": "I have always been frustrated by the wildly erratic performance of our \npostgresql 8 server. We run aprogram that does heavy data importing via a \nheuristics-based import program. Sometime records being imported would just \nfly by, sometimes they would crawl. The import program imports records from \na flat table and uses heuristics to normalise and dedupe. This is done via a \nsequence of updates and inserts bracketed by a start-end transaction.\n\nAt a certain checkpoint representing about 1,000,000 rows read and imported, \nI ran a vacuum/analyze on all of the tables in the target schema. To my \nhorror, performance reduced to less than TEN percent of what it was befor \nthe vacuum/analyze. I thought that turning autovacuum off and doing my own \nvacuuming would improve performance, but it seems to be killing it.\n\nI have since turned autovacuum on and am tearing my hair out wathcing the \nimported records crawl by. I have tried vacuuming the entire DB as well as \nrebuilding indexes. Nothing. Any ideas what could have happened? What is the \nright thing to do?\n\nCarlo \n\n\n",
"msg_date": "Mon, 4 Dec 2006 00:16:22 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is Vacuum/analyze destroying my performance?"
},
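One low-tech way to see whether the vacuum itself is behaving, and how much dead space the import leaves behind, is to run it verbosely on one of the import tables; the table name below is only a placeholder.

```sql
-- Per-table report: how many dead row versions were removed, how many could
-- not be removed yet, and how many pages the table and its indexes occupy
VACUUM VERBOSE ANALYZE my_import_table;

-- The planner's view of the same table afterwards (page count and row estimate)
SELECT relpages, reltuples FROM pg_class WHERE relname = 'my_import_table';
```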
{
"msg_contents": "Update on this issue, I \"solved\" my problem by doing the following:\n\n1) Stopped the import, and did a checkpoint backup on my import target \nschema\n2) Dropped the import target schema\n3) Restored a backup from a previous checkpoint when the tables were much \nsmaller\n4) Performed a VACUUM/ANALYZE on all of the tables in the import target \nschema in that smaller state\n5) Dropped the import target schema again\n6) Restored the checkpoint backup of the larger data set referred to in step \n1\n7) Rstarted the import from where it left off\n\nThe result: the import is flying again, with 10-20 times the performance. \nThe import runs as 4 different TCL scripts in parallel, importing difernt \nsegments of the table. The problem that I have when the import runs at this \nspeed is that I hve to constantly watch for lock-ups. Previously I had \nreported that when these multiple processes are running at high speed, \nPostgreSQL occasionally freezes one or more of the processes by never \nretutning from a COMMIT. I look at the target tables, and it seems that the \ncommit has gone through.\n\nThis used to be a disaster because Ithought I had to restart every frozen \nproess by killing the script and restarting at the last imported row.\n\nNow I have found a way to un-freeze the program: I find the frozen process \nvia PgAdmin III and send a CANCEL. To my surprise, the import continues as i \nnothing happened. Still incredibly inconvenient and laborious, but at least \nit's a little less tedious.\n\nCould these two problems - the weird slowdowns after a VACUUM/ANALYZE and \nthe frequent lockups when the import process is running quickly - be \nrelated?\n\nCarlo \n\n\n",
"msg_date": "Mon, 4 Dec 2006 00:49:45 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is Vacuum/analyze destroying my performance?"
},
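For the sessions that appear to hang in COMMIT, the stock statistics and lock views can show whether a backend is actually waiting on a lock and what it is running. This is a sketch for the 8.x-era catalogs; the column names changed in later releases (procpid, current_query and waiting became pid, query and the wait_event columns).

```sql
-- Backends currently waiting on a lock, and what they are executing
SELECT procpid, usename, waiting, query_start, current_query
FROM pg_stat_activity
WHERE waiting;

-- Lock requests that have not been granted yet
SELECT locktype, relation::regclass AS relation, pid, mode, granted
FROM pg_locks
WHERE NOT granted;
```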
{
"msg_contents": "Just a wild guess, but the performance problem sounds like maybe as your \ndata changes, eventually the planner moves some query from an index scan \nto a sequential scan, do you have any details on what queries are taking \nso long when things are running slow? You can turn on the GUC var \n\"log_min_duration_statement\" and see what queries are slow and then \nmanually check them with an explain analyze, that might help.\n\nMatt\n\n\nCarlo Stonebanks wrote:\n> Update on this issue, I \"solved\" my problem by doing the following:\n> \n> 1) Stopped the import, and did a checkpoint backup on my import target \n> schema\n> 2) Dropped the import target schema\n> 3) Restored a backup from a previous checkpoint when the tables were much \n> smaller\n> 4) Performed a VACUUM/ANALYZE on all of the tables in the import target \n> schema in that smaller state\n> 5) Dropped the import target schema again\n> 6) Restored the checkpoint backup of the larger data set referred to in step \n> 1\n> 7) Rstarted the import from where it left off\n> \n> The result: the import is flying again, with 10-20 times the performance. \n> The import runs as 4 different TCL scripts in parallel, importing difernt \n> segments of the table. The problem that I have when the import runs at this \n> speed is that I hve to constantly watch for lock-ups. Previously I had \n> reported that when these multiple processes are running at high speed, \n> PostgreSQL occasionally freezes one or more of the processes by never \n> retutning from a COMMIT. I look at the target tables, and it seems that the \n> commit has gone through.\n> \n> This used to be a disaster because Ithought I had to restart every frozen \n> proess by killing the script and restarting at the last imported row.\n> \n> Now I have found a way to un-freeze the program: I find the frozen process \n> via PgAdmin III and send a CANCEL. To my surprise, the import continues as i \n> nothing happened. Still incredibly inconvenient and laborious, but at least \n> it's a little less tedious.\n> \n> Could these two problems - the weird slowdowns after a VACUUM/ANALYZE and \n> the frequent lockups when the import process is running quickly - be \n> related?\n> \n> Carlo \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n",
"msg_date": "Mon, 04 Dec 2006 09:35:44 -0500",
"msg_from": "Matthew O'Connor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is Vacuum/analyze destroying my performance?"
},
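A minimal sketch of Matthew's suggestion follows: log anything slower than a threshold, then pull one of the logged statements into EXPLAIN ANALYZE. The threshold and the sample query are placeholders, not values from the thread.

```sql
-- Value is in milliseconds; 0 logs every statement, -1 disables logging.
-- Setting it per session needs superuser; it can also live in postgresql.conf.
SET log_min_duration_statement = 250;

-- Then take a statement the log flags and inspect its real plan, e.g.
-- (hypothetical import-style lookup):
EXPLAIN ANALYZE
SELECT * FROM staging_rows WHERE source_key = 'ABC123';
```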
{
"msg_contents": "\"\"Matthew O'Connor\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> Just a wild guess, but the performance problem sounds like maybe as your \n> data changes, eventually the planner moves some query from an index scan \n> to a sequential scan, do you have any details on what queries are taking \n> so long when things are running slow? You can turn on the GUC var \n> \"log_min_duration_statement\" and see what queries are slow and then \n> manually check them with an explain analyze, that might help.\n>\n> Matt\n\nThis is pretty well what I think is happening - I expect all queries to \neventually move from seq scans to index scans. I actually have a SQL logging \nopion built into the import app.\n\nI just can't figure out how the planner can be so wrong. We are running a 4 \nCPU server (two dual core 3.2 GHz Xeons) with 4GB RAM and Windows Server \n2003 x64 and a PERC RAID subsystem (I don't know the RAID type). I know that \nthe metrics for the planner can be changed - is the default config for \npostgesql not suitable for our setup? For this server, we would like to be \noptimised for high speed over a few connections, rather than the classic \nbalanced speed over many connections. \n\n\n",
"msg_date": "Mon, 4 Dec 2006 12:08:42 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is Vacuum/analyze destroying my performance?"
},
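Before touching postgresql.conf on a machine like this, the planner-related settings can be tried per session and the suspect statements re-explained. The numbers below are only illustrative (8.x-era servers take effective_cache_size in 8 kB pages), and the query is a placeholder.

```sql
-- Illustrative value only: 262144 pages x 8 kB = roughly 2 GB of assumed OS cache
SET effective_cache_size = 262144;
-- Lower random_page_cost if most reads are expected to come from cache
SET random_page_cost = 2.5;

-- Re-run a problem statement and compare the chosen plan (placeholder query)
EXPLAIN ANALYZE
SELECT * FROM staging_rows WHERE last_name = 'SMITH' AND city = 'TORONTO';

RESET effective_cache_size;
RESET random_page_cost;
```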
{
"msg_contents": "Carlo Stonebanks wrote:\n>> Just a wild guess, but the performance problem sounds like maybe as your \n>> data changes, eventually the planner moves some query from an index scan \n>> to a sequential scan, do you have any details on what queries are taking \n>> so long when things are running slow? You can turn on the GUC var \n>> \"log_min_duration_statement\" and see what queries are slow and then \n>> manually check them with an explain analyze, that might help.\n>> \n> This is pretty well what I think is happening - I expect all queries to \n> eventually move from seq scans to index scans. I actually have a SQL logging \n> opion built into the import app.\n>\n> I just can't figure out how the planner can be so wrong. We are running a 4 \n> CPU server (two dual core 3.2 GHz Xeons) with 4GB RAM and Windows Server \n> 2003 x64 and a PERC RAID subsystem (I don't know the RAID type). I know that \n> the metrics for the planner can be changed - is the default config for \n> postgesql not suitable for our setup? For this server, we would like to be \n> optimised for high speed over a few connections, rather than the classic \n> balanced speed over many connections. \n\nIf it is the planner choosing a very bad plan, then I don't think your \nhardware has anything to do with it. And, we can't diagnose why the \nplanner is doing what it's doing without a lot more detail. I suggest \nyou do something to figure out what queries are taking so long, then \nsend us an explain analyze, that might shine some light on the subject.\n\n\n\n",
"msg_date": "Tue, 05 Dec 2006 00:11:58 -0500",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is Vacuum/analyze destroying my performance?"
}
] |
[
{
"msg_contents": "Hi List,\n\nWe've been doing some benchmarks lately (one of them made it to the \nPostgreSQL frontpage) with postgresql 8.2 dev (cvs checkout of 3 june \n2006). But we prefer of course to run a more production-like version and \ninstalled postgresql 8.2rc1.\n\nAs it turns out after a dump/restore (to go from 820 to 822), copying \nthe configuration files doing a fresh 'vacuumdb -z' (z is analyze) and \n'clusterdb' the RC1 processes about 50% *less* (webpage)requests than \nthe 8.2dev we had, on the same machine/linux kernel/etc. On all \ncpu-configurations and loads we throw at it. Since its a read-mostly \ndatabase the location on disk should matter only very slightly.\n\nFor instance, with the system currently at hand it peaks at about 20 \nconcurrent clients in pg8.2 dev with 465407 requests processed in a 10 \nminuten timeframe. 8.2rc1 can only achieve 332499 requests in that same \ntime frame with the same load and has a peak of 335995 with 35 \nconcurrent clients (but with 30 it only reached 287624). And we see the \nsame for all loads we throw at them.\n\nSo either I'm missing something, there is a (significant enough) \ndifference in how the tables where analyzed or there have been some \ncode-changes since then to change the behaviour and thereby decreasing \nperformance in our set-up.\n\nPreferably I'd load the statistics from the 8.2-dev database in the \n8.2-rc1 one, but a simple insert or copy-statement won't work due to the \n'anyarray'-fields of pg_statistic, will it?\n\nI'll run another analyze on the database to see if that makes any \ndifference, but after that I'm not sure what to check first to figure \nout where things go wrong?\n\nBest regards,\n\nArjen van der Meijden\nTweakers.net\n",
"msg_date": "Mon, 04 Dec 2006 08:44:55 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.2rc1 (much) slower than 8.2dev?"
},
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> I'll run another analyze on the database to see if that makes any \n> difference, but after that I'm not sure what to check first to figure \n> out where things go wrong?\n\nLook for changes in plans?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Dec 2006 09:53:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.2rc1 (much) slower than 8.2dev? "
},
{
"msg_contents": "Tom Lane wrote:\n> Arjen van der Meijden <[email protected]> writes:\n>> I'll run another analyze on the database to see if that makes any \n>> difference, but after that I'm not sure what to check first to figure \n>> out where things go wrong?\n> \n> Look for changes in plans?\n\nYeah, there are a few number of small changes in plans and costs \nestimated. I've a large list of queries executed against both databases, \nand I haven't seen any differences in row-estimates, so the analyze's \nhave yielded similar enough results.\n\nI'm not sure whether some of the changes are for better or worse, you \ncan probably spot that a bit faster than I can. I saw a few index scans \nreplaced by seq scans (on small tables), all index scans seem to have \ndoubled in cost? And I saw a few bitmap scans in stead of normal index \nscans and more such small changes. But not so small if you execute a \nhundreds of thousands of those queries.\n\nSince I'd rather not send the entire list of queries to the entire \nworld, is it OK to send both explain analyze-files to you off list?\n\nBest regards,\n\nArjen\n",
"msg_date": "Mon, 04 Dec 2006 17:41:14 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.2rc1 (much) slower than 8.2dev?"
},
{
"msg_contents": "On Mon, Dec 04, 2006 at 05:41:14PM +0100, Arjen van der Meijden wrote:\n> Since I'd rather not send the entire list of queries to the entire \n> world, is it OK to send both explain analyze-files to you off list?\n\nCan you post them on the web somewhere so everyone can look at them?\n\nAlso, are you looking at EXPLAIN or EXPLAIN ANALYZE?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 7 Dec 2006 00:01:33 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.2rc1 (much) slower than 8.2dev?"
},
{
"msg_contents": "On 7-12-2006 7:01 Jim C. Nasby wrote:\n> Can you post them on the web somewhere so everyone can look at them?\nNo, its not (only) the size that matters, its the confidentiality I'm \nnot allowed to just break by myself. Well, at least not on a scale like \nthat. I've been mailing off-list with Tom and we found at least one \nquery that in some circumstances takes a lot more time than it should, \ndue to it mistakenly chosing to do a bitmap index scan rather than a \nnormal index scan.\n\n> Also, are you looking at EXPLAIN or EXPLAIN ANALYZE?\nExplain analyze and normal query execution times of several millions of \nqueries executed on both versions of postgresql, so we can say something \nabout them statistically.\n\nBest regards,\n\nArjen\n",
"msg_date": "Thu, 07 Dec 2006 10:13:17 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.2rc1 (much) slower than 8.2dev?"
},
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> I've been mailing off-list with Tom and we found at least one \n> query that in some circumstances takes a lot more time than it should, \n> due to it mistakenly chosing to do a bitmap index scan rather than a \n> normal index scan.\n\nJust to clue folks in: the problem queries seem to be cases like\n\n WHERE col1 = 'const'\n AND col2 = othertab.colx\n AND col3 IN (several hundred integers)\n\nwhere the table has an index on (col1,col2,col3). 8.2 is generating\na plan involving a nestloop with inner bitmap indexscan on this index,\nand using all three of these WHERE clauses with the index. The ability\nto use an IN clause (ie, ScalarArrayOpExpr) in an index condition is\nnew in 8.2, and we seem to have a few bugs left in the cost estimation\nfor it. The problem is that a ScalarArrayOpExpr effectively causes a\nBitmapOr across N index scans using each of the array elements as an\nindividual scan qualifier. So the above amounts to several hundred\nindex probes for each outer row. In Arjen's scenario it seems that\nthe first two index columns are already pretty selective, and it comes\nout a lot faster if you just do one indexscan using the first two\ncolumns and then apply the IN-condition as a filter to the relatively\nsmall number of rows you get that way.\n\nWhat's not clear to me yet is why the 8.2dev code didn't fall into this\nsame trap, because the ScalarArrayOpExpr indexing code was already there\non 3-June. But we didn't and still don't have any code that considers\nthe possibility that a potential indexqual condition should be\ndeliberately *not* used with the index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2006 10:38:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.2rc1 (much) slower than 8.2dev? "
},
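For reference, the query shape Tom describes looks roughly like the sketch below (all table, column and constant names are invented). Comparing the plan with bitmap scans disabled for the session is one way to confirm whether the two-column index scan plus a filter would have been cheaper.

```sql
-- Index assumed on big_table (col1, col2, col3); in 8.2 the IN-list can be
-- pushed into the index as a BitmapOr, i.e. one index probe per list element
-- for every outer row of the nestloop.
EXPLAIN ANALYZE
SELECT t.*
FROM   big_table t
JOIN   other_tab o ON t.col2 = o.colx
WHERE  t.col1 = 'const'
  AND  t.col3 IN (1, 2, 3, 5, 8, 13, 21, 34);

-- Re-run the same statement with bitmap scans disabled and compare, then restore
SET enable_bitmapscan = off;
-- ... same EXPLAIN ANALYZE as above ...
RESET enable_bitmapscan;
```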
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n>> The file is attached. (bz)Grepping for 'Was executed on 8.2 final slower \n>> than 8.2 dev' with -A1 will show you the timecomparisons, which you \n>> could than look up using your favorite editor.\n\nI dug through this file, and it seems that all the cases where 8.2 final\nis choosing a really markedly worse plan are instances of the same\nquery, for which 8.2 chose a nestloop plan with an inner indexscan on\na clustered index. 8.2 final is failing to choose that because in the\nnestloop case it's not giving any cost credit for the index's\ncorrelation. Obviously a clustered index should have very high\ncorrelation. In the test case, half a dozen or so heap tuples need to\nbe fetched per index scan, and because of the correlation it's likely\nthey are all on the same heap page ... but the costing is assuming that\nthey are scattered randomly. I had punted on this point back in June\nbecause it seemed too complicated to handle in combination with the\ncross-scan caching, but obviously we need to do something. After\nthinking a bit more, I propose the attached patch --- to be applied on\ntop of the other ones I sent you --- which seems to fix the problem\nhere. Please give it a try.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/optimizer/path/costsize.c.orig\tFri Dec 8 15:58:34 2006\n--- src/backend/optimizer/path/costsize.c\tThu Dec 14 15:37:40 2006\n***************\n*** 276,288 ****\n \tif (outer_rel != NULL && outer_rel->rows > 1)\n \t{\n \t\t/*\n! \t\t * For repeated indexscans, scale up the number of tuples fetched in\n \t\t * the Mackert and Lohman formula by the number of scans, so that we\n! \t\t * estimate the number of pages fetched by all the scans. Then\n \t\t * pro-rate the costs for one scan. In this case we assume all the\n! \t\t * fetches are random accesses. XXX it'd be good to include\n! \t\t * correlation in this model, but it's not clear how to do that\n! \t\t * without double-counting cache effects.\n \t\t */\n \t\tdouble\t\tnum_scans = outer_rel->rows;\n \n--- 276,287 ----\n \tif (outer_rel != NULL && outer_rel->rows > 1)\n \t{\n \t\t/*\n! \t\t * For repeated indexscans, the appropriate estimate for the\n! \t\t * uncorrelated case is to scale up the number of tuples fetched in\n \t\t * the Mackert and Lohman formula by the number of scans, so that we\n! \t\t * estimate the number of pages fetched by all the scans; then\n \t\t * pro-rate the costs for one scan. In this case we assume all the\n! \t\t * fetches are random accesses.\n \t\t */\n \t\tdouble\t\tnum_scans = outer_rel->rows;\n \n***************\n*** 291,297 ****\n \t\t\t\t\t\t\t\t\t\t\t(double) index->pages,\n \t\t\t\t\t\t\t\t\t\t\troot);\n \n! \t\trun_cost += (pages_fetched * random_page_cost) / num_scans;\n \t}\n \telse\n \t{\n--- 290,316 ----\n \t\t\t\t\t\t\t\t\t\t\t(double) index->pages,\n \t\t\t\t\t\t\t\t\t\t\troot);\n \n! \t\tmax_IO_cost = (pages_fetched * random_page_cost) / num_scans;\n! \n! \t\t/*\n! \t\t * In the perfectly correlated case, the number of pages touched\n! \t\t * by each scan is selectivity * table_size, and we can use the\n! \t\t * Mackert and Lohman formula at the page level to estimate how\n! \t\t * much work is saved by caching across scans. We still assume\n! \t\t * all the fetches are random, though, which is an overestimate\n! \t\t * that's hard to correct for without double-counting the cache\n! \t\t * effects. (But in most cases where such a plan is actually\n! 
\t\t * interesting, only one page would get fetched per scan anyway,\n! \t\t * so it shouldn't matter much.)\n! \t\t */\n! \t\tpages_fetched = ceil(indexSelectivity * (double) baserel->pages);\n! \n! \t\tpages_fetched = index_pages_fetched(pages_fetched * num_scans,\n! \t\t\t\t\t\t\t\t\t\t\tbaserel->pages,\n! \t\t\t\t\t\t\t\t\t\t\t(double) index->pages,\n! \t\t\t\t\t\t\t\t\t\t\troot);\n! \n! \t\tmin_IO_cost = (pages_fetched * random_page_cost) / num_scans;\n \t}\n \telse\n \t{\n***************\n*** 312,326 ****\n \t\tmin_IO_cost = random_page_cost;\n \t\tif (pages_fetched > 1)\n \t\t\tmin_IO_cost += (pages_fetched - 1) * seq_page_cost;\n \n! \t\t/*\n! \t\t * Now interpolate based on estimated index order correlation to get\n! \t\t * total disk I/O cost for main table accesses.\n! \t\t */\n! \t\tcsquared = indexCorrelation * indexCorrelation;\n \n! \t\trun_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n! \t}\n \n \t/*\n \t * Estimate CPU costs per tuple.\n--- 331,345 ----\n \t\tmin_IO_cost = random_page_cost;\n \t\tif (pages_fetched > 1)\n \t\t\tmin_IO_cost += (pages_fetched - 1) * seq_page_cost;\n+ \t}\n \n! \t/*\n! \t * Now interpolate based on estimated index order correlation to get\n! \t * total disk I/O cost for main table accesses.\n! \t */\n! \tcsquared = indexCorrelation * indexCorrelation;\n \n! \trun_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n \n \t/*\n \t * Estimate CPU costs per tuple.",
"msg_date": "Thu, 14 Dec 2006 16:10:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 8.2rc1 (much) slower than 8.2dev? "
}
] |
[
{
"msg_contents": "Hi\n\n \n\nWe are migrating our Postgres 7.3.4 application to postgres 8.1.5 and also moving it to a server with a much larger hardware configuration as well. The server will have the following specification.\n\n \n\n- 4 physical CPUs (hyperthreaded to 8)\n\n- 32 GB RAM\n\n- x86_64 architecture\n\n- RedHat AS 4\n\n- postgres 8.1.5\n\n \n\nIve been taking a look at the various postgres tuning parameters, and have come up with the following settings. \n\n \n\nshared_buffers - 50,000 - From what Id read, increasing this number higher than this wont have any advantages ?\n\n \n\neffective_cache_size = 524288 - My logic was I thought Id give the DB 16GB of the 32, and based this number on 25% of that number, sound okay?\n\n \n\nwork_mem - 32768 - I only have up to 30 connections in parallel, and more likely less than ½ that number. My sql is relatively simple, so figured even if there was 5 sorts per query and 30 queries in parallel, 32768 would use up 4GB of memory.. Does this number sound too high?\n\n \n\nMaintenance_work_mem = 1048576 - Figured Id allocate 1GB for this.\n\n \n\nfsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and didn't want to have to worry about this number in future so doubled it.\n\n \n\nfsm_pages = 200,000 - Based this on some statistics about the number of pages freed from a vacuum on older server. Not sure if its fair to calculate this based on vacuum stats of 7.3.4 server?\n\n \n\nDo these numbers look reasonable given the machine above? Any other settings that I should be paying particular consideration too?\n\n \n\nThanks \n\n\nMark\n\n \n\n\n\n\n\n\n\n \n \n \nHi\n \nWe are migrating our Postgres 7.3.4 application to\npostgres 8.1.5 and also moving it to a server with a much larger hardware\nconfiguration as well. The server will have the following\nspecification.\n \n- 4 physical CPUs (hyperthreaded to 8)\n- 32 GB RAM\n- x86_64 architecture\n- RedHat AS 4\n- postgres 8.1.5\n \nIve been taking a look at the various postgres tuning\nparameters, and have come up with the following settings. \n \nshared_buffers – 50,000 - \n>From what Id read, increasing this number higher than this wont have any\nadvantages ?\n \neffective_cache_size = 524288 - My logic was I\nthought Id give the DB 16GB of the 32, and based this number on 25% of that\nnumber, sound okay?\n \nwork_mem – 32768 - I only have up to 30\nconnections in parallel, and more likely less than ½ that number. My\nsql is relatively simple, so figured even if there was 5 sorts per query and 30\nqueries in parallel, 32768 would use up 4GB of memory.. Does this\nnumber sound too high?\n \nMaintenance_work_mem = 1048576 – Figured Id allocate\n1GB for this.\n \nfsm_relations = 2000 - I have about 200 tables plus\nmaybe 4 or 5 indexes on each, and didn’t want to have to worry about this\nnumber in future so doubled it.\n \nfsm_pages = 200,000 – Based this on some statistics\nabout the number of pages freed from a vacuum on older server. Not\nsure if its fair to calculate this based on vacuum stats of 7.3.4 server?\n \nDo these numbers look reasonable given the machine above? Any\nother settings that I should be paying particular consideration too?\n \nThanks \n\nMark",
"msg_date": "Mon, 4 Dec 2006 12:10:59 -0500",
"msg_from": "\"Mark Lonsdale\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configuration settings for 32GB RAM server"
},
{
"msg_contents": "On Mon, 2006-12-04 at 12:10 -0500, Mark Lonsdale wrote:\n\n> - 4 physical CPUs (hyperthreaded to 8)\n> \n> - 32 GB RAM\n> \n> - x86_64 architecture\n> \n> - RedHat AS 4\n> \n> - postgres 8.1.5\n> \n> \n> \n> Ive been taking a look at the various postgres tuning parameters, and\n> have come up with the following settings. \n> \n> \n> \n> shared_buffers – 50,000 - >From what Id read, increasing this\n> number higher than this wont have any advantages ?\n> \n\nWhere did you read that? You should do some tests. Generally 25% of\nphysical memory on a dedicated box is a good point of reference (on 8.1,\nanyway). I've heard as high as 50% can give you a benefit, but I haven't\nseen that myself.\n\n\n> fsm_pages = 200,000 – Based this on some statistics about the number\n> of pages freed from a vacuum on older server. Not sure if its fair\n> to calculate this based on vacuum stats of 7.3.4 server?\n> \n\nMight as well make it a higher number because you have a lot of RAM\nanyway. It's better than running out of space in the FSM, because to\nincrease that setting you need to restart the daemon. Increasing this by\n1 only uses 6 bytes. That means you could set it to 10 times the number\nyou currently have, and it would still be insignificant.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 04 Dec 2006 09:42:57 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings for 32GB RAM server"
},
{
"msg_contents": "On Dec 4, 2006, at 12:10 PM, Mark Lonsdale wrote:\n\n\n> - 4 physical CPUs (hyperthreaded to 8)\n\ni'd tend to disable hyperthreading on Xeons...\n> shared_buffers � 50,000 - >From what Id read, increasing this \n> number higher than this wont have any advantages ?\n>\n>\nif you can, increase it until your performance no longer increases. \ni run with about 70k on a server with 8Gb of RAM.\n\n> effective_cache_size = 524288 - My logic was I thought Id give \n> the DB 16GB of the 32, and based this number on 25% of that number, \n> sound okay?\n>\n>\n\nthis number is advisory to Pg. it doesn't allocate resources, rather \nit tells Pg how much disk cache your OS will provide.\n> work_mem � 32768 - I only have up to 30 connections in parallel, \n> and more likely less than � that number. My sql is relatively \n> simple, so figured even if there was 5 sorts per query and 30 \n> queries in parallel, 32768 would use up 4GB of memory.. Does this \n> number sound too high?\nyou need to evaluate how much memory you need for your queries and \nthen decide if increasing this will help. benchmarking your own use \npatterns is the only way to do this.\n\n> Maintenance_work_mem = 1048576 � Figured Id allocate 1GB for this.\n>\n>\nI usually do this, too.\n\n> fsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 \n> indexes on each, and didn�t want to have to worry about this number \n> in future so doubled it.\n\ni usually never need to go more than the default.\n> fsm_pages = 200,000 � Based this on some statistics about the \n> number of pages freed from a vacuum on older server. Not sure if \n> its fair to calculate this based on vacuum stats of 7.3.4 server?\nOn my big DB server, this sits at 1.2 million pages. You have to \ncheck the output of vacuum verbose from time to time to ensure it is \nnot getting out of bounds; if so, you need to either vacuum more \noften or you need to pack your tables, or increase this parameter.\n\n> Do these numbers look reasonable given the machine above? Any \n> other settings that I should be paying particular consideration too?\nThey're a good starting point.",
"msg_date": "Mon, 4 Dec 2006 13:44:59 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings for 32GB RAM server"
},
{
"msg_contents": "On 12/4/06, Mark Lonsdale <[email protected]> wrote:\n>\n>\n> Maintenance_work_mem = 1048576 – Figured Id allocate 1GB for this.\n>\n\nDo you know how often and when you will be creating indexes or clustering?\nWe set ours to 2GB because of the performance gains. We've also thought\nabout testing it at 4GB. We can do this because we know during the middle\nof the night our server load drops to nearly zero. If you know you have\nwindows like that, then I would definitely suggest increasing your\nmaintenance_work_mem. It's halved the time for io intesive tasks like\ncluster.\n\nOn 12/4/06, Mark Lonsdale <[email protected]> wrote:\n\nMaintenance_work_mem = 1048576 – Figured Id allocate\n1GB for this.Do you know how often and when you will be creating indexes or clustering? We set ours to 2GB because of the performance gains. We've also thought about testing it at 4GB. We can do this because we know during the middle of the night our server load drops to nearly zero. If you know you have windows like that, then I would definitely suggest increasing your maintenance_work_mem. It's halved the time for io intesive tasks like cluster.",
"msg_date": "Mon, 4 Dec 2006 11:57:04 -0700",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings for 32GB RAM server"
},
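A sketch of what that looks like inside a maintenance window on an 8.1-era server, where maintenance_work_mem is given in kB (the 'GB' unit strings only arrived in later releases) and CLUSTER still uses the "index ON table" form; the table and index names are made up.

```sql
SET maintenance_work_mem = 2000000;            -- roughly 1.9 GB, this session only
CLUSTER some_big_table_pkey ON some_big_table; -- 8.1 syntax; newer releases use USING
RESET maintenance_work_mem;
```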
{
"msg_contents": "On Mon, 2006-12-04 at 12:57, Joshua Marsh wrote:\n> \n> On 12/4/06, Mark Lonsdale <[email protected]> wrote:\n> Maintenance_work_mem = 1048576 – Figured Id allocate 1GB for\n> this.\n> \n> \n> \n> Do you know how often and when you will be creating indexes or\n> clustering? We set ours to 2GB because of the performance gains. \n> We've also thought about testing it at 4GB. We can do this because we\n> know during the middle of the night our server load drops to nearly\n> zero. If you know you have windows like that, then I would definitely\n> suggest increasing your maintenance_work_mem. It's halved the time\n> for io intesive tasks like cluster. \n> \n\nAlso, remember that most of those settings (work_mem,\nmaintenance_work_mem) can be changed for an individual session. So, you\ncan leave work_mem at something conservative, like 8 meg, and for a\nsession that is going to run at 2am and iterate over billions of rows,\nyou can throw several gigabytes at it and not worry about that one\nsetting blowing out all the other processes on the machine.\n",
"msg_date": "Mon, 04 Dec 2006 13:23:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings for 32GB RAM server"
},
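A sketch of Scott's point: keep the global value conservative and raise it only inside the one session that runs the big nightly job (8.1-era servers take work_mem in kB; the table and query below are invented).

```sql
SET work_mem = 524288;           -- 512 MB for this session only
SELECT customer_id, count(*)     -- placeholder for the big nightly query
FROM   big_fact_table
GROUP  BY customer_id;
RESET work_mem;
```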
{
"msg_contents": "On 4-Dec-06, at 12:10 PM, Mark Lonsdale wrote:\n\n>\n>\n>\n>\n>\n>\n> Hi\n>\n>\n>\n> We are migrating our Postgres 7.3.4 application to postgres 8.1.5 \n> and also moving it to a server with a much larger hardware \n> configuration as well. The server will have the following \n> specification.\n>\n>\n>\n> - 4 physical CPUs (hyperthreaded to 8)\nTry both hyperthreaded and not, there's been some evidence that HT \nhelps us now\n> - 32 GB RAM\n>\n> - x86_64 architecture\n>\n> - RedHat AS 4\n>\n> - postgres 8.1.5\n>\n>\n>\n> Ive been taking a look at the various postgres tuning parameters, \n> and have come up with the following settings.\n>\n>\n>\n> shared_buffers � 50,000 - From what Id read, increasing this \n> number higher than this wont have any advantages ?\nThis is no longer true, 25% of available memory is a good starting \nplace, and go up from there\n>\n>\n> effective_cache_size = 524288 - My logic was I thought Id give \n> the DB 16GB of the 32, and based this number on 25% of that number, \n> sound okay?\n>\n>\nthis should be around 3/4 of available memory or 24G\n> work_mem � 32768 - I only have up to 30 connections in parallel, \n> and more likely less than � that number. My sql is relatively \n> simple, so figured even if there was 5 sorts per query and 30 \n> queries in parallel, 32768 would use up 4GB of memory.. Does this \n> number sound too high?\n>\n>\n>\n> Maintenance_work_mem = 1048576 � Figured Id allocate 1GB for this.\n>\n>\n>\n> fsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 \n> indexes on each, and didn�t want to have to worry about this number \n> in future so doubled it.\n>\n>\n>\n> fsm_pages = 200,000 � Based this on some statistics about the \n> number of pages freed from a vacuum on older server. Not sure if \n> its fair to calculate this based on vacuum stats of 7.3.4 server?\nthis is dependent on your application\n>\n>\n> Do these numbers look reasonable given the machine above? Any \n> other settings that I should be paying particular consideration too?\n\nautovacuum settings.\n\n\n>\n>\n> Thanks\n>\n>\n> Mark\n>\n>\n>\n>\n\n\nOn 4-Dec-06, at 12:10 PM, Mark Lonsdale wrote: Hi We are migrating our Postgres 7.3.4 application to postgres 8.1.5 and also moving it to a server with a much larger hardware configuration as well. The server will have the following specification. - 4 physical CPUs (hyperthreaded to 8)Try both hyperthreaded and not, there's been some evidence that HT helps us now- 32 GB RAM- x86_64 architecture- RedHat AS 4- postgres 8.1.5 Ive been taking a look at the various postgres tuning parameters, and have come up with the following settings. shared_buffers – 50,000 - From what Id read, increasing this number higher than this wont have any advantages ?This is no longer true, 25% of available memory is a good starting place, and go up from there effective_cache_size = 524288 - My logic was I thought Id give the DB 16GB of the 32, and based this number on 25% of that number, sound okay? this should be around 3/4 of available memory or 24Gwork_mem – 32768 - I only have up to 30 connections in parallel, and more likely less than ½ that number. My sql is relatively simple, so figured even if there was 5 sorts per query and 30 queries in parallel, 32768 would use up 4GB of memory.. Does this number sound too high? Maintenance_work_mem = 1048576 – Figured Id allocate 1GB for this. 
fsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and didn’t want to have to worry about this number in future so doubled it. fsm_pages = 200,000 – Based this on some statistics about the number of pages freed from a vacuum on older server. Not sure if its fair to calculate this based on vacuum stats of 7.3.4 server?this is dependent on your application Do these numbers look reasonable given the machine above? Any other settings that I should be paying particular consideration too?autovacuum settings. ThanksMark",
"msg_date": "Mon, 4 Dec 2006 18:28:57 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings for 32GB RAM server"
},
{
"msg_contents": "On Mon, Dec 04, 2006 at 09:42:57AM -0800, Jeff Davis wrote:\n> > fsm_pages = 200,000 ??? Based this on some statistics about the number\n> > of pages freed from a vacuum on older server. Not sure if its fair\n> > to calculate this based on vacuum stats of 7.3.4 server?\n> > \n> \n> Might as well make it a higher number because you have a lot of RAM\n> anyway. It's better than running out of space in the FSM, because to\n> increase that setting you need to restart the daemon. Increasing this by\n> 1 only uses 6 bytes. That means you could set it to 10 times the number\n> you currently have, and it would still be insignificant.\n\nYou can also run vacuumdb -av and look at the last few lines to see what\nit says you need. Or you can get that info out of\ncontrib/pg_freespacemap.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 7 Dec 2006 00:03:20 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration settings for 32GB RAM server"
}
] |
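Jim Nasby's FSM suggestion above can be made concrete. A minimal sketch, assuming the contrib/pg_freespacemap module is installed in the database (the view names below are the ones that module ships; on a stock install only the VACUUM VERBOSE route is available). The closing INFO lines of a database-wide VACUUM VERBOSE report how many page slots and relations the free space map actually needs, which is the number to compare against max_fsm_pages and max_fsm_relations:

    -- run as superuser; the last few lines of the output summarise
    -- current free-space-map usage versus the configured limits
    VACUUM VERBOSE;

    -- with contrib/pg_freespacemap installed, roughly the same numbers
    -- can be read back with ordinary queries
    SELECT count(*) AS fsm_relations_tracked FROM pg_freespacemap_relations;
    SELECT count(*) AS fsm_pages_tracked     FROM pg_freespacemap_pages;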
[
{
"msg_contents": "How can I move pg_xlog to another drive on Windows? In Linux I can use a\nsymlink, but how do I that on windows?\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n",
"msg_date": "Mon, 04 Dec 2006 18:48:19 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to move pg_xlog to another drive on Windows????"
},
{
"msg_contents": "Hmm ... I'm guessing you'd do it with a shortcut, and then rename the\nShortCut from \"Shortcut to pg_xlog\" to \"pg_xlog\".\n\nHaven't done it with PostgreSQL, but it works with a few other programs\nI've had to do that with.\n\n\n--\nAnthony Presley\nResolution Software\nOwner/Founder\nwww.resolution.com\n\nOn Mon, 2006-12-04 at 18:48 +0100, Joost Kraaijeveld wrote:\n> How can I move pg_xlog to another drive on Windows? In Linux I can use a\n> symlink, but how do I that on windows?\n> \n> --\n> Groeten,\n> \n> Joost Kraaijeveld\n> Askesis B.V.\n> Molukkenstraat 14\n> 6524NB Nijmegen\n> tel: 024-3888063 / 06-51855277\n> fax: 024-3608416\n> web: www.askesis.nl\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n",
"msg_date": "Mon, 04 Dec 2006 12:46:40 -0600",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to move pg_xlog to another drive on Windows????"
},
{
"msg_contents": "On 12/4/06, Joost Kraaijeveld <[email protected]> wrote:\n> How can I move pg_xlog to another drive on Windows? In Linux I can use a\n> symlink, but how do I that on windows?\n\nyou can possibly attempt it with junction points. good luck:\n\nhttp://support.microsoft.com/kb/205524\n\nmerlin\n",
"msg_date": "Mon, 4 Dec 2006 14:04:11 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to move pg_xlog to another drive on Windows????"
},
{
"msg_contents": "You can also use the freeware junction utility from \nhttp://www.sysinternals.com as we do on Win 2K, XP and 2003.\n\nAfter installing it, shutdown pg, move pg_xlog to another drive, create \njunction as pg_xlog to point to new directory, then restart pg.\n\nCheers,\nKC.\n\nAt 03:04 06/12/05, Merlin Moncure wrote:\n>On 12/4/06, Joost Kraaijeveld <[email protected]> wrote:\n>>How can I move pg_xlog to another drive on Windows? In Linux I can use a\n>>symlink, but how do I that on windows?\n>\n>you can possibly attempt it with junction points. good luck:\n>\n>http://support.microsoft.com/kb/205524\n>\n>merlin\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Tue, 05 Dec 2006 12:02:13 +0800",
"msg_from": "K C Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to move pg_xlog to another drive on"
}
] |
[
{
"msg_contents": "Hi all, I've run into an issue with a Pg 7.4.6 to 8.1.5 upgrade along \nwith hardware upgrade. \nI moved the database from a 2x 3.0ghz Xeon (512kb w/HT) to a 2x Opteron \n250. The database is in\nmemory on a tmpfs partition. (the application can rebuild the db during \ntotal system failure)\n\nWhen I first switched it over, the results were exactly what I expected. \nI was sustaining about\na 2.2 on the Xeon and with the Opteron, about a 1.5 with the same \ntraffic. The box is highload,\nit ouputs about 15mbps.\n\nAfter a couple hours working perfectly the ' system' (vs user) load \njumped from\na 3% of total CPU usage, to 30%, a 10x increase, and postgres started to \nwrite out\nto data/base at a fairly fast rate. The CPU context switch rate doubled \nat the same time.\nIowait, which was historically 0 on the 7.4 box, went to 0.08.\n\nStrangely enough, a vacuum (not full or analyze) stopped postgres from \nwriting to data/base\nbut the strange load pattern remains. (system is ~30% of the overall \nload, vs 3% before)\n\nSo, my question is, what happened, and how can I get it back to the same \nload pattern\n7.4.6 had, and the same pattern I had for 4 hours before it went crazy?\n\n\nMatt\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 04 Dec 2006 13:58:38 -0800",
"msg_from": "Matt Chambers <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql upgrade"
}
] |
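No replies to this report are archived here, so what follows is only a sketch of routine things to check, not a diagnosis from the thread. One ordinary source of write activity shortly after a reload on 8.1 is hint-bit setting on freshly loaded tuples, which a single vacuum pass gets out of the way up front (consistent with the observation above that a plain vacuum stopped the writes); and 8.1 has a background writer that 7.4.6 did not, so its settings are worth inspecting when the system CPU share changes:

    -- one pass over freshly loaded data writes hint bits and planner
    -- statistics up front instead of letting normal traffic dribble them out
    VACUUM ANALYZE;

    -- current background-writer settings (these parameters do not exist in 7.4)
    SHOW bgwriter_delay;
    SHOW bgwriter_lru_percent;
    SHOW bgwriter_all_percent;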
[
{
"msg_contents": "Hi. I have a performance problem with this simple query:\n\nSELECT collect_time FROM n_traffic JOIN n_logins USING (login_id)\nWHERE n_logins.account_id = '1655' ORDER BY collect_time LIMIT 1;\n----------------------------------------\nLimit (cost=0.00..2026.57 rows=1 width=8) (actual\ntime=5828.681..5828.681 rows=0 loops=1)\n -> Nested Loop (cost=0.00..901822.69 rows=445 width=8) (actual\ntime=5828.660..5828.660 rows=0 loops=1)\n -> Index Scan using n_traffic_collect_time_login_id on\nn_traffic (cost=0.00..10280.58 rows=281608 width=12) (actual\ntime=0.026..1080.405 rows=281655 loops=1)\n -> Index Scan using n_logins_pkey on n_logins\n(cost=0.00..3.15 rows=1 width=4) (actual time=0.011..0.011 rows=0\nloops=281655)\n Index Cond: (\"outer\".login_id = n_logins.login_id)\n Filter: (account_id = 1655)\n Total runtime: 5828.963 ms\n(7 rows)\n\nThis looks very argly... I done some hack and change the\nJOIN n_logins USING (login_id) WHERE n_logins.account_id = '1655'\nwith the\nWHERE (login_id = '1' OR login_id = '2'...)\nin script which prepare this ...OR...OR... list and form this query.\nBut THIS IS DIRTY HACK! There must be gooder way... Please, help,\nexplain!\n\nIf there is only one login_id сorresponds to some account_id the query\ngoes fast:\n\n=# explain analyze SELECT collect_time FROM n_traffic JOIN n_logins\nUSING (login_id) WHERE n_logins.account_id= '15' ORDER BY collect_time\nLIMIT 1;\n---------------------------------------------------------\n Limit (cost=1262.93..1262.94 rows=1 width=8) (actual\ntime=61.884..61.886 rows=1 loops=1)\n -> Sort (cost=1262.93..1263.49 rows=223 width=8) (actual\ntime=61.867..61.867 rows=1 loops=1)\n Sort Key: n_traffic.collect_time\n -> Nested Loop (cost=5.60..1254.23 rows=223 width=8)\n(actual time=3.657..36.890 rows=4536 loops=1)\n -> Index Scan using n_logins_account_id on n_logins\n(cost=0.00..3.22 rows=1 width=4) (actual time=0.032..0.049 rows=1\nloops=1)\n Index Cond: (account_id = 15)\n -> Bitmap Heap Scan on n_traffic (cost=5.60..1241.72\nrows=743 width=12) (actual time=3.601..19.012 rows=4536 loops=1)\n Recheck Cond: (n_traffic.login_id = \"outer\".login_id)\n -> Bitmap Index Scan on n_traffic_login_id\n(cost=0.00..5.60 rows=743 width=0) (actual time=3.129..3.129 rows=4536\nloops=1)\n Index Cond: (n_traffic.login_id = \"outer\".login_id)\n Total runtime: 63.697 ms\n(11 rows)\n\nTables:\n\n=# \\d n_traffic\n Table \"public.n_traffic\"\n Column | Type | Modifiers\n--------------+-----------------------------+------------------------------\n login_id | integer | not null\n traftype_id | integer | not null\n collect_time | timestamp without time zone | not null default now()\n bytes_in | bigint | not null default (0)::bigint\n bytes_out | bigint | not null default (0)::bigint\nIndexes:\n \"n_traffic_login_id_key\" UNIQUE, btree (login_id, traftype_id, collect_time)\n \"n_traffic_collect_time\" btree (collect_time)\n \"n_traffic_collect_time_login_id\" btree (collect_time, login_id)\n \"n_traffic_login_id\" btree (login_id)\n \"n_traffic_login_id_collect_time\" btree (login_id, collect_time)\nForeign-key constraints:\n \"n_traffic_login_id_fkey\" FOREIGN KEY (login_id) REFERENCES\nn_logins(login_id) ON UPDATE CASCADE\n \"n_traffic_traftype_id_fkey\" FOREIGN KEY (traftype_id) REFERENCES\nn_traftypes(traftype_id) ON UPDATE CASCADE\n\n=# \\d n_logins\n Table \"public.n_logins\"\n Column | Type | Modifiers\n------------+------------------------+---------------------------------------------------\nlogin_id | integer | not null 
default\nnextval('n_logins_login_id_seq'::regclass)\n account_id | integer | not null\n login | character varying(255) | not null\n pwd | character varying(128) |\nIndexes:\n \"n_logins_pkey\" PRIMARY KEY, btree (login_id)\n \"n_logins_login_key\" UNIQUE, btree (\"login\")\n \"n_logins_account_id\" btree (account_id)\nForeign-key constraints:\n \"n_logins_account_id_fkey\" FOREIGN KEY (account_id) REFERENCES\nn_accounts(account_id)\nTriggers:\n tr_after_n_logins AFTER INSERT OR DELETE OR UPDATE ON n_logins FOR\nEACH ROW EXECUTE PROCEDURE tr_f_after_n_logins()\n tr_before_n_logins BEFORE UPDATE ON n_logins FOR EACH ROW EXECUTE\nPROCEDURE tr_f_before_n_logins()\n\n\nMy machine is 2xPIII 1 GHz + 1 Gb RAM + RAID5 on 6 SCSI disks. My settings is:\nmax_connections = 50\nshared_buffers = 32768\ntemp_buffers = 57792\nwork_mem = 81920\nmaintenance_work_mem = 196608\nmax_fsm_pages = 262144\nmax_fsm_relations = 1000\nwal_buffers = 64\ncheckpoint_segments = 4\ncheckpoint_timeout = 300\ncheckpoint_warning = 30\neffective_cache_size = 6553\nrandom_page_cost = 3\ndefault_statistics_target = 800\nlog_rotation_age = 1440\nlog_line_prefix = '%t %u@%d '\nstats_start_collector = on\nstats_command_string = on\nstats_block_level = on\nstats_row_level = on\nstats_reset_on_server_start = on\n---\n# ipcs -T\nmsginfo:\n msgmax: 16384 (max characters in a message)\n msgmni: 40 (# of message queues)\n msgmnb: 2048 (max characters in a message queue)\n msgtql: 40 (max # of messages in system)\n msgssz: 8 (size of a message segment)\n msgseg: 2048 (# of message segments in system)\n\nshminfo:\n shmmax: 838860800 (max shared memory segment size)\n shmmin: 1 (min shared memory segment size)\n shmmni: 128 (max number of shared memory identifiers)\n shmseg: 128 (max shared memory segments per process)\n shmall: 204800 (max amount of shared memory in pages)\n\nseminfo:\n semmni: 256 (# of semaphore identifiers)\n semmns: 2048 (# of semaphores in system)\n semmnu: 256 (# of undo structures in system)\n semmsl: 60 (max # of semaphores per id)\n semopm: 100 (max # of operations per semop call)\n semume: 10 (max # of undo entries per process)\n semusz: 100 (size in bytes of undo structure)\n semvmx: 32767 (semaphore maximum value)\n semaem: 16384 (adjust on exit max value)\n---\nkern.seminfo.semmni=256\nkern.seminfo.semmns=2048\nkern.seminfo.semmnu=256\n---\npostgresql have its own login class:\n\npostgresql:\\\n :openfiles-cur=768:\\\n :tc=daemon:\ndaemon:\\\n :ignorenologin:\\\n :datasize=infinity:\\\n :maxproc=infinity:\\\n :openfiles-cur=128:\\\n :stacksize-cur=8M:\\\n :localcipher=blowfish,8:\\\n :tc=default:\n-- \nengineer\n",
"msg_date": "Tue, 5 Dec 2006 14:05:35 +0500",
"msg_from": "Anton <[email protected]>",
"msg_from_op": true,
"msg_subject": "JOIN work somehow strange on simple query"
},
{
"msg_contents": "> Hi. I have a performance problem with this simple query:\n>\n> SELECT collect_time FROM n_traffic JOIN n_logins USING (login_id)\n> WHERE n_logins.account_id = '1655' ORDER BY collect_time LIMIT 1;\n\nI must add that is occurs when there is no rows in n_traffic for these\nlogin_id's. Where there is at least one (example, login_id='411'\nbelongs to account_id='1655') query goes fast:\n\n=# INSERT INTO n_traffic VALUES ('411', '1', '2006-09-23 12:23:05', '0', '0');\n\n=# explain analyze SELECT collect_time FROM n_traffic JOIN n_logins\nUSING (login_id) WHERE n_logins.account_id= '1655' ORDER BY\ncollect_time LIMIT 1;\n------------------------------------\n Limit (cost=0.00..2025.76 rows=1 width=8) (actual time=0.070..0.072\nrows=1 loops=1)\n -> Nested Loop (cost=0.00..913617.15 rows=451 width=8) (actual\ntime=0.065..0.065 rows=1 loops=1)\n -> Index Scan using n_traffic_collect_time_login_id on\nn_traffic (cost=0.00..10418.19 rows=285290 width=12) (actual\ntime=0.026..0.026 rows=1 loops=1)\n -> Index Scan using n_logins_pkey on n_logins\n(cost=0.00..3.15 rows=1 width=4) (actual time=0.026..0.026 rows=1\nloops=1)\n Index Cond: (\"outer\".login_id = n_logins.login_id)\n Filter: (account_id = 1655)\n Total runtime: 0.322 ms\n(7 rows)\n\n-- \nengineer\n",
"msg_date": "Tue, 5 Dec 2006 14:38:54 +0500",
"msg_from": "Anton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JOIN work somehow strange on simple query"
},
{
"msg_contents": "Anton wrote:\n> Hi. I have a performance problem with this simple query:\n\nPlease post to one list at a time Anton. I'll see you over on the \nperformance list.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 05 Dec 2006 12:00:16 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JOIN work somehow strange on simple query"
}
] |
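A workaround often suggested for this shape of plan, offered here as an untested sketch rather than something taken from the thread's replies: wrapping the ORDER BY column in a no-op expression keeps the planner from choosing the backward scan on the collect_time index (the plan that degenerates into a near-full scan when no rows match), so it falls back to the account_id/login_id indexes as in the fast plan shown above.

    SELECT collect_time
    FROM n_traffic
    JOIN n_logins USING (login_id)
    WHERE n_logins.account_id = '1655'
    -- the added zero-length interval turns the sort key into an expression,
    -- so the index on collect_time can no longer satisfy the ORDER BY
    ORDER BY collect_time + interval '0 seconds'
    LIMIT 1;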
[
{
"msg_contents": "Thanks to all for the feedback on this issue.. After reviewing your comments, Im thinking of changing to the following values\n\n \n\nshared_buffers = 786432 - If Ive done my math right, then this is 6GB which is 25% of 24GB ( I want to preserve the other 8GB for OS and App )\n\n \n\neffective_cache_size = 2359296 - Equates to 18GB, which is 75% of 24GB.. Using the feedback from Dave Cramer\n\n \n\nwork_mem = 32768\n\n \n\nmaintenance_work_mem = 1048576 i.e. 1GB\n\n \n\nmax_fsm_relations = 10,000 - Given the small amount of memory this will use, I figure go large and not worry about it in the future.\n\n \n\nmax_fsm_pages = 10,000,000 - Again, increasing this significantly to cover my existing vacuuming numbers, and given I have a lot of memory, it seems like its not going to hurt me at all.\n\n \n\n \n\nSound good?\n\n \n\n________________________________\n\nFrom: Dave Cramer [mailto:[email protected]] \nSent: 04 December 2006 23:29\nTo: Mark Lonsdale\nCc: [email protected]\nSubject: Re: [PERFORM] Configuration settings for 32GB RAM server\n\n \n\n \n\nOn 4-Dec-06, at 12:10 PM, Mark Lonsdale wrote:\n\n\n\n\n\n \n\n \n\n \n\nHi\n\n \n\nWe are migrating our Postgres 7.3.4 application to postgres 8.1.5 and also moving it to a server with a much larger hardware configuration as well. The server will have the following specification.\n\n \n\n- 4 physical CPUs (hyperthreaded to 8)\n\nTry both hyperthreaded and not, there's been some evidence that HT helps us now\n\n\n\n- 32 GB RAM\n\n- x86_64 architecture\n\n- RedHat AS 4\n\n- postgres 8.1.5\n\n \n\nIve been taking a look at the various postgres tuning parameters, and have come up with the following settings. \n\n \n\nshared_buffers - 50,000 - From what Id read, increasing this number higher than this wont have any advantages ?\n\nThis is no longer true, 25% of available memory is a good starting place, and go up from there\n\n\n\n \n\neffective_cache_size = 524288 - My logic was I thought Id give the DB 16GB of the 32, and based this number on 25% of that number, sound okay?\n\n \n\nthis should be around 3/4 of available memory or 24G\n\n\n\nwork_mem - 32768 - I only have up to 30 connections in parallel, and more likely less than ½ that number. My sql is relatively simple, so figured even if there was 5 sorts per query and 30 queries in parallel, 32768 would use up 4GB of memory.. Does this number sound too high?\n\n \n\nMaintenance_work_mem = 1048576 - Figured Id allocate 1GB for this.\n\n \n\nfsm_relations = 2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and didn't want to have to worry about this number in future so doubled it.\n\n \n\nfsm_pages = 200,000 - Based this on some statistics about the number of pages freed from a vacuum on older server. Not sure if its fair to calculate this based on vacuum stats of 7.3.4 server?\n\nthis is dependent on your application\n\n\n\n \n\nDo these numbers look reasonable given the machine above? Any other settings that I should be paying particular consideration too?\n\n \n\nautovacuum settings.\n\n \n\n\n\n\n\n \n\nThanks\n\n\nMark\n\n \n\n\n\n\n\n \n\n\n\n\n\n\n\n\n \nThanks to all for the feedback on this\nissue.. After reviewing your comments, Im thinking of changing to the\nfollowing values\n \nshared_buffers = 786432 - If Ive done my\nmath right, then this is 6GB which is 25% of 24GB ( I want to preserve the\nother 8GB for OS and App )\n \neffective_cache_size = 2359296 - Equates\nto 18GB, which is 75% of 24GB.. 
Using the feedback from Dave Cramer\n \nwork_mem = 32768\n \nmaintenance_work_mem = 1048576 i.e. 1GB\n \nmax_fsm_relations = 10,000 – Given the\nsmall amount of memory this will use, I figure go large and not worry about it\nin the future.\n \nmax_fsm_pages = 10,000,000 – Again,\nincreasing this significantly to cover my existing vacuuming numbers, and given\nI have a lot of memory, it seems like its not going to hurt me at all.\n \n \nSound good?\n \n\n\n\n\nFrom: Dave Cramer\n[mailto:[email protected]] \nSent: 04 December 2006 23:29\nTo: Mark Lonsdale\nCc:\[email protected]\nSubject: Re: [PERFORM]\nConfiguration settings for 32GB RAM server\n\n \n \n\n\nOn 4-Dec-06, at 12:10 PM, Mark Lonsdale wrote:\n\n\n\n\n\n \n \n \nHi\n \nWe are migrating our\nPostgres 7.3.4 application to postgres 8.1.5 and also moving it to a server\nwith a much larger hardware configuration as well. The server\nwill have the following specification.\n \n- 4 physical CPUs\n(hyperthreaded to 8)\n\nTry both hyperthreaded and not, there's been some evidence that HT\nhelps us now\n\n\n\n- 32 GB RAM\n\n- x86_64 architecture\n- RedHat AS 4\n- postgres 8.1.5\n \nIve been taking a look at\nthe various postgres tuning parameters, and have come up with the following\nsettings. \n \nshared_buffers – 50,000\n - From what Id read, increasing this number\nhigher than this wont have any advantages ?\n\nThis is no longer true, 25% of available memory is a good starting\nplace, and go up from there\n\n\n\n \neffective_cache_size =\n524288 - My logic was I thought Id give the DB 16GB of the 32, and\nbased this number on 25% of that number, sound okay?\n \n\nthis should be around 3/4 of available memory or 24G\n\n\n\nwork_mem – 32768 - I only have up\nto 30 connections in parallel, and more likely less than ½ that number. \n My sql is relatively simple, so figured even if there was 5 sorts per\nquery and 30 queries in parallel, 32768 would use up 4GB of memory..\n Does this number sound too high?\n \nMaintenance_work_mem =\n1048576 – Figured Id allocate 1GB for this.\n \nfsm_relations =\n2000 - I have about 200 tables plus maybe 4 or 5 indexes on each, and\ndidn’t want to have to worry about this number in future so doubled it.\n \nfsm_pages = 200,000 –\nBased this on some statistics about the number of pages freed from a vacuum on\nolder server. Not sure if its fair to calculate this based on\nvacuum stats of 7.3.4 server?\n\nthis is dependent on your application\n\n\n\n \nDo these numbers look\nreasonable given the machine above? Any other settings that I\nshould be paying particular consideration too?\n\n\n \n\nautovacuum settings.\n\n\n \n\n\n\n\n\n\n \nThanks\n\nMark",
"msg_date": "Tue, 5 Dec 2006 06:14:45 -0500",
"msg_from": "\"Mark Lonsdale\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration settings for 32GB RAM server"
}
] |
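Written out as a postgresql.conf excerpt, the values settled on in this thread look roughly as follows (8.1 parameter names; the thousands separators from the mail are dropped because the configuration file does not accept them):

    shared_buffers = 786432            # 8kB pages, about 6GB = 25% of the 24GB reserved for Postgres
    effective_cache_size = 2359296     # 8kB pages, about 18GB = 75% of that 24GB
    work_mem = 32768                   # kB, 32MB per sort/hash step
    maintenance_work_mem = 1048576     # kB, 1GB
    max_fsm_relations = 10000
    max_fsm_pages = 10000000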
[
{
"msg_contents": "Hi,\n\nI have to refactoring a 'DELETE FROM x WHERE y IN (...)' because IN got\nto much parameters. => 'stack depth limit exceeded'\nI don't want to increase just the parameter for max_stack_depth. It is\nbetter to refactoring because the number of arguments to IN may increase\nin the future.\n\nMy approach is to do multiple 'DELETE FROM x WHERE y=...'.\n\nMy question is now, what is better for PostgreSQL from a performance\nperspective?\n1. all multiple deletes in one transaction\n2. each delete in its own transaction\n\nThe number of arguments is around 10,000.\n\nBTW: The arguments are generate in the application tier. I would have to\ncreate a temporary table which I can use in 'DELETE FROM x WHERE y IN\n(SELECT z FROM tmp)'.\n\nCheers\nSven\n",
"msg_date": "Tue, 05 Dec 2006 16:26:55 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": true,
"msg_subject": "single transaction vs multiple transactions"
},
{
"msg_contents": "Sven Geisler wrote:\n> I have to refactoring a 'DELETE FROM x WHERE y IN (...)' because IN got\n> to much parameters. => 'stack depth limit exceeded'\n> I don't want to increase just the parameter for max_stack_depth. It is\n> better to refactoring because the number of arguments to IN may increase\n> in the future.\n> \n> My approach is to do multiple 'DELETE FROM x WHERE y=...'.\n\nYou could also do something in between, issuing the deletes in batches \nof say 100 deletes each. But using a temporary table is much better.\n\n> My question is now, what is better for PostgreSQL from a performance\n> perspective?\n> 1. all multiple deletes in one transaction\n> 2. each delete in its own transaction\n\nAll in one transaction is definitely faster.\n\n> The number of arguments is around 10,000.\n> \n> BTW: The arguments are generate in the application tier. I would have to\n> create a temporary table which I can use in 'DELETE FROM x WHERE y IN\n> (SELECT z FROM tmp)'.\n\nI think that's exactly what you should do.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 05 Dec 2006 15:30:43 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: single transaction vs multiple transactions"
},
{
"msg_contents": "Hi Heikki\n\nHeikki Linnakangas schrieb:\n> Sven Geisler wrote:\n>> I have to refactoring a 'DELETE FROM x WHERE y IN (...)' because IN got\n>> to much parameters. => 'stack depth limit exceeded'\n>> I don't want to increase just the parameter for max_stack_depth. It is\n>> better to refactoring because the number of arguments to IN may increase\n>> in the future.\n[...]\n>>\n>> BTW: The arguments are generate in the application tier. I would have to\n>> create a temporary table which I can use in 'DELETE FROM x WHERE y IN\n>> (SELECT z FROM tmp)'.\n> \n> I think that's exactly what you should do.\n\nI have to insert my arguments to a temporary table first, because the\narguments are only known in the application tier.\nIs a multiple insert to a temporary table and a final 'DELETE FROM x\nWHERE y IN (SELECT z FROM tmp)' faster than multiple deletes?\n\nSven.\n",
"msg_date": "Tue, 05 Dec 2006 16:35:19 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: single transaction vs multiple transactions"
},
{
"msg_contents": "Sven Geisler wrote:\n> I have to insert my arguments to a temporary table first, because the\n> arguments are only known in the application tier.\n> Is a multiple insert to a temporary table and a final 'DELETE FROM x\n> WHERE y IN (SELECT z FROM tmp)' faster than multiple deletes?\n\nIf the number of records is high, it most likely is faster. You should \ntry it with your data to be sure, but in general doing all the deletes \nin one batch is faster when the number of records is high because it \nallows using efficient merge joins or sequential scans.\n\nPopulating the temporary table with no indexes should be quite \ninexpensive if you make sure you don't do it one record at a time. Use \nthe COPY command or batched inserts instead.\n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 05 Dec 2006 15:42:57 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: single transaction vs multiple transactions"
},
{
"msg_contents": "Hi,\n\nPostgreSQL offers some proprietary SQL parameters and commands which \neasily solves such problems.\nIf you are sure PostgreSQL is the DB for your app forever ;) , you could \nuse this parameters and commands.\n\nHere a possible resolution for your problem.\nDELETE FROM x USING tmp WHERE x.y=tmp.z;\n\nPlease read the extensive documentation of PostgreSQL first, before \nposting.\n\nCU,\nJens\n\nOn Tue, 05 Dec 2006 16:26:55 +0100, Sven Geisler <[email protected]> \nwrote:\n\n> Hi,\n>\n> I have to refactoring a 'DELETE FROM x WHERE y IN (...)' because IN got\n> to much parameters. => 'stack depth limit exceeded'\n> I don't want to increase just the parameter for max_stack_depth. It is\n> better to refactoring because the number of arguments to IN may increase\n> in the future.\n>\n> My approach is to do multiple 'DELETE FROM x WHERE y=...'.\n>\n> My question is now, what is better for PostgreSQL from a performance\n> perspective?\n> 1. all multiple deletes in one transaction\n> 2. each delete in its own transaction\n>\n> The number of arguments is around 10,000.\n>\n> BTW: The arguments are generate in the application tier. I would have to\n> create a temporary table which I can use in 'DELETE FROM x WHERE y IN\n> (SELECT z FROM tmp)'.\n>\n> Cheers\n> Sven\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n\n\n-- \n**\nDipl.-Designer Jens Schipkowski\nAPUS Software GmbH\n",
"msg_date": "Tue, 05 Dec 2006 16:45:25 +0100",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: single transaction vs multiple transactions"
},
{
"msg_contents": "Hi,\n\nMany thanks for the fast response.\nI will use temporary table with copy.\nOther issue I have is the connection between app and db.\nI guess, the approach with the copy will also hold down the network I/O\nbetween app and db. Keeping in mind that I produce 10,000+ statements.\n\nThx\nSven\n\nHeikki Linnakangas schrieb:\n> Sven Geisler wrote:\n>> I have to insert my arguments to a temporary table first, because the\n>> arguments are only known in the application tier.\n>> Is a multiple insert to a temporary table and a final 'DELETE FROM x\n>> WHERE y IN (SELECT z FROM tmp)' faster than multiple deletes?\n> \n> If the number of records is high, it most likely is faster. You should\n> try it with your data to be sure, but in general doing all the deletes\n> in one batch is faster when the number of records is high because it\n> allows using efficient merge joins or sequential scans.\n> \n> Populating the temporary table with no indexes should be quite\n> inexpensive if you make sure you don't do it one record at a time. Use\n> the COPY command or batched inserts instead.\n> \n> \n",
"msg_date": "Tue, 05 Dec 2006 16:58:30 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: single transaction vs multiple transactions"
},
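Pulling the suggestions so far together, a minimal sketch of the batched variant (x, y, z and tmp are the placeholder names used in the thread; the integer column type is an assumption):

    BEGIN;

    -- keys produced in the application tier; the table goes away at commit
    CREATE TEMP TABLE tmp (z integer) ON COMMIT DROP;

    -- load all ~10,000 keys in one round trip instead of row-at-a-time inserts
    COPY tmp (z) FROM STDIN;
    -- ... one key per line, terminated by a line containing only \.

    -- one batched delete; the USING form Jens shows is equivalent:
    -- DELETE FROM x USING tmp WHERE x.y = tmp.z;
    DELETE FROM x WHERE y IN (SELECT z FROM tmp);

    COMMIT;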
{
"msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n> Sven Geisler wrote:\n>> I have to refactoring a 'DELETE FROM x WHERE y IN (...)' because IN got\n>> to much parameters. => 'stack depth limit exceeded'\n>> The number of arguments is around 10,000.\n>> ...\n>> BTW: The arguments are generate in the application tier. I would have to\n>> create a temporary table which I can use in 'DELETE FROM x WHERE y IN\n>> (SELECT z FROM tmp)'.\n\n> I think that's exactly what you should do.\n\nAlso, if you're planning to update to 8.2 soon, the tradeoffs will\nchange completely. 8.2 should avoid the stack depth problem, and you\ncan get something closely approximating the plan you'd get for a join\nagainst a temp table using VALUES:\n\nregression=# explain select * from tenk1 where unique2 in (1,2,3,4,6,8);\n QUERY PLAN\n-----------------------------------------------------------------------------\n Bitmap Heap Scan on tenk1 (cost=24.01..45.79 rows=6 width=244)\n Recheck Cond: (unique2 = ANY ('{1,2,3,4,6,8}'::integer[]))\n -> Bitmap Index Scan on tenk1_unique2 (cost=0.00..24.01 rows=6 width=0)\n Index Cond: (unique2 = ANY ('{1,2,3,4,6,8}'::integer[]))\n(4 rows)\n\nregression=# explain select * from tenk1 where unique2 in (values(1),(2),(3),(4),(6),(8));\n QUERY PLAN\n----------------------------------------------------------------------------------\n Nested Loop (cost=4.10..48.34 rows=6 width=244)\n -> HashAggregate (cost=0.09..0.15 rows=6 width=4)\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.08 rows=6 width=4)\n -> Bitmap Heap Scan on tenk1 (cost=4.01..8.02 rows=1 width=244)\n Recheck Cond: (tenk1.unique2 = \"*VALUES*\".column1)\n -> Bitmap Index Scan on tenk1_unique2 (cost=0.00..4.01 rows=1 width=0)\n Index Cond: (tenk1.unique2 = \"*VALUES*\".column1)\n(7 rows)\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2006 11:54:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: single transaction vs multiple transactions "
}
] |
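For completeness, a sketch of Tom's 8.2 alternative applied to the original statement; the key values are made up for illustration:

    -- 8.2 or later: the VALUES list is hashed once, so even a long
    -- application-generated list avoids both the deep IN/OR recursion
    -- and the need for a temporary table
    DELETE FROM x
    WHERE y IN (VALUES (101), (102), (103));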
[
{
"msg_contents": "Hello,\n\nIs there a relation between database size and PostGreSQL restart duration ?\nIf true, i'm looking for a law predicting how much time is required to \nrestart PostGreSQL, depending on the DB size.\n\nDoes anyone now the behavior of restart time ?\n\nThanks\n\n-- \n-- Jean Arnaud\n\n",
"msg_date": "Tue, 05 Dec 2006 18:08:31 +0100",
"msg_from": "Jean Arnaud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Restart time"
},
{
"msg_contents": "Jean Arnaud <[email protected]> writes:\n> Is there a relation between database size and PostGreSQL restart duration ?\n\nNo.\n\n> Does anyone now the behavior of restart time ?\n\nIt depends on how many updates were applied since the last checkpoint\nbefore the crash.\n\nIf you're talking about startup of a cleanly-shut-down database, it\nshould be pretty much constant time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2006 13:16:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restart time "
},
{
"msg_contents": "On 12/5/06, Tom Lane <[email protected]> wrote:\n>\n> Jean Arnaud <[email protected]> writes:\n> > Is there a relation between database size and PostGreSQL restart\n> duration ?\n>\n> No.\n>\n> > Does anyone now the behavior of restart time ?\n>\n> It depends on how many updates were applied since the last checkpoint\n> before the crash.\n>\n> If you're talking about startup of a cleanly-shut-down database, it\n> should be pretty much constant time.\n\n\nDear Sir,\n\nStartup time of a clean shutdown database is constant. But we still\nface problem when it comes to shutting down. PostgreSQL waits\nfor clients to finish gracefully. till date i have never been able to\nshutdown\nquickly (web application scenerio) and i tend to do pg_ctl -m immediate stop\nmostly.\n\n\n regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nOn 12/5/06, Tom Lane <[email protected]> wrote:\nJean Arnaud <[email protected]> writes:> Is there a relation between database size and PostGreSQL restart duration ?No.> Does anyone now the behavior of restart time ?\nIt depends on how many updates were applied since the last checkpointbefore the crash.If you're talking about startup of a cleanly-shut-down database, itshould be pretty much constant time.\nDear Sir,Startup time of a clean shutdown database is constant. But we stillface problem when it comes to shutting down. PostgreSQL waitsfor clients to finish gracefully. till date i have never been able to shutdown\nquickly (web application scenerio) and i tend to do pg_ctl -m immediate stopmostly. \n regards, tom lane---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster",
"msg_date": "Wed, 6 Dec 2006 01:02:10 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restart time"
},
{
"msg_contents": "\"Rajesh Kumar Mallah\" <[email protected]> writes:\n> Startup time of a clean shutdown database is constant. But we still\n> face problem when it comes to shutting down. PostgreSQL waits\n> for clients to finish gracefully. till date i have never been able to\n> shutdown\n> quickly (web application scenerio) and i tend to do pg_ctl -m immediate stop\n> mostly.\n\nRTFM ... you should be using -m fast not -m immediate. -m immediate\nis for emergency situations not routine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2006 14:39:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restart time "
},
{
"msg_contents": "On 12/6/06, Tom Lane <[email protected]> wrote:\n>\n> \"Rajesh Kumar Mallah\" <[email protected]> writes:\n> > Startup time of a clean shutdown database is constant. But we still\n> > face problem when it comes to shutting down. PostgreSQL waits\n> > for clients to finish gracefully. till date i have never been able to\n> > shutdown\n> > quickly (web application scenerio) and i tend to do pg_ctl -m immediate\n> stop\n> > mostly.\n>\n> RTFM ... you should be using -m fast not -m immediate. -m immediate\n> is for emergency situations not routine.\n\n\nThanks for correcting , -m fast works fine for me.\nI shall RTFM..... :)\nRegds\nmallah. regards, tom lane\n\nOn 12/6/06, Tom Lane <[email protected]> wrote:\n\"Rajesh Kumar Mallah\" <[email protected]> writes:> Startup time of a clean shutdown database is constant. But we still> face problem when it comes to shutting down. PostgreSQL waits\n> for clients to finish gracefully. till date i have never been able to> shutdown> quickly (web application scenerio) and i tend to do pg_ctl -m immediate stop> mostly.RTFM ... you should be using -m fast not -m immediate. -m immediate\nis for emergency situations not routine.\nThanks for correcting , -m fast works fine for me. \nI shall RTFM..... :)\nRegds\nmallah. \nregards, tom lane",
"msg_date": "Wed, 6 Dec 2006 11:50:30 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restart time"
},
{
"msg_contents": "Rajesh Kumar Mallah a �crit :\n>\n>\n> On 12/5/06, *Tom Lane* <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n> Jean Arnaud <[email protected]\n> <mailto:[email protected]>> writes:\n> > Is there a relation between database size and PostGreSQL restart\n> duration ?\n>\n> No.\n>\n> > Does anyone now the behavior of restart time ?\n>\n> It depends on how many updates were applied since the last checkpoint\n> before the crash.\n>\n> If you're talking about startup of a cleanly-shut-down database, it\n> should be pretty much constant time.\n>\n>\n> Dear Sir,\n>\n> Startup time of a clean shutdown database is constant. But we still\n> face problem when it comes to shutting down. PostgreSQL waits\n> for clients to finish gracefully. till date i have never been able to \n> shutdown\n> quickly (web application scenerio) and i tend to do pg_ctl -m \n> immediate stop\n> mostly.\n> \n>\n> regards, tom lane\n>\nHi\n\nThanks everybody for answers !\n\nTo be sure SGBD will stop before a certain time, I use function that is \na combination of pg_ctl -m fast to stop most of process cleanly, and \nafter few seconds, I send pg_ctl -m immediate to be shut down immediatly \nthe system if not already stoped. This works pretty well in practice and \noffers a good compromise between clean and fast shutdown.\n\nCheers\n\n-- \n--- Jean Arnaud\n--- Projet SARDES \n--- INRIA Rh�ne-Alpes\n\n",
"msg_date": "Wed, 06 Dec 2006 09:44:16 +0100",
"msg_from": "Jean Arnaud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Restart time"
}
] |
[
{
"msg_contents": "I am wanting some ideas about improving the performance of ORDER BY in\nour use. I have a DB on the order of 500,000 rows and 50 columns.\nThe results are always sorted with ORDER BY. Sometimes, the users end up\nwith a search that matches most of the rows. In that case, I have a\nLIMIT 5000 to keep the returned results under control. However, the\nsorting seems to take 10-60 sec. If I do the same search without the\nORDER BY, it takes about a second. \n\nI am currently on version 8.0.1 on Windows XP using a Dell Optiplex 280\nwith 1Gb of ram. I have set sort_mem=100000 set.\n\nAny ideas?\n\nThanks,\nGlenn\n\n",
"msg_date": "Tue, 05 Dec 2006 10:12:03 -0700",
"msg_from": "Glenn Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of ORDER BY "
},
{
"msg_contents": "Glenn,\n\nOn 12/5/06 9:12 AM, \"Glenn Sullivan\" <[email protected]> wrote:\n\n> I am wanting some ideas about improving the performance of ORDER BY in\n> our use. I have a DB on the order of 500,000 rows and 50 columns.\n> The results are always sorted with ORDER BY. Sometimes, the users end up\n> with a search that matches most of the rows. In that case, I have a\n> LIMIT 5000 to keep the returned results under control. However, the\n> sorting seems to take 10-60 sec. If I do the same search without the\n> ORDER BY, it takes about a second.\n> \n> I am currently on version 8.0.1 on Windows XP using a Dell Optiplex 280\n> with 1Gb of ram. I have set sort_mem=100000 set.\n> \n> Any ideas?\n\nUpgrade to 8.1 or 8.2, there were very large performance improvements to the\nsort code made for 8.1+. Also, when you've upgraded, you can experiment\nwith increasing work_mem to get better performance. At some value of\nwork_mem (probably > 32MB) you will reach a plateau of performance, probably\n4-5 times faster than what you see with 8.0.\n\n- Luke \n\n\n",
"msg_date": "Tue, 05 Dec 2006 09:36:44 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of ORDER BY"
},
{
"msg_contents": "Glenn Sullivan <[email protected]> writes:\n> I am wanting some ideas about improving the performance of ORDER BY in\n> our use. I have a DB on the order of 500,000 rows and 50 columns.\n> The results are always sorted with ORDER BY. Sometimes, the users end up\n> with a search that matches most of the rows. In that case, I have a\n> LIMIT 5000 to keep the returned results under control. However, the\n> sorting seems to take 10-60 sec. If I do the same search without the\n> ORDER BY, it takes about a second. \n\nDoes the ORDER BY match an index? If so, is it using the index?\n(See EXPLAIN.)\n\n> I am currently on version 8.0.1 on Windows XP using a Dell Optiplex 280\n> with 1Gb of ram. I have set sort_mem=100000 set.\n\nIn 8.0 that might be counterproductively high --- we have seen cases\nwhere more sort_mem = slower with the older sorting code. I concur\nwith Luke's advice that you should update to 8.2 (not 8.1) to get the\nimproved sorting code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2006 13:02:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of ORDER BY "
},
{
"msg_contents": "On Tue, Dec 05, 2006 at 01:02:06PM -0500, Tom Lane wrote:\n> In 8.0 that might be counterproductively high --- we have seen cases\n> where more sort_mem = slower with the older sorting code. I concur\n> with Luke's advice that you should update to 8.2 (not 8.1) to get the\n> improved sorting code.\n\nBy the way, is the new sorting code any better for platforms that already\nhave a decent qsort() (like Linux)?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 5 Dec 2006 19:17:33 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of ORDER BY"
},
{
"msg_contents": "am Tue, dem 05.12.2006, um 13:02:06 -0500 mailte Tom Lane folgendes:\n> In 8.0 that might be counterproductively high --- we have seen cases\n> where more sort_mem = slower with the older sorting code. I concur\n> with Luke's advice that you should update to 8.2 (not 8.1) to get the\n> improved sorting code.\n\nHey, yes, 8.2 is released! Great, and thanks to all the people, which\nhelp to develop this great software.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 5 Dec 2006 19:20:28 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of ORDER BY"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> By the way, is the new sorting code any better for platforms that already\n> have a decent qsort() (like Linux)?\n\nIt seemed better to us. Linux' qsort() is really mergesort, which is\nbetter sometimes but often worse --- mergesort tends to have a less\nCPU-cache-friendly memory access distribution. Another big problem with\nthe Linux version is that it pays no attention to sort_mem, but will\nenthusiastically allocate lots of additional memory, thereby blowing\nwhatever cross-backend memory budgeting you might have been doing.\n\nIf you care there is quite a lot of discussion in the -hackers and\n-performance archives from last spring or so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2006 13:39:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of ORDER BY "
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Tue, Dec 05, 2006 at 01:02:06PM -0500, Tom Lane wrote:\n>> In 8.0 that might be counterproductively high --- we have seen cases\n>> where more sort_mem = slower with the older sorting code. I concur\n>> with Luke's advice that you should update to 8.2 (not 8.1) to get the\n>> improved sorting code.\n> \n> By the way, is the new sorting code any better for platforms that already\n> have a decent qsort() (like Linux)?\n\nyes - especially on-disk sorts will get some tremendous speedups in 8.2.\n\n\nStefan\n",
"msg_date": "Tue, 05 Dec 2006 20:55:40 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of ORDER BY"
},
{
"msg_contents": "\n\n\n\n\n\nThanks to Luke and Tom for the input. I guess this was good timing\ngiven that it looks like\n8.2 was just released today. I will upgade to that before doing\nanything else.\n\nGlenn\n\nTom Lane wrote:\n\nGlenn Sullivan <[email protected]> writes:\n \n\nI am wanting some ideas about improving the performance of ORDER BY in\nour use. I have a DB on the order of 500,000 rows and 50 columns.\nThe results are always sorted with ORDER BY. Sometimes, the users end up\nwith a search that matches most of the rows. In that case, I have a\nLIMIT 5000 to keep the returned results under control. However, the\nsorting seems to take 10-60 sec. If I do the same search without the\nORDER BY, it takes about a second. \n \n\n\nDoes the ORDER BY match an index? If so, is it using the index?\n(See EXPLAIN.)\n\n \n\nI am currently on version 8.0.1 on Windows XP using a Dell Optiplex 280\nwith 1Gb of ram. I have set sort_mem=100000 set.\n \n\n\nIn 8.0 that might be counterproductively high --- we have seen cases\nwhere more sort_mem = slower with the older sorting code. I concur\nwith Luke's advice that you should update to 8.2 (not 8.1) to get the\nimproved sorting code.\n\n\t\t\tregards, tom lane\n\n \n\n\n\n",
"msg_date": "Tue, 05 Dec 2006 13:42:56 -0700",
"msg_from": "Glenn Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of ORDER BY"
}
] |
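The check Tom asks about, spelled out with hypothetical names since the poster's schema is not shown: if an index exists on the ORDER BY column and the plan uses it, the LIMIT can stop after the first few thousand index entries instead of sorting every matching row first.

    -- hypothetical table and column names
    CREATE INDEX results_sort_col_idx ON results (sort_col);

    EXPLAIN ANALYZE
    SELECT *
    FROM results
    WHERE some_filter = 42
    ORDER BY sort_col
    LIMIT 5000;
    -- look for an Index Scan on results_sort_col_idx in the plan
    -- rather than a Sort node sitting on top of a Seq Scan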
[
{
"msg_contents": "I wanted to post a follow up to the list regarding a high \nshared_buffers value on OS X 10.4.8.\n\nThanks to Tom's help we successfully compiled PostgreSQL 8.1.5 using \n64-bit on OS X Server 10.4.8 (You can find info. for this on pgports)\n\nshared_buffers can now be set as high as shmmax without getting the \nerror message \"could not create shared memory segment...\". Now, \nhowever, when shared_buffers are set greater than 279212 a \nsegmentation fault occurs on startup of PostgreSQL.\n\nWhile trying to quantify the performance difference with higher \nshared_buffers versus relying more on the kernel cache, the \ndifference does not appear as significant as I thought. We currently \nhave shared_buffers set to about 25% of system memory on our box \nwhere we are free to set it within the bounds of shmmax (not OS X, of \ncourse).\n\nBrian Wipf\n<[email protected]>\n\n",
"msg_date": "Tue, 5 Dec 2006 16:06:41 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "Brian Wipf <[email protected]> writes:\n> shared_buffers can now be set as high as shmmax without getting the \n> error message \"could not create shared memory segment...\". Now, \n> however, when shared_buffers are set greater than 279212 a \n> segmentation fault occurs on startup of PostgreSQL.\n\nStack trace please?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2006 18:10:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X "
},
{
"msg_contents": "On 5-Dec-06, at 4:10 PM, Tom Lane wrote:\n> Brian Wipf <[email protected]> writes:\n>> shared_buffers can now be set as high as shmmax without getting the\n>> error message \"could not create shared memory segment...\". Now,\n>> however, when shared_buffers are set greater than 279212 a\n>> segmentation fault occurs on startup of PostgreSQL.\n>\n> Stack trace please?\n\nNormally, when something crashes in OS X, I get a stack trace in the / \nLibrary/Logs/CrashReporter/crashdump.crash.log. Unfortunately, the \nonly stack trace in crashdump.crash.log appears to be for \nCrashReporter itself crashing. A guess is that crashreporterd is only \n32-bit and can't handle generating a report for the crash of a 64-bit \nprogram.\n\n# file /usr/libexec/crashreporterd\n/usr/libexec/crashreporterd: Mach-O executable ppc\n\nRunning gdb on the only generated core file shows:\n\n/cores root# gdb /usr/local/pgsql/bin/postgres core.708\nGNU gdb 6.3.50-20050815 (Apple version gdb-573) (Fri Oct 20 15:54:33 \nGMT 2006)\nCore was generated by `/usr/libexec/crashdump'.\n#0 0x00000000fffeff20 in ?? ()\n(gdb) bt\n#0 0x00000000fffeff20 in ?? ()\nCannot access memory at address 0xbffffa8000000000\n#1 0x000000009293d968 in ?? ()\nCannot access memory at address 0xbffffa8000000000\nCannot access memory at address 0xbffffa8000000010\n\nwhich appears to be for Crash Reporter crashing. Any idea how I might \nbe able to get a useful stack trace?\n\nAll crashdump.crash.log shows is:\n\nHost Name: Hulk1\nDate/Time: 2006-12-05 23:36:51.928 +0000\nOS Version: 10.4.8 (Build 8L127)\nReport Version: 4\n\nCommand: crashdump\nPath: /usr/libexec/crashdump\nParent: crashreporterd [112]\n\nVersion: ??? (???)\n\nPID: 708\nThread: 0\n\nException: EXC_BAD_ACCESS (0x0001)\nCodes: KERN_INVALID_ADDRESS (0x0001) at 0x789421ff\n\nThread 0 Crashed:\n0 <<00000000>> 0xfffeff20 objc_msgSend_rtp + 32\n1 com.apple.Foundation 0x9293d968 NSPopAutoreleasePool + 536\n2 crashdump 0x00005c58 0x1000 + 19544\n3 crashdump 0x00005cec 0x1000 + 19692\n4 crashdump 0x00002954 0x1000 + 6484\n5 crashdump 0x000027f8 0x1000 + 6136\n\nThread 0 crashed with PPC Thread State 64:\n srr0: 0x00000000fffeff20 srr1: \n0x100000000000f030 vrsave: 0x0000000000000000\n cr: 0x44000234 xer: 0x0000000000000000 lr: \n0x000000009293d968 ctr: 0x0000000000000001\n r0: 0x0000000000000001 r1: 0x00000000bffff620 r2: \n0x00000000789421ff r3: 0x0000000000308bc0\n r4: 0x0000000090aa8904 r5: 0x0000000001800000 r6: \n0xffffffffffffffff r7: 0x0000000001807400\n r8: 0x0000000000000001 r9: 0x0000000000030000 r10: \n0x0000000000000005 r11: 0x000000006f548904\n r12: 0x000000000000357b r13: 0x0000000000000000 r14: \n0x0000000000000000 r15: 0x0000000000000000\n r16: 0x0000000000000000 r17: 0x0000000000000000 r18: \n0x0000000000000000 r19: 0x0000000000000000\n r20: 0x0000000000000000 r21: 0x0000000000000000 r22: \n0x0000000000000000 r23: 0x0000000000000000\n r24: 0x0000000000000000 r25: 0x0000000000306990 r26: \n0x0000000000010000 r27: 0x0000000000000001\n r28: 0x0000000000010000 r29: 0x0000000000308bc0 r30: \n0x000000000000d02c r31: 0x000000009293d764\n\nBinary Images Description:\n 0x1000 - 0xbfff crashdump /usr/libexec/crashdump\n0x8fe00000 - 0x8fe51fff dyld 45.3 /usr/lib/dyld\n0x90000000 - 0x901bcfff libSystem.B.dylib /usr/lib/ \nlibSystem.B.dylib\n0x90214000 - 0x90219fff libmathCommon.A.dylib /usr/lib/system/ \nlibmathCommon.A.dylib\n0x9021b000 - 0x90268fff com.apple.CoreText 1.0.2 (???) 
/System/ \nLibrary/Frameworks/ApplicationServices.framework/Versions/A/ \nFrameworks/CoreText.framework/Versions/A/CoreText\n...\n...\n0x94281000 - 0x94281fff com.apple.audio.units.AudioUnit 1.4 / \nSystem/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit\n0x94283000 - 0x94456fff com.apple.QuartzCore 1.4.9 /System/ \nLibrary/Frameworks/QuartzCore.framework/Versions/A/QuartzCore\n0x944ac000 - 0x944e9fff libsqlite3.0.dylib /usr/lib/ \nlibsqlite3.0.dylib\n0x944f1000 - 0x94541fff libGLImage.dylib /System/Library/ \nFrameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib\n0x945d2000 - 0x94614fff com.apple.vmutils 4.0.2 (93.1) /System/ \nLibrary/PrivateFrameworks/vmutils.framework/Versions/A/vmutils\n\n\n",
"msg_date": "Tue, 5 Dec 2006 17:06:20 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers > 284263 on OS X "
}
] |
[
{
"msg_contents": "We're using 8.1 - I thought such a construct was safe in pg 8.1:\n\n select max(indexed_value) from huge_table;\n\nwhile earlier we had to use:\n\n select indexed_value from huge_table order by indexed_value desc limit 1;\n\nseems like I was wrong:\n\n\nmydb=> explain analyze select indexed_value1 from mytable order by indexed_value1 desc limit 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.96 rows=1 width=4) (actual time=0.115..0.117 rows=1 loops=1)\n -> Index Scan Backward using index1 on mytable (cost=0.00..23890756.52 rows=12164924 width=4) (actual time=0.111..0.111 rows=1 loops=1)\n Total runtime: 0.162 ms\n(3 rows)\n\nmydb=> explain analyze select indexed_value2 from mytable order by indexed_value2 desc limit 1;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.04 rows=1 width=4) (actual time=0.128..0.130 rows=1 loops=1)\n -> Index Scan Backward using index2 on mytable (cost=0.00..428231.16 rows=12164924 width=4) (actual time=0.124..0.124 rows=1 loops=1)\n Total runtime: 0.160 ms\n(3 rows)\n\n\nmydb=> explain analyze select max(indexed_value2) from mytable; \n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.04..0.05 rows=1 width=0) (actual time=11652.138..11652.139 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4) (actual time=11652.122..11652.124 rows=1 loops=1)\n -> Index Scan Backward using index2 on mytable (cost=0.00..428231.16 rows=12164924 width=4) (actual time=11652.117..11652.117 rows=1 loops=1)\n Filter: (indexed_value2 IS NOT NULL)\n Total runtime: 11652.200 ms\n(6 rows)\n\nmydb=> explain analyze select max(indexed_value1) from mytable; \n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=1.96..1.97 rows=1 width=0) (actual time=713.780..713.781 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..1.96 rows=1 width=4) (actual time=713.767..713.770 rows=1 loops=1)\n -> Index Scan Backward using index1 on mytable (cost=0.00..23890756.52 rows=12164924 width=4) (actual time=713.764..713.764 rows=1 loops=1)\n Filter: (indexed_value1 IS NOT NULL)\n Total runtime: 713.861 ms\n(6 rows)\n",
"msg_date": "Wed, 6 Dec 2006 04:01:56 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "max/min and index usage"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> We're using 8.1 - I thought such a construct was safe in pg 8.1:\n> select max(indexed_value) from huge_table;\n> while earlier we had to use:\n> select indexed_value from huge_table order by indexed_value desc limit 1;\n\nThese are not actually exactly the same thing. In particular, I suppose\nyour table contains a lot of nulls?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Dec 2006 22:29:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max/min and index usage "
},
{
"msg_contents": "[Tom Lane - Tue at 10:29:53PM -0500]\n> These are not actually exactly the same thing. In particular, I suppose\n> your table contains a lot of nulls?\n\nYes; I'm sorry I was a bit quick with the first posting.\n",
"msg_date": "Wed, 6 Dec 2006 04:35:50 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: max/min and index usage"
}
] |
[
{
"msg_contents": "[Tobias Brox - Wed at 04:01:56AM +0100]\n> We're using 8.1 - I thought such a construct was safe in pg 8.1:\n> \n> select max(indexed_value) from huge_table;\n> \n> while earlier we had to use:\n> \n> select indexed_value from huge_table order by indexed_value desc limit 1;\n> \n> seems like I was wrong:\n\nThe difference is all about those NULL values ... those columns are quite\nsparsely populated in the table. The second query gives NULL, which is\nnot much useful :-)\n\nHowever, I made a partial index to solve this problem - this query is\nable to use the partial index:\n\n select indexed_value from huge_table where indexed_value is not NULL\n order by indexed_value desc limit 1;\n\nwhile this one is not:\n\n select max(indexed_value) from huge_table;\n\nI guess this is a bug? :-)\n",
"msg_date": "Wed, 6 Dec 2006 04:17:13 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: max/min and index usage"
}
] |
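The partial index described above, written out with the names used in the thread:

    -- index only the non-NULL values of the sparsely populated column
    CREATE INDEX huge_table_indexed_value_notnull
        ON huge_table (indexed_value)
        WHERE indexed_value IS NOT NULL;

    -- this form can use the partial index; as noted above, the plain
    -- max(indexed_value) form does not pick it up on 8.1
    SELECT indexed_value
    FROM huge_table
    WHERE indexed_value IS NOT NULL
    ORDER BY indexed_value DESC
    LIMIT 1;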
[
{
"msg_contents": "Does PostgreSQL lock the entire row in a table if I update only 1\ncolumn?\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n",
"msg_date": "Wed, 06 Dec 2006 08:04:12 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "Locking in PostgreSQL?"
},
{
"msg_contents": "Hi,\n\nJoost Kraaijeveld wrote:\n> Does PostgreSQL lock the entire row in a table if I update only 1\n> column?\n\nYes. In PostgreSQL, an update is much like a delete + insert. A \nconcurrent transaction will still see the old row. Thus the lock only \nprevents other writing transactions, not readers.\n\nRegards\n\nMarkus\n\nP.S.: please do not cross post such questions.\n",
"msg_date": "Wed, 06 Dec 2006 08:11:44 +0100",
"msg_from": "Markus Schiltknecht <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Locking in PostgreSQL?"
},
{
"msg_contents": "Unless you specifically ask for it postgresql doesn't lock any rows \nwhen you update data.\n\nDave\nOn 6-Dec-06, at 2:04 AM, Joost Kraaijeveld wrote:\n\n> Does PostgreSQL lock the entire row in a table if I update only 1\n> column?\n>\n>\n> -- \n> Groeten,\n>\n> Joost Kraaijeveld\n> Askesis B.V.\n> Molukkenstraat 14\n> 6524NB Nijmegen\n> tel: 024-3888063 / 06-51855277\n> fax: 024-3608416\n> web: www.askesis.nl\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Wed, 6 Dec 2006 07:29:37 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Locking in PostgreSQL?"
},
{
"msg_contents": "On Wed, 06 Dec 2006 13:29:37 +0100, Dave Cramer <[email protected]> wrote:\n\n> Unless you specifically ask for it postgresql doesn't lock any rows when \n> you update data.\n>\nThats not right. UPDATE will force a RowExclusiveLock to rows matching the \nWHERE clause, or all if no one is specified.\n@Joost Kraaijeveld: Yes, because there is no EntryExclusiveLock or \nsomething like that. Roughly you can say, each UPDATE statement iterates \nthrough the affected table and locks the WHERE clause matching records \n(rows) exclusivly to prevent data inconsistancy during the UPDATE. After \nthat your rows will be updated and the lock will be repealed.\nYou can see this during an long lasting UPDATE by querying the pg_locks \nwith joined pg_stats_activity (statement must be enabled).\n\n> Dave\n> On 6-Dec-06, at 2:04 AM, Joost Kraaijeveld wrote:\n>\n>> Does PostgreSQL lock the entire row in a table if I update only 1\n>> column?\n>>\n>>\n>> --Groeten,\n>>\n>> Joost Kraaijeveld\n>> Askesis B.V.\n>> Molukkenstraat 14\n>> 6524NB Nijmegen\n>> tel: 024-3888063 / 06-51855277\n>> fax: 024-3608416\n>> web: www.askesis.nl\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n\nCU,\nJens\n\n-- \n**\nJens Schipkowski\n",
"msg_date": "Wed, 06 Dec 2006 14:20:04 +0100",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Locking in PostgreSQL?"
},
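A sketch of the lock inspection Jens describes above, joining pg_locks to pg_stat_activity. Column names are those of the 8.x releases under discussion (procpid and current_query; later releases renamed them), and command-string collection must be enabled for current_query to show anything:

    -- show which relations a long-running UPDATE currently holds locks on
    SELECT l.locktype,
           l.relation::regclass AS relation,
           l.mode,
           l.granted,
           a.usename,
           a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid;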
{
"msg_contents": "\nOn 6-Dec-06, at 8:20 AM, Jens Schipkowski wrote:\n\n> On Wed, 06 Dec 2006 13:29:37 +0100, Dave Cramer <[email protected]> \n> wrote:\n>\n>> Unless you specifically ask for it postgresql doesn't lock any \n>> rows when you update data.\n>>\n> Thats not right. UPDATE will force a RowExclusiveLock to rows \n> matching the WHERE clause, or all if no one is specified.\n> @Joost Kraaijeveld: Yes, because there is no EntryExclusiveLock or \n> something like that. Roughly you can say, each UPDATE statement \n> iterates through the affected table and locks the WHERE clause \n> matching records (rows) exclusivly to prevent data inconsistancy \n> during the UPDATE. After that your rows will be updated and the \n> lock will be repealed.\n> You can see this during an long lasting UPDATE by querying the \n> pg_locks with joined pg_stats_activity (statement must be enabled).\n\nApparently I've completely misunderstood MVCC then.... My \nunderstanding is that unless you do a select ... for update then \nupdate the rows will not be locked .\n\nDave\n>\n>> Dave\n>> On 6-Dec-06, at 2:04 AM, Joost Kraaijeveld wrote:\n>>\n>>> Does PostgreSQL lock the entire row in a table if I update only 1\n>>> column?\n>>>\n>>>\n>>> --Groeten,\n>>>\n>>> Joost Kraaijeveld\n>>> Askesis B.V.\n>>> Molukkenstraat 14\n>>> 6524NB Nijmegen\n>>> tel: 024-3888063 / 06-51855277\n>>> fax: 024-3608416\n>>> web: www.askesis.nl\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>>> choose an index scan if your joining column's datatypes do \n>>> not\n>>> match\n>>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>\n>\n>\n> CU,\n> Jens\n>\n> -- \n> **\n> Jens Schipkowski\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Wed, 6 Dec 2006 08:26:45 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Locking in PostgreSQL?"
},
{
"msg_contents": "On Wed, Dec 06, 2006 at 08:26:45AM -0500, Dave Cramer wrote:\n> Apparently I've completely misunderstood MVCC then.... My \n> understanding is that unless you do a select ... for update then \n> update the rows will not be locked .\n\nThe discussion was about updates, not selects. Selects do not in general lock\n(except for ... for update, as you say).\n\nTo (partially) answer the original question: The number of columns updated\ndoes not matter for the locking situation.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 6 Dec 2006 14:31:04 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Locking in PostgreSQL?"
},
{
"msg_contents": "Hi,\n\nDave Cramer wrote:\n> Apparently I've completely misunderstood MVCC then.... \n\nProbably not. You are both somewhat right.\n\nJens Schipkowski wrote:\n >> Thats not right. UPDATE will force a RowExclusiveLock to rows\n >> matching the WHERE clause, or all if no one is specified.\n\nThat almost right, RowExclusiveLock is a table level lock. An UPDATE \nacquires that, yes. Additionally there are row-level locks, which is \nwhat you're speaking about. An UPDATE gets an exclusive row-level lock \non rows it updates.\n\nPlease note however, that these row-level locks only block concurrent \nwriters, not readers (MVCC lets the readers see the old, unmodified row).\n\n> My understanding \n> is that unless you do a select ... for update then update the rows will \n> not be locked.\n\nAlso almost right, depending on what you mean by 'locked'. A plain \nSELECT acquires an ACCESS SHARE lock on the table, but no row-level \nlocks. Only a SELECT ... FOR UPDATE does row-level locking (shared ones \nhere...)\n\nThe very fine documentation covers that in [1].\n\nRegards\n\nMarkus\n\n\n[1]: PostgreSQL Documentation, Explicit Locking:\nhttp://www.postgresql.org/docs/8.2/interactive/explicit-locking.html\n\n",
"msg_date": "Wed, 06 Dec 2006 14:42:32 +0100",
"msg_from": "Markus Schiltknecht <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Locking in PostgreSQL?"
},
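A small two-session sketch of the behaviour Markus describes, using a hypothetical accounts table. The plain SELECT in session 2 is never blocked; only another writer (or an explicit FOR UPDATE) waits on the row-level lock:

    -- session 1: takes RowExclusiveLock on the table and an exclusive
    -- row-level lock on the matched row
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;

    -- session 2: plain readers are not blocked; they see the old row version
    SELECT balance FROM accounts WHERE id = 1;

    -- session 2: a writer on the same row waits until session 1 commits
    SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;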
{
"msg_contents": "On Wed, Dec 06, 2006 at 08:26:45AM -0500, Dave Cramer wrote:\n> >>Unless you specifically ask for it postgresql doesn't lock any \n> >>rows when you update data.\n> >>\n> >Thats not right. UPDATE will force a RowExclusiveLock to rows \n> >matching the WHERE clause, or all if no one is specified.\n> \n> Apparently I've completely misunderstood MVCC then.... My \n> understanding is that unless you do a select ... for update then \n> update the rows will not be locked .\n\nI think it comes down to what you mean by RowExclusiveLock. In MVCC,\nwriters don't block readers, so even if someone executes an update on a\nrow, readers (SELECT statements) will not be blocked.\n\nSo it's not a lock as such, more a \"I've updated this row, go find the\nnew version if that's appropriate for your snapshot\".\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.",
"msg_date": "Wed, 6 Dec 2006 14:45:27 +0100",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Locking in PostgreSQL?"
},
{
"msg_contents": "On Dec 5, 2006, at 11:04 PM, Joost Kraaijeveld wrote:\n\n> Does PostgreSQL lock the entire row in a table if I update only 1\n> column?\n\nKnow that updating 1 column is actually updating the whole row. So if \none transaction updates column A of a row, it will block another \nconcurrent transaction that tries to update column B of the same row. \nAs was mentioned however, neither of these transactions block others \nreading the row in question, though they see the row as it existed \nbefore the updates until those update transactions commit.\n\nIf you know that your application will suffer excessive update \ncontention trying to update different columns of the same row, you \ncould consider splitting the columns into separate tables. This is an \noptimization to favor write contention over read performance (since \nyou would likely need to join the tables when selecting) and I \nwouldn't do it speculatively. I'd only do it if profiling the \napplication demonstrated significantly better performance with two \ntables.\n\n-Casey\n",
"msg_date": "Wed, 6 Dec 2006 09:58:37 -0800",
"msg_from": "Casey Duncan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Locking in PostgreSQL?"
},
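A sketch of the column split Casey describes, with hypothetical table and column names. Concurrent updates to col_a and col_b of the same logical row no longer touch the same physical row, at the cost of a join on reads:

    -- hot columns moved into two tables sharing the same key
    CREATE TABLE item_a (item_id integer PRIMARY KEY, col_a integer);
    CREATE TABLE item_b (item_id integer PRIMARY KEY, col_b integer);

    -- these no longer contend on one row when run in separate transactions
    UPDATE item_a SET col_a = col_a + 1 WHERE item_id = 42;
    UPDATE item_b SET col_b = col_b + 1 WHERE item_id = 42;

    -- reads now need the join mentioned above
    SELECT a.col_a, b.col_b
      FROM item_a a
      JOIN item_b b USING (item_id)
     WHERE item_id = 42;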
{
"msg_contents": "Casey Duncan wrote:\n> On Dec 5, 2006, at 11:04 PM, Joost Kraaijeveld wrote:\n>\n>> Does PostgreSQL lock the entire row in a table if I update only 1\n>> column?\n>\n> Know that updating 1 column is actually updating the whole row. So if \n> one transaction updates column A of a row, it will block another \n> concurrent transaction that tries to update column B of the same row. \n> As was mentioned however, neither of these transactions block others \n> reading the row in question, though they see the row as it existed \n> before the updates until those update transactions commit.\n>\n> If you know that your application will suffer excessive update \n> contention trying to update different columns of the same row, you \n> could consider splitting the columns into separate tables. This is an \n> optimization to favor write contention over read performance (since \n> you would likely need to join the tables when selecting) and I \n> wouldn't do it speculatively. I'd only do it if profiling the \n> application demonstrated significantly better performance with two \n> tables.\n>\n> -Casey\nOr, come up with some kind of (pre)caching strategy for your updates \nwherein you could then combine multiple updates to the same row into one \nupdate.\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 06 Dec 2006 12:09:24 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Locking in PostgreSQL?"
}
] |
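A sketch of the update batching Erik suggests, using a hypothetical single table item(item_id, col_a, col_b); the point is simply that one combined statement takes the row lock once instead of twice:

    -- instead of two statements that each lock the same row ...
    UPDATE item SET col_a = 1 WHERE item_id = 42;
    UPDATE item SET col_b = 2 WHERE item_id = 42;

    -- ... accumulate the changes application-side and send one statement
    UPDATE item SET col_a = 1, col_b = 2 WHERE item_id = 42;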
[
{
"msg_contents": "All tests are with bonnie++ 1.03a\n\nMain components of system:\n16 WD Raptor 150GB 10000 RPM drives all in a RAID 10\nARECA 1280 PCI-Express RAID adapter with 1GB BB Cache (Thanks for the \nrecommendation, Ron!)\n32 GB RAM\nDual Intel 5160 Xeon Woodcrest 3.0 GHz processors\nOS: SUSE Linux 10.1\n\nAll runs are with the write cache disabled on the hard disks, except \nfor one additional test for xfs where it was enabled. I tested with \nordered and writeback journaling modes for ext3 to see if writeback \njournaling would help over the default of ordered. The 1GB of battery \nbacked cache on the RAID card was enabled for all tests as well. \nTests are in order of increasing random seek performance. In my tests \non this hardware, xfs is the decisive winner, beating all of the \nother file systems in performance on every single metric. 658 random \nseeks per second, 433 MB/sec sequential read, and 350 MB/sec \nsequential write seems decent enough, but not as high as numbers \nother people have suggested are attainable with a 16 disk RAID 10. \n350 MB/sec sequential write with disk caches enabled versus 280 MB/ \nsec sequential write with disk caches disabled sure makes enabling \nthe disk write cache tempting. Anyone run their RAIDs with disk \ncaches enabled, or is this akin to having fsync off?\n\next3 (writeback data journaling mode):\n/usr/local/sbin/bonnie++ -d bonnie -s 64368:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\nhulk4 64368M 78625 91 279921 51 112346 13 89463 96 417695 \n22 545.7 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- -- \nRead--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \nCP /sec %CP\n 16 5903 99 +++++ +++ +++++ +++ 6112 99 +++++ ++ \n+ 18620 100\nhulk4,64368M, \n78625,91,279921,51,112346,13,89463,96,417695,22,545.7,0,16,5903,99,+++ \n++,+++,+++++,+++,6112,99,+++++,+++,18620,100\n\next3 (ordered data journaling mode):\n/usr/local/sbin/bonnie++ -d bonnie -s 64368:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\nhulk4 64368M 74902 89 250274 52 123637 16 88992 96 417222 \n23 548.3 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- -- \nRead--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \nCP /sec %CP\n 16 5941 97 +++++ +++ +++++ +++ 6270 99 +++++ ++ \n+ 18670 99\nhulk4,64368M, \n74902,89,250274,52,123637,16,88992,96,417222,23,548.3,0,16,5941,97,+++ \n++,+++,+++++,+++,6270,99,+++++,+++,18670,99\n\n\nreiserfs:\n/usr/local/sbin/bonnie++ -d bonnie -s 64368:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\nhulk4 64368M 81004 99 269191 50 128322 16 87865 96 407035 \n28 550.3 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- -- \nRead--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \nCP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ ++ \n+ +++++ +++\nhulk4,64368M, 
\n81004,99,269191,50,128322,16,87865,96,407035,28,550.3,0,16,+++++,+++,+ \n++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\njfs:\n/usr/local/sbin/bonnie++ -d bonnie/ -s 64368:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\nhulk4 64368M 73246 80 268886 28 110465 9 89516 96 413897 \n21 639.5 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- -- \nRead--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \nCP /sec %CP\n 16 3756 5 +++++ +++ +++++ +++ 23763 90 +++++ ++ \n+ 22371 70\nhulk4,64368M, \n73246,80,268886,28,110465,9,89516,96,413897,21,639.5,0,16,3756,5,++++ \n+,+++,+++++,+++,23763,90,+++++,+++,22371,70\n\nxfs (with write cache disabled on disks):\n/usr/local/sbin/bonnie++ -d bonnie/ -s 64368:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\nhulk4 64368M 90621 99 283916 35 105871 11 88569 97 433890 \n23 644.5 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- -- \nRead--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \nCP /sec %CP\n 16 28435 95 +++++ +++ 28895 82 28523 91 +++++ ++ \n+ 24369 86\nhulk4,64368M, \n90621,99,283916,35,105871,11,88569,97,433890,23,644.5,0,16,28435,95,++ \n+++,+++,28895,82,28523,91,+++++,+++,24369,86\n\nxfs (with write cache enabled on disks):\n/usr/local/sbin/bonnie++ -d bonnie -s 64368:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\nhulk4 64368M 90861 99 348401 43 131887 14 89412 97 432964 \n23 658.7 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- -- \nRead--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \nCP /sec %CP\n 16 28871 90 +++++ +++ 28923 91 30879 93 +++++ ++ \n+ 28012 94\nhulk4,64368M, \n90861,99,348401,43,131887,14,89412,97,432964,23,658.7,0,16,28871,90,++ \n+++,+++,28923,91,30879,93,+++++,+++,28012,94\n\n\n\n",
"msg_date": "Wed, 6 Dec 2006 08:40:55 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "File Systems Compared"
},
{
"msg_contents": "Brian Wipf wrote:\n\n> All tests are with bonnie++ 1.03a\n\nThanks for posting these tests. Now I have actual numbers to beat our \nstorage server provider about the head and shoulders with. Also, I \nfound them interesting in and of themselves.\n\nThese numbers are close enough to bus-saturation rates that I'd strongly \nadvise new people setting up systems to go this route over spending \nmoney on some fancy storage area network solution- unless you need more \nHD space than fits nicely in one of these raids. If reliability is a \nconcern, buy 2 servers and implement Sloni for failover. \n\nBrian\n\n",
"msg_date": "Wed, 06 Dec 2006 11:02:10 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Dec 6, 2006, at 16:40 , Brian Wipf wrote:\n\n> All tests are with bonnie++ 1.03a\n[snip]\n\nCare to post these numbers *without* word wrapping? Thanks.\n\nAlexander.\n",
"msg_date": "Wed, 6 Dec 2006 17:05:11 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "Brian,\n\nOn 12/6/06 8:02 AM, \"Brian Hurt\" <[email protected]> wrote:\n\n> These numbers are close enough to bus-saturation rates\n\nPCIX is 1GB/s + and the memory architecture is 20GB/s+, though each CPU is\nlikely to obtain only 2-3GB/s.\n\nWe routinely achieve 1GB/s I/O rate on two 3Ware adapters and 2GB/s on the\nSun X4500 with ZFS.\n\n> advise new people setting up systems to go this route over spending\n> money on some fancy storage area network solution\n\nPeople buy SANs for interesting reasons, some of them having to do with the\nmanageability features of high end SANs. I've heard it said in those cases\nthat \"performance doesn't matter much\".\n\nAs you suggest, database replication provides one of those features, and\nSolaris ZFS has many of the data management features found in high end SANs.\nPerhaps we can get the best of both?\n\nIn the end, I think SAN vs. server storage is a religious battle.\n\n- Luke\n\n\n",
"msg_date": "Wed, 06 Dec 2006 08:21:43 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "Hi,\n\nAlexander Staubo wrote:\n> Care to post these numbers *without* word wrapping? Thanks.\n\nHow is one supposed to do that? Care giving an example?\n\nMarkus\n\n",
"msg_date": "Wed, 06 Dec 2006 17:31:01 +0100",
"msg_from": "Markus Schiltknecht <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "\n> As you suggest, database replication provides one of those features, and\n> Solaris ZFS has many of the data management features found in high end SANs.\n> Perhaps we can get the best of both?\n> \n> In the end, I think SAN vs. server storage is a religious battle.\n\nI agree. I have many people that want to purchase a SAN because someone\ntold them that is what they need... Yet they can spend 20% of the cost\non two external arrays and get incredible performance...\n\nWe are seeing great numbers from the following config:\n\n(2) HP MS 30s (loaded) dual bus\n(2) HP 6402, one connected to each MSA.\n\nThe performance for the money is incredible.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> - Luke\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Wed, 06 Dec 2006 08:34:38 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "Luke Lonergan wrote:\n\n>Brian,\n>\n>On 12/6/06 8:02 AM, \"Brian Hurt\" <[email protected]> wrote:\n>\n> \n>\n>>These numbers are close enough to bus-saturation rates\n>> \n>>\n>\n>PCIX is 1GB/s + and the memory architecture is 20GB/s+, though each CPU is\n>likely to obtain only 2-3GB/s.\n>\n>We routinely achieve 1GB/s I/O rate on two 3Ware adapters and 2GB/s on the\n>Sun X4500 with ZFS.\n>\n> \n>\nFor some reason I'd got it stuck in my head that PCI-Express maxed out \nat a theoretical 533 MByte/sec- at which point, getting 480 MByte/sec \nacross it is pretty dang good. But actually looking things up, I see \nthat PCI-Express has a theoretical 8 Gbit/sec, or about 800Mbyte/sec. \nIt's PCI-X that's 533 MByte/sec. So there's still some headroom \navailable there.\n\nBrian\n\n\n\n\n\n\n\n\nLuke Lonergan wrote:\n\nBrian,\n\nOn 12/6/06 8:02 AM, \"Brian Hurt\" <[email protected]> wrote:\n\n \n\nThese numbers are close enough to bus-saturation rates\n \n\n\nPCIX is 1GB/s + and the memory architecture is 20GB/s+, though each CPU is\nlikely to obtain only 2-3GB/s.\n\nWe routinely achieve 1GB/s I/O rate on two 3Ware adapters and 2GB/s on the\nSun X4500 with ZFS.\n\n \n\nFor some reason I'd got it stuck in my head that PCI-Express maxed out\nat a theoretical 533 MByte/sec- at which point, getting 480 MByte/sec\nacross it is pretty dang good. But actually looking things up, I see\nthat PCI-Express has a theoretical 8 Gbit/sec, or about 800Mbyte/sec. \nIt's PCI-X that's 533 MByte/sec. So there's still some headroom\navailable there.\n\nBrian",
"msg_date": "Wed, 06 Dec 2006 11:40:06 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Wed, Dec 06, 2006 at 05:31:01PM +0100, Markus Schiltknecht wrote:\n>> Care to post these numbers *without* word wrapping? Thanks.\n> How is one supposed to do that? Care giving an example?\n\nThis is a rather long sentence without any kind of word wrapping except what would be imposed on your own side -- how to set that up properly depends on the sending e-mail client, but in mine it's just a matter of turning off the word wrapping in your editor :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 6 Dec 2006 17:43:38 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "* Brian Wipf:\n\n> Anyone run their RAIDs with disk caches enabled, or is this akin to\n> having fsync off?\n\nIf your cache is backed by a battery, enabling write cache shouldn't\nbe a problem. You can check if the whole thing is working well by\nrunning this test script: <http://brad.livejournal.com/2116715.html>\n\nEnabling write cache leads to various degrees of data corruption in\ncase of a power outage (possibly including file system corruption\nrequiring manual recover).\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Wed, 06 Dec 2006 17:44:23 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "> Anyone run their RAIDs with disk caches enabled, or is this akin to\n> having fsync off?\n\nDisk write caches are basically always akin to having fsync off. The\nonly time a write-cache is (more or less) safe to enable is when it is\nbacked by a battery or in some other way made non-volatile.\n\nSo a RAID controller with a battery-backed write cache can enable its\nown write cache, but can't safely enable the write-caches on the disk\ndrives it manages.\n\n-- Mark Lewis\n",
"msg_date": "Wed, 06 Dec 2006 08:55:14 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "Brian,\n\nOn 12/6/06 8:40 AM, \"Brian Hurt\" <[email protected]> wrote:\n\n> But actually looking things up, I see that PCI-Express has a theoretical 8\n> Gbit/sec, or about 800Mbyte/sec. It's PCI-X that's 533 MByte/sec. So there's\n> still some headroom available there.\n\nSee here for the official specifications of both:\n http://www.pcisig.com/specifications/pcix_20/\n\nNote that PCI-X version 1.0 at 133MHz runs at 1GB/s. It's a parallel bus,\n64 bits wide (8 bytes) and runs at 133MHz, so 8 x 133 ~= 1 gigabyte/second.\n\nPCI Express with 16 lanes (PCIe x16) can transfer data at 4GB/s. The Arecas\nuse (PCIe x8, see here:\nhttp://www.areca.com.tw/products/html/pcie-sata.htm), so they can do 2GB/s.\n\n- Luke \n\n\n",
"msg_date": "Wed, 06 Dec 2006 09:40:25 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "Hi,\n\nSteinar H. Gunderson wrote:\n> This is a rather long sentence without any kind of word wrapping except what would be imposed on your own side -- how to set that up properly depends on the sending e-mail client, but in mine it's just a matter of turning off the word wrapping in your editor :-)\n\nDuh!\n\nCool, thank you for the example :-) I thought the MTA or at least the the mailing list would wrap mails at some limit. I've now set word-wrap to 9999 characters (it seems not possible to turn it off completely in thunderbird). But when writing, I'm now getting one long line.\n\nWhat's common practice? What's it on the pgsql mailing lists?\n\nRegards\n\nMarkus\n\n",
"msg_date": "Wed, 06 Dec 2006 18:45:56 +0100",
"msg_from": "Markus Schiltknecht <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "Markus Schiltknecht a écrit :\n> What's common practice? What's it on the pgsql mailing lists?\n\nThe netiquette usually advise mailers to wrap after 72 characters \non mailing lists.\nThis does not apply for format=flowed I guess (that's the format \nused in Steinar's message).\n",
"msg_date": "Wed, 06 Dec 2006 18:59:12 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Wed, Dec 06, 2006 at 06:45:56PM +0100, Markus Schiltknecht wrote:\n> Cool, thank you for the example :-) I thought the MTA or at least the the \n> mailing list would wrap mails at some limit. I've now set word-wrap to 9999 \n> characters (it seems not possible to turn it off completely in \n> thunderbird). But when writing, I'm now getting one long line.\n\nThunderbird uses format=flowed, so it's wrapped nevertheless. Google to find\nout how to turn it off if you really need to.\n\n> What's common practice?\n\nUsually 72 or 76 characters, TTBOMK -- but when posting tables or big query\nplans, one should simply turn it off, as it kills readability.\n\n> What's it on the pgsql mailing lists?\n\nNo idea. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 6 Dec 2006 19:08:12 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "[offtopic] Word wrapping"
},
{
"msg_contents": "On Wed, Dec 06, 2006 at 06:59:12PM +0100, Arnaud Lesauvage wrote:\n>Markus Schiltknecht a �crit :\n>>What's common practice? What's it on the pgsql mailing lists?\n>\n>The netiquette usually advise mailers to wrap after 72 characters \n>on mailing lists.\n>This does not apply for format=flowed I guess (that's the format \n>used in Steinar's message).\n\nIt would apply to either; format=flowed can be wrapped at the receiver's \nend, but still be formatted to a particular column for readers that \ndon't understand format=flowed. (Which is likely to be many, since \nthat's a standard that never really took off.) No wrap netiquette \napplies to formatted text blocks which are unreadable if wrapped (such \nas bonnie or EXPLAIN output).\n\nMike Stone\n",
"msg_date": "Wed, 06 Dec 2006 13:11:53 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On 6-Dec-06, at 9:05 AM, Alexander Staubo wrote:\n>> All tests are with bonnie++ 1.03a\n> [snip]\n> Care to post these numbers *without* word wrapping? Thanks.\n\nThat's what Bonnie++'s output looks like. If you have Bonnie++ \ninstalled, you can run the following:\n\nbon_csv2html << EOF\nhulk4,64368M, \n78625,91,279921,51,112346,13,89463,96,417695,22,545.7,0,16,5903,99,+++ \n++,+++,+++++,+++,6112,99,+++++,+++,18620,100\nEOF\n\nWhich will prettify the CSV results using HTML.\n\n",
"msg_date": "Wed, 6 Dec 2006 11:47:05 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [offtopic] File Systems Compared"
},
{
"msg_contents": "On 12/6/06, Luke Lonergan <[email protected]> wrote:\n> People buy SANs for interesting reasons, some of them having to do with the\n> manageability features of high end SANs. I've heard it said in those cases\n> that \"performance doesn't matter much\".\n\nThere is movement in the industry right now away form tape systems to\nmanaged disk storage for backups and data retention. In these cases\nperformance requirements are not very high -- and a single server can\nmanage a huge amount of storage. In theory, you can do the same thing\nattached via sas expanders but fc networking is imo more flexible and\nscalable.\n\nThe manageability features of SANs are a mixed bag and decidedly\noverrated but they have a their place, imo.\n\nmerlin\n",
"msg_date": "Wed, 6 Dec 2006 15:12:24 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "Luke Lonergan wrote:\n\n>Brian,\n>\n>On 12/6/06 8:40 AM, \"Brian Hurt\" <[email protected]> wrote:\n>\n> \n>\n>>But actually looking things up, I see that PCI-Express has a theoretical 8\n>>Gbit/sec, or about 800Mbyte/sec. It's PCI-X that's 533 MByte/sec. So there's\n>>still some headroom available there.\n>> \n>>\n>\n>See here for the official specifications of both:\n> http://www.pcisig.com/specifications/pcix_20/\n>\n>Note that PCI-X version 1.0 at 133MHz runs at 1GB/s. It's a parallel bus,\n>64 bits wide (8 bytes) and runs at 133MHz, so 8 x 133 ~= 1 gigabyte/second.\n>\n>PCI Express with 16 lanes (PCIe x16) can transfer data at 4GB/s. The Arecas\n>use (PCIe x8, see here:\n>http://www.areca.com.tw/products/html/pcie-sata.htm), so they can do 2GB/s.\n>\n>- Luke \n>\n>\n>\n>\n> \n>\nThanks. I stand corrected (again).\n\nBrian\n\n\n\n\n\n\n\n\nLuke Lonergan wrote:\n\nBrian,\n\nOn 12/6/06 8:40 AM, \"Brian Hurt\" <[email protected]> wrote:\n\n \n\nBut actually looking things up, I see that PCI-Express has a theoretical 8\nGbit/sec, or about 800Mbyte/sec. It's PCI-X that's 533 MByte/sec. So there's\nstill some headroom available there.\n \n\n\nSee here for the official specifications of both:\n http://www.pcisig.com/specifications/pcix_20/\n\nNote that PCI-X version 1.0 at 133MHz runs at 1GB/s. It's a parallel bus,\n64 bits wide (8 bytes) and runs at 133MHz, so 8 x 133 ~= 1 gigabyte/second.\n\nPCI Express with 16 lanes (PCIe x16) can transfer data at 4GB/s. The Arecas\nuse (PCIe x8, see here:\nhttp://www.areca.com.tw/products/html/pcie-sata.htm), so they can do 2GB/s.\n\n- Luke \n\n\n\n\n \n\nThanks. I stand corrected (again).\n\nBrian",
"msg_date": "Wed, 06 Dec 2006 15:24:09 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Wed, Dec 06, 2006 at 18:45:56 +0100,\n Markus Schiltknecht <[email protected]> wrote:\n> \n> Cool, thank you for the example :-) I thought the MTA or at least the the \n> mailing list would wrap mails at some limit. I've now set word-wrap to 9999 \n> characters (it seems not possible to turn it off completely in \n> thunderbird). But when writing, I'm now getting one long line.\n> \n> What's common practice? What's it on the pgsql mailing lists?\n\nIf you do this you should set format=flowed (see rfc 2646). If you do that,\nthen clients can break the lines in an appropiate way. This is actually\nbetter than fixing the line width in the original message, since the\nrecipient may not have the same number of characters (or pixels) of display\nas the sender.\n",
"msg_date": "Wed, 6 Dec 2006 14:31:01 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "At 10:40 AM 12/6/2006, Brian Wipf wrote:\n\nAll tests are with bonnie++ 1.03a\n\nMain components of system:\n16 WD Raptor 150GB 10000 RPM drives all in a RAID 10\nARECA 1280 PCI-Express RAID adapter with 1GB BB Cache (Thanks for the \nrecommendation, Ron!)\n32 GB RAM\nDual Intel 5160 Xeon Woodcrest 3.0 GHz processors\nOS: SUSE Linux 10.1\n\n>xfs (with write cache disabled on disks):\n>/usr/local/sbin/bonnie++ -d bonnie/ -s 64368:8k\n>Version 1.03 ------Sequential Output------ --Sequential Input-\n>--Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n> Block-- --Seeks--\n>Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \n>CP /sec %CP\n>hulk4 64368M 90621 99 283916 35 105871 11 88569 97 \n>433890 23 644.5 0\n> ------Sequential Create------ --------Random\n>Create--------\n> -Create-- --Read--- -Delete-- -Create-- -- \n> Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \n> CP /sec %CP\n> 16 28435 95 +++++ +++ 28895 82 28523 91 +++++ \n> ++ + 24369 86\n>hulk4,64368M, \n>90621,99,283916,35,105871,11,88569,97,433890,23,644.5,0,16,28435,95,++ \n>+++,+++,28895,82,28523,91,+++++,+++,24369,86\n>\n>xfs (with write cache enabled on disks):\n>/usr/local/sbin/bonnie++ -d bonnie -s 64368:8k\n>Version 1.03 ------Sequential Output------ --Sequential Input-\n>--Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n> Block-- --Seeks--\n>Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \n>CP /sec %CP\n>hulk4 64368M 90861 99 348401 43 131887 14 89412 97 \n>432964 23 658.7 0\n> ------Sequential Create------ --------Random\n>Create--------\n> -Create-- --Read--- -Delete-- -Create-- -- \n> Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \n> CP /sec %CP\n> 16 28871 90 +++++ +++ 28923 91 30879 93 +++++ \n> ++ + 28012 94\n>hulk4,64368M, \n>90861,99,348401,43,131887,14,89412,97,432964,23,658.7,0,16,28871,90,++ \n>+++,+++,28923,91,30879,93,+++++,+++,28012,94\nHmmm. Something is not right. With a 16 HD RAID 10 based on 10K \nrpm HDs, you should be seeing higher absolute performance numbers.\n\nFind out what HW the Areca guys and Tweakers guys used to test the 1280s.\nAt LW2006, Areca was demonstrating all-in-cache reads and writes of \n~1600MBps and ~1300MBps respectively along with RAID 0 Sustained \nRates of ~900MBps read, and ~850MBps write.\n\nLuke, I know you've managed to get higher IO rates than this with \nthis class of HW. Is there a OS or SW config issue Brian should \nclosely investigate?\n\nRon Peacetree\n\n",
"msg_date": "Wed, 06 Dec 2006 15:52:55 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "> Hmmm. Something is not right. With a 16 HD RAID 10 based on 10K \n> rpm HDs, you should be seeing higher absolute performance numbers.\n>\n> Find out what HW the Areca guys and Tweakers guys used to test the \n> 1280s.\n> At LW2006, Areca was demonstrating all-in-cache reads and writes of \n> ~1600MBps and ~1300MBps respectively along with RAID 0 Sustained \n> Rates of ~900MBps read, and ~850MBps write.\n>\n> Luke, I know you've managed to get higher IO rates than this with \n> this class of HW. Is there a OS or SW config issue Brian should \n> closely investigate?\n\nI wrote 1280 by a mistake. It's actually a 1260. Sorry about that. \nThe IOP341 class of cards weren't available when we ordered the parts \nfor the box, so we had to go with the 1260. The box(es) we build next \nmonth will either have the 1261ML or 1280 depending on whether we go \n16 or 24 disk.\n\nI noticed Bucky got almost 800 random seeks per second on her 6 disk \n10000 RPM SAS drive Dell PowerEdge 2950. The random seek performance \nof this box disappointed me the most. Even running 2 concurrent \nbonnies, the random seek performance only increased from 644 seeks/ \nsec to 813 seeks/sec. Maybe there is some setting I'm missing? This \ncard looked pretty impressive on tweakers.net.\n\n",
"msg_date": "Wed, 6 Dec 2006 14:47:53 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On 6-Dec-06, at 2:47 PM, Brian Wipf wrote:\n\n>> Hmmm. Something is not right. With a 16 HD RAID 10 based on 10K \n>> rpm HDs, you should be seeing higher absolute performance numbers.\n>>\n>> Find out what HW the Areca guys and Tweakers guys used to test the \n>> 1280s.\n>> At LW2006, Areca was demonstrating all-in-cache reads and writes \n>> of ~1600MBps and ~1300MBps respectively along with RAID 0 \n>> Sustained Rates of ~900MBps read, and ~850MBps write.\n>>\n>> Luke, I know you've managed to get higher IO rates than this with \n>> this class of HW. Is there a OS or SW config issue Brian should \n>> closely investigate?\n>\n> I wrote 1280 by a mistake. It's actually a 1260. Sorry about that. \n> The IOP341 class of cards weren't available when we ordered the \n> parts for the box, so we had to go with the 1260. The box(es) we \n> build next month will either have the 1261ML or 1280 depending on \n> whether we go 16 or 24 disk.\n>\n> I noticed Bucky got almost 800 random seeks per second on her 6 \n> disk 10000 RPM SAS drive Dell PowerEdge 2950. The random seek \n> performance of this box disappointed me the most. Even running 2 \n> concurrent bonnies, the random seek performance only increased from \n> 644 seeks/sec to 813 seeks/sec. Maybe there is some setting I'm \n> missing? This card looked pretty impressive on tweakers.net.\n\nAreca has some performance numbers in a downloadable PDF for the \nAreca ARC-1120, which is in the same class as the ARC-1260, except \nwith 8 ports. With all 8 drives in a RAID 0 the card gets the \nfollowing performance numbers:\n\nCard single thread write 20 thread write single \nthread read 20 thread read\nARC-1120 321.26 MB/s 404.76 MB/s 412.55 MB/ \ns 672.45 MB/s\n\nMy numbers for sequential i/o for the ARC-1260 in a 16 disk RAID 10 \nare slightly better than the ARC-1120 in an 8 disk RAID 0 for a \nsingle thread. I guess this means my numbers are reasonable.\n\n",
"msg_date": "Wed, 6 Dec 2006 15:30:39 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Areca 1260 Performance (was: File Systems Compared)"
},
{
"msg_contents": "The 1100 series is PCI-X based. The 1200 series is PCI-E x8 \nbased. Apples and oranges.\n\nI still think Luke Lonergan or Josh Berkus may have some interesting \nideas regarding possible OS and SW optimizations.\n\nWD1500ADFDs are each good for ~90MBps read and ~60MBps write ASTR.\nThat means your 16 HD RAID 10 should be sequentially transferring \n~720MBps read and ~480MBps write.\nClearly more HDs will be required to allow a ARC-12xx to attain its \npeak performance.\n\nOne thing that occurs to me with your present HW is that your CPU \nutilization numbers are relatively high.\nSince 5160s are clocked about as high as is available, that leaves \ntrying CPUs with more cores and trying more CPUs.\n\nYou've got basically got 4 HW threads at the moment. If you can, \nevaluate CPUs and mainboards that allow for 8 or 16 HW threads.\nIntel-wise, that's the new Kentfields. AMD-wise, you have lot's of \n4S mainboard options, but the AMD 4C CPUs won't be available until \nsometime late in 2007.\n\nI've got other ideas, but this list is not the appropriate venue for \nthe level of detail required.\n\nRon Peacetree\n\n\nAt 05:30 PM 12/6/2006, Brian Wipf wrote:\n>On 6-Dec-06, at 2:47 PM, Brian Wipf wrote:\n>\n>>>Hmmm. Something is not right. With a 16 HD RAID 10 based on 10K\n>>>rpm HDs, you should be seeing higher absolute performance numbers.\n>>>\n>>>Find out what HW the Areca guys and Tweakers guys used to test the\n>>>1280s.\n>>>At LW2006, Areca was demonstrating all-in-cache reads and writes\n>>>of ~1600MBps and ~1300MBps respectively along with RAID 0\n>>>Sustained Rates of ~900MBps read, and ~850MBps write.\n>>>\n>>>Luke, I know you've managed to get higher IO rates than this with\n>>>this class of HW. Is there a OS or SW config issue Brian should\n>>>closely investigate?\n>>\n>>I wrote 1280 by a mistake. It's actually a 1260. Sorry about that.\n>>The IOP341 class of cards weren't available when we ordered the\n>>parts for the box, so we had to go with the 1260. The box(es) we\n>>build next month will either have the 1261ML or 1280 depending on\n>>whether we go 16 or 24 disk.\n>>\n>>I noticed Bucky got almost 800 random seeks per second on her 6\n>>disk 10000 RPM SAS drive Dell PowerEdge 2950. The random seek\n>>performance of this box disappointed me the most. Even running 2\n>>concurrent bonnies, the random seek performance only increased from\n>>644 seeks/sec to 813 seeks/sec. Maybe there is some setting I'm\n>>missing? This card looked pretty impressive on tweakers.net.\n>\n>Areca has some performance numbers in a downloadable PDF for the\n>Areca ARC-1120, which is in the same class as the ARC-1260, except\n>with 8 ports. With all 8 drives in a RAID 0 the card gets the\n>following performance numbers:\n>\n>Card single thread write 20 thread write single\n>thread read 20 thread read\n>ARC-1120 321.26 MB/s 404.76 MB/s 412.55 MB/ \n>s 672.45 MB/s\n>\n>My numbers for sequential i/o for the ARC-1260 in a 16 disk RAID 10\n>are slightly better than the ARC-1120 in an 8 disk RAID 0 for a\n>single thread. I guess this means my numbers are reasonable.\n\n",
"msg_date": "Wed, 06 Dec 2006 18:25:19 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Areca 1260 Performance (was: File Systems"
},
{
"msg_contents": "I appreciate your suggestions, Ron. And that helps answer my question \non processor selection for our next box; I wasn't sure if the lower \nMHz speed of the Kentsfield compared to the Woodcrest but with double \nthe cores would be better for us overall or not.\n\nOn 6-Dec-06, at 4:25 PM, Ron wrote:\n\n> The 1100 series is PCI-X based. The 1200 series is PCI-E x8 \n> based. Apples and oranges.\n>\n> I still think Luke Lonergan or Josh Berkus may have some \n> interesting ideas regarding possible OS and SW optimizations.\n>\n> WD1500ADFDs are each good for ~90MBps read and ~60MBps write ASTR.\n> That means your 16 HD RAID 10 should be sequentially transferring \n> ~720MBps read and ~480MBps write.\n> Clearly more HDs will be required to allow a ARC-12xx to attain its \n> peak performance.\n>\n> One thing that occurs to me with your present HW is that your CPU \n> utilization numbers are relatively high.\n> Since 5160s are clocked about as high as is available, that leaves \n> trying CPUs with more cores and trying more CPUs.\n>\n> You've got basically got 4 HW threads at the moment. If you can, \n> evaluate CPUs and mainboards that allow for 8 or 16 HW threads.\n> Intel-wise, that's the new Kentfields. AMD-wise, you have lot's of \n> 4S mainboard options, but the AMD 4C CPUs won't be available until \n> sometime late in 2007.\n>\n> I've got other ideas, but this list is not the appropriate venue \n> for the level of detail required.\n>\n> Ron Peacetree\n>\n>\n> At 05:30 PM 12/6/2006, Brian Wipf wrote:\n>> On 6-Dec-06, at 2:47 PM, Brian Wipf wrote:\n>>\n>>>> Hmmm. Something is not right. With a 16 HD RAID 10 based on 10K\n>>>> rpm HDs, you should be seeing higher absolute performance numbers.\n>>>>\n>>>> Find out what HW the Areca guys and Tweakers guys used to test the\n>>>> 1280s.\n>>>> At LW2006, Areca was demonstrating all-in-cache reads and writes\n>>>> of ~1600MBps and ~1300MBps respectively along with RAID 0\n>>>> Sustained Rates of ~900MBps read, and ~850MBps write.\n>>>>\n>>>> Luke, I know you've managed to get higher IO rates than this with\n>>>> this class of HW. Is there a OS or SW config issue Brian should\n>>>> closely investigate?\n>>>\n>>> I wrote 1280 by a mistake. It's actually a 1260. Sorry about that.\n>>> The IOP341 class of cards weren't available when we ordered the\n>>> parts for the box, so we had to go with the 1260. The box(es) we\n>>> build next month will either have the 1261ML or 1280 depending on\n>>> whether we go 16 or 24 disk.\n>>>\n>>> I noticed Bucky got almost 800 random seeks per second on her 6\n>>> disk 10000 RPM SAS drive Dell PowerEdge 2950. The random seek\n>>> performance of this box disappointed me the most. Even running 2\n>>> concurrent bonnies, the random seek performance only increased from\n>>> 644 seeks/sec to 813 seeks/sec. Maybe there is some setting I'm\n>>> missing? This card looked pretty impressive on tweakers.net.\n>>\n>> Areca has some performance numbers in a downloadable PDF for the\n>> Areca ARC-1120, which is in the same class as the ARC-1260, except\n>> with 8 ports. With all 8 drives in a RAID 0 the card gets the\n>> following performance numbers:\n>>\n>> Card single thread write 20 thread write single\n>> thread read 20 thread read\n>> ARC-1120 321.26 MB/s 404.76 MB/s 412.55 \n>> MB/ s 672.45 MB/s\n>>\n>> My numbers for sequential i/o for the ARC-1260 in a 16 disk RAID 10\n>> are slightly better than the ARC-1120 in an 8 disk RAID 0 for a\n>> single thread. 
I guess this means my numbers are reasonable.\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n",
"msg_date": "Wed, 6 Dec 2006 16:40:55 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Areca 1260 Performance"
},
{
"msg_contents": "At 06:40 PM 12/6/2006, Brian Wipf wrote:\n>I appreciate your suggestions, Ron. And that helps answer my question\n>on processor selection for our next box; I wasn't sure if the lower\n>MHz speed of the Kentsfield compared to the Woodcrest but with double\n>the cores would be better for us overall or not.\nPlease do not misunderstand me. I am not endorsing the use of Kentsfield.\nI am recommending =evaluating= Kentsfield.\n\nI am also recommending the evaluation of 2C 4S AMD solutions.\n\nAll this stuff is so leading edge that it is far from clear what the \nRW performance of DBMS based on these components will be without \nextensive testing of =your= app under =your= workload.\n\nOne thing that is clear from what you've posted thus far is that you \nare going to needmore HDs if you want to have any chance of fully \nutilizing your Areca HW.\n\nOut of curiosity, where are you geographically?\n\nHoping I'm being helpful,\nRon\n\n\n",
"msg_date": "Wed, 06 Dec 2006 19:26:23 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Areca 1260 Performance"
},
{
"msg_contents": "On Wed, 6 Dec 2006, Alexander Staubo wrote:\n\n> Care to post these numbers *without* word wrapping?\n\nBrian's message was sent with format=flowed and therefore it's easy to \nre-assemble into original form if your software understands that. I just \nchecked with two e-mail clients (Thunderbird and Pine) and all his \nbonnie++ results were perfectly readable on both as soon as I made the \ndisplay wide enough. If you had trouble reading it, you might consider \nupgrading your mail client to one that understands that standard. \nStatistically, though, if you have this problem you're probably using \nOutlook and there may not be a useful upgrade path for you. I know it's \nbeen added to the latest Express version (which even defaults to sending \nmessages flowed, driving many people crazy), but am not sure if any of the \nOffice Outlooks know what to do with flowed messages yet.\n\nAnd those of you pointing people at the RFC's, that's a bit hardcore--the \nRFC documents themselves could sure use some better formatting. \nhttps://bugzilla.mozilla.org/attachment.cgi?id=134270&action=view has a \nreadable introduction to the encoding of flowed messages, \nhttp://mailformat.dan.info/body/linelength.html gives some history to how \nwe all got into this mess in the first place, and \nhttp://joeclark.org/ffaq.html also has some helpful (albeit out of date in \nspots) comments on this subject.\n\nEven if it is correct netiquette to disable word-wrapping for long lines \nlike bonnie output (there are certainly two sides with valid points in \nthat debate), to make them more compatible with flow-impaired clients, you \ncan't expect that mail composition software is sophisticated enough to \nallow doing that for one section while still wrapping the rest of the text \ncorrectly.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 7 Dec 2006 01:17:45 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On 6-Dec-06, at 5:26 PM, Ron wrote:\n> At 06:40 PM 12/6/2006, Brian Wipf wrote:\n>> I appreciate your suggestions, Ron. And that helps answer my question\n>> on processor selection for our next box; I wasn't sure if the lower\n>> MHz speed of the Kentsfield compared to the Woodcrest but with double\n>> the cores would be better for us overall or not.\n> Please do not misunderstand me. I am not endorsing the use of \n> Kentsfield.\n> I am recommending =evaluating= Kentsfield.\n>\n> I am also recommending the evaluation of 2C 4S AMD solutions.\n>\n> All this stuff is so leading edge that it is far from clear what \n> the RW performance of DBMS based on these components will be \n> without extensive testing of =your= app under =your= workload.\nI want the best performance for the dollar, so I can't rule anything \nout. Right now I'm leaning towards Kentsfield, but I will do some \nmore research before I make a decision. We probably won't wait much \npast January though.\n\n> One thing that is clear from what you've posted thus far is that \n> you are going to needmore HDs if you want to have any chance of \n> fully utilizing your Areca HW.\nDo you know off hand where I might find a chassis that can fit 24[+] \ndrives? The last chassis we ordered was through Supermicro, and the \nlargest they carry fits 16 drives.\n\n> Hoping I'm being helpful\nI appreciate any help I can get.\n\nBrian Wipf\n<[email protected]>\n\n",
"msg_date": "Thu, 7 Dec 2006 01:37:34 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Areca 1260 Performance"
},
{
"msg_contents": "At 03:37 AM 12/7/2006, Brian Wipf wrote:\n>On 6-Dec-06, at 5:26 PM, Ron wrote:\n>>\n>>All this stuff is so leading edge that it is far from clear what\n>>the RW performance of DBMS based on these components will be\n>>without extensive testing of =your= app under =your= workload.\n>I want the best performance for the dollar, so I can't rule anything\n>out. Right now I'm leaning towards Kentsfield, but I will do some\n>more research before I make a decision. We probably won't wait much\n>past January though.\nKentsfield's outrageously high pricing and operating costs (power and \ncooling) are not likely to make it the cost/performance winner.\n\nOTOH,\n1= ATM it is the way to throw the most cache per socket at a DBMS \nwithin the Core2 CPU line (Tulsa has even more at 16MB per CPU).\n2= SSSE3 and other Core2 optimizations have led to some impressive \nperformance numbers- unless raw clock rate is the thing that can help \nyou the most.\n\nIf what you need for highest performance is the absolute highest \nclock rate or most cache per core, then bench some Intel Tulsa's.\n\nApps with memory footprints too large for on die or in socket caches \nor that require extreme memory subsystem performance are still best \nserved by AMD CPUs.\n\nIf you are getting the impression that it is presently complicated \ndeciding which CPU is best for any specific pg app, then I am making \nthe impression I intend to.\n\n\n>>One thing that is clear from what you've posted thus far is that\n>>you are going to needmore HDs if you want to have any chance of\n>>fully utilizing your Areca HW.\n>Do you know off hand where I might find a chassis that can fit 24[+]\n>drives? The last chassis we ordered was through Supermicro, and the\n>largest they carry fits 16 drives.\nwww.pogolinux.com has 24 and 48 bay 3.5\" HD chassis'; and a 64 bay \n2.5\" chassis. Tell them I sent you.\n\nwww.impediment.com are folks I trust regarding all things storage \n(and RAM). Again, tell them I sent you.\n\nwww.aberdeeninc.com is also a vendor I've had luck with, but try Pogo \nand Impediment first.\n\n\nGood luck and please post what happens,\nRon Peacetree\n \n\n",
"msg_date": "Thu, 07 Dec 2006 07:34:16 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Areca 1260 Performance"
},
{
"msg_contents": "\n>> One thing that is clear from what you've posted thus far is that you \n>> are going to needmore HDs if you want to have any chance of fully \n>> utilizing your Areca HW.\n> Do you know off hand where I might find a chassis that can fit 24[+] \n> drives? The last chassis we ordered was through Supermicro, and the \n> largest they carry fits 16 drives.\n\nChenbro has a 24 drive case - the largest I have seen. It fits the big \n4/8 cpu boards as well.\n\nhttp://www.chenbro.com/corporatesite/products_01features.php?serno=43\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Fri, 08 Dec 2006 02:13:30 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Areca 1260 Performance"
},
{
"msg_contents": "I'm building a SuperServer 6035B server (16 scsi drives). My schema has\nbasically two large tables (million+ per day) each which are partitioned\ndaily, and queried independently of each other. Would you recommend a raid1\nsystem partition and 14 drives in a raid 10 or should i create separate\npartitions/tablespaces for the two large tables and indexes?\n\nThanks\nGene\n\nOn 12/7/06, Shane Ambler <[email protected]> wrote:\n>\n>\n> >> One thing that is clear from what you've posted thus far is that you\n> >> are going to needmore HDs if you want to have any chance of fully\n> >> utilizing your Areca HW.\n> > Do you know off hand where I might find a chassis that can fit 24[+]\n> > drives? The last chassis we ordered was through Supermicro, and the\n> > largest they carry fits 16 drives.\n>\n> Chenbro has a 24 drive case - the largest I have seen. It fits the big\n> 4/8 cpu boards as well.\n>\n> http://www.chenbro.com/corporatesite/products_01features.php?serno=43\n>\n>\n> --\n>\n> Shane Ambler\n> [email protected]\n>\n> Get Sheeky @ http://Sheeky.Biz\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n\n-- \nGene Hart\ncell: 443-604-2679\n\nI'm building a SuperServer 6035B server (16 scsi drives). My schema has basically two large tables (million+ per day) each which are partitioned daily, and queried independently of each other. Would you recommend a raid1 system partition and 14 drives in a raid 10 or should i create separate partitions/tablespaces for the two large tables and indexes?\nThanksGeneOn 12/7/06, Shane Ambler <[email protected]> wrote:\n>> One thing that is clear from what you've posted thus far is that you>> are going to needmore HDs if you want to have any chance of fully>> utilizing your Areca HW.> Do you know off hand where I might find a chassis that can fit 24[+]\n> drives? The last chassis we ordered was through Supermicro, and the> largest they carry fits 16 drives.Chenbro has a 24 drive case - the largest I have seen. It fits the big4/8 cpu boards as well.\nhttp://www.chenbro.com/corporatesite/products_01features.php?serno=43--Shane Ambler\[email protected] Sheeky @ http://Sheeky.Biz---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not match-- Gene Hartcell: 443-604-2679",
"msg_date": "Thu, 7 Dec 2006 11:02:27 -0500",
"msg_from": "Gene <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Areca 1260 Performance"
},
{
"msg_contents": "On 12/6/06, Brian Wipf <[email protected]> wrote:\n> > Hmmm. Something is not right. With a 16 HD RAID 10 based on 10K\n> > rpm HDs, you should be seeing higher absolute performance numbers.\n> >\n> > Find out what HW the Areca guys and Tweakers guys used to test the\n> > 1280s.\n> > At LW2006, Areca was demonstrating all-in-cache reads and writes of\n> > ~1600MBps and ~1300MBps respectively along with RAID 0 Sustained\n> > Rates of ~900MBps read, and ~850MBps write.\n> >\n> > Luke, I know you've managed to get higher IO rates than this with\n> > this class of HW. Is there a OS or SW config issue Brian should\n> > closely investigate?\n>\n> I wrote 1280 by a mistake. It's actually a 1260. Sorry about that.\n> The IOP341 class of cards weren't available when we ordered the parts\n> for the box, so we had to go with the 1260. The box(es) we build next\n> month will either have the 1261ML or 1280 depending on whether we go\n> 16 or 24 disk.\n>\n> I noticed Bucky got almost 800 random seeks per second on her 6 disk\n> 10000 RPM SAS drive Dell PowerEdge 2950. The random seek performance\n> of this box disappointed me the most. Even running 2 concurrent\n> bonnies, the random seek performance only increased from 644 seeks/\n> sec to 813 seeks/sec. Maybe there is some setting I'm missing? This\n> card looked pretty impressive on tweakers.net.\n\nI've been looking a lot at the SAS enclosures lately and am starting\nto feel like that's the way to go. Performance is amazing and the\nflexibility of choosing low cost SATA or high speed SAS drives is\ngreat. not only that, but more and more SAS is coming out in 2.5\"\ndrives which seems to be a better fit for databases...more spindles.\nwith a 2.5\" drive enclosure they can stuff 10 hot swap drives into a\n1u enclosure...that's pretty amazing.\n\none downside of SAS is most of the HBAs are pci-express only, that can\nlimit your options unless your server is very new. also you don't\nwant to skimp on the hba, get the best available, which looks to be\nlsi logic at the moment (dell perc5/e is lsi logic controller as is\nthe intel sas hba)...others?\n\nmerlin\n",
"msg_date": "Thu, 7 Dec 2006 16:23:37 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "At 11:02 AM 12/7/2006, Gene wrote:\n>I'm building a SuperServer 6035B server (16 scsi drives). My schema \n>has basically two large tables (million+ per day) each which are \n>partitioned daily, and queried independently of each other. Would \n>you recommend a raid1 system partition and 14 drives in a raid 10 or \n>should i create separate partitions/tablespaces for the two large \n>tables and indexes?\nNot an easy question to answer w/o knowing more about your actual \nqueries and workload.\n\nTo keep the math simple, let's assume each SCSI HD has and ASTR of \n75MBps. A 14 HD RAID 10 therefore has an ASTR of 7* 75= 525MBps. If \nthe rest of your system can handle this much or more bandwidth, then \nthis is most probably the best config.\n\nDedicating spindles to specific tables is usually best done when \nthere is HD bandwidth that can't be utilized if the HDs are in a \nlarger set +and+ there is a significant hot spot that can use \ndedicated resources.\n\nMy first attempt would be to use other internal HDs for a RAID 1 \nsystems volume and use all 16 of your HBA HDs for a 16 HD RAID 10 array.\nThen I'd bench the config to see if it had acceptable performance.\n\nIf yes, stop. Else start considering the more complicated alternatives.\n\nRemember that adding HDs and RAM is far cheaper than even a few hours \nof skilled technical labor.\n\nRon Peacetree \n\n",
"msg_date": "Thu, 07 Dec 2006 19:11:09 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Areca 1260 Performance"
},
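If benchmarking ever does point at the dedicated-spindle layout Ron describes, the tablespace half of it is only a few statements. A minimal sketch, with invented tablespace names, paths, and table names (nothing here comes from Gene's actual schema), using the CREATE TABLESPACE and ALTER ... SET TABLESPACE forms available since PostgreSQL 8.0:

-- names and paths below are placeholders for illustration only
CREATE TABLESPACE array_a_space LOCATION '/array_a/pgdata';
CREATE TABLESPACE array_b_space LOCATION '/array_b/pgdata';

-- move each big daily-partitioned table (and its indexes) onto its own spindles;
-- note that SET TABLESPACE rewrites the relation and takes an exclusive lock
ALTER TABLE big_table_a SET TABLESPACE array_a_space;
ALTER INDEX big_table_a_pkey SET TABLESPACE array_a_space;
ALTER TABLE big_table_b SET TABLESPACE array_b_space;
ALTER INDEX big_table_b_pkey SET TABLESPACE array_b_space;

As Ron says, though, the single 16-drive RAID 10 should be benchmarked first; dedicated tablespaces are only worth the complexity if that proves insufficient.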
{
"msg_contents": "On Wed, Dec 06, 2006 at 08:55:14 -0800,\n Mark Lewis <[email protected]> wrote:\n> > Anyone run their RAIDs with disk caches enabled, or is this akin to\n> > having fsync off?\n> \n> Disk write caches are basically always akin to having fsync off. The\n> only time a write-cache is (more or less) safe to enable is when it is\n> backed by a battery or in some other way made non-volatile.\n> \n> So a RAID controller with a battery-backed write cache can enable its\n> own write cache, but can't safely enable the write-caches on the disk\n> drives it manages.\n\nThis appears to be changing under Linux. Recent kernels have write barriers\nimplemented using cache flush commands (which some drives ignore, so you\nneed to be careful). In very recent kernels, software raid using raid 1\nwill also handle write barriers. To get this feature, you are supposed to\nmount ext3 file systems with the barrier=1 option. For other file systems,\nthe parameter may need to be different.\n",
"msg_date": "Mon, 11 Dec 2006 11:54:11 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:\n> On Wed, Dec 06, 2006 at 08:55:14 -0800,\n> Mark Lewis <[email protected]> wrote:\n>>> Anyone run their RAIDs with disk caches enabled, or is this akin to\n>>> having fsync off?\n>>\n>> Disk write caches are basically always akin to having fsync off. The\n>> only time a write-cache is (more or less) safe to enable is when \n>> it is\n>> backed by a battery or in some other way made non-volatile.\n>>\n>> So a RAID controller with a battery-backed write cache can enable its\n>> own write cache, but can't safely enable the write-caches on the disk\n>> drives it manages.\n>\n> This appears to be changing under Linux. Recent kernels have write \n> barriers\n> implemented using cache flush commands (which some drives ignore, \n> so you\n> need to be careful). In very recent kernels, software raid using \n> raid 1\n> will also handle write barriers. To get this feature, you are \n> supposed to\n> mount ext3 file systems with the barrier=1 option. For other file \n> systems,\n> the parameter may need to be different.\n\nBut would that actually provide a meaningful benefit? When you \nCOMMIT, the WAL data must hit non-volatile storage of some kind, \nwhich without a BBU or something similar, means hitting the platter. \nSo I don't see how enabling the disk cache will help, unless of \ncourse it's ignoring fsync.\n\nNow, I have heard something about drives using their stored \nrotational energy to flush out the cache... but I tend to suspect \nurban legend there...\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Thu, 14 Dec 2006 01:39:00 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Thu, Dec 14, 2006 at 01:39:00 -0500,\n Jim Nasby <[email protected]> wrote:\n> On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:\n> >\n> >This appears to be changing under Linux. Recent kernels have write \n> >barriers\n> >implemented using cache flush commands (which some drives ignore, \n> >so you\n> >need to be careful). In very recent kernels, software raid using \n> >raid 1\n> >will also handle write barriers. To get this feature, you are \n> >supposed to\n> >mount ext3 file systems with the barrier=1 option. For other file \n> >systems,\n> >the parameter may need to be different.\n> \n> But would that actually provide a meaningful benefit? When you \n> COMMIT, the WAL data must hit non-volatile storage of some kind, \n> which without a BBU or something similar, means hitting the platter. \n> So I don't see how enabling the disk cache will help, unless of \n> course it's ignoring fsync.\n\nWhen you do an fsync, the OS sends a cache flush command to the drive,\nwhich on most drives (but supposedly there are ones that ignore this\ncommand) doesn't return until all of the cached pages have been written\nto the platter, and doesn't return from the fsync until the flush is complete.\nWhile this writes more sectors than you really need, it is safe. And it allows\nfor caching to speed up some things (though not as much as having queued\ncommands would).\n\nI have done some tests on my systems and the speeds I am getting make it\nclear that write barriers slow things down to about the same range as having\ncaches disabled. So I believe that it is likely working as advertised.\n\nNote the use case for this is more for hobbiests or development boxes. You can\nonly use it on software raid (md) 1, which rules out most \"real\" systems.\n",
"msg_date": "Thu, 14 Dec 2006 11:48:09 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
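A rough way to check Bruno's numbers from the database side is to time single-row commits and compare them with what a platter can physically deliver. This is only a sketch: the table name commit_probe is made up, and it assumes the statements are run from psql with \timing turned on so each one reports its own elapsed time:

-- throwaway table used purely as a commit-latency probe
CREATE TABLE commit_probe (n integer);

-- outside an explicit transaction each INSERT commits on its own,
-- so every statement forces a WAL flush to wherever fsync really lands
INSERT INTO commit_probe VALUES (1);
INSERT INTO commit_probe VALUES (2);
INSERT INTO commit_probe VALUES (3);

DROP TABLE commit_probe;

As a rule of thumb, a lone 7200 rpm drive that honors the flush can only sustain on the order of a hundred or so commits per second from a single session; rates in the thousands with no battery-backed cache usually mean something in the stack is absorbing the flush.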
{
"msg_contents": "Bruno Wolff III wrote:\n> On Thu, Dec 14, 2006 at 01:39:00 -0500,\n> Jim Nasby <[email protected]> wrote:\n>> On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:\n>>> This appears to be changing under Linux. Recent kernels have write \n>>> barriers implemented using cache flush commands (which \n>>> some drives ignore, so you need to be careful).\n\nIs it true that some drives ignore this; or is it mostly\nan urban legend that was started by testers that didn't\nhave kernels with write barrier support. I'd be especially\ninterested in knowing if there are any currently available\ndrives which ignore those commands.\n\n>>> In very recent kernels, software raid using raid 1 will also\n>>> handle write barriers. To get this feature, you are supposed to\n>>> mount ext3 file systems with the barrier=1 option. For other file \n>>> systems, the parameter may need to be different.\n\nWith XFS the default is apparently to enable write barrier\nsupport unless you explicitly disable it with the nobarrier mount option.\nIt also will warn you in the system log if the underlying device\ndoesn't have write barrier support.\n\nSGI recommends that you use the \"nobarrier\" mount option if you do\nhave a persistent (battery backed) write cache on your raid device.\n\n http://oss.sgi.com/projects/xfs/faq.html#wcache\n\n\n>> But would that actually provide a meaningful benefit? When you \n>> COMMIT, the WAL data must hit non-volatile storage of some kind, \n>> which without a BBU or something similar, means hitting the platter. \n>> So I don't see how enabling the disk cache will help, unless of \n>> course it's ignoring fsync.\n\nWith write barriers, fsync() waits for the physical disk; but I believe\nthe background writes from write() done by pdflush don't have to; so\nit's kinda like only disabling the cache for WAL files and the filesystem's\njournal, but having it enabled for the rest of your write activity (the\ntables except at checkpoints? the log file?).\n\n> Note the use case for this is more for hobbiests or development boxes. You can\n> only use it on software raid (md) 1, which rules out most \"real\" systems.\n> \n\nUgh. Looking for where that's documented; and hoping it is or will soon\nwork on software 1+0 as well.\n",
"msg_date": "Thu, 14 Dec 2006 13:21:11 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "The reply wasn't (directly copied to the performance list, but I will\ncopy this one back.\n\nOn Thu, Dec 14, 2006 at 13:21:11 -0800,\n Ron Mayer <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> > On Thu, Dec 14, 2006 at 01:39:00 -0500,\n> > Jim Nasby <[email protected]> wrote:\n> >> On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:\n> >>> This appears to be changing under Linux. Recent kernels have write \n> >>> barriers implemented using cache flush commands (which \n> >>> some drives ignore, so you need to be careful).\n> \n> Is it true that some drives ignore this; or is it mostly\n> an urban legend that was started by testers that didn't\n> have kernels with write barrier support. I'd be especially\n> interested in knowing if there are any currently available\n> drives which ignore those commands.\n> \n> >>> In very recent kernels, software raid using raid 1 will also\n> >>> handle write barriers. To get this feature, you are supposed to\n> >>> mount ext3 file systems with the barrier=1 option. For other file \n> >>> systems, the parameter may need to be different.\n> \n> With XFS the default is apparently to enable write barrier\n> support unless you explicitly disable it with the nobarrier mount option.\n> It also will warn you in the system log if the underlying device\n> doesn't have write barrier support.\n> \n> SGI recommends that you use the \"nobarrier\" mount option if you do\n> have a persistent (battery backed) write cache on your raid device.\n> \n> http://oss.sgi.com/projects/xfs/faq.html#wcache\n> \n> \n> >> But would that actually provide a meaningful benefit? When you \n> >> COMMIT, the WAL data must hit non-volatile storage of some kind, \n> >> which without a BBU or something similar, means hitting the platter. \n> >> So I don't see how enabling the disk cache will help, unless of \n> >> course it's ignoring fsync.\n> \n> With write barriers, fsync() waits for the physical disk; but I believe\n> the background writes from write() done by pdflush don't have to; so\n> it's kinda like only disabling the cache for WAL files and the filesystem's\n> journal, but having it enabled for the rest of your write activity (the\n> tables except at checkpoints? the log file?).\n> \n> > Note the use case for this is more for hobbiests or development boxes. You can\n> > only use it on software raid (md) 1, which rules out most \"real\" systems.\n> > \n> \n> Ugh. Looking for where that's documented; and hoping it is or will soon\n> work on software 1+0 as well.\n",
"msg_date": "Fri, 15 Dec 2006 10:34:15 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Thu, Dec 14, 2006 at 13:21:11 -0800,\n Ron Mayer <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> > On Thu, Dec 14, 2006 at 01:39:00 -0500,\n> > Jim Nasby <[email protected]> wrote:\n> >> On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:\n> >>> This appears to be changing under Linux. Recent kernels have write \n> >>> barriers implemented using cache flush commands (which \n> >>> some drives ignore, so you need to be careful).\n> \n> Is it true that some drives ignore this; or is it mostly\n> an urban legend that was started by testers that didn't\n> have kernels with write barrier support. I'd be especially\n> interested in knowing if there are any currently available\n> drives which ignore those commands.\n\nI saw posts claiming this, but no specific drives mentioned. I did see one\npost that claimed that the cache flush command was mandated (not optional)\nby the spec.\n\n> >>> In very recent kernels, software raid using raid 1 will also\n> >>> handle write barriers. To get this feature, you are supposed to\n> >>> mount ext3 file systems with the barrier=1 option. For other file \n> >>> systems, the parameter may need to be different.\n> \n> With XFS the default is apparently to enable write barrier\n> support unless you explicitly disable it with the nobarrier mount option.\n> It also will warn you in the system log if the underlying device\n> doesn't have write barrier support.\n\nI think there might be a similar patch for ext3 going into 2.6.19. I haven't\nchecked a 2.6.19 kernel to make sure though.\n\n> \n> SGI recommends that you use the \"nobarrier\" mount option if you do\n> have a persistent (battery backed) write cache on your raid device.\n> \n> http://oss.sgi.com/projects/xfs/faq.html#wcache\n> \n> \n> >> But would that actually provide a meaningful benefit? When you \n> >> COMMIT, the WAL data must hit non-volatile storage of some kind, \n> >> which without a BBU or something similar, means hitting the platter. \n> >> So I don't see how enabling the disk cache will help, unless of \n> >> course it's ignoring fsync.\n> \n> With write barriers, fsync() waits for the physical disk; but I believe\n> the background writes from write() done by pdflush don't have to; so\n> it's kinda like only disabling the cache for WAL files and the filesystem's\n> journal, but having it enabled for the rest of your write activity (the\n> tables except at checkpoints? the log file?).\n\nNot exactly. Whenever you commit the file system log or fsync the wal file,\nall previously written blocks will be flushed to the disk platter, before\nany new write requests are honored. So journalling semantics will work\nproperly.\n\n> > Note the use case for this is more for hobbiests or development boxes. You can\n> > only use it on software raid (md) 1, which rules out most \"real\" systems.\n> > \n> \n> Ugh. Looking for where that's documented; and hoping it is or will soon\n> work on software 1+0 as well.\n\nI saw a comment somewhere that raid 0 provided some problems and the suggestion\nwas to handle the barrier at a different level (though I don't know how you\ncould). So I don't belive 1+0 or 5 are currently supported or will be in the\nnear term.\n\nThe other feature I would like is to be able to use write barriers with\nencrypted file systems. I haven't found anythign on whether or not there\nare near term plans by any one to support that.\n",
"msg_date": "Fri, 15 Dec 2006 10:44:39 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Fri, Dec 15, 2006 at 10:34:15 -0600,\n Bruno Wolff III <[email protected]> wrote:\n> The reply wasn't (directly copied to the performance list, but I will\n> copy this one back.\n\nSorry about this one, I meant to intersperse my replies and hit the 'y'\nkey at the wrong time. (And there ended up being a copy on performance\nanyway from the news gateway.)\n",
"msg_date": "Fri, 15 Dec 2006 15:12:47 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
},
{
"msg_contents": "On Fri, Dec 15, 2006 at 10:44:39 -0600,\n Bruno Wolff III <[email protected]> wrote:\n> \n> The other feature I would like is to be able to use write barriers with\n> encrypted file systems. I haven't found anythign on whether or not there\n> are near term plans by any one to support that.\n\nI asked about this on the dm-crypt list and was told that write barriers\nwork pre 2.6.19. There was a change for 2.6.19 that might break things for\nSMP systems. But that will probably get fixed eventually.\n",
"msg_date": "Sun, 17 Dec 2006 05:18:00 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: File Systems Compared"
}
] |
[
{
"msg_contents": "Hi,\n I have a \"product\" table having 350 records. It takes approx 1.8 seconds to get all records from this table. I copies this table to a \"product_temp\" table and run the same query to select all records; and it took 10ms(much faster).\n I did \"VACUUM FULL\" on \"product\" table but It did not work.\n \n I checked the file size of these two tables. \n \"product\" table's file size is \"32mb\" and\n \"product_temp\" table's file size is \"72k\".\n \n So, it seems that \"VACUUM FULL\" is not doing anything. \n Please suggest.\n \n asif ali\n icrossing inc.\n \n---------------------------------\nHave a burning question? Go to Yahoo! Answers and get answers from real people who know.\nHi, I have a \"product\" table having 350 records. It takes approx 1.8 seconds to get all records from this table. I copies this table to a \"product_temp\" table and run the same query to select all records; and it took 10ms(much faster). I did \"VACUUM FULL\" on \"product\" table but It did not work. I checked the file size of these two tables. \"product\" table's file size is \"32mb\" and \"product_temp\" table's file size is \"72k\". So, it seems that \"VACUUM FULL\" is not doing anything. Please suggest. asif ali icrossing inc.\nHave a burning question? Go to Yahoo! Answers and get answers from real people who know.",
"msg_date": "Wed, 6 Dec 2006 09:07:03 -0800 (PST)",
"msg_from": "asif ali <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM FULL does not works......."
},
{
"msg_contents": "2006/12/6, asif ali <[email protected]>:\n> Hi,\n> I have a \"product\" table having 350 records. It takes approx 1.8 seconds to\n> get all records from this table. I copies this table to a \"product_temp\"\n> table and run the same query to select all records; and it took 10ms(much\n> faster).\n> I did \"VACUUM FULL\" on \"product\" table but It did not work.\n>\n> I checked the file size of these two tables.\n> \"product\" table's file size is \"32mb\" and\n> \"product_temp\" table's file size is \"72k\".\n>\n> So, it seems that \"VACUUM FULL\" is not doing anything.\n> Please suggest.\n\ntry VACUUM FULL VERBOSE and report the result.\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n",
"msg_date": "Wed, 6 Dec 2006 18:13:07 +0100",
"msg_from": "\"Jean-Max Reymond\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL does not works......."
},
{
"msg_contents": "On Wed, 2006-12-06 at 11:07, asif ali wrote:\n> Hi,\n> I have a \"product\" table having 350 records. It takes approx 1.8\n> seconds to get all records from this table. I copies this table to a\n> \"product_temp\" table and run the same query to select all records; and\n> it took 10ms(much faster).\n> I did \"VACUUM FULL\" on \"product\" table but It did not work.\n> \n> I checked the file size of these two tables. \n> \"product\" table's file size is \"32mb\" and\n> \"product_temp\" table's file size is \"72k\".\n> \n> So, it seems that \"VACUUM FULL\" is not doing anything. \n> Please suggest.\n\nMore than likely you've got a long running transaction that the vacuum\ncan't vacuum around.\n",
"msg_date": "Wed, 06 Dec 2006 11:20:43 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL does not works......."
},
{
"msg_contents": "On 12/6/06, asif ali <[email protected]> wrote:\n> Hi,\n> I have a \"product\" table having 350 records. It takes approx 1.8 seconds to\n> get all records from this table. I copies this table to a \"product_temp\"\n> table and run the same query to select all records; and it took 10ms(much\n> faster).\n> I did \"VACUUM FULL\" on \"product\" table but It did not work.\n>\n> I checked the file size of these two tables.\n> \"product\" table's file size is \"32mb\" and\n> \"product_temp\" table's file size is \"72k\".\n>\n> So, it seems that \"VACUUM FULL\" is not doing anything.\n> Please suggest.\n\nIt is desirable that PostgreSQL version be reported in problem descriptions.\n\nOlder versions of pgsql had problem of index bloat. It is interesting to\nfind out why VACUUM FULL does not work in your case(wait for the experts) ,\n but most probably CLUSTERING the table on primary key is going to\nsolve the query performance problem (temporarily)\n\n>\n> asif ali\n> icrossing inc.\n>\n> ________________________________\n> Have a burning question? Go to Yahoo! Answers and get answers from real\n> people who know.\n",
"msg_date": "Wed, 6 Dec 2006 23:17:10 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL does not works......."
},
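For reference, the clustering Rajesh mentions is a single statement on the 8.0/8.1 syntax. A sketch only; the table and index names are the ones that show up in the VACUUM VERBOSE output later in the thread, so adjust them to the real schema:

-- pre-8.3 form is CLUSTER indexname ON tablename; it rewrites the table,
-- holds an exclusive lock while doing so, and the physical ordering decays
-- as new rows arrive, which is why it is only a temporary fix
CLUSTER product_table_pk ON product_table;
ANALYZE product_table;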
{
"msg_contents": "Thanks for the prompt reply...\n Here is the output of \"VACUUM FULL VERBOSE\"\n The postgres version is \"8.0.3\".\n \n Thanks\n asif ali\n icrossing inc\n \n INFO: vacuuming \"public.product_table\"\n INFO: \"product_table\": found 0 removable, 139178 nonremovable row versions in 4305 pages\n DETAIL: 138859 dead row versions cannot be removed yet.\n Nonremovable row versions range from 152 to 273 bytes long.\n There were 26916 unused item pointers.\n Total free space (including removable row versions) is 4507788 bytes.\n 249 pages are or will become empty, including 0 at the end of the table.\n 746 pages containing 4286656 free bytes are potential move destinations.\n CPU 0.04s/0.06u sec elapsed 0.15 sec.\n INFO: index \"product_table_client_name_unique\" now contains 139178 row versions in 3916 pages\n DETAIL: 0 index row versions were removed.\n 2539 index pages have been deleted, 2055 are currently reusable.\n CPU 0.08s/0.02u sec elapsed 0.76 sec.\n INFO: index \"product_table_cpc_agent_id_unique\" now contains 139178 row versions in 1980 pages\n DETAIL: 0 index row versions were removed.\n 1162 index pages have been deleted, 950 are currently reusable.\n CPU 0.04s/0.02u sec elapsed 0.49 sec.\n INFO: index \"product_table_pk\" now contains 139178 row versions in 3472 pages\n DETAIL: 0 index row versions were removed.\n 2260 index pages have been deleted, 1870 are currently reusable.\n CPU 0.08s/0.02u sec elapsed 0.53 sec.\n INFO: \"product_table\": moved 18631 row versions, truncated 4305 to 4299 pages\n DETAIL: CPU 0.18s/1.14u sec elapsed 2.38 sec.\n INFO: index \"product_table_client_name_unique\" now contains 157728 row versions in 3916 pages\n DETAIL: 81 index row versions were removed.\n 2407 index pages have been deleted, 1923 are currently reusable.\n CPU 0.04s/0.01u sec elapsed 0.17 sec.\n INFO: index \"product_table_cpc_agent_id_unique\" now contains 157728 row versions in 1980 pages\n DETAIL: 81 index row versions were removed.\n 1100 index pages have been deleted, 888 are currently reusable.\n CPU 0.03s/0.01u sec elapsed 0.16 sec.\n INFO: index \"product_table_pk\" now contains 157728 row versions in 3472 pages\n DETAIL: 81 index row versions were removed.\n 2150 index pages have been deleted, 1760 are currently reusable.\n CPU 0.05s/0.01u sec elapsed 0.30 sec.\n INFO: vacuuming \"pg_toast.pg_toast_11891545\"\n INFO: \"pg_toast_11891545\": found 0 removable, 0 nonremovable row versions in 0 pages\n DETAIL: 0 dead row versions cannot be removed yet.\n Nonremovable row versions range from 0 to 0 bytes long.\n There were 0 unused item pointers.\n Total free space (including removable row versions) is 0 bytes.\n 0 pages are or will become empty, including 0 at the end of the table.\n 0 pages containing 0 free bytes are potential move destinations.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\n INFO: index \"pg_toast_11891545_index\" now contains 0 row versions in 1 pages\n DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\n \n Query returned successfully with no result in 5201 ms.\n\nJean-Max Reymond <[email protected]> wrote: 2006/12/6, asif ali :\n> Hi,\n> I have a \"product\" table having 350 records. It takes approx 1.8 seconds to\n> get all records from this table. 
I copies this table to a \"product_temp\"\n> table and run the same query to select all records; and it took 10ms(much\n> faster).\n> I did \"VACUUM FULL\" on \"product\" table but It did not work.\n>\n> I checked the file size of these two tables.\n> \"product\" table's file size is \"32mb\" and\n> \"product_temp\" table's file size is \"72k\".\n>\n> So, it seems that \"VACUUM FULL\" is not doing anything.\n> Please suggest.\n\ntry VACUUM FULL VERBOSE and report the result.\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n",
"msg_date": "Wed, 6 Dec 2006 10:37:20 -0800 (PST)",
"msg_from": "asif ali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM FULL does not works......."
},
{
"msg_contents": "asif ali <[email protected]> writes:\n> INFO: vacuuming \"public.product_table\"\n> INFO: \"product_table\": found 0 removable, 139178 nonremovable row versions in 4305 pages\n> DETAIL: 138859 dead row versions cannot be removed yet.\n\nSo Scott's guess was correct: you've got a whole lot of dead rows in\nthere that will eventually be removable, but not while there's still\nan open transaction that might be able to \"see\" them. Find your open\ntransaction and get rid of it (pg_stat_activity might help).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Dec 2006 14:55:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL does not works....... "
},
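A hedged example of the pg_stat_activity check Tom suggests, written against the 8.0/8.1-era column names (procpid, current_query) and assuming stats_command_string = on so idle sessions are labelled:

-- backends sitting idle inside an open transaction are what keep
-- VACUUM from reclaiming those dead rows
SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction';

If current_query only ever shows <command string not enabled>, stats_command_string is off in postgresql.conf, and the ps/top approach suggested further down the thread is the fallback.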
{
"msg_contents": "Thanks Everybody for helping me out.\n I checked \"pg_stat_activity\"/pg_locks, but do not see any activity on the table. \n How to find a old running transaction...\n I saw this link, but it did not help..\n http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.php\n \n Thanks\n \n asif ali\n icrossing inc\n \nTom Lane <[email protected]> wrote: asif ali writes:\n> INFO: vacuuming \"public.product_table\"\n> INFO: \"product_table\": found 0 removable, 139178 nonremovable row versions in 4305 pages\n> DETAIL: 138859 dead row versions cannot be removed yet.\n\nSo Scott's guess was correct: you've got a whole lot of dead rows in\nthere that will eventually be removable, but not while there's still\nan open transaction that might be able to \"see\" them. Find your open\ntransaction and get rid of it (pg_stat_activity might help).\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n \n---------------------------------\nCheck out the all-new Yahoo! Mail beta - Fire up a more powerful email and get things done faster.\nThanks Everybody for helping me out. I checked \"pg_stat_activity\"/pg_locks, but do not see any activity on the table. How to find a old running transaction... I saw this link, but it did not help.. http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.php Thanks asif ali icrossing inc Tom Lane <[email protected]> wrote: asif ali writes:> INFO: vacuuming \"public.product_table\"> INFO: \"product_table\": found 0 removable, 139178 nonremovable row versions in 4305 pages> DETAIL: 138859 dead row versions cannot be removed yet.So Scott's guess was correct: you've got a whole lot of dead rows inthere that will eventually be removable, but not while there's stillan open transaction that might be able to \"see\" them. Find your\n opentransaction and get rid of it (pg_stat_activity might help). regards, tom lane---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives? http://archives.postgresql.org\nCheck out the all-new Yahoo! Mail beta - Fire up a more powerful email and get things done faster.",
"msg_date": "Wed, 6 Dec 2006 13:53:01 -0800 (PST)",
"msg_from": "asif ali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM FULL does not works....... "
},
{
"msg_contents": "On Wed, 2006-12-06 at 15:53, asif ali wrote:\n> Thanks Everybody for helping me out.\n> I checked \"pg_stat_activity\"/pg_locks, but do not see any activity on\n> the table. \n> How to find a old running transaction...\n> I saw this link, but it did not help..\n> http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.php\n\nSometimes just using top or ps will show you.\n\non linux you can run top and then hit c for show command line and look\nfor ones that are IDLE\n\nOr, try ps:\n\nps axw|grep postgres\n\nOn my machine normally:\n\n 2408 ? S 0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D /home/postgres/data\n 2615 ? S 0:00 postgres: stats buffer process\n 2616 ? S 0:00 postgres: stats collector process\n 2857 ? S 0:00 postgres: writer process\n 2858 ? S 0:00 postgres: stats buffer process\n 2859 ? S 0:00 postgres: stats collector process\n\nBut with an idle transaction:\n\n 2408 ? S 0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D /home/postgres/data\n 2615 ? S 0:00 postgres: stats buffer process\n 2616 ? S 0:00 postgres: stats collector process\n 2857 ? S 0:00 postgres: writer process\n 2858 ? S 0:00 postgres: stats buffer process\n 2859 ? S 0:00 postgres: stats collector process\n 8679 ? S 0:00 postgres: smarlowe test [local] idle in transaction\n\nThar she blows!\n\nAlso, you can restart the database and vacuum it then too. Of course,\ndon't do that during regular business hours...\n",
"msg_date": "Wed, 06 Dec 2006 16:13:29 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL does not works......."
},
{
"msg_contents": "Thanks Scott,\n It worked!!!\n We killed an old idle running transaction, now everything is fine..\n \n Thanks Again\n asif ali\n icrossing inc\n\nScott Marlowe <[email protected]> wrote: On Wed, 2006-12-06 at 15:53, asif ali wrote:\n> Thanks Everybody for helping me out.\n> I checked \"pg_stat_activity\"/pg_locks, but do not see any activity on\n> the table. \n> How to find a old running transaction...\n> I saw this link, but it did not help..\n> http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.php\n\nSometimes just using top or ps will show you.\n\non linux you can run top and then hit c for show command line and look\nfor ones that are IDLE\n\nOr, try ps:\n\nps axw|grep postgres\n\nOn my machine normally:\n\n 2408 ? S 0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D /home/postgres/data\n 2615 ? S 0:00 postgres: stats buffer process\n 2616 ? S 0:00 postgres: stats collector process\n 2857 ? S 0:00 postgres: writer process\n 2858 ? S 0:00 postgres: stats buffer process\n 2859 ? S 0:00 postgres: stats collector process\n\nBut with an idle transaction:\n\n 2408 ? S 0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D /home/postgres/data\n 2615 ? S 0:00 postgres: stats buffer process\n 2616 ? S 0:00 postgres: stats collector process\n 2857 ? S 0:00 postgres: writer process\n 2858 ? S 0:00 postgres: stats buffer process\n 2859 ? S 0:00 postgres: stats collector process\n 8679 ? S 0:00 postgres: smarlowe test [local] idle in transaction\n\nThar she blows!\n\nAlso, you can restart the database and vacuum it then too. Of course,\ndon't do that during regular business hours...\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n \n---------------------------------\nHave a burning question? Go to Yahoo! Answers and get answers from real people who know.\nThanks Scott, It worked!!! We killed an old idle running transaction, now everything is fine.. Thanks Again asif ali icrossing incScott Marlowe <[email protected]> wrote: On Wed, 2006-12-06 at 15:53, asif ali wrote:> Thanks Everybody for helping me out.> I checked \"pg_stat_activity\"/pg_locks, but do not see any activity on> the table. > How to find a old running transaction...> I saw this link, but it did not help..> http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.phpSometimes just using top or ps will show you.on linux you can run top and then hit c for show command line and lookfor ones that are IDLEOr, try ps:ps axw|grep postgresOn my machine normally: 2408 ? S 0:00\n /usr/local/pgsql/bin/postmaster -p 5432 -D /home/postgres/data 2615 ? S 0:00 postgres: stats buffer process 2616 ? S 0:00 postgres: stats collector process 2857 ? S 0:00 postgres: writer process 2858 ? S 0:00 postgres: stats buffer process 2859 ? S 0:00 postgres: stats collector processBut with an idle transaction: 2408 ? S 0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D /home/postgres/data 2615 ? S 0:00 postgres: stats buffer process 2616 ? S 0:00 postgres: stats collector process 2857 ? S 0:00 postgres: writer process 2858 ? S 0:00 postgres: stats buffer process 2859 ? S 0:00 postgres: stats collector process 8679 ? S 0:00 postgres: smarlowe test [local] idle in transactionThar she blows!Also, you can restart the database and vacuum it then too. 
Of\n course,don't do that during regular business hours...---------------------------(end of broadcast)---------------------------TIP 5: don't forget to increase your free space map settings\nHave a burning question? Go to Yahoo! Answers and get answers from real people who know.",
"msg_date": "Wed, 6 Dec 2006 16:19:26 -0800 (PST)",
"msg_from": "asif ali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM FULL does not works......."
},
{
"msg_contents": "We have a view in our database.\n\n\nCREATE view public.hogs AS\nSELECT pg_stat_activity.procpid, pg_stat_activity.usename,\npg_stat_activity.current_query\n FROM ONLY pg_stat_activity;\n\nSelect current_query from public.hogs helps us to spot errant queries\nat times.\n\n\nregds\nmallah.\n\n\n\n\nOn 12/7/06, asif ali <[email protected]> wrote:\n> Thanks Scott,\n> It worked!!!\n> We killed an old idle running transaction, now everything is fine..\n>\n> Thanks Again\n> asif ali\n> icrossing inc\n>\n>\n> Scott Marlowe <[email protected]> wrote:\n> On Wed, 2006-12-06 at 15:53, asif ali wrote:\n> > Thanks Everybody for helping me out.\n> > I checked \"pg_stat_activity\"/pg_locks, but do not see any activity on\n> > the table.\n> > How to find a old running transaction...\n> > I saw this link, but it did not help..\n> >\n> http://archives.postgresql.org/pgsql-hackers/2005-02/msg00760.php\n>\n> Sometimes just using top or ps will show you.\n>\n> on linux you can run top and then hit c for show command line and look\n> for ones that are IDLE\n>\n> Or, try ps:\n>\n> ps axw|grep postgres\n>\n> On my machine normally:\n>\n> 2408 ? S 0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D\n> /home/postgres/data\n> 2615 ? S 0:00 postgres: stats buffer process\n> 2616 ? S 0:00 postgres: stats collector process\n> 2857 ? S 0:00 postgres: writer process\n> 2858 ? S 0:00 postgres: stats buffer process\n> 2859 ? S 0:00 postgres: stats collector process\n>\n> But with an idle transaction:\n>\n> 2408 ? S 0:00 /usr/local/pgsql/bin/postmaster -p 5432 -D\n> /home/postgres/data\n> 2615 ? S 0:00 postgres: stats buffer process\n> 2616 ? S 0:00 postgres: stats collector process\n> 2857 ? S 0:00 postgres: writer process\n> 2858 ? S 0:00 postgres: stats buffer process\n> 2859 ? S 0:00 postgres: stats collector process\n> 8679 ? S 0:00 postgres: smarlowe test [local] idle in transaction\n>\n> Thar she blows!\n>\n> Also, you can restart the database and vacuum it then too. Of course,\n> don't do that during regular business hours...\n>\n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n>\n> ________________________________\n> Have a burning question? Go to Yahoo! Answers and get answers from real\n> people who know.\n>\n>\n",
"msg_date": "Thu, 7 Dec 2006 12:28:37 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL does not works......."
}
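A small usage example for the view above, with the same caveat that the <IDLE> strings only appear when stats_command_string = on:

-- the sessions blocking VACUUM are the ones idle inside a transaction
SELECT procpid, usename, current_query
  FROM public.hogs
 WHERE current_query = '<IDLE> in transaction';

pg_cancel_backend(procpid) can interrupt a long-running query from SQL, but an idle transaction has nothing to cancel, so killing the backend process (or fixing the client that left it open) as described earlier in the thread remains the practical remedy.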
] |
[
{
"msg_contents": "Hello\n\nWe are having some problems with an UPDATE ... FROM sql-statement and\npg-8.1.4. It takes ages to finish. The problem is the Seq Scan of the\ntable 'mail', this table is over 6GB without indexes, and when we send\nthousands of this type of statement, the server has a very high iowait\npercent.\n\nHow can we get rid of this Seq Scan?\n\nI send the output of an explain and table definitions:\n-------------------------------------------------------------------------\n\nmailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM mail m,\nmail_received mr where mr.mail_id = m.mail_id AND mr.queue_id =\n'1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..932360.78 rows=7184312 width=57)\n -> Nested Loop (cost=0.00..6.54 rows=1 width=0)\n -> Index Scan using received_queue_id_index on mail_received\nmr (cost=0.00..3.20 rows=1 width=4)\n Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n Filter: (mailhost = '129.240.10.47'::inet)\n -> Index Scan using mail_pkey on mail m (cost=0.00..3.32\nrows=1 width=4)\n Index Cond: (\"outer\".mail_id = m.mail_id)\n -> Seq Scan on mail (cost=0.00..860511.12 rows=7184312 width=57)\n(8 rows)\n\nmailstats=# \\d mail\n Table \"public.mail\"\n Column | Type | Modifiers\n------------+--------------+--------------------------------------------------------\n mail_id | integer | not null default\nnextval('mail_mail_id_seq'::regclass)\n size | integer |\n message_id | text | not null\n spamscore | numeric(6,3) |\nIndexes:\n \"mail_pkey\" PRIMARY KEY, btree (mail_id)\n \"mail_message_id_key\" UNIQUE, btree (message_id)\n\nmailstats=# \\d mail_received\n Table \"public.mail_received\"\n Column | Type |\nModifiers\n---------------+-----------------------------+----------------------------------------------------------------------\n reception_id | integer | not null default\nnextval('mail_received_reception_id_seq'::regclass)\n mail_id | integer | not null\n envelope_from | text |\n helohost | text |\n from_host | inet |\n protocol | text |\n mailhost | inet |\n received | timestamp without time zone | not null\n completed | timestamp without time zone |\n queue_id | character varying(16) | not null\nIndexes:\n \"mail_received_pkey\" PRIMARY KEY, btree (reception_id)\n \"mail_received_queue_id_key\" UNIQUE, btree (queue_id, mailhost)\n \"mail_received_completed_idx\" btree (completed)\n \"mail_received_mailhost_index\" btree (mailhost)\n \"mail_received_received_index\" btree (received)\n \"received_id_index\" btree (mail_id)\n \"received_queue_id_index\" btree (queue_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (mail_id) REFERENCES mail(mail_id)\n-------------------------------------------------------------------------\n\nThanks in advance.\nregards,\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n\n",
"msg_date": "Wed, 06 Dec 2006 20:10:43 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems with an update-from statement and pg-8.1.4"
},
{
"msg_contents": "On Wed, 6 Dec 2006, Rafael Martinez wrote:\n\n> We are having some problems with an UPDATE ... FROM sql-statement and\n> pg-8.1.4. It takes ages to finish. The problem is the Seq Scan of the\n> table 'mail', this table is over 6GB without indexes, and when we send\n> thousands of this type of statement, the server has a very high iowait\n> percent.\n>\n> How can we get rid of this Seq Scan?\n>\n> I send the output of an explain and table definitions:\n> -------------------------------------------------------------------------\n>\n> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM mail m,\n> mail_received mr where mr.mail_id = m.mail_id AND mr.queue_id =\n> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n\nI don't think this statement does what you expect. You're ending up with\ntwo copies of mail in the above one as \"mail\" and one as \"m\". You probably\nwant to remove the mail m in FROM and use mail rather than m in the\nwhere clause.\n",
"msg_date": "Wed, 6 Dec 2006 11:20:44 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with an update-from statement and pg-8.1.4"
},
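Spelled out, the statement Stephan is describing drops the second copy of mail from the FROM list and joins mail_received directly against the update target; this is just a sketch of the fix, and it matches the rewritten form that appears later in the thread:

-- the target table appears only once, so there is no accidental self-join
-- and no reason to touch every row of the 6GB table
UPDATE mail
   SET spamscore = '-5.026'
  FROM mail_received mr
 WHERE mr.mail_id = mail.mail_id
   AND mr.queue_id = '1GrxLs-0004N9-I1'
   AND mr.mailhost = '129.240.10.47';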
{
"msg_contents": "Stephan Szabo wrote:\n> On Wed, 6 Dec 2006, Rafael Martinez wrote:\n>\n> \n>> We are having some problems with an UPDATE ... FROM sql-statement and\n>> pg-8.1.4. It takes ages to finish. The problem is the Seq Scan of the\n>> table 'mail', this table is over 6GB without indexes, and when we send\n>> thousands of this type of statement, the server has a very high iowait\n>> percent.\n>>\n>> How can we get rid of this Seq Scan?\n>>\n>> I send the output of an explain and table definitions:\n>> -------------------------------------------------------------------------\n>>\n>> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM mail m,\n>> mail_received mr where mr.mail_id = m.mail_id AND mr.queue_id =\n>> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n>> \n>\n> I don't think this statement does what you expect. You're ending up with\n> two copies of mail in the above one as \"mail\" and one as \"m\". You probably\n> want to remove the mail m in FROM and use mail rather than m in the\n> where clause.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n> \nWorse yet I think your setting \"spamcore\" for EVERY row in mail to \n'-5.026'. The above solution should fix it though.\n\n-- Ted\n\n*\n* <http://www.blackducksoftware.com>\n\n",
"msg_date": "Wed, 06 Dec 2006 14:55:14 -0500",
"msg_from": "Ted Allen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with an update-from statement and pg-8.1.4"
},
{
"msg_contents": "On Wed, 2006-12-06 at 14:55 -0500, Ted Allen wrote:\n> Stephan Szabo wrote:\n> > On Wed, 6 Dec 2006, Rafael Martinez wrote:\n> >>\n> >> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM mail m,\n> >> mail_received mr where mr.mail_id = m.mail_id AND mr.queue_id =\n> >> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n> >> \n> >\n> > I don't think this statement does what you expect. You're ending up with\n> > two copies of mail in the above one as \"mail\" and one as \"m\". You probably\n> > want to remove the mail m in FROM and use mail rather than m in the\n> > where clause.\n> >\n> > \n> Worse yet I think your setting \"spamcore\" for EVERY row in mail to \n> '-5.026'. The above solution should fix it though.\n> \n> -- Ted\n> \n\nThanks for the answers. I think the 'problem' is explain in the\ndocumentation:\n\n\"fromlist\n\nA list of table expressions, allowing columns from other tables to\nappear in the WHERE condition and the update expressions. This is\nsimilar to the list of tables that can be specified in the FROMClause of\na SELECT statement. Note that the target table must not appear in the\nfromlist, unless you intend a self-join (in which case it must appear\nwith an alias in the fromlist)\". \n\nAnd as you said, we can not have 'mail m' in the FROM clause. I have\ncontacted the developers and they will change the statement. I gave then\nthese 2 examples:\n\n-------------------------------------------------------------------------------\nmailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM\nmail_received mr where mr.mail_id = mail.mail_id AND mr.queue_id =\n'1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..6.54 rows=1 width=57)\n -> Index Scan using received_queue_id_index on mail_received mr\n(cost=0.00..3.20 rows=1 width=4)\n Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n Filter: (mailhost = '129.240.10.47'::inet)\n -> Index Scan using mail_pkey on mail (cost=0.00..3.32 rows=1\nwidth=57)\n Index Cond: (\"outer\".mail_id = mail.mail_id)\n(6 rows)\n\nmailstats=# explain update mail SET spamscore = '-5.026' where mail_id\n= (select mail_id from mail_received where queue_id = '1GrxLs-0004N9-I1'\nand mailhost = '129.240.10.47');\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Index Scan using mail_pkey on mail (cost=3.20..6.52 rows=1 width=57)\n Index Cond: (mail_id = $0)\n InitPlan\n -> Index Scan using received_queue_id_index on mail_received\n(cost=0.00..3.20 rows=1 width=4)\n Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n Filter: (mailhost = '129.240.10.47'::inet)\n(6 rows)\n-------------------------------------------------------------------------------\n\nregards,\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n\n",
"msg_date": "Wed, 06 Dec 2006 21:10:45 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with an update-from statement and pg-8.1.4"
},
{
"msg_contents": "Rafael Martinez wrote:\n> On Wed, 2006-12-06 at 14:55 -0500, Ted Allen wrote:\n> \n>> Stephan Szabo wrote:\n>> \n>>> On Wed, 6 Dec 2006, Rafael Martinez wrote:\n>>> \n>>>> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM mail m,\n>>>> mail_received mr where mr.mail_id = m.mail_id AND mr.queue_id =\n>>>> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n>>>> \n>>>> \n>>> I don't think this statement does what you expect. You're ending up with\n>>> two copies of mail in the above one as \"mail\" and one as \"m\". You probably\n>>> want to remove the mail m in FROM and use mail rather than m in the\n>>> where clause.\n>>>\n>>> \n>>> \n>> Worse yet I think your setting \"spamcore\" for EVERY row in mail to \n>> '-5.026'. The above solution should fix it though.\n>>\n>> -- Ted\n>>\n>> \n>\n> Thanks for the answers. I think the 'problem' is explain in the\n> documentation:\n>\n> \"fromlist\n>\n> A list of table expressions, allowing columns from other tables to\n> appear in the WHERE condition and the update expressions. This is\n> similar to the list of tables that can be specified in the FROMClause of\n> a SELECT statement. Note that the target table must not appear in the\n> fromlist, unless you intend a self-join (in which case it must appear\n> with an alias in the fromlist)\". \n>\n> And as you said, we can not have 'mail m' in the FROM clause. I have\n> contacted the developers and they will change the statement. I gave then\n> these 2 examples:\n>\n> -------------------------------------------------------------------------------\n> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM\n> mail_received mr where mr.mail_id = mail.mail_id AND mr.queue_id =\n> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..6.54 rows=1 width=57)\n> -> Index Scan using received_queue_id_index on mail_received mr\n> (cost=0.00..3.20 rows=1 width=4)\n> Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n> Filter: (mailhost = '129.240.10.47'::inet)\n> -> Index Scan using mail_pkey on mail (cost=0.00..3.32 rows=1\n> width=57)\n> Index Cond: (\"outer\".mail_id = mail.mail_id)\n> (6 rows)\n>\n> mailstats=# explain update mail SET spamscore = '-5.026' where mail_id\n> = (select mail_id from mail_received where queue_id = '1GrxLs-0004N9-I1'\n> and mailhost = '129.240.10.47');\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------\n> Index Scan using mail_pkey on mail (cost=3.20..6.52 rows=1 width=57)\n> Index Cond: (mail_id = $0)\n> InitPlan\n> -> Index Scan using received_queue_id_index on mail_received\n> (cost=0.00..3.20 rows=1 width=4)\n> Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n> Filter: (mailhost = '129.240.10.47'::inet)\n> (6 rows)\n> -------------------------------------------------------------------------------\n> \nLook again at the estimated costs of those two query plans. You haven't \ngained anything there. Try this out:\n\nEXPLAIN UPDATE mail\nSET spamscore = '-5.026'\nFROM mail_received mr\nWHERE mail.mail_id = mr.mail_id AND mr.queue_id = '1GrxLs-0004N9-I1' ;\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 06 Dec 2006 14:19:38 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with an update-from statement and pg-8.1.4"
},
{
"msg_contents": "On Wed, 2006-12-06 at 14:19 -0600, Erik Jones wrote:\n> Rafael Martinez wrote:\n> > On Wed, 2006-12-06 at 14:55 -0500, Ted Allen wrote:\n> > \n> >> Stephan Szabo wrote:\n> >> \n> >>> On Wed, 6 Dec 2006, Rafael Martinez wrote:\n> >>> \n> >>>> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM mail m,\n> >>>> mail_received mr where mr.mail_id = m.mail_id AND mr.queue_id =\n> >>>> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n> >>>> \n> >>>> \n> >>> I don't think this statement does what you expect. You're ending up with\n> >>> two copies of mail in the above one as \"mail\" and one as \"m\". You probably\n> >>> want to remove the mail m in FROM and use mail rather than m in the\n> >>> where clause.\n> >>>\n> >>> \n> >>> \n> >> Worse yet I think your setting \"spamcore\" for EVERY row in mail to \n> >> '-5.026'. The above solution should fix it though.\n> >>\n> >> -- Ted\n> >>\n> >> \n> >\n> > Thanks for the answers. I think the 'problem' is explain in the\n> > documentation:\n> >\n> > \"fromlist\n> >\n> > A list of table expressions, allowing columns from other tables to\n> > appear in the WHERE condition and the update expressions. This is\n> > similar to the list of tables that can be specified in the FROMClause of\n> > a SELECT statement. Note that the target table must not appear in the\n> > fromlist, unless you intend a self-join (in which case it must appear\n> > with an alias in the fromlist)\". \n> >\n> > And as you said, we can not have 'mail m' in the FROM clause. I have\n> > contacted the developers and they will change the statement. I gave then\n> > these 2 examples:\n> >\n> > -------------------------------------------------------------------------------\n> > mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM\n> > mail_received mr where mr.mail_id = mail.mail_id AND mr.queue_id =\n> > '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n> > QUERY PLAN\n> > ------------------------------------------------------------------------------------------------------\n> > Nested Loop (cost=0.00..6.54 rows=1 width=57)\n> > -> Index Scan using received_queue_id_index on mail_received mr\n> > (cost=0.00..3.20 rows=1 width=4)\n> > Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n> > Filter: (mailhost = '129.240.10.47'::inet)\n> > -> Index Scan using mail_pkey on mail (cost=0.00..3.32 rows=1\n> > width=57)\n> > Index Cond: (\"outer\".mail_id = mail.mail_id)\n> > (6 rows)\n> >\n> > mailstats=# explain update mail SET spamscore = '-5.026' where mail_id\n> > = (select mail_id from mail_received where queue_id = '1GrxLs-0004N9-I1'\n> > and mailhost = '129.240.10.47');\n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------------------\n> > Index Scan using mail_pkey on mail (cost=3.20..6.52 rows=1 width=57)\n> > Index Cond: (mail_id = $0)\n> > InitPlan\n> > -> Index Scan using received_queue_id_index on mail_received\n> > (cost=0.00..3.20 rows=1 width=4)\n> > Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n> > Filter: (mailhost = '129.240.10.47'::inet)\n> > (6 rows)\n> > -------------------------------------------------------------------------------\n> > \n> Look again at the estimated costs of those two query plans. You haven't \n> gained anything there. Try this out:\n> \n> EXPLAIN UPDATE mail\n> SET spamscore = '-5.026'\n> FROM mail_received mr\n> WHERE mail.mail_id = mr.mail_id AND mr.queue_id = '1GrxLs-0004N9-I1' ;\n> \n\nHaven't we? 
\n\n* In the statement with problems we got this:\nNested Loop  (cost=0.00..932360.78 rows=7184312 width=57)\n\n* In the ones I sent:\nNested Loop  (cost=0.00..6.54 rows=1 width=57)\nIndex Scan using mail_pkey on mail  (cost=3.20..6.52 rows=1 width=57)\n\n* And in the last one you sent me:\n------------------------------------------------------ \nNested Loop  (cost=0.00..6.53 rows=1 width=57)\n   ->  Index Scan using received_queue_id_index on mail_received mr\n(cost=0.00..3.20 rows=1 width=4)\n         Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n   ->  Index Scan using mail_pkey on mail  (cost=0.00..3.32 rows=1\nwidth=57)\n         Index Cond: (mail.mail_id = \"outer\".mail_id)\n(5 rows)\n------------------------------------------------------\n\nI cannot see the difference.\n\nregards,\n-- \nRafael Martinez, <[email protected]>\nCenter for Information Technology Services\nUniversity of Oslo,   Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n\n\n",
"msg_date": "Wed, 06 Dec 2006 21:29:26 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with an update-from statement and pg-8.1.4"
},
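A minimal side-by-side sketch of the point above, using only the mail and mail_received tables and values already shown in this thread: repeating the target table in the FROM list creates an unconstrained self-join that updates every row, while referencing the target table directly in the WHERE clause restricts the update to the single matching row.

------------------------------------------------------------
-- Unintended: "mail" is both the UPDATE target and an extra FROM entry,
-- so nothing ties the target rows to the join and every row gets the new
-- spamscore (the plan above estimated millions of rows).
UPDATE mail SET spamscore = '-5.026'
  FROM mail m, mail_received mr
 WHERE mr.mail_id = m.mail_id
   AND mr.queue_id = '1GrxLs-0004N9-I1'
   AND mr.mailhost = '129.240.10.47';

-- Intended: join the target table itself to mail_received, so only the
-- row belonging to that queue_id/mailhost is touched.
UPDATE mail SET spamscore = '-5.026'
  FROM mail_received mr
 WHERE mail.mail_id = mr.mail_id
   AND mr.queue_id = '1GrxLs-0004N9-I1'
   AND mr.mailhost = '129.240.10.47';
------------------------------------------------------------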
{
"msg_contents": "Rafael Martinez wrote:\n> On Wed, 2006-12-06 at 14:19 -0600, Erik Jones wrote:\n> \n>> Rafael Martinez wrote:\n>> \n>>> On Wed, 2006-12-06 at 14:55 -0500, Ted Allen wrote:\n>>> \n>>> \n>>>> Stephan Szabo wrote:\n>>>> \n>>>> \n>>>>> On Wed, 6 Dec 2006, Rafael Martinez wrote:\n>>>>> \n>>>>> \n>>>>>> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM mail m,\n>>>>>> mail_received mr where mr.mail_id = m.mail_id AND mr.queue_id =\n>>>>>> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n>>>>>> \n>>>>>> \n>>>>>> \n>>>>> I don't think this statement does what you expect. You're ending up with\n>>>>> two copies of mail in the above one as \"mail\" and one as \"m\". You probably\n>>>>> want to remove the mail m in FROM and use mail rather than m in the\n>>>>> where clause.\n>>>>>\n>>>>> \n>>>>> \n>>>>> \n>>>> Worse yet I think your setting \"spamcore\" for EVERY row in mail to \n>>>> '-5.026'. The above solution should fix it though.\n>>>>\n>>>> -- Ted\n>>>>\n>>>> \n>>>> \n>>> Thanks for the answers. I think the 'problem' is explain in the\n>>> documentation:\n>>>\n>>> \"fromlist\n>>>\n>>> A list of table expressions, allowing columns from other tables to\n>>> appear in the WHERE condition and the update expressions. This is\n>>> similar to the list of tables that can be specified in the FROMClause of\n>>> a SELECT statement. Note that the target table must not appear in the\n>>> fromlist, unless you intend a self-join (in which case it must appear\n>>> with an alias in the fromlist)\". \n>>>\n>>> And as you said, we can not have 'mail m' in the FROM clause. I have\n>>> contacted the developers and they will change the statement. I gave then\n>>> these 2 examples:\n>>>\n>>> -------------------------------------------------------------------------------\n>>> mailstats=# EXPLAIN update mail SET spamscore = '-5.026' FROM\n>>> mail_received mr where mr.mail_id = mail.mail_id AND mr.queue_id =\n>>> '1GrxLs-0004N9-I1' and mr.mailhost = '129.240.10.47';\n>>> QUERY PLAN\n>>> ------------------------------------------------------------------------------------------------------\n>>> Nested Loop (cost=0.00..6.54 rows=1 width=57)\n>>> -> Index Scan using received_queue_id_index on mail_received mr\n>>> (cost=0.00..3.20 rows=1 width=4)\n>>> Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n>>> Filter: (mailhost = '129.240.10.47'::inet)\n>>> -> Index Scan using mail_pkey on mail (cost=0.00..3.32 rows=1\n>>> width=57)\n>>> Index Cond: (\"outer\".mail_id = mail.mail_id)\n>>> (6 rows)\n>>>\n>>> mailstats=# explain update mail SET spamscore = '-5.026' where mail_id\n>>> = (select mail_id from mail_received where queue_id = '1GrxLs-0004N9-I1'\n>>> and mailhost = '129.240.10.47');\n>>> QUERY PLAN\n>>> -----------------------------------------------------------------------------------------------------\n>>> Index Scan using mail_pkey on mail (cost=3.20..6.52 rows=1 width=57)\n>>> Index Cond: (mail_id = $0)\n>>> InitPlan\n>>> -> Index Scan using received_queue_id_index on mail_received\n>>> (cost=0.00..3.20 rows=1 width=4)\n>>> Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n>>> Filter: (mailhost = '129.240.10.47'::inet)\n>>> (6 rows)\n>>> -------------------------------------------------------------------------------\n>>> \n>>> \n>> Look again at the estimated costs of those two query plans. You haven't \n>> gained anything there. 
Try this out:\n>>\n>> EXPLAIN UPDATE mail\n>> SET spamscore = '-5.026'\n>> FROM mail_received mr\n>> WHERE mail.mail_id = mr.mail_id AND mr.queue_id = '1GrxLs-0004N9-I1' ;\n>>\n>> \n>\n> Haven't we? \n>\n> * In the statement with problems we got this:\n> Nested Loop (cost=0.00..932360.78 rows=7184312 width=57)\n>\n> * In the ones I sent:\n> Nested Loop (cost=0.00..6.54 rows=1 width=57)\n> Index Scan using mail_pkey on mail (cost=3.20..6.52 rows=1 width=57)\n>\n> * And in the last one you sent me:\n> ------------------------------------------------------ \n> Nested Loop (cost=0.00..6.53 rows=1 width=57)\n> -> Index Scan using received_queue_id_index on mail_received mr\n> (cost=0.00..3.20 rows=1 width=4)\n> Index Cond: ((queue_id)::text = '1GrxLs-0004N9-I1'::text)\n> -> Index Scan using mail_pkey on mail (cost=0.00..3.32 rows=1\n> width=57)\n> Index Cond: (mail.mail_id = \"outer\".mail_id)\n> (5 rows)\n> ------------------------------------------------------\n>\n> I can not see the different.\n>\n> regards,\n> \nAh, sorry, I was just looking at the two that you sent in your last \nmessage thinking that they were 'old' and 'new', not both 'new'. My bad...\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 06 Dec 2006 14:31:06 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with an update-from statement and pg-8.1.4"
}
] |
[
{
"msg_contents": "Joshua D. Drake wrote:\n> I agree. I have many people that want to purchase a SAN because someone\n> told them that is what they need... Yet they can spend 20% of the cost\n> on two external arrays and get incredible performance...\n>\n> We are seeing great numbers from the following config:\n>\n> (2) HP MS 30s (loaded) dual bus\n> (2) HP 6402, one connected to each MSA.\n>\n> The performance for the money is incredible.\n\nThis raises some questions for me. I just budgeted for a san because I\nneed lots of storage for email/web systems and don't want to have a\nbunch of local disks in each server requiring each server to have it's\nown spares. The idea is that I can have a platform wide disk chassis\nwhich requires only one set of spares and run my linux hosts diskless.\n Since I am planing on buying the sanraid iscsi solution I would simply\nboot hosts with pxelinux and pass a kernel/initrd image that would mount\nthe iscsi target as root. If a server fails, I simply change the mac\naddress in the bootp server then bring up a spare in it's place.\n\nNow that I'm reading these messages about disk performance and sans,\nit's got me thinking that this solution is not ideal for a database\nserver. Also, it appears that there are several people on the list that\nhave experience with sans so perhaps some of you can fill in some blanks\nfor me:\n\n1. Is iscsi a decent way to do a san? How much performance do I loose\n vs connecting the hosts directly with a fiber channel controller?\n\n2. Would it be better to omit my database server from the san (or at\nleast the database storage) and stick with local disks? If so what\ndisks/controller card do I want? I use dell servers for everything so\nit would be nice if the recommendation is a dell system, but doesn't\nneed to be. Overall I'm not very impressed with the LSI cards, but I'm\ntold the new ones are much better.\n\n3. Anyone use the sanrad box? Is it any good? Seems like\nconsolidating disk space and disk spares platform wide is good idea, but\nI've not used a san before so I'm nervous about it.\n\n4. What would be the performance of SATA disks in a JBOD? If I got 8\n200g disks and made 4 raid one mirrors in the jbod then striped them\ntogether in the sanraid would that perform decent? Is there an\nadvantage splitting up raid 1+0 across the two boxes, or am I better\ndoing raid 1+0 in the jbod and using the sanrad as an iscsi translator?\n\nThats enough questions for now....\n\nThanks,\nschu\n\n",
"msg_date": "Wed, 06 Dec 2006 11:12:54 -0900",
"msg_from": "Matthew Schumacher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disk storage and san questions (was File Systems Compared)"
},
{
"msg_contents": "I was working on a project that was considering using a Dell/EMC (dell's\nrebranded emc hardware) and here's some thoughts on your questions based\non that.\n\n> 1. Is iscsi a decent way to do a san? How much performance do I\nloose\n> vs connecting the hosts directly with a fiber channel controller?\nIt's cheaper, but if you want any sort of reasonable performance, you'll\nneed a dedicated gigabit network. I'd highly recommend a dedicated\nswitch too, not just vlan. You should also have dual nics, and use one\ndedicated to iSCSI. Most all poweredges come with dual nics these days.\n\n> \n> 2. Would it be better to omit my database server from the san (or at\n> least the database storage) and stick with local disks? If so what\n> disks/controller card do I want? I use dell servers for everything so\n> it would be nice if the recommendation is a dell system, but doesn't\n> need to be. Overall I'm not very impressed with the LSI cards, but\nI'm\n> told the new ones are much better.\nThe new dell perc4, and perc5 to more extent, are reasonable performers\nin my experience. However, this depends on the performance needs of your\ndatabase. You should be able to at least get better performance than\nonboard storage (Poweredges max out at 6 disks- 8 if you go 2.5\" SATA,\nbut I don't recommend those for reliability/performance reasons). If you\nget one of the better Dell/EMC combo sans, you can allocate a raid pool\nfor your database and probably saturate the iSCSI interface. Next step\nmight be the MD1000 15 disk SAS enclosure with Perc5/e cards if you're\nsticking with dell, or step up to multi-homed FC cards. (btw- you can\nsplit the MD1000 in half and share it across two servers, since it has\ntwo scsi cards. You can also daisy chain up to three of them for a total\nof 45 disks). Either way, take a good look at what the SAN chassis can\nsupport in terms of IO bandwidth- cause once you use it up, there's no\nmore to allocate to the DB. \n\n> \n> 3. Anyone use the sanrad box? Is it any good? Seems like\n> consolidating disk space and disk spares platform wide is good idea,\nbut\n> I've not used a san before so I'm nervous about it.\n> \nIf you haven't used a san, much less an enterprise grade one, then I'd\nbe very nervous about them too. Optimizing SAN performance is much more\ndifficult than attached storage simply due to the complexity factor.\nDefinitely plan on a pretty steep learning curve, especially for\nsomething like EMC and a good number of servers. \n\nIMO, the big benefit to SAN is storage management and utilization, not\nnecessarily performance (you can get decent performance if you buy the\nright hardware and tune it correctly). To your points- you can reduce\nthe number of hot spares, and allocate storage much more efficiently.\nAlso, you can allocate storage pools based on performance needs- slow\nSATA 500Gb drives for archive, fast 15K SAS for db, etc. There's some\nnice failover options too, as you mentioned boot from san allows you to\nswap hardware, but I would get a demonstration from the vendor of this\nworking with your hardware/os setup (including booting up the cold spare\nserver). I know this was a big issue in some of the earlier Dell/EMC\nhardware.\n\nSorry for the long post, but hopefully some of the info will be useful\nto you.\n\nBucky\n",
"msg_date": "Thu, 7 Dec 2006 19:07:33 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk storage and san questions (was File Systems Compared)"
}
] |
[
{
"msg_contents": "\n Hello,\n\n We're planning new server or two for PostgreSQL and I'm wondering Intel\nCore 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?\n\n When I look through hardware sites Core 2 wins. But I believe those tests\nmostly are being done in 32 bits. Does the picture change in 64 bits?\n\n And I also remember that in PostgreSQL Opteron earlier had huge advantage\nover older Xeons. But did Intel manage to change picture now?\n\n Thanks,\n\n Mindaugas\n\n",
"msg_date": "Thu, 7 Dec 2006 12:18:21 +0200",
"msg_from": "\"Mindaugas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Core 2 or Opteron"
},
{
"msg_contents": "These benchmarks are all done using 64 bit linux:\nhttp://tweakers.net/reviews/646\n\nBest regards,\n\nArjen\n\nOn 7-12-2006 11:18 Mindaugas wrote:\n> \n> Hello,\n> \n> We're planning new server or two for PostgreSQL and I'm wondering Intel\n> Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?\n> \n> When I look through hardware sites Core 2 wins. But I believe those tests\n> mostly are being done in 32 bits. Does the picture change in 64 bits?\n> \n> And I also remember that in PostgreSQL Opteron earlier had huge advantage\n> over older Xeons. But did Intel manage to change picture now?\n> \n> Thanks,\n> \n> Mindaugas\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n",
"msg_date": "Thu, 07 Dec 2006 11:29:48 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core 2 or Opteron"
},
{
"msg_contents": "> We're planning new server or two for PostgreSQL and I'm wondering Intel\n> Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?\n>\n> When I look through hardware sites Core 2 wins. But I believe those tests\n> mostly are being done in 32 bits. Does the picture change in 64 bits?\n\nWe just migrated from a 4-way opteron @ 2 GHz with 8 GB ram to a DL380\nG5 with a 4-way woodcrest @ 3 GHz and 16 GB ram. It was like night and\nday, system load dropped, not just quite a bit, but almost by a factor\nof 100 in worst case scenarios.\n\nGoing from a 64 MB diskcontroller to a 256 MB ditto probably helped\nsome and so did a speedup from 2 -> 3 GHz, but overall it seems the\nnew woodcrest cpu's feel at home doing db-stuff.\n\nThis is on FreeBSD 6.2 RC1 and postgresql 7.4.14.\n\n> And I also remember that in PostgreSQL Opteron earlier had huge advantage\n> over older Xeons. But did Intel manage to change picture now?\n\nThat was pre-woodcrest, aka. nocona and before. Horrible and the\nreason I went for opteron to begin with. But AMD probably wont sit\nidle.\n\nThe link posted in another reply illustrates the current situation quite well.\n\nregards\nClaus\n",
"msg_date": "Thu, 7 Dec 2006 11:44:48 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core 2 or Opteron"
},
{
"msg_contents": "\n\n> These benchmarks are all done using 64 bit linux:\n> http://tweakers.net/reviews/646\n\n I see. Thanks.\n\n Now about 2 core vs 4 core Woodcrest. For HP DL360 I see similarly priced \ndual core 5160@3GHz and four core [email protected]. According to article's \nscaling data PostgreSQL performance should be similar (1.86GHz * 2 * 80% = \n~3GHz). And quad core has slightly slower FSB (1066 vs 1333).\n\n So it looks like more likely dual core 5160 Woodrest is the way to go if I \nwant \"ultimate\" performance on two sockets?\n Besides that I think it should consume a bit less power!?\n\n Mindaugas\n\n",
"msg_date": "Thu, 7 Dec 2006 13:05:24 +0200",
"msg_from": "\"Mindaugas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Core 2 or Opteron"
},
{
"msg_contents": "On 7-12-2006 12:05 Mindaugas wrote:\n> Now about 2 core vs 4 core Woodcrest. For HP DL360 I see similarly \n> priced dual core 5160@3GHz and four core [email protected]. According to \n> article's scaling data PostgreSQL performance should be similar (1.86GHz \n> * 2 * 80% = ~3GHz). And quad core has slightly slower FSB (1066 vs 1333).\n> \n> So it looks like more likely dual core 5160 Woodrest is the way to go \n> if I want \"ultimate\" performance on two sockets?\n> Besides that I think it should consume a bit less power!?\n\nI think that's the better choice yes. I've seen the X5355 (quad core \n2.66Ghz) in work and that one is faster than the 5160 we tested. But its \nnot as much faster as the extra ghz' could imply, so the 5320 would very \nlikely not outperform the 5160. At least not in our postgresql benchmark.\nBesides that you end up with a slower FSB for more cores (1333 / 2 = 666 \nper core, 1066 / 4 = 266 per core!) while there will be more traffic \nsince the seperate \"dual cores\" on the quad core communicate via the bus \nand there are more cores so there is also in an absolute sence more \ncache coherency traffic...\n\nSo I'd definitely go with the 5160 or perhaps just the 5150 if the \nsavings can allow for better I/O or more memory.\n\nBest regards,\n\nArjen\n",
"msg_date": "Thu, 07 Dec 2006 12:18:58 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Core 2 or Opteron"
}
] |
[
{
"msg_contents": "\n\n\n\n\n07/12/2006 04:31\n\nSQL_CALC_FOUND_ROWS in POSTGRESQL\n\nIn mysqln i m using the command SQL_CALC_FOUND_ROWS in follow sintax.\nSELECT SQL_CALC_FOUND_ROWS name, email, tel FROM mytable WHERE name\n<> '' LIMIT 0, 10\nto have the recorset data.\nand\nSELECT FOUND_ROWS();\nto have the total of registers found.\n\nI dont want to use the command count(*), because the performance will\nfall down, depending of the quantyt of tables and \"joins\".\n\nThe Data base postgresql have something similar ???\n\n\n---------------------------------------------------------------------------------------------------\n\n07/12/2006 04:31\n\nSQL_CALC_FOUND_ROWS no POSTGRESQL\nDúvida NINJA no POSTGRESQL\nNo mysql utilizo o comando SQL_CALC_FOUND_ROWS na seguinte sintax\nSELECT SQL_CALC_FOUND_ROWS nome, email, telefone FROM tabela WHERE nome\n<> '' LIMIT 0, 10\npara obter o meu recordset\ne\nSELECT FOUND_ROWS();\npara obter o total de resgitros que realmente existem em minha tabela\ncondicionado pelo WHERE, sem ser limitado pelo LIMIT.\n\nNão quero usar o count(*) pois o desempenho cai dependendo da\nquantidade de tabelas selecionadas e quantidade de registros.\n\n\nO postgreSQL possui algo similar? Caso sim pode me informar qual e\nfornecer um exemplo. \n\n\n\n",
"msg_date": "Thu, 07 Dec 2006 11:19:15 -0200",
"msg_from": "Marcos Borges <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can helpe me."
},
{
"msg_contents": "Marcos Borges wrote:\n> 07/12/2006 04:31\n> *SQL_CALC_FOUND_ROWS in POSTGRESQL*\n> \n> In mysqln i m using the command SQL_CALC_FOUND_ROWS in follow sintax.\n> SELECT SQL_CALC_FOUND_ROWS name, email, tel FROM mytable WHERE name <> \n> '' LIMIT 0, 10\n> to have the recorset data.\n> and\n> SELECT FOUND_ROWS();\n> to have the total of registers found.\n> \n> I dont want to use the command count(*), because the performance will \n> fall down, depending of the quantyt of tables and \"joins\".\n> \n> The Data base postgresql have something similar ???\n\nNope, you're out of luck sorry. That's a mysql-ism and I doubt postgres \nwill ever include something similar.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Mon, 11 Dec 2006 14:33:22 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can helpe"
},
{
"msg_contents": "On Mon, 2006-12-11 at 14:33 +1100, Chris wrote:\n> Marcos Borges wrote:\n> > 07/12/2006 04:31\n> > *SQL_CALC_FOUND_ROWS in POSTGRESQL*\n> > \n> > In mysqln i m using the command SQL_CALC_FOUND_ROWS in follow sintax.\n> > SELECT SQL_CALC_FOUND_ROWS name, email, tel FROM mytable WHERE name <> \n> > '' LIMIT 0, 10\n> > to have the recorset data.\n> > and\n> > SELECT FOUND_ROWS();\n> > to have the total of registers found.\n> > \n> > I dont want to use the command count(*), because the performance will \n> > fall down, depending of the quantyt of tables and \"joins\".\n> > \n> > The Data base postgresql have something similar ???\n> \n> Nope, you're out of luck sorry. That's a mysql-ism and I doubt postgres \n> will ever include something similar.\n\nYour language will have a similar binding. Something like pg_numrows.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Sun, 10 Dec 2006 20:04:38 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> On Mon, 2006-12-11 at 14:33 +1100, Chris wrote:\n>> Marcos Borges wrote:\n>>> 07/12/2006 04:31\n>>> *SQL_CALC_FOUND_ROWS in POSTGRESQL*\n>>>\n>>> In mysqln i m using the command SQL_CALC_FOUND_ROWS in follow sintax.\n>>> SELECT SQL_CALC_FOUND_ROWS name, email, tel FROM mytable WHERE name <> \n>>> '' LIMIT 0, 10\n>>> to have the recorset data.\n>>> and\n>>> SELECT FOUND_ROWS();\n>>> to have the total of registers found.\n>>>\n>>> I dont want to use the command count(*), because the performance will \n>>> fall down, depending of the quantyt of tables and \"joins\".\n>>>\n>>> The Data base postgresql have something similar ???\n>> Nope, you're out of luck sorry. That's a mysql-ism and I doubt postgres \n>> will ever include something similar.\n> \n> Your language will have a similar binding. Something like pg_numrows.\n\nI guess they are similar but also not really :)\n\nThe SQL_CALC_FOUND_ROWS directive in mysql will run the same query but \nwithout the limit.\n\nIt's the same as doing a select count(*) type query using the same \nclauses, but all in one query instead of two.\n\nIt doesn't return any extra rows on top of the limit query so it's \nbetter than using pg_numrows which runs the whole query and returns it \nto php (in this example).\n\n\nTheir docs explain it:\n\nhttp://dev.mysql.com/doc/refman/4.1/en/information-functions.html\n\nSee \"FOUND_ROWS()\"\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Mon, 11 Dec 2006 15:36:12 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> Their docs explain it:\n> http://dev.mysql.com/doc/refman/4.1/en/information-functions.html\n> See \"FOUND_ROWS()\"\n\nSounds like a pretty ugly crock ...\n\nThe functionality as described is to let you fetch only the first N\nrows, and then still find out the total number of rows that could have\nbeen returned. You can do that in Postgres with a cursor:\n\n\tDECLARE c CURSOR FOR SELECT ... (no LIMIT here);\n\tFETCH n FROM c;\n\tMOVE FORWARD ALL IN c;\n\t-- then figure the sum of the number of rows fetched and the\n\t-- rows-moved count reported by MOVE\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Dec 2006 23:57:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can "
},
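Spelled out against the mytable example from the first post (a sketch only; a cursor that is not declared WITH HOLD has to live inside a transaction block): the total row count is the number of rows returned by FETCH plus the count reported in the command tag of MOVE.

------------------------------------------------------------
BEGIN;
DECLARE c CURSOR FOR
    SELECT name, email, tel FROM mytable WHERE name <> '';
FETCH 10 FROM c;        -- the first "page" of rows for display
MOVE FORWARD ALL IN c;  -- command tag, e.g. "MOVE 4321", gives the rest
CLOSE c;
COMMIT;
-- total matching rows = rows fetched (<= 10) + count reported by MOVE
------------------------------------------------------------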
{
"msg_contents": "Chris wrote:\n\n> It's the same as doing a select count(*) type query using the same \n> clauses, but all in one query instead of two.\n> \n> It doesn't return any extra rows on top of the limit query so it's \n> better than using pg_numrows which runs the whole query and returns it \n> to php (in this example).\n> \n> \n> Their docs explain it:\n> \n> http://dev.mysql.com/doc/refman/4.1/en/information-functions.html\n> \n> See \"FOUND_ROWS()\"\n> \n\nNote that from the same page:\n\n\"If you are using SELECT SQL_CALC_FOUND_ROWS, MySQL must calculate how \nmany rows are in the full result set. However, this is faster than \nrunning the query again without LIMIT, because the result set need not \nbe sent to the client.\"\n\nSo it is not as cost-free as it would seem - the CALC step is \nessentially doing \"SELECT count(*) FROM (your-query)\" in addition to \nyour-query-with-the-limit.\n\nI don't buy the \"its cheap 'cause nothing is returned to the client\" \nbit, because 'SELECT count(*) ...' returns only 1 tuple of 1 element to \nthe client anyway. On the face of it, it *looks* like you save an extra \nset of parse, execute, construct (trivially small) resultset calls - but \n'SELECT FOUND_ROWS()' involves that set of steps too, so I'm not \nentirely convinced that doing a 2nd 'SELECT count(*)...' is really any \ndifferent in impact.\n\nCheers\n\nMark\n\n\n\n\n",
"msg_date": "Mon, 11 Dec 2006 18:49:56 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can"
},
{
"msg_contents": "Mark Kirkwood wrote:\n> Chris wrote:\n> \n>> It's the same as doing a select count(*) type query using the same \n>> clauses, but all in one query instead of two.\n>>\n>> It doesn't return any extra rows on top of the limit query so it's \n>> better than using pg_numrows which runs the whole query and returns it \n>> to php (in this example).\n>>\n>>\n>> Their docs explain it:\n>>\n>> http://dev.mysql.com/doc/refman/4.1/en/information-functions.html\n>>\n>> See \"FOUND_ROWS()\"\n>>\n> \n> Note that from the same page:\n> \n> \"If you are using SELECT SQL_CALC_FOUND_ROWS, MySQL must calculate how \n> many rows are in the full result set. However, this is faster than \n> running the query again without LIMIT, because the result set need not \n> be sent to the client.\"\n> \n> So it is not as cost-free as it would seem - the CALC step is \n> essentially doing \"SELECT count(*) FROM (your-query)\" in addition to \n> your-query-with-the-limit.\n> \n> I don't buy the \"its cheap 'cause nothing is returned to the client\" \n> bit, because 'SELECT count(*) ...' returns only 1 tuple of 1 element to \n> the client anyway. On the face of it, it *looks* like you save an extra \n> set of parse, execute, construct (trivially small) resultset calls - but \n> 'SELECT FOUND_ROWS()' involves that set of steps too, so I'm not \n> entirely convinced that doing a 2nd 'SELECT count(*)...' is really any \n> different in impact.\n\nSorry - I created a bit of confusion here. It's not doing the count(*), \nit's doing the query again without the limit.\n\nie:\n\nselect SQL_CALC_FOUND_ROWS userid, username, password from users limit 10;\n\nwill do:\n\nselect userid, username, password from users limit 10;\n\nand calculate this:\n\nselect userid, username, password from users;\n\nand tell you how many rows that will return (so you can call \n'found_rows()').\n\n\nthe second one does do a lot more because it has to send the results \nacross to the client program - whether the client uses that info or not \ndoesn't matter.\n\n\nThe OP didn't want to have to change to using two different queries:\nselect count(*) from table;\nselect * from table limit 10 offset 0;\n\n\nJosh's comment was to do the query again without the limit:\nselect userid, username, password from users;\n\nand then use something like http://www.php.net/pg_numrows to work out \nthe number of results the query would have returned.. but that would \nkeep the dataset in memory and eventually with a large enough dataset \ncause a problem.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Mon, 11 Dec 2006 17:01:11 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can"
},
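For completeness, the plain two-query version being discussed looks like this against the mytable example from the first post; the ORDER BY is an addition here, since LIMIT without one returns an unpredictable subset of the matching rows.

------------------------------------------------------------
-- Total number of matching rows, for the pager:
SELECT count(*) FROM mytable WHERE name <> '';

-- The page actually displayed (same WHERE clause, plus a deterministic order):
SELECT name, email, tel
  FROM mytable
 WHERE name <> ''
 ORDER BY name
 LIMIT 10 OFFSET 0;
------------------------------------------------------------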
{
"msg_contents": "* Chris <[email protected]> [061211 07:01]:\n> select SQL_CALC_FOUND_ROWS userid, username, password from users limit 10;\n> \n> will do:\n> \n> select userid, username, password from users limit 10;\n> \n> and calculate this:\n> \n> select userid, username, password from users;\n> \n> and tell you how many rows that will return (so you can call 'found_rows()').\n> \n> \n> the second one does do a lot more because it has to send the results across to the client program - whether the client uses that info or not doesn't matter.\nNot really. Sending the data to the client is usually (if you are not\nconnected via some small-bandwidth connection) a trivial cost compared\nto calculating the number of rows.\n\n(Our tables involve 100Ms of rows, while the net connectivity is a\nprivate internal Gigabit net, returning the data seems never to be an\nissue. Reading it from the disc, selecting the rows are issues. Not\nsending the data.)\n\nActually, if you think that sending the data is an issue, PG offers\nthe more generic concept of cursors.\n\nAndreas\n",
"msg_date": "Mon, 11 Dec 2006 07:42:14 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can"
},
{
"msg_contents": "On m�n, 2006-12-11 at 17:01 +1100, Chris wrote:\n> Mark Kirkwood wrote:\n> > Chris wrote:\n> > \n> >> It's the same as doing a select count(*) type query using the same \n> >> clauses, but all in one query instead of two.\n> >>\n> >> It doesn't return any extra rows on top of the limit query so it's \n> >> better than using pg_numrows which runs the whole query and returns it \n> >> to php (in this example).\n> >>\n> >>\n> >> Their docs explain it:\n> >>\n> >> http://dev.mysql.com/doc/refman/4.1/en/information-functions.html\n> >>\n> >> See \"FOUND_ROWS()\"\n> >>\n> > \n> > Note that from the same page:\n> > \n> > \"If you are using SELECT SQL_CALC_FOUND_ROWS, MySQL must calculate how \n> > many rows are in the full result set. However, this is faster than \n> > running the query again without LIMIT, because the result set need not \n> > be sent to the client.\"\n\nyes but not any faster than a \nselect count(*) from (full query without LIMIT)\n\nso the only advantage to the SQL_CALC_FOUND_ROWS thingie\nis that instead of doing\n select count(*) from full-query\n select * from query-with-LIMIT\nwhich will do the query twice, but possibly with\ndifferent optimisations,\n\nyou would do a non-standard\n select SQL_CALC_FOUND_ROWS query-with-LIMIT\n select FOUND_ROWS()\nwhich will do one full query, without any\nLIMIT optimisation, but with the same\nnumber of round-trips, and same amount of\ndata over the line.\n\nthe only case where the second way may be\nmore effective, is if no LIMIT optimisation\ncan be made, and where the dataset is larger\nthan file buffer space, so that there is no\neffect from caching.\n\ngnari\n\n\n",
"msg_date": "Mon, 11 Dec 2006 09:39:32 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL_CALC_FOUND_ROWS in POSTGRESQL / Some one can"
}
] |
[
{
"msg_contents": "\nI'm gearing up to do some serious investigation into performance for\nPostgreSQL with regard to our application. I have two issues that I've\nquestions about, and I'll address them in two seperate emails.\n\nThis email regards the tuning of work_mem.\n\nI'm planning on going through all of the queries our application does,\nunder various load scenarios and approaching each performance issue as\nit appears.\n\nWhat I'm fuzzy on is how to discretely know when I'm overflowing\nwork_mem? Obviously, if work_mem is exhausted by a particular\nquery, temp files will be created and performance will begin to suck,\nbut it would be nice to have some more information -- how large was\nthe resultant temp file, for example.\n\nDoes the creation of a temp file trigger any logging? I've yet to\nsee any, but we may not have hit any circumstances where work_mem\nwas exhausted. I've been looking through the docs at the various\npg_stat* views and functions, but it doesn't look as if there's\nanything in there about this.\n\nThat leads to my other question. Assuming I've got lots of\nconnections (which I do), how can I determine if work_mem is too\nhigh? Do server processes allocated it even if they don't actually\nuse it? Is the only way to find out to reduce it and see when it\nstarts to be a problem? If so, that leads back to my first question:\nhow can I be sure whether temp files were created or not?\n\nMy goal is to set work_mem as small as is possible for the most\ncommon queries, then force the developers to use \"set work_mem to x\"\nto adjust it for big queries.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Thu, 7 Dec 2006 11:35:31 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice on selecting good values for work_mem?"
},
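The "set work_mem to x" approach described above can be scoped per session or, more safely, per transaction; a small sketch (the 64 MB figure is only an illustrative value):

------------------------------------------------------------
-- Session-wide override for a known-heavy query:
SET work_mem = 65536;   -- interpreted as kB, i.e. 64 MB
-- ... run the big sorting/hashing query here ...
RESET work_mem;

-- Or limited to one transaction, so it cannot leak into later work:
BEGIN;
SET LOCAL work_mem = 65536;  -- reverts automatically at COMMIT/ROLLBACK
-- ... run the big query here ...
COMMIT;
------------------------------------------------------------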
{
"msg_contents": "Bill Moran <[email protected]> writes:\n> Does the creation of a temp file trigger any logging?\n\nNo; but it wouldn't be hard to add some if you wanted. I'd do it at\ndeletion, not creation, so you could log the size the file reached.\nSee FileClose() in src/backend/storage/file/fd.c.\n\n> That leads to my other question. Assuming I've got lots of\n> connections (which I do), how can I determine if work_mem is too\n> high?\n\nWhen you start swapping, you have a problem --- so watch vmstat or\nlocal equivalent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2006 12:02:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on selecting good values for work_mem? "
},
{
"msg_contents": "* Bill Moran ([email protected]) wrote:\n> What I'm fuzzy on is how to discretely know when I'm overflowing\n> work_mem? Obviously, if work_mem is exhausted by a particular\n> query, temp files will be created and performance will begin to suck,\n\nI don't believe this is necessairly *always* the case. There are\ninstances in PostgreSQL where it will just continue to allocate memory\nbeyond the work_mem setting. This is usually due to poor statistics\n(you changed the data in the table dramatically and havn't run analyze,\nor you never ran analyze on the table at all, or the statistics\ngathering values are set too low to capture enough information about\nthe data, etc). It would nice if it was possible to have this detected\nand logged, or similar. Additionally, work_mem isn't actually a\nper-query thing, aiui, it's more like a per-node in the planner thing.\nThat is to say that if you have multiple sorts going on, or a sort and a\nhash, that *both* of those expect to be able to use up to work_mem\namount of memory.\n\nAlso, another point you might want to consider how to handle is that\nwork_mem has no bearing on libpq and I don't recall there being a way to\nconstrain libpq's memory usage. This has been an issue for me just\ntoday when a forgot a couple parameters to a join which caused a\ncartesean product result and ended up running the box out of memory.\nSure, it's my fault, and unlikely to happen in an application, but it\nstill sucks. :) It also managed to run quickly enough that I didn't\nnotice what was happening. :/ Of course, the server side didn't need\nmuch memory at all to generate that result. Also, libpq stores\neverything in *it's* memory before passing it to the client. An example\nscenario of this being kind of an issue is psql, you need double the\nmemory size of a given result because the result is first completely\ngrabbed and stored in libpq and then sent to your pager (eg: less) which\nthen sucks it all into memory again. In applications (and I guess psql,\nthough I never think of it, and it'd be nice to have as a configurable\noption if it isn't already...) you can use cursors to limit the amount\nof memory libpq uses.\n\nAs these are new things (both the temp file creation logging and the\nwork_mem overflow detection, I believe), this discussion is probably\nmore appropriate for -hackers.\n\n> That leads to my other question. Assuming I've got lots of\n> connections (which I do), how can I determine if work_mem is too\n> high? Do server processes allocated it even if they don't actually\n> use it? Is the only way to find out to reduce it and see when it\n> starts to be a problem? If so, that leads back to my first question:\n> how can I be sure whether temp files were created or not?\n\nYeah, look for swappiness... It'd be nice to be able to get memory\nstatistics on queries which have been run though...\n\n> My goal is to set work_mem as small as is possible for the most\n> common queries, then force the developers to use \"set work_mem to x\"\n> to adjust it for big queries.\n\nSounds like an excellent plan. Be careful though, work_mem settings can\naffect query plans and they may discover that if set high enough the\nplanner will, for example, do a hashjoin which is much faster than\nsorting and merge-joining, but takes alot of memory... They may say\n\"hey, I like it being fast\" but not consider what happens when alot of\nthose queries run at once..\n\n\tThanks!\n\n\t\tStephen",
"msg_date": "Thu, 7 Dec 2006 13:16:22 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on selecting good values for work_mem?"
},
{
"msg_contents": "In response to Tom Lane <[email protected]>:\n\n> Bill Moran <[email protected]> writes:\n> > Does the creation of a temp file trigger any logging?\n> \n> No; but it wouldn't be hard to add some if you wanted. I'd do it at\n> deletion, not creation, so you could log the size the file reached.\n> See FileClose() in src/backend/storage/file/fd.c.\n\nIs this along the lines of what you were thinking? Is this acceptable\nto get pulled into the tree (maintaining local patches sucks ;) I've\nonly been using this patch a day and I'm already giddy about how much\nit helps tuning work memory sizes ...\n\n-- \nBill Moran\nCollaborative Fusion Inc.",
"msg_date": "Mon, 18 Dec 2006 16:18:05 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice on selecting good values for work_mem?"
},
{
"msg_contents": "In response to Stephen Frost <[email protected]>:\n\n> * Bill Moran ([email protected]) wrote:\n> > What I'm fuzzy on is how to discretely know when I'm overflowing\n> > work_mem? Obviously, if work_mem is exhausted by a particular\n> > query, temp files will be created and performance will begin to suck,\n> \n> I don't believe this is necessairly *always* the case. There are\n> instances in PostgreSQL where it will just continue to allocate memory\n> beyond the work_mem setting. This is usually due to poor statistics\n> (you changed the data in the table dramatically and havn't run analyze,\n> or you never ran analyze on the table at all, or the statistics\n> gathering values are set too low to capture enough information about\n> the data, etc). It would nice if it was possible to have this detected\n> and logged, or similar. Additionally, work_mem isn't actually a\n> per-query thing, aiui, it's more like a per-node in the planner thing.\n> That is to say that if you have multiple sorts going on, or a sort and a\n> hash, that *both* of those expect to be able to use up to work_mem\n> amount of memory.\n\nI'm aware of that. It's one of the reasons I asked about monitoring its\nusage.\n\nI mean, if I could be sure that each process only used work_mem amount of\nspace, it would be pretty easy to run some calculations and go to\nmanagement and say, \"these servers need X amount of RAM for optimal\nperformance ...\"\n\nAs it is, I'm trying to find the most complex queries and estimate how\nmany joins and sorts there are and how much that's going to add up to.\nIt'd be nice to be able to crank up the debugging and have postgresql\nsay:\nQUERY 0: total work_mem: aaaaaaaa bytes\n JOIN 0: xxxxx bytes\n JOIN 1: yyyyy bytes\n ...\n\nPerhaps it's in there somewhere ... I haven't experimented with cranking\nthe logging up to maximum yet. If it's missing, I'm hoping to have some\ntime to add it. Adding debugging to PostgreSQL is a pretty easy way to\nlearn how the code fits together ...\n\n> Also, another point you might want to consider how to handle is that\n> work_mem has no bearing on libpq and I don't recall there being a way to\n> constrain libpq's memory usage. This has been an issue for me just\n> today when a forgot a couple parameters to a join which caused a\n> cartesean product result and ended up running the box out of memory.\n> Sure, it's my fault, and unlikely to happen in an application, but it\n> still sucks. :) It also managed to run quickly enough that I didn't\n> notice what was happening. :/ Of course, the server side didn't need\n> much memory at all to generate that result. Also, libpq stores\n> everything in *it's* memory before passing it to the client. An example\n> scenario of this being kind of an issue is psql, you need double the\n> memory size of a given result because the result is first completely\n> grabbed and stored in libpq and then sent to your pager (eg: less) which\n> then sucks it all into memory again. In applications (and I guess psql,\n> though I never think of it, and it'd be nice to have as a configurable\n> option if it isn't already...) you can use cursors to limit the amount\n> of memory libpq uses.\n\nIn our case, the database servers are always dedicated, and the application\nside always runs on a different server. This is both a blessing and a\ncurse: On the one hand, I don't have to worry about any client apps eating\nup RAM on the DB server. 
On the other hand, last week we found a place\nwhere a query with lots of joins was missing a key WHERE clause, it was\npulling something like 10X the number of records it needed, then limiting\nit further on the client side. Optimizing this sort of thing is something\nI enjoy.\n\n> As these are new things (both the temp file creation logging and the\n> work_mem overflow detection, I believe), this discussion is probably\n> more appropriate for -hackers.\n\nTrue. It started out here because I wasn't sure that the stuff didn't\nalready exist, and was curious how others were doing it.\n\nWhen I've had some more opportunity to investigate work_mem monitoring,\nI'll start the discussion back up on -hackers.\n\n> > That leads to my other question. Assuming I've got lots of\n> > connections (which I do), how can I determine if work_mem is too\n> > high? Do server processes allocated it even if they don't actually\n> > use it? Is the only way to find out to reduce it and see when it\n> > starts to be a problem? If so, that leads back to my first question:\n> > how can I be sure whether temp files were created or not?\n> \n> Yeah, look for swappiness... It'd be nice to be able to get memory\n> statistics on queries which have been run though...\n> \n> > My goal is to set work_mem as small as is possible for the most\n> > common queries, then force the developers to use \"set work_mem to x\"\n> > to adjust it for big queries.\n> \n> Sounds like an excellent plan. Be careful though, work_mem settings can\n> affect query plans and they may discover that if set high enough the\n> planner will, for example, do a hashjoin which is much faster than\n> sorting and merge-joining, but takes alot of memory... They may say\n> \"hey, I like it being fast\" but not consider what happens when alot of\n> those queries run at once..\n\nWell ... as long as those kinds of issues exist, I'll have a job ;)\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Mon, 18 Dec 2006 16:43:05 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice on selecting good values for work_mem?"
},
{
"msg_contents": "On Mon, 2006-12-18 at 16:18 -0500, Bill Moran wrote: \n> In response to Tom Lane <[email protected]>:\n> \n> > Bill Moran <[email protected]> writes:\n> > > Does the creation of a temp file trigger any logging?\n> > \n> > No; but it wouldn't be hard to add some if you wanted. I'd do it at\n> > deletion, not creation, so you could log the size the file reached.\n> > See FileClose() in src/backend/storage/file/fd.c.\n> \n> Is this along the lines of what you were thinking? Is this acceptable\n> to get pulled into the tree (maintaining local patches sucks ;) I've\n> only been using this patch a day and I'm already giddy about how much\n> it helps tuning work memory sizes ...\n\nYou need to submit to patches, not here. Patch looks right, but needs\nsome extra things:\n\n- activate based upon a GUC called trace_temp_files (?) to be added in\nsrc/backend/utils/misc/guc.c - using a temp file is a problem only in\nthe eye of the beholder, so let the admin decide whether to show the\ninfo or not (default not)\n\n- level LOG not WARNING, with no hint message (?)\n\n- message should be something like \"temp file: size %s path: %s\" so we\ncan see where the file was created (there is another todo about creating\ntemp files in different locations)\n\n- add a trace point also for those who don't want to enable a parameter,\ndescribed here\nhttp://www.postgresql.org/docs/8.2/static/dynamic-trace.html\n\ne.g. PGTRACE1(temp__file__cleanup, filestats.st_size)\n\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Dec 2006 13:19:44 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on selecting good values for work_mem?"
}
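For what it's worth, the behaviour sketched in this patch discussion is essentially what later releases ship as the log_temp_files parameter (8.3 and up), so on those versions the same information is available without a local patch:

------------------------------------------------------------
-- In postgresql.conf, or SET by a superuser:
--   log_temp_files = 0      logs every temp file with its size at deletion
--   log_temp_files = 10240  logs only files larger than 10 MB (value in kB)
--   log_temp_files = -1     disables the logging (the default)
SET log_temp_files = 0;
------------------------------------------------------------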
] |
[
{
"msg_contents": "\nI'm gearing up to do some serious investigation into performance for\nPostgreSQL with regard to our application. I have two issues that I've\nquestions about, and I'll address them in two seperate emails.\n\nThis one regards tuning shared_buffers.\n\nI believe I have a good way to monitor database activity and tell when\na database grows large enough that it would benefit from more\nshared_buffers: if I monitor the blks_read column of pg_stat_database,\nit should increase very slowly if there is enough shared_buffer\nspace. When shared buffer space runs out, more disk read requests\nwill be required and this number will begin to climb.\n\nIf anyone sees a flaw in this approach, I'd be interested to hear it.\n\nThe other tuning issue with shared_buffers is how to tell if I'm\nallocating too much. For example, if I allocate 1G of RAM to\nshared buffers, and the entire database can fit in 100M, that 900M\nmight be better used as work_mem, or something else.\n\nI haven't been able to find anything regarding how much of the\nshared buffer space PostgreSQL is actually using, as opposed to\nsimply allocating.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Thu, 7 Dec 2006 11:42:39 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to determine if my setting for shared_buffers is too high?"
},
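A sketch of the monitoring query implied above; note that blks_read only counts requests PostgreSQL had to pass down to the kernel, so some of those "reads" may still be served from the OS page cache rather than the disk.

------------------------------------------------------------
SELECT datname,
       blks_read,
       blks_hit,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3)
           AS buffer_hit_ratio
  FROM pg_stat_database
 ORDER BY blks_read DESC;
------------------------------------------------------------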
{
"msg_contents": "Bill Moran <[email protected]> writes:\n> I haven't been able to find anything regarding how much of the\n> shared buffer space PostgreSQL is actually using, as opposed to\n> simply allocating.\n\nIn 8.1 and up, contrib/pg_buffercache/ would give you some visibility\nof this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Dec 2006 12:04:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to determine if my setting for shared_buffers is too high? "
},
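A sketch of what that looks like once the contrib module's SQL script has been installed in the database: unused buffers show up with a NULL relfilenode, so the first query answers "how much of the allocation is actually holding pages", and the second shows which relations in the current database those pages belong to.

------------------------------------------------------------
-- Buffers holding data vs. total buffers allocated:
SELECT count(relfilenode) AS buffers_in_use,
       count(*)           AS buffers_allocated
  FROM pg_buffercache;

-- Top consumers of shared_buffers in the current database:
SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c    ON c.relfilenode = b.relfilenode
  JOIN pg_database d ON d.oid = b.reldatabase
 WHERE d.datname = current_database()
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 10;
------------------------------------------------------------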
{
"msg_contents": "Remember that as you increase shared_buffers you might need to make the\nbgwriter more aggressive too.\n\nOn Thu, Dec 07, 2006 at 11:42:39AM -0500, Bill Moran wrote:\n> \n> I'm gearing up to do some serious investigation into performance for\n> PostgreSQL with regard to our application. I have two issues that I've\n> questions about, and I'll address them in two seperate emails.\n> \n> This one regards tuning shared_buffers.\n> \n> I believe I have a good way to monitor database activity and tell when\n> a database grows large enough that it would benefit from more\n> shared_buffers: if I monitor the blks_read column of pg_stat_database,\n> it should increase very slowly if there is enough shared_buffer\n> space. When shared buffer space runs out, more disk read requests\n> will be required and this number will begin to climb.\n> \n> If anyone sees a flaw in this approach, I'd be interested to hear it.\n> \n> The other tuning issue with shared_buffers is how to tell if I'm\n> allocating too much. For example, if I allocate 1G of RAM to\n> shared buffers, and the entire database can fit in 100M, that 900M\n> might be better used as work_mem, or something else.\n> \n> I haven't been able to find anything regarding how much of the\n> shared buffer space PostgreSQL is actually using, as opposed to\n> simply allocating.\n> \n> -- \n> Bill Moran\n> Collaborative Fusion Inc.\n> \n> [email protected]\n> Phone: 412-422-3463x4023\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 8 Dec 2006 13:19:35 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to determine if my setting for shared_buffers is too high?"
}
] |
[
{
"msg_contents": "[email protected] writes:\n> If anyone knows what may cause this problem, or has any other ideas, I\n> would be grateful.\n\nSubmit the command \"VACUUM ANALYZE VERBOSE locations;\" on both\nservers, and post the output of that. That might help us tell for\nsure whether the table is bloated (and needs VACUUM FULL/CLUSTER).\n\nThe query plans are suggestive; on the 'master', the cost is\n113921.40, whereas on the 'slave' it's 2185.09; I'll bet that those\nnumbers are proportional to the number of pages assigned to the table\non the respective servers...\n-- \n(reverse (concatenate 'string \"ofni.sesabatadxunil\" \"@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/lsf.html\n\"We use Linux for all our mission-critical applications. Having the\nsource code means that we are not held hostage by anyone's support\ndepartment.\" -- Russell Nelson, President of Crynwr Software\n",
"msg_date": "Thu, 07 Dec 2006 22:33:07 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: One table is very slow, but replicated table (same data) is fine"
}
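The same page-count comparison can be made directly, without waiting for the VACUUM output; a relpages figure (as of the last VACUUM/ANALYZE) far out of proportion to reltuples on the slow server would confirm the bloat theory and point at VACUUM FULL or CLUSTER.

------------------------------------------------------------
SELECT relname,
       relpages,
       reltuples,
       pg_size_pretty(pg_relation_size(oid)) AS on_disk
  FROM pg_class
 WHERE relname = 'locations';
------------------------------------------------------------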
] |
[
{
"msg_contents": "Hi,\n\nI have written my own 'large object'-like feature using the following table:\n\n----\nCREATE TABLE blob\n(\n id bigint NOT NULL,\n pageno integer NOT NULL,\n data bytea,\n CONSTRAINT blob_pkey PRIMARY KEY (id, pageno)\n)\nWITHOUT OIDS;\nALTER TABLE blob ALTER COLUMN data SET STORAGE EXTERNAL;\n\nCREATE SEQUENCE seq_key_blob;\n----\n\nOne blob consist of many rows, each containing one 'page'. I insert pages with\nPQexecPrepared with the format set to binary. This works quite well for the\nfollowing setups:\n\nclient -> server\n-----------------\nlinux -> linux\nlinux -> windows\nwindows -> windows\n\nbut pretty bad (meaning about 10 times slower) for this setup\n\nwindows -> linux\n\n\nThe used postgresql versions are 8.1.5 for both operating system. A (sort of)\nminimal code sample exposing this problem may be found appended to this e-mail.\n\nAny ideas?\n\nThanks,\n Axel",
"msg_date": "Sat, 9 Dec 2006 02:24:13 +0100",
"msg_from": "\"Axel Waggershauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
},
{
"msg_contents": "\"Axel Waggershauser\" <[email protected]> writes:\n> ... This works quite well for the\n> following setups:\n\n> client -> server\n> -----------------\n> linux -> linux\n> linux -> windows\n> windows -> windows\n\n> but pretty bad (meaning about 10 times slower) for this setup\n\n> windows -> linux\n\nThis has to be a network-level problem. IIRC, there are some threads in\nour archives discussing possibly-related performance issues seen with\nWindows' TCP stack. Don't recall details, but I think in some cases\nthe problem was traced to third-party add-ons to the Windows stack.\nYou might want to check just what you're running there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Dec 2006 22:52:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux "
},
{
"msg_contents": "On 12/9/06, Tom Lane <[email protected]> wrote:\n> \"Axel Waggershauser\" <[email protected]> writes:\n> > ... This works quite well for the\n> > following setups:\n>\n> > client -> server\n> > -----------------\n> > linux -> linux\n> > linux -> windows\n> > windows -> windows\n>\n> > but pretty bad (meaning about 10 times slower) for this setup\n>\n> > windows -> linux\n>\n> This has to be a network-level problem. IIRC, there are some threads in\n> our archives discussing possibly-related performance issues seen with\n> Windows' TCP stack. Don't recall details, but I think in some cases\n> the problem was traced to third-party add-ons to the Windows stack.\n> You might want to check just what you're running there.\n\nI searched the archives but found nothing really enlightening\nregarding my problem. One large thread regarding win32 was related to\na psql problem related to multiple open handles, other mails referred\nto a \"QoS\" patch but I could not find more specific information.\n\nI thought about firewall or virus scanning software myself, but I\ncan't really see why such software should distinguish between a\nwindows remote host and a linux remote host. Furthermore,\n\"downloading\" is fast on all setups, it's just uploading from windows\nto linux, which is slow.\n\nI repeated my test with a vanilla windows 2000 machine (inc. tons of\nmicrosoft hot-fixes) and it exposes the same problem.\n\nI'm out of ideas here, maybe someone could try to reproduce this\nbehavior or could point me to the thread containing relevant\ninformation (sorry, maybe I'm just too dumb :-/)\n\nThank,\n Axel\n",
"msg_date": "Mon, 11 Dec 2006 15:58:18 +0100",
"msg_from": "\"Axel Waggershauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
},
{
"msg_contents": "> I'm out of ideas here, maybe someone could try to reproduce this\n> behavior or could point me to the thread containing relevant\n> information (sorry, maybe I'm just too dumb :-/)\n\nplease specify how you're transfering the data from windows -> linux. are \nyou using odbc? if yes, what driver? are you using FDQN server names or a \nplain ip adress? etc etc.\n\n- thomas \n\n\n",
"msg_date": "Mon, 11 Dec 2006 16:03:00 +0100",
"msg_from": "\"Thomas H.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
},
{
"msg_contents": "On 12/11/06, Thomas H. <[email protected]> wrote:\n> > I'm out of ideas here, maybe someone could try to reproduce this\n> > behavior or could point me to the thread containing relevant\n> > information (sorry, maybe I'm just too dumb :-/)\n>\n> please specify how you're transfering the data from windows -> linux. are\n> you using odbc? if yes, what driver? are you using FDQN server names or a\n> plain ip adress? etc etc.\n\nYou may take a look at my first mail (starting this thread), there you\nfind a my_lo.c attached, containing a the complete code. I use libpq.\nThe connection is established like this:\n\n conn = PQsetdbLogin(argv[1], \"5432\", NULL, NULL, argv[2],\nargv[3], argv[4]);\n\nI called the test program with the plain ip-address of the server machine.\n\nAxel\n",
"msg_date": "Mon, 11 Dec 2006 16:16:04 +0100",
"msg_from": "\"Axel Waggershauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
},
{
"msg_contents": "\"Axel Waggershauser\" <[email protected]> writes:\n> On 12/9/06, Tom Lane <[email protected]> wrote:\n>> This has to be a network-level problem. IIRC, there are some threads in\n>> our archives discussing possibly-related performance issues seen with\n>> Windows' TCP stack.\n\n> I searched the archives but found nothing really enlightening\n> regarding my problem. One large thread regarding win32 was related to\n> a psql problem related to multiple open handles, other mails referred\n> to a \"QoS\" patch but I could not find more specific information.\n\nYeah, that's what I couldn't think of the other day. The principal\nreport was here:\nhttp://archives.postgresql.org/pgsql-general/2005-01/msg01231.php\n\n By default, Windows XP installs the QoS Packet Scheduler service.\n It is not installed by default on Windows 2000. After I installed\n QoS Packet Scheduler on the Windows 2000 machine, the latency\n problem vanished. \n\nNow he was talking about a local connection not remote, but it's still\nsomething worth trying.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2006 10:59:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux "
},
{
"msg_contents": ">>> On Mon, Dec 11, 2006 at 8:58 AM, in message\n<[email protected]>, \"Axel\nWaggershauser\" <[email protected]> wrote: \n> \n> I'm out of ideas here, maybe someone could try to reproduce this\n> behavior or could point me to the thread containing relevant\n> information\n \nNo guarantees that this is the problem, but I have seen similar issues\nin other software because of delays introduced in the TCP stack by the\nNagle algorithm. Turning on TCP_NODELAY has solved such problems. I\ndon't know if PostgreSQL is vulnerable to this, or how it would be fixed\nin a PostgreSQL environment, but it might give you another avenue to\nsearch.\n \n-Kevin\n \n\n",
"msg_date": "Mon, 11 Dec 2006 10:25:14 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to"
},
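To make the TCP_NODELAY suggestion above concrete, here is a minimal C sketch, not taken from libpq, of how a client can disable Nagle's algorithm on an already-connected TCP socket. The descriptor passed in and the POSIX headers are assumptions for the example; on Windows the equivalent call goes through Winsock with the same option name.

```c
/* Minimal sketch: disabling Nagle's algorithm on a connected TCP socket.
 * Illustrative only -- this is not libpq source code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>     /* IPPROTO_TCP */
#include <netinet/tcp.h>    /* TCP_NODELAY */

/* 'sock' is assumed to be an already-connected TCP socket descriptor.
 * Returns 0 on success, -1 on failure. */
int disable_nagle(int sock)
{
    int on = 1;

    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                   (char *) &on, sizeof(on)) < 0)
    {
        fprintf(stderr, "setsockopt(TCP_NODELAY) failed: %s\n",
                strerror(errno));
        return -1;
    }
    return 0;
}
```

Whether libpq itself already does this on a given platform is exactly what gets examined a few messages further down (connectNoDelay in fe-connect.c).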
{
"msg_contents": "On 12/11/06, Tom Lane <[email protected]> wrote:\n> Yeah, that's what I couldn't think of the other day. The principal\n> report was here:\n> http://archives.postgresql.org/pgsql-general/2005-01/msg01231.php\n>\n> By default, Windows XP installs the QoS Packet Scheduler service.\n> It is not installed by default on Windows 2000. After I installed\n> QoS Packet Scheduler on the Windows 2000 machine, the latency\n> problem vanished.\n\nI found a QoS-RVPS service (not sure about the last four characters\nand I'm sitting at my mac at home now...) on one of the WinXP test\nboxes, started it and immediately lost network connection :-(. Since I\nhave pretty much the same skepticism regarding the usefulness of a QoS\npacket scheduler to help with a raw-throughput-problem like Lincoln\nYeoh in a follow up mail to the above\n(http://archives.postgresql.org/pgsql-general/2005-01/msg01243.php), I\ndidn't investigate this further.\n\nAnd regarding the TCP_NODELAY hint from Kevin Grittner: if I am not\nwrong with interpreting fe_connect.c, the libpq already deals with it\n(fe_connect.c:connectNoDelay). But this made me think about the\n'page'-size I use in my blob table...\n\nI tested different sizes on linux some time ago and found that 64KB\nwas optimal. But playing with different sizes again revealed that my\nwindows->linux problem seems to be solved if I use _any_ other\n(reasonable - meaning something between 4K and 512K) power of two ?!?\n\nDoes this make sense to anyone?\n\nThanks,\n axel\n",
"msg_date": "Mon, 11 Dec 2006 23:44:25 +0100",
"msg_from": "\"Axel Waggershauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
},
{
"msg_contents": "\"Axel Waggershauser\" <[email protected]> writes:\n> I tested different sizes on linux some time ago and found that 64KB\n> was optimal. But playing with different sizes again revealed that my\n> windows->linux problem seems to be solved if I use _any_ other\n> (reasonable - meaning something between 4K and 512K) power of two ?!?\n\nI think this almost certainly indicates a Nagle/delayed-ACK\ninteraction. I googled and found a nice description of the issue:\nhttp://www.stuartcheshire.org/papers/NagleDelayedAck/\n\nNote that there are no TCP connections in which message payloads are\nexact powers of two (and no, I don't know why they didn't try to make\nit so). You are probably looking at a situation where this particular\ntransfer size results in an odd number of messages where the other sizes\ndo not, with the different overheads between Windows and everybody else\naccounting for the fact that it's only seen with a Windows sender.\n\nIf you don't like that theory, another line of reasoning has to do with\nthe fact that the maximum advertiseable window size in TCP is 65535 ---\nthere could be some edge-case behaviors in the Windows and Linux stacks\nthat don't play nicely together for 64K transfer sizes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2006 20:25:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux "
},
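As a back-of-the-envelope companion to the explanation above, the sketch below (purely illustrative; the 1460-byte MSS is an assumed typical Ethernet value) prints how a few write sizes split into full-size segments plus a short tail. Per the paper linked above, it is the interplay of such a tail with Nagle on the sender and delayed ACK on the receiver, which plays out differently in different stacks, that can turn one particular transfer size into a stall of up to a couple hundred milliseconds per block.

```c
/* Back-of-the-envelope sketch, not from the thread: how a single
 * application write is chopped into TCP segments for an assumed
 * Ethernet MSS of 1460 bytes. */
#include <stdio.h>

int main(void)
{
    const int mss = 1460;                         /* assumed MSS */
    const int sizes[] = { 16384, 32768, 65536, 131072 };
    const int n = sizeof(sizes) / sizeof(sizes[0]);

    for (int i = 0; i < n; i++)
    {
        int full = sizes[i] / mss;                /* full-size segments */
        int tail = sizes[i] % mss;                /* trailing partial segment */

        printf("write of %6d bytes -> %2d full segments + %4d-byte tail\n",
               sizes[i], full, tail);
    }
    return 0;
}
```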
{
"msg_contents": "* Tom Lane:\n\n> If you don't like that theory, another line of reasoning has to do with\n> the fact that the maximum advertiseable window size in TCP is 65535 ---\n> there could be some edge-case behaviors in the Windows and Linux stacks\n> that don't play nicely together for 64K transfer sizes.\n\nLinux enables window scaling, so the actual window size can be more\nthan 64K. Windows should cope with it, but some PIX firewalls and\nother historic boxes won't.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 12 Dec 2006 08:39:16 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
},
{
"msg_contents": "On 12/12/06, Tom Lane <[email protected]> wrote:\n> \"Axel Waggershauser\" <[email protected]> writes:\n> > I tested different sizes on linux some time ago and found that 64KB\n> > was optimal. But playing with different sizes again revealed that my\n> > windows->linux problem seems to be solved if I use _any_ other\n> > (reasonable - meaning something between 4K and 512K) power of two ?!?\n>\n> I think this almost certainly indicates a Nagle/delayed-ACK\n> interaction. I googled and found a nice description of the issue:\n> http://www.stuartcheshire.org/papers/NagleDelayedAck/\n\nBut that means I must have misinterpreted fe-connect.c, right? Meaning\non the standard windows build the\n\n setsockopt(conn->sock, IPPROTO_TCP, TCP_NODELAY, (char *) &on, sizeof(on))\n\nline gets never called (eather because TCP_NODELAY is not defined or\nIS_AF_UNIX(addr_cur->ai_family) in PQconnectPoll evaluates to true).\n\nIn case I was mistaken, this explanation makes perfectly sens to me.\nBut then again it would indicate a 'bug' in libpq, in the sense that\nit (apparently) sets TCP_NODELAY on linux but not on windows.\n\n\nAxel\n",
"msg_date": "Tue, 12 Dec 2006 11:18:58 +0100",
"msg_from": "\"Axel Waggershauser\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
},
{
"msg_contents": "\"Axel Waggershauser\" <[email protected]> writes:\n> On 12/12/06, Tom Lane <[email protected]> wrote:\n>> I think this almost certainly indicates a Nagle/delayed-ACK\n>> interaction. I googled and found a nice description of the issue:\n>> http://www.stuartcheshire.org/papers/NagleDelayedAck/\n\n> In case I was mistaken, this explanation makes perfectly sens to me.\n> But then again it would indicate a 'bug' in libpq, in the sense that\n> it (apparently) sets TCP_NODELAY on linux but not on windows.\n\nNo, it would mean a bug in Windows in that it fails to honor TCP_NODELAY.\nAgain, given that you only see the behavior at one specific message\nlength, I suspect this is a corner case rather than a generic \"it\ndoesn't work\" issue.\n\nWe're pretty much guessing though. Have you tried tracing the traffic\nwith a packet sniffer to see what's really happening at different\nmessage sizes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2006 11:33:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux "
},
{
"msg_contents": "Tom Lane wrote:\n\n>>In case I was mistaken, this explanation makes perfectly sens to me.\n>>But then again it would indicate a 'bug' in libpq, in the sense that\n>>it (apparently) sets TCP_NODELAY on linux but not on windows.\n>> \n>>\n>\n>No, it would mean a bug in Windows in that it fails to honor TCP_NODELAY.\n> \n>\nLast time I did battle with nagle/delayed ack interaction in windows \n(the other end\nhas to be another stack implementation -- windows to itself I don't \nthink has the problem),\nit _did_ honor TCP_NODELAY. That was a while ago (1997) but I'd be surprised\nif things have changed much since then.\n\nBasically nagle has to be turned off for protocols like this \n(request/response interaction\nover TCP) otherwise you'll sometimes end up with stalls waiting for the \ndelayed ack\nbefore sending, which in turn results in very low throughput, per \nconnection. As I remember\nWindows client talking to Solaris server had the problem, but various \nother permutations\nof client and server stack implementation did not.\n\n\n\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\n\nIn case I was mistaken, this explanation makes perfectly sens to me.\nBut then again it would indicate a 'bug' in libpq, in the sense that\nit (apparently) sets TCP_NODELAY on linux but not on windows.\n \n\n\nNo, it would mean a bug in Windows in that it fails to honor TCP_NODELAY.\n \n\nLast time I did battle with nagle/delayed ack interaction in windows\n(the other end\nhas to be another stack implementation -- windows to itself I don't\nthink has the problem),\nit _did_ honor TCP_NODELAY. That was a while ago (1997) but I'd be\nsurprised\nif things have changed much since then.\n\nBasically nagle has to be turned off for protocols like this\n(request/response interaction\nover TCP) otherwise you'll sometimes end up with stalls waiting for the\ndelayed ack\nbefore sending, which in turn results in very low throughput, per\nconnection. As I remember\nWindows client talking to Solaris server had the problem, but various\nother permutations\nof client and server stack implementation did not.",
"msg_date": "Tue, 12 Dec 2006 09:56:02 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low throughput of binary inserts from windows to linux"
}
] |
[
{
"msg_contents": "Hi yall,\n\nalthough I've worked with databases for more than 7 years now, I'm\npetty new to PostgreSQL.\n\nI have an application using SQLite3 as an embedded SQL solution\nbecause it's simple and it can handle the load that *most* of my\nclients have.\n\nBecause of that '*most*' part, because of the client/server way and\nbecause of the license, I'm think about start using PostgreSQL.\n\nMy app uses only three tables: one has low read and really high write\nrates, a second has high read and low write and the third one is\nequally high on both.\n\nI need a db that can handle something like 500 operations/sec\ncontinuously. It's something like 250 writes/sec and 250 reads/sec. My\ndatabases uses indexes.\n\nEach table would have to handle 5 million rows/day. So I'm thinking\nabout creating different tables (clusters?) to different days to make\nqueries return faster. Am I right or there is no problem in having a\n150 million (one month) rows on a table?\n\nAll my data is e-mail traffic: user's quarentine, inbond traffic,\noutbond traffic, sender, recipients, subjects, attachments, etc...\n\nWhat do you people say, is it possible with PostgreSQL? What kind of\nhardware would I need to handle that kind of traffic?\n\nOn a first test, at a badly tunned AMD Athlon XP 1800+ (ergh!) I could\ndo 1400 writes/sec locally after I disabled fsync. We have UPSs, in\nthe last year we only had 1 power failure.\n\nThank you all for your tips.\n\nBest regards,\nDaniel Colchete\n",
"msg_date": "Sun, 10 Dec 2006 17:41:46 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Daniel van Ham Colchete wrote:\n> although I've worked with databases for more than 7 years now, I'm\n> petty new to PostgreSQL.\n\nSame here.\n\n> I need a db that can handle something like 500 operations/sec\n> continuously. It's something like 250 writes/sec and 250 reads/sec. My\n> databases uses indexes.\n\nTaken from an email to the admin list about a week ago -\n\nStats about the system:\nPostgres 8.1.4\ndb size: 200+ GB\nInheritance is used extremely heavily, so in figuring out what could \ncause a create to hang, it may be of interest to know that there are:\n101,745 tables\n314,821 indexes\n1,569 views\nThe last averages taken on the number of writes per hour on this \ndatabase: ~3 million (this stat is a few weeks old)\n\nMachine info:\nOS: Solaris 10\nSunfire X4100 XL\n2x AMD Opteron Model 275 dual core procs\n8GB of ram\n\n\n> Each table would have to handle 5 million rows/day. So I'm thinking\n> about creating different tables (clusters?) to different days to make\n> queries return faster. Am I right or there is no problem in having a\n> 150 million (one month) rows on a table?\n\nSounds to me that a month might be on the large size for real fast \nresponse times - I would think of seperating weekly rather than daily.\n\nStart with\nhttp://www.postgresql.org/docs/8.2/interactive/ddl-inherit.html\nthen the next chapter explains using that to partition data into \ndifferent tables dependant on specified criteria.\n\nYou may be interested in tsearch2 which is in the contrib dir and adds \nfull text indexing.\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Mon, 11 Dec 2006 10:22:08 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
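For readers wondering what the inheritance-based partitioning in that chapter looks like from application code, here is a hypothetical libpq sketch; the traffic table, its logged_at column, the week boundaries and the connection string are all invented for the example.

```c
/* Hypothetical sketch (table/column names invented): creating one weekly
 * partition of a "traffic" table using inheritance plus a CHECK
 * constraint, along the lines of the partitioning chapter referenced
 * above. Compile against libpq; error handling is kept minimal. */
#include <stdio.h>
#include <libpq-fe.h>

static void run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "failed: %s\n%s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=traffic");   /* assumed DSN */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* One child table per week; constraint exclusion can then skip the
     * other weeks when a query filters on logged_at. */
    run(conn,
        "CREATE TABLE traffic_2006_w50 ("
        "  CHECK (logged_at >= '2006-12-11' AND logged_at < '2006-12-18')"
        ") INHERITS (traffic)");
    run(conn, "CREATE INDEX traffic_2006_w50_logged_at_idx"
              " ON traffic_2006_w50 (logged_at)");

    PQfinish(conn);
    return 0;
}
```

With constraint_exclusion enabled in postgresql.conf, the planner can then skip child tables whose CHECK constraints rule them out for a query that filters on logged_at.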
{
"msg_contents": "Hi Gene,\n\nat my postgresql.conf, the only non-comented lines are:\nfsync = off\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nmax_connections = 100\nshared_buffers = 5000\ntemp_buffers = 1000\nwork_mem = 4096\n\nThe only two values I changed are shared_buffers and work_mem.\n\n*** BUT ***\nI'm using Gentoo Linux, so all my libraries (including glibc that is\nvery important to PostgreSQL), and all my softwares are compiled with\ngood CFLAG options to my processor (\"-O2 march=athlon-xp (...)\"). My\nLinux is not an Intel-AMD binary compatible turtle like\nFedora/RedHat/SUSE/... It's really important to have your GLIBC\ncompiled for your processor. It is essencial for performance.\n\nI can't imagine anyone buying a $1k-dollar quad-core XEON and using an\ni585 compatible distro that doesn't even know what the fudge is\nSSE/SSE2/vectorized instructions.\n\nBest regards,\nDaniel Colchete\n\nOn 12/10/06, Gene <[email protected]> wrote:\n> I have a similar type application, I'm partitioning using constraint\n> exclusion so queries only have to look at a few tables. I've found that\n> there is some overhead to using partitioning so you should test to see how\n> many partitions you want to create. Could I check out you postgresql.conf\n> parameters to compare? thanks\n>\n>\n> Gene Hart\n",
"msg_date": "Sun, 10 Dec 2006 23:02:44 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Shane,\n\nreally good answer man!!! It really helped me!\n\nI can't wait to have time to test this partitions thing. This will\nreally solve a lot of my problems (and another one that I was\ndelaying).\n\nInstead of making partitions over days/weeks I can tune it to the\nseconds (timestamp) and try to find the best partition size for\nperformance. That partition thing is really good.\n\nThat server you sent me handles about 15 times more writes than I would have to.\n\nThank you very much for your answer.\n\nBest regards,\nDaniel Colchete\n\nOn 12/10/06, Shane Ambler <[email protected]> wrote:\n> Daniel van Ham Colchete wrote:\n> > although I've worked with databases for more than 7 years now, I'm\n> > petty new to PostgreSQL.\n>\n> Same here.\n>\n> > I need a db that can handle something like 500 operations/sec\n> > continuously. It's something like 250 writes/sec and 250 reads/sec. My\n> > databases uses indexes.\n>\n> Taken from an email to the admin list about a week ago -\n>\n> Stats about the system:\n> Postgres 8.1.4\n> db size: 200+ GB\n> Inheritance is used extremely heavily, so in figuring out what could\n> cause a create to hang, it may be of interest to know that there are:\n> 101,745 tables\n> 314,821 indexes\n> 1,569 views\n> The last averages taken on the number of writes per hour on this\n> database: ~3 million (this stat is a few weeks old)\n>\n> Machine info:\n> OS: Solaris 10\n> Sunfire X4100 XL\n> 2x AMD Opteron Model 275 dual core procs\n> 8GB of ram\n>\n>\n> > Each table would have to handle 5 million rows/day. So I'm thinking\n> > about creating different tables (clusters?) to different days to make\n> > queries return faster. Am I right or there is no problem in having a\n> > 150 million (one month) rows on a table?\n>\n> Sounds to me that a month might be on the large size for real fast\n> response times - I would think of seperating weekly rather than daily.\n>\n> Start with\n> http://www.postgresql.org/docs/8.2/interactive/ddl-inherit.html\n> then the next chapter explains using that to partition data into\n> different tables dependant on specified criteria.\n>\n> You may be interested in tsearch2 which is in the contrib dir and adds\n> full text indexing.\n>\n>\n> --\n>\n> Shane Ambler\n> [email protected]\n>\n> Get Sheeky @ http://Sheeky.Biz\n>\n",
"msg_date": "Sun, 10 Dec 2006 23:15:31 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "I'm using gentoo as well, I'm having performance issues as the number of\npartitions is increasing I imagine do due to overhead managing them and\nfiguring out where to put each insert/update. I'm switching to weekly\npartitions instead of daily. I believe in PG8.2 constraint exclusion works\nwith updates/deletes also so I'm eager to upgrade. I get about 1 million\nrecords per day in two tables each, each record updated about 4 times within\n30 minutes.\n\nDo you think using UTF8 vs US-ASCII hurts performance signficantly, some of\nmy smaller tables require unicode, and I don't think you can have some\ntables be unicode and some be ASCII.\n\nOn 12/10/06, Daniel van Ham Colchete <[email protected]> wrote:\n>\n> Hi Gene,\n>\n> at my postgresql.conf, the only non-comented lines are:\n> fsync = off\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n> max_connections = 100\n> shared_buffers = 5000\n> temp_buffers = 1000\n> work_mem = 4096\n>\n> The only two values I changed are shared_buffers and work_mem.\n>\n> *** BUT ***\n> I'm using Gentoo Linux, so all my libraries (including glibc that is\n> very important to PostgreSQL), and all my softwares are compiled with\n> good CFLAG options to my processor (\"-O2 march=athlon-xp (...)\"). My\n> Linux is not an Intel-AMD binary compatible turtle like\n> Fedora/RedHat/SUSE/... It's really important to have your GLIBC\n> compiled for your processor. It is essencial for performance.\n>\n> I can't imagine anyone buying a $1k-dollar quad-core XEON and using an\n> i585 compatible distro that doesn't even know what the fudge is\n> SSE/SSE2/vectorized instructions.\n>\n> Best regards,\n> Daniel Colchete\n>\n> On 12/10/06, Gene <[email protected]> wrote:\n> > I have a similar type application, I'm partitioning using constraint\n> > exclusion so queries only have to look at a few tables. I've found that\n> > there is some overhead to using partitioning so you should test to see\n> how\n> > many partitions you want to create. Could I check out you\n> postgresql.conf\n> > parameters to compare? thanks\n> >\n> >\n> > Gene Hart\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n\n-- \nGene Hart\ncell: 443-604-2679\n\nI'm using gentoo as well, I'm having performance issues as the number of partitions is increasing I imagine do due to overhead managing them and figuring out where to put each insert/update. I'm switching to weekly partitions instead of daily. I believe in \nPG8.2 constraint exclusion works with updates/deletes also so I'm eager to upgrade. 
I get about 1 million records per day in two tables each, each record updated about 4 times within 30 minutes.Do you think using UTF8 vs US-ASCII hurts performance signficantly, some of my smaller tables require unicode, and I don't think you can have some tables be unicode and some be ASCII.\nOn 12/10/06, Daniel van Ham Colchete <[email protected]> wrote:\nHi Gene,at my postgresql.conf, the only non-comented lines are:fsync = offlc_messages = 'C'lc_monetary = 'C'lc_numeric = 'C'lc_time = 'C'max_connections = 100shared_buffers = 5000temp_buffers = 1000\nwork_mem = 4096The only two values I changed are shared_buffers and work_mem.*** BUT ***I'm using Gentoo Linux, so all my libraries (including glibc that isvery important to PostgreSQL), and all my softwares are compiled with\ngood CFLAG options to my processor (\"-O2 march=athlon-xp (...)\"). MyLinux is not an Intel-AMD binary compatible turtle likeFedora/RedHat/SUSE/... It's really important to have your GLIBCcompiled for your processor. It is essencial for performance.\nI can't imagine anyone buying a $1k-dollar quad-core XEON and using ani585 compatible distro that doesn't even know what the fudge isSSE/SSE2/vectorized instructions.Best regards,Daniel Colchete\nOn 12/10/06, Gene <[email protected]> wrote:> I have a similar type application, I'm partitioning using constraint> exclusion so queries only have to look at a few tables. I've found that\n> there is some overhead to using partitioning so you should test to see how> many partitions you want to create. Could I check out you postgresql.conf> parameters to compare? thanks>>\n> Gene Hart---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not\n match-- Gene Hartcell: 443-604-2679",
"msg_date": "Sun, 10 Dec 2006 20:18:48 -0500",
"msg_from": "Gene <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Gene,\n\nat Postgres's docs they say that the \"constraint checks are relatively\nexpensive\". From what you're saying, it's really worth studying the\nmatter deply first.\n\nI never understood what's the matter between the ASCII/ISO-8859-1/UTF8\ncharsets to a database. They're all simple C strings that doesn't have\nthe zero-byte in the midlle (like UTF16 would) and that doesn't\nrequire any different processing unless you are doing case insensitive\nsearch (them you would have a problem).\n\nASCII chars are also correct UTF8 chars as well. The first 127 Unicode\nchars are the same as the ASCII chars. So you would not have any\nproblems changing your table from ASCII to UTF8. My software uses\nUTF16 and UTF8 at some of it's internals and I only notice performance\nproblems with UTF16 (because of the zero-byte thing, the processing I\nmake is diferent). So, I imagine that you wouldn't have any\nperformance issues changing from ASCII to UTF8 if necessary.\n\nNowadays everything is turning to Unicode (thank god). I wouldn't\nstart anything with any other charset. I would only be asking for a\nrewrite in a near future.\n\nBest,\nDaniel\n\nOn 12/10/06, Gene <[email protected]> wrote:\n> I'm using gentoo as well, I'm having performance issues as the number of\n> partitions is increasing I imagine do due to overhead managing them and\n> figuring out where to put each insert/update. I'm switching to weekly\n> partitions instead of daily. I believe in PG8.2 constraint exclusion works\n> with updates/deletes also so I'm eager to upgrade. I get about 1 million\n> records per day in two tables each, each record updated about 4 times within\n> 30 minutes.\n>\n> Do you think using UTF8 vs US-ASCII hurts performance signficantly, some of\n> my smaller tables require unicode, and I don't think you can have some\n> tables be unicode and some be ASCII.\n",
"msg_date": "Sun, 10 Dec 2006 23:47:01 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "\nOn Dec 11, 2006, at 10:47 , Daniel van Ham Colchete wrote:\n\n> I never understood what's the matter between the ASCII/ISO-8859-1/UTF8\n> charsets to a database.\n\nIf what you mean by ASCII is SQL_ASCII, then there is at least one \nsignificant difference between UTF8 (the PostgreSQL encoding) and \nSQL_ASCII. AIUI, SQL_ASCII does no checking at all with respect to \nwhat bytes are going in, while UTF8 does make sure to the best of its \nability that the bytes represent valid UTF-8 characters, throwing an \nerror if an invalid byte sequence is detected.\n\nThere's more information regarding this here:\nhttp://www.postgresql.org/docs/8.2/interactive/ \nmultibyte.html#MULTIBYTE-CHARSET-SUPPORTED\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Mon, 11 Dec 2006 11:11:29 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
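A small, hypothetical libpq example of the difference Michael describes; the notes table and connection string are invented. Against a database created with ENCODING 'UTF8' the insert below is expected to fail validation, while a SQL_ASCII database would store the same bytes without complaint.

```c
/* Hypothetical illustration of the encoding check mentioned above:
 * a database created with ENCODING 'UTF8' rejects bytes that are not
 * valid UTF-8, while a SQL_ASCII database stores them unchecked.
 * The table "notes(body text)" is invented for this sketch. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=utf8test");  /* assumed DSN */
    const char *bad = "\xC3(";      /* 0xC3 must be followed by 0x80-0xBF */
    const char *params[1] = { bad };

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PGresult *res = PQexecParams(conn,
        "INSERT INTO notes (body) VALUES ($1)",
        1, NULL, params, NULL, NULL, 0);

    /* On a UTF8 database this is expected to fail with something like
     * 'invalid byte sequence for encoding "UTF8"'; on SQL_ASCII the
     * insert would simply succeed. */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "server said: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```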
{
"msg_contents": "On Sun, Dec 10, 2006 at 11:02:44PM -0200, Daniel van Ham Colchete wrote:\n>I'm using Gentoo Linux, so all my libraries (including glibc that is\n>very important to PostgreSQL), and all my softwares are compiled with\n>good CFLAG options to my processor (\"-O2 march=athlon-xp (...)\"). My\n>Linux is not an Intel-AMD binary compatible turtle like\n>Fedora/RedHat/SUSE/... It's really important to have your GLIBC\n>compiled for your processor. It is essencial for performance.\n\nPlease, point to the benchmarks that demonstrate this for a postgres \napplication.\n\nMike Stone\n",
"msg_date": "Sun, 10 Dec 2006 21:18:49 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Hi Daniel\nOn 10-Dec-06, at 8:02 PM, Daniel van Ham Colchete wrote:\n\n> Hi Gene,\n>\n> at my postgresql.conf, the only non-comented lines are:\n> fsync = off\nThis can, and will result in lost data.\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n\nHow much memory does this machine have and what version of postgresql \nare you using?\n> max_connections = 100\n> shared_buffers = 5000\n> temp_buffers = 1000\n> work_mem = 4096\n>\n> The only two values I changed are shared_buffers and work_mem.\n\nDave\n>\n> *** BUT ***\n> I'm using Gentoo Linux, so all my libraries (including glibc that is\n> very important to PostgreSQL), and all my softwares are compiled with\n> good CFLAG options to my processor (\"-O2 march=athlon-xp (...)\"). My\n> Linux is not an Intel-AMD binary compatible turtle like\n> Fedora/RedHat/SUSE/... It's really important to have your GLIBC\n> compiled for your processor. It is essencial for performance.\n>\n> I can't imagine anyone buying a $1k-dollar quad-core XEON and using an\n> i585 compatible distro that doesn't even know what the fudge is\n> SSE/SSE2/vectorized instructions.\n>\n> Best regards,\n> Daniel Colchete\n>\n> On 12/10/06, Gene <[email protected]> wrote:\n>> I have a similar type application, I'm partitioning using constraint\n>> exclusion so queries only have to look at a few tables. I've found \n>> that\n>> there is some overhead to using partitioning so you should test to \n>> see how\n>> many partitions you want to create. Could I check out you \n>> postgresql.conf\n>> parameters to compare? thanks\n>>\n>>\n>> Gene Hart\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Sun, 10 Dec 2006 21:44:09 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 11, 2006, at 02:47 , Daniel van Ham Colchete wrote:\n\n> I never understood what's the matter between the ASCII/ISO-8859-1/UTF8\n> charsets to a database. They're all simple C strings that doesn't have\n> the zero-byte in the midlle (like UTF16 would) and that doesn't\n> require any different processing unless you are doing case insensitive\n> search (them you would have a problem).\n\nThat's not the whole story. UTF-8 and other variable-width encodings \ndon't provide a 1:1 mapping of logical characters to single bytes; in \nparticular, combination characters opens the possibility of multiple \ndifferent byte sequences mapping to the same code point; therefore, \nstring comparison in such encodings generally cannot be done at the \nbyte level (unless, of course, you first acertain that the strings \ninvolved are all normalized to an unambiguous subset of your encoding).\n\nPostgreSQL's use of strings is not limited to string comparison. \nSubstring extraction, concatenation, regular expression matching, up/ \ndowncasing, tokenization and so on are all part of PostgreSQL's small \nlibrary of text manipulation functions, and all deal with logical \ncharacters, meaning they must be Unicode-aware.\n\nAlexander.\n",
"msg_date": "Mon, 11 Dec 2006 04:27:05 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
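A tiny standalone C illustration of the combining-character point above; the byte sequences are standard UTF-8, everything else is just scaffolding for the example.

```c
/* Two UTF-8 byte sequences that a reader sees as the same character
 * ("é") but that differ at the byte level, so a plain byte comparison
 * cannot decide logical equality. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* U+00E9 (precomposed "é") */
    const char precomposed[] = "\xC3\xA9";
    /* "e" followed by U+0301 (combining acute accent) */
    const char combining[]   = "e\xCC\x81";

    printf("precomposed: %zu bytes, combining: %zu bytes\n",
           strlen(precomposed), strlen(combining));
    printf("byte-wise equal? %s\n",
           strcmp(precomposed, combining) == 0 ? "yes" : "no");
    return 0;
}
```

Both sequences render as "é", yet a byte-level comparison sees them as different, which is why logical string operations cannot simply work byte by byte in such encodings.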
{
"msg_contents": "> That's not the whole story. UTF-8 and other variable-width encodings \n> don't provide a 1:1 mapping of logical characters to single bytes; in \n> particular, combination characters opens the possibility of multiple \n> different byte sequences mapping to the same code point; therefore, \n> string comparison in such encodings generally cannot be done at the \n> byte level (unless, of course, you first acertain that the strings \n> involved are all normalized to an unambiguous subset of your encoding).\n\nCan you show me such encodings supported by PostgreSQL other\nthan UTF-8?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\n",
"msg_date": "Mon, 11 Dec 2006 12:37:03 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 11, 2006, at 04:35 , Tatsuo Ishii wrote:\n\n>> That's not the whole story. UTF-8 and other variable-width encodings\n>> don't provide a 1:1 mapping of logical characters to single bytes; in\n>> particular, combination characters opens the possibility of multiple\n>> different byte sequences mapping to the same code point; therefore,\n>> string comparison in such encodings generally cannot be done at the\n>> byte level (unless, of course, you first acertain that the strings\n>> involved are all normalized to an unambiguous subset of your \n>> encoding).\n>\n> Can you tell me such encodings supported by PostgreSQL other\n> than UTF-8?\n\nhttp://www.postgresql.org/docs/8.1/interactive/ \nmultibyte.html#MULTIBYTE-CHARSET-SUPPORTED\n\nAlexander.\n",
"msg_date": "Mon, 11 Dec 2006 10:05:03 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 11, 2006, at 10:20 , Tatsuo Ishii wrote:\n\n> My question was what kind of encoding other than UTF-8 has a\n> chracteristic such as: \"combination characters opens the possibility\n> of multiple different byte sequences mapping to the same code point\"\n\nNo idea; perhaps only UTF-8. What I said was that variable-width \nencodings don't provide a 1:1 mapping of logical characters to single \nbytes; combination characters is a feature of Unicode.\n\nAlexander.\n\n",
"msg_date": "Mon, 11 Dec 2006 10:32:41 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Alexander Staubo <[email protected]> wrote:\n> On Dec 11, 2006, at 02:47 , Daniel van Ham Colchete wrote:\n>\n> > I never understood what's the matter between the ASCII/ISO-8859-1/UTF8\n> > charsets to a database. They're all simple C strings that doesn't have\n> > the zero-byte in the midlle (like UTF16 would) and that doesn't\n> > require any different processing unless you are doing case insensitive\n> > search (them you would have a problem).\n>\n> That's not the whole story. UTF-8 and other variable-width encodings\n> don't provide a 1:1 mapping of logical characters to single bytes; in\n> particular, combination characters opens the possibility of multiple\n> different byte sequences mapping to the same code point; therefore,\n> string comparison in such encodings generally cannot be done at the\n> byte level (unless, of course, you first acertain that the strings\n> involved are all normalized to an unambiguous subset of your encoding).\n>\n> PostgreSQL's use of strings is not limited to string comparison.\n> Substring extraction, concatenation, regular expression matching, up/\n> downcasing, tokenization and so on are all part of PostgreSQL's small\n> library of text manipulation functions, and all deal with logical\n> characters, meaning they must be Unicode-aware.\n>\n> Alexander.\n>\n\nYou're right. I was thinking only about my cases that takes the\nUnicode normatization for granted and doesn't use\nregexp/tokenization/...\nThanks\n\nBest\nDaniel\n",
"msg_date": "Mon, 11 Dec 2006 08:32:43 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Hi Dave,\n\nOn 12/11/06, Dave Cramer <[email protected]> wrote:\n> Hi Daniel\n> On 10-Dec-06, at 8:02 PM, Daniel van Ham Colchete wrote:\n>\n> > Hi Gene,\n> >\n> > at my postgresql.conf, the only non-comented lines are:\n> > fsync = off\n> This can, and will result in lost data.\n\nI know... If there is a power failure things can happen. I'm know, but\nthe performance dif is really really big I just have to decide if I'm\nwilling to take that chance or not.\n\n> > lc_messages = 'C'\n> > lc_monetary = 'C'\n> > lc_numeric = 'C'\n> > lc_time = 'C'\n>\n> How much memory does this machine have and what version of postgresql\n> are you using?\nIt's only a test server with 512MB RAM, I only used it to see how well\nwould the PostgreSQL do in a ugly case.\n\nDaniel\n",
"msg_date": "Mon, 11 Dec 2006 08:36:07 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "\nOn 11-Dec-06, at 5:36 AM, Daniel van Ham Colchete wrote:\n\n> Hi Dave,\n>\n> On 12/11/06, Dave Cramer <[email protected]> wrote:\n>> Hi Daniel\n>> On 10-Dec-06, at 8:02 PM, Daniel van Ham Colchete wrote:\n>>\n>> > Hi Gene,\n>> >\n>> > at my postgresql.conf, the only non-comented lines are:\n>> > fsync = off\n>> This can, and will result in lost data.\n>\n> I know... If there is a power failure things can happen. I'm know, but\n> the performance dif is really really big I just have to decide if I'm\n> willing to take that chance or not.\n>\n>> > lc_messages = 'C'\n>> > lc_monetary = 'C'\n>> > lc_numeric = 'C'\n>> > lc_time = 'C'\n>>\n>> How much memory does this machine have and what version of postgresql\n>> are you using?\n> It's only a test server with 512MB RAM, I only used it to see how well\n> would the PostgreSQL do in a ugly case.\n\nGiven that optimal performance for postgresql can require up to 50% \nof available memory, you are going to leave the OS with 256MB of \nmemory ?\n\nDave\n>\n> Daniel\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n",
"msg_date": "Mon, 11 Dec 2006 06:04:40 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Mike,\n\nunfortunally I don't have any benchmarks right now. Doing something\nlike this with the same machine would be a 2-day work (at least).\nInstalling a Gentoo and putting it to run well is not a quick task\n(although is easy).\n\nBut, trust me on this one. It's worth it. Think of this: PostgreSQL\nand GNU LibC use a lot of complex algorithms: btree, hashes,\nchecksums, strings functions, etc... And you have a lot of ways to\ncompile it into binary code. Now you have Pentium4's vectorization\nthat allow you to run plenty of instructions in paralell, but AMD\ndoesn't have this. Intel also have SSE2 that makes double-precision\nfloatpoint operations a lot faster, AMD also doesn't have this (at\nleast on 32bits). Now imagine that you're RedHat and that you have to\ndeliver one CD to AMD and Intel servers. That means you can't use any\nAMD-specific or Intel-specific tecnology at the binary level.\n\nThings can get worse. If you take a look at GCC's code (at\ngcc/config/i386/i386.c), you'll see that GCC knows what runs faster on\neach processor. Let-me give an example with the FDIV and FSQRT\ninstructions:\n\nARCH - FDIV - FSQRT (costs relative to an ADD)\ni386: 88 - 122\ni486: 73 - 83\npentium: 39 - 70\nk6: 56 - 56\nAthlon: 24 - 35\nK8: 19 - 35\nPentium4: 43 - 43\nNocona: 40 - 44\n\nImagine that you are GCC and that you have two options in front of\nyou: you can use FSQRT or FDIV plus 20 ADD/SUB. If you are on an\nPentium situation you should use the second option. But on a Pentium4\nor on a Athlon you should choose for the first one. This was only an\nexample, nowadays you would have to choose between: 387, 3dNow,\n3dNow+, SSE, SSE2, ...\n\nWith this info, GCC knows how to choose the best ways to doing things\nto each processor. When you are compiling to an generic i586 (like\nFedora and RedHat), them you are using pentium's timings.\n\nAn example that I know of: it's impossible to run my software at a\nhigh demanding customer without compiling it to the it's processor (I\nmake 5 compilations on every release). Using Intel's Compiler for\nIntel's processors makes things even faster, but it is not free and\nthe \"how fast\" part really depends on your application is coded.\n\nWith 64bits processors, AMD and Intel restarted the process and\neveryone has SSE2 (but not SSE3). Even so, the timings are also very\ndiferent.\n\nBest regards\nDaniel\n\nOn 12/11/06, Michael Stone <[email protected]> wrote:\n> On Sun, Dec 10, 2006 at 11:02:44PM -0200, Daniel van Ham Colchete wrote:\n> >I'm using Gentoo Linux, so all my libraries (including glibc that is\n> >very important to PostgreSQL), and all my softwares are compiled with\n> >good CFLAG options to my processor (\"-O2 march=athlon-xp (...)\"). My\n> >Linux is not an Intel-AMD binary compatible turtle like\n> >Fedora/RedHat/SUSE/... It's really important to have your GLIBC\n> >compiled for your processor. It is essencial for performance.\n>\n> Please, point to the benchmarks that demonstrate this for a postgres\n> application.\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Mon, 11 Dec 2006 09:05:56 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
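For anyone who would rather measure than argue, here is a deliberately artificial, CPU-bound micro-benchmark (a sketch, not a database benchmark): compiling it twice, with and without a -march flag, and timing both binaries shows what instruction-selection flags can buy on a pure floating-point kernel, while saying nothing about an I/O-bound PostgreSQL workload.

```c
/* Hypothetical micro-benchmark, not from the thread: a CPU-bound loop of
 * floating-point square roots, useful only for checking whether flags
 * like "-O2 -march=athlon-xp" change anything for this kind of kernel.
 *
 *   gcc -O2                   sqrtbench.c -lm -o generic
 *   gcc -O2 -march=athlon-xp  sqrtbench.c -lm -o tuned
 *   time ./generic ; time ./tuned
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double acc = 0.0;

    /* The accumulator is printed so the compiler cannot drop the loop. */
    for (long i = 1; i <= 100000000L; i++)
        acc += sqrt((double) i);

    printf("checksum: %f\n", acc);
    return 0;
}
```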
{
"msg_contents": "> >>\n> >> How much memory does this machine have and what version of postgresql\n> >> are you using?\n> > It's only a test server with 512MB RAM, I only used it to see how well\n> > would the PostgreSQL do in a ugly case.\n>\n> Given that optimal performance for postgresql can require up to 50%\n> of available memory, you are going to leave the OS with 256MB of\n> memory ?\n\nIf it were the case I wouldn't have any problems letting the OS use\nonly 256MB, but this is not my production server. My production is\nbuilt yet. It'll have at least 2GB of memory.\n\nBut it's good to know anyway.\n",
"msg_date": "Mon, 11 Dec 2006 09:08:31 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 09:05:56AM -0200, Daniel van Ham Colchete wrote:\n> But, trust me on this one. It's worth it.\n\nYou know what? I don't.\n\n> Think of this: PostgreSQL and GNU LibC use a lot of complex algorithms:\n> btree, hashes, checksums, strings functions, etc... And you have a lot of\n> ways to compile it into binary code. Now you have Pentium4's vectorization\n> that allow you to run plenty of instructions in paralell, but AMD doesn't\n> have this. Intel also have SSE2 that makes double-precision floatpoint\n> operations a lot faster, AMD also doesn't have this (at least on 32bits). \n\nAthlon 64 has SSE2, also in 32-bit-mode.\n\nOf course, it doesn't really matter, since at the instant you hit the disk\neven once, it's going to take a million cycles and any advantage you got from\nsaving single cycles is irrelevant.\n\n> Imagine that you are GCC and that you have two options in front of\n> you: you can use FSQRT or FDIV plus 20 ADD/SUB.\n\nCould you please describe a reasonable case where GCC would have such an\noption? I cannot imagine any.\n\n> An example that I know of: it's impossible to run my software at a\n> high demanding customer without compiling it to the it's processor (I\n> make 5 compilations on every release).\n\nWhat's \"your software\"? How can you make such assertions without backing them\nup? How can you know that the same holds for PostgreSQL?\n\nAs Mike said, point to the benchmarks showing this \"essential\" difference\nbetween -O2 and -O2 -mcpu=pentium4 (or whatever). The only single worthwhile\ndifference I can think of, is that glibc can use the SYSENTER function if it\nknows you have a 686 or higher (which includes AMD), and with recent kernels,\nI'm not even sure if that is needed anymore.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 11 Dec 2006 12:25:47 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 09:05:56AM -0200, Daniel van Ham Colchete wrote:\n>unfortunally I don't have any benchmarks right now.\n\nThat's fairly normal for gentoo users pushing their compile options.\n\nMike Stone\n",
"msg_date": "Mon, 11 Dec 2006 07:15:50 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Steinar H. Gunderson <[email protected]> wrote:\n> On Mon, Dec 11, 2006 at 09:05:56AM -0200, Daniel van Ham Colchete wrote:\n> > But, trust me on this one. It's worth it.\n>\n> You know what? I don't.\nSo test it yourself.\n\n> > Think of this: PostgreSQL and GNU LibC use a lot of complex algorithms:\n> > btree, hashes, checksums, strings functions, etc... And you have a lot of\n> > ways to compile it into binary code. Now you have Pentium4's vectorization\n> > that allow you to run plenty of instructions in paralell, but AMD doesn't\n> > have this. Intel also have SSE2 that makes double-precision floatpoint\n> > operations a lot faster, AMD also doesn't have this (at least on 32bits).\n>\n> Athlon 64 has SSE2, also in 32-bit-mode.\nIt's true. But, I'm not saying that Postfix is faster on AMD or Intel\nsystems. I'm saying that it's a lot faster on you compile Postfix and\nyour glibc to your processor. AMD also has features that Intel systems\ndoesn't: 3dNow for example. The fact is that if your distro is\ncompatible with a plain Athlon, you can't use neighter SSE nor SSE2.\n\n> Of course, it doesn't really matter, since at the instant you hit the disk\n> even once, it's going to take a million cycles and any advantage you got from\n> saving single cycles is irrelevant.\n\nReally??? We're talking about high performance systems and every case\nis diferent. I once saw a ddr2 ram based storage once (like 1TB).\nBefore you say it, I don't understand how it works, but you won't lose\nyour data on a reboot or powerfailure. It was very expensive but\nreally solve this thing with the IO bottleneck. Even when your\nbottleneck is the IO, still makes no sense to waste CPU resources\nunnecessarily.\n\n> > Imagine that you are GCC and that you have two options in front of\n> > you: you can use FSQRT or FDIV plus 20 ADD/SUB.\n>\n> Could you please describe a reasonable case where GCC would have such an\n> option? I cannot imagine any.\nAs I said, it is an example. Take floatpoint divisions. You have\nplenty of ways of doing it: 387, MMX, SSE, 3dNow, etc... Here GCC have\nto make a choice. And this is only one case. Usually, compiler\noptimizations are really complex and the processor's timings counts a\nlot.\n\nAt every optimization the compile needs to mesure the quickest path,\nso it uses information on how the processor will run the code. If you\ntake a look the AMD's docs you will see that theirs SSE2\nimplementation is diferent from Intel's internally. So, sometimes the\nquickest path uses SSE2 and sometimes it doesn't. You also have to\ncount the costs of converting SSE registers to commom ones.\n\nIf you still can't imagine any case, you can read Intel's assembler\nreference. You'll see that there are a lot of ways of doing a lot of\nthings.\n\n> > An example that I know of: it's impossible to run my software at a\n> > high demanding customer without compiling it to the it's processor (I\n> > make 5 compilations on every release).\n>\n> What's \"your software\"? How can you make such assertions without backing them\n> up? How can you know that the same holds for PostgreSQL?\n>\n> As Mike said, point to the benchmarks showing this \"essential\" difference\n> between -O2 and -O2 -mcpu=pentium4 (or whatever). 
The only single worthwhile\n> difference I can think of, is that glibc can use the SYSENTER function if it\n> knows you have a 686 or higher (which includes AMD), and with recent kernels,\n> I'm not even sure if that is needed anymore.\n\nSteinar, you should really test it. I won't read the PostgreSQL source\nto point you were it could use SSE or SSE2 or whatever. And I won't\nread glibc's code.\n\nYou don't need to belive in what I'm saying. You can read GCC docs,\nIntel's assembler reference, AMD's docs about their processor and\nabout how diferent that arch is.\n\nBest regards,\nDaniel Colchete\n",
"msg_date": "Mon, 11 Dec 2006 11:09:13 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Oops! [email protected] (\"Daniel van Ham Colchete\") was seen spray-painting on a wall:\n> But, trust me on this one. It's worth it. \n\nNo, the point of performance analysis is that you *can't* trust the\npeople that say \"trust me on this one.\"\n\nIf you haven't got a benchmark where you can demonstrate a material\nand repeatable difference, then you're just some Gentoo \"speed racer\"\nmaking things up.\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://linuxdatabases.info/info/wp.html\nOne last point about metaphor, poetry, etc. As an example to\nillustrate these capabilities in Sastric Sanskrit, consider the\n\"bahuvrihi\" construct (literally \"man with a lot of rice\") which is\nused currently in linguistics to describe references outside of\ncompounds.\n",
"msg_date": "Mon, 11 Dec 2006 08:15:34 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "This is very very very true :-)!\n\nI just remebered one case with MySQL. When I changed the distro from\nConectiva 10 (rpm-based ended brazilian distro) to Gentoo, a MySQL\noperation that usually took 2 minutes to run, ended in 47 seconds.\nThis is absolutely vage. I don't have how to prove it to you. The old\nsituation doesn't even exists anymore because I used the same hardware\non the upgrade. And I can't mesure how each factor helped: compiling\nglibc and Mysql with good cflags, rebuilding my database in a ordered\nway, never kernel, etc.. All I know is that this process still runs\nwith less than 1 minute (my database is larger now).\n\nI used the very same hardware: P4 3.0Ghz SATA disk without RAID. And I\nonly upgraded because Conectiva's support to their version 10 ended\nand I need to keep my system up with the security patches.\n\nBest,\nDaniel\n\nOn 12/11/06, Michael Stone <[email protected]> wrote:\n> On Mon, Dec 11, 2006 at 09:05:56AM -0200, Daniel van Ham Colchete wrote:\n> >unfortunally I don't have any benchmarks right now.\n>\n> That's fairly normal for gentoo users pushing their compile options.\n>\n> Mike Stone\n",
"msg_date": "Mon, 11 Dec 2006 11:17:06 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "You are right Christopher.\n\nOkay. Let's solve this matter.\n\nWhat PostgreSQL benchmark software should I use???\n\nI'll test PostgreSQL 8.1 on a Fedora Core 6 and on a Gentoo. I'll get\nthe same version FC6 uses and install it at my Gentoo. I'll use the\nsame hardware (diferent partitions to each).\n\nI'm not saying that Gentoo is faster than FC6. I just want to prove\nthat if you compile your software to make better use of your\nprocessor, it will run faster.\n\nIt might take a few days because I'm pretty busy right now at my job.\n\nBest regards,\nDaniel\n\nOn 12/11/06, Christopher Browne <[email protected]> wrote:\n> Oops! [email protected] (\"Daniel van Ham Colchete\") was seen spray-painting on a wall:\n> > But, trust me on this one. It's worth it.\n>\n> No, the point of performance analysis is that you *can't* trust the\n> people that say \"trust me on this one.\"\n>\n> If you haven't got a benchmark where you can demonstrate a material\n> and repeatable difference, then you're just some Gentoo \"speed racer\"\n> making things up.\n> --\n> select 'cbbrowne' || '@' || 'acm.org';\n> http://linuxdatabases.info/info/wp.html\n> One last point about metaphor, poetry, etc. As an example to\n> illustrate these capabilities in Sastric Sanskrit, consider the\n> \"bahuvrihi\" construct (literally \"man with a lot of rice\") which is\n> used currently in linguistics to describe references outside of\n> compounds.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n",
"msg_date": "Mon, 11 Dec 2006 11:31:48 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 11:09:13AM -0200, Daniel van Ham Colchete wrote:\n>> You know what? I don't.\n> So test it yourself.\n\nYou're making the claims, you're supposed to be proving them...\n\n> As I said, it is an example. Take floatpoint divisions. You have\n> plenty of ways of doing it: 387, MMX, SSE, 3dNow, etc... Here GCC have\n> to make a choice. \n\nNo, you don't. MMX, SSE and 3Dnow! will all give you the wrong result\n(reduced precision). SSE2, on the other hand, has double precision floats, so\nyou might have a choice there -- except that PostgreSQL doesn't really do a\nlot of floating-point anyhow.\n\n> And this is only one case. Usually, compiler optimizations are really\n> complex and the processor's timings counts a lot.\n\nYou keep asserting this, with no good backing.\n\n> If you still can't imagine any case, you can read Intel's assembler\n> reference. You'll see that there are a lot of ways of doing a lot of\n> things.\n\nI've been programming x86 assembler for ten years or so...\n\n> Steinar, you should really test it. I won't read the PostgreSQL source\n> to point you were it could use SSE or SSE2 or whatever. And I won't\n> read glibc's code.\n\nThen you should stop making these sort of wild claims.\n\n> You don't need to belive in what I'm saying. You can read GCC docs,\n> Intel's assembler reference, AMD's docs about their processor and\n> about how diferent that arch is.\n\nI have.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 11 Dec 2006 14:34:00 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Steinar H. Gunderson <[email protected]> wrote:\n> On Mon, Dec 11, 2006 at 11:09:13AM -0200, Daniel van Ham Colchete wrote:\n> >> You know what? I don't.\n> > So test it yourself.\n>\n> You're making the claims, you're supposed to be proving them...\n>\n> > As I said, it is an example. Take floatpoint divisions. You have\n> > plenty of ways of doing it: 387, MMX, SSE, 3dNow, etc... Here GCC have\n> > to make a choice.\n>\n> No, you don't. MMX, SSE and 3Dnow! will all give you the wrong result\n> (reduced precision). SSE2, on the other hand, has double precision floats, so\n> you might have a choice there -- except that PostgreSQL doesn't really do a\n> lot of floating-point anyhow.\n>\n> > And this is only one case. Usually, compiler optimizations are really\n> > complex and the processor's timings counts a lot.\n>\n> You keep asserting this, with no good backing.\n>\n> > If you still can't imagine any case, you can read Intel's assembler\n> > reference. You'll see that there are a lot of ways of doing a lot of\n> > things.\n>\n> I've been programming x86 assembler for ten years or so...\nSo, I'm a newbie to you. I learned x86 assembler last year.\n\n> > Steinar, you should really test it. I won't read the PostgreSQL source\n> > to point you were it could use SSE or SSE2 or whatever. And I won't\n> > read glibc's code.\n>\n> Then you should stop making these sort of wild claims.\n>\n> > You don't need to belive in what I'm saying. You can read GCC docs,\n> > Intel's assembler reference, AMD's docs about their processor and\n> > about how diferent that arch is.\n>\n> I have.\n>\n> /* Steinar */\n\nOkay, I'll do the benchmarks. Just sent an e-mail about this to the\nlist. If you have any sugestions of how to make the benchmark please\nlet-me know.\nI like when I prove myself wrong. Although it's much better when I'm\nright :-)...\n\nBest regards,\nDaniel Colchete\n",
"msg_date": "Mon, 11 Dec 2006 11:45:59 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 11:17:06AM -0200, Daniel van Ham Colchete wrote:\n> I just remebered one case with MySQL. When I changed the distro from\n> Conectiva 10 (rpm-based ended brazilian distro) to Gentoo, a MySQL\n> operation that usually took 2 minutes to run, ended in 47 seconds.\n\nHow do you know that this improvement had _anything_ to do with the use of\ndifferent optimization flags? Were even the MySQL versions or configuration\nthe same?\n\n> This is absolutely vage.\n\nIndeed it is.\n\n> I don't have how to prove it to you.\n\nNo, but you should stop making this sort of \"absolutely essential\" claims if\nyou can't.\n\n> And I can't mesure how each factor helped: compiling glibc and Mysql with\n> good cflags, rebuilding my database in a ordered way, never kernel, etc..\n\nExactly. So why are you attributing it to the first factor only? And why do\nyou think this would carry over to PostgreSQL?\n\nRemember, anecdotal evidence isn't.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 11 Dec 2006 15:02:56 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 11:31:48AM -0200, Daniel van Ham Colchete wrote:\n> What PostgreSQL benchmark software should I use???\n\nLook up the list archives; search for \"TPC\".\n\n> I'll test PostgreSQL 8.1 on a Fedora Core 6 and on a Gentoo. I'll get\n> the same version FC6 uses and install it at my Gentoo. I'll use the\n> same hardware (diferent partitions to each).\n\nWhy do you want to compare FC6 and Gentoo? Wasn't your point that the -march=\nwas supposed to be the relevant factor here? In that case, you want to keep\nall other things equal; so use the same distribution, only with -O2\n-march=i686 vs. -march=athlon-xp (or whatever).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 11 Dec 2006 15:06:38 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Daniel van Ham Colchete <[email protected]> wrote:\n> But, trust me on this one. It's worth it. Think of this: PostgreSQL\n> and GNU LibC use a lot of complex algorithms: btree, hashes,\n> checksums, strings functions, etc... And you have a lot of ways to\n> compile it into binary code. Now you have Pentium4's vectorization\n> that allow you to run plenty of instructions in paralell, but AMD\n> doesn't have this. Intel also have SSE2 that makes double-precision\n> floatpoint operations a lot faster, AMD also doesn't have this (at\n> least on 32bits). Now imagine that you're RedHat and that you have to\n> deliver one CD to AMD and Intel servers. That means you can't use any\n> AMD-specific or Intel-specific tecnology at the binary level.\n\nAMD processors since the K6-2 and I think Intel ones since P-Pro are\nessentially RISC processors with a hardware microcode compiler that\ntranslates and reorganizes instructions on the fly. Instruction\nchoice and ordering was extremely important in older 32 bit\narchitectures (like the 486) but is much less important these days. I\nthink you will find that an optimized glibc might be faster in\nspecific contrived cases, the whole is unfortunately less than the sum\nof its parts.\n\nWhile SSE2 might be able to optimize things like video decoding and\nthe like, for most programs it's of little benifit and IMO a waste of\ntime. Also as others pointed out things like cache hits/misses and\ni/o considerations are actually much more important than instruction\nexecution speed. We ran Gentoo here for months and did not to be\nfaster enough to merit the bleeding edge quirks it has for production\nenvironments.\n\nIf you dig assembly, there was an interesting tackle of the spinlocks\ncode on the hackers list last year IIRC.\n\nmerlin\n",
"msg_date": "Mon, 11 Dec 2006 09:47:14 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "[email protected] (\"Daniel van Ham Colchete\") writes:\n> You are right Christopher.\n>\n> Okay. Let's solve this matter.\n>\n> What PostgreSQL benchmark software should I use???\n\npgbench is one option. \n\nThere's a TPC-W at pgFoundry\n(<http://pgfoundry.org/projects/tpc-w-php/>). \n\nThere's the Open Source Database Benchmark.\n(<http://osdb.sourceforge.net/>)\n\nThose are three reasonable options.\n\n> I'll test PostgreSQL 8.1 on a Fedora Core 6 and on a Gentoo. I'll get\n> the same version FC6 uses and install it at my Gentoo. I'll use the\n> same hardware (diferent partitions to each).\n\nWrong approach. You'll be comparing apples to oranges, because Gentoo\nand Fedora pluck sources from different points in the source code\nstream.\n\nIn order to prove what you want to prove, you need to run the\nbenchmarks all on Gentoo, where you run with 4 categorizations:\n\n 1. Where you run PostgreSQL and GLIBC without any processor-specific\n optimizations\n\n 2. Where you run PostgreSQL and GLIBC with all relevant\n processor-specific optimizations\n\n 3. Where you run PostgreSQL with, and GLIBC without\n processor-specific optimizations\n\n 4. Where you run PostgreSQL without, and GLIBC with processor-specific\n optimizations\n\nThat would allow one to clearly distinguish which optimizations are\nparticularly relevant.\n\n> I'm not saying that Gentoo is faster than FC6. I just want to prove\n> that if you compile your software to make better use of your\n> processor, it will run faster.\n>\n> It might take a few days because I'm pretty busy right now at my\n> job.\n\nI expect that you'll discover, if you actually do these tests, that\nthis belief is fairly much nonsense.\n\n- Modern CPUs do a huge amount of on-CPU self-tuning.\n\n- CPU features that could have a material effect tend to be unusable\n when compiling general purpose libraries and applications. GCC\n doesn't generate MMX-like instructions.\n\n- Database application performance tends to be I/O driven.\n\n- When database application performance *isn't* I/O driven, it is\n likely to be driven by cache management, which compiler options\n won't affect.\n-- \noutput = reverse(\"ofni.secnanifxunil\" \"@\" \"enworbbc\")\nhttp://linuxdatabases.info/info/sgml.html\n\"very few people approach me in real life and insist on proving they\nare drooling idiots.\" -- Erik Naggum, comp.lang.lisp\n",
"msg_date": "Mon, 11 Dec 2006 10:47:51 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
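A note on mechanics: the PostgreSQL half of the four-way matrix Chris describes can be driven by building the same sources twice into separate prefixes; the glibc half has to come from the distribution itself (e.g. a CFLAGS-rebuilt Gentoo), which is not shown here. A minimal sketch only — the source directory, install prefixes and -march values are placeholders, not anything from the thread:

#!/bin/sh
# Sketch: build one generic and one CPU-tuned PostgreSQL from the same sources
# so pgbench can be pointed at either installation.
SRC=$PWD/postgresql-8.1.5            # assumed source directory

for target in generic pentium4; do
    case $target in
        generic)  FLAGS="-O2 -march=i686" ;;
        pentium4) FLAGS="-O2 -march=pentium4" ;;
    esac
    mkdir -p build-$target
    (cd build-$target &&
     "$SRC/configure" --prefix=/usr/local/pgsql-$target CFLAGS="$FLAGS" &&
     make && make install)
done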
{
"msg_contents": "This definitely is the correct approach.\n\nActually, Daniel van Ham Colchete may not be as \"all wet\" as some \naround here think. We've had previous data that shows that pg can \nbecome CPU bound (see previous posts by Josh Berkus and others \nregarding CPU overhead in what should be IO bound tasks).\n\nIn addition, we know that x86 compatible 64b implementations differ \nenough between AMD and Intel products that it sometimes shows on benches.\n\nEvidence outside the DBMS arena supports the hypothesis that recent \nCPUs are needing more hand-holding and product specific compiling, \nnot less, compared to their previous versions.\n\nSide Note: I wonder what if anything pg could gain from using SWAR \ninstructions (SSE*, MMX, etc)?\n\nI'd say the fairest attitude is to do everything we can to support \nhaving the proper experiments done w/o presuming the results.\n\nRon Peacetree\n\n\nAt 10:47 AM 12/11/2006, Chris Browne wrote:\n\n>In order to prove what you want to prove, you need to run the\n>benchmarks all on Gentoo, where you run with 4 categorizations:\n>\n> 1. Where you run PostgreSQL and GLIBC without any processor-specific\n> optimizations\n>\n> 2. Where you run PostgreSQL and GLIBC with all relevant\n> processor-specific optimizations\n>\n> 3. Where you run PostgreSQL with, and GLIBC without\n> processor-specific optimizations\n>\n> 4. Where you run PostgreSQL without, and GLIBC with processor-specific\n> optimizations\n>\n>That would allow one to clearly distinguish which optimizations are\n>particularly relevant.\n>\n> > I'm not saying that Gentoo is faster than FC6. I just want to prove\n> > that if you compile your software to make better use of your\n> > processor, it will run faster.\n> >\n> > It might take a few days because I'm pretty busy right now at my\n> > job.\n>\n>I expect that you'll discover, if you actually do these tests, that\n>this belief is fairly much nonsense.\n>\n>- Modern CPUs do a huge amount of on-CPU self-tuning.\n>\n>- CPU features that could have a material effect tend to be unusable\n> when compiling general purpose libraries and applications. GCC\n> doesn't generate MMX-like instructions.\n>\n>- Database application performance tends to be I/O driven.\n>\n>- When database application performance *isn't* I/O driven, it is\n> likely to be driven by cache management, which compiler options\n> won't affect.\n\n",
"msg_date": "Mon, 11 Dec 2006 12:15:51 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 12:15:51PM -0500, Ron wrote:\n>I'd say the fairest attitude is to do everything we can to support \n>having the proper experiments done w/o presuming the results.\n\nWho's presuming results?[1] It is fair to say that making extraordinary \nclaims without any evidence should be discouraged. It's also fair to say \nthat if there are specific things that need cpu-specific tuning they'll \nbe fairly limited critical areas (e.g., locks) which would probably be \nbetter implemented with a hand-tuned code and runtime cpu detection than \nby magical mystical compiler invocations.\n\nMike Stone\n\n[1] I will say that I have never seen a realistic benchmark of general \ncode where the compiler flags made a statistically significant \ndifference in the runtime. There are some particularly cpu-intensive \ncodes, like some science simulations or encoding routines where they \nmatter, but that's not the norm--and many of those algorithms already \nhave hand-tuned versions which will outperform autogenerated code. You'd \nthink that with all the talk that the users of certain OS's generate \nabout CFLAG settings, there'd be some well-published numbers backing up \nthe hype. At any rate if there were numbers to back the claim then I \nthink they could certainly be considered without prejudice.\n",
"msg_date": "Mon, 11 Dec 2006 12:31:08 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Statements like these can not be reasonably interpreted in any manner \n_except_ that of presuming the results:\n\n\"I expect that you'll discover, if you actually do these tests, that \nthis belief (that using arch specific compiler options lead to \nbetter performing SW) is fairly much nonsense.\"\n\n\"...IMO a waste of time...\"\n\netc\n\nThe correct objective response to claims w/o evidence is to request \nevidence, and to do everything we can to support it being properly \ngathered. Not to try to discourage the claimant from even trying by \nganging up on them with multiple instances of Argument From Authority \nor variations of Ad Hominem attacks.\n(The validity of the claim has nothing to do with the skills or \nexperience of the claimant or anyone else in the discussion. Only on \nthe evidence.)\n\n It is a tad unfair and prejudicial to call claims that CPU \noptimizations matter to the performance of DB product \"extraordinary\".\nEvidence outside the DBMS field exists; and previous posts here show \nthat pg can indeed become CPU-bound during what should be IO bound tasks.\nAt the moment, Daniel's claims are not well supported. That is far \ndifferent from being \"extraordinary\" given the current circumstantial \nevidence.\n\nLet's also bear in mind that as a community project, we can use all \nthe help we can get. Driving potential resources away is in \nopposition to that goal.\n\n[1] The evidence that arch specific flags matter to performance can \nbe found as easily as recompiling your kernel or your \ncompiler. While it certainly could be argued how \"general purpose\" \nsuch SW is, the same could be said for just about any SW at some \nlevel of abstraction.\n\nRon Peacetree\n\n\nAt 12:31 PM 12/11/2006, Michael Stone wrote:\n>On Mon, Dec 11, 2006 at 12:15:51PM -0500, Ron wrote:\n>>I'd say the fairest attitude is to do everything we can to support \n>>having the proper experiments done w/o presuming the results.\n>\n>Who's presuming results?[1] It is fair to say that making \n>extraordinary claims without any evidence should be discouraged. \n>It's also fair to say that if there are specific things that need \n>cpu-specific tuning they'll be fairly limited critical areas (e.g., \n>locks) which would probably be better implemented with a hand-tuned \n>code and runtime cpu detection than by magical mystical compiler invocations.\n>\n>Mike Stone\n>\n>[1] I will say that I have never seen a realistic benchmark of \n>general code where the compiler flags made a statistically \n>significant difference in the runtime. There are some particularly \n>cpu-intensive codes, like some science simulations or encoding \n>routines where they matter, but that's not the norm--and many of \n>those algorithms already have hand-tuned versions which will \n>outperform autogenerated code. You'd think that with all the talk \n>that the users of certain OS's generate about CFLAG settings, \n>there'd be some well-published numbers backing up the hype. At any \n>rate if there were numbers to back the claim then I think they could \n>certainly be considered without prejudice.\n\n",
"msg_date": "Mon, 11 Dec 2006 13:20:50 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Michael,\n\nOn 12/11/06 9:31 AM, \"Michael Stone\" <[email protected]> wrote:\n\n> [1] I will say that I have never seen a realistic benchmark of general\n> code where the compiler flags made a statistically significant\n> difference in the runtime.\n\nHere's one - I wrote a general purpose Computational Fluid Dynamics analysis\nmethod used by hundreds of people to perform aircraft and propulsion systems\nanalysis. Compiler flag tuning would speed it up by factors of 2-3 or even\nmore on some architectures. The reason it was so effective is that the\nstructure of the code was designed to be general, but also to expose the\ncritical performance sections in a way that the compilers could use - deep\npipelining/vectorization, unrolling, etc, were carefully made easy for the\ncompilers to exploit in critical sections. Yes, this made the code in those\nsections harder to read, but it was a common practice because it might take\nweeks of runtime to get an answer and performance mattered.\n\nThe problem I see with general purpose DBMS code the way it's structured in\npgsql (and others) is that many of the critical performance sections are\nembedded in abstract interfaces that obscure them from optimization. An\nexample is doing a simple \"is equal to\" operation has many layers\nsurrounding it to ensure that UDFs can be declared and that special\ncomparison semantics can be accomodated. But if you're simply performing a\nlarge number of INT vs. INT comparisons, it will be thousands of times\nslower than a CPU native operation because of the function call overhead,\netc. I've seen presentations that show IPC of Postgres at about 0.5, versus\nthe 2-4 possible from the CPU.\n\nColumn databases like C-Store remove these abstractions at planner time to\nexpose native operations in large chunks to the compiler and the IPC\nreflects that - typically 1+ and as high as 2.5. If we were to redesign the\nexecutor and planner to emulate that same structure we could achieve similar\nspeedups and the compiler would matter more.\n\n- Luke\n\n\n",
"msg_date": "Mon, 11 Dec 2006 10:30:55 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Ron wrote:\n> We are not going to get valuable contributions nor help people become \n> more valuable to the community by \"flaming them into submission\".\n> \n> Let's support getting definitive evidence. No matter who brings it to \n> the table ;-)\n\nThanks, Ron, for a voice of respect and reason. Since I first started using Usenet back in 1984, inexplicable rudeness has been a plague on otherwise civilized people. We're a community, we're all in this to help one another. Sometimes we give good advice, and sometimes even those \"wearing the mantle of authority\" can make boneheaded comments. I know I do, and when it happens, I always appreciate it when I'm taken to task with good humor and tolerance.\n\nWhen someone comes to this forum with an idea you disagree with, no matter how brash or absurd their claims, it's so easy to challenge them with grace and good humor, rather than chastizing with harsh words and driving someone from our community. If you're right, you will have taught a valuable lesson to someone. And if on occasion a newcomer shows us something new, then we've all learned. Either way, we have a new friend and contributor to the community.\n\nCraig\n",
"msg_date": "Mon, 11 Dec 2006 10:46:26 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 01:20:50PM -0500, Ron wrote:\n>(The validity of the claim has nothing to do with the skills or \n>experience of the claimant or anyone else in the discussion. Only on \n>the evidence.)\n\nPlease go back and reread the original post. I don't think the response \nwas unwarranted.\n\n> It is a tad unfair and prejudicial to call claims that CPU \n>optimizations matter to the performance of DB product \"extraordinary\".\n>Evidence outside the DBMS field exists; and previous posts here show \n>that pg can indeed become CPU-bound during what should be IO bound tasks.\n>At the moment, Daniel's claims are not well supported. That is far \n>different from being \"extraordinary\" given the current circumstantial \n>evidence.\n\nNo, they're extraordinary regardless of whether postgres is CPU bound. \nThe question is whether cpu-specific compiler flags will have a \nsignificant impact--which is, historically, fairly unlikely. Far more \nlikely is that performance can be improved with either a \nnon-cpu-specific optimization (e.g., loop unrolling vs not) or with an \nalgorithmic enhancement. \n\nMore importantly, you're arguing *your own* point, not the original \nclaim. I'll refresh your memory: \"My Linux is not an Intel-AMD binary \ncompatible turtle like Fedora/RedHat/SUSE/... It's really important to \nhave your GLIBC compiled for your processor. It is essencial for \nperformance.\" You wanna draw the line between that (IMO, extraordinary) \nclaim and the rational argument that you're trying to substitute in its \nplace?\n\n>[1] The evidence that arch specific flags matter to performance can \n>be found as easily as recompiling your kernel or your \n>compiler. \n\nThen, please, point to the body of evidence. IME, the results of such \nefforts aren't statistically all that signficant on most workloads. I'm \nsure there are edge cases, but it's certainly not going to be on my top \nten things to look at when tuning a database system. (If your kernel's \ncpu utilization is the bottleneck in your database, you've probably got \nbigger problems than compiler flags can solve.) Where you get the real \nbig benefits in a (linux) kernel recompile is when you select code \nthat's specifically tuned for a particular processor--not from \narch-specific gcc flags--and those sorts of things are increasingly \nmoving toward boot-time autotuning rather than compile-time manual \ntuning for obvious reasons.\n\nMike Stone\n",
"msg_date": "Mon, 11 Dec 2006 13:47:28 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 10:30:55AM -0800, Luke Lonergan wrote:\n>Here's one - I wrote a general purpose Computational Fluid Dynamics analysis\n>method used by hundreds of people to perform aircraft and propulsion systems\n>analysis. \n\nThat's kinda the opposite of what I meant by general code. I was trying \n(perhaps poorly) to distinguish between scientific codes and other \nstuff (especially I/O or human interface code).\n\n>Compiler flag tuning would speed it up by factors of 2-3 or even\n>more on some architectures. The reason it was so effective is that the\n>structure of the code was designed to be general, but also to expose the\n>critical performance sections in a way that the compilers could use - deep\n>pipelining/vectorization, unrolling, etc, were carefully made easy for the\n>compilers to exploit in critical sections. \n\nIt also sounds like code specifically written to take advantage of \ncompiler techniques, rather than random code thrown at a pile of cflags. \nI don't disagree that it is possible to get performance improvements if \ncode is written to be performant code; I do (and did) disagree with the \nidea that you'll get huge performance improvements by taking regular old \nC application code and playing with compiler flags. \n\n>Yes, this made the code in those\n>sections harder to read, but it was a common practice because it might \n>take weeks of runtime to get an answer and performance mattered.\n\nIMO that's appropriate for some science codes (although I think even \nthat sector is beginning to find that they've gone too far in a lot of \nways), but for a database I'd rather have people debugging clean, readable \ncode than risking my data to something incomprehensible that runs in \noptimal time.\n\n>Column databases like C-Store remove these abstractions at planner time to\n>expose native operations in large chunks to the compiler and the IPC\n>reflects that - typically 1+ and as high as 2.5. If we were to redesign the\n>executor and planner to emulate that same structure we could achieve similar\n>speedups and the compiler would matter more.\n\ngcc --make-it-really-fast-by-rewriting-it-from-the-ground-up?\n\nMike Stone\n",
"msg_date": "Mon, 11 Dec 2006 13:57:28 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Ron <[email protected]> wrote:\n> Statements like these can not be reasonably interpreted in any manner\n> _except_ that of presuming the results:\n>\n> \"I expect that you'll discover, if you actually do these tests, that\n> this belief (that using arch specific compiler options lead to\n> better performing SW) is fairly much nonsense.\"\n>\n> \"...IMO a waste of time...\"\n>\n> etc\n>\n> The correct objective response to claims w/o evidence is to request\n> evidence, and to do everything we can to support it being properly\n> gathered. Not to try to discourage the claimant from even trying by\n> ganging up on them with multiple instances of Argument From Authority\n> or variations of Ad Hominem attacks.\n> (The validity of the claim has nothing to do with the skills or\n> experience of the claimant or anyone else in the discussion. Only on\n> the evidence.)\n\n/shrugs, this is not debate class, I just happened to have barked up\nthis particular tree before, and decided to share my insights from it.\n A lot of the misunderstanding here stems from legacy perceptions\nabout how cpus work, not to mention the entire architecture. If\nsomebody produces hard facts to the contrary, great, and I encourage\nthem to do so.\n\nalso, some people posting here, not necessarily me, are authority figures. :-)\n\nmerlin\n",
"msg_date": "Mon, 11 Dec 2006 14:28:25 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 01:47 PM 12/11/2006, Michael Stone wrote:\n>On Mon, Dec 11, 2006 at 01:20:50PM -0500, Ron wrote:\n>>(The validity of the claim has nothing to do with the skills or \n>>experience of the claimant or anyone else in the discussion. Only \n>>on the evidence.)\n>\n>Please go back and reread the original post. I don't think the \n>response was unwarranted.\n\nSo he's evidently young and perhaps a trifle over-enthusiast. We \nwere once too. ;-)\n\nWe are not going to get valuable contributions nor help people become \nmore valuable to the community by \"flaming them into submission\".\n\n...and who knows, =properly= done experiment may provide both \nsurprises and unexpected insights/benefits.\n\nI agree completely with telling him he needs to get better evidence \nand even with helping him understand how he should go about getting it.\n\nIt should be noted that his opposition has not yet done these \nexperiments either. (Else they could just simply point to the \nresults that refute Daniel's hypothesis.)\n\nThe reality is that a new CPU architecture and multiple new memory \ntechnologies are part of this discussion. I certainly do not expect \nthem to change the fundamental thinking regarding how to get best \nperformance for a DBMS. OTOH, there are multiple valid reasons to \ngive such new stuff a thorough and rigorous experimental shake-down.\n\nATM, =both= sides of this debate are lacking evidence for their POV.\n\nLet's support getting definitive evidence. No matter who brings it \nto the table ;-)\nRon Peacetree \n\n",
"msg_date": "Mon, 11 Dec 2006 14:51:09 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Steinar H. Gunderson <[email protected]> wrote:\n> On Mon, Dec 11, 2006 at 11:17:06AM -0200, Daniel van Ham Colchete wrote:\n> > I just remebered one case with MySQL. When I changed the distro from\n> > Conectiva 10 (rpm-based ended brazilian distro) to Gentoo, a MySQL\n> > operation that usually took 2 minutes to run, ended in 47 seconds.\n>\n> How do you know that this improvement had _anything_ to do with the use of\n> different optimization flags? Were even the MySQL versions or configuration\n> the same?\n>\n> > This is absolutely vage.\n>\n> Indeed it is.\nFinally we agreed on something.\n\n>\n> > I don't have how to prove it to you.\n>\n> No, but you should stop making this sort of \"absolutely essential\" claims if\n> you can't.\n>\n> > And I can't mesure how each factor helped: compiling glibc and Mysql with\n> > good cflags, rebuilding my database in a ordered way, never kernel, etc..\n>\n> Exactly. So why are you attributing it to the first factor only? And why do\n> you think this would carry over to PostgreSQL?\n>\n> Remember, anecdotal evidence isn't.\n\nBut that's exactly what I said. I'm not attributing this case to the\noptimization factor. As I said there are a lot of factors involved.\nThe MySQL version change of a minor upgrade (from 4.1.15 to 4.1.21).\n\nSteinar, I say I'll do the benchmark and it will be the end of the story.\n\n>\n> /* Steinar */\n",
"msg_date": "Mon, 11 Dec 2006 18:07:26 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 02:28 PM 12/11/2006, Merlin Moncure wrote:\n\n>also, some people posting here, not necessarily me, are authority figures. :-)\n>\n>merlin\n\nNoam Chomsky was one of the most influential thinkers in Linguistics \nto yet have lived. He was proven wrong a number of times. Even \nwithin Linguistics.\nThere are plenty of other historical examples.\n\nAs others have said, opinion without evidence and logic is just that- opinion.\nAnd even Expert Opinion has been known to be wrong. Sometimes very much so.\n\nPart of what makes an expert an expert is that they can back up their \nstatements with evidence and logic that are compelling even to the \nnon expert when asked to do so.\n\nAll I'm saying is let's all remember how \"assume\" is spelled and \nsupport the getting of some hard data.\nRon Peacetree\n\n\n",
"msg_date": "Mon, 11 Dec 2006 15:08:02 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Michael,\n\nOn 12/11/06 10:57 AM, \"Michael Stone\" <[email protected]> wrote:\n\n> That's kinda the opposite of what I meant by general code. I was trying\n> (perhaps poorly) to distinguish between scientific codes and other\n> stuff (especially I/O or human interface code).\n\nYes - choice of language has often been a differentiator in these markets -\nLISP versus FORTRAN, C++ versus SQL/DBMS. This isn't just about science,\nit's also in Business Intelligence - e.g. Special purpose datamining code\nversus algorithms expressed inside a data management engine.\n \n> It also sounds like code specifically written to take advantage of\n> compiler techniques, rather than random code thrown at a pile of cflags.\n> I don't disagree that it is possible to get performance improvements if\n> code is written to be performant code; I do (and did) disagree with the\n> idea that you'll get huge performance improvements by taking regular old\n> C application code and playing with compiler flags.\n\nAgreed - that's my point exactly.\n \n> IMO that's appropriate for some science codes (although I think even\n> that sector is beginning to find that they've gone too far in a lot of\n> ways), but for a database I'd rather have people debugging clean, readable\n> code than risking my data to something incomprehensible that runs in\n> optimal time.\n\nCertainly something of a compromise is needed.\n \n>> Column databases like C-Store remove these abstractions at planner time to\n>\n> gcc --make-it-really-fast-by-rewriting-it-from-the-ground-up?\n\nMaybe not from ground->up, but rather from about 10,000 ft -> 25,000 ft?\n\nThere are some who have done a lot of work studying the impact of more\nefficient DBMS, see here:\n http://homepages.cwi.nl/~boncz/x100.html\n\n- Luke\n\n\n",
"msg_date": "Mon, 11 Dec 2006 12:09:09 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 02:51:09PM -0500, Ron wrote:\n>Let's support getting definitive evidence.\n\nSince nobody opposed the concept of contrary evidence, I don't suppose \nyou're fighting an uphill battle on that particular point.\n\nIt's fine to get preachy about supporting intellectual curiosity, but do \nremember that it's a waste of everyone's (limited) time to give equal \ntime to all theories. If someone comes to you with an idea for a \nperpetual motion machine your effort is probably better spent on \nsomething other than helping him build it, regardless of whether that \nsomehow seems unfair. (Now if he brings a working model, that's a \ndifferent story...) Heck, even building a bunch of non-working perpetual \nmotion machines as a demonstration is a waste of time, because it's \nalways easy to say \"well, if you had just...\". That's precisely why the \nperson bringing the extraordinary claim is also expected to bring the \nproof, rather than expecting that everyone else prove the status quo.\n\nMike Stone\n",
"msg_date": "Mon, 11 Dec 2006 15:09:49 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Steinar H. Gunderson <[email protected]> wrote:\n> On Mon, Dec 11, 2006 at 11:31:48AM -0200, Daniel van Ham Colchete wrote:\n> > What PostgreSQL benchmark software should I use???\n>\n> Look up the list archives; search for \"TPC\".\n>\n> > I'll test PostgreSQL 8.1 on a Fedora Core 6 and on a Gentoo. I'll get\n> > the same version FC6 uses and install it at my Gentoo. I'll use the\n> > same hardware (diferent partitions to each).\n>\n> Why do you want to compare FC6 and Gentoo? Wasn't your point that the -march=\n> was supposed to be the relevant factor here? In that case, you want to keep\n> all other things equal; so use the same distribution, only with -O2\n> -march=i686 vs. -march=athlon-xp (or whatever).\n>\n> /* Steinar */\nUsing Gentoo is just a easy way to make cflag optimizations to all the\nother libs as well: glibc, ...\n\nI can also mesure performance on Gentoo with cflag optimized\nPostgreSQL and plain PostgreSQL as well.\n",
"msg_date": "Mon, 11 Dec 2006 18:10:12 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/11/06, Chris Browne <[email protected]> wrote:\n> [email protected] (\"Daniel van Ham Colchete\") writes:\n> > You are right Christopher.\n> >\n> > Okay. Let's solve this matter.\n> >\n> > What PostgreSQL benchmark software should I use???\n>\n> pgbench is one option.\n>\n> There's a TPC-W at pgFoundry\n> (<http://pgfoundry.org/projects/tpc-w-php/>).\n>\n> There's the Open Source Database Benchmark.\n> (<http://osdb.sourceforge.net/>)\n>\n> Those are three reasonable options.\nThanks Chris, I'm going to take a look at those options.\n\n>\n> > I'll test PostgreSQL 8.1 on a Fedora Core 6 and on a Gentoo. I'll get\n> > the same version FC6 uses and install it at my Gentoo. I'll use the\n> > same hardware (diferent partitions to each).\n>\n> Wrong approach. You'll be comparing apples to oranges, because Gentoo\n> and Fedora pluck sources from different points in the source code\n> stream.\n>\n> In order to prove what you want to prove, you need to run the\n> benchmarks all on Gentoo, where you run with 4 categorizations:\n>\n> 1. Where you run PostgreSQL and GLIBC without any processor-specific\n> optimizations\n>\n> 2. Where you run PostgreSQL and GLIBC with all relevant\n> processor-specific optimizations\n>\n> 3. Where you run PostgreSQL with, and GLIBC without\n> processor-specific optimizations\n>\n> 4. Where you run PostgreSQL without, and GLIBC with processor-specific\n> optimizations\n>\n> That would allow one to clearly distinguish which optimizations are\n> particularly relevant.\n\nGood ideia also. And it is much easier to do as well.\n",
"msg_date": "Mon, 11 Dec 2006 18:13:41 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Daniel van Ham Colchete wrote:\n> On 12/11/06, Steinar H. Gunderson <[email protected]> wrote:\n>> On Mon, Dec 11, 2006 at 11:31:48AM -0200, Daniel van Ham Colchete wrote:\n>> > What PostgreSQL benchmark software should I use???\n>>\n>> Look up the list archives; search for \"TPC\".\n>>\n>> > I'll test PostgreSQL 8.1 on a Fedora Core 6 and on a Gentoo. I'll get\n>> > the same version FC6 uses and install it at my Gentoo. I'll use the\n>> > same hardware (diferent partitions to each).\n>>\n>> Why do you want to compare FC6 and Gentoo? Wasn't your point that the \n>> -march=\n>> was supposed to be the relevant factor here? In that case, you want to \n>> keep\n>> all other things equal; so use the same distribution, only with -O2\n>> -march=i686 vs. -march=athlon-xp (or whatever).\n>>\n>> /* Steinar */\n> Using Gentoo is just a easy way to make cflag optimizations to all the\n> other libs as well: glibc, ...\n> \n> I can also mesure performance on Gentoo with cflag optimized\n> PostgreSQL and plain PostgreSQL as well.\n> \n\nI can certainly recall that when I switched from Fedora Core (2 or 3 \ncan't recall now) to Gentoo that the machine was \"faster\" for many \nactivities. Of course I can't recall precisely what now :-(.\n\nTo actually track down *why* and *what* make it faster is another story, \nand custom CFLAGS is only 1 of the possible factors: others could be:\n\n- different kernel versions (Gentoo would have possibly been later)\n- different kernel patches (both RedHat and Gentoo patch 'em)\n- different versions of glibc (Gentoo possibly later again).\n- different config options for glibc (not sure if they in fact are, but \nit's possible...)\n- kernel and glibc built with different versions of gcc (again I suspect \nGentoo may have used a later version)\n\nSo there are a lot of variables to consider if you want to settle this \ndebate once and for all :-)!\n\nBest wishes\n\nMark\n",
"msg_date": "Tue, 12 Dec 2006 11:13:30 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Hi yall,\n\nI made some preliminary tests.\n\nBefore the results, I would like to make some acknowledgments:\n1 - I didn't show any prove to any of the things I said until now.\n2 - It really is a waste of everyone's time to say one thing when I\ncan't prove it.\n\nBut all I said, is the knowledge I have been acumulating over the past\nfew years working on a project where optimization is important. After\nalgorithmic optimizations, compiler options is the second on my list\nand with my software they show measurable improvement. With the other\nsoftware I use, they seen to run faster, but I didn't measure it.\n\nTEST PROCEDURE\n================\nI ran this test at a Gentoo test machine I have here. It's a Pentium 4\n3.0GHz (I don't know witch P4) with 1 GB of RAM memory. It only uses\nSATA drives. I didn't changed my glibc (or any other lib) during the\ntest. I used GCC 3.4.6.\n\nI ran each test three times. So we can get an idea about average\nvalues and standard deviation.\n\nEach time I ran the test with the following commands:\ndropdb mydb\ncreatedb mydb\npgbench -i -s 10 mydb 2> /dev/null\npsql -c 'vacuum analyze' mydb\npsql -c 'checkpoint' mydb\nsync\npgbench -v -n -t 600 -c 5 mydb\n\nMy postgresql.conf was the default one, except for:\n\tfsync = <depends on the test>\n\tshared_buffers = 10000\n\twork_mem = 10240\n\nEvery test results should begin the above, but I removed it because\nit's always the same:\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 5\nnumber of transactions per client: 600\nnumber of transactions actually processed: 3000/3000\n\nTESTS RESULTS\n==============\nTEST 01: CFLAGS=\"-O2 -march=i686\" fsync=false\n\ntps = 734.948620 (including connections establishing)\ntps = 736.866642 (excluding connections establishing)\n\ntps = 713.225047 (including connections establishing)\ntps = 715.039059 (excluding connections establishing)\n\ntps = 721.769259 (including connections establishing)\ntps = 723.631065 (excluding connections establishing)\n\n\nTEST 02: CFLAGS=\"-O2 -march=i686\" fsync=true\n\ntps = 75.466058 (including connections establishing)\ntps = 75.485675 (excluding connections establishing)\n\ntps = 75.115797 (including connections establishing)\ntps = 75.135311 (excluding connections establishing)\n\ntps = 73.883113 (including connections establishing)\ntps = 73.901997 (excluding connections establishing)\n\n\nTEST 03: CFLAGS=\"-O2 -march=pentium4\" fsync=false\n\ntps = 846.337784 (including connections establishing)\ntps = 849.067017 (excluding connections establishing)\n\ntps = 829.476269 (including connections establishing)\ntps = 832.008129 (excluding connections establishing)\n\ntps = 831.416457 (including connections establishing)\ntps = 835.300001 (excluding connections establishing)\n\n\nTEST 04 CFLAGS=\"-O2 -march=pentium4\" fsync=true\n\ntps = 83.224016 (including connections establishing)\ntps = 83.248157 (excluding connections establishing)\n\ntps = 80.811892 (including connections establishing)\ntps = 80.834525 (excluding connections establishing)\n\ntps = 80.671406 (including connections establishing)\ntps = 80.693975 (excluding connections establishing)\n\n\nCONCLUSIONS\nEveryone can get their own conclusion. Mine is:\n\n1 - You have improvement when you compile your postgresql using\nprocessor specific tecnologies. With the fsync the you have an\nimprovement of 9% at the tps rate. 
Without the fsync, the improvement\nis of 15,6%.\n\n2 - You can still improve your indexes, sqls and everythingelse, this\nonly adds another possible improvment.\n\n3 - I can't prove this but I *think* that this is related to the fact\nthat GCC knows how to do the same thing better on each processor.\n\n4 - I'm still using source-based distros.\n\nWHAT NOW\nThere are other things I wish to test:\n 1 - What efect an optimized glibc has on PostgreSQL?\n 2 - How much improvement can I get playing with my postgresql.conf.\n 3 - What efect optimizations have with concurrency?\n 4 - What if I used Intel C++ Compiler instead of GCC?\n 5 - What if I use GCC 4.1.1 instead of GCC 3.4.6?\n\nI'm thinking about writing a script to make all the tests (more than 3\ntimes each), get the data and plot some graphs.\n\nI don't have the time right now to do it, maybe next week I'll have.\n\nI invite everyone to comment/sugest on the procedure or the results.\n\nBest regards,\nDaniel Colchete\n",
"msg_date": "Mon, 11 Dec 2006 20:22:42 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
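A rough sketch of the automation script Daniel says he is planning, reusing his per-run commands verbatim; the run count, database name and log file name are assumptions added here, not part of his post:

#!/bin/sh
# Repeat the pgbench procedure several times and collect the tps lines.
RUNS=10
DB=mydb
LOG=pgbench_results.log

for i in $(seq 1 $RUNS); do
    dropdb $DB 2>/dev/null
    createdb $DB
    pgbench -i -s 10 $DB 2>/dev/null
    psql -c 'vacuum analyze' $DB
    psql -c 'checkpoint' $DB
    sync
    echo "== run $i ==" >>$LOG
    pgbench -v -n -t 600 -c 5 $DB | grep '^tps' >>$LOG
done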
{
"msg_contents": "Daniel,\n\nGood stuff.\n\nCan you try this with just \"-O3\" versus \"-O2\"?\n\n- Luke\n\n\nOn 12/11/06 2:22 PM, \"Daniel van Ham Colchete\" <[email protected]>\nwrote:\n\n> Hi yall,\n> \n> I made some preliminary tests.\n> \n> Before the results, I would like to make some acknowledgments:\n> 1 - I didn't show any prove to any of the things I said until now.\n> 2 - It really is a waste of everyone's time to say one thing when I\n> can't prove it.\n> \n> But all I said, is the knowledge I have been acumulating over the past\n> few years working on a project where optimization is important. After\n> algorithmic optimizations, compiler options is the second on my list\n> and with my software they show measurable improvement. With the other\n> software I use, they seen to run faster, but I didn't measure it.\n> \n> TEST PROCEDURE\n> ================\n> I ran this test at a Gentoo test machine I have here. It's a Pentium 4\n> 3.0GHz (I don't know witch P4) with 1 GB of RAM memory. It only uses\n> SATA drives. I didn't changed my glibc (or any other lib) during the\n> test. I used GCC 3.4.6.\n> \n> I ran each test three times. So we can get an idea about average\n> values and standard deviation.\n> \n> Each time I ran the test with the following commands:\n> dropdb mydb\n> createdb mydb\n> pgbench -i -s 10 mydb 2> /dev/null\n> psql -c 'vacuum analyze' mydb\n> psql -c 'checkpoint' mydb\n> sync\n> pgbench -v -n -t 600 -c 5 mydb\n> \n> My postgresql.conf was the default one, except for:\n> fsync = <depends on the test>\n> shared_buffers = 10000\n> work_mem = 10240\n> \n> Every test results should begin the above, but I removed it because\n> it's always the same:\n> transaction type: TPC-B (sort of)\n> scaling factor: 10\n> number of clients: 5\n> number of transactions per client: 600\n> number of transactions actually processed: 3000/3000\n> \n> TESTS RESULTS\n> ==============\n> TEST 01: CFLAGS=\"-O2 -march=i686\" fsync=false\n> \n> tps = 734.948620 (including connections establishing)\n> tps = 736.866642 (excluding connections establishing)\n> \n> tps = 713.225047 (including connections establishing)\n> tps = 715.039059 (excluding connections establishing)\n> \n> tps = 721.769259 (including connections establishing)\n> tps = 723.631065 (excluding connections establishing)\n> \n> \n> TEST 02: CFLAGS=\"-O2 -march=i686\" fsync=true\n> \n> tps = 75.466058 (including connections establishing)\n> tps = 75.485675 (excluding connections establishing)\n> \n> tps = 75.115797 (including connections establishing)\n> tps = 75.135311 (excluding connections establishing)\n> \n> tps = 73.883113 (including connections establishing)\n> tps = 73.901997 (excluding connections establishing)\n> \n> \n> TEST 03: CFLAGS=\"-O2 -march=pentium4\" fsync=false\n> \n> tps = 846.337784 (including connections establishing)\n> tps = 849.067017 (excluding connections establishing)\n> \n> tps = 829.476269 (including connections establishing)\n> tps = 832.008129 (excluding connections establishing)\n> \n> tps = 831.416457 (including connections establishing)\n> tps = 835.300001 (excluding connections establishing)\n> \n> \n> TEST 04 CFLAGS=\"-O2 -march=pentium4\" fsync=true\n> \n> tps = 83.224016 (including connections establishing)\n> tps = 83.248157 (excluding connections establishing)\n> \n> tps = 80.811892 (including connections establishing)\n> tps = 80.834525 (excluding connections establishing)\n> \n> tps = 80.671406 (including connections establishing)\n> tps = 80.693975 (excluding connections 
establishing)\n> \n> \n> CONCLUSIONS\n> Everyone can get their own conclusion. Mine is:\n> \n> 1 - You have improvement when you compile your postgresql using\n> processor specific tecnologies. With the fsync the you have an\n> improvement of 9% at the tps rate. Without the fsync, the improvement\n> is of 15,6%.\n> \n> 2 - You can still improve your indexes, sqls and everythingelse, this\n> only adds another possible improvment.\n> \n> 3 - I can't prove this but I *think* that this is related to the fact\n> that GCC knows how to do the same thing better on each processor.\n> \n> 4 - I'm still using source-based distros.\n> \n> WHAT NOW\n> There are other things I wish to test:\n> 1 - What efect an optimized glibc has on PostgreSQL?\n> 2 - How much improvement can I get playing with my postgresql.conf.\n> 3 - What efect optimizations have with concurrency?\n> 4 - What if I used Intel C++ Compiler instead of GCC?\n> 5 - What if I use GCC 4.1.1 instead of GCC 3.4.6?\n> \n> I'm thinking about writing a script to make all the tests (more than 3\n> times each), get the data and plot some graphs.\n> \n> I don't have the time right now to do it, maybe next week I'll have.\n> \n> I invite everyone to comment/sugest on the procedure or the results.\n> \n> Best regards,\n> Daniel Colchete\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Mon, 11 Dec 2006 14:31:21 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, Dec 11, 2006 at 08:22:42PM -0200, Daniel van Ham Colchete wrote:\n>TEST 01: CFLAGS=\"-O2 -march=i686\" fsync=false\n>\n>tps = 734.948620 (including connections establishing)\n>tps = 736.866642 (excluding connections establishing)\n[snip]\n>TEST 03: CFLAGS=\"-O2 -march=pentium4\" fsync=false\n>\n>tps = 846.337784 (including connections establishing)\n>tps = 849.067017 (excluding connections establishing)\n\nCan anyone else reproduce these results? I'm on similar hardware (2.5GHz \nP4, 1.5G RAM) and my test results are more like this:\n\n(postgresql 8.2.0)\n\nCFLAGS=(default)\n\ntps = 527.300454 (including connections establishing)\ntps = 528.898671 (excluding connections establishing)\n\ntps = 517.874347 (including connections establishing)\ntps = 519.404970 (excluding connections establishing)\n\ntps = 534.934905 (including connections establishing)\ntps = 536.562150 (excluding connections establishing)\n\nCFLAGS=686\n\ntps = 525.179375 (including connections establishing)\ntps = 526.801278 (excluding connections establishing)\n\ntps = 557.821136 (including connections establishing)\ntps = 559.602414 (excluding connections establishing)\n\ntps = 532.142941 (including connections establishing)\ntps = 533.740209 (excluding connections establishing)\n\n\nCFLAGS=pentium4\n\ntps = 518.869825 (including connections establishing)\ntps = 520.394341 (excluding connections establishing)\n\ntps = 537.759982 (including connections establishing)\ntps = 539.402547 (excluding connections establishing)\n\ntps = 538.522198 (including connections establishing)\ntps = 540.200458 (excluding connections establishing)\n\nMike Stone\n",
"msg_date": "Mon, 11 Dec 2006 20:37:09 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "After a long battle with technology, [email protected] (Michael Stone), an earthling, wrote:\n> [1] I will say that I have never seen a realistic benchmark of\n> general code where the compiler flags made a statistically\n> significant difference in the runtime.\n\nWhen we were initially trying out PostgreSQL on AIX, I did some\n(limited, admittedly) comparisons between behaviour when compiled\nusing GCC 3.something, VisualAge C, and VisualAge C++. I did some\nmodifications of -O values; I didn't find differences amounting to\nmore than a percent or two between any of the combinations.\n\nIf there's to be a difference, anywhere, it ought to have figured\npretty prominently between a pretty elderly GCC version and IBM's top\nof the line PowerPC compiler.\n-- \noutput = reverse(\"gro.mca\" \"@\" \"enworbbc\")\nhttp://cbbrowne.com/info/linuxdistributions.html\n\"I doubt this language difference would confuse anybody unless you\nwere providing instructions on the insertion of a caffeine enema.\"\n-- On alt.coffee\n",
"msg_date": "Mon, 11 Dec 2006 22:27:46 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, 11 Dec 2006, Michael Stone wrote:\n\n> Can anyone else reproduce these results? I'm on similar hardware (2.5GHz P4, \n> 1.5G RAM)...\n\nThere are two likely candidates for why Daniel's P4 3.0GHz significantly \noutperforms your 2.5GHz system.\n\n1) Most 2.5GHZ P4 processors use a 533MHz front-side bus (FSB); most \n3.0GHZ ones use an 800MHz bus.\n\n2) A typical motherboard paired with a 2.5GHz era processor will have a \nsingle-channel memory interface; a typical 3.0GHZ era board supports \ndual-channel DDR.\n\nThese changes could easily explain the magnitude of difference in results \nyou're seeing, expecially when combined with a 20% greater raw CPU clock.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 12 Dec 2006 01:35:04 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> On Mon, 11 Dec 2006, Michael Stone wrote:\n>> Can anyone else reproduce these results? I'm on similar hardware (2.5GHz P4,\n>> 1.5G RAM)...\n\n> There are two likely candidates for why Daniel's P4 3.0GHz significantly \n> outperforms your 2.5GHz system.\n\nUm, you entirely missed the point: the hardware speedups you mention are\nquite independent of any compiler options. The numbers we are looking\nat are the relative speeds of two different compiles on the same\nhardware, not whether hardware A is faster than hardware B.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2006 01:47:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "Mike,\n\nare you using \"-mtune/-mcpu\" or \"-march\" with GCC?\nWitch GCC version? Are you working with a 32bits OS or 64bits?\n\nDaniel\n\nOn 12/11/06, Michael Stone <[email protected]> wrote:\n>\n> Can anyone else reproduce these results? I'm on similar hardware (2.5GHz\n> P4, 1.5G RAM) and my test results are more like this:\n>\n> (postgresql 8.2.0)\n>\n> CFLAGS=(default)\n>\n> tps = 527.300454 (including connections establishing)\n> tps = 528.898671 (excluding connections establishing)\n>\n> tps = 517.874347 (including connections establishing)\n> tps = 519.404970 (excluding connections establishing)\n>\n> tps = 534.934905 (including connections establishing)\n> tps = 536.562150 (excluding connections establishing)\n>\n> CFLAGS=686\n>\n> tps = 525.179375 (including connections establishing)\n> tps = 526.801278 (excluding connections establishing)\n>\n> tps = 557.821136 (including connections establishing)\n> tps = 559.602414 (excluding connections establishing)\n>\n> tps = 532.142941 (including connections establishing)\n> tps = 533.740209 (excluding connections establishing)\n>\n>\n> CFLAGS=pentium4\n>\n> tps = 518.869825 (including connections establishing)\n> tps = 520.394341 (excluding connections establishing)\n>\n> tps = 537.759982 (including connections establishing)\n> tps = 539.402547 (excluding connections establishing)\n>\n> tps = 538.522198 (including connections establishing)\n> tps = 540.200458 (excluding connections establishing)\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n",
"msg_date": "Tue, 12 Dec 2006 07:10:34 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
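When comparing numbers across machines like this, it helps to record the build and platform details up front. A few standard commands that could do it, assuming a Linux box and that the pg_config belonging to the build under test is first on the PATH:

gcc --version                              # compiler used for the build
pg_config --cflags                         # CFLAGS PostgreSQL was actually built with
getconf LONG_BIT                           # 32- or 64-bit userland
grep 'model name' /proc/cpuinfo | head -1  # exact CPU model (Linux only)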
{
"msg_contents": "On 12.12.2006, at 02:37, Michael Stone wrote:\n\n> Can anyone else reproduce these results? I'm on similar hardware \n> (2.5GHz P4, 1.5G RAM) and my test results are more like this:\n\nI'm on totally different hardware / software (MacBook Pro 2.33GHz \nC2D) and I can't reproduce the tests.\n\nI have played with a lot of settings in the CFLAGS including -march \nand -O3 and -O2 - there is no significant difference in the tests.\n\nWith fsync=off I get around 2100tps on average with all different \nsettings I have tested. I tried to get the rest of the setup as \nsimilar to the described on ty Daniel as possible. It might be that \nthe crappy Pentium 4 needs some special handling, but I can't get the \nCore 2 Duo in my laptop produce different tps numbers with the \ndifferent optimizations.\n\nBtw: best results were 2147 with -march=i686 and 2137 with - \nmarch=nocona. Both with -O3.\n\ncug\n",
"msg_date": "Tue, 12 Dec 2006 11:34:10 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 11, 2006, at 23:22 , Daniel van Ham Colchete wrote:\n\n> I ran this test at a Gentoo test machine I have here. It's a Pentium 4\n> 3.0GHz (I don't know witch P4)\n\nTry cat /proc/cpuinfo.\n\n> TESTS RESULTS\n> ==============\n\nOn a dual-core Opteron 280 with 4G RAM with an LSI PCI-X Fusion-MPT \nSAS controller, I am getting wildly uneven results:\n\ntps = 264.775137 (excluding connections establishing)\ntps = 160.365754 (excluding connections establishing)\ntps = 151.967193 (excluding connections establishing)\ntps = 148.010349 (excluding connections establishing)\ntps = 260.973569 (excluding connections establishing)\ntps = 144.693287 (excluding connections establishing)\ntps = 148.147036 (excluding connections establishing)\ntps = 259.485717 (excluding connections establishing)\n\nI suspect the hardware's real maximum performance of the system is \n~150 tps, but that the LSI's write cache is buffering the writes. I \nwould love to validate this hypothesis, but I'm not sure how.\n\nAlexander.\n",
"msg_date": "Tue, 12 Dec 2006 12:29:29 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Tue, Dec 12, 2006 at 01:35:04AM -0500, Greg Smith wrote:\n>These changes could easily explain the magnitude of difference in results \n>you're seeing, expecially when combined with a 20% greater raw CPU clock.\n\nI'm not interested in comparing the numbers between the systems (which \nis obviously pointless); I am intested in the fact that there was a \nconsistent difference among the numbers on his system (based on \ndifference optimizations) and no difference among mine.\n\nMike Stone\n",
"msg_date": "Tue, 12 Dec 2006 07:30:10 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Tue, Dec 12, 2006 at 07:10:34AM -0200, Daniel van Ham Colchete wrote:\n>are you using \"-mtune/-mcpu\" or \"-march\" with GCC?\n\nI used exactly the options you said you used.\n\n>Witch GCC version? Are you working with a 32bits OS or 64bits?\n\n3.3.5; 32\n\nMike Stone\n",
"msg_date": "Tue, 12 Dec 2006 07:31:32 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Tue, Dec 12, 2006 at 12:29:29PM +0100, Alexander Staubo wrote:\n>I suspect the hardware's real maximum performance of the system is \n>~150 tps, but that the LSI's write cache is buffering the writes. I \n>would love to validate this hypothesis, but I'm not sure how.\n\nWith fsync off? The write cache shouldn't really matter in that case. \n(And for this purpose that's probably a reasonable configuration.)\n\nMike Stone\n",
"msg_date": "Tue, 12 Dec 2006 07:32:55 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Luke Lonergan wrote:\n\n> Can you try this with just \"-O3\" versus \"-O2\"?\n\nThanks to Daniel for doing these tests.\nI happen to have done the same tests about 3/4 years ago,\nand concluded that gcc flags did *not* influence performance.\n\nMoved by curiosity, I revamped those tests now on a test\nmachine (single P4 @ 3.2 Ghz, with 2Mb cache and 512 Mb Ram).\n\nHere are the results:\n\nhttp://www.streppone.it/cosimo/work/pg/gcc.png\n\nIn short: tests executed with postgresql 8.2.0,\ngcc version 3.4.3 20041212 (Red Hat 3.4.3-9.EL4),\ntps figures computed as average of 9 pgbench runs (don't ask why 9... :-),\nwith exactly the same commands given by Daniel:\n\n\"-O0\" ~ 957 tps\n\"-O1 -mcpu=pentium4 -mtune=pentium4\" ~ 1186 tps\n\"-O2 -mcpu=pentium4 -mtune=pentium4\" ~ 1229 tps\n\"-O3 -mcpu=pentium4 -mtune=pentium4\" ~ 1257 tps\n\"-O6 -mcpu=pentium4 -mtune=pentium4\" ~ 1254 tps\n\nI'm curious now to get the same tests run with\na custom-cflags-compiled glibc.\n\n-- \nCosimo\n",
"msg_date": "Tue, 12 Dec 2006 13:42:06 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
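For anyone who wants to repeat this kind of comparison, one build-and-bench iteration looks roughly like the sketch below (the install prefix, scale factor, client count, and transaction count are illustrative choices, not necessarily what was used above):

CFLAGS="-O2 -mcpu=pentium4 -mtune=pentium4" ./configure --prefix=/usr/local/pgtest
make && make install
make -C contrib/pgbench install               # pgbench lives in contrib in this era
/usr/local/pgtest/bin/initdb -D /usr/local/pgtest/data
/usr/local/pgtest/bin/pg_ctl -D /usr/local/pgtest/data -l /tmp/pg.log start
/usr/local/pgtest/bin/createdb bench
/usr/local/pgtest/bin/pgbench -i -s 10 bench
for i in 1 2 3 4 5 6 7 8 9; do /usr/local/pgtest/bin/pgbench -c 5 -t 600 bench; done

Average the nine tps figures, then rebuild with the next set of CFLAGS and repeat.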
{
"msg_contents": "On Tue, Dec 12, 2006 at 01:42:06PM +0100, Cosimo Streppone wrote:\n> \"-O0\" ~ 957 tps\n> \"-O1 -mcpu=pentium4 -mtune=pentium4\" ~ 1186 tps\n> \"-O2 -mcpu=pentium4 -mtune=pentium4\" ~ 1229 tps\n> \"-O3 -mcpu=pentium4 -mtune=pentium4\" ~ 1257 tps\n> \"-O6 -mcpu=pentium4 -mtune=pentium4\" ~ 1254 tps\n\nFor the record, -O3 = -O6 for regular gcc. It used to matter for pgcc, but\nthat is hardly in use anymore.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 12 Dec 2006 13:46:33 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Tue, Dec 12, 2006 at 01:42:06PM +0100, Cosimo Streppone wrote:\n>\"-O0\" ~ 957 tps\n>\"-O1 -mcpu=pentium4 -mtune=pentium4\" ~ 1186 tps\n>\"-O2 -mcpu=pentium4 -mtune=pentium4\" ~ 1229 tps\n>\"-O3 -mcpu=pentium4 -mtune=pentium4\" ~ 1257 tps\n>\"-O6 -mcpu=pentium4 -mtune=pentium4\" ~ 1254 tps\n>\n>I'm curious now to get the same tests run with\n>a custom-cflags-compiled glibc.\n\nI'd be curious to see -O2 with and without the arch-specific flags, \nsince that's mostly what the discussion is about. \n\nMike Stone\n",
"msg_date": "Tue, 12 Dec 2006 07:48:06 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Tue, Dec 12, 2006 at 07:48:06AM -0500, Michael Stone wrote:\n>I'd be curious to see -O2 with and without the arch-specific flags, \n>since that's mostly what the discussion is about. \n\nThat came across more harshly than I intended; I apologize for that. \nIt's certainly a useful data point to compare the various optimization \nlevels.\n\nI'm rerunning the tests with gcc 4.1.2 and another platform to see if \nthat makes any difference.\n\nMike Stone\n",
"msg_date": "Tue, 12 Dec 2006 07:56:35 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Mon, 2006-12-11 at 20:22 -0200, Daniel van Ham Colchete wrote:\n> \n> I'm thinking about writing a script to make all the tests (more than 3\n> times each), get the data and plot some graphs.\n> \n> I don't have the time right now to do it, maybe next week I'll have.\n\nCheck out the OSDL test suite stuff. It runs the test and handles all\nthe reporting for you (including graphs).\nhttp://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/\n\n> I invite everyone to comment/sugest on the procedure or the results.\n\nI'd recommend running each test for quite a bit longer. Get your\nbuffers dirty and exercised, and see what things look like then.\n\nAlso, if you have the disk, offload your wal files to a separate disk.\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n",
"msg_date": "Tue, 12 Dec 2006 09:28:25 -0500",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
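If you do offload the WAL, the usual technique in this era is to move pg_xlog onto the other disk and leave a symlink behind; a sketch with made-up mount points and data directory:

pg_ctl stop -D /var/lib/postgresql/data
mv /var/lib/postgresql/data/pg_xlog /mnt/waldisk/pg_xlog
ln -s /mnt/waldisk/pg_xlog /var/lib/postgresql/data/pg_xlog
pg_ctl start -D /var/lib/postgresql/data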
{
"msg_contents": "On Dec 12, 2006, at 13:32 , Michael Stone wrote:\n\n> On Tue, Dec 12, 2006 at 12:29:29PM +0100, Alexander Staubo wrote:\n>> I suspect the hardware's real maximum performance of the system \n>> is ~150 tps, but that the LSI's write cache is buffering the \n>> writes. I would love to validate this hypothesis, but I'm not \n>> sure how.\n>\n> With fsync off? The write cache shouldn't really matter in that \n> case. (And for this purpose that's probably a reasonable \n> configuration.)\n\nNo, fsync=on. The tps values are similarly unstable with fsync=off, \nthough -- I'm seeing bursts of high tps values followed by low-tps \nvalleys, a kind of staccato flow indicative of a write caching being \nfilled up and flushed.\n\nAlexander.\n\n",
"msg_date": "Tue, 12 Dec 2006 15:32:03 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "1= In all these results I'm seeing, no one has yet reported what \ntheir physical IO subsystem is... ...when we are benching a DB.\n\n2= So far we've got ~ a factor of 4 performance difference between \nMichael Stone's 1S 1C Netburst era 2.5GHz P4 PC and Guido Neitzer's \n1S 2C MacBook Pro 2.33GHz C2D. If the physical IO subsystems are \neven close to equivalent across the systems benched so far, we've \nclearly established that pg performance is more sensitive to factors \noutside the physical IO subsystem than might usually be thought with \nregard to a DBMS. (At least for this benchmark SW.)\n\n3= Daniel van Ham Colchete is running Gentoo. That means every SW \ncomponent on his box has been compiled to be optimized for the HW it \nis running on.\nThere may be a combination of effects going on for him that others \nnot running a system optimized from the ground up for its HW do not see.\n\n4= If we are testing arch specific compiler options and only arch \nspecific compiler options, we should remove the OS as a variable.\nSince Daniel has presented evidence in support of his hypothesis, the \nfirst step should be to duplicate his environment as =exactly= as \npossible and see if someone can independently reproduce the results \nwhen the only significant difference is the human involved. This \nwill guard against procedural error in the experiment.\n\nPossible Outcomes\nA= Daniel made a procedural error. We all learn what is and to avoid it.\nB= The Gentoo results are confirmed but no other OS shows this \neffect. Much digging ensues ;-)\nC= Daniel's results are confirmed as platform independent once we \ntake all factor into account properly\nWe all learn more re: how to best set up pg for highest performance.\n\nRon Peacetree\n\n\nAt 01:35 AM 12/12/2006, Greg Smith wrote:\n>On Mon, 11 Dec 2006, Michael Stone wrote:\n>\n>>Can anyone else reproduce these results? I'm on similar hardware \n>>(2.5GHz P4, 1.5G RAM)...\n>\n>There are two likely candidates for why Daniel's P4 3.0GHz \n>significantly outperforms your 2.5GHz system.\n>\n>1) Most 2.5GHZ P4 processors use a 533MHz front-side bus (FSB); most \n>3.0GHZ ones use an 800MHz bus.\n>\n>2) A typical motherboard paired with a 2.5GHz era processor will \n>have a single-channel memory interface; a typical 3.0GHZ era board \n>supports dual-channel DDR.\n>\n>These changes could easily explain the magnitude of difference in \n>results you're seeing, expecially when combined with a 20% greater \n>raw CPU clock.\n\n",
"msg_date": "Tue, 12 Dec 2006 10:13:14 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "* Cosimo Streppone:\n\n> \"-O0\" ~ 957 tps\n> \"-O1 -mcpu=pentium4 -mtune=pentium4\" ~ 1186 tps\n> \"-O2 -mcpu=pentium4 -mtune=pentium4\" ~ 1229 tps\n> \"-O3 -mcpu=pentium4 -mtune=pentium4\" ~ 1257 tps\n> \"-O6 -mcpu=pentium4 -mtune=pentium4\" ~ 1254 tps\n\n-mcpu and -mtune are synonymous. You really should -march here (but\nthe result is non-generic code). Keep in mind that GCC does not\ncontain an instruction scheduler for the Pentium 4s. I also believe\nthat the GCC switches are not fine-grained enough to cover the various\nPentium 4 variants. For instance, some chips don't like the CMOV\ninstruction at all, but others can process it with decent speed.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 12 Dec 2006 16:17:27 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "In response to Ron <[email protected]>:\n> \n> 3= Daniel van Ham Colchete is running Gentoo. That means every SW \n> component on his box has been compiled to be optimized for the HW it \n> is running on.\n> There may be a combination of effects going on for him that others \n> not running a system optimized from the ground up for its HW do not see.\n\nhttp://www.potentialtech.com/wmoran/source.php\n\nYou get an idea of how old these tests are by the fact that the latest\nand greatest was FreeBSD 4.9 at the time, but I suppose it may still\nbe relevent.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Tue, 12 Dec 2006 10:26:10 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Alexander Staubo <[email protected]> writes:\n> No, fsync=on. The tps values are similarly unstable with fsync=off, \n> though -- I'm seeing bursts of high tps values followed by low-tps \n> valleys, a kind of staccato flow indicative of a write caching being \n> filled up and flushed.\n\nIt's notoriously hard to get repeatable numbers out of pgbench :-(\n\nA couple of tips:\n\t* don't put any faith in short runs. I usually use -t 1000\n\t plus -c whatever.\n\t* make sure you loaded the database (pgbench -i) with a scale\n\t factor (-s) at least equal to the maximum -c you want to test.\n\t Otherwise you're mostly measuring update contention.\n\t* pay attention to when checkpoints occur. You probably need\n\t to increase checkpoint_segments if you want pgbench not to be\n\t checkpoint-bound.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2006 10:47:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
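Putting those tips together, a minimal sketch (the database name and the particular numbers are only examples):

createdb bench
pgbench -i -s 10 bench        # scale factor at least as large as the biggest -c you plan to test
pgbench -c 10 -t 1000 bench   # longer runs; don't trust short ones
# if checkpoints keep landing inside the measured window, raise
# checkpoint_segments in postgresql.conf (e.g. from the default 3 to 16)
# and reload the server before re-running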
{
"msg_contents": "On 12/12/06, Florian Weimer <[email protected]> wrote:\n> * Cosimo Streppone:\n>\n> > \"-O0\" ~ 957 tps\n> > \"-O1 -mcpu=pentium4 -mtune=pentium4\" ~ 1186 tps\n> > \"-O2 -mcpu=pentium4 -mtune=pentium4\" ~ 1229 tps\n> > \"-O3 -mcpu=pentium4 -mtune=pentium4\" ~ 1257 tps\n> > \"-O6 -mcpu=pentium4 -mtune=pentium4\" ~ 1254 tps\n>\n> -mcpu and -mtune are synonymous. You really should -march here (but\n> the result is non-generic code). Keep in mind that GCC does not\n> contain an instruction scheduler for the Pentium 4s. I also believe\n> that the GCC switches are not fine-grained enough to cover the various\n> Pentium 4 variants. For instance, some chips don't like the CMOV\n> instruction at all, but others can process it with decent speed.\n\nYou can use -march=pentium4, -march=prescott and -march=nocona to the\ndifferent Pentium4 processors. But you have to use -march (and not\n-mcpu or -mtune) because without it you are still using only i386\ninstructions.\n\nDaniel\n",
"msg_date": "Tue, 12 Dec 2006 13:57:20 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
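A quick way to see the difference for yourself is to compile the same file twice and diff the assembly; -mtune only changes scheduling against the compiler's default instruction-set baseline, while -march raises the baseline itself (the file name is just a placeholder):

gcc -O2 -mtune=pentium4 -S hot.c -o tune.s      # generic baseline, P4-friendly scheduling
gcc -O2 -march=prescott -S hot.c -o march.s     # may emit cmov, SSE2/SSE3, etc.
diff tune.s march.s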
{
"msg_contents": "Alexander Staubo wrote:\n\n> No, fsync=on. The tps values are similarly unstable with fsync=off, \n> though -- I'm seeing bursts of high tps values followed by low-tps \n> valleys, a kind of staccato flow indicative of a write caching being \n> filled up and flushed.\n\nDatabases with checkpointing typically exhibit this cyclical throughput \nsyndrome.\n(put another way : this is to be expected and you are correct that it \nindicates buffered\ndata being flushed to disk periodically).\n\n\n\n",
"msg_date": "Tue, 12 Dec 2006 08:58:21 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Tue, 12 Dec 2006, Tom Lane wrote:\n\n> Um, you entirely missed the point: the hardware speedups you mention are\n> quite independent of any compiler options. The numbers we are looking\n> at are the relative speeds of two different compiles on the same\n> hardware, not whether hardware A is faster than hardware B.\n\nThe point that I failed to make clear is that expecting Mike's system to \nperform like Daniel's just because they have similar processors isn't \nrealistic, considering the changes that happened in the underlying \nhardware during that period. Having very different memory subsystems will \nshift which optimizations are useful and which have minimal impact even if \nthe processor is basically the same.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 12 Dec 2006 11:08:14 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "Tom Lane wrote:\n> Alexander Staubo <[email protected]> writes:\n> > No, fsync=on. The tps values are similarly unstable with fsync=off, \n> > though -- I'm seeing bursts of high tps values followed by low-tps \n> > valleys, a kind of staccato flow indicative of a write caching being \n> > filled up and flushed.\n> \n> It's notoriously hard to get repeatable numbers out of pgbench :-(\n> \n> A couple of tips:\n> \t* don't put any faith in short runs. I usually use -t 1000\n> \t plus -c whatever.\n> \t* make sure you loaded the database (pgbench -i) with a scale\n> \t factor (-s) at least equal to the maximum -c you want to test.\n> \t Otherwise you're mostly measuring update contention.\n> \t* pay attention to when checkpoints occur. You probably need\n> \t to increase checkpoint_segments if you want pgbench not to be\n> \t checkpoint-bound.\n\nWhile skimming over the pgbench source it has looked to me like it's\nnecessary to pass the -s switch (scale factor) to both the\ninitialization (-i) and the subsequent (non -i) runs. I'm not sure if\nthis is obvious from the documentation but I thought it may be useful to\nmention.\n\n",
"msg_date": "Tue, 12 Dec 2006 13:11:06 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Tue, 12 Dec 2006, Alvaro Herrera wrote:\n\n> While skimming over the pgbench source it has looked to me like it's\n> necessary to pass the -s switch (scale factor) to both the\n> initialization (-i) and the subsequent (non -i) runs.\n\nFor non-custom runs, it's computed based on the number of branches. \nAround line 1415 you should find:\n\n res = PQexec(con, \"select count(*) from branches\");\n ...\n scale = atoi(PQgetvalue(res, 0, 0));\n\nSo it shouldn't be required during the run, just the initialization.\n\nHowever, note that there were some recent bug fixes to the scaling \nimplementation, and I would recommend using the version that comes with \n8.2 (pgbench 1.58 2006/10/21). It may compile fine even if you copy that \npgbench.c into an older version's contrib directory; it's certainly a \ndrop-in replacement (and improvement) for the pgbench 1.45 that comes with \ncurrent Postgres 8.1 versions.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 12 Dec 2006 11:36:01 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Mike,\n\nI'm making some other tests here at another hardware (also Gentoo). I\nfound out that PostgreSQL stops for a while if I change the -t\nparameter on bgbench from 600 to 1000 and I have ~150 tps instead of\n~950tps.\n\nI don't know why PostgreSQL stoped, but it was longer than 5 seconds\nand my disk IO was comsuming 100% of my CPU time during this period.\nAnd I'm testing with my fsync turned off.\n\nMaybe if you lower your -t rate you are going to see this improvement.\n\nBest regards,\nDaniel\n\nOn 12/12/06, Michael Stone <[email protected]> wrote:\n> On Tue, Dec 12, 2006 at 07:10:34AM -0200, Daniel van Ham Colchete wrote:\n> >are you using \"-mtune/-mcpu\" or \"-march\" with GCC?\n>\n> I used exactly the options you said you used.\n>\n> >Witch GCC version? Are you working with a 32bits OS or 64bits?\n>\n> 3.3.5; 32\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n",
"msg_date": "Tue, 12 Dec 2006 14:58:34 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> While skimming over the pgbench source it has looked to me like it's\n> necessary to pass the -s switch (scale factor) to both the\n> initialization (-i) and the subsequent (non -i) runs.\n\nNo, it's not supposed to be, and I've never found it needed in practice.\nThe code seems able to pull the scale out of the database (I forget how\nit figures it out exactly).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2006 12:13:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "\"Daniel van Ham Colchete\" <[email protected]> writes:\n> I'm making some other tests here at another hardware (also Gentoo). I\n> found out that PostgreSQL stops for a while if I change the -t\n> parameter on bgbench from 600 to 1000 and I have ~150 tps instead of\n> ~950tps.\n\n> I don't know why PostgreSQL stoped, but it was longer than 5 seconds\n> and my disk IO was comsuming 100% of my CPU time during this period.\n\nCheckpoint?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Dec 2006 12:32:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "At 10:47 AM 12/12/2006, Tom Lane wrote:\n\n>It's notoriously hard to get repeatable numbers out of pgbench :-(\nThat's not a good characteristic in bench marking SW...\n\nDoes the ODSL stuff have an easier time getting reproducible results?\n\n\n>A couple of tips:\n> * don't put any faith in short runs. I usually use -t 1000 \n> plus -c whatever.\n> * make sure you loaded the database (pgbench -i) with a \n> scale factor (-s) at least equal to the maximum -c you want to test.\n> Otherwise you're mostly measuring update contention.\n> * pay attention to when checkpoints occur. You probably \n> need to increase checkpoint_segments if you want pgbench not to be \n> checkpoint-bound.\nThis all looks very useful. Can you give some guidance as to what \ncheckpoint_segments should be increased to? Do the values you are \nrunning pgbench with suggest what value checkpoint_segments should be?\n\nRon Peacetree \n\n",
"msg_date": "Tue, 12 Dec 2006 12:56:28 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "I just made another test with a second Gentoo machine:\n\nPentium 4 3.0Ghz Prescott\nGCC 4.1.1\nGlibc 2.4\nPostgreSQL 8.1.5\nKernel 2.6.17\n\nSame postgresql.conf as yesterday's.\n\nFirst test\n==========\n GLIBC: -O2 -march=i686\n PostgreSQL: -O2 -march=i686\n Results: 974.638731 975.602142 975.882051 969.142503 992.914167\n983.467131 983.231575 994.901330 970.375221 978.377467\n Average (error): 980 tps (13 tps)\n\nSecond test\n===========\n GLIBC: -O2 -march=i686\n PostgreSQL: -O2 -march=prescott\n Results: 988.319643 976.152973 1006.482553 992.431322 983.090838\n992.674065 989.216746 990.897615 987.129802 975.907955\n Average (error): 988 tps (15 tps)\n\nThird test\n==========\n GLIBC: -O2 -march=prescott\n PostgreSQL: -O2 -march=i686\n Results: 969.085400 966.187309 994.882325 968.715150 956.766771\n970.151542 960.090571 967.680628 986.568462 991.756520\n Average (error): 973 tps (19 tps)\n\nForth test\n==========\n GLIBC: -O2 -march=prescott\n PostgreSQL: -O2 -march=prescott\n Results: 980.888371 978.128269 969.344669 978.021509 979.256603\n993.236457 984.078399 981.654834 976.295925 969.796277\n Average (error): 979 tps (11 tps)\n\nThe results showed no significant change. The conclusion of today's\ntest would be that there are no improvement at PostgreSQL when using\n-march=prescott.\n\nI only see 3 diferences between yesterday's server and today's: the\nkernel version (y: 2.6.18, t:2.6.17), the server uses an IDE harddrive\n(yesterday was SATA), and the gcc version (3.4.6 -> 4.1.1).\n\nI don't know why yesterday we had improved and today we had not.\n\nBest\nDaniel\n\nOn 12/12/06, Daniel van Ham Colchete <[email protected]> wrote:\n> I'm making some other tests here at another hardware (also Gentoo). I\n> found out that PostgreSQL stops for a while if I change the -t\n> parameter on bgbench from 600 to 1000 and I have ~150 tps instead of\n> ~950tps.\n",
"msg_date": "Tue, 12 Dec 2006 16:35:50 -0200",
"msg_from": "\"Daniel van Ham Colchete\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "> I just made another test with a second Gentoo machine:\n> \n> Pentium 4 3.0Ghz Prescott\n> GCC 4.1.1\n> Glibc 2.4\n> PostgreSQL 8.1.5\n> Kernel 2.6.17\n> \n> Same postgresql.conf as yesterday's.\n> \n> First test\n> ==========\n> GLIBC: -O2 -march=i686\n> PostgreSQL: -O2 -march=i686\n> Results: 974.638731 975.602142 975.882051 969.142503 992.914167\n> 983.467131 983.231575 994.901330 970.375221 978.377467\n> Average (error): 980 tps (13 tps)\n\nDo you have any scripts for the above process you could share with the\nlist? I have a few machines (a woodcrest and an older Xeon) that I could\nrun this on...\n\nThanks,\n\nBucky\n",
"msg_date": "Tue, 12 Dec 2006 15:16:17 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 01:35 PM 12/12/2006, Daniel van Ham Colchete wrote:\n>I just made another test with a second Gentoo machine:\n\n><snip>\n>\n>The results showed no significant change. The conclusion of today's \n>test would be that there are no improvement at PostgreSQL when using \n>-march=prescott.\n>\n>I only see 3 diferences between yesterday's server and today's: the \n>kernel version (y: 2.6.18, t:2.6.17), the server uses an IDE \n>harddrive (yesterday was SATA), and the gcc version (3.4.6 -> 4.1.1).\n>\n>I don't know why yesterday we had improved and today we had not.\nSATA HD's, particularly SATA II HD's and _especially_ 10Krpm 150GB \nSATA II Raptors are going to have far better performance than older IDE HDs.\n\nDo some raw bonnie++ benches on the two systems. If the numbers from \nbonnie++ are close to those obtained during the pgbench runs, then \nthe HDs are limiting pgbench.\n\n Best would be to use the exact same HD IO subsystem on both boxes, \nbut that may not be feasible.\n\nIn general, it would be helpful if the entire config, HW + OS + pg \nstuff, was documented when submitting benchmark results.\n(For instance, it would not be outside the realm of plausibility for \nGuidos C2D laptop to be HD IO limited and for Micheal's 2.5 GHZ P4 PC \nto be CPU limited during pgbench runs.)\n\nRon Peacetree \n\n",
"msg_date": "Tue, 12 Dec 2006 15:31:42 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
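For the raw bonnie++ comparison, something along these lines on each box should do; the path and user are examples, and the size should be roughly twice RAM so the filesystem cache cannot hide the disks:

bonnie++ -d /var/lib/postgresql -s 4096 -u postgres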
{
"msg_contents": "On Tue, 12 Dec 2006, Daniel van Ham Colchete wrote:\n\n> I'm making some other tests here at another hardware (also Gentoo). I\n> found out that PostgreSQL stops for a while if I change the -t\n> parameter on bgbench from 600 to 1000 and I have ~150 tps instead of\n> ~950tps.\n\nSure sounds like a checkpoint to me; the ones pgbench generates really \naren't fun to watch when running against IDE drives. I've seen my test \nsystem with 2 IDE drives pause for 15 seconds straight to process one when \nfsync is on, caching was disabled on the WAL disk, and the shared_buffer \ncache is large.\n\nIf you were processing 600 transactions/client without hitting a \ncheckpoint but 1000 is, try editing your configuration file, double \ncheckpoint_segments, restart the server, and then try again. This is \ncheating but will prove the source of the problem.\n\nThis kind of behavior is what other list members were trying to suggest to \nyou before: once you get disk I/O involved, that drives the performance \ncharacteristics of so many database operations that small improvements in \nCPU optimization are lost. Running the regular pgbench code is so wrapped \nin disk writes that it's practically a worst-case for what you're trying \nto show.\n\nI would suggest that you run all your optimization tests with the -S \nparameter to pgbench that limits it to select statements. That will let \nyou benchmark whether the core code is benefitting from the CPU \nimprovements without having disk I/O as the main driver of performance.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 12 Dec 2006 16:08:51 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
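Concretely, that experiment looks something like the following sketch (6 is simply double the default of 3; database name and the other numbers are illustrative):

# in postgresql.conf:  checkpoint_segments = 6
pg_ctl reload -D $PGDATA
pgbench -c 5 -t 1000 bench       # does the stall move or disappear?
pgbench -S -c 5 -t 1000 bench    # -S: SELECT-only, takes the write path out of the picture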
{
"msg_contents": "[email protected] (Alexander Staubo) wrote:\n> On Dec 12, 2006, at 13:32 , Michael Stone wrote:\n>\n>> On Tue, Dec 12, 2006 at 12:29:29PM +0100, Alexander Staubo wrote:\n>>> I suspect the hardware's real maximum performance of the system is\n>>> ~150 tps, but that the LSI's write cache is buffering the\n>>> writes. I would love to validate this hypothesis, but I'm not\n>>> sure how.\n>>\n>> With fsync off? The write cache shouldn't really matter in that\n>> case. (And for this purpose that's probably a reasonable\n>> configuration.)\n>\n> No, fsync=on. The tps values are similarly unstable with fsync=off,\n> though -- I'm seeing bursts of high tps values followed by low-tps\n> valleys, a kind of staccato flow indicative of a write caching being\n> filled up and flushed.\n\nIf that seems coincidental with checkpoint flushing, that would be one\nof the notable causes of that sort of phenomenon.\n\nYou could get more readily comparable numbers either by:\n a) Increasing the frequency of checkpoint flushes, so they would be\n individually smaller, or\n b) Decreasing the frequency so you could exclude it from the time of\n the test.\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nhttp://cbbrowne.com/info/slony.html\n\"It can be shown that for any nutty theory, beyond-the-fringe\npolitical view or strange religion there exists a proponent on the\nNet. The proof is left as an exercise for your kill-file.\"\n-- Bertil Jonell\n",
"msg_date": "Tue, 12 Dec 2006 17:29:28 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
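In postgresql.conf terms, the two options look roughly like this (the values are illustrative guesses):

# (a) smaller, more frequent flushes:
checkpoint_segments = 1
# (b) flushes rare enough to fall outside a short test window:
checkpoint_segments = 64
checkpoint_timeout = 3600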
{
"msg_contents": "Hi,\n\nDid someone try '-mfpmath=sse -msse3'?\n\nWould be interesting to know if -mfpmath=sse boost the performance.\n\nI guess, the difference in the generated code isn't that much between\ni686 and prescott. The bigger step is i386 to i686. '-mfpmath=sse\n-msse3' will also use the SSE unit, which the classic i686 doesn't have.\n\nCFLAGS=-O2 -march=prescott -mfpmath=sse -msse3\n\nBest regards\nSven.\n\nDaniel van Ham Colchete schrieb:\n> I just made another test with a second Gentoo machine:\n> \n> Pentium 4 3.0Ghz Prescott\n> GCC 4.1.1\n> Glibc 2.4\n> PostgreSQL 8.1.5\n> Kernel 2.6.17\n> \n> Same postgresql.conf as yesterday's.\n> \n> First test\n> ==========\n> GLIBC: -O2 -march=i686\n> PostgreSQL: -O2 -march=i686\n> Results: 974.638731 975.602142 975.882051 969.142503 992.914167\n> 983.467131 983.231575 994.901330 970.375221 978.377467\n> Average (error): 980 tps (13 tps)\n> \n> Second test\n> ===========\n> GLIBC: -O2 -march=i686\n> PostgreSQL: -O2 -march=prescott\n> Results: 988.319643 976.152973 1006.482553 992.431322 983.090838\n> 992.674065 989.216746 990.897615 987.129802 975.907955\n> Average (error): 988 tps (15 tps)\n> \n> Third test\n> ==========\n> GLIBC: -O2 -march=prescott\n> PostgreSQL: -O2 -march=i686\n> Results: 969.085400 966.187309 994.882325 968.715150 956.766771\n> 970.151542 960.090571 967.680628 986.568462 991.756520\n> Average (error): 973 tps (19 tps)\n> \n> Forth test\n> ==========\n> GLIBC: -O2 -march=prescott\n> PostgreSQL: -O2 -march=prescott\n> Results: 980.888371 978.128269 969.344669 978.021509 979.256603\n> 993.236457 984.078399 981.654834 976.295925 969.796277\n> Average (error): 979 tps (11 tps)\n> \n> The results showed no significant change. The conclusion of today's\n> test would be that there are no improvement at PostgreSQL when using\n> -march=prescott.\n> \n> I only see 3 diferences between yesterday's server and today's: the\n> kernel version (y: 2.6.18, t:2.6.17), the server uses an IDE harddrive\n> (yesterday was SATA), and the gcc version (3.4.6 -> 4.1.1).\n> \n> I don't know why yesterday we had improved and today we had not.\n> \n> Best\n> Daniel\n> \n> On 12/12/06, Daniel van Ham Colchete <[email protected]> wrote:\n>> I'm making some other tests here at another hardware (also Gentoo). I\n>> found out that PostgreSQL stops for a while if I change the -t\n>> parameter on bgbench from 600 to 1000 and I have ~150 tps instead of\n>> ~950tps.\n",
"msg_date": "Wed, 13 Dec 2006 10:21:44 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Michael Stone wrote:\n\n> On Tue, Dec 12, 2006 at 01:42:06PM +0100, Cosimo Streppone wrote:\n>> \"-O0\" ~ 957 tps\n>> \"-O1 -mcpu=pentium4 -mtune=pentium4\" ~ 1186 tps\n>> \"-O2 -mcpu=pentium4 -mtune=pentium4\" ~ 1229 tps\n>> \"-O3 -mcpu=pentium4 -mtune=pentium4\" ~ 1257 tps\n>> \"-O6 -mcpu=pentium4 -mtune=pentium4\" ~ 1254 tps\n>>\n>> I'm curious now to get the same tests run with\n>> a custom-cflags-compiled glibc.\n> \n> I'd be curious to see -O2 with and without the arch-specific flags, \n> since that's mostly what the discussion is about.\n\nI run the same tests only for:\n\n1) '-O2'\n2) '-O2 -march=pentium4 -mtune=pentium4 -mcpu=pentium4'\n (so no more doubts here, and thanks for gcc hints :-)\n\nand I obtained respectively an average of 1238 (plain -O2)\nvs. 1229 tps on 9 runs.\nDisk subsystem is a standard desktop SATA, no more than that.\n\nI tried also recompiling *only* pgbench with various options, but as\nI expected (and hoped) nothing changed.\n\nInteresting, eh?\n\n-- \nCosimo\n\n",
"msg_date": "Wed, 13 Dec 2006 17:11:59 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 11:11 AM 12/13/2006, Cosimo Streppone wrote:\n\n>Interesting, eh?\n>\n>Cosimo\n\nWhat I find interesting is that so far Guido's C2D Mac laptop has \ngotten the highest values by far in this set of experiments, and no \none else is even close.\nThe slowest results, Michael's, are on the system with what appears \nto be the slowest CPU of the bunch; and the ranking of the rest of \nthe results seem to similarly depend on relative CPU \nperformance. This is not what one would naively expect when benching \na IO intensive app like a DBMS.\n\nGiven that the typical laptop usually has 1 HD, and a relatively \nmodest one at that (the fastest available are SATA 7200rpm or \nSeagate's perpendicular recording 5400rpm) in terms of performance, \nthis feels very much like other factors are bottlenecking the \nexperiments to the point where Daniel's results regarding compiler \noptions are not actually being tested.\n\nAnyone got a 2.33 GHz C2D box with a decent HD IO subsystem more \nrepresentative of a typical DB server hooked up to it?\n\nAgain, the best way to confirm/deny Daniel's results is to duplicate \nthe environment he obtained those results with as closely as possible \n(preferably exactly) and then have someone else try to duplicate his results.\n\nAlso, I think the warnings regarding proper configuration of pgbench \nand which version of pgbench to use are worthy of note. Do we have \nguidance yet as to what checkpoint_segments should be set \nto? Should we be considering using something other than pgbench for \nsuch experiments?\n\nRon Peacetree\n\n\n",
"msg_date": "Wed, 13 Dec 2006 13:03:04 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "> What I find interesting is that so far Guido's C2D Mac laptop has\n> gotten the highest values by far in this set of experiments, and no\n> one else is even close.\n> The slowest results, Michael's, are on the system with what appears\n> to be the slowest CPU of the bunch; and the ranking of the rest of\n> the results seem to similarly depend on relative CPU\n> performance. This is not what one would naively expect when benching\n> a IO intensive app like a DBMS.\n> \n> Given that the typical laptop usually has 1 HD, and a relatively\n> modest one at that (the fastest available are SATA 7200rpm or\n> Seagate's perpendicular recording 5400rpm) in terms of performance,\n> this feels very much like other factors are bottlenecking the\n> experiments to the point where Daniel's results regarding compiler\n> options are not actually being tested.\n> \n> Anyone got a 2.33 GHz C2D box with a decent HD IO subsystem more\n> representative of a typical DB server hooked up to it?\n\nI've only seen pg_bench numbers > 2,000 tps on either really large\nhardware (none of the above mentioned comes close) or the results are in\nmemory due to a small database size (aka measuring update contention).\n\nJust a guess, but these tests (compiler opts.) seem like they sometimes\nshow a benefit where the database is mostly in RAM (which I'd guess many\npeople have) since that would cause more work to be put on the\nCPU/Memory subsystems. \n\nOther people on the list hinted at this, but I share their hypothesis\nthat once you get IO involved as a bottleneck (which is a more typical\nDB situation) you won't notice compiler options.\n\nI've got a 2 socket x 2 core woodcrest poweredge 2950 with a BBC 6 disk\nRAID I'll run some tests on as soon as I get a chance. \n\nI'm also thinking for this test, there's no need to tweak the default\nconfig other than maybe checkpoint_segments, since I don't really want\npostgres using large amounts of RAM (all that does is require me to\nbuild a larger test DB). Thoughts?\n\nThanks,\n\nBucky\n",
"msg_date": "Wed, 13 Dec 2006 13:49:52 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 01:49 PM 12/13/2006, Bucky Jordan wrote:\n\n>I've only seen pg_bench numbers > 2,000 tps on either really large \n>hardware (none of the above mentioned comes close) or the results \n>are in memory due to a small database size (aka measuring update contention).\nWhich makes a laptop achieving such numbers all the more interesting IMHO.\n\n\n>Just a guess, but these tests (compiler opts.) seem like they \n>sometimes show a benefit where the database is mostly in RAM (which \n>I'd guess many people have) since that would cause more work to be \n>put on the CPU/Memory subsystems.\nThe cases where the working set, or the performance critical part of \nthe working set, of the DB is RAM resident are very important ones ITRW.\n\n\n>Other people on the list hinted at this, but I share their \n>hypothesis that once you get IO involved as a bottleneck (which is a \n>more typical DB situation) you won't notice compiler options.\nCertainly makes intuitive sense. OTOH, this list has seen discussion \nof what should be IO bound operations being CPU bound. Evidently due \nto the expense of processing pg datastructures. Only objective \nbenches are going to tell us where the various limitations on pg \nperformance really are.\n\n\n>I've got a 2 socket x 2 core woodcrest poweredge 2950 with a BBC 6 \n>disk RAID I'll run some tests on as soon as I get a chance.\n>\n>I'm also thinking for this test, there's no need to tweak the \n>default config other than maybe checkpoint_segments, since I don't \n>really want postgres using large amounts of RAM (all that does is \n>require me to build a larger test DB).\n\nDaniel's orginal system had 512MB RAM. This suggests to me that \ntests involving 256MB of pg memory should be plenty big enough.\n\n\n>Thoughts?\nHope they are useful.\n\nRon Peacetree \n\n",
"msg_date": "Wed, 13 Dec 2006 14:49:57 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 13.12.2006, at 19:03, Ron wrote:\n\n> What I find interesting is that so far Guido's C2D Mac laptop has \n> gotten the highest values by far in this set of experiments, and no \n> one else is even close.\n\nThis might be the case because I have tested with fsync=off as my \ninternal harddrive would be a limiting factor and the results \nwouldn't be really helpful. Perhaps it's still the IO system, I don't \nknow. I can try to reproduce the tests as close as possible again. \nPerhaps I had different settings on something but I doubt that.\n\nThe new Core * CPUs from Intel are extremely fast with PostgreSQL.\n\n> Anyone got a 2.33 GHz C2D box with a decent HD IO subsystem more \n> representative of a typical DB server hooked up to it?\n\nI have also now an Xserve with two Dual-Core Xeons and two SAS drives \n(15k Seagates) in a mirrored RAID here. Will do some testing tomorrow.\n\nBtw: I always compare only to my own results to have something \ncomparable - same test, same scripts, same db version, same operating \nsystem and so on. The rest is just pure interest.\n\ncug\n",
"msg_date": "Wed, 13 Dec 2006 20:59:31 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > While skimming over the pgbench source it has looked to me like it's\n> > necessary to pass the -s switch (scale factor) to both the\n> > initialization (-i) and the subsequent (non -i) runs.\n> \n> No, it's not supposed to be, and I've never found it needed in practice.\n> The code seems able to pull the scale out of the database (I forget how\n> it figures it out exactly).\n\npgbench is designed to be a general benchmark, meanining it exercises\nall parts of the system. I am thinking just reexecuting a single SELECT\nover and over again would be a better test of the CPU optimizations.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Wed, 13 Dec 2006 20:04:13 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
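If your pgbench build has the custom-script option, it can already approximate that; a sketch against the standard pgbench tables, assuming a scale factor of 1 (hence the 100000 upper bound) and a database named bench:

cat > select_only.sql <<'EOF'
\setrandom aid 1 100000
SELECT abalance FROM accounts WHERE aid = :aid;
EOF
pgbench -n -f select_only.sql -c 5 -t 5000 bench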
{
"msg_contents": "[offtopic];\nhmm quite a long thread below is stats of posting\nTotal Messages:87 Total Participants: 27\n-------------------------------------------------\n 19 Daniel van Ham Colchete\n 12 Michael Stone\n 9 Ron\n 5 Steinar H. Gunderson\n 5 Alexander Staubo\n 4 Tom Lane\n 4 Greg Smith\n 3 Luke Lonergan\n 3 Christopher Browne\n 2 Merlin Moncure\n 2 Guido Neitzer\n 2 Dave Cramer\n 2 Cosimo Streppone\n 2 Bucky Jordan\n 1 Tatsuo Ishii\n 1 Sven Geisler\n 1 Shane Ambler\n 1 Michael Glaesemann\n 1 Mark Kirkwood\n 1 Gene\n 1 Florian Weimer\n 1 David Boreham\n 1 Craig A. James\n 1 Chris Browne\n 1 Brad Nicholson\n 1 Bill Moran\n 1 Alvaro Herrera\n-------------------------------------------\n",
"msg_date": "Thu, 14 Dec 2006 07:51:00 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Bruce,\n\n> pgbench is designed to be a general benchmark, meanining it exercises\n> all parts of the system. I am thinking just reexecuting a single SELECT\n> over and over again would be a better test of the CPU optimizations.\n\nMostly, though, pgbench just gives the I/O system a workout. It's not a \nreally good general workload.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Wed, 13 Dec 2006 18:36:25 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:\n> Bruce,\n> \n> > pgbench is designed to be a general benchmark, meanining it exercises\n> > all parts of the system. I am thinking just reexecuting a single SELECT\n> > over and over again would be a better test of the CPU optimizations.\n> \n> Mostly, though, pgbench just gives the I/O system a workout. It's not a \n> really good general workload.\n\nIt also will not utilize all cpus on a many cpu machine. We recently\nfound that the only way to *really* test with pgbench was to actually\nrun 4+ copies of pgbench at the same time.\n\nJ\n\n\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Wed, 13 Dec 2006 19:02:39 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n> On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:\n>> Mostly, though, pgbench just gives the I/O system a workout. It's not a \n>> really good general workload.\n\n> It also will not utilize all cpus on a many cpu machine. We recently\n> found that the only way to *really* test with pgbench was to actually\n> run 4+ copies of pgbench at the same time.\n\nThe pgbench app itself becomes the bottleneck at high transaction\nrates. Awhile back I rewrote it to improve its ability to issue\ncommands concurrently, but then desisted from submitting the\nchanges --- if we change the app like that, future numbers would\nbe incomparable to past ones, which sort of defeats the purpose of a\nbenchmark no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 00:44:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "\nOn Dec 14, 2006, at 14:44 , Tom Lane wrote:\n\n> The pgbench app itself becomes the bottleneck at high transaction\n> rates. Awhile back I rewrote it to improve its ability to issue\n> commands concurrently, but then desisted from submitting the\n> changes --- if we change the app like that, future numbers would\n> be incomparable to past ones, which sort of defeats the purpose of a\n> benchmark no?\n\nAt the same time, if the current pgbench isn't the tool we want to \nuse, is this kind of backward comparison going to hinder any move to \nimprove it? It sounds like there's quite a bit of room for \nimprovement in pg_bench, and in my opinion we should move forward to \nmake an improved tool, one that measures what we want to measure. And \nwhile comparison with past results might not be possible, there \nremains the possibility of rerunning the improved pgbench on previous \nsystems, I should think.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Thu, 14 Dec 2006 15:11:11 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "Benchmarks, like any other SW, need modernizing and updating from time to time.\n\nGiven the multi-core CPU approach to higher performance as the \ncurrent fad in CPU architecture, we need a benchmark that is appropriate.\n\nIf SPEC feels it is appropriate to rev their benchmark suite \nregularly, we probably should as well.\n\nRon Peacetree\n\nAt 12:44 AM 12/14/2006, Tom Lane wrote:\n>\"Joshua D. Drake\" <[email protected]> writes:\n> > On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:\n> >> Mostly, though, pgbench just gives the I/O system a workout. It's not a\n> >> really good general workload.\n>\n> > It also will not utilize all cpus on a many cpu machine. We recently\n> > found that the only way to *really* test with pgbench was to actually\n> > run 4+ copies of pgbench at the same time.\n>\n>The pgbench app itself becomes the bottleneck at high transaction\n>rates. Awhile back I rewrote it to improve its ability to issue\n>commands concurrently, but then desisted from submitting the\n>changes --- if we change the app like that, future numbers would\n>be incomparable to past ones, which sort of defeats the purpose of a\n>benchmark no?\n\n",
"msg_date": "Thu, 14 Dec 2006 01:45:29 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "(Re)-Design it to do both, unless there's reason to believe that doing one after the other would skew the results.\n\nThen old results are available, new results are also visible and useful for future comparisons. And seeing them side by side mught be an interesting exercise as well, at least for a while.\n\n(sorry for top-posting -- web based interface that doesn't do proper quoting)\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Michael Glaesemann\nSent:\tWed 12/13/2006 10:11 PM\nTo:\tTom Lane\nCc:\tJoshua D. Drake; Josh Berkus; [email protected]; Bruce Momjian; Alvaro Herrera; Alexander Staubo; Michael Stone\nSubject:\tRe: [PERFORM] New to PostgreSQL, performance considerations \n\n\nOn Dec 14, 2006, at 14:44 , Tom Lane wrote:\n\n> The pgbench app itself becomes the bottleneck at high transaction\n> rates. Awhile back I rewrote it to improve its ability to issue\n> commands concurrently, but then desisted from submitting the\n> changes --- if we change the app like that, future numbers would\n> be incomparable to past ones, which sort of defeats the purpose of a\n> benchmark no?\n\nAt the same time, if the current pgbench isn't the tool we want to \nuse, is this kind of backward comparison going to hinder any move to \nimprove it? It sounds like there's quite a bit of room for \nimprovement in pg_bench, and in my opinion we should move forward to \nmake an improved tool, one that measures what we want to measure. And \nwhile comparison with past results might not be possible, there \nremains the possibility of rerunning the improved pgbench on previous \nsystems, I should think.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n-------------------------------------------------------\nClick link below if it is SPAM [email protected]\n\"https://mailscanner.globexplorer.com/dspam/dspam.cgi?signatureID=4580ea76236074356172766&[email protected]&retrain=spam&template=history&history_page=1\"\n!DSPAM:4580ea76236074356172766!\n-------------------------------------------------------\n\n\n\n\n\n",
"msg_date": "Wed, 13 Dec 2006 22:49:05 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "On Wed, 13 Dec 2006, Ron wrote:\n\n> The slowest results, Michael's, are on the system with what appears to be the \n> slowest CPU of the bunch; and the ranking of the rest of the results seem to \n> similarly depend on relative CPU performance. This is not what one would \n> naively expect when benching a IO intensive app like a DBMS.\n\npgbench with 3000 total transactions and fsync off is barely doing I/O to \ndisk; it's writing a bunch of data to the filesystem cache and ending the \nbenchmark before the data even makes it to the hard drive. This is why \nhis results become completely different as soon as the number of \ntransactions increases. With little or no actual disk writes, you should \nexpect results to be ranked by CPU speed.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 14 Dec 2006 10:00:14 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Thu, 14 Dec 2006, Tom Lane wrote:\n\n> The pgbench app itself becomes the bottleneck at high transaction rates. \n> Awhile back I rewrote it to improve its ability to issue commands \n> concurrently, but then desisted from submitting the changes --- if we \n> change the app like that, future numbers would be incomparable to past \n> ones, which sort of defeats the purpose of a benchmark no?\n\nNot at all. Here's an example from the PC hardware benchmarking \nlandscape. Futuremark Corporation has a very popular benchmark for 3D \nhardware called 3DMark. Every year, they release a new version, and \nnumbers from it are completely different from those produced by the \nprevious year's version. That lets them rev the entire approach taken by \nthe benchmark to reflect current practice. So when the standard for \nhigh-end hardware includes, say, acceleration of lighting effects, the new \nversion will include a lighting test. In order to break 1000 points (say) \non that test, you absolutely have to have lighting acceleration, even \nthough on the previous year's test you could score that high without it.\n\nThat is not an isolated example; every useful PC benchmark gets updated \nregularly, completely breaking backward compatibility, to reflect the \ncapabilities of current hardware and software. Otherwise we'd still be \ntesting how well DOS runs on new processors. Right now everyone is (or \nhas already) upgraded their PC benchmarking code such that you need a \ndual-core processor to do well on some of the tests.\n\nIf you have a pgbench version with better concurrency features, I for one \nwould love to see it. I'm in the middle of patching that thing up right \nnow anyway but hadn't gotten that far (yet--I just spent some of yesterday \nstaring at how it submits into libpq trying to figure out how to improve \nthat). I would be happy to take your changes, my changes, changes to the \nbase code since you forked it, and reconcile everything together into a \npgbench2007--whose results can't be directly compared to the earlier \nversion, but are more useful on current gen multi-processor/core systems.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 14 Dec 2006 10:24:12 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations "
},
{
"msg_contents": "On Wed, Dec 13, 2006 at 01:03:04PM -0500, Ron wrote:\n>What I find interesting is that so far Guido's C2D Mac laptop has \n>gotten the highest values by far in this set of experiments, and no \n>one else is even close.\n>The slowest results, Michael's, are on the system with what appears \n>to be the slowest CPU of the bunch; and the ranking of the rest of \n>the results seem to similarly depend on relative CPU \n>performance. This is not what one would naively expect when benching \n>a IO intensive app like a DBMS.\n\nNote that I ran with fsync off, that the data set is <300M, and that all \nof the systems (IIRC) have at least 1G RAM. This is exactly the \ndistribution I would expect since we're configuring the benchmark to \ndetermine whether cpu-specific optimizations affect the results.\n\nMike Stone\n",
"msg_date": "Thu, 14 Dec 2006 10:26:27 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 10:00 AM 12/14/2006, Greg Smith wrote:\n>On Wed, 13 Dec 2006, Ron wrote:\n>\n>>The slowest results, Michael's, are on the system with what appears \n>>to be the slowest CPU of the bunch; and the ranking of the rest of \n>>the results seem to similarly depend on relative CPU \n>>performance. This is not what one would naively expect when \n>>benching a IO intensive app like a DBMS.\n>\n>pgbench with 3000 total transactions and fsync off is barely doing \n>I/O to disk; it's writing a bunch of data to the filesystem cache \n>and ending the benchmark before the data even makes it to the hard \n>drive. This is why his results become completely different as soon \n>as the number of transactions increases. With little or no actual \n>disk writes, you should expect results to be ranked by CPU speed.\nI of course agree with you in the general sense. OTOH, I'm fairly \nsure the exact point where this cross-over occurs is dependent on the \ncomponents and configuration of the system involved.\n\n(Nor do I want to dismiss this scenario as irrelevant or \nunimportant. There are plenty of RW situations where this takes \nplace or where the primary goal of a tuning effort is to make it take \nplace. Multi-GB BB RAID caches anyone?)\n\nIn addition, let's keep in mind that we all know that overall system \nperformance is limited by whatever component hits its limits \nfirst. Local pg performance has been known to be limited by any of : \nCPUs, memory subsystems, or physical IO subsystems. Intuitively, one \ntends to expect only the later to be a limiting factor in the vast \nmajority of DBMS tasks. pg has a history of regularly surprising \nsuch intuition in many cases.\nIMO, this makes good bench marking tools and procedures more \nimportant to have.\n\n(If nothing else, knowing what component is likely to be the \nbottleneck in system \"X\" made of components \"x1, x2, x3, ....\" for \ntask \"Y\" is valuable lore for the pg community to have as preexisting \ndata when first asked any given question on this list! )\n\nOne plausible positive effect of tuning like that Daniel advocates is \nto \"move\" the level of system activity where the physical IO \nsubsystem becomes the limiting factor on overall system performance.\n\nWe are not going to know definitively if such an effect exists, or to \nwhat degree, or how to achieve it, if we don't have appropriately \nrigorous and reproducible experiments (and documentation of them) in \nplace to test for it.\n\n So it seem to make sense that the community should have a \ndiscussion about the proper bench marking of pg and to get some \nresults based on said.\n\nRon Peacetree\n\n",
"msg_date": "Thu, 14 Dec 2006 11:03:08 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n>> On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:\n>>> Mostly, though, pgbench just gives the I/O system a workout. It's not a \n>>> really good general workload.\n> \n>> It also will not utilize all cpus on a many cpu machine. We recently\n>> found that the only way to *really* test with pgbench was to actually\n>> run 4+ copies of pgbench at the same time.\n> \n> The pgbench app itself becomes the bottleneck at high transaction\n> rates. Awhile back I rewrote it to improve its ability to issue\n> commands concurrently, but then desisted from submitting the\n> changes --- if we change the app like that, future numbers would\n> be incomparable to past ones, which sort of defeats the purpose of a\n> benchmark no?\n\nWhat is to stop us from running the new pgbench against older versions \nof PGSQL? Any stats taken from a run of pgbench a long time ago \nprobably aren't relevant against a modern test anyway as the underlying \nhardware and OS are likely to have changed or been updated.\n\n\n",
"msg_date": "Thu, 14 Dec 2006 11:16:14 -0500",
"msg_from": "Matthew O'Connor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 14, 2006, at 16:00 , Greg Smith wrote:\n\n> On Wed, 13 Dec 2006, Ron wrote:\n>\n>> The slowest results, Michael's, are on the system with what \n>> appears to be the slowest CPU of the bunch; and the ranking of the \n>> rest of the results seem to similarly depend on relative CPU \n>> performance. This is not what one would naively expect when \n>> benching a IO intensive app like a DBMS.\n>\n> pgbench with 3000 total transactions and fsync off is barely doing \n> I/O to disk; it's writing a bunch of data to the filesystem cache \n> and ending the benchmark before the data even makes it to the hard \n> drive. This is why his results become completely different as soon \n> as the number of transactions increases. With little or no actual \n> disk writes, you should expect results to be ranked by CPU speed.\n\nI also second your suggestion that pgbench should be run with -S to \ndisable updates. As far as I can see, nobody has reported numbers for \nthis setting, so here goes. I also increased the buffer size, which I \nfound was needed to avoid hitting the disk for block reads, and \nincreased the memory settings.\n\nMy PostgreSQL config overrides, then, are:\n\nshared_buffers = 1024MB\nwork_mem = 1MB\nmaintenance_work_mem = 16MB\nfsync = off\n\nEnvironment: Linux 2.6.15-23-amd64-generic on Ubuntu. Dual-core AMD \nOpteron 280 with 4GB of RAM. LSI PCI-X Fusion-MPT SAS.\n\nRunning with: pgbench -S -v -n -t 5000 -c 5.\n\nResults as a graph: http://purefiction.net/paste/pgbench.pdf\n\nStats for CFLAGS=\"-O0\": 18440.181894 19207.882300 19894.432185 \n19635.625622 19876.858884 20032.597042 19683.597973 20370.166669 \n19989.157881 20207.343510 19993.745956 20081.353580 20356.416424 \n20047.810017 20319.834190 19417.807528 19906.788454 20536.039929 \n19491.308046 20002.144230\n\nStats for CFLAGS=\"-O3 -msse2 -mfpmath=sse -funroll-loops -m64 - \nmarch=opteron -pipe\": 23830.358351 26162.203569 25569.091264 \n26762.755665 26590.822550 26864.908197 26608.029665 26796.116921 \n26323.742015 26692.576261 26878.859132 26106.770425 26328.371664 \n26755.595130 25488.304946 26635.527959 26377.485023 24817.590708 \n26480.245737 26223.427801\n\nAlexander.\n\n",
"msg_date": "Thu, 14 Dec 2006 20:14:03 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Alexander, Good stuff.\n\nCan you do runs with just CFLAGS=\"-O3\" and just CFLAGS=\"-msse2 \n-mfpmath=sse -funroll-loops -m64 - march=opteron -pipe\" as well ?\n\nAs it is, you've given a good lower and upper bound on your \nperformance obtainable using compiler options, but you've given no \ndata to show what effect arch specific compiler options have by themselves.\n\nAlso, what HDs are you using? How many in what config?\n\nThanks in Advance,\nRon Peacetree\n\nAt 02:14 PM 12/14/2006, Alexander Staubo wrote:\n\n>My PostgreSQL config overrides, then, are:\n>\n>shared_buffers = 1024MB\n>work_mem = 1MB\n>maintenance_work_mem = 16MB\n>fsync = off\n>\n>Environment: Linux 2.6.15-23-amd64-generic on Ubuntu. Dual-core AMD\n>Opteron 280 with 4GB of RAM. LSI PCI-X Fusion-MPT SAS.\n>\n>Running with: pgbench -S -v -n -t 5000 -c 5.\n>\n>Results as a graph: http://purefiction.net/paste/pgbench.pdf\n>\n>Stats for CFLAGS=\"-O0\": 18440.181894 19207.882300 19894.432185\n>19635.625622 19876.858884 20032.597042 19683.597973 20370.166669\n>19989.157881 20207.343510 19993.745956 20081.353580 20356.416424\n>20047.810017 20319.834190 19417.807528 19906.788454 20536.039929\n>19491.308046 20002.144230\n>\n>Stats for CFLAGS=\"-O3 -msse2 -mfpmath=sse -funroll-loops -m64 - \n>march=opteron -pipe\": 23830.358351 26162.203569 25569.091264\n>26762.755665 26590.822550 26864.908197 26608.029665 26796.116921\n>26323.742015 26692.576261 26878.859132 26106.770425 26328.371664\n>26755.595130 25488.304946 26635.527959 26377.485023 24817.590708\n>26480.245737 26223.427801\n\n",
"msg_date": "Thu, 14 Dec 2006 14:28:42 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 14, 2006, at 20:28 , Ron wrote:\n\n> Can you do runs with just CFLAGS=\"-O3\" and just CFLAGS=\"-msse2 - \n> mfpmath=sse -funroll-loops -m64 - march=opteron -pipe\" as well ?\n\nAll right. From my perspective, the effect of -O3 is significant, \nwhereas architecture-related optimizations have no statistically \nsignificant effect. As far as I'm aware, though, there's no other \narch targets on the Opteron that will make sense, there being no \npredecessor CPU instruction set to choose from; -march=pentium4 \ndoesn't exist.\n\n> Also, what HDs are you using? How many in what config?\n\nI believe the volume is a two-drive RAID 1 configuration, but I'm not \nmanaging these servers, so I'll ask the company's support people.\n\nInterestingly enough I see that PostgreSQL seems to be writing around \n1MB/s during the pgbench run, even though I'm running pgbench in the - \nS mode. I haven't had the chance to look at the source yet; is it \nreally only doing selects?\n\nAlexander.\n",
"msg_date": "Thu, 14 Dec 2006 23:39:55 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 05:39 PM 12/14/2006, Alexander Staubo wrote:\n>On Dec 14, 2006, at 20:28 , Ron wrote:\n>\n>>Can you do runs with just CFLAGS=\"-O3\" and just CFLAGS=\"-msse2 - \n>>mfpmath=sse -funroll-loops -m64 - march=opteron -pipe\" as well ?\n>\n>All right. From my perspective, the effect of -O3 is significant, \n>whereas architecture-related optimizations have no statistically \n>significant effect.\n\nIs this opinion? Or have you rerun the tests using the flags I \nsuggested? If so, can you post the results?\n\nIf \"-O3 -msse2 - mfpmath=sse -funroll-loops -m64 - march=opteron \n-pipe\" results in a 30-40% speed up over \"-O0\", and\n\" -msse2 - mfpmath=sse -funroll-loops -m64 - march=opteron -pipe\" \nresults in a 5-10% speedup, then ~ 1/8 - 1/3 of the total possible \nspeedup is due to arch specific optimizations.\n\n(testing \"-O3\" in isolation in addition tests for independence of \nfactors as well as showing what \"plain\" \"-O3\" can accomplish.)\n\nSome might argue that a 5-10% speedup which represents 1/8 - 1/3 of \nthe total speedup is significant...\n\nBut enough speculating. I look forward to seeing your data.\n\nRon Peacetree \n\n",
"msg_date": "Thu, 14 Dec 2006 19:16:42 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 15, 2006, at 01:16 , Ron wrote:\n\n> At 05:39 PM 12/14/2006, Alexander Staubo wrote:\n>> On Dec 14, 2006, at 20:28 , Ron wrote:\n>>\n>>> Can you do runs with just CFLAGS=\"-O3\" and just CFLAGS=\"-msse2 - \n>>> mfpmath=sse -funroll-loops -m64 - march=opteron -pipe\" as well ?\n>>\n>> All right. From my perspective, the effect of -O3 is significant, \n>> whereas architecture-related optimizations have no statistically \n>> significant effect.\n>\n> Is this opinion? Or have you rerun the tests using the flags I \n> suggested? If so, can you post the results?\n\nSorry, I neglected to include the pertinent graph:\n\n http://purefiction.net/paste/pgbench2.pdf\n\nThe raw data:\n\nCFLAGS=\"-msse2 -mfpmath=sse -funroll-loops -m64 -march=opteron -pipe\":\n\n18480.899621 19977.162108 19640.562003 19823.585944 19500.293284 \n19964.383540 20228.664827\n20515.766366 19956.431120 19740.795459 20184.551390 19984.907398 \n20457.260691 19771.395220\n20159.225628 19907.248149 20197.580815 19947.498185 20209.450748 \n20088.501904\n\nCFLAGS=\"-O3\"\n\n23814.672315 26846.761905 27137.807960 26957.898233 27109.057570 \n26997.227925 27291.056939\n27565.553643 27422.624323 27392.397185 27757.144967 27402.365372 \n27563.365421 27349.544685\n27544.658154 26957.200592 27523.824623 27457.380654 27052.910082 \n24452.819263\n\nCFLAGS=\"-O0\"\n\n18440.181894 19207.882300 19894.432185 19635.625622 19876.858884 \n20032.597042 19683.597973\n20370.166669 19989.157881 20207.343510 19993.745956 20081.353580 \n20356.416424 20047.810017\n20319.834190 19417.807528 19906.788454 20536.039929 19491.308046 \n20002.144230\n\nCFLAGS=\"-O3 -msse2 -mfpmath=sse -funroll-loops -m64 -march=opteron - \npipe\"\n\n23830.358351 26162.203569 25569.091264 26762.755665 26590.822550 \n26864.908197 26608.029665\n26796.116921 26323.742015 26692.576261 26878.859132 26106.770425 \n26328.371664 26755.595130\n25488.304946 26635.527959 26377.485023 24817.590708 26480.245737 \n26223.427801\n\n> If \"-O3 -msse2 - mfpmath=sse -funroll-loops -m64 - march=opteron - \n> pipe\" results in a 30-40% speed up over \"-O0\", and\n> \" -msse2 - mfpmath=sse -funroll-loops -m64 - march=opteron -pipe\" \n> results in a 5-10% speedup, then ~ 1/8 - 1/3 of the total possible \n> speedup is due to arch specific optimizations.\n\nUnfortunately, I don't see a 5-10% speedup; \"-O0\" and \"-msse2 ...\" \nare statistically identical.\n\nAlexander.\n",
"msg_date": "Fri, 15 Dec 2006 01:27:58 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
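For anyone who wants to sanity-check the "statistically identical" reading of the "-O0" and "-msse2 ..." samples above, a small self-contained sketch like the following (class name arbitrary, numbers copied from the post) prints the mean and sample standard deviation of both sets, so the gap between the means can be judged against the spread within each set.

  // PgbenchStats.java -- hypothetical helper: summarizes the two tps samples
  // quoted in the post so the difference in means can be compared to the
  // per-sample spread.
  public class PgbenchStats {

      static double mean(double[] xs) {
          double sum = 0.0;
          for (double x : xs) sum += x;
          return sum / xs.length;
      }

      static double stddev(double[] xs) {
          double m = mean(xs);
          double ss = 0.0;
          for (double x : xs) ss += (x - m) * (x - m);
          return Math.sqrt(ss / (xs.length - 1));   // sample standard deviation
      }

      public static void main(String[] args) {
          double[] o0 = {            // CFLAGS="-O0"
              18440.181894, 19207.882300, 19894.432185, 19635.625622, 19876.858884,
              20032.597042, 19683.597973, 20370.166669, 19989.157881, 20207.343510,
              19993.745956, 20081.353580, 20356.416424, 20047.810017, 20319.834190,
              19417.807528, 19906.788454, 20536.039929, 19491.308046, 20002.144230};
          double[] archOnly = {      // CFLAGS="-msse2 ... -march=opteron -pipe" (no -O3)
              18480.899621, 19977.162108, 19640.562003, 19823.585944, 19500.293284,
              19964.383540, 20228.664827, 20515.766366, 19956.431120, 19740.795459,
              20184.551390, 19984.907398, 20457.260691, 19771.395220, 20159.225628,
              19907.248149, 20197.580815, 19947.498185, 20209.450748, 20088.501904};

          System.out.printf("-O0        : mean %.1f tps, stddev %.1f%n", mean(o0), stddev(o0));
          System.out.printf("arch flags : mean %.1f tps, stddev %.1f%n", mean(archOnly), stddev(archOnly));
      }
  }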
{
"msg_contents": "At 07:27 PM 12/14/2006, Alexander Staubo wrote:\n\n>Sorry, I neglected to include the pertinent graph:\n>\n> http://purefiction.net/paste/pgbench2.pdf\nIn fact, your graph suggests that using arch specific options in \naddition to -O3 actually =hurts= performance.\n\n...that seems unexpected...\nRon Peacetree \n\n",
"msg_date": "Thu, 14 Dec 2006 22:09:03 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Thu, 14 Dec 2006, Alexander Staubo wrote:\n\n> Interestingly enough I see that PostgreSQL seems to be writing around 1MB/s \n> during the pgbench run, even though I'm running pgbench in the -S mode. I \n> haven't had the chance to look at the source yet; is it really only doing \n> selects?\n\nI've noticed the same thing and have been meaning to figure out what the \ncause is. It's just doing a select in there; it's not even in a begin/end \nblock.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 14 Dec 2006 22:21:59 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 15, 2006, at 04:09 , Ron wrote:\n\n> At 07:27 PM 12/14/2006, Alexander Staubo wrote:\n>\n>> Sorry, I neglected to include the pertinent graph:\n>>\n>> http://purefiction.net/paste/pgbench2.pdf\n> In fact, your graph suggests that using arch specific options in \n> addition to -O3 actually =hurts= performance.\n\nThe difference is very slight. I'm going to run without -funroll- \nloops and -pipe (which are not arch-related) to get better data.\n\nAlexander.\n\n\n",
"msg_date": "Fri, 15 Dec 2006 10:53:25 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Fri, Dec 15, 2006 at 10:53:25AM +0100, Alexander Staubo wrote:\n> The difference is very slight. I'm going to run without -funroll- \n> loops and -pipe (which are not arch-related) to get better data.\n\n-pipe does not matter for the generated code; it only affects compiler speed.\n(It simply means that the compiler runs cpp | cc | as1 instead of cpp > tmp;\ncc < tmp > tmp2; as1 < tmp2.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 15 Dec 2006 11:28:23 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 04:54 AM 12/15/2006, Alexander Staubo wrote:\n>On Dec 15, 2006, at 04:09 , Ron wrote:\n>\n>>At 07:27 PM 12/14/2006, Alexander Staubo wrote:\n>>\n>>>Sorry, I neglected to include the pertinent graph:\n>>>\n>>> http://purefiction.net/paste/pgbench2.pdf\n>>In fact, your graph suggests that using arch specific options in \n>>addition to -O3 actually =hurts= performance.\n>\n>According to the tech staff, this is a Sun X4100 with a two-drive \n>RAID 1 volume. No idea about the make of the hard drives.\n>\n>Alexander.\nhttp://www.sun.com/servers/entry/x4100/features.xml\n\nSo we are dealing with a 1U 1-4S (which means 1-8C) AMD Kx box with \nup to 32GB of ECC RAM (DDR2 ?) and 2 Seagate 2.5\" SAS HDs.\n\nhttp://www.seagate.com/cda/products/discsales/index/1,,,00.html?interface=SAS\n\nMy bet is the X4100 contains one of the 3 models of Cheetah \n15K.4's. A simple du, dkinfo, whatever, will tell you which.\n\nI'm looking more closely into exactly what the various gcc -O \noptimizations do on Kx's as well.\n64b vs 32b gets x86 compatible code access to ~ 2x as many registers; \nand MMX or SSE instructions get you access to not only more \nregisters, but wider ones as well.\n\nAs one wit has noted, \"all optimization is an exercise in caching.\" \n(Terje Mathisen- one of the better assembler coders on the planet.)\n\nIt seems unusual that code generation options which give access to \nmore registers would ever result in slower code...\nRon Peacetree\n\n",
"msg_date": "Fri, 15 Dec 2006 06:55:52 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/15/06, Ron <[email protected]> wrote:\n> I'm looking more closely into exactly what the various gcc -O\n> optimizations do on Kx's as well.\n> 64b vs 32b gets x86 compatible code access to ~ 2x as many registers;\n> and MMX or SSE instructions get you access to not only more\n> registers, but wider ones as well.\n>\n> As one wit has noted, \"all optimization is an exercise in caching.\"\n> (Terje Mathisen- one of the better assembler coders on the planet.)\n>\n> It seems unusual that code generation options which give access to\n> more registers would ever result in slower code...\n\nThe slower is probably due to the unroll loops switch which can\nactually hurt code due to the larger footprint (less cache coherency).\n\nThe extra registers are not all that important because of pipelining\nand other hardware tricks. Pretty much all the old assembly\nstrategies such as forcing local variables to registers are basically\nobsolete...especially with regards to integer math. As I said before,\nmodern CPUs are essentially RISC engines with a CISC preprocessing\nengine laid in top.\n\nThings are much more complicated than they were in the old days where\nyou could count instructions for the assembly optimization process. I\nsuspect that there is little or no differnece between the -march=686\nand the various specifc archicectures. Did anybody think to look at\nthe binaries and look for the amount of differences? I bet you code\ncompiled for march=opteron will just fine on a pentium 2 if compiled\nfor 32 bit.\n\nmerlin\n",
"msg_date": "Fri, 15 Dec 2006 09:23:43 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Fri, 15 Dec 2006, Merlin Moncure wrote:\n\n> The slower is probably due to the unroll loops switch which can\n> actually hurt code due to the larger footprint (less cache coherency).\n\nThe cache issues are so important with current processors that I'd suggest \nthrowing -Os (optimize for size) into the mix people test. That one may \nstack usefully with -O2, but probably not with -O3 (3 includes \noptimizations that increase code size).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 15 Dec 2006 09:50:15 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 09:23 AM 12/15/2006, Merlin Moncure wrote:\n>On 12/15/06, Ron <[email protected]> wrote:\n>>\n>>It seems unusual that code generation options which give access to\n>>more registers would ever result in slower code...\n>\n>The slower is probably due to the unroll loops switch which can \n>actually hurt code due to the larger footprint (less cache coherency).\n\nI have seen that effect as well occasionally in the last few decades \n;-) OTOH, suspicion is not _proof_; and I've seen other \n\"optimizations\" turn out to be \"pessimizations\" over the years as well.\n\n\n>The extra registers are not all that important because of pipelining \n>and other hardware tricks.\n\nNo. Whoever told you this or gave you such an impression was \nmistaken. There are many instances of x86 compatible code that get \n30-40% speedups just because they get access to 16 rather than 8 GPRs \nwhen recompiled for x84-64.\n\n\n>Pretty much all the old assembly strategies such as forcing local \n>variables to registers are basically obsolete...especially with \n>regards to integer math.\n\nAgain, not true. OTOH, humans are unlikely at this point to be able \nto duplicate the accuracy of the compiler's register coloring \nalgorithms. Especially on x86 compatibles. (The flip side is that \n_expert_ humans can often put the quirky register set and instruction \npipelines of x86 compatibles to more effective use for a specific \nchunk of code than even the best compilers can.)\n\n\n>As I said before, modern CPUs are essentially RISC engines with a \n>CISC preprocessing engine laid in top.\n\nI'm sure you meant modern =x86 compatible= CPUs are essentially RISC \nengines with a CISC engine on top. Just as \"all the world's not a \nVAX\", \"all CPUs are not x86 compatibles\". Forgetting this has \noccasionally cost folks I know...\n\n\n>Things are much more complicated than they were in the old days \n>where you could count instructions for the assembly optimization process.\n\nThose were the =very= old days in computer time...\n\n\n>I suspect that there is little or no differnece between the \n>-march=686 and the various specifc archicectures.\n\nThere should be. The FSF compiler folks (and the rest of the \nindustry compiler folks for that matter) are far from stupid. They \nare not just adding compiler switches because they randomly feel like it.\n\nEvidence suggests that the most recent CPUs are in need of =more= \narch specific TLC compared to their ancestors, and that this trend is \nnot only going to continue, it is going to accelerate.\n\n\n>Did anybody think to look at the binaries and look for the amount of \n>differences? I bet you code compiled for march=opteron will just \n>fine on a pentium 2 if compiled\n>for 32 bit.\nSucker bet given that the whole point of a 32b x86 compatible is to \nbe able to run code on any I32 ISA. CPU.\nOTOH, I bet that code optimized for best performance on a P2 is not \ngetting best performance on a P4. Or vice versa. ;-)\n\nThe big arch specific differences in Kx's are in 64b mode. Not 32b\n\n\nRon Peacetree. \n\n",
"msg_date": "Fri, 15 Dec 2006 10:04:58 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On 12/15/06, Ron <[email protected]> wrote:\n> At 09:23 AM 12/15/2006, Merlin Moncure wrote:\n> >On 12/15/06, Ron <[email protected]> wrote:\n> >>\n> >>It seems unusual that code generation options which give access to\n> >>more registers would ever result in slower code...\n> >\n> >The slower is probably due to the unroll loops switch which can\n> >actually hurt code due to the larger footprint (less cache coherency).\n>\n> I have seen that effect as well occasionally in the last few decades\n> ;-) OTOH, suspicion is not _proof_; and I've seen other\n> \"optimizations\" turn out to be \"pessimizations\" over the years as well.\n>\n> >The extra registers are not all that important because of pipelining\n> >and other hardware tricks.\n>\n> No. Whoever told you this or gave you such an impression was\n> mistaken. There are many instances of x86 compatible code that get\n> 30-40% speedups just because they get access to 16 rather than 8 GPRs\n> when recompiled for x84-64.\n\nI'm not debating that this is true in specific cases. Encryption and\nvideo en/decoding have shown to be faster in 64 bit mode on the same\nachicture (a cursor search in google will confirm this). However,\n32-64 bit is not the same argument since there are a lot of other\nvariables besides more registers. 64 bit mode is often slower on many\nprograms because the extra code size from 64 bit pointers. We\nbenchmarked PostgreSQL internally here and found it to be fastest in\n32 bit mode running on a 64 bit platform -- this was on a quad opteron\n870 runnning our specific software stack, your results might be\ndiffernt of course.\n\n> >Pretty much all the old assembly strategies such as forcing local\n> >variables to registers are basically obsolete...especially with\n> >regards to integer math.\n>\n> Again, not true. OTOH, humans are unlikely at this point to be able\n> to duplicate the accuracy of the compiler's register coloring\n> algorithms. Especially on x86 compatibles. (The flip side is that\n> _expert_ humans can often put the quirky register set and instruction\n> pipelines of x86 compatibles to more effective use for a specific\n> chunk of code than even the best compilers can.)\n>\n>\n> >As I said before, modern CPUs are essentially RISC engines with a\n> >CISC preprocessing engine laid in top.\n>\n> I'm sure you meant modern =x86 compatible= CPUs are essentially RISC\n> engines with a CISC engine on top. Just as \"all the world's not a\n> VAX\", \"all CPUs are not x86 compatibles\". Forgetting this has\n> occasionally cost folks I know...\n\nyes, In fact made this point earler.\n\n> >Things are much more complicated than they were in the old days\n> >where you could count instructions for the assembly optimization process.\n>\n> Those were the =very= old days in computer time...\n>\n>\n> >I suspect that there is little or no differnece between the\n> >-march=686 and the various specifc archicectures.\n>\n> There should be. The FSF compiler folks (and the rest of the\n> industry compiler folks for that matter) are far from stupid. They\n> are not just adding compiler switches because they randomly feel like it.\n>\n> Evidence suggests that the most recent CPUs are in need of =more=\n> arch specific TLC compared to their ancestors, and that this trend is\n> not only going to continue, it is going to accelerate.\n>\n>\n> >Did anybody think to look at the binaries and look for the amount of\n> >differences? 
I bet you code compiled for march=opteron will just\n> >fine on a pentium 2 if compiled\n> >for 32 bit.\n> Sucker bet given that the whole point of a 32b x86 compatible is to\n> be able to run code on any I32 ISA. CPU.\n> OTOH, I bet that code optimized for best performance on a P2 is not\n> getting best performance on a P4. Or vice versa. ;-)\n>\n> The big arch specific differences in Kx's are in 64b mode. Not 32b\n\nI dont think so. IMO all the processor specific instruction sets were\nhacks of 32 bit mode to optimize specific tasks. Except for certain\nthings these instructions are rarely, if ever used in 64 bit mode,\nespecially in integer math (i.e. database binaries). Since Intel and\nAMD64 64 bit are virtually indentical I submit that -march is not\nreally important anymore except for very, very specific (but\nimportant) cases like spinlocks. This thread is about how much\narchitecture depenant binares can beat standard ones. I say they\ndon't very much at all, and with the specific exception of Daniel's\nbenchmarking the results posted to this list bear that out.\n\nmerlin\n",
"msg_date": "Fri, 15 Dec 2006 10:55:14 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 09:50 AM 12/15/2006, Greg Smith wrote:\n>On Fri, 15 Dec 2006, Merlin Moncure wrote:\n>\n>>The slower is probably due to the unroll loops switch which can \n>>actually hurt code due to the larger footprint (less cache coherency).\n>\n>The cache issues are so important with current processors that I'd \n>suggest throwing -Os (optimize for size) into the mix people \n>test. That one may stack usefully with -O2, but probably not with \n>-O3 (3 includes optimizations that increase code size).\n\n-Os\nOptimize for size. -Os enables all -O2 optimizations that do not \ntypically increase code size. It also performs further optimizations \ndesigned to reduce code size.\n\n-Os disables the following optimization flags:\n-falign-functions -falign-jumps -falign-loops -falign-labels\n-freorder-blocks -freorder-blocks-and-partition\n-fprefetch-loop-arrays\n-ftree-vect-loop-version\n\nHmmm. That list of disabled flags bears thought.\n\n-falign-functions -falign-jumps -falign-loops -falign-labels\n\n1= Most RISC CPUs performance is very sensitive to misalignment \nissues. Not recommended to turn these off.\n\n-freorder-blocks\nReorder basic blocks in the compiled function in order to reduce \nnumber of taken branches and improve code locality.\n\nEnabled at levels -O2, -O3.\n-freorder-blocks-and-partition\nIn addition to reordering basic blocks in the compiled function, in \norder to reduce number of taken branches, partitions hot and cold \nbasic blocks into separate sections of the assembly and .o files, to \nimprove paging and cache locality performance.\n\nThis optimization is automatically turned off in the presence of \nexception handling, for link once sections, for functions with a \nuser-defined section attribute and on any architecture that does not \nsupport named sections.\n\n2= Most RISC CPUs are cranky about branchy code and (lack of) cache \nlocality. Wouldn't suggest punting these either.\n\n-fprefetch-loop-arrays\nIf supported by the target machine, generate instructions to prefetch \nmemory to improve the performance of loops that access large arrays.\n\nThis option may generate better or worse code; results are highly \ndependent on the structure of loops within the source code.\n\n3= OTOH, This one looks worth experimenting with turning off.\n\n-ftree-vect-loop-version\nPerform loop versioning when doing loop vectorization on trees. When \na loop appears to be vectorizable except that data alignment or data \ndependence cannot be determined at compile time then vectorized and \nnon-vectorized versions of the loop are generated along with runtime \nchecks for alignment or dependence to control which version is \nexecuted. This option is enabled by default except at level -Os where \nit is disabled.\n\n4= ...and this one looks like a 50/50 shot.\n\nRon Peacetree\n\n\n\n",
"msg_date": "Fri, 15 Dec 2006 11:53:28 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 10:55 AM 12/15/2006, Merlin Moncure wrote:\n>On 12/15/06, Ron <[email protected]> wrote:\n>>\n>>There are many instances of x86 compatible code that get\n>>30-40% speedups just because they get access to 16 rather than 8 GPRs\n>>when recompiled for x84-64.\n>\n>...We benchmarked PostgreSQL internally here and found it to be \n>fastest in 32 bit mode running on a 64 bit platform -- this was on a \n>quad opteron 870 runnning our specific software stack, your results \n>might be differnt of course.\n\nOn AMD Kx's, you probably will get best performance in 64b mode (so \nyou get all those extra registers and other stuff) while using 32b \npointers (to keep code size and memory footprint down).\n\nOn Intel C2's, things are more complicated since Intel's x86-64 \nimplementation and memory IO architecture are both different enough \nfrom AMD's to have caused some consternation on occasion when Intel's \n64b performance did not match AMD's.\n\n\n\n>>The big arch specific differences in Kx's are in 64b mode. Not 32b\n>\n>I dont think so. IMO all the processor specific instruction sets were\n>hacks of 32 bit mode to optimize specific tasks. Except for certain\n>things these instructions are rarely, if ever used in 64 bit mode,\n>especially in integer math (i.e. database binaries). Since Intel and\n>AMD64 64 bit are virtually indentical I submit that -march is not\n>really important anymore except for very, very specific (but\n>important) cases like spinlocks.\n\nTake a good look at the processor specific manuals and the x86-64 \nbenches around the net. The evidence outside the DBMS domain is \npretty solidly in contrast to your statement and \nposition. Admittedly, DBMS are not web servers or FPS games or ... \nThat's why we need to do our own rigorous study of the subject.\n\n\n>This thread is about how much architecture depenant binares can beat \n>standard ones. I say they don't very much at all, and with the \n>specific exception of Daniel's\n>benchmarking the results posted to this list bear that out.\n...and IMHO the issue is still very much undecided given that we \ndon't have enough properly obtained and documented evidence.\n\nATM, the most we can say is that in a number of systems with modest \nphysical IO subsystems that are not running Gentoo Linux we have not \nbeen able to duplicate the results. (We've also gotten some \ninteresting results like yours suggesting the arch specific \noptimizations are bad for pg performance in your environment.)\n\nIn the process questions have been asked and issues raised regarding \nboth the tolls involved and the proper way to use them.\n\nWe really do need to have someone other than Daniel duplicate his \nGentoo environment and independently try to duplicate his results.\n\n\n...and let us bear in mind that this is not just intellectual \ncuriosity. The less pg is mysterious, the better the odds pg will be \nadopted in any specific case.\nRon Peacetree\n \n\n",
"msg_date": "Fri, 15 Dec 2006 12:24:46 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Dec 15, 2006, at 17:53 , Ron wrote:\n\n> At 09:50 AM 12/15/2006, Greg Smith wrote:\n>> On Fri, 15 Dec 2006, Merlin Moncure wrote:\n>>\n>>> The slower is probably due to the unroll loops switch which can \n>>> actually hurt code due to the larger footprint (less cache \n>>> coherency).\n>>\n>> The cache issues are so important with current processors that I'd \n>> suggest throwing -Os (optimize for size) into the mix people \n>> test. That one may stack usefully with -O2, but probably not with \n>> -O3 (3 includes optimizations that increase code size).\n>\n> -Os\n> Optimize for size. -Os enables all -O2 optimizations that do not \n> typically increase code size. It also performs further \n> optimizations designed to reduce code size.\n\nSo far I have been compiling PostgreSQL and running my pgbench script \nmanually, but this makes me want to modify my script to run pgbench \nautomatically using all possible permutations of a set of compiler \nflags.\n\nLast I tried GCC to produce 32-bit code on this Opteron system, \nthough, it complained about the lack of a compiler; can I persuade it \nto generate 32-bit code (or 32-bit pointers for that matter) without \ngoing the cross-compilation route?\n\nAlexander.\n",
"msg_date": "Fri, 15 Dec 2006 23:29:18 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
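On automating the runs mentioned above: a rough, hypothetical sketch of the measurement half of such a harness follows, written in Java only to keep the examples in this archive in one language. It assumes an already built and running server, pgbench on the PATH, and the "tps = ... (including connections establishing)" output line printed by the 8.x pgbench; rebuilding the server with each CFLAGS permutation still has to happen outside the program.

  import java.io.BufferedReader;
  import java.io.InputStreamReader;

  // PgbenchRunner.java -- hypothetical harness: repeats a read-only pgbench
  // run against an already-running server and prints the tps lines, so the
  // output of differently compiled builds can be collected and compared.
  public class PgbenchRunner {
      public static void main(String[] args) throws Exception {
          String db = args.length > 0 ? args[0] : "pgbench";   // placeholder database name
          int runs = 20;
          for (int i = 0; i < runs; i++) {
              ProcessBuilder pb = new ProcessBuilder(
                  "pgbench", "-S", "-v", "-n", "-t", "5000", "-c", "5", db);
              pb.redirectErrorStream(true);
              Process p = pb.start();
              BufferedReader out = new BufferedReader(
                  new InputStreamReader(p.getInputStream()));
              String line;
              while ((line = out.readLine()) != null) {
                  // pgbench prints lines such as
                  // "tps = 20032.597042 (including connections establishing)"
                  if (line.startsWith("tps =") && line.contains("including")) {
                      System.out.println(line);
                  }
              }
              p.waitFor();
          }
      }
  }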
{
"msg_contents": "On Fri, Dec 15, 2006 at 12:24:46PM -0500, Ron wrote:\n>ATM, the most we can say is that in a number of systems with modest \n>physical IO subsystems \n\n\nSo I reran it on a 3.2GHz xeon with 6G RAM off a ramdisk; I/O ain't the \nbottleneck on that one. Results didn't show didn't show any signficant \ngains regardless of compilation options (results hovered around 12k \ntps). If people want to continue this, I will point out that they should \nmake sure they're linked against the optimized libpq rather than an \nexisting one elsewhere in the library path. Beyond that, I'm done with \nthis thread. Maybe there are some gains to be found somewhere, but the \ntesting done thus far (while limited) is sufficient, IMO, to demonstrate \nthat compiler options aren't going to provide a blow-your-socks-off \ndramatic performance improvement.\n\nMike Stone\n",
"msg_date": "Fri, 15 Dec 2006 19:06:56 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Alexander Staubo wrote:\n\n> On Dec 15, 2006, at 17:53 , Ron wrote:\n> \n>> At 09:50 AM 12/15/2006, Greg Smith wrote:\n>>\n>>> On Fri, 15 Dec 2006, Merlin Moncure wrote:\n>>>\n>>>> The slower is probably due to the unroll loops switch which can \n>>>> actually hurt code due to the larger footprint (less cache coherency).\n>>>\n>>> The cache issues are so important with current processors that I'd \n>>> suggest throwing -Os (optimize for size) into the mix people test. \n> \n> So far I have been compiling PostgreSQL and running my pgbench script \n> manually, but this makes me want to modify my script to run pgbench \n> automatically using all possible permutations of a set of compiler flags.\n\nI don't know if it's practical, but this link comes to mind:\n\nhttp://clusty.com/search?query=acovea\n\n-- \nCosimo\n",
"msg_date": "Sat, 16 Dec 2006 09:27:01 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "At 07:06 PM 12/15/2006, Michael Stone wrote:\n>On Fri, Dec 15, 2006 at 12:24:46PM -0500, Ron wrote:\n>>ATM, the most we can say is that in a number of systems with modest \n>>physical IO subsystems\n>\n>\n>So I reran it on a 3.2GHz xeon with 6G RAM off a ramdisk; I/O ain't \n>the bottleneck on that one. Results didn't show didn't show any \n>signficant gains regardless of compilation options (results hovered \n>around 12k tps). If people want to continue this, I will point out \n>that they should make sure they're linked against the optimized \n>libpq rather than an existing one elsewhere in the library path. \n>Beyond that, I'm done with this thread. Maybe there are some gains \n>to be found somewhere, but the testing done thus far (while limited) \n>is sufficient, IMO, to demonstrate that compiler options aren't \n>going to provide a blow-your-socks-off dramatic performance improvement.\nAFAICT, no one has stated there would be a \"blow-your-socks-off \ndramatic performance improvement\" for pg due to compilation \noptions. Just that there might be some, and there might be some that \nare arch specific.\n\nSo far these experiments have shown\n= multiple instances of a ~30-35% performance improvement going from \n-O0 to --O3\n= 1 instance of arch specific options hurting performance when \ncombined with -O3\n= 1 instance of arch specific options helping performance on an OS \nthat only one person has tested (Gentoo Linux)\n= that a 2.33 GHz C2D Mac laptop (under what OS?) with a typical \nlaptop modest physical IO subystem can do ~2100tps\n= that pg has a \"speed limit\" on a 3.2GHz Xeon (which kind?) with 6G \nRAM off a ramdisk (under what OS?) of ~12K tps\n(I'd be curious to see what this limit is with better CPUs and memory \nsubsystems)\n\nNote that except for the first point, all the other results are \nsingletons that as of yet have not been reproduced.\n\nThe most important \"gain\" IMO is Knowledge, and I'd say there is \nstill more to learn and/or verify IMHO. YMMV.\n\nRon Peacetree\n\n",
"msg_date": "Sat, 16 Dec 2006 10:53:21 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Sat, Dec 16, 2006 at 10:53:21AM -0500, Ron wrote:\n> AFAICT, no one has stated there would be a \"blow-your-socks-off \n> dramatic performance improvement\" for pg due to compilation \n> options. Just that there might be some, and there might be some that \n> are arch specific.\n\nFWIW, the original claim was: \"It's really important to have your GLIBC\ncompiled for your processor. It is essencial for performance.\"\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 16 Dec 2006 17:04:13 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "On Sat, Dec 16, 2006 at 10:53:21AM -0500, Ron wrote:\n>The most important \"gain\" IMO is Knowledge, and I'd say there is \n>still more to learn and/or verify IMHO. YMMV.\n\nWell, I think there are other areas where I can spend my time where \npotential gains are more likely. YMMV (although, I note, you don't seem \nto be spending much of your own time testing this)\n\nMike Stone\n",
"msg_date": "Sat, 16 Dec 2006 12:33:41 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
},
{
"msg_contents": "Sorry for the delay in responding. I had familial obligations.\n\nAs a matter of fact, I am spending a decent amount of time on \nthis. I don't usually pore through documentation for compilers and \nOS's to the degree I've been since this thread started. Nor do I \nusually try and get access to the HW I'm presently tracking down.\n\nI'll post my thoughts re: detailed analysis of gcc/g++ compiler \noptions later today or tomorrow as work schedule allows.\n\nWhy this is worth it:\n1= Any gains from setup and configuration are the cheapest ones \navailable once we codify how to obtain them.\n2= any public database or knowledge about how to best setup, \nconfigure, and test pg is very good for the community.\n3= developers need to know and agree on proper procedure and \ntechnique for generating results for discussion or we end up wasting \na lot of time.\n4= measuring and documenting pg performance means we know where best \nto allocate resources for improving pg. Or where using pg is \n(in)appropriate compared to competitors.\n\nPotential performance gains are not the only value of this thread.\nRon Peacetree\n\n\nAt 12:33 PM 12/16/2006, Michael Stone wrote:\n>On Sat, Dec 16, 2006 at 10:53:21AM -0500, Ron wrote:\n>>The most important \"gain\" IMO is Knowledge, and I'd say there is \n>>still more to learn and/or verify IMHO. YMMV.\n>\n>Well, I think there are other areas where I can spend my time where \n>potential gains are more likely. YMMV (although, I note, you don't \n>seem to be spending much of your own time testing this)\n\n",
"msg_date": "Mon, 18 Dec 2006 07:19:39 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New to PostgreSQL, performance considerations"
}
] |
[
{
"msg_contents": "Hello,\n\nHow to get Postgresql Threshold value ?. Any commands available ?. \n\nRegards, Ravi\nDISCLAIMER \nThe contents of this e-mail and any attachment(s) are confidential and intended for the \n\nnamed recipient(s) only. It shall not attach any liability on the originator or HCL or its \n\naffiliates. Any views or opinions presented in this email are solely those of the author and \n\nmay not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n\ndissemination, copying, disclosure, modification, distribution and / or publication of this \n\nmessage without the prior written consent of the author of this e-mail is strictly \n\nprohibited. If you have received this email in error please delete it and notify the sender \n\nimmediately. Before opening any mail and attachments please check them for viruses and \n\ndefect.\n",
"msg_date": "Mon, 11 Dec 2006 13:07:54 +0530",
"msg_from": "\"Ravindran G - TLS, Chennai.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql - Threshold value."
},
{
"msg_contents": "On 12/11/06, Ravindran G - TLS, Chennai. <[email protected]> wrote:\n> Hello,\n>\n> How to get Postgresql Threshold value ?. Any commands available ?.\n\nWhat is meant my threshold value ?\n",
"msg_date": "Mon, 11 Dec 2006 14:49:17 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql - Threshold value."
},
{
"msg_contents": "Hi,\n\ntry using:\n\ntmp=# show all;\n\nand\n\ntmp=# show geqo_threshold;\n\n\nRegards,\n\n Kaloyan Iliev\n\nRavindran G - TLS, Chennai. wrote:\n\n>Hello,\n>\n>How to get Postgresql Threshold value ?. Any commands available ?. \n>\n>Regards, Ravi\n>DISCLAIMER \n>The contents of this e-mail and any attachment(s) are confidential and intended for the \n>\n>named recipient(s) only. It shall not attach any liability on the originator or HCL or its \n>\n>affiliates. Any views or opinions presented in this email are solely those of the author and \n>\n>may not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n>\n>dissemination, copying, disclosure, modification, distribution and / or publication of this \n>\n>message without the prior written consent of the author of this e-mail is strictly \n>\n>prohibited. If you have received this email in error please delete it and notify the sender \n>\n>immediately. Before opening any mail and attachments please check them for viruses and \n>\n>defect.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n>\n> \n>\n\n",
"msg_date": "Mon, 11 Dec 2006 11:34:24 +0200",
"msg_from": "Kaloyan Iliev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql - Threshold value."
}
] |
[
{
"msg_contents": "Thank you very much for your reply. \n\nThis is not working in Postgresql 8.1.4. Its throwing some error. \n\nBTW, how to fetch the total disk space that is alloted to DB in postgresql\n?. Any commands available ?. \n\nRegards, Ravi\n\n\n\n\n-----Original Message-----\nFrom: Kaloyan Iliev [mailto:[email protected]]\nSent: Monday, December 11, 2006 3:04 PM\nTo: Ravindran G - TLS, Chennai.; [email protected]\nSubject: Re: [PERFORM] Postgresql - Threshold value.\n\n\nHi,\n\ntry using:\n\ntmp=# show all;\n\nand\n\ntmp=# show geqo_threshold;\n\n\nRegards,\n\n Kaloyan Iliev\n\nRavindran G - TLS, Chennai. wrote:\n\n>Hello,\n>\n>How to get Postgresql Threshold value ?. Any commands available ?. \n>\n>Regards, Ravi\n>DISCLAIMER \n>The contents of this e-mail and any attachment(s) are confidential and\nintended for the \n>\n>named recipient(s) only. It shall not attach any liability on the\noriginator or HCL or its \n>\n>affiliates. Any views or opinions presented in this email are solely those\nof the author and \n>\n>may not necessarily reflect the opinions of HCL or its affiliates. Any form\nof reproduction, \n>\n>dissemination, copying, disclosure, modification, distribution and / or\npublication of this \n>\n>message without the prior written consent of the author of this e-mail is\nstrictly \n>\n>prohibited. If you have received this email in error please delete it and\nnotify the sender \n>\n>immediately. Before opening any mail and attachments please check them for\nviruses and \n>\n>defect.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n>\n> \n>\n",
"msg_date": "Mon, 11 Dec 2006 19:41:29 +0530",
"msg_from": "\"Ravindran G - TLS, Chennai.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql - Threshold value."
},
{
"msg_contents": "am Mon, dem 11.12.2006, um 19:41:29 +0530 mailte Ravindran G - TLS, Chennai. folgendes:\n> Thank you very much for your reply. \n> \n> This is not working in Postgresql 8.1.4. Its throwing some error. \n\nWhich errors?\n\ntest=# show geqo_threshold;\n geqo_threshold\n----------------\n 12\n(1 row)\n\ntest=*# select version();\n version\n-------------------------------------------------------------------------------------------\n PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n(1 row)\n\n\n\n> \n> BTW, how to fetch the total disk space that is alloted to DB in postgresql\n> ?. Any commands available ?. \n\nYes, RTFM.\nhttp://www.postgresql.org/docs/8.1/interactive/functions-admin.html\n-> pg_database_size(name)\n\n\n> \n> Regards, Ravi\n> \n> \n> \n> \n> -----Original Message-----\n\nPlease, no silly fullquote below your text.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Mon, 11 Dec 2006 15:24:46 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql - Threshold value."
}
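For the disk-space question that follows in this thread, a minimal JDBC sketch along these lines reads the on-disk size of the current database via pg_database_size() and pg_size_pretty(), both present from 8.1 on; the connection URL and credentials are placeholders. Free space on the volume is an operating-system figure and is not something PostgreSQL tracks itself.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  // DbSize.java -- sketch with placeholder names: prints how much disk space
  // the current database occupies, using the 8.1 admin functions.
  public class DbSize {
      public static void main(String[] args) throws Exception {
          Class.forName("org.postgresql.Driver");   // not needed with JDBC 4 drivers
          Connection conn = DriverManager.getConnection(
              "jdbc:postgresql://localhost:5432/mydb", "user", "password");
          Statement st = conn.createStatement();
          ResultSet rs = st.executeQuery(
              "SELECT pg_database_size(current_database()), " +
              "       pg_size_pretty(pg_database_size(current_database()))");
          if (rs.next()) {
              System.out.println("database size: " + rs.getLong(1) + " bytes ("
                                 + rs.getString(2) + ")");
          }
          rs.close();
          st.close();
          conn.close();
      }
  }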
] |
[
{
"msg_contents": "Thanks. \n\nI am using Postgres 8.1.4 in windows 2000 and i don't get the proper\nresponse for threshold.\n\n---------\n\npg_database_size(name) - is giving the current space used by DB. But I want\nlike \"Total space utilized by DB\" and \"Free space\". \n\nRegards, Ravi\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of A. Kretschmer\nSent: Monday, December 11, 2006 7:55 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Postgresql - Threshold value.\n\n\nam Mon, dem 11.12.2006, um 19:41:29 +0530 mailte Ravindran G - TLS,\nChennai. folgendes:\n> Thank you very much for your reply. \n> \n> This is not working in Postgresql 8.1.4. Its throwing some error. \n\nWhich errors?\n\ntest=# show geqo_threshold;\n geqo_threshold\n----------------\n 12\n(1 row)\n\ntest=*# select version();\n version\n----------------------------------------------------------------------------\n---------------\n PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5\n(Debian 1:3.3.5-13)\n(1 row)\n\n\n\n> \n> BTW, how to fetch the total disk space that is alloted to DB in postgresql\n> ?. Any commands available ?. \n\nYes, RTFM.\nhttp://www.postgresql.org/docs/8.1/interactive/functions-admin.html\n-> pg_database_size(name)\n\n\n> \n> Regards, Ravi\n> \n> \n> \n> \n> -----Original Message-----\n\nPlease, no silly fullquote below your text.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\nDISCLAIMER \nThe contents of this e-mail and any attachment(s) are confidential and intended for the \n\nnamed recipient(s) only. It shall not attach any liability on the originator or HCL or its \n\naffiliates. Any views or opinions presented in this email are solely those of the author and \n\nmay not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n\ndissemination, copying, disclosure, modification, distribution and / or publication of this \n\nmessage without the prior written consent of the author of this e-mail is strictly \n\nprohibited. If you have received this email in error please delete it and notify the sender \n\nimmediately. Before opening any mail and attachments please check them for viruses and \n\ndefect.\n",
"msg_date": "Mon, 11 Dec 2006 20:13:56 +0530",
"msg_from": "\"Ravindran G - TLS, Chennai.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql - Threshold value."
},
{
"msg_contents": "On 12/11/06, Ravindran G - TLS, Chennai. <[email protected]> wrote:\n> Thanks.\n>\n> I am using Postgres 8.1.4 in windows 2000 and i don't get the proper\n> response for threshold.\n\nwhat is the response you get ? please be specific about the issues.\n\nalso the footer that comes with your emails are\nnot appreciated by many people. if possible pls avoid it.\n\nRegds\nmallah.\n\n>\n> ---------\n>\n",
"msg_date": "Mon, 11 Dec 2006 21:12:16 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql - Threshold value."
},
{
"msg_contents": "\"Rajesh Kumar Mallah\" <[email protected]> writes:\n> On 12/11/06, Ravindran G - TLS, Chennai. <[email protected]> wrote:\n>> I am using Postgres 8.1.4 in windows 2000 and i don't get the proper\n>> response for threshold.\n\n> what is the response you get ? please be specific about the issues.\n\nEven more to the point, what sort of response do you think you should\nget? What kind of threshold are you talking about, and why do you think\nit even exists in Postgres?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2006 11:09:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql - Threshold value. "
}
] |
[
{
"msg_contents": "Hello!\n\nIn our JAVA application we do multiple inserts to a table by data from a \nHash Map. Due to poor database access implemention - done by another \ncompany (we got the job to enhance the software) - we cannot use prepared \nstatements. (We are not allowed to change code at database access!)\nFirst, we tried to fire one INSERT statement per record to insert. This \ncosts 3 ms per row which is to slow because normally we insert 10.000 \nrecords which results in 30.000 ms just for inserts.\n\nfor(){\nsql = \"INSERT INTO tblfoo(foo,bar) VALUES(\"+it.next()+\",\"+CONST.BAR+\");\";\n}\n\nI was searching for an quicker way - MSSQL offers Array Inserts - at \nPostgreSQL. The only solution seem to be \"INSERT INTO foo SELECT\" and this \nis really dirty.\nI improved the inserts using the subselect with union.\n\nsql = \"INSERT INTO tblfoo(foo,bar) \";\nfor(){\nsql += \"SELECT \"+it.next()+\",\"+CONST.BAR+\" UNION \" ...\n}\n\nThis results in a really long INSERT INTO SELECT UNION statement and works \ncorrect and quick but looks dirty.\n\nWhen I heard about COPY I thought this will be the right way. But it does \nnot work using JDBC. Testing via psql does it perfect but sending the same \nSQL statements via JDBC throws an error.\n-> BEGIN\nsql = \"COPY tblfoo(foo,bar) FROM STDIN;\\n1 'foobar'\\n2 'foobar'\\n\\\\.\";\n-> COMMIT\nERROR: syntax error at or near \"1\" at character 34\n\nSo, my questions:\nIs it possible to use COPY FROM STDIN with JDBC?\nWill it bring performance improvement compared to SELECT UNION solution?\n\nmany thanks in advance,\nJens Schipkowski\n\n-- \n**\nAPUS Software GmbH\n",
"msg_date": "Mon, 11 Dec 2006 17:19:27 +0100",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "really quick multiple inserts can use COPY?"
},
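For reference: the psql test works while the same text fails through JDBC because the driver of that era simply sent the whole string as one SQL statement and had no way to feed the COPY data rows. Later official pgjdbc releases expose COPY through org.postgresql.copy.CopyManager; a sketch of what that looks like (table and column names taken from the post, connection details and values are placeholders) follows. If the Connection comes from a pool, it may need to be unwrapped to the driver's own connection class before the cast to PGConnection succeeds.

  import java.io.StringReader;
  import java.sql.Connection;
  import java.sql.DriverManager;

  import org.postgresql.PGConnection;
  import org.postgresql.copy.CopyManager;

  // CopyExample.java -- sketch only: streams tab-separated rows into
  // tblfoo(foo,bar) with COPY ... FROM STDIN via the driver's CopyManager.
  public class CopyExample {
      public static void main(String[] args) throws Exception {
          Connection conn = DriverManager.getConnection(
              "jdbc:postgresql://localhost:5432/mydb", "user", "password");

          // COPY text-format input: one line per row, columns separated by tabs.
          StringBuilder rows = new StringBuilder();
          rows.append("1\tfoobar\n");
          rows.append("2\tfoobar\n");

          CopyManager cm = ((PGConnection) conn).getCopyAPI();
          long inserted = cm.copyIn("COPY tblfoo (foo, bar) FROM STDIN",
                                    new StringReader(rows.toString()));
          System.out.println(inserted + " rows copied");
          conn.close();
      }
  }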
{
"msg_contents": "\"Jens Schipkowski\" <jens.schipkowski 'at' apus.co.at> writes:\n\n> Hello!\n> \n> In our JAVA application we do multiple inserts to a table by data from\n> a Hash Map. Due to poor database access implemention - done by\n> another company (we got the job to enhance the software) - we cannot\n> use prepared statements. (We are not allowed to change code at\n> database access!)\n> First, we tried to fire one INSERT statement per record to insert.\n> This costs 3 ms per row which is to slow because normally we insert\n> 10.000 records which results in 30.000 ms just for inserts.\n> \n> for(){\n> sql = \"INSERT INTO tblfoo(foo,bar) VALUES(\"+it.next()+\",\"+CONST.BAR+\");\";\n> }\n\nYou should try to wrap that into a single transaction. PostgreSQL\nwaits for I/O write completion for each INSERT as it's\nimplicitely in its own transaction. Maybe the added performance\nwould be satisfactory for you.\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "11 Dec 2006 17:53:52 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: really quick multiple inserts can use COPY?"
},
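A hedged sketch of the single-transaction idea with plain JDBC Statements (no PreparedStatement involved, so it stays within the constraint described above); table, values and connection details are placeholders. The INSERT strings themselves are unchanged; only the surrounding transaction handling and batching change.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;
  import java.util.Arrays;
  import java.util.List;

  // BatchedInserts.java -- sketch: sends plain INSERT strings, but inside one
  // transaction and as one JDBC batch, so the server waits for a WAL flush
  // once per COMMIT instead of once per row.
  public class BatchedInserts {
      public static void main(String[] args) throws Exception {
          Connection conn = DriverManager.getConnection(
              "jdbc:postgresql://localhost:5432/mydb", "user", "password");
          List<String> foos = Arrays.asList("1", "2", "3");   // stands in for the HashMap iteration
          String bar = "'foobar'";                            // stands in for CONST.BAR

          conn.setAutoCommit(false);            // BEGIN; everything below is one transaction
          Statement st = conn.createStatement();
          for (String foo : foos) {
              st.addBatch("INSERT INTO tblfoo (foo, bar) VALUES (" + foo + ", " + bar + ")");
          }
          st.executeBatch();                    // executes the accumulated statements in one go
          conn.commit();                        // COMMIT; the only synchronous WAL flush
          st.close();
          conn.close();
      }
  }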
{
"msg_contents": "Jens Schipkowski <[email protected]> schrieb:\n\n> Hello!\n> \n> In our JAVA application we do multiple inserts to a table by data from a \n> Hash Map. Due to poor database access implemention - done by another \n> company (we got the job to enhance the software) - we cannot use prepared \n> statements. (We are not allowed to change code at database access!)\n> First, we tried to fire one INSERT statement per record to insert. This \n> costs 3 ms per row which is to slow because normally we insert 10.000 \n> records which results in 30.000 ms just for inserts.\n\nCan you change this from INSERT-Statements to COPY? Copy is *much*\nfaster than INSERT.\n\nIf no, do you using all INSERTs in one transaction? I believe, a 'BEGIN'\nand a 'COMMIT' around all INSERTs may increase the speed.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Mon, 11 Dec 2006 17:57:32 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: really quick multiple inserts can use COPY?"
},
{
"msg_contents": "> So, my questions:\n> Is it possible to use COPY FROM STDIN with JDBC?\n\nShould be. Its at least possible using DBI and DBD::Pg (perl)\n\n\n my $copy_sth = $dbh -> prepare( \"COPY\ngeneral.datamining_mailing_lists (query_id,email_key) FROM STDIN;\") ;\n $copy_sth -> execute();\n while (my ($email_key ) = $fetch_sth -> fetchrow_array ()) {\n $dbh -> func(\"$query_id\\t$email_key\\n\", 'putline');\n }\n $fetch_sth -> finish();\n $dbh -> func(\"\\\\.\\n\", 'putline');\n $dbh -> func('endcopy');\n $copy_sth->finish();\n\nSome JDBC expert would tell better how its done with JDBC.\n\n\n> Will it bring performance improvement compared to SELECT UNION solution?\n\nCOPY is quite faast.\n\nRegds\nmallah.\n\n>\n> many thanks in advance,\n> Jens Schipkowski\n>\n> --\n> **\n> APUS Software GmbH\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Mon, 11 Dec 2006 22:31:08 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: really quick multiple inserts can use COPY?"
},
{
"msg_contents": "\"Jens Schipkowski\" <[email protected]> writes:\n> Is it possible to use COPY FROM STDIN with JDBC?\n\nYou should be asking the pgsql-jdbc list, not here. (I know I've seen\nmention of a JDBC patch to support COPY, but I dunno if it's made it into\nany official version.)\n\n> Will it bring performance improvement compared to SELECT UNION solution?\n\nPlease, at least be smart enough to use UNION ALL not UNION.\n\nIf you're using 8.2 you could also consider using INSERT with multiple\nVALUES-lists.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Dec 2006 12:11:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: really quick multiple inserts can use COPY? "
},
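The multi-row VALUES form mentioned here is accepted from 8.2 onward; a sketch of building it as one string follows (same hypothetical table as in the thread; real code would still need to escape or validate the interpolated values).

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;
  import java.util.Arrays;
  import java.util.List;

  // MultiValuesInsert.java -- sketch: one INSERT carrying many rows, which
  // PostgreSQL 8.2+ accepts and which avoids the SELECT ... UNION ALL trick.
  public class MultiValuesInsert {
      public static void main(String[] args) throws Exception {
          Connection conn = DriverManager.getConnection(
              "jdbc:postgresql://localhost:5432/mydb", "user", "password");
          List<String> foos = Arrays.asList("1", "2", "3");

          StringBuilder sql = new StringBuilder("INSERT INTO tblfoo (foo, bar) VALUES ");
          for (int i = 0; i < foos.size(); i++) {
              if (i > 0) sql.append(", ");
              sql.append("(").append(foos.get(i)).append(", 'foobar')");
          }
          Statement st = conn.createStatement();
          // e.g. INSERT INTO tblfoo (foo, bar) VALUES (1, 'foobar'), (2, 'foobar'), (3, 'foobar')
          st.executeUpdate(sql.toString());
          st.close();
          conn.close();
      }
  }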
{
"msg_contents": "On 12/11/06, Andreas Kretschmer <[email protected]> wrote:\n> Jens Schipkowski <[email protected]> schrieb:\n>\n> > Hello!\n> >\n> > In our JAVA application we do multiple inserts to a table by data from a\n> > Hash Map. Due to poor database access implemention - done by another\n> > company (we got the job to enhance the software) - we cannot use prepared\n> > statements. (We are not allowed to change code at database access!)\n> > First, we tried to fire one INSERT statement per record to insert. This\n> > costs 3 ms per row which is to slow because normally we insert 10.000\n> > records which results in 30.000 ms just for inserts.\n>\n> Can you change this from INSERT-Statements to COPY? Copy is *much*\n> faster than INSERT.\n>\n> If no, do you using all INSERTs in one transaction? I believe, a 'BEGIN'\n> and a 'COMMIT' around all INSERTs may increase the speed.\n\n\nPerformance increment can also be gained by disabling constraints in the\ntransaction. These disabled constraints are invoked at the end of the\ntransaction according to the SQL standard, so no worries about data\nconsistency.\n\nHmmm... PG currently supports disabling foreign constraints only. But that\ncan still be significant.\n\n--Imad\nwww.EnterpriseDB.com\n",
"msg_date": "Tue, 12 Dec 2006 00:22:19 +0500",
"msg_from": "imad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: really quick multiple inserts can use COPY?"
},
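A sketch of the SQL involved in the previous suggestion, assuming the foreign keys were declared DEFERRABLE, which is not the default and may require a schema change; NOT NULL, CHECK and UNIQUE constraints are not deferrable in this PostgreSQL version. Connection details and table are placeholders.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  // DeferredChecks.java -- sketch: postpones DEFERRABLE foreign-key checks to
  // commit time for the current transaction.
  public class DeferredChecks {
      public static void main(String[] args) throws Exception {
          Connection conn = DriverManager.getConnection(
              "jdbc:postgresql://localhost:5432/mydb", "user", "password");
          conn.setAutoCommit(false);
          Statement st = conn.createStatement();
          st.execute("SET CONSTRAINTS ALL DEFERRED");   // affects the current transaction only
          st.executeUpdate("INSERT INTO tblfoo (foo, bar) VALUES (1, 'foobar')");
          st.executeUpdate("INSERT INTO tblfoo (foo, bar) VALUES (2, 'foobar')");
          conn.commit();   // deferred foreign-key checks run here
          st.close();
          conn.close();
      }
  }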
{
"msg_contents": "Thanks a lot to all for your tips.\n\nOf course, I am doing all the INSERTs using a transaction. So the cost per \nINSERT dropped from 30 ms to 3 ms.\nThe improvement factor matches with the hint by Brian Hurt.\nSorry, I forgot to mention we are using PostgreSQL 8.1.4.\nThanks for the code snippet posted by mallah. It looks like you are using \nprepared statements, which are not available to us.\nBut I will check our database access if its possible to do a workaround, \nbecause this looks clean and quick to me.\n\nregards\nJens Schipkowski\n\n\nOn Mon, 11 Dec 2006 17:53:52 +0100, Guillaume Cottenceau <[email protected]> wrote:\n\n> \"Jens Schipkowski\" <jens.schipkowski 'at' apus.co.at> writes:\n>\n>> Hello!\n>>\n>> In our JAVA application we do multiple inserts to a table by data from\n>> a Hash Map. Due to poor database access implemention - done by\n>> another company (we got the job to enhance the software) - we cannot\n>> use prepared statements. (We are not allowed to change code at\n>> database access!)\n>> First, we tried to fire one INSERT statement per record to insert.\n>> This costs 3 ms per row which is to slow because normally we insert\n>> 10.000 records which results in 30.000 ms just for inserts.\n>>\n>> for(){\n>> sql = \"INSERT INTO tblfoo(foo,bar) \n>> VALUES(\"+it.next()+\",\"+CONST.BAR+\");\";\n>> }\n>\n> You should try to wrap that into a single transaction. PostgreSQL\n> waits for I/O write completion for each INSERT as it's\n> implicitely in its own transaction. Maybe the added performance\n> would be satisfactory for you.\n>\n\n\n\n-- \n**\nAPUS Software GmbH\n",
"msg_date": "Tue, 12 Dec 2006 08:45:49 +0100",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: really quick multiple inserts can use COPY?"
},
{
"msg_contents": "\n\n\n\nJens Schipkowski wrote:\n> Thanks a lot to all for your tips.\n>\n> Of course, I am doing all the INSERTs using a transaction. So the cost \n> per INSERT dropped from 30 ms to 3 ms.\n> The improvement factor matches with the hint by Brian Hurt.\n> Sorry, I forgot to mention we are using PostgreSQL 8.1.4.\n> Thanks for the code snippet posted by mallah. It looks like you are \n> using prepared statements, which are not available to us.\n> But I will check our database access if its possible to do a \n> workaround, because this looks clean and quick to me.\n>\n> regards\n> Jens Schipkowski\n>\n>\n> On Mon, 11 Dec 2006 17:53:52 +0100, Guillaume Cottenceau <[email protected]> \n> wrote:\n>\n>> \"Jens Schipkowski\" <jens.schipkowski 'at' apus.co.at> writes:\n>>\n>>> Hello!\n>>>\n>>> In our JAVA application we do multiple inserts to a table by data from\n>>> a Hash Map. Due to poor database access implemention - done by\n>>> another company (we got the job to enhance the software) - we cannot\n>>> use prepared statements. (We are not allowed to change code at\n>>> database access!)\n>>> First, we tried to fire one INSERT statement per record to insert.\n>>> This costs 3 ms per row which is to slow because normally we insert\n>>> 10.000 records which results in 30.000 ms just for inserts.\n>>>\n>>> for(){\n>>> sql = \"INSERT INTO tblfoo(foo,bar) \n>>> VALUES(\"+it.next()+\",\"+CONST.BAR+\");\";\n>>> }\n>>\n>> You should try to wrap that into a single transaction. PostgreSQL\n>> waits for I/O write completion for each INSERT as it's\n>> implicitely in its own transaction. Maybe the added performance\n>> would be satisfactory for you.\n>>\n>\n>\n>\n> --**\n> APUS Software GmbH\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n>\n>\n\n\nThis link might be what you are looking for, it has some information \nabout implementing COPY in the JDBC driver. Check the reply message as \nwell.\n\nhttp://archives.postgresql.org/pgsql-jdbc/2005-04/msg00134.php\n\n\nAnother solution might be to have Java dump the contents of the HashMap \nto a CVS file and have it load through psql with COPY commands.\n\nGood luck,\n\nNick\n\n",
"msg_date": "Tue, 12 Dec 2006 09:12:34 +0100",
"msg_from": "nicky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: really quick multiple inserts can use COPY?"
}
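A hedged sketch of the COPY route suggested above, assuming the rows have first been written out to a CSV file; the file path and the tblfoo table are placeholders, not the application's actual schema:

-- Server-side COPY: the file must be readable by the PostgreSQL server process.
COPY tblfoo (foo, bar) FROM '/tmp/tblfoo.csv' WITH CSV;

-- Client-side alternative via psql, which streams a local file to the server:
-- \copy tblfoo (foo, bar) FROM 'tblfoo.csv' WITH CSV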
] |
[
{
"msg_contents": "Hi all,\n\nI'd like to get suggestions from all you out there for\na new Postgresql server that will replace an existing one.\n\nMy performance analysis shows very *low* iowaits,\nand very high loads at times of peak system activity.\nThe average concurrent processes number is 3/4, with peaks of 10/15.\n*Sustained* system load varies from 1.5 to 4, while peak load\nreaches 20 and above, always with low iowait%.\nI see this as a clear sign of more processors power need.\n\nI'm aware of the context-switching storm problem, but here\nthe cs stays well under 50,000, so I think it's not the problem\nhere.\n\nCurrent machine is an Acer Altos R700 (2 Xeon 2.8 Ghz, 4 Gb RAM),\nsimilar to this one (R710):\nhttp://www.acer.co.uk/acereuro/page9.do?sp=page4&dau34.oid=7036&UserCtxParam=0&GroupCtxParam=0&dctx1=17&CountryISOCtxParam=UK&LanguageISOCtxParam=en&ctx3=-1&ctx4=United+Kingdom&crc=334044639\n\nSo, I'm looking for advice on a mid-range OLTP server with\nthe following (flexible) requirements:\n\n- 4 physical processors expandable to 8 (dual core preferred),\n either Intel or AMD\n- 8 Gb RAM exp. to at least 32 Gb\n- Compatibility with Acer S300 External storage enclosure.\n The \"old\" server uses that, and it is boxed with 15krpm hdds\n in RAID-10, which do perfectly well their job.\n The enclosure is connected via 2 x LSI Logic PCI U320 controllers\n- On-site, same-day hardware support contract\n- Rack mount\n\nMachines we're evaluating currently include:\n- Acer Altos R910\n- Sun Fire V40Z,\n- HP Integrity RX4640\n- IBM eServer 460\n\nExperiences on these machines?\nOther suggestions?\n\nThanks.\n\n-- \nCosimo\n\n",
"msg_date": "Mon, 11 Dec 2006 17:26:49 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": true,
"msg_subject": "Looking for hw suggestions for high concurrency OLTP app"
},
{
"msg_contents": "The Sun X4600 is very good for this, the V40z is actually EOL so I'd stay\naway from it.\n\nYou can currently do 8 dual core CPUs with the X4600 and 128GB of RAM and\nsoon you should be able to do 8 quad core CPUs and 256GB of RAM.\n\n- Luke \n\n\nOn 12/11/06 8:26 AM, \"Cosimo Streppone\" <[email protected]> wrote:\n\n> Hi all,\n> \n> I'd like to get suggestions from all you out there for\n> a new Postgresql server that will replace an existing one.\n> \n> My performance analysis shows very *low* iowaits,\n> and very high loads at times of peak system activity.\n> The average concurrent processes number is 3/4, with peaks of 10/15.\n> *Sustained* system load varies from 1.5 to 4, while peak load\n> reaches 20 and above, always with low iowait%.\n> I see this as a clear sign of more processors power need.\n> \n> I'm aware of the context-switching storm problem, but here\n> the cs stays well under 50,000, so I think it's not the problem\n> here.\n> \n> Current machine is an Acer Altos R700 (2 Xeon 2.8 Ghz, 4 Gb RAM),\n> similar to this one (R710):\n> http://www.acer.co.uk/acereuro/page9.do?sp=page4&dau34.oid=7036&UserCtxParam=0\n> &GroupCtxParam=0&dctx1=17&CountryISOCtxParam=UK&LanguageISOCtxParam=en&ctx3=-1\n> &ctx4=United+Kingdom&crc=334044639\n> \n> So, I'm looking for advice on a mid-range OLTP server with\n> the following (flexible) requirements:\n> \n> - 4 physical processors expandable to 8 (dual core preferred),\n> either Intel or AMD\n> - 8 Gb RAM exp. to at least 32 Gb\n> - Compatibility with Acer S300 External storage enclosure.\n> The \"old\" server uses that, and it is boxed with 15krpm hdds\n> in RAID-10, which do perfectly well their job.\n> The enclosure is connected via 2 x LSI Logic PCI U320 controllers\n> - On-site, same-day hardware support contract\n> - Rack mount\n> \n> Machines we're evaluating currently include:\n> - Acer Altos R910\n> - Sun Fire V40Z,\n> - HP Integrity RX4640\n> - IBM eServer 460\n> \n> Experiences on these machines?\n> Other suggestions?\n> \n> Thanks.\n\n\n",
"msg_date": "Mon, 11 Dec 2006 10:37:51 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looking for hw suggestions for high concurrency"
},
{
"msg_contents": "On 12/11/06, Luke Lonergan <[email protected]> wrote:\n> The Sun X4600 is very good for this, the V40z is actually EOL so I'd stay\n> away from it.\n\nalso, it has no pci express slots. make sure to get pci-e slots :)\n\n> You can currently do 8 dual core CPUs with the X4600 and 128GB of RAM and\n> soon you should be able to do 8 quad core CPUs and 256GB of RAM.\n\n...and this 6 of them (wow!). the v40z was top of its class. Will K8L\nrun on this server?\n\nmerlin\n",
"msg_date": "Mon, 11 Dec 2006 15:19:40 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looking for hw suggestions for high concurrency"
},
{
"msg_contents": "Merlin,\n\nOn 12/11/06 12:19 PM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> ...and this 6 of them (wow!). the v40z was top of its class. Will K8L\n> run on this server?\n\nNo official word yet.\n\nThe X4600 slipped in there quietly under the X4500 (Thumper) announcement,\nbut it's a pretty awesome server. It's been in production for\nsupercomputing in Japan for a long while now, so I'd trust it.\n\n- Luke\n\n\n",
"msg_date": "Mon, 11 Dec 2006 13:39:43 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looking for hw suggestions for high concurrency"
}
] |
[
{
"msg_contents": "Hi list !\n\nI am running a query to update the boolean field of a table based on\nanother table's fields.\n\nThe query is (changed names for readability):\nUPDATE t1\nSET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')\nFROM t2\nWHERE t1.uid = t2.uid\n\nt2.uid is the PRIMARY KEY.\nt2 only has ~1000 rows, so I think it fits fully in memory.\nt1 as ~2.000.000 rows.\nThere is an index on t1.uid also.\n\nThe explain (sorry, not explain analyze available yet) is :\n\nHash Join (cost=112.75..307410.10 rows=2019448 width=357)\n Hash Cond: (\"outer\".uid= \"inner\".uid)\n -> Seq Scan on t1 (cost=0.00..261792.01 rows=2033001 width=340)\n -> Hash (cost=110.20..110.20 rows=1020 width=53)\n -> Seq Scan on t2 (cost=0.00..110.20 rows=1020 width=53)\n\nMy query has been running for more than 1.5 hour now, and it is still running.\nNothing else is running on the server.\nThere are two multicolumn-indexes on this column (both are 3-columns indexes). One of them has a \nfunctional column (date_trunc('month', datefield)).\n\nDo you think the problem is with the indexes ?\n\nThe hardware is not great, but the database is on a RAID1 array, so its not bad either.\nI am surprised that it takes more than 3 seconds per row to be updated.\n\nThanks for your opinion on this !\n\n--\nArnaud\n",
"msg_date": "Wed, 13 Dec 2006 11:51:10 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow update with simple query"
},
{
"msg_contents": "On mi�, 2006-12-13 at 11:51 +0100, Arnaud Lesauvage wrote:\n> Hi list !\n> \n> I am running a query to update the boolean field of a table based on\n> another table's fields.\n> \n> The query is (changed names for readability):\n> UPDATE t1\n> SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')\n> FROM t2\n> WHERE t1.uid = t2.uid\n> \n> t2.uid is the PRIMARY KEY.\n> t2 only has ~1000 rows, so I think it fits fully in memory.\n> t1 as ~2.000.000 rows.\n> There is an index on t1.uid also.\n> \n> The explain (sorry, not explain analyze available yet) is :\n> \n> Hash Join (cost=112.75..307410.10 rows=2019448 width=357)\n> Hash Cond: (\"outer\".uid= \"inner\".uid)\n> -> Seq Scan on t1 (cost=0.00..261792.01 rows=2033001 width=340)\n> -> Hash (cost=110.20..110.20 rows=1020 width=53)\n> -> Seq Scan on t2 (cost=0.00..110.20 rows=1020 width=53)\n> \n> My query has been running for more than 1.5 hour now, and it is still running.\n> Nothing else is running on the server.\n> There are two multicolumn-indexes on this column (both are 3-columns indexes). One of them has a \n> functional column (date_trunc('month', datefield)).\n> \n> Do you think the problem is with the indexes ?\n\nI guess so. are you sure about the index on t1.uid?\nwhat are the column definitions for t1.uid and t2.uid ?\nare they the same ?\nyou should ba able to get a plan similar to:\nMerge Join (cost=0.00..43.56 rows=1000 width=11)\n Merge Cond: (\"outer\".uid = \"inner\".uid)\n -> Index Scan using t1i on t1 (cost=0.00..38298.39 rows=2000035\nwidth=10)\n -> Index Scan using t2i on t2 (cost=0.00..26.73 rows=1000 width=5)\n\nwhat postgres version are you using ?\n\ngnari\n\n\n\n\n> \n> The hardware is not great, but the database is on a RAID1 array, so its not bad either.\n> I am surprised that it takes more than 3 seconds per row to be updated.\n> \n> Thanks for your opinion on this !\n> \n> --\n> Arnaud\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Wed, 13 Dec 2006 11:20:03 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Ragnar a �crit :\n>> Do you think the problem is with the indexes ?\n> \n> I guess so. are you sure about the index on t1.uid?\n> what are the column definitions for t1.uid and t2.uid ?\n> are they the same ?\n\nMan, no !!!\nI just checked and indeed, no index on this column. I probably dropped \nit lately.\nThanks Ragnar.\n(t1.uid and t2.uid were the same, character(32) columns)\n\n> you should ba able to get a plan similar to:\n> Merge Join (cost=0.00..43.56 rows=1000 width=11)\n> Merge Cond: (\"outer\".uid = \"inner\".uid)\n> -> Index Scan using t1i on t1 (cost=0.00..38298.39 rows=2000035\n> width=10)\n> -> Index Scan using t2i on t2 (cost=0.00..26.73 rows=1000 width=5)\n> \n> what postgres version are you using ?\n\nOoops, forgot that too : 8.1.4\n\nI am creating the index right now, I'll tell you if this fixes the problem.\nThanks for your help !\n\n--\nArnaud\n",
"msg_date": "Wed, 13 Dec 2006 12:35:04 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
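A short sketch of the fix being applied here; the index name is arbitrary, and whether the planner actually switches to an index-based plan still depends on statistics and costs:

CREATE INDEX t1_uid_idx ON t1 (uid);
ANALYZE t1;   -- refresh statistics so the planner can consider the new index
EXPLAIN
UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')
FROM t2
WHERE t1.uid = t2.uid;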
{
"msg_contents": "Hi,\n\nthe problem is a combination of bad formed SQL and maybe missing indexes.\ntry this:\nUPDATE t1\nSET booleanfield = foo.bar\n FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM \nt2) AS foo\nWHERE t1.uid=foo.uid;\n\nand index t1.uid, t2.uid, t2.field, t2.field2\n\nregards,\nJens Schipkowski\n\nOn Wed, 13 Dec 2006 11:51:10 +0100, Arnaud Lesauvage <[email protected]> \nwrote:\n\n> Hi list !\n>\n> I am running a query to update the boolean field of a table based on\n> another table's fields.\n>\n> The query is (changed names for readability):\n> UPDATE t1\n> SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')\n> FROM t2\n> WHERE t1.uid = t2.uid\n>\n> t2.uid is the PRIMARY KEY.\n> t2 only has ~1000 rows, so I think it fits fully in memory.\n> t1 as ~2.000.000 rows.\n> There is an index on t1.uid also.\n>\n> The explain (sorry, not explain analyze available yet) is :\n>\n> Hash Join (cost=112.75..307410.10 rows=2019448 width=357)\n> Hash Cond: (\"outer\".uid= \"inner\".uid)\n> -> Seq Scan on t1 (cost=0.00..261792.01 rows=2033001 width=340)\n> -> Hash (cost=110.20..110.20 rows=1020 width=53)\n> -> Seq Scan on t2 (cost=0.00..110.20 rows=1020 width=53)\n>\n> My query has been running for more than 1.5 hour now, and it is still \n> running.\n> Nothing else is running on the server.\n> There are two multicolumn-indexes on this column (both are 3-columns \n> indexes). One of them has a functional column (date_trunc('month', \n> datefield)).\n>\n> Do you think the problem is with the indexes ?\n>\n> The hardware is not great, but the database is on a RAID1 array, so its \n> not bad either.\n> I am surprised that it takes more than 3 seconds per row to be updated.\n>\n> Thanks for your opinion on this !\n>\n> --\n> Arnaud\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n-- \n**\nAPUS Software GmbH\n",
"msg_date": "Wed, 13 Dec 2006 13:18:09 +0100",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Jens Schipkowski a écrit :\n> the problem is a combination of bad formed SQL and maybe missing indexes.\n> try this:\n> UPDATE t1\n> SET booleanfield = foo.bar\n> FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM \n> t2) AS foo\n> WHERE t1.uid=foo.uid;\n\n\nHi Jens,\nWhy is this query better than the other one ? Because it runs the \n\"(field IN ('some','other') AND field2 = 'Y')\" once and then executes \nthe join with the resulting set ?\n\n> and index t1.uid, t2.uid, t2.field, t2.field2\n\nt1.field can only take 3 or 4 values (don't remember exactly), and \nfield2 only 2 ('Y' or 'N'). So this fields have a very low cardinality.\nWon't the planner chose to do a table scan in such a case ?\n\nThanks for your advices !\n\n--\nArnaud\n",
"msg_date": "Wed, 13 Dec 2006 13:23:41 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage <[email protected]> \nwrote:\n\n> Jens Schipkowski a écrit :\n>> the problem is a combination of bad formed SQL and maybe missing \n>> indexes.\n>> try this:\n>> UPDATE t1\n>> SET booleanfield = foo.bar\n>> FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar \n>> FROM t2) AS foo\n>> WHERE t1.uid=foo.uid;\n>\n>\n> Hi Jens,\n> Why is this query better than the other one ? Because it runs the \n> \"(field IN ('some','other') AND field2 = 'Y')\" once and then executes \n> the join with the resulting set ?\nTrue. The Subselect in FROM clause will be executed once and will be \njoined using the condition at where clause. So your condition at t2 is not \nexecuted for each row in t1(2mio records) but for each row in t2(1k \nrecords). And the boolean value is already set during update.\n\nregards,\nJens\n\n>\n>> and index t1.uid, t2.uid, t2.field, t2.field2\n>\n> t1.field can only take 3 or 4 values (don't remember exactly), and \n> field2 only 2 ('Y' or 'N'). So this fields have a very low cardinality.\n> Won't the planner chose to do a table scan in such a case ?\n>\n> Thanks for your advices !\n>\n> --\n> Arnaud\n\n\n\n-- \n**\nAPUS Software GmbH\n",
"msg_date": "Wed, 13 Dec 2006 14:20:40 +0100",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Jens Schipkowski a écrit :\n> On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage <[email protected]> \n>> Why is this query better than the other one ? Because it runs the \n>> \"(field IN ('some','other') AND field2 = 'Y')\" once and then executes \n>> the join with the resulting set ?\n> True. The Subselect in FROM clause will be executed once and will be \n> joined using the condition at where clause. So your condition at t2 is not \n> executed for each row in t1(2mio records) but for each row in t2(1k \n> records). And the boolean value is already set during update.\n\nOK Jens, thanks for clarifying this.\nI thought the planner could guess what to do in such cases.\n\nRegards\n--\nArnaud\n",
"msg_date": "Wed, 13 Dec 2006 14:38:37 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "On mi�, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:\n> Jens Schipkowski a �crit :\n> > On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage <[email protected]> \n> >> Why is this query better than the other one ? Because it runs the \n> >> \"(field IN ('some','other') AND field2 = 'Y')\" once and then executes \n> >> the join with the resulting set ?\n> > True. The Subselect in FROM clause will be executed once and will be \n> > joined using the condition at where clause. So your condition at t2 is not \n> > executed for each row in t1(2mio records) but for each row in t2(1k \n> > records). And the boolean value is already set during update.\n> \n> OK Jens, thanks for clarifying this.\n> I thought the planner could guess what to do in such cases.\n\ndon't worry, it will.\nthis is not your problem\n\ngnari\n\n\n",
"msg_date": "Wed, 13 Dec 2006 14:49:00 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Ragnar a �crit :\n> On mi�, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:\n>> Jens Schipkowski a �crit :\n>> > On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage <[email protected]> \n>> >> Why is this query better than the other one ? Because it runs the \n>> >> \"(field IN ('some','other') AND field2 = 'Y')\" once and then executes \n>> >> the join with the resulting set ?\n>> > True. The Subselect in FROM clause will be executed once and will be \n>> > joined using the condition at where clause. So your condition at t2 is not \n>> > executed for each row in t1(2mio records) but for each row in t2(1k \n>> > records). And the boolean value is already set during update.\n>> \n>> OK Jens, thanks for clarifying this.\n>> I thought the planner could guess what to do in such cases.\n> \n> don't worry, it will.\n> this is not your problem\n\nIndeed, the new query does not perform that well :\n\n\"Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual time=53.995..246443.811 rows=2020061 loops=1)\"\n\" Hash Cond: (\"outer\".uid = \"inner\".uid)\"\n\" -> Seq Scan on t1 (cost=0.00..261792.01 rows=2033001 width=338) (actual time=19.342..234304.499 rows=2033001 loops=1)\"\n\" -> Hash (cost=110.20..110.20 rows=1020 width=53) (actual time=4.853..4.853 rows=1020 loops=1)\"\n\" -> Seq Scan on t2 (cost=0.00..110.20 rows=1020 width=53) (actual time=0.017..2.586 rows=1020 loops=1)\"\n\"Total runtime: 2777844.892 ms\"\n\nI removed all unnecessary indexes on t1 before running the query (I left the index on uid and the multicolumn index containind the updated field).\nI believe the multicolumn-functional-index computation is taking some time here, isn't it ?\n\nRegards\n--\nArnaud\n",
"msg_date": "Wed, 13 Dec 2006 16:19:55 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Arnaud,\n Have you run \"ANALYZE\" on the table after creating index?\n Also make sure that \"#effective_cache_size\" is set properly. A higher value makes it more likely to use index scans.\n \n Thanks\n asif ali\n\nArnaud Lesauvage <[email protected]> wrote: Ragnar a �crit :\n> On mi�, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:\n>> Jens Schipkowski a �crit :\n>> > On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage \n>> >> Why is this query better than the other one ? Because it runs the \n>> >> \"(field IN ('some','other') AND field2 = 'Y')\" once and then executes \n>> >> the join with the resulting set ?\n>> > True. The Subselect in FROM clause will be executed once and will be \n>> > joined using the condition at where clause. So your condition at t2 is not \n>> > executed for each row in t1(2mio records) but for each row in t2(1k \n>> > records). And the boolean value is already set during update.\n>> \n>> OK Jens, thanks for clarifying this.\n>> I thought the planner could guess what to do in such cases.\n> \n> don't worry, it will.\n> this is not your problem\n\nIndeed, the new query does not perform that well :\n\n\"Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual time=53.995..246443.811 rows=2020061 loops=1)\"\n\" Hash Cond: (\"outer\".uid = \"inner\".uid)\"\n\" -> Seq Scan on t1 (cost=0.00..261792.01 rows=2033001 width=338) (actual time=19.342..234304.499 rows=2033001 loops=1)\"\n\" -> Hash (cost=110.20..110.20 rows=1020 width=53) (actual time=4.853..4.853 rows=1020 loops=1)\"\n\" -> Seq Scan on t2 (cost=0.00..110.20 rows=1020 width=53) (actual time=0.017..2.586 rows=1020 loops=1)\"\n\"Total runtime: 2777844.892 ms\"\n\nI removed all unnecessary indexes on t1 before running the query (I left the index on uid and the multicolumn index containind the updated field).\nI believe the multicolumn-functional-index computation is taking some time here, isn't it ?\n\nRegards\n--\nArnaud\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n\n \n---------------------------------\nAny questions? Get answers on any topic at Yahoo! Answers. Try it now.\nArnaud, Have you run \"ANALYZE\" on the table after creating index? Also make sure that \"#effective_cache_size\" is set properly. A higher value makes it more likely to use index scans. Thanks asif aliArnaud Lesauvage <[email protected]> wrote: Ragnar a �crit :> On mi�, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:>> Jens Schipkowski a �crit :>> > On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage >> >> Why is this query better than the other one ? Because it runs the >> >> \"(field IN ('some','other') AND field2 = 'Y')\" once and then executes >> >> the join with the resulting set ?>> > True. The Subselect in FROM clause will be executed once and will be >> > joined\n using the condition at where clause. So your condition at t2 is not >> > executed for each row in t1(2mio records) but for each row in t2(1k >> > records). 
And the boolean value is already set during update.>> >> OK Jens, thanks for clarifying this.>> I thought the planner could guess what to do in such cases.> > don't worry, it will.> this is not your problemIndeed, the new query does not perform that well :\"Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual time=53.995..246443.811 rows=2020061 loops=1)\"\" Hash Cond: (\"outer\".uid = \"inner\".uid)\"\" -> Seq Scan on t1 (cost=0.00..261792.01 rows=2033001 width=338) (actual time=19.342..234304.499 rows=2033001 loops=1)\"\" -> Hash (cost=110.20..110.20 rows=1020 width=53) (actual time=4.853..4.853 rows=1020 loops=1)\"\" -> Seq Scan on t2 (cost=0.00..110.20 rows=1020 width=53) (actual\n time=0.017..2.586 rows=1020 loops=1)\"\"Total runtime: 2777844.892 ms\"I removed all unnecessary indexes on t1 before running the query (I left the index on uid and the multicolumn index containind the updated field).I believe the multicolumn-functional-index computation is taking some time here, isn't it ?Regards--Arnaud---------------------------(end of broadcast)---------------------------TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate\nAny questions? Get answers on any topic at Yahoo! Answers. Try it now.",
"msg_date": "Wed, 13 Dec 2006 08:43:57 -0800 (PST)",
"msg_from": "asif ali <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query"
},
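For reference, a hedged sketch of the checks suggested above; the SET value is only an illustration, not a recommendation for this particular machine:

ANALYZE t1;
ANALYZE t2;
SHOW effective_cache_size;          -- in 8.1 this is a number of 8KB pages
SET effective_cache_size = 100000;  -- session-only experiment, roughly 800MB; size it to match the OS cache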
{
"msg_contents": "Arnaud Lesauvage <[email protected]> writes:\n> Indeed, the new query does not perform that well :\n\n> \"Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual time=53.995..246443.811 rows=2020061 loops=1)\"\n> ...\n> \"Total runtime: 2777844.892 ms\"\n\n> I removed all unnecessary indexes on t1 before running the query (I left the index on uid and the multicolumn index containind the updated field).\n> I believe the multicolumn-functional-index computation is taking some time here, isn't it ?\n\nGiven that the plan itself only takes 246 sec, there's *something*\nassociated with row insertion that's eating the other 2500+ seconds.\nEither index entry computation or constraint checking ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 11:46:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query "
},
{
"msg_contents": "asif ali a �crit :\n> Arnaud,\n> Have you run \"ANALYZE\" on the table after creating index?\n\nYes, I have !\n\n> Also make sure that \"#effective_cache_size\" is set properly. A higher value makes it more likely to use index scans.\n\nIt is set to 50.000. I thought this would be enough, and maybe too much !\n\nThanks for your advice !\n\n--\nArnaud\n",
"msg_date": "Wed, 13 Dec 2006 17:47:20 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "But he's using 8.1.4-- in that version, an explain analyze would list\nthe time taken to go through triggers, so the fact that we don't see any\nof those lines means that it can't be constraint checking, so wouldn't\nit have to be the index update overhead?\n\n-- Mark\n\nOn Wed, 2006-12-13 at 11:46 -0500, Tom Lane wrote:\n> Arnaud Lesauvage <[email protected]> writes:\n> > Indeed, the new query does not perform that well :\n> \n> > \"Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual time=53.995..246443.811 rows=2020061 loops=1)\"\n> > ...\n> > \"Total runtime: 2777844.892 ms\"\n> \n> > I removed all unnecessary indexes on t1 before running the query (I left the index on uid and the multicolumn index containind the updated field).\n> > I believe the multicolumn-functional-index computation is taking some time here, isn't it ?\n> \n> Given that the plan itself only takes 246 sec, there's *something*\n> associated with row insertion that's eating the other 2500+ seconds.\n> Either index entry computation or constraint checking ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Wed, 13 Dec 2006 09:51:13 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Mark Lewis <[email protected]> writes:\n> But he's using 8.1.4-- in that version, an explain analyze would list\n> the time taken to go through triggers, so the fact that we don't see any\n> of those lines means that it can't be constraint checking, so wouldn't\n> it have to be the index update overhead?\n\nWell, it can't be foreign key checking. Could have been an expensive\nfunction in a CHECK constraint, maybe...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 13:47:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query "
},
{
"msg_contents": "Tom Lane a �crit :\n> Arnaud Lesauvage <[email protected]> writes:\n>> Indeed, the new query does not perform that well :\n> \n>> \"Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual time=53.995..246443.811 rows=2020061 loops=1)\"\n>> ...\n>> \"Total runtime: 2777844.892 ms\"\n> \n>> I removed all unnecessary indexes on t1 before running the query (I left the index on uid and the multicolumn index containind the updated field).\n>> I believe the multicolumn-functional-index computation is taking some time here, isn't it ?\n> \n> Given that the plan itself only takes 246 sec, there's *something*\n> associated with row insertion that's eating the other 2500+ seconds.\n> Either index entry computation or constraint checking ...\n\nThere is an insert trigger (but here I am only updating the data), and a\nmulticolumn functional index. That's all I can think of.\n\nI must be missing something, so here is the full table description.\nThe field I am updating is incluredansstats.\nThe field I am join on is userinternalid.\n\n\nCREATE TABLE statistiques.log\n(\n gid serial NOT NULL,\n userinternalid character(32),\n ip character varying(255),\n browser character varying(255),\n fichier character varying(255),\n querystring text,\n page character varying(255),\n useridentity character varying(100),\n incluredansstats boolean NOT NULL DEFAULT true,\n date character varying,\n heure character varying,\n dateformatee timestamp without time zone,\n sessionid character(32),\n sortindex integer,\n CONSTRAINT log_pkey PRIMARY KEY (gid)\n)\nWITHOUT OIDS;\nALTER TABLE statistiques.log OWNER TO postgres;ncluredansstats;\n\nCREATE INDEX idx_page_datemonth_incluredansstats\n ON statistiques.log\n USING btree\n (page, date_trunc('month'::text, dateformatee), incluredansstats);\n\nCREATE INDEX idx_userinternalid\n ON statistiques.log\n USING btree\n (userinternalid);\n\nCREATE INDEX idx_userinternalid_page_datemonth\n ON statistiques.log\n USING btree\n (userinternalid, page, date_trunc('month'::text, dateformatee));\n\nALTER TABLE statistiques.log\n ADD CONSTRAINT log_pkey PRIMARY KEY(gid);\n\nCREATE TRIGGER parse_log_trigger\n BEFORE INSERT\n ON statistiques.log\n FOR EACH ROW\n EXECUTE PROCEDURE statistiques.parse_log_trigger();\n\n\nThis was a \"one-shot\" query, so I don't really mind it being slow, but if you \nwant I can still troubleshoot it !\n",
"msg_date": "Thu, 14 Dec 2006 15:01:34 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
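Since the trigger above is defined BEFORE INSERT only, it should not fire for the UPDATE. If there were any doubt during a one-off maintenance run like this one, 8.1 allows disabling it temporarily; this is a sketch, not something suggested in the thread, and it takes an exclusive lock on the table:

ALTER TABLE statistiques.log DISABLE TRIGGER parse_log_trigger;
-- run the one-shot UPDATE here
ALTER TABLE statistiques.log ENABLE TRIGGER parse_log_trigger;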
{
"msg_contents": "Arnaud Lesauvage <[email protected]> writes:\n> I must be missing something, so here is the full table description.\n\nIt looks pretty harmless, except for\n\n> CREATE TRIGGER parse_log_trigger\n> BEFORE INSERT\n> ON statistiques.log\n> FOR EACH ROW\n> EXECUTE PROCEDURE statistiques.parse_log_trigger();\n\nIt seems the time must be going into this trigger function. What\ndoes it do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 10:18:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query "
},
{
"msg_contents": "Tom Lane a �crit :\n> Arnaud Lesauvage <[email protected]> writes:\n>> I must be missing something, so here is the full table description.\n> \n> It looks pretty harmless, except for\n> \n>> CREATE TRIGGER parse_log_trigger\n>> BEFORE INSERT\n>> ON statistiques.log\n>> FOR EACH ROW\n>> EXECUTE PROCEDURE statistiques.parse_log_trigger();\n> \n> It seems the time must be going into this trigger function. What\n> does it do?\n\nA lot of things ! Indeed, if it runs it will very badly hurt performances (table \nlookups, string manipulation, etc...) !\nBut it should only be tringered on INSERTs, and I am doing an UPDATE !\n\nI can post the function's body if you want.\n\nRegards\n--\nArnaud\n\n",
"msg_date": "Thu, 14 Dec 2006 16:23:00 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Arnaud Lesauvage <[email protected]> writes:\n> Tom Lane a �crit :\n>> It seems the time must be going into this trigger function. What\n>> does it do?\n\n> A lot of things ! Indeed, if it runs it will very badly hurt performances (table \n> lookups, string manipulation, etc...) !\n> But it should only be tringered on INSERTs, and I am doing an UPDATE !\n\nDoh, right, I obviously still need to ingest more caffeine this morning.\n\nI think the conclusion must be that there was just too much I/O to be\ndone to update all the rows. Have you done any tuning of shared_buffers\nand so forth? I recall having seen cases where update performance went\nbad as soon as the upper levels of a large index no longer fit into\nshared_buffers ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 11:19:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query "
},
{
"msg_contents": "Tom Lane a �crit :\n> I think the conclusion must be that there was just too much I/O to be\n> done to update all the rows. Have you done any tuning of shared_buffers\n> and so forth? I recall having seen cases where update performance went\n> bad as soon as the upper levels of a large index no longer fit into\n> shared_buffers ...\n\nYes, that's probably it.\nI think my raid1 array's performances are very bad.\nI am switching to a brand new hardware next week, I am quite confident that this \nwill solve many problems.\n\nThanks for helping !\n\nRegards\n--\nArnaud\n\n",
"msg_date": "Thu, 14 Dec 2006 17:26:23 +0100",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow update with simple query"
},
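A minimal sketch of the shared_buffers check Tom mentions; the example value is an assumption, and on 8.1 changing it requires editing postgresql.conf and restarting the server:

SHOW shared_buffers;        -- reported as a number of 8KB buffers in 8.1
-- in postgresql.conf, for example:
-- shared_buffers = 50000   -- about 400MB; sized so the upper levels of the large indexes stay cached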
{
"msg_contents": "Out of curiosity, how hard would it be to modify the output of EXPLAIN\nANALYZE when doing an insert/update to include the index update times\nand/or non-FK constraint checking times and/or the table row update\ntimes? Or any other numbers that might be useful in circumstances like\nthis. I'm wondering if it's possible to shed some light on the\nremaining dark shadows of PG performance troubleshooting.\n\n-- Mark Lewis\n\nOn Thu, 2006-12-14 at 11:19 -0500, Tom Lane wrote:\n> Arnaud Lesauvage <[email protected]> writes:\n> > Tom Lane a crit :\n> >> It seems the time must be going into this trigger function. What\n> >> does it do?\n> \n> > A lot of things ! Indeed, if it runs it will very badly hurt performances (table \n> > lookups, string manipulation, etc...) !\n> > But it should only be tringered on INSERTs, and I am doing an UPDATE !\n> \n> Doh, right, I obviously still need to ingest more caffeine this morning.\n> \n> I think the conclusion must be that there was just too much I/O to be\n> done to update all the rows. Have you done any tuning of shared_buffers\n> and so forth? I recall having seen cases where update performance went\n> bad as soon as the upper levels of a large index no longer fit into\n> shared_buffers ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n",
"msg_date": "Thu, 14 Dec 2006 08:34:55 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query"
},
{
"msg_contents": "Mark Lewis <[email protected]> writes:\n> Out of curiosity, how hard would it be to modify the output of EXPLAIN\n> ANALYZE when doing an insert/update to include the index update times\n> and/or non-FK constraint checking times and/or the table row update\n> times?\n\nI don't think it'd help much --- in an example like this, other tools\nlike vmstat would be more likely to be useful in investigating the\nproblem. There's also the problem that EXPLAIN ANALYZE overhead is\nalready too high --- another dozen gettimeofday() calls per row\nprocessed doesn't sound appetizing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 11:50:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update with simple query "
}
] |
[
{
"msg_contents": "Hi,\n\nOur application is using Postgres 7.4 and I'd like to understand the root\ncause of this problem:\n\nTo speed up overall insert time, our application will write thousands of\nrows, one by one, into a temp table (same structure as a permanent table),\nthen do a bulk insert from the temp table to the permanent table. After\nthis bulk insert is done, the temp table is truncated and the process is\nrepeated. We do this because Postgres can do many individual inserts to a\ntemp table much faster than to a permanent table.\n\nThe problem we are seeing is that over time, the cost of a single insert to\nthe temp table seems to grow. After a restart of postgres, a single insert\nto the temp table takes about 3ms. Over a few days, this grows to about\n60ms per insert. Restarting postgres drops this insert time back to 3ms,\nsupposedly because the temp table is re-created. Our workaround right now\nis to restart the database every few days, but we don't like this solution\nmuch.\n\nAny idea where the bloat is happening? I believe that if we were dropping\nand re-creating the temp table over and over, that could result in pg_class\nbloat (among other catalog tables), but what is going wrong if we use the\nsame table over and over and truncate it?\n\nThanks,\nSteve\n\nHi,\n \nOur application is using Postgres 7.4 and I'd like to understand the root cause of this problem:\n \nTo speed up overall insert time, our application will write thousands of rows, one by one, into a temp table (same structure as a permanent table), then do a bulk insert from the temp table to the permanent table. After this bulk insert is done, the temp table is truncated and the process is repeated. We do this because Postgres can do many individual inserts to a temp table much faster than to a permanent table.\n\n \nThe problem we are seeing is that over time, the cost of a single insert to the temp table seems to grow. After a restart of postgres, a single insert to the temp table takes about 3ms. Over a few days, this grows to about 60ms per insert. Restarting postgres drops this insert time back to 3ms, supposedly because the temp table is re-created. Our workaround right now is to restart the database every few days, but we don't like this solution much.\n\n \nAny idea where the bloat is happening? I believe that if we were dropping and re-creating the temp table over and over, that could result in pg_class bloat (among other catalog tables), but what is going wrong if we use the same table over and over and truncate it?\n\n \nThanks,\nSteve",
"msg_date": "Wed, 13 Dec 2006 11:44:19 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insertion to temp table deteriorating over time"
},
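A minimal sketch of the staging cycle described above; the table names and the LIKE shortcut are assumptions, not the application's actual schema:

CREATE TEMP TABLE staging (LIKE samples);   -- same column layout as the permanent table
-- the application performs thousands of single-row INSERTs into staging here
INSERT INTO samples SELECT * FROM staging;  -- one bulk move per cycle
TRUNCATE staging;                           -- reuse the same temp table for the next cycle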
{
"msg_contents": "On 12/13/06, Steven Flatt <[email protected]> wrote:\n> Hi,\n>\n> Our application is using Postgres 7.4 and I'd like to understand the root\n> cause of this problem:\n>\n> To speed up overall insert time, our application will write thousands of\n> rows, one by one, into a temp table\n\n1. how frequently are you commiting the transaction ?\n if you commit less frequetly it will be faster.\n\n2. If you use COPY instead of INSERT it will be faster.\n using COPY is easy with DBD::Pg (perl). In versions\n 8.x i think there has been major speed improvements\n in COPY.\n\nI do not know the root cause of slowdown though.\n\nRegds\nmallah.\n\n\n\n>\n",
"msg_date": "Wed, 13 Dec 2006 22:44:10 +0530",
"msg_from": "\"Rajesh Kumar Mallah\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Any idea where the bloat is happening? I believe that if we were dropping\n> and re-creating the temp table over and over, that could result in pg_class\n> bloat (among other catalog tables), but what is going wrong if we use the\n> same table over and over and truncate it?\n\nThat seems very strange --- I too would have expected a TRUNCATE to\nbring you back to ground zero performance-wise. I wonder whether the\nissue is not directly related to the temp table but is just some generic\nresource leakage problem in a very long-running backend. Have you\nchecked to see if the backend process bloats memory-wise, or perhaps has\na huge number of files open (I wonder if it could be leaking open file\nhandles to the successive generations of the temp table)? Are you sure\nthat the slowdown is specific to inserts into the temp table, as opposed\nto generic SQL activity?\n\nAlso, which PG version is this exactly (\"7.4\" is not specific enough)?\nOn what platform? Can you show us the full schema definition for the\ntemp table and any indexes on it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 12:29:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
{
"msg_contents": "After running some further standalone tests using temp tables, I'm not\nconvinced the problem is specific to temp table usage. In fact it looks\nlike generic SQL activity degrades over time.\n\nHaving said that, what kinds of things should I be looking for that could\ndeteriorate/bloat over time? Ordinarily the culprit might be infrequent\nvacuuming or analyzing, but that wouldn't be corrected by a restart of\nPostgres. In our case, restarting Postgres gives us a huge performance\nimprovement (for a short while, anyways).\n\nBy the way, we are using PG 7.4.6 on FreeBSD 5.30.0170. The temp table has\n15 columns: a timestamp, a double, and the rest integers. It has no\nindexes.\n\nThanks,\nSteve\n\n\nOn 12/13/06, Tom Lane <[email protected]> wrote:\n>\n> \"Steven Flatt\" <[email protected]> writes:\n> > Any idea where the bloat is happening? I believe that if we were\n> dropping\n> > and re-creating the temp table over and over, that could result in\n> pg_class\n> > bloat (among other catalog tables), but what is going wrong if we use\n> the\n> > same table over and over and truncate it?\n>\n> That seems very strange --- I too would have expected a TRUNCATE to\n> bring you back to ground zero performance-wise. I wonder whether the\n> issue is not directly related to the temp table but is just some generic\n> resource leakage problem in a very long-running backend. Have you\n> checked to see if the backend process bloats memory-wise, or perhaps has\n> a huge number of files open (I wonder if it could be leaking open file\n> handles to the successive generations of the temp table)? Are you sure\n> that the slowdown is specific to inserts into the temp table, as opposed\n> to generic SQL activity?\n>\n> Also, which PG version is this exactly (\"7.4\" is not specific enough)?\n> On what platform? Can you show us the full schema definition for the\n> temp table and any indexes on it?\n>\n> regards, tom lane\n>\n\nAfter running some further standalone tests using temp tables, I'm not convinced the problem is specific to temp table usage. In fact it looks like generic SQL activity degrades over time.\n \nHaving said that, what kinds of things should I be looking for that could deteriorate/bloat over time? Ordinarily the culprit might be infrequent vacuuming or analyzing, but that wouldn't be corrected by a restart of Postgres. In our case, restarting Postgres gives us a huge performance improvement (for a short while, anyways).\n\n \nBy the way, we are using PG 7.4.6 on FreeBSD 5.30.0170. The temp table has 15 columns: a timestamp, a double, and the rest integers. It has no indexes.\n \nThanks,\nSteve \nOn 12/13/06, Tom Lane <[email protected]> wrote:\n\"Steven Flatt\" <[email protected]> writes:\n> Any idea where the bloat is happening? I believe that if we were dropping> and re-creating the temp table over and over, that could result in pg_class> bloat (among other catalog tables), but what is going wrong if we use the\n> same table over and over and truncate it?That seems very strange --- I too would have expected a TRUNCATE tobring you back to ground zero performance-wise. I wonder whether theissue is not directly related to the temp table but is just some generic\nresource leakage problem in a very long-running backend. Have youchecked to see if the backend process bloats memory-wise, or perhaps hasa huge number of files open (I wonder if it could be leaking open file\nhandles to the successive generations of the temp table)? 
Are you surethat the slowdown is specific to inserts into the temp table, as opposedto generic SQL activity?Also, which PG version is this exactly (\"\n7.4\" is not specific enough)?On what platform? Can you show us the full schema definition for thetemp table and any indexes on it? regards, tom lane",
"msg_date": "Wed, 13 Dec 2006 18:17:41 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Having said that, what kinds of things should I be looking for that could\n> deteriorate/bloat over time? Ordinarily the culprit might be infrequent\n> vacuuming or analyzing, but that wouldn't be corrected by a restart of\n> Postgres. In our case, restarting Postgres gives us a huge performance\n> improvement (for a short while, anyways).\n\nDo you actually need to restart the postmaster, or is just starting a\nfresh session (fresh backend) sufficient? And again, have you monitored\nthe backend process to see if it's bloating memory-wise or open-file-wise?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 18:27:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Having said that, what kinds of things should I be looking for that could\n> deteriorate/bloat over time? Ordinarily the culprit might be infrequent\n> vacuuming or analyzing, but that wouldn't be corrected by a restart of\n> Postgres. In our case, restarting Postgres gives us a huge performance\n> improvement (for a short while, anyways).\n\n> By the way, we are using PG 7.4.6 on FreeBSD 5.30.0170. The temp table has\n> 15 columns: a timestamp, a double, and the rest integers. It has no\n> indexes.\n\nHm, *are* you vacuuming only infrequently? In particular, what is your\nmaintenance policy for pg_class?\n\nSome experimentation with TRUNCATE and VACUUM VERBOSE shows that in 7.4,\na TRUNCATE of a temp table with no indexes and no toast table generates\nthree dead row versions in pg_class. (I'm surprised that it's as many\nas three, but in any case the TRUNCATE would certainly have to do one\nupdate of the table's pg_class entry and thereby generate one dead row\nversion.)\n\nIf you're being sloppy about vacuuming pg_class, then over time the\nrepeated-truncate pattern would build up a huge number of dead rows\nin pg_class, *all with the same OID*. It's unsurprising that this\nwould create some slowness in looking up the temp table's pg_class\nentry.\n\nIf this theory is correct, the reason that starting a fresh backend\nmakes it fast again is that the new backend creates a whole new temp\ntable with a new OID assigned, and so the adjacent litter in pg_class\ndoesn't matter anymore (or not so much anyway).\n\nSolution would be to institute regular vacuuming of the system\ncatalogs...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 18:44:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
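A hedged sketch of the catalog maintenance Tom suggests; it needs to run as a database superuser, since ordinary roles cannot vacuum the system catalogs:

VACUUM VERBOSE pg_class;   -- reports how many dead row versions the TRUNCATE cycles have left behind
-- a rough look at the catalog's current size (planner estimates only):
SELECT relname, relpages, reltuples FROM pg_class WHERE relname = 'pg_class';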
{
"msg_contents": "Thanks for your replies.\n\nStarting a fresh session (not restarting the postmaster) seems to be\nsufficient to reset performance (and is an easy enough workaround). Still,\nit would be nice to know the root cause of the problem.\n\nThe backend process does not seem to be bloating memory-wise (I'm using\nvmstat to monitor memory usage on the machine). It also does not appear to\nbe bloating in terms of open file handles (using fstat, I can see the\nbackend process has 160-180 open file handles, not growing).\n\nRegarding your other email -- interesting -- but we are vacuuming pg_class\nevery hour. So I don't think the answer lies there...\n\nSteve\n\nOn 12/13/06, Tom Lane <[email protected]> wrote:\n>\n> \"Steven Flatt\" <[email protected]> writes:\n> > Having said that, what kinds of things should I be looking for that\n> could\n> > deteriorate/bloat over time? Ordinarily the culprit might be infrequent\n> > vacuuming or analyzing, but that wouldn't be corrected by a restart of\n> > Postgres. In our case, restarting Postgres gives us a huge performance\n> > improvement (for a short while, anyways).\n>\n> Do you actually need to restart the postmaster, or is just starting a\n> fresh session (fresh backend) sufficient? And again, have you monitored\n> the backend process to see if it's bloating memory-wise or open-file-wise?\n>\n> regards, tom lane\n>\n\nThanks for your replies.\n \nStarting a fresh session (not restarting the postmaster) seems to be sufficient to reset performance (and is an easy enough workaround). Still, it would be nice to know the root cause of the problem.\n \nThe backend process does not seem to be bloating memory-wise (I'm using vmstat to monitor memory usage on the machine). It also does not appear to be bloating in terms of open file handles (using fstat, I can see the backend process has 160-180 open file handles, not growing).\n\n \nRegarding your other email -- interesting -- but we are vacuuming pg_class every hour. So I don't think the answer lies there... \nSteve \nOn 12/13/06, Tom Lane <[email protected]> wrote:\n\"Steven Flatt\" <[email protected]> writes:\n> Having said that, what kinds of things should I be looking for that could> deteriorate/bloat over time? Ordinarily the culprit might be infrequent> vacuuming or analyzing, but that wouldn't be corrected by a restart of\n> Postgres. In our case, restarting Postgres gives us a huge performance> improvement (for a short while, anyways).Do you actually need to restart the postmaster, or is just starting afresh session (fresh backend) sufficient? And again, have you monitored\nthe backend process to see if it's bloating memory-wise or open-file-wise? regards, tom lane",
"msg_date": "Thu, 14 Dec 2006 15:40:24 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Regarding your other email -- interesting -- but we are vacuuming pg_class\n> every hour. So I don't think the answer lies there...\n\nThat's good, but is the vacuum actually accomplishing anything? I'm\nwondering if there's also a long-running transaction in the mix.\nTry a manual \"VACUUM VERBOSE pg_class;\" after the thing has slowed down,\nand see what it says about removable and nonremovable rows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 16:23:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
{
"msg_contents": "Here's the output of \"VACUUM VERBOSE pg_class\". I think it looks fine. I\neven did it three times in a row, each about 10 minutes apart, just to see\nwhat was changing:\n\n--------------------\nINFO: vacuuming \"pg_catalog.pg_class\"\nINFO: index \"pg_class_oid_index\" now contains 3263 row versions in 175\npages\nDETAIL: 5680 index row versions were removed.\n150 index pages have been deleted, 136 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_class_relname_nsp_index\" now contains 3263 row versions in\n1301\n pages\nDETAIL: 5680 index row versions were removed.\n822 index pages have been deleted, 734 are currently reusable.\nCPU 0.01s/0.01u sec elapsed 0.03 sec.\nINFO: \"pg_class\": removed 5680 row versions in 109 pages\nDETAIL: CPU 0.00s/0.01u sec elapsed 0.04 sec.\nINFO: \"pg_class\": found 5680 removable, 3263 nonremovable row versions in\n625 p\nages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 23925 unused item pointers.\n0 pages are entirely empty.\nCPU 0.02s/0.04u sec elapsed 0.10 sec.\nVACUUM\n--------------------\nINFO: vacuuming \"pg_catalog.pg_class\"\nINFO: index \"pg_class_oid_index\" now contains 3263 row versions in 175\npages\nDETAIL: 24 index row versions were removed.\n150 index pages have been deleted, 150 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_class_relname_nsp_index\" now contains 3263 row versions in\n1301\n pages\nDETAIL: 24 index row versions were removed.\n822 index pages have been deleted, 822 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_class\": removed 24 row versions in 2 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_class\": found 24 removable, 3263 nonremovable row versions in 625\npag\nes\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 29581 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n--------------------\nINFO: vacuuming \"pg_catalog.pg_class\"\nINFO: index \"pg_class_oid_index\" now contains 3263 row versions in 175\npages\nDETAIL: 150 index pages have been deleted, 150 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_class_relname_nsp_index\" now contains 3263 row versions in\n1301\n pages\nDETAIL: 822 index pages have been deleted, 822 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_class\": found 0 removable, 3263 nonremovable row versions in 625\npage\ns\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 29605 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n--------------------\n\nThe one thing that seems to be steadily increasing is the number of unused\nitem pointers. Not sure if that's normal. I should also point out that\nSELECT statements are not experiencing the same degradation as the INSERTs\nto the temp table. SELECTs are performing just as well now (24 hours since\nrestarting the connection) as they did immediately after restarting the\nconnection. INSERTs to the temp table are 5 times slower now than they were\n24 hours ago.\n\nI wonder if the problem has to do with a long running ODBC connection.\n\nSteve\n\n\nOn 12/14/06, Tom Lane <[email protected]> wrote:\n>\n> \"Steven Flatt\" <[email protected]> writes:\n> > Regarding your other email -- interesting -- but we are vacuuming\n> pg_class\n> > every hour. 
So I don't think the answer lies there...\n>\n> That's good, but is the vacuum actually accomplishing anything? I'm\n> wondering if there's also a long-running transaction in the mix.\n> Try a manual \"VACUUM VERBOSE pg_class;\" after the thing has slowed down,\n> and see what it says about removable and nonremovable rows.\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 15 Dec 2006 11:21:35 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
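A quick way to check the long-running-transaction theory raised above, while the INSERTs are slow, is to look at the oldest backends from psql. This is only a sketch: it assumes the 8.1-era pg_stat_activity columns and that command-string collection is enabled so current_query and query_start are populated; these versions do not expose the transaction start time directly, but a backend sitting on an old query is a good hint.

    -- Look for old backends whose open transactions could keep VACUUM
    -- from removing dead rows (8.1 column names assumed).
    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start
    LIMIT 10;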
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Here's the output of \"VACUUM VERBOSE pg_class\". I think it looks fine. I\n> even did it three times in a row, each about 10 minutes apart, just to see\n> what was changing:\n\nHm, look at the numbers of rows removed:\n\n> INFO: \"pg_class\": found 5680 removable, 3263 nonremovable row versions in\n> 625 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n\n> INFO: \"pg_class\": found 24 removable, 3263 nonremovable row versions in 625\n> pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n\n> INFO: \"pg_class\": found 0 removable, 3263 nonremovable row versions in 625\n> pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n\nThe lack of unremovable dead rows is good, but why were there so many\ndead rows the first time? You didn't say what the cycle time is on your\ntruncate-and-refill process, but the last two suggest that the average\nrate of accumulation of dead pg_class rows is only a couple per minute,\nin which case it's been a lot longer than an hour since the previous\nVACUUM of pg_class. I'm back to suspecting that you don't vacuum\npg_class regularly. You mentioned having an hourly cron job to fire off\nvacuums ... are you sure it's run as a database superuser?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2006 12:09:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
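One way to test the superuser question is to run the hourly vacuum command by hand from the same account the cron job uses. A minimal sketch (pg_user and its usesuper flag exist on both 7.4 and 8.1):

    -- Confirm the cron job's role really is a superuser, then see
    -- whether a manual pass removes anything from pg_class.
    SELECT usename, usesuper FROM pg_user WHERE usename = current_user;
    VACUUM VERBOSE pg_class;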
{
"msg_contents": "Our application is such that there is a great deal of activity at the\nbeginning of the hour and minimal activity near the end of the hour. Those\n3 vacuums were done at (approximately) 30 minutes past, 40 minutes past, and\n50 minutes past the hour, during low activity. Vacuums of pg_class look\nlike they're being done on the hour. So it's not surprising that the first\nvacuum found a lot of dead rows while the latter two found very few.\n\nIn fact, I just did another vacuum (about 30 minutes past the hour again)\nand got:\n\nINFO: \"pg_class\": found 5490 removable, 3263 nonremovable row versions in\n171 pages\nDETAIL: 0 dead row versions cannot be removed yet.\n\n... and clearly a vacuum was done under an hour ago.\n\nThe truncate and re-fill process is done once per hour, at the end of the\nhigh-load cycle, so I doubt that's even a big contributor to the number of\nremovable rows in pg_class.\n\nFor this particular setup, we expect high load for 10-15 minutes at the\nbeginning of the hour, which is the case when a new connection is\ninitialized. After a day or so (as is happening right now), the high-load\nperiod spills into the second half of the hour. Within 3-4 days, we start\nspilling into the next hour and, as you can imagine, everything gets behind\nand we spiral down from there. For now, our workaround is to manually kill\nthe connection every few days, but I would like a better solution than\nsetting up a cron job to do this!\n\nThanks again,\nSteve\n\n\nOn 12/15/06, Tom Lane <[email protected]> wrote:\n>\n> Hm, look at the numbers of rows removed:\n>\n> > INFO: \"pg_class\": found 5680 removable, 3263 nonremovable row versions\n> in\n> > 625 pages\n> > DETAIL: 0 dead row versions cannot be removed yet.\n>\n> > INFO: \"pg_class\": found 24 removable, 3263 nonremovable row versions in\n> 625\n> > pages\n> > DETAIL: 0 dead row versions cannot be removed yet.\n>\n> > INFO: \"pg_class\": found 0 removable, 3263 nonremovable row versions in\n> 625\n> > pages\n> > DETAIL: 0 dead row versions cannot be removed yet.\n>\n> The lack of unremovable dead rows is good, but why were there so many\n> dead rows the first time? You didn't say what the cycle time is on your\n> truncate-and-refill process, but the last two suggest that the average\n> rate of accumulation of dead pg_class rows is only a couple per minute,\n> in which case it's been a lot longer than an hour since the previous\n> VACUUM of pg_class. I'm back to suspecting that you don't vacuum\n> pg_class regularly. You mentioned having an hourly cron job to fire off\n> vacuums ... are you sure it's run as a database superuser?\n>\n> regards, tom lane\n>\n\nOur application is such that there is a great deal of activity at the beginning of the hour and minimal activity near the end of the hour. Those 3 vacuums were done at (approximately) 30 minutes past, 40 minutes past, and 50 minutes past the hour, during low activity. Vacuums of pg_class look like they're being done on the hour. So it's not surprising that the first vacuum found a lot of dead rows while the latter two found very few.\n\n \nIn fact, I just did another vacuum (about 30 minutes past the hour again) and got: \nINFO: \"pg_class\": found 5490 removable, 3263 nonremovable row versions in 171 pagesDETAIL: 0 dead row versions cannot be removed yet.\n \n... 
and clearly a vacuum was done under an hour ago.\n \nThe truncate and re-fill process is done once per hour, at the end of the high-load cycle, so I doubt that's even a big contributor to the number of removable rows in pg_class.\n \nFor this particular setup, we expect high load for 10-15 minutes at the beginning of the hour, which is the case when a new connection is initialized. After a day or so (as is happening right now), the high-load period spills into the second half of the hour. Within 3-4 days, we start spilling into the next hour and, as you can imagine, everything gets behind and we spiral down from there. For now, our workaround is to manually kill the connection every few days, but I would like a better solution than setting up a cron job to do this!\n\n \nThanks again,\nSteve \n \nOn 12/15/06, Tom Lane <[email protected]> wrote:\nHm, look at the numbers of rows removed:> INFO: \"pg_class\": found 5680 removable, 3263 nonremovable row versions in\n> 625 pages> DETAIL: 0 dead row versions cannot be removed yet.> INFO: \"pg_class\": found 24 removable, 3263 nonremovable row versions in 625> pages> DETAIL: 0 dead row versions cannot be removed yet.\n> INFO: \"pg_class\": found 0 removable, 3263 nonremovable row versions in 625> pages> DETAIL: 0 dead row versions cannot be removed yet.The lack of unremovable dead rows is good, but why were there so many\ndead rows the first time? You didn't say what the cycle time is on yourtruncate-and-refill process, but the last two suggest that the averagerate of accumulation of dead pg_class rows is only a couple per minute,\nin which case it's been a lot longer than an hour since the previousVACUUM of pg_class. I'm back to suspecting that you don't vacuumpg_class regularly. You mentioned having an hourly cron job to fire off\nvacuums ... are you sure it's run as a database superuser? regards, tom lane",
"msg_date": "Fri, 15 Dec 2006 12:55:46 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Our application is such that there is a great deal of activity at the\n> beginning of the hour and minimal activity near the end of the hour.\n\nOK ...\n\n> The truncate and re-fill process is done once per hour, at the end of the\n> high-load cycle, so I doubt that's even a big contributor to the number of\n> removable rows in pg_class.\n\nOh, then where *are* the removable rows coming from? At this point I\nthink that the truncate/refill thing is not the culprit, or at any rate\nis only one part of a problematic usage pattern that we don't see all of\nyet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2006 13:06:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
{
"msg_contents": "Good question, and I agree with your point.\n\nAre the removable rows in pg_class even an issue? So what if 5000-6000 dead\ntuples are generated every hour then vacuumed? Performance continues to\nsteadily decline over a few days time. Memory usage does not appear to be\nbloating. Open file handles remain fairly fixed. Is there anything else I\ncan monitor (perhaps something to do with the odbc connection) that I could\npotentially correlate with the degrading performance?\n\nSteve\n\n\nOn 12/15/06, Tom Lane <[email protected]> wrote:\n>\n> Oh, then where *are* the removable rows coming from? At this point I\n> think that the truncate/refill thing is not the culprit, or at any rate\n> is only one part of a problematic usage pattern that we don't see all of\n> yet.\n>\n> regards, tom lane\n>\n\nGood question, and I agree with your point.\n \nAre the removable rows in pg_class even an issue? So what if 5000-6000 dead tuples are generated every hour then vacuumed? Performance continues to steadily decline over a few days time. Memory usage does not appear to be bloating. Open file handles remain fairly fixed. Is there anything else I can monitor (perhaps something to do with the odbc connection) that I could potentially correlate with the degrading performance?\n\n \nSteve \nOn 12/15/06, Tom Lane <[email protected]> wrote:\nOh, then where *are* the removable rows coming from? At this point Ithink that the truncate/refill thing is not the culprit, or at any rate\nis only one part of a problematic usage pattern that we don't see all ofyet. regards, tom lane",
"msg_date": "Fri, 15 Dec 2006 13:26:31 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Are the removable rows in pg_class even an issue? So what if 5000-6000 dead\n> tuples are generated every hour then vacuumed? Performance continues to\n> steadily decline over a few days time. Memory usage does not appear to be\n> bloating. Open file handles remain fairly fixed. Is there anything else I\n> can monitor (perhaps something to do with the odbc connection) that I could\n> potentially correlate with the degrading performance?\n\nAt this point I think the most productive thing for you to do is to try\nto set up a self-contained test case that reproduces the slowdown. That\nwould allow you to poke at it without disturbing your production system,\nand would let other people look at it too. From what you've said, I'd\ntry a simple little program that inserts some data into a temp table,\ntruncates the table, and repeats, as fast as it can, using the same SQL\ncommands as your real code and similar but dummy data. It shouldn't\ntake long to observe the slowdown if it occurs. If you can't reproduce\nit in isolation then we'll know that some other part of your application\nenvironment is contributing to the problem; if you can, I'd be happy to\nlook at the test case with gprof or oprofile and find out exactly what's\ngoing on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2006 14:09:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
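A minimal sketch of the kind of repro script described above, meant to be run and timed from the client side; the table and dummy values here are placeholders, not the real schema:

    -- repro.sql: insert a batch into a temp table, then truncate.
    -- Run with "psql -f repro.sql" in a loop from the client; each run
    -- is a fresh session, so the temp table is recreated every time.
    CREATE TEMP TABLE tmp (f1 varchar, f2 timestamptz);
    \timing
    INSERT INTO tmp VALUES ('dummy data', now());
    INSERT INTO tmp VALUES ('dummy data', now());
    -- ... repeat the INSERT many more times ...
    TRUNCATE tmp;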
{
"msg_contents": "I've been trying to reproduce the problem for days now :). I've done pretty\nmuch exactly what you describe below, but I can't reproduce the problem on\nany of our lab machines. Something is indeed special in this environment.\n\nThanks for all your help,\n\nSteve\n\n\nOn 12/15/06, Tom Lane <[email protected]> wrote:\n>\n> \"Steven Flatt\" <[email protected]> writes:\n> > Are the removable rows in pg_class even an issue? So what if 5000-6000\n> dead\n> > tuples are generated every hour then vacuumed? Performance continues to\n> > steadily decline over a few days time. Memory usage does not appear to\n> be\n> > bloating. Open file handles remain fairly fixed. Is there anything\n> else I\n> > can monitor (perhaps something to do with the odbc connection) that I\n> could\n> > potentially correlate with the degrading performance?\n>\n> At this point I think the most productive thing for you to do is to try\n> to set up a self-contained test case that reproduces the slowdown. That\n> would allow you to poke at it without disturbing your production system,\n> and would let other people look at it too. From what you've said, I'd\n> try a simple little program that inserts some data into a temp table,\n> truncates the table, and repeats, as fast as it can, using the same SQL\n> commands as your real code and similar but dummy data. It shouldn't\n> take long to observe the slowdown if it occurs. If you can't reproduce\n> it in isolation then we'll know that some other part of your application\n> environment is contributing to the problem; if you can, I'd be happy to\n> look at the test case with gprof or oprofile and find out exactly what's\n> going on.\n>\n> regards, tom lane\n>\n\nI've been trying to reproduce the problem for days now :). I've done pretty much exactly what you describe below, but I can't reproduce the problem on any of our lab machines. Something is indeed special in this environment.\n\n \nThanks for all your help,\n \nSteve \nOn 12/15/06, Tom Lane <[email protected]> wrote:\n\"Steven Flatt\" <[email protected]> writes:\n> Are the removable rows in pg_class even an issue? So what if 5000-6000 dead> tuples are generated every hour then vacuumed? Performance continues to> steadily decline over a few days time. Memory usage does not appear to be\n> bloating. Open file handles remain fairly fixed. Is there anything else I> can monitor (perhaps something to do with the odbc connection) that I could> potentially correlate with the degrading performance?\nAt this point I think the most productive thing for you to do is to tryto set up a self-contained test case that reproduces the slowdown. Thatwould allow you to poke at it without disturbing your production system,\nand would let other people look at it too. From what you've said, I'dtry a simple little program that inserts some data into a temp table,truncates the table, and repeats, as fast as it can, using the same SQL\ncommands as your real code and similar but dummy data. It shouldn'ttake long to observe the slowdown if it occurs. If you can't reproduceit in isolation then we'll know that some other part of your application\nenvironment is contributing to the problem; if you can, I'd be happy tolook at the test case with gprof or oprofile and find out exactly what'sgoing on. regards, tom lane",
"msg_date": "Fri, 15 Dec 2006 14:21:01 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> I've been trying to reproduce the problem for days now :). I've done pretty\n> much exactly what you describe below, but I can't reproduce the problem on\n> any of our lab machines. Something is indeed special in this environment.\n\nYuck. You could try strace'ing the problem backend and see if anything\nis visibly different between fast and slow operation. I don't suppose\nyou have oprofile on that machine, but if you did it'd be even better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2006 14:30:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
{
"msg_contents": "I have an update on this.\n\nThe reason I couldn't reproduce this problem was because of the way I was\ncreating the temp table in my tests. I was using:\n\nCREATE TEMP TABLE tmp (LIKE perm);\n\nThis did not observe performance degradation over time.\n\nHowever, the way our application was creating this table (something I should\nhave observed sooner, no doubt) is:\n\nCREATE TEMP TABLE tmp AS SELECT <column-list> FROM perm LIMIT 0;\n\nThis, on its own however, is not enough to reproduce the problem. Next\nimagine that perm is actually a view, which is defined as a UNION ALL SELECT\nfrom several other views, and those views are also defined as UNION ALL\nSELECTs from a bunch of permanent tables. All views have insert rules\nredirecting rows according to some criteria. The whole structure is pretty\nconvoluted.\n\nI can fix this problem by using CREATE TEMP TABLE ... LIKE instead of CREATE\nTEMP TABLE ... AS.\n\nI'm still curious about the root cause of this problem. From the docs, I\nsee that CREATE TABLE AS evaluates the query just once to create the table,\nbut based on what I'm seeing, I'm wondering whether this isn't truly the\ncase. Are there any known issues with CREATE TABLE AS when the table you're\ncreating is temporary and you're selecting from a view?\n\nSteve\n\n\nOn 12/15/06, Tom Lane <[email protected]> wrote:\n>\n> \"Steven Flatt\" <[email protected]> writes:\n> > I've been trying to reproduce the problem for days now :). I've done\n> pretty\n> > much exactly what you describe below, but I can't reproduce the problem\n> on\n> > any of our lab machines. Something is indeed special in this\n> environment.\n>\n> Yuck. You could try strace'ing the problem backend and see if anything\n> is visibly different between fast and slow operation. I don't suppose\n> you have oprofile on that machine, but if you did it'd be even better.\n>\n> regards, tom lane\n>\n\nI have an update on this.\n \nThe reason I couldn't reproduce this problem was because of the way I was creating the temp table in my tests. I was using:\n \nCREATE TEMP TABLE tmp (LIKE perm);\n \nThis did not observe performance degradation over time.\n \nHowever, the way our application was creating this table (something I should have observed sooner, no doubt) is:\n \nCREATE TEMP TABLE tmp AS SELECT <column-list> FROM perm LIMIT 0;\n \nThis, on its own however, is not enough to reproduce the problem. Next imagine that perm is actually a view, which is defined as a UNION ALL SELECT from several other views, and those views are also defined as UNION ALL SELECTs from a bunch of permanent tables. All views have insert rules redirecting rows according to some criteria. The whole structure is pretty convoluted.\n\n \nI can fix this problem by using CREATE TEMP TABLE ... LIKE instead of CREATE TEMP TABLE ... AS.\n \nI'm still curious about the root cause of this problem. From the docs, I see that CREATE TABLE AS evaluates the query just once to create the table, but based on what I'm seeing, I'm wondering whether this isn't truly the case. Are there any known issues with CREATE TABLE AS when the table you're creating is temporary and you're selecting from a view?\n\n \nSteve \nOn 12/15/06, Tom Lane <[email protected]> wrote:\n\"Steven Flatt\" <[email protected]> writes:\n> I've been trying to reproduce the problem for days now :). I've done pretty> much exactly what you describe below, but I can't reproduce the problem on> any of our lab machines. Something is indeed special in this environment.\nYuck. 
You could try strace'ing the problem backend and see if anythingis visibly different between fast and slow operation. I don't supposeyou have oprofile on that machine, but if you did it'd be even better.\n regards, tom lane",
"msg_date": "Mon, 18 Dec 2006 12:51:13 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
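For reference, the two creation forms being compared behave quite differently. A sketch, with * standing in for the elided <column-list>:

    -- Copies column names and types straight from an existing table.
    -- This is the form that did not degrade in the lab tests, though as
    -- the follow-up notes it is rejected outright when perm is a view.
    CREATE TEMP TABLE tmp (LIKE perm);

    -- Plans and runs the (empty) query and builds the temp table from
    -- the query's output columns, so the column types come from the view.
    CREATE TEMP TABLE tmp AS SELECT * FROM perm LIMIT 0;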
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> I can fix this problem by using CREATE TEMP TABLE ... LIKE instead of CREATE\n> TEMP TABLE ... AS.\n\nThat seems ... um ... bizarre. Now are you able to put together a\nself-contained test case? Seems like we could have two independent bugs\nhere: first, why (and how) is the temp table different, and second how\ndoes that result in the observed performance problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2006 13:00:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
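One way to see how the temp table is "different" is to ask the catalogs what column types each creation form actually produced. A sketch using a standard catalog query that should work on 7.4 and 8.1:

    -- Show the declared type, including any varchar length, of the
    -- columns of the current session's temp table "tmp".
    SELECT attname, format_type(atttypid, atttypmod) AS declared_type
    FROM pg_attribute
    WHERE attrelid = 'tmp'::regclass
      AND attnum > 0
      AND NOT attisdropped;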
{
"msg_contents": "Please ignore my post from earlier today. As strange as it sounds, changing\n\"CREATE TEMP TABLE ... AS\" to \"CREATE TEMP TABLE ... LIKE\" appeared to fix\nmy performance problem because things errored out so quickly (and silently\nin my test program). After checking the pgsql logs, it became clear to me\nthat you can't use LIKE on a view. Duh.\n\nMoving forward, I have also discovered that our temp table did in fact have\na varchar column (no specified limit on varchar). With this in mind, I\ncould easily reproduce the problem on a temp table with one column. So...\n\nIssue #1:\n\n(I'm assuming there's a reasonable explanation for this.) If I create a\ntemp table with a single varchar column (or text column), do 100 inserts to\nthat table, copy to a permanent table, truncate the temp table and repeat,\nthe time required for the 100 inserts grows almost linearly. Maybe the data\nis treated as large objects.\n\nNote that if I change the column type to varchar(SOME_LIMIT), integer,\ntimestamptz, interval, etc., performance does not degrade. Also note that\nif I do not use a temp table (but do use a varchar column), inserts are\nslower (as expected) but do not degrade over time. So this seems to be\nspecific to temp tables with varchar/text column(s).\n\nIssue #2:\n\nAs I said earlier, the temp table is created via:\n\nCREATE TEMP TABLE tmp AS SELECT <column-list> FROM perm LIMIT 0;\n\nwhere perm is a view defined as follows:\n\nView definition:\n SELECT <column-list>\n FROM view2\n JOIN tbl USING (col1, col2)\n WHERE <some-conditions>\nUNION ALL\n SELECT <column-list>\n FROM view3\n JOIN tbl USING (col1, col2)\n WHERE <some-conditions>;\n\nNow the varchar columns that end up in the perm view come from the tbl\ntable, but in tbl, they are defined as varchar(40). Somehow the 40 limit is\nlost when constructing the view. After a little more testing, I found that\nthis problem only occurs when you are creating a view (i.e. CREATE TABLE ...\nAS does not observe this problem) and also that the UNION ALL clause must be\npresent to observe this problem.\n\nThis looks like a bug. I know this is Postgres 7.4.6 and I haven't been\nable to verify with a later version of Postgres, but does this look familiar\nto anyone?\n\nSteve\n\nPlease ignore my post from earlier today. As strange as it sounds, changing \"CREATE TEMP TABLE ... AS\" to \"CREATE TEMP TABLE ... LIKE\" appeared to fix my performance problem because things errored out so quickly (and silently in my test program). After checking the pgsql logs, it became clear to me that you can't use LIKE on a view. Duh.\nMoving forward, I have also discovered that our temp table did in fact have a varchar column (no specified limit on varchar). With this in mind, I could easily reproduce the problem on a temp table with one column. So...\n\n \nIssue #1:\n \n(I'm assuming there's a reasonable explanation for this.) If I create a temp table with a single varchar column (or text column), do 100 inserts to that table, copy to a permanent table, truncate the temp table and repeat, the time required for the 100 inserts grows almost linearly. Maybe the data is treated as large objects.\n\n \nNote that if I change the column type to varchar(SOME_LIMIT), integer, timestamptz, interval, etc., performance does not degrade. Also note that if I do not use a temp table (but do use a varchar column), inserts are slower (as expected) but do not degrade over time. 
So this seems to be specific to temp tables with varchar/text column(s).\n\n \nIssue #2:\n \nAs I said earlier, the temp table is created via:\n \nCREATE TEMP TABLE tmp AS SELECT <column-list> FROM perm LIMIT 0;\n \nwhere perm is a view defined as follows:\n \nView definition: SELECT <column-list> FROM view2 JOIN tbl USING (col1, col2) WHERE <some-conditions>UNION ALL SELECT <column-list> FROM view3 JOIN tbl USING (col1, col2)\n WHERE <some-conditions>;\n \nNow the varchar columns that end up in the perm view come from the tbl table, but in tbl, they are defined as varchar(40). Somehow the 40 limit is lost when constructing the view. After a little more testing, I found that this problem only occurs when you are creating a view (\ni.e. CREATE TABLE ... AS does not observe this problem) and also that the UNION ALL clause must be present to observe this problem.\n \nThis looks like a bug. I know this is Postgres 7.4.6 and I haven't been able to verify with a later version of Postgres, but does this look familiar to anyone?\n \nSteve",
"msg_date": "Mon, 18 Dec 2006 18:06:04 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
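Issue #2 can be shown in isolation with a few lines of SQL. This is only a sketch with made-up names that mirror the structure described above; on the 7.4/8.1 servers in question the UNION ALL view reports plain varchar, while 8.2 keeps the length when all arms match, as noted in the reply that follows:

    CREATE TABLE tbl (col1 int, col2 int, note varchar(40));

    -- A simple view keeps the varchar(40) typmod...
    CREATE VIEW v_plain AS SELECT note FROM tbl;

    -- ...but a UNION ALL view loses it, leaving bare varchar, which is
    -- what CREATE TEMP TABLE ... AS then copies into the temp table.
    CREATE VIEW v_union AS
        SELECT note FROM tbl
        UNION ALL
        SELECT note FROM tbl;

    \d v_plain
    \d v_union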
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Issue #1:\n\n> (I'm assuming there's a reasonable explanation for this.) If I create a\n> temp table with a single varchar column (or text column), do 100 inserts to\n> that table, copy to a permanent table, truncate the temp table and repeat,\n> the time required for the 100 inserts grows almost linearly.\n\nI still can't reproduce this. Using 7.4 branch tip, I did\n\ncreate temp table foo(f1 varchar);\ncreate table nottemp(f1 varchar);\n\\timing\ninsert into foo select stringu1 from tenk1 limit 100; insert into nottemp select * from foo; truncate foo;\ninsert into foo select stringu1 from tenk1 limit 100; insert into nottemp select * from foo; truncate foo;\n... repeat several thousand times ...\n\nand couldn't see any consistent growth in the reported times. So either\nit's been fixed since 7.4.6 (but I don't see anything related-looking in\nthe CVS logs), or you haven't provided all the details.\n\n> Now the varchar columns that end up in the perm view come from the tbl\n> table, but in tbl, they are defined as varchar(40). Somehow the 40 limit is\n> lost when constructing the view.\n\nYeah, this is a known issue with UNIONs not preserving the length info\n--- which is not entirely unreasonable: what will you do with varchar(40)\nunion varchar(50)? There's a hack in place as of 8.2 to keep the\nlength if all the union arms have the same length.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Dec 2006 03:08:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
},
{
"msg_contents": "On 12/19/06, Tom Lane <[email protected]> wrote:\n>\n> I still can't reproduce this. Using 7.4 branch tip, I did\n>\n> create temp table foo(f1 varchar);\n> create table nottemp(f1 varchar);\n> \\timing\n> insert into foo select stringu1 from tenk1 limit 100; insert into nottemp\n> select * from foo; truncate foo;\n> insert into foo select stringu1 from tenk1 limit 100; insert into nottemp\n> select * from foo; truncate foo;\n> ... repeat several thousand times ...\n\n\n I can't reproduce the problem that way either (or when using a server-side\nPLpgSQL function to do similar). It looks like you have to go through an\nODBC connection, with the looping done on the client side. Each individual\ninsert to the temp table needs to be sent over the connection and this is\nwhat degrades over time. I can reproduce on 7.4.6 and 8.1.4. I have a\nsmall C program to do this which I can send you offline if you're\ninterested.\n\n\n> Now the varchar columns that end up in the perm view come from the tbl\n> > table, but in tbl, they are defined as varchar(40). Somehow the 40\n> limit is\n> > lost when constructing the view.\n>\n> Yeah, this is a known issue with UNIONs not preserving the length info\n> --- which is not entirely unreasonable: what will you do with varchar(40)\n> union varchar(50)? There's a hack in place as of 8.2 to keep the\n> length if all the union arms have the same length.\n\n\n I guess it comes down to what your philosophy is on this. You might just\ndisallow unions when the data types do not match (varchar(40) !=\nvarchar(50)). But it might come down to what's best for your application.\nI tend to think that when the unioned types do match, the type should be\npreserved in the inheriting view (as done by the \"hack\" in 8.2).\n\nThanks again for all your help. Steve\n\nOn 12/19/06, Tom Lane <[email protected]> wrote:\nI still can't reproduce this. Using 7.4 branch tip, I didcreate temp table foo(f1 varchar);\ncreate table nottemp(f1 varchar);\\timinginsert into foo select stringu1 from tenk1 limit 100; insert into nottemp select * from foo; truncate foo;insert into foo select stringu1 from tenk1 limit 100; insert into nottemp select * from foo; truncate foo;\n... repeat several thousand times ...\n \n\nI can't reproduce the problem that way either (or when using a server-side PLpgSQL function to do similar). It looks like you have to go through an ODBC connection, with the looping done on the client side. Each individual insert to the temp table needs to be sent over the connection and this is what degrades over time. I can reproduce on \n7.4.6 and 8.1.4. I have a small C program to do this which I can send you offline if you're interested.\n \n> Now the varchar columns that end up in the perm view come from the tbl> table, but in tbl, they are defined as varchar(40). Somehow the 40 limit is\n> lost when constructing the view.Yeah, this is a known issue with UNIONs not preserving the length info--- which is not entirely unreasonable: what will you do with varchar(40)union varchar(50)? There's a hack in place as of \n8.2 to keep thelength if all the union arms have the same length.\n \n\nI guess it comes down to what your philosophy is on this. You might just disallow unions when the data types do not match (varchar(40) != varchar(50)). But it might come down to what's best for your application. I tend to think that when the unioned types do match, the type should be preserved in the inheriting view (as done by the \"hack\" in \n8.2).Thanks again for all your help. 
\nSteve",
"msg_date": "Tue, 19 Dec 2006 10:43:13 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertion to temp table deteriorating over time"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> I can't reproduce the problem that way either (or when using a server-side\n> PLpgSQL function to do similar). It looks like you have to go through an\n> ODBC connection, with the looping done on the client side. Each individual\n> insert to the temp table needs to be sent over the connection and this is\n> what degrades over time. I can reproduce on 7.4.6 and 8.1.4. I have a\n> small C program to do this which I can send you offline if you're\n> interested.\n\nPlease.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Dec 2006 11:59:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertion to temp table deteriorating over time "
}
] |
[
{
"msg_contents": "Hi,\n\n I have a query that uses an IN clause and it seems in perform great\nwhen there is more than two values in it but if there is only one it is\nreally slow. Also if I change the query to use an = instead of IN in the\ncase of only one value it is still slow. Possibly I need to reindex this\nparticular index?\n\nthanks \n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n",
"msg_date": "Wed, 13 Dec 2006 13:42:20 -0500",
"msg_from": "\"Tim Jones\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "strange query behavior"
},
{
"msg_contents": "On mi�, 2006-12-13 at 13:42 -0500, Tim Jones wrote:\n\n> I have a query that uses an IN clause and it seems in perform great\n> when there is more than two values in it but if there is only one it is\n> really slow. Also if I change the query to use an = instead of IN in the\n> case of only one value it is still slow. Possibly I need to reindex this\n> particular index?\n\ncan you provide us with an EXPLAIN ANALYZE for these 2 cases?\n\nwhat version pg is this?\n\ndoes this happen only for a particular single value, or for any values?\n\nI assume you have ANALYZEd the table in question.\n\ngnari\n\n\n",
"msg_date": "Wed, 13 Dec 2006 19:15:11 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query behavior"
},
{
"msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> I have a query that uses an IN clause and it seems in perform great\n> when there is more than two values in it but if there is only one it is\n> really slow. Also if I change the query to use an = instead of IN in the\n> case of only one value it is still slow.\n\nPlease provide EXPLAIN ANALYZE output for both cases.\n\n> Possibly I need to reindex this\n> particular index?\n\nMore likely you need to ANALYZE the table so that the planner has\nup-to-date stats ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 14:16:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "The tables for theses queries are vacuumed and analyzed regularly. I\njust did an analyze to be sure and here are the results\n\n\nexplain analyze select * from battery join observationresults on\nbattery.batteryidentifier = observationresults.batteryidentifier left\nouter join observationcomment on\nobservationresults.observationidentifier =\nobservationcomment.observationidentifier left outer join batterycomment\non battery.batteryidentifier=batterycomment.batteryidentifier where\nbattery.batteryidentifier in (1177470, 1177469);\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n---------------\n Nested Loop Left Join (cost=5.03..12553.00 rows=13 width=248) (actual\ntime=0.362..1.345 rows=30 loops=1)\n -> Nested Loop Left Join (cost=4.01..12424.13 rows=13 width=208)\n(actual time=0.307..0.927 rows=30 loops=1)\n -> Nested Loop (cost=4.01..9410.49 rows=13 width=145) (actual\ntime=0.227..0.416 rows=30 loops=1)\n -> Bitmap Heap Scan on battery (cost=4.01..11.64 rows=2\nwidth=69) (actual time=0.135..0.138 rows=2 loops=1)\n Recheck Cond: ((batteryidentifier = 1177470) OR\n(batteryidentifier = 1177469))\n -> BitmapOr (cost=4.01..4.01 rows=2 width=0)\n(actual time=0.106..0.106 rows=0 loops=1)\n -> Bitmap Index Scan on ix_battery_id\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.091..0.091 rows=1\nloops=1)\n Index Cond: (batteryidentifier =\n1177470)\n -> Bitmap Index Scan on ix_battery_id\n(cost=0.00..2.00 rows=1 width=0) (actual time=0.011..0.011 rows=1\nloops=1)\n Index Cond: (batteryidentifier =\n1177469)\n -> Index Scan using ix_obresults_bat on\nobservationresults (cost=0.00..4682.40 rows=1362 width=76) (actual\ntime=0.047..0.091 rows=15 loops=2)\n Index Cond: (\"outer\".batteryidentifier =\nobservationresults.batteryidentifier)\n -> Index Scan using ix_obcomment_obid on observationcomment\n(cost=0.00..227.73 rows=327 width=63) (actual time=0.013..0.013 rows=0\nloops=30)\n Index Cond: (\"outer\".observationidentifier =\nobservationcomment.observationidentifier)\n -> Bitmap Heap Scan on batterycomment (cost=1.02..9.84 rows=6\nwidth=40) (actual time=0.007..0.007 rows=0 loops=30)\n Recheck Cond: (\"outer\".batteryidentifier =\nbatterycomment.batteryidentifier)\n -> Bitmap Index Scan on ix_batcomment (cost=0.00..1.02 rows=6\nwidth=0) (actual time=0.005..0.005 rows=0 loops=30)\n Index Cond: (\"outer\".batteryidentifier =\nbatterycomment.batteryidentifier)\n Total runtime: 1.585 ms\n\n\nexplain analyze select * from battery join observationresults on\nbattery.batteryidentifier = observationresults.batteryidentifier left\nouter join observationcomment on\nobservationresults.observationidentifier =\nobservationcomment.observationidentifier left outer join batterycomment\non battery.batteryidentifier=batterycomment.batteryidentifier where\nbattery.batteryidentifier = 1177470;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------\n Hash Left Join (cost=4733.62..269304.43 rows=1348 width=248) (actual\ntime=19275.506..19275.568 rows=9 loops=1)\n Hash Cond: (\"outer\".batteryidentifier = \"inner\".batteryidentifier)\n -> Merge Right Join (cost=4723.75..269287.81 rows=1348 width=208)\n(actual time=19275.432..19275.473 rows=9 loops=1)\n Merge Cond: (\"outer\".observationidentifier =\n\"inner\".observationidentifier)\n -> Index Scan using 
ix_obcomment_obid on observationcomment\n(cost=0.00..245841.14 rows=7484253 width=63) (actual\ntime=0.094..13403.300 rows=4361601 loops=1)\n -> Sort (cost=4723.75..4727.12 rows=1348 width=145) (actual\ntime=0.270..0.278 rows=9 loops=1)\n Sort Key: observationresults.observationidentifier\n -> Nested Loop (cost=0.00..4653.67 rows=1348 width=145)\n(actual time=0.166..0.215 rows=9 loops=1)\n -> Index Scan using ix_battery_id on battery\n(cost=0.00..5.81 rows=1 width=69) (actual time=0.079..0.082 rows=1\nloops=1)\n Index Cond: (batteryidentifier = 1177470)\n -> Index Scan using ix_obresults_bat on\nobservationresults (cost=0.00..4634.38 rows=1348 width=76) (actual\ntime=0.079..0.102 rows=9 loops=1)\n Index Cond: (1177470 = batteryidentifier)\n -> Hash (cost=9.85..9.85 rows=6 width=40) (actual time=0.039..0.039\nrows=0 loops=1)\n -> Bitmap Heap Scan on batterycomment (cost=1.02..9.85 rows=6\nwidth=40) (actual time=0.037..0.037 rows=0 loops=1)\n Recheck Cond: (batteryidentifier = 1177470)\n -> Bitmap Index Scan on ix_batcomment (cost=0.00..1.02\nrows=6 width=0) (actual time=0.032..0.032 rows=0 loops=1)\n Index Cond: (batteryidentifier = 1177470)\n Total runtime: 19275.838 ms\n(18 rows)\n\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Wednesday, December 13, 2006 2:17 PM\nTo: Tim Jones\nCc: [email protected]\nSubject: Re: [PERFORM] strange query behavior \n\n\"Tim Jones\" <[email protected]> writes:\n> I have a query that uses an IN clause and it seems in perform great \n> when there is more than two values in it but if there is only one it \n> is really slow. Also if I change the query to use an = instead of IN \n> in the case of only one value it is still slow.\n\nPlease provide EXPLAIN ANALYZE output for both cases.\n\n> Possibly I need to reindex this\n> particular index?\n\nMore likely you need to ANALYZE the table so that the planner has\nup-to-date stats ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 14:44:36 -0500",
"msg_from": "\"Tim Jones\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> The tables for theses queries are vacuumed and analyzed regularly. I\n> just did an analyze to be sure and here are the results\n> ...\n\nThere's something pretty wacko about the choice of plan in the slow case\n--- I don't see why it'd not have used the same plan structure as for\nthe IN case. It's coming up with a cost a lot higher than for the\nother, so it certainly knows this isn't a great plan ...\n\nWhich PG version is this exactly? Are you running with any nondefault\nplanner parameters?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 16:58:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "Version 8.1\n\nHere are the planner constraints I believe we changed\neffective_cache_size and random_page_cost\nBTW this is an AIX 5.2 \n\n#-----------------------------------------------------------------------\n----\n# QUERY TUNING\n#-----------------------------------------------------------------------\n----\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#effective_cache_size = 10000 # typically 8KB each\neffective_cache_size = 400000\nrandom_page_cost = 3.8 # units are one sequential page fetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOINs\n\n\nThanks\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Wednesday, December 13, 2006 4:59 PM\nTo: Tim Jones\nCc: [email protected]\nSubject: Re: [PERFORM] strange query behavior \n\n\"Tim Jones\" <[email protected]> writes:\n> The tables for theses queries are vacuumed and analyzed regularly. I \n> just did an analyze to be sure and here are the results ...\n\nThere's something pretty wacko about the choice of plan in the slow case\n--- I don't see why it'd not have used the same plan structure as for\nthe IN case. It's coming up with a cost a lot higher than for the\nother, so it certainly knows this isn't a great plan ...\n\nWhich PG version is this exactly? Are you running with any nondefault\nplanner parameters?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 17:08:44 -0500",
"msg_from": "\"Tim Jones\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "\"Tim Jones\" <[email protected]> writes:\n>> Which PG version is this exactly? Are you running with any nondefault\n>> planner parameters?\n\n> Version 8.1\n\n8.1.what?\n\n> Here are the planner constraints I believe we changed\n> effective_cache_size and random_page_cost\n\nThose look reasonably harmless.\n\nMy best bet at the moment is that you've got a pretty early 8.1.x\nrelease and are hitting one of the planner bugs that we fixed earlier\nthis year. Not enough info to say for sure though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 17:36:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "Looks like 8.1.2\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Wednesday, December 13, 2006 5:37 PM\nTo: Tim Jones\nCc: [email protected]\nSubject: Re: [PERFORM] strange query behavior \n\n\"Tim Jones\" <[email protected]> writes:\n>> Which PG version is this exactly? Are you running with any \n>> nondefault planner parameters?\n\n> Version 8.1\n\n8.1.what?\n\n> Here are the planner constraints I believe we changed \n> effective_cache_size and random_page_cost\n\nThose look reasonably harmless.\n\nMy best bet at the moment is that you've got a pretty early 8.1.x\nrelease and are hitting one of the planner bugs that we fixed earlier\nthis year. Not enough info to say for sure though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 17:44:36 -0500",
"msg_from": "\"Tim Jones\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "That's what I did and got 8.1.2 ... do you want gcc version etc 3.3.2\npowerpc aix5.2\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Matthew O'Connor [mailto:[email protected]] \nSent: Wednesday, December 13, 2006 5:51 PM\nTo: Tim Jones\nSubject: Re: [PERFORM] strange query behavior\n\n From psql perform: select version();\nand send us that output.\n\nTim Jones wrote:\n> Looks like 8.1.2\n> \n> Tim Jones\n> Healthcare Project Manager\n> Optio Software, Inc.\n> (770) 576-3555\n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, December 13, 2006 5:37 PM\n> To: Tim Jones\n> Cc: [email protected]\n> Subject: Re: [PERFORM] strange query behavior\n> \n> \"Tim Jones\" <[email protected]> writes:\n>>> Which PG version is this exactly? Are you running with any \n>>> nondefault planner parameters?\n> \n>> Version 8.1\n> \n> 8.1.what?\n> \n>> Here are the planner constraints I believe we changed \n>> effective_cache_size and random_page_cost\n> \n> Those look reasonably harmless.\n> \n> My best bet at the moment is that you've got a pretty early 8.1.x \n> release and are hitting one of the planner bugs that we fixed earlier \n> this year. Not enough info to say for sure though.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n",
"msg_date": "Wed, 13 Dec 2006 17:54:06 -0500",
"msg_from": "\"Tim Jones\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange query behavior"
},
{
"msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> [ explain results ]\n\nAs best I can see, the problem is with the estimate of the size of the\ninner join: for two keys we have\n\n -> Nested Loop (cost=4.01..9410.49 rows=13 width=145) (actual time=0.227..0.416 rows=30 loops=1)\n -> Bitmap Heap Scan on battery (cost=4.01..11.64 rows=2 width=69) (actual time=0.135..0.138 rows=2 loops=1)\n Recheck Cond: ((batteryidentifier = 1177470) OR (batteryidentifier = 1177469))\n -> BitmapOr (cost=4.01..4.01 rows=2 width=0) (actual time=0.106..0.106 rows=0 loops=1)\n -> Bitmap Index Scan on ix_battery_id (cost=0.00..2.00 rows=1 width=0) (actual time=0.091..0.091 rows=1 loops=1)\n Index Cond: (batteryidentifier = 1177470)\n -> Bitmap Index Scan on ix_battery_id (cost=0.00..2.00 rows=1 width=0) (actual time=0.011..0.011 rows=1 loops=1)\n Index Cond: (batteryidentifier = 1177469)\n -> Index Scan using ix_obresults_bat on observationresults (cost=0.00..4682.40 rows=1362 width=76) (actual time=0.047..0.091 rows=15 loops=2)\n Index Cond: (\"outer\".batteryidentifier = observationresults.batteryidentifier)\n\nbut for one key we have\n\n -> Nested Loop (cost=0.00..4653.67 rows=1348 width=145) (actual time=0.166..0.215 rows=9 loops=1)\n -> Index Scan using ix_battery_id on battery (cost=0.00..5.81 rows=1 width=69) (actual time=0.079..0.082 rows=1 loops=1)\n Index Cond: (batteryidentifier = 1177470)\n -> Index Scan using ix_obresults_bat on observationresults (cost=0.00..4634.38 rows=1348 width=76) (actual time=0.079..0.102 rows=9 loops=1)\n Index Cond: (1177470 = batteryidentifier)\n\nThe large rowcount estimate makes it back off to a non-nestloop\nplan for the outer joins, and in this situation that's a loser.\n\nI'm actually not sure why they're not both too high --- with the\nrowcount estimate of 1362 for the inner scan in the first example, you'd\nexpect about twice that for the join result. But the immediate problem\nis that in the case where it knows exactly what batteryidentifier is\nbeing probed for, it's still off by more than a factor of 100 on the\nrowcount estimate for observationresults. How many rows in\nobservationresults, and may we see the pg_stats entry for\nobservationresults.batteryidentifier?\n\nIt's likely that the answer for you will be \"raise the statistics target\nfor observationresults and re-ANALYZE\", but I'd like to gather more info\nabout what's going wrong first.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 18:24:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "\n18,273,008 rows in observationresults\n\npg_stats:\n\nselect * from pg_stats where tablename='observationresults' and\nattname='batteryidentifier';\n\n schemaname | tablename | attname | null_frac |\navg_width | n_distinct | most_common_vals\n| most_common_freqs\n| histogram_bounds\n| correlation\n------------+--------------------+-------------------+-----------+------\n-----+------------+-----------------------------------------------------\n---------------------+--------------------------------------------------\n-----------------------+------------------------------------------------\n-------------------------------------+-------------\n public | observationresults | batteryidentifier | 0 |\n4 | 12942 |\n{437255,1588952,120420,293685,356599,504069,589910,693683,834990,854693}\n|\n{0.00133333,0.00133333,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001}\n|\n{3561,271263,556929,839038,1125682,1406538,1697589,1970463,2226781,25392\n41,2810844} | 0.31779\n\nthanks\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Wednesday, December 13, 2006 6:25 PM\nTo: Tim Jones\nCc: [email protected]\nSubject: Re: [PERFORM] strange query behavior \n\n\nThe large rowcount estimate makes it back off to a non-nestloop plan for\nthe outer joins, and in this situation that's a loser.\n\nI'm actually not sure why they're not both too high --- with the\nrowcount estimate of 1362 for the inner scan in the first example, you'd\nexpect about twice that for the join result. But the immediate problem\nis that in the case where it knows exactly what batteryidentifier is\nbeing probed for, it's still off by more than a factor of 100 on the\nrowcount estimate for observationresults. How many rows in\nobservationresults, and may we see the pg_stats entry for\nobservationresults.batteryidentifier?\n\nIt's likely that the answer for you will be \"raise the statistics target\nfor observationresults and re-ANALYZE\", but I'd like to gather more info\nabout what's going wrong first.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 09:02:08 -0500",
"msg_from": "\"Tim Jones\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> 18,273,008 rows in observationresults\n> [ and n_distinct = 12942 ]\n\nOK, so the estimated rowcounts are coming from those two numbers.\nIt's notoriously hard to get a decent n_distinct estimate from a small\nsample :-(, and I would imagine the number of batteryidentifiers is\nreally a lot more than 12942?\n\nWhat you need to do is increase the statistics target for\nobservationresults.batteryidentifier (see ALTER TABLE) and re-ANALYZE\nand see if you get a saner n_distinct in pg_stats. I'd try 100 and\nthen 1000 as target. Or you could just increase the global default\ntarget (see postgresql.conf) but that might be overkill.\n\nIt's still a bit odd that the case with two batteryidentifiers was\nestimated fairly accurately when the other wasn't; I'll go look into\nthat. But in any case you need better stats if you want good plans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 12:48:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query behavior "
},
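For scale: with reltuples around 18,273,008 and n_distinct estimated at 12,942, the planner's per-key guess is roughly 18,273,008 / 12,942, i.e. about 1,400 rows, in line with the 1,348 and 1,362 estimates seen in the plans. A sketch of the change suggested above:

    -- Raise the sample size for this column, regather stats, and check
    -- whether n_distinct comes out closer to the real number of keys.
    ALTER TABLE observationresults
        ALTER COLUMN batteryidentifier SET STATISTICS 1000;
    ANALYZE observationresults;

    SELECT n_distinct
    FROM pg_stats
    WHERE tablename = 'observationresults'
      AND attname = 'batteryidentifier';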
{
"msg_contents": "I wrote:\n> It's still a bit odd that the case with two batteryidentifiers was\n> estimated fairly accurately when the other wasn't; I'll go look into\n> that.\n\nFor the sake of the archives: I looked into this, and it seems there's\nnot anything going wrong other than the bogusly small n_distinct for\nobservationresults.\n\nI'm assuming that battery.batteryidentifier is unique (stop me here,\nTim, if not). That means that (a) there won't be any most-common-values\nstatistics list for it, and (b) the n_distinct estimate should be pretty\naccurate.\n\nWhat happens in the multiple-batteryidentifier case is that eqjoinsel()\ndoesn't have two MCV lists to work with, and so it bases its selectivity\nestimate on the larger n_distinct, which in this case is the accurate\nvalue from the battery table. So we come out with a decent estimate\neven though the other n_distinct is all wrong.\n\nWhat happens in the single-batteryidentifier case is that transitive\nequality deduction removes the battery.batteryidentifier =\nobservationresults.batteryidentifier join condition altogether,\nreplacing it with two restriction conditions batteryidentifier = 1177470.\nSo eqjoinsel() is never called, and the join size estimate is just the\nproduct of the indexscan size estimates, and the scan estimate for\nobservationresults is too high because its n_distinct is too small.\n\nSo the bottom line is that eqjoinsel() is actually a bit more robust\nthan one might have thought ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 13:30:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query behavior "
},
{
"msg_contents": "ok thanks Tom I will alter the statistics and re-analyze the table.\n\nTim Jones\nHealthcare Project Manager\nOptio Software, Inc.\n(770) 576-3555\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, December 14, 2006 12:49 PM\nTo: Tim Jones\nCc: [email protected]\nSubject: Re: [PERFORM] strange query behavior \n\n\"Tim Jones\" <[email protected]> writes:\n> 18,273,008 rows in observationresults\n> [ and n_distinct = 12942 ]\n\nOK, so the estimated rowcounts are coming from those two numbers.\nIt's notoriously hard to get a decent n_distinct estimate from a small\nsample :-(, and I would imagine the number of batteryidentifiers is\nreally a lot more than 12942?\n\nWhat you need to do is increase the statistics target for\nobservationresults.batteryidentifier (see ALTER TABLE) and re-ANALYZE\nand see if you get a saner n_distinct in pg_stats. I'd try 100 and then\n1000 as target. Or you could just increase the global default target\n(see postgresql.conf) but that might be overkill.\n\nIt's still a bit odd that the case with two batteryidentifiers was\nestimated fairly accurately when the other wasn't; I'll go look into\nthat. But in any case you need better stats if you want good plans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 14:18:55 -0500",
"msg_from": "\"Tim Jones\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange query behavior "
}
] |
[
{
"msg_contents": "I've currently got this table:\n\n,----\n| n=# \\d nanpa\n| Table \"public.nanpa\"\n| Column | Type | Modifiers \n| ------------+--------------+-----------\n| state | character(2) | \n| npa | character(3) | not null\n| nxx | character(3) | not null\n| ocn | character(4) | \n| company | text | \n| ratecenter | text | \n| switch | text | \n| effective | date | \n| use | character(2) | not null\n| assign | date | \n| ig | character(1) | \n| Indexes:\n| \"nanpa_pkey\" PRIMARY KEY, btree (npa, nxx) CLUSTER\n`----\n\nand was doing queries of the form:\n\n,----\n| select * from nanpa where npa=775 and nxx=413;\n`----\n\nwhere were quite slow. Explain showed that it was doing sequential\nscans even though the primary key contained the two term I was\nselecting on.\n\nToday, looking at it again in prep to this post, I noticed that the\nnumbers were being converted to ::text, and a quick test showed that\nqueries of the form:\n\n,----\n| select * from nanpa where npa=775::bpchar and nxx=413::bpchar;\n`----\n\nused the index.\n\nI specified char(3) when I created the table simple because npa and\nnxx are defined as three-character strings. Tagging the queies is\na pain, especially as I often do queries of that form in psql(1).\n\n(Incidently, there are multiple similar tables, also keyed on\n(npa,nxx), which show the same problem. The nanpa table above is\njust a good example.)\n\nShould I convert the columns to text? Or create an additional index\nthat expects ::text args? (If so, how?)\n\nOr is there some other way to ensure the indices get used w/o having\nto tag data in the queries?\n\nThanks,\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n",
"msg_date": "Wed, 13 Dec 2006 13:48:10 -0500",
"msg_from": "James Cloos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing a query"
},
{
"msg_contents": "\nHave you run vacuum/analyze on the table?\n\n--\n Husam \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of James Cloos\nSent: Wednesday, December 13, 2006 10:48 AM\nTo: [email protected]\nSubject: [PERFORM] Optimizing a query\n\nI've currently got this table:\n\n,----\n| n=# \\d nanpa\n| Table \"public.nanpa\"\n| Column | Type | Modifiers \n| ------------+--------------+-----------\n| state | character(2) | \n| npa | character(3) | not null\n| nxx | character(3) | not null\n| ocn | character(4) | \n| company | text | \n| ratecenter | text | \n| switch | text | \n| effective | date | \n| use | character(2) | not null\n| assign | date | \n| ig | character(1) | \n| Indexes:\n| \"nanpa_pkey\" PRIMARY KEY, btree (npa, nxx) CLUSTER\n`----\n\nand was doing queries of the form:\n\n,----\n| select * from nanpa where npa=775 and nxx=413;\n`----\n\nwhere were quite slow. Explain showed that it was doing sequential\nscans even though the primary key contained the two term I was\nselecting on.\n\nToday, looking at it again in prep to this post, I noticed that the\nnumbers were being converted to ::text, and a quick test showed that\nqueries of the form:\n\n,----\n| select * from nanpa where npa=775::bpchar and nxx=413::bpchar;\n`----\n\nused the index.\n\nI specified char(3) when I created the table simple because npa and\nnxx are defined as three-character strings. Tagging the queies is\na pain, especially as I often do queries of that form in psql(1).\n\n(Incidently, there are multiple similar tables, also keyed on\n(npa,nxx), which show the same problem. The nanpa table above is\njust a good example.)\n\nShould I convert the columns to text? Or create an additional index\nthat expects ::text args? (If so, how?)\n\nOr is there some other way to ensure the indices get used w/o having\nto tag data in the queries?\n\nThanks,\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n**********************************************************************\nThis message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter.\n\nThank you.\n\r\n FADLD Tag\n**********************************************************************\n\n",
"msg_date": "Wed, 13 Dec 2006 11:25:00 -0800",
"msg_from": "\"Tomeh, Husam\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a query"
},
{
"msg_contents": "James Cloos <[email protected]> writes:\n> ... and was doing queries of the form:\n> | select * from nanpa where npa=775 and nxx=413;\n\nIf those are char(3) columns, shouldn't you be quoting the constants?\n\nselect * from nanpa where npa='775' and nxx='413';\n\nAny non-numeric input will fail entirely without the quotes, and I'm\nalso not too confident that inputs of less than three digits will work\nas you expect (the blank-padding might not match what's in the table).\nLeading zeroes would be troublesome too.\n\nOTOH, if the keys are and always will be small integers, it's just\nstupid not to be storing them as integers --- integer comparison\nis far faster than string.\n\nPostgres' data type capabilities are exceptional. Learn to work with\nthem, not against them --- that means thinking about what the data\nreally is and declaring it appropriately.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Dec 2006 16:48:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a query "
},
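In concrete terms, the quoted form Tom recommends keeps the comparison in the character(3) domain, so the (npa, nxx) primary key index is usable as-is (table and column names as posted earlier in the thread):

SELECT * FROM nanpa WHERE npa = '775' AND nxx = '413';

The other option he mentions, storing the codes as integers if they really are small integers, would make the comparisons cheaper still, at the cost of a one-time column conversion and of losing any leading zeroes.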
{
"msg_contents": ">>>>> \"Husam\" == Tomeh, Husam <[email protected]> writes:\n\nHusam> Have you run vacuum/analyze on the table?\n\nYes, back when I first noticed how slow it was.\nIt did not make any difference.\n\nexplain analyze says:\n\n,----\n| n=# explain analyse select * from nanpa where npa=775 and nxx=473;\n| QUERY PLAN \n| --------------------------------------------------------------------------------------------------------\n| Seq Scan on nanpa (cost=0.00..5344.60 rows=4 width=105) (actual time=371.718..516.816 rows=1 loops=1)\n| Filter: (((npa)::text = '775'::text) AND ((nxx)::text = '473'::text))\n| Total runtime: 516.909 ms\n| (3 rows)\n`----\n\nvs:\n\n,----\n| n=# explain analyse select * from nanpa where npa=775::char and nxx=473::char;\n| QUERY PLAN \n| ----------------------------------------------------------------------------------------------------------------------\n| Index Scan using nanpa_pkey on nanpa (cost=0.00..4.33 rows=1 width=105) (actual time=64.831..64.831 rows=0 loops=1)\n| Index Cond: ((npa = '7'::bpchar) AND (nxx = '4'::bpchar))\n| Total runtime: 64.927 ms\n| (3 rows)\n`----\n\nBTW, I forgot to mention I'm at 8.1.4 on that box.\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n",
"msg_date": "Wed, 13 Dec 2006 17:35:26 -0500",
"msg_from": "James Cloos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing a query"
},
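One detail worth spelling out from the second plan above: a bare ::char means character(1), so 775::char and 473::char reach the index as '7' and '4'. The index is used, but for the wrong values, which is why that run found zero rows. Quoted literals, or an explicit length on the cast, avoid the silent truncation (a sketch, not output captured from the thread):

EXPLAIN ANALYZE SELECT * FROM nanpa WHERE npa = '775' AND nxx = '473';
-- or, keeping the cast but giving it the right length:
EXPLAIN ANALYZE SELECT * FROM nanpa WHERE npa = 775::char(3) AND nxx = 473::char(3);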
{
"msg_contents": "\nYour nap and nxx columns have character datatype, so you should use\nquotes. Try:\n\n explain analyze select * from nanpa where npa='775' and\nnxx='473'; \n\nIf that does not work, you could try to influence the planner's\nexecution plan to favor index scans over sequential scan by tweaking a\ncouple of the postgres parameters, particularly, the\neffective_cache_size. This parameter primarily set the planner's\nestimates of the relative likelihood of a particular table or index\nbeing in memory, and will thus have a significant effect on whether the\nplanner chooses indexes over seqscans. Tweaking such parameters are\nusually done as a last resort.\n\n--\n Husam \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of James Cloos\nSent: Wednesday, December 13, 2006 2:35 PM\nTo: Tomeh, Husam\nCc: [email protected]\nSubject: Re: [PERFORM] Optimizing a query\n\n>>>>> \"Husam\" == Tomeh, Husam <[email protected]> writes:\n\nHusam> Have you run vacuum/analyze on the table?\n\nYes, back when I first noticed how slow it was.\nIt did not make any difference.\n\nexplain analyze says:\n\n,----\n| n=# explain analyse select * from nanpa where npa=775 and nxx=473;\n| QUERY PLAN\n\n|\n------------------------------------------------------------------------\n--------------------------------\n| Seq Scan on nanpa (cost=0.00..5344.60 rows=4 width=105) (actual\ntime=371.718..516.816 rows=1 loops=1)\n| Filter: (((npa)::text = '775'::text) AND ((nxx)::text =\n'473'::text))\n| Total runtime: 516.909 ms\n| (3 rows)\n`----\n\nvs:\n\n,----\n| n=# explain analyse select * from nanpa where npa=775::char and\nnxx=473::char;\n| QUERY PLAN\n\n|\n------------------------------------------------------------------------\n----------------------------------------------\n| Index Scan using nanpa_pkey on nanpa (cost=0.00..4.33 rows=1\nwidth=105) (actual time=64.831..64.831 rows=0 loops=1)\n| Index Cond: ((npa = '7'::bpchar) AND (nxx = '4'::bpchar))\n| Total runtime: 64.927 ms\n| (3 rows)\n`----\n\nBTW, I forgot to mention I'm at 8.1.4 on that box.\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n**********************************************************************\nThis message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter.\n\nThank you.\n\r\n FADLD Tag\n**********************************************************************\n\n",
"msg_date": "Wed, 13 Dec 2006 15:15:40 -0800",
"msg_from": "\"Tomeh, Husam\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing a query"
}
] |
[
{
"msg_contents": "Hi, everybody!\n\nRunning the same query on pg 8.2 through EXPLAIN ANALYZE takes 4x-10x time as running it without it.\nIs it ok?\n\n\nExample:\n\ntesting=> select count(*) from auth_user;\n count \n---------\n 2575675\n(1 row)\n\nTime: 1450,829 ms\ntesting=> explain analyze select count(*) from auth_user;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=89814.87..89814.88 rows=1 width=0) (actual time=18460.436..18460.439 rows=1 loops=1)\n -> Seq Scan on auth_user (cost=0.00..83373.89 rows=2576389 width=0) (actual time=0.424..9871.520 rows=2575675 loops=1)\n Total runtime: 18460.535 ms\n(3 rows)\n\nTime: 18461,194 ms\n",
"msg_date": "Thu, 14 Dec 2006 16:32:30 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "Evgeny Gridasov <[email protected]> writes:\n> Running the same query on pg 8.2 through EXPLAIN ANALYZE takes 4x-10x time as running it without it.\n\nIf your machine has slow gettimeofday() this is not surprising. 8.2 is\nno worse (or better) than any prior version.\n\nSome quick arithmetic from your results suggests that gettimeofday() is\ntaking about 3.3 microseconds, which is indeed pretty awful. What sort\nof machine is this exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 11:11:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE on 8.2 "
},
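The "quick arithmetic" can be reconstructed from the numbers in the first post: EXPLAIN ANALYZE brackets each row with timer calls, and treating the overhead as roughly two gettimeofday() calls per row reproduces the figure (a back-of-the-envelope sketch, not an exact accounting of the instrumentation):

-- (18460.535 - 1450.829) ms of overhead over ~2 calls per row * 2575675 rows
SELECT round((18460.535 - 1450.829) / (2 * 2575675) * 1000, 1) AS usec_per_call;
-- roughly 3.3 microseconds per gettimeofday()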
{
"msg_contents": "Tom,\nHello.\n\nThis is a Linux Debian 3.1 ontop of 2x XEON 3.4 Ghz.\nPostgreSQL is 8.2 checked out from CVS REL8_2_STABLE yesterday.\nI'm running the same Postgres on another machine,\nwith Debian Etch and have the same results.\n\nOn Thu, 14 Dec 2006 11:11:42 -0500\nTom Lane <[email protected]> wrote:\n\n> Evgeny Gridasov <[email protected]> writes:\n> > Running the same query on pg 8.2 through EXPLAIN ANALYZE takes 4x-10x time as running it without it.\n> \n> If your machine has slow gettimeofday() this is not surprising. 8.2 is\n> no worse (or better) than any prior version.\n> \n> Some quick arithmetic from your results suggests that gettimeofday() is\n> taking about 3.3 microseconds, which is indeed pretty awful. What sort\n> of machine is this exactly?\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Thu, 14 Dec 2006 19:29:20 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "Evgeny Gridasov <[email protected]> writes:\n> This is a Linux Debian 3.1 ontop of 2x XEON 3.4 Ghz.\n> PostgreSQL is 8.2 checked out from CVS REL8_2_STABLE yesterday.\n> I'm running the same Postgres on another machine,\n> with Debian Etch and have the same results.\n\nHmph. With 8.2 on Fedora 5 on a 2.8Ghz dual Xeon, I get this:\n\n\nregression=# create table foo as select x from generate_series(1,2500000) x;\nSELECT\nregression=# vacuum foo;\nVACUUM\nregression=# checkpoint;\nCHECKPOINT\nregression=# \\timing\nTiming is on.\nregression=# select count(*) from foo;\n count\n---------\n 2500000\n(1 row)\n\nTime: 666.639 ms\nregression=# select count(*) from foo;\n count\n---------\n 2500000\n(1 row)\n\nTime: 609.514 ms\nregression=# explain analyze select count(*) from foo;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=44764.00..44764.01 rows=1 width=0) (actual time=1344.812..1344.813 rows=1 loops=1)\n -> Seq Scan on foo (cost=0.00..38514.00 rows=2500000 width=0) (actual time=0.031..748.571 rows=2500000 loops=1)\n Total runtime: 1344.891 ms\n(3 rows)\n\nTime: 1345.755 ms\nregression=# explain analyze select count(*) from foo;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=44764.00..44764.01 rows=1 width=0) (actual time=1324.846..1324.847 rows=1 loops=1)\n -> Seq Scan on foo (cost=0.00..38514.00 rows=2500000 width=0) (actual time=0.046..748.582 rows=2500000 loops=1)\n Total runtime: 1324.902 ms\n(3 rows)\n\nTime: 1325.591 ms\nregression=#\n\nwhich works out to about 0.14 microsec per gettimeofday call, on a\nmachine that ought to be slower than yours. So I think you've got\neither a crummy motherboard, or a kernel that doesn't know the best\nway to read the clock on that hardware. There is some discussion\nof this in the archives (probably in pgsql-hackers); look back around\nMay or so when we were unsuccessfully trying to hack EXPLAIN to use\nfewer gettimeofday calls.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 11:41:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE on 8.2 "
},
{
"msg_contents": "On 12/14/06, Tom Lane <[email protected]> wrote:\n> Evgeny Gridasov <[email protected]> writes:\n> > This is a Linux Debian 3.1 ontop of 2x XEON 3.4 Ghz.\n> > PostgreSQL is 8.2 checked out from CVS REL8_2_STABLE yesterday.\n> > I'm running the same Postgres on another machine,\n> > with Debian Etch and have the same results.\n>\n> Hmph. With 8.2 on Fedora 5 on a 2.8Ghz dual Xeon, I get this:\n>\n<snip>\n> regression=# explain analyze select count(*) from foo;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=44764.00..44764.01 rows=1 width=0) (actual time=1324.846..1324.847 rows=1 loops=1)\n> -> Seq Scan on foo (cost=0.00..38514.00 rows=2500000 width=0) (actual time=0.046..748.582 rows=2500000 loops=1)\n> Total runtime: 1324.902 ms\n> (3 rows)\n>\n> Time: 1325.591 ms\n> regression=#\n>\n> which works out to about 0.14 microsec per gettimeofday call, on a\n> machine that ought to be slower than yours. So I think you've got\n> either a crummy motherboard, or a kernel that doesn't know the best\n> way to read the clock on that hardware. There is some discussion\n> of this in the archives (probably in pgsql-hackers); look back around\n> May or so when we were unsuccessfully trying to hack EXPLAIN to use\n> fewer gettimeofday calls.\n\nYow! I notice the same thing on our HP BL25p blades w/2*opteron 270\n(four total cores, AMD 8111 or 8131 chipset). 1.25 microsec/call vs\nmy new desktop (Intel Core2 6300) 0.16 microsec/call.\n\nI hope this isn't a \"crummy mainboard\" but I can't seem to affect\nthings by changing clock source (kernel 2.6.16 SLES10). I tried\nkernel command option clock=XXX where XXX in\n(cyclone,hpet,pmtmr,tsc,pit), no option was significantly better than\nthe default.\n\nAnyone know how this might be improved (short of replacing hardware)?\n\n-K\n",
"msg_date": "Thu, 14 Dec 2006 17:43:51 -0600",
"msg_from": "\"Kelly Burkhart\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "Kelly Burkhart wrote:\n> \n> I hope this isn't a \"crummy mainboard\" but I can't seem to affect\n> things by changing clock source (kernel 2.6.16 SLES10). I tried\n> kernel command option clock=XXX where XXX in\n> (cyclone,hpet,pmtmr,tsc,pit), no option was significantly better than\n> the default.\n> \n> Anyone know how this might be improved (short of replacing hardware)?\n> \n\nUpdating the BIOS might be worth investigating, and then bugging your \nLinux distro mailing list/support etc for more help. (What sort of \nmotherboard is it?)\n\nCheers\n\nMark\n",
"msg_date": "Fri, 15 Dec 2006 12:59:40 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "\"Kelly Burkhart\" <[email protected]> writes:\n> I hope this isn't a \"crummy mainboard\" but I can't seem to affect\n> things by changing clock source (kernel 2.6.16 SLES10). I tried\n> kernel command option clock=XXX where XXX in\n> (cyclone,hpet,pmtmr,tsc,pit), no option was significantly better than\n> the default.\n\nI believe that on machines where gettimeofday is really nice and fast,\nit doesn't require entry to the kernel at all; there's some hack that\nmakes the clock readable from userspace. (Obviously a switch to kernel\nmode would set you back a lot of the cycles involved here.) So it's not\nso much the kernel that you need to teach as glibc. How you do that is\nbeyond my expertise, but maybe that will help you google for the right\nthing ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Dec 2006 19:05:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE on 8.2 "
},
{
"msg_contents": "On Thu, 2006-12-14 at 19:05 -0500, Tom Lane wrote:\n> \"Kelly Burkhart\" <[email protected]> writes:\n> > I hope this isn't a \"crummy mainboard\" but I can't seem to affect\n> > things by changing clock source (kernel 2.6.16 SLES10). I tried\n> > kernel command option clock=XXX where XXX in\n> > (cyclone,hpet,pmtmr,tsc,pit), no option was significantly better than\n> > the default.\n> \n> I believe that on machines where gettimeofday is really nice and fast,\n> it doesn't require entry to the kernel at all; there's some hack that\n> makes the clock readable from userspace. (Obviously a switch to kernel\n> mode would set you back a lot of the cycles involved here.) So it's not\n> so much the kernel that you need to teach as glibc. How you do that is\n> beyond my expertise, but maybe that will help you google for the right\n> thing ...\n\nUntil we work out a better solution we can fix this in two ways:\n\n1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ] ...\n\n2. enable_analyze_timer = off | on (default) (USERSET)\n\nA performance drop of 4x-10x is simply unacceptable when trying to tune\nqueries where the current untuned time is already too high. Tying down\nproduction servers for hours on end when we know for certain all they\nare doing is calling gettimeofday millions of times is not good. This\nquickly leads to the view from objective people that PostgreSQL doesn't\nhave a great optimizer, whatever we say in its defence. I don't want to\nleave this alone, but I don't want to spend a month fixing it either.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 15 Dec 2006 10:28:08 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "On Fri, Dec 15, 2006 at 10:28:08AM +0000, Simon Riggs wrote:\n> Until we work out a better solution we can fix this in two ways:\n> \n> 1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ] ...\n> \n> 2. enable_analyze_timer = off | on (default) (USERSET)\n\nWhat exactly would this do? Only count actual rows or something? I\nwrote a patch that tried statistical sampling, but the figures were too\nfar off for people's liking.\n\n> A performance drop of 4x-10x is simply unacceptable when trying to tune\n> queries where the current untuned time is already too high. Tying down\n> production servers for hours on end when we know for certain all they\n> are doing is calling gettimeofday millions of times is not good. This\n> quickly leads to the view from objective people that PostgreSQL doesn't\n> have a great optimizer, whatever we say in its defence. I don't want to\n> leave this alone, but I don't want to spend a month fixing it either.\n\nI think the best option is setitimer(), but it's not POSIX so\nplatform support is going to be patchy.\n\nBTW, doing gettimeofday() without kernel entry is not really possible.\nYou could use the cycle counter but it has the problem that if you have\nmultiple CPUs you need to calibrate the result. If the CPU goes to\nsleep, there's is no way for the userspace process to know. Only the\nkernel has all the relevent information about what \"time\" is to get a\nreasonable result.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.",
"msg_date": "Fri, 15 Dec 2006 11:50:08 +0100",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "Hi list,\n\nLe vendredi 15 décembre 2006 11:50, Martijn van Oosterhout a écrit :\n> BTW, doing gettimeofday() without kernel entry is not really possible.\n> You could use the cycle counter but it has the problem that if you have\n> multiple CPUs you need to calibrate the result. If the CPU goes to\n> sleep, there's is no way for the userspace process to know. Only the\n> kernel has all the relevent information about what \"time\" is to get a\n> reasonable result.\n\nI remember having played with intel RDTSC (time stamp counter) for some timing \nmeasurement, but just read from several sources (including linux kernel \nhackers considering its usage for gettimeofday() implementation) that TSC is \nnot an accurate method to have elapsed time information.\n\nMay be some others method than gettimeofday() are available (Lamport \nTimestamps, as PGDG may have to consider having a distributed processing \nready EA in some future), cheaper and accurate?\nAfter all, the discussion, as far as I understand it, is about having a \naccurate measure of duration of events, knowing when they occurred in the day \ndoes not seem to be the point.\n\nMy 2¢, hoping this could be somehow helpfull,\n-- \nDimitri Fontaine\nhttp://www.dalibo.com/",
"msg_date": "Fri, 15 Dec 2006 12:24:35 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "\"Martijn van Oosterhout\" <[email protected]> writes:\n\n> BTW, doing gettimeofday() without kernel entry is not really possible.\n\nThat's too strong a conclusion. Doing gettimeofday() without some help from\nthe kernel isn't possible but it isn't necessary to enter the kernel for each\ncall.\n\nThere are various attempts at providing better timing infrastructure at low\noverhead but I'm not sure what's out there currently. I expect to do this what\nwe'll have to do is invent a pg_* abstraction that has various implementations\non different architectures. On Solaris it can use DTrace internally, on Linux\nit might have something else (or more likely several different options\ndepending on the age and config options of the kernel). \n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 15 Dec 2006 12:15:59 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "On Fri, 2006-12-15 at 11:50 +0100, Martijn van Oosterhout wrote:\n> On Fri, Dec 15, 2006 at 10:28:08AM +0000, Simon Riggs wrote:\n> > Until we work out a better solution we can fix this in two ways:\n> > \n> > 1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ] ...\n> > \n> > 2. enable_analyze_timer = off | on (default) (USERSET)\n> \n> What exactly would this do? Only count actual rows or something? \n\nYes. It's better to have this than nothing at all.\n\n> I\n> wrote a patch that tried statistical sampling, but the figures were too\n> far off for people's liking.\n\nWell, I like your ideas, so if you have any more...\n\nMaybe sampling every 10 rows will bring things down to an acceptable\nlevel (after the first N). You tried less than 10 didn't you?\n\nMaybe we can count how many real I/Os were required to perform each\nparticular row, so we can adjust the time per row based upon I/Os. ISTM\nthat sampling at too low a rate means we can't spot the effects of cache\nand I/O which can often be low frequency but high impact.\n\n> I think the best option is setitimer(), but it's not POSIX so\n> platform support is going to be patchy.\n\nDon't understand that. I thought that was to do with alarms and signals.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 15 Dec 2006 12:20:46 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "On Fri, Dec 15, 2006 at 12:15:59PM +0000, Gregory Stark wrote:\n> There are various attempts at providing better timing infrastructure at low\n> overhead but I'm not sure what's out there currently. I expect to do this what\n> we'll have to do is invent a pg_* abstraction that has various implementations\n> on different architectures. On Solaris it can use DTrace internally, on Linux\n> it might have something else (or more likely several different options\n> depending on the age and config options of the kernel). \n\nI think we need to move to a sampling approach. setitimer is good,\nexcept it doesn't tell you if signals have been lost. Given they are\nmost likely to be lost during high disk I/O, they're actually\nsignificant. I'm trying to think of a way around that. Then you don't\nneed a cheap gettimeofday at all...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.",
"msg_date": "Fri, 15 Dec 2006 13:24:52 +0100",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "On Fri, Dec 15, 2006 at 12:20:46PM +0000, Simon Riggs wrote:\n> > I\n> > wrote a patch that tried statistical sampling, but the figures were too\n> > far off for people's liking.\n> \n> Well, I like your ideas, so if you have any more...\n> \n> Maybe sampling every 10 rows will bring things down to an acceptable\n> level (after the first N). You tried less than 10 didn't you?\n\nYeah, it reduced the number of calls as the count got larger. It broke\nsomewhere, though I don't quite remember why.\n\n> Maybe we can count how many real I/Os were required to perform each\n> particular row, so we can adjust the time per row based upon I/Os. ISTM\n> that sampling at too low a rate means we can't spot the effects of cache\n> and I/O which can often be low frequency but high impact.\n\nOne idea is to sample at fixed interval. Where the I/O cost is high,\nthere'll be a lot of sampling points. The issue being that the final\nresult are not totally accurate anymore.\n\n> > I think the best option is setitimer(), but it's not POSIX so\n> > platform support is going to be patchy.\n> \n> Don't understand that. I thought that was to do with alarms and signals.\n\nOn my system the manpage say setitimer() conforms to: SVr4, 4.4BSD. I'm\nunsure how many of the supported platforms fall under that\ncatagorisation.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.",
"msg_date": "Fri, 15 Dec 2006 13:30:52 +0100",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "* Simon Riggs:\n\n>> I think the best option is setitimer(), but it's not POSIX so\n>> platform support is going to be patchy.\n>\n> Don't understand that. I thought that was to do with alarms and\n> signals.\n\nYou could use it for sampling. Every few milliseconds, you record\nwhich code is executing (possibly using a global variable which is set\nand reset accordingly). But this might lead to tons of interrupted\nsystem calls, and not all systems mask them, so this might not be an\noption for the PostgreSQL code base.\n\nOn the other hand, if performance in more recent Linux releases (from\nkernel.org) are acceptable, you should assume that the problem will\neventually fix itself. FWIW, I see the 9x overhead on something that\nis close to 2.6.17 (on AMD64/Opteron), so this could be wishful\nthinking. 8-(\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 15 Dec 2006 13:32:20 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "Am Freitag, 15. Dezember 2006 11:28 schrieb Simon Riggs:\n> Until we work out a better solution we can fix this in two ways:\n>\n> 1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ] ...\n>\n> 2. enable_analyze_timer = off | on (default) (USERSET)\n\nThe second one is enough in my mind.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Fri, 15 Dec 2006 14:06:55 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> On Fri, Dec 15, 2006 at 12:20:46PM +0000, Simon Riggs wrote:\n>> Maybe sampling every 10 rows will bring things down to an acceptable\n>> level (after the first N). You tried less than 10 didn't you?\n\n> Yeah, it reduced the number of calls as the count got larger. It broke\n> somewhere, though I don't quite remember why.\n\nThe fundamental problem with it was the assumption that different\nexecutions of a plan node will have the same timing. That's not true,\nin fact not even approximately true. IIRC the patch did realize\nthat first-time-through is not a predictor for the rest, but some of\nour plan nodes have enormous variance even after the first time.\nI think the worst case is batched hash joins.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2006 09:56:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2 "
},
{
"msg_contents": "Gregory Stark <[email protected]> writes:\n> There are various attempts at providing better timing infrastructure at low\n> overhead but I'm not sure what's out there currently. I expect to do this what\n> we'll have to do is invent a pg_* abstraction that has various implementations\n> on different architectures.\n\nYou've got to be kidding. Surely it's glibc's responsibility, not ours,\nto implement gettimeofday correctly for the hardware.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2006 10:45:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2 "
},
{
"msg_contents": "On Fri, Dec 15, 2006 at 09:56:57AM -0500, Tom Lane wrote:\n> Martijn van Oosterhout <[email protected]> writes:\n> > On Fri, Dec 15, 2006 at 12:20:46PM +0000, Simon Riggs wrote:\n> >> Maybe sampling every 10 rows will bring things down to an acceptable\n> >> level (after the first N). You tried less than 10 didn't you?\n> \n> > Yeah, it reduced the number of calls as the count got larger. It broke\n> > somewhere, though I don't quite remember why.\n> \n> The fundamental problem with it was the assumption that different\n> executions of a plan node will have the same timing. That's not true,\n> in fact not even approximately true. IIRC the patch did realize\n> that first-time-through is not a predictor for the rest, but some of\n> our plan nodes have enormous variance even after the first time.\n> I think the worst case is batched hash joins.\n\nIt didn't assume that because that's obviously bogus. It assumed the\ndurations would be spread as a normal distribution. Which meant that\nover time the average of the measured iterations would approch the\nactual average. It tried to take enough measurements to try and keep\nexpected error small, but it's statistics, you can only say \"this will\ngive the right answer >95% of the time\".\n\nYou are correct though, the error was caused by unexpectedly large\nvariations, or more likely, an unexpected distribution curve.\nStatistically, we took enough samples to not be affected significantly\nby large variations. Even if it looked more like a gamma distribution\nit should not have been as far off as it was.\n\nLooking at alternative approaches, like sampling with a timer, you end\nup with the same problem: sometimes the calculations will fail and\nproduce something strange. The simplest example being than a 100Hz\ntimer is not going to produce any useful information for queries in the\nmillisecond range. A higher frequency timer than that is not going to\nbe available portably.\n\nYou could probably throw more effort into refining the statistics behind\nit, but at some point we're going to have to draw a line and say: it's\ngoing to be wrong X% of the time, deal with it. If we're not willing to\nsay that, then there's no point going ahead with any statistical\napproach.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.",
"msg_date": "Fri, 15 Dec 2006 16:48:45 +0100",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Am Freitag, 15. Dezember 2006 11:28 schrieb Simon Riggs:\n>> Until we work out a better solution we can fix this in two ways:\n>> \n>> 1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ] ...\n>> 2. enable_analyze_timer = off | on (default) (USERSET)\n\n> The second one is enough in my mind.\n\nI don't see any point in either one. If you're not going to collect\ntiming data then the only useful info EXPLAIN ANALYZE could provide is\nknowledge of which rowcount estimates are badly off ... and to get that,\nyou have to wait for the query to finish, which may well be impractical\neven without the gettimeofday overhead. We had discussed upthread the\nidea of having an option to issue a NOTICE as soon as any actual\nrowcount exceeds the estimate by some-configurable-percentage, and that\nseems to me to be a much more useful response to the problem of\n\"E.A. takes too long\" than removing gettimeofday.\n\nOne thing that's not too clear to me though is how the NOTICE would\nidentify the node at which the rowcount was exceeded...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Dec 2006 10:57:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2 "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Gregory Stark <[email protected]> writes:\n> > There are various attempts at providing better timing infrastructure at low\n> > overhead but I'm not sure what's out there currently. I expect to do this what\n> > we'll have to do is invent a pg_* abstraction that has various implementations\n> > on different architectures.\n> \n> You've got to be kidding. Surely it's glibc's responsibility, not ours,\n> to implement gettimeofday correctly for the hardware.\n\nExcept for two things:\n\na) We don't really need gettimeofday. That means we don't need something\nsensitive to adjustments made by ntp etc. In fact that would be actively bad.\nCurrently if the user runs \"date\" to reset his clock back a few days I bet\ninteresting things happen to a large explain analyze that's running.\n\nIn fact we don't need something that represents any absolute time, only time\nelapsed since some other point we choose. That might be easier to implement\nthan what glibc has to do to implement gettimeofday fully.\n\nb) glibc may not want to endure an overhead on every syscall and context\nswitch to make gettimeofday faster on the assumption that gettimeofday is a\nrare call and it should pay the price rather than imposing an overhead on\neverything else.\n\nPostgres knows when it's running an explain analyze and a 1% overhead would be\nentirely tolerable, especially if it affected the process pretty much evenly\nunlike the per-gettimeofday-overhead which can get up as high as 100% on some\ntypes of subplans and is negligible on others. And more to the point Postgres\nwouldn't have to endure this overhead at all when it's not needed whereas\nglibc has no idea when you're going to need gettimeofday next.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "15 Dec 2006 11:10:10 -0500",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "At 10:45 AM 12/15/2006, Tom Lane wrote:\n>Gregory Stark <[email protected]> writes:\n> > There are various attempts at providing better timing infrastructure at low\n> > overhead but I'm not sure what's out there currently. I expect to \n> do this what\n> > we'll have to do is invent a pg_* abstraction that has various \n> implementations\n> > on different architectures.\n>\n>You've got to be kidding. Surely it's glibc's responsibility, not ours,\n>to implement gettimeofday correctly for the hardware.\n>\n> regards, tom lane\n\nI agree with Tom on this. Perhaps the best compromise is for the pg \ncommunity to make thoughtful suggestions to the glibc community?\n\nRon Peacetree \n\n",
"msg_date": "Fri, 15 Dec 2006 11:55:52 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2 "
},
{
"msg_contents": "On Fri, 2006-12-15 at 10:57 -0500, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > Am Freitag, 15. Dezember 2006 11:28 schrieb Simon Riggs:\n> >> Until we work out a better solution we can fix this in two ways:\n> >> \n> >> 1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ] ...\n> >> 2. enable_analyze_timer = off | on (default) (USERSET)\n> \n> > The second one is enough in my mind.\n> \n> I don't see any point in either one. If you're not going to collect\n> timing data then the only useful info EXPLAIN ANALYZE could provide is\n> knowledge of which rowcount estimates are badly off ... and to get that,\n> you have to wait for the query to finish, which may well be impractical\n> even without the gettimeofday overhead. \n\nOn a different part of this thread, you say:\n\nOn Fri, 2006-12-15 at 09:56 -0500, Tom Lane wrote:\n\n> The fundamental problem with it was the assumption that different\n> executions of a plan node will have the same timing. That's not true,\n> in fact not even approximately true. \n\nIt doesn't make sense to me to claim that the timing is so important\nthat we cannot do without it, at the same time as saying it isn't even\napproximately true that is highly variable.\n\n> We had discussed upthread the\n> idea of having an option to issue a NOTICE as soon as any actual\n> rowcount exceeds the estimate by some-configurable-percentage, and that\n> seems to me to be a much more useful response to the problem of\n> \"E.A. takes too long\" than removing gettimeofday.\n\n> One thing that's not too clear to me though is how the NOTICE would\n> identify the node at which the rowcount was exceeded...\n\nWe'd have to output the whole EXPLAIN as a NOTICE for it to make any\nsense. If we can't do without the timings, then half an EXPLAIN would be\neven worse.\n\nWe'd need to take account of non-linear nodes. Hash nodes react badly\nbeyond a certain point, HashAgg even worse. Sort performs poorly after\nthe end of memory, as does Materialize. Other nodes are more linear so\nwould need a different percentage. I don't like the sound of a whole\ngaggle of GUCs to describe that. Any ideas?\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Dec 2006 22:33:23 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2"
},
{
"msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> On Fri, 2006-12-15 at 09:56 -0500, Tom Lane wrote:\n>> The fundamental problem with it was the assumption that different\n>> executions of a plan node will have the same timing. That's not true,\n>> in fact not even approximately true. \n\n> It doesn't make sense to me to claim that the timing is so important\n> that we cannot do without it, at the same time as saying it isn't even\n> approximately true that is highly variable.\n\nHuh? What I said was that successive executions of the same plan node\nmay take considerably different amounts of time, and the proposed\nsampling patch failed to handle that situation with acceptable accuracy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2006 23:43:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] EXPLAIN ANALYZE on 8.2 "
}
] |
[
{
"msg_contents": "I have the opportunity to benchmark a system is based on Supermicro 6015B-8V. \nIt has 2x Quad Xeon E5320 1.86GHZ, 4GB DDR2 533, 1x 73GB 10k SCSI.\n\nhttp://www.supermicro.com/products/system/1U/6015/SYS-6015B-8V.cfm\n\nWe can add more RAM and drives for testing purposes. Can someone suggest what \nbenchmarks with what settings would be desirable to see how this system \nperforms. I don't believe I've seen any postgres benchmarks done on a quad \nxeon yet.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Fri, 15 Dec 2006 19:24:05 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "opportunity to benchmark a quad core Xeon"
},
{
"msg_contents": "On 16-12-2006 4:24 Jeff Frost wrote:\n> We can add more RAM and drives for testing purposes. Can someone \n> suggest what benchmarks with what settings would be desirable to see how \n> this system performs. I don't believe I've seen any postgres benchmarks \n> done on a quad xeon yet.\n\nWe've done our \"standard\" benchmark on a dual X5355:\nhttp://tweakers.net/reviews/661\n\nVerdict is that for a price/performance-ratio you're better off with a \n5160, but in absolute performance it does win.\n\nBest regards,\n\nArjen\n",
"msg_date": "Sat, 16 Dec 2006 09:11:59 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: opportunity to benchmark a quad core Xeon"
},
{
"msg_contents": "On Sat, 16 Dec 2006, Arjen van der Meijden wrote:\n\n> On 16-12-2006 4:24 Jeff Frost wrote:\n>> We can add more RAM and drives for testing purposes. Can someone suggest \n>> what benchmarks with what settings would be desirable to see how this \n>> system performs. I don't believe I've seen any postgres benchmarks done on \n>> a quad xeon yet.\n>\n> We've done our \"standard\" benchmark on a dual X5355:\n> http://tweakers.net/reviews/661\n>\n> Verdict is that for a price/performance-ratio you're better off with a 5160, \n> but in absolute performance it does win.\n>\n\nArjen,\n\nHave you guys run your benchmark on a quad opteron board yet? I'm curious how \nthe dual quad core Intels compare to quad dual core opteron.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Mon, 18 Dec 2006 09:20:57 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: opportunity to benchmark a quad core Xeon"
}
] |
[
{
"msg_contents": "Hello,\nI am trying to make partitions:\n\nCREATE SEQUENCE data_seq;\nCREATE TABLE data (\n identyf bigint,\n name varchar,\n added timestamp default now()\n);\n\n/*********************************************/\nCREATE TABLE data_a (CHECK (name LIKE varchar 'a%')\n) INHERITS (data);\n--\nCREATE TABLE data_b (CHECK (name LIKE varchar 'b%')\n) INHERITS (data);\n\n/*********************************************/\nCREATE INDEX data_a_idx ON data_a(name);\nCREATE INDEX data_b_idx ON data_b(name);\n\n/*********************************************/\nCREATE RULE data_insert_a AS ON INSERT TO data WHERE (name LIKE 'a%')\nDO INSTEAD INSERT INTO data_a(identyf,name) VALUES\n(nextval('data_seq'),NEW.name);\n--\nCREATE RULE data_insert_b AS ON INSERT TO data WHERE (name LIKE 'b%')\nDO INSTEAD INSERT INTO data_b(identyf,name) VALUES\n(nextval('data_seq'),NEW.name);\n\n\nI put some data and vacuum:\n\n/*********************************************/\nINSERT INTO data(name) VALUES ('aaa');\nINSERT INTO data(name) VALUES ('aab');\nINSERT INTO data(name) VALUES ('baa');\nINSERT INTO data(name) VALUES ('bab');\n\nVACUUM ANALYZE data_a;\nVACUUM ANALYZE data_b;\n\n/*********************************************/\nSET constraint_exclusion=off;\nSET\nEXPLAIN SELECT * FROM data WHERE name = 'aaa';\n QUERY PLAN\n------------------------------------------------------------------------\n Result (cost=0.00..24.42 rows=7 width=48)\n -> Append (cost=0.00..24.42 rows=7 width=48)\n -> Seq Scan on data (cost=0.00..22.38 rows=5 width=48)\n Filter: ((name)::text = 'aaa'::text)\n -> Seq Scan on data_a data (cost=0.00..1.02 rows=1 width=23)\n Filter: ((name)::text = 'aaa'::text)\n -> Seq Scan on data_b data (cost=0.00..1.02 rows=1 width=23)\n Filter: ((name)::text = 'aaa'::text)\n(8 rows)\n\n\n/*********************************************/\nSET constraint_exclusion=on;\nSET\n\nSHOW constraint_exclusion;\n constraint_exclusion\n----------------------\n on\n(1 row)\n\nEXPLAIN SELECT * FROM data WHERE name = 'aaa';\n QUERY PLAN\n------------------------------------------------------------------------\n Result (cost=0.00..24.42 rows=7 width=48)\n -> Append (cost=0.00..24.42 rows=7 width=48)\n -> Seq Scan on data (cost=0.00..22.38 rows=5 width=48)\n Filter: ((name)::text = 'aaa'::text)\n -> Seq Scan on data_a data (cost=0.00..1.02 rows=1 width=23)\n Filter: ((name)::text = 'aaa'::text)\n -> Seq Scan on data_b data (cost=0.00..1.02 rows=1 width=23)\n Filter: ((name)::text = 'aaa'::text)\n(8 rows)\n\n\nI have tried with name as text in data table and in CHECK. Where do I\nhave an error? Is it possible to make partitions with strings?\n\nThank you for any clues.\n\nBest regards,\njamcito\n\n----------------------------------------------------------------------\nsmieszne, muzyka, pilka, sexy, kibice, kino, ciekawe, extreme, kabaret\nhttp://link.interia.pl/f19d4 - najlepsze filmy w intermecie\n\n",
"msg_date": "Sat, 16 Dec 2006 15:29:40 +0100",
"msg_from": "jamcito <[email protected]>",
"msg_from_op": true,
"msg_subject": "partition text/varchar check problem"
},
{
"msg_contents": "jamcito <[email protected]> writes:\n> I am trying to make partitions:\n\n> CREATE TABLE data_a (CHECK (name LIKE varchar 'a%')\n> ) INHERITS (data);\n> --\n> CREATE TABLE data_b (CHECK (name LIKE varchar 'b%')\n> ) INHERITS (data);\n\nThat's not going to work because the planner is unable to prove anything\nabout the behavior of LIKE --- there is nothing in the system that\noffers a relationship between the = operator and the LIKE operator.\nYou'll need something like\n\n\tCHECK (name >= 'a' AND name < 'b')\n\tCHECK (name >= 'b' AND name < 'c')\n\netc. (These work for a query like \"WHERE name = 'foo'\" because\nthe >= < and = operators are all members of the same btree opclass,\nso the planner knows how to reason about them.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2006 10:48:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition text/varchar check problem "
},
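Applied to the tables from the first post, the range-style constraints would look like this (a sketch; only the CHECK clauses change from the original definitions):

CREATE TABLE data_a (CHECK (name >= 'a' AND name < 'b')) INHERITS (data);
CREATE TABLE data_b (CHECK (name >= 'b' AND name < 'c')) INHERITS (data);

-- With these in place, an equality probe can be pruned:
SET constraint_exclusion = on;
EXPLAIN SELECT * FROM data WHERE name = 'aaa';
-- only the parent table and data_a should remain in the plan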
{
"msg_contents": "Tom Lane wrote:\n>> CREATE TABLE data_a (CHECK (name LIKE varchar 'a%')\n>> ) INHERITS (data);\n>> --\n>> CREATE TABLE data_b (CHECK (name LIKE varchar 'b%')\n>> ) INHERITS (data);\n> \n> That's not going to work because the planner is unable to prove anything\n> about the behavior of LIKE --- there is nothing in the system that\n> offers a relationship between the = operator and the LIKE operator.\n> You'll need something like\n> \n> \tCHECK (name >= 'a' AND name < 'b')\n> \tCHECK (name >= 'b' AND name < 'c')\n> \n> etc. (These work for a query like \"WHERE name = 'foo'\" because\n> the >= < and = operators are all members of the same btree opclass,\n> so the planner knows how to reason about them.)\n> \n> \t\t\tregards, tom lane\n\nThank you, it works!\n\nCheers,\njamcito\n\n----------------------------------------------------------------------\nJestes kierowca? To poczytaj! >>> http://link.interia.pl/f199e\n\n",
"msg_date": "Sat, 16 Dec 2006 19:19:37 +0100",
"msg_from": "jamcito <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partition text/varchar check problem -- solved"
},
{
"msg_contents": "jamcito napisaďż˝(a):\n> /*********************************************/\n> SET constraint_exclusion=on;\n> SET\n>\n> SHOW constraint_exclusion;\n> constraint_exclusion\n> ----------------------\n> on\n> (1 row)\n>\n> EXPLAIN SELECT * FROM data WHERE name = 'aaa';\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Result (cost=0.00..24.42 rows=7 width=48)\n> -> Append (cost=0.00..24.42 rows=7 width=48)\n> -> Seq Scan on data (cost=0.00..22.38 rows=5 width=48)\n> Filter: ((name)::text = 'aaa'::text)\n> -> Seq Scan on data_a data (cost=0.00..1.02 rows=1 width=23)\n> Filter: ((name)::text = 'aaa'::text)\n> -> Seq Scan on data_b data (cost=0.00..1.02 rows=1 width=23)\n> Filter: ((name)::text = 'aaa'::text)\n> (8 rows)\n> \nCan you show what you get from:\nEXPLAIN SELECT * FROM data WHERE name LIKE 'a%'\n\n?\n\nIrek.\n\n",
"msg_date": "Sat, 16 Dec 2006 21:16:13 +0100",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition text/varchar check problem"
},
{
"msg_contents": "Ireneusz Pluta wrote:\n> Can you show what you get from:\n> EXPLAIN SELECT * FROM data WHERE name LIKE 'a%'\n> \n> ?\n> \n> Irek.\n\nI get:\n QUERY PLAN\n------------------------------------------------------------------------\n Result (cost=0.00..24.42 rows=8 width=48)\n -> Append (cost=0.00..24.42 rows=8 width=48)\n -> Seq Scan on data (cost=0.00..22.38 rows=5 width=48)\n Filter: ((name)::text ~~ 'a%'::text)\n -> Seq Scan on data_a data (cost=0.00..1.02 rows=2 width=23)\n Filter: ((name)::text ~~ 'a%'::text)\n -> Seq Scan on data_b data (cost=0.00..1.02 rows=1 width=23)\n Filter: ((name)::text ~~ 'a%'::text)\n(8 rows)\n\nBoth partition tables are scanned.\n\nBest,\njamcito\n\n----------------------------------------------------------------------\nsmieszne, muzyka, pilka, sexy, kibice, kino, ciekawe, extreme, kabaret\nhttp://link.interia.pl/f19d4 - najlepsze filmy w intermecie\n\n",
"msg_date": "Sat, 16 Dec 2006 21:30:03 +0100",
"msg_from": "jamcito <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partition text/varchar check problem"
},
{
"msg_contents": "Ireneusz Pluta <[email protected]> writes:\n> Can you show what you get from:\n> EXPLAIN SELECT * FROM data WHERE name LIKE 'a%'\n\nWon't help. Exact equality of the WHERE condition is useful for\npartial-index cases, because there the planner needs to prove that\nthe WHERE condition implies the index predicate before it can use\nthe index; and exact equality is certainly sufficient for that.\nBut for constraint exclusion, the problem is to prove that the\nWHERE condition refutes the constraint, rather than implies it.\nKnowing that \"name LIKE 'a%'\" disproves \"name LIKE 'b%'\" requires\nmore knowledge about LIKE than the planner has got.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Dec 2006 15:34:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition text/varchar check problem "
}
] |
[
{
"msg_contents": "I'm writing a webmail-type application that is meant to be used in a\ncorporate environment. The core of my system is a Postgres database\nthat is used as a message header cache. The two (relevant) tables\nbeing used are pasted into the end of this message. My problem is\nthat, as the messages table increases to tens of millions of rows,\npgsql slows down considerably. Even an operation like \"select\ncount(*) from messages\" can take minutes, with a totally idle system.\nPostgres seems to be the most scalable Free database out there, so I\nmust be doing something wrong.\n\nAs for the most common strategy of having a slower (more rows)\n\"archival\" database and a smaller, faster \"live\" database, all the\nclients in the company are using their normal corporate email server\nfor day-to-day email handling. The webmail is used for access email\nthat's no longer on the corporate server, so it's not really simple to\nsay which emails should be considered live and which are really\nout-of-date.\n\nMy postgres settings are entirely default with the exception of\nshared_buffers being set to 40,000 and max_connections set to 400.\nI'm not sure what the meaning of most of the other settings are, so I\nhaven't touched them. The machines running the database servers are\nmy home desktop (a dual-core athlon 3200+ with 2GB RAM and a 120GB\nSATA II drive), and a production server with two dual-core Intel\nchips, 4 GB RAM, and a RAID 5 array of SATA II drives on a 3Ware 9550\ncontroller. Both machines are running Gentoo Linux with a 2.6.1x\nkernel, and both exhibit significant performance degradation when I\nstart getting tens of millions of records.\n\nAny advice would be most appreciated. Thanks in advance!\n\nTables:\n\nCREATE TABLE EmailAddresses (\n emailid SERIAL PRIMARY KEY, -- The unique identifier of this address\n name TEXT NOT NULL, -- The friendly name in the address\n addrspec TEXT NOT NULL, -- The user@domain part of the address\n UNIQUE(name, addrspec)\n);\n\nand\n\nCREATE TABLE Messages (\n -- Store info:\n msgkey BIGSERIAL PRIMARY KEY, -- Unique identifier for a message\n path TEXT NOT NULL, -- Where the message is on the file system\n inserted TIMESTAMP DEFAULT now(),-- When the message was fetched\n -- Message Info:\n msgid TEXT UNIQUE NOT NULL, -- Message's Message-Id field\n mfrom INTEGER -- Who sent the message\n REFERENCES EmailAddresses\n DEFAULT NULL,\n mdate TIMESTAMP DEFAULT NULL, -- Message \"date\" header field\n replyto TEXT DEFAULT NULL, -- Message-ID of replied-to message\n subject TEXT DEFAULT NULL, -- Message \"subject\" header field\n numatch INTEGER DEFAULT NULL, -- Number of attachments\n UNIQUE(path)\n);\n",
"msg_date": "Sat, 16 Dec 2006 11:26:02 -0600",
"msg_from": "tsuraan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scaling concerns"
},
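On the configuration point raised above (everything default except shared_buffers and max_connections): for a 4 GB, 8.1-era box like the production server described, a conservative postgresql.conf starting point might look like the following. The numbers are illustrative assumptions, not measurements from this thread, and use the unit-less kB / 8 kB-page form those releases expect:

shared_buffers = 40000           # 8kB pages, about 320MB (as already set)
work_mem = 16384                 # kB per sort/hash; mind the 400-connection limit
maintenance_work_mem = 262144    # kB; speeds up VACUUM and index builds
effective_cache_size = 262144    # 8kB pages, about 2GB expected in the OS cache
checkpoint_segments = 16         # fewer, larger checkpoints during bulk loads
autovacuum = on                  # present but off by default in 8.1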
{
"msg_contents": "On Sat, Dec 16, 2006 at 11:26:02AM -0600, tsuraan wrote:\n> Even an operation like \"select count(*) from messages\" can take minutes,\n> with a totally idle system. Postgres seems to be the most scalable Free\n> database out there, so I must be doing something wrong.\n\nUnqualified SELECT COUNT(*) FROM foo is one of the most expensive operations\nyou can do on your system, since the visibility information has to be checked\non disk for each row. Instead, try real queries on real data, and post here\nif some are too slow for you.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 16 Dec 2006 18:32:21 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling concerns"
},
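When an exact total really is needed frequently, the usual workaround (an aggregate table, as also suggested further down the thread) is to maintain the count instead of scanning for it. A minimal PL/pgSQL sketch against the messages table, assuming the plpgsql language is installed in the database:

CREATE TABLE messages_count (n bigint NOT NULL);
INSERT INTO messages_count SELECT count(*) FROM messages;   -- seed it once

CREATE FUNCTION messages_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE messages_count SET n = n + 1;
    ELSE   -- DELETE
        UPDATE messages_count SET n = n - 1;
    END IF;
    RETURN NULL;   -- the return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER messages_count_trg AFTER INSERT OR DELETE ON messages
    FOR EACH ROW EXECUTE PROCEDURE messages_count_trig();

SELECT n FROM messages_count;   -- instant, instead of scanning 20 million rows

The single counter row does serialize concurrent writers, so busy installations usually insert per-transaction delta rows instead and roll them up periodically.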
{
"msg_contents": "> Unqualified SELECT COUNT(*) FROM foo is one of the most expensive operations\n> you can do on your system, since the visibility information has to be\n> checked\n> on disk for each row. Instead, try real queries on real data, and post here\n> if some are too slow for you.\n\nOk, that's a bad example. I'm learning :-) Is insert ... select also\nreally expensive then? I have a table loaded with message-id and path\ninformation of currently-existing messages. It has ~20 million rows.\nTrying to do \"INSERT INTO Messages(path, msgid) SELECT (path, msgid)\nFROM tmpMessages\" took a really long time before psql died with an\nout-of-memory error. Is there a more sane way to do a table copy, or\nshould I have just dropped all the indices from the Message table and\nloaded into that?\n\nThanks!\n",
"msg_date": "Sat, 16 Dec 2006 12:03:38 -0600",
"msg_from": "tsuraan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scaling concerns"
},
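Two side notes on the statement as quoted, with a hedged sketch below (the constraint names are the defaults PostgreSQL would have generated and may differ; check \d messages first). First, "SELECT (path, msgid)" is a single row-value expression rather than two columns, so the select list should be unparenthesised. Second, the usual bulk-load pattern drops the unique constraints on the target, loads, then rebuilds them, which avoids maintaining the indexes row by row:

BEGIN;
ALTER TABLE Messages DROP CONSTRAINT messages_msgid_key;   -- assumed name
ALTER TABLE Messages DROP CONSTRAINT messages_path_key;    -- assumed name

INSERT INTO Messages (path, msgid)
SELECT path, msgid              -- no parentheses: two columns, not a row value
FROM tmpMessages;

-- rebuilding the constraints assumes the loaded data is duplicate-free
ALTER TABLE Messages ADD CONSTRAINT messages_msgid_key UNIQUE (msgid);
ALTER TABLE Messages ADD CONSTRAINT messages_path_key UNIQUE (path);
COMMIT;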
{
"msg_contents": "Le samedi 16 décembre 2006 18:32, Steinar H. Gunderson a écrit :\n> Instead, try real queries on real data,\n> and post here if some are too slow for you.\n\nTo quickly find out a subset of slow queries on your production system, you \ncan use the pgfouine tool:\n http://pgfouine.projects.postgresql.org/\n\nIf you then want to make some measurements of PostgreSQL performances with \nsome different settings and compare them, consider using the tsung tool (and \nmay be tsung-ploter companion tool to graph several benchs onto the same \ncharts for comparing purpose):\n http://pgfouine.projects.postgresql.org/tsung.html\n http://tsung.erlang-projects.org/\n http://debian.dalibo.org/unstable/\n\nThis latter link also contains a .tar.gz archive of tsung-ploter in case \nyou're not running a debian system. Dependencies are python and matplotlib.\n\nRegards,\n-- \nDimitri Fontaine\nhttp://www.dalibo.com/",
"msg_date": "Sat, 16 Dec 2006 22:32:59 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling concerns"
},
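pgfouine works from the server log, so slow-statement logging has to be switched on before it has anything to analyse. A small reference sketch (parameter names as in 8.1/8.2; the values are examples only, and the commented lines belong in postgresql.conf, not psql):

-- postgresql.conf:
--   log_min_duration_statement = 200   # log statements slower than 200 ms
--   redirect_stderr = on               # send the log to a file pgfouine can read
-- the current values can be checked from psql:
SHOW log_min_duration_statement;
SHOW log_destination;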
{
"msg_contents": "* tsuraan <[email protected]> [061216 18:26]:\n> I'm writing a webmail-type application that is meant to be used in a\n> corporate environment. The core of my system is a Postgres database\n> that is used as a message header cache. The two (relevant) tables\n> being used are pasted into the end of this message. My problem is\n> that, as the messages table increases to tens of millions of rows,\n> pgsql slows down considerably. Even an operation like \"select\n> count(*) from messages\" can take minutes, with a totally idle system.\n> Postgres seems to be the most scalable Free database out there, so I\n> must be doing something wrong.\n\nselect count(*) from table is the worst case in PostgreSQL. (MVC\nsystems in general I guess).\n\nIf you really need to run count(*) you need to think about the\nrequired isolation level of these operations and make some aggregate\ntable yourself.\n\n(btw, select aggregate(*) from bigtable is something that no database\nlikes, it's just the degree of slowness that sometimes is different).\n\nFor scaling you should consider slony. Either hangout on #slony on\nFreenode.net or ask on the mailing list if you have questions.\n\n> As for the most common strategy of having a slower (more rows)\n> \"archival\" database and a smaller, faster \"live\" database, all the\n> clients in the company are using their normal corporate email server\n> for day-to-day email handling. The webmail is used for access email\n> that's no longer on the corporate server, so it's not really simple to\n> say which emails should be considered live and which are really\n> out-of-date.\n> \n> My postgres settings are entirely default with the exception of\n> shared_buffers being set to 40,000 and max_connections set to 400.\n> I'm not sure what the meaning of most of the other settings are, so I\n> haven't touched them. The machines running the database servers are\n> my home desktop (a dual-core athlon 3200+ with 2GB RAM and a 120GB\n> SATA II drive), and a production server with two dual-core Intel\n\nIntel chips => define more. There are Intel boxes known to have issues\nunder specific load scenarios with PostgreSQL (again specific\nversions). To make it funnier, these are really really hard to track\ndown ;)\n\n> chips, 4 GB RAM, and a RAID 5 array of SATA II drives on a 3Ware 9550\n> controller. Both machines are running Gentoo Linux with a 2.6.1x\n> kernel, and both exhibit significant performance degradation when I\n> start getting tens of millions of records.\n> \n> Any advice would be most appreciated. Thanks in advance!\n\nCluster. One box that applies changes, and multiple boxes that read\nthe data.\n\nIf you cannot afford multiple boxes from the start, design your\napplication still to work with two connections: one connection to a\nuser with read/write permissions, and one connecting to a user having\nonly select permissions => this way you can later easily add a\nloadbalancer to the mix, and use multiple postgres boxes for reading\nstuff.\n\nAndreas\n",
"msg_date": "Sun, 17 Dec 2006 04:25:27 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling concerns"
},
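Andreas's "make some aggregate table yourself" can be as small as a one-row counter kept up to date by triggers. A minimal sketch (table, function and trigger names are invented here; it assumes plpgsql is installed in the database, and the single counter row will serialise heavily concurrent writers):

CREATE TABLE message_count (n bigint NOT NULL);
INSERT INTO message_count SELECT count(*) FROM Messages;

CREATE OR REPLACE FUNCTION message_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE message_count SET n = n + 1;
        RETURN NEW;
    END IF;
    UPDATE message_count SET n = n - 1;   -- TG_OP = 'DELETE'
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER message_count_trig
    AFTER INSERT OR DELETE ON Messages
    FOR EACH ROW EXECUTE PROCEDURE message_count_trig();

-- "how many messages?" is now a one-row read:
SELECT n FROM message_count;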
{
"msg_contents": "Tsuraan,\n\n\"Select count(*) from bigtable\" is testing your disk drive speed up till\nabout 300MB/s, after which it is CPU limited in Postgres.\n\nMy guess is that your system has a very slow I/O configuration, either due\nto faulty driver/hardware or the configuration.\n\nThe first thing you should do is run a simple I/O test on your data\ndirectory - write a file twice the size of memory using dd like this:\n\n time bash -c \"dd if=/dev/zero of=data_directory/bigfile bs=8k count=(2 *\nmemory_size / 8192) && sync\"\n\n time dd if=data_directory/bigfile of=/dev/null bs=8k\n\nThen report the times here.\n\n- Luke\n\nOn 12/16/06 9:26 AM, \"tsuraan\" <[email protected]> wrote:\n\n> I'm writing a webmail-type application that is meant to be used in a\n> corporate environment. The core of my system is a Postgres database\n> that is used as a message header cache. The two (relevant) tables\n> being used are pasted into the end of this message. My problem is\n> that, as the messages table increases to tens of millions of rows,\n> pgsql slows down considerably. Even an operation like \"select\n> count(*) from messages\" can take minutes, with a totally idle system.\n> Postgres seems to be the most scalable Free database out there, so I\n> must be doing something wrong.\n> \n> As for the most common strategy of having a slower (more rows)\n> \"archival\" database and a smaller, faster \"live\" database, all the\n> clients in the company are using their normal corporate email server\n> for day-to-day email handling. The webmail is used for access email\n> that's no longer on the corporate server, so it's not really simple to\n> say which emails should be considered live and which are really\n> out-of-date.\n> \n> My postgres settings are entirely default with the exception of\n> shared_buffers being set to 40,000 and max_connections set to 400.\n> I'm not sure what the meaning of most of the other settings are, so I\n> haven't touched them. The machines running the database servers are\n> my home desktop (a dual-core athlon 3200+ with 2GB RAM and a 120GB\n> SATA II drive), and a production server with two dual-core Intel\n> chips, 4 GB RAM, and a RAID 5 array of SATA II drives on a 3Ware 9550\n> controller. Both machines are running Gentoo Linux with a 2.6.1x\n> kernel, and both exhibit significant performance degradation when I\n> start getting tens of millions of records.\n> \n> Any advice would be most appreciated. 
Thanks in advance!\n> \n> Tables:\n> \n> CREATE TABLE EmailAddresses (\n> emailid SERIAL PRIMARY KEY, -- The unique identifier of this address\n> name TEXT NOT NULL, -- The friendly name in the address\n> addrspec TEXT NOT NULL, -- The user@domain part of the address\n> UNIQUE(name, addrspec)\n> );\n> \n> and\n> \n> CREATE TABLE Messages (\n> -- Store info:\n> msgkey BIGSERIAL PRIMARY KEY, -- Unique identifier for a message\n> path TEXT NOT NULL, -- Where the message is on the file\n> system\n> inserted TIMESTAMP DEFAULT now(),-- When the message was fetched\n> -- Message Info:\n> msgid TEXT UNIQUE NOT NULL, -- Message's Message-Id field\n> mfrom INTEGER -- Who sent the message\n> REFERENCES EmailAddresses\n> DEFAULT NULL,\n> mdate TIMESTAMP DEFAULT NULL, -- Message \"date\" header field\n> replyto TEXT DEFAULT NULL, -- Message-ID of replied-to message\n> subject TEXT DEFAULT NULL, -- Message \"subject\" header field\n> numatch INTEGER DEFAULT NULL, -- Number of attachments\n> UNIQUE(path)\n> );\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n",
"msg_date": "Sat, 16 Dec 2006 23:18:25 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling concerns"
},
{
"msg_contents": "tsuraan <[email protected]> wrote:\n>\n> I'm writing a webmail-type application that is meant to be used in a\n> corporate environment. The core of my system is a Postgres database\n> that is used as a message header cache. The two (relevant) tables\n> being used are pasted into the end of this message. My problem is\n> that, as the messages table increases to tens of millions of rows,\n> pgsql slows down considerably. Even an operation like \"select\n> count(*) from messages\" can take minutes, with a totally idle system.\n> Postgres seems to be the most scalable Free database out there, so I\n> must be doing something wrong.\n> \n> As for the most common strategy of having a slower (more rows)\n> \"archival\" database and a smaller, faster \"live\" database, all the\n> clients in the company are using their normal corporate email server\n> for day-to-day email handling. The webmail is used for access email\n> that's no longer on the corporate server, so it's not really simple to\n> say which emails should be considered live and which are really\n> out-of-date.\n> \n> My postgres settings are entirely default with the exception of\n> shared_buffers being set to 40,000 and max_connections set to 400.\n> I'm not sure what the meaning of most of the other settings are, so I\n> haven't touched them. The machines running the database servers are\n> my home desktop (a dual-core athlon 3200+ with 2GB RAM and a 120GB\n> SATA II drive), and a production server with two dual-core Intel\n> chips, 4 GB RAM, and a RAID 5 array of SATA II drives on a 3Ware 9550\n> controller. Both machines are running Gentoo Linux with a 2.6.1x\n> kernel, and both exhibit significant performance degradation when I\n> start getting tens of millions of records.\n> \n> Any advice would be most appreciated. Thanks in advance!\n> \n> Tables:\n> \n> CREATE TABLE EmailAddresses (\n> emailid SERIAL PRIMARY KEY, -- The unique identifier of this address\n> name TEXT NOT NULL, -- The friendly name in the address\n> addrspec TEXT NOT NULL, -- The user@domain part of the address\n> UNIQUE(name, addrspec)\n> );\n> \n> and\n> \n> CREATE TABLE Messages (\n> -- Store info:\n> msgkey BIGSERIAL PRIMARY KEY, -- Unique identifier for a message\n> path TEXT NOT NULL, -- Where the message is on the file system\n> inserted TIMESTAMP DEFAULT now(),-- When the message was fetched\n> -- Message Info:\n> msgid TEXT UNIQUE NOT NULL, -- Message's Message-Id field\n> mfrom INTEGER -- Who sent the message\n> REFERENCES EmailAddresses\n> DEFAULT NULL,\n> mdate TIMESTAMP DEFAULT NULL, -- Message \"date\" header field\n> replyto TEXT DEFAULT NULL, -- Message-ID of replied-to message\n> subject TEXT DEFAULT NULL, -- Message \"subject\" header field\n> numatch INTEGER DEFAULT NULL, -- Number of attachments\n> UNIQUE(path)\n> );\n\nYou might benefit from adding some performance-specific changes to your\nschema.\n\nFor example, if you created a separate table for each emailid (call them\nMessages_1, Messages_2, etc). I expect that no one user will have an\nunbearable number of messages, thus each user will see reasonable\nperformance when working with their mailbox.\n\nYou can handle the management of this entirely in your application logic,\nbut it might be better of you wrote stored procedures to access message\ntables -- to make it easier on the application side.\n\n-Bill\n",
"msg_date": "Sun, 17 Dec 2006 10:44:29 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling concerns"
},
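For completeness, PostgreSQL 8.1's constraint exclusion gives a more structured version of Bill's per-user tables via inheritance. The posted Messages table has no per-user column, so the schema below is invented purely to show the mechanics:

CREATE TABLE mail (
    msgkey  bigserial,
    userid  integer NOT NULL,
    subject text
);

CREATE TABLE mail_u1 (CHECK (userid = 1)) INHERITS (mail);
CREATE TABLE mail_u2 (CHECK (userid = 2)) INHERITS (mail);

-- with constraint exclusion on, a query filtered by userid only has to
-- scan the matching child table:
SET constraint_exclusion = on;
EXPLAIN SELECT count(*) FROM mail WHERE userid = 1;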
{
"msg_contents": "> To quickly find out a subset of slow queries on your production system, you\n> can use the pgfouine tool:\n> http://pgfouine.projects.postgresql.org/\n>\n> If you then want to make some measurements of PostgreSQL performances with\n> some different settings and compare them, consider using the tsung tool (and\n> may be tsung-ploter companion tool to graph several benchs onto the same\n> charts for comparing purpose):\n> http://pgfouine.projects.postgresql.org/tsung.html\n> http://tsung.erlang-projects.org/\n> http://debian.dalibo.org/unstable/\n\nThanks for all the links! I'll check these out when I get back to work.\n",
"msg_date": "Sun, 17 Dec 2006 15:34:38 -0600",
"msg_from": "tsuraan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scaling concerns"
},
{
"msg_contents": "> For scaling you should consider slony. Either hangout on #slony on\n> Freenode.net or ask on the mailing list if you have questions.\n\nFor some reason I had thought slony was really immature, but it\nactually looks really usable.\n\n> Intel chips => define more. There are Intel boxes known to have issues\n> under specific load scenarios with PostgreSQL (again specific\n> versions). To make it funnier, these are really really hard to track\n> down ;)\n\nI can't access the Intel machine right now, but it's the current\nfastest Intel dual-core. I'll figure it out tomorrow.\n\n> Cluster. One box that applies changes, and multiple boxes that read\n> the data.\n>\n> If you cannot afford multiple boxes from the start, design your\n> application still to work with two connections: one connection to a\n> user with read/write permissions, and one connecting to a user having\n> only select permissions => this way you can later easily add a\n> loadbalancer to the mix, and use multiple postgres boxes for reading\n> stuff.\n\nI think I'll see what I can do for that. Is there an aggregation-type\nof clustering for Postgres? I'm thinking of something where database\ninformation is divided between machines, rather than shared among them\nas in Slony. Sort of like the difference between RAID0 and RAID1.\nSince my application is constantly adding to the database (far more is\nwritten than is ever read), it would be nice to have a multiple-write,\nsingle reader solution, if such a thing exists.\n",
"msg_date": "Sun, 17 Dec 2006 15:59:11 -0600",
"msg_from": "tsuraan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scaling concerns"
},
{
"msg_contents": "On Sun, 17 Dec 2006, tsuraan wrote:\n\n> Since my application is constantly adding to the database (far more is\n> written than is ever read), it would be nice to have a multiple-write,\n> single reader solution, if such a thing exists.\n\nYou seem to be working from the assumption that you have a scaling issue, \nand that therefore you should be researching how to scale your app to more \nmachines. I'm not so sure you do; I would suggest that you drop that \nentire idea for now, spend some time doing basic performance tuning for \nPostgres instead, and only after then consider adding more machines. It \ndoes little good to add more incorrectly setup servers to the mix, and \nsolving the multiple-write problem is hard. Let's take a quick tour \nthrough your earlier messages:\n\n> My postgres settings are entirely default with the exception of \n> shared_buffers being set to 40,000 and max_connections set to 400. I'm \n> not sure what the meaning of most of the other settings are, so I \n> haven't touched them.\n\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html is a good \nintro to the various parameters you might set, with some valuable hints on \nthe effective range you should be considering. I'd suggest you use that \nto identify the most likely things to increase, then read the manuals at \nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config.html for \nmore detail on what you're actually adjusting. To get you started, \nconsider increasing effective_cache_size, checkpoint_segments, and \nwork_mem; those are three whose defaults are very low for your \napplication, relative to your hardware. The thought of how your poor \ndatabase is suffering when trying to manage a heavy write load with the \ndefault checkpoint_segments in particular makes me sad, especially when we \nadd:\n\n> The machines running the database servers are my home desktop (a \n> dual-core athlon 3200+ with 2GB RAM and a 120GB SATA II drive), and a \n> production server with two dual-core Intel chips, 4 GB RAM, and a RAID 5 \n> array of SATA II drives on a 3Ware 9550 controller.\n\nOne big RAID 5 volume is probably the worst setup available for what \nyou're doing. Luke already gave you a suggestion for testing write speed; \nyou should run that test, but I wouldn't expect happy numbers there. You \nmight be able to get by with the main database running like that, but \nthink about what you'd need to do to add more disks (or reorganize the \nones you have) so that you could dedicate a pair to a RAID-1 volume for \nholding the WAL. If you're limited by write performance, I think you'd \nfind adding a separate WAL drive set a dramatically more productive \nupgrade than trying to split the app to another machine. Try it on your \nhome machine first; that's a cheap upgrade, to add another SATA drive to \nthere, and you should see a marked improvement (especially once you get \nthe server parameters set to more appropriate values).\n\nI'd also suggest that you'd probably be able to get more help from people \nhere if you posted a snippet of output from vmstat and iostat -x with a \nlow interval (say 5 seconds) during a period where the machine was busy; \nthat's helpful for figuring out where the bottleneck on your machine \nreally is.\n\n> Trying to do \"INSERT INTO Messages(path, msgid) SELECT (path, msgid) \n> FROM tmpMessages\" took a really long time before psql died with an \n> out-of-memory error.\n\nDo you have the exact text of the error? 
I suspect you're falling victim \nto the default parameters being far too low here as well, but without the \nerror it's hard to know exactly which.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sun, 17 Dec 2006 19:53:10 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling concerns"
},
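Before editing postgresql.conf it is worth recording what the server is currently running with; every parameter Greg names can be inspected from psql. (The changes themselves go in postgresql.conf: shared_buffers needs a restart, checkpoint_segments only a reload, and effective_cache_size/work_mem can even be set per session.)

SHOW effective_cache_size;   -- planner's idea of the OS cache; tiny by default
SHOW checkpoint_segments;    -- defaults to 3
SHOW work_mem;               -- per sort/hash memory; small by default
SHOW shared_buffers;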
{
"msg_contents": "> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html is a good\n> intro to the various parameters you might set, with some valuable hints on\n> the effective range you should be considering. I'd suggest you use that\n> to identify the most likely things to increase, then read the manuals at\n> http://www.postgresql.org/docs/8.2/interactive/runtime-config.html for\n> more detail on what you're actually adjusting. To get you started,\n> consider increasing effective_cache_size, checkpoint_segments, and\n> work_mem; those are three whose defaults are very low for your\n\nI'll play with these - I've definitely seen the (for me) confusing use\nof seqscan rather than index scan that the annotated page says is\ncaused by too little effective_cache_size. That first link is really\ngreat; I can't believe I've never seen it before.\n\n> One big RAID 5 volume is probably the worst setup available for what\n> you're doing. Luke already gave you a suggestion for testing write speed;\n> you should run that test, but I wouldn't expect happy numbers there. You\n\nI've run dstat with a really busy postgres and seen 94 MB read and\nwrite simultaneously for a few seconds. I think our RAID cards have\n16MB of RAM, so unless it was really freakish, I probably wasn't\nseeing all cache access. I'll try the some tests with dd tomorrow\nwhen I get to work.\n\n> might be able to get by with the main database running like that, but\n> think about what you'd need to do to add more disks (or reorganize the\n> ones you have) so that you could dedicate a pair to a RAID-1 volume for\n> holding the WAL. If you're limited by write performance, I think you'd\n> find adding a separate WAL drive set a dramatically more productive\n> upgrade than trying to split the app to another machine. Try it on your\n> home machine first; that's a cheap upgrade, to add another SATA drive to\n> there, and you should see a marked improvement (especially once you get\n> the server parameters set to more appropriate values).\n\nIs the WAL at the same location as the xlog (transaction log?)? The\ncheckpoint_segments doc says increasing that value is really only\nuseful if the xlog is separate from the data, so do I put both WAL and\nxlog on the separate drive, or is that automatic (or redundant; I\ndon't know what I'm talking about...)?\n\n> I'd also suggest that you'd probably be able to get more help from people\n> here if you posted a snippet of output from vmstat and iostat -x with a\n> low interval (say 5 seconds) during a period where the machine was busy;\n> that's helpful for figuring out where the bottleneck on your machine\n> really is.\n\nI'll try to stress a machine and get some real stats soon.\n\n> Do you have the exact text of the error? I suspect you're falling victim\n> to the default parameters being far too low here as well, but without the\n> error it's hard to know exactly which.\n\nWell, I tried to repeat it on my home machine with 20 million rows,\nand it worked fine in about two minutes. I'll have to see what's\ngoing on on that other system...\n\nThanks for the help!\n",
"msg_date": "Sun, 17 Dec 2006 22:36:50 -0600",
"msg_from": "tsuraan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scaling concerns"
},
{
"msg_contents": "tsuraan <[email protected]> writes:\n> Is the WAL at the same location as the xlog (transaction log?)?\n\nSame thing.\n\n> The checkpoint_segments doc says increasing that value is really only\n> useful if the xlog is separate from the data,\n\nDunno where you read that, but it's utter bilge. If you've got a\nwrite-intensive workload, you want to crank checkpoint_segments as high\nas you can stand. With the default settings on a modern machine it's\nnot hard at all to push it into checkpointing every dozen or seconds,\nwhich will completely kill performance. (Disk space for pg_xlog/ and\npotential delay during crash restart are the only negatives here. If\nyou are willing to push the average inter-checkpoint interval past five\nminutes then you need to increase checkpoint_timeout too.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Dec 2006 23:59:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling concerns "
}
] |
[
{
"msg_contents": "I have a table similar to this:\n\nCREATE TABLE event_resources (\n event_resource_id serial NOT NULL,\n event_id integer NOT NULL,\n resource_id integer NOT NULL,\n start_date timestamptz NOT NULL,\n end_date timestamptz NOT NULL,\n\tCONSTRAINT event_resources_pkey PRIMARY KEY (event_resource_id)\n);\n\nWhere the same resource can be added to an event multiple times. \nSince the table spans a few years, any day queried should\nreturn at most 0.1% of the table, and seems perfect for indexes. So I\nadd these:\n\nCREATE INDEX er_idx1 ON event_resources (start_date);\nCREATE INDEX er_idx2 ON event_resources (end_date);\n\nOne query I need to perform is \"All event resources that start or end\non a particular day\". The first thing that comes to mind is this:\n\nselect *\nfrom event_resources er\nwhere er.start_date::date = $1::date or er.end_date::date = $1::date\n\nThis is very slow. Pg chooses a sequential scan. (I am running vacuum\nand analyze) Shouldn't Pg be able to use an index here?\n\nI've tried creating function indexes using cast, but Pg returns this\nerror message:\n\nERROR: functions in index expression must be marked IMMUTABLE\n\nWhich I assume is related to timezones and daylight saving issues in\nconverting\na timestamptz into a plain date.\n\nThis form strangely won't use an index either:\n\nselect *\nfrom event_resources er\nwhere (er.start_date, er.end_date) overlaps ($1::date, $1::date+1)\n\nThis is the only query form I've found that will use an index:\n\nselect *\nfrom event_resources er\nwhere (er.start_date >= $1::date and er.start_date < ($1::date+1))\nor (er.end_date >= $1::date and er.end_date < ($1::date+1))\n\nI know it's not exactly the same as the overlaps method, but since this\nworks\nI would expect OVERLAPS to work as well. I prefer overlaps because it's\nclean\nand simple, self documenting.\n\nAnother (similar) query I need to perform is \"All event resources that\noverlap a given\ntime range\". Seems tailor-made for OVERLAPS:\n\nselect *\nfrom event_resources er\nwhere (er.start_date, er.end_date) overlaps ($1::timestamptz,\n$2::timestamptz)\n\nAgain. can't get this to use an index. I have to use this again:\n\nselect *\nfrom event_resources er\nwhere (er.start_date >= $1::timestamptz and er.start_date <\n$2::timestamptz)\nor (er.end_date >= $1::timestamptz and er.end_date < $2::timestamptz)\n\nWhat am I doing wrong? This is Pg 8.1.2 on RHEL 4.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOptimizing timestamp queries? Inefficient Overlaps?\n\n\n\n\nI have a table similar to this:\n\nCREATE TABLE event_resources (\n event_resource_id serial NOT NULL,\n event_id integer NOT NULL,\n resource_id integer NOT NULL,\n start_date timestamptz NOT NULL,\n end_date timestamptz NOT NULL,\n CONSTRAINT event_resources_pkey PRIMARY KEY (event_resource_id)\n);\n\nWhere the same resource can be added to an event multiple times. \nSince the table spans a few years, any day queried should\nreturn at most 0.1% of the table, and seems perfect for indexes. So I add these:\n\nCREATE INDEX er_idx1 ON event_resources (start_date);\nCREATE INDEX er_idx2 ON event_resources (end_date);\n\nOne query I need to perform is \"All event resources that start or end\non a particular day\". The first thing that comes to mind is this:\n\nselect *\nfrom event_resources er\nwhere er.start_date::date = $1::date or er.end_date::date = $1::date\n\nThis is very slow… Pg chooses a sequential scan. 
(I am running vacuum\nand analyze) Shouldn't Pg be able to use an index here?\n\nI've tried creating function indexes using cast, but Pg returns this error message:\n\nERROR: functions in index expression must be marked IMMUTABLE\n\nWhich I assume is related to timezones and daylight saving issues in converting\na timestamptz into a plain date.\n\nThis form strangely won't use an index either:\n\nselect *\nfrom event_resources er\nwhere (er.start_date, er.end_date) overlaps ($1::date, $1::date+1)\n\nThis is the only query form I've found that will use an index:\n\nselect *\nfrom event_resources er\nwhere (er.start_date >= $1::date and er.start_date < ($1::date+1))\nor (er.end_date >= $1::date and er.end_date < ($1::date+1))\n\nI know it's not exactly the same as the overlaps method, but since this works\nI would expect OVERLAPS to work as well. I prefer overlaps because it's clean\nand simple, self documenting.\n\nAnother (similar) query I need to perform is \"All event resources that overlap a given\ntime range\". Seems tailor-made for OVERLAPS:\n\nselect *\nfrom event_resources er\nwhere (er.start_date, er.end_date) overlaps ($1::timestamptz, $2::timestamptz)\n\nAgain… can't get this to use an index. I have to use this again:\n\nselect *\nfrom event_resources er\nwhere (er.start_date >= $1::timestamptz and er.start_date < $2::timestamptz)\nor (er.end_date >= $1::timestamptz and er.end_date < $2::timestamptz)\n\nWhat am I doing wrong? This is Pg 8.1.2 on RHEL 4.",
"msg_date": "Mon, 18 Dec 2006 00:07:46 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing timestamp queries? Inefficient Overlaps?"
},
{
"msg_contents": "\"Adam Rich\" <[email protected]> writes:\n> I have a table similar to this:\n\n> CREATE TABLE event_resources (\n> event_resource_id serial NOT NULL,\n> event_id integer NOT NULL,\n> resource_id integer NOT NULL,\n> start_date timestamptz NOT NULL,\n> end_date timestamptz NOT NULL,\n> \tCONSTRAINT event_resources_pkey PRIMARY KEY (event_resource_id)\n> );\n> CREATE INDEX er_idx1 ON event_resources (start_date);\n> CREATE INDEX er_idx2 ON event_resources (end_date);\n\n> select *\n> from event_resources er\n> where er.start_date::date = $1::date or er.end_date::date = $1::date\n\n> This is very slow. Pg chooses a sequential scan. (I am running vacuum\n> and analyze) Shouldn't Pg be able to use an index here?\n\nNo, unless you were to create the indexes on start_date::date and\nend_date::date ...\n\n> I've tried creating function indexes using cast, but Pg returns this\n> error message:\n> ERROR: functions in index expression must be marked IMMUTABLE\n\n... which you can't do because the cast from timestamptz to date is\ndependent on the current timezone setting.\n\nIf the start and end are really intended to be accurate only to the\nday, as the column names seem to suggest, why didn't you declare them\nas date to start with?\n\n> select *\n> from event_resources er\n> where (er.start_date >= $1::date and er.start_date < ($1::date+1))\n> or (er.end_date >= $1::date and er.end_date < ($1::date+1))\n\n> I know it's not exactly the same as the overlaps method, but since this\n> works I would expect OVERLAPS to work as well.\n\nSorry, but no -- read the SQL spec for OVERLAPS sometime. It's not even\nclose to being the same, and with all the weird special cases for nulls,\nit's just about unoptimizable :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2006 01:31:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing timestamp queries? Inefficient Overlaps? "
}
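If day precision really is enough, Tom's suggestion translates to something like the sketch below. The USING casts follow the session's TimeZone setting, so verify the result on a copy of the table first; the existing er_idx1/er_idx2 indexes are rebuilt on the new type automatically, and the original equality form can then use them:

ALTER TABLE event_resources
    ALTER COLUMN start_date TYPE date USING start_date::date,
    ALTER COLUMN end_date   TYPE date USING end_date::date;

SELECT *
FROM event_resources er
WHERE er.start_date = $1::date OR er.end_date = $1::date;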
] |
[
{
"msg_contents": "Hi all,\n\nA vacuum full command logs the message:\n... LOG: transaction ID wrap limit is 1073822617, limited by database \"A\"\n\nSometimes ago, the vacuum full logged:\n... LOG: transaction ID wrap limit is 2147484148, limited by database \"A\"\n\nWhat causes that difference of the limit ? Should I set or optimize \nsomething on my Postgresql server ?\n\nTIA,\nSabin \n\n\n",
"msg_date": "Mon, 18 Dec 2006 11:26:38 +0200",
"msg_from": "\"Sabin Coanda\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "transaction ID wrap limit"
},
{
"msg_contents": "\"Sabin Coanda\" <[email protected]> writes:\n> A vacuum full command logs the message:\n> ... LOG: transaction ID wrap limit is 1073822617, limited by database \"A\"\n\n> Sometimes ago, the vacuum full logged:\n> ... LOG: transaction ID wrap limit is 2147484148, limited by database \"A\"\n\n> What causes that difference of the limit ?\n\nThe limit is *supposed* to advance. The fact that it jumped this much\nin one step suggests you're not vacuuming often enough :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2006 10:51:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transaction ID wrap limit "
}
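For reference (not something Tom spells out here), the distance left before wraparound can be checked per database; a figure climbing toward two billion means routine vacuums are overdue:

SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;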
] |
[
{
"msg_contents": "I have a database that has 3 tables with a relatively small number of\nrecords in each. (see schema/counts below). These 3 tables are loaded\nwhen the db is created and there are never any updates or deletes on the\n3 tables. This database does have many other tables.\n\nds_tables 132 rows, ds_types 281 rows, ds_columns 2191 rows\n\nWhen I run the query below on a new database where all the other tables\nare empty except for the 3 tables that contain static information, the\nquery time is ~200ms. When I run the same query on a production database\nthat has many records in the other tables (3 static tables the same),\nthe query took ~7 seconds. When I run it again on a test database with\nmore data then the production database, but with a different\ndistribution of data, ~1.2 seconds.\n\nI have seen this query take as much as 25 seconds because all seq scans\nwhere used. Vacuum full analyze and reindex on ONLY the 3 static tables\nreduced the query time.\n\nAll queries where run on the same computer with the same\npostgresql.conf. i.e no configuration changes.\n\nI am trying to find out what causes the query on production database to\nbe so much slower. The query is changing (index vs sequencial scan) when\nthe data remains the same. \n\nWhy would the existence of data in other tables affect the query\nperformance on the 3 static tables?\n\nWhy does vacuum full and reindex make a difference if the 3 tables are\nnever updated or records deleted?\n\nUsing postgresql 8.1.3.\n\n<<QUERY>>\nexplain analyze select ds_tables.name as table_name, ds_columns.name as\ncolumn_name from ds_tables left join ds_columns on ds_tables.classid =\nds_columns.classid left join ds_types on ds_columns.typeid =\nds_types.typeid where ds_types.name like 'OMWeakObjRef<%>' and\nlower(ds_columns.name) not in\n('owner','ownergroup','generatedby','originator','extendedby','audituser\n','settingclassdef','itemowner','srcobj','dstobj','srcweakobj','dstweako\nbj','notificationcreateuser','metadataowner','rpcdef','settingdef','sett\ningparent','taskacceptuser','workobj','testref') and\nlower(ds_tables.name) not in\n('ds_omdatatest','ds_ommessage','ds_omusersetting','ds_omloginsession','\nds_omclassdef','ds_omuser','ds_omusergroupsetting','ds_omtestobject','ds\n_omhomedirectory')and lower(ds_tables.name) like 'ds_om%';\n\n<<<RESULT USING NEW DATABASE>>\n Nested Loop (cost=34.48..73.15 rows=1 width=64) (actual\ntime=0.897..42.562 rows=55 loops=1)\n -> Nested Loop (cost=34.48..61.38 rows=2 width=48) (actual\ntime=0.782..41.378 rows=61 loops=1)\n -> Bitmap Heap Scan on ds_types (cost=2.02..9.63 rows=1\nwidth=16) (actual time=0.160..0.707 rows=130 loops=1)\n Filter: (name ~~ 'OMWeakObjRef<%>'::text)\n -> Bitmap Index Scan on ds_types_name_key\n(cost=0.00..2.02 rows=4 width=0) (actual time=0.124..0.124 rows=130\nloops=1)\n Index Cond: ((name >= 'OMWeakObjRef<'::text) AND\n(name < 'OMWeakObjRef='::text))\n -> Bitmap Heap Scan on ds_columns (cost=32.46..51.64 rows=9\nwidth=64) (actual time=0.301..0.307 rows=0 loops=130)\n Recheck Cond: (ds_columns.typeid = \"outer\".typeid)\n Filter: ((lower(name) <> 'owner'::text) AND (lower(name)\n<> 'ownergroup'::text) AND (lower(name) <> 'generatedby'::text) AND\n(lower(name) <> 'originator'::text) AND (lower(name) <>\n'extendedby'::text) AND (lower(name) <> 'audituser'::text) AND\n(lower(name) <> 'settingclassdef'::text) AND (lower(name) <>\n'itemowner'::text) AND (lower(name) <> 'srcobj'::text) AND (lower(name)\n<> 'dstobj'::text) AND (lower(name) <> 'srcweakobj'::text) 
AND\n(lower(name) <> 'dstweakobj'::text) AND (lower(name) <>\n'notificationcreateuser'::text) AND (lower(name) <>\n'metadataowner'::text) AND (lower(name) <> 'rpcdef'::text) AND\n(lower(name) <> 'settingdef'::text) AND (lower(name) <>\n'settingparent'::text) AND (lower(name) <> 'taskacceptuser'::text) AND\n(lower(name) <> 'workobj'::text) AND (lower(name) <> 'testref'::text))\n -> Bitmap Index Scan on ds_columns_pkey\n(cost=0.00..32.46 rows=9 width=0) (actual time=0.293..0.293 rows=3\nloops=130)\n Index Cond: (ds_columns.typeid = \"outer\".typeid)\n -> Index Scan using ds_tables_pkey on ds_tables (cost=0.00..5.87\nrows=1 width=48) (actual time=0.012..0.014 rows=1 loops=61)\n Index Cond: (ds_tables.classid = \"outer\".classid)\n Filter: ((lower(name) <> 'ds_omdatatest'::text) AND\n(lower(name) <> 'ds_ommessage'::text) AND (lower(name) <>\n'ds_omusersetting'::text) AND (lower(name) <> 'ds_omloginsession'::text)\nAND (lower(name) <> 'ds_omclassdef'::text) AND (lower(name) <>\n'ds_omuser'::text) AND (lower(name) <> 'ds_omusergroupsetting'::text)\nAND (lower(name) <> 'ds_omtestobject'::text) AND (lower(name) <>\n'ds_omhomedirectory'::text) AND (lower(name) ~~ 'ds_om%'::text))\n Total runtime: 191.034 ms\n(15 rows)\n\n<<<RESULT USING PRODUCTION DATABASE>>\n Nested Loop (cost=27.67..69.70 rows=1 width=46) (actual\ntime=12.433..6905.152 rows=55 loops=1)\n Join Filter: (\"inner\".typeid = \"outer\".typeid)\n -> Index Scan using ds_types_name_key on ds_types (cost=0.00..5.57\nrows=1 width=16) (actual time=0.062..1.209 rows=130 loops=1)\n Index Cond: ((name >= 'OMWeakObjRef<'::text) AND (name <\n'OMWeakObjRef='::text))\n Filter: (name ~~ 'OMWeakObjRef<%>'::text)\n -> Nested Loop (cost=27.67..63.94 rows=15 width=62) (actual\ntime=0.313..51.255 rows=1431 loops=130)\n -> Seq Scan on ds_tables (cost=0.00..9.92 rows=1 width=48)\n(actual time=0.007..0.718 rows=121 loops=130)\n Filter: ((lower(name) <> 'ds_omdatatest'::text) AND\n(lower(name) <> 'ds_ommessage'::text) AND (lower(name) <>\n'ds_omusersetting'::text) AND(lower(name) <> 'ds_omloginsession'::text)\nAND (lower(name) <> 'ds_omclassdef'::text) AND (lower(name) <>\n'ds_omuser'::text) AND (lower(name) <> 'ds_omusergroupsetting'::text)\nAND (lower(name) <> 'ds_omtestobject'::text) AND (lower(name) <>\n'ds_omhomedirectory'::text) AND (lower(name) ~~ 'ds_om%'::text))\n -> Bitmap Heap Scan on ds_columns (cost=27.67..53.81 rows=17\nwidth=46) (actual time=0.305..0.384 rows=12 loops=15730)\n Recheck Cond: (\"outer\".classid = ds_columns.classid)\n Filter: ((lower(name) <> 'owner'::text) AND (lower(name)\n<> 'ownergroup'::text) AND (lower(name) <> 'generatedby'::text) AND\n(lower(name) <> 'originator'::text) AND (lower(name) <>\n'extendedby'::text) AND (lower(name) <> 'audituser'::text) AND\n(lower(name) <> 'settingclassdef'::text) AND (lower(name) <>\n'itemowner'::text) AND (lower(name) <> 'srcobj'::text) AND (lower(name)\n<> 'dstobj'::text) AND (lower(name) <> 'srcweakobj'::text) AND\n(lower(name) <> 'dstweakobj'::text) AND (lower(name) <>\n'notificationcreateuser'::text) AND (lower(name) <>\n'metadataowner'::text) AND (lower(name) <> 'rpcdef'::text) AND\n(lower(name) <> 'settingdef'::text) AND (lower(name) <>\n'settingparent'::text) AND (lower(name) <> 'taskacceptuser'::text) AND\n(lower(name) <> 'workobj'::text) AND (lower(name) <> 'testref'::text))\n -> Bitmap Index Scan on ds_columns_pkey\n(cost=0.00..27.67 rows=17 width=0) (actual time=0.291..0.291 rows=16\nloops=15730)\n Index Cond: (\"outer\".classid = ds_columns.classid)\n Total runtime: 7064.142 
ms\n(14 rows)\n\nNote: In the explain analyze result there is small difference in result\nbetween the new db and production db because the new db initially has a\nbit more info due to changes made after the production db was created).\n\n<<<RESULT USING TEST DATABASE>>\n(1 row) Nested Loop (cost=119.82..127.32 rows=1 width=46) (actual\ntime=76.067..574.193 rows=55 loops=1)\n Join Filter: (\"inner\".typeid = \"outer\".typeid)\n -> Index Scan using ds_types_name_key on ds_types (cost=0.00..6.51\nrows=22 width=16) (actual time=25.948..26.763 rows=130 loops=1)\n Index Cond: ((name >= 'OMWeakObjRef<'::text) AND (name <\n'OMWeakObjRef='::text))\n Filter: (name ~~ 'OMWeakObjRef<%>'::text)\n -> Materialize (cost=119.82..119.84 rows=2 width=62) (actual\ntime=0.288..2.411 rows=1431 loops=130)\n -> Nested Loop (cost=33.67..119.82 rows=2 width=62) (actual\ntime=37.288..94.879 rows=1431 loops=1)\n -> Seq Scan on ds_tables (cost=0.00..59.80 rows=1\nwidth=48) (actual time=15.208..15.968 rows=121 loops=1)\n Filter: ((lower(name) <> 'ds_omdatatest'::text) AND\n(lower(name) <> 'ds_ommessage'::text) AND (lower(name) <>\n'ds_omusersetting'::text) AND (lower(name) <> 'ds_omloginsession'::text)\nAND (lower(name) <> 'ds_omclassdef'::text) AND (lower(name) <>\n'ds_omuser'::text) AND (lower(name) <> 'ds_omusergroupsetting'::text)\nAND (lower(name) <> 'ds_omtestobject'::text) AND (lower(name) <>\n'ds_omhomedirectory'::text) AND (lower(name) ~~ 'ds_om%'::text))\n -> Bitmap Heap Scan on ds_columns (cost=33.67..59.81\nrows=17 width=46) (actual time=0.502..0.617 rows=12 loops=121)\n Recheck Cond: (\"outer\".classid =\nds_columns.classid)\n Filter: ((lower(name) <> 'owner'::text) AND\n(lower(name) <> 'ownergroup'::text) AND (lower(name) <>\n'generatedby'::text) AND (lower(name) <> 'originator'::text) AND\n(lower(name) <> 'extendedby'::text) AND (lower(name) <>\n'audituser'::text) AND (lower(name) <> 'settingclassdef'::text)\nAND(lower(name) <> 'itemowner'::text) AND (lower(name) <>\n'srcobj'::text) AND (lower(name) <> 'dstobj'::text) AND (lower(name) <>\n'srcweakobj'::text) AND (lower(name) <> 'dstweakobj'::text) AND\n(lower(name) <> 'notificationcreateuser'::text) AND (lower(name) <>\n'metadataowner'::text) AND (lower(name) <> 'rpcdef'::text) AND\n(lower(name) <> 'settingdef'::text) AND (lower(name) <>\n'settingparent'::text) AND (lower(name) <> 'taskacceptuser'::text) AND\n(lower(name) <>'workobj'::text) AND (lower(name) <> 'testref'::text))\n -> Bitmap Index Scan on ds_columns_pkey\n(cost=0.00..33.67 rows=17 width=0) (actual time=0.313..0.313 rows=16\nloops=121)\n Index Cond: (\"outer\".classid =\nds_columns.classid)\n Total runtime: 1216.834 ms\n\n<<SCHEMA>>\nTable \"public.ds_tables\"\n Column | Type | Modifiers\n---------------+---------+-----------\n classid | ds_uuid | not null\n name | text | not null\n parentclassid | ds_uuid |\nIndexes:\n \"ds_tables_pkey\" PRIMARY KEY, btree (classid)\n \"ds_tables_name_key\" UNIQUE, btree (name)\nForeign-key constraints:\n \"ds_tables_parentclassid_fkey\" FOREIGN KEY (parentclassid)\nREFERENCES ds_tables(classid)\n\nTable \"public.ds_types\"\n Column | Type | Modifiers\n-----------------+---------+-----------\n name | text | not null\n typeid | ds_uuid | not null\n internalsqltype | text | not null\nIndexes:\n \"ds_types_pkey\" PRIMARY KEY, btree (typeid)\n \"ds_types_name_key\" UNIQUE, btree (name)\n\nTable \"public.ds_columns\"\n Column | Type | Modifiers\n------------+---------+-----------\n propertyid | ds_uuid | not null\n classid | ds_uuid | not null\n name | text | 
not null\n typeid | ds_uuid | not null\nIndexes:\n \"ds_columns_pkey\" PRIMARY KEY, btree (propertyid, classid, typeid)\nForeign-key constraints:\n \"ds_columns_classid_fkey\" FOREIGN KEY (classid) REFERENCES\nds_tables(classid)\n \"ds_columns_typeid_fkey\" FOREIGN KEY (typeid) REFERENCES\nds_types(typeid)\n\n\n\n\n\n\n\nQuery plan changing when queried data does not\n\n\n\nI have a database that has 3 tables with a relatively small number of records in each. (see schema/counts below). These 3 tables are loaded when the db is created and there are never any updates or deletes on the 3 tables. This database does have many other tables.\nds_tables 132 rows, ds_types 281 rows, ds_columns 2191 rows\n\nWhen I run the query below on a new database where all the other tables are empty except for the 3 tables that contain static information, the query time is ~200ms. When I run the same query on a production database that has many records in the other tables (3 static tables the same), the query took ~7 seconds. When I run it again on a test database with more data then the production database, but with a different distribution of data, ~1.2 seconds.\nI have seen this query take as much as 25 seconds because all seq scans where used. Vacuum full analyze and reindex on ONLY the 3 static tables reduced the query time.\nAll queries where run on the same computer with the same postgresql.conf. i.e no configuration changes.\n\nI am trying to find out what causes the query on production database to be so much slower. The query is changing (index vs sequencial scan) when the data remains the same. \nWhy would the existence of data in other tables affect the query performance on the 3 static tables?\n\nWhy does vacuum full and reindex make a difference if the 3 tables are never updated or records deleted?\n\nUsing postgresql 8.1.3.\n\n<<QUERY>>\nexplain analyze select ds_tables.name as table_name, ds_columns.name as column_name from ds_tables left join ds_columns on ds_tables.classid = ds_columns.classid left join ds_types on ds_columns.typeid = ds_types.typeid where ds_types.name like 'OMWeakObjRef<%>' and lower(ds_columns.name) not in ('owner','ownergroup','generatedby','originator','extendedby','audituser','settingclassdef','itemowner','srcobj','dstobj','srcweakobj','dstweakobj','notificationcreateuser','metadataowner','rpcdef','settingdef','settingparent','taskacceptuser','workobj','testref') and lower(ds_tables.name) not in ('ds_omdatatest','ds_ommessage','ds_omusersetting','ds_omloginsession','ds_omclassdef','ds_omuser','ds_omusergroupsetting','ds_omtestobject','ds_omhomedirectory')and lower(ds_tables.name) like 'ds_om%';\n<<<RESULT USING NEW DATABASE>>\n Nested Loop (cost=34.48..73.15 rows=1 width=64) (actual time=0.897..42.562 rows=55 loops=1)\n -> Nested Loop (cost=34.48..61.38 rows=2 width=48) (actual time=0.782..41.378 rows=61 loops=1)\n -> Bitmap Heap Scan on ds_types (cost=2.02..9.63 rows=1 width=16) (actual time=0.160..0.707 rows=130 loops=1)\n Filter: (name ~~ 'OMWeakObjRef<%>'::text)\n -> Bitmap Index Scan on ds_types_name_key (cost=0.00..2.02 rows=4 width=0) (actual time=0.124..0.124 rows=130 loops=1)\n Index Cond: ((name >= 'OMWeakObjRef<'::text) AND (name < 'OMWeakObjRef='::text))\n -> Bitmap Heap Scan on ds_columns (cost=32.46..51.64 rows=9 width=64) (actual time=0.301..0.307 rows=0 loops=130)\n Recheck Cond: (ds_columns.typeid = \"outer\".typeid)\n Filter: ((lower(name) <> 'owner'::text) AND (lower(name) <> 'ownergroup'::text) AND (lower(name) <> 'generatedby'::text) AND (lower(name) 
<> 'originator'::text) AND (lower(name) <> 'extendedby'::text) AND (lower(name) <> 'audituser'::text) AND (lower(name) <> 'settingclassdef'::text) AND (lower(name) <> 'itemowner'::text) AND (lower(name) <> 'srcobj'::text) AND (lower(name) <> 'dstobj'::text) AND (lower(name) <> 'srcweakobj'::text) AND (lower(name) <> 'dstweakobj'::text) AND (lower(name) <> 'notificationcreateuser'::text) AND (lower(name) <> 'metadataowner'::text) AND (lower(name) <> 'rpcdef'::text) AND (lower(name) <> 'settingdef'::text) AND (lower(name) <> 'settingparent'::text) AND (lower(name) <> 'taskacceptuser'::text) AND (lower(name) <> 'workobj'::text) AND (lower(name) <> 'testref'::text))\n -> Bitmap Index Scan on ds_columns_pkey (cost=0.00..32.46 rows=9 width=0) (actual time=0.293..0.293 rows=3 loops=130)\n Index Cond: (ds_columns.typeid = \"outer\".typeid)\n -> Index Scan using ds_tables_pkey on ds_tables (cost=0.00..5.87 rows=1 width=48) (actual time=0.012..0.014 rows=1 loops=61)\n Index Cond: (ds_tables.classid = \"outer\".classid)\n Filter: ((lower(name) <> 'ds_omdatatest'::text) AND (lower(name) <> 'ds_ommessage'::text) AND (lower(name) <> 'ds_omusersetting'::text) AND (lower(name) <> 'ds_omloginsession'::text) AND (lower(name) <> 'ds_omclassdef'::text) AND (lower(name) <> 'ds_omuser'::text) AND (lower(name) <> 'ds_omusergroupsetting'::text) AND (lower(name) <> 'ds_omtestobject'::text) AND (lower(name) <> 'ds_omhomedirectory'::text) AND (lower(name) ~~ 'ds_om%'::text))\n Total runtime: 191.034 ms\n(15 rows)\n\n<<<RESULT USING PRODUCTION DATABASE>>\n Nested Loop (cost=27.67..69.70 rows=1 width=46) (actual time=12.433..6905.152 rows=55 loops=1)\n Join Filter: (\"inner\".typeid = \"outer\".typeid)\n -> Index Scan using ds_types_name_key on ds_types (cost=0.00..5.57 rows=1 width=16) (actual time=0.062..1.209 rows=130 loops=1)\n Index Cond: ((name >= 'OMWeakObjRef<'::text) AND (name < 'OMWeakObjRef='::text))\n Filter: (name ~~ 'OMWeakObjRef<%>'::text)\n -> Nested Loop (cost=27.67..63.94 rows=15 width=62) (actual time=0.313..51.255 rows=1431 loops=130)\n -> Seq Scan on ds_tables (cost=0.00..9.92 rows=1 width=48) (actual time=0.007..0.718 rows=121 loops=130)\n Filter: ((lower(name) <> 'ds_omdatatest'::text) AND (lower(name) <> 'ds_ommessage'::text) AND (lower(name) <> 'ds_omusersetting'::text) AND(lower(name) <> 'ds_omloginsession'::text) AND (lower(name) <> 'ds_omclassdef'::text) AND (lower(name) <> 'ds_omuser'::text) AND (lower(name) <> 'ds_omusergroupsetting'::text) AND (lower(name) <> 'ds_omtestobject'::text) AND (lower(name) <> 'ds_omhomedirectory'::text) AND (lower(name) ~~ 'ds_om%'::text))\n -> Bitmap Heap Scan on ds_columns (cost=27.67..53.81 rows=17 width=46) (actual time=0.305..0.384 rows=12 loops=15730)\n Recheck Cond: (\"outer\".classid = ds_columns.classid)\n Filter: ((lower(name) <> 'owner'::text) AND (lower(name) <> 'ownergroup'::text) AND (lower(name) <> 'generatedby'::text) AND (lower(name) <> 'originator'::text) AND (lower(name) <> 'extendedby'::text) AND (lower(name) <> 'audituser'::text) AND (lower(name) <> 'settingclassdef'::text) AND (lower(name) <> 'itemowner'::text) AND (lower(name) <> 'srcobj'::text) AND (lower(name) <> 'dstobj'::text) AND (lower(name) <> 'srcweakobj'::text) AND (lower(name) <> 'dstweakobj'::text) AND (lower(name) <> 'notificationcreateuser'::text) AND (lower(name) <> 'metadataowner'::text) AND (lower(name) <> 'rpcdef'::text) AND (lower(name) <> 'settingdef'::text) AND (lower(name) <> 'settingparent'::text) AND (lower(name) <> 'taskacceptuser'::text) AND (lower(name) <> 
'workobj'::text) AND (lower(name) <> 'testref'::text))\n -> Bitmap Index Scan on ds_columns_pkey (cost=0.00..27.67 rows=17 width=0) (actual time=0.291..0.291 rows=16 loops=15730)\n Index Cond: (\"outer\".classid = ds_columns.classid)\n Total runtime: 7064.142 ms\n(14 rows)\n\nNote: In the explain analyze result there is small difference in result between the new db and production db because the new db initially has a bit more info due to changes made after the production db was created).\n<<<RESULT USING TEST DATABASE>>\n(1 row) Nested Loop (cost=119.82..127.32 rows=1 width=46) (actual time=76.067..574.193 rows=55 loops=1)\n Join Filter: (\"inner\".typeid = \"outer\".typeid)\n -> Index Scan using ds_types_name_key on ds_types (cost=0.00..6.51 rows=22 width=16) (actual time=25.948..26.763 rows=130 loops=1)\n Index Cond: ((name >= 'OMWeakObjRef<'::text) AND (name < 'OMWeakObjRef='::text))\n Filter: (name ~~ 'OMWeakObjRef<%>'::text)\n -> Materialize (cost=119.82..119.84 rows=2 width=62) (actual time=0.288..2.411 rows=1431 loops=130)\n -> Nested Loop (cost=33.67..119.82 rows=2 width=62) (actual time=37.288..94.879 rows=1431 loops=1)\n -> Seq Scan on ds_tables (cost=0.00..59.80 rows=1 width=48) (actual time=15.208..15.968 rows=121 loops=1)\n Filter: ((lower(name) <> 'ds_omdatatest'::text) AND (lower(name) <> 'ds_ommessage'::text) AND (lower(name) <> 'ds_omusersetting'::text) AND (lower(name) <> 'ds_omloginsession'::text) AND (lower(name) <> 'ds_omclassdef'::text) AND (lower(name) <> 'ds_omuser'::text) AND (lower(name) <> 'ds_omusergroupsetting'::text) AND (lower(name) <> 'ds_omtestobject'::text) AND (lower(name) <> 'ds_omhomedirectory'::text) AND (lower(name) ~~ 'ds_om%'::text))\n -> Bitmap Heap Scan on ds_columns (cost=33.67..59.81 rows=17 width=46) (actual time=0.502..0.617 rows=12 loops=121)\n Recheck Cond: (\"outer\".classid = ds_columns.classid)\n Filter: ((lower(name) <> 'owner'::text) AND (lower(name) <> 'ownergroup'::text) AND (lower(name) <> 'generatedby'::text) AND (lower(name) <> 'originator'::text) AND (lower(name) <> 'extendedby'::text) AND (lower(name) <> 'audituser'::text) AND (lower(name) <> 'settingclassdef'::text) AND(lower(name) <> 'itemowner'::text) AND (lower(name) <> 'srcobj'::text) AND (lower(name) <> 'dstobj'::text) AND (lower(name) <> 'srcweakobj'::text) AND (lower(name) <> 'dstweakobj'::text) AND (lower(name) <> 'notificationcreateuser'::text) AND (lower(name) <> 'metadataowner'::text) AND (lower(name) <> 'rpcdef'::text) AND (lower(name) <> 'settingdef'::text) AND (lower(name) <> 'settingparent'::text) AND (lower(name) <> 'taskacceptuser'::text) AND (lower(name) <>'workobj'::text) AND (lower(name) <> 'testref'::text))\n -> Bitmap Index Scan on ds_columns_pkey (cost=0.00..33.67 rows=17 width=0) (actual time=0.313..0.313 rows=16 loops=121)\n Index Cond: (\"outer\".classid = ds_columns.classid)\n Total runtime: 1216.834 ms\n\n<<SCHEMA>>\nTable \"public.ds_tables\"\n Column | Type | Modifiers\n---------------+---------+-----------\n classid | ds_uuid | not null\n name | text | not null\n parentclassid | ds_uuid |\nIndexes:\n \"ds_tables_pkey\" PRIMARY KEY, btree (classid)\n \"ds_tables_name_key\" UNIQUE, btree (name)\nForeign-key constraints:\n \"ds_tables_parentclassid_fkey\" FOREIGN KEY (parentclassid) REFERENCES ds_tables(classid)\n\nTable \"public.ds_types\"\n Column | Type | Modifiers\n-----------------+---------+-----------\n name | text | not null\n typeid | ds_uuid | not null\n internalsqltype | text | not null\nIndexes:\n \"ds_types_pkey\" PRIMARY KEY, btree 
(typeid)\n \"ds_types_name_key\" UNIQUE, btree (name)\n\nTable \"public.ds_columns\"\n Column | Type | Modifiers\n------------+---------+-----------\n propertyid | ds_uuid | not null\n classid | ds_uuid | not null\n name | text | not null\n typeid | ds_uuid | not null\nIndexes:\n \"ds_columns_pkey\" PRIMARY KEY, btree (propertyid, classid, typeid)\nForeign-key constraints:\n \"ds_columns_classid_fkey\" FOREIGN KEY (classid) REFERENCES ds_tables(classid)\n \"ds_columns_typeid_fkey\" FOREIGN KEY (typeid) REFERENCES ds_types(typeid)",
"msg_date": "Mon, 18 Dec 2006 11:32:19 -0500",
"msg_from": "\"Harry Hehl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan changing when queried data does not"
},
{
"msg_contents": "\"Harry Hehl\" <[email protected]> writes:\n> Why does vacuum full and reindex make a difference if the 3 tables are\n> never updated or records deleted?\n\nProbably because you did an ANALYZE somewhere and updated the planner's\nstats. I think your major problem is poor estimation of the ds_tables\nresult:\n\n> -> Seq Scan on ds_tables (cost=0.00..59.80 rows=1\n> width=48) (actual time=15.208..15.968 rows=121 loops=1)\n> Filter: ((lower(name) <> 'ds_omdatatest'::text) AND\n> (lower(name) <> 'ds_ommessage'::text) AND (lower(name) <>\n> 'ds_omusersetting'::text) AND (lower(name) <> 'ds_omloginsession'::text)\n> AND (lower(name) <> 'ds_omclassdef'::text) AND (lower(name) <>\n> 'ds_omuser'::text) AND (lower(name) <> 'ds_omusergroupsetting'::text)\n> AND (lower(name) <> 'ds_omtestobject'::text) AND (lower(name) <>\n> 'ds_omhomedirectory'::text) AND (lower(name) ~~ 'ds_om%'::text))\n\nIf you have an index on lower(name) then ANALYZE will collect statistics\non it, and you'd get an estimate of the result size that was better than\nrandom chance ... but I bet you have no such index. You might get some\nimprovement from raising the default statistics target, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Dec 2006 12:07:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan changing when queried data does not "
}
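Concretely, Tom's two suggestions would look roughly like this (the index name is invented; 8.1's default_statistics_target is 10):

-- give ANALYZE an expression to gather lower(name) statistics for:
CREATE INDEX ds_tables_lower_name_idx ON ds_tables (lower(name));
-- the same could be done for ds_columns.name if its estimate is also off

SET default_statistics_target = 100;  -- affects this session's ANALYZE runs
ANALYZE ds_tables;
ANALYZE ds_columns;
ANALYZE ds_types;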
] |
[
{
"msg_contents": "I have the following query which performs extremely slow:\nselect min(nlogid) as start_nlogid,\n max(nlogid) as end_nlogid,\n min(dtCreateDate) as start_transaction_timestamp,\n max(dtCreateDate) as end_transaction_timestamp\nfrom activity_log_facts \nwhere nlogid > ( select max(a.end_nlogid) from\nactivity_log_import_history a)\nand dtCreateDate < '2006-12-18 9:10'\n\n\nIf I change the where clause to have the return value of the subquery it\nruns very fast:\nselect min(nlogid) as start_nlogid,\n max(nlogid) as end_nlogid,\n min(dtCreateDate) as start_transaction_timestamp,\n max(dtCreateDate) as end_transaction_timestamp\nfrom activity_log_facts \nwhere nlogid > 402123456\nand dtCreateDate < '2006-12-18 9:10'\n\n\nIf I change the query to the following, it runs fast:\nselect min(nlogid) as start_nlogid,\n max(nlogid) as end_nlogid,\n min(dtCreateDate) as start_transaction_timestamp,\n max(dtCreateDate) as end_transaction_timestamp\nfrom activity_log_facts \ninner join ( select max(end_nlogid) as previous_nlogid from\nactivity_log_import_history) as a \non activity_log_facts.nlogid > a.previous_nlogid\nwhere dtCreateDate < ${IMPORT_TIMESTAMP}\n\n\nI am running PG 8.2. Why is that this the case? Shouldn't the query\nplanner be smart enough to know that the first query is the same as the\nsecond and third? The inner query does not refer to any columns outside\nof itself. I personally find the first query easiest to read and wish\nit performed well.\n\nJeremy Haile\n",
"msg_date": "Tue, 19 Dec 2006 09:32:17 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inner join vs where-clause subquery"
},
{
"msg_contents": "Jeremy Haile wrote:\n> I have the following query which performs extremely slow:\n> select min(nlogid) as start_nlogid,\n> max(nlogid) as end_nlogid,\n> min(dtCreateDate) as start_transaction_timestamp,\n> max(dtCreateDate) as end_transaction_timestamp\n> from activity_log_facts \n> where nlogid > ( select max(a.end_nlogid) from\n> activity_log_import_history a)\n> and dtCreateDate < '2006-12-18 9:10'\n\nCan you post the EXPLAIN ANALYSE for this one please? That'll show us \nexactly what it's doing.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 19 Dec 2006 16:31:41 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner join vs where-clause subquery"
},
{
"msg_contents": "Here is the explain analyze output:\n\nResult (cost=9.45..9.46 rows=1 width=0) (actual\ntime=156589.390..156589.391 rows=1 loops=1)\n InitPlan\n -> Result (cost=0.04..0.05 rows=1 width=0) (actual\n time=0.034..0.034 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4) (actual\n time=0.029..0.030 rows=1 loops=1)\n -> Index Scan Backward using\n activity_log_import_history_end_nlogid_idx on\n activity_log_import_history a (cost=0.00..113.43\n rows=2877 width=4) (actual time=0.027..0.027 rows=1\n loops=1)\n Filter: (end_nlogid IS NOT NULL)\n -> Limit (cost=0.00..1.19 rows=1 width=12) (actual\n time=0.052..0.052 rows=0 loops=1)\n -> Index Scan using activity_log_facts_pkey on\n activity_log_facts (cost=0.00..1831613.82 rows=1539298\n width=12) (actual time=0.050..0.050 rows=0 loops=1)\n Index Cond: (nlogid > $1)\n Filter: ((nlogid IS NOT NULL) AND (dtcreatedate <\n '2006-12-18 09:10:00'::timestamp without time zone))\n -> Limit (cost=0.00..1.19 rows=1 width=12) (actual\n time=0.006..0.006 rows=0 loops=1)\n -> Index Scan Backward using activity_log_facts_pkey on\n activity_log_facts (cost=0.00..1831613.82 rows=1539298\n width=12) (actual time=0.004..0.004 rows=0 loops=1)\n Index Cond: (nlogid > $1)\n Filter: ((nlogid IS NOT NULL) AND (dtcreatedate <\n '2006-12-18 09:10:00'::timestamp without time zone))\n -> Limit (cost=0.00..3.51 rows=1 width=12) (actual\n time=100221.955..100221.955 rows=0 loops=1)\n -> Index Scan using activity_log_facts_dtcreatedate_idx on\n activity_log_facts (cost=0.00..5406927.50 rows=1539298\n width=12) (actual time=100221.953..100221.953 rows=0 loops=1)\n Index Cond: (dtcreatedate < '2006-12-18\n 09:10:00'::timestamp without time zone)\n Filter: ((dtcreatedate IS NOT NULL) AND (nlogid > $1))\n -> Limit (cost=0.00..3.51 rows=1 width=12) (actual\n time=56367.367..56367.367 rows=0 loops=1)\n -> Index Scan Backward using\n activity_log_facts_dtcreatedate_idx on activity_log_facts \n (cost=0.00..5406927.50 rows=1539298 width=12) (actual\n time=56367.364..56367.364 rows=0 loops=1)\n Index Cond: (dtcreatedate < '2006-12-18\n 09:10:00'::timestamp without time zone)\n Filter: ((dtcreatedate IS NOT NULL) AND (nlogid > $1))\nTotal runtime: 156589.605 ms\n\n\nOn Tue, 19 Dec 2006 16:31:41 +0000, \"Richard Huxton\" <[email protected]>\nsaid:\n> Jeremy Haile wrote:\n> > I have the following query which performs extremely slow:\n> > select min(nlogid) as start_nlogid,\n> > max(nlogid) as end_nlogid,\n> > min(dtCreateDate) as start_transaction_timestamp,\n> > max(dtCreateDate) as end_transaction_timestamp\n> > from activity_log_facts \n> > where nlogid > ( select max(a.end_nlogid) from\n> > activity_log_import_history a)\n> > and dtCreateDate < '2006-12-18 9:10'\n> \n> Can you post the EXPLAIN ANALYSE for this one please? That'll show us \n> exactly what it's doing.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n",
"msg_date": "Tue, 19 Dec 2006 12:35:05 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner join vs where-clause subquery"
},
{
"msg_contents": "Jeremy Haile wrote:\n> Here is the explain analyze output:\n\nWell, the row estimates are about as far out as you can get:\n\n> -> Index Scan using activity_log_facts_pkey on\n> activity_log_facts (cost=0.00..1831613.82 rows=1539298\n> width=12) (actual time=0.050..0.050 rows=0 loops=1)\n\n> -> Index Scan Backward using activity_log_facts_pkey on\n> activity_log_facts (cost=0.00..1831613.82 rows=1539298\n> width=12) (actual time=0.004..0.004 rows=0 loops=1)\n\n> -> Index Scan using activity_log_facts_dtcreatedate_idx on\n> activity_log_facts (cost=0.00..5406927.50 rows=1539298\n> width=12) (actual time=100221.953..100221.953 rows=0 loops=1)\n\n> -> Index Scan Backward using\n> activity_log_facts_dtcreatedate_idx on activity_log_facts \n> (cost=0.00..5406927.50 rows=1539298 width=12) (actual\n> time=56367.364..56367.364 rows=0 loops=1)\n\nHmm - it's using the indexes on dtCreateDate and nlogid which seems \nbroadly sensible, and then plans to limit the results for min()/max(). \nHowever, it's clearly wrong about how many rows will satisfy\n nlogid > (select max(a.end_nlogid) from activity_log_import_history a)\n\n>>> select min(nlogid) as start_nlogid,\n>>> max(nlogid) as end_nlogid,\n>>> min(dtCreateDate) as start_transaction_timestamp,\n>>> max(dtCreateDate) as end_transaction_timestamp\n>>> from activity_log_facts \n>>> where nlogid > ( select max(a.end_nlogid) from\n>>> activity_log_import_history a)\n>>> and dtCreateDate < '2006-12-18 9:10'\n\nIf you run explain on the other forms of your query, I'd guess it's much \nmore accurate. There's a simple way to see if that is the issue. Run the \nsub-query and substitute the actual value returned into the query above. \nThen, try the same but with a prepared query. If it's down to nlogid \nestimates then the first should be fast and the second slow.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 19 Dec 2006 18:23:06 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner join vs where-clause subquery"
},
{
"msg_contents": "Here's the query and explain analyze using the result of the sub-query\nsubstituted: \n\nQUERY\nexplain analyze select min(nlogid) as start_nlogid,\n max(nlogid) as end_nlogid,\n min(dtCreateDate) as start_transaction_timestamp,\n max(dtCreateDate) as end_transaction_timestamp\nfrom activity_log_facts\nwhere nlogid > 478287801\nand dtCreateDate < '2006-12-18 9:10'\n\nEXPLAIN ANALYZE\nAggregate (cost=657.37..657.38 rows=1 width=12) (actual\ntime=0.018..0.019 rows=1 loops=1)\n -> Index Scan using activity_log_facts_nlogid_idx on\n activity_log_facts (cost=0.00..652.64 rows=472 width=12) (actual\n time=0.014..0.014 rows=0 loops=1)\n Index Cond: (nlogid > 478287801)\n Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp without\n time zone)\nTotal runtime: 0.076 ms\n\n\nSorry if the reason should be obvious, but I'm not the best at\ninterpreting the explains. Why is this explain so much simpler than the\nother query plan (with the subquery)?\n\n\n\nOn Tue, 19 Dec 2006 18:23:06 +0000, \"Richard Huxton\" <[email protected]>\nsaid:\n> Jeremy Haile wrote:\n> > Here is the explain analyze output:\n> \n> Well, the row estimates are about as far out as you can get:\n> \n> > -> Index Scan using activity_log_facts_pkey on\n> > activity_log_facts (cost=0.00..1831613.82 rows=1539298\n> > width=12) (actual time=0.050..0.050 rows=0 loops=1)\n> \n> > -> Index Scan Backward using activity_log_facts_pkey on\n> > activity_log_facts (cost=0.00..1831613.82 rows=1539298\n> > width=12) (actual time=0.004..0.004 rows=0 loops=1)\n> \n> > -> Index Scan using activity_log_facts_dtcreatedate_idx on\n> > activity_log_facts (cost=0.00..5406927.50 rows=1539298\n> > width=12) (actual time=100221.953..100221.953 rows=0 loops=1)\n> \n> > -> Index Scan Backward using\n> > activity_log_facts_dtcreatedate_idx on activity_log_facts \n> > (cost=0.00..5406927.50 rows=1539298 width=12) (actual\n> > time=56367.364..56367.364 rows=0 loops=1)\n> \n> Hmm - it's using the indexes on dtCreateDate and nlogid which seems \n> broadly sensible, and then plans to limit the results for min()/max(). \n> However, it's clearly wrong about how many rows will satisfy\n> nlogid > (select max(a.end_nlogid) from activity_log_import_history a)\n> \n> >>> select min(nlogid) as start_nlogid,\n> >>> max(nlogid) as end_nlogid,\n> >>> min(dtCreateDate) as start_transaction_timestamp,\n> >>> max(dtCreateDate) as end_transaction_timestamp\n> >>> from activity_log_facts \n> >>> where nlogid > ( select max(a.end_nlogid) from\n> >>> activity_log_import_history a)\n> >>> and dtCreateDate < '2006-12-18 9:10'\n> \n> If you run explain on the other forms of your query, I'd guess it's much \n> more accurate. There's a simple way to see if that is the issue. Run the \n> sub-query and substitute the actual value returned into the query above. \n> Then, try the same but with a prepared query. If it's down to nlogid \n> estimates then the first should be fast and the second slow.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n",
"msg_date": "Tue, 19 Dec 2006 14:34:50 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner join vs where-clause subquery"
},
{
"msg_contents": "Jeremy Haile wrote:\n> Here's the query and explain analyze using the result of the sub-query\n> substituted: \n> \n> QUERY\n> explain analyze select min(nlogid) as start_nlogid,\n> max(nlogid) as end_nlogid,\n> min(dtCreateDate) as start_transaction_timestamp,\n> max(dtCreateDate) as end_transaction_timestamp\n> from activity_log_facts\n> where nlogid > 478287801\n> and dtCreateDate < '2006-12-18 9:10'\n> \n> EXPLAIN ANALYZE\n> Aggregate (cost=657.37..657.38 rows=1 width=12) (actual\n> time=0.018..0.019 rows=1 loops=1)\n> -> Index Scan using activity_log_facts_nlogid_idx on\n> activity_log_facts (cost=0.00..652.64 rows=472 width=12) (actual\n> time=0.014..0.014 rows=0 loops=1)\n> Index Cond: (nlogid > 478287801)\n> Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp without\n> time zone)\n> Total runtime: 0.076 ms\n> \n> \n> Sorry if the reason should be obvious, but I'm not the best at\n> interpreting the explains. Why is this explain so much simpler than the\n> other query plan (with the subquery)?\n\nBecause it's planning it with knowledge of what \"nlogid\"s it's filtering \nby. It knows it isn't going to get many rows back with nlogid > \n478287801. In your previous explain it thought a large number of rows \nwould match and was trying not to sequentially scan the \nactivity_log_facts table.\n\nIdeally, the planner would evaluate the subquery in your original form \n(it should know it's only getting one row back from max()). Then it \ncould plan the query as above. I'm not sure how tricky that is to do though.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 19 Dec 2006 20:02:35 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner join vs where-clause subquery"
},
{
"msg_contents": "Makes sense. It is NOT executing the subquery more than once is it?\n\nOn Tue, 19 Dec 2006 20:02:35 +0000, \"Richard Huxton\" <[email protected]>\nsaid:\n> Jeremy Haile wrote:\n> > Here's the query and explain analyze using the result of the sub-query\n> > substituted: \n> > \n> > QUERY\n> > explain analyze select min(nlogid) as start_nlogid,\n> > max(nlogid) as end_nlogid,\n> > min(dtCreateDate) as start_transaction_timestamp,\n> > max(dtCreateDate) as end_transaction_timestamp\n> > from activity_log_facts\n> > where nlogid > 478287801\n> > and dtCreateDate < '2006-12-18 9:10'\n> > \n> > EXPLAIN ANALYZE\n> > Aggregate (cost=657.37..657.38 rows=1 width=12) (actual\n> > time=0.018..0.019 rows=1 loops=1)\n> > -> Index Scan using activity_log_facts_nlogid_idx on\n> > activity_log_facts (cost=0.00..652.64 rows=472 width=12) (actual\n> > time=0.014..0.014 rows=0 loops=1)\n> > Index Cond: (nlogid > 478287801)\n> > Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp without\n> > time zone)\n> > Total runtime: 0.076 ms\n> > \n> > \n> > Sorry if the reason should be obvious, but I'm not the best at\n> > interpreting the explains. Why is this explain so much simpler than the\n> > other query plan (with the subquery)?\n> \n> Because it's planning it with knowledge of what \"nlogid\"s it's filtering \n> by. It knows it isn't going to get many rows back with nlogid > \n> 478287801. In your previous explain it thought a large number of rows \n> would match and was trying not to sequentially scan the \n> activity_log_facts table.\n> \n> Ideally, the planner would evaluate the subquery in your original form \n> (it should know it's only getting one row back from max()). Then it \n> could plan the query as above. I'm not sure how tricky that is to do\n> though.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n",
"msg_date": "Tue, 19 Dec 2006 15:18:07 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner join vs where-clause subquery"
},
{
"msg_contents": "I'm still confused as to why the inner join version ran so much faster\nthan the where-clause version. \n\nHere's the inner join query and explain ouput:\nselect min(nlogid) as start_nlogid,\n max(nlogid) as end_nlogid,\n min(dtCreateDate) as start_transaction_timestamp,\n max(dtCreateDate) as end_transaction_timestamp\nfrom activity_log_facts\ninner join ( select max(end_nlogid) as previous_nlogid from\nactivity_log_import_history) as a\non activity_log_facts.nlogid > a.previous_nlogid\nwhere dtCreateDate < '2006-12-18 9:10'\n\nAggregate (cost=246226.95..246226.96 rows=1 width=12)\n -> Nested Loop (cost=49233.27..231209.72 rows=1501722 width=12)\n -> Result (cost=0.04..0.05 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4)\n -> Index Scan Backward using\n activity_log_import_history_end_nlogid_idx on\n activity_log_import_history (cost=0.00..114.97\n rows=2913 width=4)\n Filter: (end_nlogid IS NOT NULL)\n -> Bitmap Heap Scan on activity_log_facts \n (cost=49233.23..210449.44 rows=1660817 width=12)\n Recheck Cond: (activity_log_facts.nlogid >\n a.previous_nlogid)\n Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp\n without time zone)\n -> Bitmap Index Scan on activity_log_facts_nlogid_idx \n (cost=0.00..49233.23 rows=1660817 width=0)\n Index Cond: (activity_log_facts.nlogid >\n a.previous_nlogid)\n\n\nSince the inner join is basically the same thing as doing the\nwhere-clause subquery, why does it generate a far different plan?\n\n\n\nOn Tue, 19 Dec 2006 20:02:35 +0000, \"Richard Huxton\" <[email protected]>\nsaid:\n> Jeremy Haile wrote:\n> > Here's the query and explain analyze using the result of the sub-query\n> > substituted: \n> > \n> > QUERY\n> > explain analyze select min(nlogid) as start_nlogid,\n> > max(nlogid) as end_nlogid,\n> > min(dtCreateDate) as start_transaction_timestamp,\n> > max(dtCreateDate) as end_transaction_timestamp\n> > from activity_log_facts\n> > where nlogid > 478287801\n> > and dtCreateDate < '2006-12-18 9:10'\n> > \n> > EXPLAIN ANALYZE\n> > Aggregate (cost=657.37..657.38 rows=1 width=12) (actual\n> > time=0.018..0.019 rows=1 loops=1)\n> > -> Index Scan using activity_log_facts_nlogid_idx on\n> > activity_log_facts (cost=0.00..652.64 rows=472 width=12) (actual\n> > time=0.014..0.014 rows=0 loops=1)\n> > Index Cond: (nlogid > 478287801)\n> > Filter: (dtcreatedate < '2006-12-18 09:10:00'::timestamp without\n> > time zone)\n> > Total runtime: 0.076 ms\n> > \n> > \n> > Sorry if the reason should be obvious, but I'm not the best at\n> > interpreting the explains. Why is this explain so much simpler than the\n> > other query plan (with the subquery)?\n> \n> Because it's planning it with knowledge of what \"nlogid\"s it's filtering \n> by. It knows it isn't going to get many rows back with nlogid > \n> 478287801. In your previous explain it thought a large number of rows \n> would match and was trying not to sequentially scan the \n> activity_log_facts table.\n> \n> Ideally, the planner would evaluate the subquery in your original form \n> (it should know it's only getting one row back from max()). Then it \n> could plan the query as above. I'm not sure how tricky that is to do\n> though.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n",
"msg_date": "Tue, 19 Dec 2006 15:47:56 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner join vs where-clause subquery"
}
] |
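A minimal sketch of the two-step workaround Richard suggests above, reusing the table and column names from this thread: evaluate the scalar subquery on its own, then feed its result back in as a literal so the planner can see how selective the nlogid condition really is (this is exactly what Jeremy's substituted EXPLAIN ANALYZE demonstrates). Note that passing the value as a bound parameter of a prepared statement would reintroduce the same estimation problem.

-- Step 1: evaluate the boundary on its own.
SELECT max(end_nlogid) AS previous_nlogid
  FROM activity_log_import_history;
-- (suppose it returns 478287801, as in the thread)

-- Step 2: re-run the aggregates with the literal value, so the planner
-- picks the cheap index scan on nlogid instead of the slow min()/max() plans.
SELECT min(nlogid)       AS start_nlogid,
       max(nlogid)       AS end_nlogid,
       min(dtCreateDate) AS start_transaction_timestamp,
       max(dtCreateDate) AS end_transaction_timestamp
  FROM activity_log_facts
 WHERE nlogid > 478287801
   AND dtCreateDate < '2006-12-18 09:10';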
[
{
"msg_contents": "HI,\n \nI've looking around the log files of my server and lately they indicate that I should consider increase the check_point segments because they're beeing reading too often and also recommend increasing the max_fsm_pages over 169728...\n \nthose are the config values present in the postgresql.conf\n \nshared_buffers = 1000 # min 16 or max_connections*2, 8KB eachwork_mem = 8192 # min 64, size in KBmaintenance_work_mem = 262152 # min 1024, size in KBmax_fsm_pages = 40000 # min max_fsm_relations*16, 6 bytes eachmax_fsm_relations = 2000 # min 100, ~70 bytes each\n \n \nAnd the new values, acording to the HNIT in log files ...\n \nshared_buffers = 1000 # min 16 or max_connections*2, 8KB eachwork_mem = 8192 # min 64, size in KBmaintenance_work_mem = 262152 # min 1024, size in KBmax_fsm_pages = 170000 # min max_fsm_relations*16, 6 bytes eachmax_fsm_relations = 10625 # min 100, ~70 bytes each\n \n \n \n_________________________________________________________________\nConsigue el nuevo Windows Live Messenger\nhttp://get.live.com/messenger/overview\n\n\n\n\nHI,\n \nI've looking around the log files of my server and lately they indicate that I should consider increase the check_point segments because they're beeing reading too often and also recommend increasing the max_fsm_pages over 169728...\n \nthose are the config values present in the postgresql.conf\n \nshared_buffers = 1000 # min 16 or max_connections*2, 8KB eachwork_mem = 8192 # min 64, size in KBmaintenance_work_mem = 262152 # min 1024, size in KBmax_fsm_pages = 40000 # min max_fsm_relations*16, 6 bytes eachmax_fsm_relations = 2000 # min 100, ~70 bytes each\n \n \nAnd the new values, acording to the HNIT in log files ...\n \nshared_buffers = 1000 # min 16 or max_connections*2, 8KB eachwork_mem = 8192 # min 64, size in KBmaintenance_work_mem = 262152 # min 1024, size in KBmax_fsm_pages = 170000 # min max_fsm_relations*16, 6 bytes eachmax_fsm_relations = 10625 # min 100, ~70 bytes each\n \n \n Consigue el nuevo Windows Live Messenger Pruébalo",
"msg_date": "Wed, 20 Dec 2006 05:31:21 +0000",
"msg_from": "ALVARO ARCILA <[email protected]>",
"msg_from_op": true,
"msg_subject": "max_fsm_pages and check_points"
},
{
"msg_contents": "On mi�, 2006-12-20 at 05:31 +0000, ALVARO ARCILA wrote:\n> \n> HI,\n> \n> I've looking around the log files of my server and lately they\n> indicate that I should consider increase the check_point segments\n> because they're beeing reading too often and also recommend increasing\n> the max_fsm_pages over 169728...\n\nif this has been happening for some time, some tables\nmight possibly have become bloated with dead rows, so\na one-time VACUUM FULL or CLUSTER on these might be indicated\nto speed up reaching the steady state.\n\nI think the max_fsm_pages is a minimum recommendation, so you\nmight want to look at VACUUM VERBOSE output after setting it,\nto see if an even higher value is indicated\n\n> those are the config values present in the postgresql.conf\n> \n> shared_buffers = 1000\n> work_mem = 8192\n\nif you have got a lot of memory, you might want to experiment\nwith these a little\n\ngnari\n\n",
"msg_date": "Wed, 20 Dec 2006 09:01:28 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_fsm_pages and check_points"
}
] |
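A short sketch of how to act on Ragnar's advice, assuming superuser access to the database in question: the summary lines at the very end of a database-wide VACUUM VERBOSE report how many free-space-map page slots are actually needed, which can then be compared with the running settings.

-- Run once as superuser and read the INFO summary at the end of the output.
VACUUM VERBOSE;

-- Current limits, for comparison with the numbers VACUUM reports:
SHOW max_fsm_pages;
SHOW max_fsm_relations;
SHOW checkpoint_segments;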
[
{
"msg_contents": "I have a question about the following. The table has an index on (clicked at time zone 'PST'). I am using postgres 8.1.3\n\nActually, I think I answered my own question already. But I want to confirm - Is the GROUP BY faster because it doesn't have to sort results, whereas DISTINCT must produce sorted results? This wasn't clear to me from the documentation. If it's true, then I could save considerable time by using GROUP BY where I have been using DISTINCT in the past. Usually I simply want a count of the distinct values, and there is no need to sort for that.\n\nI'm also interested in the count(distinct) case at the bottom. The cost estimate seems similar to the GROUP BY, but the actual cost is much higher.\n\nThe table is insert-only and was analyzed before running these queries. The domain column being aggregated has around 16k distinct values, and there are 780k rows in total (for the entire table, not the slice being selected in these queries).\n\nThanks,\nBrian\n\n\nlive:parking=> explain analyze SELECT domain \n FROM parked_redirects\n WHERE (clicked at time zone 'PST') >= '2006-12-17'\n AND (clicked at time zone 'PST')\n < '2006-12-18'::timestamp without time zone + '1 day'::interval\n GROUP BY domain;\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=19078.50..19085.29 rows=679 width=18) (actual time=709.309..717.096 rows=14526 loops=1)\n -> Index Scan using parked_redirects_pst on parked_redirects (cost=0.01..17846.82 rows=492672 width=18) (actual time=0.073..406.510 rows=504972 loops=1)\n Index Cond: ((timezone('PST'::text, clicked) >= '2006-12-17 00:00:00'::timestamp without time zone) AND (timezone('PST'::text, clicked) < '2006-12-19 00:00:00'::timestamp without time zone))\n Total runtime: 719.810 ms\n(4 rows)\n\nlive:parking=> explain analyze SELECT DISTINCT domain\n FROM parked_redirects\n WHERE (clicked at time zone 'PST') >= '2006-12-17'\n AND (clicked at time zone 'PST')\n < '2006-12-18'::timestamp without time zone + '1 day'::interval;\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=64433.98..66897.52 rows=679 width=18) (actual time=15329.904..15647.849 rows=14526 loops=1)\n -> Sort (cost=64433.98..65665.75 rows=492709 width=18) (actual time=15329.901..15511.479 rows=504972 loops=1)\n Sort Key: \"domain\"\n -> Index Scan using parked_redirects_pst on parked_redirects (cost=0.01..17847.41 rows=492709 width=18) (actual time=0.068..519.696 rows=504972 loops=1)\n Index Cond: ((timezone('PST'::text, clicked) >= '2006-12-17 00:00:00'::timestamp without time zone) AND (timezone('PST'::text, clicked) < '2006-12-19 00:00:00'::timestamp without time zone))\n Total runtime: 15666.863 ms\n(6 rows)\n\nlive:parking=> explain analyze SELECT count(DISTINCT domain)\n FROM parked_redirects\n WHERE (clicked at time zone 'PST') >= '2006-12-17'\n AND (clicked at time zone 'PST')\n < '2006-12-18'::timestamp without time zone + '1 day'::interval;\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=19107.20..19107.21 
rows=1 width=18) (actual time=11380.530..11380.531 rows=1 loops=1)\n -> Index Scan using parked_redirects_pst on parked_redirects (cost=0.01..17873.67 rows=493412 width=18) (actual time=0.022..347.473 rows=504972 loops=1)\n Index Cond: ((timezone('PST'::text, clicked) >= '2006-12-17 00:00:00'::timestamp without time zone) AND (timezone('PST'::text, clicked) < '2006-12-19 00:00:00'::timestamp without time zone))\n Total runtime: 11384.923 ms\n(4 rows)\n\n\n\n",
"msg_date": "Tue, 19 Dec 2006 23:19:39 -0800 (PST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "GROUP BY vs DISTINCT"
},
{
"msg_contents": "On Tue, Dec 19, 2006 at 11:19:39PM -0800, Brian Herlihy wrote:\n> Actually, I think I answered my own question already. But I want to\n> confirm - Is the GROUP BY faster because it doesn't have to sort results,\n> whereas DISTINCT must produce sorted results? This wasn't clear to me from\n> the documentation. If it's true, then I could save considerable time by\n> using GROUP BY where I have been using DISTINCT in the past. Usually I\n> simply want a count of the distinct values, and there is no need to sort\n> for that.\n\nYou are right; at the moment, GROUP BY is more intelligent than DISTINCT,\neven if they have to compare the same columns. This is, as always, something\nthat could be improved in a future release, TTBOMK.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 20 Dec 2006 12:00:07 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY vs DISTINCT"
},
{
"msg_contents": "On 20/12/06, Steinar H. Gunderson <[email protected]> wrote:\n> On Tue, Dec 19, 2006 at 11:19:39PM -0800, Brian Herlihy wrote:\n> > Actually, I think I answered my own question already. But I want to\n> > confirm - Is the GROUP BY faster because it doesn't have to sort results,\n> > whereas DISTINCT must produce sorted results? This wasn't clear to me from\n> > the documentation. If it's true, then I could save considerable time by\n> > using GROUP BY where I have been using DISTINCT in the past. Usually I\n> > simply want a count of the distinct values, and there is no need to sort\n> > for that.\n>\n> You are right; at the moment, GROUP BY is more intelligent than DISTINCT,\n> even if they have to compare the same columns. This is, as always, something\n> that could be improved in a future release, TTBOMK.\n>\n> /* Steinar */\n\nOh so thats why group by is nearly always quicker than distinct. I\nalways thought distinct was just short hand for \"group by same columns\nas I've just selected\"\nIs it actually in the sql spec to sort in a distinct or could we just\nget the parser to rewrite distinct into group by and hence remove the\nextra code a different way of doing it must mean.?\n\nPeter.\n",
"msg_date": "Wed, 20 Dec 2006 11:16:40 +0000",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY vs DISTINCT"
},
{
"msg_contents": "\"Peter Childs\" <[email protected]> writes:\n> Is it actually in the sql spec to sort in a distinct\n\nNo. PG's code that supports GROUP BY is newer and smarter than the code\nthat supports DISTINCT, is all. One of the things on the to-do list is\nto revise DISTINCT so it can also consider hash-based implementations.\nThe hard part of that is not to break DISTINCT ON ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Dec 2006 10:36:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY vs DISTINCT "
}
] |
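A sketch of the rewrites implied by the discussion above, using the table and date filter from Brian's examples: the GROUP BY form is eligible for a HashAggregate, and when only a count of distinct values is wanted, grouping in a subquery avoids count(DISTINCT ...), which also goes through a sort.

-- DISTINCT form: currently always sorts.
SELECT DISTINCT domain
  FROM parked_redirects
 WHERE (clicked AT TIME ZONE 'PST') >= '2006-12-17'
   AND (clicked AT TIME ZONE 'PST') < '2006-12-19';

-- GROUP BY form: same result set, but can use a HashAggregate.
SELECT domain
  FROM parked_redirects
 WHERE (clicked AT TIME ZONE 'PST') >= '2006-12-17'
   AND (clicked AT TIME ZONE 'PST') < '2006-12-19'
 GROUP BY domain;

-- Counting distinct values: group first, count the groups afterwards.
SELECT count(*) AS distinct_domains
  FROM (SELECT domain
          FROM parked_redirects
         WHERE (clicked AT TIME ZONE 'PST') >= '2006-12-17'
           AND (clicked AT TIME ZONE 'PST') < '2006-12-19'
         GROUP BY domain) AS d;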
[
{
"msg_contents": "Hi,\n\nhas anyone here had any good/bad experiences clustering & load balancing \na PostgreSQL server on Redhat (ES)?\n\ni have two identical servers with plenty of memory plus a nice disc \narray. the solution should have both redundancy and performance. BTW: \nit's a back end to a web based application in a high traffic situation.\n\ni thought it might be sensible to use a redhat cluster rather than \nintroduce another ingredient like PGCluster given we 'can' use a RH cluster.\n\nany suggestions invited and appreciated.\n\n\n\n",
"msg_date": "Wed, 20 Dec 2006 21:25:42 +1000",
"msg_from": "CARMODA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question: Clustering & Load Balancing "
},
{
"msg_contents": "CARMODA napsal(a):\n> has anyone here had any good/bad experiences clustering & load balancing \n> a PostgreSQL server on Redhat (ES)?\n\nWe have recently successfully rolled-out a solution consisting of two\nPostgreSQL database backends replicated by Slony. The backends are\naccessed from a bunch of application servers (Apache) via pgpool. We\nhave some shell scripts around it to handle monitoring, failover and\nswitch-over. It is built on CentOS4 (ie. RedHat ES4). It's pretty solid.\n\n-- \nMichal T�borsk�\nchief systems architect\nInternet Mall, a.s.\n<http://www.MALL.cz>\n\n",
"msg_date": "Wed, 20 Dec 2006 16:50:03 +0100",
"msg_from": "Michal Taborsky - Internet Mall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question: Clustering & Load Balancing"
}
] |
[
{
"msg_contents": "Hi,\nWe have a database with one table of 10,000,000 tuples and 4 tables with 5,000,000 tuples.\nWhile in SQL Server it takes 3 minutes to restore this complete database, in PostgreSQL it takes more than 2 hours.\nThe Backup takes 6 minutes in SQLServer and 13 minutes (which is not a problem)\n\nWe are running PostgreSQL 8.1 for Windows and we are using:\nC:\\pg_dump.exe -i -h localhost -p 5432 -U usuario -F c -b -v -f \"C:\\BK\\file.backup\" base\nand \nC:\\pg_restore.exe -i -h localhost -p 5432 -U usuario -d base -O -v \"C:\\BK\\file.backup\"\n\nWe use those parameters because we copied them from PGAdminIII.\n\nIs there any way to make it faster?\n\nTanks\n Sebasti�n\n __________________________________________________\nCorreo Yahoo!\nEspacio para todos tus mensajes, antivirus y antispam �gratis! \n�Abr� tu cuenta ya! - http://correo.yahoo.com.ar\nHi,We have a database with one table of 10,000,000 tuples and 4 tables with 5,000,000 tuples.While in SQL Server it takes 3 minutes to restore this complete database, in PostgreSQL it takes more than 2 hours.The Backup takes 6 minutes in SQLServer and 13 minutes (which is not a problem)We are running PostgreSQL 8.1 for Windows and we are using:C:\\pg_dump.exe -i -h localhost -p 5432 -U usuario -F c -b -v -f \"C:\\BK\\file.backup\" baseand C:\\pg_restore.exe -i -h localhost -p 5432 -U usuario -d base -O -v \"C:\\BK\\file.backup\"We use those parameters because we copied them from PGAdminIII.Is there any way to make it faster?Tanks Sebasti�n __________________________________________________Correo Yahoo!Espacio para todos tus mensajes, antivirus y antispam �gratis! �Abr� tu cuenta ya! -\n http://correo.yahoo.com.ar",
"msg_date": "Fri, 22 Dec 2006 14:32:09 +0000 (GMT)",
"msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backup/Restore too slow"
},
{
"msg_contents": "Rebuilding the indexes or integrity confirmations are probably taking \nmost of the time.\n\nWhat is your work_mem setting?\n\nOn 22-Dec-06, at 9:32 AM, Sebasti�n Baioni wrote:\n\n> Hi,\n> We have a database with one table of 10,000,000 tuples and 4 tables \n> with 5,000,000 tuples.\n> While in SQL Server it takes 3 minutes to restore this complete \n> database, in PostgreSQL it takes more than 2 hours.\n> The Backup takes 6 minutes in SQLServer and 13 minutes (which is \n> not a problem)\n>\n> We are running PostgreSQL 8.1 for Windows and we are using:\n> C:\\pg_dump.exe -i -h localhost -p 5432 -U usuario -F c -b -v -f \"C: \n> \\BK\\file.backup\" base\n> and\n> C:\\pg_restore.exe -i -h localhost -p 5432 -U usuario -d base -O -v \n> \"C:\\BK\\file.backup\"\n>\n> We use those parameters because we copied them from PGAdminIII.\n>\n> Is there any way to make it faster?\n>\n> Tanks\n> Sebasti�n\n> __________________________________________________\n> Correo Yahoo!\n> Espacio para todos tus mensajes, antivirus y antispam �gratis!\n> �Abr� tu cuenta ya! - http://correo.yahoo.com.ar\n\n\nRebuilding the indexes or integrity confirmations are probably taking most of the time.What is your work_mem setting?On 22-Dec-06, at 9:32 AM, Sebastián Baioni wrote:Hi,We have a database with one table of 10,000,000 tuples and 4 tables with 5,000,000 tuples.While in SQL Server it takes 3 minutes to restore this complete database, in PostgreSQL it takes more than 2 hours.The Backup takes 6 minutes in SQLServer and 13 minutes (which is not a problem)We are running PostgreSQL 8.1 for Windows and we are using:C:\\pg_dump.exe -i -h localhost -p 5432 -U usuario -F c -b -v -f \"C:\\BK\\file.backup\" baseand C:\\pg_restore.exe -i -h localhost -p 5432 -U usuario -d base -O -v \"C:\\BK\\file.backup\"We use those parameters because we copied them from PGAdminIII.Is there any way to make it faster?Tanks Sebastián __________________________________________________Correo Yahoo!Espacio para todos tus mensajes, antivirus y antispam ¡gratis! ¡Abrí tu cuenta ya! - http://correo.yahoo.com.ar",
"msg_date": "Fri, 29 Dec 2006 12:28:49 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup/Restore too slow"
},
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n> Rebuilding the indexes or integrity confirmations are probably taking\n> most of the time.\n\n> What is your work_mem setting?\n\nmaintenance_work_mem is the thing to look at, actually. I concur that\nbumping it up might help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Dec 2006 12:44:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup/Restore too slow "
},
{
"msg_contents": "Thanks for answering.\nThis is my configuration:\n# - Memory -\n\nshared_buffers = 1000 # min 16, at least max_connections*2, 8KB each\n#work_mem = 1024 # min 64, size in KB\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\nThe PC where we are runing PostgreSQL server is:\nAMD Athlon(tm) 64 Processor\n3000+\n1.79 GHz, 1.93 GB RAM\nwith WindowsXP Proffesional, Version 2002 Service Pack 2.\n\nHow should we set it?\n\nThanks a lot!\n Sebasti�n\n\nTom Lane <[email protected]> escribi�: Rod Taylor writes:\n> Rebuilding the indexes or integrity confirmations are probably taking\n> most of the time.\n\n> What is your work_mem setting?\n\nmaintenance_work_mem is the thing to look at, actually. I concur that\nbumping it up might help.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n __________________________________________________\nCorreo Yahoo!\nEspacio para todos tus mensajes, antivirus y antispam �gratis! \n�Abr� tu cuenta ya! - http://correo.yahoo.com.ar\nThanks for answering.This is my configuration:# - Memory -shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each#work_mem = 1024 # min 64, size in KB#maintenance_work_mem = 16384 # min 1024, size in KB#max_stack_depth = 2048 # min 100, size in KBThe PC where we are runing PostgreSQL server is:AMD Athlon(tm) 64 Processor3000+1.79 GHz, 1.93 GB RAMwith WindowsXP Proffesional, Version 2002 Service Pack 2.How should we set it?Thanks a lot! Sebasti�nTom Lane <[email protected]> escribi�: Rod Taylor writes:> Rebuilding the indexes or integrity confirmations are probably\n taking> most of the time.> What is your work_mem setting?maintenance_work_mem is the thing to look at, actually. I concur thatbumping it up might help. regards, tom lane---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster __________________________________________________Correo Yahoo!Espacio para todos tus mensajes, antivirus y antispam �gratis! �Abr� tu cuenta ya! - http://correo.yahoo.com.ar",
"msg_date": "Fri, 29 Dec 2006 18:03:55 +0000 (GMT)",
"msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup/Restore too slow "
},
{
"msg_contents": "Sebasti�n Baioni wrote:\n> Thanks for answering.\n> This is my configuration:\n> # - Memory -\n>\n> shared_buffers = 1000 # min 16, at least max_connections*2, 8KB \n> each\n> #work_mem = 1024 # min 64, size in KB\n> #maintenance_work_mem = 16384 # min 1024, size in KB\n> #max_stack_depth = 2048 # min 100, size in KB\n>\n> The PC where we are runing PostgreSQL server is:\n> AMD Athlon(tm) 64 Processor\n> 3000+\n> 1.79 GHz, 1.93 GB RAM\n> with WindowsXP Proffesional, Version 2002 Service Pack 2.\n>\n> How should we set it?\nShared buffers even on a workstation should be higher than 1000 if you \nwant some performance. It depends how much memory you have spare to use \nfor PostgreSQL. But something like\nshared_buffers = 20000\nmaintenance_work_mem = 256000\n\nWill certainly give you a performance boost. You will have to adjust \nthose figures based on whatever else you are doing on the machine.\n\nRussell Smith.\n>\n> Thanks a lot!\n> Sebasti�n\n>\n> */Tom Lane <[email protected]>/* escribi�:\n>\n> Rod Taylor writes:\n> > Rebuilding the indexes or integrity confirmations are probably\n> taking\n> > most of the time.\n>\n> > What is your work_mem setting?\n>\n> maintenance_work_mem is the thing to look at, actually. I concur that\n> bumping it up might help.\n>\n> regards, tom lane\n>\n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n> __________________________________________________\n> Correo Yahoo!\n> Espacio para todos tus mensajes, antivirus y antispam �gratis!\n> �Abr� tu cuenta ya! - http://correo.yahoo.com.ar\n>\n\n\n\n\n\n\n\n\nSebastián Baioni wrote:\nThanks for answering.\nThis is my configuration:\n# - Memory -\n\nshared_buffers = 1000 # min 16, at least max_connections*2, 8KB\neach\n#work_mem = 1024 # min 64, size in KB\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\nThe PC where we are runing PostgreSQL server is:\nAMD Athlon(tm) 64 Processor\n3000+\n1.79 GHz, 1.93 GB RAM\nwith WindowsXP Proffesional, Version 2002 Service Pack 2.\n\nHow should we set it?\n\nShared buffers even on a workstation should be higher than 1000 if you\nwant some performance. It depends how much memory you have spare to\nuse for PostgreSQL. But something like\nshared_buffers = 20000\nmaintenance_work_mem = 256000\n\nWill certainly give you a performance boost. You will have to adjust\nthose figures based on whatever else you are doing on the machine.\n\nRussell Smith.\n\nThanks a lot!\n Sebastián\n\nTom Lane <[email protected]> escribió:\n \nRod Taylor writes:\n> Rebuilding the indexes or integrity confirmations are probably\ntaking\n> most of the time.\n\n> What is your work_mem setting?\n\nmaintenance_work_mem is the thing to look at, actually. I concur that\nbumping it up might help.\n\nregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n __________________________________________________\nCorreo Yahoo!\nEspacio para todos tus mensajes, antivirus y antispam ¡gratis! \n¡Abrí tu cuenta ya! - http://correo.yahoo.com.ar",
"msg_date": "Sat, 30 Dec 2006 08:25:12 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup/Restore too slow"
}
] |
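A sketch of one way to apply the settings suggested above to the restore; the role-level setting is my assumption about how to reach pg_restore's sessions, not something stated in the thread. shared_buffers has to be raised in postgresql.conf and needs a server restart, while maintenance_work_mem can also be attached to the role that runs pg_restore so the index builds after the data load get more memory.

-- postgresql.conf (restart required for shared_buffers):
--   shared_buffers = 20000           # about 160 MB at 8 kB per buffer
--   maintenance_work_mem = 256000    # in kB, used by CREATE INDEX during restore

-- Or raise maintenance_work_mem only for the restoring role ('usuario' in
-- the thread); new sessions connecting as that role will pick it up.
ALTER USER usuario SET maintenance_work_mem = 256000;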
[
{
"msg_contents": "I created a 10GB partition for pg_xlog and ran out of disk space today\nduring a long running update. My checkpoint_segments is set to 12, but\nthere are 622 files in pg_xlog. What size should the pg_xlog partition\nbe? \n\nPostmaster is currently not starting up (critical for my organization)\nand reports \"FATAL: The database system is starting up\" .\n\nThe log reports:\n2006-12-22 10:50:09 LOG: checkpoint record is at 2E/87A323C8\n2006-12-22 10:50:09 LOG: redo record is at 2E/8729A6E8; undo record is\nat 0/0; shutdown FALSE\n2006-12-22 10:50:09 LOG: next transaction ID: 0/25144015; next OID:\n140986\n2006-12-22 10:50:09 LOG: next MultiXactId: 12149; next MultiXactOffset:\n24306\n2006-12-22 10:50:09 LOG: database system was not properly shut down;\nautomatic recovery in progress\n2006-12-22 10:50:09 LOG: redo starts at 2E/8729A6E8\n\n\nThis has been running for 20 minutes. What can I do? Please help!\n",
"msg_date": "Fri, 22 Dec 2006 11:06:46 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "Sorry for my rushed posting, as I was in a bit of a panic.\n\nWe moved the pg_xlog directory over to a 70GB partition, and after 15-20\nminutes the automatic recovery finished. Everything is working fine\nnow.\n\nI would still appreciate a PG guru explaining how to estimate size for a\npg_xlog partition. It seems like it can vary considerably depending on\nhow intensive your current transactions are. Is there a way to\ndetermine a maximum?\n\nOn Fri, 22 Dec 2006 11:06:46 -0500, \"Jeremy Haile\" <[email protected]>\nsaid:\n> I created a 10GB partition for pg_xlog and ran out of disk space today\n> during a long running update. My checkpoint_segments is set to 12, but\n> there are 622 files in pg_xlog. What size should the pg_xlog partition\n> be? \n> \n> Postmaster is currently not starting up (critical for my organization)\n> and reports \"FATAL: The database system is starting up\" .\n> \n> The log reports:\n> 2006-12-22 10:50:09 LOG: checkpoint record is at 2E/87A323C8\n> 2006-12-22 10:50:09 LOG: redo record is at 2E/8729A6E8; undo record is\n> at 0/0; shutdown FALSE\n> 2006-12-22 10:50:09 LOG: next transaction ID: 0/25144015; next OID:\n> 140986\n> 2006-12-22 10:50:09 LOG: next MultiXactId: 12149; next MultiXactOffset:\n> 24306\n> 2006-12-22 10:50:09 LOG: database system was not properly shut down;\n> automatic recovery in progress\n> 2006-12-22 10:50:09 LOG: redo starts at 2E/8729A6E8\n> \n> \n> This has been running for 20 minutes. What can I do? Please help!\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Fri, 22 Dec 2006 11:52:58 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "\n> 2006-12-22 10:50:09 LOG: database system was not properly shut down;\n> automatic recovery in progress\n> 2006-12-22 10:50:09 LOG: redo starts at 2E/8729A6E8\n> \n> \n> This has been running for 20 minutes. What can I do? Please help!\n\n1. Turn off postgresql.\n2. Make tar backup of entire thing\n3. Move pg_xlog somehwere that has space\n4. ln postgresql to new pg_xlog directory\n5. Start postgresql\n6. Look for errors\n7. Report back\n\nSincerely.\n\nJoshua D. Drake\n\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Fri, 22 Dec 2006 08:55:31 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "On Fri, 2006-12-22 at 11:52 -0500, Jeremy Haile wrote:\n\n> I would still appreciate ... explaining how to estimate size for a\n> pg_xlog partition. It seems like it can vary considerably depending on\n> how intensive your current transactions are. Is there a way to\n> determine a maximum?\n\nThere should be at most 2*checkpoint_segments+1 files in pg_xlog, which\nare 16MB each. So you shouldn't be having a problem.\n\nIf there are more than this, it could be because you have\ncurrently/previously had archive_command set and the archive command\nfailed to execute correctly, or the database was shutdown/crashed prior\nto the archive commands being executed.\n\nIIRC there was a bug that allowed this to happen, but that was some time\nago.\n\nPerhaps you could show us the dir listing, so we can check that there is\nnot a new problem emerging? Can you also show us the contents of the\npg_xlog/archive_status directory? Thanks.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Dec 2006 17:02:43 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
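As a rough check of the formula Simon gives: with the checkpoint_segments = 12 reported earlier in the thread and 16 MB per WAL segment, pg_xlog should level off at a few hundred megabytes, so the 622 files (~10 GB) seen here point at checkpoints not completing rather than at normal WAL growth. A back-of-the-envelope calculation:

-- checkpoint_segments = 12, WAL segments are 16 MB each
SELECT 2 * 12 + 1         AS expected_max_segments,   -- 25 files
       (2 * 12 + 1) * 16  AS expected_max_pg_xlog_mb; -- about 400 MB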
{
"msg_contents": "As I understand it, the log space accumulates for the oldest transaction\nwhich is still running, and all transactions which started after it. I\ndon't think there is any particular limit besides available disk space. \nLong running transactions can cause various problems, including table\nand index bloat which can degrade performance. You should probably look\nat whether the long running transaction could be broken down into a\nnumber of smaller ones.\n \n-Kevin\n \n \n>>> On Fri, Dec 22, 2006 at 10:52 AM, in message\n<[email protected]>, \"Jeremy\nHaile\"\n<[email protected]> wrote: \n> Sorry for my rushed posting, as I was in a bit of a panic.\n> \n> We moved the pg_xlog directory over to a 70GB partition, and after\n15- 20\n> minutes the automatic recovery finished. Everything is working fine\n> now.\n> \n> I would still appreciate a PG guru explaining how to estimate size\nfor a\n> pg_xlog partition. It seems like it can vary considerably depending\non\n> how intensive your current transactions are. Is there a way to\n> determine a maximum?\n> \n> On Fri, 22 Dec 2006 11:06:46 - 0500, \"Jeremy Haile\"\n<[email protected]>\n> said:\n>> I created a 10GB partition for pg_xlog and ran out of disk space\ntoday\n>> during a long running update. My checkpoint_segments is set to 12,\nbut\n>> there are 622 files in pg_xlog. What size should the pg_xlog\npartition\n>> be? \n>> \n>> Postmaster is currently not starting up (critical for my\norganization)\n>> and reports \"FATAL: The database system is starting up\" .\n>> \n>> The log reports:\n>> 2006- 12- 22 10:50:09 LOG: checkpoint record is at 2E/87A323C8\n>> 2006- 12- 22 10:50:09 LOG: redo record is at 2E/8729A6E8; undo\nrecord is\n>> at 0/0; shutdown FALSE\n>> 2006- 12- 22 10:50:09 LOG: next transaction ID: 0/25144015; next\nOID:\n>> 140986\n>> 2006- 12- 22 10:50:09 LOG: next MultiXactId: 12149; next\nMultiXactOffset:\n>> 24306\n>> 2006- 12- 22 10:50:09 LOG: database system was not properly shut\ndown;\n>> automatic recovery in progress\n>> 2006- 12- 22 10:50:09 LOG: redo starts at 2E/8729A6E8\n>> \n>> \n>> This has been running for 20 minutes. What can I do? Please help!\n>> \n>> --------------------------- (end of\nbroadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>> \n>> http://archives.postgresql.org\n> \n> --------------------------- (end of\nbroadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Fri, 22 Dec 2006 11:24:08 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "The archive_status directory is empty. I've never seen any files in\nthere and I've never set archive_command.\n\nWell, the problem has since resolved, but here is what is in the\ndirectory now. Previously there were hundreds of files, but these\ndisappeared after Postgres performed the automatic recovery.\n\n12/22/2006 11:16 AM 16,777,216 0000000100000030000000D2\n12/22/2006 11:17 AM 16,777,216 0000000100000030000000D3\n12/22/2006 11:17 AM 16,777,216 0000000100000030000000D4\n12/22/2006 11:17 AM 16,777,216 0000000100000030000000D5\n12/22/2006 11:18 AM 16,777,216 0000000100000030000000D6\n12/22/2006 11:19 AM 16,777,216 0000000100000030000000D7\n12/22/2006 11:19 AM 16,777,216 0000000100000030000000D8\n12/22/2006 11:19 AM 16,777,216 0000000100000030000000D9\n12/22/2006 11:19 AM 16,777,216 0000000100000030000000DA\n12/22/2006 11:21 AM 16,777,216 0000000100000030000000DB\n12/22/2006 10:07 AM 16,777,216 0000000100000030000000DC\n12/22/2006 10:07 AM 16,777,216 0000000100000030000000DD\n12/22/2006 10:07 AM 16,777,216 0000000100000030000000DE\n12/22/2006 10:33 AM 16,777,216 0000000100000030000000DF\n12/22/2006 10:08 AM 16,777,216 0000000100000030000000E0\n12/22/2006 10:32 AM 16,777,216 0000000100000030000000E1\n12/22/2006 10:08 AM 16,777,216 0000000100000030000000E2\n12/22/2006 10:08 AM 16,777,216 0000000100000030000000E3\n12/22/2006 10:17 AM 16,777,216 0000000100000030000000E4\n12/22/2006 10:11 AM 16,777,216 0000000100000030000000E5\n12/22/2006 11:10 AM 16,777,216 0000000100000030000000E6\n12/22/2006 11:11 AM 16,777,216 0000000100000030000000E7\n12/22/2006 11:15 AM 16,777,216 0000000100000030000000E8\n12/22/2006 11:15 AM 16,777,216 0000000100000030000000E9\n12/22/2006 11:15 AM 16,777,216 0000000100000030000000EA\n12/22/2006 11:16 AM 16,777,216 0000000100000030000000EB\n12/22/2006 11:16 AM 16,777,216 0000000100000030000000EC\n12/22/2006 11:16 AM 16,777,216 0000000100000030000000ED\n12/18/2006 08:52 PM <DIR> archive_status\n 28 File(s) 469,762,048 bytes\n 3 Dir(s) 10,206,756,864 bytes free\n\nOn Fri, 22 Dec 2006 17:02:43 +0000, \"Simon Riggs\"\n<[email protected]> said:\n> On Fri, 2006-12-22 at 11:52 -0500, Jeremy Haile wrote:\n> \n> > I would still appreciate ... explaining how to estimate size for a\n> > pg_xlog partition. It seems like it can vary considerably depending on\n> > how intensive your current transactions are. Is there a way to\n> > determine a maximum?\n> \n> There should be at most 2*checkpoint_segments+1 files in pg_xlog, which\n> are 16MB each. So you shouldn't be having a problem.\n> \n> If there are more than this, it could be because you have\n> currently/previously had archive_command set and the archive command\n> failed to execute correctly, or the database was shutdown/crashed prior\n> to the archive commands being executed.\n> \n> IIRC there was a bug that allowed this to happen, but that was some time\n> ago.\n> \n> Perhaps you could show us the dir listing, so we can check that there is\n> not a new problem emerging? Can you also show us the contents of the\n> pg_xlog/archive_status directory? Thanks.\n> \n> -- \n> Simon Riggs \n> EnterpriseDB http://www.enterprisedb.com\n> \n> \n",
"msg_date": "Fri, 22 Dec 2006 12:30:25 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "On Fri, 2006-12-22 at 12:30 -0500, Jeremy Haile wrote:\n> The archive_status directory is empty. I've never seen any files in\n> there and I've never set archive_command.\n> \n> Well, the problem has since resolved, but here is what is in the\n> directory now. Previously there were hundreds of files, but these\n> disappeared after Postgres performed the automatic recovery.\n\nWhat were you doing before the server crashed?\n\nDid you previously have checkpoint_segments set higher? When/how was it\nreduced?\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Dec 2006 17:36:39 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "checkpoint_segments has been set at 12 for a while and was never set\nhigher than that. (before that it was set to the PG default - 3 I think)\n\nBefore the server crashed I was running an update that updates a boolean\nflag on two large tables (10 million rows each) for transactions older\nthan today (roughly 80% of the rows) The transaction ran for a long\ntime and I assume is what caused the pg_xlog to fill up.\n\nOn Fri, 22 Dec 2006 17:36:39 +0000, \"Simon Riggs\"\n<[email protected]> said:\n> On Fri, 2006-12-22 at 12:30 -0500, Jeremy Haile wrote:\n> > The archive_status directory is empty. I've never seen any files in\n> > there and I've never set archive_command.\n> > \n> > Well, the problem has since resolved, but here is what is in the\n> > directory now. Previously there were hundreds of files, but these\n> > disappeared after Postgres performed the automatic recovery.\n> \n> What were you doing before the server crashed?\n> \n> Did you previously have checkpoint_segments set higher? When/how was it\n> reduced?\n> \n> -- \n> Simon Riggs \n> EnterpriseDB http://www.enterprisedb.com\n> \n> \n",
"msg_date": "Fri, 22 Dec 2006 12:39:48 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> As I understand it, the log space accumulates for the oldest transaction\n> which is still running, and all transactions which started after it.\n\nNo, pg_xlog can be truncated as soon as a checkpoint occurs. If Jeremy\nwasn't using archive_command then the only possible explanation for\nbloated pg_xlog is that checkpoints were failing. Which is not unlikely\nif the *data* partition runs out of space. Were there gripes in the log\nbefore the system crash? The scenario we've seen in the past is\n\n* data partition out of space, so writes fail\n* each time Postgres attempts a checkpoint, writes fail, so the\n checkpoint fails. No data loss at this point, the dirty buffers\n just stay in memory.\n* pg_xlog bloats because we can't truncate away old segments\n* eventually pg_xlog runs out of space, at which point we PANIC\n and can't continue running the database\n\nOnce you free some space on the data partition and restart, you should\nbe good to go --- there will be no loss of committed transactions, since\nall the operations are in pg_xlog. Might take a little while to replay\nall that log though :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2006 13:14:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog "
},
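A possible way to watch for the failure scenario Tom describes before it reaches the PANIC stage; this is my own suggestion, not something proposed in the thread, and it assumes the superuser-only pg_ls_dir() function (present from 8.1, if I recall correctly).

-- Count WAL segment files (ignoring the archive_status subdirectory) and
-- compare against 2 * checkpoint_segments + 1; sustained growth beyond
-- that suggests checkpoints are failing, e.g. because the data partition
-- is out of space.
SELECT count(*) AS wal_files
  FROM pg_ls_dir('pg_xlog') AS f
 WHERE f <> 'archive_status';

SHOW checkpoint_segments;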
{
"msg_contents": ">>> On Fri, Dec 22, 2006 at 12:14 PM, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> As I understand it, the log space accumulates for the oldest\ntransaction\n>> which is still running, and all transactions which started after\nit.\n> \n> No, pg_xlog can be truncated as soon as a checkpoint occurs.\n \nThanks. Good to know. I had missed that.\n \n> If Jeremy\n> wasn't using archive_command then the only possible explanation for\n> bloated pg_xlog is that checkpoints were failing. Which is not\nunlikely\n> if the *data* partition runs out of space. Were there gripes in the\nlog\n> before the system crash? The scenario we've seen in the past is\n> \n> * data partition out of space, so writes fail\n> * each time Postgres attempts a checkpoint, writes fail, so the\n> checkpoint fails. No data loss at this point, the dirty buffers\n> just stay in memory.\n> * pg_xlog bloats because we can't truncate away old segments\n \nSo, at this point, if space is freed on the data partition somehow,\nPostgres recovers with no problems? (i.e.,, the database is still\nrunning and no requests have been terminated abnormally due to the space\nproblems?)\n \n> * eventually pg_xlog runs out of space, at which point we PANIC\n> and can't continue running the database\n> \n> Once you free some space on the data partition and restart, you\nshould\n> be good to go --- there will be no loss of committed transactions,\nsince\n> all the operations are in pg_xlog. Might take a little while to\nreplay\n> all that log though :- (\n \nJust to confirm what I would assume at this point -- non-committed\ntransactions should roll back cleanly; it is reasonable to assume no\ncorruption at this point?\n \nThanks,\n \n-Kevin\n \n\n",
"msg_date": "Fri, 22 Dec 2006 12:37:23 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote: \n>> before the system crash? The scenario we've seen in the past is\n>> \n>> * data partition out of space, so writes fail\n>> * each time Postgres attempts a checkpoint, writes fail, so the\n>> checkpoint fails. No data loss at this point, the dirty buffers\n>> just stay in memory.\n>> * pg_xlog bloats because we can't truncate away old segments\n \n> So, at this point, if space is freed on the data partition somehow,\n> Postgres recovers with no problems? (i.e.,, the database is still\n> running and no requests have been terminated abnormally due to the space\n> problems?)\n\nRight, no committed transactions have been lost. Depending on what you\nare doing, you might see individual transactions fail due to\nout-of-space --- an INSERT/UPDATE that couldn't find free space within\nits table would probably fail while trying to extend the table, and\nanything requiring a large temp file would fail.\n \n> Just to confirm what I would assume at this point -- non-committed\n> transactions should roll back cleanly; it is reasonable to assume no\n> corruption at this point?\n\nYeah, I would expect no problems.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2006 13:46:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog "
},
{
"msg_contents": "On Fri, 22 Dec 2006, Tom Lane wrote:\n\n> Date: Fri, 22 Dec 2006 13:14:18 -0500\n> From: Tom Lane <[email protected]>\n> To: Kevin Grittner <[email protected]>\n> Cc: Jeremy Haile <[email protected]>, [email protected]\n> Subject: Re: [PERFORM] URGENT: Out of disk space pg_xlog\n>\n> \"Kevin Grittner\" <[email protected]> writes:\n> > As I understand it, the log space accumulates for the oldest transaction\n> > which is still running, and all transactions which started after it.\n>\n> No, pg_xlog can be truncated as soon as a checkpoint occurs.\n\nEven for currently running transactions ?\n\nMy understanding was that checkpoint was syncing data files for commited\ntransactions.\n\nWhat happens to pg_xlogs when a transaction updates M of rows/tables and\nruns for hours?\n[snip]\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\nregards\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n15, Chemin des Monges +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n",
"msg_date": "Fri, 22 Dec 2006 19:47:05 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog "
},
{
"msg_contents": "[email protected] writes:\n> On Fri, 22 Dec 2006, Tom Lane wrote:\n>> No, pg_xlog can be truncated as soon as a checkpoint occurs.\n\n> Even for currently running transactions ?\n\nYes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Dec 2006 15:13:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog "
},
{
"msg_contents": "[email protected] wrote:\n> On Fri, 22 Dec 2006, Tom Lane wrote:\n> \n> > Date: Fri, 22 Dec 2006 13:14:18 -0500\n> > From: Tom Lane <[email protected]>\n> > To: Kevin Grittner <[email protected]>\n> > Cc: Jeremy Haile <[email protected]>, [email protected]\n> > Subject: Re: [PERFORM] URGENT: Out of disk space pg_xlog\n> >\n> > \"Kevin Grittner\" <[email protected]> writes:\n> > > As I understand it, the log space accumulates for the oldest transaction\n> > > which is still running, and all transactions which started after it.\n> >\n> > No, pg_xlog can be truncated as soon as a checkpoint occurs.\n> \n> Even for currently running transactions ?\n> \n> My understanding was that checkpoint was syncing data files for commited\n> transactions.\n\nNo, it syncs data files for all transactions, even those currently\nrunning.\n\n> What happens to pg_xlogs when a transaction updates M of rows/tables and\n> runs for hours?\n\nThey get recycled as the update goes.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 22 Dec 2006 17:25:43 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "On Fri, Dec 22, 2006 at 07:47:05PM +0100, [email protected] wrote:\n>> No, pg_xlog can be truncated as soon as a checkpoint occurs.\n> Even for currently running transactions ?\n\nIsn't that the entire point of having checkpoints in the first place? :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 22 Dec 2006 21:29:36 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n> > As I understand it, the log space accumulates for the oldest transaction\n> > which is still running, and all transactions which started after it.\n> \n> No, pg_xlog can be truncated as soon as a checkpoint occurs. If Jeremy\n> wasn't using archive_command then the only possible explanation for\n> bloated pg_xlog is that checkpoints were failing. Which is not unlikely\n> if the *data* partition runs out of space. Were there gripes in the log\n> before the system crash? The scenario we've seen in the past is\n> \n> * data partition out of space, so writes fail\n> * each time Postgres attempts a checkpoint, writes fail, so the\n> checkpoint fails. No data loss at this point, the dirty buffers\n> just stay in memory.\n> * pg_xlog bloats because we can't truncate away old segments\n> * eventually pg_xlog runs out of space, at which point we PANIC\n> and can't continue running the database\n> \n> Once you free some space on the data partition and restart, you should\n> be good to go --- there will be no loss of committed transactions, since\n> all the operations are in pg_xlog. Might take a little while to replay\n> all that log though :-(\n\nAmazing that all works. What I did not see is confirmation from the\nuser that the data directory filled up _before_ pg_xlog filled up.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Fri, 22 Dec 2006 17:15:49 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "Thanks for your reply.\n\nI learnt something again!\nOn Fri, 22 Dec 2006, Alvaro Herrera wrote:\n\n> Date: Fri, 22 Dec 2006 17:25:43 -0300\n> From: Alvaro Herrera <[email protected]>\n> To: [email protected]\n> Cc: Tom Lane <[email protected]>,\n> Kevin Grittner <[email protected]>,\n> Jeremy Haile <[email protected]>, [email protected]\n> Subject: Re: [PERFORM] URGENT: Out of disk space pg_xlog\n>\n> [email protected] wrote:\n> > On Fri, 22 Dec 2006, Tom Lane wrote:\n> >\n> > > Date: Fri, 22 Dec 2006 13:14:18 -0500\n> > > From: Tom Lane <[email protected]>\n> > > To: Kevin Grittner <[email protected]>\n> > > Cc: Jeremy Haile <[email protected]>, [email protected]\n> > > Subject: Re: [PERFORM] URGENT: Out of disk space pg_xlog\n> > >\n> > > \"Kevin Grittner\" <[email protected]> writes:\n> > > > As I understand it, the log space accumulates for the oldest transaction\n> > > > which is still running, and all transactions which started after it.\n> > >\n> > > No, pg_xlog can be truncated as soon as a checkpoint occurs.\n> >\n> > Even for currently running transactions ?\n> >\n> > My understanding was that checkpoint was syncing data files for commited\n> > transactions.\n>\n> No, it syncs data files for all transactions, even those currently\n> running.\n>\n> > What happens to pg_xlogs when a transaction updates M of rows/tables and\n> > runs for hours?\n>\n> They get recycled as the update goes.\n>\n>\n\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n15, Chemin des Monges +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n",
"msg_date": "Sat, 23 Dec 2006 15:00:37 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "> > Once you free some space on the data partition and restart, you should\n> > be good to go --- there will be no loss of committed transactions, since\n> > all the operations are in pg_xlog. Might take a little while to replay\n> > all that log though :-(\n> \n> Amazing that all works. What I did not see is confirmation from the\n> user that the data directory filled up _before_ pg_xlog filled up.\n\nAfter I freed up space on the pg_xlog partition and restarted, it took\nsome time to replay all of the log (15-20 minutes) and everything\nrecovered with no data corruption! However, the theory about the data\npartition filling up first didn't happen in my case. The data partition\nwas (and still is) less than 50% utilized. My pg_xlog files typically\nrun around 400MB, but with the long running update filled up the entire\n10GB partition. (which is now a 70 GB partition) \n\nSo, I'm still not sure what caused the problem. When I get back to work\n(or maybe sooner), I'll take a look in the PG logs and post anything\nthat looks suspicious here. Thanks for all of your comments and\nsuggestions. Even though I haven't figured out the root of the problem\nyet, they've been very informative.\n",
"msg_date": "Sat, 23 Dec 2006 12:32:48 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "Jeremy Haile wrote:\n> > > Once you free some space on the data partition and restart, you should\n> > > be good to go --- there will be no loss of committed transactions, since\n> > > all the operations are in pg_xlog. Might take a little while to replay\n> > > all that log though :-(\n> > \n> > Amazing that all works. What I did not see is confirmation from the\n> > user that the data directory filled up _before_ pg_xlog filled up.\n> \n> After I freed up space on the pg_xlog partition and restarted, it took\n> some time to replay all of the log (15-20 minutes) and everything\n> recovered with no data corruption! However, the theory about the data\n> partition filling up first didn't happen in my case. The data partition\n> was (and still is) less than 50% utilized. My pg_xlog files typically\n> run around 400MB, but with the long running update filled up the entire\n> 10GB partition. (which is now a 70 GB partition) \n> \n> So, I'm still not sure what caused the problem. When I get back to work\n> (or maybe sooner), I'll take a look in the PG logs and post anything\n> that looks suspicious here. Thanks for all of your comments and\n> suggestions. Even though I haven't figured out the root of the problem\n> yet, they've been very informative.\n\nThe bottom line is that we know of now cases where a long-running\ntransaction would delay recycling of the WAL files, so there is\ncertainly something not understood here.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Sat, 23 Dec 2006 13:13:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
},
{
"msg_contents": "On Sat, 2006-12-23 at 13:13 -0500, Bruce Momjian wrote:\n\n> The bottom line is that we know of now cases where a long-running\n> transaction would delay recycling of the WAL files, so there is\n> certainly something not understood here.\n\nWe can see from all of this that a checkpoint definitely didn't occur.\nTom's causal chain was just one way that could have happened, there\ncould well be others.\n\nI've noticed previously that a checkpoint can be starved out when trying\nto acquire the CheckpointStartLock. I've witnessed a two minute delay\nplus in obtaining the lock in the face of heavy transactions.\n\nIf wal_buffers is small enough, WAL write rate high enough and the\ntransaction rate high enough, a long queue can form for the\nWALWriteLock, which ensures that the CheckpointStartLock would queue\nindefinitely. \n\nI've tried implementing a queueable shared lock for the\nCheckpointStartLock. That helps the checkpoint, but it harms performance\nof other transactions waiting to commit, so I let that idea go.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Dec 2006 18:18:18 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT: Out of disk space pg_xlog"
}
] |
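The thread above hinges on how much WAL PostgreSQL keeps on disk between checkpoints. As a rough illustration only (it assumes the 8.x-era 16MB segment size and the checkpoint_segments setting, which later releases replaced with max_wal_size), the steady-state pg_xlog footprint is normally bounded by about (2 * checkpoint_segments + 1) * 16MB, and a manual CHECKPOINT lets segments that are no longer needed be recycled sooner:

    -- Sketch: inspect the settings that bound pg_xlog size (8.x GUC names)
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('checkpoint_segments', 'checkpoint_timeout', 'wal_buffers');

    -- Steady state is roughly (2 * checkpoint_segments + 1) * 16MB of WAL;
    -- forcing a checkpoint allows old segments to be recycled sooner.
    CHECKPOINT;

If pg_xlog grows far beyond that bound, as in Jeremy's case, the likely explanation is that checkpoints themselves were failing or being starved, which is the direction Tom and Simon point in above.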
[
{
"msg_contents": "Hey Everyone,\n\nI am having a bit of trouble with a web host, and was wondering as what\nyou would class as a high level of traffic to a database (queries per\nsecond) to an average server running postgres in a shared hosting\nenvironment (very modern servers).\n\nMany Thanks in Advance,\nOliver\n\n",
"msg_date": "23 Dec 2006 00:12:03 -0800",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "What you would consider as heavy traffic?"
},
{
"msg_contents": "Depends on what the query is. If the queries take 3 to 5 days to \nexecute, then 1 query per day on a 4 CPU machine would be at capacity.\n\nOn 23-Dec-06, at 3:12 AM, [email protected] wrote:\n\n> Hey Everyone,\n>\n> I am having a bit of trouble with a web host, and was wondering as \n> what\n> you would class as a high level of traffic to a database (queries per\n> second) to an average server running postgres in a shared hosting\n> environment (very modern servers).\n>\n> Many Thanks in Advance,\n> Oliver\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n",
"msg_date": "Fri, 29 Dec 2006 12:32:39 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What you would consider as heavy traffic?"
}
] |
[
{
"msg_contents": "Hi List(s);\n\nI'm wanting to find more details per the following methods, can someone \nexplain to me exactly what each of these methods is, how its implemented in \npostgres or point me to some docs or README's that explain these methods?\n\nSome of 'em are obviously no-brainers but I'm writing a postgres book and want \nto make sure I convey the concepts properly and technically correct.\n\n- Bitmap Scans\n- Hash Aggregation\n- Hash Joins\n- Index Scans\n- Merge Joins\n- Nested Loop Joins\n- TID Scans\n\n\nThanks in advance for your help...\n\n",
"msg_date": "Sun, 24 Dec 2006 12:15:18 -0700",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions about planner methods"
},
{
"msg_contents": "On Sun, Dec 24, 2006 at 12:15:18PM -0700, Kevin Kempter wrote:\n> Hi List(s);\n> \n> I'm wanting to find more details per the following methods, can someone \n> explain to me exactly what each of these methods is, how its implemented in \n> postgres or point me to some docs or README's that explain these methods?\n\nThere's plenty of the comments in the files that implement them (the\nexecutor directory. Have you checked them?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.",
"msg_date": "Sun, 24 Dec 2006 21:49:19 +0100",
"msg_from": "Martijn van Oosterhout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Questions about planner methods"
}
] |
[
{
"msg_contents": "Hi All,\n\nA friend has asked me about creating a unique table for individual users \nthat sign up for his site. (In essence, each user who signs up would \nessentially get a set of CREATE TABLE {users,friends,movies,eats}_<id> ( \n... ); statements executed, the idea being to reduce the number of rows \nthat the DB needs to search or index/update in regards to a particular \nuser id.) The just seems ludicrous to me, because the database still \nneeds to find those tables from its internal structure, not to mention \nthat it just seems silly to me from a design perspective. Something \nabout unable to optimize any queries because not only is the WHERE \nclause in flux, but so is the FROM clause.\n\nQuestion: Could someone explain to me why this would be bad idea, \nbecause I can't put into words why it is. Or is it a good idea? \n(Towards learning, I'm looking for a technical, Postgres oriented \nanswer, as well as the more general answer.)\n\nSince he's worried about performance, methinks a better idea would be to \nuse the partitioning support of Postgres. Is this a good hunch?\n\n(Side question: When did Postgres get partitioning support? I see it \nreferenced in the 8.1 documentation but not before.)\n\n\nThanks,\n\nKevin\n",
"msg_date": "Tue, 26 Dec 2006 13:36:43 -0500",
"msg_from": "Kevin Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning"
},
{
"msg_contents": "Kevin Hunter <[email protected]> writes:\n> A friend has asked me about creating a unique table for individual users \n> that sign up for his site. (In essence, each user who signs up would \n> essentially get a set of CREATE TABLE {users,friends,movies,eats}_<id> ( \n> ... ); statements executed, the idea being to reduce the number of rows \n> that the DB needs to search or index/update in regards to a particular \n> user id.) The just seems ludicrous to me, because the database still \n> needs to find those tables from its internal structure, not to mention \n> that it just seems silly to me from a design perspective. Something \n> about unable to optimize any queries because not only is the WHERE \n> clause in flux, but so is the FROM clause.\n\n> Question: Could someone explain to me why this would be bad idea, \n> because I can't put into words why it is.\n\nI thought you did a fine job right there ;-). In essence this would be\nreplacing one level of indexing with two, which is unlikely to be a win.\nIf you have exactly M rows in each of N tables then theoretically your\nlookup costs would be about O(log(N) + log(M)), which is nominally the\nsame as O(log(M*N)) which is the cost to index into one big table --- so\nat best you break even, and that's ignoring the fact that index search\nhas a nonzero startup cost that'll be paid twice in the first case.\nBut the real problem is that if the N tables contain different numbers\nof rows then you have an unevenly filled search tree, which is a net\nloss.\n\nMost DBMSes aren't really designed to scale to many thousands of tables\nanyway. In Postgres this would result in many thousands of files in\nthe same database directory, which probably creates some filesystem\nlookup inefficiencies in addition to whatever might be inherent to\nPostgres.\n\nPartitioning is indeed something that is commonly done, but on a very\ncoarse grain --- you might have a dozen or two active partitions, not\nthousands. The point of partitioning is either to spread a huge table\nacross multiple filesystems (and how many filesystems have you got?)\nor else to make predictable removals of segments of the data cheap (for\ninstance, dropping the oldest month's worth of data once a month, in a\ntable where you only keep the last year or so's worth of data on-line).\nI can't see doing it on a per-user basis.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Dec 2006 14:55:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning "
},
{
"msg_contents": "On 26 Dec 2006 at 2:55p -0500, Tom Lane wrote:\n> Kevin Hunter <[email protected]> writes:\n>> A friend has asked me about creating a unique table for individual users \n>> that sign up for his site. (In essence, each user who signs up would \n>> essentially get a set of CREATE TABLE {users,friends,movies,eats}_<id> ( \n>> ... ); statements executed, the idea being to reduce the number of rows \n>> that the DB needs to search or index/update in regards to a particular \n>> user id.) The just seems ludicrous to me, because the database still \n>> needs to find those tables from its internal structure, not to mention \n>> that it just seems silly to me from a design perspective. Something \n>> about unable to optimize any queries because not only is the WHERE \n>> clause in flux, but so is the FROM clause.\n>\n>> Question: Could someone explain to me why this would be bad idea, \n>> because I can't put into words why it is.\n>\n> I thought you did a fine job right there ;-). In essence this would be \n> replacing one level of indexing with two, which is unlikely to be a win. \n> If you have exactly M rows in each of N tables then theoretically your \n> lookup costs would be about O(log(N) + log(M)), which is nominally the \n> same as O(log(M*N)) which is the cost to index into one big table --- so \n> at best you break even, and that's ignoring the fact that index search \n> has a nonzero startup cost that'll be paid twice in the first case. \n> But the real problem is that if the N tables contain different numbers \n> of rows then you have an unevenly filled search tree, which is a net \n> loss.\n\nHurm. If I remember my Algorithms/Data Structures course, that implies\nthat table lookup is implemented with a B-Tree . . . right? Since at\nSQL preparation time the tables in the query are known, why couldn't you\nuse a hash lookup? In the above case, that would make it effectively\nO(1 + log(M)) or O(log(M)). Granted, it's /still/ a bad idea because of\nthe next paragraph . . .\n\n> Most DBMSes aren't really designed to scale to many thousands of tables \n> anyway. In Postgres this would result in many thousands of files in \n> the same database directory, which probably creates some filesystem \n> lookup inefficiencies in addition to whatever might be inherent to \n> Postgres.\n\nSo, still a bad idea, but I couldn't immediately think of why. Thank you.\n\n> Partitioning is indeed something that is commonly done, but on a very \n> coarse grain --- you might have a dozen or two active partitions, not \n> thousands. The point of partitioning is either to spread a huge table \n> across multiple filesystems (and how many filesystems have you got?) \n> or else to make predictable removals of segments of the data cheap (for \n> instance, dropping the oldest month's worth of data once a month, in a \n> table where you only keep the last year or so's worth of data on-line).\n\nAh! I was missing where to hang/put partitioning in my head. Thank you \nagain!\n\n> I can't see doing it on a per-user basis.\n\nPerhaps not on a per-user basis, but he could certainly improve access\ntimes by partitioning even coursely. I'll point him in that direction \nAre there other, perhaps better ways to improve the access times? (Now \nI'm curious just for my sake.) The best that I keep reading is just to \ndo as much in parallel as possible (i.e. multiple systems) and to use \nPostgres ( :) ).\n\nThanks,\n\nKevin\n",
"msg_date": "Tue, 26 Dec 2006 17:29:58 -0500",
"msg_from": "Kevin Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [NOVICE] Partitioning"
},
{
"msg_contents": "Kevin Hunter <[email protected]> writes:\n> On 26 Dec 2006 at 2:55p -0500, Tom Lane wrote:\n>> I thought you did a fine job right there ;-). In essence this would be \n>> replacing one level of indexing with two, which is unlikely to be a win. \n>> If you have exactly M rows in each of N tables then theoretically your \n>> lookup costs would be about O(log(N) + log(M)), which is nominally the \n>> same as O(log(M*N)) which is the cost to index into one big table --- so \n\n> Hurm. If I remember my Algorithms/Data Structures course, that implies\n> that table lookup is implemented with a B-Tree . . . right?\n\nb-tree or something else with log-N behavior. I think it can be proved\nthat every search method is at best log-N once N gets large enough, but\nof course that might be for N far beyond any practical values.\n\n> Since at SQL preparation time the tables in the query are known, why\n> couldn't you use a hash lookup?\n\nThe hash index implementation currently available in Postgres hasn't\never been proven to beat our b-tree implementation, on any dimension.\nThis might be a matter of how much effort has been thrown at it, or\nmaybe there's some fundamental problem; but anyway you won't find a\nlot of enthusiasm around here for moving the system-catalog indexes\nto hashing...\n\n>> I can't see doing it on a per-user basis.\n\n> Perhaps not on a per-user basis, but he could certainly improve access\n> times by partitioning even coursely. I'll point him in that direction \n> Are there other, perhaps better ways to improve the access times?\n\nActually, I forgot to mention an interesting talk I heard last month,\nwherein Casey Duncan explained how pandora.com scales across an\nunreasonable number of users whose queries are mostly-but-not-always\nlocalized. Essentially they partition their tables on the basis of\na hash of the user ID, where there are as many hash result values\nas they have servers. The point still remains though that there\nare far fewer partitions than users. I think what you want to take\naway is that you choose the number of partitions based on physical\nimplementation numbers (\"how many filesystems do we have today\")\nand not on logical numbers like \"how many users do we have today\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Dec 2006 23:37:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [NOVICE] Partitioning "
}
] |
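Tom's advice is to keep partitioning coarse and keyed to physical layout rather than to individual users. For concreteness, here is a minimal, hypothetical sketch of the 8.1-era mechanism he is referring to (inheritance plus CHECK constraints plus constraint_exclusion); every table and column name below is invented for illustration:

    -- Parent table plus a small, fixed number of hash partitions
    CREATE TABLE user_events (
        user_id    integer   NOT NULL,
        event_time timestamp NOT NULL,
        payload    text
    );

    CREATE TABLE user_events_p0 (CHECK (user_id % 4 = 0)) INHERITS (user_events);
    CREATE TABLE user_events_p1 (CHECK (user_id % 4 = 1)) INHERITS (user_events);
    CREATE TABLE user_events_p2 (CHECK (user_id % 4 = 2)) INHERITS (user_events);
    CREATE TABLE user_events_p3 (CHECK (user_id % 4 = 3)) INHERITS (user_events);

    -- The planner can skip non-matching children only when the query
    -- repeats the partitioning predicate, e.g.
    SET constraint_exclusion = on;
    SELECT * FROM user_events WHERE user_id = 42 AND user_id % 4 = 2;

Note that 42 % 4 is 2, so the example predicate is consistent; in practice the application (or a rule or trigger) also has to route inserts to the correct child table.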
[
{
"msg_contents": "how can i get the disk usage for each table? can i do it via SQL? \n\n\n\nThanks,\n\nMailing-Lists\n",
"msg_date": "Thu, 28 Dec 2006 10:42:52 +0800",
"msg_from": "JM <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need Help"
},
{
"msg_contents": "\n--- JM <[email protected]> wrote:\n\n> how can i get the disk usage for each table? can i do it via SQL? \n\nThis function should do what you want:\n\npg_relation_size(text) or pg_total_relation_size(text) \n\nI found it on the following link:\n\nhttp://www.postgresql.org/docs/8.2/interactive/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE\n\nRegards,\n\nRichard Broersma Jr.\n",
"msg_date": "Wed, 27 Dec 2006 18:54:44 -0800 (PST)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need Help"
}
] |
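As a usage example of the functions Richard points to, a query along these lines lists per-table disk usage; this is only a sketch, assuming the 8.1+ dbsize functions and the standard pg_tables view:

    SELECT schemaname, tablename,
           pg_size_pretty(pg_total_relation_size(
               quote_ident(schemaname) || '.' || quote_ident(tablename))) AS total_size
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY pg_total_relation_size(
               quote_ident(schemaname) || '.' || quote_ident(tablename)) DESC;

pg_relation_size() counts only the table's main data files, while pg_total_relation_size() also includes its indexes and TOAST data.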
[
{
"msg_contents": "I don't want to violate any license agreement by discussing performance, \nso I'll refer to a large, commercial PostgreSQL-compatible DBMS only as \nBigDBMS here.\n\nI'm trying to convince my employer to replace BigDBMS with PostgreSQL \nfor at least some of our Java applications. As a proof of concept, I \nstarted with a high-volume (but conceptually simple) network data \ncollection application. This application collects files of 5-minute \nusage statistics from our network devices, and stores a raw form of \nthese stats into one table and a normalized form into a second table. \nWe are currently storing about 12 million rows a day in the normalized \ntable, and each month we start new tables. For the normalized data, the \napp inserts rows initialized to zero for the entire current day first \nthing in the morning, then throughout the day as stats are received, \nexecutes updates against existing rows. So the app has very high update \nactivity.\n\nIn my test environment, I have a dual-x86 Linux platform running the \napplication, and an old 4-CPU Sun Enterprise 4500 running BigDBMS and \nPostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk arrays \nattached, each with 12 SCSI hard disks (a D1000 and 3 A1000, for those \nfamiliar with these devices.) The arrays are set up with RAID5. So I'm \nworking with a consistent hardware platform for this comparison. I'm \nonly processing a small subset of files (144.)\n\nBigDBMS processed this set of data in 20000 seconds, with all foreign \nkeys in place. With all foreign keys in place, PG took 54000 seconds to \ncomplete the same job. I've tried various approaches to autovacuum \n(none, 30-seconds) and it doesn't seem to make much difference. What \ndoes seem to make a difference is eliminating all the foreign keys; in \nthat configuration, PG takes about 30000 seconds. Better, but BigDBMS \nstill has it beat significantly.\n\nI've got PG configured so that that the system database is on disk array \n2, as are the transaction log files. The default table space for the \ntest database is disk array 3. I've got all the reference tables (the \ntables to which the foreign keys in the stats tables refer) on this \narray. I also store the stats tables on this array. Finally, I put the \nindexes for the stats tables on disk array 4. I don't use disk array 1 \nbecause I believe it is a software array.\n\nI'm out of ideas how to improve this picture any further. I'd \nappreciate some suggestions. Thanks.\n\n-- \nGuy Rouillier\n\n",
"msg_date": "Thu, 28 Dec 2006 00:46:49 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Here are my few recommendations that might help you:\n\n- You will need to do table partitioning (\nhttp://www.postgresql.org/docs/current/static/ddl-partitioning.html) as you\nare storing quite a lot of data in one table per day.\n\n- You are using a RAID5 setup which is something that can also affect\nperformance so switching to RAID1 might help you there, but again you have a\nRAID5 with 12 disks so hmm that shouldn't be that much of a problem.\n\n- Have you done the tuning for postgresql.conf parameters? if not then you\nreally need to do this for like checkpoint segments, random page cost,\nshared buffers, cache size, fsm pages, vacuum cost delay, work_mem, bgwriter\netc etc. You can get good advice for tuning these parameters at -->\nhttp://www.powerpostgresql.com/PerfList/\n\n- For autovacuuming you need to properly tune the thresholds so that the\nvacuum and analyze is done at the right time not affecting the database\nserver performance. (You can find help for this at\nhttp://www.postgresql.org/docs/current/static/routine-vacuuming.html under \"\n22.1.4. The auto-vacuum daemon\")\n\n- You will need to separate your transactional logs i.e. pg_xlog folder to a\ndifferent drive other then your database server drive. This can be done by\ncreating symlinks for pg_xlog folder.\n\n- I hope you are doing proper connection pool management, because good use\nof database connections can be really effect the overall performance,\nconnections can be expensive to create, and consume memory if they are not\nproperly exited.\n\nHope that helps your tests...\n\n----------------\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 12/28/06, Guy Rouillier <[email protected]> wrote:\n>\n> I don't want to violate any license agreement by discussing performance,\n> so I'll refer to a large, commercial PostgreSQL-compatible DBMS only as\n> BigDBMS here.\n>\n> I'm trying to convince my employer to replace BigDBMS with PostgreSQL\n> for at least some of our Java applications. As a proof of concept, I\n> started with a high-volume (but conceptually simple) network data\n> collection application. This application collects files of 5-minute\n> usage statistics from our network devices, and stores a raw form of\n> these stats into one table and a normalized form into a second table.\n> We are currently storing about 12 million rows a day in the normalized\n> table, and each month we start new tables. For the normalized data, the\n> app inserts rows initialized to zero for the entire current day first\n> thing in the morning, then throughout the day as stats are received,\n> executes updates against existing rows. So the app has very high update\n> activity.\n>\n> In my test environment, I have a dual-x86 Linux platform running the\n> application, and an old 4-CPU Sun Enterprise 4500 running BigDBMS and\n> PostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk arrays\n> attached, each with 12 SCSI hard disks (a D1000 and 3 A1000, for those\n> familiar with these devices.) The arrays are set up with RAID5. So I'm\n> working with a consistent hardware platform for this comparison. I'm\n> only processing a small subset of files (144.)\n>\n> BigDBMS processed this set of data in 20000 seconds, with all foreign\n> keys in place. With all foreign keys in place, PG took 54000 seconds to\n> complete the same job. I've tried various approaches to autovacuum\n> (none, 30-seconds) and it doesn't seem to make much difference. 
What\n> does seem to make a difference is eliminating all the foreign keys; in\n> that configuration, PG takes about 30000 seconds. Better, but BigDBMS\n> still has it beat significantly.\n>\n> I've got PG configured so that that the system database is on disk array\n> 2, as are the transaction log files. The default table space for the\n> test database is disk array 3. I've got all the reference tables (the\n> tables to which the foreign keys in the stats tables refer) on this\n> array. I also store the stats tables on this array. Finally, I put the\n> indexes for the stats tables on disk array 4. I don't use disk array 1\n> because I believe it is a software array.\n>\n> I'm out of ideas how to improve this picture any further. I'd\n> appreciate some suggestions. Thanks.\n>\n> --\n> Guy Rouillier\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n",
"msg_date": "Thu, 28 Dec 2006 14:06:44 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Guy,\n\nDid you tune postgresql ? How much memory does the box have? Have you \ntuned postgresql ?\n\nDave\nOn 28-Dec-06, at 12:46 AM, Guy Rouillier wrote:\n\n> I don't want to violate any license agreement by discussing \n> performance, so I'll refer to a large, commercial PostgreSQL- \n> compatible DBMS only as BigDBMS here.\n>\n> I'm trying to convince my employer to replace BigDBMS with \n> PostgreSQL for at least some of our Java applications. As a proof \n> of concept, I started with a high-volume (but conceptually simple) \n> network data collection application. This application collects \n> files of 5-minute usage statistics from our network devices, and \n> stores a raw form of these stats into one table and a normalized \n> form into a second table. We are currently storing about 12 million \n> rows a day in the normalized table, and each month we start new \n> tables. For the normalized data, the app inserts rows initialized \n> to zero for the entire current day first thing in the morning, then \n> throughout the day as stats are received, executes updates against \n> existing rows. So the app has very high update activity.\n>\n> In my test environment, I have a dual-x86 Linux platform running \n> the application, and an old 4-CPU Sun Enterprise 4500 running \n> BigDBMS and PostgreSQL 8.2.0 (only one at a time.) The Sun box has \n> 4 disk arrays attached, each with 12 SCSI hard disks (a D1000 and 3 \n> A1000, for those familiar with these devices.) The arrays are set \n> up with RAID5. So I'm working with a consistent hardware platform \n> for this comparison. I'm only processing a small subset of files \n> (144.)\n>\n> BigDBMS processed this set of data in 20000 seconds, with all \n> foreign keys in place. With all foreign keys in place, PG took \n> 54000 seconds to complete the same job. I've tried various \n> approaches to autovacuum (none, 30-seconds) and it doesn't seem to \n> make much difference. What does seem to make a difference is \n> eliminating all the foreign keys; in that configuration, PG takes \n> about 30000 seconds. Better, but BigDBMS still has it beat \n> significantly.\n>\n> I've got PG configured so that that the system database is on disk \n> array 2, as are the transaction log files. The default table space \n> for the test database is disk array 3. I've got all the reference \n> tables (the tables to which the foreign keys in the stats tables \n> refer) on this array. I also store the stats tables on this \n> array. Finally, I put the indexes for the stats tables on disk \n> array 4. I don't use disk array 1 because I believe it is a \n> software array.\n>\n> I'm out of ideas how to improve this picture any further. I'd \n> appreciate some suggestions. Thanks.\n>\n> -- \n> Guy Rouillier\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Thu, 28 Dec 2006 07:38:04 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "\"Shoaib Mir\" <[email protected]> writes:\n> Here are my few recommendations that might help you:\n> [ snip good advice ]\n\nAnother thing to look at is whether you are doing inserts/updates as\nindividual transactions, and if so see if you can \"batch\" them to\nreduce the per-transaction overhead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Dec 2006 09:56:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS "
},
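To make Tom's batching point concrete, the idea is simply to wrap a batch of statements in one explicit transaction instead of letting each statement commit on its own, so the per-commit overhead (the fsync of the commit record in particular) is paid once per batch rather than once per row. A minimal sketch, with an invented table and values:

    BEGIN;
    INSERT INTO device_stats (device_id, sample_time, bytes_in) VALUES (1, now(), 1000);
    INSERT INTO device_stats (device_id, sample_time, bytes_in) VALUES (2, now(), 2500);
    -- ... a few hundred more rows per batch ...
    COMMIT;

The same effect can be had from client code by disabling autocommit and committing every N statements, which is what Guy describes doing later in the thread.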
{
"msg_contents": "On Fri, 29 Dec 2006, Alvaro Herrera wrote:\n\n> Ron wrote:\n>\n>> C= What file system are you using? Unlike BigDBMS, pg does not have\n>> its own native one, so you have to choose the one that best suits\n>> your needs. For update heavy applications involving lots of small\n>> updates jfs and XFS should both be seriously considered.\n>\n> Actually it has been suggested that a combination of ext2 (for WAL) and\n> ext3 (for data, with data journalling disabled) is a good performer.\n> AFAIK you don't want the overhead of journalling for the WAL partition.\n\nWhen benchmarking various options for a new PG server at one of my clients, I \ntried ext2 and ext3 (data=writeback) for the WAL and it appeared to be fastest \nto have ext2 for the WAL. The winning time was 157m46.713s for ext2, \n159m47.098s for combined ext3 data/xlog and 158m25.822s for ext3 \ndata=writeback. This was on an 8x150GB Raptor RAID10 on an Areca 1130 w/ 1GB \nBBU cache. This config benched out faster than a 6disk RAID10 + 2 disk RAID1 \nfor those of you who have been wondering if the BBU write back cache mitigates \nthe need for separate WAL (at least on this workload). Those are the fastest \ntimes for each config, but ext2 WAL was always faster than the other two \noptions. I didn't test any other filesystems in this go around.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Thu, 28 Dec 2006 14:15:31 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "At 12:46 AM 12/28/2006, Guy Rouillier wrote:\n>I don't want to violate any license agreement by discussing \n>performance, so I'll refer to a large, commercial \n>PostgreSQL-compatible DBMS only as BigDBMS here.\n>\n>I'm trying to convince my employer to replace BigDBMS with \n>PostgreSQL for at least some of our Java applications. As a proof \n>of concept, I started with a high-volume (but conceptually simple) \n>network data collection application. This application collects \n>files of 5-minute usage statistics from our network devices, and \n>stores a raw form of these stats into one table and a normalized \n>form into a second table. We are currently storing about 12 million \n>rows a day in the normalized table, and each month we start new \n>tables. For the normalized data, the app inserts rows initialized \n>to zero for the entire current day first thing in the morning, then \n>throughout the day as stats are received, executes updates against \n>existing rows. So the app has very high update activity.\n>\n>In my test environment, I have a dual-x86 Linux platform running the \n>application, and an old 4-CPU Sun Enterprise 4500 running BigDBMS \n>and PostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk \n>arrays attached, each with 12 SCSI hard disks (a D1000 and 3 A1000, \n>for those familiar with these devices.) The arrays are set up with \n>RAID5. So I'm working with a consistent hardware platform for this \n>comparison. I'm only processing a small subset of files (144.)\n>\n>BigDBMS processed this set of data in 20000 seconds, with all \n>foreign keys in place. With all foreign keys in place, PG took \n>54000 seconds to complete the same job. I've tried various \n>approaches to autovacuum (none, 30-seconds) and it doesn't seem to \n>make much difference. What does seem to make a difference is \n>eliminating all the foreign keys; in that configuration, PG takes \n>about 30000 seconds. Better, but BigDBMS still has it beat significantly.\n\nIf you are using pg configured as default installed, you are not \ngetting pg's best performance. Ditto using data structures optimized \nfor BigDBMS.\n\nA= go through each query and see what work_mem needs to be for that \nquery to be as RAM resident as possible. If you have enough RAM, set \nwork_mem for that query that large. Remember that work_mem is =per \nquery=, so queries running in parallel eat the sum of each of their work_mem's.\n\nB= Make sure shared buffers is set reasonably. A good rule of thumb \nfor 8.x is that shared buffers should be at least ~1/4 your RAM. If \nyour E4500 is maxed with RAM, there's a good chance shared buffers \nshould be considerably more than 1/4 of RAM.\n\nC= What file system are you using? Unlike BigDBMS, pg does not have \nits own native one, so you have to choose the one that best suits \nyour needs. For update heavy applications involving lots of small \nupdates jfs and XFS should both be seriously considered.\n\nD= Your table schema and physical table layout probably needs to \nchange. What BigDBMS likes here is most likely different from what pg likes.\n\nE= pg does not actually update records in place. It appends new \nrecords to the table and marks the old version invalid. This means \nthat things like pages size, RAID stripe size, etc etc may need to \nhave different values than they do for BigDBMS. 
Another consequence \nis that pg likes RAID 10 even more than most of its competitors.\n\nF= This may seem obvious, but how many of the foreign keys and other \noverhead do you actually need? Get rid of the unnecessary.\n\nG= Bother the folks at Sun, like Josh Berkus, who know pq inside and \nout +and+ know your HW (or have access to those that do ;-) )inside \nand out. I'll bet they'll have ideas I'm not thinking of.\n\nH= Explain Analyze is your friend. Slow queries may need better \ntable statistics, or better SQL, or may be symptoms of issues \"C\" or \n\"D\" above or ...\n\n>I've got PG configured so that that the system database is on disk \n>array 2, as are the transaction log files. The default table space \n>for the test database is disk array 3. I've got all the reference \n>tables (the tables to which the foreign keys in the stats tables \n>refer) on this array. I also store the stats tables on this \n>array. Finally, I put the indexes for the stats tables on disk \n>array 4. I don't use disk array 1 because I believe it is a software array.\nI= With 4 arrays of 12 HDs each, you definitely have enough spindles \nto place pg_xlog somewhere separate from all the other pg tables. In \naddition, you should analyze you table access patterns and then \nscatter them across your 4 arrays in such as way as to minimize head \ncontention.\n\n\n>I'm out of ideas how to improve this picture any further. I'd \n>appreciate some suggestions. Thanks.\nHope this helps,\n\nRon Peacetree \n\n",
"msg_date": "Fri, 29 Dec 2006 07:52:59 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
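Ron's point I, spreading tables, indexes, and pg_xlog across the four arrays, is normally done in PostgreSQL with tablespaces (plus a symlink or a separate mount for pg_xlog). A hypothetical sketch; the paths and object names below are invented for illustration:

    CREATE TABLESPACE stats_data  LOCATION '/array3/pgdata';
    CREATE TABLESPACE stats_index LOCATION '/array4/pgdata';

    CREATE TABLE device_stats (
        device_id   integer   NOT NULL,
        sample_time timestamp NOT NULL,
        bytes_in    bigint
    ) TABLESPACE stats_data;

    CREATE INDEX device_stats_time_idx
        ON device_stats (device_id, sample_time)
        TABLESPACE stats_index;

This matches what Guy describes already having set up; the remaining leverage is in choosing which tables share which spindles so that hot tables and their indexes do not contend for the same heads.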
{
"msg_contents": "Hi all,\n\n> A= go through each query and see what work_mem needs to be for that \n> query to be as RAM resident as possible. If you have enough RAM, set \n> work_mem for that query that large. Remember that work_mem is =per \n> query=, so queries running in parallel eat the sum of each of their \n> work_mem's.\n\nHow can I know what work_mem needs a query needs?\n\nRegards\n-- \nArnau\n",
"msg_date": "Fri, 29 Dec 2006 15:26:15 +0100",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Ron wrote:\n\n> C= What file system are you using? Unlike BigDBMS, pg does not have \n> its own native one, so you have to choose the one that best suits \n> your needs. For update heavy applications involving lots of small \n> updates jfs and XFS should both be seriously considered.\n\nActually it has been suggested that a combination of ext2 (for WAL) and\next3 (for data, with data journalling disabled) is a good performer.\nAFAIK you don't want the overhead of journalling for the WAL partition.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 29 Dec 2006 19:07:47 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "You should search the archives for Luke Lonegran's posting about how IO in\nPostgresql is significantly bottlenecked because it's not async. A 12 disk\narray is going to max out Postgresql's max theoretical write capacity to\ndisk, and therefore BigRDBMS is always going to win in such a config. You\ncan also look towards Bizgres which allegedly elimates some of these\nproblems, and is cheaper than most BigRDBMS products.\n\nAlex.\n\nOn 12/28/06, Guy Rouillier <[email protected]> wrote:\n>\n> I don't want to violate any license agreement by discussing performance,\n> so I'll refer to a large, commercial PostgreSQL-compatible DBMS only as\n> BigDBMS here.\n>\n> I'm trying to convince my employer to replace BigDBMS with PostgreSQL\n> for at least some of our Java applications. As a proof of concept, I\n> started with a high-volume (but conceptually simple) network data\n> collection application. This application collects files of 5-minute\n> usage statistics from our network devices, and stores a raw form of\n> these stats into one table and a normalized form into a second table.\n> We are currently storing about 12 million rows a day in the normalized\n> table, and each month we start new tables. For the normalized data, the\n> app inserts rows initialized to zero for the entire current day first\n> thing in the morning, then throughout the day as stats are received,\n> executes updates against existing rows. So the app has very high update\n> activity.\n>\n> In my test environment, I have a dual-x86 Linux platform running the\n> application, and an old 4-CPU Sun Enterprise 4500 running BigDBMS and\n> PostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk arrays\n> attached, each with 12 SCSI hard disks (a D1000 and 3 A1000, for those\n> familiar with these devices.) The arrays are set up with RAID5. So I'm\n> working with a consistent hardware platform for this comparison. I'm\n> only processing a small subset of files (144.)\n>\n> BigDBMS processed this set of data in 20000 seconds, with all foreign\n> keys in place. With all foreign keys in place, PG took 54000 seconds to\n> complete the same job. I've tried various approaches to autovacuum\n> (none, 30-seconds) and it doesn't seem to make much difference. What\n> does seem to make a difference is eliminating all the foreign keys; in\n> that configuration, PG takes about 30000 seconds. Better, but BigDBMS\n> still has it beat significantly.\n>\n> I've got PG configured so that that the system database is on disk array\n> 2, as are the transaction log files. The default table space for the\n> test database is disk array 3. I've got all the reference tables (the\n> tables to which the foreign keys in the stats tables refer) on this\n> array. I also store the stats tables on this array. Finally, I put the\n> indexes for the stats tables on disk array 4. I don't use disk array 1\n> because I believe it is a software array.\n>\n> I'm out of ideas how to improve this picture any further. I'd\n> appreciate some suggestions. Thanks.\n>\n> --\n> Guy Rouillier\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\nYou should search the archives for Luke Lonegran's posting about how IO in Postgresql is significantly bottlenecked because it's not async. A 12 disk array is going to max out Postgresql's max theoretical write capacity to disk, and therefore BigRDBMS is always going to win in such a config. 
You can also look towards Bizgres which allegedly elimates some of these problems, and is cheaper than most BigRDBMS products.\nAlex.On 12/28/06, Guy Rouillier <[email protected]> wrote:\nI don't want to violate any license agreement by discussing performance,so I'll refer to a large, commercial PostgreSQL-compatible DBMS only asBigDBMS here.I'm trying to convince my employer to replace BigDBMS with PostgreSQL\nfor at least some of our Java applications. As a proof of concept, Istarted with a high-volume (but conceptually simple) network datacollection application. This application collects files of 5-minuteusage statistics from our network devices, and stores a raw form of\nthese stats into one table and a normalized form into a second table.We are currently storing about 12 million rows a day in the normalizedtable, and each month we start new tables. For the normalized data, the\napp inserts rows initialized to zero for the entire current day firstthing in the morning, then throughout the day as stats are received,executes updates against existing rows. So the app has very high update\nactivity.In my test environment, I have a dual-x86 Linux platform running theapplication, and an old 4-CPU Sun Enterprise 4500 running BigDBMS andPostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk arrays\nattached, each with 12 SCSI hard disks (a D1000 and 3 A1000, for thosefamiliar with these devices.) The arrays are set up with RAID5. So I'mworking with a consistent hardware platform for this comparison. I'm\nonly processing a small subset of files (144.)BigDBMS processed this set of data in 20000 seconds, with all foreignkeys in place. With all foreign keys in place, PG took 54000 seconds tocomplete the same job. I've tried various approaches to autovacuum\n(none, 30-seconds) and it doesn't seem to make much difference. Whatdoes seem to make a difference is eliminating all the foreign keys; inthat configuration, PG takes about 30000 seconds. Better, but BigDBMS\nstill has it beat significantly.I've got PG configured so that that the system database is on disk array2, as are the transaction log files. The default table space for thetest database is disk array 3. I've got all the reference tables (the\ntables to which the foreign keys in the stats tables refer) on thisarray. I also store the stats tables on this array. Finally, I put theindexes for the stats tables on disk array 4. I don't use disk array 1\nbecause I believe it is a software array.I'm out of ideas how to improve this picture any further. I'dappreciate some suggestions. Thanks.--Guy Rouillier---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings",
"msg_date": "Fri, 29 Dec 2006 22:22:35 -0500",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Shoaib Mir\" <[email protected]> writes:\n>> Here are my few recommendations that might help you:\n>> [ snip good advice ]\n> \n> Another thing to look at is whether you are doing inserts/updates as\n> individual transactions, and if so see if you can \"batch\" them to\n> reduce the per-transaction overhead.\n\nThank you everyone who replied with suggestions. Unfortunately, this is \na background activity for me, so I can only work on it when I can \nsqueeze in time. Right now, I can't do anything; I swapped out a broken \nswitch in our network and the DB server is currently inaccessible ;(. I \nwill eventually work through all suggestions, but I'll start with the \nones I can respond to without further investigation.\n\nI'm not doing updates as individual transactions. I cannot use the Java \nbatch functionality because the code uses stored procedures to do the \ninserts and updates, and the PG JDBC driver cannot handle executing \nstored procedures in batch. Briefly, executing a stored procedure \nreturns a result set, and Java batches don't expect result sets.\n\nSo, in the code I turn autocommit off, and do a commit every 100 \nexecutions of the stored proc. The exact same code is running against \nBigDBMS, so any penalty from this approach should be evenly felt.\n\n-- \nGuy Rouillier\n",
"msg_date": "Sun, 31 Dec 2006 02:26:31 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Ron wrote:\n> \n>> C= What file system are you using? Unlike BigDBMS, pg does not have \n>> its own native one, so you have to choose the one that best suits \n>> your needs. For update heavy applications involving lots of small \n>> updates jfs and XFS should both be seriously considered.\n> \n> Actually it has been suggested that a combination of ext2 (for WAL) and\n> ext3 (for data, with data journalling disabled) is a good performer.\n> AFAIK you don't want the overhead of journalling for the WAL partition.\n\nI'm curious as to why ext3 for data with journalling disabled? Would \nthat not be the same as ext2?\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n",
"msg_date": "Tue, 02 Jan 2007 09:04:30 -0500",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "On �ri, 2007-01-02 at 09:04 -0500, Geoffrey wrote:\n> Alvaro Herrera wrote:\n> > \n> > Actually it has been suggested that a combination of ext2 (for WAL) and\n> > ext3 (for data, with data journalling disabled) is a good performer.\n> > AFAIK you don't want the overhead of journalling for the WAL partition.\n> \n> I'm curious as to why ext3 for data with journalling disabled? Would \n> that not be the same as ext2?\n\nI believe Alvaro was referring to ext3 with journalling enabled \nfor meta-data, but not for data.\nI also believe this is the standard ext3 configuration, but I\ncould be wrong on that.\n\ngnari\n\n\n\n",
"msg_date": "Tue, 02 Jan 2007 14:54:56 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nOn 2 Jan 2007, at 14:54, Ragnar wrote:\n\n> On �ri, 2007-01-02 at 09:04 -0500, Geoffrey wrote:\n>> Alvaro Herrera wrote:\n>>>\n>>> Actually it has been suggested that a combination of ext2 (for \n>>> WAL) and\n>>> ext3 (for data, with data journalling disabled) is a good performer.\n>>> AFAIK you don't want the overhead of journalling for the WAL \n>>> partition.\n>>\n>> I'm curious as to why ext3 for data with journalling disabled? Would\n>> that not be the same as ext2?\n>\n> I believe Alvaro was referring to ext3 with journalling enabled\n> for meta-data, but not for data.\n> I also believe this is the standard ext3 configuration, but I\n> could be wrong on that.\n>\n> gnari\n>\n>\nit doesn't really belong here but ext3 has\ndata journaled (data and meta data)\nordered (meta data journald but data written before meta data (default))\njournald (meta data only journal)\nmodes.\n\nThe performance differences between ordered and meta data only \njournaling should be very small enyway\n\n- --\n\nViele Gr��e,\nLars Heidieker\n\[email protected]\nhttp://paradoxon.info\n\n- ------------------------------------\n\nMystische Erkl�rungen.\nDie mystischen Erkl�rungen gelten f�r tief;\ndie Wahrheit ist, dass sie noch nicht einmal oberfl�chlich sind.\n -- Friedrich Nietzsche\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (Darwin)\n\niD8DBQFFmnJUcxuYqjT7GRYRApNrAJ9oYusdw+Io4iSZrEITTbFy2qDA4QCgmBW5\n7cpQZmlIv61EF2wP2yNXZhA=\n=glwc\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 2 Jan 2007 14:55:09 +0000",
"msg_from": "Lars Heidieker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "More specifically, you should set the noatime,data=writeback options in\nfstab on ext3 partitions for best performance. Correct?\n\n> it doesn't really belong here but ext3 has\n> data journaled (data and meta data)\n> ordered (meta data journald but data written before meta data (default))\n> journald (meta data only journal)\n> modes.\n> \n> The performance differences between ordered and meta data only \n> journaling should be very small enyway\n",
"msg_date": "Tue, 02 Jan 2007 09:58:43 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "On Fri, 2006-12-29 at 07:52 -0500, Ron wrote:\n> A= go through each query and see what work_mem needs to be for that \n> query to be as RAM resident as possible. If you have enough RAM, set \n> work_mem for that query that large. Remember that work_mem is =per \n> query=, so queries running in parallel eat the sum of each of their work_mem's.\n\nJust to clarify, from the docs on work_mem at\nhttp://www.postgresql.org/docs/current/static/runtime-config-\nresource.html :\n\n\"Specifies the amount of memory to be used by internal sort operations\nand hash tables before switching to temporary disk files. The value is\nspecified in kilobytes, and defaults to 1024 kilobytes (1 MB). Note that\nfor a complex query, several sort or hash operations might be running in\nparallel; each one will be allowed to use as much memory as this value\nspecifies before it starts to put data into temporary files. Also,\nseveral running sessions could be doing such operations concurrently. So\nthe total memory used could be many times the value of work_mem; it is\nnecessary to keep this fact in mind when choosing the value. Sort\noperations are used for ORDER BY, DISTINCT, and merge joins. Hash tables\nare used in hash joins, hash-based aggregation, and hash-based\nprocessing of IN subqueries.\"\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 02 Jan 2007 10:36:47 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
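The work_mem behaviour quoted in the documentation excerpt above is per sort or hash operation, not per connection, so a single complex query (and every concurrent session) can consume several multiples of it. A minimal sketch of raising it for just one session before a large sort, assuming a hypothetical table stats_raw(device_id, sample_time) and purely illustrative sizes:

    BEGIN;
    SET LOCAL work_mem = '64MB';        -- applies only to this transaction
    EXPLAIN ANALYZE
    SELECT device_id, sample_time
    FROM stats_raw
    ORDER BY device_id, sample_time;    -- the sort may now stay in memory
    COMMIT;

Each concurrent session running such a sort would use its own 64MB, which is the multiplication effect the excerpt warns about.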
{
"msg_contents": "I've got back access to my test system. I ran another test run with the \nsame input data set. This time I put pg_xlog on a different RAID volume \n(the unused one that I suspect is a software RAID), and I turned \nfsync=off in postgresql.conf. I left the rest of the configuration \nalone (all foreign keys removed), etc. Unfortunately, this only dropped \nelapsed time down to about 28000 seconds (from 30000), still \nsignificantly more than BigDBMS. Additional info inline below.\n\nShoaib Mir wrote:\n> Here are my few recommendations that might help you:\n> \n> - You will need to do table partitioning \n> (http://www.postgresql.org/docs/current/static/ddl-partitioning.html \n> <http://www.postgresql.org/docs/current/static/ddl-partitioning.html>) \n> as you are storing quite a lot of data in one table per day.\n\nI'm focusing on the detailed perspective for now. The 144 files I'm \nprocessing represent not even two hours of data, so that surely wouldn't \nbe split up.\n\n> \n> - You are using a RAID5 setup which is something that can also affect \n> performance so switching to RAID1 might help you there, but again you \n> have a RAID5 with 12 disks so hmm that shouldn't be that much of a problem.\n\nAgreed.\n\n> \n> - Have you done the tuning for postgresql.conf parameters? if not then \n> you really need to do this for like checkpoint segments, random page \n> cost, shared buffers, cache size, fsm pages, vacuum cost delay, \n> work_mem, bgwriter etc etc. You can get good advice for tuning these \n> parameters at --> http://www.powerpostgresql.com/PerfList/\n\nThe box has 3 GB of memory. I would think that BigDBMS would be hurt by \nthis more than PG. Here are the settings I've modified in postgresql.conf:\n\nautovacuum=on\nstats_row_level = on\nmax_connections = 10\nlisten_addresses = 'db01,localhost'\nshared_buffers = 128MB\nwork_mem = 16MB\nmaintenance_work_mem = 64MB\ntemp_buffers = 32MB\nmax_fsm_pages = 204800\ncheckpoint_segments = 30\nredirect_stderr = on\nlog_line_prefix = '%t %d'\n\n> \n> - For autovacuuming you need to properly tune the thresholds so that the \n> vacuum and analyze is done at the right time not affecting the database \n> server performance. (You can find help for this at \n> http://www.postgresql.org/docs/current/static/routine-vacuuming.html \n> under \"22.1.4. The auto-vacuum daemon\")\n\nThe real-life load on this database would be fairly constant throughout \nthe day. Stats from network devices are received every 15 minutes from \neach device, but they are staggered. As a result, the database is \nalmost constantly being updated, so there is no dead time to do vacuums.\n\n> \n> - You will need to separate your transactional logs i.e. pg_xlog folder \n> to a different drive other then your database server drive. This can be \n> done by creating symlinks for pg_xlog folder.\n\nDone, see opening remarks. Unfortunately minor impact.\n\n> \n> - I hope you are doing proper connection pool management, because good \n> use of database connections can be really effect the overall \n> performance, connections can be expensive to create, and consume memory \n> if they are not properly exited.\n\nI probably should have mentioned this originally but was afraid of \ninformation overload. The application runs on JBoss and uses JBoss \nconnection pools. So connections are pooled, but I don't know how they \nwould compare to native PG connection pools. 
Essentially, JBoss gets \nnative JDBC connections, and the pools simply allow them to be re-used \nwithout opening and closing each time. So if the native PG connection \npools provide any pooling optimizations beyond that, those advantages \nare not being realized.\n\n> \n> Hope that helps your tests...\n\nThanks to everyone for providing suggestions, and I apologize for my \ndelay in responding to each of them.\n\n> \n> ----------------\n> Shoaib Mir\n> EnterpriseDB (www.enterprisedb.com <http://www.enterprisedb.com>)\n> \n> On 12/28/06, *Guy Rouillier* <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> I don't want to violate any license agreement by discussing\n> performance,\n> so I'll refer to a large, commercial PostgreSQL-compatible DBMS only as\n> BigDBMS here.\n> \n> I'm trying to convince my employer to replace BigDBMS with PostgreSQL\n> for at least some of our Java applications. As a proof of concept, I\n> started with a high-volume (but conceptually simple) network data\n> collection application. This application collects files of 5-minute\n> usage statistics from our network devices, and stores a raw form of\n> these stats into one table and a normalized form into a second table.\n> We are currently storing about 12 million rows a day in the normalized\n> table, and each month we start new tables. For the normalized data, the\n> app inserts rows initialized to zero for the entire current day first\n> thing in the morning, then throughout the day as stats are received,\n> executes updates against existing rows. So the app has very high update\n> activity.\n> \n> In my test environment, I have a dual-x86 Linux platform running the\n> application, and an old 4-CPU Sun Enterprise 4500 running BigDBMS and\n> PostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk arrays\n> attached, each with 12 SCSI hard disks (a D1000 and 3 A1000, for those\n> familiar with these devices.) The arrays are set up with RAID5. So I'm\n> working with a consistent hardware platform for this comparison. I'm\n> only processing a small subset of files (144.)\n> \n> BigDBMS processed this set of data in 20000 seconds, with all foreign\n> keys in place. With all foreign keys in place, PG took 54000 seconds to\n> complete the same job. I've tried various approaches to autovacuum\n> (none, 30-seconds) and it doesn't seem to make much difference. What\n> does seem to make a difference is eliminating all the foreign keys; in\n> that configuration, PG takes about 30000 seconds. Better, but BigDBMS\n> still has it beat significantly.\n> \n> I've got PG configured so that that the system database is on disk\n> array\n> 2, as are the transaction log files. The default table space for the\n> test database is disk array 3. I've got all the reference tables (the\n> tables to which the foreign keys in the stats tables refer) on this\n> array. I also store the stats tables on this array. Finally, I put the\n> indexes for the stats tables on disk array 4. I don't use disk array 1\n> because I believe it is a software array.\n> \n> I'm out of ideas how to improve this picture any further. I'd\n> appreciate some suggestions. Thanks.\n> \n> --\n> Guy Rouillier\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n\n\n-- \nGuy Rouillier\n",
"msg_date": "Fri, 05 Jan 2007 21:51:18 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
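As a quick cross-check on the configuration listed above, querying pg_settings shows what the running server actually has in effect, as opposed to what the file says; the parameter names here are just an example subset:

    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                   'checkpoint_segments', 'max_fsm_pages', 'fsync')
    ORDER BY name;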
{
"msg_contents": "Guy Rouillier wrote:\n> I've got back access to my test system. I ran another test run with the \n> same input data set. This time I put pg_xlog on a different RAID volume \n> (the unused one that I suspect is a software RAID), and I turned \n> fsync=off in postgresql.conf. I left the rest of the configuration \n> alone (all foreign keys removed), etc. Unfortunately, this only dropped \n> elapsed time down to about 28000 seconds (from 30000), still \n> significantly more than BigDBMS. Additional info inline below.\n\nAlthough tuning is extremely important, you also have to look at the application itself. I discovered (the hard way) that there's simply no substitute for a bit of redesign/rewriting of the schema and/or SQL statements.\n\nMany of us who \"grew up\" on Oracle assume that their SQL is standard stuff, and that Oracle's optimizer is \"the way it's done.\" But in fact most Oracle applications are tweaked and tuned to take advantage of Oracle's strengths and avoid its weaknesses. If you designed an application from the ground up to use Postgres, then migrated to Oracle, you would probably be equally frustrated by Oracle's poor performance on your Postgres-tuned application.\n\nI don't know if you have access to the application's SQL, or the time to experiment a bit, but unless your schema is trival and your SQL is boneheaded simple, you're not going to get equal performance from Postgres until you do some analysis of your application under real-world conditions, and optimize the problem areas.\n\nIn my case, I found just a few specific SQL constructs that, with a bit of tuning, made massive differences in performance.\n\nCraig\n",
"msg_date": "Fri, 05 Jan 2007 19:14:41 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "\nOn 5-Jan-07, at 9:51 PM, Guy Rouillier wrote:\n\n> I've got back access to my test system. I ran another test run \n> with the same input data set. This time I put pg_xlog on a \n> different RAID volume (the unused one that I suspect is a software \n> RAID), and I turned fsync=off in postgresql.conf. I left the rest \n> of the configuration alone (all foreign keys removed), etc. \n> Unfortunately, this only dropped elapsed time down to about 28000 \n> seconds (from 30000), still significantly more than BigDBMS. \n> Additional info inline below.\n>\n> Shoaib Mir wrote:\n>> Here are my few recommendations that might help you:\n>> - You will need to do table partitioning (http:// \n>> www.postgresql.org/docs/current/static/ddl-partitioning.html \n>> <http://www.postgresql.org/docs/current/static/ddl- \n>> partitioning.html>) as you are storing quite a lot of data in one \n>> table per day.\n>\n> I'm focusing on the detailed perspective for now. The 144 files \n> I'm processing represent not even two hours of data, so that surely \n> wouldn't be split up.\n>\n>> - You are using a RAID5 setup which is something that can also \n>> affect performance so switching to RAID1 might help you there, but \n>> again you have a RAID5 with 12 disks so hmm that shouldn't be that \n>> much of a problem.\n>\n> Agreed.\n>\n>> - Have you done the tuning for postgresql.conf parameters? if not \n>> then you really need to do this for like checkpoint segments, \n>> random page cost, shared buffers, cache size, fsm pages, vacuum \n>> cost delay, work_mem, bgwriter etc etc. You can get good advice \n>> for tuning these parameters at --> http://www.powerpostgresql.com/ \n>> PerfList/\n>\n> The box has 3 GB of memory. I would think that BigDBMS would be \n> hurt by this more than PG. Here are the settings I've modified in \n> postgresql.conf:\n\nAs I said you need to set shared_buffers to at least 750MB this is \nthe starting point, it can actually go higher. Additionally effective \ncache should be set to 2.25 G turning fsync is not a real world \nsituation. Additional tuning of file systems can provide some gain, \nhowever as Craig pointed out some queries may need to be tweaked.\n>\n> autovacuum=on\n> stats_row_level = on\n> max_connections = 10\n> listen_addresses = 'db01,localhost'\n> shared_buffers = 128MB\n> work_mem = 16MB\n> maintenance_work_mem = 64MB\n> temp_buffers = 32MB\n> max_fsm_pages = 204800\n> checkpoint_segments = 30\n> redirect_stderr = on\n> log_line_prefix = '%t %d'\n>\n>> - For autovacuuming you need to properly tune the thresholds so \n>> that the vacuum and analyze is done at the right time not \n>> affecting the database server performance. (You can find help for \n>> this at http://www.postgresql.org/docs/current/static/routine- \n>> vacuuming.html under \"22.1.4. The auto-vacuum daemon\")\n>\n> The real-life load on this database would be fairly constant \n> throughout the day. Stats from network devices are received every \n> 15 minutes from each device, but they are staggered. As a result, \n> the database is almost constantly being updated, so there is no \n> dead time to do vacuums.\n>\n>> - You will need to separate your transactional logs i.e. pg_xlog \n>> folder to a different drive other then your database server drive. \n>> This can be done by creating symlinks for pg_xlog folder.\n>\n> Done, see opening remarks. 
Unfortunately minor impact.\n>\n>> - I hope you are doing proper connection pool management, because \n>> good use of database connections can be really effect the overall \n>> performance, connections can be expensive to create, and consume \n>> memory if they are not properly exited.\n>\n> I probably should have mentioned this originally but was afraid of \n> information overload. The application runs on JBoss and uses JBoss \n> connection pools. So connections are pooled, but I don't know how \n> they would compare to native PG connection pools. Essentially, \n> JBoss gets native JDBC connections, and the pools simply allow them \n> to be re-used without opening and closing each time. So if the \n> native PG connection pools provide any pooling optimizations beyond \n> that, those advantages are not being realized.\n\nthe PG Connection pools will not help, they do not currently provide \nany extra optimization.\n\nDave\n>\n>> Hope that helps your tests...\n>\n> Thanks to everyone for providing suggestions, and I apologize for \n> my delay in responding to each of them.\n>\n>> ----------------\n>> Shoaib Mir\n>> EnterpriseDB (www.enterprisedb.com <http://www.enterprisedb.com>)\n>> On 12/28/06, *Guy Rouillier* <[email protected] <mailto:guyr- \n>> [email protected]>> wrote:\n>> I don't want to violate any license agreement by discussing\n>> performance,\n>> so I'll refer to a large, commercial PostgreSQL-compatible \n>> DBMS only as\n>> BigDBMS here.\n>> I'm trying to convince my employer to replace BigDBMS with \n>> PostgreSQL\n>> for at least some of our Java applications. As a proof of \n>> concept, I\n>> started with a high-volume (but conceptually simple) network data\n>> collection application. This application collects files of 5- \n>> minute\n>> usage statistics from our network devices, and stores a raw \n>> form of\n>> these stats into one table and a normalized form into a second \n>> table.\n>> We are currently storing about 12 million rows a day in the \n>> normalized\n>> table, and each month we start new tables. For the normalized \n>> data, the\n>> app inserts rows initialized to zero for the entire current \n>> day first\n>> thing in the morning, then throughout the day as stats are \n>> received,\n>> executes updates against existing rows. So the app has very \n>> high update\n>> activity.\n>> In my test environment, I have a dual-x86 Linux platform \n>> running the\n>> application, and an old 4-CPU Sun Enterprise 4500 running \n>> BigDBMS and\n>> PostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk \n>> arrays\n>> attached, each with 12 SCSI hard disks (a D1000 and 3 A1000, \n>> for those\n>> familiar with these devices.) The arrays are set up with \n>> RAID5. So I'm\n>> working with a consistent hardware platform for this \n>> comparison. I'm\n>> only processing a small subset of files (144.)\n>> BigDBMS processed this set of data in 20000 seconds, with all \n>> foreign\n>> keys in place. With all foreign keys in place, PG took 54000 \n>> seconds to\n>> complete the same job. I've tried various approaches to \n>> autovacuum\n>> (none, 30-seconds) and it doesn't seem to make much \n>> difference. What\n>> does seem to make a difference is eliminating all the foreign \n>> keys; in\n>> that configuration, PG takes about 30000 seconds. Better, but \n>> BigDBMS\n>> still has it beat significantly.\n>> I've got PG configured so that that the system database is on \n>> disk\n>> array\n>> 2, as are the transaction log files. 
The default table space \n>> for the\n>> test database is disk array 3. I've got all the reference \n>> tables (the\n>> tables to which the foreign keys in the stats tables refer) on \n>> this\n>> array. I also store the stats tables on this array. Finally, \n>> I put the\n>> indexes for the stats tables on disk array 4. I don't use \n>> disk array 1\n>> because I believe it is a software array.\n>> I'm out of ideas how to improve this picture any further. I'd\n>> appreciate some suggestions. Thanks.\n>> --\n>> Guy Rouillier\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>\n>\n> -- \n> Guy Rouillier\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Fri, 5 Jan 2007 23:05:42 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Craig A. James wrote:\n> I don't know if you have access to the application's SQL, or the time to \n> experiment a bit, but unless your schema is trival and your SQL is \n> boneheaded simple, you're not going to get equal performance from \n> Postgres until you do some analysis of your application under real-world \n> conditions, and optimize the problem areas.\n\nCraig, thanks for taking the time to think about this. Yes, I have all \nthe application source code, and all the time in the world, as I'm doing \nthis experimentation on my own time. The test hardware is old stuff no \none intends to use for production work ever again, so I can use it as \nlong as I want.\n\nThe application is fairly straightforward, but as you say, what is \nworking okay with BigDBMS isn't working as well under PG. I'm going to \ntry other configuration suggestions made by others before I attempt \nlogic changes. The core logic is unchangeable; millions of rows of data \nin a single table will be updated throughout the day. If PG can't \nhandle high volume updates well, this may be brick wall.\n\n-- \nGuy Rouillier\n",
"msg_date": "Sat, 06 Jan 2007 23:22:58 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Dave Cramer wrote:\n\n>>\n>> The box has 3 GB of memory. I would think that BigDBMS would be hurt \n>> by this more than PG. Here are the settings I've modified in \n>> postgresql.conf:\n> \n> As I said you need to set shared_buffers to at least 750MB this is the \n> starting point, it can actually go higher. Additionally effective cache \n> should be set to 2.25 G turning fsync is not a real world situation. \n> Additional tuning of file systems can provide some gain, however as \n> Craig pointed out some queries may need to be tweaked.\n\nDave, thanks for the hard numbers, I'll try them. I agree turning fsync \noff is not a production option. In another reply to my original \nposting, Alex mentioned that BigDBMS gets an advantage from its async \nIO. So simply as a test, I turned fsync off in an attempt to open wide \nall the pipes.\n\nRegarding shared_buffers=750MB, the last discussions I remember on this \nsubject said that anything over 10,000 (8K buffers = 80 MB) had unproven \nbenefits. So I'm surprised to see such a large value suggested. I'll \ncertainly give it a try and see what happens.\n\n>>\n>> autovacuum=on\n>> stats_row_level = on\n>> max_connections = 10\n>> listen_addresses = 'db01,localhost'\n>> shared_buffers = 128MB\n>> work_mem = 16MB\n>> maintenance_work_mem = 64MB\n>> temp_buffers = 32MB\n>> max_fsm_pages = 204800\n>> checkpoint_segments = 30\n>> redirect_stderr = on\n>> log_line_prefix = '%t %d'\n-- \nGuy Rouillier\n",
"msg_date": "Sat, 06 Jan 2007 23:32:59 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "\n> Regarding shared_buffers=750MB, the last discussions I remember on this \n> subject said that anything over 10,000 (8K buffers = 80 MB) had unproven \n> benefits. So I'm surprised to see such a large value suggested. I'll \n> certainly give it a try and see what happens.\n> \n\nThat is old news :) As of 8.1 it is quite beneficial to go well above\nthe aforementioned amount.\n\nJ\n\n\n> >>\n> >> autovacuum=on\n> >> stats_row_level = on\n> >> max_connections = 10\n> >> listen_addresses = 'db01,localhost'\n> >> shared_buffers = 128MB\n> >> work_mem = 16MB\n> >> maintenance_work_mem = 64MB\n> >> temp_buffers = 32MB\n> >> max_fsm_pages = 204800\n> >> checkpoint_segments = 30\n> >> redirect_stderr = on\n> >> log_line_prefix = '%t %d'\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Sat, 06 Jan 2007 20:56:47 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Guy,\n \n> The application is fairly straightforward, but as you say, what is \n> working okay with BigDBMS isn't working as well under PG. I'm going to \n> try other configuration suggestions made by others before I attempt \n> logic changes. The core logic is unchangeable; millions of rows of data \n> in a single table will be updated throughout the day. If PG can't \n> handle high volume updates well, this may be brick wall.\n\nHere are a couple things I learned.\n\nANALYZE is VERY important, surprisingly so even for small tables. I had a case last week where a temporary \"scratch\" table with just 100 rows was joined to two more tables of 6 and 12 million rows. You might think that a 100-row table wouldn't need to be analyzed, but it does: Without the ANALYZE, Postgres generated a horrible plan that took many minutes to run; with the ANALYZE, it took milliseconds. Any time a table's contents change dramatically, ANALYZE it, ESPECIALLY if it's a small table. After all, changing 20 rows in a 100-row table has a much larger affect on its statistics than changing 20 rows in a million-row table.\n\nPostgres functions like count() and max() are \"plug ins\" which has huge architectural advantages. But in pre-8.1 releases, there was a big speed penalty for this: functions like count() were very, very slow, requiring a full table scan. I think this is vastly improved from 8.0x to 8.1 and forward; others might be able to comment whether count() is now as fast in Postgres as Oracle. The \"idiom\" to replace count() was \"select col from tbl order by col desc limit 1\". It worked miracles for my app.\n\nPostgres has explicit garbage collection via VACUUM, and you have to design your application with this in mind. In Postgres, update is delete+insert, meaning updates create garbage. If you have very \"wide\" tables, but only a subset of the columns are updated frequently, put these columns in a separate table with an index to join the two tables. For example, my original design was something like this:\n\n integer primary key\n very large text column\n ... a bunch of integer columns, float columns, and small text columns\n\nThe properties were updated by the application, but the large text column never changed. This led to huge garbage-collection problems as the large text field was repeatedly deleted and reinserted by the updates. By separating these into two tables, one with the large text column, and the other table with the dynamic, but smaller, columns, garbage is massively reduced, and performance increased, both immediately (smaller data set to update) and long term (smaller vacuums). You can use views to recreate your original combined columns, so the changes to your app are limited to where updates occur.\n\nIf you have a column that is *frequently* updated (say, for example, a user's last-access timestamp each time s/he hits your web server) then you definitely want this in its own table, not mixed in with the user's name, address, etc.\n\nPartitioning in Postgres is more powerful than in Oracle. Use it if you can.\n\nPartial indexes are VERY nice in Postgres, if your data is poorly distributed (for example, a mostly-NULL column with a small percentage of very important values).\n\nI'm sure there are more things that others can contribute.\n\nCraig\n\n",
"msg_date": "Sat, 06 Jan 2007 23:23:32 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
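To make the "separate the frequently updated columns" and partial-index advice above concrete, here is a hedged sketch; the table and column names (doc_static, doc_dynamic, big_text, last_access) are invented for illustration and are not taken from the poster's schema:

    -- Wide, rarely changing part
    CREATE TABLE doc_static (
        doc_id   integer PRIMARY KEY,
        big_text text
    );

    -- Small, frequently updated part; updates here generate far less dead space
    CREATE TABLE doc_dynamic (
        doc_id      integer PRIMARY KEY REFERENCES doc_static (doc_id),
        hit_count   integer NOT NULL DEFAULT 0,
        last_access timestamp
    );

    -- View restoring the original combined shape for existing queries
    CREATE VIEW doc AS
    SELECT s.doc_id, s.big_text, d.hit_count, d.last_access
    FROM doc_static s
    JOIN doc_dynamic d USING (doc_id);

    -- Partial index: cover only the minority of rows that actually matter
    CREATE INDEX doc_dynamic_recent_idx
        ON doc_dynamic (last_access)
        WHERE last_access IS NOT NULL;

Updates then rewrite only the narrow doc_dynamic rows, which is what reduces the garbage-collection and vacuum load the message describes.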
{
"msg_contents": "Guy Rouillier wrote:\n> The application is fairly straightforward, but as you say, what is \n> working okay with BigDBMS isn't working as well under PG. I'm going to \n> try other configuration suggestions made by others before I attempt \n> logic changes. The core logic is unchangeable; millions of rows of data \n> in a single table will be updated throughout the day. If PG can't \n> handle high volume updates well, this may be brick wall.\n\nI understand your reluctance to change your working design in the change \nover to PostgreSQL but -\n\n1. Your table definitions may or may not be the issue and a small change \nin design (even only choice of datatype) may be all that is needed to \nget the needed performance out of PostgreSQL. These changes would be \ndone before you put PostgreSQL into production use so the amount of \ncurrent usage is not relevant when deciding/analyzing these changes but \nthey may affect your ability to use PostgreSQL as an alternative.\n\n2. I think that the idea of logic changes suggested earlier was more \naimed at your select/update commands than the structure of your tables. \nYou should expect to have some SQL changes between any database and \nusing select/update's designed to take advantage of PostgreSQL strengths \ncan give you performance improvements.\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Sun, 07 Jan 2007 19:34:08 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "\nOn 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:\n\n> Dave Cramer wrote:\n>\n>>>\n>>> The box has 3 GB of memory. I would think that BigDBMS would be \n>>> hurt by this more than PG. Here are the settings I've modified \n>>> in postgresql.conf:\n>> As I said you need to set shared_buffers to at least 750MB this is \n>> the starting point, it can actually go higher. Additionally \n>> effective cache should be set to 2.25 G turning fsync is not a \n>> real world situation. Additional tuning of file systems can \n>> provide some gain, however as Craig pointed out some queries may \n>> need to be tweaked.\n>\n> Dave, thanks for the hard numbers, I'll try them. I agree turning \n> fsync off is not a production option. In another reply to my \n> original posting, Alex mentioned that BigDBMS gets an advantage \n> from its async IO. So simply as a test, I turned fsync off in an \n> attempt to open wide all the pipes.\n>\n> Regarding shared_buffers=750MB, the last discussions I remember on \n> this subject said that anything over 10,000 (8K buffers = 80 MB) \n> had unproven benefits. So I'm surprised to see such a large value \n> suggested. I'll certainly give it a try and see what happens.\n\nThat is 25% of your available memory. This is just a starting point. \nThere are reports that going as high as 50% can be advantageous, \nhowever you need to measure it yourself.\n\n>\n>>>\n>>> autovacuum=on\n>>> stats_row_level = on\n>>> max_connections = 10\n>>> listen_addresses = 'db01,localhost'\n>>> shared_buffers = 128MB\n>>> work_mem = 16MB\n>>> maintenance_work_mem = 64MB\n>>> temp_buffers = 32MB\n>>> max_fsm_pages = 204800\n>>> checkpoint_segments = 30\n>>> redirect_stderr = on\n>>> log_line_prefix = '%t %d'\n> -- \n> Guy Rouillier\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Sun, 7 Jan 2007 10:29:02 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Craig A. James wrote:\n> The \"idiom\" to replace count() was \n> \"select col from tbl order by col desc limit 1\". It worked miracles for \n> my app.\n\nSorry, I meant to write, \"the idiom to replace MAX()\", not count()... MAX() was the function that was killing me, 'tho count() also gave me problems.\n\nCraig\n",
"msg_date": "Sun, 07 Jan 2007 15:56:31 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
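For reference, the two forms being compared in this subthread, with one caveat: on a nullable column, ORDER BY col DESC LIMIT 1 can return NULL (NULLs sort first in descending order), so an IS NOT NULL filter is needed to match MAX(). Table and column names are placeholders.

    -- Aggregate form; 8.1 and later can rewrite this into a backward index scan
    SELECT MAX(col) FROM tbl;

    -- Hand-written idiom; the filter keeps it equivalent when col allows NULLs
    SELECT col
    FROM tbl
    WHERE col IS NOT NULL
    ORDER BY col DESC
    LIMIT 1;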
{
"msg_contents": "\nI'm using 8.2 and using order by & limit is still faster than MAX()\neven though MAX() now seems to rewrite to an almost identical plan\ninternally.\n\nCount(*) still seems to use a full table scan rather than an index scan.\n\nUsing one of our tables, MySQL/Oracle/MS-SQL all return instantly while\nPG takes longer ther 700ms. Luckily we can design around this issue.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Craig A.\nJames\nSent: Sunday, January 07, 2007 5:57 PM\nTo: Guy Rouillier; PostgreSQL Performance\nSubject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n\n\nCraig A. James wrote:\n> The \"idiom\" to replace count() was \n> \"select col from tbl order by col desc limit 1\". It worked miracles\nfor \n> my app.\n\nSorry, I meant to write, \"the idiom to replace MAX()\", not count()...\nMAX() was the function that was killing me, 'tho count() also gave me\nproblems.\n\nCraig\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Sun, 7 Jan 2007 20:26:29 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "\"Adam Rich\" <[email protected]> writes:\n> I'm using 8.2 and using order by & limit is still faster than MAX()\n> even though MAX() now seems to rewrite to an almost identical plan\n> internally.\n\nCare to quantify that? AFAICT any difference is within measurement\nnoise, at least for the case of separately-issued SQL commands.\n\n> Count(*) still seems to use a full table scan rather than an index scan.\n\nYup. Don't hold your breath for something different. Postgres has made\ndesign choices that make certain cases fast and others slow, and\ncount(*) is one case that has come out on the short end of the stick.\nIf that's your most important measure of performance, then indeed you\nshould select a different database that's made different tradeoffs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Jan 2007 21:47:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS "
},
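Where an exact count(*) is not strictly required, a common workaround consistent with the tradeoff Tom describes is to read the planner's estimate from pg_class. This is only an approximation, refreshed by VACUUM/ANALYZE rather than kept current; the table name is borrowed from examples elsewhere in the thread.

    -- Approximate row count; accuracy depends on how recently the table was
    -- vacuumed or analyzed.
    SELECT reltuples::bigint AS approx_rows
    FROM pg_class
    WHERE relname = 'receipt_items'
      AND relkind = 'r';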
{
"msg_contents": "On Sun, 2007-01-07 at 20:26 -0600, Adam Rich wrote:\n> I'm using 8.2 and using order by & limit is still faster than MAX()\n> even though MAX() now seems to rewrite to an almost identical plan\n> internally.\n\n\nGonna need you to back that up :) Can we get an explain analyze?\n\n\n> Count(*) still seems to use a full table scan rather than an index scan.\n> \n\nThere is a TODO out there to help this. Don't know if it will get done.\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Sun, 07 Jan 2007 19:09:59 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Dave Cramer wrote:\n> \n> On 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:\n> \n>> Dave Cramer wrote:\n>>\n>>>>\n>>>> The box has 3 GB of memory. I would think that BigDBMS would be \n>>>> hurt by this more than PG. Here are the settings I've modified in \n>>>> postgresql.conf:\n>>> As I said you need to set shared_buffers to at least 750MB this is \n>>> the starting point, it can actually go higher. Additionally effective \n>>> cache should be set to 2.25 G turning fsync is not a real world \n>>> situation. Additional tuning of file systems can provide some gain, \n>>> however as Craig pointed out some queries may need to be tweaked.\n>>\n>> Dave, thanks for the hard numbers, I'll try them. I agree turning \n>> fsync off is not a production option. In another reply to my original \n>> posting, Alex mentioned that BigDBMS gets an advantage from its async \n>> IO. So simply as a test, I turned fsync off in an attempt to open \n>> wide all the pipes.\n>>\n>> Regarding shared_buffers=750MB, the last discussions I remember on \n>> this subject said that anything over 10,000 (8K buffers = 80 MB) had \n>> unproven benefits. So I'm surprised to see such a large value \n>> suggested. I'll certainly give it a try and see what happens.\n> \n> That is 25% of your available memory. This is just a starting point. \n> There are reports that going as high as 50% can be advantageous, however \n> you need to measure it yourself.\n\nOk, I ran with the settings below, but with\n\nshared_buffers=768MB\neffective_cache_size=2048MB\nfsync=on\n\nThis run took 29000 seconds. I'm beginning to think configuration \nchanges are not going to buy significant additional improvement. Time \nto look at the app implementation.\n\n> \n>>\n>>>>\n>>>> autovacuum=on\n>>>> stats_row_level = on\n>>>> max_connections = 10\n>>>> listen_addresses = 'db01,localhost'\n>>>> shared_buffers = 128MB\n>>>> work_mem = 16MB\n>>>> maintenance_work_mem = 64MB\n>>>> temp_buffers = 32MB\n>>>> max_fsm_pages = 204800\n>>>> checkpoint_segments = 30\n>>>> redirect_stderr = on\n>>>> log_line_prefix = '%t %d'\n>> --Guy Rouillier\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n> \n\n\n-- \nGuy Rouillier\n",
"msg_date": "Sun, 07 Jan 2007 23:26:01 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "\nHere's the queries and explains... Granted, it's not a huge difference\nhere,\nbut both timings are absolutely consistent. Using max(), this runs\nalmost \n15 queries/sec and \"limit 1\" runs at almost 40 queries/sec. \n\nIs the differene in explain analyze expected behavior? (rows=168196 vs.\nrows=1)\n(The table is freshly analayzed)\n\n\n\nselect max(item_id) from receipt_items\n\nResult (cost=0.04..0.05 rows=1 width=0) (actual time=0.030..0.031\nrows=1 loops=1)\nInitPlan\n-> Limit (cost=0.00..0.04 rows=1 width=4) (actual time=0.023..0.024\nrows=1 loops=1)\n-> Index Scan Backward using receipt_items_pkey on receipt_items\n(cost=0.00..6883.71 rows=168196 width=4) (actual time=0.020..0.020\nrows=1 loops=1)\nFilter: (item_id IS NOT NULL)\nTotal runtime: 0.067 ms\n\n\nselect item_id \nfrom receipt_items\norder by item_id desc\nlimit 1\n\nLimit (cost=0.00..0.04 rows=1 width=4) (actual time=0.010..0.011 rows=1\nloops=1)\n-> Index Scan Backward using receipt_items_pkey on receipt_items\n(cost=0.00..6883.71 rows=168196 width=4) (actual time=0.008..0.008\nrows=1 loops=1)\nTotal runtime: 0.026 ms\n\n\nA couple more similar examples from this table:\n\n\n\nselect max(create_date) from receipt_items\n\nResult (cost=0.05..0.06 rows=1 width=0) (actual time=0.032..0.032\nrows=1 loops=1)\nInitPlan\n-> Limit (cost=0.00..0.05 rows=1 width=8) (actual time=0.025..0.026\nrows=1 loops=1)\n-> Index Scan Backward using test_idx_1 on receipt_items\n(cost=0.00..7986.82 rows=168196 width=8) (actual time=0.022..0.022\nrows=1 loops=1)\nFilter: (create_date IS NOT NULL)\nTotal runtime: 0.069 ms\n\n\nselect create_date\nfrom receipt_items\norder by create_date desc\nlimit 1;\n\nLimit (cost=0.00..0.05 rows=1 width=8) (actual time=0.011..0.012 rows=1\nloops=1)\n-> Index Scan Backward using test_idx_1 on receipt_items\n(cost=0.00..7986.82 rows=168196 width=8) (actual time=0.009..0.009\nrows=1 loops=1)\nTotal runtime: 0.027 ms\n\n\n\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Sunday, January 07, 2007 8:48 PM\nTo: Adam Rich\nCc: 'Craig A. James'; 'Guy Rouillier'; 'PostgreSQL Performance'\nSubject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS \n\n\n\"Adam Rich\" <[email protected]> writes:\n> I'm using 8.2 and using order by & limit is still faster than MAX()\n> even though MAX() now seems to rewrite to an almost identical plan\n> internally.\n\nCare to quantify that? AFAICT any difference is within measurement\nnoise, at least for the case of separately-issued SQL commands.\n\n> Count(*) still seems to use a full table scan rather than an index\nscan.\n\nYup. Don't hold your breath for something different. Postgres has made\ndesign choices that make certain cases fast and others slow, and\ncount(*) is one case that has come out on the short end of the stick.\nIf that's your most important measure of performance, then indeed you\nshould select a different database that's made different tradeoffs.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 7 Jan 2007 22:59:01 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS "
},
{
"msg_contents": "Ron wrote:\n> C= What file system are you using? Unlike BigDBMS, pg does not have its \n> own native one, so you have to choose the one that best suits your \n> needs. For update heavy applications involving lots of small updates \n> jfs and XFS should both be seriously considered.\n\nRon, thanks for your ideas. Many of them I've addressed in response to \nsuggestions from others. I wanted to address this one in particular. \nUnfortunately, I do not have the liberty to change file systems on this \nold Sun box. All file systems are formatted Sun UFS. BigDBMS is \nequally subject to whatever pluses or minuses can be attributed to this \nfile system, so I'm thinking that this issue would be a wash between the \ntwo.\n\nI've come to the conclusion that configuration changes to PG alone will \nnot equal the playing field. My next step is to try to determine where \nthe biggest payback will be regarding changing the implementation.\n\n-- \nGuy Rouillier\n",
"msg_date": "Sun, 07 Jan 2007 23:59:14 -0500",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "\nHere's another, more drastic example... Here the order by / limit\nversion\nruns in less than 1/7000 the time of the MAX() version.\n\n\nselect max(item_id)\nfrom events e, receipts r, receipt_items ri\nwhere e.event_id=r.event_id and r.receipt_id=ri.receipt_id\n\nAggregate (cost=10850.84..10850.85 rows=1 width=4) (actual\ntime=816.382..816.383 rows=1 loops=1)\n -> Hash Join (cost=2072.12..10503.30 rows=139019 width=4) (actual\ntime=155.177..675.870 rows=147383 loops=1)\n Hash Cond: (ri.receipt_id = r.receipt_id)\n -> Seq Scan on receipt_items ri (cost=0.00..4097.56\nrows=168196 width=8) (actual time=0.009..176.894 rows=168196 loops=1)\n -> Hash (cost=2010.69..2010.69 rows=24571 width=4) (actual\ntime=155.146..155.146 rows=24571 loops=1)\n -> Hash Join (cost=506.84..2010.69 rows=24571 width=4)\n(actual time=34.803..126.452 rows=24571 loops=1)\n Hash Cond: (r.event_id = e.event_id)\n -> Seq Scan on receipts r (cost=0.00..663.58\nrows=29728 width=8) (actual time=0.006..30.870 rows=29728 loops=1)\n -> Hash (cost=469.73..469.73 rows=14843 width=4)\n(actual time=34.780..34.780 rows=14843 loops=1)\n -> Seq Scan on events e (cost=0.00..469.73\nrows=14843 width=4) (actual time=0.007..17.603 rows=14843 loops=1)\nTotal runtime: 816.645 ms\n\nselect item_id\nfrom events e, receipts r, receipt_items ri\nwhere e.event_id=r.event_id and r.receipt_id=ri.receipt_id\norder by item_id desc limit 1\n\n\nLimit (cost=0.00..0.16 rows=1 width=4) (actual time=0.047..0.048 rows=1\nloops=1)\n -> Nested Loop (cost=0.00..22131.43 rows=139019 width=4) (actual\ntime=0.044..0.044 rows=1 loops=1)\n -> Nested Loop (cost=0.00..12987.42 rows=168196 width=8)\n(actual time=0.032..0.032 rows=1 loops=1)\n -> Index Scan Backward using receipt_items_pkey on\nreceipt_items ri (cost=0.00..6885.50 rows=168196 width=8) (actual\ntime=0.016..0.016 rows=1 loops=1)\n -> Index Scan using receipts_pkey on receipts r\n(cost=0.00..0.02 rows=1 width=8) (actual time=0.010..0.010 rows=1\nloops=1)\n Index Cond: (r.receipt_id = ri.receipt_id)\n -> Index Scan using events_pkey on events e (cost=0.00..0.04\nrows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n Index Cond: (e.event_id = r.event_id)\nTotal runtime: 0.112 ms\n\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Joshua D.\nDrake\nSent: Sunday, January 07, 2007 9:10 PM\nTo: Adam Rich\nCc: 'Craig A. James'; 'Guy Rouillier'; 'PostgreSQL Performance'\nSubject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n\n\nOn Sun, 2007-01-07 at 20:26 -0600, Adam Rich wrote:\n> I'm using 8.2 and using order by & limit is still faster than MAX()\n> even though MAX() now seems to rewrite to an almost identical plan\n> internally.\n\n\nGonna need you to back that up :) Can we get an explain analyze?\n\n\n> Count(*) still seems to use a full table scan rather than an index\nscan.\n> \n\nThere is a TODO out there to help this. Don't know if it will get done.\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Sun, 7 Jan 2007 23:53:05 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "Craig A. James wrote:\n> Postgres functions like count() and max() are \"plug ins\" which has huge\n> architectural advantages. But in pre-8.1 releases, there was a big\n> speed penalty for this: functions like count() were very, very slow,\n> requiring a full table scan. I think this is vastly improved from 8.0x\n> to 8.1 and forward; others might be able to comment whether count() is\n> now as fast in Postgres as Oracle. The \"idiom\" to replace count() was\n\n ^^^^^^\n\nBigDBMS == Oracle. ;-)\n\n--\n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Mon, 8 Jan 2007 13:09:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:\n> When benchmarking various options for a new PG server at one of my clients, \n> I tried ext2 and ext3 (data=writeback) for the WAL and it appeared to be \n> fastest to have ext2 for the WAL. The winning time was 157m46.713s for \n> ext2, 159m47.098s for combined ext3 data/xlog and 158m25.822s for ext3 \n> data=writeback. This was on an 8x150GB Raptor RAID10 on an Areca 1130 w/ \n> 1GB BBU cache. This config benched out faster than a 6disk RAID10 + 2 disk \n> RAID1 for those of you who have been wondering if the BBU write back cache \n> mitigates the need for separate WAL (at least on this workload). Those are \n> the fastest times for each config, but ext2 WAL was always faster than the \n> other two options. I didn't test any other filesystems in this go around.\n\nUh, if I'm reading this correctly, you're saying that WAL on a separate\next2 vs. one big ext3 with data=writeback saved ~39 seconds out of\n~158.5 minutes, or 0.4%? Is that even above the noise for your\nmeasurements? I suspect the phase of the moon might play a bigger role\n;P\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 9 Jan 2007 06:33:15 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "On Sun, Jan 07, 2007 at 11:26:01PM -0500, Guy Rouillier wrote:\n> Ok, I ran with the settings below, but with\n> \n> shared_buffers=768MB\n> effective_cache_size=2048MB\n> fsync=on\n> \n> This run took 29000 seconds. I'm beginning to think configuration \n> changes are not going to buy significant additional improvement. Time \n> to look at the app implementation.\n\nVery likely, but one thing I haven't seen mentioned is what your\nbottleneck actually is. Is it CPU? Disk? Something else (ie: neither CPU\nor IO is at 100%). Additionally, since you have multiple arrays, are you\nsure they're being utilized equally? Having something like MRTG or\ncricket will make your tuning much easier. Unlike Oracle, PostgreSQL has\nno ability to avoid hitting the base table even if an index could cover\na query... so compared to Oracle you'll need to dedicate a lot more IO\nto the base tables.\n\nSearch around for PostgreSQL on Solaris tuning tips... there's some\nOS-settings that can make a huge difference. In particular, by default\nSolaris will only dedicate a fraction of memory to disk caching. That\nwon't bother Oracle much but it's a big deal to PostgreSQL. I think\nthere's some other relevant OS parameters as well.\n\nFor vacuum, you're going to need to tune the vacuum_cost_* settings so\nthat you can balance the IO impact of vacuums with the need to complete\nthe vacuums in a reasonable time. You'll find this easiest to tune by\nrunning manual vacuums and monitoring IO activity.\n\nYou'll also likely need to tune the bgwriter so that checkpoints aren't\nkilling you. If you're targeting a checkpoint every 5 minutes you'll\nneed to at least up bgwriter_all_maxpages to shared_buffers (in pages) /\n300 / 5. I'd round up a bit. As with everything, you'll need to tweak\nyour values from there. If you're using stock bgwriter settings then\nyou'll probably be seeing a big IO spike every time a checkpoint occurs.\n\nSpeaking of which... how often are checkpoints? If you can tolerate 5\nminutes of recovery time, (the default checkpoint_timeout), I suggest\nsetting checkpount_warning to 290 seconds or so; that way if you're\ngetting checkpoints much more often than every 5 minutes you'll be able\nto see in the logs.\n\nSpeaking of which, going longer between checkpoints will likely help\nperformance, if you can tolerate longer recovery times. I haven't\nactually tested the correlation, but I would expect recovery to complete\nin a maximum of checkpount_timeout seconds. If you can tolerate being in\nrecovery mode for 10 minutes after a crash, try bumping\ncheckpount_timeout, checkpount_warning and checkpoint_segments and see\nwhat it does for performance (if you do that you'll also want to tweak\nbgwriter further... in this case increasing bgwriter_delay would be\neasiest).\n\nGiven what sounds like decent IO capabilities, you'll likely get better\nquery plans from decreasing random_page_cost, probably to between 2 and\n3.\n\nSpeaking of IO... if you can switch to RAID10 you'll likely get better\npreformance since your write load is so heavy. Normally RAID5 is a\ncomplete performance killer as soon as you're doing much writing, but\nI'm guessing that those nice expensive Sun arrays are better than most\nRAID controllers.\n\nAll that being said... generally the biggest tuning impact to be had for\nany database environment is in how the application is using the\ndatabase. A few sub-optimal things in the application/database design\ncould easily erase every gain you'll get from all your tuning. 
I suggest\nrunning EXPLAIN ANALYZE on the queries that are run most often and\nseeing what that shows.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 9 Jan 2007 06:54:27 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
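Two of the suggestions above can be sketched directly: the vacuum cost settings can be tried interactively in one session, and the bgwriter_all_maxpages arithmetic works out as shown in the comments for the 768MB shared_buffers used earlier in the thread. The specific values are illustrative, not recommendations.

    -- Experiment with cost-based vacuum delay in one session and watch the I/O
    SET vacuum_cost_delay = 10;     -- milliseconds to sleep when the cost limit is reached
    SET vacuum_cost_limit = 200;
    VACUUM VERBOSE receipt_items;   -- table name borrowed from earlier examples

    -- bgwriter_all_maxpages suggestion: shared_buffers (in 8KB pages) / 300 / 5
    --   768MB / 8KB pages      = 98304 pages
    --   98304 / 300 / 5        = about 66, so roughly 70 after rounding up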
{
"msg_contents": "On Tue, 9 Jan 2007, Jim C. Nasby wrote:\n\n> On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:\n>> When benchmarking various options for a new PG server at one of my clients,\n>> I tried ext2 and ext3 (data=writeback) for the WAL and it appeared to be\n>> fastest to have ext2 for the WAL. The winning time was 157m46.713s for\n>> ext2, 159m47.098s for combined ext3 data/xlog and 158m25.822s for ext3\n>> data=writeback. This was on an 8x150GB Raptor RAID10 on an Areca 1130 w/\n>> 1GB BBU cache. This config benched out faster than a 6disk RAID10 + 2 disk\n>> RAID1 for those of you who have been wondering if the BBU write back cache\n>> mitigates the need for separate WAL (at least on this workload). Those are\n>> the fastest times for each config, but ext2 WAL was always faster than the\n>> other two options. I didn't test any other filesystems in this go around.\n>\n> Uh, if I'm reading this correctly, you're saying that WAL on a separate\n> ext2 vs. one big ext3 with data=writeback saved ~39 seconds out of\n> ~158.5 minutes, or 0.4%? Is that even above the noise for your\n> measurements? I suspect the phase of the moon might play a bigger role\n> ;P\n\nThat's what I thought too...cept I ran it 20 times and ext2 won by that margin \nevery time, so it was quite repeatable. :-/\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 9 Jan 2007 09:10:51 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "On Tue, Jan 09, 2007 at 09:10:51AM -0800, Jeff Frost wrote:\n> On Tue, 9 Jan 2007, Jim C. Nasby wrote:\n> \n> >On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:\n> >>When benchmarking various options for a new PG server at one of my \n> >>clients,\n> >>I tried ext2 and ext3 (data=writeback) for the WAL and it appeared to be\n> >>fastest to have ext2 for the WAL. The winning time was 157m46.713s for\n> >>ext2, 159m47.098s for combined ext3 data/xlog and 158m25.822s for ext3\n> >>data=writeback. This was on an 8x150GB Raptor RAID10 on an Areca 1130 w/\n> >>1GB BBU cache. This config benched out faster than a 6disk RAID10 + 2 \n> >>disk\n> >>RAID1 for those of you who have been wondering if the BBU write back cache\n> >>mitigates the need for separate WAL (at least on this workload). Those \n> >>are\n> >>the fastest times for each config, but ext2 WAL was always faster than the\n> >>other two options. I didn't test any other filesystems in this go around.\n> >\n> >Uh, if I'm reading this correctly, you're saying that WAL on a separate\n> >ext2 vs. one big ext3 with data=writeback saved ~39 seconds out of\n> >~158.5 minutes, or 0.4%? Is that even above the noise for your\n> >measurements? I suspect the phase of the moon might play a bigger role\n> >;P\n> \n> That's what I thought too...cept I ran it 20 times and ext2 won by that \n> margin every time, so it was quite repeatable. :-/\n\nEven so, you've got to really be hunting for performance to go through\nthe hassle of different filesystems just to gain 0.4%... :)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 14:26:52 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "On Wed, 10 Jan 2007, Jim C. Nasby wrote:\n\n>>>> RAID1 for those of you who have been wondering if the BBU write back cache\n>>>> mitigates the need for separate WAL (at least on this workload). Those\n>>>> are\n>>>> the fastest times for each config, but ext2 WAL was always faster than the\n>>>> other two options. I didn't test any other filesystems in this go around.\n>>>\n>>> Uh, if I'm reading this correctly, you're saying that WAL on a separate\n>>> ext2 vs. one big ext3 with data=writeback saved ~39 seconds out of\n>>> ~158.5 minutes, or 0.4%? Is that even above the noise for your\n>>> measurements? I suspect the phase of the moon might play a bigger role\n>>> ;P\n>>\n>> That's what I thought too...cept I ran it 20 times and ext2 won by that\n>> margin every time, so it was quite repeatable. :-/\n>\n> Even so, you've got to really be hunting for performance to go through\n> the hassle of different filesystems just to gain 0.4%... :)\n\nIndeed, but actually, I did the math again and it appears that it saves close \nto 2 minutes versus one big ext3. I guess the moral of the story is that \nhaving a separate pg_xlog even on the same physical volume tends to be \nslightly faster for write oriented workloads. Ext2 is slightly faster than \next3, but of course you could likely go with another filesystem yet and be \neven slightly faster as well. :-)\n\nI guess the real moral of the story is that you can probably use one big ext3 \nwith the default config and it won't matter much more than 1-2% if you have a \nBBU.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Wed, 10 Jan 2007 12:33:53 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
},
{
"msg_contents": "I originally posted the question below back in Dec 2006, and many \nhelpful suggestions resulted. Unfortunately, since this was a closet \neffort, my official duties pushed further exploration to the back \nburner, then I lost my original test environment. So while I can no \nlonger compare to BigDBMS, I've just made some discoveries that I \nthought others might find helpful.\n\nThe app (which I inherited) was implemented making exhaustive use of \nstored procedures. All inserts and updates are done using procs. When \nconfiguration changes produced no noticeable improvements in \nperformance, I turned to the application architecture. In a new \nenvironment, I updated an insert/update intensive part of the app to use \nembedded insert and update statements instead of invoking stored \nprocedures that did the same work. All the remaining code, database \nimplementation, hardware, etc remains the same.\n\nThe results were significant. Running a repeatable test set of data \nproduced the following results:\n\nWith stored procs: 2595 seconds\nWith embedded inserts/updates: 991 seconds\n\nSo at least in this one scenario, it looks like the extensive use of \nstored procs is contributing significantly to long run times.\n\nGuy Rouillier wrote:\n> I don't want to violate any license agreement by discussing performance, \n> so I'll refer to a large, commercial PostgreSQL-compatible DBMS only as \n> BigDBMS here.\n> \n> I'm trying to convince my employer to replace BigDBMS with PostgreSQL \n> for at least some of our Java applications. As a proof of concept, I \n> started with a high-volume (but conceptually simple) network data \n> collection application. This application collects files of 5-minute \n> usage statistics from our network devices, and stores a raw form of \n> these stats into one table and a normalized form into a second table. We \n> are currently storing about 12 million rows a day in the normalized \n> table, and each month we start new tables. For the normalized data, the \n> app inserts rows initialized to zero for the entire current day first \n> thing in the morning, then throughout the day as stats are received, \n> executes updates against existing rows. So the app has very high update \n> activity.\n> \n> In my test environment, I have a dual-x86 Linux platform running the \n> application, and an old 4-CPU Sun Enterprise 4500 running BigDBMS and \n> PostgreSQL 8.2.0 (only one at a time.) The Sun box has 4 disk arrays \n> attached, each with 12 SCSI hard disks (a D1000 and 3 A1000, for those \n> familiar with these devices.) The arrays are set up with RAID5. So I'm \n> working with a consistent hardware platform for this comparison. I'm \n> only processing a small subset of files (144.)\n> \n> BigDBMS processed this set of data in 20000 seconds, with all foreign \n> keys in place. With all foreign keys in place, PG took 54000 seconds to \n> complete the same job. I've tried various approaches to autovacuum \n> (none, 30-seconds) and it doesn't seem to make much difference. What \n> does seem to make a difference is eliminating all the foreign keys; in \n> that configuration, PG takes about 30000 seconds. Better, but BigDBMS \n> still has it beat significantly.\n> \n> I've got PG configured so that that the system database is on disk array \n> 2, as are the transaction log files. The default table space for the \n> test database is disk array 3. 
I've got all the reference tables (the \n> tables to which the foreign keys in the stats tables refer) on this \n> array. I also store the stats tables on this array. Finally, I put the \n> indexes for the stats tables on disk array 4. I don't use disk array 1 \n> because I believe it is a software array.\n> \n> I'm out of ideas how to improve this picture any further. I'd \n> appreciate some suggestions. Thanks.\n> \n\n\n-- \nGuy Rouillier\n",
"msg_date": "Fri, 17 Aug 2007 18:48:05 -0400",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High update activity, PostgreSQL vs BigDBMS"
}
] |
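For anyone wanting to reproduce the stored-procedure versus embedded-statement comparison above, a minimal sketch of the two code paths is shown below. The table, function, and column names are hypothetical (the original post does not show the schema); the point is only the shape of the difference: one SELECT that invokes a PL/pgSQL function per data point, versus issuing the equivalent UPDATE directly.

-- Hypothetical normalized stats table; names are illustrative only.
CREATE TABLE stats_normalized (
    device_id   integer   NOT NULL,
    sample_time timestamp NOT NULL,
    octets      bigint    DEFAULT 0,
    PRIMARY KEY (device_id, sample_time)
);

-- Path 1: the work wrapped in a stored procedure, invoked once per data point.
CREATE OR REPLACE FUNCTION update_stat(p_device integer,
                                       p_time   timestamp,
                                       p_octets bigint) RETURNS void AS $$
BEGIN
    UPDATE stats_normalized
       SET octets = octets + p_octets
     WHERE device_id = p_device
       AND sample_time = p_time;
END;
$$ LANGUAGE plpgsql;

SELECT update_stat(42, '2007-01-01 00:05', 12345);

-- Path 2: the same work issued by the application as an embedded statement.
UPDATE stats_normalized
   SET octets = octets + 12345
 WHERE device_id = 42
   AND sample_time = '2007-01-01 00:05';

Per the timings reported above (2595 seconds with procs, 991 seconds embedded), the embedded form was roughly 2.6 times faster on the same data set.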
[
{
"msg_contents": "Good day,\n\nI have been reading about the configuration of postgresql, but I have a \nserver who does not give me the performance that should. The tables are \nindexed and made vacuum regularly, i monitor with top, ps and \npg_stat_activity and when i checked was slow without a heavy load overage.\n\nBefore, the server reached 2000 connections to postgresql (with \nmax_connections=3000 in it for future workflow).\n\nI divided the load with another server for better performance, and now reach \n500 connections, but yet is overflow.\n\n\nMy question is about how much memory should i configure in shared_buffers \nand effective_cache_size.\n\nFeatures:\n\n- 4 Processsors Intel Xeon Dual 3.0Ghz\n- 12 GB RAM\n- 2 discos en RAID 1 for OS\n- 4 discs RAID 5 for DB\n- S.O Slackware 11.0 Linux 2.6.17.7\n- Postgres 8.1.4\n\n\n=====In internet i found this:\n\nTuning PostgreSQL for performance\n2 Some basic parameters\n2.1 Shared buffers\n\n# Start at 4MB (512) for a workstation\n# Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)\n# Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768)\n======\n\n\nMy postgresql.conf configuration is:\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir'\t\t# use data in another directory\n#hba_file = 'ConfigDir/pg_hba.conf'\t# host-based authentication file\n#ident_file = 'ConfigDir/pg_ident.conf'\t# IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n#external_pid_file = '(none)'\t\t# write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = '*'\t\t# what IP address(es) to listen on;\n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost', '*' = all\nport = 5432\nmax_connections = 3000\n# note: increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t\t# octal\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\n\nshared_buffers = 81920\t\t\t# min 16 or max_connections*2, 8KB each\ntemp_buffers = 5000\t\t\t# min 100, 8KB each\nmax_prepared_transactions = 1000\t# can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared \nmemory\n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 10240\t\t\t# min 64, size in KB\nmaintenance_work_mem = 253952\t\t# min 1024, size in KB\nmax_stack_depth = 4096\t\t\t# min 100, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000\t\t\t# min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t\t# min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0\t\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200\t\t\t# 10-10000 milliseconds between rounds\n#bgwriter_lru_percent = 1.0\t\t# 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5\t\t# 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333\t\t# 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5\t\t# 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on\t\t\t\t# turns forced synchronization on or off\n#wal_sync_method = fsync\t\t# the default is the first option\n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\n#full_page_writes = on\t\t\t# recover from partial page writes\n#wal_buffers = 8\t\t\t# min 4, 8KB each\n#commit_delay = 0\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 20\t\t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300\t\t# range 30-3600, in seconds\n#checkpoint_warning = 30\t\t# in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = ''\t\t\t# command to use to archive a logfile\n\t\t\t\t\t# segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\nenable_nestloop = off\nenable_seqscan = off\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 65536\t\t# typically 8KB each\n#random_page_cost = 4\t\t\t# units are one sequential page fetch\n\t\t\t\t\t# cost\n#cpu_tuple_cost = 0.01\t\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t\t# 
(same)\n#cpu_operator_cost = 0.0025\t\t# (same)\n\n\nthe sysctl.conf\n\nkernel.shmmax = 970170573\nkernel.shmall = 970170573\nkernel.sem = 400 42000 32 1024\nvm.overcommit_memory = 2\n\n=========The configuration is correct?=======\n\nIf you can help me i will be pleased, thanks.\n",
"msg_date": "Fri, 29 Dec 2006 01:32:07 +0000",
"msg_from": "=?iso-8859-1?B?RmFicmljaW8gUGXxdWVsYXM=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql Configutation and overflow"
}
] |
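One detail worth keeping in mind when reading the configuration above: in 8.1, shared_buffers and effective_cache_size are counted in 8 KB pages, not bytes, so the posted values are much smaller than the 12 GB of RAM might suggest. The conversion can be checked directly in psql; the figures below are only that conversion, not a recommendation.

-- Convert the posted page counts into megabytes (8 KB per page).
SELECT 81920 * 8 / 1024 AS shared_buffers_mb,        -- 640 MB of shared buffers
       65536 * 8 / 1024 AS effective_cache_size_mb;  -- 512 MB assumed OS cache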
[
{
"msg_contents": "Good day,\n\nI have been reading about the configuration of postgresql, but I have a\nserver who does not give me the performance that should. The tables are\nindexed and made vacuum regularly, i monitor with top, ps and\npg_stat_activity and when i checked was slow without a heavy load overage.\n\nBefore, the server reached 2000 connections to postgresql (with\nmax_connections=3000 in it for future workflow).\n\nI divided the load with another server for better performance, and now reach\n500 connections, but yet is overflow.\n\n\nMy question is about how much memory should i configure in shared_buffers\nand effective_cache_size.\n\nFeatures:\n\n- 4 Processsors Intel Xeon Dual 3.0Ghz\n- 12 GB RAM\n- 2 discos en RAID 1 for OS\n- 4 discs RAID 5 for DB\n- S.O Slackware 11.0 Linux 2.6.17.7\n- Postgres 8.1.4\n\n\n=====In internet i found this:\n\nTuning PostgreSQL for performance\n2 Some basic parameters\n2.1 Shared buffers\n\n# Start at 4MB (512) for a workstation\n# Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)\n# Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768)\n======\n\n\nMy postgresql.conf configuration is:\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir' # use data in another directory\n#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file\n#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n#external_pid_file = '(none)' # write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = '*' # what IP address(es) to listen on;\n # comma-separated list of addresses;\n # defaults to 'localhost', '*' = all\nport = 5432\nmax_connections = 3000\n# note: increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#bonjour_name = '' # defaults to the computer name\n\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\n\nshared_buffers = 81920 # min 16 or max_connections*2, 8KB each\ntemp_buffers = 5000 # min 100, 8KB each\nmax_prepared_transactions = 1000 # can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 10240 # min 64, size in KB\nmaintenance_work_mem = 253952 # min 1024, size in KB\nmax_stack_depth = 4096 # min 100, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on # turns forced synchronization on or off\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\n#wal_buffers = 8 # min 4, 8KB each\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 20 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a logfile\n # segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\nenable_nestloop = off\nenable_seqscan = off\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 65536 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n\nthe sysctl.conf\n\nkernel.shmmax = 970170573\nkernel.shmall = 970170573\nkernel.sem = 400 42000 32 1024\nvm.overcommit_memory = 2\n\n=========The configuration is 
correct?=======\n\nIf you can help me i will be pleased, thanks.",
"msg_date": "Thu, 28 Dec 2006 18:58:17 -0700",
"msg_from": "\"=?ISO-8859-1?Q?fabrix_pe=F1uelas?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql Configutation and overflow"
},
{
"msg_contents": "What are your table sizes? What are your queries like? (Mostly read,\nmostly write?)\nCan you post the \"analyze\" output for some of the slow queries? \n \nThe three things that stand out for me is your disk configuration (RAID\n5 is not ideal for databases,\nyou really want RAID 1 or 1+0) and also that you have enable_seqscan set\nto off. I would leave\nthat turned on. Lastly, your effective_cache_size looks low. Your OS\nis probably caching more\nthan 512 MB, I know mine is usually 1-2 GB and I don't have 12 GB of ram\navailable.\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of fabrix\npeñuelas\nSent: Thursday, December 28, 2006 7:58 PM\nTo: [email protected]\nSubject: [PERFORM] Postgresql Configutation and overflow\n\n\nGood day,\n \nI have been reading about the configuration of postgresql, but I have a\nserver who does not give me the performance that should. The tables are\nindexed and made vacuum regularly, i monitor with top, ps and\npg_stat_activity and when i checked was slow without a heavy load\noverage. \n\nBefore, the server reached 2000 connections to postgresql (with\nmax_connections=3000 in it for future workflow). \n\nI divided the load with another server for better performance, and now\nreach 500 connections, but yet is overflow. \n\n\nMy question is about how much memory should i configure in\nshared_buffers and effective_cache_size. \n\nFeatures:\n\n- 4 Processsors Intel Xeon Dual 3.0Ghz\n- 12 GB RAM\n- 2 discos en RAID 1 for OS \n- 4 discs RAID 5 for DB\n- S.O Slackware 11.0 Linux 2.6.17.7\n- Postgres 8.1.4\n\n\n=====In internet i found this:\n\nTuning PostgreSQL for performance\n2 Some basic parameters \n2.1 Shared buffers\n\n# Start at 4MB (512) for a workstation\n# Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)\n# Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768)\n======\n\n\nMy postgresql.conf configuration is:\n\n#-----------------------------------------------------------------------\n----\n# FILE LOCATIONS\n#-----------------------------------------------------------------------\n---- \n\n# The default values of these variables are driven from the -D command\nline\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir' # use data in another directory \n#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file\n#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is\nwritten. \n#external_pid_file = '(none)' # write an extra pid file\n\n\n#-----------------------------------------------------------------------\n----\n# CONNECTIONS AND AUTHENTICATION\n#-----------------------------------------------------------------------\n---- \n\n# - Connection Settings -\n\nlisten_addresses = '*' # what IP address(es) to listen on;\n # comma-separated list of addresses;\n # defaults to 'localhost', '*' = all \nport = 5432\nmax_connections = 3000\n# note: increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction). You\n# might also need to raise shared_buffers to support more connections. 
\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#bonjour_name = '' # defaults to the computer name \n\n\n\n#-----------------------------------------------------------------------\n----\n# RESOURCE USAGE (except WAL)\n#-----------------------------------------------------------------------\n----\n\n# - Memory - \n\n\nshared_buffers = 81920 # min 16 or max_connections*2, 8KB\neach\ntemp_buffers = 5000 # min 100, 8KB each\nmax_prepared_transactions = 1000 # can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory \n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 10240 # min 64, size in KB\nmaintenance_work_mem = 253952 # min 1024, size in KB\nmax_stack_depth = 4096 # min 100, size in KB \n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes\neach\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25 \n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits \n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers\nscanned/round \n#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n\n\n#-----------------------------------------------------------------------\n----\n# WRITE AHEAD LOG\n#-----------------------------------------------------------------------\n---- \n\n# - Settings -\n\n#fsync = on # turns forced synchronization on or off\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system: \n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes \n#wal_buffers = 8 # min 4, 8KB each\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 20 # in logfile segments, min 1, 16MB each \n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a logfile \n # segment\n\n\n#-----------------------------------------------------------------------\n----\n# QUERY TUNING\n#-----------------------------------------------------------------------\n----\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\nenable_nestloop = off\nenable_seqscan = off\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 65536 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch\n # cost \n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n\nthe sysctl.conf\n\nkernel.shmmax = 970170573\nkernel.shmall = 970170573 \nkernel.sem = 400 42000 32 1024\nvm.overcommit_memory = 2\n\n=========The configuration is correct?=======\n\nIf you can help me i will be 
pleased, thanks.",
"msg_date": "Thu, 28 Dec 2006 20:12:44 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Configutation and overflow"
},
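A quick way to try Adam's suggestions without editing postgresql.conf is at the session level. The effective_cache_size value below is only an assumption (it should be set from the OS cache actually observed with free or vmstat), and the slow queries would then be re-run under EXPLAIN ANALYZE in the same session.

-- Undo the global planner overrides and give the planner a realistic cache
-- estimate for this session only, then re-run EXPLAIN ANALYZE on a slow query.
SET enable_seqscan = on;
SET enable_nestloop = on;
SET effective_cache_size = 1048576;  -- 1048576 pages * 8 KB = 8 GB (assumed OS cache)
SHOW effective_cache_size;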
{
"msg_contents": "Hi,\n\nOn 28-Dec-06, at 8:58 PM, fabrix pe�uelas wrote:\n\n> Good day,\n>\n> I have been reading about the configuration of postgresql, but I \n> have a server who does not give me the performance that should. The \n> tables are indexed and made vacuum regularly, i monitor with top, \n> ps and pg_stat_activity and when i checked was slow without a heavy \n> load overage.\n>\n> Before, the server reached 2000 connections to postgresql (with \n> max_connections=3000 in it for future workflow).\nWhy would you need 2000 connections ?\n>\n> I divided the load with another server for better performance, and \n> now reach 500 connections, but yet is overflow.\n>\n>\n> My question is about how much memory should i configure in \n> shared_buffers and effective_cache_size.\n\nstart with 25% of your 12G as shared buffers, and 75% of 12G for \neffective cache\n\nYou can go higher for shared buffers, but only do so with testing.\n\nDave\n>\n> Features:\n>\n> - 4 Processsors Intel Xeon Dual 3.0Ghz\n> - 12 GB RAM\n> - 2 discos en RAID 1 for OS\n> - 4 discs RAID 5 for DB\n> - S.O Slackware 11.0 Linux 2.6.17.7\n> - Postgres 8.1.4\n>\n>\n> =====In internet i found this:\n>\n> Tuning PostgreSQL for performance\n> 2 Some basic parameters\n> 2.1 Shared buffers\n>\n> # Start at 4MB (512) for a workstation\n> # Medium size data set and 256-512MB available RAM: 16-32MB \n> (2048-4096)\n> # Large dataset and lots of available RAM (1-4GB): 64-256MB \n> (8192-32768)\n> ======\n>\n>\n> My postgresql.conf configuration is:\n>\n> #--------------------------------------------------------------------- \n> ------\n> # FILE LOCATIONS\n> #--------------------------------------------------------------------- \n> ------\n>\n> # The default values of these variables are driven from the -D \n> command line\n> # switch or PGDATA environment variable, represented here as \n> ConfigDir.\n>\n> #data_directory = 'ConfigDir' # use data in another directory\n> #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication \n> file\n> #ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file\n>\n> # If external_pid_file is not explicitly set, no extra pid file is \n> written.\n> #external_pid_file = '(none)' # write an extra pid file\n>\n>\n> #--------------------------------------------------------------------- \n> ------\n> # CONNECTIONS AND AUTHENTICATION\n> #--------------------------------------------------------------------- \n> ------\n>\n> # - Connection Settings -\n>\n> listen_addresses = '*' # what IP address(es) to listen on;\n> # comma-separated list of addresses;\n> # defaults to 'localhost', '*' = all\n> port = 5432\n> max_connections = 3000\n> # note: increasing max_connections costs ~400 bytes of shared \n> memory per\n> # connection slot, plus lock space (see \n> max_locks_per_transaction). 
You\n> # might also need to raise shared_buffers to support more connections.\n> #superuser_reserved_connections = 2\n> #unix_socket_directory = ''\n> #unix_socket_group = ''\n> #unix_socket_permissions = 0777 # octal\n> #bonjour_name = '' # defaults to the computer name\n>\n>\n>\n> #--------------------------------------------------------------------- \n> ------\n> # RESOURCE USAGE (except WAL)\n> #--------------------------------------------------------------------- \n> ------\n>\n> # - Memory -\n>\n>\n> shared_buffers = 81920 # min 16 or max_connections*2, \n> 8KB each\n> temp_buffers = 5000 # min 100, 8KB each\n> max_prepared_transactions = 1000 # can be 0 or more\n>\n> # note: increasing max_prepared_transactions costs ~600 bytes of \n> shared memory\n>\n> # per transaction slot, plus lock space (see \n> max_locks_per_transaction).\n> work_mem = 10240 # min 64, size in KB\n> maintenance_work_mem = 253952 # min 1024, size in KB\n> max_stack_depth = 4096 # min 100, size in KB\n>\n> # - Free Space Map -\n>\n> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 \n> bytes each\n> #max_fsm_relations = 1000 # min 100, ~70 bytes each\n>\n> # - Kernel Resource Usage -\n>\n> #max_files_per_process = 1000 # min 25\n> #preload_libraries = ''\n>\n> # - Cost-Based Vacuum Delay -\n>\n> #vacuum_cost_delay = 0 # 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> #vacuum_cost_limit = 200 # 0-10000 credits\n>\n> # - Background writer -\n>\n> #bgwriter_delay = 200 # 10-10000 milliseconds between \n> rounds\n> #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/ \n> round\n> #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n> #bgwriter_all_percent = 0.333 # 0-100% of all buffers \n> scanned/round\n> #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n>\n>\n> #--------------------------------------------------------------------- \n> ------\n> # WRITE AHEAD LOG\n> #--------------------------------------------------------------------- \n> ------\n>\n> # - Settings -\n>\n> #fsync = on # turns forced synchronization on or off\n> #wal_sync_method = fsync # the default is the first option\n> # supported by the operating system:\n> # open_datasync\n> # fdatasync\n> # fsync\n> # fsync_writethrough\n> # open_sync\n> #full_page_writes = on # recover from partial page writes\n> #wal_buffers = 8 # min 4, 8KB each\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5 # range 1-1000\n>\n> # - Checkpoints -\n>\n> checkpoint_segments = 20 # in logfile segments, min 1, 16MB \n> each\n> #checkpoint_timeout = 300 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # in seconds, 0 is off\n>\n> # - Archiving -\n>\n> #archive_command = '' # command to use to archive a logfile\n> # segment\n>\n>\n> #--------------------------------------------------------------------- \n> ------\n> # QUERY TUNING\n> #--------------------------------------------------------------------- \n> ------\n>\n> # - Planner Method Configuration -\n>\n> #enable_bitmapscan = on\n> #enable_hashagg = on\n> #enable_hashjoin = on\n> #enable_indexscan = on\n> #enable_mergejoin = on\n> enable_nestloop = off\n> enable_seqscan = off\n> #enable_sort = on\n> #enable_tidscan = on\n>\n> # - Planner Cost Constants -\n>\n> effective_cache_size = 65536 # typically 8KB each\n> #random_page_cost = 4 # units are one sequential page fetch\n> # cost\n> #cpu_tuple_cost = 0.01 # (same)\n> 
#cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n>\n>\n> the sysctl.conf\n>\n> kernel.shmmax = 970170573\n> kernel.shmall = 970170573\n> kernel.sem = 400 42000 32 1024\n> vm.overcommit_memory = 2\n>\n> =========The configuration is correct?=======\n>\n> If you can help me i will be pleased, thanks.\n>",
"msg_date": "Thu, 28 Dec 2006 22:35:29 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Configutation and overflow"
},
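Dave's 25% / 75% rule of thumb, worked out for this machine's 12 GB and converted into the 8 KB pages that 8.1 expects, gives roughly the figures below. These are a starting point to test, not a guaranteed optimum, and the larger shared segment would also require raising kernel.shmmax.

-- 25% of 12 GB for shared_buffers, 75% for effective_cache_size, in 8 KB pages.
SELECT (12 * 1024 * 1024 / 4) / 8     AS shared_buffers_pages,        -- 393216 pages = 3 GB
       (12 * 1024 * 1024 * 3 / 4) / 8 AS effective_cache_size_pages;  -- 1179648 pages = 9 GB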
{
"msg_contents": "On Thu, Dec 28, 2006 at 10:35:29PM -0500, Dave Cramer wrote:\n> start with 25% of your 12G as shared buffers, and 75% of 12G for \n> effective cache\n\nI'm curious... why leave 3G for the kernel? Seems like overkill...\n\nGranted, as long as you're in the ballpark on effective_cache_size\nthat's all that matters...\n\n> You can go higher for shared buffers, but only do so with testing.\n> \n> Dave\n> >\n> >Features:\n> >\n> >- 4 Processsors Intel Xeon Dual 3.0Ghz\n> >- 12 GB RAM\n> >- 2 discos en RAID 1 for OS\n> >- 4 discs RAID 5 for DB\n> >- S.O Slackware 11.0 Linux 2.6.17.7\n> >- Postgres 8.1.4\n> >\n> >\n> >=====In internet i found this:\n> >\n> >Tuning PostgreSQL for performance\n> >2 Some basic parameters\n> >2.1 Shared buffers\n> >\n> ># Start at 4MB (512) for a workstation\n> ># Medium size data set and 256-512MB available RAM: 16-32MB \n> >(2048-4096)\n> ># Large dataset and lots of available RAM (1-4GB): 64-256MB \n> >(8192-32768)\n> >======\n> >\n> >\n> >My postgresql.conf configuration is:\n> >\n> >#--------------------------------------------------------------------- \n> >------\n> ># FILE LOCATIONS\n> >#--------------------------------------------------------------------- \n> >------\n> >\n> ># The default values of these variables are driven from the -D \n> >command line\n> ># switch or PGDATA environment variable, represented here as \n> >ConfigDir.\n> >\n> >#data_directory = 'ConfigDir' # use data in another directory\n> >#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication \n> >file\n> >#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file\n> >\n> ># If external_pid_file is not explicitly set, no extra pid file is \n> >written.\n> >#external_pid_file = '(none)' # write an extra pid file\n> >\n> >\n> >#--------------------------------------------------------------------- \n> >------\n> ># CONNECTIONS AND AUTHENTICATION\n> >#--------------------------------------------------------------------- \n> >------\n> >\n> ># - Connection Settings -\n> >\n> >listen_addresses = '*' # what IP address(es) to listen on;\n> > # comma-separated list of addresses;\n> > # defaults to 'localhost', '*' = all\n> >port = 5432\n> >max_connections = 3000\n> ># note: increasing max_connections costs ~400 bytes of shared \n> >memory per\n> ># connection slot, plus lock space (see \n> >max_locks_per_transaction). 
You\n> ># might also need to raise shared_buffers to support more connections.\n> >#superuser_reserved_connections = 2\n> >#unix_socket_directory = ''\n> >#unix_socket_group = ''\n> >#unix_socket_permissions = 0777 # octal\n> >#bonjour_name = '' # defaults to the computer name\n> >\n> >\n> >\n> >#--------------------------------------------------------------------- \n> >------\n> ># RESOURCE USAGE (except WAL)\n> >#--------------------------------------------------------------------- \n> >------\n> >\n> ># - Memory -\n> >\n> >\n> >shared_buffers = 81920 # min 16 or max_connections*2, \n> >8KB each\n> >temp_buffers = 5000 # min 100, 8KB each\n> >max_prepared_transactions = 1000 # can be 0 or more\n> >\n> ># note: increasing max_prepared_transactions costs ~600 bytes of \n> >shared memory\n> >\n> ># per transaction slot, plus lock space (see \n> >max_locks_per_transaction).\n> >work_mem = 10240 # min 64, size in KB\n> >maintenance_work_mem = 253952 # min 1024, size in KB\n> >max_stack_depth = 4096 # min 100, size in KB\n> >\n> ># - Free Space Map -\n> >\n> >#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 \n> >bytes each\n> >#max_fsm_relations = 1000 # min 100, ~70 bytes each\n> >\n> ># - Kernel Resource Usage -\n> >\n> >#max_files_per_process = 1000 # min 25\n> >#preload_libraries = ''\n> >\n> ># - Cost-Based Vacuum Delay -\n> >\n> >#vacuum_cost_delay = 0 # 0-1000 milliseconds\n> >#vacuum_cost_page_hit = 1 # 0-10000 credits\n> >#vacuum_cost_page_miss = 10 # 0-10000 credits\n> >#vacuum_cost_page_dirty = 20 # 0-10000 credits\n> >#vacuum_cost_limit = 200 # 0-10000 credits\n> >\n> ># - Background writer -\n> >\n> >#bgwriter_delay = 200 # 10-10000 milliseconds between \n> >rounds\n> >#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/ \n> >round\n> >#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n> >#bgwriter_all_percent = 0.333 # 0-100% of all buffers \n> >scanned/round\n> >#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n> >\n> >\n> >#--------------------------------------------------------------------- \n> >------\n> ># WRITE AHEAD LOG\n> >#--------------------------------------------------------------------- \n> >------\n> >\n> ># - Settings -\n> >\n> >#fsync = on # turns forced synchronization on or off\n> >#wal_sync_method = fsync # the default is the first option\n> > # supported by the operating system:\n> > # open_datasync\n> > # fdatasync\n> > # fsync\n> > # fsync_writethrough\n> > # open_sync\n> >#full_page_writes = on # recover from partial page writes\n> >#wal_buffers = 8 # min 4, 8KB each\n> >#commit_delay = 0 # range 0-100000, in microseconds\n> >#commit_siblings = 5 # range 1-1000\n> >\n> ># - Checkpoints -\n> >\n> >checkpoint_segments = 20 # in logfile segments, min 1, 16MB \n> >each\n> >#checkpoint_timeout = 300 # range 30-3600, in seconds\n> >#checkpoint_warning = 30 # in seconds, 0 is off\n> >\n> ># - Archiving -\n> >\n> >#archive_command = '' # command to use to archive a logfile\n> > # segment\n> >\n> >\n> >#--------------------------------------------------------------------- \n> >------\n> ># QUERY TUNING\n> >#--------------------------------------------------------------------- \n> >------\n> >\n> ># - Planner Method Configuration -\n> >\n> >#enable_bitmapscan = on\n> >#enable_hashagg = on\n> >#enable_hashjoin = on\n> >#enable_indexscan = on\n> >#enable_mergejoin = on\n> >enable_nestloop = off\n> >enable_seqscan = off\n> >#enable_sort = on\n> >#enable_tidscan = on\n> >\n> ># - Planner Cost Constants -\n> >\n> 
>effective_cache_size = 65536 # typically 8KB each\n> >#random_page_cost = 4 # units are one sequential page fetch\n> > # cost\n> >#cpu_tuple_cost = 0.01 # (same)\n> >#cpu_index_tuple_cost = 0.001 # (same)\n> >#cpu_operator_cost = 0.0025 # (same)\n> >\n> >\n> >the sysctl.conf\n> >\n> >kernel.shmmax = 970170573\n> >kernel.shmall = 970170573\n> >kernel.sem = 400 42000 32 1024\n> >vm.overcommit_memory = 2\n> >\n> >=========The configuration is correct?=======\n> >\n> >If you can help me i will be pleased, thanks.\n> >\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 9 Jan 2007 10:44:57 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Configutation and overflow"
}
] |
[
{
"msg_contents": "\nHi\n\nI have a simple query which uses 32ms on 7.4.14 and 1015ms on 8.2.0.\nI guess 7.4.14 creates a better execution plan than 8.2.0 for this query but\ni don't know how to get it to select a better one.\nExplain analyse output will be found near the end of the e-mail.\n\n(I have simplified my real query to get it as simple as possible. The original query \ncontain 6 tables and was acceptable on 7.4.2, but took far too long on 8.1.4)\n\nI have made a test setup to compare 7.4.14, 8.1.4 and 8.2.0.\n8.1.4 and 8.2.0 uses the same execution plan and same time to execute.\n\npostgresql.conf values i changed is \n7.4.14 \n\tRaised shared_buffers from 32MB to 128MB\n\tRaised temp_buffers from 8MB to 32MB\n8.2.0\n\tRaised shared_buffers from 32MB to 128MB\n\tRaised temp_buffers from 8MB to 32MB\n\tRaised work_mem from 1MB to 8MB\n\n(It did however not have any influence of speed for \nthe view_subset query shown below.)\n\nvacuum analyze has been executed.\n\nComputer:\n\tDell PowerEdge 2950\n\topenSUSE Linux 10.1\n\tIntel(R) Xeon 3.00GHz\n\t4GB memory\n\txfs filesystem on SAS disks\n\n Table \"public.step_result_subset\"\n Column | Type | Modifiers \n-------------+---------+-----------\n id | integer | not null\n uut_result | integer | \n step_parent | integer | \nIndexes:\n \"step_result_subset_pkey\" PRIMARY KEY, btree (id)\n \"step_result_subset_parent_key\" btree (step_parent)\n \"step_result_uut_result_idx\" btree (uut_result)\nTable contain 17 179 506 rows, and is ~400M when exported to file\n\n Table \"public.uut_result_subset\"\n Column | Type | Modifiers \n-----------------+-----------------------------+-----------\n id | integer | not null\n start_date_time | timestamp without time zone | \nIndexes:\n \"uut_result_subset_pkey\" PRIMARY KEY, btree (id)\n \"uut_result_subset_start_date_time_idx\" btree (start_date_time)\nTable contain ~176 555 rows, and is ~4.7M when exportd to file\n\nQuery is defined as view:\n\ncreate view view_subset as \nselect\n ur.id as ur_id,\n sr.id as sr_id\nfrom\n uut_result_subset as ur\n inner join step_result_subset as sr\n on ur.id=sr.uut_result\nwhere\n ur.start_date_time > '2006-12-11'\n and sr.step_parent=0;\n\nExplain analyze is run several times to get a stable result \nso i guess the numbers presented is with as much as possible\ndata in memory buffers.\n\nColumn step_result_subset.step_parent contain 0 in as many rows as there are rows in table uut_result_subset.\n(In my data set this will be 176 500 rows, Other values for step_result_subset.step_parent is present 1003 times and lower.)\n\nQuery: \"select * from view_subset;\" run against 7.4.14 server.\nQUERY PLAN \n------------------------------------------------------------------------\n Nested Loop (cost=0.00..1400.86 rows=17 width=8) (actual time=0.161..26.287 rows=68 loops=1)\n -> Index Scan using uut_result_subset_start_date_time_idx on uut_result_subset ur (cost=0.00..63.28 rows=18 width=4) (actual time=0.052..0.195 rows=68 loops=1)\n Index Cond: (start_date_time > '2006-12-11 00:00:00'::timestamp without time zone)\n -> Index Scan using step_result_uut_result_idx on step_result_subset sr (cost=0.00..74.28 rows=2 width=8) (actual time=0.149..0.379 rows=1 loops=68)\n Index Cond: (\"outer\".id = sr.uut_result)\n Filter: (step_parent = 0)\n Total runtime: 26.379 ms\n\nQuery: \"select * from view_subset;\" run against 8.4.0 server.\n \nQUERY PLAN \n----------------------------------------------------------------------\n Hash Join (cost=339.61..77103.61 rows=96 width=8) (actual 
time=5.249..1010.669 rows=68 loops=1)\n Hash Cond: (sr.uut_result = ur.id)\n -> Index Scan using step_result_subset_parent_key on step_result_subset sr (cost=0.00..76047.23 rows=143163 width=8) (actual time=0.082..905.326 rows=176449 loops=1)\n Index Cond: (step_parent = 0)\n -> Hash (cost=339.31..339.31 rows=118 width=4) (actual time=0.149..0.149 rows=68 loops=1)\n -> Bitmap Heap Scan on uut_result_subset ur (cost=4.90..339.31 rows=118 width=4) (actual time=0.060..0.099 rows=68 loops=1)\n Recheck Cond: (start_date_time > '2006-12-11 00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on uut_result_subset_start_date_time_idx (cost=0.00..4.90 rows=118 width=0) (actual time=0.050..0.050 rows=68 loops=1)\n Index Cond: (start_date_time > '2006-12-11 00:00:00'::timestamp without time zone)\n Total runtime: 1010.775 ms\n\nThanks for tips.\n\nBest regards\nRolf Østvik\n",
"msg_date": "Sun, 31 Dec 2006 12:33:09 +0100",
"msg_from": "Rolf =?iso-8859-1?q?=D8stvik?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Worse perfomance on 8.2.0 than on 7.4.14"
},
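A quick way to see where the 8.2 plan goes wrong is to compare the planner's row estimate for the step_parent = 0 predicate with the true count. A minimal sketch against the tables described above:

```sql
-- Planner's estimate for the predicate (rows=143163 in the 8.2 plan)
EXPLAIN SELECT id FROM step_result_subset WHERE step_parent = 0;

-- Actual number of matching rows (~176 449 according to EXPLAIN ANALYZE)
SELECT count(*) FROM step_result_subset WHERE step_parent = 0;
```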
{
"msg_contents": "> I have a simple query which uses 32ms on 7.4.14 and 1015ms on 8.2.0.\n> I guess 7.4.14 creates a better execution plan than 8.2.0 for this query but\n> i don't know how to get it to select a better one.\n> Explain analyse output will be found near the end of the e-mail.\n>\n> Explain analyze is run several times to get a stable result\n> so i guess the numbers presented is with as much as possible\n> data in memory buffers.\n>\n> Query: \"select * from view_subset;\" run against 7.4.14 server.\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Nested Loop (cost=0.00..1400.86 rows=17 width=8) (actual time=0.161..26.287 rows=68 loops=1)\n> -> Index Scan using uut_result_subset_start_date_time_idx on uut_result_subset ur (cost=0.00..63.28 rows=18 width=4) (actual time=0.052..0.195 rows=68 loops=1)\n> Index Cond: (start_date_time > '2006-12-11 00:00:00'::timestamp without time zone)\n> -> Index Scan using step_result_uut_result_idx on step_result_subset sr (cost=0.00..74.28 rows=2 width=8) (actual time=0.149..0.379 rows=1 loops=68)\n> Index Cond: (\"outer\".id = sr.uut_result)\n> Filter: (step_parent = 0)\n> Total runtime: 26.379 ms\n>\n> Query: \"select * from view_subset;\" run against 8.4.0 server.\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Hash Join (cost=339.61..77103.61 rows=96 width=8) (actual time=5.249..1010.669 rows=68 loops=1)\n> Hash Cond: (sr.uut_result = ur.id)\n> -> Index Scan using step_result_subset_parent_key on step_result_subset sr (cost=0.00..76047.23 rows=143163 width=8) (actual time=0.082..905.326 rows=176449 loops=1)\n> Index Cond: (step_parent = 0)\n> -> Hash (cost=339.31..339.31 rows=118 width=4) (actual time=0.149..0.149 rows=68 loops=1)\n> -> Bitmap Heap Scan on uut_result_subset ur (cost=4.90..339.31 rows=118 width=4) (actual time=0.060..0.099 rows=68 loops=1)\n> Recheck Cond: (start_date_time > '2006-12-11 00:00:00'::timestamp without time zone)\n> -> Bitmap Index Scan on uut_result_subset_start_date_time_idx (cost=0.00..4.90 rows=118 width=0) (actual time=0.050..0.050 rows=68 loops=1)\n> Index Cond: (start_date_time > '2006-12-11 00:00:00'::timestamp without time zone)\n> Total runtime: 1010.775 ms\n\nDid you lower random_page_cost in 8.2 (which defaults to 4.0)? If not try 2.\n\nregards\nClaus\n",
"msg_date": "Sun, 31 Dec 2006 13:16:43 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
},
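Claus's suggestion can be tried per session without touching postgresql.conf; a minimal sketch:

```sql
-- Lower the planner's random I/O penalty for this session only and re-test
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM view_subset;
-- Put it back to the server default afterwards
RESET random_page_cost;
```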
{
"msg_contents": "\n--- Claus Guttesen <[email protected]> skrev:\n\n> > I have a simple query which uses 32ms on 7.4.14\n> and 1015ms on 8.2.0.\n> > I guess 7.4.14 creates a better execution plan\n> than 8.2.0 for this query but\n> > i don't know how to get it to select a better one.\n> > Explain analyse output will be found near the end\n> of the e-mail.\n> >\n> > Explain analyze is run several times to get a\n> stable result\n> > so i guess the numbers presented is with as much\n> as possible\n> > data in memory buffers.\n> >\n> > Query: \"select * from view_subset;\" run against\n> 7.4.14 server.\n> > QUERY PLAN\n> >\n>\n------------------------------------------------------------------------\n> > Nested Loop (cost=0.00..1400.86 rows=17 width=8)\n> (actual time=0.161..26.287 rows=68 loops=1)\n> > -> Index Scan using\n> uut_result_subset_start_date_time_idx on\n> uut_result_subset ur (cost=0.00..63.28 rows=18\n> width=4) (actual time=0.052..0.195 rows=68 loops=1)\n> > Index Cond: (start_date_time >\n> '2006-12-11 00:00:00'::timestamp without time zone)\n> > -> Index Scan using step_result_uut_result_idx\n> on step_result_subset sr (cost=0.00..74.28 rows=2\n> width=8) (actual time=0.149..0.379 rows=1 loops=68)\n> > Index Cond: (\"outer\".id = sr.uut_result)\n> > Filter: (step_parent = 0)\n> > Total runtime: 26.379 ms\n> >\n> > Query: \"select * from view_subset;\" run against\n> 8.4.0 server.\n> >\n> > QUERY PLAN\n> >\n>\n----------------------------------------------------------------------\n> > Hash Join (cost=339.61..77103.61 rows=96\n> width=8) (actual time=5.249..1010.669 rows=68\n> loops=1)\n> > Hash Cond: (sr.uut_result = ur.id)\n> > -> Index Scan using\n> step_result_subset_parent_key on step_result_subset\n> sr (cost=0.00..76047.23 rows=143163 width=8)\n> (actual time=0.082..905.326 rows=176449 loops=1)\n> > Index Cond: (step_parent = 0)\n> > -> Hash (cost=339.31..339.31 rows=118\n> width=4) (actual time=0.149..0.149 rows=68 loops=1)\n> > -> Bitmap Heap Scan on uut_result_subset\n> ur (cost=4.90..339.31 rows=118 width=4) (actual\n> time=0.060..0.099 rows=68 loops=1)\n> > Recheck Cond: (start_date_time >\n> '2006-12-11 00:00:00'::timestamp without time zone)\n> > -> Bitmap Index Scan on\n> uut_result_subset_start_date_time_idx \n> (cost=0.00..4.90 rows=118 width=0) (actual\n> time=0.050..0.050 rows=68 loops=1)\n> > Index Cond: (start_date_time\n> > '2006-12-11 00:00:00'::timestamp without time\n> zone)\n> > Total runtime: 1010.775 ms\n> \n> Did you lower random_page_cost in 8.2 (which\n> defaults to 4.0)? If not try 2.\n\nThanks for the suggestion, but it was no change of\nresult.\n \n> regards\n> Claus\n\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Sun, 31 Dec 2006 13:54:09 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
},
{
"msg_contents": "Rolf �stvik skrev:\n\n> I have a simple query which uses 32ms on 7.4.14 and 1015ms on 8.2.0.\n> I guess 7.4.14 creates a better execution plan than 8.2.0 for this query but\n\n\nTry to turn off planner options in 8.2 to make it generate the same plan \nas 7.4. Then run EXPLAIN ANALYZE on that query that generate the same \nplan as in 7.4 and we can compare the costs and maybe understand what go \nwrong.\n\nFor example, try\n\nset enable_hashjoin to false;\nset enable_bitmapscan to false;\n\nbut you might need to turn off more things to get it to generate the 7.4 \nplan.\n\n/Dennis\n",
"msg_date": "Sun, 31 Dec 2006 16:27:12 +0100",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
},
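A sketch of Dennis's experiment, run in one session so the settings do not stick; the idea is to capture an 8.2 EXPLAIN ANALYZE for the same nested-loop plan 7.4 chooses:

```sql
-- Force the planner away from the hash-join / bitmap plan it prefers on 8.2
SET enable_hashjoin = off;
SET enable_bitmapscan = off;
EXPLAIN ANALYZE SELECT * FROM view_subset;
-- Restore the defaults once the comparison plan has been captured
RESET enable_hashjoin;
RESET enable_bitmapscan;
```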
{
"msg_contents": "\nOn 31-Dec-06, at 6:33 AM, Rolf �stvik wrote:\n\n>\n> Hi\n>\n> I have a simple query which uses 32ms on 7.4.14 and 1015ms on 8.2.0.\n> I guess 7.4.14 creates a better execution plan than 8.2.0 for this \n> query but\n> i don't know how to get it to select a better one.\n> Explain analyse output will be found near the end of the e-mail.\n>\n> (I have simplified my real query to get it as simple as possible. \n> The original query\n> contain 6 tables and was acceptable on 7.4.2, but took far too long \n> on 8.1.4)\n>\n> I have made a test setup to compare 7.4.14, 8.1.4 and 8.2.0.\n> 8.1.4 and 8.2.0 uses the same execution plan and same time to execute.\n>\n> postgresql.conf values i changed is\n> 7.4.14\n> \tRaised shared_buffers from 32MB to 128MB\n> \tRaised temp_buffers from 8MB to 32MB\n> 8.2.0\n> \tRaised shared_buffers from 32MB to 128MB\n> \tRaised temp_buffers from 8MB to 32MB\n> \tRaised work_mem from 1MB to 8MB\n>\nset effective_cache to 3G\nshared buffers should be 1G on this computer for 8.2\n\nDave\n> (It did however not have any influence of speed for\n> the view_subset query shown below.)\n>\n> vacuum analyze has been executed.\n>\n> Computer:\n> \tDell PowerEdge 2950\n> \topenSUSE Linux 10.1\n> \tIntel(R) Xeon 3.00GHz\n> \t4GB memory\n> \txfs filesystem on SAS disks\n>\n> Table \"public.step_result_subset\"\n> Column | Type | Modifiers\n> -------------+---------+-----------\n> id | integer | not null\n> uut_result | integer |\n> step_parent | integer |\n> Indexes:\n> \"step_result_subset_pkey\" PRIMARY KEY, btree (id)\n> \"step_result_subset_parent_key\" btree (step_parent)\n> \"step_result_uut_result_idx\" btree (uut_result)\n> Table contain 17 179 506 rows, and is ~400M when exported to file\n>\n> Table \"public.uut_result_subset\"\n> Column | Type | Modifiers\n> -----------------+-----------------------------+-----------\n> id | integer | not null\n> start_date_time | timestamp without time zone |\n> Indexes:\n> \"uut_result_subset_pkey\" PRIMARY KEY, btree (id)\n> \"uut_result_subset_start_date_time_idx\" btree (start_date_time)\n> Table contain ~176 555 rows, and is ~4.7M when exportd to file\n>\n> Query is defined as view:\n>\n> create view view_subset as\n> select\n> ur.id as ur_id,\n> sr.id as sr_id\n> from\n> uut_result_subset as ur\n> inner join step_result_subset as sr\n> on ur.id=sr.uut_result\n> where\n> ur.start_date_time > '2006-12-11'\n> and sr.step_parent=0;\n>\n> Explain analyze is run several times to get a stable result\n> so i guess the numbers presented is with as much as possible\n> data in memory buffers.\n>\n> Column step_result_subset.step_parent contain 0 in as many rows as \n> there are rows in table uut_result_subset.\n> (In my data set this will be 176 500 rows, Other values for \n> step_result_subset.step_parent is present 1003 times and lower.)\n>\n> Query: \"select * from view_subset;\" run against 7.4.14 server.\n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> --\n> Nested Loop (cost=0.00..1400.86 rows=17 width=8) (actual \n> time=0.161..26.287 rows=68 loops=1)\n> -> Index Scan using uut_result_subset_start_date_time_idx on \n> uut_result_subset ur (cost=0.00..63.28 rows=18 width=4) (actual \n> time=0.052..0.195 rows=68 loops=1)\n> Index Cond: (start_date_time > '2006-12-11 \n> 00:00:00'::timestamp without time zone)\n> -> Index Scan using step_result_uut_result_idx on \n> step_result_subset sr (cost=0.00..74.28 rows=2 width=8) (actual \n> time=0.149..0.379 rows=1 loops=68)\n> Index 
Cond: (\"outer\".id = sr.uut_result)\n> Filter: (step_parent = 0)\n> Total runtime: 26.379 ms\n>\n> Query: \"select * from view_subset;\" run against 8.4.0 server.\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Hash Join (cost=339.61..77103.61 rows=96 width=8) (actual \n> time=5.249..1010.669 rows=68 loops=1)\n> Hash Cond: (sr.uut_result = ur.id)\n> -> Index Scan using step_result_subset_parent_key on \n> step_result_subset sr (cost=0.00..76047.23 rows=143163 width=8) \n> (actual time=0.082..905.326 rows=176449 loops=1)\n> Index Cond: (step_parent = 0)\n> -> Hash (cost=339.31..339.31 rows=118 width=4) (actual \n> time=0.149..0.149 rows=68 loops=1)\n> -> Bitmap Heap Scan on uut_result_subset ur \n> (cost=4.90..339.31 rows=118 width=4) (actual time=0.060..0.099 \n> rows=68 loops=1)\n> Recheck Cond: (start_date_time > '2006-12-11 \n> 00:00:00'::timestamp without time zone)\n> -> Bitmap Index Scan on \n> uut_result_subset_start_date_time_idx (cost=0.00..4.90 rows=118 \n> width=0) (actual time=0.050..0.050 rows=68 loops=1)\n> Index Cond: (start_date_time > '2006-12-11 \n> 00:00:00'::timestamp without time zone)\n> Total runtime: 1010.775 ms\n>\n> Thanks for tips.\n>\n> Best regards\n> Rolf �stvik\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Sun, 31 Dec 2006 11:26:53 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
},
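effective_cache_size can be tested from a session before committing it to postgresql.conf; shared_buffers itself needs a config edit and a restart. A sketch using Dave's suggested value, given in 8kB pages so it works on both 7.4 and 8.2:

```sql
-- Session-level test of a larger effective_cache_size (planner hint only)
SET effective_cache_size = 393216;  -- ~3GB expressed in 8kB pages
EXPLAIN ANALYZE SELECT * FROM view_subset;
RESET effective_cache_size;
```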
{
"msg_contents": "Rolf =?iso-8859-1?q?=D8stvik?= <[email protected]> writes:\n> I have a simple query which uses 32ms on 7.4.14 and 1015ms on 8.2.0.\n\nThere's something awfully strange about that 8.2 plan --- if it knew\nthat it'd have to scan all of uut_result_subset (which it should have\nknown, if the stats were up-to-date), why did it use an indexscan\nrather than a seqscan? Are you sure you haven't tweaked any parameters\nyou didn't tell us about, such as setting enable_seqscan = off?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Dec 2006 12:11:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
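One way to answer Tom's question about tweaked parameters is to ask the server which settings differ from their built-in defaults; a sketch:

```sql
-- List every setting whose value does not come from the built-in default
SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;

-- And confirm the specific suspect
SHOW enable_seqscan;
```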
{
"msg_contents": "(I am sorry if my explain analyze outputs gets\ngarbled)\n\n--- Dennis Bjorklund <[email protected]> skrev:\n\n> Rolf �stvik skrev:\n> \n> > I have a simple query which uses 32ms on 7.4.14\n> and 1015ms on 8.2.0.\n> > I guess 7.4.14 creates a better execution plan\n> than 8.2.0 for this query but\n> \n> \n> Try to turn off planner options in 8.2 to make it\n> generate the same plan \n> as 7.4. Then run EXPLAIN ANALYZE on that query that\n> generate the same \n> plan as in 7.4 and we can compare the costs and\n> maybe understand what go \n> wrong.\n>\n> /Dennis\n> \n\nI have adjusted some settings in the postgresql.conf\nfile:\n7.4.14\n shared_buffers=64000 #512MB\n sort_mem=32000 #32KB\n effective_cache_size=128000 #1GB\n8.2.0\n shared_buffers=512MB\n temp_buffers=32MB\n work_mem=8MB\n effective_cache_size=1GB\n random_page_cost=2.0\n\nAnd also disabled some planner options in 8.2\n enable_bitscanmap = off\n enable_hashjoin = off\n\nNB: enable_seqscan = on (default value)\n\nFirst i have some queries to give you a feel of size\nof\ndatasets and plans and times. \n\nQ-A: (Simple query A)\n select sr.id from step_result_subset as sr\n where sr.step_parent = 0;\nQ-B: (Simple query B)\n select ur.id from uut_result_subset as ur\n where ur.start_date_time > '2006-12-11'; \nQ-C: (Simple query C)\n select ur.id from uut_result_subset as ur\n where ur.start_date_time > '2006-12-11'; \n\n7.4.14\nExplain analyze of Q-A on 7.1.14:\n Index Scan using step_result_subset_parent_key on\nstep_result_subset sr (cost=0.00..42793.67\nrows=166069 width=4) (actual time=0.091..1201.073\nrows=176449 loops=1)\n Index Cond: (step_parent = 0)\n Total runtime: 1263.592 ms\n(3 rows)\n\nExplain analyze of Q-B on 7.1.14:\n Index Scan using\nuut_result_subset_start_date_time_idx on\nuut_result_subset ur (cost=0.00..63.28 rows=18\nwidth=4) (actual time=0.081..0.190 rows=68 loops=1)\n Index Cond: (start_date_time > '2006-12-11\n00:00:00'::timestamp without time zone)\n Total runtime: 0.242 ms\n(3 rows)\n\nExplain analyze of Q-C on 7.1.14:\n Seq Scan on uut_result_subset ur (cost=0.00..3161.94\nrows=28640 width=4) (actual time=0.059..108.159\nrows=29144 loops=1)\n Filter: (start_date_time > '2006-09-11\n00:00:00'::timestamp without time zone)\n Total runtime: 117.560 ms\n(3 rows)\n\n8.2.0\nExplain analyze of Q-A on 8.2.0:\n Index Scan using step_result_subset_parent_key on\nstep_result_subset sr (cost=0.00..26759.41\nrows=143163 width=4) (actual time=0.099..924.039\nrows=176449 loops=1)\n Index Cond: (step_parent = 0)\n Total runtime: 998.757 ms\n(3 rows)\n\nExplain analyze of Q-A on 8.2.0:\n Index Scan using\nuut_result_subset_start_date_time_idx on\nuut_result_subset ur (cost=0.00..196.15 rows=118\nwidth=4) (actual time=0.025..0.081 rows=68 loops=1)\n Index Cond: (start_date_time > '2006-12-11\n00:00:00'::timestamp without time zone)\n Total runtime: 0.159 ms\n(3 rows)\n\nExplain analyze of Q-C on 8.2.0:\n Index Scan using\nuut_result_subset_start_date_time_idx on\nuut_result_subset ur (cost=0.00..2382.39 rows=31340\nwidth=4) (actual time=0.035..35.367 rows=29144\nloops=1)\n Index Cond: (start_date_time > '2006-09-11\n00:00:00'::timestamp without time zone)\n Total runtime: 47.168 ms\n(3 rows)\n\nHere is the complex query/view.\ncreate view view_subset as \nselect\n ur.id as ur_id,\n sr.id as sr_id\nfrom\n uut_result_subset as ur\n inner join step_result_subset as sr\n on ur.id=sr.uut_result\nwhere\n ur.start_date_time > '2006-12-11'\n and sr.step_parent=0\n;\n\nQuery with start_date_time > '2006-12-11' 
on 7.4.14\nQuery PLAN\n---------------\n Nested Loop (cost=0.00..1400.86 rows=17 width=8)\n(actual time=0.066..12.754 rows=68 loops=1)\n -> Index Scan using\nuut_result_subset_start_date_time_idx on\nuut_result_subset ur (cost=0.00..63.28 rows=18\nwidth=4) (actual time=0.019..0.136 rows=68 loops=1)\n Index Cond: (start_date_time > '2006-12-11\n00:00:00'::timestamp without time zone)\n -> Index Scan using step_result_uut_result_idx on\nstep_result_subset sr (cost=0.00..74.28 rows=2\nwidth=8) (actual time=0.085..0.182 rows=1 loops=68)\n Index Cond: (\"outer\".id = sr.uut_result)\n Filter: (step_parent = 0)\n Total runtime: 12.849 ms\n\nQuery with start_date_time > '2006-12-11' on 8.2.0\nQuery PLAN\n---------------\n Nested Loop (cost=0.00..35860.83 rows=96 width=8)\n(actual time=11.891..2339.878 rows=68 loops=1)\n -> Index Scan using step_result_subset_parent_key\non step_result_subset sr (cost=0.00..26759.41\nrows=143163 width=8) (actual time=0.083..1017.500\nrows=176449 loops=1)\n Index Cond: (step_parent = 0)\n -> Index Scan using uut_result_subset_pkey on\nuut_result_subset ur (cost=0.00..0.05 rows=1 width=4)\n(actual time=0.006..0.006 rows=0 loops=176449)\n Index Cond: (ur.id = sr.uut_result)\n Filter: (start_date_time > '2006-12-11\n00:00:00'::timestamp without time zone)\n Total runtime: 2339.974 ms\n\nI also wanted to try it with a bigger dataset so i \nset the restriction of start_date_time to\n\"start_date_time> '2006-09-11'\"\nI also then set \"enable_hashjoin = on\" to get same\nplans on 7.4.14 and 8.2.0.\n\nQuery with start_date_time > '2006-09-11' on 7.4.14\nQuery PLAN\n---------------\n Hash Join (cost=3233.54..47126.96 rows=26940\nwidth=8) (actual time=126.437..1489.584 rows=29139\nloops=1)\n Hash Cond: (\"outer\".uut_result = \"inner\".id)\n -> Index Scan using step_result_subset_parent_key\non step_result_subset sr (cost=0.00..42793.67\nrows=166069 width=8) (actual time=0.078..1137.123\nrows=176449 loops=1)\n Index Cond: (step_parent = 0)\n -> Hash (cost=3161.94..3161.94 rows=28640\nwidth=4) (actual time=126.068..126.068 rows=0 loops=1)\n -> Seq Scan on uut_result_subset ur \n(cost=0.00..3161.94 rows=28640 width=4) (actual\ntime=0.063..107.041 rows=29144 loops=1)\n Filter: (start_date_time > '2006-09-11\n00:00:00'::timestamp without time zone)\n Total runtime: 1504.600 ms\n(8 rows)\n\nQuery with start_date_time > '2006-09-11' on 8.2.0\nQuery PLAN\n---------------\n Hash Join (cost=2460.74..32695.45 rows=25413\nwidth=8) (actual time=61.453..1198.048 rows=29139\nloops=1)\n Hash Cond: (sr.uut_result = ur.id)\n -> Index Scan using step_result_subset_parent_key\non step_result_subset sr (cost=0.00..26759.41\nrows=143163 width=8) (actual time=0.089..937.124\nrows=176449 loops=1)\n Index Cond: (step_parent = 0)\n -> Hash (cost=2382.39..2382.39 rows=31340\nwidth=4) (actual time=55.975..55.975 rows=29144\nloops=1)\n -> Index Scan using\nuut_result_subset_start_date_time_idx on\nuut_result_subset ur (cost=0.00..2382.39 rows=31340\nwidth=4) (actual time=0.051..35.635 rows=29144\nloops=1)\n Index Cond: (start_date_time >\n'2006-09-11 00:00:00'::timestamp without time zone)\n Total runtime: 1212.910 ms\n(8 rows)\n\nOther comments.\nI am _beginning_ to get a feeling of adjusting\nparameters and how my dataset behaves. 
8.2.0 does (as\nexpected) work much better on bigger datasets than\n7.4.14.\nI was still hoping that i could get better response \ntimes since i think the Q-C query \n(ur.start_date_time > '2006-09-11') should be the\nbiggest restrictor of the datasets i want to look at.\n>From what i can understand that is what happens with\nthe query plan for 7.4.14 when restriction is \n\"ur.start_date_time > '2006-12-11'\".\n\nBest regards\nRolf �stvik\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Mon, 1 Jan 2007 16:04:10 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
},
{
"msg_contents": "\n--- Dave Cramer <[email protected]> skrev:\n\n> \n> On 31-Dec-06, at 6:33 AM, Rolf �stvik wrote:\n> \n> >\n> > Hi\n> >\n> > I have a simple query which uses 32ms on 7.4.14\n> and 1015ms on 8.2.0.\n> > I guess 7.4.14 creates a better execution plan\n> than 8.2.0 for this \n> > query but\n> > i don't know how to get it to select a better one.\n> > Explain analyse output will be found near the end\n> of the e-mail.\n> >\n> > (I have simplified my real query to get it as\n> simple as possible. \n> > The original query\n> > contain 6 tables and was acceptable on 7.4.2, but\n> took far too long \n> > on 8.1.4)\n> >\n> > I have made a test setup to compare 7.4.14, 8.1.4\n> and 8.2.0.\n> > 8.1.4 and 8.2.0 uses the same execution plan and\n> same time to execute.\n> >\n> > postgresql.conf values i changed is\n> > 7.4.14\n> > \tRaised shared_buffers from 32MB to 128MB\n> > \tRaised temp_buffers from 8MB to 32MB\n> > 8.2.0\n> > \tRaised shared_buffers from 32MB to 128MB\n> > \tRaised temp_buffers from 8MB to 32MB\n> > \tRaised work_mem from 1MB to 8MB\n> >\n> set effective_cache to 3G\n> shared buffers should be 1G on this computer for 8.2\n\nThanks for the input. Did not have a big influence on\nmy specific problem but comments as this is very\nvaluable in the total setup of my server.\n\nBest regards\nRolf �stvik\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Mon, 1 Jan 2007 16:14:14 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
},
{
"msg_contents": "\n--- Tom Lane <[email protected]> skrev:\n\n> Rolf =?iso-8859-1?q?=D8stvik?= <[email protected]>\n> writes:\n> > I have a simple query which uses 32ms on 7.4.14\n> and 1015ms on 8.2.0.\n> \n> There's something awfully strange about that 8.2\n> plan --- if it knew\n> that it'd have to scan all of uut_result_subset\n> (which it should have\n> known, if the stats were up-to-date), \n\nI can't really see the need for it to do an sequence\nscan, but that is me not knowing how things work.\n\n>why did it use\n> an indexscan\n> rather than a seqscan? Are you sure you haven't\n> tweaked any parameters\n> you didn't tell us about, such as setting\n> enable_seqscan = off?\n\nI haven't touched enable_seqscan.\nIt could be that i have forgotton to tell you about a\nparameter i have tweaked, but i doubt it.\n\nBest regards\nRolf �stvik\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Mon, 1 Jan 2007 16:19:05 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
{
"msg_contents": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]> writes:\n> First i have some queries to give you a feel of size\n> of datasets and plans and times. \n\nYou said earlier that essentially all the rows of step_result_subset\nhave step_parent = 0 ... is that really true? I can hardly believe\nthat either 7.4 or 8.2 would use an indexscan for Q-A if so.\n\nI'd be interested to see the results of\n\nprepare foo(int) as select id from step_result_subset sr\n where uut_result = $1 and step_parent = 0;\nexplain analyze execute foo(42);\n\n(use some representative uut_result value instead of 42). If it doesn't\nwant to use an indexscan for this, disable plan types until it does.\nThis would perhaps shed some light on why 8.2 doesn't want to use a scan\nlike that as the inside of a nestloop.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Jan 2007 18:30:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
{
"msg_contents": "\n--- Tom Lane <[email protected]> skrev:\n\n> =?iso-8859-1?q?Rolf=20=D8stvik?=\n> <[email protected]> writes:\n> > First i have some queries to give you a feel of\n> size\n> > of datasets and plans and times. \n> \n> You said earlier that essentially all the rows of\n> step_result_subset\n> have step_parent = 0 ... is that really true?\n\nNot true, but i am sorry if it could be intepreted\nthat way.\nWhat i tried to say was \n step_result_subset contain 17 179 506 rows\n uut_Result_subset contain 176 555 rows\n\nThere is one entry in step_result_subset with the\ncondition step_parent = 0 for each entry in\nuut_result_subset (there is 176 555 rows in\nstep_result_subset which have step_parent = 0).\n\nFor this (sample) query i have found that if i select\njust a little bigger data set (setting start_date_time\nto an earlier date) the plan selected by the server\ndoes the best job and gives a more stable execution\ntime independent of size of data sets. I also have\nfound that my theories of the best solution has been\nwrong. \n\n\nIf you (Tom) still want me to do the following steps\nthen please tell me.\n\n> I can\n> hardly believe\n> that either 7.4 or 8.2 would use an indexscan for\n> Q-A if so.\n> \n> I'd be interested to see the results of\n> \n> prepare foo(int) as select id from\n> step_result_subset sr\n> where uut_result = $1 and step_parent = 0;\n> explain analyze execute foo(42);\n> \n> (use some representative uut_result value instead of\n> 42). If it doesn't\n> want to use an indexscan for this, disable plan\n> types until it does.\n> This would perhaps shed some light on why 8.2\n> doesn't want to use a scan\n> like that as the inside of a nestloop.\n> \n> \t\t\tregards, tom lane\n> \n\nBest regards\nRolf �stvik\n\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Tue, 2 Jan 2007 13:45:18 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
{
"msg_contents": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]> writes:\n> If you (Tom) still want me to do the following steps\n> then please tell me.\n\nPlease --- I'm still curious why the estimated cost changed so much from\n7.4 to 8.2. I can believe a marginal change in cost leading to a plan\nswitch, but comparing the total-cost numbers shows that 8.2 must think\nthat indexscan is a whole lot more expensive than 7.4 did, which seems\nodd. For the most part 8.2 ought to think nestloop-with-inner-indexscan\nis cheaper than 7.4 did, because we now account for caching effects across\nrepeated iterations of the inner scan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Jan 2007 10:07:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
{
"msg_contents": "\n--- Tom Lane <[email protected]> skrev:\n\n> \n> Please --- I'm still curious why the estimated cost changed so much from\n> 7.4 to 8.2. I can believe a marginal change in cost leading to a plan\n\nIs this the output you need?\n\nlogistics_82=# prepare foo(int) as select id from step_result_subset where uut_Result = $1 and\nstep_parent = 0;\nPREPARE\nlogistics_82=# explain analyze execute foo(180226);\nQUERY PLAN\n-----------------------------------------------\n Index Scan using step_result_uut_result_idx on step_result_subset (cost=0.00..563.85 rows=23\nwidth=4) (actual time=0.069..0.069 rows=0 loops=1)\n Index Cond: (uut_result = $1)\n Filter: (step_parent = 0)\n Total runtime: 0.112 ms\n(4 rows)\n\nBest regards,\nRolf �stvik\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Wed, 3 Jan 2007 09:01:37 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
{
"msg_contents": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]> writes:\n> Index Scan using step_result_uut_result_idx on step_result_subset (cost=0.00..563.85 rows=23\n> width=4) (actual time=0.069..0.069 rows=0 loops=1)\n> Index Cond: (uut_result = $1)\n> Filter: (step_parent = 0)\n> Total runtime: 0.112 ms\n> (4 rows)\n\nHm, that's interesting. In your original message we have the following\nfor 7.4's estimate of the same plan step:\n\n -> Index Scan using step_result_uut_result_idx on step_result_subset sr (cost=0.00..74.28 rows=2 width=8) (actual time=0.149..0.379 rows=1 loops=68)\n Index Cond: (\"outer\".id = sr.uut_result)\n Filter: (step_parent = 0)\n\nThe number-of-matching-rows estimate has gone up by a factor of 10,\nwhich undoubtedly has a lot to do with the much higher cost estimate.\nDo you have any idea why that is ... is the table really the same size\nin both servers? If so, could we see the pg_stats row for\nstep_result_subset.uut_result on both servers?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Jan 2007 10:31:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
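The pg_stats lookup Tom asks for would look roughly like this (note that the table name needs to be quoted as a string literal):

```sql
SELECT null_frac, n_distinct, correlation, most_common_vals
FROM pg_stats
WHERE tablename = 'step_result_subset'
  AND attname = 'uut_result';
```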
{
"msg_contents": "\n--- Tom Lane <[email protected]> skrev:\n\n> The number-of-matching-rows estimate has gone up by a factor of 10,\n> which undoubtedly has a lot to do with the much higher cost estimate.\n> Do you have any idea why that is ... is the table really the same size\n> in both servers? If so, could we see the pg_stats row for\n> step_result_subset.uut_result on both servers?\n\nTable step_result_subset and uut_result_subset in both databases is created from same schema\ndefinition file and filled with data from the same data source file.\n\n==== Server 7.4.14: ====\n\nlogistics_74# select count(*) from step_result_subset;\n count \n----------\n 17179506\n(1 row)\n\nlogistics_74# select count(distinct uut_result) from step_result_subset;\n count \n--------\n 176450\n(1 row)\n\nlogistics_74# analyse verbose step_result_subset;\nINFO: analyzing \"public.step_result_subset\"\nINFO: \"step_result_subset\": 92863 pages, 3000 rows sampled, 17179655 estimated total rows\nANALYZE\n\nlogistics_74# select * from pg_stats where tablename = step_result_subset and\nattname='uut_result';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | \n most_common_vals | \nmost_common_freqs | \nhistogram_bounds | correlation \n------------+--------------------+------------+-----------+-----------+------------+----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-------------\n public | step_result_subset | uut_result | 0 | 4 | 57503 |\n{70335,145211,17229,20091,21827,33338,34370,42426,47274,54146} |\n{0.001,0.001,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667}\n| {213,30974,51300,68529,85053,100838,114971,128126,144230,161657,176691} | 0.951364\n(1 row)\n\n\n==== Server 8.2.0: ====\n\nlogistics_82# select count(*) from step_result_subset;\n count \n----------\n 17179506\n(1 row)\n\nlogistics_82# select count(distinct uut_result) from step_result_subset;\n count \n--------\n 176450\n(1 row)\n\nlogistics_82# analyse verbose step_result_subset;\nINFO: analyzing \"public.step_result_subset\"\nINFO: \"step_result_subset\": scanned 3000 of 92863 pages, containing 555000 live rows and 0 dead\nrows; 3000 rows in sample, 17179655 estimated total rows\nANALYZE\n\nlogistics_# select * from pg_stats where tablename = step_result_subset and attname='uut_result';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | \n most_common_vals | \nmost_common_freqs | \nhistogram_bounds | correlation \n------------+--------------------+------------+-----------+-----------+------------+-----------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-------------\n public | step_result_subset | uut_result | 0 | 4 | 6516 |\n{35010,111592,35790,41162,56844,57444,60709,73017,76295,106470} |\n{0.00166667,0.00166667,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333}\n| {147,31791,54286,70928,85996,102668,117885,130947,144766,162098,176685} | 0.954647\n(1 row)\n\nThen on server 8.2.0 i need to set statistics to ~120 on step_result_subset.uut_result to get\nn_distinct to be in same range as n_distinct on 7.4.14.\n\nEven with a 
statistics value of 1000, the n_distinct value does only reach ~138 000. Is it correct\nthat _ideally_ the n_distinct value should be the same as \"select count(distinct uut_result) from\nstep_result_subset\"? \n\n====\nEven with better statistics on step_result_subset.uut_result neither of 7.4.14 or 8.2.0 manages to\npick the best plan when i want to select bigger datasets (in my examples that would be to set an\nearlier date in the where clause for \"ur.start_date_time > '2006-12-11'\"). I will continue to\nadjust other parameters and see what i can manage myself.\n\nBest regards \nRolf �stvik\n\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Fri, 5 Jan 2007 19:28:33 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14 "
},
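Raising the per-column statistics target, as Rolf describes, is done with ALTER TABLE followed by a fresh ANALYZE; a sketch:

```sql
-- Sample more rows for this column so ndistinct is estimated better
ALTER TABLE step_result_subset ALTER COLUMN uut_result SET STATISTICS 1000;
ANALYZE step_result_subset;

-- Check the effect on the stored estimate
SELECT n_distinct
FROM pg_stats
WHERE tablename = 'step_result_subset' AND attname = 'uut_result';
```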
{
"msg_contents": "On Fri, 2007-01-05 at 19:28 +0100, Rolf Østvik wrote:\n> --- Tom Lane <[email protected]> skrev:\n> \n> > The number-of-matching-rows estimate has gone up by a factor of 10,\n> > which undoubtedly has a lot to do with the much higher cost estimate.\n> > Do you have any idea why that is ... is the table really the same size\n> > in both servers? If so, could we see the pg_stats row for\n> > step_result_subset.uut_result on both servers?\n> \n> Table step_result_subset and uut_result_subset in both databases is created from same schema\n> definition file and filled with data from the same data source file.\n> \n> ==== Server 7.4.14: ====\n> \n> logistics_74# select count(*) from step_result_subset;\n> count \n> ----------\n> 17179506\n> (1 row)\n> \n> logistics_74# select count(distinct uut_result) from step_result_subset;\n> count \n> --------\n> 176450\n> (1 row)\n> \n> logistics_74# analyse verbose step_result_subset;\n> INFO: analyzing \"public.step_result_subset\"\n> INFO: \"step_result_subset\": 92863 pages, 3000 rows sampled, 17179655 estimated total rows\n> ANALYZE\n> \n> logistics_74# select * from pg_stats where tablename = step_result_subset and\n> attname='uut_result';\n> schemaname | tablename | attname | null_frac | avg_width | n_distinct | \n> most_common_vals | \n> most_common_freqs | \n> histogram_bounds | correlation \n> ------------+--------------------+------------+-----------+-----------+------------+----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-------------\n> public | step_result_subset | uut_result | 0 | 4 | 57503 |\n> {70335,145211,17229,20091,21827,33338,34370,42426,47274,54146} |\n> {0.001,0.001,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667,0.000666667}\n> | {213,30974,51300,68529,85053,100838,114971,128126,144230,161657,176691} | 0.951364\n> (1 row)\n> \n> \n> ==== Server 8.2.0: ====\n> \n> logistics_82# select count(*) from step_result_subset;\n> count \n> ----------\n> 17179506\n> (1 row)\n> \n> logistics_82# select count(distinct uut_result) from step_result_subset;\n> count \n> --------\n> 176450\n> (1 row)\n> \n> logistics_82# analyse verbose step_result_subset;\n> INFO: analyzing \"public.step_result_subset\"\n> INFO: \"step_result_subset\": scanned 3000 of 92863 pages, containing 555000 live rows and 0 dead\n> rows; 3000 rows in sample, 17179655 estimated total rows\n> ANALYZE\n> \n> logistics_# select * from pg_stats where tablename = step_result_subset and attname='uut_result';\n> schemaname | tablename | attname | null_frac | avg_width | n_distinct | \n> most_common_vals | \n> most_common_freqs | \n> histogram_bounds | correlation \n> ------------+--------------------+------------+-----------+-----------+------------+-----------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-------------\n> public | step_result_subset | uut_result | 0 | 4 | 6516 |\n> {35010,111592,35790,41162,56844,57444,60709,73017,76295,106470} |\n> {0.00166667,0.00166667,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333,0.00133333}\n> | {147,31791,54286,70928,85996,102668,117885,130947,144766,162098,176685} | 
0.954647\n> (1 row)\n> \n> Then on server 8.2.0 i need to set statistics to ~120 on step_result_subset.uut_result to get\n> n_distinct to be in same range as n_distinct on 7.4.14.\n> \n> Even with a statistics value of 1000, the n_distinct value does only reach ~138 000. Is it correct\n> that _ideally_ the n_distinct value should be the same as \"select count(distinct uut_result) from\n> step_result_subset\"? \n\nThat is correct, as long as the number hasn't changed between the\nANALYZE and the select.\n\n> Even with better statistics on step_result_subset.uut_result neither of 7.4.14 or 8.2.0 manages to\n> pick the best plan when i want to select bigger datasets (in my examples that would be to set an\n> earlier date in the where clause for \"ur.start_date_time > '2006-12-11'\"). I will continue to\n> adjust other parameters and see what i can manage myself.\n\nThe ndistinct figure is very sensitive.\n\nCould you re-run ANALYZE say 10 times each on the two release levels?\nThat will give you a better feel for the spread of likely values.\n\nThe distribution of rows with those values also makes a difference to\nthe results. ANALYZE assumes that all values are randomly distributed\nwithin the table, so if the values are clumped together for whatever\nreason the ndistinct calc is less likely to take that into account.\n\nThe larger sample size gained by increasing stats target does make a\ndifference.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 06 Jan 2007 10:17:04 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
},
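Simon's suggestion amounts to re-sampling several times and watching how much ndistinct moves; a sketch of one iteration (repeat it roughly ten times on each release and note the spread):

```sql
ANALYZE step_result_subset;
SELECT n_distinct, correlation
FROM pg_stats
WHERE tablename = 'step_result_subset' AND attname = 'uut_result';
```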
{
"msg_contents": "\n--- Simon Riggs <[email protected]> skrev:\n\n> \n> The distribution of rows with those values also makes a difference to\n> the results. ANALYZE assumes that all values are randomly distributed\n> within the table, so if the values are clumped together for whatever\n> reason the ndistinct calc is less likely to take that into account.\n\nThis is an important factor.\n\nAs a summary, one table is defined like this:\n\nTable \"public.step_result_subset\"\n Column | Type | Modifiers \n-------------+---------+-----------\n id | integer | not null\n uut_result | integer | \n step_parent | integer | \nIndexes:\n \"step_result_subset_pkey\" PRIMARY KEY, btree (id)\n \"step_result_subset_parent_key\" btree (step_parent)\n \"step_result_uut_result_idx\" btree (uut_result)\n\nThe values in step_result_subset.uut_result is clumped together (between 10 and 1000 of same\nvalue, and also increasing through the table).\nThe rows where step_result_subset.step_parent is 0 (a special case) is distributed within the\ntable. \n\nEven when i set statistics on test_result_subset.uut_result to 1000 7.4.14 picks a better plan\nthan 8.2.0 for some returned datasets. The best results for both 7.4.14 and 8.2.0 is if i remove\nthe index step_result_subset_parent_key.\nI will have to check if other queries which uses step_result_subset.step_parent will be \"broken\"\nby removing the index but i think it should be ok.\n\n\nI have gotten some ideas from this thread , read some more documentation, read the archives, and\ntested other queries and will try to speed up some more advance queries.\n\nThanks everyone.\n\nbest regards\nRolf �stvik\n\n\n\n__________________________________________________\nBruker du Yahoo!?\nLei av spam? Yahoo! Mail har den beste spambeskyttelsen \nhttp://no.mail.yahoo.com \n",
"msg_date": "Tue, 9 Jan 2007 15:15:15 +0100 (CET)",
"msg_from": "=?iso-8859-1?q?Rolf=20=D8stvik?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Worse perfomance on 8.2.0 than on 7.4.14"
}
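If no other query turns out to depend on the step_parent index, Rolf's final experiment could be scripted roughly as below; dropping an index is easy to reverse, but should only be done after checking the other workloads he mentions.

```sql
-- Remove the index that tempts the planner into the slow full index scan
DROP INDEX step_result_subset_parent_key;
EXPLAIN ANALYZE SELECT * FROM view_subset;

-- Recreate it if other queries need it after all
CREATE INDEX step_result_subset_parent_key ON step_result_subset (step_parent);
```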
] |
[
{
"msg_contents": "I'm using pg_dump/pg_restore to quickly copy databases between servers. But my server keeps crashing when I run pg_restore:\n\n glibc detected *** double free or corruption (!prev): 0x0a00b1a0\n\nPostgres: 8.1.4\n Linux: 2.6.12-1.1381_FC3\n glibc: 2.3.6-0.fc3.1\n\nServer: Dell\n CPU: Xeon 2.80GHz\nMemory: 4 GB\n\nThis is pretty repeatable. Any particular pg_dump file that causes the crash will cause it every time it is used, and it happens with a lot of my databases.\n\nWhat can I do to help diagnose this problem?\n\nCraig\n\n\n\n",
"msg_date": "Mon, 01 Jan 2007 17:29:30 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "glibc double-free error"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> I'm using pg_dump/pg_restore to quickly copy databases between servers. But my server keeps crashing when I run pg_restore:\n> glibc detected *** double free or corruption (!prev): 0x0a00b1a0\n\n> What can I do to help diagnose this problem?\n\nEither dig into it yourself with gdb, or send me a not-too-large example\ndump file (off-list)...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Jan 2007 20:39:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: glibc double-free error "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Craig A. James\" <[email protected]> writes:\n>> I'm using pg_dump/pg_restore to quickly copy databases between servers. But my server keeps crashing when I run pg_restore:\n>> glibc detected *** double free or corruption (!prev): 0x0a00b1a0\n> \n>> What can I do to help diagnose this problem?\n> \n> Either dig into it yourself with gdb, or send me a not-too-large example\n> dump file (off-list)...\n\nHmmm ... after moving to our production server, four hours of work copying a dozen databases, there hasn't been a single glibc problem. The development server is Fedora Core 3, the productions server is Fedora Core 4. Unless it happens on FC4, I'm diagnosing that it's a glibc bug or incompatibility that was already fixed.\n\nThanks,\nCraig\n\n\n",
"msg_date": "Mon, 01 Jan 2007 22:03:18 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: glibc double-free error"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> ... The development server is Fedora Core 3, the productions server is\n> Fedora Core 4. Unless it happens on FC4, I'm diagnosing that it's a\n> glibc bug or incompatibility that was already fixed.\n\n[ squint... ] I find that explanation pretty implausible, but unless\nyou can reproduce it on the production machine, I suppose digging\nfurther would be a waste of time ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Jan 2007 01:11:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: glibc double-free error "
}
] |
[
{
"msg_contents": "Hi all,\n\n In a previous post, Ron Peacetree suggested to check what work_mem\nneeds a query needs. How that can be done?\n\nThanks all\n-- \nArnau\n",
"msg_date": "Tue, 02 Jan 2007 10:55:18 +0100",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "what work_mem needs a query needs?"
}
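One low-tech way to probe this, sketched below: raise work_mem for the session in steps, re-run the statement under EXPLAIN ANALYZE, and watch when the runtime (and, on newer releases, the reported sort method) stops improving. The table and column names here are placeholders, and the values are given in kB so they work on pre-8.2 servers too.

```sql
-- Placeholder query; substitute the real sort-heavy statement
SET work_mem = 8192;   -- 8 MB
EXPLAIN ANALYZE SELECT * FROM my_big_table ORDER BY some_column;

SET work_mem = 32768;  -- 32 MB
EXPLAIN ANALYZE SELECT * FROM my_big_table ORDER BY some_column;

RESET work_mem;
```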
] |
[
{
"msg_contents": "I'm curious what parameters you guys typically *always* adjust on new\nPostgreSQL installs. \n\nI am working with a database that contains several large tables (10-20\nmillion) and many smaller tables (hundreds of rows). My system has 2 GB\nof RAM currently, although I will be upping it to 4GB soon.\n\nMy motivation in asking this question is to make sure I'm not making a\nbig configuration no-no by missing a parameter, and also for my own\nchecklist of parameters I should almost always set when configuring a\nnew install. \n\nThe parameters that I almost always change when installing a new system\nis shared_buffers, max_fsm_pages, checkpoint_segments, and\neffective_cache_size.\n\nAre there any parameters missing that always should be changed when\ndeploying to a decent server? \n\n",
"msg_date": "Tue, 02 Jan 2007 11:22:13 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Config parameters"
},
{
"msg_contents": "Jeremy Haile wrote:\n> I'm curious what parameters you guys typically *always* adjust on new\n> PostgreSQL installs. \n\n> The parameters that I almost always change when installing a new system\n> is shared_buffers, max_fsm_pages, checkpoint_segments, and\n> effective_cache_size.\n\nAlways: work_mem, maintenance_work_mem\nAlso consider temp_buffers and random_page_cost.\n\nA lot will depend on how much of the data you handle ends up cached.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 02 Jan 2007 16:34:19 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Config parameters"
},
{
"msg_contents": "What is a decent default setting for work_mem and maintenance_work_mem,\nconsidering I am regularly querying tables that are tens of millions of\nrows and have 2-4 GB of RAM?\n\nAlso - what is the best way to determine decent settings for\ntemp_buffers and random_page_cost?\n\n\nOn Tue, 02 Jan 2007 16:34:19 +0000, \"Richard Huxton\" <[email protected]>\nsaid:\n> Jeremy Haile wrote:\n> > I'm curious what parameters you guys typically *always* adjust on new\n> > PostgreSQL installs. \n> \n> > The parameters that I almost always change when installing a new system\n> > is shared_buffers, max_fsm_pages, checkpoint_segments, and\n> > effective_cache_size.\n> \n> Always: work_mem, maintenance_work_mem\n> Also consider temp_buffers and random_page_cost.\n> \n> A lot will depend on how much of the data you handle ends up cached.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n",
"msg_date": "Tue, 02 Jan 2007 12:06:24 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Config parameters"
},
{
"msg_contents": "Jeremy Haile wrote:\n> What is a decent default setting for work_mem and maintenance_work_mem,\n> considering I am regularly querying tables that are tens of millions of\n> rows and have 2-4 GB of RAM?\n\nWell, work_mem will depend on your query-load. Queries that do a lot of \nsorting should benefit from increased work_mem. You only have limited \nRAM though, so it's a balancing act between memory used to cache disk \nand per-process sort memory. Note that work_mem is per sort, so you can \nuse multiples of that amount in a single query. You can issue a \"set\" to \nchange the value for a session.\n\nHow you set maintenance_work_mem will depend on whether you vacuum \ncontinually (e.g. autovacuum) or at set times.\n\n> Also - what is the best way to determine decent settings for\n> temp_buffers and random_page_cost?\n\nWith all of these, testing I'm afraid. The only sure thing you can say \nis that random_page_cost should be 1 if all your database fits in RAM.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 02 Jan 2007 18:49:54 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Config parameters"
},
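Both knobs Richard mentions can be overridden per session, which makes them easy to experiment with before changing postgresql.conf; an illustrative sketch (the values are examples, not recommendations):

```sql
-- A session that is about to run large sorts
SET work_mem = 32768;       -- 32 MB, in kB

-- Only sensible if the whole working set really is cached in RAM
SET random_page_cost = 1;
```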
{
"msg_contents": "Thanks for the information!\n\nAre there any rule-of-thumb starting points for these values that you\nuse when setting up servers? I'd at least like a starting point for\ntesting different values. \n\nFor example, I'm sure setting a default work_mem of 100MB is usually\noverkill - but is 5MB usually a reasonable number? 20MB? My system\ndoes not have a huge number of concurrent users, but they are hitting\nlarge tables. I'm not sure what numbers people usually use here\nsuccessfully.\n\nFor maintenance_work_mem, I turned off autovacuum to save on\nperformance, but run a vacuum analyze once an hour. My current database\ncharacteristics are heavy insert (bulk inserts every 5 minutes) and\nmedium amount of selects on large, heavily indexed tables.\n\nFor temp_buffers - any rule of thumb starting point? What's the best\nway to evaluate if this number is adjusted correctly?\n\nFor random_page_cost - is the default of 4 pretty good for most drives? \nDo you usually bump it up to 3 on modern servers? I've usually done\ninternal RAID setups, but the database I'm currently working on is\nhitting a SAN over fiber.\n\nI realize that these values can vary a lot based on a variety of factors\n- but I'd love some more advice on what good rule-of-thumb starting\npoints are for experimentation and how to evaluate whether the values\nare set correctly. (in the case of temp_buffers and work_mem especially)\n\n\nOn Tue, 02 Jan 2007 18:49:54 +0000, \"Richard Huxton\" <[email protected]>\nsaid:\n> Jeremy Haile wrote:\n> > What is a decent default setting for work_mem and maintenance_work_mem,\n> > considering I am regularly querying tables that are tens of millions of\n> > rows and have 2-4 GB of RAM?\n> \n> Well, work_mem will depend on your query-load. Queries that do a lot of \n> sorting should benefit from increased work_mem. You only have limited \n> RAM though, so it's a balancing act between memory used to cache disk \n> and per-process sort memory. Note that work_mem is per sort, so you can \n> use multiples of that amount in a single query. You can issue a \"set\" to \n> change the value for a session.\n> \n> How you set maintenance_work_mem will depend on whether you vacuum \n> continually (e.g. autovacuum) or at set times.\n> \n> > Also - what is the best way to determine decent settings for\n> > temp_buffers and random_page_cost?\n> \n> With all of these, testing I'm afraid. The only sure thing you can say \n> is that random_page_cost should be 1 if all your database fits in RAM.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n",
"msg_date": "Tue, 02 Jan 2007 14:19:58 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Config parameters"
},
{
"msg_contents": "On Tue, 2007-01-02 at 13:19, Jeremy Haile wrote:\n> Thanks for the information!\n> \n> Are there any rule-of-thumb starting points for these values that you\n> use when setting up servers? I'd at least like a starting point for\n> testing different values. \n> \n> For example, I'm sure setting a default work_mem of 100MB is usually\n> overkill - but is 5MB usually a reasonable number? 20MB? My system\n> does not have a huge number of concurrent users, but they are hitting\n> large tables. I'm not sure what numbers people usually use here\n> successfully.\n\nThe setting for work_mem is very dependent on how many simultaneous\nconnections you'll be processing at the same time, and how likely they\nare to be doing sorts.\n\nIf you'll only ever have 5 connections to a database on a machine with a\nlot of memory, then setting it to 100M is probably fine. Keep in mind,\nthe limit is PER SORT, not per query. An upper limit of about 25% of\nthe machine's total memory is a good goal for how big to size work_mem.\n\nSo, on a 4 Gig machine you could divide 1G (25%) by the total possible\nconnections, then again by the average number of sorts you'd expect per\nquery / connection to get an idea. \n\nAlso, you can set it smaller than that, and for a given connection, set\nit on the fly when needed.\n\n<CONNECT>\nset work_mem=1000000;\nselect .....\n<DISCONNECT>\n\nAnd you run less risk of blowing out the machine with joe user's random\nquery.\n\n> For maintenance_work_mem, I turned off autovacuum to save on\n> performance, but run a vacuum analyze once an hour. My current database\n> characteristics are heavy insert (bulk inserts every 5 minutes) and\n> medium amount of selects on large, heavily indexed tables.\n\nDid you turn off stats collection as well? That's really the major\nperformance issue with autovacuum, not autovacuum itself. Plus, if\nyou've got a table that really needs vacuuming every 5 minutes to keep\nthe database healthy, you may be working against yourself by turning off\nautovacuum.\n\nI.e. the cure may be worse than the disease. OTOH, if you don't delete\n/ update often, then don't worry about it.\n\n> For temp_buffers - any rule of thumb starting point? What's the best\n> way to evaluate if this number is adjusted correctly?\n\nHaven't researched temp_buffers at all.\n\n> For random_page_cost - is the default of 4 pretty good for most drives? \n> Do you usually bump it up to 3 on modern servers? I've usually done\n> internal RAID setups, but the database I'm currently working on is\n> hitting a SAN over fiber.\n\nrandom_page_cost is the hardest to come up with the proper setting. If\nyou're hitting a RAID10 with 40 disk drives or some other huge drive\narray, you might need to crank up random_page_cost to some very large\nnumber, as sequential accesses are often preferred there. I believe\nthere were some posts by Luke Lonergan (sp) a while back where he had\nset random_page_cost to 20 or something even higher on a large system\nlike that. \n\nOn data sets that fit in memory, the cost nominally approaces 1. On\nsmaller work group servers with a single mirror set for a drive\nsubsystem and moderate to large data sets, I've found values of 1.4 to\n3.0 to be reasonable, depending on the workload.\n\n> I realize that these values can vary a lot based on a variety of factors\n> - but I'd love some more advice on what good rule-of-thumb starting\n> points are for experimentation and how to evaluate whether the values\n> are set correctly. 
(in the case of temp_buffers and work_mem especially)\n\nTo see if the values are good or not, run a variety of your worst\nqueries on the machine while varying the settings to see which run\nbest. That will at least let you know if you're close. While you can't\nchange buffers on the fly, you can change work_mem and random_page_cost\non the fly, per connection, to see the change.\n\n",
"msg_date": "Tue, 02 Jan 2007 13:51:25 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Config parameters"
},
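As a worked example of Scott's rule of thumb: 25% of a 4 GB machine is 1 GB; with, say, 20 possible connections and roughly one sort per query, that leaves about 50 MB per sort. The connection count is an assumption for illustration only.

```sql
-- ~50 MB expressed in kB, set just for the session running the big report
SET work_mem = 51200;
-- run the heavy query here, then
RESET work_mem;
```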
{
"msg_contents": "> So, on a 4 Gig machine you could divide 1G (25%) by the total possible\n> connections, then again by the average number of sorts you'd expect per\n> query / connection to get an idea. \n\nThanks for the advice. I'll experiment with higher work_mem settings,\nas I am regularly doing sorts on large datasets. I imagine the default\nsetting isn't very optimal in my case.\n\n\n> Did you turn off stats collection as well? That's really the major\n> performance issue with autovacuum, not autovacuum itself. \n\nI did turn off stats collection. I'm not sure how much of a difference\nit makes, but I was trying to squeeze every ounce of performance out of\nthe database. \n\n\n> I.e. the cure may be worse than the disease. OTOH, if you don't delete\n> / update often, then don't worry about it.\n\nI hardly ever delete/update. I update regularly, but only on small\ntables so it doesn't make as big of a difference. I do huge inserts,\nwhich is why turning off stats/autovacuum gives me some performance\nbenefit. I usually only do deletes nightly in large batches, so\nautovacuuming/analyzing once an hour works fairly well.\n\n\n> Haven't researched temp_buffers at all.\n\nDo you usually change temp_buffers? Mine is currently at the default\nsetting. I guess I could arbitrarily bump it up - but I'm not sure what\nthe consequences would be or how to tell if it is set correctly.\n\n\n> random_page_cost is the hardest to come up with the proper setting. \n\nThis definitely sounds like the hardest to figure out. (since it seems\nto be almost all trial-and-error) I'll play with some different values.\n This is only used by the query planner right? How much of a\nperformance difference does it usually make to tweak this number? (i.e.\nhow much performance difference would someone usually expect when they\nfind that 2.5 works better than 4?)\n\n\n> While you can't\n> change buffers on the fly, you can change work_mem and random_page_cost\n> on the fly, per connection, to see the change.\n\nThanks for the advice. I was aware you could change work_mem on the\nfly, but didn't think about setting random_page_cost on-the-fly.\n",
"msg_date": "Tue, 02 Jan 2007 15:01:56 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Config parameters"
}
] |
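As a worked version of the 25% rule of thumb discussed in this thread, a purely illustrative calculation (the 50 connections and 2 sorts per query are assumptions, not figures from the thread): on a 4 GB machine, budget about 1 GB for sorting, then divide by the expected number of concurrent sorts.

    -- 1 GB sort budget / (50 connections * 2 sorts per query)
    SELECT pg_size_pretty((1024 * 1024 * 1024)::bigint / (50 * 2)) AS per_sort_budget;
    -- roughly 10 MB per sort; since work_mem is specified in kB,
    -- that corresponds to a setting of about 10240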
[
{
"msg_contents": "Hello, we recently migrated our system from 8.1.x to 8.2 and when \nrunning dumps have noticed an extreme decrease in speed where the dump \nis concerned (by more than a factor of 2). I was wondering if someone \nmight offer some suggestions as to what may be causing the problem. How \nimportant are max_fsm_pages and max_fsm_relations to doing a dump? I \nwas just looking over your config file and that's the only thing that \njumped out at me as needing to be changed.\n\nMachine info:\nOS: Solaris 10\nSunfire X4100 XL\n2x AMD Opteron Model 275 dual core procs\n8GB of ram\n\nPertinent postgres settings:\nshared_buffers: 50000\nwork_mem: 8192\nmaintenance_work_mem: 262144\nmax_stack_depth: 3048 (default)\n\nThere doesn't seem to be any other performance degradation while the \ndump is running (which I suppose is good). Any ideas?\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Tue, 02 Jan 2007 10:41:53 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow dump?"
},
{
"msg_contents": "Erik Jones <[email protected]> writes:\n> Hello, we recently migrated our system from 8.1.x to 8.2 and when \n> running dumps have noticed an extreme decrease in speed where the dump \n> is concerned (by more than a factor of 2).\n\nThat's odd. pg_dump is normally pretty much I/O bound, at least\nassuming your tables are sizable. The only way it wouldn't be is if you\nhave a datatype with a very slow output converter. Have you looked into\nexactly which tables are slow to dump and what datatypes they contain?\n(Running pg_dump with log_min_duration_statement enabled would provide\nuseful data about which steps take a long time, if you're not sure.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Jan 2007 12:30:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow dump? "
},
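One way to follow up on the suggestion above is to start with the largest tables and then inspect their column types; a rough, illustrative pair of catalog queries (relpages/reltuples are planner estimates, so they are only as fresh as the last ANALYZE, and 'public.some_big_table' is a placeholder name):

    SELECT n.nspname, c.relname, c.relpages, c.reltuples
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'r'
       AND n.nspname NOT IN ('pg_catalog', 'information_schema')
     ORDER BY c.relpages DESC
     LIMIT 20;

    SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type
      FROM pg_attribute a
     WHERE a.attrelid = 'public.some_big_table'::regclass
       AND a.attnum > 0
       AND NOT a.attisdropped;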
{
"msg_contents": "Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> \n>> Hello, we recently migrated our system from 8.1.x to 8.2 and when \n>> running dumps have noticed an extreme decrease in speed where the dump \n>> is concerned (by more than a factor of 2).\n>> \n>\n> That's odd. pg_dump is normally pretty much I/O bound, at least\n> assuming your tables are sizable. The only way it wouldn't be is if you\n> have a datatype with a very slow output converter. Have you looked into\n> exactly which tables are slow to dump and what datatypes they contain?\n> (Running pg_dump with log_min_duration_statement enabled would provide\n> useful data about which steps take a long time, if you're not sure.)\n>\n> \t\t\tregards, tom lane\n> \nWell, all of our tables use pretty basic data types: integer (various \nsizes), text, varchar, boolean, and timestamps without time zone. In \naddition, other than not having a lot of our foreign keys in place, \nthere have been no other schema changes since the migration.\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Tue, 02 Jan 2007 11:40:18 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow dump?"
}
] |
[
{
"msg_contents": "Hmm... This gets stranger and stranger. When connecting to the \ndatabase with the psql client in 8.2's bin directory and using commands \nsuch as \\d the client hangs, or takes an extremely long time. If we \nconnect to the same 8.2 database with a psql client from 8.1.4, both \nremotely and locally, \\d responds immediately. Could the issue be with \nthe client programs somehow? Note also that we did our migration over \nthe xmas weekend using the dump straight into a restore command. We \nkicked it off Saturday (12-23-06) night and it had just reached the \npoint of adding foreign keys the morning of the 26th. We stopped it \nthere, wrote a script to go through and build indexes (which finished in \na timely manner) and have added just the foreign keys strictly necessary \nfor our applications functionality (i.e. foreign keys set to cascade on \nupdate/delete, etc...).\n\n-------- Original Message --------\nSubject: \tRe: [PERFORM] Slow dump?\nDate: \tTue, 02 Jan 2007 11:40:18 -0600\nFrom: \tErik Jones <[email protected]>\nTo: \tTom Lane <[email protected]>\nCC: \[email protected]\nReferences: \t<[email protected]> <[email protected]>\n\n\n\nTom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> \n>> Hello, we recently migrated our system from 8.1.x to 8.2 and when \n>> running dumps have noticed an extreme decrease in speed where the dump \n>> is concerned (by more than a factor of 2).\n>> \n>\n> That's odd. pg_dump is normally pretty much I/O bound, at least\n> assuming your tables are sizable. The only way it wouldn't be is if you\n> have a datatype with a very slow output converter. Have you looked into\n> exactly which tables are slow to dump and what datatypes they contain?\n> (Running pg_dump with log_min_duration_statement enabled would provide\n> useful data about which steps take a long time, if you're not sure.)\n>\n> \t\t\tregards, tom lane\n> \nWell, all of our tables use pretty basic data types: integer (various \nsizes), text, varchar, boolean, and timestamps without time zone. In \naddition, other than not having a lot of our foreign keys in place, \nthere have been no other schema changes since the migration.\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Tue, 02 Jan 2007 15:59:04 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Erik Jones wrote:\n> Hmm... This gets stranger and stranger. When connecting to the \n> database with the psql client in 8.2's bin directory and using commands \n> such as \\d the client hangs, or takes an extremely long time. If we \n> connect to the same 8.2 database with a psql client from 8.1.4, both \n> remotely and locally, \\d responds immediately. Could the issue be with \n> the client programs somehow?\n\nCouldn't be some DNS problems that only affect the 8.2 client I suppose?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 02 Jan 2007 22:06:24 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Richard Huxton wrote:\n> Erik Jones wrote:\n>> Hmm... This gets stranger and stranger. When connecting to the \n>> database with the psql client in 8.2's bin directory and using \n>> commands such as \\d the client hangs, or takes an extremely long \n>> time. If we connect to the same 8.2 database with a psql client from \n>> 8.1.4, both remotely and locally, \\d responds immediately. Could the \n>> issue be with the client programs somehow?\n>\n> Couldn't be some DNS problems that only affect the 8.2 client I suppose?\n>\nHmm... I don't see how that would matter when the 8.2. client is being \nrun locally.\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Tue, 02 Jan 2007 16:16:57 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Erik Jones <[email protected]> writes:\n> Hmm... This gets stranger and stranger. When connecting to the \n> database with the psql client in 8.2's bin directory and using commands \n> such as \\d the client hangs, or takes an extremely long time.\n\nHangs at what point? During connection? Try strace'ing psql (or\nwhatever the Solaris equivalent is) to see what it's doing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Jan 2007 18:09:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?) "
},
{
"msg_contents": "Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> \n>> Hmm... This gets stranger and stranger. When connecting to the \n>> database with the psql client in 8.2's bin directory and using commands \n>> such as \\d the client hangs, or takes an extremely long time.\n>> \n>\n> Hangs at what point? During connection? Try strace'ing psql (or\n> whatever the Solaris equivalent is) to see what it's doing.\n> \nOk, here's the truss output when attached to psql with \"\\d pg_class\", I \nput a marker where the pause is. Note that today the pause is only \n(sic) about 3-4 seconds long before the command completes and the output \nis displayed and that the only difference in the system between \nyesterday and today is that today we don't have a dump running. I \nrealize that most of this output below is unnecessary, but while I know \nwhat most of this is doing individually, I wouldn't know what to cut out \nfor brevity's sake without accidentally also clipping something that is \nneeded.\n\nread(0, 0x08047B7B, 1) (sleeping...)\nread(0, \" \\\", 1) = 1\nwrite(1, \" \\\", 1) = 1\nread(0, \" d\", 1) = 1\nwrite(1, \" d\", 1) = 1\nread(0, \" \", 1) = 1\nwrite(1, \" \", 1) = 1\nread(0, \" \", 1) = 1\nwrite(1, \" \", 1) = 1\nread(0, \" p\", 1) = 1\nwrite(1, \" p\", 1) = 1\nread(0, \"7F\", 1) = 1\nwrite(1, \"\\b \\b\", 3) = 3\nread(0, \"7F\", 1) = 1\nwrite(1, \"\\b \\b\", 3) = 3\nread(0, \" p\", 1) = 1\nwrite(1, \" p\", 1) = 1\nread(0, \" g\", 1) = 1\nwrite(1, \" g\", 1) = 1\nread(0, \" _\", 1) = 1\nwrite(1, \" _\", 1) = 1\nread(0, \" c\", 1) = 1\nwrite(1, \" c\", 1) = 1\nread(0, \" l\", 1) = 1\nwrite(1, \" l\", 1) = 1\nread(0, \" a\", 1) = 1\nwrite(1, \" a\", 1) = 1\nread(0, \" s\", 1) = 1\nwrite(1, \" s\", 1) = 1\nread(0, \" s\", 1) = 1\nwrite(1, \" s\", 1) = 1\nread(0, \"\\r\", 1) = 1\nwrite(1, \"\\n\", 1) = 1\nlwp_sigmask(SIG_SETMASK, 0x00000002, 0x00000000) = 0xFFBFFEFF [0x0000FFFF]\nioctl(0, TCSETSW, 0xFEF431E0) = 0\nlwp_sigmask(SIG_SETMASK, 0x00000000, 0x00000000) = 0xFFBFFEFF [0x0000FFFF]\nsigaction(SIGINT, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGTERM, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGQUIT, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGALRM, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGTSTP, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGTTOU, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGTTIN, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGWINCH, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGWINCH, 0x08047B80, 0x08047BD0) = 0\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\nsend(4, \" Q\\0\\0\\0E5 S E L E C T \".., 230, 0) = 230 \n<----------------------------------------------------------- Hang is \nright here!\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\npollsys(0x08046EE8, 1, 0x00000000, 0x00000000) (sleeping...)\npollsys(0x08046EE8, 1, 0x00000000, 0x00000000) = 1\nrecv(4, \" T\\0\\0\\0 P\\003 o i d\\0\\0\".., 16384, 0) = 140\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\nsend(4, \" Q\\0\\0\\08F S E L E C T \".., 144, 0) = 144\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\npollsys(0x08046EE8, 1, 0x00000000, 0x00000000) = 1\nrecv(4, \" T\\0\\0\\0D3\\007 r e l h a\".., 16384, 0) = 272\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\nsend(4, \" Q\\0\\00186 S E L E C T \".., 391, 0) = 391\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\npollsys(0x08046EE8, 1, 0x00000000, 0x00000000) = 1\nrecv(4, \" T\\0\\0\\08F\\005 a t t n a\".., 16384, 0) = 1375\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\nsend(4, \" Q\\0\\001 g S E L E C T \".., 360, 0) = 360\nsigaction(SIGPIPE, 
0x08046E20, 0x08046E70) = 0\npollsys(0x08046EE8, 1, 0x00000000, 0x00000000) = 1\nrecv(4, \" T\\0\\0\\0DD\\007 r e l n a\".., 16384, 0) = 526\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\nsend(4, \" Q\\0\\0\\090 S E L E C T \".., 145, 0) = 145\nsigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\npollsys(0x08046EE8, 1, 0x00000000, 0x00000000) = 1\nrecv(4, \" T\\0\\0\\0 \\001 r e l n a\".., 16384, 0) = 51\nioctl(0, TCGETA, 0x08046F24) = 0\nioctl(1, TCGETA, 0x08046F24) = 0\nioctl(1, TIOCGWINSZ, 0x08046F58) = 0\nwrite(1, \" T a b l e \".., 34) = 34\nwrite(1, \" C o l u m n \".., 41) = 41\nwrite(1, \" - - - - - - - - - - - -\".., 41) = 41\nwrite(1, \" r e l n a m e \".., 39) = 39\nwrite(1, \" r e l n a m e s p a c\".., 39) = 39\nwrite(1, \" r e l t y p e \".., 39) = 39\nwrite(1, \" r e l o w n e r \".., 39) = 39\nwrite(1, \" r e l a m \".., 39) = 39\nwrite(1, \" r e l f i l e n o d e\".., 39) = 39\nwrite(1, \" r e l t a b l e s p a\".., 39) = 39\nwrite(1, \" r e l p a g e s \".., 39) = 39\nwrite(1, \" r e l t u p l e s \".., 39) = 39\nwrite(1, \" r e l t o a s t r e l\".., 39) = 39\nwrite(1, \" r e l t o a s t i d x\".., 39) = 39\nwrite(1, \" r e l h a s i n d e x\".., 39) = 39\nwrite(1, \" r e l i s s h a r e d\".., 39) = 39\nwrite(1, \" r e l k i n d \".., 39) = 39\nwrite(1, \" r e l n a t t s \".., 39) = 39\nwrite(1, \" r e l c h e c k s \".., 39) = 39\nwrite(1, \" r e l t r i g g e r s\".., 39) = 39\nwrite(1, \" r e l u k e y s \".., 39) = 39\nwrite(1, \" r e l f k e y s \".., 39) = 39\nwrite(1, \" r e l r e f s \".., 39) = 39\nwrite(1, \" r e l h a s o i d s \".., 39) = 39\nwrite(1, \" r e l h a s p k e y \".., 39) = 39\nwrite(1, \" r e l h a s r u l e s\".., 39) = 39\nwrite(1, \" r e l h a s s u b c l\".., 39) = 39\nwrite(1, \" r e l f r o z e n x i\".., 39) = 39\nwrite(1, \" r e l a c l \".., 31) = 31\nwrite(1, \" r e l o p t i o n s \".., 31) = 31\nwrite(1, \" I n d e x e s :\\n\", 9) = 9\nwrite(1, \" \" p g _ c l a s\".., 45) = 45\nwrite(1, \" \" p g _ c l a s\".., 71) = 71\nwrite(1, \"\\n\", 1) = 1\nlwp_sigmask(SIG_SETMASK, 0x00000002, 0x00000000) = 0xFFBFFEFF [0x0000FFFF]\nioctl(0, TIOCGWINSZ, 0x08047B58) = 0\nioctl(0, TIOCSWINSZ, 0x08047B58) = 0\nioctl(0, TCGETS, 0x08047BB0) = 0\nioctl(0, TCSETSW, 0x08047BB0) = 0\nlwp_sigmask(SIG_SETMASK, 0x00000000, 0x00000000) = 0xFFBFFEFF [0x0000FFFF]\nsigaction(SIGINT, 0x08047AA0, 0x08047B10) = 0\nsigaction(SIGTERM, 0x08047AA0, 0x08047B10) = 0\nsigaction(SIGQUIT, 0x08047AA0, 0x08047B10) = 0\nsigaction(SIGALRM, 0x08047B00, 0x08047B70) = 0\nsigaction(SIGTSTP, 0x08047AA0, 0x08047B10) = 0\nsigaction(SIGTTOU, 0x08047AA0, 0x08047B10) = 0\nsigaction(SIGTTIN, 0x08047AA0, 0x08047B10) = 0\nsigaction(SIGWINCH, 0x08047AA0, 0x08047B10) = 0\nwrite(1, \" e m m a 2 = # \", 8) = 8\nread(0, 0x08047B7B, 1) (sleeping...)\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 03 Jan 2007 10:13:16 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Erik Jones <[email protected]> writes:\n> ...\n> sigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\n> send(4, \" Q\\0\\0\\0E5 S E L E C T \".., 230, 0) = 230 \n> <----------------------------------------------------------- Hang is \n> right here!\n> sigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\n> pollsys(0x08046EE8, 1, 0x00000000, 0x00000000) (sleeping...)\n> pollsys(0x08046EE8, 1, 0x00000000, 0x00000000) = 1\n> recv(4, \" T\\0\\0\\0 P\\003 o i d\\0\\0\".., 16384, 0) = 140\n> ...\n\nHmph. So it seems the delay really is on the server's end. Any chance\nyou could truss the connected backend process too and see what it's doing?\n\nActually ... before you do that, the first query for \"\\d pg_class\"\nshould look like\n\nSELECT c.oid,\n n.nspname,\n c.relname\nFROM pg_catalog.pg_class c\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\nWHERE c.relname ~ '^(pg_class)$'\n AND pg_catalog.pg_table_is_visible(c.oid)\nORDER BY 2, 3;\n\nI could see this taking an unreasonable amount of time if you had a huge\nnumber of pg_class rows or a very long search_path --- is your database\nat all out of the ordinary in those ways?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Jan 2007 11:24:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?) "
},
{
"msg_contents": "Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> \n>> ...\n>> sigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\n>> send(4, \" Q\\0\\0\\0E5 S E L E C T \".., 230, 0) = 230 \n>> <----------------------------------------------------------- Hang is \n>> right here!\n>> sigaction(SIGPIPE, 0x08046E20, 0x08046E70) = 0\n>> pollsys(0x08046EE8, 1, 0x00000000, 0x00000000) (sleeping...)\n>> pollsys(0x08046EE8, 1, 0x00000000, 0x00000000) = 1\n>> recv(4, \" T\\0\\0\\0 P\\003 o i d\\0\\0\".., 16384, 0) = 140\n>> ...\n>> \n>\n> Hmph. So it seems the delay really is on the server's end. Any chance\n> you could truss the connected backend process too and see what it's doing?\n>\n> Actually ... before you do that, the first query for \"\\d pg_class\"\n> should look like\n>\n> SELECT c.oid,\n> n.nspname,\n> c.relname\n> FROM pg_catalog.pg_class c\n> LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\n> WHERE c.relname ~ '^(pg_class)$'\n> AND pg_catalog.pg_table_is_visible(c.oid)\n> ORDER BY 2, 3;\n>\n> I could see this taking an unreasonable amount of time if you had a huge\n> number of pg_class rows or a very long search_path --- is your database\n> at all out of the ordinary in those ways?\n> \nWell, running \"select count(*) from pg_class;\" returns 524699 rows and \nour search path is the default. I'd also like to reiterate that \\d \npg_class returns instantly when run from the 8.1.4 psql client connected \nto the 8.2 db. How would I go about determining which backend server \nprocess psql was attached to?\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 03 Jan 2007 10:40:56 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
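Regarding the question of which backend process a psql session is attached to, one illustrative approach (not from the original exchange): ask the session itself, or look it up in pg_stat_activity from another connection.

    -- from inside the psql session to be traced:
    SELECT pg_backend_pid();

    -- or from a second session, to list all backends (8.2-era column names):
    SELECT procpid, usename, client_addr, current_query
      FROM pg_stat_activity;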
{
"msg_contents": "Erik,\n\nCould you set log_min_duration_statement=0 on your server and enable\nlogging (tutorial here if you don't know how to do that:\nhttp://pgfouine.projects.postgresql.org/tutorial.html).\n\nYou should see which queries are executed in both cases and find the\nslow one easily.\n\nRegards,\n\n--\nGuillaume\n",
"msg_date": "Wed, 3 Jan 2007 17:50:41 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Erik Jones <[email protected]> writes:\n> Tom Lane wrote:\n>> I could see this taking an unreasonable amount of time if you had a huge\n>> number of pg_class rows or a very long search_path --- is your database\n>> at all out of the ordinary in those ways?\n>> \n> Well, running \"select count(*) from pg_class;\" returns 524699 rows\n\nOuch.\n\n> our search path is the default. I'd also like to reiterate that \\d \n> pg_class returns instantly when run from the 8.1.4 psql client connected \n> to the 8.2 db.\n\nI think I know where the problem is: would you compare timing of\n\n\tselect * from pg_class where c.relname ~ '^(pg_class)$';\n\n\tselect * from pg_class where c.relname ~ '^pg_class$';\n\nRecent versions of psql put parentheses into the regex pattern for\nsafety in case it's got \"|\", but I just realized that that probably\nconfuses the optimizer's check for an indexable regex :-(\n\nHowever, this only explains slowdown in psql's \\d commands, which\nwasn't your original complaint ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Jan 2007 11:56:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?) "
},
{
"msg_contents": "Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> \n>> Tom Lane wrote:\n>> \n>>> I could see this taking an unreasonable amount of time if you had a huge\n>>> number of pg_class rows or a very long search_path --- is your database\n>>> at all out of the ordinary in those ways?\n>>>\n>>> \n>> Well, running \"select count(*) from pg_class;\" returns 524699 rows\n>> \n>\n> Ouch.\n>\n> \n>> our search path is the default. I'd also like to reiterate that \\d \n>> pg_class returns instantly when run from the 8.1.4 psql client connected \n>> to the 8.2 db.\n>> \n>\n> I think I know where the problem is: would you compare timing of\n>\n> \tselect * from pg_class where c.relname ~ '^(pg_class)$';\n> \nApproximately 4 seconds.\n> \tselect * from pg_class where c.relname ~ '^pg_class$';\n> \nInstant.\n> Recent versions of psql put parentheses into the regex pattern for\n> safety in case it's got \"|\", but I just realized that that probably\n> confuses the optimizer's check for an indexable regex :-(\n>\n> However, this only explains slowdown in psql's \\d commands, which\n> wasn't your original complaint ...\n> \nWell, it explains the slowdown wrt a query against the catalog tables by \na postgres client application. Were there any changes made like this to \npg_dump and/or pg_restore?\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 03 Jan 2007 11:10:15 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
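A quick way to see the difference Tom describes, as an illustrative check rather than something run in the thread: compare the plans for the two patterns. With several hundred thousand pg_class rows, the parenthesized pattern typically falls back to a sequential scan, while the plain anchored pattern can use the index on relname.

    EXPLAIN SELECT relname FROM pg_class WHERE relname ~ '^(pg_class)$';
    EXPLAIN SELECT relname FROM pg_class WHERE relname ~ '^pg_class$';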
{
"msg_contents": "Guillaume Smet wrote:\n> Erik,\n>\n> Could you set log_min_duration_statement=0 on your server and enable\n> logging (tutorial here if you don't know how to do that:\n> http://pgfouine.projects.postgresql.org/tutorial.html).\n>\n> You should see which queries are executed in both cases and find the\n> slow one easily.\nHeh, unfortunately, setting log_min_duration_statement=0 would be a \ntotal last resort as the last we counted (2 months ago) we were doing \napproximately 3 million transactions per hour.\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 03 Jan 2007 11:15:39 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Erik Jones <[email protected]> writes:\n> Guillaume Smet wrote:\n>> Could you set log_min_duration_statement=0 on your server and enable\n\n> Heh, unfortunately, setting log_min_duration_statement=0 would be a \n> total last resort as the last we counted (2 months ago) we were doing \n> approximately 3 million transactions per hour.\n\nDo it just for the pg_dump:\n\n\texport PGOPTIONS=\"--log_min_duration_statement=0\"\n\tpg_dump ...\n\nI don't think that the regex issue explains pg_dump being slow,\nunless perhaps you are making use of the table-selection switches?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Jan 2007 12:21:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?) "
},
{
"msg_contents": "Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> \n>> Guillaume Smet wrote:\n>> \n>>> Could you set log_min_duration_statement=0 on your server and enable\n>>> \n>\n> \n>> Heh, unfortunately, setting log_min_duration_statement=0 would be a \n>> total last resort as the last we counted (2 months ago) we were doing \n>> approximately 3 million transactions per hour.\n>> \n>\n> Do it just for the pg_dump:\n>\n> \texport PGOPTIONS=\"--log_min_duration_statement=0\"\n> \tpg_dump ...\n>\n> I don't think that the regex issue explains pg_dump being slow,\n> unless perhaps you are making use of the table-selection switches?\n> \nThat's a good idea, but first I'll still need to run it by my sysadmin \nwrt space -- our dump files are around 22GB when we can let them finish \nthese days. We do have plans to move off of the dump to a snapshot \nbackup strategy that will eventually lead to a PITR warm-standby setup \nbut, first, we want to make sure we have a stable, fast, up-to-date \nserver -- our web servers are still connecting to the db via 8.1.4 \nclient libs as given what we've seen of the track record for 8.2. client \nlibs on our setup, we're bit reticent to move the rest of the \napplication over. While I wait to see what we can do about logging \neverything during the dump I'll probably build 8.2 on a remote linux \nmachine and see how connecting via those tools compares.\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 03 Jan 2007 11:58:16 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Erik Jones <[email protected]> writes:\n> That's a good idea, but first I'll still need to run it by my sysadmin \n> wrt space -- our dump files are around 22GB when we can let them finish \n> these days.\n\nGiven that we're now speculating about regex problems, you could do a\ntest run of \"pg_dump -s\" with logging enabled; that shouldn't take an\nunreasonable amount of time or space.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Jan 2007 13:26:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?) "
},
{
"msg_contents": "Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> \n>> That's a good idea, but first I'll still need to run it by my sysadmin \n>> wrt space -- our dump files are around 22GB when we can let them finish \n>> these days.\n>> \n>\n> Given that we're now speculating about regex problems, you could do a\n> test run of \"pg_dump -s\" with logging enabled; that shouldn't take an\n> unreasonable amount of time or space.\n>\n> \t\t\tregards, tom lane\n> \nSounds like a post-lunch plan! By the way, even though this isn't even \nsolved yet, thank you for all of your help!\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 03 Jan 2007 12:59:40 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "Erik Jones wrote:\n> Tom Lane wrote:\n>> Erik Jones <[email protected]> writes:\n>> \n>>> That's a good idea, but first I'll still need to run it by my \n>>> sysadmin wrt space -- our dump files are around 22GB when we can let \n>>> them finish these days.\n>>> \n>>\n>> Given that we're now speculating about regex problems, you could do a\n>> test run of \"pg_dump -s\" with logging enabled; that shouldn't take an\n>> unreasonable amount of time or space.\n>>\n>> regards, tom lane\n>> \n> Sounds like a post-lunch plan! By the way, even though this isn't \n> even solved yet, thank you for all of your help!\n>\nOk, this ended up taking a bit longer to get to due to the fact that \nwe've been building indexes on our user tables off and on for the last \nfew days. But, I'm back on it now. Here is my general plan of \naction: I'm going to do a schema dump of the pg_catalog schema from a \nfresh, clean 8.2 install and, tomorrow night after I do the same against \nthe db we've been having issues with, diff the two to see if there are \nany glaring discrepancies. While running the dump from the live db I \nwill have statement logging on for the dump, are there any queries or \nquery lengths that I should pay particular attention to?\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Fri, 05 Jan 2007 12:02:04 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
},
{
"msg_contents": "On Wed, Jan 03, 2007 at 11:56:20AM -0500, Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n> > Tom Lane wrote:\n> >> I could see this taking an unreasonable amount of time if you had a huge\n> >> number of pg_class rows or a very long search_path --- is your database\n> >> at all out of the ordinary in those ways?\n> >> \n> > Well, running \"select count(*) from pg_class;\" returns 524699 rows\n> \n> Ouch.\n> \n> > our search path is the default. I'd also like to reiterate that \\d \n> > pg_class returns instantly when run from the 8.1.4 psql client connected \n> > to the 8.2 db.\n> \n> I think I know where the problem is: would you compare timing of\n> \n> \tselect * from pg_class where c.relname ~ '^(pg_class)$';\n> \n> \tselect * from pg_class where c.relname ~ '^pg_class$';\n> \n> Recent versions of psql put parentheses into the regex pattern for\n> safety in case it's got \"|\", but I just realized that that probably\n> confuses the optimizer's check for an indexable regex :-(\n> \n> However, this only explains slowdown in psql's \\d commands, which\n> wasn't your original complaint ...\n\nOn the other hand, with 500k relations pg_dump is presumably going to be\ndoing a lot of querying of the catalog tables, so if it uses similar\nqueries...\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 14:07:54 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More 8.2 client issues (Was: [Slow dump?)"
}
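As a side note (not raised in the thread itself), with roughly half a million pg_class rows it can be useful to see where they come from before tuning the dump; an illustrative breakdown by schema and relation kind:

    SELECT n.nspname, c.relkind, count(*) AS relations
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     GROUP BY n.nspname, c.relkind
     ORDER BY count(*) DESC
     LIMIT 20;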
] |
[
{
"msg_contents": "I am sure that this has been discussed before, but I can't seem to find\nany recent posts. (I am running PostgreSQL 8.2)\n\nI have always ran PostgreSQL on Linux in the past, but the company I am\ncurrently working for uses Windows on all of their servers. I don't\nhave the luxury right now of running my own benchmarks on the two OSes,\nbut wanted to know if anyone else has done a performance comparison. Is\nthere any significant differences?\n",
"msg_date": "Wed, 03 Jan 2007 12:24:24 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "Jeremy Haile wrote:\n> I am sure that this has been discussed before, but I can't seem to find\n> any recent posts. (I am running PostgreSQL 8.2)\n> \n> I have always ran PostgreSQL on Linux in the past, but the company I am\n> currently working for uses Windows on all of their servers. I don't\n> have the luxury right now of running my own benchmarks on the two OSes,\n> but wanted to know if anyone else has done a performance comparison. Is\n> there any significant differences?\n\nThat depends on your usage pattern. There are certainly cases where the\nWin32 version will be significantly slower.\nFor example, if you open a lot of new connections, that is a lot more\nexpensive on Windows since each connection needs to execute a new\nbackend due to the lack of fork().\n\nI don't think you'll find any case where the Windows version is faster\nthan Linux ;-) But to get a good answer on if the difference is\nsignificant enough to matter, you really need to run some kind of simple\nbenchmark on *your* workload.\n\n//Magnus\n",
"msg_date": "Thu, 04 Jan 2007 00:18:23 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "Le jeudi 4 janvier 2007 00:18, Magnus Hagander a écrit :\n> But to get a good answer on if the difference is\n> significant enough to matter, you really need to run some kind of simple\n> benchmark on *your* workload.\n\nTo easily stress test a couple of servers and compare results on *your* \nworkload, please consider using both pgfouine[1,2] and tsung[3].\n\nThe companion tool tsung-ploter[4] (for plotting several results using common \ngraph, hence scales), may also be usefull.\n\n[1]: http://pgfouine.projects.postgresql.org/\n[2]: http://pgfouine.projects.postgresql.org/tsung.html\n[3]: http://tsung.erlang-projects.org/\n[4]: http://debian.dalibo.org/unstable/tsung-ploter_0.1-1.tar.gz\n\nRegards,\n-- \nDimitri Fontaine\nhttp://www.dalibo.com/",
"msg_date": "Thu, 4 Jan 2007 00:46:32 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "Thanks for the recommendations. I wasn't familiar with those packages!\n\nOn Thu, 4 Jan 2007 00:46:32 +0100, \"Dimitri Fontaine\" <[email protected]>\nsaid:\n> Le jeudi 4 janvier 2007 00:18, Magnus Hagander a écrit :\n> > But to get a good answer on if the difference is\n> > significant enough to matter, you really need to run some kind of simple\n> > benchmark on *your* workload.\n> \n> To easily stress test a couple of servers and compare results on *your* \n> workload, please consider using both pgfouine[1,2] and tsung[3].\n> \n> The companion tool tsung-ploter[4] (for plotting several results using\n> common \n> graph, hence scales), may also be usefull.\n> \n> [1]: http://pgfouine.projects.postgresql.org/\n> [2]: http://pgfouine.projects.postgresql.org/tsung.html\n> [3]: http://tsung.erlang-projects.org/\n> [4]: http://debian.dalibo.org/unstable/tsung-ploter_0.1-1.tar.gz\n> \n> Regards,\n> -- \n> Dimitri Fontaine\n> http://www.dalibo.com/\n",
"msg_date": "Thu, 04 Jan 2007 09:27:46 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "Thanks for the response! I know I have to benchmark them to get a real\nanswer. I am just looking to hear someone say \"We benchmarked Linux vs.\nWindows with similar configuration and hardware and experienced a 25%\nperformance boost in Linux.\" or \"We benchmarked them and found no\nsignificant difference.\" \n\nI realize the situation varies based on usage patterns, but I'm just\nlooking for some general info based on anyone else's experiences. \n\nMy usage pattern is a single application that hits the database. The\napplication uses a connection pool, so opening lots of connections is\nnot a huge issue. However - it does have very large tables and\nregularly queries and inserts into these tables. I insert several\nmillion rows into 3 tables every day - and also delete about the same\namount.\n\n\n\nOn Thu, 04 Jan 2007 00:18:23 +0100, \"Magnus Hagander\"\n<[email protected]> said:\n> Jeremy Haile wrote:\n> > I am sure that this has been discussed before, but I can't seem to find\n> > any recent posts. (I am running PostgreSQL 8.2)\n> > \n> > I have always ran PostgreSQL on Linux in the past, but the company I am\n> > currently working for uses Windows on all of their servers. I don't\n> > have the luxury right now of running my own benchmarks on the two OSes,\n> > but wanted to know if anyone else has done a performance comparison. Is\n> > there any significant differences?\n> \n> That depends on your usage pattern. There are certainly cases where the\n> Win32 version will be significantly slower.\n> For example, if you open a lot of new connections, that is a lot more\n> expensive on Windows since each connection needs to execute a new\n> backend due to the lack of fork().\n> \n> I don't think you'll find any case where the Windows version is faster\n> than Linux ;-) But to get a good answer on if the difference is\n> significant enough to matter, you really need to run some kind of simple\n> benchmark on *your* workload.\n> \n> //Magnus\n",
"msg_date": "Thu, 04 Jan 2007 09:46:53 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> Thanks for the response! I know I have to benchmark them to get a real\n> answer. I am just looking to hear someone say \"We benchmarked Linux vs.\n> Windows with similar configuration and hardware and experienced a 25%\n> performance boost in Linux.\" or \"We benchmarked them and found no\n> significant difference.\" \n\nI've heard anecdotal reports both ways: \"there's no difference\" and\n\"there's a big difference\". So there's no substitute for benchmarking\nyour own application.\n\nI think one big variable in this is which PG version you are testing.\nWe've been gradually filing down some of the rough edges in the native\nWindows port, so I'd expect that the performance gap is closing over\ntime. I don't know how close to closed it is in 8.2, but I'd surely\nsuggest that you do your benchmarking with 8.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Jan 2007 10:23:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux "
},
{
"msg_contents": "I'm using 8.2. I don't know when I'll get a chance to run my own\nbenchmarks. (I don't currently have access to a Windows and Linux\nserver with similar hardware/configuration) But when/if I get a chance\nto run them, I will post the results here.\n\nThanks for the feedback.\n\nJeremy Haile\n\n\nOn Thu, 04 Jan 2007 10:23:51 -0500, \"Tom Lane\" <[email protected]> said:\n> \"Jeremy Haile\" <[email protected]> writes:\n> > Thanks for the response! I know I have to benchmark them to get a real\n> > answer. I am just looking to hear someone say \"We benchmarked Linux vs.\n> > Windows with similar configuration and hardware and experienced a 25%\n> > performance boost in Linux.\" or \"We benchmarked them and found no\n> > significant difference.\" \n> \n> I've heard anecdotal reports both ways: \"there's no difference\" and\n> \"there's a big difference\". So there's no substitute for benchmarking\n> your own application.\n> \n> I think one big variable in this is which PG version you are testing.\n> We've been gradually filing down some of the rough edges in the native\n> Windows port, so I'd expect that the performance gap is closing over\n> time. I don't know how close to closed it is in 8.2, but I'd surely\n> suggest that you do your benchmarking with 8.2.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Jan 2007 10:29:41 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "On Wed, Jan 03, 2007 at 12:24:24PM -0500, Jeremy Haile wrote:\n> I am sure that this has been discussed before, but I can't seem to find\n> any recent posts. (I am running PostgreSQL 8.2)\n> \n> I have always ran PostgreSQL on Linux in the past, but the company I am\n> currently working for uses Windows on all of their servers. I don't\n> have the luxury right now of running my own benchmarks on the two OSes,\n> but wanted to know if anyone else has done a performance comparison. Is\n> there any significant differences?\n\nOne thing to consider... I've seen a case or two where pgbench running\non windows with HyperThreading enabled was actually faster than with it\nturned off. (General experience has been that HT hurts PostgreSQL). I\nsuspect that the windows kernel may have features that allow it to\nbetter utilize HT than linux.\n\nOf course if you don't have HT... it doesn't matter. :)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 14:15:26 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "Hey Jim - \n\nThanks for the feedback. The server has dual Xeons with HyperThreading\nenabled - so perhaps I should try disabling it. How much performance\nboost have you seen by disabling it? Of course, the bottleneck in my\ncase is more on the I/O or RAM side, not the CPU side.\n\nJeremy Haile\n\n\nOn Wed, 10 Jan 2007 14:15:26 -0600, \"Jim C. Nasby\" <[email protected]> said:\n> On Wed, Jan 03, 2007 at 12:24:24PM -0500, Jeremy Haile wrote:\n> > I am sure that this has been discussed before, but I can't seem to find\n> > any recent posts. (I am running PostgreSQL 8.2)\n> > \n> > I have always ran PostgreSQL on Linux in the past, but the company I am\n> > currently working for uses Windows on all of their servers. I don't\n> > have the luxury right now of running my own benchmarks on the two OSes,\n> > but wanted to know if anyone else has done a performance comparison. Is\n> > there any significant differences?\n> \n> One thing to consider... I've seen a case or two where pgbench running\n> on windows with HyperThreading enabled was actually faster than with it\n> turned off. (General experience has been that HT hurts PostgreSQL). I\n> suspect that the windows kernel may have features that allow it to\n> better utilize HT than linux.\n> \n> Of course if you don't have HT... it doesn't matter. :)\n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 15:32:01 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
},
{
"msg_contents": "On Wed, 2007-01-10 at 14:15, Jim C. Nasby wrote:\n> On Wed, Jan 03, 2007 at 12:24:24PM -0500, Jeremy Haile wrote:\n> > I am sure that this has been discussed before, but I can't seem to find\n> > any recent posts. (I am running PostgreSQL 8.2)\n> > \n> > I have always ran PostgreSQL on Linux in the past, but the company I am\n> > currently working for uses Windows on all of their servers. I don't\n> > have the luxury right now of running my own benchmarks on the two OSes,\n> > but wanted to know if anyone else has done a performance comparison. Is\n> > there any significant differences?\n> \n> One thing to consider... I've seen a case or two where pgbench running\n> on windows with HyperThreading enabled was actually faster than with it\n> turned off. (General experience has been that HT hurts PostgreSQL). I\n> suspect that the windows kernel may have features that allow it to\n> better utilize HT than linux.\n\nI've also seen a few comments in perform (and elsewhere) in the past\nthat newer linux kernels seem to handle HT better than older ones, and\nalso might give better numbers for certain situations.\n\nNote that you should really test with a wide variety of loads (i.e. a\nlot of parallel loads, a few etc...) to see what the curve looks like. \nIf HT gets you 10% gain on 4 or fewer clients, but 20% slower with 8\nclients, then hyperthreading might be a not so good choice.\n",
"msg_date": "Wed, 10 Jan 2007 14:45:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of PostgreSQL on Windows vs Linux"
}
] |
[
{
"msg_contents": "Well, once again I'm hosed because there's no way to tell the optimizer the cost for a user-defined function. I know this issue has already been raised (by me!) several times, but I have to remind everyone about this. I frequently must rewrite my SQL to work around this problem.\n\nHere is the function definition:\n\n CREATE OR REPLACE FUNCTION cansmiles(text) RETURNS text\n AS '/usr/local/pgsql/lib/libchem.so', 'cansmiles'\n LANGUAGE 'C' STRICT IMMUTABLE;\n\nHere is the bad optimization:\n\ndb=> explain analyze select version_id, 'Brc1ccc2nc(cn2c1)C(=O)O' from version where version.isosmiles = cansmiles('Brc1ccc2nc(cn2c1)C(=O)O', 1);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------\n Seq Scan on version (cost=0.00..23.41 rows=1 width=4) (actual time=1434.281..1540.253 rows=1 loops=1)\n Filter: (isosmiles = cansmiles('Brc1ccc2nc(cn2c1)C(=O)O'::text, 1))\n Total runtime: 1540.347 ms\n(3 rows)\n\nI've had to break it up into two separate queries. Ironically, for large databases, Postgres does the right thing -- it computes the function, then uses the index on the \"isosmiles\" column. It's blazingly fast and very satisfactory. But for small databases, it apparently decides to recompute the function once per row, making the query N times slower (N = number of rows) than it should be!\n\nIn this instance, there are 1000 rows, and factor of 10^4 is a pretty dramatic slowdown... To make it work, I had to call the function separately then use its result to do the select.\n\n\ndb=> explain analyze select cansmiles('Brc1ccc2nc(cn2c1)C(=O)O', 1);\n QUERY PLAN \n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=1.692..1.694 rows=1 loops=1)\n Total runtime: 1.720 ms\n(2 rows)\n\ndb=> explain analyze select version_id, 'Brc1ccc2nc(cn2c1)C(=O)O' from version where version.isosmiles = 'Brc1ccc2nc(cn2c1)C(=O)O';\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using i_version_isosmiles on version (cost=0.00..5.80 rows=1 width=4) (actual time=0.114..0.117 rows=1 loops=1)\n Index Cond: (isosmiles = 'Brc1ccc2nc(cn2c1)C(=O)O'::text)\n Total runtime: 0.158 ms\n(3 rows)\n\nCraig\n\n",
"msg_date": "Wed, 03 Jan 2007 19:11:24 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trivial function query optimized badly"
},
{
"msg_contents": "\nCraig,\nWhat version of postgres are you using? I just tested this on PG 8.1.2\nand was unable to reproduce these results. I wrote a simple function\nthat returns the same text passed to it, after sleeping for 1 second.\nI use it in a where clause, like your example below, and regardless of\nthe number of rows in the table, it still takes roughly 1 second,\nindicating to me the function is only called once.\n\nIs it possible that your function really isn't immutable? Would PG \nrealize this and fall back to treating it as VOLATILE ?\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Craig A.\nJames\nSent: Wednesday, January 03, 2007 9:11 PM\nTo: [email protected]\nSubject: [PERFORM] Trivial function query optimized badly\n\n\nWell, once again I'm hosed because there's no way to tell the optimizer\nthe cost for a user-defined function. I know this issue has already\nbeen raised (by me!) several times, but I have to remind everyone about\nthis. I frequently must rewrite my SQL to work around this problem.\n\nHere is the function definition:\n\n CREATE OR REPLACE FUNCTION cansmiles(text) RETURNS text\n AS '/usr/local/pgsql/lib/libchem.so', 'cansmiles'\n LANGUAGE 'C' STRICT IMMUTABLE;\n\nHere is the bad optimization:\n\ndb=> explain analyze select version_id, 'Brc1ccc2nc(cn2c1)C(=O)O' from\nversion where version.isosmiles = cansmiles('Brc1ccc2nc(cn2c1)C(=O)O',\n1);\n QUERY PLAN\n\n------------------------------------------------------------------------\n--------------------------------\n Seq Scan on version (cost=0.00..23.41 rows=1 width=4) (actual\ntime=1434.281..1540.253 rows=1 loops=1)\n Filter: (isosmiles = cansmiles('Brc1ccc2nc(cn2c1)C(=O)O'::text, 1))\n Total runtime: 1540.347 ms\n(3 rows)\n\nI've had to break it up into two separate queries. Ironically, for\nlarge databases, Postgres does the right thing -- it computes the\nfunction, then uses the index on the \"isosmiles\" column. It's blazingly\nfast and very satisfactory. But for small databases, it apparently\ndecides to recompute the function once per row, making the query N times\nslower (N = number of rows) than it should be!\n\nIn this instance, there are 1000 rows, and factor of 10^4 is a pretty\ndramatic slowdown... To make it work, I had to call the function\nseparately then use its result to do the select.\n\n\ndb=> explain analyze select cansmiles('Brc1ccc2nc(cn2c1)C(=O)O', 1);\n QUERY PLAN\n\n------------------------------------------------------------------------\n------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=1.692..1.694\nrows=1 loops=1)\n Total runtime: 1.720 ms\n(2 rows)\n\ndb=> explain analyze select version_id, 'Brc1ccc2nc(cn2c1)C(=O)O' from\nversion where version.isosmiles = 'Brc1ccc2nc(cn2c1)C(=O)O';\n QUERY PLAN\n\n------------------------------------------------------------------------\n-----------------------------------------------------\n Index Scan using i_version_isosmiles on version (cost=0.00..5.80\nrows=1 width=4) (actual time=0.114..0.117 rows=1 loops=1)\n Index Cond: (isosmiles = 'Brc1ccc2nc(cn2c1)C(=O)O'::text)\n Total runtime: 0.158 ms\n(3 rows)\n\nCraig\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n",
"msg_date": "Wed, 3 Jan 2007 22:17:48 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trivial function query optimized badly"
},
{
"msg_contents": "Adam Rich wrote:\n> Craig,\n> What version of postgres are you using? I just tested this on PG 8.1.2\n> and was unable to reproduce these results. I wrote a simple function\n> that returns the same text passed to it, after sleeping for 1 second.\n> I use it in a where clause, like your example below, and regardless of\n> the number of rows in the table, it still takes roughly 1 second,\n> indicating to me the function is only called once.\n\nSorry, I forgot that critical piece of info: I'm using 8.1.4.\n\nYour results would indicate that 8.1.2 creates a different plan than 8.1.4, or else there's some configuration parameter that's different between your installation and mine that causes a radically different plan to be used. I assume you vacuum/analyzed the table before you ran the query.\n\n> Is it possible that your function really isn't immutable? Would PG \n> realize this and fall back to treating it as VOLATILE ?\n\nNow that you say this, this seems more like a bug with the definition of IMMUTABLE. The function should only be called once if it's given a constant string, right? So the fact that Postgres called it once per row is just wrong.\n\nCraig\n\n",
"msg_date": "Wed, 03 Jan 2007 21:42:20 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trivial function query optimized badly"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> CREATE OR REPLACE FUNCTION cansmiles(text) RETURNS text\n> AS '/usr/local/pgsql/lib/libchem.so', 'cansmiles'\n> LANGUAGE 'C' STRICT IMMUTABLE;\n\nUmm ... this is a single-argument function.\n\n> db=> explain analyze select version_id, 'Brc1ccc2nc(cn2c1)C(=O)O' from version where version.isosmiles = cansmiles('Brc1ccc2nc(cn2c1)C(=O)O', 1);\n\nAnd this query is invoking some other, two-argument function; which\napparently hasn't been marked IMMUTABLE, else it'd have been folded\nto a constant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Jan 2007 00:46:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trivial function query optimized badly "
},
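A quick, illustrative way to spot this kind of mismatch is to list every overload of the function together with its declared volatility ('i' = immutable, 's' = stable, 'v' = volatile); any overload not marked 'i' will not be folded to a constant at plan time.

    SELECT p.oid::regprocedure AS signature, p.provolatile
      FROM pg_proc p
     WHERE p.proname = 'cansmiles';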
{
"msg_contents": "Tom Lane wrote:\n> \"Craig A. James\" <[email protected]> writes:\n>> CREATE OR REPLACE FUNCTION cansmiles(text) RETURNS text\n>> AS '/usr/local/pgsql/lib/libchem.so', 'cansmiles'\n>> LANGUAGE 'C' STRICT IMMUTABLE;\n> \n> Umm ... this is a single-argument function.\n> \n>> db=> explain analyze select version_id, 'Brc1ccc2nc(cn2c1)C(=O)O' from version where version.isosmiles = cansmiles('Brc1ccc2nc(cn2c1)C(=O)O', 1);\n> \n> And this query is invoking some other, two-argument function; which\n> apparently hasn't been marked IMMUTABLE, else it'd have been folded\n> to a constant.\n\nGood catch, mystery solved. There are two definitions for this function, the first just a \"wrapper\" for the second with the latter parameter defaulting to \"1\". The second definition was missing the \"IMMUTABLE\" keyword.\n\nThanks!\nCraig\n",
"msg_date": "Wed, 03 Jan 2007 22:57:05 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trivial function query optimized badly"
}
] |
[
{
"msg_contents": "I'm building an e-mail service that has two requirements: It should\nindex messages on the fly to have lightening search results, and it\nshould be able to handle large amounts of space. The server is going\nto be dedicated only for e-mail with 250GB of storage in Raid-5. I'd\nlike to know how PostgreSQL could handle such a large amount of data.\nHow much RAM would I need? I expect my users to have a 10GB quota per\ne-mail account.\nThanks for your advice,\n\n-- \nCharles A. Landemaine.\n",
"msg_date": "Thu, 4 Jan 2007 15:00:05 -0300",
"msg_from": "\"Charles A. Landemaine\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL to host e-mail?"
},
{
"msg_contents": "On Thu, 4 Jan 2007 15:00:05 -0300\n\"Charles A. Landemaine\" <[email protected]> wrote:\n\n> I'm building an e-mail service that has two requirements: It should\n> index messages on the fly to have lightening search results, and it\n> should be able to handle large amounts of space. The server is going\n> to be dedicated only for e-mail with 250GB of storage in Raid-5. I'd\n> like to know how PostgreSQL could handle such a large amount of data.\n> How much RAM would I need? I expect my users to have a 10GB quota per\n> e-mail account.\n\n Well this is a bit like asking \"what's the top speed for a van that\n can carry 8 people\", it really isn't enough information to be able\n to give you a good answer. \n\n It depends on everything from the data model you use to represent\n your E-mail messages, to configuration, to hardware speeds, etc. \n\n In general, you want as much RAM as you can afford for the project,\n the more the better. I'd say 2-4GB is the minimum. And RAID-5 isn't\n very good for database work in general, you'll get better performance\n from RAID 1+0. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Thu, 4 Jan 2007 12:08:30 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL to host e-mail?"
},
{
"msg_contents": "On Thu, 2007-01-04 at 15:00 -0300, Charles A. Landemaine wrote:\n> I'm building an e-mail service that has two requirements: It should\n> index messages on the fly to have lightening search results, and it\n> should be able to handle large amounts of space. The server is going\n> to be dedicated only for e-mail with 250GB of storage in Raid-5.\n\nWell Raid 5 is likely a mistake. Consider RAID 10.\n\n\n> I'd\n> like to know how PostgreSQL could handle such a large amount of data.\n\n250GB is not really that much data for PostgreSQL I have customers with\nmuch larger data sets. \n\n\n> How much RAM would I need?\n\nLots... which is about all I can tell you without more information. How\nmany customers? Are you using table partitioning? How will you be\nsearching? Full text or regex? \n\nJoshua D. Drake\n\n\n> I expect my users to have a 10GB quota per\n> e-mail account.\n> Thanks for your advice,\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Thu, 04 Jan 2007 10:11:45 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL to host e-mail?"
},
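On the partitioning question, a minimal sketch of what that could look like for a mail store, assuming the 8.1+ inheritance/constraint-exclusion approach; every table and column name here is hypothetical, not something proposed in the thread:

    CREATE TABLE messages (
        msg_id      bigserial,
        account_id  integer      NOT NULL,
        received_at timestamp    NOT NULL,
        headers     text,
        body        text
    );

    -- one child table per month, constrained so the planner can skip it
    CREATE TABLE messages_2007_01 (
        CHECK (received_at >= '2007-01-01' AND received_at < '2007-02-01')
    ) INHERITS (messages);

    CREATE INDEX messages_2007_01_acct_idx
        ON messages_2007_01 (account_id, received_at);

    -- let the planner prune children that cannot match the WHERE clause
    SET constraint_exclusion = on;

Inserts would have to be routed to the appropriate child table (by the application or a trigger), which is the usual trade-off with this style of partitioning.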
{
"msg_contents": "Frank Wiles wrote:\n> On Thu, 4 Jan 2007 15:00:05 -0300\n> \"Charles A. Landemaine\" <[email protected]> wrote:\n> \n>> I'm building an e-mail service that has two requirements: It should\n>> index messages on the fly to have lightening search results, and it\n>> should be able to handle large amounts of space. The server is going\n>> to be dedicated only for e-mail with 250GB of storage in Raid-5. I'd\n>> like to know how PostgreSQL could handle such a large amount of data.\n>> How much RAM would I need? I expect my users to have a 10GB quota per\n>> e-mail account.\n> \n\nI wouldn't do it this way, I would use cyrus. It stores the messages in\nplain text each in it's own file (like maildir) but then it also indexes\nthe headers in bdb format and also can index the search database in bdb\nformat. The result is a very simple mail store that can perform\nsearches very fast. The only problem is that the search index in only\nupdated periodically.\n\nIf you need more information then look at the cyrus-users list as this\nis WAY off topic.\n\nschu\n",
"msg_date": "Thu, 04 Jan 2007 09:42:40 -0900",
"msg_from": "Matthew Schumacher <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL to host e-mail?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\nCharles A. Landemaine wrote:\n| I'm building an e-mail service that has two requirements: It should\n| index messages on the fly to have lightening search results, and it\n| should be able to handle large amounts of space. The server is going\n| to be dedicated only for e-mail with 250GB of storage in Raid-5. I'd\n| like to know how PostgreSQL could handle such a large amount of data.\n| How much RAM would I need? I expect my users to have a 10GB quota per\n| e-mail account.\n| Thanks for your advice,\n|\n\nHello, Charles.\n\nI'll second people's suggestions to stay away from RAID5; the kind of\nworkload a mail storage will have is one that is approximately an even mix\nof writes (in database terms, INSERTs, UPDATEs and DELETEs) and reads, and\nwe all know RAID5 is a loser when it comes to writing a lot, at least when\nyou're building arrays with less than 10-15 drives. I'd suggest you go for\nRAID10 for the database cluster and an extra drive for WAL.\n\nAnother point of interest I'd like to mention is one particular aspect of\nthe workflow of an e-mail user: we will typically touch the main inbox a lot\nand leave most of the other folders pretty much intact for most of the time.\nThis suggests per-inbox quota might be useful, maybe in addition to the\noverall quota, because then you can calculate your database working set more\neasily, based on usage statistics for a typical account. Namely, if the\nmaximum size of an inbox is x MB, with y% average utilization, and you plan\nfor z users, of which w% will be typically active in one day, your database\nworking set will be somewhere in the general area of (x * y%) * (z * w%) MB.\nAdd to that the size of the indexes you create, and you have a very\napproximate idea of the amount of RAM you need to place in your machines to\nkeep your performance from becoming I/O-bound.\n\nThe main reason I'm writing this mail though, is to suggest you take a look\nat Oryx, http://www.oryx.com/; They used to have this product called\nMailstore, which was designed to be a mail store using PostgreSQL as a\nbackend, and has since evolved to a bit more than just that, it seems.\nPerhaps it could be of help to you while building your system, and I'm sure\nthe people at Oryx will be glad to hear from you while, and after you've\nbuilt your system.\n\nKind regards,\n- --\n~ Grega Bremec\n~ gregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\n\niD8DBQFFncGcfu4IwuB3+XoRA9Y9AJ0WA+0aooVvGMOpQXGStzkRNVDCjwCeNdfs\nCArTFwo6geR1oRBFDzFRY/U=\n=Y1Lf\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 05 Jan 2007 04:10:20 +0100",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL to host e-mail?"
},
{
"msg_contents": "On Fri, 2007-01-05 at 04:10 +0100, Grega Bremec wrote:\n> he main reason I'm writing this mail though, is to suggest you take a\n> look\n> at Oryx, http://www.oryx.com/; They used to have this product called\n> Mailstore, which was designed to be a mail store using PostgreSQL as a\n> backend, and has since evolved to a bit more than just that, it seems.\n> Perhaps it could be of help to you while building your system, and I'm\n> sure\n> the people at Oryx will be glad to hear from you while, and after\n> you've\n> built your system.\n> \n> Kind regards,\n> --\n> ~ Grega Bremec \nre above...\nhttp://www.archiveopteryx.org/1.10.html \n",
"msg_date": "Fri, 05 Jan 2007 13:15:44 -0500",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL to host e-mail?"
},
{
"msg_contents": "On Fri, Jan 05, 2007 at 01:15:44PM -0500, Reid Thompson wrote:\n> On Fri, 2007-01-05 at 04:10 +0100, Grega Bremec wrote:\n> > he main reason I'm writing this mail though, is to suggest you take a\n> > look\n> > at Oryx, http://www.oryx.com/; They used to have this product called\n> > Mailstore, which was designed to be a mail store using PostgreSQL as a\n> > backend, and has since evolved to a bit more than just that, it seems.\n> > Perhaps it could be of help to you while building your system, and I'm\n> > sure\n> > the people at Oryx will be glad to hear from you while, and after\n> > you've\n> > built your system.\n> > \n> > Kind regards,\n> > --\n> > ~ Grega Bremec \n> re above...\n> http://www.archiveopteryx.org/1.10.html \n\nYou should also look at http://dbmail.org/ , which runs on several\ndatabases (PostgreSQL included).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 14:18:20 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL to host e-mail?"
}
] |
[
{
"msg_contents": "Hello,\n \nI am looking at upgrading from 8.1.2 to 8.2.0, and I've found a query which\nruns a lot slower. Here is the query:\n \nselect type, currency_id, instrument_id, sum(amount) as total_amount from\nom_transaction \nwhere \nstrategy_id in\n('BASKET1','BASKET2','BASKET3','BASKET4','BASKET5','BASKET6','BASKET7','BASK\nET8','BASKET9','BASKET10','BASKET11')\nand owner_trader_id in ('dave','sam','bob','tad',\n'tim','harry','frank','bart','lisa','homer','marge','maggie','apu','milhouse\n','disco stu')\nand cf_account_id in (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,29)\nand as_of_date > '2006-12-04' and as_of_date <= '2006-12-05' \ngroup by type, currency_id, instrument_id;\n\nI changed the values in the in statements to fake ones, but it still takes\nover three seconds on 8.2, where 8.1 only takes 26 milliseconds. When I\nincrease the number of valules in the IN clauses, the query rapidly gets\nworse. I tried increasing my stats target to 1000 and analyzing, but that\ndidn't help so I put that back to 10. While the query is running the CPU is\nat 100%. Is there a more efficient way to write a query like this? I've\nattached the output from EXPLAIN ANALYZE in a file because it is somewhat\nlarge.\n \nThanks,\n \n\nDave Dutcher\nTelluride Asset Management\n952.653.6411",
"msg_date": "Thu, 4 Jan 2007 17:31:43 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query on Postgres 8.2"
},
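As an aside (this is not something suggested in the thread itself), one way long IN lists like these are sometimes restructured is to join against VALUES lists, which 8.2 accepts in FROM; whether it actually helps depends entirely on the data and the planner, so it is only a sketch to experiment with, with the lists abbreviated here:

SELECT t.type, t.currency_id, t.instrument_id, sum(t.amount) AS total_amount
FROM om_transaction t
JOIN (VALUES ('BASKET1'), ('BASKET2'), ('BASKET3')) AS s(strategy_id)
  ON s.strategy_id = t.strategy_id
JOIN (VALUES ('dave'), ('sam'), ('bob')) AS o(owner_trader_id)
  ON o.owner_trader_id = t.owner_trader_id
WHERE t.cf_account_id IN (1, 2, 3)
  AND t.as_of_date > '2006-12-04' AND t.as_of_date <= '2006-12-05'
GROUP BY t.type, t.currency_id, t.instrument_id;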
{
"msg_contents": "\"Dave Dutcher\" <[email protected]> writes:\n> I am looking at upgrading from 8.1.2 to 8.2.0, and I've found a query which\n> runs a lot slower.\n\nUm ... what indexes has this table got exactly? It's very unclear what\nalternatives the planner is being faced with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Jan 2007 20:12:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query on Postgres 8.2 "
},
{
"msg_contents": "Dave,\nIs it me or are the two examples you attached returning different row\ncounts? \nThat means either the source data is different, or your queries are.\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dave\nDutcher\nSent: Thursday, January 04, 2007 5:32 PM\nTo: [email protected]\nSubject: [PERFORM] Slow Query on Postgres 8.2\n\n\n\nHello,\n \nI am looking at upgrading from 8.1.2 to 8.2.0, and I've found a query\nwhich runs a lot slower. Here is the query:\n \nselect type, currency_id, instrument_id, sum(amount) as total_amount\nfrom om_transaction \nwhere \nstrategy_id in\n('BASKET1','BASKET2','BASKET3','BASKET4','BASKET5','BASKET6','BASKET7','\nBASKET8','BASKET9','BASKET10','BASKET11')\nand owner_trader_id in ('dave','sam','bob','tad',\n'tim','harry','frank','bart','lisa','homer','marge','maggie','apu','milh\nouse','disco stu')\nand cf_account_id in (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,29)\nand as_of_date > '2006-12-04' and as_of_date <= '2006-12-05' \ngroup by type, currency_id, instrument_id;\n\nI changed the values in the in statements to fake ones, but it still\ntakes over three seconds on 8.2, where 8.1 only takes 26 milliseconds.\nWhen I increase the number of valules in the IN clauses, the query\nrapidly gets worse. I tried increasing my stats target to 1000 and\nanalyzing, but that didn't help so I put that back to 10. While the\nquery is running the CPU is at 100%. Is there a more efficient way to\nwrite a query like this? I've attached the output from EXPLAIN ANALYZE\nin a file because it is somewhat large.\n \nThanks,\n \n\nDave Dutcher\nTelluride Asset Management\n952.653.6411\n\n \n\n \n\n\n\n\nMessage\n\n\nDave,\nIs it \nme or are the two examples you attached returning different row counts? \n\nThat \nmeans either the source data is different, or your queries \nare.\n \n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Dave \n DutcherSent: Thursday, January 04, 2007 5:32 PMTo: \n [email protected]: [PERFORM] Slow Query on \n Postgres 8.2\n\nHello,\n \nI am looking at \n upgrading from 8.1.2 to 8.2.0, and I've found a query which runs a lot \n slower. Here is the query:\n \nselect type, \n currency_id, instrument_id, sum(amount) as total_amount from om_transaction \n where strategy_id in \n ('BASKET1','BASKET2','BASKET3','BASKET4','BASKET5','BASKET6','BASKET7','BASKET8','BASKET9','BASKET10','BASKET11')and \n owner_trader_id in ('dave','sam','bob','tad', \n 'tim','harry','frank','bart','lisa','homer','marge','maggie','apu','milhouse','disco \n stu')and cf_account_id in \n (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,29)and as_of_date > \n '2006-12-04' and as_of_date <= '2006-12-05' group by type, currency_id, \n instrument_id;\nI changed the \n values in the in statements to fake ones, but it still takes over three \n seconds on 8.2, where 8.1 only takes 26 milliseconds. When I increase \n the number of valules in the IN clauses, the query rapidly gets worse. I \n tried increasing my stats target to 1000 and analyzing, but that didn't help \n so I put that back to 10. While the query is running the CPU is at \n 100%. Is there a more efficient way to write a query like this? \n I've attached the output from EXPLAIN ANALYZE in a file because it is somewhat \n large.\n \nThanks,\n \n\nDave DutcherTelluride Asset \n Management952.653.6411",
"msg_date": "Thu, 4 Jan 2007 19:19:25 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query on Postgres 8.2"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Tom Lane\n> \n> Um ... what indexes has this table got exactly? It's very \n> unclear what\n> alternatives the planner is being faced with.\n> \n\nHere is the table definition. Thanks.\n\n Table \"public.om_transaction\"\n Column | Type | Modifiers\n-----------------+------------------------+---------------------------------\n--------\n transaction_id | character varying(20) | not null default '0'::character\nvarying\n type | character varying(20) | not null default ''::character\nvarying\n fund_id | character varying(10) | not null default ''::character\nvarying\n owner_trader_id | character varying(10) | not null default ''::character\nvarying\n strategy_id | character varying(30) | not null default ''::character\nvarying\n instrument_id | integer | default 0\n cf_account_id | integer | not null default 0\n as_of_date | date | not null default\n'0001-01-01'::date\n insert_date | date | not null default\n'0001-01-01'::date\n amount | numeric(22,9) | not null default 0.000000000\n currency_id | integer | not null default 0\n process_state | integer | not null\n comment | character varying(256) | default ''::character varying\nIndexes:\n \"om_transaction_pkey\" PRIMARY KEY, btree (transaction_id)\n \"cf_account_id_om_transaction_index\" btree (cf_account_id)\n \"currency_id_om_transaction_index\" btree (currency_id)\n \"fund_id_om_transaction_index\" btree (fund_id)\n \"instrument_id_om_transaction_index\" btree (instrument_id)\n \"om_transaction_om_transaction_index\" btree (as_of_date, fund_id,\nstrategy_id, owner_trader_id, cf_account_id, instrument_id, \"type\")\n \"om_transaction_partial_process_state_index\" btree (process_state) WHERE\nprocess_state = 0\n \"owner_trader_id_om_transaction_index\" btree (owner_trader_id)\n \"strategy_id_om_transaction_index\" btree (strategy_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (owner_trader_id) REFERENCES om_trader(trader_id)\n \"$2\" FOREIGN KEY (fund_id) REFERENCES om_fund(fund_id)\n \"$3\" FOREIGN KEY (strategy_id) REFERENCES om_strategy(strategy_id)\n \"$4\" FOREIGN KEY (cf_account_id) REFERENCES om_cf_account(id)\n \"$5\" FOREIGN KEY (instrument_id) REFERENCES om_instrument(id)\n \"$6\" FOREIGN KEY (currency_id) REFERENCES om_instrument(id)\n\n",
"msg_date": "Thu, 4 Jan 2007 19:51:16 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query on Postgres 8.2 "
},
{
"msg_contents": "The source data is a little different. The fast query was on our production\n8.1 server, and the other was a test 8.2 server with day old data. The\nproduction server has like 3.84 million rows vs 3.83 million rows in test,\nso the statistics might be a little different, but I would figure the\ncompairison is still valid.\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adam Rich\nSent: Thursday, January 04, 2007 7:19 PM\nTo: 'Dave Dutcher'; [email protected]\nSubject: Re: [PERFORM] Slow Query on Postgres 8.2\n\n\nDave,\nIs it me or are the two examples you attached returning different row\ncounts? \nThat means either the source data is different, or your queries are.\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dave Dutcher\nSent: Thursday, January 04, 2007 5:32 PM\nTo: [email protected]\nSubject: [PERFORM] Slow Query on Postgres 8.2\n\n\n\nHello,\n \nI am looking at upgrading from 8.1.2 to 8.2.0, and I've found a query which\nruns a lot slower. Here is the query:\n \nselect type, currency_id, instrument_id, sum(amount) as total_amount from\nom_transaction \nwhere \nstrategy_id in\n('BASKET1','BASKET2','BASKET3','BASKET4','BASKET5','BASKET6','BASKET7','BASK\nET8','BASKET9','BASKET10','BASKET11')\nand owner_trader_id in ('dave','sam','bob','tad',\n'tim','harry','frank','bart','lisa','homer','marge','maggie','apu','milhouse\n','disco stu')\nand cf_account_id in (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,29)\nand as_of_date > '2006-12-04' and as_of_date <= '2006-12-05' \ngroup by type, currency_id, instrument_id;\n\nI changed the values in the in statements to fake ones, but it still takes\nover three seconds on 8.2, where 8.1 only takes 26 milliseconds. When I\nincrease the number of valules in the IN clauses, the query rapidly gets\nworse. I tried increasing my stats target to 1000 and analyzing, but that\ndidn't help so I put that back to 10. While the query is running the CPU is\nat 100%. Is there a more efficient way to write a query like this? I've\nattached the output from EXPLAIN ANALYZE in a file because it is somewhat\nlarge.\n \nThanks,\n \n\nDave Dutcher\nTelluride Asset Management\n952.653.6411\n\n \n\n \n\n\n\n\nMessage\n\n\nThe \nsource data is a little different. The fast query was on our \nproduction 8.1 server, and the other was a test 8.2 server with day old \ndata. The production server has like 3.84 million rows vs 3.83 million \nrows in test, so the statistics might be a little different, but I would figure \nthe compairison is still valid.\n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Adam \n RichSent: Thursday, January 04, 2007 7:19 PMTo: 'Dave \n Dutcher'; [email protected]: Re: [PERFORM] \n Slow Query on Postgres 8.2\nDave,\nIs \n it me or are the two examples you attached returning different row \n counts? \nThat \n means either the source data is different, or your queries \n are.\n \n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Dave \n DutcherSent: Thursday, January 04, 2007 5:32 PMTo: \n [email protected]: [PERFORM] Slow Query on \n Postgres 8.2\n\nHello,\n \nI am looking at \n upgrading from 8.1.2 to 8.2.0, and I've found a query which runs a lot \n slower. 
Here is the query:\n \nselect type, \n currency_id, instrument_id, sum(amount) as total_amount from om_transaction \n where strategy_id in \n ('BASKET1','BASKET2','BASKET3','BASKET4','BASKET5','BASKET6','BASKET7','BASKET8','BASKET9','BASKET10','BASKET11')and \n owner_trader_id in ('dave','sam','bob','tad', \n 'tim','harry','frank','bart','lisa','homer','marge','maggie','apu','milhouse','disco \n stu')and cf_account_id in \n (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,29)and as_of_date > \n '2006-12-04' and as_of_date <= '2006-12-05' group by type, \n currency_id, instrument_id;\nI changed the \n values in the in statements to fake ones, but it still takes over three \n seconds on 8.2, where 8.1 only takes 26 milliseconds. When I increase \n the number of valules in the IN clauses, the query rapidly gets worse. \n I tried increasing my stats target to 1000 and analyzing, but that didn't \n help so I put that back to 10. While the query is running the CPU is \n at 100%. Is there a more efficient way to write a query like \n this? I've attached the output from EXPLAIN ANALYZE in a file because \n it is somewhat large.\n \nThanks,\n \n\nDave DutcherTelluride Asset \n Management952.653.6411",
"msg_date": "Thu, 4 Jan 2007 20:01:12 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query on Postgres 8.2"
},
{
"msg_contents": "\"Dave Dutcher\" <[email protected]> writes:\n> Here is the table definition. Thanks.\n\n[ fools around with it for awhile... ] I think this is already fixed\nfor 8.2.1. Note the costs of the two related index scans:\n\n8.2.0:\n -> Bitmap Index Scan on om_transaction_om_transaction_index (cost=0.00..7421.67 rows=488 width=0) (actual time=3411.227..3411.227 rows=0 loops=1)\n Index Cond: ((as_of_date > '2006-12-04'::date) AND (as_of_date <= '2006-12-05'::date) AND ((strategy_id)::text = ANY (('{BASKET1,BASKET2,BASKET3,BASKET4,BASKET5,BASKET6,BASKET7,BASKET8,BASKET9,BASKET10,BASKET11}'::character varying[])::text[])) AND ((owner_trader_id)::text = ANY (('{dave,sam,bob,tad,tim,harry,frank,bart,lisa,homer,marge,maggie,apu,milhouse,\"disco stu\"}'::character varying[])::text[])) AND (cf_account_id = ANY ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,29}'::integer[])))\n\n8.1.2:\n -> Bitmap Index Scan on om_transaction_om_transaction_index (cost=0.00..101.69 rows=5949 width=0) (actual time=3.419..3.419 rows=7967 loops=1)\n Index Cond: ((as_of_date > '2006-12-04'::date) AND (as_of_date <= '2006-12-05'::date))\n\n8.1.2 returns a lot more rows but spends a lot less time doing it.\nThe reason is that using all those =ANY clauses as index quals is\n*expensive* --- they actually trigger multiple scans of the index.\n8.2.0 is underestimating their cost. We fixed that a couple weeks ago\n(after some reports from Arjen van der Meijden) and I can't actually get\n8.2 branch tip to produce a plan like what you show.\n\nPlease try it again when 8.2.1 comes out (Monday) and we'll see if\nthere's any more tweaking needed.\n\nBTW, it's interesting to note that the plan 8.1.2 produces is pretty\nobviously bogus in itself ... why do only the first two arms of the\nBitmapOr use as_of_date conditions? We fixed some sillinesses in the\nbitmap scan planning later in the 8.1 series, so I think you'd find\nthat 8.1.latest does this differently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Jan 2007 22:04:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query on Postgres 8.2 "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Tom Lane\n> \n> [ fools around with it for awhile... ] I think this is already fixed\n> for 8.2.1. Note the costs of the two related index scans:\n\nI installed 8.2.1 this morning and it works much better. The query that was\ntaking 3411.429ms on 8.2.0 now takes 9.3ms. Thanks for your help.\n\n",
"msg_date": "Mon, 8 Jan 2007 12:59:14 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query on Postgres 8.2 "
}
] |
[
{
"msg_contents": "Hi all,\n\n I'm not sure if this question fits in the topic of this list.\n\n I'm interested in partitioning and it's the first time I'd use it.\nThere is an issue I don't know how you handle it. Lets say I'm\ninterested in store monthly based statistical data like the example of\nhttp://www.postgresql.org/docs/8.2/static/ddl-partitioning.html. What I\ndon't like of this approach is that the monthly tables, rules... must be\ncreated \"manually\" or at least I haven't found any other option.\n\n My question is how do you manage this? do you have a cron task that\ncreates automatically these monthly elements (tables, rules, ... ) or\nthere is another approach that doesn't require external things like cron\n only PostgreSQL.\n-- \nArnau\n",
"msg_date": "Fri, 05 Jan 2007 12:02:21 +0100",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning"
}
] |
[
{
"msg_contents": "Take a look at the set of partitioning functions I wrote shortly after\nthe 8.1 release:\n\nhttp://www.studenter.hb.se/~arch/files/part_functions.sql\n\nYou could probably work something out using those functions (as-is, or\nas inspiration) together with pgAgent\n(http://www.pgadmin.org/docs/1.4/pgagent.html)\n\n/Mikael\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Arnau\n> Sent: den 5 januari 2007 12:02\n> To: [email protected]\n> Subject: [PERFORM] Partitioning\n> \n> Hi all,\n> \n> I'm not sure if this question fits in the topic of this list.\n> \n> I'm interested in partitioning and it's the first time I'd use it.\n> There is an issue I don't know how you handle it. Lets say I'm\n> interested in store monthly based statistical data like the example of\n> http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html. What\nI\n> don't like of this approach is that the monthly tables, rules... must\nbe\n> created \"manually\" or at least I haven't found any other option.\n> \n> My question is how do you manage this? do you have a cron task that\n> creates automatically these monthly elements (tables, rules, ... ) or\n> there is another approach that doesn't require external things like\ncron\n> only PostgreSQL.\n> --\n> Arnau\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Fri, 5 Jan 2007 12:47:08 +0100",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning"
},
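Mikael's functions aren't reproduced here, but the general shape of such a helper, a PL/pgSQL function that pgAgent or cron can call once a month to create the next child table, might look roughly like the sketch below; the parent table and the logdate column are placeholders invented for the example, not anything taken from his script:

CREATE OR REPLACE FUNCTION create_monthly_partition(parent text, month_start date)
RETURNS void AS $$
DECLARE
    month_end date := (month_start + interval '1 month')::date;
    child     text := parent || '_' || to_char(month_start, 'YYYYMM');
BEGIN
    -- child table constrained to exactly one month of data
    EXECUTE 'CREATE TABLE ' || child
         || ' (CHECK (logdate >= ' || quote_literal(month_start::text)
         || ' AND logdate < ' || quote_literal(month_end::text)
         || ')) INHERITS (' || parent || ')';
    EXECUTE 'CREATE INDEX ' || child || '_logdate_idx ON ' || child || ' (logdate)';
END;
$$ LANGUAGE plpgsql;

-- e.g. scheduled once a month from pgAgent or cron:
-- SELECT create_monthly_partition('measurement',
--        date_trunc('month', now() + interval '1 month')::date);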
{
"msg_contents": "BTW, someone coming up with a set of functions to handle partitioning\nfor the general 'partition by time' case would make a GREAT project on\npgFoundry.\n\nOn Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:\n> Take a look at the set of partitioning functions I wrote shortly after\n> the 8.1 release:\n> \n> http://www.studenter.hb.se/~arch/files/part_functions.sql\n> \n> You could probably work something out using those functions (as-is, or\n> as inspiration) together with pgAgent\n> (http://www.pgadmin.org/docs/1.4/pgagent.html)\n> \n> /Mikael\n> \n> > -----Original Message-----\n> > From: [email protected]\n> [mailto:pgsql-performance-\n> > [email protected]] On Behalf Of Arnau\n> > Sent: den 5 januari 2007 12:02\n> > To: [email protected]\n> > Subject: [PERFORM] Partitioning\n> > \n> > Hi all,\n> > \n> > I'm not sure if this question fits in the topic of this list.\n> > \n> > I'm interested in partitioning and it's the first time I'd use it.\n> > There is an issue I don't know how you handle it. Lets say I'm\n> > interested in store monthly based statistical data like the example of\n> > http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html. What\n> I\n> > don't like of this approach is that the monthly tables, rules... must\n> be\n> > created \"manually\" or at least I haven't found any other option.\n> > \n> > My question is how do you manage this? do you have a cron task that\n> > creates automatically these monthly elements (tables, rules, ... ) or\n> > there is another approach that doesn't require external things like\n> cron\n> > only PostgreSQL.\n> > --\n> > Arnau\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 5: don't forget to increase your free space map settings\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 14:20:06 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "I really wish that PostgreSQL supported a \"nice\" partitioning syntax\nlike MySQL has. \n\nHere is an example:\nCREATE TABLE tr (id INT, name VARCHAR(50), purchased DATE)\n PARTITION BY RANGE( YEAR(purchased) ) (\n PARTITION p0 VALUES LESS THAN (1990),\n PARTITION p1 VALUES LESS THAN (1995),\n PARTITION p2 VALUES LESS THAN (2000),\n PARTITION p3 VALUES LESS THAN (2005)\n);\n\nAnd to drop a partition:\nALTER TABLE tr DROP PARTITION p2;\n\n\nThis seems so much more intuitive and simpler than what is required to\nset it up in PostgreSQL. Does PostgreSQL's approach to table\npartitioning have any advantage over MySQL? Is a \"nicer\" syntax planned\nfor Postgres?\n\n\nOn Wed, 10 Jan 2007 14:20:06 -0600, \"Jim C. Nasby\" <[email protected]> said:\n> BTW, someone coming up with a set of functions to handle partitioning\n> for the general 'partition by time' case would make a GREAT project on\n> pgFoundry.\n> \n> On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:\n> > Take a look at the set of partitioning functions I wrote shortly after\n> > the 8.1 release:\n> > \n> > http://www.studenter.hb.se/~arch/files/part_functions.sql\n> > \n> > You could probably work something out using those functions (as-is, or\n> > as inspiration) together with pgAgent\n> > (http://www.pgadmin.org/docs/1.4/pgagent.html)\n> > \n> > /Mikael\n> > \n> > > -----Original Message-----\n> > > From: [email protected]\n> > [mailto:pgsql-performance-\n> > > [email protected]] On Behalf Of Arnau\n> > > Sent: den 5 januari 2007 12:02\n> > > To: [email protected]\n> > > Subject: [PERFORM] Partitioning\n> > > \n> > > Hi all,\n> > > \n> > > I'm not sure if this question fits in the topic of this list.\n> > > \n> > > I'm interested in partitioning and it's the first time I'd use it.\n> > > There is an issue I don't know how you handle it. Lets say I'm\n> > > interested in store monthly based statistical data like the example of\n> > > http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html. What\n> > I\n> > > don't like of this approach is that the monthly tables, rules... must\n> > be\n> > > created \"manually\" or at least I haven't found any other option.\n> > > \n> > > My question is how do you manage this? do you have a cron task that\n> > > creates automatically these monthly elements (tables, rules, ... ) or\n> > > there is another approach that doesn't require external things like\n> > cron\n> > > only PostgreSQL.\n> > > --\n> > > Arnau\n> > > \n> > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > TIP 5: don't forget to increase your free space map settings\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> > \n> \n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Wed, 10 Jan 2007 15:28:00 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
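For comparison, the inheritance-based equivalent of that MySQL example in PostgreSQL 8.x is roughly the following, spelled out by hand (which is exactly the verbosity being discussed); "dropping a partition" is then simply dropping the child table. This follows the pattern in the ddl-partitioning documentation rather than anything posted in the thread:

CREATE TABLE tr (id int, name varchar(50), purchased date);

CREATE TABLE tr_p0 (CHECK (purchased < DATE '1990-01-01')) INHERITS (tr);
CREATE TABLE tr_p1 (CHECK (purchased >= DATE '1990-01-01' AND purchased < DATE '1995-01-01')) INHERITS (tr);
CREATE TABLE tr_p2 (CHECK (purchased >= DATE '1995-01-01' AND purchased < DATE '2000-01-01')) INHERITS (tr);
CREATE TABLE tr_p3 (CHECK (purchased >= DATE '2000-01-01' AND purchased < DATE '2005-01-01')) INHERITS (tr);

-- plus one rule (or trigger) per partition to redirect inserts, e.g.:
CREATE RULE tr_insert_p3 AS ON INSERT TO tr
  WHERE (purchased >= DATE '2000-01-01' AND purchased < DATE '2005-01-01')
  DO INSTEAD INSERT INTO tr_p3 VALUES (NEW.*);

-- the analogue of ALTER TABLE tr DROP PARTITION p2:
DROP TABLE tr_p2;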
{
"msg_contents": "On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:\n>> Take a look at the set of partitioning functions I wrote shortly after\n>> the 8.1 release:\n>>\n>> http://www.studenter.hb.se/~arch/files/part_functions.sql\n>>\n>> You could probably work something out using those functions (as-is, or\n>> as inspiration) together with pgAgent\n>> (http://www.pgadmin.org/docs/1.4/pgagent.html)\n>>\n>> /Mikael\n>> \nThose are some great functions.\n\n-- \nerik jones <[email protected]>\nsoftware development\nemma(r)\n\n",
"msg_date": "Wed, 10 Jan 2007 14:30:45 -0600",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:\n> This seems so much more intuitive and simpler than what is required to\n> set it up in PostgreSQL. Does PostgreSQL's approach to table\n> partitioning have any advantage over MySQL? Is a \"nicer\" syntax planned\n> for Postgres?\n\nThe focus was to get the base functionality working, and working\ncorrectly. Another consideration is that there's multiple ways to\naccomplish the partitioning; exposing the basic functionality without\nenforcing a given interface provides more flexibility (ie: it appears\nthat you can't do list partitioning with MySQL, while you can with\nPostgreSQL).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 15:09:31 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "You can do list partitioning in MySQL:\nhttp://dev.mysql.com/doc/refman/5.1/en/partitioning-list.html\n\nMy comment was not meant as a criticism of PostgreSQL's current state -\nI'm glad that it has partitioning. I'm simply wondering if there are\nany plans of adopting a more user-friendly syntax in the future similar\nto MySQL partitioning support. Having first-class citizen support of\npartitions would also allow some nice administrative GUIs and views to\nbe built for managing them. \n\nJeremy Haile\n\n\nOn Wed, 10 Jan 2007 15:09:31 -0600, \"Jim C. Nasby\" <[email protected]> said:\n> On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:\n> > This seems so much more intuitive and simpler than what is required to\n> > set it up in PostgreSQL. Does PostgreSQL's approach to table\n> > partitioning have any advantage over MySQL? Is a \"nicer\" syntax planned\n> > for Postgres?\n> \n> The focus was to get the base functionality working, and working\n> correctly. Another consideration is that there's multiple ways to\n> accomplish the partitioning; exposing the basic functionality without\n> enforcing a given interface provides more flexibility (ie: it appears\n> that you can't do list partitioning with MySQL, while you can with\n> PostgreSQL).\n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 16:15:54 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "On Wed, 2007-01-10 at 15:09, Jim C. Nasby wrote:\n> On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:\n> > This seems so much more intuitive and simpler than what is required to\n> > set it up in PostgreSQL. Does PostgreSQL's approach to table\n> > partitioning have any advantage over MySQL? Is a \"nicer\" syntax planned\n> > for Postgres?\n> \n> The focus was to get the base functionality working, and working\n> correctly. Another consideration is that there's multiple ways to\n> accomplish the partitioning; exposing the basic functionality without\n> enforcing a given interface provides more flexibility (ie: it appears\n> that you can't do list partitioning with MySQL, while you can with\n> PostgreSQL).\n\nAnd I don't think the mysql partition supports tablespaces either.\n",
"msg_date": "Wed, 10 Jan 2007 15:30:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "On Wed, 2007-01-10 at 15:15, Jeremy Haile wrote:\n> You can do list partitioning in MySQL:\n> http://dev.mysql.com/doc/refman/5.1/en/partitioning-list.html\n> \n> My comment was not meant as a criticism of PostgreSQL's current state -\n> I'm glad that it has partitioning. I'm simply wondering if there are\n> any plans of adopting a more user-friendly syntax in the future similar\n> to MySQL partitioning support. Having first-class citizen support of\n> partitions would also allow some nice administrative GUIs and views to\n> be built for managing them. \n\nI don't think anyone took it as a negative criticism. Jim and I were\nboth more pointing out that the development process of the two projects\nis somewhat different.\n\nIn MySQL a small group that doesn't necessarily interact with a large\nuser community sets out to implement a feature in a given time line with\na given set of requirements and they tend to ignore what they see as\nesoteric requirements.\n\nIn PostgreSQL a large development community that communicates fairly\nwell with it's large user community put somewhat of the onus of proving\nthe need and doing the initial proof of concept on those who say they\nneed a feature, often working in a method where the chief hackers lend a\nhand to someone who wants the feature so they can get a proof of concept\nup and running. And example would be the auditing / time travel in the\ncontrib/spi project. After several iterations, and given the chance to\nlearn from the mistakes of the previous incarnations, something often\nrises out of that to produce the feature needed.\n\nGenerally speaking the postgresql method takes longer, making life\nharder today, but produces cleaner more easily maintained solutions,\nmaking life easier in the future. Meanwhile the mysql method works\nfaster, making life easier today, but makes compromises that might make\nlife harder in the future.\n\nSomething that embodies that difference is the table handler philosophy\nof both databases. PostgreSQL has the abstraction to have more than one\ntable handler, but in practice has exactly one table handler. MySQL has\nthe ability to have many table handlers, and in fact uses many of them.\n\nWith PostgreSQL this means that things like the query parsing /\nexecution and the table handler are tightly coupled. This results in\nthings like transactable DDL. Sometimes this results in suggestions\nbeing dismissed out of hand because they would have unintended\nconsequences.\n\nIn MySQL, because of the multiple table handlers, many compromises on\nthe query parsing have to be made. The most common one being that you\ncan define constraints / foreign keys in a column item, and they will\nsimply be ignored with no error or notice. The fk constraints have to\ngo at the end of the column list to be parsed and executed.\n\nSo, partitioning, being something that will touch a lot of parts of the\ndatabase, isn't gonna just show up one afternoon in pgsql. It will\nlikely take a few people making proof of concept versions before a\nconsensus is reached and someone who has the ability codes it up.\n",
"msg_date": "Wed, 10 Jan 2007 15:51:55 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "\n\n\nOn Wed, 10 Jan 2007 15:30:16 -0600, Scott Marlowe <[email protected]> wrote:\n\n[...]\n\n> \n> And I don't think the mysql partition supports tablespaces either.\n> \n\nMySQL supports distributing partitions over multiple disks via the SUBPARTITION clause [1].\nI leave it to you, wether their syntax is cleaner, more powerful or easier or ....;)\n\n\nBernd\n\n[1] http://dev.mysql.com/doc/refman/5.1/en/partitioning-subpartitions.html\n",
"msg_date": "Thu, 11 Jan 2007 13:51:12 +0100",
"msg_from": "Bernd Helmle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "> On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:\n> >> Take a look at the set of partitioning functions I wrote shortly\nafter\n> >> the 8.1 release:\n> >>\n> >> http://www.studenter.hb.se/~arch/files/part_functions.sql\n> >>\n> >> You could probably work something out using those functions (as-is,\nor\n> >> as inspiration) together with pgAgent\n> >> (http://www.pgadmin.org/docs/1.4/pgagent.html)\n> >>\n> >> /Mikael\n> >>\n> Those are some great functions.\n> \n\nWell, they're less than optimal in one aspect: they add one rule per\npartition, making them unsuitable for OLTP type applications (actually:\nany application where insert performance is crucial). Someone with time\nand/or energy could probably fix that, I guess...patches are welcome :)\n\n/Mikael\n\n\n",
"msg_date": "Thu, 11 Jan 2007 13:59:20 +0100",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "Well - whether or not MySQL's implementation of partitioning has some\ndeficiency, it sure is a lot easier to set up than PostgreSQL. And I\ndon't think there is any technical reason that setting up partitioning\non Postgres couldn't be very easy and still be robust.\n\nOn Thu, 11 Jan 2007 13:59:20 +0100, \"Mikael Carneholm\"\n<[email protected]> said:\n> > On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:\n> > >> Take a look at the set of partitioning functions I wrote shortly\n> after\n> > >> the 8.1 release:\n> > >>\n> > >> http://www.studenter.hb.se/~arch/files/part_functions.sql\n> > >>\n> > >> You could probably work something out using those functions (as-is,\n> or\n> > >> as inspiration) together with pgAgent\n> > >> (http://www.pgadmin.org/docs/1.4/pgagent.html)\n> > >>\n> > >> /Mikael\n> > >>\n> > Those are some great functions.\n> > \n> \n> Well, they're less than optimal in one aspect: they add one rule per\n> partition, making them unsuitable for OLTP type applications (actually:\n> any application where insert performance is crucial). Someone with time\n> and/or energy could probably fix that, I guess...patches are welcome :)\n> \n> /Mikael\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n",
"msg_date": "Thu, 11 Jan 2007 09:02:01 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "\nEach partition can have its own disk, without using subpartitions.\n\nCREATE TABLE th (id INT, name VARCHAR(30), adate DATE)\nPARTITION BY LIST(YEAR(adate))\n(\n PARTITION p1999 VALUES IN (1995, 1999, 2003)\n DATA DIRECTORY = '/var/appdata/95/data'\n INDEX DIRECTORY = '/var/appdata/95/idx',\n PARTITION p2000 VALUES IN (1996, 2000, 2004)\n DATA DIRECTORY = '/var/appdata/96/data'\n INDEX DIRECTORY = '/var/appdata/96/idx',\n PARTITION p2001 VALUES IN (1997, 2001, 2005)\n DATA DIRECTORY = '/var/appdata/97/data'\n INDEX DIRECTORY = '/var/appdata/97/idx',\n PARTITION p2000 VALUES IN (1998, 2002, 2006)\n DATA DIRECTORY = '/var/appdata/98/data'\n INDEX DIRECTORY = '/var/appdata/98/idx'\n);\n\nSubpartitions are just a way to break (parent) partitions up into \nsmaller pieces. Those of course can be moved to other disks \njust like the main partitions.\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bernd\nHelmle\nSent: Thursday, January 11, 2007 6:51 AM\nTo: Scott Marlowe\nCc: Jim C. Nasby; Jeremy Haile; [email protected]\nSubject: Re: [PERFORM] Partitioning\n\n\n\n\n\nOn Wed, 10 Jan 2007 15:30:16 -0600, Scott Marlowe\n<[email protected]> wrote:\n\n[...]\n\n> \n> And I don't think the mysql partition supports tablespaces either.\n> \n\nMySQL supports distributing partitions over multiple disks via the\nSUBPARTITION clause [1].\nI leave it to you, wether their syntax is cleaner, more powerful or\neasier or ....;)\n\n\nBernd\n\n[1]\nhttp://dev.mysql.com/doc/refman/5.1/en/partitioning-subpartitions.html\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Thu, 11 Jan 2007 08:18:39 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
},
{
"msg_contents": "\n\n\nOn Thu, 11 Jan 2007 08:18:39 -0600, \"Adam Rich\" <[email protected]> wrote:\n\n> \n> Subpartitions are just a way to break (parent) partitions up into\n> smaller pieces. Those of course can be moved to other disks\n> just like the main partitions.\n\nAh, didn't know that (i just wondered why i need a subpartition to\nchange the location of a partition). \n\nThanks for your clarification...\n\nBernd\n",
"msg_date": "Thu, 11 Jan 2007 15:32:56 +0100",
"msg_from": "Bernd Helmle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
}
] |
[
{
"msg_contents": "Greetings,\n\nI've been running Postgresql for many years now and have been more than \nhappy with its performance and stability. One of those things I've never \nquite understood was vacuuming. So I've been running 8.1.4 for a while \nand enabled 'autovacuum' when I first insalled 8.1.4 ... So in my mind, my \ndatabase should stay 'clean'...\n\nAs the months have gone by, I notice many of my tables having *lots* of \nunused item pointers. For example,\n\nThere were 31046438 unused item pointers.\nTotal free space (including removable row versions) is 4894537260 bytes.\n580240 pages are or will become empty, including 7 at the end of the \ntable.\n623736 pages containing 4893544876 free bytes are potential move \ndestinations.\n\nPerhaps I shouldn't be concerned with this? In all, I've got around 400 \nGB of data on postgresql, but am not sure how much of it is old data. \nMany of my tables have 100,000s of updates per day.\n\nDo I need to be running old fashioned 'vacuumdb' routinely as well? I \nguess I just don't understand why autovacuum is not automatically doing \nthis for me and I have tables with so many unused item pointers.\n\nthanks!\n daryl\n",
"msg_date": "Sat, 6 Jan 2007 12:59:03 -0600 (CST)",
"msg_from": "Daryl Herzmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missing the point of autovacuum"
},
{
"msg_contents": "[Daryl Herzmann - Sat at 12:59:03PM -0600]\n> As the months have gone by, I notice many of my tables having *lots* of \n> unused item pointers. For example,\n\nProbably not the issue here, but we had some similar issue where we had\nmany long-running transactions - i.e. some careless colleague entering\n\"begin\" into his psql shell and leaving it running for some days without\nentering \"commit\" or \"rollback\", plus some instances where the\napplications started a transaction without closing it.\n\n> Perhaps I shouldn't be concerned with this? In all, I've got around 400 \n> GB of data on postgresql, but am not sure how much of it is old data. \n\nI didn't count the zeroes, but autovacuum does have rules saying it will\nnot touch the table until some percentages of it needs to be vacuumed\noff. This is of course configurable.\n\n> Do I need to be running old fashioned 'vacuumdb' routinely as well? I \n> guess I just don't understand why autovacuum is not automatically doing \n> this for me and I have tables with so many unused item pointers.\n\nIf you have some period of the day with less activity than else, it is a\ngood idea running an old-fashionated vacuum as well. The regular vacuum\nprocess will benefit from any work done by the autovacuum.\n\n",
"msg_date": "Sat, 6 Jan 2007 20:33:13 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing the point of autovacuum"
},
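A quick way to spot the kind of forgotten open transactions Tobias describes is to look at pg_stat_activity; on 8.1/8.2 the '<IDLE> in transaction' label only shows up when stats_command_string is enabled, and there is no xact_start column yet, so query_start is only an approximation:

SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
   OR query_start < now() - interval '1 hour'
ORDER BY query_start;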
{
"msg_contents": "Daryl Herzmann <[email protected]> writes:\n> As the months have gone by, I notice many of my tables having *lots* of \n> unused item pointers. For example,\n\n> There were 31046438 unused item pointers.\n> Total free space (including removable row versions) is 4894537260 bytes.\n> 580240 pages are or will become empty, including 7 at the end of the \n> table.\n> 623736 pages containing 4893544876 free bytes are potential move \n> destinations.\n\nThis definitely looks like autovac is not getting the job done for you.\nThe default autovac settings in 8.1 are very un-aggressive and many\npeople find that they need to change the settings to make autovac vacuum\nmore often. Combining autovac with the traditional approach of a cron\njob isn't a bad idea, either, if you have known slow times of day ---\nautovac doesn't currently have any concept of a maintenance window,\nso if you'd rather your vacuuming mostly happened at 2AM, you still need\na cron job for that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Jan 2007 14:46:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing the point of autovacuum "
}
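The 8.1 knobs Tom refers to live in postgresql.conf; the numbers below are only an illustrative starting point for "more aggressive than the defaults", not values recommended in the thread, and the cron entry is the usual stand-in for a maintenance window:

# postgresql.conf (8.1) -- autovacuum needs row-level stats collection
stats_start_collector = on
stats_row_level = on
autovacuum = on
autovacuum_naptime = 60                # seconds between checks of each database
autovacuum_vacuum_threshold = 500      # default is 1000 in 8.1
autovacuum_vacuum_scale_factor = 0.1   # default is 0.4, i.e. 40% of the table
autovacuum_analyze_scale_factor = 0.05

# crontab entry for the traditional nightly vacuum during the quiet period
0 2 * * *  vacuumdb --all --analyze --quiet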
] |
[
{
"msg_contents": "Hi there, we've partioned a table (using 8.2) by day due to the 50TB of\ndata (500k row size, 100G rows) we expect to store it in a year.\nOur performance on inserts and selects against the master table is\ndisappointing, 10x slower (with ony 1 partition constraint) than we get by\ngoing to the partioned table directly. Browsing the list I get the\nimpression this just a case of too many partitions? would be better off\ndoing partitions of partitions ?\n\nAny other advice or pointers to more information with dealing with these\nsorts of scales appreciated.\n\nthanks\nColin.\n\nHi there, we've partioned a table (using 8.2) by day due to the 50TB of data (500k row size, 100G rows) we expect to store it in a year. Our performance on inserts and selects against the master table is disappointing, 10x slower (with ony 1 partition constraint) than we get by going to the partioned table directly. Browsing the list I get the impression this just a case of too many partitions? would be better off doing partitions of partitions ?\nAny other advice or pointers to more information with dealing with these sorts of scales appreciated.thanksColin.",
"msg_date": "Sun, 7 Jan 2007 17:37:08 +1300",
"msg_from": "\"Colin Taylor\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "table partioning performance"
},
{
"msg_contents": "Colin,\n\n\nOn 1/6/07 8:37 PM, \"Colin Taylor\" <[email protected]> wrote:\n\n> Hi there, we've partioned a table (using 8.2) by day due to the 50TB of data\n> (500k row size, 100G rows) we expect to store it in a year.\n> Our performance on inserts and selects against the master table is\n> disappointing, 10x slower (with ony 1 partition constraint) than we get by\n> going to the partioned table directly. Browsing the list I get the impression\n> this just a case of too many partitions? would be better off doing partitions\n> of partitions ? \n\nCan you post an \"explain analyze\" of your query here so we can see what's\ngoing on?\n\n- Luke\n\n\n",
"msg_date": "Mon, 08 Jan 2007 07:39:21 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
{
"msg_contents": "On 1/6/07, Colin Taylor <[email protected]> wrote:\n\n> Hi there, we've partioned a table (using 8.2) by day due to the 50TB of\n> data (500k row size, 100G rows) we expect to store it in a year.\n> Our performance on inserts and selects against the master table is\n> disappointing, 10x slower (with ony 1 partition constraint) than we get by\n> going to the partioned table directly.\n\n\nAre you implementing table partitioning as described at:\nhttp://developer.postgresql.org/pgdocs/postgres/ddl-partitioning.html ?\n\nIf yes, and if I understand your partitioning \"by day\" correctly, then you\nhave one base/master table with 366 partitions (inherited/child tables). Do\neach of these partitions have check constraints and does your master table\nuse rules to redirect inserts to the appropriate partition? I guess I don't\nunderstand your \"only 1 partition constraint\" comment.\n\nWe use partitioned tables extensively and we have observed linear\nperformance degradation on inserts as the number of rules on the master\ntable grows (i.e. number of rules = number of partitions). We had to come\nup with a solution that didn't have a rule per partition on the master\ntable. Just wondering if you are observing the same thing.\n\nSelects shouldn't be affected in the same way, theoretically, if you have\nconstraint_exclusion enabled.\n\nSteve\n\nOn 1/6/07, Colin Taylor <[email protected]> wrote:\nHi there, we've partioned a table (using 8.2) by day due to the 50TB of data (500k row size, 100G rows) we expect to store it in a year. \nOur performance on inserts and selects against the master table is disappointing, 10x slower (with ony 1 partition constraint) than we get by going to the partioned table directly.\n \nAre you implementing table partitioning as described at: http://developer.postgresql.org/pgdocs/postgres/ddl-partitioning.html ?\n \nIf yes, and if I understand your partitioning \"by day\" correctly, then you have one base/master table with 366 partitions (inherited/child tables). Do each of these partitions have check constraints and does your master table use rules to redirect inserts to the appropriate partition? I guess I don't understand your \"only 1 partition constraint\" comment.\n\n \nWe use partitioned tables extensively and we have observed linear performance degradation on inserts as the number of rules on the master table grows (i.e. number of rules = number of partitions). We had to come up with a solution that didn't have a rule per partition on the master table. Just wondering if you are observing the same thing.\n\n \nSelects shouldn't be affected in the same way, theoretically, if you have constraint_exclusion enabled.\n \nSteve",
"msg_date": "Mon, 8 Jan 2007 15:02:24 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
{
"msg_contents": "On 1/7/07, Colin Taylor <[email protected]> wrote:\n> Hi there, we've partioned a table (using 8.2) by day due to the 50TB of\n> data (500k row size, 100G rows) we expect to store it in a year.\n> Our performance on inserts and selects against the master table is\n> disappointing, 10x slower (with ony 1 partition constraint) than we get by\n> going to the partioned table directly. Browsing the list I get the\n> impression this just a case of too many partitions? would be better off\n> doing partitions of partitions ?\n>\n> Any other advice or pointers to more information with dealing with these\n> sorts of scales appreciated.\n\nas others have stated, something is not set up correctly. table\npartitioning with constraint exclusion should be considerably faster\nfor situations were the planner can optimize for it (select queries\nare case dependent, but inserts are not).\n\nalso, I would like to speak for everybody else here and ask for as\nmuch detail as possible about the hardware and software challenges you\nare solving :-) in particular, I am curious how you arrived at 500k\nrow size.\n\nmerlin\n",
"msg_date": "Tue, 9 Jan 2007 06:05:31 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
{
"msg_contents": "On Mon, 2007-01-08 at 15:02 -0500, Steven Flatt wrote:\n> On 1/6/07, Colin Taylor <[email protected]> wrote:\n> Hi there, we've partioned a table (using 8.2) by day due to\n> the 50TB of data (500k row size, 100G rows) we expect to store\n> it in a year. \n> Our performance on inserts and selects against the master\n> table is disappointing, 10x slower (with ony 1 partition\n> constraint) than we get by going to the partioned table\n> directly.\n> \n> Are you implementing table partitioning as described at:\n> http://developer.postgresql.org/pgdocs/postgres/ddl-partitioning.html ?\n> \n> If yes, and if I understand your partitioning \"by day\" correctly, then\n> you have one base/master table with 366 partitions (inherited/child\n> tables). Do each of these partitions have check constraints and does\n> your master table use rules to redirect inserts to the appropriate\n> partition? I guess I don't understand your \"only 1 partition\n> constraint\" comment.\n> \n> We use partitioned tables extensively and we have observed linear\n> performance degradation on inserts as the number of rules on the\n> master table grows (i.e. number of rules = number of partitions). We\n> had to come up with a solution that didn't have a rule per partition\n> on the master table. Just wondering if you are observing the same\n> thing.\n\nIf you are doing date range partitioning it should be fairly simple to\nload data into the latest table directly. That was the way I originally\nintended for it to be used. The rules approach isn't something I'd\nrecommend as a bulk loading option and its a lot more complex anyway.\n \n> Selects shouldn't be affected in the same way, theoretically, if you\n> have constraint_exclusion enabled.\n\nSelects can incur parsing overhead if there are a large number of\npartitions. That will be proportional to the number of partitions, at\npresent.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 09 Jan 2007 12:48:52 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
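A bare-bones illustration of what Simon describes, loading straight into the current day's child table and letting constraint exclusion prune the children at select time; the table, column names and file path are invented for the sketch:

-- one child per day, each with a tight CHECK constraint
CREATE TABLE events (captured timestamp NOT NULL, payload bytea);
CREATE TABLE events_20070107 (
    CHECK (captured >= '2007-01-07' AND captured < '2007-01-08')
) INHERITS (events);

-- bulk loads go straight to the child, bypassing rules/triggers on the parent
COPY events_20070107 FROM '/data/loads/2007-01-07.copy';

-- selects against the parent skip irrelevant children when
-- constraint_exclusion is enabled (postgresql.conf or per session)
SET constraint_exclusion = on;
EXPLAIN SELECT count(*) FROM events
WHERE captured >= '2007-01-07' AND captured < '2007-01-08';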
{
"msg_contents": "On Mon, Jan 08, 2007 at 03:02:24PM -0500, Steven Flatt wrote:\n> We use partitioned tables extensively and we have observed linear\n> performance degradation on inserts as the number of rules on the master\n> table grows (i.e. number of rules = number of partitions). We had to come\n> up with a solution that didn't have a rule per partition on the master\n> table. Just wondering if you are observing the same thing.\n\nExcept for the simplest partitioning cases, you'll be much better off\nusing a trigger on the parent table to direct inserts/updates/deletes to\nthe children. As a bonus, using a trigger makes it a lot more realistic\nto deal with an update moving data between partitions.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 14:24:42 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
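A minimal version of the trigger approach Jim mentions, reusing the invented events layout from the sketch above; note the hard-coded partition and column names, which is exactly the annoyance raised further down the thread, and real code would also need to create missing partitions or fail more gracefully:

CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- route each row to the daily child table covering its timestamp
    IF NEW.captured >= DATE '2007-01-07' AND NEW.captured < DATE '2007-01-08' THEN
        INSERT INTO events_20070107 (captured, payload) VALUES (NEW.captured, NEW.payload);
    ELSIF NEW.captured >= DATE '2007-01-06' AND NEW.captured < DATE '2007-01-07' THEN
        INSERT INTO events_20070106 (captured, payload) VALUES (NEW.captured, NEW.payload);
    ELSE
        RAISE EXCEPTION 'events: no partition for %', NEW.captured;
    END IF;
    RETURN NULL;  -- the row has been written to a child, so skip the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert_before
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();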
{
"msg_contents": "On 1/9/07, Simon Riggs <[email protected]> wrote:\n>\n> If you are doing date range partitioning it should be fairly simple to\n> load data into the latest table directly. That was the way I originally\n> intended for it to be used. The rules approach isn't something I'd\n> recommend as a bulk loading option and its a lot more complex anyway.\n>\nThe problem we have with blindly loading all data into the latest table is\nthat some data (< 5%, possibly even much less) is actually delivered \"late\"\nand belongs in earlier partitions. So we still needed the ability to send\ndata to an arbitrary partition.\n\nSteve\n\nOn 1/9/07, Simon Riggs <[email protected]> wrote:\nIf you are doing date range partitioning it should be fairly simple toload data into the latest table directly. That was the way I originally\nintended for it to be used. The rules approach isn't something I'drecommend as a bulk loading option and its a lot more complex anyway.\nThe problem we have with blindly loading all data into the latest table is that some data (< 5%, possibly even much less) is actually delivered \"late\" and belongs in earlier partitions. So we still needed the ability to send data to an arbitrary partition.\n\n \nSteve",
"msg_date": "Wed, 10 Jan 2007 16:00:00 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
{
"msg_contents": "On 1/10/07, Jim C. Nasby <[email protected]> wrote:\n>\n> Except for the simplest partitioning cases, you'll be much better off\n> using a trigger on the parent table to direct inserts/updates/deletes to\n> the children. As a bonus, using a trigger makes it a lot more realistic\n> to deal with an update moving data between partitions.\n\n\nIn our application, data is never moved between partitions.\n\nThe problem I found with triggers is the non-robustness of the PLpgSQL\nrecord data type. For example, in an \"on insert\" trigger, I can't determine\nthe fields of the NEW record unless I hard code the column names into the\ntrigger. This makes it hard to write a generic trigger, which I can use for\nall our partitioned tables. It would have been somewhat of a pain to write\na separate trigger for each of our partitioned tables.\n\nFor that and other reasons, we moved some of the insert logic up to the\napplication level in our product.\n\nSteve\n\nOn 1/10/07, Jim C. Nasby <[email protected]> wrote:\nExcept for the simplest partitioning cases, you'll be much better offusing a trigger on the parent table to direct inserts/updates/deletes to\nthe children. As a bonus, using a trigger makes it a lot more realisticto deal with an update moving data between partitions.\n \nIn our application, data is never moved between partitions.\n \nThe problem I found with triggers is the non-robustness of the PLpgSQL record data type. For example, in an \"on insert\" trigger, I can't determine the fields of the NEW record unless I hard code the column names into the trigger. This makes it hard to write a generic trigger, which I can use for all our partitioned tables. It would have been somewhat of a pain to write a separate trigger for each of our partitioned tables.\n\n \nFor that and other reasons, we moved some of the insert logic up to the application level in our product.\n \nSteve",
"msg_date": "Wed, 10 Jan 2007 16:39:06 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
{
"msg_contents": "On Wed, Jan 10, 2007 at 04:39:06PM -0500, Steven Flatt wrote:\n> On 1/10/07, Jim C. Nasby <[email protected]> wrote:\n> >\n> >Except for the simplest partitioning cases, you'll be much better off\n> >using a trigger on the parent table to direct inserts/updates/deletes to\n> >the children. As a bonus, using a trigger makes it a lot more realistic\n> >to deal with an update moving data between partitions.\n> \n> \n> In our application, data is never moved between partitions.\n> \n> The problem I found with triggers is the non-robustness of the PLpgSQL\n> record data type. For example, in an \"on insert\" trigger, I can't determine\n> the fields of the NEW record unless I hard code the column names into the\n> trigger. This makes it hard to write a generic trigger, which I can use for\n> all our partitioned tables. It would have been somewhat of a pain to write\n> a separate trigger for each of our partitioned tables.\n> \n> For that and other reasons, we moved some of the insert logic up to the\n> application level in our product.\n\nYeah, I think the key there would be to produce a function that wrote\nthe function for you.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 19:23:55 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
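A rough sketch of the "function that writes the function" idea, purely illustrative and not tested against the thread's schema: regenerate the partition-routing trigger function from a list of recent days, so no generic RECORD handling is needed. It assumes daily partitions named parent_YYYY_MM_DD and a date column passed in by name:

CREATE OR REPLACE FUNCTION rebuild_partition_trigger(parent text, datecol text, days int)
RETURNS void AS $$
DECLARE
    body text := '';
    d    date;
BEGIN
    -- build an IF/ELSIF ladder covering the last few days
    FOR i IN 0 .. days - 1 LOOP
        d := current_date - i;
        body := body
             || CASE WHEN i = 0 THEN 'IF ' ELSE 'ELSIF ' END
             || 'NEW.' || datecol || ' = DATE ' || quote_literal(d::text) || ' THEN '
             || 'INSERT INTO ' || parent || '_' || to_char(d, 'YYYY_MM_DD')
             || ' VALUES (NEW.*); ';
    END LOOP;
    body := body || 'ELSE RAISE EXCEPTION ''no partition for %'', NEW.'
                 || datecol || '; END IF; RETURN NULL;';

    -- (re)create the trigger function itself
    EXECUTE 'CREATE OR REPLACE FUNCTION ' || parent || '_ins_trig() RETURNS trigger AS $fn$ BEGIN '
         || body || ' END; $fn$ LANGUAGE plpgsql';
END;
$$ LANGUAGE plpgsql;

Run it once per partitioned table, and again whenever new partitions are added (for example from the same cron job that creates them); the trigger on the parent keeps pointing at parent_ins_trig(), so it never needs to be recreated.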
{
"msg_contents": "On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:\n> On 1/9/07, Simon Riggs <[email protected]> wrote: \n> If you are doing date range partitioning it should be fairly\n> simple to\n> load data into the latest table directly. That was the way I\n> originally \n> intended for it to be used. The rules approach isn't something\n> I'd\n> recommend as a bulk loading option and its a lot more complex\n> anyway.\n> The problem we have with blindly loading all data into the latest\n> table is that some data (< 5%, possibly even much less) is actually\n> delivered \"late\" and belongs in earlier partitions. So we still\n> needed the ability to send data to an arbitrary partition.\n\nYes, understand the problem.\n\nCOPY is always going to be faster than INSERTs anyhow and COPY doesn't\nallow views, nor utilise rules. You can set up a client-side program to\npre-qualify the data and feed it to multiple simultaneous COPY commands,\nas the best current way to handle this.\n\n--\nNext section aimed at pgsql-hackers, relates directly to above:\n\n\nMy longer term solution looks like this:\n\n1. load all data into newly created partition (optimised in a newly\nsubmitted patch for 8.3), then add the table as a new partition\n\n2. use a newly created, permanent \"errortable\" into which rows that\ndon't match constraints or have other formatting problems would be put.\nFollowing the COPY you would then run an INSERT SELECT to load the\nremaining rows from the errortable into their appropriate tables. The\nINSERT statement could target the parent table, so that rules to\ndistribute the rows would be applied appropriately. When all of those\nhave happened, drop the errortable. This would allow the database to\napply its constraints accurately without aborting the load when a\nconstraint error occurs.\n\nIn the use case you outline this would provide a fast path for 95% of\nthe data load, plus a straightforward mechanism for the remaining 5%.\n\nWe discussed this on hackers earlier, though we had difficulty with\nhandling unique constraint errors, so the idea was shelved. The\nerrortable part of the concept was sound however.\nhttp://archives.postgresql.org/pgsql-hackers/2005-11/msg01100.php\nJames William Pye had a similar proposal\nhttp://archives.postgresql.org/pgsql-hackers/2006-02/msg00120.php\n\nThe current TODO says\n\"Allow COPY to report error lines and continue \nThis requires the use of a savepoint before each COPY line is processed,\nwith ROLLBACK on COPY failure.\"\n\nIf we agreed that the TODO actually has two parts to it, each of which\nis separately implementable:\n1. load errors to a table (all errors apart from uniqueness violation)\n2. do something sensible with unique violation ERRORs\n\nIMHO part (1) can be implemented without Savepoints, which testing has\nshown (see James' results) would not be an acceptable solution for bulk\ndata loading. So (1) can be implemented fairly easily, whereas (2)\nremains an issue that we have no acceptable solution for, as yet.\n\nCan we agree to splitting the TODO into two parts? That way we stand a\nchance of getting at least some functionality in this important area.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jan 2007 12:15:50 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: table partioning performance"
},
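For the fast path in point (1), a sketch of what is already possible on 8.2, using the illustrative names from the earlier sketches rather than anything from the thread: COPY into a standalone table, then attach it to the partition set, so neither rules nor triggers fire during the bulk load:

-- load outside the inheritance tree
CREATE TABLE measurement_2007_01_12 (
    logdate   date    NOT NULL,
    value     numeric NOT NULL
);
COPY measurement_2007_01_12 FROM '/tmp/2007_01_12.dat';

-- add the partition constraint after loading, then attach the table
ALTER TABLE measurement_2007_01_12
    ADD CONSTRAINT measurement_2007_01_12_logdate_check
    CHECK (logdate = DATE '2007-01-12');
ALTER TABLE measurement_2007_01_12 INHERIT measurement;

The late-arriving few percent of rows can then be INSERTed through the parent table as usual, letting its rules or trigger route them to the older partitions.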
{
"msg_contents": "On Thu, Jan 11, 2007 at 12:15:50PM +0000, Simon Riggs wrote:\n> On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:\n> > On 1/9/07, Simon Riggs <[email protected]> wrote: \n> > If you are doing date range partitioning it should be fairly\n> > simple to\n> > load data into the latest table directly. That was the way I\n> > originally \n> > intended for it to be used. The rules approach isn't something\n> > I'd\n> > recommend as a bulk loading option and its a lot more complex\n> > anyway.\n> > The problem we have with blindly loading all data into the latest\n> > table is that some data (< 5%, possibly even much less) is actually\n> > delivered \"late\" and belongs in earlier partitions. So we still\n> > needed the ability to send data to an arbitrary partition.\n> \n> Yes, understand the problem.\n> \n> COPY is always going to be faster than INSERTs anyhow and COPY doesn't\n> allow views, nor utilise rules. You can set up a client-side program to\n> pre-qualify the data and feed it to multiple simultaneous COPY commands,\n> as the best current way to handle this.\n> \n> --\n> Next section aimed at pgsql-hackers, relates directly to above:\n\nI'm wondering if you see any issues with COPYing into a partitioned\ntable that's using triggers instead of rules to direct data to the\nappropriate tables?\n\nBTW, I think improved copy error handling would be great, and might\nperform better than triggers, once we have it...\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 11 Jan 2007 15:01:22 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] table partioning performance"
},
{
"msg_contents": "On Thu, 2007-01-11 at 15:01 -0600, Jim C. Nasby wrote:\n\n> I'm wondering if you see any issues with COPYing into a partitioned\n> table that's using triggers instead of rules to direct data to the\n> appropriate tables?\n\nThe data demographics usually guides you towards what to do.\n\nYou could COPY into the table that would receive most rows and use\nbefore triggers to INSERT into the other tables, rather than the main\none. I'd be surprised if that was very fast for an even distribution\nthough. It could well be faster if you have a small number of rows into\na large number of targets because that would be quicker than re-scanning\na temp table repeatedly just to extract a few rows each time.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2007 19:50:54 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] table partioning performance"
}
] |
[
{
"msg_contents": "I'd like to optimize my postgres configuration for optimal \nperformance under typical load. Unfortunately, as I understand \nthings, that implies that I have to have a way to repeat the same \nload each time I try out new settings, so that I can fairly compare. \nIt's difficult for me to drive the load all the way through my \napplication, so I'm envisioning some script like pqa that analyzes \nthe logged queries against the database during a typical load period, \nand then lets me replay them directly against the database with the \nsame original frequency.\n\nHas anybody done anything similar to this? Anything better? Did it \nend up being a waste of time, or was it helpful?\n",
"msg_date": "Sun, 7 Jan 2007 11:03:50 -0800",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "tweaking under repeatable load"
},
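One common starting point, sketched here as an assumption rather than a recommendation from this thread, is to capture a representative window of traffic by logging every statement together with its duration; that log is also the input the analysis and replay tooling mentioned in the reply below works from:

# postgresql.conf, during the capture window only
log_min_duration_statement = 0      # log every statement with its runtime
log_line_prefix = '%t [%p] '        # timestamp and backend PID, so statements can be
                                    # grouped per session and replayed with realistic pacing
log_connections = on
log_disconnections = on

Logging everything is itself a performance hit, so the window should be kept short and the settings reverted afterwards.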
{
"msg_contents": "Le dimanche 7 janvier 2007 20:03, Ben a écrit :\n> I'd like to optimize my postgres configuration for optimal\n> performance under typical load. Unfortunately, as I understand\n> things, that implies that I have to have a way to repeat the same\n> load each time I try out new settings, so that I can fairly compare.\n[...]\n>\n> Has anybody done anything similar to this? Anything better? Did it\n> end up being a waste of time, or was it helpful?\n\nPlease have a look at this:\n http://pgfouine.projects.postgresql.org/tsung.html\n-- \nDimitri Fontaine\nhttp://www.dalibo.com/",
"msg_date": "Mon, 8 Jan 2007 10:18:19 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tweaking under repeatable load"
}
] |
[
{
"msg_contents": "This is a query migrated from postgres. In postgres it runs about 10,000 times *slower* than on informix on somewhat newer hardware. The problem is entirely due to the planner. This PostgreSQL 8.1.4 on linux, 2 gigs of ram.\n\nThe table:\n Table \"reporting.bill_rpt_work\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n report_id | integer |\n client_id | character varying(10) |\n contract_id | integer | not null\n rate | numeric | not null\n appid | character varying(10) | not null\n userid | text | not null\n collection_id | integer | not null\n client_name | character varying(60) |\n use_sius | integer | not null\n is_subscribed | integer | not null\n hits | numeric | not null\n sius | numeric | not null\n total_amnt | numeric | not null\n royalty_total | numeric |\nIndexes:\n \"billrptw_ndx\" UNIQUE, btree (report_id, client_id, contract_id, rate, appid, userid, collection_id)\n \"billrpt_cntrct_ndx\" btree (report_id, contract_id, client_id)\n \"billrpt_collid_ndx\" btree (report_id, collection_id, client_id, contract_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (report_id) REFERENCES billing_reports(report_id)\n \"$2\" FOREIGN KEY (client_id) REFERENCES \"work\".clients(client_id)\n\n\nThe query:\nexplain analyze select\nw.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS IUs,\nsum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\nsum(w.sius) * w.rate AS BYIUS\nfrom bill_rpt_work w, billing_reports b\nwhere w.report_id in\n(select b.report_id from billing_reports where b.report_s_date = '2006-09-30')\nand (w.client_id = '227400001' or w.client_id = '2274000010')\ngroup by 1,2,3\norder by 1,2,3;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n------------------------------\n GroupAggregate (cost=298061335.44..298259321.62 rows=26 width=58) (actual time=372213.673..372213.689 rows=2 loops=1)\n -> Sort (cost=298061335.44..298083333.83 rows=8799354 width=58) (actual time=372213.489..372213.503 rows=37 loops=1)\n Sort Key: w.appid, w.rate, w.is_subscribed\n -> Nested Loop (cost=0.00..296121313.45 rows=8799354 width=58) (actual time=286628.486..372213.053 rows=37 loops=1)\n Join Filter: (subplan)\n -> Seq Scan on bill_rpt_work w (cost=0.00..85703.20 rows=11238 width=62) (actual time=1.239..1736.746 rows=61020 loops=1)\n Filter: (((client_id)::text = '227400001'::text) OR ((client_id)::text = '2274000010'::text))\n -> Seq Scan on billing_reports b (cost=0.00..29.66 rows=1566 width=8) (actual time=0.001..0.879 rows=1566 loops=61020)\n SubPlan\n -> Result (cost=0.00..29.66 rows=1566 width=0) (actual time=0.000..0.002 rows=1 loops=95557320)\n One-Time Filter: ($1 = '2006-09-30'::date)\n -> Seq Scan on billing_reports (cost=0.00..29.66 rows=1566 width=0) (actual time=0.001..0.863 rows=1565 loops=61020)\n Total runtime: 372214.085 ms\n\n\nInformix uses report id/client id as an index, thus eliminating a huge number of rows. The table has 2280545 rows currently; slightly fewer when the above analyze was run. 
Informix has about 5 times as much data.\n\nselect count(*) from bill_rpt_work where report_id in (select report_id from billing_reports where report_s_date = '2006-09-30') and (client_id = '227400001' or client_id = '2274000010');\n count\n-------\n 37\n(1 row)\n\nSo scanning everything seems particularly senseless.\n\nI had some success adding client id and report id to the initial select list, but that causes all sorts of problems in calling procedures that expect different data grouping.\n\nAny suggestion would be welcome because this is a horrible show stopper.\n\nThanks,\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n\n\n",
"msg_date": "Tue, 9 Jan 2007 03:55:56 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Horribly slow query/ sequential scan"
},
{
"msg_contents": "I don't think I understand the idea behind this query. Do you really need\nbilling_reports twice?\n\n> The query:\n> explain analyze select\n> w.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS IUs,\n> sum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\n> sum(w.sius) * w.rate AS BYIUS\n> from bill_rpt_work w, billing_reports b\n> where w.report_id in\n> (select b.report_id from billing_reports where b.report_s_date =\n> '2006-09-30')\n> and (w.client_id = '227400001' or w.client_id = '2274000010')\n> group by 1,2,3\n> order by 1,2,3;\n\nMaybe this is the query you want instead?\n\nselect w.appid,\n w.rate,\n w.is_subscribed,\n sum(w.hits) AS Hits,\n sum(w.sius) AS IUs,\n sum(w.total_amnt) AS Total,\n sum(w.hits) * w.rate AS ByHits,\n sum(w.sius) * w.rate AS BYIUS\n from bill_rpt_work w\n where w.report_id in\n (select b.report_id from billing_reports b where b.report_s_date =\n'2006-09-30')\n and (w.client_id = '227400001' or w.client_id = '2274000010')\ngroup by 1,2,3\norder by 1,2,3;\n\n/Dennis\n\n",
"msg_date": "Tue, 9 Jan 2007 13:35:36 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Horribly slow query/ sequential scan"
},
{
"msg_contents": "Voila ! You da man !\n\n& other expressions of awe and appreciation ...\n\nHAving burdened others with my foolishness too often, I hesitate to ask, but could someone either point me to a reference or explain what the difference might be ... I can see it with the eyes but I am having trouble understanding what Informix might have been doing to my (bad ?) SQL to \"fix\" the query. Seeing a redundancy and eliminating it ?\n\nThe explain analyze for \"db\"'s sql (slightly faster than Informix on an older Sun machine ... about 20%):\n GroupAggregate (cost=64.35..64.75 rows=8 width=58) (actual time=0.612..0.629 rows=2 loops=1)\n -> Sort (cost=64.35..64.37 rows=8 width=58) (actual time=0.463..0.476 rows=37 loops=1)\n Sort Key: w.appid, w.rate, w.is_subscribed\n -> Nested Loop (cost=8.11..64.23 rows=8 width=58) (actual time=0.130..0.211 rows=37 loops=1)\n Join Filter: (\"inner\".report_id = \"outer\".report_id)\n -> HashAggregate (cost=3.95..3.96 rows=1 width=4) (actual time=0.035..0.035 rows=1 loops=1)\n -> Index Scan using billrpt_sdate_ndx on billing_reports b (cost=0.00..3.94 rows=1 width=4) (actual time=0.021..0.023 rows=1 loops=1)\n Index Cond: (report_s_date = '2006-09-30'::date)\n -> Bitmap Heap Scan on bill_rpt_work w (cost=4.17..59.92 rows=28 width=62) (actual time=0.084..0.111 rows=37 loops=1)\n Recheck Cond: (((w.report_id = \"outer\".report_id) AND ((w.client_id)::text = '227400001'::text)) OR ((w.report_id = \"outer\".report_id) AND ((w.client_id)::text = '2274000010'::text)))\n -> BitmapOr (cost=4.17..4.17 rows=28 width=0) (actual time=0.078..0.078 rows=0 loops=1)\n -> Bitmap Index Scan on billrptw_ndx (cost=0.00..2.08 rows=14 width=0) (actual time=0.053..0.053 rows=22 loops=1)\n Index Cond: ((w.report_id = \"outer\".report_id) AND ((w.client_id)::text = '227400001'::text))\n -> Bitmap Index Scan on billrptw_ndx (cost=0.00..2.08 rows=14 width=0) (actual time=0.024..0.024 rows=15 loops=1)\n Index Cond: ((w.report_id = \"outer\".report_id) AND ((w.client_id)::text = '2274000010'::text))\n Total runtime: 6.110 ms\n(16 rows)\n\nThanks again (and sorry for the top-posting but this particular interface is ungainly)\n\nG\n\n-----Original Message-----\nFrom:\[email protected] [mailto:[email protected]]\nSent:\tTue 1/9/2007 4:35 AM\nTo:\tGregory S. Williamson\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Horribly slow query/ sequential scan\n\nI don't think I understand the idea behind this query. 
Do you really need\nbilling_reports twice?\n\n> The query:\n> explain analyze select\n> w.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS IUs,\n> sum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\n> sum(w.sius) * w.rate AS BYIUS\n> from bill_rpt_work w, billing_reports b\n> where w.report_id in\n> (select b.report_id from billing_reports where b.report_s_date =\n> '2006-09-30')\n> and (w.client_id = '227400001' or w.client_id = '2274000010')\n> group by 1,2,3\n> order by 1,2,3;\n\nMaybe this is the query you want instead?\n\nselect w.appid,\n w.rate,\n w.is_subscribed,\n sum(w.hits) AS Hits,\n sum(w.sius) AS IUs,\n sum(w.total_amnt) AS Total,\n sum(w.hits) * w.rate AS ByHits,\n sum(w.sius) * w.rate AS BYIUS\n from bill_rpt_work w\n where w.report_id in\n (select b.report_id from billing_reports b where b.report_s_date =\n'2006-09-30')\n and (w.client_id = '227400001' or w.client_id = '2274000010')\ngroup by 1,2,3\norder by 1,2,3;\n\n/Dennis\n",
"msg_date": "Tue, 9 Jan 2007 04:56:43 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Horribly slow query/ sequential scan"
},
{
"msg_contents": "\"Gregory S. Williamson\" <[email protected]> writes:\n> HAving burdened others with my foolishness too often, I hesitate to\n> ask, but could someone either point me to a reference or explain what\n> the difference might be ... I can see it with the eyes but I am having\n> trouble understanding what Informix might have been doing to my (bad\n> ?) SQL to \"fix\" the query.\n\nMe too. Does informix have anything EXPLAIN-like to show what it's\ndoing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Jan 2007 10:13:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Horribly slow query/ sequential scan "
}
] |
[
{
"msg_contents": "Forget abount \"IN\". Its horribly slow.\n\ntry :\n\nselect w.appid,\n w.rate,\n w.is_subscribed,\n sum(w.hits) AS Hits,\n sum(w.sius) AS IUs,\n sum(w.total_amnt) AS Total,\n sum(w.hits) * w.rate AS ByHits,\n sum(w.sius) * w.rate AS BYIUS\n from bill_rpt_work w\n where (select b.report_id from billing_reports b where b.report_s_date = '2006-09-30' and w.report_id = b.report_id)\n and w.client_id IN ('227400001','2274000010')\ngroup by 1,2,3\norder by 1,2,3;\n\n\n\nshould by faster; \n\nassuming : index on report_id in b; index on report_id, client_id in w\n\nto enforce useage of indexes on grouping (depends on result size), consider extending w with cols 1,2,3.\n\n\nregards, \nmarcus\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected]\n[mailto:[email protected]]Im Auftrag von\[email protected]\nGesendet: Dienstag, 9. Januar 2007 13:36\nAn: Gregory S. Williamson\nCc: [email protected]\nBetreff: Re: [PERFORM] Horribly slow query/ sequential scan\n\n\nI don't think I understand the idea behind this query. Do you really need\nbilling_reports twice?\n\n> The query:\n> explain analyze select\n> w.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS IUs,\n> sum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\n> sum(w.sius) * w.rate AS BYIUS\n> from bill_rpt_work w, billing_reports b\n> where w.report_id in\n> (select b.report_id from billing_reports where b.report_s_date =\n> '2006-09-30')\n> and (w.client_id = '227400001' or w.client_id = '2274000010')\n> group by 1,2,3\n> order by 1,2,3;\n\nMaybe this is the query you want instead?\n\nselect w.appid,\n w.rate,\n w.is_subscribed,\n sum(w.hits) AS Hits,\n sum(w.sius) AS IUs,\n sum(w.total_amnt) AS Total,\n sum(w.hits) * w.rate AS ByHits,\n sum(w.sius) * w.rate AS BYIUS\n from bill_rpt_work w\n where w.report_id in\n (select b.report_id from billing_reports b where b.report_s_date =\n'2006-09-30')\n and (w.client_id = '227400001' or w.client_id = '2274000010')\ngroup by 1,2,3\norder by 1,2,3;\n\n/Dennis\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 9 Jan 2007 13:50:47 +0100",
"msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Horribly slow query/ sequential scan"
},
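As written, the subquery above is used directly as a boolean predicate, which PostgreSQL will reject; the correlated form apparently intended here would be spelled with EXISTS, roughly as follows (a sketch against the tables already shown in this thread):

select w.appid,
       w.rate,
       w.is_subscribed,
       sum(w.hits) AS Hits,
       sum(w.sius) AS IUs,
       sum(w.total_amnt) AS Total,
       sum(w.hits) * w.rate AS ByHits,
       sum(w.sius) * w.rate AS BYIUS
  from bill_rpt_work w
 where exists (select 1
                 from billing_reports b
                where b.report_s_date = '2006-09-30'
                  and b.report_id = w.report_id)
   and w.client_id in ('227400001', '2274000010')
 group by 1, 2, 3
 order by 1, 2, 3;

As noted in the follow-ups, IN (...) over an uncorrelated subquery is no longer slow on 8.x, so this is a stylistic alternative rather than a required rewrite.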
{
"msg_contents": "Thanks for the suggestion ... I will try it when I've had some sleep and the server is quiet again ... the IN seems to have improved markedly since the 7.4 release, as advertised, so I will be interested in trying this.\n\nGSW\n\n-----Original Message-----\nFrom:\tNörder-Tuitje, Marcus [mailto:[email protected]]\nSent:\tTue 1/9/2007 4:50 AM\nTo:\[email protected]; Gregory S. Williamson\nCc:\[email protected]\nSubject:\tAW: [PERFORM] Horribly slow query/ sequential scan\n\nForget abount \"IN\". Its horribly slow.\n\ntry :\n\nselect w.appid,\n w.rate,\n w.is_subscribed,\n sum(w.hits) AS Hits,\n sum(w.sius) AS IUs,\n sum(w.total_amnt) AS Total,\n sum(w.hits) * w.rate AS ByHits,\n sum(w.sius) * w.rate AS BYIUS\n from bill_rpt_work w\n where (select b.report_id from billing_reports b where b.report_s_date = '2006-09-30' and w.report_id = b.report_id)\n and w.client_id IN ('227400001','2274000010')\ngroup by 1,2,3\norder by 1,2,3;\n\n\n\nshould by faster; \n\nassuming : index on report_id in b; index on report_id, client_id in w\n\nto enforce useage of indexes on grouping (depends on result size), consider extending w with cols 1,2,3.\n\n\nregards, \nmarcus\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected]\n[mailto:[email protected]]Im Auftrag von\[email protected]\nGesendet: Dienstag, 9. Januar 2007 13:36\nAn: Gregory S. Williamson\nCc: [email protected]\nBetreff: Re: [PERFORM] Horribly slow query/ sequential scan\n\n\nI don't think I understand the idea behind this query. Do you really need\nbilling_reports twice?\n\n> The query:\n> explain analyze select\n> w.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS IUs,\n> sum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\n> sum(w.sius) * w.rate AS BYIUS\n> from bill_rpt_work w, billing_reports b\n> where w.report_id in\n> (select b.report_id from billing_reports where b.report_s_date =\n> '2006-09-30')\n> and (w.client_id = '227400001' or w.client_id = '2274000010')\n> group by 1,2,3\n> order by 1,2,3;\n\nMaybe this is the query you want instead?\n\nselect w.appid,\n w.rate,\n w.is_subscribed,\n sum(w.hits) AS Hits,\n sum(w.sius) AS IUs,\n sum(w.total_amnt) AS Total,\n sum(w.hits) * w.rate AS ByHits,\n sum(w.sius) * w.rate AS BYIUS\n from bill_rpt_work w\n where w.report_id in\n (select b.report_id from billing_reports b where b.report_s_date =\n'2006-09-30')\n and (w.client_id = '227400001' or w.client_id = '2274000010')\ngroup by 1,2,3\norder by 1,2,3;\n\n/Dennis\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n\n\n\n-------------------------------------------------------\nClick link below if it is SPAM [email protected]\n\"https://mailscanner.globexplorer.com/dspam/dspam.cgi?signatureID=45a38ea050372117817174&[email protected]&retrain=spam&template=history&history_page=1\"\n!DSPAM:45a38ea050372117817174!\n-------------------------------------------------------\n\n\n\n\n\n",
"msg_date": "Tue, 9 Jan 2007 04:59:21 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Horribly slow query/ sequential scan"
},
{
"msg_contents": "\nOn 9-Jan-07, at 7:50 AM, N�rder-Tuitje, Marcus wrote:\n\n> Forget abount \"IN\". Its horribly slow.\n\nI think that statement above was historically correct, but is now \nincorrect. IN has been optimized quite significantly since 7.4\n\nDave\n>\n> try :\n>\n> select w.appid,\n> w.rate,\n> w.is_subscribed,\n> sum(w.hits) AS Hits,\n> sum(w.sius) AS IUs,\n> sum(w.total_amnt) AS Total,\n> sum(w.hits) * w.rate AS ByHits,\n> sum(w.sius) * w.rate AS BYIUS\n> from bill_rpt_work w\n> where (select b.report_id from billing_reports b where \n> b.report_s_date = '2006-09-30' and w.report_id = b.report_id)\n> and w.client_id IN ('227400001','2274000010')\n> group by 1,2,3\n> order by 1,2,3;\n>\n>\n>\n> should by faster;\n>\n> assuming : index on report_id in b; index on report_id, client_id in w\n>\n> to enforce useage of indexes on grouping (depends on result size), \n> consider extending w with cols 1,2,3.\n>\n>\n> regards,\n> marcus\n>\n> -----Urspr�ngliche Nachricht-----\n> Von: [email protected]\n> [mailto:[email protected]]Im Auftrag von\n> [email protected]\n> Gesendet: Dienstag, 9. Januar 2007 13:36\n> An: Gregory S. Williamson\n> Cc: [email protected]\n> Betreff: Re: [PERFORM] Horribly slow query/ sequential scan\n>\n>\n> I don't think I understand the idea behind this query. Do you \n> really need\n> billing_reports twice?\n>\n>> The query:\n>> explain analyze select\n>> w.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS \n>> IUs,\n>> sum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\n>> sum(w.sius) * w.rate AS BYIUS\n>> from bill_rpt_work w, billing_reports b\n>> where w.report_id in\n>> (select b.report_id from billing_reports where b.report_s_date =\n>> '2006-09-30')\n>> and (w.client_id = '227400001' or w.client_id = '2274000010')\n>> group by 1,2,3\n>> order by 1,2,3;\n>\n> Maybe this is the query you want instead?\n>\n> select w.appid,\n> w.rate,\n> w.is_subscribed,\n> sum(w.hits) AS Hits,\n> sum(w.sius) AS IUs,\n> sum(w.total_amnt) AS Total,\n> sum(w.hits) * w.rate AS ByHits,\n> sum(w.sius) * w.rate AS BYIUS\n> from bill_rpt_work w\n> where w.report_id in\n> (select b.report_id from billing_reports b where \n> b.report_s_date =\n> '2006-09-30')\n> and (w.client_id = '227400001' or w.client_id = '2274000010')\n> group by 1,2,3\n> order by 1,2,3;\n>\n> /Dennis\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Tue, 9 Jan 2007 09:16:57 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Horribly slow query/ sequential scan"
}
] |
[
{
"msg_contents": "GFS2, OFCS2, lustre, CXFS, GPFS, Veritas and what else there is..\n\n..has someone experience with any of those? Is it bearable to run PG on \nthem from a performance point of view? I guess not, but any positive \nreports?\n\nThanks\n\n-- \nRegards,\nHannes Dorbath\n",
"msg_date": "Tue, 09 Jan 2007 15:15:26 +0100",
"msg_from": "Hannes Dorbath <[email protected]>",
"msg_from_op": true,
"msg_subject": "Running PG on cluster files systems"
},
{
"msg_contents": "Hi,\n\nOn Tue, 2007-01-09 at 15:15 +0100, Hannes Dorbath wrote:\n> GFS2, OFCS2, lustre, CXFS, GPFS, Veritas and what else there is..\n> \n> ..has someone experience with any of those? Is it bearable to run PG\n> on them from a performance point of view? I guess not, but any\n> positive reports?\n\nI have tested it on GFS (v1) and lustre. On Lustre, I saw a bit\nperformance problem, but I think we could bear it thinking the\nadvantages of Lustre.\n\nOTOH, RHEL AS + RHEL Cluster Suite + LVM + GFS combination worked very\nwell, as compared to extX. I don't have benchmarks handy, BTW.\n\nRegards,\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, ODBCng - http://www.commandprompt.com/",
"msg_date": "Tue, 09 Jan 2007 16:16:58 +0200",
"msg_from": "Devrim GUNDUZ <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running PG on cluster files systems"
}
] |
[
{
"msg_contents": " Yes it does:\n\nSET EXPLAIN ON;\n\nIt writes the file to sqexplain.out\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent: Tuesday, January 09, 2007 9:13 AM\nTo: Gregory S. Williamson\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Horribly slow query/ sequential scan \n\n\"Gregory S. Williamson\" <[email protected]> writes:\n> HAving burdened others with my foolishness too often, I hesitate to\n> ask, but could someone either point me to a reference or explain what\n> the difference might be ... I can see it with the eyes but I am having\n> trouble understanding what Informix might have been doing to my (bad\n> ?) SQL to \"fix\" the query.\n\nMe too. Does informix have anything EXPLAIN-like to show what it's\ndoing?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n",
"msg_date": "Tue, 9 Jan 2007 09:36:46 -0600",
"msg_from": "\"Plugge, Joe R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Horribly slow query/ sequential scan "
},
{
"msg_contents": "As Joe indicated, there is indeed an Informix explain, appended below my signature ... \n\nThis table has 5565862 total rows, and 37 target rows. So about twice the total data, but all of the \"extra\" data in infomrix is much older.\n\nThanks for the help, one and all!\n\nGreg W.\n\nQUERY:\n------\nSELECT collection_id,client_id,client_name,appid,SUM(hits),SUM(sius),SUM(royalty_total)\nFROM bill_rpt_work WHERE report_id in\n (SELECT report_id FROM billing_reports WHERE report_s_date = '2004-09-10')\n GROUP BY collection_id, client_id,client_name,appid\n ORDER BY collection_id,client_id,appid\n\nEstimated Cost: 2015\nEstimated # of Rows Returned: 481\nTemporary Files Required For: Order By Group By\n\n 1) informix.bill_rpt_work: INDEX PATH\n\n (1) Index Keys: report_id (Serial, fragments: ALL)\n Lower Index Filter: informix.bill_rpt_work.report_id = ANY <subquery> \n\n Subquery:\n ---------\n Estimated Cost: 44\n Estimated # of Rows Returned: 1\n\n 1) informix.billing_reports: SEQUENTIAL SCAN\n\n Filters: informix.billing_reports.report_s_date = datetime(2004-09-10) year to day \n\n\n\nQUERY:\n------\nselect count(*) from informix.systables;\n\nEstimated Cost: 1\nEstimated # of Rows Returned: 1\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: (count)\n\n\nQUERY:\n------\n select tabname , tabid , owner from informix . systables where tabname != 'ANSI' and tabtype != 'P' order by tabname\n\nEstimated Cost: 30\nEstimated # of Rows Returned: 196\n\n 1) informix.systables: INDEX PATH\n\n Filters: informix.systables.tabtype != 'P' \n\n (1) Index Keys: tabname owner (Key-First)\n Key-First Filters: (informix.systables.tabname != 'ANSI' )\n\n\nQUERY:\n------\nselect tabid, tabtype, tabname, owner from informix.systables where (tabname = ? and owner like ?)\n\nEstimated Cost: 2\nEstimated # of Rows Returned: 1\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner (Key-First)\n Lower Index Filter: informix.systables.tabname = 'bill_rpt_work' \n Key-First Filters: (informix.systables.owner LIKE '%' )\n\n\nQUERY:\n------\nselect count(*) from \t\t informix.systables where tabname = 'sysindices';\n\nEstimated Cost: 2\nEstimated # of Rows Returned: 1\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner \n Lower Index Filter: informix.systables.tabname = 'sysindices' \n\n\nQUERY:\n------\nselect colno, colname, coltype, collength, informix.syscolumns.extended_id, name from informix.syscolumns, informix.systables, outer informix.sysxtdtypes where informix.syscolumns.tabid = informix.systables.tabid and informix.syscolumns.extended_id = informix.sysxtdtypes.extended_id and tabname = ? and informix.systables.owner = ? order by informix.syscolumns.colno;\n\nEstimated Cost: 9\nEstimated # of Rows Returned: 7\nTemporary Files Required For: Order By \n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner \n Lower Index Filter: (informix.systables.tabname = 'bill_rpt_work' AND informix.systables.owner = 'informix ' ) \n\n 2) informix.syscolumns: INDEX PATH\n\n (1) Index Keys: tabid colno \n Lower Index Filter: informix.syscolumns.tabid = informix.systables.tabid \nNESTED LOOP JOIN\n\n 3) informix.sysxtdtypes: INDEX PATH\n\n (1) Index Keys: extended_id \n Lower Index Filter: informix.syscolumns.extended_id = informix.sysxtdtypes.extended_id \nNESTED LOOP JOIN\n\n\nQUERY:\n------\nselect tabid, tabtype, tabname, owner from informix.systables where (tabname = ? 
and owner like ?)\n\nEstimated Cost: 2\nEstimated # of Rows Returned: 1\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner (Key-First)\n Lower Index Filter: informix.systables.tabname = 'bill_rpt_work' \n Key-First Filters: (informix.systables.owner LIKE '%' )\n\n\nQUERY:\n------\nselect count(*) from \t\t informix.systables where tabname = 'sysindices';\n\nEstimated Cost: 2\nEstimated # of Rows Returned: 1\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner \n Lower Index Filter: informix.systables.tabname = 'sysindices' \n\n\nQUERY:\n------\nselect idxtype, clustered,idxname, informix.sysindices.owner, indexkeys::lvarchar, amid, am_name from informix.sysindices, informix.systables, informix.sysams where informix.systables.tabname = ? and informix.systables.tabid = informix.sysindices.tabid and informix.systables.owner like ? and informix.sysindices.amid = informix.sysams.am_id;\n\nEstimated Cost: 6\nEstimated # of Rows Returned: 2\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner \n Lower Index Filter: (informix.systables.tabname = 'bill_rpt_work' AND informix.systables.owner = 'informix' ) \n\n 2) informix.sysindices: INDEX PATH\n\n (1) Index Keys: tabid \n Lower Index Filter: informix.systables.tabid = informix.sysindices.tabid \nNESTED LOOP JOIN\n\n 3) informix.sysams: INDEX PATH\n\n (1) Index Keys: am_id \n Lower Index Filter: informix.sysindices.amid = informix.sysams.am_id \nNESTED LOOP JOIN\n\nUDRs in query:\n--------------\n UDR id :\t1\n UDR name:\tindexkeyarray_out\n\nQUERY:\n------\nselect tabid, tabtype, tabname, owner from informix.systables where (tabname = ? and owner like ?)\n\nEstimated Cost: 2\nEstimated # of Rows Returned: 1\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner (Key-First)\n Lower Index Filter: informix.systables.tabname = 'bill_rpt_work' \n Key-First Filters: (informix.systables.owner LIKE '%' )\n\n\nQUERY:\n------\nselect count(*) from \t\t informix.systables where tabname = 'sysindices';\n\nEstimated Cost: 2\nEstimated # of Rows Returned: 1\n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner \n Lower Index Filter: informix.systables.tabname = 'sysindices' \n\n\nQUERY:\n------\nselect colno, colname, coltype, collength, informix.syscolumns.extended_id, name from informix.syscolumns, informix.systables, outer informix.sysxtdtypes where informix.syscolumns.tabid = informix.systables.tabid and informix.syscolumns.extended_id = informix.sysxtdtypes.extended_id and tabname = ? and informix.systables.owner = ? 
order by informix.syscolumns.colno;\n\nEstimated Cost: 9\nEstimated # of Rows Returned: 7\nTemporary Files Required For: Order By \n\n 1) informix.systables: INDEX PATH\n\n (1) Index Keys: tabname owner \n Lower Index Filter: (informix.systables.tabname = 'bill_rpt_work' AND informix.systables.owner = 'informix ' ) \n\n 2) informix.syscolumns: INDEX PATH\n\n (1) Index Keys: tabid colno \n Lower Index Filter: informix.syscolumns.tabid = informix.systables.tabid \nNESTED LOOP JOIN\n\n 3) informix.sysxtdtypes: INDEX PATH\n\n (1) Index Keys: extended_id \n Lower Index Filter: informix.syscolumns.extended_id = informix.sysxtdtypes.extended_id \nNESTED LOOP JOIN\n\n\n\nQUERY:\n------\nselect\nw.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS IUs,\nsum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\nsum(w.sius) * w.rate AS BYIUS\nfrom bill_rpt_work w, billing_reports b\nwhere w.report_id in\n(select b.report_id from billing_reports where b.report_s_date = '2006-09-30')\nand (w.client_id = '227400001' or w.client_id = '2274000010')\ngroup by 1,2,3\norder by 1,2,3\n\nEstimated Cost: 3149\nEstimated # of Rows Returned: 1\nTemporary Files Required For: Order By Group By\n\n 1) informix.b: INDEX PATH\n\n (1) Index Keys: report_s_date (Serial, fragments: ALL)\n Lower Index Filter: informix.b.report_s_date = datetime(2006-09-30) year to day \n\n 2) informix.w: INDEX PATH\n\n Filters: (informix.w.client_id = '227400001' OR informix.w.client_id = '2274000010' ) \n\n (1) Index Keys: report_id (Serial, fragments: ALL)\n Lower Index Filter: informix.w.report_id = informix.b.report_id \nNESTED LOOP JOIN\n\n 3) informix.billing_reports: SEQUENTIAL SCAN (First Row)\nNESTED LOOP JOIN (Semi Join)\n\n\n\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Plugge, Joe R.\nSent:\tTue 1/9/2007 7:36 AM\nTo:\[email protected]\nCc:\t\nSubject:\tRe: [PERFORM] Horribly slow query/ sequential scan \n\n Yes it does:\n\nSET EXPLAIN ON;\n\nIt writes the file to sqexplain.out\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent: Tuesday, January 09, 2007 9:13 AM\nTo: Gregory S. Williamson\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Horribly slow query/ sequential scan \n\n\"Gregory S. Williamson\" <[email protected]> writes:\n> HAving burdened others with my foolishness too often, I hesitate to\n> ask, but could someone either point me to a reference or explain what\n> the difference might be ... I can see it with the eyes but I am having\n> trouble understanding what Informix might have been doing to my (bad\n> ?) SQL to \"fix\" the query.\n\nMe too. Does informix have anything EXPLAIN-like to show what it's\ndoing?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n-------------------------------------------------------\nClick link below if it is SPAM [email protected]\n\"https://mailscanner.globexplorer.com/dspam/dspam.cgi?signatureID=45a3b93d75271019119885&[email protected]&retrain=spam&template=history&history_page=1\"\n!DSPAM:45a3b93d75271019119885!\n-------------------------------------------------------\n\n\n\n\n\n",
"msg_date": "Tue, 9 Jan 2007 15:40:51 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Horribly slow query/ sequential scan "
},
{
"msg_contents": "\"Gregory S. Williamson\" <[email protected]> writes:\n> As Joe indicated, there is indeed an Informix explain, appended below my signature ... \n\n> select\n> w.appid,w.rate,w.is_subscribed,sum(w.hits) AS Hits ,sum(w.sius) AS IUs,\n> sum(w.total_amnt) AS Total,sum(w.hits) * w.rate AS ByHits,\n> sum(w.sius) * w.rate AS BYIUS\n> from bill_rpt_work w, billing_reports b\n> where w.report_id in\n> (select b.report_id from billing_reports where b.report_s_date = '2006-09-30')\n> and (w.client_id = '227400001' or w.client_id = '2274000010')\n> group by 1,2,3\n> order by 1,2,3\n\n> Estimated Cost: 3149\n> Estimated # of Rows Returned: 1\n> Temporary Files Required For: Order By Group By\n\n> 1) informix.b: INDEX PATH\n\n> (1) Index Keys: report_s_date (Serial, fragments: ALL)\n> Lower Index Filter: informix.b.report_s_date = datetime(2006-09-30) year to day \n\n> 2) informix.w: INDEX PATH\n\n> Filters: (informix.w.client_id = '227400001' OR informix.w.client_id = '2274000010' ) \n\n> (1) Index Keys: report_id (Serial, fragments: ALL)\n> Lower Index Filter: informix.w.report_id = informix.b.report_id \n> NESTED LOOP JOIN\n\n> 3) informix.billing_reports: SEQUENTIAL SCAN (First Row)\n> NESTED LOOP JOIN (Semi Join)\n\nInteresting! \"Semi join\" is the two-dollar technical term for what our\ncode calls an \"IN join\", viz a join that returns at most one copy of a\nleft-hand row even when there's more than one right-hand join candidate\nfor it. So I think there's not any execution mechanism here that we\ndon't have. What seems to be happening is that Informix is willing to\nflatten the sub-SELECT into an IN join even though the sub-SELECT is\ncorrelated to the outer query (that is, it contains outer references).\nI'm not sure whether we're just being paranoid by not doing that, or\nwhether there are special conditions to check before allowing it, or\nwhether Informix is wrong ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 00:55:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Horribly slow query/ sequential scan "
},
{
"msg_contents": "I wrote:\n> ... What seems to be happening is that Informix is willing to\n> flatten the sub-SELECT into an IN join even though the sub-SELECT is\n> correlated to the outer query (that is, it contains outer references).\n\nI did some googling this morning and found confirmation that recent\nversions of Informix have pretty extensive support for optimizing\ncorrelated subqueries:\nhttp://www.iiug.org/waiug/archive/iugnew83/FeaturesIDS73.htm\n\nThis is something we've not really spent much time on for Postgres,\nbut it might be interesting to look at someday. Given that the problem\nwith your query was really a mistake anyway, I'm not sure that your\nexample is compelling evidence for making it a high priority.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 09:53:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Horribly slow query/ sequential scan "
}
] |
[
{
"msg_contents": "I am developing an application that has very predictable database\noperations:\n -inserts several thousand rows into 3 tables every 5 minutes. (table\n contain around 10 million rows each)\n -truncates and rebuilds aggregate tables of this data every 5 minutes.\n (several thousand rows each)\n -regular reads of aggregate table and sometimes large tables by user\n interaction\n -every night, hundreds of thousands of rows are deleted from these 3\n tables (old data)\n -20-30 other tables get inserted/updated slowly throughout the day\n\nIn order to optimize performance of the inserts, I disabled\nautovacuum/row-level stats and instead run \"vacuum analyze\" on the whole\nDB every hour. However this operation takes around 20 minutes of each\nhour. This means that the database is involved in vacuum/analyzing\ntables 33% of the time.\n\nI'd like any performance advice, but my main concern is the amount of\ntime vacuum/analyze runs and its possible impact on the overall DB\nperformance. Thanks!\n\n\nI am running 8.2 (will be 8.2.1 soon). The box is Windows with 2GB RAM\nconnected to a SAN over fiber. The data and pg_xlog are on separate\npartitions. \n\nModified configuration:\neffective_cache_size = 1000MB\nrandom_page_cost = 3\ndefault_statistics_target = 50\nmaintenance_work_mem = 256MB\nshared_buffers = 400MB\ntemp_buffers = 10MB\nwork_mem = 10MB\nmax_fsm_pages = 1500000\ncheckpoint_segments = 30\nstats_row_level = off\nstats_start_collector = off\n",
"msg_date": "Tue, 09 Jan 2007 12:26:41 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "High inserts, bulk deletes - autovacuum vs scheduled vacuum"
},
{
"msg_contents": "* Jeremy Haile:\n\n> I'd like any performance advice, but my main concern is the amount of\n> time vacuum/analyze runs and its possible impact on the overall DB\n> performance. Thanks!\n\nYou could partition your data tables by date and discard old data\nsimply by dropping the tables. This is far more effective than\nvacuuming, but obviously, this approach cannot be used in all cases\n(e.g. if you need more dynamic expiry rules).\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 09 Jan 2007 19:02:25 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled vacuum"
},
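A sketch of the drop-to-expire idea (table and column names are invented for illustration; the thread does not show the real schema): keep one child table per day under an inherited parent, so expiry becomes a cheap catalog operation instead of a bulk DELETE followed by VACUUM:

CREATE TABLE hits (
    hit_time  timestamptz NOT NULL,
    url       text        NOT NULL
);

-- one child per day
CREATE TABLE hits_2007_01_10 (
    CHECK (hit_time >= '2007-01-10' AND hit_time < '2007-01-11')
) INHERITS (hits);

-- nightly expiry: no dead tuples are created, so nothing needs vacuuming
DROP TABLE hits_2006_12_10;

Queries that span the whole range still work against the parent table; with constraint_exclusion on, date-restricted queries only touch the relevant children.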
{
"msg_contents": "Good advice on the partitioning idea. I may have to restructure some of\nmy queries, since some of them query across the whole range - but it may\nbe a much more performant solution. How is the performance when\nquerying across a set of partitioned tables vs. querying on a single\ntable with all rows? This may be a long term idea I could tackle, but\nis probably not feasible for my current time-frame. \n\nDoes my current approach of disabling autovacuum and manually vacuuming\nonce-an-hour sound like a good idea, or would I likely have better\nresults by auto-vacuuming and turning row-level stats back on?\n\n\nOn Tue, 09 Jan 2007 19:02:25 +0100, \"Florian Weimer\" <[email protected]>\nsaid:\n> * Jeremy Haile:\n> \n> > I'd like any performance advice, but my main concern is the amount of\n> > time vacuum/analyze runs and its possible impact on the overall DB\n> > performance. Thanks!\n> \n> You could partition your data tables by date and discard old data\n> simply by dropping the tables. This is far more effective than\n> vacuuming, but obviously, this approach cannot be used in all cases\n> (e.g. if you need more dynamic expiry rules).\n> \n> -- \n> Florian Weimer <[email protected]>\n> BFK edv-consulting GmbH http://www.bfk.de/\n> Kriegsstraße 100 tel: +49-721-96201-1\n> D-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 09 Jan 2007 15:14:28 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled"
},
{
"msg_contents": "* Jeremy Haile:\n\n> Good advice on the partitioning idea. I may have to restructure some of\n> my queries, since some of them query across the whole range - but it may\n> be a much more performant solution. How is the performance when\n> querying across a set of partitioned tables vs. querying on a single\n> table with all rows?\n\nLocality of access decreases, of course, and depending on your data\nsize, you hit something like to 2 or 4 additional disk seeks per\npartition for index-based accesses. Sequential scans are not\nimpacted.\n\n> Does my current approach of disabling autovacuum and manually vacuuming\n> once-an-hour sound like a good idea, or would I likely have better\n> results by auto-vacuuming and turning row-level stats back on?\n\nSorry, I haven't got much experience with autovacuum, since most of\nother databases are INSERT-only (or get VACUUMed automatically after\nmajor updates).\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Wed, 10 Jan 2007 10:33:53 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled vacuum"
},
{
"msg_contents": "On Tue, Jan 09, 2007 at 12:26:41PM -0500, Jeremy Haile wrote:\n> I am developing an application that has very predictable database\n> operations:\n> -inserts several thousand rows into 3 tables every 5 minutes. (table\n> contain around 10 million rows each)\n> -truncates and rebuilds aggregate tables of this data every 5 minutes.\n> (several thousand rows each)\n> -regular reads of aggregate table and sometimes large tables by user\n> interaction\n> -every night, hundreds of thousands of rows are deleted from these 3\n> tables (old data)\n> -20-30 other tables get inserted/updated slowly throughout the day\n> \n> In order to optimize performance of the inserts, I disabled\n> autovacuum/row-level stats and instead run \"vacuum analyze\" on the whole\n> DB every hour. However this operation takes around 20 minutes of each\n> hour. This means that the database is involved in vacuum/analyzing\n> tables 33% of the time.\n> \n> I'd like any performance advice, but my main concern is the amount of\n> time vacuum/analyze runs and its possible impact on the overall DB\n> performance. Thanks!\n \nIf much of the data in the database isn't changing that often, then why\ncontinually re-vacuum the whole thing?\n\nI'd suggest trying autovacuum and see how it does (though you might want\nto tune it to be more or less aggressive, and you'll probably want to\nenable the cost delay).\n\nThe only cases where manual vacuum makes sense to me is if you've got a\ndefined slow period and vacuuming during that slow period is still\nfrequent enough to keep up with demand, or if you've got tables that\nhave a very high churn rate and need to be kept small. In the later\ncase, I'll usually setup a cronjob to vacuum those tables once a minute\nwith no cost delay. I'm sure there might be some other cases where not\nusing autovac might make sense, but generally I'd much rather let\nautovac worry about this so I don't have to.\n\n> I am running 8.2 (will be 8.2.1 soon). The box is Windows with 2GB RAM\n> connected to a SAN over fiber. The data and pg_xlog are on separate\n> partitions. \n> \n> Modified configuration:\n> effective_cache_size = 1000MB\n> random_page_cost = 3\n> default_statistics_target = 50\n> maintenance_work_mem = 256MB\n> shared_buffers = 400MB\n> temp_buffers = 10MB\n> work_mem = 10MB\n> max_fsm_pages = 1500000\n\nOne other useful manual vacuum to consider is running vacuumdb -av\nperiodically (say, once a month) and looking at the last few lines of\noutput. That will give you a good idea on how large you should set\nmax_fsm_pages. Running the output of vacuumdb -av through pgFouine will\ngive you other useful data.\n\n> checkpoint_segments = 30\n> stats_row_level = off\n> stats_start_collector = off\n\nUnless you're really trying to get the last ounce of performance out,\nit's probably not worth turning those stats settings off.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 14:35:56 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled vacuum"
},
{
"msg_contents": "Please cc the list so others can help and learn.\n\nOn Wed, Jan 10, 2007 at 03:43:00PM -0500, Jeremy Haile wrote:\n> > I'd suggest trying autovacuum and see how it does (though you might want\n> > to tune it to be more or less aggressive, and you'll probably want to\n> > enable the cost delay).\n> \n> What are some decent default values for the cost delay vacuum settings? \n> I haven't used these before.\n \nI find that simply setting vacuum_cost_delay to 20 is generally a good\nstarting point. I'll usually do that and then run a vacuum while\nwatching disk activity; I try and tune it so that the disk is ~90%\nutilized with vacuum running. That allows a safety margin without\nstretching vacuums out forever.\n\n> Also - do the default autovacuum settings make sense for tables on the\n> scale of 10 million rows? For example, using the defaults it would\n> require about a million rows (250 + 0.1 * 10 million) to be\n> inserted/updated/deleted before analyzing - which seems high. (about 2\n> million for vacuum) Or am I overestimating how often I would need to\n> vacuum/analyze these tables?\n \nDepends on your application... the way I look at it is that a setting of\n0.1 means 10% dead space in the table. While 5% or 1% would be better,\nyou hit a point of diminishing returns since you have to read the entire\ntable and it's indexes to vacuum it.\n\nBTW, that's the default values for analyze... the defaults for vacuum\nare 2x that.\n\n> Do most people use the default autovacuum settings successfully, or are\n> they usually modified?\n\nI generally use the 8.2 defaults (which are much better than the 8.1\ndefaults) unless I'm really trying to tune things. What's more important\nis to make sure critical tables (such as queue tables) are getting\nvacuumed frequently so that they stay small. (Of course you also need to\nensure there's no long running transactions).\n-- \nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n",
"msg_date": "Wed, 10 Jan 2007 15:21:26 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled vacuum"
},
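A sketch of configuration consistent with the advice in this thread (the specific values are assumptions, not settings anyone here posted): re-enable the stats collection autovacuum depends on, throttle vacuum I/O with a cost delay, and make analyze more aggressive than the default for fast-growing tables:

# postgresql.conf
stats_start_collector = on          # autovacuum needs row-level stats,
stats_row_level = on                # which were switched off earlier in the thread
autovacuum = on

vacuum_cost_delay = 20              # ms, throttles manual VACUUMs
autovacuum_vacuum_cost_delay = 20   # ms, throttles autovacuum workers

autovacuum_analyze_scale_factor = 0.05   # analyze after ~5% churn instead of 10%
autovacuum_analyze_threshold = 250

Per-table overrides (for example for the 10-million-row tables discussed above) go into the pg_autovacuum catalog on 8.2 rather than postgresql.conf.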
{
"msg_contents": "> BTW, that's the default values for analyze... the defaults for vacuum\n> are 2x that.\n\nYeah - I was actually more concerned that tables would need to be\nanalyzed more often than I was about vacuuming too often, so I used\nanalyze as the example. Since my app is inserting constantly throughout\nthe day and querying for \"recent\" data - I want to make sure the query\nplanner realizes that there are lots of rows with new timestamps on\nthem. In other words, if I run a query \"select * from mytable where\ntimestamp > '9:00am'\" - I want to make sure it hasn't been a day since\nthe table was analyzed, so the planner thinks there are zero rows\ngreater than 9:00am today.\n\n> What's more important\n> is to make sure critical tables (such as queue tables) are getting\n> vacuumed frequently so that they stay small. \n\nIs the best way to do that usually to lower the scale factors? Is it\never a good approach to lower the scale factor to zero and just set the\nthresholds to a pure number of rows? (when setting it for a specific\ntable)\n\nThanks,\nJeremy Haile\n",
"msg_date": "Wed, 10 Jan 2007 16:48:42 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled"
},
{
"msg_contents": "On Wed, Jan 10, 2007 at 04:48:42PM -0500, Jeremy Haile wrote:\n> > BTW, that's the default values for analyze... the defaults for vacuum\n> > are 2x that.\n> \n> Yeah - I was actually more concerned that tables would need to be\n> analyzed more often than I was about vacuuming too often, so I used\n> analyze as the example. Since my app is inserting constantly throughout\n> the day and querying for \"recent\" data - I want to make sure the query\n> planner realizes that there are lots of rows with new timestamps on\n> them. In other words, if I run a query \"select * from mytable where\n> timestamp > '9:00am'\" - I want to make sure it hasn't been a day since\n> the table was analyzed, so the planner thinks there are zero rows\n> greater than 9:00am today.\n \nWell, analyze is pretty cheap. At most it'll read only 30,000 pages,\nwhich shouldn't take terribly long on a decent system. So you can be a\nlot more aggressive with it.\n\n> > What's more important\n> > is to make sure critical tables (such as queue tables) are getting\n> > vacuumed frequently so that they stay small. \n> \n> Is the best way to do that usually to lower the scale factors? Is it\n> ever a good approach to lower the scale factor to zero and just set the\n> thresholds to a pure number of rows? (when setting it for a specific\n> table)\n\nThe problem is what happens if autovac goes off and starts vacuuming\nsome large table? While that's going on your queue table is sitting\nthere bloating. If you have a separate cronjob to handle the queue\ntable, it'll stay small, especially in 8.2.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 19:27:09 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled vacuum"
},
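On the per-table question, 8.2 keeps per-table autovacuum overrides in the pg_autovacuum system catalog rather than in postgresql.conf. The sketch below writes such an override for a hypothetical hot table; the column list is from memory of the 8.2 catalog (a -1 means "use the server-wide setting"), so verify it with \d pg_autovacuum before relying on it:

    INSERT INTO pg_autovacuum
           (vacrelid, enabled,
            vac_base_thresh, vac_scale_factor,
            anl_base_thresh, anl_scale_factor,
            vac_cost_delay, vac_cost_limit,
            freeze_min_age, freeze_max_age)
    VALUES ('task_queue'::regclass, true,
            1000, 0.0,    -- vacuum after ~1000 dead rows, regardless of table size
            500,  0.0,    -- analyze after ~500 changed rows
            -1, -1,       -- inherit the cost-delay settings
            -1, -1);      -- inherit the freeze settings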
{
"msg_contents": "Jim C. Nasby wrote:\n\n> > Is the best way to do that usually to lower the scale factors? Is it\n> > ever a good approach to lower the scale factor to zero and just set the\n> > thresholds to a pure number of rows? (when setting it for a specific\n> > table)\n> \n> The problem is what happens if autovac goes off and starts vacuuming\n> some large table? While that's going on your queue table is sitting\n> there bloating. If you have a separate cronjob to handle the queue\n> table, it'll stay small, especially in 8.2.\n\nYou mean \"at least in 8.2\". In previous releases, you could vacuum\nthat queue table until you were blue on the face, but it would achieve\nnothing because it would consider that the dead tuples were visible to a\nrunning transaction: that running the vacuum on the large table. This\nis an annoyance that was fixed in 8.2.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 11 Jan 2007 00:10:34 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled vacuum"
},
{
"msg_contents": "On Thu, Jan 11, 2007 at 12:10:34AM -0300, Alvaro Herrera wrote:\n> Jim C. Nasby wrote:\n> \n> > > Is the best way to do that usually to lower the scale factors? Is it\n> > > ever a good approach to lower the scale factor to zero and just set the\n> > > thresholds to a pure number of rows? (when setting it for a specific\n> > > table)\n> > \n> > The problem is what happens if autovac goes off and starts vacuuming\n> > some large table? While that's going on your queue table is sitting\n> > there bloating. If you have a separate cronjob to handle the queue\n> > table, it'll stay small, especially in 8.2.\n> \n> You mean \"at least in 8.2\". In previous releases, you could vacuum\n> that queue table until you were blue on the face, but it would achieve\n> nothing because it would consider that the dead tuples were visible to a\n> running transaction: that running the vacuum on the large table. This\n> is an annoyance that was fixed in 8.2.\n\nTrue, but in many environments there are other transactions that run\nlong enough that additional vacuums while a long vacuum was running\nwould still help.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 10 Jan 2007 21:42:00 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High inserts, bulk deletes - autovacuum vs scheduled vacuum"
}
] |
[
{
"msg_contents": "I have a table of messages with paths and inserted dates (among other\nthings), like so:\n\nCREATE TABLE Messages (\n msgkey BIGSERIAL PRIMARY KEY,\n path TEXT NOT NULL,\n inserted TIMESTAMP WITHOUT TIMEZONE DEFAULT NOW()\n);\n\nI run a query to determine which days actually saw emails come in, like so:\n\nSELECT DATE(inserted) FROM Messages GROUP BY DATE(inserted);\n\nThat's obviously not very efficient, so I made an index:\n\nCREATE INDEX messages_date_inserted_ind ON Messages(DATE(inserted));\n\nHowever, GROUP BY does not use this index:\n\n=# explain analyze select date(inserted) from messages group by\ndate(inserted);\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=104773.10..104789.51 rows=1313 width=8) (actual time=\n31269.476..31269.557 rows=44 loops=1)\n -> Seq Scan on messages (cost=0.00..101107.25 rows=1466340 width=8)\n(actual time=23.923..25248.400 rows=1467036 loops=1)\n Total runtime: 31269.735 ms\n(3 rows)\n\n\nIs it possible to get pg to use an index in a group by? I don't see why it\nwouldn't be possible, but maybe I'm missing something.\n\nUsing pg 8.1.4...\n\nI have a table of messages with paths and inserted dates (among other things), like so:CREATE TABLE Messages ( msgkey BIGSERIAL PRIMARY KEY, path TEXT NOT NULL, inserted TIMESTAMP WITHOUT TIMEZONE DEFAULT NOW()\n);I run a query to determine which days actually saw emails come in, like so:SELECT DATE(inserted) FROM Messages GROUP BY DATE(inserted);That's obviously not very efficient, so I made an index:\nCREATE INDEX messages_date_inserted_ind ON Messages(DATE(inserted));However, GROUP BY does not use this index:=# explain analyze select date(inserted) from messages group by date(inserted); QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------ HashAggregate (cost=104773.10..104789.51 rows=1313 width=8) (actual time=31269.476..31269.557\n rows=44 loops=1) -> Seq Scan on messages (cost=0.00..101107.25 rows=1466340 width=8) (actual time=23.923..25248.400 rows=1467036 loops=1) Total runtime: 31269.735 ms(3 rows)Is it possible to get pg to use an index in a group by? I don't see why it wouldn't be possible, but maybe I'm missing something.\nUsing pg 8.1.4...",
"msg_date": "Tue, 9 Jan 2007 17:05:48 -0600",
"msg_from": "tsuraan <[email protected]>",
"msg_from_op": true,
"msg_subject": "group by will not use an index?"
},
{
"msg_contents": "That query looks strange to me (a group by without an aggregate). See\nif this is\nany faster:\n \nSELECT DISTINCT DATE(inserted) FROM Messages\n \nI won't hold my breath though, I don't think there's any way around the\nfull table scan\nin Postgres, because the index does not contain enough information about\ntransactional\nstate, so table access is always required (unlike virtually every other\ntype of db)\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of tsuraan\nSent: Tuesday, January 09, 2007 5:06 PM\nTo: pgsql-performance\nSubject: [PERFORM] group by will not use an index?\n\n\nI have a table of messages with paths and inserted dates (among other\nthings), like so:\n\nCREATE TABLE Messages (\n msgkey BIGSERIAL PRIMARY KEY,\n path TEXT NOT NULL,\n inserted TIMESTAMP WITHOUT TIMEZONE DEFAULT NOW() \n);\n\nI run a query to determine which days actually saw emails come in, like\nso:\n\nSELECT DATE(inserted) FROM Messages GROUP BY DATE(inserted);\n\nThat's obviously not very efficient, so I made an index: \n\nCREATE INDEX messages_date_inserted_ind ON Messages(DATE(inserted));\n\nHowever, GROUP BY does not use this index:\n\n=# explain analyze select date(inserted) from messages group by\ndate(inserted);\n QUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------\n HashAggregate (cost=104773.10..104789.51 rows=1313 width=8) (actual\ntime=31269.476..31269.557 rows=44 loops=1)\n -> Seq Scan on messages (cost=0.00..101107.25 rows=1466340 width=8)\n(actual time=23.923..25248.400 rows=1467036 loops=1)\n Total runtime: 31269.735 ms\n(3 rows)\n\n\nIs it possible to get pg to use an index in a group by? I don't see why\nit wouldn't be possible, but maybe I'm missing something. \n\nUsing pg 8.1.4...\n\n\n\n\n\nMessage\n\n\nThat \nquery looks strange to me (a group by without an aggregate). 
See if this \nis\nany \nfaster:\n \nSELECT \nDISTINCT DATE(inserted) FROM Messages\n \nI \nwon't hold my breath though, I don't think there's any way around the full table \nscan\nin \nPostgres, because the index does not contain enough information about \ntransactional\nstate, \nso table access is always required (unlike virtually every other type of \ndb)\n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of \n tsuraanSent: Tuesday, January 09, 2007 5:06 PMTo: \n pgsql-performanceSubject: [PERFORM] group by will not use an \n index?I have a table of messages with paths and inserted \n dates (among other things), like so:CREATE TABLE Messages \n ( msgkey BIGSERIAL PRIMARY KEY, \n path TEXT NOT NULL, inserted TIMESTAMP WITHOUT TIMEZONE \n DEFAULT NOW() );I run a query to determine which days actually saw \n emails come in, like so:SELECT DATE(inserted) FROM Messages GROUP BY \n DATE(inserted);That's obviously not very efficient, so I made an \n index: CREATE INDEX messages_date_inserted_ind ON \n Messages(DATE(inserted));However, GROUP BY does not use this \n index:=# explain analyze select date(inserted) from messages group by \n date(inserted); \n QUERY \n PLAN \n ------------------------------------------------------------------------------------------------------------------------------ HashAggregate \n (cost=104773.10..104789.51 rows=1313 width=8) (actual \n time=31269.476..31269.557 rows=44 loops=1) -> Seq \n Scan on messages (cost=0.00..101107.25 rows=1466340 width=8) (actual \n time=23.923..25248.400 rows=1467036 loops=1) Total runtime: 31269.735 \n ms(3 rows)Is it possible to get pg to use an index in a group \n by? I don't see why it wouldn't be possible, but maybe I'm missing \n something. Using pg 8.1.4...",
"msg_date": "Tue, 9 Jan 2007 17:32:50 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: group by will not use an index?"
},
{
"msg_contents": "On Tue, 2007-01-09 at 17:05, tsuraan wrote:\n> I have a table of messages with paths and inserted dates (among other\n> things), like so:\n> \n> CREATE TABLE Messages (\n> msgkey BIGSERIAL PRIMARY KEY,\n> path TEXT NOT NULL,\n> inserted TIMESTAMP WITHOUT TIMEZONE DEFAULT NOW() \n> );\n> \n> I run a query to determine which days actually saw emails come in,\n> like so:\n> \n> SELECT DATE(inserted) FROM Messages GROUP BY DATE(inserted);\n\nYou're probably under the mistaken impression that PostgreSQL and can\nretrieve all the data it needs from the index alone. It can't. Anytime\npostgresql gets an index reference, it has to then visit the actual\ntable file to grab the individual entry. That's because indexes don't\nstore mvcc visibility information, and due to the problems locking both\nindexes and tables together would present, probably won't any time soon.\n",
"msg_date": "Tue, 09 Jan 2007 17:42:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: group by will not use an index?"
},
{
"msg_contents": "For the reasons indicated (that is, MVCC), PG can not do a DISTINCT or the equivalent\nGROUP BY from index values alone.\n\nIf this table is large, perhaps you could denormalize and maintain a\nsummary table with date (using truncation) and count, updated with\ntriggers on the original table. This table will presumably have a\nsmall number of rows at the cost of doubling the times for updates,\ninserts, and deletes.\n\n\n",
"msg_date": "Tue, 9 Jan 2007 17:41:27 -0800",
"msg_from": "Andrew Lazarus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: group by will not use an index?"
},
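A minimal sketch of the trigger-maintained summary table suggested above, assuming plpgsql is installed. The table and function names are invented for illustration, and the race between two sessions inserting the first row for a new day is left unhandled to keep it short:

    CREATE TABLE message_days (
        day    DATE PRIMARY KEY,
        n_msgs BIGINT NOT NULL DEFAULT 0
    );

    CREATE OR REPLACE FUNCTION message_days_bump() RETURNS trigger AS $$
    BEGIN
        UPDATE message_days SET n_msgs = n_msgs + 1 WHERE day = DATE(NEW.inserted);
        IF NOT FOUND THEN
            INSERT INTO message_days (day, n_msgs) VALUES (DATE(NEW.inserted), 1);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER messages_day_count
        AFTER INSERT ON Messages
        FOR EACH ROW EXECUTE PROCEDURE message_days_bump();

    -- The original "which days saw mail?" question then only reads a tiny table:
    SELECT day FROM message_days ORDER BY day;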
{
"msg_contents": "> For the reasons indicated (that is, MVCC), PG can not do a DISTINCT or the\n> equivalent\n> GROUP BY from index values alone.\n\n\nOk, that makes sense. Thanks for the help everybody!\n\nIf this table is large, perhaps you could denormalize and maintain a\n> summary table with date (using truncation) and count, updated with\n> triggers on the original table. This table will presumably have a\n> small number of rows at the cost of doubling the times for updates,\n> inserts, and deletes.\n\n\nWell, the inserted time, at least, is never updated, and deletions are very\nrare (never, so far), so I'll have a look at doing things that way. Thanks!\n\nFor the reasons indicated (that is, MVCC), PG can not do a DISTINCT or the equivalent\nGROUP BY from index values alone.Ok, that makes sense. Thanks for the help everybody! \nIf this table is large, perhaps you could denormalize and maintain asummary table with date (using truncation) and count, updated withtriggers on the original table. This table will presumably have asmall number of rows at the cost of doubling the times for updates,\ninserts, and deletes.Well, the inserted time, at least, is never updated, and deletions are very rare (never, so far), so I'll have a look at doing things that way. Thanks!",
"msg_date": "Wed, 10 Jan 2007 11:14:13 -0600",
"msg_from": "tsuraan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: group by will not use an index?"
}
] |
[
{
"msg_contents": "Actually, as I recently discovered, GROUP BY is faster than DISTINCT. It's just due to how they are implemented, so don't go looking for any deep reason :) The thread \"GROUP BY vs DISTINCT\" from 2006-12-20 discusses it. DISTINCT sorts the results to find the unique rows, but GROUP BY uses a hash.\n\nBrian\n\n----- Original Message ----\nFrom: Adam Rich <[email protected]>\nTo: tsuraan <[email protected]>; pgsql-performance <[email protected]>\nSent: Wednesday, 10 January, 2007 7:32:50 AM\nSubject: Re: [PERFORM] group by will not use an index?\n\nMessage\n\n \n\n\n\nThat \nquery looks strange to me (a group by without an aggregate). See if this \nis\n\nany \nfaster:\n\n \n\nSELECT \nDISTINCT DATE(inserted) FROM Messages\n\n \n\nI \nwon't hold my breath though, I don't think there's any way around the full table \nscan\n\nin \nPostgres, because the index does not contain enough information about \ntransactional\n\nstate, \nso table access is always required (unlike virtually every other type of \ndb)\n\n \n\n\n \n\n -----Original Message-----\nFrom: \n [email protected] \n [mailto:[email protected]] On Behalf Of \n tsuraan\nSent: Tuesday, January 09, 2007 5:06 PM\nTo: \n pgsql-performance\nSubject: [PERFORM] group by will not use an \n index?\n\n\nI have a table of messages with paths and inserted \n dates (among other things), like so:\n\nCREATE TABLE Messages \n (\n msgkey BIGSERIAL PRIMARY KEY,\n \n path TEXT NOT NULL,\n inserted TIMESTAMP WITHOUT TIMEZONE \n DEFAULT NOW() \n);\n\nI run a query to determine which days actually saw \n emails come in, like so:\n\nSELECT DATE(inserted) FROM Messages GROUP BY \n DATE(inserted);\n\nThat's obviously not very efficient, so I made an \n index: \n\nCREATE INDEX messages_date_inserted_ind ON \n Messages(DATE(inserted));\n\nHowever, GROUP BY does not use this \n index:\n\n=# explain analyze select date(inserted) from messages group by \n date(inserted);\n \n QUERY \n PLAN \n \n------------------------------------------------------------------------------------------------------------------------------\n HashAggregate \n (cost=104773.10..104789.51 rows=1313 width=8) (actual \n time=31269.476..31269.557 rows=44 loops=1)\n -> Seq \n Scan on messages (cost=0.00..101107.25 rows=1466340 width=8) (actual \n time=23.923..25248.400 rows=1467036 loops=1)\n Total runtime: 31269.735 \n ms\n(3 rows)\n\n\nIs it possible to get pg to use an index in a group \n by? I don't see why it wouldn't be possible, but maybe I'm missing \n something. \n\nUsing pg 8.1.4...\n\n\n\n\nActually, as I recently discovered, GROUP BY is faster than DISTINCT. It's just due to how they are implemented, so don't go looking for any deep reason :) The thread \"GROUP BY vs DISTINCT\" from 2006-12-20 discusses it. DISTINCT sorts the results to find the unique rows, but GROUP BY uses a hash.Brian----- Original Message ----From: Adam Rich <[email protected]>To: tsuraan <[email protected]>; pgsql-performance <[email protected]>Sent: Wednesday, 10 January, 2007 7:32:50 AMSubject: Re: [PERFORM] group by will not use an index?Message\nThat \nquery looks strange to me (a group by without an aggregate). 
See if this \nis\nany \nfaster:\n \nSELECT \nDISTINCT DATE(inserted) FROM Messages\n \nI \nwon't hold my breath though, I don't think there's any way around the full table \nscan\nin \nPostgres, because the index does not contain enough information about \ntransactional\nstate, \nso table access is always required (unlike virtually every other type of \ndb)\n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of \n tsuraanSent: Tuesday, January 09, 2007 5:06 PMTo: \n pgsql-performanceSubject: [PERFORM] group by will not use an \n index?I have a table of messages with paths and inserted \n dates (among other things), like so:CREATE TABLE Messages \n ( msgkey BIGSERIAL PRIMARY KEY, \n path TEXT NOT NULL, inserted TIMESTAMP WITHOUT TIMEZONE \n DEFAULT NOW() );I run a query to determine which days actually saw \n emails come in, like so:SELECT DATE(inserted) FROM Messages GROUP BY \n DATE(inserted);That's obviously not very efficient, so I made an \n index: CREATE INDEX messages_date_inserted_ind ON \n Messages(DATE(inserted));However, GROUP BY does not use this \n index:=# explain analyze select date(inserted) from messages group by \n date(inserted); \n QUERY \n PLAN \n ------------------------------------------------------------------------------------------------------------------------------ HashAggregate \n (cost=104773.10..104789.51 rows=1313 width=8) (actual \n time=31269.476..31269.557 rows=44 loops=1) -> Seq \n Scan on messages (cost=0.00..101107.25 rows=1466340 width=8) (actual \n time=23.923..25248.400 rows=1467036 loops=1) Total runtime: 31269.735 \n ms(3 rows)Is it possible to get pg to use an index in a group \n by? I don't see why it wouldn't be possible, but maybe I'm missing \n something. Using pg 8.1.4...",
"msg_date": "Tue, 9 Jan 2007 17:07:03 -0800 (PST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: group by will not use an index?"
},
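An easy way to see the difference Brian describes is to compare the two plans side by side on the table from the earlier thread; per this discussion, the DISTINCT form sorts while the GROUP BY form can hash-aggregate:

    EXPLAIN ANALYZE SELECT DISTINCT DATE(inserted) FROM Messages;
    EXPLAIN ANALYZE SELECT DATE(inserted) FROM Messages GROUP BY DATE(inserted);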
{
"msg_contents": "On Tue, Jan 09, 2007 at 05:07:03PM -0800, Brian Herlihy wrote:\n> Actually, as I recently discovered, GROUP BY is faster than DISTINCT. It's\n> just due to how they are implemented, so don't go looking for any deep\n> reason :) The thread \"GROUP BY vs DISTINCT\" from 2006-12-20 discusses it.\n> DISTINCT sorts the results to find the unique rows, but GROUP BY uses a\n> hash.\n\nActually, GROUP BY _can_ use a sort too, it's just that a hash is usually\nfaster.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 10 Jan 2007 02:11:04 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: group by will not use an index?"
}
] |
[
{
"msg_contents": "I have a query made by joining two subqueries where the outer query\nperforming the join takes significantly longer to run than the two\nsubqueries. \n\nThe first subquery runs in 600ms. The seconds subquery runs in 700ms. \nBut the outer query takes 240 seconds to run! Both of the two\nsubqueries only return 8728 rows. \n\nChanging the inner join to a left join makes the outer query run in\nabout 1000ms (which is great), but I don't understand why the inner join\nis so slow!\n\nI'm using PostgreSQL 8.2.1. Any ideas?\n\nQUERY PLAN (Inner Join) - takes 240 seconds\n-------------------\nNested Loop (cost=17.46..17.56 rows=1 width=120)\n Join Filter: ((a.merchant_dim_id = b.merchant_dim_id) AND\n (a.dcms_dim_id = b.dcms_dim_id))\n -> HashAggregate (cost=8.71..8.74 rows=1 width=16)\n -> Index Scan using transaction_facts_transaction_date_idx on\n transaction_facts (cost=0.00..8.69 rows=1 width=16)\n Index Cond: ((transaction_date >= '2007-01-09\n 00:00:00'::timestamp without time zone) AND\n (transaction_date < '2007-01-09 09:30:00'::timestamp\n without time zone))\n -> HashAggregate (cost=8.75..8.78 rows=1 width=16)\n -> HashAggregate (cost=8.71..8.72 rows=1 width=55)\n -> Index Scan using\n transaction_facts_transaction_date_idx on\n transaction_facts (cost=0.00..8.69 rows=1 width=55)\n Index Cond: ((transaction_date >= '2007-01-09\n 00:00:00'::timestamp without time zone) AND\n (transaction_date < '2007-01-09 09:30:00'::timestamp\n without time zone))\n\n\nQUERY PLAN (Left Join) - takes one second\n-------------------\nMerge Left Join (cost=304037.63..304064.11 rows=2509 width=120)\n Merge Cond: ((a.dcms_dim_id = b.dcms_dim_id) AND (a.merchant_dim_id =\n b.merchant_dim_id))\n -> Sort (cost=152019.45..152025.72 rows=2509 width=64)\n Sort Key: a.dcms_dim_id, a.merchant_dim_id\n -> HashAggregate (cost=151771.15..151852.69 rows=2509\n width=16)\n -> Bitmap Heap Scan on transaction_facts \n (cost=5015.12..150419.90 rows=77214 width=16)\n Recheck Cond: ((transaction_date >= '2007-01-09\n 00:00:00'::timestamp without time zone) AND\n (transaction_date < '2007-01-09 09:30:00'::timestamp\n without time zone))\n -> Bitmap Index Scan on\n transaction_facts_transaction_date_idx \n (cost=0.00..4995.81 rows=77214 width=0)\n Index Cond: ((transaction_date >= '2007-01-09\n 00:00:00'::timestamp without time zone) AND\n (transaction_date < '2007-01-09\n 09:30:00'::timestamp without time zone))\n -> Sort (cost=152018.18..152020.54 rows=943 width=64)\n Sort Key: b.dcms_dim_id, b.merchant_dim_id\n -> Subquery Scan b (cost=151931.51..151971.59 rows=943\n width=64)\n -> HashAggregate (cost=151931.51..151962.16 rows=943\n width=16)\n -> HashAggregate (cost=151578.11..151672.35\n rows=9424 width=55)\n -> Bitmap Heap Scan on transaction_facts \n (cost=5015.12..150419.90 rows=77214 width=55)\n Recheck Cond: ((transaction_date >=\n '2007-01-09 00:00:00'::timestamp without\n time zone) AND (transaction_date <\n '2007-01-09 09:30:00'::timestamp without\n time zone))\n -> Bitmap Index Scan on\n transaction_facts_transaction_date_idx \n (cost=0.00..4995.81 rows=77214 width=0)\n Index Cond: ((transaction_date >=\n '2007-01-09 00:00:00'::timestamp\n without time zone) AND\n (transaction_date < '2007-01-09\n 09:30:00'::timestamp without time\n zone))\n\n\nQUERY\n-------------------\nselect a.merchant_dim_id, a.dcms_dim_id, \n a.num_success, a.num_failed, a.total_transactions,\n a.success_rate,\n b.distinct_num_success, b.distinct_num_failed,\n b.distinct_total_transactions, b.distinct_success_rate\nfrom 
(\n\n-- SUBQUERY 1\nselect merchant_dim_id, \n dcms_dim_id,\n sum(success) as num_success, \n sum(failed) as num_failed, \n count(*) as total_transactions,\n (sum(success) * 1.0 / count(*)) as success_rate \nfrom transaction_facts \nwhere transaction_date >= '2007-1-9' \nand transaction_date < '2007-1-9 9:30' \ngroup by merchant_dim_id, dcms_dim_id\n\n) as a inner join (\n\n-- SUBQUERY 2\nselect merchant_dim_id, \n dcms_dim_id,\n sum(success) as distinct_num_success, \n sum(failed) as distinct_num_failed, \n count(*) as distinct_total_transactions, \n (sum(success) * 1.0 / count(*)) as distinct_success_rate \nfrom (\n\n select merchant_dim_id, \n dcms_dim_id,\n serial_number,\n success,\n failed \n from transaction_facts \n where transaction_date >= '2007-1-9' \n and transaction_date < '2007-1-9 9:30' \n group by merchant_dim_id, dcms_dim_id, serial_number, success, failed\n\n ) as distinct_summary\ngroup by merchant_dim_id, dcms_dim_id\n\n) as b using(merchant_dim_id, dcms_dim_id)\n",
"msg_date": "Wed, 10 Jan 2007 11:17:22 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow inner join, but left join is fast"
},
{
"msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> I have a query made by joining two subqueries where the outer query\n> performing the join takes significantly longer to run than the two\n> subqueries. \n\nPlease show EXPLAIN ANALYZE results, not just EXPLAIN.\nAlso, have you analyzed your tables recently?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 12:15:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow inner join, but left join is fast "
},
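Spelling out Tom's request against the table in this thread: refresh the statistics first, then capture the plan with actual row counts and timings. The query shown is a cut-down stand-in; the full two-subquery join from the original post would be run the same way:

    ANALYZE transaction_facts;

    EXPLAIN ANALYZE
    SELECT merchant_dim_id, dcms_dim_id, count(*)
    FROM transaction_facts
    WHERE transaction_date >= '2007-1-9'
      AND transaction_date < '2007-1-9 9:30'
    GROUP BY merchant_dim_id, dcms_dim_id;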
{
"msg_contents": "The table should have been analyzed, but to make sure I ran analyze on\nthe table before executing the explain analyze queries. Well - problem\nsolved. This time the inner join query runs quickly. \n\nI still don't understand why the inner join would be so different from\nthe left join prior to the analyze. It looks like the amount of rows\nexpected in the original query plan for inner join was 1 (not correct\nsince it was really 8728) The left join query had the exact same\nsubqueries but expected 77214 rows to be returned from them, which was\nstill not correct but resulted in a better query plan.\n\nAfter the recent analyze, here's the new inner join query plan. I won't\nbother pasting the left join plan, since it is almost identical now\n(including row counts) FYI -the result of the queries is (and always\nwas) identical for inner join and left join.\n\n\nQUERY PLAN (inner join)\nMerge Join (cost=279457.86..279479.83 rows=43 width=120) (actual\ntime=626.771..670.275 rows=8728 loops=1)\n Merge Cond: ((a.dcms_dim_id = b.dcms_dim_id) AND (a.merchant_dim_id =\n b.merchant_dim_id))\n -> Sort (cost=139717.30..139722.38 rows=2029 width=64) (actual\n time=265.669..269.878 rows=8728 loops=1)\n Sort Key: a.dcms_dim_id, a.merchant_dim_id\n -> HashAggregate (cost=139519.61..139585.56 rows=2029\n width=16) (actual time=211.368..247.429 rows=8728 loops=1)\n -> Bitmap Heap Scan on transaction_facts \n (cost=4427.62..138316.05 rows=68775 width=16) (actual\n time=21.858..100.998 rows=65789 loops=1)\n Recheck Cond: ((transaction_date >= '2007-01-09\n 00:00:00'::timestamp without time zone) AND\n (transaction_date < '2007-01-09 09:30:00'::timestamp\n without time zone))\n -> Bitmap Index Scan on\n transaction_facts_transaction_date_idx \n (cost=0.00..4410.42 rows=68775 width=0) (actual\n time=21.430..21.430 rows=65789 loops=1)\n Index Cond: ((transaction_date >= '2007-01-09\n 00:00:00'::timestamp without time zone) AND\n (transaction_date < '2007-01-09\n 09:30:00'::timestamp without time zone))\n -> Sort (cost=139740.56..139742.67 rows=843 width=64) (actual\n time=361.083..365.418 rows=8728 loops=1)\n Sort Key: b.dcms_dim_id, b.merchant_dim_id\n -> Subquery Scan b (cost=139663.76..139699.59 rows=843\n width=64) (actual time=308.567..346.135 rows=8728 loops=1)\n -> HashAggregate (cost=139663.76..139691.16 rows=843\n width=16) (actual time=308.563..337.677 rows=8728 loops=1)\n -> HashAggregate (cost=139347.68..139431.97\n rows=8429 width=55) (actual time=198.093..246.591\n rows=48942 loops=1)\n -> Bitmap Heap Scan on transaction_facts \n (cost=4427.62..138316.05 rows=68775 width=55)\n (actual time=24.080..83.988 rows=65789\n loops=1)\n Recheck Cond: ((transaction_date >=\n '2007-01-09 00:00:00'::timestamp without\n time zone) AND (transaction_date <\n '2007-01-09 09:30:00'::timestamp without\n time zone))\n -> Bitmap Index Scan on\n transaction_facts_transaction_date_idx \n (cost=0.00..4410.42 rows=68775 width=0)\n (actual time=23.596..23.596 rows=65789\n loops=1)\n Index Cond: ((transaction_date >=\n '2007-01-09 00:00:00'::timestamp\n without time zone) AND\n (transaction_date < '2007-01-09\n 09:30:00'::timestamp without time\n zone))\nTotal runtime: 675.638 ms\n\n\n\nOn Wed, 10 Jan 2007 12:15:44 -0500, \"Tom Lane\" <[email protected]> said:\n> \"Jeremy Haile\" <[email protected]> writes:\n> > I have a query made by joining two subqueries where the outer query\n> > performing the join takes significantly longer to run than the two\n> > subqueries. 
\n> \n> Please show EXPLAIN ANALYZE results, not just EXPLAIN.\n> Also, have you analyzed your tables recently?\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 13:20:18 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow inner join, but left join is fast"
},
{
"msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> I still don't understand why the inner join would be so different from\n> the left join prior to the analyze.\n\nAre you sure you hadn't analyzed in between? Or maybe autovac did it\nfor you? The reason for the plan change is the change from estimating\n1 row matching the transaction_date range constraint, to estimating lots\nof them, and the join type away up at the top would surely not have\naffected that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 13:38:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow inner join, but left join is fast "
},
{
"msg_contents": "I'm pretty sure it didn't analyze in between - autovac is turned off\nand I ran the test multiple times before posting. \n\nBut since I can't reproduce it anymore, I can't be 100% sure. And it\ncertainly doesn't make sense that the estimate for the index scan would\nchange based on an unrelated join condition.\n\nIf I ever get it to happen again, I'll be more careful and repost if it\nis a real issue. Thanks for pointing me in the right direction!\n\n\nOn Wed, 10 Jan 2007 13:38:15 -0500, \"Tom Lane\" <[email protected]> said:\n> \"Jeremy Haile\" <[email protected]> writes:\n> > I still don't understand why the inner join would be so different from\n> > the left join prior to the analyze.\n> \n> Are you sure you hadn't analyzed in between? Or maybe autovac did it\n> for you? The reason for the plan change is the change from estimating\n> 1 row matching the transaction_date range constraint, to estimating lots\n> of them, and the join type away up at the top would surely not have\n> affected that.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 13:38:24 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow inner join, but left join is fast"
},
{
"msg_contents": "Another random idea - does PostgreSQL do any caching of query plans?\neven on the session level? \n\nI ran these queries from the same Query window, so my idea is that maybe\nthe inner join plan was cached prior to an automatic analyze being run. \n\nBut I'm doubting PostgreSQL would do something like that. And of\ncourse, if PostgreSQL doesn't cache query plans - this idea is bogus =)\n\n\nOn Wed, 10 Jan 2007 13:38:24 -0500, \"Jeremy Haile\" <[email protected]>\nsaid:\n> I'm pretty sure it didn't analyze in between - autovac is turned off\n> and I ran the test multiple times before posting. \n> \n> But since I can't reproduce it anymore, I can't be 100% sure. And it\n> certainly doesn't make sense that the estimate for the index scan would\n> change based on an unrelated join condition.\n> \n> If I ever get it to happen again, I'll be more careful and repost if it\n> is a real issue. Thanks for pointing me in the right direction!\n> \n> \n> On Wed, 10 Jan 2007 13:38:15 -0500, \"Tom Lane\" <[email protected]> said:\n> > \"Jeremy Haile\" <[email protected]> writes:\n> > > I still don't understand why the inner join would be so different from\n> > > the left join prior to the analyze.\n> > \n> > Are you sure you hadn't analyzed in between? Or maybe autovac did it\n> > for you? The reason for the plan change is the change from estimating\n> > 1 row matching the transaction_date range constraint, to estimating lots\n> > of them, and the join type away up at the top would surely not have\n> > affected that.\n> > \n> > \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n",
"msg_date": "Wed, 10 Jan 2007 13:44:37 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow inner join, but left join is fast"
},
{
"msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> Another random idea - does PostgreSQL do any caching of query plans?\n\nOnly if the client specifies it, either by PREPARE or the equivalent\nprotocol-level message. I dunno what client software you were using,\nbut I think few if any would PREPARE behind your back. Might be worth\nchecking into though, if you've eliminated autovacuum.\n\nActually there's another possibility --- did you create any indexes on\nthe table in between? CREATE INDEX doesn't do a full stats update, but\nit does count the rows and update pg_class.reltuples. But it's hard to\nbelieve that'd have caused as big a rowcount shift as we see here ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 14:22:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow inner join, but left join is fast "
},
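One way to watch the planner input Tom refers to: relpages and reltuples in pg_class are estimates refreshed by VACUUM, ANALYZE and CREATE INDEX, so comparing them against a real count shows how stale they are at any moment:

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'transaction_facts';

    SELECT count(*) FROM transaction_facts;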
{
"msg_contents": "I did create and drop an index at some point while looking at this\nissue. But I definitely reran both of the queries (and explains) after\nthe index was dropped, so I don't understand why there would be a\ndifference between the inner and left query plans. (which were run\nback-to-back more than once) Anyways - I'll let you know if something\nsimilar happens again.\n\nThanks,\nJeremy Haile \n\n\nOn Wed, 10 Jan 2007 14:22:35 -0500, \"Tom Lane\" <[email protected]> said:\n> \"Jeremy Haile\" <[email protected]> writes:\n> > Another random idea - does PostgreSQL do any caching of query plans?\n> \n> Only if the client specifies it, either by PREPARE or the equivalent\n> protocol-level message. I dunno what client software you were using,\n> but I think few if any would PREPARE behind your back. Might be worth\n> checking into though, if you've eliminated autovacuum.\n> \n> Actually there's another possibility --- did you create any indexes on\n> the table in between? CREATE INDEX doesn't do a full stats update, but\n> it does count the rows and update pg_class.reltuples. But it's hard to\n> believe that'd have caused as big a rowcount shift as we see here ...\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Jan 2007 14:26:55 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow inner join, but left join is fast"
}
] |
[
{
"msg_contents": "Hi all,\n\n I've got a doubt about how to create an index and a primary key. Lets \nsay I have the following table:\n\n CREATE TABLE blacklist\n (\n telephone VARCHAR(15),\n customer_id INT4\n CONSTRAINT fk_blacklist_customerid REFERENCES\n customers( customer_id ),\n country_id INT2\n CONSTRAINT fk_blacklist_countryid REFERENCES\n countries( country_id ),\n CONSTRAINT pk_blacklist_cidcustidtel\n PRIMARY KEY(country_id, customer_id, telephone)\n );\n\n The country_id column can have maybe 100 - 250 different values.\n The customer_id column can have as much several hundred values (less \nthan 1000).\n The telephone is where all will be different.\n\n So my doubt is, in terms of performance makes any difference the \norder of the primary key fields? The same in the index definition? I \nhave checked the postgresql documentation I haven't been able to find \nanything about.\n\nThanks\n-- \nArnau\n",
"msg_date": "Thu, 11 Jan 2007 11:56:50 +0100",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does it matters the column order in indexes and constraints creation?"
},
{
"msg_contents": "Arnau wrote:\n> Hi all,\n> \n> I've got a doubt about how to create an index and a primary key. Lets \n> say I have the following table:\n\n> The country_id column can have maybe 100 - 250 different values.\n> The customer_id column can have as much several hundred values (less \n> than 1000).\n> The telephone is where all will be different.\n> \n> So my doubt is, in terms of performance makes any difference the order \n> of the primary key fields? The same in the index definition? I have \n> checked the postgresql documentation I haven't been able to find \n> anything about.\n\nWell, it makes no *logical* difference, but clearly the index will have \na different shape depending on how you create it.\n\nIf you regularly write queries that select by country but not by \ncustomer, then use (country_id,customer_id). A more likely scenario is \nthat you will access via customer, which in any case is more selective.\n\nHowever, since both colums reference other tables you might want to make \nsure the \"secondary\" column has its own index if you do lots of updating \non the target FK table.\n\nBut, the more indexes you have the slower updates will be on this table, \nsince you'll need to keep the indexes up-to-date too.\n\nI find it's easier to spot where to put indexes during testing. It's \neasy to add indexes where they're never used.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 11 Jan 2007 11:27:20 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does it matters the column order in indexes and constraints"
}
] |
[
{
"msg_contents": "Hello all!\nRunning a vac on an 8.2 client connecting to an 8.2 database (following \nexample was for a 14 row static table) - vacuums will sit (for lack of a \nbetter word) for anywhere from ten to twenty minutes before taking a \nlock out on the table and doing work there. Once the vacuum does \ncomplete, I noticed that no timestamp is registered in \npg_stat_all_tables for that relation for the last-vacuum'd timestamp \n(however analyze does seem to set it's timestamp). I asked it to run a \nvacuum on an index (knowing it would fail out), and again, the vacuum \nsat for several minutes before finally erroring out saying that it \ncouldn't vacuum an index. Out of curiosity I tried the vacuum on an 8.1 \nclient connected to the 8.2 db, same delay.\n\nIn running a truss on the process while it is running, there is over \nfive minutes where the process seems to be scanning pg_class (at least \nthats the only table listed in pg_locks for this process). Following \nthis it drops into a cycle of doing the same send() command with several \nseconds lag between each one, and every so often it catches the same \ninterrupt (SIGUSR1) and then goes back into the same cycle of send() \ncalls. Also, whatever it is doing during this stage, it isn't checking \nfor process-cancelled interrupts, as the process won't recognize it's \nbeen requested to cancel until it breaks out of this cycle of send()s \nand SIGUSR1s (which can go for another several minutes). I'm happy to \nsend along the gore of the truss call if you think it would be helpful...\n\nAny ideas what the vac is prepping for that it could become bogged down \nin before finally taking the lock on the table?\n\nIs the lack of a timestamp set for last_vacuum in pg_stat_all_tables an \nindication that there may be something incomplete about our install?\n\nSince the upgrade, we've also seen unusual lag time in simple inserts \ninto tables (atomic inserts have been seen running for several seconds), \nand also extreme delays in running \\d on tables (I got tired of counting \npast 2 minutes, connecting with an 8.1 client gives immediate response \non this command). We plan to upgrade to 8.2.1 as soon as possible, and \nalso to drop into single user mode and run a reindex system, but any \nsuggestions in the meantime as to a potential cause or a way to further \ndebug the vacs would be greatly appreciated.\n\nOS: Solaris 10\nwrite transactions/hr: 1.5 million\nsize of pg_class: 535,226\nnumber of relations: 108,694\n\nThanks to all,\n\nKim\n",
"msg_date": "Thu, 11 Jan 2007 11:20:13 -0600",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "unusual performance for vac following 8.2 upgrade"
},
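For anyone reproducing this, the stalled backend can be watched from a second session while the vacuum sits there. 12345 stands in for the vacuum backend's PID, and the column names are the 8.2-era ones:

    SELECT relation::regclass, mode, granted
    FROM pg_locks
    WHERE pid = 12345;

    SELECT procpid, usename, current_query, query_start
    FROM pg_stat_activity
    WHERE procpid = 12345;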
{
"msg_contents": "Kim wrote:\n\n<snip>\n> OS: Solaris 10\n> write transactions/hr: 1.5 million\n> size of pg_class: 535,226\n> number of relations: 108,694\n>\nThat is a huge pg_class. I remember some discussion recently about \nproblems with 8.2 and the way it scans pg_class. I also believe it's \nfixed in 8.2.1. Are you running that. If not, I suggest you upgrade \nand see if the fault still exists.\n\nRegards\n\nRussell Smith\n> Thanks to all,\n>\n> Kim\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n>\n\n",
"msg_date": "Fri, 12 Jan 2007 05:07:23 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade"
},
{
"msg_contents": "Kim <[email protected]> writes:\n> Running a vac on an 8.2 client connecting to an 8.2 database (following \n> example was for a 14 row static table) - vacuums will sit (for lack of a \n> better word) for anywhere from ten to twenty minutes before taking a \n> lock out on the table and doing work there.\n\nHow big is this database (how many pg_class entries)? What do you get\nfrom \"VACUUM VERBOSE pg_class\"? The truss results make it sound like\nthe problem is pgstat_vacuum_tabstat() taking a long time, but that code\nhas not changed since 8.1 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2007 13:17:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade "
},
{
"msg_contents": "Kim <[email protected]> writes:\n> size of pg_class: 535,226\n> number of relations: 108,694\n\nOh, I shoulda read all the way to the bottom of your email :-(. What\nversion of PG were you running before? I would think that pretty much\nany version of pgstat_vacuum_tabstats would have had a performance issue\nwith pg_class that large. Also, could we see\n\n\tselect relkind, count(*) from pg_class group by relkind;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2007 13:20:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade "
},
{
"msg_contents": "Hey Tom,\nWe were running on 8.1.1 previous to upgrading to 8.2, and yes, we \ndefinitely have a heafty pg_class. The inheritance model is heavily used \nin our schema (the results of the group by you wanted to see are down \nbelow). However, no significant problems were seen with vacs while we \nwere on 8.1. Execution time for the vac seemed more linked to large \ntable size and how active the table was with updates, rather than being \nuniversally over 10 minutes regardless of the vac's object. We will be \ndoing an audit of our 8.2 install to try and make sure that it looks \nlike a complete install, any tests you can think of that may further \nnarrow things down for us?\n\n\n\n relkind | count\n---------+--------\n v | 1740\n t | 49986\n c | 4\n S | 57\n r | 108689\n i | 374723\n(6 rows)\n\n\n\nTom Lane wrote:\n\n>Kim <[email protected]> writes:\n> \n>\n>>size of pg_class: 535,226\n>>number of relations: 108,694\n>> \n>>\n>\n>Oh, I shoulda read all the way to the bottom of your email :-(. What\n>version of PG were you running before? I would think that pretty much\n>any version of pgstat_vacuum_tabstats would have had a performance issue\n>with pg_class that large. Also, could we see\n>\n>\tselect relkind, count(*) from pg_class group by relkind;\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n> \n>\n\n\n\n\n\n\nHey Tom,\nWe were running on 8.1.1 previous to upgrading to 8.2, and yes, we\ndefinitely have a heafty pg_class. The inheritance model is heavily\nused in our schema (the results of the group by you wanted to see are\ndown below). However, no significant problems were seen with vacs\nwhile we were on 8.1. Execution time for the vac seemed more linked to\nlarge table size and how active the table was with updates, rather than\nbeing universally over 10 minutes regardless of the vac's object. We\nwill be doing an audit of our 8.2 install to try and make sure that it\nlooks like a complete install, any tests you can think of that may\nfurther narrow things down for us? \n\n\n\n relkind | count\n---------+--------\n v | 1740\n t | 49986\n c | 4\n S | 57\n r | 108689\n i | 374723\n(6 rows)\n\n\n\nTom Lane wrote:\n\nKim <[email protected]> writes:\n \n\nsize of pg_class: 535,226\nnumber of relations: 108,694\n \n\n\nOh, I shoulda read all the way to the bottom of your email :-(. What\nversion of PG were you running before? I would think that pretty much\nany version of pgstat_vacuum_tabstats would have had a performance issue\nwith pg_class that large. Also, could we see\n\n\tselect relkind, count(*) from pg_class group by relkind;\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org",
"msg_date": "Thu, 11 Jan 2007 13:13:56 -0600",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade"
},
{
"msg_contents": "Kim <[email protected]> writes:\n> We were running on 8.1.1 previous to upgrading to 8.2, and yes, we \n> definitely have a heafty pg_class. The inheritance model is heavily used \n> in our schema (the results of the group by you wanted to see are down \n> below). However, no significant problems were seen with vacs while we \n> were on 8.1.\n\nOdd, because the 8.1 code looks about the same, and it is perfectly\nobvious in hindsight that its runtime is about O(N^2) in the number of\nrelations :-(. At least that'd be the case if the stats collector\noutput were fully populated. Did you have either stats_block_level or\nstats_row_level turned on in 8.1? If not, maybe the reason for the\nchange is that in 8.2, that table *will* be pretty fully populated,\nbecause now it's got a last-vacuum-time entry that gets made even if the\nstats are otherwise turned off. Perhaps making that non-disablable\nwasn't such a hot idea :-(.\n\nWhat I think we need to do about this is\n\n(1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\nof using a hash table for the OIDs instead of a linear list. Should be\na pretty small change; I'll work on it today.\n\n(2) Reconsider whether last-vacuum-time should be sent to the collector\nunconditionally.\n\nComments from hackers?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2007 14:45:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade "
},
{
"msg_contents": "Tom Lane wrote:\n\n> What I think we need to do about this is\n> \n> (1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\n> of using a hash table for the OIDs instead of a linear list. Should be\n> a pretty small change; I'll work on it today.\n> \n> (2) Reconsider whether last-vacuum-time should be sent to the collector\n> unconditionally.\n\n(2) seems a perfectly reasonably answer, but ISTM (1) would be good to\nhave anyway (at least in HEAD).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 11 Jan 2007 16:49:28 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2 upgrade"
},
{
"msg_contents": "On Thu, 2007-01-11 at 14:45 -0500, Tom Lane wrote:\n> Kim <[email protected]> writes:\n> > We were running on 8.1.1 previous to upgrading to 8.2, and yes, we \n> > definitely have a heafty pg_class. The inheritance model is heavily used \n> > in our schema (the results of the group by you wanted to see are down \n> > below). However, no significant problems were seen with vacs while we \n> > were on 8.1.\n> \n> Odd, because the 8.1 code looks about the same, and it is perfectly\n> obvious in hindsight that its runtime is about O(N^2) in the number of\n> relations :-(. At least that'd be the case if the stats collector\n> output were fully populated. Did you have either stats_block_level or\n> stats_row_level turned on in 8.1? If not, maybe the reason for the\n> change is that in 8.2, that table *will* be pretty fully populated,\n> because now it's got a last-vacuum-time entry that gets made even if the\n> stats are otherwise turned off. Perhaps making that non-disablable\n> wasn't such a hot idea :-(.\n> \n> What I think we need to do about this is\n> \n> (1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\n> of using a hash table for the OIDs instead of a linear list. Should be\n> a pretty small change; I'll work on it today.\n> \n> (2) Reconsider whether last-vacuum-time should be sent to the collector\n> unconditionally.\n> \n> Comments from hackers?\n\nIt's not clear to me how this fix will alter the INSERT issue Kim\nmentions. Are those issues connected? Or are you thinking that handling\nstats in a tight loop is slowing down other aspects of the system?\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jan 2007 20:40:06 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade"
},
{
"msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> It's not clear to me how this fix will alter the INSERT issue Kim\n> mentions.\n\nI didn't say that it would; we have no information on the INSERT issue,\nso I'm just concentrating on the problem that he did provide info on.\n\n(BTW, I suppose the slow-\\d issue is the regex planning problem we\nalready knew about.)\n\nI'm frankly not real surprised that there are performance issues with\nsuch a huge pg_class; it's not a regime that anyone's spent any time\noptimizing. It is interesting that 8.2 seems to have regressed but\nI can think of several places that would've been bad before. One is\nthat there are seqscans of pg_inherits ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2007 16:11:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2 upgrade "
},
{
"msg_contents": "On Thu, Jan 11, 2007 at 04:49:28PM -0300, Alvaro Herrera wrote:\n> Tom Lane wrote:\n> \n> > What I think we need to do about this is\n> > \n> > (1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\n> > of using a hash table for the OIDs instead of a linear list. Should be\n> > a pretty small change; I'll work on it today.\n> > \n> > (2) Reconsider whether last-vacuum-time should be sent to the collector\n> > unconditionally.\n> \n> (2) seems a perfectly reasonably answer, but ISTM (1) would be good to\n> have anyway (at least in HEAD).\n\nActually, I'd rather see the impact #1 has before adding #2... If #1\nmeans we're good for even someone with 10M relations, I don't see much\npoint in #2.\n\nBTW, we're now starting to see more users with a large number of\nrelations, thanks to partitioning. It would probably be wise to expand\ntest coverage for that case, especially when it comes to performance.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 11 Jan 2007 15:16:09 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2 upgrade"
},
{
"msg_contents": "For 8.1, we did have stats_block_level and stats_row_level on, so thats \nnot it either :-/ However, I did go on to an alternate database of ours \non the same machine, using the same install, same postmaster - that \nholds primarily static relations, and not many of those (16 relations \ntotal). The response of running a vac for a 1.3k static table was quick \n(6 seconds - but it still did not set the last_vacuum field). Not sure \nwhy we weren't seeing more probs with this on 8.1 for the full db, but \nfrom the looks of things I think your theory on the primary problem with \nour vacs is solid. I'm hoping we can fire up our old 8.1 dataset and run \nsome tests on there to confirm/reject the idea that it was doing any \nbetter, but that will require quieter times on the machine than we've \ngot right now :)\n\nWe are going to try and upgrade to 8.2.1 as soon as we can, and if we \ncontinue to see some of the other problems I mentioned as side-notes, \nwe'll build some information on those and pass it along...\n\nThanks so much!\n\nKim\n\n\nTom Lane wrote:\n\n>Kim <[email protected]> writes:\n> \n>\n>>We were running on 8.1.1 previous to upgrading to 8.2, and yes, we \n>>definitely have a heafty pg_class. The inheritance model is heavily used \n>>in our schema (the results of the group by you wanted to see are down \n>>below). However, no significant problems were seen with vacs while we \n>>were on 8.1.\n>> \n>>\n>\n>Odd, because the 8.1 code looks about the same, and it is perfectly\n>obvious in hindsight that its runtime is about O(N^2) in the number of\n>relations :-(. At least that'd be the case if the stats collector\n>output were fully populated. Did you have either stats_block_level or\n>stats_row_level turned on in 8.1? If not, maybe the reason for the\n>change is that in 8.2, that table *will* be pretty fully populated,\n>because now it's got a last-vacuum-time entry that gets made even if the\n>stats are otherwise turned off. Perhaps making that non-disablable\n>wasn't such a hot idea :-(.\n>\n>What I think we need to do about this is\n>\n>(1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\n>of using a hash table for the OIDs instead of a linear list. Should be\n>a pretty small change; I'll work on it today.\n>\n>(2) Reconsider whether last-vacuum-time should be sent to the collector\n>unconditionally.\n>\n>Comments from hackers?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n>\n> \n>\n\n\n\n\n\n\nFor 8.1, we did have stats_block_level and stats_row_level on, so thats\nnot it either :-/ However, I did go on to an alternate database of ours\non the same machine, using the same install, same postmaster - that\nholds primarily static relations, and not many of those (16 relations\ntotal). The response of running a vac for a 1.3k static table was quick\n(6 seconds - but it still did not set the last_vacuum field). Not sure\nwhy we weren't seeing more probs with this on 8.1 for the full db, but\nfrom the looks of things I think your theory on the primary problem\nwith our vacs is solid. 
I'm hoping we can fire up our old 8.1 dataset\nand run some tests on there to confirm/reject the idea that it was\ndoing any better, but that will require quieter times on the machine\nthan we've got right now :) \n\nWe are going to try and upgrade to 8.2.1 as soon as we can, and if we\ncontinue to see some of the other problems I mentioned as side-notes,\nwe'll build some information on those and pass it along...\n\nThanks so much!\n\nKim\n\n\nTom Lane wrote:\n\nKim <[email protected]> writes:\n \n\nWe were running on 8.1.1 previous to upgrading to 8.2, and yes, we \ndefinitely have a heafty pg_class. The inheritance model is heavily used \nin our schema (the results of the group by you wanted to see are down \nbelow). However, no significant problems were seen with vacs while we \nwere on 8.1.\n \n\n\nOdd, because the 8.1 code looks about the same, and it is perfectly\nobvious in hindsight that its runtime is about O(N^2) in the number of\nrelations :-(. At least that'd be the case if the stats collector\noutput were fully populated. Did you have either stats_block_level or\nstats_row_level turned on in 8.1? If not, maybe the reason for the\nchange is that in 8.2, that table *will* be pretty fully populated,\nbecause now it's got a last-vacuum-time entry that gets made even if the\nstats are otherwise turned off. Perhaps making that non-disablable\nwasn't such a hot idea :-(.\n\nWhat I think we need to do about this is\n\n(1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\nof using a hash table for the OIDs instead of a linear list. Should be\na pretty small change; I'll work on it today.\n\n(2) Reconsider whether last-vacuum-time should be sent to the collector\nunconditionally.\n\nComments from hackers?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster",
"msg_date": "Thu, 11 Jan 2007 15:52:23 -0600",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade"
},
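The "last_vacuum field" Kim mentions is the per-table vacuum timestamp that 8.2 exposes in the pg_stat_all_tables view. A minimal spot-check for whether a manual vacuum was reported back to the stats collector could look like the sketch below; the table name is only a placeholder, not one from Kim's schema. If the collector never created an entry for the table at all (the behaviour the code comment quoted later in this thread describes), the timestamp will simply show as NULL.

    SELECT relname, last_vacuum, last_autovacuum
      FROM pg_stat_all_tables
     WHERE relname = 'some_static_table';   -- placeholder table name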
{
"msg_contents": "I wrote:\n> (2) Reconsider whether last-vacuum-time should be sent to the collector\n> unconditionally.\n\nActually, now that I look, the collector already contains this logic:\n\n /*\n * Don't create either the database or table entry if it doesn't already\n * exist. This avoids bloating the stats with entries for stuff that is\n * only touched by vacuum and not by live operations.\n */\n\nand ditto for analyze messages. So my idea that the addition of\nlast-vac-time was causing an increase in the statistics file size\ncompared to 8.1 seems wrong.\n\nHow large is your $PGDATA/global/pgstat.stat file, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2007 17:26:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade "
},
{
"msg_contents": "I wrote:\n> What I think we need to do about this is\n> (1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\n> of using a hash table for the OIDs instead of a linear list. Should be\n> a pretty small change; I'll work on it today.\n\nI've applied the attached patch to 8.2 to do the above. Please give it\na try and see how much it helps for you. Some limited testing here\nconfirms a noticeable improvement in VACUUM startup time at 10000\ntables, and of course it should be 100X worse with 100000 tables.\n\nI am still confused why you didn't see the problem in 8.1, though.\nThis code is just about exactly the same in 8.1. Maybe you changed\nyour stats collector settings when moving to 8.2?\n\n\t\t\tregards, tom lane\n\n\nIndex: pgstat.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\nretrieving revision 1.140\ndiff -c -r1.140 pgstat.c\n*** pgstat.c\t21 Nov 2006 20:59:52 -0000\t1.140\n--- pgstat.c\t11 Jan 2007 22:32:30 -0000\n***************\n*** 159,164 ****\n--- 159,165 ----\n static void pgstat_read_statsfile(HTAB **dbhash, Oid onlydb);\n static void backend_read_statsfile(void);\n static void pgstat_read_current_status(void);\n+ static HTAB *pgstat_collect_oids(Oid catalogid);\n \n static void pgstat_setheader(PgStat_MsgHdr *hdr, StatMsgType mtype);\n static void pgstat_send(void *msg, int len);\n***************\n*** 657,666 ****\n void\n pgstat_vacuum_tabstat(void)\n {\n! \tList\t *oidlist;\n! \tRelation\trel;\n! \tHeapScanDesc scan;\n! \tHeapTuple\ttup;\n \tPgStat_MsgTabpurge msg;\n \tHASH_SEQ_STATUS hstat;\n \tPgStat_StatDBEntry *dbentry;\n--- 658,664 ----\n void\n pgstat_vacuum_tabstat(void)\n {\n! \tHTAB\t *htab;\n \tPgStat_MsgTabpurge msg;\n \tHASH_SEQ_STATUS hstat;\n \tPgStat_StatDBEntry *dbentry;\n***************\n*** 679,693 ****\n \t/*\n \t * Read pg_database and make a list of OIDs of all existing databases\n \t */\n! \toidlist = NIL;\n! \trel = heap_open(DatabaseRelationId, AccessShareLock);\n! \tscan = heap_beginscan(rel, SnapshotNow, 0, NULL);\n! \twhile ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)\n! \t{\n! \t\toidlist = lappend_oid(oidlist, HeapTupleGetOid(tup));\n! \t}\n! \theap_endscan(scan);\n! \theap_close(rel, AccessShareLock);\n \n \t/*\n \t * Search the database hash table for dead databases and tell the\n--- 677,683 ----\n \t/*\n \t * Read pg_database and make a list of OIDs of all existing databases\n \t */\n! \thtab = pgstat_collect_oids(DatabaseRelationId);\n \n \t/*\n \t * Search the database hash table for dead databases and tell the\n***************\n*** 698,709 ****\n \t{\n \t\tOid\t\t\tdbid = dbentry->databaseid;\n \n! \t\tif (!list_member_oid(oidlist, dbid))\n \t\t\tpgstat_drop_database(dbid);\n \t}\n \n \t/* Clean up */\n! \tlist_free(oidlist);\n \n \t/*\n \t * Lookup our own database entry; if not found, nothing more to do.\n--- 688,701 ----\n \t{\n \t\tOid\t\t\tdbid = dbentry->databaseid;\n \n! \t\tCHECK_FOR_INTERRUPTS();\n! \n! \t\tif (hash_search(htab, (void *) &dbid, HASH_FIND, NULL) == NULL)\n \t\t\tpgstat_drop_database(dbid);\n \t}\n \n \t/* Clean up */\n! \thash_destroy(htab);\n \n \t/*\n \t * Lookup our own database entry; if not found, nothing more to do.\n***************\n*** 717,731 ****\n \t/*\n \t * Similarly to above, make a list of all known relations in this DB.\n \t */\n! \toidlist = NIL;\n! \trel = heap_open(RelationRelationId, AccessShareLock);\n! \tscan = heap_beginscan(rel, SnapshotNow, 0, NULL);\n! 
\twhile ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)\n! \t{\n! \t\toidlist = lappend_oid(oidlist, HeapTupleGetOid(tup));\n! \t}\n! \theap_endscan(scan);\n! \theap_close(rel, AccessShareLock);\n \n \t/*\n \t * Initialize our messages table counter to zero\n--- 709,715 ----\n \t/*\n \t * Similarly to above, make a list of all known relations in this DB.\n \t */\n! \thtab = pgstat_collect_oids(RelationRelationId);\n \n \t/*\n \t * Initialize our messages table counter to zero\n***************\n*** 738,750 ****\n \thash_seq_init(&hstat, dbentry->tables);\n \twhile ((tabentry = (PgStat_StatTabEntry *) hash_seq_search(&hstat)) != NULL)\n \t{\n! \t\tif (list_member_oid(oidlist, tabentry->tableid))\n \t\t\tcontinue;\n \n \t\t/*\n \t\t * Not there, so add this table's Oid to the message\n \t\t */\n! \t\tmsg.m_tableid[msg.m_nentries++] = tabentry->tableid;\n \n \t\t/*\n \t\t * If the message is full, send it out and reinitialize to empty\n--- 722,738 ----\n \thash_seq_init(&hstat, dbentry->tables);\n \twhile ((tabentry = (PgStat_StatTabEntry *) hash_seq_search(&hstat)) != NULL)\n \t{\n! \t\tOid\t\t\ttabid = tabentry->tableid;\n! \n! \t\tCHECK_FOR_INTERRUPTS();\n! \n! \t\tif (hash_search(htab, (void *) &tabid, HASH_FIND, NULL) != NULL)\n \t\t\tcontinue;\n \n \t\t/*\n \t\t * Not there, so add this table's Oid to the message\n \t\t */\n! \t\tmsg.m_tableid[msg.m_nentries++] = tabid;\n \n \t\t/*\n \t\t * If the message is full, send it out and reinitialize to empty\n***************\n*** 776,782 ****\n \t}\n \n \t/* Clean up */\n! \tlist_free(oidlist);\n }\n \n \n--- 764,813 ----\n \t}\n \n \t/* Clean up */\n! \thash_destroy(htab);\n! }\n! \n! \n! /* ----------\n! * pgstat_collect_oids() -\n! *\n! *\tCollect the OIDs of either all databases or all tables, according to\n! *\tthe parameter, into a temporary hash table. Caller should hash_destroy\n! *\tthe result when done with it.\n! * ----------\n! */\n! static HTAB *\n! pgstat_collect_oids(Oid catalogid)\n! {\n! \tHTAB\t *htab;\n! \tHASHCTL\t\thash_ctl;\n! \tRelation\trel;\n! \tHeapScanDesc scan;\n! \tHeapTuple\ttup;\n! \n! \tmemset(&hash_ctl, 0, sizeof(hash_ctl));\n! \thash_ctl.keysize = sizeof(Oid);\n! \thash_ctl.entrysize = sizeof(Oid);\n! \thash_ctl.hash = oid_hash;\n! \thtab = hash_create(\"Temporary table of OIDs\",\n! \t\t\t\t\t PGSTAT_TAB_HASH_SIZE,\n! \t\t\t\t\t &hash_ctl,\n! \t\t\t\t\t HASH_ELEM | HASH_FUNCTION);\n! \n! \trel = heap_open(catalogid, AccessShareLock);\n! \tscan = heap_beginscan(rel, SnapshotNow, 0, NULL);\n! \twhile ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)\n! \t{\n! \t\tOid\t\tthisoid = HeapTupleGetOid(tup);\n! \n! \t\tCHECK_FOR_INTERRUPTS();\n! \n! \t\t(void) hash_search(htab, (void *) &thisoid, HASH_ENTER, NULL);\n! \t}\n! \theap_endscan(scan);\n! \theap_close(rel, AccessShareLock);\n! \n! \treturn htab;\n }",
"msg_date": "Thu, 11 Jan 2007 18:10:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2 upgrade "
},
{
"msg_contents": "Our pgstats.stat file is 40M for 8.2, on 8.1 it was 33M. Our schema size \nhasn't grown *that* much in the two weeks since we upgraded\n\nI'm not sure if this sheds any more light on the situation, but in \nscanning down through the process output from truss, it looks like the \nfirst section of output was a large chunk of reads on pgstat.stat, \nfollowed by a larger chunk of reads on the global directory and \ndirectories under base - this whole section probably went on for a good \n6-7 minutes, though I would say the reads on pgstat likely finished \nwithin a couple of minutes or so. Following this there was a phase were \nit did a lot of seeks and reads on files under pg_clog, and it was while \ndoing this (or perhaps it had finished whatever it wanted with clogs) it \ndropped into the send()/SIGUSR1 loop that goes for another several minutes.\n\nKim\n\n\nTom Lane wrote:\n\n>I wrote:\n> \n>\n>>(2) Reconsider whether last-vacuum-time should be sent to the collector\n>>unconditionally.\n>> \n>>\n>\n>Actually, now that I look, the collector already contains this logic:\n>\n> /*\n> * Don't create either the database or table entry if it doesn't already\n> * exist. This avoids bloating the stats with entries for stuff that is\n> * only touched by vacuum and not by live operations.\n> */\n>\n>and ditto for analyze messages. So my idea that the addition of\n>last-vac-time was causing an increase in the statistics file size\n>compared to 8.1 seems wrong.\n>\n>How large is your $PGDATA/global/pgstat.stat file, anyway?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n>\n> \n>\n\n\n\n\n\n\nOur pgstats.stat file is 40M for 8.2, on 8.1 it was 33M. Our schema\nsize hasn't grown *that* much in the two weeks since we upgraded\n\nI'm not sure if this sheds any more light on the situation, but in\nscanning down through the process output from truss, it looks like the\nfirst section of output was a large chunk of reads on pgstat.stat,\nfollowed by a larger chunk of reads on the global directory and\ndirectories under base - this whole section probably went on for a good\n6-7 minutes, though I would say the reads on pgstat likely finished\nwithin a couple of minutes or so. Following this there was a phase were\nit did a lot of seeks and reads on files under pg_clog, and it was\nwhile doing this (or perhaps it had finished whatever it wanted with\nclogs) it dropped into the send()/SIGUSR1 loop that goes for another\nseveral minutes. \n\nKim\n\n\nTom Lane wrote:\n\nI wrote:\n \n\n(2) Reconsider whether last-vacuum-time should be sent to the collector\nunconditionally.\n \n\n\nActually, now that I look, the collector already contains this logic:\n\n /*\n * Don't create either the database or table entry if it doesn't already\n * exist. This avoids bloating the stats with entries for stuff that is\n * only touched by vacuum and not by live operations.\n */\n\nand ditto for analyze messages. So my idea that the addition of\nlast-vac-time was causing an increase in the statistics file size\ncompared to 8.1 seems wrong.\n\nHow large is your $PGDATA/global/pgstat.stat file, anyway?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend",
"msg_date": "Thu, 11 Jan 2007 17:12:46 -0600",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unusual performance for vac following 8.2 upgrade"
},
{
"msg_contents": "On Thu, 2007-01-11 at 16:11 -0500, Tom Lane wrote:\n> \"Simon Riggs\" <[email protected]> writes:\n> > It's not clear to me how this fix will alter the INSERT issue Kim\n> > mentions.\n> \n> I didn't say that it would; we have no information on the INSERT issue,\n> so I'm just concentrating on the problem that he did provide info on.\n\nOK.\n\n> I'm frankly not real surprised that there are performance issues with\n> such a huge pg_class; it's not a regime that anyone's spent any time\n> optimizing. \n\nYeh, I saw a pg_class that big once, but it just needed a VACUUM.\n\nTemp relations still make pg_class entried don't they? Is that on the\nTODO list to change?\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jan 2007 23:14:34 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2upgrade"
},
{
"msg_contents": "Simon Riggs wrote:\n\n> Temp relations still make pg_class entried don't they? Is that on the\n> TODO list to change?\n\nYeah, and pg_attribute entries as well, which may be more problematic\nbecause they are a lot. Did we get rid of pg_attribute entries for\nsystem attributes already?\n\nCan we actually get rid of pg_class entries for temp tables. Maybe\ncreating a \"temp pg_class\" which would be local to each session? Heck,\nit doesn't even have to be an actual table -- it just needs to be\nsomewhere from where we can load entries into the relcache.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 11 Jan 2007 22:09:18 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2upgrade"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Can we actually get rid of pg_class entries for temp tables. Maybe\n> creating a \"temp pg_class\" which would be local to each session? Heck,\n> it doesn't even have to be an actual table -- it just needs to be\n> somewhere from where we can load entries into the relcache.\n\nA few things to think about:\n\n1. You'll break a whole lotta client-side code if temp tables disappear\nfrom pg_class. This is probably solvable --- one thought is to give\npg_class an inheritance child that is a view on a SRF that reads out the\nstored-in-memory rows for temp pg_class entries. Likewise for\npg_attribute and everything else related to a table definition.\n\n2. How do you keep the OIDs for temp tables (and their associated\nrowtypes) from conflicting with OIDs for real tables? Given the way\nthat OID generation works, there wouldn't be any real problem unless a\ntemp table survived for as long as it takes the OID counter to wrap all\nthe way around --- but in a database that has WITH OIDS user tables,\nthat might not be impossibly long ...\n\n3. What about dependencies on user-defined types, functions, etc?\nHow will you get things to behave sanely if one backend tries to drop a\ntype that some other backend is using in a column of a temp table? Even\nif you put entries into pg_depend, which would kind of defeat the point\nof not having on-disk catalog entries for temp tables, I don't see how\nthe other backend figures out what the referencing object is.\n\nI don't really see any solution to that last point :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2007 21:51:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2upgrade "
},
{
"msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Can we actually get rid of pg_class entries for temp tables. Maybe\n>> creating a \"temp pg_class\" which would be local to each session? Heck,\n>> it doesn't even have to be an actual table -- it just needs to be\n>> somewhere from where we can load entries into the relcache.\n> \n> A few things to think about:\n> \n> 1. You'll break a whole lotta client-side code if temp tables disappear\n> from pg_class.\n\n> 2. How do you keep the OIDs for temp tables (and their associated\n> rowtypes) from conflicting with OIDs for real tables?\n\n> 3. What about dependencies on user-defined types, functions, etc?\n\nIs there not some gain from just a \"standard\" partitioning of pg_class \ninto: (system-objects, user-objects, temp-objects)? I'd expect them to \nform a hierarchy of change+vacuum rates (if you see what I mean).\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 12 Jan 2007 09:06:40 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2upgrade"
},
{
"msg_contents": "\n\"Tom Lane\" <[email protected]> writes:\n\n> 3. What about dependencies on user-defined types, functions, etc?\n> How will you get things to behave sanely if one backend tries to drop a\n> type that some other backend is using in a column of a temp table? Even\n> if you put entries into pg_depend, which would kind of defeat the point\n> of not having on-disk catalog entries for temp tables, I don't see how\n> the other backend figures out what the referencing object is.\n\nWe could just lock the object it depends on. Only really makes sense for very\ntemporary tables though, not tables a session expects to use for a long series\nof transactions.\n\nAnother direction to go to address the same problem would be to implement the\nstandard temporary table concept of a permanent table definition for which\neach session gets a different actual set of data which is reset frequently.\nThen the meta-data isn't changing frequently.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 12 Jan 2007 12:43:42 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] unusual performance for vac following 8.2upgrade"
},
{
"msg_contents": "On Thu, Jan 11, 2007 at 09:51:39PM -0500, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Can we actually get rid of pg_class entries for temp tables. Maybe\n> > creating a \"temp pg_class\" which would be local to each session? Heck,\n> > it doesn't even have to be an actual table -- it just needs to be\n> > somewhere from where we can load entries into the relcache.\n> \n> A few things to think about:\n> \n> 1. You'll break a whole lotta client-side code if temp tables disappear\n> from pg_class. This is probably solvable --- one thought is to give\n> pg_class an inheritance child that is a view on a SRF that reads out the\n> stored-in-memory rows for temp pg_class entries. Likewise for\n> pg_attribute and everything else related to a table definition.\n> \n> 2. How do you keep the OIDs for temp tables (and their associated\n> rowtypes) from conflicting with OIDs for real tables? Given the way\n> that OID generation works, there wouldn't be any real problem unless a\n> temp table survived for as long as it takes the OID counter to wrap all\n> the way around --- but in a database that has WITH OIDS user tables,\n> that might not be impossibly long ...\n> \n> 3. What about dependencies on user-defined types, functions, etc?\n> How will you get things to behave sanely if one backend tries to drop a\n> type that some other backend is using in a column of a temp table? Even\n> if you put entries into pg_depend, which would kind of defeat the point\n> of not having on-disk catalog entries for temp tables, I don't see how\n> the other backend figures out what the referencing object is.\n> \n> I don't really see any solution to that last point :-(\n\nPerhaps it would be better to partition pg_class and _attributes based\non whether an object is temporary or not. Granted, that still means\nvacuuming is a consideration, but at least it wouldn't be affecting\npg_class itself. Separating temp objects out would also make it more\nreasonable to have the system automatically vacuum those tables after\nevery X number of dropped objects.\n\nUnfortunately, that still wouldn't help with the OID issue. :( Unless\nthere was a SERIAL column in pg_class_temp and other parts of the system\ncould differentiate between temp and non-temp objects.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 12 Jan 2007 11:33:06 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] unusual performance for vac following 8.2upgrade"
},
{
"msg_contents": "Hello again Tom,\n\nWe have our upgrade to 8.2.1 scheduled for this weekend, and we noticed \nyour message regarding the vacuum patch being applied to 8.2 and \nback-patched. I expect I know the answer to this next question :) but I \nwas wondering if the patch referenced below has also been bundled into \nthe normal source download of 8.2.1 or if we would still need to \nmanually apply it?\n\n- Fix a performance problem in databases with large numbers of tables\n (or other types of pg_class entry): the function\n pgstat_vacuum_tabstat, invoked during VACUUM startup, had runtime\n proportional to the number of stats table entries times the number\n of pg_class rows; in other words O(N^2) if the stats collector's\n information is reasonably complete. Replace list searching with a\n hash table to bring it back to O(N) behavior. Per report from kim\n at myemma.com. Back-patch as far as 8.1; 8.0 and before use\n different coding here.\n\nThanks,\nKim\n\n\nTom Lane wrote:\n\n>I wrote:\n> \n>\n>>What I think we need to do about this is\n>>(1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking\n>>of using a hash table for the OIDs instead of a linear list. Should be\n>>a pretty small change; I'll work on it today.\n>> \n>>\n>\n>I've applied the attached patch to 8.2 to do the above. Please give it\n>a try and see how much it helps for you. Some limited testing here\n>confirms a noticeable improvement in VACUUM startup time at 10000\n>tables, and of course it should be 100X worse with 100000 tables.\n>\n>I am still confused why you didn't see the problem in 8.1, though.\n>This code is just about exactly the same in 8.1. Maybe you changed\n>your stats collector settings when moving to 8.2?\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n>------------------------------------------------------------------------\n>\n>Index: pgstat.c\n>===================================================================\n>RCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\n>retrieving revision 1.140\n>diff -c -r1.140 pgstat.c\n>*** pgstat.c\t21 Nov 2006 20:59:52 -0000\t1.140\n>--- pgstat.c\t11 Jan 2007 22:32:30 -0000\n>***************\n>*** 159,164 ****\n>--- 159,165 ----\n> static void pgstat_read_statsfile(HTAB **dbhash, Oid onlydb);\n> static void backend_read_statsfile(void);\n> static void pgstat_read_current_status(void);\n>+ static HTAB *pgstat_collect_oids(Oid catalogid);\n> \n> static void pgstat_setheader(PgStat_MsgHdr *hdr, StatMsgType mtype);\n> static void pgstat_send(void *msg, int len);\n>***************\n>*** 657,666 ****\n> void\n> pgstat_vacuum_tabstat(void)\n> {\n>! \tList\t *oidlist;\n>! \tRelation\trel;\n>! \tHeapScanDesc scan;\n>! \tHeapTuple\ttup;\n> \tPgStat_MsgTabpurge msg;\n> \tHASH_SEQ_STATUS hstat;\n> \tPgStat_StatDBEntry *dbentry;\n>--- 658,664 ----\n> void\n> pgstat_vacuum_tabstat(void)\n> {\n>! \tHTAB\t *htab;\n> \tPgStat_MsgTabpurge msg;\n> \tHASH_SEQ_STATUS hstat;\n> \tPgStat_StatDBEntry *dbentry;\n>***************\n>*** 679,693 ****\n> \t/*\n> \t * Read pg_database and make a list of OIDs of all existing databases\n> \t */\n>! \toidlist = NIL;\n>! \trel = heap_open(DatabaseRelationId, AccessShareLock);\n>! \tscan = heap_beginscan(rel, SnapshotNow, 0, NULL);\n>! \twhile ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)\n>! \t{\n>! \t\toidlist = lappend_oid(oidlist, HeapTupleGetOid(tup));\n>! \t}\n>! \theap_endscan(scan);\n>! 
\theap_close(rel, AccessShareLock);\n> \n> \t/*\n> \t * Search the database hash table for dead databases and tell the\n>--- 677,683 ----\n> \t/*\n> \t * Read pg_database and make a list of OIDs of all existing databases\n> \t */\n>! \thtab = pgstat_collect_oids(DatabaseRelationId);\n> \n> \t/*\n> \t * Search the database hash table for dead databases and tell the\n>***************\n>*** 698,709 ****\n> \t{\n> \t\tOid\t\t\tdbid = dbentry->databaseid;\n> \n>! \t\tif (!list_member_oid(oidlist, dbid))\n> \t\t\tpgstat_drop_database(dbid);\n> \t}\n> \n> \t/* Clean up */\n>! \tlist_free(oidlist);\n> \n> \t/*\n> \t * Lookup our own database entry; if not found, nothing more to do.\n>--- 688,701 ----\n> \t{\n> \t\tOid\t\t\tdbid = dbentry->databaseid;\n> \n>! \t\tCHECK_FOR_INTERRUPTS();\n>! \n>! \t\tif (hash_search(htab, (void *) &dbid, HASH_FIND, NULL) == NULL)\n> \t\t\tpgstat_drop_database(dbid);\n> \t}\n> \n> \t/* Clean up */\n>! \thash_destroy(htab);\n> \n> \t/*\n> \t * Lookup our own database entry; if not found, nothing more to do.\n>***************\n>*** 717,731 ****\n> \t/*\n> \t * Similarly to above, make a list of all known relations in this DB.\n> \t */\n>! \toidlist = NIL;\n>! \trel = heap_open(RelationRelationId, AccessShareLock);\n>! \tscan = heap_beginscan(rel, SnapshotNow, 0, NULL);\n>! \twhile ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)\n>! \t{\n>! \t\toidlist = lappend_oid(oidlist, HeapTupleGetOid(tup));\n>! \t}\n>! \theap_endscan(scan);\n>! \theap_close(rel, AccessShareLock);\n> \n> \t/*\n> \t * Initialize our messages table counter to zero\n>--- 709,715 ----\n> \t/*\n> \t * Similarly to above, make a list of all known relations in this DB.\n> \t */\n>! \thtab = pgstat_collect_oids(RelationRelationId);\n> \n> \t/*\n> \t * Initialize our messages table counter to zero\n>***************\n>*** 738,750 ****\n> \thash_seq_init(&hstat, dbentry->tables);\n> \twhile ((tabentry = (PgStat_StatTabEntry *) hash_seq_search(&hstat)) != NULL)\n> \t{\n>! \t\tif (list_member_oid(oidlist, tabentry->tableid))\n> \t\t\tcontinue;\n> \n> \t\t/*\n> \t\t * Not there, so add this table's Oid to the message\n> \t\t */\n>! \t\tmsg.m_tableid[msg.m_nentries++] = tabentry->tableid;\n> \n> \t\t/*\n> \t\t * If the message is full, send it out and reinitialize to empty\n>--- 722,738 ----\n> \thash_seq_init(&hstat, dbentry->tables);\n> \twhile ((tabentry = (PgStat_StatTabEntry *) hash_seq_search(&hstat)) != NULL)\n> \t{\n>! \t\tOid\t\t\ttabid = tabentry->tableid;\n>! \n>! \t\tCHECK_FOR_INTERRUPTS();\n>! \n>! \t\tif (hash_search(htab, (void *) &tabid, HASH_FIND, NULL) != NULL)\n> \t\t\tcontinue;\n> \n> \t\t/*\n> \t\t * Not there, so add this table's Oid to the message\n> \t\t */\n>! \t\tmsg.m_tableid[msg.m_nentries++] = tabid;\n> \n> \t\t/*\n> \t\t * If the message is full, send it out and reinitialize to empty\n>***************\n>*** 776,782 ****\n> \t}\n> \n> \t/* Clean up */\n>! \tlist_free(oidlist);\n> }\n> \n> \n>--- 764,813 ----\n> \t}\n> \n> \t/* Clean up */\n>! \thash_destroy(htab);\n>! }\n>! \n>! \n>! /* ----------\n>! * pgstat_collect_oids() -\n>! *\n>! *\tCollect the OIDs of either all databases or all tables, according to\n>! *\tthe parameter, into a temporary hash table. Caller should hash_destroy\n>! *\tthe result when done with it.\n>! * ----------\n>! */\n>! static HTAB *\n>! pgstat_collect_oids(Oid catalogid)\n>! {\n>! \tHTAB\t *htab;\n>! \tHASHCTL\t\thash_ctl;\n>! \tRelation\trel;\n>! \tHeapScanDesc scan;\n>! \tHeapTuple\ttup;\n>! \n>! \tmemset(&hash_ctl, 0, sizeof(hash_ctl));\n>! 
\thash_ctl.keysize = sizeof(Oid);\n>! \thash_ctl.entrysize = sizeof(Oid);\n>! \thash_ctl.hash = oid_hash;\n>! \thtab = hash_create(\"Temporary table of OIDs\",\n>! \t\t\t\t\t PGSTAT_TAB_HASH_SIZE,\n>! \t\t\t\t\t &hash_ctl,\n>! \t\t\t\t\t HASH_ELEM | HASH_FUNCTION);\n>! \n>! \trel = heap_open(catalogid, AccessShareLock);\n>! \tscan = heap_beginscan(rel, SnapshotNow, 0, NULL);\n>! \twhile ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)\n>! \t{\n>! \t\tOid\t\tthisoid = HeapTupleGetOid(tup);\n>! \n>! \t\tCHECK_FOR_INTERRUPTS();\n>! \n>! \t\t(void) hash_search(htab, (void *) &thisoid, HASH_ENTER, NULL);\n>! \t}\n>! \theap_endscan(scan);\n>! \theap_close(rel, AccessShareLock);\n>! \n>! \treturn htab;\n> }",
"msg_date": "Wed, 17 Jan 2007 10:55:53 -0600",
"msg_from": "Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] unusual performance for vac following 8.2"
}
] |
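A recurring theme in the thread above is just how large pg_class had become. A quick, generic way to gauge that for any database — not necessarily the exact group-by Tom asked Kim to run, which isn't reproduced in the excerpt — is a count over pg_class by relation kind:

    SELECT relkind, count(*) AS n
      FROM pg_class
     GROUP BY relkind
     ORDER BY n DESC;

Per Tom's notes, the pre-patch VACUUM startup cost grew roughly with the square of this count, so tens of thousands of rows here is where the slowdown becomes very noticeable.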
[
{
"msg_contents": "Hi,\n\nI know that the problem with the following SQL is the \"LOG.CODCEP =\nENDE.CODCEP||CODLOG\" condition, but what can I\ndo to improve the performance?\n\nIs there a type of index that could help or is there another way to build\nthis SQL?\n\nThank you in advance!\n\nexplain analyze\nSELECT ENDE.* , DEND.DESEND, DEND.USOEND, DEND.DUPEND,\n to_char('F') as NOVO,\n LOG.TIPLOG\n FROM TT_END ENDE LEFT OUTER JOIN TD_END DEND ON DEND.CODTAB =\nENDE.TIPEND\n LEFT OUTER JOIN TT_LOG LOG ON LOG.CODCEP =\nENDE.CODCEP||CODLOG\n WHERE ENDE.FILCLI = '001'\n AND ENDE.CODCLI = ' 19475';\n\n\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------------------\n Nested Loop Left Join (cost=0.00..25366.84 rows=1259 width=417) (actual\ntime=1901.499..1901.529 rows=1 loops=1)\n Join Filter: ((\"inner\".codcep)::text = ((\"outer\".codcep)::text ||\n(\"outer\".codlog)::text))\n -> Nested Loop Left Join (cost=0.00..4.91 rows=1 width=412) (actual\ntime=0.117..0.144 rows=1 loops=1)\n Join Filter: (\"inner\".codtab = \"outer\".tipend)\n -> Index Scan using pk_end on tt_end ende (cost=0.00..3.87 rows=1\nwidth=388) (actual time=0.066..0.078 rows=1 loops=1)\n Index Cond: ((filcli = '001'::bpchar) AND (codcli = '\n19475'::bpchar))\n -> Seq Scan on td_end dend (cost=0.00..1.02 rows=2 width=33)\n(actual time=0.012..0.018 rows=2 loops=1)\n -> Seq Scan on tt_log log (cost=0.00..12254.24 rows=582424 width=17)\n(actual time=0.013..582.521 rows=582424 loops=1)\n Total runtime: 1901.769 ms\n(9 rows)\n\n\\d tt_log\n Table \"TOTALL.tt_log\"\n Column | Type | Modifiers\n--------+------------------------+-----------\n codbai | numeric(5,0) | not null\n nomlog | character varying(55) | not null\n codcep | character(8) | not null\n\n\\d tt_end\n Table \"TOTALL.tt_end\"\n Column | Type | Modifiers\n--------+-----------------------+-----------------------------------------\n...\n...\n...\n codlog | character(3) |\n...\n...\n...\n codcep | character(5) |\n...\n...\nReimer\n\n\n\n\n\n\n\nHi,\n \nI know \nthat the problem with the following SQL is the \"LOG.CODCEP = \nENDE.CODCEP||CODLOG\" condition, but what can Ido to improve the \nperformance?\n \nIs there a type of index that could help or is \nthere another way to build this SQL?\n \nThank you in advance!\n \nexplain analyze SELECT ENDE.* , DEND.DESEND, \nDEND.USOEND, \nDEND.DUPEND, \nto_char('F') as \nNOVO, \nLOG.TIPLOG FROM TT_END \nENDE LEFT OUTER JOIN TD_END DEND ON DEND.CODTAB = \nENDE.TIPEND \nLEFT OUTER JOIN TT_LOG LOG ON LOG.CODCEP = \nENDE.CODCEP||CODLOG WHERE \nENDE.FILCLI = \n'001' AND \nENDE.CODCLI = ' 19475';\n \n \nQUERY \nPLAN \n---------------------------------------------------------------------------------------------------------------------------------- Nested \nLoop Left Join (cost=0.00..25366.84 rows=1259 width=417) (actual \ntime=1901.499..1901.529 rows=1 loops=1) Join Filter: \n((\"inner\".codcep)::text = ((\"outer\".codcep)::text || \n(\"outer\".codlog)::text)) -> Nested Loop Left Join \n(cost=0.00..4.91 rows=1 width=412) (actual time=0.117..0.144 rows=1 \nloops=1) Join Filter: \n(\"inner\".codtab = \n\"outer\".tipend) -> \nIndex Scan using pk_end on tt_end ende (cost=0.00..3.87 rows=1 width=388) \n(actual time=0.066..0.078 rows=1 \nloops=1) \nIndex Cond: ((filcli = '001'::bpchar) AND (codcli = ' \n19475'::bpchar)) -> \nSeq Scan on td_end dend (cost=0.00..1.02 rows=2 width=33) (actual \ntime=0.012..0.018 rows=2 loops=1) -> Seq Scan on tt_log \nlog 
(cost=0.00..12254.24 rows=582424 width=17) (actual time=0.013..582.521 \nrows=582424 loops=1) Total runtime: 1901.769 ms(9 rows)\n \n\\d \ntt_log \nTable \"TOTALL.tt_log\" Column \n| \nType | \nModifiers--------+------------------------+----------- codbai | \nnumeric(5,0) | not \nnull nomlog | character varying(55) | not null codcep | \ncharacter(8) | not \nnull\n \n\\d \ntt_end \nTable \"TOTALL.tt_end\" Column \n| \nType \n| \nModifiers--------+-----------------------+-----------------------------------------......... codlog \n| character(3) \n|......... codcep | \ncharacter(5) \n|......\nReimer",
"msg_date": "Thu, 11 Jan 2007 16:17:06 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving SQL performance"
},
{
"msg_contents": "Carlos H. Reimer wrote:\n> Hi,\n> \n> I know that the problem with the following SQL is the \"LOG.CODCEP = \n> ENDE.CODCEP||CODLOG\" condition, but what can I\n> do to improve the performance?\n> \nI wouldn't say it's the join condition. There is a nested loop join on \n500k+ rows.\nIs it possible to put an index on LOG.CODCEP?\n\nThat might give you a better plan, as you only have 1 row in the left of \nthe join. so index scan would be preferable.\n\nRegards\n\nRussell Smith\n> Is there a type of index that could help or is there another way to \n> build this SQL?\n> \n> Thank you in advance!\n> \n> explain analyze\n> SELECT ENDE.* , DEND.DESEND, DEND.USOEND, DEND.DUPEND,\n> to_char('F') as NOVO,\n> LOG.TIPLOG\n> FROM TT_END ENDE LEFT OUTER JOIN TD_END DEND ON DEND.CODTAB \n> = ENDE.TIPEND\n> LEFT OUTER JOIN TT_LOG LOG ON LOG.CODCEP = \n> ENDE.CODCEP||CODLOG\n> WHERE ENDE.FILCLI = '001'\n> AND ENDE.CODCLI = ' 19475';\n> \n>\n> QUERY \n> PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.00..25366.84 rows=1259 width=417) \n> (actual time=1901.499..1901.529 rows=1 loops=1)\n> Join Filter: ((\"inner\".codcep)::text = ((\"outer\".codcep)::text || \n> (\"outer\".codlog)::text))\n> -> Nested Loop Left Join (cost=0.00..4.91 rows=1 width=412) \n> (actual time=0.117..0.144 rows=1 loops=1)\n> Join Filter: (\"inner\".codtab = \"outer\".tipend)\n> -> Index Scan using pk_end on tt_end ende (cost=0.00..3.87 \n> rows=1 width=388) (actual time=0.066..0.078 rows=1 loops=1)\n> Index Cond: ((filcli = '001'::bpchar) AND (codcli = ' \n> 19475'::bpchar))\n> -> Seq Scan on td_end dend (cost=0.00..1.02 rows=2 \n> width=33) (actual time=0.012..0.018 rows=2 loops=1)\n> -> Seq Scan on tt_log log (cost=0.00..12254.24 rows=582424 \n> width=17) (actual time=0.013..582.521 rows=582424 loops=1)\n> Total runtime: 1901.769 ms\n> (9 rows)\n> \n> \\d tt_log\n> Table \"TOTALL.tt_log\"\n> Column | Type | Modifiers\n> --------+------------------------+-----------\n> codbai | numeric(5,0) | not null\n> nomlog | character varying(55) | not null\n> codcep | character(8) | not null\n> \n> \\d tt_end\n> Table \"TOTALL.tt_end\"\n> Column | Type | Modifiers\n> --------+-----------------------+-----------------------------------------\n> ...\n> ...\n> ...\n> codlog | character(3) |\n> ...\n> ...\n> ...\n> codcep | character(5) |\n> ...\n> ...\n>\n> Reimer\n>\n> \n\n",
"msg_date": "Fri, 12 Jan 2007 05:28:07 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving SQL performance"
},
{
"msg_contents": "\"Carlos H. Reimer\" <[email protected]> writes:\n> I know that the problem with the following SQL is the \"LOG.CODCEP =\n> ENDE.CODCEP||CODLOG\" condition, but what can I\n> do to improve the performance?\n\nSeems the problem is not using an index for tt_log. Do you have an\nindex on tt_log.codcep? If so, maybe you need to cast the result of\nthe concatenation to char(8) to get it to use the index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Jan 2007 13:31:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving SQL performance "
},
{
"msg_contents": "Yes, I do have an index on tt_log.codcep.\n\nIndexes I�ve on both tables:\n\ntt_end\nIndexes:\n \"pk_end\" PRIMARY KEY, btree (filcli, codcli, codfil, numend)\n \"ak_end_numdoc\" UNIQUE, btree (numdoc)\n \"i_fk_end_darc\" btree (codarc, tiparc)\n \"i_fk_end_dend\" btree (tipend)\n \"i_fk_end_dfil\" btree (codfil)\n \"i_fk_end_dreg\" btree (regiao)\n \"i_fk_end_mun\" btree (codcid)\ntt_log\nIndexes:\n \"i_fk_log_bai\" btree (codbai)\n \"i_lc_log_codcep\" btree (codcep)\n\nAny clue?\n\nThanks!\n\nReimer\n\n\n> -----Mensagem original-----\n> De: Tom Lane [mailto:[email protected]]\n> Enviada em: quinta-feira, 11 de janeiro de 2007 16:31\n> Para: [email protected]\n> Cc: [email protected]\n> Assunto: Re: [PERFORM] Improving SQL performance\n>\n>\n> \"Carlos H. Reimer\" <[email protected]> writes:\n> > I know that the problem with the following SQL is the \"LOG.CODCEP =\n> > ENDE.CODCEP||CODLOG\" condition, but what can I\n> > do to improve the performance?\n>\n> Seems the problem is not using an index for tt_log. Do you have an\n> index on tt_log.codcep? If so, maybe you need to cast the result of\n> the concatenation to char(8) to get it to use the index.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n",
"msg_date": "Thu, 11 Jan 2007 19:14:51 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Improving SQL performance "
},
{
"msg_contents": "Hi, Carlos,\n\nWouldn't it be better if you used INT in 'codcep' in both tables (as \nCEP/ZIP numbers are [0-9]{8})? Casting as Tom Lane suggested is also a \ngood alternative, yet I think it'd be much better if you used int in \nboth columns.\n\nRegards,\nCesar\n\nLet's see the query:\n\nSELECT ENDE.* , DEND.DESEND, DEND.USOEND, DEND.DUPEND,\n to_char('F') as NOVO,\n LOG.TIPLOG\n FROM TT_END ENDE LEFT OUTER JOIN TD_END DEND ON DEND.CODTAB = \nENDE.TIPEND\n LEFT OUTER JOIN TT_LOG LOG ON LOG.CODCEP = \nENDE.CODCEP||CODLOG\n WHERE ENDE.FILCLI = '001'\n AND ENDE.CODCLI = ' 19475';\n \n\n\n\nCarlos H. Reimer wrote:\n> Yes, I do have an index on tt_log.codcep.\n>\n> Indexes I�ve on both tables:\n>\n> tt_end\n> Indexes:\n> \"pk_end\" PRIMARY KEY, btree (filcli, codcli, codfil, numend)\n> \"ak_end_numdoc\" UNIQUE, btree (numdoc)\n> \"i_fk_end_darc\" btree (codarc, tiparc)\n> \"i_fk_end_dend\" btree (tipend)\n> \"i_fk_end_dfil\" btree (codfil)\n> \"i_fk_end_dreg\" btree (regiao)\n> \"i_fk_end_mun\" btree (codcid)\n> tt_log\n> Indexes:\n> \"i_fk_log_bai\" btree (codbai)\n> \"i_lc_log_codcep\" btree (codcep)\n>\n> Any clue?\n>\n> Thanks!\n>\n> Reimer\n>\n>\n> \n>> -----Mensagem original-----\n>> De: Tom Lane [mailto:[email protected]]\n>> Enviada em: quinta-feira, 11 de janeiro de 2007 16:31\n>> Para: [email protected]\n>> Cc: [email protected]\n>> Assunto: Re: [PERFORM] Improving SQL performance\n>>\n>>\n>> \"Carlos H. Reimer\" <[email protected]> writes:\n>> \n>>> I know that the problem with the following SQL is the \"LOG.CODCEP =\n>>> ENDE.CODCEP||CODLOG\" condition, but what can I\n>>> do to improve the performance?\n>>> \n>> Seems the problem is not using an index for tt_log. Do you have an\n>> index on tt_log.codcep? If so, maybe you need to cast the result of\n>> the concatenation to char(8) to get it to use the index.\n>>\n>> \t\t\tregards, tom lane\n>>\n>>\n>> \n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n> \n\n",
"msg_date": "Thu, 11 Jan 2007 21:47:33 -0200",
"msg_from": "Cesar Suga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Improving SQL performance"
},
{
"msg_contents": "Yes, casting the result improved the time response a lot.\n\nThank you!\n\nReimer\n\n\n\n> -----Mensagem original-----\n> De: [email protected]\n> [mailto:[email protected]]Em nome de Tom Lane\n> Enviada em: quinta-feira, 11 de janeiro de 2007 16:31\n> Para: [email protected]\n> Cc: [email protected]\n> Assunto: Re: [PERFORM] Improving SQL performance \n> \n> \n> \"Carlos H. Reimer\" <[email protected]> writes:\n> > I know that the problem with the following SQL is the \"LOG.CODCEP =\n> > ENDE.CODCEP||CODLOG\" condition, but what can I\n> > do to improve the performance?\n> \n> Seems the problem is not using an index for tt_log. Do you have an\n> index on tt_log.codcep? If so, maybe you need to cast the result of\n> the concatenation to char(8) to get it to use the index.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n",
"msg_date": "Fri, 12 Jan 2007 15:37:13 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Improving SQL performance "
}
] |
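For reference, here is a sketch of Carlos's query with the cast Tom suggested applied to the join condition. The thread confirms that casting solved the problem but never shows the final SQL, so the exact form below is an assumption: codcep char(5) concatenated with codlog char(3) and cast to char(8) so that it matches tt_log.codcep and lets the planner use the i_lc_log_codcep index instead of the sequential scan in the original plan.

    SELECT ENDE.*, DEND.DESEND, DEND.USOEND, DEND.DUPEND,
           to_char('F') AS NOVO,
           LOG.TIPLOG
      FROM TT_END ENDE
      LEFT OUTER JOIN TD_END DEND ON DEND.CODTAB = ENDE.TIPEND
      LEFT OUTER JOIN TT_LOG LOG
             ON LOG.CODCEP = (ENDE.CODCEP || ENDE.CODLOG)::char(8)
     WHERE ENDE.FILCLI = '001'
       AND ENDE.CODCLI = ' 19475';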
[
{
"msg_contents": "We have a table with a timestamp attribute (event_time) and a state flag\nwhich usually changes value around the event_time (it goes to 4). Now\nwe have more than two years of events in the database, and around 5k of\nfuture events.\n\nIt is important to frequently pick out \"overdue events\", say:\n\n select * from events where state<>4 and event_time<now()\n\nThis query would usually yield between 0 and 100 rows - however, the\nplanner doesn't see the correlation betewen state and event_time - since\nmost of the events have event_time<now, the planner also assumes most of\nthe events with state<>4 has event_time<now, so the expected number of\nrows is closer to 5k. This matters, because I have a query with joins,\nand I would really benefit from nested loops.\n\n(I've tried replacing \"now()\" above with different timestamps from the\nfuture and the past. I'm using pg 8.2)\n\nAny suggestions?\n\n",
"msg_date": "Fri, 12 Jan 2007 09:16:31 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner statistics, correlations"
},
{
"msg_contents": "On 12/01/07, Tobias Brox <[email protected]> wrote:\n> We have a table with a timestamp attribute (event_time) and a state flag\n> which usually changes value around the event_time (it goes to 4). Now\n> we have more than two years of events in the database, and around 5k of\n> future events.\n>\n> It is important to frequently pick out \"overdue events\", say:\n>\n> select * from events where state<>4 and event_time<now()\n>\n> This query would usually yield between 0 and 100 rows - however, the\n> planner doesn't see the correlation betewen state and event_time - since\n> most of the events have event_time<now, the planner also assumes most of\n> the events with state<>4 has event_time<now, so the expected number of\n> rows is closer to 5k. This matters, because I have a query with joins,\n> and I would really benefit from nested loops.\n>\n> (I've tried replacing \"now()\" above with different timestamps from the\n> future and the past. I'm using pg 8.2)\n>\n> Any suggestions?\n>\n\nCan you say what state might be rather than what it is not. I'm guess\nthat state is an int but there is only a limited list of possible\nstates, if you can say what it might be rather than what it is the\nindex is more liklly to be used.\n\nPeter.\n",
"msg_date": "Fri, 12 Jan 2007 08:56:54 +0000",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner statistics, correlations"
},
{
"msg_contents": "[Peter Childs - Fri at 08:56:54AM +0000]\n> Can you say what state might be rather than what it is not. I'm guess\n> that state is an int but there is only a limited list of possible\n> states, if you can say what it might be rather than what it is the\n> index is more liklly to be used.\n\n explain select * from events where state in (1,2,3) and event_time<now()\n\nalso estimates almost 5k of rows. I also tried:\n\n explain select * from events where state=2 and event_time<now()\n\nbut get the same behaviour.\n\nMaybe it would help to partitionate the table every year?\n",
"msg_date": "Fri, 12 Jan 2007 10:05:00 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner statistics, correlations"
},
{
"msg_contents": "Tobias Brox wrote:\n> [Peter Childs - Fri at 08:56:54AM +0000]\n>> Can you say what state might be rather than what it is not. I'm guess\n>> that state is an int but there is only a limited list of possible\n>> states, if you can say what it might be rather than what it is the\n>> index is more liklly to be used.\n> \n> explain select * from events where state in (1,2,3) and event_time<now()\n> \n> also estimates almost 5k of rows. \n\nTry a partial index:\nCREATE INDEX my_new_index ON events (event_time)\nWHERE state in (1,2,3);\n\nNow, if that doesn't work you might want to split the query into two...\n\nSELECT * FROM events\nWHERE state IN (1,2,3) AND event_time < '2007-01-01'::date\nUNION ALL\nSELECT * FROM events\nWHERE state IN (1,2,3) AND event_time >= '2007-01-01'::date AND \nevent_time < now();\n\nCREATE INDEX my_new_index ON events (event_time)\nWHERE state in (1,2,3) AND event_time < '2007-01-01'::date;\n\nCREATE INDEX event_time_state_idx ON events (event_time, state);\n\nYou'll want to replace the index/update the query once a year/month etc.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 12 Jan 2007 09:17:48 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner statistics, correlations"
},
{
"msg_contents": "[Richard Huxton - Fri at 09:17:48AM +0000]\n> Try a partial index:\n> CREATE INDEX my_new_index ON events (event_time)\n> WHERE state in (1,2,3);\n\nI have that, the index is used and the query is lightning fast - the\nonly problem is that the planner is using the wrong estimates. This\nbecomes a real problem when doing joins and more complex queries.\n\n> Now, if that doesn't work you might want to split the query into two...\n\nHm, that's an idea - to use a two-pass query ... first:\n\n select max(event_time) from events where state in (1,2,3);\n\nand then use the result:\n\n select * from events \n where event_time>? and event_time<now() and state in (1,2,3)\n\nThis would allow the planner to get the estimates in the right ballpark\n(given that the events won't stay for too long in the lower states), and\nit would in any case not be significantly slower than the straight-ahead\napproach - but quite inelegant.\n\n",
"msg_date": "Fri, 12 Jan 2007 10:56:55 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner statistics, correlations"
},
{
"msg_contents": "Tobias Brox wrote:\n> Maybe it would help to partitionate the table every year?\n\nI thought about partitioning the table by state, putting rows with \nstate=4 into one partition, and all others to another partition.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 12 Jan 2007 10:41:34 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner statistics, correlations"
},
{
"msg_contents": "[Heikki Linnakangas - Fri at 10:41:34AM +0000]\n> I thought about partitioning the table by state, putting rows with \n> state=4 into one partition, and all others to another partition.\n\nThat sounds like a good idea - but wouldn't that be costly when changing state?\n",
"msg_date": "Fri, 12 Jan 2007 11:44:22 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner statistics, correlations"
},
{
"msg_contents": "Tobias Brox wrote:\n> [Heikki Linnakangas - Fri at 10:41:34AM +0000]\n>> I thought about partitioning the table by state, putting rows with \n>> state=4 into one partition, and all others to another partition.\n> \n> That sounds like a good idea - but wouldn't that be costly when changing state?\n\nIn PostgreSQL, UPDATE internally inserts a new row and marks the old one \nas deleted, so there shouldn't be much of a performance difference.\n\nI'm not very familiar with our partitioning support, so I'm not sure if \nthere's any problems with an update moving a row from one partition to \nanother. I think you'll have to create an INSTEAD OF UPDATE rule to do a \nDELETE on one partition and an INSERT on the other partition. Depending \non your application, that might be a problem; UPDATE is different from \nDELETE+INSERT from transaction isolation point of view.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 12 Jan 2007 10:55:40 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner statistics, correlations"
}
] |
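A rough sketch of the state-based split Heikki floats above, using inheritance children with CHECK constraints; the column list is assumed, since the real table definition never appears in the thread.

    CREATE TABLE events (
        event_id   integer     NOT NULL,   -- assumed columns
        event_time timestamptz NOT NULL,
        state      integer     NOT NULL
    );

    CREATE TABLE events_open   (CHECK (state <> 4)) INHERITS (events);
    CREATE TABLE events_closed (CHECK (state = 4))  INHERITS (events);

    -- "overdue" lookups then only have to scan the small partition
    SELECT * FROM events_open WHERE event_time < now();

As the follow-ups note, the catch is that setting state = 4 then has to move the row between partitions (effectively a DELETE plus an INSERT), which is the trade-off discussed above.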
[
{
"msg_contents": "Can anybody help me out\n\nI just wanted to knw what will be the configuraion settings for partitioning\ntable so as to make inserts faster on the partitioned tables.\n\n\n-- \nRegards\nGauri\n\nCan anybody help me out \n \nI just wanted to knw what will be the configuraion settings for partitioning table so as to make inserts faster on the partitioned tables.\n \n \n-- RegardsGauri",
"msg_date": "Fri, 12 Jan 2007 17:10:04 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning"
},
{
"msg_contents": "> Can anybody help me out\n> \n> I just wanted to knw what will be the configuraion settings for\n> partitioning table so as to make inserts faster on the partitioned tables.\n\nWell, that depends. Many questions are important here. Will you insert\ninto several partitions or only to a single one? Do you need to enforce\nsome constraint between several partitioned tables?\n\nIf you need to insert into several partitions, it can be faster as you\ncan place them on different drives. If you need to insert only into the\nlast one (that's usually the case with 'logging' tables) then this\nprobably won't give a huge performance benefit.\n\nIf you need to enforce some kind of constraint between multiple\npartitions (possibly from several tables), you'll have to do that\nmanually using a plpgsql procedure (for example). This is the case with\nUNIQUE constraint on a single table, FOREIGN KEY between multimple\npartitioned tables, etc. This can mean a serious performance penalty,\nesecially when you do mostly insert/update on that table.\n\nThis is mostly about application architecture - if you use partitions\nincorrectly it's almost impossible to fix that by changing settings in\npostgresql.conf.\n\nTomas\n",
"msg_date": "Tue, 30 Jan 2007 08:10:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
}
] |
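As a concrete illustration of Tomas's point that this is schema design rather than a postgresql.conf setting, here is a minimal sketch of an 8.x inheritance-partitioned "logging" table with inserts routed to the current partition; every name and date range here is made up for the example.

  CREATE TABLE app_log (logged_at timestamp NOT NULL, message text);
  CREATE TABLE app_log_2007_01
    (CHECK (logged_at >= '2007-01-01' AND logged_at < '2007-02-01')) INHERITS (app_log);
  CREATE TABLE app_log_2007_02
    (CHECK (logged_at >= '2007-02-01' AND logged_at < '2007-03-01')) INHERITS (app_log);

  -- redirect inserts aimed at the parent into the partition they belong to
  CREATE RULE app_log_insert_2007_02 AS
    ON INSERT TO app_log
    WHERE (NEW.logged_at >= '2007-02-01' AND NEW.logged_at < '2007-03-01')
    DO INSTEAD INSERT INTO app_log_2007_02 VALUES (NEW.*);

  -- lets SELECTs with a date predicate skip partitions whose CHECK rules them out
  SET constraint_exclusion = on;

If the application can insert directly into the correct child table, the routing rule (and its small per-insert overhead) can be dropped altogether.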
[
{
"msg_contents": "Hello - \n\nI have a fairly large table (3 million records), and am fetching 10,000 non-contigous records doing a simple select on an indexed column ie \n\nselect grades from large_table where teacher_id = X\n\nThis is a test database, so the number of records is always 10,000 and i have 300 different teacher ids.\n\nThe problem is, sometimes fetching un-cached records takes 0.5 secs and sometimes (more often) is takes more like 10.0 seconds\n\n(fetching the same records for a given teacher_id a second time takes about 0.25 secs)\n\nHas anyone seen similar behavior or know what the solution might be?\n\nany help much appreciated,\nMark\n\n\n\nps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192\n\n\n\nHello - I have a fairly large table (3 million records), and am fetching 10,000 non-contigous records doing a simple select on an indexed column ie select grades from large_table where teacher_id = XThis is a test database, so the number of records is always 10,000 and i have 300 different teacher ids.The problem is, sometimes fetching un-cached records takes 0.5 secs and sometimes (more often) is takes more like 10.0 seconds(fetching the same records for a given teacher_id a second time takes about 0.25 secs)Has anyone seen similar behavior or know what the solution might be?any help much appreciated,Markps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192",
"msg_date": "Fri, 12 Jan 2007 16:31:09 -0800 (PST)",
"msg_from": "Mark Dobbrow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large table performance"
},
{
"msg_contents": "\nOn 12-Jan-07, at 7:31 PM, Mark Dobbrow wrote:\n\n> Hello -\n>\n> I have a fairly large table (3 million records), and am fetching \n> 10,000 non-contigous records doing a simple select on an indexed \n> column ie\n>\n> select grades from large_table where teacher_id = X\n>\n> This is a test database, so the number of records is always 10,000 \n> and i have 300 different teacher ids.\n>\n> The problem is, sometimes fetching un-cached records takes 0.5 secs \n> and sometimes (more often) is takes more like 10.0 seconds\n>\n> (fetching the same records for a given teacher_id a second time \n> takes about 0.25 secs)\n>\n> Has anyone seen similar behavior or know what the solution might be?\n>\n> any help much appreciated,\n> Mark\n>\n>\n>\n> ps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192\n\n5000 is pretty low, you need at least 1/4 of memory for an 8.1.x or \nnewer server.\neffective cache should be 3/4 of available memory\n\nDave\n>\n>\n\n",
"msg_date": "Fri, 12 Jan 2007 19:40:25 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large table performance"
},
{
"msg_contents": "On Fri, Jan 12, 2007 at 07:40:25PM -0500, Dave Cramer wrote:\n> 5000 is pretty low, you need at least 1/4 of memory for an 8.1.x or \n> newer server.\n\nIs this the new \"common wisdom\"? It looks like at some point, someone here\nsaid \"oh, and it looks like you're better off using large values here for\n8.1.x and newer\", and now everybody seems to repeat it as if it was always\nwell-known.\n\nAre there any real benchmarks out there that we can point to? And, if you set\nshared_buffers to half of the available memory, won't the kernel cache\nduplicate more or less exactly the same data? (At least that's what people\nused to say around here, but I guess the kernel cache gets adapted to the\nfact that Postgres won't ask for the most common stuff, ie. the one in the\nshared buffer cache.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 13 Jan 2007 02:33:42 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large table performance"
},
{
"msg_contents": "What if we start a project where we define tests for PostgreSQL\noverall performance and individual points with any database structure?\nIt could be done, throught a SQL logger and statistics, where we can\nsee complete processess and measure then after. We have many things to\nmeasure, and something that would help here is pg_buffercache (contrib\nmodule). We could define many other tests.\n\nI was thinking about something like that, where an aplication reads\ninformation (from catalog too) about an production database, and use\nthis information to build a data set of any size, respecting anything\nmeasured before.\n\nIs it too complicated? I'm trying to make programs with C++ and\nlibpqxx, and successfully used Python with PostgreSQL before (was a\ndatabase structure comparer). Python could make it easyer, C++ could\nbe a chalenge for someone like me.\n\nSomeone would like to contribute? When we start the project? :)\n\nOn 1/12/07, Steinar H. Gunderson <[email protected]> wrote:\n> On Fri, Jan 12, 2007 at 07:40:25PM -0500, Dave Cramer wrote:\n> > 5000 is pretty low, you need at least 1/4 of memory for an 8.1.x or\n> > newer server.\n>\n> Is this the new \"common wisdom\"? It looks like at some point, someone here\n> said \"oh, and it looks like you're better off using large values here for\n> 8.1.x and newer\", and now everybody seems to repeat it as if it was always\n> well-known.\n>\n> Are there any real benchmarks out there that we can point to? And, if you set\n> shared_buffers to half of the available memory, won't the kernel cache\n> duplicate more or less exactly the same data? (At least that's what people\n> used to say around here, but I guess the kernel cache gets adapted to the\n> fact that Postgres won't ask for the most common stuff, ie. the one in the\n> shared buffer cache.)\n>\n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n\n-- \nDaniel Cristian Cruz\nAnalista de Sistemas\nEspecialista postgreSQL e Linux\nInstrutor Certificado Mandriva\n",
"msg_date": "Sat, 13 Jan 2007 13:55:35 -0200",
"msg_from": "\"Daniel Cristian Cruz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large table performance"
},
{
"msg_contents": "Have you run vacuum and analyze on the table? What version of Postgres are\nyou running? What OS are you using?\n \nThis looks like a straight forward query. With any database the first time\nyou run the query its going to be slower because it actually has to read off\ndisk. The second time its faster because some or all of the data/indexes\nwill be cached. However 10 seconds sounds like a long time for pulling\n10,000 records out of a table of 3 million. If you post an EXPLAIN ANALYZE,\nit might give us a clue.\n \nDave\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mark Dobbrow\nSent: Friday, January 12, 2007 6:31 PM\nTo: [email protected]\nSubject: [PERFORM] Large table performance\n\n\nHello - \n\nI have a fairly large table (3 million records), and am fetching 10,000\nnon-contigous records doing a simple select on an indexed column ie \n\nselect grades from large_table where teacher_id = X\n\nThis is a test database, so the number of records is always 10,000 and i\nhave 300 different teacher ids.\n\nThe problem is, sometimes fetching un-cached records takes 0.5 secs and\nsometimes (more often) is takes more like 10.0 seconds\n\n(fetching the same records for a given teacher_id a second time takes about\n0.25 secs)\n\nHas anyone seen similar behavior or know what the solution might be?\n\nany help much appreciated,\nMark\n\n\n\nps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192\n\n\n\n\n\n\n\nMessage\n\n\nHave \nyou run vacuum and analyze on the table? What version of Postgres are you \nrunning? What OS are you using?\n \nThis \nlooks like a straight forward query. With any database the first time you \nrun the query its going to be slower because it actually has to read off \ndisk. The second time its faster because some or all of the data/indexes \nwill be cached. However 10 seconds sounds like a long time for pulling \n10,000 records out of a table of 3 million. If you post an EXPLAIN \nANALYZE, it might give us a clue.\n \nDave\n \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Mark \n DobbrowSent: Friday, January 12, 2007 6:31 PMTo: \n [email protected]: [PERFORM] Large table \n performanceHello - I have a fairly large table (3 \n million records), and am fetching 10,000 non-contigous records doing a simple \n select on an indexed column ie select grades from large_table where \n teacher_id = XThis is a test database, so the number of records is \n always 10,000 and i have 300 different teacher ids.The problem is, \n sometimes fetching un-cached records takes 0.5 secs and sometimes (more often) \n is takes more like 10.0 seconds(fetching the same records for a given \n teacher_id a second time takes about 0.25 secs)Has anyone seen similar \n behavior or know what the solution might be?any help much \n appreciated,Markps. My shared_buffers is set at 5000 \n (kernal max), and work_mem=8192",
"msg_date": "Sat, 13 Jan 2007 20:40:18 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large table performance"
},
{
"msg_contents": "Depending on the available memory try increasing the shared buffers and\nwork_mem and see if that changes the query execution time. Also make sure\nyou have proper indices created and also if possible try doing partitions\nfor the table.\n\nOnce you post the EXPLAIN ANALYZE output that will certainly help solving\nthe problem...\n\n-------------\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 1/14/07, Dave Dutcher <[email protected]> wrote:\n>\n> Have you run vacuum and analyze on the table? What version of Postgres\n> are you running? What OS are you using?\n>\n> This looks like a straight forward query. With any database the first\n> time you run the query its going to be slower because it actually has to\n> read off disk. The second time its faster because some or all of the\n> data/indexes will be cached. However 10 seconds sounds like a long time for\n> pulling 10,000 records out of a table of 3 million. If you post an EXPLAIN\n> ANALYZE, it might give us a clue.\n>\n> Dave\n>\n>\n> -----Original Message-----\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Mark Dobbrow\n> *Sent:* Friday, January 12, 2007 6:31 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] Large table performance\n>\n> Hello -\n>\n> I have a fairly large table (3 million records), and am fetching 10,000\n> non-contigous records doing a simple select on an indexed column ie\n>\n> select grades from large_table where teacher_id = X\n>\n> This is a test database, so the number of records is always 10,000 and i\n> have 300 different teacher ids.\n>\n> The problem is, sometimes fetching un-cached records takes 0.5 secs and\n> sometimes (more often) is takes more like 10.0 seconds\n>\n> (fetching the same records for a given teacher_id a second time takes\n> about 0.25 secs)\n>\n> Has anyone seen similar behavior or know what the solution might be?\n>\n> any help much appreciated,\n> Mark\n>\n>\n>\n> ps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192\n>\n>\n>\n\nDepending on the available memory try increasing the shared buffers and work_mem and see if that changes the query execution time. Also make sure you have proper indices created and also if possible try doing partitions for the table.\nOnce you post the EXPLAIN ANALYZE output that will certainly help solving the problem...-------------Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 1/14/07, Dave Dutcher <[email protected]> wrote:\n\nHave \nyou run vacuum and analyze on the table? What version of Postgres are you \nrunning? What OS are you using?\n \nThis \nlooks like a straight forward query. With any database the first time you \nrun the query its going to be slower because it actually has to read off \ndisk. The second time its faster because some or all of the data/indexes \nwill be cached. However 10 seconds sounds like a long time for pulling \n10,000 records out of a table of 3 million. 
If you post an EXPLAIN \nANALYZE, it might give us a clue.\n \nDave\n \n\n\n-----Original Message-----From:\[email protected] \n [mailto:[email protected]] On Behalf Of Mark \n DobbrowSent: Friday, January 12, 2007 6:31 PMTo:\[email protected]: [PERFORM] Large table \n performanceHello - I have a fairly large table (3 \n million records), and am fetching 10,000 non-contigous records doing a simple \n select on an indexed column ie select grades from large_table where \n teacher_id = XThis is a test database, so the number of records is \n always 10,000 and i have 300 different teacher ids.The problem is, \n sometimes fetching un-cached records takes 0.5 secs and sometimes (more often) \n is takes more like 10.0 seconds(fetching the same records for a given \n teacher_id a second time takes about 0.25 secs)Has anyone seen similar \n behavior or know what the solution might be?any help much \n appreciated,Markps. My shared_buffers is set at 5000 \n (kernal max), and work_mem=8192",
"msg_date": "Sun, 14 Jan 2007 16:47:15 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large table performance"
},
{
"msg_contents": "Mark,\n\nThis behavior likely depends on how the data is loaded into the DBMS. If\nthe records you are fetching are distributed widely among the 3M records on\ndisk, then \n\n\nOn 1/12/07 4:31 PM, \"Mark Dobbrow\" <[email protected]> wrote:\n\n> Hello - \n> \n> I have a fairly large table (3 million records), and am fetching 10,000\n> non-contigous records doing a simple select on an indexed column ie\n> \n> select grades from large_table where teacher_id = X\n> \n> This is a test database, so the number of records is always 10,000 and i have\n> 300 different teacher ids.\n> \n> The problem is, sometimes fetching un-cached records takes 0.5 secs and\n> sometimes (more often) is takes more like 10.0 seconds\n> \n> (fetching the same records for a given teacher_id a second time takes about\n> 0.25 secs)\n> \n> Has anyone seen similar behavior or know what the solution might be?\n> \n> any help much appreciated,\n> Mark\n> \n> \n> \n> ps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192\n> \n> \n> \n\n\n\n\n\nRe: [PERFORM] Large table performance\n\n\nMark,\n\nThis behavior likely depends on how the data is loaded into the DBMS. If the records you are fetching are distributed widely among the 3M records on disk, then \n\n\nOn 1/12/07 4:31 PM, \"Mark Dobbrow\" <[email protected]> wrote:\n\nHello - \n\nI have a fairly large table (3 million records), and am fetching 10,000 non-contigous records doing a simple select on an indexed column ie \n\nselect grades from large_table where teacher_id = X\n\nThis is a test database, so the number of records is always 10,000 and i have 300 different teacher ids.\n\nThe problem is, sometimes fetching un-cached records takes 0.5 secs and sometimes (more often) is takes more like 10.0 seconds\n\n(fetching the same records for a given teacher_id a second time takes about 0.25 secs)\n\nHas anyone seen similar behavior or know what the solution might be?\n\nany help much appreciated,\nMark\n\n\n\nps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192",
"msg_date": "Sun, 14 Jan 2007 11:02:24 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large table performance"
},
{
"msg_contents": "Mark,\n\nNote that selecting an index column means that Postgres fetches the whole\nrows from disk. I think your performance problem is either: 1) slow disk or\n2) index access of distributed data. If it¹s (1), there are plenty of\nreferences from this list on how to check for that and fix it. If it¹s (2),\nsee below. \n\nThe performance of index accessed data in Postgres depends on how the data\nis loaded into the DBMS. If the records you are fetching are distributed\nwidely among the 3M records on disk, then the select is going to ³hop, skip\nand jump² across the disk to get the records you need. If those records are\nstored more closely together, then the fetching from disk is going to be\nsequential. A good example of the best situation for an index is an index\non a date column when the data is loaded sequentially by date. A query\nagainst a specific date range will result in an ordered fetch from the disk,\nwhich leverages fast sequential access.\n\nThe difference in performance between ordered and distributed access is\nsimilar to the difference between ³random seek² and ³sequential² performance\nof the disk subsystem. The random seek performance of typical disk\nsubsystems with one thread (for one user in postgres) is 120 seeks per\nsecond. If your data was randomly distributed, you¹d expect about\n10,000/120 = 83 seconds to gather these records. Since you¹re getting 10\nseconds, I expect that your data is lumped into groups and you are getting a\nmix of sequential reads and seeks.\n\nNote that adding more disks into a RAID does not help the random seek\nperformance within Postgres, but may linearly improve the ordered access\nspeed. So even with 44 disks in a RAID10 pool on a Sun X4500, the seek\nperformance of Postgres (and other DBMS¹s without async or threaded I/O) is\nthat of a single disk 120 seeks per second. Adding more users allows the\nseeks to scale on such a machine as users are added, up to the number of\ndisks in the RAID. But for your one user example no help.\n\nIf your problem is (2), you can re-order the data on disk by using a CREATE\nTABLE statement like this:\n CREATE TABLE fast_table AS SELECT * FROM slow_table ORDER BY teacher_id;\n CREATE INDEX teacher_id_ix ON fast_table;\n VACUUM ANALYZE fast_table;\n\nYou should then see ordered access when you do index scans on teacher_id.\n\n- Luke\n\n\nOn 1/12/07 4:31 PM, \"Mark Dobbrow\" <[email protected]> wrote:\n\n> Hello - \n> \n> I have a fairly large table (3 million records), and am fetching 10,000\n> non-contigous records doing a simple select on an indexed column ie\n> \n> select grades from large_table where teacher_id = X\n> \n> This is a test database, so the number of records is always 10,000 and i have\n> 300 different teacher ids.\n> \n> The problem is, sometimes fetching un-cached records takes 0.5 secs and\n> sometimes (more often) is takes more like 10.0 seconds\n> \n> (fetching the same records for a given teacher_id a second time takes about\n> 0.25 secs)\n> \n> Has anyone seen similar behavior or know what the solution might be?\n> \n> any help much appreciated,\n> Mark\n> \n> \n> \n> ps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192\n> \n> \n> \n\n\n\n\n\nRe: [PERFORM] Large table performance\n\n\nMark,\n\nNote that selecting an index column means that Postgres fetches the whole rows from disk. I think your performance problem is either: 1) slow disk or 2) index access of distributed data. 
If it’s (1), there are plenty of references from this list on how to check for that and fix it. If it’s (2), see below. \n\nThe performance of index accessed data in Postgres depends on how the data is loaded into the DBMS. If the records you are fetching are distributed widely among the 3M records on disk, then the select is going to “hop, skip and jump” across the disk to get the records you need. If those records are stored more closely together, then the fetching from disk is going to be sequential. A good example of the best situation for an index is an index on a date column when the data is loaded sequentially by date. A query against a specific date range will result in an ordered fetch from the disk, which leverages fast sequential access. \n\nThe difference in performance between ordered and distributed access is similar to the difference between “random seek” and “sequential” performance of the disk subsystem. The random seek performance of typical disk subsystems with one thread (for one user in postgres) is 120 seeks per second. If your data was randomly distributed, you’d expect about 10,000/120 = 83 seconds to gather these records. Since you’re getting 10 seconds, I expect that your data is lumped into groups and you are getting a mix of sequential reads and seeks.\n\nNote that adding more disks into a RAID does not help the random seek performance within Postgres, but may linearly improve the ordered access speed. So even with 44 disks in a RAID10 pool on a Sun X4500, the seek performance of Postgres (and other DBMS’s without async or threaded I/O) is that of a single disk – 120 seeks per second. Adding more users allows the seeks to scale on such a machine as users are added, up to the number of disks in the RAID. But for your one user example – no help.\n\nIf your problem is (2), you can re-order the data on disk by using a CREATE TABLE statement like this:\n CREATE TABLE fast_table AS SELECT * FROM slow_table ORDER BY teacher_id;\n CREATE INDEX teacher_id_ix ON fast_table;\n VACUUM ANALYZE fast_table;\n\nYou should then see ordered access when you do index scans on teacher_id.\n\n- Luke\n\n\nOn 1/12/07 4:31 PM, \"Mark Dobbrow\" <[email protected]> wrote:\n\nHello - \n\nI have a fairly large table (3 million records), and am fetching 10,000 non-contigous records doing a simple select on an indexed column ie \n\nselect grades from large_table where teacher_id = X\n\nThis is a test database, so the number of records is always 10,000 and i have 300 different teacher ids.\n\nThe problem is, sometimes fetching un-cached records takes 0.5 secs and sometimes (more often) is takes more like 10.0 seconds\n\n(fetching the same records for a given teacher_id a second time takes about 0.25 secs)\n\nHas anyone seen similar behavior or know what the solution might be?\n\nany help much appreciated,\nMark\n\n\n\nps. My shared_buffers is set at 5000 (kernal max), and work_mem=8192",
"msg_date": "Sun, 14 Jan 2007 11:20:35 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large table performance"
}
] |
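Pulling the advice in this thread together as a runnable sketch: post the plan the planner actually chooses, and if the rows really are scattered, Luke's reordering can also be done in place with CLUSTER (pre-8.3 syntax shown, matching the 8.1/8.2 servers under discussion). The table and column names come from Mark's message; the index name and the literal 42 are placeholders.

  EXPLAIN ANALYZE SELECT grades FROM large_table WHERE teacher_id = 42;

  -- rewrite the table in teacher_id order so one teacher's rows are physically contiguous;
  -- CLUSTER locks and rewrites the whole table, so schedule it accordingly
  CREATE INDEX teacher_id_idx ON large_table (teacher_id);
  CLUSTER teacher_id_idx ON large_table;
  ANALYZE large_table;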
[
{
"msg_contents": "Hello\n\nI am going to separate physical locations of tables and their indexes, by moving indexes to a \ndifferent volume, through use of tablespaces.\n\nAs the data are going to be split, I am considering where pg_xlog should go. It is a general well \nknown advise to keep pg_xlog on a different physical location than data, so it would be nice to have \na third dedicated place. Unfortunately, this is not an option at the moment.\n\nSo here my question comes, what could be better:\n1. keeping pg_xlog together with tables, or\n2. keeping pg_xlog together with indexes?\n\nMy database is mainly readonly, with exceptinos for overnight periods, when bulk data is inserted \nand updated.\n\nThank you\n\nIreneusz Pluta\n\n",
"msg_date": "Sat, 13 Jan 2007 10:21:21 +0100",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Physical separation of tables and indexes - where pg_xlog should go?"
}
] |
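A minimal sketch of the tablespace half of the question above; the directory and object names are invented, and the directory must already exist, be empty, and be owned by the postgres user. pg_xlog itself is not managed through tablespaces - it is usually relocated by stopping the server and replacing the pg_xlog directory with a symlink to the other volume.

  CREATE TABLESPACE index_space LOCATION '/mnt/volume2/pg_indexes';

  -- move an existing index, and place a new one there directly
  ALTER INDEX orders_customer_idx SET TABLESPACE index_space;
  CREATE INDEX orders_shipped_idx ON orders (shipped_at) TABLESPACE index_space;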
[
{
"msg_contents": "Hello All,\n\nI am using the latest 8.2 source that I compiled with Sun Studio 11 and \ntested it on Solaris 10 11/06 against an app server. I find that the CPU \nutilization was higher than I expected and started digging through it.\n\nAparently the top CPU usage comes from the following stack trace which \nis roughly about 10-15% of the total the postgresql uses.\n\nAnyway a real developer might make more sense from this than I can\n\n\n libc_psr.so.1`memcpy+0x524\n postgres`SearchCatCache+0x24\n postgres`getBaseType+0x2c\n postgres`find_coercion_pathway+0x14\n postgres`can_coerce_type+0x58\n postgres`func_match_argtypes+0x24\n postgres`oper_select_candidate+0x14\n postgres`make_op+0x1a8\n postgres`transformAExprAnd+0x1c\n postgres`transformWhereClause+0x18\n postgres`transformUpdateStmt+0x94\n postgres`transformStmt+0x1dc\n postgres`do_parse_analyze+0x18\n postgres`parse_analyze_varparams+0x30\n postgres`exec_parse_message+0x2fc\n postgres`PostgresMain+0x117c\n postgres`BackendRun+0x248\n postgres`BackendStartup+0xf4\n postgres`ServerLoop+0x4c8\n postgres`PostmasterMain+0xca0\n\n\nFUNCTION COUNT PCNT\npostgres`can_coerce_type 1 0.1%\npostgres`find_coercion_pathway 11 0.9%\npostgres`SearchCatCache 43 3.4%\nlibc_psr.so.1`memcpy 136 10.6%\n\nThe appserver is basically using bunch of prepared statements that the \nserver should be executing directly without doing the parsing again. \nSince it is the parser module that invokes the catalog search, does \nanybody know how to improve the can_coerce_type function in order to \nreduce the similar comparisions again and again for same type of statements.\n\nI also wanted to check if postgresql stores prepared statements at the \nserver level or does it parse each incoming prepared statement again?\n\nAny insight will help here in understanding what it is attempting to do \nand what can be the possible workarounds.\n\nRegards,\nJignesh\n",
"msg_date": "Sat, 13 Jan 2007 23:38:24 +0000",
"msg_from": "Jignesh Shah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of Parser?"
},
{
"msg_contents": "Jignesh Shah <[email protected]> writes:\n> The appserver is basically using bunch of prepared statements that the \n> server should be executing directly without doing the parsing again. \n\nBetter have another look at that theory, because you're clearly spending\na lot of time in parsing (operator resolution to be specific). I think\nyour client code is failing to re-use prepared statements the way you\nthink it is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Jan 2007 19:24:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of Parser? "
},
{
"msg_contents": "Jignesh Shah wrote:\n> The appserver is basically using bunch of prepared statements that the\n> server should be executing directly without doing the parsing again.\n>\n\nThe first thing you need to do is turn on statement logging, if you\nhaven't already, to verify this statement.\n\ncheers\n\nandrew\n\n",
"msg_date": "Sat, 13 Jan 2007 18:48:14 -0600 (CST)",
"msg_from": "\"Andrew Dunstan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of Parser?"
},
{
"msg_contents": "\nOn 13-Jan-07, at 7:24 PM, Tom Lane wrote:\n\n> Jignesh Shah <[email protected]> writes:\n>> The appserver is basically using bunch of prepared statements that \n>> the\n>> server should be executing directly without doing the parsing again.\n>\n> Better have another look at that theory, because you're clearly \n> spending\n> a lot of time in parsing (operator resolution to be specific). I \n> think\n> your client code is failing to re-use prepared statements the way you\n> think it is.\n\nThis is exactly what is happening. The driver needs to cache \nstatements for this to work.\n\nDave\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Sat, 13 Jan 2007 22:08:57 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of Parser? "
}
] |
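To act on the replies above, the first step is to see what the app server really sends and compare it with a genuinely reused server-side prepared statement. A sketch only: the table and statement names are invented, and pg_prepared_statements assumes 8.2.

  -- in postgresql.conf (then reload), to capture exactly what the client sends:
  --   log_statement = 'all'

  -- a server-side prepared statement is parsed and planned once, then reused
  PREPARE orders_by_customer (integer) AS
    SELECT * FROM orders WHERE customer_id = $1;
  EXECUTE orders_by_customer(17);
  EXECUTE orders_by_customer(18);

  -- 8.2 can show which prepared statements the session is actually holding
  SELECT name, statement FROM pg_prepared_statements;

If the client is the PostgreSQL JDBC driver, whether it issues named server-side statements at all depends on its statement-caching settings, which is worth checking before looking at the server.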
[
{
"msg_contents": "12345678901234567890123456789012345678901234567890123456789012345678901234567890\n00000000001111111111222222222233333333334444444444555555555566666666667777777777\nI have been trying to change a many parameters on server versions\n7.4.15, 8.1.4, 8.2.0 and 8.2.1. I still hope a have managed to keep\nmy head straigth and that i do not present to much faulty information.\n\nThe cost estimates generated by the different server versions differ.\nI have a query which (as far as i can tell) have some strange differences\nbetween 8.2.0 8.2.1. I can provide information about that if anyone want\nit.\n\nGenerally these parameters are used.\ndefault_statistics_target = 10\n\t(4 selected columns is set to 1000)\n\t(I have tested with 1000 as default value\n but that did not have an impact)\n\t(analyzed whenever value was changed)\nshared_buffers = 64000 (512MB)\nwork_mem/sort_mem = variable, see different run's\neffective_cache_size = 128000 (1G)\nrandom_page_cost = 2\ncpu_index_tuple_cost = 0.001\ncpu_operator_cost = 0.025\ncpu_tuple_cost = 0.01\n\nI have tested with different values for random_page_cost and\ncpu_*_cost but it have not made a difference.\nI have tried with random_page cost between 1 and 8,\nand cpu_*_cost with standard value and 50x bigger)\n\nQuery is:\nexplain\n analyze\n select\n ur.id as ur_id,\n ur.unit_ref,\n ur.execution_time,\n u.serial_number,\n to_char(ur.start_date_time, 'YYYY-MM-DD'),\n count(*) as num_test\n from\n uut_result as ur\n inner join units as u\n on ur.unit_ref=u.ref\n inner join step_result as sr\n on ur.id=sr.uut_result\n where\n ur.id between 174000 and 174000+999\n group by\n ur.id,\n ur.unit_ref,\n ur.execution_time,\n u.serial_number, \n ur.start_date_time\n-- order by\n-- ur.start_date_time\n;\nNB: order by clause is used in some results below.\n\n=== Run 1:\nDetect work_mem setting influence (See also Run 2)\n - server version 8.2.1\n - Query executed without \"order by\" clause\n - work_mem = 8600;\nQUERY PLAN \n---------------------------------------------\n GroupAggregate (cost=44857.70..47976.79 rows=95972 width=37) (actual time=1802.716..2017.337 rows=1000 loops=1)\n -> Sort (cost=44857.70..45097.63 rows=95972 width=37) (actual time=1802.461..1892.743 rows=138810 loops=1)\n Sort Key: ur.id, ur.unit_ref, ur.execution_time, u.serial_number, ur.start_date_time\n -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) (actual time=0.063..268.186 rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5017.65 rows=981 width=37) (actual time=0.047..11.919 rows=1000 loops=1)\n -> Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual time=0.029..1.727 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.47 rows=1 width=17) (actual time=0.006..0.007 rows=1 loops=1000)\n Index Cond: (ur.unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.125 rows=139 loops=1000)\n Index Cond: (ur.id = sr.uut_result) Total runtime: 2021.833 ms\n(12 rows)\n\n\n=== Run 2:\nDetect work_mem setting influence (See also Run 1)\n - server version 8.2.1\n - Query executed without \"order by\" clause\n - work_mem = 8700;\nQUERY PLAN \n---------------------------------------------\n HashAggregate (cost=38355.45..39795.03 rows=95972 width=37) (actual time=436.406..439.867 rows=1000 loops=1)\n -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) (actual time=0.066..256.235 
rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5017.65 rows=981 width=37) (actual time=0.049..10.858 rows=1000 loops=1)\n -> Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual time=0.031..1.546 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.47 rows=1 width=17) (actual time=0.005..0.006 rows=1 loops=1000)\n Index Cond: (ur.unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.123 rows=139 loops=1000)\n Index Cond: (ur.id = sr.uut_result) Total runtime: 441.193 ms (10 rows)\n\n=== Comment on Run 1 versus Run 2 (adjusted work_mem) ===\nThe difference in setup is value of work_mem. Bigger work_mem gave different\ncost estimates and selected HashAggregate instead of GroupAggregate.\nResult was a reduced runtime. I guess that is as expected.\n\n(One remark, the switchover between different plans on version 8.1.5 was for\nwork_mem values of 6800 and 6900)\n\n=== Run 3 (with order by clause):\nTest \"group by\" and \"order by\" (See also Run 1 and Run 4)\n - server version 8.2.1\n - Query executed with \"order by\" clause\n - work_mem = 8700\n (tried values from 2000 to 128000 with same cost and plan as result)\nQUERY PLAN \n---------------------------------------------\n GroupAggregate (cost=44857.70..47976.79 rows=95972 width=37) (actual time=1891.464..2114.462 rows=1000 loops=1)\n -> Sort (cost=44857.70..45097.63 rows=95972 width=37) (actual time=1891.263..1982.137 rows=138810 loops=1)\n Sort Key: ur.start_date_time, ur.id, ur.unit_ref, ur.execution_time, u.serial_number\n -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) (actual time=0.064..264.358 rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5017.65 rows=981 width=37) (actual time=0.047..12.253 rows=1000 loops=1)\n -> Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual time=0.029..1.743 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.47 rows=1 width=17) (actual time=0.006..0.007 rows=1 loops=1000)\n Index Cond: (ur.unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.124 rows=139 loops=1000)\n Index Cond: (ur.id = sr.uut_result) Total runtime: 2118.986 ms\n(12 rows)\n\n=== Run 4 (with order by clause, on server 8.1.4):\nTest \"group by\" and \"order by\" (See also Run 1 and Run 3)\n - server version 8.1.4\n - Query executed with \"order by\" clause\n - work_mem = 6900\n (same plan select for all work_mem values above 6900)\nQUERY PLAN \n------------------------------------------------------------\n Sort (cost=46578.83..46820.66 rows=96734 width=37) (actual time=505.562..505.988 rows=1000 loops=1)\n Sort Key: ur.start_date_time\n -> HashAggregate (cost=37117.40..38568.41 rows=96734 width=37) (actual time=498.697..502.374 rows=1000 loops=1)\n -> Nested Loop (cost=0.00..35666.39 rows=96734 width=37) (actual time=0.058..288.270 rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5342.20 rows=984 width=37) (actual time=0.042..11.773 rows=1000 loops=1)\n -> Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1626.46 rows=1003 width=24) (actual time=0.020..1.868 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.69 rows=1 width=17) (actual 
time=0.006..0.007 rows=1 loops=1000)\n Index Cond: (\"outer\".unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr (cost=0.00..29.09 rows=138 width=4) (actual time=0.006..0.146 rows=139 loops=1000)\n Index Cond: (\"outer\".id = sr.uut_result) Total runtime: 507.452 ms\n(12 rows)\n\n=== Coemment on selected plan for 8.2.1 when using \"order by\" ===\nRun 3 (8.2.1 with order by) selects same plan as Run1 (without order by).\nIt does hovever exist a better plan for Run3, and 8.1.5 manages to select\nthat plan (shown in Run 4).\nBoth versions (8.1.5 and 8.2.1) uses same plan until the uppermost Nested Loop.\nThe big difference is that 8.1.5 then will do HashAggregate, and then sort,\nwhile 8.2.1 will does a sort and then a GroupAggregate.\n\nI have tried different combinations for statistics_target, cpu_*_cost,\nwork_mem and random page cost without finding a solution. \n\nAnyone with an idea on what to do? Feel free to suggest one of the above\nparameters, i might have overlooked some combination.\n\nI am a little unsure on how much extra information is necessery, but i\nwill provide some:\n\nThe three tables are\n units\t\tList of produced items\n uut_Result\t\tSummary of test result\n step_result\tIndividuel tests results\nThe system is a production test log. (there are a lot of units which\n does not have an entry in uut_result).\n\n Table \"public.units\"\n Column | Type | Modifiers \n------------------+-----------------------+-----------------------------\n------------------+-----------------------+-----------------------------\n------------------+-----------------------+----------\n ref | integer | not null default nextval(('public.units_ref_seq'::text)::regclass)\n serial_number | character varying(30) | not null\n product_ref | integer | not null\n week | integer | not null\n status | integer | not null\n comment | text | \n last_user | text | default \"current_user\"()\n last_date | date | default ('now'::text)::date\n product_info_ref | integer | not null\nIndexes:\n \"units_pkey\" PRIMARY KEY, btree (ref)\n \"units_no_sno_idx\" UNIQUE, btree (product_ref, week) WHERE serial_number::text = ''::text\n \"units_serial_number_idx\" UNIQUE, btree (serial_number, product_info_ref) WHERE serial_number::text <> ''::text\n \"units_product_ref_key\" btree (product_ref)\nTriggers:\n ct_unit_update_log AFTER UPDATE ON units FOR EACH ROW EXECUTE PROCEDURE cf_unit_update_log()\n ct_units_update_product_info_ref BEFORE INSERT OR UPDATE ON units FOR EACH ROW EXECUTE PROCEDURE cf_units_update_product_info_ref()\nselect count(*) from units => 292 676 rows\n\n Table \"public.uut_result\"\n Column | Type | Modifiers \n-------------------+-----------------------------+----------------------\n-------------------+-----------------------------+----------------------\n-------------------+-----------------------------+--------\n id | integer | not null\n uut_serial_number | text | \n unit_ref | integer | \n order_unit_ref | integer | \n user_login_name | text | \n start_date_time | timestamp without time zone | \n execution_time | double precision | \n uut_status | text | \n uut_error_code | integer | \n uut_error_message | text | \n last_user | text | default \"current_user\"()\n last_timestamp | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\n test_name | text | \n teststation_name | text | \n teststation_ref | integer | \n process_step_ref | integer | \nIndexes:\n \"uut_result_pkey\" PRIMARY KEY, btree (id)\n \"uut_result_start_date_time_idx\" btree 
(start_date_time)\n \"uut_result_test_name\" btree (test_name)\nTriggers:\n ct_set_process_step_ref BEFORE INSERT OR UPDATE ON uut_result FOR EACH ROW EXECUTE PROCEDURE cf_set_process_step_ref() select count(*) from uut_result => 180 111 rows\n\n Table \"public.step_result\"\n Column | Type | Modifiers \n--------------------+------------------+-----------\n id | integer | not null\n uut_result | integer | \n step_parent | integer | \n step_name | text | \n step_extra_info | text | \n step_type | text | \n status | text | \n report_text | text | \n error_code | integer | \n error_message | text | \n module_time | double precision | \n total_time | double precision | \n num_loops | integer | \n num_passed | integer | \n num_failed | integer | \n ending_loop_index | integer | \n loop_index | integer | \n interactive_exenum | integer | \n step_group | text | \n step_index | integer | \n order_number | integer | \n pass_fail | integer | \n numeric_value | double precision | \n high_limit | double precision | \n low_limit | double precision | \n comp_operator | text | \n string_value | text | \n string_limit | text | \n button_pressed | integer | \n response | text | \n exit_code | integer | \n num_limits_in_file | integer | \n num_rows_in_file | integer | \n num_limits_applied | integer | \n sequence_name | text | \n sequence_file_path | text | \nIndexes:\n \"step_result_pkey\" PRIMARY KEY, btree (id)\n \"step_parent_key\" btree (step_parent)\n \"temp_index_idx\" btree (sequence_file_path)\n \"uut_result_key\" btree (uut_result)\nselect count(*) from step_result => 17 624 657 rows\n\nBest regards\nRolf Østvik\n",
"msg_date": "Sun, 14 Jan 2007 13:31:10 +0100",
"msg_from": "=?iso-8859-1?Q?Rolf_=D8stvik_=28HA/EXA=29?= <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "\nOn 14-Jan-07, at 7:31 AM, Rolf �stvik (HA/EXA) wrote:\n\n> 1234567890123456789012345678901234567890123456789012345678901234567890 \n> 1234567890\n> 0000000000111111111122222222223333333333444444444455555555556666666666 \n> 7777777777\n> I have been trying to change a many parameters on server versions\n> 7.4.15, 8.1.4, 8.2.0 and 8.2.1. I still hope a have managed to keep\n> my head straigth and that i do not present to much faulty information.\n>\n> The cost estimates generated by the different server versions differ.\n> I have a query which (as far as i can tell) have some strange \n> differences\n> between 8.2.0 8.2.1. I can provide information about that if anyone \n> want\n> it.\n>\n> Generally these parameters are used.\n> default_statistics_target = 10\n> \t(4 selected columns is set to 1000)\n> \t(I have tested with 1000 as default value\n> but that did not have an impact)\n> \t(analyzed whenever value was changed)\n> shared_buffers = 64000 (512MB)\n> work_mem/sort_mem = variable, see different run's\n> effective_cache_size = 128000 (1G)\n> random_page_cost = 2\n> cpu_index_tuple_cost = 0.001\n> cpu_operator_cost = 0.025\n> cpu_tuple_cost = 0.01\n\nCan you tell us how big the machine is ? How much memory it has ? Not \nthat it is terribly important, but it's a data point for me.\n\n>\n> I have tested with different values for random_page_cost and\n> cpu_*_cost but it have not made a difference.\n> I have tried with random_page cost between 1 and 8,\n> and cpu_*_cost with standard value and 50x bigger)\n>\n> Query is:\n> explain\n> analyze\n> select\n> ur.id as ur_id,\n> ur.unit_ref,\n> ur.execution_time,\n> u.serial_number,\n> to_char(ur.start_date_time, 'YYYY-MM-DD'),\n> count(*) as num_test\n> from\n> uut_result as ur\n> inner join units as u\n> on ur.unit_ref=u.ref\n> inner join step_result as sr\n> on ur.id=sr.uut_result\n> where\n> ur.id between 174000 and 174000+999\n> group by\n> ur.id,\n> ur.unit_ref,\n> ur.execution_time,\n> u.serial_number,\n> ur.start_date_time\n> -- order by\n> -- ur.start_date_time\n> ;\n> NB: order by clause is used in some results below.\n>\n> === Run 1:\n> Detect work_mem setting influence (See also Run 2)\n> - server version 8.2.1\n> - Query executed without \"order by\" clause\n> - work_mem = 8600;\n> QUERY PLAN\n> ---------------------------------------------\n> GroupAggregate (cost=44857.70..47976.79 rows=95972 width=37) \n> (actual time=1802.716..2017.337 rows=1000 loops=1)\n> -> Sort (cost=44857.70..45097.63 rows=95972 width=37) (actual \n> time=1802.461..1892.743 rows=138810 loops=1)\n> Sort Key: ur.id, ur.unit_ref, ur.execution_time, \n> u.serial_number, ur.start_date_time\n> -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) \n> (actual time=0.063..268.186 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5017.65 rows=981 \n> width=37) (actual time=0.047..11.919 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on \n> uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual \n> time=0.029..1.727 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND (id <= \n> 174999))\n> -> Index Scan using units_pkey on units u \n> (cost=0.00..3.47 rows=1 width=17) (actual time=0.006..0.007 rows=1 \n> loops=1000)\n> Index Cond: (ur.unit_ref = u.ref)\n> -> Index Scan using uut_result_key on step_result \n> sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.125 \n> rows=139 loops=1000)\n> Index Cond: (ur.id = sr.uut_result) Total \n> runtime: 2021.833 ms\n> (12 rows)\n>\n>\n> === Run 2:\n> Detect work_mem 
setting influence (See also Run 1)\n> - server version 8.2.1\n> - Query executed without \"order by\" clause\n> - work_mem = 8700;\n> QUERY PLAN\n> ---------------------------------------------\n> HashAggregate (cost=38355.45..39795.03 rows=95972 width=37) \n> (actual time=436.406..439.867 rows=1000 loops=1)\n> -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) \n> (actual time=0.066..256.235 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5017.65 rows=981 width=37) \n> (actual time=0.049..10.858 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on uut_result \n> ur (cost=0.00..1538.77 rows=1000 width=24) (actual \n> time=0.031..1.546 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND (id <= 174999))\n> -> Index Scan using units_pkey on units u \n> (cost=0.00..3.47 rows=1 width=17) (actual time=0.005..0.006 rows=1 \n> loops=1000)\n> Index Cond: (ur.unit_ref = u.ref)\n> -> Index Scan using uut_result_key on step_result sr \n> (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.123 \n> rows=139 loops=1000)\n> Index Cond: (ur.id = sr.uut_result) Total runtime: \n> 441.193 ms (10 rows)\n>\n> === Comment on Run 1 versus Run 2 (adjusted work_mem) ===\n> The difference in setup is value of work_mem. Bigger work_mem gave \n> different\n> cost estimates and selected HashAggregate instead of GroupAggregate.\n> Result was a reduced runtime. I guess that is as expected.\n>\n> (One remark, the switchover between different plans on version \n> 8.1.5 was for\n> work_mem values of 6800 and 6900)\n>\n> === Run 3 (with order by clause):\n> Test \"group by\" and \"order by\" (See also Run 1 and Run 4)\n> - server version 8.2.1\n> - Query executed with \"order by\" clause\n> - work_mem = 8700\n> (tried values from 2000 to 128000 with same cost and plan as \n> result)\n> QUERY PLAN\n> ---------------------------------------------\n> GroupAggregate (cost=44857.70..47976.79 rows=95972 width=37) \n> (actual time=1891.464..2114.462 rows=1000 loops=1)\n> -> Sort (cost=44857.70..45097.63 rows=95972 width=37) (actual \n> time=1891.263..1982.137 rows=138810 loops=1)\n> Sort Key: ur.start_date_time, ur.id, ur.unit_ref, \n> ur.execution_time, u.serial_number\n> -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) \n> (actual time=0.064..264.358 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5017.65 rows=981 \n> width=37) (actual time=0.047..12.253 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on \n> uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual \n> time=0.029..1.743 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND (id <= \n> 174999))\n> -> Index Scan using units_pkey on units u \n> (cost=0.00..3.47 rows=1 width=17) (actual time=0.006..0.007 rows=1 \n> loops=1000)\n> Index Cond: (ur.unit_ref = u.ref)\n> -> Index Scan using uut_result_key on step_result \n> sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.124 \n> rows=139 loops=1000)\n> Index Cond: (ur.id = sr.uut_result) Total \n> runtime: 2118.986 ms\n> (12 rows)\n>\n> === Run 4 (with order by clause, on server 8.1.4):\n> Test \"group by\" and \"order by\" (See also Run 1 and Run 3)\n> - server version 8.1.4\n> - Query executed with \"order by\" clause\n> - work_mem = 6900\n> (same plan select for all work_mem values above 6900)\n> QUERY PLAN\n> ------------------------------------------------------------\n> Sort (cost=46578.83..46820.66 rows=96734 width=37) (actual \n> time=505.562..505.988 rows=1000 loops=1)\n> Sort Key: ur.start_date_time\n> -> HashAggregate (cost=37117.40..38568.41 
rows=96734 width=37) \n> (actual time=498.697..502.374 rows=1000 loops=1)\n> -> Nested Loop (cost=0.00..35666.39 rows=96734 width=37) \n> (actual time=0.058..288.270 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5342.20 rows=984 \n> width=37) (actual time=0.042..11.773 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on \n> uut_result ur (cost=0.00..1626.46 rows=1003 width=24) (actual \n> time=0.020..1.868 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND (id <= \n> 174999))\n> -> Index Scan using units_pkey on units u \n> (cost=0.00..3.69 rows=1 width=17) (actual time=0.006..0.007 rows=1 \n> loops=1000)\n> Index Cond: (\"outer\".unit_ref = u.ref)\n> -> Index Scan using uut_result_key on step_result \n> sr (cost=0.00..29.09 rows=138 width=4) (actual time=0.006..0.146 \n> rows=139 loops=1000)\n> Index Cond: (\"outer\".id = sr.uut_result) \n> Total runtime: 507.452 ms\n> (12 rows)\n>\n> === Coemment on selected plan for 8.2.1 when using \"order by\" ===\n> Run 3 (8.2.1 with order by) selects same plan as Run1 (without \n> order by).\n> It does hovever exist a better plan for Run3, and 8.1.5 manages to \n> select\n> that plan (shown in Run 4).\n> Both versions (8.1.5 and 8.2.1) uses same plan until the uppermost \n> Nested Loop.\n> The big difference is that 8.1.5 then will do HashAggregate, and \n> then sort,\n> while 8.2.1 will does a sort and then a GroupAggregate.\n>\n> I have tried different combinations for statistics_target, cpu_*_cost,\n> work_mem and random page cost without finding a solution.\n>\n> Anyone with an idea on what to do? Feel free to suggest one of the \n> above\n> parameters, i might have overlooked some combination.\n>\n> I am a little unsure on how much extra information is necessery, but i\n> will provide some:\n>\n> The three tables are\n> units\t\tList of produced items\n> uut_Result\t\tSummary of test result\n> step_result\tIndividuel tests results\n> The system is a production test log. 
(there are a lot of units which\n> does not have an entry in uut_result).\n>\n> Table \"public.units\"\n> Column | Type \n> | Modifiers\n> ------------------+----------------------- \n> +-----------------------------\n> ------------------+----------------------- \n> +-----------------------------\n> ------------------+-----------------------+----------\n> ref | integer | not null default nextval \n> (('public.units_ref_seq'::text)::regclass)\n> serial_number | character varying(30) | not null\n> product_ref | integer | not null\n> week | integer | not null\n> status | integer | not null\n> comment | text |\n> last_user | text | default \"current_user\"()\n> last_date | date | default \n> ('now'::text)::date\n> product_info_ref | integer | not null\n> Indexes:\n> \"units_pkey\" PRIMARY KEY, btree (ref)\n> \"units_no_sno_idx\" UNIQUE, btree (product_ref, week) WHERE \n> serial_number::text = ''::text\n> \"units_serial_number_idx\" UNIQUE, btree (serial_number, \n> product_info_ref) WHERE serial_number::text <> ''::text\n> \"units_product_ref_key\" btree (product_ref)\n> Triggers:\n> ct_unit_update_log AFTER UPDATE ON units FOR EACH ROW EXECUTE \n> PROCEDURE cf_unit_update_log()\n> ct_units_update_product_info_ref BEFORE INSERT OR UPDATE ON \n> units FOR EACH ROW EXECUTE PROCEDURE \n> cf_units_update_product_info_ref()\n> select count(*) from units => 292 676 rows\n>\n> Table \"public.uut_result\"\n> Column | Type \n> | Modifiers\n> -------------------+----------------------------- \n> +----------------------\n> -------------------+----------------------------- \n> +----------------------\n> -------------------+-----------------------------+--------\n> id | integer | not null\n> uut_serial_number | text |\n> unit_ref | integer |\n> order_unit_ref | integer |\n> user_login_name | text |\n> start_date_time | timestamp without time zone |\n> execution_time | double precision |\n> uut_status | text |\n> uut_error_code | integer |\n> uut_error_message | text |\n> last_user | text | default \n> \"current_user\"()\n> last_timestamp | timestamp with time zone | default \n> ('now'::text)::timestamp(6) with time zone\n> test_name | text |\n> teststation_name | text |\n> teststation_ref | integer |\n> process_step_ref | integer |\n> Indexes:\n> \"uut_result_pkey\" PRIMARY KEY, btree (id)\n> \"uut_result_start_date_time_idx\" btree (start_date_time)\n> \"uut_result_test_name\" btree (test_name)\n> Triggers:\n> ct_set_process_step_ref BEFORE INSERT OR UPDATE ON uut_result \n> FOR EACH ROW EXECUTE PROCEDURE cf_set_process_step_ref() select \n> count(*) from uut_result => 180 111 rows\n>\n> Table \"public.step_result\"\n> Column | Type | Modifiers\n> --------------------+------------------+-----------\n> id | integer | not null\n> uut_result | integer |\n> step_parent | integer |\n> step_name | text |\n> step_extra_info | text |\n> step_type | text |\n> status | text |\n> report_text | text |\n> error_code | integer |\n> error_message | text |\n> module_time | double precision |\n> total_time | double precision |\n> num_loops | integer |\n> num_passed | integer |\n> num_failed | integer |\n> ending_loop_index | integer |\n> loop_index | integer |\n> interactive_exenum | integer |\n> step_group | text |\n> step_index | integer |\n> order_number | integer |\n> pass_fail | integer |\n> numeric_value | double precision |\n> high_limit | double precision |\n> low_limit | double precision |\n> comp_operator | text |\n> string_value | text |\n> string_limit | text |\n> button_pressed | integer |\n> response | text 
|\n> exit_code | integer |\n> num_limits_in_file | integer |\n> num_rows_in_file | integer |\n> num_limits_applied | integer |\n> sequence_name | text |\n> sequence_file_path | text |\n> Indexes:\n> \"step_result_pkey\" PRIMARY KEY, btree (id)\n> \"step_parent_key\" btree (step_parent)\n> \"temp_index_idx\" btree (sequence_file_path)\n> \"uut_result_key\" btree (uut_result)\n> select count(*) from step_result => 17 624 657 rows\n>\n> Best regards\n> Rolf �stvik\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Sun, 14 Jan 2007 08:46:31 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
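Rolf's comparisons above were driven by editing work_mem; both that and the planner's choice between the two aggregate strategies can be probed per session without touching postgresql.conf. A sketch using a simplified form of his query; the enable_* switch is a diagnostic only and should be reset afterwards.

  SET work_mem = '16MB';        -- 8.2 accepts unit suffixes; on 8.1 give the value in kB
  SET enable_hashagg = off;     -- diagnostic: forces the sort + GroupAggregate shape
  EXPLAIN ANALYZE
    SELECT ur.id, count(*) AS num_test
    FROM uut_result ur
    JOIN step_result sr ON ur.id = sr.uut_result
    WHERE ur.id BETWEEN 174000 AND 174999
    GROUP BY ur.id
    ORDER BY ur.id;
  RESET enable_hashagg;         -- re-run the EXPLAIN after this to compare the two plans
  RESET work_mem;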
[
{
"msg_contents": "(now with a more sensible subject)\n\nI have been trying to change a many parameters on server versions\n7.4.15, 8.1.4, 8.2.0 and 8.2.1. I still hope a have managed to keep\nmy head straigth and that i do not present to much faulty information.\n\nThe cost estimates generated by the different server versions differ.\nI have a query which (as far as i can tell) have some strange differences\nbetween 8.2.0 8.2.1. I can provide information about that if anyone want\nit.\n\nGenerally these parameters are used.\ndefault_statistics_target = 10\n\t(4 selected columns is set to 1000)\n\t(I have tested with 1000 as default value\n but that did not have an impact)\n\t(analyzed whenever value was changed)\nshared_buffers = 64000 (512MB)\nwork_mem/sort_mem = variable, see different run's\neffective_cache_size = 128000 (1G)\nrandom_page_cost = 2\ncpu_index_tuple_cost = 0.001\ncpu_operator_cost = 0.025\ncpu_tuple_cost = 0.01\n\nI have tested with different values for random_page_cost and\ncpu_*_cost but it have not made a difference.\nI have tried with random_page cost between 1 and 8,\nand cpu_*_cost with standard value and 50x bigger)\n\nQuery is:\nexplain\n analyze\n select\n ur.id as ur_id,\n ur.unit_ref,\n ur.execution_time,\n u.serial_number,\n to_char(ur.start_date_time, 'YYYY-MM-DD'),\n count(*) as num_test\n from\n uut_result as ur\n inner join units as u\n on ur.unit_ref=u.ref\n inner join step_result as sr\n on ur.id=sr.uut_result\n where\n ur.id between 174000 and 174000+999\n group by\n ur.id,\n ur.unit_ref,\n ur.execution_time,\n u.serial_number, \n ur.start_date_time\n-- order by\n-- ur.start_date_time\n;\nNB: order by clause is used in some results below.\n\n=== Run 1:\nDetect work_mem setting influence (See also Run 2)\n - server version 8.2.1\n - Query executed without \"order by\" clause\n - work_mem = 8600;\nQUERY PLAN \n---------------------------------------------\n GroupAggregate (cost=44857.70..47976.79 rows=95972 width=37) (actual time=1802.716..2017.337 rows=1000 loops=1)\n -> Sort (cost=44857.70..45097.63 rows=95972 width=37) (actual time=1802.461..1892.743 rows=138810 loops=1)\n Sort Key: ur.id, ur.unit_ref, ur.execution_time, u.serial_number, ur.start_date_time\n -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) (actual time=0.063..268.186 rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5017.65 rows=981 width=37) (actual time=0.047..11.919 rows=1000 loops=1)\n -> Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual time=0.029..1.727 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.47 rows=1 width=17) (actual time=0.006..0.007 rows=1 loops=1000)\n Index Cond: (ur.unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.125 rows=139 loops=1000)\n Index Cond: (ur.id = sr.uut_result) Total runtime: 2021.833 ms\n(12 rows)\n\n\n=== Run 2:\nDetect work_mem setting influence (See also Run 1)\n - server version 8.2.1\n - Query executed without \"order by\" clause\n - work_mem = 8700;\nQUERY PLAN \n---------------------------------------------\n HashAggregate (cost=38355.45..39795.03 rows=95972 width=37) (actual time=436.406..439.867 rows=1000 loops=1)\n -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) (actual time=0.066..256.235 rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5017.65 rows=981 width=37) (actual time=0.049..10.858 rows=1000 loops=1)\n -> 
Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual time=0.031..1.546 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.47 rows=1 width=17) (actual time=0.005..0.006 rows=1 loops=1000)\n Index Cond: (ur.unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.123 rows=139 loops=1000)\n Index Cond: (ur.id = sr.uut_result) Total runtime: 441.193 ms (10 rows)\n\n=== Comment on Run 1 versus Run 2 (adjusted work_mem) ===\nThe difference in setup is value of work_mem. Bigger work_mem gave different\ncost estimates and selected HashAggregate instead of GroupAggregate.\nResult was a reduced runtime. I guess that is as expected.\n\n(One remark, the switchover between different plans on version 8.1.5 was for\nwork_mem values of 6800 and 6900)\n\n=== Run 3 (with order by clause):\nTest \"group by\" and \"order by\" (See also Run 1 and Run 4)\n - server version 8.2.1\n - Query executed with \"order by\" clause\n - work_mem = 8700\n (tried values from 2000 to 128000 with same cost and plan as result)\nQUERY PLAN \n---------------------------------------------\n GroupAggregate (cost=44857.70..47976.79 rows=95972 width=37) (actual time=1891.464..2114.462 rows=1000 loops=1)\n -> Sort (cost=44857.70..45097.63 rows=95972 width=37) (actual time=1891.263..1982.137 rows=138810 loops=1)\n Sort Key: ur.start_date_time, ur.id, ur.unit_ref, ur.execution_time, u.serial_number\n -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) (actual time=0.064..264.358 rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5017.65 rows=981 width=37) (actual time=0.047..12.253 rows=1000 loops=1)\n -> Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1538.77 rows=1000 width=24) (actual time=0.029..1.743 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.47 rows=1 width=17) (actual time=0.006..0.007 rows=1 loops=1000)\n Index Cond: (ur.unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr (cost=0.00..30.82 rows=136 width=4) (actual time=0.011..0.124 rows=139 loops=1000)\n Index Cond: (ur.id = sr.uut_result) Total runtime: 2118.986 ms\n(12 rows)\n\n=== Run 4 (with order by clause, on server 8.1.4):\nTest \"group by\" and \"order by\" (See also Run 1 and Run 3)\n - server version 8.1.4\n - Query executed with \"order by\" clause\n - work_mem = 6900\n (same plan select for all work_mem values above 6900)\nQUERY PLAN \n------------------------------------------------------------\n Sort (cost=46578.83..46820.66 rows=96734 width=37) (actual time=505.562..505.988 rows=1000 loops=1)\n Sort Key: ur.start_date_time\n -> HashAggregate (cost=37117.40..38568.41 rows=96734 width=37) (actual time=498.697..502.374 rows=1000 loops=1)\n -> Nested Loop (cost=0.00..35666.39 rows=96734 width=37) (actual time=0.058..288.270 rows=138810 loops=1)\n -> Nested Loop (cost=0.00..5342.20 rows=984 width=37) (actual time=0.042..11.773 rows=1000 loops=1)\n -> Index Scan using uut_result_pkey on uut_result ur (cost=0.00..1626.46 rows=1003 width=24) (actual time=0.020..1.868 rows=1000 loops=1)\n Index Cond: ((id >= 174000) AND (id <= 174999))\n -> Index Scan using units_pkey on units u (cost=0.00..3.69 rows=1 width=17) (actual time=0.006..0.007 rows=1 loops=1000)\n Index Cond: (\"outer\".unit_ref = u.ref)\n -> Index Scan using uut_result_key on step_result sr 
(cost=0.00..29.09 rows=138 width=4) (actual time=0.006..0.146 rows=139 loops=1000)\n Index Cond: (\"outer\".id = sr.uut_result) Total runtime: 507.452 ms\n(12 rows)\n\n=== Coemment on selected plan for 8.2.1 when using \"order by\" ===\nRun 3 (8.2.1 with order by) selects same plan as Run1 (without order by).\nIt does hovever exist a better plan for Run3, and 8.1.5 manages to select\nthat plan (shown in Run 4).\nBoth versions (8.1.5 and 8.2.1) uses same plan until the uppermost Nested Loop.\nThe big difference is that 8.1.5 then will do HashAggregate, and then sort,\nwhile 8.2.1 will does a sort and then a GroupAggregate.\n\nI have tried different combinations for statistics_target, cpu_*_cost,\nwork_mem and random page cost without finding a solution. \n\nAnyone with an idea on what to do? Feel free to suggest one of the above\nparameters, i might have overlooked some combination.\n\nI am a little unsure on how much extra information is necessery, but i\nwill provide some:\n\nThe three tables are\n units\t\tList of produced items\n uut_Result\t\tSummary of test result\n step_result\tIndividuel tests results\nThe system is a production test log. (there are a lot of units which\n does not have an entry in uut_result).\n\n Table \"public.units\"\n Column | Type | Modifiers \n------------------+-----------------------+-----------------------------\n------------------+-----------------------+-----------------------------\n------------------+-----------------------+----------\n ref | integer | not null default nextval(('public.units_ref_seq'::text)::regclass)\n serial_number | character varying(30) | not null\n product_ref | integer | not null\n week | integer | not null\n status | integer | not null\n comment | text | \n last_user | text | default \"current_user\"()\n last_date | date | default ('now'::text)::date\n product_info_ref | integer | not null\nIndexes:\n \"units_pkey\" PRIMARY KEY, btree (ref)\n \"units_no_sno_idx\" UNIQUE, btree (product_ref, week) WHERE serial_number::text = ''::text\n \"units_serial_number_idx\" UNIQUE, btree (serial_number, product_info_ref) WHERE serial_number::text <> ''::text\n \"units_product_ref_key\" btree (product_ref)\nTriggers:\n ct_unit_update_log AFTER UPDATE ON units FOR EACH ROW EXECUTE PROCEDURE cf_unit_update_log()\n ct_units_update_product_info_ref BEFORE INSERT OR UPDATE ON units FOR EACH ROW EXECUTE PROCEDURE cf_units_update_product_info_ref()\nselect count(*) from units => 292 676 rows\n\n Table \"public.uut_result\"\n Column | Type | Modifiers \n-------------------+-----------------------------+----------------------\n-------------------+-----------------------------+----------------------\n-------------------+-----------------------------+--------\n id | integer | not null\n uut_serial_number | text | \n unit_ref | integer | \n order_unit_ref | integer | \n user_login_name | text | \n start_date_time | timestamp without time zone | \n execution_time | double precision | \n uut_status | text | \n uut_error_code | integer | \n uut_error_message | text | \n last_user | text | default \"current_user\"()\n last_timestamp | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\n test_name | text | \n teststation_name | text | \n teststation_ref | integer | \n process_step_ref | integer | \nIndexes:\n \"uut_result_pkey\" PRIMARY KEY, btree (id)\n \"uut_result_start_date_time_idx\" btree (start_date_time)\n \"uut_result_test_name\" btree (test_name)\nTriggers:\n ct_set_process_step_ref BEFORE INSERT OR UPDATE ON uut_result 
FOR EACH ROW EXECUTE PROCEDURE cf_set_process_step_ref() select count(*) from uut_result => 180 111 rows\n\n Table \"public.step_result\"\n Column | Type | Modifiers \n--------------------+------------------+-----------\n id | integer | not null\n uut_result | integer | \n step_parent | integer | \n step_name | text | \n step_extra_info | text | \n step_type | text | \n status | text | \n report_text | text | \n error_code | integer | \n error_message | text | \n module_time | double precision | \n total_time | double precision | \n num_loops | integer | \n num_passed | integer | \n num_failed | integer | \n ending_loop_index | integer | \n loop_index | integer | \n interactive_exenum | integer | \n step_group | text | \n step_index | integer | \n order_number | integer | \n pass_fail | integer | \n numeric_value | double precision | \n high_limit | double precision | \n low_limit | double precision | \n comp_operator | text | \n string_value | text | \n string_limit | text | \n button_pressed | integer | \n response | text | \n exit_code | integer | \n num_limits_in_file | integer | \n num_rows_in_file | integer | \n num_limits_applied | integer | \n sequence_name | text | \n sequence_file_path | text | \nIndexes:\n \"step_result_pkey\" PRIMARY KEY, btree (id)\n \"step_parent_key\" btree (step_parent)\n \"temp_index_idx\" btree (sequence_file_path)\n \"uut_result_key\" btree (uut_result)\nselect count(*) from step_result => 17 624 657 rows\n\nBest regards\nRolf Østvik\n",
"msg_date": "Sun, 14 Jan 2007 13:43:44 +0100",
"msg_from": "=?iso-8859-1?Q?Rolf_=D8stvik_=28HA/EXA=29?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with grouping, uses Sort and GroupAggregate,\n\tHashAggregate is better(?)"
},
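The plan flip between Run 1 and Run 2 above is driven purely by work_mem crossing the planner's hash-aggregate threshold. A minimal sketch of how that switchover can be reproduced per session follows; the query and the 8600/8700 kB values are taken from the post itself, only the SET/RESET scaffolding is added:

-- Sketch: reproduce the Run 1 / Run 2 switchover in one session, without
-- touching postgresql.conf. work_mem is given in kB, as in the post.
SET work_mem = 8600;   -- below the threshold: planner picked GroupAggregate
EXPLAIN ANALYZE
SELECT ur.id AS ur_id,
       ur.unit_ref,
       ur.execution_time,
       u.serial_number,
       to_char(ur.start_date_time, 'YYYY-MM-DD'),
       count(*) AS num_test
FROM uut_result AS ur
JOIN units AS u        ON ur.unit_ref = u.ref
JOIN step_result AS sr ON ur.id = sr.uut_result
WHERE ur.id BETWEEN 174000 AND 174000 + 999
GROUP BY ur.id, ur.unit_ref, ur.execution_time,
         u.serial_number, ur.start_date_time;

SET work_mem = 8700;   -- above the threshold: planner switched to HashAggregate
-- re-run the same EXPLAIN ANALYZE here and compare the top node of the plan
RESET work_mem;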
{
"msg_contents": "Computer:\n\tDell PowerEdge 2950\n\topenSUSE Linux 10.1\n\tIntel(R) Xeon 3.00GHz\n\t4GB memory\n\txfs filesystem on SAS disks \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Rolf Østvik (HA/EXA)\n> Sent: Sunday, January 14, 2007 1:44 PM\n> To: [email protected]\n> Subject: [PERFORM] Problem with grouping, uses Sort and \n> GroupAggregate, HashAggregate is better(?)\n> \n> (now with a more sensible subject)\n> \n> I have been trying to change a many parameters on server versions\n> 7.4.15, 8.1.4, 8.2.0 and 8.2.1. I still hope a have managed to keep\n> my head straigth and that i do not present to much faulty information.\n> \n> The cost estimates generated by the different server versions differ.\n> I have a query which (as far as i can tell) have some strange \n> differences\n> between 8.2.0 8.2.1. I can provide information about that if \n> anyone want\n> it.\n> \n> Generally these parameters are used.\n> default_statistics_target = 10\n> \t(4 selected columns is set to 1000)\n> \t(I have tested with 1000 as default value\n> but that did not have an impact)\n> \t(analyzed whenever value was changed)\n> shared_buffers = 64000 (512MB)\n> work_mem/sort_mem = variable, see different run's\n> effective_cache_size = 128000 (1G)\n> random_page_cost = 2\n> cpu_index_tuple_cost = 0.001\n> cpu_operator_cost = 0.025\n> cpu_tuple_cost = 0.01\n> \n> I have tested with different values for random_page_cost and\n> cpu_*_cost but it have not made a difference.\n> I have tried with random_page cost between 1 and 8,\n> and cpu_*_cost with standard value and 50x bigger)\n> \n> Query is:\n> explain\n> analyze\n> select\n> ur.id as ur_id,\n> ur.unit_ref,\n> ur.execution_time,\n> u.serial_number,\n> to_char(ur.start_date_time, 'YYYY-MM-DD'),\n> count(*) as num_test\n> from\n> uut_result as ur\n> inner join units as u\n> on ur.unit_ref=u.ref\n> inner join step_result as sr\n> on ur.id=sr.uut_result\n> where\n> ur.id between 174000 and 174000+999\n> group by\n> ur.id,\n> ur.unit_ref,\n> ur.execution_time,\n> u.serial_number, \n> ur.start_date_time\n> -- order by\n> -- ur.start_date_time\n> ;\n> NB: order by clause is used in some results below.\n> \n> === Run 1:\n> Detect work_mem setting influence (See also Run 2)\n> - server version 8.2.1\n> - Query executed without \"order by\" clause\n> - work_mem = 8600;\n> QUERY PLAN \n> \n> ---------------------------------------------\n> GroupAggregate (cost=44857.70..47976.79 rows=95972 \n> width=37) (actual time=1802.716..2017.337 rows=1000 loops=1)\n> -> Sort (cost=44857.70..45097.63 rows=95972 width=37) \n> (actual time=1802.461..1892.743 rows=138810 loops=1)\n> Sort Key: ur.id, ur.unit_ref, ur.execution_time, \n> u.serial_number, ur.start_date_time\n> -> Nested Loop (cost=0.00..36915.87 rows=95972 \n> width=37) (actual time=0.063..268.186 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5017.65 rows=981 \n> width=37) (actual time=0.047..11.919 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on \n> uut_result ur (cost=0.00..1538.77 rows=1000 width=24) \n> (actual time=0.029..1.727 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND \n> (id <= 174999))\n> -> Index Scan using units_pkey on units \n> u (cost=0.00..3.47 rows=1 width=17) (actual \n> time=0.006..0.007 rows=1 loops=1000)\n> Index Cond: (ur.unit_ref = u.ref)\n> -> Index Scan using uut_result_key on \n> step_result sr (cost=0.00..30.82 rows=136 width=4) (actual \n> time=0.011..0.125 rows=139 loops=1000)\n> Index Cond: 
(ur.id = sr.uut_result) \n> Total runtime: 2021.833 ms\n> (12 rows)\n> \n> \n> === Run 2:\n> Detect work_mem setting influence (See also Run 1)\n> - server version 8.2.1\n> - Query executed without \"order by\" clause\n> - work_mem = 8700;\n> QUERY PLAN \n> \n> ---------------------------------------------\n> HashAggregate (cost=38355.45..39795.03 rows=95972 width=37) \n> (actual time=436.406..439.867 rows=1000 loops=1)\n> -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37) \n> (actual time=0.066..256.235 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5017.65 rows=981 \n> width=37) (actual time=0.049..10.858 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on \n> uut_result ur (cost=0.00..1538.77 rows=1000 width=24) \n> (actual time=0.031..1.546 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND (id <= 174999))\n> -> Index Scan using units_pkey on units u \n> (cost=0.00..3.47 rows=1 width=17) (actual time=0.005..0.006 \n> rows=1 loops=1000)\n> Index Cond: (ur.unit_ref = u.ref)\n> -> Index Scan using uut_result_key on step_result \n> sr (cost=0.00..30.82 rows=136 width=4) (actual \n> time=0.011..0.123 rows=139 loops=1000)\n> Index Cond: (ur.id = sr.uut_result) Total \n> runtime: 441.193 ms (10 rows)\n> \n> === Comment on Run 1 versus Run 2 (adjusted work_mem) ===\n> The difference in setup is value of work_mem. Bigger work_mem \n> gave different\n> cost estimates and selected HashAggregate instead of GroupAggregate.\n> Result was a reduced runtime. I guess that is as expected.\n> \n> (One remark, the switchover between different plans on \n> version 8.1.5 was for\n> work_mem values of 6800 and 6900)\n> \n> === Run 3 (with order by clause):\n> Test \"group by\" and \"order by\" (See also Run 1 and Run 4)\n> - server version 8.2.1\n> - Query executed with \"order by\" clause\n> - work_mem = 8700\n> (tried values from 2000 to 128000 with same cost and plan \n> as result)\n> QUERY PLAN \n> \n> ---------------------------------------------\n> GroupAggregate (cost=44857.70..47976.79 rows=95972 \n> width=37) (actual time=1891.464..2114.462 rows=1000 loops=1)\n> -> Sort (cost=44857.70..45097.63 rows=95972 width=37) \n> (actual time=1891.263..1982.137 rows=138810 loops=1)\n> Sort Key: ur.start_date_time, ur.id, ur.unit_ref, \n> ur.execution_time, u.serial_number\n> -> Nested Loop (cost=0.00..36915.87 rows=95972 \n> width=37) (actual time=0.064..264.358 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5017.65 rows=981 \n> width=37) (actual time=0.047..12.253 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on \n> uut_result ur (cost=0.00..1538.77 rows=1000 width=24) \n> (actual time=0.029..1.743 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND \n> (id <= 174999))\n> -> Index Scan using units_pkey on units \n> u (cost=0.00..3.47 rows=1 width=17) (actual \n> time=0.006..0.007 rows=1 loops=1000)\n> Index Cond: (ur.unit_ref = u.ref)\n> -> Index Scan using uut_result_key on \n> step_result sr (cost=0.00..30.82 rows=136 width=4) (actual \n> time=0.011..0.124 rows=139 loops=1000)\n> Index Cond: (ur.id = sr.uut_result) \n> Total runtime: 2118.986 ms\n> (12 rows)\n> \n> === Run 4 (with order by clause, on server 8.1.4):\n> Test \"group by\" and \"order by\" (See also Run 1 and Run 3)\n> - server version 8.1.4\n> - Query executed with \"order by\" clause\n> - work_mem = 6900\n> (same plan select for all work_mem values above 6900)\n> QUERY PLAN \n> \n> ------------------------------------------------------------\n> Sort (cost=46578.83..46820.66 rows=96734 width=37) (actual 
\n> time=505.562..505.988 rows=1000 loops=1)\n> Sort Key: ur.start_date_time\n> -> HashAggregate (cost=37117.40..38568.41 rows=96734 \n> width=37) (actual time=498.697..502.374 rows=1000 loops=1)\n> -> Nested Loop (cost=0.00..35666.39 rows=96734 \n> width=37) (actual time=0.058..288.270 rows=138810 loops=1)\n> -> Nested Loop (cost=0.00..5342.20 rows=984 \n> width=37) (actual time=0.042..11.773 rows=1000 loops=1)\n> -> Index Scan using uut_result_pkey on \n> uut_result ur (cost=0.00..1626.46 rows=1003 width=24) \n> (actual time=0.020..1.868 rows=1000 loops=1)\n> Index Cond: ((id >= 174000) AND \n> (id <= 174999))\n> -> Index Scan using units_pkey on units \n> u (cost=0.00..3.69 rows=1 width=17) (actual \n> time=0.006..0.007 rows=1 loops=1000)\n> Index Cond: (\"outer\".unit_ref = u.ref)\n> -> Index Scan using uut_result_key on \n> step_result sr (cost=0.00..29.09 rows=138 width=4) (actual \n> time=0.006..0.146 rows=139 loops=1000)\n> Index Cond: (\"outer\".id = sr.uut_result) \n> Total runtime: 507.452 ms\n> (12 rows)\n> \n> === Coemment on selected plan for 8.2.1 when using \"order by\" ===\n> Run 3 (8.2.1 with order by) selects same plan as Run1 \n> (without order by).\n> It does hovever exist a better plan for Run3, and 8.1.5 \n> manages to select\n> that plan (shown in Run 4).\n> Both versions (8.1.5 and 8.2.1) uses same plan until the \n> uppermost Nested Loop.\n> The big difference is that 8.1.5 then will do HashAggregate, \n> and then sort,\n> while 8.2.1 will does a sort and then a GroupAggregate.\n> \n> I have tried different combinations for statistics_target, cpu_*_cost,\n> work_mem and random page cost without finding a solution. \n> \n> Anyone with an idea on what to do? Feel free to suggest one \n> of the above\n> parameters, i might have overlooked some combination.\n> \n> I am a little unsure on how much extra information is necessery, but i\n> will provide some:\n> \n> The three tables are\n> units\t\tList of produced items\n> uut_Result\t\tSummary of test result\n> step_result\tIndividuel tests results\n> The system is a production test log. 
(there are a lot of units which\n> does not have an entry in uut_result).\n> \n> Table \"public.units\"\n> Column | Type | \n> Modifiers \n> ------------------+-----------------------+-------------------\n> ----------\n> ------------------+-----------------------+-------------------\n> ----------\n> ------------------+-----------------------+----------\n> ref | integer | not null default \n> nextval(('public.units_ref_seq'::text)::regclass)\n> serial_number | character varying(30) | not null\n> product_ref | integer | not null\n> week | integer | not null\n> status | integer | not null\n> comment | text | \n> last_user | text | default \"current_user\"()\n> last_date | date | default \n> ('now'::text)::date\n> product_info_ref | integer | not null\n> Indexes:\n> \"units_pkey\" PRIMARY KEY, btree (ref)\n> \"units_no_sno_idx\" UNIQUE, btree (product_ref, week) \n> WHERE serial_number::text = ''::text\n> \"units_serial_number_idx\" UNIQUE, btree (serial_number, \n> product_info_ref) WHERE serial_number::text <> ''::text\n> \"units_product_ref_key\" btree (product_ref)\n> Triggers:\n> ct_unit_update_log AFTER UPDATE ON units FOR EACH ROW \n> EXECUTE PROCEDURE cf_unit_update_log()\n> ct_units_update_product_info_ref BEFORE INSERT OR UPDATE \n> ON units FOR EACH ROW EXECUTE PROCEDURE \n> cf_units_update_product_info_ref()\n> select count(*) from units => 292 676 rows\n> \n> Table \"public.uut_result\"\n> Column | Type | \n> Modifiers \n> -------------------+-----------------------------+------------\n> ----------\n> -------------------+-----------------------------+------------\n> ----------\n> -------------------+-----------------------------+--------\n> id | integer | not null\n> uut_serial_number | text | \n> unit_ref | integer | \n> order_unit_ref | integer | \n> user_login_name | text | \n> start_date_time | timestamp without time zone | \n> execution_time | double precision | \n> uut_status | text | \n> uut_error_code | integer | \n> uut_error_message | text | \n> last_user | text | default \n> \"current_user\"()\n> last_timestamp | timestamp with time zone | default \n> ('now'::text)::timestamp(6) with time zone\n> test_name | text | \n> teststation_name | text | \n> teststation_ref | integer | \n> process_step_ref | integer | \n> Indexes:\n> \"uut_result_pkey\" PRIMARY KEY, btree (id)\n> \"uut_result_start_date_time_idx\" btree (start_date_time)\n> \"uut_result_test_name\" btree (test_name)\n> Triggers:\n> ct_set_process_step_ref BEFORE INSERT OR UPDATE ON \n> uut_result FOR EACH ROW EXECUTE PROCEDURE \n> cf_set_process_step_ref() select count(*) from uut_result => \n> 180 111 rows\n> \n> Table \"public.step_result\"\n> Column | Type | Modifiers \n> --------------------+------------------+-----------\n> id | integer | not null\n> uut_result | integer | \n> step_parent | integer | \n> step_name | text | \n> step_extra_info | text | \n> step_type | text | \n> status | text | \n> report_text | text | \n> error_code | integer | \n> error_message | text | \n> module_time | double precision | \n> total_time | double precision | \n> num_loops | integer | \n> num_passed | integer | \n> num_failed | integer | \n> ending_loop_index | integer | \n> loop_index | integer | \n> interactive_exenum | integer | \n> step_group | text | \n> step_index | integer | \n> order_number | integer | \n> pass_fail | integer | \n> numeric_value | double precision | \n> high_limit | double precision | \n> low_limit | double precision | \n> comp_operator | text | \n> string_value | text | \n> string_limit | text | \n> 
button_pressed | integer | \n> response | text | \n> exit_code | integer | \n> num_limits_in_file | integer | \n> num_rows_in_file | integer | \n> num_limits_applied | integer | \n> sequence_name | text | \n> sequence_file_path | text | \n> Indexes:\n> \"step_result_pkey\" PRIMARY KEY, btree (id)\n> \"step_parent_key\" btree (step_parent)\n> \"temp_index_idx\" btree (sequence_file_path)\n> \"uut_result_key\" btree (uut_result)\n> select count(*) from step_result => 17 624 657 rows\n> \n> Best regards\n> Rolf Østvik\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n",
"msg_date": "Sun, 14 Jan 2007 16:34:03 +0100",
"msg_from": "=?iso-8859-1?Q?Rolf_=D8stvik_=28HA/EXA=29?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with grouping, uses Sort and GroupAggregate,\n\tHashAggregate is better(?)"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Rolf Østvik (HA/EXA)\n\nHave you tried \"set enable_sort=off\" with 8.1.2? I'm not sure if that will\nchange anything because it has to do at least one sort. Its just a lots\nfaster to do a hashagg + small sort than one big sort in this case. (I\nwonder if there should be enable_groupagg?) \n\n",
"msg_date": "Sun, 14 Jan 2007 10:12:18 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with grouping, uses Sort and GroupAggregate,\n\tHashAggregate is better(?)"
},
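For reference, enable_sort is a real planner switch that can be flipped per session, while the enable_groupagg mentioned in the parenthesis is only hypothetical. A sketch of the suggested test, using the ORDER BY form of the query from the thread:

-- Sketch: enable_sort = off does not forbid sorting, it only prices explicit
-- Sort nodes very high, so a sort-based plan can still win when no other
-- path exists; it may therefore leave the chosen plan unchanged here.
SET enable_sort = off;
EXPLAIN ANALYZE
SELECT ur.id AS ur_id,
       ur.unit_ref,
       ur.execution_time,
       u.serial_number,
       to_char(ur.start_date_time, 'YYYY-MM-DD'),
       count(*) AS num_test
FROM uut_result AS ur
JOIN units AS u        ON ur.unit_ref = u.ref
JOIN step_result AS sr ON ur.id = sr.uut_result
WHERE ur.id BETWEEN 174000 AND 174000 + 999
GROUP BY ur.id, ur.unit_ref, ur.execution_time,
         u.serial_number, ur.start_date_time
ORDER BY ur.start_date_time;
RESET enable_sort;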
{
"msg_contents": "\nOn 14-Jan-07, at 10:34 AM, Rolf �stvik (HA/EXA) wrote:\n\n> Computer:\n> \tDell PowerEdge 2950\n> \topenSUSE Linux 10.1\n> \tIntel(R) Xeon 3.00GHz\n> \t4GB memory\n> \txfs filesystem on SAS disks\n>\n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of\n>> Rolf �stvik (HA/EXA)\n>> Sent: Sunday, January 14, 2007 1:44 PM\n>> To: [email protected]\n>> Subject: [PERFORM] Problem with grouping, uses Sort and\n>> GroupAggregate, HashAggregate is better(?)\n>>\n>> (now with a more sensible subject)\n>>\n>> I have been trying to change a many parameters on server versions\n>> 7.4.15, 8.1.4, 8.2.0 and 8.2.1. I still hope a have managed to keep\n>> my head straigth and that i do not present to much faulty \n>> information.\n>>\n>> The cost estimates generated by the different server versions differ.\n>> I have a query which (as far as i can tell) have some strange\n>> differences\n>> between 8.2.0 8.2.1. I can provide information about that if\n>> anyone want\n>> it.\n>>\n>> Generally these parameters are used.\n>> default_statistics_target = 10\n>> \t(4 selected columns is set to 1000)\n>> \t(I have tested with 1000 as default value\n>> but that did not have an impact)\n>> \t(analyzed whenever value was changed)\n>> shared_buffers = 64000 (512MB)\n\ndouble shared_buffers\n>> work_mem/sort_mem = variable, see different run's\n>> effective_cache_size = 128000 (1G)\n\ntriple effective_cache (which does not actually use memory but tells \nthe planner what it should expect to see in the buffers)\n\n>> random_page_cost = 2\n>> cpu_index_tuple_cost = 0.001\n>> cpu_operator_cost = 0.025\n>> cpu_tuple_cost = 0.01\n>>\n>> I have tested with different values for random_page_cost and\n>> cpu_*_cost but it have not made a difference.\n>> I have tried with random_page cost between 1 and 8,\n>> and cpu_*_cost with standard value and 50x bigger)\n\nThis is a dubious setting to play with. 
random_page_cost is the ratio \nof random_seeks vs sequential seeks, 4 is generally the right number, \nunless you are using a *very* fast disk, or ram disk.\n>>\n\n\n>> Query is:\n>> explain\n>> analyze\n>> select\n>> ur.id as ur_id,\n>> ur.unit_ref,\n>> ur.execution_time,\n>> u.serial_number,\n>> to_char(ur.start_date_time, 'YYYY-MM-DD'),\n>> count(*) as num_test\n>> from\n>> uut_result as ur\n>> inner join units as u\n>> on ur.unit_ref=u.ref\n>> inner join step_result as sr\n>> on ur.id=sr.uut_result\n>> where\n>> ur.id between 174000 and 174000+999\n>> group by\n>> ur.id,\n>> ur.unit_ref,\n>> ur.execution_time,\n>> u.serial_number,\n>> ur.start_date_time\n>> -- order by\n>> -- ur.start_date_time\n>> ;\n>> NB: order by clause is used in some results below.\n>>\n>> === Run 1:\n>> Detect work_mem setting influence (See also Run 2)\n>> - server version 8.2.1\n>> - Query executed without \"order by\" clause\n>> - work_mem = 8600;\n>> QUERY PLAN\n>>\n>> ---------------------------------------------\n>> GroupAggregate (cost=44857.70..47976.79 rows=95972\n>> width=37) (actual time=1802.716..2017.337 rows=1000 loops=1)\n>> -> Sort (cost=44857.70..45097.63 rows=95972 width=37)\n>> (actual time=1802.461..1892.743 rows=138810 loops=1)\n>> Sort Key: ur.id, ur.unit_ref, ur.execution_time,\n>> u.serial_number, ur.start_date_time\n>> -> Nested Loop (cost=0.00..36915.87 rows=95972\n>> width=37) (actual time=0.063..268.186 rows=138810 loops=1)\n>> -> Nested Loop (cost=0.00..5017.65 rows=981\n>> width=37) (actual time=0.047..11.919 rows=1000 loops=1)\n>> -> Index Scan using uut_result_pkey on\n>> uut_result ur (cost=0.00..1538.77 rows=1000 width=24)\n>> (actual time=0.029..1.727 rows=1000 loops=1)\n>> Index Cond: ((id >= 174000) AND\n>> (id <= 174999))\n>> -> Index Scan using units_pkey on units\n>> u (cost=0.00..3.47 rows=1 width=17) (actual\n>> time=0.006..0.007 rows=1 loops=1000)\n>> Index Cond: (ur.unit_ref = u.ref)\n>> -> Index Scan using uut_result_key on\n>> step_result sr (cost=0.00..30.82 rows=136 width=4) (actual\n>> time=0.011..0.125 rows=139 loops=1000)\n>> Index Cond: (ur.id = sr.uut_result)\n>> Total runtime: 2021.833 ms\n>> (12 rows)\n>>\n>>\n>> === Run 2:\n>> Detect work_mem setting influence (See also Run 1)\n>> - server version 8.2.1\n>> - Query executed without \"order by\" clause\n>> - work_mem = 8700;\n>> QUERY PLAN\n>>\n>> ---------------------------------------------\n>> HashAggregate (cost=38355.45..39795.03 rows=95972 width=37)\n>> (actual time=436.406..439.867 rows=1000 loops=1)\n>> -> Nested Loop (cost=0.00..36915.87 rows=95972 width=37)\n>> (actual time=0.066..256.235 rows=138810 loops=1)\n>> -> Nested Loop (cost=0.00..5017.65 rows=981\n>> width=37) (actual time=0.049..10.858 rows=1000 loops=1)\n>> -> Index Scan using uut_result_pkey on\n>> uut_result ur (cost=0.00..1538.77 rows=1000 width=24)\n>> (actual time=0.031..1.546 rows=1000 loops=1)\n>> Index Cond: ((id >= 174000) AND (id <= 174999))\n>> -> Index Scan using units_pkey on units u\n>> (cost=0.00..3.47 rows=1 width=17) (actual time=0.005..0.006\n>> rows=1 loops=1000)\n>> Index Cond: (ur.unit_ref = u.ref)\n>> -> Index Scan using uut_result_key on step_result\n>> sr (cost=0.00..30.82 rows=136 width=4) (actual\n>> time=0.011..0.123 rows=139 loops=1000)\n>> Index Cond: (ur.id = sr.uut_result) Total\n>> runtime: 441.193 ms (10 rows)\n>>\n>> === Comment on Run 1 versus Run 2 (adjusted work_mem) ===\n>> The difference in setup is value of work_mem. 
Bigger work_mem\n>> gave different\n>> cost estimates and selected HashAggregate instead of GroupAggregate.\n>> Result was a reduced runtime. I guess that is as expected.\n>>\n>> (One remark, the switchover between different plans on\n>> version 8.1.5 was for\n>> work_mem values of 6800 and 6900)\n>>\n>> === Run 3 (with order by clause):\n>> Test \"group by\" and \"order by\" (See also Run 1 and Run 4)\n>> - server version 8.2.1\n>> - Query executed with \"order by\" clause\n>> - work_mem = 8700\n>> (tried values from 2000 to 128000 with same cost and plan\n>> as result)\n>> QUERY PLAN\n>>\n>> ---------------------------------------------\n>> GroupAggregate (cost=44857.70..47976.79 rows=95972\n>> width=37) (actual time=1891.464..2114.462 rows=1000 loops=1)\n>> -> Sort (cost=44857.70..45097.63 rows=95972 width=37)\n>> (actual time=1891.263..1982.137 rows=138810 loops=1)\n>> Sort Key: ur.start_date_time, ur.id, ur.unit_ref,\n>> ur.execution_time, u.serial_number\n>> -> Nested Loop (cost=0.00..36915.87 rows=95972\n>> width=37) (actual time=0.064..264.358 rows=138810 loops=1)\n>> -> Nested Loop (cost=0.00..5017.65 rows=981\n>> width=37) (actual time=0.047..12.253 rows=1000 loops=1)\n>> -> Index Scan using uut_result_pkey on\n>> uut_result ur (cost=0.00..1538.77 rows=1000 width=24)\n>> (actual time=0.029..1.743 rows=1000 loops=1)\n>> Index Cond: ((id >= 174000) AND\n>> (id <= 174999))\n>> -> Index Scan using units_pkey on units\n>> u (cost=0.00..3.47 rows=1 width=17) (actual\n>> time=0.006..0.007 rows=1 loops=1000)\n>> Index Cond: (ur.unit_ref = u.ref)\n>> -> Index Scan using uut_result_key on\n>> step_result sr (cost=0.00..30.82 rows=136 width=4) (actual\n>> time=0.011..0.124 rows=139 loops=1000)\n>> Index Cond: (ur.id = sr.uut_result)\n>> Total runtime: 2118.986 ms\n>> (12 rows)\n>>\n>> === Run 4 (with order by clause, on server 8.1.4):\n>> Test \"group by\" and \"order by\" (See also Run 1 and Run 3)\n>> - server version 8.1.4\n>> - Query executed with \"order by\" clause\n>> - work_mem = 6900\n>> (same plan select for all work_mem values above 6900)\n>> QUERY PLAN\n>>\n>> ------------------------------------------------------------\n>> Sort (cost=46578.83..46820.66 rows=96734 width=37) (actual\n>> time=505.562..505.988 rows=1000 loops=1)\n>> Sort Key: ur.start_date_time\n>> -> HashAggregate (cost=37117.40..38568.41 rows=96734\n>> width=37) (actual time=498.697..502.374 rows=1000 loops=1)\n>> -> Nested Loop (cost=0.00..35666.39 rows=96734\n>> width=37) (actual time=0.058..288.270 rows=138810 loops=1)\n>> -> Nested Loop (cost=0.00..5342.20 rows=984\n>> width=37) (actual time=0.042..11.773 rows=1000 loops=1)\n>> -> Index Scan using uut_result_pkey on\n>> uut_result ur (cost=0.00..1626.46 rows=1003 width=24)\n>> (actual time=0.020..1.868 rows=1000 loops=1)\n>> Index Cond: ((id >= 174000) AND\n>> (id <= 174999))\n>> -> Index Scan using units_pkey on units\n>> u (cost=0.00..3.69 rows=1 width=17) (actual\n>> time=0.006..0.007 rows=1 loops=1000)\n>> Index Cond: (\"outer\".unit_ref = u.ref)\n>> -> Index Scan using uut_result_key on\n>> step_result sr (cost=0.00..29.09 rows=138 width=4) (actual\n>> time=0.006..0.146 rows=139 loops=1000)\n>> Index Cond: (\"outer\".id = sr.uut_result)\n>> Total runtime: 507.452 ms\n>> (12 rows)\n>>\n>> === Coemment on selected plan for 8.2.1 when using \"order by\" ===\n>> Run 3 (8.2.1 with order by) selects same plan as Run1\n>> (without order by).\n>> It does hovever exist a better plan for Run3, and 8.1.5\n>> manages to select\n>> that plan (shown in Run 4).\n>> 
Both versions (8.1.5 and 8.2.1) uses same plan until the\n>> uppermost Nested Loop.\n>> The big difference is that 8.1.5 then will do HashAggregate,\n>> and then sort,\n>> while 8.2.1 will does a sort and then a GroupAggregate.\n>>\n>> I have tried different combinations for statistics_target, \n>> cpu_*_cost,\n>> work_mem and random page cost without finding a solution.\n>>\n>> Anyone with an idea on what to do? Feel free to suggest one\n>> of the above\n>> parameters, i might have overlooked some combination.\n>>\n>> I am a little unsure on how much extra information is necessery, \n>> but i\n>> will provide some:\n>>\n>> The three tables are\n>> units\t\tList of produced items\n>> uut_Result\t\tSummary of test result\n>> step_result\tIndividuel tests results\n>> The system is a production test log. (there are a lot of units which\n>> does not have an entry in uut_result).\n>>\n>> Table \"public.units\"\n>> Column | Type |\n>> Modifiers\n>> ------------------+-----------------------+-------------------\n>> ----------\n>> ------------------+-----------------------+-------------------\n>> ----------\n>> ------------------+-----------------------+----------\n>> ref | integer | not null default\n>> nextval(('public.units_ref_seq'::text)::regclass)\n>> serial_number | character varying(30) | not null\n>> product_ref | integer | not null\n>> week | integer | not null\n>> status | integer | not null\n>> comment | text |\n>> last_user | text | default \"current_user\"()\n>> last_date | date | default\n>> ('now'::text)::date\n>> product_info_ref | integer | not null\n>> Indexes:\n>> \"units_pkey\" PRIMARY KEY, btree (ref)\n>> \"units_no_sno_idx\" UNIQUE, btree (product_ref, week)\n>> WHERE serial_number::text = ''::text\n>> \"units_serial_number_idx\" UNIQUE, btree (serial_number,\n>> product_info_ref) WHERE serial_number::text <> ''::text\n>> \"units_product_ref_key\" btree (product_ref)\n>> Triggers:\n>> ct_unit_update_log AFTER UPDATE ON units FOR EACH ROW\n>> EXECUTE PROCEDURE cf_unit_update_log()\n>> ct_units_update_product_info_ref BEFORE INSERT OR UPDATE\n>> ON units FOR EACH ROW EXECUTE PROCEDURE\n>> cf_units_update_product_info_ref()\n>> select count(*) from units => 292 676 rows\n>>\n>> Table \"public.uut_result\"\n>> Column | Type |\n>> Modifiers\n>> -------------------+-----------------------------+------------\n>> ----------\n>> -------------------+-----------------------------+------------\n>> ----------\n>> -------------------+-----------------------------+--------\n>> id | integer | not null\n>> uut_serial_number | text |\n>> unit_ref | integer |\n>> order_unit_ref | integer |\n>> user_login_name | text |\n>> start_date_time | timestamp without time zone |\n>> execution_time | double precision |\n>> uut_status | text |\n>> uut_error_code | integer |\n>> uut_error_message | text |\n>> last_user | text | default\n>> \"current_user\"()\n>> last_timestamp | timestamp with time zone | default\n>> ('now'::text)::timestamp(6) with time zone\n>> test_name | text |\n>> teststation_name | text |\n>> teststation_ref | integer |\n>> process_step_ref | integer |\n>> Indexes:\n>> \"uut_result_pkey\" PRIMARY KEY, btree (id)\n>> \"uut_result_start_date_time_idx\" btree (start_date_time)\n>> \"uut_result_test_name\" btree (test_name)\n>> Triggers:\n>> ct_set_process_step_ref BEFORE INSERT OR UPDATE ON\n>> uut_result FOR EACH ROW EXECUTE PROCEDURE\n>> cf_set_process_step_ref() select count(*) from uut_result =>\n>> 180 111 rows\n>>\n>> Table \"public.step_result\"\n>> Column | Type | Modifiers\n>> 
--------------------+------------------+-----------\n>> id | integer | not null\n>> uut_result | integer |\n>> step_parent | integer |\n>> step_name | text |\n>> step_extra_info | text |\n>> step_type | text |\n>> status | text |\n>> report_text | text |\n>> error_code | integer |\n>> error_message | text |\n>> module_time | double precision |\n>> total_time | double precision |\n>> num_loops | integer |\n>> num_passed | integer |\n>> num_failed | integer |\n>> ending_loop_index | integer |\n>> loop_index | integer |\n>> interactive_exenum | integer |\n>> step_group | text |\n>> step_index | integer |\n>> order_number | integer |\n>> pass_fail | integer |\n>> numeric_value | double precision |\n>> high_limit | double precision |\n>> low_limit | double precision |\n>> comp_operator | text |\n>> string_value | text |\n>> string_limit | text |\n>> button_pressed | integer |\n>> response | text |\n>> exit_code | integer |\n>> num_limits_in_file | integer |\n>> num_rows_in_file | integer |\n>> num_limits_applied | integer |\n>> sequence_name | text |\n>> sequence_file_path | text |\n>> Indexes:\n>> \"step_result_pkey\" PRIMARY KEY, btree (id)\n>> \"step_parent_key\" btree (step_parent)\n>> \"temp_index_idx\" btree (sequence_file_path)\n>> \"uut_result_key\" btree (uut_result)\n>> select count(*) from step_result => 17 624 657 rows\n>>\n>> Best regards\n>> Rolf �stvik\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Sun, 14 Jan 2007 11:52:52 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with grouping, uses Sort and GroupAggregate,\n\tHashAggregate is better(?)"
},
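Put into concrete numbers against the settings quoted above (64000 pages of shared_buffers, about 512MB, and 128000 pages of effective_cache_size, about 1GB, on a 4GB machine), the advice in this reply looks roughly like the sketch below. The exact figures are scaled assumptions, not values given in the message:

-- shared_buffers cannot be changed with SET; doubling it means putting
-- shared_buffers = 128000 (about 1GB in 8kB pages) into postgresql.conf
-- and restarting. The two planner-only knobs can be tried per session first:
SET effective_cache_size = 384000;  -- about 3GB in 8kB pages; a planner hint only, no RAM is allocated
SET random_page_cost = 4;           -- back to the default random/sequential ratio recommended above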
{
"msg_contents": " \n\n> -----Original Message-----\n> From: Dave Dutcher [mailto:[email protected]] \n> Sent: Sunday, January 14, 2007 5:12 PM\n> To: Rolf Østvik (HA/EXA); [email protected]\n> Subject: RE: [PERFORM] Problem with grouping, uses Sort and \n> GroupAggregate, HashAggregate is better(?)\n> \n> > -----Original Message-----\n> > From: [email protected] \n> > [mailto:[email protected]] On Behalf Of \n> > Rolf Østvik (HA/EXA)\n> \n> Have you tried \"set enable_sort=off\" with 8.1.2? I'm not \n> sure if that will\n> change anything because it has to do at least one sort. Its \n> just a lots\n> faster to do a hashagg + small sort than one big sort in this \n> case. (I\n> wonder if there should be enable_groupagg?) \n\nDid you mean enable_sort = 'off' for 8.2.1?\n\n\nI tried to set enable_sort = 'off' for both the\n8.1.4 server and the 8.2.1 server.\nBoth servers used the same plan as Run 4 and Run 3 respectively.\nThere were of course some changes in the planner cost for the sort \nsteps, but the execution times was of course the same.\n\nRegards\nRolf Østvik\n",
"msg_date": "Mon, 15 Jan 2007 09:58:25 +0100",
"msg_from": "=?iso-8859-1?Q?Rolf_=D8stvik_=28HA/EXA=29?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with grouping, uses Sort and GroupAggregate,\n\tHashAggregate is better(?)"
}
] |
[
{
"msg_contents": "\nDid anybody get a chance to look at this? Is it expected behavior?\nEveryone seemed so incredulous, I hoped maybe this exposed a bug\nthat would be fixed in a near release.\n\n\n-----Original Message-----\nFrom: Adam Rich [mailto:[email protected]] \nSent: Sunday, January 07, 2007 11:53 PM\nTo: 'Joshua D. Drake'; 'Tom Lane'\nCc: 'Craig A. James'; 'PostgreSQL Performance'\nSubject: RE: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n\n\n\nHere's another, more drastic example... Here the order by / limit\nversion\nruns in less than 1/7000 the time of the MAX() version.\n\n\nselect max(item_id)\nfrom events e, receipts r, receipt_items ri\nwhere e.event_id=r.event_id and r.receipt_id=ri.receipt_id\n\nAggregate (cost=10850.84..10850.85 rows=1 width=4) (actual\ntime=816.382..816.383 rows=1 loops=1)\n -> Hash Join (cost=2072.12..10503.30 rows=139019 width=4) (actual\ntime=155.177..675.870 rows=147383 loops=1)\n Hash Cond: (ri.receipt_id = r.receipt_id)\n -> Seq Scan on receipt_items ri (cost=0.00..4097.56\nrows=168196 width=8) (actual time=0.009..176.894 rows=168196 loops=1)\n -> Hash (cost=2010.69..2010.69 rows=24571 width=4) (actual\ntime=155.146..155.146 rows=24571 loops=1)\n -> Hash Join (cost=506.84..2010.69 rows=24571 width=4)\n(actual time=34.803..126.452 rows=24571 loops=1)\n Hash Cond: (r.event_id = e.event_id)\n -> Seq Scan on receipts r (cost=0.00..663.58\nrows=29728 width=8) (actual time=0.006..30.870 rows=29728 loops=1)\n -> Hash (cost=469.73..469.73 rows=14843 width=4)\n(actual time=34.780..34.780 rows=14843 loops=1)\n -> Seq Scan on events e (cost=0.00..469.73\nrows=14843 width=4) (actual time=0.007..17.603 rows=14843 loops=1)\nTotal runtime: 816.645 ms\n\nselect item_id\nfrom events e, receipts r, receipt_items ri\nwhere e.event_id=r.event_id and r.receipt_id=ri.receipt_id\norder by item_id desc limit 1\n\n\nLimit (cost=0.00..0.16 rows=1 width=4) (actual time=0.047..0.048 rows=1\nloops=1)\n -> Nested Loop (cost=0.00..22131.43 rows=139019 width=4) (actual\ntime=0.044..0.044 rows=1 loops=1)\n -> Nested Loop (cost=0.00..12987.42 rows=168196 width=8)\n(actual time=0.032..0.032 rows=1 loops=1)\n -> Index Scan Backward using receipt_items_pkey on\nreceipt_items ri (cost=0.00..6885.50 rows=168196 width=8) (actual\ntime=0.016..0.016 rows=1 loops=1)\n -> Index Scan using receipts_pkey on receipts r\n(cost=0.00..0.02 rows=1 width=8) (actual time=0.010..0.010 rows=1\nloops=1)\n Index Cond: (r.receipt_id = ri.receipt_id)\n -> Index Scan using events_pkey on events e (cost=0.00..0.04\nrows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n Index Cond: (e.event_id = r.event_id)\nTotal runtime: 0.112 ms\n\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Joshua D.\nDrake\nSent: Sunday, January 07, 2007 9:10 PM\nTo: Adam Rich\nCc: 'Craig A. James'; 'Guy Rouillier'; 'PostgreSQL Performance'\nSubject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n\n\nOn Sun, 2007-01-07 at 20:26 -0600, Adam Rich wrote:\n> I'm using 8.2 and using order by & limit is still faster than MAX()\n> even though MAX() now seems to rewrite to an almost identical plan\n> internally.\n\n\nGonna need you to back that up :) Can we get an explain analyze?\n\n\n> Count(*) still seems to use a full table scan rather than an index\nscan.\n> \n\nThere is a TODO out there to help this. Don't know if it will get done.\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Sun, 14 Jan 2007 22:52:16 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: max() versus order/limit (WAS: High update activity,\n\tPostgreSQL vs BigDBMS)"
},
{
"msg_contents": "Adam Rich wrote:\n> \n> Did anybody get a chance to look at this? Is it expected behavior?\n> Everyone seemed so incredulous, I hoped maybe this exposed a bug\n> that would be fixed in a near release.\n\nActually, the planner is only able to do the min()/max() transformation\ninto order by/limit in the case of a single table being scanned. Since\nyou have a join here, the optimization is obviously not used:\n\n> select max(item_id)\n> from events e, receipts r, receipt_items ri\n> where e.event_id=r.event_id and r.receipt_id=ri.receipt_id\n\nplan/planagg.c says\n\n /*\n * We also restrict the query to reference exactly one table, since join\n * conditions can't be handled reasonably. (We could perhaps handle a\n * query containing cartesian-product joins, but it hardly seems worth the\n * trouble.)\n */\n\nso you should keep using your hand-written order by/limit query.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 15 Jan 2007 07:35:36 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max() versus order/limit (WAS: High update activity,\n\tPostgreSQL vs BigDBMS)"
}
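A small illustration of the planagg.c restriction quoted above, reusing the table and column names from the plans earlier in the thread (it assumes item_id is the leading column of receipt_items_pkey, which the backward index scan in those plans suggests):

-- Single table: since 8.1 the planner rewrites this into backward index
-- access on its own, so plain max() is already fast here.
EXPLAIN SELECT max(item_id) FROM receipt_items;

-- With joins, planagg.c skips the transformation, so the hand-written
-- ORDER BY ... LIMIT 1 form from the thread stays the fast way to get
-- the same answer.
EXPLAIN
SELECT ri.item_id
FROM events e
JOIN receipts r       ON e.event_id = r.event_id
JOIN receipt_items ri ON r.receipt_id = ri.receipt_id
ORDER BY ri.item_id DESC
LIMIT 1;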
] |
[
{
"msg_contents": "Adam,\n\nThis optimization would require teaching the planner to use an index for\nMAX/MIN when available. It seems like an OK thing to do to me.\n\n- Luke\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Adam Rich\n> Sent: Sunday, January 14, 2007 8:52 PM\n> To: 'Joshua D. Drake'; 'Tom Lane'\n> Cc: 'Craig A. James'; 'PostgreSQL Performance'\n> Subject: Re: [PERFORM] max() versus order/limit (WAS: High \n> update activity, PostgreSQL vs BigDBMS)\n> \n> \n> Did anybody get a chance to look at this? Is it expected behavior?\n> Everyone seemed so incredulous, I hoped maybe this exposed a \n> bug that would be fixed in a near release.\n> \n> \n> -----Original Message-----\n> From: Adam Rich [mailto:[email protected]]\n> Sent: Sunday, January 07, 2007 11:53 PM\n> To: 'Joshua D. Drake'; 'Tom Lane'\n> Cc: 'Craig A. James'; 'PostgreSQL Performance'\n> Subject: RE: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n> \n> \n> \n> Here's another, more drastic example... Here the order by / limit\n> version\n> runs in less than 1/7000 the time of the MAX() version.\n> \n> \n> select max(item_id)\n> from events e, receipts r, receipt_items ri\n> where e.event_id=r.event_id and r.receipt_id=ri.receipt_id\n> \n> Aggregate (cost=10850.84..10850.85 rows=1 width=4) (actual\n> time=816.382..816.383 rows=1 loops=1)\n> -> Hash Join (cost=2072.12..10503.30 rows=139019 width=4) (actual\n> time=155.177..675.870 rows=147383 loops=1)\n> Hash Cond: (ri.receipt_id = r.receipt_id)\n> -> Seq Scan on receipt_items ri (cost=0.00..4097.56\n> rows=168196 width=8) (actual time=0.009..176.894 rows=168196 loops=1)\n> -> Hash (cost=2010.69..2010.69 rows=24571 width=4) (actual\n> time=155.146..155.146 rows=24571 loops=1)\n> -> Hash Join (cost=506.84..2010.69 rows=24571 width=4)\n> (actual time=34.803..126.452 rows=24571 loops=1)\n> Hash Cond: (r.event_id = e.event_id)\n> -> Seq Scan on receipts r (cost=0.00..663.58\n> rows=29728 width=8) (actual time=0.006..30.870 rows=29728 loops=1)\n> -> Hash (cost=469.73..469.73 rows=14843 width=4)\n> (actual time=34.780..34.780 rows=14843 loops=1)\n> -> Seq Scan on events e (cost=0.00..469.73\n> rows=14843 width=4) (actual time=0.007..17.603 rows=14843 loops=1)\n> Total runtime: 816.645 ms\n> \n> select item_id\n> from events e, receipts r, receipt_items ri\n> where e.event_id=r.event_id and r.receipt_id=ri.receipt_id\n> order by item_id desc limit 1\n> \n> \n> Limit (cost=0.00..0.16 rows=1 width=4) (actual \n> time=0.047..0.048 rows=1\n> loops=1)\n> -> Nested Loop (cost=0.00..22131.43 rows=139019 width=4) (actual\n> time=0.044..0.044 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..12987.42 rows=168196 width=8)\n> (actual time=0.032..0.032 rows=1 loops=1)\n> -> Index Scan Backward using receipt_items_pkey on\n> receipt_items ri (cost=0.00..6885.50 rows=168196 width=8) (actual\n> time=0.016..0.016 rows=1 loops=1)\n> -> Index Scan using receipts_pkey on receipts r\n> (cost=0.00..0.02 rows=1 width=8) (actual time=0.010..0.010 rows=1\n> loops=1)\n> Index Cond: (r.receipt_id = ri.receipt_id)\n> -> Index Scan using events_pkey on events e (cost=0.00..0.04\n> rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n> Index Cond: (e.event_id = r.event_id)\n> Total runtime: 0.112 ms\n> \n> \n> \n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Joshua D.\n> Drake\n> Sent: Sunday, January 07, 2007 9:10 PM\n> To: Adam Rich\n> Cc: 'Craig A. 
James'; 'Guy Rouillier'; 'PostgreSQL Performance'\n> Subject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n> \n> \n> On Sun, 2007-01-07 at 20:26 -0600, Adam Rich wrote:\n> > I'm using 8.2 and using order by & limit is still faster than MAX()\n> > even though MAX() now seems to rewrite to an almost identical plan\n> > internally.\n> \n> \n> Gonna need you to back that up :) Can we get an explain analyze?\n> \n> \n> > Count(*) still seems to use a full table scan rather than an index\n> scan.\n> > \n> \n> There is a TODO out there to help this. Don't know if it will \n> get done.\n> \n> Joshua D. Drake\n> \n> -- \n> \n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n> \n> Donate to the PostgreSQL Project: \n> http://www.postgresql.org/about/donate\n> \n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n> \n\n",
"msg_date": "Mon, 15 Jan 2007 02:42:34 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: max() versus order/limit (WAS: High update"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Adam,\n> \n> This optimization would require teaching the planner to use an index for\n> MAX/MIN when available. It seems like an OK thing to do to me.\n\nThis optimization already exists, albeit for queries that use a single\ntable.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 15 Jan 2007 07:38:09 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max() versus order/limit (WAS: High update"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Adam,\n> \n> This optimization would require teaching the planner to use an index for\n> MAX/MIN when available. It seems like an OK thing to do to me.\n\nUhmmm I thought we did that already in 8.1?\n\nJoshua D. Drake\n\n\n> \n> - Luke\n> \n>> -----Original Message-----\n>> From: [email protected] \n>> [mailto:[email protected]] On Behalf Of Adam Rich\n>> Sent: Sunday, January 14, 2007 8:52 PM\n>> To: 'Joshua D. Drake'; 'Tom Lane'\n>> Cc: 'Craig A. James'; 'PostgreSQL Performance'\n>> Subject: Re: [PERFORM] max() versus order/limit (WAS: High \n>> update activity, PostgreSQL vs BigDBMS)\n>>\n>>\n>> Did anybody get a chance to look at this? Is it expected behavior?\n>> Everyone seemed so incredulous, I hoped maybe this exposed a \n>> bug that would be fixed in a near release.\n>>\n>>\n>> -----Original Message-----\n>> From: Adam Rich [mailto:[email protected]]\n>> Sent: Sunday, January 07, 2007 11:53 PM\n>> To: 'Joshua D. Drake'; 'Tom Lane'\n>> Cc: 'Craig A. James'; 'PostgreSQL Performance'\n>> Subject: RE: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n>>\n>>\n>>\n>> Here's another, more drastic example... Here the order by / limit\n>> version\n>> runs in less than 1/7000 the time of the MAX() version.\n>>\n>>\n>> select max(item_id)\n>> from events e, receipts r, receipt_items ri\n>> where e.event_id=r.event_id and r.receipt_id=ri.receipt_id\n>>\n>> Aggregate (cost=10850.84..10850.85 rows=1 width=4) (actual\n>> time=816.382..816.383 rows=1 loops=1)\n>> -> Hash Join (cost=2072.12..10503.30 rows=139019 width=4) (actual\n>> time=155.177..675.870 rows=147383 loops=1)\n>> Hash Cond: (ri.receipt_id = r.receipt_id)\n>> -> Seq Scan on receipt_items ri (cost=0.00..4097.56\n>> rows=168196 width=8) (actual time=0.009..176.894 rows=168196 loops=1)\n>> -> Hash (cost=2010.69..2010.69 rows=24571 width=4) (actual\n>> time=155.146..155.146 rows=24571 loops=1)\n>> -> Hash Join (cost=506.84..2010.69 rows=24571 width=4)\n>> (actual time=34.803..126.452 rows=24571 loops=1)\n>> Hash Cond: (r.event_id = e.event_id)\n>> -> Seq Scan on receipts r (cost=0.00..663.58\n>> rows=29728 width=8) (actual time=0.006..30.870 rows=29728 loops=1)\n>> -> Hash (cost=469.73..469.73 rows=14843 width=4)\n>> (actual time=34.780..34.780 rows=14843 loops=1)\n>> -> Seq Scan on events e (cost=0.00..469.73\n>> rows=14843 width=4) (actual time=0.007..17.603 rows=14843 loops=1)\n>> Total runtime: 816.645 ms\n>>\n>> select item_id\n>> from events e, receipts r, receipt_items ri\n>> where e.event_id=r.event_id and r.receipt_id=ri.receipt_id\n>> order by item_id desc limit 1\n>>\n>>\n>> Limit (cost=0.00..0.16 rows=1 width=4) (actual \n>> time=0.047..0.048 rows=1\n>> loops=1)\n>> -> Nested Loop (cost=0.00..22131.43 rows=139019 width=4) (actual\n>> time=0.044..0.044 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..12987.42 rows=168196 width=8)\n>> (actual time=0.032..0.032 rows=1 loops=1)\n>> -> Index Scan Backward using receipt_items_pkey on\n>> receipt_items ri (cost=0.00..6885.50 rows=168196 width=8) (actual\n>> time=0.016..0.016 rows=1 loops=1)\n>> -> Index Scan using receipts_pkey on receipts r\n>> (cost=0.00..0.02 rows=1 width=8) (actual time=0.010..0.010 rows=1\n>> loops=1)\n>> Index Cond: (r.receipt_id = ri.receipt_id)\n>> -> Index Scan using events_pkey on events e (cost=0.00..0.04\n>> rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)\n>> Index Cond: (e.event_id = r.event_id)\n>> Total runtime: 0.112 ms\n>>\n>>\n>>\n>>\n>>\n>> -----Original Message-----\n>> From: [email 
protected]\n>> [mailto:[email protected]] On Behalf Of Joshua D.\n>> Drake\n>> Sent: Sunday, January 07, 2007 9:10 PM\n>> To: Adam Rich\n>> Cc: 'Craig A. James'; 'Guy Rouillier'; 'PostgreSQL Performance'\n>> Subject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS\n>>\n>>\n>> On Sun, 2007-01-07 at 20:26 -0600, Adam Rich wrote:\n>>> I'm using 8.2 and using order by & limit is still faster than MAX()\n>>> even though MAX() now seems to rewrite to an almost identical plan\n>>> internally.\n>>\n>> Gonna need you to back that up :) Can we get an explain analyze?\n>>\n>>\n>>> Count(*) still seems to use a full table scan rather than an index\n>> scan.\n>> There is a TODO out there to help this. Don't know if it will \n>> get done.\n>>\n>> Joshua D. Drake\n>>\n>> -- \n>>\n>> === The PostgreSQL Company: Command Prompt, Inc. ===\n>> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n>> Providing the most comprehensive PostgreSQL solutions since 1997\n>> http://www.commandprompt.com/\n>>\n>> Donate to the PostgreSQL Project: \n>> http://www.postgresql.org/about/donate\n>>\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 7: You can help support the PostgreSQL project by donating at\n>>\n>> http://www.postgresql.org/about/donate\n>>\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Mon, 15 Jan 2007 07:51:50 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max() versus order/limit (WAS: High update"
}
] |