[ { "msg_contents": "> 2/ Batch-inserts using jdbc (maybe this should go to the jdbc-mailing\nlist - \n> but it is also performance related ...):\n> Performing many inserts using a PreparedStatement and batch execution\nmakes a \n> significant performance improvement in Oracle. In postgres, I did not\nobserve \n> any performance improvement using batch execution. Are there any\nspecial \n> caveats when using batch execution with postgres?\n\nWhen you call executeBatch(), it doesn't send all the queries in a\nsingle round-trip; it just iterates through the batched queries and\nexecutes them one by one. In my own applications, I've done\nsimulated-batch queries like this:\n\ninsert into T (a, b, c)\n select 1,2,3 union all\n select 2,3,4 union all\n select 3,4,5\n\nIt's ugly, and you have to structure your code in such a way that the\nquery can't get too large, but it provides a similar performance benefit\nto batching. You probably don't save nearly as much parse time as using\na batched PreparedStatement, but you at least get rid of the network\nroundtrips.\n\n(Of course, it'd be much nicer if statement-batching worked. There have\nbeen rumblings about doing this, and some discussion on how to do it,\nbut I haven't heard about any progress. Anyone?)\n\nmike\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bernd\nSent: Friday, October 15, 2004 5:25 AM\nTo: [email protected]\nSubject: [PERFORM] Select with qualified join condition / Batch inserts\n\n\nHi,\n\nwe are working on a product which was originally developed against an\nOracle \ndatabase and which should be changed to also work with postgres. \n\nOverall the changes we had to make are very small and we are very\npleased with \nthe good performance of postgres - but we also found queries which\nexecute \nmuch faster on Oracle. Since I am not yet familiar with tuning queries\nfor \npostgres, it would be great if someone could give me a hint on the\nfollowing \ntwo issues. (We are using PG 8.0.0beta3 on Linux kernel 2.4.27):\n\n1/ The following query takes about 5 sec. 
with postrgres whereas on\nOracle it \nexecutes in about 30 ms (although both tables only contain 200 k records\nin \nthe postgres version).\n\nSQL:\n\nSELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION \n\tFROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con \n\tWHERE cmp.BARCODE=con.BARCODE \n\t\tAND cmp.WELL_INDEX=con.WELL_INDEX \n\t\tAND cmp.MAT_ID=con.MAT_ID \n\t\tAND cmp.MAT_ID = 3 \n\t\tAND cmp.BARCODE='910125864' \n\t\tAND cmp.ID_LEVEL = 1;\n\nTable-def:\n Table \"public.scr_well_compound\"\n Column | Type | Modifiers\n------------+------------------------+-----------\n mat_id | numeric(10,0) | not null\n barcode | character varying(240) | not null\n well_index | numeric(5,0) | not null\n id_level | numeric(3,0) | not null\n compound | character varying(240) | not null\nIndexes:\n \"scr_wcm_pk\" PRIMARY KEY, btree (id_level, mat_id, barcode,\nwell_index) Foreign-key constraints:\n \"scr_wcm_mat_fk\" FOREIGN KEY (mat_id) REFERENCES\nscr_mapping_table(mat_id) \nON DELETE CASCADE\n\n Table \"public.scr_well_concentration\"\n Column | Type | Modifiers\n---------------+------------------------+-----------\n mat_id | numeric(10,0) | not null\n barcode | character varying(240) | not null\n well_index | numeric(5,0) | not null\n concentration | numeric(20,10) | not null\nIndexes:\n \"scr_wco_pk\" PRIMARY KEY, btree (mat_id, barcode, well_index)\nForeign-key constraints:\n \"scr_wco_mat_fk\" FOREIGN KEY (mat_id) REFERENCES\nscr_mapping_table(mat_id) \nON DELETE CASCADE\n\nI tried several variants of the query (including the SQL 92 JOIN ON\nsyntax) \nbut with no success. I have also rebuilt the underlying indices.\n\nA strange observation is that the same query runs pretty fast without\nthe \nrestriction to a certain MAT_ID, i. e. omitting the MAT_ID=3 part.\n\nAlso fetching the data for both tables separately is pretty fast and a \npossible fallback would be to do the actual join in the application\n(which is \nof course not as beautiful as doing it using SQL ;-)\n\n2/ Batch-inserts using jdbc (maybe this should go to the jdbc-mailing\nlist - \nbut it is also performance related ...):\nPerforming many inserts using a PreparedStatement and batch execution\nmakes a \nsignificant performance improvement in Oracle. In postgres, I did not\nobserve \nany performance improvement using batch execution. Are there any special\n\ncaveats when using batch execution with postgres?\n\nThanks and regards\n\nBernd\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Fri, 15 Oct 2004 09:17:06 -0500", "msg_from": "\"Michael Nonemacher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select with qualified join condition / Batch inserts" } ]
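A minimal C/libpq sketch of the simulated-batch idea described in the thread above: instead of sending one INSERT per row (and per network round-trip), several rows are folded into a single INSERT ... SELECT ... UNION ALL statement and sent in one call, which is the same trick the JDBC string-building approach uses. Everything here is illustrative: the connection string, the table t and its columns, the hard-coded integer values, and the batch size are placeholder assumptions, and real code would need to escape or parameterize values and keep the statement from growing too large, as the poster notes.

/* Hedged sketch only: simulate statement batching with one multi-row
 * INSERT per round-trip.  Table, columns and conninfo are hypothetical. */
#include <stdio.h>
#include <libpq-fe.h>

#define BATCH_ROWS 100                  /* cap so the statement stays small */

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");
    PGresult *res;
    char      sql[32768];
    int       len, i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Build: insert into t (a, b, c) select ... union all select ... */
    len = snprintf(sql, sizeof(sql), "insert into t (a, b, c)");
    for (i = 0; i < BATCH_ROWS && len < (int) sizeof(sql) - 64; i++)
        len += snprintf(sql + len, sizeof(sql) - len,
                        "%s select %d, %d, %d",
                        i ? " union all" : "", i, i + 1, i + 2);

    res = PQexec(conn, sql);            /* one round-trip for all rows */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
    PQclear(res);
    PQfinish(conn);
    return 0;
}

The loop guard on len is the crude equivalent of the "can't get too large" caveat above; the gain comes from the single round-trip, not from any parse-time savings.
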
[ { "msg_contents": "Hello,\nI've seen a couple references to using ipcs to help properly size \nshared_buffers.\n\nI don't claim to be a SA guru, so could someone help explain how to \ninterpret the output of ipcs and how that relates to shared_buffers? How \ndoes one determine the size of the segment arrays? I see the total size \nusing ipcs -m which is roughly shared_buffers * 8k.\n\nI tried all of the dash commands in the ipcs man page, and the only one \nthat might give a clue is ipcs -t which shows the time the semaphores \nwere last used. If you look at the example I give below, it appears as \nif I'm only using 4 of the 17 semaphores (PG was started on Oct 8).\n\nAm I correct in assuming that if the arrays are all the same size then I \nshould only need about 1/4 of my currently allocated shared_buffers?\n\n------ Shared Memory Operation/Change Times --------\nshmid owner last-op last-changed \n847183872 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847216641 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847249410 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847282179 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847314948 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847347717 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847380486 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847413255 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847446024 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847478793 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847511562 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847544331 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847577100 postgres Fri Oct 8 11:03:31 2004 Fri Oct 8 11:03:31 2004 \n847609869 postgres Fri Oct 15 11:34:28 2004 Fri Oct 15 11:34:29 2004 \n847642638 postgres Fri Oct 15 11:33:35 2004 Fri Oct 15 11:33:35 2004 \n847675407 postgres Fri Oct 15 11:34:28 2004 Fri Oct 15 11:34:29 2004 \n847708176 postgres Fri Oct 15 11:27:17 2004 Fri Oct 15 11:32:20 2004 \n\nAlso, isn't the shared memory supposed to show up in free? Its always \nshowing as 0:\n\n# free\n total used free shared buffers cached\nMem: 3896928 3868424 28504 0 59788 3605548\n-/+ buffers/cache: 203088 3693840\nSwap: 1052216 16 1052200\n\nThanks!\n\n", "msg_date": "Fri, 15 Oct 2004 12:07:27 -0400", "msg_from": "Doug Y <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning shared_buffers with ipcs ?" }, { "msg_contents": "Doug Y <[email protected]> writes:\n> I've seen a couple references to using ipcs to help properly size \n> shared_buffers.\n\nI have not seen any such claim, and I do not see any way offhand that\nipcs could help.\n\n> I tried all of the dash commands in the ipcs man page, and the only one \n> that might give a clue is ipcs -t which shows the time the semaphores \n> were last used. If you look at the example I give below, it appears as \n> if I'm only using 4 of the 17 semaphores (PG was started on Oct 8).\n\nThis might tell you something about how many concurrent backends you've\nused, but nothing about how many shared buffers you need.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Oct 2004 13:51:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning shared_buffers with ipcs ? 
" }, { "msg_contents": "Tom Lane wrote:\n\n>Doug Y <[email protected]> writes:\n> \n>\n>>I've seen a couple references to using ipcs to help properly size \n>>shared_buffers.\n>> \n>>\n>\n>I have not seen any such claim, and I do not see any way offhand that\n>ipcs could help.\n> \n>\nDirectly from:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\n\"As a rule of thumb, observe shared memory usage of PostgreSQL with \ntools like ipcs and determine the setting.\"\n\nI've seen references in the admin\n\n>>I tried all of the dash commands in the ipcs man page, and the only one \n>>that might give a clue is ipcs -t which shows the time the semaphores \n>>were last used. If you look at the example I give below, it appears as \n>>if I'm only using 4 of the 17 semaphores (PG was started on Oct 8).\n>> \n>>\n>\n>This might tell you something about how many concurrent backends you've\n>used, but nothing about how many shared buffers you need.\n> \n>\nThats strange, I know I've had more than 4 concurrent connections on \nthat box... (I just checked and there were at least a dozen). A mirror \nDB with the same config also has the same basic output from ipcs, except \nthat it has times for 11 of the 17 arrays slots and most of them are the \ntime when we do our backup dump (which makes sense that it would require \nmore memory at that time.)\n\n>\t\t\tregards, tom lane\n>\n>\n> \n>\nI'm not saying you're wrong, because I don't know how the nitty gritty \nstuff works, I'm just trying to find something to work with, since \npresently there isn't anything other than anecdotal evidence. From what \nI've inferred, there seems to be some circumstantial evidence supporting \nmy theory.\n\nThanks.\n", "msg_date": "Fri, 15 Oct 2004 14:51:33 -0400", "msg_from": "Doug Y <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning shared_buffers with ipcs ?" }, { "msg_contents": "Doug Y <[email protected]> writes:\n> Tom Lane wrote:\n>> I have not seen any such claim, and I do not see any way offhand that\n>> ipcs could help.\n>> \n> Directly from:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> \"As a rule of thumb, observe shared memory usage of PostgreSQL with \n> tools like ipcs and determine the setting.\"\n\n[ shrug ... ] So ask elein why she thinks that will help.\n\n>> This might tell you something about how many concurrent backends you've\n>> used, but nothing about how many shared buffers you need.\n>> \n> Thats strange, I know I've had more than 4 concurrent connections on \n> that box... (I just checked and there were at least a dozen).\n\nThere is more than one per-backend semaphore per semaphore set, 16 per\nset if memory serves; so the ipcs evidence points to a maximum of\nbetween 49 and 64 concurrently active backends. It's not telling you a\ndarn thing about appropriate shared_buffers settings, however.\n\n> A mirror DB with the same config also has the same basic output from\n> ipcs, except that it has times for 11 of the 17 arrays slots and most\n> of them are the time when we do our backup dump (which makes sense\n> that it would require more memory at that time.)\n\nThat doesn't follow either. I think you may have some bottleneck that\ncauses client requests to pile up during a backup dump.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Oct 2004 15:53:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning shared_buffers with ipcs ? 
" }, { "msg_contents": "Tom Lane wrote:\n\n>Doug Y <[email protected]> writes:\n> \n>\n>>Tom Lane wrote:\n>> \n>>\n>>>This might tell you something about how many concurrent backends you've\n>>>used, but nothing about how many shared buffers you need.\n>>>\n>>> \n>>>\n>>Thats strange, I know I've had more than 4 concurrent connections on \n>>that box... (I just checked and there were at least a dozen).\n>> \n>>\n>\n>There is more than one per-backend semaphore per semaphore set, 16 per\n>set if memory serves; so the ipcs evidence points to a maximum of\n>between 49 and 64 concurrently active backends. It's not telling you a\n>darn thing about appropriate shared_buffers settings, however.\n>\n> \n>\n>>A mirror DB with the same config also has the same basic output from\n>>ipcs, except that it has times for 11 of the 17 arrays slots and most\n>>of them are the time when we do our backup dump (which makes sense\n>>that it would require more memory at that time.)\n>> \n>>\n>\n>That doesn't follow either. I think you may have some bottleneck that\n>causes client requests to pile up during a backup dump.\n>\n>\t\t\tregards, tom lane\n> \n>\nOk, that explains the number of arrays... max_connections / 16.\n\nThanks... my mind works better when I can associate actual settings to \neffects like that. And I'm sure that performance takes a hit during out \nback-up dump. We're in the process of migrating them to dedicated mirror \nmachine to run dumps/reports etc from crons so that it won't negatively \naffect the DB servers that get queries from the web applications.\n\nThanks again for clarification.\n", "msg_date": "Fri, 15 Oct 2004 17:18:12 -0400", "msg_from": "Doug Y <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning shared_buffers with ipcs ?" } ]
[ { "msg_contents": "> Thanks Magnus,\n> \n> So are we correct to rely on\n> - 8 being slower than 7.x in general and\n> - 8 on Win32 being a little faster than 8 on Cygwin?\n> \n> Will the final release of 8 be faster than the beta?\n\nI'm pretty certain that previous to 8.0 no win32 based postgesql\nproperly sync()ed the files. Win32 does not have sync(), and it is\nimpossible to emulate it without relying on the application to track\nwhich files to sync. 8.0 does this because it fsync()s the files\nindividually. Therefore, benchmarking fsync=on on 8.0 to a <8.0 version\nof windows is not apples to apples. This includes, by the way, the SFU\nbased port of postgresql because they didn't implement sync() there,\neither.\n\nOther than the sync() issue, the cygwin/win32 i/o performance should be\nroughly equal. Unless I'm terribly mistaken about things, all the i/o\ncalls should boil down to win32 api calls.\n\nThe cygwin IPC stack is implemented differently...pg 8.0 win32 native\nversion does all the ipc stuff by hand, so you might get slightly\ndifferent behavior there.\n\nMerln\n", "msg_date": "Fri, 15 Oct 2004 12:22:40 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Win32 vs Cygwin" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> I'm pretty certain that previous to 8.0 no win32 based postgesql\n> properly sync()ed the files. Win32 does not have sync(), and it is\n> impossible to emulate it without relying on the application to track\n> which files to sync. 8.0 does this because it fsync()s the files\n> individually. Therefore, benchmarking fsync=on on 8.0 to a <8.0 version\n> of windows is not apples to apples. This includes, by the way, the SFU\n> based port of postgresql because they didn't implement sync() there,\n> either.\n\nThis is all true, but for performance testing I am not sure that you'd\nnotice much difference, because the sync or lack of it only happens\nwithin checkpoints.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Oct 2004 13:55:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Performance on Win32 vs Cygwin " } ]
[ { "msg_contents": "My basic question to the community is \"is PostgreSQL approximately as fast\nas Oracle?\"\n\nI don't want benchmarks, they're BS. I want a gut feel from this community\nbecause I know many of you are in mixed shops that run both products, or\nhave had experience with both.\n\nI fully intend to tune, vacuum, analyze, size buffers, etc. I've read what\npeople have written on the topic, and from that my gut feel is that using\nPostgreSQL will not adversely affect performance of my application versus\nOracle. I know it won't adversely affect my pocket book. I also know that\nrequests for help will be quick, clear, and multifaceted.\n\nI'm currently running single processor UltraSPARC workstations, and intend\nto use Intel Arch laptops and Linux. The application is a big turnkey\nworkstation app. I know the hardware switch alone will enhance\nperformance, and may do so to the point where even a slower database will\nstill be adequate.\n\nWhadyall think?\n\nThanks,\n\nRick\n\n\n", "msg_date": "Fri, 15 Oct 2004 11:54:44 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Does PostgreSQL run with Oracle?" }, { "msg_contents": "[email protected] wrote:\n> My basic question to the community is \"is PostgreSQL approximately as fast\n> as Oracle?\"\n> \n> I don't want benchmarks, they're BS. I want a gut feel from this community\n> because I know many of you are in mixed shops that run both products, or\n> have had experience with both.\n> \n> I fully intend to tune, vacuum, analyze, size buffers, etc. I've read what\n> people have written on the topic, and from that my gut feel is that using\n> PostgreSQL will not adversely affect performance of my application versus\n> Oracle. I know it won't adversely affect my pocket book. I also know that\n> requests for help will be quick, clear, and multifaceted.\n> \n> I'm currently running single processor UltraSPARC workstations, and intend\n> to use Intel Arch laptops and Linux. The application is a big turnkey\n> workstation app. I know the hardware switch alone will enhance\n> performance, and may do so to the point where even a slower database will\n> still be adequate.\n\nI have always been told we are +/- 10% of Oracle. That's what I say at\ntalks and no one has disputed that. We are +10-30% faster than\nInformix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 15 Oct 2004 13:02:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does PostgreSQL run with Oracle?" }, { "msg_contents": "On Fri, Oct 15, 2004 at 11:54:44AM -0500, [email protected] wrote:\n> My basic question to the community is \"is PostgreSQL approximately as fast\n> as Oracle?\"\n\n> I'm currently running single processor UltraSPARC workstations, and intend\n> to use Intel Arch laptops and Linux. The application is a big turnkey\n> workstation app. I know the hardware switch alone will enhance\n> performance, and may do so to the point where even a slower database will\n> still be adequate.\n\nI have found that PostgreSQL seems to perform poorly on Solaris/SPARC\n(less so after recent improvements, but still...) 
compared to x86\nsystems - more so than the delta between Oracle on the two platforms.\nJust a gut impression, but it might mean that comparing the two\ndatabases on SPARC may not be that useful comparison if you're\nplanning to move to x86.\n\nCheers,\n Steve\n\n", "msg_date": "Fri, 15 Oct 2004 10:19:48 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does PostgreSQL run with Oracle?" }, { "msg_contents": "[email protected] writes:\n> My basic question to the community is \"is PostgreSQL approximately as fast\n> as Oracle?\"\n\nThe anecdotal evidence I've seen leaves me with the impression that when\nyou first take an Oracle-based app and drop it into Postgres, it won't\nperform particularly well, but with tuning and tweaking you can roughly\nequal and often exceed the original performance. The two DBs are enough\nunalike that a database schema that's been tuned for Oracle is probably\nmistuned for Postgres. You will certainly find \"some things are faster,\nsome are slower\" at the end of the day, but we've had lots of satisfied\nswitchers ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Oct 2004 14:06:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does PostgreSQL run with Oracle? " }, { "msg_contents": "On Fri, 15 Oct 2004 11:54:44 -0500, [email protected]\n<[email protected]> wrote:\n> My basic question to the community is \"is PostgreSQL approximately as fast\n> as Oracle?\"\n> \n> I don't want benchmarks, they're BS. I want a gut feel from this community\n> because I know many of you are in mixed shops that run both products, or\n> have had experience with both.\n\nThat all depends on exactly what your application needs to do.\n\nThere are many more features that Oracle has and postgres doesn't than\nvice versa. If you need to do something with your data that isn't\npossible to do as efficiently without one of those features, then yes\npostgresql can be much slower. If you don't need any such features,\nit can be ballpark, until you start getting into fairly hefty\nhardware.\n", "msg_date": "Fri, 15 Oct 2004 11:15:25 -0700", "msg_from": "Marc Slemko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does PostgreSQL run with Oracle?" }, { "msg_contents": "On Fri, 15 Oct 2004 11:54:44 -0500, [email protected]\n<[email protected]> wrote:\n> My basic question to the community is \"is PostgreSQL approximately as fast\n> as Oracle?\"\n\nMy personal experience comparing PG to Oracle is across platforms,\nOracle on Sun/Solaris (2.7, quad-proc R440) and PG on Intel/Linux (2.6\nkernel, dual P3/1GHz). When both were tuned for the specific app I\nsaw a 45% speedup after switching to PG. This was with a customized\nCRM and System Monitoring application serving ~40,000 trouble tickets\nand monitoring 5,000 metric datapoints every 5-30 minutes.\n\nThe hardware was definitely not comparable (the Intel boxes have more\nhorsepower and faster disks), but dollar for dollar, including support\ncosts, PG is the winner by a BIG margin. YMMV, of course, and my\nresults are apparently above average.\n\nAnother big plus I found was that PG is much easier to admin as long\nas you turn on pg_autovacuum.\n\n--miker\n", "msg_date": "Fri, 15 Oct 2004 15:45:11 -0400", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does PostgreSQL run with Oracle?" 
}, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n \n \n> My basic question to the community is \"is PostgreSQL approximately\n> as fast as Oracle?\"\n>\n> I don't want benchmarks, they're BS. I want a gut feel from this community\n> because I know many of you are in mixed shops that run both products, or\n> have had experience with both.\n \nMy gut feeling is not just \"as fast\", but \"often times faster.\" I've found very\nfew cases in which Oracle was faster, and that was usually due to some easily\nspotted difference such as tablespace support.\n \n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200410191925\n \n-----BEGIN PGP SIGNATURE-----\n \niD8DBQFBdaK0vJuQZxSWSsgRApPRAKDTjM+QybR2HnB1UNOao1RY7YDU9ACcDhnr\nzvH1gwn35Ah8mixo2XHOFr4=\n=NNZf\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Tue, 19 Oct 2004 23:25:26 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does PostgreSQL run with Oracle?" }, { "msg_contents": "On Fri, Oct 15, 2004 at 10:19:48AM -0700, Steve Atkins wrote:\n> On Fri, Oct 15, 2004 at 11:54:44AM -0500, [email protected] wrote:\n> > My basic question to the community is \"is PostgreSQL approximately as fast\n> > as Oracle?\"\n> \n> > I'm currently running single processor UltraSPARC workstations, and intend\n> > to use Intel Arch laptops and Linux. The application is a big turnkey\n> > workstation app. I know the hardware switch alone will enhance\n> > performance, and may do so to the point where even a slower database will\n> > still be adequate.\n> \n> I have found that PostgreSQL seems to perform poorly on Solaris/SPARC\n> (less so after recent improvements, but still...) compared to x86\n> systems - more so than the delta between Oracle on the two platforms.\n> Just a gut impression, but it might mean that comparing the two\n> databases on SPARC may not be that useful comparison if you're\n> planning to move to x86.\n \nAs a point of reference, an IBM hardware sales rep I worked with a few\nyears ago told me that he got a lot of sales from Oracle shops that were\nrunning Sun and switched to RS/6000. Basically, for a given workload, it\nwould take 2x the number of Sun CPUs as RS/6000 CPUs. The difference in\nOracle licensing costs was usually enough to pay for the new hardware in\none year.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 21 Oct 2004 17:11:00 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does PostgreSQL run with Oracle?" } ]
[ { "msg_contents": "Neil wrote:\n\n>. In any case, the \"futex patch\"\n>uses the Linux 2.6 futex API to implement PostgreSQL spinlocks. \n>\nHas anyone tried to replace the whole lwlock implementation with \npthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is \nimplemented with futexes, i.e. we would get a fast lock handling without \nos specific hacks. Perhaps other os contain user space pthread locks, too.\nAttached is an old patch. I tested it on an uniprocessor system a year \nago and it didn't provide much difference, but perhaps the scalability \nis better. You'll have to add -lpthread to the library list for linking.\n\nRegarding Neil's patch:\n\n>! /*\n>! * XXX: is there a more efficient way to write this? Perhaps using\n>! * decl...?\n>! */\n>! static __inline__ slock_t\n>! atomic_dec(volatile slock_t *ptr)\n>! {\n>! \tslock_t prev = -1;\n>! \n>! \t__asm__ __volatile__(\n>! \t\t\"\tlock\t\t\\n\"\n>! \t\t\"\txadd %0,%1\t\\n\"\n>! \t\t:\"=q\"(prev)\n>! \t\t:\"m\"(*ptr), \"0\"(prev)\n>! \t\t:\"memory\", \"cc\");\n>! \n>! \treturn prev;\n>! }\n>\nxadd is not supported by original 80386 cpus, it was added for 80486 \ncpus. There is no instruction in the 80386 cpu that allows to atomically \ndecrement and retrieve the old value of an integer. The only option are \natomic_dec_test_zero or atomic_dec_test_negative - that can be \nimplemented by looking at the sign/zero flag. Depending on what you want \nthis may be enough. Or make the futex code conditional for > 80386 cpus.\n\n--\n Manfred", "msg_date": "Sun, 17 Oct 2004 09:39:33 +0200", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: futex results with dbt-3 " }, { "msg_contents": "Manfred Spraul <[email protected]> writes:\n> Has anyone tried to replace the whole lwlock implementation with \n> pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is \n> implemented with futexes, i.e. we would get a fast lock handling without \n> os specific hacks.\n\n\"At least for Linux\" does not strike me as equivalent to \"without\nOS-specific hacks\".\n\nThe bigger problem here is that the SMP locking bottlenecks we are\ncurrently seeing are *hardware* issues (AFAICT anyway). The only way\nthat futexes can offer a performance win is if they have a smarter way\nof executing the basic atomic-test-and-set sequence than we do;\nand if so, we could read their code and adopt that method without having\nto buy into any large reorganization of our code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 2004 17:52:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3 " }, { "msg_contents": "Tom,\n\n> The bigger problem here is that the SMP locking bottlenecks we are\n> currently seeing are *hardware* issues (AFAICT anyway).  
The only way\n> that futexes can offer a performance win is if they have a smarter way\n> of executing the basic atomic-test-and-set sequence than we do;\n> and if so, we could read their code and adopt that method without having\n> to buy into any large reorganization of our code.\n\nWell, initial results from Gavin/Neil's patch seem to indicate that, while \nfutexes do not cure the CSStorm bug, they do lessen its effects in terms of \nreal performance loss.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 19 Oct 2004 14:59:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> The bigger problem here is that the SMP locking bottlenecks we are\n>> currently seeing are *hardware* issues (AFAICT anyway).\n\n> Well, initial results from Gavin/Neil's patch seem to indicate that, while \n> futexes do not cure the CSStorm bug, they do lessen its effects in terms of \n> real performance loss.\n\nIt would be reasonable to expect that futexes would have a somewhat more\nefficient code path in the case where you have to block (mainly because\nSysV semaphores have such a heavyweight API, much more complex than we\nreally need). However, the code path that is killing us is the one\nwhere you *don't* actually need to block. If we had a proper fix for\nthe problem then the context swap storm itself would go away, and\nwhatever advantage you might be able to measure now for futexes likewise\nwould go away.\n\nIn other words, I'm not real excited about a wholesale replacement of\ncode in order to speed up a code path that I don't want to be taking\nin the first place; especially not if that replacement puts a fence\nbetween me and working on the code path that I do care about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 2004 18:39:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3 " }, { "msg_contents": "Tom Lane wrote:\n\n>Manfred Spraul <[email protected]> writes:\n> \n>\n>>Has anyone tried to replace the whole lwlock implementation with \n>>pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is \n>>implemented with futexes, i.e. we would get a fast lock handling without \n>>os specific hacks.\n>> \n>>\n>\n>\"At least for Linux\" does not strike me as equivalent to \"without\n>OS-specific hacks\".\n>\n> \n>\nFor me, \"at least for Linux\" means that I have tested the patch with \nLinux. I'd expect that the patch works on most recent unices \n(pthread_rwlock_t is probably mandatory for Unix98 compatibility). You \nand others on this mailing list have access to other systems - my patch \nshould be seen as a call for testers, not as a proposal for merging. I \nexpect that Linux is not the only OS with fast user space semaphores, \nand if an OS has such objects, then the pthread_ locking functions are \nhopefully implemented by using them. IMHO it's better to support the \nstandard function instead of trying to use the native (and OS specific) \nfast semaphore functions.\n\n>The bigger problem here is that the SMP locking bottlenecks we are\n>currently seeing are *hardware* issues (AFAICT anyway). The only way\n>that futexes can offer a performance win is if they have a smarter way\n>of executing the basic atomic-test-and-set sequence than we do;\n> \n>\nlwlocks operations are not a basic atomic-test-and-set sequence. 
They \nare spinlock, several nonatomic operations, spin_unlock.\n\n--\n Manfred\n", "msg_date": "Wed, 20 Oct 2004 18:51:49 +0200", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "On Sun, Oct 17, 2004 at 09:39:33AM +0200, Manfred Spraul wrote:\n> Neil wrote:\n> \n> >. In any case, the \"futex patch\"\n> >uses the Linux 2.6 futex API to implement PostgreSQL spinlocks. \n> >\n> Has anyone tried to replace the whole lwlock implementation with \n> pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is \n> implemented with futexes, i.e. we would get a fast lock handling without \n> os specific hacks. Perhaps other os contain user space pthread locks, too.\n> Attached is an old patch. I tested it on an uniprocessor system a year \n> ago and it didn't provide much difference, but perhaps the scalability \n> is better. You'll have to add -lpthread to the library list for linking.\n\nI've heard that simply linking to the pthreads libraries, regardless of\nwhether you're using them or not creates a significant overhead. Has\nanyone tried it for kicks?\n\nMark\n", "msg_date": "Wed, 20 Oct 2004 10:10:01 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Mark Wong wrote:\n\n>I've heard that simply linking to the pthreads libraries, regardless of\n>whether you're using them or not creates a significant overhead. Has\n>anyone tried it for kicks?\n>\n> \n>\nThat depends on the OS and the functions that are used. The typical \nworst case is buffered IO of single characters: The single threaded \nimplementation is just copy and update buffer status, the multi threaded \nimplementation contains full locking.\n\nFor most other functions there is no difference at all.\n--\n Manfred\n\n", "msg_date": "Wed, 20 Oct 2004 19:14:46 +0200", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Manfred Spraul <[email protected]> writes:\n> Tom Lane wrote:\n>> The bigger problem here is that the SMP locking bottlenecks we are\n>> currently seeing are *hardware* issues (AFAICT anyway). The only way\n>> that futexes can offer a performance win is if they have a smarter way\n>> of executing the basic atomic-test-and-set sequence than we do;\n>> \n> lwlocks operations are not a basic atomic-test-and-set sequence. They \n> are spinlock, several nonatomic operations, spin_unlock.\n\nRight, and it is the spinlock that is the problem. See discussions a\nfew months back: at least on Intel SMP machines, most of the problem\nseems to have to do with trading the spinlock's cache line back and\nforth between CPUs. It's difficult to see how a futex is going to avoid\nthat.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 2004 13:15:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3 " }, { "msg_contents": "Tom Lane wrote:\n\n>Manfred Spraul <[email protected]> writes:\n> \n>\n>>Tom Lane wrote:\n>> \n>>\n>>>The bigger problem here is that the SMP locking bottlenecks we are\n>>>currently seeing are *hardware* issues (AFAICT anyway). The only way\n>>>that futexes can offer a performance win is if they have a smarter way\n>>>of executing the basic atomic-test-and-set sequence than we do;\n>>>\n>>> \n>>>\n>>lwlocks operations are not a basic atomic-test-and-set sequence. 
They \n>>are spinlock, several nonatomic operations, spin_unlock.\n>> \n>>\n>\n>Right, and it is the spinlock that is the problem. See discussions a\n>few months back: at least on Intel SMP machines, most of the problem\n>seems to have to do with trading the spinlock's cache line back and\n>forth between CPUs.\n>\nI'd disagree: cache line bouncing is one problem. If this happens then \nthere is only one solution: The number of changes to that cacheline must \nbe reduced. The tools that are used in the linux kernel are:\n- hashing. An emergency approach if there is no other solution. I think \nRedHat used it for the buffer cache RH AS: Instead of one buffer cache, \nthere were lots of smaller buffer caches with individual locks. The \ncache was chosen based on the file position (probably mixed with some \npointers to avoid overloading cache 0).\n- For read-heavy loads: sequence locks. A reader reads a counter value \nand then accesses the data structure. At the end it checks if the \ncounter was modified. If it's still the same value then it can continue, \notherwise it must retry. Writers acquire a normal spinlock and then \nmodify the counter value. RCU is the second option, but there are \npatents - please be careful before using that tool.\n- complete rewrites that avoid the global lock. I think the global \nbuffer cache is now gone, everything is handled per-file. I think there \nis a global list for buffer replacement, but the at the top of the \nbuffer replacement strategy is a simple clock algorithm. That means that \nsimple lookups/accesses just set a (local) referenced bit and don't have \nto acquire a global lock. I know that this is the total opposite of ARC, \nbut perhaps it's the only scalable solution. ARC could be used as the \nsecond level strategy.\n\nBut: According to the descriptions the problem is a context switch \nstorm. I don't see that cache line bouncing can cause a context switch \nstorm. What causes the context switch storm? If it's the pg_usleep in \ns_lock, then my patch should help a lot: with pthread_rwlock locks, this \nline doesn't exist anymore.\n\n--\n Manfred\n", "msg_date": "Wed, 20 Oct 2004 19:39:13 +0200", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "On Wed, Oct 20, 2004 at 07:39:13PM +0200, Manfred Spraul wrote:\n> \n> But: According to the descriptions the problem is a context switch \n> storm. I don't see that cache line bouncing can cause a context switch \n> storm. What causes the context switch storm? If it's the pg_usleep in \n> s_lock, then my patch should help a lot: with pthread_rwlock locks, this \n> line doesn't exist anymore.\n> \n\nI gave Manfred's patch a try on my 4-way Xeon system with Tom's test_script.sql\nfiles. I ran 4 processes of test_script.sql against 8.0beta3 (without any\npatches) and from my observations with top, the cpu utilization between\nprocessors was pretty erratic. 
They'd jump anywhere from 30% - 70%.\n\nWith the futex patches that Neil and Gavin have been working on, I'd see\nthe processors evenly utilized at about 50% each.\n\nWith just Manfred's patch I think there might be a problem somewhere with\nthe patch, or something else, as only one processor is doing anything at a\ntime and 100% utilized.\n\nHere are some other details, per Manfred's request:\n\nLinux 2.6.8.1 (on a gentoo distro)\ngcc 3.3.4\nglibc 2.3.3.20040420\n\nMark\n", "msg_date": "Wed, 20 Oct 2004 15:05:28 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Forgive my naivete, but do futex's implement some priority algorithm for \nwhich process gets control. One of the problems as I understand it is \nthat linux does (did ) not implement a priority algorithm, so it is \npossible for the context which just gave up control to be the next \ncontext woken up, which of course is a complete waste of time.\n\n--dc--\n\nTom Lane wrote:\n\n>Manfred Spraul <[email protected]> writes:\n> \n>\n>>Tom Lane wrote:\n>> \n>>\n>>>The bigger problem here is that the SMP locking bottlenecks we are\n>>>currently seeing are *hardware* issues (AFAICT anyway). The only way\n>>>that futexes can offer a performance win is if they have a smarter way\n>>>of executing the basic atomic-test-and-set sequence than we do;\n>>>\n>>> \n>>>\n>>lwlocks operations are not a basic atomic-test-and-set sequence. They \n>>are spinlock, several nonatomic operations, spin_unlock.\n>> \n>>\n>\n>Right, and it is the spinlock that is the problem. See discussions a\n>few months back: at least on Intel SMP machines, most of the problem\n>seems to have to do with trading the spinlock's cache line back and\n>forth between CPUs. It's difficult to see how a futex is going to avoid\n>that.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\n\n-- \nDave Cramer\nwww.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Wed, 20 Oct 2004 20:27:49 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Mark Wong wrote:\n\n>Here are some other details, per Manfred's request:\n>\n>Linux 2.6.8.1 (on a gentoo distro)\n> \n>\nHow complicated are Tom's test scripts? His immediate reply was that I \nshould retest with Fedora, to rule out any gentoo bugs.\n\nI have a dual-cpu system with RH FC, I could use it for testing.\n\n--\n Manfred\n", "msg_date": "Thu, 21 Oct 2004 07:45:53 +0200", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "On Thu, Oct 21, 2004 at 07:45:53AM +0200, Manfred Spraul wrote:\n> Mark Wong wrote:\n> \n> >Here are some other details, per Manfred's request:\n> >\n> >Linux 2.6.8.1 (on a gentoo distro)\n> > \n> >\n> How complicated are Tom's test scripts? His immediate reply was that I \n> should retest with Fedora, to rule out any gentoo bugs.\n> \n> I have a dual-cpu system with RH FC, I could use it for testing.\n> \n\nPretty, simple. One to load the database, and 1 to query it. 
I'll \nattach them.\n\nMark", "msg_date": "Thu, 21 Oct 2004 06:54:58 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Mark Wong wrote:\n\n>Pretty, simple. One to load the database, and 1 to query it. I'll \n>attach them.\n>\n> \n>\nI've tested it on my dual-cpu computer:\n- it works, both cpus run within the postmaster. It seems something your \ngentoo setup is broken.\n- the number of context switch is down slightly, but not significantly: \nThe glibc implementation is more or less identical to the implementation \nright now in lwlock.c: a spinlock that protects a few variables that are \nused to implement the actual mutex, several wait queues: one for \nspinlock busy, one or two for the actual mutex code.\n\nAround 25% of the context switches are from spinlock collisions, the \nrest are from actual mutex collisions. It might be possible to get rid \nof the spinlock collisions by writing a special, futex based semaphore \nfunction that only supports exclusive access [like sem_wait/sem_post], \nbut I don't think that it's worth the effort: 75% of the context \nswitches would remain.\nWhat's needed is a buffer manager that can do lookups without a global lock.\n\n--\n Manfred\n", "msg_date": "Fri, 22 Oct 2004 22:55:38 +0200", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Josh Berkus wrote:\n > Tom,\n >\n >\n >>The bigger problem here is that the SMP locking bottlenecks we are\n >>currently seeing are *hardware* issues (AFAICT anyway). The only way\n >>that futexes can offer a performance win is if they have a smarter way\n >>of executing the basic atomic-test-and-set sequence than we do;\n >>and if so, we could read their code and adopt that method without having\n >>to buy into any large reorganization of our code.\n >\n >\n > Well, initial results from Gavin/Neil's patch seem to indicate that, while\n > futexes do not cure the CSStorm bug, they do lessen its effects in terms of\n > real performance loss.\n\nI proposed weeks ago to see how the CSStorm is affected by stick each backend\nin one processor ( where the process was born ) using the cpu-affinity capability\n( kernel 2.6 ), is this proposal completely out of mind ?\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n", "msg_date": "Sat, 23 Oct 2004 12:51:39 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Gaetano Mendola <[email protected]> writes:\n> I proposed weeks ago to see how the CSStorm is affected by stick each\n> backend in one processor ( where the process was born ) using the\n> cpu-affinity capability ( kernel 2.6 ), is this proposal completely\n> out of mind ?\n\nThat was investigated long ago. See for instance\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00313.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Oct 2004 14:21:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3 " }, { "msg_contents": "Tom Lane wrote:\n> Gaetano Mendola <[email protected]> writes:\n> \n>>I proposed weeks ago to see how the CSStorm is affected by stick each\n>>backend in one processor ( where the process was born ) using the\n>>cpu-affinity capability ( kernel 2.6 ), is this proposal completely\n>>out of mind ?\n> \n> \n> That was investigated long ago. 
See for instance\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00313.php\n> \n\nIf I read correctly this help on the CSStorm, I guess also that this could\nalso help the performances. Unfortunatelly I do not have any kernel 2.6 running\non SMP to give it a try.\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Sat, 23 Oct 2004 20:56:58 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJosh Berkus wrote:\n| Gaetano,\n|\n|\n|>I proposed weeks ago to see how the CSStorm is affected by stick each\n|>backend in one processor ( where the process was born ) using the\n|>cpu-affinity capability ( kernel 2.6 ), is this proposal completely out of\n|>mind ?\n|\n|\n| I don't see how that would help. The problem is not backends switching\n| processors, it's the buffermgrlock needing to be swapped between processors.\n\nThis is not clear to me. What happen if during a spinlock a backend is\nmoved away from one processor to another one ?\n\n\nRegards\nGaetano Mendola\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBeudN7UpzwH2SGd4RAkL9AKCUY9vsw1CPmBV1kC7BKxUtuneN2wCfXaYr\nE8utuJI34MAIP8jUm6By09M=\n=oRvU\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Sun, 24 Oct 2004 01:20:46 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Manfred,\n\n> How complicated are Tom's test scripts? His immediate reply was that I\n> should retest with Fedora, to rule out any gentoo bugs.\n\nWe've done some testing on other Linux. Linking in pthreads reduced CSes by \n< 15%, which was no appreciable impact on real performance.\n\nGavin/Neil's full futex patch was of greater benefit; while it did not reduce \nCSes very much (25%) somehow the real performance benefit was greater.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 25 Oct 2004 09:33:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Manfred Spraul <[email protected]> writes:\n> But: According to the descriptions the problem is a context switch \n> storm. I don't see that cache line bouncing can cause a context switch \n> storm. What causes the context switch storm?\n\nAs best I can tell, the CS storm arises because the backends get into\nsome sort of lockstep timing that makes it far more likely than you'd\nexpect for backend A to try to enter the bufmgr when backend B is already\nholding the BufMgrLock. 
In the profiles we were looking at back in\nApril, it seemed that about 10% of the time was spent inside bufmgr\n(which is bad enough in itself) but the odds of LWLock collision were\nmuch higher than 10%, leading to many context swaps.\n\nThis is not totally surprising given that they are running identical\nqueries and so are running through loops of the same length, but still\nit seems like there must be some effect driving their timing to converge\ninstead of diverge away from the point of conflict.\n\nWhat I think (and here is where it's a leap of logic, cause I can't\nprove it) is that the excessive time spent passing the spinlock cache\nline back and forth is exactly the factor causing that convergence.\nSomehow, the delay caused when a processor has to wait to get the cache\nline contributes to keeping the backend loops in lockstep.\n\nIt is critical to understand that the CS storm is associated with LWLock\ncontention not spinlock contention: what we saw was a lot of semop()s\nnot a lot of select()s.\n\n> If it's the pg_usleep in s_lock, then my patch should help a lot: with\n> pthread_rwlock locks, this line doesn't exist anymore.\n\nThe profiles showed that s_lock() is hardly entered at all, and the\nselect() delay is reached even more seldom. So changes in that area\nwill make exactly zero difference. This is the surprising and\ncounterintuitive thing: oprofile clearly shows that very large fractions\nof the CPU time are being spent at the initial TAS instructions in\nLWLockAcquire and LWLockRelease, and yet those TASes hardly ever fail,\nas proven by the fact that oprofile shows s_lock() is seldom entered.\nSo as far as the Postgres code can tell, there isn't any contention\nworth mentioning for the spinlock. This is indeed the way it was\ndesigned to be, but when so much time is going to the TAS instructions,\nyou'd think there'd be more software-visible contention for the\nspinlock.\n\nIt could be that I'm all wet and there is no relationship between the\ncache line thrashing and the seemingly excessive BufMgrLock contention.\nThey are after all occurring at two very different levels of abstraction.\nBut I think there is some correlation that we just don't understand yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Oct 2004 12:45:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3 " }, { "msg_contents": "Tom Lane wrote:\n\n>It could be that I'm all wet and there is no relationship between the\n>cache line thrashing and the seemingly excessive BufMgrLock contention.\n> \n>\nIs it important? The fix is identical in both cases: per-bucket locks \nfor the hash table and a buffer aging strategy that doesn't need one \nglobal lock that must be acquired for every lookup.\n\n--\n Manfred\n", "msg_date": "Mon, 25 Oct 2004 19:30:38 +0200", "msg_from": "Manfred Spraul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: futex results with dbt-3" }, { "msg_contents": "Manfred Spraul <[email protected]> writes:\n> Tom Lane wrote:\n>> It could be that I'm all wet and there is no relationship between the\n>> cache line thrashing and the seemingly excessive BufMgrLock contention.\n>> \n> Is it important? The fix is identical in both cases: per-bucket locks \n> for the hash table and a buffer aging strategy that doesn't need one \n> global lock that must be acquired for every lookup.\n\nReducing BufMgrLock contention is a good idea, but it's not really my\nidea of a fix for this issue. 
In the absence of a full understanding,\nwe may be fixing the wrong thing. It's worth remembering that when we\nfirst hit this issue, I made some simple changes that approximately\nhalved the number of BufMgrLock acquisitions by joining ReleaseBuffer\nand ReadBuffer calls into ReleaseAndReadBuffer in all the places in the\ntest case's loop. This made essentially no change in the CS storm\nbehavior :-(. So I do not know how much contention we have to get rid\nof to get the problem to go away, or even whether this is the right path\nto take.\n\n(I am unconvinced that either of those specific suggestions is The Right\nWay to break up the bufmgrlock, either, but that's a different thread.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Oct 2004 13:45:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: futex results with dbt-3 " } ]
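On the xadd point raised earlier in this thread: Manfred notes that a plain 80386 has no atomic fetch-and-decrement, but an atomic decrement that merely reports whether the counter reached zero can be built from LOCK DEC plus a flag test. The sketch below shows roughly what such a primitive looks like as GCC inline assembly on x86; it is an illustration of that alternative under those assumptions, not code taken from the futex patch, and the lock-style typedef is a hypothetical stand-in.

/* Hedged sketch: 386-compatible atomic decrement-and-test-zero.
 * Uses only LOCK DECL + SETZ, so no XADD is required.
 * Illustrative only -- not from the actual futex patch. */
typedef volatile int example_lock_t;    /* hypothetical counter type */

static __inline__ int
atomic_dec_and_test_zero(example_lock_t *ptr)
{
    unsigned char is_zero;

    __asm__ __volatile__(
        "lock; decl %0\n\t"             /* atomically decrement *ptr       */
        "setz %1"                       /* ZF -> 1 iff the new value is 0  */
        : "+m" (*ptr), "=q" (is_zero)
        :
        : "memory", "cc");

    return is_zero;                     /* nonzero when the counter hit zero */
}

A caller that only needs "did the count reach zero" semantics can branch on the return value, which is the atomic_dec_test_zero shape described above; anything that needs the old value back, as xadd provides, still requires a 486-or-later CPU or a spinlocked fallback.
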
[ { "msg_contents": "\nSeeing as I've missed the last N messages... I'll just reply to this\none, rather than each of them in turn...\n\nTom Lane <[email protected]> wrote on 16.10.2004, 18:54:17:\n> I wrote:\n> > Josh Berkus writes:\n> >> First off, two test runs with OProfile are available at:\n> >> http://khack.osdl.org/stp/298124/\n> >> http://khack.osdl.org/stp/298121/\n> \n> > Hmm. The stuff above 1% in the first of these is\n> \n> > Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000\n> > samples % app name symbol name\n> > ...\n> > 920369 2.1332 postgres AtEOXact_Buffers\n> > ...\n> \n> > In the second test AtEOXact_Buffers is much lower (down around 0.57\n> > percent) but the other suspects are similar. Since the only difference\n> > in parameters is shared_buffers (36000 vs 9000), it does look like we\n> > are approaching the point where AtEOXact_Buffers is a problem, but so\n> > far it's only a 2% drag.\n\nYes... as soon as you first mentioned AtEOXact_Buffers, I realised I'd\nseen it near the top of the oprofile results on previous tests.\n\nAlthough you don't say this, I presume you're acting on the thought that\na 2% drag would soon become a much larger contention point with more\nusers and/or smaller transactions - since these things are highly\nnon-linear.\n\n> \n> It occurs to me that given the 8.0 resource manager mechanism, we could\n> in fact dispense with AtEOXact_Buffers, or perhaps better turn it into a\n> no-op unless #ifdef USE_ASSERT_CHECKING. We'd just get rid of the\n> special case for transaction termination in resowner.c and let the\n> resource owner be responsible for releasing locked buffers always. The\n> OSDL results suggest that this won't matter much at the level of 10000\n> or so shared buffers, but for 100000 or more buffers the linear scan in\n> AtEOXact_Buffers is going to become a problem.\n\nIf the resource owner is always responsible for releasing locked\nbuffers, who releases the locks if the backend crashes? Do we need some\nadditional code in bgwriter (or?) to clean up buffer locks?\n\n> \n> We could also get rid of the linear search in UnlockBuffers(). The only\n> thing it's for anymore is to release a BM_PIN_COUNT_WAITER flag, and\n> since a backend could not be doing more than one of those at a time,\n> we don't really need an array of flags for that, only a single variable.\n> This does not show in the OSDL results, which I presume means that their\n> test case is not exercising transaction aborts; but I think we need to\n> zap both routines to make the world safe for large shared_buffers\n> values. (See also\n> http://archives.postgresql.org/pgsql-performance/2004-10/msg00218.php)\n\nYes, that's important. \n\n> Any objection to doing this for 8.0?\n> \n\nAs you say, if these issues are definitely kicking in at 100000\nshared_buffers - there's a good few people out there with 800Mb\nshared_buffers already. \n\nCould I also suggest that we adopt your earlier suggestion of raising\nthe bgwriter parameters as a permanent measure - i.e. changing the\ndefaults in postgresql.conf. That way, StrategyDirtyBufferList won't\nimmediately show itself as a problem when using the default parameter\nset. It would be a shame to remove one obstacle only to leave another\none following so close behind. 
[...and that also argues against an\nearlier thought to introduce more fine grained values for the\nbgwriter's parameters, ISTM]\n\nAlso, what will Vacuum delay do to the O(N) effect of\nFlushRelationBuffers when called by VACUUM? Will the locks be held for\nlonger?\n\nI think we should do some tests while running a VACUUM in the background\nalso, which isn't part of the DBT-2 set-up, but perhaps we might argue\n*it should be for the PostgreSQL version*?\n\nDare we hope for a scalability increase in 8.0 after all.... \n\nBest Regards,\n\nSimon Riggs\n", "msg_date": "Sun, 17 Oct 2004 21:40:01 +0200", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "\n =?iso-8859-1?Q?Re:__Getting_rid_of_AtEOXact_Buffers_(was_Re:_[Testperf-general]_Re:_[PERFORM]_First_set_of_OSDL_Shared_Memscalability_results,\n\t_some_wierdness_=2E=2E=2E)?=" }, { "msg_contents": "<[email protected]> writes:\n> If the resource owner is always responsible for releasing locked\n> buffers, who releases the locks if the backend crashes?\n\nThe ensuing system reset takes care of that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Oct 2004 16:12:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re:\n =?iso-8859-1?Q?Re:__Getting_rid_of_AtEOXact_Buffers_(was_Re:_[Testperf-general]_Re:_[PERFORM]_First_set_of_OSDL_Shared_Memscalability_results,\n\t_some_wierdness_=2E=2E=2E)?=" }, { "msg_contents": "[email protected] wrote:\n\n> If the resource owner is always responsible for releasing locked\n> buffers, who releases the locks if the backend crashes?\n\nThe semaphore \"undo\" I hope.\n\n\nRegards\nGaetano Mendola\n\n", "msg_date": "Mon, 18 Oct 2004 02:22:08 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Getting rid of AtEOXact Buffers (was Re:\n\t[Testperf-general]" }, { "msg_contents": "On 10/17/2004 3:40 PM, [email protected] wrote:\n\n> Seeing as I've missed the last N messages... I'll just reply to this\n> one, rather than each of them in turn...\n> \n> Tom Lane <[email protected]> wrote on 16.10.2004, 18:54:17:\n>> I wrote:\n>> > Josh Berkus writes:\n>> >> First off, two test runs with OProfile are available at:\n>> >> http://khack.osdl.org/stp/298124/\n>> >> http://khack.osdl.org/stp/298121/\n>> \n>> > Hmm. The stuff above 1% in the first of these is\n>> \n>> > Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000\n>> > samples % app name symbol name\n>> > ...\n>> > 920369 2.1332 postgres AtEOXact_Buffers\n>> > ...\n>> \n>> > In the second test AtEOXact_Buffers is much lower (down around 0.57\n>> > percent) but the other suspects are similar. Since the only difference\n>> > in parameters is shared_buffers (36000 vs 9000), it does look like we\n>> > are approaching the point where AtEOXact_Buffers is a problem, but so\n>> > far it's only a 2% drag.\n> \n> Yes... as soon as you first mentioned AtEOXact_Buffers, I realised I'd\n> seen it near the top of the oprofile results on previous tests.\n> \n> Although you don't say this, I presume you're acting on the thought that\n> a 2% drag would soon become a much larger contention point with more\n> users and/or smaller transactions - since these things are highly\n> non-linear.\n> \n>> \n>> It occurs to me that given the 8.0 resource manager mechanism, we could\n>> in fact dispense with AtEOXact_Buffers, or perhaps better turn it into a\n>> no-op unless #ifdef USE_ASSERT_CHECKING. 
We'd just get rid of the\n>> special case for transaction termination in resowner.c and let the\n>> resource owner be responsible for releasing locked buffers always. The\n>> OSDL results suggest that this won't matter much at the level of 10000\n>> or so shared buffers, but for 100000 or more buffers the linear scan in\n>> AtEOXact_Buffers is going to become a problem.\n> \n> If the resource owner is always responsible for releasing locked\n> buffers, who releases the locks if the backend crashes? Do we need some\n> additional code in bgwriter (or?) to clean up buffer locks?\n\nIf the backend crashes, the postmaster (assuming a possibly corrupted \nshared memory) restarts the whole lot ... so why bother?\n\n> \n>> \n>> We could also get rid of the linear search in UnlockBuffers(). The only\n>> thing it's for anymore is to release a BM_PIN_COUNT_WAITER flag, and\n>> since a backend could not be doing more than one of those at a time,\n>> we don't really need an array of flags for that, only a single variable.\n>> This does not show in the OSDL results, which I presume means that their\n>> test case is not exercising transaction aborts; but I think we need to\n>> zap both routines to make the world safe for large shared_buffers\n>> values. (See also\n>> http://archives.postgresql.org/pgsql-performance/2004-10/msg00218.php)\n> \n> Yes, that's important. \n> \n>> Any objection to doing this for 8.0?\n>> \n> \n> As you say, if these issues are definitely kicking in at 100000\n> shared_buffers - there's a good few people out there with 800Mb\n> shared_buffers already. \n> \n> Could I also suggest that we adopt your earlier suggestion of raising\n> the bgwriter parameters as a permanent measure - i.e. changing the\n> defaults in postgresql.conf. That way, StrategyDirtyBufferList won't\n> immediately show itself as a problem when using the default parameter\n> set. It would be a shame to remove one obstacle only to leave another\n> one following so close behind. [...and that also argues against an\n> earlier thought to introduce more fine grained values for the\n> bgwriter's parameters, ISTM]\n\nI realized that StrategyDirtyBufferList currently wasts a lot of time by \nfirst scanning over all the buffers that haven't even been hit since \nit's last call and neither have been dirty last time (and thus, are at \nthe beginning of the list and can't be dirty anyway). If we would have a \nway to give it a smart \"point in the list to start scanning\" ...\n\n\n> \n> Also, what will Vacuum delay do to the O(N) effect of\n> FlushRelationBuffers when called by VACUUM? Will the locks be held for\n> longer?\n\nVacuum only naps at the points where it checks for interrupts, and at \nthat time it isn't supposed to hold any critical locks.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n", "msg_date": "Mon, 18 Oct 2004 16:36:19 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting rid of AtEOXact Buffers (was Re: [Testperf-general]" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> I realized that StrategyDirtyBufferList currently wasts a lot of time by \n> first scanning over all the buffers that haven't even been hit since \n> it's last call and neither have been dirty last time (and thus, are at \n> the beginning of the list and can't be dirty anyway). If we would have a \n> way to give it a smart \"point in the list to start scanning\" ...\n\nI don't think it's true that they *can't* be dirty.\n\n(1) Buffers are marked dirty when released, whereas they are moved to\nthe fronts of the lists when acquired.\n\n(2) the cntxDirty bit can be set asynchronously to any BufMgrLock'd\noperation.\n\nBut it sure seems like we are doing more work than we really need to.\n\nOne idea I had was for the bgwriter to collect all the dirty pages up to\nsay halfway on the LRU lists, and then write *all* of these, not just\nthe first N, over as many rounds as are needed. Then go back and call\nStrategyDirtyBufferList again to get a new list. (We don't want it to\nwrite every dirty buffer this way, because the ones near the front of\nthe list are likely to be dirtied again right away. But certainly we\ncould write more than 1% of the dirty buffers without getting into the\narea of the recently-used buffers.)\n\nThere isn't any particularly good reason for this to share code with\ncheckpoint-style BufferSync, btw. BufferSync could just as easily scan\nthe buffers linearly, since it doesn't matter what order it writes them\nin. So we could change StrategyDirtyBufferList to stop as soon as it's\nhalfway up the LRU lists, which would save at least a few cycles.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 2004 17:01:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting rid of AtEOXact Buffers (was Re: [Testperf-general] Re:\n\t[PERFORM] First set of OSDL Shared Memscalability results,\n\tsome wierdness ...)" }, { "msg_contents": "Trying to think a little out of the box, how \"common\" is it in modern \noperating systems to be able to swap out shared memory?\n\nMaybe we're not using the ARC algorithm correctly after all. The ARC \nalgorithm does not consider the second level OS buffer cache in it's \ndesign. Maybe the total size of the ARC cache directory should not be 2x \nthe size of what is configured as the shared buffer cache, but rather 2x \nthe size of the effective cache size (in 8k pages). If we assume that \nthe job of the T1 queue is better done by the OS buffers anyway (and \nthis is what this discussion seems to point out), we shouldn't hold them \nin shared buffers (only very few of them and evict them ASAP). We just \naccount for them and assume that the OS will have those cached that we \nfind in our T1 directory. I think with the right configuration for \neffective cache size, this is a fair assumption. The T2 queue represents \nthe frequently used blocks. 
If our implementation would cause the \nunused/used portions of the buffers not to move around, the OS will swap \nout currently unused portions of the shared buffer cache and utilize \nthose as OS buffers.\n\nTo verify this theory it would be interesting what the ARC strategy \nafter a long DBT run with a \"large\" buffer cache thinks a good T2 size \nwould be. Enabling the strategy debug message and running postmaster \nwith -d1 will show that. In theory, this size should be anywhere near \nthe sweet spot.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 18 Oct 2004 17:19:17 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Autotuning of shared buffer size (was: Re: Getting rid\n\tof AtEOXact Buffers (was Re: [Testperf-general] Re: [PERFORM] First set\n\tof OSDL Shared Memscalability results, some wierdness ...))" } ]
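A note for readers following along: the change Simon asks for above is a postgresql.conf edit, not code. The fragment below is only an illustration of the knobs under discussion -- the names follow the 8.0-era background writer settings, but the values are placeholders to benchmark, not recommendations taken from this thread.

# Illustrative postgresql.conf fragment (8.0-era names; values are
# placeholders -- benchmark before adopting anything like this)
shared_buffers    = 36000   # the "large buffer" regime from the OSDL runs
bgwriter_delay    = 200     # ms between background-writer rounds
bgwriter_percent  = 10      # consider a larger share of the dirty list ...
bgwriter_maxpages = 1000    # ... and allow more pages written per round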
[ { "msg_contents": "Hello !\nWe have difficulties with the use of indexes. For example, we have two \ntables :\n\n * table lnk : \n \nTable \"public.lnk\"\n Column | Type | Modifiers\n--------+-----------------------+-----------\n index | integer | not null\n sgaccn | character varying(12) | not null\nIndexes:\n \"pkey1\" primary key, btree (\"index\", sgaccn)\nForeign-key constraints:\n \"fk_sgaccn1\" FOREIGN KEY (sgaccn) REFERENCES main_tbl(sgaccn) ON UPDATE\nCASCADE ON DELETE CASCADE\n\n * table dic :\n\nTable \"public.dic\"\n Column | Type | Modifiers\n \n--------+-----------------------+--------------------------------------------------------------------\n index | integer | not null default \nnextval('public.dic_index_seq'::text)\n word | character varying(60) | not null\nIndexes:\n \"dic_pkey\" primary key, btree (\"index\")\n \"dic_word_idx\" unique, btree (word)\n \"dic_word_key\" unique, btree (word)\n\n\n\nThe table lnk contains 33 000 000 tuples and table dic contains 303 000 \ntuples.\n\nWhen we try to execute a join between these two tables, the planner \nproposes to excute a hash-join plan :\n\nexplain select sgaccn from dic, lnk where dic.index=lnk.index;\n QUERY PLAN\n \n-----------------------------------------------------------------------------------\n Hash Join (cost=6793.29..1716853.80 rows=33743101 width=11)\n Hash Cond: (\"outer\".\"index\" = \"inner\".\"index\")\n -> Seq Scan on lnk (cost=0.00..535920.00 rows=33743100 width=15)\n -> Hash (cost=4994.83..4994.83 rows=303783 width=4)\n -> Seq Scan dic (cost=0.00..4994.83 rows=303783 width=4)\n(5 rows)\n\nSo the planner decides to scan 33 000 000 of tuples and we would like to \nforce it to scan the table dic (303 000 tuples) and to use\nthe index on the integer index to execute the join. So we have set the \nparameters enable_hashjoin and enable_mergejoin to off. So the planner \nproposes the following query :\n\n QUERY PLAN\n \n--------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..102642540.60 rows=33743101 width=11)\n -> Seq Scan on refs_ra_lnk1 (cost=0.00..535920.00 rows=33743100 \nwidth=15)\n -> Index Scan using refs_ra_dic_new_pkey on refs_ra_dic_new \n(cost=0.00..3.01 rows=1 width=4)\n Index Cond: (refs_ra_dic_new.\"index\" = \"outer\".\"index\")\n(4 rows)\n\nWe were surprised of this response because the planner continues to \npropose us to scan the 33 000 000 of tuples instead of the smaller \ntable. Is there any way to force it to scan the smaller table ?\n\nThanks\n\nCeline Charavay\n\n", "msg_date": "Mon, 18 Oct 2004 10:50:21 +0200", "msg_from": "charavay <[email protected]>", "msg_from_op": true, "msg_subject": "Indexes performance" }, { "msg_contents": "Charavay,\n\n> ---------------------------------------------------------------------------\n>-------- Hash Join  (cost=6793.29..1716853.80 rows=33743101 width=11)\n>    Hash Cond: (\"outer\".\"index\" = \"inner\".\"index\")\n>    ->  Seq Scan on lnk  (cost=0.00..535920.00 rows=33743100 width=15)\n>    ->  Hash  (cost=4994.83..4994.83 rows=303783 width=4)\n>          ->  Seq Scan dic  (cost=0.00..4994.83 rows=303783 width=4)\n> (5 rows)\n\nAccording to the estimate, you are selecting all of the rows in the database. 
\nThis is going to require a Seq Scan no matter what.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 18 Oct 2004 12:45:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes performance" }, { "msg_contents": "charavay <[email protected]> writes:\n> ... So the planner decides to scan 33 000 000 of tuples and we would like to \n> force it to scan the table dic (303 000 tuples) and to use\n> the index on the integer index to execute the join.\n\nI'm mystified why you think that that will be a superior plan. It still\nrequires visiting every row of the larger table (I assume that all of\nthe larger table's rows do join to some row of the smaller table).\nAll that it accomplishes is to force those visits to occur in a\nquasi-random order; which not only loses any chance of kernel read-ahead\noptimizations, but very likely causes each page of the table to be read\nmore than once.\n\nAFAICT the planner made exactly the right choice by picking a hashjoin.\nHave you tried comparing its estimates against actual runtimes for the\ndifferent plans? (See EXPLAIN ANALYZE.)\n\nOffhand the only way I can think of to force it to do the nestloop the\nother way around from what it wants to is to temporarily drop the\nindex it wants to use. You can do that conveniently like so:\n\n\tbegin;\n\talter table dic drop constraint dic_pkey;\n\texplain analyze select ...;\n\trollback;\n\nwhich of course would be no good for production, but it should at least\nserve to destroy your illusions about wanting to do it in production.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Oct 2004 20:02:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes performance " } ]
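For anyone wanting to run the comparison Tom suggests, here is a minimal session-local sketch. It uses only objects named in this thread and changes nothing permanently; note that EXPLAIN ANALYZE really executes the query, so each variant takes its full runtime.

-- the planner's own choice
EXPLAIN ANALYZE
SELECT sgaccn FROM dic, lnk WHERE dic.index = lnk.index;

-- forced away from hash/merge joins for this session only
SET enable_hashjoin = off;
SET enable_mergejoin = off;
EXPLAIN ANALYZE
SELECT sgaccn FROM dic, lnk WHERE dic.index = lnk.index;
RESET enable_hashjoin;
RESET enable_mergejoin;

Comparing the two "actual time" totals settles the hash-join-versus-nestloop question more convincingly than the cost estimates do.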
[ { "msg_contents": "Hi,\n \nI have a problem where a query inside a function is up to 100 times slower\ninside a function than as a stand alone query run in psql.\n \nThe column 'botnumber' is a character(10), is indexed and there are 125000\nrows in the table.\n \nHelp please!\n \nThis query is fast:-\n \nexplain analyze \n SELECT batchserial\n FROM transbatch\n WHERE botnumber = '1-7'\n LIMIT 1;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------\n Limit (cost=0.00..0.42 rows=1 width=4) (actual time=0.73..148.23 rows=1\nloops=1)\n -> Index Scan using ind_tbatchx on transbatch (cost=0.00..18.73 rows=45\nwidth=4) (actual time=0.73..148.22 rows=1 loops=1)\n Index Cond: (botnumber = '1-7'::bpchar)\n Total runtime: 148.29 msec\n(4 rows)\n \n \nThis function is slow:-\n \nCREATE OR REPLACE FUNCTION sp_test_rod3 ( ) returns integer \nas '\nDECLARE\n bot char(10);\n oldbatch INTEGER;\nBEGIN\n \n bot := ''1-7'';\n \n SELECT INTO oldbatch batchserial\n FROM transbatch\n WHERE botnumber = bot\n LIMIT 1;\n \n IF FOUND THEN\n RETURN 1;\n ELSE\n RETURN 0;\n END IF;\n \nEND;\n'\nlanguage plpgsql ;\n\n \nexplain analyze SELECT sp_test_rod3();\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=1452.39..1452.40\nrows=1 loops=1)\n Total runtime: 1452.42 msec\n(2 rows)\n\n\n\n\n\n\nHi,\n \nI have a problem \nwhere a query inside a function is up to 100 times slower inside a function than \nas a stand alone query run in psql.\n \nThe column \n'botnumber' is a character(10), is indexed and there are 125000 rows in the \ntable.\n \nHelp \nplease!\n \nThis query is \nfast:-\n \nexplain \nanalyze   \n  \nSELECT batchserial  FROM transbatch  WHERE botnumber = \n'1-7'  LIMIT 1;\n                                                           \nQUERY \nPLAN                                                           \n-------------------------------------------------------------------------------------------------------------------------------- Limit  \n(cost=0.00..0.42 rows=1 width=4) (actual time=0.73..148.23 rows=1 \nloops=1)   ->  Index Scan using ind_tbatchx on \ntransbatch  (cost=0.00..18.73 rows=45 width=4) (actual time=0.73..148.22 \nrows=1 loops=1)         Index Cond: \n(botnumber = '1-7'::bpchar) Total runtime: 148.29 msec(4 \nrows)\n \n \nThis function \nis slow:-\n \nCREATE OR \nREPLACE FUNCTION  sp_test_rod3 ( ) returns \ninteger          as \n'DECLARE  bot char(10);  oldbatch \nINTEGER;BEGIN\n \n  bot := \n''1-7'';\n \n  SELECT \nINTO oldbatch batchserial  FROM transbatch  WHERE botnumber = \nbot  LIMIT 1;\n \n  IF \nFOUND THEN    RETURN 1;  ELSE    \nRETURN 0;  END IF;\n \nEND;'language plpgsql  ;\n \nexplain \nanalyze SELECT sp_test_rod3();\n                                       \nQUERY \nPLAN                                       \n---------------------------------------------------------------------------------------- Result  \n(cost=0.00..0.01 rows=1 width=0) (actual time=1452.39..1452.40 rows=1 \nloops=1) Total runtime: 1452.42 msec(2 \nrows)", "msg_date": "Mon, 18 Oct 2004 19:01:25 +0100", "msg_from": "\"Rod Dutton\" <[email protected]>", "msg_from_op": true, "msg_subject": "Queries slow using stored procedures" }, { "msg_contents": "You seem to not have index on botnumber, but in your query bot number is\nthe clause.\n \nI don't explain you why the same query is so long.\nbut have your try 
procedure with a loop structure (witch create cursor) ?\n \nyou could try\n \n \nCREATE OR REPLACE FUNCTION sp_test_Alban1 ( ) returns integer \nas '\nDECLARE\n bot char(10);\n oldbatch INTEGER;\n rec RECORD;\n query VARCHAR;\n\nBEGIN\n \n -- initialisation\n bot := ''1-7'';\n query := '' SELECT batchserial FROM transbatch WHERE botnumber = ' ||\nquote_ident(bot) || '' <optionaly your limit clause> ;'';\n \n \n FOR rec IN EXECUTE var_query LOOP\n return rec.\"batchserial \".; \n END LOOP;\n \n --else\n return 0;\n \nEND;\n'\nlanguage plpgsql ;\n\ndoes it return the same results in the same time ? \n\n _____ \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Rod Dutton\nSent: lundi 18 octobre 2004 20:01\nTo: [email protected]\nSubject: [PERFORM] Queries slow using stored procedures\n\n\nHi,\n \nI have a problem where a query inside a function is up to 100 times slower\ninside a function than as a stand alone query run in psql.\n \nThe column 'botnumber' is a character(10), is indexed and there are 125000\nrows in the table.\n \nHelp please!\n \nThis query is fast:-\n \nexplain analyze \n SELECT batchserial\n FROM transbatch\n WHERE botnumber = '1-7'\n LIMIT 1;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------\n Limit (cost=0.00..0.42 rows=1 width=4) (actual time=0.73..148.23 rows=1\nloops=1)\n -> Index Scan using ind_tbatchx on transbatch (cost=0.00..18.73 rows=45\nwidth=4) (actual time=0.73..148.22 rows=1 loops=1)\n Index Cond: (botnumber = '1-7'::bpchar)\n Total runtime: 148.29 msec\n(4 rows)\n \n \nThis function is slow:-\n \nCREATE OR REPLACE FUNCTION sp_test_rod3 ( ) returns integer \nas '\nDECLARE\n bot char(10);\n oldbatch INTEGER;\nBEGIN\n \n bot := ''1-7'';\n \n SELECT INTO oldbatch batchserial\n FROM transbatch\n WHERE botnumber = bot\n LIMIT 1;\n \n IF FOUND THEN\n RETURN 1;\n ELSE\n RETURN 0;\n END IF;\n \nEND;\n'\nlanguage plpgsql ;\n\n \nexplain analyze SELECT sp_test_rod3();\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=1452.39..1452.40\nrows=1 loops=1)\n Total runtime: 1452.42 msec\n(2 rows)\n\n\n\n\n\n\nYou seem to not have index on botnumber,  but in \nyour query bot number is the clause.\n \nI don't explain you why the same query is so \nlong.\nbut have your try procedure with a loop structure \n(witch create cursor) ?\n \nyou could try\n \n \n\nCREATE OR \nREPLACE FUNCTION  sp_test_Alban1 ( ) \nreturns integer          as \n'DECLARE  bot char(10);  oldbatch \nINTEGER;\n  rec \nRECORD;\n  query \nVARCHAR;\nBEGIN\n \n  -- initialisation\n  bot := ''1-7'';\n  query  := '' \nSELECT  batchserial FROM transbatch WHERE \nbotnumber  = ' || quote_ident(bot) || '' <optionaly your limit \nclause> ;'';\n \n \n   FOR rec IN \nEXECUTE var_query  LOOP        return \nrec.\"batchserial \".;    \n   END \nLOOP;\n    \n    \n--else\n    return \n0;\n \nEND;'language plpgsql  \n;\ndoes it return the same results in the same time \n? 
\n\n\nFrom: [email protected] \n[mailto:[email protected]] On Behalf Of Rod \nDuttonSent: lundi 18 octobre 2004 20:01To: \[email protected]: [PERFORM] Queries slow using \nstored procedures\n\nHi,\n \nI have a problem \nwhere a query inside a function is up to 100 times slower inside a function than \nas a stand alone query run in psql.\n \nThe column \n'botnumber' is a character(10), is indexed and there are 125000 rows in the \ntable.\n \nHelp \nplease!\n \nThis query is \nfast:-\n \nexplain \nanalyze   \n  \nSELECT batchserial  FROM transbatch  WHERE botnumber = \n'1-7'  LIMIT 1;\n                                                           \nQUERY \nPLAN                                                           \n-------------------------------------------------------------------------------------------------------------------------------- Limit  \n(cost=0.00..0.42 rows=1 width=4) (actual time=0.73..148.23 rows=1 \nloops=1)   ->  Index Scan using ind_tbatchx on \ntransbatch  (cost=0.00..18.73 rows=45 width=4) (actual time=0.73..148.22 \nrows=1 loops=1)         Index Cond: \n(botnumber = '1-7'::bpchar) Total runtime: 148.29 msec(4 \nrows)\n \n \nThis \nfunction is slow:-\n \nCREATE \nOR REPLACE FUNCTION  sp_test_rod3 ( ) returns \ninteger          as \n'DECLARE  bot char(10);  oldbatch \nINTEGER;BEGIN\n \n  \nbot := ''1-7'';\n \n  \nSELECT INTO oldbatch batchserial  FROM transbatch  WHERE \nbotnumber = bot  LIMIT 1;\n \n  \nIF FOUND THEN    RETURN 1;  \nELSE    RETURN 0;  END \nIF;\n \nEND;'language plpgsql  ;\n \nexplain analyze SELECT sp_test_rod3();\n                                       \nQUERY \nPLAN                                       \n---------------------------------------------------------------------------------------- Result  \n(cost=0.00..0.01 rows=1 width=0) (actual time=1452.39..1452.40 rows=1 \nloops=1) Total runtime: 1452.42 msec(2 \nrows)", "msg_date": "Tue, 19 Oct 2004 09:25:08 +0200", "msg_from": "\"Alban Medici (NetCentrex)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries slow using stored procedures" } ]
[ { "msg_contents": "[email protected] wrote:\n\n>Hello\n>\n>I posted this on the general list but think it would be more appropriate\n>here. Sorry.\n>\n>I know it is possible to time isolated queries through the settting of the\n>\\timing option in psql. This makes PgSQL report the time it took to\n>perform one operation.\n>\n>I would like to know how one can get a time summary of many operations, if\n>it is at all possible.\n>\n> \n>\nHello,\n\nYou can turn on statement and duration logging in the postgresql.conf\n\n\n>Thank you.\n>\n>Tim\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n", "msg_date": "Mon, 18 Oct 2004 11:11:27 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to time several queries?" }, { "msg_contents": "Hello\n\nI posted this on the general list but think it would be more appropriate\nhere. Sorry.\n\nI know it is possible to time isolated queries through the settting of the\n\\timing option in psql. This makes PgSQL report the time it took to\nperform one operation.\n\nI would like to know how one can get a time summary of many operations, if\nit is at all possible.\n\nThank you.\n\nTim\n\n\n", "msg_date": "Mon, 18 Oct 2004 20:28:24 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "How to time several queries?" }, { "msg_contents": "When I'm using psql and I want to time queries, which is what I've been\ndoing for a little over a day now, I do the following:\n\nSelect now(); query 1; query 2; query 3; select now();\n\nThis works fine unless you're doing selects with a lot of rows which will\ncause your first timestamp to scroll off the screen.\n\n-- \nMatthew Nuzum + \"Man was born free, and everywhere\nwww.bearfruit.org : he is in chains,\" Rousseau\n+~~~~~~~~~~~~~~~~~~+ \"Then you will know the truth, and \nthe TRUTH will set you free,\" Jesus Christ (John 8:32 NIV)\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of\[email protected]\nSent: Monday, October 18, 2004 2:28 PM\nTo: [email protected]\nSubject: [PERFORM] How to time several queries?\n\nHello\n\nI posted this on the general list but think it would be more appropriate\nhere. Sorry.\n\nI know it is possible to time isolated queries through the settting of the\n\\timing option in psql. This makes PgSQL report the time it took to\nperform one operation.\n\nI would like to know how one can get a time summary of many operations, if\nit is at all possible.\n\nThank you.\n\nTim\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Wed, 20 Oct 2004 08:50:42 -0400", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to time several queries?" } ]
[ { "msg_contents": "The following is from a database of several hundred million rows of real\ndata that has been VACUUM ANALYZEd.\n\n \n\nWhy isn't the index being used for a query that seems tailor-made for\nit? The results (6,300 rows) take about ten minutes to retrieve with a\nsequential scan.\n\n \n\nA copy of this database with \"integer\" in place of \"smallint\", a primary\nkey in column order (date, time, type, subtype) and a secondary index in\nthe required order (type, subtype, date, time) correctly uses the\nsecondary index to return results in under a second.\n\n \n\nActually, the integer version is the first one I made, and the smallint\nis the copy, but that shouldn't matter.\n\n \n\nPostgres is version \"postgresql-server-7.3.4-3.rhl9\" from Red Hat Linux\n9.\n\n \n\n=====\n\n \n\ntestdb2=# \\d db\n\n Table \"public.db\"\n\n Column | Type | Modifiers\n\n---------+------------------------+-----------\n\n date | date | not null\n\n time | time without time zone | not null\n\n type | smallint | not null\n\n subtype | smallint | not null\n\n value | integer |\n\nIndexes: db_pkey primary key btree (\"type\", subtype, date, \"time\")\n\n \n\ntestdb2=# set enable_seqscan to off;\n\nSET\n\n \n\ntestdb2=# explain select * from db where type=90 and subtype=70 and\ndate='7/1/2004';\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n------\n\n Seq Scan on db (cost=100000000.00..107455603.76 rows=178 width=20)\n\n Filter: ((\"type\" = 90) AND (subtype = 70) AND (date =\n'2004-07-01'::date))\n\n(2 rows)\n\n\n\n\n\n\n\n\n\n\nThe following is from a database of several hundred million\nrows of real data that has been VACUUM ANALYZEd.\n \nWhy isn't the index being used for a query that seems\ntailor-made for it? The results (6,300 rows) take about ten minutes to retrieve\nwith a sequential scan.\n \nA copy of this database with \"integer\" in\nplace of \"smallint\", a primary key in column order (date, time, type,\nsubtype) and a secondary index in the required order (type, subtype, date,\ntime) correctly uses the secondary index to return results in under a second.\n \nActually, the integer version is the first one I\nmade, and the smallint is the copy, but that shouldn't matter.\n \nPostgres is version \"postgresql-server-7.3.4-3.rhl9\"\nfrom Red Hat Linux 9.\n \n=====\n \ntestdb2=# \\d db\n             \nTable \"public.db\"\n Column \n|         \nType          | Modifiers\n---------+------------------------+-----------\n date    |\ndate                  \n| not null\n time    | time without time\nzone | not null\n type    |\nsmallint              \n| not null\n subtype |\nsmallint              \n| not null\n value   |\ninteger               \n|\nIndexes: db_pkey primary key btree\n(\"type\", subtype, date, \"time\")\n \ntestdb2=# set enable_seqscan to off;\nSET\n \ntestdb2=# explain select * from db where type=90 and\nsubtype=70 and date='7/1/2004';\n                                 \nQUERY PLAN\n------------------------------------------------------------------------------\n Seq Scan on db \n(cost=100000000.00..107455603.76 rows=178 width=20)\n   Filter: ((\"type\" = 90) AND\n(subtype = 70) AND (date = '2004-07-01'::date))\n(2 rows)", "msg_date": "Tue, 19 Oct 2004 11:14:55 -0400", "msg_from": "\"Knutsen, Mark\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why isn't this index being used?" 
}, { "msg_contents": "Hi, I ran into a similar problem using bigints...\n\nSee:\nhttp://www.postgresql.org/docs/7.3/static/datatype.html#DATATYPE-INT\n\nsmall & big int have to be cast when used in querries... try:\nexplain select * from db where type=90::smallint and \nsubtype=70::smallint and date='7/1/2004';\nor\nexplain select * from db where type='90' and subtype='70' and \ndate='7/1/2004';\n\nKnutsen, Mark wrote:\n\n> The following is from a database of several hundred million rows of \n> real data that has been VACUUM ANALYZEd.\n>\n> \n>\n> Why isn't the index being used for a query that seems tailor-made for \n> it? The results (6,300 rows) take about ten minutes to retrieve with a \n> sequential scan.\n>\n> \n>\n> A copy of this database with \"integer\" in place of \"smallint\", a \n> primary key in column order (date, time, type, subtype) and a \n> secondary index in the required order (type, subtype, date, time) \n> correctly uses the secondary index to return results in under a second.\n>\n> \n>\n> Actually, the integer version is the first one I made, and the \n> smallint is the copy, but that shouldn't matter.\n>\n> \n>\n> Postgres is version \"postgresql-server-7.3.4-3.rhl9\" from Red Hat Linux 9.\n>\n> \n>\n> =====\n>\n> \n>\n> testdb2=# \\d db\n>\n> Table \"public.db\"\n>\n> Column | Type | Modifiers\n>\n> ---------+------------------------+-----------\n>\n> date | date | not null\n>\n> time | time without time zone | not null\n>\n> type | smallint | not null\n>\n> subtype | smallint | not null\n>\n> value | integer |\n>\n> Indexes: db_pkey primary key btree (\"type\", subtype, date, \"time\")\n>\n> \n>\n> testdb2=# set enable_seqscan to off;\n>\n> SET\n>\n> \n>\n> testdb2=# explain select * from db where type=90 and subtype=70 and \n> date='7/1/2004';\n>\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------\n>\n> Seq Scan on db (cost=100000000.00..107455603.76 rows=178 width=20)\n>\n> Filter: ((\"type\" = 90) AND (subtype = 70) AND (date = \n> '2004-07-01'::date))\n>\n> (2 rows)\n>\n\n", "msg_date": "Tue, 19 Oct 2004 11:28:16 -0400", "msg_from": "Doug Y <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why isn't this index being used?" } ]
[ { "msg_contents": "(Why don't replies automatically go to the list?)\n\nSure enough, quoting the constants fixes the problem.\n\nIs it a best practice to always quote constants?\n\n> -----Original Message-----\n> From: Doug Y [mailto:[email protected]]\n> Sent: Tuesday, October 19, 2004 11:28 AM\n> To: Knutsen, Mark\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Why isn't this index being used?\n> \n> Hi, I ran into a similar problem using bigints...\n> \n> See:\n> http://www.postgresql.org/docs/7.3/static/datatype.html#DATATYPE-INT\n> \n> small & big int have to be cast when used in querries... try:\n> explain select * from db where type=90::smallint and\n> subtype=70::smallint and date='7/1/2004';\n> or\n> explain select * from db where type='90' and subtype='70' and\n> date='7/1/2004';\n> \n> Knutsen, Mark wrote:\n> \n> > The following is from a database of several hundred million rows of\n> > real data that has been VACUUM ANALYZEd.\n> >\n> >\n> >\n> > Why isn't the index being used for a query that seems tailor-made\nfor\n> > it? The results (6,300 rows) take about ten minutes to retrieve with\na\n> > sequential scan.\n> >\n> >\n> >\n> > A copy of this database with \"integer\" in place of \"smallint\", a\n> > primary key in column order (date, time, type, subtype) and a\n> > secondary index in the required order (type, subtype, date, time)\n> > correctly uses the secondary index to return results in under a\nsecond.\n> >\n> >\n> >\n> > Actually, the integer version is the first one I made, and the\n> > smallint is the copy, but that shouldn't matter.\n> >\n> >\n> >\n> > Postgres is version \"postgresql-server-7.3.4-3.rhl9\" from Red Hat\nLinux\n> 9.\n> >\n> >\n> >\n> > =====\n> >\n> >\n> >\n> > testdb2=# \\d db\n> >\n> > Table \"public.db\"\n> >\n> > Column | Type | Modifiers\n> >\n> > ---------+------------------------+-----------\n> >\n> > date | date | not null\n> >\n> > time | time without time zone | not null\n> >\n> > type | smallint | not null\n> >\n> > subtype | smallint | not null\n> >\n> > value | integer |\n> >\n> > Indexes: db_pkey primary key btree (\"type\", subtype, date, \"time\")\n> >\n> >\n> >\n> > testdb2=# set enable_seqscan to off;\n> >\n> > SET\n> >\n> >\n> >\n> > testdb2=# explain select * from db where type=90 and subtype=70 and\n> > date='7/1/2004';\n> >\n> > QUERY PLAN\n> >\n> >\n------------------------------------------------------------------------\n> ------\n> >\n> > Seq Scan on db (cost=100000000.00..107455603.76 rows=178 width=20)\n> >\n> > Filter: ((\"type\" = 90) AND (subtype = 70) AND (date =\n> > '2004-07-01'::date))\n> >\n> > (2 rows)\n\n\n", "msg_date": "Tue, 19 Oct 2004 11:33:50 -0400", "msg_from": "\"Knutsen, Mark\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why isn't this index being used?" }, { "msg_contents": "On Tue, Oct 19, 2004 at 11:33:50AM -0400, Knutsen, Mark wrote:\n> (Why don't replies automatically go to the list?)\n\nBecause sometimes you don't want them to. There's been dozens of\ndiscussions about this. BTW, mutt has a nice feature which allows\nyou to reply to lists -- I imagine other MUAs have such a feature\ntoo.\n\n> Sure enough, quoting the constants fixes the problem.\n> \n> Is it a best practice to always quote constants?\n\nNo, but it's very useful in these cases. The problem is I believe\nthis is fixed in 8.0, BTW. See the FAQ, question 4.8\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. 
That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Tue, 19 Oct 2004 11:47:18 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why isn't this index being used?" } ]
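A related option when the application can use prepared statements: declaring the parameter types up front avoids the int4-versus-int2 literal mismatch without quoting anything (PREPARE is available from 7.3 on). The statement name below is made up, and whether the pre-8.0 planner then picks the index for the prepared plan should still be verified against the real data.

PREPARE db_lookup (smallint, smallint, date) AS
    SELECT * FROM db
     WHERE type = $1 AND subtype = $2 AND date = $3;

EXECUTE db_lookup (90, 70, '2004-07-01');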
[ { "msg_contents": "Hi Folks,\n\nThis is my _4th_ time trying to post this, me and the mailing list software\nare fighting. I think it's because of the attachments so I'll just put\nlinks to them instead. All apologies if this gets duplicated.\n\nI've been having problems maintaining the speed of the database in the\nlong run. VACUUMs of the main tables happen a few times a day after maybe\n50,000 or less rows are added and deleted (say 6 times a day).\n\nI have a whole lot (probably too much) indexing going on to try to speed\nthings up. \n\nWhatever the case, the database still slows down to a halt after a month or\nso, and I have to go in and shut everything down and do a VACUUM FULL by\nhand. One index (of many many) takes 2000 seconds to vacuum. The whole\nprocess takes a few hours.\n\nI would love suggestions on what I can do either inside my application, or\nfrom a dba point of view to keep the database maintained without having to\ninflict downtime. This is for 'Netdisco' -- an open source network\nmanagement software by the way. I'ld like to fix this for everyone who uses\nit.\n\n\nSys Info :\n\n$ uname -a\n FreeBSD xxxx.ucsc.edu 4.10-STABLE FreeBSD 4.10-STABLE #0: Mon Aug 16\n14:56:19 PDT 2004 [email protected]:/usr/src/sys/compile/xxxx i386\n\n$ pg_config --version\n PostgreSQL 7.3.2\n\n$ cat postgresql.conf\n max_connections = 32\n shared_buffers = 3900 # 30Mb - Bsd current kernel limit\n max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\n max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n max_locks_per_transaction = 64 # min 10\n wal_buffers = 8 # min 4, typically 8KB each\n\nThe log of the vacuum and the db schema could not be attached, so they are\nat : \n http://netdisco.net/db_vacuum.txt\n http://netdisco.net/pg_all.input\n\nThanks for any help!\n-m\n", "msg_date": "Tue, 19 Oct 2004 11:38:21 -0400", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum takes a really long time, vacuum full required" }, { "msg_contents": "> Whatever the case, the database still slows down to a halt after a month or\n> so, and I have to go in and shut everything down and do a VACUUM FULL by\n> hand. One index (of many many) takes 2000 seconds to vacuum. The whole\n> process takes a few hours.\n\nDo a REINDEX on that table instead, and regular vacuum more frequently.\n\n> $ pg_config --version\n> PostgreSQL 7.3.2\n\n7.4.x deals with index growth a little better 7.3 and older did.\n\n", "msg_date": "Tue, 19 Oct 2004 11:40:17 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes a really long time, vacuum full required" }, { "msg_contents": "Max Baker <[email protected]> writes:\n> I've been having problems maintaining the speed of the database in the\n> long run. VACUUMs of the main tables happen a few times a day after maybe\n> 50,000 or less rows are added and deleted (say 6 times a day).\n\n> I have a whole lot (probably too much) indexing going on to try to speed\n> things up. \n\n> Whatever the case, the database still slows down to a halt after a month or\n> so, and I have to go in and shut everything down and do a VACUUM FULL by\n> hand. One index (of many many) takes 2000 seconds to vacuum. The whole\n> process takes a few hours.\n\nThe first and foremost recommendation is to increase your FSM settings;\nyou seem to be using the defaults, which are pegged for a database size\nof not more than about 100Mb.\n\nSecond is to update to PG 7.4. 
I think you are probably suffering from\nindex bloat to some extent, and 7.4 should help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 2004 12:21:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum takes a really long time, vacuum full required " }, { "msg_contents": "Hi Rod,\n\nOn Tue, Oct 19, 2004 at 11:40:17AM -0400, Rod Taylor wrote:\n> > Whatever the case, the database still slows down to a halt after a month or\n> > so, and I have to go in and shut everything down and do a VACUUM FULL by\n> > hand. One index (of many many) takes 2000 seconds to vacuum. The whole\n> > process takes a few hours.\n> \n> Do a REINDEX on that table instead, and regular vacuum more frequently.\n\nGreat, this is exactly what I think it needs. Meanwhile, I was checking out\n\n http://www.postgresql.org/docs/7.3/static/sql-reindex.html\n\nWhich suggests I might be able to do a drop/add on each index with the\ndatabase 'live'.\n\nHowever, the DROP INDEX command was taking an awfully long time to complete\nand it hung my app in the mean time. Does anyone know if the DROP INDEX\ncauses an exclusive lock, or is it just a lengthy process?\n\n> > $ pg_config --version\n> > PostgreSQL 7.3.2\n> \n> 7.4.x deals with index growth a little better 7.3 and older did.\n\nWill do. Meanwhile I'm stuck supporting older 7.x versions, so I'm still\nlooking for a solution for them.\n\nThanks!\n-m\n", "msg_date": "Tue, 19 Oct 2004 14:12:45 -0400", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum takes a really long time, vacuum full required" }, { "msg_contents": "On Tue, Oct 19, 2004 at 11:40:17AM -0400, Rod Taylor wrote:\n> > Whatever the case, the database still slows down to a halt after a month or\n> > so, and I have to go in and shut everything down and do a VACUUM FULL by\n> > hand. One index (of many many) takes 2000 seconds to vacuum. The whole\n> > process takes a few hours.\n> \n> Do a REINDEX on that table instead, and regular vacuum more frequently.\n> \n> > $ pg_config --version\n> > PostgreSQL 7.3.2\n> \n> 7.4.x deals with index growth a little better 7.3 and older did.\n\nI did a REINDEX of the database. The results are pretty insane, the db went\nfrom 16GB to 381MB. Needless to say things are running a lot faster. \n\nI will now take Tom's well-given advice and upgrade to 7.4. 
But at least\nnow I have something to tell my users who are not able to do a DB upgrade\nfor whatever reason.\n\nThanks for all your help folks!\n-m\n\nBefore:\n# du -h pgsql \n 135K pgsql/global\n 128M pgsql/pg_xlog\n 80M pgsql/pg_clog\n 3.6M pgsql/base/1\n 3.6M pgsql/base/16975\n 1.0K pgsql/base/16976/pgsql_tmp\n 16G pgsql/base/16976\n 16G pgsql/base\n 16G pgsql\n\nAfter Reindex:\n# du /data/pgsql/\n 131K /data/pgsql/global\n 128M /data/pgsql/pg_xlog\n 81M /data/pgsql/pg_clog\n 3.6M /data/pgsql/base/1\n 3.6M /data/pgsql/base/16975\n 1.0K /data/pgsql/base/16976/pgsql_tmp\n 268M /data/pgsql/base/16976\n 275M /data/pgsql/base\n 484M /data/pgsql/\n\nAfter Vacuum:\n# du /data/pgsql/ \n 131K /data/pgsql/global\n 144M /data/pgsql/pg_xlog\n 81M /data/pgsql/pg_clog\n 3.6M /data/pgsql/base/1\n 3.6M /data/pgsql/base/16975\n 1.0K /data/pgsql/base/16976/pgsql_tmp\n 149M /data/pgsql/base/16976\n 156M /data/pgsql/base\n 381M /data/pgsql/\n\nnetdisco=> select relname, relpages from pg_class order by relpages desc;\n\nBefore:\n relname | relpages \n---------------------------------+----------\n idx_node_switch_port_active | 590714\n idx_node_switch_port | 574344\n idx_node_switch | 482202\n idx_node_mac | 106059\n idx_node_mac_active | 99842\n\nAfter:\n relname | relpages \n---------------------------------+----------\n node_ip | 13829\n node | 9560\n device_port | 2124\n node_ip_pkey | 1354\n idx_node_ip_ip | 1017\n idx_node_ip_mac_active | 846\n\n", "msg_date": "Sun, 24 Oct 2004 01:08:11 -0400", "msg_from": "Max Baker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum takes a really long time, vacuum full required" } ]
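The maintenance pattern that falls out of this thread, sketched against the node table from the listing above: REINDEX in place of the multi-hour VACUUM FULL, plus a quick way to watch index bloat between routine vacuums. REINDEX still locks the table while it runs, so it needs a window, just a much smaller one.

-- rebuild the bloated indexes (much faster here than VACUUM FULL was)
REINDEX TABLE node;

-- watch index growth between routine VACUUM ANALYZE runs
SELECT relname, relpages
  FROM pg_class
 WHERE relkind = 'i'
 ORDER BY relpages DESC
 LIMIT 10;

Tom's earlier point still applies: if max_fsm_pages is sized for a much smaller database, routine vacuums cannot keep up and the bloat returns no matter how often the indexes are rebuilt.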
[ { "msg_contents": "Hi to all! I have the following query. The execution time is very big, it\ndoesn't use the indexes and I don't understand why...\n\n\nSELECT count(o.id) FROM orders o\n\n INNER JOIN report r ON o.id=r.id_order\n\n INNER JOIN status s ON o.id_status=s.id\n\n INNER JOIN contact c ON o.id_ag=c.id\n\n INNER JOIN endkunde e ON\no.id_endkunde=e.id\n\n INNER JOIN zufriden z ON\nr.id_zufriden=z.id\n\n INNER JOIN plannung v ON\nv.id=o.id_plannung\n\n INNER JOIN mpsworker w ON\nv.id_worker=w.id\n\n INNER JOIN person p ON p.id = w.id_person\n\n WHERE o.id_status > 3\n\nThe query explain:\n\n\n\nAggregate (cost=32027.38..32027.38 rows=1 width=4)\n\n -> Hash Join (cost=23182.06..31944.82 rows=33022 width=4)\n\n Hash Cond: (\"outer\".id_person = \"inner\".id)\n\n -> Hash Join (cost=23179.42..31446.85 rows=33022 width=8)\n\n Hash Cond: (\"outer\".id_endkunde = \"inner\".id)\n\n -> Hash Join (cost=21873.54..28891.42 rows=33022 width=12)\n\n Hash Cond: (\"outer\".id_ag = \"inner\".id)\n\n -> Hash Join (cost=21710.05..28067.50 rows=33021\nwidth=16)\n\n Hash Cond: (\"outer\".id_status = \"inner\".id)\n\n -> Hash Join (cost=21708.97..27571.11 rows=33021\nwidth=20)\n\n Hash Cond: (\"outer\".id_worker = \"inner\".id)\n\n -> Hash Join (cost=21707.49..27074.31\nrows=33021 width=20)\n\n Hash Cond: (\"outer\".id_zufriden =\n\"inner\".id)\n\n -> Hash Join\n(cost=21706.34..26564.09 rows=35772 width=24)\n\n Hash Cond: (\"outer\".id_plannung\n= \"inner\".id)\n\n -> Hash Join\n(cost=20447.15..23674.04 rows=35771 width=24)\n\n Hash Cond: (\"outer\".id =\n\"inner\".id_order)\n\n -> Seq Scan on orders o\n(cost=0.00..1770.67 rows=36967 width=20)\n\n Filter: (id_status >\n3)\n\n -> Hash\n(cost=20208.32..20208.32 rows=37132 width=8)\n\n -> Seq Scan on\nreport r (cost=0.00..20208.32 rows=37132 width=8)\n\n -> Hash (cost=913.15..913.15\nrows=54015 width=8)\n\n -> Seq Scan on plannung v\n(cost=0.00..913.15 rows=54015 width=8)\n\n -> Hash (cost=1.12..1.12 rows=12\nwidth=4)\n\n -> Seq Scan on zufriden z\n(cost=0.00..1.12 rows=12 width=4)\n\n -> Hash (cost=1.39..1.39 rows=39 width=8)\n\n -> Seq Scan on mpsworker w\n(cost=0.00..1.39 rows=39 width=8)\n\n -> Hash (cost=1.06..1.06 rows=6 width=4)\n\n -> Seq Scan on status s (cost=0.00..1.06\nrows=6 width=4)\n\n -> Hash (cost=153.19..153.19 rows=4119 width=4)\n\n -> Seq Scan on contact c (cost=0.00..153.19\nrows=4119 width=4)\n\n -> Hash (cost=1077.91..1077.91 rows=38391 width=4)\n\n -> Seq Scan on endkunde e (cost=0.00..1077.91\nrows=38391 width=4)\n\n -> Hash (cost=2.51..2.51 rows=51 width=4)\n\n -> Seq Scan on person p (cost=0.00..2.51 rows=51 width=4)\n\n\n\n\n\nAs you can see, no index is used.I made everywhere indexes where the jons\nare made. 
If I use the following query the indexes are used:\n\n\n\nSELECT count(o.id) FROM orders o\n\n INNER JOIN report r ON o.id=r.id_order\n\n INNER JOIN status s ON o.id_status=s.id\n\n INNER JOIN contact c ON o.id_ag=c.id\n\n INNER JOIN endkunde e ON\no.id_endkunde=e.id\n\n INNER JOIN zufriden z ON\nr.id_zufriden=z.id\n\n INNER JOIN plannung v ON\nv.id=o.id_plannung\n\n INNER JOIN mpsworker w ON\nv.id_worker=w.id\n\n INNER JOIN person p ON p.id = w.id_person\n\n WHERE o.id_status =4\n\n\n\nAggregate (cost=985.55..985.55 rows=1 width=4)\n\n -> Hash Join (cost=5.28..985.42 rows=50 width=4)\n\n Hash Cond: (\"outer\".id_person = \"inner\".id)\n\n -> Hash Join (cost=2.64..982.03 rows=50 width=8)\n\n Hash Cond: (\"outer\".id_worker = \"inner\".id)\n\n -> Nested Loop (cost=1.15..979.79 rows=50 width=8)\n\n -> Nested Loop (cost=1.15..769.64 rows=49 width=8)\n\n -> Nested Loop (cost=1.15..535.57 rows=48\nwidth=12)\n\n -> Seq Scan on status s (cost=0.00..1.07\nrows=1 width=4)\n\n Filter: (4 = id)\n\n -> Nested Loop (cost=1.15..534.01 rows=48\nwidth=16)\n\n -> Hash Join (cost=1.15..366.37\nrows=47 width=20)\n\n Hash Cond: (\"outer\".id_zufriden\n= \"inner\".id)\n\n -> Nested Loop\n(cost=0.00..364.48 rows=51 width=24)\n\n -> Index Scan using\norders_id_status_idx on orders o (cost=0.00..69.55 rows=52 width=20)\n\n Index Cond:\n(id_status = 4)\n\n -> Index Scan using\nreport_id_order_idx on report r (cost=0.00..5.66 rows=1 width=8)\n\n Index Cond:\n(\"outer\".id = r.id_order)\n\n -> Hash (cost=1.12..1.12\nrows=12 width=4)\n\n -> Seq Scan on zufriden z\n(cost=0.00..1.12 rows=12 width=4)\n\n -> Index Scan using endkunde_pkey on\nendkunde e (cost=0.00..3.55 rows=1 width=4)\n\n Index Cond: (\"outer\".id_endkunde\n= e.id)\n\n -> Index Scan using contact_pkey on contact c\n(cost=0.00..4.86 rows=1 width=4)\n\n Index Cond: (\"outer\".id_ag = c.id)\n\n -> Index Scan using plannung_pkey on plannung v\n(cost=0.00..4.28 rows=1 width=8)\n\n Index Cond: (v.id = \"outer\".id_plannung)\n\n -> Hash (cost=1.39..1.39 rows=39 width=8)\n\n -> Seq Scan on mpsworker w (cost=0.00..1.39 rows=39\nwidth=8)\n\n -> Hash (cost=2.51..2.51 rows=51 width=4)\n\n -> Seq Scan on person p (cost=0.00..2.51 rows=51 width=4)\n\n\n\n\n\nBest regards,\n\nAndy.\n\n\n\n", "msg_date": "Tue, 19 Oct 2004 19:26:49 +0300", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index not used in query. Why?" }, { "msg_contents": "\"Andrei Bintintan\" <[email protected]> writes:\n> Hi to all! I have the following query. The execution time is very big, it\n> doesn't use the indexes and I don't understand why...\n\nIndexes are not necessarily the best way to do a large join.\n\n> If I use the following query the indexes are used:\n\nThe key reason this wins seems to be that the id_status = 4 condition\nis far more selective than id_status > 3 (the estimates are 52 and 36967\nrows respectively ... is that accurate?) which means that the second\nquery is inherently about 1/700th as much work. This, and not the use\nof indexes, is the fundamental reason why it's faster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Oct 2004 12:52:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used in query. Why? 
" }, { "msg_contents": "Is there a solution to make it faster?\nAt the end I need only in the query the id_status =4 and 6, but if I write\nin the sql query (where condition) where id_status in (4,6), the explain\nsays the same(the slow version).\n\nFor example:\nSELECT count(o.id) FROM orders o\n INNER JOIN report r ON o.id=r.id_order\n INNER JOIN status s ON o.id_status=s.id\n INNER JOIN contact c ON o.id_ag=c.id\n INNER JOIN endkunde e ON\no.id_endkunde=e.id\n INNER JOIN zufriden z ON\nr.id_zufriden=z.id\n INNER JOIN plannung v ON\nv.id=o.id_plannung\n INNER JOIN mpsworker w ON\nv.id_worker=w.id\n INNER JOIN person p ON p.id = w.id_person\n WHERE o.id_status in (4,6);\n\nThe result for this query is also without index searches.\n\nI really have to make this query a little more faster. Suggestions?\n\nRegards,\nAndy.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Andrei Bintintan\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, October 19, 2004 7:52 PM\nSubject: Re: [PERFORM] Index not used in query. Why?\n\n\n> \"Andrei Bintintan\" <[email protected]> writes:\n> > Hi to all! I have the following query. The execution time is very big,\nit\n> > doesn't use the indexes and I don't understand why...\n>\n> Indexes are not necessarily the best way to do a large join.\n>\n> > If I use the following query the indexes are used:\n>\n> The key reason this wins seems to be that the id_status = 4 condition\n> is far more selective than id_status > 3 (the estimates are 52 and 36967\n> rows respectively ... is that accurate?) which means that the second\n> query is inherently about 1/700th as much work. This, and not the use\n> of indexes, is the fundamental reason why it's faster.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Tue, 19 Oct 2004 20:49:45 +0300", "msg_from": "\"Contact AR-SD.NET\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used in query. Why? " }, { "msg_contents": "There's a chance that you could gain from quoting the '4' and '6' if \nthose orders.id_status isn't a pure int column and is indexed.\n\nSee http://www.postgresql.org/docs/7.4/static/datatype.html#DATATYPE-INT\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Oct 19, 2004, at 12:49 PM, Contact AR-SD.NET wrote:\n\n> Is there a solution to make it faster?\n> At the end I need only in the query the id_status =4 and 6, but if I \n> write\n> in the sql query (where condition) where id_status in (4,6), the \n> explain\n> says the same(the slow version).\n>\n> For example:\n> SELECT count(o.id) FROM orders o\n> INNER JOIN report r ON \n> o.id=r.id_order\n> INNER JOIN status s ON \n> o.id_status=s.id\n> INNER JOIN contact c ON o.id_ag=c.id\n> INNER JOIN endkunde e ON\n> o.id_endkunde=e.id\n> INNER JOIN zufriden z ON\n> r.id_zufriden=z.id\n> INNER JOIN plannung v ON\n> v.id=o.id_plannung\n> INNER JOIN mpsworker w ON\n> v.id_worker=w.id\n> INNER JOIN person p ON p.id = \n> w.id_person\n> WHERE o.id_status in (4,6);\n>\n> The result for this query is also without index searches.\n>\n> I really have to make this query a little more faster. 
Suggestions?\n>\n> Regards,\n> Andy.\n>\n> ----- Original Message -----\n> From: \"Tom Lane\" <[email protected]>\n> To: \"Andrei Bintintan\" <[email protected]>\n> Cc: <[email protected]>\n> Sent: Tuesday, October 19, 2004 7:52 PM\n> Subject: Re: [PERFORM] Index not used in query. Why?\n>\n>\n>> \"Andrei Bintintan\" <[email protected]> writes:\n>>> Hi to all! I have the following query. The execution time is very \n>>> big,\n> it\n>>> doesn't use the indexes and I don't understand why...\n>>\n>> Indexes are not necessarily the best way to do a large join.\n>>\n>>> If I use the following query the indexes are used:\n>>\n>> The key reason this wins seems to be that the id_status = 4 condition\n>> is far more selective than id_status > 3 (the estimates are 52 and \n>> 36967\n>> rows respectively ... is that accurate?) which means that the second\n>> query is inherently about 1/700th as much work. This, and not the use\n>> of indexes, is the fundamental reason why it's faster.\n>>\n>> regards, tom lane\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> [email protected])\n\n", "msg_date": "Wed, 20 Oct 2004 11:13:49 -0500", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used in query. Why? " } ]
[ { "msg_contents": "Hello, I've thought it would be nice to index certain aspects of my\napache log files for analysis. I've used several different techniques\nand have something usable now, but I'd like to tweak it one step\nfurther.\n\nMy first performance optimization was to change the logformat into a\nCSV format. I processed the logfiles with PHP and plsql stored\nprocedures. Unfortunately, it took more than 24 hours to process 1\ndays worth of log files.\n\nI've now switched to using C# (using mono) to create hash-tables to do\nalmost all of the pre-processing. This has brought the time down to\nabout 3 hours. Actually, if I take out one step it brought the\nprocess down to about 6 minutes, which is a tremendous improvement.\n\nThe one step that is adding 2.5+ hours to the job is not easily done\nin C#, as far as I know.\n\nOnce the mostly-normalized data has been put into a table called\nusage_raw_access I then use this query:\ninsert into usage_access select * , \nusage_normalize_session(accountid,client,atime) as sessionid \nfrom usage_raw_access;\n\nAll it does is try to \"link\" pageviews together into a session. \nhere's the function:\n create or replace function usage_normalize_session (varchar(12),\ninet, timestamptz) returns integer as '\n DECLARE\n -- $1 = Account ID, $2 = IP Address, $3 = Time\n RecordSet record;\n BEGIN\n SELECT INTO RecordSet DISTINCT sessionid FROM usage_access ua\n WHERE ua.accountid = $1\n AND ua.client = $2\n AND ua.atime <= ($3 - ''20 min''::interval)::timestamptz;\n\n if found\n then return RecordSet.sessionid;\n end if;\n\n return nextval(''usage_session_ids'');\n END;'\n language plpgsql;\n\nAnd the table usage_access looks like this:\n Table \"public.usage_access\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------\n[snip]\nclient | inet |\natime | timestamp with time zone |\naccountid | character varying(12) |\nsessionid | integer |\nIndexes: usage_acccess_req_url btree (req_url),\n usage_access_accountid btree (accountid),\n usage_access_atime btree (atime),\n usage_access_hostid btree (hostid),\n usage_access_sessionid btree (sessionid)\n usage_access_sessionlookup btree (accountid,client,atime);\n\nAs you can see, this looks for clients who have visited the same site\nwithin 20 min. If there is no match, a unique sessionid is assigned\nfrom a sequence. If there is a visit, the session id assigned to them\nis used. I'm only able to process about 25 records per second with my\nsetup. 
My window to do this job is 3-4 hours and the shorter the\nbetter.\n\nHere is an explain analyze of the query I do (note I limited it to 1000):\nEXPLAIN ANALYZE\ninsert into usage_access select * ,\nusage_normalize_session(accountid,client,atime) as sessionid from\nusage_raw_access limit 1000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan \"*SELECT*\" (cost=0.00..20.00 rows=1000 width=196)\n(actual time=51.63..47634.22 rows=1000 loops=1)\n -> Limit (cost=0.00..20.00 rows=1000 width=196) (actual\ntime=51.59..47610.23 rows=1000 loops=1)\n -> Seq Scan on usage_raw_access (cost=0.00..20.00 rows=1000\nwidth=196) (actual time=51.58..47606.14 rows=1001 loops=1)\n Total runtime: 48980.54 msec\n\nI also did an explain of the query that's performed inside the function:\n\nEXPLAIN ANALYZE\nselect sessionid from usage_access ua where ua.accountid = 'XYZ' and\nua.client = '64.68.88.45'::inet and ua.atime <= '2003-11-02\n04:50:01-05'::timestamptz;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using usage_access_sessionlookup on usage_access ua \n(cost=0.00..6.02 rows=1 width=4) (actual time=0.29..0.29 rows=0\nloops=1)\n Index Cond: ((accountid = 'XYZ'::character varying) AND (client =\n'64.68.88.45'::inet) AND (atime <= '2003-11-02 04:50:01-05'::timestamp\nwith time zone))\nTotal runtime: 0.35 msec\n(3 rows)\n\n\nWhat I'd really like to know is if someone knows a way to do any of\nthe following:\n a: Make the INSERT into ... SELECT *,usage_access_sessionlookup().. work faster\n b: Make the usage_access_sessionlookup() smarter,better,etc.\n c: Do this in C# using a hash-table or some other procedure that\nwould be quicker.\n d: Find an algorithm to create the sessionid without having to do any\ndatabase or hash-table lookups. As the dataset gets bigger, it won't\nfit in RAM and the lookup queries will become I/O bound, drastically\nslowing things down.\n\nd: is my first choice.\n\nFor some reason I just can't seem to get my mind around the data. I\nwonder if there's someway to create a unique value from client ip\naddress, the accountid and the period of time so that all visits by\nthe IP for the account in that period would match.\n\nI thought of using the date_part function to create a unique period,\nbut it would be a hack because if someone visits at 11:50 pm and\ncontinues to browse for an hour they would be counted as two sessions.\n That's not the end of the world, but some of my customers in\ndrastically different time zones would always have skewed results.\n\nI tried and tried to get C# to turn the apache date string into a\nusable time but could not. I just leave the date intact and let\npostgresql handle it when I do the copy. Therefore, though I'd like\nto do it in my C# program, I'll likely have to do the sessionid code\nin a stored procedure.\n\nI'd really love some feedback on ways to optimize this. 
Any\nsuggestions are greatly appreciated.\n\n-- \nMatthew Nuzum \t| Makers of \"Elite Content Management System\"\nwww.followers.net \t| View samples of Elite CMS in action\[email protected] \t| http://www.followers.net/portfolio/\n", "msg_date": "Tue, 19 Oct 2004 15:35:24 -0400", "msg_from": "Matt Nuzum <[email protected]>", "msg_from_op": true, "msg_subject": "Speeding up this function" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Matt Nuzum\n> Sent: Tuesday, October 19, 2004 3:35 PM\n> To: pgsql-performance\n> Subject: [PERFORM] Speeding up this function\n> \n<snip>\n> \n> All it does is try to \"link\" pageviews together into a session. \n> here's the function:\n> create or replace function usage_normalize_session \n> (varchar(12), inet, timestamptz) returns integer as ' DECLARE\n> -- $1 = Account ID, $2 = IP Address, $3 = Time\n> RecordSet record;\n> BEGIN\n> SELECT INTO RecordSet DISTINCT sessionid FROM usage_access ua\n> WHERE ua.accountid = $1\n> AND ua.client = $2\n> AND ua.atime <= ($3 - ''20 \n> min''::interval)::timestamptz;\n> \n> if found\n> then return RecordSet.sessionid;\n> end if;\n> \n> return nextval(''usage_session_ids'');\n> END;'\n> language plpgsql;\n> \n\nThis is probably a stupid question, but why are you trying to create\nsessions after the fact? Since it appears that users of your site must\nlogin, why not just assign a sessionID to them at login time, and keep\nit in the URL for the duration of the session? Then it would be easy to\ntrack where they've been.\n\n- Jeremy\n\n", "msg_date": "Tue, 19 Oct 2004 15:49:45 -0400", "msg_from": "\"Jeremy Dunn\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up this function" }, { "msg_contents": "On Tue, 19 Oct 2004 15:49:45 -0400, Jeremy Dunn <[email protected]> wrote:\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of\n> > Matt Nuzum\n> > Sent: Tuesday, October 19, 2004 3:35 PM\n> > To: pgsql-performance\n> > Subject: [PERFORM] Speeding up this function\n> >\n> <snip>\n<snip>\n> \n> This is probably a stupid question, but why are you trying to create\n> sessions after the fact? Since it appears that users of your site must\n> login, why not just assign a sessionID to them at login time, and keep\n> it in the URL for the duration of the session? Then it would be easy to\n> track where they've been.\n> \n> - Jeremy\n> \n> \n\nYou don't have to log in to visit the sites. These log files are\nactually for many domains. Right now, we do logging with a web-bug\nand it does handle the sessions, but it relies on javascript and we\nwant to track a lot more than we are now. Plus, that code is in\nJavaScript and one of our primary motiviations is to ditch MySQL\ncompletely.\n\n-- \nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t\t| http://www.followers.net/portfolio/\n", "msg_date": "Tue, 19 Oct 2004 15:55:04 -0400", "msg_from": "Matt Nuzum <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speeding up this function" }, { "msg_contents": "\n\tHow many lines do you have in your daily logfiles\n\n> As you can see, this looks for clients who have visited the same site\n> within 20 min. If there is no match, a unique sessionid is assigned\n> from a sequence. If there is a visit, the session id assigned to them\n> is used. 
I'm only able to process about 25 records per second with my\n> setup. My window to do this job is 3-4 hours and the shorter the\n> better.\n\n\tI'd say your function is flawed because if a client stays more than 20 \nminutes he'll get two sessions.\n\tI'd propose the following :\n\n\t* solution with postgres (variant #1):\n\t- insert everything into big table,\n\t- SELECT make_session(...) FROM big table GROUP BY account_id\n\n\t(you may or may not wish to use the ip address, using it will duplicate \nsessions for people using anonimyzing crowds-style proxies, not using it \nwill merge sessions from the same user from two different ip's). I'd not \nuse it.\n\tuse an index-powered GroupAggregate maybe.\n\n\tNow it's well ordered, ie. all accesses from the same account are \ngrouped, you just have to find 'gaps' of more than 20 minutes in the \natimes to merge or make sessions. This is made by the aggregate \n'make_session' which has an internal state consisting of a list of \nsessions of the form :\n\t- session :\n\t\t- session start time\n\t\t- session end time\n\n\tall the aggregate does is look if the atime of the incoming row is < \n(session end time + 20 min)\n\t\tif <, update session to mark session end time to atime\n\t\tif >, create a new session with session start time = session end time = \natime\n\t\t\tand append it to the session list\n\n\tSo you get a table of session arrays, you just have to assign them id's \nand trackback to the URLs to mark them.\n\n\tIf an aggregate can issue INSERT or UPDATE queries, it can even generate \nsession ids on the fly in a table, which simplifies its internal state.\n\n\t* solution with postgres (variant #2):\n\n\t- insert everything into raw_table,\n\t- CREATE TABLE sorted_table\n\tjust like raw_table but with a \"id SERIAL PRIMARY KEY\" added.\n\t- INSERT INTO sorted_table SELECT * FROM raw_table ORDER by account_id, \natime;\n\n\tthe aggregate was basically comparing the atime's of two adjacent lines \nto detect a gap of more than 20 minutes, so you could also do a join \nbetween rows a and b\n\twhere b.id = a.id+1\n\t\tAND (\n\t\t\tb.account_id != a.account_id\n\t\t\tOR (b.atime > a.atime+20 minutes)\n\t\t\tOR b does not exist )\n\n\tthis will give you the rows which mark a session start, then you have to \njoin again to update all the rows in that session (BETWEEN id's) with the \nsession id.\n\n\t* solution without postgres\n\n\tTake advantage of the fact that the sessions are created and then die to \nonly use RAM for the active sessions.\n\tRead the logfile sequentially, you'll need to parse the date, if you \ncan't do it use another language, change your apache date format output, \nor write a parser.\n\n\tBasically you're doing event-driven programming like in a logic \nsimulator, where the events are session expirations.\n\n\tAs you read the file,\n\t\t- keep a hash of sessions indexed on account_id,\n\t\t- and also a sorted (btree) list of sessions indexed on a the session \nexpiry time.\n\tIt's very important that this list has fast insertion even in the middle, \nwhich is why a tree structure would be better. 
Try a red-black tree.\n\n\tFor each record do:\n\t\t- look in the hashtable for account_id, find expiry date for this \nsession,\n\t\t\tif session still alive you're in that session,\n\t\t\t\tupdate session expiry date and btree index accordingly\n\t\t\t\tappend url and infos to a list in the session if you want to keep them\n\t\t\telse\n\t\t\t\texpire session and start a new one, insert into hash and btree\n\t\t\t\tstore the expired session on disk and remove it from memory, you dont \nneed it anymore !\n\n\tAnd, as you see the atime advancing, scan the btree for sessions to \nexpire.\n\tIt's ordered by expiry date, so that's fast.\n\tFor all expired sessions found,\n\t\texpire session\n\t\tstore the expired session on disk and remove it from memory, you dont \nneed it anymore !\n\n\n\tThat'd be my preferred solution. You'll need a good implementation of a \nsorted tree, you can find that in opensource.\n\n\t* solution with postgres (variant #3)\n\n\tjust like variant #2 but instead of an aggregate use a plpgsql procedure \nwhich reads the logs ordered by account_id, atime, while keeping a copy of \nthe last row, and detecting session expirations on the fly.\n\n\n\n\n\n\n\n\n\t\n\n\n", "msg_date": "Wed, 20 Oct 2004 02:08:16 +0200", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up this function" } ]
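To make variant #3 above concrete, here is a minimal plpgsql sketch. It is untested and only illustrative: the table names, the usage_session_ids sequence and the 20-minute window come from the original post, while the function name, the extra ordering on client and the abbreviated INSERT column list are assumptions added for the example.

CREATE OR REPLACE FUNCTION usage_assign_sessions() RETURNS integer AS '
DECLARE
    rec          record;
    prev_acct    varchar(12);
    prev_client  inet;
    prev_atime   timestamptz;
    cur_session  integer;
    n            integer := 0;
BEGIN
    FOR rec IN
        SELECT accountid, client, atime
          FROM usage_raw_access
         ORDER BY accountid, client, atime
    LOOP
        -- start a new session when the visitor changes or the gap exceeds 20 minutes
        IF prev_acct IS NULL
           OR rec.accountid <> prev_acct
           OR rec.client    <> prev_client
           OR rec.atime     >  prev_atime + ''20 min''::interval THEN
            cur_session := nextval(''usage_session_ids'');
        END IF;

        -- only the columns needed for the example are listed here
        INSERT INTO usage_access (accountid, client, atime, sessionid)
        VALUES (rec.accountid, rec.client, rec.atime, cur_session);

        prev_acct   := rec.accountid;
        prev_client := rec.client;
        prev_atime  := rec.atime;
        n := n + 1;
    END LOOP;
    RETURN n;
END;
' LANGUAGE plpgsql;

Because the input arrives sorted, each row costs a few comparisons against the previous row instead of the per-row index lookup into usage_access that the original function performs.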
[ { "msg_contents": "I'm trying to figure out what I need to do to get my postgres server\nmoving faster. It's just crawling right now. It's on a p4 HT with 2\ngigs of mem.\n\nI was thinking I need to increase the amount of shared buffers, but\nI've been told \"the sweet spot for shared_buffers is usually on the\norder of 10000 buffers\". I already have it set at 21,078. If you have,\nsay 100 gigs of ram, are you supposed to still only give postgres\n10,000?\n\nAlso, do I need to up the shmmax at all? I've used the formula \"250 kB\n+ 8.2 kB * shared_buffers + 14.2 kB * max_connections up to infinity\"\nat http://www.postgresql.org/docs/7.4/interactive/kernel-resources.html#SYSVIPC\nbut it's never quite high enough, so I just make sure it's above the\namount that the postgres log says it needs.\n\nWhat else can I do to speed this server up? I'm running vacuum analyze\non the heavily updated/inserted/deleted db's once an hour, and doing a\nfull vacuum once a night. Should I change the vacuum mem setting at\nall?\n\nAre there any other settings I should be concerned with? I've heard\nabout the effective_cache_size setting, but I haven't seen anything on\nwhat the size should be.\n\nAny help would be great. This server is very very slow at the moment.\n\nAlso, I'm using a 2.6.8.1 kernel with high mem enabled, so all the ram\nis recognized.\n\nThanks.\n\n-Josh\n", "msg_date": "Tue, 19 Oct 2004 17:31:59 -0500", "msg_from": "Josh Close <[email protected]>", "msg_from_op": true, "msg_subject": "how much mem to give postgres?" }, { "msg_contents": ">Josh Close\n> I'm trying to figure out what I need to do to get my postgres server\n> moving faster. It's just crawling right now. It's on a p4 HT with 2\n> gigs of mem.\n\n....and using what version of PostgreSQL are you using? 8.0beta, I hope?\n\n> I was thinking I need to increase the amount of shared buffers, but\n> I've been told \"the sweet spot for shared_buffers is usually on the\n> order of 10000 buffers\". I already have it set at 21,078. If you have,\n> say 100 gigs of ram, are you supposed to still only give postgres\n> 10,000?\n\nThats under test currently. My answer would be, \"clearly not\", others\ndiffer, for varying reasons.\n\n> Also, do I need to up the shmmax at all? I've used the formula \"250 kB\n> + 8.2 kB * shared_buffers + 14.2 kB * max_connections up to infinity\"\n> at\nhttp://www.postgresql.org/docs/7.4/interactive/kernel-resources.html#SYSVIPC\n> but it's never quite high enough, so I just make sure it's above the\n> amount that the postgres log says it needs.\n\nshmmax isn't a tuning parameter for PostgreSQL, its just a limit. If you get\nno error messages, then its high enough.\n\n> Are there any other settings I should be concerned with? I've heard\n> about the effective_cache_size setting, but I haven't seen anything on\n> what the size should be.\n\nwal_buffers if the databases are heavily updated.\n\n> Any help would be great. This server is very very slow at the moment.\n>\n\nTry *very fast disks*, especially for the logs.\n\nBest regards, Simon Riggs\n\n", "msg_date": "Wed, 20 Oct 2004 01:33:16 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how much mem to give postgres?" }, { "msg_contents": "On Wed, 20 Oct 2004 01:33:16 +0100, Simon Riggs <[email protected]> wrote:\n> ....and using what version of PostgreSQL are you using? 
8.0beta, I hope?\n\nI'm using version 7.4.5.\n\n> > I was thinking I need to increase the amount of shared buffers, but\n> > I've been told \"the sweet spot for shared_buffers is usually on the\n> > order of 10000 buffers\". I already have it set at 21,078. If you have,\n> > say 100 gigs of ram, are you supposed to still only give postgres\n> > 10,000?\n> \n> Thats under test currently. My answer would be, \"clearly not\", others\n> differ, for varying reasons.\n\nShould I stick the rule of around 15% of mem then? I haven't found any\ninformation on why you should use certain settings at all. I read\nsomewhere on the postgres site about using as much memory as possible,\nbut leave a little room for other processes. Whould that be an ok\ntheory? I'd kinda like to know why I should or shouldn't do something\nlike this.\n\n-Josh\n", "msg_date": "Tue, 19 Oct 2004 23:02:30 -0500", "msg_from": "Josh Close <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how much mem to give postgres?" }, { "msg_contents": "Josh Close <[email protected]> writes:\n> I'm trying to figure out what I need to do to get my postgres server\n> moving faster. It's just crawling right now.\n\nI suspect that fooling with shared_buffers is entirely the wrong tree\nfor you to be barking up. My suggestion is to be looking at individual\nqueries that are slow, and seeing how to speed those up. This might\ninvolve adding indexes, or tweaking the query source, or adjusting\nplanner parameters, or several other things. EXPLAIN ANALYZE is your\nfriend ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 2004 00:35:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how much mem to give postgres? " }, { "msg_contents": "JJosh,\n\n> I'm trying to figure out what I need to do to get my postgres server\n> moving faster. It's just crawling right now. It's on a p4 HT with 2\n> gigs of mem.\n\nThere have been issues with Postgres+HT, especially on Linux 2.4. Try \nturning HT off if other tuning doesn't solve things.\n\nOtherwise, see:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 19 Oct 2004 22:23:24 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how much mem to give postgres?" }, { "msg_contents": "On Wed, 20 Oct 2004 00:35:31 -0400, Tom Lane <[email protected]> wrote:\n> I suspect that fooling with shared_buffers is entirely the wrong tree\n> for you to be barking up. My suggestion is to be looking at individual\n> queries that are slow, and seeing how to speed those up. This might\n> involve adding indexes, or tweaking the query source, or adjusting\n> planner parameters, or several other things. EXPLAIN ANALYZE is your\n> friend ...\n> \n> regards, tom lane\n\nOnly problem is, a \"select count(1)\" is taking a long time. Indexes\nshouldn't matter with this since it's counting every row, right? The\ntables are fairly well indexed also, I could probably add a few more.\n\nIf shared_buffers isn't the way to go ( you said 10k is the sweetspot\n), then what about the effective_cache_size? I was suggested on the\ngeneral list about possibly setting that to 75% of ram.\n\nThanks.\n\n-Josh\n", "msg_date": "Wed, 20 Oct 2004 08:36:49 -0500", "msg_from": "Josh Close <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how much mem to give postgres?" 
}, { "msg_contents": "On Tue, 19 Oct 2004 22:23:24 -0700, Josh Berkus <[email protected]> wrote:\n> There have been issues with Postgres+HT, especially on Linux 2.4. Try\n> turning HT off if other tuning doesn't solve things.\n> \n> Otherwise, see:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nHow would I turn that off? In the kernel config? Not too familiar with\nthat. I have a 2 proc xeon with 4 gigs of mem on the way for postgres,\nso I hope HT isn't a problem. If HT is turned off, does it just not\nuse the other \"half\" of the processor? Or does the processor just work\nas one unit?\n\nAlso, I'm taking a look at that site right now :)\n\n-Josh\n", "msg_date": "Wed, 20 Oct 2004 08:39:53 -0500", "msg_from": "Josh Close <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how much mem to give postgres?" }, { "msg_contents": "\n>How would I turn that off? In the kernel config? Not too familiar with\n>that. I have a 2 proc xeon with 4 gigs of mem on the way for postgres,\n>so I hope HT isn't a problem. If HT is turned off, does it just not\n>use the other \"half\" of the processor? Or does the processor just work\n>as one unit?\n> \n>\nYou turn it off in the BIOS. There is no 'other half', the processor is \njust pretending to have two cores by shuffling registers around, which \ngives maybe a 5-10% performance gain in certain multithreaded \nsituations. <opinion>A hack to overcome marchitactural limitations due \nto the overly long pipeline in the Prescott core.</opinion>. Really of \nmost use for desktop interactivity rather than actual throughput.\n\nM\n", "msg_date": "Wed, 20 Oct 2004 15:07:00 +0100", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how much mem to give postgres?" }, { "msg_contents": "On Wed, Oct 20, 2004 at 03:07:00PM +0100, Matt Clark wrote:\n\n> You turn it off in the BIOS. There is no 'other half', the processor is \n> just pretending to have two cores by shuffling registers around, which \n> gives maybe a 5-10% performance gain in certain multithreaded \n> situations. \n\n\n> <opinion>A hack to overcome marchitactural limitations due \n> to the overly long pipeline in the Prescott core.</opinion>. Really of \n> most use for desktop interactivity rather than actual throughput.\n\n<OT>\nHyperthreading is actually an excellent architectural feature that\ncan give significant performance gains when implemented well and used\nfor an appropriate workload under a decently HT aware OS.\n\nIMO, typical RDBMS streams are not an obviously appropriate workload,\nIntel didn't implement it particularly well and I don't think there\nare any OSes that support it particularly well.\n</OT>\n\nBut don't write off using it in the future, when it's been improved\nat both the OS and the silicon levels.\n\nCheers,\n Steve\n", "msg_date": "Wed, 20 Oct 2004 10:50:39 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how much mem to give postgres?" 
}, { "msg_contents": "\n><OT>\n>Hyperthreading is actually an excellent architectural feature that\n>can give significant performance gains when implemented well and used\n>for an appropriate workload under a decently HT aware OS.\n>\n>IMO, typical RDBMS streams are not an obviously appropriate workload,\n>Intel didn't implement it particularly well and I don't think there\n>are any OSes that support it particularly well.\n></OT>\n>\n>But don't write off using it in the future, when it's been improved\n>at both the OS and the silicon levels.\n>\n> \n>\nYou are quite right of course - unfortunately the current Intel \nimplementation meets nearly none of these criteria! As Rod Taylor \npointed out off-list, IBM's SMT implementation on the Power5 is vastly \nsuperior. Though he's also just told me that Sun is beating IBM on \nprice/performance for his workload, so who knows how reliable a chap he \nis... ;-)\n\nM\n", "msg_date": "Wed, 20 Oct 2004 19:16:18 +0100", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how much mem to give postgres?" }, { "msg_contents": "On Wed, Oct 20, 2004 at 07:16:18PM +0100, Matt Clark wrote:\n> ><OT>\n> >Hyperthreading is actually an excellent architectural feature that\n> >can give significant performance gains when implemented well and used\n> >for an appropriate workload under a decently HT aware OS.\n> >\n> >IMO, typical RDBMS streams are not an obviously appropriate workload,\n> >Intel didn't implement it particularly well and I don't think there\n> >are any OSes that support it particularly well.\n> ></OT>\n> >\n> >But don't write off using it in the future, when it's been improved\n> >at both the OS and the silicon levels.\n> >\n> > \n> >\n> You are quite right of course - unfortunately the current Intel \n> implementation meets nearly none of these criteria! \n\nIndeed. And when I said \"no OSes support it particularly well\" I meant\nthe x86 SMT implementation, rather than SMT in general.\n\nAs Rod pointed out, AIX seems to have decent support and Power has a\nvery nice implementation, and the same is probably true for at least\none other OS/architecture implementation.\n\n> As Rod Taylor pointed out off-list, IBM's SMT implementation on the\n> Power5 is vastly superior. Though he's also just told me that Sun\n> is beating IBM on price/performance for his workload, so who knows\n> how reliable a chap he is... ;-)\n\n:)\n\nCheers,\n Steve\n", "msg_date": "Wed, 20 Oct 2004 11:39:22 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how much mem to give postgres?" } ]
[ { "msg_contents": "Hi, \n\nI've after some opinions about insert performance.\n\nI'm importing a file with 13,002 lines to a database that ends up with\n75,703 records across 6 tables. This is a partial file – the real data\nis 4 files with total lines 95174. I'll be loading these files each\nmorning, and then running a number of queries on them.\n\nThe select queries run fast enough, (mostly - 2 queries are slow but\nI'll look into that later), but importing is slower than I'd like it\nto be, but I'm wondering what to expect?\n\nI've done some manual benchmarking running my script 'time script.pl'\nI realise my script uses some of the time, bench marking shows that\n%50 of the time is spent in dbd:execute.\n\nTest 1, For each import, I'm dropping all indexes and pkeys/fkeys,\nthen importing, then adding keys and indexes. Then I've got successive\nruns. I figure the reindexing will get more expensive as the database\ngrows?\n\nSuccessive Imports: 44,49,50,57,55,61,72 (seconds)\n= average 1051inserts/second (which now that I've written this seems\nfairly good)\n\nTest 2, no dropping etc of indexes, just INSERTs\nImport – 61, 62, 73, 68, 78, 74 (seconds)\n= average 1091 inserts/second\n\nMachine is Linux 2.6.4, 1GB RAM, 3.something GHz XEON processor, SCSI\nhdd's (raid1). PostgreSQL 7.4.2. Lightly loaded machine, not doing\nmuch other than my script. Script and DB on same machine.\n\nSysctl –a | grep shm\nkernel.shmmni = 4096\nkernel.shmall = 134217728 (pages or bytes? Anyway…)\nkernel.shmmax = 134217728\n\npostgresql.conf\ntcpip_socket = true\nmax_connections = 32\nsuperuser_reserved_connections = 2\nshared_buffers = 8192 \nsort_mem = 4096 \nvacuum_mem = 16384 \nmax_fsm_relations = 300 \nfsync = true \nwal_buffers = 64 \ncheckpoint_segments = 10 \neffective_cache_size = 16000 \nsyslog = 1 \nsilent_mode = false \nlog_connections = true\nlog_pid = true\nlog_timestamp = true\nstats_start_collector = true\nstats_row_level = true\n\nCan I expect it to go faster than this? I'll see where I can make my\nscript itself go faster, but I don't think I'll be able to do much.\nI'll do some pre-prepare type stuff, but I don't expect significant\ngains, maybe 5-10%. I'd could happily turn off fsync for this job, but\nnot for some other databases the server is hosting.\n\nAny comments/suggestions would be appreciated.\n\nThanks :)\n\nBrock Henry\n", "msg_date": "Wed, 20 Oct 2004 11:53:37 +1000", "msg_from": "Brock Henry <[email protected]>", "msg_from_op": true, "msg_subject": "Insert performance, what should I expect?" }, { "msg_contents": "Brock Henry wrote:\n\n>Hi, \n>\n>I've after some opinions about insert performance.\n>\nHave you looked into using the copy command instead of inserts? For \nbulk loading of data it can be significantly faster.\n", "msg_date": "Tue, 19 Oct 2004 22:12:10 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance, what should I expect?" }, { "msg_contents": "> I've done some manual benchmarking running my script 'time script.pl'\n> I realise my script uses some of the time, bench marking shows that\n> %50 of the time is spent in dbd:execute.\n\nThe perl drivers don't currently use database level prepared statements\nwhich would give a small boost.\n\nBut your best bet is to switch to using COPY instead of INSERT. 
Two ways\nto do this.\n\n1) Drop DBD::Pg and switch to the Pg driver for Perl instead (non-DBI\ncompliant) which has functions similar to putline() that allow COPY to\nbe used.\n\n2) Have your perl script output a .sql file with the data prepared (COPY\nstatements) which you feed into the database via psql.\n\nYou can probably achieve a 50% increase in throughput.\n\n", "msg_date": "Tue, 19 Oct 2004 22:12:28 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance, what should I expect?" }, { "msg_contents": "On Wed, 2004-10-20 at 11:53 +1000, Brock Henry wrote:\n> \n> Test 1, For each import, I'm dropping all indexes and pkeys/fkeys,\n> then importing, then adding keys and indexes. Then I've got successive\n> runs. I figure the reindexing will get more expensive as the database\n> grows?\n\nSounds like the right approach to me, if the tables are empty before the\nimport.\n\n\n> Successive Imports: 44,49,50,57,55,61,72 (seconds)\n> = average 1051inserts/second (which now that I've written this seems\n> fairly good)\n\n(A) Are you doing the whole thing inside a transaction? This will be\nsignificantly quicker. COPY would probably be quicker still, but the\nbiggest difference will be a single transaction.\n\n(B) If you are starting with empty files, are you ensuring that the dead\nrecords are vacuumed before you start? I would recommend a \"vacuum\nfull\" on the affected tables prior to the first import run (i.e. when\nthe tables are empty). This is likely to be the reason that the timing\non your successive imports increases so much.\n\n\n\n> sort_mem = 4096 \n\nYou probably want to increase this - if you have 1G of RAM then there is\nprobably some spare. But if you actually expect to use 32 connections\nthen 32 * 4M = 128M might mean a careful calculation is needed. If you\nare really only likely to have 1-2 connections running concurrently then\nincrease it to (e.g.) 32768.\n\n> max_fsm_relations = 300 \n\nIf you do a \"vacuum full verbose;\" the last line will give you some\nclues as to what to set this (and max_fsm_pages) too.\n\n\n> effective_cache_size = 16000 \n\n16000 * 8k = 128M seems low for a 1G machine - probably you could say\n64000 without fear of being wrong. What does \"free\" show as \"cached\"?\nDepending on how dedicated the machine is to the database, the effective\ncache size may be as much as 80-90% of that.\n\n\n> Can I expect it to go faster than this? I'll see where I can make my\n> script itself go faster, but I don't think I'll be able to do much.\n> I'll do some pre-prepare type stuff, but I don't expect significant\n> gains, maybe 5-10%. I'd could happily turn off fsync for this job, but\n> not for some other databases the server is hosting.\n\nYou can probably double the speed - maybe more.\n\nCheers,\n\t\t\t\t\tAndrew,\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n How many things I can do without! -- Socrates\n-------------------------------------------------------------------------", "msg_date": "Wed, 20 Oct 2004 20:50:44 +1300", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance, what should I expect?" 
}, { "msg_contents": "When grilled further on (Tue, 19 Oct 2004 22:12:28 -0400),\nRod Taylor <[email protected]> confessed:\n\n> > I've done some manual benchmarking running my script 'time script.pl'\n> > I realise my script uses some of the time, bench marking shows that\n> > %50 of the time is spent in dbd:execute.\n> > \n> 1) Drop DBD::Pg and switch to the Pg driver for Perl instead (non-DBI\n> compliant) which has functions similar to putline() that allow COPY to\n> be used.\n\nCOPY can be used with DBD::Pg, per a script I use:\n\n$dbh->do( \"COPY temp_obs_$band ( $col_list ) FROM stdin\" );\n$dbh->func( join ( \"\\t\", @data ) . \"\\n\", 'putline' );\n$dbh->func( \"\\\\.\\n\", 'putline' );\n$dbh->func( 'endcopy' );\n\nWith sets of data from 1000 to 8000 records, my COPY performance is consistent\nat ~10000 records per second.\n\nCheers,\nRob\n\n-- \n 10:39:31 up 2 days, 16:25, 2 users, load average: 2.15, 2.77, 3.06\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Wed, 20 Oct 2004 10:45:27 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance, what should I expect?" }, { "msg_contents": "On Wed, 2004-10-20 at 12:45, Robert Creager wrote:\n> When grilled further on (Tue, 19 Oct 2004 22:12:28 -0400),\n> Rod Taylor <[email protected]> confessed:\n> \n> > > I've done some manual benchmarking running my script 'time script.pl'\n> > > I realise my script uses some of the time, bench marking shows that\n> > > %50 of the time is spent in dbd:execute.\n> > > \n> > 1) Drop DBD::Pg and switch to the Pg driver for Perl instead (non-DBI\n> > compliant) which has functions similar to putline() that allow COPY to\n> > be used.\n> \n> COPY can be used with DBD::Pg, per a script I use:\n> \n> $dbh->do( \"COPY temp_obs_$band ( $col_list ) FROM stdin\" );\n> $dbh->func( join ( \"\\t\", @data ) . \"\\n\", 'putline' );\n> $dbh->func( \"\\\\.\\n\", 'putline' );\n> $dbh->func( 'endcopy' );\n\nThanks for that. All of the conversations I've seen on the subject\nstated that DBD::Pg only supported standard DB features -- copy not\namongst them.\n\n> With sets of data from 1000 to 8000 records, my COPY performance is consistent\n> at ~10000 records per second.\n\nWell done.\n\n\n", "msg_date": "Wed, 20 Oct 2004 13:20:19 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance, what should I expect?" }, { "msg_contents": "Brock Henry wrote:\n > Any comments/suggestions would be appreciated.\n\nTune also the disk I/O elevator.\n\nlook at this: http://www.varlena.com/varlena/GeneralBits/49.php\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Sat, 23 Oct 2004 12:31:32 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance, what should I expect?" }, { "msg_contents": "On Sat, Oct 23, 2004 at 12:31:32PM +0200, Gaetano Mendola wrote:\n>> Any comments/suggestions would be appreciated.\n> Tune also the disk I/O elevator.\n> \n> look at this: http://www.varlena.com/varlena/GeneralBits/49.php\n\nMm, interesting. I've heard somewhere that the best for database-like loads\non Linux is to disable the anticipatory I/O scheduler\n(http://kerneltrap.org/node/view/567), which should probably\ninfluence the numbers for elvtune also -- anybody know whether this is true\nor not for PostgreSQL?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 23 Oct 2004 13:15:30 +0200", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance, what should I expect?" } ]
[ { "msg_contents": "It doesn't seem to work. I want a time summary at the end. I am inserting\ninsert queries from a file with the \\i option.\n\nThis is the outcome:\n\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.672 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.730 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.698 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.805 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.670 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.831 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.815 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.793 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.660 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.667 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.754 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.668 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.688 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.671 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.787 ms\n[7259] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27');\n[7259] LOG: duration: 1.722 ms\n[7309] LOG: statement: DELETE FROM weather;\n[7309] LOG: duration: 11.314 ms\n[7330] LOG: statement: INSERT INTO weather VALUES ('San Francisco', 46,\n50, 0.25, '1994-11-27')\n\n\nTim\n\n\n", "msg_date": "Wed, 20 Oct 2004 13:50:38 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: How to time several queries?" } ]
[ { "msg_contents": "Hi List,\n\nI have a Dual-Xeon 3Ghz System with with GB RAM and an Adaptec 212ß SCSI\nRAID with 4 SCA Harddiscs. Our customer wants to have the Machine tuned\nfor best Database performance. Which OS should we used? We are tending\nbetween Linux 2.6 or FreeBSD. The Database Size is 5GB and ascending.\nMost SQL-Queries are Selects, the Tablesizes are beetween 300k and up to\n10 MB. I've read the Hardware Performance Guide and the result was to\ntake FreeBSD in the Decision too :)\n\nAnd what is on this Context Switiching Bug i have read in the Archive? \n\nHope you can help me\n\nRegards\n\nTom\n", "msg_date": "Wed, 20 Oct 2004 15:10:15 +0200", "msg_from": "Tom Fischer <[email protected]>", "msg_from_op": true, "msg_subject": "OS desicion" }, { "msg_contents": "You are asking the wrong question. The best OS is the OS you (and/or \nthe customer) knows and can administer competently. The real \nperformance differences between unices are so small as to be ignorable \nin this context. The context switching bug is not OS-dependent, but \nvarys in severity across machine architectures (I understand it to be \nmostly P4/Athlon related, but don't take my word for it).\n\nM\n\nTom Fischer wrote:\n\n>Hi List,\n>\n>I have a Dual-Xeon 3Ghz System with with GB RAM and an Adaptec 212� SCSI\n>RAID with 4 SCA Harddiscs. Our customer wants to have the Machine tuned\n>for best Database performance. Which OS should we used? We are tending\n>between Linux 2.6 or FreeBSD. The Database Size is 5GB and ascending.\n>Most SQL-Queries are Selects, the Tablesizes are beetween 300k and up to\n>10 MB. I've read the Hardware Performance Guide and the result was to\n>take FreeBSD in the Decision too :)\n>\n>And what is on this Context Switiching Bug i have read in the Archive? \n>\n>Hope you can help me\n>\n>Regards\n>\n>Tom\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n>\n", "msg_date": "Wed, 20 Oct 2004 14:23:21 +0100", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OS desicion" }, { "msg_contents": "Tom,\n\n> You are asking the wrong question. The best OS is the OS you (and/or\n> the customer) knows and can administer competently. \n\nI'll have to 2nd this.\n\n> The real \n> performance differences between unices are so small as to be ignorable\n> in this context. \n\nWell, at least the difference between Linux and BSD. There are substantial \ntradeoffs should you chose to use Solaris or UnixWare.\n\n> The context switching bug is not OS-dependent, but \n> varys in severity across machine architectures (I understand it to be\n> mostly P4/Athlon related, but don't take my word for it).\n\nThe bug is at its apparent worst on multi-processor HT Xeons and weak \nnorthbridges running Linux 2.4. However, it has been demonstrated (with \nlesser impact) on Solaris/Sparc, PentiumIII, and Athalon. Primarily it \nseems to affect data warehousing applications. Your choice of OS is not \naffected by this bug.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 20 Oct 2004 09:38:51 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OS desicion" }, { "msg_contents": ">>The real \n>>performance differences between unices are so small as to be ignorable\n>>in this context. 
\n>> \n>>\n> <>\n> Well, at least the difference between Linux and BSD. There are \n> substantial\n> tradeoffs should you chose to use Solaris or UnixWare.\n\nYes, quite right, I should have said 'popular x86-based unices'. \n\n\n\n\n\n\n\n\n\n\nThe real \nperformance differences between unices are so small as to be ignorable\nin this context. \n \n\n <>\nWell, at least the difference between Linux and BSD. There are\nsubstantial \ntradeoffs should you chose to use Solaris or UnixWare.\n >\nYes, quite right, I should have said 'popular x86-based unices'.", "msg_date": "Wed, 20 Oct 2004 18:12:38 +0100", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OS desicion" }, { "msg_contents": "On Wed, Oct 20, 2004 at 09:38:51AM -0700, Josh Berkus wrote:\n> Tom,\n> \n> > You are asking the wrong question. The best OS is the OS you (and/or\n> > the customer) knows and can administer competently. \n> \n> I'll have to 2nd this.\n\nI'll 3rd but add one tidbit: FreeBSD will schedule disk I/O based on\nprocess priority, while linux won't. This can be very handy for things\nlike vacuum.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 21 Oct 2004 17:14:57 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OS desicion" } ]
[ { "msg_contents": "\nHello All,\n\nI have an iostat question in that one of the raid arrays seems to act \ndifferently than the other 3. Is this reasonable behavior for the \ndatabase or should I suspect a hardware or configuration problem? \n\nBut first some background: \nPostgresql 7.4.2 \nLinux 2.4.20, 2GB RAM, 1-Xeon 2.4ghz with HT turned off\n3Ware SATA RAID controller with 8 identical drives configured as 4 \n RAID-1 spindles\n64MB RAM disk\n\npostgresql.conf differences to postgresql.conf.sample:\ntcpip_socket = true\nmax_connections = 128\nshared_buffers = 2048\nvacuum_mem = 16384\nmax_fsm_pages = 50000\nwal_buffers = 128\ncheckpoint_segments = 64\neffective_cache_size = 196000\nrandom_page_cost = 1\ndefault_statistics_target = 100\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\n\nThe database is spread over 5 spindles:\n/ram0 holds the busiest insert/update/delete table and assoc. indexes for\n temporary session data\n/sda5 holds the OS and most of the tables and indexes\n/sdb2 holds the WAL\n/sdc1 holds the 2nd busiest i/u/d table (70% of the writes)\n/sdd1 holds the single index for that busy table on/sdc1\n\nLately we have 45 connections open from a python/psycopg connection pool.\n99% of the reads are cached.\nNo swapping.\n\nAnd finally iostat reports:\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\n/dev/sda5 0.01 3.32 0.01 0.68 0.16 32.96 0.08 16.48 \n48.61 0.09 12.16 2.01 0.14\n/dev/sdb2 0.00 6.38 0.00 3.54 0.01 79.36 0.00 39.68 \n22.39 0.12 3.52 1.02 0.36\n/dev/sdc1 0.03 0.13 0.00 0.08 0.27 1.69 0.13 0.84 \n24.06 0.13 163.28 13.75 0.11\n/dev/sdd1 0.01 8.67 0.00 0.77 0.06 82.35 0.03 41.18 \n107.54 0.09 10.51 2.76 0.21\n\nThe /sdc1's await seems awfully long compared to the rest to the stats.\n \nJelle\n\n\n-- \n\nhttp://www.sv650.org/audiovisual/loading_a_bike.mpeg\nOsama-in-October office pool.\n\n", "msg_date": "Wed, 20 Oct 2004 17:28:08 -0700 (PDT)", "msg_from": "jelle <[email protected]>", "msg_from_op": true, "msg_subject": "iostat question" } ]
[ { "msg_contents": "Hi All,\n\nI have a table in my postgres:\nTable: doc\n Column | Type | Modifiers \n ---------------+-----------------------------+-----------\n doc_id | bigint | not null\n comp_grp_id | bigint | not null\n doc_type | character varying(10)| not null\n doc_urn | character varying(20)| not null\n\nI want to create an index on doc_urn column with using substr function like this:\nCREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));\n\nbut there is an error:\n\nERROR: parser: parse error at or near \"10\" at character 68\n\nwhat's wrong for this SQL? As I have found some reference on the internet, I can't find anything wrong in this SQL.\n\nThanks\nRay\n\n\n\n\n\n\nHi All,\n \nI have a table in my postgres:\nTable: doc\n     \nColumn     \n|            \nType             | \nModifiers      \n---------------+-----------------------------+----------- doc_id          | \nbigint                      \n| not null comp_grp_id | \nbigint                      \n| not null doc_type      | character \nvarying(10)| not null doc_urn        \n| character varying(20)| not null\nI want to create an index on doc_urn column with \nusing substr function like this:\nCREATE INDEX idx_doc_substr_doc_urn ON doc USING \nbtree (SUBSTR(doc_urn,10));\n \nbut there is an error:\nERROR:  parser: parse error at or near \"10\" at character 68\n \nwhat's wrong for this SQL? As I have found some reference on the internet, \nI can't find anything wrong in this SQL.\n \nThanks\nRay", "msg_date": "Thu, 21 Oct 2004 10:25:17 +0800", "msg_from": "\"Ray\" <[email protected]>", "msg_from_op": true, "msg_subject": "create index with substr function" }, { "msg_contents": "\nOn Thu, 21 Oct 2004, Ray wrote:\n\n> Hi All,\n>\n> I have a table in my postgres:\n> Table: doc\n> Column | Type | Modifiers\n> ---------------+-----------------------------+-----------\n> doc_id | bigint | not null\n> comp_grp_id | bigint | not null\n> doc_type | character varying(10)| not null\n> doc_urn | character varying(20)| not null\n>\n> I want to create an index on doc_urn column with using substr function like this:\n> CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));\n>\n> but there is an error:\n>\n> ERROR: parser: parse error at or near \"10\" at character 68\n>\n> what's wrong for this SQL? As I have found some reference on the\n> internet, I can't find anything wrong in this SQL.\n\nWhat version are you using? If you're using anything previous to 7.4 then\nthe above definately won't work and the only work around I know of is to\nmake another function which takes only the column argument and calls\nsubstr with the 10 constant.\n\n", "msg_date": "Wed, 20 Oct 2004 19:57:54 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create index with substr function" }, { "msg_contents": "\"Ray\" <[email protected]> writes:\n> CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));\n> ERROR: parser: parse error at or near \"10\" at character 68\n\nThis will work in 7.4, but not older releases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Oct 2004 22:59:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create index with substr function " }, { "msg_contents": "Thank you all kindly response..... 
: )\n\nI am currently using postgres 7.3, so any example or solution for version\nafter 7.4 if i want to create an index with substr function???\n\nThanks,\nRay\n\n\n----- Original Message ----- \nFrom: \"Stephan Szabo\" <[email protected]>\nTo: \"Ray\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, October 21, 2004 10:57 AM\nSubject: Re: [PERFORM] create index with substr function\n\n\n>\n> On Thu, 21 Oct 2004, Ray wrote:\n>\n> > Hi All,\n> >\n> > I have a table in my postgres:\n> > Table: doc\n> > Column | Type | Modifiers\n> > ---------------+-----------------------------+-----------\n> > doc_id | bigint | not null\n> > comp_grp_id | bigint | not null\n> > doc_type | character varying(10)| not null\n> > doc_urn | character varying(20)| not null\n> >\n> > I want to create an index on doc_urn column with using substr function\nlike this:\n> > CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree\n(SUBSTR(doc_urn,10));\n> >\n> > but there is an error:\n> >\n> > ERROR: parser: parse error at or near \"10\" at character 68\n> >\n> > what's wrong for this SQL? As I have found some reference on the\n> > internet, I can't find anything wrong in this SQL.\n>\n> What version are you using? If you're using anything previous to 7.4 then\n> the above definately won't work and the only work around I know of is to\n> make another function which takes only the column argument and calls\n> substr with the 10 constant.\n>\n\n", "msg_date": "Thu, 21 Oct 2004 11:11:05 +0800", "msg_from": "\"Ray\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: create index with substr function" }, { "msg_contents": "while you weren't looking, Ray wrote:\n\n> CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));\n\nCREATE INDEX idx_doc_substr_doc_urn ON doc USING btree ((SUBSTR(doc_urn,10)));\n\nYou need an additional set of parens around the SUBSTR() call.\n\n/rls\n\n-- \n:wq\n", "msg_date": "Wed, 20 Oct 2004 22:34:55 -0500", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create index with substr function" }, { "msg_contents": "sorry it doesn't works, as my postgres is 7.3 not 7.4. any other alternative\nsolution for version after 7.4??\n\nThank\nRay : )\n\n----- Original Message ----- \nFrom: \"Rosser Schwarz\" <[email protected]>\nTo: \"Ray\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, October 21, 2004 11:34 AM\nSubject: Re: [PERFORM] create index with substr function\n\n\n> while you weren't looking, Ray wrote:\n>\n> > CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree\n(SUBSTR(doc_urn,10));\n>\n> CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree\n((SUBSTR(doc_urn,10)));\n>\n> You need an additional set of parens around the SUBSTR() call.\n>\n> /rls\n>\n> -- \n> :wq\n>\n\n", "msg_date": "Thu, 21 Oct 2004 11:37:26 +0800", "msg_from": "\"Ray\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: create index with substr function" }, { "msg_contents": "Tom Lane wrote:\n\n>\"Ray\" <[email protected]> writes:\n> \n>\n>>CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));\n>>ERROR: parser: parse error at or near \"10\" at character 68\n>> \n>>\n>\n>This will work in 7.4, but not older releases.\n>\n> \n>\nCan't you just use a SQL function that calls the substr function? 
I have \ndone that with date functions before\nlike:\n\nCREATE OR REPLACE FUNCTION get_month(text) returns double precision AS '\n SELECT date_part('month',$1);\n' LANGUAGE 'SQL' IMMUTABLE;\n\nCREATE INDEX get_month_idx on foo(get_month(date_field));\n\nOr in this case:\n\nCREATE OR REPLACE FUNCTION sub_text(text) returns text AS '\n SELECT SUBSTR($1,10) from foo;\n' LANGUAGE 'SQL' IMMUTABLE;\n\nCREATE INDEX sub_text_idx ON foo(sub_text(doc_urn));\n\nThis works on 7.3.6???\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\n\"Ray\" <[email protected]> writes:\n \n\nCREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));\nERROR: parser: parse error at or near \"10\" at character 68\n \n\n\nThis will work in 7.4, but not older releases.\n\n \n\nCan't you just use a SQL function that calls the substr function? I\nhave done that with date functions before\nlike:\nCREATE OR REPLACE FUNCTION get_month(text) returns double precision AS '\n SELECT date_part('month',$1);\n' LANGUAGE 'SQL' IMMUTABLE;\n\nCREATE INDEX get_month_idx on foo(get_month(date_field));\nOr in this case:\n\nCREATE OR REPLACE FUNCTION sub_text(text) returns text AS '\n      SELECT SUBSTR($1,10) from foo;\n' LANGUAGE 'SQL' IMMUTABLE;\n\nCREATE INDEX sub_text_idx ON foo(sub_text(doc_urn));\n\nThis works on 7.3.6???\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n \n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Wed, 20 Oct 2004 23:13:32 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create index with substr function" }, { "msg_contents": "As previously suggested by Stephan Szabo, you need to create a helper\nfunction, e.g.:\ncreate or replace function after9(text)returns text language plpgsql immutable as '\n begin\n return substr($1, 10);\n end;\n';\n\nYou may need the \"immutable\" specification is to allow the\nfunction's use in an index.\n\nThen use this function in the index creation:\n\nCREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (after9(doc_urn));\n\nI think that should do it.\n\n\n-- George\n>\nOn Thu, 21 Oct 2004 11:37:26 +0800\n\"Ray\" <[email protected]> threw this fish to the penguins:\n\n> sorry it doesn't works, as my postgres is 7.3 not 7.4. 
any other alternative\n> solution for version after 7.4??\n> \n> Thank\n> Ray : )\n> \n> ----- Original Message ----- \n> From: \"Rosser Schwarz\" <[email protected]>\n> To: \"Ray\" <[email protected]>\n> Cc: <[email protected]>\n> Sent: Thursday, October 21, 2004 11:34 AM\n> Subject: Re: [PERFORM] create index with substr function\n> \n> \n> > while you weren't looking, Ray wrote:\n> >\n> > > CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree\n> (SUBSTR(doc_urn,10));\n> >\n> > CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree\n> ((SUBSTR(doc_urn,10)));\n> >\n> > You need an additional set of parens around the SUBSTR() call.\n> >\n> > /rls\n> >\n> > -- \n> > :wq\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n\n-- \n\"Are the gods not just?\" \"Oh no, child.\nWhat would become of us if they were?\" (CSL)\n", "msg_date": "Thu, 21 Oct 2004 10:22:37 -0400", "msg_from": "george young <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create index with substr function" } ]
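One caveat to add to the workarounds above: the planner only considers a functional index when the query uses exactly the same wrapper expression the index was built on. A quick illustrative check, reusing the after9()/idx_doc_substr_doc_urn names from the previous message (the literal value is made up):

EXPLAIN SELECT doc_id, doc_urn
  FROM doc
 WHERE after9(doc_urn) = '1234567890';
-- with enough rows this should show an Index Scan using idx_doc_substr_doc_urn;
-- a query written directly as substr(doc_urn, 10) = '...' will not use this index.

So the application queries need to be written against the wrapper function rather than against substr() itself.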
[ { "msg_contents": "I suppose I'm just idly wondering really. Clearly it's against PG\nphilosophy to build an FS or direct IO management into PG, but now it's so\nrelatively easy to plug filesystems into the main open-source Oses, It\nstruck me that there might be some useful changes to, say, XFS or ext3, that\ncould be made that would help PG out.\n\nI'm thinking along the lines of an FS that's aware of PG's strategies and\nrequirements and therefore optimised to make those activities as efiicient\nas possible - possibly even being aware of PG's disk layout and treating\nfiles differently on that basis.\n\nNot being an FS guru I'm not really clear on whether this would help much\n(enough to be worth it anyway) or not - any thoughts? And if there were\nuseful gains to be had, would it need a whole new FS or could an existing\none be modified?\n\nSo there might be (as I said, I'm not an FS guru...):\n* great append performance for the WAL?\n* optimised scattered writes for checkpointing?\n* Knowledge that FSYNC is being used for preserving ordering a lot of the\ntime, rather than requiring actual writes to disk (so long as the writes\neventually happen in order...)?\n\n\nMatt\n\n\n\nMatt Clark\nYmogen Ltd\nP: 0845 130 4531\nW: https://ymogen.net/\nM: 0774 870 1584\n \n\n", "msg_date": "Thu, 21 Oct 2004 08:58:01 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": true, "msg_subject": "Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "\n\tReiser4 ?\n\nOn Thu, 21 Oct 2004 08:58:01 +0100, Matt Clark <[email protected]> wrote:\n\n> I suppose I'm just idly wondering really. Clearly it's against PG\n> philosophy to build an FS or direct IO management into PG, but now it's \n> so\n> relatively easy to plug filesystems into the main open-source Oses, It\n> struck me that there might be some useful changes to, say, XFS or ext3, \n> that\n> could be made that would help PG out.\n>\n> I'm thinking along the lines of an FS that's aware of PG's strategies and\n> requirements and therefore optimised to make those activities as \n> efiicient\n> as possible - possibly even being aware of PG's disk layout and treating\n> files differently on that basis.\n>\n> Not being an FS guru I'm not really clear on whether this would help much\n> (enough to be worth it anyway) or not - any thoughts? And if there were\n> useful gains to be had, would it need a whole new FS or could an existing\n> one be modified?\n>\n> So there might be (as I said, I'm not an FS guru...):\n> * great append performance for the WAL?\n> * optimised scattered writes for checkpointing?\n> * Knowledge that FSYNC is being used for preserving ordering a lot of the\n> time, rather than requiring actual writes to disk (so long as the writes\n> eventually happen in order...)?\n>\n>\n> Matt\n>\n>\n>\n> Matt Clark\n> Ymogen Ltd\n> P: 0845 130 4531\n> W: https://ymogen.net/\n> M: 0774 870 1584\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n", "msg_date": "Thu, 21 Oct 2004 10:33:31 +0200", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "On Thu, Oct 21, 2004 at 08:58:01AM +0100, Matt Clark wrote:\n> I suppose I'm just idly wondering really. 
Clearly it's against PG\n> philosophy to build an FS or direct IO management into PG, but now it's so\n> relatively easy to plug filesystems into the main open-source Oses, It\n> struck me that there might be some useful changes to, say, XFS or ext3, that\n> could be made that would help PG out.\n\nThis really sounds like a poor replacement for just making PostgreSQL use raw\ndevices to me. (I have no idea why that isn't done already, but presumably it\nisn't all that easy to get right. :-) )\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 21 Oct 2004 12:27:27 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "Matt Clark wrote:\n> I'm thinking along the lines of an FS that's aware of PG's strategies and\n> requirements and therefore optimised to make those activities as efiicient\n> as possible - possibly even being aware of PG's disk layout and treating\n> files differently on that basis.\n\nAs someone else noted, this doesn't belong in the filesystem (rather the \nkernel's block I/O layer/buffer cache). But I agree, an API by which we \ncan tell the kernel what kind of I/O behavior to expect would be good. \nThe kernel needs to provide good behavior for a wide range of \napplications, but the DBMS can take advantage of a lot of \ndomain-specific information. In theory, being able to pass that \ndomain-specific information on to the kernel would mean we could get \nbetter performance without needing to reimplement large chunks of \nfunctionality that really ought to be done by the kernel anyway (as \nimplementing raw I/O would require, for example). On the other hand, it \nwould probably mean adding a fair bit of OS-specific hackery, which \nwe've largely managed to avoid in the past.\n\nThe closest API to what you're describing that I'm aware of is \nposix_fadvise(). While that is technically-speaking a POSIX standard, it \nis not widely implemented (I know Linux 2.6 implements it; based on some \nquick googling, it looks like AIX does too). Using posix_fadvise() has \nbeen discussed in the past, so you might want to search the archives. We \ncould use FADV_SEQUENTIAL to request more aggressive readahead on a file \nthat we know we're about to sequentially scan. We might be able to use \nFADV_NOREUSE on the WAL. We might be able to get away with specifying \nFADV_RANDOM for indexes all of the time, or at least most of the time. \nOne question is how this would interact with concurrent access (AFAICS \nthere is no way to fetch the \"current advice\" on an fd...)\n\nAlso, I would imagine Win32 provides some means to inform the kernel \nabout your expected I/O pattern, but I haven't checked. Does anyone know \nof any other relevant APIs?\n\n-Neil\n", "msg_date": "Thu, 21 Oct 2004 22:02:00 +1000", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "Neil Conway wrote:\n> Also, I would imagine Win32 provides some means to inform the kernel \n> about your expected I/O pattern, but I haven't checked. 
Does anyone know \n> of any other relevant APIs?\n\nSee CreateFile, Parameter dwFlagsAndAttributes\n\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/base/createfile.asp\n\nThere is FILE_FLAG_NO_BUFFERING, FILE_FLAG_OPEN_NO_RECALL,\nFILE_FLAG_RANDOM_ACCESS and even FILE_FLAG_POSIX_SEMANTICS\n\nJan\n\n", "msg_date": "Thu, 21 Oct 2004 17:02:00 +0200", "msg_from": "Jan Dittmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "> As someone else noted, this doesn't belong in the filesystem (rather \n> the kernel's block I/O layer/buffer cache). But I agree, an API by \n> which we can tell the kernel what kind of I/O behavior to expect would \n> be good.\n[snip]\n> The closest API to what you're describing that I'm aware of is \n> posix_fadvise(). While that is technically-speaking a POSIX standard, \n> it is not widely implemented (I know Linux 2.6 implements it; based on \n> some quick googling, it looks like AIX does too).\n\nDon't forget about the existence/usefulness/widely implemented \nmadvise(2)/posix_madvise(2) call, which can give the OS the following \nhints: MADV_NORMAL, MADV_SEQUENTIAL, MADV_RANDOM, MADV_WILLNEED, \nMADV_DONTNEED, and MADV_FREE. :) -sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Thu, 21 Oct 2004 11:40:08 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "Note that most people are now moving away from raw devices for databases\nin most applicaitons. The relatively small performance gain isn't worth\nthe hassles.\n\nOn Thu, Oct 21, 2004 at 12:27:27PM +0200, Steinar H. Gunderson wrote:\n> On Thu, Oct 21, 2004 at 08:58:01AM +0100, Matt Clark wrote:\n> > I suppose I'm just idly wondering really. Clearly it's against PG\n> > philosophy to build an FS or direct IO management into PG, but now it's so\n> > relatively easy to plug filesystems into the main open-source Oses, It\n> > struck me that there might be some useful changes to, say, XFS or ext3, that\n> > could be made that would help PG out.\n> \n> This really sounds like a poor replacement for just making PostgreSQL use raw\n> devices to me. (I have no idea why that isn't done already, but presumably it\n> isn't all that easy to get right. :-) )\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 21 Oct 2004 17:18:30 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" } ]
[ { "msg_contents": "Hiya,\n\nLooking at that list, I got the feeling that you'd want to push that PG-awareness down into the block-io layer as well, then, so as to be able to optimise for (perhaps) conflicting goals depending on what the app does; for the IO system to be able to read the apps mind it needs to have some knowledge of what the app is / needs / wants and I get the impression that this awareness needs to go deeper than the FS only.\n\n--Tim\n\n(But you might have time to rewrite Linux/BSD as a PG-OS? just kidding!)\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Matt Clark\nSent: Thursday, October 21, 2004 9:58 AM\nTo: [email protected]\nSubject: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?\n\n\nI suppose I'm just idly wondering really. Clearly it's against PG\nphilosophy to build an FS or direct IO management into PG, but now it's so\nrelatively easy to plug filesystems into the main open-source Oses, It\nstruck me that there might be some useful changes to, say, XFS or ext3, that\ncould be made that would help PG out.\n\nI'm thinking along the lines of an FS that's aware of PG's strategies and\nrequirements and therefore optimised to make those activities as efiicient\nas possible - possibly even being aware of PG's disk layout and treating\nfiles differently on that basis.\n\nNot being an FS guru I'm not really clear on whether this would help much\n(enough to be worth it anyway) or not - any thoughts? And if there were\nuseful gains to be had, would it need a whole new FS or could an existing\none be modified?\n\nSo there might be (as I said, I'm not an FS guru...):\n* great append performance for the WAL?\n* optimised scattered writes for checkpointing?\n* Knowledge that FSYNC is being used for preserving ordering a lot of the\ntime, rather than requiring actual writes to disk (so long as the writes\neventually happen in order...)?\n\n\nMatt\n\n\n\nMatt Clark\nYmogen Ltd\nP: 0845 130 4531\nW: https://ymogen.net/\nM: 0774 870 1584\n \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Thu, 21 Oct 2004 10:27:22 +0200", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "> Looking at that list, I got the feeling that you'd want to \n> push that PG-awareness down into the block-io layer as well, \n> then, so as to be able to optimise for (perhaps) conflicting \n> goals depending on what the app does; for the IO system to be \n> able to read the apps mind it needs to have some knowledge of \n> what the app is / needs / wants and I get the impression that \n> this awareness needs to go deeper than the FS only.\n\nThat's a fair point, it would need be a kernel patch really, although not\nnecessarily a very big one, more a case of looking at FDs and if they're\nflagged in some way then get the PGfs to do the job instead of/as well as\nthe normal code path.\n\n", "msg_date": "Thu, 21 Oct 2004 09:38:40 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" } ]
[ { "msg_contents": "Hi,\n\nI guess the difference is in 'severe hacking inside PG' vs. 'some unknown amount of hacking that doesn't touch PG code'.\n\nHacking PG internally to handle raw devices will meet with strong resistance from large portions of the development team. I don't expect (m)any core devs of PG will be excited about rewriting the entire I/O architecture of PG and duplicating large amounts of OS type of code inside the application, just to try to attain an unknown performance benefit.\n\nPG doesn't use one big file, as some databases do, but many small files. Now PG would need to be able to do file-management, if you put the PG database on a raw disk partition! That's icky stuff, and you'll find much resistance against putting such code inside PG.\nSo why not try to have the external FS know a bit about PG and it's directory-layout, and it's IO requirements? Then such type of code can at least be maintained outside the application, and will not be as much of a burden to the rest of the application.\n\n(I'm not sure if it's a good idea to create a PG-specific FS in your OS of choice, but it's certainly gonna be easier than getting FS code inside of PG)\n\ncheers,\n\n--Tim\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Steinar H. Gunderson\nSent: Thursday, October 21, 2004 12:27 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?\n\n\nOn Thu, Oct 21, 2004 at 08:58:01AM +0100, Matt Clark wrote:\n> I suppose I'm just idly wondering really. Clearly it's against PG\n> philosophy to build an FS or direct IO management into PG, but now it's so\n> relatively easy to plug filesystems into the main open-source Oses, It\n> struck me that there might be some useful changes to, say, XFS or ext3, that\n> could be made that would help PG out.\n\nThis really sounds like a poor replacement for just making PostgreSQL use raw\ndevices to me. (I have no idea why that isn't done already, but presumably it\nisn't all that easy to get right. :-) )\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Thu, 21 Oct 2004 12:44:10 +0200", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "The intuitive thing would be to put pg into a file system. \n\n/Aaron\n\nOn Thu, 21 Oct 2004 12:44:10 +0200, Leeuw van der, Tim\n<[email protected]> wrote:\n> Hi,\n> \n> I guess the difference is in 'severe hacking inside PG' vs. 'some unknown amount of hacking that doesn't touch PG code'.\n> \n> Hacking PG internally to handle raw devices will meet with strong resistance from large portions of the development team. I don't expect (m)any core devs of PG will be excited about rewriting the entire I/O architecture of PG and duplicating large amounts of OS type of code inside the application, just to try to attain an unknown performance benefit.\n> \n> PG doesn't use one big file, as some databases do, but many small files. Now PG would need to be able to do file-management, if you put the PG database on a raw disk partition! That's icky stuff, and you'll find much resistance against putting such code inside PG.\n> So why not try to have the external FS know a bit about PG and it's directory-layout, and it's IO requirements? 
Then such type of code can at least be maintained outside the application, and will not be as much of a burden to the rest of the application.\n> \n> (I'm not sure if it's a good idea to create a PG-specific FS in your OS of choice, but it's certainly gonna be easier than getting FS code inside of PG)\n> \n> cheers,\n> \n> --Tim\n> \n> \n> \n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Steinar H. Gunderson\n> Sent: Thursday, October 21, 2004 12:27 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?\n> \n> On Thu, Oct 21, 2004 at 08:58:01AM +0100, Matt Clark wrote:\n> > I suppose I'm just idly wondering really. Clearly it's against PG\n> > philosophy to build an FS or direct IO management into PG, but now it's so\n> > relatively easy to plug filesystems into the main open-source Oses, It\n> > struck me that there might be some useful changes to, say, XFS or ext3, that\n> > could be made that would help PG out.\n> \n> This really sounds like a poor replacement for just making PostgreSQL use raw\n> devices to me. (I have no idea why that isn't done already, but presumably it\n> isn't all that easy to get right. :-) )\n> \n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n\n-- \n\nRegards,\n/Aaron\n", "msg_date": "Thu, 21 Oct 2004 07:47:14 -0400", "msg_from": "Aaron Werman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "On Thu, Oct 21, 2004 at 12:44:10PM +0200, Leeuw van der, Tim wrote:\n> Hacking PG internally to handle raw devices will meet with strong\n> resistance from large portions of the development team. I don't expect\n> (m)any core devs of PG will be excited about rewriting the entire I/O\n> architecture of PG and duplicating large amounts of OS type of code inside\n> the application, just to try to attain an unknown performance benefit.\n\nWell, at least I see people claiming >30% difference between different file\nsystems, but no, I'm not shouting \"bah, you'd better do this or I'll warez\nOracle\" :-) I have no idea how much you can improve over the \"best\"\nfilesystems out there, but having two layers of journalling (both WAL _and_\nFS journalling) on top of each other don't make all that much sense to me.\n:-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 21 Oct 2004 15:45:06 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> ... 
I have no idea how much you can improve over the \"best\"\n> filesystems out there, but having two layers of journalling (both WAL _and_\n> FS journalling) on top of each other don't make all that much sense to me.\n\nWhich is why setting the FS to journal metadata but not file contents is\noften suggested as best practice for a PG-only filesystem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 2004 10:20:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'? " }, { "msg_contents": "On Thu, Oct 21, 2004 at 10:20:55AM -0400, Tom Lane wrote:\n>> ... I have no idea how much you can improve over the \"best\"\n>> filesystems out there, but having two layers of journalling (both WAL _and_\n>> FS journalling) on top of each other don't make all that much sense to me.\n> Which is why setting the FS to journal metadata but not file contents is\n> often suggested as best practice for a PG-only filesystem.\n\nMm, but you still journal the metadata. Oh well, noatime etc.. :-)\n\nBy the way, I'm probably hitting a FAQ here, but would O_DIRECT help\nPostgreSQL any, given large enough shared_buffers?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 21 Oct 2004 17:49:23 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "Hi, Leeuw,\n\nOn Thu, 21 Oct 2004 12:44:10 +0200\n\"Leeuw van der, Tim\" <[email protected]> wrote:\n\n> (I'm not sure if it's a good idea to create a PG-specific FS in your\n> OS of choice, but it's certainly gonna be easier than getting FS code\n> inside of PG)\n\nI don't think PG really needs a specific FS. I rather think that PG\ncould profit from some functionality that's missing in traditional UN*X\nfile systems.\n\nposix_fadvise(2) may be a candidate. Read/Write bareers another pone, as\nwell asn syncing a bunch of data in different files with a single call\n(so that the OS can determine the best write order). I can also imagine\nsome interaction with the FS journalling system (to avoid duplicate\nefforts).\n\nWe should create a list of those needs, and then communicate those to\nthe kernel/fs developers. Then we (as well as other apps) can make use\nof those features where they are available, and use the old way\neverywhere else.\n\nMaybe Reiser4 is a step into the right way, and maybe even a postgres\nplugin for Reiser4 will be worth the effort. Maybe XFS/JFS etc. already\nhave such capabilities. Maybe that's completely wrong.\n\ncheers,\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Thu, 4 Nov 2004 12:00:47 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "\n> posix_fadvise(2) may be a candidate. Read/Write bareers another pone, as\n> well asn syncing a bunch of data in different files with a single call\n> (so that the OS can determine the best write order). 
I can also imagine\n> some interaction with the FS journalling system (to avoid duplicate\n> efforts).\n\n\tThere is also the fact that syncing after every transaction could be \nchanged to syncing every N transactions (N fixed or depending on the data \nsize written by the transactions) which would be more efficient than the \ncurrent behaviour with a sleep. HOWEVER suppressing the sleep() would lead \nto postgres returning from the COMMIT while it is in fact not synced, \nwhich somehow rings a huge alarm bell somewhere.\n\n\tWhat about read order ?\n\tThis could be very useful for SELECT queries involving indexes, which in \ncase of a non-clustered table lead to random seeks in the table.\n\tThere's fadvise to tell the OS to readahead on a seq scan (I think the OS \ndetects it anyway), but if there was a system call telling the OS \"in the \nnext seconds I'm going to read these chunks of data from this file (gives \na list of offsets and lengths), could you put them in your cache in the \nmost efficient order without seeking too much, so that when I read() them \nin random order, they will be in the cache already ?\". This would be an \nasynchronous call which would return immediately, just queuing up the data \nsomewhere in the kernel, and maybe sending a signal to the application \nwhen a certain percentage of the data has been cached.\n\tPG could take advantage of this with not much code changes, simply by \nputting a fifo between the index scan and the tuple fetches, to wait the \ntime necessary for the OS to have enough reads to cluster them efficiently.\n\tOn very large tables this would maybe not gain much, but on tables which \nare explicitely clustered, or naturally clustered like accessing an index \non a serial primary key in order, it could be interesting.\n\n\tJust a thought.\n", "msg_date": "Thu, 04 Nov 2004 13:29:19 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "[email protected] (Pierre-Frᅵdᅵric Caillaud) writes:\n>> posix_fadvise(2) may be a candidate. Read/Write bareers another pone, as\n>> well asn syncing a bunch of data in different files with a single call\n>> (so that the OS can determine the best write order). I can also imagine\n>> some interaction with the FS journalling system (to avoid duplicate\n>> efforts).\n>\n> \tThere is also the fact that syncing after every transaction\n> could be changed to syncing every N transactions (N fixed or\n> depending on the data size written by the transactions) which would\n> be more efficient than the current behaviour with a sleep. 
HOWEVER\n> suppressing the sleep() would lead to postgres returning from the\n> COMMIT while it is in fact not synced, which somehow rings a huge\n> alarm bell somewhere.\n>\n> \tWhat about read order ?\n> \tThis could be very useful for SELECT queries involving\n> indexes, which in case of a non-clustered table lead to random seeks\n> in the table.\n\nAnother thing that would be valuable would be to have some way to say:\n\n \"Read this data; don't bother throwing other data out of the cache\n to stuff this in.\"\n\nSomething like a \"read_uncached()\" call...\n\nThat would mean that a seq scan or a vacuum wouldn't force useful data\nout of cache.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/linuxxian.html\nA VAX is virtually a computer, but not quite.\n", "msg_date": "Thu, 04 Nov 2004 10:47:31 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "On Thu, 2004-11-04 at 15:47, Chris Browne wrote:\n\n> Another thing that would be valuable would be to have some way to say:\n> \n> \"Read this data; don't bother throwing other data out of the cache\n> to stuff this in.\"\n> \n> Something like a \"read_uncached()\" call...\n> \n> That would mean that a seq scan or a vacuum wouldn't force useful data\n> out of cache.\n\nARC does almost exactly those two things in 8.0.\n\nSeq scans do get put in cache, but in a way that means they don't spoil\nthe main bulk of the cache.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Thu, 04 Nov 2004 19:03:47 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "On Thu, Nov 04, 2004 at 10:47:31AM -0500, Chris Browne wrote:\n> Another thing that would be valuable would be to have some way to say:\n> \n> \"Read this data; don't bother throwing other data out of the cache\n> to stuff this in.\"\n> \n> Something like a \"read_uncached()\" call...\n\nYou mean, like, open(filename, O_DIRECT)? :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 4 Nov 2004 20:20:04 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Thu, 2004-11-04 at 15:47, Chris Browne wrote:\n>> Something like a \"read_uncached()\" call...\n>> \n>> That would mean that a seq scan or a vacuum wouldn't force useful data\n>> out of cache.\n\n> ARC does almost exactly those two things in 8.0.\n\nBut only for Postgres' own shared buffers. The kernel cache still gets\ntrashed, because we have no way to suggest to the kernel that it not\nhang onto the data read in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Nov 2004 14:34:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'? 
" }, { "msg_contents": "On Thu, 2004-11-04 at 19:34, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Thu, 2004-11-04 at 15:47, Chris Browne wrote:\n> >> Something like a \"read_uncached()\" call...\n> >> \n> >> That would mean that a seq scan or a vacuum wouldn't force useful data\n> >> out of cache.\n> \n> > ARC does almost exactly those two things in 8.0.\n> \n> But only for Postgres' own shared buffers. The kernel cache still gets\n> trashed, because we have no way to suggest to the kernel that it not\n> hang onto the data read in.\n\nI guess a difference in viewpoints. I'm inclined to give most of the RAM\nto PostgreSQL, since as you point out, the kernel is out of our control.\nThat way, we can do what we like with it - keep it or not, as we choose.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Thu, 04 Nov 2004 20:40:45 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Thu, 2004-11-04 at 19:34, Tom Lane wrote:\n>> But only for Postgres' own shared buffers. The kernel cache still gets\n>> trashed, because we have no way to suggest to the kernel that it not\n>> hang onto the data read in.\n\n> I guess a difference in viewpoints. I'm inclined to give most of the RAM\n> to PostgreSQL, since as you point out, the kernel is out of our control.\n> That way, we can do what we like with it - keep it or not, as we choose.\n\nThat's always been a Bad Idea for three or four different reasons, of\nwhich ARC will eliminate no more than one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Nov 2004 15:45:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'? " }, { "msg_contents": "After a long battle with technology, [email protected] (Simon Riggs), an earthling, wrote:\n> On Thu, 2004-11-04 at 15:47, Chris Browne wrote:\n>\n>> Another thing that would be valuable would be to have some way to say:\n>> \n>> \"Read this data; don't bother throwing other data out of the cache\n>> to stuff this in.\"\n>> \n>> Something like a \"read_uncached()\" call...\n>> \n>> That would mean that a seq scan or a vacuum wouldn't force useful\n>> data out of cache.\n>\n> ARC does almost exactly those two things in 8.0.\n>\n> Seq scans do get put in cache, but in a way that means they don't\n> spoil the main bulk of the cache.\n\nWe're not talking about the same cache.\n\nARC does these exact things for _shared memory_ cache, and is the\nobvious inspiration.\n\nBut it does more or less nothing about the way OS file buffer cache is\nmanaged, and the handling of _that_ would be the point of modifying OS\nfilesystem semantics.\n-- \nselect 'cbbrowne' || '@' || 'linuxfinances.info';\nhttp://www3.sympatico.ca/cbbrowne/oses.html\nHave you ever considered beating yourself with a cluestick?\n", "msg_date": "Thu, 04 Nov 2004 21:05:36 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (Markus Schaber) transmitted:\n> We should create a list of those needs, and then communicate those\n> to the kernel/fs developers. 
Then we (as well as other apps) can\n> make use of those features where they are available, and use the old\n> way everywhere else.\n\nWhich kernel/fs developers did you have in mind? The ones working on\nLinux? Or FreeBSD? Or DragonflyBSD? Or Solaris? Or AIX?\n\nPlease keep in mind that many of the PostgreSQL developers are BSD\nfolk that aren't particularly interested in creating bleeding edge\nLinux capabilities.\n\nFurthermore, I'd think long and hard before jumping into such a\n_spectacularly_ bleeding edge kind of project. The reason why you\nwould want this would be if you needed to get some margin of\nperformance. I can't see wanting that without also wanting some\n_assurance_ of system reliability, at which point I also want things\nlike vendor support.\n\nIf you've ever contacted Red Hat Software, you'd know that they very\nnearly refuse to provide support for any filesystem other than ext3.\nUse anything else and they'll make noises about not being able to\nassure you of anything at all.\n\nIf you need high performance, you'd also want to use interesting sorts\nof hardware. Disk arrays, RAID controllers, that sort of thing.\nVendors of such things don't particularly want to talk to you unless\nyou're using a \"supported\" Linux distribution and a \"supported\"\nfilesystem.\n\nJumping into a customized filesystem that neither hardware nor\nsoftware vendors would remotely consider supporting just doesn't look\nlike a viable strategy to me.\n\n> Maybe Reiser4 is a step into the right way, and maybe even a\n> postgres plugin for Reiser4 will be worth the effort. Maybe XFS/JFS\n> etc. already have such capabilities. Maybe that's completely wrong.\n\nThe capabilities tend to be redundant. They tend to implement vaguely\nsimilar transactional capabilities to what databases have to\nimplement. The similarities are not close enough to eliminate either\nvariety of \"commit\" as redundant.\n-- \n\"cbbrowne\",\"@\",\"linuxfinances.info\"\nhttp://linuxfinances.info/info/linux.html\nRules of the Evil Overlord #128. \"I will not employ robots as agents\nof destruction if there is any possible way that they can be\nre-programmed or if their battery packs are externally mounted and\neasily removable.\" <http://www.eviloverlord.com/>\n", "msg_date": "Thu, 04 Nov 2004 21:29:04 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "On Fri, 2004-11-05 at 06:20, Steinar H. Gunderson wrote:\n> You mean, like, open(filename, O_DIRECT)? :-)\n\nThis disables readahead (at least on Linux), which is certainly not we\nwant: for the very case where we don't want to keep the data in cache\nfor a while (sequential scans, VACUUM), we also want aggressive\nreadahead.\n\n-Neil\n\n\n", "msg_date": "Fri, 05 Nov 2004 15:28:39 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "On Thu, 2004-11-04 at 23:29, Pierre-Fr�d�ric Caillaud wrote:\n> \tThere is also the fact that syncing after every transaction could be \n> changed to syncing every N transactions (N fixed or depending on the data \n> size written by the transactions) which would be more efficient than the \n> current behaviour with a sleep.\n\nUh, which \"sleep\" are you referring to?\n\nAlso, how would interacting with the filesystem's journal effect how\noften we need to force-write the WAL to disk? 
(ISTM we need to sync\n_something_ to disk when a transaction commits in order to maintain the\nWAL invariant.)\n\n> \tThere's fadvise to tell the OS to readahead on a seq scan (I think the OS \n> detects it anyway)\n\nNot perfectly, though; also, Linux will do a more aggressive readahead\nif you tell it to do so via posix_fadvise().\n\n> if there was a system call telling the OS \"in the \n> next seconds I'm going to read these chunks of data from this file (gives \n> a list of offsets and lengths), could you put them in your cache in the \n> most efficient order without seeking too much, so that when I read() them \n> in random order, they will be in the cache already ?\".\n\nhttp://www.opengroup.org/onlinepubs/009695399/functions/posix_fadvise.html\n\nPOSIX_FADV_WILLNEED \n Specifies that the application expects to access the specified\n data in the near future.\n\n-Neil\n\n\n", "msg_date": "Fri, 05 Nov 2004 15:33:32 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "On Fri, 2004-11-05 at 02:47, Chris Browne wrote:\n> Another thing that would be valuable would be to have some way to say:\n> \n> \"Read this data; don't bother throwing other data out of the cache\n> to stuff this in.\"\n\nThis is similar, although not exactly the same thing:\n\nhttp://www.opengroup.org/onlinepubs/009695399/functions/posix_fadvise.html\n\nPOSIX_FADV_NOREUSE \n Specifies that the application expects to access the specified\n data once and then not reuse it thereafter.\n\n-Neil\n\n\n", "msg_date": "Fri, 05 Nov 2004 15:35:08 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" }, { "msg_contents": "Hi, Christopher,\n[sorry for the delay of my answer, we were rather busy last weks]\n\nOn Thu, 04 Nov 2004 21:29:04 -0500\nChristopher Browne <[email protected]> wrote:\n\n> In an attempt to throw the authorities off his trail, [email protected] (Markus Schaber) transmitted:\n> > We should create a list of those needs, and then communicate those\n> > to the kernel/fs developers. Then we (as well as other apps) can\n> > make use of those features where they are available, and use the old\n> > way everywhere else.\n> \n> Which kernel/fs developers did you have in mind? The ones working on\n> Linux? Or FreeBSD? Or DragonflyBSD? Or Solaris? Or AIX?\n\nAll of them, and others (e. G. Windows).\n\nOnce we have a list of those needs, the advocates can talk to the OS\ndevelopers. Some OS developers will follow, others not.\n\nThen the postgres folks (and other application developers that benefit\nfrom this capabilities) can point interested users to our benchmarks and\ntell them that Foox performs 3 times as fast as BaarOs because they\nprovide better support for database needs.\n\n> Please keep in mind that many of the PostgreSQL developers are BSD\n> folk that aren't particularly interested in creating bleeding edge\n> Linux capabilities.\n\nThen this should be motivation to add those things to BSD, maybe as a\npatch or loadable module so it does not bloat mainstream. 
I personally\nwould prefer it to appear in BSD first, because in case it really pays\nof, it won't be long until it appears in Linux as well :-)\n\n> Jumping into a customized filesystem that neither hardware nor\n> software vendors would remotely consider supporting just doesn't look\n> like a viable strategy to me.\n\nI did not vote for a custom filesystem, as the OP did. I did vote for\nisolating a set of useful capabilities PostgreSQL could exploit, and\nthen try to confince the kernel developers to include this capabilities,\nso they are likely to be included in the main distributions.\n\nI don't know about the BSD market, but I know that Redhat and SuSE often\nship their patched versions of the kernels (so then they officially\nsupport the extensions), and most of this is likely to be included in\nmain stream later.\n\n> > Maybe Reiser4 is a step into the right way, and maybe even a\n> > postgres plugin for Reiser4 will be worth the effort. Maybe XFS/JFS\n> > etc. already have such capabilities. Maybe that's completely wrong.\n> \n> The capabilities tend to be redundant. They tend to implement vaguely\n> similar transactional capabilities to what databases have to\n> implement. The similarities are not close enough to eliminate either\n> variety of \"commit\" as redundant.\n\nBut a speed gain may be possible by coordinating DB and FS tansactions.\n\nThanks,\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Tue, 14 Dec 2004 19:20:06 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anything to be gained from a 'Postgres Filesystem'?" } ]
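
(An aside on the "sync every N transactions instead of sleeping" idea discussed above:
PostgreSQL of this era already exposes a rough form of group commit through the
commit_delay/commit_siblings settings, so it may be worth experimenting with those before
designing filesystem support for it. A minimal sketch follows; whether these settings are
what the "sleep" above refers to is an assumption, the values are purely illustrative, and
on some versions they may only be changeable in postgresql.conf rather than per session.)

-- Check the current write-safety settings first.
SHOW fsync;              -- leave 'on' unless losing recent commits is acceptable
SHOW wal_sync_method;    -- fsync / fdatasync / open_sync ...; varies by platform

-- Ask a committing backend to wait up to 10 ms so that other concurrent
-- commits can piggyback on the same WAL flush, but only when at least
-- 5 other transactions are active.
SET commit_delay = 10000;    -- microseconds
SET commit_siblings = 5;
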
[ { "msg_contents": "Hi all,\n\nI'm writing this because I've reached the limit of my imagination and\npatience! So here is it...\n\n2 tables:\n1 containing 27 million variable lenght, alpha-numeric records\n(strings) in 1 (one) field. (10 - 145 char lenght per record)\n1 containing 2.5 million variable lenght, alpha-numeric records\n(strings) in 1 (one) field.\n\ntable wehere created using:\nCREATE TABLE \"public\".\"BIGMA\" (\"string\" VARCHAR(255) NOT NULL) WITH OIDS; +\nCREATE INDEX \"BIGMA_INDEX\" ON \"public\".\"BIGMA\" USING btree (\"string\");\nand \nCREATE TABLE \"public\".\"DIRTY\" (\"string\" VARCHAR(128) NOT NULL) WITH OIDS; +\nCREATE INDEX \"DIRTY_INDEX\" ON \"public\".\"DIRTY\" USING btree (\"string\");\n\nWhat I am requested to do is to keep all records from 'BIGMA' that do\nnot apear in 'DIRTY'\nSo far I have tried solving this by going for:\n\n[explain] select * from BIGMA where string not in (select * from DIRTY);\n QUERY PLAN\n------------------------------------------------------------------------\n Seq Scan on bigma (cost=0.00..24582291.25 rows=500 width=145)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on dirty (cost=0.00..42904.63 rows=2503963 width=82)\n(4 rows)\n\nAND\n\n[explain] select * from bigma,dirty where bigma.email!=dirty.email;\n QUERY PLAN\n-----------------------------------------------------------------------\n Nested Loop (cost=20.00..56382092.13 rows=2491443185 width=227)\n Join Filter: ((\"inner\".email)::text <> (\"outer\".email)::text)\n -> Seq Scan on dirty (cost=0.00..42904.63 rows=2503963 width=82)\n -> Materialize (cost=20.00..30.00 rows=1000 width=145)\n -> Seq Scan on bigma (cost=0.00..20.00 rows=1000 width=145)\n(5 rows)\n\nNow the problem is that both of my previous tries seem to last\nforever! I'm not a pqsql guru so that's why I'm asking you fellas to\nguide mw right! I've tried this on mysql previosly but there seems to\nbe no way mysql can handle this large query.\n\nQUESTIONS:\nWhat can I do in order to make this work?\nWhere do I make mistakes? Is there a way I can improve the performance\nin table design, query style, server setting so that I can get this\nmonster going and producing a result?\n\nThanks all for your preciuos time and answers!\n\nVictor C.\n", "msg_date": "Thu, 21 Oct 2004 17:34:17 +0300", "msg_from": "Victor Ciurus <[email protected]>", "msg_from_op": true, "msg_subject": "Simple machine-killing query!" }, { "msg_contents": "\nOn Thu, 21 Oct 2004, Victor Ciurus wrote:\n\n> Hi all,\n>\n> I'm writing this because I've reached the limit of my imagination and\n> patience! So here is it...\n>\n> 2 tables:\n> 1 containing 27 million variable lenght, alpha-numeric records\n> (strings) in 1 (one) field. 
(10 - 145 char lenght per record)\n> 1 containing 2.5 million variable lenght, alpha-numeric records\n> (strings) in 1 (one) field.\n>\n> table wehere created using:\n> CREATE TABLE \"public\".\"BIGMA\" (\"string\" VARCHAR(255) NOT NULL) WITH OIDS; +\n> CREATE INDEX \"BIGMA_INDEX\" ON \"public\".\"BIGMA\" USING btree (\"string\");\n> and\n> CREATE TABLE \"public\".\"DIRTY\" (\"string\" VARCHAR(128) NOT NULL) WITH OIDS; +\n> CREATE INDEX \"DIRTY_INDEX\" ON \"public\".\"DIRTY\" USING btree (\"string\");\n>\n> What I am requested to do is to keep all records from 'BIGMA' that do\n> not apear in 'DIRTY'\n> So far I have tried solving this by going for:\n>\n> [explain] select * from BIGMA where string not in (select * from DIRTY);\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Seq Scan on bigma (cost=0.00..24582291.25 rows=500 width=145)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on dirty (cost=0.00..42904.63 rows=2503963 width=82)\n> (4 rows)\n\nHave you analyzed bigma? The number of rows from the two explains for that\ntable look suspiciously like default values.\n\nAlso, what version are you using, because there are some differences from\n7.3 to 7.4 that change possible suggestions.\n\nThe first is that on 7.4, you may be able to do better with a higher\nsort_mem which could possible switch over to the hashed implementation,\nalthough I think it's probably going to take a pretty high value given the\nsize.\n\nThe second is that you might get better results (even on older versions)\nfrom an exists or left join solution, something like (assuming no nulls in\nbigma.email):\n\nselect * from bigma where not exists(select 1 from dirty where dirty.email\n!= bigma.email);\n\nselect bigma.* from bigma left outer join dirty on (dirty.email =\nbigma.email) where dirty.email is null;\n\nIf you've got nulls in bigma.email you have to be a little more careful.\n\n> [explain] select * from bigma,dirty where bigma.email!=dirty.email;\n\nThis *almost* certainly does not do what you want. For most data sets\nthis is going to give you a number of rows very close to # of rows in\ndirty * # of rows in bigma. Needless to say, this is going to take a long\ntime.\n", "msg_date": "Thu, 21 Oct 2004 08:05:59 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple machine-killing query!" }, { "msg_contents": "Sounds like you need some way to match a subset of the data first,\nrather than try indices that are bigger than the data. Can you add\noperation indices, perhaps on the first 10 bytes of the keys in both\ntables or on a integer hash of all of the strings? If so you could\njoin on the exact set difference over the set difference of the\noperation match.\n\n/Aaron\n\n\nOn Thu, 21 Oct 2004 17:34:17 +0300, Victor Ciurus <[email protected]> wrote:\n> Hi all,\n> \n> I'm writing this because I've reached the limit of my imagination and\n> patience! So here is it...\n> \n> 2 tables:\n> 1 containing 27 million variable lenght, alpha-numeric records\n> (strings) in 1 (one) field. 
(10 - 145 char lenght per record)\n> 1 containing 2.5 million variable lenght, alpha-numeric records\n> (strings) in 1 (one) field.\n> \n> table wehere created using:\n> CREATE TABLE \"public\".\"BIGMA\" (\"string\" VARCHAR(255) NOT NULL) WITH OIDS; +\n> CREATE INDEX \"BIGMA_INDEX\" ON \"public\".\"BIGMA\" USING btree (\"string\");\n> and\n> CREATE TABLE \"public\".\"DIRTY\" (\"string\" VARCHAR(128) NOT NULL) WITH OIDS; +\n> CREATE INDEX \"DIRTY_INDEX\" ON \"public\".\"DIRTY\" USING btree (\"string\");\n> \n> What I am requested to do is to keep all records from 'BIGMA' that do\n> not apear in 'DIRTY'\n> So far I have tried solving this by going for:\n> \n> [explain] select * from BIGMA where string not in (select * from DIRTY);\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Seq Scan on bigma (cost=0.00..24582291.25 rows=500 width=145)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on dirty (cost=0.00..42904.63 rows=2503963 width=82)\n> (4 rows)\n> \n> AND\n> \n> [explain] select * from bigma,dirty where bigma.email!=dirty.email;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Nested Loop (cost=20.00..56382092.13 rows=2491443185 width=227)\n> Join Filter: ((\"inner\".email)::text <> (\"outer\".email)::text)\n> -> Seq Scan on dirty (cost=0.00..42904.63 rows=2503963 width=82)\n> -> Materialize (cost=20.00..30.00 rows=1000 width=145)\n> -> Seq Scan on bigma (cost=0.00..20.00 rows=1000 width=145)\n> (5 rows)\n> \n> Now the problem is that both of my previous tries seem to last\n> forever! I'm not a pqsql guru so that's why I'm asking you fellas to\n> guide mw right! I've tried this on mysql previosly but there seems to\n> be no way mysql can handle this large query.\n> \n> QUESTIONS:\n> What can I do in order to make this work?\n> Where do I make mistakes? Is there a way I can improve the performance\n> in table design, query style, server setting so that I can get this\n> monster going and producing a result?\n> \n> Thanks all for your preciuos time and answers!\n> \n> Victor C.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n\n-- \n\nRegards,\n/Aaron\n", "msg_date": "Thu, 21 Oct 2004 11:14:14 -0400", "msg_from": "Aaron Werman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple machine-killing query!" }, { "msg_contents": "Victor Ciurus <[email protected]> writes:\n> What I am requested to do is to keep all records from 'BIGMA' that do\n> not apear in 'DIRTY'\n> So far I have tried solving this by going for:\n\n> [explain] select * from BIGMA where string not in (select * from DIRTY);\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Seq Scan on bigma (cost=0.00..24582291.25 rows=500 width=145)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on dirty (cost=0.00..42904.63 rows=2503963 width=82)\n> (4 rows)\n\nIf you are using PG 7.4, you can get reasonable performance out of this\napproach, but you need to jack sort_mem up to the point where the whole\nDIRTY table will fit into sort_mem (so that you get a hashed-subplan\nplan and not a plain subplan). 
If you find yourself setting sort_mem to\nmore than say half of your machine's available RAM, you should probably\nforget that idea.\n\n> [explain] select * from bigma,dirty where bigma.email!=dirty.email;\n\nThis of course does not give the right answer at all.\n\nA trick that people sometimes use is an outer join:\n\nselect * from bigma left join dirty on (bigma.email=dirty.email)\nwhere dirty.email is null;\n\nUnderstanding why this works is left as an exercise for the reader\n... but it does work, and pretty well too. If you're using pre-7.4\nPG then this is about the only effective solution AFAIR.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 2004 11:41:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple machine-killing query! " }, { "msg_contents": "Well guys,\n\nYour replies have been more than helpful to me, showing me both the\nlearning stuff I still have to get in my mind about real SQL and the\nwonder called PostgreSQL and a very good solution from Tom Lane\n(thanks a lot sir!)!\n\nIndeed, changing mem_sort and other server parmeters along with the\nquite strange (to me!) outer join Tom mentioned finally got me to\nfinalize the cleaning task and indeed in warp speed (some 5 mintues or\nless!). I am running PG v7.4.5 on a PIV Celeron 1,7Ghz with 1,5Gb ram\nso talking about the time performance I might say that I'm more than\npleased with the result. As with the amazement PG \"caused\" me through\nits reliability so far I am decided to go even deeper in learning it!\n\nThanks again all for your precious help!\n\nRegards,\nVictor \n\n\n> \n> If you are using PG 7.4, you can get reasonable performance out of this\n> approach, but you need to jack sort_mem up to the point where the whole\n> DIRTY table will fit into sort_mem (so that you get a hashed-subplan\n> plan and not a plain subplan). If you find yourself setting sort_mem to\n> more than say half of your machine's available RAM, you should probably\n> forget that idea.\n> \n> > [explain] select * from bigma,dirty where bigma.email!=dirty.email;\n> \n> This of course does not give the right answer at all.\n> \n> A trick that people sometimes use is an outer join:\n> \n> select * from bigma left join dirty on (bigma.email=dirty.email)\n> where dirty.email is null;\n> \n> Understanding why this works is left as an exercise for the reader\n> ... but it does work, and pretty well too. If you're using pre-7.4\n> PG then this is about the only effective solution AFAIR.\n> \n> regards, tom lane\n>\n", "msg_date": "Thu, 21 Oct 2004 20:03:41 +0300", "msg_from": "Victor Ciurus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple machine-killing query!" }, { "msg_contents": "Victor,\n\n> [explain] select * from BIGMA where string not in (select * from DIRTY);\n>                                QUERY PLAN\n> ------------------------------------------------------------------------\n>  Seq Scan on bigma  (cost=0.00..24582291.25 rows=500 width=145)\n>    Filter: (NOT (subplan))\n>    SubPlan\n>      ->  Seq Scan on dirty  (cost=0.00..42904.63 rows=2503963 width=82)\n\nThis is what you call an \"evil query\". 
I'm not surprised it takes forever; \nyou're telling the database \"Compare every value in 2.7 million rows of text \nagainst 2.5 million rows of text and give me those that don't match.\" There \nis simply no way, on ANY RDBMS, for this query to execute and not eat all of \nyour RAM and CPU for a long time.\n\nYou're simply going to have to allocate shared_buffers and sort_mem (about 2GB \nof sort_mem would be good) to the query, and turn the computer over to the \ntask until it's done. And, for the sake of sanity, when you find the \n200,000 rows that don't match, flag them so that you don't have to do this \nagain.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 21 Oct 2004 10:14:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple machine-killing query!" } ]
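
(Pulling the advice in this thread together into one sketch. Table and column names follow
the DDL quoted above ("BIGMA"/"DIRTY", column "string"; the later messages call the column
"email"), the 256 MB sort_mem figure is only an illustration, and the plans actually chosen
should be confirmed with EXPLAIN against the real data.)

-- 1) On 7.4, let the NOT IN be evaluated as a hashed subplan by giving it
--    enough memory to hold the whole DIRTY table (sort_mem is in KB and
--    applies per sort/hash operation, in this session only):
SET sort_mem = 262144;
EXPLAIN SELECT b.*
FROM "BIGMA" b
WHERE b.string NOT IN (SELECT d.string FROM "DIRTY" d);
-- the filter should now read "NOT (hashed subplan)" instead of "NOT (subplan)"

-- 2) Or rewrite it as an anti-join, which also works well on pre-7.4 releases:
SELECT b.*
FROM "BIGMA" b
LEFT JOIN "DIRTY" d ON d.string = b.string
WHERE d.string IS NULL;

-- 3) NOT EXISTS is another equivalent form; unlike NOT IN it keeps behaving
--    intuitively if the subquery column could ever contain NULLs (not an
--    issue here, given the NOT NULL constraints):
SELECT b.*
FROM "BIGMA" b
WHERE NOT EXISTS (SELECT 1 FROM "DIRTY" d WHERE d.string = b.string);
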
[ { "msg_contents": "I'm seeing some weird behavior on a repurposed server that was wiped \nclean and set up to run as a database and application server with \npostgres and Apache, as well as some command-line PHP scripts.\n\nThe box itself is a quad processor (2.4 GHz Intel Xeons) Debian woody \nGNU/Linux (2.6.2) system.\n\npostgres is crawling on some fairly routine queries. I'm wondering if \nthis could somehow be related to the fact that this isn't a \ndatabase-only server, but Apache is not really using any resources when \npostgres slows to a crawl.\n\nHere's an example of analysis of a recent query:\n\nEXPLAIN ANALYZE SELECT COUNT(DISTINCT u.id)\nFROM userdata as u, userdata_history as h\nWHERE h.id = '18181'\nAND h.id = u.id;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------\n Aggregate (cost=0.02..0.02 rows=1 width=8) (actual \ntime=298321.421..298321.422 rows=1 loops=1)\n -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual \ntime=1.771..298305.531 rows=2452 loops=1)\n Join Filter: (\"inner\".id = \"outer\".id)\n -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8) \n(actual time=0.026..11.869 rows=2452 loops=1)\n -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1 \nwidth=8) (actual time=0.005..70.519 rows=41631 loops=2452)\n Filter: (id = 18181::bigint)\n Total runtime: 298321.926 ms\n(7 rows)\n\nuserdata has a primary/foreign key on id, which references \nuserdata_history.id, which is a primary key.\n\nAt the time of analysis, the userdata table had < 2,500 rows. \nuserdata_history had < 50,000 rows. I can't imagine how even a seq scan \ncould result in a runtime of nearly 5 minutes in these circumstances.\n\nAlso, doing a count( * ) from each table individually returns nearly \ninstantly.\n\nI can provide details of postgresql.conf and kernel settings if \nnecessary, but I'm using some pretty well tested settings that I use \nany time I admin a postgres installation these days based on box \nresources and database size. I'm more interested in knowing if there \nare any bird's eye details I should be checking immediately.\n\nThanks.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\n", "msg_date": "Thu, 21 Oct 2004 15:36:02 -0500", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Anomalies in 7.4.5" }, { "msg_contents": "I know, I know: I should've done this before I posted. REINDEXing and \nVACUUMing mostly fixed this problem. Which gets me back to where I was \nyesterday, reviewing an import process (that existed previously) that \npopulates tables in this system that seems to allow small data sets to \ncause simple queries like this to crawl. Is there anything about \ngeneral COPY/INSERT activity that can cause small data sets to become \nso severely slow in postgres that can be prevented other than being \ndiligent about VACUUMing? I was hoping that pg_autovacuum along with \npost-import manual VACUUMs would be sufficient, but it doesn't seem to \nbe the case necessarily. Granted, I haven't done a methodical and \ncomplete review of the process, but I'm still surprised at how quickly \nit seems able to debilitate postgres with even small amounts of data. 
I \nhad a similar situation crawl yesterday based on a series of COPYs \ninvolving 5 rows!\n\nAs in, can I look for something to treat the cause rather than the \nsymptoms?\n\nIf not, should I be REINDEXing manually, as well as VACUUMing manually \nafter large data imports (whether via COPY or INSERT)? Or will a VACUUM \nFULL ANALYZE be enough?\n\nThanks!\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Oct 21, 2004, at 3:36 PM, Thomas F.O'Connell wrote:\n\n> I'm seeing some weird behavior on a repurposed server that was wiped \n> clean and set up to run as a database and application server with \n> postgres and Apache, as well as some command-line PHP scripts.\n>\n> The box itself is a quad processor (2.4 GHz Intel Xeons) Debian woody \n> GNU/Linux (2.6.2) system.\n>\n> postgres is crawling on some fairly routine queries. I'm wondering if \n> this could somehow be related to the fact that this isn't a \n> database-only server, but Apache is not really using any resources \n> when postgres slows to a crawl.\n>\n> Here's an example of analysis of a recent query:\n>\n> EXPLAIN ANALYZE SELECT COUNT(DISTINCT u.id)\n> FROM userdata as u, userdata_history as h\n> WHERE h.id = '18181'\n> AND h.id = u.id;\n> \n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> ----------------------------------------------------------------------- \n> --\n> Aggregate (cost=0.02..0.02 rows=1 width=8) (actual \n> time=298321.421..298321.422 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual \n> time=1.771..298305.531 rows=2452 loops=1)\n> Join Filter: (\"inner\".id = \"outer\".id)\n> -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8) \n> (actual time=0.026..11.869 rows=2452 loops=1)\n> -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1 \n> width=8) (actual time=0.005..70.519 rows=41631 loops=2452)\n> Filter: (id = 18181::bigint)\n> Total runtime: 298321.926 ms\n> (7 rows)\n>\n> userdata has a primary/foreign key on id, which references \n> userdata_history.id, which is a primary key.\n>\n> At the time of analysis, the userdata table had < 2,500 rows. \n> userdata_history had < 50,000 rows. I can't imagine how even a seq \n> scan could result in a runtime of nearly 5 minutes in these \n> circumstances.\n>\n> Also, doing a count( * ) from each table individually returns nearly \n> instantly.\n>\n> I can provide details of postgresql.conf and kernel settings if \n> necessary, but I'm using some pretty well tested settings that I use \n> any time I admin a postgres installation these days based on box \n> resources and database size. I'm more interested in knowing if there \n> are any bird's eye details I should be checking immediately.\n>\n> Thanks.\n>\n> -tfo\n>\n> --\n> Thomas F. 
O'Connell\n> Co-Founder, Information Architect\n> Sitening, LLC\n> http://www.sitening.com/\n> 110 30th Avenue North, Suite 6\n> Nashville, TN 37203-6320\n> 615-260-0005\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n", "msg_date": "Thu, 21 Oct 2004 15:50:20 -0500", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "Thomas F.O'Connell wrote:\n> I'm seeing some weird behavior on a repurposed server that was wiped \n> clean and set up to run as a database and application server with \n> postgres and Apache, as well as some command-line PHP scripts.\n> \n> The box itself is a quad processor (2.4 GHz Intel Xeons) Debian woody \n> GNU/Linux (2.6.2) system.\n> \n> postgres is crawling on some fairly routine queries. I'm wondering if \n> this could somehow be related to the fact that this isn't a \n> database-only server, but Apache is not really using any resources when \n> postgres slows to a crawl.\n> \n> Here's an example of analysis of a recent query:\n> \n> EXPLAIN ANALYZE SELECT COUNT(DISTINCT u.id)\n> FROM userdata as u, userdata_history as h\n> WHERE h.id = '18181'\n> AND h.id = u.id;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------\n> Aggregate (cost=0.02..0.02 rows=1 width=8) (actual \n> time=298321.421..298321.422 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual \n> time=1.771..298305.531 rows=2452 loops=1)\n> Join Filter: (\"inner\".id = \"outer\".id)\n> -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8) \n> (actual time=0.026..11.869 rows=2452 loops=1)\n> -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1 \n> width=8) (actual time=0.005..70.519 rows=41631 loops=2452)\n> Filter: (id = 18181::bigint)\n> Total runtime: 298321.926 ms\n> (7 rows)\n> \n> userdata has a primary/foreign key on id, which references \n> userdata_history.id, which is a primary key.\n> \n> At the time of analysis, the userdata table had < 2,500 rows. \n> userdata_history had < 50,000 rows. I can't imagine how even a seq scan \n> could result in a runtime of nearly 5 minutes in these circumstances.\n> \n> Also, doing a count( * ) from each table individually returns nearly \n> instantly.\n> \n> I can provide details of postgresql.conf and kernel settings if \n> necessary, but I'm using some pretty well tested settings that I use \n> any time I admin a postgres installation these days based on box \n> resources and database size. I'm more interested in knowing if there \n> are any bird's eye details I should be checking immediately.\n> \n> Thanks.\n> \n> -tfo\n> \n> -- \n> Thomas F. 
O'Connell\n> Co-Founder, Information Architect\n> Sitening, LLC\n> http://www.sitening.com/\n> 110 30th Avenue North, Suite 6\n> Nashville, TN 37203-6320\n> 615-260-0005\n\nIs your enable_seqscan set to true?\nTry it after issuing set enable_seqscan to off;\n\n\n", "msg_date": "Thu, 21 Oct 2004 21:04:58 GMT", "msg_from": "Bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "On Thu, 21 Oct 2004, Thomas F.O'Connell wrote:\n\n> Aggregate (cost=0.02..0.02 rows=1 width=8) (actual \n> time=298321.421..298321.422 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual \n> time=1.771..298305.531 rows=2452 loops=1)\n> Join Filter: (\"inner\".id = \"outer\".id)\n> -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8) \n> (actual time=0.026..11.869 rows=2452 loops=1)\n> -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1 \n> width=8) (actual time=0.005..70.519 rows=41631 loops=2452)\n> Filter: (id = 18181::bigint)\n\nIt looks like you have not run ANALYZE recently. Most people run VACUUM \nANALYZE every night (or similar) in a cron job.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Thu, 21 Oct 2004 23:05:20 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "The irony is that I had just disabled pg_autovacuum the previous day \nduring analysis of a wider issue affecting imports of data into the \nsystem.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Oct 21, 2004, at 4:05 PM, Dennis Bjorklund wrote:\n\n> On Thu, 21 Oct 2004, Thomas F.O'Connell wrote:\n>\n>> Aggregate (cost=0.02..0.02 rows=1 width=8) (actual\n>> time=298321.421..298321.422 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual\n>> time=1.771..298305.531 rows=2452 loops=1)\n>> Join Filter: (\"inner\".id = \"outer\".id)\n>> -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8)\n>> (actual time=0.026..11.869 rows=2452 loops=1)\n>> -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1\n>> width=8) (actual time=0.005..70.519 rows=41631 loops=2452)\n>> Filter: (id = 18181::bigint)\n>\n> It looks like you have not run ANALYZE recently. Most people run VACUUM\n> ANALYZE every night (or similar) in a cron job.\n>\n> -- \n> /Dennis Björklund\n", "msg_date": "Thu, 21 Oct 2004 16:11:39 -0500", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "\"Thomas F.O'Connell\" <[email protected]> writes:\n> -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual \n> time=1.771..298305.531 rows=2452 loops=1)\n> Join Filter: (\"inner\".id = \"outer\".id)\n> -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8) \n> (actual time=0.026..11.869 rows=2452 loops=1)\n> -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1 \n> width=8) (actual time=0.005..70.519 rows=41631 loops=2452)\n> Filter: (id = 18181::bigint)\n> Total runtime: 298321.926 ms\n> (7 rows)\n\nWhat's killing you here is that the planner thinks these tables are\ncompletely empty (notice the zero cost estimates, which implies the\ntable has zero blocks --- the fact that the rows estimate is 1 and not 0\nis the result of sanity-check clamping inside costsize.c). 
This leads\nit to choose a nestloop, which would be the best plan if there were only\na few rows involved, but it degenerates rapidly when there are not.\n\nIt's easy to fall into this trap when truncating and reloading tables;\nall you need is an \"analyze\" while the table is empty. The rule of\nthumb is to analyze just after you reload the table, not just before.\n\nI'm getting more and more convinced that we need to drop the reltuples\nand relpages entries in pg_class, in favor of checking the physical\ntable size whenever we make a plan. We could derive the tuple count\nestimate by having ANALYZE store a tuples-per-page estimate in pg_class\nand then multiply by the current table size; tuples-per-page should be\na much more stable figure than total tuple count.\n\nOne drawback to this is that it would require an additional lseek per\ntable while planning, but that doesn't seem like a huge penalty.\n\nProbably the most severe objection to doing things this way is that the\nselected plan could change unexpectedly as a result of the physical\ntable size changing. Right now the DBA can keep tight rein on actions\nthat might affect plan selection (ie, VACUUM and ANALYZE), but that\nwould go by the board with this. OTOH, we seem to be moving towards\nautovacuum, which also takes away any guarantees in this department.\n\nIn any case this is speculation for 8.1; I think it's too late for 8.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 2004 17:53:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5 " }, { "msg_contents": "On 21 Oct 2004 at 15:50, Thomas F.O'Connell wrote:\n\n> If not, should I be REINDEXing manually, as well as VACUUMing manually \n> after large data imports (whether via COPY or INSERT)? Or will a VACUUM \n> FULL ANALYZE be enough?\n> \n\nIt's not the vacuuming that's important here, just the analyze. If you import any data into \na table, Postgres often does not *know* that until you gather the statistics on the table.\nYou are simply running into the problem of the planner not knowing how much \ndata/distribution of data in your tables.\n\nIf you have large imports it may be faster overall to drop the indexes first, then insert the \ndata, then put the indexes back on, then analyze.\n\nCheers,\nGary.\n\n", "msg_date": "Thu, 21 Oct 2004 23:21:35 +0100", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "\n> Probably the most severe objection to doing things this way is that the\n> selected plan could change unexpectedly as a result of the physical\n> table size changing. Right now the DBA can keep tight rein on actions\n> that might affect plan selection (ie, VACUUM and ANALYZE), but that\n> would go by the board with this. OTOH, we seem to be moving towards\n> autovacuum, which also takes away any guarantees in this department.\n\nBut aren't we requiring that we can disable autovacuum on some\ntables? I've actually used, more than once, the finger-on-the-scale\nmethod of thumping values in pg_class when I had a pretty good idea\nof how the table was changing, particularly when it would change in\nsuch a way as to confuse the planner. There are still enough cases\nwhere the planner doesn't quite get things right that I'd really\nprefer the ability to give it clues, at least indirectly. 
I can't\nimagine that there's going to be a lot of enthusiasm for hints, so\nanything that isn't a sure-fire planner helper is a potential loss,\nat least to me.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Fri, 22 Oct 2004 15:40:58 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "Andrew Sullivan wrote:\n\n>>Probably the most severe objection to doing things this way is that the\n>>selected plan could change unexpectedly as a result of the physical\n>>table size changing. Right now the DBA can keep tight rein on actions\n>>that might affect plan selection (ie, VACUUM and ANALYZE), but that\n>>would go by the board with this. OTOH, we seem to be moving towards\n>>autovacuum, which also takes away any guarantees in this department.\n>> \n>>\n>\n>But aren't we requiring that we can disable autovacuum on some\n>tables? \n>\nYes that is the long term goal, but the autovac in 8.0 is still all or \nnothing.\n", "msg_date": "Fri, 22 Oct 2004 17:13:18 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "On Fri, Oct 22, 2004 at 05:13:18PM -0400, Matthew T. O'Connor wrote:\n\n> Yes that is the long term goal, but the autovac in 8.0 is still all or \n> nothing.\n\nYes, which is why I couldn't use the current iteration for\nproduction: the cost is too high. I think this re-inforces my\noriginal point, which is that taking away the ability of DBAs to\nthump the planner for certain tables -- even indirectly -- under\ncertain pathological conditions is crucial for production work. In\nthe ideal world, the wizards and genius planners and such like would\nwork perfectly, and the DBA would never have to intervene. In\npractice, there are cases when you need to haul on a knob or two. 
\nWhile this doesn't mean that we should adopt the Oracle approach of\nhaving knobs which adjust the sensitivity of the knob that tunes the\nmain knob-tuner, I'm still pretty leery of anything which smacks of\ncompletely locking the DBA's knowledge out of the system.\n\nA \n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Mon, 25 Oct 2004 09:51:31 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "This topic probably available in 8.x will be very usefull for people just\nusing postgresql as a \"normal\" Database user.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent: jeudi 21 octobre 2004 23:53\nTo: Thomas F.O'Connell\nCc: PgSQL - Performance\nSubject: Re: [PERFORM] Performance Anomalies in 7.4.5 \n\n\"Thomas F.O'Connell\" <[email protected]> writes:\n> -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual\n> time=1.771..298305.531 rows=2452 loops=1)\n> Join Filter: (\"inner\".id = \"outer\".id)\n> -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8) \n> (actual time=0.026..11.869 rows=2452 loops=1)\n> -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1\n> width=8) (actual time=0.005..70.519 rows=41631 loops=2452)\n> Filter: (id = 18181::bigint)\n> Total runtime: 298321.926 ms\n> (7 rows)\n\nWhat's killing you here is that the planner thinks these tables are\ncompletely empty (notice the zero cost estimates, which implies the table\nhas zero blocks --- the fact that the rows estimate is 1 and not 0 is the\nresult of sanity-check clamping inside costsize.c). This leads it to choose\na nestloop, which would be the best plan if there were only a few rows\ninvolved, but it degenerates rapidly when there are not.\n\nIt's easy to fall into this trap when truncating and reloading tables; all\nyou need is an \"analyze\" while the table is empty. The rule of thumb is to\nanalyze just after you reload the table, not just before.\n\nI'm getting more and more convinced that we need to drop the reltuples and\nrelpages entries in pg_class, in favor of checking the physical table size\nwhenever we make a plan. We could derive the tuple count estimate by having\nANALYZE store a tuples-per-page estimate in pg_class and then multiply by\nthe current table size; tuples-per-page should be a much more stable figure\nthan total tuple count.\n\nOne drawback to this is that it would require an additional lseek per table\nwhile planning, but that doesn't seem like a huge penalty.\n\nProbably the most severe objection to doing things this way is that the\nselected plan could change unexpectedly as a result of the physical table\nsize changing. Right now the DBA can keep tight rein on actions that might\naffect plan selection (ie, VACUUM and ANALYZE), but that would go by the\nboard with this. 
OTOH, we seem to be moving towards autovacuum, which also\ntakes away any guarantees in this department.\n\nIn any case this is speculation for 8.1; I think it's too late for 8.0.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Thu, 28 Oct 2004 10:01:02 +0200", "msg_from": "\"Alban Medici (NetCentrex)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5 " }, { "msg_contents": "Tom,\n\n> One drawback to this is that it would require an additional lseek per table\n> while planning, but that doesn't seem like a huge penalty.\n\nHmmm ... would the additional lseek take longer for larger tables, or would it \nbe a fixed cost? \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 28 Oct 2004 09:07:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> One drawback to this is that it would require an additional lseek per table\n>> while planning, but that doesn't seem like a huge penalty.\n\n> Hmmm ... would the additional lseek take longer for larger tables, or would it \n> be a fixed cost? \n\nShould be pretty much a fixed cost: one kernel call per table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Oct 2004 12:31:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5 " }, { "msg_contents": "On Thu, 2004-10-28 at 12:31, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> >> One drawback to this is that it would require an additional lseek per table\n> >> while planning, but that doesn't seem like a huge penalty.\n> \n> > Hmmm ... would the additional lseek take longer for larger tables, or would it \n> > be a fixed cost? \n> \n> Should be pretty much a fixed cost: one kernel call per table.\n\nIs this something that the bgwriter could periodically do and share the\ndata? Possibly in the future it could even force a function or prepared\nstatement recompile if the data has changed significantly?\n\n\n", "msg_date": "Thu, 28 Oct 2004 13:20:56 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> On Thu, 2004-10-28 at 12:31, Tom Lane wrote:\n>> Should be pretty much a fixed cost: one kernel call per table.\n\n> Is this something that the bgwriter could periodically do and share the\n> data?\n\nI think a kernel call would be cheaper.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Oct 2004 13:50:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Anomalies in 7.4.5 " } ]
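A minimal sketch of the load-then-ANALYZE ordering recommended in the "Performance Anomalies in 7.4.5" thread above, using the table names from its EXPLAIN output; the COPY file path is invented for the example, and a plain ANALYZE (no VACUUM FULL) is enough to refresh the planner's row estimates:

    -- Load first, then gather statistics: ANALYZE runs after the data is in place.
    TRUNCATE userdata_history;
    COPY userdata_history FROM '/tmp/userdata_history.copy';  -- illustrative path
    ANALYZE userdata_history;
    ANALYZE userdata;

    -- With reltuples/relpages now reflecting the loaded tables, the planner can cost
    -- the join realistically instead of assuming zero-block tables and falling into
    -- the nested-loop plan shown at the start of the thread.
    EXPLAIN ANALYZE
    SELECT COUNT(DISTINCT u.id)
      FROM userdata AS u, userdata_history AS h
     WHERE h.id = '18181'
       AND h.id = u.id;

Running the ANALYZE while the table is still empty, the trap described in the thread, leaves the statistics claiming an empty table and reproduces the multi-minute nested-loop runtime.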
[ { "msg_contents": "Hello everyone,\n\nI am currently working on a data project that uses PostgreSQL\nextensively to store, manage and maintain the data. We haven't had\nany problems regarding database size until recently. The three major\ntables we use never get bigger than 10 million records. With this\nsize, we can do things like storing the indexes or even the tables in\nmemory to allow faster access.\n\nRecently, we have found customers who are wanting to use our service\nwith data files between 100 million and 300 million records. At that\nsize, each of the three major tables will hold between 150 million and\n700 million records. At this size, I can't expect it to run queries\nin 10-15 seconds (what we can do with 10 million records), but would\nprefer to keep them all under a minute.\n\nWe did some original testing and with a server with 8GB or RAM and\nfound we can do operations on data file up to 50 million fairly well,\nbut performance drop dramatically after that. Does anyone have any\nsuggestions on a good way to improve performance for these extra large\ntables? Things that have come to mind are Replication and Beowulf\nclusters, but from what I have recently studied, these don't do so wel\nwith singular processes. We will have parallel process running, but\nit's more important that the speed of each process be faster than\nseveral parallel processes at once.\n\nAny help would be greatly appreciated! \n\nThanks,\n\nJoshua Marsh\n\nP.S. Off-topic, I have a few invitations to gmail. If anyone would\nlike one, let me know.\n", "msg_date": "Thu, 21 Oct 2004 21:14:59 -0600", "msg_from": "Joshua Marsh <[email protected]>", "msg_from_op": true, "msg_subject": "Large Database Performance suggestions" }, { "msg_contents": "On Thu, 21 Oct 2004, Joshua Marsh wrote:\n\n> Recently, we have found customers who are wanting to use our service\n> with data files between 100 million and 300 million records. At that\n> size, each of the three major tables will hold between 150 million and\n> 700 million records. At this size, I can't expect it to run queries\n> in 10-15 seconds (what we can do with 10 million records), but would\n> prefer to keep them all under a minute.\n\nTo provide any useful information, we'd need to look at your table schemas\nand sample queries.\n\nThe values for sort_mem and shared_buffers will also be useful.\n\nAre you VACUUMing and ANALYZEing? (or is the data read only?))\n\ngavin\n", "msg_date": "Fri, 22 Oct 2004 13:29:57 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Performance suggestions" }, { "msg_contents": "Joshua Marsh <[email protected]> writes:\n> ... We did some original testing and with a server with 8GB or RAM and\n> found we can do operations on data file up to 50 million fairly well,\n> but performance drop dramatically after that.\n\nWhat you have to ask is *why* does it drop dramatically? There aren't\nany inherent limits in Postgres that are going to kick in at that level.\nI'm suspicious that you could improve the situation by adjusting\nsort_mem and/or other configuration parameters; but there's not enough\ninfo here to make specific recommendations. 
I would suggest posting\nEXPLAIN ANALYZE results for your most important queries both in the size\nrange where you are getting good results, and the range where you are not.\nThen we'd have something to chew on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Oct 2004 23:37:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Performance suggestions " }, { "msg_contents": "On Thu, 2004-10-21 at 21:14, Joshua Marsh wrote:\n> Hello everyone,\n> \n> I am currently working on a data project that uses PostgreSQL\n> extensively to store, manage and maintain the data. We haven't had\n> any problems regarding database size until recently. The three major\n> tables we use never get bigger than 10 million records. With this\n> size, we can do things like storing the indexes or even the tables in\n> memory to allow faster access.\n> \n> Recently, we have found customers who are wanting to use our service\n> with data files between 100 million and 300 million records. At that\n> size, each of the three major tables will hold between 150 million and\n> 700 million records. At this size, I can't expect it to run queries\n> in 10-15 seconds (what we can do with 10 million records), but would\n> prefer to keep them all under a minute.\n> \n> We did some original testing and with a server with 8GB or RAM and\n> found we can do operations on data file up to 50 million fairly well,\n> but performance drop dramatically after that. Does anyone have any\n> suggestions on a good way to improve performance for these extra large\n> tables? Things that have come to mind are Replication and Beowulf\n> clusters, but from what I have recently studied, these don't do so wel\n> with singular processes. We will have parallel process running, but\n> it's more important that the speed of each process be faster than\n> several parallel processes at once.\n\n\nI'd assume that what's happening is that up to a certain data set size,\nit all fits in memory, and you're going from CPU/memory bandwidth\nlimited to I/O limited. If this is the case, then a faster storage\nsubsystem is the only real answer. If the database is mostly read, then\na large RAID5 or RAID 1+0 array should help quite a bit.\n\nYou might wanna post some explain analyze of the queries that are going\nslower at some point in size, along with schema for those tables etc...\n\n", "msg_date": "Thu, 21 Oct 2004 23:03:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Performance suggestions" }, { "msg_contents": "Josh,\n\nYour hardware setup would be useful too. It's surprising how slow some \nbig name servers really are.\nIf you are seriously considering memory sizes over 4G you may want to \nlook at an opteron.\n\nDave\n\nJoshua Marsh wrote:\n\n>Hello everyone,\n>\n>I am currently working on a data project that uses PostgreSQL\n>extensively to store, manage and maintain the data. We haven't had\n>any problems regarding database size until recently. The three major\n>tables we use never get bigger than 10 million records. With this\n>size, we can do things like storing the indexes or even the tables in\n>memory to allow faster access.\n>\n>Recently, we have found customers who are wanting to use our service\n>with data files between 100 million and 300 million records. At that\n>size, each of the three major tables will hold between 150 million and\n>700 million records. 
At this size, I can't expect it to run queries\n>in 10-15 seconds (what we can do with 10 million records), but would\n>prefer to keep them all under a minute.\n>\n>We did some original testing and with a server with 8GB or RAM and\n>found we can do operations on data file up to 50 million fairly well,\n>but performance drop dramatically after that. Does anyone have any\n>suggestions on a good way to improve performance for these extra large\n>tables? Things that have come to mind are Replication and Beowulf\n>clusters, but from what I have recently studied, these don't do so wel\n>with singular processes. We will have parallel process running, but\n>it's more important that the speed of each process be faster than\n>several parallel processes at once.\n>\n>Any help would be greatly appreciated! \n>\n>Thanks,\n>\n>Joshua Marsh\n>\n>P.S. Off-topic, I have a few invitations to gmail. If anyone would\n>like one, let me know.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Fri, 22 Oct 2004 07:38:49 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Performance suggestions" }, { "msg_contents": "Thanks for all of your help so far. Here is some of the information\nyou guys were asking for:\n\nTest System:\n2x AMD Opteron 244 (1.8Ghz)\n8GB RAM\n7x 72GB SCSI HDD (Raid 5)\n\npostrgesql.conf information:\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 1000 # min 16, at least max_connections*2, 8KB each\n#sort_mem = 1024 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\nsort_mem = 4096000\nvacuum_mem = 1024000\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or open_datasync\n#wal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\nEverything else are at their defaults. I actually think the WAL\noptions are set to defaults as well, but I don't recall exactly :)\n\nAs for the queries and table, The data we store is confidential, but\nit is essentially an account number with a bunch of boolean fields\nthat specify if a person applies to criteria. 
So a query may look\nsomething like:\n\nSELECT acctno FROM view_of_data WHERE has_name AND is_active_member\nAND state = 'OH';\n\nwhich is explained as something like this:\n QUERY PLAN\n-----------------------------------------------------------------\n Seq Scan on view_of_data (cost=0.00..25304.26 rows=22054 width=11)\n Filter: (has_name AND is_active_member AND ((state)::text = 'OH'::text))\n(2 rows)\n\nOccasionally, because we store data from several sources, we will have\nrequests for data from several sources. We simply intersect the\nview_of_data table with a sources table that lists what acctno belong\nto what source. This query would look something like this:\n\nSELECT acctno FROM view_of_data WHERE has_name AND is_active_member\nAND state = 'OH' INTERSECT SELECT acctno FROM sources_data WHERE\nsource = 175;\n\nwhich is explained as follows:\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n SetOp Intersect (cost=882226.14..885698.20 rows=69441 width=11)\n -> Sort (cost=882226.14..883962.17 rows=694411 width=11)\n Sort Key: acctno\n -> Append (cost=0.00..814849.42 rows=694411 width=11)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..25524.80\nrows=22054 width=11)\n -> Seq Scan on view_of_data \n(cost=0.00..25304.26 rows=22054 width=11)\n Filter: (has_name AND is_active_member AND\n((state)::text = 'OH'::text))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..789324.62\nrows=672357 width=11)\n -> Seq Scan on sources_data \n(cost=0.00..782601.05 rows=672357 width=11)\n Filter: (source = 23)\n\n\nAgain, we see our biggest bottlenecks when we get over about 50\nmillion records. The time to execute grows exponentially from that\npoint.\n\nThanks again for all of your help!\n\n-Josh\n\n\nOn Fri, 22 Oct 2004 07:38:49 -0400, Dave Cramer <[email protected]> wrote:\n> Josh,\n> \n> Your hardware setup would be useful too. It's surprising how slow some\n> big name servers really are.\n> If you are seriously considering memory sizes over 4G you may want to\n> look at an opteron.\n> \n> Dave\n> \n> \n> \n> Joshua Marsh wrote:\n> \n> >Hello everyone,\n> >\n> >I am currently working on a data project that uses PostgreSQL\n> >extensively to store, manage and maintain the data. We haven't had\n> >any problems regarding database size until recently. The three major\n> >tables we use never get bigger than 10 million records. With this\n> >size, we can do things like storing the indexes or even the tables in\n> >memory to allow faster access.\n> >\n> >Recently, we have found customers who are wanting to use our service\n> >with data files between 100 million and 300 million records. At that\n> >size, each of the three major tables will hold between 150 million and\n> >700 million records. At this size, I can't expect it to run queries\n> >in 10-15 seconds (what we can do with 10 million records), but would\n> >prefer to keep them all under a minute.\n> >\n> >We did some original testing and with a server with 8GB or RAM and\n> >found we can do operations on data file up to 50 million fairly well,\n> >but performance drop dramatically after that. Does anyone have any\n> >suggestions on a good way to improve performance for these extra large\n> >tables? Things that have come to mind are Replication and Beowulf\n> >clusters, but from what I have recently studied, these don't do so wel\n> >with singular processes. 
We will have parallel process running, but\n> >it's more important that the speed of each process be faster than\n> >several parallel processes at once.\n> >\n> >Any help would be greatly appreciated!\n> >\n> >Thanks,\n> >\n> >Joshua Marsh\n> >\n> >P.S. Off-topic, I have a few invitations to gmail. If anyone would\n> >like one, let me know.\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 1: subscribe and unsubscribe commands go to [email protected]\n> >\n> >\n> >\n> >\n> \n> --\n> Dave Cramer\n> http://www.postgresintl.com\n> 519 939 0336\n> ICQ#14675561\n> \n>\n", "msg_date": "Tue, 26 Oct 2004 09:24:08 -0600", "msg_from": "Joshua Marsh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large Database Performance suggestions" }, { "msg_contents": "Joshua Marsh <[email protected]> writes:\n> shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each\n\nThis is on the small side for an 8G machine. I'd try 10000 or so.\n\n> sort_mem = 4096000\n\nYikes. You do realize you just said that *each sort operation* can use 4G?\n(Actually, it's probably overflowing internally; I dunno what amount of\nsort space you are really ending up with but it could be small.) Try\nsomething saner, maybe in the 10 to 100MB range.\n\n> vacuum_mem = 1024000\n\nThis is probably excessive as well.\n\n> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n> #max_fsm_relations = 1000 # min 100, ~50 bytes each\n\nYou will need to bump these up a good deal to avoid database bloat.\n\n> Occasionally, because we store data from several sources, we will have\n> requests for data from several sources. We simply intersect the\n> view_of_data table with a sources table that lists what acctno belong\n> to what source. This query would look something like this:\n\n> SELECT acctno FROM view_of_data WHERE has_name AND is_active_member\n> AND state = 'OH' INTERSECT SELECT acctno FROM sources_data WHERE\n> source = 175;\n\nIMHO you need to rethink your table layout. There is simply no way that\nthat query is going to be fast. Adding a source column to view_of_data\nwould work much better.\n\nIf you're not in a position to redo the tables, you might try it as a\njoin:\n\nSELECT acctno FROM view_of_data JOIN sources_data USING (acctno)\nWHERE has_name AND is_active_member AND state = 'OH'\n AND source = 175;\n\nbut I'm not really sure if that will be better or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Oct 2004 12:39:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Performance suggestions " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJoshua Marsh wrote:\n| Thanks for all of your help so far. Here is some of the information\n| you guys were asking for:\n|\n| Test System:\n| 2x AMD Opteron 244 (1.8Ghz)\n| 8GB RAM\n| 7x 72GB SCSI HDD (Raid 5)\n\nYou probably want to look at investing in a SAN.\n\n- --\nAndrew Hammond 416-673-4138 [email protected]\nDatabase Administrator, Afilias Canada Corp.\nCB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (GNU/Linux)\n\niD8DBQFBmMnSgfzn5SevSpoRAlp2AKCVXQkZLR7TuGId/OLveHPqpzC4zwCffNFC\n7zjXzJ6Ukg4TeO1ecWj/nFQ=\n=N5vp\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 15 Nov 2004 10:22:59 -0500", "msg_from": "Andrew Hammond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large Database Performance suggestions" } ]
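A sketch of the join form suggested in the "Large Database Performance suggestions" thread above as a replacement for the INTERSECT, using the table and column names from the thread; the per-session sort_mem value and the supporting index are illustrative additions, not part of the original posts:

    -- Per-session alternative to the global sort_mem = 4096000 (sort_mem is in KB,
    -- so that setting asks for roughly 4GB per sort operation); 65536 = 64MB, inside
    -- the 10 to 100MB range suggested above. The parameter is renamed work_mem in 8.0.
    SET sort_mem = 65536;

    -- Illustrative supporting index; it only helps if a single source is a small
    -- fraction of sources_data, otherwise a sequential scan will still win.
    CREATE INDEX sources_data_source_acctno ON sources_data (source, acctno);

    -- Join instead of INTERSECT: one pass over the data, with no Append/Sort/SetOp
    -- step over hundreds of thousands of rows.
    SELECT v.acctno
      FROM view_of_data v
      JOIN sources_data s USING (acctno)
     WHERE v.has_name
       AND v.is_active_member
       AND v.state = 'OH'
       AND s.source = 175;

If an account number can appear more than once per source in sources_data, add DISTINCT to keep the result equivalent to the INTERSECT. The shared_buffers and max_fsm_pages advice from the thread is postgresql.conf-level and needs a server restart, so it is not shown here.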
[ { "msg_contents": "I've been using the ARC debug options to analyse memory usage on the\nPostgreSQL 8.0 server. This is a precursor to more complex performance\nanalysis work on the OSDL test suite.\n\nI've simplified some of the ARC reporting into a single log line, which\nis enclosed here as a patch on freelist.c. This includes reporting of:\n- the total memory in use, which wasn't previously reported\n- the cache hit ratio, which was slightly incorrectly calculated\n- a useful-ish value for looking at the \"B\" lists in ARC\n(This is a patch against cvstip, but I'm not sure whether this has\npotential for inclusion in 8.0...)\n\nThe total memory in use is useful because it allows you to tell whether\nshared_buffers is set too high. If it is set too high, then memory usage\nwill continue to grow slowly up to the max, without any corresponding\nincrease in cache hit ratio. If shared_buffers is too small, then memory\nusage will climb quickly and linearly to its maximum.\n\nThe last one I've called \"turbulence\" in an attempt to ascribe some\nuseful meaning to B1/B2 hits - I've tried a few other measures though\nwithout much success. Turbulence is the hit ratio of B1+B2 lists added\ntogether. By observation, this is zero when ARC gives smooth operation,\nand goes above zero otherwise. Typically, turbulence occurs when\nshared_buffers is too small for the working set of the database/workload\ncombination and ARC repeatedly re-balances the lengths of T1/T2 as a\nresult of \"near-misses\" on the B1/B2 lists. Turbulence doesn't usually\ncut in until the cache is fully utilized, so there is usually some delay\nafter startup.\n\nWe also recently discussed that I would add some further memory analysis\nfeatures for 8.1, so I've been trying to figure out how.\n\nThe idea that B1, B2 represent something really useful doesn't seem to\nhave been borne out - though I'm open to persuasion there.\n\nI originally envisaged a \"shadow list\" operating in extension of the\nmain ARC list. This will require some re-coding, since the variables and\nmacros are all hard-coded to a single set of lists. No complaints, just\nit will take a little longer than we all thought (for me, that is...)\n\nMy proposal is to alter the code to allow an array of memory linked\nlists. The actual list would be [0] - other additional lists would be \ncreated dynamically as required i.e. not using IFDEFs, since I want this\nto be controlled by a SIGHUP GUC to allow on-site tuning, not just lab\nwork. This will then allow reporting against the additional lists, so\nthat cache hit ratios can be seen with various other \"prototype\"\nshared_buffer settings.\n\nAny thoughts?\n\n-- \nBest Regards, Simon Riggs", "msg_date": "Fri, 22 Oct 2004 19:50:59 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "ARC Memory Usage analysis" }, { "msg_contents": "On 10/22/2004 2:50 PM, Simon Riggs wrote:\n\n> I've been using the ARC debug options to analyse memory usage on the\n> PostgreSQL 8.0 server. This is a precursor to more complex performance\n> analysis work on the OSDL test suite.\n> \n> I've simplified some of the ARC reporting into a single log line, which\n> is enclosed here as a patch on freelist.c. 
This includes reporting of:\n> - the total memory in use, which wasn't previously reported\n> - the cache hit ratio, which was slightly incorrectly calculated\n> - a useful-ish value for looking at the \"B\" lists in ARC\n> (This is a patch against cvstip, but I'm not sure whether this has\n> potential for inclusion in 8.0...)\n> \n> The total memory in use is useful because it allows you to tell whether\n> shared_buffers is set too high. If it is set too high, then memory usage\n> will continue to grow slowly up to the max, without any corresponding\n> increase in cache hit ratio. If shared_buffers is too small, then memory\n> usage will climb quickly and linearly to its maximum.\n> \n> The last one I've called \"turbulence\" in an attempt to ascribe some\n> useful meaning to B1/B2 hits - I've tried a few other measures though\n> without much success. Turbulence is the hit ratio of B1+B2 lists added\n> together. By observation, this is zero when ARC gives smooth operation,\n> and goes above zero otherwise. Typically, turbulence occurs when\n> shared_buffers is too small for the working set of the database/workload\n> combination and ARC repeatedly re-balances the lengths of T1/T2 as a\n> result of \"near-misses\" on the B1/B2 lists. Turbulence doesn't usually\n> cut in until the cache is fully utilized, so there is usually some delay\n> after startup.\n> \n> We also recently discussed that I would add some further memory analysis\n> features for 8.1, so I've been trying to figure out how.\n> \n> The idea that B1, B2 represent something really useful doesn't seem to\n> have been borne out - though I'm open to persuasion there.\n> \n> I originally envisaged a \"shadow list\" operating in extension of the\n> main ARC list. This will require some re-coding, since the variables and\n> macros are all hard-coded to a single set of lists. No complaints, just\n> it will take a little longer than we all thought (for me, that is...)\n> \n> My proposal is to alter the code to allow an array of memory linked\n> lists. The actual list would be [0] - other additional lists would be \n> created dynamically as required i.e. not using IFDEFs, since I want this\n> to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab\n> work. This will then allow reporting against the additional lists, so\n> that cache hit ratios can be seen with various other \"prototype\"\n> shared_buffer settings.\n\nAll the existing lists live in shared memory, so that dynamic approach \nsuffers from the fact that the memory has to be allocated during ipc_init.\n\nWhat do you think about my other theory to make C actually 2x effective \ncache size and NOT to keep T1 in shared buffers but to assume T1 lives \nin the OS buffer cache?\n\n\nJan\n\n> \n> Any thoughts?\n> \n> \n> \n> ------------------------------------------------------------------------\n> \n> Index: freelist.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/storage/buffer/freelist.c,v\n> retrieving revision 1.48\n> diff -d -c -r1.48 freelist.c\n> *** freelist.c\t16 Sep 2004 16:58:31 -0000\t1.48\n> --- freelist.c\t22 Oct 2004 18:15:38 -0000\n> ***************\n> *** 126,131 ****\n> --- 126,133 ----\n> \tif (StrategyControl->stat_report + DebugSharedBuffers < now)\n> \t{\n> \t\tlong\t\tall_hit,\n> + buf_used,\n> + b_hit,\n> \t\t\t\t\tb1_hit,\n> \t\t\t\t\tt1_hit,\n> \t\t\t\t\tt2_hit,\n> ***************\n> *** 155,161 ****\n> \t\t}\n> \n> \t\tif (StrategyControl->num_lookup == 0)\n> ! 
\t\t\tall_hit = b1_hit = t1_hit = t2_hit = b2_hit = 0;\n> \t\telse\n> \t\t{\n> \t\t\tb1_hit = (StrategyControl->num_hit[STRAT_LIST_B1] * 100 /\n> --- 157,163 ----\n> \t\t}\n> \n> \t\tif (StrategyControl->num_lookup == 0)\n> ! \t\t\tall_hit = buf_used = b_hit = b1_hit = t1_hit = t2_hit = b2_hit = 0;\n> \t\telse\n> \t\t{\n> \t\t\tb1_hit = (StrategyControl->num_hit[STRAT_LIST_B1] * 100 /\n> ***************\n> *** 166,181 ****\n> \t\t\t\t\t StrategyControl->num_lookup);\n> \t\t\tb2_hit = (StrategyControl->num_hit[STRAT_LIST_B2] * 100 /\n> \t\t\t\t\t StrategyControl->num_lookup);\n> ! \t\t\tall_hit = b1_hit + t1_hit + t2_hit + b2_hit;\n> \t\t}\n> \n> \t\terrcxtold = error_context_stack;\n> \t\terror_context_stack = NULL;\n> ! \t\telog(DEBUG1, \"ARC T1target=%5d B1len=%5d T1len=%5d T2len=%5d B2len=%5d\",\n> \t\t\t T1_TARGET, B1_LENGTH, T1_LENGTH, T2_LENGTH, B2_LENGTH);\n> ! \t\telog(DEBUG1, \"ARC total =%4ld%% B1hit=%4ld%% T1hit=%4ld%% T2hit=%4ld%% B2hit=%4ld%%\",\n> \t\t\t all_hit, b1_hit, t1_hit, t2_hit, b2_hit);\n> ! \t\telog(DEBUG1, \"ARC clean buffers at LRU T1= %5d T2= %5d\",\n> \t\t\t t1_clean, t2_clean);\n> \t\terror_context_stack = errcxtold;\n> \n> --- 168,187 ----\n> \t\t\t\t\t StrategyControl->num_lookup);\n> \t\t\tb2_hit = (StrategyControl->num_hit[STRAT_LIST_B2] * 100 /\n> \t\t\t\t\t StrategyControl->num_lookup);\n> ! \t\t\tall_hit = t1_hit + t2_hit;\n> ! \t\t\tb_hit = b1_hit + b2_hit;\n> ! buf_used = T1_LENGTH + T2_LENGTH;\n> \t\t}\n> \n> \t\terrcxtold = error_context_stack;\n> \t\terror_context_stack = NULL;\n> ! \t\telog(DEBUG1, \"shared_buffers used=%8ld cache hits=%4ld%% turbulence=%4ld%%\",\n> ! \t\t\t buf_used, all_hit, b_hit);\n> ! \t\telog(DEBUG2, \"ARC T1target=%5d B1len=%5d T1len=%5d T2len=%5d B2len=%5d\",\n> \t\t\t T1_TARGET, B1_LENGTH, T1_LENGTH, T2_LENGTH, B2_LENGTH);\n> ! \t\telog(DEBUG2, \"ARC total =%4ld%% B1hit=%4ld%% T1hit=%4ld%% T2hit=%4ld%% B2hit=%4ld%%\",\n> \t\t\t all_hit, b1_hit, t1_hit, t2_hit, b2_hit);\n> ! \t\telog(DEBUG2, \"ARC clean buffers at LRU T1= %5d T2= %5d\",\n> \t\t\t t1_clean, t2_clean);\n> \t\terror_context_stack = errcxtold;\n> \n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Fri, 22 Oct 2004 15:35:49 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ARC Memory Usage analysis" }, { "msg_contents": "On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:\n> On 10/22/2004 2:50 PM, Simon Riggs wrote:\n> \n> >I've been using the ARC debug options to analyse memory usage on the\n> >PostgreSQL 8.0 server. This is a precursor to more complex performance\n> >analysis work on the OSDL test suite.\n> >\n> >I've simplified some of the ARC reporting into a single log line, which\n> >is enclosed here as a patch on freelist.c. 
This includes reporting of:\n> >- the total memory in use, which wasn't previously reported\n> >- the cache hit ratio, which was slightly incorrectly calculated\n> >- a useful-ish value for looking at the \"B\" lists in ARC\n> >(This is a patch against cvstip, but I'm not sure whether this has\n> >potential for inclusion in 8.0...)\n> >\n> >The total memory in use is useful because it allows you to tell whether\n> >shared_buffers is set too high. If it is set too high, then memory usage\n> >will continue to grow slowly up to the max, without any corresponding\n> >increase in cache hit ratio. If shared_buffers is too small, then memory\n> >usage will climb quickly and linearly to its maximum.\n> >\n> >The last one I've called \"turbulence\" in an attempt to ascribe some\n> >useful meaning to B1/B2 hits - I've tried a few other measures though\n> >without much success. Turbulence is the hit ratio of B1+B2 lists added\n> >together. By observation, this is zero when ARC gives smooth operation,\n> >and goes above zero otherwise. Typically, turbulence occurs when\n> >shared_buffers is too small for the working set of the database/workload\n> >combination and ARC repeatedly re-balances the lengths of T1/T2 as a\n> >result of \"near-misses\" on the B1/B2 lists. Turbulence doesn't usually\n> >cut in until the cache is fully utilized, so there is usually some delay\n> >after startup.\n> >\n> >We also recently discussed that I would add some further memory analysis\n> >features for 8.1, so I've been trying to figure out how.\n> >\n> >The idea that B1, B2 represent something really useful doesn't seem to\n> >have been borne out - though I'm open to persuasion there.\n> >\n> >I originally envisaged a \"shadow list\" operating in extension of the\n> >main ARC list. This will require some re-coding, since the variables and\n> >macros are all hard-coded to a single set of lists. No complaints, just\n> >it will take a little longer than we all thought (for me, that is...)\n> >\n> >My proposal is to alter the code to allow an array of memory linked\n> >lists. The actual list would be [0] - other additional lists would be \n> >created dynamically as required i.e. not using IFDEFs, since I want this\n> >to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab\n> >work. This will then allow reporting against the additional lists, so\n> >that cache hit ratios can be seen with various other \"prototype\"\n> >shared_buffer settings.\n> \n> All the existing lists live in shared memory, so that dynamic approach \n> suffers from the fact that the memory has to be allocated during ipc_init.\n> \n> What do you think about my other theory to make C actually 2x effective \n> cache size and NOT to keep T1 in shared buffers but to assume T1 lives \n> in the OS buffer cache?\n> \n> \n> Jan\n> \nJan,\n\n From the articles that I have seen on the ARC algorithm, I do not think\nthat using the effective cache size to set C would be a win. The design\nof the ARC process is to allow the cache to optimize its use in response\nto the actual workload. It may be the best use of the cache in some cases\nto have the entire cache allocated to T1 and similarly for T2. 
If fact,\nthe ability to alter the behavior as needed is one of the key advantages.\n\n--Ken\n", "msg_date": "Fri, 22 Oct 2004 15:09:55 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ARC Memory Usage analysis" }, { "msg_contents": "On Fri, 2004-10-22 at 20:35, Jan Wieck wrote:\n> On 10/22/2004 2:50 PM, Simon Riggs wrote:\n> \n> > \n> > My proposal is to alter the code to allow an array of memory linked\n> > lists. The actual list would be [0] - other additional lists would be \n> > created dynamically as required i.e. not using IFDEFs, since I want this\n> > to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab\n> > work. This will then allow reporting against the additional lists, so\n> > that cache hit ratios can be seen with various other \"prototype\"\n> > shared_buffer settings.\n> \n> All the existing lists live in shared memory, so that dynamic approach \n> suffers from the fact that the memory has to be allocated during ipc_init.\n> \n\n[doh] - dreaming again. Yes of course, server startup it is then. [That\nway, we can include the memory for it at server startup, then allow the\nGUC to be turned off after a while to avoid another restart?]\n\n> What do you think about my other theory to make C actually 2x effective \n> cache size and NOT to keep T1 in shared buffers but to assume T1 lives \n> in the OS buffer cache?\n\nSummarised like that, I understand it.\n\nMy observation is that performance varies significantly between startups\nof the database, which does indicate that the OS cache is working well.\nSo, yes it does seem as if we have a 3 tier cache. I understand you to\nbe effectively suggesting that we go back to having just a 2-tier cache.\n\nI guess we've got two options:\n1. Keep ARC as it is, but just allocate much of the available physical\nmemory to shared_buffers, so you know that effective_cache_size is low\nand that its either in T1 or its on disk.\n2. Alter ARC so that we experiment with the view that T1 is in the OS\nand T2 is in shared_buffers, we don't bother keeping T1. (as you say)\n\nHmmm...I think I'll pass on trying to judge its effectiveness -\nsimplifying things is likely to make it easier to understand and predict\nbehaviour. It's well worth trying, and it seems simple enough to make a\npatch that keeps T1target at zero.\n\ni.e. Scientific method: conjecture + experimental validation = theory\n\nIf you make up a patch, probably against BETA4, Josh and I can include it in the performance testing that I'm hoping we can do over the next few weeks.\n\nWhatever makes 8.0 a high performance release is well worth it.\n\nBest Regards, \n\nSimon Riggs\n\n", "msg_date": "Fri, 22 Oct 2004 21:21:39 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ARC Memory Usage analysis" }, { "msg_contents": "On 10/22/2004 4:21 PM, Simon Riggs wrote:\n\n> On Fri, 2004-10-22 at 20:35, Jan Wieck wrote:\n>> On 10/22/2004 2:50 PM, Simon Riggs wrote:\n>> \n>> > \n>> > My proposal is to alter the code to allow an array of memory linked\n>> > lists. The actual list would be [0] - other additional lists would be \n>> > created dynamically as required i.e. not using IFDEFs, since I want this\n>> > to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab\n>> > work. 
This will then allow reporting against the additional lists, so\n>> > that cache hit ratios can be seen with various other \"prototype\"\n>> > shared_buffer settings.\n>> \n>> All the existing lists live in shared memory, so that dynamic approach \n>> suffers from the fact that the memory has to be allocated during ipc_init.\n>> \n> \n> [doh] - dreaming again. Yes of course, server startup it is then. [That\n> way, we can include the memory for it at server startup, then allow the\n> GUC to be turned off after a while to avoid another restart?]\n> \n>> What do you think about my other theory to make C actually 2x effective \n>> cache size and NOT to keep T1 in shared buffers but to assume T1 lives \n>> in the OS buffer cache?\n> \n> Summarised like that, I understand it.\n> \n> My observation is that performance varies significantly between startups\n> of the database, which does indicate that the OS cache is working well.\n> So, yes it does seem as if we have a 3 tier cache. I understand you to\n> be effectively suggesting that we go back to having just a 2-tier cache.\n\nEffectively yes, just with the difference that we keep a pseudo T1 list \nand hope that what we are tracking there is what the OS is caching. As \nsaid before, if the effective cache size is set properly, that is what \nshould happen.\n\n> \n> I guess we've got two options:\n> 1. Keep ARC as it is, but just allocate much of the available physical\n> memory to shared_buffers, so you know that effective_cache_size is low\n> and that its either in T1 or its on disk.\n> 2. Alter ARC so that we experiment with the view that T1 is in the OS\n> and T2 is in shared_buffers, we don't bother keeping T1. (as you say)\n> \n> Hmmm...I think I'll pass on trying to judge its effectiveness -\n> simplifying things is likely to make it easier to understand and predict\n> behaviour. It's well worth trying, and it seems simple enough to make a\n> patch that keeps T1target at zero.\n\nNot keeping T1target at zero, because that would keep T2 at the size of \nshared_buffers. What I suspect is that in the current calculation the \nT1target is underestimated. It is incremented on B1 hits, but B1 is only \nof T2 size. What it currently tells is what got pushed from T1 into the \nOS cache. It could well be that it would work much more effective if it \nwould fuzzily tell what got pushed out of the OS cache to disk.\n\n\nJan\n\n> \n> i.e. Scientific method: conjecture + experimental validation = theory\n> \n> If you make up a patch, probably against BETA4, Josh and I can include it in the performance testing that I'm hoping we can do over the next few weeks.\n> \n> Whatever makes 8.0 a high performance release is well worth it.\n> \n> Best Regards, \n> \n> Simon Riggs\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Fri, 22 Oct 2004 16:29:20 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ARC Memory Usage analysis" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> What do you think about my other theory to make C actually 2x effective \n> cache size and NOT to keep T1 in shared buffers but to assume T1 lives \n> in the OS buffer cache?\n\nWhat will you do when initially fetching a page? 
It's not supposed to\ngo directly into T2 on first use, but we're going to have some\ndifficulty accessing a page that's not in shared buffers. I don't think\nyou can equate the T1/T2 dichotomy to \"is in shared buffers or not\".\n\nYou could maybe have a T3 list of \"pages that aren't in shared buffers\nanymore but we think are still in OS buffer cache\", but what would be\nthe point? It'd be a sufficiently bad model of reality as to be pretty\nmuch useless for stats gathering, I'd think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 2004 16:45:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis " }, { "msg_contents": "On Fri, 2004-10-22 at 21:45, Tom Lane wrote:\n> Jan Wieck <[email protected]> writes:\n> > What do you think about my other theory to make C actually 2x effective \n> > cache size and NOT to keep T1 in shared buffers but to assume T1 lives \n> > in the OS buffer cache?\n> \n> What will you do when initially fetching a page? It's not supposed to\n> go directly into T2 on first use, but we're going to have some\n> difficulty accessing a page that's not in shared buffers. I don't think\n> you can equate the T1/T2 dichotomy to \"is in shared buffers or not\".\n> \n\nYes, there are issues there. I want Jan to follow his thoughts through.\nThis is important enough that its worth it - there's only a few even\nattempting this.\n\n> You could maybe have a T3 list of \"pages that aren't in shared buffers\n> anymore but we think are still in OS buffer cache\", but what would be\n> the point? It'd be a sufficiently bad model of reality as to be pretty\n> much useless for stats gathering, I'd think.\n> \n\nThe OS cache is in many ways a wild horse, I agree. Jan is trying to\nthink of ways to harness it, whereas I had mostly ignored it - but its\nthere. Raw disk usage never allowed this opportunity.\n\nFor high performance systems, we can assume that the OS cache is ours to\nplay with - what will we do with it? We need to use it for some\npurposes, yet would like to ignore it for others.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 22 Oct 2004 23:01:05 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On 10/22/2004 4:09 PM, Kenneth Marshall wrote:\n\n> On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:\n>> On 10/22/2004 2:50 PM, Simon Riggs wrote:\n>> \n>> >I've been using the ARC debug options to analyse memory usage on the\n>> >PostgreSQL 8.0 server. This is a precursor to more complex performance\n>> >analysis work on the OSDL test suite.\n>> >\n>> >I've simplified some of the ARC reporting into a single log line, which\n>> >is enclosed here as a patch on freelist.c. This includes reporting of:\n>> >- the total memory in use, which wasn't previously reported\n>> >- the cache hit ratio, which was slightly incorrectly calculated\n>> >- a useful-ish value for looking at the \"B\" lists in ARC\n>> >(This is a patch against cvstip, but I'm not sure whether this has\n>> >potential for inclusion in 8.0...)\n>> >\n>> >The total memory in use is useful because it allows you to tell whether\n>> >shared_buffers is set too high. If it is set too high, then memory usage\n>> >will continue to grow slowly up to the max, without any corresponding\n>> >increase in cache hit ratio. 
If shared_buffers is too small, then memory\n>> >usage will climb quickly and linearly to its maximum.\n>> >\n>> >The last one I've called \"turbulence\" in an attempt to ascribe some\n>> >useful meaning to B1/B2 hits - I've tried a few other measures though\n>> >without much success. Turbulence is the hit ratio of B1+B2 lists added\n>> >together. By observation, this is zero when ARC gives smooth operation,\n>> >and goes above zero otherwise. Typically, turbulence occurs when\n>> >shared_buffers is too small for the working set of the database/workload\n>> >combination and ARC repeatedly re-balances the lengths of T1/T2 as a\n>> >result of \"near-misses\" on the B1/B2 lists. Turbulence doesn't usually\n>> >cut in until the cache is fully utilized, so there is usually some delay\n>> >after startup.\n>> >\n>> >We also recently discussed that I would add some further memory analysis\n>> >features for 8.1, so I've been trying to figure out how.\n>> >\n>> >The idea that B1, B2 represent something really useful doesn't seem to\n>> >have been borne out - though I'm open to persuasion there.\n>> >\n>> >I originally envisaged a \"shadow list\" operating in extension of the\n>> >main ARC list. This will require some re-coding, since the variables and\n>> >macros are all hard-coded to a single set of lists. No complaints, just\n>> >it will take a little longer than we all thought (for me, that is...)\n>> >\n>> >My proposal is to alter the code to allow an array of memory linked\n>> >lists. The actual list would be [0] - other additional lists would be \n>> >created dynamically as required i.e. not using IFDEFs, since I want this\n>> >to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab\n>> >work. This will then allow reporting against the additional lists, so\n>> >that cache hit ratios can be seen with various other \"prototype\"\n>> >shared_buffer settings.\n>> \n>> All the existing lists live in shared memory, so that dynamic approach \n>> suffers from the fact that the memory has to be allocated during ipc_init.\n>> \n>> What do you think about my other theory to make C actually 2x effective \n>> cache size and NOT to keep T1 in shared buffers but to assume T1 lives \n>> in the OS buffer cache?\n>> \n>> \n>> Jan\n>> \n> Jan,\n> \n>>From the articles that I have seen on the ARC algorithm, I do not think\n> that using the effective cache size to set C would be a win. The design\n> of the ARC process is to allow the cache to optimize its use in response\n> to the actual workload. It may be the best use of the cache in some cases\n> to have the entire cache allocated to T1 and similarly for T2. If fact,\n> the ability to alter the behavior as needed is one of the key advantages.\n\nOnly the \"working set\" of the database, that is the pages that are very \nfrequently used, are worth holding in shared memory at all. The rest \nshould be copied in and out of the OS disc buffers.\n\nThe problem is, with a too small directory ARC cannot guesstimate what \nmight be in the kernel buffers. Nor can it guesstimate what recently was \nin the kernel buffers and got pushed out from there. That results in a \nway too small B1 list, and therefore we don't get B1 hits when in fact \nthe data was found in memory. B1 hits is what increases the T1target, \nand since we are missing them with a too small directory size, our \nimplementation of ARC is propably using a T2 size larger than the \nworking set. 
That is not optimal.\n\nIf we would replace the dynamic T1 buffers with a max_backends*2 area of \nshared buffers, use a C value representing the effective cache size and \nlimit the T1target on the lower bound to effective cache size - shared \nbuffers, then we basically moved the T1 cache into the OS buffers.\n\nThis all only holds water, if the OS is allowed to swap out shared \nmemory. And that was my initial question, how likely is it to find this \nto be true these days?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Mon, 25 Oct 2004 11:34:25 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ARC Memory Usage analysis" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> This all only holds water, if the OS is allowed to swap out shared \n> memory. And that was my initial question, how likely is it to find this \n> to be true these days?\n\nI think it's more likely that not that the OS will consider shared\nmemory to be potentially swappable. On some platforms there is a shmctl\ncall you can make to lock your shmem in memory, but (a) we don't use it\nand (b) it may well require privileges we haven't got anyway.\n\nThis has always been one of the arguments against making shared_buffers\nreally large, of course --- if the buffers aren't all heavily used, and\nthe OS decides to swap them to disk, you are worse off than you would\nhave been with a smaller shared_buffers setting.\n\n\nHowever, I'm still really nervous about the idea of using\neffective_cache_size to control the ARC algorithm. That number is\nusually entirely bogus. Right now it is only a second-order influence\non certain planner estimates, and I am afraid to rely on it any more\nheavily than that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Oct 2004 12:03:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis " }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> However, I'm still really nervous about the idea of using\n> effective_cache_size to control the ARC algorithm. That number is\n> usually entirely bogus. \n\nIt wouldn't be too hard to have a port-specific function that tries to guess\nthe total amount of memory. That isn't always right but it's at least a better\nballpark default than a fixed arbitrary value.\n\nHowever I wonder about another approach entirely. If postgres timed how long\nreads took it shouldn't find it very hard to distinguish between a cached\nbuffer being copied and an actual i/o operation. It should be able to track\nthe percentage of time that buffers requested are in the kernel's cache and\nuse that directly instead of the estimated cache size.\n\nAdding two gettimeofdays to every read call would be annoyingly expensive. 
But\na port-specific function to check the cpu instruction counter could be useful.\nIt doesn't have to be an accurate measurement of time (such as on some\nmulti-processor machines) as long as it's possible to distinguish when a slow\ndisk operation has occurred from when no disk operation has occurred.\n\n-- \ngreg\n\n", "msg_date": "25 Oct 2004 13:14:39 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n\n> However I wonder about another approach entirely. If postgres timed how long\n> reads took it shouldn't find it very hard to distinguish between a cached\n> buffer being copied and an actual i/o operation. It should be able to track\n> the percentage of time that buffers requested are in the kernel's cache and\n> use that directly instead of the estimated cache size.\n\nI tested this with a program that times seeking to random locations in a file.\nIt's pretty easy to spot the break point. There are very few fetches that take\nbetween 50us and 1700us, probably they come from the drive's onboard cache.\n\nThe 1700us bound probably would be lower for high end server equipment with\n10k RPM drives and RAID arrays. But I doubt it will ever come close to the\n100us edge, not without features like cache ram that Postgres would be better\noff considering to be part of \"effective_cache\" anyways.\n\nSo I would suggest using something like 100us as the threshold for determining\nwhether a buffer fetch came from cache.\n\nHere are two graphs, one showing a nice curve showing how disk seek times are\ndistributed. It's neat to look at for that alone:\n\n\n\n\nThis is the 1000 fastest data points zoomed to the range under 1800us:\n\n\n\n\nThis is the program I wrote to test this:\n\n\n\n\nHere are the commands I used to generate the graphs:\n\n$ dd bs=1M count=1024 if=/dev/urandom of=/tmp/big\n$ ./a.out 10000 /tmp/big > /tmp/l\n$ gnuplot\ngnuplot> set terminal png\ngnuplot> set output \"/tmp/plot1.png\"\ngnuplot> plot '/tmp/l2' with points pointtype 1 pointsize 1\ngnuplot> set output \"/tmp/plot2.png\"\ngnuplot> plot [0:2000] [0:1000] '/tmp/l2' with points pointtype 1 pointsize 1\n\n\n\n-- \ngreg", "msg_date": "25 Oct 2004 17:30:52 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> So I would suggest using something like 100us as the threshold for\n> determining whether a buffer fetch came from cache.\n\nI see no reason to hardwire such a number. On any hardware, the\ndistribution is going to be double-humped, and it will be pretty easy to\ndetermine a cutoff after minimal accumulation of data. The real question\nis whether we can afford a pair of gettimeofday() calls per read().\nThis isn't a big issue if the read actually results in I/O, but if it\ndoesn't, the percentage overhead could be significant.\n\nIf we assume that the effective_cache_size value isn't changing very\nfast, maybe it would be good enough to instrument only every N'th read\n(I'm imagining N on the order of 100) for this purpose. Or maybe we\nneed only instrument reads that are of blocks that are close to where\nthe ARC algorithm thinks the cache edge is.\n\nOne small problem is that the time measurement gives you only a lower\nbound on the time the read() actually took. 
In a heavily loaded system\nyou might not get the CPU back for long enough to fool you about whether\nthe block came from cache or not.\n\nAnother issue is what we do with the effective_cache_size value once we\nhave a number we trust. We can't readily change the size of the ARC\nlists on the fly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Oct 2004 17:53:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis " }, { "msg_contents": "On Mon, Oct 25, 2004 at 05:53:25PM -0400, Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > So I would suggest using something like 100us as the threshold for\n> > determining whether a buffer fetch came from cache.\n> \n> I see no reason to hardwire such a number. On any hardware, the\n> distribution is going to be double-humped, and it will be pretty easy to\n> determine a cutoff after minimal accumulation of data. The real question\n> is whether we can afford a pair of gettimeofday() calls per read().\n> This isn't a big issue if the read actually results in I/O, but if it\n> doesn't, the percentage overhead could be significant.\n> \nHow invasive would reading the \"CPU counter\" be, if it is available?\nA read operation should avoid flushing a cache line and we can throw\nout the obvious outliers since we only need an estimate and not the\nactual value.\n\n--Ken\n\n", "msg_date": "Mon, 25 Oct 2004 17:48:30 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> How invasive would reading the \"CPU counter\" be, if it is available?\n\nInvasive or not, this is out of the question; too unportable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Oct 2004 19:11:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis " }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> I see no reason to hardwire such a number. On any hardware, the\n> distribution is going to be double-humped, and it will be pretty easy to\n> determine a cutoff after minimal accumulation of data. \n\nWell my stats-fu isn't up to the task. My hunch is that the wide range that\nthe disk reads are spread out over will throw off more sophisticated\nalgorithms. Eliminating hardwired numbers is great, but practically speaking\nit's not like any hardware is ever going to be able to fetch the data within\n100us. If it does it's because it's really a solid state drive or pulling the\ndata from disk cache and therefore really ought to be considered part of\neffective_cache_size anyways.\n\n> The real question is whether we can afford a pair of gettimeofday() calls\n> per read(). This isn't a big issue if the read actually results in I/O, but\n> if it doesn't, the percentage overhead could be significant.\n\nMy thinking was to use gettimeofday by default but allow individual ports to\nprovide a replacement function that uses the cpu TSC counter (via rdtsc) or\nequivalent. Most processors provide such a feature. If it's not there then we\njust fall back to gettimeofday.\n\nYour idea to sample only 1% of the reads is a fine idea too.\n\nMy real question is different. Is it worth heading down this alley at all? Or\nwill postgres eventually opt to use O_DIRECT and boost the size of its buffer\ncache? 
If it goes the latter route, and I suspect it will one day, then all of\nthis is a waste of effort.\n\nI see mmap or O_DIRECT being the only viable long-term stable states. My\nnatural inclination was the former but after the latest thread on the subject\nI suspect it'll be forever out of reach. That makes O_DIRECT And a Postgres\nmanaged cache the only real choice. Having both caches is just a waste of\nmemory and a waste of cpu cycles.\n\n> Another issue is what we do with the effective_cache_size value once we\n> have a number we trust. We can't readily change the size of the ARC\n> lists on the fly.\n\nHuh? I thought effective_cache_size was just used as an factor the cost\nestimation equation. My general impression was that a higher\neffective_cache_size effectively lowered your random page cost by making the\nsystem think that fewer nonsequential block reads would really incur the cost.\nIs that wrong? Is it used for anything else?\n\n-- \ngreg\n\n", "msg_date": "26 Oct 2004 01:17:40 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "\nIs something broken with the list software? I'm receiving other emails from\nthe list but I haven't received any of the mails in this thread. I'm only able\nto follow the thread based on the emails people are cc'ing to me directly.\n\nI think I've caught this behaviour in the past as well. Is it a misguided list\nsoftware feature trying to avoid duplicates or something like that? It makes\nit really hard to follow threads in MUAs with good filtering since they're\nfragmented between two mailboxes.\n\n-- \ngreg\n\n", "msg_date": "26 Oct 2004 01:27:29 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> Another issue is what we do with the effective_cache_size value once we\n>> have a number we trust. We can't readily change the size of the ARC\n>> lists on the fly.\n\n> Huh? I thought effective_cache_size was just used as an factor the cost\n> estimation equation.\n\nToday, that is true. Jan is speculating about using it as a parameter\nof the ARC cache management algorithm ... and that worries me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Oct 2004 01:53:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis " }, { "msg_contents": "On Tue, 26 Oct 2004, Greg Stark wrote:\n\n> I see mmap or O_DIRECT being the only viable long-term stable states. My\n> natural inclination was the former but after the latest thread on the subject\n> I suspect it'll be forever out of reach. That makes O_DIRECT And a Postgres\n> managed cache the only real choice. Having both caches is just a waste of\n> memory and a waste of cpu cycles.\n\nI don't see why mmap is any more out of reach than O_DIRECT; it's not\nall that much harder to implement, and mmap (and madvise!) 
is more\nwidely available.\n\nBut if using two caches is only costing us 1% in performance, there's\nnot really much point....\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.NetBSD.org\n Make up enjoying your city life...produced by BIC CAMERA\n", "msg_date": "Tue, 26 Oct 2004 15:04:58 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On Mon, 2004-10-25 at 16:34, Jan Wieck wrote: \n> The problem is, with a too small directory ARC cannot guesstimate what \n> might be in the kernel buffers. Nor can it guesstimate what recently was \n> in the kernel buffers and got pushed out from there. That results in a \n> way too small B1 list, and therefore we don't get B1 hits when in fact \n> the data was found in memory. B1 hits is what increases the T1target, \n> and since we are missing them with a too small directory size, our \n> implementation of ARC is propably using a T2 size larger than the \n> working set. That is not optimal.\n\nI think I have seen that the T1 list shrinks \"too much\", but need more\ntests...with some good test results\n\nThe effectiveness of ARC relies upon the balance between the often\nconflicting requirements of \"recency\" and \"frequency\". It seems\npossible, even likely, that pgsql's version of ARC may need some subtle\nchanges to rebalance it - if we are unlikely enough to find cases where\nit genuinely is out of balance. Many performance tests are required,\ntogether with a few ideas on extra parameters to include....hence my\nsupport of Jan's ideas.\n\nThat's also why I called the B1+B2 hit ratio \"turbulence\" because it\nrelates to how much oscillation is happening between T1 and T2. In\nphysical systems, we expect the oscillations to be damped, but there is\nno guarantee that we have a nearly critically damped oscillator. (Note\nthat the absence of turbulence doesn't imply that T1+T2 is optimally\nsized, just that is balanced).\n\n[...and all though the discussion has wandered away from my original\npatch...would anybody like to commit, or decline the patch?]\n\n> If we would replace the dynamic T1 buffers with a max_backends*2 area of \n> shared buffers, use a C value representing the effective cache size and \n> limit the T1target on the lower bound to effective cache size - shared \n> buffers, then we basically moved the T1 cache into the OS buffers.\n\nLimiting the minimum size of T1len to be 2* maxbackends sounds like an\neasy way to prevent overbalancing of T2, but I would like to follow up\non ways to have T1 naturally stay larger. I'll do a patch with this idea\nin, for testing. I'll call this \"T1 minimum size\" so we can discuss it.\n\nAny other patches are welcome...\n\nIt could be that B1 is too small and so we could use a larger value of C\nto keep track of more blocks. I think what is being suggested is two\nGUCs: shared_buffers (as is), plus another one, larger, which would\nallow us to track what is in shared_buffers and what is in OS cache. \n\nI have comments on \"effective cache size\" below....\n\nOn Mon, 2004-10-25 at 17:03, Tom Lane wrote:\n> Jan Wieck <[email protected]> writes:\n> > This all only holds water, if the OS is allowed to swap out shared \n> > memory. And that was my initial question, how likely is it to find this \n> > to be true these days?\n> \n> I think it's more likely that not that the OS will consider shared\n> memory to be potentially swappable. 
On some platforms there is a shmctl\n> call you can make to lock your shmem in memory, but (a) we don't use it\n> and (b) it may well require privileges we haven't got anyway.\n\nAre you saying we shouldn't, or we don't yet? I simply assumed that we\ndid use that function - surely it must be at least an option? RHEL\nsupports this at least....\n\nIt may well be that we don't have those privileges, in which case we\nturn off the option. Often, we (or I?) will want to install a dedicated\nserver, so we should have all the permissions we need, in which case...\n\n> This has always been one of the arguments against making shared_buffers\n> really large, of course --- if the buffers aren't all heavily used, and\n> the OS decides to swap them to disk, you are worse off than you would\n> have been with a smaller shared_buffers setting.\n\nNot really, just an argument against making them *too* large. Large\n*and* utilised is OK, so we need ways of judging optimal sizing.\n\n> However, I'm still really nervous about the idea of using\n> effective_cache_size to control the ARC algorithm. That number is\n> usually entirely bogus. Right now it is only a second-order influence\n> on certain planner estimates, and I am afraid to rely on it any more\n> heavily than that.\n\n...ah yes, effective_cache_size.\n\nThe manual describes effective_cache_size as if it had something to do\nwith the OS, and some of this discussion has picked up on that.\n\neffective_cache_size is used in only two places in the code (both in the\nplanner), as an estimate for calculating the cost of a) nonsequential\naccess and b) index access, mainly as a way of avoiding overestimates of\naccess costs for small tables.\n\nThere is absolutely no implication in the code that effective_cache_size\nmeasures anything in the OS; what it gives is an estimate of the number\nof blocks that will be available from *somewhere* in memory (i.e. in\nshared_buffers OR OS cache) for one particular table (the one currently\nbeing considered by the planner).\n\nCrucially, the \"size\" referred to is the size of the *estimate*, not the\nsize of the OS cache (nor the size of the OS cache + shared_buffers). So\nsetting effective_cache_size = total memory available or setting\neffective_cache_size = total memory - shared_buffers are both wildly\nirrelevant things to do, or any assumption that directly links memory\nsize to that parameter. So talking about \"effective_cache_size\" as if it\nwere the OS cache isn't the right thing to do.\n\n...It could be that we use a very high % of physical memory as\nshared_buffers - in which case the effective_cache_size would represent\nthe contents of shared_buffers.\n\nNote also that the planner assumes that all tables are equally likely to\nbe in cache. 
Increasing effective_cache_size in postgresql.conf seems\ndestined to give the wrong answer in planning unless you absolutely\nunderstand what it does.\n\nI will submit a patch to correct the description in the manual.\n\nFurther comments:\nThe two estimates appear to use effective_cache_size differently:\na) assumes that a table of size effective_cache_size will be 50% in\ncache\nb) assumes that effective_cache_size blocks are available, so for a\ntable of size == effective_cache_size, then it will be 100% available\n\nIMHO the GUC should be renamed \"estimated_cached_blocks\", with the old\nname deprecated to force people to re-read the manual description of\nwhat effective_cache_size means and then set accordingly.....all of that\nin 8.0....\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 26 Oct 2004 09:49:15 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On Tue, 2004-10-26 at 06:53, Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > Tom Lane <[email protected]> writes:\n> >> Another issue is what we do with the effective_cache_size value once we\n> >> have a number we trust. We can't readily change the size of the ARC\n> >> lists on the fly.\n> \n> > Huh? I thought effective_cache_size was just used as an factor the cost\n> > estimation equation.\n> \n> Today, that is true. Jan is speculating about using it as a parameter\n> of the ARC cache management algorithm ... and that worries me.\n> \n\nISTM that we should be optimizing the use of shared_buffers, not whats\noutside. Didn't you (Tom) already say that?\n\nBTW, very good ideas on how to proceed, but why bother?\n\nFor me, if the sysadmin didn't give shared_buffers to PostgreSQL, its\nbecause the memory is intended for use by something else and so not\navailable at all. At least not dependably. The argument against large\nshared_buffers because of swapping applies to that assumption also...the\nOS cache is too volatile to attempt to gauge sensibly.\n\nThere's an argument for improving performance for people that haven't\nset their parameters correctly, but thats got to be a secondary\nconsideration anyhow.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 26 Oct 2004 10:50:58 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On Tue, 2004-10-26 at 09:49, Simon Riggs wrote:\n> On Mon, 2004-10-25 at 16:34, Jan Wieck wrote: \n> > The problem is, with a too small directory ARC cannot guesstimate what \n> > might be in the kernel buffers. Nor can it guesstimate what recently was \n> > in the kernel buffers and got pushed out from there. That results in a \n> > way too small B1 list, and therefore we don't get B1 hits when in fact \n> > the data was found in memory. B1 hits is what increases the T1target, \n> > and since we are missing them with a too small directory size, our \n> > implementation of ARC is propably using a T2 size larger than the \n> > working set. 
That is not optimal.\n> \n> I think I have seen that the T1 list shrinks \"too much\", but need more\n> tests...with some good test results\n> \n> > If we would replace the dynamic T1 buffers with a max_backends*2 area of \n> > shared buffers, use a C value representing the effective cache size and \n> > limit the T1target on the lower bound to effective cache size - shared \n> > buffers, then we basically moved the T1 cache into the OS buffers.\n> \n> Limiting the minimum size of T1len to be 2* maxbackends sounds like an\n> easy way to prevent overbalancing of T2, but I would like to follow up\n> on ways to have T1 naturally stay larger. I'll do a patch with this idea\n> in, for testing. I'll call this \"T1 minimum size\" so we can discuss it.\n> \n\nDon't know whether you've seen this latest update on the ARC idea:\nSorav Bansal and Dharmendra S. Modha, \nCAR: Clock with Adaptive Replacement,\n in Proceedings of the USENIX Conference on File and Storage Technologies\n (FAST), pages 187--200, March 2004.\n[I picked up the .pdf here http://citeseer.ist.psu.edu/bansal04car.html]\n\nIn that paper Bansal and Modha introduce an update to ARC called CART\nwhich they say is more appropriate for databases. Their idea is to\nintroduce a \"temporal locality window\" as a way of making sure that\nblocks called twice within a short period don't fall out of T1, though\ndon't make it into T2 either. Strangely enough the \"temporal locality\nwindow\" is made by increasing the size of T1... in an adpative way, of\ncourse.\n\nIf we were going to put a limit on the minimum size of T1, then this\nwould put a minimal \"temporal locality window\" in place....rather than\nthe increased complexity they go to in order to make T1 larger. I note\ntest results from both the ARC and CAR papers that show that T2 usually\nrepresents most of C, so the observations that T1 is very small is not\natypical. That implies that the cost of managing the temporal locality\nwindow in CART is usually wasted, even though it does cut in as an\noverall benefit: The results show that CART is better than ARC over the\nwhole range of cache sizes tested (16MB to 4GB) and workloads (apart\nfrom 1 out 22).\n\nIf we were to implement a minimum size of T1, related as suggested to\nnumber of users, then this would provide a reasonable approximation of\nthe temporal locality window. This wouldn't prevent the adaptation of T1\nto be higher than this when required.\n\nJan has already optimised ARC for PostgreSQL by the addition of a\nspecial lookup on transactionId required to optimise for the double\ncache lookup of select/update that occurs on a T1 hit. That seems likely\nto be able to be removed as a result of having a larger T1.\n\nI'd suggest limiting T1 to be a value of:\nshared_buffers <= 1000\t\tT1limit = max_backends *0.75\nshared_buffers <= 2000\t\tT1limit = max_backends\nshared_buffers <= 5000\t\tT1limit = max_backends *1.5\nshared_buffers > 5000\t\tT1limit = max_backends *2\n\nI'll try some tests with both\n- minimum size of T1\n- update optimisation removed\n\nThoughts?\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 26 Oct 2004 13:18:25 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "\nCurt Sampson <[email protected]> writes:\n\n> On Tue, 26 Oct 2004, Greg Stark wrote:\n> \n> > I see mmap or O_DIRECT being the only viable long-term stable states. 
My\n> > natural inclination was the former but after the latest thread on the subject\n> > I suspect it'll be forever out of reach. That makes O_DIRECT And a Postgres\n> > managed cache the only real choice. Having both caches is just a waste of\n> > memory and a waste of cpu cycles.\n> \n> I don't see why mmap is any more out of reach than O_DIRECT; it's not\n> all that much harder to implement, and mmap (and madvise!) is more\n> widely available.\n\nBecause there's no way to prevent a write-out from occurring and no way to be\nnotified by mmap before a write-out occurs, and Postgres wants to do its WAL\nlogging at that time if it hasn't already happened.\n\n> But if using two caches is only costing us 1% in performance, there's\n> not really much point....\n\nWell firstly it depends on the work profile. It can probably get much higher\nthan we saw in that profile if your work load is causing more fresh buffers to\nbe fetched.\n\nSecondly it also reduces the amount of cache available. If you have 256M of\nram with about 200M free, and 40Mb of ram set aside for Postgres's buffer\ncache then you really only get 160Mb. It's costing you 20% of your cache, and\nreducing the cache hit rate accordingly.\n\nThirdly the kernel doesn't know as much as Postgres about the load. Postgres\ncould optimize its use of cache based on whether it knows the data is being\nloaded by a vacuum or sequential scan rather than an index lookup. In practice\nPostgres has gone with ARC which I suppose a kernel could implement anyways,\nbut afaik neither linux nor BSD choose to do anything like it.\n\n-- \ngreg\n\n", "msg_date": "26 Oct 2004 11:30:23 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "Simon,\n\nAs a postgres DBA, I find your comments about how not to use \neffective_cache_size instructive, but I'm still not sure how I should \narrive at a target value for it.\n\nOn most of the machines on which I admin postgres, I generally set \nshared_buffers to 10,000 (using what seems to have been the recent \nconventional wisdom of the lesser of 10,000 or 10% of RAM). I haven't \nreally settled on an optimal value for effective_cache_size, and now \nI'm again confused as to how I might even benchmark it.\n\nHere are the documents on which I've based my knowledge:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html#effcache\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/node8.html\n\n From Bruce's document, I gather that effective_cache_size would assume \nthat either shared buffers or unused RAM were valid sources of cached \npages for the purposes of assessing plans.\n\nAs a result, I was intending to inflate the value of \neffective_cache_size to closer to the amount of unused RAM on some of \nthe machines I admin (once I've verified that they all have a unified \nbuffer cache). Is that correct?\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Oct 26, 2004, at 3:49 AM, Simon Riggs wrote:\n\n> On Mon, 2004-10-25 at 16:34, Jan Wieck wrote:\n>> The problem is, with a too small directory ARC cannot guesstimate what\n>> might be in the kernel buffers. Nor can it guesstimate what recently \n>> was\n>> in the kernel buffers and got pushed out from there. 
That results in a\n>> way too small B1 list, and therefore we don't get B1 hits when in fact\n>> the data was found in memory. B1 hits is what increases the T1target,\n>> and since we are missing them with a too small directory size, our\n>> implementation of ARC is propably using a T2 size larger than the\n>> working set. That is not optimal.\n>\n> I think I have seen that the T1 list shrinks \"too much\", but need more\n> tests...with some good test results\n>\n> The effectiveness of ARC relies upon the balance between the often\n> conflicting requirements of \"recency\" and \"frequency\". It seems\n> possible, even likely, that pgsql's version of ARC may need some subtle\n> changes to rebalance it - if we are unlikely enough to find cases where\n> it genuinely is out of balance. Many performance tests are required,\n> together with a few ideas on extra parameters to include....hence my\n> support of Jan's ideas.\n>\n> That's also why I called the B1+B2 hit ratio \"turbulence\" because it\n> relates to how much oscillation is happening between T1 and T2. In\n> physical systems, we expect the oscillations to be damped, but there is\n> no guarantee that we have a nearly critically damped oscillator. (Note\n> that the absence of turbulence doesn't imply that T1+T2 is optimally\n> sized, just that is balanced).\n>\n> [...and all though the discussion has wandered away from my original\n> patch...would anybody like to commit, or decline the patch?]\n>\n>> If we would replace the dynamic T1 buffers with a max_backends*2 area \n>> of\n>> shared buffers, use a C value representing the effective cache size \n>> and\n>> limit the T1target on the lower bound to effective cache size - shared\n>> buffers, then we basically moved the T1 cache into the OS buffers.\n>\n> Limiting the minimum size of T1len to be 2* maxbackends sounds like an\n> easy way to prevent overbalancing of T2, but I would like to follow up\n> on ways to have T1 naturally stay larger. I'll do a patch with this \n> idea\n> in, for testing. I'll call this \"T1 minimum size\" so we can discuss it.\n>\n> Any other patches are welcome...\n>\n> It could be that B1 is too small and so we could use a larger value of \n> C\n> to keep track of more blocks. I think what is being suggested is two\n> GUCs: shared_buffers (as is), plus another one, larger, which would\n> allow us to track what is in shared_buffers and what is in OS cache.\n>\n> I have comments on \"effective cache size\" below....\n>\n> On Mon, 2004-10-25 at 17:03, Tom Lane wrote:\n>> Jan Wieck <[email protected]> writes:\n>>> This all only holds water, if the OS is allowed to swap out shared\n>>> memory. And that was my initial question, how likely is it to find \n>>> this\n>>> to be true these days?\n>>\n>> I think it's more likely that not that the OS will consider shared\n>> memory to be potentially swappable. On some platforms there is a \n>> shmctl\n>> call you can make to lock your shmem in memory, but (a) we don't use \n>> it\n>> and (b) it may well require privileges we haven't got anyway.\n>\n> Are you saying we shouldn't, or we don't yet? I simply assumed that we\n> did use that function - surely it must be at least an option? RHEL\n> supports this at least....\n>\n> It may well be that we don't have those privileges, in which case we\n> turn off the option. Often, we (or I?) 
will want to install a dedicated\n> server, so we should have all the permissions we need, in which case...\n>\n>> This has always been one of the arguments against making \n>> shared_buffers\n>> really large, of course --- if the buffers aren't all heavily used, \n>> and\n>> the OS decides to swap them to disk, you are worse off than you would\n>> have been with a smaller shared_buffers setting.\n>\n> Not really, just an argument against making them *too* large. Large\n> *and* utilised is OK, so we need ways of judging optimal sizing.\n>\n>> However, I'm still really nervous about the idea of using\n>> effective_cache_size to control the ARC algorithm. That number is\n>> usually entirely bogus. Right now it is only a second-order influence\n>> on certain planner estimates, and I am afraid to rely on it any more\n>> heavily than that.\n>\n> ...ah yes, effective_cache_size.\n>\n> The manual describes effective_cache_size as if it had something to do\n> with the OS, and some of this discussion has picked up on that.\n>\n> effective_cache_size is used in only two places in the code (both in \n> the\n> planner), as an estimate for calculating the cost of a) nonsequential\n> access and b) index access, mainly as a way of avoiding overestimates \n> of\n> access costs for small tables.\n>\n> There is absolutely no implication in the code that \n> effective_cache_size\n> measures anything in the OS; what it gives is an estimate of the number\n> of blocks that will be available from *somewhere* in memory (i.e. in\n> shared_buffers OR OS cache) for one particular table (the one currently\n> being considered by the planner).\n>\n> Crucially, the \"size\" referred to is the size of the *estimate*, not \n> the\n> size of the OS cache (nor the size of the OS cache + shared_buffers). \n> So\n> setting effective_cache_size = total memory available or setting\n> effective_cache_size = total memory - shared_buffers are both wildly\n> irrelevant things to do, or any assumption that directly links memory\n> size to that parameter. So talking about \"effective_cache_size\" as if \n> it\n> were the OS cache isn't the right thing to do.\n>\n> ...It could be that we use a very high % of physical memory as\n> shared_buffers - in which case the effective_cache_size would represent\n> the contents of shared_buffers.\n>\n> Note also that the planner assumes that all tables are equally likely \n> to\n> be in cache. 
Increasing effective_cache_size in postgresql.conf seems\n> destined to give the wrong answer in planning unless you absolutely\n> understand what it does.\n>\n> I will submit a patch to correct the description in the manual.\n>\n> Further comments:\n> The two estimates appear to use effective_cache_size differently:\n> a) assumes that a table of size effective_cache_size will be 50% in\n> cache\n> b) assumes that effective_cache_size blocks are available, so for a\n> table of size == effective_cache_size, then it will be 100% available\n>\n> IMHO the GUC should be renamed \"estimated_cached_blocks\", with the old\n> name deprecated to force people to re-read the manual description of\n> what effective_cache_size means and then set accordingly.....all of \n> that\n> in 8.0....\n>\n> -- \n> Best Regards, Simon Riggs\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> [email protected])\n\n", "msg_date": "Tue, 26 Oct 2004 19:09:22 -0500", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] [HACKERS] ARC Memory Usage analysis" }, { "msg_contents": "Thomas,\n\n> As a result, I was intending to inflate the value of\n> effective_cache_size to closer to the amount of unused RAM on some of\n> the machines I admin (once I've verified that they all have a unified\n> buffer cache). Is that correct?\n\nCurrently, yes. Right now, e_c_s is used just to inform the planner and make \nindex vs. table scan and join order decisions.\n\nThe problem which Simon is bringing up is part of a discussion about doing \n*more* with the information supplied by e_c_s. He points out that it's not \nreally related to the *real* probability of any particular table being \ncached. At least, if I'm reading him right.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 26 Oct 2004 17:39:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On Wed, 26 Oct 2004, Greg Stark wrote:\n\n> > I don't see why mmap is any more out of reach than O_DIRECT; it's not\n> > all that much harder to implement, and mmap (and madvise!) is more\n> > widely available.\n>\n> Because there's no way to prevent a write-out from occurring and no way to be\n> notified by mmap before a write-out occurs, and Postgres wants to do its WAL\n> logging at that time if it hasn't already happened.\n\nI already described a solution to that problem in a post earlier in this\nthread (a write queue on the block). I may even have described it on\nthis list a couple of years ago, that being about the time I thought\nit up. (The mmap idea just won't die, but at least I wasn't the one to\nbring it up this time. :-))\n\n> Well firstly it depends on the work profile. It can probably get much higher\n> than we saw in that profile....\n\nTrue, but 1% was is much, much lower than I'd expected. That tells me\nthat my intuitive idea of the performance model is wrong, which means,\nfor me at least, it's time to shut up or put up some benchmarks.\n\n> Secondly it also reduces the amount of cache available. If you have 256M of\n> ram with about 200M free, and 40Mb of ram set aside for Postgres's buffer\n> cache then you really only get 160Mb. 
It's costing you 20% of your cache, and\n> reducing the cache hit rate accordingly.\n\nYeah, no question about that.\n\n> Thirdly the kernel doesn't know as much as Postgres about the load. Postgres\n> could optimize its use of cache based on whether it knows the data is being\n> loaded by a vacuum or sequential scan rather than an index lookup. In practice\n> Postgres has gone with ARC which I suppose a kernel could implement anyways,\n> but afaik neither linux nor BSD choose to do anything like it.\n\nmadvise().\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.NetBSD.org\n Make up enjoying your city life...produced by BIC CAMERA\n", "msg_date": "Wed, 27 Oct 2004 10:32:13 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On Mon, 2004-10-25 at 23:53, Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > Tom Lane <[email protected]> writes:\n> >> Another issue is what we do with the effective_cache_size value once we\n> >> have a number we trust. We can't readily change the size of the ARC\n> >> lists on the fly.\n> \n> > Huh? I thought effective_cache_size was just used as an factor the cost\n> > estimation equation.\n> \n> Today, that is true. Jan is speculating about using it as a parameter\n> of the ARC cache management algorithm ... and that worries me.\n\nBecause it's so often set wrong I take it. But if it's set right, and\nit makes the the database faster to pay attention to it, then I'd be in\nfavor of it. Or at least having a switch to turn on the ARC buffer's\nability to look at it.\n\nOr is it some other issue, having to do with the idea of knowing\neffective cache size cause a positive effect overall on the ARC\nalgorhythm?\n\n", "msg_date": "Tue, 26 Oct 2004 20:30:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On Mon, Oct 25, 2004 at 11:34:25AM -0400, Jan Wieck wrote:\n> On 10/22/2004 4:09 PM, Kenneth Marshall wrote:\n> \n> > On Fri, Oct 22, 2004 at 03:35:49PM -0400, Jan Wieck wrote:\n> >> On 10/22/2004 2:50 PM, Simon Riggs wrote:\n> >> \n> >> >I've been using the ARC debug options to analyse memory usage on the\n> >> >PostgreSQL 8.0 server. This is a precursor to more complex performance\n> >> >analysis work on the OSDL test suite.\n> >> >\n> >> >I've simplified some of the ARC reporting into a single log line, which\n> >> >is enclosed here as a patch on freelist.c. This includes reporting of:\n> >> >- the total memory in use, which wasn't previously reported\n> >> >- the cache hit ratio, which was slightly incorrectly calculated\n> >> >- a useful-ish value for looking at the \"B\" lists in ARC\n> >> >(This is a patch against cvstip, but I'm not sure whether this has\n> >> >potential for inclusion in 8.0...)\n> >> >\n> >> >The total memory in use is useful because it allows you to tell whether\n> >> >shared_buffers is set too high. If it is set too high, then memory usage\n> >> >will continue to grow slowly up to the max, without any corresponding\n> >> >increase in cache hit ratio. If shared_buffers is too small, then memory\n> >> >usage will climb quickly and linearly to its maximum.\n> >> >\n> >> >The last one I've called \"turbulence\" in an attempt to ascribe some\n> >> >useful meaning to B1/B2 hits - I've tried a few other measures though\n> >> >without much success. 
Turbulence is the hit ratio of B1+B2 lists added\n> >> >together. By observation, this is zero when ARC gives smooth operation,\n> >> >and goes above zero otherwise. Typically, turbulence occurs when\n> >> >shared_buffers is too small for the working set of the database/workload\n> >> >combination and ARC repeatedly re-balances the lengths of T1/T2 as a\n> >> >result of \"near-misses\" on the B1/B2 lists. Turbulence doesn't usually\n> >> >cut in until the cache is fully utilized, so there is usually some delay\n> >> >after startup.\n> >> >\n> >> >We also recently discussed that I would add some further memory analysis\n> >> >features for 8.1, so I've been trying to figure out how.\n> >> >\n> >> >The idea that B1, B2 represent something really useful doesn't seem to\n> >> >have been borne out - though I'm open to persuasion there.\n> >> >\n> >> >I originally envisaged a \"shadow list\" operating in extension of the\n> >> >main ARC list. This will require some re-coding, since the variables and\n> >> >macros are all hard-coded to a single set of lists. No complaints, just\n> >> >it will take a little longer than we all thought (for me, that is...)\n> >> >\n> >> >My proposal is to alter the code to allow an array of memory linked\n> >> >lists. The actual list would be [0] - other additional lists would be \n> >> >created dynamically as required i.e. not using IFDEFs, since I want this\n> >> >to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab\n> >> >work. This will then allow reporting against the additional lists, so\n> >> >that cache hit ratios can be seen with various other \"prototype\"\n> >> >shared_buffer settings.\n> >> \n> >> All the existing lists live in shared memory, so that dynamic approach \n> >> suffers from the fact that the memory has to be allocated during ipc_init.\n> >> \n> >> What do you think about my other theory to make C actually 2x effective \n> >> cache size and NOT to keep T1 in shared buffers but to assume T1 lives \n> >> in the OS buffer cache?\n> >> \n> >> \n> >> Jan\n> >> \n> > Jan,\n> > \n> >>From the articles that I have seen on the ARC algorithm, I do not think\n> > that using the effective cache size to set C would be a win. The design\n> > of the ARC process is to allow the cache to optimize its use in response\n> > to the actual workload. It may be the best use of the cache in some cases\n> > to have the entire cache allocated to T1 and similarly for T2. If fact,\n> > the ability to alter the behavior as needed is one of the key advantages.\n> \n> Only the \"working set\" of the database, that is the pages that are very \n> frequently used, are worth holding in shared memory at all. The rest \n> should be copied in and out of the OS disc buffers.\n> \n> The problem is, with a too small directory ARC cannot guesstimate what \n> might be in the kernel buffers. Nor can it guesstimate what recently was \n> in the kernel buffers and got pushed out from there. That results in a \n> way too small B1 list, and therefore we don't get B1 hits when in fact \n> the data was found in memory. B1 hits is what increases the T1target, \n> and since we are missing them with a too small directory size, our \n> implementation of ARC is propably using a T2 size larger than the \n> working set. 
That is not optimal.\n> \n> If we would replace the dynamic T1 buffers with a max_backends*2 area of \n> shared buffers, use a C value representing the effective cache size and \n> limit the T1target on the lower bound to effective cache size - shared \n> buffers, then we basically moved the T1 cache into the OS buffers.\n> \n> This all only holds water, if the OS is allowed to swap out shared \n> memory. And that was my initial question, how likely is it to find this \n> to be true these days?\n> \n> \n> Jan\n> \n\nI've asked our linux kernel guys some quick questions and they say\nyou can lock mmapped memory and sys v shared memory with mlock and\nSHM_LOCK, resp. Otherwise the OS will swap out memory as it sees\nfit, whether or not it's shared.\n\nMark\n", "msg_date": "Wed, 27 Oct 2004 14:34:22 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ARC Memory Usage analysis" }, { "msg_contents": "Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > So I would suggest using something like 100us as the threshold for\n> > determining whether a buffer fetch came from cache.\n> \n> I see no reason to hardwire such a number. On any hardware, the\n> distribution is going to be double-humped, and it will be pretty easy to\n> determine a cutoff after minimal accumulation of data. The real question\n> is whether we can afford a pair of gettimeofday() calls per read().\n> This isn't a big issue if the read actually results in I/O, but if it\n> doesn't, the percentage overhead could be significant.\n> \n> If we assume that the effective_cache_size value isn't changing very\n> fast, maybe it would be good enough to instrument only every N'th read\n> (I'm imagining N on the order of 100) for this purpose. Or maybe we\n> need only instrument reads that are of blocks that are close to where\n> the ARC algorithm thinks the cache edge is.\n\nIf it's decided to instrument reads, then perhaps an even better use\nof it would be to tune random_page_cost. If the storage manager knows\nthe difference between a sequential scan and a random scan, then it\nshould easily be able to measure the actual performance it gets for\neach and calculate random_page_cost based on the results.\n\nWhile the ARC lists can't be tuned on the fly, random_page_cost can.\n\n> One small problem is that the time measurement gives you only a lower\n> bound on the time the read() actually took. In a heavily loaded system\n> you might not get the CPU back for long enough to fool you about whether\n> the block came from cache or not.\n\nTrue, but that's information that you'd want to factor into the\nperformance measurements anyway. The database needs to know how much\nwall clock time it takes for it to fetch a page under various\ncircumstances from disk via the OS. For determining whether or not\nthe read() hit the disk instead of just OS cache, what would matter is\nthe average difference between the two. That's admittedly a problem\nif the difference is less than the noise, though, but at the same time\nthat would imply that given the circumstances it really doesn't matter\nwhether or not the page was fetched from disk: the difference is small\nenough that you could consider them equivalent.\n\n\nYou don't need 100% accuracy for this stuff, just statistically\nsignificant accuracy.\n\n\n> Another issue is what we do with the effective_cache_size value once\n> we have a number we trust. 
We can't readily change the size of the\n> ARC lists on the fly.\n\nCompare it with the current value, and notify the DBA if the values\nare significantly different? Perhaps write the computed value to a\nfile so the DBA can look at it later?\n\nSame with other values that are computed on the fly. In fact, it\nmight make sense to store them in a table that gets periodically\nupdated, and load their values from that table, and then the values in\npostgresql.conf or the command line would be the default that's used\nif there's nothing in the table (and if you really want fine-grained\ncontrol of this process, you could stick a boolean column in the table\nto indicate whether or not to load the value from the table at startup\ntime).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Wed, 27 Oct 2004 20:20:45 -0700", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "On 10/26/2004 1:53 AM, Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n>> Tom Lane <[email protected]> writes:\n>>> Another issue is what we do with the effective_cache_size value once we\n>>> have a number we trust. We can't readily change the size of the ARC\n>>> lists on the fly.\n> \n>> Huh? I thought effective_cache_size was just used as an factor the cost\n>> estimation equation.\n> \n> Today, that is true. Jan is speculating about using it as a parameter\n> of the ARC cache management algorithm ... and that worries me.\n\nIf we need another config option, it's not that we are running out of \npossible names, is it?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Sat, 30 Oct 2004 09:40:39 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> On 10/26/2004 1:53 AM, Tom Lane wrote:\n>> Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>>> Another issue is what we do with the effective_cache_size value once we\n>>> have a number we trust. We can't readily change the size of the ARC\n>>> lists on the fly.\n>> \n> Huh? I thought effective_cache_size was just used as an factor the cost\n> estimation equation.\n>> \n>> Today, that is true. Jan is speculating about using it as a parameter\n>> of the ARC cache management algorithm ... and that worries me.\n\n> If we need another config option, it's not that we are running out of \n> possible names, is it?\n\nNo, the point is that the value is not very trustworthy at the moment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Oct 2004 12:53:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] ARC Memory Usage analysis " }, { "msg_contents": "Thomas F.O'Connell wrote:\n> \n> As a result, I was intending to inflate the value of \n> effective_cache_size to closer to the amount of unused RAM on some of \n> the machines I admin (once I've verified that they all have a unified \n> buffer cache). 
Is that correct?\n> \n\nEffective cache size is IMHO a \"bogus\" parameter on postgresql.conf,\nthis because:\n\n1) That parameter is not intended to instruct postgres to use that ram but\n is only an hint to the engine on what the \"DBA\" *believe* the OS cache\n memory for postgres\n2) This parameter change only the cost evaluation of plans ( and not soo\n much )\n\nso don't hope to double this parameter and push postgres to use more RAM.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n", "msg_date": "Mon, 01 Nov 2004 02:11:45 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] [HACKERS] ARC Memory Usage analysis" }, { "msg_contents": "On Wed, 2004-10-27 at 01:39, Josh Berkus wrote:\n> Thomas,\n> \n> > As a result, I was intending to inflate the value of\n> > effective_cache_size to closer to the amount of unused RAM on some of\n> > the machines I admin (once I've verified that they all have a unified\n> > buffer cache). Is that correct?\n> \n> Currently, yes. \n\nI now believe the answer to that is \"no, that is not fully correct\",\nfollowing investigation into how to set that parameter correctly.\n\n> Right now, e_c_s is used just to inform the planner and make \n> index vs. table scan and join order decisions.\n\nYes, I agree that is what e_c_s is used for.\n\n...lets go deeper:\n\neffective_cache_size is used to calculate the number of I/Os required to\nindex scan a table, which varies according to the size of the available\ncache (whether this be OS cache or shared_buffers). The reason to do\nthis is because whether a table is in cache can make a very great\ndifference to access times; *small* tables tend to be the ones that vary\nmost significantly. PostgreSQL currently uses the Mackert and Lohman\n[1989] equation to assess how much of a table is in cache in a blocked\nDBMS with a finite cache. \n\nThe Mackert and Lohman equation is accurate, as long as the parameter b\nis reasonably accurately set. [I'm discussing only the current behaviour\nhere, not what it can or should or could be] If it is incorrectly set,\nthen the equation will give the wrong answer for small tables. The same\nanswer (i.e. same asymptotic behaviour) is returned for very large\ntables, but they are the ones we didn't worry about anyway. Getting the\nequation wrong means you will choose sub-optimal plans, potentially\nreducing your performance considerably.\n\nAs I read it, effective_cache_size is equivalent to the parameter b,\ndefined as (p.3) \"minimum buffer size dedicated to a given scan\". M&L\nthey point out (p.3) \"We...do not consider interactions of\nmultiple users sharing the buffer for multiple file accesses\". \n\nEither way, M&L aren't talking about \"the total size of the cache\",\nwhich we would interpret to mean shared_buffers + OS cache, in our\neffort to not forget the beneficial effect of the OS cache. They use the\nphrase \"dedicated to a given scan\"....\n\nAFAICS \"effective_cache_size\" should be set to a value that reflects how\nmany other users of the cache there might be. If you know for certain\nyou're the only user, set it according to the existing advice. If you\nknow you aren't, then set it an appropriate factor lower. Setting that\naccurately on a system wide basis may clearly be difficult and setting\nit high will often be inappropriate.\n\nThe manual is not clear as to how to set effective_cache_size. 
Other\nadvice misses out the effect of the many scans/many tables issue and\nwill give the wrong answer for many calculations, and thus produce\nincorrect plans for 8.0 (and earlier releases also).\n\nThis is something that needs to be documented rather than a bug fix.\nIt's a complex one, so I'll await all of your objections before I write\na new doc patch.\n\n[Anyway, I do hope I've missed something somewhere in all that, though\nI've read their paper twice now. Fairly accessible, but requires\ninterpretation to the PostgreSQL case. Mackert and Lohman [1989] \"Index\nScans using a finite LRU buffer: A validated I/O model\"]\n\n> The problem which Simon is bringing up is part of a discussion about doing \n> *more* with the information supplied by e_c_s. He points out that it's not \n> really related to the *real* probability of any particular table being \n> cached. At least, if I'm reading him right.\n\nYes, that was how Jan originally meant to discuss it, but not what I meant.\n\nBest regards,\n\nSimon Riggs\n\n\n", "msg_date": "Mon, 01 Nov 2004 22:03:58 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] [PATCHES] ARC Memory Usage analysis" } ]
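A point the ARC thread above keeps returning to, and worth restating plainly: effective_cache_size is purely a planner input. Raising or lowering it never changes how much memory the server or the ARC lists actually use; it only shifts the cost the planner assigns to index scans through the Mackert and Lohman estimate Simon describes. A minimal way to observe this, sketched here with a hypothetical table big_table and indexed column indexed_col (neither comes from the thread), is to compare EXPLAIN output for the same query under two settings; on 7.4/8.0 the value is an integer count of 8kB pages:

-- effective_cache_size changes only planner cost estimates, never memory use.
-- big_table and indexed_col are assumed to exist; substitute your own names.
SHOW effective_cache_size;

SET effective_cache_size = 1000;      -- pretend very little is cached (~8MB of pages)
EXPLAIN SELECT * FROM big_table WHERE indexed_col BETWEEN 1000 AND 2000;

SET effective_cache_size = 100000;    -- pretend roughly 800MB of pages are cached
EXPLAIN SELECT * FROM big_table WHERE indexed_col BETWEEN 1000 AND 2000;

The two EXPLAIN runs should differ only in estimated costs, and possibly in the choice between an index scan and a sequential scan; actual shared buffer usage is identical either way, which is exactly why using the same number to drive ARC sizing worries Tom.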
[ { "msg_contents": "The following query has never finished. I have let it run for over 24 \nhours. This is a one time update that is part of a conversion script \nfrom MSSQL data. All of the tables are freshly built and inserted \ninto. I have not run explain analyze because it does not return in a \nreasonable time. Explain output is posted below. Any suggestions on \nsyntax changes or anything else to improve this would be appreciated.\n\nDual PIII 1Ghz\n4 GB RAM\n4 spindle IDE RAID 0 on LSI controller.\nPostgres 7.4.5\nLinux version 2.6.3-7mdk-p3-smp-64GB\n\npostgresql.cong snip:\n\ntcpip_socket = true\nmax_connections = 40\nshared_buffers = 1000\nsort_mem = 65536 \nfsync = true\n\n\nsource_song_title +-10,500,000 rows\nsource_song +-9,500,000 rows\nsource_system 10 rows\nsource_title +- 5,600,000\n\nCode run right before this query:\ncreate index ssa_source_song_id on source_song_artist (source_song_id);\nanalyze source_song_artist;\ncreate index sa_artist_id on source_artist (artist_id);\nanalyze source_artist;\ncreate index ss_source_song_id on source_song (source_song_id);\nanalyze source_song;\ncreate index st_title_id on source_title (title_id);\nanalyze source_title;\n\nsource_song.source_song_id = int4\nsource_song_title.source_song_id = int4\nsource_title.title_id = int4\nsource_song_title.title_id = int4\n\nupdate source_song_title set\nsource_song_title_id = nextval('source_song_title_seq')\n,licensing_match_order = (select licensing_match_order from \nsource_system where source_system_id = ss.source_system_id)\n,affiliation_match_order = (select affiliation_match_order from \nsource_system where source_system_id = ss.source_system_id)\n,title = st.title\nfrom source_song_title sst\njoin source_song ss on ss.source_song_id = sst.source_song_id\njoin source_title st on st.title_id = sst.title_id\nwhere source_song_title.source_song_id = sst.source_song_id;\n\n\nExplain output:\n\"Hash Join (cost=168589.60..16651072.43 rows=6386404 width=335)\"\n\" Hash Cond: (\"outer\".title_id = \"inner\".title_id)\"\n\" -> Merge Join (cost=0.00..1168310.61 rows=6386403 width=311)\"\n\" Merge Cond: (\"outer\".source_song_id = \"inner\".source_song_id)\"\n\" -> Merge Join (cost=0.00..679279.40 rows=6386403 width=16)\"\n\" Merge Cond: (\"outer\".source_song_id = \n\"inner\".source_song_id)\"\n\" -> Index Scan using source_song_title_pkey on \nsource_song_title sst (cost=0.00..381779.37 rows=10968719 width=8)\"\n\" -> Index Scan using ss_source_song_id on source_song ss \n(cost=0.00..190583.36 rows=6386403 width=8)\"\n\" -> Index Scan using source_song_title_pkey on \nsource_song_title (cost=0.00..381779.37 rows=10968719 width=303)\"\n\" -> Hash (cost=117112.08..117112.08 rows=5513808 width=32)\"\n\" -> Seq Scan on source_title st (cost=0.00..117112.08 \nrows=5513808 width=32)\"\n\" SubPlan\"\n\" -> Seq Scan on source_system (cost=0.00..1.14 rows=2 width=4)\"\n\" Filter: (source_system_id = $0)\"\n\" -> Seq Scan on source_system (cost=0.00..1.14 rows=2 width=2)\"\n\" Filter: (source_system_id = $0)\"\n\n", "msg_date": "Fri, 22 Oct 2004 12:06:30 -0700", "msg_from": "Roger Ging <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query" }, { "msg_contents": "Roger Ging <[email protected]> writes:\n> update source_song_title set\n> source_song_title_id = nextval('source_song_title_seq')\n> ,licensing_match_order = (select licensing_match_order from \n> source_system where source_system_id = ss.source_system_id)\n> ,affiliation_match_order = (select affiliation_match_order from \n> 
source_system where source_system_id = ss.source_system_id)\n> ,title = st.title\n> from source_song_title sst\n> join source_song ss on ss.source_song_id = sst.source_song_id\n> join source_title st on st.title_id = sst.title_id\n> where source_song_title.source_song_id = sst.source_song_id;\n\nWhy is \"source_song_title sst\" in there? To the extent that\nsource_song_id is not unique, you are multiply updating rows\nbecause of the self-join.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Oct 2004 16:37:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query " }, { "msg_contents": "Any time you run subqueries, it's going to slow down the update\nprocess a lot. Each record that is updated in source_song_title runs\ntwo additional queries. When I do large updates like this, I usualy\nRun a transaction that will select all the new data into a new table\non a join. For example\n\nSELECT \n a.*, \n b.licensing_match_order, \n b.affiliation_match_order, \n d.title\nINTO \n updated_data \nFROM \n source_song_title AS a\nINNER JOIN\n source_system AS b\nON\n b.id = d.id\nINNER JOIN\n source_song AS c\nON\n a.id = c.id\nINNER JOIN\n source_title AS d\nON\n a.id = d.id\n\nI'm not sure that query does what you want, but you get the idea. \nThen just drop the old table and rename the updated_data table. This\nway instead of doing a bunch of updates, you do one select and a\nrename.\n\n-Josh\n\nOn Fri, 22 Oct 2004 16:37:14 -0400, Tom Lane <[email protected]> wrote:\n> Roger Ging <[email protected]> writes:\n> > update source_song_title set\n> > source_song_title_id = nextval('source_song_title_seq')\n> > ,licensing_match_order = (select licensing_match_order from\n> > source_system where source_system_id = ss.source_system_id)\n> > ,affiliation_match_order = (select affiliation_match_order from\n> > source_system where source_system_id = ss.source_system_id)\n> > ,title = st.title\n> > from source_song_title sst\n> > join source_song ss on ss.source_song_id = sst.source_song_id\n> > join source_title st on st.title_id = sst.title_id\n> > where source_song_title.source_song_id = sst.source_song_id;\n> \n> Why is \"source_song_title sst\" in there? To the extent that\n> source_song_id is not unique, you are multiply updating rows\n> because of the self-join.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Sat, 23 Oct 2004 06:08:04 -0600", "msg_from": "Joshua Marsh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query" } ]
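Following Tom's point about the self-join, the statement can be rewritten so that source_song_title appears only once and the two correlated lookups on source_system become ordinary joins. This is an untested sketch; it assumes source_song_id is unique within source_song, title_id is unique within source_title, and source_system_id is unique within source_system, so that each target row is matched at most once:

UPDATE source_song_title
   SET source_song_title_id    = nextval('source_song_title_seq'),
       licensing_match_order   = sy.licensing_match_order,
       affiliation_match_order = sy.affiliation_match_order,
       title                   = st.title
  FROM source_song ss, source_system sy, source_title st
 WHERE ss.source_song_id   = source_song_title.source_song_id
   AND sy.source_system_id = ss.source_system_id
   AND st.title_id         = source_song_title.title_id;

With the extra alias gone, each row of source_song_title is written once rather than once per matching sst row, and the per-row subselects on source_system disappear from the plan.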
[ { "msg_contents": "I haven't read much FAQs but has anyone done some benchmarks with\ndifferent io schedulers in linux with postgresql?\n\nLook at these:\nhttp://kerneltrap.org/node/view/843\nhttp://simonraven.nuit.ca/blog/archives/2004/05/20/quikconf/\n\n\nMaybe its well known knowledge but i just found this information and\nhaven't had time to do my own testing yet.\n\nmvh\n\n-- \nBjᅵrn Bength | Systems Designer\n--\nCuralia AB | www.curalia.se\nTjᅵrhovsgatan 21, SE - 116 28 Stockholm, Sweden\nPhone: +46 (0)8-410 064 40\n--\n\n", "msg_date": "Sat, 23 Oct 2004 17:54:05 +0200", "msg_from": "Bjorn Bength <[email protected]>", "msg_from_op": true, "msg_subject": "different io elevators in linux" }, { "msg_contents": "Bjorn,\n\n> I haven't read much FAQs but has anyone done some benchmarks with\n> different io schedulers in linux with postgresql?\n\nAccording to OSDL, using the \"deadline\" scheduler sometimes results in a \nroughly 5% boost to performance, and sometimes none, depending on the \napplication. We use it for all testing, though, just in case.\n\n--Josh\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 25 Oct 2004 10:09:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: different io elevators in linux" }, { "msg_contents": "On Mon, Oct 25, 2004 at 10:09:17AM -0700, Josh Berkus wrote:\n> Bjorn,\n> \n> > I haven't read much FAQs but has anyone done some benchmarks with\n> > different io schedulers in linux with postgresql?\n> \n> According to OSDL, using the \"deadline\" scheduler sometimes results in a \n> roughly 5% boost to performance, and sometimes none, depending on the \n> application. We use it for all testing, though, just in case.\n> \n> --Josh\n> \n\nYes, we found with an OLTP type workload, the as scheduler performs\nabout 5% worse than the deadline scheduler, where in a DSS type\nworkload there really isn't much difference. The former doing a\nmix of reading/writing, where the latter is doing mostly reading.\n\nMark\n", "msg_date": "Wed, 27 Oct 2004 14:40:48 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: different io elevators in linux" } ]
[ { "msg_contents": "Hi, \n \nHas anybody got any ideas on my recent posting ? (thanks in advance) :-\n \n \nI have a problem where a query inside a function is up to 100 times slower\ninside a function than as a stand alone query run in psql.\n \nThe column 'botnumber' is a character(10), is indexed and there are 125000\nrows in the table.\n \nHelp please!\n \nThis query is fast:-\n \nexplain analyze \n SELECT batchserial\n FROM transbatch\n WHERE botnumber = '1-7'\n LIMIT 1;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------\n Limit (cost=0.00..0.42 rows=1 width=4) (actual time=0.73..148.23 rows=1\nloops=1)\n -> Index Scan using ind_tbatchx on transbatch (cost=0.00..18.73 rows=45\nwidth=4) (actual time=0.73..148.22 rows=1 loops=1)\n Index Cond: (botnumber = '1-7'::bpchar)\n Total runtime: 148.29 msec\n(4 rows)\n \n \nThis function is slow:-\n \nCREATE OR REPLACE FUNCTION sp_test_rod3 ( ) returns integer \nas '\nDECLARE\n bot char(10);\n oldbatch INTEGER;\nBEGIN\n \n bot := ''1-7'';\n \n SELECT INTO oldbatch batchserial\n FROM transbatch\n WHERE botnumber = bot\n LIMIT 1;\n \n IF FOUND THEN\n RETURN 1;\n ELSE\n RETURN 0;\n END IF;\n \nEND;\n'\nlanguage plpgsql ;\n\n \nexplain analyze SELECT sp_test_rod3();\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=1452.39..1452.40\nrows=1 loops=1)\n Total runtime: 1452.42 msec\n(2 rows)\n\n\n\n\n\n\n \n\nHi, \n \nHas anybody got any ideas on my \nrecent posting ? (thanks in advance) :-\n \n \nI have a problem \nwhere a query inside a function is up to 100 times slower inside a function than \nas a stand alone query run in psql.\n \nThe column \n'botnumber' is a character(10), is indexed and there are 125000 rows in the \ntable.\n \nHelp \nplease!\n \nThis query is \nfast:-\n \nexplain \nanalyze   \n  \nSELECT batchserial  FROM transbatch  WHERE botnumber = \n'1-7'  LIMIT 1;\n                                                           \nQUERY \nPLAN                                                           \n-------------------------------------------------------------------------------------------------------------------------------- Limit  \n(cost=0.00..0.42 rows=1 width=4) (actual time=0.73..148.23 rows=1 \nloops=1)   ->  Index Scan using ind_tbatchx on \ntransbatch  (cost=0.00..18.73 rows=45 width=4) (actual time=0.73..148.22 \nrows=1 loops=1)         Index Cond: \n(botnumber = '1-7'::bpchar) Total runtime: 148.29 msec(4 \nrows)\n \n \nThis \nfunction is slow:-\n \nCREATE \nOR REPLACE FUNCTION  sp_test_rod3 ( ) returns \ninteger          as \n'DECLARE  bot char(10);  oldbatch \nINTEGER;BEGIN\n \n  \nbot := ''1-7'';\n \n  \nSELECT INTO oldbatch batchserial  FROM transbatch  WHERE \nbotnumber = bot  LIMIT 1;\n \n  \nIF FOUND THEN    RETURN 1;  \nELSE    RETURN 0;  END \nIF;\n \nEND;'language plpgsql  ;\n \nexplain analyze SELECT sp_test_rod3();\n                                       \nQUERY \nPLAN                                       \n---------------------------------------------------------------------------------------- Result  \n(cost=0.00..0.01 rows=1 width=0) (actual time=1452.39..1452.40 rows=1 \nloops=1) Total runtime: 1452.42 msec(2 \nrows)", "msg_date": "Sun, 24 Oct 2004 19:13:23 +0100", "msg_from": "\"Rod Dutton\" <[email protected]>", "msg_from_op": true, "msg_subject": "Queries slow using stored procedures" }, { 
"msg_contents": "Rod Dutton wrote:\n> \n> Hi, \n> \n> Has anybody got any ideas on my recent posting ? (thanks in advance) :-\n> \n> \n> I have a problem where a query inside a function is up to 100 times \n> slower inside a function than as a stand alone query run in psql.\n> \n> The column 'botnumber' is a character(10), is indexed and there are \n> 125000 rows in the table.\n> \n\n[...]\n\nI had a similar problem before, where the function version (stored \nprocedure or prepared query) was much slower. I had a bunch of tables \nall with references to another table. I was querying all of the \nreferences to see if anyone from any of the tables was referencing a \nparticular row in the base table.\n\nIt turned out that one of the child tables was referencing the same row \n300,000/500,000 times. So if I happened to pick *that* number, postgres \nwanted to a sequential scan because of all the potential results. In my \ntesting, I never picked that number, so it was very fast, since it knew \nit wouldn't get in trouble.\n\nIn the case of the stored procedure, it didn't know which number I was \ngoing to ask for, so it had to plan for the worst, and *always* do a \nsequential scan.\n\nSo the question is... In your table, does the column \"botnumber\" have \nthe same value repeated many, many times, but '1-7' only occurs a few?\n\nIf you change the function to:\n\nCREATE OR REPLACE FUNCTION sp_test_rod3 ( ) returns integer\nas '\nDECLARE\n bot char(10);\n oldbatch INTEGER;\nBEGIN\n\n SELECT INTO oldbatch batchserial\n FROM transbatch\n WHERE botnumber = ''1-7''\n LIMIT 1;\n\n IF FOUND THEN\n RETURN 1;\n ELSE\n RETURN 0;\n END IF;\n\nEND;\n'\nlanguage plpgsql ;\n\n\nIs it still slow?\n\nI don't know if you could get crazy with something like:\n\nselect 1 where exist(select from transbatch where botnumber = '1-7' \nlimit 1);\n\nJust some thoughts about where *I've* found performance to change \nbetween functions versus raw SQL.\n\nYou probably should also mention what version of postgres you are \nrunning (and possibly what your hardware is)\n\nJohn\n=:->", "msg_date": "Sun, 24 Oct 2004 13:25:49 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries slow using stored procedures" } ]
[ { "msg_contents": "Rod Dutton wrote:\n> Thank John,\n> \n> I am running Postgres 7.3.7 on a Dell PowerEdge 6600 Server with Quad Xeon\n> 2.7GHz processors with 16GB RAM and 12 x 146GB drives in Raid 10 (OS, WAL,\n> Data all on separate arrays).\n> \n\nYou might want think about upgraded to 7.4, as I know it is better at \nquite a few things. But I'm not all that experienced (I just had a \nsimilar problem).\n\n> I did try hard coding botnumber as you suggested and it was FAST. So it\n> does look like the scenario that you have explained. \n> \n\nThere are 2 ways of doing it that I know of. First, you can make you \nfunction create a query and execute it. Something like:\n\nEXECUTE ''SELECT 1 FROM transbatch WHERE botnumber = ''\n\t|| quote_literal(botnum)\n\t|| '' LIMIT 1'';\n\nThat forces the database to redesign the query each time. The problem \nyou are having is a stored procedure has to prepare the query in advance.\n\n> \n>>does the column \"botnumber\" have the same value repeated many, many times,\n> \n> but '1-7' only occurs a few?\n> \n> Yes, that could be the case, the table fluctuates massively from small to\n> big to small regularly with a real mixture of occurrences of these values\n> i.e. some values are repeated many times and some occur only a few times.\n> \n> I wonder if the answer is to: a) don't use a stored procedure b) up the\n> statistics gathering for that column ?\n> \n\nI don't believe increasing statistics will help, as prepared statements \nrequire one-size-fits-all queries.\n\n> I will try your idea: select 1 where exist(select from transbatch where\n> botnumber = '1-7' limit 1);\n> \n> Also, how can I get \"EXPLAIN\" output from the internals of the stored\n> procedure as that would help me?\n> \n\nI believe the only way to get explain is to use prepared statements \ninstead of stored procedures. For example:\n\nPREPARE my_plan(char(10)) AS SELECT 1 FROM transbatch\n\tWHERE botnumber = $1 LIMIT 1;\n\nEXPLAIN ANALYZE EXECUTE my_plan('1-7');\n\n\n> Many thanks,\n> \n> Rod\n> \n\nIf you have to do the first thing I mentioned, I'm not sure if you are \ngetting much out of your function, so you might prefer to just ask the \nquestion directly.\n\nWhat really surprises me is that it doesn't use the index even after the \nLIMIT clause. But I just did a check on my machine where I had a column \nwith lots of repeated entries, and it didn't use the index.\n\nSo a question for the true Guru's (like Tom Lane):\n\nWhy doesn't postgres use an indexed query if you supply a LIMIT?\n\nJohn\n=:->", "msg_date": "Sun, 24 Oct 2004 14:08:59 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries slow using stored procedures" } ]
[ { "msg_contents": "Rod Dutton wrote:\n> I also should add that the sp is only slow when the table is big (probably\n> obvious!).\n> \n> Rod \n\nSure, the problem is it is switching to a sequential search, with a lot \nof rows, versus doing an indexed search.\n\nIt's all about trying to figure out how to fix that, especially for any \nvalue of botnum. I would have hoped that using LIMIT 1 would have fixed \nthat.\n\nJohn\n=:->", "msg_date": "Sun, 24 Oct 2004 14:11:49 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries slow using stored procedures" } ]
[ { "msg_contents": "I was looking into another problem, and I found something that surprised \nme. If I'm doing \"SELECT * FROM mytable WHERE col = 'myval' LIMIT 1.\".\nNow \"col\" is indexed, by mytable has 500,000 rows, and 'myval' occurs \nmaybe 100,000 times. Without the LIMIT, this query should definitely do \na sequential scan.\n\nBut with the LIMIT, doesn't it know that it will return at max 1 value, \nand thus be able to use the index?\n\nIt seems to be doing the LIMIT too late.\n\nThe real purpose of this query is to check to see if a value exists in \nthe column, so there might be a better way of doing it. Here is the demo \ninfo:\n\n# select count(*) from finst_t;\n542315\n\n# select count(*) from finst_t where store_id = 539960;\n85076\n\n# explain analyze select id from finst_t where store_id = 539960 limit 1;\n QUERY PLAN\n \n-------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.13 rows=1 width=4) (actual time=860.000..860.000 \nrows=1 loops=1)\n -> Seq Scan on finst_t (cost=0.00..11884.94 rows=88217 width=4) \n(actual time=860.000..860.000 rows=1 loops=1)\n Filter: (store_id = 539960)\n Total runtime: 860.000 ms\n\nNotice that the \"actual rows=1\", meaning it is aware of the limit as it \nis going through the table. But for some reason the planner thinks it is \ngoing to return 88,217 rows. (This is close to the reality of 85076 if \nit actually had to find all of the rows).\n\nNow, if I do a select on a value that *does* only have 1 value, it works \nfine:\n\n# explain analyze select id from finst_t where store_id = 9605 limit 1;\n QUERY PLAN\n\n \n------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.96 rows=1 width=4) (actual time=0.000..0.000 \nrows=1 loops=1)\n -> Index Scan using finst_t_store_id_idx on finst_t \n(cost=0.00..3.96 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)\n Index Cond: (store_id = 9605)\n Total runtime: 0.000 ms\n\nAnd 1 further thing, I *can* force it to do a fast index scan if I \ndisable sequential scanning.\n\n# set enable_seqscan to off;\n# explain analyze select id from finst_t where store_id = 539960 limit 1;\n QUERY \nPLAN\n\n \n------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.59 rows=1 width=4) (actual time=0.000..0.000 \nrows=1 loops=1)\n -> Index Scan using finst_t_store_id_idx on finst_t \n(cost=0.00..140417.22 rows=88217 width=4) (actual time=0.000..0.000 \nrows=1 loops=1)\n Index Cond: (store_id = 539960)\n Total runtime: 0.000 ms\n\nCould being aware of LIMIT be added to the planner? Is there a better \nway to check for existence?\n\nJohn\n=:->\n\nPS> I'm using postgres 8.0-beta3 on win32 (the latest installer).", "msg_date": "Sun, 24 Oct 2004 14:27:59 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Sequential Scan with LIMIT" }, { "msg_contents": "John Meinel <[email protected]> writes:\n> I was looking into another problem, and I found something that surprised \n> me. If I'm doing \"SELECT * FROM mytable WHERE col = 'myval' LIMIT 1.\".\n> Now \"col\" is indexed, by mytable has 500,000 rows, and 'myval' occurs \n> maybe 100,000 times. 
Without the LIMIT, this query should definitely do \n> a sequential scan.\n\n> But with the LIMIT, doesn't it know that it will return at max 1 value, \n> and thus be able to use the index?\n\nBut the LIMIT will cut the cost of the seqscan case too. Given the\nnumbers you posit above, about one row in five will have 'myval', so a\nseqscan can reasonably expect to hit the first matching row in the first\npage of the table. This is still cheaper than doing an index scan\n(which must require reading at least one index page plus at least one\ntable page).\n\nThe test case you are showing is probably suffering from nonrandom\nplacement of this particular data value; which is something that the\nstatistics we keep are too crude to detect.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Oct 2004 16:11:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT " }, { "msg_contents": "Tom Lane wrote:\n> John Meinel <[email protected]> writes:\n> \n>>I was looking into another problem, and I found something that surprised \n>>me. If I'm doing \"SELECT * FROM mytable WHERE col = 'myval' LIMIT 1.\".\n>>Now \"col\" is indexed, by mytable has 500,000 rows, and 'myval' occurs \n>>maybe 100,000 times. Without the LIMIT, this query should definitely do \n>>a sequential scan.\n> \n> \n>>But with the LIMIT, doesn't it know that it will return at max 1 value, \n>>and thus be able to use the index?\n> \n> \n> But the LIMIT will cut the cost of the seqscan case too. Given the\n> numbers you posit above, about one row in five will have 'myval', so a\n> seqscan can reasonably expect to hit the first matching row in the first\n> page of the table. This is still cheaper than doing an index scan\n> (which must require reading at least one index page plus at least one\n> table page).\n> \n> The test case you are showing is probably suffering from nonrandom\n> placement of this particular data value; which is something that the\n> statistics we keep are too crude to detect.\n> \n> \t\t\tregards, tom lane\n\nYou are correct about non-random placement. I'm a little surprised it \ndoesn't change with values, then. For instance,\n\n# select count(*) from finst_t where store_id = 52;\n13967\n\nStill does a sequential scan for the \"select id from...\" query.\n\nThe only value it does an index query for is 9605 which only has 1 row.\n\nIt estimates ~18,000 rows, but that is still < 3% of the total data.\n\nThis row corresponds to disk location where files can be found. So when \na storage location fills up, generally a new one is created. This means \nthat *generally* the numbers will be increasing as you go further in the \ntable (not guaranteed, as there are multiple locations open at any one \ntime).\n\nAm I better off in this case just wrapping my query with:\n\nset enable_seqscan to off;\nquery\nset enable_seqscan to on;\n\n\nThere is still the possibility that there is a better way to determine \nexistence of a value in a column. I was wondering about something like:\n\nSELECT 1 WHERE EXISTS\n\t(SELECT id FROM finst_t WHERE store_id=52 LIMIT 1);\n\nThough the second part is the same, so it still does the sequential scan.\n\nThis isn't critical, I was just trying to understand what's going on. 
\nThanks for your help.\n\nJohn\n=:->", "msg_date": "Sun, 24 Oct 2004 16:51:24 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": "On Sun, 24 Oct 2004, John Meinel wrote:\n\n> I was looking into another problem, and I found something that surprised\n> me. If I'm doing \"SELECT * FROM mytable WHERE col = 'myval' LIMIT 1.\".\n> Now \"col\" is indexed...\n> The real purpose of this query is to check to see if a value exists in\n> the column,...\n\nWhen you select all the columns, you're going to force it to go to the\ntable. If you select only the indexed column, it ought to be able to use\njust the index, and never read the table at all. You could also use more\nstandard and more set-oriented SQL while you're at it:\n\n SELECT DISTINCT(col) FROM mytable WHERE col = 'myval'\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.NetBSD.org\n Make up enjoying your city life...produced by BIC CAMERA\n", "msg_date": "Mon, 25 Oct 2004 16:17:07 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": "On Mon, 2004-10-25 at 17:17, Curt Sampson wrote:\n> When you select all the columns, you're going to force it to go to the\n> table. If you select only the indexed column, it ought to be able to use\n> just the index, and never read the table at all.\n\nPerhaps in other database systems, but not in PostgreSQL. MVCC\ninformation is only stored in the heap, not in indexes: therefore,\nPostgreSQL cannot determine whether a given index entry refers to a\nvalid tuple. Therefore, it needs to check the heap even if the index\ncontains all the columns referenced by the query.\n\nWhile it would be nice to be able to do index-only scans, there is good\nreason for this design decision. Check the archives for past discussion\nabout the tradeoffs involved.\n\n> You could also use more\n> standard and more set-oriented SQL while you're at it:\n> \n> SELECT DISTINCT(col) FROM mytable WHERE col = 'myval'\n\nThis is likely to be less efficient though.\n\n-Neil\n\n\n", "msg_date": "Mon, 25 Oct 2004 17:24:54 +1000", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": "Curt Sampson wrote:\n> On Sun, 24 Oct 2004, John Meinel wrote:\n> \n> \n>>I was looking into another problem, and I found something that surprised\n>>me. If I'm doing \"SELECT * FROM mytable WHERE col = 'myval' LIMIT 1.\".\n>>Now \"col\" is indexed...\n>>The real purpose of this query is to check to see if a value exists in\n>>the column,...\n> \n> \n> When you select all the columns, you're going to force it to go to the\n> table. If you select only the indexed column, it ought to be able to use\n> just the index, and never read the table at all. You could also use more\n> standard and more set-oriented SQL while you're at it:\n> \n> SELECT DISTINCT(col) FROM mytable WHERE col = 'myval'\n> \n> cjs\n\nWell, what you wrote was actually much slower, as it had to scan the \nwhole table, grab all the rows, and then distinct them in the end.\n\nHowever, this query worked:\n\n\n\tSELECT DISTINCT(col) FROM mytable WHERE col = 'myval' LIMIT 1;\n\n\nNow, *why* that works differently from:\n\nSELECT col FROM mytable WHERE col = 'myval' LIMIT 1;\nor\nSELECT DISTINCT(col) FROM mytable WHERE col = 'myval';\n\nI'm not sure. 
They all return the same information.\n\nWhat's also weird is stuff like:\nSELECT DISTINCT(NULL) FROM mytable WHERE col = 'myval' LIMIT 1;\n\nAlso searches the entire table, sorting that NULL == NULL wherever col = \n'myval'. Which is as expensive as the non-limited case (I'm guessing \nthat the limit is occurring after the distinct, which is causing the \nproblem. SELECT NULL FROM ... still uses a sequential scan, but it stops \nafter finding the first one.)\n\n\nActually, in doing a little bit more testing, the working query only \nworks on some of the values. Probably it just increases the expense \nenough that it switches over. It also has the downside that when it does \nswitch to seq scan, it is much more expensive as it has to do a sort and \na unique on all the entries.\n\nJohn\n=:->", "msg_date": "Mon, 25 Oct 2004 09:33:38 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": " --- John Meinel <[email protected]> escribi�: \n> Curt Sampson wrote:\n> > On Sun, 24 Oct 2004, John Meinel wrote:\n> > \n> > \n> >>I was looking into another problem, and I found\n> something that surprised\n> >>me. If I'm doing \"SELECT * FROM mytable WHERE col\n> = 'myval' LIMIT 1.\".\n> >>Now \"col\" is indexed...\n> >>The real purpose of this query is to check to see\n> if a value exists in\n> >>the column,...\n> > \n> > \n> > When you select all the columns, you're going to\n> force it to go to the\n> > table. If you select only the indexed column, it\n> ought to be able to use\n> > just the index, and never read the table at all.\n> You could also use more\n> > standard and more set-oriented SQL while you're at\n> it:\n> > \n> > SELECT DISTINCT(col) FROM mytable WHERE col =\n> 'myval'\n> > \n> > cjs\n> \n> Well, what you wrote was actually much slower, as it\n> had to scan the \n> whole table, grab all the rows, and then distinct\n> them in the end.\n> \n> However, this query worked:\n> \n> \n> \tSELECT DISTINCT(col) FROM mytable WHERE col =\n> 'myval' LIMIT 1;\n> \n> \n> Now, *why* that works differently from:\n> \n> SELECT col FROM mytable WHERE col = 'myval' LIMIT 1;\n> or\n> SELECT DISTINCT(col) FROM mytable WHERE col =\n> 'myval';\n> \n> I'm not sure. They all return the same information.\n\nof course, both queries will return the same but\nthat's just because you forced it.\n\nLIMIT and DISTINCT are different things so they behave\nand are plenned different.\n\n\n> \n> What's also weird is stuff like:\n> SELECT DISTINCT(NULL) FROM mytable WHERE col =\n> 'myval' LIMIT 1;\n\nwhy do you want to do such a thing?\n\nregards,\nJaime Casanova\n\n_________________________________________________________\nDo You Yahoo!?\nInformaci�n de Estados Unidos y Am�rica Latina, en Yahoo! Noticias.\nVis�tanos en http://noticias.espanol.yahoo.com\n", "msg_date": "Tue, 26 Oct 2004 15:19:06 -0500 (CDT)", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": "Jaime Casanova wrote:\n[...]\n>>\n>>I'm not sure. 
They all return the same information.\n> \n> \n> of course, both queries will return the same but\n> that's just because you forced it.\n> \n> LIMIT and DISTINCT are different things so they behave\n> and are plenned different.\n> \n> \n> \n>>What's also weird is stuff like:\n>>SELECT DISTINCT(NULL) FROM mytable WHERE col =\n>>'myval' LIMIT 1;\n> \n> \n> why do you want to do such a thing?\n> \n> regards,\n> Jaime Casanova\n> \n\nI was trying to see if selecting a constant would change things.\nI could have done SELECT DISTINCT(1) or just SELECT 1 FROM ...\nThe idea of the query is that if 'myval' exists in the table, return \nsomething different than if 'myval' does not exist. If you are writing a \nfunction, you can use:\n\nSELECT something...\nIF FOUND THEN\n do a\nELSE\n do b\nEND IF;\n\nThe whole point of this exercise was just to find what the cheapest \nquery is when you want to test for the existence of a value in a column. \nThe only thing I've found for my column is:\n\nSET enable_seq_scan TO off;\nSELECT col FROM mytable WHERE col = 'myval' LIMIT 1;\nSET enable_seq_scan TO on;\n\nMy column is not distributed well (larger numbers occur later in the \ndataset, but may occur many times.) In total there are something like \n500,000 rows, the number 555647 occurs 100,000 times, but not until row \n300,000 or so.\n\nThe analyzer looks at the data and says \"1/5th of the time it is 555647, \nso I can just do a sequential scan as the odds are I don't have to look \nfor very long, then I don't have to load the index\". It turns out this \nis very bad, where with an index you just have to do 2 page loads, \ninstead of reading 300,000 rows.\n\nObviously this isn't a general-case solution. But if you have a \nsituation similar to mine, it might be useful.\n\n(That's one thing with DB tuning. It seems to be very situation \ndependent, and it's hard to plan without a real dataset.)\n\nJohn\n=:->", "msg_date": "Tue, 26 Oct 2004 17:21:35 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": " --- John Meinel <[email protected]> escribi�: \n> Jaime Casanova wrote:\n> [...]\n> >>\n> >>I'm not sure. They all return the same\n> information.\n> > \n> > \n> > of course, both queries will return the same but\n> > that's just because you forced it.\n> > \n> > LIMIT and DISTINCT are different things so they\n> behave\n> > and are plenned different.\n> > \n> > \n> > \n> >>What's also weird is stuff like:\n> >>SELECT DISTINCT(NULL) FROM mytable WHERE col =\n> >>'myval' LIMIT 1;\n> > \n> > \n> > why do you want to do such a thing?\n> > \n> > regards,\n> > Jaime Casanova\n> > \n> \n> I was trying to see if selecting a constant would\n> change things.\n> I could have done SELECT DISTINCT(1) or just SELECT\n> 1 FROM ...\n> The idea of the query is that if 'myval' exists in\n> the table, return \n> something different than if 'myval' does not exist.\n> If you are writing a \n> function, you can use:\n> \n> SELECT something...\n> IF FOUND THEN\n> do a\n> ELSE\n> do b\n> END IF;\n> \n> The whole point of this exercise was just to find\n> what the cheapest \n> query is when you want to test for the existence of\n> a value in a column. \n> The only thing I've found for my column is:\n> \n> SET enable_seq_scan TO off;\n> SELECT col FROM mytable WHERE col = 'myval' LIMIT 1;\n> SET enable_seq_scan TO on;\n> \n> My column is not distributed well (larger numbers\n> occur later in the \n> dataset, but may occur many times.) 
In total there\n> are something like \n> 500,000 rows, the number 555647 occurs 100,000\n> times, but not until row \n> 300,000 or so.\n> \n> The analyzer looks at the data and says \"1/5th of\n> the time it is 555647, \n> so I can just do a sequential scan as the odds are I\n> don't have to look \n> for very long, then I don't have to load the index\".\n> It turns out this \n> is very bad, where with an index you just have to do\n> 2 page loads, \n> instead of reading 300,000 rows.\n> \n> Obviously this isn't a general-case solution. But if\n> you have a \n> situation similar to mine, it might be useful.\n> \n> (That's one thing with DB tuning. It seems to be\n> very situation \n> dependent, and it's hard to plan without a real\n> dataset.)\n> \n> John\n> =:->\n> \n\nIn http://www.postgresql.org/docs/faqs/FAQ.html under\n\"4.8) My queries are slow or don't make use of the\nindexes. Why?\" says: \n\n\"However, LIMIT combined with ORDER BY often will use\nan index because only a small portion of the table is\nreturned. In fact, though MAX() and MIN() don't use\nindexes, it is possible to retrieve such values using\nan index with ORDER BY and LIMIT: \n SELECT col\n FROM tab\n ORDER BY col [ DESC ]\n LIMIT 1;\"\n\nSo, maybe you can try your query as \n\nSELECT col FROM mytable \nWHERE col = 'myval' \nORDER BY col \nLIMIT 1;\n\nregards,\nJaime Casanova\n \n\n_________________________________________________________\nDo You Yahoo!?\nInformaci�n de Estados Unidos y Am�rica Latina, en Yahoo! Noticias.\nVis�tanos en http://noticias.espanol.yahoo.com\n", "msg_date": "Wed, 27 Oct 2004 12:16:21 -0500 (CDT)", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": "Jaime Casanova wrote:\n[...]\n> \n> In http://www.postgresql.org/docs/faqs/FAQ.html under\n> \"4.8) My queries are slow or don't make use of the\n> indexes. Why?\" says: \n> \n> \"However, LIMIT combined with ORDER BY often will use\n> an index because only a small portion of the table is\n> returned. In fact, though MAX() and MIN() don't use\n> indexes, it is possible to retrieve such values using\n> an index with ORDER BY and LIMIT: \n> SELECT col\n> FROM tab\n> ORDER BY col [ DESC ]\n> LIMIT 1;\"\n> \n> So, maybe you can try your query as \n> \n> SELECT col FROM mytable \n> WHERE col = 'myval' \n> ORDER BY col \n> LIMIT 1;\n> \n> regards,\n> Jaime Casanova\n\nThanks for the heads up. This actually worked. All queries against that \ntable have turned into index scans instead of sequential.\n\nJohn\n=:->", "msg_date": "Thu, 28 Oct 2004 10:27:03 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": "On Sun, Oct 24, 2004 at 04:11:53PM -0400, Tom Lane wrote:\n> But the LIMIT will cut the cost of the seqscan case too. Given the\n> numbers you posit above, about one row in five will have 'myval', so a\n> seqscan can reasonably expect to hit the first matching row in the first\n> page of the table. This is still cheaper than doing an index scan\n> (which must require reading at least one index page plus at least one\n> table page).\n> \n> The test case you are showing is probably suffering from nonrandom\n> placement of this particular data value; which is something that the\n> statistics we keep are too crude to detect.\n \nIsn't that exactly what pg_stats.correlation is?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! 
www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 28 Oct 2004 18:46:58 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Sun, Oct 24, 2004 at 04:11:53PM -0400, Tom Lane wrote:\n>> The test case you are showing is probably suffering from nonrandom\n>> placement of this particular data value; which is something that the\n>> statistics we keep are too crude to detect.\n \n> Isn't that exactly what pg_stats.correlation is?\n\nNo. A far-from-zero correlation gives you a clue that on average, *all*\nthe data values are placed nonrandomly ... but it doesn't really tell\nyou much one way or the other about a single data value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Oct 2004 19:49:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT " }, { "msg_contents": "On Thu, Oct 28, 2004 at 07:49:28PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > On Sun, Oct 24, 2004 at 04:11:53PM -0400, Tom Lane wrote:\n> >> The test case you are showing is probably suffering from nonrandom\n> >> placement of this particular data value; which is something that the\n> >> statistics we keep are too crude to detect.\n> \n> > Isn't that exactly what pg_stats.correlation is?\n> \n> No. A far-from-zero correlation gives you a clue that on average, *all*\n> the data values are placed nonrandomly ... but it doesn't really tell\n> you much one way or the other about a single data value.\n \nMaybe I'm confused about what the original issue was then... it appeared\nthat you were suggesting PGSQL was doing a seq scan instead of an index\nscan because it thought it would find it on the first page if the data\nwas randomly distributed. If the correlation is highly non-zero though,\nshouldn't it 'play it safe' and assume that unless it's picking the min\nor max value stored in statistics it will be better to do an index scan,\nsince the value it's looking for is probably in the middle of the table\nsomewhere? IE: if the values in the field are between 1 and 5 and the\ntable is clustered on that field then clearly an index scan would be\nbetter to find a row with field=3 than a seq scan.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 28 Oct 2004 18:54:30 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential Scan with LIMIT" } ]
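Applied to the table in this thread, the FAQ idiom Jaime quotes above looks like the following; the ORDER BY on the indexed column is what nudges the planner onto the index even for the heavily repeated values:

SELECT store_id
FROM finst_t
WHERE store_id = 539960
ORDER BY store_id
LIMIT 1;

John reports above that this turned all the queries against the table into index scans, so it also works as an existence test without touching enable_seqscan.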
[ { "msg_contents": "Hi,\n \nI have had some performance problems recently on very large tables (10s of\nmillions of rows). A vacuum full did make a large improvement and then\ndropping & re-creating the indexes also was very beneficial. My performance\nproblem has now been solved.\n \nMy question is: will using the contrib/reindexdb or REINDEX sql command do\nessentially the same job as dropping and re-creating the indexes. I.E. do\nyou get a fully compacted and balanced index? If so then I could use\ncontrib/reindexdb or REINDEX instead of drop/recreate. \n \nHow is concurrency handled by contrib/reindexdb and REINDEX (I know you can\ncreate an index on the fly with no obvious lock outs)?\n \nThanks,\n \nRod\n \n\n\n\n\n\nHi,\n \nI have had some \nperformance problems recently on very large tables (10s of millions of \nrows).  A vacuum full did make a large improvement and then dropping & \nre-creating the indexes also was very beneficial.  My performance problem \nhas now been solved.\n \nMy question is: will \nusing the contrib/reindexdb or REINDEX sql command do essentially the same job \nas dropping and re-creating the indexes.  I.E. do you get a fully compacted \nand balanced index?  If so then I could use contrib/reindexdb or REINDEX \ninstead of drop/recreate.  \n \nHow is concurrency \nhandled by contrib/reindexdb and REINDEX (I know you can create an index on the \nfly with no obvious lock outs)?\n \nThanks,\n \nRod", "msg_date": "Sun, 24 Oct 2004 23:00:42 +0100", "msg_from": "\"Rod Dutton\" <[email protected]>", "msg_from_op": true, "msg_subject": "Reindexdb and REINDEX" }, { "msg_contents": "\"Rod Dutton\" <[email protected]> writes:\n> My question is: will using the contrib/reindexdb or REINDEX sql command do\n> essentially the same job as dropping and re-creating the indexes. I.E. do\n> you get a fully compacted and balanced index?\n\nYes.\n \n> How is concurrency handled by contrib/reindexdb and REINDEX (I know you can\n> create an index on the fly with no obvious lock outs)?\n\nIn 8.0 they are almost equivalent, but in earlier releases\nREINDEX takes an exclusive lock on the index's parent table.\n\nThe details are:\n\nDROP INDEX: takes exclusive lock, but doesn't hold it long.\nCREATE INDEX: takes ShareLock, which blocks writers but not readers.\n\nSo when you do it that way, readers can use the table while CREATE INDEX\nruns, but of course they have no use of the dropped index. Putting the\nDROP and the CREATE in one transaction isn't a good idea if you want\nconcurrency, because then the exclusive lock persists till transaction\nend.\n\nREINDEX before 8.0: takes exclusive lock for the duration.\n\nThis of course is a dead loss for concurrency.\n\nREINDEX in 8.0: takes ShareLock on the table and exclusive lock on\nthe particular index.\n\nThis means that writers are blocked, but readers can proceed *as long as\nthey don't try to use the index under reconstruction*. If they try,\nthey block.\n\nIf you're rebuilding a popular index, you have a choice of making\nreaders do seqscans or having them block till the rebuild completes.\n\nOne other point is that DROP/CREATE breaks any stored plans that use the\nindex, which can have negative effects on plpgsql functions and PREPAREd\nstatements. REINDEX doesn't break plans. 
We don't currently have any\nautomated way of rebuilding stored plans, so in the worst case you may\nhave to terminate open backend sessions after a DROP/CREATE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Oct 2004 20:16:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reindexdb and REINDEX " } ]
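A concrete sketch of the two approaches Tom describes, with made-up table and index names; the DROP and CREATE are deliberately separate top-level statements so the exclusive lock taken by DROP INDEX is released before the longer-running CREATE INDEX starts:

-- rebuild in place (8.0: blocks writers, and readers only if they want this index)
REINDEX INDEX big_table_col_idx;

-- drop and recreate (readers keep working, minus the index, while CREATE runs)
DROP INDEX big_table_col_idx;
CREATE INDEX big_table_col_idx ON big_table (some_col);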
[ { "msg_contents": "Hi,\n\n \n\nI am dealing with an app here that uses pg to handle a few thousand\nconcurrent web users. It seems that under heavy load, the INSERT and\nUPDATE statements to one or two specific tables keep queuing up, to the\ncount of 150+ (one table has about 432K rows, other has about 2.6Million\nrows), resulting in 'wait's for other queries, and then everything piles\nup, with the load average shooting up to 10+. \n\n \n\nWe (development) have gone through the queries/explain analyzes and made\nsure the appropriate indexes exist among other efforts put in.\n\n \n\nI would like to know if there is anything that can be changed for better\nfrom the systems perspective. Here's what I have done and some recent\nchanges from the system side:\n\n \n\n-Upgraded from 7.4.0 to 7.4.1 sometime ago\n\n-Upgraded from RH8 to RHEL 3.0\n\n-The settings from postgresql.conf (carried over, basically) are:\n\n shared_buffers = 10240 (80MB)\n\n max_connections = 400\n\n sort_memory = 1024\n\n effective_cache_size = 262144 (2GB)\n\n checkpoint_segments = 15\n\nstats_start_collector = true\n\nstats_command_string = true \n\nRest everything is at default\n\n \n\nIn /etc/sysctl.conf (512MB shared mem)\n\nkernel.shmall = 536870912\n\nkernel.shmmax = 536870912\n\n \n\n-This is a new Dell 6650 (quad XEON 2.2GHz, 8GB RAM, Internal HW\nRAID10), RHEL 3.0 (2.4.21-20.ELsmp), PG 7.4.1\n\n-Vaccum Full run everyday\n\n-contrib/Reindex run everyday\n\n-Disabled HT in BIOS\n\n \n\nI would greatly appreciate any helpful ideas.\n\n \n\nThanks in advance,\n\n \n\nAnjan\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI am dealing with an app here that uses pg to handle a few\nthousand concurrent web users. It seems that under heavy load, the INSERT and\nUPDATE statements to one or two specific tables keep queuing up, to the count\nof 150+ (one table has about 432K rows, other has about 2.6Million rows), resulting\nin ‘wait’s for other queries, and then everything piles up, with the\nload average shooting up to 10+. \n \nWe (development) have gone through the queries/explain\nanalyzes and made sure the appropriate indexes exist among other efforts put\nin.\n \nI would like to know if there is anything that can be\nchanged for better from the systems perspective. Here’s what I have done and\nsome recent changes from the system side:\n \n-Upgraded from 7.4.0 to 7.4.1 sometime ago\n-Upgraded from RH8 to RHEL 3.0\n-The settings from postgresql.conf (carried over, basically)\nare:\n            shared_buffers\n= 10240 (80MB)\n            max_connections\n= 400\n            sort_memory\n= 1024\n            effective_cache_size\n= 262144 (2GB)\n            checkpoint_segments\n= 15\nstats_start_collector = true\nstats_command_string = true \nRest everything is at default\n \nIn /etc/sysctl.conf (512MB shared\nmem)\nkernel.shmall = 536870912\nkernel.shmmax = 536870912\n \n-This is a new Dell 6650 (quad XEON 2.2GHz, 8GB RAM,\nInternal HW RAID10), RHEL 3.0 (2.4.21-20.ELsmp), PG 7.4.1\n-Vaccum Full run everyday\n-contrib/Reindex run everyday\n-Disabled HT in BIOS\n \nI would greatly appreciate any helpful ideas.\n \nThanks in advance,\n \nAnjan", "msg_date": "Mon, 25 Oct 2004 16:53:23 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:\n> Hi,\n> \n> \n> \n> I am dealing with an app here that uses pg to handle a few thousand\n> concurrent web users. 
It seems that under heavy load, the INSERT and\n> UPDATE statements to one or two specific tables keep queuing up, to\n> the count of 150+ (one table has about 432K rows, other has about\n> 2.6Million rows), resulting in οΏ½waitοΏ½s for other queries, and then\n\nThis isn't an index issue, it's a locking issue. Sounds like you have a\nbunch of inserts and updates hitting the same rows over and over again.\n\nEliminate that contention point, and you will have solved your problem.\n\nFree free to describe the processes involved, and we can help you do\nthat.\n\n\n", "msg_date": "Mon, 25 Oct 2004 17:19:14 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "\nOn Oct 25, 2004, at 13:53, Anjan Dave wrote:\n\n> I am dealing with an app here that uses pg to handle a few thousand \n> concurrent web users. It seems that under heavy load, the INSERT and \n> UPDATE statements to one or two specific tables keep queuing up, to \n> the count of 150+ (one table has about 432K rows, other has about \n> 2.6Million rows), resulting in ‘wait’s for other queries, and then \n> everything piles up, with the load average shooting up to 10+.\n\n\tDepending on your requirements and all that, but I had a similar issue \nin one of my applications and made the problem disappear entirely by \nserializing the transactions into a separate thread (actually, a thread \npool) responsible for performing these transactions. This reduced the \nload on both the application server and the DB server.\n\n\tNot a direct answer to your question, but I've found that a lot of \ntimes when someone has trouble scaling a database application, much of \nthe performance win can be in trying to be a little smarter about how \nand when the database is accessed.\n\n--\nSPY My girlfriend asked me which one I like better.\npub 1024/3CAE01D5 1994/11/03 Dustin Sallings <[email protected]>\n| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE\nL_______________________ I hope the answer won't upset her. ____________\n\n", "msg_date": "Mon, 25 Oct 2004 19:29:19 -0700", "msg_from": "Dustin Sallings <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "On Mon, 2004-10-25 at 16:53 -0400, Anjan Dave wrote:\n> Hi,\n> \n> \n> \n> I am dealing with an app here that uses pg to handle a few thousand\n> concurrent web users. It seems that under heavy load, the INSERT and\n> UPDATE statements to one or two specific tables keep queuing up, to\n> the count of 150+ (one table has about 432K rows, other has about\n> 2.6Million rows), resulting in ‘wait’s for other queries, and then\n> everything piles up, with the load average shooting up to 10+. \n\nHi,\n\nWe saw a similar problem here that was related to the locking that can\nhappen against referred tables for referential integrity.\n\nIn our case we had referred tables with very few rows (i.e. 
< 10) which\ncaused the insert and update on the large tables to be effectively\nserialised due to the high contention on the referred tables.\n\nWe changed our app to implement those referential integrity checks\ndifferently and performance was hugely boosted.\n\nRegards,\n\t\t\t\t\tAndrew.\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Chicken Little was right.\n-------------------------------------------------------------------------", "msg_date": "Wed, 27 Oct 2004 09:50:50 +1300", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": ">>Eliminate that contention point, and you will have solved your problem.\n\nI agree, If your updates are slow then you will get a queue building up. \n\nMake sure that:-\n1) all your indexing is optimised.\n2) you are doing regular vacuuming (bloated tables will cause a slow down\ndue to swapping).\n3) your max_fsm_pages setting is large enough - it needs to be big enough to\nhold all the transactions between vacuums (+ some spare for good measure).\n4) do a full vacuum - do one to start and then do one after you have had 2&3\n(above) in place for a while - if the full vacuum handles lots of dead\ntuples then your max_fsm_pages setting is too low.\n5) Also try reindexing or drop/recreate the indexes in question as...\n\"PostgreSQL is unable to reuse B-tree index pages in certain cases. The\nproblem is that if indexed rows are deleted, those index pages can only be\nreused by rows with similar values. For example, if indexed rows are deleted\nand newly inserted/updated rows have much higher values, the new rows can't\nuse the index space made available by the deleted rows. Instead, such new\nrows must be placed on new index pages. In such cases, disk space used by\nthe index will grow indefinitely, even if VACUUM is run frequently. \"\n\nAre your updates directly executed or do you use stored procs? We had a\nrecent problem with stored procs as they store a \"one size fits all\" query\nplan when compiled - this can be less than optimum in some cases.\n\nWe have a similar sounding app to yours and if tackled correctly then all\nthe above will make a massive difference in performance.\n\nRod\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Rod Taylor\nSent: 25 October 2004 22:19\nTo: Anjan Dave\nCc: Postgresql Performance\nSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs\n\nOn Mon, 2004-10-25 at 16:53, Anjan Dave wrote:\n> Hi,\n> \n> \n> \n> I am dealing with an app here that uses pg to handle a few thousand\n> concurrent web users. It seems that under heavy load, the INSERT and\n> UPDATE statements to one or two specific tables keep queuing up, to\n> the count of 150+ (one table has about 432K rows, other has about\n> 2.6Million rows), resulting in ?wait?s for other queries, and then\n\nThis isn't an index issue, it's a locking issue. Sounds like you have a\nbunch of inserts and updates hitting the same rows over and over again.\n\nEliminate that contention point, and you will have solved your problem.\n\nFree free to describe the processes involved, and we can help you do\nthat.\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Tue, 26 Oct 2004 09:19:33 +0100", "msg_from": "\"Rod Dutton\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": "Hi all,\n\nI am (stilll) converting a database from a Clarion Topspeed database to Postgresql 7.4.5 on Debian Linux 2.6.6-1. The program that uses the database uses a query like \"select * from table\" to show the user the contents of a table. This query cannot be changed (it is generated by Clarion and the person in charge of the program cannot alter that behaviour).\n\nNow I have a big performance problem with reading a large table ( 96713 rows). The query that is send to the database is \"select * from table\".\n\n\"explain\" and \"explain analyze\", using psql on cygwin:\n\nmunt=# explain select * from klt_alg;\n QUERY PLAN \n----------------------------------------------------------------- \nSeq Scan on klt_alg (cost=0.00..10675.13 rows=96713 width=729) \n\n\nmunt=# explain analyze select * from klt_alg;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\nSeq Scan on klt_alg (cost=0.00..10675.13 rows=96713 width=729) (actual time=13.172..2553.328 rows=96713 loops=1)\nTotal runtime: 2889.109 ms\n(2 rows) \n\nRunning the query (with pgAdmin III):\n-- Executing query:\nselect * from klt_alg;\n\nTotal query runtime: 21926 ms.\nData retrieval runtime: 72841 ms.\n96713 rows retrieved.\n\nQUESTIONS:\n\nGENERAL:\n1. The manual says about \"explain analyze\" : \"The ANALYZE option causes the statement to be actually executed, not only planned. The total elapsed time expended within each plan node (in milliseconds) and total number of rows it actually returned are added to the display.\" Does this time include datatransfer or just the time the database needs to collect the data, without any data transfer?\n2. If the time is without data transfer to the client, is there a reliable way to measure the time needed to run the query and get the data (without the overhead of a program that does something with the data)?\n\nPGADMIN:\n1. What does the \"Total query runtime\" really mean? (It was my understanding that it was the time the database needs to collect the data, without any data transfer).\n2. What does the \"Data retrieval runtime\" really mean? (Is this including the filling of the datagrid/GUI, or just the datatransfer?)\n\nTIA\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n", "msg_date": "Tue, 26 Oct 2004 13:37:24 +0200", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Measuring server performance with psql and pgAdmin" }, { "msg_contents": "Joost,\n\n> 1. The manual says about \"explain analyze\" : \"The ANALYZE option causes the\n> statement to be actually executed, not only planned. The total elapsed time\n> expended within each plan node (in milliseconds) and total number of rows\n> it actually returned are added to the display.\" Does this time include\n> datatransfer or just the time the database needs to collect the data,\n> without any data transfer? \n\nCorrect. It's strictly backend time.\n\n> 2. 
If the time is without data transfer to the \n> client, is there a reliable way to measure the time needed to run the query\n> and get the data (without the overhead of a program that does something\n> with the data)?\n\nIn psql, you can use \\timing\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 26 Oct 2004 12:20:21 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Measuring server performance with psql and pgAdmin" } ]
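Putting Josh's suggestion next to the earlier EXPLAIN ANALYZE figures: with \timing switched on, psql prints the wall-clock time it observed for each statement, including shipping the rows to the client, so comparing the two numbers gives a rough split between backend work and data transfer:

\timing
EXPLAIN ANALYZE SELECT * FROM klt_alg;  -- backend-only time, no rows shipped
SELECT * FROM klt_alg;                  -- total elapsed time as seen from psql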
[ { "msg_contents": "It probably is locking issue. I got a long list of locks held when we ran select * from pg_locks during a peak time.\n\nrelation | database | transaction | pid | mode | granted \n----------+----------+-------------+-------+------------------+---------\n 17239 | 17142 | | 3856 | AccessShareLock | t\n | | 21196323 | 3875 | ExclusiveLock | t\n 16390 | 17142 | | 3911 | AccessShareLock | t\n 16595 | 17142 | | 3782 | AccessShareLock | t\n 17227 | 17142 | | 3840 | AccessShareLock | t\n 17227 | 17142 | | 3840 | RowExclusiveLock | t\n...\n...\n\n\nVmstat would show a lot of disk IO at the same time.\n\nIs this pointing towards a disk IO issue? (to that end, other than a higher CPU speed, and disabling HT, only thing changed is that it's RAID5 volume now, instead of a RAID10)\n\n-anjan\n\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]] \nSent: Monday, October 25, 2004 5:19 PM\nTo: Anjan Dave\nCc: Postgresql Performance\nSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs\n\nOn Mon, 2004-10-25 at 16:53, Anjan Dave wrote:\n> Hi,\n> \n> \n> \n> I am dealing with an app here that uses pg to handle a few thousand\n> concurrent web users. It seems that under heavy load, the INSERT and\n> UPDATE statements to one or two specific tables keep queuing up, to\n> the count of 150+ (one table has about 432K rows, other has about\n> 2.6Million rows), resulting in ‘wait’s for other queries, and then\n\nThis isn't an index issue, it's a locking issue. Sounds like you have a\nbunch of inserts and updates hitting the same rows over and over again.\n\nEliminate that contention point, and you will have solved your problem.\n\nFree free to describe the processes involved, and we can help you do\nthat.\n\n\n\n", "msg_date": "Tue, 26 Oct 2004 13:42:21 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "On Tue, 2004-10-26 at 13:42, Anjan Dave wrote:\n> It probably is locking issue. I got a long list of locks held when we ran select * from pg_locks during a peak time.\n> \n> relation | database | transaction | pid | mode | granted \n> ----------+----------+-------------+-------+------------------+---------\n> 17239 | 17142 | | 3856 | AccessShareLock | t\n\nHow many have granted = false?\n\n> Vmstat would show a lot of disk IO at the same time.\n> \n> Is this pointing towards a disk IO issue?\n\nNot necessarily. Is your IO reaching the limit or is it just heavy?\n\n", "msg_date": "Tue, 26 Oct 2004 13:48:42 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "Anjan,\n\n> It probably is locking issue. I got a long list of locks held when we ran\n> select * from pg_locks during a peak time.\n\nDo the back-loaded tables have FKs on them? This would be a likely cause \nof lock contention, and thus serializing inserts/updates to the tables.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 26 Oct 2004 11:27:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": "None of the locks are in state false actually.\n\nI don't have iostat on that machine, but vmstat shows a lot of writes to\nthe drives, and the runnable processes are more than 1:\n\nprocs memory swap io system\ncpu\n r b swpd free buff cache si so bi bo in cs us sy\nwa id\n 1 2 0 3857568 292936 2791876 0 0 0 44460 1264 2997 23\n13 22 41\n 2 2 0 3824668 292936 2791884 0 0 0 25262 1113 4797 28\n12 29 31\n 2 3 0 3784772 292936 2791896 0 0 0 38988 1468 6677 28\n12 48 12\n 2 4 0 3736256 292936 2791904 0 0 0 50970 1530 5217 19\n12 49 20\n 4 2 0 3698056 292936 2791908 0 0 0 43576 1369 7316 20\n15 35 30\n 2 1 0 3667124 292936 2791920 0 0 0 39174 1444 4659 25\n16 35 24\n 6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25\n19 20 37\n 1 3 0 3599992 292936 2790868 0 0 0 40156 1439 4394 20\n14 29 37\n 6 0 0 3797488 292936 2568648 0 0 0 17706 2272 21534 28\n23 19 30\n 0 0 0 3785396 292936 2568736 0 0 0 1156 1237 14057 33\n8 0 59\n 0 0 0 3783568 292936 2568736 0 0 0 704 512 1537 5\n2 1 92\n 1 0 0 3783188 292936 2568752 0 0 0 842 613 1919 6\n1 1 92\n\n-anjan\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]] \nSent: Tuesday, October 26, 2004 1:49 PM\nTo: Anjan Dave\nCc: Postgresql Performance\nSubject: RE: [PERFORM] can't handle large number of INSERT/UPDATEs\n\nOn Tue, 2004-10-26 at 13:42, Anjan Dave wrote:\n> It probably is locking issue. I got a long list of locks held when we\nran select * from pg_locks during a peak time.\n> \n> relation | database | transaction | pid | mode | granted\n\n>\n----------+----------+-------------+-------+------------------+---------\n> 17239 | 17142 | | 3856 | AccessShareLock | t\n\nHow many have granted = false?\n\n> Vmstat would show a lot of disk IO at the same time.\n> \n> Is this pointing towards a disk IO issue?\n\nNot necessarily. Is your IO reaching the limit or is it just heavy?\n\n\n", "msg_date": "Tue, 26 Oct 2004 13:56:25 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "\n>I don't have iostat on that machine, but vmstat shows a lot of writes to\n>the drives, and the runnable processes are more than 1:\n>\n> 6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25\n>19 20 37\n> \n>\nAssuming that's the output of 'vmstat 1' and not some other delay, \n50MB/second of sustained writes is usually considered 'a lot'. \n", "msg_date": "Tue, 26 Oct 2004 19:28:34 +0100", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "\"Anjan Dave\" <[email protected]> writes:\n> None of the locks are in state false actually.\n\nIn that case you don't have a locking problem.\n\n> I don't have iostat on that machine, but vmstat shows a lot of writes to\n> the drives, and the runnable processes are more than 1:\n\nI get the impression that you are just saturating the write bandwidth of\nyour disk :-(\n\nIt's fairly likely that this happens during checkpoints. Look to see if\nthe postmaster has a child that shows itself as a checkpointer in \"ps\"\nwhen the saturation is occurring. 
You might be able to improve matters\nby altering the checkpoint frequency parameters (though beware that\neither too small or too large will likely make matters even worse).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Oct 2004 17:53:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs " } ]
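The checkpoint parameters Tom refers to can be inspected from psql and changed in postgresql.conf. A sketch with the 7.4-era names; the values shown are illustrative only, not recommendations:

    -- current spacing between checkpoints (defaults: 3 segments / 300 seconds)
    SHOW checkpoint_segments;
    SHOW checkpoint_timeout;

    -- in postgresql.conf, spreading checkpoints further apart looks like:
    --   checkpoint_segments = 15     -- illustrative value
    --   checkpoint_timeout  = 300
    --   checkpoint_warning  = 30     -- 7.4+: logs a hint when checkpoints
    --                                -- occur closer together than this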
[ { "msg_contents": "Andrew/Josh,\n\nJosh also suggested to check for any FK/referential integrity checks,\nbut I am told that we don't have any foreign key constraints.\n\nThanks,\nanjan\n\n-----Original Message-----\nFrom: Andrew McMillan [mailto:[email protected]] \nSent: Tuesday, October 26, 2004 4:51 PM\nTo: Anjan Dave\nCc: [email protected]\nSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs\n\nOn Mon, 2004-10-25 at 16:53 -0400, Anjan Dave wrote:\n> Hi,\n> \n> \n> \n> I am dealing with an app here that uses pg to handle a few thousand\n> concurrent web users. It seems that under heavy load, the INSERT and\n> UPDATE statements to one or two specific tables keep queuing up, to\n> the count of 150+ (one table has about 432K rows, other has about\n> 2.6Million rows), resulting in 'wait's for other queries, and then\n> everything piles up, with the load average shooting up to 10+. \n\nHi,\n\nWe saw a similar problem here that was related to the locking that can\nhappen against referred tables for referential integrity.\n\nIn our case we had referred tables with very few rows (i.e. < 10) which\ncaused the insert and update on the large tables to be effectively\nserialised due to the high contention on the referred tables.\n\nWe changed our app to implement those referential integrity checks\ndifferently and performance was hugely boosted.\n\nRegards,\n\t\t\t\t\tAndrew.\n------------------------------------------------------------------------\n-\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Chicken Little was right.\n------------------------------------------------------------------------\n-\n\n\n", "msg_date": "Tue, 26 Oct 2004 17:13:04 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": "That is 1 or maybe 2 second interval.\n\nOne thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,\nit's the 'bo' column that shows high numbers (reads from disk). With so\nmany INSERT/UPDATEs, I would expect it the other way around...\n\n-anjan\n\n\n\n-----Original Message-----\nFrom: Matt Clark [mailto:[email protected]] \nSent: Tuesday, October 26, 2004 2:29 PM\nTo: Anjan Dave\nCc: Rod Taylor; Postgresql Performance\nSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs\n\n\n>I don't have iostat on that machine, but vmstat shows a lot of writes\nto\n>the drives, and the runnable processes are more than 1:\n>\n> 6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25\n>19 20 37\n> \n>\nAssuming that's the output of 'vmstat 1' and not some other delay, \n50MB/second of sustained writes is usually considered 'a lot'. \n\n", "msg_date": "Tue, 26 Oct 2004 17:15:49 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "\"Anjan Dave\" <[email protected]> writes:\n> One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,\n> it's the 'bo' column that shows high numbers (reads from disk). With so\n> many INSERT/UPDATEs, I would expect it the other way around...\n\nEr ... it *is* the other way around. bi is blocks in (to the CPU),\nbo is blocks out (from the CPU).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Oct 2004 18:37:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs " }, { "msg_contents": "\nOn Tue, 26 Oct 2004, Tom Lane wrote:\n\n> \"Anjan Dave\" <[email protected]> writes:\n> > One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,\n> > it's the 'bo' column that shows high numbers (reads from disk). With so\n> > many INSERT/UPDATEs, I would expect it the other way around...\n> \n> Er ... it *is* the other way around. 
bi is blocks in (to the CPU),\n> bo is blocks out (from the CPU).\n> \n> \t\t\tregards, tom lane\n\nUmmm.....\n\n[curtisz@labsoft T2]$ man vmstat\n<snip>\nFIELD DESCRIPTIONS\n<snip>\n IO\n bi: Blocks sent to a block device (blocks/s).\n bo: Blocks received from a block device (blocks/s).\n\nAnd on my read-heavy 7.4.2 system (running on rh8 at the moment....)\n(truncated for readability...)\n\n[root@labsoft T2]# vmstat 1\n procs memory swap io system \n r b w swpd free buff cache si so bi bo in cs us \n 0 0 0 127592 56832 365496 2013788 0 1 3 6 4 0 4 \n 2 0 0 127592 56868 365496 2013788 0 0 0 0 363 611 1 \n 1 0 0 127592 57444 365508 2013788 0 0 8 972 1556 3616 11 \n 0 0 1 127592 57408 365512 2013800 0 0 0 448 614 1216 5 \n 0 0 0 127592 56660 365512 2013800 0 0 0 0 666 1150 6 \n 0 3 1 127592 56680 365512 2013816 0 0 16 180 1280 2050 2 \n 0 0 0 127592 56864 365516 2013852 0 0 20 728 2111 4360 11 \n 0 0 0 127592 57952 365544 2013824 0 0 0 552 1153 2002 10 \n 0 0 0 127592 57276 365544 2013824 0 0 0 504 718 1111 5 \n 1 0 0 127592 57244 365544 2013824 0 0 0 436 1495 2366 7 \n 0 0 0 127592 57252 365544 2013824 0 0 0 0 618 1380 5 \n 0 0 0 127592 57276 365556 2014192 0 0 360 1240 2418 5056 14 \n 2 0 0 127592 56664 365564 2014176 0 0 0 156 658 1349 5 \n 1 0 0 127592 55864 365568 2014184 0 0 0 1572 1388 3598 9 \n 2 0 0 127592 56160 365572 2014184 0 0 0 536 4860 6621 13 \n\nWhich seems appropriate for both the database and the man page....\n\n-Curtis\n\n\n", "msg_date": "Tue, 26 Oct 2004 16:04:35 -0700 (MST)", "msg_from": "Curtis Zinzilieta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs " }, { "msg_contents": "Curtis Zinzilieta <[email protected]> writes:\n> On Tue, 26 Oct 2004, Tom Lane wrote:\n>> Er ... it *is* the other way around. bi is blocks in (to the CPU),\n>> bo is blocks out (from the CPU).\n\n> Ummm.....\n> [curtisz@labsoft T2]$ man vmstat\n> bi: Blocks sent to a block device (blocks/s).\n> bo: Blocks received from a block device (blocks/s).\n\nYou might want to have a word with your OS vendor. My vmstat\nman page says\n\n IO\n bi: Blocks received from a block device (blocks/s).\n bo: Blocks sent to a block device (blocks/s).\n\nand certainly anyone who's been around a computer more than a week or\ntwo knows which direction \"in\" and \"out\" are customarily seen from.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Oct 2004 23:21:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs " }, { "msg_contents": "Tom Lane wrote:\n> Curtis Zinzilieta <[email protected]> writes:\n> \n>>On Tue, 26 Oct 2004, Tom Lane wrote:\n>>\n>>>Er ... it *is* the other way around. bi is blocks in (to the CPU),\n>>>bo is blocks out (from the CPU).\n> \n> \n>>Ummm.....\n>>[curtisz@labsoft T2]$ man vmstat\n>> bi: Blocks sent to a block device (blocks/s).\n>> bo: Blocks received from a block device (blocks/s).\n> \n> \n> You might want to have a word with your OS vendor. My vmstat\n> man page says\n> \n> IO\n> bi: Blocks received from a block device (blocks/s).\n> bo: Blocks sent to a block device (blocks/s).\n> \n> and certainly anyone who's been around a computer more than a week or\n> two knows which direction \"in\" and \"out\" are customarily seen from.\n> \n> \t\t\tregards, tom lane\n> \n\nInteresting. I checked this on several machines. 
They actually say \ndifferent things.\n\nRedhat 9- bi: Blocks sent to a block device (blocks/s).\nLatest Cygwin- bi: Blocks sent to a block device (blocks/s).\nRedhat 7.x- bi: Blocks sent to a block device (blocks/s).\nRedhat AS3- bi: blocks sent out to a block device (in blocks/s)\n\nI would say that I probably agree, things should be relative to the cpu. \nHowever, it doesn't seem to be something that was universally agreed \nupon. Or maybe the man-pages were all wrong, and only got updated recently.\n\nJohn\n=:->", "msg_date": "Tue, 26 Oct 2004 23:18:09 -0500", "msg_from": "John Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "Turbo linux 7 sems to be agreeing with Curtis,\n\nbi: ブロックデバイスに送られたブロック (blocks/s)。\nbo: ブロックデバイスから受け取ったブロック (blocks/s)。\n\nSorry it's in Japanese but bi says \"blocks sent to block device\" and bo is \n\"blocks received from block device\".\n\nI don't know that much about it but the actual output seems to suggest that \nthe man page is wrong. I find it just the slightest bit amusing that such \nerrors in the docs should be translated faithfully when translating \ninvariably introduces errors of it's own ;)\n\nRegards\nIain\n\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Curtis Zinzilieta\" <[email protected]>\nCc: \"Anjan Dave\" <[email protected]>; \"Matt Clark\" <[email protected]>; \"Rod \nTaylor\" <[email protected]>; \"Postgresql Performance\" \n<[email protected]>\nSent: Wednesday, October 27, 2004 12:21 PM\nSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs\n\n\n> Curtis Zinzilieta <[email protected]> writes:\n>> On Tue, 26 Oct 2004, Tom Lane wrote:\n>>> Er ... it *is* the other way around. bi is blocks in (to the CPU),\n>>> bo is blocks out (from the CPU).\n>\n>> Ummm.....\n>> [curtisz@labsoft T2]$ man vmstat\n>> bi: Blocks sent to a block device (blocks/s).\n>> bo: Blocks received from a block device (blocks/s).\n>\n> You might want to have a word with your OS vendor. My vmstat\n> man page says\n>\n> IO\n> bi: Blocks received from a block device (blocks/s).\n> bo: Blocks sent to a block device (blocks/s).\n>\n> and certainly anyone who's been around a computer more than a week or\n> two knows which direction \"in\" and \"out\" are customarily seen from.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster \n\n", "msg_date": "Wed, 27 Oct 2004 14:02:48 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs " }, { "msg_contents": "\n>> and certainly anyone who's been around a computer more than a week or\n>> two knows which direction \"in\" and \"out\" are customarily seen from.\n>>\n>> regards, tom lane\n>>\n>\nApparently not whoever wrote the man page that everyone copied ;-)\n\n> Interesting. I checked this on several machines. They actually say \n> different things.\n>\n> Redhat 9- bi: Blocks sent to a block device (blocks/s).\n> Latest Cygwin- bi: Blocks sent to a block device (blocks/s).\n> Redhat 7.x- bi: Blocks sent to a block device (blocks/s).\n> Redhat AS3- bi: blocks sent out to a block device (in blocks/s)\n>\n> I would say that I probably agree, things should be relative to the \n> cpu. However, it doesn't seem to be something that was universally \n> agreed upon. 
Or maybe the man-pages were all wrong, and only got \n> updated recently.\n>\nLooks like the man pages are wrong, for RH7.3 at least. It says bi is \n'blocks written', but an actual test like 'dd if=/dev/zero of=/tmp/test \nbs=1024 count=16384' on an otherwise nearly idle RH7.3 box gives:\n procs memory swap io \nsystem cpu\n r b w swpd free buff cache si so bi bo in cs us \nsy id\n0 0 0 75936 474704 230452 953580 0 0 0 0 106 2527 0 \n0 99\n 0 0 0 75936 474704 230452 953580 0 0 0 16512 376 2572 \n0 2 98\n 0 0 0 75936 474704 230452 953580 0 0 0 0 105 2537 \n0 0 100\n\nWhich is in line with bo being 'blocks written'.\n\nM\n", "msg_date": "Wed, 27 Oct 2004 07:09:21 +0100", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" }, { "msg_contents": "Turns out the man page of vmstat in procps was changed on Oct 8 2002:\n\nhttp://cvs.sourceforge.net/viewcvs.py/procps/procps/vmstat.8?r1=1.1&r2=1.2\n\nin reaction to a debian bug report:\n\nhttp://bugs.debian.org/cgi-bin/bugreport.cgi?bug=157935\n\n-- \nMarkus Bertheau <[email protected]>\n\n", "msg_date": "Sat, 30 Oct 2004 23:12:25 +0200", "msg_from": "Markus Bertheau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" } ]
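The same empirical check can be run from psql if dd is not handy: generate a burst of writes and watch vmstat in another terminal. A rough sketch (the source table name is hypothetical; any sizable table will do):

    -- in another terminal: vmstat 1
    CREATE TABLE scratch_io_test AS SELECT * FROM some_big_table;
    CHECKPOINT;                      -- superuser; pushes dirty buffers to disk
    DROP TABLE scratch_io_test;

On the systems tested in this thread the burst shows up under bo, i.e. bo counts writes to the block device regardless of what the local man page claims.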
[ { "msg_contents": "It just seems that the more activity there is (that is when there's a\nlot of disk activity) the checkpoints happen quicker too.\n\nHere's a snapshot from the /var/log/messages - \n\nOct 26 17:21:22 vl-pe6650-003 postgres[13978]: [2-1] LOG: recycled\ntransaction\nlog file \"0000000B0000007E\"\nOct 26 17:21:22 vl-pe6650-003 postgres[13978]: [3-1] LOG: recycled\ntransaction\nlog file \"0000000B0000007F\"\n...\nOct 26 17:26:25 vl-pe6650-003 postgres[14273]: [2-1] LOG: recycled\ntransaction\nlog file \"0000000B00000080\"\nOct 26 17:26:25 vl-pe6650-003 postgres[14273]: [3-1] LOG: recycled\ntransaction\nlog file \"0000000B00000081\"\nOct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG: recycled\ntransaction\nlog file \"0000000B00000082\"\n...\nOct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG: recycled\ntransaction\nlog file \"0000000B00000083\"\nOct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG: recycled\ntransaction\nlog file \"0000000B00000084\"\nOct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG: recycled\ntransaction\nlog file \"0000000B00000085\"\n...\n\nI have increased them from default 3 to 15. Haven't altered the\nfrequency though....\n\nThanks,\nAnjan \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, October 26, 2004 5:53 PM\nTo: Anjan Dave\nCc: Rod Taylor; Postgresql Performance\nSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs \n\n\"Anjan Dave\" <[email protected]> writes:\n> None of the locks are in state false actually.\n\nIn that case you don't have a locking problem.\n\n> I don't have iostat on that machine, but vmstat shows a lot of writes\nto\n> the drives, and the runnable processes are more than 1:\n\nI get the impression that you are just saturating the write bandwidth of\nyour disk :-(\n\nIt's fairly likely that this happens during checkpoints. Look to see if\nthe postmaster has a child that shows itself as a checkpointer in \"ps\"\nwhen the saturation is occurring. You might be able to improve matters\nby altering the checkpoint frequency parameters (though beware that\neither too small or too large will likely make matters even worse).\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 26 Oct 2004 18:14:10 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs " }, { "msg_contents": "Anjan,\n\n> Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG:  recycled\n> transaction\n> log file \"0000000B00000082\"\n> ...\n> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG:  recycled\n> transaction\n> log file \"0000000B00000083\"\n> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG:  recycled\n> transaction\n> log file \"0000000B00000084\"\n> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG:  recycled\n> transaction\n> log file \"0000000B00000085\"\n\nLooks like you're running out of disk space for pending transactions. Can you \nafford more checkpoint_segments? Have you considered checkpoint_siblings?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 26 Oct 2004 17:42:53 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": "Ok, i was thinking from the disk perspective. Thanks!\r\n\r\n\t-----Original Message----- \r\n\tFrom: Tom Lane [mailto:[email protected]] \r\n\tSent: Tue 10/26/2004 6:37 PM \r\n\tTo: Anjan Dave \r\n\tCc: Matt Clark; Rod Taylor; Postgresql Performance \r\n\tSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs \r\n\t\r\n\t\r\n\r\n\t\"Anjan Dave\" <[email protected]> writes: \r\n\t> One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly, \r\n\t> it's the 'bo' column that shows high numbers (reads from disk). With so \r\n\t> many INSERT/UPDATEs, I would expect it the other way around... \r\n\r\n\tEr ... it *is* the other way around. bi is blocks in (to the CPU), \r\n\tbo is blocks out (from the CPU). \r\n\r\n\t regards, tom lane \r\n\r\n", "msg_date": "Tue, 26 Oct 2004 18:45:37 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": "Josh,\r\n \r\nI have increased them to 30, will see if that helps. Space is not a concern. slightly longer recovery time could be fine too. Wonder what people use (examples) for this value for high volume databases (except for dump/restore)...?\r\n \r\nI don't know what is checkpoint_sibling. I'll read about it if there's some info on it somewhere.\r\n \r\nThanks,\r\nAnjan\r\n \r\n-----Original Message----- \r\nFrom: Josh Berkus [mailto:[email protected]] \r\nSent: Tue 10/26/2004 8:42 PM \r\nTo: [email protected] \r\nCc: Anjan Dave; Tom Lane; Rod Taylor \r\nSubject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs\r\n\r\n\r\n\r\n\tAnjan, \r\n\r\n\t> Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG: recycled \r\n\t> transaction \r\n\t> log file \"0000000B00000082\" \r\n\t> ... \r\n\t> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG: recycled \r\n\t> transaction \r\n\t> log file \"0000000B00000083\" \r\n\t> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG: recycled \r\n\t> transaction \r\n\t> log file \"0000000B00000084\" \r\n\t> Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG: recycled \r\n\t> transaction \r\n\t> log file \"0000000B00000085\" \r\n\r\n\tLooks like you're running out of disk space for pending transactions. Can you \r\n\tafford more checkpoint_segments? Have you considered checkpoint_siblings? \r\n\r\n\t-- \r\n\t--Josh \r\n\r\n\tJosh Berkus \r\n\tAglio Database Solutions \r\n\tSan Francisco \r\n\r\n", "msg_date": "Tue, 26 Oct 2004 23:12:20 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": "Hello!\n\n My name is TTK, and I'm a software engineer at the Internet Archive's \nData Repository department. We have recently started using postgresql \nfor a couple of projects (we have historically been a MySQL outfit), \nand today my co-worker noticed psql eating memory like mad when invoked \nwith a simple select statement incorporating a join of two tables.\n\n The platform is a heavily modified RedHat 7.3 Linux. We are using \nversion 7.4.5 of postgresql.\n\n The invocation was via sh script:\n\n#!/bin/bash\n\noutfile=$1\nif [ -z \"$outfile\" ]; then\n outfile=/0/brad/all_arcs.txt\nfi\n\n/usr/lib/postgresql/bin/psql -c 'select ServerDisks.servername,ServerDisks.diskserial,ServerDisks.diskmountpoint,DiskFiles.name,DiskFiles.md5 from DiskFiles,ServerDisks where DiskFiles.diskserial=ServerDisks.diskserial;' -F ' ' -A -t -o $outfile\n\n.. and the tables in question are somewhat large (hundreds of GB's \nof data), though we didn't expect that to be an issue as far as the \npsql process was concerned.\n\n We monitored server load via 'top -i -d 0.5' and watched the output \nfile for data. Over the course of about 200 seconds, psql's RSS \nclimbed to about 1.6 GB, and stayed there, while no data was written \nto the output file. Eventually 10133194 lines were written to the \noutput file, all at once, about 1.2GB's worth of data.\n\n I re-ran the select query using psql in interactive mode, and saw \nthe same results.\n\n I re-ran it again, using \"explain analyse\", and this time psql's \nRSS did *not* increase significantly. The result is here, if it \nhelps:\n\nbrad=# explain analyse select ServerDisks.servername,ServerDisks.diskserial,ServerDisks.diskmountpoint,DiskFiles.name,DiskFiles.md5 from DiskFiles,ServerDisks where DiskFiles.diskserial=ServerDisks.diskserial;\n QUERY PLAN \n------------------------------------------------------------------\n Hash Join (cost=22.50..65.00 rows=1000 width=274) (actual time=118.584..124653.729 rows=10133349 loops=1)\n Hash Cond: ((\"outer\".diskserial)::text = (\"inner\".diskserial)::text)\n -> Seq Scan on diskfiles (cost=0.00..20.00 rows=1000 width=198) (actual time=7.201..31336.063 rows=10133349 loops=1)\n -> Hash (cost=20.00..20.00 rows=1000 width=158) (actual time=90.821..90.821 rows=0 loops=1)\n -> Seq Scan on serverdisks (cost=0.00..20.00 rows=1000 width=158) (actual time=9.985..87.364 rows=2280 loops=1)\n Total runtime: 130944.586 ms\n\n At a guess, it looks like the data set is being buffered in its \nentirety by psql, before any data is written to the output file, \nwhich is surprising. I would have expected it to grab data as it \nappeared on the socket from postmaster and write it to disk. Is \nthere something we can do to stop psql from buffering results? \nDoes anyone know what's going on here?\n\n If the solution is to just write a little client that uses perl \nDBI to fetch rows one at a time and write them out, that's doable, \nbut it would be nice if psql could be made to \"just work\" without \nthe monster RSS.\n\n I'd appreciate any feedback. 
If you need any additional info, \nplease let me know and I will provide it.\n\n -- TTK\n [email protected]\n [email protected]\n\n", "msg_date": "Wed, 27 Oct 2004 00:57:48 -0700", "msg_from": "TTK Ciar <[email protected]>", "msg_from_op": true, "msg_subject": "psql large RSS (1.6GB)" }, { "msg_contents": "\nOn Oct 27, 2004, at 0:57, TTK Ciar wrote:\n\n> At a guess, it looks like the data set is being buffered in its\n> entirety by psql, before any data is written to the output file,\n> which is surprising. I would have expected it to grab data as it\n> appeared on the socket from postmaster and write it to disk. Is\n> there something we can do to stop psql from buffering results?\n> Does anyone know what's going on here?\n\n\tYes, the result set is sent back to the client before it can be used. \nAn easy workaround when dealing with this much data is to use a cursor. \n Something like this:\n\ndb# start transaction;\nSTART TRANSACTION\ndb# declare logcur cursor for select * from some_table;\nDECLARE CURSOR\ndb# fetch 5 in logcur;\n[...]\n(5 rows)\n\n\tThis will do approximately what you expected the select to do in the \nfirst, place, but the fetch will decide how many rows to buffer into \nthe client at a time.\n\n> If the solution is to just write a little client that uses perl\n> DBI to fetch rows one at a time and write them out, that's doable,\n> but it would be nice if psql could be made to \"just work\" without\n> the monster RSS.\n\n\tIt wouldn't make a difference unless that driver implements the \nunderlying protocol on its own.\n\n--\nSPY My girlfriend asked me which one I like better.\npub 1024/3CAE01D5 1994/11/03 Dustin Sallings <[email protected]>\n| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE\nL_______________________ I hope the answer won't upset her. 
____________\n\n", "msg_date": "Sat, 30 Oct 2004 23:02:54 -0700", "msg_from": "Dustin Sallings <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql large RSS (1.6GB)" }, { "msg_contents": "В Срд, 27.10.2004, в 09:57, TTK Ciar пишет:\n\n> brad=# explain analyse select ServerDisks.servername,ServerDisks.diskserial,ServerDisks.diskmountpoint,DiskFiles.name,DiskFiles.md5 from DiskFiles,ServerDisks where DiskFiles.diskserial=ServerDisks.diskserial;\n> QUERY PLAN \n> ------------------------------------------------------------------\n> Hash Join (cost=22.50..65.00 rows=1000 width=274) (actual time=118.584..124653.729 rows=10133349 loops=1)\n> Hash Cond: ((\"outer\".diskserial)::text = (\"inner\".diskserial)::text)\n> -> Seq Scan on diskfiles (cost=0.00..20.00 rows=1000 width=198) (actual time=7.201..31336.063 rows=10133349 loops=1)\n> -> Hash (cost=20.00..20.00 rows=1000 width=158) (actual time=90.821..90.821 rows=0 loops=1)\n> -> Seq Scan on serverdisks (cost=0.00..20.00 rows=1000 width=158) (actual time=9.985..87.364 rows=2280 loops=1)\n> Total runtime: 130944.586 ms\n\nYou should run ANALYZE on your database once in a while.\n\n-- \nMarkus Bertheau <[email protected]>\n\n", "msg_date": "Sun, 31 Oct 2004 11:27:01 +0100", "msg_from": "Markus Bertheau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql large RSS (1.6GB)" }, { "msg_contents": "On Sat, 30 Oct 2004, Dustin Sallings wrote:\n\n> > If the solution is to just write a little client that uses perl\n> > DBI to fetch rows one at a time and write them out, that's doable,\n> > but it would be nice if psql could be made to \"just work\" without\n> > the monster RSS.\n>\n> \tIt wouldn't make a difference unless that driver implements the\n> underlying protocol on its own.\n\nEven though we can tell people to make use of cursors, it seems that\nmemory usage for large result sets should be addressed. A quick search of\nthe archives does not reveal any discussion about having libpq spill to\ndisk if a result set reaches some threshold. Has this been canvassed in\nthe past?\n\nThanks,\n\nGavin\n", "msg_date": "Tue, 2 Nov 2004 00:45:07 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql large RSS (1.6GB)" }, { "msg_contents": "Gavin Sherry wrote:\n> On Sat, 30 Oct 2004, Dustin Sallings wrote:\n> \n> > > If the solution is to just write a little client that uses perl\n> > > DBI to fetch rows one at a time and write them out, that's doable,\n> > > but it would be nice if psql could be made to \"just work\" without\n> > > the monster RSS.\n> >\n> > \tIt wouldn't make a difference unless that driver implements the\n> > underlying protocol on its own.\n> \n> Even though we can tell people to make use of cursors, it seems that\n> memory usage for large result sets should be addressed. A quick search of\n> the archives does not reveal any discussion about having libpq spill to\n> disk if a result set reaches some threshold. Has this been canvassed in\n> the past?\n\nNo, I don't remember hearing this discussed and I don't think most\npeople would want libpq spilling to disk by default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 1 Nov 2004 09:04:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql large RSS (1.6GB)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> No, I don't remember hearing this discussed and I don't think most\n> people would want libpq spilling to disk by default.\n\nFar more useful would be some sort of streaming API to let the\napplication process the rows as they arrive, or at least fetch the rows\nin small batches (the V3 protocol supports the latter even without any\nexplicit use of a cursor). I'm not sure if this can be bolted onto the\nexisting libpq framework reasonably, but that's the direction I'd prefer\nto go in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Nov 2004 11:03:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql large RSS (1.6GB) " }, { "msg_contents": "Tom,\n\n> Far more useful would be some sort of streaming API to let the\n> application process the rows as they arrive, or at least fetch the rows\n> in small batches (the V3 protocol supports the latter even without any\n> explicit use of a cursor).  I'm not sure if this can be bolted onto the\n> existing libpq framework reasonably, but that's the direction I'd prefer\n> to go in.\n\nI think that TelegraphCQ incorporates this. However, I'm not sure whether \nit's a portable component; it may be too tied in to their streaming query \nengine. They have talked about porting their \"background query\" patch for \nPSQL, though ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 1 Nov 2004 10:49:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql large RSS (1.6GB)" } ]
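Tying Dustin's cursor suggestion back to the original dump-to-file job: the batching can be driven entirely from psql, redirecting output with \o so the client never holds more than one batch in memory. A sketch using the tables from the original post; the batch size is arbitrary, and in practice the FETCH would be repeated from a script until it returns no rows:

    \a
    \t
    \f ' '                     -- unaligned, tuples-only, like the original -A -t -F ' '
    \o /0/brad/all_arcs.txt
    BEGIN;
    DECLARE arcs CURSOR FOR
      SELECT sd.servername, sd.diskserial, sd.diskmountpoint, df.name, df.md5
        FROM DiskFiles df JOIN ServerDisks sd ON sd.diskserial = df.diskserial;
    FETCH 100000 FROM arcs;
    FETCH 100000 FROM arcs;    -- ...repeat until no rows come back
    CLOSE arcs;
    COMMIT;
    \o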
[ { "msg_contents": "I would like to thank everyone for their timely suggestions.\n\n \n\nThe problem appears to be resolved now. We verified/modified -\nlocking/indexes/vacuum/checkpoints/IO bottleneck/queries, etc.\n\n \n\nCouple significant changes were the number of checkpoint segments were\nincreased, and we moved over the database to a new SAN RAID10 volume\n(which was in plan anyway, just did it sooner).\n\n \n\n \n\nThanks,\nAnjan\n\n \n\n _____ \n\nFrom: Anjan Dave \nSent: Monday, October 25, 2004 4:53 PM\nTo: [email protected]\nSubject: [PERFORM] can't handle large number of INSERT/UPDATEs\n\n \n\nHi,\n\n \n\nI am dealing with an app here that uses pg to handle a few thousand\nconcurrent web users. It seems that under heavy load, the INSERT and\nUPDATE statements to one or two specific tables keep queuing up, to the\ncount of 150+ (one table has about 432K rows, other has about 2.6Million\nrows), resulting in 'wait's for other queries, and then everything piles\nup, with the load average shooting up to 10+. \n\n \n\nWe (development) have gone through the queries/explain analyzes and made\nsure the appropriate indexes exist among other efforts put in.\n\n \n\nI would like to know if there is anything that can be changed for better\nfrom the systems perspective. Here's what I have done and some recent\nchanges from the system side:\n\n \n\n-Upgraded from 7.4.0 to 7.4.1 sometime ago\n\n-Upgraded from RH8 to RHEL 3.0\n\n-The settings from postgresql.conf (carried over, basically) are:\n\n shared_buffers = 10240 (80MB)\n\n max_connections = 400\n\n sort_memory = 1024\n\n effective_cache_size = 262144 (2GB)\n\n checkpoint_segments = 15\n\nstats_start_collector = true\n\nstats_command_string = true \n\nRest everything is at default\n\n \n\nIn /etc/sysctl.conf (512MB shared mem)\n\nkernel.shmall = 536870912\n\nkernel.shmmax = 536870912\n\n \n\n-This is a new Dell 6650 (quad XEON 2.2GHz, 8GB RAM, Internal HW\nRAID10), RHEL 3.0 (2.4.21-20.ELsmp), PG 7.4.1\n\n-Vaccum Full run everyday\n\n-contrib/Reindex run everyday\n\n-Disabled HT in BIOS\n\n \n\nI would greatly appreciate any helpful ideas.\n\n \n\nThanks in advance,\n\n \n\nAnjan\n\n\n\n\n\n\n\n\n\n\n\n\nI would like to thank everyone for their timely\nsuggestions.\n \nThe problem appears to be resolved now. We\nverified/modified  - locking/indexes/vacuum/checkpoints/IO bottleneck/queries,\netc.\n \nCouple significant changes were the number\nof checkpoint segments were increased, and we moved over the database to a new\nSAN RAID10 volume (which was in plan anyway, just did it sooner).\n \n \nThanks,\nAnjan\n \n\n\n\n\nFrom: Anjan Dave \nSent: Monday, October 25, 2004\n4:53 PM\nTo:\[email protected]\nSubject: [PERFORM] can't handle\nlarge number of INSERT/UPDATEs\n\n \nHi,\n \nI am dealing with an app here that uses pg to handle a few\nthousand concurrent web users. It seems that under heavy load, the INSERT and\nUPDATE statements to one or two specific tables keep queuing up, to the count\nof 150+ (one table has about 432K rows, other has about 2.6Million rows),\nresulting in ‘wait’s for other queries, and then everything piles\nup, with the load average shooting up to 10+. \n \nWe (development) have gone through the queries/explain\nanalyzes and made sure the appropriate indexes exist among other efforts put\nin.\n \nI would like to know if there is anything that can be\nchanged for better from the systems perspective. 
Here’s what I have done\nand some recent changes from the system side:\n \n-Upgraded from 7.4.0 to 7.4.1 sometime ago\n-Upgraded from RH8 to RHEL 3.0\n-The settings from postgresql.conf (carried over, basically)\nare:\n           \nshared_buffers = 10240 (80MB)\n           \nmax_connections = 400\n           \nsort_memory = 1024\n           \neffective_cache_size = 262144 (2GB)\n           \ncheckpoint_segments = 15\nstats_start_collector = true\nstats_command_string = true \nRest everything is at default\n \nIn /etc/sysctl.conf (512MB shared\nmem)\nkernel.shmall = 536870912\nkernel.shmmax = 536870912\n \n-This is a new Dell 6650 (quad XEON 2.2GHz, 8GB RAM,\nInternal HW RAID10), RHEL 3.0 (2.4.21-20.ELsmp), PG 7.4.1\n-Vaccum Full run everyday\n-contrib/Reindex run everyday\n-Disabled HT in BIOS\n \nI would greatly appreciate any helpful ideas.\n \nThanks in advance,\n \nAnjan", "msg_date": "Thu, 28 Oct 2004 11:38:26 -0400", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Summary: can't handle large number of INSERT/UPDATEs" } ]
[ { "msg_contents": "Pg: 7.4.5\n8G ram\n200G RAID5\n\nI have my fsm set as such:\nmax_fsm_pages = 300000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 500 # min 100, ~50 bytes each\n\n\nI just did a vacuum full on one table and saw this result:\nINFO: analyzing \"cdm.cdm_fed_agg_purch\"\nINFO: \"cdm_fed_agg_purch\": 667815 pages, 3000 rows sampled, 52089570 \nestimated total rows\n\n\nMy question is this: I have about 8 databases running on this server. \nWhen I do a vacuum full on each of these databases, there is a INFO \nsection that I assume is the total pages used for that database. Should \nadd ALL these individual pages together and pad the total and use this \nas my new max_fsm_pages? Should I do the same thing with max_fsm_relations?\n\nTIA\nPatrick\n", "msg_date": "Fri, 29 Oct 2004 07:04:53 -0700", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "determining max_fsm_pages" }, { "msg_contents": "> Pg: 7.4.5\n> 8G ram\n> 200G RAID5\n> \n> I have my fsm set as such:\n> max_fsm_pages = 300000 # min max_fsm_relations*16, 6 bytes each\n> max_fsm_relations = 500 # min 100, ~50 bytes each\n> \n> \n> I just did a vacuum full on one table and saw this result:\n> INFO: analyzing \"cdm.cdm_fed_agg_purch\"\n> INFO: \"cdm_fed_agg_purch\": 667815 pages, 3000 rows sampled, 52089570 \n> estimated total rows\n> \n> \n> My question is this: I have about 8 databases running on this server. \n> When I do a vacuum full on each of these databases, there is a INFO \n> section that I assume is the total pages used for that database. Should \n> add ALL these individual pages together and pad the total and use this \n> as my new max_fsm_pages? Should I do the same thing with max_fsm_relations?\n\nI think that's too much and too big FSM affects performance in my\nopinion. The easiest way to calculate appropreate FSM size is doing\nvacuumdb -a -v and watching the message. At the very end, you would\nsee something like:\n\nINFO: free space map: 13 relations, 1447 pages stored; 1808 total pages needed\nDETAIL: Allocated FSM size: 100 relations + 1600 pages = 19 kB shared memory.\n\nIn this case 1808 is the minimum FSM size. Of course this number would\nchange depending on the frequency of VACUUM. Therefore you need some\nroom for the FSM size.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 29 Oct 2004 23:31:51 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: determining max_fsm_pages" }, { "msg_contents": "Patrick Hatcher <[email protected]> writes:\n> My question is this: I have about 8 databases running on this server. \n> When I do a vacuum full on each of these databases, there is a INFO \n> section that I assume is the total pages used for that database. Should \n> add ALL these individual pages together and pad the total and use this \n> as my new max_fsm_pages? Should I do the same thing with max_fsm_relations?\n\nNo, the numbers shown at the end of a vacuum verbose printout reflect\nthe current cluster-wide FSM demand. BTW you do *not* want to use FULL\nbecause that's not going to reflect the FSM requirements when you are\njust running normal vacuums.\n\nI would vacuum all your databases (to make sure each one's FSM contents\nare pretty up-to-date) and then take the numbers shown by the last one\nas your targets.\n\nIf you find yourself having to raise max_fsm_relations, it may be\nnecessary to repeat the vacuuming cycle before you can get a decent\ntotal for max_fsm_pages. 
IIRC, the vacuum printout does include in\n\"needed\" a count of pages that it would have stored if it'd had room;\nbut this is only tracked for relations that have an FSM relation entry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Oct 2004 10:37:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: determining max_fsm_pages " } ]
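Putting Tom's and Tatsuo's advice together, the workflow is roughly: vacuum every database, read the free space map summary printed at the end of the last run, then set the limits with some headroom. A sketch reusing the sample output quoted earlier in the thread; the conf values are illustrative only:

    -- in each database (the last one's summary reflects cluster-wide demand):
    VACUUM VERBOSE;
    -- INFO:  free space map: 13 relations, 1447 pages stored; 1808 total pages needed
    -- DETAIL:  Allocated FSM size: 100 relations + 1600 pages = 19 kB shared memory.

    -- postgresql.conf: keep max_fsm_pages comfortably above "total pages
    -- needed" and max_fsm_relations above the relation count, e.g.
    --   max_fsm_relations = 500
    --   max_fsm_pages     = 50000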
[ { "msg_contents": "\n\nHi ,\n\nGist indexes take a long time to create as compared\nto normal indexes is there any way to speed them up ?\n\n(for example by modifying sort_mem or something temporarily )\n\nRegds\nMallah.\n", "msg_date": "Mon, 1 Nov 2004 02:33:10 +0530 (IST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Speeding up Gist Index creations" }, { "msg_contents": "Mallah,\n\n> Gist indexes take a long time to create as compared\n> to normal indexes is there any way to speed them up ?\n>\n> (for example by modifying sort_mem or something temporarily )\n\nMore sort_mem will indeed help. So will more RAM and a faster CPU.\n\nOur GIST-index-creation process is probably not optimized; this has been \nmarked as \"needs work\" in the code for several versions. If you know anyone \nwho can help Oleg & Teodor out, be put them in touch ...\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 31 Oct 2004 16:01:15 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up Gist Index creations" }, { "msg_contents": "On Mon, 2004-11-01 at 11:01, Josh Berkus wrote:\n> > Gist indexes take a long time to create as compared\n> > to normal indexes is there any way to speed them up ?\n> >\n> > (for example by modifying sort_mem or something temporarily )\n> \n> More sort_mem will indeed help.\n\nHow so? sort_mem improves index creation for B+-tree because we\nimplement bulk loading; there is no implementation of bulk loading for\nGiST, so I don't see how sort_mem will help.\n\n-Neil\n\n\n", "msg_date": "Tue, 02 Nov 2004 12:35:55 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up Gist Index creations" }, { "msg_contents": "Neil,\n\n> How so? sort_mem improves index creation for B+-tree because we\n> implement bulk loading; there is no implementation of bulk loading for\n> GiST, so I don't see how sort_mem will help.\n\nAh, wasn't aware of that deficiency.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 1 Nov 2004 21:58:35 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up Gist Index creations" } ]
[ { "msg_contents": "Hello,\n\n \n\nI am trying to understand the output of the 'ipcs' command during peak\nactivity and how I can use it to possibly tune the shared_buffers...\n\n \n\nHere's what I see right now: (ipcs -m) - (Host is RHAS 3.0)\n\n \n\n------ Shared Memory Segments --------\n\nkey shmid owner perms bytes nattch status\n\n0x0052e2c1 1966080 postgres 600 92078080 322\n\n \n\nWhat is nattch? Is this the num of segments attached? Is it saying that\nabout 92MB is used out of 512MB?\n\n \n\n-Shared memory segment size is defined to be 512MB\n\n \n\n \n\n-Currently, shared_buffers are at 80MB (10240)\n\n \n\n \n\nHere's the 'top' output:\n\n \n\n12:29:42 up 24 days, 15:04, 6 users, load average: 2.28, 1.07, 1.07\n\n421 processes: 414 sleeping, 3 running, 4 zombie, 0 stopped\n\nCPU states: cpu user nice system irq softirq iowait idle\n\n total 83.6% 0.0% 40.8% 0.0% 7.6% 76.4% 190.0%\n\n cpu00 20.9% 0.0% 9.0% 0.3% 0.1% 22.5% 46.8%\n\n cpu01 19.2% 0.0% 10.6% 0.0% 7.3% 14.4% 48.3%\n\n cpu02 15.0% 0.0% 7.3% 0.0% 0.0% 8.6% 68.9%\n\n cpu03 28.6% 0.0% 14.0% 0.0% 0.1% 31.0% 26.0%\n\nMem: 7973712k av, 7675856k used, 297856k free, 0k shrd, 149220k\nbuff\n\n 3865444k actv, 2638404k in_d, 160092k in_c\n\nSwap: 4096532k av, 28k used, 4096504k free 6387092k\ncached\n\n \n\n \n\nCan I conclude anything from these outputs and the buffer setting?\n\n \n\n \n\nAppreciate any thoughts.\n\n \n\n \n\nThanks,\nAnjan\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI am trying to understand the output of the ‘ipcs’\ncommand during peak activity and how I can use it to possibly tune the\nshared_buffers…\n \nHere’s what I see right now: (ipcs –m) – (Host\nis RHAS 3.0)\n \n------ Shared Memory Segments --------\nkey        shmid      owner      perms      bytes     \nnattch     status\n0x0052e2c1 1966080    postgres  600        92078080   322\n \nWhat is nattch? Is this the num of segments attached? Is it\nsaying that about 92MB is used out of 512MB?\n \n-Shared memory segment size is defined to be 512MB\n \n \n-Currently, shared_buffers are at 80MB (10240)\n \n \nHere’s the ‘top’ output:\n \n12:29:42  up 24 days, 15:04,  6 users,  load average: 2.28,\n1.07, 1.07\n421 processes: 414 sleeping, 3 running, 4 zombie, 0 stopped\nCPU states:  cpu    user    nice  system    irq  softirq \niowait    idle\n           total   83.6%    0.0%   40.8%   0.0%     7.6%  \n76.4%  190.0%\n           cpu00   20.9%    0.0%    9.0%   0.3%     0.1%  \n22.5%   46.8%\n           cpu01   19.2%    0.0%   10.6%   0.0%     7.3%  \n14.4%   48.3%\n           cpu02   15.0%    0.0%    7.3%   0.0%     0.0%   \n8.6%   68.9%\n           cpu03   28.6%    0.0%   14.0%   0.0%     0.1%  \n31.0%   26.0%\nMem:  7973712k av, 7675856k used,  297856k free,       0k\nshrd,  149220k buff\n                   3865444k actv, 2638404k in_d,  160092k\nin_c\nSwap: 4096532k av,      28k used, 4096504k free                \n6387092k cached\n \n \nCan I conclude anything from these outputs and the buffer\nsetting?\n \n \nAppreciate any thoughts.\n \n \nThanks,\nAnjan", "msg_date": "Mon, 1 Nov 2004 12:37:53 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "shared_buffers and Shared Memory Segments" } ]
[ { "msg_contents": "Hi,\n\nI have some views that are used to make some queries simplest. But when\nI use them there is a performance loss, because the query don't use\nindexes anymore. Below I'm sending the query with and without the view,\nits execution times, explains and the view's body. I didn't understood\nthe why the performance is so different (20x in seconds, 1000x in page\nreads) if the queries are semantically identical.\n\nShouldn't I use views in situations like this? Is there some way to use\nthe view and the indexes?\n\n--------------\n-- View body\n--------------\n\nCREATE VIEW vw_test AS\nSELECT e.person_id, ci.city_id, ci.city_name, s.state_id,\ns.state_acronym\n FROM address a\n LEFT OUTER JOIN zip zp ON a.zip_code_id = zp.zip_code_id\n LEFT OUTER JOIN city ci ON ci.city_id = zp.city_id\n LEFT OUTER JOIN state s ON ci.state_id = s.state_id\n WHERE a.adress_type = 2;\n\n---------------------\n-- Without the view\n---------------------\n\nSELECT p.person_id, ci.city_id, ci.city_name, s.state_id,\ns.state_acronym\n FROM person p\n LEFT OUTER JOIN address e USING (person_id)\n LEFT OUTER JOIN zip zp ON a.zip_code_id = zp.zip_code_id\n LEFT OUTER JOIN city ci ON ci.city_id = zp.city_id\n LEFT OUTER JOIN state u ON ci.state_id = s.state_id\n WHERE a.adress_type = 2\n AND p.person_id = 19257;\n\n person_id | city_id | city_name | state_id | state_acronym\n-----------+-----------+-----------+----------+---------------\n 19257 | 70211 | JAGUARAO | 22 | RS\n(1 record)\nTime: 110,047 ms\n\nQUERY PLAN\n---------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..20.04 rows=1 width=33)\n Join Filter: (\"outer\".state_id = \"inner\".state_id)\n -> Nested Loop Left Join (cost=0.00..18.43 rows=1 width=27)\n -> Nested Loop Left Join (cost=0.00..13.87 rows=1 width=8)\n -> Nested Loop (cost=0.00..10.75 rows=1 width=8)\n -> Index Scan using pk_person on person p\n(cost=0.00..5.41 rows=1 width=4)\n Index Cond: (person_id = 19257)\n -> Index Scan using un_address_adress_type on\naddress e (cost=0.00..5.33 rows=1 width=8)\n Index Cond: (19257 = person_id)\n Filter: (adress_type = 2)\n -> Index Scan using pk_zip on zip zp (cost=0.00..3.11\nrows=1 width=8)\n Index Cond: (\"outer\".zip_code_id = zp.zip_code_id)\n -> Index Scan using pk_city on city ci (cost=0.00..4.55\nrows=1 width=23)\n Index Cond: (ci.city_id = \"outer\".city_id)\n -> Seq Scan on state u (cost=0.00..1.27 rows=27 width=10)\n(15 records)\n\n---------------------\n-- With the view\n---------------------\n\nSELECT p.person_id, t.city_id, t.city_name, t.state_id, t.state_acronym\n FROM person p\n LEFT OUTER JOIN vw_test t USING (person_id)\n WHERE p.person_id = 19257;\n\n person_id | city_id | city_name | state_id | state_acronym\n-----------+-----------+-----------+----------+--------------\n 19257 | 70211 | JAGUARAO | 22 | RS\n(1 record)\nTime: 1982,743 ms\n\nQUERY PLAN\n---------------------------------------------------------------------\n Nested Loop Left Join (cost=10921.71..28015.63 rows=1 width=33)\n Join Filter: (\"outer\".person_id = \"inner\".person_id)\n -> Index Scan using pk_person on person p (cost=0.00..5.41 rows=1\nwidth=4)\n Index Cond: (person_id = 19257)\n -> Hash Left Join (cost=10921.71..27799.55 rows=16854 width=33)\n Hash Cond: (\"outer\".state_id = \"inner\".state_id)\n -> Hash Left Join (cost=10920.38..27545.40 rows=16854\nwidth=27)\n Hash Cond: (\"outer\".city_id = \"inner\".city_id)\n -> Hash Left Join (cost=10674.20..26688.88 rows=16854\nwidth=8)\n Hash Cond: 
(\"outer\".zip_code_id =\n\"inner\".zip_code_id)\n -> Seq Scan on address e (cost=0.00..1268.67\nrows=16854 width=8)\n Filter: (adress_type = 2)\n -> Hash (cost=8188.36..8188.36 rows=387936\nwidth=8)\n -> Seq Scan on zip zp (cost=0.00..8188.36\nrows=387936 width=8)\n -> Hash (cost=164.94..164.94 rows=9694 width=23)\n -> Seq Scan on city ci (cost=0.00..164.94\nrows=9694 width=23)\n -> Hash (cost=1.27..1.27 rows=27 width=10)\n -> Seq Scan on state u (cost=0.00..1.27 rows=27\nwidth=10)\n(18 records)\n\nBest regards,\n\n-- \n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+\n\n", "msg_date": "Mon, 01 Nov 2004 19:40:30 -0200", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Performance difference when using views" }, { "msg_contents": "Alvaro Nunes Melo <[email protected]> writes:\n> I have some views that are used to make some queries simplest. But when\n> I use them there is a performance loss, because the query don't use\n> indexes anymore. Below I'm sending the query with and without the view,\n> its execution times, explains and the view's body.\n\nIt's not the same query, because you are implicitly changing the order\nof the LEFT JOINs when you group some of them into a subquery (view).\nJoin order is significant for outer joins ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Nov 2004 17:08:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance difference when using views " }, { "msg_contents": "On Mon, 2004-11-01 at 21:40, Alvaro Nunes Melo wrote:\n> Hi,\n> \n> I have some views that are used to make some queries simplest. But when\n> I use them there is a performance loss, because the query don't use\n> indexes anymore. Below I'm sending the query with and without the view,\n> its execution times, explains and the view's body. I didn't understood\n> the why the performance is so different (20x in seconds, 1000x in page\n> reads) if the queries are semantically identical.\n> \n> Shouldn't I use views in situations like this? 
Is there some way to use\n> the view and the indexes?\n> \n> --------------\n> -- View body\n> --------------\n> \n> CREATE VIEW vw_test AS\n> SELECT e.person_id, ci.city_id, ci.city_name, s.state_id,\n> s.state_acronym\n> FROM address a\n> LEFT OUTER JOIN zip zp ON a.zip_code_id = zp.zip_code_id\n> LEFT OUTER JOIN city ci ON ci.city_id = zp.city_id\n> LEFT OUTER JOIN state s ON ci.state_id = s.state_id\n> WHERE a.adress_type = 2;\n> \n> ---------------------\n> -- Without the view\n> ---------------------\n> \n> SELECT p.person_id, ci.city_id, ci.city_name, s.state_id,\n> s.state_acronym\n> FROM person p\n> LEFT OUTER JOIN address e USING (person_id)\n> LEFT OUTER JOIN zip zp ON a.zip_code_id = zp.zip_code_id\n> LEFT OUTER JOIN city ci ON ci.city_id = zp.city_id\n> LEFT OUTER JOIN state u ON ci.state_id = s.state_id\n> WHERE a.adress_type = 2\n> AND p.person_id = 19257;\n> \n\nTry this....\n\nSELECT p.person_id, ci.city_id, ci.city_name, s.state_id,\ns.state_acronym\n FROM person p\n LEFT OUTER JOIN ( address a\n LEFT OUTER JOIN zip zp ON a.zip_code_id = zp.zip_code_id\n LEFT OUTER JOIN city ci ON ci.city_id = zp.city_id\n LEFT OUTER JOIN state u ON ci.state_id = s.state_id )\n\t\t\t\tUSING (person_id)\n WHERE a.adress_type = 2\n AND p.person_id = 19257;\n\nWhich should return the same answer, and also hopefully the same plan.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Mon, 01 Nov 2004 22:28:50 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance difference when using views" } ]
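If the convenience of a view is still wanted, one option along the lines of Simon's rewrite is to fold person and the explicit join nesting into the view itself, so that a person_id restriction on the view can still reach the index on person. A sketch only: the address-type test moves into the ON clause so people without such an address are kept with NULLs, matching the LEFT JOIN usage above, and the resulting plan (and row counts) should be confirmed with EXPLAIN ANALYZE:

    CREATE VIEW vw_test_person AS
    SELECT p.person_id, ci.city_id, ci.city_name, s.state_id, s.state_acronym
      FROM person p
      LEFT OUTER JOIN ( address a
            LEFT OUTER JOIN zip   zp ON a.zip_code_id = zp.zip_code_id
            LEFT OUTER JOIN city  ci ON ci.city_id    = zp.city_id
            LEFT OUTER JOIN state s  ON ci.state_id   = s.state_id )
           ON a.person_id = p.person_id AND a.adress_type = 2;

    SELECT * FROM vw_test_person WHERE person_id = 19257;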
[ { "msg_contents": "Is there a way to restrict how much load a PostgreSQL server can take \nbefore dropping queries in order to safeguard the server? I was \nlooking at the login.conf (5) man page and while it allows me to limit \nby processor time this seems to not fit my specific needs.\n\nEssentially, I am looking for a sort of functionality similar to what \nSendmail and Apache have. Once the load of the system reaches a \ncertain defined limit the daemon drops tasks until such a time that it \ncan resume normal operation.\n\nWhile not necessarily common on my servers I have witnessed some fairly \nhigh load averages which may have led to the machine dropping outright. \n Any help on this matter would be appreciated.\n-- \n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n", "msg_date": "Tue, 02 Nov 2004 23:52:12 GMT", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Restricting Postgres" }, { "msg_contents": "On Tue, Nov 02, 2004 at 11:52:12PM +0000, Martin Foster wrote:\n> Is there a way to restrict how much load a PostgreSQL server can take \n> before dropping queries in order to safeguard the server? I was \n\nWell, you could limit the number of concurrent connections, and set\nthe query timeout to a relatively low level. What that ought to mean\nis that, under heavy load, some queries will abort.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nWhen my information changes, I alter my conclusions. What do you do sir?\n\t\t--attr. John Maynard Keynes\n", "msg_date": "Wed, 3 Nov 2004 09:17:43 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "On Tue, 2004-11-02 at 23:52, Martin Foster wrote:\n> Is there a way to restrict how much load a PostgreSQL server can take \n> before dropping queries in order to safeguard the server? I was \n> looking at the login.conf (5) man page and while it allows me to limit \n> by processor time this seems to not fit my specific needs.\n> \n> Essentially, I am looking for a sort of functionality similar to what \n> Sendmail and Apache have. Once the load of the system reaches a \n> certain defined limit the daemon drops tasks until such a time that it \n> can resume normal operation.\n\nSounds great... could you give more shape to the idea, so people can\ncomment on it?\n\nWhat limit? Measured how? Normal operation is what?\n\nDrop what? How to tell?\n\n> \n> While not necessarily common on my servers I have witnessed some fairly \n> high load averages which may have led to the machine dropping outright. \n> Any help on this matter would be appreciated.\n\nYou can limit the number of connections overall?\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 03 Nov 2004 19:21:24 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Simon Riggs wrote:\n> On Tue, 2004-11-02 at 23:52, Martin Foster wrote:\n> \n>>Is there a way to restrict how much load a PostgreSQL server can take \n>>before dropping queries in order to safeguard the server? I was \n>>looking at the login.conf (5) man page and while it allows me to limit \n>>by processor time this seems to not fit my specific needs.\n>>\n>>Essentially, I am looking for a sort of functionality similar to what \n>>Sendmail and Apache have. 
Once the load of the system reaches a \n>>certain defined limit the daemon drops tasks until such a time that it \n>>can resume normal operation.\n> \n> \n> Sounds great... could you give more shape to the idea, so people can\n> comment on it?\n> \n> What limit? Measured how? Normal operation is what?\n> \n> Drop what? How to tell?\n> \n> \n\nLet's use the example in Apache, there is the Apache::LoadAvgLimit \nmod_perl module which allows one to limit based on the system load \naverages. Here is an example of the configuration one would find:\n\n <Location /perl>\n PerlInitHandler Apache::LoadAvgLimit\n PerlSetVar LoadAvgLimit_1 3.00\n PerlSetVar LoadAvgLimit_5 2.00\n PerlSetVar LoadAvgLimit_15 1.50\n PerlSetVar LoadAvgRetryAfter 120\n </Location>\n\nThe end state is simple, once the load average moves above 3.00 for the \n1 minute average the web server will not process the CGI scripts or \nmod_perl applications under that directory. Instead it will return a \n503 error and save the system from being crushed by ever increasing load \naverages.\n\nOnly once the load average is below the defined limits will the server \nprocess requests as normal. This is not necessarily the nicest or \ncleanest way or doing things, but it does allow the Apache web server to \nprevent a collapse.\n\nThere are ways of restricting the size of files, number of concurrent \nprocesses and even memory being used by a daemon. This can be done \nthrough ulimit or the login.conf file if your system supports it. \nHowever, there is no way to restrict based on load averages, only \nprocessor time which is ineffective for a perpetually running daemon \nlike PostgreSQL has.\n\n>>While not necessarily common on my servers I have witnessed some fairly \n>>high load averages which may have led to the machine dropping outright. \n>> Any help on this matter would be appreciated.\n> \n> \n> You can limit the number of connections overall?\n> \n\nLimiting concurrent connections is not always the solution to the \nproblem. Problems can occur when there is a major spike in activity \nthat would be considered abnormal, due to outside conditions.\n\nFor example using Apache::DBI or pgpool the DBMS may be required to \nspawn a great deal of child processed in a short order of time. This \nin turn can cause a major spike in processor load and if unchecked by \nrunning as high demand queries the system can literally increase in load \nuntil the server buckles.\n\nI've seen this behavior before when restarting the web server during \nheavy loads. Apache goes from zero connections to a solid 120, \ncausing PostgreSQL to spawn that many children in a short order of time \njust to keep up with the demand.\n\nPostgreSQL undertakes a penalty when spawning a new client and accepting \na connection, this slows takes resources at every level to accomplish. \n However clients on the web server are hitting the server at an \naccelerated rate because of the slowed response, leading to even more \ndemand being placed on both machines.\n\nIn most cases the processor will be taxed and the load average high \nenough to cause even a noticeable delay when using a console, however it \nwill generally recover... slowly or in rare cases crash outright. 
In \nsuch a circumstance, having the database server refuse queries when the \nsanity of the system is concerned might come in handy for such a \ncircumstance.\n\nOf course, I am not blaming PostgreSQL, there are probably some \ninstabilities in the AMD64 port of FreeBSD 5.2.1 for dual processor \nsystems that lead to an increased chance of failure instead of recovery. \n However, if there was a way to prevent the process from reaching \nthose limits, it may avoid the problem altogether.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n", "msg_date": "Wed, 03 Nov 2004 21:25:45 GMT", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Martin Foster wrote:\n> Simon Riggs wrote:\n> \n>> On Tue, 2004-11-02 at 23:52, Martin Foster wrote:\n[...]\n\n> I've seen this behavior before when restarting the web server during \n> heavy loads. Apache goes from zero connections to a solid 120, \n> causing PostgreSQL to spawn that many children in a short order of time \n> just to keep up with the demand.\n> \n\nBut wouldn't limiting the number of concurrent connections do this at \nthe source. If you tell it that \"You can at most have 20 connections\" \nyou would never have postgres spawn 120 children.\nI'm not sure what apache does if it can't get a DB connection, but it \nseems exactly like what you want.\n\nNow, if you expected to have 50 clients that all like to just sit on \nopen connections, you could leave the number of concurrent connections high.\n\nBut if your only connect is from the webserver, where all of them are \ndesigned to be short connections, then leave the max low.\n\nThe other possibility is having the webserver use connection pooling, so \nit uses a few long lived connections. But even then, you could limit it \nto something like 10-20, not 120.\n\nJohn\n=:->", "msg_date": "Wed, 03 Nov 2004 17:25:27 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "John A Meinel wrote:\n\n> Martin Foster wrote:\n> \n>> Simon Riggs wrote:\n>>\n>>> On Tue, 2004-11-02 at 23:52, Martin Foster wrote:\n> \n> [...]\n> \n>> I've seen this behavior before when restarting the web server during \n>> heavy loads. Apache goes from zero connections to a solid 120, \n>> causing PostgreSQL to spawn that many children in a short order of \n>> time just to keep up with the demand.\n>>\n> \n> But wouldn't limiting the number of concurrent connections do this at \n> the source. If you tell it that \"You can at most have 20 connections\" \n> you would never have postgres spawn 120 children.\n> I'm not sure what apache does if it can't get a DB connection, but it \n> seems exactly like what you want.\n> \n> Now, if you expected to have 50 clients that all like to just sit on \n> open connections, you could leave the number of concurrent connections \n> high.\n> \n> But if your only connect is from the webserver, where all of them are \n> designed to be short connections, then leave the max low.\n> \n> The other possibility is having the webserver use connection pooling, so \n> it uses a few long lived connections. But even then, you could limit it \n> to something like 10-20, not 120.\n> \n> John\n> =:->\n> \n\nI have a dual processor system that can support over 150 concurrent \nconnections handling normal traffic and load. 
Now suppose I setup \nApache to spawn all of its children instantly, what will happen is that \nthe PostgreSQL server will also receive 150 attempts at \nconnection.\n\nThis will spawn 150 children in a short order of time and as this takes \nplace clients can connect and start requesting information not allowing \nthe machine to settle down to normal traffic. That spike when \ninitiated can cripple the machine or even the webserver if a deadlocked \ntransaction is introduced.\n\nBecause on the webserver side a slowdown in the database means that it \nwill just get that many more connection attempts pooled from the \nclients. As they keep clicking and hitting reload over and over to get \na page load, that server starts to buckle hitting unbelievably high load \naverages.\n\nWhen the above happened once, I lost the ability to type on a console \nbecause of a 60+ (OpenBSD) load average on a single processor system. \nThat is why Apache now drops a 503 Service Unavailable when loads get \ntoo high.\n\nIt's that spike I worry about and it can happen for whatever reason. It \ncould just as easily be triggered by a massive concurrent request for \nprocessing of an expensive query done in DDOS fashion. This may not \naffect the webserver at all, at least immediately, but the same problem \ncan come into effect.\n\nLimiting connections helps, but it's not the silver bullet and limits \nyour ability to support more connections because of that initial spike. \n The penalty for forking a new child is hardly unexpected, even \nApache will show the same effect when restarted in a high traffic time.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n", "msg_date": "Wed, 03 Nov 2004 18:35:52 -0500", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "> I have a dual processor system that can support over 150 concurrent \n> connections handling normal traffic and load. Now suppose I setup \n> Apache to spawn all of its children instantly, what will \n...\n> This will spawn 150 children in a short order of time and as \n> this takes \n\n\"Doctor, it hurts when I do this!\"\n\"Well, don't do that then...\"\n\nSorry, couldn't resist ;-)\n\nOur Apache/PG driven website also needs to be able to deal with occasional\nlarge peaks, so what we do is:\n\nStartServers 15\t\t# Don't create too many children initially\nMinSpareServers 10\t# Always have at least 10 spares lying around\nMaxSpareServers 20\t# But no more than 20\nMaxClients 150\t\t# Up to 150 - the default 256 is too much for our\nRAM\n\n\nSo on server restart 15 Apache children are created, then one new child\nevery second up to a maximum of 150.\n\nApache's 'ListenBackLog' is around 500 by default, so there's plenty of\nscope for queuing inbound requests while we wait for sufficient children to\nbe spawned.\n\nIn addition we (as _every_ high load site should) run Squid as an\naccelerator, which dramatically increases the number of client connections\nthat can be handled. Across 2 webservers at peak times we've had 50,000\nconcurrently open http & https client connections to Squid, with 150 Apache\nchildren doing the work that squid can't (i.e. all the dynamic stuff), and\nPG (on a separate box of course) whipping through nearly 800 mixed selects,\ninserts and updates per second - and then had to restart Apache on one of\nthe servers for a config change... 
Not a problem :-)\n\nOne little tip - if you run squid on the same machine as apache, and use a\ndual-proc box, then because squid is single-threaded it will _never_ take\nmore than half the CPU - nicely self balancing in a way.\n\nM\n\n", "msg_date": "Thu, 4 Nov 2004 08:29:39 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "On Wed, 2004-11-03 at 21:25, Martin Foster wrote:\n> Simon Riggs wrote:\n> > On Tue, 2004-11-02 at 23:52, Martin Foster wrote:\n> > \n> >>Is there a way to restrict how much load a PostgreSQL server can take \n> >>before dropping queries in order to safeguard the server? I was \n> >>looking at the login.conf (5) man page and while it allows me to limit \n> >>by processor time this seems to not fit my specific needs.\n> >>\n> >>Essentially, I am looking for a sort of functionality similar to what \n> >>Sendmail and Apache have. Once the load of the system reaches a \n> >>certain defined limit the daemon drops tasks until such a time that it \n> >>can resume normal operation.\n> > \n> > \n> > Sounds great... could you give more shape to the idea, so people can\n> > comment on it?\n> > \n> > What limit? Measured how? Normal operation is what?\n> > \n> > Drop what? How to tell?\n> > \n> > \n> \n> Let's use the example in Apache, there is the Apache::LoadAvgLimit \n> mod_perl module which allows one to limit based on the system load \n> averages. Here is an example of the configuration one would find:\n> \n> <Location /perl>\n> PerlInitHandler Apache::LoadAvgLimit\n> PerlSetVar LoadAvgLimit_1 3.00\n> PerlSetVar LoadAvgLimit_5 2.00\n> PerlSetVar LoadAvgLimit_15 1.50\n> PerlSetVar LoadAvgRetryAfter 120\n> </Location>\n> \n> The end state is simple, once the load average moves above 3.00 for the \n> 1 minute average the web server will not process the CGI scripts or \n> mod_perl applications under that directory. Instead it will return a \n> 503 error and save the system from being crushed by ever increasing load \n> averages.\n> \n> Only once the load average is below the defined limits will the server \n> process requests as normal. This is not necessarily the nicest or \n> cleanest way or doing things, but it does allow the Apache web server to \n> prevent a collapse.\n> \n> There are ways of restricting the size of files, number of concurrent \n> processes and even memory being used by a daemon. This can be done \n> through ulimit or the login.conf file if your system supports it. \n> However, there is no way to restrict based on load averages, only \n> processor time which is ineffective for a perpetually running daemon \n> like PostgreSQL has.\n> \n\nAll workloads are not created equally, so mixing them can be tricky.\nThis will be better in 8.0 because seq scans don't spoil the cache.\n\nApache is effectively able to segregate the workloads because each\nworkload is \"in a directory\". SQL isn't stored anywhere for PostgreSQL\nto say \"just those ones please\", so defining which statements are in\nwhich workload is the tricky part.\n\nPostgreSQL workload management could look at userid, tables, processor\nload (?) 
and estimated cost to decide what to do.\n\nThere is a TODO item on limiting numbers of connections per\nuserid/group, in addition to the max number of sessions per server.\n\nPerhaps the easiest way would be to have the Apache workloads segregated\nby PostgreSQL userid, then limit connections to each.\n\n> For example using Apache::DBI or pgpool the DBMS may be required to \n> spawn a great deal of child processed in a short order of time. This \n> in turn can cause a major spike in processor load and if unchecked by \n> running as high demand queries the system can literally increase in load \n> until the server buckles.\n> \n\nThat's been nicely covered off by John and Matt on the other threads, so\nyou're sorted out for now and doesn't look like a bug in PostgreSQL.\n\n> Of course, I am not blaming PostgreSQL, there are probably some \n> instabilities in the AMD64 port of FreeBSD 5.2.1 for dual processor \n> systems that lead to an increased chance of failure instead of recovery. \n\nGood!\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Thu, 04 Nov 2004 12:19:00 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Matt Clark wrote:\n\n>>I have a dual processor system that can support over 150 concurrent \n>>connections handling normal traffic and load. Now suppose I setup \n>>Apache to spawn all of it's children instantly, what will \n> \n> ...\n> \n>>This will spawn 150 children in a short order of time and as \n>>this takes \n> \n> \n> \"Doctor, it hurts when I do this!\"\n> \"Well, don't do that then...\"\n> \n> Sorry, couldn't resist ;-)\n> \n> Our Apache/PG driven website also needs to be able to deal with occasional\n> large peaks, so what we do is:\n> \n> StartServers 15\t\t# Don't create too many children initially\n> MinSpareServers 10\t# Always have at least 10 spares lying around\n> MaxSpareServers 20\t# But no more than 20\n> MaxClients 150\t\t# Up to 150 - the default 256 is too much for our\n> RAM\n> \n> \n> So on server restart 15 Apache children are created, then one new child\n> every second up to a maximum of 150.\n> \n> Apache's 'ListenBackLog' is around 500 by default, so there's plenty of\n> scope for queuing inbound requests while we wait for sufficient children to\n> be spawned.\n> \n> In addition we (as _every_ high load site should) run Squid as an\n> accelerator, which dramatically increases the number of client connections\n> that can be handled. Across 2 webservers at peak times we've had 50,000\n> concurrently open http & https client connections to Squid, with 150 Apache\n> children doing the work that squid can't (i.e. all the dynamic stuff), and\n> PG (on a separate box of course) whipping through nearly 800 mixed selects,\n> inserts and updates per second - and then had to restart Apache on one of\n> the servers for a config change... Not a problem :-)\n> \n> One little tip - if you run squid on the same machine as apache, and use a\n> dual-proc box, then because squid is single-threaded it will _never_ take\n> more than half the CPU - nicely self balancing in a way.\n> \n> M\n> \n\nI've heard of the merits of Squid in the use as a reverse proxy. 
\nHowever, well over 99% of my traffic is dynamic, hence why I may be \nexperiencing behavior that people normally do not expect.\n\nAs I have said before in previous threads, the scripts are completely \ndatabase driven and at the time the database averaged 65 queries per \nsecond under MySQL before a migration, while the webserver was averaging \n2 to 4.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n", "msg_date": "Thu, 04 Nov 2004 08:10:38 -0500", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Simon Riggs wrote\n> \n> \n> All workloads are not created equally, so mixing them can be tricky.\n> This will be better in 8.0 because seq scans don't spoil the cache.\n> \n> Apache is effectively able to segregate the workloads because each\n> workload is \"in a directory\". SQL isn't stored anywhere for PostgreSQL\n> to say \"just those ones please\", so defining which statements are in\n> which workload is the tricky part.\n> \n> PostgreSQL workload management could look at userid, tables, processor\n> load (?) and estimated cost to decide what to do.\n> \n> There is a TODO item on limiting numbers of connections per\n> userid/group, in addition to the max number of sessions per server.\n> \n> Perhaps the easiest way would be to have the Apache workloads segregated\n> by PostgreSQL userid, then limit connections to each.\n> \n\nApache has a global setting for load average limits, the above was just \na module which extended the capability. It might also make sense to \nhave limitations set on schema's which can be used in a similar way to \nApache directories.\n\nWhile for most people the database protecting itself against a sudden \nsurge of high traffic would be undesirable. It can help those who run \ndynamically driven sites and get slammed by Slashdot for example.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n", "msg_date": "Thu, 04 Nov 2004 08:17:22 -0500", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "I am generally interested in a good solution for this. So far our\nsolution has been to increase the hardware to the point of allowing\n800 connections to the DB.\n\nI don't have the mod loaded for Apache, but we haven't had too many\nproblems there. The site is split pretty good between dynamic and\nnon-dynamic, it's largely Flash with several plugins to the DB. \nHowever we still can and have been slammed and up to point of the 800\nconnections.\n\nWhat I don't get is why not use pgpool? This should eliminate the\nrapid fire forking of postgres instanaces in the DB server. I'm\nassuming you app can safely handle a failure to connect to the DB\n(i.e. exceed number of DB connections). If not it should be fairly\nsimple to send a 503 header when it's unable to get the connection.\n\nOn Thu, 04 Nov 2004 08:17:22 -0500, Martin Foster\n<[email protected]> wrote:\n> Apache has a global setting for load average limits, the above was just\n> a module which extended the capability. It might also make sense to\n> have limitations set on schema's which can be used in a similar way to\n> Apache directories.\n> \n> While for most people the database protecting itself against a sudden\n> surge of high traffic would be undesirable. 
It can help those who run\n> dynamically driven sites and get slammed by Slashdot for example.\n> \n> \n> \n> Martin Foster\n> Creator/Designer Ethereal Realms\n> [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Thu, 4 Nov 2004 10:00:38 -0600", "msg_from": "Kevin Barnard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Restricting Postgres" }, { "msg_contents": "Kevin Barnard wrote:\n> I am generally interested in a good solution for this. So far our\n> solution has been to increase the hardware to the point of allowing\n> 800 connections to the DB.\n> \n> I don't have the mod loaded for Apache, but we haven't had too many\n> problems there. The site is split pretty good between dynamic and\n> non-dynamic, it's largely Flash with several plugins to the DB. \n> However we still can and have been slammed and up to point of the 800\n> connections.\n> \n> What I don't get is why not use pgpool? This should eliminate the\n> rapid fire forking of postgres instanaces in the DB server. I'm\n> assuming you app can safely handle a failure to connect to the DB\n> (i.e. exceed number of DB connections). If not it should be fairly\n> simple to send a 503 header when it's unable to get the connection.\n> \n\nNote, that I am not necessarily looking for a PostgreSQL solution to the \nmatter. Just a way to prevent the database from killing off the server \nit sits on, but looking at the load averages.\n\nI have attempted to make use of pgpool and have had some very poor \nperformance. There were constant error messages being sounded, load \naverages on that machine seemed to skyrocket and it just seemed to not \nbe suited for my needs.\n\nApache::DBI overall works better to what I require, even if it is not a \npool per sey. Now if pgpool supported variable rate pooling like \nApache does with it's children, it might help to even things out. That \nand you'd still get the spike if you have to start the webserver and \ndatabase server at or around the same time.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n", "msg_date": "Thu, 04 Nov 2004 11:15:12 -0500", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "> Apache::DBI overall works better to what I require, even if \n> it is not a \n> pool per sey. Now if pgpool supported variable rate pooling like \n> Apache does with it's children, it might help to even things \n> out. That \n> and you'd still get the spike if you have to start the webserver and \n> database server at or around the same time.\n\nI still don't quite get it though - you shouldn't be getting more than one\nchild per second being launched by Apache, so that's only one PG postmaster\nper second, which is really a trivial load. That is unless you have\n'StartServers' set high, in which case the 'obvious' answer is to lower it.\nAre you launching multiple DB connections per Apache process as well?\n\n", "msg_date": "Thu, 4 Nov 2004 16:33:33 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Matt - Very interesting information about squid effectiveness, thanks.\n\nMartin,\nYou mean your site had no images? No CSS files? No JavaScript files? 
Nearly\neverything is dynamic?\n\nI've found that our CMS spends more time sending a 23KB image to a dial up\nuser than it does generating and serving dynamic content.\n\nThis means that if you have a \"light\" squid process who caches and serves\nyour images and static content from it's cache then your apache processes\ncan truly focus on only the dynamic data.\n\nCase in point: A first time visitor hits your home page. A dynamic page is\ngenerated (in about 1 second) and served (taking 2 more seconds) which\ncontains links to 20 additional files (images, styles and etc). Then\nexpensive apache processes are used to serve each of those 20 files, which\ntakes an additional 14 seconds. Your precious application server processes\nhave now spent 14 seconds serving stuff that could have been served by an\nupstream cache.\n\nI am all for using upstream caches and SSL accelerators to take the load off\nof application servers. My apache children often take 16 or 20MB of RAM\neach. Why spend all of that on a 1.3KB image?\n\nJust food for thought. There are people who use proxying in apache to\nredirect expensive tasks to other servers that are dedicated to just one\nheavy challenge. In that case you likely do have 99% dynamic content.\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Martin Foster\n\nMatt Clark wrote:\n\n> In addition we (as _every_ high load site should) run Squid as an\n> accelerator, which dramatically increases the number of client connections\n> that can be handled. Across 2 webservers at peak times we've had 50,000\n> concurrently open http & https client connections to Squid, with 150\nApache\n> children doing the work that squid can't (i.e. all the dynamic stuff), and\n> PG (on a separate box of course) whipping through nearly 800 mixed\nselects,\n> inserts and updates per second - and then had to restart Apache on one of\n> the servers for a config change... Not a problem :-)\n> \n> One little tip - if you run squid on the same machine as apache, and use a\n> dual-proc box, then because squid is single-threaded it will _never_ take\n> more than half the CPU - nicely self balancing in a way.\n> \n> M\n> \n\nI've heard of the merits of Squid in the use as a reverse proxy. \nHowever, well over 99% of my traffic is dynamic, hence why I may be \nexperiencing behavior that people normally do not expect.\n\nAs I have said before in previous threads, the scripts are completely \ndatabase driven and at the time the database averaged 65 queries per \nsecond under MySQL before a migration, while the webserver was averaging \n2 to 4.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n\n\n", "msg_date": "Thu, 4 Nov 2004 11:37:42 -0500", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "> Case in point: A first time visitor hits your home page. 
A \n> dynamic page is generated (in about 1 second) and served \n> (taking 2 more seconds) which contains links to 20 additional \n\nThe gain from an accelerator is actually even more that that, as it takes\nessentially zero seconds for Apache to return the generated content (which\nin the case of a message board could be quite large) to Squid, which can\nthen feed it slowly to the user, leaving Apache free again to generate\nanother page. When serving dialup users large dynamic pages this can be a\n_huge_ gain.\n\nI think Martin's pages (dimly recalling another thread) take a pretty long\ntime to generate though, so he may not see quite such a significant gain.\n\n\n", "msg_date": "Thu, 4 Nov 2004 16:46:25 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n\tMyself, I like a small Apache with few modules serving static files (no \ndynamic content, no db connections), and with a mod_proxy on a special \npath directed to another Apache which generates the dynamic pages (few \nprocesses, persistent connections...)\n\tYou get the best of both, static files do not hog DB connections, and the \nsecond apache sends generated pages very fast to the first which then \ntrickles them down to the clients.\n\n\n>> Case in point: A first time visitor hits your home page. A\n>> dynamic page is generated (in about 1 second) and served\n>> (taking 2 more seconds) which contains links to 20 additional\n>\n> The gain from an accelerator is actually even more that that, as it takes\n> essentially zero seconds for Apache to return the generated content \n> (which\n> in the case of a message board could be quite large) to Squid, which can\n> then feed it slowly to the user, leaving Apache free again to generate\n> another page. When serving dialup users large dynamic pages this can be \n> a\n> _huge_ gain.\n>\n> I think Martin's pages (dimly recalling another thread) take a pretty \n> long\n> time to generate though, so he may not see quite such a significant gain.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n\n\n", "msg_date": "Thu, 04 Nov 2004 18:25:16 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Matt Clark wrote:\n\n>>Case in point: A first time visitor hits your home page. A \n>>dynamic page is generated (in about 1 second) and served \n>>(taking 2 more seconds) which contains links to 20 additional \n> \n> \n> The gain from an accelerator is actually even more that that, as it takes\n> essentially zero seconds for Apache to return the generated content (which\n> in the case of a message board could be quite large) to Squid, which can\n> then feed it slowly to the user, leaving Apache free again to generate\n> another page. When serving dialup users large dynamic pages this can be a\n> _huge_ gain.\n> \n> I think Martin's pages (dimly recalling another thread) take a pretty long\n> time to generate though, so he may not see quite such a significant gain.\n> \n> \n\nCorrect the 75% of all hits are on a script that can take anywhere from \na few seconds to a half an hour to complete. 
The script essentially \nauto-flushes to the browser so they get new information as it arrives \ncreating the illusion of on demand generation.\n\nA squid proxy would probably cause severe problems when dealing with a \nscript that does not complete output for a variable rate of time.\n\nAs for images, CSS, javascript and such the site makes use of it, but in \nthe grand scheme of things the amount of traffic they tie up is \nliterally inconsequential. Though I will probably move all of that \nonto another server just to allow the main server the capabilities of \ndealing with almost exclusively dynamic content.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n", "msg_date": "Thu, 04 Nov 2004 12:59:25 -0500", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Matt Clark wrote:\n\n>>Apache::DBI overall works better to what I require, even if \n>>it is not a \n>>pool per sey. Now if pgpool supported variable rate pooling like \n>>Apache does with it's children, it might help to even things \n>>out. That \n>>and you'd still get the spike if you have to start the webserver and \n>>database server at or around the same time.\n> \n> \n> I still don't quite get it though - you shouldn't be getting more than one\n> child per second being launched by Apache, so that's only one PG postmaster\n> per second, which is really a trivial load. That is unless you have\n> 'StartServers' set high, in which case the 'obvious' answer is to lower it.\n> Are you launching multiple DB connections per Apache process as well?\n> \n\nI have start servers set to a fairly high limit. However this would \nmake little different overall if I restarted the webservers to load in \nnew modules during a high load time. When I am averaging 145 \nconcurrent connections before a restart, I can expect that many request \nto hit the server once Apache begins to respond.\n\nAs a result, it will literally cause a spike on both machines as new \nconnections are initiated at a high rate. In my case I don't always \nhave the luxury of waiting till 0300 just to test a change.\n\nAgain, not necessarily looking for a PostgreSQL solution. I am looking \nfor a method that would allow the database or the OS itself to protect \nthe system it's hosted on. If both the database and the apache server \nwere on the same machine this type of scenario would be unstable to say \nthe least.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n", "msg_date": "Thu, 04 Nov 2004 13:04:00 -0500", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "> Correct the 75% of all hits are on a script that can take \n> anywhere from \n> a few seconds to a half an hour to complete. The script \n> essentially \n> auto-flushes to the browser so they get new information as it arrives \n> creating the illusion of on demand generation.\n\nThis is more like a streaming data server, which is a very different beast\nfrom a webserver, and probably better suited to the job. Usually either\nmultithreaded or single-process using select() (just like Squid). You could\nprobably build one pretty easily. 
Using a 30MB Apache process to serve one\nclient for half an hour seems like a hell of a waste of RAM.\n\n> A squid proxy would probably cause severe problems when \n> dealing with a \n> script that does not complete output for a variable rate of time.\n\nNo, it's fine, squid gives it to the client as it gets it, but can receive\nfrom the server faster.\n\n", "msg_date": "Thu, 4 Nov 2004 18:20:18 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Matt Clark wrote:\n>>Correct the 75% of all hits are on a script that can take \n>>anywhere from \n>>a few seconds to a half an hour to complete. The script \n>>essentially \n>>auto-flushes to the browser so they get new information as it arrives \n>>creating the illusion of on demand generation.\n> \n> \n> This is more like a streaming data server, which is a very different beast\n> from a webserver, and probably better suited to the job. Usually either\n> multithreaded or single-process using select() (just like Squid). You could\n> probably build one pretty easily. Using a 30MB Apache process to serve one\n> client for half an hour seems like a hell of a waste of RAM.\n> \n\nThese are CGI scripts at the lowest level, nothing more and nothing \nless. While I could probably embed a small webserver directly into the \nperl scripts and run that as a daemon, it would take away the \nportability that the scripts currently offer.\n\nThis should be my last question on the matter, does squid report the \nproper IP address of the client themselves? That's a critical \nrequirement for the scripts.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n", "msg_date": "Thu, 04 Nov 2004 15:30:19 -0500", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "On Thu, 4 Nov 2004 18:20:18 -0000, Matt Clark <[email protected]> wrote:\n\n>> Correct the 75% of all hits are on a script that can take\n>> anywhere from\n>> a few seconds to a half an hour to complete. The script\n>> essentially\n>> auto-flushes to the browser so they get new information as it arrives\n>> creating the illusion of on demand generation.\n\n\tEr, do you mean that :\n\n\t1- You have a query that runs for half an hour and you spoon feed the \nresults to the client ?\n\t(argh)\n\n\t2- Your script looks for new data every few seconds, sends a packet, then \nsleeps, and loops ?\n\n\tIf it's 2 I have a readymade solution for you, just ask.\n", "msg_date": "Thu, 04 Nov 2004 21:30:35 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "On Thu, Nov 04, 2004 at 03:30:19PM -0500, Martin Foster wrote:\n> This should be my last question on the matter, does squid report the \n> proper IP address of the client themselves? That's a critical \n> requirement for the scripts.\n\nAFAIK it's in some header; I believe they're called \"X-Forwarded-For\". If\nyou're using caching, your script will obviously be called fewer times than\nusual, though, so be careful about relying too much on side effects. :-)\n(This is, of course, exactly the same if the client side uses a caching\nproxy. 
Saying anything more is impossible without knowing exactly what you\nare doing, though :-) )\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 4 Nov 2004 21:46:03 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n> These are CGI scripts at the lowest level, nothing more and nothing \n> less. While I could probably embed a small webserver directly into \n> the perl scripts and run that as a daemon, it would take away the \n> portability that the scripts currently offer.\n\nIf they're CGI *scripts* then they just use the CGI environment, not \nApache, so a daemon that accepts the inbound connections, then compiles \nthe scripts a-la Apache::Registry, but puts each in a separate thread \nwould be, er, relatively easy for someone better at multithreaded stuff \nthan me.\n\n>\n> This should be my last question on the matter, does squid report the \n> proper IP address of the client themselves? That's a critical \n> requirement for the scripts.\n>\nIn the X-Forwarded-For header. Not that you can be sure you're seeing \nthe true client IP anyway if they've gone through an ISP proxy beforehand.\n\n\n", "msg_date": "Thu, 04 Nov 2004 21:00:27 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n> 1- You have a query that runs for half an hour and you spoon feed \n> the results to the client ?\n> (argh)\n>\n> 2- Your script looks for new data every few seconds, sends a \n> packet, then sleeps, and loops ?\n>\n> If it's 2 I have a readymade solution for you, just ask.\n>\nI'm guessing (2) - PG doesn't give the results of a query in a stream. \n", "msg_date": "Thu, 04 Nov 2004 21:01:52 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n> I'm guessing (2) - PG doesn't give the results of a query in a stream.\n\n\tIn 1- I was thinking about a cursor...\n\tbut I think his problem is more like 2-\n\n\tIn that case one can either code a special purpose server or use the \nfollowing hack :\n\n\tIn your webpage include an iframe with a Javascript to refresh it every \nfive seconds. The iframe fetches a page from the server which brings in \nthe new data in form of generated JavaScript which writes in the parent \nwindow. Thus, you get a very short request every 5 seconds to fetch new \ndata, and it is displayed in the client's window very naturally.\n\n\tI've used this technique for another application and find it very cool. \nIt's for selection lists, often you'll see a list of things to be checked \nor not, which makes a big form that people forget to submit. Thus I've \nreplaced the checkboxes with clickable zones which trigger the loading of \na page in a hidden iframe, which does appropriate modifications in the \ndatabase, and updates the HTML in the parent page, changing texts here and \nthere... it feels a bit like it's not a webpage but rather a standard GUI. \nVery neat. Changes are recorded without needing a submit button... I \nshould write a framework for making that easy to do.\n\n\tI did not use a frame because frames suck, but iframes are convenient. \nYeah, it does not work with Lynx... it needs JavaScript... 
but it works \nwell.\n", "msg_date": "Thu, 04 Nov 2004 23:00:14 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n>\n> In your webpage include an iframe with a Javascript to refresh it \n> every five seconds. The iframe fetches a page from the server which \n> brings in the new data in form of generated JavaScript which writes \n> in the parent window. Thus, you get a very short request every 5 \n> seconds to fetch new data, and it is displayed in the client's window \n> very naturally.\n>\n> ...\n\nYup. If you go the JS route then you can do even better by using JS to \nload data into JS objects in the background and manipulate the page \ncontent directly, no need for even an Iframe. Ignore the dullards who \nhave JS turned off - it's essential for modern web apps, and refusing JS \nconflicts absolutely with proper semantic markup.\n\nhttp://developer.apple.com/internet/webcontent/xmlhttpreq.html is a good \nstarting point.\n\nIt's clear that this discussion has moved way away from PG! Although in \nthe context of DB backed web apps I guess in remains a bit on-topic...\n\nM\n", "msg_date": "Thu, 04 Nov 2004 22:37:06 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "On Thu, Nov 04, 2004 at 22:37:06 +0000,\n Matt Clark <[email protected]> wrote:\n> >...\n> \n> Yup. If you go the JS route then you can do even better by using JS to \n> load data into JS objects in the background and manipulate the page \n> content directly, no need for even an Iframe. Ignore the dullards who \n> have JS turned off - it's essential for modern web apps, and refusing JS \n> conflicts absolutely with proper semantic markup.\n\nJavascript is too powerful to turn for any random web page. It is only\nessential for web pages because people write their web pages to only\nwork with javascript.\n", "msg_date": "Thu, 4 Nov 2004 17:14:29 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\ncheck this marvelus piece of 5 minutes of work :\nhttp://boutiquenumerique.com/test/iframe_feed.html\n\n> Yup. If you go the JS route then you can do even better by using JS to \n> load data into JS objects in the background and manipulate the page \n> content directly, no need for even an Iframe. Ignore the dullards who \n> have JS turned off - it's essential for modern web apps, and refusing JS \n> conflicts absolutely with proper semantic markup.\n>\n> http://developer.apple.com/internet/webcontent/xmlhttpreq.html is a good \n> starting point.\n\n\tDidn't know this existed ! Very, very cool.\n\tI have to check this out more in depth.\n\n\tA note though : you'll have to turn off HTTP persistent connections in \nyour server (not in your proxy) or youre back to square one.\n\n>\n> It's clear that this discussion has moved way away from PG! 
Although in \n> the context of DB backed web apps I guess in remains a bit on-topic...\n\n\tI find it very on-topic as\n\t- it's a way to help this guy solve his \"pg problem\" which was iin fact a \ndesign problem\n\t- it's the future of database driven web apps (no more reloading the \nwhole page !)\n\n\tI think in the future there will be a good bit of presentation login in \nthe client...\n", "msg_date": "Fri, 05 Nov 2004 00:21:48 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n>Javascript is too powerful to turn for any random web page. It is only\n>essential for web pages because people write their web pages to only\n>work with javascript.\n> \n>\nHmm... I respectfully disagree. It is so powerful that it is impossible \nto ignore when implementing a sophisticated app. And it is not \ndangerous to the user so long as they have a popup blocker. \nCommercially, I can ignore the people who turn it off, and I can gain a \nhuge benefit from knowing that 95% of people have it turned on, because \nit gives my users a hugely better experience than the equivalent XHTML \nonly page (which I deliver, and which works, but which is a fairly \ndepressing experience compared to the JS enabled version).\n\nIt is _amazing_ how much crud you can take out of a page if you let JS \ndo the dynamic stuff (with CSS still in full control of the styling). \nNice, clean, semantically sensible XHTML, that can be transformed for \nmultiple devices - it's great.\n\nAn example:\n\n<a class=\"preview_link\">/previews/foo.wmv</a>\n\nBut we want it to appear in a popup when viewed in certain devices.... \nEasy - Attach an 'onclick' event handler (or just set the target \nattribute) when the device has a suitable screen & media player, but \nleave the markup clean for the rest of the world.\n\n\n\n\n", "msg_date": "Thu, 04 Nov 2004 23:22:22 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n>\n> A note though : you'll have to turn off HTTP persistent \n> connections in your server (not in your proxy) or youre back to \n> square one.\n>\nI hadn't considered that. On the client side it would seem to be up to \nthe client whether to use a persistent connection or not. If it does, \nthen yeah, a request every 5 seconds would still just hold open a \nserver. One more reason to use a proxy I s'pose.\n\n>>\n>> It's clear that this discussion has moved way away from PG! 
Although \n>> in the context of DB backed web apps I guess in remains a bit \n>> on-topic...\n>\n>\n> I find it very on-topic as\n> - it's a way to help this guy solve his \"pg problem\" which was iin \n> fact a design problem\n> - it's the future of database driven web apps (no more reloading \n> the whole page !)\n>\n> I think in the future there will be a good bit of presentation \n> login in the client...\n\nNot if Bruno has his way ;-)\n\n\n", "msg_date": "Thu, 04 Nov 2004 23:32:57 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "\n\nPierre-Frédéric Caillaud wrote:\n\n>\n> check this marvelus piece of 5 minutes of work :\n> http://boutiquenumerique.com/test/iframe_feed.html\n>\ncela m'a fait le sourire :-)\n\n(apologies for bad french)\n\nM\n\n\n", "msg_date": "Thu, 04 Nov 2004 23:35:45 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "Matt Clark wrote:\n\n> \n> \n> Pierre-Frédéric Caillaud wrote:\n> \n>>\n>> check this marvelus piece of 5 minutes of work :\n>> http://boutiquenumerique.com/test/iframe_feed.html\n>>\n> cela m'a fait le sourire :-)\n> \n> (apologies for bad french)\n> \n> M\n> \n> \n\nJavascript is not an option for the scripts, one of the mandates of the \nproject is to support as many different client setups as possible and we \nhave encountered everything from WebTV to the latest Firefox release. \n It's a chat/roleplay community and not everyone will invest in new \nequipment.\n\nNow, it would seem to me that there is a trade off between a JS push \nsystem and a constant ever-present process. With the traditional \nmethod as I use it, a client will incur the initial penalty of going \nthrough authentication, pulling the look and feel of the realms, sites \nand simply poll one table from that point on.\n\nNow on the other hand, you have one user making a call for new posts \nevery x amount of seconds. This means every X seconds the penalty for \nauthentication and design would kick in, increasing overall the load.\n\nThe current scripts can also be dynamically adapted to slow things down \nbased on heavy load or quiet realms that bring little posts in. It's \nmuch harder to expect Javascript solutions to work perfectly every time \nand not be modified by some proxy.\n\nUnfortunately, we are getting way off track. I'm looking for a way to \nprotect the PostgreSQL server, either from PostgreSQL or some sort of \nexternal script which polls load average once in a while to make that \ndetermination.\n\nNow is there an administrative command in PostgreSQL that will cause it \nto move into some sort of maintenance mode? 
For me that could be \nexceedingly useful as it would still allow for an admin connection to be \nmade and run a VACUUM FULL and such.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n", "msg_date": "Fri, 05 Nov 2004 02:49:42 GMT", "msg_from": "Martin Foster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" }, { "msg_contents": "On Thu, Nov 04, 2004 at 23:32:57 +0000,\n Matt Clark <[email protected]> wrote:\n> >\n> > I think in the future there will be a good bit of presentation \n> >login in the client...\n> \n> Not if Bruno has his way ;-)\n\nSure there will, but it will be controlled by the client, perhaps taking\nsuggestions from the style sheet pointed to by the document.\n\nRunning foreign code from random or even semi-random places is a recipe\nfor becoming a spam server. See examples from Microsoft such as their\nspreadsheet and office software. Documents really need to be passive\ndata, not active code.\n\nIf the client and the server have a special trust relationship, then\nrunning code supplied by the server makes sense. So you might use javascript\nwithin a business where the IT department runs the server and the employees\nrun clients. However, encouraging people to browse the internet with\njavascript enabled is a bad idea.\n", "msg_date": "Thu, 4 Nov 2004 23:16:46 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Postgres" } ]
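The thread above ends with Martin asking for an external script that polls the load average and shields PostgreSQL when the machine is overloaded, much as Apache::LoadAvgLimit shields Apache. A minimal sketch of that idea follows; it is not from the thread itself, it assumes a Linux-style /proc/loadavg and a threshold of 3.0, and it simply refuses to open a database connection while the box is busy, returning a 503 the way the Apache module does.

#!/usr/bin/perl
# Hypothetical CGI guard: back off before touching PostgreSQL when the
# 1-minute load average is above a chosen limit. The threshold and the use
# of /proc/loadavg are assumptions, not details from the thread.
use strict;
use warnings;

my $LIMIT = 3.0;

sub load_too_high {
    open my $fh, '<', '/proc/loadavg' or return 0;   # fail open if unreadable
    my ($one_minute) = split ' ', scalar <$fh>;      # first field is 1-min load
    close $fh;
    return $one_minute > $LIMIT;
}

if ( load_too_high() ) {
    # Tell the client to retry later instead of spawning another backend.
    print "Status: 503 Service Unavailable\r\n";
    print "Retry-After: 120\r\n";
    print "Content-Type: text/plain\r\n\r\n";
    print "Server busy, please try again shortly.\n";
    exit 0;
}

# ... otherwise connect via Apache::DBI / DBI and generate the page as usual ...

Capping max_connections on the PostgreSQL side (and MaxClients plus gradual child spawning on the Apache side, as Matt describes) remains the first line of defence; a guard like this only smooths the restart spike Martin describes.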
[ { "msg_contents": "I am working with some pretty convoluted queries that work very slowly the\nfirst time they're called but perform fine on the second call. I am fairly\ncertain that these differences are due to the caching. Can someone point me\nin a direction that would allow me to pre-cache the critical indexes?\n\n\n\n\n\n\n\n\n\n\nI am working with some pretty convoluted queries that work\nvery slowly the first time they’re called but perform fine on the second\ncall. I am fairly certain that these differences are due to the caching. Can\nsomeone point me in a direction that would allow me to pre-cache the critical\nindexes?", "msg_date": "Wed, 3 Nov 2004 10:30:47 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "preloading indexes" }, { "msg_contents": "The best way to get all the stuff needed by a query into RAM is to run the\nquery. Is it more that you want to 'pin' the data in RAM so it doesn't get\noverwritten by other queries?\n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of\[email protected]\nSent: 03 November 2004 17:31\nTo: [email protected]\nSubject: [PERFORM] preloading indexes\n\n\n\nI am working with some pretty convoluted queries that work very slowly the\nfirst time they're called but perform fine on the second call. I am fairly\ncertain that these differences are due to the caching. Can someone point me\nin a direction that would allow me to pre-cache the critical indexes?\n\n\n\n\nMessage\n\n\n\n\n\nThe best way to get all the stuff needed by a query into \nRAM is to run the query.  Is it more that you want to 'pin' the data in RAM \nso it doesn't get overwritten by other \nqueries?\n \n-----Original Message-----From: \[email protected] \n[mailto:[email protected]] On Behalf Of \[email protected]: 03 November 2004 \n17:31To: [email protected]: \n[PERFORM] preloading indexes\n\n\nI am working with some pretty \n convoluted queries that work very slowly the first time they’re called but \n perform fine on the second call. I am fairly certain that these differences \n are due to the caching. Can someone point me in a direction that would allow \n me to pre-cache the critical \nindexes?", "msg_date": "Wed, 3 Nov 2004 17:59:02 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes" }, { "msg_contents": "That's correct - I'd like to be able to keep particular indexes in RAM\navailable all the time\n\n \n\nThe best way to get all the stuff needed by a query into RAM is to run the\nquery. Is it more that you want to 'pin' the data in RAM so it doesn't get\noverwritten by other queries?\n\n \n\nI am working with some pretty convoluted queries that work very slowly the\nfirst time they're called but perform fine on the second call. I am fairly\ncertain that these differences are due to the caching. Can someone point me\nin a direction that would allow me to pre-cache the critical indexes?\n\n\n\n\n\n\nMessage\n\n\n\n\nThat’s correct – I’d\nlike to be able to keep particular indexes in RAM available all the time\n \n\nThe best way to get all the stuff needed\nby a query into RAM is to run the query.  Is it more that you want to\n'pin' the data in RAM so it doesn't get overwritten by other queries?\n \n\n\nI am working with some pretty convoluted queries that work\nvery slowly the first time they’re called but perform fine on the second\ncall. I am fairly certain that these differences are due to the caching. 
Can\nsomeone point me in a direction that would allow me to pre-cache the critical\nindexes?", "msg_date": "Wed, 3 Nov 2004 12:12:43 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: preloading indexes" }, { "msg_contents": "On Wed, Nov 03, 2004 at 12:12:43PM -0700, [email protected] wrote:\n> That's correct - I'd like to be able to keep particular indexes in RAM\n> available all the time\n\nIf these are queries that run frequently, then the relevant cache\nwill probably remain populated[1]. If they _don't_ run frequently, why\ndo you want to force the memory to be used to optimise something that\nis uncommon? But in any case, there's no mechanism to do this.\n\nA\n\n[1] there are in fact limits on the caching: if your data set is\nlarger than memory, for instance, there's no way it will all stay\ncached. Also, VACUUM does nasty things to the cache. It is hoped\nthat nastiness is fixed in 8.0.\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Wed, 3 Nov 2004 14:35:28 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes" }, { "msg_contents": "\n\t--\n\n\tuh, you can always load a table in cache by doing a seq scan on it... \nlike select count(1) from table or something... this doesn't work for \nindexes of course, but you can always look in the system catalogs, find \nthe filename for the index, then just open() it from an external program \nand read it without caring for the data... it'll save you the seeks in the \nindex... of course you'll have problems with file permissions etc, not \nmentioning security, locking, etc, etc, etc, is that worth the trouble ?\n\nOn Wed, 3 Nov 2004 14:35:28 -0500, Andrew Sullivan <[email protected]> \nwrote:\n\n> On Wed, Nov 03, 2004 at 12:12:43PM -0700, [email protected] \n> wrote:\n>> That's correct - I'd like to be able to keep particular indexes in RAM\n>> available all the time\n>\n> If these are queries that run frequently, then the relevant cache\n> will probably remain populated[1]. If they _don't_ run frequently, why\n> do you want to force the memory to be used to optimise something that\n> is uncommon? But in any case, there's no mechanism to do this.\n>\n> A\n>\n> [1] there are in fact limits on the caching: if your data set is\n> larger than memory, for instance, there's no way it will all stay\n> cached. Also, VACUUM does nasty things to the cache. It is hoped\n> that nastiness is fixed in 8.0.\n>\n\n\n", "msg_date": "Wed, 03 Nov 2004 20:50:04 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes" }, { "msg_contents": "<[email protected]> writes:\n> I am working with some pretty convoluted queries that work very slowly the\n> first time they're called but perform fine on the second call. I am fairly\n> certain that these differences are due to the caching. Can someone point me\n> in a direction that would allow me to pre-cache the critical indexes?\n\nBuy more RAM. Also check your shared_buffers setting (but realize that\nmore is not necessarily better).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Nov 2004 14:55:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes " }, { "msg_contents": "The caching appears to disappear overnight. The environment is not in\nproduction yet so I'm the only one on it. 
\n\nIs there a time limit on the length of time in cache? I believe there is\nsufficient RAM, but maybe I need to look again.\n\ns \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andrew Sullivan\nSent: Wednesday, November 03, 2004 12:35 PM\nTo: [email protected]\nSubject: Re: [PERFORM] preloading indexes\n\nOn Wed, Nov 03, 2004 at 12:12:43PM -0700, [email protected] wrote:\n> That's correct - I'd like to be able to keep particular indexes in RAM\n> available all the time\n\nIf these are queries that run frequently, then the relevant cache\nwill probably remain populated[1]. If they _don't_ run frequently, why\ndo you want to force the memory to be used to optimise something that\nis uncommon? But in any case, there's no mechanism to do this.\n\nA\n\n[1] there are in fact limits on the caching: if your data set is\nlarger than memory, for instance, there's no way it will all stay\ncached. Also, VACUUM does nasty things to the cache. It is hoped\nthat nastiness is fixed in 8.0.\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n", "msg_date": "Wed, 3 Nov 2004 13:19:43 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: preloading indexes" }, { "msg_contents": "<[email protected]> writes:\n> The caching appears to disappear overnight.\n\nYou've probably got cron jobs that run late at night and blow out your\nkernel disk cache by accessing a whole lot of non-Postgres stuff.\n(A nightly disk backup is one obvious candidate.) The most likely\nsolution is to run some cron job a little later to exercise your\ndatabase and thereby repopulate the cache with Postgres files before\nyou get to work ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Nov 2004 15:53:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes " }, { "msg_contents": "On Wed, Nov 03, 2004 at 01:19:43PM -0700, [email protected] wrote:\n> The caching appears to disappear overnight. The environment is not in\n> production yet so I'm the only one on it. \n\nAre you vacuuming at night? It grovels through the entire database,\nand may bust your query out of the cache. Also, we'd need some more\ninfo about how you've tuned this thing. Maybe check out the archives\nfirst for some tuning pointers to help you.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Wed, 3 Nov 2004 15:53:16 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes" }, { "msg_contents": "\nThanks - this is what I was afraid of, but I may have to do this\n\nIs there a good way to monitor what's in the cache?\n\nj\n\n<[email protected]> writes:\n> The caching appears to disappear overnight.\n\nYou've probably got cron jobs that run late at night and blow out your\nkernel disk cache by accessing a whole lot of non-Postgres stuff.\n(A nightly disk backup is one obvious candidate.) 
The most likely\nsolution is to run some cron job a little later to exercise your\ndatabase and thereby repopulate the cache with Postgres files before\nyou get to work ;-)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 3 Nov 2004 13:56:10 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: preloading indexes " }, { "msg_contents": "If your running Linux, and kernel 2.6.x, you can try playing with the:\n\n/proc/sys/vm/swappiness\n\nsetting.\n\nMy understanding is that:\n\necho \"0\" > /proc/sys/vm/swappiness\n\nWill try to keep all in-use application memory from being swapped out\nwhen other processes query the disk a lot.\n\nAlthough, since PostgreSQL utilizes the disk cache quite a bit, this may\nnot help you. \n\n\nOn Wed, 2004-11-03 at 15:53 -0500, Tom Lane wrote:\n> <[email protected]> writes:\n> > The caching appears to disappear overnight.\n> \n> You've probably got cron jobs that run late at night and blow out your\n> kernel disk cache by accessing a whole lot of non-Postgres stuff.\n> (A nightly disk backup is one obvious candidate.) The most likely\n> solution is to run some cron job a little later to exercise your\n> database and thereby repopulate the cache with Postgres files before\n> you get to work ;-)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n-- \nMike Benoit <[email protected]>", "msg_date": "Wed, 03 Nov 2004 16:50:42 -0800", "msg_from": "Mike Benoit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes" }, { "msg_contents": "On Wed, Nov 03, 2004 at 03:53:16PM -0500, Andrew Sullivan wrote:\n> and may bust your query out of the cache. Also, we'd need some more\n\nUh, the data you're querying, of course. Queries themselves aren't\ncached.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Thu, 4 Nov 2004 09:02:17 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: preloading indexes" } ]
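A minimal sketch of the warm-up job suggested in this thread: run a cheap query from cron shortly before working hours so the nightly backup no longer leaves the cache cold. The table and column names (orders, customer_id) are placeholders rather than anything from the thread, and there is no guarantee the whole table or index actually fits in memory.

-- Sequential read pulls the table's heap pages into the OS cache.
SELECT count(*) FROM orders;

-- Roughly touch a btree index as well: force an index scan for this session
-- and run a query that index can satisfy (assumes ids are positive).
SET enable_seqscan = false;
SELECT count(*) FROM orders WHERE customer_id > 0;
RESET enable_seqscan;

Wrapped in a psql call from cron after the backup finishes, this is essentially the "exercise your database" job described above.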
[ { "msg_contents": "Greetings pgsql-performance :)\n\nYesterday I posted to the pgsql-sql list about an issue with VACUUM\nwhile trying to track-down an issue with performance of a SQL SELECT\nstatement invovling a stored function. It was suggested that I bring\nthe discussion over to -performance.\n\nInstread of reposting the message here is a link to the original\nmessage followed by a brief summary:\n\n http://marc.theaimsgroup.com/?l=postgresql-sql&m=109945118928530&w=2\n\n\nSummary:\n\nOur customer complains about web/php-based UI sluggishness accessing\nthe data in db. I created a \"stripped down\" version of the tables\nin question to be able to post to the pgsql-sql list asking for hints\nas to how I can improve the SQL query. While doing this I noticed\nthat if I 'createdb' and populate it with the \"sanatized\" data the\nquery in question is quite fast; 618 rows returned in 864.522 ms.\nThis was puzzling. Next I noticed that after a VACUUM the very same\nquery would slow down to a crawl; 618 rows returned in 1080688.921 ms).\n\nThis was reproduced on PostgreSQL 7.4.2 running on a Intel PIII 700Mhz,\n512mb. This system is my /personal/ test system/sandbox. i.e., it\nisn't being stressed by any other processes.\n\n\nThanks for reading,\n--patrick\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Wed, 3 Nov 2004 10:56:47 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum analyze slows sql query" }, { "msg_contents": "Given that the plan doesn't change after an analyze, my guess would be that the first query is hitting cached data, then \n you vacuum and that chews though all the cache with its own data pushing the good data out of the cache so it has to \nbe re-fetched from disk.\n\nIf you run the select a 2nd time after the vacuum, what is the time?\n\nNot sure what your pkk_offer_has_pending_purch function does, that might be something to look at as well.\n\nI could be wrong, but thats the only thing that makes sense to me. ARC is supposed to help with that type of behavior in 8.0\n\npatrick ~ wrote:\n> Greetings pgsql-performance :)\n> \n> Yesterday I posted to the pgsql-sql list about an issue with VACUUM\n> while trying to track-down an issue with performance of a SQL SELECT\n> statement invovling a stored function. It was suggested that I bring\n> the discussion over to -performance.\n> \n> Instread of reposting the message here is a link to the original\n> message followed by a brief summary:\n> \n> http://marc.theaimsgroup.com/?l=postgresql-sql&m=109945118928530&w=2\n> \n> \n> Summary:\n> \n> Our customer complains about web/php-based UI sluggishness accessing\n> the data in db. I created a \"stripped down\" version of the tables\n> in question to be able to post to the pgsql-sql list asking for hints\n> as to how I can improve the SQL query. While doing this I noticed\n> that if I 'createdb' and populate it with the \"sanatized\" data the\n> query in question is quite fast; 618 rows returned in 864.522 ms.\n> This was puzzling. Next I noticed that after a VACUUM the very same\n> query would slow down to a crawl; 618 rows returned in 1080688.921 ms).\n> \n> This was reproduced on PostgreSQL 7.4.2 running on a Intel PIII 700Mhz,\n> 512mb. This system is my /personal/ test system/sandbox. 
i.e., it\n> isn't being stressed by any other processes.\n> \n> \n> Thanks for reading,\n> --patrick\n> \n\n", "msg_date": "Wed, 03 Nov 2004 14:17:52 -0500", "msg_from": "Doug Y <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "patrick ~ <[email protected]> writes:\n> that if I 'createdb' and populate it with the \"sanatized\" data the\n> query in question is quite fast; 618 rows returned in 864.522 ms.\n> This was puzzling. Next I noticed that after a VACUUM the very same\n> query would slow down to a crawl; 618 rows returned in 1080688.921 ms).\n\nThe outer query is too simple to have more than one possible plan,\nso the issue is certainly a change in query plans inside the function.\nYou need to be investigating what's happening inside that function.\n7.1 doesn't have adequate tools for this, but in 7.4 you can use\nPREPARE and EXPLAIN ANALYZE EXECUTE to examine the query plans used\nfor parameterized statements, which is what you've got here.\n\nMy bet is that with ANALYZE stats present, the planner guesses wrong\nabout which index to use; but without looking at EXPLAIN ANALYZE output\nthere's no way to be sure.\n\nBTW, why the bizarrely complicated substitute for a NOT NULL test?\nISTM you only need\n\ncreate function\npkk_offer_has_pending_purch( integer )\n returns bool\nas '\n select p0.purchase_id is not null\n from pkk_purchase p0\n where p0.offer_id = $1\n and ( p0.pending = true\n or ( ( p0.expire_time > now()\n or p0.expire_time isnull )\n and p0.cancel_date isnull ) )\n limit 1\n' language 'sql' ;\n\n(Actually, seeing that pkk_purchase.purchase_id is defined as NOT NULL,\nI wonder why the function exists at all ... but I suppose you've\n\"stripped\" the function to the point of being nonsense.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Nov 2004 15:12:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query " }, { "msg_contents": "Here is a fresh run with 'explain analyze' run before and after the\nVACUUM statement:\n\n-- begin\n% dropdb pkk\nDROP DATABASE\n% createdb pkk\nCREATE DATABASE\n% psql pkk < pkk_db.sql\nERROR: function pkk_offer_has_pending_purch(integer) does not exist\nERROR: function pkk_offer_has_pending_purch2(integer) does not exist\nERROR: table \"pkk_billing\" does not exist\nERROR: table \"pkk_purchase\" does not exist\nERROR: table \"pkk_offer\" does not exist\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"pkk_offer_pkey\"\nfor table \"pkk_offer\"\nCREATE TABLE\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"pkk_purchase_pkey\" for table \"pkk_purchase\"\nCREATE TABLE\nCREATE INDEX\nCREATE INDEX\nCREATE INDEX\nCREATE TABLE\nCREATE INDEX\nCREATE FUNCTION\nCREATE FUNCTION\n% zcat pkk.20041028_00.sql.gz | psql pkk \nSET\nSET\nSET\nSET\n% psql pkk\npkk=# select offer_id, pkk_offer_has_pending_purch( offer_id ) from pkk_offer ;\n<ommitting output />\n(618 rows)\n\nTime: 877.348 ms\npkk=# explain analyze select offer_id, pkk_offer_has_pending_purch( offer_id )\nfrom pkk_offer ;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Seq Scan on pkk_offer (cost=0.00..22.50 rows=1000 width=4) (actual\ntime=1.291..845.485 rows=618 loops=1)\n Total runtime: 849.475 ms\n(2 rows)\n\nTime: 866.613 ms\npkk=# vacuum analyze ;\nVACUUM\nTime: 99344.399 ms\npkk=# explain analyze select offer_id, pkk_offer_has_pending_purch( 
offer_id )\nfrom pkk_offer ;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on pkk_offer (cost=0.00..13.72 rows=618 width=4) (actual\ntime=3636.401..1047412.851 rows=618 loops=1)\n Total runtime: 1047415.525 ms\n(2 rows)\n\nTime: 1047489.477 ms\n-- end\n\n\n\nTom,\n\nThe reason of the extra \"case\" part in the function is to ensure non-null\nfields on the result. I tried your version as well and i get similar\nperformance results:\n\n-- begin\npkk=# create function toms_pending_purch( integer ) returns bool as 'select \np0.purchase_id is not null from pkk_purchase p0 where p0.offer_id = $1 and (\np0.pending = true or ( ( p0.expire_time > now() or p0.expire_time isnull ) and\np0.cancel_date isnull ) ) limit 1 ' language 'sql' ;\nCREATE FUNCTION\nTime: 2.496 ms\npkk=# select offer_id, toms_pending_purch( offer_id ) from pkk_offer ;\n(618 rows)\n\nTime: 1052339.506 ms\n-- end\n\n\nRight now, I'm studying the document section on PREPARE and will\nattempt to play around with it.\n\n\nI was asked (in a prior post) whether running the statement a second\ntime after the VACUUM improves in performance. It does not. After\nthe VACUUM the statement remains slow.\n\n\nThanks for your help,\n--patrick\n\n\n\n--- Tom Lane <[email protected]> wrote:\n\n> patrick ~ <[email protected]> writes:\n> > that if I 'createdb' and populate it with the \"sanatized\" data the\n> > query in question is quite fast; 618 rows returned in 864.522 ms.\n> > This was puzzling. Next I noticed that after a VACUUM the very same\n> > query would slow down to a crawl; 618 rows returned in 1080688.921 ms).\n> \n> The outer query is too simple to have more than one possible plan,\n> so the issue is certainly a change in query plans inside the function.\n> You need to be investigating what's happening inside that function.\n> 7.1 doesn't have adequate tools for this, but in 7.4 you can use\n> PREPARE and EXPLAIN ANALYZE EXECUTE to examine the query plans used\n> for parameterized statements, which is what you've got here.\n> \n> My bet is that with ANALYZE stats present, the planner guesses wrong\n> about which index to use; but without looking at EXPLAIN ANALYZE output\n> there's no way to be sure.\n> \n> BTW, why the bizarrely complicated substitute for a NOT NULL test?\n> ISTM you only need\n> \n> create function\n> pkk_offer_has_pending_purch( integer )\n> returns bool\n> as '\n> select p0.purchase_id is not null\n> from pkk_purchase p0\n> where p0.offer_id = $1\n> and ( p0.pending = true\n> or ( ( p0.expire_time > now()\n> or p0.expire_time isnull )\n> and p0.cancel_date isnull ) )\n> limit 1\n> ' language 'sql' ;\n> \n> (Actually, seeing that pkk_purchase.purchase_id is defined as NOT NULL,\n> I wonder why the function exists at all ... but I suppose you've\n> \"stripped\" the function to the point of being nonsense.)\n> \n> \t\t\tregards, tom lane\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Wed, 3 Nov 2004 14:22:57 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze slows sql query " } ]
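For anyone following the PREPARE suggestion, a sketch using the simplified function body from Tom's reply; the offer_id value 1 is arbitrary. Running it once before and once after VACUUM ANALYZE should show whether the parameterized plan switches to a different scan, and this is the same kind of plan the SQL function body gets.

PREPARE pending_purch (integer) AS
    SELECT p0.purchase_id IS NOT NULL
      FROM pkk_purchase p0
     WHERE p0.offer_id = $1
       AND ( p0.pending = true
             OR ( ( p0.expire_time > now() OR p0.expire_time IS NULL )
                  AND p0.cancel_date IS NULL ) )
     LIMIT 1;

EXPLAIN ANALYZE EXECUTE pending_purch (1);

DEALLOCATE pending_purch;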
[ { "msg_contents": "Hello !\n\nSorry if this has been discussed before, it is just hard to find in the \narchives using the words \"or\" or \"in\" :-o\n\nI use postgres-8.0 beta4 for windows.\nI broke down my problem to a very simple table - two columns \n\"primary_key\" and \"secondary_key\". Creates and Insert you will find below.\n\nIf I query the _empty_ freshly created table I get the following explain \nresult:\n\nselect * from tt where seckey = 1;\nIndex Scan using seckey_key on tt (cost=0.00..17.07 rows=5 width=12)\n Index Cond: (seckey = 1)\n\nIf I use \"OR\" (or IN) things get worse:\n\nselect * from tt where seckey = 1 or seckey = 2\nSeq Scan on tt (cost=0.00..0.00 rows=1 width=12)\n Filter: ((seckey = 1) OR (seckey = 2))\n\nNote the \"Seq Scan\" instead of using the index.\n\nAfter populating the table with 8920 records and \"analyze\" the scenario \ngets even worser:\n\nselect * from tt where seckey = 1;\nSeq Scan on tt (cost=0.00..168.50 rows=1669 width=12) (actual \ntime=0.000..15.000 rows=1784 loops=1)\n Filter: (seckey = 1)\nTotal runtime: 31.000 ms\n\nNow also this simple query uses a \"Seq Scan\".\n\nNow the questions are:\na) Why is the index not used if I use \"OR\" or \"IN\"\nb) Why is the index not used after \"analyze\" ?\n\nAny help is very appreciated!\n\nThanks,\nMario\n\n\n// The table and data\n\nCREATE TABLE tt (\n pkey int4 NOT NULL DEFAULT nextval('public.\"tt_PKEY_seq\"'::text),\n seckey int8,\n CONSTRAINT pkey_key PRIMARY KEY (pkey)\n)\nWITHOUT OIDS;\n\nCREATE INDEX seckey_key ON tt USING btree (seckey);\n\n// inserted many-many times\ninsert into tt values (default, 1);\ninsert into tt values (default, 2);\ninsert into tt values (default, 3);\ninsert into tt values (default, 4);\ninsert into tt values (default, 5);\n\n", "msg_date": "Thu, 04 Nov 2004 08:55:50 +0100", "msg_from": "Mario Ivankovits <[email protected]>", "msg_from_op": true, "msg_subject": "index not used if using IN or OR" }, { "msg_contents": "Mario Ivankovits wrote:\n> Hello !\n> \n> Sorry if this has been discussed before, it is just hard to find in the \n> archives using the words \"or\" or \"in\" :-o\n> \n> I use postgres-8.0 beta4 for windows.\n> I broke down my problem to a very simple table - two columns \n> \"primary_key\" and \"secondary_key\". Creates and Insert you will find below.\n> \n> If I query the _empty_ freshly created table I get the following explain \n> result:\n> \n> select * from tt where seckey = 1;\n> Index Scan using seckey_key on tt (cost=0.00..17.07 rows=5 width=12)\n> Index Cond: (seckey = 1)\n> \n> If I use \"OR\" (or IN) things get worse:\n> \n> select * from tt where seckey = 1 or seckey = 2\n> Seq Scan on tt (cost=0.00..0.00 rows=1 width=12)\n> Filter: ((seckey = 1) OR (seckey = 2))\n> \n> Note the \"Seq Scan\" instead of using the index.\n\nBut as you said, your table is *empty* - why would an index be faster? \nTry running EXPLAIN ANALYSE on these queries and look at the actual times.\n\n> After populating the table with 8920 records and \"analyze\" the scenario \n> gets even worser:\n> \n> select * from tt where seckey = 1;\n> Seq Scan on tt (cost=0.00..168.50 rows=1669 width=12) (actual \n> time=0.000..15.000 rows=1784 loops=1)\n> Filter: (seckey = 1)\n> Total runtime: 31.000 ms\n> \n> Now also this simple query uses a \"Seq Scan\".\n\nWell, it thinks it's going to be returning 1669 rows. If that's roughly \nright, then scanning the table probably is faster.\n\nRun the queries again with EXPLAIN ANALYSE. 
Also try issuing\n set enable_seqscan=false;\nThis will force the planner to use any indexes it finds. Compare the \ntimes with and without, and don't forget to account for the effects of \ncaching.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 04 Nov 2004 08:32:08 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index not used if using IN or OR" }, { "msg_contents": "Mario Ivankovits <[email protected]> writes:\n> After populating the table with 8920 records and \"analyze\" the scenario \n> gets even worser:\n\n> select * from tt where seckey = 1;\n> Seq Scan on tt (cost=0.00..168.50 rows=1669 width=12) (actual \n> time=0.000..15.000 rows=1784 loops=1)\n> Filter: (seckey = 1)\n> Total runtime: 31.000 ms\n\n> Now also this simple query uses a \"Seq Scan\".\n\nWhich is exactly what it *should* do, considering that it is selecting\n1784 out of 8920 records. Indexscans only win for small selectivities\n--- the rule of thumb is that retrieving more than about 1% of the\nrecords should use a seqscan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Nov 2004 09:52:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index not used if using IN or OR " } ]
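A compact version of the comparison suggested above, against the same tt table; since caching skews the timings, each statement is worth running a couple of times before trusting the numbers.

EXPLAIN ANALYZE SELECT * FROM tt WHERE seckey = 1 OR seckey = 2;

SET enable_seqscan = false;   -- force the index purely for comparison
EXPLAIN ANALYZE SELECT * FROM tt WHERE seckey = 1 OR seckey = 2;
RESET enable_seqscan;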
[ { "msg_contents": "Hi,\n\nI have a very tricky situation here. A client bought a Dell dual-machine\nto be used as Database Server, and we have a cheaper machine used in\ndevelopment. With identical databases, configuration parameters and\nrunning the same query, our machine is almost 3x faster.\n\nI tried to increase the shared_buffers and other parameters, but the\nresult is still the same. I would like to know what can I do to check\nwhat can be \"holding\" the Dell server (HD, memory, etc). Both machines\nrun Debian Linux.\n\nI'll post configuration details below, so you'll can figure my scenario\nbetter.\n\n==> Dell PowerEdge:\nHD: SCSI\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) CPU 2.80GHz\nstepping : 9\ncpu MHz : 2791.292\ncache size : 512 KB\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) CPU 2.80GHz\nstepping : 9\ncpu MHz : 2791.292\ncache size : 512 KB\n\n# free -m\n total used free shared buffers \ncached\nMem: 1010 996 14 0 98 \n506\n\n==> Other machine:\nHD: IDE\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Pentium(R) 4 CPU 2.26GHz\nstepping : 5\ncpu MHz : 2262.166\ncache size : 512 KB\n\n#free -m\n total used free shared buffers \ncached\nMem: 439 434 4 0 16 \n395\n\n\n-- \n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+\n\n", "msg_date": "Thu, 04 Nov 2004 19:06:01 -0200", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Better Hardware, worst Results" }, { "msg_contents": "On Thu, 2004-11-04 at 16:06, Alvaro Nunes Melo wrote:\n> Hi,\n> \n> I have a very tricky situation here. A client bought a Dell dual-machine\n> to be used as Database Server, and we have a cheaper machine used in\n> development. With identical databases, configuration parameters and\n> running the same query, our machine is almost 3x faster.\n\nPlease send an explain analyze from both.\n\n\n", "msg_date": "Thu, 04 Nov 2004 16:16:38 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better Hardware, worst Results" }, { "msg_contents": "Alvaro Nunes Melo <[email protected]> writes:\n> I have a very tricky situation here. A client bought a Dell dual-machine\n> to be used as Database Server, and we have a cheaper machine used in\n> development. With identical databases, configuration parameters and\n> running the same query, our machine is almost 3x faster.\n\n> ==> Dell PowerEdge:\n> HD: SCSI\n\n> ==> Other machine:\n> HD: IDE\n\nI'll bet a nickel that the IDE drive is lying about write completion,\nthereby gaining a significant performance boost at the cost of probable\ndata corruption during a power failure. SCSI drives generally tell the\ntruth about this, but consumer-grade IDE gear is usually configured to\nlie.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Nov 2004 16:46:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better Hardware, worst Results " }, { "msg_contents": "Citando Rod Taylor <[email protected]>:\n> Please send an explain analyze from both.\nI'm sendin three explains. In the first the Dell machine didn't use existing\nindexes, so I turn enable_seqscan off (this is the second explain). 
The total\ncost decreased, but the total time not. The third explain refers to the cheaper\n(and faster) machine. The last thing is the query itself.\n\n\n Nested Loop (cost=9008.68..13596.97 rows=1 width=317) (actual\ntime=9272.803..65287.304 rows=2604 loops=1)\n -> Hash Join (cost=9008.68..13590.91 rows=1 width=319) (actual\ntime=9243.294..10560.330 rows=2604 loops=1)\n Hash Cond: (\"outer\".cd_tipo_pagamento = \"inner\".cd_tipo_pagamento)\n -> Hash Join (cost=9007.59..13589.81 rows=1 width=317) (actual\ntime=9243.149..10529.765 rows=2604 loops=1)\n Hash Cond: (\"outer\".cd_condicao = \"inner\".cd_condicao)\n -> Nested Loop (cost=9006.46..13588.62 rows=8 width=315)\n(actual time=9243.083..10497.385 rows=2604 loops=1)\n -> Merge Join (cost=9006.46..13540.44 rows=8 width=290)\n(actual time=9242.962..10405.245 rows=2604 loops=1)\n Merge Cond: (\"outer\".cd_pessoa = \"inner\".cd_pessoa)\n -> Nested Loop Left Join (cost=4658.37..9183.72\nrows=375 width=286) (actual time=9210.101..10327.003 rows=23392 loops=1)\n -> Merge Left Join (cost=4658.37..6924.15\nrows=375 width=274) (actual time=9209.952..9981.475 rows=23392 loops=1)\n Merge Cond: (\"outer\".cd_pessoa =\n\"inner\".cd_pessoa)\n -> Merge Left Join \n(cost=3366.00..5629.19 rows=375 width=255) (actual time=9158.705..9832.781\nrows=23392 loops=1)\n Merge Cond: (\"outer\".cd_pessoa =\n\"inner\".cd_pessoa)\n -> Nested Loop Left Join \n(cost=2073.63..4334.24 rows=375 width=236) (actual time=8679.698..9152.213\nrows=23392 loops=\n1)\n -> Merge Left Join \n(cost=2073.63..2075.94 rows=375 width=44) (actual time=8679.557..8826.898\nrows=23392 loops=1\n)\n Merge Cond:\n(\"outer\".cd_pessoa = \"inner\".cd_pessoa)\n -> Sort \n(cost=1727.15..1728.09 rows=375 width=40) (actual time=8580.391..8611.842\nrows=23392 loops=1)\n Sort Key:\np.cd_pessoa\n -> Seq Scan on\npessoa p (cost=0.00..1711.12 rows=375 width=40) (actual time=0.371..8247.028\nrows=50\n412 loops=1)\n Filter:\n(cliente_liberado(cd_pessoa) = 1)\n -> Sort \n(cost=346.47..346.69 rows=85 width=8) (actual time=99.121..120.706 rows=16470\nloops=1)\n Sort Key:\ne.cd_pessoa\n -> Seq Scan on\nendereco e (cost=0.00..343.75 rows=85 width=8) (actual time=0.070..30.558\nrows=16858\n loops=1)\n Filter:\n(id_tipo_endereco = 2)\n -> Index Scan using\npk_pessoa_juridica on pessoa_juridica pj (cost=0.00..6.01 rows=1 width=196)\n(actual time=0.\n007..0.008 rows=1 loops=23392)\n Index Cond:\n(pj.cd_pessoa = \"outer\".cd_pessoa)\n -> Sort (cost=1292.37..1293.18\nrows=325 width=23) (actual time=478.963..522.701 rows=33659 loops=1)\n Sort Key: t.cd_pessoa\n -> Seq Scan on telefone t \n(cost=0.00..1278.81 rows=325 width=23) (actual time=0.039..120.256 rows=59572\nloops=1\n)\n Filter: (id_principal =\n1::smallint)\n -> Sort (cost=1292.37..1293.18 rows=325\nwidth=23) (actual time=51.205..53.662 rows=3422 loops=1)\n Sort Key: tf.cd_pessoa\n -> Seq Scan on telefone tf \n(cost=0.00..1278.81 rows=325 width=23) (actual time=0.024..43.192 rows=3885\nloops=1)\n Filter: (id_tipo =\n4::smallint)\n -> Index Scan using pk_cep on cep c \n(cost=0.00..6.01 rows=1 width=20) (actual time=0.007..0.009 rows=1 loops=23392)\n Index Cond: (c.cd_cep = \"outer\".cd_cep)\n -> Sort (cost=4348.08..4351.89 rows=1524 width=4)\n(actual time=13.182..18.069 rows=2619 loops=1)\n Sort Key: cgv.cd_pessoa\n -> Index Scan using\nidx_cliente_grupo_vendedor_cd_vendedor on cliente_grupo_vendedor cgv \n(cost=0.00..4267.51 rows=1524 width=4) (\nactual time=0.114..8.986 rows=2619 loops=1)\n Index Cond: (cd_vendedor = 577)\n -> Index Scan using 
pk_cliente_financeiro on\ncliente_financeiro cf (cost=0.00..6.01 rows=1 width=25) (actual\ntime=0.018..0.021 rows=1 loops=2\n604)\n Index Cond: (\"outer\".cd_pessoa = cf.cd_pessoa)\n -> Hash (cost=1.11..1.11 rows=11 width=6) (actual\ntime=0.029..0.029 rows=0 loops=1)\n -> Seq Scan on condicao_pagamento cp (cost=0.00..1.11\nrows=11 width=6) (actual time=0.006..0.024 rows=11 loops=1)\n -> Hash (cost=1.07..1.07 rows=7 width=6) (actual time=0.114..0.114\nrows=0 loops=1)\n -> Seq Scan on tipo_pagamento tp (cost=0.00..1.07 rows=7\nwidth=6) (actual time=0.095..0.106 rows=7 loops=1)\n -> Index Scan using pk_cliente on cliente cl (cost=0.00..6.01 rows=1\nwidth=10) (actual time=0.013..0.017 rows=1 loops=2604)\n Index Cond: (\"outer\".cd_pessoa = cl.cd_pessoa)\n Total runtime: 65298.215 ms\n(49 registros)\n\n\n*************************\n\n Nested Loop (cost=5155.51..19320.20 rows=1 width=317) (actual\ntime=480.311..62530.121 rows=2604 loops=1)\n -> Nested Loop (cost=5155.51..19314.14 rows=1 width=319) (actual\ntime=445.146..7385.369 rows=2604 loops=1)\n -> Hash Join (cost=5155.51..19309.45 rows=1 width=317) (actual\ntime=429.995..7307.799 rows=2604 loops=1)\n Hash Cond: (\"outer\".cd_tipo_pagamento =\n\"inner\".cd_tipo_pagamento)\n -> Nested Loop (cost=5149.42..19303.31 rows=8 width=315)\n(actual time=365.722..7208.785 rows=2604 loops=1)\n -> Merge Join (cost=5149.42..19255.13 rows=8 width=290)\n(actual time=365.551..7112.292 rows=2604 loops=1)\n Merge Cond: (\"outer\".cd_pessoa = \"inner\".cd_pessoa)\n -> Nested Loop Left Join (cost=801.33..14898.41\nrows=375 width=286) (actual time=180.146..7026.597 rows=23392 loops=1)\n -> Merge Left Join (cost=801.33..12638.83\nrows=375 width=274) (actual time=180.087..6620.025 rows=23392 loops=1)\n Merge Cond: (\"outer\".cd_pessoa =\n\"inner\".cd_pessoa)\n -> Merge Left Join \n(cost=801.33..9709.38 rows=375 width=255) (actual time=179.964..6443.147\nrows=23392 loops=1)\n Merge Cond: (\"outer\".cd_pessoa =\n\"inner\".cd_pessoa)\n -> Nested Loop Left Join \n(cost=801.33..6779.94 rows=375 width=236) (actual time=178.106..6131.000\nrows=23392 loops=1)\n -> Merge Left Join \n(cost=801.33..4521.63 rows=375 width=44) (actual time=177.883..5737.847\nrows=23392 loops=1)\n Merge Cond:\n(\"outer\".cd_pessoa = \"inner\".cd_pessoa)\n -> Index Scan using\npk_pessoa on pessoa p (cost=0.00..3718.93 rows=375 width=40) (actual\ntime=41.851..543\n1.143 rows=23392 loops=1)\n Filter:\n(cliente_liberado(cd_pessoa) = 1)\n -> Sort \n(cost=801.33..801.55 rows=85 width=8) (actual time=135.988..166.175 rows=16470\nloops=1)\n Sort Key:\ne.cd_pessoa\n -> Index Scan\nusing idx_endereco_cd_cep on endereco e (cost=0.00..798.61 rows=85 width=8)\n(actual t\nime=8.121..61.640 rows=16858 loops=1)\n Filter:\n(id_tipo_endereco = 2)\n -> Index Scan using\npk_pessoa_juridica on pessoa_juridica pj (cost=0.00..6.01 rows=1 width=196)\n(actual time=0.\n009..0.010 rows=1 loops=23392)\n Index Cond:\n(pj.cd_pessoa = \"outer\".cd_pessoa)\n -> Index Scan using\nidx_telefone_cd_pessoa_id_principal on telefone t (cost=0.00..2927.68 rows=325\nwidth=23) (actual\ntime=1.840..106.496 rows=33659 loops=1)\n Filter: (id_principal =\n1::smallint)\n -> Index Scan using\nidx_telefone_cd_pessoa_id_principal on telefone tf (cost=0.00..2927.68\nrows=325 width=23) (actual time=\n0.056..67.089 rows=3422 loops=1)\n Filter: (id_tipo = 4::smallint)\n -> Index Scan using pk_cep on cep c \n(cost=0.00..6.01 rows=1 width=20) (actual time=0.010..0.011 rows=1 loops=23392)\n Index Cond: (c.cd_cep = \"outer\".cd_cep)\n -> Sort 
(cost=4348.08..4351.89 rows=1524 width=4)\n(actual time=14.178..18.668 rows=2619 loops=1)\n Sort Key: cgv.cd_pessoa\n -> Index Scan using\nidx_cliente_grupo_vendedor_cd_vendedor on cliente_grupo_vendedor cgv \n(cost=0.00..4267.51 rows=1524 width=4) (\nactual time=0.177..9.557 rows=2619 loops=1)\n Index Cond: (cd_vendedor = 577)\n -> Index Scan using pk_cliente_financeiro on\ncliente_financeiro cf (cost=0.00..6.01 rows=1 width=25) (actual\ntime=0.019..0.022 rows=1 loops=2\n604)\n Index Cond: (\"outer\".cd_pessoa = cf.cd_pessoa)\n -> Hash (cost=6.08..6.08 rows=7 width=6) (actual\ntime=64.025..64.025 rows=0 loops=1)\n -> Index Scan using pk_tipo_pagamento on tipo_pagamento tp\n (cost=0.00..6.08 rows=7 width=6) (actual time=63.991..64.007 rows=7 loops=1)\n -> Index Scan using pk_condicao_pagamento on condicao_pagamento cp \n(cost=0.00.. Index Cond: (cp.cd_condicao = \"outer\".cd_condicao)\n -> Index Scan using pk_cliente on cliente cl (cost=0.00..6.01 rows=1\nwidth=10) (actual time=0.013..0.017 rows=1 loops=2604)\n Index Cond: (\"outer\".cd_pessoa = cl.cd_pessoa)\n Total runtime: 62536.845 ms\n(42 registros)\n4.68 rows=1 width=6) (actual time=0.014..0.016 rows=1 loops=2604)\n\n\n*************************\n\n Hash Join (cost=2.23..11191.77 rows=9 width=134) (actual\ntime=341.708..21868.167 rows=2604 loops=1)\n Hash Cond: (\"outer\".cd_condicao = \"inner\".cd_condicao)\n -> Hash Join (cost=1.09..11190.16 rows=9 width=132) (actual\ntime=329.205..19758.764 rows=2604 loops=1)\n Hash Cond: (\"outer\".cd_tipo_pagamento = \"inner\".cd_tipo_pagamento)\n -> Nested Loop (cost=0.00..11188.94 rows=9 width=130) (actual\ntime=329.086..19727.477 rows=2604 loops=1)\n -> Merge Join (cost=0.00..9190.52 rows=245 width=138) (actual\ntime=7.860..18543.354 rows=24380 loops=1)\n Merge Cond: (\"outer\".cd_pessoa = \"inner\".cd_pessoa)\n -> Merge Join (cost=0.00..11686.19 rows=245 width=128)\n(actual time=7.692..17802.380 rows=24380 loops=1)\n Merge Cond: (\"outer\".cd_pessoa = \"inner\".cd_pessoa)\n -> Nested Loop Left Join (cost=0.00..14123.02\nrows=375 width=106) (actual time=7.513..17071.221 rows=70931 loops=1)\n -> Merge Left Join (cost=0.00..12973.12\nrows=375 width=94) (actual time=7.297..16005.974 rows=70931 loops=1)\n Merge Cond: (\"outer\".cd_pessoa =\n\"inner\".cd_pessoa)\n -> Merge Left Join (cost=0.00..10076.90\nrows=375 width=82) (actual time=7.161..15391.752 rows=70931 loops=1)\n Merge Cond: (\"outer\".cd_pessoa =\n\"inner\".cd_pessoa)\n -> Nested Loop Left Join \n(cost=0.00..7040.30 rows=375 width=70) (actual time=6.990..14516.256 rows=47998\nloops=1)\n -> Nested Loop Left Join \n(cost=0.00..5401.41 rows=375 width=37) (actual time=6.839..13504.771 rows=47998\nloops=\n1)\n -> Index Scan using\npk_pessoa on pessoa p (cost=0.00..3398.09 rows=375 width=33) (actual\ntime=6.599..1234\n7.532 rows=47998 loops=1)\n Filter:\n(cliente_liberado(cd_pessoa) = 1)\n -> Index Scan using\nun_endereco_id_tipo_endereco on endereco e (cost=0.00..5.33 rows=1 width=8)\n(actual t\nime=0.015..0.016 rows=0 loops=47998)\n Index Cond:\n(e.cd_pessoa = \"outer\".cd_pessoa)\n Filter:\n(id_tipo_endereco = 2)\n -> Index Scan using\npk_pessoa_juridica on pessoa_juridica pj (cost=0.00..4.36 rows=1 width=37)\n(actual time=0.0\n12..0.013 rows=0 loops=47998)\n Index Cond:\n(pj.cd_pessoa = \"outer\".cd_pessoa)\n -> Index Scan using\nidx_telefone_cd_pessoa_id_principal on telefone t (cost=0.00..2884.52\nrows=59265 width=16) (actua\nl time=0.146..260.008 rows=58128 loops=1)\n Filter: (id_principal =\n1::smallint)\n -> Index Scan 
using\nidx_telefone_cd_pessoa_id_principal on telefone tf (cost=0.00..2884.52\nrows=4217 width=16) (actual time\n=0.053..159.212 rows=3600 loops=1)\n Filter: (id_tipo = 4::smallint)\n -> Index Scan using pk_cep on cep c \n(cost=0.00..3.05 rows=1 width=20) (actual time=0.006..0.007 rows=0 loops=70931)\n Index Cond: (c.cd_cep = \"outer\".cd_cep)\n -> Index Scan using pk_cliente_financeiro on\ncliente_financeiro cf (cost=0.00..1806.88 rows=48765 width=22) (actual\ntime=0.146..175.468\n rows=48765 loops=1)\n -> Index Scan using pk_cliente on cliente cl \n(cost=0.00..1387.01 rows=48805 width=10) (actual time=0.135..179.715 rows=48804\nloops=1)\n -> Index Scan using idx_cliente_grupo_vendedor_cd_pessoa on\ncliente_grupo_vendedor cgv (cost=0.00..8.14 rows=1 width=4) (actual\ntime=0.042..0.043 r\nows=0 loops=24380)\n Index Cond: (cgv.cd_pessoa = \"outer\".cd_pessoa)\n Filter: (cd_vendedor = 577)\n -> Hash (cost=1.07..1.07 rows=7 width=6) (actual time=0.059..0.059\nrows=0 loops=1)\n -> Seq Scan on tipo_pagamento tp (cost=0.00..1.07 rows=7\nwidth=6) (actual time=0.033..0.047 rows=7 loops=1)\n -> Hash (cost=1.11..1.11 rows=11 width=6) (actual time=0.096..0.096 rows=0\nloops=1)\n -> Seq Scan on condicao_pagamento cp (cost=0.00..1.11 rows=11\nwidth=6) (actual time=0.054..0.079 rows=11 loops=1)\n Total runtime: 21873.236 ms\n(39 rows)\n\n\nSELECT p.cd_pessoa,\n obtem_cnpj_cpf(p.cd_pessoa) AS nr_cnpj_cpf, p.nm_pessoa,\nCOALESCE(pj.nm_fantasia, p.nm_pessoa),\n obtem_endereco(obtem_endereco_comercial(p.cd_pessoa)) AS ds_endereco,\n obtem_bairro(obtem_endereco_comercial(p.cd_pessoa)) AS ds_bairro,\n c.cd_cidade, c.nr_cep, pj.nr_ie, '0' || t.nr_telefone, '0' ||\ntf.nr_telefone,\n cf.cd_tipo_pagamento, cf.cd_condicao, cp.nr_dias, cl.cd_atividade,\ntp.nr_hierarquia,\n '0', REPLACE(cf.pr_taxa_financeira, '.', ',') AS pr_taxa_financeira,\n TO_CHAR(p.dt_nascimento, 'DDMMYYYY') AS dt_nascimento,\n cl.nr_checkouts,\n CASE WHEN cf.id_confianca = 1 THEN 'A'\n WHEN cf.id_confianca = 2 THEN 'B'\n WHEN cf.id_confianca = 3 THEN 'C'\n WHEN cf.id_confianca = 4 THEN 'D'\n END AS id_confianca,\n '' AS id_cadastro\nFROM pessoa p\n LEFT OUTER JOIN endereco e ON e.cd_pessoa = p.cd_pessoa AND\ne.id_tipo_endereco = 2\n LEFT OUTER JOIN pessoa_juridica pj ON pj.cd_pessoa = p.cd_pessoa\n LEFT OUTER JOIN telefone t ON t.cd_pessoa = p.cd_pessoa AND t.id_principal\n= '1'\n LEFT OUTER JOIN telefone tf ON tf.cd_pessoa = p.cd_pessoa AND tf.id_tipo =\n'4'\n LEFT OUTER JOIN cep c ON c.cd_cep = e.cd_cep\n JOIN cliente cl ON cl.cd_pessoa = p.cd_pessoa\n JOIN cliente_financeiro cf ON cf.cd_pessoa = cl.cd_pessoa\n JOIN cliente_grupo_vendedor cgv ON cgv.cd_pessoa = p.cd_pessoa\n JOIN condicao_pagamento cp ON cp.cd_condicao = cf.cd_condicao\n JOIN tipo_pagamento tp ON tp.cd_tipo_pagamento = cf.cd_tipo_pagamento\nWHERE cgv.cd_vendedor = '577'\nAND cliente_liberado(p.cd_pessoa) = 1;\n\n\n\n--\n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+\n\n----------------------------------------------------------------\nThis message was sent using IMP, the Internet Messaging Program.\n\nAtua Sistemas de Informa��o - http://www.atua.com.br \n", "msg_date": "Thu, 4 Nov 2004 20:42:03 -0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Better Hardware, worst Results" }, { "msg_contents": "On Thu, 2004-11-04 at 17:42, [email protected] wrote:\n> Citando 
Rod Taylor <[email protected]>:\n> > Please send an explain analyze from both.\n> I'm sendin three explains. In the first the Dell machine didn't use existing\n> indexes, so I turn enable_seqscan off (this is the second explain). The total\n> cost decreased, but the total time not. The third explain refers to the cheaper\n> (and faster) machine. The last thing is the query itself.\n\nAll 3 plans have crappy estimates.\n\nRun ANALYZE in production, then send another explain analyze (as an\nattachment please, to avoid linewrap).\n\n\n", "msg_date": "Thu, 04 Nov 2004 17:58:29 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better Hardware, worst Results" }, { "msg_contents": "[email protected] wrote:\n\n>Citando Rod Taylor <[email protected]>:\n> \n>\n>>Please send an explain analyze from both.\n>> \n>>\n>I'm sendin three explains. In the first the Dell machine didn't use existing\n>indexes, so I turn enable_seqscan off (this is the second explain). The total\n>cost decreased, but the total time not. The third explain refers to the cheaper\n>(and faster) machine. The last thing is the query itself.\n>\n>\n> Nested Loop (cost=9008.68..13596.97 rows=1 width=317) (actual\n>time=9272.803..65287.304 rows=2604 loops=1)\n> Nested Loop (cost=5155.51..19320.20 rows=1 width=317) (actual\n>time=480.311..62530.121 rows=2604 loops=1)\n> Hash Join (cost=2.23..11191.77 rows=9 width=134) (actual\n>time=341.708..21868.167 rows=2604 loops=1)\n>\n> \n>\nWell the plan is completely different on the dev machine. Therefore \neither the PG version or the postgresql.conf is different. No other \npossible answer.\n\nM\n\n\n\n\n\n\n\n\n\[email protected] wrote:\n\nCitando Rod Taylor <[email protected]>:\n \n\nPlease send an explain analyze from both.\n \n\nI'm sendin three explains. In the first the Dell machine didn't use existing\nindexes, so I turn enable_seqscan off (this is the second explain). The total\ncost decreased, but the total time not. The third explain refers to the cheaper\n(and faster) machine. The last thing is the query itself.\n\n\n Nested Loop (cost=9008.68..13596.97 rows=1 width=317) (actual\ntime=9272.803..65287.304 rows=2604 loops=1)\n Nested Loop (cost=5155.51..19320.20 rows=1 width=317) (actual\ntime=480.311..62530.121 rows=2604 loops=1)\n Hash Join (cost=2.23..11191.77 rows=9 width=134) (actual\ntime=341.708..21868.167 rows=2604 loops=1)\n\n \n\nWell the plan is completely different on the dev machine.  Therefore\neither the PG version or the postgresql.conf is different.  No other\npossible answer.\n\nM", "msg_date": "Thu, 04 Nov 2004 22:58:55 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better Hardware, worst Results" }, { "msg_contents": "\n>All 3 plans have crappy estimates.\n>\n>Run ANALYZE in production, then send another explain analyze (as an\n>attachment please, to avoid linewrap).\n>\n> \n>\nEr, no other possible answer except Rod's :-)\n", "msg_date": "Thu, 04 Nov 2004 23:08:59 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Better Hardware, worst Results" }, { "msg_contents": "Em Qui, 2004-11-04 �s 20:58, Rod Taylor escreveu:\n> All 3 plans have crappy estimates.\n> \n> Run ANALYZE in production, then send another explain analyze (as an\n> attachment please, to avoid linewrap).\nFirst of all, I'd like to apoligize for taking so long to post a new\nposition. After this, I apologize again because the problem was in my\nquery. 
It used some functions that for some reason made the Dell machine\nhave a greater cost than our house-made machine. After correcting this\nfunctions, the results were faster in the Dell machine.\n\nThe last apologize is for the linewrapped explains. In our brazilian\nPostgreSQL mailing list, attachments are not allowed, so I send them as\ninline text.\n\nThanks to everyone who spent some time to help me solving this problem.\n\n-- \n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+\n\n", "msg_date": "Mon, 08 Nov 2004 09:46:51 -0200", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Better Hardware, worst Results" } ]
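A sketch of the check Rod asked for, using table names taken from the query in the thread (the ANALYZE list is not complete). The estimate that is furthest off in all three plans is the filter on cliente_liberado() -- about 375 rows estimated versus roughly 50 000 actual -- and since a filter on a function result is typically costed with a default selectivity, that particular number can stay rough even with fresh statistics.

ANALYZE pessoa;
ANALYZE telefone;
ANALYZE cliente_grupo_vendedor;

-- Re-check the problem filter in isolation and compare rows= against actual rows=
EXPLAIN ANALYZE
SELECT cd_pessoa FROM pessoa WHERE cliente_liberado(cd_pessoa) = 1;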
[ { "msg_contents": "Hello,\nI am seeking some advice on appropriate indexing. I think I have a rough \nidea where to place my indices but would be grateful for some tips from \nmore experienced people.\nThe following example shows what is probably the most complex query of \nthe application.\n\nA few points to give you a rough indicator about the DB:\n- application is more query than update intensive\n- each table has a surrogate PK (serial)\n- access of tables ITEM and PRODUCT always involves join on BRAND, \nMODEL, TYPE\n- CATEGORY,SECTION,CONDITION are pretty much static and have no more \nthan 30 rows\n- PRODUCT table will eventually contain a few thousand records\n- ITEM table will, grow, grow, grow (sold items are not deleted)\n- PRODUCT_FK, TYPE_FK, MODEL_FK, BRAND_FK are never NULL\n- PRODUCT_LENS... columns are only NOT NULL where CATEGORY_PK=2\n- ITEM.STATUS = available, sold, reserved ..., never NULL\n- ITEM.KIND = secondhand, commission, new, never NULL\n\n=============================================\nMy understanding is:\n- index the FK columns used for joins\n- index columns typically used in WHERE clause\n- index on e.g. PRODUCT.CATEGORY_FK prevents seq scan of CATEGORY\n- as CATEGORY contains few rows it's not worth indexing CATEGORY_FK\n\nQuestions:\n- Does the order of the JOIN clauses make a difference?\n- Does the order of the WHERE clauses make a difference?\n\n=============================================\nSELECT\n\nBRAND.BRAND_NAME,\nMODEL.MODEL_NAME,\nTYPE.TYPE_NAME,\nITEM.RETAIL_PRICE,\nCONDITION.ABBREVIATION\n\nFROM ITEM\n\nLEFT JOIN PRODUCT ON ITEM.PRODUCT_FK=PRODUCT.PRODUCT_PK\nLEFT JOIN TYPE ON PRODUCT.TYPE_FK=TYPE.TYPE_PK\nLEFT JOIN MODEL ON TYPE.MODEL_FK=MODEL.MODEL_PK\nLEFT JOIN BRAND ON MODEL.BRAND_FK=BRAND.BRAND_PK\nLEFT JOIN CATEGORY ON PRODUCT.CATEGORY_FK=CATEGORY.CATEGORY_PK\nLEFT JOIN SECTION SECTION ON PRODUCT.SECTION_USED_FK=SECTION.SECTION_PK\nLEFT JOIN CONDITION ON ITEM.CONDITION_FK=CONDITION.CONDITION_PK\n\nWHERE PRODUCT.SECTION_USED_FK IS NOT NULL AND ITEM.STATUS=1 and \n(ITEM.KIND=2 or ITEM.KIND=3)\n\nORDER BY SECTION.POSITION, CATEGORY.POSITION,\nPRODUCT.LENS_FOCAL_LEN_FROM,PRODUCT.LENS_FOCAL_LEN_TO IS NOT NULL,\nPRODUCT.LENS_FOCAL_LEN_TO,\nPRODUCT.LENS_SPEED_FROM,PRODUCT.LENS_SPEED_TO,\nTYPE.TYPE_NAME, CONDITION.POSITION\n\n\nI'd appreciate a few pointers based on this example. Thanks in advance.\n\n-- \n\n\nRegards/Gru�,\n\nTarlika Elisabeth Schmitz\n", "msg_date": "Thu, 04 Nov 2004 22:32:57 +0000", "msg_from": "T E Schmitz <[email protected]>", "msg_from_op": true, "msg_subject": "appropriate indexing" }, { "msg_contents": "\n> - ITEM table will, grow, grow, grow (sold items are not deleted)\n> WHERE PRODUCT.SECTION_USED_FK IS NOT NULL AND ITEM.STATUS=1 and \n> (ITEM.KIND=2 or ITEM.KIND=3)\n>\nPartial index on item.status ?\n", "msg_date": "Thu, 04 Nov 2004 22:38:57 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: appropriate indexing" } ]
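A sketch combining the partial-index suggestion above with the poster's own rule of indexing the FK columns used in joins; the index names are made up and the predicate simply mirrors the WHERE clause of the example query. Whether the partial index pays off depends on how small that subset stays relative to the ever-growing ITEM table.

-- Plain indexes on the join columns of the two large tables:
CREATE INDEX item_product_fk_idx ON item (product_fk);
CREATE INDEX product_type_fk_idx ON product (type_fk);

-- Partial index covering only the rows the example query touches:
CREATE INDEX item_avail_idx ON item (product_fk)
    WHERE status = 1 AND (kind = 2 OR kind = 3);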
[ { "msg_contents": "Hi all,\n I have a table which have more than 200000 records. I need to get\nthe records which matches like this\n\nwhere today::date = '2004-11-05';\n\nThis is the only condition in the query. There is a btree index on the\ncolumn today.\nIs there any way to optimise it.\n\nrgds\nAntony Paul\n", "msg_date": "Fri, 5 Nov 2004 12:46:20 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Checking = with timestamp field is slow" }, { "msg_contents": "\nOn Nov 5, 2004, at 4:16 PM, Antony Paul wrote:\n> where today::date = '2004-11-05';\n>\n> This is the only condition in the query. There is a btree index on the\n> column today.\n> Is there any way to optimise it.\n\nI'm sure others out there have better ideas, but you might want to try\n\nwhere current_date = date '2004-11-05'\n\nMight not make a difference at all, but perhaps PostgreSQL is coercing \nboth values to timestamp or some other type as you're only providing a \nstring to compare to a date. Then again, it might make no difference at \nall.\n\nMy 1 cent.\n\nMichael Glaesemann\ngrzm myrealbox com\n\n", "msg_date": "Fri, 5 Nov 2004 17:14:01 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checking = with timestamp field is slow" }, { "msg_contents": "\nOn Nov 5, 2004, at 5:14 PM, Michael Glaesemann wrote:\n\n>\n> On Nov 5, 2004, at 4:16 PM, Antony Paul wrote:\n>> where today::date = '2004-11-05';\n>>\n>> This is the only condition in the query. There is a btree index on the\n>> column today.\n>> Is there any way to optimise it.\n>\n> I'm sure others out there have better ideas, but you might want to try\n>\n> where current_date = date '2004-11-05'\n\nAch! just re-read that. today is one of your columns! Try\n\nwhere today::date = date '2004-11-05'\n\n", "msg_date": "Fri, 5 Nov 2004 17:32:49 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checking = with timestamp field is slow" }, { "msg_contents": "On Fri, Nov 05, 2004 at 12:46:20PM +0530, Antony Paul wrote:\n\n> I have a table which have more than 200000 records. I need to get\n> the records which matches like this\n> \n> where today::date = '2004-11-05';\n> \n> This is the only condition in the query. There is a btree index on the\n> column today. Is there any way to optimise it.\n\nIs the today column a TIMESTAMP as the subject implies? If so then\nyour queries probably aren't using the index because you're changing\nthe type to something that's not indexed. Your queries should speed\nup if you create an index on DATE(today):\n\nCREATE INDEX foo_date_today_idx ON foo (DATE(today));\n\nAfter creating the new index, use WHERE DATE(today) = '2004-11-05'\nin your queries. EXPLAIN ANALYZE should show that the index is\nbeing used.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Fri, 5 Nov 2004 01:34:01 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checking = with timestamp field is slow" }, { "msg_contents": "On Fri, 2004-11-05 at 12:46 +0530, Antony Paul wrote:\n> Hi all,\n> I have a table which have more than 200000 records. I need to get\n> the records which matches like this\n> \n> where today::date = '2004-11-05';\n> \n> This is the only condition in the query. There is a btree index on the\n> column today.\n> Is there any way to optimise it.\n\nHi Antony,\n\nI take it your field is called \"today\" (seems dodgy, but these things\nhappen...). 
Anywa, have you tried indexing on the truncated value?\n\n create index xyz_date on xyz( today::date );\n analyze xyz;\n\nThat's one way. It depends on how many of those 200,000 rows are on\neach date too, as to whether it will get used by your larger query.\n\nRegards,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n When in doubt, tell the truth.\n -- Mark Twain\n-------------------------------------------------------------------------", "msg_date": "Fri, 05 Nov 2004 21:48:59 +1300", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checking = with timestamp field is slow" }, { "msg_contents": "On Fri, Nov 05, 2004 at 05:32:49PM +0900, Michael Glaesemann wrote:\n> \n> On Nov 5, 2004, at 5:14 PM, Michael Glaesemann wrote:\n> \n> >\n> >On Nov 5, 2004, at 4:16 PM, Antony Paul wrote:\n> >>where today::date = '2004-11-05';\n> >>\n> >>This is the only condition in the query. There is a btree index on the\n> >>column today.\n> >>Is there any way to optimise it.\n> >\n> >I'm sure others out there have better ideas, but you might want to try\n> >\n> >where current_date = date '2004-11-05'\n> \n> Ach! just re-read that. today is one of your columns! Try\n> \n> where today::date = date '2004-11-05'\n\nCasting '2004-11-05' to DATE shouldn't be necessary, at least not\nin 7.4.5.\n\ntest=> EXPLAIN ANALYZE SELECT * FROM foo WHERE today::DATE = '2004-11-05';\n QUERY PLAN \n--------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..268.00 rows=50 width=16) (actual time=0.592..50.854 rows=1 loops=1)\n Filter: ((today)::date = '2004-11-05'::date)\n\n\nAs you can see, '2004-11-05' is already cast to DATE. The sequential\nscan is happening because there's no index on today::DATE.\n\n\ntest=> CREATE INDEX foo_date_idx ON foo (DATE(today));\nCREATE INDEX\ntest=> EXPLAIN ANALYZE SELECT * FROM foo WHERE DATE(today) = '2004-11-05';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Index Scan using foo_date_idx on foo (cost=0.00..167.83 rows=50 width=16) (actual time=0.051..0.061 rows=1 loops=1)\n Index Cond: (date(today) = '2004-11-05'::date)\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Fri, 5 Nov 2004 02:13:06 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checking = with timestamp field is slow" }, { "msg_contents": "After a long battle with technology, [email protected] (Antony Paul), an earthling, wrote:\n> Hi all,\n> I have a table which have more than 200000 records. I need to get\n> the records which matches like this\n>\n> where today::date = '2004-11-05';\n>\n> This is the only condition in the query. 
There is a btree index on the\n> column today.\n> Is there any way to optimise it.\n\nHow about changing the criterion to:\n\n where today between '2004-11-05' and '2004-11-06';\n\nThat ought to make use of the index on \"today\".\n-- \n\"cbbrowne\",\"@\",\"ntlug.org\"\nhttp://www.ntlug.org/~cbbrowne/sgml.html\n\"People need to quit pretending they can invent THE interface and walk\naway from it, like some Deist fantasy.\" -- Michael Peck\n", "msg_date": "Fri, 05 Nov 2004 07:47:54 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checking = with timestamp field is slow" }, { "msg_contents": "On Fri, Nov 05, 2004 at 07:47:54AM -0500, Christopher Browne wrote:\n> \n> How about changing the criterion to:\n> \n> where today between '2004-11-05' and '2004-11-06';\n> \n> That ought to make use of the index on \"today\".\n\nYes it should, but it'll also return records that have a \"today\"\nvalue of '2004-11-06 00:00:00' since \"x BETWEEN y AND z\" is equivalent\nto \"x >= y AND x <= z\". Try this instead:\n\n WHERE today >= '2004-11-05' AND today < '2004-11-06'\n\nIn another post I suggested creating an index on DATE(today). The\nabove query should make that unnecessary, although in 8.0 such an\nindex would be used in queries like this:\n\n WHERE today IN ('2004-09-01', '2004-10-01', '2004-11-01');\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Fri, 5 Nov 2004 10:45:34 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checking = with timestamp field is slow" } ]
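Putting the last two replies together, a minimal sketch with a plain index on the timestamp column and the half-open range; the table name foo is borrowed from the earlier examples, and the functional index on DATE(today) shown above is the equivalent alternative.

CREATE INDEX foo_today_idx ON foo (today);

SELECT *
  FROM foo
 WHERE today >= '2004-11-05'
   AND today <  '2004-11-06';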
[ { "msg_contents": "To what extent would your problems be solved by having a 2nd server, a replication system (such as slony-1, but there are others), and some sort of load-balancer in front of it? The load-balancing could be as simple as round-robin DNS server, perhaps...\n\nThen when you need to do maintenance such a vacuum full, you can temporarily take 1 server out of the load-balancer (I hope) and do maintenance, and then the other.\nI don't know what that does to replication, but I would venture that replication systems should be designed to handle a node going offline.\n\nLoad balancing could also help to protect against server-overload and 1 server toppling over.\n\nOf course, I don't know to what extent having another piece of hardware is an option, for you.\n\ncheers,\n\n--Tim\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Martin Foster\nSent: Friday, November 05, 2004 3:50 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Restricting Postgres\n\n[...]\n\nNow is there an administrative command in PostgreSQL that will cause it \nto move into some sort of maintenance mode? For me that could be \nexceedingly useful as it would still allow for an admin connection to be \nmade and run a VACUUM FULL and such.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Fri, 5 Nov 2004 09:48:00 +0100", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Postgres" } ]
[ { "msg_contents": "Hey people, long while since I posted here, but I'm having an index\nissue that looks on the surface to be a little strange.\n\nI have a text field that I'm trying to query on in a table with\nmillions of rows. Stupid I know, but a fairly common stupid thing to\ntry to do.\n\nFor some reason it's a requirement that partial wildcard searches are\ndone on this field, such as \"SELECT ... WHERE field LIKE 'A%'\"\n\nI thought an interesting way to do this would be to simply create\npartial indexes for each letter on that field, and it works when the\nquery matches the WHERE clause in the index exactly like above. The\nproblem is thus:\n\nSay I have an index.. CREATE INDEX column_idx_a ON table (column)\nWHERE column LIKE 'A%'\n\nIt seems to me that a query saying \"SELECT column FROM table WHERE\ncolumn LIKE 'AA%';\" should be just as fast or very close to the first\ncase up above. However, explain tells me that this query is not using\nthe index above, which is what's not making sense to me.\n\nDoes the planner not realize that 'AA%' will always fall between 'A%'\nand 'B', and thus that using the index would be the best way to go, or\nam I missing something else that's preventing this from working?\n", "msg_date": "Fri, 5 Nov 2004 09:39:16 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Strange (?) Index behavior?" }, { "msg_contents": "> For some reason it's a requirement that partial wildcard \n> searches are done on this field, such as \"SELECT ... WHERE \n> field LIKE 'A%'\"\n> \n> I thought an interesting way to do this would be to simply \n> create partial indexes for each letter on that field, and it \n> works when the query matches the WHERE clause in the index \n> exactly like above. The problem is thus:\n\nI thought PG could use an ordinary index for 'like' conditions with just a\nterminating '%'?\n\nMy other thought is that like 'A%' should grab about 1/26th of the table\nanyway (if the initial character distribution is random), and so a\nsequential scan might be the best plan anyway...\n\nM\n\n", "msg_date": "Fri, 5 Nov 2004 15:06:32 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "> It seems to me that a query saying \"SELECT column FROM table WHERE\n> column LIKE 'AA%';\" should be just as fast or very close to the first\n> case up above. However, explain tells me that this query is not using\n> the index above, which is what's not making sense to me.\n\nIt looks for an exact expression match, and doesn't know about values\nwhich are equal.\n\nYou can provide both clauses.\n\nWHERE column LIKE 'A%' and column LIKE 'AA%';\n\n\n", "msg_date": "Fri, 05 Nov 2004 10:07:38 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "On Fri, 05 Nov 2004 10:07:38 -0500, Rod Taylor <[email protected]> wrote:\n> > It seems to me that a query saying \"SELECT column FROM table WHERE\n> > column LIKE 'AA%';\" should be just as fast or very close to the first\n> > case up above. However, explain tells me that this query is not using\n> > the index above, which is what's not making sense to me.\n> \n> It looks for an exact expression match, and doesn't know about values\n> which are equal.\n> \n> You can provide both clauses.\n> \n> WHERE column LIKE 'A%' and column LIKE 'AA%';\n\nI see. 
That's not really optimal either however as you can probably\nsee already.. adding AB, AC, AD...AZ is likely to be pretty bogus and\nat the least is time consuming.\n\nMatt Clark was right that it will use a standard index, which is in\nfact what it's doing right now in the \"SELECT column WHERE column LIKE\n'AA%';\" case.. however as I said, the table has millions of rows --\ncurrently about 76 million, so even a full index scan is fairly slow.\n\nThe machine isn't all that hot performance wise either, a simple dual\n800 P3 with a single 47GB Seagate SCSI. The only redeeming factor is\nthat it has 2GB of memory, which I'm trying to make the most of with\nthese indexes.\n\nSo assuming this partial index situation isn't going to change (it\nseems like it would be a fairly simple fix for someone that knows the\npg code however) I'm wondering if a subselect may speed things up any,\nso I'm going to investigate that next.\n\nPerhaps.. SELECT column FROM (SELECT column FROM table WHERE column\nLIKE 'A%') AS sq WHERE column LIKE 'AA%';\n\nThe query planner thinks this will be pretty fast indeed, and does use\nthe index I am after.\n\nOS is, of course, FreeBSD.\n", "msg_date": "Fri, 5 Nov 2004 10:32:43 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "On Fri, 5 Nov 2004 10:32:43 -0500, Allen Landsidel <[email protected]> wrote:\n> On Fri, 05 Nov 2004 10:07:38 -0500, Rod Taylor <[email protected]> wrote:\n> \n> \n> > > It seems to me that a query saying \"SELECT column FROM table WHERE\n> > > column LIKE 'AA%';\" should be just as fast or very close to the first\n> > > case up above. However, explain tells me that this query is not using\n> > > the index above, which is what's not making sense to me.\n> >\n> > It looks for an exact expression match, and doesn't know about values\n> > which are equal.\n> >\n> > You can provide both clauses.\n> >\n> > WHERE column LIKE 'A%' and column LIKE 'AA%';\n> \n> I see. That's not really optimal either however as you can probably\n> see already.. adding AB, AC, AD...AZ is likely to be pretty bogus and\n> at the least is time consuming.\n\nI see now that you mean to add that to the SELECT clause and not the\nindex, my mistake.\n\n> Perhaps.. SELECT column FROM (SELECT column FROM table WHERE column\n> LIKE 'A%') AS sq WHERE column LIKE 'AA%';\n> \n> The query planner thinks this will be pretty fast indeed, and does use\n> the index I am after.\n\nThis was indeed pretty fast. About 7 seconds, as was modifying the\nWHERE as suggested above.\n\n-Allen\n", "msg_date": "Fri, 5 Nov 2004 11:54:02 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "On Fri, Nov 05, 2004 at 09:39:16 -0500,\n Allen Landsidel <[email protected]> wrote:\n> \n> For some reason it's a requirement that partial wildcard searches are\n> done on this field, such as \"SELECT ... WHERE field LIKE 'A%'\"\n> \n> I thought an interesting way to do this would be to simply create\n> partial indexes for each letter on that field, and it works when the\n> query matches the WHERE clause in the index exactly like above. The\n> problem is thus:\n\nThat may not help much except for prefixes that have a below average\nnumber of occurences. 
If you are going to be select 1/26 of the records,\nyou are probably going to do about as well with a sequential scan as an\nindex scan.\n\nJust having a normal index on the column will work if the database locale\nis C. In 7.4 you can create an index usable by LIKE even in the database\nlocale isn't C, but I don't remember the exact syntax. You will be better\noff having just one index rather than 26 partial indexes.\n", "msg_date": "Fri, 5 Nov 2004 11:51:59 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "On Fri, 5 Nov 2004 11:51:59 -0600, Bruno Wolff III <[email protected]> wrote:\n> On Fri, Nov 05, 2004 at 09:39:16 -0500,\n> Allen Landsidel <[email protected]> wrote:\n> >\n> > For some reason it's a requirement that partial wildcard searches are\n> > done on this field, such as \"SELECT ... WHERE field LIKE 'A%'\"\n> >\n> > I thought an interesting way to do this would be to simply create\n> > partial indexes for each letter on that field, and it works when the\n> > query matches the WHERE clause in the index exactly like above. The\n> > problem is thus:\n> \n> That may not help much except for prefixes that have a below average\n> number of occurences. If you are going to be select 1/26 of the records,\n> you are probably going to do about as well with a sequential scan as an\n> index scan.\n\nThe thing isn't that I want 1/26th of the records since the\ndistribution is not exactly equal among different letters, but more\nimportantly, there are about 76million rows currently, and for some\nreason I am being told by the people with the pointy hair that a query\nlike \"select foo,bar from table where foo like 'abc%';\" is not an\nuncommon type of query to run. I don't know why it's common and to be\nhonest, I'm afraid to ask. ;)\n\nWith that many rows, and a normal index on the field, postgres figures\nthe best option for say \"I%\" is not an index scan, but a sequential\nscan on the table, with a filter -- quite obviously this is slow as\nheck, and yes, I've run analyze several times and in fact have the\nvacuum analyze automated.\n\nWith the partial index the index scan is used and the cost drops from\n0..2million to 0..9000 -- a vast improvement.\n\nSo I'm going to go with the partial indexes, and have a total of 36 of\nthem -- A-Z and 0-9.\n\n> Just having a normal index on the column will work if the database locale\n> is C. In 7.4 you can create an index usable by LIKE even in the database\n> locale isn't C, but I don't remember the exact syntax. You will be better\n> off having just one index rather than 26 partial indexes.\n\nI haven't written a line of C in years, and it was never my strong\nsuit, so despite all my years doing development and sysadminning, the\nlocale stuff is still something of a mystery to me.\n\nThe locale though is C, the default, and will for the time being at\nleast be storing only ascii strings -- no unicode, other character\nsets, or anything funky like that.\n\n-Allen\n", "msg_date": "Fri, 5 Nov 2004 12:56:47 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" 
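The operator-class syntax Bruno can't recall is most likely the *_pattern_ops form added in 7.4; a minimal sketch, with "mytable" and "field" standing in for the real names, which haven't been given at this point in the thread:

-- Makes a btree index usable for LIKE 'prefix%' even when the database locale is not C:
CREATE INDEX field_pattern_idx ON mytable (field text_pattern_ops);

-- With a C locale, as Allen reports, a plain btree index already covers the prefix case:
CREATE INDEX field_plain_idx ON mytable (field);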
}, { "msg_contents": "> With that many rows, and a normal index on the field, \n> postgres figures the best option for say \"I%\" is not an index \n> scan, but a sequential scan on the table, with a filter -- \n> quite obviously this is slow as heck, and yes, I've run \n> analyze several times and in fact have the vacuum analyze automated.\n\nAh, so \"like 'I%'\" uses a very slow seq scan, but \"like 'ABC%'\" uses an\nordinary index OK? If so then...\n\nThe planner would usually assume (from what Tom usually says) that 1/26\nselectivity isn't worth doing an index scan for, but in your case it's wrong\n(maybe because the rows are very big?)\n\nYou may be able to get the planner to go for an index scan on \"like 'I%'\" by\ntweaking the foo_cost variables in postgresql.conf \n\nOr you could have the app rewrite \"like 'I%'\" to \"like 'IA%' or like 'IB%'\n... \", or do that as a stored proc.\n\n> With the partial index the index scan is used and the cost \n> drops from 0..2million to 0..9000 -- a vast improvement.\n\nSo there are really only 9000 rows out of 76 million starting with 'I'? How\nabout combining some techniques - you could create an index on the first two\nchars of the field (should be selective enough to give an index scan),\nselect from that, and select the actual data with the like clause.\n\nCREATE INDEX idx_firstletters ON table (substr(field, 1, 2));\nCREATE INDEX idx_all ON table (field);\nSELECT field FROM (SELECT field FROM table WHERE substr(field, 1, 2) = 'DE')\nAS approx WHERE field LIKE 'DE%';\n\nAny good?\n\n", "msg_date": "Fri, 5 Nov 2004 18:34:23 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "Allen Landsidel <[email protected]> writes:\n> With that many rows, and a normal index on the field, postgres figures\n> the best option for say \"I%\" is not an index scan, but a sequential\n> scan on the table, with a filter -- quite obviously this is slow as\n> heck, and yes, I've run analyze several times and in fact have the\n> vacuum analyze automated.\n> With the partial index the index scan is used and the cost drops from\n> 0..2million to 0..9000 -- a vast improvement.\n\nHmm. This suggests to me that you're using a non-C locale and so a\nplain index *can't* be used for a LIKE query. Can you force it to use\nan indexscan by setting enable_seqscan = false? If not then you've got\na locale problem. As someone else pointed out, this can be worked\naround by creating an index with the right operator class.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Nov 2004 14:57:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior? " }, { "msg_contents": "On Fri, 05 Nov 2004 14:57:40 -0500, Tom Lane <[email protected]> wrote:\n> Allen Landsidel <[email protected]> writes:\n> > With that many rows, and a normal index on the field, postgres figures\n> > the best option for say \"I%\" is not an index scan, but a sequential\n> > scan on the table, with a filter -- quite obviously this is slow as\n> > heck, and yes, I've run analyze several times and in fact have the\n> > vacuum analyze automated.\n> > With the partial index the index scan is used and the cost drops from\n> > 0..2million to 0..9000 -- a vast improvement.\n> \n> Hmm. This suggests to me that you're using a non-C locale and so a\n> plain index *can't* be used for a LIKE query. 
Can you force it to use\n> an indexscan by setting enable_seqscan = false? If not then you've got\n> a locale problem. As someone else pointed out, this can be worked\n> around by creating an index with the right operator class.\n\nTom, disabling seqscan does cause it to use the index.\n\nWith seqscan enabled however, \"AB%\" will use the index, but \"A%\" will not.\n\nThe estimated cost for the query is much higher without the partial\nindexes than it is with them, and the actual runtime of the query is\ndefinitely longer without the partial indexes.\n\nThe locale is set in the postgresql.conf file as per default, with..\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'C' # locale for system error message strings\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\n\n-Allen\n", "msg_date": "Fri, 5 Nov 2004 15:36:28 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "On Fri, 5 Nov 2004 18:34:23 -0000, Matt Clark <[email protected]> wrote:\n> > With that many rows, and a normal index on the field,\n> > postgres figures the best option for say \"I%\" is not an index\n> > scan, but a sequential scan on the table, with a filter --\n> > quite obviously this is slow as heck, and yes, I've run\n> > analyze several times and in fact have the vacuum analyze automated.\n> \n> Ah, so \"like 'I%'\" uses a very slow seq scan, but \"like 'ABC%'\" uses an\n> ordinary index OK? If so then...\n\nThat is correct.\n\n> The planner would usually assume (from what Tom usually says) that 1/26\n> selectivity isn't worth doing an index scan for, but in your case it's wrong\n> (maybe because the rows are very big?)\n\nThe rows aren't big, it's a text field, a few ints, and a few\ntimestamps. That's all. The text field is the one we're querying on\nhere and lengthwise it's typically not over 32 chars.\n\n> You may be able to get the planner to go for an index scan on \"like 'I%'\" by\n> tweaking the foo_cost variables in postgresql.conf\n\nThat's true but I'd rather not, there are times when the seqscan will\nhave a faster net result (for other queries) and I'd rather not have\nthem suffer.\n\n> Or you could have the app rewrite \"like 'I%'\" to \"like 'IA%' or like 'IB%'\n> ... \", or do that as a stored proc.\n\nHoly cow. Yeah that seems a little outrageous. It would be cleaner\nlooking in \"\\d table\" than having all these indexes at the cost of\nhaving one very ugly query.\n\n> > With the partial index the index scan is used and the cost\n> > drops from 0..2million to 0..9000 -- a vast improvement.\n> \n> So there are really only 9000 rows out of 76 million starting with 'I'? How\n> about combining some techniques - you could create an index on the first two\n> chars of the field (should be selective enough to give an index scan),\n> select from that, and select the actual data with the like clause.\n\nI was talking about the cost, not the number of rows. About 74,000\nrows are returned but the query only takes about 8 seconds to run. 
--\nwith the partial index in place.\n\n> CREATE INDEX idx_firstletters ON table (substr(field, 1, 2));\n> CREATE INDEX idx_all ON table (field);\n> SELECT field FROM (SELECT field FROM table WHERE substr(field, 1, 2) = 'DE')\n> AS approx WHERE field LIKE 'DE%';\n\nThat looks like a pretty slick way to create an index, I didn't know\nthere was such a way to do it.. but It appears that this will not work\nwith queries where the WHERE clause wants to find substrings longer\nthan 2 characters.\n\nI will give it a try and see how it goes though I think I'm fairly\n\"settled\" on creating all the other indexes, unless there is some\nspecific reason I shouldn't -- they are used in all cases where the\nsubstring is >= 1 character, so long as I make sure the first where\nclause (or inner select in a subquery) is the most ambiguous from an\nindex standpoint.\n\nGoing back to the initial problem -- having only one large, complete\nindex on the table (no partial indexes) the query \"SELECT field FROM\ntable WHERE field LIKE 'A%';\" does not use the index. The query\n\"SELECT field FROM table WHERE field LIKE 'AB%';\" however, does use\nthe single large index if it exists.\n\nAdding the partial index \"CREATE INDEX idx_table_substrfield_A ON\ntable (field) WHERE field LIKE 'A%';\" causes all queries with\nsubstrings of any length to do index scans.provided I issue the query\nas:\n\nSELECT field FROM table WHERE field LIKE 'A%' AND field LIKE 'AB%';\n -- or even --\nSELECT field FROM table WHERE field LIKE 'A%';\n\nThe latter query, without the partial index described, does a\nsequential scan on the table itself instead of an index scan.\n\n-Allen\n", "msg_date": "Fri, 5 Nov 2004 16:02:43 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "Allen Landsidel <[email protected]> writes:\n> With seqscan enabled however, \"AB%\" will use the index, but \"A%\" will not.\n\n> The estimated cost for the query is much higher without the partial\n> indexes than it is with them, and the actual runtime of the query is\n> definitely longer without the partial indexes.\n\nOK. This suggests that the planner is drastically misestimating\nthe selectivity of the 'A%' clause, which seems odd to me since in\nprinciple it could get that fairly well from the ANALYZE histogram.\nBut it could well be that you need to increase the resolution of the\nhistogram --- see ALTER TABLE SET STATISTICS.\n\nDid you ever show us EXPLAIN ANALYZE results for this query?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Nov 2004 16:08:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior? " }, { "msg_contents": "On Fri, 05 Nov 2004 16:08:56 -0500, Tom Lane <[email protected]> wrote:\n\n\n> Allen Landsidel <[email protected]> writes:\n> > With seqscan enabled however, \"AB%\" will use the index, but \"A%\" will not.\n>\n> > The estimated cost for the query is much higher without the partial\n> > indexes than it is with them, and the actual runtime of the query is\n> > definitely longer without the partial indexes.\n>\n> OK. 
This suggests that the planner is drastically misestimating\n> the selectivity of the 'A%' clause, which seems odd to me since in\n> principle it could get that fairly well from the ANALYZE histogram.\n> But it could well be that you need to increase the resolution of the\n> histogram --- see ALTER TABLE SET STATISTICS.\n\nI will look into this.\n\n>\n> Did you ever show us EXPLAIN ANALYZE results for this query?\n\nNo, I didn't. I am running it now without the partial index on to\ngive you the results but it's (the 'A%' problem query) been running\npretty much since I got this message (an hour ago) and is still not\nfinished.\n\nThe EXPLAIN results without the ANALYZE will have to suffice until\nit's done, I can readd the index, and run it again, so you have both\nto compare to.\n\nFirst two queries run where both the main index, and the 'A%' index exist:\n\n-- QUERY 1\nsearch=# explain\nsearch-# SELECT test_name FROM test WHERE test_name LIKE 'A%';\n QUERY PLAN\n-------------------------------------------------------------------------------------------\nIndex Scan using test_name_idx_a on \"test\" (cost=0.00..8605.88\nrows=391208 width=20)\n Index Cond: ((test_name >= 'A'::text) AND (test_name < 'B'::text))\n Filter: (test_name ~~ 'A%'::text)\n(3 rows)\n\nTime: 16.507 ms\n\n-- QUERY 2\nsearch=# explain\nsearch-# SELECT test_name FROM test WHERE test_name LIKE 'A%' AND\ntest_name LIKE 'AB%';\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using test_name_idx_a on \"test\" (cost=0.00..113.79\nrows=28 width=20)\n Index Cond: ((test_name >= 'A'::text) AND (test_name < 'B'::text)\nAND (test_name >= 'AB'::text) AND (test_name < 'AC'::text))\n Filter: ((test_name ~~ 'A%'::text) AND (test_name ~~ 'AB%'::text))\n(3 rows)\n\nTime: 3.197 ms\n\nOk, now the same two queries after a DROP INDEX test_name_idx_a;\n\nsearch=# explain\nsearch-# SELECT test_name FROM test WHERE test_name LIKE 'A%';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\nIndex Scan using test_name_unique on \"test\" (cost=0.00..1568918.66\nrows=391208 width=20)\n Index Cond: ((test_name >= 'A'::text) AND (test_name < 'B'::text))\n Filter: (test_name ~~ 'A%'::text)\n(3 rows)\n\nTime: 2.470 ms\n\nsearch=# explain\nsearch-# SELECT test_name FROM test WHERE test_name LIKE 'AB%';\n QUERY PLAN\n-------------------------------------------------------------------------------------------\nIndex Scan using test_name_unique on \"test\" (cost=0.00..20379.49\nrows=5081 width=20)\n Index Cond: ((test_name >= 'AB'::text) AND (test_name < 'AC'::text))\n Filter: (test_name ~~ 'AB%'::text)\n(3 rows)\n\nTime: 2.489 ms\n\n------------------\nCopying just the costs you can see the vast difference...\nIndex Scan using test_name_unique on \"test\" (cost=0.00..1568918.66\nrows=391208 width=20)\nIndex Scan using test_name_unique on \"test\" (cost=0.00..20379.49\nrows=5081 width=20)\n\nvs\n\nIndex Scan using test_name_idx_a on \"test\" (cost=0.00..8605.88\nrows=391208 width=20)\nIndex Scan using test_name_idx_a on \"test\" (cost=0.00..113.79\nrows=28 width=20)\n\nLastly no, neither of these row guesstimates is correct.. 
I'll get\nback and tell you how much they're off by if it's important, once this\nquery is done.\n\nThe odd thing is it used the index scan here each time -- that has not\nalways been the case with the main unique index, it's trying to make a\nliar out of me heh.\n\nI'm used to the estimates and plan changing from one vacuum analyze to\nthe next, even without any inserts or updates between.. the index scan\nis always used however when I have the partial indexes in place, and\nsomething like..\n\nCREATE TEMP TABLE t1 AS\n SELECT field FROM table\n WHERE field LIKE 'A%'\n AND field LIKE 'AA%';\n\nruns in 6-8 seconds as well, with a bit under 100k records.\n\n-Allen\n", "msg_date": "Fri, 5 Nov 2004 17:40:23 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "\n>>So there are really only 9000 rows out of 76 million starting with 'I'? How\n>>about combining some techniques - you could create an index on the first two\n>>chars of the field (should be selective enough to give an index scan),\n>>select from that, and select the actual data with the like clause.\n>> \n>>\n>\n>I was talking about the cost, not the number of rows. About 74,000\n>rows are returned but the query only takes about 8 seconds to run. --\n> \n>\nWell, 74000/76000000 ~= 0.1%, way less than 1/26, so no surprise that an \nindexscan is better, and also no surprise that the planner can't know \nthat I is such an uncommon initial char.\n\n>with the partial index in place.\n>\n> \n>\n>>CREATE INDEX idx_firstletters ON table (substr(field, 1, 2));\n>>CREATE INDEX idx_all ON table (field);\n>>SELECT field FROM (SELECT field FROM table WHERE substr(field, 1, 2) = 'DE')\n>>AS approx WHERE field LIKE 'DE%';\n>> \n>>\n>\n>That looks like a pretty slick way to create an index, I didn't know\n>there was such a way to do it.. but It appears that this will not work\n>with queries where the WHERE clause wants to find substrings longer\n>than 2 characters.\n> \n>\nI don't see why not, it just uses the functional index to grap the \n1/(ascii_chars^2) of the rows that are of obvious interest, and then \nuses the standard index to filter that set.. Where it won't work is \nwhere you just want one initial char! Which is why I suggested the \nsilly query rewrite...\n\n>Going back to the initial problem -- having only one large, complete\n>index on the table (no partial indexes) the query \"SELECT field FROM\n>table WHERE field LIKE 'A%';\" does not use the index. The query\n>\"SELECT field FROM table WHERE field LIKE 'AB%';\" however, does use\n>the single large index if it exists.\n>\n> \n>\nIf you were planning the query, what would you do? Assuming we're \ntalking about A-Z as possible first chars, and assuming we don't know \nthe distribution of those chars, then we have to assume 1/26 probability \nof each char, so a seq scan makes sense. 
Whereas like 'JK%' should only \npull 1/500 rows.\n\n>Adding the partial index \"CREATE INDEX idx_table_substrfield_A ON\n>table (field) WHERE field LIKE 'A%';\" causes all queries with\n>substrings of any length to do index scans.provided I issue the query\n>as:\n>\n>SELECT field FROM table WHERE field LIKE 'A%' AND field LIKE 'AB%';\n> -- or even --\n>SELECT field FROM table WHERE field LIKE 'A%';\n>\n>The latter query, without the partial index described, does a\n>sequential scan on the table itself instead of an index scan.\n> \n>\nYes, because (I assume, Tom will no doubt clarify/correct), by creating \nthe partial indices you create a lot more information about the \ndistribution of the first char - either that, or the planner simply \nalways uses an exactly matching partial index if available.\n\nI _think_ that creating 26 partial indexes on '?%' is essentially the \nsame thing as creating one functional index on substr(field,1,1), just \nmessier, unless the partial indexes cause the planner to do something \nspecial...\n\nM\n", "msg_date": "Sat, 06 Nov 2004 00:14:15 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "Matt Clark <[email protected]> writes:\n> Well, 74000/76000000 ~= 0.1%, way less than 1/26, so no surprise that an \n> indexscan is better, and also no surprise that the planner can't know \n> that I is such an uncommon initial char.\n\nBut it *can* know that, at least given adequate ANALYZE statistics.\nI'm pretty convinced that the basic answer to Allen's problem is to\nincrease the histogram size. How large he needs to make it is not\nclear --- obviously his data distribution is not uniform, but I don't\nhave a fix on how badly non-uniform.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Nov 2004 23:04:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior? " }, { "msg_contents": "On Fri, 05 Nov 2004 23:04:23 -0500, Tom Lane <[email protected]> wrote:\n> Matt Clark <[email protected]> writes:\n> > Well, 74000/76000000 ~= 0.1%, way less than 1/26, so no surprise that an\n> > indexscan is better, and also no surprise that the planner can't know\n> > that I is such an uncommon initial char.\n> \n> But it *can* know that, at least given adequate ANALYZE statistics.\n> I'm pretty convinced that the basic answer to Allen's problem is to\n> increase the histogram size. How large he needs to make it is not\n> clear --- obviously his data distribution is not uniform, but I don't\n> have a fix on how badly non-uniform.\n> \n\nTom just an update, it's now 2am.. several hours since I started that\nEXPLAIN ANALYZE and it still hasn't finished, so I've aborted it. I\nwill do the example with the more precise substring instead to\nillustrate the performance differences, both with and without the\nsubstring index and report back here.\n\nI'm also interested in something someone else posted, namely that the\n36 indexes I have, \"A%\" through \"Z%\" and \"0%\" through \"9%\" could be\nreplaced with a single index like:\n\n\"CREATE INDEX idx_table_field_substr ON table substr(field, 1, 1);\"\n\nI'm wondering, histogram and other information aside, will this\nfunction as well (or better) than creating all the individual indexes?\n", "msg_date": "Sat, 6 Nov 2004 02:27:25 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" 
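As a side note, the single-index statement Allen quotes above is missing the parentheses around the expression. A hedged sketch of the expression-index alternative, again with "mytable"/"field" as placeholder names; the planner will only consider such an index when the query repeats the indexed expression verbatim:

CREATE INDEX idx_field_first_char ON mytable (substr(field, 1, 1));

SELECT field
FROM mytable
WHERE substr(field, 1, 1) = 'A'    -- must match the indexed expression
  AND field LIKE 'AA%';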
}, { "msg_contents": "Ok, you thought maybe this thread died or got abandoned in the face of\nall the senseless trolling and spam going on.. you were wrong.. ;)\n\nI thought though I'd start over trying to explain what's going on. \nI've gone through some dumps, and recreation of the database with some\ndifferent filesystem options and whatnot, and starting over fresh\nhere's the situation.\n\nFirst, the structure.\n\nCREATE TABLE testtable (\n nid serial UNIQUE NOT NULL,\n sname text NOT NULL,\n iother int4\n);\n\nCREATE UNIQUE INDEX idx_sname_unique ON testtable (sname);\n\n-----\n\nWith the above, the query \"SELECT sname FROM testtable WHERE sname\nLIKE 'A%';\" DOES use an index scan on idx_sname_unique -- sometimes. \nOther times, the planner thinks a sequential scan would be better.\n\nThe index is large. There are over 70 million rows in this table. \nThe estimated cost and so forth from EXPLAIN on the above query is way\noff as well, but I expect that to be the case considering the size of\nthe table -- perhaps there is a tunable in the statistics gathering\nbackend ot fix this?\n\nMy goal was to obviously make queries of the above type, as well as\nmore refined ones such as \"... LIKE 'AB%';\" faster.\n\nThis goal in mind, I thought that creating several indexes (36 of\nthem) would speed things up -- one index per alphanumeric start\ncharacter, via..\n\nCREATE INDEX idx_sname_suba ON testtable (sname) WHERE sname LIKE 'A%';\nCREATE INDEX idx_sname_subb ON testtable (sname) WHERE sname LIKE 'B%';\n...\nCREATE INDEX idx_sname_subz ON testtable (sname) WHERE sname LIKE 'Z%';\n\n(also including 0..9)\n\nI've wracked my brain trying to come up with other ways of doing this,\nincluding partitioning the table, and trying the suggestions here such\nas \"substr(1,1)\" in the index creation instead of creating many\ndistinct indexes.\n\nNone of these seems to speed up the queries enough to make them\n\"acceptable\" when it comes to runtimes. My data from before was\nsomehow in error.. not sure why. At this point, using one index vs.\nthe other the runtimes are about the same.\n\nsearch=# explain analyze\nsearch-# SELECT sname FROM\nsearch-# (SELECT sname FROM testtable WHERE sname LIKE 'A%') AS subq\nsearch-# WHERE sname LIKE 'AA%';\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sname_a on \"testtable\" (cost=0.00..189.41 rows=47\nwidth=20) (actual time=16.219..547053.251 rows=74612 loops=1)\n Index Cond: ((sname >= 'A'::text) AND (sname < 'B'::text) AND\n(sname >= 'AA'::text) AND (sname < 'AB'::text))\n Filter: ((sname ~~ 'A%'::text) AND (sname ~~ 'AA%'::text))\n Total runtime: 547454.939 ms\n(4 rows)\n\nTime: 547458.216 ms\n\n\nsearch=# explain analyze\nsearch-# SELECT sname FROM testtable WHERE sname LIKE 'AA%';\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sname_unique on \"testtable\" (cost=0.00..34453.74\nrows=8620 width=20) (actual time=77.004..537065.079 rows=74612\nloops=1)\n Index Cond: ((sname >= 'AA'::text) AND (sname < 'AB'::text))\n Filter: (sname ~~ 'AA%'::text)\n Total runtime: 537477.737 ms\n(4 rows)\n\nTime: 537480.571 ms\n", "msg_date": "Thu, 11 Nov 2004 11:21:13 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" 
}, { "msg_contents": "Allen,\n\n> Ok, you thought maybe this thread died or got abandoned in the face of\n> all the senseless trolling and spam going on.. you were wrong.. ;)\n>\n> I thought though I'd start over trying to explain what's going on.\n> I've gone through some dumps, and recreation of the database with some\n> different filesystem options and whatnot, and starting over fresh\n> here's the situation.\n\nI can't find the beginning of this thread. What's your sort_mem? \nShared_buffers?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 11 Nov 2004 10:52:43 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "On Thu, 11 Nov 2004 10:52:43 -0800, Josh Berkus <[email protected]> wrote:\n> Allen,\n> \n> > Ok, you thought maybe this thread died or got abandoned in the face of\n> > all the senseless trolling and spam going on.. you were wrong.. ;)\n> >\n> > I thought though I'd start over trying to explain what's going on.\n> > I've gone through some dumps, and recreation of the database with some\n> > different filesystem options and whatnot, and starting over fresh\n> > here's the situation.\n> \n> I can't find the beginning of this thread. What's your sort_mem?\n> Shared_buffers?\n\nCurrently sort_mem is 64MB and shared_buffers is 256MB.\n\nThe box is a dual 800 with 2GB physical, running FreeBSD 4.10-STABLE,\nsingle U2W SCSI hdd.\n", "msg_date": "Thu, 11 Nov 2004 14:30:57 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "Allen Landsidel <[email protected]> writes:\n\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using sname_unique on \"testtable\" (cost=0.00..34453.74\n> rows=8620 width=20) (actual time=77.004..537065.079 rows=74612\n> loops=1)\n> Index Cond: ((sname >= 'AA'::text) AND (sname < 'AB'::text))\n> Filter: (sname ~~ 'AA%'::text)\n> Total runtime: 537477.737 ms\n> (4 rows)\n> \n> Time: 537480.571 ms\n\nNothing you're going to do to the query is going to come up with a more\neffective plan than this. It's using the index after all. It's never going to\nbe lightning fast because it has to process 75k rows.\n\nHowever 75k rows shouldn't be taking nearly 10 minutes. It should be taking\nabout 10 seconds.\n\nThe 77ms before finding the first record is a bit suspicious. Have you\nvacuumed this table regularly? Try a VACUUM FULL VERBOSE, and send the\nresults. You might try to REINDEX it as well, though I doubt that would help.\n\nActually you might consider clustering the table on sname_unique. That would\naccomplish the same thing as the VACUUM FULL command and also speed up the\nindex scan. And the optimizer knows (if you analyze afterwards) it so it\nshould be more likely to pick the index scan. But currently you have to rerun\ncluster periodically.\n\n-- \ngreg\n\n", "msg_date": "11 Nov 2004 15:49:46 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" 
}, { "msg_contents": "On 11 Nov 2004 15:49:46 -0500, Greg Stark <[email protected]> wrote:\n> Allen Landsidel <[email protected]> writes:\n> \n> \n> \n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using sname_unique on \"testtable\" (cost=0.00..34453.74\n> > rows=8620 width=20) (actual time=77.004..537065.079 rows=74612\n> > loops=1)\n> > Index Cond: ((sname >= 'AA'::text) AND (sname < 'AB'::text))\n> > Filter: (sname ~~ 'AA%'::text)\n> > Total runtime: 537477.737 ms\n> > (4 rows)\n> >\n> > Time: 537480.571 ms\n> \n> Nothing you're going to do to the query is going to come up with a more\n> effective plan than this. It's using the index after all. It's never going to\n> be lightning fast because it has to process 75k rows.\n> \n> However 75k rows shouldn't be taking nearly 10 minutes. It should be taking\n> about 10 seconds.\n\nThat's my feeling as well, I thought the index was to blame because it\nwill be quite large, possibly large enough to not fit in memory nor be\nquickly bursted up.\n\n> The 77ms before finding the first record is a bit suspicious. Have you\n> vacuumed this table regularly? Try a VACUUM FULL VERBOSE, and send the\n> results. You might try to REINDEX it as well, though I doubt that would help.\n\nThis table is *brand spanking new* for lack of a better term. I have\nthe data for it in a CSV. I load the CSV up which takes a bit, then\ncreate the indexes, do a vacuum analyze verbose, and then posted the\nresults above. I don't think running vacuum a more times is going to\nchange things, at least not without tweaking config settings that\naffect vacuum. Not a single row has been inserted or altered since the\ninitial load.. it's just a test.\n\nI can't give vacuum stats right now because the thing is reloading\n(again) with different newfs settings -- something I figure I have the\ntime to fiddle with now, and seldom do at other times. These numbers\nthough don't change much between 8K on up to 64K 'cluster' sizes. I'm\ntrying it now with 8K page sizes, with 8K \"minimum fragment\" sizes. \nShould speed things up a tiny bit but not enough to really affect this\nquery.\n\nDo you still see a need to have the output from the vacuum?\n\n> Actually you might consider clustering the table on sname_unique. That would\n> accomplish the same thing as the VACUUM FULL command and also speed up the\n> index scan. And the optimizer knows (if you analyze afterwards) it so it\n> should be more likely to pick the index scan. But currently you have to rerun\n> cluster periodically.\n\nClustering is really unworkable in this situation. It would work now,\nin this limited test case, but using it if this were to go into\nproduction is unrealistic. It would have to happen fairly often since\nthis table is updated frequently, which will break the clustering\nquickly with MVCC.\n\nRunning it often.. well.. it has 70M+ rows, and the entire table is\ncopied, reordered, and rewritten.. so that's a lot of 'scratch space'\nneeded. Finally, clustering locks the table..\n\nSomething I'd already considered but quickly ruled out because of\nthese reasons..\n\nMore ideas are welcome though. ;)\n\n-Allen\n", "msg_date": "Thu, 11 Nov 2004 16:12:51 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" 
}, { "msg_contents": "Allen Landsidel <[email protected]> writes:\n> Clustering is really unworkable in this situation.\n\nNonetheless, please do it in your test scenario, so we can see if it has\nany effect or not.\n\nThe speed you're getting works out to about 7.2 msec/row, which would be\nabout right if every single row fetch caused a disk seek, which seems\nimprobable unless the table is just huge compared to your available RAM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2004 16:41:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior? " }, { "msg_contents": ">>>-----------------------------------------------------------------------------------------------------------------------------------------------\n>>> Index Scan using sname_unique on \"testtable\" (cost=0.00..34453.74\n>>>rows=8620 width=20) (actual time=77.004..537065.079 rows=74612\n>>>loops=1)\n>>> Index Cond: ((sname >= 'AA'::text) AND (sname < 'AB'::text))\n>>> Filter: (sname ~~ 'AA%'::text)\n>>> Total runtime: 537477.737 ms\n>>>(4 rows)\n>>>\n>>>Time: 537480.571 ms\n>>\n>>Nothing you're going to do to the query is going to come up with a more\n>>effective plan than this. It's using the index after all. It's never going to\n>>be lightning fast because it has to process 75k rows.\n>>\n>>However 75k rows shouldn't be taking nearly 10 minutes. It should be taking\n>>about 10 seconds.\n\nI am confused about this statement. I have a table with 1.77 million \nrows that I use gist indexes on (TSearch) and I can pull out of it in \nless than 2 seconds.\n\nAre you saying it should be taking 10 seconds because of the type of \nplan? 10 seconds seems like an awfullong time for this.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> \n> That's my feeling as well, I thought the index was to blame because it\n> will be quite large, possibly large enough to not fit in memory nor be\n> quickly bursted up.\n> \n> \n>>The 77ms before finding the first record is a bit suspicious. Have you\n>>vacuumed this table regularly? Try a VACUUM FULL VERBOSE, and send the\n>>results. You might try to REINDEX it as well, though I doubt that would help.\n> \n> \n> This table is *brand spanking new* for lack of a better term. I have\n> the data for it in a CSV. I load the CSV up which takes a bit, then\n> create the indexes, do a vacuum analyze verbose, and then posted the\n> results above. I don't think running vacuum a more times is going to\n> change things, at least not without tweaking config settings that\n> affect vacuum. Not a single row has been inserted or altered since the\n> initial load.. it's just a test.\n> \n> I can't give vacuum stats right now because the thing is reloading\n> (again) with different newfs settings -- something I figure I have the\n> time to fiddle with now, and seldom do at other times. These numbers\n> though don't change much between 8K on up to 64K 'cluster' sizes. I'm\n> trying it now with 8K page sizes, with 8K \"minimum fragment\" sizes. \n> Should speed things up a tiny bit but not enough to really affect this\n> query.\n> \n> Do you still see a need to have the output from the vacuum?\n> \n> \n>>Actually you might consider clustering the table on sname_unique. That would\n>>accomplish the same thing as the VACUUM FULL command and also speed up the\n>>index scan. And the optimizer knows (if you analyze afterwards) it so it\n>>should be more likely to pick the index scan. 
But currently you have to rerun\n>>cluster periodically.\n> \n> \n> Clustering is really unworkable in this situation. It would work now,\n> in this limited test case, but using it if this were to go into\n> production is unrealistic. It would have to happen fairly often since\n> this table is updated frequently, which will break the clustering\n> quickly with MVCC.\n> \n> Running it often.. well.. it has 70M+ rows, and the entire table is\n> copied, reordered, and rewritten.. so that's a lot of 'scratch space'\n> needed. Finally, clustering locks the table..\n> \n> Something I'd already considered but quickly ruled out because of\n> these reasons..\n> \n> More ideas are welcome though. ;)\n> \n> -Allen\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Thu, 11 Nov 2004 13:48:27 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior?" }, { "msg_contents": "On Thu, 11 Nov 2004 16:41:51 -0500, Tom Lane <[email protected]> wrote:\n> Allen Landsidel <[email protected]> writes:\n> > Clustering is really unworkable in this situation.\n> \n> Nonetheless, please do it in your test scenario, so we can see if it has\n> any effect or not.\n\nIt did not, not enough to measure anyway, which does strike me as\npretty odd.. Here's what I've got, after the cluster. Note that this\nis also on a new filesystem, as I said, have been taking the chance to\nexperiment. The other two results were from a filesystem with 64KB\nblock size, 8KB fragment size. This one is 8KB and 8KB.\n\nsearch=# explain analyze\nsearch-# SELECT sname FROM testtable WHERE sname LIKE 'AA%';\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sname_unique on \"testtable\" (cost=0.00..642138.83\nrows=160399 width=20) (actual time=0.088..514438.470 rows=74612\nloops=1)\n Index Cond: ((sname >= 'AA'::text) AND (sname < 'AB'::text))\n Filter: (sname ~~ 'AA%'::text)\n Total runtime: 514818.837 ms\n(4 rows)\n\nTime: 514821.993 ms\n\n> \n> The speed you're getting works out to about 7.2 msec/row, which would be\n> about right if every single row fetch caused a disk seek, which seems\n> improbable unless the table is just huge compared to your available RAM.\n> \n> regards, tom lane\n\nThe CSV for the table is \"huge\" but not compared to RAM. The dump of\nthe database in native/binary format is ~1GB; the database currently\nhas only this table and the system stuff.\n\nThe time to fetch the first row was much faster with the cluster in\nplace, but after that, it's pretty much the same. 537s vs. 515s\n", "msg_date": "Fri, 12 Nov 2004 15:12:35 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" 
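For comparison with Tom's earlier per-row estimate, the post-cluster numbers work out to nearly the same figure:

    514,818.837 ms / 74,612 rows ≈ 6.9 ms per row    (vs. ≈ 7.2 ms per row before clustering)

which is still in the range Tom associated with one random disk seek per row fetched.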
}, { "msg_contents": "Allen Landsidel <[email protected]> writes:\n> On Thu, 11 Nov 2004 16:41:51 -0500, Tom Lane <[email protected]> wrote:\n>> Allen Landsidel <[email protected]> writes:\n>>> Clustering is really unworkable in this situation.\n>> \n>> Nonetheless, please do it in your test scenario, so we can see if it has\n>> any effect or not.\n\n> It did not, not enough to measure anyway, which does strike me as\n> pretty odd.\n\nMe too. Maybe we are barking up the wrong tree entirely, because I\nreally expected to see a significant change.\n\nLets start from first principles. While you are running this query,\nwhat sort of output do you get from \"vmstat 1\"? I'm wondering if it's\nI/O bound or CPU bound ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 2004 17:35:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange (?) Index behavior? " }, { "msg_contents": "On Fri, 12 Nov 2004 17:35:00 -0500, Tom Lane <[email protected]> wrote:\n> \n> \n> Allen Landsidel <[email protected]> writes:\n> > On Thu, 11 Nov 2004 16:41:51 -0500, Tom Lane <[email protected]> wrote:\n> >> Allen Landsidel <[email protected]> writes:\n> >>> Clustering is really unworkable in this situation.\n> >>\n> >> Nonetheless, please do it in your test scenario, so we can see if it has\n> >> any effect or not.\n> \n> > It did not, not enough to measure anyway, which does strike me as\n> > pretty odd.\n> \n> Me too. Maybe we are barking up the wrong tree entirely, because I\n> really expected to see a significant change.\n> \n> Lets start from first principles. While you are running this query,\n> what sort of output do you get from \"vmstat 1\"? I'm wondering if it's\n> I/O bound or CPU bound ...\n\nI am running systat -vmstat 1 constantly on the box.. it's almost\nalways I/O bound.. and the numbers are far lower than what I expect\nthem to be, under 1MB/s. bonnie++ shows decent scores so.. 
I'm not\nsure what's goin on.\n\n[allen@dbtest01 /mnt_db/work#]bonnie++ -d /mnt_db/work -c 2 -u nobody\nUsing uid:65534, gid:65534.\nWriting a byte at a time...done\nWriting intelligently...done\nRewriting...done\nReading a byte at a time...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\nCreate files in sequential order...done.\nStat files in sequential order...done.\nDelete files in sequential order...done.\nCreate files in random order...done.\nStat files in random order...done.\nDelete files in random order...done.\nVersion 1.93c ------Sequential Output------ --Sequential Input- --Random-\nConcurrency 2 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\ndbtest01.distr 300M 100 98 17426 21 17125 18 197 98 182178 99 2027 42\nLatency 96208us 594ms 472ms 56751us 15691us 3710ms\nVersion 1.93c ------Sequential Create------ --------Random Create--------\ndbtest01.distribute -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 12932 90 +++++ +++ 20035 98 11912 91 +++++ +++ 13074 93\nLatency 26691us 268us 18789us 26755us 13586us 25039us\n1.93c,1.93c,dbtest01.distributedmail.com,2,1100269160,300M,,100,98,17426,21,17125,18,197,98,182178,99,2027,42,16,,,,,12932,90,+++++,+++,20035,98,11912,91,+++++,+++,13074,93,96208us,594ms,472ms,56751us,15691us,3710ms,26691us,268us,18789us,26755us,13586us,25039us\n\nLooking at these numbers, obviously things could be a bit quicker, but\nit doesn't look slow enough to my eyes or experience to account for\nwhat I'm seeing with the query performance..\n\nDuring the query, swap doesn't get touched, the cpus are mostly idle,\nbut the disk activity seems to be maxed at under 1MB/s, 100% busy.\n\nTo refresh and extend..\nThe box is FreeBSD 4.10-STABLE\nDual 800MHz PIII's, 2GB of memory\n\nRelevent kernel options:\n\nmaxusers 512\n...\noptions SYSVSHM\noptions SHMMAXPGS=262144\noptions SHMSEG=512\noptions SHMMNI=512\noptions SYSVSEM\noptions SEMMNI=512\noptions SEMMNS=1024\noptions SEMMNU=512\noptions SEMMAP=512\n\n...\n\nnothing custom going on in /etc/sysctl.conf\n\nFilesystem is..\n/dev/da1s1e on /mnt_db (ufs, local, noatime, soft-updates)\n\nAnd, from my postgresql.conf..\n\nshared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\nsort_mem = 65536 # min 64, size in KB\nvacuum_mem = 65536 # min 1024, size in KB\n...\nmax_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min 100, ~50 bytes each\n...\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = false\nstats_row_level = true\nstats_reset_on_server_start = true\n\nThanks for helping me out with this Tom and everyone else. I suppose\nit's possible that something could be physically wrong with the drive,\nbut I'm not seeing anything in syslog. I'm going to poke around with\ncamcontrol looking for any bad sectors / remapped stuff while I wait\nfor replies.\n\n-Allen\n", "msg_date": "Fri, 12 Nov 2004 19:26:39 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" 
}, { "msg_contents": "Sorry if I'm contributing more noise to the signal here, just thought\nI'd repost this one to the list since it may have gotten lost in all\nthe garbage from the guy unhappy about the usenet thing..\n\n\n---------- Forwarded message ----------\nFrom: Allen Landsidel <[email protected]>\nDate: Fri, 12 Nov 2004 19:26:39 -0500\nSubject: Re: [PERFORM] Strange (?) Index behavior?\nTo: [email protected]\n\n\nOn Fri, 12 Nov 2004 17:35:00 -0500, Tom Lane <[email protected]> wrote:\n\n\n>\n>\n> Allen Landsidel <[email protected]> writes:\n> > On Thu, 11 Nov 2004 16:41:51 -0500, Tom Lane <[email protected]> wrote:\n> >> Allen Landsidel <[email protected]> writes:\n> >>> Clustering is really unworkable in this situation.\n> >>\n> >> Nonetheless, please do it in your test scenario, so we can see if it has\n> >> any effect or not.\n>\n> > It did not, not enough to measure anyway, which does strike me as\n> > pretty odd.\n>\n> Me too. Maybe we are barking up the wrong tree entirely, because I\n> really expected to see a significant change.\n>\n> Lets start from first principles. While you are running this query,\n> what sort of output do you get from \"vmstat 1\"? I'm wondering if it's\n> I/O bound or CPU bound ...\n\nI am running systat -vmstat 1 constantly on the box.. it's almost\nalways I/O bound.. and the numbers are far lower than what I expect\nthem to be, under 1MB/s. bonnie++ shows decent scores so.. I'm not\nsure what's goin on.\n\n[allen@dbtest01 /mnt_db/work#]bonnie++ -d /mnt_db/work -c 2 -u nobody\nUsing uid:65534, gid:65534.\nWriting a byte at a time...done\nWriting intelligently...done\nRewriting...done\nReading a byte at a time...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\nCreate files in sequential order...done.\nStat files in sequential order...done.\nDelete files in sequential order...done.\nCreate files in random order...done.\nStat files in random order...done.\nDelete files in random order...done.\nVersion 1.93c ------Sequential Output------ --Sequential Input- --Random-\nConcurrency 2 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\ndbtest01.distr 300M 100 98 17426 21 17125 18 197 98 182178 99 2027 42\nLatency 96208us 594ms 472ms 56751us 15691us 3710ms\nVersion 1.93c ------Sequential Create------ --------Random Create--------\ndbtest01.distribute -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 12932 90 +++++ +++ 20035 98 11912 91 +++++ +++ 13074 93\nLatency 26691us 268us 18789us 26755us 13586us 25039us\n1.93c,1.93c,dbtest01.distributedmail.com,2,1100269160,300M,,100,98,17426,21,17125,18,197,98,182178,99,2027,42,16,,,,,12932,90,+++++,+++,20035,98,11912,91,+++++,+++,13074,93,96208us,594ms,472ms,56751us,15691us,3710ms,26691us,268us,18789us,26755us,13586us,25039us\n\nLooking at these numbers, obviously things could be a bit quicker, but\nit doesn't look slow enough to my eyes or experience to account for\nwhat I'm seeing with the query performance..\n\nDuring the query, swap doesn't get touched, the cpus are mostly idle,\nbut the disk activity seems to be maxed at under 1MB/s, 100% busy.\n\nTo refresh and extend..\nThe box is FreeBSD 4.10-STABLE\nDual 800MHz PIII's, 2GB of memory\n\nRelevent kernel options:\n\nmaxusers 512\n...\noptions SYSVSHM\noptions SHMMAXPGS=262144\noptions SHMSEG=512\noptions SHMMNI=512\noptions SYSVSEM\noptions SEMMNI=512\noptions 
SEMMNS=1024\noptions SEMMNU=512\noptions SEMMAP=512\n\n...\n\nnothing custom going on in /etc/sysctl.conf\n\nFilesystem is..\n/dev/da1s1e on /mnt_db (ufs, local, noatime, soft-updates)\n\nAnd, from my postgresql.conf..\n\nshared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\nsort_mem = 65536 # min 64, size in KB\nvacuum_mem = 65536 # min 1024, size in KB\n...\nmax_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min 100, ~50 bytes each\n...\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = false\nstats_row_level = true\nstats_reset_on_server_start = true\n\nThanks for helping me out with this Tom and everyone else. I suppose\nit's possible that something could be physically wrong with the drive,\nbut I'm not seeing anything in syslog. I'm going to poke around with\ncamcontrol looking for any bad sectors / remapped stuff while I wait\nfor replies.\n\n-Allen\n", "msg_date": "Mon, 15 Nov 2004 17:22:36 -0500", "msg_from": "Allen Landsidel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange (?) Index behavior?" } ]
[ { "msg_contents": "Does anybody have any experiences with postgresql 7.4+ running on amd-64\nin 64 bit mode? Specifically, does it run quicker and if so do the\nperformance benefits justify the extra headaches running 64 bit linux?\n\nRight now I'm building a dual Opteron 246 with 4 gig ddr400. \n\nMerlin\n", "msg_date": "Fri, 5 Nov 2004 10:33:38 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql amd-64" }, { "msg_contents": "I have two dual opteron 248's with 4g of ram each, 6x36G 15k rpm ultra\n320 scsi disks in hardware raid 5, and they are by far the fastest\nmachines I've user used. As far as this \"headache\" of using 64 bit\nLinux, I've experienced no such thing. I'm using gentoo on both\nmachines, which are dedicated for postgres 7.4 and replicated with\nslony. They're both quite fast and reliable. One machine even runs a\nsecondary instance of pg, pg 8 beta4 in this case, for development,\nwhich also runs quite well.\n\nDaniel\n\nMerlin Moncure wrote:\n\n>Does anybody have any experiences with postgresql 7.4+ running on amd-64\n>in 64 bit mode? Specifically, does it run quicker and if so do the\n>performance benefits justify the extra headaches running 64 bit linux?\n>\n>Right now I'm building a dual Opteron 246 with 4 gig ddr400. \n>\n>Merlin\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n-- \n\nDaniel Ceregatti - Programmer\nOmnis Network, LLC\n\nThe forest is safe because a lion lives therein and the lion is safe because\nit lives in a forest. Likewise the friendship of persons rests on mutual help.\n\t\t-- Laukikanyay.\n\n", "msg_date": "Fri, 05 Nov 2004 10:23:20 -0800", "msg_from": "Daniel Ceregatti <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql amd-64" }, { "msg_contents": "I'm hoping I'll have the opportunity to build a similar machine soon and am \nwondering about the choice of 64 bit distributions.\n\nGentoo is obviously a possibility but I'm also condsidering Debian. There is \nalso a 64 compile of redhat sources somewhere around, but I can't remember \nwhat they call it offhand.\n\nIf anyone has opinions about that, I'd be happy to hear.\n\nregards\nIain\n\n----- Original Message ----- \nFrom: \"Daniel Ceregatti\" <[email protected]>\nTo: \"Merlin Moncure\" <[email protected]>\nCc: <[email protected]>\nSent: Saturday, November 06, 2004 3:23 AM\nSubject: Re: [PERFORM] postgresql amd-64\n\n\n>I have two dual opteron 248's with 4g of ram each, 6x36G 15k rpm ultra\n> 320 scsi disks in hardware raid 5, and they are by far the fastest\n> machines I've user used. As far as this \"headache\" of using 64 bit\n> Linux, I've experienced no such thing. I'm using gentoo on both\n> machines, which are dedicated for postgres 7.4 and replicated with\n> slony. They're both quite fast and reliable. One machine even runs a\n> secondary instance of pg, pg 8 beta4 in this case, for development,\n> which also runs quite well.\n>\n> Daniel\n>\n> Merlin Moncure wrote:\n>\n>>Does anybody have any experiences with postgresql 7.4+ running on amd-64\n>>in 64 bit mode? 
Specifically, does it run quicker and if so do the\n>>performance benefits justify the extra headaches running 64 bit linux?\n>>\n>>Right now I'm building a dual Opteron 246 with 4 gig ddr400.\n>>\n>>Merlin\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faqs/FAQ.html\n>>\n>>\n>\n> -- \n>\n> Daniel Ceregatti - Programmer\n> Omnis Network, LLC\n>\n> The forest is safe because a lion lives therein and the lion is safe \n> because\n> it lives in a forest. Likewise the friendship of persons rests on mutual \n> help.\n> -- Laukikanyay.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match \n\n", "msg_date": "Mon, 8 Nov 2004 11:13:22 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql amd-64" }, { "msg_contents": "Hi Andrew,\n\nI had never heard of Ubuntu before, thanks for the tip.\n\nregards\niain\n----- Original Message ----- \nFrom: \"Andrew McMillan\" <[email protected]>\nTo: \"Iain\" <[email protected]>\nSent: Monday, November 08, 2004 12:51 PM\nSubject: Re: [PERFORM] postgresql amd-64\n\nOn Mon, 2004-11-08 at 11:13 +0900, Iain wrote:\n> I'm hoping I'll have the opportunity to build a similar machine soon and \n> am\n> wondering about the choice of 64 bit distributions.\n>\n> Gentoo is obviously a possibility but I'm also condsidering Debian. There \n> is\n> also a 64 compile of redhat sources somewhere around, but I can't remember\n> what they call it offhand.\n>\n> If anyone has opinions about that, I'd be happy to hear.\n\nHi Iain,\n\nWe are using Debian on a few dual-Opteron systems and a number of AMD64\ndesktop systems and it is working fine.\n\nWe are also using Ubuntu, which can be installed with the \"Custom\"\noptions for a server, is Debian based, and includes PostgreSQL in the\nbasic supported package set.\n\nDue to the better security support for AMD64 in Ubuntu we are likely to\nmigrate our server environments to that, at least until there is a more\nofficially support AMD64 port for Debian.\n\nCheers,\n Andrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Don't get to bragging.\n-------------------------------------------------------------------------\n\n\n", "msg_date": "Mon, 8 Nov 2004 16:36:38 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql amd-64" }, { "msg_contents": "Iain wrote:\n> I'm hoping I'll have the opportunity to build a similar machine soon and \n> am wondering about the choice of 64 bit distributions.\n> \n> Gentoo is obviously a possibility but I'm also condsidering Debian. \n> There is also a 64 compile of redhat sources somewhere around, but I \n> can't remember what they call it offhand.\n> \n\nRedHat's community OS is now called Fedora: http://fedora.redhat.com/\nThere's been two AMD64 releases of this OS, Fedora Core 1 and Fedora Core 2. 
\nCore 3 is just around the corner.\nI've been running FC2 x86_64 with kernel 2.6 as a desktop system for quite some \ntime now, with PostgreSQL 7.4.2 / 64bit installed.\nI find Fedora to be a really good Linux distro, continuing and improving upon \nthe fine tradition of RedHat's releases.\nYou can also get RedHat's commercial releases on AMD64; according to\nhttp://www.redhat.com/software/rhel/features/\nyou can also get a EM64T release.\n\n> If anyone has opinions about that, I'd be happy to hear.\n\n-- \nRadu-Adrian Popescu\nCSA, DBA, Developer\nAldrapay MD\nAldratech Ltd.\n+40213212243", "msg_date": "Mon, 08 Nov 2004 13:40:31 +0200", "msg_from": "Radu-Adrian Popescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql amd-64" } ]
[ { "msg_contents": "To me, these three queries seem identical... why doesn't the first one\n(simplest to understand and write) go the same speed as the third one?\n\nI'll I'm trying to do is get statistics for one day (in this case,\ntoday) summarized. Table has ~25M rows. I'm using postgres 7.3.? on\nrh linux 7.3 (note that i think the difference between the first two\nmight just be related to the data being in memory for the second\nquery).\n\n\n EXPLAIN ANALYZE\n select count(distinct sessionid) from usage_access where atime >\ndate_trunc('day', now());\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=933439.69..933439.69 rows=1 width=4) (actual\ntime=580350.43..580350.43 rows=1 loops=1)\n -> Seq Scan on usage_access (cost=0.00..912400.11 rows=8415831\nwidth=4) (actual time=580164.48..580342.21 rows=2964 loops=1)\n Filter: (atime > date_trunc('day'::text, now()))\n Total runtime: 580350.65 msec\n(4 rows)\n\n\n EXPLAIN ANALYZE\n select count(distinct sessionid) from (select * from usage_access\nwhere atime > date_trunc('day', now())) as temp;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=933439.69..933439.69 rows=1 width=4) (actual\ntime=348012.85..348012.85 rows=1 loops=1)\n -> Seq Scan on usage_access (cost=0.00..912400.11 rows=8415831\nwidth=4) (actual time=347960.53..348004.68 rows=2964 loops=1)\n Filter: (atime > date_trunc('day'::text, now()))\n Total runtime: 348013.10 msec\n(4 rows)\n\n\n EXPLAIN ANALYZE\n select count(distinct sessionid) from usage_access where atime\nbetween date_trunc('day', now()) and date_trunc('day', now()) + '1\nday'::interval;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=89324.98..89324.98 rows=1 width=4) (actual\ntime=27.84..27.84 rows=1 loops=1)\n -> Index Scan using usage_access_atime on usage_access \n(cost=0.00..89009.39 rows=126237 width=4) (actual time=0.51..20.37\nrows=2964 loops=1)\n Index Cond: ((atime >= date_trunc('day'::text, now())) AND\n(atime <= (date_trunc('day'::text, now()) + '1 day'::interval)))\n Total runtime: 28.11 msec\n(4 rows)\n\n-- \nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t\t| http://www.followers.net/portfolio/\n", "msg_date": "Fri, 5 Nov 2004 13:09:26 -0500", "msg_from": "Matt Nuzum <[email protected]>", "msg_from_op": true, "msg_subject": "What is the difference between these?" }, { "msg_contents": "Matt Nuzum wrote:\n\n>To me, these three queries seem identical... why doesn't the first one\n>(simplest to understand and write) go the same speed as the third one?\n> \n>\nIf you look at the explain output, you will notice that only the 3rd \nquery is using an Index Scan, where as the 1st and 2nd are doing a \nsequential scan over the entire table of 25M rows. My guess is that the \nproblem is related to outdated statistics on the atime column. If you \nnotice the 1st and 2nd queries estimate 8.4M rows returned at which \npoint a seq scan is the right choice, but the 3rd query using the \nbetween statement only estimates 127k rows which make the Index a better \noption. All of these queries only return 2964 rows so it looks like \nyour stats are out of date. 
Try running an analyze command right before \ndoing any of these queries and see what happens.\n\n>I'll I'm trying to do is get statistics for one day (in this case,\n>today) summarized. Table has ~25M rows. I'm using postgres 7.3.? on\n>rh linux 7.3 (note that i think the difference between the first two\n>might just be related to the data being in memory for the second\n>query).\n>\n>\n> EXPLAIN ANALYZE\n> select count(distinct sessionid) from usage_access where atime >\n>date_trunc('day', now());\n> QUERY PLAN \n>----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=933439.69..933439.69 rows=1 width=4) (actual\n>time=580350.43..580350.43 rows=1 loops=1)\n> -> Seq Scan on usage_access (cost=0.00..912400.11 rows=8415831\n>width=4) (actual time=580164.48..580342.21 rows=2964 loops=1)\n> Filter: (atime > date_trunc('day'::text, now()))\n> Total runtime: 580350.65 msec\n>(4 rows)\n>\n>\n> EXPLAIN ANALYZE\n> select count(distinct sessionid) from (select * from usage_access\n>where atime > date_trunc('day', now())) as temp;\n> QUERY PLAN \n>----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=933439.69..933439.69 rows=1 width=4) (actual\n>time=348012.85..348012.85 rows=1 loops=1)\n> -> Seq Scan on usage_access (cost=0.00..912400.11 rows=8415831\n>width=4) (actual time=347960.53..348004.68 rows=2964 loops=1)\n> Filter: (atime > date_trunc('day'::text, now()))\n> Total runtime: 348013.10 msec\n>(4 rows)\n>\n>\n> EXPLAIN ANALYZE\n> select count(distinct sessionid) from usage_access where atime\n>between date_trunc('day', now()) and date_trunc('day', now()) + '1\n>day'::interval;\n> QUERY PLAN \n>--------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=89324.98..89324.98 rows=1 width=4) (actual\n>time=27.84..27.84 rows=1 loops=1)\n> -> Index Scan using usage_access_atime on usage_access \n>(cost=0.00..89009.39 rows=126237 width=4) (actual time=0.51..20.37\n>rows=2964 loops=1)\n> Index Cond: ((atime >= date_trunc('day'::text, now())) AND\n>(atime <= (date_trunc('day'::text, now()) + '1 day'::interval)))\n> Total runtime: 28.11 msec\n>(4 rows)\n>\n> \n>\n", "msg_date": "Fri, 05 Nov 2004 14:44:17 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the difference between these?" }, { "msg_contents": "Matt Nuzum <[email protected]> writes:\n> To me, these three queries seem identical... why doesn't the first one\n> (simplest to understand and write) go the same speed as the third one?\n\nThis is the standard problem that the planner has to guess about the\nselectivity of inequalities involving non-constants (like now()).\nThe guesses are set up so that a one-sided inequality will use a\nseqscan while a range constraint will use an indexscan.\n\nSee the pgsql-performance archives for other ways of persuading it\nthat an indexscan is a good idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Nov 2004 15:32:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the difference between these? " } ]
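A condensed sketch of the two suggestions in the exchange above, assuming the same usage_access(atime, sessionid) table and its index on atime; whether ANALYZE alone helps depends on the point about non-constant comparison values, while the closed range is the rewrite the thread itself shows using the index:

-- Refresh the planner's row estimates, as suggested above.
ANALYZE usage_access;

-- Restating the open-ended inequality as a bounded range gives the planner
-- a small estimated fraction of the table, which is what lets it choose
-- the index scan on atime.
SELECT count(DISTINCT sessionid)
  FROM usage_access
 WHERE atime BETWEEN date_trunc('day', now())
                 AND date_trunc('day', now()) + '1 day'::interval;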
[ { "msg_contents": "Just wanted to know if there were any insights after looking at\nrequested 'explain analyze select ...'?\n\n\nThanks,\n--patrick\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Fri, 5 Nov 2004 10:26:00 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze slows sql query" } ]
[ { "msg_contents": "Hi guys,\n\n I have been given a dual PIII with 768MB RAM and I am going to install \nPostgreSQL on it, for data warehousing reasons. I have also been given four \n160 Ultra SCSI disks (36MB each) with a RAID controller (Adaptec 2100). I \nam going to use a RAID5 architecture (this gives me approximately 103 GB of \ndata) and install a Debian Linux on it: this machine will be dedicated \nexclusively to PostgreSQL.\n\n I was wondering which file system you suggest me: ext3 or reiserfs? \nAlso, I was thinking of using the 2.6.x kernel which offers a faster thread \nsupport: will PostgreSQL gain anything from it or should I stick with 2.4.x?\n\nThank you very much,\n-Gabriele\n--\nGabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check \nmaintainer\nCurrent Location: Prato, Toscana, Italia\[email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The \nInferno\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.788 / Virus Database: 533 - Release Date: 01/11/2004", "msg_date": "Fri, 05 Nov 2004 22:00:16 +0100", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": true, "msg_subject": "Question regarding the file system" }, { "msg_contents": "Gabriele,\n\n> I have been given a dual PIII with 768MB RAM and I am going to install\n> PostgreSQL on it, for data warehousing reasons. I have also been given four\n> 160 Ultra SCSI disks (36MB each) with a RAID controller (Adaptec 2100). I\n> am going to use a RAID5 architecture (this gives me approximately 103 GB of\n> data) and install a Debian Linux on it: this machine will be dedicated\n> exclusively to PostgreSQL.\n\nFWIW, RAID5 with < 5 disks is probably the worst-performing disk setup for PG \nwith most kinds of DB applications. However, with 4 disks you don't have a \nlot of other geometries available. If the database will fit on one disk, I \nmight suggest doing RAID 1 for 2 of the disks, and having two single disks, \none with the OS and swap, and one with the database log.\n\nIf you're doing Debian, make sure to get a current version of PG from Debian \nUnstable.\n\n> I was wondering which file system you suggest me: ext3 or reiserfs?\n\nThese seem to be equivalent in data=writeback mode for most database \napplications. Use whichever you find easier to install & maintain.\n\n> Also, I was thinking of using the 2.6.x kernel which offers a faster thread\n> support: will PostgreSQL gain anything from it or should I stick with\n> 2.4.x?\n\nPostgreSQL won't gain anything from the thread support (unless you're using a \nthreaded front-end app with thread-safe ecpg). But it will gain from \nseveral other improvements in 2.6, especially better scheduling and VM \nsupport. Use 2.6.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 8 Nov 2004 09:44:33 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question regarding the file system" }, { "msg_contents": "Gabriele,\n\n> By any chance, do you have some reference or some tests that talk about the\n> fact that RAID5 with less than 5 disks is not performing?\n\nJust this list. But it's easy to test yourself; run bonnie++ and compare the \nperformance of seeks and random writes (which PG does a lot of) vs. 
a plain \nsingle disk.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 8 Nov 2004 17:31:22 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question regarding the file system" } ]
[ { "msg_contents": "> I have two dual opteron 248's with 4g of ram each, 6x36G 15k rpm ultra\n> 320 scsi disks in hardware raid 5, and they are by far the fastest\n> machines I've user used. As far as this \"headache\" of using 64 bit\n> Linux, I've experienced no such thing. I'm using gentoo on both\n> machines, which are dedicated for postgres 7.4 and replicated with\n> slony. They're both quite fast and reliable. One machine even runs a\n> secondary instance of pg, pg 8 beta4 in this case, for development,\n> which also runs quite well.\n\nGood, I'll give it a shot and see what I come up with...thx.\n\nMerlin\n", "msg_date": "Fri, 5 Nov 2004 16:17:02 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql amd-64" }, { "msg_contents": "Merlin,\n\n\n\n\n> Good, I'll give it a shot and see what I come up with...thx.\n> \nDo share your experience with us.\n\n-- \nWith Best Regards,\nVishal Kashyap.\nDid you know SaiPACS is one and only PACS\nManagement tool.\nhttp://saihertz.com\n", "msg_date": "Sat, 6 Nov 2004 08:31:44 +0530", "msg_from": "\"Vishal Kashyap @ [Sai Hertz And Control Systems]\"\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql amd-64" } ]
[ { "msg_contents": "Hi everyone,\n\nSome more data I've collected, trying to best tune dbt-2 with\n8.0beta4. Was hoping for some suggestions, explanations for what I'm\nseeing, etc.\n\nA review of hardware I've got:\n\n4 x 1.5Ghz Itanium 2\n16GB memory\n84 15K RPM disks (6 controlers, 12 channels)\n\nPhysical Database table layout (using LVM2 for tables using more than 1 disk):\n- warehouse 2 disks\n- district 2 disks\n- order_line 2 disks\n- customer 4 disks\n- stock 12 disks\n- log 12 disks\n- orders 2 disks\n- new_order 2 disks\n\n- history 1 disk\n- item 1 disk\n- index1 1 disk\n- index2 1 disk\n\nAll these tests are using a 500 warehouse database.\n\n\nTest 1: http://www.osdl.org/projects/dbt2dev/results/dev4-010/188/\nMetric: 3316\n\nDB parameter changes from default:\nbgwriter_percent | 10\ncheckpoint_timeout | 300\ncheckpoint_segments | 800\ncheckpoint_timeout | 1800\ndefault_statistics_target | 1000\nmax_connections | 140\nstats_block_level | on\nstats_command_string | on\nstats_row_level | on\nwal_buffers | 128\nwal_sync_method | fsync\nwork_mem | 2048\n\n\nTest 2: http://www.osdl.org/projects/dbt2dev/results/dev4-010/189/\nMetric: 3261 -1.7% decrease Test 1\n\nDB parameter changes from Test 1:\nshared_buffers | 60000\n\nNoted changes:\n\nThe block read for the customer table decreases significantly according\nto the database.\n\n\nTest 3: http://www.osdl.org/projects/dbt2dev/results/dev4-010/190/\nMetric: 3261 0% change from Test 2\n\nDB parameter changes from Test 2:\neffective_cache_size | 220000\n\nNoted changes:\n\nNo apparent changes according to the charts.\n\n\nTest 4: http://www.osdl.org/projects/dbt2dev/results/dev4-010/191/\nMetric: 3323 1.9 increase from Test 3\n\nDB parameter changes from Test 3:\ncheckpoint_segments | 1024\neffective_cache_size | 1000\n\nNoted Changes:\n\nThe increased checkpoint_segments smothed out the throughput and\nother i/o related stats.\n\n\nTest 5: http://www.osdl.org/projects/dbt2dev/results/dev4-010/192/\nMetric: 3149 -5% decrease from Test 4\n\nDB parameter changes from Test 4:\nshared_buffers | 80000\n\nNoted changes:\n\nThe graphs are starting to jump around a bit. I figure 80,000\nshared_buffers is too much.\n\n\nTest 6: http://www.osdl.org/projects/dbt2dev/results/dev4-010/193/\nMetric: 3277 4% increase from Test 5\n\nDB parameter changes from Test 5:\nrandom_page_cost | 2\nshared_buffers | 60000\n\nNoted changes:\n\nReducing the shared_buffers to the smoother performance found in Test 4\nseemed to have disrupted by decreasing the random_page_cost to 2.\n", "msg_date": "Fri, 5 Nov 2004 15:13:45 -0800", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": true, "msg_subject": "ia64 results with dbt2 and 8.0beta4" } ]
[ { "msg_contents": "Looking around at the pg_ tables and some PostgreSQL online\ndocs prompted by another post/reply on this list regarding\nALERT TABLE SET STATISTICS i found out that prior to a VACUUM\nthe following select (taken from the online docs) shows:\n\npkk=# select relname, relkind, reltuples, relpages from pg_class where relname\nlike 'pkk_%';\n relname | relkind | reltuples | relpages \n-------------------+---------+-----------+----------\n pkk_billing | r | 1000 | 10\n pkk_offer | r | 1000 | 10\n pkk_offer_pkey | i | 1000 | 1\n pkk_purchase | r | 1000 | 10\n pkk_purchase_pkey | i | 1000 | 1\n(5 rows)\n\nTime: 1097.263 ms\n\n\nand after a VACUUM:\n\npkk=# vacuum analyze ;\nVACUUM\nTime: 100543.359 ms\n\n\nit shows:\n\npkk=# select relname, relkind, reltuples, relpages from pg_class where relname\nlike 'pkk_%'; \n relname | relkind | reltuples | relpages \n-------------------+---------+-------------+----------\n pkk_billing | r | 714830 | 4930\n pkk_offer | r | 618 | 6\n pkk_offer_pkey | i | 618 | 4\n pkk_purchase | r | 1.14863e+06 | 8510\n pkk_purchase_pkey | i | 1.14863e+06 | 8214\n(5 rows)\n\nTime: 3.868 ms\n\n\n\nFurther, I notice that if I were to delete rows from the\npg_statistic table I get the db in a state where the query\nis fast again:\n\npkk=# explain analyze select offer_id, pkk_offer_has_pending_purch( offer_id )\nfrom pkk_offer ;\n QUERY PLAN \n \n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on pkk_offer (cost=0.00..13.72 rows=618 width=4) (actual\ntime=2415.739..1065709.092 rows=618 loops=1)\n Total runtime: 1065711.651 ms\n(2 rows)\n\nTime: 1065713.446 ms\n\n\n\npkk=# delete from pg_statistic where pg_statistic.starelid = pg_class.oid and\npg_class.relname like 'pkk_%';\nDELETE 11\nTime: 3.368 ms\n\n\n\npkk=# select offer_id, pkk_offer_has_pending_purch( offer_id ) from pkk_offer ;\n(618 rows)\n\nTime: 876.377 ms\n\n\npkk=# explain analyze select offer_id, pkk_offer_has_pending_purch( offer_id )\nfrom pkk_offer ;\n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------\n Seq Scan on pkk_offer (cost=0.00..13.72 rows=618 width=4) (actual\ntime=1.329..846.786 rows=618 loops=1)\n Total runtime: 848.170 ms\n(2 rows)\n\nTime: 849.958 ms\n\n\n\n\nNow, I'm sure someone (a PostgreSQL developer most likely)\nis about to shoot me for doing such a thing :-)\n\nBut, however *ugly, wrong, sacrilege* this may be, if this is\nthe only solution...err workaround I have that will help me\ni must resort to it.\n\nThe only two questions I have about this are:\n\n1. Is this really the only solution left for me?\n2. Am I in anyway screwing the db doing this?\n\n\nBest regards,\n--patrick\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Fri, 5 Nov 2004 16:26:49 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "patrick ~ <[email protected]> writes:\n> 1. 
Is this really the only solution left for me?\n\nYou still haven't followed the suggestions that were given to you\n(ie, find out what is happening with the plan for the query inside\nthe problematic function).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Nov 2004 23:06:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query " }, { "msg_contents": "Hi Tom, -performance@,\n\nI apologize if I didn't follow through with the PREPARE and\nEXECUTE. I assume that is what you are refering to. After\nreading the PostgreSQL docs on PREPARE statement I realized\ntwo things: a) PREPARE is only session long and b) that I\ncan not (at least I haven't figured out how) PREPARE a\nstatement which would mimic my original select statement\nwhich I could EXECUTE over all rows of pkk_offer table.\n\nBest I could do is either:\n\n PREPARE pkk_01 ( interger ) select $1, pkk_offer_has_pending_purch( $1 ) from\npkk_offer ;\n\nor\n\n PREPARE pkk_00 ( integer ) <the def of pkk_offer_has_pending_purc( integer )>\n\nIn the former case the EXPLAIN ANALYZE doesn't give enough\ndata (it is the same as w/o the PREPARE statement). In the\nlatter case, I can only execute it with one offer_id at at\ntime. Is this sufficient?\n\nIf so, here are the results before and after VACUUM ANALYZE:\n\npkk=# explain analyze execute pkk_00( 795 ) ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=8.57..8.58 rows=1 width=0) (actual time=0.095..0.096 rows=1\nloops=1)\n InitPlan\n -> Limit (cost=0.00..8.57 rows=1 width=4) (actual time=0.083..0.084\nrows=1 loops=1)\n -> Index Scan using pur_offer_id_idx on pkk_purchase p0 \n(cost=0.00..17.13 rows=2 width=4) (actual time=0.079..0.079 rows=1 loops=1)\n Index Cond: (offer_id = $1)\n Filter: ((((expire_time)::timestamp with time zone > now()) OR\n(expire_time IS NULL) OR (pending = true)) AND ((cancel_date IS NULL) OR\n(pending = true)))\n Total runtime: 0.238 ms\n(7 rows)\n\npkk=# VACUUM ANALYZE ;\nVACUUM\nTime: 97105.589 ms\n\npkk=# explain analyze execute pkk_00( 795 ) ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=8.57..8.58 rows=1 width=0) (actual time=0.329..0.330 rows=1\nloops=1)\n InitPlan\n -> Limit (cost=0.00..8.57 rows=1 width=4) (actual time=0.311..0.312\nrows=1 loops=1)\n -> Index Scan using pur_offer_id_idx on pkk_purchase p0 \n(cost=0.00..17.13 rows=2 width=4) (actual time=0.307..0.307 rows=1 loops=1)\n Index Cond: (offer_id = $1)\n Filter: ((((expire_time)::timestamp with time zone > now()) OR\n(expire_time IS NULL) OR (pending = true)) AND ((cancel_date IS NULL) OR\n(pending = true)))\n Total runtime: 0.969 ms\n(7 rows)\n\nTime: 16.252 ms\n\n\n\nIn both before and after \"Index Scan\" is used on pur_offer_id_idx.\nSo, unless I'm missing something obvious here I am at a loss.\n\nI went as far as doing the EXPLAIN ANALYZE EXECUTE pkk_00( offer_id )\nfor each offer_id in pkk_offer table one at a time (not manually but\nby scripting it). 
All instances use \"Index Scan\".\n\nI only noticed a couple that had quite large \"actual times\" like\nthis following:\n\n\npkk=# explain analyze execute pkk_00( 2312 ) ;\n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=8.57..8.58 rows=1 width=0) (actual time=21.279..21.282 rows=1\nloops=1)\n InitPlan\n -> Limit (cost=0.00..8.57 rows=1 width=4) (actual time=21.256..21.258\nrows=1 loops=1)\n -> Index Scan using pur_offer_id_idx on pkk_purchase p0 \n(cost=0.00..17.13 rows=2 width=4) (actual time=21.249..21.249 rows=1 loops=1)\n Index Cond: (offer_id = $1)\n Filter: ((((expire_time)::timestamp with time zone > now()) OR\n(expire_time IS NULL) OR (pending = true)) AND ((cancel_date IS NULL) OR\n(pending = true)))\n Total runtime: 21.435 ms\n(7 rows)\n\nTime: 22.541 ms\n\n\nWhich makes sense when you look at the number of entries this\noffer_id has in pkk_purchase table vs offer_id = 795:\n\npkk=# select offer_id, count(*) from pkk_purchase where offer_id in ( 795, 2312\n) group by offer_id ;\n offer_id | count \n----------+-------\n 795 | 4\n 2312 | 1015\n(2 rows)\n\nTime: 21.118 ms\n\n\n--patrick\n\n\n\n\n--- Tom Lane <[email protected]> wrote:\n\n> patrick ~ <[email protected]> writes:\n> > 1. Is this really the only solution left for me?\n> \n> You still haven't followed the suggestions that were given to you\n> (ie, find out what is happening with the plan for the query inside\n> the problematic function).\n> \n> \t\t\tregards, tom lane\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Fri, 5 Nov 2004 22:18:50 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze slows sql query " }, { "msg_contents": "patrick ~ <[email protected]> writes:\n> PREPARE pkk_00 ( integer ) <the def of pkk_offer_has_pending_purc( integer )\n\nThis is what you want to do, but not quite like that. The PREPARE\ndetermines the plan and so VACUUMing and re-EXECUTing is going to show\nthe same plan. What we need to look at is\n\t- standing start\n\tPREPARE pkk_00 ...\n\tEXPLAIN ANALYZE EXECUTE pkk_00 ...\n\tVACUUM ANALYZE;\n\tPREPARE pkk_01 ...\n\tEXPLAIN ANALYZE EXECUTE pkk_01 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Nov 2004 01:38:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query " }, { "msg_contents": "Sorry for the late reply. Was feeling a bit under the weather\nthis weekend and didn't get a chance to look at this.\n\n\n--- Tom Lane <[email protected]> wrote:\n\n> patrick ~ <[email protected]> writes:\n> > PREPARE pkk_00 ( integer ) <the def of pkk_offer_has_pending_purc( integer\n> )\n> \n> This is what you want to do, but not quite like that. The PREPARE\n> determines the plan and so VACUUMing and re-EXECUTing is going to show\n> the same plan. What we need to look at is\n> \t- standing start\n> \tPREPARE pkk_00 ...\n> \tEXPLAIN ANALYZE EXECUTE pkk_00 ...\n> \tVACUUM ANALYZE;\n> \tPREPARE pkk_01 ...\n> \tEXPLAIN ANALYZE EXECUTE pkk_01 ...\n\nBut of course! 
I feel a bit silly now.\n\nThis is what I get after following Tom's directions:\n\npkk=# prepare pkk_00 ( integer ) as select ...\nPREPARE\nTime: 1.753 ms\npkk=# execute pkk_00( 241 );\n case \n------\n f\n(1 row)\n\nTime: 0.788 ms\npkk=# explain analyze execute pkk_00( 241 );\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=10.73..10.74 rows=1 width=0) (actual time=0.067..0.068 rows=1\nloops=1)\n InitPlan\n -> Limit (cost=0.00..10.73 rows=1 width=4) (actual time=0.055..0.055\nrows=0 loops=1)\n -> Index Scan using pur_offer_id_idx on pkk_purchase p0 \n(cost=0.00..20690.18 rows=1929 width=4) (actual time=0.052..0.052 rows=0\nloops=1)\n Index Cond: (offer_id = $1)\n Filter: ((((expire_time)::timestamp with time zone > now()) OR\n(expire_time IS NULL) OR (pending = true)) AND ((cancel_date IS NULL) OR\n(pending = true)))\n Total runtime: 0.213 ms\n(7 rows)\n\nTime: 24.654 ms\npkk=# vacuum analyze ;\nVACUUM\nTime: 128826.078 ms\npkk=# prepare pkk_01 ( integer ) as select ...\nPREPARE\nTime: 104.658 ms\npkk=# execute pkk_01( 241 );\n case \n------\n f\n(1 row)\n\nTime: 7652.708 ms\npkk=# explain analyze execute pkk_01( 241 );\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=2.66..2.67 rows=1 width=0) (actual time=2872.211..2872.213\nrows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..2.66 rows=1 width=4) (actual\ntime=2872.189..2872.189 rows=0 loops=1)\n -> Seq Scan on pkk_purchase p0 (cost=0.00..37225.83 rows=13983\nwidth=4) (actual time=2872.180..2872.180 rows=0 loops=1)\n Filter: ((offer_id = $1) AND (((expire_time)::timestamp with\ntime zone > now()) OR (expire_time IS NULL) OR (pending = true)) AND\n((cancel_date IS NULL) OR (pending = true)))\n Total runtime: 2872.339 ms\n(6 rows)\n\nTime: 2873.479 ms\n\n\nSo it looks like after the VACCUM the planner resorts to Seq Scan\nrather than Index Scan.\n\nThis is because of the value of correlation field in pg_stats\n(according to PostgreSQL docs) being closer to 0 rather than\n���1:\n\npkk=# select tablename,attname,correlation from pg_stats where tablename =\n'pkk_purchase' and attname = 'offer_id' ;\n tablename | attname | correlation \n--------------+----------+-------------\n pkk_purchase | offer_id | 0.428598\n(1 row)\n\n\nSo I started to experiment with ALTER TABLE SET STATISTICS\nvalues to see which gets the correlation closer to ���1. 
The\ntrend seems to indicat the higher the stat value is set it\npushes the correlation value closer to 0:\n\nset statistics correlation\n----------------------------\n 800 0.393108\n 500 0.408137\n 200 0.43197\n 50 0.435211\n 1 0.45758\n\nAnd a subsequent PREPARE and EXPLAIN ANALYZE confirms that\nthe Planer reverts back to using the Index Scan after setting\nstats to 1 (even though correlation value is still closer\nto 0 than 1):\n\npkk=# explain analyze execute pkk_02( 241 );\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=2.95..2.96 rows=1 width=0) (actual time=0.068..0.069 rows=1\nloops=1)\n InitPlan\n -> Limit (cost=0.00..2.95 rows=1 width=4) (actual time=0.056..0.056\nrows=0 loops=1)\n -> Index Scan using pur_offer_id_idx on pkk_purchase p0 \n(cost=0.00..35810.51 rows=12119 width=4) (actual time=0.053..0.053 rows=0\nloops=1)\n Index Cond: (offer_id = $1)\n Filter: ((((expire_time)::timestamp with time zone > now()) OR\n(expire_time IS NULL) OR (pending = true)) AND ((cancel_date IS NULL) OR\n(pending = true)))\n Total runtime: 0.200 ms\n(7 rows)\n\n\n\nSo, is this the ultimate solution to this issue?\n\n--patrick\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Mon, 8 Nov 2004 10:57:02 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze slows sql query " }, { "msg_contents": "patrick ~ wrote:\n[...]\n> pkk=# explain analyze execute pkk_01( 241 );\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=2.66..2.67 rows=1 width=0) (actual time=2872.211..2872.213\n> rows=1 loops=1)\n> InitPlan\n> -> Limit (cost=0.00..2.66 rows=1 width=4) (actual\n> time=2872.189..2872.189 rows=0 loops=1)\n> -> Seq Scan on pkk_purchase p0 (cost=0.00..37225.83 rows=13983\n> width=4) (actual time=2872.180..2872.180 rows=0 loops=1)\n> Filter: ((offer_id = $1) AND (((expire_time)::timestamp with\n> time zone > now()) OR (expire_time IS NULL) OR (pending = true)) AND\n> ((cancel_date IS NULL) OR (pending = true)))\n> Total runtime: 2872.339 ms\n> (6 rows)\n> \n> Time: 2873.479 ms\n> \n\n[...]\n\n> So, is this the ultimate solution to this issue?\n> \n> --patrick\n\nIt's not so much that correlation is < 0.5. It sounds like you're \nrunning into the same issue that I ran into in the past. You have a \ncolumn with lots of repeated values, and a few exceptional ones. Notice \nthis part of the query:\n-> Seq Scan on pkk_purchase p0 (cost rows=13983) (actual rows=0)\n\nFor a general number, it thinks it might return 14,000 rows, hence the \nsequential scan. Before you do ANALYZE, it uses whatever defaults exist, \nwhich are probably closer to reality.\n\nThe problem is that you probably have some values for pkk_purchase where \nit could return 14,000 rows (possibly much much more). And for those, \nseq scan is the best plan. 
However, for the particular value that you \nare testing, there are very few (no) entries in the table.\n\nWith a prepared statement (or a function) it has to determine ahead of \ntime what the best query is without knowing what value you are going to \n ask for.\n\nLets say for a second that you manage to trick it into using index scan, \nand then you actually call the function with one of the values that \nreturns 1,000s of rows. Probably it will take 10-100 times longer than \nif it used a seq scan.\n\nSo what is the solution? The only one I'm aware of is to turn your \nstatic function into a dynamic one.\n\nSo somewhere within the function you build up a SQL query string and \ncall EXECUTE str. This forces the query planner to be run every time you \ncall the function. This means that if you call it will a \"nice\" value, \nyou will get the fast index scan, and if you call it with a \"bad\" value, \nit will switch back to seq scan.\n\nThe downside is you don't get much of a benefit from using as stored \nprocedure, as it has to run the query planner all the time (as though \nyou issue the query manually each time.) But it still might be better \nfor you in the long run.\n\nExample:\n\ninstead of\n\ncreate function test(int) returns int as '\ndeclare\n x alias for $1;\n int y;\nbegin\n select into y ... from ... where id=x limit ...;\n return y;\nend\n';\n\nuse this format\n\ncreate function test(int) returns int as '\ndeclare\n x alias for $1;\n int y;\nbegin\n EXECUTE ''select into y ... from ... where id=''\n\t||quote_literal(x)\n\t|| '' limit ...'';\n return y;\nend;\n';\n\nI think that will point you in the right direction.\n\nJohn\n=:->", "msg_date": "Mon, 08 Nov 2004 13:29:39 -0600", "msg_from": "John Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "Hi John,\n\nThanks for your reply and analysis.\n\n\n--- John Meinel <[email protected]> wrote:\n\n> patrick ~ wrote:\n> [...]\n> > pkk=# explain analyze execute pkk_01( 241 );\n> > QUERY PLAN\n> >\n>\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Result (cost=2.66..2.67 rows=1 width=0) (actual time=2872.211..2872.213\n> > rows=1 loops=1)\n> > InitPlan\n> > -> Limit (cost=0.00..2.66 rows=1 width=4) (actual\n> > time=2872.189..2872.189 rows=0 loops=1)\n> > -> Seq Scan on pkk_purchase p0 (cost=0.00..37225.83 rows=13983\n> > width=4) (actual time=2872.180..2872.180 rows=0 loops=1)\n> > Filter: ((offer_id = $1) AND (((expire_time)::timestamp\n> with\n> > time zone > now()) OR (expire_time IS NULL) OR (pending = true)) AND\n> > ((cancel_date IS NULL) OR (pending = true)))\n> > Total runtime: 2872.339 ms\n> > (6 rows)\n> > \n> > Time: 2873.479 ms\n> > \n> \n> [...]\n> \n> > So, is this the ultimate solution to this issue?\n> > \n> > --patrick\n> \n> It's not so much that correlation is < 0.5. It sounds like you're \n> running into the same issue that I ran into in the past. You have a \n> column with lots of repeated values, and a few exceptional ones. Notice \n> this part of the query:\n> -> Seq Scan on pkk_purchase p0 (cost rows=13983) (actual rows=0)\n> \n> For a general number, it thinks it might return 14,000 rows, hence the \n> sequential scan. 
Before you do ANALYZE, it uses whatever defaults exist, \n> which are probably closer to reality.\n> \n> The problem is that you probably have some values for pkk_purchase where \n> it could return 14,000 rows (possibly much much more). And for those, \n> seq scan is the best plan. However, for the particular value that you \n> are testing, there are very few (no) entries in the table.\n\nYou are absoultely correct:\n\npkk=# select offer_id,count(*) from pkk_purchase group by offer_id order by\ncount ;\n offer_id | count \n----------+--------\n 1019 | 1\n 1018 | 1\n 1016 | 1 (many of these)\n ... | ...\n 2131 | 6\n 844 | 6\n 1098 | 6 (a dozen or so of these)\n ... | ...\n 2263 | 682\n 2145 | 723\n 2258 | 797\n 2091 | 863\n ... | ...\n 1153 | 96330 (the few heavy weights)\n 244 | 122163\n 242 | 255719\n 243 | 273427\n 184 | 348476\n\n\n> With a prepared statement (or a function) it has to determine ahead of \n> time what the best query is without knowing what value you are going to \n> ask for.\n> \n> Lets say for a second that you manage to trick it into using index scan, \n> and then you actually call the function with one of the values that \n> returns 1,000s of rows. Probably it will take 10-100 times longer than \n> if it used a seq scan.\n\n\nHmm... The fact is I am selecting (in this example anyway) over all\nvalues in pkk_offer table and calling the stored function with each\npkk_offer.offer_id which in turn does a select on pkk_purchase table.\nNote that offer_id is a foreign key in pkk_purchase referencing\npkk_offer table.\n\nI don't know if it matters (I suspect that it does) but I am using\nLIMIT 1 in the sub-query/stored function. All I need is one single\nrow meeting any of the criteria laid out in the stored procedure to\nestablish an offer_id is \"pending\".\n\n\n> So what is the solution? The only one I'm aware of is to turn your \n> static function into a dynamic one.\n> \n> So somewhere within the function you build up a SQL query string and \n> call EXECUTE str. This forces the query planner to be run every time you \n> call the function. This means that if you call it will a \"nice\" value, \n> you will get the fast index scan, and if you call it with a \"bad\" value, \n> it will switch back to seq scan.\n> \n> The downside is you don't get much of a benefit from using as stored \n> procedure, as it has to run the query planner all the time (as though \n> you issue the query manually each time.) 
But it still might be better \n> for you in the long run.\n\n\nWell, running the query without the stored function, basically typing\nout the stored function as a sub-query shows me:\n\n\npkk=# explain analyze select o0.offer_id, ( select case when ( select \np0.purchase_id from pkk_purchase p0 where p0.offer_id = o0.offer_id and (\np0.pending = true or ( ( p0.expire_time > now() or p0.expire_time isnull ) and\np0.cancel_date isnull ) ) limit 1 ) isnull then false else true end ) from\npkk_offer o0 ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on pkk_offer o0 (cost=0.00..1834.11 rows=618 width=4) (actual\ntime=2413.398..1341885.084 rows=618 loops=1)\n SubPlan\n -> Result (cost=2.94..2.95 rows=1 width=0) (actual\ntime=2171.287..2171.289 rows=1 loops=618)\n InitPlan\n -> Limit (cost=0.00..2.94 rows=1 width=4) (actual\ntime=2171.264..2171.266 rows=1 loops=618)\n -> Seq Scan on pkk_purchase p0 (cost=0.00..37225.83\nrows=12670 width=4) (actual time=2171.245..2171.245 rows=1 loops=618)\n Filter: ((offer_id = $0) AND\n(((expire_time)::timestamp with time zone > now()) OR (expire_time IS NULL) OR\n(pending = true)) AND ((cancel_date IS NULL) OR (pending = true)))\n Total runtime: 1341887.523 ms\n(8 rows)\n\n\nwhile deleting all statistics on the pkk_% tables I get:\n\npkk=# delete from pg_statistic where pg_statistic.starelid = pg_class.oid and\npg_class.relname like 'pkk_%';\nDELETE 11\n\npkk=# explain analyze select o0.offer_id, ( select case when ( select \np0.purchase_id from pkk_purchase p0 where p0.offer_id = o0.offer_id and (\np0.pending = true or ( ( p0.expire_time > now() or p0.expire_time isnull ) and\np0.cancel_date isnull ) ) limit 1 ) isnull then false else true end ) from\npkk_offer o0 ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on pkk_offer o0 (cost=0.00..6646.94 rows=618 width=4) (actual\ntime=0.190..799.930 rows=618 loops=1)\n SubPlan\n -> Result (cost=10.73..10.74 rows=1 width=0) (actual time=1.277..1.278\nrows=1 loops=618)\n InitPlan\n -> Limit (cost=0.00..10.73 rows=1 width=4) (actual\ntime=1.266..1.267 rows=1 loops=618)\n -> Index Scan using pur_offer_id_idx on pkk_purchase p0 \n(cost=0.00..20690.18 rows=1929 width=4) (actual time=1.258..1.258 rows=1\nloops=618)\n Index Cond: (offer_id = $0)\n Filter: ((((expire_time)::timestamp with time zone >\nnow()) OR (expire_time IS NULL) OR (pending = true)) AND ((cancel_date IS NULL)\nOR (pending = true)))\n Total runtime: 801.234 ms\n(9 rows)\n\n\nAs you can see this query (over all values of pkk_offer) with out\nany pg_statistics on the pkk_purchase table is extremely fast.\n\nIs this a bug in the PostgreSQL planner that misjudges the best\nchoice with pg_statistics at hand?\n\n--patrick\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Mon, 8 Nov 2004 15:19:51 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "patrick ~ wrote:\n> Hi John,\n> \n> Thanks for your reply and analysis.\n> \n\nNo problem. 
It just happens that this is a problem we ran into recently.\n\n> \n> --- John Meinel <[email protected]> wrote:\n> \n> \n>>patrick ~ wrote:\n[...]\n\n> \n> Hmm... The fact is I am selecting (in this example anyway) over all\n> values in pkk_offer table and calling the stored function with each\n> pkk_offer.offer_id which in turn does a select on pkk_purchase table.\n> Note that offer_id is a foreign key in pkk_purchase referencing\n> pkk_offer table.\n> \n> I don't know if it matters (I suspect that it does) but I am using\n> LIMIT 1 in the sub-query/stored function. All I need is one single\n> row meeting any of the criteria laid out in the stored procedure to\n> establish an offer_id is \"pending\".\n> \n\nIf you are trying to establish existence, we also had a whole thread on \nthis. Basically what we found was that adding an ORDER BY clause, helped \ntremendously in getting the planner to switch to an Index scan. You \nmight try something like:\n\nSELECT column FROM mytable WHERE column='myval' ORDER BY column LIMIT 1;\n\nThere seems to be a big difference between the above statement and:\n\nSELECT column FROM mytable WHERE column='myval' LIMIT 1;\n\n\n> \n> \n>>So what is the solution? The only one I'm aware of is to turn your \n>>static function into a dynamic one.\n>>\n>>So somewhere within the function you build up a SQL query string and \n>>call EXECUTE str. This forces the query planner to be run every time you \n>>call the function. This means that if you call it will a \"nice\" value, \n>>you will get the fast index scan, and if you call it with a \"bad\" value, \n>>it will switch back to seq scan.\n>>\n>>The downside is you don't get much of a benefit from using as stored \n>>procedure, as it has to run the query planner all the time (as though \n>>you issue the query manually each time.) 
But it still might be better \n>>for you in the long run.\n> \n> \n> \n> Well, running the query without the stored function, basically typing\n> out the stored function as a sub-query shows me:\n> \n> \n> pkk=# explain analyze select o0.offer_id, ( select case when ( select \n> p0.purchase_id from pkk_purchase p0 where p0.offer_id = o0.offer_id and (\n> p0.pending = true or ( ( p0.expire_time > now() or p0.expire_time isnull ) and\n> p0.cancel_date isnull ) ) limit 1 ) isnull then false else true end ) from\n> pkk_offer o0 ;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on pkk_offer o0 (cost=0.00..1834.11 rows=618 width=4) (actual\n> time=2413.398..1341885.084 rows=618 loops=1)\n> SubPlan\n> -> Result (cost=2.94..2.95 rows=1 width=0) (actual\n> time=2171.287..2171.289 rows=1 loops=618)\n> InitPlan\n> -> Limit (cost=0.00..2.94 rows=1 width=4) (actual\n> time=2171.264..2171.266 rows=1 loops=618)\n> -> Seq Scan on pkk_purchase p0 (cost=0.00..37225.83\n> rows=12670 width=4) (actual time=2171.245..2171.245 rows=1 loops=618)\n> Filter: ((offer_id = $0) AND\n> (((expire_time)::timestamp with time zone > now()) OR (expire_time IS NULL) OR\n> (pending = true)) AND ((cancel_date IS NULL) OR (pending = true)))\n> Total runtime: 1341887.523 ms\n> (8 rows)\n> \n> \n> while deleting all statistics on the pkk_% tables I get:\n> \n> pkk=# delete from pg_statistic where pg_statistic.starelid = pg_class.oid and\n> pg_class.relname like 'pkk_%';\n> DELETE 11\n> \n> pkk=# explain analyze select o0.offer_id, ( select case when ( select \n> p0.purchase_id from pkk_purchase p0 where p0.offer_id = o0.offer_id and (\n> p0.pending = true or ( ( p0.expire_time > now() or p0.expire_time isnull ) and\n> p0.cancel_date isnull ) ) limit 1 ) isnull then false else true end ) from\n> pkk_offer o0 ;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on pkk_offer o0 (cost=0.00..6646.94 rows=618 width=4) (actual\n> time=0.190..799.930 rows=618 loops=1)\n> SubPlan\n> -> Result (cost=10.73..10.74 rows=1 width=0) (actual time=1.277..1.278\n> rows=1 loops=618)\n> InitPlan\n> -> Limit (cost=0.00..10.73 rows=1 width=4) (actual\n> time=1.266..1.267 rows=1 loops=618)\n> -> Index Scan using pur_offer_id_idx on pkk_purchase p0 \n> (cost=0.00..20690.18 rows=1929 width=4) (actual time=1.258..1.258 rows=1\n> loops=618)\n> Index Cond: (offer_id = $0)\n> Filter: ((((expire_time)::timestamp with time zone >\n> now()) OR (expire_time IS NULL) OR (pending = true)) AND ((cancel_date IS NULL)\n> OR (pending = true)))\n> Total runtime: 801.234 ms\n> (9 rows)\n> \n> \n> As you can see this query (over all values of pkk_offer) with out\n> any pg_statistics on the pkk_purchase table is extremely fast.\n> \n> Is this a bug in the PostgreSQL planner that misjudges the best\n> choice with pg_statistics at hand?\n> \n> --patrick\n> \n\nIn order to understand your query I broke it up and restructured it as \nfollows.\nYou might try to add the ORDER BY line, and see what you get.\n\n EXPLAIN ANALYZE\n SELECT o0.offer_id,\n ( SELECT CASE WHEN\n ( SELECT p0.purchase_id FROM pkk_purchase p0\n WHERE p0.offer_id = o0.offer_id\n AND ( p0.pending = true\n OR ( p0.cancel_date ISNULL\n AND ( p0.expire_time > NOW() or 
p0.expire_time \nISNULL )\n )\n )\n ORDER BY p0.purchase_id --Insert this line\n LIMIT 1\n ) ISNULL THEN false\n ELSE true\n END\n ) FROM pkk_offer o0 ;\n\nI also wonder about some parts of your query. I don't know your business \nlogic but you are tacking a lot of the query into the WHERE, and I \nwonder if postgres just thinks it's going to need to analyze all the \ndata before it gets a match.\n\nI also don't remember what columns you have indices on. Or whether it is \ncommon to have cancel_date null, or expire_time > NOW() or expire_time \nnull, etc.\n\nSo is your function just everything within the CASE statement?\n\nYou might try rewriting it as a loop using a cursor, as I believe using \na cursor again lends itself to index scans (as it is even more likely \nthat you will not get all the data.)\n\nSomething like (this is untested)\n\ncreate function is_pending(int) returns bool as '\ndeclare\n p_id alias for $1;\nbegin\n\n DECLARE is_pending_cursor CURSOR FOR\n\tSELECT p0.purchase_id FROM pkk_purchase p0\n\t WHERE p0.offer_id = p_id;\n FOR READ ONLY;\n\n FOR FETCH NEXT is_pending_cursor\n IF row.pending = true or ...\n RETURN true;\n\n RETURN false;\nEND;\n';\n\nI don't know cursors terribly well, but this might get you going. \nProbably in your case you also have a large portion of the records with \npending = true, which means that with an index scan it doesn't have to \nhit very many records. Either you have a low record count for a \nparticular purchase_id, or you have a lot of pendings. seq scan just \nhurts because it has to sift out all the other id's that you don't care \nabout.\n\nBut remember, I'm not a guru, just someone who has been hit by the \ninequal distribution problem.\n\nJohn\n=:->", "msg_date": "Mon, 08 Nov 2004 18:01:30 -0600", "msg_from": "John Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "\n>> Lets say for a second that you manage to trick it into using index scan,\n>> and then you actually call the function with one of the values that\n>> returns 1,000s of rows. Probably it will take 10-100 times longer than\n>> if it used a seq scan.\n\n> I don't know if it matters (I suspect that it does) but I am using\n> LIMIT 1 in the sub-query/stored function. All I need is one single\n> row meeting any of the criteria laid out in the stored procedure to\n> establish an offer_id is \"pending\".\n\n\tSo, in your case if you LIMIT the index scan will always be fast, and the \nseq scan will be catastrophic, because you don't need to retrieve all the \nrows, but just one. (IMHO the planner screws these LIMIT clauses becauses \nit expects the data to be randomly distributed in the first page while in \nreal life it's not).\n\n\tYou could use EXIST to test the existence of a subquery (after all, thats \nits purpose), or you could :\n\n\tWhen SELECT ... FROM table WHERE stuff=value LIMIT 1\n\tobstinately uses a seq scan, spray a little order by :\n\n\tWhen SELECT ... 
FROM table WHERE stuff=value ORDER BY stuff LIMIT 1\n\n\tthe ORDER BY will make the planner think \"I could use the index to \norder\"...\n", "msg_date": "Tue, 09 Nov 2004 01:04:25 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "patrick ~ wrote:\n> Hi John,\n> \n> Thanks for your reply and analysis.\n> \n> \n> --- John Meinel <[email protected]> wrote:\n> \n> \n>>patrick ~ wrote:\n>>[...]\n>>\n>>>pkk=# explain analyze execute pkk_01( 241 );\n>>> QUERY PLAN\n>>>\n>>\n\nOne other thing that I just thought of. I think it is actually possible \nto add an index on a function of a column. So if you have the \n\"is_really_pending\" function, you might be able to do:\n\nCREATE INDEX pkk_is_really_pending ON pkk_purchase\n\t(is_really_pending(purchase_id));\n\nBut you need a better guru than I to make sure of that.\n\nThis *might* do what you need.\n\nJohn\n=:->", "msg_date": "Mon, 08 Nov 2004 18:31:33 -0600", "msg_from": "John Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "\n--- John Meinel <[email protected]> wrote:\n\n> If you are trying to establish existence, we also had a whole thread on \n> this. Basically what we found was that adding an ORDER BY clause, helped \n> tremendously in getting the planner to switch to an Index scan. You \n> might try something like:\n> \n> SELECT column FROM mytable WHERE column='myval' ORDER BY column LIMIT 1;\n> \n> There seems to be a big difference between the above statement and:\n> \n> SELECT column FROM mytable WHERE column='myval' LIMIT 1;\n\n\nThe ORDER BY \"trick\" worked beautifully! I just hope it'll\ncontinue to work consistently in production code.\n\n\n\n> I also wonder about some parts of your query. I don't know your business \n> logic but you are tacking a lot of the query into the WHERE, and I \n> wonder if postgres just thinks it's going to need to analyze all the \n> data before it gets a match.\n\n\nI have a table of offers (pkk_offer) and a table keeping track\nof all purchases against each offer (pkk_purchase) and a third\ntable keeping track of billing for each purchase (pkk_billing).\n\nThat's the basic setup of my db. In actuallity there are more\ntables and views invovled keeping track of usage, etc.\n\nThe on UI page that lists all offers in the system needs to\nindicated to the user (operator) which offers are \"pending\".\nThe term \"pending\" is used to mean that the particular offer\nhas either an active purchase against it or has a purchase\nwhich hasn't yet been entered into the billing system yet\n(doesn't yet show up in pkk_billing).\n\nAn active purcahse is indicated by pkk_purchase.expire_time in\nthe future or IS NULL. Where IS NULL indicates a subscription\ntype purchase (a recurring purchase). Offers are created either\nto be one-time purchasable or subscription type.\n\nThe pkk_purchase.pending (boolean) column indicates whether\nor not the purchase has been entered into the billing system.\nIt is a rare case where this flag remains true for a long\nperiod of time (which would indicate something wrong with\nthe billing sub-system).\n\nThere is a foreign key pkk_purchase.offer_id referencing\npkk_offer.offer_id. 
And likewise, pkk_billing.client_id\nreferencing pkk_purchase.client_id and pkk_billing.purchase_id\nreferencing pkk_purchase.purchase_id.\n\n\n> So is your function just everything within the CASE statement?\n\nYes. I posted a \"stripped down\" version of the database\non pgsql-sql@ list earlier this month if you are interested\nin looking at it:\n\nhttp://marc.theaimsgroup.com/?l=postgresql-sql&m=109945118928530&w=2\n\n(Just side note: MARC doesn't seem to subscribe to -performance\nor -hackers. I have requested them to carry these two lists. I\nthink if more people request this it might happen. I like their\narchiving system).\n\n\n> You might try rewriting it as a loop using a cursor, as I believe using \n> a cursor again lends itself to index scans (as it is even more likely \n> that you will not get all the data.)\n\nI may try this as well as trying a suggestion by Pierre-Frédéric\nCaillaud to use EXISTS, though my initial attempt to use it didn't\nseem to be any faster than my original stored function.\n\nSo far the ORDER BY \"trick\" seems to be the best solution.\n\nI appreciate everyone's help and suggestions on this topic!\n\nBest wishes,\n--patrick\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Tue, 9 Nov 2004 11:26:44 -0800 (PST)", "msg_from": "patrick ~ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze slows sql query" }, { "msg_contents": "patrick ~ wrote:\n> --- John Meinel <[email protected]> wrote:\n> \n> \n>>If you are trying to establish existence, we also had a whole thread on \n>>this. Basically what we found was that adding an ORDER BY clause, helped \n>>tremendously in getting the planner to switch to an Index scan. You \n>>might try something like:\n>>\n>>SELECT column FROM mytable WHERE column='myval' ORDER BY column LIMIT 1;\n>>\n>>There seems to be a big difference between the above statement and:\n>>\n>>SELECT column FROM mytable WHERE column='myval' LIMIT 1;\n> \n> \n> \n> The ORDER BY \"trick\" worked beautifully! I just hope it'll\n> continue to work consistently in production code.\n\nFor sure it will not break the goal: \"check the existence\".\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n", "msg_date": "Thu, 11 Nov 2004 09:03:24 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze slows sql query" } ]
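To make the outcome of the thread above easier to reuse, here is a condensed sketch of the existence test in both the ORDER BY form and the EXISTS form that were discussed; offer_id = 241 is the sample value used earlier in the thread, and the output alias is purely illustrative:

-- The ORDER BY on the filtered column nudges the planner toward
-- pur_offer_id_idx even when offer_id's statistics are heavily skewed.
SELECT p0.purchase_id
  FROM pkk_purchase p0
 WHERE p0.offer_id = 241
   AND ( p0.pending = true
         OR ( ( p0.expire_time > now() OR p0.expire_time IS NULL )
              AND p0.cancel_date IS NULL ) )
 ORDER BY p0.offer_id
 LIMIT 1;

-- EXISTS asks the same "is there at least one row?" question directly.
SELECT EXISTS (
    SELECT 1
      FROM pkk_purchase p0
     WHERE p0.offer_id = 241
       AND ( p0.pending = true
             OR ( ( p0.expire_time > now() OR p0.expire_time IS NULL )
                  AND p0.cancel_date IS NULL ) )
) AS has_pending_purchase;

Either form keeps the single-row intent of the original function while giving the planner a reason to reach for the index.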
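And for the case John Meinel raises above, where one parameter value matches a handful of rows and another matches hundreds of thousands, a hedged sketch of the dynamic-query variant; the function name and body below are illustrative stand-ins, not the original pkk_offer_has_pending_purch definition:

CREATE OR REPLACE FUNCTION pkk_offer_is_pending(integer) RETURNS boolean AS '
DECLARE
    p_offer_id ALIAS FOR $1;
    v_row      record;
BEGIN
    -- Building the query text at run time lets the planner see the actual
    -- offer_id on every call, so rare values can use the offer_id index
    -- while heavily-represented values fall back to a sequential scan.
    FOR v_row IN EXECUTE
        ''SELECT purchase_id FROM pkk_purchase WHERE offer_id = ''
        || quote_literal(p_offer_id)
        || '' AND (pending = true OR ((expire_time > now() OR expire_time IS NULL)''
        || '' AND cancel_date IS NULL)) LIMIT 1''
    LOOP
        RETURN true;
    END LOOP;
    RETURN false;
END;
' LANGUAGE plpgsql;

Called as select offer_id, pkk_offer_is_pending(offer_id) from pkk_offer, this trades the cached plan of a plain PL/pgSQL function for a fresh plan per value, which is exactly the trade-off described in the thread.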
[ { "msg_contents": "I have migrated a database from MS SQL to a\npostgresSQL database, but when running it, the results\nare very slow (and unusable) which is the only reason\nwe don't entirely move to postgresSQL.\nThe problem is that there are many nested views which\nnormally join tables by using two fields, one\ncharacter and other integer.\nThe biggest table has about 300k records (isn't it too\nlittle for having performance problems?)\nWhat could be the poor performance reason? the server\nis a dual itanium (intel 64bits) processor with 6Gb of\nRAM and a 36Gb Raid 5 scsi hdds of 15k rpm.\nIf someone has the time and wants to check the\nstructure, I have a copy of everything at\nhttp://www.micredito.com.sv/.carlos/materiales.sql.bz2\nit is a pgsqldump made with postgres 7.4\nThanks in advance for your help.\n\nCarlos Lopez Linares.\n\n\n=====\n___\nIng. Carlos L���pez Linares\nIT Consultant\nQuieres aprender linux?\nvisita http://www.aprende-linux.com.sv\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Sat, 6 Nov 2004 11:52:15 -0800 (PST)", "msg_from": "Carlos Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "poor performance in migrated database" }, { "msg_contents": "On Sat, 06 Nov 2004 11:52:15 -0800, Carlos Lopez wrote:\n\n> I have migrated a database from MS SQL to a\n> postgresSQL database, but when running it, the results\n> are very slow (and unusable) which is the only reason\n> we don't entirely move to postgresSQL.\n\nHave you run ANALYZE lately? (See manual.)\n\nDo you know how to use EXPLAIN? (See manual.) If so: Please post an\nexample query which is slow, and the corresponding output from EXPLAIN.\n\nHave you tried turning your random_page_cost a bit down? (My experience\nits value should generally be lessened.)\n\nHave you read\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html ?\n\n> The biggest table has about 300k records (isn't it too\n> little for having performance problems?)\n\nThat should be no problem.\n\n-- \nGreetings from Troels Arvin, Copenhagen, Denmark\n\n\n", "msg_date": "Sat, 06 Nov 2004 21:07:06 +0100", "msg_from": "Troels Arvin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performance in migrated database" }, { "msg_contents": "On Sat, 06 Nov 2004 11:52:15 -0800, Carlos Lopez wrote:\n\n> I have migrated a database from MS SQL to a postgresSQL database, but\n> when running it, the results are very slow (and unusable) which is the\n> only reason we don't entirely move to postgresSQL.\n \nHave you run ANALYZE lately? (See manual.)\n \nDo you know how to use EXPLAIN? (See manual.) If so: Please post an\nexample query which is slow, and the corresponding output from EXPLAIN.\n \nHave you tried turning your random_page_cost a bit down? 
(My experience\nits value should generally be lessened.)\n \nHave you read\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html ?\n \n> The biggest table has about 300k records (isn't it too little for having\n> performance problems?)\n \nThat should be no problem.\n\n-- \nGreetings from Troels Arvin, Copenhagen, Denmark\n\n\n", "msg_date": "Sat, 06 Nov 2004 21:26:15 +0100", "msg_from": "Troels Arvin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performance in migrated database" }, { "msg_contents": "On Sat, 2004-11-06 at 19:52, Carlos Lopez wrote:\n> The problem is that there are many nested views which\n> normally join tables by using two fields, one\n> character and other integer.\n\nPostgreSQL has difficulty with some multi-column situations, even though\nin general it has a particularly good query optimizer.\n\nIf the first column is poorly selective, yet the addition of the second\ncolumn makes the combination very highly selective then PostgreSQL may\nnot be able to realise this, ANALYZE or not. ANALYZE doesn't have\nanywhere to store multi-column selectivity statistics. \n\nEXPLAIN ANALYZE will show you whether this is the case. It seems likely\nthat the estimated cardinality of certain joins is incorrect.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Sat, 06 Nov 2004 21:39:10 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] poor performance in migrated database" }, { "msg_contents": "On Sat, 2004-11-06 at 12:52, Carlos Lopez wrote:\n> I have migrated a database from MS SQL to a\n> postgresSQL database, but when running it, the results\n> are very slow (and unusable) which is the only reason\n> we don't entirely move to postgresSQL.\n> The problem is that there are many nested views which\n> normally join tables by using two fields, one\n> character and other integer.\n\nIf you are joining on different type fields, you might find the query\nplanner encouraged to use the indexes if you cast one field to the other\nfield's type. 
If that's possible.\n\n\n\n", "msg_date": "Sat, 06 Nov 2004 16:00:15 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] poor performance in migrated database" }, { "msg_contents": "This is one of the queries that work,and is the first\nin a 4 level nested query....\n\nwhere do I find how to interpret explains???\nthanks in advance,\nCarlos.\n\nmate=# explain analyze select * from vdocinvdpre;\n \n \n QUERY PLAN \n \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan vdocinvdpre (cost=265045.23..281225.66\nrows=231149 width=684) (actual\ntime=29883.231..37652.860 rows=210073 loops=1)\n -> Unique (cost=265045.23..278914.17 rows=231149\nwidth=423) (actual time=29883.182..34109.259\nrows=210073 loops=1)\n -> Sort (cost=265045.23..265623.10\nrows=231149 width=423) (actual\ntime=29883.166..31835.849 rows=210073 loops=1)\n Sort Key: no_doc, seq, codigoinv, lote,\nno_rollo, costo_uni, po, cantidad_total, id_pedido,\nid_proveedor, udm, doc_ref, corte, id_planta, accion,\ncosto_total, ubicacion, cantidad_detallada,\ndescripcion, observaciones, factura, fecha_factura,\ncorrelativo\n -> Append (cost=36954.34..60836.63\nrows=231149 width=423) (actual\ntime=4989.382..18277.031 rows=210073 loops=1)\n -> Subquery Scan \"*SELECT* 1\" \n(cost=36954.34..44100.17 rows=79542 width=402) (actual\ntime=4989.371..8786.752 rows=58466 loops=1)\n -> Merge Left Join \n(cost=36954.34..43304.75 rows=79542 width=402) (actual\ntime=4989.341..7767.335 rows=58466 loops=1)\n Merge Cond:\n((\"outer\".seq = \"inner\".seq) AND (\"outer\".\"?column18?\"\n= \"inner\".\"?column6?\"))\n -> Sort \n(cost=29785.78..29925.97 rows=56076 width=366) (actual\ntime=2829.242..3157.807 rows=56076 loops=1)\n Sort Key:\ndocinvdtrims.seq,\nltrim(rtrim((docinvdtrims.no_doc)::text))\n -> Seq Scan on\ndocinvdtrims (cost=0.00..2522.76 rows=56076\nwidth=366) (actual time=17.776..954.557 rows=56076\nloops=1)\n -> Sort \n(cost=7168.56..7310.40 rows=56738 width=60) (actual\ntime=2159.854..2460.061 rows=56738 loops=1)\n Sort Key:\ndocinvdtrimsubica.seq,\nltrim(rtrim((docinvdtrimsubica.no_doc)::text))\n -> Seq Scan on\ndocinvdtrimsubica (cost=0.00..1327.38 rows=56738\nwidth=60) (actual time=14.545..528.530 rows=56738\nloops=1)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=0.00..16736.46 rows=151607 width=423) (actual\ntime=7.731..7721.147 rows=151607 loops=1)\n -> Seq Scan on\ndocinvdrollos (cost=0.00..15220.39 rows=151607\nwidth=423) (actual time=7.699..5109.468 rows=151607\nloops=1)\n Total runtime: 38599.868 ms\n(17 filas)\n\n\n\n--- Simon Riggs <[email protected]> wrote:\n\n> On Sat, 2004-11-06 at 19:52, Carlos Lopez wrote:\n> > The problem is that there are many nested views\n> which\n> > normally join tables by using two fields, one\n> > character and other integer.\n> \n> PostgreSQL has difficulty with some multi-column\n> situations, even though\n> in general it has a particularly good query\n> optimizer.\n> \n> If the first column is poorly selective, yet the\n> addition of the second\n> column makes the combination very highly selective\n> then PostgreSQL may\n> not be able to realise this, ANALYZE or not. ANALYZE\n> doesn't have\n> anywhere to store multi-column selectivity\n> statistics. \n> \n> EXPLAIN ANALYZE will show you whether this is the\n> case. 
It seems likely\n> that the estimated cardinality of certain joins is\n> incorrect.\n> \n> -- \n> Best Regards, Simon Riggs\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose\n> an index scan if your\n> joining column's datatypes do not match\n> \n\n\n=====\n___\nIng. Carlos L���pez Linares\nIT Consultant\nQuieres aprender linux?\nvisita http://www.aprende-linux.com.sv\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Mon, 8 Nov 2004 13:28:41 -0800 (PST)", "msg_from": "Carlos Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] poor performance in migrated database" }, { "msg_contents": "Carlos Lopez <[email protected]> writes:\n> This is one of the queries that work,and is the first\n> in a 4 level nested query....\n\nDo you really need UNION (as opposed to UNION ALL) in this query?\nThe EXPLAIN shows that almost half the runtime is going into the\nsort/uniq to eliminate duplicates ... and according to the row\ncounts, there are no duplicates, so it's wasted effort.\n\nI looked at your schema and saw an awful lot of SELECT DISTINCTs\nthat looked like they might not be necessary, too. But I'm not\nwilling to crawl through 144 views with no information about\nwhich ones are causing you problems. What's a typical query\nthat you are unsatisfied with the performance of?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Nov 2004 18:57:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] poor performance in migrated database " }, { "msg_contents": "Dear Tom,\nthanks for your information.\nWhere can I learn more about the explain and analyze??\nOne view that is giving a lot of problems is vkardex_3\nwhich is used most of the time...\nThe explain analyze I sent is one of the views that\nconform this one.\n\nThanks in advance.\nCarlos Lopez Linares\n\n\n--- Tom Lane <[email protected]> wrote:\n\n> Carlos Lopez <[email protected]> writes:\n> > This is one of the queries that work,and is the\n> first\n> > in a 4 level nested query....\n> \n> Do you really need UNION (as opposed to UNION ALL)\n> in this query?\n> The EXPLAIN shows that almost half the runtime is\n> going into the\n> sort/uniq to eliminate duplicates ... and according\n> to the row\n> counts, there are no duplicates, so it's wasted\n> effort.\n> \n> I looked at your schema and saw an awful lot of\n> SELECT DISTINCTs\n> that looked like they might not be necessary, too. \n> But I'm not\n> willing to crawl through 144 views with no\n> information about\n> which ones are causing you problems. What's a\n> typical query\n> that you are unsatisfied with the performance of?\n> \n> \t\t\tregards, tom lane\n> \n\n\n=====\n___\nIng. Carlos L���pez Linares\nIT Consultant\nQuieres aprender linux?\nvisita http://www.aprende-linux.com.sv\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Tue, 9 Nov 2004 06:56:31 -0800 (PST)", "msg_from": "Carlos Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] poor performance in migrated database " } ]
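Two small sketches of the rewrites suggested in this thread, with made-up table names since the real column lists are not shown here: UNION ALL where the branches cannot produce duplicate rows, and casting a join key so both sides of the condition have the same type.

CREATE TABLE det_a (doc_no varchar(50), qty numeric(10,2));
CREATE TABLE det_b (doc_no varchar(50), qty numeric(10,2));

-- UNION ALL skips the sort/unique pass that plain UNION needs:
SELECT doc_no, qty FROM det_a
UNION ALL
SELECT doc_no, qty FROM det_b;

CREATE TABLE hdr (hdr_id integer PRIMARY KEY);
CREATE TABLE det (hdr_id varchar(20), doc_no varchar(50));
CREATE INDEX det_hdr_id_idx ON det (hdr_id);

-- cast one side so the join compares like types, as suggested above:
SELECT h.hdr_id, d.doc_no
  FROM hdr h
  JOIN det d ON d.hdr_id = h.hdr_id::text;

If the varchar values are guaranteed to be numeric, casting that side to integer instead may let the integer primary key index be used; which direction is possible depends on the data.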
[ { "msg_contents": "Is \n\nSELECT DISTINCT foo, bar FROM baz;\n\nequivalent to\n\nSELECT foo, bar from baz GROUP BY foo, bar;\n\n?\n\nIn the former case, pgsql >= 7.4 does not use HashAgg, but uses it for\nthe latter case. In many circumstances, esp. for large amount of data\nin the table baz, the second case is an order of a magnitude faster.\n\nFor example (pgsql8b4):\n\nregress=# explain analyze select distinct four from tenk1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Unique (cost=1109.39..1159.39 rows=4 width=4) (actual\ntime=90.017..106.936 rows=4 loops=1)\n -> Sort (cost=1109.39..1134.39 rows=10000 width=4) (actual\ntime=90.008..95.589 rows=10000 loops=1)\n Sort Key: four\n -> Seq Scan on tenk1 (cost=0.00..445.00 rows=10000 width=4)\n(actual time=0.027..45.454 rows=10000 loops=1)\n Total runtime: 110.927 ms\n(5 rows)\n\nregress=# explain analyze select distinct four from tenk1 group by four;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Unique (cost=470.04..470.06 rows=4 width=4) (actual\ntime=47.487..47.498 rows=4 loops=1)\n -> Sort (cost=470.04..470.05 rows=4 width=4) (actual\ntime=47.484..47.486 rows=4 loops=1)\n Sort Key: four\n -> HashAggregate (cost=470.00..470.00 rows=4 width=4)\n(actual time=47.444..47.451 rows=4 loops=1)\n -> Seq Scan on tenk1 (cost=0.00..445.00 rows=10000\nwidth=4) (actual time=0.013..31.068 rows=10000 loops=1)\n Total runtime: 47.822 ms\n(6 rows)\n\nIf they're equivalent, can we have pgsql use HashAgg for DISTINCTs?\nYes, I've read planner.c's comments on \"Executor doesn't support\nhashed aggregation with DISTINCT aggregates.\", but I believe using\nHashAgg is better if the product of the columns' n_distinct statistic\nis way less than the number of expected rows.\n", "msg_date": "Sun, 7 Nov 2004 19:04:16 +0800", "msg_from": "Ang Chin Han <[email protected]>", "msg_from_op": true, "msg_subject": "DISTINCT and GROUP BY: possible performance enhancement?" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nWe have two tables, dst_port_hour and dst_port_day, which should be \nvery similar, they both have about 50.000.000 rows. In both tables we \nhave an index for period_id.\n\nWe run postgresql 7.4.5 on a dedicated Debian server, with dual Intel \nXeon 3GHz and 4GB memory.\n\nThe problem is that on the dst_port_day table, postgresql is using \nseqscan, and not the index when it should. Forcing the use of the index \nby setting enable_seqscan to false, makes the query lighthening fast. \nWhen using seqscan, the query takes several minutes. The planner \ncalculates the cost for Index scan to be much more than sequence scan.\n\nWhy is our query planner misbehaving?\n\nHere are the exaplain analyze output with and without index-force:\n\n\nSET enable_seqscan=false;\n\nstager=> explain analyze SELECT cur.portnr FROM dst_port_day cur WHERE \ncur.period_id='2779' GROUP BY cur.portnr ORDER BY SUM(cur.octets) DESC \n LIMIT 5;\n \n QUERY PLAN\n- ------------------------------------------------------------------------ \n- ------------------------------------------------------------------------ \n- ---------------------------------\n Limit (cost=2022664.62..2022664.63 rows=5 width=12) (actual \ntime=831.772..831.816 rows=5 loops=1)\n -> Sort (cost=2022664.62..2022664.82 rows=80 width=12) (actual \ntime=831.761..831.774 rows=5 loops=1)\n Sort Key: sum(octets)\n -> HashAggregate (cost=2022661.89..2022662.09 rows=80 \nwidth=12) (actual time=587.036..663.991 rows=16396 loops=1)\n -> Index Scan using dst_port_day_period_id_key on \ndst_port_day cur (cost=0.00..2019931.14 rows=546150 width=12) (actual \ntime=0.038..303.801 rows=48072 loops=1)\n Index Cond: (period_id = 2779)\n Total runtime: 836.362 ms\n(7 rows)\n\n\n\nSET enable_seqscan=true;\n\nstager=> explain analyze SELECT cur.portnr FROM dst_port_day cur \nWHERE cur.period_id='2779' GROUP BY cur.portnr ORDER BY \nSUM(cur.octets) DESC LIMIT 5;\n \nQUERY PLAN\n- ------------------------------------------------------------------------ \n- ------------------------------------------------------------------------ \n- ------\n Limit (cost=1209426.88..1209426.89 rows=5 width=12) (actual \ntime=299053.006..299053.053 rows=5 loops=1)\n -> Sort (cost=1209426.88..1209427.08 rows=80 width=12) (actual \ntime=299052.995..299053.008 rows=5 loops=1)\n Sort Key: sum(octets)\n -> HashAggregate (cost=1209424.15..1209424.35 rows=80 \nwidth=12) (actual time=298803.273..298881.020 rows=16396 loops=1)\n -> Seq Scan on dst_port_day cur (cost=0.00..1206693.40 \nrows=546150 width=12) (actual time=298299.508..298526.544 rows=48072 \nloops=1)\n Filter: (period_id = 2779)\n Total runtime: 299057.643 ms\n(7 rows)\n\n- -- \nAndreas Åkre Solberg, UNINETT AS Testnett\nContact info and Public PGP Key available on:\nhttp://andreas.solweb.no/?account=Work\n\n-----BEGIN PGP SIGNATURE-----\nVersion: PGP 8.1\nComment: My public key is available at http://andreas.solweb.no\n\niQA/AwUBQY9NBPyFPYEtpdl2EQKIcwCgpPEkZ3PQKWNf6JWP6tQ4eFBPEngAoKTT\n4eGkB0NVyIg0surd1LJdFD7+\n=bYtH\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Mon, 8 Nov 2004 11:40:00 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_=C5kre_Solberg?= <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql is using seqscan when is should use indexes." }, { "msg_contents": "On Mon, 8 Nov 2004 09:40 pm, Andreas Åkre Solberg wrote:\n> We have two tables, dst_port_hour and dst_port_day, which should be\n> very similar, they both have about 50.000.000 rows. 
In both tables we\n> have an index for period_id.\n> \n> We run postgresql 7.4.5 on a dedicated Debian server, with dual Intel\n> Xeon 3GHz and 4GB memory.\n> \n> The problem is that on the dst_port_day table, postgresql is using\n> seqscan, and not the index when it should. Forcing the use of the index\n> by setting enable_seqscan to false, makes the query lighthening fast.\n> When using seqscan, the query takes several minutes. The planner\n> calculates the cost for Index scan to be much more than sequence scan.\n> \n> Why is our query planner misbehaving?\n> \n> Here are the exaplain analyze output with and without index-force:\n> \n> \n> SET enable_seqscan=false;\n> \n> stager=> explain analyze SELECT cur.portnr FROM dst_port_day cur WHERE\n> cur.period_id='2779' GROUP BY cur.portnr ORDER BY SUM(cur.octets) DESC\n> LIMIT 5;\n> \ndst_port_day cur (cost=0.00..2019931.14 rows=546150 width=12) (actual time=0.038..303.801 rows=48072 loops=1)\n\nThe guess of the number of rows returned by the index scan is out by a factor of 10. 500k rows is greater than 1% of\nthe rows, so I think the planner is likely to choose a sequence scan at this amount, unless you have tuned things like\nrandom page cost.\n\nWhat is the selectivity like on that column?\nHave you analyzed recently?\n\nIf so, you should probably increase the statistics on that column\nSee ALTER TABLE SET STATISTICS in the manual.\n\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=2022664.62..2022664.63 rows=5 width=12) (actual time=831.772..831.816 rows=5 loops=1)\n> -> Sort (cost=2022664.62..2022664.82 rows=80 width=12) (actual time=831.761..831.774 rows=5 loops=1)\n> Sort Key: sum(octets)\n> -> HashAggregate (cost=2022661.89..2022662.09 rows=80 width=12) (actual time=587.036..663.991 rows=16396 loops=1)\n> -> Index Scan using dst_port_day_period_id_key on dst_port_day cur (cost=0.00..2019931.14 rows=546150 width=12) (actual time=0.038..303.801 rows=48072 loops=1)\n> Index Cond: (period_id = 2779)\n> Total runtime: 836.362 ms\n> (7 rows)\n> \n> \n> \n> SET enable_seqscan=true;\n> \n> stager=> explain analyze SELECT cur.portnr FROM dst_port_day cur WHERE cur.period_id='2779' GROUP BY cur.portnr ORDER BY SUM(cur.octets) DESC LIMIT 5;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=1209426.88..1209426.89 rows=5 width=12) (actual time=299053.006..299053.053 rows=5 loops=1)\n> -> Sort (cost=1209426.88..1209427.08 rows=80 width=12) (actual time=299052.995..299053.008 rows=5 loops=1)\n> Sort Key: sum(octets)\n> -> HashAggregate (cost=1209424.15..1209424.35 rows=80 width=12) (actual time=298803.273..298881.020 rows=16396 loops=1)\n> -> Seq Scan on dst_port_day cur (cost=0.00..1206693.40 rows=546150 width=12) (actual time=298299.508..298526.544 rows=48072 loops=1)\n> Filter: (period_id = 2779)\n> Total runtime: 299057.643 ms\n> (7 rows)\n> \n\nRegards\n\nRussell Smith\n", "msg_date": "Mon, 8 Nov 2004 23:04:04 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is using seqscan when is should use indexes." } ]
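A sketch of the statistics-target suggestion, reusing the query from this thread; the target of 100 is only an example (7.4's default is 10), and ANALYZE must be re-run for it to take effect:

ALTER TABLE dst_port_day ALTER COLUMN period_id SET STATISTICS 100;
ANALYZE dst_port_day;

EXPLAIN ANALYZE
SELECT cur.portnr
  FROM dst_port_day cur
 WHERE cur.period_id = '2779'
 GROUP BY cur.portnr
 ORDER BY SUM(cur.octets) DESC
 LIMIT 5;

If the row estimate for the index condition moves closer to the actual ~48 000 rows, the planner should prefer the index scan without needing enable_seqscan = false.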
[ { "msg_contents": "The ext3fs allows to selet type of journalling to be used with\nfilesystem. Journalling pretty much \"mirrors\" the work of WAL\nlogging by PostgreSQL... I wonder which type of journalling\nis best for PgSQL in terms of performance.\nChoices include:\n journal\n All data is committed into the journal prior to being\n written into the main file system.\n ordered\n This is the default mode. All data is forced directly\n out to the main file system prior to its metadata being\n committed to the journal.\n writeback\n Data ordering is not preserved - data may be written into\n the main file system after its metadata has been commit-\n ted to the journal. This is rumoured to be the highest-\n throughput option. It guarantees internal file system\n integrity, however it can allow old data to appear in\n files after a crash and journal recovery.\n\nAm I right to assume that \"writeback\" is both fastest and at the same\ntime as safe to use as ordered? Maybe any of you did some benchmarks?\n\nRegards,\n Dawid\n", "msg_date": "Mon, 8 Nov 2004 13:26:09 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": true, "msg_subject": "ext3 journalling type" }, { "msg_contents": "Dawid Kuroczko wrote:\n> The ext3fs allows to selet type of journalling to be used with\n> filesystem. Journalling pretty much \"mirrors\" the work of WAL\n> logging by PostgreSQL... I wonder which type of journalling\n> is best for PgSQL in terms of performance.\n> Choices include:\n> journal\n> All data is committed into the journal prior to being\n> written into the main file system.\n> ordered\n> This is the default mode. All data is forced directly\n> out to the main file system prior to its metadata being\n> committed to the journal.\n> writeback\n> Data ordering is not preserved - data may be written into\n> the main file system after its metadata has been commit-\n> ted to the journal. This is rumoured to be the highest-\n> throughput option. It guarantees internal file system\n> integrity, however it can allow old data to appear in\n> files after a crash and journal recovery.\n> \n> Am I right to assume that \"writeback\" is both fastest and at the same\n> time as safe to use as ordered? Maybe any of you did some benchmarks?\n\nYes. I have seen benchmarks that say writeback is fastest but I don't\nhave any numbers handy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 8 Nov 2004 09:04:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 journalling type" }, { "msg_contents": "> Am I right to assume that \"writeback\" is both fastest and at \n> the same time as safe to use as ordered? Maybe any of you \n> did some benchmarks?\n\nIt should be fastest because it is the least overhead, and safe because\npostgres does it's own write-order guaranteeing through fsync(). You should\nalso mount the FS with the 'noatime' option.\n\nBut.... For some workloads, there are tests showing that 'data=journal' can\nbe the fastest! 
This is because although the data is written twice (once to\nthe journal, and then to its real location on disk) in this mode data is\nwritten _sequentially_ to the journal, and later written out to its\ndestination, which may be at a quieter time.\n\nThere's a discussion (based around 7.2) here:\nhttp://www.kerneltraffic.org/kernel-traffic/kt20020401_160.txt\n\nM\n\n", "msg_date": "Mon, 8 Nov 2004 15:14:04 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 journalling type" }, { "msg_contents": "I have some data here, no detailed analyses though:\n\n\thttp://www.osdl.org/projects/dbt2dev/results/fs/\n\nMark\n\nOn Mon, Nov 08, 2004 at 01:26:09PM +0100, Dawid Kuroczko wrote:\n> The ext3fs allows to selet type of journalling to be used with\n> filesystem. Journalling pretty much \"mirrors\" the work of WAL\n> logging by PostgreSQL... I wonder which type of journalling\n> is best for PgSQL in terms of performance.\n> Choices include:\n> journal\n> All data is committed into the journal prior to being\n> written into the main file system.\n> ordered\n> This is the default mode. All data is forced directly\n> out to the main file system prior to its metadata being\n> committed to the journal.\n> writeback\n> Data ordering is not preserved - data may be written into\n> the main file system after its metadata has been commit-\n> ted to the journal. This is rumoured to be the highest-\n> throughput option. It guarantees internal file system\n> integrity, however it can allow old data to appear in\n> files after a crash and journal recovery.\n> \n> Am I right to assume that \"writeback\" is both fastest and at the same\n> time as safe to use as ordered? Maybe any of you did some benchmarks?\n> \n> Regards,\n> Dawid\n> \n\n", "msg_date": "Mon, 8 Nov 2004 08:29:59 -0800", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 journalling type" }, { "msg_contents": "Matt,\n\n> It should be fastest because it is the least overhead, and safe because\n> postgres does it's own write-order guaranteeing through fsync().  You\n> should also mount the FS with the 'noatime' option.\n\nThis, of course, assumes that PostgreSQL is the only thing on the partition. \nWhich is a good idea in general, but not to be taken for granted ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 8 Nov 2004 09:38:56 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 journalling type" } ]
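For reference, a hedged example of the mount options discussed above, for a filesystem holding only the PostgreSQL data directory; the device name and mount point are placeholders, and the journal mode can only be chosen at mount time:

# /etc/fstab
/dev/sdb1   /var/lib/pgsql   ext3   noatime,data=writeback   1 2
# or, to try the sequential-journal behaviour Matt describes:
# /dev/sdb1   /var/lib/pgsql   ext3   noatime,data=journal   1 2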
[ { "msg_contents": "> > Good, I'll give it a shot and see what I come up with...thx.\n> >\n> Do share your experience with us.\n\nWill do. I have to ship the server on Friday, and the parts are on\norder. If they come today, I'll have time to test Gentoo, Redhat 32/64,\nand win32 by then. If I can't get it built until tomorrow,\nunfortunately the Gentoo test will have to be skipped.\n\nThe win32 test is forced because our clients prefer win32 and I have to\njustify any platform change with a reasonable performance advantage. I\nhave to compile and install a lot of software (including subversion,\nwhich I'm using to manage our application binaries), and I'm wary of 64\nbit library issues which will hold me up. Any major roadblocks and I'll\nbe forced to drop the test.\n\nWhen I'm finished I'll throw a link to this list, probably Friday.\n\nMerlin\n", "msg_date": "Mon, 8 Nov 2004 08:20:11 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql amd-64" } ]
[ { "msg_contents": "Greetings all,\n\nThis question has probably been asked many times, but I was unable to \nuse the list archives to search, since the term \"Group\" matches \nthousands of of messages with the names of user groups in them... so \nsorry if I'm repeating!\n\nHere's the problem: I have a table of 10,000,000 records called \n\"indethom\", each record representing a word in the works of a \nparticular author. Each record contains, among other columns, an \nCHAR(5) column representing the \"lemma\" code (i.e. which word it is) \ncalled \"codelemm\", and an integer representing a textual unit, i.e. \nchapter or other division of a work (these are numbered consecutively \nfrom 0 to around 50,000), called \"sectref\". What I want to do is find \nout how many times every word occurs in each textual unit (or no row \nreturned for textual units where a particular word doesn't appear). I \nused a group-by clause to group by \"sectref\", and then used the \nCOUNT(codelemm) function to sum up the occurrences. The codelemm \ncolumn had to be grouped on, in order to satisfy Postgres's \nrequirements. Here's the query as I have it:\n\n > create table matrix2.tuo as select codelemm, sectref, count(codelemm) \nfrom indethom group by codelemm, sectref;\n\nAnd the explain results are as follows:\n\n >it=> explain select codelemm, sectref, count(codelemm) from indethom \ngroup by codelemm, sectref;\n > QUERY PLAN\n >----------------------------------------------------------------------- \n---------\n > GroupAggregate (cost=2339900.60..2444149.44 rows=1790528 width=13)\n > -> Sort (cost=2339900.60..2364843.73 rows=9977252 width=13)\n > Sort Key: codelemm, sectref\n > -> Seq Scan on indethom (cost=0.00..455264.52 rows=9977252 \nwidth=13)\n\nI have an index defined as follows:\n\n > create index indethom_clemm_sect_ndx on indethom using \nbtree(codelemm, sectref);\n\nI also performed an ANALYZE after creating the index.\n\nI have the gut feeling that there's got to be a better way than a \nsequence scan on 10,000,000 records, but I'll be darned if I can find \nany way to improve things here.\n\nThanks for any help you all can offer!!\n\nErik Norvelle\n\n", "msg_date": "Tue, 9 Nov 2004 01:09:58 +0100", "msg_from": "Erik Norvelle <[email protected]>", "msg_from_op": true, "msg_subject": "Slow performance with Group By" }, { "msg_contents": "Erik Norvelle <[email protected]> writes:\n>>> it=> explain select codelemm, sectref, count(codelemm) from indethom \n> group by codelemm, sectref;\n>>> QUERY PLAN\n>>> ----------------------------------------------------------------------- \n> ---------\n>>> GroupAggregate (cost=2339900.60..2444149.44 rows=1790528 width=13)\n>>> -> Sort (cost=2339900.60..2364843.73 rows=9977252 width=13)\n>>> Sort Key: codelemm, sectref\n>>> -> Seq Scan on indethom (cost=0.00..455264.52 rows=9977252 \n> width=13)\n\nActually the painful part of that is the sort. If you bump up sort_mem\nenough it will eventually switch over to a HashAggregate with no sort,\nwhich may be a better plan if there's not too many groups (is the\nestimate of 1.79 million on the mark at all??)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Nov 2004 19:56:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow performance with Group By " } ]
[ { "msg_contents": "Thanks in advance for anything you can do to help.\n\n\nThe real issue is this, we have THE SAME queries taking anywhere from .001 - \n90.0 seconds... the server is using 98% of the available RAM at all times \n(because of the persistant connections via php), and I don't know what to \ndo. Every time I change a .conf setting I feel like it slows it down even \nmore.... and I can't seem to find the balance. I'll give you everything \nI've got, and I hope to god someone can point out some areas where I could \nimprove the speed of queries overall.\n\nFor months I've been optimizing my queries, and working around places that I \ndon't need them. They are so incredibly honed, I couldn't even begin to \nexplain.... and when there are less than 10 users browsing our sites, they \nare LIGHTENING fast... even with 5x the amount of dummy data in the \ndatabase(s)... so basically the LARGEST factor in this whole performance \nissue that I can find is the number of users browsing the sites at all \ntimes... but lowering shared_buffers to raise max_connections is hurting \nperformance immensley... so I\"m totally lost.... please help!! Bless you!\n\n\nTHE DETAILS:\n\n\n(for the databases, i'll list only the 'main' tables... as the others are \nfairly small)\nDatabase 1:\n 5000 'users'\n 20,000 'threads'\n 500,000 'posts'\n ...\nDatabase 2: (just starting out)\n 150 'users'\n 150 'entries'\n ...\n\nHardware :\n Pentium 4 2.44ghz\n 1.5gb RAM\n 7200rpm SATA\n\nSoftware:\n Redhat Linux (kernel v. 2.4.21-9.EL)\n Postgresql 7.4.2\n PHP 4.3.6 (using persistant connections to pgsql)\n\nUsage:\n uptime: 12:23:08 up 132 days, 19:16, 2 users, load average: 19.75, \n17.34, 18.86\n roughly 100-200 users connected to our server at any given moment\n roughly 10-15 queries per HTTP page load\n\n\n----------------------------------------------------------\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. 
If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\".\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\ntcpip_socket = true\nmax_connections = 75\n # note: increasing max_connections costs about 500 bytes of shared\n # memory per connection slot, in addition to costs from \nshared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\nport = 5432\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#virtual_host = '' # what interface to listen on; defaults to \nany\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\nssl = true\npassword_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\n\nshared_buffers = 8192 # min 16, at least max_connections*2, 8KB \neach\nsort_mem = 8192 # min 64, size in KB\nvacuum_mem = 4096 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 3052 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or \nopen_datasync\nwal_buffers = 192 # min 4, 8KB each\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\nenable_seqscan = false\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 131072 # typically 8KB each\n\nrandom_page_cost = 2 # units are one sequential page fetch cost\n\n\n\ncpu_tuple_cost = .01 # (same) default .01\ncpu_index_tuple_cost = .001 # (same) default .001\ncpu_operator_cost = 0.0025 # (same) default .0025\n\n# - Genetic Query Optimizer -\n\ngeqo = true\ngeqo_threshold = 20\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND 
LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\n#syslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n# - When to Log -\n\nclient_min_messages = error # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\nlog_min_messages = error # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, \nfatal,\n # panic\n\nlog_error_verbosity = default # terse, default, or verbose messages\n\nlog_min_error_statement = panic # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, \npanic(off)\n\nlog_min_duration_statement = -1 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. Zero prints all queries.\n # Minus-one disables.\n\n#silent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\ndebug_print_parse = false\ndebug_print_rewritten = false\ndebug_print_plan = false\ndebug_pretty_print = false\nlog_connections = false\nlog_duration = false\nlog_pid = false\nlog_statement = false\nlog_timestamp = false\nlog_hostname = false\nlog_source_port = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\nlog_parser_stats = false\nlog_planner_stats = false\nlog_executor_stats = false\nlog_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = false\nstats_command_string = false\nstats_block_level = false\nstats_row_level = false\nstats_reset_on_server_start = false\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment \nsetting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'en_US.UTF-8' # locale for system error message \nstrings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n# - Other Defaults -\n\nexplain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = 
true\nregex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n\n------\nI've been tweaking the postgresql.conf file for about 5 hours... just today. \nWe've had problems in the past (and I've also emailed this list in the past, \nbut perhaps I failed to ask the right questions)....\n\nI guess I need some help with the postgresql configuration file. I would \nlike to start off by asking that you not link me to the same basic .CONF \noverview, as what I really need at this point is real-world experience and \nwisdom, as opposed to cold, poorly documented, and incredibly abstract \n(trial-and-error) type manual entries. \n\n", "msg_date": "Tue, 9 Nov 2004 13:01:42 -0600", "msg_from": "\"Shane | SkinnyCorp\" <[email protected]>", "msg_from_op": true, "msg_subject": "Need advice on postgresql.conf settings" }, { "msg_contents": "\"Shane | SkinnyCorp\" <[email protected]> writes:\n> The real issue is this, we have THE SAME queries taking anywhere from .001 - \n> 90.0 seconds... the server is using 98% of the available RAM at all times \n> (because of the persistant connections via php), and I don't know what to \n> do.\n\nI have a feeling that the answer is going to boil down to \"buy more RAM\"\n--- it sounds a lot like you're just overstressing your server. The\nmore active backends you have, the more RAM goes to process-local\nmemory, and the less is available for kernel disk cache. Even if you\ndon't go into outright swapping, the amount of disk I/O needed goes up\nthe smaller the kernel disk cache gets.\n\nAnother possible line of attack is to use persistent (pooled)\nconnections to cut down the number of live backend processes you need.\nHowever, depending on what your application software is, that might\ntake more time/effort (= money) than dropping in some more RAM.\n\nYou can investigate this theory by watching \"top\" output (the first few\nlines about memory usage, not the process listing) as well as \"vmstat\"\noutput.\n\n> uptime: 12:23:08 up 132 days, 19:16, 2 users, load average: 19.75, \n> 17.34, 18.86\n\nLoad averages approaching 20 are not good either ... what sort of box\nare you running on anyway?\n\nAs for the postgresql.conf settings, the only ones I'd seriously question\nare\n\nmax_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min 100, ~50 bytes each\n\nThese are the defaults, and are probably too small for a DB exceeding a\nhundred meg or so.\n\nmax_files_per_process = 3052 # min 25\n\nYou really have your kernel set to support 3052 * 75 simultaneously open\nfiles? Back this off. I doubt values beyond a couple hundred buy\nanything except headaches.\n\nwal_buffers = 192 \n\nThis is an order-of-magnitude overkill too, especially if your\ntransactions are mostly small. I know it's only a megabyte or two,\nbut you evidently need that RAM more elsewhere.\n\nenable_seqscan = false\n\nI don't think this is a good idea in general.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Nov 2004 16:56:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need advice on postgresql.conf settings " }, { "msg_contents": ">> The real issue is this, we have THE SAME queries taking anywhere from \n>> .001 -\n>> 90.0 seconds... 
the server is using 98% of the available RAM at all \n>> times\n>> (because of the persistant connections via php), and I don't know \n>> what to\n>> do.\n>\n> Another possible line of attack is to use persistent (pooled)\n> connections to cut down the number of live backend processes you need.\n> However, depending on what your application software is, that might\n> take more time/effort (= money) than dropping in some more RAM.\n\nThis particular feature is pure evilness. Using all of my fingers and \ntoes, I can't count the number of times I've had a client do this and \nget themselves into a world of hurt. Somewhere in the PHP \ndocumentation, there should be a big warning wrapped in the blink tag \nthat steers people away from setting this. The extra time necessary to \nsetup a TCP connection is less than the performance drag induced on the \nbackend when persistent connections are enabled. Reread that last \nsentence until it sinks in. On a local network, this is premature \noptimization that's hurting you.\n\n> max_files_per_process = 3052 # min 25\n>\n> You really have your kernel set to support 3052 * 75 simultaneously \n> open\n> files? Back this off. I doubt values beyond a couple hundred buy\n> anything except headaches.\n\nThis, on the other hand, has made a large difference for me. Time \nnecessary to complete open(2) calls can be expensive, especially when \nthe database is poorly designed and is touching many different parts of \nthe database spread across multiple files on the backend. 3000 is \nhigh, but I've found 500 to be vastly too low in some cases... in \nothers, it's just fine. My rule of thumb has become, if you're doing \nlots of aggregate functions (ex, SUM(), COUNT()) more than once in the \nlifetime of a backend, increasing this value helps.. otherwise it buys \nyou little (if so, 1500 is generally sufficient). Faster IO, however, \nis going to save you here. If you can, increase your disk caching in \nthe OS. On FreeBSD, increase your KVA_PAGES and NBUFs. Since you've \nfreed up more ram by disabling persistent connections, this shouldn't \nbe a problem. -sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Tue, 9 Nov 2004 14:44:00 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need advice on postgresql.conf settings " }, { "msg_contents": "\nOn Nov 9, 2004, at 2:01 PM, Shane | SkinnyCorp wrote:\n\n> Thanks in advance for anything you can do to help.\n>\n>\n> The real issue is this, we have THE SAME queries taking anywhere from \n> .001 - 90.0 seconds... the server is using 98% of the available RAM at \n> all times (because of the persistant connections via php), and I don't \n> know what to do. Every time I change a\n\nI'd recommend strongly ditching the use of pconnect and use pgpool + \nregular connect. 
It is a terrific combination that provides pool \nconnections like how you'd think they shoudl work (a pool of N \nconnections to PG shared by Y processes instead of a 1:1 mapping).\n\ncuriously, have you noticed any pattern to the slowdown?\nIt could be induced by a checkpoint or vacuum.\n\nAre you swapping at all?\n\nAre your PHP scripts leaking at all, etc.?\n\nYour load average is high, how does your CPU idle look (if load is \nhigh, and the cpus are pretty idle that is an indicator of being IO \nbound).\n\ngood luck.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Wed, 10 Nov 2004 08:04:03 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need advice on postgresql.conf settings" }, { "msg_contents": "Im PostgreSQL 7.2.2 / Linux 2.4.27 dual-processor Pentium III 900MHz,\n\nwe have this table:\n\ncreate table testtable (id SERIAL PRIMARY KEY, coni VARCHAR(255), date TIMESTAMP, direction VARCHAR(255), partner VARCHAR(255), type VARCHAR(255), block VARCHAR(255) );\n\n\nWe using Java with JDBC-driver pg72jdbc2.jar\n\n\nour Java-testgrogram is :\n\n\npublic class Stresser implements Runnable {\n public void run() {\n System.out.println(\"-> start\");\n try {\n\t\n\t Class.forName(\"org.postgresql.Driver\");\n\t Connection con = DriverManager.getConnection(\"jdbc:postgresql://\"+prop.getProperty(\"host\")+\":\"+prop.getProperty(\"port\")+\"/\"+prop.getProperty(\"dbname\"), prop.getProperty(\"user\"), \nprop.getProperty(\"pwd\"));\n\t con.setAutoCommit(true);\n\t Statement st = con.createStatement();\n\t java.sql.Timestamp datum = new java.sql.Timestamp(new Date().getTime());\n\t Date start = new Date();\n\t System.out.println(start);\n\t for (int i=0; i<100; ++i) {\n\t st.executeUpdate(\"insert into history(uuid,coni,date,direction,partner,type) values('uuid','content','\"+datum+\"','dir','partner','type')\");\n\t }\n\t Date end = new Date();\n\t System.out.println(end);\n\t con.close();\n } catch (Exception e) {\n System.out.println(\"Exception!\");\n e.printStackTrace();\n }\n System.out.println(\"-> ende\");\n }\n\n public static void main(String[] args) {\n\n for (int i=0; i<10; ++i) {\n Stresser s = new Stresser();\n Thread t = new Thread(s);\n t.start();\n }\n }\n}\n\n\nIt is trunning in in 10 Threads. Each thread makes 100 Inserts:\n\nFor the 1000 Inserts (10 threads a 100 inserts)\nwe need 8 seconds.\nThat's 125 Insets / Seconds.\n\nHow could we make it faster ?\n\nInserting 1000 rows via INSERT AS SELECT is much faster.\n\nregards\n Michael\n", "msg_date": "Wed, 10 Nov 2004 14:51:57 +0100", "msg_from": "Michael Kleiser <[email protected]>", "msg_from_op": false, "msg_subject": "How to speed-up inserts with jdbc" }, { "msg_contents": "\nOn Nov 10, 2004, at 8:51 AM, Michael Kleiser wrote:\n> It is trunning in in 10 Threads. 
Each thread makes 100 Inserts:\n>\n> For the 1000 Inserts (10 threads a 100 inserts)\n> we need 8 seconds.\n> That's 125 Insets / Seconds.\n> How could we make it faster ?\n>\n\nBatch the inserts up into a transaction.\n\nSo you'd have\nBEGIN\ninsert\ninsert\ninsert\n...\nCOMMIT\n\nYour numbers will suddenly sky rocket.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Wed, 10 Nov 2004 08:55:01 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed-up inserts with jdbc" }, { "msg_contents": "couple of things\n\n1) That is a fairly old version of postgres, there are considerable \nperformance improvements in the last 2 releases since, and even more in \nthe pending release.\n2) If you are going to insert more rows than that, consider dropping the \nindex before, and recreating after the insert.\n\nDave\n\nMichael Kleiser wrote:\n\n> Im PostgreSQL 7.2.2 / Linux 2.4.27 dual-processor Pentium III 900MHz,\n>\n> we have this table:\n>\n> create table testtable (id SERIAL PRIMARY KEY, coni VARCHAR(255), date \n> TIMESTAMP, direction VARCHAR(255), partner VARCHAR(255), type \n> VARCHAR(255), block VARCHAR(255) );\n>\n>\n> We using Java with JDBC-driver pg72jdbc2.jar\n>\n>\n> our Java-testgrogram is :\n>\n>\n> public class Stresser implements Runnable {\n> public void run() {\n> System.out.println(\"-> start\");\n> try {\n> \n> Class.forName(\"org.postgresql.Driver\");\n> Connection con = \n> DriverManager.getConnection(\"jdbc:postgresql://\"+prop.getProperty(\"host\")+\":\"+prop.getProperty(\"port\")+\"/\"+prop.getProperty(\"dbname\"), \n> prop.getProperty(\"user\"), prop.getProperty(\"pwd\"));\n> con.setAutoCommit(true);\n> Statement st = con.createStatement();\n> java.sql.Timestamp datum = new java.sql.Timestamp(new \n> Date().getTime());\n> Date start = new Date();\n> System.out.println(start);\n> for (int i=0; i<100; ++i) {\n> st.executeUpdate(\"insert into \n> history(uuid,coni,date,direction,partner,type) \n> values('uuid','content','\"+datum+\"','dir','partner','type')\");\n> }\n> Date end = new Date();\n> System.out.println(end);\n> con.close();\n> } catch (Exception e) {\n> System.out.println(\"Exception!\");\n> e.printStackTrace();\n> }\n> System.out.println(\"-> ende\");\n> }\n>\n> public static void main(String[] args) {\n>\n> for (int i=0; i<10; ++i) {\n> Stresser s = new Stresser();\n> Thread t = new Thread(s);\n> t.start();\n> }\n> }\n> }\n>\n>\n> It is trunning in in 10 Threads. 
Each thread makes 100 Inserts:\n>\n> For the 1000 Inserts (10 threads a 100 inserts)\n> we need 8 seconds.\n> That's 125 Insets / Seconds.\n>\n> How could we make it faster ?\n>\n> Inserting 1000 rows via INSERT AS SELECT is much faster.\n>\n> regards\n> Michael\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Wed, 10 Nov 2004 09:05:37 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed-up inserts with jdbc" }, { "msg_contents": "On Wed, 10 Nov 2004 14:51:57 +0100, Michael Kleiser <[email protected]> wrote:\n> Statement st = con.createStatement();\n> java.sql.Timestamp datum = new java.sql.Timestamp(new Date().getTime());\n> Date start = new Date();\n> System.out.println(start);\n> for (int i=0; i<100; ++i) {\n> st.executeUpdate(\"insert into history(uuid,coni,date,direction,partner,type) values('uuid','content','\"+datum+\"','dir','partner','type')\");\n> }\n\nhow about using PreparedStatment? that's on the java end.\non the pg end, maybe do a BEGIN before the for loop and \nEND at the end of the for loop.\n-- \ni'm not flying. i'm falling... in style.\n", "msg_date": "Thu, 11 Nov 2004 16:04:06 +0800", "msg_from": "Edwin Eyan Moragas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed-up inserts with jdbc" }, { "msg_contents": "On Thu, Nov 11, 2004 at 04:04:06PM +0800, Edwin Eyan Moragas wrote:\n> how about using PreparedStatment? that's on the java end.\n> on the pg end, maybe do a BEGIN before the for loop and \n> END at the end of the for loop.\n\nYou don't even need a \"BEGIN\" and \"END\"; his code has a setAutoComit(true)\nbefore the for loop, which just has to be changed to setAutoCommit(false)\n(and add an explicit commit() after the for loop, of course).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 11 Nov 2004 11:04:18 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed-up inserts with jdbc" }, { "msg_contents": "On Thu, 11 Nov 2004 11:04:18 +0100, Steinar H. Gunderson\n<[email protected]> wrote:\n> You don't even need a \"BEGIN\" and \"END\"; his code has a setAutoComit(true)\n> before the for loop, which just has to be changed to setAutoCommit(false)\n> (and add an explicit commit() after the for loop, of course).\n\namen. i stand corrected.\n\n-eem\n", "msg_date": "Fri, 12 Nov 2004 05:10:57 +0800", "msg_from": "Edwin Eyan Moragas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed-up inserts with jdbc" } ]
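For reference, the transaction batching that setAutoCommit(false) plus an explicit commit() produces can be sketched in plain SQL; the table definition here is only a guess based on the INSERT statement in the thread:

CREATE TABLE history (
    uuid       varchar(50),
    coni       varchar(255),
    date       timestamp,
    direction  varchar(255),
    partner    varchar(255),
    type       varchar(255)
);

BEGIN;
INSERT INTO history (uuid, coni, date, direction, partner, type)
    VALUES ('uuid', 'content', now(), 'dir', 'partner', 'type');
INSERT INTO history (uuid, coni, date, direction, partner, type)
    VALUES ('uuid', 'content', now(), 'dir', 'partner', 'type');
-- ... one INSERT per row ...
COMMIT;

With fsync on, one commit per batch means one synchronous WAL flush per batch instead of one per row, which is typically where most of the time goes with many small autocommitted inserts.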
[ { "msg_contents": "I'm wondering if there's any way I can tweak things so that the estimate\nfor the query is more accurate (I have run analyze):\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=2712755.92..2713043.69 rows=12790 width=24)\n -> Nested Loop (cost=2997.45..2462374.58 rows=9104776 width=24)\n Join Filter: ((\"outer\".prev_end_time < ms_t(\"inner\".tick)) AND (\"outer\".end_time >= ms_t(\"inner\".tick)))\n -> Seq Scan on bucket b (cost=0.00..51.98 rows=1279 width=20)\n Filter: ((rrd_id = 1) AND (end_time <= '2004-11-09 16:04:00-06'::timestamp with time zone) AND (end_time > '2004-11-08 16:31:00-06'::timestamp with time zone))\n -> Materialize (cost=2997.45..3638.40 rows=64095 width=28)\n -> Hash Join (cost=94.31..2997.45 rows=64095 width=28)\n Hash Cond: (\"outer\".alert_def_id = \"inner\".id)\n -> Seq Scan on alert (cost=0.00..1781.68 rows=64068 width=28)\n -> Hash (cost=88.21..88.21 rows=2440 width=8)\n -> Hash Join (cost=1.12..88.21 rows=2440 width=8)\n Hash Cond: (\"outer\".alert_type_id = \"inner\".id)\n -> Seq Scan on alert_def d (cost=0.00..44.39 rows=2439 width=8)\n -> Hash (cost=1.10..1.10 rows=10 width=4)\n -> Seq Scan on alert_type t (cost=0.00..1.10 rows=10 width=4)\n(15 rows)\n\nopensims=# set enable_seqscan=false;\nSET\nopensims=# explain analyze SELECT a.rrd_bucket_id, alert_type_id\nopensims-# , count(*), count(*), count(*), min(ci), max(ci), sum(ci), min(rm), max(rm), sum(rm)\nopensims-# FROM\nopensims-# (SELECT b.bucket_id AS rrd_bucket_id, s.*\nopensims(# FROM rrd.bucket b\nopensims(# JOIN alert_def_type_v s\nopensims(# ON (\nopensims(# b.prev_end_time < tick_tsz\nopensims(# AND b.end_time >= tick_tsz )\nopensims(# WHERE b.rrd_id = '1'\nopensims(# AND b.end_time <= '2004-11-09 16:04:00-06'\nopensims(# AND b.end_time > '2004-11-08 16:31:00-06'\nopensims(# ) a\nopensims-# GROUP BY rrd_bucket_id, alert_type_id;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3787628.37..3787916.15 rows=12790 width=24) (actual time=202.045..215.197 rows=5234 loops=1)\n -> Hash Join (cost=107.76..3537247.03 rows=9104776 width=24) (actual time=10.728..147.415 rows=17423 loops=1)\n Hash Cond: (\"outer\".alert_def_id = \"inner\".id)\n -> Nested Loop (cost=0.00..3377768.38 rows=9104775 width=24) (actual time=0.042..93.512 rows=17423 loops=1)\n -> Index Scan using rrd_bucket__rrd_id__end_time on bucket b (cost=0.00..101.62 rows=1279 width=20) (actual time=0.018..3.040 rows=1413 loops=1)\n Index Cond: ((rrd_id = 1) AND (end_time <= '2004-11-09 16:04:00-06'::timestamp with time zone) AND (end_time > '2004-11-08 16:31:00-06'::timestamp with time zone))\n -> Index Scan using alert__tick_tsz on alert (cost=0.00..2498.49 rows=7119 width=28) (actual time=0.006..0.030 rows=12 loops=1413)\n Index Cond: ((\"outer\".prev_end_time < ms_t(alert.tick)) AND (\"outer\".end_time >= ms_t(alert.tick)))\n -> Hash (cost=101.66..101.66 rows=2440 width=8) (actual time=10.509..10.509 rows=0 loops=1)\n -> Hash Join (cost=3.13..101.66 rows=2440 width=8) (actual time=0.266..8.499 rows=2439 loops=1)\n Hash Cond: (\"outer\".alert_type_id = \"inner\".id)\n -> Index Scan using alert_def_pkey on alert_def d (cost=0.00..55.83 rows=2439 width=8) (actual time=0.009..3.368 rows=2439 
loops=1)\n -> Hash (cost=3.11..3.11 rows=10 width=4) (actual time=0.061..0.061 rows=0 loops=1)\n -> Index Scan using alert_type_pkey on alert_type t (cost=0.00..3.11 rows=10 width=4) (actual time=0.018..0.038 rows=10 loops=1)\n Total runtime: 218.644 ms\n(15 rows)\n\nopensims=# \n\nI'd really like to avoid putting a 'set enable_seqscan=false' in my\ncode, especially since this query only has a problem if it's run on a\nlarge date/time window, which normally doesn't happen.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 9 Nov 2004 16:23:45 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "seqscan strikes again" }, { "msg_contents": "> opensims=# \n> \n> I'd really like to avoid putting a 'set enable_seqscan=false' in my\n> code, especially since this query only has a problem if it's run on a\n> large date/time window, which normally doesn't happen.\n\nTry increasing your statistics target for the column and then rerunning \nanalyze.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Tue, 09 Nov 2004 15:14:36 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seqscan strikes again" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I'm wondering if there's any way I can tweak things so that the estimate\n> for the query is more accurate (I have run analyze):\n\n> -> Index Scan using alert__tick_tsz on alert (cost=0.00..2498.49 rows=7119 width=28) (actual time=0.006..0.030 rows=12 loops=1413)\n> Index Cond: ((\"outer\".prev_end_time < ms_t(alert.tick)) AND (\"outer\".end_time >= ms_t(alert.tick)))\n\nCan you alter the data representation? 7.4 doesn't have any stats about\nfunctional indexes and so it's not likely to come up with a good number\nabout the selectivity of the index on ms_t(tick). It might be worth\nmaterializing that value as a plain column and indexing the column.\n\n(This being a join, I'm not sure it would help any, but it seems worth\ntrying.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Nov 2004 18:24:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seqscan strikes again " }, { "msg_contents": "Which column would you recommend? Did something stick out at you?\n\nOn Tue, Nov 09, 2004 at 03:14:36PM -0800, Joshua D. Drake wrote:\n> \n> >opensims=# \n> >\n> >I'd really like to avoid putting a 'set enable_seqscan=false' in my\n> >code, especially since this query only has a problem if it's run on a\n> >large date/time window, which normally doesn't happen.\n> \n> Try increasing your statistics target for the column and then rerunning \n> analyze.\n> \n> Sincerely,\n> \n> Joshua D. Drake\n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! 
www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Wed, 10 Nov 2004 11:00:55 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seqscan strikes again" }, { "msg_contents": "Jim C. Nasby wrote:\n > I'm wondering if there's any way I can tweak things so that the estimate\n > for the query is more accurate (I have run analyze):\n\nCan you post your configuration file ? I'd like to see for example your\nsettings about: random_page_cost and effective_cache_size.\n\n\n\n\nRegards\nGaetano Mendola\n", "msg_date": "Thu, 11 Nov 2004 08:58:27 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seqscan strikes again" } ]
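A minimal sketch of the two suggestions made in the thread above, using the table names from the plans. Which column(s) to raise the statistics target on was left open in the thread, and the name and timestamptz type of the materialized column are guesses based on the comparisons in the join filter:

-- Raise the statistics target on the bucket columns used in the join filter,
-- then re-analyze (200 is only an example value):
ALTER TABLE rrd.bucket ALTER COLUMN end_time SET STATISTICS 200;
ALTER TABLE rrd.bucket ALTER COLUMN prev_end_time SET STATISTICS 200;
ANALYZE rrd.bucket;

-- Materialize the ms_t(tick) expression as a plain column so 7.4 has ordinary
-- column statistics for it; keeping it current on insert/update would need a
-- trigger or application code:
ALTER TABLE alert ADD COLUMN tick_ms timestamptz;
UPDATE alert SET tick_ms = ms_t(tick);
CREATE INDEX alert__tick_ms ON alert (tick_ms);
ANALYZE alert;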
[ { "msg_contents": "Hi all\n\nI have a table with ca. 4Mio Rows.\n\nhere is my simple select-statement:\nSELECT * FROM CUSTOMER WHERE CUSTOMER_ID=5\n\nthe result appears after about 27 sec.\n\nwhat's wrong?\n\nthe same statement on mysql takes 1 milisec.\n\nplease help\n\nhere is the structur of the table\nCREATE TABLE public.customer\n(\n customer_id bigserial NOT NULL,\n cooperationpartner_id int8 NOT NULL DEFAULT 0::bigint,\n maincontact_id int8 NOT NULL DEFAULT 0::bigint,\n companycontact_id int8,\n def_paymentdetails_id int8,\n def_paymentsort_id int8,\n def_invoicing_id int8,\n int_customernumber varchar(50),\n ext_customernumber varchar(50),\n CONSTRAINT customer_pkey PRIMARY KEY (customer_id),\n CONSTRAINT customer_ibfk_1 FOREIGN KEY (cooperationpartner_id)\nREFERENCES public.cooperationpartner (cooperationpartner_id) ON UPDATE\nNO ACTION ON DELETE NO ACTION,\n CONSTRAINT customer_ibfk_2 FOREIGN KEY (maincontact_id) REFERENCES\npublic.contact (contact_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT customer_ibfk_3 FOREIGN KEY (companycontact_id) REFERENCES\npublic.contact (contact_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT customer_ibfk_4 FOREIGN KEY (def_paymentdetails_id)\nREFERENCES public.paymentdetails (paymentdetails_id) ON UPDATE NO ACTION\nON DELETE NO ACTION,\n CONSTRAINT customer_ibfk_5 FOREIGN KEY (def_paymentsort_id) REFERENCES\npublic.paymentsort (paymentsort_id) ON UPDATE NO ACTION ON DELETE NO\nACTION,\n CONSTRAINT customer_ibfk_6 FOREIGN KEY (def_invoicing_id) REFERENCES\npublic.invoicing (invoicing_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n) WITH OIDS;\n\n", "msg_date": "Wed, 10 Nov 2004 10:35:50 +0100", "msg_from": "Cao Duy <[email protected]>", "msg_from_op": true, "msg_subject": "simple select-statement takes more than 25 sec" }, { "msg_contents": "On Wed, Nov 10, 2004 at 10:35:50AM +0100, Cao Duy wrote:\n> here is my simple select-statement:\n> SELECT * FROM CUSTOMER WHERE CUSTOMER_ID=5\n\nIt seems like you're missing an index on customer_id. Set it to PRIMARY KEY\nor do an explicit CREATE INDEX (followed by an ANALYZE) and it should be a\nlot faster.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 10 Nov 2004 11:17:47 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple select-statement takes more than 25 sec" }, { "msg_contents": "From: \"Cao Duy\" <[email protected]>\n> \n> here is my simple select-statement:\n> SELECT * FROM CUSTOMER WHERE CUSTOMER_ID=5\n> \n> the result appears after about 27 sec.\n> \n> what's wrong?\n> ...\n> CREATE TABLE public.customer\n> (\n> customer_id bigserial NOT NULL,\n\nyou do not specify version or show us\nan explain analyze, or tell us what indexes\nyou have, but if you want to use an index\non the bigint column customer_id, and you\nare using postgres version 7.4 or less, you\nneed to cast your constant (5) to bigint.\n\n\ntry\nSELECT * FROM CUSTOMER WHERE CUSTOMER_ID=5::bigint\nor\nSELECT * FROM CUSTOMER WHERE CUSTOMER_ID='5'\n\ngnari\n\n\n", "msg_date": "Wed, 10 Nov 2004 10:31:40 -0000", "msg_from": "\"gnari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple select-statement takes more than 25 sec" }, { "msg_contents": "Am Mi, den 10.11.2004 schrieb Steinar H. 
Gunderson um 11:17:\n> On Wed, Nov 10, 2004 at 10:35:50AM +0100, Cao Duy wrote:\n> > here is my simple select-statement:\n> > SELECT * FROM CUSTOMER WHERE CUSTOMER_ID=5\n> \n> It seems like you're missing an index on customer_id. Set it to PRIMARY KEY\n> or do an explicit CREATE INDEX (followed by an ANALYZE) and it should be a\n> lot faster.\nthere is an index on customer_id\n\ncreate table customer(\n...\nCONSTRAINT customer_pkey PRIMARY KEY (customer_id),\n...\n)\n\n> /* Steinar */\n\n", "msg_date": "Wed, 10 Nov 2004 12:22:17 +0100", "msg_from": "Cao Duy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: simple select-statement takes more than 25 sec" }, { "msg_contents": "On Wed, Nov 10, 2004 at 12:22:17PM +0100, Cao Duy wrote:\n> there is an index on customer_id\n> \n> create table customer(\n> ...\n> CONSTRAINT customer_pkey PRIMARY KEY (customer_id),\n> ...\n> )\n\nOh, sorry, I missed it among all the foreign keys. :-) Anyhow, as others have\npointed out, try doing a select against 5::bigint instead of just 5 (which is\nan integer).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 10 Nov 2004 12:45:25 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple select-statement takes more than 25 sec" } ]
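A quick sketch of the casts suggested in the thread above, assuming the customer table from the original post; before 8.0 the planner will not consider the bigint primary-key index for a bare integer literal:

-- Either form lets the customer_pkey index on the bigint column be used:
EXPLAIN ANALYZE SELECT * FROM customer WHERE customer_id = 5::bigint;
EXPLAIN ANALYZE SELECT * FROM customer WHERE customer_id = '5';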
[ { "msg_contents": "Hi,\n\nTry using parametrized prepared statements, does that make a difference? Or does PGSQL jdbc not support them in your version?\n\n--Tim\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Michael Kleiser\nSent: Wednesday, November 10, 2004 2:52 PM\nTo: Jeff\nCc: Shane|SkinnyCorp; [email protected]\nSubject: [PERFORM] How to speed-up inserts with jdbc\n\n\n[...]\n>\t Statement st = con.createStatement();\n[...]\n\t st.executeUpdate(\"insert into history(uuid,coni,date,direction,partner,type) values('uuid','content','\"+datum+\"','dir','partner','type')\");\n[...]\n", "msg_date": "Wed, 10 Nov 2004 15:26:20 +0100", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed-up inserts with jdbc" } ]
[ { "msg_contents": "I'm looking for suggestions on tuning Solaris 9 for a SunFire 890 (Ultra\nIV chips) connected to an Hitachi 9500V running PostgreSQL 7.4.\n\nSo that I don't lead people in a direction, I'll hold off for a while\nbefore posting our configuration settings.\n\nDatabase is approx 160GB in size with a churn of around 4GB per day (2\nGB updated, 2GB inserted, very little removed). It's a mixture of OLTP\nand reporting.\n\n5% is reports which do trickle writes\n95% is short (30 second or less) transactions with about 10 selects, 10\nwrites (inserts, updates, deletes all mixed in) affecting 150 tuples.\n\nThanks for any tips -- particularly Solaris kernel tuning.\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "Wed, 10 Nov 2004 11:52:34 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning suggestions wanted" } ]
[ { "msg_contents": "I'm looking for suggestions on tuning Solaris 9 for a SunFire 890 (Ultra\nIV chips) connected to an Hitachi 9500V running PostgreSQL 7.4.\n\nDatabase is approx 160GB in size with a churn of around 4GB per day (2\nGB updated, 2GB inserted, very little removed). It's a mixture of OLTP\nand reporting.\n\n5% is reports which do trickle writes 95% is short (30 second or less)\ntransactions with about 10 selects, 10 writes (inserts, updates, deletes\nall mixed in) affecting 150 tuples.\n\nThanks for any tips -- particularly Solaris kernel tuning or oddities in\nDisk IO or configuration settings as they related to Solaris (as they\ndiffer from an Intel).\n\n\n\n\n-- \n\n", "msg_date": "Wed, 10 Nov 2004 12:04:12 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Solaris 9 Tuning Tips requested" } ]
[ { "msg_contents": "Hello all,\n\nI am using tsearch2 to (imagine this... :) index a text field. There\nis also a, for lack of a better name, \"classification\" field called\n'field' that will be used to group certain rows together.\n\nCREATE TABLE biblio.metarecord_field_entry (\n record BIGINT REFERENCES biblio.metarecord (id)\n ON UPDATE CASCADE\n ON DELETE SET NULL\n DEFERRABLE\n INITIALLY DEFERRED,\n field INT NOT NULL\n REFERENCES biblio.metarecord_field_map (id)\n ON UPDATE CASCADE\n ON DELETE CASCADE\n DEFERRABLE\n INITIALLY DEFERRED,\n value TEXT,\n value_fti tsvector,\n source BIGINT NOT NULL\n REFERENCES biblio.record (id)\n ON UPDATE CASCADE\n ON DELETE CASCADE\n DEFERRABLE\n INITIALLY DEFERRED\n) WITHOUT OIDS;\n\n\nBecause there will be \"or\" queries against the 'value_fti' I want to\ncreate a multi-column index across the tsvector and classification\ncolumns as that should help with selectivity. But because there is no\nGiST opclass for INT4 the index creation complains thusly:\n\n oils=# CREATE INDEX metarecord_field_entry_value_and_field_idx ON\nbiblio.metarecord_field_entry USING GIST (field, value_fti);\n ERROR: data type integer has no default operator class for access\nmethod \"gist\"\n HINT: You must specify an operator class for the index or define a\ndefault operator class for the data type.\n\nI attempted to give it the 'int4_ops' class, but that also complains:\n\n oils=# CREATE INDEX metarecord_field_entry_value_and_field_idx ON\nbiblio.metarecord_field_entry USING GIST (value_fti, field int4_ops);\n ERROR: operator class \"int4_ops\" does not exist for access method \"gist\"\n\nI couldn't find any info in the docs (7.4 and 8.0.0b4) for getting\nGiST to index standard integers. I'm sure this has been done before,\nbut I've note found the magic spell. Of course, I may just be barking\nup the wrong tree altogether...\n\nThanks in advance!\n\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\n", "msg_date": "Wed, 10 Nov 2004 16:51:35 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": true, "msg_subject": "int4 in a GiST index" } ]
[ { "msg_contents": "Mike Rylander wrote:\n\n> I want to create a multi-column index across the tsvector and classification\n> columns as that should help with selectivity. But because there is no\n> GiST opclass for INT4 the index creation complains thusly:\n\nInstall contrib/btree_gist along with contrib/tsearch2 to create a multicolumn index on the in4\nand the tsvector columns. See the following for an example:\n\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/multi_column_index.html\n\nGeorge Essig\n", "msg_date": "Wed, 10 Nov 2004 18:50:28 -0800 (PST)", "msg_from": "George Essig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: int4 in a GiST index" }, { "msg_contents": "On Wed, 10 Nov 2004 18:50:28 -0800 (PST), George Essig\n<[email protected]> wrote:\n> Mike Rylander wrote:\n> \n> > I want to create a multi-column index across the tsvector and classification\n> > columns as that should help with selectivity. But because there is no\n> > GiST opclass for INT4 the index creation complains thusly:\n> \n> Install contrib/btree_gist along with contrib/tsearch2 to create a multicolumn index on the in4\n> and the tsvector columns. See the following for an example:\n> \n> http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/oscon_tsearch2/multi_column_index.html\n> \n> George Essig\n> \n\n\nThanks a million. I had actually just found the answer after some\nmore googling, but I hadn't seen that page and it happens to be\nexactly what I wanted.\n\nAs a side note I'd like to thank everyone here (and especially George,\nin this case). I've been on these lists for quite a while and I'm\nalways amazed at the speed, accuracy and precision of the answers on\nthe PG mailing lists.\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\n", "msg_date": "Thu, 11 Nov 2004 00:53:09 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: int4 in a GiST index" } ]
[ { "msg_contents": "Folks,\n\nWanted to get clarification on two bits of output from 7.4's VACUUM FULL \nVERBOSE:\n\n\"Total free space (including removable row versions) is 2932036 bytes.\"\nIf the table referenced has no dead row versions, does this indicate open \nspace on partially full pages?\n\n\"There were 33076 unused item pointers.\"\nIs this a count of dead index pointers, or something else?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 12 Nov 2004 10:42:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Clarification on two bits on VACUUM FULL VERBOSE output" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Wanted to get clarification on two bits of output from 7.4's VACUUM FULL \n> VERBOSE:\n\n> \"Total free space (including removable row versions) is 2932036 bytes.\"\n> If the table referenced has no dead row versions, does this indicate open \n> space on partially full pages?\n\nYes.\n\n> \"There were 33076 unused item pointers.\"\n> Is this a count of dead index pointers, or something else?\n\nNo, it's currently-unused item pointers (a/k/a line pointers) on heap\npages. See http://developer.postgresql.org/docs/postgres/page.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 2004 14:20:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on two bits on VACUUM FULL VERBOSE output " }, { "msg_contents": "Tom,\n\n> > \"There were 33076 unused item pointers.\"\n> > Is this a count of dead index pointers, or something else?\n>\n> No, it's currently-unused item pointers (a/k/a line pointers) on heap\n> pages.  See http://developer.postgresql.org/docs/postgres/page.html\n\nSo this would be a count of pointers whose items had already been moved?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 12 Nov 2004 13:11:33 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clarification on two bits on VACUUM FULL VERBOSE output" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>>> \"There were 33076 unused item pointers.\"\n>>> Is this a count of dead index pointers, or something else?\n>> \n>> No, it's currently-unused item pointers (a/k/a line pointers) on heap\n>> pages. See http://developer.postgresql.org/docs/postgres/page.html\n\n> So this would be a count of pointers whose items had already been moved?\n\nEither deleted, or moved to another page during VACUUM FULL compaction.\nSuch a pointer can be recycled to point to a new item, if there's room\nto put another item on its page ... but if not, the pointer is\nwasted space. I don't believe we ever try to physically eliminate\nunused item pointers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 2004 16:19:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification on two bits on VACUUM FULL VERBOSE output " } ]
[ { "msg_contents": "Hello to all,\n\nI am new to this group and postgresql. I am working on\na project which uses postgresql and project is time\ncritical. We did all optimization in our project but\npostgresql seems to be a bottle-neck. To solve this we\nrun the database operations in a different thread. But\nstill, with large volume of data in database the\ninsert operation becomes very slow (ie. to insert 100\nrecords in 5 tables, it takes nearly 3minutes).\n\nvacuum analyze helps a bit but performance improvement\nis not much.\nWe are using the default postgres setting (ie. didn't\nchange postgresql.conf).\n\nOne more point: When we try to upload a pg_dump of\nnearly 60K records for 7 tables it took more than\n10hrs. \n\nSystem config:\n\nRedhat Linux7.2\nRAM: 256MB\npostgres: 7.1.3\nconnection: ODBC\n\nThanks to all, please consider it even if it is silly\ndoubt. \n \nVivek\n\n\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n", "msg_date": "Sat, 13 Nov 2004 03:26:09 -0800 (PST)", "msg_from": "vivek singh <[email protected]>", "msg_from_op": true, "msg_subject": "Insertion puzzles" }, { "msg_contents": "On Nov 13, 2004, at 12:26, vivek singh wrote:\n\n> But\n> still, with large volume of data in database the\n> insert operation becomes very slow (ie. to insert 100\n> records in 5 tables, it takes nearly 3minutes).\n\nWhat are the performance when you use COPY FROM instead of INSERT ?\nAnd have you tested the performance with fsync on and off.\n\n-- \nAndreas Åkre Solberg, UNINETT AS Testnett\nContact info and Public PGP Key available on:\nhttp://andreas.solweb.no/?account=Work", "msg_date": "Sat, 13 Nov 2004 14:31:11 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_=C5kre_Solberg?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insertion puzzles" }, { "msg_contents": "Well, the default configuration for postgresql 7.1.3 is *very* \nconservative. ( ie. very slow)\n\nYou should seriously consider upgrading to 7.4.6 as server performance \nhas increased; in some cases significantly.\n\nIf that is not an option, certainly tuning the shared buffers, and \neffective cache settings would be advisable.\n\ndave\n\nAndreas �kre Solberg wrote:\n\n>\n> On Nov 13, 2004, at 12:26, vivek singh wrote:\n>\n>> But\n>> still, with large volume of data in database the\n>> insert operation becomes very slow (ie. to insert 100\n>> records in 5 tables, it takes nearly 3minutes).\n>\n>\n> What are the performance when you use COPY FROM instead of INSERT ?\n> And have you tested the performance with fsync on and off.\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Sat, 13 Nov 2004 11:54:56 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insertion puzzles" }, { "msg_contents": "On Sat, 13 Nov 2004, vivek singh wrote:\n\n> I am new to this group and postgresql. I am working on\n> a project which uses postgresql and project is time\n> critical. We did all optimization in our project but\n> postgresql seems to be a bottle-neck. To solve this we\n> run the database operations in a different thread. But\n> still, with large volume of data in database the\n> insert operation becomes very slow (ie. to insert 100\n> records in 5 tables, it takes nearly 3minutes).\n\nThat's pretty bad. What does the schema look like? 
Are there\nany foreign keys, triggers or rules being hit?\n\n> vacuum analyze helps a bit but performance improvement\n> is not much.\n> We are using the default postgres setting (ie. didn't\n> change postgresql.conf).\n\nHmm, there are a few settings to try to change, although to be\nhonest, I'm not sure which ones beyond shared_buffers (maybe try a\ncouple thousand) are applicable to 7.1.3.\n\nYou really should upgrade. Alot of serious bug fixes and performance\nenhancements have been made from 7.1.x to 7.4.x.\n", "msg_date": "Sat, 13 Nov 2004 09:13:42 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insertion puzzles" }, { "msg_contents": "Vivek,\n\n> Redhat Linux7.2\n> RAM: 256MB\n> postgres: 7.1.3\n\nUm, you do realise that both RH 7.2 and PostgreSQL 7.1 are \"no longer \nsupported\" but their respective communities?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 13 Nov 2004 17:36:12 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insertion puzzles" }, { "msg_contents": "Actually, the most damning thing in this configuration I had missed earlier\n\n256MB of ram !\n\nDave\n\nJosh Berkus wrote:\n\n>Vivek,\n>\n> \n>\n>>Redhat Linux7.2\n>>RAM: 256MB\n>>postgres: 7.1.3\n>> \n>>\n>\n>Um, you do realise that both RH 7.2 and PostgreSQL 7.1 are \"no longer \n>supported\" but their respective communities?\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Sun, 14 Nov 2004 19:45:56 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insertion puzzles" } ]
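A rough sketch of the suggestions above (COPY instead of row-by-row INSERTs, a larger shared_buffers, and measuring the cost of fsync); the table name, file path and exact values are placeholders:

-- Bulk load with COPY rather than many individual INSERT statements:
COPY mytable FROM '/tmp/mytable.dat';

-- postgresql.conf, ballpark values for a 256MB machine:
--   shared_buffers = 2000   -- "a couple thousand" buffers, as suggested
--   fsync = false           -- only to measure its cost; not for production use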
[ { "msg_contents": "I just finished upgrading the OS on our Opteron 148 from Redhat9 to \nFedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.\n\nThe verdict: a definite performance improvement. I tested just a few CPU \nintensive queries and many of them are a good 30%-50% faster. \nTransactional/batch jobs involving client machines (i.e. include fixed \nclient/networking/odbc overhead) seem to be about 10%-20% faster \nalthough I will need run more data through the system to get a better \nfeel of the numbers.\n", "msg_date": "Sat, 13 Nov 2004 14:59:22 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Some quick Opteron 32-bit/64-bit results" }, { "msg_contents": "Biggest speedup I've found yet is the backup process (PG_DUMP --> GZIP). \n100% faster in 64-bit mode. This drastic speed might be more the result \nof 64-bit GZIP though as I've seen benchmarks in the past showing \nencryption/compression running 2 or 3 times faster in 64-bit mode versus \n32-bit.\n\n\n\nWilliam Yu wrote:\n\n> I just finished upgrading the OS on our Opteron 148 from Redhat9 to \n> Fedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.\n> \n> The verdict: a definite performance improvement. I tested just a few CPU \n> intensive queries and many of them are a good 30%-50% faster. \n> Transactional/batch jobs involving client machines (i.e. include fixed \n> client/networking/odbc overhead) seem to be about 10%-20% faster \n> although I will need run more data through the system to get a better \n> feel of the numbers.\n", "msg_date": "Sat, 13 Nov 2004 15:58:59 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some quick Opteron 32-bit/64-bit results" }, { "msg_contents": "Hi Willian,\n\n Which are the GCC flags that you it used to compile PostgreSQL?\n\nBest regards,\n\nGustavo Franklin N�brega\nInfraestrutura e Banco de Dados\nPlanae Tecnologia da Informa��o\n(+55) 14 3224-3066 Ramal 209\nwww.planae.com.br\n\n> I just finished upgrading the OS on our Opteron 148 from Redhat9 to\n> Fedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.\n>\n> The verdict: a definite performance improvement. I tested just a few CPU\n> intensive queries and many of them are a good 30%-50% faster.\n> Transactional/batch jobs involving client machines (i.e. include fixed\n> client/networking/odbc overhead) seem to be about 10%-20% faster\n> although I will need run more data through the system to get a better\n> feel of the numbers.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n\n\n", "msg_date": "Sat, 13 Nov 2004 22:14:03 -0200 (BRST)", "msg_from": "Gustavo Franklin =?iso-8859-1?Q?N=F3brega?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some quick Opteron 32-bit/64-bit results" }, { "msg_contents": "I gave -O3 a try with -funroll-loops, -fomit-frame-pointer and a few \nothers. 
Seemed to perform about the same as the default -O2 so I just \nleft it as -O2.\n\n\nGustavo Franklin N�brega wrote:\n> Hi Willian,\n> \n> Which are the GCC flags that you it used to compile PostgreSQL?\n> \n> Best regards,\n> \n> Gustavo Franklin N�brega\n> Infraestrutura e Banco de Dados\n> Planae Tecnologia da Informa��o\n> (+55) 14 3224-3066 Ramal 209\n> www.planae.com.br\n> \n> \n>>I just finished upgrading the OS on our Opteron 148 from Redhat9 to\n>>Fedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.\n>>\n>>The verdict: a definite performance improvement. I tested just a few CPU\n>>intensive queries and many of them are a good 30%-50% faster.\n>>Transactional/batch jobs involving client machines (i.e. include fixed\n>>client/networking/odbc overhead) seem to be about 10%-20% faster\n>>although I will need run more data through the system to get a better\n>>feel of the numbers.\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n", "msg_date": "Sat, 13 Nov 2004 16:52:45 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some quick Opteron 32-bit/64-bit results" }, { "msg_contents": "William Yu <[email protected]> writes:\n\n> Biggest speedup I've found yet is the backup process (PG_DUMP --> GZIP). 100%\n> faster in 64-bit mode. This drastic speed might be more the result of 64-bit\n> GZIP though as I've seen benchmarks in the past showing encryption/compression\n> running 2 or 3 times faster in 64-bit mode versus 32-bit.\n\nIsn't this a major kernel bump too? So a different scheduler, different IO\nscheduler, etc?\n\n-- \ngreg\n\n", "msg_date": "15 Nov 2004 02:03:58 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some quick Opteron 32-bit/64-bit results" }, { "msg_contents": "Greg Stark wrote:\n> William Yu <[email protected]> writes:\n> \n> \n>>Biggest speedup I've found yet is the backup process (PG_DUMP --> GZIP). 100%\n>>faster in 64-bit mode. This drastic speed might be more the result of 64-bit\n>>GZIP though as I've seen benchmarks in the past showing encryption/compression\n>>running 2 or 3 times faster in 64-bit mode versus 32-bit.\n> \n> \n> Isn't this a major kernel bump too? So a different scheduler, different IO\n> scheduler, etc?\n> \n\nI'm sure there's some speedup due to the kernel bump. I really didn't \nhave the patience to even burn the FC2 32-bit CDs much less install both \n32-bit & 64-bit FC2 in order to have a more accurate baseline comparison.\n\nHowever, that being said -- when you see huge speed increases like 50% \n100% for dump+gzip, it's doubtful the kernel/process scheduler/IO \nscheduler could have made that drastic of a difference. 
Maybe somebody \nelse who has done a 2.4 -> 2.6 upgrade can give us a baseline to \nsubtract from my numbers.\n", "msg_date": "Sun, 14 Nov 2004 23:51:19 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some quick Opteron 32-bit/64-bit results" }, { "msg_contents": "I ran quite a few file system benchmarks in RHAS x86-64 and FC2 x86-64\non a Sun V40z - I did see very consistent 50% improvements in bonnie++\nmoving from RHAS to FC2 with ext2/ext3 on SAN.\n\n\n\nOn Sun, 2004-11-14 at 23:51 -0800, William Yu wrote:\n> Greg Stark wrote:\n> > William Yu <[email protected]> writes:\n> > \n> > \n> >>Biggest speedup I've found yet is the backup process (PG_DUMP --> GZIP). 100%\n> >>faster in 64-bit mode. This drastic speed might be more the result of 64-bit\n> >>GZIP though as I've seen benchmarks in the past showing encryption/compression\n> >>running 2 or 3 times faster in 64-bit mode versus 32-bit.\n> > \n> > \n> > Isn't this a major kernel bump too? So a different scheduler, different IO\n> > scheduler, etc?\n> > \n> \n> I'm sure there's some speedup due to the kernel bump. I really didn't \n> have the patience to even burn the FC2 32-bit CDs much less install both \n> 32-bit & 64-bit FC2 in order to have a more accurate baseline comparison.\n> \n> However, that being said -- when you see huge speed increases like 50% \n> 100% for dump+gzip, it's doubtful the kernel/process scheduler/IO \n> scheduler could have made that drastic of a difference. Maybe somebody \n> else who has done a 2.4 -> 2.6 upgrade can give us a baseline to \n> subtract from my numbers.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Tue, 23 Nov 2004 10:20:07 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some quick Opteron 32-bit/64-bit results" } ]
[ { "msg_contents": "Vivek,\n\nI ran into the exact same problem you did. I tried many, many changes to\nthe conf file, I tried O.S. tuning but performance stunk. I had a fairly\nsimple job that had a lot of updates and inserts that was taking 4 1/2\nhours. I re-wrote it to be more \"Postgres friendly\" - meaning less\ndatabase updates and got it down under 2 1/2 hours (still horrible). \nUnderstand, the legacy non-postgres ISAM db took about 15 minutes to\nperform the same task. I assumed it was a system problem that would go\naway when we upgraded servers but it did not. I converted to MySQL and the\nexact same java process takes 5 minutes! Postgres is a great DB for some,\nfor our application it was not - you may want to consider other products\nthat are a bit faster and do not require the vacuuming of stale data.\n\nOriginal Message:\n-----------------\nFrom: vivek singh [email protected]\nDate: Sat, 13 Nov 2004 03:26:09 -0800 (PST)\nTo: [email protected]\nSubject: [PERFORM] Insertion puzzles\n\n\nHello to all,\n\nI am new to this group and postgresql. I am working on\na project which uses postgresql and project is time\ncritical. We did all optimization in our project but\npostgresql seems to be a bottle-neck. To solve this we\nrun the database operations in a different thread. But\nstill, with large volume of data in database the\ninsert operation becomes very slow (ie. to insert 100\nrecords in 5 tables, it takes nearly 3minutes).\n\nvacuum analyze helps a bit but performance improvement\nis not much.\nWe are using the default postgres setting (ie. didn't\nchange postgresql.conf).\n\nOne more point: When we try to upload a pg_dump of\nnearly 60K records for 7 tables it took more than\n10hrs. \n\nSystem config:\n\nRedhat Linux7.2\nRAM: 256MB\npostgres: 7.1.3\nconnection: ODBC\n\nThanks to all, please consider it even if it is silly\ndoubt. \n \nVivek\n\n\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nCheck out the new Yahoo! Front Page. \nwww.yahoo.com \n \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n--------------------------------------------------------------------\nmail2web - Check your email from the web at\nhttp://mail2web.com/ .\n\n\n", "msg_date": "Sat, 13 Nov 2004 21:00:48 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insertion puzzles" }, { "msg_contents": "On Sat, 2004-11-13 at 18:00, [email protected] wrote:\n> I ran into the exact same problem you did. I tried many, many changes to\n> the conf file, I tried O.S. tuning but performance stunk. I had a fairly\n> simple job that had a lot of updates and inserts that was taking 4 1/2\n> hours. I re-wrote it to be more \"Postgres friendly\" - meaning less\n> database updates and got it down under 2 1/2 hours (still horrible). \n> Understand, the legacy non-postgres ISAM db took about 15 minutes to\n> perform the same task. I assumed it was a system problem that would go\n> away when we upgraded servers but it did not. I converted to MySQL and the\n> exact same java process takes 5 minutes! Postgres is a great DB for some,\n> for our application it was not - you may want to consider other products\n> that are a bit faster and do not require the vacuuming of stale data.\n\n\nI have to wonder if the difference is in how your job is being chopped\nup by the different connection mechanisms. 
The only time I've had\nperformance problems like this, it was the result of pathological and\nunwelcome behaviors in the way things were being handled in the\nconnector or database design.\n\nWe have a 15GB OLTP/OLAP database on five spindles with a large\ninsert/update load and >100M rows, and I don't think it takes 2.5 hours\nto do *anything*. This includes inserts/updates of hundreds of\nthousands of rows at a shot, which takes very little time.\n\nI've gotten really bad performance before under postgres, but once I\nisolated the reason I've always gotten performance that was comparable\nto any other commercial RDBMS on the same hardware. \n\n\nJ. Andrew Rogers\n\n\n", "msg_date": "16 Nov 2004 11:43:38 -0800", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insertion puzzles" } ]
[ { "msg_contents": "Good day,\n\nI use pgsql 7.4: I would like to know if indexes will solve my problem\n(I fear the system will become slow with the time). And also some\nquestions on how pgsql optimise for speed.\n\n*Database*\n\n-- Assuming those tables (not original, but enought to get the point):\n\nCREATE TABLE prod.jobs (\njob_id serial PRIMARY KEY,\norder_id integer NOT NULL REFERENCES sales.orders,\n);\n\nCREATE TABLE design.products (\nproduct_id serial PRIMARY KEY,\ncompany_id integer NOT NULL REFERENCES sales.companies ON\nUPDATE CASCADE,\nproduct_code varchar(24) NOT NULL,\nCONSTRAINT product_code_already_used_for_this_company UNIQUE\n(company_id, product_code)\n);\n\nCREATE TABLE prod.jobs_products (\nproduct_id integer REFERENCES design.products ON UPDATE CASCADE,\n) INHERITS (prod.jobs);\n\n-- Assuming this view:\n\nCREATE VIEW prod.orders_jobs_view AS\n SELECT job_id, order_id, product_code\n FROM (\n SELECT *, NULL AS product_id FROM ONLY prod.jobs\n UNION\n SELECT * FROM prod.jobs_products\n ) AS alljobs LEFT JOIN design.products ON alljobs.product_id =\nproducts.product_id;\n\n*Question 1*\n\nAssuming this request:\nSELECT * FROM prod.orders_jobs_view WHERE order_id = 1;\n\nI imagine that somewhere down the road, this will get slow since there\nis no index on the order_id. I tought of creating two indexes... With\nthe previous VIEW and database schema, will the following boost the\nDB; as I don't know how PostgreSQL works internally:\n\nCREATE UNIQUE INDEX order_jobs ON prod.jobs(order_id);\nCREATE UNIQUE INDEX order_jobs_products ON prod.jobs_products(order_id);\n\n*Question 2*\n\nIf no to question 1, what can I do to boost the database speed. I do\nhave prety heavy views on data, and I would like to get some speed as\nthe DB will get filled up quickly.\n\n*Question 3*\n\nWhen creating a wien with linked \"UNION\" tables as previous... when we\ndo a SELECT with a WHERE clause, will the database act efficiently by\nadding the WHERE clause to the UNIONed tables in the FROM clause?\n\nExample:\n\nSELECT * FROM prod.orders_jobs_view WHERE order_id = 1;\n\nwhould cause something like\n\n SELECT job_id, order_id, product_code\n FROM (\n SELECT *, NULL AS product_id FROM ONLY prod.jobs WHERE order_id = 1\n UNION\n SELECT * FROM prod.jobs_products WHERE order_id = 1\n ) AS alljobs LEFT JOIN design.products ON alljobs.product_id =\nproducts.product_id;\n\nin order to speed the union processing?\n\nThank you for any help on this.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Mon, 15 Nov 2004 12:48:36 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Question on pgsql optimisation of SQL and structure (index, etc)" }, { "msg_contents": "Alexandre,\n\n> -- Assuming those tables (not original, but enought to get the point):\n>\n> CREATE TABLE prod.jobs (\n> job_id serial PRIMARY KEY,\n> order_id integer NOT NULL REFERENCES sales.orders,\n> );\n>\n> CREATE TABLE design.products (\n> product_id serial PRIMARY KEY,\n> company_id integer NOT NULL REFERENCES sales.companies ON\n> UPDATE CASCADE,\n> product_code varchar(24) NOT NULL,\n> CONSTRAINT product_code_already_used_for_this_company UNIQUE\n> (company_id, product_code)\n> );\n>\n> CREATE TABLE prod.jobs_products (\n> product_id integer REFERENCES design.products ON UPDATE CASCADE,\n> ) INHERITS (prod.jobs);\n\nFirst off, let me say that I find this schema rather bizarre. 
The standard \nway to handle your situation would be to add a join table instead of \ninheritance for jobs_products:\n\nCREATE TABLE jobs_products (\n\tjob_id INT NOT NULL REFERENCES prod.jobs(job_id) ON DELETE CASCADE,\n\tproduct_id INT NOT NULL REFERENCES design.products(product_id) ON UPDATE \nCASCADE,\n\tCONSTRAINT jobs_products_pk PRIMARY KEY (job_id, product_id)\n);\n\nThen this view:\n\n> CREATE VIEW prod.orders_jobs_view AS\n> SELECT job_id, order_id, product_code\n> FROM (\n> SELECT *, NULL AS product_id FROM ONLY prod.jobs\n> UNION\n> SELECT * FROM prod.jobs_products\n> ) AS alljobs LEFT JOIN design.products ON alljobs.product_id =\n> products.product_id;\n\nBecomes much simpler, and better performance:\n\nCREATE VIEW prod.orders_jobs_view AS \nSELECT job_id, order_id, product_code\nFROM prod.jobs LEFT JOIN prod.jobs_products ON prod.jobs.job_id = \nprod.jobs_products.job_id\n\tLEFT JOIN design.products ON prod.jobs_products.product_id = \ndesign.products.product_id;\n\n> I imagine that somewhere down the road, this will get slow since there\n> is no index on the order_id. I tought of creating two indexes... With\n> the previous VIEW and database schema, will the following boost the\n> DB; as I don't know how PostgreSQL works internally:\n\nYes. Any time you have a foreign key, you should index it unless you have a \nreally good reason not to.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 15 Nov 2004 13:03:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgsql optimisation of SQL and structure (index, etc)" } ]
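A sketch of the foreign-key indexing advice above, using the join-table schema from the reply; index names are made up:

CREATE INDEX jobs_order_id_idx ON prod.jobs (order_id);
CREATE INDEX jobs_products_product_id_idx ON jobs_products (product_id);
-- The (job_id, product_id) primary key already covers lookups by job_id, and
-- the UNIQUE (company_id, product_code) constraint already indexes
-- design.products.company_id.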
[ { "msg_contents": "Hi,\n\nI have a strange (for me) result ... Why the second request is really quicker \nwith a Seq Scan than the first one with a DISTINCT and using an index !?\n\nThe table have really 183957 rows ... not like the Seq Scan seems to \nexpect ... !? I understand nothing here ...\n\nThanks for your explanations ...\n\n# explain analyze SELECT distinct s.id_category FROM site s;\n\n QUERY \nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..11903.15 rows=56 width=4) (actual time=0.147..1679.170 \nrows=68 loops=1)\n -> Index Scan using ix_site_id_category on site s (cost=0.00..11496.38 \nrows=162706 width=4) (actual time=0.143..1452.611 rows=183957 loops=1)\n Total runtime: 1679.496 ms\n(3 rows)\n\nTime: 1680,810 ms\n\n\n# explain analyze SELECT s.id_category FROM site s group by id_category;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=7307.83..7307.83 rows=56 width=4) (actual \ntime=1198.968..1199.084 rows=68 loops=1)\n -> Seq Scan on site s (cost=0.00..6901.06 rows=162706 width=4) (actual \ntime=0.097..921.676 rows=183957 loops=1)\n Total runtime: 1199.260 ms\n(3 rows)\n\n-- \nBill Footcow\n", "msg_date": "Mon, 15 Nov 2004 22:22:02 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Why distinct so slow ?" } ]
[ { "msg_contents": "I'm just currious about which is the best, if I have many query based\non the first one:\n\n-- suppose this view (used many times):\nCREATE VIEW prod.alljobs_view AS\n SELECT *\n FROM prod.jobs\n LEFT JOIN prod.jobs_products ON jobs.job_id = jobs_products.job_id;\n\n-- suppose this other query:\nCREATE VIEW prod.orders_jobs_view AS\n SELECT job_id, order_id, product_code\n FROM prod.alljobs_view\n LEFT JOIN design.products ON alljobs_view.product_id =\nproducts.product_id;\n\n-- would this be more effective on database side than:?\nCREATE VIEW prod.orders_jobs_view AS\n SELECT job_id, order_id, product_code\n FROM prod.jobs\n LEFT JOIN prod.jobs_products ON jobs.job_id = jobs_products.job_id\n LEFT JOIN design.products ON jobs_products.product_id =\nproducts.product_id;\n\nWhich is the best, or is there any difference? (I can't test it\nmyself, I have too few data).\n\nRegards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Mon, 15 Nov 2004 18:10:01 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Performance difference: SELECT from VIEW or not?" }, { "msg_contents": "Alexandre Leclerc <[email protected]> writes:\n> -- suppose this view (used many times):\n> CREATE VIEW prod.alljobs_view AS\n> SELECT *\n> FROM prod.jobs\n> LEFT JOIN prod.jobs_products ON jobs.job_id = jobs_products.job_id;\n\n> -- suppose this other query:\n> CREATE VIEW prod.orders_jobs_view AS\n> SELECT job_id, order_id, product_code\n> FROM prod.alljobs_view\n> LEFT JOIN design.products ON alljobs_view.product_id =\n> products.product_id;\n\n> -- would this be more effective on database side than:?\n> CREATE VIEW prod.orders_jobs_view AS\n> SELECT job_id, order_id, product_code\n> FROM prod.jobs\n> LEFT JOIN prod.jobs_products ON jobs.job_id = jobs_products.job_id\n> LEFT JOIN design.products ON jobs_products.product_id =\n> products.product_id;\n\n> Which is the best, or is there any difference?\n\nFor the specific case mentioned, there's not going to be any visible\ndifference (maybe a few more catalog lookups to expand two view\ndefinitions instead of one). However, it's real easy to shoot yourself\nin the foot with views and outer joins. Don't forget that the syntactic\nordering of outer joins is also a semantic and performance constraint.\n(a left join b) left join c is different from a left join (b left join c)\nand a view is basically a parenthesized subselect ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Nov 2004 18:30:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance difference: SELECT from VIEW or not? " } ]
[ { "msg_contents": "\nI've have a miniature data-warehouse in which I'm trying to rebuild\npre-calcuated aggregate data directly in the database and I'm geting some\npoor plans due to a bad mis-estimation of the number of rows involved.\n\nIn a standard star schema I have a sales fact table and dimensions\nproduct, customer, and period. From those dimensions I have created\n\"shrunken\" versions of them that only have productline, salesperson and\nmonth data. Now I want to rollup the base fact table to a \"shrunken\"\nversion with data summed up for these smaller aggregate dimensions.\n\nThe idea is to take a sales table (productid, customerid, periodid,\nquantity, usdamount) and create a table with the same columns that have\nthe \"id\" columns pointing to the matching smaller dimensions and total up\nthe quantity and usdamount. Since the shrunken dimension tables have\nrenumbered ids we look these up by joining on all of the common columns\nbetween the base and shrunken dimensions. The following query does just\nthat:\n\nCREATE TABLE shf_sales_by_salesperson_productline_month AS\nSELECT SUM(sales.quantity) AS quantity,\n\tSUM(sales.usdamount) AS usdamount,\n\tshd_productline.id AS productid,\n\tshd_month.id AS periodid,\n shd_salesperson.id AS customerid\nFROM sales\nJOIN (\n SELECT shd_productline.id, product.id AS productid\n\tFROM product, shd_productline\n\tWHERE product.productline = shd_productline.productline\n\t\tAND product.category = shd_productline.category\n\t\tAND product.platform = shd_productline.platform\n ) shd_productline\nON sales.productid = shd_productline.productid\nJOIN (\nSELECT shd_month.id, period.id AS periodid\n FROM period, shd_month\n WHERE period.monthnumber = shd_month.monthnumber\n\t\tAND period.monthname = shd_month.monthname\n\t\tAND period.year = shd_month.year\n\t\tAND period.monthyear = shd_month.monthyear\n\t\tAND period.quarter = shd_month.quarter\n\t\tAND period.quarteryear = shd_month.quarteryear\n ) shd_month\nON sales.periodid = shd_month.periodid\nJOIN (\n SELECT shd_salesperson.id, customer.id AS customerid\n FROM customer, shd_salesperson\n WHERE customer.salesperson = shd_salesperson.salesperson\n ) shd_salesperson\nON sales.customerid = shd_salesperson.customerid\n\nGROUP BY shd_productline.id, shd_month.id, shd_salesperson.id\n\nThis generates the following EXPLAIN ANALYZE plan for the SELECT portion:\n\n HashAggregate (cost=32869.33..32869.34 rows=1 width=36) (actual time=475182.855..475188.304 rows=911 loops=1)\n -> Nested Loop (cost=377.07..32869.32 rows=1 width=36) (actual time=130.179..464299.167 rows=1232140 loops=1)\n Join Filter: (\"outer\".salesperson = \"inner\".salesperson)\n -> Nested Loop (cost=377.07..32868.18 rows=1 width=44) (actual time=130.140..411975.760 rows=1232140 loops=1)\n Join Filter: (\"outer\".customerid = \"inner\".id)\n -> Hash Join (cost=377.07..32864.32 rows=1 width=32) (actual time=130.072..23167.501 rows=1232140 loops=1)\n Hash Cond: (\"outer\".productid = \"inner\".id)\n -> Hash Join (cost=194.23..32679.08 rows=375 width=28) (actual time=83.118..14019.802 rows=1232140 loops=1)\n Hash Cond: (\"outer\".periodid = \"inner\".id)\n -> Seq Scan on sales (cost=0.00..26320.40 rows=1232140 width=24) (actual time=0.109..3335.275 rows=1232140 loops=1)\n -> Hash (cost=194.23..194.23 rows=1 width=12) (actual time=81.548..81.548 rows=0 loops=1)\n -> Hash Join (cost=4.70..194.23 rows=1 width=12) (actual time=2.544..72.798 rows=3288 loops=1)\n Hash Cond: ((\"outer\".monthnumber = \"inner\".monthnumber) AND 
(\"outer\".monthname = \"inner\".monthname) AND (\"outer\".\"year\" = \"inner\".\"year\") AND (\"outer\".monthyear = \"inner\".monthyear) AND (\"outer\".quarter = \"inner\".quarter) AND (\"outer\".quarteryear = \"inner\".quarteryear))\n -> Seq Scan on period (cost=0.00..90.88 rows=3288 width=54) (actual time=0.009..9.960 rows=3288 loops=1)\n -> Hash (cost=3.08..3.08 rows=108 width=58) (actual time=1.643..1.643 rows=0 loops=1)\n -> Seq Scan on shd_month (cost=0.00..3.08 rows=108 width=58) (actual time=0.079..0.940 rows=108 loops=1)\n -> Hash (cost=182.18..182.18 rows=265 width=12) (actual time=45.431..45.431 rows=0 loops=1)\n -> Hash Join (cost=1.23..182.18 rows=265 width=12) (actual time=1.205..40.216 rows=1932 loops=1)\n Hash Cond: ((\"outer\".productline = \"inner\".productline) AND (\"outer\".category = \"inner\".category) AND (\"outer\".platform = \"inner\".platform))\n -> Seq Scan on product (cost=0.00..149.32 rows=1932 width=32) (actual time=0.013..6.179 rows=1932 loops=1)\n -> Hash (cost=1.13..1.13 rows=13 width=45) (actual time=0.199..0.199 rows=0 loops=1)\n -> Seq Scan on shd_productline (cost=0.00..1.13 rows=13 width=45) (actual time=0.048..0.083 rows=13 loops=1)\n -> Seq Scan on customer (cost=0.00..2.83 rows=83 width=20) (actual time=0.005..0.174 rows=83 loops=1232140)\n -> Seq Scan on shd_salesperson (cost=0.00..1.06 rows=6 width=24) (actual time=0.004..0.019 rows=6 loops=1232140)\n Total runtime: 475197.372 ms\n(25 rows)\n\nNote that the estimated number of input rows to the final HashAggreggate \nis 1 while the actual number is 1.2 million. By rewriting the JOIN \nconditions to LEFT JOIN we force the planner to recognize that there will \nbe a match for every row in the sales table:\n\n HashAggregate (cost=74601.88..74644.00 rows=8424 width=36) (actual time=39956.115..39961.507 rows=911 loops=1)\n -> Hash Left Join (cost=382.43..59200.13 rows=1232140 width=36) (actual time=140.879..30765.373 rows=1232140 loops=1)\n Hash Cond: (\"outer\".customerid = \"inner\".id)\n -> Hash Left Join (cost=377.07..40712.67 rows=1232140 width=32) (actual time=136.069..22721.760 rows=1232140 loops=1)\n Hash Cond: (\"outer\".periodid = \"inner\".id)\n -> Hash Left Join (cost=182.84..34353.99 rows=1232140 width=28) (actual time=50.815..14742.610 rows=1232140 loops=1)\n Hash Cond: (\"outer\".productid = \"inner\".id)\n -> Seq Scan on sales (cost=0.00..26320.40 rows=1232140 width=24) (actual time=0.099..4490.148 rows=1232140 loops=1)\n -> Hash (cost=182.18..182.18 rows=265 width=12) (actual time=49.114..49.114 rows=0 loops=1)\n -> Hash Join (cost=1.23..182.18 rows=265 width=12) (actual time=1.331..43.662 rows=1932 loops=1)\n Hash Cond: ((\"outer\".productline = \"inner\".productline) AND (\"outer\".category = \"inner\".category) AND (\"outer\".platform = \"inner\".platform))\n -> Seq Scan on product (cost=0.00..149.32 rows=1932 width=32) (actual time=0.128..11.246 rows=1932 loops=1)\n -> Hash (cost=1.13..1.13 rows=13 width=45) (actual time=0.200..0.200 rows=0 loops=1)\n -> Seq Scan on shd_productline (cost=0.00..1.13 rows=13 width=45) (actual time=0.047..0.081 rows=13 loops=1)\n -> Hash (cost=194.23..194.23 rows=1 width=12) (actual time=83.651..83.651 rows=0 loops=1)\n -> Hash Join (cost=4.70..194.23 rows=1 width=12) (actual time=2.675..74.693 rows=3288 loops=1)\n Hash Cond: ((\"outer\".monthnumber = \"inner\".monthnumber) AND (\"outer\".monthname = \"inner\".monthname) AND (\"outer\".\"year\" = \"inner\".\"year\") AND (\"outer\".monthyear = \"inner\".monthyear) AND (\"outer\".quarter = 
\"inner\".quarter) AND (\"outer\".quarteryear = \"inner\".quarteryear))\n -> Seq Scan on period (cost=0.00..90.88 rows=3288 width=54) (actual time=0.118..12.126 rows=3288 loops=1)\n -> Hash (cost=3.08..3.08 rows=108 width=58) (actual time=1.658..1.658 rows=0 loops=1)\n -> Seq Scan on shd_month (cost=0.00..3.08 rows=108 width=58) (actual time=0.081..0.947 rows=108 loops=1)\n -> Hash (cost=5.15..5.15 rows=83 width=12) (actual time=3.131..3.131 rows=0 loops=1)\n -> Hash Join (cost=1.07..5.15 rows=83 width=12) (actual time=1.937..2.865 rows=83 loops=1)\n Hash Cond: (\"outer\".salesperson = \"inner\".salesperson)\n -> Seq Scan on customer (cost=0.00..2.83 rows=83 width=20) (actual time=0.137..0.437 rows=83 loops=1)\n -> Hash (cost=1.06..1.06 rows=6 width=24) (actual time=0.152..0.152 rows=0 loops=1)\n -> Seq Scan on shd_salesperson (cost=0.00..1.06 rows=6 width=24) (actual time=0.045..0.064 rows=6 loops=1)\n Total runtime: 39974.236 ms\n(27 rows)\n\nGiven better row estimates the resulting plan runs more than ten times \nfaster. Why is the planner doing so poorly with estimating the number of \nrows returned? I tried:\n\nSET default_statistics_target = 1000;\nVACUUM FULL ANALYZE;\n\nbut the results were the same. This is on 8.0beta4. Any ideas?\n\nKris Jurka\n\n", "msg_date": "Tue, 16 Nov 2004 04:10:17 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "mis-estimation on data-warehouse aggregate creation" }, { "msg_contents": "Tuesday, November 16, 2004, 10:10:17 AM, you wrote:\n\n\n> HashAggregate (cost=32869.33..32869.34 rows=1 width=36)\n ^\n> (actual time=475182.855..475188.304 rows=911 loops=1)\n ^^^\n> -> Nested Loop (cost=377.07..32869.32 rows=1 width=36)\n ^\n> (actual time=130.179..464299.167 rows=1232140 loops=1)\n ^^^^^^^\nLet me guess... You've never run \"analyze\" on your tables ?\n\nFred\n\n", "msg_date": "Tue, 16 Nov 2004 11:55:59 +0100", "msg_from": "\"F. Senault\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mis-estimation on data-warehouse aggregate creation" }, { "msg_contents": "\n\nOn Tue, 16 Nov 2004, F. Senault wrote:\n\n> Let me guess... You've never run \"analyze\" on your tables ?\n> \n\nNo, I have. 
I mentioned that I did in my email, but you can also tell by\nthe exactly correct guesses for some other plan steps:\n\n-> Seq Scan on period (cost=0.00..90.88 rows=3288 width=54) (actual time=0.118..12.126 rows=3288 loops=1)\n\nKris Jurka\n", "msg_date": "Tue, 16 Nov 2004 14:49:31 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: mis-estimation on data-warehouse aggregate creation" }, { "msg_contents": "On Tue, 2004-11-16 at 09:10, Kris Jurka wrote:\n> By rewriting the JOIN \n> conditions to LEFT JOIN we force the planner to recognize that there will \n> be a match for every row in the sales table:\n> \n\nYou realise that returns a different answer (or at least it potentially\ndoes, depending upon your data?\n\n> -> Hash Join (cost=4.70..194.23 rows=1 width=12) (actual time=2.675..74.693 rows=3288 loops=1)\n> Hash Cond: ((\"outer\".monthnumber = \"inner\".monthnumber) AND (\"outer\".monthname = \"inner\".monthname) AND (\"outer\".\"year\" = \"inner\".\"year\") AND (\"outer\".monthyear = \"inner\".monthyear) AND (\"outer\".quarter = \"inner\".quarter) AND (\"outer\".quarteryear = \"inner\".quarteryear))\n> -> Seq Scan on period (cost=0.00..90.88 rows=3288 width=54) (actual time=0.118..12.126 rows=3288 loops=1)\n> -> Hash (cost=3.08..3.08 rows=108 width=58) (actual time=1.658..1.658 rows=0 loops=1)\n> -> Seq Scan on shd_month (cost=0.00..3.08 rows=108 width=58) (actual time=0.081..0.947 rows=108 loops=1)\n\nISTM your trouble starts here ^^^\nestimate=1, but rows=3288 \n\nThe join condition has so many ANDed predicates that we assume that this\nwill reduce the selectivity considerably. It does not, and so you pay\nthe cost dearly later on.\n\nIn both plans, the trouble starts at this point.\n\nIf you pre-build tables that have only a single join column between the\nfull.oldids and shrunken.renumberedids then this will most likely work\ncorrectly, since the planner will be able to correctly estimate the join\nselectivity. i.e. put product.id onto shd_productline ahead of time, so\nyou can avoid the complex join.\n\nSetting join_collapse_limit lower doesn't look like it would help, since\nthe plan already shows joining the sub-queries together first.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 16 Nov 2004 21:29:56 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mis-estimation on data-warehouse aggregate creation" }, { "msg_contents": "\n\nOn Tue, 16 Nov 2004, Simon Riggs wrote:\n\n> The join condition has so many ANDed predicates that we assume that this\n> will reduce the selectivity considerably. It does not, and so you pay\n> the cost dearly later on.\n> \n\nYes, that makes a lot of sense. Without some incredibly good cross-column\nstatistics there is no way it could expect all of the rows to match. \nThanks for the analysis.\n\nKris Jurka\n", "msg_date": "Wed, 17 Nov 2004 03:19:39 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: mis-estimation on data-warehouse aggregate creation" } ]
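A sketch of the pre-built mapping tables suggested above, shown here for the productline dimension; the mapping table name is made up and the same pattern would apply to the month and salesperson dimensions:

CREATE TABLE product_to_productline AS
    SELECT product.id AS productid, shd_productline.id AS productlineid
    FROM product, shd_productline
    WHERE product.productline = shd_productline.productline
      AND product.category    = shd_productline.category
      AND product.platform    = shd_productline.platform;
ANALYZE product_to_productline;
-- The rollup query can then join on the single productid column, which the
-- planner can estimate far better than the multi-column ANDed join condition.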
[ { "msg_contents": "Hi,\n\nI'm completly dispointed with Tsearch2 ...\n\nI have a table like this :\n Table \"public.site\"\n Column | Type | \nModifiers\n---------------+-----------------------------+---------------------------------------------------------------\n id_site | integer | not null default \nnextval('public.site_id_site_seq'::text)\n site_name | text |\n site_url | text |\n url | text |\n language | text |\n datecrea | date | default now()\n id_category | integer |\n time_refresh | integer |\n active | integer |\n error | integer |\n description | text |\n version | text |\n idx_site_name | tsvector |\n lastcheck | date |\n lastupdate | timestamp without time zone |\nIndexes:\n \"site_id_site_key\" unique, btree (id_site)\n \"ix_idx_site_name\" gist (idx_site_name)\nTriggers:\n tsvectorupdate_site_name BEFORE INSERT OR UPDATE ON site FOR EACH ROW \nEXECUTE PROCEDURE tsearch2('idx_site_name', 'site_name')\n\nI have 183 956 records in the database ...\n\nSELECT s.site_name, s.id_site, s.description, s.site_url, \n case when exists (select id_user \n from user_choice u \n where u.id_site=s.id_site \n and u.id_user = 1) then 1\n else 0 end as bookmarked \n FROM site s \nWHERE s.idx_site_name @@ to_tsquery('atari');\n\nExplain Analyze :\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_idx_site_name on site s (cost=0.00..1202.12 rows=184 \nwidth=158) (actual time=4687.674..4698.422 rows=1 loops=1)\n Index Cond: (idx_site_name @@ '\\'atari\\''::tsquery)\n Filter: (idx_site_name @@ '\\'atari\\''::tsquery)\n SubPlan\n -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4) (actual \ntime=0.232..0.232 rows=0 loops=1)\n Filter: ((id_site = $0) AND (id_user = 1))\n Total runtime: 4698.608 ms\n\nFirst time I run the request I have a result in about 28 seconds.\n\nSELECT s.site_name, s.id_site, s.description, s.site_url, \n case when exists (select id_user \n from user_choice u \n where u.id_site=s.id_site \n and u.id_user = 1) then 1\n else 0 end as bookmarked \n FROM site_rss s \nWHERE s.site_name ilike '%atari%'\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Seq Scan on site_rss s (cost=0.00..11863.16 rows=295 width=158) (actual \ntime=17.414..791.937 rows=12 loops=1)\n Filter: (site_name ~~* '%atari%'::text)\n SubPlan\n -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4) (actual \ntime=0.222..0.222 rows=0 loops=12)\n Filter: ((id_site = $0) AND (id_user = 1))\n Total runtime: 792.099 ms\n\nFirst time I run the request I have a result in about 789 miliseconds !!???\n\nI'm using PostgreSQL v7.4.6 with a Bi-Penitum III 933 Mhz and 1 Gb of RAM.\n\nAny idea ... ? For the moment I'm going back to use the ilike solution ... but \nI was really thinking that Tsearch2 could be a better solution ...\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Tue, 16 Nov 2004 15:55:58 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Tsearch2 really slower than ilike ?" }, { "msg_contents": "On Tue, Nov 16, 2004 at 03:55:58PM +0100, Herv� Piedvache wrote:\n\n> WHERE s.idx_site_name @@ to_tsquery('atari');\n\nHow much text does each site_name field contain? 
From the field\nname I'd guess only a few words. Based on my own experience, if\nthe fields were documents containing thousands of words then I'd\nexpect tsearch2 to be faster than ILIKE by an order of magnitude\nor more.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Tue, 16 Nov 2004 08:32:57 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "Michael,\n\nLe Mardi 16 Novembre 2004 16:32, Michael Fuhr a ᅵcrit :\n> On Tue, Nov 16, 2004 at 03:55:58PM +0100, Hervᅵ Piedvache wrote:\n> > WHERE s.idx_site_name @@ to_tsquery('atari');\n>\n> How much text does each site_name field contain? From the field\n> name I'd guess only a few words. Based on my own experience, if\n> the fields were documents containing thousands of words then I'd\n> expect tsearch2 to be faster than ILIKE by an order of magnitude\n> or more.\n\nYes site name ... is company names or web site name ... so not many word in \neach record ... but I don't understand why more words are more efficient than \nfew words ?? sorry ...\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Tue, 16 Nov 2004 16:48:11 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": ">>or more.\n>> \n>>\n>\n>Yes site name ... is company names or web site name ... so not many word in \n>each record ... but I don't understand why more words are more efficient than \n>few words ?? sorry ...\n> \n>\nWell there are a couple of reasons but the easiest one is index size.\nAn ILIKE btree index is in general going to be much smaller than a gist \nindex.\nThe smaller the index the faster it is searched.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>Regards,\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 16 Nov 2004 08:04:28 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": ">\n> QUERY PLAN\n>----------------------------------------------------------------------------------------------------------------\n> Seq Scan on site_rss s (cost=0.00..11863.16 rows=295 width=158) (actual \n>time=17.414..791.937 rows=12 loops=1)\n> Filter: (site_name ~~* '%atari%'::text)\n> SubPlan\n> -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4) (actual \n>time=0.222..0.222 rows=0 loops=12)\n> Filter: ((id_site = $0) AND (id_user = 1))\n> Total runtime: 792.099 ms\n>\n>First time I run the request I have a result in about 789 miliseconds !!???\n>\n>I'm using PostgreSQL v7.4.6 with a Bi-Penitum III 933 Mhz and 1 Gb of RAM.\n>\n>Any idea ... ? For the moment I'm going back to use the ilike solution ... but \n>I was really thinking that Tsearch2 could be a better solution ...\n>\n> \n>\n\nWell I would be curious about what happens the second time you run the \nquery.\nThe first time is kind of a bad example because it has to push the index \ninto ram.\n\nSincerely,\n\nJoshua D. 
Drake\n\n\n\n>Regards,\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 16 Nov 2004 08:06:25 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "could you provide me a dump of your table (just id and tsvector columns),\nso I could try on my computer. Also, plain query (simple and clean) which\ndemonstrated your problem would be preferred next time !\n\n Oleg\nOn Tue, 16 Nov 2004, [iso-8859-15] Herv? Piedvache wrote:\n\n> Michael,\n>\n> Le Mardi 16 Novembre 2004 16:32, Michael Fuhr a ?crit :\n>> On Tue, Nov 16, 2004 at 03:55:58PM +0100, Herv? Piedvache wrote:\n>>> WHERE s.idx_site_name @@ to_tsquery('atari');\n>>\n>> How much text does each site_name field contain? From the field\n>> name I'd guess only a few words. Based on my own experience, if\n>> the fields were documents containing thousands of words then I'd\n>> expect tsearch2 to be faster than ILIKE by an order of magnitude\n>> or more.\n>\n> Yes site name ... is company names or web site name ... so not many word in\n> each record ... but I don't understand why more words are more efficient than\n> few words ?? sorry ...\n>\n> Regards,\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 16 Nov 2004 19:17:35 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "Le Mardi 16 Novembre 2004 17:06, Joshua D. Drake a ᅵcrit :\n> > QUERY PLAN\n> >--------------------------------------------------------------------------\n> >-------------------------------------- Seq Scan on site_rss s \n> > (cost=0.00..11863.16 rows=295 width=158) (actual time=17.414..791.937\n> > rows=12 loops=1)\n> > Filter: (site_name ~~* '%atari%'::text)\n> > SubPlan\n> > -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4)\n> > (actual time=0.222..0.222 rows=0 loops=12)\n> > Filter: ((id_site = $0) AND (id_user = 1))\n> > Total runtime: 792.099 ms\n> >\n> >First time I run the request I have a result in about 789 miliseconds\n> > !!???\n> >\n> >I'm using PostgreSQL v7.4.6 with a Bi-Penitum III 933 Mhz and 1 Gb of RAM.\n> >\n> >Any idea ... ? For the moment I'm going back to use the ilike solution ...\n> > but I was really thinking that Tsearch2 could be a better solution ...\n>\n> Well I would be curious about what happens the second time you run the\n> query.\n> The first time is kind of a bad example because it has to push the index\n> into ram.\n\nThe second time is really quicker yes ... about 312 miliseconds ... \nBut for each search I have after it take about 3 or 4 seconds ...\nSo what can I do ?\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 
33-144949902\n", "msg_date": "Tue, 16 Nov 2004 17:21:18 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "On Tue, 16 Nov 2004, Joshua D. Drake wrote:\n\n>\n>>> or more.\n>>> \n>> \n>> Yes site name ... is company names or web site name ... so not many word in \n>> each record ... but I don't understand why more words are more efficient \n>> than few words ?? sorry ...\n>> \n> Well there are a couple of reasons but the easiest one is index size.\n> An ILIKE btree index is in general going to be much smaller than a gist \n> index.\n> The smaller the index the faster it is searched.\n\nfor single word queries @@ should have the same performance as ilike with \nindex disabled and better for complex queries.\n\n\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n>\n>> Regards,\n>> \n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Tue, 16 Nov 2004 19:27:05 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "ok, I downloaded dump of table and here is what I found:\n\nzz=# select count(*) from tt;\n count \n--------\n 183956\n(1 row)\n\nzz=# select * from stat('select tt from tt') order by ndoc desc, nentry\ndesc,wo\nrd limit 10;\n word | ndoc | nentry \n--------------+-------+--------\n blog | 12710 | 12835\n weblog | 4857 | 4859\n news | 4402 | 4594\n life | 4136 | 4160\n world | 1980 | 1986\n journal | 1882 | 1883\n livejourn | 1737 | 1737\n thought | 1669 | 1677\n web | 1154 | 1161\n scotsman.com | 1138 | 1138\n(10 rows)\n\nzz=# explain analyze select tt from tt where tt @@ 'blog';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Index Scan using tt_idx on tt (cost=0.00..728.83 rows=184 width=32) (actual time=0.047..141.110 rows=12710 loops=1)\n Index Cond: (tt @@ '\\'blog\\''::tsquery)\n Filter: (tt @@ '\\'blog\\''::tsquery)\n Total runtime: 154.105 ms\n(4 rows)\n\nIt's really fast ! So, I don't understand your problem. \nI run query on my desktop machine, nothing special.\n\n\n \tOleg\nOn Tue, 16 Nov 2004, [iso-8859-15] Herv? 
Piedvache wrote:\n\n> Hi,\n>\n> I'm completly dispointed with Tsearch2 ...\n>\n> I have a table like this :\n> Table \"public.site\"\n> Column | Type |\n> Modifiers\n> ---------------+-----------------------------+---------------------------------------------------------------\n> id_site | integer | not null default\n> nextval('public.site_id_site_seq'::text)\n> site_name | text |\n> site_url | text |\n> url | text |\n> language | text |\n> datecrea | date | default now()\n> id_category | integer |\n> time_refresh | integer |\n> active | integer |\n> error | integer |\n> description | text |\n> version | text |\n> idx_site_name | tsvector |\n> lastcheck | date |\n> lastupdate | timestamp without time zone |\n> Indexes:\n> \"site_id_site_key\" unique, btree (id_site)\n> \"ix_idx_site_name\" gist (idx_site_name)\n> Triggers:\n> tsvectorupdate_site_name BEFORE INSERT OR UPDATE ON site FOR EACH ROW\n> EXECUTE PROCEDURE tsearch2('idx_site_name', 'site_name')\n>\n> I have 183 956 records in the database ...\n>\n> SELECT s.site_name, s.id_site, s.description, s.site_url,\n> case when exists (select id_user\n> from user_choice u\n> where u.id_site=s.id_site\n> and u.id_user = 1) then 1\n> else 0 end as bookmarked\n> FROM site s\n> WHERE s.idx_site_name @@ to_tsquery('atari');\n>\n> Explain Analyze :\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using ix_idx_site_name on site s (cost=0.00..1202.12 rows=184\n> width=158) (actual time=4687.674..4698.422 rows=1 loops=1)\n> Index Cond: (idx_site_name @@ '\\'atari\\''::tsquery)\n> Filter: (idx_site_name @@ '\\'atari\\''::tsquery)\n> SubPlan\n> -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4) (actual\n> time=0.232..0.232 rows=0 loops=1)\n> Filter: ((id_site = $0) AND (id_user = 1))\n> Total runtime: 4698.608 ms\n>\n> First time I run the request I have a result in about 28 seconds.\n>\n> SELECT s.site_name, s.id_site, s.description, s.site_url,\n> case when exists (select id_user\n> from user_choice u\n> where u.id_site=s.id_site\n> and u.id_user = 1) then 1\n> else 0 end as bookmarked\n> FROM site_rss s\n> WHERE s.site_name ilike '%atari%'\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Seq Scan on site_rss s (cost=0.00..11863.16 rows=295 width=158) (actual\n> time=17.414..791.937 rows=12 loops=1)\n> Filter: (site_name ~~* '%atari%'::text)\n> SubPlan\n> -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4) (actual\n> time=0.222..0.222 rows=0 loops=12)\n> Filter: ((id_site = $0) AND (id_user = 1))\n> Total runtime: 792.099 ms\n>\n> First time I run the request I have a result in about 789 miliseconds !!???\n>\n> I'm using PostgreSQL v7.4.6 with a Bi-Penitum III 933 Mhz and 1 Gb of RAM.\n>\n> Any idea ... ? For the moment I'm going back to use the ilike solution ... 
but\n> I was really thinking that Tsearch2 could be a better solution ...\n>\n> Regards,\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Wed, 17 Nov 2004 00:13:08 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "Oleg,\n\nSorry but when I do your request I get :\n# select id_site from site where idx_site_name @@ ᅵ'livejourn';\nERROR: ᅵtype \"ᅵ\" does not exist\n\nWhat is this ?\n\n(private: I don't know what happend with my mail, but I do nothing special to \ndisturb the contains when I'm writting to you ...)\n\nLe Mardi 16 Novembre 2004 22:13, Oleg Bartunov a ᅵcrit :\n> ok, I downloaded dump of table and here is what I found:\n>\n> zz=# select count(*) from tt;\n> count\n> --------\n> 183956\n> (1 row)\n>\n> zz=# select * from stat('select tt from tt') order by ndoc desc, nentry\n> desc,wo\n> rd limit 10;\n> word | ndoc | nentry\n> --------------+-------+--------\n> blog | 12710 | 12835\n> weblog | 4857 | 4859\n> news | 4402 | 4594\n> life | 4136 | 4160\n> world | 1980 | 1986\n> journal | 1882 | 1883\n> livejourn | 1737 | 1737\n> thought | 1669 | 1677\n> web | 1154 | 1161\n> scotsman.com | 1138 | 1138\n> (10 rows)\n>\n> zz=# explain analyze select tt from tt where tt @@ 'blog';\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>------------------------------------------- Index Scan using tt_idx on tt \n> (cost=0.00..728.83 rows=184 width=32) (actual time=0.047..141.110\n> rows=12710 loops=1) Index Cond: (tt @@ '\\'blog\\''::tsquery)\n> Filter: (tt @@ '\\'blog\\''::tsquery)\n> Total runtime: 154.105 ms\n> (4 rows)\n>\n> It's really fast ! So, I don't understand your problem.\n> I run query on my desktop machine, nothing special.\n>\n>\n> \tOleg\n>\n> On Tue, 16 Nov 2004, [iso-8859-15] Herv? 
Piedvache wrote:\n> > Hi,\n> >\n> > I'm completly dispointed with Tsearch2 ...\n> >\n> > I have a table like this :\n> > Table \"public.site\"\n> > Column | Type |\n> > Modifiers\n> > ---------------+-----------------------------+---------------------------\n> >------------------------------------ id_site | integer \n> > | not null default\n> > nextval('public.site_id_site_seq'::text)\n> > site_name | text |\n> > site_url | text |\n> > url | text |\n> > language | text |\n> > datecrea | date | default now()\n> > id_category | integer |\n> > time_refresh | integer |\n> > active | integer |\n> > error | integer |\n> > description | text |\n> > version | text |\n> > idx_site_name | tsvector |\n> > lastcheck | date |\n> > lastupdate | timestamp without time zone |\n> > Indexes:\n> > \"site_id_site_key\" unique, btree (id_site)\n> > \"ix_idx_site_name\" gist (idx_site_name)\n> > Triggers:\n> > tsvectorupdate_site_name BEFORE INSERT OR UPDATE ON site FOR EACH ROW\n> > EXECUTE PROCEDURE tsearch2('idx_site_name', 'site_name')\n> >\n> > I have 183 956 records in the database ...\n> >\n> > SELECT s.site_name, s.id_site, s.description, s.site_url,\n> > case when exists (select id_user\n> > from user_choice u\n> > where u.id_site=s.id_site\n> > and u.id_user = 1)\n> > then 1 else 0 end as bookmarked\n> > FROM site s\n> > WHERE s.idx_site_name @@ to_tsquery('atari');\n> >\n> > Explain Analyze :\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >----------------------------------------------------------------- Index\n> > Scan using ix_idx_site_name on site s (cost=0.00..1202.12 rows=184\n> > width=158) (actual time=4687.674..4698.422 rows=1 loops=1)\n> > Index Cond: (idx_site_name @@ '\\'atari\\''::tsquery)\n> > Filter: (idx_site_name @@ '\\'atari\\''::tsquery)\n> > SubPlan\n> > -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4)\n> > (actual time=0.232..0.232 rows=0 loops=1)\n> > Filter: ((id_site = $0) AND (id_user = 1))\n> > Total runtime: 4698.608 ms\n> >\n> > First time I run the request I have a result in about 28 seconds.\n> >\n> > SELECT s.site_name, s.id_site, s.description, s.site_url,\n> > case when exists (select id_user\n> > from user_choice u\n> > where u.id_site=s.id_site\n> > and u.id_user = 1)\n> > then 1 else 0 end as bookmarked\n> > FROM site_rss s\n> > WHERE s.site_name ilike '%atari%'\n> >\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >--------------------------------------- Seq Scan on site_rss s \n> > (cost=0.00..11863.16 rows=295 width=158) (actual time=17.414..791.937\n> > rows=12 loops=1)\n> > Filter: (site_name ~~* '%atari%'::text)\n> > SubPlan\n> > -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4)\n> > (actual time=0.222..0.222 rows=0 loops=12)\n> > Filter: ((id_site = $0) AND (id_user = 1))\n> > Total runtime: 792.099 ms\n> >\n> > First time I run the request I have a result in about 789 miliseconds\n> > !!???\n> >\n> > I'm using PostgreSQL v7.4.6 with a Bi-Penitum III 933 Mhz and 1 Gb of\n> > RAM.\n> >\n> > Any idea ... ? For the moment I'm going back to use the ilike solution\n> > ... 
but I was really thinking that Tsearch2 could be a better solution\n> > ...\n> >\n> > Regards,\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Wed, 17 Nov 2004 18:16:10 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": " This message is in MIME format. The first part should be readable text,\n while the remaining parts are likely unreadable without MIME-aware tools.\n\n---559023410-1817792895-1100712215=:18871\nContent-Type: TEXT/PLAIN; charset=koi8-r; format=flowed\nContent-Transfer-Encoding: 8BIT\n\n1;2c1;2c1;2cOn Wed, 17 Nov 2004, [iso-8859-15] Herv? Piedvache wrote:\n\n> Oleg,\n>\n> Sorry but when I do your request I get :\n> # select id_site from site where idx_site_name @@ О©╫'livejourn';\n> ERROR: О©╫type \"О©╫\" does not exist\n>\n\nno idea :) btw, what version of postgresql and OS you're running.\nCould you try minimal test - check sql commands from tsearch2 sources,\nsome basic queries from tsearch2 documentation, tutorials.\n\nbtw, your query should looks like\nselect id_site from site_rss where idx_site_name @@ 'livejourn';\n ^^^^^^^^\n\nHow did you run your queries at all ? I mean your first message about \npoor tsearch2 performance.\n\n1;2c1;2c1;2c\n\n> What is this ?\n>\n> (private: I don't know what happend with my mail, but I do nothing special to\n> disturb the contains when I'm writting to you ...)\n>\n> Le Mardi 16 Novembre 2004 22:13, Oleg Bartunov a ?crit :\n>> ok, I downloaded dump of table and here is what I found:\n>>\n>> zz=# select count(*) from tt;\n>> count\n>> --------\n>> 183956\n>> (1 row)\n>>\n>> zz=# select * from stat('select tt from tt') order by ndoc desc, nentry\n>> desc,wo\n>> rd limit 10;\n>> word | ndoc | nentry\n>> --------------+-------+--------\n>> blog | 12710 | 12835\n>> weblog | 4857 | 4859\n>> news | 4402 | 4594\n>> life | 4136 | 4160\n>> world | 1980 | 1986\n>> journal | 1882 | 1883\n>> livejourn | 1737 | 1737\n>> thought | 1669 | 1677\n>> web | 1154 | 1161\n>> scotsman.com | 1138 | 1138\n>> (10 rows)\n>>\n>> zz=# explain analyze select tt from tt where tt @@ 'blog';\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------\n>> ------------------------------------------- Index Scan using tt_idx on tt\n>> (cost=0.00..728.83 rows=184 width=32) (actual time=0.047..141.110\n>> rows=12710 loops=1) Index Cond: (tt @@ '\\'blog\\''::tsquery)\n>> Filter: (tt @@ '\\'blog\\''::tsquery)\n>> Total runtime: 154.105 ms\n>> (4 rows)\n>>\n>> It's really fast ! So, I don't understand your problem.\n>> I run query on my desktop machine, nothing special.\n>>\n>>\n>> \tOleg\n>>\n>> On Tue, 16 Nov 2004, [iso-8859-15] Herv? 
Piedvache wrote:\n>>> Hi,\n>>>\n>>> I'm completly dispointed with Tsearch2 ...\n>>>\n>>> I have a table like this :\n>>> Table \"public.site\"\n>>> Column | Type |\n>>> Modifiers\n>>> ---------------+-----------------------------+---------------------------\n>>> ------------------------------------ id_site | integer\n>>> | not null default\n>>> nextval('public.site_id_site_seq'::text)\n>>> site_name | text |\n>>> site_url | text |\n>>> url | text |\n>>> language | text |\n>>> datecrea | date | default now()\n>>> id_category | integer |\n>>> time_refresh | integer |\n>>> active | integer |\n>>> error | integer |\n>>> description | text |\n>>> version | text |\n>>> idx_site_name | tsvector |\n>>> lastcheck | date |\n>>> lastupdate | timestamp without time zone |\n>>> Indexes:\n>>> \"site_id_site_key\" unique, btree (id_site)\n>>> \"ix_idx_site_name\" gist (idx_site_name)\n>>> Triggers:\n>>> tsvectorupdate_site_name BEFORE INSERT OR UPDATE ON site FOR EACH ROW\n>>> EXECUTE PROCEDURE tsearch2('idx_site_name', 'site_name')\n>>>\n>>> I have 183 956 records in the database ...\n>>>\n>>> SELECT s.site_name, s.id_site, s.description, s.site_url,\n>>> case when exists (select id_user\n>>> from user_choice u\n>>> where u.id_site=s.id_site\n>>> and u.id_user = 1)\n>>> then 1 else 0 end as bookmarked\n>>> FROM site s\n>>> WHERE s.idx_site_name @@ to_tsquery('atari');\n>>>\n>>> Explain Analyze :\n>>> QUERY PLAN\n>>> -------------------------------------------------------------------------\n>>> ----------------------------------------------------------------- Index\n>>> Scan using ix_idx_site_name on site s (cost=0.00..1202.12 rows=184\n>>> width=158) (actual time=4687.674..4698.422 rows=1 loops=1)\n>>> Index Cond: (idx_site_name @@ '\\'atari\\''::tsquery)\n>>> Filter: (idx_site_name @@ '\\'atari\\''::tsquery)\n>>> SubPlan\n>>> -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4)\n>>> (actual time=0.232..0.232 rows=0 loops=1)\n>>> Filter: ((id_site = $0) AND (id_user = 1))\n>>> Total runtime: 4698.608 ms\n>>>\n>>> First time I run the request I have a result in about 28 seconds.\n>>>\n>>> SELECT s.site_name, s.id_site, s.description, s.site_url,\n>>> case when exists (select id_user\n>>> from user_choice u\n>>> where u.id_site=s.id_site\n>>> and u.id_user = 1)\n>>> then 1 else 0 end as bookmarked\n>>> FROM site_rss s\n>>> WHERE s.site_name ilike '%atari%'\n>>>\n>>> QUERY PLAN\n>>> -------------------------------------------------------------------------\n>>> --------------------------------------- Seq Scan on site_rss s\n>>> (cost=0.00..11863.16 rows=295 width=158) (actual time=17.414..791.937\n>>> rows=12 loops=1)\n>>> Filter: (site_name ~~* '%atari%'::text)\n>>> SubPlan\n>>> -> Seq Scan on user_choice u (cost=0.00..3.46 rows=1 width=4)\n>>> (actual time=0.222..0.222 rows=0 loops=12)\n>>> Filter: ((id_site = $0) AND (id_user = 1))\n>>> Total runtime: 792.099 ms\n>>>\n>>> First time I run the request I have a result in about 789 miliseconds\n>>> !!???\n>>>\n>>> I'm using PostgreSQL v7.4.6 with a Bi-Penitum III 933 Mhz and 1 Gb of\n>>> RAM.\n>>>\n>>> Any idea ... ? For the moment I'm going back to use the ilike solution\n>>> ... 
but I was really thinking that Tsearch2 could be a better solution\n>>> ...\n>>>\n>>> Regards,\n>>\n>> \tRegards,\n>> \t\tOleg\n>> _____________________________________________________________\n>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>> Sternberg Astronomical Institute, Moscow University (Russia)\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(095)939-16-83, +007(095)939-23-83\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: the planner will ignore your desire to choose an index scan if your\n>> joining column's datatypes do not match\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n---559023410-1817792895-1100712215=:18871--\n", "msg_date": "Wed, 17 Nov 2004 20:23:35 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "Oleg,\n\nLe Mercredi 17 Novembre 2004 18:23, Oleg Bartunov a ᅵcrit :\n> > Sorry but when I do your request I get :\n> > # select id_site from site where idx_site_name @@ ᅵ'livejourn';\n> > ERROR: ᅵtype \"ᅵ\" does not exist\n>\n> no idea :) btw, what version of postgresql and OS you're running.\n> Could you try minimal test - check sql commands from tsearch2 sources,\n> some basic queries from tsearch2 documentation, tutorials.\n>\n> btw, your query should looks like\n> select id_site from site_rss where idx_site_name @@ 'livejourn';\n> ^^^^^^^^\n>\n> How did you run your queries at all ? I mean your first message about\n> poor tsearch2 performance.\n\nI don't know what happend yesterday ... it's running now ...\n\nYou sent me :\nzz=# explain analyze select id_site from site_rss where idx_site_name \n@@ ᅵ'livejourn';\nᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------\nᅵ Index Scan using ix_idx_site_name on site_rss ᅵ(cost=0.00..733.62 rows=184 \nwidth=4) (actual time=0.339..39.183 rows=1737 loops=1)\nᅵ ᅵ Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\nᅵ ᅵ Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\nᅵ Total runtime: 40.997 ms\n(4 rows)\n\n>It's really fast ! So, I don't understand your problem. \n>I run query on my desktop machine, nothing special.\n\n\nI get this :\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_idx_site_name on site_rss s (cost=0.00..574.19 rows=187 \nwidth=24) (actual time=105.097..7157.277 rows=388 loops=1)\n Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n Total runtime: 7158.576 ms\n(4 rows)\n\nWith the ilike I get :\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on site_rss s (cost=0.00..8360.23 rows=1 width=24) (actual \ntime=8.195..879.440 rows=404 loops=1)\n Filter: (site_name ~~* '%livejourn%'::text)\n Total runtime: 882.600 ms\n(3 rows)\n\nI don't know what is your desktop ... 
but I'm using PostgreSQL 7.4.6, on \nDebian Woody with a PC Bi-PIII 933 Mhz and 1 Gb of memory ... the server is \ndedicated to this database ... !!\n\nI have no idea !\n\nRegards,\n\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 18 Nov 2004 10:27:00 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": " This message is in MIME format. The first part should be readable text,\n while the remaining parts are likely unreadable without MIME-aware tools.\n\n---559023410-1271212614-1100770644=:18871\nContent-Type: TEXT/PLAIN; charset=koi8-r; format=flowed\nContent-Transfer-Encoding: 8BIT\n\nHave you run 'vacuum analyze' ?\n1;2c1;2c1;2c\n1;2c1;2c1;2cmy desktop is very simple PIII, 512 Mb RAM.\n1;2c1;2c1;2c\tOleg\n1;2c1;2c1;2c\n1;2c1;2c1;2cOn Thu, 18 Nov 2004, [iso-8859-15] Herv? Piedvache wrote:\n\n> Oleg,\n>\n> Le Mercredi 17 Novembre 2004 18:23, Oleg Bartunov a ?crit :\n>>> Sorry but when I do your request I get :\n>>> # select id_site from site where idx_site_name @@ О©╫'livejourn';\n>>> ERROR: О©╫type \"О©╫\" does not exist\n>>\n>> no idea :) btw, what version of postgresql and OS you're running.\n>> Could you try minimal test - check sql commands from tsearch2 sources,\n>> some basic queries from tsearch2 documentation, tutorials.\n>>\n>> btw, your query should looks like\n>> select id_site from site_rss where idx_site_name @@ 'livejourn';\n>> ^^^^^^^^\n>>\n>> How did you run your queries at all ? I mean your first message about\n>> poor tsearch2 performance.\n>\n> I don't know what happend yesterday ... it's running now ...\n>\n> You sent me :\n> zz=# explain analyze select id_site from site_rss where idx_site_name\n> @@ О©╫'livejourn';\n> О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> О©╫ Index Scan using ix_idx_site_name on site_rss О©╫(cost=0.00..733.62 rows=184\n> width=4) (actual time=0.339..39.183 rows=1737 loops=1)\n> О©╫ О©╫ Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> О©╫ О©╫ Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> О©╫ Total runtime: 40.997 ms\n> (4 rows)\n>\n>> It's really fast ! So, I don't understand your problem.\n>> I run query on my desktop machine, nothing special.\n>\n>\n> I get this :\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using ix_idx_site_name on site_rss s (cost=0.00..574.19 rows=187\n> width=24) (actual time=105.097..7157.277 rows=388 loops=1)\n> Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> Total runtime: 7158.576 ms\n> (4 rows)\n>\n> With the ilike I get :\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------\n> Seq Scan on site_rss s (cost=0.00..8360.23 rows=1 width=24) (actual\n> time=8.195..879.440 rows=404 loops=1)\n> Filter: (site_name ~~* '%livejourn%'::text)\n> Total runtime: 882.600 ms\n> (3 rows)\n>\n> I don't know what is your desktop ... 
but I'm using PostgreSQL 7.4.6, on\n> Debian Woody with a PC Bi-PIII 933 Mhz and 1 Gb of memory ... the server is\n> dedicated to this database ... !!\n>\n> I have no idea !\n>\n> Regards,\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n---559023410-1271212614-1100770644=:18871--\n", "msg_date": "Thu, 18 Nov 2004 12:37:24 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": "Le Jeudi 18 Novembre 2004 10:37, Oleg Bartunov a ᅵcrit :\n> Have you run 'vacuum analyze' ?\n\nYep every night VACUUM FULL VERBOSE ANALYZE; of all the database !\n\n> 1;2c1;2c1;2c\n> 1;2c1;2c1;2cmy desktop is very simple PIII, 512 Mb RAM.\n> 1;2c1;2c1;2c Oleg\n> 1;2c1;2c1;2c\n\nYOU send strange caracters ! ;o)\n\n> 1;2c1;2c1;2cOn Thu, 18 Nov 2004, [iso-8859-15] Herv? Piedvache wrote:\n> > Oleg,\n> >\n> > Le Mercredi 17 Novembre 2004 18:23, Oleg Bartunov a ?crit :\n> >>> Sorry but when I do your request I get :\n> >>> # select id_site from site where idx_site_name @@ ᅵ'livejourn';\n> >>> ERROR: ᅵtype \"ᅵ\" does not exist\n> >>\n> >> no idea :) btw, what version of postgresql and OS you're running.\n> >> Could you try minimal test - check sql commands from tsearch2 sources,\n> >> some basic queries from tsearch2 documentation, tutorials.\n> >>\n> >> btw, your query should looks like\n> >> select id_site from site_rss where idx_site_name @@ 'livejourn';\n> >> ^^^^^^^^\n> >>\n> >> How did you run your queries at all ? I mean your first message about\n> >> poor tsearch2 performance.\n> >\n> > I don't know what happend yesterday ... it's running now ...\n> >\n> > You sent me :\n> > zz=# explain analyze select id_site from site_rss where idx_site_name\n> > @@ ᅵ'livejourn';\n> > ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵ ᅵQUERY PLAN\n> > -------------------------------------------------------------------------\n> >---------------------------------------------------------- Index Scan\n> > using ix_idx_site_name on site_rss ᅵ(cost=0.00..733.62 rows=184 width=4)\n> > (actual time=0.339..39.183 rows=1737 loops=1)\n> > ᅵ ᅵ Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> > ᅵ ᅵ Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> > ᅵ Total runtime: 40.997 ms\n> > (4 rows)\n> >\n> >> It's really fast ! 
So, I don't understand your problem.\n> >> I run query on my desktop machine, nothing special.\n> >\n> > I get this :\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >---------------------------------------------------------------- Index\n> > Scan using ix_idx_site_name on site_rss s (cost=0.00..574.19 rows=187\n> > width=24) (actual time=105.097..7157.277 rows=388 loops=1)\n> > Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> > Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n> > Total runtime: 7158.576 ms\n> > (4 rows)\n> >\n> > With the ilike I get :\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >----------------------------------- Seq Scan on site_rss s \n> > (cost=0.00..8360.23 rows=1 width=24) (actual time=8.195..879.440 rows=404\n> > loops=1)\n> > Filter: (site_name ~~* '%livejourn%'::text)\n> > Total runtime: 882.600 ms\n> > (3 rows)\n> >\n> > I don't know what is your desktop ... but I'm using PostgreSQL 7.4.6, on\n> > Debian Woody with a PC Bi-PIII 933 Mhz and 1 Gb of memory ... the server\n> > is dedicated to this database ... !!\n> >\n> > I have no idea !\n> >\n> > Regards,\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 18 Nov 2004 11:30:13 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 really slower than ilike ?" }, { "msg_contents": " This message is in MIME format. The first part should be readable text,\n while the remaining parts are likely unreadable without MIME-aware tools.\n\n---559023410-1857409239-1100774060=:18871\nContent-Type: TEXT/PLAIN; charset=koi8-r; format=flowed\nContent-Transfer-Encoding: 8BIT\n\n1;2c1;2c1;2cBlin !\n\nwhat's happenning with my terminal when I read messagess from this guy ?\nI don't even know how to call him - I see just Herv?\n\n \tOleg\n1;2c1;2c1;2c1;2c\n1;2cOn Thu, 18 Nov 2004, [iso-8859-15] Herv? Piedvache wrote:\n\n> Le Jeudi 18 Novembre 2004 10:37, Oleg Bartunov a ?crit :\n>> Have you run 'vacuum analyze' ?\n>\n> Yep every night VACUUM FULL VERBOSE ANALYZE; of all the database !\n>\n>> 1;2c1;2c1;2c\n>> 1;2c1;2c1;2cmy desktop is very simple PIII, 512 Mb RAM.\n>> 1;2c1;2c11;2c1;2c1;2c;2c Oleg1;2c1;2c1;2c\n>> 11;2c1;2c1;2c;2c1;2c1;2c\n>\n> YOU send strange caracters ! ;o)\n>\n>> 1;2c1;2c1;2cOn Thu, 18 Nov 2004, [iso-8859-15] Herv? 
Piedvache wrote:\n>>> Oleg,\n>>>\n>>> Le Mercredi 17 Novembre 2004 18:23, Oleg Bartunov a ?crit :\n>>>>> Sorry but when I do your request I get :\n>>>>> # select id_site from site where idx_site_name @@ О©╫'livejourn';\n>>>>> ERROR: О©╫type \"О©╫\" d1;2c1;2c1;2c1;2coes not exist\n>>>>\n>>>> no idea :) btw, what version of postgresql and OS you're running.\n>>>> Could you try minimal test - check sql commands from tsearch2 sources,\n>>>> some basic queries from tsearch2 documentation, tutorials.\n>>>>\n>>>> btw, your query should looks like\n>>>> select id_site from site_rss where idx_site_name @@ 'livejourn';\n>>>> ^^^^^^^^\n>>>>\n>>>> How did you run your queries at all ? I mean your first message about\n>>>> poor tsearch2 performance.\n>>>\n>>> I don't know what happend yesterday ... it's running now ...\n>>>\n>>> You sent me :\n>>> zz=# explain analyze select id_site from site_rss where idx_site_name\n>>> @@ О©╫'livejourn';\n>>> О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫ О©╫QUERY PLAN\n>>> -------------------------------------------------------------------------\n>>> ---------------------------------------------------------- Index Scan\n>>> using ix_idx_site_name on site_rss О©╫(cost=0.00..733.62 rows=184 width=4)\n>>> (actual time=0.339..39.183 rows=1737 loops=1)\n>>> О©╫ О©╫ Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n>>> О©╫ О©╫ Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n>>> О©╫ Total runtime: 40.997 ms\n>>> (4 rows)\n>>>\n>>>> It's really fast ! So, I don't understand your problem.\n>>>> I run query on my desktop machine, nothing special.\n>>>\n>>> I get this :\n>>> QUERY PLAN\n>>> -------------------------------------------------------------------------\n>>> ---------------------------------------------------------------- Index\n>>> Scan using ix_idx_site_name on site_rss s (cost=0.00..574.19 rows=187\n>>> width=24) (actual time=105.097..7157.277 rows=388 loops=1)\n>>> Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n>>> Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\n>>> Total runtime: 7158.576 ms\n>>> (4 rows)\n>>>\n>>> With the ilike I get :\n>>> QUERY PLAN\n>>> -------------------------------------------------------------------------\n>>> ----------------------------------- Seq Scan on site_rss s\n>>> (cost=0.00..8360.23 rows=1 width=24) (actual time=8.195..879.440 rows=404\n>>> loops=1)\n>>> Filter: (site_name ~~* '%livejourn%'::text)\n>>> Total runtime: 882.600 ms\n>>> (3 rows)\n>>>\n>>> I don't know what is your desktop ... but I'm using PostgreSQL 7.4.6, on\n>>> Debian Woody with a PC Bi-PIII 933 Mhz and 1 Gb of memory ... the server\n>>> is dedicated to this database ... 
!!\n>>>\n>>> I have no idea !\n>>>\n>>> Regards,\n>>\n>> \tRegards,\n>> \t\tOleg\n>> _____________________________________________________________\n>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>> Sternberg Astronomical Institute, Moscow University (Russia)\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(095)939-16-83, +007(095)939-23-83\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 8: explain analyze is your friend\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n---559023410-1857409239-1100774060=:18871--\n", "msg_date": "Thu, 18 Nov 2004 13:34:20 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 really slower than ilike ?" } ]
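For readers hitting the same symptom later, the diagnostics used in this thread consolidate into one runnable sketch (tsearch2 contrib module, 7.4-era syntax; the table, column and search-term names are the ones quoted above):

VACUUM ANALYZE site_rss;

-- Word and document frequencies in the tsvector column (Oleg's stat() check):
SELECT word, ndoc, nentry
  FROM stat('SELECT idx_site_name FROM site_rss')
 ORDER BY ndoc DESC, nentry DESC, word
 LIMIT 10;

-- Compare both access paths on the same search term:
EXPLAIN ANALYZE
  SELECT id_site FROM site_rss WHERE idx_site_name @@ to_tsquery('livejourn');

EXPLAIN ANALYZE
  SELECT id_site FROM site_rss WHERE site_name ILIKE '%livejourn%';

Run after a fresh VACUUM ANALYZE, the two EXPLAIN ANALYZE outputs give the like-for-like comparison of actual times that exposed the discrepancy in this thread.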
[ { "msg_contents": "I have a table that has 2 columns of an OID type. I would like to issue\na truncate table command but my understanding is that the data pointed\nto by the OIDs is not removed and orphaned. What would be the most\nefficient way to truncate the table and not have orphaned data?\n \nThanks\n \n\n____________________________________\n\n \n\nJim Gunzelman\n\nSenior Software Engineer\n\n \n\nphone: 402.361.3078 fax: 402.361.3100\n\ne-mail: [email protected]\n\n \n\nSolutionary, Inc.\n\nwww.Solutionary.com <http://www.solutionary.com/> \n\n \n\nMaking Security Manageable 24x7\n\n_____________________________________\n\n \n\nConfidentiality Notice\n\nThe content of this communication, along with any attachments, is\ncovered by federal and state law governing electronic communications and\nmay contain confidential and legally privileged information. If the\nreader of this message is not the intended recipient, you are hereby\nnotified that any dissemination, distribution, use or copying of the\ninformation contained herein is strictly prohibited. If you have\nreceived this communication in error, please immediately contact us by\ntelephone at (402) 361-3000 or e-mail [email protected]. Thank\nyou.\n\n \n\nCopyright 2000-2004, Solutionary, Inc. All rights reserved.\nActiveGuard, eV3, Solutionary and the Solutionary logo are registered\ntrademarks of Solutionary, Inc.\n\n \n\n \n\n \n\nMessage\n\n\n\nI have a table that \nhas 2 columns of an OID type.  I would like to issue a truncate table \ncommand but my understanding is that the data pointed to by the OIDs is not \nremoved and orphaned.  What would be the most efficient way to truncate the \ntable and not have orphaned data?\n \nThanks\n \n____________________________________\n \nJim Gunzelman\nSenior Software \nEngineer\n \nphone: 402.361.3078   fax: \n402.361.3100\ne-mail:  \[email protected]\n \nSolutionary, \nInc.\nwww.Solutionary.com       \n\n \nMaking Security Manageable \n24x7\n_____________________________________\n \nConfidentiality \nNotice\nThe content of this \ncommunication, along with any attachments, is covered by federal and state law \ngoverning electronic communications and may contain confidential and legally \nprivileged information.  If the \nreader of this message is not the intended recipient, you are hereby notified \nthat any dissemination, distribution, use or copying of the information \ncontained herein is strictly prohibited.  \nIf you have received this communication in error, please immediately \ncontact us by telephone at (402) 361-3000 or e-mail \[email protected].  Thank \nyou.\n \nCopyright 2000-2004, Solutionary, \nInc. All rights reserved.  ActiveGuard, eV3, Solutionary and the \nSolutionary logo are registered trademarks of Solutionary, \nInc.", "msg_date": "Tue, 16 Nov 2004 09:57:07 -0600", "msg_from": "\"James Gunzelman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Efficient way to remove OID data" } ]
[ { "msg_contents": "Hello,\n\nI build two SELECT queries, and in one I used COALESCE with a CASE,\nand in the second one I used only CASE statements.\n\nWhen analysing, I'm getting the exact same result, except the cost.\n(For now I have so few data that the results are too fragmented.\n\nIf the plans for both queries are exactly the same, should I assume\nthat the cost will also be the same?\n\nThanks for any help.\n-- \nAlexandre Leclerc\n", "msg_date": "Tue, 16 Nov 2004 16:11:30 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "nuderstanding 'explain analyse'" }, { "msg_contents": "Alexandre,\n\n> If the plans for both queries are exactly the same, should I assume\n> that the cost will also be the same?\n\nNope. A seq scan over 1,000,000,000 rows is going to cost a LOT more than a \nseq scan over 1000 rows, even though it's the same plan.\n\nWhen you have the data sorted out, post explain analyzes and we'll take a shot \nat it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 16 Nov 2004 22:07:50 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nuderstanding 'explain analyse'" } ]
[ { "msg_contents": "I have a query for which postgres is generating a different plan on different machines. The database schema is the same, the dataset is the same, the configuration is the same (e.g., pg_autovacuum running in both cases), both systems are Solaris 9. The main difference in the two systems is that one is sparc and the other is intel.\n\nThe query runs in about 40 ms on the intel box, but takes about 18 seconds on the sparc box. Now, the intel boxes we have are certainly faster, but I'm curious why the query plan might be different.\n\nFor the intel:\n\nQUERY PLAN\nUnique (cost=11.50..11.52 rows=2 width=131)\n -> Sort (cost=11.50..11.50 rows=2 width=131)\n Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n -> Hash Join (cost=10.42..11.49 rows=2 width=131)\n Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n -> Seq Scan on \"schema\" s (cost=0.00..1.02 rows=2 width=128)\n -> Hash (cost=10.41..10.41 rows=4 width=11)\n -> Nested Loop (cost=0.00..10.41 rows=4 width=11)\n -> Nested Loop (cost=0.00..2.14 rows=4 width=4)\n -> Seq Scan on flow fl (cost=0.00..0.00 rows=1 width=4)\n Filter: (servicetype = 646)\n -> Index Scan using usage_flow_i on \"usage\" u (cost=0.00..2.06 rows=6 width=8)\n Index Cond: (u.flow = \"outer\".dbid)\n -> Index Scan using usageparameter_usage_i on usageparameter up (cost=0.00..2.06 rows=1 width=15)\n Index Cond: (up.\"usage\" = \"outer\".dbid)\n Filter: ((prefix)::text <> 'xsd'::text)\n\nFor the sparc:\n\nQUERY PLAN\nUnique (cost=10.81..10.83 rows=1 width=167)\n -> Sort (cost=10.81..10.82 rows=1 width=167)\n Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n -> Nested Loop (cost=9.75..10.80 rows=1 width=167)\n Join Filter: (\"outer\".flow = \"inner\".dbid)\n -> Hash Join (cost=9.75..10.79 rows=1 width=171)\n Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n -> Seq Scan on \"schema\" s (cost=0.00..1.02 rows=2 width=128)\n -> Hash (cost=9.75..9.75 rows=1 width=51)\n -> Nested Loop (cost=0.00..9.75 rows=1 width=51)\n Join Filter: (\"inner\".\"usage\" = \"outer\".dbid)\n -> Index Scan using usage_flow_i on \"usage\" u (cost=0.00..4.78 rows=1 width=8)\n -> Index Scan using usageparameter_schema_i on usageparameter up (cost=0.00..4.96 rows=1 width=51)\n Filter: ((prefix)::text <> 'xsd'::text)\n -> Seq Scan on flow fl (cost=0.00..0.00 rows=1 width=4)\n Filter: (servicetype = 646)\n\nI assume the problem with the second plan starts with doing a Nested Loop rather than a Hash Join at the 4th line of the plan, but I don't know why it would be different for the same schema, same dataset.\n\nWhat factors go into the planner's decision to choose a nested loop over a hash join? Should I be looking at adjusting my runtime configuration on the sparc box somehow?\n\nThanks.\n\n- DAP\n----------------------------------------------------------------------------------\nDavid Parker Tazz Networks (401) 709-5130\n \n", "msg_date": "Tue, 16 Nov 2004 22:54:51 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "query plan question" }, { "msg_contents": "On Wed, 17 Nov 2004 02:54 pm, you wrote:\n> I have a query for which postgres is generating a different plan on different machines. The database schema is the same, the dataset is the same, the configuration is the same (e.g., pg_autovacuum running in both cases), both systems are Solaris 9. 
The main difference in the two systems is that one is sparc and the other is intel.\n> \n> The query runs in about 40 ms on the intel box, but takes about 18 seconds on the sparc box. Now, the intel boxes we have are certainly faster, but I'm curious why the query plan might be different.\n> \n> For the intel:\n> \n> QUERY PLAN\n> Unique (cost=11.50..11.52 rows=2 width=131)\n> -> Sort (cost=11.50..11.50 rows=2 width=131)\n> Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n> -> Hash Join (cost=10.42..11.49 rows=2 width=131)\n> Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n> -> Seq Scan on \"schema\" s (cost=0.00..1.02 rows=2 width=128)\n> -> Hash (cost=10.41..10.41 rows=4 width=11)\n> -> Nested Loop (cost=0.00..10.41 rows=4 width=11)\n> -> Nested Loop (cost=0.00..2.14 rows=4 width=4)\n> -> Seq Scan on flow fl (cost=0.00..0.00 rows=1 width=4)\n> Filter: (servicetype = 646)\n> -> Index Scan using usage_flow_i on \"usage\" u (cost=0.00..2.06 rows=6 width=8)\n> Index Cond: (u.flow = \"outer\".dbid)\n> -> Index Scan using usageparameter_usage_i on usageparameter up (cost=0.00..2.06 rows=1 width=15)\n> Index Cond: (up.\"usage\" = \"outer\".dbid)\n> Filter: ((prefix)::text <> 'xsd'::text)\n> \n> For the sparc:\n> \n> QUERY PLAN\n> Unique (cost=10.81..10.83 rows=1 width=167)\n> -> Sort (cost=10.81..10.82 rows=1 width=167)\n> Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n> -> Nested Loop (cost=9.75..10.80 rows=1 width=167)\n> Join Filter: (\"outer\".flow = \"inner\".dbid)\n> -> Hash Join (cost=9.75..10.79 rows=1 width=171)\n> Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n> -> Seq Scan on \"schema\" s (cost=0.00..1.02 rows=2 width=128)\n> -> Hash (cost=9.75..9.75 rows=1 width=51)\n> -> Nested Loop (cost=0.00..9.75 rows=1 width=51)\n> Join Filter: (\"inner\".\"usage\" = \"outer\".dbid)\n> -> Index Scan using usage_flow_i on \"usage\" u (cost=0.00..4.78 rows=1 width=8)\n> -> Index Scan using usageparameter_schema_i on usageparameter up (cost=0.00..4.96 rows=1 width=51)\n> Filter: ((prefix)::text <> 'xsd'::text)\n> -> Seq Scan on flow fl (cost=0.00..0.00 rows=1 width=4)\n> Filter: (servicetype = 646)\n> \nUnique (cost=11.50..11.52 rows=2 width=131)\nUnique (cost=10.81..10.83 rows=1 width=167)\n\nThe estimations for the cost is basically the same, 10ms for the first row. Can you supply Explain analyze to see what it's actually doing?\n\nRussell Smith\n", "msg_date": "Wed, 17 Nov 2004 15:35:42 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan question" }, { "msg_contents": "David Parker wrote:\n\n>I have a query for which postgres is generating a different plan on different machines. The database schema is the same, the dataset is the same, the configuration is the same (e.g., pg_autovacuum running in both cases), both systems are Solaris 9. The main difference in the two systems is that one is sparc and the other is intel.\n>\n>The query runs in about 40 ms on the intel box, but takes about 18 seconds on the sparc box. Now, the intel boxes we have are certainly faster, but I'm curious why the query plan might be different.\n> \n>\n\nIf they are the same and PostgreSQL are the same, are the intel machines \nXeons?\n\nSincerely,\n\nJoshua D. 
Drake\n\n\n>For the intel:\n>\n>QUERY PLAN\n>Unique (cost=11.50..11.52 rows=2 width=131)\n> -> Sort (cost=11.50..11.50 rows=2 width=131)\n> Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n> -> Hash Join (cost=10.42..11.49 rows=2 width=131)\n> Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n> -> Seq Scan on \"schema\" s (cost=0.00..1.02 rows=2 width=128)\n> -> Hash (cost=10.41..10.41 rows=4 width=11)\n> -> Nested Loop (cost=0.00..10.41 rows=4 width=11)\n> -> Nested Loop (cost=0.00..2.14 rows=4 width=4)\n> -> Seq Scan on flow fl (cost=0.00..0.00 rows=1 width=4)\n> Filter: (servicetype = 646)\n> -> Index Scan using usage_flow_i on \"usage\" u (cost=0.00..2.06 rows=6 width=8)\n> Index Cond: (u.flow = \"outer\".dbid)\n> -> Index Scan using usageparameter_usage_i on usageparameter up (cost=0.00..2.06 rows=1 width=15)\n> Index Cond: (up.\"usage\" = \"outer\".dbid)\n> Filter: ((prefix)::text <> 'xsd'::text)\n>\n>For the sparc:\n>\n>QUERY PLAN\n>Unique (cost=10.81..10.83 rows=1 width=167)\n> -> Sort (cost=10.81..10.82 rows=1 width=167)\n> Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n> -> Nested Loop (cost=9.75..10.80 rows=1 width=167)\n> Join Filter: (\"outer\".flow = \"inner\".dbid)\n> -> Hash Join (cost=9.75..10.79 rows=1 width=171)\n> Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n> -> Seq Scan on \"schema\" s (cost=0.00..1.02 rows=2 width=128)\n> -> Hash (cost=9.75..9.75 rows=1 width=51)\n> -> Nested Loop (cost=0.00..9.75 rows=1 width=51)\n> Join Filter: (\"inner\".\"usage\" = \"outer\".dbid)\n> -> Index Scan using usage_flow_i on \"usage\" u (cost=0.00..4.78 rows=1 width=8)\n> -> Index Scan using usageparameter_schema_i on usageparameter up (cost=0.00..4.96 rows=1 width=51)\n> Filter: ((prefix)::text <> 'xsd'::text)\n> -> Seq Scan on flow fl (cost=0.00..0.00 rows=1 width=4)\n> Filter: (servicetype = 646)\n>\n>I assume the problem with the second plan starts with doing a Nested Loop rather than a Hash Join at the 4th line of the plan, but I don't know why it would be different for the same schema, same dataset.\n>\n>What factors go into the planner's decision to choose a nested loop over a hash join? Should I be looking at adjusting my runtime configuration on the sparc box somehow?\n>\n>Thanks.\n>\n>- DAP\n>----------------------------------------------------------------------------------\n>David Parker Tazz Networks (401) 709-5130\n> \n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 16 Nov 2004 22:40:07 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan question" } ]
[ { "msg_contents": "http://pugs.postgresql.org/sfpug/archives/000021.html\n\nI noticed that some of you left coasters were talking about memcached\nand pgsql. I'm curious to know what was discussed.\n\nIn reading about memcached, it seems that many people are using it to\ncircumvent the scalability problems of MySQL (lack of MVCC). \n\nfrom their site: \t\n\n<snip>\nShouldn't the database do this?\n\nRegardless of what database you use (MS-SQL, Oracle, Postgres,\nMysQL-InnoDB, etc..), there's a lot of overhead in implementing ACID\nproperties in a RDBMS, especially when disks are involved, which means\nqueries are going to block. For databases that aren't ACID-compliant\n(like MySQL-MyISAM), that overhead doesn't exist, but reading threads\nblock on the writing threads. memcached never blocks. \n</snip>\n\nSo What does memcached offer pgsql users? It would still seem to offer\nthe benefit of a multi-machined cache.\n\n-Mike \n", "msg_date": "Tue, 16 Nov 2004 23:00:24 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": true, "msg_subject": "memcached and PostgreSQL" }, { "msg_contents": "Michael,\n\n> So What does memcached offer pgsql users? It would still seem to offer\n> the benefit of a multi-machined cache.\n\nYes, and a very, very fast one too ... like, 120,000 operations per second. \nPostgreSQL can't match that because of the overhead of authentication, \nsecurity, transaction visibility checking, etc. \n\nSo memcached becomes a very good place to stick data that's read often but not \nupdated often, or alternately data that changes often but is disposable. An \nexample of the former is a user+ACL list; and example of the latter is web \nsession information ... or simple materialized views.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 16 Nov 2004 21:47:54 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "On Tue, 16 Nov 2004 21:47:54 -0800, Josh Berkus wrote:\n\n> So memcached becomes a very good place to stick data that's read often but not \n> updated often, or alternately data that changes often but is disposable. An \n> example of the former is a user+ACL list; and example of the latter is web \n> session information ... or simple materialized views.\n\nHas anyone tried at least two of\n\n1. memcached\n2. Tugela Cache (pretty much the same as memcached, I think)\n3. Sharedance\n\nIn that case: Do you have any comparative remarks?\n\n\nLinks:\n\n1: http://www.danga.com/memcached/\n\n2: http://meta.wikimedia.org/wiki/Tugela_Cache\n http://cvs.sourceforge.net/viewcvs.py/wikipedia/tugelacache/\n\n3: http://sharedance.pureftpd.org/\n\n-- \nGreetings from Troels Arvin, Copenhagen, Denmark\n\n\n", "msg_date": "Wed, 17 Nov 2004 08:48:33 +0100", "msg_from": "Troels Arvin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n\n> So memcached becomes a very good place to stick data that's read often but not \n> updated often, or alternately data that changes often but is disposable. An \n> example of the former is a user+ACL list; and example of the latter is web \n> session information ... or simple materialized views.\n\nI would like very much to use something like memcached for a materialized view\nI have. 
The problem is that I have to join it against other tables.\n\nI've thought about providing a SRF in postgres to read records out of\nmemcached but I'm unclear it would it really help at all.\n\nHas anyone tried anything like this?\n\n-- \ngreg\n\n", "msg_date": "17 Nov 2004 03:08:20 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "On 17 Nov 2004 03:08:20 -0500, Greg Stark <[email protected]> wrote:\n> Josh Berkus <[email protected]> writes:\n> \n> > So memcached becomes a very good place to stick data that's read often but not\n> > updated often, or alternately data that changes often but is disposable. An\n> > example of the former is a user+ACL list; and example of the latter is web\n> > session information ... or simple materialized views.\n> \n> I would like very much to use something like memcached for a materialized view\n> I have. The problem is that I have to join it against other tables.\n> \n> I've thought about providing a SRF in postgres to read records out of\n> memcached but I'm unclear it would it really help at all.\n> \n> Has anyone tried anything like this?\n\nI haven't tried it yet, but I plan too. An intersting case might be\nto use plperlu to interface with memcached and store hashes in the\ncache via some external process, like a CGI script. Then just define\na TYPE for the perl SRF to return, and store the data as an array of\nhashes with keys matching the TYPE.\n\nA (perhaps useless) example could then be something like:\n\nCREATE TYPE user_info AS ( sessionid TEXT, userid INT, lastaccess\nTIMESTAMP, lastrequest TEXT);\n\nCREATE FUNCTION get_user_info_by_session ( TEXT) RETURNS SETOF user_info AS $$\n use Cache::Memcached;\n\n my $session = shift;\n\n my $c = $_SHARED{memcached} || Cache::Memcached->new( {servers =>\n'127.0.0.1:1111'} );\n\n my $user_info = $m->get('web_access_list');\n\n # $user_info looks like\n # [ {userid => 5, lastrequest => 'http://...', lastaccess => localtime(),\n # sessionid => '123456789'}, { ...} ]\n # and is stored by a CGI.\n\n @info = grep {$$_{sessionid} eq $session} @$user_info;\n\n return \\@info;\n$$ LANGUAGE 'plperlu';\n\nSELECT u.username, f.lastrequest FROM users u,\nget_user_info_by_session('123456789') WHERE f.userid = u.userid;\n\n\nAny thoughts? \n\n> \n> --\n> greg\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\n", "msg_date": "Wed, 17 Nov 2004 11:18:02 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "On November 16, 2004 08:00 pm, Michael Adler wrote:\n> http://pugs.postgresql.org/sfpug/archives/000021.html\n>\n> I noticed that some of you left coasters were talking about memcached\n> and pgsql. I'm curious to know what was discussed.\n>\n> In reading about memcached, it seems that many people are using it to\n> circumvent the scalability problems of MySQL (lack of MVCC).\n>\n> from their site:\n>\n> <snip>\n> Shouldn't the database do this?\n>\n> Regardless of what database you use (MS-SQL, Oracle, Postgres,\n> MysQL-InnoDB, etc..), there's a lot of overhead in implementing ACID\n> properties in a RDBMS, especially when disks are involved, which means\n> queries are going to block. 
For databases that aren't ACID-compliant\n> (like MySQL-MyISAM), that overhead doesn't exist, but reading threads\n> block on the writing threads. memcached never blocks.\n> </snip>\n>\n> So What does memcached offer pgsql users? It would still seem to offer\n> the benefit of a multi-machined cache.\n\nHave a look at the pdf presentation found on the following site:\n\nhttp://people.freebsd.org/~seanc/pgmemcache/\n\n\n>\n> -Mike\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Wed, 17 Nov 2004 09:13:09 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "On Wed, Nov 17, 2004 at 09:13:09AM -0800, Darcy Buskermolen wrote:\n> On November 16, 2004 08:00 pm, Michael Adler wrote:\n> > http://pugs.postgresql.org/sfpug/archives/000021.html\n> >\n> > I noticed that some of you left coasters were talking about memcached\n> > and pgsql. I'm curious to know what was discussed.\n> \n> Have a look at the pdf presentation found on the following site:\n> \n> http://people.freebsd.org/~seanc/pgmemcache/\n\nThanks for that.\n\nThat presentation was rather broad and the API seems rather general\npurpose, but I wonder why you would really want access the cache by\nway of the DB? If one major point of memcache is to allocate RAM to a\nlow-overhead server instead of to the RDBMS's disk cache, why would\nyou add the overhead of the RDBMS to the process? (this is a bit of\nstraw man, but just trying to flesh-out the pros and cons)\n\nStill, it seems like a convenient way to maintain cache coherency,\nassuming that your application doesn't already have a clean way to do\nthat.\n\n(just my uninformed opinion, though...)\n\n-Mike\n", "msg_date": "Wed, 17 Nov 2004 14:51:58 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "Michael,\n\n> Still, it seems like a convenient way to maintain cache coherency,\n> assuming that your application doesn't already have a clean way to do\n> that.\n\nPrecisely. The big problem with memory caching is the cache getting out of \nsync with the database. Updating the cache through database triggers helps \nameliorate that.\n\nHowever, our inability to pass messages with NOTIFY somewhat limits the the \nutility of this solution Sean wants \"on commit triggers\", but there's some \nmajor issues to work out with that. Passing messages with NOTIFY would be \neasier and almost as good.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Nov 2004 13:07:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "> So What does memcached offer pgsql users? It would still seem to offer\n> the benefit of a multi-machined cache.\n\nAck, I totally missed this thread. Sorry for jumping in late.\n\nBasically, memcached and pgmemcache offer a more technically correct \nway of implementing query caching. MySQL's query caching is a \ndisaster, IMHO. memcached alleviates this load from the database and \nputs it elsewhere in a more optimized form. The problem with memcached \nby itself is that you're relying on the application to invalidate the \ncache. 
How many different places have to be kept in sync? Using \nmemcached, in its current form, makes relying on the application to be \ndeveloped correctly with centralized libraries and database access \nroutines. Bah, that's a cluster f#$@ waiting to happen.\n\npgmemcache fixes that though so that you don't have to worry about \ninvalidating the cache in every application/routine. Instead you just \ncentralize that logic in the database and automatically invalidate via \ntriggers. It's working out very well for me.\n\nI'd be interested in success stories, fwiw. In the next week or so \nI'll probably stick this on pgfoundry and build a proper make/release \nstructure. -sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Thu, 18 Nov 2004 13:10:28 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "Josh Berkus wrote:\n> Michael,\n> \n> > Still, it seems like a convenient way to maintain cache coherency,\n> > assuming that your application doesn't already have a clean way to do\n> > that.\n> \n> Precisely. The big problem with memory caching is the cache getting out of \n> sync with the database. Updating the cache through database triggers helps \n> ameliorate that.\n> \n> However, our inability to pass messages with NOTIFY somewhat limits the the \n> utility of this solution Sean wants \"on commit triggers\", but there's some \n> major issues to work out with that. Passing messages with NOTIFY would be \n> easier and almost as good.\n\nThe big concern I have about memcache is that because it controls\nstorage external to the database there is no way to guarantee the cache\nis consistent with the database. This is similar to sending email in a\ntrigger or on commit where you can't be certain you send email always\nand only on a commit.\n\nIn the database, we mark everything we do with a transaction id and mark\nthe transaction id as committed in on operation. I see no way to do\nthat with memcache.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 21 Nov 2004 23:15:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "Bruce,\n\n> The big concern I have about memcache is that because it controls\n> storage external to the database there is no way to guarantee the cache\n> is consistent with the database.  This is similar to sending email in a\n> trigger or on commit where you can't be certain you send email always\n> and only on a commit.\n\nWell, some things ... ON COMMIT triggers, or messages with NOTIFY, would \nimprove the accuracy by cutting down on cached aborted transactions. \n\nHowever, caching is of necessity imperfect. Caching is a trade-off; greater \naccess speed vs. perfect consistency (and any durability). There are cases \nwhere the access speed is more important than the consistency (or the \ndurability). The answer is to use memcached judiciously and be prepared to \naccount for minor inconsistencies. \n\nFor that matter, as with other forms of cumulative asynchronous materialized \nview, it would be advisable to nightly re-build copies of data stored in \nmemcached from scratch during system slow time, assuming that the data in \nmemcached corresponds to a real table. 
Where memcached does not correspond \nto a real table (session keys, for example), it is not a concern at all.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 21 Nov 2004 20:27:15 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "> The big concern I have about memcache is that because it controls\n> storage external to the database there is no way to guarantee the cache\n> is consistent with the database.\n\nI've found that letting applications add data to memcache and then \nletting the database replace or delete keys seems to be the best \napproach to minimize exactly this issue. Having two clients update the \ncache is risky. Using triggers or using NOTIFY + tailing logs makes \nthis much more bullet proof.\n\n> This is similar to sending email in a trigger or on commit where you \n> can't be certain you send email always\n> and only on a commit.\n\nWhile this is certainly a possibility, it's definitely closer to the \nexception and not the normal instance.\n\n> In the database, we mark everything we do with a transaction id and \n> mark\n> the transaction id as committed in on operation. I see no way to do\n> that with memcache.\n\nCorrect. With an ON COMMIT trigger, it'll be easier to have a 100% \naccurate cache. That said, memcache does exist out side of the \ndatabase so it's theoretically impossible to guarantee that the two are \n100% in sync. pgmemcache goes a long way towards facilitating that the \ncache is in sync with the database, but it certainly doesn't guarantee \nit's in sync. That being said, I haven't had any instances of it not \nbeing in sync since using pgmemcache (I'm quite proud of this, to be \nhonest *grin*). For critical operations such as financial \ntransactions, however, I advise going to the database unless you're \nwilling to swallow the financial cost of cache discrepancies.\n\n-sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Sun, 21 Nov 2004 20:55:16 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "\nOn Nov 21, 2004, at 11:55 PM, Sean Chittenden wrote:\n\n>\n>> This is similar to sending email in a trigger or on commit where you \n>> can't be certain you send email always\n>> and only on a commit.\n>\n> While this is certainly a possibility, it's definitely closer to the \n> exception and not the normal instance.\n\nWhile an exception, this is a very real possibility in day to day \noperations. The absence of any feedback or balancing mechanism between \nthe database and cache makes it impossible to know that they are in \nsync and even a small error percentage multiplied over time will lead \nto an ever increasing likelihood of error.\n\nMore dangerous is that this discrepancy will NOT always be apparent \nbecause without active verification of the correctness of the cache, we \nwill not know about any errors unless the error grows to an obvious \npoint. The errors may cause material damage long before they become \nobvious. This is a common failure pattern with caches.\n\n\n\nPatrick B. 
Kelly\n------------------------------------------------------\n http://patrickbkelly.org\n\n", "msg_date": "Mon, 22 Nov 2004 02:59:12 -0500", "msg_from": "Patrick B Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "\n> While an exception, this is a very real possibility in day to day \n> operations. The absence of any feedback or balancing mechanism between \n> the database and cache makes it impossible to know that they are in sync \n> and even a small error percentage multiplied over time will lead to an \n> ever increasing likelihood of error.\n\n\tSure, but there are applications where it does not matter, and these \napplications are othen loading the database... think about displaying \nforum posts, products list in a web store, and especially category trees, \ntop N queries... for all these, it does not matter if the data is a bit \nstale. For instance, a very popular forum will be cached, which is very \nimportant. In this case I think it is acceptable if a new post does not \nappear instantly.\n\n\tOf course, when inserting or updating data in the database, the primary \nkeys and other important data should be fetched from the database and not \nthe cache, which supposes a bit of application logic (for instance, in a \nforum, the display page should query the cache, but the \"post message\" \npage should query the database directly).\n\n\tMemcache can also save the database from update-heavy tasks like user \nsession management. In that case sessions can be stored entirely in memory.\n\n\tON COMMIT triggers would be very useful.\n\n> More dangerous is that this discrepancy will NOT always be apparent \n> because without active verification of the correctness of the cache, we \n> will not know about any errors unless the error grows to an obvious \n> point.\n\n> The errors may cause material damage long before they become obvious. \n> This is a common failure pattern with caches.\n\n\tThis is why it would be dangerous to fetch referential integrity data \n from the cache... this fits your \"banking\" example for instance.\n", "msg_date": "Mon, 22 Nov 2004 13:30:04 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "Pierre-Fr�d�ric Caillaud wrote:\n> \n> > While an exception, this is a very real possibility in day to day \n> > operations. The absence of any feedback or balancing mechanism between \n> > the database and cache makes it impossible to know that they are in sync \n> > and even a small error percentage multiplied over time will lead to an \n> > ever increasing likelihood of error.\n> \n> \tSure, but there are applications where it does not matter, and these \n> applications are othen loading the database... think about displaying \n> forum posts, products list in a web store, and especially category trees, \n> top N queries... for all these, it does not matter if the data is a bit \n> stale. For instance, a very popular forum will be cached, which is very \n> important. In this case I think it is acceptable if a new post does not \n> appear instantly.\n\nMy point was that there are two failure cases --- one where the cache is\nslightly out of date compared to the db server --- these are cases where\nthe cache update is slightly before/after the commit. 
The second is\nwhere the cache update happens and the commit later fails, or the commit\nhappens and the cache update never happens. In these cases the cache is\nout of date for the amount of time you cache the data and not expire it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 22 Nov 2004 21:18:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "> My point was that there are two failure cases --- one where the cache \n> is\n> slightly out of date compared to the db server --- these are cases \n> where\n> the cache update is slightly before/after the commit.\n\nI was thinking about this and ways to minimize this even further. Have \nmemcache clients add data and have a policy to have the database only \ndelete data. This sets the database up as the bottleneck again, but \nthen you have a degree of transactionality that couldn't be previously \nachieved with the database issuing replace commands. For example:\n\n1) client checks the cache for data and gets a cache lookup failure\n2) client beings transaction\n3) client SELECTs data from the database\n4) client adds the key to the cache\n5) client commits transaction\n\nThis assumes that the client won't rollback or have a transaction \nfailure. Again, in 50M transactions, I doubt one of them would fail \n(sure, it's possible, but that's a symptom of bigger problems: \nmemcached isn't an RDBMS).\n\nThe update case being:\n\n1) client begins transaction\n2) client updates data\n3) database deletes record from memcache\n4) client commits transaction\n5) client adds data to memcache\n\n> The second is\n> where the cache update happens and the commit later fails, or the \n> commit\n> happens and the cache update never happens.\n\nHaving pgmemcache delete, not replace data addresses this second issue. \n -sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Tue, 23 Nov 2004 14:20:34 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" }, { "msg_contents": "If instead of a select you do a select for update I think this would be\ntransaction safe. Nothing would be able to modify the data in the\ndatabase between when you do the SELECT and when you commit. If the\ntransaction fails the value in memcached will be correct.\n\nAlso, it's not clear if you're doing an update here or not... If you're\ndoing an update then this wouldn't work. You'd want to do your update,\nthen re-insert the value into memcached outside of the update\ntransaction.\n\nOn Tue, Nov 23, 2004 at 02:20:34PM -0800, Sean Chittenden wrote:\n> >My point was that there are two failure cases --- one where the cache \n> >is\n> >slightly out of date compared to the db server --- these are cases \n> >where\n> >the cache update is slightly before/after the commit.\n> \n> I was thinking about this and ways to minimize this even further. Have \n> memcache clients add data and have a policy to have the database only \n> delete data. This sets the database up as the bottleneck again, but \n> then you have a degree of transactionality that couldn't be previously \n> achieved with the database issuing replace commands. 
For example:\n> \n> 1) client checks the cache for data and gets a cache lookup failure\n> 2) client beings transaction\n> 3) client SELECTs data from the database\n> 4) client adds the key to the cache\n> 5) client commits transaction\n> \n> This assumes that the client won't rollback or have a transaction \n> failure. Again, in 50M transactions, I doubt one of them would fail \n> (sure, it's possible, but that's a symptom of bigger problems: \n> memcached isn't an RDBMS).\n> \n> The update case being:\n> \n> 1) client begins transaction\n> 2) client updates data\n> 3) database deletes record from memcache\n> 4) client commits transaction\n> 5) client adds data to memcache\n> \n> >The second is\n> >where the cache update happens and the commit later fails, or the \n> >commit\n> >happens and the cache update never happens.\n> \n> Having pgmemcache delete, not replace data addresses this second issue. \n> -sc\n> \n> -- \n> Sean Chittenden\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Wed, 24 Nov 2004 10:06:23 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: memcached and PostgreSQL" } ]
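A minimal sketch of the database-side, delete-only invalidation discussed in this thread. The sessions table, its sessionid column, and the key naming are made up for illustration; the only pgmemcache call assumed is memcache_delete(), and pgmemcache must already be installed and pointed at the memcached pool:

CREATE OR REPLACE FUNCTION invalidate_session_cache() RETURNS trigger AS $$
BEGIN
    -- Drop the cached copy; clients repopulate it on their next cache miss.
    PERFORM memcache_delete('session_' || OLD.sessionid);
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER session_cache_invalidate
    AFTER UPDATE OR DELETE ON sessions
    FOR EACH ROW EXECUTE PROCEDURE invalidate_session_cache();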
[ { "msg_contents": "I've read the previous thread on the list regarding partitioning\nmechanisms and I just wrote a plpgsql function to create the partition\ntables (by date) as well as another function used to do the insert (it\ndetermines which table will be inserted).\n\nThe creation of the partition tables uses the inherits clause when\ncreating. It creates an exact copy of the table it's inheriting from,\nand adds the indexes since inherits doesn't do that for me.\n\nCREATE TABLE hourly_report_data_2004_11_16 () INHERITS (hourly_report_data)\n\nWhen I query on the hourly_report_data, the explain plan shows it\nquery all the tables that inherited from it. That's all great.\n\nWhat's really the difference between this and creating separate tables\nwith the same column definition without the inherit, and then create a\nview to \"merge\" them together?\n\nAlso, I've run into a snag in that I have a hourly_detail table, that\nhas a foreign key to the hourly_report_data. The inherit method above\ndoes not honor the foreign key relationship to the children table of\nhourly_report_data. I can't insert any data into the hourly_detail\ntable due to the constraint failing.\n\nThe hourly_detail table is relatively tiny compared to the enormous\nhourly_report_data table, so if I don't have to partition that one I\nwould rather not. Any suggestions on this?\n\nThanks.\n\n-Don\n\n-- \nDonald Drake\nPresident\nDrake Consulting\nhttp://www.drakeconsult.com/\n312-560-1574\n", "msg_date": "Tue, 16 Nov 2004 23:31:15 -0600", "msg_from": "Don Drake <[email protected]>", "msg_from_op": true, "msg_subject": "Table Partitions: To Inherit Or Not To Inherit" }, { "msg_contents": "Don,\n\n> What's really the difference between this and creating separate tables\n> with the same column definition without the inherit, and then create a\n> view to \"merge\" them together?\n\nEasier syntax for queries. If you created completely seperate tables and \nUNIONED them together, you'd have to be constantly modifying a VIEW which \ntied the tables together. With inheritance, you just do \"SELECT * FROM \nparent_table\" and it handles finding all the children for you.\n\n> Also, I've run into a snag in that I have a hourly_detail table, that\n> has a foreign key to the hourly_report_data. The inherit method above\n> does not honor the foreign key relationship to the children table of\n> hourly_report_data. I can't insert any data into the hourly_detail\n> table due to the constraint failing.\n\nThis is a known limitation of inherited tables, at least in current \nimplementations. I think it's on the TODO list. For now, either live \nwithout the FKs, or implement them through custom triggers/rules.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 16 Nov 2004 21:51:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Partitions: To Inherit Or Not To Inherit" } ]
[ { "msg_contents": "Oh, I didn't realize that analyze gave that much more info. I've got a\nlot to learn about this tuning stuff ;-) \n\nI've attached the output. I see from the new output where the slow query\nis taking its time (the nested loop at line 10), but I still have no\nidea why this plan is getting chosen....\n\nThanks!\n\n- DAP\n\n>-----Original Message-----\n>From: [email protected] \n>[mailto:[email protected]] On Behalf Of \n>Russell Smith\n>Sent: Tuesday, November 16, 2004 11:36 PM\n>To: [email protected]\n>Subject: Re: [PERFORM] query plan question\n>\n>On Wed, 17 Nov 2004 02:54 pm, you wrote:\n>> I have a query for which postgres is generating a different \n>plan on different machines. The database schema is the same, \n>the dataset is the same, the configuration is the same (e.g., \n>pg_autovacuum running in both cases), both systems are Solaris \n>9. The main difference in the two systems is that one is sparc \n>and the other is intel.\n>> \n>> The query runs in about 40 ms on the intel box, but takes \n>about 18 seconds on the sparc box. Now, the intel boxes we \n>have are certainly faster, but I'm curious why the query plan \n>might be different.\n>> \n>> For the intel:\n>> \n>> QUERY PLAN\n>> Unique (cost=11.50..11.52 rows=2 width=131)\n>> -> Sort (cost=11.50..11.50 rows=2 width=131)\n>> Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n>> -> Hash Join (cost=10.42..11.49 rows=2 width=131)\n>> Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n>> -> Seq Scan on \"schema\" s (cost=0.00..1.02 \n>rows=2 width=128)\n>> -> Hash (cost=10.41..10.41 rows=4 width=11)\n>> -> Nested Loop (cost=0.00..10.41 \n>rows=4 width=11)\n>> -> Nested Loop (cost=0.00..2.14 \n>rows=4 width=4)\n>> -> Seq Scan on flow fl \n>(cost=0.00..0.00 rows=1 width=4)\n>> Filter: (servicetype = 646)\n>> -> Index Scan using \n>usage_flow_i on \"usage\" u (cost=0.00..2.06 rows=6 width=8)\n>> Index Cond: (u.flow = \n>\"outer\".dbid)\n>> -> Index Scan using \n>usageparameter_usage_i on usageparameter up (cost=0.00..2.06 \n>rows=1 width=15)\n>> Index Cond: (up.\"usage\" = \n>\"outer\".dbid)\n>> Filter: ((prefix)::text <> \n>> 'xsd'::text)\n>> \n>> For the sparc:\n>> \n>> QUERY PLAN\n>> Unique (cost=10.81..10.83 rows=1 width=167)\n>> -> Sort (cost=10.81..10.82 rows=1 width=167)\n>> Sort Key: up.prefix, s.name, s.tuid, s.foundryversion\n>> -> Nested Loop (cost=9.75..10.80 rows=1 width=167)\n>> Join Filter: (\"outer\".flow = \"inner\".dbid)\n>> -> Hash Join (cost=9.75..10.79 rows=1 width=171)\n>> Hash Cond: (\"outer\".dbid = \"inner\".\"schema\")\n>> -> Seq Scan on \"schema\" s \n>(cost=0.00..1.02 rows=2 width=128)\n>> -> Hash (cost=9.75..9.75 rows=1 width=51)\n>> -> Nested Loop (cost=0.00..9.75 \n>rows=1 width=51)\n>> Join Filter: \n>(\"inner\".\"usage\" = \"outer\".dbid)\n>> -> Index Scan using \n>usage_flow_i on \"usage\" u (cost=0.00..4.78 rows=1 width=8)\n>> -> Index Scan using \n>usageparameter_schema_i on usageparameter up (cost=0.00..4.96 \n>rows=1 width=51)\n>> Filter: \n>((prefix)::text <> 'xsd'::text)\n>> -> Seq Scan on flow fl (cost=0.00..0.00 \n>rows=1 width=4)\n>> Filter: (servicetype = 646)\n>> \n>Unique (cost=11.50..11.52 rows=2 width=131) Unique \n>(cost=10.81..10.83 rows=1 width=167)\n>\n>The estimations for the cost is basically the same, 10ms for \n>the first row. 
Can you supply Explain analyze to see what \n>it's actually doing?\n>\n>Russell Smith\n>\n>---------------------------(end of \n>broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>", "msg_date": "Wed, 17 Nov 2004 07:32:55 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan question" }, { "msg_contents": "\nOn Nov 17, 2004, at 7:32 AM, David Parker wrote:\n\n> Oh, I didn't realize that analyze gave that much more info. I've got a\n> lot to learn about this tuning stuff ;-)\n>\n> I've attached the output. I see from the new output where the slow \n> query\n> is taking its time (the nested loop at line 10), but I still have no\n> idea why this plan is getting chosen....\n>\n\nlooks like your stats are incorrect on the sparc.\nDid you forget to run vacuum analyze on it?\n\nalso, do both db's have the same data loaded?\nthere are some very different numbers in terms of actual rows floating \naround there...\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Wed, 17 Nov 2004 09:00:42 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan question" } ]
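Jeff's suggestion, spelled out against the table names visible in the plans (a sketch; run it on the sparc box and then recheck the plan):

VACUUM ANALYZE "schema";
VACUUM ANALYZE "usage";
VACUUM ANALYZE usageparameter;
VACUUM ANALYZE flow;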
[ { "msg_contents": ">If they are the same and PostgreSQL are the same, are the \n>intel machines Xeons?\n\nYup, dual 3.06-GHz Intel Xeon Processors.\n\nI'm not sure off the top of my head what the sparcs are exactly. We're\nin the process of moving completely to intel, but we still have to\nsupport our app on sparc, and we are seeing these weird differences...\n\n- DAP\n", "msg_date": "Wed, 17 Nov 2004 08:08:43 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan question" } ]
[ { "msg_contents": "I've got pg_autovacuum running on both platforms. I've verified that the\ntables involved in the query have the same number of rows on both\ndatabases.\n\nI'm not sure where to look to see how the stats might be different. The\n\"good\" database's pg_statistic table has 24 more rows than that in the\n\"bad\" database, so there's definitely a difference. The good database's\npg_statistic has rows for 2 extra tables, but they are not tables\ninvolved in the query in question...\n\nSo something must be up with stats, but can you tell me what the most\nsignicant columns in the pg_statistic table are for the planner making\nits decision? I'm sure this has been discussed before, so if there's a\nthread you can point me to, that would be great - I realize it's a big\ngeneral question.\n\nThanks for your time.\n\n- DAP\n\n>-----Original Message-----\n>From: Jeff [mailto:[email protected]] \n>Sent: Wednesday, November 17, 2004 9:01 AM\n>To: David Parker\n>Cc: Russell Smith; [email protected]\n>Subject: Re: [PERFORM] query plan question\n>\n>\n>On Nov 17, 2004, at 7:32 AM, David Parker wrote:\n>\n>> Oh, I didn't realize that analyze gave that much more info. \n>I've got a \n>> lot to learn about this tuning stuff ;-)\n>>\n>> I've attached the output. I see from the new output where the slow \n>> query is taking its time (the nested loop at line 10), but I still \n>> have no idea why this plan is getting chosen....\n>>\n>\n>looks like your stats are incorrect on the sparc.\n>Did you forget to run vacuum analyze on it?\n>\n>also, do both db's have the same data loaded?\n>there are some very different numbers in terms of actual rows floating \n>around there...\n>\n>--\n>Jeff Trout <[email protected]>\n>http://www.jefftrout.com/\n>http://www.stuarthamm.net/\n>\n>\n", "msg_date": "Wed, 17 Nov 2004 09:43:39 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan question" } ]
[ { "msg_contents": "Hmm, I'm really a beginner at this...\n\nIt turns out that the pg_statistic table in my good database has records\nin it for the tables in the query, while the pg_statistic table in my\nbad database has no records for those tables at all!\n\nSo I guess I need to figure out why pg_autovacuum isn't analyzing those\ntables.\n\n- DAP \n\n>-----Original Message-----\n>From: David Parker \n>Sent: Wednesday, November 17, 2004 9:44 AM\n>To: 'Jeff'\n>Cc: Russell Smith; [email protected]\n>Subject: RE: [PERFORM] query plan question\n>\n>I've got pg_autovacuum running on both platforms. I've \n>verified that the tables involved in the query have the same \n>number of rows on both databases.\n>\n>I'm not sure where to look to see how the stats might be \n>different. The \"good\" database's pg_statistic table has 24 \n>more rows than that in the \"bad\" database, so there's \n>definitely a difference. The good database's pg_statistic has \n>rows for 2 extra tables, but they are not tables involved in \n>the query in question...\n>\n>So something must be up with stats, but can you tell me what \n>the most signicant columns in the pg_statistic table are for \n>the planner making its decision? I'm sure this has been \n>discussed before, so if there's a thread you can point me to, \n>that would be great - I realize it's a big general question.\n>\n>Thanks for your time.\n>\n>- DAP\n>\n>>-----Original Message-----\n>>From: Jeff [mailto:[email protected]]\n>>Sent: Wednesday, November 17, 2004 9:01 AM\n>>To: David Parker\n>>Cc: Russell Smith; [email protected]\n>>Subject: Re: [PERFORM] query plan question\n>>\n>>\n>>On Nov 17, 2004, at 7:32 AM, David Parker wrote:\n>>\n>>> Oh, I didn't realize that analyze gave that much more info. \n>>I've got a\n>>> lot to learn about this tuning stuff ;-)\n>>>\n>>> I've attached the output. I see from the new output where the slow \n>>> query is taking its time (the nested loop at line 10), but I still \n>>> have no idea why this plan is getting chosen....\n>>>\n>>\n>>looks like your stats are incorrect on the sparc.\n>>Did you forget to run vacuum analyze on it?\n>>\n>>also, do both db's have the same data loaded?\n>>there are some very different numbers in terms of actual rows \n>floating \n>>around there...\n>>\n>>--\n>>Jeff Trout <[email protected]>\n>>http://www.jefftrout.com/\n>>http://www.stuarthamm.net/\n>>\n>>\n", "msg_date": "Wed, 17 Nov 2004 10:06:13 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan question" }, { "msg_contents": "\"David Parker\" <[email protected]> writes:\n> So I guess I need to figure out why pg_autovacuum isn't analyzing those\n> tables.\n\nWhich autovacuum version are you using? The early releases had some\nnasty bugs that would allow it to skip tables sometimes. I think all\nthe known problems are fixed as of recent 7.4.x updates.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 2004 10:46:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan question " } ]
[ { "msg_contents": "We're using postgresql 7.4.5. I've only recently put pg_autovacuum in\nplace as part of our installation, and I'm basically taking the\ndefaults. I doubt it's a problem with autovacuum itself, but rather with\nmy configuration of it. I have some reading to do, so any pointers to\nexisting autovacuum threads would be greatly appreciated!\n\nThanks.\n\n- DAP\n\n>-----Original Message-----\n>From: Tom Lane [mailto:[email protected]] \n>Sent: Wednesday, November 17, 2004 10:46 AM\n>To: David Parker\n>Cc: Jeff; Russell Smith; [email protected]\n>Subject: Re: [PERFORM] query plan question \n>\n>\"David Parker\" <[email protected]> writes:\n>> So I guess I need to figure out why pg_autovacuum isn't analyzing \n>> those tables.\n>\n>Which autovacuum version are you using? The early releases \n>had some nasty bugs that would allow it to skip tables \n>sometimes. I think all the known problems are fixed as of \n>recent 7.4.x updates.\n>\n>\t\t\tregards, tom lane\n>\n", "msg_date": "Wed, 17 Nov 2004 10:59:44 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan question " }, { "msg_contents": "David Parker wrote:\n\n>We're using postgresql 7.4.5. I've only recently put pg_autovacuum in\n>place as part of our installation, and I'm basically taking the\n>defaults. I doubt it's a problem with autovacuum itself, but rather with\n>my configuration of it. I have some reading to do, so any pointers to\n>existing autovacuum threads would be greatly appreciated!\n>\n\nWell the first thing to do is increase the verbosity of the \npg_autovacuum logging output. If you use -d2 or higher, pg_autovacuum \nwill print out a lot of detail on what it thinks the thresholds are and \nwhy it is or isn't performing vacuums and analyzes. Attach some of the \nlog and I'll take a look at it.\n", "msg_date": "Wed, 17 Nov 2004 11:41:10 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan question" } ]
[ { "msg_contents": "Well based on the autovacuum log that you attached, all of those tables \nare insert only (at least during the time period included in the log. \nIs that correct? If so, autovacuum will never do a vacuum (unless \nrequired by xid wraparound issues) on those tables. So this doesn't \nappear to be an autovacuum problem. I'm not sure about the missing \npg_statistic entries anyone else care to field that one?\n\nMatthew\n\n\nDavid Parker wrote:\n\n>Thanks. The tables I'm concerned with are named: 'schema', 'usage',\n>'usageparameter', and 'flow'. It looks like autovacuum is performing\n>analyzes:\n>\n>% grep \"Performing: \" logs/.db.tazz.vacuum.log\n>[2004-11-17 12:05:58 PM] Performing: ANALYZE\n>\"public\".\"scriptlibrary_library\"\n>[2004-11-17 12:15:59 PM] Performing: ANALYZE\n>\"public\".\"scriptlibraryparm\"\n>[2004-11-17 12:15:59 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>[2004-11-17 12:21:00 PM] Performing: ANALYZE \"public\".\"usageproperty\"\n>[2004-11-17 12:21:00 PM] Performing: ANALYZE \"public\".\"route\"\n>[2004-11-17 12:21:00 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>[2004-11-17 12:21:00 PM] Performing: ANALYZE\n>\"public\".\"scriptlibrary_library\"\n>[2004-11-17 12:26:01 PM] Performing: ANALYZE \"public\".\"usage\"\n>[2004-11-17 12:26:01 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>[2004-11-17 12:31:04 PM] Performing: ANALYZE \"public\".\"usageproperty\"\n>[2004-11-17 12:36:04 PM] Performing: ANALYZE \"public\".\"route\"\n>[2004-11-17 12:36:04 PM] Performing: ANALYZE \"public\".\"service_usage\"\n>[2004-11-17 12:36:04 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>\n>But when I run the following:\n>\n>select * from pg_statistic where starelid in \n>(select oid from pg_class where relname in\n>('schema','usageparameter','flow','usage'))\n>\n>it returns no records. Shouldn't it? It doesn't appear to be doing a\n>vacuum anywhere, which makes sense because none of these tables have\n>over the default threshold of 1000. Are there statistics which only get\n>generated by vacuum?\n>\n>I've attached a gzip of the pg_autovacuum log file, with -d 3.\n>\n>Thanks again.\n>\n>- DAP\n>\n>\n> \n>\n>>-----Original Message-----\n>>From: Matthew T. O'Connor [mailto:[email protected]] \n>>Sent: Wednesday, November 17, 2004 11:41 AM\n>>To: David Parker\n>>Cc: Tom Lane; Jeff; Russell Smith; [email protected]\n>>Subject: Re: [PERFORM] query plan question\n>>\n>>David Parker wrote:\n>>\n>> \n>>\n>>>We're using postgresql 7.4.5. I've only recently put pg_autovacuum in \n>>>place as part of our installation, and I'm basically taking the \n>>>defaults. I doubt it's a problem with autovacuum itself, but rather \n>>>with my configuration of it. I have some reading to do, so \n>>> \n>>>\n>>any pointers \n>> \n>>\n>>>to existing autovacuum threads would be greatly appreciated!\n>>>\n>>> \n>>>\n>>Well the first thing to do is increase the verbosity of the \n>>pg_autovacuum logging output. If you use -d2 or higher, \n>>pg_autovacuum will print out a lot of detail on what it thinks \n>>the thresholds are and \n>>why it is or isn't performing vacuums and analyzes. Attach \n>>some of the \n>>log and I'll take a look at it.\n>>\n>> \n>>\n\n", "msg_date": "Wed, 17 Nov 2004 14:01:48 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan question" } ]