[ { "msg_contents": "Hi,\nI have 2 questions regarding the storage optimization done by Postgres:\n1) Is a NULL value optimized for storage. If I have a timestamp (or some\nsuch) field that I set to default NULL, will it still use up the full space\nfor the data type.\n2) Similarly, if I have a text array, is an empty array optimized for\nstorage?\nThanks,\nGangadharan\n\nHi, \nI have 2 questions regarding the storage optimization done by Postgres:\n1) Is a NULL value optimized for storage. If I have a timestamp (or some such) field that I set to default NULL, will it still\nuse up the full space for the data type.\n2) Similarly, if I have a text array, is an empty array optimized for storage?\nThanks,\nGangadharan", "msg_date": "Fri, 1 Feb 2008 14:14:18 +0530", "msg_from": "\"Gangadharan S.A.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Storage space usage" }, { "msg_contents": "On Fri, Feb 01, 2008 at 02:14:18PM +0530, Gangadharan S.A. wrote:\n> Hi,\n> I have 2 questions regarding the storage optimization done by Postgres:\n> 1) Is a NULL value optimized for storage. If I have a timestamp (or some\n> such) field that I set to default NULL, will it still use up the full space\n> for the data type.\n\nNull values are indicated via a NULL bitmap. A null field is not stored,\nit is just indicated in the bitmap.\n\n> 2) Similarly, if I have a text array, is an empty array optimized for\n> storage?\n\nArrays are stored as varlenas. I'm pretty sure than an empty array is\nconsidered to be NULL; as such the comments above would apply.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 2 Feb 2008 00:37:09 -0600", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storage space usage" } ]
[ { "msg_contents": "Hello\n\nI am testing one server and I found strange behave of 8.2.6. My\nconfiguration is:\n\nLinux Orbisek 2.6.18-xeonsmp #1 SMP Thu Jan 31 14:09:15 CET 2008 i686\nGNU/Linux, 4 x Intel(R) Xeon(R) CPU E5335 @ 2.00GHz, 6G RAM\n\npgbench on 8.3 puts 1600-1700tps without dependency on number of\nconnections or transactions.\n\npgbench on 8.2 is similar only for 10 connections and doesn't depend\non number of transactions:\npostgres@Orbisek:/root$ /usr/local/pgsql/bin/pgbench -c10 -t 50000 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 50000\nnumber of transactions actually processed: 500000/500000\ntps = 1747.662768 (including connections establishing)\ntps = 1747.758538 (excluding connections establishing)\n\nbut is half with 50 connections:\n10 (1780), 20 (1545), 30 (1400), 40 (1145) 50c (987tps)\n\npostgres@Orbisek:/root$ /usr/local/pgsql/bin/pgbench -c50 -t 100 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 50\nnumber of transactions per client: 100\nnumber of transactions actually processed: 5000/5000\ntps = 1106.484286 (including connections establishing)\ntps = 1126.062214 (excluding connections establishing)\npostgres@Orbisek:/root$ /usr/local/pgsql/bin/pgbench -c50 -t 1000 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 50\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 50000/50000\ntps = 975.009227 (including connections establishing)\ntps = 976.521036 (excluding connections establishing)\n\nall time load is less than 3 and cpu us 16%, cpu sys 5% (8.3 used\nprocs about 18%us and 7% sy)\n\nshared_buffers = 160MB\nwork_mem = 10MB\nmaintenance_work_mem = 256MB\nwal_buffers = 128kB\ncheckpoint_segments = 100\nbgwriter_lru_percent = 20.0\nbgwriter_lru_maxpages = 200\nbgwriter_all_percent = 10\nbgwriter_all_maxpages = 600\nautovacuum_vacuum_cost_delay = 20\n\npostgres82=# select mode, count(*) from pg_locks group by mode;\n mode | count\n--------------------------+-------\n ShareLock | 40\n ShareUpdateExclusiveLock | 1\n AccessShareLock | 99\n ExclusiveLock | 62\n RowExclusiveLock | 215\n(5 rows)\n\npostgres83=# select mode, count(*) from pg_locks group by mode;\n mode | count\n--------------------------+-------\n ShareLock | 43\n ShareUpdateExclusiveLock | 2\n AccessShareLock | 101\n ExclusiveLock | 116\n RowExclusiveLock | 218\n(5 rows)\npostgres@Orbisek:/root/postgresql-8.2.6/src/tools/fsync$ ./test_fsync\n-f /usr/local/pgsql/data/aa\nSimple write timing:\n write 0.005241\n\nCompare fsync times on write() and non-write() descriptor:\n(If the times are similar, fsync() can sync data written\n on a different descriptor.)\n write, fsync, close 0.152853\n write, close, fsync 0.152203\n\nCompare one o_sync write to two:\n one 16k o_sync write 0.298571\n two 8k o_sync writes 0.295349\n\nCompare file sync methods with one 8k write:\n\n (o_dsync unavailable)\n write, fdatasync 0.151626\n write, fsync, 0.150524\n\nCompare file sync methods with 2 8k writes:\n (o_dsync unavailable)\n open o_sync, write 0.340511\n write, fdatasync 0.182257\n write, fsync, 0.177968\n\nany ideas are welcome\n\nRegards\nPavel Stehule\n", "msg_date": "Sun, 3 Feb 2008 17:02:13 +0100", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": true, "msg_subject": "slow 8.2.6 with 50 connections" }, { "msg_contents": "On Feb 3, 2008 10:02 AM, Pavel 
Stehule <[email protected]> wrote:\n> Hello\n>\n> I am testing one server and I found strange behave of 8.2.6. My\n> configuration is:\n\nNote that with a scaling factor that's < the number of clients, your\ntest isn't gonna be very useful. scaling factor should always be >=\nnumber of clients.\n", "msg_date": "Sun, 3 Feb 2008 12:58:12 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow 8.2.6 with 50 connections" }, { "msg_contents": "On 03/02/2008, Scott Marlowe <[email protected]> wrote:\n> On Feb 3, 2008 10:02 AM, Pavel Stehule <[email protected]> wrote:\n> > Hello\n> >\n> > I am testing one server and I found strange behave of 8.2.6. My\n> > configuration is:\n>\n> Note that with a scaling factor that's < the number of clients, your\n> test isn't gonna be very useful. scaling factor should always be >=\n> number of clients.\n>\n\nI use it only for orientation. And reported behave signalize some problem.\n\nPavel\n", "msg_date": "Sun, 3 Feb 2008 20:35:22 +0100", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow 8.2.6 with 50 connections" }, { "msg_contents": "On Sun, 3 Feb 2008, Pavel Stehule wrote:\n\n> postgres@Orbisek:/root$ /usr/local/pgsql/bin/pgbench -c10 -t 50000 test\n> scaling factor: 1\n\nIf you're running with the number of clients much greater than the scaling \nfactor, it's unsurprising transactions are suffering from lock issues at \nhigher client loads. It's good news that situation is much improved in \n8.3 but I'm not sure how much you can conclude from that.\n\nIncreasing scale will make the database bigger and drive down your results \ndramatically though, as it will get more disk-bound. Consider running \npgbench with \"-N\", which removes updates to the branches/tellers table \nwhere the worse locking issues are at, and see if that changes how 8.2 and \n8.3 compare. But try the things below first.\n\nThe other thing to try is running the pgbench client on another system \nfrom the server itself. I've seen resultings showing this curve before \n(sharp dive at higher TPS) that flattened out considerably once that was \ndone.\n\n> bgwriter_lru_percent = 20.0\n> bgwriter_lru_maxpages = 200 \n> bgwriter_all_percent = 10\n> bgwriter_all_maxpages = 600\n\nHmm, isn't that the set that Sun was using in their benchmarks? Unless \nyou have more CPUs than your system does, these are way more aggressive \nthan make sense--the percentages moreso than the pages. For pgbench \ntests, you'll probably find performance improves in every way if you just \nturn the background writer off in 8.2. I suspect that part of your 8.2 \nvs. 8.3 difference here is that the way you're 8.2 background writer is \nconfigured here is wasting all sorts of CPU and I/O resources doing \nunproductive things. 8.3 took away most of these parameters specifically \nto keep that from happening.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 3 Feb 2008 16:17:55 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow 8.2.6 with 50 connections" } ]
[ { "msg_contents": "Can I ask for some help with benchmarking?\n\nThere are some results here that show PostgreSQL is slower in some cases\nthan Monet and MySQL. Of course these results were published immediately\nprior to 8.2 being released, plus run out-of-the-box, so without even\nbasic performance tuning.\n\nWould anybody like to repeat these tests with the latest production\nversions of these databases (i.e. with PGSQL 8.3), and with some\nsensible tuning settings for the hardware used? It will be useful to get\nsome blind tests with more sensible settings.\n\nhttp://monetdb.cwi.nl/projects/monetdb//SQL/Benchmark/TPCH/\n\nMultiple runs from different people/different hardware is useful since\nthey help to iron-out differences in hardware and test methodology. So\ndon't worry if you see somebody else doing this also.\n\nThanks,\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Mon, 04 Feb 2008 18:37:16 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmark Data requested" }, { "msg_contents": "Hi Simon,\n\nNote that MonetDB/X100 does not have a SQL optimizer, they ran raw\nhand-coded plans. As a consequence, these comparisons should be taken as an\n\"executor-executor\" test and we/you should be sure that the PG planner has\ngenerated the best possible plan.\n\nThat said, we've already done the comparisons internally and they've got a\ngood point to make about L2 cache use and removal of unnecessary\nabstractions in the executor. We've been aware of this since 2005/6 and\nhave been slowly working these ideas into our/PG executor.\n\nBottom line: it's a good thing to work to get close to the X100/Monet\nexecutor with a more general purpose DB. PG is a looong way from being\ncomparable, mostly due to poor L2 D-cache locality and I-cache thrashing in\nthe executor. The only way to cure this is to work on more rows than one at\na time.\n\n- Luke \n\n\nOn 2/4/08 10:37 AM, \"Simon Riggs\" <[email protected]> wrote:\n\n> Can I ask for some help with benchmarking?\n> \n> There are some results here that show PostgreSQL is slower in some cases\n> than Monet and MySQL. Of course these results were published immediately\n> prior to 8.2 being released, plus run out-of-the-box, so without even\n> basic performance tuning.\n> \n> Would anybody like to repeat these tests with the latest production\n> versions of these databases (i.e. with PGSQL 8.3), and with some\n> sensible tuning settings for the hardware used? It will be useful to get\n> some blind tests with more sensible settings.\n> \n> http://monetdb.cwi.nl/projects/monetdb//SQL/Benchmark/TPCH/\n> \n> Multiple runs from different people/different hardware is useful since\n> they help to iron-out differences in hardware and test methodology. So\n> don't worry if you see somebody else doing this also.\n> \n> Thanks,\n\n", "msg_date": "Mon, 04 Feb 2008 10:47:49 -0800", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "> There are some results here that show PostgreSQL is slower in some cases\n> than Monet and MySQL. Of course these results were published immediately\n> prior to 8.2 being released, plus run out-of-the-box, so without even\n> basic performance tuning.\n>\n> Would anybody like to repeat these tests with the latest production\n> versions of these databases (i.e. with PGSQL 8.3), and with some\n> sensible tuning settings for the hardware used? 
It will be useful to get\n> some blind tests with more sensible settings.\n>\n> http://monetdb.cwi.nl/projects/monetdb//SQL/Benchmark/TPCH/\n>\n> Multiple runs from different people/different hardware is useful since\n> they help to iron-out differences in hardware and test methodology. So\n> don't worry if you see somebody else doing this also.\n\nHere is another graph: http://tweakers.net/reviews/649/7\n\nWithout monetdb though.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Mon, 4 Feb 2008 19:50:46 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 2008-02-04 at 10:47 -0800, Luke Lonergan wrote:\n\n> Note that MonetDB/X100 does not have a SQL optimizer, they ran raw\n> hand-coded plans. As a consequence, these comparisons should be taken as an\n> \"executor-executor\" test and we/you should be sure that the PG planner has\n> generated the best possible plan.\n\nIf it doesn't then I'd regard that as a performance issue in itself.\n\n> That said, we've already done the comparisons internally and they've got a\n> good point to make about L2 cache use and removal of unnecessary\n> abstractions in the executor. We've been aware of this since 2005/6 and\n> have been slowly working these ideas into our/PG executor.\n>\n> Bottom line: it's a good thing to work to get close to the X100/Monet\n> executor with a more general purpose DB. PG is a looong way from being\n> comparable, mostly due to poor L2 D-cache locality and I-cache thrashing in\n> the executor. \n\nYou maybe right, but I want to see where it hurts us the most.\n\n> The only way to cure this is to work on more rows than one at a time.\n\nDo you have any results to show that's true, or are you just referring\nto the Cray paper? (Which used fixed length tuples and specific vector\nhardware).\n\n(With regard to benchmarks, I'd rather not download Monet at all. Helps\navoid legal issues around did-you-look-at-the-code questions.)\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Mon, 04 Feb 2008 19:07:41 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 4 Feb 2008, Simon Riggs wrote:\n\n> Would anybody like to repeat these tests with the latest production\n> versions of these databases (i.e. with PGSQL 8.3)\n\nDo you have any suggestions on how people should run TPC-H? It looked \nlike a bit of work to sort through how to even start this exercise.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 4 Feb 2008 15:09:58 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 2008-02-04 at 15:09 -0500, Greg Smith wrote:\n> On Mon, 4 Feb 2008, Simon Riggs wrote:\n> \n> > Would anybody like to repeat these tests with the latest production\n> > versions of these databases (i.e. with PGSQL 8.3)\n> \n> Do you have any suggestions on how people should run TPC-H? It looked \n> like a bit of work to sort through how to even start this exercise.\n\nThe link referred to a few different scale factors, so you could try\nthose. 
But anything that uses the hardware you have to its full\npotential is valuable.\n\nEverybody's test method is going to be different, whatever I say...\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Mon, 04 Feb 2008 20:21:56 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Hi Simon,\n\nOn 2/4/08 11:07 AM, \"Simon Riggs\" <[email protected]> wrote:\n\n>> \"executor-executor\" test and we/you should be sure that the PG planner has\n>> generated the best possible plan.\n> \n> If it doesn't then I'd regard that as a performance issue in itself.\n\nAgreed, though that's two problems to investigate - I think the Monet/X100\nstuff is clean in that it's a pure executor test.\n \n> You maybe right, but I want to see where it hurts us the most.\n\nYou'll see :-)\n \n>> The only way to cure this is to work on more rows than one at a time.\n> \n> Do you have any results to show that's true, or are you just referring\n> to the Cray paper? (Which used fixed length tuples and specific vector\n> hardware).\n\nNo paper referenced, just inference from the results and their (and others)\nconclusions about locality and re-use. It's a similar enough situation to\nscientific programming with vector processors versus cache based superscalar\nthat these are the right conclusions. We've done the profiling to look at\ncache misses and have some data to back it up as well.\n \n> (With regard to benchmarks, I'd rather not download Monet at all. Helps\n> avoid legal issues around did-you-look-at-the-code questions.)\n\nNone of us have looked at the code or downloaded it. There are a number of\npresentations out there for Monet/X100 to see what their results are.\n\n- Luke\n\n", "msg_date": "Mon, 04 Feb 2008 14:32:21 -0800", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Hi Simon,\n\nI have some insight into TPC-H on how it works.\n\nFirst of all I think it is a violation of TPC rules to publish numbers \nwithout auditing them first. So even if I do the test to show the \nbetter performance of PostgreSQL 8.3, I cannot post it here or any \npublic forum without doing going through the \"process\". (Even though it \nis partial benchmark as they are just doing the equivalent of the \nPowerRun of TPCH) Maybe the PR of PostgreSQL team should email \[email protected] about them and see what they have to say about that comparison.\n\nOn the technical side:\n\nRemember all TPC-H queries when run sequentially on PostgreSQL uses only \n1 core or virtual CPU so it is a very bad for system to use it with \nPostgreSQL (same for MySQL too).\n\nAlso very important unless you are running the UPDATE FUNCTIONS which \nare separate queries, all these Q1-Q22 Queries are pure \"READ-ONLY\" \nqueries. Traditionally I think PostgreSQL does lack \"READ-SPEED\"s \nspecially since it is bottlenecked by the size of the reads it does \n(BLOCKSIZE). Major database provides multi-block parameters to do \nmultiple of reads/writes in terms of blocksizes to reduce IOPS and also \nfor read only they also have READ-AHEAD or prefetch sizes which is \ngenerally bigger than multi-block or extent sizes to aid reads.\n\nScale factor is in terms of gigs and hence using max scale of 5 (5G) is \npretty useless since most of the rows could be cached in modern day \nsystems. And comparing with 0.01 is what 10MB? 
Size of recent L2 cache \nof Intel is probably bigger than that size.\n\nIf you are doing tuning for TPC-H Queries focus on few of them:\nFor example Query 1 is very Join intensive and if your CPU is not 100% \nused then you have a problem in your IO to solve before tuning it.\n\nAnother example is Query 16 is literally IO scan speed, many people use \nit to see if the database can scan at \"line speeds\" of the storage, \nending up with 100% CPU means the database cannot process that many rows \n(just to bring it in).\n\nIn essence each query does some combination of system features to \nhighlight the performance. However since it is an old benchmark, \ndatabase companies end up \"re-engineering\" their technologies to gain \nadvantage in this benchmark (Hence its time for a successor in work \ncalled TPC-DS which will have more than 100 such queries)\n\nFew of the technologies that have really helped gain ground in TPC-H world\n* Hash and/or Range Partitioning of tables ( PostgreSQL 8.3 can do that \nbut the setup cost of writing schema is great specially since data has \nto be loaded in separate tables)\n* Automated Aggregated Views - used by optmiziers - database technology \nto update more frequently used aggregations in a smaller views\n* Cube views Index - like bitmap but multidimensional (I think ..but not \nsure)\n\nThat said, is it useful to be used in \"Regression testing in PostgreSQL \nfarms. I would think yes.. specially Q16\n\nHope this helps.\nRegards,\nJignesh\n\n\n\n \nSimon Riggs wrote:\n> Can I ask for some help with benchmarking?\n>\n> There are some results here that show PostgreSQL is slower in some cases\n> than Monet and MySQL. Of course these results were published immediately\n> prior to 8.2 being released, plus run out-of-the-box, so without even\n> basic performance tuning.\n>\n> Would anybody like to repeat these tests with the latest production\n> versions of these databases (i.e. with PGSQL 8.3), and with some\n> sensible tuning settings for the hardware used? It will be useful to get\n> some blind tests with more sensible settings.\n>\n> http://monetdb.cwi.nl/projects/monetdb//SQL/Benchmark/TPCH/\n>\n> Multiple runs from different people/different hardware is useful since\n> they help to iron-out differences in hardware and test methodology. So\n> don't worry if you see somebody else doing this also.\n>\n> Thanks,\n>\n> \n", "msg_date": "Mon, 04 Feb 2008 17:33:34 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Hi Greg,\n\nOn 2/4/08 12:09 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> Do you have any suggestions on how people should run TPC-H? It looked\n> like a bit of work to sort through how to even start this exercise.\n\nTo run \"TPC-H\" requires a license to publish, etc.\n\nHowever, I think you can use their published data and query generation kit\nto run the queries, which aren't the benchmark per-se. That's what the\nMonet/X100 people did.\n\n- Luke\n\n", "msg_date": "Mon, 04 Feb 2008 14:34:31 -0800", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Doing it at low scales is not attractive.\n\nCommercial databases are publishing at scale factor of 1000(about 1TB) \nto 10000(10TB) with one in 30TB space. 
So ideally right now tuning \nshould start at 1000 scale factor.\n\nUnfortunately I have tried that before with PostgreSQL the few of the \nproblems are as follows:\n\nSingle stream loader of PostgreSQL takes hours to load data. (Single \nstream load... wasting all the extra cores out there)\n\nMultiple table loads ( 1 per table) spawned via script is bit better \nbut hits wal problems.\n\nTo avoid wal problems, I had created tables and load statements within \nthe same transaction, faster but cannot create index before load or it \nstarts writing to wal... AND if indexes are created after load, it takes \nabout a day or so to create all the indices required. (Its single \nthreaded and creating multiple indexes/indices at the same time could \nresult in running out of temporary \"DISK\" space since the tables are so \nbig. Which means 1 thread at a time is the answer for creating tables \nthat are really big. It is slow.\n\nBoy, by this time most people doing TPC-H in high end give up on \nPostgreSQL.\n\nI have not even started Partitioning of tables yet since with the \ncurrent framework, you have to load the tables separately into each \ntables which means for the TPC-H data you need \"extra-logic\" to take \nthat table data and split it into each partition child table. Not stuff \nthat many people want to do by hand.\n\nThen for the power run that is essentially running one query at a time \nshould essentially be able to utilize the full system (specially \nmulti-core systems), unfortunately PostgreSQL can use only one core. \n(Plus since this is read only and there is no separate disk reader all \nother processes are idle) and system is running at 1/Nth capacity (where \nN is the number of cores/threads)\n\n(I am not sure here with Partitioned tables, do you get N processes \nrunning in the system when you scan the partitioned table?)\n\nEven off-loading work like \"fetching the data into bufferpool\" into \nseparate processes will go big time with this type of workloads.\n\nI would be happy to help out if folks here want to do work related to \nit. Infact if you have time, I can request a project in one of the Sun \nBenchmarking center to see what we can learn with community members \ninterested in understanding where PostgreSQL performs and fails.\n\nRegards,\nJignesh\n\nGreg Smith wrote:\n> On Mon, 4 Feb 2008, Simon Riggs wrote:\n>\n>> Would anybody like to repeat these tests with the latest production\n>> versions of these databases (i.e. with PGSQL 8.3)\n>\n> Do you have any suggestions on how people should run TPC-H? It looked \n> like a bit of work to sort through how to even start this exercise.\n>\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n", "msg_date": "Mon, 04 Feb 2008 17:55:33 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 4 Feb 2008, Luke Lonergan wrote:\n\n> However, I think you can use their published data and query generation kit\n> to run the queries, which aren't the benchmark per-se. That's what the\n> Monet/X100 people did.\n\nRight; I was just hoping someone might suggest some relatively \nstandardized way to do that via PostgreSQL. 
I read Simon's original note \nand was afraid that multiple people might end up duplicating some \nnon-trivial amount of work just to get the kits setup and running, or get \nfrustrated not expecting that part and just give up on the whole idea.\n\nI'm very interested in this particular topic (non-trivial database \nmicro-benchmarks) but have no time to spare this week to hack on this one \nmyself.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 4 Feb 2008 17:56:05 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 4 Feb 2008, Jignesh K. Shah wrote:\n\n> Doing it at low scales is not attractive. Commercial databases are \n> publishing at scale factor of 1000(about 1TB) to 10000(10TB) with one in \n> 30TB space. So ideally right now tuning should start at 1000 scale \n> factor.\n\nI think what Simon was trying to get at is some sort of debunking of \nMonet's benchmarks which were running in-memory while not giving \nPostgreSQL any real memory to work with. What you're talking about is a \ncompletely separate discussion which is well worth having in addition to \nthat.\n\nI'm well aware of how painful it is to generate+load+index even single TB \nworth of data with PostgreSQL right now because I've been doing just that \nfor weeks now (there's two processing phases in there as well for me that \ntake even longer, but the raw operations are still a significant portion \nof the total time).\n\n> I would be happy to help out if folks here want to do work related to \n> it. Infact if you have time, I can request a project in one of the Sun \n> Benchmarking center to see what we can learn with community members \n> interested in understanding where PostgreSQL performs and fails.\n\nSounds like a good 8.4 project. Maybe pick this topic back up at the East \nconvention next month, we could talk about it then.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 4 Feb 2008 18:37:24 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "\n\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Then for the power run that is essentially running one query at a time should\n> essentially be able to utilize the full system (specially multi-core systems),\n> unfortunately PostgreSQL can use only one core. (Plus since this is read only\n> and there is no separate disk reader all other processes are idle) and system\n> is running at 1/Nth capacity (where N is the number of cores/threads)\n\nIs the whole benchmark like this or is this just one part of it?\n\nIs the i/o system really able to saturate the cpu though?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Tue, 05 Feb 2008 00:10:55 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "\n\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Also very important unless you are running the UPDATE FUNCTIONS which are\n> separate queries, all these Q1-Q22 Queries are pure \"READ-ONLY\" queries.\n> Traditionally I think PostgreSQL does lack \"READ-SPEED\"s specially since it is\n> bottlenecked by the size of the reads it does (BLOCKSIZE). 
Major database\n> provides multi-block parameters to do multiple of reads/writes in terms of\n> blocksizes to reduce IOPS and also for read only they also have READ-AHEAD or\n> prefetch sizes which is generally bigger than multi-block or extent sizes to\n> aid reads.\n\nNote that all of these things are necessitated by those databases using direct\ni/o of some form or another. The theory is that PostgreSQL doesn't have to\nworry about these things because the kernel is taking care of it.\n\nHow true that is is a matter of some debate and it varies from OS to OS. But\nit's definitely true that the OS will do read-ahead for sequential reads, for\nexample.\n\nIncidentally we found some cases that Solaris was particularly bad at. Is\nthere anybody in particular that would be interested in hearing about them?\n(Not meant to be a knock on Solaris, I'm sure there are other cases Linux or\nBSD handle poorly too)\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Tue, 05 Feb 2008 00:17:40 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "\nTPC-H has two runs\nPowerRun which is single stream (Q1-22 RF1, RF2)\nAnd Throughput Runs which has \"N\" (depends on scale) running \nsimultaneously in a mixed sequence of the same queries and the two \nupdate functions. During throughput run you can expect to max out CPU... \nBut commerial databases generally have PowerRuns running quite well even \non multi-cores ( Oracle (without RAC have published with 144 cores on \nSolaris)\n\nAs for IO system saturating the CPU its two folds\nKernel fetching in the data which saturates at some value\nand in this case PostgreSQL reading the data and putting it in its \nbufferpool\n\nAn example of how I use it is as follows:\nDo a select query on a table such that it results in table scan without \nactually returning any rows back\nNow keep throwing hardware (better storage) till it saturates the CPU. \nThat's the practical max you can do with the CPU/OS combination \n(considering unlimited storage bandwidth). This one is primarily used in \nguessing how fast one of the queries in TPC-H will complete.\n\nIn my tests with PostgreSQL, I generally reach the CPU limit without \neven reaching the storage bandwidth of the underlying storage.\nJust to give numbers\nSingle 2Gb Fiber Channel port can practically go upto 180 MB/sec\nSingle 4Gb ports have proven to go upto 360-370MB/sec\nSo to saturate a FC port, postgreSQL has to be able to scan 370MB/sec \nwithout saturating the CPU.\nThen comes software stripping which allows multiple ports to be stripped \nover increasing the capacity of the bandwidth... Now scanning has to be \nable to drive Nx370MB/sec (all on single core).\n\nI had some numbers and I had some limitations based on cpu frequency, \nblocksize ,etc but those were for 8.1 days or so..\n\nI think to take PostgreSQL a bit high end, we have to first scale out \nthese numbers.\nDoing some sorts of test in PostgreSQL farms for every release actually \ndoes help people see the amount of data that it can drive through...\n\nWe can actually work on some database operation metrics to also guage \nhow much each release is improving over older releases.. I have ideas \nfor few of them.\n\nRegards,\nJignesh\n\n\nGregory Stark wrote:\n> \"Jignesh K. 
Shah\" <[email protected]> writes:\n>\n> \n>> Then for the power run that is essentially running one query at a time should\n>> essentially be able to utilize the full system (specially multi-core systems),\n>> unfortunately PostgreSQL can use only one core. (Plus since this is read only\n>> and there is no separate disk reader all other processes are idle) and system\n>> is running at 1/Nth capacity (where N is the number of cores/threads)\n>> \n>\n> Is the whole benchmark like this or is this just one part of it?\n>\n> Is the i/o system really able to saturate the cpu though?\n>\n> \n", "msg_date": "Mon, 04 Feb 2008 19:49:22 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "\n\nGregory Stark wrote:\n> Incidentally we found some cases that Solaris was particularly bad at. Is\n> there anybody in particular that would be interested in hearing about them?\n> (Not meant to be a knock on Solaris, I'm sure there are other cases Linux or\n> BSD handle poorly too)\n>\n>\n> \n\nSend me the details, I can file bugs for Solaris on behalf of the \ncommunity. Since I am involved in lot of PostgreSQL testing on Solaris \nthis year, I have a small list myself (mostly related to file system \nstuff though).\n\nI know one regarding bonnie rewriting blocks that you sent out. (I still \nhavent done anything about it yet but finally have some test machines \nfor such work instead of using my workstation to test it out :-)\n\nBut I am really interested in seeing which one hits PostgreSQL \nperformance/usability.\n\nThanks in advance.\n\nRegards,\nJignesh\n\n", "msg_date": "Mon, 04 Feb 2008 19:57:52 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 2008-02-04 at 17:33 -0500, Jignesh K. Shah wrote:\n\n> First of all I think it is a violation of TPC rules to publish numbers \n> without auditing them first. So even if I do the test to show the \n> better performance of PostgreSQL 8.3, I cannot post it here or any \n> public forum without doing going through the \"process\". \n\nI'm not interested in the final results, pricing etc.. Just a query by\nquery elapsed times.\n\nCan you show which part of the rules precludes this? I can't find it.\n\nThis is a developer list, so \"publishing\" things here is what we do for\ndiscussion, so it's hardly breaking the spirit of the TPC rules to\npublish results here, in the hope of focusing development effort.\n \n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 08:45:36 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 2008-02-04 at 17:55 -0500, Jignesh K. Shah wrote:\n> Doing it at low scales is not attractive.\n> \n> Commercial databases are publishing at scale factor of 1000(about 1TB) \n> to 10000(10TB) with one in 30TB space. So ideally right now tuning \n> should start at 1000 scale factor.\n\nI don't understand this. Sun is currently publishing results at 100GB,\n300GB etc.. Why would we ignore those and go for much higher numbers?\nEspecially when you explain why we wouldn't be able to. There isn't any\ncurrently valid result above 10 TB.\n\nIf anybody is going to run tests in response to my request, then *any*\nscale factor is interesting, on any hardware. 
If that means Scale Factor\n1, 3, 10 or 30 then that's fine by me. \n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 09:08:14 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Hi,\n\nLe lundi 04 février 2008, Jignesh K. Shah a écrit :\n> Single stream loader of PostgreSQL takes hours to load data. (Single\n> stream load... wasting all the extra cores out there)\n\nI wanted to work on this at the pgloader level, so CVS version of pgloader is \nnow able to load data in parallel, with a python thread per configured \nsection (1 section = 1 data file = 1 table is often the case).\nNot configurable at the moment, but I plan on providing a \"threads\" knob which \nwill default to 1, and could be -1 for \"as many thread as sections\".\n\n> Multiple table loads ( 1 per table) spawned via script is bit better\n> but hits wal problems.\n\npgloader will too hit the WAL problem, but it still may have its benefits, or \nat least we will soon (you can already if you take it from CVS) be able to \nmeasure if the parallel loading at the client side is a good idea perf. wise.\n\n[...]\n> I have not even started Partitioning of tables yet since with the\n> current framework, you have to load the tables separately into each\n> tables which means for the TPC-H data you need \"extra-logic\" to take\n> that table data and split it into each partition child table. Not stuff\n> that many people want to do by hand.\n\nI'm planning to add ddl-partitioning support to pgloader:\n http://archives.postgresql.org/pgsql-hackers/2007-12/msg00460.php\n\nThe basic idea is for pgloader to ask PostgreSQL about constraint_exclusion, \npg_inherits and pg_constraint and if pgloader recognize both the CHECK \nexpression and the datatypes involved, and if we can implement the CHECK in \npython without having to resort to querying PostgreSQL, then we can run a \nthread per partition, with as many COPY FROM running in parallel as there are \npartition involved (when threads = -1).\n\nI'm not sure this will be quicker than relying on PostgreSQL trigger or rules \nas used for partitioning currently, but ISTM Jignesh quoted § is just about \nthat.\n\nComments?\n-- \ndim", "msg_date": "Tue, 5 Feb 2008 15:06:48 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:\n> Hi,\n> \n> Le lundi 04 février 2008, Jignesh K. Shah a écrit :\n> > Single stream loader of PostgreSQL takes hours to load data. (Single\n> > stream load... wasting all the extra cores out there)\n> \n> I wanted to work on this at the pgloader level, so CVS version of pgloader is \n> now able to load data in parallel, with a python thread per configured \n> section (1 section = 1 data file = 1 table is often the case).\n> Not configurable at the moment, but I plan on providing a \"threads\" knob which \n> will default to 1, and could be -1 for \"as many thread as sections\".\n\nThat sounds great. I was just thinking of asking for that :-)\n\nI'll look at COPY FROM internals to make this faster. 
I'm looking at\nthis now to refresh my memory; I already had some plans on the shelf.\n\n> > Multiple table loads ( 1 per table) spawned via script is bit better\n> > but hits wal problems.\n> \n> pgloader will too hit the WAL problem, but it still may have its benefits, or \n> at least we will soon (you can already if you take it from CVS) be able to \n> measure if the parallel loading at the client side is a good idea perf. wise.\n\nShould be able to reduce lock contention, but not overall WAL volume.\n\n> [...]\n> > I have not even started Partitioning of tables yet since with the\n> > current framework, you have to load the tables separately into each\n> > tables which means for the TPC-H data you need \"extra-logic\" to take\n> > that table data and split it into each partition child table. Not stuff\n> > that many people want to do by hand.\n> \n> I'm planning to add ddl-partitioning support to pgloader:\n> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00460.php\n> \n> The basic idea is for pgloader to ask PostgreSQL about constraint_exclusion, \n> pg_inherits and pg_constraint and if pgloader recognize both the CHECK \n> expression and the datatypes involved, and if we can implement the CHECK in \n> python without having to resort to querying PostgreSQL, then we can run a \n> thread per partition, with as many COPY FROM running in parallel as there are \n> partition involved (when threads = -1).\n> \n> I'm not sure this will be quicker than relying on PostgreSQL trigger or rules \n> as used for partitioning currently, but ISTM Jignesh quoted § is just about \n> that.\n\nMuch better than triggers and rules, but it will be hard to get it to\nwork.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 14:24:55 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "", "msg_date": "Tue, 5 Feb 2008 14:29:12 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Simon Riggs wrote:\n> On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:\n>>\n>> Le lundi 04 février 2008, Jignesh K. Shah a écrit :\n\n>>> Multiple table loads ( 1 per table) spawned via script is bit better\n>>> but hits wal problems.\n>> pgloader will too hit the WAL problem, but it still may have its benefits, or \n>> at least we will soon (you can already if you take it from CVS) be able to \n>> measure if the parallel loading at the client side is a good idea perf. wise.\n> \n> Should be able to reduce lock contention, but not overall WAL volume.\n\nIn the case of a bulk upload to an empty table (or partition?) could you \nnot optimise the WAL away? That is, shouldn't the WAL basically be a \nsimple transformation of the on-disk blocks? You'd have to explicitly \nsync the file(s) for the table/indexes of course, and you'd need some \nwork-around for WAL shipping, but it might be worth it for you chaps \nwith large imports.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 05 Feb 2008 14:43:36 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Apologies for the blank email - mailer problems. 
I lost all my nicely \ntyped stuff, too.\n\nOn Tue, 5 Feb 2008, Dimitri Fontaine wrote:\n>> Multiple table loads ( 1 per table) spawned via script is bit better\n>> but hits wal problems.\n>\n> pgloader will too hit the WAL problem, but it still may have its benefits, or\n> at least we will soon (you can already if you take it from CVS) be able to\n> measure if the parallel loading at the client side is a good idea perf. wise.\n\nYou'll have to be careful here. Depending on the filesystem, writing large \namounts of data to two files simultaneously can results in the blocks \nbeing interleaved to some degree on the disc, which can cause performance \nproblems later on.\n\nAs for the WAL, I have an suggestion, but I'm aware that I don't know how \nPG actually does it, so you'll have to tell me if it is valid.\n\nMy impression is that the WAL is used to store writes in a transactional \nmanner, for writes that can't be written in a transactional manner \ndirectly to the data files. Hence the suggestion for restoring database \ndumps to run the whole restore in one transaction, which means that the \ntable creation is in the same transaction as loading the data into it. \nSince the table is not visible to other backends, the writes to it do not \nneed to go to the WAL, and PG is clever enough to do this.\n\nMy suggestion is to extend that slightly. If there is a large chunk of \ndata to be written to a table, which will be entirely to empty pages or \nappended to the of the data file, then there is no risk of corruption of \nexisting data, and that write could be made directly to the table. You \nwould have to write a WAL entry reserving the space in the data file, and \nthen write the data to the file. Then when that WAL entry is checkpointed, \nno work would be required.\n\nThis would improve the performance of database restores and large writes \nwhich expand the table's data file. So, would it work?\n\nMatthew\n\n-- \nIf pro is the opposite of con, what is the opposite of progress?\n", "msg_date": "Tue, 5 Feb 2008 14:51:29 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 5 Feb 2008, Richard Huxton wrote:\n> In the case of a bulk upload to an empty table (or partition?) could you not \n> optimise the WAL away?\n\nArgh. If I hadn't had to retype my email, I would have suggested that \nbefore you.\n\n;)\n\nMatthew\n\n-- \nUnfortunately, university regulations probably prohibit me from eating\nsmall children in front of the lecture class.\n -- Computer Science Lecturer\n", "msg_date": "Tue, 5 Feb 2008 14:53:25 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 2008-02-05 at 14:43 +0000, Richard Huxton wrote:\n> Simon Riggs wrote:\n> > On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:\n> >>\n> >> Le lundi 04 février 2008, Jignesh K. Shah a écrit :\n> \n> >>> Multiple table loads ( 1 per table) spawned via script is bit better\n> >>> but hits wal problems.\n> >> pgloader will too hit the WAL problem, but it still may have its benefits, or \n> >> at least we will soon (you can already if you take it from CVS) be able to \n> >> measure if the parallel loading at the client side is a good idea perf. wise.\n> > \n> > Should be able to reduce lock contention, but not overall WAL volume.\n> \n> In the case of a bulk upload to an empty table (or partition?) 
could you \n> not optimise the WAL away? That is, shouldn't the WAL basically be a \n> simple transformation of the on-disk blocks? You'd have to explicitly \n> sync the file(s) for the table/indexes of course, and you'd need some \n> work-around for WAL shipping, but it might be worth it for you chaps \n> with large imports.\n\nOnly by locking the table, which serializes access, which then slows you\ndown or at least restricts other options. Plus if you use pg_loader then\nyou'll find only the first few rows optimized and all the rest not.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 14:54:28 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Simon Riggs wrote:\n> On Tue, 2008-02-05 at 14:43 +0000, Richard Huxton wrote:\n>> Simon Riggs wrote:\n>>> On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:\n>>>> Le lundi 04 février 2008, Jignesh K. Shah a écrit :\n>>>>> Multiple table loads ( 1 per table) spawned via script is bit better\n>>>>> but hits wal problems.\n>>>> pgloader will too hit the WAL problem, but it still may have its benefits, or \n>>>> at least we will soon (you can already if you take it from CVS) be able to \n>>>> measure if the parallel loading at the client side is a good idea perf. wise.\n>>> Should be able to reduce lock contention, but not overall WAL volume.\n>> In the case of a bulk upload to an empty table (or partition?) could you \n>> not optimise the WAL away? That is, shouldn't the WAL basically be a \n>> simple transformation of the on-disk blocks? You'd have to explicitly \n>> sync the file(s) for the table/indexes of course, and you'd need some \n>> work-around for WAL shipping, but it might be worth it for you chaps \n>> with large imports.\n> \n> Only by locking the table, which serializes access, which then slows you\n> down or at least restricts other options. Plus if you use pg_loader then\n> you'll find only the first few rows optimized and all the rest not.\n\nHmm - the table-locking requirement is true enough, but why would \npg_loader cause problems after the first few rows?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 05 Feb 2008 15:05:18 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 5 Feb 2008, Simon Riggs wrote:\n>> In the case of a bulk upload to an empty table (or partition?) could you\n>> not optimise the WAL away? That is, shouldn't the WAL basically be a\n>> simple transformation of the on-disk blocks? You'd have to explicitly\n>> sync the file(s) for the table/indexes of course, and you'd need some\n>> work-around for WAL shipping, but it might be worth it for you chaps\n>> with large imports.\n>\n> Only by locking the table, which serializes access, which then slows you\n> down or at least restricts other options. Plus if you use pg_loader then\n> you'll find only the first few rows optimized and all the rest not.\n\nWhy would you need to lock the table?\n\nMatthew\n\n-- \nPicard: I was just paid a visit from Q.\nRiker: Q! Any idea what he's up to?\nPicard: No. 
He said he wanted to be \"nice\" to me.\nRiker: I'll alert the crew.\n", "msg_date": "Tue, 5 Feb 2008 15:25:52 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 2008-02-05 at 15:05 +0000, Richard Huxton wrote:\n\n> > Only by locking the table, which serializes access, which then slows you\n> > down or at least restricts other options. Plus if you use pg_loader then\n> > you'll find only the first few rows optimized and all the rest not.\n> \n> Hmm - the table-locking requirement is true enough, but why would \n> pg_loader cause problems after the first few rows?\n\nIt runs a stream of COPY statements, so only first would be optimized\nwith the \"empty table optimization\".\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 15:26:47 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Matthew wrote:\n> On Tue, 5 Feb 2008, Simon Riggs wrote:\n>>> In the case of a bulk upload to an empty table (or partition?) could you\n>>> not optimise the WAL away? That is, shouldn't the WAL basically be a\n>>> simple transformation of the on-disk blocks? You'd have to explicitly\n>>> sync the file(s) for the table/indexes of course, and you'd need some\n>>> work-around for WAL shipping, but it might be worth it for you chaps\n>>> with large imports.\n>>\n>> Only by locking the table, which serializes access, which then slows you\n>> down or at least restricts other options. Plus if you use pg_loader then\n>> you'll find only the first few rows optimized and all the rest not.\n> \n> Why would you need to lock the table?\n\nBecause you're not really writing the WAL, which means you can't let \nanyone else get their data into any of the blocks you are writing into. \nYou'd basically want to write the disk blocks then \"attach\" them in some \nway.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 05 Feb 2008 15:52:10 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 5 Feb 2008, Richard Huxton wrote:\n>> Why would you need to lock the table?\n>\n> Because you're not really writing the WAL, which means you can't let anyone \n> else get their data into any of the blocks you are writing into. You'd \n> basically want to write the disk blocks then \"attach\" them in some way.\n\nSo what's wrong with \"reserving\" the space using the WAL, then everyone \nelse will know. After all, when you write the data to the WAL, you must \nhave an idea of where it is meant to end up. My suggestion is that you go \nthrough all the motions of writing the data to the WAL, just without the \ndata bit.\n\nMatthew\n\n-- \nFailure is not an option. It comes bundled with your Microsoft product. \n -- Ferenc Mantfeld\n", "msg_date": "Tue, 5 Feb 2008 15:57:17 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Simon Riggs wrote:\n> On Tue, 2008-02-05 at 15:05 +0000, Richard Huxton wrote:\n> \n>>> Only by locking the table, which serializes access, which then slows you\n>>> down or at least restricts other options. 
Plus if you use pg_loader then\n>>> you'll find only the first few rows optimized and all the rest not.\n>> Hmm - the table-locking requirement is true enough, but why would \n>> pg_loader cause problems after the first few rows?\n> \n> It runs a stream of COPY statements, so only first would be optimized\n> with the \"empty table optimization\".\n\nAh, if you're allowing multiple commands during the process I can see \nhow it could get fiddly.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 05 Feb 2008 15:59:23 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Matthew wrote:\n> On Tue, 5 Feb 2008, Richard Huxton wrote:\n>>> Why would you need to lock the table?\n>>\n>> Because you're not really writing the WAL, which means you can't let \n>> anyone else get their data into any of the blocks you are writing \n>> into. You'd basically want to write the disk blocks then \"attach\" them \n>> in some way.\n> \n> So what's wrong with \"reserving\" the space using the WAL, then everyone \n> else will know. After all, when you write the data to the WAL, you must \n> have an idea of where it is meant to end up. My suggestion is that you \n> go through all the motions of writing the data to the WAL, just without \n> the data bit.\n\nWell, now you're looking at page-level locking for the data blocks, or \nat least something very similar. Not sure what you'd do with indexes \nthough - don't see a simple way of avoiding a large lock on a btree index.\n\nIf you reserved the space in advance that could work. But you don't know \nhow much to reserve until you've copied it in.\n\nYou could of course have a set of co-operating processes all \nbulk-loading while maintaining a table-lock outside of the those. It \nfeels like things are getting complicated then though.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 05 Feb 2008 16:09:36 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "One of the problems with \"Empty Table optimization\" is that if there are \nindexes created then it is considered as no longer empty.\n\nCommercial databases have options like \"IRRECOVERABLE\" clause along \nwith DISK PARTITIONS and CPU partitions for their bulk loaders.\n\nSo one option turns off logging, disk partitions create multiple \nprocesses to read various lines/blocks from input file and other various \nblocks to clean up the bufferpools to disk and CPU partitions to process \nthe various blocks/lines read for their format and put the rows in \nbufferpool if successful.\n\nRegards,\nJignesh\n\nSimon Riggs wrote:\n> On Tue, 2008-02-05 at 15:05 +0000, Richard Huxton wrote:\n>\n> \n>>> Only by locking the table, which serializes access, which then slows you\n>>> down or at least restricts other options. Plus if you use pg_loader then\n>>> you'll find only the first few rows optimized and all the rest not.\n>>> \n>> Hmm - the table-locking requirement is true enough, but why would \n>> pg_loader cause problems after the first few rows?\n>> \n>\n> It runs a stream of COPY statements, so only first would be optimized\n> with the \"empty table optimization\".\n>\n> \n", "msg_date": "Tue, 05 Feb 2008 11:14:39 -0500", "msg_from": "\"Jignesh K. 
Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 5 Feb 2008, Richard Huxton wrote:\n>> So what's wrong with \"reserving\" the space using the WAL, then everyone \n>> else will know. After all, when you write the data to the WAL, you must \n>> have an idea of where it is meant to end up. My suggestion is that you go \n>> through all the motions of writing the data to the WAL, just without the \n>> data bit.\n>\n> Well, now you're looking at page-level locking for the data blocks, or at \n> least something very similar. Not sure what you'd do with indexes though - \n> don't see a simple way of avoiding a large lock on a btree index.\n\nYeah, indexes would be a lot more difficult I guess, if writes to them \ninvolve changing lots of stuff around. We do most of our loads without the \nindexes present though.\n\n> If you reserved the space in advance that could work. But you don't know how \n> much to reserve until you've copied it in.\n\nWhat does the WAL do? When do you allocate space in the file for written \nrows? Is is when you write the WAL, or when you checkpoint it? If it's \nwhen you write the WAL, then you can just use the same algorithm.\n\n> You could of course have a set of co-operating processes all bulk-loading \n> while maintaining a table-lock outside of the those. It feels like things are \n> getting complicated then though.\n\nThat does sound a bit evil.\n\nYou could have different backends, each running a single transaction where \nthey create one table and load the data for it. That wouldn't need any \nchange to the backend, but it would only work for dump restores, and would \nrequire the client to be clever. I'm all for allowing this kind of \noptimisation while writing normally to the database, and for not requiring \nthe client to think too hard.\n\nMatthew\n\n-- \nAll of this sounds mildly turgid and messy and confusing... but what the\nheck. That's what programming's all about, really\n -- Computer Science Lecturer\n", "msg_date": "Tue, 5 Feb 2008 16:18:10 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Le mardi 05 février 2008, Simon Riggs a écrit :\n> It runs a stream of COPY statements, so only first would be optimized\n> with the \"empty table optimization\".\n\nThe number of rows per COPY statement is configurable, so provided you have an \nestimation of the volume to import (wc -l), you could tweak this number for \nlowering the stream (down to 1 COPY maybe)...\n\nBut basically a COPY run should be kept in memory (and we're talking about \nhigh volumes here) and in case of error processing you'd want it not that \nhuge after all...\n\n-- \ndim", "msg_date": "Tue, 5 Feb 2008 18:01:50 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Le mardi 05 février 2008, Simon Riggs a écrit :\n> I'll look at COPY FROM internals to make this faster. I'm looking at\n> this now to refresh my memory; I already had some plans on the shelf.\n\nMaybe stealing some ideas from pg_bulkload could somewhat help here?\n http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf\n\nIIRC it's mainly about how to optimize index updating while loading data, and \nI've heard complaints on the line \"this external tool has to know too much \nabout PostgreSQL internals to be trustworthy as non-core code\"... 
so...\n\n> > The basic idea is for pgloader to ask PostgreSQL about\n> > constraint_exclusion, pg_inherits and pg_constraint and if pgloader\n> > recognize both the CHECK expression and the datatypes involved, and if we\n> > can implement the CHECK in python without having to resort to querying\n> > PostgreSQL, then we can run a thread per partition, with as many COPY\n> > FROM running in parallel as there are partition involved (when threads =\n> > -1).\n> >\n> > I'm not sure this will be quicker than relying on PostgreSQL trigger or\n> > rules as used for partitioning currently, but ISTM Jignesh quoted § is\n> > just about that.\n>\n> Much better than triggers and rules, but it will be hard to get it to\n> work.\n\nWell, I'm thinking about providing a somewhat modular approach where pgloader \ncode is able to recognize CHECK constraints, load a module registered to the \noperator and data types, then use it.\nThe modules and their registration should be done at the configuration level, \nI'll provide some defaults and users will be able to add their code, the same \nway on-the-fly reformat modules are handled now.\n\nThis means that I'll be able to provide (hopefully) quickly the basic cases \n(CHECK on dates >= x and < y), numeric ranges, etc, and users will be able to \ncare about more complex setups.\n\nWhen the constraint won't match any configured pgloader exclusion module, the \ntrigger/rule code will get used (COPY will go to the main table), and when \nthe python CHECK implementation will be wrong (worst case) PostgreSQL will \nreject the data and pgloader will fill your reject data and log files. And \nyou're back to debugging your python CHECK implementation...\n\nAll of this is only a braindump as of now, and maybe quite an optimistic \none... but baring any 'I know this can't work' objection that's what I'm \ngonna try to implement for next pgloader version.\n\nThanks for comments, input is really appreciated !\n-- \ndim", "msg_date": "Tue, 5 Feb 2008 18:15:25 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 2008-02-05 at 18:15 +0100, Dimitri Fontaine wrote:\n> Le mardi 05 février 2008, Simon Riggs a écrit :\n> > I'll look at COPY FROM internals to make this faster. I'm looking at\n> > this now to refresh my memory; I already had some plans on the shelf.\n> \n> Maybe stealing some ideas from pg_bulkload could somewhat help here?\n> http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf\n\n> IIRC it's mainly about how to optimize index updating while loading data, and \n> I've heard complaints on the line \"this external tool has to know too much \n> about PostgreSQL internals to be trustworthy as non-core code\"... so...\n\nYeh, the batch index updates are a cool feature. Should be able to do\nthat internally also.\n\nNot going to try the no-WAL route again though. If we can get it running\nefficiently and in parallel, then that will be OK. \n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 17:32:33 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "\nThat sounds cool to me too..\n\nHow much work is to make pg_bulkload to work on 8.3? 
An Integrated \nversion is certainly more beneficial.\n\nSpecially I think it will also help for other setups like TPC-E too \nwhere this is a problem.\n\nRegards,\nJignesh\n\n\n\nSimon Riggs wrote:\n> On Tue, 2008-02-05 at 18:15 +0100, Dimitri Fontaine wrote:\n> \n>> Le mardi 05 février 2008, Simon Riggs a écrit :\n>> \n>>> I'll look at COPY FROM internals to make this faster. I'm looking at\n>>> this now to refresh my memory; I already had some plans on the shelf.\n>>> \n>> Maybe stealing some ideas from pg_bulkload could somewhat help here?\n>> http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf\n>> \n>\n> \n>> IIRC it's mainly about how to optimize index updating while loading data, and \n>> I've heard complaints on the line \"this external tool has to know too much \n>> about PostgreSQL internals to be trustworthy as non-core code\"... so...\n>> \n>\n> Yeh, the batch index updates are a cool feature. Should be able to do\n> that internally also.\n>\n> Not going to try the no-WAL route again though. If we can get it running\n> efficiently and in parallel, then that will be OK. \n>\n> \n", "msg_date": "Tue, 05 Feb 2008 13:47:13 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 2008-02-05 at 13:47 -0500, Jignesh K. Shah wrote:\n> That sounds cool to me too..\n> \n> How much work is to make pg_bulkload to work on 8.3? An Integrated \n> version is certainly more beneficial.\n\n> Specially I think it will also help for other setups like TPC-E too \n> where this is a problem.\n \nIf you don't write WAL then you can lose all your writes in a crash.\nThat issue is surmountable on a table with no indexes, or even\nconceivably with one monotonically ascending index. With other indexes\nif we crash then we have a likely corrupt index.\n\nFor most production systems I'm aware of, losing an index on a huge\ntable is not anything you'd want to trade for performance. Assuming\nyou've ever been knee-deep in it on a real server.\n\nMaybe we can have a \"load mode\" for a table where we skip writing any\nWAL, but if we crash we just truncate the whole table to nothing? Issue\na WARNING if we enable this mode while any data in table. I'm nervous of\nit, but maybe people really want it?\n\nI don't really want to invent ext2 all over again, so we have to run an\nfsck on a table of we crash while loading. My concern is that many\npeople would choose that then blame us for delivering unreliable\nsoftware. e.g. direct path loader on Oracle used to corrupt a PK index\nif you loaded duplicate rows with it (whether it still does I couldn't\ncare). That kind of behaviour is simply incompatible with production\nusage, even if it does good benchmark.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 19:48:26 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Dimitri Fontaine wrote:\n> Le mardi 05 f�vrier 2008, Simon Riggs a �crit :\n>> I'll look at COPY FROM internals to make this faster. 
I'm looking at\n>> this now to refresh my memory; I already had some plans on the shelf.\n> \n> Maybe stealing some ideas from pg_bulkload could somewhat help here?\n> http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf\n> \n> IIRC it's mainly about how to optimize index updating while loading data, and \n> I've heard complaints on the line \"this external tool has to know too much \n> about PostgreSQL internals to be trustworthy as non-core code\"... so...\n\nI've been thinking of looking into that as well. The basic trick \npg_bulkload is using is to populate the index as the data is being \nloaded. There's no fundamental reason why we couldn't do that internally \nin COPY. Triggers or constraints that access the table being loaded \nwould make it impossible, but we should be able to detect that and fall \nback to what we have now.\n\nWhat I'm basically thinking about is to modify the indexam API of \nbuilding a new index, so that COPY would feed the tuples to the indexam, \ninstead of the indexam opening and scanning the heap. The b-tree indexam \nwould spool the tuples into a tuplesort as the COPY progresses, and \nbuild the index from that at the end as usual.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 05 Feb 2008 20:06:17 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Commercial Db bulk loaders work the same way.. they give you an option \nas a fast loader provided in case of error, the whole table is \ntruncated. This I think also has real life advantages where PostgreSQL \nis used as datamarts which are recreated every now and then from other \nsystems and they want fast loaders. So its not just the benchmarking \nfolks like me that will take advantage of such features. INFACT I have \nseen that they force the clause \"REPLACE TABLE\" in the sense that will \ninfact truncate the table before loading so there is no confusion what \nhappens to the original data in the table and only then it avoids the logs.\n\n\nto be honest, its not the WAL Writes to the disk that I am worried \nabout.. According to my tests, async_commit is coming pretty close to \nsync=off and solves the WALWriteLock contention. We should maybe just \nfocus on making it more efficient which I think also involves \nWALInsertLock that may not be entirely efficient.\n\n\nAlso all changes have to be addon options and not replacement for \nexisting loads, I totally agree to that point.. The guys in production \nsupport don't even like optimizer query plan changes, forget corrupt \nindex. (I have spent two days in previous role trying to figure out why \na particular query plan on another database changed in production.)\n\n\n\n\n\n\nSimon Riggs wrote:\n> On Tue, 2008-02-05 at 13:47 -0500, Jignesh K. Shah wrote:\n> \n>> That sounds cool to me too..\n>>\n>> How much work is to make pg_bulkload to work on 8.3? An Integrated \n>> version is certainly more beneficial.\n>> \n>\n> \n>> Specially I think it will also help for other setups like TPC-E too \n>> where this is a problem.\n>> \n> \n> If you don't write WAL then you can lose all your writes in a crash.\n> That issue is surmountable on a table with no indexes, or even\n> conceivably with one monotonically ascending index. 
With other indexes\n> if we crash then we have a likely corrupt index.\n>\n> For most production systems I'm aware of, losing an index on a huge\n> table is not anything you'd want to trade for performance. Assuming\n> you've ever been knee-deep in it on a real server.\n>\n> Maybe we can have a \"load mode\" for a table where we skip writing any\n> WAL, but if we crash we just truncate the whole table to nothing? Issue\n> a WARNING if we enable this mode while any data in table. I'm nervous of\n> it, but maybe people really want it?\n>\n> I don't really want to invent ext2 all over again, so we have to run an\n> fsck on a table of we crash while loading. My concern is that many\n> people would choose that then blame us for delivering unreliable\n> software. e.g. direct path loader on Oracle used to corrupt a PK index\n> if you loaded duplicate rows with it (whether it still does I couldn't\n> care). That kind of behaviour is simply incompatible with production\n> usage, even if it does good benchmark.\n>\n> \n", "msg_date": "Tue, 05 Feb 2008 15:45:33 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Hi Heikki,\n\nIs there a way such an operation can be spawned as a worker process? \nGenerally during such loading - which most people will do during \n\"offpeak\" hours I expect additional CPU resources available. By \ndelegating such additional work to worker processes, we should be able \nto capitalize on additional cores in the system.\n\nEven if it is a single core, the mere fact that the loading process will \neventually wait for a read from the input file which cannot be \nnon-blocking, the OS can timeslice it well for the second process to use \nthose wait times for the index population work.\n\nWhat do you think?\n\n\nRegards,\nJignesh\n\n\nHeikki Linnakangas wrote:\n> Dimitri Fontaine wrote:\n>> Le mardi 05 f�vrier 2008, Simon Riggs a �crit :\n>>> I'll look at COPY FROM internals to make this faster. I'm looking at\n>>> this now to refresh my memory; I already had some plans on the shelf.\n>>\n>> Maybe stealing some ideas from pg_bulkload could somewhat help here?\n>> \n>> http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf \n>>\n>>\n>> IIRC it's mainly about how to optimize index updating while loading \n>> data, and I've heard complaints on the line \"this external tool has \n>> to know too much about PostgreSQL internals to be trustworthy as \n>> non-core code\"... so...\n>\n> I've been thinking of looking into that as well. The basic trick \n> pg_bulkload is using is to populate the index as the data is being \n> loaded. There's no fundamental reason why we couldn't do that \n> internally in COPY. Triggers or constraints that access the table \n> being loaded would make it impossible, but we should be able to detect \n> that and fall back to what we have now.\n>\n> What I'm basically thinking about is to modify the indexam API of \n> building a new index, so that COPY would feed the tuples to the \n> indexam, instead of the indexam opening and scanning the heap. The \n> b-tree indexam would spool the tuples into a tuplesort as the COPY \n> progresses, and build the index from that at the end as usual.\n>\n", "msg_date": "Tue, 05 Feb 2008 15:50:35 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Jignesh K. 
Shah wrote:\n> Is there a way such an operation can be spawned as a worker process? \n> Generally during such loading - which most people will do during \n> \"offpeak\" hours I expect additional CPU resources available. By \n> delegating such additional work to worker processes, we should be able \n> to capitalize on additional cores in the system.\n\nHmm. You do need access to shared memory, locks, catalogs, and to run \nfunctions etc, so I don't think it's significantly easier than using \nmultiple cores for COPY itself.\n\n> Even if it is a single core, the mere fact that the loading process will \n> eventually wait for a read from the input file which cannot be \n> non-blocking, the OS can timeslice it well for the second process to use \n> those wait times for the index population work.\n\nThat's an interesting point.\n\n> What do you think?\n> \n> \n> Regards,\n> Jignesh\n> \n> \n> Heikki Linnakangas wrote:\n>> Dimitri Fontaine wrote:\n>>> Le mardi 05 f�vrier 2008, Simon Riggs a �crit :\n>>>> I'll look at COPY FROM internals to make this faster. I'm looking at\n>>>> this now to refresh my memory; I already had some plans on the shelf.\n>>>\n>>> Maybe stealing some ideas from pg_bulkload could somewhat help here?\n>>> \n>>> http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf \n>>>\n>>>\n>>> IIRC it's mainly about how to optimize index updating while loading \n>>> data, and I've heard complaints on the line \"this external tool has \n>>> to know too much about PostgreSQL internals to be trustworthy as \n>>> non-core code\"... so...\n>>\n>> I've been thinking of looking into that as well. The basic trick \n>> pg_bulkload is using is to populate the index as the data is being \n>> loaded. There's no fundamental reason why we couldn't do that \n>> internally in COPY. Triggers or constraints that access the table \n>> being loaded would make it impossible, but we should be able to detect \n>> that and fall back to what we have now.\n>>\n>> What I'm basically thinking about is to modify the indexam API of \n>> building a new index, so that COPY would feed the tuples to the \n>> indexam, instead of the indexam opening and scanning the heap. The \n>> b-tree indexam would spool the tuples into a tuplesort as the COPY \n>> progresses, and build the index from that at the end as usual.\n>>\n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 05 Feb 2008 21:45:52 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 2008-02-05 at 15:50 -0500, Jignesh K. Shah wrote:\n\n> Is there a way such an operation can be spawned as a worker process? \n> Generally during such loading - which most people will do during \n> \"offpeak\" hours I expect additional CPU resources available. 
By \n> delegating such additional work to worker processes, we should be able \n> to capitalize on additional cores in the system.\n> \n> Even if it is a single core, the mere fact that the loading process will \n> eventually wait for a read from the input file which cannot be \n> non-blocking, the OS can timeslice it well for the second process to use \n> those wait times for the index population work.\n\nIf Dimitri is working on parallel load, why bother?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Tue, 05 Feb 2008 22:00:03 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Tue, 5 Feb 2008, Simon Riggs wrote:\n\n> On Tue, 2008-02-05 at 15:50 -0500, Jignesh K. Shah wrote:\n>>\n>> Even if it is a single core, the mere fact that the loading process will\n>> eventually wait for a read from the input file which cannot be\n>> non-blocking, the OS can timeslice it well for the second process to use\n>> those wait times for the index population work.\n>\n> If Dimitri is working on parallel load, why bother?\n\npgloader is a great tool for a lot of things, particularly if there's any \nchance that some of your rows will get rejected. But the way things pass \nthrough the Python/psycopg layer made it uncompetative (more than 50% \nslowdown) against the straight COPY path from a rows/second perspective \nthe last time (V2.1.0?) I did what I thought was a fair test of it (usual \ncaveat of \"with the type of data I was loading\"). Maybe there's been some \ngigantic improvement since then, but it's hard to beat COPY when you've \ngot an API layer or two in the middle.\n\nI suspect what will end up happening is that a parallel loading pgloader \nwill scale something like this:\n\n1 CPU: Considerably slower than COPY\n2-3 CPUs: Close to even with COPY\n4+ CPUs: Faster than COPY\n\nMaybe I'm wrong, but I wouldn't abandon looking into another approach \nuntil that territory is mapped out a bit better.\n\nGiven the very large number of dual-core systems out there now relative to \nthose with more, optimizing the straight COPY path with any way to take \nadvantage of even one more core to things like index building is well \nworth doing. Heikki's idea sounded good to me regardless, and if that can \nbe separated out enough to get a second core into the index building at \nthe same time so much the better.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 5 Feb 2008 22:35:09 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Hi,\n\nOn Feb 6, 2008 9:05 AM, Greg Smith <[email protected]> wrote:\n\n> On Tue, 5 Feb 2008, Simon Riggs wrote:\n>\n> > On Tue, 2008-02-05 at 15:50 -0500, Jignesh K. Shah wrote:\n> >>\n> >> Even if it is a single core, the mere fact that the loading process\n> will\n> >> eventually wait for a read from the input file which cannot be\n> >> non-blocking, the OS can timeslice it well for the second process to\n> use\n> >> those wait times for the index population work.\n> >\n> > If Dimitri is working on parallel load, why bother?\n>\n> pgloader is a great tool for a lot of things, particularly if there's any\n> chance that some of your rows will get rejected. 
But the way things pass\n> through the Python/psycopg layer made it uncompetative (more than 50%\n> slowdown) against the straight COPY path from a rows/second perspective\n> the last time (V2.1.0?) I did what I thought was a fair test of it (usual\n> caveat of \"with the type of data I was loading\"). Maybe there's been some\n> gigantic improvement since then, but it's hard to beat COPY when you've\n> got an API layer or two in the middle.\n>\n\nI think, its time now that we should jazz COPY up a bit to include all the\ndiscussed functionality. Heikki's batch-indexing idea is pretty useful too.\nAnother thing that pg_bulkload does is it directly loads the tuples into the\nrelation by constructing the tuples and writing them directly to the\nphysical file corresponding to the involved relation, bypassing the engine\ncompletely (ofcourse the limitations that arise out of it are not supporting\nrules, triggers, constraints, default expression evaluation etc). ISTM, we\ncould optimize the COPY code to try to do direct loading too (not\nnecessarily as done by pg_bulkload) to speed it up further in certain cases.\n\n\nAnother thing that we should add to COPY is the ability to continue data\nload across errors as was discussed recently on hackers some time back too.\n\nRegards,\nNikhils\n-- \nEnterpriseDB http://www.enterprisedb.com\n
", "msg_date": "Wed, 6 Feb 2008 13:08:34 +0530", "msg_from": "NikhilS <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Le mercredi 06 février 2008, Greg Smith a écrit :\n> pgloader is a great tool for a lot of things, particularly if there's any\n> chance that some of your rows will get rejected. But the way things pass\n> through the Python/psycopg layer made it uncompetative (more than 50%\n> slowdown) against the straight COPY path from a rows/second perspective\n> the last time (V2.1.0?) \n\nI've yet to add in the psycopg wrapper Marko wrote for skytools: at the moment \nI'm using the psycopg1 interface even when psycopg2 is used, and it seems the \nnew version has some great performance improvements. I just didn't bother \nuntil now thinking this wouldn't affect COPY.\n\n> I did what I thought was a fair test of it (usual \n> caveat of \"with the type of data I was loading\"). Maybe there's been some\n> gigantic improvement since then, but it's hard to beat COPY when you've\n> got an API layer or two in the middle.\n\nDid you compare to COPY or \\copy? I'd expect psycopg COPY api not to be that \nmore costly than psql one, after all.\nWhere pgloader is really left behind (in term of tuples inserted per second) \ncompared to COPY is when it has to jiggle a lot with the data, I'd say \n(reformat, reorder, add constants, etc). But I've tried to design it so that \nwhen not configured to arrange (massage?) the data, the code path is the \nsimplest possible.\n\nDo you want to test pgloader again with Marko psycopgwrapper code to see if \nthis helps? If yes I'll arrange to push it to CVS ASAP.\n\nMaybe at the end of this PostgreSQL backend code will be smarter than pgloader \n(wrt error handling and data massaging) and we'll be able to drop the \nproject, but in the meantime I'll try my best to have pgloader as fast as \npossible :)\n-- \ndim", "msg_date": "Wed, 6 Feb 2008 11:29:42 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Hi,\n\nI've been thinking about this topic some more, and as I don't know when I'll \nbe able to go and implement it I'd want to publish the ideas here. 
This way \nI'll be able to find them again :)\n\nLe mardi 05 février 2008, Dimitri Fontaine a écrit :\n> Le mardi 05 février 2008, Simon Riggs a écrit :\n> > Much better than triggers and rules, but it will be hard to get it to\n> > work.\n>\n> Well, I'm thinking about providing a somewhat modular approach where\n> pgloader code is able to recognize CHECK constraints, load a module\n> registered to the operator and data types, then use it.\n\nHere's how I think I'm gonna implement it:\n\nUser level configuration\n-=-=-=-=-=-=-=-=-=-\n\nAt user level, you will have to add a constraint_exclusion = on parameter to \npgloader section configuration for it to bother checking if the destination \ntable has some children etc.\nYou'll need to provide also a global ce_path parameter (where to find user \npython constraint exclusion modules) and a ce_modules parameter for each \nsection where constraint_exclusion = on:\n ce_modules = columnA:module:class, columnB:module:class\n\nAs the ce_path could point to any number of modules where a single type is \nsupported by several modules, I'll let the user choose which module to use.\n\nConstraint exclusion modules\n-=-=-=-=-=-=-=-=-=-=-=-=-\n\nThe modules will provide one or several class(es) (kind of a packaging issue), \neach one will have to register which datatypes and operators they know about. \nHere's some pseudo-code of a module, which certainly is the best way to \nexpress a code design idea:\n\nclass MyCE:\n def __init__(self, operator, constant, cside='r'):\n \"\"\" CHECK ( col operator constant ) => cside = 'r', could be 'l' \"\"\"\n ...\n\n @classmethod\n def support_type(cls, type):\n return type in ['integer', 'bigint', 'smallint', 'real', 'double']\n\n @classmethod\n def support_operator(cls, op):\n return op in ['=', '>', '<', '>=', '<=', '%']\n\n def check(self, op, data):\n if op == '>' : return self.gt(data)\n ...\n\n def gt(self, data):\n if cside == 'l':\n return self.constant > data\n elif cside == 'r':\n return data > self.constant\n\nThis way pgloader will be able to support any datatype (user datatype like \nIP4R included) and operator (@@, ~<= or whatever). For pgloader to handle a \nCHECK() constraint, though, it'll have to be configured to use a CE class \nsupporting the used operators and datatypes.\n\nPGLoader constraint exclusion support\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\nThe CHECK() constraint being a tree of check expressions[*] linked by logical \noperators, pgloader will have to build some logic tree of MyCE (user CE \nmodules) and evaluate all the checks in order to be able to choose the input \nline partition.\n\n[*]: check((a % 10) = 1) makes an expression tree containing 2 check nodes\n\nAfter having parsed pg_constraint.consrc (not conbin which seems too much an \ninternal dump for using it from user code) and built a CHECK tree for each \npartition, pgloader will try to decide if it's about range partitioning (most \ncommon case). \n\nIf each partition CHECK tree is AND((a>=b, a<c) or a variation of it, we have \nrange partitioning. Then surely we can optimize the code to run to choose the \npartition where to COPY data to and still use the module operator \nimplementation, e.g. 
making a binary search on a partitions limits tree.\n\nIf you want some other widely used (or not) partitioning scheme to be \nrecognized and optimized by pgloader, just tell me and we'll see about it :)\nHaving this step as a user module seems overkill at the moment, though.\n\nMulti-Threading behavior and CE support\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\nNow, pgloader will be able to run N threads, each one loading some data to a \npartitionned child-table target. N will certainly be configured depending on \nthe number of server cores and not depending on the partition numbers...\n\nSo what do we do when reading a tuple we want to store in a partition which \nhas no dedicated Thread started yet, and we already have N Threads running?\nI'm thinking about some LRU(Thread) to choose a Thread to terminate (launch \nCOPY with current buffer and quit) and start a new one for the current \npartition target.\nHopefully there won't be such high values of N that the LRU is a bad choice \nper see, and the input data won't be so messy to have to stop/start Threads \nat each new line.\n\nComments welcome, regards,\n-- \ndim", "msg_date": "Wed, 6 Feb 2008 12:27:56 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "On Wed, 2008-02-06 at 12:27 +0100, Dimitri Fontaine wrote:\n> Multi-Threading behavior and CE support\n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> \n> Now, pgloader will be able to run N threads, each one loading some\n> data to a \n> partitionned child-table target. N will certainly be configured\n> depending on \n> the number of server cores and not depending on the partition\n> numbers...\n> \n> So what do we do when reading a tuple we want to store in a partition\n> which \n> has no dedicated Thread started yet, and we already have N Threads\n> running?\n> I'm thinking about some LRU(Thread) to choose a Thread to terminate\n> (launch \n> COPY with current buffer and quit) and start a new one for the\n> current \n> partition target.\n> Hopefully there won't be such high values of N that the LRU is a bad\n> choice \n> per see, and the input data won't be so messy to have to stop/start\n> Threads \n> at each new line.\n\nFor me, it would be good to see a --parallel=n parameter that would\nallow pg_loader to distribute rows in \"round-robin\" manner to \"n\"\ndifferent concurrent COPY statements. i.e. a non-routing version. Making\nthat work well, whilst continuing to do error-handling seems like a\nchallenge, but a very useful goal.\n\nAdding intelligence to the row distribution may be technically hard but\nmay also simply move the bottleneck onto pg_loader. We may need multiple\nthreads in pg_loader, or we may just need multiple sessions from\npg_loader. Experience from doing the non-routing parallel version may\nhelp in deciding whether to go for the routing version.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Wed, 06 Feb 2008 11:45:24 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "Le mercredi 06 février 2008, Simon Riggs a écrit :\n> For me, it would be good to see a --parallel=n parameter that would\n> allow pg_loader to distribute rows in \"round-robin\" manner to \"n\"\n> different concurrent COPY statements. i.e. 
a non-routing version.\n\nWhat happen when you want at most N parallel Threads and have several sections \nconfigured: do you want pgloader to serialize sections loading (often there's \none section per table, sometimes different sections target the same table) \nbut parallelise each section loading?\n\nI'm thinking we should have a global max_threads knob *and* and per-section \nmax_thread one if we want to go this way, but then multi-threaded sections \nwill somewhat fight against other sections (multi-threaded or not) for \nthreads to use.\n\nSo I'll also add a parameter to configure how many (max) sections to load in \nparallel at any time.\n\nWe'll then have (default values presented):\nmax_threads = 1\nmax_parallel_sections = 1\nsection_threads = -1\n\nThe section_threads parameter would be overloadable at section level but would \nneed to stay <= max_threads (if not, discarded, warning issued). When \nsection_threads is -1, pgloader tries to have the higher number of them \npossible, still in the max_threads global limit.\nIf max_parallel_section is -1, pgloader start a new thread per each new \nsection, maxing out at max_threads, then it waits for a thread to finish \nbefore launching a new section loading.\n\nIf you have N max_threads and max_parallel_sections = section_threads = -1, \nthen we'll see some kind of a fight between new section threads and in \nsection thread (the parallel non-routing COPY behaviour). But then it's a \nuser choice.\n\nAdding in it the Constraint_Exclusion support would not mess it up, but it'll \nhave some interest only when section_threads != 1 and max_threads > 1.\n\n> Making \n> that work well, whilst continuing to do error-handling seems like a\n> challenge, but a very useful goal.\n\nQuick tests showed me python threading model allows for easily sharing of \nobjects between several threads, I don't think I'll need to adjust my reject \ncode when going per-section multi-threaded. Just have to use a semaphore \nobject to continue rejected one line at a time. Not that complex if reliable.\n\n> Adding intelligence to the row distribution may be technically hard but\n> may also simply move the bottleneck onto pg_loader. We may need multiple\n> threads in pg_loader, or we may just need multiple sessions from\n> pg_loader. Experience from doing the non-routing parallel version may\n> help in deciding whether to go for the routing version.\n\nIf non-routing per-section multi-threading is a user request and not that hard \nto implement (thanks to python), that sounds a good enough reason for me to \nprovide it :)\n\nI'll keep you (and the list) informed as soon as I'll have the code to play \nwith.\n-- \ndim", "msg_date": "Wed, 6 Feb 2008 13:36:51 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "On Wed, 6 Feb 2008, Dimitri Fontaine wrote:\n\n> Did you compare to COPY or \\copy?\n\nCOPY. If you're loading a TB, if you're smart it's going onto the server \nitself if it all possible and loading directly from there. 
Would probably \nget a closer comparision against psql \\copy, but recognize you're always \ngoing to be compared against the best server-side copy method available.\n\n> Do you want to test pgloader again with Marko psycopgwrapper code to see if\n> this helps?\n\nWouldn't have time myself for at least a month (probably not until after \nthe East convention) so don't go making commits on my behalf.\n\n> Maybe at the end of this PostgreSQL backend code will be smarter than pgloader\n> (wrt error handling and data massaging) and we'll be able to drop the\n> project\n\nThere are way too many data massaging cases I never expect the backend \nwill handle that pgloader does a great job of right now, and I think there \nwill always be a niche for a tool like this.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 6 Feb 2008 10:40:20 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Wed, 6 Feb 2008, Simon Riggs wrote:\n\n> For me, it would be good to see a --parallel=n parameter that would\n> allow pg_loader to distribute rows in \"round-robin\" manner to \"n\"\n> different concurrent COPY statements. i.e. a non-routing version.\n\nLet me expand on this. In many of these giant COPY situations the \nbottleneck is plain old sequential I/O to a single process. You can \nalmost predict how fast the rows will load using dd. Having a process \nthat pulls rows in and distributes them round-robin is good, but it won't \ncrack that bottleneck. The useful approaches I've seen for other \ndatabases all presume that the data files involved are large enough that \non big hardware, you can start multiple processes running at different \npoints in the file and beat anything possible with a single reader.\n\nIf I'm loading a TB file, odds are good I can split that into 4 or more \nvertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders \nat once, and get way more than 1 disk worth of throughput reading. You \nhave to play with the exact number because if you push the split too far \nyou introduce seek slowdown instead of improvements, but that's the basic \ndesign I'd like to see one day. It's not parallel loading that's useful \nfor the cases I'm thinking about until something like this comes around.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 6 Feb 2008 10:56:03 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "Hi Greg,\n\nOn 2/6/08 7:56 AM, \"Greg Smith\" <[email protected]> wrote:\n\n> If I'm loading a TB file, odds are good I can split that into 4 or more\n> vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders\n> at once, and get way more than 1 disk worth of throughput reading. You\n> have to play with the exact number because if you push the split too far\n> you introduce seek slowdown instead of improvements, but that's the basic\n> design I'd like to see one day. It's not parallel loading that's useful\n> for the cases I'm thinking about until something like this comes around.\n\nJust load 4 relfiles. 
You have to be able to handle partial relfiles, which\nchanges the storage mgmt a bit, but the benefits are easier to achieve.\n\n- Luke\n\n", "msg_date": "Wed, 06 Feb 2008 08:17:42 -0800", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "\n\nGreg Smith wrote:\n> On Wed, 6 Feb 2008, Simon Riggs wrote:\n>\n>> For me, it would be good to see a --parallel=n parameter that would\n>> allow pg_loader to distribute rows in \"round-robin\" manner to \"n\"\n>> different concurrent COPY statements. i.e. a non-routing version.\n>\n> Let me expand on this. In many of these giant COPY situations the \n> bottleneck is plain old sequential I/O to a single process. You can \n> almost predict how fast the rows will load using dd. Having a process \n> that pulls rows in and distributes them round-robin is good, but it \n> won't crack that bottleneck. The useful approaches I've seen for \n> other databases all presume that the data files involved are large \n> enough that on big hardware, you can start multiple processes running \n> at different points in the file and beat anything possible with a \n> single reader.\n>\n> If I'm loading a TB file, odds are good I can split that into 4 or \n> more vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start \n> 4 loaders at once, and get way more than 1 disk worth of throughput \n> reading. You have to play with the exact number because if you push \n> the split too far you introduce seek slowdown instead of improvements, \n> but that's the basic design I'd like to see one day. It's not \n> parallel loading that's useful for the cases I'm thinking about until \n> something like this comes around.\n>\n\nSome food for thought here: Most BI Type applications which does data \nconversions/cleansing also might end up sorting the data before its \nloaded into a database so starting parallel loaders at Total different \npoints ruins that effort. A More pragmatic approach will be to read the \nnext rows from the input file So if there are N parallel streams then \neach one is offset by 1 from each other and jumps by N rows so the seeks \nare pretty much narrrowed down to few rows (ideally 1) instead of \njumping 1/Nth rows every time a read happens.\n\nFor example to replicate this with dd to see the impact use a big file \nand use the seek option and blocksizes .. Somebody out here once had \ndone that test and showed that \"seek time\" on the file being read is \nreduced significantly and depending on the file system it does \nintelligent prefetching (which unfortunately UFS in Solaris does not do \nbest by default) all the reads for the next stream will already be in \nmemory.\n\n\n\nRegards,\nJignesh\n\n", "msg_date": "Wed, 06 Feb 2008 11:34:14 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "Le mercredi 06 février 2008, Greg Smith a écrit :\n> COPY. If you're loading a TB, if you're smart it's going onto the server\n> itself if it all possible and loading directly from there. Would probably\n> get a closer comparision against psql \\copy, but recognize you're always\n> going to be compared against the best server-side copy method available.\n\nFair enough on your side, even if I can't expect an external tool using \nnetwork protocol to compete with backend reading a local file. 
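(For reference, the client side path is just the psycopg COPY api, something like this minimal sketch -- psycopg2 shown here for the example, and the connection string, table and file names are invented:

import psycopg2

conn = psycopg2.connect('dbname=test')
curs = conn.cursor()
data = open('/tmp/data.tsv')
# stream the rows to the server over the network COPY protocol
curs.copy_from(data, 'target_table')
conn.commit()
data.close()
conn.close()

pgloader adds its buffering, reformatting and reject handling around that call, which is where the difference with a plain server-side COPY shows up.)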
I wanted to \nmake sure the 50% slowdown was not only due to my code being that bad.\n\n> There are way too many data massaging cases I never expect the backend\n> will handle that pgloader does a great job of right now, and I think there\n> will always be a niche for a tool like this.\n\nLet's try to continue improving the tool then!\n-- \ndim", "msg_date": "Wed, 6 Feb 2008 18:31:48 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Le mercredi 06 février 2008, Greg Smith a écrit :\n> If I'm loading a TB file, odds are good I can split that into 4 or more\n> vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders\n> at once, and get way more than 1 disk worth of throughput reading.\n\npgloader already supports starting at any input file line number, and limit \nitself to any number of reads:\n\n -C COUNT, --count=COUNT\n number of input lines to process\n -F FROMCOUNT, --from=FROMCOUNT\n number of input lines to skip\n\nSo you could already launch 4 pgloader processes with the same configuration \nfine but different command lines arguments. It there's interest/demand, it's \neasy enough for me to add those parameters as file configuration knobs too.\n\nStill you have to pay for client to server communication instead of having the \nbackend read the file locally, but now maybe we begin to compete?\n\nRegards,\n-- \ndim", "msg_date": "Wed, 6 Feb 2008 18:37:41 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "Le Wednesday 06 February 2008 18:37:41 Dimitri Fontaine, vous avez écrit :\n> Le mercredi 06 février 2008, Greg Smith a écrit :\n> > If I'm loading a TB file, odds are good I can split that into 4 or more\n> > vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4\n> > loaders at once, and get way more than 1 disk worth of throughput\n> > reading.\n>\n> pgloader already supports starting at any input file line number, and limit\n> itself to any number of reads:\n\nIn fact, the -F option works by having pgloader read the given number of lines \nbut skip processing them, which is not at all what Greg is talking about here \nI think.\n\nPlus, I think it would be easier for me to code some stat() then lseek() then \nread() into the pgloader readers machinery than to change the code \narchitecture to support a separate thread for the file reader.\n\nGreg, what would you think of a pgloader which will separate file reading \nbased on file size as given by stat (os.stat(file)[ST_SIZE]) and number of \nthreads: we split into as many pieces as section_threads section config \nvalue.\n\nThis behaviour won't be available for sections where type = text and \nfield_count(*) is given, cause in this case I don't see how pgloader could \nreliably recognize a new logical line beginning and start processing here.\nIn other cases, a logical line is a physical line, so we start after first \nnewline met from given lseek start position, and continue reading after the \nlast lseek position until a newline.\n\n*:http://pgloader.projects.postgresql.org/#_text_format_configuration_parameters\n\nComments?\n-- \ndim", "msg_date": "Wed, 6 Feb 2008 20:59:04 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "On Wed, 6 Feb 2008, Dimitri 
Fontaine wrote:\n\n> In fact, the -F option works by having pgloader read the given number of lines\n> but skip processing them, which is not at all what Greg is talking about here\n> I think.\n\nYeah, that's not useful.\n\n> Greg, what would you think of a pgloader which will separate file reading\n> based on file size as given by stat (os.stat(file)[ST_SIZE]) and number of\n> threads: we split into as many pieces as section_threads section config\n> value.\n\nNow you're talking. Find a couple of split points that way, fine-tune the \nboundaries a bit so they rest on line termination points, and off you go. \nDon't forget that the basic principle here implies you'll never know until \nyou're done just how many lines were really in the file. When thread#1 is \nrunning against chunk#1, it will never have any idea what line chunk#2 \nreally started at until it reaches there, at which point it's done and \nthat information isn't helpful anymore.\n\nYou have to stop thinking in terms of lines for this splitting; all you \ncan do is split the file into useful byte sections and then count the \nlines within them as you go. Anything else requires a counting scan of \nthe file and such a sequential read is exactly what can't happen \n(especially not more than once), it just takes too long.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 6 Feb 2008 18:36:13 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "Le jeudi 07 février 2008, Greg Smith a écrit :\n>Le mercredi 06 février 2008, Dimitri Fontaine a écrit :\n>> In other cases, a logical line is a physical line, so we start after first\n>> newline met from given lseek start position, and continue reading after the\n>> last lseek position until a newline.\n>\n> Now you're talking. Find a couple of split points that way, fine-tune the\n> boundaries a bit so they rest on line termination points, and off you go.\n\nI was thinking of not even reading the file content from the controller \nthread, just decide splitting points in bytes (0..ST_SIZE/4 - \nST_SIZE/4+1..2*ST_SIZE/4 etc) and let the reading thread fine-tune by \nbeginning to process input after having read first newline, etc.\n\nAnd while we're still at the design board, I'm also thinking to add a \nper-section parameter (with a global default value possible) \nsplit_file_reading which defaults to False, and which you'll have to set True \nfor pgloader to behave the way we're talking about.\n\nWhen split_file_reading = False and section_threads != 1 pgloader will have to \nmanage several processing threads per section but only one file reading \nthread, giving the read input to processing theads in a round-robin fashion. \nIn the future the processing thread choosing will possibly (another knob) be \nsmarter than that, as soon as we get CE support into pgloader.\n\nWhen split_file_reading = True and section_threads != 1 pgloader will have to \nmanage several processing threads per section, each one responsible of \nreading its own part of the file, processing boundaries to be discovered at \nreading time. 
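To make that part concrete, the splitting itself could be as simple as this rough sketch (not actual pgloader code, the names are invented):

import os
from stat import ST_SIZE

def split_offsets(filename, section_threads):
    # cut the file into one byte range per reader thread
    size = os.stat(filename)[ST_SIZE]
    step = size // section_threads
    ranges = []
    for i in range(section_threads):
        start = i * step
        if i == section_threads - 1:
            end = size                  # last range absorbs the remainder
        else:
            end = (i + 1) * step
        ranges.append((start, end))
    return ranges

Each reader thread then fine-tunes its own (start, end) pair at read time, as described earlier: skip forward to the first newline after start, and keep reading past end until the line being processed is complete.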
Adding in here CE support in this case means managing two \nseparate thread pools per section, one responsible of splitted file reading \nand another responsible of data buffering and routing (COPY to partition \ninstead of to parent table).\n\nIn both cases, maybe it would also be needed for pgloader to be able to have a \nseparate thread for COPYing the buffer to the server, allowing it to continue \npreparing next buffer in the meantime?\n\nThis will need some re-architecturing of pgloader, but it seems it worth it \n(I'm not entirely sold about the two thread-pools idea, though, and this last \ncontinue-reading-while-copying-idea still has to be examined).\nSome of the work needing to be done is by now quite clear for me, but a part \nof it still needs its design-time share. As usual though, the real hard part \nis knowing what we exactly want to get done, and we're showing good progress \nhere :)\n\nGreg's behavior:\nmax_threads = N \nmax_parallel_sections = 1\nsection_threads = -1\nsplit_file_reading = True\n\nSimon's behaviour:\nmax_threads = N\nmax_parallel_sections = 1 # I don't think Simon wants parallel sections\nsection_threads = -1\nsplit_file_reading = False\n\nComments?\n-- \ndim", "msg_date": "Thu, 7 Feb 2008 10:31:47 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "On Thu, 7 Feb 2008, Dimitri Fontaine wrote:\n\n> I was thinking of not even reading the file content from the controller\n> thread, just decide splitting points in bytes (0..ST_SIZE/4 -\n> ST_SIZE/4+1..2*ST_SIZE/4 etc) and let the reading thread fine-tune by\n> beginning to process input after having read first newline, etc.\n\nThe problem I was pointing out is that if chunk#2 moved foward a few bytes \nbefore it started reading in search of a newline, how will chunk#1 know \nthat it's supposed to read up to that further point? You have to stop #1 \nfrom reading further when it catches up with where #2 started. Since the \nstart of #2 is fuzzy until some reading is done, what you're describing \nwill need #2 to send some feedback to #1 after they've both started, and \nthat sounds bad to me. I like designs where the boundaries between \nthreads are clearly defined before any of them start and none of them ever \ntalk to the others.\n\n> In both cases, maybe it would also be needed for pgloader to be able to have a\n> separate thread for COPYing the buffer to the server, allowing it to continue\n> preparing next buffer in the meantime?\n\nThat sounds like a V2.0 design to me. I'd only chase after that level of \ncomplexity if profiling suggests that's where the bottleneck really is.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 7 Feb 2008 12:06:42 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "On Thu, 7 Feb 2008, Greg Smith wrote:\n> The problem I was pointing out is that if chunk#2 moved foward a few bytes \n> before it started reading in search of a newline, how will chunk#1 know that \n> it's supposed to read up to that further point? You have to stop #1 from \n> reading further when it catches up with where #2 started. 
Since the start of \n> #2 is fuzzy until some reading is done, what you're describing will need #2 \n> to send some feedback to #1 after they've both started, and that sounds bad \n> to me.\n\nIt doesn't have to be fuzzy at all. Both threads will presumably be able \nto use the same algorithm to work out where the boundary is, therefore \nthey'll get the same result. No need to pass back information.\n\nMatthew\n\n-- \nThere is something in the lecture course which may not have been visible so\nfar, which is reality -- Computer Science Lecturer\n", "msg_date": "Thu, 7 Feb 2008 17:11:10 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "> > I was thinking of not even reading the file content from the controller\n> > thread, just decide splitting points in bytes (0..ST_SIZE/4 -\n> > ST_SIZE/4+1..2*ST_SIZE/4 etc) and let the reading thread fine-tune by\n> > beginning to process input after having read first newline, etc.\n> \n> The problem I was pointing out is that if chunk#2 moved foward a few bytes \n> before it started reading in search of a newline, how will chunk#1 know \n> that it's supposed to read up to that further point? You have to stop #1 \n> from reading further when it catches up with where #2 started. Since the \n> start of #2 is fuzzy until some reading is done, what you're describing \n> will need #2 to send some feedback to #1 after they've both started, and \n> that sounds bad to me. I like designs where the boundaries between \n> threads are clearly defined before any of them start and none of them ever \n> talk to the others.\n\nI don't think that any communication is needed beyond the beginning of\nthe threads. Each thread knows that it should start at byte offset X\nand end at byte offset Y, but if Y happens to be in the middle of a\nrecord then just keep going until the end of the record. As long as the\nalgorithm for reading past the end marker is the same as the algorithm\nfor skipping past the beginning marker then all is well.\n\n-- Mark Lewis\n", "msg_date": "Thu, 07 Feb 2008 09:14:49 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "On Thu, Feb 07, 2008 at 12:06:42PM -0500, Greg Smith wrote:\n> On Thu, 7 Feb 2008, Dimitri Fontaine wrote:\n>\n>> I was thinking of not even reading the file content from the controller\n>> thread, just decide splitting points in bytes (0..ST_SIZE/4 -\n>> ST_SIZE/4+1..2*ST_SIZE/4 etc) and let the reading thread fine-tune by\n>> beginning to process input after having read first newline, etc.\n>\n> The problem I was pointing out is that if chunk#2 moved foward a few bytes \n> before it started reading in search of a newline, how will chunk#1 know \n> that it's supposed to read up to that further point? You have to stop #1 \n> from reading further when it catches up with where #2 started. Since the \n> start of #2 is fuzzy until some reading is done, what you're describing \n> will need #2 to send some feedback to #1 after they've both started, and \n> that sounds bad to me. I like designs where the boundaries between threads \n> are clearly defined before any of them start and none of them ever talk to \n> the others.\n>\n\nAs long as both processes understand the start condition, there\nis not a problem. 
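In rough Python the rule each process applies is just (illustrative only, not taken from any real loader):

def load_chunk(path, start, end, process):
    f = open(path, 'rb')
    f.seek(start)
    if start > 0:
        f.readline()            # skip to the start condition; the previous reader covers this record
    while True:
        record_start = f.tell()
        if record_start > end:
            break               # the next reader picks up from here
        line = f.readline()
        if not line:
            break
        process(line)
    f.close()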
p1 starts at beginning and processes through chunk2\n offset until it reaches the start condition. p2 starts loading from\nchunk2 offset plus the amount needed to reach the start condition, ...\n\nDBfile|---------------|--x--------------|x----------------|-x--|\n x chunk1----------->\n x chunk2-------->\n x chunk3----------->...\n\nAs long as both pieces use the same test, they will each process\nnon-overlapping segments of the file and still process 100% of the\nfile.\n\nKen\n\n>> In both cases, maybe it would also be needed for pgloader to be able to \n>> have a\n>> separate thread for COPYing the buffer to the server, allowing it to \n>> continue\n>> preparing next buffer in the meantime?\n>\n> That sounds like a V2.0 design to me. I'd only chase after that level of \n> complexity if profiling suggests that's where the bottleneck really is.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Thu, 7 Feb 2008 11:15:44 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "On Mon, 04 Feb 2008 17:33:34 -0500\n\"Jignesh K. Shah\" <[email protected]> wrote:\n\n> Hi Simon,\n> \n> I have some insight into TPC-H on how it works.\n> \n> First of all I think it is a violation of TPC rules to publish numbers \n> without auditing them first. So even if I do the test to show the \n> better performance of PostgreSQL 8.3, I cannot post it here or any \n> public forum without doing going through the \"process\". (Even though it \n> is partial benchmark as they are just doing the equivalent of the \n> PowerRun of TPCH) Maybe the PR of PostgreSQL team should email \n> [email protected] about them and see what they have to say about that comparison.\n\nI think I am qualified enough to say it is not a violation of TPC\nfair-use policy if we scope the data as a measure of how PostgreSQL has\nchanged from 8.1 to 8.3 and refrain from comparing these results to what\nany other database is doing.\n\nThe point is to measure PostgreSQL's progress not market it, correct?\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Feb 2008 12:12:53 -0800", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "On Mon, 4 Feb 2008 15:09:58 -0500 (EST)\nGreg Smith <[email protected]> wrote:\n\n> On Mon, 4 Feb 2008, Simon Riggs wrote:\n> \n> > Would anybody like to repeat these tests with the latest production\n> > versions of these databases (i.e. with PGSQL 8.3)\n> \n> Do you have any suggestions on how people should run TPC-H? It looked \n> like a bit of work to sort through how to even start this exercise.\n\nIf you mean you want to get your hands on a kit, the one that Jenny and\nI put together is here:\n\nhttp://sourceforge.net/project/showfiles.php?group_id=52479&package_id=71458\n\nI hear it still works. :)\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Feb 2008 12:16:19 -0800", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested" }, { "msg_contents": "Does anyone have performance info about the new Dell Perc/6 controllers? I found a long discussion (\"Dell vs HP\") about the Perc/5, but nothing about Perc/6. 
What's under the covers?\n\nHere is the (abbreviated) info from Dell on this machine:\n \nPowerEdge 1950 III Quad Core Intel® Xeon® E5405, 2x6MB Cache, 2.0GHz, 1333MHz FSB\nAdditional Processors Quad Core Intel® Xeon® E5405, 2x6MB Cache, 2.0GHz, 1333MHz FSB\nMemory 8GB 667MHz (4x2GB), Dual Ranked DIMMs\nHard Drive Configuration Integrated SAS/SATA RAID 5, PERC 6/i Integrated\n\nThanks,\nCraig\n", "msg_date": "Tue, 12 Feb 2008 08:32:26 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Dell Perc/6" }, { "msg_contents": "On Tue, 12 Feb 2008, Craig James wrote:\n\n> Does anyone have performance info about the new Dell Perc/6 controllers? I \n> found a long discussion (\"Dell vs HP\") about the Perc/5, but nothing about \n> Perc/6. What's under the covers?\n\nThe Perc/6i has an LSI Logic MegaRAID SAS 1078 chipset under the hood. I \nknow the Linux drivers for the card seemed to stabilize around October, \nthere's a good sized list of compatible distributions on LSI's site. \nFreeBSD support has some limitations but basically works. I haven't seen \nany benchmarks for the current version of the card yet.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 12 Feb 2008 14:01:13 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc/6" }, { "msg_contents": "Hello.\n\nI think I started that discussion. We ended up buying a Dell 2900 with \nPERC 6/i and 10 * 145GB SAS 3,5\" 15KRpm discs. 6 of the SAS discs are \nin a raid 10 for the database, 2 in a mirror for the wal and the last \n2 in a mirror for the OS. We get 350MB/s writing and 380MB/s reading \nto/from the raid 10 area using dd. The OS is Ubuntu and the filesystem \nfor the raid 10 is ext3.\n\nThe box is still under testing, but we plan to set it in production \nthis week.\n\nRegards,\n - Tore.\n\nOn Feb 12, 2008, at 17:32 , Craig James wrote:\n\n> Does anyone have performance info about the new Dell Perc/6 \n> controllers? I found a long discussion (\"Dell vs HP\") about the \n> Perc/5, but nothing about Perc/6. What's under the covers?\n>\n> Here is the (abbreviated) info from Dell on this machine:\n> PowerEdge 1950 III Quad Core Intel® Xeon® E5405, 2x6MB Cache, \n> 2.0GHz, 1333MHz FSB\n> Additional Processors Quad Core Intel® Xeon® E5405, 2x6MB Cache, \n> 2.0GHz, 1333MHz FSB\n> Memory 8GB 667MHz (4x2GB), Dual Ranked DIMMs\n> Hard Drive Configuration Integrated SAS/SATA RAID 5, PERC 6/i \n> Integrated\n>\n> Thanks,\n> Craig\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 13 Feb 2008 11:02:23 +0100", "msg_from": "Tore Halset <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc/6" }, { "msg_contents": "\nOn 13-Feb-08, at 5:02 AM, Tore Halset wrote:\n\n> Hello.\n>\n> I think I started that discussion. We ended up buying a Dell 2900 \n> with PERC 6/i and 10 * 145GB SAS 3,5\" 15KRpm discs. 6 of the SAS \n> discs are in a raid 10 for the database, 2 in a mirror for the wal \n> and the last 2 in a mirror for the OS. We get 350MB/s writing and \n> 380MB/s reading to/from the raid 10 area using dd. The OS is Ubuntu \n> and the filesystem for the raid 10 is ext3.\n>\nWow that's fantastic. 
Just to be sure, did you make sure that you read \nand wrote 2x memory to take the cache out of the measurement ?\n\nDave\n", "msg_date": "Wed, 13 Feb 2008 06:06:13 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc/6" }, { "msg_contents": "On Feb 13, 2008 5:02 AM, Tore Halset <[email protected]> wrote:\n> Hello.\n>\n> I think I started that discussion. We ended up buying a Dell 2900 with\n> PERC 6/i and 10 * 145GB SAS 3,5\" 15KRpm discs. 6 of the SAS discs are\n> in a raid 10 for the database, 2 in a mirror for the wal and the last\n> 2 in a mirror for the OS. We get 350MB/s writing and 380MB/s reading\n> to/from the raid 10 area using dd. The OS is Ubuntu and the filesystem\n> for the raid 10 is ext3.\n\nThose are decent numbers. Can you do a bonnie++ run and post the\nresults (specifically interested in seeks)?\n\nmerlin\n", "msg_date": "Wed, 13 Feb 2008 10:56:22 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc/6" }, { "msg_contents": "On Feb 13, 2008, at 12:06, Dave Cramer wrote:\n\n>\n> On 13-Feb-08, at 5:02 AM, Tore Halset wrote:\n>\n>> Hello.\n>>\n>> I think I started that discussion. We ended up buying a Dell 2900 \n>> with PERC 6/i and 10 * 145GB SAS 3,5\" 15KRpm discs. 6 of the SAS \n>> discs are in a raid 10 for the database, 2 in a mirror for the wal \n>> and the last 2 in a mirror for the OS. We get 350MB/s writing and \n>> 380MB/s reading to/from the raid 10 area using dd. The OS is Ubuntu \n>> and the filesystem for the raid 10 is ext3.\n>>\n> Wow that's fantastic. Just to be sure, did you make sure that you \n> read and wrote 2x memory to take the cache out of the measurement ?\n>\n> Dave\n>\n\n\nThe box have 16GB of ram, but my original test file was only 25GB. \nSorry. Going to 33GB lowered the numbers for writing. Here you have \nsome samples.\n\n% sh -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\n32768000000 bytes (33 GB) copied, 103.722 seconds, 316 MB/s\n32768000000 bytes (33 GB) copied, 99.669 seconds, 329 MB/s\n\n% time dd if=bigfile of=/dev/null bs=8k\n32768000000 bytes (33 GB) copied, 85.4235 seconds, 384 MB/s\n32768000000 bytes (33 GB) copied, 85.4628 seconds, 383 MB/s\n\nRegards,\n - Tore.\n", "msg_date": "Wed, 13 Feb 2008 20:45:01 +0100", "msg_from": "Tore Halset <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc/6" }, { "msg_contents": "On Feb 13, 2008, at 20:45, Tore Halset wrote:\n\n> The box have 16GB of ram, but my original test file was only 25GB. \n> Sorry. Going to 33GB lowered the numbers for writing. Here you have \n> some samples.\n>\n> % sh -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\n> 32768000000 bytes (33 GB) copied, 103.722 seconds, 316 MB/s\n> 32768000000 bytes (33 GB) copied, 99.669 seconds, 329 MB/s\n>\n> % time dd if=bigfile of=/dev/null bs=8k\n> 32768000000 bytes (33 GB) copied, 85.4235 seconds, 384 MB/s\n> 32768000000 bytes (33 GB) copied, 85.4628 seconds, 383 MB/s\n>\n> Regards,\n> - Tore.\n\nAnd here are the bonnie++ numbers. 
I am a bonnie++ newbie so I ran it \nwith no options.\n\nVersion 1.03c ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec \n%CP /sec %CP\nharteigen 32136M 83983 97 221757 40 106483 19 89603 97 268787 \n22 886.1 1\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec \n%CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ \n+++++ +++\nharteigen,32136M, \n83983,97,221757,40,106483,19,89603,97,268787,22,886.1,1,16,+++++,+++,++ \n+++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\n\nRegards,\n - Tore.\n", "msg_date": "Wed, 13 Feb 2008 21:28:21 +0100", "msg_from": "Tore Halset <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dell Perc/6" } ]
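The chunk-boundary rule sketched in the pgloader discussion above (each reader gets a byte range, the line containing a range's first byte is finished by the previous reader, and a reader completes the line it is in the middle of even when that runs past its end offset) is easy to get subtly wrong, so here is a minimal illustration of one way to do it. This is not pgloader's code; the file name and worker count are made up, and it assumes newline-terminated records with no embedded newlines.

import os

def chunk_bounds(path, workers):
    # Split the file into byte ranges of roughly equal size, one per worker.
    size = os.path.getsize(path)
    step = size // workers
    return [(i * step, size if i == workers - 1 else (i + 1) * step)
            for i in range(workers)]

def read_chunk(path, start, end):
    # Yield every line whose first byte falls inside [start, end).
    # Because each worker applies the same rule, no line is read twice and
    # none is skipped, with no coordination needed between workers.
    with open(path, 'rb') as f:
        if start == 0:
            f.seek(0)
        else:
            f.seek(start - 1)
            f.readline()        # finish the line that byte start-1 belongs to
        while f.tell() < end:
            line = f.readline()
            if not line:
                break           # end of file
            yield line          # may extend past 'end'; that is intended

if __name__ == '__main__':
    path = 'data.csv'           # hypothetical input file
    for start, end in chunk_bounds(path, 4):
        print(start, end, sum(1 for _ in read_chunk(path, start, end)))

In a real loader each chunk would be handed to a separate process feeding its own COPY, which is the design being discussed above.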
[ { "msg_contents": "Hi,\n\nI'm having a performance problem on a request using Tsearch2: the\nrequest strangely takes several minutes.\n\nI've tried to follow Tsearch tuning recommendations, I've searched\nthrough the archives, but I can't seem to find a solution to solve my\nproblem.\n\nThe ts_vector field was created using dictionnary fr_ispell only on\ntypes lword, lpart_hword and lhword. An index was created on this\nfield.\n\nAccording to the stat() function, there are only 42,590 word stems indexed.\nI also did a VACUUM FULL ANALYZE.\n\nHere's the result of EXPLAIN ANALYZE on a filtered version of my\nrequest (the non-filtered version takes so long I usually cancel it):\n**************************************************************************\nexplain analyze SELECT idstruct, headline(zonetext, q),\nrank(zoneindex_test, q) FROM tab_ocr, tab_chemin, to_tsquery('partir')\nAS q WHERE tab_chemin.chemin like '%;2;%' AND tab_ocr.idstruct =\ntab_chemin.label AND zoneindex_test @@ q ORDER BY rank(zoneindex_test,\nq) DESC;\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=2345.54..2345.58 rows=16 width=308) (actual\ntime=270638.774..270643.142 rows=7106 loops=1)\nSort Key: rank(tab_ocr.zoneindex_test, q.q)\n-> Nested Loop (cost=80.04..2345.22 rows=16 width=308) (actual\ntime=40886.553..270619.730 rows=7106 loops=1)\n-> Nested Loop (cost=80.04..1465.76 rows=392 width=308) (actual\ntime=38209.193..173932.313 rows=272414 loops=1)\n-> Function Scan on q (cost=0.00..0.01 rows=1 width=32) (actual\ntime=0.006..0.007 rows=1 loops=1)\n-> Bitmap Heap Scan on tab_ocr (cost=80.04..1460.85 rows=392\nwidth=276) (actual time=38209.180..173507.052 rows=272414 loops=1)\nFilter: (tab_ocr.zoneindex_test @@ q.q)\n-> Bitmap Index Scan on zoneindex_test_idx (cost=0.00..79.94 rows=392\nwidth=0) (actual time=38204.261..38204.261 rows=283606 loops=1)\nIndex Cond: (tab_ocr.zoneindex_test @@ q.q)\n-> Index Scan using tab_chemin_label_index on tab_chemin\n(cost=0.00..2.23 rows=1 width=4) (actual time=0.036..0.036 rows=0\nloops=272414)\nIndex Cond: (tab_ocr.idstruct = tab_chemin.label)\nFilter: ((chemin)::text ~~ '%;2;%'::text)\nTotal runtime: 270647.946 ms\n**************************************************************************\n\nCould someone help me analyze this problem?\nI don't manage to see if the problem comes from bad tsearch tuning,\npostgresql configuration, or something else...\n\nThanks.\n", "msg_date": "Tue, 5 Feb 2008 12:47:47 +0100", "msg_from": "\"Viviane Lestic\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue using Tsearch2" }, { "msg_contents": "On 2008-02-05 Viviane Lestic wrote:\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=2345.54..2345.58 rows=16 width=308) (actual\n> time=270638.774..270643.142 rows=7106 loops=1)\n> Sort Key: rank(tab_ocr.zoneindex_test, q.q)\n> -> Nested Loop (cost=80.04..2345.22 rows=16 width=308) (actual\n> time=40886.553..270619.730 rows=7106 loops=1)\n> -> Nested Loop (cost=80.04..1465.76 rows=392 width=308) (actual\n> time=38209.193..173932.313 rows=272414 loops=1)\n> -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) (actual\n> time=0.006..0.007 rows=1 loops=1)\n> -> Bitmap Heap Scan on tab_ocr (cost=80.04..1460.85 rows=392\n> width=276) (actual 
time=38209.180..173507.052 rows=272414 loops=1)\n> Filter: (tab_ocr.zoneindex_test @@ q.q)\n> -> Bitmap Index Scan on zoneindex_test_idx (cost=0.00..79.94 rows=392\n> width=0) (actual time=38204.261..38204.261 rows=283606 loops=1)\n> Index Cond: (tab_ocr.zoneindex_test @@ q.q)\n> -> Index Scan using tab_chemin_label_index on tab_chemin\n> (cost=0.00..2.23 rows=1 width=4) (actual time=0.036..0.036 rows=0\n> loops=272414)\n> Index Cond: (tab_ocr.idstruct = tab_chemin.label)\n> Filter: ((chemin)::text ~~ '%;2;%'::text)\n> Total runtime: 270647.946 ms\n> **************************************************************************\n> \n> Could someone help me analyze this problem?\n\nYour planner estimates are way off. Try increasing the statistics target\nfor the columns used in this query and re-analyze the tables after doing\nso.\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Tue, 5 Feb 2008 14:36:02 +0100", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue using Tsearch2" }, { "msg_contents": "On Feb 5, 2008 12:47 PM, Viviane Lestic <[email protected]> wrote:\n> Could someone help me analyze this problem?\n> I don't manage to see if the problem comes from bad tsearch tuning,\n> postgresql configuration, or something else...\n\nCan you try to replace zoneindex_test @@ q with zoneindex_test @@\nto_tsquery('partir')? Increasing the statistics for zoneindex_test may\nbe a good idea too (see ALTER TABLE ... ALTER COLUMN doc).\nI'm surprised you have the word \"partir\" in so many documents? Do you\nuse real data?\n\n--\nGuillaume\n", "msg_date": "Tue, 5 Feb 2008 14:48:22 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue using Tsearch2" }, { "msg_contents": "2008/2/5, Ansgar -59cobalt- Wiechers wrote:\n> Your planner estimates are way off. Try increasing the statistics target\n> for the columns used in this query and re-analyze the tables after doing\n> so.\n\nI first set STATISTICS to 1000 for column zoneindex_test and saw no\nsignificant improvement (with a vacuum full analyze in between). Then\nI set default_statistics_target to 1000: there is now an improvement,\nbut the overall time is still way too long... 
(and the estimated costs\ndidn't change...)\nHere are the results with default_statistics_target set to 1000:\n\nexplain analyze SELECT idstruct, headline(zonetext, q),\nrank(zoneindex_test, q) FROM tab_ocr, tab_chemin, to_tsquery('partir')\nAS q WHERE tab_chemin.chemin like '%;2;%' AND tab_ocr.idstruct =\ntab_chemin.label AND zoneindex_test @@ q ORDER BY rank(zoneindex_test,\nq) DESC;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2345.30..2345.32 rows=8 width=327) (actual\ntime=229913.715..229918.172 rows=7106 loops=1)\n Sort Key: rank(tab_ocr.zoneindex_test, q.q)\n -> Nested Loop (cost=80.04..2345.18 rows=8 width=327) (actual\ntime=28159.626..229892.957 rows=7106 loops=1)\n -> Nested Loop (cost=80.04..1465.76 rows=392 width=327)\n(actual time=26084.558..130979.395 rows=272414 loops=1)\n -> Function Scan on q (cost=0.00..0.01 rows=1\nwidth=32) (actual time=0.006..0.007 rows=1 loops=1)\n -> Bitmap Heap Scan on tab_ocr (cost=80.04..1460.85\nrows=392 width=295) (actual time=26084.544..130562.220 rows=272414\nloops=1)\n Filter: (tab_ocr.zoneindex_test @@ q.q)\n -> Bitmap Index Scan on zoneindex_test_idx\n(cost=0.00..79.94 rows=392 width=0) (actual time=26073.315..26073.315\nrows=283606 loops=1)\n Index Cond: (tab_ocr.zoneindex_test @@ q.q)\n -> Index Scan using tab_chemin_label_index on tab_chemin\n(cost=0.00..2.23 rows=1 width=4) (actual time=0.040..0.040 rows=0\nloops=272414)\n Index Cond: (tab_ocr.idstruct = tab_chemin.label)\n Filter: ((chemin)::text ~~ '%;2;%'::text)\n Total runtime: 229922.864 ms\n\n\n2008/2/5, Guillaume Smet wrote:\n> Can you try to replace zoneindex_test @@ q with zoneindex_test @@\n> to_tsquery('partir')?\n\nThe improvement seems negligible (with default_statistics_target back\nto 10, its default value):\nexplain analyze SELECT idstruct, headline(zonetext, q),\nrank(zoneindex_test, q) FROM tab_ocr, tab_chemin, to_tsquery('partir')\nAS q WHERE tab_chemin.chemin like '%;2;%' AND tab_ocr.idstruct =\ntab_chemin.label AND zoneindex_test @@ to_tsquery('partir') ORDER BY\nrank(zoneindex_test, q) DESC;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4358.91..4358.95 rows=16 width=308) (actual\ntime=266489.667..266494.132 rows=7106 loops=1)\n Sort Key: rank(tab_ocr.zoneindex_test, q.q)\n -> Nested Loop (cost=80.04..4358.59 rows=16 width=308) (actual\ntime=42245.881..266469.644 rows=7106 loops=1)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)\n(actual time=0.007..0.008 rows=1 loops=1)\n -> Nested Loop (cost=80.04..4358.34 rows=16 width=276)\n(actual time=42239.570..178496.761 rows=7106 loops=1)\n -> Bitmap Heap Scan on tab_ocr (cost=80.04..1461.83\nrows=392 width=276) (actual time=38317.423..174188.779 rows=272414\nloops=1)\n Filter: (zoneindex_test @@ '''partir'''::tsquery)\n -> Bitmap Index Scan on zoneindex_test_idx\n(cost=0.00..79.94 rows=392 width=0) (actual time=38289.289..38289.289\nrows=283606 loops=1)\n Index Cond: (zoneindex_test @@ '''partir'''::tsquery)\n -> Index Scan using tab_chemin_label_index on\ntab_chemin (cost=0.00..7.38 rows=1 width=4) (actual time=0.014..0.014\nrows=0 loops=272414)\n Index Cond: (tab_ocr.idstruct = tab_chemin.label)\n Filter: ((chemin)::text ~~ '%;2;%'::text)\n Total runtime: 266498.704 ms\n\n> Increasing the statistics for zoneindex_test may\n> be a 
good idea too (see ALTER TABLE ... ALTER COLUMN doc).\n\nI posted the results above.\n\n> I'm surprised you have the word \"partir\" in so many documents? Do you\n> use real data?\n\nI'm using real data. The indexed documents are extracted from\nnewspapers, and \"partir\" (and its derivates) is quite a common verb in\nthe French language, so I'm not that surprised to see it show up in\nmany documents.\n", "msg_date": "Tue, 5 Feb 2008 16:08:36 +0100", "msg_from": "\"Viviane Lestic\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue using Tsearch2" } ]
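One way to make the statistics experiments from this thread repeatable is to script them. The sketch below assumes a psycopg2 connection and reuses the table, column and query from the messages above; the connection string is a placeholder, and the plans you get will of course depend on your own data.

import psycopg2

QUERY = """
EXPLAIN ANALYZE
SELECT idstruct, headline(zonetext, q), rank(zoneindex_test, q)
FROM tab_ocr, tab_chemin, to_tsquery('partir') AS q
WHERE tab_chemin.chemin LIKE '%;2;%'
  AND tab_ocr.idstruct = tab_chemin.label
  AND zoneindex_test @@ q
ORDER BY rank(zoneindex_test, q) DESC
"""

conn = psycopg2.connect("dbname=mydb")   # placeholder connection string
conn.autocommit = True
cur = conn.cursor()

for target in (10, 100, 1000):
    # Raise the per-column statistics target, re-analyze, then look at the plan.
    # The target comes from the literal tuple above, so plain %d formatting is safe here.
    cur.execute("ALTER TABLE tab_ocr ALTER COLUMN zoneindex_test SET STATISTICS %d" % target)
    cur.execute("ANALYZE tab_ocr")
    cur.execute(QUERY)
    print("-- statistics target %d --" % target)
    for (line,) in cur.fetchall():
        print(line)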
[ { "msg_contents": "Hi\n \nI have discovered an issue on my Postgresql database recently installed : it seems that the optimizer can not, when possible, simplify and rewrite a simple query before running it. Here is a simple and reproducible example :\n \nmy_db=# create table test (n numeric);\nCREATE\nmy_db=# insert into test values (1); --> run 10 times\nINSERT\nmy_db=# insert into test values (0); --> run 10 times\nINSERT\nmy_db=# select count(*) from test;\ncount\n-------\n20\n(1 row)\nmy_db=# vacuum full analyze test;\nVACUUM\nmy_db=# explain select * from test where n = 1;\nQUERY PLAN\n------------------------------------------------------\nSeq Scan on test (cost=0.00..1.25 rows=10 width=9)\nFilter: (n = 1::numeric)\n(2 rows)\n \nmy_db=# explain select * from test where n = 1 and n = 1;\nQUERY PLAN\n-----------------------------------------------------\nSeq Scan on test (cost=0.00..1.30 rows=5 width=9)\nFilter: ((n = 1::numeric) AND (n = 1::numeric))\n(2 rows)\n \nIn the first SELECT query (with \"where n=1\"), the estimated number of returned rows is correct (10), whereas in the second SELECT query (with \"where n=1 and n=1\"), the estimated number of returned rows is 5 (instead of 10 !)\nSo the optimizer has under-estimated the number of rows returned\nThat issue is very annoying because with generated SQL queries (from Business Objects for example) on big tables, it is possible that some queries have several times the same \"where\" condition (\"where n=1 and n=1\" for example), and as the optimizer is under-estimating the number of returned rows, some bad execution plans can be chosen (nested loops instead of hash joins for example)\n \nIs the estimated number of returned rows directly linked to the decision of the optimizer to chose Hash Joins or Nested Loops in join queries ?\nIs there a way for the Postgresql optimizer to be able to simplify and rewrite the SQL statements before running them ? Are there some parameters that could change the execution plans ?\n \nThanks by advance for your help\n \nJean-Francois SURANTYN\n \n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\nSupermarchés MATCH, Société Par Actions Simplifiée au capital de 10 420 100 €, immatriculée au RCS de LILLE sous le Numéro B 785 480 351\nSiège : 250, rue du Général de Gaulle - BP 201 - 59 561 LA MADELEINE Cedex\n**********************************************************************\n\n\n\n\n\n\nHi\n \nI have discovered an issue on my Postgresql \ndatabase recently installed : it seems that the optimizer can not, when \npossible, simplify and rewrite a simple query before running it. 
Here is a \nsimple and reproducible example :\n \nmy_db=# create table test (n \nnumeric);CREATEmy_db=# insert into test values (1); --> run 10 \ntimesINSERTmy_db=# insert into test values (0); --> run 10 \ntimesINSERTmy_db=# select count(*) from \ntest;count-------20(1 row)my_db=# vacuum full analyze \ntest;VACUUMmy_db=# explain select * from test where n = \n1;QUERY \nPLAN------------------------------------------------------Seq Scan on \ntest (cost=0.00..1.25 rows=10 width=9)Filter: (n = \n1::numeric)(2 rows)\n \nmy_db=# explain select * from test where \nn = 1 and n = 1;QUERY \nPLAN-----------------------------------------------------Seq Scan on \ntest (cost=0.00..1.30 rows=5 width=9)Filter: ((n = \n1::numeric) AND (n = 1::numeric))(2 rows)\n \nIn the first SELECT query (with \"where n=1\"), the \nestimated number of returned rows is correct (10), whereas in the second SELECT \nquery (with \"where n=1 and n=1\"), the estimated number of returned rows is 5 \n(instead of 10 !)So the optimizer has under-estimated the number of rows \nreturnedThat issue is very annoying because with generated SQL queries (from \nBusiness Objects for example) on big tables, it is possible that some queries \nhave several times the same \"where\" condition (\"where n=1 and n=1\" for example), \nand as the optimizer is under-estimating the number of returned rows, some bad \nexecution plans can be chosen (nested loops instead of hash joins for \nexample)\n \nIs the estimated number of returned rows directly \nlinked to the decision of the optimizer to chose Hash Joins or Nested Loops in \njoin queries ?Is there a way for the Postgresql optimizer to be able to \nsimplify and rewrite the SQL statements before running them ? Are there some \nparameters that could change the execution plans ?\n \nThanks by advance for your help\n \nJean-Francois \nSURANTYN\n \n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\n \nSupermarchés MATCH, Société Par Actions Simplifiée au capital de 10 420 100 €, immatriculée au RCS de LILLE sous le Numéro B 785 480 351\nSiège : 250, rue du Général de Gaulle - BP 201 - 59 561 LA MADELEINE Cedex\n**********************************************************************", "msg_date": "Wed, 6 Feb 2008 09:42:18 +0100", "msg_from": "=?iso-8859-1?Q?SURANTYN_Jean_Fran=E7ois?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "SURANTYN Jean François wrote:\n> my_db=# explain select * from test where n = 1;\n\n> my_db=# explain select * from test where n = 1 and n = 1;\n\n> In the first SELECT query (with \"where n=1\"), the estimated number of\n> returned rows is correct (10), whereas in the second SELECT query\n> (with \"where n=1 and n=1\"), the estimated number of returned rows is\n> 5 (instead of 10 !) So the optimizer has under-estimated the number\n> of rows returned\n\nThat's because it's a badly composed query. The planner is guessing how\nmuch overlap there would be between the two clauses. 
It's not exploring\nthe option that they are the same clause repeated.\n\n> That issue is very annoying because with generated\n> SQL queries (from Business Objects for example) on big tables, it is\n> possible that some queries have several times the same \"where\"\n> condition (\"where n=1 and n=1\" for example), and as the optimizer is\n> under-estimating the number of returned rows, some bad execution\n> plans can be chosen (nested loops instead of hash joins for example)\n\nSounds like your query-generator needs a bit of an improvement, from my end.\n\n> Is the estimated number of returned rows directly linked to the\n> decision of the optimizer to chose Hash Joins or Nested Loops in join\n> queries ? \n\nYes, well the cost determines a plan and obviously number of rows\naffects the cost.\n\n> Is there a way for the Postgresql optimizer to be able to\n> simplify and rewrite the SQL statements before running them ? \n\nIt does, just not this one. It spots things like a=b and b=c implies a=c\n(for joins etc).\n\n> Are\n> there some parameters that could change the execution plans ?\n\nNot really in this case.\n\nThe root of your problem is that you have a query with an irrelevant\nclause (AND n=1) and you'd like the planner to notice that it's\nirrelevant and remove it from the query. There are two problems with this:\n\n1. It's only going to be possible in simple cases. It's unlikely the\nplanner would ever spot \"n=2 AND n=(10/5)\"\n2. Even in the simple case you're going to waste time checking *every\nquery* to see if clauses could be eliminated.\n\nIs there any way to improve your query generator?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 06 Feb 2008 09:47:22 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "On Wed, 2008-02-06 at 09:42 +0100, SURANTYN Jean François wrote:\n\n> That issue is very annoying because with generated SQL queries (from\n> Business Objects for example) on big tables, it is possible that some\n> queries have several times the same \"where\" condition (\"where n=1 and\n> n=1\" for example), and as the optimizer is under-estimating the number\n> of returned rows, some bad execution plans can be chosen (nested loops\n> instead of hash joins for example)\n\nI can see the annoyance there.\n\nThere's a balance in the planner between time spent to optimize the\nquery and time spent to correct mistakes. If we looked continually for\nmistakes then planning time would increase for everybody that didn't\nsuffer from this problem.\n\nSince the SQL is not your fault and difficult to control, it is an\nargument in favour of an optional planner mode that would perform\nadditional checks for redundant clauses of various kinds. The default\nfor that would be \"off\" since most people don't suffer from this\nproblem. BO isn't the only SQL generating-client out there, so I think\nthis is a fairly wide problem.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Wed, 06 Feb 2008 11:53:46 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" 
}, { "msg_contents": "On Wed, 2008-02-06 at 11:53 +0000, Simon Riggs wrote:\n> On Wed, 2008-02-06 at 09:42 +0100, SURANTYN Jean Fran�ois wrote:\n> \n> > That issue is very annoying because with generated SQL queries (from\n> > Business Objects for example) on big tables, it is possible that some\n> > queries have several times the same \"where\" condition (\"where n=1 and\n> > n=1\" for example), and as the optimizer is under-estimating the number\n> > of returned rows, some bad execution plans can be chosen (nested loops\n> > instead of hash joins for example)\n> \n> I can see the annoyance there.\n> \n> There's a balance in the planner between time spent to optimize the\n> query and time spent to correct mistakes. If we looked continually for\n> mistakes then planning time would increase for everybody that didn't\n> suffer from this problem.\n> \n> Since the SQL is not your fault and difficult to control, it is an\n> argument in favour of an optional planner mode that would perform\n> additional checks for redundant clauses of various kinds. The default\n> for that would be \"off\" since most people don't suffer from this\n> problem. BO isn't the only SQL generating-client out there, so I think\n> this is a fairly wide problem.\n\nI would have to disagree. I spend a lot of time writing code that\ngenerates SQL from a business app and feel strongly that any\noptimisation is my responsibility.\n\nHaving to re-configure PG to switch on a planner mode, as suggested\nabove, to address badly generated SQL is not a good idea.\n\nThis with experience on having to talk business application developers\nthrough re-configuring a database.\n\n-- \nRegards\nTheo\n\n", "msg_date": "Wed, 06 Feb 2008 14:12:42 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "\n> > Since the SQL is not your fault and difficult to control, it is an\n> > argument in favour of an optional planner mode that would perform\n> > additional checks for redundant clauses of various kinds. The\ndefault\n> > for that would be \"off\" since most people don't suffer from this\n> > problem. BO isn't the only SQL generating-client out there, so I\nthink\n> > this is a fairly wide problem.\n> \n> I would have to disagree. I spend a lot of time writing code that\n> generates SQL from a business app and feel strongly that any\n> optimisation is my responsibility.\n> \n\nThe point to a BI tool like BO is to abstract the data collection and do\nit dynamically. The SQL is built at run time because the tool is\ndesigned to give the end user as much flexibility as the data structure\nallows to query the data however they want.\n\nIt isn't feasible, possible, or recommended to rewrite all of the\npossible generated SQL that could be designed at runtime by the tool. \n\n\n\nJon\n", "msg_date": "Wed, 6 Feb 2008 07:35:38 -0600", "msg_from": "\"Roberts, Jon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "\nOn Feb 6, 2008, at 7:35 AM, Roberts, Jon wrote:\n\n>\n>>> Since the SQL is not your fault and difficult to control, it is an\n>>> argument in favour of an optional planner mode that would perform\n>>> additional checks for redundant clauses of various kinds. The\n> default\n>>> for that would be \"off\" since most people don't suffer from this\n>>> problem. 
BO isn't the only SQL generating-client out there, so I\n> think\n>>> this is a fairly wide problem.\n>>\n>> I would have to disagree. I spend a lot of time writing code that\n>> generates SQL from a business app and feel strongly that any\n>> optimisation is my responsibility.\n>>\n>\n> The point to a BI tool like BO is to abstract the data collection \n> and do\n> it dynamically. The SQL is built at run time because the tool is\n> designed to give the end user as much flexibility as the data \n> structure\n> allows to query the data however they want.\n>\n> It isn't feasible, possible, or recommended to rewrite all of the\n> possible generated SQL that could be designed at runtime by the tool.\n\nNo, but it is feasible to expect the tool to generate well-formed \nqueries without redundant clauses. There are plenty that do.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Wed, 6 Feb 2008 09:27:33 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "> >\n> >>> Since the SQL is not your fault and difficult to control, it is an\n> >>> argument in favour of an optional planner mode that would perform\n> >>> additional checks for redundant clauses of various kinds. The\n> > default\n> >>> for that would be \"off\" since most people don't suffer from this\n> >>> problem. BO isn't the only SQL generating-client out there, so I\n> > think\n> >>> this is a fairly wide problem.\n> >>\n> >> I would have to disagree. I spend a lot of time writing code that\n> >> generates SQL from a business app and feel strongly that any\n> >> optimisation is my responsibility.\n> >>\n> >\n> > The point to a BI tool like BO is to abstract the data collection\n> > and do\n> > it dynamically. The SQL is built at run time because the tool is\n> > designed to give the end user as much flexibility as the data\n> > structure\n> > allows to query the data however they want.\n> >\n> > It isn't feasible, possible, or recommended to rewrite all of the\n> > possible generated SQL that could be designed at runtime by the\ntool.\n> \n> No, but it is feasible to expect the tool to generate well-formed\n> queries without redundant clauses. There are plenty that do.\n> \n\n\nAgreed.\n\n\nJon\n", "msg_date": "Wed, 6 Feb 2008 09:35:47 -0600", "msg_from": "\"Roberts, Jon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n> On Wed, 2008-02-06 at 11:53 +0000, Simon Riggs wrote:\n>> Since the SQL is not your fault and difficult to control, it is an\n>> argument in favour of an optional planner mode that would perform\n>> additional checks for redundant clauses of various kinds. The default\n>> for that would be \"off\" since most people don't suffer from this\n>> problem. BO isn't the only SQL generating-client out there, so I think\n>> this is a fairly wide problem.\n\n> I would have to disagree. I spend a lot of time writing code that\n> generates SQL from a business app and feel strongly that any\n> optimisation is my responsibility.\n\nDisagree with what? 
If that's your feeling then you'd leave the setting\n\"off\", and no harm done.\n\nWe used to have code that removed duplicate WHERE clauses (check the\nrevision history for prepqual.c). It was taken out because it consumed\nexcessive amounts of planning time without accomplishing a darn thing\nfor most queries. There is no chance that it will be put back in as the\nonly behavior, or even the default behavior, but I can see the reasoning\nfor offering an option as Simon suggests.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Feb 2008 11:00:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ? " }, { "msg_contents": "On Wed, 2008-02-06 at 11:00 -0500, Tom Lane wrote:\n> Theo Kramer <[email protected]> writes:\n> > On Wed, 2008-02-06 at 11:53 +0000, Simon Riggs wrote:\n> >> Since the SQL is not your fault and difficult to control, it is an\n> >> argument in favour of an optional planner mode that would perform\n> >> additional checks for redundant clauses of various kinds. The default\n> >> for that would be \"off\" since most people don't suffer from this\n> >> problem. BO isn't the only SQL generating-client out there, so I think\n> >> this is a fairly wide problem.\n\n> We used to have code that removed duplicate WHERE clauses (check the\n> revision history for prepqual.c). It was taken out because it consumed\n> excessive amounts of planning time without accomplishing a darn thing\n> for most queries. There is no chance that it will be put back in as the\n> only behavior, or even the default behavior, but I can see the reasoning\n> for offering an option as Simon suggests.\n\nI was wondering if we might do that automatically? It seems easy to\nimagine a switch, but I wonder if we'd be able to set it correctly in\nenough situations to make it worthwhile.\n\nSay if cost of best plan >= N then recheck query for strangeness. If\nanything found, re-plan query.\n\nThat way we only pay the cost of checking for longer queries and we only\nactually re-plan for queries that will benefit.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n", "msg_date": "Mon, 11 Feb 2008 09:44:47 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Say if cost of best plan >= N then recheck query for strangeness. If\n> anything found, re-plan query.\n\nWhatever makes you think that would be useful?\n\nThe usual result of undetected duplicate WHERE clauses is an\n*underestimate* of runtime, not an overestimate (because it thinks\ntoo few tuples will be selected).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Feb 2008 11:44:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ? " } ]
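Since the consensus above is that the planner will not strip duplicate clauses by default, the cheapest place to fix this is the SQL generator itself. What follows is a hypothetical sketch of that normalization step, not code from BO or any real generator; it only catches textually identical conjuncts, so it will not notice that n = 2 and n = (10/5) are equivalent, which is the same limitation pointed out for the planner.

def dedup_conjuncts(conjuncts):
    # Drop exact duplicates from a list of AND-ed predicates, keeping order,
    # so that ["n = 1", "n = 1"] collapses to ["n = 1"].
    seen = set()
    kept = []
    for clause in conjuncts:
        key = " ".join(clause.split())   # normalize whitespace only
        if key not in seen:
            seen.add(key)
            kept.append(clause)
    return kept

def build_where(conjuncts):
    conjuncts = dedup_conjuncts(conjuncts)
    return ("WHERE " + " AND ".join(conjuncts)) if conjuncts else ""

print(build_where(["n = 1", "n  =  1"]))   # prints: WHERE n = 1

Doing this once at generation time costs nothing per query on the server, which is exactly the trade-off the planner discussion above is trying to avoid.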
[ { "msg_contents": "Many thanks for your quick reply \n\nIn fact, that issue comes from a recent migration from Oracle to Postgresql, and even if some queries were not optimized by the past (example: where n=1 and n=1), Oracle was able to rewrite them and to \"hide\" the bad queries\". But now that we have migrated to Postgresql, we have discovered that some queries were indeed badly wroten\nI will tell to the developpers to try to optimize their queries for them to work efficiently on Postgresql\n\nThanks again for your help\n\nRegards\n\nJean-Francois SURANTYN\n\n\n-----Message d'origine-----\nDe : Richard Huxton [mailto:[email protected]] \nEnvoyé : mercredi 6 février 2008 10:47\nÀ : SURANTYN Jean François\nCc : [email protected]\nObjet : Re: [PERFORM] Optimizer : query rewrite and execution plan ?\n\nSURANTYN Jean François wrote:\n> my_db=# explain select * from test where n = 1;\n\n> my_db=# explain select * from test where n = 1 and n = 1;\n\n> In the first SELECT query (with \"where n=1\"), the estimated number of \n> returned rows is correct (10), whereas in the second SELECT query \n> (with \"where n=1 and n=1\"), the estimated number of returned rows is\n> 5 (instead of 10 !) So the optimizer has under-estimated the number of \n> rows returned\n\nThat's because it's a badly composed query. The planner is guessing how much overlap there would be between the two clauses. It's not exploring the option that they are the same clause repeated.\n\n> That issue is very annoying because with generated SQL queries (from \n> Business Objects for example) on big tables, it is possible that some \n> queries have several times the same \"where\"\n> condition (\"where n=1 and n=1\" for example), and as the optimizer is \n> under-estimating the number of returned rows, some bad execution plans \n> can be chosen (nested loops instead of hash joins for example)\n\nSounds like your query-generator needs a bit of an improvement, from my end.\n\n> Is the estimated number of returned rows directly linked to the \n> decision of the optimizer to chose Hash Joins or Nested Loops in join \n> queries ?\n\nYes, well the cost determines a plan and obviously number of rows affects the cost.\n\n> Is there a way for the Postgresql optimizer to be able to simplify and \n> rewrite the SQL statements before running them ?\n\nIt does, just not this one. It spots things like a=b and b=c implies a=c (for joins etc).\n\n> Are\n> there some parameters that could change the execution plans ?\n\nNot really in this case.\n\nThe root of your problem is that you have a query with an irrelevant clause (AND n=1) and you'd like the planner to notice that it's irrelevant and remove it from the query. There are two problems with this:\n\n1. It's only going to be possible in simple cases. It's unlikely the planner would ever spot \"n=2 AND n=(10/5)\"\n2. Even in the simple case you're going to waste time checking *every\nquery* to see if clauses could be eliminated.\n\nIs there any way to improve your query generator?\n\n--\n Richard Huxton\n Archonet Ltd\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. 
If you have received this email in error please notify\nthe system manager.\n\nSupermarchés MATCH, Société Par Actions Simplifiée au capital de 10 420 100 €, immatriculée au RCS de LILLE sous le Numéro B 785 480 351\nSiège : 250, rue du Général de Gaulle - BP 201 - 59 561 LA MADELEINE Cedex\n**********************************************************************\n\n", "msg_date": "Wed, 6 Feb 2008 11:02:45 +0100", "msg_from": "=?iso-8859-1?Q?SURANTYN_Jean_Fran=E7ois?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" }, { "msg_contents": "SURANTYN Jean François wrote:\n> Many thanks for your quick reply\n> \n> In fact, that issue comes from a recent migration from Oracle to\n> Postgresql, and even if some queries were not optimized by the past\n> (example: where n=1 and n=1), Oracle was able to rewrite them and to\n> \"hide\" the bad queries\". But now that we have migrated to Postgresql,\n> we have discovered that some queries were indeed badly wroten I will\n> tell to the developpers to try to optimize their queries for them to\n> work efficiently on Postgresql\n\nIf nothing else it will help when / if you decide to use prepared\nqueries - there's no way to optimise \"n=$1 or n=$2\" at planning time.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 06 Feb 2008 10:06:47 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer : query rewrite and execution plan ?" } ]
[ { "msg_contents": "Improvements are welcome, but to compete in the industry, loading will need to speed up by a factor of 100.\n\nNote that Bizgres loader already does many of these ideas and it sounds like pgloader does too.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tDimitri Fontaine [mailto:[email protected]]\nSent:\tWednesday, February 06, 2008 12:41 PM Eastern Standard Time\nTo:\[email protected]\nCc:\tGreg Smith\nSubject:\tRe: [PERFORM] Benchmark Data requested --- pgloader CE design ideas\n\nLe mercredi 06 février 2008, Greg Smith a écrit :\n> If I'm loading a TB file, odds are good I can split that into 4 or more\n> vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders\n> at once, and get way more than 1 disk worth of throughput reading.\n\npgloader already supports starting at any input file line number, and limit \nitself to any number of reads:\n\n -C COUNT, --count=COUNT\n number of input lines to process\n -F FROMCOUNT, --from=FROMCOUNT\n number of input lines to skip\n\nSo you could already launch 4 pgloader processes with the same configuration \nfine but different command lines arguments. It there's interest/demand, it's \neasy enough for me to add those parameters as file configuration knobs too.\n\nStill you have to pay for client to server communication instead of having the \nbackend read the file locally, but now maybe we begin to compete?\n\nRegards,\n-- \ndim\n\n\n\nRe: [PERFORM] Benchmark Data requested --- pgloader CE design ideas\n\n\n\nImprovements are welcome, but to compete in the industry, loading will need to speed up by a factor of 100.\n\nNote that Bizgres loader already does many of these ideas and it sounds like pgloader does too.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Dimitri Fontaine [mailto:[email protected]]\nSent:   Wednesday, February 06, 2008 12:41 PM Eastern Standard Time\nTo:     [email protected]\nCc:     Greg Smith\nSubject:        Re: [PERFORM] Benchmark Data requested --- pgloader CE design ideas\n\nLe mercredi 06 février 2008, Greg Smith a écrit :\n> If I'm loading a TB file, odds are good I can split that into 4 or more\n> vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders\n> at once, and get way more than 1 disk worth of throughput reading.\n\npgloader already supports starting at any input file line number, and limit\nitself to any number of reads:\n\n  -C COUNT, --count=COUNT\n                        number of input lines to process\n  -F FROMCOUNT, --from=FROMCOUNT\n                        number of input lines to skip\n\nSo you could already launch 4 pgloader processes with the same configuration\nfine but different command lines arguments. 
It there's interest/demand, it's\neasy enough for me to add those parameters as file configuration knobs too.\n\nStill you have to pay for client to server communication instead of having the\nbackend read the file locally, but now maybe we begin to compete?\n\nRegards,\n--\ndim", "msg_date": "Wed, 6 Feb 2008 12:49:56 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" }, { "msg_contents": "Le Wednesday 06 February 2008 18:49:56 Luke Lonergan, vous avez écrit :\n> Improvements are welcome, but to compete in the industry, loading will need\n> to speed up by a factor of 100.\n\nOh, I meant to compete with internal COPY command instead of \\copy one, not \nwith the competition. AIUI competing with competition will need some \nPostgreSQL internal improvements, which I'll let the -hackers do :)\n\n> Note that Bizgres loader already does many of these ideas and it sounds\n> like pgloader does too.\n\nWe're talking about how to improve pgloader :)\n\n-- \ndim", "msg_date": "Wed, 6 Feb 2008 21:04:09 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark Data requested --- pgloader CE design ideas" } ]
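The --from/--count options quoted above are enough to drive several pgloader processes from a tiny wrapper. A sketch follows, with the caveats that the input file name is made up and the rest of the pgloader command line (configuration file, section names) is a placeholder to adapt to your own setup.

import subprocess

INPUT = "data.csv"                                  # hypothetical input file
WORKERS = 4
PGLOADER = ["pgloader", "-c", "pgloader.conf"]      # placeholder invocation; adjust

# Count input lines once, then give each worker an equal slice via
# --from (lines to skip) and --count (lines to process).
with open(INPUT, "rb") as f:
    total = sum(1 for _ in f)

step = total // WORKERS
procs = []
for i in range(WORKERS):
    start = i * step
    count = total - start if i == WORKERS - 1 else step
    cmd = PGLOADER + ["--from=%d" % start, "--count=%d" % count]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()

As noted in the thread, each process still pays the client-to-server communication cost, so this competes with \copy rather than with a server-side COPY of a local file.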
[ { "msg_contents": "Hi,\n\n(PostgreSQL 8.3)\n\nI'm trying to optimize one of the most often used queries in our system:\n\n(Full minimized pastable schema and data below.)\n\ncreate table feeds_users (\n user_id int references users(id) not null,\n feed_id int references feeds(id) not null,\n unique(user_id, feed_id)\n);\n\ncreate table items (\n id serial primary key,\n feed_id int references feeds(id) not null,\n title text,\n pub_date timestamp\n);\n\ncreate index items_feed_id_idx on items(feed_id);\ncreate index items_pub_date_idx on items(pub_date);\ncreate index items_pub_date_feed_id_idx on items(pub_date, feed_id);\ncreate index feeds_users_feed_id on feeds_users(feed_id);\n\n-- Query variant 1:\n\nEXPLAIN ANALYZE SELECT i.* FROM items i WHERE feed_id IN (\n SELECT f.feed_id FROM feeds_users f WHERE f.user_id = ?)\nORDER BY pub_date DESC\nLIMIT 20 OFFSET 100;\n\n-- Query variant 2:\nEXPLAIN ANALYZE SELECT i.* FROM items i\n JOIN feeds_users f ON i.feed_id = f.feed_id AND f.user_id = ?\nORDER BY pub_date DESC\nLIMIT 20 OFFSET 100;\n\nThe table items contains 700000 rows, feeds_users 99000, feeds 10000 and\nusers\n1000. The number of feeds for each user is distributed logarithmically, i.e.\nthere are many users with none to little feeds and some users with many\nfeeds.\n\nIn reality, 99% of the rows in items are being inserted in pub_date order,\nand the correlation of user_id in feeds_users is not 1 (it is 1 with the\ntest data).\n\nI need this query to work blisteringly fast for the common case, and at\nleast not too slow for extreme cases. Extreme cases are:\n * no feeds for a user\n * very little feeds for a user, with the top 20 items spread over >10% of\ntable items\n * normal number of feeds for a user, but big offset (1000 or 10000 or\n100000). The maximum offset could be capped in the application if needed,\nbut a solution without that would be preferred.\n\nThe common case is that the needed rows are within the first (by pub_date\ndesc) <1% of items.\n\nI ran some tests of both query variants on a Pentium M 1.6 GHz notebook with\n1280 MB RAM, shared_buffers = 32MB, temp_buffers 8MB, work_mem 8MB. Three\ndifferent user_ids were used for testing; the needed rows for each user are\neither 1) not existant, 2) spread over 18% of the table, 3) spread over\n0.064% of the table. Also I tried a statistics target of 10 and 100 for the\ntwo columns in feeds_users. Two query variants were used, one with an inner\njoin and one with IN. I got 4 different plans all in all. Results:\n\nno. stat user_id item rows result rows variant plan time\n target scanned w/o limit query\n\n1 10 3 700000 0 in 1 20000 ms\n2 join 2 15000 ms\n3 49 46855 (18%) 630 in 1 2300 ms\n4 join 2 2300 ms\n5 109 448 (0.064%) 206780 in 1 6 ms\n6 join 2 9 ms\n7 100 3 700000 0 in 3 0.2 ms\n8 join 2 16500 ms\n9 49 46855 (18%) 630 in 4 10 ms\n10 join 2 2300 ms\n11 109 448 (0.064%) 206780 in 1 6 ms\n12 join 2 9 ms\n\nPlans below. Now the questions:\n\nDo the differences in characteristics of the test data and the real data\nsomehow invalidate these numbers?\n\nI observe, that the query variant with IN is faster in all cases. What's the\ndifference in them that leads to plans being chosen that differ so much\nperformance-wise?\n\nCan I somehow trigger the items_pub_date_feed_id_idx to be used? 
ISTM that\nscanning by that index in pub_date desc order and using that same index to\ntest for a needed feed_id would be faster than accessing the heap for each\ntuple.\n\nWith a statistics target of 100, in queries no 3 and 9 a different, a very\nmuch faster plan was chosen. How is the statistics target to be determined\nsuch that the faster plan is chosen? Am I going to have to increase the\nstatistics target as one or more table receive more rows?\n\nThanks\n\nMarkus\n\nPlans:\n1 (for no 3)\n Limit (cost=1304.78..1565.74 rows=20 width=27) (actual time=\n2121.866..2377.740 rows=20 loops=1)\n -> Nested Loop IN Join (cost=0.00..57984.39 rows=4444 width=27) (actual\ntime=9.856..2377.421 rows=120 loops=1)\n -> Index Scan Backward using items_pub_date_idx on items i (cost=\n0.00..37484.20 rows=700071 width=27) (actual\ntime=0.131..1152.933rows=127337 loops=1)\n -> Index Scan using feeds_users_user_id_key on feeds_users f\n(cost=0.00..0.29 rows=1 width=4) (actual time=0.006..0.006 rows=0\nloops=127337)\n Index Cond: ((f.user_id = 49) AND (f.feed_id = i.feed_id))\n Total runtime: 2377.899 ms\n(6 rows)\n\n2 (for no. 4)\n Limit (cost=542.78..651.33 rows=20 width=27) (actual time=\n2133.759..2393.259 rows=20 loops=1)\n -> Nested Loop (cost=0.00..249705.41 rows=46005 width=27) (actual time=\n24.087..2392.950 rows=120 loops=1)\n -> Index Scan Backward using items_pub_date_idx on items i (cost=\n0.00..37484.20 rows=700071 width=27) (actual\ntime=0.067..1171.572rows=127337 loops=1)\n -> Index Scan using feeds_users_user_id_key on feeds_users f\n(cost=0.00..0.29 rows=1 width=4) (actual time=0.005..0.005 rows=0\nloops=127337)\n Index Cond: ((f.user_id = 49) AND (f.feed_id = i.feed_id))\n Total runtime: 2393.392 ms\n\n3 (for no. 7)\n Limit (cost=2227.06..2227.11 rows=20 width=27) (actual\ntime=0.052..0.052rows=0 loops=1)\n -> Sort (cost=2226.81..2229.43 rows=1048 width=27) (actual time=\n0.047..0.047 rows=0 loops=1)\n Sort Key: i.pub_date\n Sort Method: quicksort Memory: 17kB\n -> Nested Loop (cost=236.45..2185.37 rows=1048 width=27) (actual\ntime=0.036..0.036 rows=0 loops=1)\n -> HashAggregate (cost=231.72..231.87 rows=15 width=4)\n(actual time=0.032..0.032 rows=0 loops=1)\n -> Index Scan using feeds_users_user_id_key on\nfeeds_users f (cost=0.00..231.35 rows=148 width=4) (actual time=\n0.027..0.027 rows=0 loops=1)\n Index Cond: (user_id = 3)\n -> Bitmap Heap Scan on items i (cost=4.73..129.30 rows=75\nwidth=27) (never executed)\n Recheck Cond: (i.feed_id = f.feed_id)\n -> Bitmap Index Scan on items_feed_id_idx (cost=\n0.00..4.71 rows=75 width=0) (never executed)\n Index Cond: (i.feed_id = f.feed_id)\n Total runtime: 0.136 ms\n\n4 (for no. 
9)\n Limit (cost=2227.06..2227.11 rows=20 width=27) (actual\ntime=8.806..8.906rows=20 loops=1)\n -> Sort (cost=2226.81..2229.43 rows=1048 width=27) (actual time=\n8.456..8.662 rows=120 loops=1)\n Sort Key: i.pub_date\n Sort Method: top-N heapsort Memory: 25kB\n -> Nested Loop (cost=236.45..2185.37 rows=1048 width=27) (actual\ntime=0.225..6.142 rows=630 loops=1)\n -> HashAggregate (cost=231.72..231.87 rows=15 width=4)\n(actual time=0.104..0.126 rows=9 loops=1)\n -> Index Scan using feeds_users_user_id_key on\nfeeds_users f (cost=0.00..231.35 rows=148 width=4) (actual time=\n0.037..0.062 rows=9 loops=1)\n Index Cond: (user_id = 49)\n -> Bitmap Heap Scan on items i (cost=4.73..129.30 rows=75\nwidth=27) (actual time=0.076..0.369 rows=70 loops=9)\n Recheck Cond: (i.feed_id = f.feed_id)\n -> Bitmap Index Scan on items_feed_id_idx (cost=\n0.00..4.71 rows=75 width=0) (actual time=0.046..0.046 rows=70 loops=9)\n Index Cond: (i.feed_id = f.feed_id)\n Total runtime: 9.061 ms\n\n\n-- Full pastable schema and data generation:\n\ncreate table feeds (\n id serial primary key,\n title text\n);\n\ncreate table users (\n id serial primary key,\n name text\n);\n\ncreate table feeds_users (\n user_id int references users(id) not null,\n feed_id int references feeds(id) not null,\n unique(user_id, feed_id)\n);\n\ncreate table items (\n id serial primary key,\n feed_id int references feeds(id) not null,\n title text,\n pub_date timestamp\n);\n\ninsert into users (name) select 'User ' || i::text as name from\ngenerate_series(1, 1000) as i;\ninsert into feeds (title) select 'Feed ' || i::text as name from\ngenerate_series(1, 10000) as i;\n\ninsert into feeds_users (user_id, feed_id)\n select\n --(i / 100) + 1 as user_id,\n floor(log(i)/log(1.1))+1 as user_id,\n ((i + floor(log(i)/log(1.1)))::int % 10000) + 1 as feed_id\n from generate_series(1, 99000) as i;\n\ninsert into items (feed_id, title, pub_date)\n select\n ((i * 17) % 10000) + 1 as feed_id,\n 'Item ' || i::text as title,\n '12/12/2006'::timestamp\n + cast(((i * 547) % 12343)::text || ' hours' as interval)\n + cast((random()*60)::numeric(6,3)::text || ' minutes' as interval)\nas pub_date\n from\n generate_series(1, 700000) as i;\n\ncreate index items_feed_id_idx on items(feed_id);\ncreate index items_pub_date_idx on items(pub_date);\ncreate index items_pub_date_feed_id_idx on items(pub_date, feed_id);\ncreate index feeds_users_feed_id on feeds_users(feed_id);\n\nanalyze;\n\n-- later\nalter table feeds_users alter column feed_id set statistics 100;\nalter table feeds_users alter column user_id set statistics 100;\nanalyze;\n\nHi,(PostgreSQL 8.3)I'm trying to optimize one of the most often used queries in our system:(Full minimized pastable schema and data below.)create table feeds_users (    user_id int references users(id) not null,\n    feed_id int references feeds(id) not null,    unique(user_id, feed_id));create table items (    id serial primary key,    feed_id int references feeds(id) not null,    title text,    pub_date timestamp\n);create index items_feed_id_idx on items(feed_id);create index items_pub_date_idx on items(pub_date);create index items_pub_date_feed_id_idx on items(pub_date, feed_id);create index feeds_users_feed_id on feeds_users(feed_id);\n-- Query variant 1:EXPLAIN ANALYZE SELECT i.* FROM items i WHERE feed_id IN (    SELECT f.feed_id FROM feeds_users f WHERE f.user_id = ?)ORDER BY pub_date DESCLIMIT 20 OFFSET 100;-- Query variant 2:\nEXPLAIN ANALYZE SELECT i.* FROM items i    JOIN feeds_users f ON i.feed_id = f.feed_id AND 
f.user_id = ?ORDER BY pub_date DESCLIMIT 20 OFFSET 100;The table items contains 700000 rows, feeds_users 99000, feeds 10000 and users\n1000. The number of feeds for each user is distributed logarithmically, i.e.there are many users with none to little feeds and some users with many feeds.In reality, 99% of the rows in items are being inserted in pub_date order, and the correlation of user_id in feeds_users is not 1 (it is 1 with the test data).\nI need this query to work blisteringly fast for the common case, and at least not too slow for extreme cases. Extreme cases are: * no feeds for a user * very little feeds for a user, with the top 20 items spread over >10% of table items\n * normal number of feeds for a user, but big offset (1000 or 10000 or 100000). The maximum offset could be capped in the application if needed, but a solution without that would be preferred.The common case is that the needed rows are within the first (by pub_date desc) <1% of items.\nI ran some tests of both query variants on a Pentium M 1.6 GHz notebook with 1280 MB RAM, shared_buffers = 32MB, temp_buffers 8MB, work_mem 8MB. Three different user_ids were used for testing; the needed rows for each user are either 1) not existant, 2) spread over 18% of the table, 3) spread over 0.064% of the table. Also I tried a statistics target of 10 and 100 for the two columns in feeds_users. Two query variants were used, one with an inner join and one with IN. I got 4 different plans all in all. Results:\nno. stat   user_id  item rows    result rows  variant  plan  time    target          scanned      w/o limit    query\n1   10     3        700000       0            in       1     20000 ms\n2                                             join     2     15000 ms3          49       46855 (18%)  630          in       1     2300 ms\n4                                             join     2     2300 ms5          109      448 (0.064%) 206780       in       1     6 ms\n6                                             join     2     9 ms7   100    3        700000       0            in       3     0.2 ms\n8                                             join     2     16500 ms9          49       46855 (18%)  630          in       4     10 ms\n10                                            join     2     2300 ms11         109      448 (0.064%) 206780       in       1     6 ms\n12                                            join     2     9 msPlans below. Now the questions:Do the differences in characteristics of the test data and the real data somehow invalidate these numbers?\nI observe, that the query variant with IN is faster in all cases. What's the difference in them that leads to plans being chosen that differ so much performance-wise?Can I somehow trigger the items_pub_date_feed_id_idx to be used? ISTM that scanning by that index in pub_date desc order and using that same index to test for a needed feed_id would be faster than accessing the heap for each tuple.\nWith a statistics target of 100, in queries no 3 and 9 a different, a very much faster plan was chosen. How is the statistics target to be determined such that the faster plan is chosen? 
Am I going to have to increase the statistics target as one or more table receive more rows?\nThanksMarkusPlans:1 (for no 3) Limit  (cost=1304.78..1565.74 rows=20 width=27) (actual time=2121.866..2377.740 rows=20 loops=1)   ->  Nested Loop IN Join  (cost=0.00..57984.39 rows=4444 width=27) (actual time=9.856..2377.421 rows=120 loops=1)\n         ->  Index Scan Backward using items_pub_date_idx on items i  (cost=0.00..37484.20 rows=700071 width=27) (actual time=0.131..1152.933 rows=127337 loops=1)         ->  Index Scan using feeds_users_user_id_key on feeds_users f  (cost=0.00..0.29 rows=1 width=4) (actual time=0.006..0.006 rows=0 loops=127337)\n               Index Cond: ((f.user_id = 49) AND (f.feed_id = i.feed_id)) Total runtime: 2377.899 ms(6 rows)2 (for no. 4) Limit  (cost=542.78..651.33 rows=20 width=27) (actual time=2133.759..2393.259 rows=20 loops=1)\n   ->  Nested Loop  (cost=0.00..249705.41 rows=46005 width=27) (actual time=24.087..2392.950 rows=120 loops=1)         ->  Index Scan Backward using items_pub_date_idx on items i  (cost=0.00..37484.20 rows=700071 width=27) (actual time=0.067..1171.572 rows=127337 loops=1)\n         ->  Index Scan using feeds_users_user_id_key on feeds_users f  (cost=0.00..0.29 rows=1 width=4) (actual time=0.005..0.005 rows=0 loops=127337)               Index Cond: ((f.user_id = 49) AND (f.feed_id = i.feed_id))\n Total runtime: 2393.392 ms3 (for no. 7) Limit  (cost=2227.06..2227.11 rows=20 width=27) (actual time=0.052..0.052 rows=0 loops=1)   ->  Sort  (cost=2226.81..2229.43 rows=1048 width=27) (actual time=0.047..0.047 rows=0 loops=1)\n         Sort Key: i.pub_date         Sort Method:  quicksort  Memory: 17kB         ->  Nested Loop  (cost=236.45..2185.37 rows=1048 width=27) (actual time=0.036..0.036 rows=0 loops=1)               ->  HashAggregate  (cost=231.72..231.87 rows=15 width=4) (actual time=0.032..0.032 rows=0 loops=1)\n                     ->  Index Scan using feeds_users_user_id_key on feeds_users f  (cost=0.00..231.35 rows=148 width=4) (actual time=0.027..0.027 rows=0 loops=1)                           Index Cond: (user_id = 3)\n               ->  Bitmap Heap Scan on items i  (cost=4.73..129.30 rows=75 width=27) (never executed)                     Recheck Cond: (i.feed_id = f.feed_id)                     ->  Bitmap Index Scan on items_feed_id_idx  (cost=0.00..4.71 rows=75 width=0) (never executed)\n                           Index Cond: (i.feed_id = f.feed_id) Total runtime: 0.136 ms4 (for no. 
9) Limit  (cost=2227.06..2227.11 rows=20 width=27) (actual time=8.806..8.906 rows=20 loops=1)   ->  Sort  (cost=2226.81..2229.43 rows=1048 width=27) (actual time=8.456..8.662 rows=120 loops=1)\n         Sort Key: i.pub_date         Sort Method:  top-N heapsort  Memory: 25kB         ->  Nested Loop  (cost=236.45..2185.37 rows=1048 width=27) (actual time=0.225..6.142 rows=630 loops=1)               ->  HashAggregate  (cost=231.72..231.87 rows=15 width=4) (actual time=0.104..0.126 rows=9 loops=1)\n                     ->  Index Scan using feeds_users_user_id_key on feeds_users f  (cost=0.00..231.35 rows=148 width=4) (actual time=0.037..0.062 rows=9 loops=1)                           Index Cond: (user_id = 49)\n               ->  Bitmap Heap Scan on items i  (cost=4.73..129.30 rows=75 width=27) (actual time=0.076..0.369 rows=70 loops=9)                     Recheck Cond: (i.feed_id = f.feed_id)                     ->  Bitmap Index Scan on items_feed_id_idx  (cost=0.00..4.71 rows=75 width=0) (actual time=0.046..0.046 rows=70 loops=9)\n                           Index Cond: (i.feed_id = f.feed_id) Total runtime: 9.061 ms-- Full pastable schema and data generation:create table feeds (    id serial primary key,    title text\n);create table users (    id serial primary key,    name text);create table feeds_users (    user_id int references users(id) not null,    feed_id int references feeds(id) not null,\n    unique(user_id, feed_id));create table items (    id serial primary key,    feed_id int references feeds(id) not null,    title text,    pub_date timestamp);insert into users (name) select 'User ' || i::text as name from generate_series(1, 1000) as i;\ninsert into feeds (title) select 'Feed ' || i::text as name from generate_series(1, 10000) as i;insert into feeds_users (user_id, feed_id)    select        --(i / 100) + 1 as user_id,        floor(log(i)/log(1.1))+1 as user_id,\n        ((i + floor(log(i)/log(1.1)))::int % 10000) + 1 as feed_id    from generate_series(1, 99000) as i;insert into items (feed_id, title, pub_date)    select        ((i * 17) % 10000) + 1 as feed_id,\n        'Item ' || i::text as title,        '12/12/2006'::timestamp        + cast(((i * 547) % 12343)::text || ' hours' as interval)        + cast((random()*60)::numeric(6,3)::text || ' minutes' as interval) as pub_date\n    from        generate_series(1, 700000) as i;create index items_feed_id_idx on items(feed_id);create index items_pub_date_idx on items(pub_date);create index items_pub_date_feed_id_idx on items(pub_date, feed_id);\ncreate index feeds_users_feed_id on feeds_users(feed_id);analyze;-- lateralter table feeds_users alter column feed_id set statistics 100;alter table feeds_users alter column user_id set statistics 100;\nanalyze;", "msg_date": "Thu, 7 Feb 2008 20:51:57 +0600", "msg_from": "\"Markus Bertheau\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index Scan Backward + check additional condition before heap access" } ]
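A follow-up sketch for the thread above, untested and not something Markus tried: the existing composite index is ordered (pub_date, feed_id), so it does not help the per-feed lookups. With the column order reversed, each feed_id probe can return its rows already sorted by pub_date, which suits the nested-loop-plus-top-N plan that was the fast one here.

-- untested sketch: equality column first, sort column second
create index items_feed_id_pub_date_idx on items (feed_id, pub_date);
analyze items;

explain analyze
select i.*
from items i
where i.feed_id in (select f.feed_id from feeds_users f where f.user_id = 49)
order by i.pub_date desc
limit 20 offset 100;

Whether the planner actually prefers that plan still depends on the row estimates, so the statistics-target increase at the end of the thread remains the first thing to get right.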
[ { "msg_contents": "I am using Postgres 8.2.5. \n \nI have a table that has rows containing a variable length array with a known maximum. \nI was doing selects on the array elements using an ANY match. The performance \nwas not too good as my table got bigger. So I added an index on the array. \nThat didn't help since the select was not using it. I saw a thread in the \nmailing lists stating the index wouldn't be used. \n \nSo I created indices on the individual array elements and then do a select\non each element separately and then combine each match using OR. \nThis did substantially increase the select performance. However, it may \nbe difficult to maintain this approach over time as the maximum array \nsize may increase dramatically and forming the query will become tedious. \n \nIs there any alternative to what am I currently doing other than creating a row for \neach array element, i.e. stop using an array and use a separate row for each \narray index? The reason I didn't want to take this approach is because there are \nother columns in the row that will be duplicated needlessly.\n \nThanks, Andrew\n\n\n ____________________________________________________________________________________\nBe a better friend, newshound, and \nknow-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ \n\nI am using Postgres 8.2.5. \n \nI have a table that has rows containing a variable length array with a known maximum. \nI was doing selects on the array elements using an ANY match. The performance \nwas not too good as my table got bigger. So I added an index on the array. \nThat didn't help since the select was not using it.  I saw a thread in the \nmailing lists stating the index wouldn't be used. \n \nSo I created indices on the individual array elements and then do a select\non each element separately and then combine each match using OR. \nThis did substantially increase the select performance. However, it may \nbe difficult to maintain this approach over time as the maximum array \nsize may increase dramatically and forming the query will become tedious. \n \nIs there any alternative to what am I currently doing other than creating a row for \neach array element, i.e. stop using an array and use a separate row for each \narray index? The reason I didn't want to take this approach is because there are \nother columns in the row that will be duplicated needlessly.\n \nThanks, Andrew\n \nNever miss a thing. Make Yahoo your homepage.", "msg_date": "Thu, 7 Feb 2008 10:38:52 -0800 (PST)", "msg_from": "andrew klassen <[email protected]>", "msg_from_op": true, "msg_subject": "index usage on arrays" }, { "msg_contents": "andrew,\nwhat are your queries ? Have you seen contrib/intarray, \nGIN index ?\n\nOn Thu, 7 Feb 2008, andrew klassen wrote:\n\n> I am using Postgres 8.2.5.\n>\n> I have a table that has rows containing a variable length array with a known maximum.\n> I was doing selects on the array elements using an ANY match. The performance\n> was not too good as my table got bigger. So I added an index on the array.\n> That didn't help since the select was not using it. I saw a thread in the\n> mailing lists stating the index wouldn't be used.\n>\n> So I created indices on the individual array elements and then do a select\n> on each element separately and then combine each match using OR.\n> This did substantially increase the select performance. 
However, it may\n> be difficult to maintain this approach over time as the maximum array\n> size may increase dramatically and forming the query will become tedious.\n>\n> Is there any alternative to what am I currently doing other than creating a row for\n> each array element, i.e. stop using an array and use a separate row for each\n> array index? The reason I didn't want to take this approach is because there are\n> other columns in the row that will be duplicated needlessly.\n>\n> Thanks, Andrew\n>\n>\n> ____________________________________________________________________________________\n> Be a better friend, newshound, and\n> know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 7 Feb 2008 21:55:44 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage on arrays" }, { "msg_contents": "andrew klassen <[email protected]> writes:\n> Is there any alternative to what am I currently doing other than creating a row for \n> each array element,\n\nSince (I think) 8.2, you could create a GIN index on the array column\nand then array overlap (&&) would be indexable. GIN has some\nperformance issues if the table is heavily updated, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Feb 2008 14:01:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage on arrays " } ]
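A minimal sketch of the GIN approach Tom and Oleg describe; the table and column names below are made up for illustration, since Andrew's schema isn't shown.

-- one-dimensional integer array plus a GIN index on it (core GIN arrived in 8.2)
create table events (
    id    serial primary key,
    codes integer[]
);

create index events_codes_gin on events using gin (codes);

-- overlap (&&) and containment (@>) can use the GIN index, so the
-- per-element OR construction is no longer needed:
select id from events where codes && array[17, 42];
select id from events where codes @> array[17];

As Tom notes, GIN maintenance is comparatively expensive on heavily updated tables, so insert/update throughput is worth measuring alongside the selects.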
[ { "msg_contents": "Hi all,\n\nWhen I'm doing an explain analyze to a query of mine I notice that the\nnumber of estimated rows by the planner is a lot smaller then the actual\nnumber of rows, I'm afraid that this make my queries slower.\n\nA query for example is:\n\nEXPLAIN ANALYZE\nSELECT product_id,product_name\nFROM product\nWHERE product_keywords_vector @@ plainto_tsquery('default', 'black') AND\nrank(product_keywords_vector, plainto_tsquery('default', 'black')) > 0.4 AND\nproduct_status = TRUE AND product_type = 'comparison'\nORDER BY ((product_buy_number * 4) + product_view_number + 1) *\nrank(product_keywords_vector, plainto_tsquery('default', 'black')) DESC;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=10543.67..10544.81 rows=455 width=297) (actual time=\n1098.188..1104.606 rows=22248 loops=1)\n Sort Key: (((((product_buy_number * 4) + product_view_number) +\n1))::double precision * rank(product_keywords_vector, '''black'''::tsquery))\n -> Bitmap Heap Scan on product (cost=287.13..10523.59 rows=455\nwidth=297) (actual time=50.496..1071.900 rows=22248 loops=1)\n Recheck Cond: (product_keywords_vector @@ '''black'''::tsquery)\n Filter: ((rank(product_keywords_vector, '''black'''::tsquery) >\n0.4::double precision) AND product_status AND (product_type =\n'comparison'::text))\n -> Bitmap Index Scan on product_product_keywords_vector (cost=\n0.00..287.02 rows=2688 width=0) (actual time=26.385..26.385 rows=72507\nloops=1)\n Index Cond: (product_keywords_vector @@ '''black'''::tsquery)\n Total runtime: 1111.507 ms\n(8 rows)\n\nHere as I understand it, at the Bitmap Index Scan on\nproduct_product_keywords_vector the planner estimate that it will retrieve\n2688 rows but it actually retrieve 72507 rows and later at the Bitmap Heap\nScan on product it estimate 455 rows and retrieve 22248 rows.\n\nI increased the statistics of the field which the\nproduct_product_keywords_vector index is built on by doing:\nALTER TABLE product ALTER COLUMN product_keywords_vector SET STATISTICS\n1000;\nANALYZE;\nREINDEX INDEX product_product_keywords_vector;\n\nBut it didn't change a thing.\n\nAny ideas?\n\nThanks in advance,\nYonatan Ben-Nes\n\nHi all,When I'm doing an explain analyze to a query of mine I notice that the number of estimated rows by the planner is a lot smaller then the actual number of rows, I'm afraid that this make my queries slower.\nA query for example is:EXPLAIN ANALYZESELECT product_id,product_nameFROM productWHERE product_keywords_vector @@ plainto_tsquery('default', 'black') AND rank(product_keywords_vector, plainto_tsquery('default', 'black')) > 0.4 AND\nproduct_status = TRUE AND product_type = 'comparison'ORDER BY ((product_buy_number * 4) + product_view_number + 1) * rank(product_keywords_vector, plainto_tsquery('default', 'black')) DESC;\n                                                                         QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=10543.67..10544.81 rows=455 width=297) (actual time=1098.188..1104.606 rows=22248 loops=1)   Sort Key: (((((product_buy_number * 4) + product_view_number) + 1))::double precision * rank(product_keywords_vector, '''black'''::tsquery))\n   ->  Bitmap Heap Scan on product  (cost=287.13..10523.59 rows=455 width=297) (actual time=50.496..1071.900 rows=22248 
loops=1)         Recheck Cond: (product_keywords_vector @@ '''black'''::tsquery)\n         Filter: ((rank(product_keywords_vector, '''black'''::tsquery) > 0.4::double precision) AND product_status AND (product_type = 'comparison'::text))         ->  Bitmap Index Scan on product_product_keywords_vector  (cost=0.00..287.02 rows=2688 width=0) (actual time=26.385..26.385 rows=72507 loops=1)\n               Index Cond: (product_keywords_vector @@ '''black'''::tsquery) Total runtime: 1111.507 ms(8 rows)Here as I understand it, at the Bitmap Index Scan on product_product_keywords_vector the planner estimate that it will retrieve 2688 rows but it actually retrieve 72507 rows and later at the Bitmap Heap Scan on product it estimate 455 rows and retrieve 22248 rows.\nI increased the statistics of the field which the product_product_keywords_vector index is built on by doing:ALTER TABLE product ALTER COLUMN product_keywords_vector SET STATISTICS 1000;ANALYZE;REINDEX INDEX product_product_keywords_vector;\nBut it didn't change a thing.Any ideas?Thanks in advance,Yonatan Ben-Nes", "msg_date": "Fri, 8 Feb 2008 14:12:09 +0200", "msg_from": "\"Yonatan Ben-Nes\" <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong number of rows estimation by the planner" } ]
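Since the question above went unanswered in the thread, here is a hedged diagnostic sketch rather than a fix: check how far off the estimate is for the bare @@ condition on its own, without the rank() filter and the product_status/product_type conditions, and confirm that the raised statistics target actually produced statistics for the column.

explain analyze
select count(*)
from product
where product_keywords_vector @@ plainto_tsquery('default', 'black');

-- the raised target only matters if rows for this column show up here
select attname, null_frac, n_distinct
from pg_stats
where tablename = 'product'
  and attname = 'product_keywords_vector';

If the bare @@ estimate is already far off, the misestimate is coming from the full-text selectivity itself, and further statistics-target increases on this column are unlikely to be the lever that fixes it; the extra filters only compound the guess.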
[ { "msg_contents": "Hi,\n\nAssuming two or more clients issue INSERTs and COPYs on the same table \nin the database at the same time, does PostgreSQL execute them in \nparallel (i.e. no table-level locks, etc.) assuming there are no indexes \nand constrains on the table? What about when there are indexes and/or \nconstraints?\n\n", "msg_date": "Fri, 08 Feb 2008 21:59:33 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel inserts" }, { "msg_contents": "Ivan Voras wrote:\n> Hi,\n> \n> Assuming two or more clients issue INSERTs and COPYs on the same table \n> in the database at the same time, does PostgreSQL execute them in \n> parallel (i.e. no table-level locks, etc.) assuming there are no indexes \n> and constrains on the table? What about when there are indexes and/or \n> constraints?\n\nYes, parallel in all cases.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 8 Feb 2008 23:19:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel inserts" } ]
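A small sketch, not from the thread, of how to observe Bruce's answer: while two sessions are inserting or COPYing into the same table, each holds only RowExclusiveLock on it, and that lock mode does not conflict with itself, so neither writer blocks the other. The table name below is illustrative.

-- run while the concurrent loads are in progress
select pid, mode, granted
from pg_locks
where relation = 'mytable'::regclass;

With indexes and constraints the per-row work grows (each index gets an entry, each foreign key is checked), and the one common case where writers do wait on each other is two transactions inserting the same key into a unique index: the second blocks until the first commits or rolls back.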
[ { "msg_contents": "We have a large datawarehouse stored in postgres and temp tables are created\nbased on user query. The process of temp table creation involves selecting\ndata from main fact table, this includes several select and update\nstatements and one of the following update statement is having performance\nissues.\n\nThe newly temp table created for this scenario contains 22712 rows. Here is\nthe query\n\nalter table dummy add column gp numeric(40,15);\nupdate dummy set gp=(select (\ncase when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else 0 end\n) from dummy as temp\nwhere temp.product=dummy.product)\n\nNow this query basically updates a table using values within itself in the\nsubquery but it takes tooooo much time i.e. approx 5 mins. The whole temp\ntable creation process is stucked in this query (there are 4 additional such\nupdates with same problem). Index creation is useless here since its only a\none time process.\n\nHere is the strip down version (the part making performance issue) of above\nquery i.e. only select statement\n-------------------------------\nselect (case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else\n0 end ) from dummy as temp, dummy as temp2\nwhere temp.product=temp2.product group by temp.product\n\n\"HashAggregate (cost=1652480.98..1652481.96 rows=39 width=39)\"\n\" -> Hash Join (cost=1636.07..939023.13 rows=71345785 width=39)\"\n\" Hash Cond: ((\"temp\".product)::text = (temp2.product)::text)\"\n\" -> Seq Scan on dummy \"temp\" (cost=0.00..1311.03 rows=26003\nwidth=39)\"\n\" -> Hash (cost=1311.03..1311.03 rows=26003 width=21)\"\n\" -> Seq Scan on dummy temp2 (cost=0.00..1311.03 rows=26003\nwidth=21)\"\n-------------------------------\n\n\nWhats the solution of this problem, or any alternate way to write this\nquery?\n\nWe have a large datawarehouse stored in postgres and temp tables are created based on user query. The process of temp table creation involves selecting data from main fact table, this includes several select and update statements and one of the following update statement is having performance issues.\nThe newly temp table created for this scenario contains 22712 rows. Here is the queryalter table dummy add column gp numeric(40,15);update dummy set gp=(select (case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else 0 end  )   from dummy as temp\nwhere temp.product=dummy.product)Now this query basically updates a table using values within itself in the subquery but it takes tooooo much time i.e. approx 5 mins. The whole temp table creation process is stucked in this query (there are 4 additional such updates with same problem). Index creation is useless here since its only a one time process.\nHere is the strip down version (the part making performance issue) of above query i.e. 
only select statement-------------------------------select (case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else 0 end  )   from dummy as temp, dummy as temp2\nwhere temp.product=temp2.product group by temp.product\"HashAggregate  (cost=1652480.98..1652481.96 rows=39 width=39)\"\"  ->  Hash Join  (cost=1636.07..939023.13 rows=71345785 width=39)\"\n\"        Hash Cond: ((\"temp\".product)::text = (temp2.product)::text)\"\"        ->  Seq Scan on dummy \"temp\"  (cost=0.00..1311.03 rows=26003 width=39)\"\"        ->  Hash  (cost=1311.03..1311.03 rows=26003 width=21)\"\n\"              ->  Seq Scan on dummy temp2  (cost=0.00..1311.03 rows=26003 width=21)\"-------------------------------Whats the solution of this problem, or any alternate way to write this query?", "msg_date": "Mon, 11 Feb 2008 16:06:44 +0500", "msg_from": "\"Linux Guru\" <[email protected]>", "msg_from_op": true, "msg_subject": "Update with Subquery Performance" }, { "msg_contents": "\"Linux Guru\" <[email protected]> writes:\n> We have a large datawarehouse stored in postgres and temp tables are created\n> based on user query. The process of temp table creation involves selecting\n> data from main fact table, this includes several select and update\n> statements and one of the following update statement is having performance\n> issues.\n\nTry ANALYZEing the temp table before the step that's too slow.\n\nIf that doesn't help, let's see EXPLAIN ANALYZE (not just EXPLAIN)\noutput.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Feb 2008 11:59:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update with Subquery Performance " }, { "msg_contents": "On Feb 11, 2008 5:06 AM, Linux Guru <[email protected]> wrote:\n> We have a large datawarehouse stored in postgres and temp tables are created\n> based on user query. The process of temp table creation involves selecting\n> data from main fact table, this includes several select and update\n> statements and one of the following update statement is having performance\n> issues.\n>\n> The newly temp table created for this scenario contains 22712 rows. Here is\n> the query\n>\n> alter table dummy add column gp numeric(40,15);\n> update dummy set gp=(select (\n> case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else 0 end )\n> from dummy as temp\n> where temp.product=dummy.product)\n\nIs this supposed to be updating every single row with one value?\nCause I'm guessing it's running that sub select over and over instead\nof one time. I'm guessing that with more work_mem the planner might\nuse a more efficient plan. Try adding\n\nanalyze;\nset work_mem = 128000;\n between the alter and update and see if that helps.\n\nAlso, as Tom said, post explain analyze output of the statement.\n\n\n>\n> Now this query basically updates a table using values within itself in the\n> subquery but it takes tooooo much time i.e. approx 5 mins. The whole temp\n> table creation process is stucked in this query (there are 4 additional such\n> updates with same problem). Index creation is useless here since its only a\n> one time process.\n>\n> Here is the strip down version (the part making performance issue) of above\n> query i.e. 
only select statement\n> -------------------------------\n> select (case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else\n> 0 end ) from dummy as temp, dummy as temp2\n> where temp.product=temp2.product group by temp.product\n>\n> \"HashAggregate (cost=1652480.98..1652481.96 rows=39 width=39)\"\n> \" -> Hash Join (cost=1636.07..939023.13 rows=71345785 width=39)\"\n> \" Hash Cond: ((\"temp\".product)::text = (temp2.product)::text)\"\n> \" -> Seq Scan on dummy \"temp\" (cost=0.00..1311.03 rows=26003\n> width=39)\"\n> \" -> Hash (cost=1311.03..1311.03 rows=26003 width=21)\"\n> \" -> Seq Scan on dummy temp2 (cost=0.00..1311.03 rows=26003\n> width=21)\"\n> -------------------------------\n>\n>\n> Whats the solution of this problem, or any alternate way to write this\n> query?\n>\n>\n>\n", "msg_date": "Mon, 11 Feb 2008 15:29:40 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update with Subquery Performance" }, { "msg_contents": "Analyzing did not help, here is the out of EXPLAIN ANALYZE of update query\n\n\"Seq Scan on dummy (cost=0.00..56739774.24 rows=23441 width=275) (actual\ntime=18.927..577929.014 rows=22712 loops=1)\"\n\" SubPlan\"\n\" -> Aggregate (cost=2420.41..2420.43 rows=1 width=19) (actual time=\n25.423..25.425 rows=1 loops=22712)\"\n\" -> Seq Scan on dummy \"temp\" (cost=0.00..2416.01 rows=586\nwidth=19) (actual time=0.049..17.834 rows=2414 loops=22712)\"\n\" Filter: ((product)::text = ($0)::text)\"\n\"Total runtime: 578968.885 ms\"\n\n\nOn Feb 11, 2008 9:59 PM, Tom Lane <[email protected]> wrote:\n\n> \"Linux Guru\" <[email protected]> writes:\n> > We have a large datawarehouse stored in postgres and temp tables are\n> created\n> > based on user query. The process of temp table creation involves\n> selecting\n> > data from main fact table, this includes several select and update\n> > statements and one of the following update statement is having\n> performance\n> > issues.\n>\n> Try ANALYZEing the temp table before the step that's too slow.\n>\n> If that doesn't help, let's see EXPLAIN ANALYZE (not just EXPLAIN)\n> output.\n>\n> regards, tom lane\n>\n\nAnalyzing did not help, here is the out of EXPLAIN ANALYZE of update query\"Seq Scan on dummy  (cost=0.00..56739774.24 rows=23441 width=275) (actual time=18.927..577929.014 rows=22712 loops=1)\"\"  SubPlan\"\n\"    ->  Aggregate  (cost=2420.41..2420.43 rows=1 width=19) (actual time=25.423..25.425 rows=1 loops=22712)\"\"          ->  Seq Scan on dummy \"temp\"  (cost=0.00..2416.01 rows=586 width=19) (actual time=0.049..17.834 rows=2414 loops=22712)\"\n\"                Filter: ((product)::text = ($0)::text)\"\"Total runtime: 578968.885 ms\"On Feb 11, 2008 9:59 PM, Tom Lane <[email protected]> wrote:\n\"Linux Guru\" <[email protected]> writes:\n> We have a large datawarehouse stored in postgres and temp tables are created> based on user query. The process of temp table creation involves selecting> data from main fact table, this includes several select and update\n> statements and one of the following update statement is having performance> issues.Try ANALYZEing the temp table before the step that's too slow.If that doesn't help, let's see EXPLAIN ANALYZE (not just EXPLAIN)\noutput.                        
regards, tom lane", "msg_date": "Tue, 12 Feb 2008 13:32:29 +0500", "msg_from": "\"Linux Guru\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update with Subquery Performance" }, { "msg_contents": "See, its calculating sum by grouping the product field. Here is an example\n\nProduct GP\n--------- -------\nA 30\nB 40\nA 30\nC 50\nC 50\n\nNow the query calculates aggregated sum and divide by grouping product so\nall A's must have same the result, so with B's and C's.\n\n> Is this supposed to be updating every single row with one value?\n> Cause I'm guessing it's running that sub select over and over instead\n> of one time.\n>\nyes you are right that its calculating every time for all elements in each\ngroup i.e. GP(A) is calculated twice for A, where it should only calculated\nonce for each group. Is there any way to achieve this?\n\nanalyze;\n> set work_mem = 128000;\n> between the alter and update and see if that helps.\n\nthat did not help\n\n\n> Also, as Tom said, post explain analyze output of the statement.\n\n\n\"Seq Scan on dummy (cost=0.00..56739774.24 rows=23441 width=275) (actual\ntime=18.927..577929.014 rows=22712 loops=1)\"\n\" SubPlan\"\n\" -> Aggregate (cost=2420.41..2420.43 rows=1 width=19) (actual time=\n25.423..25.425 rows=1 loops=22712)\"\n\" -> Seq Scan on dummy \"temp\" (cost=0.00..2416.01 rows=586\nwidth=19) (actual time=0.049..17.834 rows=2414 loops=22712)\"\n\" Filter: ((product)::text = ($0)::text)\"\n\"Total runtime: 578968.885 ms\"\n\nThanks\n\nOn Feb 12, 2008 2:29 AM, Scott Marlowe <[email protected]> wrote:\n\n> On Feb 11, 2008 5:06 AM, Linux Guru <[email protected]> wrote:\n> > We have a large datawarehouse stored in postgres and temp tables are\n> created\n> > based on user query. The process of temp table creation involves\n> selecting\n> > data from main fact table, this includes several select and update\n> > statements and one of the following update statement is having\n> performance\n> > issues.\n> >\n> > The newly temp table created for this scenario contains 22712 rows. Here\n> is\n> > the query\n> >\n> > alter table dummy add column gp numeric(40,15);\n> > update dummy set gp=(select (\n> > case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else 0\n> end )\n> > from dummy as temp\n> > where temp.product=dummy.product)\n>\n> Is this supposed to be updating every single row with one value?\n> Cause I'm guessing it's running that sub select over and over instead\n> of one time. I'm guessing that with more work_mem the planner might\n> use a more efficient plan. Try adding\n>\n> analyze;\n> set work_mem = 128000;\n> between the alter and update and see if that helps.\n>\n> Also, as Tom said, post explain analyze output of the statement.\n>\n>\n> >\n> > Now this query basically updates a table using values within itself in\n> the\n> > subquery but it takes tooooo much time i.e. approx 5 mins. The whole\n> temp\n> > table creation process is stucked in this query (there are 4 additional\n> such\n> > updates with same problem). Index creation is useless here since its\n> only a\n> > one time process.\n> >\n> > Here is the strip down version (the part making performance issue) of\n> above\n> > query i.e. 
only select statement\n> > -------------------------------\n> > select (case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100\n> else\n> > 0 end ) from dummy as temp, dummy as temp2\n> > where temp.product=temp2.product group by temp.product\n> >\n> > \"HashAggregate (cost=1652480.98..1652481.96 rows=39 width=39)\"\n> > \" -> Hash Join (cost=1636.07..939023.13 rows=71345785 width=39)\"\n> > \" Hash Cond: ((\"temp\".product)::text = (temp2.product)::text)\"\n> > \" -> Seq Scan on dummy \"temp\" (cost=0.00..1311.03 rows=26003\n> > width=39)\"\n> > \" -> Hash (cost=1311.03..1311.03 rows=26003 width=21)\"\n> > \" -> Seq Scan on dummy temp2 (cost=0.00..1311.03rows=26003\n> > width=21)\"\n> > -------------------------------\n> >\n> >\n> > Whats the solution of this problem, or any alternate way to write this\n> > query?\n> >\n> >\n> >\n>\n\nSee, its calculating sum by grouping the product field. Here is an exampleProduct      GP---------      -------A                  30B                   40A                  30C                 50\nC                 50Now the query calculates aggregated sum and divide by grouping product so all A's must have same the result, so with B's and C's.\nIs this supposed to be updating every single row with one value?Cause I'm guessing it's running that sub select over and over insteadof one time. yes you are right  that its calculating every time for all elements in each group i.e. GP(A) is calculated twice for A, where it should only calculated once for each group. Is there any  way to achieve this?\nanalyze;set work_mem = 128000; between the alter and update and see if that helps.\nthat did not help Also, as Tom said, post explain analyze output of the statement.\n \"Seq Scan on dummy  (cost=0.00..56739774.24 rows=23441 width=275) (actual time=18.927..577929.014 rows=22712 loops=1)\"\"  SubPlan\"\"    ->  Aggregate  (cost=2420.41..2420.43 rows=1 width=19) (actual time=25.423..25.425 rows=1 loops=22712)\"\n\"          ->  Seq Scan on dummy \"temp\"  (cost=0.00..2416.01 rows=586 width=19) (actual time=0.049..17.834 rows=2414 loops=22712)\"\"                Filter: ((product)::text = ($0)::text)\"\n\"Total runtime: 578968.885 ms\"ThanksOn Feb 12, 2008 2:29 AM, Scott Marlowe <[email protected]> wrote:\nOn Feb 11, 2008 5:06 AM, Linux Guru <[email protected]> wrote:> We have a large datawarehouse stored in postgres and temp tables are created\n> based on user query. The process of temp table creation involves selecting> data from main fact table, this includes several select and update> statements and one of the following update statement is having performance\n> issues.>> The newly temp table created for this scenario contains 22712 rows. Here is> the query>> alter table dummy add column gp numeric(40,15);> update dummy set gp=(select (\n> case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else 0 end  )> from dummy as temp>  where temp.product=dummy.product)Is this supposed to be updating every single row with one value?\nCause I'm guessing it's running that sub select over and over insteadof one time.  I'm guessing that with more work_mem the planner mightuse a more efficient plan.  Try addinganalyze;set work_mem = 128000;\n between the alter and update and see if that helps.Also, as Tom said, post explain analyze output of the statement.>> Now this query basically updates a table using values within itself in the\n> subquery but it takes tooooo much time i.e. approx 5 mins. 
The whole temp> table creation process is stucked in this query (there are 4 additional such> updates with same problem). Index creation is useless here since its only a\n> one time process.>> Here is the strip down version (the part making performance issue) of above> query i.e. only select statement> -------------------------------> select (case when sum(temp.pd) <> 0 then sum(temp.gd)/sum(temp.pd)*100 else\n> 0 end  )   from dummy as temp, dummy as temp2>  where temp.product=temp2.product group by temp.product>> \"HashAggregate  (cost=1652480.98..1652481.96 rows=39 width=39)\"> \"  ->  Hash Join  (cost=1636.07..939023.13 rows=71345785 width=39)\"\n>  \"        Hash Cond: ((\"temp\".product)::text = (temp2.product)::text)\"> \"        ->  Seq Scan on dummy \"temp\"  (cost=0.00..1311.03 rows=26003> width=39)\"> \"        ->  Hash  (cost=1311.03..1311.03 rows=26003 width=21)\"\n>  \"              ->  Seq Scan on dummy temp2  (cost=0.00..1311.03 rows=26003> width=21)\"> ------------------------------->>> Whats the solution of this problem, or any alternate way to write this\n> query?>>>", "msg_date": "Tue, 12 Feb 2008 13:46:34 +0500", "msg_from": "\"Linux Guru\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update with Subquery Performance" }, { "msg_contents": "\"Linux Guru\" <[email protected]> writes:\n> Analyzing did not help, here is the out of EXPLAIN ANALYZE of update query\n> \"Seq Scan on dummy (cost=0.00..56739774.24 rows=23441 width=275) (actual\n> time=18.927..577929.014 rows=22712 loops=1)\"\n> \" SubPlan\"\n> \" -> Aggregate (cost=2420.41..2420.43 rows=1 width=19) (actual time=\n> 25.423..25.425 rows=1 loops=22712)\"\n> \" -> Seq Scan on dummy \"temp\" (cost=0.00..2416.01 rows=586\n> width=19) (actual time=0.049..17.834 rows=2414 loops=22712)\"\n> \" Filter: ((product)::text = ($0)::text)\"\n> \"Total runtime: 578968.885 ms\"\n\nYeah, that's just not going to be fast. An index on the product column\nmight help a bit, but the real issue is that you're repetitively\ncalculating the same aggregates. 
I think you need a separate temp\ntable, along the lines of\n\ncreate temp table dummy_agg as\n select product,\n (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end) as s\n from dummy\n group by product;\n\ncreate index dummy_agg_i on dummy_agg(product); -- optional\n\nupdate dummy\n set gp= (select s from dummy_agg where dummy_agg.product = dummy.product);\n\nThe index would only be needed if you expect a lot of rows (lot of\ndifferent product values).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Feb 2008 11:18:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update with Subquery Performance " }, { "msg_contents": "yes, I also thought of this method and tested it before I got your mail and\nthis solution seems workable.\n\nThanks for the help\n\nOn Feb 12, 2008 9:18 PM, Tom Lane <[email protected]> wrote:\n\n> \"Linux Guru\" <[email protected]> writes:\n> > Analyzing did not help, here is the out of EXPLAIN ANALYZE of update\n> query\n> > \"Seq Scan on dummy (cost=0.00..56739774.24 rows=23441 width=275)\n> (actual\n> > time=18.927..577929.014 rows=22712 loops=1)\"\n> > \" SubPlan\"\n> > \" -> Aggregate (cost=2420.41..2420.43 rows=1 width=19) (actual\n> time=\n> > 25.423..25.425 rows=1 loops=22712)\"\n> > \" -> Seq Scan on dummy \"temp\" (cost=0.00..2416.01 rows=586\n> > width=19) (actual time=0.049..17.834 rows=2414 loops=22712)\"\n> > \" Filter: ((product)::text = ($0)::text)\"\n> > \"Total runtime: 578968.885 ms\"\n>\n> Yeah, that's just not going to be fast. An index on the product column\n> might help a bit, but the real issue is that you're repetitively\n> calculating the same aggregates. I think you need a separate temp\n> table, along the lines of\n>\n> create temp table dummy_agg as\n> select product,\n> (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end) as s\n> from dummy\n> group by product;\n>\n> create index dummy_agg_i on dummy_agg(product); -- optional\n>\n> update dummy\n> set gp= (select s from dummy_agg where dummy_agg.product = dummy.product\n> );\n>\n> The index would only be needed if you expect a lot of rows (lot of\n> different product values).\n>\n> regards, tom lane\n>\n\nyes, I also thought of this method and tested it before I got your mail and this solution seems workable.Thanks for the helpOn Feb 12, 2008 9:18 PM, Tom Lane <[email protected]> wrote:\n\"Linux Guru\" <[email protected]> writes:\n> Analyzing did not help, here is the out of EXPLAIN ANALYZE of update query> \"Seq Scan on dummy  (cost=0.00..56739774.24 rows=23441 width=275) (actual> time=18.927..577929.014 rows=22712 loops=1)\"\n> \"  SubPlan\"> \"    ->  Aggregate  (cost=2420.41..2420.43 rows=1 width=19) (actual time=> 25.423..25.425 rows=1 loops=22712)\"> \"          ->  Seq Scan on dummy \"temp\"  (cost=0.00..2416.01 rows=586\n> width=19) (actual time=0.049..17.834 rows=2414 loops=22712)\"> \"                Filter: ((product)::text = ($0)::text)\"> \"Total runtime: 578968.885 ms\"Yeah, that's just not going to be fast.  An index on the product column\nmight help a bit, but the real issue is that you're repetitivelycalculating the same aggregates.  
I think you need a separate temptable, along the lines ofcreate temp table dummy_agg as  select product,\n         (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end) as s  from dummy  group by product;create index dummy_agg_i on dummy_agg(product); -- optionalupdate dummy  set gp= (select s from dummy_agg where dummy_agg.product = dummy.product);\nThe index would only be needed if you expect a lot of rows (lot ofdifferent product values).                        regards, tom lane", "msg_date": "Wed, 13 Feb 2008 15:59:26 +0500", "msg_from": "\"Linux Guru\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update with Subquery Performance" } ]
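For completeness, a hedged variant that was not tested in the thread: the same grouped aggregate can be folded into a single statement with UPDATE ... FROM, so it is computed once per product instead of once per row, without the intermediate table.

update dummy
set gp = agg.s
from (select product,
             -- same guarded percentage as in the original query
             case when sum(pd) <> 0
                  then sum(gd) / sum(pd) * 100
                  else 0
             end as s
      from dummy
      group by product) as agg
where agg.product = dummy.product;

This does essentially what Tom's two-step version does; the separate dummy_agg table mainly becomes interesting when the same per-product figures are reused by the additional updates mentioned in the thread.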
[ { "msg_contents": "I have serious performance problems with the following type of queries:\n/\n/explain analyse SELECT '12.11.2007 18:04:00 UTC' AS zeit,\n 'M' AS datatyp,\n p.zs_nr AS zs_de,\n j_ges,\n de_mw_abh_j_lkw(mw_abh) AS j_lkw,\n de_mw_abh_v_pkw(mw_abh) AS v_pkw,\n de_mw_abh_v_lkw(mw_abh) AS v_lkw,\n de_mw_abh_p_bel(mw_abh) AS p_bel\n FROM messpunkt p, messungen_v_dat_2007_11_12 m, de_mw w\n WHERE m.ganglinientyp = 'M' \n AND 381 = m.minute_tag\n AND (p.nr, p.mw_nr) = (m.messpunkt, w.nr);\n\nExplain analze returns\n\n Nested Loop (cost=0.00..50389.39 rows=3009 width=10) (actual \ntime=0.503..320.872 rows=2189 loops=1)\n -> Nested Loop (cost=0.00..30668.61 rows=3009 width=8) (actual \ntime=0.254..94.116 rows=2189 loops=1)\n -> Index Scan using \nmessungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on \nmessungen_v_dat_2007_11_12 m (cost=0.00..5063.38 rows=3009 width=4) \n(actual time=0.131..9.262 rows=2189 loops=1)\n Index Cond: ((ganglinientyp = 'M'::bpchar) AND (381 = \nminute_tag))\n -> Index Scan using messpunkt_nr_idx on messpunkt p \n(cost=0.00..8.50 rows=1 width=12) (actual time=0.019..0.023 rows=1 \nloops=2189)\n Index Cond: (p.nr = m.messpunkt)\n -> Index Scan using de_nw_nr_idx on de_mw w (cost=0.00..6.53 rows=1 \nwidth=10) (actual time=0.019..0.023 rows=1 loops=2189)\n Index Cond: (p.mw_nr = w.nr)\n Total runtime: 329.134 ms\n(9 rows)\n\nDoesnt looks too bad to me, but i'm not that deep into sql query \noptimization. However, these type of query is used in a function to \naccess a normalized, partitioned database, so better performance in this \nqueries would speed up the whole database system big times.\nAny suggestions here would be great. I allready tested some things, \nusing inner join, rearranging the order of the tables, but but only \nminor changes in the runtime, the version above seemed to get us the \nbest performance.\n/\n\n/\n", "msg_date": "Mon, 11 Feb 2008 19:08:25 +0100", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "Join Query Perfomance Issue" }, { "msg_contents": "On Feb 11, 2008 12:08 PM, Thomas Zaksek <[email protected]> wrote:\n> I have serious performance problems with the following type of queries:\n> /\n> /explain analyse SELECT '12.11.2007 18:04:00 UTC' AS zeit,\n> 'M' AS datatyp,\n> p.zs_nr AS zs_de,\n> j_ges,\n> de_mw_abh_j_lkw(mw_abh) AS j_lkw,\n> de_mw_abh_v_pkw(mw_abh) AS v_pkw,\n> de_mw_abh_v_lkw(mw_abh) AS v_lkw,\n> de_mw_abh_p_bel(mw_abh) AS p_bel\n> FROM messpunkt p, messungen_v_dat_2007_11_12 m, de_mw w\n> WHERE m.ganglinientyp = 'M'\n> AND 381 = m.minute_tag\n> AND (p.nr, p.mw_nr) = (m.messpunkt, w.nr);\n>\n> Explain analze returns\n>\n> Nested Loop (cost=0.00..50389.39 rows=3009 width=10) (actual\n> time=0.503..320.872 rows=2189 loops=1)\n> -> Nested Loop (cost=0.00..30668.61 rows=3009 width=8) (actual\n> time=0.254..94.116 rows=2189 loops=1)\n\nThis nested loop is using us most of your time. Try increasing\nwork_mem and see if it chooses a better join plan, and / or turn off\nnested loops for a moment and see if that helps.\n\nset enable_nestloop = off\n\nNote that set enable_xxx = off\n\nIs kind of a hammer to the forebrain setting. It's not subtle, and\nthe planner can't work around it. So use them with caution. That\nsaid, I had one reporting query that simply wouldn't run fast without\nturning off nested loops for that one. 
But don't turn off nested\nqueries universally, they are still a good choice for smaller amounts\nof data.\n", "msg_date": "Mon, 11 Feb 2008 15:37:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "Correction:\n\n> turning off nested loops for that one. But don't turn off nested\n> queries universally, they are still a good choice for smaller amounts\n> of data.\n\nqueries should be loops up there...\n", "msg_date": "Mon, 11 Feb 2008 15:38:06 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "Scott Marlowe schrieb:\n> On Feb 11, 2008 12:08 PM, Thomas Zaksek <[email protected]> wrote:\n> \n>> I have serious performance problems with the following type of queries:\n>> /\n>> /explain analyse SELECT '12.11.2007 18:04:00 UTC' AS zeit,\n>> 'M' AS datatyp,\n>> p.zs_nr AS zs_de,\n>> j_ges,\n>> de_mw_abh_j_lkw(mw_abh) AS j_lkw,\n>> de_mw_abh_v_pkw(mw_abh) AS v_pkw,\n>> de_mw_abh_v_lkw(mw_abh) AS v_lkw,\n>> de_mw_abh_p_bel(mw_abh) AS p_bel\n>> FROM messpunkt p, messungen_v_dat_2007_11_12 m, de_mw w\n>> WHERE m.ganglinientyp = 'M'\n>> AND 381 = m.minute_tag\n>> AND (p.nr, p.mw_nr) = (m.messpunkt, w.nr);\n>>\n>> Explain analze returns\n>>\n>> Nested Loop (cost=0.00..50389.39 rows=3009 width=10) (actual\n>> time=0.503..320.872 rows=2189 loops=1)\n>> -> Nested Loop (cost=0.00..30668.61 rows=3009 width=8) (actual\n>> time=0.254..94.116 rows=2189 loops=1)\n>> \n>\n> This nested loop is using us most of your time. Try increasing\n> work_mem and see if it chooses a better join plan, and / or turn off\n> nested loops for a moment and see if that helps.\n>\n> set enable_nestloop = off\n>\n> Note that set enable_xxx = off\n>\n> Is kind of a hammer to the forebrain setting. It's not subtle, and\n> the planner can't work around it. So use them with caution. That\n> said, I had one reporting query that simply wouldn't run fast without\n> turning off nested loops for that one. 
But don't turn off nested\n> queries universally, they are still a good choice for smaller amounts\n> of data.\n> \nI tried turning off nestloop, but with terrible results:\n\nHash Join (cost=208328.61..228555.14 rows=3050 width=10) (actual \ntime=33421.071..40362.136 rows=2920 loops=1)\n Hash Cond: (w.nr = p.mw_nr)\n -> Seq Scan on de_mw w (cost=0.00..14593.79 rows=891479 width=10) \n(actual time=0.012..3379.971 rows=891479 loops=1)\n -> Hash (cost=208290.49..208290.49 rows=3050 width=8) (actual \ntime=33420.877..33420.877 rows=2920 loops=1)\n -> Merge Join (cost=5303.71..208290.49 rows=3050 width=8) \n(actual time=31.550..33407.688 rows=2920 loops=1)\n Merge Cond: (p.nr = m.messpunkt)\n -> Index Scan using messpunkt_nr_idx on messpunkt p \n(cost=0.00..238879.39 rows=6306026 width=12) (actual \ntime=0.056..17209.317 rows=4339470 loops=1)\n -> Sort (cost=5303.71..5311.34 rows=3050 width=4) \n(actual time=25.973..36.858 rows=2920 loops=1)\n Sort Key: m.messpunkt\n -> Index Scan using \nmessungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on \nmessungen_v_dat_2007_11_12 m (cost=0.00..5127.20 rows=3050 width=4) \n(actual time=0.124..12.822 rows=2920 loops=1)\n Index Cond: ((ganglinientyp = 'M'::bpchar) \nAND (651 = minute_tag))\n Total runtime: 40373.512 ms\n(12 rows)\nLooks crappy, isn't it?\n\nI also tried to increase work_men, now the config is\nwork_mem = 4MB \nmaintenance_work_mem = 128MB,\nin regard to performance, it wasnt effective at all.\n\nThe postgresql runs on a HP Server with dual Opteron, 3GB of Ram, what \nare good settings here? The database will have to work with tables of \nseveral 10Millions of Lines, but only a few columns each. No more than \nmaybe ~5 clients accessing the database at the same time.\n\n", "msg_date": "Tue, 12 Feb 2008 11:11:11 +0100", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "> I have serious performance problems with the following type of queries:\n>\n> Doesnt looks too bad to me, but i'm not that deep into sql query\n> optimization. However, these type of query is used in a function to\n> access a normalized, partitioned database, so better performance in this\n> queries would speed up the whole database system big times.\n> Any suggestions here would be great. I allready tested some things,\n> using inner join, rearranging the order of the tables, but but only\n> minor changes in the runtime, the version above seemed to get us the\n> best performance.\n\nCan you send the table definitions of the tables involved in the\nquery, including index information? Might be if we look hard enough we\ncan find something.\n\nPeter\n", "msg_date": "Tue, 12 Feb 2008 14:34:28 -0600", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": " > Can you send the table definitions of the tables involved in the\n > query, including index information? 
Might be if we look hard enough we\n > can find something.\n >\n > Peter\n\n\n\n Table \"messungen_v_dat_2007_11_12\"\n Column | Type | Modifiers | Description\n---------------+--------------+-----------+-------------\n ganglinientyp | character(1) | |\n minute_tag | smallint | |\n messpunkt | integer | |\nIndexes:\n \"messungen_v_dat_2007_11_12_ganglinientyp_key\" UNIQUE, btree \n(ganglinientyp, minute_tag, messpunkt)\n \"messungen_v_dat_2007_11_12_messpunkt_idx\" btree (messpunkt)\n \"messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx\" btree \n(ganglinientyp, minute_tag)\nForeign-key constraints:\n \"messungen_v_dat_2007_11_12_messpunkt_fkey\" FOREIGN KEY (messpunkt) \nREFERENCES messpunkt(nr)\nInherits: messungen_v_dat\nHas OIDs: no\n\n\n\n\n Table \"messpunkt\"\n Column | Type | \nModifiers | Description\n--------+---------+--------------------------------------------------------+-------------\n nr | integer | not null default \nnextval('messpunkt_nr_seq'::regclass) |\n zs_nr | integer \n| |\n mw_nr | integer \n| |\nIndexes:\n \"messpunkt_pkey\" PRIMARY KEY, btree (nr)\n \"messpunkt_zs_nr_key\" UNIQUE, btree (zs_nr, mw_nr)\n \"messpunkt_mw_idx\" btree (mw_nr)\n \"messpunkt_nr_idx\" btree (nr)\n \"messpunkt_zs_idx\" btree (zs_nr)\nForeign-key constraints:\n \"messpunkt_mw_nr_fkey\" FOREIGN KEY (mw_nr) REFERENCES de_mw(nr)\n \"messpunkt_zs_nr_fkey\" FOREIGN KEY (zs_nr) REFERENCES de_zs(zs)\nHas OIDs: no\n\n\n\n\n Table \"de_mw\"\n Column | Type | Modifiers \n| Description\n--------+----------+----------------------------------------------------+-------------\n nr | integer | not null default nextval('de_mw_nr_seq'::regclass) |\n j_ges | smallint | |\n mw_abh | integer | |\nIndexes:\n \"de_mw_pkey\" PRIMARY KEY, btree (nr)\n \"de_mw_j_ges_key\" UNIQUE, btree (j_ges, mw_abh)\n \"de_nw_nr_idx\" btree (nr)\nCheck constraints:\n \"de_mw_check\" CHECK (j_ges IS NOT NULL AND (j_ges = 0 AND (mw_abh = \n0 OR mw_abh = 255 OR mw_abh IS NULL) OR j_ges > 0 AND j_ges <= 80 AND \nmw_abh <> 0))\nHas OIDs: no\n", "msg_date": "Wed, 13 Feb 2008 12:17:29 +0100", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "We have tried some recoding now, using a materialized view we could \nreduce the query to a join over too tables without any functions inside \nthe query, for example:\n\nexplain analyse SELECT '12.11.2007 18:04:00 UTC' AS zeit,\n 'M' AS ganglinientyp,\n zs_de,\n j_ges,\n j_lkw,\n v_pkw,\n v_lkw,\n p_bel\n FROM messungen_v_dat_2007_11_12 m\n LEFT JOIN messwerte_mv w on w.nr = m.messpunkt\n WHERE m.ganglinientyp = 'M' \n AND 992 = m.minute_tag;\n\nNested Loop Left Join (cost=0.00..32604.48 rows=3204 width=14) (actual \ntime=11.991..2223.227 rows=2950 loops=1)\n -> Index Scan using \nmessungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on \nmessungen_v_dat_2007_11_12 m (cost=0.00..5371.09 rows=3204 width=4) \n(actual time=0.152..12.385 rows=2950 loops=1)\n Index Cond: ((ganglinientyp = 'M'::bpchar) AND (992 = minute_tag))\n -> Index Scan using messwerte_mv_nr_idx on messwerte_mv w \n(cost=0.00..8.49 rows=1 width=18) (actual time=0.730..0.734 rows=1 \nloops=2950)\n Index Cond: (w.nr = m.messpunkt)\n Total runtime: 2234.143 ms\n(6 rows)\n\nTo me this plan looks very clean and nearly optimal, BUT ~2seconds for \nthe nested loop can't be that good, isn't it?\nThe behavior of this query and the database is quite a mystery for me, \nyesterday i had it running in about 100ms, today i started testing with \nthe same 
query and 2000-3000ms :(\n\nCould this be some kind of a postgresql server/configuration problem? \nThis queries are very perfomance dependend, they are called a lot of \ntimes in a comlex physical real time simulation of traffic systems. \n200ms would be ok here, but >1sec is perhaps not functional.\n\nThe old version just used one big (partitioned) table without any joins, \nperforming this query in 10-300ms, depended on the server load.\n", "msg_date": "Wed, 13 Feb 2008 12:45:04 +0100", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "Thomas Zaksek <[email protected]> writes:\n> Nested Loop Left Join (cost=0.00..32604.48 rows=3204 width=14) (actual \n> time=11.991..2223.227 rows=2950 loops=1)\n> -> Index Scan using \n> messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on \n> messungen_v_dat_2007_11_12 m (cost=0.00..5371.09 rows=3204 width=4) \n> (actual time=0.152..12.385 rows=2950 loops=1)\n> Index Cond: ((ganglinientyp = 'M'::bpchar) AND (992 = minute_tag))\n> -> Index Scan using messwerte_mv_nr_idx on messwerte_mv w \n> (cost=0.00..8.49 rows=1 width=18) (actual time=0.730..0.734 rows=1 \n> loops=2950)\n> Index Cond: (w.nr = m.messpunkt)\n> Total runtime: 2234.143 ms\n> (6 rows)\n\n> To me this plan looks very clean and nearly optimal,\n\nFor so many rows I'm surprised it's not using a bitmap indexscan.\nWhat PG version is this? How big are these tables?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Feb 2008 10:48:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join Query Perfomance Issue " }, { "msg_contents": "> For so many rows I'm surprised it's not using a bitmap indexscan.\n> What PG version is this? How big are these tables?\n>\n> \t\t\tregards, tom lane\n\nIts PG 8.2.6 on Freebsd.\n\nmessungen_v_dat_2007_11_12 ist about 4 million rows and messwerte is \nabout 10 million rows.\n\n", "msg_date": "Wed, 13 Feb 2008 18:46:51 +0100", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "On Feb 12, 2008 4:11 AM, Thomas Zaksek <[email protected]> wrote:\n\n> I tried turning off nestloop, but with terrible results:\n\nYeah, it didn't help. I was expecting the query planner to switch to\na more efficient join plan.\n\n> I also tried to increase work_men, now the config is\n> work_mem = 4MB\n\nTry setting it higher for JUST THIS query. i.e.\n\nset work_mem=128M;\nexplain analyze select ....\n\nand see how that runs. Then play with it til you've got it down to\nwhat helps. Note that work_mem in postgresql.conf being too large can\nbe dangerous, so it might be something you set for just this query for\nsafety reasons.\n", "msg_date": "Wed, 13 Feb 2008 15:36:07 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "Scott Marlowe schrieb:\n>\n> Yeah, it didn't help. I was expecting the query planner to switch to\n> a more efficient join plan.\n>\n> \n> Try setting it higher for JUST THIS query. i.e.\n>\n> set work_mem=128M;\n> explain analyze select ....\n>\n> and see how that runs. Then play with it til you've got it down to\n> what helps. 
Note that work_mem in postgresql.conf being too large can\n> be dangerous, so it might be something you set for just this query for\n> safety reasons.\n>\n> \nTried some values for work_mem like 32M, 128M, 256M, not much of a \ndifference to 4M, so i think work_mem is high enough here in basic \nconfiguration.\n\nI have now kind of optimized the query to a join of to tables(using \nmaterialized views), basically like this:\n\nSELECT foo\n FROM messungen_v_dat_2007_11_12 m \n INNER JOIN messwerte_mv p ON p.nr = m.messpunkt\n WHERE m.ganglinientyp = 'M' \n AND xxx = m.minute_tag;\n\n\nAre there any major flaws in this construction? Is there a better way to \njoin two tables this way?\nBest i get here is a runtime of about 100ms, what seems ok to me.\nThe plan is like\n\nnested loop\n index scan\n index scan\n\nNested Loop (cost=0.00..31157.91 rows=3054 width=14) (actual \ntime=0.252..149.557 rows=2769 loops=1)\n -> Index Scan using \nmessungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on \nmessungen_v_dat_2007_11_12 m (cost=0.00..5134.28 rows=3054 width=4) \n(actual time=0.085..11.562 rows=2769 loops=1)\n Index Cond: ((ganglinientyp = 'M'::bpchar) AND (799 = minute_tag))\n -> Index Scan using messwerte_mv_nr_idx on messwerte_mv p \n(cost=0.00..8.51 rows=1 width=18) (actual time=0.031..0.035 rows=1 \nloops=2769)\n Index Cond: (p.nr = m.messpunkt)\n Total runtime: 159.703 ms\n(6 rows)\n\nNested Loop is not the best regarding to performance, but there isn't a \nway to avoid it here?\n\nAnother strange problem occurs when i retry the query after about 12 \nhours break without akivity on the database (starting work in the \nmorning) :\nThe query runs incredible slow (~3sec), analyse on the tables doesn't \nchange much. But when i switch enable_netloop to false, retry the query \n(very bad result, > 30sec), then set enable_nestloop back to true, the \nquery works amazingly fast again (100ms). Note that explain analyse \nprovides the exactly similar plan for the 3sec at the beginning and the \nfast 100ms later. I have absolutly no idea what causes this behavior.\n", "msg_date": "Thu, 14 Feb 2008 11:57:07 +0100", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join Query Perfomance Issue" }, { "msg_contents": "\n> Nested Loop (cost=0.00..31157.91 rows=3054 width=14) (actual \n> time=0.252..149.557 rows=2769 loops=1)\n> -> Index Scan using \n> messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on \n> messungen_v_dat_2007_11_12 m (cost=0.00..5134.28 rows=3054 width=4) \n> (actual time=0.085..11.562 rows=2769 loops=1)\n> Index Cond: ((ganglinientyp = 'M'::bpchar) AND (799 = minute_tag))\n> -> Index Scan using messwerte_mv_nr_idx on messwerte_mv p \n> (cost=0.00..8.51 rows=1 width=18) (actual time=0.031..0.035 rows=1 \n> loops=2769)\n> Index Cond: (p.nr = m.messpunkt)\n> Total runtime: 159.703 ms\n> (6 rows)\n> \n> Nested Loop is not the best regarding to performance, but there isn't a \n> way to avoid it here?\n\nYour own tests have proven it's the right approach for this particular \nquery.\n\n> Another strange problem occurs when i retry the query after about 12 \n> hours break without akivity on the database (starting work in the \n> morning) :\n> The query runs incredible slow (~3sec), analyse on the tables doesn't \n> change much. 
But when I switch enable_nestloop to false, retry the query \n> (very bad result, > 30sec), then set enable_nestloop back to true, the \n> query works amazingly fast again (100ms).\n\nThe o/s has cached some of the data so instead of actually hitting the \ndisk, it's getting it from the o/s cache.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Fri, 15 Feb 2008 10:48:08 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join Query Perfomance Issue" } ]
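As a postscript to the thread above: the per-query tuning Scott suggests (a larger work_mem, or toggling enable_nestloop for a test) can be confined to a single transaction with SET LOCAL, so nothing leaks into other sessions. The sketch below is illustrative only; the select list and the literal 799 are placeholders standing in for the real report query (the original posts selected a placeholder column called foo and used a parameterized minute_tag):

BEGIN;
SET LOCAL work_mem = '128MB';      -- in effect only until COMMIT/ROLLBACK
SET LOCAL enable_nestloop = off;   -- optional: compare the non-nested-loop plan without changing the server default
EXPLAIN ANALYZE
SELECT m.messpunkt, p.nr           -- placeholder columns, not from the thread
  FROM messungen_v_dat_2007_11_12 m
  JOIN messwerte_mv p ON p.nr = m.messpunkt
 WHERE m.ganglinientyp = 'M'
   AND m.minute_tag = 799;         -- 799 appears in the posted plan; substitute the real value
COMMIT;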
[ { "msg_contents": "Hello,\n\nI'm planning to cluster a few large tables in our database but I'm\nunable to find any recommendations/documentation on best practices --\nMainly, whether it's better to use an index which has a higher idx_scan\nvalue, a higher idx_tup_read value, or the higest idx_tup_fetch value.\n\nI'm assuming that idx_tup_read would probably be the best choice, but \nwant to get other opinions before proceeding.\n\nCan anyone point me to docs which explain this better?\n\n-salman\n\n\n", "msg_date": "Mon, 11 Feb 2008 15:03:43 -0500", "msg_from": "salman <[email protected]>", "msg_from_op": true, "msg_subject": "Question about CLUSTER" }, { "msg_contents": "On Feb 11, 2008 2:03 PM, salman <[email protected]> wrote:\n> Hello,\n>\n> I'm planning to cluster a few large tables in our database but I'm\n> unable to find any recommendations/documentation on best practices --\n> Mainly, whether it's better to use an index which has a higher idx_scan\n> value, a higher idx_tup_read value, or the higest idx_tup_fetch value.\n>\n> I'm assuming that idx_tup_read would probably be the best choice, but\n> want to get other opinions before proceeding.\n\nIf you've got two indexes that are both being hit a lot, it might be\nworth looking into their correlation, and if they get used a lot\ntogether, look at creating an index on both.\n\nBut I'd guess that idx_tup_read would be a good bet.\n", "msg_date": "Mon, 11 Feb 2008 15:33:37 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Question about CLUSTER" }, { "msg_contents": "On Mon, Feb 11, 2008 at 03:33:37PM -0600, Scott Marlowe wrote:\n> On Feb 11, 2008 2:03 PM, salman <[email protected]> wrote:\n> > I'm planning to cluster a few large tables in our database but I'm\n> > unable to find any recommendations/documentation on best practices --\n> > Mainly, whether it's better to use an index which has a higher idx_scan\n> > value, a higher idx_tup_read value, or the higest idx_tup_fetch value.\n> >\n> > I'm assuming that idx_tup_read would probably be the best choice, but\n> > want to get other opinions before proceeding.\n> \n> If you've got two indexes that are both being hit a lot, it might be\n> worth looking into their correlation, and if they get used a lot\n> together, look at creating an index on both.\n> \n> But I'd guess that idx_tup_read would be a good bet.\n\nYou might also consider the ratio idx_tup_read::float8 / idx_scan\nto see which indexes access a lot of rows per scan.\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 11 Feb 2008 19:42:19 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Question about CLUSTER" } ]
[ { "msg_contents": "Hello,\n\nI've been wrestling w/ a complex query for another developer for \nawhile today. The problem consistently seems to a mis-estimation of \nthe number of rows resulting from a join. This causes the query early \non to think it's only going to be processing 1 row and so it chooses \nnested loops much of the way up the chain. I've messed w/ statistics \ntargets on some of the columns w/ no increase in the accuracy of the \nestimates. I've analyzed the tables in question (autovac is \nrunning). If I turn off nested loops, the query runs in 1.5 seconds. \nOtherwise it takes about 37s. With other criteria in the where clause \nit can take many minutes to return. Here is a subset of the explain \nanalyze that I'm wrestling with. The entire explain is at the end of \nthe email.\n\n -> Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual \ntime=2.932..27.772 rows=20153 loops=1)\n -> Hash Join (cost=10.89..22.58 rows=1 width=24) (actual \ntime=0.065..0.134 rows=1 loops=1)\n Hash Cond: (mtchsrcprj3.funding_source_id = \nmtchsrcprjfs3.nameid)\n -> Seq Scan on project mtchsrcprj3 (cost=0.00..11.22 \nrows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)\n -> Hash (cost=10.83..10.83 rows=5 width=24) (actual \ntime=0.017..0.017 rows=1 loops=1)\n -> Index Scan using name_float_lfm_idx on \nnamemaster mtchsrcprjfs3 (cost=0.00..10.83 rows=5 width=24) (actual \ntime=0.012..0.013 rows=1 loops=1)\n Index Cond: (name_float_lfm = 'DWS'::text)\n -> Bitmap Heap Scan on transaction_details idatrndtl \n(cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060 \nrows=20153 loops=1)\n Recheck Cond: (idatrndtl.ida_trans_match_source_id = \nmtchsrcprj3.id)\n -> Bitmap Index Scan on \ntransaction_details_ida_trans_match_source_id (cost=0.00..31.50 \nrows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)\n Index Cond: (idatrndtl.ida_trans_match_source_id = \nmtchsrcprj3.id)\n\nThe first frustration is that I can't get the transaction details scan \nto get any more accurate. It thinks it will find 1407 records, \ninstead it finds 20,153. Then for whatever reason it thinks that a \njoin between 1 record and 1407 records will return 1 record. This is \nmainly what I can't understand. Why does it think it will only get \none record in response when it's a left join?\n\nThe thing is that we've had this happen a number of times recently \nwith complex nested queries. 
Most of the time things will run very \nquickly, but an early mis-estimation by the planner causes it to use \nnested loops exclusively when hash joins would be more appropriate.\n\nIs there anything I can do to improve this short of the set \nenable_nestloop=off?\n\nPG 8.2.4 on Linux kernel 2.6.9 x64\n\n-Chris\n\n------- Full explain analyze -----\n\nGroup (cost=336.76..336.82 rows=1 width=328) (actual \ntime=36620.831..36621.176 rows=248 loops=1)\n -> Sort (cost=336.76..336.76 rows=1 width=328) (actual \ntime=36620.828..36620.888 rows=248 loops=1)\n Sort Key: county, fullname_last_first_mdl, ((((CASE WHEN \n(COALESCE(fullname_last_first_mdl, '0'::text) = '0'::text) THEN \n''::text ELSE COALESCE(fullname_last_first_mdl, '0'::text) END || ' \n'::text) || '-'::text) || ' '::text) || CASE WHEN (COALESCE(ssn, \n'0'::text) = '0'::text) THEN ''::text ELSE COALESCE(ssn, '0'::text) \nEND), system_name_id, ssn, ida_account_id, \nida_account_match_source_funding_source_name_float_lfm, \nida_account_status, vs_query_27453_212267, vs_query_27453_212252, \nvs_query_27453_212253, vs_query_27453_212254, vs_query_27453_212255, \n(COALESCE(vs_query_27453_212267, 0::numeric) + \nCOALESCE(vs_query_27453_212255, 0::numeric))\n -> Subquery Scan foo (cost=336.72..336.75 rows=1 \nwidth=328) (actual time=36614.750..36615.319 rows=248 loops=1)\n -> Sort (cost=336.72..336.72 rows=1 width=255) \n(actual time=36614.737..36614.798 rows=248 loops=1)\n Sort Key: cou.validvalue, dem.name_float_lfm\n -> Nested Loop Left Join (cost=194.80..336.71 \nrows=1 width=255) (actual time=506.599..36611.702 rows=248 loops=1)\n -> Nested Loop Left Join \n(cost=194.80..332.90 rows=1 width=242) (actual time=506.566..36606.528 \nrows=248 loops=1)\n Join Filter: (acc.id = \nqry27453.ida_account_id)\n -> Nested Loop (cost=30.16..168.13 \nrows=1 width=82) (actual time=0.461..27.079 rows=248 loops=1)\n -> Nested Loop \n(cost=30.16..167.85 rows=1 width=90) (actual time=0.453..25.133 \nrows=248 loops=1)\n -> Nested Loop \n(cost=30.16..165.94 rows=1 width=77) (actual time=0.441..19.687 \nrows=970 loops=1)\n -> Nested Loop \n(cost=30.16..162.90 rows=1 width=40) (actual time=0.429..11.405 \nrows=970 loops=1)\n -> Hash \nJoin (cost=30.16..162.48 rows=1 width=32) (actual time=0.417..4.524 \nrows=970 loops=1)\n Hash \nCond: (accmtchgrp.match_group_id = mtchsrc2.match_group_id)\n -> \nSeq Scan on ida_account_match_sources accmtchgrp (cost=0.00..117.26 \nrows=3926 width=8) (actual time=0.010..1.597 rows=3933 loops=1)\n -> \nHash (cost=30.15..30.15 rows=1 width=24) (actual time=0.315..0.315 \nrows=1 loops=1)\n - \n > Hash Join (cost=22.59..30.15 rows=1 width=24) (actual \ntime=0.228..0.309 rows=1 loops=1)\n Hash \n Cond: (mtchsrc2.project_id = mtchsrcprj2.id)\n -> \n Seq Scan on ida_match_sources mtchsrc2 (cost=0.00..6.85 rows=185 \nwidth=8) (actual time=0.004..0.065 rows=185 loops=1)\n -> \n Hash (cost=22.58..22.58 rows=1 width=24) (actual time=0.162..0.162 \nrows=1 loops=1)\n -> \n Hash Join (cost=10.89..22.58 rows=1 width=24) (actual \ntime=0.091..0.155 rows=1 loops=1)\n Hash \n Cond: (mtchsrcprj2.funding_source_id = mtchsrcprjfs2.nameid)\n -> \n Seq Scan on project mtchsrcprj2 (cost=0.00..11.22 rows=122 \nwidth=8) (actual time=0.005..0.060 rows=122 loops=1)\n -> \n Hash (cost=10.83..10.83 rows=5 width=24) (actual time=0.039..0.039 \nrows=1 loops=1)\n -> \n Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs2 \n(cost=0.00..10.83 rows=5 width=24) (actual time=0.028..0.030 rows=1 \nloops=1)\n Index \n Cond: (name_float_lfm = 
'DWS'::text)\n -> Index \nScan using accounts_pkey on accounts acc (cost=0.00..0.41 rows=1 \nwidth=12) (actual time=0.005..0.005 rows=1 loops=970)\n Index \nCond: (acc.id = accmtchgrp.account_id)\n \nFilter: (program_id = 221)\n -> Index Scan \nusing nameid_pk on namemaster dem (cost=0.00..3.02 rows=1 width=41) \n(actual time=0.006..0.007 rows=1 loops=970)\n Index Cond: \n(acc.owner_id = dem.nameid)\n Filter: \n(programid = 221)\n -> Index Scan using \nvalidanswerid_pk on validanswer accsts (cost=0.00..1.91 rows=1 \nwidth=21) (actual time=0.004..0.005 rows=0 loops=970)\n Index Cond: \n(acc.ida_account_status_id = accsts.validanswerid)\n Filter: \n(validvalue = 'Open'::text)\n -> Index Scan using \nida_match_groups_pkey on ida_match_groups mtchgrp2 (cost=0.00..0.27 \nrows=1 width=4) (actual time=0.003..0.006 rows=1 loops=248)\n Index Cond: \n(accmtchgrp.match_group_id = mtchgrp2.id)\n -> GroupAggregate \n(cost=164.63..164.75 rows=1 width=129) (actual time=1.635..147.391 \nrows=230 loops=248)\n -> Sort (cost=164.63..164.64 \nrows=1 width=129) (actual time=1.407..3.160 rows=4715 loops=248)\n Sort Key: \nfoo.ida_account_id, foo.ida_account_status, foo.ida_match_rate\n -> Sort \n(cost=164.61..164.61 rows=1 width=82) (actual time=340.444..341.726 \nrows=4715 loops=1)\n Sort Key: acc.id\n -> Nested Loop \n(cost=42.74..164.60 rows=1 width=82) (actual time=3.069..333.866 \nrows=4715 loops=1)\n -> Nested \nLoop (cost=42.74..164.29 rows=1 width=69) (actual time=3.062..310.340 \nrows=4715 loops=1)\n -> \nNested Loop (cost=42.74..162.38 rows=1 width=56) (actual \ntime=2.955..224.985 rows=20048 loops=1)\n - \n > Nested Loop (cost=42.74..162.09 rows=1 width=42) (actual \ntime=2.947..135.616 rows=20048 loops=1)\n -> \n Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual \ntime=2.932..27.772 rows=20153 loops=1)\n -> \n Hash Join (cost=10.89..22.58 rows=1 width=24) (actual \ntime=0.065..0.134 rows=1 loops=1)\n Hash \n Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)\n -> \n Seq Scan on project mtchsrcprj3 (cost=0.00..11.22 rows=122 \nwidth=8) (actual time=0.002..0.054 rows=122 loops=1)\n -> \n Hash (cost=10.83..10.83 rows=5 width=24) (actual time=0.017..0.017 \nrows=1 loops=1)\n -> \n Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3 \n(cost=0.00..10.83 rows=5 width=24) (actual time=0.012..0.013 rows=1 \nloops=1)\n Index \n Cond: (name_float_lfm = 'DWS'::text)\n -> \n Bitmap Heap Scan on transaction_details idatrndtl \n(cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060 \nrows=20153 loops=1)\n Recheck \n Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)\n -> \n Bitmap Index Scan on transaction_details_ida_trans_match_source_id \n(cost=0.00..31.50 rows=1407 width=0) (actual time=2.696..2.696 \nrows=20153 loops=1)\n Index \n Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)\n -> \n Index Scan using transactions_pkey on transactions idatrn \n(cost=0.00..0.31 rows=1 width=12) (actual time=0.003..0.004 rows=1 \nloops=20153)\n Index \n Cond: (idatrn.id = idatrndtl.transaction_id)\n Filter \n: (((transaction_date >= '2007-10-01'::date) OR (transaction_date <= \n'2007-10-01'::date)) AND (transaction_date <= '2007-12-31'::date))\n - \n > Index Scan using accounts_pkey on accounts acc (cost=0.00..0.28 \nrows=1 width=18) (actual time=0.003..0.003 rows=1 loops=20048)\n Index \n Cond: (acc.id = idatrn.account_id)\n Filter \n: (program_id = 221)\n -> \nIndex Scan using validanswerid_pk on validanswer accsts \n(cost=0.00..1.91 rows=1 width=21) (actual 
time=0.003..0.003 rows=0 \nloops=20048)\n \nIndex Cond: (acc.ida_account_status_id = accsts.validanswerid)\n \nFilter: (validvalue = 'Open'::text)\n -> Index \nScan using validanswerid_pk on validanswer trndtlcat (cost=0.00..0.29 \nrows=1 width=21) (actual time=0.003..0.004 rows=1 loops=4715)\n Index \nCond: (idatrndtl.ida_trans_detail_category_id = trndtlcat.validanswerid)\n \nFilter: (validvalue = ANY ('{\"Match Withdrawal\",\"Match Earned\",\"Match \nInterest\"}'::text[]))\n -> Index Scan using validanswerid_pk on \nvalidanswer cou (cost=0.00..3.77 rows=1 width=21) (actual \ntime=0.011..0.012 rows=1 loops=248)\n Index Cond: (cou.validanswerid = \ndem.county)\n Total runtime: 36622.135 ms\n(73 rows)\n\n\n\n", "msg_date": "Mon, 11 Feb 2008 17:24:33 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "mis-estimate in nested query causes slow runtimes" }, { "msg_contents": "Chris Kratz <[email protected]> writes:\n> -> Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual \n> time=2.932..27.772 rows=20153 loops=1)\n> -> Hash Join (cost=10.89..22.58 rows=1 width=24) (actual \n> time=0.065..0.134 rows=1 loops=1)\n> Hash Cond: (mtchsrcprj3.funding_source_id = \n> mtchsrcprjfs3.nameid)\n> -> Seq Scan on project mtchsrcprj3 (cost=0.00..11.22 \n> rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)\n> -> Hash (cost=10.83..10.83 rows=5 width=24) (actual \n> time=0.017..0.017 rows=1 loops=1)\n> -> Index Scan using name_float_lfm_idx on \n> namemaster mtchsrcprjfs3 (cost=0.00..10.83 rows=5 width=24) (actual \n> time=0.012..0.013 rows=1 loops=1)\n> Index Cond: (name_float_lfm = 'DWS'::text)\n> -> Bitmap Heap Scan on transaction_details idatrndtl \n> (cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060 \n> rows=20153 loops=1)\n> Recheck Cond: (idatrndtl.ida_trans_match_source_id = \n> mtchsrcprj3.id)\n> -> Bitmap Index Scan on \n> transaction_details_ida_trans_match_source_id (cost=0.00..31.50 \n> rows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)\n> Index Cond: (idatrndtl.ida_trans_match_source_id = \n> mtchsrcprj3.id)\n\n> The first frustration is that I can't get the transaction details scan \n> to get any more accurate. It thinks it will find 1407 records, \n> instead it finds 20,153. Then for whatever reason it thinks that a \n> join between 1 record and 1407 records will return 1 record. This is \n> mainly what I can't understand. Why does it think it will only get \n> one record in response when it's a left join?\n\nI don't see any left join there ...\n\n> PG 8.2.4 on Linux kernel 2.6.9 x64\n\nThe first thing you should do is update to 8.2.6; we've fixed a fair\nnumber of problems since then that were fallout from the outer-join\nplanning rewrite in 8.2.\n\nIf it still doesn't work very well, please post the pg_stats rows for\nthe join columns involved (idatrndtl.ida_trans_match_source_id and\nmtchsrcprj3.id). 
(You do have up-to-date ANALYZE stats for both\nof those tables, right?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Feb 2008 18:07:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mis-estimate in nested query causes slow runtimes " }, { "msg_contents": "On 2/11/08, Tom Lane <[email protected]> wrote:\n>\n> Chris Kratz <[email protected]> writes:\n> > -> Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual\n> > time=2.932..27.772 rows=20153 loops=1)\n> > -> Hash Join (cost=10.89..22.58 rows=1 width=24) (actual\n> > time=0.065..0.134 rows=1 loops=1)\n> > Hash Cond: (mtchsrcprj3.funding_source_id =\n> > mtchsrcprjfs3.nameid)\n> > -> Seq Scan on project mtchsrcprj3 (cost=0.00..11.22\n> > rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)\n> > -> Hash (cost=10.83..10.83 rows=5 width=24) (actual\n> > time=0.017..0.017 rows=1 loops=1)\n> > -> Index Scan using name_float_lfm_idx on\n> > namemaster mtchsrcprjfs3 (cost=0.00..10.83 rows=5 width=24) (actual\n> > time=0.012..0.013 rows=1 loops=1)\n> > Index Cond: (name_float_lfm = 'DWS'::text)\n> > -> Bitmap Heap Scan on transaction_details idatrndtl\n> > (cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060\n> > rows=20153 loops=1)\n> > Recheck Cond: (idatrndtl.ida_trans_match_source_id =\n> > mtchsrcprj3.id)\n> > -> Bitmap Index Scan on\n> > transaction_details_ida_trans_match_source_id (cost=0.00..31.50\n> > rows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)\n> > Index Cond: (idatrndtl.ida_trans_match_source_id =\n> > mtchsrcprj3.id)\n>\n> > The first frustration is that I can't get the transaction details scan\n> > to get any more accurate. It thinks it will find 1407 records,\n> > instead it finds 20,153. Then for whatever reason it thinks that a\n> > join between 1 record and 1407 records will return 1 record. This is\n> > mainly what I can't understand. Why does it think it will only get\n> > one record in response when it's a left join?\n>\n> I don't see any left join there ...\n>\n> > PG 8.2.4 on Linux kernel 2.6.9 x64\n>\n> The first thing you should do is update to 8.2.6; we've fixed a fair\n> number of problems since then that were fallout from the outer-join\n> planning rewrite in 8.2.\n>\n> If it still doesn't work very well, please post the pg_stats rows for\n> the join columns involved (idatrndtl.ida_trans_match_source_id and\n> mtchsrcprj3.id). 
(You do have up-to-date ANALYZE stats for both\n> of those tables, right?)\n>\n> regards, tom lane\n>\n\nThanks Tom, we will try the upgrade and see if that makes a difference.\n\n
-Chris", "msg_date": "Tue, 12 Feb 2008 09:02:43 -0500", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mis-estimate in nested query causes slow runtimes" }, { "msg_contents": "On 2/11/08, Tom Lane <[email protected]> wrote:\n>\n> Chris Kratz <[email protected]> writes:\n> > -> Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual\n> > time=2.932..27.772 rows=20153 loops=1)\n> > -> Hash Join (cost=10.89..22.58 rows=1 width=24) (actual\n> > time=0.065..0.134 rows=1 loops=1)\n> > Hash Cond: (mtchsrcprj3.funding_source_id =\n> > mtchsrcprjfs3.nameid)\n> > -> Seq Scan on project mtchsrcprj3 (cost=0.00..11.22\n> > rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)\n> > -> Hash (cost=10.83..10.83 rows=5 width=24) (actual\n> > time=0.017..0.017 rows=1 loops=1)\n> > -> Index Scan using name_float_lfm_idx on\n> > namemaster mtchsrcprjfs3 (cost=0.00..10.83 rows=5 width=24) (actual\n> > time=0.012..0.013 rows=1 loops=1)\n> > Index Cond: (name_float_lfm = 'DWS'::text)\n> > -> Bitmap Heap Scan on transaction_details idatrndtl\n> > (cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060\n> > rows=20153 loops=1)\n> > Recheck Cond: (idatrndtl.ida_trans_match_source_id =\n> > mtchsrcprj3.id)\n> > -> Bitmap Index Scan on\n> > transaction_details_ida_trans_match_source_id (cost=0.00..31.50\n> > rows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)\n> > Index Cond: (idatrndtl.ida_trans_match_source_id =\n> > mtchsrcprj3.id)\n>\n> > The first frustration is that I can't get the transaction details scan\n> > to get any more accurate. It thinks it will find 1407 records,\n> > instead it finds 20,153. Then for whatever reason it thinks that a\n> > join between 1 record and 1407 records will return 1 record. This is\n> > mainly what I can't understand. Why does it think it will only get\n> > one record in response when it's a left join?\n>\n> I don't see any left join there ...\n>\n> > PG 8.2.4 on Linux kernel 2.6.9 x64\n>\n> The first thing you should do is update to 8.2.6; we've fixed a fair\n> number of problems since then that were fallout from the outer-join\n> planning rewrite in 8.2.\n>\n> If it still doesn't work very well, please post the pg_stats rows for\n> the join columns involved (idatrndtl.ida_trans_match_source_id and\n> mtchsrcprj3.id). 
(You do have up-to-date ANALYZE stats for both\n> of those tables, right?)\n>\n> regards, tom lane\n\n\n\nI know it's somewhat premature as we haven't had a chance to do the update\nyet, but here is what I did w/ the statistics with the current version for\nchuckles and grins just to see if it would make a difference in the plan.\n\n# alter table project alter column id set statistics 1000;\nALTER TABLE\n# analyze project;\nANALYZE\n# alter table transaction_details alter column ida_trans_match_source_id set\nstatistics 1000;\nALTER TABLE\n# analyze transaction_details;\nANALYZE\n# select * from pg_stats where (tablename='project' and attname='id') or\n(tablename='transaction_details' and attname='ida_trans_match_source_id');\n schemaname | tablename | attname | null_frac |\navg_width | n_distinct |\n most_common_vals\n |\n\n most_common_freqs\n\n\n |\n\n\n histogram_bounds\n\n\n | correlation\n------------+---------------------+---------------------------+-----------+-----------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | project | id | 0 |\n 4 | -1 |\n\n |\n\n\n\n\n |\n{6,7,8,12,13,14,15,17,18,19,24,25,26,27,28,29,30,31,32,33,34,35,36,37,41,42,71,72,797,802,803,809,812,813,814,815,816,817,818,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,920,921,922,923,924,925,926,927,928,929,930,931,932,933,934,935,936,937,938,939,940,941,942,946,947,948,949,950,951,952,953,954,955,956,957,958,959,960,961,962,963,964,965,966,967,968,969,970,971,972,973,974,975,977,978}\n| 0.939317\n public | transaction_details | ida_trans_match_source_id | 0.480469 |\n 4 | 74 |\n{832,818,930,937,923,812,931,829,837,830,836,14,809,838,936,924,921,922,814,816,817,827,815,941,835,967,926,813,968,928,920,939,925,974,833,965,933}\n| {0.100562,0.100233,0.0412866,0.0245354,0.0223948,0.021277,0.0198998,\n0.018817,0.0182431,0.0181583,0.0180236,0.0141714,0.0107633,0.00955071,\n0.00917646,0.00639708,0.00562364,0.00491507,0.00453584,0.0037624,0.00332828,\n0.00332828,0.00323846,0.00309874,0.00295403,0.00267959,0.00234526,0.00227041\n,0.00221552,0.00220055,0.00215565,0.00207581,0.00179138,0.00136225,\n0.00114269,0.00113271,0.00100796} |\n{6,6,7,8,15,802,802,802,802,803,803,839,841,844,844,845,845,846,927,927,934,934,935,935,938,938,940,942,952,954,955,955,957,972,972,972,978}\n\n\n\n\n | 0.218267\n(2 rows)\n\nThis had no appreciable difference in the plan. Here is the part that seems\nto be causing the problem again after the increase in stats. 
It still thinks there is only one row in the result.\n ->  Nested Loop  (cost=42.75..161.78 rows=1 width=38) (actual time=391.797..425.337 rows=20153 loops=1)       ->  Hash Join  (cost=10.89..22.58 rows=1 width=24) (actual time=0.069..0.139 rows=1 loops=1)\n             Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)             ->  Seq Scan on project mtchsrcprj3  (cost=0.00..11.22 rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)\n             ->  Hash  (cost=10.83..10.83 rows=5 width=24) (actual time=0.022..0.022 rows=1 loops=1)                   ->  Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3  (cost=0.00..10.83 rows=5 width=24) (actual time=0.013..0.014 rows=1 loops=1)\n                         Index Cond: (name_float_lfm = 'DWS'::text)       ->  Bitmap Heap Scan on transaction_details idatrndtl  (cost=31.87..121.61 rows=1407 width=22) (actual time=391.722..410.129 rows=20153 loops=1)\n             Recheck Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)             ->  Bitmap Index Scan on transaction_details_ida_trans_match_source_id  (cost=0.00..31.51 rows=1407 width=0) (actual time=391.523..391.523 rows=20153 loops=1)\n                   Index Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)Here is the relevant snippet from the query\n<-- snip -->FROM   accounts acc   left join transactions idatrn on (acc.id = idatrn.account_id) \nleft join transaction_details idatrndtl on (idatrn.id = idatrndtl.transaction_id) left join project mtchsrcprj3 on (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id) \nleft join namemaster mtchsrcprjfs3 on ( mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid) <-- snip -->I'll update again once we've had a chance to do the update.\n-Chris", "msg_date": "Tue, 12 Feb 2008 10:09:12 -0500", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mis-estimate in nested query causes slow runtimes" }, { "msg_contents": "On 2/11/08, Tom Lane <[email protected]> wrote:\n>\n> Chris Kratz <[email protected]> writes:\n> > The first frustration is that I can't get the transaction details scan\n> > to get any more accurate. It thinks it will find 1407 records,\n> > instead it finds 20,153. Then for whatever reason it thinks that a\n> > join between 1 record and 1407 records will return 1 record. This is\n> > mainly what I can't understand. Why does it think it will only get\n> > one record in response when it's a left join?\n>\n> I don't see any left join there ...\n>\n> > PG 8.2.4 on Linux kernel 2.6.9 x64\n>\n> The first thing you should do is update to 8.2.6; we've fixed a fair\n> number of problems since then that were fallout from the outer-join\n> planning rewrite in 8.2.\n>\n> If it still doesn't work very well, please post the pg_stats rows for\n> the join columns involved (idatrndtl.ida_trans_match_source_id and\n> mtchsrcprj3.id). (You do have up-to-date ANALYZE stats for both\n> of those tables, right?)\n>\n> regards, tom lane\n>\n\nHello Tom,\n\nWe've updated to Postgres 8.2.6 on our production database over the weekend.\n Unfortunately, the estimates on this query are no better after the upgrade.\n Here is just the part of the estimate that is incorrect. 
(2 vs 20153)\n\n-> Nested Loop (cost=12.68..165.69 rows=2 width=38) (actual time=\n0.089..29.792 rows=20153 loops=1)\n -> Hash Join (cost=12.68..24.37 rows=1 width=24) (actual time=\n0.064..0.135 rows=1 loops=1)\n Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid\n)\n -> Seq Scan on project mtchsrcprj3 (cost=0.00..11.22 rows=122\nwidth=8) (actual time=0.002..0.053 rows=122 loops=1)\n -> Hash (cost=12.61..12.61 rows=6 width=24) (actual time=\n0.017..0.017 rows=1 loops=1)\n -> Index Scan using name_float_lfm_idx on namemaster\nmtchsrcprjfs3 (cost=0.00..12.61 rows=6 width=24) (actual\ntime=0.012..0.013rows=1 loops=1)\n Index Cond: (name_float_lfm = 'DWS'::text)\n -> Index Scan using transaction_details_ida_trans_match_source_id on\ntransaction_details idatrndtl (cost=0.00..123.72 rows=1408 width=22)\n(actual time=0.023..17.128 rows=20153 loops=1)\n\n(Entire explain analyze posted earlier in thread)\n\nTotal Query runtime: 35309.298 ms\nSame w/ enable_nestloop off: 761.715 ms\n\nI've tried the stats up to 1000 on both columns which causes no differences.\n Currently the stats are at 100.\n\ntest=# alter table transaction_details alter column\nida_trans_match_source_id set statistics 100;\nALTER TABLE\ntest=# analyze transaction_details;\nANALYZE\ntest=# alter table project alter column id set statistics 100;\nALTER TABLE\ntest=# analyze project;\nANALYZE\n\nStats rows in pg_stats for these two columns:\n\ntest=# select * from pg_stats where tablename = 'transaction_details' and\nattname='ida_trans_match_source_id';\n schemaname | tablename | attname | null_frac |\navg_width | n_distinct | most_common_vals\n | most_common_freqs\n |\n\n histogram_bounds\n | correlation\n------------+---------------------+---------------------------+-----------+-----------+------------+------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | transaction_details | ida_trans_match_source_id | 0.479533 |\n 4 | 69 |\n{818,832,930,937,923,812,931,836,837,829,830,14,809} | {0.1024,0.0991333,\n0.0408,0.0232,0.0221,0.0219,0.0207,0.0188667,0.0186667,0.0177667,0.0176667,\n0.0130333,0.0118667} |\n{6,802,813,813,814,814,815,815,816,816,817,817,827,827,833,835,835,838,838,838,838,838,843,920,921,921,921,921,922,922,924,924,924,924,925,926,926,928,928,934,936,936,936,936,936,938,939,941,941,955,965,967,968,968,974,980}\n| 0.178655\n(1 row)\n\ntest=# select * from pg_stats where tablename = 'project' and attname='id';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct |\nmost_common_vals | most_common_freqs |\n\n\n histogram_bounds\n\n | correlation\n------------+-----------+---------+-----------+-----------+------------+------------------+-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | project | id | 0 | 4 | -1 |\n | 
|\n{6,7,8,12,13,15,17,18,19,24,26,27,28,29,30,32,33,34,35,36,41,42,71,72,802,803,809,812,813,815,816,817,818,822,824,825,826,827,828,830,831,832,833,835,836,837,838,839,841,842,843,844,845,847,848,849,920,921,923,924,925,926,928,929,930,931,932,934,935,936,937,938,940,941,942,946,947,949,950,951,952,954,955,956,957,958,960,961,962,963,964,966,967,968,969,970,973,974,975,977,980}\n| 0.937228\n(1 row)\n\nPG 8.2.6 on linux x86_64, 8G ram, 4x15k->db, 2x10k-> OS & WAL\n\npostgresql.conf settings of note:\n\nshared_buffers = 1024MB\nwork_mem = 246MB\nmaintenance_work_mem = 256MB\nrandom_page_cost = 1.75\neffective_cache_size=2048MB\n\nAny ideas how we can get the query to run faster?\n\nThanks,\n\n-Chris\n\nOn 2/11/08, Tom Lane <[email protected]> wrote:\nChris Kratz <[email protected]> writes:> The first frustration is that I can't get the transaction details scan> to get any more accurate.  It thinks it will find 1407 records,\n> instead it finds 20,153.  Then for whatever reason it thinks that a> join between 1 record and 1407 records will return 1 record.  This is> mainly what I can't understand.  Why does it think it will only get\n> one record in response when it's a left join?I don't see any left join there ...> PG 8.2.4 on Linux kernel 2.6.9 x64The first thing you should do is update to 8.2.6; we've fixed a fair\nnumber of problems since then that were fallout from the outer-joinplanning rewrite in 8.2.If it still doesn't work very well, please post the pg_stats rows forthe join columns involved (idatrndtl.ida_trans_match_source_id and\nmtchsrcprj3.id).  (You do have up-to-date ANALYZE stats for bothof those tables, right?)                        regards, tom laneHello Tom,\nWe've updated to Postgres 8.2.6 on our production database over the weekend.  Unfortunately, the estimates on this query are no better after the upgrade.  Here is just the part of the estimate that is incorrect.  (2 vs 20153)\n->  Nested Loop  (cost=12.68..165.69 rows=2 width=38) (actual time=0.089..29.792 rows=20153 loops=1)      ->  Hash Join  (cost=12.68..24.37 rows=1 width=24) (actual time=0.064..0.135 rows=1 loops=1)\n            Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)            ->  Seq Scan on project mtchsrcprj3  (cost=0.00..11.22 rows=122 width=8) (actual time=0.002..0.053 rows=122 loops=1)\n            ->  Hash  (cost=12.61..12.61 rows=6 width=24) (actual time=0.017..0.017 rows=1 loops=1)                  ->  Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3  (cost=0.00..12.61 rows=6 width=24) (actual time=0.012..0.013 rows=1 loops=1)\n                        Index Cond: (name_float_lfm = 'DWS'::text)      ->  Index Scan using transaction_details_ida_trans_match_source_id on transaction_details idatrndtl  (cost=0.00..123.72 rows=1408 width=22) (actual time=0.023..17.128 rows=20153 loops=1)\n(Entire explain analyze posted earlier in thread)Total Query runtime: 35309.298 msSame w/ enable_nestloop off: 761.715 ms\nI've tried the stats up to 1000 on both columns which causes no differences.  
Currently the stats are at 100.\ntest=# alter table transaction_details alter column ida_trans_match_source_id set statistics 100;ALTER TABLEtest=# analyze transaction_details;ANALYZEtest=# alter table project alter column id set statistics 100;\nALTER TABLEtest=# analyze project;ANALYZEStats rows in pg_stats for these two columns:\ntest=# select * from pg_stats where tablename = 'transaction_details' and attname='ida_trans_match_source_id'; schemaname |      tablename      |          attname          | null_frac | avg_width | n_distinct |                   most_common_vals                   |                                                 most_common_freqs                                                 |                                                                                                        histogram_bounds                                                                                                         | correlation \n------------+---------------------+---------------------------+-----------+-----------+------------+------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public     | transaction_details | ida_trans_match_source_id |  0.479533 |         4 |         69 | {818,832,930,937,923,812,931,836,837,829,830,14,809} | {0.1024,0.0991333,0.0408,0.0232,0.0221,0.0219,0.0207,0.0188667,0.0186667,0.0177667,0.0176667,0.0130333,0.0118667} | {6,802,813,813,814,814,815,815,816,816,817,817,827,827,833,835,835,838,838,838,838,838,843,920,921,921,921,921,922,922,924,924,924,924,925,926,926,928,928,934,936,936,936,936,936,938,939,941,941,955,965,967,968,968,974,980} |    0.178655\n(1 row)test=# select * from pg_stats where tablename = 'project' and attname='id'; schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs |                                                                                                                                                                                      histogram_bounds                                                                                                                                                                                      | correlation \n------------+-----------+---------+-----------+-----------+------------+------------------+-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public     | project   | id      |         0 |         4 |         -1 |                  |                   | 
{6,7,8,12,13,15,17,18,19,24,26,27,28,29,30,32,33,34,35,36,41,42,71,72,802,803,809,812,813,815,816,817,818,822,824,825,826,827,828,830,831,832,833,835,836,837,838,839,841,842,843,844,845,847,848,849,920,921,923,924,925,926,928,929,930,931,932,934,935,936,937,938,940,941,942,946,947,949,950,951,952,954,955,956,957,958,960,961,962,963,964,966,967,968,969,970,973,974,975,977,980} |    0.937228\n(1 row)PG 8.2.6 on linux x86_64, 8G ram, 4x15k->db, 2x10k-> OS & WAL postgresql.conf settings of note:\nshared_buffers = 1024MBwork_mem = 246MBmaintenance_work_mem = 256MBrandom_page_cost = 1.75effective_cache_size=2048MB\nAny ideas how we can get the query to run faster?Thanks,\n-Chris", "msg_date": "Mon, 18 Feb 2008 14:32:06 -0500", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mis-estimate in nested query causes slow runtimes" }, { "msg_contents": "On 2/18/08, Chris Kratz <[email protected]> wrote:\n>\n> On 2/11/08, Tom Lane <[email protected]> wrote:\n> >\n> > Chris Kratz <[email protected]> writes:\n> > > The first frustration is that I can't get the transaction details scan\n> > > to get any more accurate. It thinks it will find 1407 records,\n> > > instead it finds 20,153. Then for whatever reason it thinks that a\n> > > join between 1 record and 1407 records will return 1 record. This is\n> > > mainly what I can't understand. Why does it think it will only get\n> > > one record in response when it's a left join?\n> >\n> > I don't see any left join there ...\n> >\n> > > PG 8.2.4 on Linux kernel 2.6.9 x64\n> >\n> > The first thing you should do is update to 8.2.6; we've fixed a fair\n> > number of problems since then that were fallout from the outer-join\n> > planning rewrite in 8.2.\n> >\n> > If it still doesn't work very well, please post the pg_stats rows for\n> > the join columns involved (idatrndtl.ida_trans_match_source_id and\n> > mtchsrcprj3.id). (You do have up-to-date ANALYZE stats for both\n> > of those tables, right?)\n> >\n> > regards, tom lane\n> >\n>\n> Hello Tom,\n>\n>\n> We've updated to Postgres 8.2.6 on our production database over the\n> weekend. Unfortunately, the estimates on this query are no better after the\n> upgrade. Here is just the part of the estimate that is incorrect. (2 vs\n> 20153)\n>\n>\n> -> Nested Loop (cost=12.68..165.69 rows=2 width=38) (actual time=\n> 0.089..29.792 rows=20153 loops=1)\n> -> Hash Join (cost=12.68..24.37 rows=1 width=24) (actual time=\n> 0.064..0.135 rows=1 loops=1)\n> Hash Cond: (mtchsrcprj3.funding_source_id =\n> mtchsrcprjfs3.nameid)\n> -> Seq Scan on project mtchsrcprj3 (cost=0.00..11.22rows=122 width=8) (actual time=\n> 0.002..0.053 rows=122 loops=1)\n> -> Hash (cost=12.61..12.61 rows=6 width=24) (actual time=\n> 0.017..0.017 rows=1 loops=1)\n> -> Index Scan using name_float_lfm_idx on namemaster\n> mtchsrcprjfs3 (cost=0.00..12.61 rows=6 width=24) (actual time=\n> 0.012..0.013 rows=1 loops=1)\n> Index Cond: (name_float_lfm = 'DWS'::text)\n> -> Index Scan using transaction_details_ida_trans_match_source_id\n> on transaction_details idatrndtl (cost=0.00..123.72 rows=1408 width=22)\n> (actual time=0.023..17.128 rows=20153 loops=1)\n>\n>\n> (Entire explain analyze posted earlier in thread)\n>\n>\n> Total Query runtime: 35309.298 ms\n> Same w/ enable_nestloop off: 761.715 ms\n>\n>\n> I've tried the stats up to 1000 on both columns which causes no\n> differences. 
Currently the stats are at 100.\n>\n>\n> test=# alter table transaction_details alter column\n> ida_trans_match_source_id set statistics 100;\n> ALTER TABLE\n> test=# analyze transaction_details;\n> ANALYZE\n> test=# alter table project alter column id set statistics 100;\n> ALTER TABLE\n> test=# analyze project;\n> ANALYZE\n>\n>\n> Stats rows in pg_stats for these two columns:\n>\n>\n> test=# select * from pg_stats where tablename = 'transaction_details' and\n> attname='ida_trans_match_source_id';\n> schemaname | tablename | attname | null_frac\n> | avg_width | n_distinct | most_common_vals\n> | most_common_freqs\n> |\n>\n> histogram_bounds\n> | correlation\n>\n> ------------+---------------------+---------------------------+-----------+-----------+------------+------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n> public | transaction_details | ida_trans_match_source_id | 0.479533| 4 | 69 |\n> {818,832,930,937,923,812,931,836,837,829,830,14,809} | {0.1024,0.0991333,\n> 0.0408,0.0232,0.0221,0.0219,0.0207,0.0188667,0.0186667,0.0177667,0.0176667\n> ,0.0130333,0.0118667} |\n> {6,802,813,813,814,814,815,815,816,816,817,817,827,827,833,835,835,838,838,838,838,838,843,920,921,921,921,921,922,922,924,924,924,924,925,926,926,928,928,934,936,936,936,936,936,938,939,941,941,955,965,967,968,968,974,980}\n> | 0.178655\n> (1 row)\n>\n>\n> test=# select * from pg_stats where tablename = 'project' and\n> attname='id';\n> schemaname | tablename | attname | null_frac | avg_width | n_distinct |\n> most_common_vals | most_common_freqs |\n>\n>\n> histogram_bounds\n>\n> | correlation\n>\n> ------------+-----------+---------+-----------+-----------+------------+------------------+-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n> public | project | id | 0 | 4 | -1 |\n> | |\n> {6,7,8,12,13,15,17,18,19,24,26,27,28,29,30,32,33,34,35,36,41,42,71,72,802,803,809,812,813,815,816,817,818,822,824,825,826,827,828,830,831,832,833,835,836,837,838,839,841,842,843,844,845,847,848,849,920,921,923,924,925,926,928,929,930,931,932,934,935,936,937,938,940,941,942,946,947,949,950,951,952,954,955,956,957,958,960,961,962,963,964,966,967,968,969,970,973,974,975,977,980}\n> | 0.937228\n> (1 row)\n>\n>\n> PG 8.2.6 on linux x86_64, 8G ram, 4x15k->db, 2x10k-> OS & WAL\n>\n>\n> postgresql.conf settings of note:\n>\n>\n> shared_buffers = 1024MB\n> work_mem = 246MB\n> maintenance_work_mem = 256MB\n> random_page_cost = 1.75\n> effective_cache_size=2048MB\n>\n>\n> Any ideas how we can get the query to run faster?\n>\n>\n> Thanks,\n>\n>\n> -Chris\n>\n>\n>\n>\n>\n>\n>\nTom, any further ideas? I appreciate your help so far, but we are still\nstuck after the update to 8.2.6. 
Our only real solution at this point is to\nadd code to our application that turns off nested loops for specific reports\nsince the planner isn't getting correct estimates. I posted the pg_stat\nrows as requested above.\nThanks,\n\n-Chris", "msg_date": "Wed, 20 Feb 2008 06:07:43 -0500", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: mis-estimate in nested query causes slow runtimes" } ]
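A minimal sketch of the per-report workaround described above: scoping the planner override to a single transaction with SET LOCAL, so only the affected report avoids nested-loop joins while the rest of the application keeps the default settings. The SELECT is a placeholder for the actual report query.

BEGIN;
SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT or ROLLBACK
SELECT ...;                        -- the report query whose estimates are off
COMMIT;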
[ { "msg_contents": "Hi all...\n\nIf ssl is enabled in postgresql, does it decrease the performance of the database?\nHow much?\n\nThanks in advance", "msg_date": "Mon, 11 Feb 2008 16:58:35 -0700", "msg_from": "\"=?ISO-8859-1?Q?fabrix_pe=F1uelas?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Questions about enabling SSL" }, { "msg_contents": "On Mon, Feb 11, 2008 at 04:58:35PM -0700, fabrix peñuelas wrote:\n> If ssl is enabled in postgresql, does it decrease the performance of the database?\n> How much?\n\nThe performance impact of an encrypted connection depends on how\nexpensive the queries are and how much data they return. A query\nthat joins several tables and aggregates millions of rows might\ntake several seconds or minutes to run and return only a few rows;\nfor such a query the impact of an encrypted connection is insignificant.\nBut if you make many queries that run quickly and return large\nresult sets then you might indeed notice the impact of an encrypted\nconnection vs. a non-encrypted connection. The most reliable way\nto assess the impact would be to run representative queries over\nyour data and measure the difference yourself.\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 11 Feb 2008 17:37:51 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions about enabling SSL" }, { "msg_contents": "On Mon, Feb 11, 2008 at 05:37:51PM -0700, Michael Fuhr wrote:\n> On Mon, Feb 11, 2008 at 04:58:35PM -0700, fabrix peñuelas wrote:\n> > If ssl is enabled in postgresql, does it decrease the performance of the database?\n> > How much?\n> \n> The performance impact of an encrypted connection depends on how\n> expensive the queries are and how much data they return.\n\nAnother consideration is how much time you spend using each connection\nvs. how much time it takes to establish each connection. 
A thousand\n> simple queries over the same encrypted connection might be significantly\n> faster than running each query over a separate unencrypted connection,\n> which in turn will probably be significantly faster than using\n> separate encrypted connections that must each carry out a relatively\n> expensive key establishment.\n>\n> --\n> Michael Fuhr\n>\n\nThanks Michael...2008/2/11, Michael Fuhr <[email protected]>:\nOn Mon, Feb 11, 2008 at 05:37:51PM -0700, Michael Fuhr wrote:> On Mon, Feb 11, 2008 at 04:58:35PM -0700, fabrix peñuelas wrote:> > If ssl is enable  in postgresql decreanse the performance of the database?\n> > How much?>> The performance impact of an encrypted connection depends on how> expensive the queries are and how much data they return.Another consideration is how much time you spend using each connection\nvs. how much time it takes to establish each connection.  A thousandsimple queries over the same encrypted connection might be significantlyfaster than running each query over a separate unencrypted connection,\nwhich in turn will probably be significantly faster than usingseparate encrypted connections that must each carry out a relativelyexpensive key establishment.--Michael Fuhr", "msg_date": "Tue, 12 Feb 2008 10:46:52 -0700", "msg_from": "\"=?ISO-8859-1?Q?fabrix_pe=F1uelas?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Questions about enabling SSL" } ]
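A rough sketch of the "measure it yourself" approach suggested above, assuming the server was built with OpenSSL support and has server.crt and server.key in its data directory; the host, user, database and transaction counts are placeholders, not values from the thread.

# postgresql.conf (restart the server afterwards)
ssl = on

# Time the same read-only workload over an encrypted and an unencrypted connection:
PGSSLMODE=require pgbench -h dbhost -U app -c 10 -t 1000 -S testdb
PGSSLMODE=disable pgbench -h dbhost -U app -c 10 -t 1000 -S testdb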
[ { "msg_contents": "Hi,\n\n I am using Postgres 8.2.4, we have to regularly run some queries on \nsome big tables to see if we have any data for a particular request. But \nsometimes we might not have any matching rows on a particular request as \nin this case, when it cant find any matching rows it pretty much scans \nthe whole table and it takes too long to execute.\n\n As you can see from explain analyze output the response time is \nhorrible, Is there anything I can do to improve these queries ?\n\n Tables are autovacuumed regularly.\n\n\n select relname,relpages,reltuples from pg_class where relname in \n('listing','listingstatus','listedaddress');\n\n relname | relpages | reltuples\n---------------+----------+-------------\n listing | 132725 | 9.22896e+06\n listingstatus | 1 | 6\n listedaddress | 63459 | 8.15774e+06\n(3 rows)\n\nhelix_fdc=# select relname,last_autovacuum,last_autoanalyze from \npg_stat_user_tables where relname in ('listing','listedaddress');\n relname | last_autovacuum | last_autoanalyze\n---------------+-------------------------------+-------------------------------\n listing | 2008-02-12 10:57:54.690913-05 | 2008-02-12 \n10:57:54.690913-05\n listedaddress | 2008-02-09 14:12:44.038341-05 | 2008-02-12 \n11:17:47.822597-05\n(3 rows)\n\nExplain Analyze Output\n================\n\nexplain analyze\nselect listing0_.listingid as listingid157_, listing0_.entrydate as \nentrydate157_, listing0_.lastupdate as lastupdate157_,\n listing0_.sourcereference as sourcere4_157_, listing0_.start as \nstart157_, listing0_.stop as stop157_,\n listing0_.price as price157_, listing0_.updateHashcode as \nupdateHa8_157_, listing0_.fklistedaddressid as fklisted9_157_,\n listing0_.fklistingsubtypeid as fklisti10_157_, \nlisting0_.fkbestaddressid as fkbesta11_157_,\n listing0_.fklistingsourceid as fklisti12_157_, \nlisting0_.fklistingtypeid as fklisti13_157_,\n listing0_.fklistingstatusid as fklisti14_157_, \nlisting0_.fkpropertytypeid as fkprope15_157_\nfrom listing.listing listing0_, listing.listingstatus listingsta1_, \nlisting.listedaddress listedaddr2_\nwhere listing0_.fklistingstatusid=listingsta1_.listingstatusid\nand listing0_.fklistedaddressid=listedaddr2_.listedaddressid\nand listing0_.fklistingsourceid=5525\nand listingsta1_.shortname='active'\nand (listedaddr2_.fkverifiedaddressid is not null)\norder by listing0_.entrydate desc limit 10;\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..11191.64 rows=10 width=107) (actual \ntime=2113544.437..2113544.437 rows=0 loops=1)\n -> Nested Loop (cost=0.00..790129.94 rows=706 width=107) (actual \ntime=2113544.427..2113544.427 rows=0 loops=1)\n -> Nested Loop (cost=0.00..783015.53 rows=853 width=107) \n(actual time=2113544.420..2113544.420 rows=0 loops=1)\n -> Index Scan Backward using idx_listing_entrydate on \nlisting listing0_ (cost=0.00..781557.28 rows=5118 width=107) (actual \ntime=2113544.412..2113544.412 rows=0 loops=1)\n Filter: (fklistingsourceid = 5525)\n -> Index Scan using pk_listingstatus_listingstatusid on \nlistingstatus listingsta1_ (cost=0.00..0.27 rows=1 width=4) (never \nexecuted)\n Index Cond: (listing0_.fklistingstatusid = \nlistingsta1_.listingstatusid)\n Filter: (shortname = 'active'::text)\n -> Index Scan using pk_listedaddress_listedaddressid on \nlistedaddress listedaddr2_ (cost=0.00..8.33 rows=1 width=4) (never \nexecuted)\n Index Cond: 
(listing0_.fklistedaddressid = \nlistedaddr2_.listedaddressid)\n Filter: (fkverifiedaddressid IS NOT NULL)\n Total runtime: 2113544.580 ms\n(12 rows)\n\n\nTable Definitions\n============\n\n \\d listing.listing\n Table \"listing.listing\"\n Column | Type \n| Modifiers\n--------------------+-----------------------------+------------------------------------------------------------------\n listingid | integer | not null default \nnextval(('listing.listingseq'::text)::regclass)\n fklistingsourceid | integer | not null\n fklistingtypeid | integer | not null\n entrydate | timestamp without time zone | not null\n lastupdate | timestamp without time zone | not null\n fklistedaddressid | integer |\n fkbestaddressid | integer |\n sourcereference | text |\n fkpropertytypeid | integer | not null\n fklistingstatusid | integer | not null\n start | timestamp without time zone | not null\n stop | timestamp without time zone |\n _entrydate | timestamp without time zone | default \n('now'::text)::timestamp(6) without time zone\n price | numeric(14,2) |\n fklistingsubtypeid | integer |\n updatehashcode | text |\nIndexes:\n \"pk_listing_listingid\" PRIMARY KEY, btree (listingid), tablespace \n\"indexdata\"\n \"idx_listing_entrydate\" btree (entrydate), tablespace \"indexdata\"\n \"idx_listing_fkbestaddressid\" btree (fkbestaddressid), tablespace \n\"indexdata\"\n \"idx_listing_fklistingsourceid\" btree (fklistingsourceid), \ntablespace \"indexdata\"\n \"idx_listing_fklistingtypeid\" btree (fklistingtypeid), tablespace \n\"indexdata\"\n \"idx_listing_lastupdate\" btree (lastupdate), tablespace \"indexdata\"\n \"idx_listing_sourcereference\" btree (sourcereference), tablespace \n\"indexdata\"\n \"idx_listing_stop\" btree (stop), tablespace \"indexdata\"\n \"idx_listing_updatehashcode\" btree (updatehashcode), tablespace \n\"indexdata\"\nForeign-key constraints:\n \"fk_listing_address\" FOREIGN KEY (fkbestaddressid) REFERENCES \nlisting.address(addressid)\n \"fk_listing_listedaddress\" FOREIGN KEY (fklistedaddressid) \nREFERENCES listing.listedaddress(listedaddressid)\n \"fk_listing_listingsource\" FOREIGN KEY (fklistingsourceid) \nREFERENCES listing.listingsource(listingsourceid)\n \"fk_listing_listingstatus\" FOREIGN KEY (fklistingstatusid) \nREFERENCES listing.listingstatus(listingstatusid)\n \"fk_listing_listingsubtype\" FOREIGN KEY (fklistingsubtypeid) \nREFERENCES listing.listingsubtype(listingsubtypeid)\n \"fk_listing_listingtypes\" FOREIGN KEY (fklistingtypeid) REFERENCES \nlisting.listingtype(listingtypeid)\n \"fk_listing_propertytype\" FOREIGN KEY (fkpropertytypeid) REFERENCES \nlisting.propertytype(propertytypeid)\n\n\\d listing.listedaddress\n Table \"listing.listedaddress\"\n Column | Type \n| Modifiers\n---------------------+-----------------------------+------------------------------------------------------------------------\n listedaddressid | integer | not null default \nnextval(('listing.listedaddressseq'::text)::regclass)\n fkaddressid | integer |\n fkverifiedaddressid | integer |\n verifyattempt | timestamp without time zone |\n _entrydate | timestamp without time zone | default \n('now'::text)::timestamp(6) without time zone\nIndexes:\n \"pk_listedaddress_listedaddressid\" PRIMARY KEY, btree \n(listedaddressid), tablespace \"indexdata\"\n \"uk_listedaddress_fkaddressid\" UNIQUE, btree (fkaddressid), \ntablespace \"indexdata\"\n \"idx_listedaddress_fkverifiedaddressid\" btree (fkverifiedaddressid), \ntablespace \"indexdata\"\nForeign-key constraints:\n \"fk_listedaddress_address\" FOREIGN KEY 
(fkaddressid) REFERENCES \nlisting.address(addressid)\n \"fk_listedaddress_verifiedaddress\" FOREIGN KEY (fkverifiedaddressid) \nREFERENCES listing.verifiedaddress(verifiedaddressid)\n\n \\d listing.listingstatus\n Table \"listing.listingstatus\"\n Column | Type \n| Modifiers\n-----------------+-----------------------------+------------------------------------------------------------------------\n listingstatusid | integer | not null default \nnextval(('listing.listingstatusseq'::text)::regclass)\n shortname | text |\n longname | text |\n _entrydate | timestamp without time zone | default \n('now'::text)::timestamp(6) without time zone\nIndexes:\n \"pk_listingstatus_listingstatusid\" PRIMARY KEY, btree \n(listingstatusid), tablespace \"indexdata\"\n\n\n\nTIA,\nPallav\n", "msg_date": "Tue, 12 Feb 2008 16:35:40 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing No matching record Queries" }, { "msg_contents": "Pallav Kalva asked\n...\n> and listing0_.fklistingsourceid=5525\n...\n> order by listing0_.entrydate desc limit 10;\n\n> -> Index Scan Backward using idx_listing_entrydate on \n> listing listing0_ (cost=0.00..781557.28 rows=5118 width=107) (actual \n> time=2113544.412..2113544.412 rows=0 loops=1)\n> Filter: (fklistingsourceid = 5525)\n\nWould it help to have a combined index on fklistingsourceid, entrydate?\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Wed, 13 Feb 2008 11:09:29 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing No matching record Queries" }, { "msg_contents": "On 2008-02-12 13:35, Pallav Kalva wrote:\n> Hi,\n>\n> ...\n> Table Definitions\n> ============\n>\n> \\d listing.listingstatus\n> Table \"listing.listingstatus\"\n> Column | Type \n> | Modifiers\n> -----------------+-----------------------------+------------------------------------------------------------------------ \n>\n> listingstatusid | integer | not null default \n> nextval(('listing.listingstatusseq'::text)::regclass)\n> shortname | text |\n> longname | text |\n> _entrydate | timestamp without time zone | default \n> ('now'::text)::timestamp(6) without time zone\n> Indexes:\n> \"pk_listingstatus_listingstatusid\" PRIMARY KEY, btree \n> (listingstatusid), tablespace \"indexdata\"\n>\nSince you are searching by \"shortname\", trying adding an index on that. \nAlthough with that tiny a table, it might not matter.\n\nThe questions are:\n\n1. Why in the planner scanning the entire idx_listing_entrydate, when \nI'd think it should be scanning the entire \npk_listingstatus_listingstatusid ?\n2. 
Why is \"Index Scan using pk_listingstatus_listingstatusid on \nlistingstatus listingsta1_ (cost=0.00..0.27 rows=1 width=4) (never \nexecuted)\" ?\n\nNote: I'm new at this as well, and jumped in to learn as well as to help.\n\n-- Dean\n\n-- \nMail to my list address MUST be sent via the mailing list. All other mail will bounce.\n\n", "msg_date": "Tue, 12 Feb 2008 16:07:29 -0800", "msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing No matching record Queries" }, { "msg_contents": "\"Stephen Denne\" <[email protected]> writes:\n\n> Pallav Kalva asked\n> ...\n>> and listing0_.fklistingsourceid=5525\n> ...\n>> order by listing0_.entrydate desc limit 10;\n>\n>> -> Index Scan Backward using idx_listing_entrydate on \n>> listing listing0_ (cost=0.00..781557.28 rows=5118 width=107) (actual \n>> time=2113544.412..2113544.412 rows=0 loops=1)\n>> Filter: (fklistingsourceid = 5525)\n>\n> Would it help to have a combined index on fklistingsourceid, entrydate?\n\nI think that would help. You already have a ton of indexes, you might consider\nwhether all your queries start with a listingsourceid and whether you can have\nthat as a prefix on the existing index.\n\nAnother thing to try is raising the stats target on fklistingsourceid and/or\nentrydate. The estimate seems pretty poor. It could just be that the\ndistribution is highly skewed which is a hard case to estimate correctly.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Wed, 13 Feb 2008 00:16:44 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing No matching record Queries" }, { "msg_contents": "Dean Gibson (DB Administrator) wrote:\n> The questions are:\n> \n> 1. Why in the planner scanning the entire idx_listing_entrydate, when \n> I'd think it should be scanning the entire \n> pk_listingstatus_listingstatusid ?\n\nIt's looking at the ORDER BY and sees that the query needs the 10 most \nrecent, so tries searching by date. That's sensible where you are going \nto have a lot of matches for fklistingsourceid.\n\nWhich suggests that statistics for \"fklistingsourceid\" aren't high \nenough, like Greg suggested. If that doesn't help, the index on \n(fklistingsourceid,entrydate) that Stephen might well do so.\n\n> 2. Why is \"Index Scan using pk_listingstatus_listingstatusid on \n> listingstatus listingsta1_ (cost=0.00..0.27 rows=1 width=4) (never \n> executed)\" ?\n\nBecause nothing comes out of the first index-scan.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 13 Feb 2008 08:54:48 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing No matching record Queries" }, { "msg_contents": "Thanks! for all your replies, I tried increasing the statistics on \nfklistingsourceid to 1000 it made any difference. Then I created an \nindex on (fklistingsourceid,entrydate) it helped and it was fast.\n\nThis index would fix this problem but in general I would like to know \nwhat if there are queries where it does \"index scan backwards\" and \nthere is no \"order by clause\" and the query is still bad ? Would there \nbe a case like that or the planner uses index scan backwards only when \nuse order by desc also.\n\n\nRichard Huxton wrote:\n> Dean Gibson (DB Administrator) wrote:\n>> The questions are:\n>>\n>> 1. 
Why in the planner scanning the entire idx_listing_entrydate, when \n>> I'd think it should be scanning the entire \n>> pk_listingstatus_listingstatusid ?\n>\n> It's looking at the ORDER BY and sees that the query needs the 10 most \n> recent, so tries searching by date. That's sensible where you are \n> going to have a lot of matches for fklistingsourceid.\n>\n> Which suggests that statistics for \"fklistingsourceid\" aren't high \n> enough, like Greg suggested. If that doesn't help, the index on \n> (fklistingsourceid,entrydate) that Stephen might well do so.\n>\n>> 2. Why is \"Index Scan using pk_listingstatus_listingstatusid on \n>> listingstatus listingsta1_ (cost=0.00..0.27 rows=1 width=4) (never \n>> executed)\" ?\n>\n> Because nothing comes out of the first index-scan.\n>\n\n", "msg_date": "Wed, 13 Feb 2008 14:46:41 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing No matching record Queries" }, { "msg_contents": "\"Pallav Kalva\" <[email protected]> writes:\n\n> This index would fix this problem but in general I would like to know what if\n> there are queries where it does \"index scan backwards\" and there is no \"order\n> by clause\" and the query is still bad ? Would there be a case like that or the\n> planner uses index scan backwards only when use order by desc also.\n\nI think you're oversimplifying. Basically you were asking the planner for the\nmost recent record for a given user. The planner had the choice either of\n\na) going through all the records for a given user and picking the most recent,\n\nor b) scanning the records from most recent to oldest and looking for the\ngiven user.\n\nIt was a choice between two evils. If there are a lot of records for the user\nthen a) will be bad since it has to scan all of them to find the most recent\nand if there are no records for the user then b) will be bad because it'll\nhave to go through all of the records to the beginning of time.\n\nThe suggested index lets it scan the records for the given user from most\nrecent to oldest without seeing any records for any other user.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Wed, 13 Feb 2008 20:51:34 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing No matching record Queries" } ]
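A sketch of the combined index that resolved this thread (the index name is illustrative). It lets the backward scan behind ORDER BY entrydate DESC LIMIT 10 visit only the rows of the requested listing source instead of filtering its way through idx_listing_entrydate:

CREATE INDEX idx_listing_sourceid_entrydate
    ON listing.listing (fklistingsourceid, entrydate);
ANALYZE listing.listing;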
[ { "msg_contents": "We have a web application for which we intend to run the database on a\ndedicated server.\n\nWe hope by the end of 2008 to have 10 companies accessing 10 instances\nof the database for this application. The dump file of each database is\nlikely to be less than 100MB at the end of the year. The databases are\nread-heavy.\n\nI'm thinking of something along the following lines:\n\n(https://secure.dnuk.com/systems/r325hs-1u.php?configuration=7766)\n\n 4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB cache BBU\n 2x Intel Xeon E5405 / 4x 2.00GHz / 1333MHz FSB / 12MB cache\n 6GB RAM\n\n Cost around 2320 GBP -- it would be great to get it under 2000\n Needs to be in the UK.\n\nI would be grateful for any comments. I'm particularly out of date about\nthe best processors to go for. DNUK also have Opteron as an option.\n\nRory\n\n\n\n", "msg_date": "Wed, 13 Feb 2008 13:12:06 +0000", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "Small DB Server Advice" }, { "msg_contents": "On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:\n> 4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB cache BBU\n> 2x Intel Xeon E5405 / 4x 2.00GHz / 1333MHz FSB / 12MB cache\n> 6GB RAM\n>\n> Cost around 2320 GBP -- it would be great to get it under 2000\n> Needs to be in the UK.\n\n> I would be grateful for any comments. I'm particularly out of date about\n> the best processors to go for. DNUK also have Opteron as an option.\n\nThat sounds pretty good. It should run postgres fairly well, especially if \nyou have quite a few parallel queries coming in. You won't need a bigger \nBBU cache if it's read-heavy. You'll have eight CPU cores, which is good. \nAnd RAID 10 is good.\n\nAs for Intel/AMD, I think they're neck-and-neck at the moment. Both are \nfast.\n\nOf course, we over here have no idea how much actual read traffic there \nwill be, so you may be massively over-providing or it may be woefully \ninadequate, but this machine looks like a fairly good buy for the price.\n\nMatthew\n\n-- \nNo trees were killed in the sending of this message. However a large\nnumber of electrons were terribly inconvenienced.\n", "msg_date": "Wed, 13 Feb 2008 13:53:37 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small DB Server Advice" }, { "msg_contents": "Matthew wrote:\n> On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:\n>> 4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB \n>> cache BBU\n>> 2x Intel Xeon E5405 / 4x 2.00GHz / 1333MHz FSB / 12MB cache\n>> 6GB RAM\n>>\n>> Cost around 2320 GBP -- it would be great to get it under 2000\n>> Needs to be in the UK.\n> \n>> I would be grateful for any comments. I'm particularly out of date about\n>> the best processors to go for. DNUK also have Opteron as an option.\n> \n> That sounds pretty good. It should run postgres fairly well, especially \n> if you have quite a few parallel queries coming in. You won't need a \n> bigger BBU cache if it's read-heavy. You'll have eight CPU cores, which \n> is good. And RAID 10 is good.\n\nIn my experience, battery backed cache is always worth the money. Even \nif you're mostly select, you will have some updates. 
And it'll also pick \nup other write activity onthe system...\n\n\n//Magnus\n", "msg_date": "Wed, 13 Feb 2008 15:32:08 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small DB Server Advice" }, { "msg_contents": "On Wed, 13 Feb 2008, Magnus Hagander wrote:\n>> On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:\n>>> 4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB cache \n>>> BBU\n\n> In my experience, battery backed cache is always worth the money. Even if \n> you're mostly select, you will have some updates. And it'll also pick up \n> other write activity onthe system...\n\nOf course. My point was that 64MB should be quite sufficient if most \naccesses are reads. We have a few machines here with 2GB BBU caches as we \ndo LOTS of writes - that sort of thing probably isn't necessary here.\n\nMatthew\n\n-- \nI suppose some of you have done a Continuous Maths course. Yes? Continuous\nMaths? <menacing stares from audience> Whoah, it was like that, was it!\n -- Computer Science Lecturer\n", "msg_date": "Wed, 13 Feb 2008 14:35:27 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small DB Server Advice" }, { "msg_contents": "On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:\n\n> (https://secure.dnuk.com/systems/r325hs-1u.php?configuration=7766)\n> 4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB cache BBU\n\nThat's running the LSI Megaraid SCSI controller. Those are solid but not \nthe best performers in their class, particularly on writes. But given \nyour application description (1GB data in a year and read-heavy) that card \nrunning a 4-spindle RAID10 should be overkill.\n\n> I'm particularly out of date about the best processors to go for. DNUK \n> also have Opteron as an option.\n\nCurrent Intel chips benchmark better, occasionally you'll find people who \nclaim the better multi-CPU memory model in the Opteron systems give them \nbetter performance at high loads but that's difficult to quantify. \nThere's not a huge difference in any case. You're probably going to \nbottleneck on either disk or how fast DDR2 memory goes anyway and both \nsets of products are competative right now.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 13 Feb 2008 12:38:28 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small DB Server Advice" } ]
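Before a box like the one discussed above goes into production, a short pgbench run is a cheap way to confirm that the RAID10 array and battery-backed cache behave as hoped; a rough sketch, with an arbitrary database name and scale (scale 100 is roughly 1.5GB of data):

createdb benchdb
pgbench -i -s 100 benchdb
pgbench -c 8 -t 2000 benchdb        # TPC-B-like, write-heavy: exercises the controller cache
pgbench -c 8 -t 20000 -S benchdb    # select-only: closer to the read-heavy target workload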
[ { "msg_contents": "I want to create and update two tables in a function such as below, but\nusing parameters as tablename is not allowed and gives an error. Is there\nany way I could achieve this?\n\nCREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS numeric AS $$\ndeclare temp1 text;\ndeclare temp2 text;\nbegin\n temp1=t1;\n temp2=t2;\nselect\nproduct,\n(case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end ) as gppp\ninto temp2 from temp1 as dummy\ngroup by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months;\n\nupdate temp1 as t set\n GPPP=(select gppp from temp2 as dummy where dummy.product=t.product),\n\nend\n$$ LANGUAGE plpgsql\n\n\n----------------------\nERROR: syntax error at or near \"$1\"\nLINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from $1 as dum...\n ^\nQUERY: select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100\nelse 0 end ) as gppp from $1 as dummy group by dummy.product,\ndummy.totalclaimsgroup,dummy.avgmems,dummy.months\nCONTEXT: SQL statement in PL/PgSQL function \"test\" near line 10\n\n********** Error **********\n\nERROR: syntax error at or near \"$1\"\nSQL state: 42601\nContext: SQL statement in PL/PgSQL function \"test\" near line 10\n\nI want to create and update two tables in a function such as below, but using parameters as tablename is not allowed and gives an error. Is there any way I could achieve this?CREATE OR REPLACE FUNCTION test ( t1  text,t2 text  ) RETURNS numeric AS $$\ndeclare temp1 text;declare temp2 text;begin    temp1=t1;    temp2=t2;selectproduct,(case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end  ) as gpppinto temp2 from temp1  as dummy\ngroup by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months;update temp1 as t  set GPPP=(select gppp  from temp2  as dummy where dummy.product=t.product),end$$ LANGUAGE plpgsql\n----------------------ERROR:  syntax error at or near \"$1\"LINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from  $1  as dum...                                                             ^\nQUERY:  select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end ) as gppp from  $1  as dummy group by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.monthsCONTEXT:  SQL statement in PL/PgSQL function \"test\" near line 10\n********** Error **********ERROR: syntax error at or near \"$1\"SQL state: 42601Context: SQL statement in PL/PgSQL function \"test\" near line 10", "msg_date": "Wed, 13 Feb 2008 19:25:02 +0500", "msg_from": "\"Linux Guru\" <[email protected]>", "msg_from_op": true, "msg_subject": "Creating and updating table using function parameter reference" }, { "msg_contents": "A Dimecres 13 Febrer 2008 15:25, Linux Guru va escriure:\n> I want to create and update two tables in a function such as below, but\n> using parameters as tablename is not allowed and gives an error. 
Is there\n> any way I could achieve this?\n\nYou're looking for EXECUTE:\nhttp://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n\n>\n> CREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS numeric AS $$\n> declare temp1 text;\n> declare temp2 text;\n> begin\n> temp1=t1;\n> temp2=t2;\n> select\n> product,\n> (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end ) as gppp\n> into temp2 from temp1 as dummy\n> group by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months;\n>\n> update temp1 as t set\n> GPPP=(select gppp from temp2 as dummy where dummy.product=t.product),\n>\n> end\n> $$ LANGUAGE plpgsql\n>\n>\n> ----------------------\n> ERROR: syntax error at or near \"$1\"\n> LINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from $1 as dum...\n> ^\n> QUERY: select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100\n> else 0 end ) as gppp from $1 as dummy group by dummy.product,\n> dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> CONTEXT: SQL statement in PL/PgSQL function \"test\" near line 10\n>\n> ********** Error **********\n>\n> ERROR: syntax error at or near \"$1\"\n> SQL state: 42601\n> Context: SQL statement in PL/PgSQL function \"test\" near line 10\n\n \n", "msg_date": "Wed, 13 Feb 2008 16:23:56 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating and updating table using function parameter reference" }, { "msg_contents": "I still cannot pass tablename, what is wrong?\nIs this the right way?\n\n\nCREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS numeric AS $$\ndeclare temp1 text;\ndeclare temp2 text;\ndeclare cmd text;\ndeclare t2row RECORD;\nbegin\n temp1=t1;\n temp2=t2;\n cmd='select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100\nelse 0 end ) as gppp\nfrom ' temp1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,\ndummy.avgmems,dummy.months';\nexecute cmd into t2row\n\n--After executing above, I need here to update table t1\n\nend;\n$$ LANGUAGE plpgsql\n\n----------------\n\n\nERROR: syntax error at or near \"$1\"\nLINE 2: from ' $1 ' as dummy group by dummy.product,dummy.totalcla...\n ^\nQUERY: SELECT 'select product, (case when sum(pd) <> 0 then\nsum(gd)/sum(pd)*100 else 0 end ) as gppp\nfrom ' $1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,\ndummy.avgmems,dummy.months'\nCONTEXT: SQL statement in PL/PgSQL function \"test\" near line 9\n\n********** Error **********\n\nERROR: syntax error at or near \"$1\"\nSQL state: 42601\nContext: SQL statement in PL/PgSQL function \"test\" near line 9\n\nOn Wed, Feb 13, 2008 at 8:23 PM, Albert Cervera Areny <[email protected]>\nwrote:\n\n> A Dimecres 13 Febrer 2008 15:25, Linux Guru va escriure:\n> > I want to create and update two tables in a function such as below, but\n> > using parameters as tablename is not allowed and gives an error. 
Is\n> there\n> > any way I could achieve this?\n>\n> You're looking for EXECUTE:\n>\n> http://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n>\n> >\n> > CREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS numeric AS\n> $$\n> > declare temp1 text;\n> > declare temp2 text;\n> > begin\n> > temp1=t1;\n> > temp2=t2;\n> > select\n> > product,\n> > (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end ) as gppp\n> > into temp2 from temp1 as dummy\n> > group by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> ;\n> >\n> > update temp1 as t set\n> > GPPP=(select gppp from temp2 as dummy where dummy.product=t.product),\n> >\n> > end\n> > $$ LANGUAGE plpgsql\n> >\n> >\n> > ----------------------\n> > ERROR: syntax error at or near \"$1\"\n> > LINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from $1 as\n> dum...\n> > ^\n> > QUERY: select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100\n> > else 0 end ) as gppp from $1 as dummy group by dummy.product,\n> > dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> > CONTEXT: SQL statement in PL/PgSQL function \"test\" near line 10\n> >\n> > ********** Error **********\n> >\n> > ERROR: syntax error at or near \"$1\"\n> > SQL state: 42601\n> > Context: SQL statement in PL/PgSQL function \"test\" near line 10\n>\n>\n>\n\nI still cannot pass tablename, what is wrong?Is this the right way?CREATE OR REPLACE FUNCTION test ( t1  text,t2 text  ) RETURNS numeric AS $$declare temp1 text;declare temp2 text;    declare cmd text;\ndeclare t2row RECORD;begin    temp1=t1;    temp2=t2;    cmd='select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end  ) as gpppfrom ' temp1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months';\nexecute cmd into t2row--After executing above, I need here to update table t1end;$$ LANGUAGE plpgsql----------------ERROR:  syntax error at or near \"$1\"LINE 2: from '  $1  ' as dummy group by dummy.product,dummy.totalcla...\n                ^QUERY:  SELECT 'select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end  ) as gpppfrom '  $1  ' as dummy group by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months'\nCONTEXT:  SQL statement in PL/PgSQL function \"test\" near line 9********** Error **********ERROR: syntax error at or near \"$1\"SQL state: 42601Context: SQL statement in PL/PgSQL function \"test\" near line 9\nOn Wed, Feb 13, 2008 at 8:23 PM, Albert Cervera Areny <[email protected]> wrote:\nA Dimecres 13 Febrer 2008 15:25, Linux Guru va escriure:\n> I want to create and update two tables in a function such as below, but\n> using parameters as tablename is not allowed and gives an error. 
Is there\n> any way I could achieve this?\n\nYou're looking for EXECUTE:\nhttp://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n\n>\n> CREATE OR REPLACE FUNCTION test ( t1  text,t2 text  ) RETURNS numeric AS $$\n> declare temp1 text;\n> declare temp2 text;\n> begin\n>     temp1=t1;\n>     temp2=t2;\n> select\n> product,\n> (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end  ) as gppp\n> into temp2 from temp1  as dummy\n> group by dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months;\n>\n> update temp1 as t  set\n>  GPPP=(select gppp  from temp2  as dummy where dummy.product=t.product),\n>\n> end\n> $$ LANGUAGE plpgsql\n>\n>\n> ----------------------\n> ERROR:  syntax error at or near \"$1\"\n> LINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from  $1  as dum...\n>                                                              ^\n> QUERY:  select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100\n> else 0 end ) as gppp from  $1  as dummy group by dummy.product,\n> dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> CONTEXT:  SQL statement in PL/PgSQL function \"test\" near line 10\n>\n> ********** Error **********\n>\n> ERROR: syntax error at or near \"$1\"\n> SQL state: 42601\n> Context: SQL statement in PL/PgSQL function \"test\" near line 10", "msg_date": "Thu, 14 Feb 2008 17:35:27 +0500", "msg_from": "\"Linux Guru\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creating and updating table using function parameter reference" }, { "msg_contents": "You need the string concatenation operator ||. Take a look at \nhttp://www.postgresql.org/docs/8.3/static/functions-string.html\n\nBy the way, this is off-topic in this list please, post general \nnon-performance questions to pgsql-general.\n\nA Dijous 14 Febrer 2008 13:35, Linux Guru va escriure:\n> I still cannot pass tablename, what is wrong?\n> Is this the right way?\n>\n>\n> CREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS numeric AS $$\n> declare temp1 text;\n> declare temp2 text;\n> declare cmd text;\n> declare t2row RECORD;\n> begin\n> temp1=t1;\n> temp2=t2;\n> cmd='select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100\n> else 0 end ) as gppp\n> from ' temp1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,\n> dummy.avgmems,dummy.months';\n> execute cmd into t2row\n>\n> --After executing above, I need here to update table t1\n>\n> end;\n> $$ LANGUAGE plpgsql\n>\n> ----------------\n>\n>\n> ERROR: syntax error at or near \"$1\"\n> LINE 2: from ' $1 ' as dummy group by dummy.product,dummy.totalcla...\n> ^\n> QUERY: SELECT 'select product, (case when sum(pd) <> 0 then\n> sum(gd)/sum(pd)*100 else 0 end ) as gppp\n> from ' $1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,\n> dummy.avgmems,dummy.months'\n> CONTEXT: SQL statement in PL/PgSQL function \"test\" near line 9\n>\n> ********** Error **********\n>\n> ERROR: syntax error at or near \"$1\"\n> SQL state: 42601\n> Context: SQL statement in PL/PgSQL function \"test\" near line 9\n>\n> On Wed, Feb 13, 2008 at 8:23 PM, Albert Cervera Areny <[email protected]>\n>\n> wrote:\n> > A Dimecres 13 Febrer 2008 15:25, Linux Guru va escriure:\n> > > I want to create and update two tables in a function such as below, but\n> > > using parameters as tablename is not allowed and gives an error. 
Is\n> >\n> > there\n> >\n> > > any way I could achieve this?\n> >\n> > You're looking for EXECUTE:\n> >\n> > http://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL\n> >-STATEMENTS-EXECUTING-DYN\n> >\n> > > CREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS numeric\n> > > AS\n> >\n> > $$\n> >\n> > > declare temp1 text;\n> > > declare temp2 text;\n> > > begin\n> > > temp1=t1;\n> > > temp2=t2;\n> > > select\n> > > product,\n> > > (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end ) as gppp\n> > > into temp2 from temp1 as dummy\n> > > group by\n> > > dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> >\n> > ;\n> >\n> > > update temp1 as t set\n> > > GPPP=(select gppp from temp2 as dummy where\n> > > dummy.product=t.product),\n> > >\n> > > end\n> > > $$ LANGUAGE plpgsql\n> > >\n> > >\n> > > ----------------------\n> > > ERROR: syntax error at or near \"$1\"\n> > > LINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from $1 as\n> >\n> > dum...\n> >\n> > > ^\n> > > QUERY: select product, (case when sum(pd) <> 0 then\n> > > sum(gd)/sum(pd)*100 else 0 end ) as gppp from $1 as dummy group by\n> > > dummy.product, dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> > > CONTEXT: SQL statement in PL/PgSQL function \"test\" near line 10\n> > >\n> > > ********** Error **********\n> > >\n> > > ERROR: syntax error at or near \"$1\"\n> > > SQL state: 42601\n> > > Context: SQL statement in PL/PgSQL function \"test\" near line 10\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. 
Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Thu, 14 Feb 2008 13:54:28 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating and updating table using function parameter reference" }, { "msg_contents": "thanks, i posted in this listed because it was related to my previous query.\nAnyway, I am able to achieve, with the help in this mailing list, what I\nwanted but is there any way to further optimize this.\n\nThanks\n\nCREATE OR REPLACE FUNCTION test ( t1 text ) RETURNS numeric AS $$\ndeclare cmd1 text;\ndeclare cmd2 text;\ndeclare t2row RECORD;\nbegin\n\ncmd1=' select\nproduct, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end ) as\ngppp, (case when sum(tld) <> 0 then sum(pd)/sum(tld) else 0 end ) as ppd\nfrom '|| t1 || ' as dummy group by dummy.product' ;\n\nfor t2row in execute cmd1 loop\n\n cmd2 = 'update ' || t1 || ' as t set GPPP=' ||t2row.gppp||' where\nproduct='||quote_literal(t2row.product);\n execute cmd2;\n\n cmd2 = 'update ' || t1 || ' as t set PPD=' ||t2row.ppd||' where\nproduct='||quote_literal(t2row.product);\n execute cmd2;\n\nEND LOOP;\nRETURN NULL;\nend;\n$$ LANGUAGE plpgsql\nVOLATILE\n\n\nOn Thu, Feb 14, 2008 at 5:54 PM, Albert Cervera Areny <[email protected]>\nwrote:\n\n> You need the string concatenation operator ||. Take a look at\n> http://www.postgresql.org/docs/8.3/static/functions-string.html\n>\n> By the way, this is off-topic in this list please, post general\n> non-performance questions to pgsql-general.\n>\n> A Dijous 14 Febrer 2008 13:35, Linux Guru va escriure:\n> > I still cannot pass tablename, what is wrong?\n> > Is this the right way?\n> >\n> >\n> > CREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS numeric AS\n> $$\n> > declare temp1 text;\n> > declare temp2 text;\n> > declare cmd text;\n> > declare t2row RECORD;\n> > begin\n> > temp1=t1;\n> > temp2=t2;\n> > cmd='select product, (case when sum(pd) <> 0 then\n> sum(gd)/sum(pd)*100\n> > else 0 end ) as gppp\n> > from ' temp1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,\n> > dummy.avgmems,dummy.months';\n> > execute cmd into t2row\n> >\n> > --After executing above, I need here to update table t1\n> >\n> > end;\n> > $$ LANGUAGE plpgsql\n> >\n> > ----------------\n> >\n> >\n> > ERROR: syntax error at or near \"$1\"\n> > LINE 2: from ' $1 ' as dummy group by dummy.product,dummy.totalcla...\n> > ^\n> > QUERY: SELECT 'select product, (case when sum(pd) <> 0 then\n> > sum(gd)/sum(pd)*100 else 0 end ) as gppp\n> > from ' $1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,\n> > dummy.avgmems,dummy.months'\n> > CONTEXT: SQL statement in PL/PgSQL function \"test\" near line 9\n> >\n> > ********** Error **********\n> >\n> > ERROR: syntax error at or near \"$1\"\n> > SQL state: 42601\n> > Context: SQL statement in PL/PgSQL function \"test\" near line 9\n> >\n> > On Wed, Feb 13, 2008 at 8:23 PM, Albert Cervera Areny <[email protected]\n> >\n> >\n> > wrote:\n> > > A Dimecres 13 Febrer 2008 15:25, Linux Guru va escriure:\n> > > > I want to create and update two tables in a function such as below,\n> but\n> > > > using parameters as tablename is not allowed and gives an error. 
Is\n> > >\n> > > there\n> > >\n> > > > any way I could achieve this?\n> > >\n> > > You're looking for EXECUTE:\n> > >\n> > >\n> http://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL\n> > >-STATEMENTS-EXECUTING-DYN\n> > >\n> > > > CREATE OR REPLACE FUNCTION test ( t1 text,t2 text ) RETURNS\n> numeric\n> > > > AS\n> > >\n> > > $$\n> > >\n> > > > declare temp1 text;\n> > > > declare temp2 text;\n> > > > begin\n> > > > temp1=t1;\n> > > > temp2=t2;\n> > > > select\n> > > > product,\n> > > > (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end ) as\n> gppp\n> > > > into temp2 from temp1 as dummy\n> > > > group by\n> > > > dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> > >\n> > > ;\n> > >\n> > > > update temp1 as t set\n> > > > GPPP=(select gppp from temp2 as dummy where\n> > > > dummy.product=t.product),\n> > > >\n> > > > end\n> > > > $$ LANGUAGE plpgsql\n> > > >\n> > > >\n> > > > ----------------------\n> > > > ERROR: syntax error at or near \"$1\"\n> > > > LINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from $1 as\n> > >\n> > > dum...\n> > >\n> > > > ^\n> > > > QUERY: select product, (case when sum(pd) <> 0 then\n> > > > sum(gd)/sum(pd)*100 else 0 end ) as gppp from $1 as dummy group by\n> > > > dummy.product, dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> > > > CONTEXT: SQL statement in PL/PgSQL function \"test\" near line 10\n> > > >\n> > > > ********** Error **********\n> > > >\n> > > > ERROR: syntax error at or near \"$1\"\n> > > > SQL state: 42601\n> > > > Context: SQL statement in PL/PgSQL function \"test\" near line 10\n>\n> --\n> Albert Cervera Areny\n> Dept. Informàtica Sedifa, S.L.\n>\n> Av. Can Bordoll, 149\n> 08202 - Sabadell (Barcelona)\n> Tel. 93 715 51 11\n> Fax. 93 715 51 12\n>\n> ====================================================================\n> ........................ AVISO LEGAL ............................\n> La presente comunicación y sus anexos tiene como destinatario la\n> persona a la que va dirigida, por lo que si usted lo recibe\n> por error debe notificarlo al remitente y eliminarlo de su\n> sistema, no pudiendo utilizarlo, total o parcialmente, para\n> ningún fin. Su contenido puede tener información confidencial o\n> protegida legalmente y únicamente expresa la opinión del\n> remitente. El uso del correo electrónico vía Internet no\n> permite asegurar ni la confidencialidad de los mensajes\n> ni su correcta recepción. En el caso de que el\n> destinatario no consintiera la utilización del correo electrónico,\n> deberá ponerlo en nuestro conocimiento inmediatamente.\n> ====================================================================\n> ........................... DISCLAIMER .............................\n> This message and its attachments are intended exclusively for the\n> named addressee. If you receive this message in error, please\n> immediately delete it from your system and notify the sender. You\n> may not use this message or any part of it for any purpose.\n> The message may contain information that is confidential or\n> protected by law, and any opinions expressed are those of the\n> individual sender. 
Internet e-mail guarantees neither the\n> confidentiality nor the proper receipt of the message sent.\n> If the addressee of this message does not consent to the use\n> of internet e-mail, please inform us inmmediately.\n> ====================================================================\n>\n>\n>\n>\n\nthanks, i posted in this listed because it was related to my previous query.Anyway, I am able to achieve, with the help in this mailing list, what I wanted but is there any way to further optimize this. Thanks\nCREATE OR REPLACE FUNCTION test ( t1  text  ) RETURNS numeric AS $$declare cmd1 text;declare cmd2 text;declare t2row RECORD;begincmd1=' select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end  ) as gppp, (case when sum(tld) <> 0 then sum(pd)/sum(tld) else 0 end  ) as ppd  from '|| t1 || ' as dummy group by dummy.product' ;\nfor t2row in execute cmd1 loop    cmd2 = 'update ' || t1 || ' as t  set  GPPP=' ||t2row.gppp||' where product='||quote_literal(t2row.product);    execute cmd2;    cmd2 = 'update ' || t1 || ' as t  set  PPD=' ||t2row.ppd||' where product='||quote_literal(t2row.product);\n    execute cmd2;END LOOP;RETURN NULL;end;$$ LANGUAGE plpgsqlVOLATILEOn Thu, Feb 14, 2008 at 5:54 PM, Albert Cervera Areny <[email protected]> wrote:\nYou need the string concatenation operator ||. Take a look at\nhttp://www.postgresql.org/docs/8.3/static/functions-string.html\n\nBy the way, this is off-topic in this list please, post general\nnon-performance questions to pgsql-general.\n\nA Dijous 14 Febrer 2008 13:35, Linux Guru va escriure:\n> I still cannot pass tablename, what is wrong?\n> Is this the right way?\n>\n>\n> CREATE OR REPLACE FUNCTION test ( t1  text,t2 text  ) RETURNS numeric AS $$\n> declare temp1 text;\n> declare temp2 text;\n> declare cmd text;\n> declare t2row RECORD;\n> begin\n>     temp1=t1;\n>     temp2=t2;\n>     cmd='select product, (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100\n> else 0 end  ) as gppp\n> from ' temp1 ' as dummy group by dummy.product,dummy.totalclaimsgroup,\n> dummy.avgmems,dummy.months';\n> execute cmd into t2row\n>\n> --After executing above, I need here to update table t1\n>\n> end;\n> $$ LANGUAGE plpgsql\n>\n> ----------------\n>\n>\n> ERROR:  syntax error at or near \"$1\"\n> LINE 2: from '  $1  ' as dummy group by dummy.product,dummy.totalcla...\n>                 ^\n> QUERY:  SELECT 'select product, (case when sum(pd) <> 0 then\n> sum(gd)/sum(pd)*100 else 0 end  ) as gppp\n> from '  $1  ' as dummy group by dummy.product,dummy.totalclaimsgroup,\n> dummy.avgmems,dummy.months'\n> CONTEXT:  SQL statement in PL/PgSQL function \"test\" near line 9\n>\n> ********** Error **********\n>\n> ERROR: syntax error at or near \"$1\"\n> SQL state: 42601\n> Context: SQL statement in PL/PgSQL function \"test\" near line 9\n>\n> On Wed, Feb 13, 2008 at 8:23 PM, Albert Cervera Areny <[email protected]>\n>\n> wrote:\n> > A Dimecres 13 Febrer 2008 15:25, Linux Guru va escriure:\n> > > I want to create and update two tables in a function such as below, but\n> > > using parameters as tablename is not allowed and gives an error. 
Is\n> >\n> > there\n> >\n> > > any way I could achieve this?\n> >\n> > You're looking for EXECUTE:\n> >\n> > http://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL\n> >-STATEMENTS-EXECUTING-DYN\n> >\n> > > CREATE OR REPLACE FUNCTION test ( t1  text,t2 text  ) RETURNS numeric\n> > > AS\n> >\n> > $$\n> >\n> > > declare temp1 text;\n> > > declare temp2 text;\n> > > begin\n> > >     temp1=t1;\n> > >     temp2=t2;\n> > > select\n> > > product,\n> > > (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end  ) as gppp\n> > > into temp2 from temp1  as dummy\n> > > group by\n> > > dummy.product,dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> >\n> > ;\n> >\n> > > update temp1 as t  set\n> > >  GPPP=(select gppp  from temp2  as dummy where\n> > > dummy.product=t.product),\n> > >\n> > > end\n> > > $$ LANGUAGE plpgsql\n> > >\n> > >\n> > > ----------------------\n> > > ERROR:  syntax error at or near \"$1\"\n> > > LINE 1: ...en sum(gd)/sum(pd)*100 else 0 end ) as gppp from  $1  as\n> >\n> > dum...\n> >\n> > >                                                              ^\n> > > QUERY:  select product, (case when sum(pd) <> 0 then\n> > > sum(gd)/sum(pd)*100 else 0 end ) as gppp from  $1  as dummy group by\n> > > dummy.product, dummy.totalclaimsgroup,dummy.avgmems,dummy.months\n> > > CONTEXT:  SQL statement in PL/PgSQL function \"test\" near line 10\n> > >\n> > > ********** Error **********\n> > >\n> > > ERROR: syntax error at or near \"$1\"\n> > > SQL state: 42601\n> > > Context: SQL statement in PL/PgSQL function \"test\" near line 10\n\n--\nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................  AVISO LEGAL  ............................\nLa   presente  comunicación  y sus anexos tiene como destinatario la\npersona a  la  que  va  dirigida, por  lo  que  si  usted lo  recibe\npor error  debe  notificarlo  al  remitente  y   eliminarlo   de  su\nsistema,  no  pudiendo  utilizarlo,  total  o   parcialmente,   para\nningún  fin.  Su  contenido  puede  tener información confidencial o\nprotegida legalmente   y   únicamente   expresa  la  opinión     del\nremitente.  El   uso   del   correo   electrónico   vía Internet  no\npermite   asegurar    ni  la   confidencialidad   de   los  mensajes\nni    su    correcta     recepción.   En    el  caso   de   que   el\ndestinatario no consintiera la utilización  del correo  electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its  attachments are  intended  exclusively for the\nnamed addressee. If you  receive  this  message  in   error,  please\nimmediately delete it from  your  system  and notify the sender. You\nmay  not  use  this message  or  any  part  of it  for any  purpose.\nThe   message   may  contain  information  that  is  confidential or\nprotected  by  law,  and  any  opinions  expressed  are those of the\nindividual    sender.  
Internet  e-mail   guarantees   neither   the\nconfidentiality   nor  the  proper  receipt  of  the  message  sent.\nIf  the  addressee  of  this  message  does  not  consent to the use\nof   internet    e-mail,    please    inform     us    inmmediately.\n====================================================================", "msg_date": "Fri, 15 Feb 2008 19:14:48 +0500", "msg_from": "\"Linux Guru\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creating and updating table using function parameter reference" } ]
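The per-product loop in the final version of this thread can usually be collapsed into a single dynamic statement. The sketch below is a hedged rewrite, not tested against the original schema: it keeps only the column names mentioned in the thread (product, pd, gd, tld, gppp, ppd), assumes t1 is an unqualified table name, wraps it in quote_ident() for safety, and returns void because the original's numeric return value was unused.

CREATE OR REPLACE FUNCTION test ( t1 text ) RETURNS void AS $$
begin
    -- One pass: compute both aggregates per product in a subquery, then
    -- update gppp and ppd together, instead of issuing two UPDATEs per
    -- product from a cursor loop.
    execute 'update ' || quote_ident(t1) || ' as t'
         || ' set gppp = s.gppp, ppd = s.ppd'
         || ' from (select product,'
         || '   (case when sum(pd) <> 0 then sum(gd)/sum(pd)*100 else 0 end) as gppp,'
         || '   (case when sum(tld) <> 0 then sum(pd)/sum(tld) else 0 end) as ppd'
         || '  from ' || quote_ident(t1) || ' group by product) as s'
         || ' where t.product = s.product';
end;
$$ LANGUAGE plpgsql VOLATILE;

Whether this wins depends on how many distinct products there are, but it replaces two UPDATE statements per product with one statement overall and lets the planner join the aggregated values back in a single pass.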
[ { "msg_contents": "Hi all,\n\nWe're considering setting up a SAN where I work. Is there anyone using\na SAN, for postgres or other purposes? If so I have a few questions\nfor you.\n\n- Are there any vendors to avoid or ones that are particularly good?\n\n- What performance or reliability implications exist when using SANs?\n\n- Are there any killer features with SANs compared to local storage?\n\nAny other comments are certainly welcome.\n\nPeter\n", "msg_date": "Wed, 13 Feb 2008 10:56:54 -0600", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone using a SAN?" }, { "msg_contents": "On Wed, Feb 13, 2008 at 10:56:54AM -0600, Peter Koczan wrote:\n> Hi all,\n> \n> We're considering setting up a SAN where I work. Is there anyone using\n> a SAN, for postgres or other purposes? If so I have a few questions\n> for you.\n> \n> - Are there any vendors to avoid or ones that are particularly good?\n> \n> - What performance or reliability implications exist when using SANs?\n> \n> - Are there any killer features with SANs compared to local storage?\n> \n> Any other comments are certainly welcome.\n> \n> Peter\n> \n\nPeter,\n\nThe key is to understand your usage patterns, both I/O and query.\nSANs can be easily bandwidth limited which can tank your database\nperformance. There have been several threads in the mailing list\nabout performance problems caused by the use of a SAN for storage.\n\nCheers,\nKen\n", "msg_date": "Wed, 13 Feb 2008 11:46:47 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "On Feb 13, 2008 12:46 PM, Kenneth Marshall <[email protected]> wrote:\n> On Wed, Feb 13, 2008 at 10:56:54AM -0600, Peter Koczan wrote:\n> > Hi all,\n> >\n> > We're considering setting up a SAN where I work. Is there anyone using\n> > a SAN, for postgres or other purposes? If so I have a few questions\n> > for you.\n> >\n> > - Are there any vendors to avoid or ones that are particularly good?\n> >\n> > - What performance or reliability implications exist when using SANs?\n> >\n> > - Are there any killer features with SANs compared to local storage?\n> >\n> > Any other comments are certainly welcome.\n> >\n> > Peter\n> >\n>\n> Peter,\n>\n> The key is to understand your usage patterns, both I/O and query.\n> SANs can be easily bandwidth limited which can tank your database\n> performance. There have been several threads in the mailing list\n> about performance problems caused by the use of a SAN for storage.\n\nIt's critical that you set up the SAN with a database in mind\notherwise the performance will be bad. I tested a DB on a SAN\ndesigned to maximize storage space and performance was terrible. I\nnever had the time or resources to reconfigure the SAN to test a more\nsuitable spindle setup since the SAN was in heavy production use for\nfile archiving.\n\nAlex\n", "msg_date": "Wed, 13 Feb 2008 12:58:03 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "[Peter Koczan - Wed at 10:56:54AM -0600]\n> We're considering setting up a SAN where I work. Is there anyone using\n> a SAN, for postgres or other purposes? 
If so I have a few questions\n> for you.\n\nSome time ago, my boss was planning to order more hardware - including a\nSAN - and coincidentally, SANs were discussed at this list as well.\nThe consensus on this list seemed to be that running postgres on SAN is\nnot cost efficiently - one would get better performance for a lower cost\nif the database host is connected directly to the disks - and also,\nbuying the wrong SAN can cause quite some problems.\n\nMy boss (with good help of the local SAN-pusher) considered that the\narguments against the SAN solution on this list was not really valid for\nan \"enterprise\" user. The SAN-pusher really insisted that through a\nstate-of-the-art SAN theoretically it should be possible to achieve far\nbetter bandwidth as well as lower latency to the disks. Personally, I\ndon't have the clue, but all my colleagues believes him, so I guess he\nis right ;-) What I'm told is that the state-of-the-art SAN allows for\nan \"insane amount\" of hard disks to be installed, much more than what\nwould fit into any decent database server. We've ended up buying a SAN,\nthe physical installation was done last week, and I will be able to tell\nin some months if it was a good idea after all, or not.\n\n", "msg_date": "Wed, 13 Feb 2008 22:06:55 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "On 13-2-2008 22:06 Tobias Brox wrote:\n> What I'm told is that the state-of-the-art SAN allows for\n> an \"insane amount\" of hard disks to be installed, much more than what\n> would fit into any decent database server. We've ended up buying a SAN,\n> the physical installation was done last week, and I will be able to tell\n> in some months if it was a good idea after all, or not.\n\nYour SAN-pusher should have a look at the HP-submissions for TPC-C... \nThe recent Xeon systems are all without SAN's and still able to connect \nhundreds of SAS-disks.\n\nThis one has 2+28+600 hard drives connected to it:\nhttp://tpc.org/results/individual_results/HP/hp_ml370g5_2p_X5460_tpcc_080107_es.pdf\n\nLong story short, using SAS you can theoretically connect up to 64k \ndisks to a single system. And with the HP-example they connected 26 \nexternal enclosures (MSA70) to 8 internal with external SAS-ports. I.e. \nthey ended up with 28+600 harddrives spread out over 16 external 4-port \nSAS-connectors with a bandwidth of 12Gbit per connector...\n\nObviously its a bit difficult to share those 628 harddrives amongst \nseveral systems, but the argument your colleagues have for SAN isn't a \nvery good one. All major hardware vendors nowadays have external \nSAS-enclosures which can hold 12-25 external harddrives (and can often \nbe stacked to two or three enclosures) and can be connected to normal \ninternal PCI-e SAS-raid-cards. Those controllers have commonly two \nexternal ports and can be used with other controllers in the system to \ncombine all those connected enclosures to one or more virtual images, or \nyou could have your software LVM/raid on top of those controllers.\n\nAnyway, the common physical limit of 6-16 disks in a single \nserver-enclosure isn't very relevant anymore in an argument against SAN.\n\nBest regards,\n\nArjen\n", "msg_date": "Wed, 13 Feb 2008 23:20:57 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" 
}, { "msg_contents": "On Wed, 13 Feb 2008, Tobias Brox wrote:\n\n> What I'm told is that the state-of-the-art SAN allows for an \"insane \n> amount\" of hard disks to be installed, much more than what would fit \n> into any decent database server.\n\nYou can attach a surpringly large number of drives to a server nowadays, \nbut in general it's easier to manage larger numbers of them on a SAN. \nAlso, there are significant redundancy improvements using a SAN that are \nworth quite a bit in some enterprise environments. Being able to connect \nall the drives, no matter how many, to two or more machines at once \ntrivially is typically easier to setup on a SAN than when you're using \nmore direct storage.\n\nBasically the performance breaks down like this:\n\n1) Going through the SAN interface (fiber channel etc.) introduces some \nlatency and a potential write bottleneck compared with direct storage, \neverything else being equal. This can really be a problem if you've got a \npoor SAN vendor or interface issues you can't sort out.\n\n2) It can be easier to manage a large number of disks in the SAN, so for \nsituations where aggregate disk throughput is the limiting factor the SAN \nsolution might make sense.\n\n3) At the high-end, you can get SANs with more cache than any direct \ncontroller I'm aware of, which for some applications can lead to them \nhaving a more quantifiable lead over direct storage. It's easy (albeit \nexpensive) to get an EMC array with 16GB worth of memory for caching on it \nfor example (and with 480 drives). And since they've got a more robust \npower setup than a typical server, you can even enable all the individual \ndrive caches usefully (that's 16-32MB each nowadays, so at say 100 disks \nyou've potentially got another 1.6GB of cache right there). If you're got \na typical server you can end up needing to turn off individual direct \nattached drive caches for writes, because they many not survive a power \ncycle even with a UPS, and you have to just rely on the controller write \ncache.\n\nThere's no universal advantage on either side here, just a different set \nof trade-offs. Certainly you'll never come close to the performance/$ \ndirect storage gets you if you buy that in SAN form instead, but at higher \nbudgets or feature requirements they may make sense anyway.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 13 Feb 2008 18:02:17 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "[Arjen van der Meijden]\n> Your SAN-pusher should have a look at the HP-submissions for TPC-C... \n> The recent Xeon systems are all without SAN's and still able to connect \n> hundreds of SAS-disks.\n\nYes, I had a feeling that the various alternative solutions for \"direct\nconnection\" hadn't been investigated fully. I was pushing for it, but\nhardware is not my thing. Anyway, most likely the only harm done by\nchosing SAN is that it's more expensive than an equivalent solution with\ndirect connected disks. Well, not my money anyway. ;-)\n\n> Obviously its a bit difficult to share those 628 harddrives amongst \n> several systems, but the argument your colleagues have for SAN isn't a \n> very good one.\n\nAs far as I've heard, you cannot really benefit much from this with\npostgres, one cannot have two postgres servers on two hosts sharing the\nsame data (i.e. 
using one for failover or for CPU/memory-bound read\nqueries).\n\nHaving the SAN connected to several hosts gives us two benefits, if the\ndatabase host goes down but not the SAN, it will be quite fast to start\nup a new postgres instance on a different host - and it will also be\npossible to take out backups real-time from the SAN without much\nperformance-hit. Anyway, with a warm standby server as described on\nhttp://www.postgresql.org/docs/current/interactive/warm-standby.html one\ncan achieve pretty much the same without a SAN.\n\n", "msg_date": "Thu, 14 Feb 2008 00:29:46 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "On Feb 13, 2008 5:02 PM, Greg Smith <[email protected]> wrote:\n> On Wed, 13 Feb 2008, Tobias Brox wrote:\n>\n> > What I'm told is that the state-of-the-art SAN allows for an \"insane\n> > amount\" of hard disks to be installed, much more than what would fit\n> > into any decent database server.\n>\n> You can attach a surpringly large number of drives to a server nowadays,\n> but in general it's easier to manage larger numbers of them on a SAN.\n> Also, there are significant redundancy improvements using a SAN that are\n> worth quite a bit in some enterprise environments. Being able to connect\n> all the drives, no matter how many, to two or more machines at once\n> trivially is typically easier to setup on a SAN than when you're using\n> more direct storage.\n\nSNIP\n\n> There's no universal advantage on either side here, just a different set\n> of trade-offs. Certainly you'll never come close to the performance/$\n> direct storage gets you if you buy that in SAN form instead, but at higher\n> budgets or feature requirements they may make sense anyway.\n\nI agree with everything you've said here, and you've said it far more\nclearly than I could have.\n\nI'd like to add that it may still be feasable to have a SAN and a db\nwith locally attached storage. Talk the boss into a 4 port caching\nSAS controller and four very fast hard drives or something else on the\nserver so that you can run tests to compare the performance of a\nrather limited on board RAID set to the big SAN. For certain kinds of\nthings, like loading tables, it will still be a very good idea to have\nlocal drives for caching and transforming data and such.\n\nGoing further, the argument for putting the db onto the SAN may be\nweakened if the amount of data on the db server can't and likely won't\nrequire a lot of space. A lot of backend office dbs are running in\nthe sub gigabyte range and will never grow to the size of the social\nsecurity database. Even with dozens of apps, an in house db server\nmight be using no more than a few dozen gigabytes of storage. Given\nthe cost and performance of large SAS and SATA drives, it's not all\nunlikely that you can fit everything you need for the next five years\non a single set of disks on a server that's twice as powerful as most\ninternal db servers need.\n\nYou can hide the cost of the extra drives in the shadow of the receipt\nfor the SAN.\n", "msg_date": "Wed, 13 Feb 2008 18:55:49 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" 
}, { "msg_contents": "\nShould this be summarized somewhere in our docs; just a few lines with\nthe tradeoffs, direct storage = cheaper, faster, SAN = more configurable?\n\n---------------------------------------------------------------------------\n\nScott Marlowe wrote:\n> On Feb 13, 2008 5:02 PM, Greg Smith <[email protected]> wrote:\n> > On Wed, 13 Feb 2008, Tobias Brox wrote:\n> >\n> > > What I'm told is that the state-of-the-art SAN allows for an \"insane\n> > > amount\" of hard disks to be installed, much more than what would fit\n> > > into any decent database server.\n> >\n> > You can attach a surpringly large number of drives to a server nowadays,\n> > but in general it's easier to manage larger numbers of them on a SAN.\n> > Also, there are significant redundancy improvements using a SAN that are\n> > worth quite a bit in some enterprise environments. Being able to connect\n> > all the drives, no matter how many, to two or more machines at once\n> > trivially is typically easier to setup on a SAN than when you're using\n> > more direct storage.\n> \n> SNIP\n> \n> > There's no universal advantage on either side here, just a different set\n> > of trade-offs. Certainly you'll never come close to the performance/$\n> > direct storage gets you if you buy that in SAN form instead, but at higher\n> > budgets or feature requirements they may make sense anyway.\n> \n> I agree with everything you've said here, and you've said it far more\n> clearly than I could have.\n> \n> I'd like to add that it may still be feasable to have a SAN and a db\n> with locally attached storage. Talk the boss into a 4 port caching\n> SAS controller and four very fast hard drives or something else on the\n> server so that you can run tests to compare the performance of a\n> rather limited on board RAID set to the big SAN. For certain kinds of\n> things, like loading tables, it will still be a very good idea to have\n> local drives for caching and transforming data and such.\n> \n> Going further, the argument for putting the db onto the SAN may be\n> weakened if the amount of data on the db server can't and likely won't\n> require a lot of space. A lot of backend office dbs are running in\n> the sub gigabyte range and will never grow to the size of the social\n> security database. Even with dozens of apps, an in house db server\n> might be using no more than a few dozen gigabytes of storage. Given\n> the cost and performance of large SAS and SATA drives, it's not all\n> unlikely that you can fit everything you need for the next five years\n> on a single set of disks on a server that's twice as powerful as most\n> internal db servers need.\n> \n> You can hide the cost of the extra drives in the shadow of the receipt\n> for the SAN.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 13 Feb 2008 22:23:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "Thanks for all your input, it is very helpful. 
A SAN for our postgres\ndeployment is probably sufficient in terms of performance, because we\njust don't have that much data. I'm a little concerned about needs for\nuser and research databases, but if a project needs a big, fast\ndatabase, it might be wise to have them shell out for DAS.\n\nMy co-workers and I are meeting with a vendor in two weeks (3Par,\nspecifically), and I think I have a better idea of what I should be\nlooking at. I'll keep you all up on the situation. Keep the ideas\ncoming as I still would like to know of any other important factors.\n\nThanks again.\n\nPeter\n", "msg_date": "Wed, 13 Feb 2008 22:17:56 -0600", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "Tobias Brox wrote:\n> [Peter Koczan - Wed at 10:56:54AM -0600]\n> \n> The consensus on this list seemed to be that running postgres on SAN is\n> not cost efficiently - one would get better performance for a lower cost\n> if the database host is connected directly to the disks - and also,\n> buying the wrong SAN can cause quite some problems.\n> \nThat's true about SANs in general. You don't buy a SAN because it'll \ncost less than just buying the disks and a controller. You buy a SAN \nbecause it'll let you make managing it easier. The break-even point has \nmore to do with how many servers you're able to put on the SAN and how \noften you need to do tricky backup and upgrade procedures than it \ndoeswith the hardware.\n", "msg_date": "Thu, 14 Feb 2008 10:12:36 +0000", "msg_from": "\"Greg Stark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "On Wed, 13 Feb 2008, Bruce Momjian wrote:\n\n> Should this be summarized somewhere in our docs; just a few lines with\n> the tradeoffs, direct storage = cheaper, faster, SAN = more configurable?\n\nI think it's kind of stetching the PostgreSQL documentation to be covering \nthat. It's hard to generalize here without giving a fair amount of \nbackground and caveats--that last message was about as compact a \ncommentary on this as I could come up with. One of the things I was \nhoping to push into the community documentation one day was a larger look \nat disk layout than covers this, RAID issues, and related topics (this got \nstarted at http://www.postgresql.org/docs/techdocs.64 but stalled).\n\nWhat's nice about putting it into a web-only format is that it's easy to \nhyperlink heavily into the archives to recommend specific discussion \nthreads of the issues for reference, which isn't practical in the manual.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 16 Feb 2008 01:08:26 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "> That's true about SANs in general. You don't buy a SAN because it'll\n> cost less than just buying the disks and a controller. You buy a SAN\n> because it'll let you make managing it easier. The break-even point has\n> more to do with how many servers you're able to put on the SAN and how\n> often you need to do tricky backup and upgrade procedures than it\n> doeswith the hardware.\n\nOne big reason we're really looking into a SAN option is that we have\na lot of unused disk space. A typical disk usage scheme for us is 6 GB\nfor a clean Linux install, and 20 GB for a Windows install. 
Our disks\nare typically 80GB, and even after decent amounts of usage we're not\neven approaching half that. We install a lot of software in AFS, our\nnetworked file system, and users' home directories and project\ndirectories are in AFS as well. Local disk space is relegated to the\nOS and vendor software, servers that need it, and seldom-used scratch\nspace. There might very well be a break-even point for us in terms of\ncost.\n\nOne of the other things I was interested in was the \"hidden costs\" of\na SAN. For instance, we'd probably have to invest in more UPS capacity\nto protect our data. Are there any other similar points that people\ndon't initially consider regarding a SAN?\n\nAgain, thanks for all your help.\n\nPeter\n", "msg_date": "Mon, 18 Feb 2008 15:44:40 -0600", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "Hi Peter,\nPeter Koczan schrieb:\n>\n>\n> One of the other things I was interested in was the \"hidden costs\" of\n> a SAN. For instance, we'd probably have to invest in more UPS capacity\n> to protect our data. Are there any other similar points that people\n> don't initially consider regarding a SAN?\n>\n> \n\nThere are \"hidden costs\". The set up of a local disk system is easy. You \nneed only a few decisions.\nThis is totally different when it comes to SAN.\nAt the end of the day you need a guy who has the knowledge to design and \nconfigure such system.\nThat's why you should buy a SAN and the knowledge from a brand or a \nspecialist company.\n\nBTW: You can do other things with SAN you can't do with local disks.\n- mirroring to another location (room)\n- mounting snapshots on another server\n\nSven.\n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, think project! Solutions GmbH & Co. KG, Germany \n\n", "msg_date": "Wed, 20 Feb 2008 09:57:21 +0100", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "On Mon, 18 Feb 2008, Peter Koczan wrote:\n> One of the other things I was interested in was the \"hidden costs\" of\n> a SAN. For instance, we'd probably have to invest in more UPS capacity\n> to protect our data. Are there any other similar points that people\n> don't initially consider regarding a SAN?\n\nYou may well find that the hardware required in each machine to access the \nSAN (fibrechannel cards, etc) and switches are way more expensive than \njust shoving a cheap hard drive in each machine. Hard drives are \nmass-produced, and remarkably cheap for what they do. SAN hardware is \nspecialist, and expensive.\n\nMatthew\n\n-- \nNog: Look! They've made me into an ensign!\nO'Brien: I didn't know things were going so badly.\nNog: Frightening, isn't it?\n", "msg_date": "Wed, 20 Feb 2008 13:41:31 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "\nOn Wed, 2008-02-20 at 13:41 +0000, Matthew wrote:\n> On Mon, 18 Feb 2008, Peter Koczan wrote:\n> > One of the other things I was interested in was the \"hidden costs\" of\n> > a SAN. For instance, we'd probably have to invest in more UPS capacity\n> > to protect our data. 
Are there any other similar points that people\n> > don't initially consider regarding a SAN?\n> \n> You may well find that the hardware required in each machine to access the \n> SAN (fibrechannel cards, etc) and switches are way more expensive than \n> just shoving a cheap hard drive in each machine. Hard drives are \n> mass-produced, and remarkably cheap for what they do. SAN hardware is \n> specialist, and expensive.\n\nCan be, but may I point to a recent posting on Beowulf ml [1] and the\narticle it references [2] Showing that the per node price of SDR IB has\ncome down far enough to in some cases compete with GigE. ymmv, but I'm\nin the planning phase for a massive storage system and it's something\nwe're looking into. Just thought I'd share\n\n\nSuccess!\n\n./C\n\n[1]\nhttp://www.mirrorservice.org/sites/www.beowulf.org/archive/2008-January/020538.html\n\n[2] http://www.clustermonkey.net/content/view/222/1/\n\n\n\n", "msg_date": "Wed, 20 Feb 2008 14:52:42 +0100", "msg_from": "\"C.\" =?ISO-8859-1?Q?Bergstr=F6m?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "On Mon, Feb 18, 2008 at 03:44:40PM -0600, Peter Koczan wrote:\n>One big reason we're really looking into a SAN option is that we have\n>a lot of unused disk space.\n\nThe cost of the SAN interfaces probably exceeds the cost of the wasted\nspace, and the performance will probably be lower for a lot of \nworkloads. There are good reasons to have SANs, but increasing \nutilization of disk drives probably isn't one of them.\n\n>A typical disk usage scheme for us is 6 GB\n>for a clean Linux install, and 20 GB for a Windows install. Our disks\n>are typically 80GB, and even after decent amounts of usage we're not\n>even approaching half that.\n\nI typically partition systems to use a small fraction of the disk space, \nand don't even acknowledge that the rest exists unless there's an actual \nreason to use it. But the disks are basically free, so there's no point \nin trying to buy small ones to save space.\n\nMike Stone\n", "msg_date": "Wed, 20 Feb 2008 10:31:13 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "On Wed, Feb 20, 2008 at 02:52:42PM +0100, C. Bergstr�m wrote:\n>Can be, but may I point to a recent posting on Beowulf ml [1] and the\n>article it references [2] Showing that the per node price of SDR IB has\n>come down far enough to in some cases compete with GigE. ymmv, but I'm\n>in the planning phase for a massive storage system and it's something\n>we're looking into. Just thought I'd share\n\nFor HPC, maybe. For other sectors, it's hard to compete with the free \nGBE that comes with the machine, and that low price doesn't reflect the \ncost of extending an oddball network infrastructure outside of a cluster.\n\nMike Stone\n", "msg_date": "Wed, 20 Feb 2008 10:35:39 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "Hi all,\n\nI had a few meetings with SAN vendors and I thought I'd give you some\nfollow-up on points of potential interest.\n\n- Dell/EMC\nThe representative was like the Dell dude grown up. The sales pitch\nmentioned \"price point\" about twenty times (to the point where it was\nannoying), and the pitch ultimately boiled down to \"Dude, you're\ngetting a SAN.\" My apologies in advance to bringing back repressed\nmemories of the Dell dude. 
As far as technical stuff goes, it's about\nwhat you'd expect from a low-level SAN. The cost for a SAN was in the\n$2-3 per GB range if you went with the cheap option...not terrible,\nbut not great either, especially since you'd have to buy lots of GB.\nPerformance numbers weren't bad, but they weren't great either.\n\n- 3par\nThe sales pitch was more focused on technical aspects and only\nmentioned \"price point\" twice...which is a win in my books, at least\ncompared to Dell. Their real place to shine was in the technical\naspect. Whereas Dell just wanted to sell you a storage system that you\nput on a network, 3par wanted to sell you a storage system\nspecifically designed for a network, and change the very way you think\nabout storage. They had a bunch of cool management concepts, and very\nadvanced failover, power outage, and backup techniques and tools.\nPerformance wasn't shabby, either, for instance a RAID 5 set could get\nabout 90% the IOPS and transfer rate that a RAID 10 set could. How\nexactly this compares to DAS they didn't say. The main stumbling block\nwith 3par is price. While they didn't give any specific numbers, best\nestimates put a SAN in the $5-7 per GB range. The extra features just\nmight be worth it though.\n\n- Lefthand\nThis is going to be an upcoming meeting, so I don't have as good of an\nopinion. Looking at their website, they seem more to the Dell end in\nterms of price and functionality. I'll keep you in touch as I have\nmore info. They seem good for entry-level SANs, though.\n\nLuckily, almost everything here works with Linux (at least the major\ndistros), including the management tools, in case people were worried\nabout that. One of the key points to consider going forward is that\nthe competition of iSCSI and Fibre Channel techs will likely bring\nprice down in the future. While SANs are certainly more expensive than\ntheir DAS counterparts, the gap appears to be closing.\n\nHowever, to paraphrase a discussion between a few of my co-workers,\nyou can buy toilet paper or kitty litter in huge quantities because\nyou know you'll eventually use it...and it doesn't change in\nperformance or basic functionality. Storage is just something that you\ndon't always want to buy a lot of in one go. It will get bigger, and\ncheaper, and probably faster in a relatively short amount of time. The\nother thing is that you can't really get a small SAN. The minimum is\nusually in the multiple TB range (and usually >10 TB). I'd love to be\nable to put together a proof of concept and a test using 3par's\ntechnology and commodity 80GB slow disks, but I really can't. You're\nstuck with going all-in right away, and enough people have had\nproblems being married to specific techs or vendors that it's really\nhard to break that uneasiness.\n\nThanks for reading, hopefully you found it slightly informative.\n\nPeter\n", "msg_date": "Fri, 14 Mar 2008 16:09:36 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone using a SAN?" }, { "msg_contents": "> Dell acquired Equallogic last November/December.\n>\n> I noticed your Dell meeting was a Dell/EMC meeting. Have you talked to them\n> or anyone else about Equallogic?\n\nNow that you mention it, I do recall a bit about Equalogic in the Dell\npitch. It didn't really stand out in my mind and a lot of the\ntechnical details were similar enough to the EMC details that they\njust melded in my mind.\n\n> When I was looking at iSCSI solutions, the Equallogic was really slick. 
Of\n> course, I needed high-end performance, which of course came at a steep\n> price, and the project got canned. Oh well. Still, the EL solution claimed\n> near linear scalability when additional capacity/shelves were added. And,\n> they have a lot of really nice technologies for managing the system.\n\nIf you think Equalogic is slick, check out 3par. They've got a lot of\nvery cool features and concepts. Unfortunately, this comes at a higher\nprice. To each his own, I guess.\n\nOur meetings didn't focus a lot on scalability of capacity, as we just\ndidn't think to ask. I think the basic pitch was \"it scales well\"\nwithout any real hard data.\n\nPeter\n", "msg_date": "Wed, 19 Mar 2008 16:21:21 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone using a SAN?" } ]
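For anyone who wants rough numbers behind the SAN-versus-direct-attached comparison above before committing, one low-effort approach is to give the database a tablespace on each kind of storage and time the same work on both. Everything below is an illustrative sketch: the mount points, table names and row count are placeholders, the directories must already exist and be owned by the postgres user, and a sequential count(*) only measures streaming reads, so an OLTP-style random-I/O test (pgbench, for example) is still worth running as well. Use psql's \timing and repeat the reads after a restart so you are not just timing the cache.

-- One tablespace per storage target (paths are placeholders).
CREATE TABLESPACE san_test   LOCATION '/mnt/san/pgtest';
CREATE TABLESPACE local_test LOCATION '/mnt/local/pgtest';

-- Identical tables, one on each target.
CREATE TABLE io_probe_san   (id int, pad text) TABLESPACE san_test;
CREATE TABLE io_probe_local (id int, pad text) TABLESPACE local_test;

-- Load enough data to exceed RAM; adjust the row count to your memory size.
INSERT INTO io_probe_san
    SELECT g, repeat('x', 500) FROM generate_series(1, 10000000) g;
INSERT INTO io_probe_local
    SELECT g, repeat('x', 500) FROM generate_series(1, 10000000) g;

-- Sequential read comparison, run with \timing on.
SELECT count(*) FROM io_probe_san;
SELECT count(*) FROM io_probe_local;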
[ { "msg_contents": "Folks,\n\nDoes anyone know if HOT is compatible with pg_toast tables, or do TOASTed rows \nsimply get excluded from HOT? I can run some tests, but if someone knows \nthis off the top of their heads it would save me some time.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Wed, 13 Feb 2008 09:50:15 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "HOT TOAST?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Does anyone know if HOT is compatible with pg_toast tables, or do TOASTed rows \n> simply get excluded from HOT?\n\nThe current TOAST code never does any updates, only inserts/deletes.\nBut the HOT logic should be able to reclaim deleted rows early via\npruning.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Feb 2008 13:04:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HOT TOAST? " }, { "msg_contents": "Tom,\n\n> The current TOAST code never does any updates, only inserts/deletes.\n> But the HOT logic should be able to reclaim deleted rows early via\n> pruning.\n\nOK, so for a heavy update application we should still see a vacuum \nreduction, even if most of the rows are 40k large?\n\nTime to run some tests ...\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Wed, 13 Feb 2008 15:07:13 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HOT TOAST?" } ]
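A hedged sketch of one such test follows; it only checks whether updates that leave the TOASTed column untouched get counted as HOT. The table name, payload size and row counts are made up for illustration, n_tup_hot_upd in pg_stat_user_tables is only there from 8.3 on, and SET STORAGE EXTERNAL is used so the repeat()ed payload really goes out of line rather than being compressed into the main heap.

-- Leave free space on each heap page (HOT needs the new version to fit on
-- the same page as the old one).
CREATE TABLE toast_hot_test (id int PRIMARY KEY, flag boolean, payload text)
    WITH (fillfactor = 90);

-- Force out-of-line TOAST storage for the payload column.
ALTER TABLE toast_hot_test ALTER COLUMN payload SET STORAGE EXTERNAL;

INSERT INTO toast_hot_test
    SELECT g, false, repeat('x', 40000)   -- roughly 40k value per row
    FROM generate_series(1, 1000) g;

-- Update only a non-indexed column, and only a subset of rows, so the new
-- versions fit in the per-page free space and stay HOT-eligible.
UPDATE toast_hot_test SET flag = true WHERE id % 10 = 0;

-- Stats are reported asynchronously, so allow a moment before checking.
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'toast_hot_test';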
[ { "msg_contents": "\nHi all,\n\nI've been reading through the performance list of the last few months, and haven't been able to find a solution to my problem yet, so I'm posting the specifics here now. If anyone can suggest what might work (or point me to where this has been covered before), that would be great. My current suspicion is that the shared_buffers setting is far too low.\n\nMy query is as follows:\nSELECT o.objectid, o.objectname, o.isactive, o.modificationtime \nFROM object o \nWHERE ( o.deleted = false OR o.deleted IS NULL ) \nAND o.accountid = 111 \nORDER BY 2 \nLIMIT 20 OFFSET 10000;\n\nThe object table has primary key objectid, an index on objectname, and a unique constraint on ( accountid, objectname ).\nWhat I'm trying to do is show only 20 records to the user at a time, sorting on objectname, and the ones I display depend on the page they're on (that's why I've got LIMIT plus OFFSET, of course).\n\nWhen offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:\n Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)\n -> Index Scan using account_objectname on \"object\" o (cost=0.00..1151102.10 rows=749559 width=35) (actual time=0.086..14.981 rows=10020 loops=1)\n Index Cond: (accountid = 354)\n Filter: ((NOT deleted) OR (deleted IS NULL))\n Total runtime: 19.315 ms\n\nIf I move the offset up to 100K records or higher, I get:\n Limit (cost=145636.26..145636.31 rows=20 width=35) (actual time=13524.327..13524.355 rows=20 loops=1)\n -> Sort (cost=145386.26..147260.16 rows=749559 width=35) (actual time=13409.216..13481.793 rows=100020 loops=1)\n Sort Key: objectname\n -> Seq Scan on \"object\" o (cost=0.00..16685.49 rows=749559 width=35) (actual time=0.011..1600.683 rows=749549 loops=1)\n Filter: (((NOT deleted) OR (deleted IS NULL)) AND (accountid = 354))\n Total runtime: 14452.374 ms\n\nThat's a huge decrease in performance, and I'm wondering if there's a way around it.\nRight now there are about 750K records in the object table, and that number will only increase with time.\nI've already run a VACUUM FULL on the table and played with changing work_mem, but so far am not seeing any improvement.\n\nAre there any other settings I can change to get back to that super-fast index scan? Is the shared_buffers = 2000 setting way too low? The reason I haven't actually changed that setting is due to some system limitations, etc., that require more work than just a change in the config file. If I can get confirmation that this is a likely cause/solution, then I can get the extra changes made.\n\nI'm running a quad core 2.33GHz Xeon with 4GB memory (1.2GB free), using Postgres 8.1.11.\n\nThanks,\n Michael Lorenz\n_________________________________________________________________\nIt's simple! 
Sell your car for just $30 at CarPoint.com.au\nhttp://a.ninemsn.com.au/b.aspx?URL=http%3A%2F%2Fsecure%2Dau%2Eimrworldwide%2Ecom%2Fcgi%2Dbin%2Fa%2Fci%5F450304%2Fet%5F2%2Fcg%5F801459%2Fpi%5F1004813%2Fai%5F859641&_t=762955845&_r=tig_OCT07&_m=EXT", "msg_date": "Thu, 14 Feb 2008 18:28:13 +0000", "msg_from": "Michael Lorenz <[email protected]>", "msg_from_op": true, "msg_subject": "Query slows after offset of 100K" }, { "msg_contents": "Michael Lorenz <[email protected]> writes:\n> My query is as follows:\n> SELECT o.objectid, o.objectname, o.isactive, o.modificationtime \n> FROM object o \n> WHERE ( o.deleted = false OR o.deleted IS NULL ) \n> AND o.accountid = 111 \n> ORDER BY 2 \n> LIMIT 20 OFFSET 10000;\n\nThis is guaranteed to lose --- huge OFFSET values are never a good idea\n(hint: the database still has to fetch those rows it's skipping over).\n\nA saner way to do pagination is to remember the last key you displayed\nand do something like \"WHERE key > $lastkey ORDER BY key LIMIT 20\",\nwhich will allow the database to go directly to the desired rows,\nas long as you have an index on the key. You do need a unique ordering\nkey for this to work, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Feb 2008 14:08:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slows after offset of 100K " }, { "msg_contents": "\nFair enough, and I did think of this as well. However, I didn't think this was a viable option in my case, since we're currently allowing the user to randomly access the pages (so $lastkey wouldn't really have any meaning). The user can choose to sort on object ID, name or modification time, and then go straight to any page in the list. With 750K records, that's around 37K pages.\n\nMaybe a better way to phrase my question is: how can I paginate my data on 3 different keys which allow random access to any given page, and still get reasonable performance? Should I just force the user to limit their result set to some given number of records before allowing any paginated access? Or is it just not practical, period?\n\nThanks,\n Michael Lorenz\n\n----------------------------------------\n> To: [email protected]\n> CC: [email protected]\n> Subject: Re: [PERFORM] Query slows after offset of 100K \n> Date: Thu, 14 Feb 2008 14:08:15 -0500\n> From: [email protected]\n> \n> Michael Lorenz writes:\n>> My query is as follows:\n>> SELECT o.objectid, o.objectname, o.isactive, o.modificationtime \n>> FROM object o \n>> WHERE ( o.deleted = false OR o.deleted IS NULL ) \n>> AND o.accountid = 111 \n>> ORDER BY 2 \n>> LIMIT 20 OFFSET 10000;\n> \n> This is guaranteed to lose --- huge OFFSET values are never a good idea\n> (hint: the database still has to fetch those rows it's skipping over).\n> \n> A saner way to do pagination is to remember the last key you displayed\n> and do something like \"WHERE key> $lastkey ORDER BY key LIMIT 20\",\n> which will allow the database to go directly to the desired rows,\n> as long as you have an index on the key. You do need a unique ordering\n> key for this to work, though.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n_________________________________________________________________\nYour Future Starts Here. Dream it? Then be it! 
Find it at www.seek.com.au\nhttp://a.ninemsn.com.au/b.aspx?URL=http%3A%2F%2Fninemsn%2Eseek%2Ecom%2Eau%2F%3Ftracking%3Dsk%3Ahet%3Ask%3Anine%3A0%3Ahot%3Atext&_t=764565661&_r=OCT07_endtext_Future&_m=EXT", "msg_date": "Thu, 14 Feb 2008 19:49:22 +0000", "msg_from": "Michael Lorenz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query slows after offset of 100K" }, { "msg_contents": "Michael,\n\nOur application had a similar problem, and what we did to avoid having\npeople click into the middle of 750k records was to show the first page\nwith forward/back links but no link to go to the middle. So people\ncould manually page forward as far as they want, but nobody is going to\nsit there clicking next 37k times. We have several thousand users and\nnone of them complained about the change. Maybe it's because at the\nsame time as we made that change we also improved the rest of the\nsearching/filtering interface. But I think that really people don't\nneed to jump to the middle of the records anyway as long as you have\ndecent search abilities.\n\nIf you wanted to keep your same GUI, one workaround would be to\nperiodically update a table which maps \"page number\" to \"first unique\nkey on page\". That decouples the expensive work to generate the page\noffsets from the query itself, so if your data changes fairly\ninfrequently it might be appropriate. Sort of a materialized-view type\napproach.\n\nIf you can be approximate in your GUI you can do a lot more with this\noptimization-- if people don't necessarily need to be able to go\ndirectly to page 372898 but instead would be satisfied with a page\nroughly 47% of the way into the massive result set (think of a GUI\nslider), then you wouldn't need to update the lookup table as often even\nif the data changed frequently, because adding a few thousand records to\na 750k row result set is statistically insignificant, so your markers\nwouldn't need to be updated very frequently and you wouldn't need to\nstore a marker for each page, maybe only 100 markers spread evenly\nacross the result set would be sufficient.\n\n-- Mark Lewis\n\n\nOn Thu, 2008-02-14 at 19:49 +0000, Michael Lorenz wrote:\n> Fair enough, and I did think of this as well. However, I didn't think this was a viable option in my case, since we're currently allowing the user to randomly access the pages (so $lastkey wouldn't really have any meaning). The user can choose to sort on object ID, name or modification time, and then go straight to any page in the list. With 750K records, that's around 37K pages.\n> \n> Maybe a better way to phrase my question is: how can I paginate my data on 3 different keys which allow random access to any given page, and still get reasonable performance? Should I just force the user to limit their result set to some given number of records before allowing any paginated access? 
Or is it just not practical, period?\n> \n> Thanks,\n> Michael Lorenz\n> \n> ----------------------------------------\n> > To: [email protected]\n> > CC: [email protected]\n> > Subject: Re: [PERFORM] Query slows after offset of 100K \n> > Date: Thu, 14 Feb 2008 14:08:15 -0500\n> > From: [email protected]\n> > \n> > Michael Lorenz writes:\n> >> My query is as follows:\n> >> SELECT o.objectid, o.objectname, o.isactive, o.modificationtime \n> >> FROM object o \n> >> WHERE ( o.deleted = false OR o.deleted IS NULL ) \n> >> AND o.accountid = 111 \n> >> ORDER BY 2 \n> >> LIMIT 20 OFFSET 10000;\n> > \n> > This is guaranteed to lose --- huge OFFSET values are never a good idea\n> > (hint: the database still has to fetch those rows it's skipping over).\n> > \n> > A saner way to do pagination is to remember the last key you displayed\n> > and do something like \"WHERE key> $lastkey ORDER BY key LIMIT 20\",\n> > which will allow the database to go directly to the desired rows,\n> > as long as you have an index on the key. You do need a unique ordering\n> > key for this to work, though.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> \n> _________________________________________________________________\n> Your Future Starts Here. Dream it? Then be it! Find it at www.seek.com.au\n> http://a.ninemsn.com.au/b.aspx?URL=http%3A%2F%2Fninemsn%2Eseek%2Ecom%2Eau%2F%3Ftracking%3Dsk%3Ahet%3Ask%3Anine%3A0%3Ahot%3Atext&_t=764565661&_r=OCT07_endtext_Future&_m=EXT\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n", "msg_date": "Thu, 14 Feb 2008 12:32:12 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slows after offset of 100K" }, { "msg_contents": "Michael Lorenz <[email protected]> writes:\n> Fair enough, and I did think of this as well. However, I didn't think this was a viable option in my case, since we're currently allowing the user to randomly access the pages (so $lastkey wouldn't really have any meaning). The user can choose to sort on object ID, name or modification time, and then go straight to any page in the list. With 750K records, that's around 37K pages.\n\nWell, my first question is whether that user interface is actually\nuseful to anyone. If you have a dictionary and you want to look up\n\"foosball\", do you start by guessing that it's on page 432? No, you\nlook for the \"F\" tab. 
I'd suggest that what you want is to present\nlinks that let people go to specific spots in the key distribution,\nrather than expressing it in terms of so-many-pages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Feb 2008 16:55:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slows after offset of 100K " }, { "msg_contents": "On Thu, 14 Feb 2008, Michael Lorenz wrote:\n\n> When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:\n> Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)\n> -> Index Scan using account_objectname on \"object\" o (cost=0.00..1151102.10 rows=749559 width=35) (actual time=0.086..14.981 rows=10020 loops=1)\n\nIt looks like the planner thinks that index scan will have to go through \n749559 rows, but there are actually only 10020 there. Is this table is \ngetting ANALYZE'd usefully? VACUUM FULL doesn't do that. If the row \nestimates are so far off, that might explain why it thinks the index scan \nis going to be so huge it might as well just walk the whole thing.\n\nActually, VACUUM FULL can be its own problem--you probably want a very \nregular VACUUM instead.\n\n> Is the shared_buffers = 2000 setting way too low?\n\nQuite; with 4GB of ram that could easily be 100,000+ instead. I wouldn't \nmake that whole jump at once, but 2000 is only a mere 16MB of memory \ndedicated to the database. Also, be sure to set effective_cache_size to \nsomething reflective of your total memory minus application+OS as it also \nhas an impact here; you've probably also got that set extremely low and if \nthis server is mostly for PostgreSQL a good starting point would be \nsomething like 300000 (=2.4GB).\n\n> Are there any other settings I can change to get back to that super-fast \n> index scan?\n\nWell, you can try to turn off sequential scans for the query. You can \ntest if that makes a difference like this:\n\nSET enable_seq_scan to off;\nEXPLAIN ANALYZE <x>;\nSET enable_seq_scan to on;\n\nIt's also possible to tweak parameters like random_page_cost to similarly \nprefer indexes. Far better to fix the true underlying issues though \n(above and below).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 14 Feb 2008 18:54:34 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slows after offset of 100K" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Thu, 14 Feb 2008, Michael Lorenz wrote:\n>> When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:\n>> Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)\n>> -> Index Scan using account_objectname on \"object\" o (cost=0.00..1151102.10 rows=749559 width=35) (actual time=0.086..14.981 rows=10020 loops=1)\n\n> It looks like the planner thinks that index scan will have to go through \n> 749559 rows, but there are actually only 10020 there.\n\nNo, you have to be careful about that. The estimated rowcount is for\nthe case where the plan node is run to completion, but when there's a\nLIMIT over it, it may not get run to completion. 
In this case the limit\nwas satisfied after pulling 10020 rows from the indexscan, but we can't\ntell what fraction of the underlying scan was actually completed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Feb 2008 19:02:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slows after offset of 100K " }, { "msg_contents": "On Thu, 14 Feb 2008, Michael Lorenz wrote:\n> When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:\n> Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)\n> -> Index Scan using account_objectname on \"object\" o (cost=0.00..1151102.10 rows=749559 width=35) (actual time=0.086..14.981 rows=10020 loops=1)\n> Index Cond: (accountid = 354)\n> Filter: ((NOT deleted) OR (deleted IS NULL))\n> Total runtime: 19.315 ms\n\nSince this is scanning through 10,000 random rows in 19 milliseconds, I \nsay all this data is already in the cache. If it wasn't, you'd be looking \nat 10,000 random seeks on disk, at about 7ms each, which is 70 seconds. \nTry dropping the OS caches (on Linux echo \"1\" >/proc/sys/vm/drop_caches) \nand see if the performance is worse.\n\n> If I move the offset up to 100K records or higher, I get:\n> Limit (cost=145636.26..145636.31 rows=20 width=35) (actual time=13524.327..13524.355 rows=20 loops=1)\n> -> Sort (cost=145386.26..147260.16 rows=749559 width=35) (actual time=13409.216..13481.793 rows=100020 loops=1)\n> Sort Key: objectname\n> -> Seq Scan on \"object\" o (cost=0.00..16685.49 rows=749559 width=35) (actual time=0.011..1600.683 rows=749549 loops=1)\n> Filter: (((NOT deleted) OR (deleted IS NULL)) AND (accountid = 354))\n> Total runtime: 14452.374 ms\n\nAnd here, it only takes 1.5 seconds to fetch the entire table from disc \n(or it's already in the cache or something), but 14 seconds to sort the \nwhole lot in memory.\n\nIn any case, Postgres is making a good choice - it's just that you have an \nunexpected benefit in the first case that the data is in cache. Setting \nthe effective cache size correctly will help the planner in this case. \nSetting work_mem higher will improve the performance of the sort in the \nsecond case.\n\nOf course, what others have said about trying to avoid large offsets is \ngood advice. You don't actually need a unique index, but it makes it \nsimpler if you do.\n\nMatthew\n\n-- \nThe early bird gets the worm. If you want something else for breakfast, get\nup later.\n", "msg_date": "Fri, 15 Feb 2008 14:47:06 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query slows after offset of 100K" } ]
[ { "msg_contents": "Once per quarter, we need to load a lot of data, which causes many\nupdates across the database. We have an online transaction\nprocessing-style application, which we really want to stay up during the\nupdate job.\n\n \n\nThe programmer coded a stored procedure which does the job well ...\nlogically. But as a single PL/pgSQL stored procedure, it is one\nlong-running transaction. At least, that is my interpretation of\nhttp://www.postgresql.org/docs/8.0/interactive/plpgsql-porting.html#CO.P\nLPGSQL-PORTING-COMMIT - and in fact, we do get errors when we try little\nBEGIN-COMMIT blocks inside a stored procedure.\n\n \n\nA single long-running transaction would be bad in production. A long\nrun time = OK, but long-running transaction = site outage. \n\n \n\nSo I'm asking for advice on whether I can break this into small\ntransactions without too much of a rewrite. Roughly, the algorithm is:\n\n \n\n(1) One job dumps the data from the external source into a load table.\n\n(2) Another job calls the stored procedure, which uses a cursor to\ntraverse the load table. A loop for each record:\n\na. Processes a lot of special cases, with inserts and/or updates to\nmany tables.\n\n \n\nUnless this can be done within PL/pgSQL, I will have the programmer\nrefactor job (2) so that the loop is in a java program, and the\n\"normalization\" logic in (a) - the guts of the loop - remain in a\nsmaller stored procedure. The java loop will call that stored procedure\nonce per row of the load table, each call in a separate transaction.\nThat would both preserve the bulk of the PL/pgSQL code and keep the\nnormalization logic close to the data. So the runtime will be\nreasonable, probably somewhat longer than his single monolithic stored\nprocedure, but the transactions will be short.\n\n \n\nWe don't need anything like SERIALIZATION transaction isolation of the\nonline system from the entire load job. \n\n \n\nThanks for any ideas,\n\nDavid Crane\n\nDonorsChoose.org\n\n\n\n\n\n\n\n\n\n\nOnce per quarter, we need to load a lot of data, which\ncauses many updates across the database.  We have an online transaction\nprocessing-style application, which we really want to stay up during the update\njob.\n \nThe programmer coded a stored procedure which does the job\nwell … logically.  But as a single PL/pgSQL stored procedure, it is\none long-running transaction.  At least, that is my interpretation of http://www.postgresql.org/docs/8.0/interactive/plpgsql-porting.html#CO.PLPGSQL-PORTING-COMMIT\n– and in fact, we do get errors when we try little BEGIN-COMMIT blocks\ninside a stored procedure.\n \nA single long-running transaction would be bad in production. \nA long run time = OK, but long-running transaction = site outage. \n \nSo I’m asking for advice on whether I can break this\ninto small transactions without too much of a rewrite.  Roughly, the\nalgorithm is:\n \n(1)   One job\ndumps the data from the external source into a load table.\n(2)   Another job\ncalls the stored procedure, which uses a cursor to traverse the load table. \nA loop for each record:\na.      Processes a lot\nof special cases, with inserts and/or updates to many tables.\n \nUnless this can be done within PL/pgSQL, I will have the\nprogrammer refactor job (2) so that the loop is in a java program, and the “normalization”\nlogic in (a) – the guts of the loop – remain in a smaller stored\nprocedure.  The java loop will call that stored procedure once per row of\nthe load table, each call in a separate transaction.  
That would both\npreserve the bulk of the PL/pgSQL code and keep the normalization logic close\nto the data.  So the runtime will be reasonable, probably somewhat longer\nthan his single monolithic stored procedure, but the transactions will be\nshort.\n \nWe don’t need anything like SERIALIZATION transaction\nisolation of the online system from the entire load job.  \n \nThanks for any ideas,\nDavid Crane\nDonorsChoose.org", "msg_date": "Thu, 14 Feb 2008 20:15:21 -0500", "msg_from": "\"David Crane\" <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid long-running transactions in a long-running stored procedure?" }, { "msg_contents": "David,\n\n> Once per quarter, we need to load a lot of data, which causes many\n> updates across the database. We have an online transaction\n> processing-style application, which we really want to stay up during the\n> update job.\n\nWhat you're talking about is \"autonomous transactions\". There's someone \nworking on them for 8.4, and we may get them next version, but you can't \nhave them now.\n\nHowever, you can write your stored procedures in an external language (like \nPL/Perl, PL/Ruby, PL/Java or PL/Python) and re-connect to your database in \norder to run several separate transactions. Several users are doing this \nfor large ETL jobs. \n\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 14 Feb 2008 17:29:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid long-running transactions in a long-running stored\n\tprocedure?" }, { "msg_contents": "\nOn Thu, 2008-02-14 at 17:29 -0800, Josh Berkus wrote:\n> David,\n> \n> > Once per quarter, we need to load a lot of data, which causes many\n> > updates across the database. We have an online transaction\n> > processing-style application, which we really want to stay up during the\n> > update job.\n> However, you can write your stored procedures in an external language (like \n> PL/Perl, PL/Ruby, PL/Java or PL/Python) and re-connect to your database in \n> order to run several separate transactions. Several users are doing this \n> for large ETL jobs. \n> \n\nI actually do it externally via a perl script even, and I'm breaking the\ndata down to even more than miniscule size.\n", "msg_date": "Fri, 15 Feb 2008 09:30:38 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid long-running transactions in a long-running\n\tstored procedure?" }, { "msg_contents": "Thanks for the prompt replies!\n\nIt sounds like these are variations of the same approach. In our case,\nwe need to do a lot of comparing against the old data, audit tables and\nso forth, so the bulk of the work is in the body of the existing loop\n(already coded). So I think keeping that loop body in a stand-alone\nstored procedure will be the most efficient for us. And we'll port the\nlogic outside the loop into a java program, easier for us to schedule\nthrough another existing system.\n\nThose autonomous transactions are gonna be nice, but PostgreSQL is\nplenty nice as it is. 
Progress is good, though.\n\nThanks,\nDavid Crane\n\n-----Original Message-----\nFrom: Ow Mun Heng [mailto:[email protected]] \nSent: Thursday, February 14, 2008 8:31 PM\nTo: [email protected]\nCc: [email protected]; David Crane\nSubject: Re: [PERFORM] Avoid long-running transactions in a\nlong-runningstored procedure?\n\n\nOn Thu, 2008-02-14 at 17:29 -0800, Josh Berkus wrote:\n> David,\n> \n> > Once per quarter, we need to load a lot of data, which causes many\n> > updates across the database. We have an online transaction\n> > processing-style application, which we really want to stay up during\nthe\n> > update job.\n> However, you can write your stored procedures in an external language\n(like \n> PL/Perl, PL/Ruby, PL/Java or PL/Python) and re-connect to your\ndatabase in \n> order to run several separate transactions. Several users are doing\nthis \n> for large ETL jobs. \n> \n\nI actually do it externally via a perl script even, and I'm breaking the\ndata down to even more than miniscule size.\n", "msg_date": "Thu, 14 Feb 2008 20:38:13 -0500", "msg_from": "\"David Crane\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid long-running transactions in a long-runningstored\n procedure?" } ]
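A minimal sketch of the per-row refactor described in this thread, assuming a hypothetical load table named load_queue with an id column and a processed flag (none of these object names come from the original poster's schema). Each call the external driver makes runs as its own short transaction, so the online application never sits behind one long-running transaction:

-- hypothetical per-row worker; object names are illustrative only
CREATE OR REPLACE FUNCTION process_load_row(p_id integer) RETURNS void AS $$
BEGIN
    -- the special-case handling from the original cursor loop goes here:
    -- inserts/updates against the normalized tables, audit rows, etc.
    UPDATE load_queue SET processed = true WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;

-- the Java (or psql) driver then issues one short transaction per row, e.g.:
--   BEGIN;
--   SELECT process_load_row(42);
--   COMMIT;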
[ { "msg_contents": "Hello,\n\nmy impression has been that in the past, there has been a general\nsemi-consensus that upping shared_buffers to use the majority of RAM\nhas not generally been recommended, with reliance on the buffer cache\ninstead being the recommendation.\n\nGiven the changes that have gone into 8.3, in particular with regards\nto minimizing the impact of large sequential scans, would it be\ncorrect to say that given that\n\n - enough memory is left for other PG bits (sort mems and whatnot else)\n - only PG is running on the machine\n - you're on 64 bit so do not run into address space issues\n - the database working set is larger than RAM\n\nit would be generally advisable to pump up shared_buffers pretty much\nas far as possible instead of relying on the buffer cache?\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Fri, 15 Feb 2008 13:35:29 +0100", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "shared_buffers in 8.3 w/ lots of RAM on dedicated PG machine" }, { "msg_contents": "On Fri, Feb 15, 2008 at 01:35:29PM +0100, Peter Schuller wrote:\n> Hello,\n> \n> my impression has been that in the past, there has been a general\n> semi-consensus that upping shared_buffers to use the majority of RAM\n> has not generally been recommended, with reliance on the buffer cache\n> instead being the recommendation.\n> \n> Given the changes that have gone into 8.3, in particular with regards\n> to minimizing the impact of large sequential scans, would it be\n> correct to say that given that\n> \n> - enough memory is left for other PG bits (sort mems and whatnot else)\n> - only PG is running on the machine\n> - you're on 64 bit so do not run into address space issues\n> - the database working set is larger than RAM\n> \n> it would be generally advisable to pump up shared_buffers pretty much\n> as far as possible instead of relying on the buffer cache?\n> \n> -- \n> / Peter Schuller\n> \n> PGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\n> Key retrieval: Send an E-Mail to [email protected]\n> E-Mail: [email protected] Web: http://www.scode.org\n> \nPeter,\n\nPostgreSQL still depends on the OS for file access and caching. I\nthink that the current recommendation is to have up to 25% of your\nRAM in the shared buffer cache.\n\nKen\n", "msg_date": "Fri, 15 Feb 2008 07:37:34 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG\n\tmachine" }, { "msg_contents": "> PostgreSQL still depends on the OS for file access and caching. I\n> think that the current recommendation is to have up to 25% of your\n> RAM in the shared buffer cache.\n\nThis feels strange. Given a reasonable amount of RAM (let's say 8 GB\nin this case), I cannot imagine why 75% of that would be efficiently\nused for anything but the buffer cache (ignoring work_mem, stacks,\netc). Obviously the OS will need memory to do it's usual stuff\n(buffering to do efficient I/O, and so on). 
But the need for that\nshould not increase with the amount of RAM in the machine, all else\nbeing equal.\n\nWhat type of file I/O, other than reading pages of PostgreSQL data\nwhich are eligable for the PostgreSQL buffer cache, does PostgreSQL do\nthat would take advantage of the operating system caching so much\ndata?\n\n(Assuming the database is not extreme to the point of file system meta\ndata being huge.)\n\nIf the 25% rule still holds true, even under circumstances where the\nassumption is that the PostgreSQL buffer cache is more efficient (in\nterms of hit ratio) at caching PostgreSQL database data pages, it\nwould be useful to understand why in order to understand the\ntrade-offs involved and make appropriate decisions.\n\nOr is it a matter of PostgreSQL doing non-direct I/O, such that\nanything cached in shared_buffers will also be cached by the OS?\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Fri, 15 Feb 2008 14:58:46 +0100", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG\n\tmachine" }, { "msg_contents": "On Fri, 15 Feb 2008, Peter Schuller wrote:\n\n> Or is it a matter of PostgreSQL doing non-direct I/O, such that\n> anything cached in shared_buffers will also be cached by the OS?\n\nPostgreSQL only uses direct I/O for writing to the WAL; everything else \ngoes through the regular OS buffer cache unless you force it to do \notherwise at the OS level (like some Solaris setups do with \nforcedirectio). This is one reason it still make not make sense to give \nan extremely high percentage of RAM to PostgreSQL even with improvements \nin managing it. Another is that shared_buffers memory has to be \nreconciled with disk at every checkpoint, where OS buffers do not. A \nthird is that your OS may just be more efficient at buffering--it knows \nmore about the underlying hardware, and the clock-sweep method used \ninternally by PostgreSQL to simulate a LRU cache is not extremely \nsophisticated.\n\nHowever, don't feel limited by the general 25% rule; it's certainly worth \nexploring whether 50% or more works better for your workload. You'll have \nto benchmark that yourself though, and I'd suggest using pg_buffercache: \nhttp://www.postgresql.org/docs/8.3/static/pgbuffercache.html to get an \nidea just what the pages are being used for.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 15 Feb 2008 09:29:05 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG\n machine" }, { "msg_contents": "On Friday 15 February 2008 06:29, Greg Smith wrote:\n> PostgreSQL only uses direct I/O for writing to the WAL; everything else\n> goes through the regular OS buffer cache unless you force it to do\n> otherwise at the OS level (like some Solaris setups do with\n> forcedirectio).\n\nAlso, note that even when direct I/O is available, most users and benchmark \ntests have reported that having PostgreSQL \"take over\" the entire cache is \nnot a net performance gain. I believe this is mostly because our I/O and \ncaching code aren't designed for this kind of operation.\n\nI believe that MyEmma had a different experience on their workload, though. 
\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 15 Feb 2008 10:06:11 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG machine" }, { "msg_contents": "\nOn Feb 15, 2008, at 12:06 PM, Josh Berkus wrote:\n\n> On Friday 15 February 2008 06:29, Greg Smith wrote:\n>> PostgreSQL only uses direct I/O for writing to the WAL; everything \n>> else\n>> goes through the regular OS buffer cache unless you force it to do\n>> otherwise at the OS level (like some Solaris setups do with\n>> forcedirectio).\n>\n> Also, note that even when direct I/O is available, most users and \n> benchmark\n> tests have reported that having PostgreSQL \"take over\" the entire \n> cache is\n> not a net performance gain. I believe this is mostly because our I/ \n> O and\n> caching code aren't designed for this kind of operation.\n>\n> I believe that MyEmma had a different experience on their workload, \n> though.\n\nActually, while we did have shared_buffers set to 4G on an 8G system \nwhen we were running with forcedirectio, the decision to even run \nwith forcedirectio was a temporary until we were able (welll, forced \nto) migrate to a new system with a sane drive configuration. The old \nset up was done horribly by a sysadmin who's no longer with us who \nset us up with a RAID5 array with both the data and xlogs both \nmirrored across all of the disks with no spares. So, I wouldn't \nconsider the numbers I was seeing then a reliable expectation as that \nsystem was nowhere close to ideal. We've seen much more sane and \nconsistent numbers on a more normal setup, i.e. without forcedirectio \nand with <= 25% system memory.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Fri, 15 Feb 2008 12:37:10 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG machine" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 15 Feb 2008 12:37:10 -0600\r\nErik Jones <[email protected]> wrote:\r\n>(welll, forced \r\n> to) migrate to a new system with a sane drive configuration. The\r\n> old set up was done horribly by a sysadmin who's no longer with us\r\n> who set us up with a RAID5 array with both the data and xlogs both \r\n> mirrored across all of the disks with no spares. \r\n\r\nIs the admin still with us? Or is he fertilizer? I have some know some\r\ngreat gardeners from Jersey...\r\n\r\nSincerely,\r\n\r\nJoshua D. 
Drake\r\n\r\n\r\n> \r\n> Erik Jones\r\n> \r\n> DBA | Emma®\r\n> [email protected]\r\n> 800.595.4401 or 615.292.5888\r\n> 615.292.0777 (fax)\r\n> \r\n> Emma helps organizations everywhere communicate & market in style.\r\n> Visit us online at http://www.myemma.com\r\n> \r\n> \r\n> \r\n> \r\n> ---------------------------(end of\r\n> broadcast)--------------------------- TIP 3: Have you checked our\r\n> extensive FAQ?\r\n> \r\n> http://www.postgresql.org/docs/faq\r\n> \r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHtd0YATb/zqfZUUQRAuwPAJ0Y2VjYMkHhCsQ07Sadj/kT0Yz3wQCgmuCP\r\neOmndoyvYe+DhH+AOwcyms4=\r\n=qGZE\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Fri, 15 Feb 2008 10:42:32 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG\n machine" }, { "msg_contents": "\n\nGreg Smith wrote:\n> On Fri, 15 Feb 2008, Peter Schuller wrote:\n>\n>> Or is it a matter of PostgreSQL doing non-direct I/O, such that\n>> anything cached in shared_buffers will also be cached by the OS?\n>\n> PostgreSQL only uses direct I/O for writing to the WAL; everything \n> else goes through the regular OS buffer cache unless you force it to \n> do otherwise at the OS level (like some Solaris setups do with \n> forcedirectio). This is one reason it still make not make sense to \n> give an extremely high percentage of RAM to PostgreSQL even with \n> improvements in managing it. Another is that shared_buffers memory \n> has to be reconciled with disk at every checkpoint, where OS buffers \n> do not. A third is that your OS may just be more efficient at \n> buffering--it knows more about the underlying hardware, and the \n> clock-sweep method used internally by PostgreSQL to simulate a LRU \n> cache is not extremely sophisticated.\n>\n> However, don't feel limited by the general 25% rule; it's certainly \n> worth exploring whether 50% or more works better for your workload. \n> You'll have to benchmark that yourself though, and I'd suggest using \n> pg_buffercache: \n> http://www.postgresql.org/docs/8.3/static/pgbuffercache.html to get an \n> idea just what the pages are being used for.\n>\n\nAs per the test that I have done mostly with forcedirectio on Solaris, I \nhave seen gains with increasing the buffercache to about somewhere \nbetween 10GB and thats when thing seem to take a turn...\n\nSo in my case I am generally comfortable for Postgres to use about \n8-10GB beyond which I am cautious.\n\nAlso with tests with UFS buffered for table/index and forcedirectio it \nseems to perform better with forcedirectio .. However if you do want to \nexploit the extra RAM with UFS then you have to do some tunings for UFS \nin Solaris.. Now with machines with 32GB becoming common this is \nsomething worth pursuing depending on the storage if it can handle the \ndirectio load or not.\n\n\nRegards,\nJignesh\n\n\n\n", "msg_date": "Fri, 15 Feb 2008 14:34:33 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG\n machine" }, { "msg_contents": "\nOn Feb 15, 2008, at 12:42 PM, Joshua D. 
Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Fri, 15 Feb 2008 12:37:10 -0600\n> Erik Jones <[email protected]> wrote:\n>> (welll, forced\n>> to) migrate to a new system with a sane drive configuration. The\n>> old set up was done horribly by a sysadmin who's no longer with us\n>> who set us up with a RAID5 array with both the data and xlogs both\n>> mirrored across all of the disks with no spares.\n>\n> Is the admin still with us? Or is he fertilizer? I have some know some\n> great gardeners from Jersey...\n\nHeh, he's definitely no long with us although not in the sense that \nhe's now \"pushin' up daisies\"...\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Fri, 15 Feb 2008 13:46:48 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG machine" }, { "msg_contents": "> PostgreSQL only uses direct I/O for writing to the WAL; everything else\n> goes through the regular OS buffer cache unless you force it to do\n> otherwise at the OS level (like some Solaris setups do with\n> forcedirectio). This is one reason it still make not make sense to give\n> an extremely high percentage of RAM to PostgreSQL even with improvements\n> in managing it. \n\nOk - thank you for the input (that goes for everyone).\n\n> Another is that shared_buffers memory has to be \n> reconciled with disk at every checkpoint, where OS buffers do not.\n\nHmm. Am I interpreting that correctly in that dirty buffers need to be flushed \nto disk at checkpoints? That makes perfect sense - but why would that not be \nthe case with OS buffers? My understanding is that the point of the \ncheckpoint is to essentially obsolete old WAL data in order to recycle the \nspace, which would require flushing the data in question first (i.e., \nnormally you just fsync the WAL, but when you want to recycle space you need \nfsync() for the barrier and are then free to nuke the old WAL).\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Mon, 18 Feb 2008 08:39:46 +0100", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG machine" }, { "msg_contents": "On Mon, 18 Feb 2008, Peter Schuller wrote:\n\n> Am I interpreting that correctly in that dirty buffers need to be \n> flushed to disk at checkpoints? That makes perfect sense - but why would \n> that not be the case with OS buffers?\n\nAll the dirty buffers in the cache are written out as part of the \ncheckpoint process--all at once in earlier versions, spread out based on \ncheckpoint_completion_target in 8.3. In the worst case you could \ntheoretically have to write the entire shared_buffer cache out, however \nbig it is, if you managed to get it all dirty just before the checkpoint.\n\nUltimately everything written to the database (again, with the exception \nof non-standard direct I/O setups) passes through the OS buffers, so in \nthat respect the OS buffers will also be flushed when the checkpoint does \nits cleansing fsync.\n\nBut dirty buffers for less popular pages do get written before the \ncheckpoint occurs. 
As there is a need to allocate new pages for the \ndatabase to work with, it evicts pages in order to find space, and if the \npage given the boot is dirty it gets written to the OS buffer cache. \nThose writes trickle out to disk in advance of the checkpoint itself. If \nyou've pushed the majority of memory into the PostgreSQL cache, that won't \nhappen as much (more shared_buffers=>less evictions+less OS cache) and \nthere's a potential for longer, more intensive checkpoints.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 18 Feb 2008 16:24:30 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers in 8.3 w/ lots of RAM on dedicated PG\n machine" } ]
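For reference, a sketch of the pg_buffercache inspection suggested above. It assumes the contrib/pg_buffercache module has been installed in the database being examined, and it only resolves relation names for relations of the current database (plus shared catalogs):

-- top consumers of shared_buffers, by relation
SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 20;

Comparing these counts (and how many buffers show up as dirty via the view's isdirty column) before and after raising shared_buffers is one way to judge whether the larger cache is actually being put to use.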
[ { "msg_contents": "We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing some \nplaying around\nwith various settings. So far, we've (I say we, but it's another guy \ndoing the work) found\nthat open_datasync seems better than fsync. By how much, we have not \nyet determined,\nbut initial observations are promising.\n\nOur tests have been on a p550 connected to DS6800 array using pgbench.\n\nOne nasty behaviour we have seen is long running commits. Initial \nthoughts connected\nthem with checkpoints, but the long running commits do not correlate \nwith checkpoints being\nwritten. Have you seen this behaviour?\n\nFYI, 8.3.0 is not an option for us in the short term.\n\nWhat have you been using on AIX and why?\n\nthanks\n\n-- \nDan Langille -- http://www.langille.org/\[email protected]\n\n\n\n\n", "msg_date": "Fri, 15 Feb 2008 16:55:45 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": true, "msg_subject": "wal_sync_methods for AIX" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 15 Feb 2008 16:55:45 -0500\r\nDan Langille <[email protected]> wrote:\r\n\r\n> Our tests have been on a p550 connected to DS6800 array using pgbench.\r\n> \r\n> One nasty behaviour we have seen is long running commits. Initial \r\n> thoughts connected\r\n> them with checkpoints, but the long running commits do not correlate \r\n> with checkpoints being\r\n> written. Have you seen this behaviour?\r\n\r\nAre you sure? What makes you think this? Do you have a high level of\r\nshared buffers? What are your bgwriter settings?\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHtgyFATb/zqfZUUQRAv2LAJ41l25YG7PwfgpZtuPD/1aL5I4ZTwCfRGii\r\nLkFFefSDT72qGzY8PxOMXKE=\r\n=0iC3\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Fri, 15 Feb 2008 14:04:53 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_sync_methods for AIX" }, { "msg_contents": "\nOn Feb 15, 2008, at 3:55 PM, Dan Langille wrote:\n\n> We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing some \n> playing around\n> with various settings. So far, we've (I say we, but it's another \n> guy doing the work) found\n> that open_datasync seems better than fsync. By how much, we have \n> not yet determined,\n> but initial observations are promising.\n\nHere's a good explanation (by the Greg Smith) on the different sync \nmethods. It basically says that if you have open_datasync available, \nit'll probably beat everything else.\n\n> Our tests have been on a p550 connected to DS6800 array using pgbench.\n>\n> One nasty behaviour we have seen is long running commits. Initial \n> thoughts connected\n> them with checkpoints, but the long running commits do not \n> correlate with checkpoints being\n> written. 
Have you seen this behaviour?\n>\n> FYI, 8.3.0 is not an option for us in the short term.\n>\n> What have you been using on AIX and why?\n\nI really don't know anything about AIX, but are you sure that these \nlong running commits are directly correlated with using open_datasync?\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Fri, 15 Feb 2008 16:20:25 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_sync_methods for AIX" }, { "msg_contents": "Dan Langille wrote:\n>\n>\n> Begin forwarded message:\n>\n>> From: \"Joshua D. Drake\" <[email protected]>\n>> Date: February 15, 2008 5:04:53 PM EST\n>> To: Dan Langille <[email protected]>\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] wal_sync_methods for AIX\n>>\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>>\n>> On Fri, 15 Feb 2008 16:55:45 -0500\n>> Dan Langille <[email protected]> wrote:\n>>\n>>> Our tests have been on a p550 connected to DS6800 array using pgbench.\n>>>\n>>> One nasty behaviour we have seen is long running commits. Initial\n>>> thoughts connected\n>>> them with checkpoints, but the long running commits do not correlate\n>>> with checkpoints being\n>>> written. Have you seen this behaviour?\n>>\n>> Are you sure? What makes you think this? Do you have a high level of\n>> shared buffers? What are your bgwriter settings?\n\n\nI've set checkpoint_warning to 300, knowing that my testcase will \ndefinitely cause checkpoints inside this window, and generate log \nmessages. I see long running INSERT and END transactions sprinkled \nevenly throughout the duration of the test, not specifically around the \ntime of the checkpoint messages.\nShared buffers are set to 50000, bgwriter settings are as follows:\n\n# - Background writer -\n\nbgwriter_delay = 50 # 10-10000 milliseconds between \nrounds\nbgwriter_lru_percent = 20.0 # 0-100% of LRU buffers \nscanned/round\nbgwriter_lru_maxpages = 300 # 0-1000 buffers max written/round\nbgwriter_all_percent = 5 # 0-100% of all buffers \nscanned/round\nbgwriter_all_maxpages = 600 # 0-1000 buffers max written/round\n\n\nThe testcase is a simple pgbench test, with 100 clients, 10000 \ntransactions. I modified the pgbench db to increase the number of \nwrites by adding 3 history_archive tables, populated by rules. I am not \nassuming that changing the wal_sync_method will eliminate the long \nrunning transactions but repeating the testcase with open_datasync (vs \nfsync) resulted in fewer long running END transactions (by a large margin)\n\n>>\n>> Joshua D. Drake\n>>\n>>\n>>\n>>\n>> - --\n>> The PostgreSQL Company since 1997: http://www.commandprompt.com/\n>> PostgreSQL Community Conference: http://www.postgresqlconference.org/\n>> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n>> PostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\n>>\n>> -----BEGIN PGP SIGNATURE-----\n>> Version: GnuPG v1.4.6 (GNU/Linux)\n>>\n>> iD8DBQFHtgyFATb/zqfZUUQRAv2LAJ41l25YG7PwfgpZtuPD/1aL5I4ZTwCfRGii\n>> LkFFefSDT72qGzY8PxOMXKE=\n>> =0iC3\n>> -----END PGP SIGNATURE-----\n>\n>\n\n\n-- \nJP Fletcher\nDatabase Administrator\nAfilias Canada\nvoice: 416.646.3304 ext. 
4123\nfax: 416.646.3305\nmobile: 416.561.4763\[email protected]\n\n\n", "msg_date": "Tue, 19 Feb 2008 15:21:16 -0500", "msg_from": "JP Fletcher <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: wal_sync_methods for AIX" }, { "msg_contents": "Erik Jones wrote:\n> \n> On Feb 15, 2008, at 3:55 PM, Dan Langille wrote:\n> \n>> We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing some \n>> playing around\n>> with various settings. So far, we've (I say we, but it's another guy \n>> doing the work) found\n>> that open_datasync seems better than fsync. By how much, we have not \n>> yet determined,\n>> but initial observations are promising.\n> \n> Here's a good explanation (by the Greg Smith) on the different sync \n> methods. It basically says that if you have open_datasync available, \n> it'll probably beat everything else.\n\nWhere is that explanation?\n\n-- \nDan Langille\n\nBSDCan - The Technical BSD Conference : http://www.bsdcan.org/\nPGCon - The PostgreSQL Conference: http://www.pgcon.org/\n", "msg_date": "Tue, 19 Feb 2008 16:58:19 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wal_sync_methods for AIX" }, { "msg_contents": "\nOn Feb 19, 2008, at 3:58 PM, Dan Langille wrote:\n\n> Erik Jones wrote:\n>> On Feb 15, 2008, at 3:55 PM, Dan Langille wrote:\n>>> We're using PostgreSQL 8.1.11 on AIX 5.3 and we've been doing \n>>> some playing around\n>>> with various settings. So far, we've (I say we, but it's another \n>>> guy doing the work) found\n>>> that open_datasync seems better than fsync. By how much, we have \n>>> not yet determined,\n>>> but initial observations are promising.\n>> Here's a good explanation (by the Greg Smith) on the different \n>> sync methods. It basically says that if you have open_datasync \n>> available, it'll probably beat everything else.\n>\n> Where is that explanation?\n\nSorry, did I leave off the link? http://www.westnet.com/~gsmith/ \ncontent/postgresql/TuningPGWAL.htm\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Tue, 19 Feb 2008 16:09:41 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_sync_methods for AIX" }, { "msg_contents": "On Tue, 19 Feb 2008, JP Fletcher wrote:\n\n> Shared buffers are set to 50000, bgwriter settings are as follows:\n>\n> bgwriter_delay = 50 # 10-10000 milliseconds between \n> rounds\n> bgwriter_lru_percent = 20.0 # 0-100% of LRU buffers scanned/round\n> bgwriter_lru_maxpages = 300 # 0-1000 buffers max written/round\n> bgwriter_all_percent = 5 # 0-100% of all buffers scanned/round\n> bgwriter_all_maxpages = 600 # 0-1000 buffers max written/round\n\nNot that it impacts what you're asking about, but that's probably an \nexcessive setting for bgwriter_lru_percent. With the reduced delay and \nscanning that much, you're burning a lot of CPU time doing that for little \nbenefit.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 19 Feb 2008 17:57:21 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: wal_sync_methods for AIX" } ]
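For readers following along, the switch being tested in this thread is just the wal_sync_method setting; a small sketch (nothing AIX-specific about the syntax, and open_datasync is only offered on platforms whose open() supports O_DSYNC):

-- see what the running server is using
SHOW wal_sync_method;

-- in postgresql.conf, then reload/restart:
--   wal_sync_method = open_datasync    # platform-dependent alternatives include
--                                      # fdatasync, fsync, fsync_writethrough, open_sync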
[ { "msg_contents": "Hi members\n\nI am looking for an example of a web application (WAR) which executea a\nPostgres actions. This aims to test the performance of Postgres in Web mode.\n\nI shall be grateful if someone gives me a link where I can find a WAR file.\n\nThank you\n\nHi members\n \nI am looking for an example of a web application (WAR) which executea a Postgres actions. This aims to test the performance of Postgres in Web mode. I shall be grateful if someone gives me a link where I can find a WAR file. \nThank you", "msg_date": "Sun, 17 Feb 2008 12:21:49 +0100", "msg_from": "\"Mohamed Ali JBELI\" <[email protected]>", "msg_from_op": true, "msg_subject": "Example web access to Postgres DB" }, { "msg_contents": "Mohamed Ali JBELI wrote:\n> Hi members\n> \n> I am looking for an example of a web application (WAR) which executea a \n> Postgres actions. This aims to test the performance of Postgres in Web \n> mode.\n> I shall be grateful if someone gives me a link where I can find a WAR file.\n\nA WAR file? Postgres is peace, not war ;)\nSeriously, for postgres to be used for web applications (as nowadays\nRPC-Server with XML over HTTP are commonly named ;) you need to settle\nfor an application server (available for many technologies - choose\nwhatever you feel comfortable with) which then connects back to\npostgres.\n\nRegards\nTino\n", "msg_date": "Sun, 17 Feb 2008 14:53:29 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Example web access to Postgres DB" }, { "msg_contents": "\"Mohamed Ali JBELI\" <[email protected]> wrote:\n>\n> Hi members\n> \n> I am looking for an example of a web application (WAR) which executea a\n> Postgres actions. This aims to test the performance of Postgres in Web mode.\n> \n> I shall be grateful if someone gives me a link where I can find a WAR file.\n\nI think you're going to have to be more specific. I don't know what\ntechnology uses WAR files, and based on the tepid response, it doesn't\nseem like anyone else on the list does either.\n\nWhat program are you using that uses the WAR files?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Sun, 17 Feb 2008 10:18:14 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Example web access to Postgres DB" }, { "msg_contents": "\nOn 17-Feb-08, at 10:18 AM, Bill Moran wrote:\n\n> \"Mohamed Ali JBELI\" <[email protected]> wrote:\n>>\n>> Hi members\n>>\n>> I am looking for an example of a web application (WAR) which \n>> executea a\n>> Postgres actions. This aims to test the performance of Postgres in \n>> Web mode.\n>>\n>> I shall be grateful if someone gives me a link where I can find a \n>> WAR file.\n>\n> I think you're going to have to be more specific. I don't know what\n> technology uses WAR files, and based on the tepid response, it doesn't\n> seem like anyone else on the list does either.\n>\nJava uses WAR files, it's a specific JAR file layout.\n\nTo answer your question.\n\nhttp://blogs.sun.com/tomdaly/entry/sun_pushing_price_performance_curve\n\nwas done using a java web application.\n\nDave\n", "msg_date": "Sun, 17 Feb 2008 11:12:54 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Example web access to Postgres DB" }, { "msg_contents": "> > I am looking for an example of a web application (WAR) which executea a\n> > Postgres actions. 
This aims to test the performance of Postgres in Web mode.\n> > I shall be grateful if someone gives me a link where I can find a WAR file.\n> I think you're going to have to be more specific. I don't know what\n> technology uses WAR files, and based on the tepid response, it doesn't\n> seem like anyone else on the list does either.\n> What program are you using that uses the WAR files?\n\nWAR is a way of packaging J2EE applications for Java app servers like\nTomcat, JBoss, etc... I have no idea what the question is looking for;\nthere are hundreds, if not thousands, of J2EE applications that use\nPostgreSQL (OpenNMS, OpenReports, OpenBravo, JOPE,....). And likely\n100% of those connect via JDBC with entirely standard and fairly\ndatabase-agnostic code.\n\nAnd I think a \"performance test\" from a J2EE application is just as\nlikely to measure the performance of the particular J2EE platform [and\nconfiguration] as it is to tell you much about PostgreSQL performance.\nFor performance testing (of the JDBC driver/connection) it would be\nbetter to connect from a simple Java application (without any of the app\nserver overhead), otherwise you have no idea who/what you are measuring\n[unless you're a J2EE guru].\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Sun, 17 Feb 2008 11:22:21 -0500", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Example web access to Postgres DB" } ]
[ { "msg_contents": "Hi,\n\n I want to disable Write Ahead Log (WAL) completely because\nof following reasons,\n\n \n\n1.\tI am running Linux on the Compact Flash, which has limited\nmemory; I can't afford disk space of 32MB for pg_xlog folder. (\ncheckpoints_segments = 1)\n\n \n\n2.\tCF has own limitation with respect to number of writes (I/O\noperation).\n\n \n\n3.\tAnd, Our Database is Small and embedded as part of system (along\nwith web-server, drivers). We may not require data recovery from the\npg_xlog folder. It is not an enterprise application.\n\n \n\nPlease give your inputs, to resolve this issue..\n\n \n\nThanks,\n\nJeeva...\n\n\n\n\n\n\n\n\n\n\nHi,\n            I\nwant to disable Write Ahead Log (WAL) completely because of following reasons,\n \n\nI am running Linux on the\n Compact Flash, which has limited memory; I can’t afford disk space of\n 32MB for pg_xlog folder. ( checkpoints_segments = 1)\n\n \n\nCF has own limitation with\n respect to number of writes (I/O operation).\n\n \n\nAnd, Our Database is Small and embedded\n as part of system (along with web-server, drivers). We may not require data\n recovery from the pg_xlog folder. It is not an enterprise application.\n\n \nPlease give your inputs, to resolve this issue..\n \nThanks,\nJeeva…", "msg_date": "Mon, 18 Feb 2008 14:41:50 +0530", "msg_from": "\"Kathirvel, Jeevanandam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Disable WAL completely" }, { "msg_contents": "On Mon, Feb 18, 2008 at 02:41:50PM +0530, Kathirvel, Jeevanandam wrote:\n> I want to disable Write Ahead Log (WAL) completely because\n> of following reasons,\n\nbasically, you can't disable it.\n\nregards,\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Mon, 18 Feb 2008 10:19:40 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable WAL completely" }, { "msg_contents": "am Mon, dem 18.02.2008, um 14:41:50 +0530 mailte Kathirvel, Jeevanandam folgendes:\n> Hi,\n> \n> I want to disable Write Ahead Log (WAL) completely because of\n> \n> \n> Please give your inputs, to resolve this issue..\n\n\nChange the destination for this log to /dev/null\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 18 Feb 2008 10:24:48 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable WAL completely" }, { "msg_contents": "Hi Depesz,\n\n\tIs there way to minimize the I/O operation on disk/CF.\n\n\tCan I create RAM file system and point the pg_xlog files to RAM\nlocation instead of CF. whether this will work?\n\nRegards,\nJeeva\n\n-----Original Message-----\nFrom: hubert depesz lubaczewski [mailto:[email protected]] \nSent: Monday, February 18, 2008 2:50 PM\nTo: Kathirvel, Jeevanandam\nCc: [email protected]\nSubject: Re: [PERFORM] Disable WAL completely\n\nOn Mon, Feb 18, 2008 at 02:41:50PM +0530, Kathirvel, Jeevanandam wrote:\n> I want to disable Write Ahead Log (WAL) completely because\n> of following reasons,\n\nbasically, you can't disable it.\n\nregards,\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. 
here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Mon, 18 Feb 2008 15:00:47 +0530", "msg_from": "\"Kathirvel, Jeevanandam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disable WAL completely" }, { "msg_contents": "On Mon, Feb 18, 2008 at 03:00:47PM +0530, Kathirvel, Jeevanandam wrote:\n> \tIs there way to minimize the I/O operation on disk/CF.\n> \tCan I create RAM file system and point the pg_xlog files to RAM\n> location instead of CF. whether this will work?\n\nit will, but in case you'll lost power you will also (most probably)\nloose your database.\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Mon, 18 Feb 2008 10:32:40 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable WAL completely" }, { "msg_contents": "\nOn Feb 18, 2008, at 3:32 AM, hubert depesz lubaczewski wrote:\n\n> On Mon, Feb 18, 2008 at 03:00:47PM +0530, Kathirvel, Jeevanandam \n> wrote:\n>> \tIs there way to minimize the I/O operation on disk/CF.\n>> \tCan I create RAM file system and point the pg_xlog files to RAM\n>> location instead of CF. whether this will work?\n>\n> it will, but in case you'll lost power you will also (most probably)\n> loose your database.\n\nRight. Without the xlog directory you'll have very little chance of \never doing any kind of clean stop/start of your database. If you \ndon't need the reliability offered by Postgres's use of transaction \nlogs you'll probably be much better served with a different database \nor even a completely different storage scheme than trying to make \nPostgres fit that bill.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Mon, 18 Feb 2008 09:34:44 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable WAL completely" }, { "msg_contents": "[Erik Jones]\n> Right. Without the xlog directory you'll have very little chance of \n> ever doing any kind of clean stop/start of your database. If you \n> don't need the reliability offered by Postgres's use of transaction \n> logs you'll probably be much better served with a different database \n> or even a completely different storage scheme than trying to make \n> Postgres fit that bill.\n\nWe actually have some postgres databases that are read-only, others that\ncan be rebuilt by a script or from some old backup, and yet others that\ncan be wiped completely without ill effects ... and others where we\nwould prefer to keep all the data, but it would be no disaster if we\nlose some. 
Maybe we would be better off not using postgres for those\npurposes, but it's oh so much easier for us to stick to one database\nsystem ;-)\n\nWe've considered both running postgres from a ram-disk and to have the\nfsync turned off for some of our databases, but right now we're running\nall off one host, fsync didn't reduce the performance that much, and\nafter one disasterous power failure we considered that it was not worth\nthe effort to have fsync turned off.\n\nThat being said, postgres is probably not an optimal solution for an\nembedded system running on flash memory ;-)\n\n", "msg_date": "Mon, 18 Feb 2008 17:49:07 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable WAL completely" }, { "msg_contents": "On Mon, 18 Feb 2008, Tobias Brox wrote:\n> We actually have some postgres databases that are read-only, others that\n> can be rebuilt by a script or from some old backup, and yet others that\n> can be wiped completely without ill effects ... and others where we\n> would prefer to keep all the data, but it would be no disaster if we\n> lose some.\n\nIf there's not much write traffic, the WAL won't be used much anyway. \nIf you really don't care much about the integrity, then the best option is \nprobably to put the WAL on ramfs.\n\nHaving said that, flash is cheaper than RAM. Why not just get a bigger \nflash device? The \"too many writes wear it out\" argument is mostly not \ntrue nowadays anyway.\n\nMatthew\n\n-- \nDon't worry! The world can't end today because it's already tomorrow\nin Australia.\n", "msg_date": "Tue, 19 Feb 2008 14:48:55 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable WAL completely" }, { "msg_contents": "On Tue, Feb 19, 2008 at 02:48:55PM +0000, Matthew wrote:\n> If there's not much write traffic, the WAL won't be used much anyway. \n\nYou still have checkpoints.\n\n> If you really don't care much about the integrity, then the best option is \n> probably to put the WAL on ramfs.\n\nUm, that will cause the WAL to go away in the event of device crash. Surely\nthat's a bad thing?\n\nA\n\n", "msg_date": "Tue, 19 Feb 2008 10:48:01 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disable WAL completely" } ]
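WAL itself cannot be switched off, as the replies above say, but for a similar embedded setup the usual write-reduction knobs look roughly like this (a hedged sketch; every one of these trades crash safety for fewer writes to the flash device, and the values are illustrative):

# postgresql.conf -- accepts data loss on power failure, as discussed in the thread
fsync = off                 # no forced flushes to the CF card
checkpoint_segments = 1     # keeps pg_xlog at its minimum size
checkpoint_timeout = 3600   # seconds; fewer checkpoints, fewer bursts of writes

# moving pg_xlog onto a RAM filesystem is the other option mentioned above,
# e.g. stop the server, then:
#   mv $PGDATA/pg_xlog /ramdisk/pg_xlog && ln -s /ramdisk/pg_xlog $PGDATA/pg_xlog
# with the understanding that the cluster is unrecoverable after a power loss.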
[ { "msg_contents": "Is there a way I can change where postgres writes it temporary files? My\ndata directory is on a slow array, but we also have a fast array. I'm\nlooking to get all the temp file creation onto the fast array.", "msg_date": "Mon, 18 Feb 2008 10:27:39 -0500", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Controling where temporary files are created" }, { "msg_contents": "Since 8.3 there's temp_tablespaces configuration parameter.\n\nA Dilluns 18 Febrer 2008 16:27, Nikolas Everett va escriure:\n> Is there a way I can change where postgres writes it temporary files? My\n> data directory is on a slow array, but we also have a fast array. I'm\n> looking to get all the temp file creation onto the fast array.\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n", "msg_date": "Mon, 18 Feb 2008 16:50:41 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Controling where temporary files are created" } ]
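A sketch of the 8.3 temp_tablespaces approach from the reply above; the tablespace name and path are invented for illustration, and the directory must already exist, be empty, and be owned by the postgres OS user:

-- once, as a superuser:
CREATE TABLESPACE fast_temp LOCATION '/fast_array/pgtemp';
-- non-superusers may also need: GRANT CREATE ON TABLESPACE fast_temp TO PUBLIC;

-- per session:
SET temp_tablespaces = 'fast_temp';
-- or for every session, in postgresql.conf:
--   temp_tablespaces = 'fast_temp'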
[ { "msg_contents": "Hi,\n\nThis occurs on postgresql 8.2.5.\n\nI'm a bit at loss with the plan chosen for a query :\n\nThe query is this one :\n\nSELECT SULY_SAOEN.SAOEN_ID, SULY_SDCEN.SDCEN_REF, SULY_SDCEN.SDCEN_LIB, CSTD_UTI.UTI_NOM, CSTD_UTI.UTI_LIBC, SULY_SAOEN.SAOEN_DTDERNENVOI,\n SULY_SDCEN.SDCEN_DTLIMAP, SULY_PFOUR.PFOUR_RAISON, SULY_SDCEN.PGTC_CODE\nFROM SULY_SDCEN\ninner join SULY_SDDEN on (SULY_SDCEN.SDCEN_ID=SULY_SDDEN.SDCEN_ID)\ninner join SULY_SAOEN on (SULY_SAOEN.SDDEN_ID=SULY_SDDEN.SDDEN_ID)\ninner join CSTD_UTI on (CSTD_UTI.UTI_CODE=SULY_SDDEN.SDDEN_RESPPROS)\ninner join SULY_PFOUR on (SULY_PFOUR.PFOUR_ID=SULY_SAOEN.PFOUR_ID)\nWHERE SULY_SDCEN.PGTC_CODE = '403' AND SULY_SDDEN.PBURE_ID IN (400001)\nAND SULY_SAOEN.SAOEN_ID IN\n (\n SELECT TmpAoen.SAOEN_ID\n FROM SULY_SAOPR TmpAopr\n LEFT JOIN SULY_SOFPR TmpOfpr ON (TmpOfpr.SAOPR_ID = TmpAopr.SAOPR_ID),SULY_SAOEN TmpAoen\n WHERE TmpAopr.SAOEN_ID= TmpAoen.SAOEN_ID AND (SOFPR_DEMCOMP = 1 OR (SAOPR_DTENV IS NOT NULL AND SAOPR_DTREPONSE IS NULL))\n )\n\n\nThe plan I get is :\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=65.91..2395.16 rows=6 width=142) (actual time=696.212..2566.996 rows=2 loops=1)\n -> Nested Loop IN Join (cost=65.91..2391.95 rows=6 width=124) (actual time=696.189..2566.957 rows=2 loops=1)\n Join Filter: (suly_saoen.saoen_id = tmpaopr.saoen_id)\n -> Nested Loop (cost=10.84..34.21 rows=6 width=124) (actual time=0.233..0.617 rows=30 loops=1)\n -> Nested Loop (cost=10.84..29.00 rows=2 width=108) (actual time=0.223..0.419 rows=2 loops=1)\n -> Hash Join (cost=10.84..24.44 rows=2 width=87) (actual time=0.207..0.372 rows=2 loops=1)\n Hash Cond: (suly_sdden.sdcen_id = suly_sdcen.sdcen_id)\n -> Seq Scan on suly_sdden (cost=0.00..13.36 rows=58 width=27) (actual time=0.012..0.163 rows=58 loops=1)\n Filter: (pbure_id = 400001)\n -> Hash (cost=10.74..10.74 rows=8 width=76) (actual time=0.129..0.129 rows=8 loops=1)\n -> Seq Scan on suly_sdcen (cost=0.00..10.74 rows=8 width=76) (actual time=0.017..0.113 rows=8 loops=1)\n Filter: ((pgtc_code)::text = '403'::text)\n -> Index Scan using pk_cstd_uti on cstd_uti (cost=0.00..2.27 rows=1 width=42) (actual time=0.015..0.017 rows=1 loops=2)\n Index Cond: ((cstd_uti.uti_code)::text = (suly_sdden.sdden_resppros)::text)\n -> Index Scan using ass_saoen_sdden_fk on suly_saoen (cost=0.00..2.54 rows=5 width=32) (actual time=0.007..0.049 rows=15 loops=2)\n Index Cond: (suly_saoen.sdden_id = suly_sdden.sdden_id)\n -> Hash Join (cost=55.07..2629.62 rows=8952 width=16) (actual time=0.119..82.680 rows=3202 loops=30)\n Hash Cond: (tmpaopr.saoen_id = tmpaoen.saoen_id)\n -> Merge Left Join (cost=0.00..2451.46 rows=8952 width=8) (actual time=0.027..76.229 rows=3202 loops=30)\n Merge Cond: (tmpaopr.saopr_id = tmpofpr.saopr_id)\n Filter: ((tmpofpr.sofpr_demcomp = 1::numeric) OR ((tmpaopr.saopr_dtenv IS NOT NULL) AND (tmpaopr.saopr_dtreponse IS NULL)))\n -> Index Scan using pk_suly_saopr on suly_saopr tmpaopr (cost=0.00..1193.49 rows=15412 width=32) (actual time=0.012..19.431 rows=14401 loops=30)\n -> Index Scan using ass_saopr_sofpr_fk on suly_sofpr tmpofpr (cost=0.00..998.90 rows=14718 width=16) (actual time=0.010..18.377 rows=13752 loops=30)\n -> Hash (cost=38.92..38.92 rows=1292 width=8) (actual time=2.654..2.654 rows=1292 loops=1)\n -> Seq Scan on suly_saoen tmpaoen (cost=0.00..38.92 rows=1292 width=8) (actual time=0.006..1.322 rows=1292 
loops=1)\n -> Index Scan using pk_suly_pfour on suly_pfour (cost=0.00..0.52 rows=1 width=34) (actual time=0.010..0.011 rows=1 loops=2)\n Index Cond: (suly_pfour.pfour_id = suly_saoen.pfour_id)\n Total runtime: 2567.225 ms\n(28 lignes)\n\n\nWhat I don't understand is the Nested Loop IN. If I understand correctly, the consequence is that the \nbottom part (hash joins) is done 30 times ? Why not just once ?\n\nIf I remove SULY_SDCEN.PGTC_CODE = '403', the query becomes 25 times faster.\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=2766.40..2879.44 rows=175 width=142) (actual time=121.927..123.996 rows=120 loops=1)\n -> Hash Join (cost=2766.40..2785.92 rows=175 width=124) (actual time=121.881..122.830 rows=120 loops=1)\n Hash Cond: (tmpaopr.saoen_id = suly_saoen.saoen_id)\n -> HashAggregate (cost=2652.00..2664.92 rows=1292 width=16) (actual time=114.968..115.306 rows=351 loops=1)\n -> Hash Join (cost=55.07..2629.62 rows=8952 width=16) (actual time=2.694..111.293 rows=3424 loops=1)\n Hash Cond: (tmpaopr.saoen_id = tmpaoen.saoen_id)\n -> Merge Left Join (cost=0.00..2451.46 rows=8952 width=8) (actual time=0.038..101.836 rows=3424 loops=1)\n Merge Cond: (tmpaopr.saopr_id = tmpofpr.saopr_id)\n Filter: ((tmpofpr.sofpr_demcomp = 1::numeric) OR ((tmpaopr.saopr_dtenv IS NOT NULL) AND (tmpaopr.saopr_dtreponse IS NULL)))\n -> Index Scan using pk_suly_saopr on suly_saopr tmpaopr (cost=0.00..1193.49 rows=15412 width=32) (actual time=0.016..30.360 rows=15412 loops=1)\n -> Index Scan using ass_saopr_sofpr_fk on suly_sofpr tmpofpr (cost=0.00..998.90 rows=14718 width=16) (actual time=0.012..29.359 rows=14717 loops=1)\n -> Hash (cost=38.92..38.92 rows=1292 width=8) (actual time=2.630..2.630 rows=1292 loops=1)\n -> Seq Scan on suly_saoen tmpaoen (cost=0.00..38.92 rows=1292 width=8) (actual time=0.005..1.290 rows=1292 loops=1)\n -> Hash (cost=112.21..112.21 rows=175 width=124) (actual time=6.892..6.892 rows=287 loops=1)\n -> Hash Join (cost=66.70..112.21 rows=175 width=124) (actual time=3.557..6.413 rows=287 loops=1)\n Hash Cond: (suly_saoen.sdden_id = suly_sdden.sdden_id)\n -> Seq Scan on suly_saoen (cost=0.00..38.92 rows=1292 width=32) (actual time=0.010..1.272 rows=1292 loops=1)\n -> Hash (cost=65.97..65.97 rows=58 width=108) (actual time=3.386..3.386 rows=58 loops=1)\n -> Hash Join (cost=51.02..65.97 rows=58 width=108) (actual time=2.816..3.300 rows=58 loops=1)\n Hash Cond: (suly_sdden.sdcen_id = suly_sdcen.sdcen_id)\n -> Hash Join (cost=38.09..52.25 rows=58 width=48) (actual time=2.132..2.488 rows=58 loops=1)\n Hash Cond: ((suly_sdden.sdden_resppros)::text = (cstd_uti.uti_code)::text)\n -> Seq Scan on suly_sdden (cost=0.00..13.36 rows=58 width=27) (actual time=0.021..0.203 rows=58 loops=1)\n Filter: (pbure_id = 400001)\n -> Hash (cost=28.04..28.04 rows=804 width=42) (actual time=2.092..2.092 rows=804 loops=1)\n -> Seq Scan on cstd_uti (cost=0.00..28.04 rows=804 width=42) (actual time=0.012..1.075 rows=804 loops=1)\n -> Hash (cost=10.19..10.19 rows=219 width=76) (actual time=0.670..0.670 rows=219 loops=1)\n -> Seq Scan on suly_sdcen (cost=0.00..10.19 rows=219 width=76) (actual time=0.027..0.370 rows=219 loops=1)\n -> Index Scan using pk_suly_pfour on suly_pfour (cost=0.00..0.52 rows=1 width=34) (actual time=0.005..0.006 rows=1 loops=120)\n Index Cond: (suly_pfour.pfour_id = suly_saoen.pfour_id)\n Total runtime: 124.398 ms\n\n\n\nI see that there is an 
estimation error on \n\"Nested Loop (cost=10.84..34.21 rows=6 width=124) (actual time=0.233..0.617 rows=30 loops=1)\"\nand that the costs of both queries are very close ...\n\nBut I don't see a good solution. Does anybody have advice on this one?\n\nThanks a lot for your help.\n", "msg_date": "Tue, 19 Feb 2008 16:27:58 +0100", "msg_from": "Cousin Marc <[email protected]>", "msg_from_op": true, "msg_subject": "strange plan choice" } ]
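Not advice taken from the thread itself, but given the misestimate called out above (6 rows expected vs. 30 actual on the outer nested loop), one low-risk first step is to raise the statistics targets on the filtered columns and re-analyze; the target value of 200 is only a guess:

ALTER TABLE suly_sdcen ALTER COLUMN pgtc_code SET STATISTICS 200;
ALTER TABLE suly_sdden ALTER COLUMN pbure_id SET STATISTICS 200;
ANALYZE suly_sdcen;
ANALYZE suly_sdden;

If the estimates cannot be improved, rewriting the IN (SELECT ...) as a correlated EXISTS (or vice versa) is the other thing commonly tried for this shape of plan, since the two forms are planned quite differently on 8.2.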
[ { "msg_contents": "I spent a whopping seven hours restoring a database late Fri nite for a \nclient. We stopped the application, ran pg_dump -v -Ft -b -o $db > \n~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to 8.3. I then \ndid a pg_restore -v -d $db ./pre_8.3.tar and watched it positively crawl.\nI'll grant you that it's a 5.1G tar file, but 7 hours seems excessive. \n\nIs that kind of timeframe 'abnormal' or am I just impatient? :) If the former, \nI can provide whatever you need, just ask for it. \nThanks!\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nI've been dying to hit something since I pressed \"1\" to join your conference.\n", "msg_date": "Tue, 19 Feb 2008 13:03:58 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "7 hrs for a pg_restore?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Tue, 19 Feb 2008 13:03:58 -0500\r\nDouglas J Hunley <[email protected]> wrote:\r\n\r\n> I spent a whopping seven hours restoring a database late Fri nite for\r\n> a client. We stopped the application, ran pg_dump -v -Ft -b -o $db > \r\n> ~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to\r\n> 8.3. I then did a pg_restore -v -d $db ./pre_8.3.tar and watched it\r\n> positively crawl. I'll grant you that it's a 5.1G tar file, but 7\r\n> hours seems excessive. \r\n> \r\n> Is that kind of timeframe 'abnormal' or am I just impatient? :) If\r\n> the former, I can provide whatever you need, just ask for it. \r\n> Thanks!\r\n\r\n7 hours for 5.1 G is excessive. It took me 11 hours to do 220G :). It\r\nwould be helpful if we knew what the machine was doing. Was it IO\r\nbound? How much ram does it have? Is it just a single HD drive? What\r\nare your settings for postgresql?\r\n\r\nJoshua D. Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHuxwoATb/zqfZUUQRAjNzAJ9FYBIdEpytIWHtvuqC2L0Phah9EwCfdGrZ\r\nkY1wItUqdtJ127ZA1Wl+95s=\r\n=vvm+\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Tue, 19 Feb 2008 10:12:54 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "Douglas J Hunley wrote:\n> I spent a whopping seven hours restoring a database late Fri nite for a \n> client. We stopped the application, ran pg_dump -v -Ft -b -o $db > \n> ~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to 8.3. I then \n> did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively crawl.\n> I'll grant you that it's a 5.1G tar file, but 7 hours seems excessive. \n\nDepends, both on the machine and the database.\n\nWhat sort of disk i/o are you seeing, what's the cpu(s) doing, and \nwhat's the restore taking so long over (since you have -v)?\n\nOh, and have you tweaked the configuration settings for the restore? \nLots of work_mem, turn fsync off, that sort of thing.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Feb 2008 18:13:37 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" 
}, { "msg_contents": "\nOn 19-Feb-08, at 1:12 PM, Joshua D. Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Tue, 19 Feb 2008 13:03:58 -0500\n> Douglas J Hunley <[email protected]> wrote:\n>\n>> I spent a whopping seven hours restoring a database late Fri nite for\n>> a client. We stopped the application, ran pg_dump -v -Ft -b -o $db >\n>> ~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to\n>> 8.3. I then did a pg_restore -v -d $db ./pre_8.3.tar and watched it\n>> positively crawl. I'll grant you that it's a 5.1G tar file, but 7\n>> hours seems excessive.\n>>\n>> Is that kind of timeframe 'abnormal' or am I just impatient? :) If\n>> the former, I can provide whatever you need, just ask for it.\n>> Thanks!\n>\n> 7 hours for 5.1 G is excessive. It took me 11 hours to do 220G :). It\n> would be helpful if we knew what the machine was doing. Was it IO\n> bound? How much ram does it have? Is it just a single HD drive? What\n> are your settings for postgresql?\n>\nYeah, I did a 9G in about 20min. Did you optimize the new one ?\n> Joshua D. Drake\n>\n>\n> - --\n> The PostgreSQL Company since 1997: http://www.commandprompt.com/\n> PostgreSQL Community Conference: http://www.postgresqlconference.org/\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n> PostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n>\n> iD8DBQFHuxwoATb/zqfZUUQRAjNzAJ9FYBIdEpytIWHtvuqC2L0Phah9EwCfdGrZ\n> kY1wItUqdtJ127ZA1Wl+95s=\n> =vvm+\n> -----END PGP SIGNATURE-----\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n", "msg_date": "Tue, 19 Feb 2008 13:15:10 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Douglas J Hunley wrote:\n>> I spent a whopping seven hours restoring a database late Fri nite for a \n\n> Oh, and have you tweaked the configuration settings for the restore? \n> Lots of work_mem, turn fsync off, that sort of thing.\n\nmaintenance_work_mem, to be more specific. If that's too small it will\ndefinitely cripple restore speed. I'm not sure fsync would make much\ndifference, but checkpoint_segments would. See\nhttp://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-DUMP\n\nAlso: why did you choose -o ... was there a real need to? I can see\nthat being pretty expensive.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Feb 2008 13:22:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "On Tue, 2008-02-19 at 13:03 -0500, Douglas J Hunley wrote:\n> I spent a whopping seven hours restoring a database late Fri nite for a \n> client. We stopped the application, ran pg_dump -v -Ft -b -o $db > \n> ~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to 8.3. I then \n> did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively crawl.\n> I'll grant you that it's a 5.1G tar file, but 7 hours seems excessive. \n> \n\nAre there lots of indexes on localized text attributes? If you have a\nbig table with localized text (e.g. en_US.UTF-8), it can take a long\ntime to build the indexes. 
If the file is 5GB compressed, I wouldn't be\nsurprised if it took a long time to restore.\n\nKeep in mind, if you have several GB worth of indexes, they take up\nbasically no space in the logical dump (just the \"CREATE INDEX\" command,\nand that's it). But they can take a lot of processor time to build up\nagain, especially with localized text.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Tue, 19 Feb 2008 10:23:23 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 13:12:54 Joshua D. Drake wrote:\n> > I spent a whopping seven hours restoring a database late Fri nite for\n> > a client. We stopped the application, ran pg_dump -v -Ft -b -o $db >\n> > ~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to\n> > 8.3. I then did a pg_restore -v -d $db ./pre_8.3.tar and watched it\n> > positively crawl. I'll grant you that it's a 5.1G tar file, but 7\n> > hours seems excessive.\n> >\n> > Is that kind of timeframe 'abnormal' or am I just impatient? :) If\n> > the former, I can provide whatever you need, just ask for it.\n> > Thanks!\n>\n> 7 hours for 5.1 G is excessive. It took me 11 hours to do 220G :). It\n> would be helpful if we knew what the machine was doing. Was it IO\n> bound? How much ram does it have? Is it just a single HD drive? What\n> are your settings for postgresql?\n\nIt wasn't doing anything but the restore. Dedicated DB box\n\npostgresql.conf attached\n\nsystem specs:\nIntel(R) Xeon(TM) CPU 3.40GHz (dual, so shows 4 in Linux)\n\nMemTotal: 8245524 kB\n\nThe db resides on a HP Modular Storage Array 500 G2. 4x72.8Gb 15k rpm disks. 1 \nraid 6 logical volume. Compaq Smart Array 6404 controller\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nWe do nothing *FOR* users. We do things *TO* users. It's a fine distinction, \nbut an important one all the same.", "msg_date": "Tue, 19 Feb 2008 13:58:38 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 13:13:37 Richard Huxton wrote:\n> Douglas J Hunley wrote:\n> > I spent a whopping seven hours restoring a database late Fri nite for a\n> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >\n> > ~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to 8.3. I\n> > then did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively\n> > crawl. I'll grant you that it's a 5.1G tar file, but 7 hours seems\n> > excessive.\n>\n> Depends, both on the machine and the database.\n>\n> What sort of disk i/o are you seeing, what's the cpu(s) doing, and\n> what's the restore taking so long over (since you have -v)?\n\nThe I/O didn't seem abnormal to me for this customer, so I didn't record it. \nIt wasn't excessive though. It took the longest on a couple of our highest \nvolume tables. By far index creation took the longest of the entire process\n\n>\n> Oh, and have you tweaked the configuration settings for the restore?\n> Lots of work_mem, turn fsync off, that sort of thing.\n\nI didn't tweak anything for the restore specifically. Used the postgresql.conf \nas attached in another reply\n\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nOne item could not be deleted because it was missing. 
-- Mac System 7.0b1 \nerror message\n", "msg_date": "Tue, 19 Feb 2008 14:00:56 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 13:22:58 Tom Lane wrote:\n> Richard Huxton <[email protected]> writes:\n> > Douglas J Hunley wrote:\n> >> I spent a whopping seven hours restoring a database late Fri nite for a\n> >\n> > Oh, and have you tweaked the configuration settings for the restore?\n> > Lots of work_mem, turn fsync off, that sort of thing.\n>\n> maintenance_work_mem, to be more specific. If that's too small it will\n> definitely cripple restore speed. I'm not sure fsync would make much\n> difference, but checkpoint_segments would. See\n> http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-DUMP\n\nfrom the postgresql.conf i posted:\n~ $ grep maint postgresql.conf \nmaintenance_work_mem = 256MB # min 1MB\n\nthx for the pointer to the URL. I've made note of the recommendations therein \nfor next time.\n\n>\n> Also: why did you choose -o ... was there a real need to? I can see\n> that being pretty expensive.\n>\n\nI was under the impression our application made reference to OIDs. I'm now \ndoubting that heavily <g> and am seeking confirmation.\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nI've got trouble with the wife again - she came into the bar looking for me \nand I asked her for her number.\n", "msg_date": "Tue, 19 Feb 2008 14:08:23 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 13:23:23 Jeff Davis wrote:\n> On Tue, 2008-02-19 at 13:03 -0500, Douglas J Hunley wrote:\n> > I spent a whopping seven hours restoring a database late Fri nite for a\n> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >\n> > ~/pre_8.3.tar on the 8.2.x db, and then upgrading the software to 8.3. I\n> > then did a pg_restore -v -d $db ./pre_8.3.tar and watched it positively\n> > crawl. I'll grant you that it's a 5.1G tar file, but 7 hours seems\n> > excessive.\n>\n> Are there lots of indexes on localized text attributes? If you have a\n> big table with localized text (e.g. en_US.UTF-8), it can take a long\n> time to build the indexes. If the file is 5GB compressed, I wouldn't be\n> surprised if it took a long time to restore.\n>\n> Keep in mind, if you have several GB worth of indexes, they take up\n> basically no space in the logical dump (just the \"CREATE INDEX\" command,\n> and that's it). But they can take a lot of processor time to build up\n> again, especially with localized text.\n>\n\nthat could be a factor here. It is a UNICODE db, and we do a lot of text-based \nindexing for the application\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nBe courteous to everyone, friendly to no one.\n", "msg_date": "Tue, 19 Feb 2008 14:20:11 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "shared buffers is *way* too small as is effective cache\nset them to 2G/6G respectively.\n\nDave\n\n", "msg_date": "Tue, 19 Feb 2008 14:28:54 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" 
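Spelled out as postgresql.conf entries, that suggestion for an 8GB box would look something like the following (illustrative values; shared_buffers only takes effect after a server restart, while effective_cache_size is just a planner hint and allocates nothing):

    shared_buffers       = 2GB
    effective_cache_size = 6GB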
}, { "msg_contents": "On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:\n> shared buffers is *way* too small as is effective cache\n> set them to 2G/6G respectively.\n>\n> Dave\n\npardon my ignorance, but is this in the context of a restore only? or 'in \ngeneral'?\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nDon't let Kirk show you what he affectionately calls the \"Captain's Log\"\n", "msg_date": "Tue, 19 Feb 2008 14:35:58 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tue, 2008-02-19 at 14:20 -0500, Douglas J Hunley wrote:\n> > Keep in mind, if you have several GB worth of indexes, they take up\n> > basically no space in the logical dump (just the \"CREATE INDEX\" command,\n> > and that's it). But they can take a lot of processor time to build up\n> > again, especially with localized text.\n> >\n> \n> that could be a factor here. It is a UNICODE db, and we do a lot of text-based \n> indexing for the application\n\nI assume you're _not_ talking about full text indexes here.\n\nThese factors:\n* unicode (i.e. non-C locale)\n* low I/O utilization\n* indexes taking up most of the 7 hours\n\nmean that we've probably found the problem.\n\nLocalized text uses sorting rules that are not the same as binary sort\norder, and it takes much more CPU power to do the comparisons, and sorts\nare already processor-intensive operations.\n\nUnfortunately postgresql does not parallelize this sorting/indexing at\nall, so you're only using one core.\n\nI'd recommend restoring everything except the indexes, and then you can\nrestore the indexes concurrently in several different sessions so that\nit uses all of your cores. Build your primary key/unique indexes first,\nand then after those are built you can start using the database while\nthe rest of the indexes are building (use \"CREATE INDEX CONCURRENTLY\"). \n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Tue, 19 Feb 2008 11:46:07 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tue, 2008-02-19 at 14:28 -0500, Dave Cramer wrote:\n> shared buffers is *way* too small as is effective cache\n> set them to 2G/6G respectively.\n\nThey are way too small, but I don't think that explains the index\ncreation time.\n\nEffective_cache_size is only used by the planner, and this problem is\nnot caused by a poorly chosen plan.\n\nIt's important to set shared_buffers higher as well, but he has so much\nRAM compared with his dataset that he's certainly not going to disk. I\ndon't think this explains it either. \n\nI think it's just the result of building a lot of indexes on localized\ntext using only one core at a time.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Tue, 19 Feb 2008 11:51:19 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "\nOn Feb 19, 2008, at 1:22 PM, Tom Lane wrote:\n\n>\n> maintenance_work_mem, to be more specific. If that's too small it \n> will\n> definitely cripple restore speed. I'm not sure fsync would make much\n> difference, but checkpoint_segments would. 
See\n> http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG- \n> DUMP\n>\n\nI wonder if it would be worthwhile if pg_restore could emit a warning \nif maint_work_mem is \"low\" (start flamewar on what \"low\" is).\n\nAnd as an addition to that - allow a cmd line arg to have pg_restore \nbump it before doing its work? On several occasions I was moving a \nlargish table and the COPY part went plenty fast, but when it hit \nindex creation it slowed down to a crawl due to low maint_work_mem..\n\n--\nJeff Trout <[email protected]>\nwww.dellsmartexitin.com\nwww.stuarthamm.net\n\n\n\n\n\n", "msg_date": "Tue, 19 Feb 2008 15:07:30 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "\nOn 19-Feb-08, at 2:35 PM, Douglas J Hunley wrote:\n\n> On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:\n>> shared buffers is *way* too small as is effective cache\n>> set them to 2G/6G respectively.\n>>\n>> Dave\n>\n> pardon my ignorance, but is this in the context of a restore only? \n> or 'in\n> general'?\n\nThis is the \"generally accepted\" starting point for a pg db for \nproduction.\n\n>\n>\n> -- \n> Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\n> http://doug.hunley.homeip.net\n>\n> Don't let Kirk show you what he affectionately calls the \"Captain's \n> Log\"\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Tue, 19 Feb 2008 15:16:42 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 15:16:42 Dave Cramer wrote:\n> On 19-Feb-08, at 2:35 PM, Douglas J Hunley wrote:\n> > On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:\n> >> shared buffers is *way* too small as is effective cache\n> >> set them to 2G/6G respectively.\n> >>\n> >> Dave\n> >\n> > pardon my ignorance, but is this in the context of a restore only?  \n> > or 'in\n> > general'?\n>\n> This is the \"generally accepted\" starting point for a pg db for  \n> production.\n\nfair enough. I have scheduled this change for the next outage\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\n\"The internet has had no impact on my life whatsoever.com\" - anon\n", "msg_date": "Tue, 19 Feb 2008 15:20:33 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 15:07:30 Jeff wrote:\n> On Feb 19, 2008, at 1:22 PM, Tom Lane wrote:\n> > maintenance_work_mem, to be more specific. If that's too small it\n> > will\n> > definitely cripple restore speed. I'm not sure fsync would make much\n> > difference, but checkpoint_segments would. See\n> > http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-\n> > DUMP\n>\n> I wonder if it would be worthwhile if pg_restore could emit a warning\n> if maint_work_mem is \"low\" (start flamewar on what \"low\" is).\n>\n> And as an addition to that - allow a cmd line arg to have pg_restore\n> bump it before doing its work? 
On several occasions I was moving a\n> largish table and the COPY part went plenty fast, but when it hit\n> index creation it slowed down to a crawl due to low maint_work_mem..\n\nfwiw, I +1 this\n\nnow that I have a (minor) understanding of what's going on, I'd love to do \nsomething like:\npg_restore -WM $large_value <normal options>\n\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nThere are no dead students here. This week.\n", "msg_date": "Tue, 19 Feb 2008 15:55:43 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Feb 19, 2008, at 2:55 PM, Douglas J Hunley wrote:\n\n> On Tuesday 19 February 2008 15:07:30 Jeff wrote:\n>> On Feb 19, 2008, at 1:22 PM, Tom Lane wrote:\n>>> maintenance_work_mem, to be more specific. If that's too small it\n>>> will\n>>> definitely cripple restore speed. I'm not sure fsync would make \n>>> much\n>>> difference, but checkpoint_segments would. See\n>>> http://www.postgresql.org/docs/8.3/static/populate.html#POPULATE-PG-\n>>> DUMP\n>>\n>> I wonder if it would be worthwhile if pg_restore could emit a warning\n>> if maint_work_mem is \"low\" (start flamewar on what \"low\" is).\n>>\n>> And as an addition to that - allow a cmd line arg to have pg_restore\n>> bump it before doing its work? On several occasions I was moving a\n>> largish table and the COPY part went plenty fast, but when it hit\n>> index creation it slowed down to a crawl due to low maint_work_mem..\n>\n> fwiw, I +1 this\n>\n> now that I have a (minor) understanding of what's going on, I'd \n> love to do\n> something like:\n> pg_restore -WM $large_value <normal options>\n\npg_restore is a postgres client app that uses libpq to connect and, \nthus, will pick up anything in your $PGOPTIONS env variable. So,\n\nPGOPTONS=\"-c maintenance_work_mem=512MB\" && pg_restore ....\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Tue, 19 Feb 2008 15:32:02 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tue, 19 Feb 2008, Douglas J Hunley wrote:\n\n> The db resides on a HP Modular Storage Array 500 G2. 4x72.8Gb 15k rpm disks. 1\n> raid 6 logical volume. Compaq Smart Array 6404 controller\n\nYou might consider doing some simple disk tests on the array just to prove \nit's working well. Reports here suggest the HP/Compaq arrays have been \nsomewhat inconsistant in performance, and it would be helpful to know if \nyou've got a good or a bad setup. Some hints here are at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 19 Feb 2008 17:53:45 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Feb 19, 2008 11:53 PM, Jeff Davis <[email protected]> wrote:\n>\n>\n> Keep in mind, if you have several GB worth of indexes, they take up\n> basically no space in the logical dump (just the \"CREATE INDEX\" command,\n> and that's it). 
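Before getting into the fuller tests on that page, a crude dd check of sequential throughput already tells you whether the array is in the right ballpark. The path below is a placeholder, and the file is sized at roughly twice RAM (about 16GB here) so the OS cache can't flatter the result:

    time dd if=/dev/zero of=/path/on/array/ddtest bs=8k count=2000000
    time dd if=/path/on/array/ddtest of=/dev/null bs=8k

The write number will still be a bit optimistic unless the file is synced to disk before the timer stops.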
But they can take a lot of processor time to build up\n> again, especially with localized text.\n>\n>\n\nI think it would be interesting if we can build these indexes in parallel.\nEach index build requires a seq scan on the table. If the table does\nnot fit in shared buffers, each index build would most likely result\nin lots of IO.\n\nOne option would be to add this facility to the backend so that multiple\nindexes can be built with a single seq scan of the table. In theory, it\nshould be possible, but might be tricky given the way index build works\n(it calls respective ambuild method to build the index which internally\ndoes the seq scan).\n\nOther option is to make pg_restore multi-threaded/processed. The\nsynchronized_scans facility would then synchronize the multiple heap\nscans. ISTM that if we can make pg_restore mult-processed, then\nwe can possibly add more parallelism to the restore process.\n\nMy two cents.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 20 Feb 2008 14:31:09 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Wed, 20 Feb 2008, Pavan Deolasee wrote:\n\n> Date: Wed, 20 Feb 2008 14:31:09 +0530\n> From: Pavan Deolasee <[email protected]>\n> To: Jeff Davis <[email protected]>\n> Cc: Douglas J Hunley <[email protected]>,\n> [email protected]\n> Subject: Re: [PERFORM] 7 hrs for a pg_restore?\n>\n> On Feb 19, 2008 11:53 PM, Jeff Davis <[email protected]> wrote:\n> >\n> >\n> > Keep in mind, if you have several GB worth of indexes, they take up\n> > basically no space in the logical dump (just the \"CREATE INDEX\" command,\n> > and that's it). But they can take a lot of processor time to build up\n> > again, especially with localized text.\n> >\n> >\n>\n> I think it would be interesting if we can build these indexes in parallel.\n> Each index build requires a seq scan on the table. If the table does\n> not fit in shared buffers, each index build would most likely result\n> in lots of IO.\n>\n> One option would be to add this facility to the backend so that multiple\n> indexes can be built with a single seq scan of the table. In theory, it\n> should be possible, but might be tricky given the way index build works\n> (it calls respective ambuild method to build the index which internally\n> does the seq scan).\n>\n> Other option is to make pg_restore multi-threaded/processed. The\n> synchronized_scans facility would then synchronize the multiple heap\n> scans. ISTM that if we can make pg_restore mult-processed, then\n> we can possibly add more parallelism to the restore process.\n>\n> My two cents.\n>\n> Thanks,\n> Pavan\n>\n>\nThat'd be great! Maybe an option to pg_restore to spawn AT MOST n\nprocesses (1 per CPU)\nmy .02 Euro\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n15, Chemin des Monges +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n", "msg_date": "Wed, 20 Feb 2008 12:29:59 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 16:32:02 Erik Jones wrote:\n> pg_restore is a postgres client app that uses libpq to connect and,  \n> thus, will pick up anything in your $PGOPTIONS env variable.  
So,\n>\n> PGOPTONS=\"-c maintenance_work_mem=512MB\" && pg_restore ....\n\nnow that's just plain cool\n\n/me updates our wiki\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nDrugs may lead to nowhere, but at least it's the scenic route.\n", "msg_date": "Wed, 20 Feb 2008 08:26:38 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Tuesday 19 February 2008 17:53:45 Greg Smith wrote:\n> On Tue, 19 Feb 2008, Douglas J Hunley wrote:\n> > The db resides on a HP Modular Storage Array 500 G2. 4x72.8Gb 15k rpm\n> > disks. 1 raid 6 logical volume. Compaq Smart Array 6404 controller\n>\n> You might consider doing some simple disk tests on the array just to prove\n> it's working well. Reports here suggest the HP/Compaq arrays have been\n> somewhat inconsistant in performance, and it would be helpful to know if\n> you've got a good or a bad setup. Some hints here are at\n> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\nexcellent! i'll look into doing this. thx!\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nIlliterate? Write for help!\n", "msg_date": "Wed, 20 Feb 2008 08:28:17 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "\"Douglas J Hunley\" <[email protected]> writes:\n\n> On Tuesday 19 February 2008 16:32:02 Erik Jones wrote:\n>> pg_restore is a postgres client app that uses libpq to connect and,  \n>> thus, will pick up anything in your $PGOPTIONS env variable.  So,\n>>\n>> PGOPTONS=\"-c maintenance_work_mem=512MB\" && pg_restore ....\n>\n> now that's just plain cool\n>\n> /me updates our wiki\n\nI would suggest leaving out the && which only obfuscate what's going on here.\n\nPGOPTIONS=... pg_restore ...\n\nwould work just as well and be clearer about what's going on.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Wed, 20 Feb 2008 14:14:13 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "\nOn Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:\n\n> \"Douglas J Hunley\" <[email protected]> writes:\n>\n>> On Tuesday 19 February 2008 16:32:02 Erik Jones wrote:\n>>> pg_restore is a postgres client app that uses libpq to connect and,\n>>> thus, will pick up anything in your $PGOPTIONS env variable. So,\n>>>\n>>> PGOPTONS=\"-c maintenance_work_mem=512MB\" && pg_restore ....\n>>\n>> now that's just plain cool\n>>\n>> /me updates our wiki\n>\n> I would suggest leaving out the && which only obfuscate what's \n> going on here.\n>\n> PGOPTIONS=... pg_restore ...\n>\n> would work just as well and be clearer about what's going on.\n\nRight, that's just an unnecessary habit of mine.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Wed, 20 Feb 2008 10:27:32 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" 
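Written out in full, the form Greg suggests is a single line, with the assignment prefixed to the command so the setting applies only to that one pg_restore run (and note the variable really is PGOPTIONS, with an I; the db name and tar file here are the original poster's):

    PGOPTIONS="-c maintenance_work_mem=512MB" pg_restore -v -d $db ./pre_8.3.tar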
}, { "msg_contents": "Erik Jones <[email protected]> writes:\n> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:\n>> I would suggest leaving out the && which only obfuscate what's \n>> going on here.\n>> \n>> PGOPTIONS=... pg_restore ...\n>> \n>> would work just as well and be clearer about what's going on.\n\n> Right, that's just an unnecessary habit of mine.\n\nIsn't that habit outright wrong? ISTM that with the && in there,\nwhat you're doing is equivalent to\n\n\tPGOPTIONS=whatever\n\tpg_restore ...\n\nThis syntax will set PGOPTIONS for the remainder of the shell session,\ncausing it to also affect (say) a subsequent psql invocation. Which is\nexactly not what is wanted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Feb 2008 11:54:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "On Wed, 20 Feb 2008, Tom Lane wrote:\n> Erik Jones <[email protected]> writes:\n>> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:\n>>> I would suggest leaving out the && which only obfuscate what's\n>>> going on here.\n>>>\n>>> PGOPTIONS=... pg_restore ...\n>>>\n>>> would work just as well and be clearer about what's going on.\n>\n>> Right, that's just an unnecessary habit of mine.\n>\n> Isn't that habit outright wrong? ISTM that with the && in there,\n> what you're doing is equivalent to\n>\n> \tPGOPTIONS=whatever\n> \tpg_restore ...\n>\n> This syntax will set PGOPTIONS for the remainder of the shell session,\n> causing it to also affect (say) a subsequent psql invocation. Which is\n> exactly not what is wanted.\n\nIt's even better than that. I don't see an \"export\" there, so it won't \ntake effect at all!\n\nMatthew\n\n-- \nFailure is not an option. It comes bundled with your Microsoft product. \n -- Ferenc Mantfeld\n", "msg_date": "Wed, 20 Feb 2008 17:11:46 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "\nOn Feb 20, 2008, at 10:54 AM, Tom Lane wrote:\n\n> Erik Jones <[email protected]> writes:\n>> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:\n>>> I would suggest leaving out the && which only obfuscate what's\n>>> going on here.\n>>>\n>>> PGOPTIONS=... pg_restore ...\n>>>\n>>> would work just as well and be clearer about what's going on.\n>\n>> Right, that's just an unnecessary habit of mine.\n>\n> Isn't that habit outright wrong? ISTM that with the && in there,\n> what you're doing is equivalent to\n>\n> \tPGOPTIONS=whatever\n> \tpg_restore ...\n>\n> This syntax will set PGOPTIONS for the remainder of the shell session,\n> causing it to also affect (say) a subsequent psql invocation. \n> Which is\n> exactly not what is wanted.\n\nYes.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Wed, 20 Feb 2008 11:31:32 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "On Wed, 2008-02-20 at 14:31 +0530, Pavan Deolasee wrote:\n> I think it would be interesting if we can build these indexes in parallel.\n> Each index build requires a seq scan on the table. If the table does\n> not fit in shared buffers, each index build would most likely result\n> in lots of IO.\n\nHe's already said that his I/O usage was not the problem. 
For one thing,\nhe has 8GB of memory for a 5GB dataset.\n\nEven when the table is much larger than memory, what percentage of the\ntime is spent on the table scan? A table scan is O(N), whereas an index\nbuild is O(N logN). If you combine that with expensive comparisons, e.g.\nfor localized text, then I would guess that the index building itself\nwas much more expensive than the scans themselves.\n\nHowever, building indexes in parallel would allow better CPU\nutilization.\n\n> One option would be to add this facility to the backend so that multiple\n> indexes can be built with a single seq scan of the table. In theory, it\n> should be possible, but might be tricky given the way index build works\n> (it calls respective ambuild method to build the index which internally\n> does the seq scan).\n\nI don't think that this would be necessary, because (as you say below)\nthe synchronized scan facility should already handle this.\n\n> Other option is to make pg_restore multi-threaded/processed. The\n> synchronized_scans facility would then synchronize the multiple heap\n> scans. ISTM that if we can make pg_restore mult-processed, then\n> we can possibly add more parallelism to the restore process.\n\nI like this approach more. I think that pg_restore is the right place to\ndo this, if we can make the options reasonably simple enough to use.\n\nSee:\n\nhttp://archives.postgresql.org/pgsql-hackers/2008-02/msg00699.php\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Wed, 20 Feb 2008 10:04:47 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Wed, 20 Feb 2008, Jeff Davis wrote:\n> However, building indexes in parallel would allow better CPU\n> utilization.\n\nWe have a process here that dumps a large quantity of data into an empty \ndatabase, much like pg_restore, and then creates all the indexes at the \nend. In order to speed up that bit, I initially made it spawn off several \nthreads, and make each thread run a CREATE INDEX operation in parallel. \nHowever, this resulted in random errors from Postgres - something to do \nwith locked tables. So I changed it so that no two threads create indexes \nfor the same table at once, and that solved it.\n\nObviously creating several indexes for the same table in parallel is \nbetter from a performance point of view, but you may have to fix that \nerror if you haven't already.\n\nMatthew\n\n-- \nfor a in past present future; do\n for b in clients employers associates relatives neighbours pets; do\n echo \"The opinions here in no way reflect the opinions of my $a $b.\"\ndone; done\n", "msg_date": "Wed, 20 Feb 2008 18:18:23 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Wed, 2008-02-20 at 18:18 +0000, Matthew wrote:\n> On Wed, 20 Feb 2008, Jeff Davis wrote:\n> > However, building indexes in parallel would allow better CPU\n> > utilization.\n> \n> We have a process here that dumps a large quantity of data into an empty \n> database, much like pg_restore, and then creates all the indexes at the \n> end. In order to speed up that bit, I initially made it spawn off several \n> threads, and make each thread run a CREATE INDEX operation in parallel. \n> However, this resulted in random errors from Postgres - something to do \n> with locked tables. 
So I changed it so that no two threads create indexes \n> for the same table at once, and that solved it.\n\nWhat was the specific problem? Were they UNIQUE indexes? Were you trying\nto write to the tables while indexing? Did you use \"CONCURRENTLY\"?\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Wed, 20 Feb 2008 10:35:32 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "Matthew <[email protected]> writes:\n> We have a process here that dumps a large quantity of data into an empty \n> database, much like pg_restore, and then creates all the indexes at the \n> end. In order to speed up that bit, I initially made it spawn off several \n> threads, and make each thread run a CREATE INDEX operation in parallel. \n> However, this resulted in random errors from Postgres - something to do \n> with locked tables. So I changed it so that no two threads create indexes \n> for the same table at once, and that solved it.\n\nHow long ago was that? There used to be some issues with two CREATE\nINDEXes both trying to update the pg_class row, but I thought we'd fixed\nit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Feb 2008 13:46:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> Erik Jones <[email protected]> writes:\n>> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:\n>>> I would suggest leaving out the && which only obfuscate what's \n>>> going on here.\n>>> \n>>> PGOPTIONS=... pg_restore ...\n>>> \n>>> would work just as well and be clearer about what's going on.\n>\n>> Right, that's just an unnecessary habit of mine.\n>\n> Isn't that habit outright wrong? ISTM that with the && in there,\n> what you're doing is equivalent to\n>\n> \tPGOPTIONS=whatever\n> \tpg_restore ...\n>\n> This syntax will set PGOPTIONS for the remainder of the shell session,\n> causing it to also affect (say) a subsequent psql invocation. Which is\n> exactly not what is wanted.\n\nWhen I said \"obfuscating\" I meant it. I'm pretty familiar with sh scripting\nand I'm not even sure what the && behaviour would do. On at least some shells\nI think the && will introduce a subshell. In that case the variable would not\ncontinue. In bash I think it would because bash avoids a lot of subshells that\nwould otherwise be necessary. \n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Wed, 20 Feb 2008 23:31:49 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "\n> When I said \"obfuscating\" I meant it. I'm pretty familiar with sh scripting\n> and I'm not even sure what the && behaviour would do.\n\nIt chains commands together so if the first fails the second doesn't happen.\n\n$ echo 1 && echo 2\n1\n2\n\n$ echo '1234' > /etc/file_that_doesnt_exist && echo 2\n-bash: /etc/file_that_doesnt_exist: Permission denied\n\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 21 Feb 2008 10:35:16 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "Gregory Stark wrote:\n> \"Chris\" <[email protected]> writes:\n> \n>>> When I said \"obfuscating\" I meant it. 
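A quick way to see the difference at a prompt (bash/sh behaviour; FOO stands in for PGOPTIONS, and printenv shows what a child process actually inherits):

    $ FOO=bar printenv FOO      # prefix form: in the environment of this one command
    bar
    $ printenv FOO              # and gone again afterwards
    $ FOO=bar && printenv FOO   # && form: unexported shell variable, the child sees nothing
    $

So the prefix form both works and avoids leaking PGOPTIONS into later commands in the same session.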
I'm pretty familiar with sh scripting\n>>> and I'm not even sure what the && behaviour would do.\n>> It chains commands together so if the first fails the second doesn't happen.\n> \n> I meant in this case, not in general. That is, does it introduce a subshell?\n\nAh - my misunderstanding then. No idea about that one.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 21 Feb 2008 16:29:37 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "\"Chris\" <[email protected]> writes:\n\n>> When I said \"obfuscating\" I meant it. I'm pretty familiar with sh scripting\n>> and I'm not even sure what the && behaviour would do.\n>\n> It chains commands together so if the first fails the second doesn't happen.\n\nI meant in this case, not in general. That is, does it introduce a subshell?\n\nSh traditionally has to introduce to implement some of the logical control and\npipe operators. I'm not sure if a simple && is enough but often it's\nsurprising how quickly that happens.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Thu, 21 Feb 2008 05:30:14 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "On Wed, 20 Feb 2008, Tom Lane wrote:\n>> However, this resulted in random errors from Postgres - something to do\n>> with locked tables. So I changed it so that no two threads create indexes\n>> for the same table at once, and that solved it.\n>\n> How long ago was that? There used to be some issues with two CREATE\n> INDEXes both trying to update the pg_class row, but I thought we'd fixed\n> it.\n\nIt was a while back, and that sounds like exactly the error it returned. \nIt sounds like you have fixed it.\n\nMatthew\n\n-- \nSoftware suppliers are trying to make their software packages more\n'user-friendly'.... Their best approach, so far, has been to take all\nthe old brochures, and stamp the words, 'user-friendly' on the cover.\n-- Bill Gates\n", "msg_date": "Thu, 21 Feb 2008 13:58:16 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "Jeff <threshar 'at' torgo.978.org> writes:\n\n> I wonder if it would be worthwhile if pg_restore could emit a warning\n> if maint_work_mem is \"low\" (start flamewar on what \"low\" is).\n>\n> And as an addition to that - allow a cmd line arg to have pg_restore\n> bump it before doing its work? On several occasions I was moving a\n> largish table and the COPY part went plenty fast, but when it hit\n> index creation it slowed down to a crawl due to low maint_work_mem..\n\nI have made a comparison restoring a production dump with default\nand large maintenance_work_mem. The speedup improvement here is\nonly of 5% (12'30 => 11'50).\n\nApprently, on the restored database, data is 1337 MB[1] and\nindexes 644 MB[2][2]. Pg is 8.2.3, checkpoint_segments 3,\nmaintenance_work_mem default (16MB) then 512MB, shared_buffers\n384MB. It is rather slow disks (Dell's LSI Logic RAID1), hdparm\nreports 82 MB/sec for reads.\n\nRef: \n[1] db=# SELECT sum(relpages)*8/1024 FROM pg_class, pg_namespace WHERE pg_namespace.oid = pg_class.relnamespace AND relkind = 'r' AND nspname = 'public';\n ?column? 
\n----------\n 1337\n \n (query run after ANALYZE)\n\n notice there are quite few toast pages to account:\n\n db=# SELECT relname, relpages FROM pg_class WHERE relname like '%toast%' ORDER BY relpages DESC;\n relname | relpages \n----------------------+----------\n pg_toast_2618 | 17\n pg_toast_2618_index | 2\n pg_toast_87570_index | 1\n pg_toast_87582_index | 1\n (...)\n\n[2] db=# SELECT sum(relpages)*8/1024 FROM pg_class, pg_namespace WHERE pg_namespace.oid = pg_class.relnamespace AND relkind = 'i' AND nspname = 'public';\n ?column? \n----------\n 644\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Thu, 21 Feb 2008 18:28:58 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "Guillaume Cottenceau <[email protected]> writes:\n> I have made a comparison restoring a production dump with default\n> and large maintenance_work_mem. The speedup improvement here is\n> only of 5% (12'30 => 11'50).\n\n> Apprently, on the restored database, data is 1337 MB[1] and\n> indexes 644 MB[2][2]. Pg is 8.2.3, checkpoint_segments 3,\n> maintenance_work_mem default (16MB) then 512MB, shared_buffers\n> 384MB. It is rather slow disks (Dell's LSI Logic RAID1), hdparm\n> reports 82 MB/sec for reads.\n\nThe main thing that jumps out at me is that boosting checkpoint_segments\nwould probably help. I tend to set it to 30 or so (note that this\ncorresponds to about 1GB taken up by pg_xlog).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Feb 2008 13:09:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore? " }, { "msg_contents": "\nOn Feb 21, 2008, at 12:28 PM, Guillaume Cottenceau wrote:\n\n> I have made a comparison restoring a production dump with default\n> and large maintenance_work_mem. The speedup improvement here is\n> only of 5% (12'30 => 11'50).\n\nAt one point I was evaluating several server vendors and did a bunch \nof DB restores. The one thing that gave me the biggest benefit was to \nbump the number of checkpoint segments to a high number, like 128 or \n256. Everything else was mostly minor increases in speed.\n\n\n", "msg_date": "Thu, 21 Feb 2008 14:17:40 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" }, { "msg_contents": "Tom Lane <tgl 'at' sss.pgh.pa.us> writes:\n\n> Guillaume Cottenceau <[email protected]> writes:\n>> I have made a comparison restoring a production dump with default\n>> and large maintenance_work_mem. The speedup improvement here is\n>> only of 5% (12'30 => 11'50).\n>\n>> Apprently, on the restored database, data is 1337 MB[1] and\n>> indexes 644 MB[2][2]. Pg is 8.2.3, checkpoint_segments 3,\n>> maintenance_work_mem default (16MB) then 512MB, shared_buffers\n>> 384MB. It is rather slow disks (Dell's LSI Logic RAID1), hdparm\n>> reports 82 MB/sec for reads.\n>\n> The main thing that jumps out at me is that boosting checkpoint_segments\n> would probably help. 
I tend to set it to 30 or so (note that this\n> corresponds to about 1GB taken up by pg_xlog).\n\nInterestingly, from a bzipped dump, there is no win; however,\nfrom an uncompressed dump, increasing checkpoint_segments from 3\nto 30 decreases clock time from 9'50 to 8'30 (15% if I'm\ncorrect).\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Fri, 22 Feb 2008 11:40:42 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7 hrs for a pg_restore?" } ]
[ { "msg_contents": "Hi,\n \n I am using Postgresql 8.1 for handling large data.\n \n I am having One Parent Table and Child Table I.e.inherits from\nparent table.\n \n The constraint for partitioning table is date range.\n \n I have to generate monthly report, report generation query\ncontains union with other table also.\n \n The report contains more than 70,000 records but it takes more\nthan half an hour and some time no result.\n \n Please help me to generating the report fast.\n \nRegards,\nShilpa\n \n \n\nThe information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. \n\nWARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email.\n\nwww.wipro.com\n\n\n\n\n\n\n \nHi,\n \n        I am using \nPostgresql 8.1 for handling large data.\n \n        I am having \nOne Parent Table and Child Table I.e.inherits from parent \ntable.\n \n       The constraint for \npartitioning table is date range.\n \n       I have to generate \nmonthly report, report generation query contains union with other table \nalso.\n \n       The report \ncontains more than 70,000 records but it takes more than half an hour and some \ntime no result.\n \n        Please help \nme to generating the report fast.\n \nRegards,\nShilpa\n \n        \nThe information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments.\nWARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email.\nwww.wipro.com", "msg_date": "Wed, 20 Feb 2008 16:32:40 +0530", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Need Help selecting Large Data From PQSQL" }, { "msg_contents": "[email protected] wrote:\n> The report contains more than 70,000 records but it takes more\n> than half an hour and some time no result.\n> \n> Please help me to generating the report fast.\n\nYou'll need to provide some more information before anyone can help. \nSomething along the lines of:\n- table definitions\n- query definition\n- EXPLAIN ANALYSE output (or just plain EXPLAIN if that's not practical)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 20 Feb 2008 11:38:09 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need Help selecting Large Data From PQSQL" } ]
[ { "msg_contents": "Hi\n\nI am using Postgres8.3 on 8G memory , Xeon X5355 Quad Core x 2\nprocesser RH5 machine with 10G data. (with some table which have\nabout 2,000,000~ 5,000,000 rows )\n\nI have two quesion.\n1. how to set the shared_buffers and other postgresql.conf parameter\nfor best performance?\nI only run the Postgres8.3 on the machine so I set the shared_buffers\n= 7168MB (7G)\nBut somebody said it is too big, so confused.\nThe memory info is that\n-----------------------------------------------------\nMemTotal: 8177484 kB\nMemFree: 313336 kB\nBuffers: 112700 kB\nCached: 7008160 kB\nSwapCached: 210832 kB\nActive: 7303660 kB\nInactive: 402088 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 8177484 kB\nLowFree: 313336 kB\nSwapTotal: 8385920 kB\nSwapFree: 7415768 kB\nDirty: 908 kB\nWriteback: 0 kB\nAnonPages: 28312 kB\nMapped: 2163912 kB\nSlab: 99396 kB\nPageTables: 13004 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nCommitLimit: 12474660 kB\nCommitted_AS: 8169440 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 267136 kB\nVmallocChunk: 34359470587 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugepagesize: 2048 kB\n-----------------------------------------------\n\n2 I have 8 core cpu ,but It seems that one sql can only use 1 core.\nCan I use more core to execute one sql to optimize the speed ?\n\nThanks\n", "msg_date": "Thu, 21 Feb 2008 14:13:50 +0900", "msg_from": "\"bh yuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Question about shared_buffers and cpu usage" }, { "msg_contents": "On Wed, Feb 20, 2008 at 11:13 PM, bh yuan <[email protected]> wrote:\n> Hi\n>\n> I am using Postgres8.3 on 8G memory , Xeon X5355 Quad Core x 2\n> processer RH5 machine with 10G data. (with some table which have\n> about 2,000,000~ 5,000,000 rows )\n>\n> I have two quesion.\n> 1. how to set the shared_buffers and other postgresql.conf parameter\n> for best performance?\n> I only run the Postgres8.3 on the machine so I set the shared_buffers\n> = 7168MB (7G)\n> But somebody said it is too big, so confused.\n\nOK. Shared_buffers are ONLY shared_buffers. When a pgsql process\nneeds memory it allocates it from the system heap. If you've given 7\nout of 8 gig to pg as shared_buffers, the other 1 Gig gets split up\nfor programs, and for in-memory sorts by pgsql. Also, the OS is very\ngood at caching file access, but here it won't be able to cache\nanything, because it won't have enough memory to do so. With high\nswappiness settings in linux, this can result in the OS swapping\nprograms that it then has to swap back in. If you make your machine\nswap out and back in for normal operation, you've gone backwards on\nperformance. Also, there's a cost associated with maintaining\nshared_buffers that grows with more share_buffers. This means it's\nusually not a good idea to set it larger than your working set of\ndata. I.e. if you have 1Gig of data and 1Gig of indexes, then 7Gig of\nshared_buffers means 5gigs wasted. Lastly, there's the background\nwriter which writes out dirty buffer pages before a checkpoint comes\nalong. The bigger shared_buffers the hard it has to work, if it's\nconfigured. For transactional systems it's usually a win to go with a\nsmaller (25%) shared_buffer setting and let the OS and battery backed\nRAID controller help out. For certain reporting application, larger\nsettings of shared_buffer are often useful, but you need to reserve\nsome % of main memory for things like sorts. 
I usually stick to 25%\nshared_buffers, and compute max_connects*work_mem to equal 25% and let\nthe OS have about 50% to work with. Then I test to see if changing\nthose helps.\n\n> 2 I have 8 core cpu ,but It seems that one sql can only use 1 core.\n\nYep, that's normal.\n\n> Can I use more core to execute one sql to optimize the speed ?\n\nOnly if you're willing to hack pgsql to split off sorts etc to child\nprocesses. Note that depending on you. I/O subsystem this may or may\nnot be a win. If you're creating multiple indexes at once, then each\ncreate index will use a different CPU.\n", "msg_date": "Thu, 21 Feb 2008 01:15:00 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about shared_buffers and cpu usage" }, { "msg_contents": "\nOn 21-Feb-08, at 12:13 AM, bh yuan wrote:\n\n> Hi\n>\n> I am using Postgres8.3 on 8G memory , Xeon X5355 Quad Core x 2\n> processer RH5 machine with 10G data. (with some table which have\n> about 2,000,000~ 5,000,000 rows )\n>\n> I have two quesion.\n> 1. how to set the shared_buffers and other postgresql.conf parameter\n> for best performance?\n> I only run the Postgres8.3 on the machine so I set the shared_buffers\n> = 7168MB (7G)\n> But somebody said it is too big, so confused.\n\nYes, it is too big! make it 2G to start\n>\n\n", "msg_date": "Thu, 21 Feb 2008 07:24:01 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about shared_buffers and cpu usage" } ]
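Putting rough numbers on that rule of thumb for this 8GB box (illustrative only; max_connections = 100 is an assumption, it wasn't posted):

    shared_buffers       = 2GB     # about 25% of RAM
    effective_cache_size = 5GB     # what the OS is likely to be caching
    work_mem             = 16MB    # per sort/hash per backend: 100 connections could
                                   # already use 1.6GB, and a single query can contain
                                   # several sort/hash nodes, so keep this modest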
[ { "msg_contents": "Given the following query:\n\nSELECT\n fi.pub_date\nFROM\n ext_feeder_item fi\nWHERE\n fi.feed_id IN (SELECT id FROM ext_feeder_feed ff\n WHERE ff.is_system)\nORDER BY\n pub_date DESC;\n\nI'm getting a plan that uses a sequential scan on ext_feeder_item instead of\nseveral index scans, which slows down the query significantly:\n\n# explain analyze select fi.pub_date from ext_feeder_item fi where fi.feed_id\n in (select id from ext_feeder_feed ff where ff.is_system) order by pub_date\n desc;\n Sort (cost=298545.70..299196.46 rows=260303 width=8) (actual\ntime=89299.623..89302.146 rows=807 loops=1)\n Sort Key: fi.pub_date\n Sort Method: quicksort Memory: 48kB\n -> Hash IN Join (cost=392.39..271572.17 rows=260303 width=8)\n(actual time=537.226..89294.837 rows=807 loops=1)\n Hash Cond: (fi.feed_id = ff.id)\n -> Seq Scan on ext_feeder_item fi (cost=0.00..261330.45\nrows=1932345 width=16) (actual time=0.035..82766.295 rows=1926576\nloops=1)\n -> Hash (cost=377.78..377.78 rows=1169 width=8) (actual\ntime=175.579..175.579 rows=1196 loops=1)\n -> Seq Scan on ext_feeder_feed ff (cost=0.00..377.78\nrows=1169 width=8) (actual time=13.723..171.467 rows=1196 loops=1)\n Filter: is_system\n Total runtime: 89304.787 ms\n\nUsing LIMIT in the subquery I can see that starting with 50 values for the in\nthe planner starts to prefer the seq scan. Plan for 49:\n\n# explain analyze select fi.pub_date from ext_feeder_item fi where fi.feed_id\n in (select id from ext_feeder_feed ff where ff.is_system limit 49) order by\n pub_date desc;\n\n Sort (cost=277689.24..277918.39 rows=91660 width=8) (actual\ntime=477.769..478.193 rows=137 loops=1)\n Sort Key: fi.pub_date\n Sort Method: quicksort Memory: 22kB\n -> Nested Loop (cost=16.45..268878.12 rows=91660 width=8) (actual\ntime=119.258..477.150 rows=137 loops=1)\n -> HashAggregate (cost=16.45..16.94 rows=49 width=8)\n(actual time=0.791..0.965 rows=49 loops=1)\n -> Limit (cost=0.00..15.84 rows=49 width=8) (actual\ntime=0.023..0.613 rows=49 loops=1)\n -> Seq Scan on ext_feeder_feed ff\n(cost=0.00..377.78 rows=1169 width=8) (actual time=0.016..0.310\nrows=49 loops=1)\n Filter: is_system\n -> Index Scan using ext_feeder_item_feed_id_idx on\next_feeder_item fi (cost=0.00..5463.58 rows=1871 width=16) (actual\ntime=4.485..9.692 rows=3 loops=49)\n Index Cond: (fi.feed_id = ff.id)\n Total runtime: 478.709 ms\n\nNote that the rows estimate for the index scan is way off. 
Increasing\nstatistics target for ext_feeder_item.feed_id to 100 lets the planner favor the\nindex scan up to LIMIT 150 for the subquery.\n\nUsing enable_seqscan=false, I see that the index scan plan continues to\noutperform the seqscan plan even with limit 1500 in the subquery (1196 values\nactually returned from it):\n\n# explain analyze select fi.pub_date from ext_feeder_item fi where fi.feed_id\n in (select id from ext_feeder_feed ff where ff.is_system limit 1500) order by\n pub_date desc;\n\n Sort (cost=100925142.27..100925986.74 rows=337787 width=8) (actual\ntime=102.111..104.627 rows=807 loops=1)\n Sort Key: fi.pub_date\n Sort Method: quicksort Memory: 48kB\n -> Nested Loop (cost=100000392.39..100889503.71 rows=337787\nwidth=8) (actual time=30.411..98.187 rows=807 loops=1)\n -> HashAggregate (cost=100000392.39..100000394.39 rows=200\nwidth=8) (actual time=30.337..35.329 rows=1196 loops=1)\n -> Limit (cost=100000000.00..100000377.78 rows=1169\nwidth=8) (actual time=0.027..24.759 rows=1196 loops=1)\n -> Seq Scan on ext_feeder_feed ff\n(cost=100000000.00..100000377.78 rows=1169 width=8) (actual\ntime=0.019..16.448 rows=1196 loops=1)\n Filter: is_system\n -> Index Scan using ext_feeder_item_feed_id_idx on\next_feeder_item fi (cost=0.00..4424.43 rows=1689 width=16) (actual\ntime=0.026..0.040 rows=1 loops=1196)\n Index Cond: (fi.feed_id = ff.id)\n Total runtime: 107.264 ms\n\nWithout limit though, the planner chooses a different plan that also doesn't\nperform:\n\n# explain analyze select fi.pub_date from ext_feeder_item fi where fi.feed_id\n in (select id from ext_feeder_feed ff where ff.is_system) order by pub_date\n desc;\n\n Sort (cost=1134023.40..1134669.54 rows=258456 width=8) (actual\ntime=854348.350..854350.866 rows=807 loops=1)\n Sort Key: fi.pub_date\n Sort Method: quicksort Memory: 48kB\n -> Hash IN Join (cost=543.03..1107253.77 rows=258456 width=8)\n(actual time=21.241..854343.544 rows=807 loops=1)\n Hash Cond: (fi.feed_id = ff.id)\n -> Index Scan Backward using ext_feeder_item_pub_date_idx on\next_feeder_item fi (cost=0.00..1096931.31 rows=1918631 width=16)\n(actual time=0.096..847635.097 rows=1926576 loops=1)\n -> Hash (cost=528.42..528.42 rows=1169 width=8) (actual\ntime=21.114..21.114 rows=1196 loops=1)\n -> Index Scan using ext_feeder_feed_pkey on\next_feeder_feed ff (cost=0.00..528.42 rows=1169 width=8) (actual\ntime=0.066..16.042 rows=1196 loops=1)\n Filter: is_system\n Total runtime: 854353.431 ms\n\n\nWhy does the planner choose that way and what can I do to make it choose the\nbetter plan, preferably without specifying limit and a maybe unreasonably high\nstatistics target for ext_feeder_item.feed_id?\n\nPostgreSQL 8.3, from a freshly loaded and analyzed dump.\n\nThanks\n\nMarkus Bertheau\n", "msg_date": "Thu, 21 Feb 2008 13:04:01 +0600", "msg_from": "\"Markus Bertheau\" <[email protected]>", "msg_from_op": true, "msg_subject": "planner favors seq scan too early" }, { "msg_contents": "Markus Bertheau wrote:\n> \n> I'm getting a plan that uses a sequential scan on ext_feeder_item instead of\n> several index scans, which slows down the query significantly:\n> \n> # explain analyze select fi.pub_date from ext_feeder_item fi where fi.feed_id\n> in (select id from ext_feeder_feed ff where ff.is_system) order by pub_date\n> desc;\n> Sort (cost=298545.70..299196.46 rows=260303 width=8) (actual\n> time=89299.623..89302.146 rows=807 loops=1)\n\n> Using LIMIT in the subquery I can see that starting with 50 values for the in\n> the planner starts to prefer the seq scan. 
Plan for 49:\n\n> Sort (cost=277689.24..277918.39 rows=91660 width=8) (actual\n> time=477.769..478.193 rows=137 loops=1)\n\n> Note that the rows estimate for the index scan is way off. Increasing\n> statistics target for ext_feeder_item.feed_id to 100 lets the planner favor the\n> index scan up to LIMIT 150 for the subquery.\n> \n> Using enable_seqscan=false, I see that the index scan plan continues to\n> outperform the seqscan plan even with limit 1500 in the subquery (1196 values\n> actually returned from it):\n\n> Sort (cost=100925142.27..100925986.74 rows=337787 width=8) (actual\n> time=102.111..104.627 rows=807 loops=1)\n\n> Why does the planner choose that way and what can I do to make it choose the\n> better plan, preferably without specifying limit and a maybe unreasonably high\n> statistics target for ext_feeder_item.feed_id?\n\nAlthough the index scans are fast enough, the cost estimate is much more.\n\nThis suggests you need to tweak your planner cost settings:\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS\n\nI'd probably start with reducing random_page_cost if you have a \nreasonable disk system and making sure effective_cache_size is \naccurately set.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 21 Feb 2008 08:34:09 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner favors seq scan too early" } ]
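Concretely, the knobs Richard mentions plus the statistics bump discussed above look like this (try them per session first; the effective_cache_size value is a placeholder since the box's RAM wasn't posted):

    SET random_page_cost = 2.0;          -- down from the default 4.0
    SET effective_cache_size = '4GB';    -- size this to the OS cache on the actual box

    ALTER TABLE ext_feeder_item ALTER COLUMN feed_id SET STATISTICS 100;
    ANALYZE ext_feeder_item;

If the index-scan plans win with these settings, the same values can then go into postgresql.conf.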
[ { "msg_contents": "Hi all,\n\nThe following query takes about 4s to run in a 16GB ram server. Any ideas\nwhy it doesn´t use index for the primary keys in the join conditions?\n\nselect i.inuid, count(*) as total\nfrom cte.instrumentounidade i\ninner join cte.pontuacao p on p.inuid = i.inuid\ninner join cte.acaoindicador ai on ai.ptoid = p.ptoid\ninner join cte.subacaoindicador si on si.aciid = ai.aciid\nwhere i.itrid = 2 and p.ptostatus = 'A'\ngroup by i.inuid\nhaving count(*) > 0\n\nHashAggregate (cost=47905.87..47941.01 rows=2008 width=4)\n Filter: (count(*) > 0)\n -> Hash Join (cost=16307.79..46511.45 rows=185923 width=4)\n Hash Cond: (si.aciid = ai.aciid)\n -> Seq Scan on subacaoindicador si (cost=0.00..22812.17 rows=368817\nwidth=4)\n -> Hash (cost=16211.40..16211.40 rows=38556 width=8)\n -> Hash Join (cost=9018.20..16211.40 rows=38556 width=8)\n Hash Cond: (p.inuid = i.inuid)\n -> Hash Join (cost=8908.41..15419.10 rows=39593\nwidth=8)\n Hash Cond: (ai.ptoid = p.ptoid)\n -> Seq Scan on acaoindicador ai (cost=\n0.00..4200.84 rows=76484 width=8)\n -> Hash (cost=8678.33..8678.33 rows=92034\nwidth=8)\n -> Seq Scan on pontuacao p (cost=\n0.00..8678.33 rows=92034 width=8)\n Filter: (ptostatus = 'A'::bpchar)\n -> Hash (cost=104.78..104.78 rows=2008 width=4)\n -> Seq Scan on instrumentounidade i (cost=\n0.00..104.78 rows=2008 width=4)\n Filter: (itrid = 2)\n\nHi all, The following query takes about 4s to run in a 16GB ram server. Any ideas why it doesn´t use index for the primary keys in the join conditions?select i.inuid, count(*) as totalfrom cte.instrumentounidade i\ninner join cte.pontuacao p on p.inuid = i.inuidinner join cte.acaoindicador ai on ai.ptoid = p.ptoidinner join cte.subacaoindicador si on si.aciid = ai.aciidwhere i.itrid = 2 and p.ptostatus = 'A'group by i.inuid\nhaving count(*) > 0HashAggregate  (cost=47905.87..47941.01 rows=2008 width=4) Filter: (count(*) > 0) ->  Hash Join  (cost=16307.79..46511.45 rows=185923 width=4)       Hash Cond: (si.aciid = ai.aciid)\n       ->  Seq Scan on subacaoindicador si  (cost=0.00..22812.17 rows=368817 width=4)       ->  Hash  (cost=16211.40..16211.40 rows=38556 width=8)             ->  Hash Join  (cost=9018.20..16211.40 rows=38556 width=8)\n                   Hash Cond: (p.inuid = i.inuid)                   ->  Hash Join  (cost=8908.41..15419.10 rows=39593 width=8)                         Hash Cond: (ai.ptoid = p.ptoid)                         ->  Seq Scan on acaoindicador ai  (cost=0.00..4200.84 rows=76484 width=8)\n                         ->  Hash  (cost=8678.33..8678.33 rows=92034 width=8)                               ->  Seq Scan on pontuacao p  (cost=0.00..8678.33 rows=92034 width=8)                                     Filter: (ptostatus = 'A'::bpchar)\n                   ->  Hash  (cost=104.78..104.78 rows=2008 width=4)                         ->  Seq Scan on instrumentounidade i  (cost=0.00..104.78 rows=2008 width=4)                               Filter: (itrid = 2)", "msg_date": "Thu, 21 Feb 2008 17:48:18 -0300", "msg_from": "\"Adonias Malosso\" <[email protected]>", "msg_from_op": true, "msg_subject": "4s query want to run faster" }, { "msg_contents": "On Thu, Feb 21, 2008 at 2:48 PM, Adonias Malosso <[email protected]> wrote:\n> Hi all,\n>\n> The following query takes about 4s to run in a 16GB ram server. 
Any ideas\n> why it doesn´t use index for the primary keys in the join conditions?\n>\n> select i.inuid, count(*) as total\n> from cte.instrumentounidade i\n> inner join cte.pontuacao p on p.inuid = i.inuid\n> inner join cte.acaoindicador ai on ai.ptoid = p.ptoid\n> inner join cte.subacaoindicador si on si.aciid = ai.aciid\n> where i.itrid = 2 and p.ptostatus = 'A'\n> group by i.inuid\n> having count(*) > 0\n\nWhat does explain analyze say about that query?\n", "msg_date": "Thu, 21 Feb 2008 14:58:04 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "HashAggregate (cost=47818.40..47853.12 rows=1984 width=4) (actual time=\n5738.879..5743.390 rows=1715 loops=1)\n Filter: (count(*) > 0)\n -> Hash Join (cost=16255.99..46439.06 rows=183912 width=4) (actual time=\n1887.974..5154.207 rows=241693 loops=1)\n Hash Cond: (si.aciid = ai.aciid)\n -> Seq Scan on subacaoindicador si (cost=0.00..22811.98 rows=368798\nwidth=4) (actual time=0.108..1551.816 rows=368798 loops=1)\n -> Hash (cost=16160.64..16160.64 rows=38141 width=8) (actual time=\n1887.790..1887.790 rows=52236 loops=1)\n -> Hash Join (cost=9015.31..16160.64 rows=38141 width=8)\n(actual time=980.058..1773.530 rows=52236 loops=1)\n Hash Cond: (p.inuid = i.inuid)\n -> Hash Join (cost=8905.89..15376.11 rows=39160\nwidth=8) (actual time=967.116..1568.028 rows=54225 loops=1)\n Hash Cond: (ai.ptoid = p.ptoid)\n -> Seq Scan on acaoindicador ai (cost=\n0.00..4200.84 rows=76484 width=8) (actual time=0.080..259.412 rows=76484\nloops=1)\n -> Hash (cost=8678.33..8678.33 rows=91026\nwidth=8) (actual time=966.841..966.841 rows=92405 loops=1)\n -> Seq Scan on pontuacao p (cost=\n0.00..8678.33 rows=91026 width=8) (actual time=0.087..746.528 rows=92405\nloops=1)\n Filter: (ptostatus = 'A'::bpchar)\n -> Hash (cost=104.46..104.46 rows=1984 width=4) (actual\ntime=12.913..12.913 rows=1983 loops=1)\n -> Seq Scan on instrumentounidade i (cost=\n0.00..104.46 rows=1984 width=4) (actual time=0.091..8.879 rows=1983 loops=1)\n Filter: (itrid = 2)\nTotal runtime: 5746.415 ms\n\nOn Thu, Feb 21, 2008 at 5:58 PM, Scott Marlowe <[email protected]>\nwrote:\n\n> On Thu, Feb 21, 2008 at 2:48 PM, Adonias Malosso <[email protected]>\n> wrote:\n> > Hi all,\n> >\n> > The following query takes about 4s to run in a 16GB ram server. 
Any\n> ideas\n> > why it doesn´t use index for the primary keys in the join conditions?\n> >\n> > select i.inuid, count(*) as total\n> > from cte.instrumentounidade i\n> > inner join cte.pontuacao p on p.inuid = i.inuid\n> > inner join cte.acaoindicador ai on ai.ptoid = p.ptoid\n> > inner join cte.subacaoindicador si on si.aciid = ai.aciid\n> > where i.itrid = 2 and p.ptostatus = 'A'\n> > group by i.inuid\n> > having count(*) > 0\n>\n> What does explain analyze say about that query?\n>\n\nHashAggregate  (cost=47818.40..47853.12 rows=1984 width=4) (actual time=5738.879..5743.390 rows=1715 loops=1)  Filter: (count(*) > 0)  ->  Hash Join  (cost=16255.99..46439.06 rows=183912 width=4) (actual time=1887.974..5154.207 rows=241693 loops=1)\n        Hash Cond: (si.aciid = ai.aciid)        ->  Seq Scan on subacaoindicador si  (cost=0.00..22811.98 rows=368798 width=4) (actual time=0.108..1551.816 rows=368798 loops=1)        ->  Hash  (cost=16160.64..16160.64 rows=38141 width=8) (actual time=1887.790..1887.790 rows=52236 loops=1)\n              ->  Hash Join  (cost=9015.31..16160.64 rows=38141 width=8) (actual time=980.058..1773.530 rows=52236 loops=1)                    Hash Cond: (p.inuid = i.inuid)                    ->  Hash Join  (cost=8905.89..15376.11 rows=39160 width=8) (actual time=967.116..1568.028 rows=54225 loops=1)\n                          Hash Cond: (ai.ptoid = p.ptoid)                          ->  Seq Scan on acaoindicador ai  (cost=0.00..4200.84 rows=76484 width=8) (actual time=0.080..259.412 rows=76484 loops=1)                          ->  Hash  (cost=8678.33..8678.33 rows=91026 width=8) (actual time=966.841..966.841 rows=92405 loops=1)\n                                ->  Seq Scan on pontuacao p  (cost=0.00..8678.33 rows=91026 width=8) (actual time=0.087..746.528 rows=92405 loops=1)                                      Filter: (ptostatus = 'A'::bpchar)\n                    ->  Hash  (cost=104.46..104.46 rows=1984 width=4) (actual time=12.913..12.913 rows=1983 loops=1)                          ->  Seq Scan on instrumentounidade i  (cost=0.00..104.46 rows=1984 width=4) (actual time=0.091..8.879 rows=1983 loops=1)\n                                Filter: (itrid = 2)Total runtime: 5746.415 msOn Thu, Feb 21, 2008 at 5:58 PM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Feb 21, 2008 at 2:48 PM, Adonias Malosso <[email protected]> wrote:\n\n> Hi all,\n>\n> The following query takes about 4s to run in a 16GB ram server. Any ideas\n> why it doesn´t use index for the primary keys in the join conditions?\n>\n> select i.inuid, count(*) as total\n> from cte.instrumentounidade i\n>  inner join cte.pontuacao p on p.inuid = i.inuid\n> inner join cte.acaoindicador ai on ai.ptoid = p.ptoid\n> inner join cte.subacaoindicador si on si.aciid = ai.aciid\n> where i.itrid = 2 and p.ptostatus = 'A'\n> group by i.inuid\n>  having count(*) > 0\n\nWhat does explain analyze say about that query?", "msg_date": "Thu, 21 Feb 2008 18:05:52 -0300", "msg_from": "\"Adonias Malosso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "> The following query takes about 4s to run in a 16GB ram server. Any ideas\n> why it doesn´t use index for the primary keys in the join conditions?\n\nMaby random_page_cost is set too high? 
What version are you using?\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Thu, 21 Feb 2008 22:10:24 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "On Thu, Feb 21, 2008 at 6:10 PM, Claus Guttesen <[email protected]> wrote:\n\n> > The following query takes about 4s to run in a 16GB ram server. Any\n> ideas\n> > why it doesn´t use index for the primary keys in the join conditions?\n>\n> Maby random_page_cost is set too high? What version are you using?\n\n\nPostgresql v. 8.2.1\n\n\n>\n> --\n> regards\n> Claus\n>\n> When lenity and cruelty play for a kingdom,\n> the gentlest gamester is the soonest winner.\n>\n> Shakespeare\n>\n\nOn Thu, Feb 21, 2008 at 6:10 PM, Claus Guttesen <[email protected]> wrote:\n> The following query takes about 4s to run in a 16GB ram server. Any ideas\n> why it doesn´t use index for the primary keys in the join conditions?\n\nMaby random_page_cost is set too high? What version are you using?Postgresql v. 8.2.1 \n\n\n--\nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare", "msg_date": "Thu, 21 Feb 2008 18:11:16 -0300", "msg_from": "\"Adonias Malosso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "> > > why it doesn´t use index for the primary keys in the join conditions?\n> >\n> > Maby random_page_cost is set too high? What version are you using?\n>\n> Postgresql v. 8.2.1\n\nYou can try to lower this value. The default (in 8.3) is 4.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Thu, 21 Feb 2008 22:16:20 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "Set random_page_cost = 2 solved the problem. thanks\n\nOn Thu, Feb 21, 2008 at 6:16 PM, Claus Guttesen <[email protected]> wrote:\n\n> > > > why it doesn´t use index for the primary keys in the join\n> conditions?\n> > >\n> > > Maby random_page_cost is set too high? What version are you using?\n> >\n> > Postgresql v. 8.2.1\n>\n> You can try to lower this value. The default (in 8.3) is 4.\n>\n> --\n> regards\n> Claus\n>\n> When lenity and cruelty play for a kingdom,\n> the gentlest gamester is the soonest winner.\n>\n> Shakespeare\n>\n\nSet random_page_cost = 2 solved the problem. thanksOn Thu, Feb 21, 2008 at 6:16 PM, Claus Guttesen <[email protected]> wrote:\n> > > why it doesn´t use index for the primary keys in the join conditions?\n> >\n> > Maby random_page_cost is set too high? What version are you using?\n>\n> Postgresql v. 8.2.1\n\nYou can try to lower this value. The default (in 8.3) is 4.\n\n--\nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare", "msg_date": "Thu, 21 Feb 2008 18:23:36 -0300", "msg_from": "\"Adonias Malosso\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "Well, all the row counts in expected and actual are pretty close. I'm\nguessing it's as optimized as it's likely to get. 
you could try\nmucking about with random_page_cost to force index usage, but indexes\nare not always a win in pgsql, hence the seq scans etc... If the\nnumber of rows returned represents a large percentage of the total\nnumber of rows in the table, then a seq scan is generally a win. Note\nthat most all the time being spent in this query is on the Hash Join,\nnot on the seq scans.\n\nAlso, you should really update to 8.2.6 the latest 8.2 version. Check\nthe release notes for the bugs that were fixed between 8.2.1 and 8.2.6\n", "msg_date": "Thu, 21 Feb 2008 15:25:35 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "The other parameter you might want to look at is effective_cache_size - \nincreasing it will encourage index use. On a machine with 16GB the \ndefault is probably too small (there are various recommendations about \nhow to set this ISTR either Scott M or Greg Smith had a page somewhere \nthat covered this quite well - guys?).\n\nObviously, decreasing random_page_cost fixed this query for you, but if \nfind yourself needing to tweak it again for other queries, then look at \nchanging effective_cache_size.\n\nCheers\n\nMark\n\n\nAdonias Malosso wrote:\n> Set random_page_cost = 2 solved the problem. thanks\n>\n\n", "msg_date": "Fri, 22 Feb 2008 11:59:06 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "On Thu, Feb 21, 2008 at 4:59 PM, Mark Kirkwood <[email protected]> wrote:\n> The other parameter you might want to look at is effective_cache_size -\n> increasing it will encourage index use. On a machine with 16GB the\n> default is probably too small (there are various recommendations about\n> how to set this ISTR either Scott M or Greg Smith had a page somewhere\n> that covered this quite well - guys?).\n>\n> Obviously, decreasing random_page_cost fixed this query for you, but if\n> find yourself needing to tweak it again for other queries, then look at\n> changing effective_cache_size.\n\neffective_cache_size is pretty easy to set, and it's not real\nsensitive to small changes, so guesstimation is fine where it's\nconcerned. Basically, let your machine run for a while, then add the\ncache and buffer your unix kernel has altogether (top and free will\ntell you these things). If you're running other apps on the server,\nmake a SWAG (scientific wild assed guess) how much the other apps are\npounding on the kernel cache / buffer and set effective_cache_size to\nhow much you think postgresql is using of the total and set it to\nthat.\n\nIf your data set fits into memory, then setting random page cost\ncloser to 1 makes a lot of sense, and the larger effective cache size.\n", "msg_date": "Thu, 21 Feb 2008 17:16:10 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "\nOn 21-Feb-08, at 6:16 PM, Scott Marlowe wrote:\n\n> On Thu, Feb 21, 2008 at 4:59 PM, Mark Kirkwood \n> <[email protected]> wrote:\n>> The other parameter you might want to look at is \n>> effective_cache_size -\n>> increasing it will encourage index use. 
On a machine with 16GB the\n>> default is probably too small (there are various recommendations \n>> about\n>> how to set this ISTR either Scott M or Greg Smith had a page \n>> somewhere\n>> that covered this quite well - guys?).\n>>\nThe default is always too small in my experience.\n\nWhat are the rest of the configuration values ?\n\nDave\n\n", "msg_date": "Thu, 21 Feb 2008 18:40:02 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" }, { "msg_contents": "Scott Marlowe wrote:\n>\n> effective_cache_size is pretty easy to set, and it's not real\n> sensitive to small changes, so guesstimation is fine where it's\n> concerned. Basically, let your machine run for a while, then add the\n> cache and buffer your unix kernel has altogether (top and free will\n> tell you these things). If you're running other apps on the server,\n> make a SWAG (scientific wild assed guess) how much the other apps are\n> pounding on the kernel cache / buffer and set effective_cache_size to\n> how much you think postgresql is using of the total and set it to\n> that.\n> \n\nFWIW - The buffered|cached may well be called something different if you \nare not on Linux (I didn't see any platform mentioned - sorry if I \nmissed it) - e.g for Freebsd it is \"Inactive\" that shows what the os is \ncaching and \"Cached\" actually means something slightly different... (yep \nthat's caused a lot of confusion in the past...)\n\nCheers\n\nMark\n", "msg_date": "Fri, 22 Feb 2008 18:10:18 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 4s query want to run faster" } ]
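A sketch of the session-level test implied by the advice above, before editing postgresql.conf. random_page_cost = 2 is the value reported to fix this query; the effective_cache_size figure is only an assumed guess for a 16GB machine mostly dedicated to PostgreSQL (memory units like '12GB' need 8.2 or later):

SET random_page_cost = 2.0;
SET effective_cache_size = '12GB';   -- rough guess based on free/top output, per the advice above

EXPLAIN ANALYZE
SELECT i.inuid, count(*) AS total
FROM cte.instrumentounidade i
INNER JOIN cte.pontuacao p ON p.inuid = i.inuid
INNER JOIN cte.acaoindicador ai ON ai.ptoid = p.ptoid
INNER JOIN cte.subacaoindicador si ON si.aciid = ai.aciid
WHERE i.itrid = 2 AND p.ptostatus = 'A'
GROUP BY i.inuid
HAVING count(*) > 0;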
[ { "msg_contents": "On Thu, Feb 21, 2008 at 5:40 PM, Dave Cramer <[email protected]> wrote:\n>\n> On 21-Feb-08, at 6:16 PM, Scott Marlowe wrote:\n>\n> > On Thu, Feb 21, 2008 at 4:59 PM, Mark Kirkwood\n> > <[email protected]> wrote:\n> >> The other parameter you might want to look at is\n> >> effective_cache_size -\n> >> increasing it will encourage index use. On a machine with 16GB the\n> >> default is probably too small (there are various recommendations\n> >> about\n> >> how to set this ISTR either Scott M or Greg Smith had a page\n> >> somewhere\n> >> that covered this quite well - guys?).\n> >>\n> The default is always too small in my experience.\n>\n> What are the rest of the configuration values ?\n\nI was thinking that we almost need a matrix of versions and small,\ntypical, large, and too big or whatever for each version, and which\nhardware configs.\n\nmax_connections is the one I see abused a lot here. It's a setting\nthat you can set way too high and not notice there's a problem until\nyou go to actually use that many connections and find out your\ndatabase performance just went south.\n\nOne should closely monitor connection usage and track it over time, as\nwell as benchmark the behavior of your db under realistic but heavy\nload. You should know how many connections you can handle in a test\nsetup before things get ugly, and then avoid setting max_connections\nany higher than about half that if you can do it. Same kind of\nthinking applies to any resource that has straightline 1:1 increase in\nresource usage, or a tendency towards that, like work_mem (formerly\nsort_mem). Dammit, nearly every one really needs it's own mini-howto\non how to set it... They all are covered in the runtime config section\nof the docs.\n", "msg_date": "Thu, 21 Feb 2008 18:24:54 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": true, "msg_subject": "config settings, was: 4s query want to run faster" } ]
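One possible way to do the connection monitoring described above, using only standard catalog views; the queries are a sketch, and the work_mem arithmetic mentioned in the comments is a rule of thumb rather than a hard limit:

-- Snapshot of connections in use versus the configured ceiling.
SELECT count(*)                            AS connections_in_use,
       current_setting('max_connections')  AS max_connections
FROM pg_stat_activity;

-- Settings whose product approximates worst-case sort memory if every
-- connection ran one large sort at once (a query can use several work_mem
-- allocations, so treat this only as a lower bound on the risk).
SHOW work_mem;
SHOW max_connections;

Sampling the first query periodically (for example from cron) gives the usage-over-time record recommended above.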
[ { "msg_contents": "Hi,\n\nI need to store a lot of 3-tuples of words (e.g. \"he\", \"can\", \n\"drink\"), order matters!\nThe source is about 4 GB of these 3-tuples.\nI need to store them in a table and check whether one of them is \nalready stored, and if that's the case to increment a column named \n\"count\" (or something).\n\nI thought of doing all the inserts without having an index and without \ndoing the check whether the row is already there. After that I'd do a \n\"group by\" and count(*) on that table. Is this a good idea?\n\nI don't know much about Pgs data types. I'd try to use the varchar \ntype. But maybe there is a better data type?\nWhat kind of index should I use?\n\nThis is for a scientific research.\n\nThanks in advance\n\nmoritz\n\n", "msg_date": "Fri, 22 Feb 2008 16:42:29 +0100", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "store A LOT of 3-tuples for comparisons" }, { "msg_contents": "On Fri, 22 Feb 2008, Moritz Onken wrote:\n> I need to store a lot of 3-tuples of words (e.g. \"he\", \"can\", \"drink\"), order \n> matters!\n> The source is about 4 GB of these 3-tuples.\n> I need to store them in a table and check whether one of them is already \n> stored, and if that's the case to increment a column named \"count\" (or \n> something).\n\nMy suggestion would be to use three varchar columns to store the 3-tuples. \nYou should then create a B-tree index on the three columns together.\n\n> I thought of doing all the inserts without having an index and without doing \n> the check whether the row is already there. After that I'd do a \"group by\" \n> and count(*) on that table. Is this a good idea?\n\nThat sounds like the fastest way to do it, certainly.\n\nMatthew\n\n-- \n\"We have always been quite clear that Win95 and Win98 are not the systems to\nuse if you are in a hostile security environment.\" \"We absolutely do recognize\nthat the Internet is a hostile environment.\" Paul Leach <[email protected]>\n", "msg_date": "Fri, 22 Feb 2008 15:49:34 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: store A LOT of 3-tuples for comparisons" }, { "msg_contents": "Matthew wrote:\n> On Fri, 22 Feb 2008, Moritz Onken wrote:\n\n>> I thought of doing all the inserts without having an index and without \n>> doing the check whether the row is already there. After that I'd do a \n>> \"group by\" and count(*) on that table. Is this a good idea?\n> \n> That sounds like the fastest way to do it, certainly.\n\nYeah I would load the data into a temp 3-column table and then\nINSERT INTO mydatatable SELECT w1,w2,w3,count(*) GROUP BY w1,w2,w3\nthen\nCREATE UNIQUE INDEX idx_unique_data ON mydatatable (w1,w2,w3)\nif you plan to continue adding to and using the data.\n\nIf this is to be an ongoing data collection (with data being added \nslowly from here) I would probably setup a trigger to update the count \ncolumn.\n\n\nI am also wondering about the ordering and whether that matters.\nCan you have \"he\", \"can\", \"drink\" as well as \"drink\", \"he\", \"can\"\nand should they be considered the same? 
If so you will need a different \ntactic.\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Sat, 23 Feb 2008 18:23:43 +1030", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: store A LOT of 3-tuples for comparisons" }, { "msg_contents": ">\n>\n> I am also wondering about the ordering and whether that matters.\n> Can you have \"he\", \"can\", \"drink\" as well as \"drink\", \"he\", \"can\"\n> and should they be considered the same? If so you will need a \n> different tactic.\n>\n\nordering matters. So the 3-column tactic should work.\n\nThanks for your advice!\n", "msg_date": "Sat, 23 Feb 2008 10:07:18 +0100", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: store A LOT of 3-tuples for comparisons" } ]
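A sketch of the load-then-aggregate approach outlined in this thread; the table names, column names, and file path are made-up placeholders:

-- Stage the raw 3-tuples with no indexes so the bulk load stays fast.
CREATE TABLE raw_triples (w1 varchar, w2 varchar, w3 varchar);
COPY raw_triples (w1, w2, w3) FROM '/path/to/triples.tsv';  -- placeholder path; use psql's \copy for a client-side file

-- Collapse duplicates once, keeping the occurrence count.
CREATE TABLE triples AS
SELECT w1, w2, w3, count(*) AS occurrences
FROM raw_triples
GROUP BY w1, w2, w3;

-- Order matters, so the composite key is (w1, w2, w3) in that order.
CREATE UNIQUE INDEX triples_w1_w2_w3_idx ON triples (w1, w2, w3);
ANALYZE triples;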
[ { "msg_contents": "\nHi -\nI'm wondering if anyone has had success doing a simultaneous\nload of one Pg dump to two different servers? The load command\nis actually run from two different workstations, but reading the\nsame pgdump-file.\n\nWe use this command from the command line (Solaris-10 OS):\n\nuncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname\n\nand, likewise wonder if we can run the same command on another\nworkstation, but reading the SAME 'pgdump-filename.Z'\nto load onto ANOTHER server ('pgserver-B'), i.e.:\n\nuncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname\n\nThanks for any advice.\nSusan Russo\n\n", "msg_date": "Fri, 22 Feb 2008 13:34:43 -0500 (EST)", "msg_from": "Susan Russo <[email protected]>", "msg_from_op": true, "msg_subject": "loading same instance of dump to two different servers\n simultaneously?" }, { "msg_contents": "Susan Russo wrote:\n> Hi -\n> I'm wondering if anyone has had success doing a simultaneous\n> load of one Pg dump to two different servers? The load command\n> is actually run from two different workstations, but reading the\n> same pgdump-file.\n> \n> We use this command from the command line (Solaris-10 OS):\n> \n> uncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname\n> \n> and, likewise wonder if we can run the same command on another\n> workstation, but reading the SAME 'pgdump-filename.Z'\n> to load onto ANOTHER server ('pgserver-B'), i.e.:\n> \n> uncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname\n\nI don't think this is really a postgres question, but the fact you're on \na UNIX type of OS, you should have no problem doing this. uncompress \nwill simply open the file separately for each shell session.\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Fri, 22 Feb 2008 14:26:39 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: loading same instance of dump to two different servers\n\tsimultaneously?" }, { "msg_contents": "Susan Russo wrote:\n> Hi -\n> I'm wondering if anyone has had success doing a simultaneous\n> load of one Pg dump to two different servers? The load command\n> is actually run from two different workstations, but reading the\n> same pgdump-file.\n> \n> We use this command from the command line (Solaris-10 OS):\n> \n> uncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname\n> \n> and, likewise wonder if we can run the same command on another\n> workstation, but reading the SAME 'pgdump-filename.Z'\n> to load onto ANOTHER server ('pgserver-B'), i.e.:\n> \n> uncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname\n\nI'm assuming the above line should have been:\n\nuncompress -c pgdump-filename.Z | psql -h pgserver-B pg-dbname\n ^\n ^\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Fri, 22 Feb 2008 14:27:49 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: loading same instance of dump to two different servers\n\tsimultaneously?" } ]
[ { "msg_contents": "\nSORRY -\n\nthese are the commands (i.e. pgserver-A and pgserver-B)\n\n======\n\nHi -\nI'm wondering if anyone has had success doing a simultaneous\nload of one Pg dump to two different servers? The load command\nis actually run from two different workstations, but reading the\nsame pgdump-file.\n\nWe use this command from the command line (Solaris-10 OS):\n\nuncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname\n\nand, likewise wonder if we can run the same command on another\nworkstation, but reading the SAME 'pgdump-filename.Z'\nto load onto ANOTHER server ('pgserver-B'), i.e.:\n\nuncompress -c pgdump-filename.Z | psql -h pgserver-B pg-dbname\n\n=====\nS\n\n", "msg_date": "Fri, 22 Feb 2008 13:37:43 -0500 (EST)", "msg_from": "Susan Russo <[email protected]>", "msg_from_op": true, "msg_subject": "CORRECTION to msg 'loading same instance of dump to two different\n\tservers simultaneously'" }, { "msg_contents": "No need to crosspost. I removed -performance from the Cc.\n\nOn Fri, 22 Feb 2008 13:37:43 -0500 (EST)\nSusan Russo <[email protected]> wrote:\n> I'm wondering if anyone has had success doing a simultaneous\n> load of one Pg dump to two different servers? The load command\n> is actually run from two different workstations, but reading the\n> same pgdump-file.\n\nDo you have any reason to doubt that this would work? Have you tried\nit? As long as the targets are different I don't see any problem.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 22 Feb 2008 14:18:03 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CORRECTION to msg 'loading same instance of dump to\n\ttwo different servers simultaneously'" }, { "msg_contents": "On Fri, 22 Feb 2008, Susan Russo wrote:\n> I'm wondering if anyone has had success doing a simultaneous\n> load of one Pg dump to two different servers? The load command\n> is actually run from two different workstations, but reading the\n> same pgdump-file.\n\nPlease don't cross-post.\n\nI can't see any problems with doing that. You have two independent \nworkstations uncompressing the data, sending it over the network to two \nindependent servers. The only common points are the fileserver and the \nnetwork. Assuming both of those can keep up, you won't even have a \nperformance penalty.\n\nMatthew\n\n-- \nAs you approach the airport, you see a sign saying \"Beware - low\nflying airplanes\". There's not a lot you can do about that. Take \nyour hat off? -- Michael Flanders\n", "msg_date": "Mon, 25 Feb 2008 13:11:04 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CORRECTION to msg 'loading same instance of dump to\n\ttwo different servers simultaneously'" } ]
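The corrected pair of commands from the two threads above, run concurrently from a single workstation as a sketch; the hostnames and dump filename are the ones quoted in the messages:

uncompress -c pgdump-filename.Z | psql -h pgserver-A pg-dbname &
uncompress -c pgdump-filename.Z | psql -h pgserver-B pg-dbname &
wait    # each pipeline opens and reads the same dump file independently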
[ { "msg_contents": "Hi. I'm trying to optimize the performance of a database whose main purpose\nis to support two (rather similar) kinds of queries. The first kind, which\nis expected to be the most common (I estimate it will account for about 90%\nof all the queries performed on this DB), has the following general\nstructure:\n\n(Q1) SELECT a1.word, a2.word\n FROM T a1 JOIN T a2 USING ( zipk )\n WHERE a1.type = <int1>\n AND a2.type = <int2>;\n\n...where <int1> and <int2> stand for some two integers. In English, this\nquery essentially executes an inner join between two \"virtual subtables\" of\ntable T, which are defined by the value of the type column. For brevity, I\nwill refer to these (virtual) subtables as T<int1> and T<int2>. (I should\npoint out that T holds about 2 million records, spread roughly evenly over\nabout 100 values of the type column. So each of these virtual subtables has\nabout 20K records. Also, for practical purposes T may be regarded as an\nimmutable, read-only table, since it gets re-built from scratch about once a\nmonth. And, FWIW, all the columns mentioned in this post have a NOT\nNULLconstraint.)\n\nThe second form is similar to the first, except that now the join is taken\nbetween T and T<int2>:\n\n(Q2) SELECT a1.word, a2.word\n FROM T a1 JOIN T a2 USING ( zipk )\n WHERE a2.type = <int2>;\n\n(Both the forms above are somewhat oversimplified relative to the actual\nsituation; in our actual application, the joins are actually left outer\nones, and each query also involves an additional inner join with another\ntable, S. For the sake of completeness, I give the \"real-world\" versions of\nthese queries at the end of this post, but I think that for the purpose of\nmy question, the additional complications they entail can be neglected.)\n\nOne way to speed (Q1) would be to break T into its subtables, i.e. to create\nT1, T2, T3, ... , T100 as bona fide tables. Then the query would become a\nsimple join without the two condition of the original's WHERE clause, which\nI figure should make it noticeably faster.\n\nBut since the second kind of query (Q2) requires T, we can't get rid of this\ntable, so all the data would need to be stored twice, once in T and once in\nsome T<int*>.\n\nIn trying to come up with a way around this duplication, it occurred to me\nthat instead of creating tables T1, T2, etc., I could create the analogous\nviews V1, V2, etc. (e.g. CREATE VIEW V1 AS SELECT * FROM T WHERE type = 1).\n With this design, the two queries above would become\n\n(Q1*) SELECT V<int1>.word, V<int2>.word\n FROM V<int1> JOIN V<int2> USING ( zipk );\n\n(Q2*) SELECT T.word, V<int2>.word\n FROM T JOIN V<int2> USING ( zipk );\n\nOf course, I expect that using views V<int1> and V<int2>... would result in\na loss in performance relative to a version that used bona fide tables\nT<int1> and T<int2>. My question is, how can I minimize this performance\nloss?\n\nMore specifically, how can I go about building table T and the views\nV<int?>'s to maximize the performance of (Q1)? For example, I'm thinking\nthat if T had an additional id column and were built in such a way that all\nthe records belonging to each V<int?> were physically contiguous, and (say)\nhad contiguous values in the id column, then I could define each view like\nthis\n\n CREATE VIEW V<int1> AS SELECT * FROM T\n WHERE <start_int1> <= id AND id < <start_int1+1>;\n\nSo my question is, what design would make querying V1, V2, V3 ... as fast as\npossible? 
Is it possible to approach the performance of the design that\nuses bona fide tables T1, T2, T3, ... instead of views V1, V2, V3 ...?\n\nThank you very much for reading this long post, and many thanks in advance\nfor your comments!\n\nKynn\n\n\nP.S. Here are the actual form of the queries. They now include an initial\njoin with table S, and the join with T<int2> (or V<int2>) is a left outer\njoin. Interestingly, even though the queries below that use views (i.e.\nQ1*** and Q2***) are not much more complex-looking than before, the other\ntwo (Q1** and Q2**) are. I don't know if this is because my ineptitude with\nSQL, but I am not able to render (Q1**) and (Q2**) without resorting to the\nsubquery sq.\n\n(Q1**) SELECT a1.word, sq.word FROM\n S JOIN T a1 USING ( word )\n LEFT JOIN ( SELECT * FROM T a2\n WHERE a2.type = <int2> ) sq USING ( zipk )\n WHERE a1.type = <int1>;\n\n(Q2**) SELECT a1.word, sq.word FROM\n S JOIN T a1 USING ( word )\n LEFT JOIN ( SELECT * FROM T a2\n WHERE a2.type = <int2> ) sq USING ( zipk )\n\n ---------------------------------------------\n\n(Q1***) SELECT V<int1>.word, V<int2>.word FROM\n S JOIN V<int1> USING ( word )\n LEFT JOIN V<int2> USING ( zipk );\n\n(Q2***) SELECT T.word, V<int2>.word\n FROM S JOIN T USING ( word )\n LEFT JOIN V<int2> USING ( zipk );\n\nHi.  I'm trying to optimize the performance of a database whose main purpose is to support two (rather similar) kinds of queries.  The first kind, which is expected to be the most common (I estimate it will account for about 90% of all the queries performed on this DB), has the following general structure:\n(Q1)   SELECT a1.word, a2.word\n         FROM T a1 JOIN T a2 USING ( zipk )\n        WHERE a1.type = <int1>\n          AND a2.type = <int2>;\n...where <int1> and <int2> stand for some two integers.  In English, this query essentially executes an inner join between two \"virtual subtables\" of table T, which are defined by the value of the type column.  For brevity, I will refer to these (virtual) subtables as T<int1> and T<int2>.  (I should point out that T holds about 2 million records, spread roughly evenly over about 100 values of the type column.  So each of these virtual subtables has about 20K records.  Also, for practical purposes T may be regarded as an immutable, read-only table, since it gets re-built from scratch about once a month.  And, FWIW, all the columns mentioned in this post have a NOT NULL constraint.)\nThe second form is similar to the first, except that now the join is taken between T and T<int2>:(Q2)   SELECT a1.word, a2.word\n         FROM T a1 JOIN T a2 USING ( zipk )\n        WHERE a2.type = <int2>;\n(Both the forms above are somewhat oversimplified relative to the actual situation; in our actual application, the joins are actually left outer ones, and each query also involves an additional inner join with another table, S.  For the sake of completeness, I give the \"real-world\" versions of these queries at the end of this post, but I think that for the purpose of my question, the additional complications they entail can be neglected.)\nOne way to speed (Q1) would be to break T into its subtables, i.e. to create T1, T2, T3, ... , T100 as bona fide tables.  
Then the query would become a simple join without the two condition of the original's WHERE clause, which I figure should make it noticeably faster.\nBut since the second kind of query (Q2) requires T, we can't get rid of this table, so all the data would need to be stored twice, once in T and once in some T<int*>.\nIn trying to come up with a way around this duplication, it occurred to me that instead of creating tables T1, T2, etc., I could create the analogous views V1, V2, etc.  (e.g. CREATE VIEW V1 AS SELECT * FROM T WHERE type = 1).  With this design, the two queries above would become\n(Q1*)  SELECT V<int1>.word, V<int2>.word\n         FROM V<int1> JOIN V<int2> USING ( zipk );\n\n(Q2*)  SELECT T.word, V<int2>.word\n         FROM T JOIN V<int2> USING ( zipk );\nOf course, I expect that using views V<int1> and V<int2>... would result in a loss in performance relative to a version that used bona fide tables T<int1> and T<int2>.  My question is, how can I minimize this performance loss?\nMore specifically, how can I go about building table T and the views V<int?>'s to maximize the performance of (Q1)?  For example, I'm thinking that if T had an additional id column and were built in such a way that all the records belonging to each V<int?> were physically contiguous, and (say) had contiguous values in the id column, then I could define each view like this\n  CREATE VIEW V<int1> AS SELECT * FROM T\n   WHERE <start_int1> <= id AND id < <start_int1+1>;\nSo my question is, what design would make querying V1, V2, V3 ... as fast as possible?  Is it possible to approach the performance of the design that uses bona fide tables T1, T2, T3, ... instead of views V1, V2, V3 ...?\nThank you very much for reading this long post, and many thanks in advance for your comments!Kynn\nP.S.  Here are the actual form of the queries.  They now include an initial join with table S, and the join with T<int2> (or V<int2>) is a left outer join.  Interestingly, even though the queries below that use views (i.e. Q1*** and Q2***) are not much more complex-looking than before, the other two (Q1** and Q2**) are.  I don't know if this is because my ineptitude with SQL, but I am not able to render (Q1**) and (Q2**) without resorting to the subquery sq.\n(Q1**)  SELECT a1.word, sq.word FROM\n               S      JOIN T a1 USING ( word )\n                 LEFT JOIN ( SELECT * FROM T a2\n                             WHERE a2.type = <int2> ) sq USING ( zipk )\n         WHERE a1.type = <int1>;\n\n(Q2**)  SELECT a1.word, sq.word FROM\n               S      JOIN T a1 USING ( word )\n                 LEFT JOIN ( SELECT * FROM T a2\n                             WHERE a2.type = <int2> ) sq USING ( zipk )\n\n       ---------------------------------------------\n\n(Q1***) SELECT V<int1>.word, V<int2>.word FROM\n               S      JOIN V<int1> USING ( word )\n                 LEFT JOIN V<int2> USING ( zipk );\n\n(Q2***) SELECT T.word, V<int2>.word\n          FROM S      JOIN T       USING ( word )\n                 LEFT JOIN V<int2> USING ( zipk );", "msg_date": "Fri, 22 Feb 2008 15:49:59 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Q on views and performance" }, { "msg_contents": "On 2008-02-22 12:49, Kynn Jones wrote:\n> Of course, I expect that using views V<int1> and V<int2>... would \n> result in a loss in performance relative to a version that used bona \n> fide tables T<int1> and T<int2>. 
My question is, how can I minimize \n> this performance loss?\n\nThat used to be my thoughts too, but I have found over the years that \nthe PostgreSQL execution planner is able to \"flatten\" SELECTs using \nVIEWs, ALMOST ALWAYS in a way that does not adversely affect \nperformance, and often gives an IMPROVEMENT in performance, probably \nbecause by using VIEWs I am stating the query problem in a better way \nthan if I try to guess the best way to optimize a SELECT.\n\nI have at least a 10:1 ratio of VIEWs to TABLEs. Occasionally, with \nsome query that is slow, I will try to rewrite it without VIEWs. This \nALMOST NEVER results in an improvement in performance, and when it does, \nI am able to find another way to write the VIEW and SELECT to recapture \nthe gain.\n\n-- Dean\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.\n\n", "msg_date": "Fri, 22 Feb 2008 17:48:48 -0800", "msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson (DB Administrator) <\[email protected]> wrote:\n\n> On 2008-02-22 12:49, Kynn Jones wrote:\n> > Of course, I expect that using views V<int1> and V<int2>... would\n> > result in a loss in performance relative to a version that used bona\n> > fide tables T<int1> and T<int2>. My question is, how can I minimize\n> > this performance loss?\n>\n> That used to be my thoughts too, but I have found over the years that\n> the PostgreSQL execution planner is able to \"flatten\" SELECTs using\n> VIEWs, ALMOST ALWAYS in a way that does not adversely affect\n> performance, and often gives an IMPROVEMENT in performance, probably\n> because by using VIEWs I am stating the query problem in a better way\n> than if I try to guess the best way to optimize a SELECT.\n\n\nWell, the last consideration you mention there does not apply to the two\nalternatives I'm comparing because they differ only in that one uses views\nV1, V2, V3, ... , V100 where the other one uses the corresponding tables T1,\nT2, T3, ... , T100, so the query statements would be identical in both\ncases.\n\n\n> I have at least a 10:1 ratio of VIEWs to TABLEs. Occasionally, with\n> some query that is slow, I will try to rewrite it without VIEWs. This\n> ALMOST NEVER results in an improvement in performance...\n\n\nThat's truly amazing! Just to make sure I get you right, you're saying that\nwhen you replace a view by its equivalent table you see no performance gain?\n How could it be? With views every query entails the additional work of\nsearching the underlying tables for the records that make up the views...\n\nOK, if I think a bit more about it I suppose that a view could be\nimplemented for performance as a special sort of table consisting of a\nsingle column of pointers to the \"true\" records, in which case using views\nwould entail only the cost of this indirection, and not the cost of a\nsearch... (And also the cost of maintaining this pointer table, if the\nunderlying tables are mutable.) 
So I guess views could be implemented in\nsuch a way that the difference in SELECT performance relative to replacing\nthem with tables would be negligible...\n\nAnyway, your post once again reminded me of awesomeness of PostgreSQL.\n Props to the developers!\n\nkynn\n\nOn Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson (DB Administrator) <[email protected]> wrote:\nOn 2008-02-22 12:49, Kynn Jones wrote:\n> Of course, I expect that using views V<int1> and V<int2>... would\n> result in a loss in performance relative to a version that used bona\n> fide tables T<int1> and T<int2>.  My question is, how can I minimize\n> this performance loss?\n\nThat used to be my thoughts too, but I have found over the years that\nthe PostgreSQL execution planner is able to \"flatten\" SELECTs using\nVIEWs, ALMOST ALWAYS in a way that does not adversely affect\nperformance, and often gives an IMPROVEMENT in performance, probably\nbecause by using VIEWs I am stating the query problem in a better way\nthan if I try to guess the best way to optimize a SELECT.Well, the last consideration you mention there does not apply to the two alternatives I'm comparing because they differ only in that one uses views V1, V2, V3, ... , V100 where the other one uses the corresponding tables T1, T2, T3, ... , T100, so the query statements would be identical in both cases.\n I have at least a 10:1 ratio of VIEWs to TABLEs.  Occasionally, with\nsome query that is slow, I will try to rewrite it without VIEWs.  This\nALMOST NEVER results in an improvement in performance...That's truly amazing!  Just to make sure I get you right, you're saying that when you replace a view by its equivalent table you see no performance gain?  How could it be?  With views every query entails the additional work of searching the underlying tables for the records that make up the views...\nOK, if I think a bit more about it I suppose that a view could be implemented for performance as a special sort of table consisting of a single column of pointers to the \"true\" records, in which case using views would entail only the cost of this indirection, and not the cost of a search...  (And also the cost of maintaining this pointer table, if the underlying tables are mutable.)  So I guess views could be implemented in such a way that the difference in SELECT performance relative to replacing them with tables would be negligible...\nAnyway, your post once again reminded me of awesomeness of PostgreSQL.  Props to the developers!kynn", "msg_date": "Sat, 23 Feb 2008 08:07:35 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson (DB Administrator) <\[email protected]> wrote:\n\n> On 2008-02-22 12:49, Kynn Jones wrote:\n> > Of course, I expect that using views V<int1> and V<int2>... would\n> > result in a loss in performance relative to a version that used bona\n> > fide tables T<int1> and T<int2>. My question is, how can I minimize\n> > this performance loss?\n>\n> That used to be my thoughts too, but I have found over the years that\n> the PostgreSQL execution planner is able to \"flatten\" SELECTs using\n> VIEWs, ALMOST ALWAYS in a way that does not adversely affect\n> performance, and often gives an IMPROVEMENT in performance, probably\n> because by using VIEWs I am stating the query problem in a better way\n> than if I try to guess the best way to optimize a SELECT.\n>\n> I have at least a 10:1 ratio of VIEWs to TABLEs. 
Occasionally, with\n> some query that is slow, I will try to rewrite it without VIEWs. This\n> ALMOST NEVER results in an improvement in performance, and when it does,\n> I am able to find another way to write the VIEW and SELECT to recapture\n> the gain.\n\n\nSince you have experience working with views, let me ask you this. The\nconverse strategy to the one I described originally would be to create the\nindividual tables T1, T2, T3, ..., T100, but instead of keeping around the\noriginal (and now redundant) table T, replace it with a view V made up of\nthe union of T1, T2, T3, ..., T100. The problem with this alternative is\nthat one cannot index V, or define a primary key constraint for it, because\nit's a view. This means that a search in V, even for a primary key value,\nwould be *have to be* very inefficient (i.e. I don't see how even the very\nclever PostgreSQL implementers could get around this one!), because the\nengine would have to search *all* the underlying tables, T1 through T100,\neven if it found the desired record in T1, since it has no way of knowing\nthat the value is unique all across V.\n\nIs there a way around this?\n\nkynn\n\nOn Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson (DB Administrator) <[email protected]> wrote:\nOn 2008-02-22 12:49, Kynn Jones wrote:\n> Of course, I expect that using views V<int1> and V<int2>... would\n> result in a loss in performance relative to a version that used bona\n> fide tables T<int1> and T<int2>.  My question is, how can I minimize\n> this performance loss?\n\nThat used to be my thoughts too, but I have found over the years that\nthe PostgreSQL execution planner is able to \"flatten\" SELECTs using\nVIEWs, ALMOST ALWAYS in a way that does not adversely affect\nperformance, and often gives an IMPROVEMENT in performance, probably\nbecause by using VIEWs I am stating the query problem in a better way\nthan if I try to guess the best way to optimize a SELECT.\n\nI have at least a 10:1 ratio of VIEWs to TABLEs.  Occasionally, with\nsome query that is slow, I will try to rewrite it without VIEWs.  This\nALMOST NEVER results in an improvement in performance, and when it does,\nI am able to find another way to write the VIEW and SELECT to recapture\nthe gain.Since you have experience working with views, let me ask you this.  The converse strategy to the one I described originally would be to create the individual tables T1, T2, T3, ..., T100, but instead of keeping around the original (and now redundant) table T, replace it with a view V made up of the union of T1, T2, T3, ..., T100.  The problem with this alternative is that one cannot index V, or define a primary key constraint for it, because it's a view.  This means that a search in V, even for a primary key value, would be *have to be* very inefficient (i.e. I don't see how even the very clever PostgreSQL implementers could get around this one!), because the engine would have to search *all* the underlying tables, T1 through T100, even if it found the desired record in T1, since it has no way of knowing that the value is unique all across V.\nIs there a way around this?kynn", "msg_date": "Sat, 23 Feb 2008 08:59:35 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "Hi Kynn,\n\nLets take these up as cases :\n\nCase A: keep one large table T and keep V1 .... 
V100\nCase B: keep one large table T and store the the same data also in T1...T100\nCase C: keep T1...T100 and store one V which is a UNION of T1 ... T100\n\n1. The way I look at it, in case B although fetching data instead of\nevaluating VIEWs would help (when compared to case A), you are missing a\nsmall negative fact that your caching mechanism would be severely hit by\nhaving to cache two copies of the same data once in T1..T100 and the second\ntime in T.\n\n2. Case C seems to me like a particularly bad idea... and the indexing point\nthat you make, seems all the more complicated... I don't know much about it,\nso I would try to avoid it.\n\n3. Also, it seems you got the Postgresql VIEW mechanism wrong here. What\nDean was trying to say was that PG flattens the VIEW (and its JOINS)\ndirectly into a *single* SELECT query *before* it hits even the first\nrecord. The per-record redirection is not how it approaches VIEWs which is\npretty much why Dean's experience says that relying on the Parser to\ngenerate a better SQL (compared to our expertise at optimising it) is not\nreally a bad idea.\n\n4. Personally, Case A is a far far simpler approach to understability (as\nwell as data storage) and if you ask my take ? I'll take Case A :)\n\n*Robins Tharakan\n*\n\nHi Kynn,Lets take these up as cases :\nCase A: keep one large table T and keep V1 .... V100Case B: keep one large table T and store the the same data also in T1...T100\nCase C: keep T1...T100 and store one V which is a UNION of T1 ... T1001. The way I look at it, in case B although fetching data instead of evaluating VIEWs would help (when compared to case A), you are missing a small negative fact that your caching mechanism would be severely hit by having to cache two copies of the same data once in T1..T100 and the second time in T.\n2. Case C seems to me like a particularly bad idea... and the indexing point that you make, seems all the more complicated... I don't know much about it, so I would try to avoid it.\n3. Also, it seems you got the Postgresql VIEW mechanism wrong here. What Dean was trying to say was that PG flattens the VIEW (and its JOINS) directly into a *single* SELECT query *before* it hits even the first record. The per-record redirection is not how it approaches VIEWs which is pretty much why Dean's experience says that relying on the Parser to generate a better SQL (compared to our expertise at optimising it) is not really a bad idea.\n4. Personally, Case A is a far far simpler approach to understability (as well as data storage) and if you ask my take ? I'll take Case A :)\nRobins Tharakan", "msg_date": "Sat, 23 Feb 2008 20:04:57 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On 2008-02-23 05:59, Kynn Jones wrote:\n> On Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson (DB Administrator) \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> ...\n>\n>\n> Since you have experience working with views, let me ask you this. \n> The converse strategy to the one I described originally would be to \n> create the individual tables T1, T2, T3, ..., T100, but instead of \n> keeping around the original (and now redundant) table T, replace it \n> with a view V made up of the union of T1, T2, T3, ..., T100. The \n> problem with this alternative is that one cannot index V, or define a \n> primary key constraint for it, because it's a view. 
This means that a \n> search in V, even for a primary key value, would be *have to be* very \n> inefficient (i.e. I don't see how even the very clever PostgreSQL \n> implementers could get around this one!), because the engine would \n> have to search *all* the underlying tables, T1 through T100, even if \n> it found the desired record in T1, since it has no way of knowing that \n> the value is unique all across V.\n>\n> Is there a way around this?\n>\n> kynn\n>\nOh, I wouldn't create separate tables and do a UNION of them, I'd think \nthat would be inefficient.\n\nI didn't look in detail at your previous eMail, but I will now:\n\n1. You haven't told us the distribution of \"zipk\", or what the tables \nare indexed on, or what type of performance you are expecting. Your \ninitial examples don't help much unless you actually have performance \nnumbers or EXPLAIN output for them, since adding the third JOIN \nsignificantly changes the picture, as does changing one of the JOINs to \na LEFT JOIN.\n\n2. In your actual (Q1** and Q2**) examples, why is one JOIN an INNER \nJOIN and the other one a LEFT JOIN? Given your description of Q1 at the \ntop of your message, that doesn't make sense to me.\n\n3. Why not write:\n\nCREATE VIEW txt AS\n SELECT a1.word AS word1, a1.type AS type1, a2.word AS word2, a2.type \nAS type2\n FROM T a1 [LEFT] JOIN T a2 USING( zipk ); -- Use \"LEFT\" if appropriate\nSELECT word1, word1\n FROM S JOIN txt ON word = word1\n WHERE type1 = <int1> AND type2 = <int2>;\n\nIf either of those (either with or without the \"LEFT\") are not \nequivalent to your problem, how about just:\n\nSELECT a1.word AS word1, a2.word AS word2\n FROM S JOIN T a1 USING( word)\n [LEFT] JOIN T a2 USING( zipk ) -- Use \"LEFT\" if appropriate\n WHERE a1.type = <int1> AND a2.type = <int2>;\n\nShow us (using EXPLAIN) what the query planner thinks of each of these.\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.\n\n\n\n\n\n\n\n\nOn 2008-02-23 05:59, Kynn Jones wrote:\n\nOn Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson\n(DB Administrator) <[email protected]>\nwrote:\n...\n\n\nSince you have experience working with views, let me ask you\nthis.  The converse strategy to the one I described originally would be\nto create the individual tables T1, T2, T3, ..., T100, but instead of\nkeeping around the original (and now redundant) table T, replace it\nwith a view V made up of the union of T1, T2, T3, ..., T100.  The\nproblem with this alternative is that one cannot index V, or define a\nprimary key constraint for it, because it's a view.  This means that a\nsearch in V, even for a primary key value, would be *have to be* very\ninefficient (i.e. I don't see how even the very clever PostgreSQL\nimplementers could get around this one!), because the engine would have\nto search *all* the underlying tables, T1 through T100, even if it\nfound the desired record in T1, since it has no way of knowing that the\nvalue is unique all across V.\n\n\nIs there a way around this?\n\n\nkynn\n\n\n\n\nOh, I wouldn't create separate tables and do a UNION of them, I'd think\nthat would be inefficient.\n\nI didn't look in detail at your previous eMail, but I will now:\n\n1. You haven't told us the distribution of \"zipk\", or what the tables\nare indexed on, or what type of performance you are expecting.  
Your\ninitial examples don't help much unless you actually have performance\nnumbers or EXPLAIN output for them, since adding the third JOIN\nsignificantly changes the picture, as does changing one of the JOINs to\na LEFT JOIN.\n\n2. In your actual (Q1** and Q2**) examples, why is one JOIN an INNER\nJOIN and the other one a LEFT JOIN?  Given your description of Q1 at\nthe top of your message, that doesn't make sense to me.\n\n3. Why not write:\n\nCREATE VIEW txt AS\n  SELECT a1.word AS word1, a1.type AS type1, a2.word AS word2, a2.type\nAS type2\n    FROM T a1 [LEFT] JOIN T a2 USING( zipk );  -- Use \"LEFT\" if\nappropriate\nSELECT word1, word1\n  FROM S JOIN txt ON word = word1\n  WHERE type1 = <int1> AND type2 = <int2>;\n\nIf either of those (either with or without the \"LEFT\") are not\nequivalent to your problem, how about just:\n\nSELECT a1.word AS word1, a2.word AS word2\n  FROM S JOIN T a1 USING( word)\n    [LEFT] JOIN T a2 USING( zipk )  -- Use \"LEFT\" if\nappropriate\n  WHERE a1.type = <int1> AND a2.type = <int2>;\n\nShow us (using EXPLAIN) what the query planner thinks of each of these.\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.", "msg_date": "Sat, 23 Feb 2008 07:08:57 -0800", "msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On 2008-02-23 07:08, Dean Gibson (DB Administrator) wrote:\n> ...\n>\n>\n> SELECT word1, word1\n> FROM S JOIN txt ON word = word1\n> WHERE type1 = <int1> AND type2 = <int2>;\n>\n> ...\nOops that should be:\n\nSELECT word1, word2\n FROM S JOIN txt ON word = word1\n WHERE type1 = <int1> AND type2 = <int2>;\n\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.\n\n\n\n\n\n\n\nOn 2008-02-23 07:08, Dean Gibson (DB Administrator) wrote:\n\n\n\n...\n\n\nSELECT word1, word1\n  FROM S JOIN txt ON word = word1\n  WHERE type1 = <int1> AND type2 = <int2>;\n\n...\n\nOops that should be:\n\nSELECT word1, word2\n  FROM S JOIN txt ON word = word1\n  WHERE type1 = <int1> AND type2 = <int2>;\n\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.", "msg_date": "Sat, 23 Feb 2008 07:29:55 -0800", "msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "Hi, Dean. The system I'm working with is very similar \"in spirit\" to a\nlarge multilingual dictionary covering 100 languages. Using this analogy,\nthe \"type\" column would correspond to the language, and the zipk column\nwould correspond to some language-independent key associated with a concept\n(\"concept key\" for short). So, if it were indeed a multilingual dictionary,\nrecords in T would look like\n word | zipk | language\n---------+------+-----------\n house | 1234 | <english>\n casa | 1234 | <spanish>\n haus | 1234 | <german>\n piano | 2345 | <english>\n piano | 2345 | <spanish>\n cat | 3456 | <english>\n chat | 3456 | <french>\n chat | 4567 | <english>\n plausch | 4567 | <german>\n\n...where I used the notation <lang> to denote \"the integer id assigned to\nlanguage lang\". Therefore typically there are about 100 records in T for\nany given zipk, one for each language. 
But the correspondence is not\nperfect, since, for example, some languages have, proverbially, more than\none word for snow, and some (maybe from some tropical island in the South\nPacific) have none. (This last case, BTW, is what accounts for the use of\nleft joins, as will become clear in a minute.)\n\nThe table S can be thought of a table consisting of a collection of words to\nbe translated to some target language. In the first type of query (Q1), all\nthe words in S are effectively declared to belong to the same source\nlanguage, whereas in the second type of query (Q2) the source language for\nthe words in S is left unspecified (in this case S may contain words from\nvarious languages, or words--like \"piano\" or \"chat\" in the example\nabove--that belong simultaneously to different languages, and which may (e.g.\npiano) or may not (e.g. chat) have the same zipk [concept key] for each of\nthese languages).\n\nSo, regarding your question about (Q1**) and (Q2**):\n\n(Q1**) SELECT a1.word, sq.word FROM\n S JOIN T a1 USING ( word )\n LEFT JOIN ( SELECT * FROM T a2\n WHERE a2.type = <int2> ) sq USING ( zipk )\n WHERE a1.type = <int1>;\n\n(Q2**) SELECT a1.word, sq.word FROM\n S JOIN T a1 USING ( word )\n LEFT JOIN ( SELECT * FROM T a2\n WHERE a2.type = <int2> ) sq USING ( zipk )\n\n...the inner join with S is intended to pick out all the records in the\nsource table (either T<int1> in Q1** or T in Q2**) corresponding to words in\nS, while the second (left) join, is there to find all the \"translations\" in\nthe target language. I use a left join so that even those words in S for\nwhich no translations exist will show up in the query results.\n\n3. Why not write:\n>\n> CREATE VIEW txt AS\n> SELECT a1.word AS word1, a1.type AS type1, a2.word AS word2, a2.type AS\n> type2\n> FROM T a1 [LEFT] JOIN T a2 USING( zipk ); -- Use \"LEFT\" if\n> appropriate\n> SELECT word1, word1\n> FROM S JOIN txt ON word = word1\n> WHERE type1 = <int1> AND type2 = <int2>;\n>\n\nThis is would indeed produce the same results as Q1, but this approach would\nrequire defining about 10,000 views, one for each possible pair of int1 and\nint2 (or pair of languages, to continue the multilingual dictionary\nanalogy), which freaks me out for some reason. (Actually, the number of\nsuch views would be many more than that, because in the actual application\nthere is not just one T but several dozen, similar to what would happen to\nthe schema in the multilingual dictionary analogy if we wanted to\npre-segregate the words according to some categories, say a T for animals, a\nT for fruits, a T for verbs, a T for professions, etc.)\n\n(I need to do a bit more work before I can post the EXPLAIN results.)\n\nkynn\n\nHi, Dean.  The system I'm working with is very similar \"in spirit\" to a large multilingual dictionary covering 100 languages.  Using this analogy, the \"type\" column would correspond to the language, and the zipk column would correspond to some language-independent key associated with a concept (\"concept key\" for short).  So, if it were indeed a multilingual dictionary, records in T would look like\n  word   | zipk | language\n---------+------+----------- house   | 1234 | <english> casa    | 1234 | <spanish> haus    | 1234 | <german> piano   | 2345 | <english> piano   | 2345 | <spanish>\n cat     | 3456 | <english> chat    | 3456 | <french> chat    | 4567 | <english> plausch | 4567 | <german>\n...where I used the notation <lang> to denote \"the integer id assigned to language lang\".  
Therefore typically there are about 100 records in T for any given zipk, one for each language.  But the correspondence is not perfect, since, for example, some languages have, proverbially, more than one word for snow, and some (maybe from some tropical island in the South Pacific) have none.  (This last case, BTW, is what accounts for the use of left joins, as will become clear in a minute.)\nThe table S can be thought of a table consisting of a collection of words to be translated to some target language.  In the first type of query (Q1), all the words in S are effectively declared to belong to the same source language, whereas in the second type of query (Q2) the source language for the words in S is left unspecified (in this case S may contain words from various languages, or words--like \"piano\" or \"chat\" in the example above--that belong simultaneously to different languages, and which may (e.g. piano) or may not (e.g. chat) have the same zipk [concept key] for each of these languages).\nSo, regarding your question about (Q1**) and (Q2**):\n(Q1**)  SELECT a1.word, sq.word FROM\n               S      JOIN T a1 USING ( word )\n                 LEFT JOIN ( SELECT * FROM T a2\n                             WHERE a2.type = <int2> ) sq USING ( zipk )\n         WHERE a1.type = <int1>;\n(Q2**)  SELECT a1.word, sq.word FROM\n               S      JOIN T a1 USING ( word )\n                 LEFT JOIN ( SELECT * FROM T a2\n                             WHERE a2.type = <int2> ) sq USING ( zipk )\n...the inner join with S is intended to pick out all the records in the source table (either T<int1> in Q1** or T in Q2**) corresponding to words in S, while the second (left) join, is there to find all the \"translations\" in the target language.  I use a left join so that even those words in S for which no translations exist will show up in the query results.\n\n\n3. Why not write:\n\nCREATE VIEW txt AS\n  SELECT a1.word AS word1, a1.type AS type1, a2.word AS word2, a2.type\nAS type2\n    FROM T a1 [LEFT] JOIN T a2 USING( zipk );  -- Use \"LEFT\" if\nappropriate\nSELECT word1, word1\n  FROM S JOIN txt ON word = word1\n  WHERE type1 = <int1> AND type2 = <int2>;This is would indeed produce the same results as Q1, but this approach would require defining about 10,000 views, one for each possible pair of int1 and int2 (or pair of languages, to continue the multilingual dictionary analogy), which freaks me out for some reason.  (Actually, the number of such views would be many more than that, because in the actual application there is not just one T but several dozen, similar to what would happen to the schema in the multilingual dictionary analogy if we wanted to pre-segregate the words according to some categories, say a T for animals, a T for fruits, a T for verbs, a T for professions, etc.)\n(I need to do a bit more work before I can post the EXPLAIN results.)kynn", "msg_date": "Sat, 23 Feb 2008 11:21:49 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On 2008-02-23 08:21, Kynn Jones wrote:\n> ...\n>\n> 3. 
Why not write:\n>\n> CREATE VIEW txt AS\n> SELECT a1.word AS word1, a1.type AS type1, a2.word AS word2,\n> a2.type AS type2\n> FROM T a1 [LEFT] JOIN T a2 USING( zipk ); -- Use \"LEFT\" if\n> appropriate\n> SELECT word1, word1\n> FROM S JOIN txt ON word = word1\n> WHERE type1 = <int1> AND type2 = <int2>;\n>\n>\n> This is would indeed produce the same results as Q1, but this approach \n> would require defining about 10,000 views, one for each possible pair \n> of int1 and int2\n\nWhy 10,000 views??? What's wrong with the ONE view above? You DON'T \nwant to be defining VIEWs based on actual tables VALUES; leave that to \nthe SELECT. For that matter, what's wrong with the final SELECT I \nlisted (below)?\n\nSELECT a1.word AS word1, a2.word AS word2\n FROM S JOIN T a1 USING( word )\n LEFT JOIN T a2 USING( zipk )\n WHERE a1.type = <int1> AND a2.type = <int2>;\n\n-- Dean\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.\n\n\n\n\n\n\n\nOn 2008-02-23 08:21, Kynn Jones wrote:\n...\n \n\n\n\n\n3. Why not write:\n\nCREATE VIEW txt AS\n  SELECT a1.word AS word1, a1.type AS type1, a2.word AS word2, a2.type\nAS type2\n    FROM T a1 [LEFT] JOIN T a2 USING( zipk );  -- Use \"LEFT\" if\nappropriate\nSELECT word1, word1\n  FROM S JOIN txt ON word = word1\n  WHERE type1 = <int1> AND type2 = <int2>;\n\n\n\n\nThis is would indeed produce the same results as Q1, but this\napproach would require defining about 10,000 views, one for each\npossible pair of int1 and int2\n\n\n\n\nWhy 10,000 views???  What's wrong with the ONE view above?  You DON'T\nwant to be defining VIEWs based on actual tables VALUES;  leave that to\nthe SELECT.  For that matter, what's wrong with the final SELECT I\nlisted (below)?\n\nSELECT a1.word AS word1, a2.word AS word2\n  FROM S JOIN T a1 USING( word )\n    LEFT JOIN T a2 USING( zipk )\n  WHERE a1.type = <int1> AND a2.type = <int2>;\n\n-- Dean\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.", "msg_date": "Sat, 23 Feb 2008 08:49:13 -0800", "msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On 2008-02-23 08:49, Dean Gibson (DB Administrator) wrote:\n> Why 10,000 views??? What's wrong with the ONE view above? You DON'T \n> want to be defining VIEWs based on actual tables VALUES; leave that \n> to the SELECT. For that matter, what's wrong with the final SELECT I \n> listed (below)?\n>\n> SELECT a1.word AS word1, a2.word AS word2\n> FROM S JOIN T a1 USING( word )\n> LEFT JOIN T a2 USING( zipk )\n> WHERE a1.type = <int1> AND a2.type = <int2>;\n>\n> -- Dean\nAmendment: I forgot, that if it's a LEFT JOIN you have to write it as:\n\nSELECT a1.word AS word1, a2.word AS word2\n FROM S JOIN T a1 USING( word )\n LEFT JOIN T a2 USING( zipk )\n WHERE a1.type = <int1> AND (a2.type = <int2> OR a2.type IS NULL);\n\n-- Dean\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.\n\n\n\n\n\n\n\nOn 2008-02-23 08:49, Dean Gibson (DB Administrator) wrote:\n\n\nWhy 10,000 views???  What's wrong with the ONE view above?  You DON'T\nwant to be defining VIEWs based on actual tables VALUES;  leave that to\nthe SELECT.  
For that matter, what's wrong with the final SELECT I\nlisted (below)?\n\nSELECT a1.word AS word1, a2.word AS word2\n  FROM S JOIN T a1 USING( word )\n    LEFT JOIN T a2 USING( zipk )\n  WHERE a1.type = <int1> AND a2.type = <int2>;\n\n-- Dean\n\nAmendment:  I forgot, that if it's a LEFT JOIN you have to write it as:\n\nSELECT a1.word AS word1, a2.word AS word2\n  FROM S JOIN T a1 USING( word )\n    LEFT JOIN T a2 USING( zipk )\n  WHERE a1.type = <int1> AND (a2.type = <int2> OR\na2.type IS NULL);\n\n-- Dean\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.", "msg_date": "Sat, 23 Feb 2008 08:55:42 -0800", "msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On Fri, 22 Feb 2008, Kynn Jones wrote:\n> Hi. I'm trying to optimize...\n>\n> (Q1) SELECT a1.word, a2.word\n> FROM T a1 JOIN T a2 USING ( zipk )\n> WHERE a1.type = <int1>\n> AND a2.type = <int2>;\n\nOkay, try this:\n\nCreate an index on T(type, zipk), and then CLUSTER on that index. That \nwill effectively group all the data for one type together and sort it by \nzipk, making a merge join very quick indeed. I'm not sure whether Postgres \nwill notice that, but it's worth a try.\n\n> More specifically, how can I go about building table T and the views\n> V<int?>'s to maximize the performance of (Q1)? For example, I'm thinking\n> that if T had an additional id column and were built in such a way that all\n> the records belonging to each V<int?> were physically contiguous, and (say)\n> had contiguous values in the id column, then I could define each view like\n> this\n\nThe above index and CLUSTER will effectively do this - you don't need to \nintroduce another field.\n\nAlternatively, you could go *really evil* and pre-join the table. \nSomething like this:\n\nCREATE TABLE evilJoin AS SELECT a1.type AS type1, a2.type AS type2,\n a1.zipk, a1.word AS word1, a2.word AS word2\n FROM T AS a1, T AS a2\n WHERE a1.zipk = a2.zipk\n ORDER BY a1.type, a2.type, a1.zipk;\nCREATE INDEX evilIndex1 ON evilJoin(type1, type2, zipk);\n\nThen your query becomes:\n\nSELECT word1, word2\n FROM evilJoin\n WHERE type1 = <int1>\n AND type2 = <int2>\n\nwhich should run quick. However, your cache usefulness will be reduced \nbecause of the extra volume of data.\n\nMatthew\n\n-- \n[About NP-completeness] These are the problems that make efficient use of\nthe Fairy Godmother. -- Computer Science Lecturer\n", "msg_date": "Mon, 25 Feb 2008 13:45:34 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "So, this email is directed much more towards Postgres Powers That Be. I \ncame across this problem a while ago, and I haven't checked whether it has \nbeen improved.\n\nOn Mon, 25 Feb 2008, I wrote:\n>> Hi. I'm trying to optimize...\n>> \n>> (Q1) SELECT a1.word, a2.word\n>> FROM T a1 JOIN T a2 USING ( zipk )\n>> WHERE a1.type = <int1>\n>> AND a2.type = <int2>;\n>\n> Create an index on T(type, zipk), and then CLUSTER on that index. That will \n> effectively group all the data for one type together and sort it by zipk, \n> making a merge join very quick indeed. 
I'm not sure whether Postgres will \n> notice that, but it's worth a try.\n\nStatistics are generated on fields in a table, and the one I'm interested \nin is the correlation coefficient which tells Postgres how costly an index \nscan sorted on that field would be. This entry is ONLY useful when the \nresult needs to be sorted by that exact field only. For example:\n\nCREATE TABLE test (a int, b int);\n// insert a bazillion entries\nCREATE INDEX testIndex ON test(a, b);\nCLUSTER test ON testIndex;\nANALYSE;\n\nSo now we have a table sorted by (a, b), but the statistics only record \nthe fact that it is sorted by a, and completely unsorted by b. If we run:\n\nSELECT * FROM test ORDER BY a;\n\nthen the query will run quickly, doing an index scan. However, if we run:\n\nSELECT * FROM test ORDER BY a, b;\n\nthen Postgres will not be able to use the index, because it cannot tell \nhow sequential the fetches from the index will be. Especially if we run:\n\nSELECT * FROM test WHERE a = <something> ORDER BY b;\n\nthen this is the case.\n\nSo, these observations were made a long time ago, and I don't know if they \nhave been improved. A while back I suggested a \"partial sort\" algorithm \nthat could take a stream sorted by a and turn it into a stream sorted by \n(a, b) at small cost. That would fix some instances of the problem. \nHowever, now I suggest that the statistics are in the wrong place.\n\nAt the moment, the correlation coefficient, which is an entry purely \ndesigned to indicate how good an index is at index scans, is a statistic \non the first field of the index. Why not create a correlation coefficient \nstatistic for the index as a whole instead, and store it elsewhere in the \nstatistics data? That way, instead of having to infer from the first field \nhow correlated an index is, and getting it wrong beyond the first field, \nyou can just look up the correlation for the index.\n\nOpinions?\n\nMatthew\n\n-- \nIf you let your happiness depend upon how somebody else feels about you,\nnow you have to control how somebody else feels about you. -- Abraham Hicks\n", "msg_date": "Mon, 25 Feb 2008 14:08:06 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On Mon, Feb 25, 2008 at 8:45 AM, Matthew <[email protected]> wrote:\n\n> On Fri, 22 Feb 2008, Kynn Jones wrote:\n> > Hi. I'm trying to optimize...\n> >\n> > (Q1) SELECT a1.word, a2.word\n> > FROM T a1 JOIN T a2 USING ( zipk )\n> > WHERE a1.type = <int1>\n> > AND a2.type = <int2>;\n>\n> Okay, try this:\n>\n> Create an index on T(type, zipk), and then CLUSTER on that index...\n\n\nThis is just GREAT!!! It fits the problem to a tee.\n\nMany, many thanks!\n\nAlso, including zipk in the index is a really nice extra boost. (If you\nhadn't mentioned it I would have just settled for clustering only on\ntype...)\n\nThanks for that also!\n\nKynn\n\nOn Mon, Feb 25, 2008 at 8:45 AM, Matthew <[email protected]> wrote:\nOn Fri, 22 Feb 2008, Kynn Jones wrote:\n> Hi.  I'm trying to optimize...\n>\n> (Q1)   SELECT a1.word, a2.word\n>         FROM T a1 JOIN T a2 USING ( zipk )\n>        WHERE a1.type = <int1>\n>          AND a2.type = <int2>;\n\nOkay, try this:\n\nCreate an index on T(type, zipk), and then CLUSTER on that index...This is just GREAT!!!  It fits the problem to a tee.\nMany, many thanks!Also, including zipk in the index is a really nice extra boost.  
(If you hadn't mentioned it I would have just settled for clustering only on type...)\nThanks for that also!Kynn", "msg_date": "Mon, 25 Feb 2008 11:50:44 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On Mon, 25 Feb 2008, Kynn Jones wrote:\n> This is just GREAT!!! It fits the problem to a tee.\n\nIt makes the queries quick then?\n\nMatthew\n\n-- \nThe only secure computer is one that's unplugged, locked in a safe,\nand buried 20 feet under the ground in a secret location...and i'm not\neven too sure about that one. --Dennis Huges, FBI\n", "msg_date": "Mon, 25 Feb 2008 16:56:32 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q on views and performance" }, { "msg_contents": "On Mon, Feb 25, 2008 at 11:56 AM, Matthew <[email protected]> wrote:\n\n> On Mon, 25 Feb 2008, Kynn Jones wrote:\n> > This is just GREAT!!! It fits the problem to a tee.\n>\n> It makes the queries quick then?\n\n\nIt is good that you ask. Clearly you know the story: a brilliant-sounding\noptimization that in practice has only a small effect at best...\n\nI'm totally puzzled. It makes absolutely no sense to me...\n\nFor my analysis, in addition to creating the index on (type, zipk) that you\nsuggested, I also added an extra column to T containing a random integer in\nthe range 0..99, and created an index on this, so that I could produce a\ntotally \"shuffled clustering\". I compared the performance in going from a\nrandomly-clustered table to a (type, zipk)-clustered table, and the output\nof EXPLAIN was encouraging, but when I ran the actual queries under EXPLAIN\nANALYZE the difference in execution time was negligible.\n\nLive and learn!\n\nActually, what's driving me absolutely insane is the documentation for\nEXPLAIN and for Pg's query planning in general. I've read the docs (in\nparticular the chapter on performance), but I still can't make any sense of\nEXPLAINs results, so I can't begin to understand why optimizations like the\none you suggested turned out to be ineffective. For example, the first\nlines of two recent EXPLAIN ANALYZE outputs are\n\nNested Loop Left Join (cost=58.00..1154.22 rows=626 width=26) (actual time=\n1.462..26.494 rows=2240 loops=1)\nMerge Left Join (cost=33970.96..34887.69 rows=58739 width=26) (actual time=\n106.961..126.589 rows=7042 loops=1)\n\nActual runtimes are 27ms and 128ms. The ratio 128/27 is much smaller than\none would expect from the relative costs of the two queries. It looks like\nthere is no proportionality at all between the estimated costs and actual\nrunning time... (BTW, all these runs of EXPLAIN were done after calls to\nVACUUM ANALYZE.) This is one of the many things I don't understand about\nthis case...\n\nWhat I would like to be able to do is to at least make enough sense of query\nplans to determine whether they are reasonable or not. This requires\nknowing the algorithms behind each type of query tree node, but I have not\nfound this info...\n\nOn the positive side, in the course of all this analysis I must have done\n*something* to improve the performance, because now even the unoptimized\nqueries are running pretty fast (e.g. queries that used to take about\n1.5seconds are now taking 130ms). 
But unfortunately I don't know what\nwas it\nthat I did to bring this speed-up about!\n\nAnyway, be that as it may, thank you very much for your suggestion.\n\nKynn\n\nOn Mon, Feb 25, 2008 at 11:56 AM, Matthew <[email protected]> wrote:\nOn Mon, 25 Feb 2008, Kynn Jones wrote:\n> This is just GREAT!!!  It fits the problem to a tee.\n\nIt makes the queries quick then?It is good that you ask.  Clearly you know the story: a brilliant-sounding optimization that in practice has only a small effect at best...\nI'm totally puzzled.  It makes absolutely no sense to me...For my analysis, in addition to creating the index on (type, zipk) that you suggested, I also added an extra column to T containing a random integer in the range 0..99, and created an index on this, so that I could produce a totally \"shuffled clustering\".  I compared the performance in going from a randomly-clustered table to a (type, zipk)-clustered table, and the output of EXPLAIN was encouraging, but when I ran the actual queries under EXPLAIN ANALYZE the difference in execution time was negligible.\nLive and learn!Actually, what's driving me absolutely insane is the documentation for EXPLAIN and for Pg's query planning in general.  I've read the docs (in particular the chapter on performance), but I still can't make any sense of EXPLAINs results, so I can't begin to understand why optimizations like the one you suggested turned out to be ineffective.  For example, the first lines of two recent EXPLAIN ANALYZE outputs are\nNested Loop Left Join  (cost=58.00..1154.22 rows=626 width=26) (actual time=1.462..26.494 rows=2240 loops=1)Merge Left Join  (cost=33970.96..34887.69 rows=58739 width=26) (actual time=106.961..126.589 rows=7042 loops=1)\nActual runtimes are 27ms and 128ms.  The ratio 128/27 is much smaller than one would expect from the relative costs of the two queries.  It looks like there is no proportionality at all between the estimated costs and actual running time...  (BTW, all these runs of EXPLAIN were done after calls to VACUUM ANALYZE.)  This is one of the many things I don't understand about this case...\nWhat I would like to be able to do is to at least make enough sense of query plans to determine whether they are reasonable or not.  This requires knowing the algorithms behind each type of query tree node, but I have not found this info...\nOn the positive side, in the course of all this analysis I must have done *something* to improve the performance, because now even the unoptimized queries are running pretty fast (e.g. queries that used to take about 1.5 seconds are now taking 130ms).  But unfortunately I don't know what was it that I did to bring this speed-up about!\nAnyway, be that as it may, thank you very much for your suggestion.Kynn", "msg_date": "Tue, 26 Feb 2008 11:49:21 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q on views and performance" } ]
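
Before moving on to the next thread: the clustering suggestion above is never shown end to end, so here is a rough, self-contained sketch of it. The table and column names (t, s, word, zipk, type) follow the thread; the generate_series test data, the row counts and the specific type values are invented purely for illustration, and the final query is a plain inner-join variant of the Q1-style join discussed above, not Kynn's exact production query.

-- Toy stand-ins for T and S; the data is made up for illustration only.
CREATE TABLE t AS
  SELECT md5(i::text)    AS word,
         (i / 100)::int  AS zipk,
         (i % 100)::int  AS type
    FROM generate_series(1, 1000000) AS g(i);

CREATE TABLE s AS
  SELECT word FROM t WHERE type = 1 LIMIT 10000;

-- Matthew's suggestion: index on (type, zipk), then CLUSTER on it, so the
-- rows for each type are physically contiguous and ordered by zipk.
CREATE INDEX t_type_zipk_idx ON t (type, zipk);
CLUSTER t USING t_type_zipk_idx;   -- on pre-8.3 servers: CLUSTER t_type_zipk_idx ON t
ANALYZE t;

-- What the planner records afterwards: correlation is tracked per column,
-- not per index, which is the limitation Matthew raises in his follow-up.
SELECT attname, correlation FROM pg_stats WHERE tablename = 't';

-- See what plan the clustered layout gets for the thread's two-way join.
EXPLAIN ANALYZE
SELECT a1.word, a2.word
  FROM s
  JOIN t a1 USING (word)
  JOIN t a2 USING (zipk)
 WHERE a1.type = 1
   AND a2.type = 2;

Running the EXPLAIN once before and once after the CLUSTER step is the easiest way to see whether the planner actually changes its join strategy, which is the question Kynn's later timing results leave open.
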
[ { "msg_contents": "Hi,\n\nI'm noticing a strange increase in the amount of time it takes to \nissue a NOTIFY statement.\n\nI have an existing app that provides a producer / consumer type of \nqueue and that uses the LISTEN / NOTIFY mechanism to signal the \nconsumers of new items arriving in the queue. The consumers then \nprocess these items and issue a notify to signal that they have been \nprocessed. In the past issuing these notifications happened very \nquickly, now on 8.3 I'm seeing all of them taking over 300ms and many \nof them taking 1500ms or more! The notifications are happening \noutside of any transactions (which is itself a probable area for \nperformance improvement, I realize) but I'm wondering what might have \nchanged between 8.1 (the version I was using in the past) and 8.3?\n\nTIA,\nJoel\n", "msg_date": "Sat, 23 Feb 2008 10:48:56 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "Joel Stevenson <[email protected]> writes:\n> I have an existing app that provides a producer / consumer type of \n> queue and that uses the LISTEN / NOTIFY mechanism to signal the \n> consumers of new items arriving in the queue. The consumers then \n> process these items and issue a notify to signal that they have been \n> processed. In the past issuing these notifications happened very \n> quickly, now on 8.3 I'm seeing all of them taking over 300ms and many \n> of them taking 1500ms or more!\n\nThat's strange, I would not have thought that listen/notify behavior\nwould change at all. How are you measuring this delay exactly?\nCan you put together a self-contained test case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Feb 2008 14:35:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3 " }, { "msg_contents": "At 11:58 PM -0500 2/23/08, Tom Lane wrote:\n>Joel Stevenson <[email protected]> writes:\n>>> That's strange, I would not have thought that listen/notify behavior\n>>> would change at all. How are you measuring this delay exactly?\n>>> Can you put together a self-contained test case?\n>\n>> Attached is a perl script that sort of simulates what's going on.\n>\n>Thanks for the script. It's not showing any particular problems here,\n>though. With log_min_duration_statement = 10, the only statements that\n>(slightly) exceed 10ms are the select count(*) from generate_series(1,\n>15000) ones.\n>\n>> Also of note, the iowait percentages on this quad core linux box jump\n>> to 30-40% while this test script is running, event though there's no\n>> table activity involved and the producer consumers pause for up to a\n>> second between iterations.\n>\n>This sounds a bit like pg_listener has gotten bloated. 
Try a \"VACUUM\n>VERBOSE pg_listener\" (as superuser) and see what it says.\n\nAt the moment (server is inactive):\n\npcdb=# VACUUM VERBOSE pg_listener;\nINFO: vacuuming \"pg_catalog.pg_listener\"\nINFO: \"pg_listener\": removed 1 row versions in 1 pages\nINFO: \"pg_listener\": found 1 removable, 21 nonremovable row versions \nin 28 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 2319 unused item pointers.\n28 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\nrunning the test script and then the above command:\n\npcdb=# VACUUM VERBOSE pg_listener;\nINFO: vacuuming \"pg_catalog.pg_listener\"\nINFO: \"pg_listener\": removed 693 row versions in 12 pages\nINFO: \"pg_listener\": found 693 removable, 21 nonremovable row \nversions in 28 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 2308 unused item pointers.\n28 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\nNumerous notifications took 1000ms or so to complete in the test \nscript execution between those two vacuum runs.\n\n>If that is the problem then the next question is why it got so much more\n>bloated than you were used to --- something wrong with vacuuming\n>procedures, perhaps?\n\nI have autovacuum on and using default settings. I have an explicit \nvacuum routine that runs nightly over the whole DB.\n\n-Joel\n", "msg_date": "Sat, 23 Feb 2008 21:22:29 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "Joel Stevenson <[email protected]> writes:\n>> This sounds a bit like pg_listener has gotten bloated. Try a \"VACUUM\n>> VERBOSE pg_listener\" (as superuser) and see what it says.\n\n> At the moment (server is inactive):\n\n> pcdb=# VACUUM VERBOSE pg_listener;\n> INFO: vacuuming \"pg_catalog.pg_listener\"\n> INFO: \"pg_listener\": removed 1 row versions in 1 pages\n> INFO: \"pg_listener\": found 1 removable, 21 nonremovable row versions \n> in 28 pages\n\nOK, that destroys the table-bloat theory. Just to make sure, I\npre-populated pg_listener with enough dead rows to make 28 pages,\nbut I still don't see any slow notifies or noticeable load in vmstat.\n\nThat server is not quite \"inactive\", though. What are the 21 remaining\npg_listener entries for? Is it possible that those jobs are having\nsome impact on the ones run by the test script?\n\nAlso, it might be worth enabling log_lock_waits to see if the slow\nnotifies are due to having to wait on some lock or other.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Feb 2008 13:57:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3 " }, { "msg_contents": "At 1:57 PM -0500 2/24/08, Tom Lane wrote:\n>Joel Stevenson <[email protected]> writes:\n>>> This sounds a bit like pg_listener has gotten bloated. Try a \"VACUUM\n>>> VERBOSE pg_listener\" (as superuser) and see what it says.\n>\n>> At the moment (server is inactive):\n>\n>> pcdb=# VACUUM VERBOSE pg_listener;\n>> INFO: vacuuming \"pg_catalog.pg_listener\"\n>> INFO: \"pg_listener\": removed 1 row versions in 1 pages\n>> INFO: \"pg_listener\": found 1 removable, 21 nonremovable row versions\n>> in 28 pages\n>\n>OK, that destroys the table-bloat theory. 
Just to make sure, I\n>pre-populated pg_listener with enough dead rows to make 28 pages,\n>but I still don't see any slow notifies or noticeable load in vmstat.\n>\n>That server is not quite \"inactive\", though. What are the 21 remaining\n>pg_listener entries for? Is it possible that those jobs are having\n>some impact on the ones run by the test script?\n>\n>Also, it might be worth enabling log_lock_waits to see if the slow\n>notifies are due to having to wait on some lock or other.\n\nThe other listeners are the application's consumers and producer. At \nthe time of the testing they were not active but were alive.\n\nFor isolation I've just shutdown all listener / notifier processes \nthat were using the box, vacuumed pg_listener, and run the test \nscript again. There were several LISTEN or NOTIFY statements that \ntook longer than expected to complete (default test script settings \nof 5 consumers and a loop of 100):\n\n2008-02-24 23:00:48 PST 7541 LOG: duration: 514.697 ms statement: \nLISTEN to_consumer\n2008-02-24 23:00:48 PST 7544 LOG: duration: 508.790 ms statement: \nLISTEN to_consumer\n2008-02-24 23:00:48 PST 7543 LOG: duration: 511.061 ms statement: \nLISTEN to_consumer\n2008-02-24 23:00:48 PST 7545 LOG: duration: 506.390 ms statement: \nLISTEN to_producer\n2008-02-24 23:00:57 PST 7544 LOG: duration: 400.595 ms statement: \nNOTIFY to_producer\n2008-02-24 23:00:57 PST 7538 LOG: duration: 369.018 ms statement: \nNOTIFY to_producer\n2008-02-24 23:01:03 PST 7544 LOG: duration: 410.588 ms statement: \nNOTIFY to_producer\n2008-02-24 23:01:03 PST 7541 LOG: duration: 300.774 ms statement: \nNOTIFY to_producer\n2008-02-24 23:01:32 PST 7545 LOG: duration: 325.380 ms statement: \nNOTIFY to_consumer\n2008-02-24 23:01:42 PST 7538 LOG: duration: 349.640 ms statement: \nNOTIFY to_producer\n2008-02-24 23:01:43 PST 7543 LOG: duration: 529.700 ms statement: \nNOTIFY to_producer\n\n-Joel\n", "msg_date": "Sun, 24 Feb 2008 23:09:30 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "At 11:58 PM -0500 2/23/08, Tom Lane wrote:\n> > Attached is a perl script that sort of simulates what's going on.\n>\n>Thanks for the script. It's not showing any particular problems here,\n>though. 
With log_min_duration_statement = 10, the only statements that\n>(slightly) exceed 10ms are the select count(*) from generate_series(1,\n>15000) ones.\n\nI tried the test script on another machine (similar but not identical \nto the original machine) running 8.3 and although the notify \nperformance was *much* better than the original I still see \nnotifications taking longer than the select count(*) from \ngenerate_series(1, 15000) queries, and also longer than some simple \nupdates to other tables that are also happening on the server.\n\nduration: 10.030 ms statement: select count(*) from generate_series(1, 15000)\nduration: 224.833 ms statement: NOTIFY to_producer\n\nPerhaps this shouldn't be made much of as I'm sure there are many way \nthat this could quite naturally happen.\n\nI've been thinking of LISTEN / NOTIFY as one of the least expensive \nand therefore speedy ways to get the word out to participating \nprocesses that something has changed (versus using a manually setup \nsignals table that interested parties updated and selected from).\n\nNow that I see a little bit more of what goes on under the hood of \nthis function I see that it's still basically table-driven and I'll \nadjust my expectations accordingly, but I'm still puzzled by the \nhugely slow notifications showing up on the original server running \nthe producer / consumer setup.\n\nWith ps I can see some postgres backends with a 'notify interrupt \nwaiting' command line during the tests - could it be an issue with \nsignal handling on the original machine - something entirely outside \nof PG's control?\n\nThx,\n-Joel\n", "msg_date": "Tue, 26 Feb 2008 08:26:30 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "Joel Stevenson <[email protected]> writes:\n> Now that I see a little bit more of what goes on under the hood of \n> this function I see that it's still basically table-driven and I'll \n> adjust my expectations accordingly,\n\nYeah, there's been discussion of replacing the implementation with some\nall-in-memory queue kind of setup, but no one's got round to that yet.\n\n> With ps I can see some postgres backends with a 'notify interrupt \n> waiting' command line during the tests - could it be an issue with \n> signal handling on the original machine - something entirely outside \n> of PG's control?\n\nNo, that's not unexpected if you have the same notify being delivered to\nmultiple processes that had been idle. They'll all get wakened and try\nto read pg_listener to see what happened, but since this is a\nread-modify-write type of operation it uses an exclusive lock, so only\none can clear its pg_listener entry at a time. The 'waiting' ones you\nare seeing are stacked up behind whichever one has the lock at the\nmoment. They shouldn't be waiting for long.\n\nI'm still baffled by why we aren't seeing comparable performance for the\nsame test case. What I'm testing on is couple-year-old desktop kit\n(dual 2.8GHz Xeon, consumer-grade disk drive) --- I had assumed your\nserver would be at least as fast as that, but maybe not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 12:43:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3 " }, { "msg_contents": "Tom Lane wrote:\n> read-modify-write type of operation it uses an exclusive lock, so only\n> one can clear its pg_listener entry at a time. 
The 'waiting' ones you\n> are seeing are stacked up behind whichever one has the lock at the\n> moment. They shouldn't be waiting for long.\n> \nI certainly hadn't expected that to be the implementation technique - \nisn't it smply that we need\na sngle flag per worker process and can set/test-and-clear with atomic \noperations and then a\nsignal to wake them up?\n\nAnyway - how hard would it be to install triggers on commit and \nrollback? Then we could write\nour own mechanisms.\n\nJames\n\n", "msg_date": "Tue, 26 Feb 2008 22:15:58 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "At 12:43 PM -0500 2/26/08, Tom Lane wrote:\n>I'm still baffled by why we aren't seeing comparable performance for the\n>same test case. What I'm testing on is couple-year-old desktop kit\n>(dual 2.8GHz Xeon, consumer-grade disk drive) --- I had assumed your\n>server would be at least as fast as that, but maybe not?\n\nIt's a quad-core Xeon 3.0Ghz machine, 7200rpm SATA discs in a \nsoftware RAID. It's not a particularly high performance machine in \nterms of disc IO - but again the comparative speed of most \nselect-update-commit queries to plain notify's on the server seem off.\n\nWhat's really baffling is that there are plenty of other OLTP queries \ngoing in multiple backends simultaneously that don't fall over my \n300ms query log threshold, and yet NOTIFY and LISTEN consistently do. \nWhat's more it's looks like it's only happening for registered \nlistener relnames.\n\nThis is while the server processes are alive but inactive:\n\njoels=# \\timing\nTiming is on.\njoels=# select * from pg_listener;\n relname | listenerpid | notification\n----------------+-------------+--------------\n alert_inbound | 15013 | 0\n alert_inbound | 13371 | 0\n alert_inbound | 26856 | 0\n alert_inbound | 12016 | 0\n alert_inbound | 26911 | 0\n alert_inbound | 11956 | 0\n alert_process | 13365 | 0\n alert_inbound | 26855 | 0\n alert_inbound | 12248 | 0\n alert_inbound | 13367 | 0\n alert_inbound | 12304 | 0\n alert_inbound | 32633 | 0\n alert_inbound | 30979 | 0\n alert_inbound | 29290 | 0\n alert_inbound | 30394 | 0\n alert_inbound | 14490 | 0\n alert_inbound | 14491 | 0\n alert_inbound | 14492 | 0\n(18 rows)\n\nTime: 0.402 ms\njoels=# notify foo;\nNOTIFY\nTime: 0.244 ms\njoels=# notify foo2;\nNOTIFY\nTime: 0.211 ms\njoels=# notify alert_process;\nNOTIFY\nTime: 34.585 ms\njoels=# notify alert_process;\nNOTIFY\nTime: 45.554 ms\njoels=# notify alert_inbound;\nNOTIFY\nTime: 40.868 ms\njoels=# notify alert_inbound;\nNOTIFY\nTime: 176.309 ms\njoels=# notify alert_inbound;\nNOTIFY\nTime: 36.669 ms\njoels=# notify alert_inbound;\nNOTIFY\nTime: 369.761 ms\njoels=# notify alert_inbound;\nNOTIFY\nTime: 34.449 ms\njoels=# notify alert_inbound;\nNOTIFY\nTime: 121.990 ms\njoels=# notify foo3;\nNOTIFY\nTime: 0.250 ms\njoels=# notify foo2;\nNOTIFY\nTime: 0.175 ms\n\nThere's no autovacuum log entries prior to or immediately after the \n369ms notify command.\n\n-Joel\n", "msg_date": "Tue, 26 Feb 2008 14:18:07 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "Joel Stevenson <[email protected]> writes:\n> What's really baffling is that there are plenty of other OLTP queries \n> going in multiple backends simultaneously that don't fall over my \n> 300ms query log threshold, and yet NOTIFY and LISTEN consistently do. 
\n> What's more it's looks like it's only happening for registered \n> listener relnames.\n\nHmm, that says that it's not a matter of locking on pg_listener,\nbut of actually applying the row update(s) and/or signaling the\nrecipient(s). If you're not seeing performance issues for ordinary\ntable-update operations it's hard to see why pg_listener updates would\nbe any worse, so that seems to point the finger at the signaling.\nWhich is just a matter of a kill(2) and shouldn't be that expensive.\n\nIt might be interesting to try strace'ing the whole PG process tree\nwhile these notifies are going on, and seeing if you can identify\nany specific kernel calls that seem to take a long time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 18:01:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3 " }, { "msg_contents": "James Mansion <[email protected]> writes:\n> I certainly hadn't expected that to be the implementation technique - \n> isn't it smply that we need\n> a sngle flag per worker process and can set/test-and-clear with atomic \n> operations and then a\n> signal to wake them up?\n\nHardly --- how's that going to pass a notify name? Also, a lot of\npeople want some payload data in a notify, not just a condition name;\nany reimplementation that doesn't address that desire probably won't\nget accepted.\n\nThere's lots of threads in the -hackers archives about reimplementing\nlisten/notify in a saner fashion. Personally I lean towards using\nsomething much like the sinval queue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 18:33:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3 " }, { "msg_contents": "Tom Lane wrote:\n> Hardly --- how's that going to pass a notify name? Also, a lot of\n> people want some payload data in a notify, not just a condition name;\n> any reimplementation that doesn't address that desire probably won't\n> get accepted.\n> \nAh - forgot about the name. At least there need be just one instance of \na name record queued\nper worker if I'm reading the documentation right - it suggest that \nnotifications can be folded\nwith the proviso that if the process generates a notification and at \nleast one other process\ngenerates a notification then it will get at least (but possibly only) \ntwo events. Not sure why\nthe PID is there rather than a couple of flag bits.\n\nYou'll alsways have the danger of overflowing a shm area and need to \nspill: is the signal and then\nlookup in storage materially quicker than using the master process to \nroute messages via pipes?\nAs you say, you have a lock contention issue and often the total signal \ndata volume outstanding\nfor a single back end will be less than will fit in a kernel's pipe buffer.\n\nThe sending processes can track what signals they've generated in the \ncurrent transaction so\nthe master (or signal distributor) needn't get bombarded with signals \nfrom lots of rows within\none transaction.\n\n", "msg_date": "Wed, 27 Feb 2008 06:00:05 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "At 6:01 PM -0500 2/26/08, Tom Lane wrote:\n>Hmm, that says that it's not a matter of locking on pg_listener,\n>but of actually applying the row update(s) and/or signaling the\n>recipient(s). 
If you're not seeing performance issues for ordinary\n>table-update operations it's hard to see why pg_listener updates would\n>be any worse, so that seems to point the finger at the signaling.\n>Which is just a matter of a kill(2) and shouldn't be that expensive.\n>\n>It might be interesting to try strace'ing the whole PG process tree\n>while these notifies are going on, and seeing if you can identify\n>any specific kernel calls that seem to take a long time.\n\nI ran PG via strace and then ran the test script at 25 consumers \nlooping 25 times each. There were no other connections to the \ndatabase for this strace run and test.\n\nDigging through the strace file is a bit mind-numbing but here's some \nsigns that semop and send are the culprits:\n\n3 misc examples coming from near LISTEN or NOTIFIES:\n- - - - - -\n7495 18:10:40.251855 <... semop resumed> ) = 0 <1.006149>\n7495 18:10:41.325442 <... semop resumed> ) = -1 EINTR (Interrupted \nsystem call) <0.299135>\n7495 18:10:41.998219 <... semop resumed> ) = 0 <0.603566>\n\nA chunk of log following the action on fd 7 (slow recv on \nERESTARTSYS) and then the slow semop that follows:\n- - - - - - - -\n7495 18:10:42.576401 send(7, \"C\\0\\0\\0\\vNOTIFY\\0Z\\0\\0\\0\\5I\", 18, 0 \n<unfinished ...>\n7495 18:10:42.576503 <... send resumed> ) = 18 <0.000070>\n7495 18:10:42.576620 recv(7, <unfinished ...>\n7495 18:10:42.873796 <... recv resumed> 0x8331d40, 8192, 0) = ? \nERESTARTSYS (To be restarted) <0.297158>\n7495 18:10:42.873911 --- SIGUSR2 (User defined signal 2) @ 0 (0) ---\n7495 18:10:42.874079 gettimeofday( <unfinished ...>\n7495 18:10:42.874198 <... gettimeofday resumed> {1204078242, \n874097}, NULL) = 0 <0.000101>\n7495 18:10:42.874324 setitimer(ITIMER_REAL, {it_interval={0, 0}, \nit_value={1, 0}}, <unfinished ...>\n7495 18:10:42.874470 <... setitimer resumed> NULL) = 0 <0.000121>\n7495 18:10:42.874604 semop(50495522, 0xbfff9764, 1 <unfinished ...>\n7495 18:10:43.678431 <... semop resumed> ) = 0 <0.803809>\n\nA little further on:\n- - - - - - - -\n7495 18:10:44.905320 <... semop resumed> ) = -1 EINTR (Interrupted \nsystem call) <0.998192>\n\n\nI'm not sure what exactly that means, in terms of next steps. I'll \ndig more through the strace file and see if I can find anything else \nbut those look to be definite bottlenecks for some reason.\n\nAt 2:24 PM -0800 2/26/08, Maurice Aubrey wrote:\n>What's the OS/Dist?\n\nRed Hat Enterprise Linux ES release 3 (Taroon Update 8)\n\nThx,\n-Joel\n", "msg_date": "Wed, 27 Feb 2008 18:19:12 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" } ]
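
For anyone wanting to repeat the diagnostics that came up in this thread, a rough psql session along the following lines pulls them together. The channel name alert_inbound and the generate_series comparison query are taken from the messages above; the 10 ms threshold is arbitrary, and pg_listener only exists on 8.x servers like the ones discussed here (the 9.0 reimplementation removed it), so treat this strictly as a sketch for that era.

\timing

LISTEN alert_inbound;                      -- register this backend
SELECT relname, listenerpid, notification
  FROM pg_listener;                        -- one row per backend/channel registration

NOTIFY alert_inbound;                      -- time this...
SELECT count(*) FROM generate_series(1, 15000);   -- ...against the ~10 ms baseline query

-- Tom's two suggestions: look for bloat in pg_listener and for lock waits.
VACUUM VERBOSE pg_listener;                -- superuser
SET log_lock_waits = on;                   -- superuser; logs waits longer than deadlock_timeout
SET log_min_duration_statement = 10;       -- log anything slower than 10 ms

If the NOTIFY consistently costs hundreds of milliseconds while the baseline query does not, and pg_listener vacuums clean, that points back at the signalling path (the semop/send delays seen in the strace) rather than at the catalog update itself.
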
[ { "msg_contents": "I have a table, that in production, currently has a little over 3 \nmillion records in production. In development, the same table has \nabout 10 million records (we have cleaned production a few weeks \nago). One of my queries joins this table with another, and in \ndevelopment, the particular condition uses an IndexScan on the \"stamp\" \ncolumn (the time this record was inserted) which makes it run fast. \nIn Production however (different machine, similar specs/postgresql \nsettings) the planner uses a seq scan on that table, causing the query \nperformance to be abysmal (sometimes over 10 seconds in production, \naround 1 second in development). What can I do to tweak this/ \ntroubleshoot it? I have another table with similar structure etc. \nthat has the same issue. Thanks!!!\n\nHere is the query:\n\nSELECT node,count(*) AS counts FROM u_counts c,res r WHERE \nc.res_id=r.id AND stamp > (current_timestamp - interval '1 day') AND \nr.rtype='u' AND r.location=1 GROUP BY node;\n\nThe tables have an index on u_counts.res_id, u_counts.stamp, \nres.location, and res.rtype\n\nHere is the production explain analyze:\n\nHashAggregate (cost=472824.67..472824.77 rows=8 width=6) (actual \ntime=12482.856..12482.872 rows=9 loops=1)\n -> Hash Join (cost=16.71..471847.28 rows=195479 width=6) (actual \ntime=1217.532..10618.930 rows=1035998 loops=1)\n Hash Cond: (c.res_id = r.id)\n -> Seq Scan on u_counts c (cost=0.00..466319.96 \nrows=948218 width=4) (actual time=1217.183..7343.507 rows=1035998 \nloops=1)\n Filter: (stamp > (now() - '1 day'::interval))\n -> Hash (cost=15.88..15.88 rows=67 width=10) (actual \ntime=0.299..0.299 rows=60 loops=1)\n -> Seq Scan on res r (cost=0.00..15.88 rows=67 \nwidth=10) (actual time=0.027..0.195 rows=60 loops=1)\n Filter: (((rtype)::text = 'u'::text) AND \n(location = 1))\n Total runtime: 12482.961 ms\n\n\nHere is the development explain analyze:\n\n HashAggregate (cost=72.91..73.02 rows=9 width=6) (actual \ntime=3108.793..3108.807 rows=9 loops=1)\n -> Hash Join (cost=10.42..71.27 rows=327 width=6) (actual \ntime=0.608..2446.714 rows=392173 loops=1)\n Hash Cond: (c.res_id = r.id)\n -> Index Scan using u_counts_i2 on u_counts c \n(cost=0.00..53.53 rows=1082 width=4) (actual time=0.277..1224.582 \nrows=392173 loops=1)\n Index Cond: (stamp > (now() - '1 day'::interval))\n -> Hash (cost=9.53..9.53 rows=71 width=10) (actual \ntime=0.310..0.310 rows=78 loops=1)\n -> Seq Scan on res r (cost=0.00..9.53 rows=71 \nwidth=10) (actual time=0.010..0.189 rows=78 loops=1)\n Filter: (((rtype)::text = 'u'::text) AND \n(location = 1))\n Total runtime: 3108.891 ms\n\n", "msg_date": "Sun, 24 Feb 2008 07:40:54 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Weird issue with planner choosing seq scan" }, { "msg_contents": "Sean Leach <[email protected]> writes:\n> I have a table, that in production, currently has a little over 3 \n> million records in production. In development, the same table has \n> about 10 million records (we have cleaned production a few weeks \n> ago).\n\nYou mean the other way around, to judge by the rowcounts from EXPLAIN.\n\n> -> Index Scan using u_counts_i2 on u_counts c \n> (cost=0.00..53.53 rows=1082 width=4) (actual time=0.277..1224.582 \n> rows=392173 loops=1)\n\nI kinda think the devel system wouldn't be using an indexscan either\nif it had up-to-date ANALYZE statistics. But even with the 1082 row\nestimate that seems a remarkably low cost estimate. 
Have you been\nplaying games with random_page_cost? Maybe you forgot to duplicate the\ndevel system's cost parameters onto the production system?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Feb 2008 12:50:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan " }, { "msg_contents": "Nope, seems like that would make sense but dev is 10 mill, prod is 3 \nmillion. Also including random_page_cost below. Thanks for any help.\n\n\nHere is dev:\n\ndb=> analyze u_counts;\nANALYZE\nTime: 15775.161 ms\n\ndb=> select count(1) from u_counts;\n count\n----------\n 10972078\n(1 row)\n\ndb=> show random_page_cost;\n random_page_cost\n------------------\n 4\n(1 row)\n\nTime: 0.543 ms\ndb=> explain analyze SELECT node,count(*) AS counts FROM u_counts \nc,res r WHERE c.res_id=r.id AND stamp > (current_timestamp - interval \n'1 day') AND r.rtype='udns' AND r.location=1 GROUP BY node;\n QUERY \n PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=12906.12..12906.24 rows=9 width=6) (actual \ntime=3135.831..3135.845 rows=9 loops=1)\n -> Hash Join (cost=10.42..12538.88 rows=73449 width=6) (actual \ntime=0.746..2475.632 rows=391380 loops=1)\n Hash Cond: (c.res_id = r.id)\n -> Index Scan using u_counts_i2 on db c \n(cost=0.00..10882.33 rows=243105 width=4) (actual time=0.287..1269.651 \nrows=391380 loops=1)\n Index Cond: (stamp > (now() - '1 day'::interval))\n -> Hash (cost=9.53..9.53 rows=71 width=10) (actual \ntime=0.430..0.430 rows=78 loops=1)\n -> Seq Scan on res r (cost=0.00..9.53 rows=71 \nwidth=10) (actual time=0.021..0.203 rows=78 loops=1)\n Filter: (((rtype)::text = 'udns'::text) AND \n(location = 1))\n Total runtime: 3136.000 ms\n\n\n\n\nNow - here is prod:\n\n\ndb=> show random_page_cost;\n random_page_cost\n------------------\n 4\n(1 row)\n\nTime: 0.434 ms\n\ndb=> analyze u_counts;\nANALYZE\nTime: 179.928 ms\n\ndb=> select count(1) from u_counts;\n count\n---------\n 3292215\n(1 row)\n\n\ndb=> explain analyze SELECT node,count(*) AS counts FROM u_counts \nc,res r WHERE c.res_id=r.id AND stamp > (current_timestamp - interval \n'1 day') AND r.rtype='udns' AND r.location=1 GROUP BY node;\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=452333.49..452333.59 rows=8 width=6) (actual \ntime=13200.887..13200.902 rows=9 loops=1)\n -> Hash Join (cost=16.71..451192.74 rows=228149 width=6) (actual \ntime=1430.458..11274.073 rows=1036015 loops=1)\n Hash Cond: (c.res_id = r.id)\n -> Seq Scan on u_counts c (cost=0.00..444744.45 \nrows=1106691 width=4) (actual time=1429.996..7893.178 rows=1036015 \nloops=1)\n Filter: (stamp > (now() - '1 day'::interval))\n -> Hash (cost=15.88..15.88 rows=67 width=10) (actual \ntime=0.363..0.363 rows=60 loops=1)\n -> Seq Scan on res r (cost=0.00..15.88 rows=67 \nwidth=10) (actual time=0.046..0.258 rows=60 loops=1)\n Filter: (((rtype)::text = 'udns'::text) AND \n(location = 1))\n Total runtime: 13201.046 ms\n(9 rows)\n\nTime: 13204.686 ms\n\n\n\n\n\n\n\n\n\nOn Feb 24, 2008, at 9:50 AM, Tom Lane wrote:\n\n> Sean Leach <[email protected]> writes:\n>> I have a table, that in production, currently has a little over 3\n>> million records in production. 
In development, the same table has\n>> about 10 million records (we have cleaned production a few weeks\n>> ago).\n>\n> You mean the other way around, to judge by the rowcounts from EXPLAIN.\n>\n>> -> Index Scan using u_counts_i2 on u_counts c\n>> (cost=0.00..53.53 rows=1082 width=4) (actual time=0.277..1224.582\n>> rows=392173 loops=1)\n>\n> I kinda think the devel system wouldn't be using an indexscan e ither\n> if it had up-to-date ANALYZE statistics. But even with the 1082 row\n> estimate that seems a remarkably low cost estimate. Have you been\n> playing games with random_page_cost? Maybe you forgot to duplicate \n> the\n> devel system's cost parameters onto the production system?\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Sun, 24 Feb 2008 10:41:26 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird issue with planner choosing seq scan " }, { "msg_contents": "Sean Leach <[email protected]> writes:\n> Now - here is prod:\n\n> db=> select count(1) from u_counts;\n> count\n> ---------\n> 3292215\n> (1 row)\n\n\n> -> Seq Scan on u_counts c (cost=0.00..444744.45 \n> rows=1106691 width=4) (actual time=1429.996..7893.178 rows=1036015 \n> loops=1)\n> Filter: (stamp > (now() - '1 day'::interval))\n\nGiven that this scan actually is selecting about a third of the table,\nI'm not sure that the planner is doing the wrong thing. It's hard to\nsee how an indexscan would be an improvement.\n\n[ thinks for a bit... ] Actually, the problem might be the 3M\nexecutions of now() and interval subtraction that you get in the seqscan\ncase. What results do you get if you write it with a sub-select like this:\n\nexplain analyze SELECT node,count(*) AS counts FROM u_counts \nc,res r WHERE c.res_id=r.id AND stamp > (SELECT current_timestamp - interval \n'1 day') AND r.rtype='udns' AND r.location=1 GROUP BY node;\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Feb 2008 14:10:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan " }, { "msg_contents": "\nOn Feb 24, 2008, at 11:10 AM, Tom Lane wrote:\n\n> Sean Leach <[email protected]> writes:\n>> Now - here is prod:\n>\n>> db=> select count(1) from u_counts;\n>> count\n>> ---------\n>> 3292215\n>> (1 row)\n>\n>\n>> -> Seq Scan on u_counts c (cost=0.00..444744.45\n>> rows=1106691 width=4) (actual time=1429.996..7893.178 rows=1036015\n>> loops=1)\n>> Filter: (stamp > (now() - '1 day'::interval))\n>\n> Given that this scan actually is selecting about a third of the table,\n> I'm not sure that the planner is doing the wrong thing. It's hard to\n> see how an indexscan would be an improvement.\n>\n> [ thinks for a bit... ] Actually, the problem might be the 3M\n> executions of now() and interval subtraction that you get in the \n> seqscan\n> case. What results do you get if you write it with a sub-select \n> like this:\n>\n> explain analyze SELECT node,count(*) AS counts FROM u_counts\n> c,res r WHERE c.res_id=r.id AND stamp > (SELECT current_timestamp - \n> interval\n> '1 day') AND r.rtype='udns' AND r.location=1 GROUP BY node;\n\n\nUnfortunately, the same, dev uses index scan, prod uses seq scan, prod \ntakes about 4x longer to do the query. Any other thoughts on best way \nto proceed? 
Thanks again Tom.\n\n\n\n\n>\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Sun, 24 Feb 2008 12:28:17 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird issue with planner choosing seq scan " }, { "msg_contents": "Tom Lane wrote\n> Sean Leach <[email protected]> writes:\n> > Now - here is prod:\n> \n> > db=> select count(1) from u_counts;\n> > count\n> > ---------\n> > 3292215\n> > (1 row)\n> \n> \n> > -> Seq Scan on u_counts c (cost=0.00..444744.45 \n> > rows=1106691 width=4) (actual time=1429.996..7893.178 rows=1036015 \n> > loops=1)\n> > Filter: (stamp > (now() - '1 day'::interval))\n> \n> Given that this scan actually is selecting about a third of the table,\n> I'm not sure that the planner is doing the wrong thing. It's hard to\n> see how an indexscan would be an improvement.\n\nIf you always get around a third of the rows in your table written in the last day, you've got to be deleting about a third of the rows in your table every day too. You might have a huge number of dead rows in your table, slowing down the sequential scan.\n(Likewise updating a third of the rows, changing an indexed field.)\n\nWhat do you get from:\nVACUUM VERBOSE u_counts;\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Mon, 25 Feb 2008 10:18:02 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan " }, { "msg_contents": "\nOn Feb 24, 2008, at 1:18 PM, Stephen Denne wrote:\n> If you always get around a third of the rows in your table written \n> in the last day, you've got to be deleting about a third of the rows \n> in your table every day too. You might have a huge number of dead \n> rows in your table, slowing down the sequential scan.\n> (Likewise updating a third of the rows, changing an indexed field.)\n>\n> What do you get from:\n> VACUUM VERBOSE u_counts;\n\n\nThis actually makes sense as we aggregate the production rows (but not \ndevelopment), and here is the output of vacuum analyze. We have the \nauto vacuum daemon on, but after we do our aggregation (we aggregate \nrows down to a less granular time scale, i.e. similar to what rrdtool \ndoes etc.), we should probably do a 'vacuum full analyze' moving \nforward after each aggregation run, right?\n\nI need to do one now it appears, but I am assuming it will take a \n_long_ time...I might need to schedule some downtime if it will. 
Even \nwithout a full vacuum, the query seems to have come down from 20-30s \nto 5s.\n\ndb=> VACUUM VERBOSE u_counts;\nINFO: vacuuming \"public.u_counts\"\nINFO: index \"u_counts_pkey\" now contains 5569556 row versions in \n73992 pages\nDETAIL: 0 index row versions were removed.\n57922 index pages have been deleted, 57922 are currently reusable.\nCPU 0.59s/0.09u sec elapsed 3.73 sec.\nINFO: index \"u_counts_i1\" now contains 5569556 row versions in 76820 \npages\nDETAIL: 0 index row versions were removed.\n54860 index pages have been deleted, 54860 are currently reusable.\nCPU 1.04s/0.16u sec elapsed 20.10 sec.\nINFO: index \"u_counts_i2\" now contains 5569556 row versions in 77489 \npages\nDETAIL: 0 index row versions were removed.\n53708 index pages have been deleted, 53708 are currently reusable.\nCPU 0.70s/0.10u sec elapsed 5.41 sec.\nINFO: index \"u_counts_i3\" now contains 5569556 row versions in 76900 \npages\nDETAIL: 0 index row versions were removed.\n55564 index pages have been deleted, 55564 are currently reusable.\nCPU 0.94s/0.13u sec elapsed 20.34 sec.\nINFO: \"u_counts\": found 0 removable, 5569556 nonremovable row \nversions in 382344 pages\nDETAIL: 2085075 dead row versions cannot be removed yet.\nThere were 15567992 unused item pointers.\n281727 pages contain useful free space.\n0 pages are entirely empty.\nCPU 5.24s/1.77u sec elapsed 53.69 sec.\nWARNING: relation \"public.u_counts\" contains more than \n\"max_fsm_pages\" pages with useful free space\nHINT: Consider using VACUUM FULL on this relation or increasing the \nconfiguration parameter \"max_fsm_pages\".\nVACUUM\nTime: 53758.329 ms\n\n\n", "msg_date": "Sun, 24 Feb 2008 15:21:15 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird issue with planner choosing seq scan " }, { "msg_contents": "The fact that your indexes are bloated but your table is not makes me\nwonder if you're not running a really old version of pgsql that had\nproblems with monotonically increasing indexes bloating over time and\nrequiring reindexing.\n\nThat problem has been (for the most part) solved by some hacking Tom\nLane did some time back.\n\nWhat version pgsql is this? If it's pre 8.0 it might be worth looking\ninto migrating for performance and maintenance reasons.\n", "msg_date": "Sun, 24 Feb 2008 18:03:48 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "\nOn Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:\n\n> The fact that your indexes are bloated but your table is not makes me\n> wonder if you're not running a really old version of pgsql that had\n> problems with monotonically increasing indexes bloating over time and\n> requiring reindexing.\n>\n> That problem has been (for the most part) solved by some hacking Tom\n> Lane did some time back.\n>\n> What version pgsql is this? If it's pre 8.0 it might be worth looking\n> into migrating for performance and maintenance reasons.\n\nIt's the latest 8.3.0 release :(\n", "msg_date": "Sun, 24 Feb 2008 16:05:18 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "On Sun, Feb 24, 2008 at 6:05 PM, Sean Leach <[email protected]> wrote:\n\n> On Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:\n>\n> >\n> > What version pgsql is this? 
If it's pre 8.0 it might be worth looking\n> > into migrating for performance and maintenance reasons.\n>\n> It's the latest 8.3.0 release :(\n\nUrg. Then I wonder how your indexes are bloating but your table is\nnot... you got autovac running? No weird lock issues? It's a side\nissue right now since the table is showing as non-bloated (unless\nyou've got a long running transaction and that number is WAY off from\nyour vacuum)\n", "msg_date": "Sun, 24 Feb 2008 18:27:55 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "On Feb 24, 2008, at 4:27 PM, Scott Marlowe wrote:\n\n> On Sun, Feb 24, 2008 at 6:05 PM, Sean Leach <[email protected]> wrote:\n>\n>> On Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:\n>>\n>>>\n>>> What version pgsql is this? If it's pre 8.0 it might be worth \n>>> looking\n>>> into migrating for performance and maintenance reasons.\n>>\n>> It's the latest 8.3.0 release :(\n>\n> Urg. Then I wonder how your indexes are bloating but your table is\n> not... you got autovac running? No weird lock issues? It's a side\n> issue right now since the table is showing as non-bloated (unless\n> you've got a long running transaction and that number is WAY off from\n> your vacuum)\n\n\nAutovac is running, but probably not tuned. I am looking at my \nmax_fsm_pages setting to up as vacuum says, but not sure which value \nto use (all the posts on the web refer to what looks like an old \nvacuum output format), is this the line to look at?\n\nINFO: \"u_counts\": found 0 removable, 6214708 nonremovable row \nversions in 382344 pages\nDETAIL: 2085075 dead row versions cannot be removed yet.\n\nI.e. I need 382344 max_fsm_pages? No weird lock issues that we have \nseen.\n\nSo should I do a vacuum full and then hope this doesn't happen again? \nOr should I run a VACUUM FULL after each aggregation run?\n\nThanks!\nSean\n\n", "msg_date": "Mon, 25 Feb 2008 06:13:49 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "On Sun, 24 Feb 2008, Tom Lane wrote:\n> Sean Leach <[email protected]> writes:\n>> I have a table, that in production, currently has a little over 3\n>> million records in production. In development, the same table has\n>> about 10 million records (we have cleaned production a few weeks\n>> ago).\n>\n> You mean the other way around, to judge by the rowcounts from EXPLAIN.\n>\n>> -> Index Scan using u_counts_i2 on u_counts c\n>> (cost=0.00..53.53 rows=1082 width=4) (actual time=0.277..1224.582\n>> rows=392173 loops=1)\n>\n> I kinda think the devel system wouldn't be using an indexscan either\n> if it had up-to-date ANALYZE statistics. But even with the 1082 row\n> estimate that seems a remarkably low cost estimate.\n\nSeems pretty obvious to me. The table is obviously going to be well \nordered by the timestamp, if that's the time that the entries are inserted \ninto the table. So the index is going to have a very good correlation with \nthe order of the table, which is why the estimated cost for the index scan \nis so low. The production table will be more active than the development \ntable, so the entries in it will be more recent. The entries that were \ncleaned out a while ago are all irrelevant, because they will be old ones, \nand we are specifically searching for new entries. 
Because the production \ntable is more active, even though it is smaller, the results of the search \nwill be bigger (as seen in the explain analyse results), pushing it over \nthe limit and making a sequential scan more attractive.\n\nMatthew\n\n-- \nFailure is not an option. It comes bundled with your Microsoft product. \n -- Ferenc Mantfeld\n", "msg_date": "Mon, 25 Feb 2008 14:27:12 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan " }, { "msg_contents": "Sean Leach wrote:\n> On Feb 24, 2008, at 4:27 PM, Scott Marlowe wrote:\n> \n> >\n> > Urg. Then I wonder how your indexes are bloating but your table is\n> > not... you got autovac running? No weird lock issues? It's a side\n> > issue right now since the table is showing as non-bloated (unless\n> > you've got a long running transaction and that number is \n> WAY off from\n> > your vacuum)\n> \n> \n> Autovac is running, but probably not tuned. I am looking at my \n> max_fsm_pages setting to up as vacuum says, but not sure which value \n> to use (all the posts on the web refer to what looks like an old \n> vacuum output format), is this the line to look at?\n> \n> INFO: \"u_counts\": found 0 removable, 6214708 nonremovable row \n> versions in 382344 pages\n> DETAIL: 2085075 dead row versions cannot be removed yet.\n> \n> I.e. I need 382344 max_fsm_pages? No weird lock issues that we have \n> seen.\n\nI think the hint and warning are referring to this line:\n> 281727 pages contain useful free space.\n\nBut you're likely to have other relations in your database that have useful free space too.\n\nWhat this warning is saying is that at least some of the useful free space in that table will not be re-used for new rows or row versions, because it is impossible for the free space map to have references to all of the pages with usable space, since it is too small to hold that much information.\n\n> So should I do a vacuum full and then hope this doesn't \n> happen again? \n> Or should I run a VACUUM FULL after each aggregation run?\n\nIf your usage pattern results in generating all of that unused space in one transaction, and no further inserts or updates to that table till next time you run the same process, then my guess is that you probably should run a vacuum full on that table after each aggregation run. In that case you wouldn't have to increase max_fsm_pages solely to keep track of large amount of unused space in that table, since you're cleaning it up as soon as you're generating it.\n\nYou earlier had 5.5 million row versions, 2 million of them dead but not yet removable, and you said (even earlier) that the table had 3.3 million rows in it.\nYou now say you've got 6.2 million row versions (with the same 2M dead). So it looks like you're creating new row versions at quite a pace, in which case increasing max_fsm_pages, and not worrying about doing a vacuum full _every_ time is probably a good idea.\n\nHave you checked Scott Marlowe's note:\n\n> > unless you've got a long running transaction\n\nHow come those 2 million dead rows are not removable yet? 
My guess (based on a quick search of the mailing lists) would be that they were generated from your aggregation run, and that a long running transaction started before your aggregation run committed (possibly even before it started), and that transaction is still alive.\n\nAlternatively, it may be a different 2 million dead row versions now than earlier, and may simply be a side effect of your particular usage, and nothing to worry about. (Though it is exactly the same number of rows, which strongly hints at being exactly the same rows.)\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Tue, 26 Feb 2008 10:19:06 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "\nOn Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:\n>\n>> So should I do a vacuum full and then hope this doesn't\n>> happen again?\n>> Or should I run a VACUUM FULL after each aggregation run?\n>\n> If your usage pattern results in generating all of that unused space \n> in one transaction, and no further inserts or updates to that table \n> till next time you run the same process, then my guess is that you \n> probably should run a vacuum full on that table after each \n> aggregation run. In that case you wouldn't have to increase \n> max_fsm_pages solely to keep track of large amount of unused space \n> in that table, since you're cleaning it up as soon as you're \n> generating it.\n>\n> You earlier had 5.5 million row versions, 2 million of them dead but \n> not yet removable, and you said (even earlier) that the table had \n> 3.3 million rows in it.\n> You now say you've got 6.2 million row versions (with the same 2M \n> dead). So it looks like you're creating new row versions at quite a \n> pace, in which case increasing max_fsm_pages, and not worrying about \n> doing a vacuum full _every_ time is probably a good idea.\n\nSo 281727 should be the minimum I bump it to correct?\n\n\n>\n>\n> Have you checked Scott Marlowe's note:\n>\n>>> unless you've got a long running transaction\n>\n> How come those 2 million dead rows are not removable yet? My guess \n> (based on a quick search of the mailing lists) would be that they \n> were generated from your aggregation run, and that a long running \n> transaction started before your aggregation run committed (possibly \n> even before it started), and that transaction is still alive.\n>\n> Alternatively, it may be a different 2 million dead row versions now \n> than earlier, and may simply be a side effect of your particular \n> usage, and nothing to worry about. (Though it is exactly the same \n> number of rows, which strongly hints at being exactly the same rows.)\n\n\nGreat detective work, you are correct. 
We have a daemon that runs and \nis constantly adding new data to that table, then we aggregated it \ndaily (I said weekly before, I was incorrect) - which deletes several \nrows as it updates a bunch of others. So it sounds like upping \nmax_fsm_pages is the best option.\n\nThanks again everyone!\n\n\n", "msg_date": "Mon, 25 Feb 2008 13:37:40 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "Sean Leach wrote\n> On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:\n> >\n> >> So should I do a vacuum full and then hope this doesn't\n> >> happen again?\n> >> Or should I run a VACUUM FULL after each aggregation run?\n> >\n> > If your usage pattern results in generating all of that \n> unused space \n> > in one transaction, and no further inserts or updates to \n> that table \n> > till next time you run the same process, then my guess is that you \n> > probably should run a vacuum full on that table after each \n> > aggregation run. In that case you wouldn't have to increase \n> > max_fsm_pages solely to keep track of large amount of unused space \n> > in that table, since you're cleaning it up as soon as you're \n> > generating it.\n> >\n> > You earlier had 5.5 million row versions, 2 million of them \n> dead but \n> > not yet removable, and you said (even earlier) that the table had \n> > 3.3 million rows in it.\n> > You now say you've got 6.2 million row versions (with the same 2M \n> > dead). So it looks like you're creating new row versions at \n> quite a \n> > pace, in which case increasing max_fsm_pages, and not \n> worrying about \n> > doing a vacuum full _every_ time is probably a good idea.\n> \n> So 281727 should be the minimum I bump it to correct?\n\nPlease know that I'm very new at advising PostgreSQL users how they should tune their system...\n\nMy understanding of your vacuum verbose output was that it was pointing out that max_fsm_pages was currently smaller than 281727, so therefore there was no way it could contain mappings to all the reusable space. However I don't think it is hinting at, nor recommending a value that you should be using.\n\nIf you do nothing, then this number of pages with reusable space will probably continue to grow, therefore, it probably has been growing.\n\nSo, for example, if your max_fsm_pages is currently only 20000, then perhaps 20000 of the 281727 pages with reusable space are in the free space map. The remaining 260000 pages _may_ have been generated through 20 different processes each of which created 13000 more pages with reusable space than the map could reference. If that was the case, then a max_fsm_pages of 33000 might be large enough.\n\nDo you see what I'm getting at?\nI think that you should do a vacuum full of that table once, then monitor the number of pages in it with reusable space for a while (over a few iterations of your regular processes). That should give you information about how much larger your max_fsm_pages should be than it currently is.\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. 
If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Tue, 26 Feb 2008 11:59:13 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "Sean Leach wrote\n> On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:\n> >\n> >\n> > Have you checked Scott Marlowe's note:\n> >\n> >>> unless you've got a long running transaction\n> >\n> > How come those 2 million dead rows are not removable yet? My guess \n> > (based on a quick search of the mailing lists) would be that they \n> > were generated from your aggregation run, and that a long running \n> > transaction started before your aggregation run committed \n> (possibly \n> > even before it started), and that transaction is still alive.\n> >\n> > Alternatively, it may be a different 2 million dead row \n> versions now \n> > than earlier, and may simply be a side effect of your particular \n> > usage, and nothing to worry about. (Though it is exactly the same \n> > number of rows, which strongly hints at being exactly the \n> same rows.)\n> \n> \n> Great detective work, you are correct. We have a daemon that \n> runs and \n> is constantly adding new data to that table, then we aggregated it \n> daily (I said weekly before, I was incorrect) - which deletes \n> several \n> rows as it updates a bunch of others. So it sounds like upping \n> max_fsm_pages is the best option.\n\nbut... do you have a long running transaction? Are you happy having 30% to 40% of your table unusable (needlessly?) and slowing down your sequential scans?\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Tue, 26 Feb 2008 12:19:43 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird issue with planner choosing seq scan" }, { "msg_contents": "\nOn Feb 25, 2008, at 2:59 PM, Stephen Denne wrote:\n>>\n>\n> Please know that I'm very new at advising PostgreSQL users how they \n> should tune their system...\n\nI'd never have known it if you hadn't said anything\n\n>\n>\n> My understanding of your vacuum verbose output was that it was \n> pointing out that max_fsm_pages was currently smaller than 281727, \n> so therefore there was no way it could contain mappings to all the \n> reusable space. 
However I don't think it is hinting at, nor \n> recommending a value that you should be using.\n>\n> If you do nothing, then this number of pages with reusable space \n> will probably continue to grow, therefore, it probably has been \n> growing.\n>\n> So, for example, if your max_fsm_pages is currently only 20000, then \n> perhaps 20000 of the 281727 pages with reusable space are in the \n> free space map. The remaining 260000 pages _may_ have been generated \n> through 20 different processes each of which created 13000 more \n> pages with reusable space than the map could reference. If that was \n> the case, then a max_fsm_pages of 33000 might be large enough.\n>\n> Do you see what I'm getting at?\n> I think that you should do a vacuum full of that table once, then \n> monitor the number of pages in it with reusable space for a while \n> (over a few iterations of your regular processes). That should give \n> you information about how much larger your max_fsm_pages should be \n> than it currently is.\n\nThis sounds sane to me, will do. Thanks again!\n\n\n", "msg_date": "Mon, 25 Feb 2008 15:32:10 -0800", "msg_from": "Sean Leach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird issue with planner choosing seq scan" } ]
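Two quick catalog checks cover the points raised in the thread above: whether an old transaction is pinning the dead rows, and how much dead space each aggregation run leaves behind. This is only a sketch against 8.3's statistics views, using the table name from the thread; adjust names as needed.

-- Oldest open transactions; a long-lived one (for example a connection
-- left idle in transaction by the daemon) prevents VACUUM from
-- reclaiming the dead rows created by the aggregation run.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;

-- Dead-tuple counts from the stats collector, useful for watching how
-- much bloat one aggregation run adds before settling on a value for
-- max_fsm_pages or scheduling a VACUUM FULL.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'u_counts';

If n_dead_tup stays high across several runs even though autovacuum is firing, the advice above applies: raise max_fsm_pages past the number of pages reported with useful free space, or reclaim the space explicitly after each run.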
[ { "msg_contents": "Hi all,\n\n i have strange problem with performance in PostgreSQL (8.1.9). My problem\nshortly:\n\n I'm using postgreSQL via JDBC driver (postgresql-8.1-404.jdbc3.jar) and\nasking the database for search on table with approximately 3 000 000\nrecords.\n I have created functional index table(lower(href) varchar_pattern_ops)\nbecause of lower case \"like\" searching. When i ask the database directly\nfrom psql, it returns result in 0,5 ms, but when i put the same command via\njdbc driver, it returns in 10 000 ms. Where can be the problem?? Any problem\nwith PostgreSQL tuning??\n\nThe command is\nselect df.id as id, df.c as c, df.href as href, df.existing as existing,\ndf.filesize as filesize from documentfile df where (lower(href) like\n'aba001!_2235800001.djvu' escape '!' ) order by id limit 1 Thank you very\nmuch for any help,\n\n Kind regards,\n\n Pavel Rotek\n\nHi all,\n\n  i have strange problem with performance in PostgreSQL (8.1.9). My \nproblem shortly:\n\n  I'm using postgreSQL via JDBC driver (postgresql-8.1-404.jdbc3.jar) \nand asking the database for search on table with approximately 3 000 000 \nrecords.\n  I have created functional index table(lower(href) \nvarchar_pattern_ops) because of lower case \"like\" searching. When i ask \nthe database directly from psql, it returns result in 0,5 ms, but when i \nput the same command via jdbc driver, it returns in 10 000 ms. Where can \nbe the problem?? Any problem with PostgreSQL tuning??\n\nThe command is\nselect df.id as id, df.c as c, df.href as href, df.existing as existing, \ndf.filesize as filesize from documentfile df where (lower(href) like \n'aba001!_2235800001.djvu' escape '!' ) order by  id limit 1 \n  Thank you very much for any help,\n\n  Kind regards,\n\n  Pavel Rotek", "msg_date": "Mon, 25 Feb 2008 11:06:16 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "response time when querying via JDBC and via psql differs" }, { "msg_contents": "2008/2/25, Pavel Rotek <[email protected]>:\n> I have created functional index table(lower(href) varchar_pattern_ops)\n> because of lower case \"like\" searching. When i ask the database directly\n> from psql, it returns result in 0,5 ms, but when i put the same command via\n> jdbc driver, it returns in 10 000 ms. Where can be the problem?? Any problem\n> with PostgreSQL tuning??\n\nMost likely the problem is that the JDBC driver uses prepared statements, in\nwhich the query is planned withouth the concrete argument value. For like only\npatterns that don't start with % or _ can use the index. Without the argument\nvalue PostgreSQL can't tell whether that is the case, so it takes the safe\nroute and chooses a sequential scan.\n\nto solve this particular problem, you have to convince jdbc to not use a\nprepared statement for this particular query.\n\nMarkus\n", "msg_date": "Mon, 25 Feb 2008 17:10:32 +0600", "msg_from": "\"Markus Bertheau\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: response time when querying via JDBC and via psql differs" }, { "msg_contents": "The thing to remember here is that prepared statements are only planned once\nand strait queries are planned for each query.\n\nWhen you give the query planner some concrete input like in your example\nthen it will happily use the index because it can check if the input starts\nwith % or _. 
If you use JDBC to set up a prepared statement like:\n\n> select df.id as id, df.c as c, df.href as href, df.existing as existing,\n> df.filesize as filesize from documentfile df where (lower(href) like ?\n> escape '!' ) order by id limit 1\n\nthen the query planner takes the safe route like Markus said and doesn't use\nthe index.\n\nI think your best bet is to use connection.createStatement instead of\nconnection.prepareStatement. The gain in query performance will offset the\nloss in planning overhead. I'm reasonably sure the plans are cached anyway.\n\n--Nik\nOn Mon, Feb 25, 2008 at 6:10 AM, Markus Bertheau <\[email protected]> wrote:\n\n> 2008/2/25, Pavel Rotek <[email protected]>:\n> > I have created functional index table(lower(href) varchar_pattern_ops)\n> > because of lower case \"like\" searching. When i ask the database directly\n> > from psql, it returns result in 0,5 ms, but when i put the same command\n> via\n> > jdbc driver, it returns in 10 000 ms. Where can be the problem?? Any\n> problem\n> > with PostgreSQL tuning??\n>\n> Most likely the problem is that the JDBC driver uses prepared statements,\n> in\n> which the query is planned withouth the concrete argument value. For like\n> only\n> patterns that don't start with % or _ can use the index. Without the\n> argument\n> value PostgreSQL can't tell whether that is the case, so it takes the safe\n> route and chooses a sequential scan.\n>\n> to solve this particular problem, you have to convince jdbc to not use a\n> prepared statement for this particular query.\n>\n> Markus\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nThe thing to remember here is that prepared statements are only planned once and strait queries are planned for each query.\n \nWhen you give the query planner some concrete input like in your example then it will happily use the index because it can check if the input starts with % or _.  If you use JDBC to set up a prepared statement like:\nselect df.id as id, df.c as c, df.href as href, df.existing as existing, df.filesize as filesize from documentfile df where (lower(href) like ? escape '!' ) order by  id limit 1\nthen the query planner takes the safe route like Markus said and doesn't use the index.\n \nI think your best bet is to use connection.createStatement instead of connection.prepareStatement.  The gain in query performance will offset the loss in planning overhead.  I'm reasonably sure the plans are cached anyway.\n \n--Nik\nOn Mon, Feb 25, 2008 at 6:10 AM, Markus Bertheau <[email protected]> wrote:\n2008/2/25, Pavel Rotek <[email protected]>:\n>   I have created functional index table(lower(href) varchar_pattern_ops)> because of lower case \"like\" searching. When i ask the database directly> from psql, it returns result in 0,5 ms, but when i put the same command via\n> jdbc driver, it returns in 10 000 ms. Where can be the problem?? Any problem> with PostgreSQL tuning??Most likely the problem is that the JDBC driver uses prepared statements, inwhich the query is planned withouth the concrete argument value. For like only\npatterns that don't start with % or _ can use the index. 
Without the argumentvalue PostgreSQL can't tell whether that is the case, so it takes the saferoute and chooses a sequential scan.to solve this particular problem, you have to convince jdbc to not use a\nprepared statement for this particular query.Markus---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to      choose an index scan if your joining column's datatypes do not\n      match", "msg_date": "Mon, 25 Feb 2008 09:11:29 -0500", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: response time when querying via JDBC and via psql differs" }, { "msg_contents": "Do not use setString() method to pass the parameter to the\nPreparedStatement in JDBC. Construct an SQL query string as you write\nit here and query the database with this new SQL string. This will\nmake the planner to recreate a plan every time for every new SQL\nstring per session (that is not usually good) but it will make the\nplanner to choose a correct plan.\n\n-- Valentine Gogichashvili\n\nOn Feb 25, 11:06 am, [email protected] (\"Pavel Rotek\") wrote:\n> Hi all,\n>\n> i have strange problem with performance in PostgreSQL (8.1.9). My problem\n> shortly:\n>\n> I'm using postgreSQL via JDBC driver (postgresql-8.1-404.jdbc3.jar) and\n> asking the database for search on table with approximately 3 000 000\n> records.\n> I have created functional index table(lower(href) varchar_pattern_ops)\n> because of lower case \"like\" searching. When i ask the database directly\n> from psql, it returns result in 0,5 ms, but when i put the same command via\n> jdbc driver, it returns in 10 000 ms. Where can be the problem?? Any problem\n> with PostgreSQL tuning??\n>\n> The command is\n> select df.id as id, df.c as c, df.href as href, df.existing as existing,\n> df.filesize as filesize from documentfile df where (lower(href) like\n> 'aba001!_2235800001.djvu' escape '!' ) order by id limit 1 Thank you very\n> much for any help,\n>\n> Kind regards,\n>\n> Pavel Rotek\n\n", "msg_date": "Tue, 26 Feb 2008 05:26:52 -0800 (PST)", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: response time when querying via JDBC and via psql differs" } ]
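The planner side of this can be reproduced from psql alone, without JDBC, by comparing a literal query with a prepared one. A rough sketch against the table described above (the pattern is just the example value from the thread):

-- Planned with the concrete pattern: the planner can see there is no
-- leading % or _, so the functional index on lower(href) is usable.
EXPLAIN SELECT id FROM documentfile
WHERE lower(href) LIKE 'aba001!_2235800001.djvu' ESCAPE '!';

-- Planned once with an unknown parameter, which is what the JDBC
-- driver's PreparedStatement does: the pattern might start with a
-- wildcard, so the generic plan falls back to a sequential scan.
PREPARE docsearch(text) AS
  SELECT id FROM documentfile
  WHERE lower(href) LIKE $1 ESCAPE '!';
EXPLAIN EXECUTE docsearch('aba001!_2235800001.djvu');
DEALLOCATE docsearch;

Seeing the second plan switch to a sequential scan confirms the diagnosis, and is why the advice in this thread is to send the query with the value inlined (createStatement, or building the SQL string) rather than through a parameterised prepared statement.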
[ { "msg_contents": ">Also, it might be worth enabling log_lock_waits to see if the slow\n>notifies are due to having to wait on some lock or other.\n\nTurning on log_lock_waits shows that there is a lot of waiting for \nlocks on the pg_listener table ala:\n\nprocess 22791 still waiting for ExclusiveLock on relation 2614 of \ndatabase 16387 after 992.397 ms\n...\nprocess 22791 acquired ExclusiveLock on relation 2614 of database \n16387 after 1433.152 ms\n\ndeadlock_timeout is left at the default 1s setting.\n\nThough these are being generated during application activity - \nrunning the simulation script does produce 300ms - 600ms NOTIFY \nstatements but doesn't (at the moment) trigger a lock_wait log entry.\n", "msg_date": "Mon, 25 Feb 2008 08:48:20 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" } ]
[ { "msg_contents": ">Also, it might be worth enabling log_lock_waits to see if the slow\n>notifies are due to having to wait on some lock or other.\n\nTurning on log_lock_waits shows that there is a lot of waiting for \nlocks on the pg_listener table ala:\n\nprocess 22791 still waiting for ExclusiveLock on relation 2614 of \ndatabase 16387 after 992.397 ms\n...\nprocess 22791 acquired ExclusiveLock on relation 2614 of database \n16387 after 1433.152 ms\n\ndeadlock_timeout is left at the default 1s setting.\n\nThough these are being generated during application activity - \nrunning the simulation script does produce 300ms - 600ms NOTIFY \nstatements but doesn't (at the moment) trigger a lock_wait log entry.\n", "msg_date": "Mon, 25 Feb 2008 09:42:12 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "Joel Stevenson <[email protected]> writes:\n>> Also, it might be worth enabling log_lock_waits to see if the slow\n>> notifies are due to having to wait on some lock or other.\n\n> Turning on log_lock_waits shows that there is a lot of waiting for \n> locks on the pg_listener table ala:\n\nInteresting. The LISTEN/NOTIFY mechanism itself takes ExclusiveLock\non pg_listener, but never for very long at a time (assuming pg_listener\ndoesn't get horribly bloated, which we know isn't happening for you).\n\nAnother thought that comes to mind is that maybe the delays you see\ncome from these lock acquisitions getting blocked behind autovacuums of\npg_listener. I did not see that while trying to replicate your problem,\nbut maybe the issue requires more update load on pg_listener than the\ntest script can create by itself, or maybe some nondefault autovacuum\nsetting is needed --- what are you using?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Feb 2008 13:13:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3 " }, { "msg_contents": "At 1:13 PM -0500 2/25/08, Tom Lane wrote:\n>Joel Stevenson <[email protected]> writes:\n>>> Also, it might be worth enabling log_lock_waits to see if the slow\n>>> notifies are due to having to wait on some lock or other.\n>\n>> Turning on log_lock_waits shows that there is a lot of waiting for\n>> locks on the pg_listener table ala:\n>\n>Interesting. The LISTEN/NOTIFY mechanism itself takes ExclusiveLock\n>on pg_listener, but never for very long at a time (assuming pg_listener\n>doesn't get horribly bloated, which we know isn't happening for you).\n>\n>Another thought that comes to mind is that maybe the delays you see\n>come from these lock acquisitions getting blocked behind autovacuums of\n>pg_listener. I did not see that while trying to replicate your problem,\n>but maybe the issue requires more update load on pg_listener than the\n>test script can create by itself, or maybe some nondefault autovacuum\n>setting is needed --- what are you using?\n\nDefault autovacuum settings.\n\nI turned on all autovacuum logging and cranked up the test script and \nhave it fork 25 consumers each running 25 iterations. 
At that level \non my machine I can get the lock waiting to exceed the 1s \ndeadlock_timeout right away but the autovacuum activity on \npg_listener is entirely absent until the end when the forked \nconsumers are mostly done and disconnected.\n", "msg_date": "Mon, 25 Feb 2008 11:15:44 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" }, { "msg_contents": "Joel Stevenson <[email protected]> writes:\n> I turned on all autovacuum logging and cranked up the test script and \n> have it fork 25 consumers each running 25 iterations. At that level \n> on my machine I can get the lock waiting to exceed the 1s \n> deadlock_timeout right away but the autovacuum activity on \n> pg_listener is entirely absent until the end when the forked \n> consumers are mostly done and disconnected.\n\nHmph. At 25/100 I can get a few complaints about NOTIFY taking more\nthan 20ms, but it seems to be due to blocking behind autovacuum, as\nin this example:\n\n2008-02-25 14:53:41.812 EST 13773 LOG: automatic vacuum of table \"joels.pg_catalog.pg_listener\": index scans: 0\n\tpages: 0 removed, 78 remain\n\ttuples: 5560 removed, 25 remain\n\tsystem usage: CPU 0.00s/0.00u sec elapsed 0.00 sec\n2008-02-25 14:53:41.850 EST 13773 LOG: automatic analyze of table \"joels.pg_catalog.pg_listener\" system usage: CPU 0.00s/0.00u sec elapsed 0.03 sec\n2008-02-25 14:53:41.851 EST 13728 LOG: duration: 29.270 ms statement: NOTIFY to_producer\n2008-02-25 14:53:41.852 EST 13758 LOG: duration: 22.835 ms statement: NOTIFY to_producer\n\n\nIt's weird that the behavior is robust for you but I can't make it\nhappen at all. Would you show the output of pg_config, as well as\nall your nondefault postgresql.conf settings?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Feb 2008 14:57:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3 " }, { "msg_contents": "At 2:57 PM -0500 2/25/08, Tom Lane wrote:\n>It's weird that the behavior is robust for you but I can't make it\n>happen at all. 
Would you show the output of pg_config, as well as\n>all your nondefault postgresql.conf settings?\n\npg_config:\nBINDIR = /usr/local/pgsql/bin\nDOCDIR = /usr/local/pgsql/doc\nINCLUDEDIR = /usr/local/pgsql/include\nPKGINCLUDEDIR = /usr/local/pgsql/include\nINCLUDEDIR-SERVER = /usr/local/pgsql/include/server\nLIBDIR = /usr/local/pgsql/lib\nPKGLIBDIR = /usr/local/pgsql/lib\nLOCALEDIR =\nMANDIR = /usr/local/pgsql/man\nSHAREDIR = /usr/local/pgsql/share\nSYSCONFDIR = /usr/local/pgsql/etc\nPGXS = /usr/local/pgsql/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = 'CFLAGS=-O2 -pipe' '--with-openssl' \n'--enable-thread-safety' '--with-includes=/usr/kerberos/include' \n'--with-perl'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE -I/usr/kerberos/include\nCFLAGS = -O2 -pipe -Wall -Wmissing-prototypes -Wpointer-arith \n-Winline -Wdeclaration-after-statement -fno-strict-aliasing\nCFLAGS_SL = -fpic\nLDFLAGS = -Wl,-rpath,'/usr/local/pgsql/lib'\nLDFLAGS_SL =\nLIBS = -lpgport -lssl -lcrypto -lz -lreadline -ltermcap -lcrypt -ldl -lm\nVERSION = PostgreSQL 8.3.0\n\n\nNon-default postgresql.conf settings:\nmax_connections = 80\nssl = on\nshared_buffers = 1GB\nwork_mem = 100MB\nmaintenance_work_mem = 100MB\nmax_fsm_pages = 204800\nvacuum_cost_delay = 100\nwal_buffers = 124kB\nwal_writer_delay = 200ms\ncommit_delay = 100\ncheckpoint_segments = 6\neffective_cache_size = 6GB\n", "msg_date": "Mon, 25 Feb 2008 12:37:24 -0800", "msg_from": "Joel Stevenson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LISTEN / NOTIFY performance in 8.3" } ]
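While the test script from this thread runs, the pg_listener contention can be watched directly; a sketch of the sort of queries involved (8.3 catalogs):

-- Physical size of pg_listener; heavy LISTEN/NOTIFY churn between
-- autovacuum runs shows up here as page growth.
SELECT relpages, reltuples,
       pg_size_pretty(pg_relation_size('pg_catalog.pg_listener')) AS size
FROM pg_class
WHERE relname = 'pg_listener';

-- Sessions holding or waiting for locks on pg_listener; NOTIFY takes
-- ExclusiveLock on it, so blocked notifiers show up as not granted.
SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'pg_catalog.pg_listener'::regclass;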
[ { "msg_contents": "I have a cascading delete trigger that is obviously using a seqscan.\n(Explain analyze shows that trigger as taking over 1000s while all\nother triggers are <1s. The value in test delete didn't even appear in\nthis child table, so an index scan would have been almost instant.)\n\nIf I do\nDELETE FROM child_table WHERE fkey = value;\nI get an index scan. Why doesn't the trigger do that, and how can I\nforce it to re-plan?\n\n", "msg_date": "Mon, 25 Feb 2008 12:56:26 -0800", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": true, "msg_subject": "when is a DELETE FK trigger planned?" }, { "msg_contents": "Andrew Lazarus <[email protected]> writes:\n> I have a cascading delete trigger that is obviously using a seqscan.\n> (Explain analyze shows that trigger as taking over 1000s while all\n> other triggers are <1s. The value in test delete didn't even appear in\n> this child table, so an index scan would have been almost instant.)\n\n> If I do\n> DELETE FROM child_table WHERE fkey = value;\n> I get an index scan. Why doesn't the trigger do that, and how can I\n> force it to re-plan?\n\nThat would depend on what PG version you're using.\n\nHowever, starting a fresh connection should get you a new trigger\nfunction plan in any case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Feb 2008 20:25:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: when is a DELETE FK trigger planned? " }, { "msg_contents": "I figured out what appears to happen with cascading delete using a\nseqscan. In this case, the foreign keys in the child table are not\nequally distributed. A few parent values occur often. Most parent\nvalues do not occur at all. So the planner, faced with an unknown\ngeneric key, takes the safe route.\n\nWhat I've done is remove the FK (maybe it would be better to leave it\nalbeit disabled for documentation) and written my own AFTER DELETE\ntrigger that uses EXECUTE to delay planning until the actual value is\nknown. This appears to work correctly.\n\n-- \nSincerely,\n Andrew Lazarus mailto:[email protected]", "msg_date": "Wed, 27 Feb 2008 16:54:20 -0800", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: when is a DELETE FK trigger planned?" } ]
[ { "msg_contents": "Hi,\n\nYou may remember some thread about data loading performances and \nmulti-threading support in pgloader:\n http://archives.postgresql.org/pgsql-performance/2008-02/msg00081.php\n\nThe pgloader code to handle this is now ready to get tested, a more structured \nproject could talk about a Release Candidate status.\n http://pgloader.projects.postgresql.org/dev/TODO.html\n http://pgloader.projects.postgresql.org/dev/pgloader.1.html#_parallel_loading\n http://packages.debian.org/pgloader --- experimental has the next version\n\nAs for the performances benefit of this new version (2.3.0~dev2), all the work \ncould be reduced to zilch because of the python Global Interpreter Lock, \nwhich I've been aware of late in the development effort.\n http://docs.python.org/api/threads.html\n\nThis documentation states that (a) using generators you're not that \nconcerned, and (b) the global lock still allows for IO and processing at the \nsame time. As pgloader uses generators, I'm still not sure how much a problem \nthis will be.\n\nI'd like to have some feedback about the new version, in term of bugs \nencountered and performance limitations (is pgloader up to what you would \nexpect a multi-threaded loader to be at?)\n\nRegards,\n-- \ndim", "msg_date": "Tue, 26 Feb 2008 13:08:53 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": true, "msg_subject": "multi-threaded pgloader needs your tests" }, { "msg_contents": "On Tue, 2008-02-26 at 13:08 +0100, Dimitri Fontaine wrote:\n\n> I'd like to have some feedback about the new version, in term of bugs \n> encountered and performance limitations (is pgloader up to what you would \n> expect a multi-threaded loader to be at?)\n\nMaybe post to general as well if you don't get any replies here.\n\nNew feature is very important for us.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk\n\n", "msg_date": "Sat, 01 Mar 2008 18:17:33 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multi-threaded pgloader needs your tests" }, { "msg_contents": "Hi,\n\nLe mardi 26 février 2008, Dimitri Fontaine a écrit :\n> You may remember some thread about data loading performances and\n> multi-threading support in pgloader:\n> http://archives.postgresql.org/pgsql-performance/2008-02/msg00081.php\n\nAs people here have asked for the new features implemented into pgloader \n2.3.0, I'm happy to post here about the availability of the new version!\n http://pgfoundry.org/projects/pgloader\n http://pgfoundry.org/forum/forum.php?forum_id=1283\n\nPlease consider this as a testbed related to the parallel COPY and pg_restore \nimprovements which have been discussed here and on -hackers, as that's how \nthose new features came to life.\n\nRegards,\n-- \ndim", "msg_date": "Mon, 10 Mar 2008 17:08:08 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multi-threaded pgloader needs your tests" }, { "msg_contents": "Hi,\n\nLe samedi 01 mars 2008, Simon Riggs a écrit :\n> On Tue, 2008-02-26 at 13:08 +0100, Dimitri Fontaine wrote:\n> > I'd like to have some feedback about the new version, in term of bugs\n> > encountered and performance limitations (is pgloader up to what you would\n> > expect a multi-threaded loader to be at?)\n>\n> Maybe post to general as well if you don't get any replies here.\n> New feature is very important for us.\n\nSo, here's yet another mail about pgloader 
new 2.3.0 version, please forgive \nme for being over zealous here if that's how I appear to be to you...\n\nThose links will give you detailed information about the new release.\n http://pgfoundry.org/projects/pgloader \n http://pgfoundry.org/forum/forum.php?forum_id=1283\n http://pgloader.projects.postgresql.org/#_parallel_loading\n\nRegards,\n-- \ndim", "msg_date": "Mon, 10 Mar 2008 17:18:16 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] multi-threaded pgloader makes it in version 2.3.0" }, { "msg_contents": "On Mon, 2008-03-10 at 17:18 +0100, Dimitri Fontaine wrote:\n\n> Le samedi 01 mars 2008, Simon Riggs a écrit :\n> > On Tue, 2008-02-26 at 13:08 +0100, Dimitri Fontaine wrote:\n> > > I'd like to have some feedback about the new version, in term of bugs\n> > > encountered and performance limitations (is pgloader up to what you would\n> > > expect a multi-threaded loader to be at?)\n> >\n> > Maybe post to general as well if you don't get any replies here.\n> > New feature is very important for us.\n> \n> So, here's yet another mail about pgloader new 2.3.0 version, please forgive \n> me for being over zealous here if that's how I appear to be to you...\n> \n> Those links will give you detailed information about the new release.\n> http://pgfoundry.org/projects/pgloader \n> http://pgfoundry.org/forum/forum.php?forum_id=1283\n> http://pgloader.projects.postgresql.org/#_parallel_loading\n\nSounds good.\n\nNot sure when or why I would want an rrqueue_size larger than\ncopy_every, and less sounds very strange. Can we get away with it being\nthe same thing in all cases?\n\nDo you have some basic performance numbers? It would be good to\nunderstand the overhead of the parallelism on a large file with 1, 2 and\n4 threads. Would be good to see if synchronous_commit = off helped speed\nthings up as well.\n\nPresumably -V and -T still work when we go parallel, but just issue one\nquery?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk\n\n", "msg_date": "Mon, 10 Mar 2008 17:14:23 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] multi-threaded pgloader makes it in version 2.3.0" }, { "msg_contents": "Le lundi 10 mars 2008, Simon Riggs a écrit :\n> Not sure when or why I would want an rrqueue_size larger than\n> copy_every, and less sounds very strange. Can we get away with it being\n> the same thing in all cases?\n\nIn fact, that's just that you asked for a reader which reads one line at a \ntime and feed the workers in a round robin fashion, and I wanted to feed them \nmore than 1 line at a time, hence this parameter. Of course it could well be \nit's not needed, and I'll then deprecate it in next version.\nPlease note it defaults to what you want it to be, so you can just forget \nabout it...\n\nI'm beginning to think you asked 1 line at a time for the first version to be \neasier to implement... :)\n\n> Do you have some basic performance numbers? It would be good to\n> understand the overhead of the parallelism on a large file with 1, 2 and\n> 4 threads. Would be good to see if synchronous_commit = off helped speed\n> things up as well.\n\nDidn't have the time to test this performance wise, that's why I asked for \ntesting last time. 
I've planned some perf tests if only to have the \nopportunity to write up some presentation article, but didn't find the time \nto run them yet.\n\n> Presumably -V and -T still work when we go parallel, but just issue one\n> query?\n\nStill work, of course, the 'controller' thread will issue them before to \nparallelize the work or begin to read the input file. Rejecting still works \nthe same too, threads share a reject object which is protected by a lock \n(mutex), so the file don't get mixed line. \nI've tried not to compromise any existing feature by adding the parallel ones, \nand didn't have to at the end of it.\n\nRegards,\n-- \ndim", "msg_date": "Mon, 10 Mar 2008 18:57:38 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] multi-threaded pgloader makes it in version 2.3.0" } ]
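The synchronous_commit question above is easy to test on its own, since in 8.3 the setting is per session; a rough sketch of a hand-rolled load with it turned off (table and file names are placeholders):

-- Only this session gives up the wait for WAL flush at commit; a crash
-- can lose the last few commits of the load, but does not risk
-- corruption the way fsync = off does.
SET synchronous_commit TO off;

BEGIN;
COPY staging_table FROM '/tmp/data.csv' WITH CSV;
COMMIT;

RESET synchronous_commit;

(Server-side COPY needs superuser; \copy from psql does the same thing from the client side.) Comparing timings with and without the SET gives a rough upper bound on what pgloader could gain from issuing it on its own connections.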
[ { "msg_contents": "Hi,\n\nI'm having some issues with this simple query:\n\nSELECT\n _comment.*,\n _article.title AS article_title,\n _article.reference AS article_reference\nFROM\n _comment\n INNER JOIN _article\n ON _article.id = _comment.parent_id\nWHERE\n _comment.path <@ '0.1.3557034'\nORDER BY\n _comment.date_publishing DESC\nOFFSET 0\nLIMIT 5\n;\n\nThe varying information here is the ltree path \"0.1.3557034\"\n\nUsually it's quite fast (<1s) but sometimes after an ANALYZE on the\n_comment table it gets so slow it's killing our servers. And it's\nreally random.\nWe run our servers with default_statistics_target=100, I tried setting\nit up to 1000 (max) but it does not change this wrong behavior.\n\nI executed the same query on our 11 servers, 3 of them executed the\nquery slowly after the ANALYZE. Sometimes it happens to more,\nsometimes to less.\nHere is the EXPLAIN ANALYZE data on those 3 servers before and after\nthe ANALYZE execution.\n\n===== Server 1 =( ======\n\n===== The Query on server 1 before an ANALYZE =====\nLimit (cost=16286.04..16286.06 rows=5 width=567) (actual\ntime=62.521..62.526 rows=5 loops=1)\n -> Sort (cost=16286.04..16289.89 rows=1539 width=567) (actual\ntime=62.519..62.520 rows=5 loops=1)\n Sort Key: _comment.date_publishing\n -> Nested Loop (cost=0.00..16204.57 rows=1539 width=567)\n(actual time=2.063..44.517 rows=3606 loops=1)\n -> Index Scan using gist_idx_comment_path on _comment\n(cost=0.00..4736.73 rows=1539 width=534) (actual time=2.038..20.487\nrows=3748 loops=1)\n Index Cond: (path <@ '0.1.14666029'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..7.44 rows=1 width=41) (actual time=0.004..0.004 rows=1\nloops=3748)\n Index Cond: (_article.id = _comment.parent_id)\nTotal runtime: 64.844 ms\n\n===== The Query on server 1 after an ANALYZE =====\n Limit (cost=0.00..11082.13 rows=5 width=569) (actual\ntime=313945.051..693805.921 rows=5 loops=1)\n -> Nested Loop (cost=0.00..34057601.77 rows=15366 width=569)\n(actual time=313945.049..693805.912 rows=5 loops=1)\n -> Index Scan Backward using idx_comment_date_publishing on\n_comment (cost=0.00..33949736.04 rows=15366 width=536) (actual\ntime=313923.129..693755.772 rows=5 loops=1)\n Filter: (path <@ '0.1.14666029'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..7.01 rows=1 width=41) (actual time=10.016..10.018 rows=1\nloops=5)\n Index Cond: (_article.id = _comment.parent_id)\nTotal runtime: 693806.044 ms\n\n===== Poor Server 2 ='( ======\n\n ===== The Query on server 2 before an ANALYZE =====\n Limit (cost=21096.49..21096.51 rows=5 width=586) (actual\ntime=34.184..34.187 rows=5 loops=1)\n -> Sort (cost=21096.49..21100.33 rows=1535 width=586) (actual\ntime=34.182..34.184 rows=5 loops=1)\n Sort Key: _comment.date_publishing\n -> Nested Loop (cost=0.00..21015.26 rows=1535 width=586)\n(actual time=0.119..25.232 rows=3606 loops=1)\n -> Index Scan using gist_idx_comment_path on _comment\n(cost=0.00..6325.53 rows=1535 width=553) (actual time=0.100..11.066\nrows=3748 loops=1)\n Index Cond: (path <@ '0.1.14666029'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..9.56 rows=1 width=41) (actual time=0.002..0.003 rows=1\nloops=3748)\n Index Cond: (_article.id = _comment.parent_id)\nTotal runtime: 34.658 ms\n\n===== The Query on server 2 after an ANALYZE =====\n Limit (cost=0.00..18806.13 rows=5 width=585) (actual\ntime=363344.748..575823.722 rows=5 loops=1)\n -> Nested Loop (cost=0.00..57764897.33 rows=15358 width=585)\n(actual time=363344.747..575823.715 rows=5 
loops=1)\n -> Index Scan Backward using idx_comment_date_publishing on\n_comment (cost=0.00..57618270.03 rows=15358 width=552) (actual\ntime=363344.681..575823.502 rows=5 loops=1)\n Filter: (path <@ '0.1.14666029'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..9.53 rows=1 width=41) (actual time=0.036..0.036 rows=1\nloops=5)\n Index Cond: (_article.id = _comment.parent_id)\nTotal runtime: 575823.796 ms\n\n===== Poor Server 3 ='(((( ======\n\n ===== The Query on server 3 before an ANALYZE =====\n Limit (cost=20563.80..20563.81 rows=5 width=585) (actual\ntime=31.424..31.428 rows=5 loops=1)\n -> Sort (cost=20563.80..20567.64 rows=1539 width=585) (actual\ntime=31.423..31.424 rows=5 loops=1)\n Sort Key: _comment.date_publishing\n -> Nested Loop (cost=0.00..20482.32 rows=1539 width=585)\n(actual time=1.198..22.912 rows=3606 loops=1)\n -> Index Scan using gist_idx_comment_path on _comment\n(cost=0.00..6341.85 rows=1539 width=552) (actual time=1.160..9.641\nrows=3748 loops=1)\n Index Cond: (path <@ '0.1.14666029'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..9.18 rows=1 width=41) (actual time=0.002..0.003 rows=1\nloops=3748)\n Index Cond: (_article.id = _comment.parent_id)\nTotal runtime: 31.850 ms\n\n===== The Query on server 3 after an ANALYZE =====\n Limit (cost=0.00..18726.66 rows=5 width=585) (actual\ntime=171378.294..286416.273 rows=5 loops=1)\n -> Nested Loop (cost=0.00..57577000.69 rows=15373 width=585)\n(actual time=171378.293..286416.269 rows=5 loops=1)\n -> Index Scan Backward using idx_comment_date_publishing on\n_comment (cost=0.00..57436080.63 rows=15373 width=552) (actual\ntime=171378.249..286416.062 rows=5 loops=1)\n Filter: (path <@ '0.1.14666029'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..9.15 rows=1 width=41) (actual time=0.034..0.034 rows=1\nloops=5)\n Index Cond: (_article.id = _comment.parent_id)\nTotal runtime: 286416.339 ms\n\nHow can we stick the planner to the faster execution plan ?\n\nPlease help our poor servers, they are tired ;)\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Tue, 26 Feb 2008 16:19:03 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "PG planning randomly ?" 
}, { "msg_contents": "2008/2/26, Laurent Raufaste <[email protected]>:\n> Hi,\n>\n> I'm having some issues with this simple query:\n>\n> SELECT\n> _comment.*,\n> _article.title AS article_title,\n> _article.reference AS article_reference\n> FROM\n> _comment\n> INNER JOIN _article\n> ON _article.id = _comment.parent_id\n> WHERE\n> _comment.path <@ '0.1.3557034'\n> ORDER BY\n> _comment.date_publishing DESC\n> OFFSET 0\n> LIMIT 5\n> ;\n\nI forgot the table definition, here it is ;)\n\n Table \"ob2._comment\"\n Column | Type |\n Modifiers\n-------------------+-----------------------------+--------------------------------------------------------------\n id | bigint | not null default\nnextval('element_id_sequence'::regclass)\n parent_id | bigint |\n path | ltree |\n data | text |\n date_creation | timestamp without time zone | not null default now()\n date_publishing | timestamp without time zone | not null default now()\n date_modification | timestamp without time zone | not null default now()\n counters | hstore |\n reference | integer | not null default\nnextval('_comment_reference_seq'::regclass)\n text | text |\nIndexes:\n \"_comment_pkey\" PRIMARY KEY, btree (id), tablespace \"indexspace\"\n \"gist_idx_comment_path\" gist (path), tablespace \"indexspace\"\n \"idx_comment_date_creation\" btree (date_creation), tablespace \"indexspace\"\n \"idx_comment_date_publishing\" btree (date_publishing), tablespace\n\"indexspace\"\n \"idx_comment_parent_id\" btree (parent_id), tablespace \"indexspace\"\n \"idx_comment_reference\" btree (reference), tablespace \"indexspace\"\nInherits: _element\n\nThanks for looking into ou problem !\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Tue, 26 Feb 2008 16:35:03 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG planning randomly ?" }, { "msg_contents": "\"Laurent Raufaste\" <[email protected]> writes:\n> I'm having some issues with this simple query:\n\n> SELECT\n> _comment.*,\n> _article.title AS article_title,\n> _article.reference AS article_reference\n> FROM\n> _comment\n> INNER JOIN _article\n> ON _article.id = _comment.parent_id\n> WHERE\n> _comment.path <@ '0.1.3557034'\n> ORDER BY\n> _comment.date_publishing DESC\n> OFFSET 0\n> LIMIT 5\n> ;\n\n> The varying information here is the ltree path \"0.1.3557034\"\n\nWhat PG version is this?\n\nIf it's 8.2 or later then increasing the stats target for _comment.path\nto 100 or more would likely help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 11:31:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG planning randomly ? 
" }, { "msg_contents": "2008/2/26, Tom Lane <[email protected]>:\n>\n> What PG version is this?\n>\n> If it's 8.2 or later then increasing the stats target for _comment.path\n> to 100 or more would likely help.\n>\n\nI'm using PG 8.2.4.\nWe are using 100 as default_statistics_target by default and all our\ncolumn are using this value:\n# SELECT attname,attstattarget FROM pg_attribute WHERE attrelid=16743\nAND attname='path' ;\n attname | attstattarget\n---------+---------------\n path | -1\n\nI tried increasing the stats target with the command:\nSET default_statistics_target=1000 ;\nThat's the command I launched before executing the ANALYZE showed in\nthe previous mail.\nThe ANALYZE were longer to complete, but it did not change the planner\nbehavior (sometimes right, sometimes wrong).\n\nI did not try setting up the target stats directly using an ALTER\nTABLE because it implies some LOCK on our replication cluster. Do you\nthink the planner will act differently by using an ALTER TABLE rather\nthen just the \"SET default_statistics_target\" command ?\n\nIf so, I will try it =)\n\nThanks.\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Tue, 26 Feb 2008 18:12:07 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG planning randomly ?" }, { "msg_contents": "\"Laurent Raufaste\" <[email protected]> writes:\n> 2008/2/26, Tom Lane <[email protected]>:\n>> If it's 8.2 or later then increasing the stats target for _comment.path\n>> to 100 or more would likely help.\n\n> I'm using PG 8.2.4.\n> We are using 100 as default_statistics_target by default and all our\n> column are using this value:\n\nHmm, that ought to be enough to activate the better selectivity\nestimator.\n\nUnless ... did you update this database from a pre-8.2 DB that already\nhad contrib/ltree in it? If so, did you just load the existing old\ndefinition of ltree as part of your dump, or did you install 8.2's\nversion fresh? I'm wondering if you have a definition of operator <@\nthat doesn't specify the new selectivity estimator. Please try a\npg_dump -s and see what it shows as the definition of <@.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 12:59:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG planning randomly ? " }, { "msg_contents": "2008/2/26, Tom Lane <[email protected]>:\n> \"Laurent Raufaste\" <[email protected]> writes:\n>\n> > 2008/2/26, Tom Lane <[email protected]>:\n>\n> >> If it's 8.2 or later then increasing the stats target for _comment.path\n> >> to 100 or more would likely help.\n>\n> > I'm using PG 8.2.4.\n> > We are using 100 as default_statistics_target by default and all our\n> > column are using this value:\n>\n>\n> Hmm, that ought to be enough to activate the better selectivity\n> estimator.\n>\n> Unless ... did you update this database from a pre-8.2 DB that already\n> had contrib/ltree in it? If so, did you just load the existing old\n> definition of ltree as part of your dump, or did you install 8.2's\n> version fresh? I'm wondering if you have a definition of operator <@\n> that doesn't specify the new selectivity estimator. 
Please try a\n> pg_dump -s and see what it shows as the definition of <@.\n>\n> regards, tom lane\n>\n\nHere's the first definition of the <@ operator in my dump:\n\n--\n-- Name: <@; Type: OPERATOR; Schema: public; Owner: postgres\n--\nCREATE OPERATOR <@ (\n PROCEDURE = ltree_risparent,\n LEFTARG = ltree,\n RIGHTARG = ltree,\n COMMUTATOR = @>,\n RESTRICT = ltreeparentsel,\n JOIN = contjoinsel\n);\nALTER OPERATOR public.<@ (ltree, ltree) OWNER TO postgres;\n\nOur data was created on an older PG (8.1.x) but we installed 8.2.x\nfrom scratch, only dumping the schema and the data in it. I used ltree\nfound in the 8.2.4 source.\n\nDo you think an update of ltree, or better of the database will fix\nthe problem ?\n\nWe plan on upgrading to the 8.3 branch in the next weeks, but this\nbehavior can't wait this much as our servers are overburned from time\nto time =(\n\nThanks for your help ;)\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Tue, 26 Feb 2008 21:05:24 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG planning randomly ?" }, { "msg_contents": "On Tue, Feb 26, 2008 at 11:12 AM, Laurent Raufaste <[email protected]> wrote:\n> 2008/2/26, Tom Lane <[email protected]>:\n>\n> >\n> > What PG version is this?\n> >\n> > If it's 8.2 or later then increasing the stats target for _comment.path\n> > to 100 or more would likely help.\n> >\n>\n> I'm using PG 8.2.4.\n> We are using 100 as default_statistics_target by default and all our\n> column are using this value:\n> # SELECT attname,attstattarget FROM pg_attribute WHERE attrelid=16743\n> AND attname='path' ;\n> attname | attstattarget\n> ---------+---------------\n> path | -1\n>\n> I tried increasing the stats target with the command:\n> SET default_statistics_target=1000 ;\n> That's the command I launched before executing the ANALYZE showed in\n> the previous mail.\n> The ANALYZE were longer to complete, but it did not change the planner\n> behavior (sometimes right, sometimes wrong).\n\nYou're doing it wrong. The default target affects newly created\ncolumns / tables. You need to use alter table to change a stats\ntarget after creation. Like so:\n\nalter table abc alter column xyz set statistics 100;\n", "msg_date": "Tue, 26 Feb 2008 14:06:24 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG planning randomly ?" }, { "msg_contents": "\"Laurent Raufaste\" <[email protected]> writes:\n> 2008/2/26, Tom Lane <[email protected]>:\n>> ... I'm wondering if you have a definition of operator <@\n>> that doesn't specify the new selectivity estimator. Please try a\n>> pg_dump -s and see what it shows as the definition of <@.\n\n> Here's the first definition of the <@ operator in my dump:\n\n> CREATE OPERATOR <@ (\n> PROCEDURE = ltree_risparent,\n> LEFTARG = ltree,\n> RIGHTARG = ltree,\n> COMMUTATOR = @>,\n> RESTRICT = ltreeparentsel,\n> JOIN = contjoinsel\n> );\n\nThat's the right RESTRICT function, but what exactly did you mean by\n\"first definition\"? Are there more?\n\nIt may be that it's just not possible for the estimator to come up with\naccurate rowcount estimates given the amount of info it has available.\nThe query you are complaining about confuses the issue quite a lot by\ninvolving other issues. 
Would you try just \"explain analyze select 1\nfrom _commment where path <@ '....';\" for various typical path values,\nand see if it's coming up with estimated rowcounts that are in the right\nballpark compared to the actual ones?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 16:57:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG planning randomly ? " }, { "msg_contents": "\"Scott Marlowe\" <[email protected]> writes:\n> On Tue, Feb 26, 2008 at 11:12 AM, Laurent Raufaste <[email protected]> wrote:\n>> I tried increasing the stats target with the command:\n>> SET default_statistics_target=1000 ;\n>> That's the command I launched before executing the ANALYZE showed in\n>> the previous mail.\n\n> You're doing it wrong. The default target affects newly created\n> columns / tables. You need to use alter table to change a stats\n> target after creation. Like so:\n> alter table abc alter column xyz set statistics 100;\n\nThat's completely incorrect. If the column doesn't have a specific\nstats target (indicated by -1 in attstattarget, which Laurent showed\nus was the case), then ANALYZE will use the current value of\ndefault_statistics_target. The table-creation-time value of that\nparameter isn't relevant at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 16:59:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG planning randomly ? " }, { "msg_contents": "2008/2/26, Tom Lane <[email protected]>:\n>\n> That's the right RESTRICT function, but what exactly did you mean by\n> \"first definition\"? Are there more?\n\nI thought it was enough, here is the complete definition of the <@ operator:\n\n--\n-- Name: <@; Type: OPERATOR; Schema: public; Owner: postgres\n--\n\nCREATE OPERATOR <@ (\n PROCEDURE = ltree_risparent,\n LEFTARG = ltree,\n RIGHTARG = ltree,\n COMMUTATOR = @>,\n RESTRICT = ltreeparentsel,\n JOIN = contjoinsel\n);\n\n\nALTER OPERATOR public.<@ (ltree, ltree) OWNER TO postgres;\n\n--\n-- Name: <@; Type: OPERATOR; Schema: public; Owner: postgres\n--\n\nCREATE OPERATOR <@ (\n PROCEDURE = _ltree_r_isparent,\n LEFTARG = ltree,\n RIGHTARG = ltree[],\n COMMUTATOR = @>,\n RESTRICT = contsel,\n JOIN = contjoinsel\n);\n\n\nALTER OPERATOR public.<@ (ltree, ltree[]) OWNER TO postgres;\n\n--\n-- Name: <@; Type: OPERATOR; Schema: public; Owner: postgres\n--\n\nCREATE OPERATOR <@ (\n PROCEDURE = _ltree_risparent,\n LEFTARG = ltree[],\n RIGHTARG = ltree,\n COMMUTATOR = @>,\n RESTRICT = contsel,\n JOIN = contjoinsel\n);\n\n\nALTER OPERATOR public.<@ (ltree[], ltree) OWNER TO postgres;\n\n--\n-- Name: <@; Type: OPERATOR; Schema: public; Owner: postgres\n--\n\nCREATE OPERATOR <@ (\n PROCEDURE = hs_contained,\n LEFTARG = hstore,\n RIGHTARG = hstore,\n COMMUTATOR = @>,\n RESTRICT = contsel,\n JOIN = contjoinsel\n);\n\n\nALTER OPERATOR public.<@ (hstore, hstore) OWNER TO postgres;\n\n\n>\n> It may be that it's just not possible for the estimator to come up with\n> accurate rowcount estimates given the amount of info it has available.\n> The query you are complaining about confuses the issue quite a lot by\n> involving other issues. 
Would you try just \"explain analyze select 1\n> from _commment where path <@ '....';\" for various typical path values,\n> and see if it's coming up with estimated rowcounts that are in the right\n> ballpark compared to the actual ones?\n>\n\nIt might be the source of the problem =)\nI executed the following query on all the servers with a varying path\n(but with the same path on each server), before and after lauching an\nANALYZE _comment.\n\nEXPLAIN ANALYZE SELECT 1\nFROM _comment\nWHERE path <@ '0.1.810879'\n;\n\nOn every server except one it showed the same plan before and after the ANALYZE:\n Bitmap Heap Scan on _comment (cost=174.87..6163.31 rows=1536\nwidth=0) (actual time=1.072..1.495 rows=1070 loops=1)\n Recheck Cond: (path <@ '0.1.14155763'::ltree)\n -> Bitmap Index Scan on gist_idx_comment_path (cost=0.00..174.48\nrows=1536 width=0) (actual time=1.058..1.058 rows=1070 loops=1)\n Index Cond: (path <@ '0.1.14155763'::ltree)\n Total runtime: 1.670 ms\n\nOn a random server, the plan before the ANALYZE was:\n Bitmap Heap Scan on _comment (cost=15833.00..440356.99 rows=155649\nwidth=0) (actual time=1.581..2.885 rows=1070 loops=1)\n Recheck Cond: (path <@ '0.1.14155763'::ltree)\n -> Bitmap Index Scan on gist_idx_comment_path\n(cost=0.00..15794.09 rows=155649 width=0) (actual time=1.552..1.552\nrows=1070 loops=1)\n Index Cond: (path <@ '0.1.14155763'::ltree)\n Total runtime: 3.160 ms\n\nThe runtime is ok, but the planned cost is huge, because the row count\nof the index scan estimates 100x more rows. After the ANALYZE it was\nlike the others. If this wrong row count happens, I understand why the\nplanner try to find an alternative plan in the first query I showed\nyou in a previous mail.\n\nHow can I help him to better estimate the row count ? Setting\ndefault_stats_target to 1000 did not help =(\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Wed, 27 Feb 2008 18:38:48 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG planning randomly ?" }, { "msg_contents": "\"Laurent Raufaste\" <[email protected]> writes:\n> On a random server, the plan before the ANALYZE was:\n> Bitmap Heap Scan on _comment (cost=15833.00..440356.99 rows=155649\n> width=0) (actual time=1.581..2.885 rows=1070 loops=1)\n> Recheck Cond: (path <@ '0.1.14155763'::ltree)\n> -> Bitmap Index Scan on gist_idx_comment_path\n> (cost=0.00..15794.09 rows=155649 width=0) (actual time=1.552..1.552\n> rows=1070 loops=1)\n> Index Cond: (path <@ '0.1.14155763'::ltree)\n> Total runtime: 3.160 ms\n\n> The runtime is ok, but the planned cost is huge, because the row count\n> of the index scan estimates 100x more rows. After the ANALYZE it was\n> like the others. If this wrong row count happens, I understand why the\n> planner try to find an alternative plan in the first query I showed\n> you in a previous mail.\n\n> How can I help him to better estimate the row count ? Setting\n> default_stats_target to 1000 did not help =(\n\nAre you sure the table had been analyzed recently at all on that server?\n\nIf it had, then what you must be dealing with is a different result from\na different random sample. The laws of statistics say that sometimes a\nrandom sample won't be very representative ... but if the sample is\nreasonably large they also say that won't happen very often. You could\ntry ANALYZEing over and over and seeing what rowcount estimate you get\nafter each one. 
If you frequently get a bad estimate, maybe it would be\nworth looking at the pg_stats row for _comment.path to see if there's\nanything obviously bogus about the bad samples.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Feb 2008 15:36:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG planning randomly ? " }, { "msg_contents": "2008/2/27, Tom Lane <[email protected]>:\n>\n> Are you sure the table had been analyzed recently at all on that server?\n>\n> If it had, then what you must be dealing with is a different result from\n> a different random sample. The laws of statistics say that sometimes a\n> random sample won't be very representative ... but if the sample is\n> reasonably large they also say that won't happen very often. You could\n> try ANALYZEing over and over and seeing what rowcount estimate you get\n> after each one. If you frequently get a bad estimate, maybe it would be\n> worth looking at the pg_stats row for _comment.path to see if there's\n> anything obviously bogus about the bad samples.\n>\n\nThanks for your help Tom, it's greetly appreciated here =)\n\nYes, I ANALYZE the table before any bunch of request.\nI isolated the problem, it happens when a lot (thousands) of rows in\nthe _comment table are matching the query. I can now reproduce the\nproblem at will, and on any server, even on our development server.\n\n- I took 3 paths mathing thousands of rows: 0.1.4108047, 0.1.15021804\nand 0.1.4749259\n- I wrote the following script:\nANALYZE _comment (default_stats_target is 100)\nEXPLAIN ANALYZE SELECT 1 FROM _comment WHERE _path <@ 0.1.4108047 ;\nEXPLAIN ANALYZE SELECT 1 FROM _comment WHERE _path <@ 0.1.15021804 ;\nEXPLAIN ANALYZE SELECT 1 FROM _comment WHERE _path <@ 0.1.4749259 ;\nSELECT * FROM pg_stats WHERE tablename = '_comment' AND attname='path';\n\nIf I execute it a lot of times, approx. 2/3 of the executed query costs are OK:\n\nBitmap Heap Scan on _comment (cost=114.24..4634.75 rows=1540 width=0)\n(actual time=6.715..13.836 rows=12589 loops=1)\n Recheck Cond: (path <@ '0.1.15021804'::ltree)\n -> Bitmap Index Scan on gist_idx_comment_path (cost=0.00..113.85\nrows=1540 width=0) (actual time=6.515..6.515 rows=12589 loops=1)\n Index Cond: (path <@ '0.1.15021804'::ltree)\n\nAnd 1/3 of the executed queries are huge:\n\nBitmap Heap Scan on _comment (cost=10366.65..342840.71 rows=156174\nwidth=0) (actual time=6.513..12.984 rows=12589 loops=1)\n Recheck Cond: (path <@ '0.1.15021804'::ltree)\n -> Bitmap Index Scan on gist_idx_comment_path (cost=0.00..10327.61\nrows=156174 width=0) (actual time=6.313..6.313 rows=12589 loops=1)\n Index Cond: (path <@ '0.1.15021804'::ltree)\n\nThe pg_stats table show no strange value, and the only rows not\nconstant are null_frac and correlation. avg_width go from 56 to 57 and\nn_disctinct stay to -1 (which is OK, all path are distinct) after each\nANALYZE:\n [schemaname] => ob2\n [tablename] => _comment\n [attname] => path\n [null_frac] => 6.66667e-05\n [avg_width] => 56\n [n_distinct] => -1\n [most_common_vals] =>\n [most_common_freqs] =>\n [correlation] => -0.256958\n\nIf I do the same test with default_stats_target=1000, I get the same\nbehavior (huge row counts) but a bit closer to the reality. 
Instead of\nonly getting 2 different estimations accross all the requests\n(rows=1540 and rows=156174), I get 3 different ones: (rows=1543\nrows=15446 and rows=61784).\n\nThe problem is that the cost is still huge compared to the reality.\nAnd the query we use in our production environment switch to a\ndifferent way of running it.\n\nFast version:\n\nLimit (cost=15557.29..15557.30 rows=5 width=570) (actual\ntime=1305.824..1305.829 rows=5 loops=1)\n -> Sort (cost=15557.29..15561.14 rows=1540 width=570) (actual\ntime=1305.822..1305.825 rows=5 loops=1)\n Sort Key: _comment.date_publishing\n -> Nested Loop (cost=0.00..15475.75 rows=1540 width=570)\n(actual time=0.185..847.502 rows=61537 loops=1)\n -> Index Scan using gist_idx_comment_path on _comment\n(cost=0.00..4746.26 rows=1540 width=537) (actual time=0.118..307.553\nrows=64825 loops=1)\n Index Cond: (path <@ '0.1.4108047'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..6.95 rows=1 width=41) (actual time=0.006..0.006 rows=1\nloops=64825)\n Index Cond: (_article.id = _comment.parent_id)\n\n\nSlow version:\n\nLimit (cost=0.00..1047.60 rows=5 width=566) (actual time=0.352..1.625\nrows=5 loops=1)\n -> Nested Loop (cost=0.00..32663447.76 rows=155897 width=566)\n(actual time=0.351..1.620 rows=5 loops=1)\n -> Index Scan Backward using idx_comment_date_publishing on\n_comment (cost=0.00..31719108.69 rows=155897 width=533) (actual\ntime=0.286..1.412 rows=5 loops=1)\n Filter: (path <@ '0.1.4108047'::ltree)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..6.04 rows=1 width=41) (actual time=0.038..0.039 rows=1\nloops=5)\n Index Cond: (_article.id = _comment.parent_id)\n\nDon't you think an increase in some RAM parameter would help the\nserver working on this kind of query ? We have 20+GB of RAM for those\nservers\n\nThanks.\n\n-- \nLaurent Raufaste\n<http://www.glop.org/>\n", "msg_date": "Thu, 28 Feb 2008 18:46:43 +0100", "msg_from": "\"Laurent Raufaste\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG planning randomly ?" } ]
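A minimal sketch of the per-column form of the statistics advice given in this thread: pin a larger sample size to _comment.path with ALTER TABLE (so every ANALYZE, including autovacuum's, uses it) instead of relying on a session-level default_statistics_target, then re-check the estimate. The table, column, and path value are taken from the thread; the target of 1000 is only an illustrative choice, and ALTER TABLE does briefly lock the table, which is the concern Laurent raises above.

-- Attach a larger statistics target to the column itself.
ALTER TABLE _comment ALTER COLUMN path SET STATISTICS 1000;

-- Re-sample the table with the new target.
ANALYZE _comment;

-- Compare the planner's row estimate for a problem subtree
-- against the actual row count.
EXPLAIN ANALYZE
SELECT 1
FROM _comment
WHERE path <@ '0.1.4108047';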
[ { "msg_contents": "This might be a weird question...is there any way to disable a\nparticular index without dropping it?\n\nThere are a few queries I run where I'd like to test out the effects\nof having (and not having) different indexes on particular query plans\nand performance. I'd really prefer not to have to drop and ultimately\nrecreate a particular index, as some of the data sets are quite large.\n\nSo, is there any way to do this, or at least mimic this sort of behavior?\n\nPeter\n", "msg_date": "Tue, 26 Feb 2008 14:46:04 -0600", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "disabling an index without deleting it?" }, { "msg_contents": "On Tue, Feb 26, 2008 at 2:46 PM, Peter Koczan <[email protected]> wrote:\n> This might be a weird question...is there any way to disable a\n> particular index without dropping it?\n>\n> There are a few queries I run where I'd like to test out the effects\n> of having (and not having) different indexes on particular query plans\n> and performance. I'd really prefer not to have to drop and ultimately\n> recreate a particular index, as some of the data sets are quite large.\n>\n> So, is there any way to do this, or at least mimic this sort of behavior?\n\nThe brick to the head method would use set enable_indexscan = off;\nHowever, you can delete an index without actually deleting it like so:\n\nbegin;\ndrop index abc_dx;\nselect ....\nrollback;\n\nand viola, your index is still there. note that there are likely some\nlocking issues with this, so be careful with it in production. But on\na test box it's a very easy way to test various indexes.\n", "msg_date": "Tue, 26 Feb 2008 14:57:51 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Tue, 26 Feb 2008 14:57:51 -0600\r\n\"Scott Marlowe\" <[email protected]> wrote:\r\n\r\n \r\n> The brick to the head method would use set enable_indexscan = off;\r\n> However, you can delete an index without actually deleting it like so:\r\n> \r\n> begin;\r\n> drop index abc_dx;\r\n> select ....\r\n> rollback;\r\n> \r\n> and viola, your index is still there. note that there are likely some\r\n> locking issues with this, so be careful with it in production. But on\r\n> a test box it's a very easy way to test various indexes.\r\n\r\nWouldn't you also bloat the index?\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHxH6rATb/zqfZUUQRAp//AJ4wKiA4WRprp3L3y9UEAzz2rb2+hACaA9b7\r\nA1k3n6GkyFwx2vrbnpD8CX0=\r\n=zYaI\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Tue, 26 Feb 2008 13:03:39 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n> \"Scott Marlowe\" <[email protected]> wrote:\n>> begin;\n>> drop index abc_dx;\n>> select ....\n>> rollback;\n>> \n>> and viola, your index is still there. 
note that there are likely some\n>> locking issues with this, so be careful with it in production. But on\n>> a test box it's a very easy way to test various indexes.\n\n> Wouldn't you also bloat the index?\n\nNo, what makes you think that? The index won't change at all in the\nabove example. The major problem is, as Scott says, that DROP INDEX\ntakes exclusive lock on the table so any other sessions will be locked\nout of it for the duration of your test query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 17:22:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it? " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Tue, 26 Feb 2008 17:22:40 -0500\r\nTom Lane <[email protected]> wrote:\r\n\r\n> \"Joshua D. Drake\" <[email protected]> writes:\r\n> > \"Scott Marlowe\" <[email protected]> wrote:\r\n> >> begin;\r\n> >> drop index abc_dx;\r\n> >> select ....\r\n> >> rollback;\r\n> >> \r\n> >> and viola, your index is still there. note that there are likely\r\n> >> some locking issues with this, so be careful with it in\r\n> >> production. But on a test box it's a very easy way to test\r\n> >> various indexes.\r\n> \r\n> > Wouldn't you also bloat the index?\r\n> \r\n> No, what makes you think that? \r\n\r\nWell now that I am obviously wrong :P. I was thinking about it from the:\r\n\r\nBEGIN;\r\nDELETE FROM\r\nROLLBACK;\r\n\r\nPerspective.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHxJSyATb/zqfZUUQRAnSPAJkB6Gz0gUTPohXcFak9LbVYIdxCtwCfWvxp\r\ngQZymMaKEXfo2Mf1E2BWtUk=\r\n=p+EO\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Tue, 26 Feb 2008 14:37:38 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "2008/2/27, Tom Lane <[email protected]>:\n> \"Joshua D. Drake\" <[email protected]> writes:\n> > \"Scott Marlowe\" <[email protected]> wrote:\n>\n> >> begin;\n> >> drop index abc_dx;\n> >> select ....\n> >> rollback;\n> >>\n> >> and viola, your index is still there. note that there are likely some\n> >> locking issues with this, so be careful with it in production. But on\n> >> a test box it's a very easy way to test various indexes.\n>\n> > Wouldn't you also bloat the index?\n>\n>\n> No, what makes you think that? The index won't change at all in the\n> above example. The major problem is, as Scott says, that DROP INDEX\n> takes exclusive lock on the table so any other sessions will be locked\n> out of it for the duration of your test query.\n\nWhy is the exclusive lock not taken later, so that this method can be\nused reasonably risk-free on production systems? From what I\nunderstand the later would be either a statement that would\n(potentially) be modifying the index, like an UPDATE or an INSERT, or\nactual transaction commit. 
If none of these occur and the transaction\nis rollbacked, the exclusive lock doesn't have to be taken at all.\n\nMarkus\n\n-- \nMarkus Bertheau\nBlog: http://www.bluetwanger.de/blog/\n", "msg_date": "Wed, 27 Feb 2008 08:48:06 +0600", "msg_from": "\"Markus Bertheau\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "On Tue, Feb 26, 2008 at 8:48 PM, Markus Bertheau\n<[email protected]> wrote:\n> 2008/2/27, Tom Lane <[email protected]>:\n>\n>\n> > \"Joshua D. Drake\" <[email protected]> writes:\n> > > \"Scott Marlowe\" <[email protected]> wrote:\n> >\n> > >> begin;\n> > >> drop index abc_dx;\n> > >> select ....\n> > >> rollback;\n> > >>\n> > >> and viola, your index is still there. note that there are likely some\n> > >> locking issues with this, so be careful with it in production. But on\n> > >> a test box it's a very easy way to test various indexes.\n> >\n> > > Wouldn't you also bloat the index?\n> >\n> >\n> > No, what makes you think that? The index won't change at all in the\n> > above example. The major problem is, as Scott says, that DROP INDEX\n> > takes exclusive lock on the table so any other sessions will be locked\n> > out of it for the duration of your test query.\n>\n> Why is the exclusive lock not taken later, so that this method can be\n> used reasonably risk-free on production systems? From what I\n> understand the later would be either a statement that would\n> (potentially) be modifying the index, like an UPDATE or an INSERT, or\n> actual transaction commit. If none of these occur and the transaction\n> is rollbacked, the exclusive lock doesn't have to be taken at all.\n\nIt would rock to be able to do that on a production database. Any\nOracle DBA looking over your shoulder would fall to the floor and need\nresuscitation.\n", "msg_date": "Tue, 26 Feb 2008 21:48:57 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "\"Markus Bertheau\" <[email protected]> writes:\n> 2008/2/27, Tom Lane <[email protected]>:\n>> No, what makes you think that? The index won't change at all in the\n>> above example. The major problem is, as Scott says, that DROP INDEX\n>> takes exclusive lock on the table so any other sessions will be locked\n>> out of it for the duration of your test query.\n\n> Why is the exclusive lock not taken later, so that this method can be\n> used reasonably risk-free on production systems?\n\nEr, later than what? Once the DROP is pending, other transactions can\nhardly safely use the index for lookups, and what should they do about\ninsertions?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Feb 2008 23:48:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it? " }, { "msg_contents": "On Tue, Feb 26, 2008 at 10:48 PM, Tom Lane <[email protected]> wrote:\n> \"Markus Bertheau\" <[email protected]> writes:\n> > 2008/2/27, Tom Lane <[email protected]>:\n>\n> >> No, what makes you think that? The index won't change at all in the\n> >> above example. The major problem is, as Scott says, that DROP INDEX\n> >> takes exclusive lock on the table so any other sessions will be locked\n> >> out of it for the duration of your test query.\n>\n> > Why is the exclusive lock not taken later, so that this method can be\n> > used reasonably risk-free on production systems?\n>\n> Er, later than what? 
Once the DROP is pending, other transactions can\n> hardly safely use the index for lookups, and what should they do about\n> insertions?\n\nI see what you're saying. Sadly, my dreams of drop index concurrently\nappear dashed.\n", "msg_date": "Tue, 26 Feb 2008 23:16:54 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "2008/2/27, Scott Marlowe <[email protected]>:\n> On Tue, Feb 26, 2008 at 10:48 PM, Tom Lane <[email protected]> wrote:\n> > \"Markus Bertheau\" <[email protected]> writes:\n> > > 2008/2/27, Tom Lane <[email protected]>:\n> >\n> > >> No, what makes you think that? The index won't change at all in the\n> > >> above example. The major problem is, as Scott says, that DROP INDEX\n> > >> takes exclusive lock on the table so any other sessions will be locked\n> > >> out of it for the duration of your test query.\n> >\n> > > Why is the exclusive lock not taken later, so that this method can be\n> > > used reasonably risk-free on production systems?\n> >\n> > Er, later than what? Once the DROP is pending, other transactions can\n> > hardly safely use the index for lookups, and what should they do about\n> > insertions?\n>\n>\n> I see what you're saying. Sadly, my dreams of drop index concurrently\n> appear dashed.\n\nMaybe a different syntax: DROP INDEX DEFERRED, which will make the\ncurrent transaction behave as if the index was dropped but not\nactually drop it until the end of the transaction. Inserts and updates\nof this and other transactions behave as if the index existed.\n\nOn the other hand, if the only reason to have that feature is to plan\nand execute queries pretending that one index doesn't exist, then DROP\nINDEX DEFERRED is not the most straightforward syntax.\n\nMarkus\n\n-- \nMarkus Bertheau\nBlog: http://www.bluetwanger.de/blog/\n", "msg_date": "Wed, 27 Feb 2008 11:29:55 +0600", "msg_from": "\"Markus Bertheau\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "\"Markus Bertheau\" <[email protected]> writes:\n> On the other hand, if the only reason to have that feature is to plan\n> and execute queries pretending that one index doesn't exist, then DROP\n> INDEX DEFERRED is not the most straightforward syntax.\n\nYeah, I was just about to mention that 8.3 has a hook that allows a\nplug-in to manipulate the planner's notions of which indexes exist.\nIgnoring a specific index would be really trivial.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Feb 2008 00:38:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it? " }, { "msg_contents": "On Tue, 2008-02-26 at 17:22 -0500, Tom Lane wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n> > \"Scott Marlowe\" <[email protected]> wrote:\n> >> begin;\n> >> drop index abc_dx;\n> >> select ....\n> >> rollback;\n> >> \n> >> and viola, your index is still there. note that there are likely some\n> >> locking issues with this, so be careful with it in production. But on\n> >> a test box it's a very easy way to test various indexes.\n> \n> > Wouldn't you also bloat the index?\n> \n> No, what makes you think that? The index won't change at all in the\n> above example. 
The major problem is, as Scott says, that DROP INDEX\n> takes exclusive lock on the table so any other sessions will be locked\n> out of it for the duration of your test query.\n\nIt may cause catalog bloat though, right?\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Wed, 27 Feb 2008 11:16:58 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n>>> begin;\n>>> drop index abc_dx;\n>>> select ....\n>>> rollback;\n\n> It may cause catalog bloat though, right?\n\nNot in this particular case; AFAIR this will only result in catalog row\ndeletions, not updates. So when the deletions roll back, there's no\ndead rows to clean up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Feb 2008 15:02:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it? " }, { "msg_contents": ">>> On Tue, Feb 26, 2008 at 10:48 PM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n \n> Er, later than what? Once the DROP is pending, other transactions can\n> hardly safely use the index for lookups, and what should they do about\n> insertions?\n \nOut of curiosity, couldn't any transaction using a snapshot prior to\nthe commit of the DROP continue to use it (just like an uncommited\nDELETE of a row)? The transaction doing the DROP wouldn't maintain\nit for modifications, which is fine whether it is committed or\nrolled back. There would just be the matter of \"vacuuming\" the\nindex out of physical existence once all transactions which could\nsee it are gone.\n \nThat's probably naive, but I'm curious what I'm missing.\n \n-Kevin\n\n\n\n", "msg_date": "Wed, 27 Feb 2008 14:50:10 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Out of curiosity, couldn't any transaction using a snapshot prior to\n> the commit of the DROP continue to use it (just like an uncommited\n> DELETE of a row)? The transaction doing the DROP wouldn't maintain\n> it for modifications, which is fine whether it is committed or\n> rolled back. There would just be the matter of \"vacuuming\" the\n> index out of physical existence once all transactions which could\n> see it are gone.\n\nYou can't just lazily remove the index after the last xact stops using\nit; there has to be an agreed synchronization point among all the\ntransactions. Otherwise you could have xact A expecting the index to\ncontain entries from the already-committed xact B, but B thought the\nindex was dead and didn't bother updating it.\n\nWe might be able to do something that would shorten the length of time\nthat the exclusive lock is held, but AFAICS we couldn't eliminate it\naltogether; and I'm unconvinced that merely shortening the interval\nis worth much extra complexity.\n\nIn the particular case at hand, a planner hook to make it ignore the\nindex is a far better solution anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Feb 2008 18:00:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it? 
" }, { "msg_contents": ">>> On Wed, Feb 27, 2008 at 5:00 PM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> Out of curiosity, couldn't any transaction using a snapshot prior to\n>> the commit of the DROP continue to use it (just like an uncommited\n>> DELETE of a row)? The transaction doing the DROP wouldn't maintain\n>> it for modifications, which is fine whether it is committed or\n>> rolled back. There would just be the matter of \"vacuuming\" the\n>> index out of physical existence once all transactions which could\n>> see it are gone.\n> \n> You can't just lazily remove the index after the last xact stops using\n> it; there has to be an agreed synchronization point among all the\n> transactions. Otherwise you could have xact A expecting the index to\n> contain entries from the already-committed xact B, but B thought the\n> index was dead and didn't bother updating it.\n \nIf xact A is using a snapshot from before the commit of the index\nDROP, it shouldn't see anything done after the drop anyway. If\nit's using a snapshot from after the DROP, it won't see the index.\nxact B would only fail to update the index if it was using a\nsnapshot after the drop, so I'm having trouble grasping the\nsequence of events where this is a problem. Could you outline\nthe series of events where the problem occurs?\n \n> In the particular case at hand, a planner hook to make it ignore the\n> index is a far better solution anyway...\n \nAgreed -- I was just curious whether we could eliminate a source of\nblocking raised in the discussion; and failing that, improve my\ngrasp of the PostgreSQL MVCC implementation.\n \n-Kevin\n \n\n", "msg_date": "Wed, 27 Feb 2008 17:18:50 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it?" }, { "msg_contents": "I wrote:\n> In the particular case at hand, a planner hook to make it ignore the\n> index is a far better solution anyway...\n\nJust as proof of concept, a quick-and-dirty version of this is attached.\nIt works in 8.3 and up. Sample (after compiling the .so):\n\nregression=# load '/home/tgl/pgsql/planignoreindex.so';\nLOAD\nregression=# explain select * from tenk1 where unique1 = 42;\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.00..8.27 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nregression=# set ignore_index TO 'tenk1_unique1';\nSET\nregression=# explain select * from tenk1 where unique1 = 42;\n QUERY PLAN \n---------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..483.00 rows=1 width=244)\n Filter: (unique1 = 42)\n(2 rows)\n\nregression=# \n\n\t\t\tregards, tom lane", "msg_date": "Wed, 27 Feb 2008 18:33:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it? " }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> If xact A is using a snapshot from before the commit of the index\n> DROP, it shouldn't see anything done after the drop anyway. If\n> it's using a snapshot from after the DROP, it won't see the index.\n> xact B would only fail to update the index if it was using a\n> snapshot after the drop, so I'm having trouble grasping the\n> sequence of events where this is a problem. 
Could you outline\n> the series of events where the problem occurs?\n\nYou're assuming that the query plan is as new as the snapshot is.\nThis isn't guaranteed, at least not without the locking that you\nseek to eliminate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Feb 2008 18:37:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disabling an index without deleting it? " } ]
[ { "msg_contents": "I've got some long running queries, and want to tune them.\nUsing simple logic, I can understand what expensive steps in the query plan\nought to be (seq scan and index scans using much rows), but I want to\nquantify; use a somewhat more scientific approach.\n\nThe manual states: \"Actually two numbers are shown: the start-up time before\nthe first row can be returned, and the total time to return all the rows.\".\nDoes this mean that the difference between the first and second is the cost\nor the time the step in the explain has taken?\n\nTIA\n\nfrits\n\nI've got some long running queries, and want to tune them.Using simple logic, I can understand what expensive steps in the query plan ought to be (seq scan and index scans using much rows), but I want to quantify; use a somewhat more scientific approach.\nThe manual states: \"Actually two numbers are shown: the start-up time before the first row can be returned, and the total time to return all the rows.\". Does this mean that the difference between the first and second is the cost or the time the step in the explain has taken?\nTIAfrits", "msg_date": "Wed, 27 Feb 2008 11:18:25 +0100", "msg_from": "\"Frits Hoogland\" <[email protected]>", "msg_from_op": true, "msg_subject": "how to identify expensive steps in an explain analyze output" }, { "msg_contents": "I've got some long running queries, and want to tune them.\nUsing simple logic, I can understand what expensive steps in the query plan\nought to be (seq scan and index scans using much rows), but I want to\nquantify; use a somewhat more scientific approach.\n\nThe manual states: \"Actually two numbers are shown: the start-up time before\nthe first row can be returned, and the total time to return all the rows.\".\nDoes this mean that the difference between the first and second is the cost\nor the time the step in the explain has taken?\n\nTIA\n\nfrits\n\nI've got some long running queries, and want to tune them.Using simple logic, I can understand what expensive steps in the query plan ought to be (seq scan and index scans using much rows), but I want to quantify; use a somewhat more scientific approach.\nThe manual states: \"Actually two numbers are shown: the start-up time before the first row can be returned, and the total time to return all the rows.\". Does this mean that the difference between the first and second is the cost or the time the step in the explain has taken?\nTIAfrits", "msg_date": "Wed, 27 Feb 2008 12:54:01 +0100", "msg_from": "\"Frits Hoogland\" <[email protected]>", "msg_from_op": true, "msg_subject": "how to identify expensive steps in an explain analyze output" }, { "msg_contents": "\"Frits Hoogland\" <[email protected]> writes:\n> The manual states: \"Actually two numbers are shown: the start-up time before\n> the first row can be returned, and the total time to return all the rows.\".\n> Does this mean that the difference between the first and second is the cost\n> or the time the step in the explain has taken?\n\nNo, or at least only for a very strange definition of \"cost\". An\nexample of the way these are used is that for a hash join, the startup\ntime would include the time needed to scan the inner relation and build\nthe hash table from it. The run time (ie, difference between startup\nand total) represents the part of the process where we're scanning the\nouter relation and probing into the hash table for matches. Rows are\nreturned as matches are found during this part of the process. 
I can't\nthink of any useful definition under which the startup time would be\nignored.\n\nThe reason the planner divides the total cost like this is that in the\npresence of LIMIT or a few other SQL features, it may not be necessary\nto run the plan to completion, but only to fetch the first few rows.\nIn this case a plan with low startup cost may be preferred, even though\nthe estimated total cost to run it to completion might be higher than\nsome other plan has. We're not *going* to run it to completion, and\nso the really interesting figure is startup cost plus some appropriate\nfraction of run cost. You can see this at work if you look at the\nEXPLAIN numbers for a query involving a LIMIT.\n\nThe whole thing might make a bit more sense if you read\nhttp://www.postgresql.org/docs/8.3/static/overview.html\nparticularly the last two subsections.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Feb 2008 10:26:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to identify expensive steps in an explain analyze output " }, { "msg_contents": "thanks for your answer!\n\nokay, cost is a totally wrong word\nhere. I am using the \"actual times\" in the execplan.\n\nwhat I am trying to do, is optimise a database both by investigating\nexecplans, and thinking about what concurrency would do to my database. (I\nhave a database which is reported to almost stop functioning under load)\n\nI've read the sections you pointed out. It's quite understandable.\n\nWhat I am missing, is the connection between the logical\nsteps (ie. the execplan) and the physical implications.\nFor example: the time it took for a seqscan to complete can be\ncomputed by subtracting the two times after \"actual time\"\n(because it's an end node in the execplan, is that assumption\n(scantime=totaltime-startuptime) right?)\nI can compute how long it approximately took for each row by dividing the\ntime through the number of rows (and loops)\nbut I do not know how many physical IO's it has done, and/or how many\nlogical IO's\nSame for merge joins&sorts: the physical implication (writing and\nreading if the amount of data exceeds work_mem) is not in the execplan.\nthat's the reason I mentioned \"cost\".\nI know understand that it's impossible to judge the \"cost\" of a merge join,\nbecause it's time is composited by both the scans and the merge operation\nitself, right?\n\nIs there any way to identify nodes in the execplan which \"cost\" many (CPU\ntime, IO, etc.)?\n\nregards\n\nfrits\n\nOn 2/27/08, Tom Lane <[email protected]> wrote:\n>\n> \"Frits Hoogland\" <[email protected]> writes:\n> > The manual states: \"Actually two numbers are shown: the start-up time\n> before\n> > the first row can be returned, and the total time to return all the\n> rows.\".\n> > Does this mean that the difference between the first and second is the\n> cost\n> > or the time the step in the explain has taken?\n>\n>\n> No, or at least only for a very strange definition of \"cost\". An\n> example of the way these are used is that for a hash join, the startup\n> time would include the time needed to scan the inner relation and build\n> the hash table from it. The run time (ie, difference between startup\n> and total) represents the part of the process where we're scanning the\n> outer relation and probing into the hash table for matches. Rows are\n> returned as matches are found during this part of the process. 
I can't\n> think of any useful definition under which the startup time would be\n> ignored.\n>\n> The reason the planner divides the total cost like this is that in the\n> presence of LIMIT or a few other SQL features, it may not be necessary\n> to run the plan to completion, but only to fetch the first few rows.\n> In this case a plan with low startup cost may be preferred, even though\n> the estimated total cost to run it to completion might be higher than\n> some other plan has. We're not *going* to run it to completion, and\n> so the really interesting figure is startup cost plus some appropriate\n> fraction of run cost. You can see this at work if you look at the\n> EXPLAIN numbers for a query involving a LIMIT.\n>\n> The whole thing might make a bit more sense if you read\n> http://www.postgresql.org/docs/8.3/static/overview.html\n> particularly the last two subsections.\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 27 Feb 2008 18:36:52 +0100", "msg_from": "\"Frits Hoogland\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to identify expensive steps in an explain analyze output" } ]
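One concrete way to see the startup-versus-total split described above is to run the same ordered query with and without a LIMIT and compare the two plans; a sketch, with made-up table and column names:

-- Run to completion: the total cost and total actual time are what matter.
EXPLAIN ANALYZE
SELECT * FROM orders ORDER BY created_at;      -- hypothetical table/column

-- With a LIMIT the executor can stop early, so a plan with a cheap startup
-- phase (for example an index scan that already returns rows in
-- created_at order) can win even if its cost to completion would be higher.
EXPLAIN ANALYZE
SELECT * FROM orders ORDER BY created_at LIMIT 10;

As a rough rule for reading the actual times: for a leaf node such as a sequential scan, the second (total) time multiplied by the loops count approximates the time spent in that node, while the times reported for upper nodes include their children, which is why a single node's own share cannot simply be read off the plan.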
[ { "msg_contents": "After reviewing http://www.postgresql.org/docs/8.3/static/sql-cluster.html a \ncouple of times, I have some questions:\n1) it says to run analyze after doing a cluster. i'm assuming autovacuum will \ntake care of this? or should i go ahead and do the analyze 'now' instead of \nwaiting?\n2) is there any internal data in the db that would allow me to \nprogrammatically determine which tables would benefit from being clustered?\n3) for that matter, is there info to allow me to determine which index it \nshould be clustered on in cases where the table has more than one index?\n4) for tables with >1 indexes, does clustering on one index negatively impact \nqueries that use the other indexes?\n5) is it better to cluster on a compound index (index on lastnamefirstname) or \non the underlying index (index on lastname)?\n\ntia\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nEverything takes twice as long as you plan for and produces results about half \nas good as you hoped.\n", "msg_date": "Wed, 27 Feb 2008 11:01:57 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "questions about CLUSTER" }, { "msg_contents": "In response to Douglas J Hunley <[email protected]>:\n\n> After reviewing http://www.postgresql.org/docs/8.3/static/sql-cluster.html a \n> couple of times, I have some questions:\n> 1) it says to run analyze after doing a cluster. i'm assuming autovacuum will \n> take care of this? or should i go ahead and do the analyze 'now' instead of \n> waiting?\n\nIt's always a good idea to analyze after major DB operations. Autovacuum\nonly runs so often. Also, it won't hurt anything, so why risk not doing\nit?\n\n> 2) is there any internal data in the db that would allow me to \n> programmatically determine which tables would benefit from being clustered?\n> 3) for that matter, is there info to allow me to determine which index it \n> should be clustered on in cases where the table has more than one index?\n\nThe pg_stat_user_indexes table keeps stats on how often the index is used.\nIndexes that are used frequently are candidates for clustering.\n\n> 4) for tables with >1 indexes, does clustering on one index negatively impact \n> queries that use the other indexes?\n\nNot likely. Clustering only really helps performance if you have an index\nthat is used to gather ranges of data. For example, if you frequently\ndo things like SELECT * FROM log WHERE logdate > 'somedate\" and < 'somedate,\nyou might benefit from clustering on logdate.\n\nBut it doesn't really do much if you're only ever pulling one record at a\ntime. It's the kind of thing that you really need to experiment with to\nunderstand whether it will have a worthwhile performance impact on your\ndata and your workload. I doubt if there's any pat answer.\n\n> 5) is it better to cluster on a compound index (index on lastnamefirstname) or \n> on the underlying index (index on lastname)?\n\nIf cluster helps you at all, it's going to help if you have an index that's\nfrequently used to fetch ranges of data. 
Whether that index is compound or\nnot isn't likely to factor in.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 27 Feb 2008 12:40:57 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about CLUSTER" }, { "msg_contents": "On Wednesday 27 February 2008 12:40:57 Bill Moran wrote:\n> In response to Douglas J Hunley <[email protected]>:\n> > After reviewing\n> > http://www.postgresql.org/docs/8.3/static/sql-cluster.html a couple of\n> > times, I have some questions:\n> > 1) it says to run analyze after doing a cluster. i'm assuming autovacuum\n> > will take care of this? or should i go ahead and do the analyze 'now'\n> > instead of waiting?\n\n> It's always a good idea to analyze after major DB operations. Autovacuum\n> only runs so often. Also, it won't hurt anything, so why risk not doing\n> it?\n\nbeing overly-cautious. i was concerned about both autovac and me doing \nanalyzes over each other\n\n>\n> > 2) is there any internal data in the db that would allow me to\n> > programmatically determine which tables would benefit from being\n> > clustered? 3) for that matter, is there info to allow me to determine\n> > which index it should be clustered on in cases where the table has more\n> > than one index?\n>\n> The pg_stat_user_indexes table keeps stats on how often the index is used.\n> Indexes that are used frequently are candidates for clustering.\n\nI had just started looking at this actually.\n\n>\n> > 4) for tables with >1 indexes, does clustering on one index negatively\n> > impact queries that use the other indexes?\n>\n> Not likely. Clustering only really helps performance if you have an index\n> that is used to gather ranges of data. For example, if you frequently\n> do things like SELECT * FROM log WHERE logdate > 'somedate\" and <\n> 'somedate, you might benefit from clustering on logdate.\n>\n> But it doesn't really do much if you're only ever pulling one record at a\n> time. It's the kind of thing that you really need to experiment with to\n> understand whether it will have a worthwhile performance impact on your\n> data and your workload. I doubt if there's any pat answer.\n\nmakes sense.\n\n>\n> > 5) is it better to cluster on a compound index (index on\n> > lastnamefirstname) or on the underlying index (index on lastname)?\n>\n> If cluster helps you at all, it's going to help if you have an index that's\n> frequently used to fetch ranges of data. Whether that index is compound or\n> not isn't likely to factor in.\n\nunderstood. i didn't really think it would matter, but its easier to ask than \nto screw up performance for existing customers :)\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\nIf a turtle doesn't have a shell, is he homeless or naked?\n", "msg_date": "Wed, 27 Feb 2008 13:35:16 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: questions about CLUSTER" }, { "msg_contents": "On Wednesday 27 February 2008 13:35:16 Douglas J Hunley wrote:\n> > > 2) is there any internal data in the db that would allow me to\n> > > programmatically determine which tables would benefit from being\n> > > clustered? 
3) for that matter, is there info to allow me to determine\n> > > which index it should be clustered on in cases where the table has more\n> > > than one index?\n> >\n> > The pg_stat_user_indexes table keeps stats on how often the index is\n> > used. Indexes that are used frequently are candidates for clustering.\n>\n> I had just started looking at this actually.\n\nok, so for a follow-on, should I be more concerned with idx_scan, \nidx_tup_read, or idx_tup_fetch when determining which indexes are 'good' \ncandidates?\n\nagain, tia. i feel like such a noob around here :)\n\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n\n<kwall> I don't get paid. I just get a tip as the money passes through my \nhands on its way from my employer to my debtors.\n", "msg_date": "Wed, 27 Feb 2008 13:45:11 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: questions about CLUSTER" }, { "msg_contents": "In response to Douglas J Hunley <[email protected]>:\n\n> On Wednesday 27 February 2008 13:35:16 Douglas J Hunley wrote:\n> > > > 2) is there any internal data in the db that would allow me to\n> > > > programmatically determine which tables would benefit from being\n> > > > clustered? 3) for that matter, is there info to allow me to determine\n> > > > which index it should be clustered on in cases where the table has more\n> > > > than one index?\n> > >\n> > > The pg_stat_user_indexes table keeps stats on how often the index is\n> > > used. Indexes that are used frequently are candidates for clustering.\n> >\n> > I had just started looking at this actually.\n> \n> ok, so for a follow-on, should I be more concerned with idx_scan, \n> idx_tup_read, or idx_tup_fetch when determining which indexes are 'good' \n> candidates?\n\nAgain, not an easy question to answer, as it's probably different for\ndifferent people.\n\nidx_scan is the count of how many times the index was used.\nidx_tup_read and idx_tup_fetch are counts of how much data has been\naccessed by using the index.\n\nThis part of the docs has more:\nhttp://www.postgresql.org/docs/8.2/static/monitoring-stats.html\n\nSo, you'll probably have to watch all of those if you want to determine\nwhen to automate clustering operations.\n\nPersonally, if I were you, I'd set up a test box and make sure\nclustering makes enough of a difference to be doing all of this work.\n\n> again, tia. i feel like such a noob around here :)\n\nBah ... we all start out as noobs. Just don't go googling for my posts\nfrom years back, it's embarrassing ...\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 27 Feb 2008 14:34:49 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about CLUSTER" } ]
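A sketch of the workflow discussed above: use the index statistics view to spot candidate indexes, cluster on the chosen one, then analyze immediately rather than waiting for autovacuum. The table and index names in the CLUSTER command are placeholders, and CLUSTER ... USING is the 8.3 syntax from the page referenced at the top of the thread.

-- Indexes that are scanned often and pull back many tuples per scan are the
-- likelier candidates (range scans benefit; single-row lookups rarely do).
SELECT schemaname, relname, indexrelname,
       idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

-- Rewrite the table in index order (takes an exclusive lock for the
-- duration), then refresh planner statistics right away.
CLUSTER log USING idx_log_logdate;   -- hypothetical table and index
ANALYZE log;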
[ { "msg_contents": "I've got a lot of rows in one table and a lot of rows in another table. I\nwant to do a bunch of queries on their join column. One of these is like\nthis: t1.col like '%t2.col%'\n\n \n\nI know that always sucks. I'm wondering how I can make it better. First, I\nshould let you know that I can likely hold both of these tables entirely in\nram. Since that's the case, would it be better to accomplish this with my\nprogramming language? Also you should know that in most cases, t1.col and\nt2.col is 2 words or less. I'm not sure if that matters, I mention it\nbecause it may make tsearch2 perform badly.", "msg_date": "Wed, 27 Feb 2008 11:19:22 -0800", "msg_from": "\"Dan Kaplan\" <[email protected]>", "msg_from_op": true, "msg_subject": "t1.col like '%t2.col%'" }, { "msg_contents": "On Wed, 27 Feb 2008, Dan Kaplan wrote:\n\n> I've got a lot of rows in one table and a lot of rows in another table. I\n> want to do a bunch of queries on their join column. 
One of these is like\n> this: t1.col like '%t2.col%'\n\nWe have an idea how to speedup wildcard search at the expense of the size - \nwe have to index all permutation of the original word. Then we could\nuse GIN for quieries like a*b.\n\n>\n>\n>\n> I know that always sucks. I'm wondering how I can make it better. First,\nI\n> should let you know that I can likely hold both of these tables entirely\nin\n> ram. Since that's the case, would it be better to accomplish this with my\n> programming language? Also you should know that in most cases, t1.col and\n> t2.col is 2 words or less. I'm not sure if that matters, I mention it\n> because it may make tsearch2 perform badly.\n>\n\ncontrib/pg_trgm should help you.\n\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Fri, 29 Feb 2008 15:52:31 -0800", "msg_from": "\"Dan Kaplan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: t1.col like '%t2.col%'" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 29 Feb 2008 15:52:31 -0800\r\n\"Dan Kaplan\" <[email protected]> wrote:\r\n\r\n> I learned a little about pg_trgm here:\r\n> http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm\r\n> \r\n> But this seems like it's for finding similarities, not substrings.\r\n> How can I use it to speed up t1.col like '%t2.col%'?\r\n\r\nFaster disks.\r\n\r\nNo matter what, that will seqscan. So if you want it to go faster, you\r\nneed faster hardware.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHyJu9ATb/zqfZUUQRAlPwAJ9XZvoWvNquuWGytvJfNlm79LBvtwCbBwRw\r\nuqb7fhD5+w87BzUoVEjICEY=\r\n=z5xQ\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Fri, 29 Feb 2008 15:56:42 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1.col like '%t2.col%'" }, { "msg_contents": "Joshua Drake spake thusly:\n\n> On Fri, 29 Feb 2008 15:52:31 -0800\n> \"Dan Kaplan\" <[email protected]> wrote:\n> \n> > I learned a little about pg_trgm here:\n> > http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm\n> > \n> > But this seems like it's for finding similarities, not substrings.\n> > How can I use it to speed up t1.col like '%t2.col%'?\n> \n> Faster disks.\n> \n> No matter what, that will seqscan. So if you want it to go faster, you\n> need faster hardware.\n\nWord!\n\nThat said ...\n\nOnce upon a time we had a requirement to allow users to search within US counties for property owner name or street names by text fragment.\n\nWe used the now deprecated Full Text Indexing (FTI) with some handwaving. 
But that was in PostgreSQL 7.4 and FTI is not in the contrib package for some time now. See <http://pgfoundry.org/projects/simplefti/> ... I looked at using it in 8.1 but my \"C\" chops weren't up to it, and it depended heavily on OIDs which we didn't want to use. Anyway, our business requirement evaporated so it doesn't matter to us now.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nGlobeXplorer LLC, a DigitalGlobe company\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)\n\n\n\n\n\nRE: [PERFORM] t1.col like '%t2.col%'\n\n\n\nJoshua Drake spake thusly:\n\n> On Fri, 29 Feb 2008 15:52:31 -0800\n> \"Dan Kaplan\" <[email protected]> wrote:\n>\n> > I learned a little about pg_trgm here:\n> > http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm\n> >\n> > But this seems like it's for finding similarities, not substrings.\n> > How can I use it to speed up t1.col like '%t2.col%'?\n>\n> Faster disks.\n>\n> No matter what, that will seqscan. So if you want it to go faster, you\n> need faster hardware.\n\nWord!\n\nThat said ...\n\nOnce upon a time we had a requirement to allow users to search within US counties for property owner name or street names by text fragment.\n\nWe used the now deprecated Full Text Indexing (FTI) with some handwaving. But that was in PostgreSQL 7.4 and FTI is not in the contrib package for some time now. See <http://pgfoundry.org/projects/simplefti/> ... I looked at using it in 8.1 but my \"C\" chops weren't up to it, and it depended heavily on OIDs which we didn't want to use. Anyway, our business requirement evaporated so it doesn't matter to us now.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nGlobeXplorer LLC, a DigitalGlobe company\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)", "msg_date": "Fri, 29 Feb 2008 17:30:08 -0700", "msg_from": "\"Gregory Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1.col like '%t2.col%'" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 29 Feb 2008 17:30:08 -0700\r\n\"Gregory Williamson\" <[email protected]> wrote:\r\n\r\n> Joshua Drake spake thusly:\r\n\r\n> We used the now deprecated Full Text Indexing (FTI) with some\r\n> handwaving. But that was in PostgreSQL 7.4 and FTI is not in the\r\n> contrib package for some time now. See\r\n> <http://pgfoundry.org/projects/simplefti/> ... I looked at using it\r\n> in 8.1 but my \"C\" chops weren't up to it, and it depended heavily on\r\n> OIDs which we didn't want to use. 
Anyway, our business requirement\r\n> evaporated so it doesn't matter to us now.\r\n> \r\n\r\nRight but wouldn't this be solved with tsearch2 and pg_tgrm?\r\n\r\n- --\r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHyLO+ATb/zqfZUUQRAnRlAJ0S0jrc4pSmnBcobEtZvuDkpkWzIACcDP1t\r\nCwk/j1C2pEXWdANsyZV5f8E=\r\n=9GYn\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Fri, 29 Feb 2008 17:39:10 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1.col like '%t2.col%'" }, { "msg_contents": "\"Dan Kaplan\" <[email protected]> writes:\n> I learned a little about pg_trgm here:\n> http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm\n\nThere's also real documentation in the 8.3 release:\nhttp://www.postgresql.org/docs/8.3/static/pgtrgm.html\nAFAIK pg_trgm hasn't changed much lately, so you should be able to\nrely on that for recent earlier branches.\n\n> But this seems like it's for finding similarities, not substrings. How can\n> I use it to speed up t1.col like '%t2.col%'?\n\nThe idea is to use it as a lossy index. You make a trigram index on\nt1.col and then do something like\n\n\t... where t1.col % t2.col and t1.col like ('%'||t2.col||'%');\n\nThe index gets you the %-matches and then you filter for the exact\nmatches with LIKE.\n\nThe similarity threshold (set_limit()) has to be set low enough that you\ndon't lose any desired matches, but not so low that you get everything\nin the table back. Not sure how delicate that will be. It might be\nunworkable, but surely it's worth a try.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Feb 2008 21:10:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: t1.col like '%t2.col%' " } ]
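Tom Lane's lossy-index trick above, written out as a sketch. The table and column names (t1, t2, col) follow the thread; the index name and the 0.1 threshold are invented, and it assumes the pg_trgm contrib module is installed (CREATE EXTENSION pg_trgm on modern releases, the contrib SQL script on 8.x):

-- Trigram index on the searched column; use gist (col gist_trgm_ops) instead
-- if the GIN opclass is not available in your version.
CREATE INDEX t1_col_trgm_idx ON t1 USING gin (col gin_trgm_ops);

-- Lower the similarity threshold so the % operator does not drop wanted matches.
SELECT set_limit(0.1);

-- % pulls candidate rows through the index; LIKE keeps only true substring matches.
SELECT t1.*, t2.*
FROM t1
JOIN t2
  ON t1.col % t2.col
 AND t1.col LIKE ('%' || t2.col || '%');

On 9.1 and later a gin_trgm_ops index can serve LIKE '%...%' directly, so the % pre-filter and its threshold tuning are no longer needed.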
[ { "msg_contents": "I've got a lot of rows in one table and a lot of rows in another table. I\nwant to do a bunch of queries on their join column. One of these is like\nthis: t1.col like '%t2.col%'\n\n \n\nI know that always sucks. I'm wondering how I can make it better. First, I\nshould let you know that I can likely hold both of these tables entirely in\nram. Since that's the case, would it be better to accomplish this with my\nprogramming language? Also you should know that in most cases, t1.col and\nt2.col is 2 words or less. I'm not sure if that matters, I mention it\nbecause it may make tsearch2 perform badly.", "msg_date": "Wed, 27 Feb 2008 11:47:43 -0800", "msg_from": "\"Dan Kaplan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing t1.col like '%t2.col%'" }, { "msg_contents": "\"Dan Kaplan\" <[email protected]> writes:\n> I've got a lot of rows in one table and a lot of rows in another table. I\n> want to do a bunch of queries on their join column. One of these is like\n> this: t1.col like '%t2.col%'\n\n> I know that always sucks. I'm wondering how I can make it better.\n\ntsearch or pg_trgm could probably help. Are you really after exact\nsubstring-match semantics, or is this actually a poor man's substitute\nfor full text search? If you just want substrings then see pg_trgm,\nif you want text search see tsearch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Feb 2008 15:26:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing t1.col like '%t2.col%' " } ]
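For the text-search route Tom Lane points at, a minimal sketch with the in-core tsearch machinery (built in as of 8.3); the column and index names are hypothetical, and note that this matches whole words/lexemes rather than arbitrary substrings, which is exactly the distinction between tsearch and pg_trgm:

-- Precompute and index a tsvector for the searched column.
ALTER TABLE t1 ADD COLUMN col_tsv tsvector;
UPDATE t1 SET col_tsv = to_tsvector('english', col);
CREATE INDEX t1_col_tsv_idx ON t1 USING gin (col_tsv);

-- Word-level matching through the @@ operator.
SELECT *
FROM t1
WHERE col_tsv @@ plainto_tsquery('english', 'acme widgets');

In practice the tsvector column would be kept current with a trigger (tsvector_update_trigger) or replaced by an expression index on to_tsvector('english', col).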
[ { "msg_contents": "Hi,\n\nWhile designing a complex database structure I wanted to ask a basic\nquestion about views.\n\nIf I give an ORDER BY clause in a VIEW and then use it in another query\nwhere the VIEW's ORDER BY is immaterial, would the planner be able to\ndiscard this ORDER BY clause ?\n\nAny pointers / feedbacks would be really helpful.\n\nRegards,\n*Robins Tharakan*\n\nHi,While designing a complex database structure I wanted to ask a basic question about views.\nIf I give an ORDER BY clause in a VIEW and then use it in another query where the VIEW's ORDER BY is immaterial, would the planner be able to discard this ORDER BY clause ?\nAny pointers / feedbacks would be really helpful.\nRegards,Robins Tharakan", "msg_date": "Thu, 28 Feb 2008 20:01:19 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bypassing useless ORDER BY in a VIEW" }, { "msg_contents": "\"Robins Tharakan\" <[email protected]> writes:\n> If I give an ORDER BY clause in a VIEW and then use it in another query\n> where the VIEW's ORDER BY is immaterial, would the planner be able to\n> discard this ORDER BY clause ?\n\nNo. That's a feature not a bug; the sorts of cases where you want an\nORDER BY in a subquery, it's because you really want those rows computed\nin that order (eg you've got side-effect-causing functions reading the\nresults). Postgres will never discard an ORDER BY as \"immaterial\".\n\nA rule of thumb is that ORDER BY in a view is bad design, IMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Feb 2008 12:13:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bypassing useless ORDER BY in a VIEW " }, { "msg_contents": "On 2008-02-28 09:13, Tom Lane wrote:\n> A rule of thumb is that ORDER BY in a view is bad design, IMHO.\n>\n> \t\t\tregards, tom lane\n> \n\nI was surprised to find out that apparently it's also a PostgreSQL \nextension; standard SQL apparently disallows ORDER BY in VIEWs:\n\nhttp://en.wikipedia.org/wiki/Order_by_(SQL)\n\nWhen I found this out, I removed all the ORDER BYs from my VIEWs (which \nhad been there for the convenience of subsequent SELECTs).\n\nOf course, where ORDER BY in a VIEW is really helpful, is with OFFSET \nand/or LIMIT clauses (which are also PostgreSQL extensions), which is \nequivalent to what you point out.\n\n-- \nMail to my list address MUST be sent via the mailing list.\nAll other mail to my list address will bounce.\n\n", "msg_date": "Thu, 28 Feb 2008 10:09:50 -0800", "msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bypassing useless ORDER BY in a VIEW" }, { "msg_contents": "\"Dean Gibson (DB Administrator)\" <[email protected]> writes:\n> Of course, where ORDER BY in a VIEW is really helpful, is with OFFSET \n> and/or LIMIT clauses (which are also PostgreSQL extensions), which is \n> equivalent to what you point out.\n\nRight, which is the main reason why we allow it. I think that these\nare sort of poor man's cases of things that SQL2003 covers with\n\"windowing functions\".\n\nThe SQL spec treats ORDER BY as a cosmetic thing that you can slap onto\nthe final output of a SELECT. 
They don't consider it useful in\nsubqueries (including views) because row ordering is never supposed to\nbe a semantically significant aspect of a set of rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Feb 2008 16:52:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bypassing useless ORDER BY in a VIEW " }, { "msg_contents": "Frankly put, i didn't know that this perspective exists and then thanks for\nputting it that way then !!\n\nGuess I should take a relook at how I plan to use those VIEWS.\n\nThanks\n*Robins*\n\n\n> A rule of thumb is that ORDER BY in a view is bad design, IMHO.\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 29 Feb 2008 17:27:55 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bypassing useless ORDER BY in a VIEW" } ]
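Put into SQL, the pattern that comes out of this thread is to leave the view unordered and apply ORDER BY (plus any LIMIT/OFFSET) in the query that uses it; a small sketch with invented names:

-- The view itself carries no ORDER BY ...
CREATE VIEW recent_orders AS
    SELECT order_id, customer_id, placed_at
    FROM orders
    WHERE placed_at > now() - interval '30 days';

-- ... ordering and paging live in the outer query, so a sort is only paid for
-- when the caller actually asks for one.
SELECT *
FROM recent_orders
ORDER BY placed_at DESC
LIMIT 20;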
[ { "msg_contents": "Hi,\n\nI am in the process of setting up a postgresql server with 12 SAS disks.\n\nI am considering two options:\n\n1) set up a 12 disks raid 10 array to get maximum raw performance from\nthe system and put everything on it (it the whole pg cluster, including\nWAL, and every tablespcace)\n\n2) set up 3 raid 10 arrays of 4 disks, and dispatch my data on these\ndisks via tablespaces :\n\ndata1 = pg cluster + references data (dimensions) tablespace\ndata2 = fact data tablespace\ndata3 = indices tablespace\n\nTypical workload is either massive insert/update via ETL or complex\nqueries on big (10 millions tuples) tables with several joins (including\nMondrian ROLAP).\n\nDoes anyone have an opinion of what could give best results ?\n\nThanks,\nFranck\n\n\n", "msg_date": "Fri, 29 Feb 2008 12:51:34 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "12 disks raid setup" }, { "msg_contents": "On Fri, Feb 29, 2008 at 5:51 AM, Franck Routier\n<[email protected]> wrote:\n> Hi,\n>\n> I am in the process of setting up a postgresql server with 12 SAS disks.\n>\n> I am considering two options:\n>\n> 1) set up a 12 disks raid 10 array to get maximum raw performance from\n> the system and put everything on it (it the whole pg cluster, including\n> WAL, and every tablespcace)\n>\n> 2) set up 3 raid 10 arrays of 4 disks, and dispatch my data on these\n> disks via tablespaces :\n\nHow you set it up depends on your RAID controller as much as anything.\n Good battery backed RAID controllers seem to work better with one big\nRAID-10 array. But as with anything, unless you benchmark it, you're\nreally just guessing which is best.\n", "msg_date": "Fri, 29 Feb 2008 08:13:12 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "Hi,\n\nmy Raid controller is an Adaptec 31205 SAS/RAID controller. The battery\nwas an option, but I didn't know it at purchase time. So I have no\nbattery, but the whole system is on an UPS.\n\nI have done quite a few tests using bonnie++, focusing on 'random seek'\nresults, and found out that:\n\n1) linux md raid 10 performs better than Adaptec hardware raid in this\nfield (random seek) by 15%, quite consistently\n2) hardware raid is better on sequential output\n3) md outperforms it again when coming to sequential read, especially\nwith far layout option.\n\nSo in fact I think I will use md raid, but still don't know with which\nlayout (3x4 or 1x12).\nWhat would you suggest as a benchmarking method ? Simply issue a few big\nqueries that I expect to be usual and see how long it last, or is there\na more convinient and or \"scientific\" method ?\n\nThanks,\nFranck\n\n\n", "msg_date": "Fri, 29 Feb 2008 15:51:45 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "On Fri, 29 Feb 2008, Franck Routier wrote:\n\n> my Raid controller is an Adaptec 31205 SAS/RAID controller. The battery\n> was an option, but I didn't know it at purchase time. So I have no\n> battery, but the whole system is on an UPS.\n\nThe UPS is of no help here. The problem is that PostgreSQL forces the \ndisk controller to commit WAL writes to disk after every transaction. If \nyou have a controller with a battery-backed cache, you can use that cache \nto buffer those writes and dramatically increase write performance. The \nUSP doesn't give you the same write guarantees. 
Let's say someone trips \nover the server power cord (simplest example of a whole class of \nfailures). With the BBC controller, the cached writes will get committed \nwhen you plug the server back in. If all you've got is a UPS, writes that \ndidn't make it to disk before the outage are lost. That means you can't \nbuffer those writes without risking database corruption.\n\nThe general guideline here is that if you don't have a battery-backed \ncache on your controller, based on disk rotation speed you'll be limited \nto around 100 (7200 RPM) to 200 (15K RPM) commits/second per single \nclient, with each commit facing around a 2-4ms delay. That rises to \nperhaps 500/s total with lots of clients. BBC configurations can easily \nclear 3000/s total and individual commits don't have that couple of ms \ndelay.\n\n> So in fact I think I will use md raid, but still don't know with which \n> layout (3x4 or 1x12).\n\nThe only real downside of md RAID is that if you lose the boot device it \ncan be tricky to get the system to start again; hardware RAID hides that \nlittle detail from the BIOS. Make sure you simulate a failure of the \nprimary boot drive and are comfortable with recovering from that situation \nbefore you go into production with md.\n\nThe only way to know which layout will work better is to have a lot of \nknowledge of this application and how it bottlenecks under load. If you \nknow, for example, that there's a particular set of tables/indexes that \nare critical to real-time users, whereas others are only used by batch \noperations, things like that can be used to figure out how to optimize \ndisk layout. If you don't know your database to that level, put \neverything into one big array and forget about it; you won't do any better \nthan that.\n\n> What would you suggest as a benchmarking method ? Simply issue a few big\n> queries that I expect to be usual and see how long it last, or is there\n> a more convinient and or \"scientific\" method ?\n\nBenchmarking is hard and you have to use a whole array of tests if you \nwant to quantify the many aspects of performance. You're doing the right \nthing using bonnie++ to quantify disk speed. If you've got some typical \nqueries, using those to fine-tune postgresql.conf parameters is a good \nidea; just make sure to set shared_buffers, estimated_cache_size, and run \nANALYZE on your tables. Be careful to note performance differences when \nthe cache is already filled with data from previous runs. 
Measuring \nwrite/commit performance is probably easiest using pgbench.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 29 Feb 2008 10:41:08 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "\n\nFranck Routier wrote:\n> Hi,\n>\n> I am in the process of setting up a postgresql server with 12 SAS disks.\n>\n> I am considering two options:\n>\n> 1) set up a 12 disks raid 10 array to get maximum raw performance from\n> the system and put everything on it (it the whole pg cluster, including\n> WAL, and every tablespcace)\n>\n> 2) set up 3 raid 10 arrays of 4 disks, and dispatch my data on these\n> disks via tablespaces :\n>\n> data1 = pg cluster + references data (dimensions) tablespace\n> data2 = fact data tablespace\n> data3 = indices tablespace\n>\n>\n> \n\nOption 2: Infact I would also say within one of the RAID1 use another \nsoftpartition and separate out pg_xlog also.\n\nMy 2 cents based on my benchmarks.\n\n-Jignesh\n\n", "msg_date": "Fri, 29 Feb 2008 12:17:29 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Fri, 29 Feb 2008 12:17:29 -0500\r\n\"Jignesh K. Shah\" <[email protected]> wrote:\r\n\r\n> \r\n> \r\n> Franck Routier wrote:\r\n> > Hi,\r\n> >\r\n> > I am in the process of setting up a postgresql server with 12 SAS\r\n> > disks.\r\n> >\r\n> > I am considering two options:\r\n> >\r\n> > 1) set up a 12 disks raid 10 array to get maximum raw performance\r\n> > from the system and put everything on it (it the whole pg cluster,\r\n> > including WAL, and every tablespcace)\r\n\r\nI would do this (assuming you have other spindles for the OS):\r\n\r\n/data1 - RAID 10 journalled filesystem + 1 (so 9 disks)\r\n/xlogs - RAID 1 non journalled filesystem + 1 (so 3 disks)\r\n\r\n\r\nYou can create any number of tablespaces for further growth you see fit\r\nand move them as more IO becomes available.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHyEX6ATb/zqfZUUQRAu1XAKCpszYwF4dbI0hidg71JhmcrPqbmACcDhdc\r\nE0qVOtKrUBpEEerGUjTMF9I=\r\n=LzZS\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Fri, 29 Feb 2008 09:50:50 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "On Fri, 29 Feb 2008, Joshua D. Drake wrote:\n> /data1 - RAID 10 journalled filesystem + 1 (so 9 disks)\n> /xlogs - RAID 1 non journalled filesystem + 1 (so 3 disks)\n\nSounds good. 
Can't they share the hot spare, rather than having two?\n\nHowever, I would recommend splashing out on the battery for the cache, and \nthen just putting then all in one RAID 10 lump.\n\nMatthew\n\n-- \nTo be or not to be -- Shakespeare\nTo do is to be -- Nietzsche\nTo be is to do -- Sartre\nDo be do be do -- Sinatra\n", "msg_date": "Fri, 29 Feb 2008 18:24:12 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "\nOn Feb 29, 2008, at 9:51 AM, Franck Routier wrote:\n\n> my Raid controller is an Adaptec 31205 SAS/RAID controller. The \n> battery\n> was an option, but I didn't know it at purchase time. So I have no\n> battery, but the whole system is on an UPS.\n\nGo find one on ebay or google search, and plug it in. Adaptec \nbatteries just snap in and sometimes have a bracket to clip them in \nplace.\n\nYour performance will be awful without one since you can't safely \nwrite cache. Also, if your card has upgradable RAM (but I've never \nseen an adaptec card which could) max it out.\n\n", "msg_date": "Fri, 29 Feb 2008 13:58:47 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "Greg Smith wrote:\n>\n> The only real downside of md RAID is that if you lose the boot device \n> it can be tricky to get the system to start again; hardware RAID hides \n> that little detail from the BIOS. Make sure you simulate a failure of \n> the primary boot drive and are comfortable with recovering from that \n> situation before you go into production with md.\n\n+1\n\nI usually ensure there is a separate /boot that is setup RAID1 (with md \nusing all the disks for the RAID1 - so the I can keep the partition map \nthe same for all the disks, otherwise it is fiddly!)\n\nCheers\n\nMark\n", "msg_date": "Sat, 01 Mar 2008 11:33:10 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "Greg Smith wrote:\n> On Fri, 29 Feb 2008, Franck Routier wrote:\n> \n>> my Raid controller is an Adaptec 31205 SAS/RAID controller. The battery\n>> was an option, but I didn't know it at purchase time. So I have no\n>> battery, but the whole system is on an UPS.\n> \n> The UPS is of no help here. The problem is that PostgreSQL forces the \n> disk controller to commit WAL writes to disk after every transaction. \n> If you have a controller with a battery-backed cache, you can use that \n> cache to buffer those writes and dramatically increase write \n> performance. The USP doesn't give you the same write guarantees. Let's \n> say someone trips over the server power cord (simplest example of a \n> whole class of failures). With the BBC controller, the cached writes \n> will get committed when you plug the server back in. If all you've got \n> is a UPS, writes that didn't make it to disk before the outage are \n> lost. That means you can't buffer those writes without risking database \n> corruption.\n> \n> The general guideline here is that if you don't have a battery-backed \n> cache on your controller, based on disk rotation speed you'll be limited \n> to around 100 (7200 RPM) to 200 (15K RPM) commits/second per single \n> client, with each commit facing around a 2-4ms delay. That rises to \n> perhaps 500/s total with lots of clients. 
BBC configurations can easily \n> clear 3000/s total and individual commits don't have that couple of ms \n> delay.\n> \n\nIt may be the way you have worded this but it makes it sound like the \ncache and the battery backup are as one (or that the cache doesn't work \nunless you have the battery) The cache may be optional (or plug-in) in \nsome cards, even of varied size. The battery is normally optional. You \ncan normally add/remove the battery without changing the cache options.\n\nIf the raid card has the cache without the battery you would get the \nperformance figures you mentioned, you just wouldn't have the \nreliability of finishing writes after a power off situation.\n\n\ncorrect me if I am wrong here.\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Sat, 01 Mar 2008 11:14:09 +1030", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "On Sat, 1 Mar 2008, Shane Ambler wrote:\n\n> It may be the way you have worded this but it makes it sound like the \n> cache and the battery backup are as one (or that the cache doesn't work \n> unless you have the battery)...If the raid card has the cache without \n> the battery you would get the performance figures you mentioned, you \n> just wouldn't have the reliability of finishing writes after a power off \n> situation.\n\nWording is intentional--if you don't have a battery for it, the cache has \nto be turned off (or set to write-through so it's only being used on \nreads) in order for the database to be reliable. If you can't finish \nwrites after a power off, you can't cache writes and expect your database \nto survive for too long.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 29 Feb 2008 23:56:54 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "Hi,\n\nLe vendredi 29 février 2008 à 23:56 -0500, Greg Smith a écrit :\n> Wording is intentional--if you don't have a battery for it, the cache has \n> to be turned off (or set to write-through so it's only being used on \n> reads) in order for the database to be reliable. If you can't finish \n> writes after a power off, you can't cache writes and expect your database \n> to survive for too long.\n\nWell, am I just wrong, or the file system might also heavily rely on\ncache, especially as I use XFS ?\n\nSo anyway Postgresql has no way to know if the data is really on the\ndisk, and in case of a brutal outage, the system may definitely lose\ndata, wether there is another level of caching (Raid controller) or\nnot...\n\nRight ?\n\n\n", "msg_date": "Sat, 01 Mar 2008 11:27:57 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "On Sat, Mar 1, 2008 at 4:27 AM, Franck Routier <[email protected]> wrote:\n> Hi,\n>\n> Le vendredi 29 février 2008 à 23:56 -0500, Greg Smith a écrit :\n> > Wording is intentional--if you don't have a battery for it, the cache has\n> > to be turned off (or set to write-through so it's only being used on\n> > reads) in order for the database to be reliable. 
If you can't finish\n> > writes after a power off, you can't cache writes and expect your database\n> > to survive for too long.\n>\n> Well, am I just wrong, or the file system might also heavily rely on\n> cache, especially as I use XFS ?\n>\n> So anyway Postgresql has no way to know if the data is really on the\n> disk, and in case of a brutal outage, the system may definitely lose\n> data, wether there is another level of caching (Raid controller) or\n> not...\n>\n> Right ?\n\nnope. assuming your disk subsystem doesn't lie about write\ncompletion, then postgresql can recover from complete and sudden loss\nof power without any data loss.\n", "msg_date": "Sat, 1 Mar 2008 06:44:13 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "We're upgrading to a medium-sized server, a Dell PowerEdge 2950, dual-quad CPU's and 8 GB memory. This box can hold at most 8 disks (10K SCSI 2.5\" 146 GB drives) and has Dell's Perc 6/i RAID controller.\n\nI'm thinking of this:\n\n 6 disks RAID 1+0 Postgres data\n 1 disk WAL\n 1 disk Linux\n\nI've often seen RAID 1 recommended for the WAL. Is that strictly for reliability, or is there a performance advantage to RAID 1 for the WAL?\n\nIt seems to me separating the OS and WAL on two disks is better than making a single RAID 1 and sharing it, from a performance point of view.\n\nThanks,\nCraig\n", "msg_date": "Sat, 01 Mar 2008 10:06:54 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "How to allocate 8 disks" }, { "msg_contents": "\"Scott Marlowe\" <[email protected]> writes:\n> On Sat, Mar 1, 2008 at 4:27 AM, Franck Routier <[email protected]> wrote:\n>> Well, am I just wrong, or the file system might also heavily rely on\n>> cache, especially as I use XFS ?\n>> \n>> So anyway Postgresql has no way to know if the data is really on the\n>> disk, and in case of a brutal outage, the system may definitely lose\n>> data, wether there is another level of caching (Raid controller) or\n>> not...\n\n> nope. assuming your disk subsystem doesn't lie about write\n> completion, then postgresql can recover from complete and sudden loss\n> of power without any data loss.\n\nFranck does have a point here: we are expecting the filesystem to tend\nto its own knitting. If a power failure corrupts the filesystem so\nbadly that we can't find the WAL files, or their contents are badly\nscrambled, then we're screwed. Most modern filesystems defend\nthemselves against that using journaling, which is exactly the same\nidea as WAL but applied to filesystem metadata.\n\nWe do expect that when we fsync a file, by the time the OS reports that\nthat's done both the file contents and its metadata are safely on disk.\nThis is part of the specification for fsync, so the OS is clearly broken\nif it doesn't get that right. Whether the OS *can* guarantee it if the\ndisk drive lies about write completion is something you'd have to ask\nthe filesystem hackers about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Mar 2008 13:11:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup " }, { "msg_contents": "On Sat, Mar 1, 2008 at 12:06 PM, Craig James <[email protected]> wrote:\n> We're upgrading to a medium-sized server, a Dell PowerEdge 2950, dual-quad CPU's and 8 GB memory. 
This box can hold at most 8 disks (10K SCSI 2.5\" 146 GB drives) and has Dell's Perc 6/i RAID controller.\n>\n> I'm thinking of this:\n>\n> 6 disks RAID 1+0 Postgres data\n> 1 disk WAL\n> 1 disk Linux\n>\n> I've often seen RAID 1 recommended for the WAL. Is that strictly for reliability, or is there a performance advantage to RAID 1 for the WAL?\n>\n> It seems to me separating the OS and WAL on two disks is better than making a single RAID 1 and sharing it, from a performance point of view.\n\nIt's a trade off. Remember that if the single disk hold xlog fails\nyou've just quite possubly lost your database. I'd be inclined to\neither using a RAID-1 of two disks for the OS and xlog, and having\npgsql log to the 6 disk RAID-10 instead of the OS / xlog disk set.\n\nMore important, do you have battery backed cache on the controller? A\ngood controller with a battery backed cache can usually outrun a\nlarger array with no write cache when it comes to transactions /\nwriting to the disks.\n", "msg_date": "Sat, 1 Mar 2008 12:47:56 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Sat, 01 Mar 2008 10:06:54 -0800\r\nCraig James <[email protected]> wrote:\r\n\r\n> We're upgrading to a medium-sized server, a Dell PowerEdge 2950,\r\n> dual-quad CPU's and 8 GB memory. This box can hold at most 8 disks\r\n> (10K SCSI 2.5\" 146 GB drives) and has Dell's Perc 6/i RAID controller.\r\n> \r\n> I'm thinking of this:\r\n> \r\n> 6 disks RAID 1+0 Postgres data\r\n> 1 disk WAL\r\n> 1 disk Linux\r\n> \r\n> I've often seen RAID 1 recommended for the WAL. Is that strictly for\r\n> reliability, or is there a performance advantage to RAID 1 for the\r\n> WAL?\r\n> \r\n> It seems to me separating the OS and WAL on two disks is better than\r\n> making a single RAID 1 and sharing it, from a performance point of\r\n> view.\r\n\r\nThis scares me... You lose WAL you are a goner. Combine your OS and\r\nWAL into a RAID 1.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHycSzATb/zqfZUUQRAs14AJ9pm3huW+z1j7jUIY7FbIZMzz2IxgCgnOhD\r\nyWiDabTYAG+x12JEqrf4q8E=\r\n=gBPs\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Sat, 1 Mar 2008 13:03:47 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "Joshua D. Drake wrote:\n> On Sat, 01 Mar 2008 10:06:54 -0800\n> Craig James <[email protected]> wrote:\n> \n>> We're upgrading to a medium-sized server, a Dell PowerEdge 2950,\n>> dual-quad CPU's and 8 GB memory. This box can hold at most 8 disks\n>> (10K SCSI 2.5\" 146 GB drives) and has Dell's Perc 6/i RAID controller.\n>>\n>> I'm thinking of this:\n>>\n>> 6 disks RAID 1+0 Postgres data\n>> 1 disk WAL\n>> 1 disk Linux\n>>\n>> I've often seen RAID 1 recommended for the WAL. 
Is that strictly for\n>> reliability, or is there a performance advantage to RAID 1 for the\n>> WAL?\n>>\n>> It seems to me separating the OS and WAL on two disks is better than\n>> making a single RAID 1 and sharing it, from a performance point of\n>> view.\n> \n> This scares me... You lose WAL you are a goner. Combine your OS and\n> WAL into a RAID 1.\n\nRight, I do understand that, but reliability is not a top priority in this system. The database will be replicated, and can be reproduced from the raw data. It's not an accounting system, it finds scientific results. That's not to say I *won't* take your advice, we may in fact combine the OS and WAL on one disk. Reliability is a good thing, but I need to know all of the tradeoffs, so that I can weigh performance, reliability, and cost and make the right choice.\n\nSo my question still stands: From a strictly performance point of view, would it be better to separate the OS and the WAL onto two disks? Is there any performance advantage to RAID 1? My understanding is that RAID 1 can give 2x seek performance during read, but no advantage during write. For the WAL, it seems to me that RAID 1 has no performance benefits, so separating the WAL and OS seems like a peformance advantage.\n\nAnother option would be:\n\n 4 disks RAID 1+0 Postgres data\n 2 disks RAID 1 WAL\n 1 disk Linux\n 1 disk spare\n\nThis would give us reliability, but I think the performance would be considerably worse, since the primary Postgres data would come from 4 disks instead of six.\n\nI guess we could also consider:\n\n 4 disks RAID 1+0 Postgres data\n 4 disks RAID 1+0 WAL and Linux\n\nOr even\n\n 8 disks RAID 1+0 Everything\n\nThis is a dedicated system and does nothing but Apache/Postgres, so the OS should get very little traffic. But if that's the case, I guess you could argue that your suggestion of combining OS and WAL on a 2-disk RAID 1 would be the way to go, since the OS activity wouldn't affect the WAL very much.\n\nI suppose the thing to do is get the system, and run bonnie on various configurations. I've never run bonnie before -- can I get some useful results without a huge learning curve?\n\nThanks,\nCraig\n", "msg_date": "Sat, 01 Mar 2008 13:53:32 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "On Sat, Mar 1, 2008 at 3:53 PM, Craig James <[email protected]> wrote:\n> Joshua D. Drake wrote:\n> > On Sat, 01 Mar 2008 10:06:54 -0800\n> > Craig James <[email protected]> wrote:\n> >\n> >> We're upgrading to a medium-sized server, a Dell PowerEdge 2950,\n> >> dual-quad CPU's and 8 GB memory. This box can hold at most 8 disks\n> >> (10K SCSI 2.5\" 146 GB drives) and has Dell's Perc 6/i RAID controller.\n> >>\n> >> I'm thinking of this:\n> >>\n> >> 6 disks RAID 1+0 Postgres data\n> >> 1 disk WAL\n> >> 1 disk Linux\n> >>\n> >> I've often seen RAID 1 recommended for the WAL. Is that strictly for\n> >> reliability, or is there a performance advantage to RAID 1 for the\n> >> WAL?\n> >>\n> >> It seems to me separating the OS and WAL on two disks is better than\n> >> making a single RAID 1 and sharing it, from a performance point of\n> >> view.\n> >\n> > This scares me... You lose WAL you are a goner. Combine your OS and\n> > WAL into a RAID 1.\n>\n> Right, I do understand that, but reliability is not a top priority in this system. The database will be replicated, and can be reproduced from the raw data. It's not an accounting system, it finds scientific results. 
That's not to say I *won't* take your advice, we may in fact combine the OS and WAL on one disk. Reliability is a good thing, but I need to know all of the tradeoffs, so that I can weigh performance, reliability, and cost and make the right choice.\n\nIn that case you could always make the data partition a 6 disk RAID-0.\n\n> So my question still stands: From a strictly performance point of view, would it be better to separate the OS and the WAL onto two disks? Is there any performance advantage to RAID 1? My understanding is that RAID 1 can give 2x seek performance during read, but no advantage during write. For the WAL, it seems to me that RAID 1 has no performance benefits, so separating the WAL and OS seems like a peformance advantage.\n\nYes, Only on Reads. Correct.\n\n> Another option would be:\n>\n>\n> 4 disks RAID 1+0 Postgres data\n> 2 disks RAID 1 WAL\n> 1 disk Linux\n> 1 disk spare\n>\n> This would give us reliability, but I think the performance would be considerably worse, since the primary Postgres data would come from 4 disks instead of six.\n\nPerformance-wise, RAID-10 with n disks is about the same as RAID-0\nwith n/2 disks. So, you're losing abot 1/3 of your peak performance,\nassuming 100% efficient controllers and you aren't bottlenecking I/O\nwith > 4 disks.\n\n> I guess we could also consider:\n>\n>\n> 4 disks RAID 1+0 Postgres data\n> 4 disks RAID 1+0 WAL and Linux\n>\n> Or even\n>\n> 8 disks RAID 1+0 Everything\n\nIt really depends on the controller. Battery backed write cache?\nThen the one big everything is often faster than any other method. No\nBB cache? Then splitting them up will help.\n\n> I suppose the thing to do is get the system, and run bonnie on various configurations. I've never run bonnie before -- can I get some useful results without a huge learning curve?\n\nYes, it's fairly easy to drive. It'll tell you more about your\ncontroller than anything else, which is very useful information. The\nway a different controllers behaves with different configurations can\nbe very very different from one controller to the next.\n", "msg_date": "Sat, 1 Mar 2008 16:30:20 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "On Sat, 1 Mar 2008, Franck Routier wrote:\n\n> Well, am I just wrong, or the file system might also heavily rely on \n> cache, especially as I use XFS ? So anyway Postgresql has no way to know \n> if the data is really on the disk, and in case of a brutal outage, the \n> system may definitely lose data, wether there is another level of \n> caching (Raid controller) or not...\n\nAfter PostgreSQL writes to the WAL, it calls fsync. If your filesystem \ndoesn't then force a real write to disk at that point and clear whatever \ncache it might have, it's broken and unsuitable for database use. XFS is \nsmart enough to understand that.\n\nThe only thing people typically run into that will hear fsync and lie \nabout the data actually being written to disk are a) caching controllers \nwith the write cache turned on and b) cheap hard drives. In case (a), \nhaving a battery backup for the cache is sufficient to survive most \nclasses of outage without damage--if the system is without power for \nlonger than the battery lasts you're in trouble, otherwise is shouldn't be \na problem. 
In case (b), you have to turn the disk cache off to get \nreliable database operation.\n\nI've put all the interesting trivia on this topic I've ever come across at \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm if \nyou're looking for some really exciting reading.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 1 Mar 2008 23:17:18 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 disks raid setup" }, { "msg_contents": "On Sat, 1 Mar 2008, Craig James wrote:\n\n> So my question still stands: From a strictly performance point of view, would \n> it be better to separate the OS and the WAL onto two disks?\n\nYou're not getting a more useful answer here because you haven't mentioned \nyet a) what the disk controller is or b) how much writing activity is \ngoing on here. If you can cache writes, most of the advantages to having \na seperate WAL disk aren't important unless you've got an extremely high \nwrite throughput (higher you can likely sustain with only 8 disks) so you \ncan put the WAL data just about anywhere.\n\n> This is a dedicated system and does nothing but Apache/Postgres, so the OS \n> should get very little traffic. But if that's the case, I guess you could \n> argue that your suggestion of combining OS and WAL on a 2-disk RAID 1 would \n> be the way to go, since the OS activity wouldn't affect the WAL very much.\n\nThe main thing to watch out for if the OS and WAL are on the same disk is \nthat some random process spewing logs files could fill the disk and now \nthe database is stalled.\n\nI think there are two configurations that make sense for your situation:\n\n> 8 disks RAID 1+0 Everything\n\nThis maximizes potential sequential and seek throughput for the database, \nwhich is probably going to be your bottleneck unless you're writing lots \nof simple data, while still allowing survival of any one disk. The crazy \nlog situation I mentioned above is less likely to be a problem because \nhaving so much more disk space available to everything means it's more \nlikely you'll notice it before the disk actually fills.\n\n 6 disks RAID 0 Postgres data+WAL\n 2 disks RAID 1 Linux\n\nThis puts some redundancy on the base OS, so no single disk loss can \nactually take down the system altogether. You get maximum throughput on \nthe database. If you lose a database disk, you replace it and rebuild the \nwhole database at that point.\n\n> I suppose the thing to do is get the system, and run bonnie on various \n> configurations. I've never run bonnie before -- can I get some useful \n> results without a huge learning curve?\n\nI've collected some bonnie++ examples at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm you \nmay find useful. With only 8 disks you should be able to get useful \nresults without a learning curve; with significantly more it can be \nnecessary to run more than one bonnie at once to really saturate the disks \nand that's trickier.\n\nI don't think you're going to learn anything useful from that though \n(other than figuring out if your disk+controller combination is \nfundamentally fast or not). As you put more disks into the array, \nsequential throughput and seeks/second will go up. This doesn't tell you \nanything useful about whether the WAL is going to get enough traffic to be \na bottleneck such that it needs to be on a seperate disk. 
To figure that \nout, you need to run some simulations of the real database and its \napplication, and doing that fairly is a more serious benchmarking project.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 1 Mar 2008 23:51:28 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "On Sat, 1 Mar 2008, Craig James wrote:\n> Right, I do understand that, but reliability is not a top priority in this \n> system. The database will be replicated, and can be reproduced from the raw \n> data.\n\nSo what you're saying is:\n\n1. Reliability is not important.\n2. There's zero write traffic once the database is set up.\n\nIf this is true, then RAID-0 is the way to go. I think Greg's options are \ngood. Either:\n\n2 discs RAID 1: OS\n6 discs RAID 0: database + WAL\n\nwhich is what we're using here (except with more discs), or:\n\n8 discs RAID 10: everything\n\nHowever, if reliability *really* isn't an issue, and you can accept \nreinstalling the system if you lose a disc, then there's a third option:\n\n8 discs RAID 0: Everything\n\nMatthew\n\n-- \nHeat is work, and work's a curse. All the heat in the universe, it's\ngoing to cool down, because it can't increase, then there'll be no\nmore work, and there'll be perfect peace. -- Michael Flanders\n", "msg_date": "Mon, 3 Mar 2008 12:11:55 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "Matthew wrote:\n> On Sat, 1 Mar 2008, Craig James wrote:\n>> Right, I do understand that, but reliability is not a top priority in \n>> this system. The database will be replicated, and can be reproduced \n>> from the raw data.\n>\n> So what you're saying is:\n>\n> 1. Reliability is not important.\n> 2. There's zero write traffic once the database is set up.\n>\n> If this is true, then RAID-0 is the way to go. I think Greg's options \n> are good. Either:\n>\n> 2 discs RAID 1: OS\n> 6 discs RAID 0: database + WAL\n>\n> which is what we're using here (except with more discs), or:\n>\n> 8 discs RAID 10: everything\n\nHas anybody been able to prove to themselves that RAID 0 vs RAID 1+0 is \nfaster for these sorts of loads? My understanding is that RAID 1+0 *can* \nreduce latency for reads, but that it relies on random access, whereas \nRAID 0 performs best for sequential scans? Does PostgreSQL ever do \nenough random access to make RAID 1+0 shine?\n\nCurious.\n\nThanks,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Mon, 03 Mar 2008 09:48:49 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "Matthew wrote:\n> On Sat, 1 Mar 2008, Craig James wrote:\n>> Right, I do understand that, but reliability is not a top priority in \n>> this system. The database will be replicated, and can be reproduced \n>> from the raw data.\n> \n> So what you're saying is:\n> \n> 1. Reliability is not important.\n> 2. There's zero write traffic once the database is set up.\n\nWell, I actually didn't say either of those things, but I appreciate the feedback. RAID 0 is an interesting suggestion, but given our constraints, it's not an option. Reliability is important, but not as important as, say, a banking system.\n\nAnd as far as zero write traffic, I don't know where that came from. 
It's a \"hitlist\" based system, where complex search results are saved for the user in tables, and the write traffic can be quite high.\n\n> If this is true, then RAID-0 is the way to go. I think Greg's options \n> are good. Either:\n> \n> 2 discs RAID 1: OS\n> 6 discs RAID 0: database + WAL\n> \n> which is what we're using here (except with more discs), or:\n> \n> 8 discs RAID 10: everything\n\nRight now, an 8-disk RAID 10 is looking like the best choice. The Dell Perc 6i has configurations that include a battery-backed cache, so performance should be quite good.\n\n> However, if reliability *really* isn't an issue, and you can accept \n> reinstalling the system if you lose a disc, then there's a third option:\n> \n> 8 discs RAID 0: Everything\n\nI imagine the MTBF on a system like this would be < 1 year, which is out of the question, even with a backup system that can take over. A failure completely wipes the system, OS and everything, so you're guaranteed that once or twice a year, you have to rebuild your system from the ground up. I'd rather spend that time at the beach!\n\nCraig\n", "msg_date": "Mon, 03 Mar 2008 06:53:48 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "On Mon, 3 Mar 2008, Mark Mielke wrote:\n> Has anybody been able to prove to themselves that RAID 0 vs RAID 1+0 is \n> faster for these sorts of loads? My understanding is that RAID 1+0 *can* \n> reduce latency for reads, but that it relies on random access, whereas RAID 0 \n> performs best for sequential scans? Does PostgreSQL ever do enough random \n> access to make RAID 1+0 shine?\n\nTheoretically the performance of RAID 0 and RAID 10 should be identical \nfor reads, both seeks and throughput, assuming you have a sensible \nreadahead and a good controller. For writes, RAID 10 needs to write to \nmultiple drives, so is slower. Whether this is true in reality is another \nmatter, as all sorts of factors come in, not least how good your \ncontroller is at managing the arrangement.\n\nMatthew\n\n-- \nThe only secure computer is one that's unplugged, locked in a safe,\nand buried 20 feet under the ground in a secret location...and i'm not\neven too sure about that one. --Dennis Huges, FBI\n", "msg_date": "Mon, 3 Mar 2008 14:57:51 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "Matthew wrote:\n> On Mon, 3 Mar 2008, Mark Mielke wrote:\n>> Has anybody been able to prove to themselves that RAID 0 vs RAID 1+0 \n>> is faster for these sorts of loads? My understanding is that RAID 1+0 \n>> *can* reduce latency for reads, but that it relies on random access, \n>> whereas RAID 0 performs best for sequential scans? Does PostgreSQL \n>> ever do enough random access to make RAID 1+0 shine?\n> Theoretically the performance of RAID 0 and RAID 10 should be \n> identical for reads, both seeks and throughput, assuming you have a \n> sensible readahead and a good controller. For writes, RAID 10 needs to \n> write to multiple drives, so is slower. Whether this is true in \n> reality is another matter, as all sorts of factors come in, not least \n> how good your controller is at managing the arrangement.\n\nI don't think your statement that they should be identical is true - \nRAID 1+0 can satisfy and given read from at least two drives. 
A good \ncontroller can satisfy half the reads from one side of the array, and \nhalf the reads from the other side of the array, where the first set \ndoes not have to wait for the second set, before continuing. To \ncontrast, sequential reads of a RAID 1+0 system is almost always HALF of \nthe speed of sequential reads of a RAID 0 system. The hardware \nread-ahead on the RAID 1+0 system is being wasted as even if you did \nleap from one side of the array to the other, each side ends up \n\"skipping\" the data served by the other side, making any caching \nineffective.\n\nThe question I have is not whether RAID 1+0 vs RAID 0 show different \ncharacteristics. I know they do based upon my own analysis. My question \nis whether PostgreSQL disk access patterns for certain loads ever \nbenefit from RAID 1+0, or whether RAID 1+0 is always a bad choice for \nperformance-only (completely ignore reliability) loads.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Mon, 03 Mar 2008 10:06:03 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "On Mon, Mar 3, 2008 at 8:48 AM, Mark Mielke <[email protected]> wrote:\n> Matthew wrote:\n> > On Sat, 1 Mar 2008, Craig James wrote:\n> >> Right, I do understand that, but reliability is not a top priority in\n> >> this system. The database will be replicated, and can be reproduced\n> >> from the raw data.\n> >\n> > So what you're saying is:\n> >\n> > 1. Reliability is not important.\n> > 2. There's zero write traffic once the database is set up.\n> >\n> > If this is true, then RAID-0 is the way to go. I think Greg's options\n> > are good. Either:\n> >\n> > 2 discs RAID 1: OS\n> > 6 discs RAID 0: database + WAL\n> >\n> > which is what we're using here (except with more discs), or:\n> >\n> > 8 discs RAID 10: everything\n>\n> Has anybody been able to prove to themselves that RAID 0 vs RAID 1+0 is\n> faster for these sorts of loads? My understanding is that RAID 1+0 *can*\n> reduce latency for reads, but that it relies on random access, whereas\n> RAID 0 performs best for sequential scans? Does PostgreSQL ever do\n> enough random access to make RAID 1+0 shine?\n\nRAID 1+0 has certain theoretical advantages in parallel access\nscenarios that straight RAID-0 wouldn't have. I.e. if you used n>2\ndisks in a mirror and built a RAID-0 out of those types of mirrors,\nthen you could theoretically have n users reading data on the same\n\"drive\" (the raid-1 underneath the raid-0) at the same time where\nRAID-0 would only have the one disk to read from. The effects of this\nadvantage are dulled by caching, depending on how much of the data set\nyou can cache. With a system that can cache it's whole data set in\nmemory (not uncommon for transactional systems) or at least a large\npercentage, the n>2 RAID-1 sets aren't that big of an advantage.\n\nRAID-0 of n drives should behave pretty similarly to RAID-10 with 2n\ndrives for most types of access. I.e. no better or worse for\nsequential or random access, if the number of drives is equivalent.\n", "msg_date": "Mon, 3 Mar 2008 09:22:12 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "Joshua D. Drake wrote:\n\n> This scares me... You lose WAL you are a goner. Combine your OS and\n> WAL into a RAID 1.\n\nCan someone elaborate on this? 
From the WAL concept and documentation at\nhttp://www.postgresql.org/docs/8.3/interactive/wal-intro.html I'd say\nthe only data that should be lost are the transactions currently in the\nlog but not yet transferred to permanent storage (database files proper).\n\n\n\n", "msg_date": "Tue, 04 Mar 2008 11:45:52 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" }, { "msg_contents": "Ivan Voras wrote:\n> Joshua D. Drake wrote:\n> \n>> This scares me... You lose WAL you are a goner. Combine your OS and\n>> WAL into a RAID 1.\n> \n> Can someone elaborate on this? From the WAL concept and documentation at\n> http://www.postgresql.org/docs/8.3/interactive/wal-intro.html I'd say\n> the only data that should be lost are the transactions currently in the\n> log but not yet transferred to permanent storage (database files proper).\n> \n\nThe log records what changes are made to your data files before the data \nfiles are changed. (and gets flushed to disk before the data files are \nchanged)\n\nIn the event of power loss right in the middle of the data files being \nupdated for a transaction, when power is restored, how do we know what \nchanges were made to which data files and which changes are incomplete?\n\nWithout the log files there is no way to be sure your data files are not \nfull of \"half done transactions\"\n\n\n\nChances are that 90% of the time everything is fine but without the log \nfiles how do you check that your data files are as they should be.\n(or do you expect to restore from backup after any power outs?)\n\n\nKeeping them on a raid 1 gives you a level of redundancy to get you past \nhardware failures that happen at the wrong time. (as they all do)\n\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Wed, 05 Mar 2008 01:34:32 +1030", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to allocate 8 disks" } ]
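For readers who want to act on the layout settled on above (OS and WAL on the mirrored pair, data files on the striped set), here is a minimal sketch of relocating pg_xlog onto the RAID 1 volume via a symlink. The mount points, cluster path and service user are illustrative assumptions rather than details from the thread; pg_xlog is the WAL directory name in the 8.x releases discussed here.

    # stop the cluster before touching the WAL; every path below is a placeholder
    pg_ctl -D /data/pgsql stop
    # make room for the WAL on the mirrored (RAID 1) volume
    mkdir -p /raid1/pg_xlog
    chown postgres:postgres /raid1/pg_xlog
    # move the existing segments and leave a symlink in their place
    mv /data/pgsql/pg_xlog/* /raid1/pg_xlog/
    rmdir /data/pgsql/pg_xlog
    ln -s /raid1/pg_xlog /data/pgsql/pg_xlog
    pg_ctl -D /data/pgsql start

As Shane explains above, the WAL is what lets the server repair half-done transactions after a crash, so it belongs on the redundant pair rather than on the unprotected stripe.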
[ { "msg_contents": "I am moving our small business application\ndatabase application supporting\na 24/7 animal hospital to use 8.0.15 from\n7.4.19 (it will not support 8.1, 8.2. or 8.3).\n\nNow, we can choose a new a disc array. SATA\nseems cheaper and you can get more discs but\nI want to stay with SCSI. Any good reasons to\nchoose SATA over SCSI?\n\nI need to consider a vendor for the new disc array (6-\nto 8 discs). The local vendor (in the San Francisco Bay Area),\nI've not been completely pleased with, so I am considering using\nDell storage connecting to an retail version LSI MegaRAID 320-2X card.\n\nAnyone use any vendors that have been supportive of Postgresql?\n\nThanks for your help/feedback.\n\nSteve\n\nI am moving our small business applicationdatabase application supporting a 24/7 animal hospital to use 8.0.15 from7.4.19 (it will not support 8.1, 8.2. or 8.3).Now, we can choose a new a disc array. SATA\nseems cheaper and you can get more discs butI want to stay with SCSI.  Any good reasons tochoose SATA over SCSI?I need to consider a vendor for the new disc array (6-to 8 discs). The local vendor (in the San Francisco Bay Area),\nI've not been completely pleased with, so I am considering usingDell storage connecting to an retail version LSI MegaRAID 320-2X card.Anyone use any vendors that have been supportive of Postgresql?\nThanks for your help/feedback.Steve", "msg_date": "Sat, 1 Mar 2008 23:37:37 -0800", "msg_from": "\"Steve Poe\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to choose a disc array for Postgresql?" }, { "msg_contents": "On Sat, 1 Mar 2008 23:37:37 -0800\n\"Steve Poe\" <[email protected]> wrote:\n\n> I am moving our small business application\n> database application supporting\n> a 24/7 animal hospital to use 8.0.15 from\n> 7.4.19 (it will not support 8.1, 8.2. or 8.3).\n> \n> Now, we can choose a new a disc array. SATA\n> seems cheaper and you can get more discs but\n> I want to stay with SCSI. Any good reasons to\n> choose SATA over SCSI?\n\nSata is great if you need lots of space. SCSI/SAS is better if you want\nlots of performance from a lesser amount of spindles.\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit", "msg_date": "Sun, 2 Mar 2008 00:05:35 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "\nOn Mar 2, 2008, at 2:37 AM, Steve Poe wrote:\n\n> I need to consider a vendor for the new disc array (6-\n> to 8 discs). The local vendor (in the San Francisco Bay Area),\n> I've not been completely pleased with, so I am considering using\n> Dell storage connecting to an retail version LSI MegaRAID 320-2X card.\n>\n> Anyone use any vendors that have been supportive of Postgresql?\n\nI've been 1000% satisfied with Partners Data for my RAID systems. I \nconnect to the host boxes with a fibre channel. They've gone above \nand beyond expectations for supporting my FreeBSD systems. I don't \nknow if/how they support postgres as I never asked for that help. \nTheir prices are excellent, too.\n\nAs for your plan to hook up Dell storage to a 320-2x card, the last \ntime I did that, the lsi card complained that one of the drives in the \n14-disk chassis was down. 
Identical on two different arrays I had. \nDell swapped nearly every single part, yet the LSI card still \ncomplained. I had to drop the drives to U160 speed to get it to even \nrecognize all the drives.\n\nI hooked up the same arrays to Adaptec controllers, and they seemed to \nnot mind the array so much, but would cause random failures \n(catastrophic failures resulting in loss of all data) on occassion \nuntil I dropped the disks to U160 speed. Dell swears up and down \nthat their devices work at U320, but the two arrays I got from them, \nwhich were identical twins, both clearly did not work at U320 properly.\n\nIt is these Dell arrays that I replaced with the Partners Data units \nlast year. The dell boxes still have a year of warrantee on them... \nanyone interested in buying them from me, please make an offer :-)\n\n\n", "msg_date": "Sun, 2 Mar 2008 22:11:49 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "Vivek,\n\nI've had the same issue with the LSI MegaRAID card previously, I had\nto drop to U160. Since it happened with a brand new card and\nthe local vendor's disc array, I've blamed the local vendor since this\nhappened before.\n\nI've been leary of using Adaptec since they've had issues in the past.\n\nIt seems the RAID card manufacturers have more to do with failures\nthan the drives themselves. Have you found a RAID card you did not\nhave to drop to U160?\n\nThanks again for sharing your feedback.\n\nSteve\n\nOn Sun, Mar 2, 2008 at 7:11 PM, Vivek Khera <[email protected]> wrote:\n\n>\n> On Mar 2, 2008, at 2:37 AM, Steve Poe wrote:\n>\n> > I need to consider a vendor for the new disc array (6-\n> > to 8 discs). The local vendor (in the San Francisco Bay Area),\n> > I've not been completely pleased with, so I am considering using\n> > Dell storage connecting to an retail version LSI MegaRAID 320-2X card.\n> >\n> > Anyone use any vendors that have been supportive of Postgresql?\n>\n> I've been 1000% satisfied with Partners Data for my RAID systems. I\n> connect to the host boxes with a fibre channel. They've gone above\n> and beyond expectations for supporting my FreeBSD systems. I don't\n> know if/how they support postgres as I never asked for that help.\n> Their prices are excellent, too.\n>\n> As for your plan to hook up Dell storage to a 320-2x card, the last\n> time I did that, the lsi card complained that one of the drives in the\n> 14-disk chassis was down. Identical on two different arrays I had.\n> Dell swapped nearly every single part, yet the LSI card still\n> complained. I had to drop the drives to U160 speed to get it to even\n> recognize all the drives.\n>\n> I hooked up the same arrays to Adaptec controllers, and they seemed to\n> not mind the array so much, but would cause random failures\n> (catastrophic failures resulting in loss of all data) on occassion\n> until I dropped the disks to U160 speed. Dell swears up and down\n> that their devices work at U320, but the two arrays I got from them,\n> which were identical twins, both clearly did not work at U320 properly.\n>\n> It is these Dell arrays that I replaced with the Partners Data units\n> last year. 
The dell boxes still have a year of warrantee on them...\n> anyone interested in buying them from me, please make an offer :-)\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your Subscription:\n>\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n\nVivek,I've had the same issue with the LSI MegaRAID card previously, I hadto drop to U160. Since it happened with a brand new card andthe local vendor's disc array, I've blamed the local vendor since this\nhappened before.I've been leary of using Adaptec since they've had issues in the past.It seems the RAID card manufacturers have more to do with failuresthan the drives themselves. Have you found a RAID card you did not\nhave to drop to U160?Thanks again for sharing your feedback.SteveOn Sun, Mar 2, 2008 at 7:11 PM, Vivek Khera <[email protected]> wrote:\n\nOn Mar 2, 2008, at 2:37 AM, Steve Poe wrote:\n\n> I need to consider a vendor for the new disc array (6-\n> to 8 discs). The local vendor (in the San Francisco Bay Area),\n> I've not been completely pleased with, so I am considering using\n> Dell storage connecting to an retail version LSI MegaRAID 320-2X card.\n>\n> Anyone use any vendors that have been supportive of Postgresql?\n\nI've been 1000% satisfied with Partners Data for my RAID systems.  I\nconnect to the host boxes with a fibre channel.  They've gone above\nand beyond expectations for supporting my FreeBSD systems.  I don't\nknow if/how they support postgres as I never asked for that help.\nTheir prices are excellent, too.\n\nAs for your plan to hook up Dell storage to a 320-2x card, the last\ntime I did that, the lsi card complained that one of the drives in the\n14-disk chassis was down.  Identical on two different arrays I had.\nDell swapped nearly every single part, yet the LSI card still\ncomplained.  I had to drop the drives to U160 speed to get it to even\nrecognize all the drives.\n\nI hooked up the same arrays to Adaptec controllers, and they seemed to\nnot mind the array so much, but would cause random failures\n(catastrophic failures resulting in loss of all data) on occassion\nuntil I dropped the disks to U160 speed.   Dell swears up and down\nthat their devices work at U320, but the two arrays I got from them,\nwhich were identical twins, both clearly did not work at U320 properly.\n\nIt is these Dell arrays that I replaced with the Partners Data units\nlast year.  The dell boxes still have a  year of warrantee on them...\nanyone interested in buying them from me, please make an offer :-)\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your Subscription:\nhttp://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance", "msg_date": "Sun, 2 Mar 2008 20:02:36 -0800", "msg_from": "\"Steve Poe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "On Sun, 2 Mar 2008 20:02:36 -0800\n\"Steve Poe\" <[email protected]> wrote:\n\n> > It is these Dell arrays that I replaced with the Partners Data units\n> > last year. The dell boxes still have a year of warrantee on\n> > them... anyone interested in buying them from me, please make an\n> > offer :-)\n\nI suggest the HP 64* and P* series. \n\nJoshua D. 
Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit", "msg_date": "Sun, 2 Mar 2008 20:13:15 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "On Sun, Mar 2, 2008 at 9:11 PM, Vivek Khera <[email protected]> wrote:\n>\n> On Mar 2, 2008, at 2:37 AM, Steve Poe wrote:\n>\n> > I need to consider a vendor for the new disc array (6-\n> > to 8 discs). The local vendor (in the San Francisco Bay Area),\n> > I've not been completely pleased with, so I am considering using\n> > Dell storage connecting to an retail version LSI MegaRAID 320-2X card.\n> >\n> > Anyone use any vendors that have been supportive of Postgresql?\n>\n> I've been 1000% satisfied with Partners Data for my RAID systems. I\n> connect to the host boxes with a fibre channel. They've gone above\n> and beyond expectations for supporting my FreeBSD systems. I don't\n> know if/how they support postgres as I never asked for that help.\n> Their prices are excellent, too.\n>\n> As for your plan to hook up Dell storage to a 320-2x card, the last\n> time I did that, the lsi card complained that one of the drives in the\n> 14-disk chassis was down. Identical on two different arrays I had.\n> Dell swapped nearly every single part, yet the LSI card still\n> complained. I had to drop the drives to U160 speed to get it to even\n> recognize all the drives.\n\nIs there still some advantage to U320 over SAS? I'm just wondering\nwhy one would be building a new machine with U320 instead of SAS\nnowadays.\n\nAnd I've never had any of the problems you list with LSI cards. The\nonly issue I've seen is mediocre RAID-10 performance on their cards\nmany years ago, when I was testing them for our database server.\nAdaptec controllers, especially RAID controllers have been nothing but\nproblematic for me, with random lockups every month or two. Just\nenough to make it really dangerous, not often enough to make it easy\nto troubleshoot. In those systems the lockup problems were solved\n100% by switching to LSI based controllers.\n", "msg_date": "Sun, 2 Mar 2008 22:23:23 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "On Sat, 1 Mar 2008, Steve Poe wrote:\n\n> SATA seems cheaper and you can get more discs but I want to stay with \n> SCSI. Any good reasons to choose SATA over SCSI?\n\nI've collected up many of the past list comments on this subject and put a \nsummary at http://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 3 Mar 2008 00:16:44 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "\nOn Mar 2, 2008, at 11:02 PM, Steve Poe wrote:\n\n> It seems the RAID card manufacturers have more to do with failures\n> than the drives themselves. Have you found a RAID card you did not\n> have to drop to U160?\n\nThe only array for which I've had to drop to U160 on an LSI card is \nthe Dell array. 
I think the backplane is not fully U320 compliant, \nbut I have no real proof. I had the same seagate drives, which I \n*know* work U320 with an LSI card.\n\nIt seems only the Dell-branded LSI cards work with the Dell-branded \narrays at U320 -- at least they report to be working.\n\nBecause I'm leery of Adaptec, and the LSI cards are hard to get decent \narrays at decent prices, I've moved to using external RAID enclosures \nattached via LSI Fibre Channel cards.\n", "msg_date": "Mon, 3 Mar 2008 09:32:09 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "\nOn Mar 2, 2008, at 11:23 PM, Scott Marlowe wrote:\n\n> And I've never had any of the problems you list with LSI cards. The\n> only issue I've seen is mediocre RAID-10 performance on their cards\n\nI don't fault the LSI card. The 320-2X is by far one of the fastest \ncards I've ever used, and the most stable under FreeBSD. The only \ntime I've had issue with the LSI cards is with dell-branded disk \nenclosures.\n\nAs for using U320 vs. SAS, I guess the decision would be based on \ncost. The last systems I bought with big disks were over a year ago, \nso I don't know the pricing anymore.\n\n", "msg_date": "Mon, 3 Mar 2008 09:34:34 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "\nOn Mar 3, 2008, at 12:16 AM, Greg Smith wrote:\n\n> I've collected up many of the past list comments on this subject and \n> put a summary athttp://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks\n\nI'll add a recommendation of Partners Data Systems http://www.partnersdata.com/ \n as a great vendor of SATA RAID subsystems (the 16-disk units I have \nare based on an Areca controller and have dual FC output)\n\n", "msg_date": "Mon, 3 Mar 2008 10:02:08 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" }, { "msg_contents": "Greg Smith wrote:\n> On Sat, 1 Mar 2008, Steve Poe wrote:\n>> SATA over SCSI?\n> \n> I've collected up many of the past list comments on this subject and put \n> a summary at \n> http://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks\n\nShould this section:\n\n ATA Disks... Always default to the write cache\n enabled....turn it off....\n\nbe amended to say that if you have an OS that supports write\nbarriers (linuxes newer than early 2005) you shouldn't worry\nabout this?\n\nAnd perhaps the SCSI section should also be amended to say that\nthat the same 2.6 kernels that fail to send the IDE FLUSH CACHE\ncommand also fail to send the SCSI SYNCHRONIZE CACHE command,\nso you should go through the same cache-disabling hoops there?\n\n\nReferences from the Linux SATA driver guy and lwn here:\nhttp://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114\nhttp://lwn.net/Articles/77074/\n", "msg_date": "Mon, 03 Mar 2008 11:38:21 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to choose a disc array for Postgresql?" } ]
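As a concrete follow-up to the write-cache notes above: on Linux, the volatile cache on a plain SATA/ATA drive can be inspected and switched off with hdparm. This is only a hedged illustration; the device name is a placeholder, behaviour varies by drive and controller, and disks sitting behind a battery-backed RAID controller are normally governed by the controller's cache policy instead.

    # show drive features, including whether the on-drive write cache is enabled
    hdparm -I /dev/sda | grep -i 'write cache'
    # turn the on-drive write cache off (re-apply from an init script if it has to persist)
    hdparm -W0 /dev/sda

The point of the exercise, per the wiki summary referenced above, is that an enabled drive cache without battery protection can lose fsync'd WAL data on power failure.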
[ { "msg_contents": "The Dell MD1000 is good. The most trouble you will have will be with the raid adapter - to get the best support I suggest trying to buy the dell perc 5e (also an LSI) - that way you'll get drivers that work and are supported.\r\n\r\nLatest seq scan performance I've seen on redhat 5 is 400 MB/s on eight drives in RAID10 after setting linux max readahead to 16384 (blockdev --setra 16384) and 220 without.\r\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Sun Mar 02 02:37:37 2008\r\nSubject: [PERFORM] How to choose a disc array for Postgresql?\r\n\r\nI am moving our small business application\r\ndatabase application supporting \r\na 24/7 animal hospital to use 8.0.15 from\r\n7.4.19 (it will not support 8.1, 8.2. or 8.3).\r\n\r\nNow, we can choose a new a disc array. SATA\r\nseems cheaper and you can get more discs but\r\nI want to stay with SCSI. Any good reasons to\r\nchoose SATA over SCSI?\r\n\r\nI need to consider a vendor for the new disc array (6-\r\nto 8 discs). The local vendor (in the San Francisco Bay Area),\r\nI've not been completely pleased with, so I am considering using\r\nDell storage connecting to an retail version LSI MegaRAID 320-2X card.\r\n\r\nAnyone use any vendors that have been supportive of Postgresql?\r\n\r\nThanks for your help/feedback.\r\n\r\nSteve\r\n\r\n\r\n\r\n\n\n\n\n\nRe: [PERFORM] How to choose a disc array for Postgresql?\n\n\n\nThe Dell MD1000 is good.  The most trouble you will have will be with the raid adapter - to get the best support I suggest trying to buy the dell perc 5e (also an LSI) - that way you'll get drivers that work and are supported.\n\r\nLatest seq scan performance I've seen on redhat 5 is 400 MB/s on eight drives in RAID10 after setting linux max readahead to 16384 (blockdev --setra 16384) and 220 without.\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Sun Mar 02 02:37:37 2008\r\nSubject: [PERFORM] How to choose a disc array for Postgresql?\n\r\nI am moving our small business application\r\ndatabase application supporting\r\na 24/7 animal hospital to use 8.0.15 from\r\n7.4.19 (it will not support 8.1, 8.2. or 8.3).\n\r\nNow, we can choose a new a disc array. SATA\r\nseems cheaper and you can get more discs but\r\nI want to stay with SCSI.  Any good reasons to\r\nchoose SATA over SCSI?\n\r\nI need to consider a vendor for the new disc array (6-\r\nto 8 discs). The local vendor (in the San Francisco Bay Area),\r\nI've not been completely pleased with, so I am considering using\r\nDell storage connecting to an retail version LSI MegaRAID 320-2X card.\n\r\nAnyone use any vendors that have been supportive of Postgresql?\n\r\nThanks for your help/feedback.\n\r\nSteve", "msg_date": "Sun, 2 Mar 2008 09:29:05 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to choose a disc array for Postgresql?" } ]
[ { "msg_contents": "Subject about says it all. Should I be more concerned about checkpoints \nhappening 'frequently' or lasting 'longer'? In other words, is it ok to \ncheckpoint say, every 5 minutes, if it only last a second or three or better \nto have checkpoints every 10 minutes that last half a minute? Stupid examples \nprobably, but you get my point I hope :)\n-- \nDouglas J Hunley (doug at hunley.homeip.net) - Linux User #174778\nhttp://doug.hunley.homeip.net\n", "msg_date": "Mon, 3 Mar 2008 09:25:02 -0500", "msg_from": "Douglas J Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "which is more important? freq of checkpoints or the duration of them?" }, { "msg_contents": "On Mon, Mar 3, 2008 at 8:25 AM, Douglas J Hunley <[email protected]> wrote:\n> Subject about says it all. Should I be more concerned about checkpoints\n> happening 'frequently' or lasting 'longer'? In other words, is it ok to\n> checkpoint say, every 5 minutes, if it only last a second or three or better\n> to have checkpoints every 10 minutes that last half a minute? Stupid examples\n> probably, but you get my point I hope :)\n\nThe answer is, of course, it depends.\n\nIf you do a lot of batch processing where you move a lot of data in a\nstream into the database, then less, but larger checkpoints are\nprobably a win.\n\nOr is this a transactional system that has to run transactions in\nunder x seconds? Then more, smaller checkpoints might make sense.\n\nAnd then, you might be better off using the bgwriter. If tuned\nproperly, it will keep ahead of your checkpoints just enough that they\nnever have to happen. Comes with a price, some small % of performance\nloss peak, in exchange for a smoother behaviour.\n", "msg_date": "Mon, 3 Mar 2008 09:16:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which is more important? freq of checkpoints or the duration of\n\tthem?" }, { "msg_contents": "[email protected] (Douglas J Hunley) writes:\n> Subject about says it all. Should I be more concerned about checkpoints \n> happening 'frequently' or lasting 'longer'? In other words, is it ok to \n> checkpoint say, every 5 minutes, if it only last a second or three or better \n> to have checkpoints every 10 minutes that last half a minute? 
Stupid examples \n> probably, but you get my point I hope :)\n\nWell, with the (new-in-8.1) background writer, you should be able to\nhave whatever combination might appear attractive, as the point of the\nbackground writer is to push out dirty pages.\n\nPre-8.1, your choice would be either to:\na) Flush frequently, and so have the checkpoints be of short duration, or\nb) Flush infrequently, so that the checkpoint flushes would have a long\n duration.\n\nNow, if you have reasonable settings (I'm not sure how well its tuning\nis documented :-(), checkpoint \"flushes\" should be able to be short,\nhowever infrequent they may be.\n\nIn effect, the \"oops, the database got blocked by checkpoint flushing\"\nissue should now be gone...\n\nThe issue that then remains is whether to checkpoint often, in which\ncase crash recovery will tend to be be quicker, or whether to\ncheckpoint seldom, in which case crash recovery will have fewer\ncheckpoints to choose from, and hence will run somewhat longer.\n\nIf your systems don't crash much, and recovery time isn't a big deal,\nthen this probably doesn't much matter...\n-- \n(reverse (concatenate 'string \"ofni.sesabatadxunil\" \"@\" \"enworbbc\"))\nhttp://linuxfinances.info/info/sap.html\n\"I don't plan to maintain it, just to install it.\" -- Richard M. Stallman\n", "msg_date": "Mon, 03 Mar 2008 15:55:33 +0000", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which is more important? freq of checkpoints or the duration of\n\tthem?" }, { "msg_contents": "On Mon, 3 Mar 2008, Douglas J Hunley wrote:\n\n> In other words, is it ok to checkpoint say, every 5 minutes, if it only \n> last a second or three or better to have checkpoints every 10 minutes \n> that last half a minute?\n\nWhen checkpoints do too much work at once they will block clients for a \nsignificant period of time near the end--anywhere from 2 to 8 seconds \nisn't unusual. Every client on the system will just hang, then they all \nstart responding again in a batch when the checkpoint is finished.\n\nWith that as the problematic case, if you can keep the duration of the \ncheckpoint processing minimal by having them happen more frequently, then \nthat's the better approach. You can't push that interval too small though \nor your system will be continuously checkpointing.\n\nIn cases where checkpoints hurt no matter how often you do them, there it \nmakes sense to have them as infrequently as possible so at least you \nminimize the number of times that the disruption happens.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 3 Mar 2008 11:40:07 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which is more important? freq of checkpoints or the\n\tduration of them?" }, { "msg_contents": "On Mon, 3 Mar 2008, Chris Browne wrote:\n\n> Now, if you have reasonable settings (I'm not sure how well its tuning \n> is documented :-(), checkpoint \"flushes\" should be able to be short, \n> however infrequent they may be. In effect, the \"oops, the database got \n> blocked by checkpoint flushing\" issue should now be gone...\n\nAh, if only it were true. The background writer can be made to work \nfairly well in circa early 8.1 setups where the shared_buffers cache is \nsmall. 
But on more current systems where there's a lot of memory \ninvolved, you can't get a tuning aggressive enough to make checkpoint \nspikes small without wasting a bunch of I/O writing buffers that will just \nget dirty again before the checkpoint. Since the kinds of systems that \nhave nasty checkpoint spikes are also I/O bound in general, there is no \ngood way to resolve that struggle using the code in 8.1 and 8.2.\n\nThe checkpoint_completion_target tunable and related code in 8.3 is the \nfirst approach to this issue that has a good foundation even with larger \nbuffer caches. You can band-aid some cases well enough to improve things \nwith the background writer in earlier versions, but it's certainly not \nguaranteed that it's possible even if you spend lots of time fiddling with \nthe settings.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 3 Mar 2008 12:06:29 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which is more important? freq of checkpoints or the\n\tduration of them?" } ]
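As an illustration of the knobs this thread is weighing, a postgresql.conf sketch follows. The values are assumptions for a write-heavy box rather than recommendations from the posters, and checkpoint_completion_target exists only from 8.3 onwards, as Greg notes.

    checkpoint_segments = 32             # enough WAL headroom that checkpoints are started by the timeout, not by segment churn
    checkpoint_timeout = 5min            # how often a timed checkpoint begins
    checkpoint_warning = 30s             # log a warning when segment-driven checkpoints arrive closer together than this
    checkpoint_completion_target = 0.7   # 8.3+: spread the checkpoint writes over roughly 70% of the interval

On 8.1 and 8.2 the background-writer settings are the only lever for smoothing the write-out, with the limitations Greg describes for large shared_buffers.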
[ { "msg_contents": "Hello\n\n---------------------------\nPostgresql version: 8.1.10\n4GB RAM\n2x HP 72GB 10K SAS RAID1/smartarray\n---------------------------\n\nI have a colleague that is having som performance problems from time to\ntime when deleting some rows from a table.\n\nWe found out that the database having this problem had a severe bloat\nproblem in many tables and indexes (they were running only autovacuum)\nand some misconfiguration in postgresql.conf.\n\nWhat we did to fix the situation was:\n\n1) Stop the application accessing the database.\n2) Change these parameters in postgresql.conf:\n---------------------------------\nshared_buffers = 108157\nwork_mem = 16384\nmaintenance_work_mem = 262144\n\nmax_fsm_pages = 800000\n\nwal_buffers = 64\ncheckpoint_segments = 128\n\nrandom_page_cost = 2.0\neffective_cache_size = 255479\n\ndefault_statistics_target = 400\n---------------------------------\n\n3) Update /etc/sysctl.conf with new values for kernel.shmmax and\nkernel.shmall\n\n3) Run 'VACUUM FULL VERBOSE'\n4) Run 'REINDEX DATABASE <dbname>'\n5) Run 'ANALYZE VERBOSE'\n6) Define a 'VACUUM VERBOSE ANALYZE' in crontab\n7) Start the application.\n\nThese changes helped a lot, the size of the database when down from 7GB\nto 1GB and most of the deletes work as they are suppose to. But from\ntime to time a single deletion takes a lot of time to finish. The output\nfrom explain analyze doesn't show anything wrong, as long as I can see.\n\nThe definition of the table 'module' is:\n-------------------------------------------------------------------------\nmanage=# \\d module\n Table \"public.module\"\n Column | Type | Modifiers\n-----------+-----------------------------+-----------------------------------------------------------\n moduleid | integer | not null default\nnextval('module_moduleid_seq'::regclass)\n deviceid | integer | not null\n netboxid | integer | not null\n module | integer | not null\n model | character varying |\n descr | character varying |\n up | character(1) | not null default 'y'::bpchar\n downsince | timestamp without time zone |\nIndexes:\n \"module_pkey\" PRIMARY KEY, btree (moduleid)\n \"module_deviceid_key\" UNIQUE, btree (deviceid)\n \"module_netboxid_key\" UNIQUE, btree (netboxid, module)\nCheck constraints:\n \"module_up\" CHECK (up = 'y'::bpchar OR up = 'n'::bpchar)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (deviceid) REFERENCES device(deviceid) ON UPDATE\nCASCADE ON DELETE CASCADE\n \"$2\" FOREIGN KEY (netboxid) REFERENCES netbox(netboxid) ON UPDATE\nCASCADE ON DELETE CASCADE\nRules:\n close_alerthist_modules AS\n ON DELETE TO module DO UPDATE alerthist SET end_time = now()\n WHERE (alerthist.eventtypeid::text = 'moduleState'::text OR\nalerthist.eventtypeid::text = 'linkState'::text) AND alerthist.end_time\n= 'infinity'::timestamp without time zone AND alerthist.deviceid =\nold.deviceid\n-------------------------------------------------------------------------\n\n\nmanage=# EXPLAIN ANALYZE DELETE FROM module WHERE deviceid='7298';\n QUERY PLAN\n-------------------------------------------------------------------------\n Nested Loop (cost=0.00..14.63 rows=1 width=67) (actual\ntime=2.365..2.365 rows=0 loops=1)\n -> Index Scan using alerthist_end_time_btree on alerthist\n(cost=0.00..10.65 rows=1 width=67) (actual time=2.363..2.363 rows=0 loops=1)\n Index Cond: (end_time = 'infinity'::timestamp without time zone)\n Filter: ((((eventtypeid)::text = 'moduleState'::text) OR\n((eventtypeid)::text = 'linkState'::text)) AND (7298 = deviceid))\n -> Index Scan 
using module_deviceid_key on module (cost=0.00..3.96\nrows=1 width=4) (never executed)\n Index Cond: (deviceid = 7298)\n Total runtime: 2.546 ms\n\n Index Scan using module_deviceid_key on module (cost=0.00..3.96 rows=1\nwidth=6) (actual time=0.060..0.061 rows=1 loops=1)\n Index Cond: (deviceid = 7298)\n Trigger for constraint $1: time=3.422 calls=1\n Trigger for constraint $1: time=0.603 calls=1\n Total runtime: 2462558.813 ms\n(13 rows)\n-------------------------------------------------------------------------\n\nAny ideas why it is taking 2462558.813 ms to finish when the total time\nfor the deletion is 2.546 ms + 3.422 ms + 0.603ms?\n\nThe deletion of a row in the 'module' table involves several\ndeletions/updates in many other tables in the database related by\nforeign keys (with ON DELETE CASCADE) and triggers.\n\nI suppose that an open transaction in one of these not directly releated\ntables to 'module' could lock the deletion without showing in EXPLAIN\nANALYZE?. The two 'Trigger for constraint' in the EXPLAIN ANALYZE output\nonly show two tables having an attribute as a foreign key in 'module',\nbut if these two tables have to wait for other tables, that would not\nshow anywhere? (only in pg_locks)\n\nThanks in advance\nregards\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Mon, 03 Mar 2008 15:54:49 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems deleting data" }, { "msg_contents": "Rafael Martinez <[email protected]> writes:\n> manage=# EXPLAIN ANALYZE DELETE FROM module WHERE deviceid='7298';\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Nested Loop (cost=0.00..14.63 rows=1 width=67) (actual\n> time=2.365..2.365 rows=0 loops=1)\n> -> Index Scan using alerthist_end_time_btree on alerthist\n> (cost=0.00..10.65 rows=1 width=67) (actual time=2.363..2.363 rows=0 loops=1)\n> Index Cond: (end_time = 'infinity'::timestamp without time zone)\n> Filter: ((((eventtypeid)::text = 'moduleState'::text) OR\n> ((eventtypeid)::text = 'linkState'::text)) AND (7298 = deviceid))\n> -> Index Scan using module_deviceid_key on module (cost=0.00..3.96\n> rows=1 width=4) (never executed)\n> Index Cond: (deviceid = 7298)\n> Total runtime: 2.546 ms\n\n> Index Scan using module_deviceid_key on module (cost=0.00..3.96 rows=1\n> width=6) (actual time=0.060..0.061 rows=1 loops=1)\n> Index Cond: (deviceid = 7298)\n> Trigger for constraint $1: time=3.422 calls=1\n> Trigger for constraint $1: time=0.603 calls=1\n> Total runtime: 2462558.813 ms\n> (13 rows)\n> -------------------------------------------------------------------------\n\n> Any ideas why it is taking 2462558.813 ms to finish when the total time\n> for the deletion is 2.546 ms + 3.422 ms + 0.603ms?\n\nThat's just bizarre. So far as I can see from the 8.1 EXPLAIN code,\nthe only place the extra time could be spent is in ExecutorStart,\nExecutorEnd, or the top level of ExecutorRun, none of which should\ntake any noticeable amount of runtime in a trivial query like this.\n\nThe only thing I can think of is that ExecutorStart would have been\nwhere we'd acquire RowExclusiveLock on \"module\", while the previous\nrule-generated query would only take AccessShareLock. 
So if for\ninstance some other transaction had ShareLock (perhaps from CREATE\nINDEX) and just sat a long time before committing, perhaps this\nwould be explainable. I'm not too sure about that explanation\nthough because I think the parser should have already taken\nRowExclusiveLock when it was doing parse analysis.\n\nIs the problem repeatable? Is the delay consistent? What do\nyou see in pg_locks while it's delaying? Also watch \"vmstat 1\"\noutput --- is it consuming CPU and/or I/O?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Mar 2008 12:25:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems deleting data " }, { "msg_contents": "Tom Lane wrote:\n> Rafael Martinez <[email protected]> writes:\n> \n>> Any ideas why it is taking 2462558.813 ms to finish when the total time\n>> for the deletion is 2.546 ms + 3.422 ms + 0.603ms?\n>\n\nHei Tom, I got this information from my colleague:\n\n\n> Is the problem repeatable? \n\nRepeatable as in about 30+ times every day, the deletion of a row takes\nmore than 100 seconds. I have not found a way to provoke it though.\n\n> Is the delay consistent? \n\nNo. I see frequently everything from below the 8 seconds\nlog_min_duration_statement to about 4900 seconds. As for distribution,\nabout half of the 30+ takes more than 500 seconds to complete, the rest\n(obviously) between 100 and 500 seconds.\n\n> What do you see in pg_locks while it's delaying? \n\n locktype | database | relation | page | tuple | transactionid |\nclassid | objid | objsubid | transaction | pid | mode |\ngranted\n---------------+----------+----------+------+-------+---------------+---------+-------+----------+-------------+-------+------------------+---------\n relation | 16393 | 16784 | | | |\n | | | 82179843 | 19890 | AccessShareLock | t\n relation | 16393 | 16784 | | | |\n | | | 82179843 | 19890 | RowExclusiveLock | t\n relation | 16393 | 17176 | | | |\n | | | 82179843 | 19890 | RowExclusiveLock | t\n relation | 16393 | 16794 | | | |\n | | | 82180131 | 19907 | AccessShareLock | t\n relation | 16393 | 16794 | | | |\n | | | 82180131 | 19907 | RowExclusiveLock | t\n relation | 16393 | 16977 | | | |\n | | | 82179843 | 19890 | AccessShareLock | t\n relation | 16393 | 16977 | | | |\n | | | 82179843 | 19890 | RowExclusiveLock | t\n relation | 16393 | 16800 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16800 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 17174 | | | |\n | | | 82179843 | 19890 | RowExclusiveLock | t\n transactionid | | | | | 80430155 |\n | | | 80430155 | 29569 | ExclusiveLock | t\n relation | 16393 | 17164 | | | |\n | | | 82179843 | 19890 | AccessShareLock | t\n relation | 16393 | 16816 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16816 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 16812 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16812 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 17174 | | | |\n | | | 82180131 | 19907 | RowExclusiveLock | t\n relation | 16393 | 16977 | | | |\n | | | 82180131 | 19907 | AccessShareLock | t\n relation | 16393 | 16977 | | | |\n | | | 82180131 | 19907 | RowExclusiveLock | t\n relation | 16393 | 16784 | | | |\n | | | 82180131 | 19907 | AccessShareLock | t\n relation | 16393 | 16784 | | | |\n | | | 82180131 | 19907 | RowExclusiveLock | t\n relation | 16393 | 16766 | | | |\n | | | 82179843 | 
19890 | AccessShareLock | t\n relation | 16393 | 16766 | | | |\n | | | 82179843 | 19890 | RowExclusiveLock | t\n relation | 16393 | 16977 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16977 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 17164 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16766 | | | |\n | | | 82180131 | 19907 | AccessShareLock | t\n relation | 16393 | 16766 | | | |\n | | | 82180131 | 19907 | RowExclusiveLock | t\n relation | 16393 | 10342 | | | |\n | | | 82180134 | 31646 | AccessShareLock | t\n relation | 16393 | 16794 | | | |\n | | | 82179843 | 19890 | AccessShareLock | t\n relation | 16393 | 16794 | | | |\n | | | 82179843 | 19890 | RowExclusiveLock | t\n relation | 16393 | 16835 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16835 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 17176 | | | |\n | | | 82180131 | 19907 | RowExclusiveLock | t\n relation | 16393 | 16800 | | | |\n | | | 82180131 | 19907 | AccessShareLock | t\n relation | 16393 | 16800 | | | |\n | | | 82180131 | 19907 | RowExclusiveLock | t\n relation | 16393 | 16821 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16821 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 17174 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 16730 | | | |\n | | | 80430155 | 29569 | AccessShareLock | t\n transactionid | | | | | 82179669 |\n | | | 82179669 | 19906 | ExclusiveLock | t\n relation | 16393 | 16800 | | | |\n | | | 82179843 | 19890 | AccessShareLock | t\n relation | 16393 | 16800 | | | |\n | | | 82179843 | 19890 | RowExclusiveLock | t\n relation | 16393 | 16784 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16784 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 16766 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16766 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n relation | 16393 | 16794 | | | |\n | | | 82179669 | 19906 | AccessShareLock | t\n relation | 16393 | 16794 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n transactionid | | | | | 82180134 |\n | | | 82180134 | 31646 | ExclusiveLock | t\n transactionid | | | | | 82179843 |\n | | | 82179843 | 19890 | ExclusiveLock | t\n relation | 16393 | 17176 | | | |\n | | | 82179669 | 19906 | RowExclusiveLock | t\n transactionid | | | | | 82180131 |\n | | | 82180131 | 19907 | ExclusiveLock | t\n relation | 16393 | 17164 | | | |\n | | | 82180131 | 19907 | AccessShareLock | t\n(54 rows)\n\n\n> Also watch \"vmstat 1\" output --- is it consuming CPU and/or I/O?\n> \n> \t\t\t\n\nCPU 50% idle, rest mainly used in \"system\". Virtually no IO. No\nblocked processes. An impressive amount of context switches. No swap.\n\nAn strace(1) of the postgres process may give a hint about the \"system\"\npart; this is what it does over and over and over again. 
The filename\ndoes change to a different file in the same directory every now and\nthen, but not often.\n\nsemop(4227102, 0xbf8ef23a, 1) = 0\nsemop(4227102, 0xbf8ef67a, 1) = 0\nopen(\"pg_subtrans/047B\", O_RDWR|O_LARGEFILE) = 12\n_llseek(12, 139264, [139264], SEEK_SET) = 0\nread(12, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n8192) = 8192\nclose(12) = 0\n\n\nregards\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Tue, 04 Mar 2008 09:28:38 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems deleting data" }, { "msg_contents": "Rafael Martinez wrote:\n\n> CPU 50% idle, rest mainly used in \"system\". Virtually no IO. No\n> blocked processes. An impressive amount of context switches. No swap.\n> \n> An strace(1) of the postgres process may give a hint about the \"system\"\n> part; this is what it does over and over and over again. The filename\n> does change to a different file in the same directory every now and\n> then, but not often.\n> \n> semop(4227102, 0xbf8ef23a, 1) = 0\n> semop(4227102, 0xbf8ef67a, 1) = 0\n> open(\"pg_subtrans/047B\", O_RDWR|O_LARGEFILE) = 12\n> _llseek(12, 139264, [139264], SEEK_SET) = 0\n> read(12, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> 8192) = 8192\n> close(12) = 0\n\nHmm, severe usage of subtransactions? Does it ever write to these\npg_subtrans files? I wonder if it's related to the ON DELETE rule that\nupdates alerthist.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 4 Mar 2008 08:57:40 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems deleting data" } ]
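For anyone chasing a similar stall, here is a hedged pair of diagnostic queries in the spirit of Tom's questions, written against the 8.1 catalogs (procpid, current_query) and assuming stats_command_string is enabled so that current_query is populated.

    -- lock requests that have not been granted, i.e. backends stuck waiting
    SELECT l.locktype, l.relation::regclass AS relation, l.mode, l.granted,
           a.procpid, a.usename, a.current_query,
           now() - a.query_start AS running_for
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;

    -- long-lived open transactions that could be holding things up
    SELECT procpid, usename, now() - query_start AS started_ago, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
    ORDER BY query_start;

In this particular case every lock shown above is granted, which, together with the repeated pg_subtrans reads, points more towards Alvaro's subtransaction observation than a plain lock wait; the queries are simply the usual first check.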
[ { "msg_contents": "I've got a new server and am myself new to tuning postgres.\n\nServer is an 8 core Xeon 2.33GHz, 8GB RAM, RAID 10 on a 3ware 9550SX-4LP w/ BBU.\n\nIt's serving as the DB for a fairly write intensive (maybe 25-30%) Web\napplication in PHP. We are not using persistent connections, thus the\nhigh max connections.\n\nI've done the following so far:\n\n> cat /boot/loader.conf\nkern.ipc.semmni=256\nkern.ipc.semmns=512\nkern.ipc.semmnu=256\n\n> cat /etc/sysctl.conf\nkern.ipc.shmall=393216\nkern.ipc.shmmax=1610612736\nkern.ipc.semmap=256\nkern.ipc.shm_use_phys=1\n\npostgresql.conf settings (changed from Default):\nmax_connections = 180\nshared_buffers = 1024MB\nmaintenance_work_mem = 128MB\nwal_buffers = 1024kB\n\nI then set up a test database for running pgbench with scaling factor\n100. I then ran:\n> pgbench -c 100 -t 1000 testdb\nand got:\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 100\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 100000/100000\ntps = 557.095867 (including connections establishing)\ntps = 558.013714 (excluding connections establishing)\n\nJust for testing, I tried turning off fsync and got:\n> pgbench -c 100 -t 1000 testdb\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 100\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 100000/100000\ntps = 4014.075114 (including connections establishing)\ntps = 4061.662041 (excluding connections establishing)\n\nDo these numbers sound inline with what I should be seeing? What else\ncan I do to try to get better performance in the more general sense\n(knowing that specifics are tied to real world data and testing). Any\nhints for FreeBSD specific tuning would be helpful.\n", "msg_date": "Mon, 3 Mar 2008 15:39:35 -0800", "msg_from": "\"alan bryan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance tuning on FreeBSD" }, { "msg_contents": "\"alan bryan\" <[email protected]> wrote:\n>\n> I've got a new server and am myself new to tuning postgres.\n> \n> Server is an 8 core Xeon 2.33GHz, 8GB RAM, RAID 10 on a 3ware 9550SX-4LP w/ BBU.\n> \n> It's serving as the DB for a fairly write intensive (maybe 25-30%) Web\n> application in PHP. We are not using persistent connections, thus the\n> high max connections.\n> \n> I've done the following so far:\n> \n> > cat /boot/loader.conf\n> kern.ipc.semmni=256\n> kern.ipc.semmns=512\n> kern.ipc.semmnu=256\n> \n> > cat /etc/sysctl.conf\n> kern.ipc.shmall=393216\n> kern.ipc.shmmax=1610612736\n\nI would just set this to 2G (which is the max). It doesn't really hurt\nanything if you don't use it all.\n\n> kern.ipc.semmap=256\n> kern.ipc.shm_use_phys=1\n> \n> postgresql.conf settings (changed from Default):\n> max_connections = 180\n> shared_buffers = 1024MB\n\nWhy not 2G, which would be 25% of total memory?\n\n> maintenance_work_mem = 128MB\n> wal_buffers = 1024kB\n> \n> I then set up a test database for running pgbench with scaling factor\n> 100. 
I then ran:\n> > pgbench -c 100 -t 1000 testdb\n> and got:\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 100\n> number of transactions per client: 1000\n> number of transactions actually processed: 100000/100000\n> tps = 557.095867 (including connections establishing)\n> tps = 558.013714 (excluding connections establishing)\n> \n> Just for testing, I tried turning off fsync and got:\n> > pgbench -c 100 -t 1000 testdb\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 100\n> number of transactions per client: 1000\n> number of transactions actually processed: 100000/100000\n> tps = 4014.075114 (including connections establishing)\n> tps = 4061.662041 (excluding connections establishing)\n> \n> Do these numbers sound inline with what I should be seeing? What else\n> can I do to try to get better performance in the more general sense\n> (knowing that specifics are tied to real world data and testing). Any\n> hints for FreeBSD specific tuning would be helpful.\n\nAre you running FreeBSD 7? If performance is of the utmost importance,\nthen you need to be running the 7.X branch.\n\nBased on your pgbench results, I'm guessing you didn't get battery-backed\ncache on your systems? That makes a big difference no matter what OS\nyou're using.\n\nBesides that, I can't think of any FreeBSD-specific things to do. Basically,\ngeneral tuning advice applies to FreeBSD as well as to most other OS.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Mon, 3 Mar 2008 19:26:06 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "On Mon, Mar 3, 2008 at 4:26 PM, Bill Moran\n<[email protected]> wrote:\n\n> > > cat /boot/loader.conf\n> > kern.ipc.semmni=256\n> > kern.ipc.semmns=512\n> > kern.ipc.semmnu=256\n> >\n> > > cat /etc/sysctl.conf\n> > kern.ipc.shmall=393216\n> > kern.ipc.shmmax=1610612736\n>\n> I would just set this to 2G (which is the max). It doesn't really hurt\n> anything if you don't use it all.\n\nI'll try that and report back.\n\n\n> > kern.ipc.semmap=256\n> > kern.ipc.shm_use_phys=1\n> >\n> > postgresql.conf settings (changed from Default):\n> > max_connections = 180\n> > shared_buffers = 1024MB\n>\n> Why not 2G, which would be 25% of total memory?\n\n\nDitto - I'll report back.\n\n\n\n> Are you running FreeBSD 7? If performance is of the utmost importance,\n> then you need to be running the 7.X branch.\n>\n> Based on your pgbench results, I'm guessing you didn't get battery-backed\n> cache on your systems? That makes a big difference no matter what OS\n> you're using.\n>\n> Besides that, I can't think of any FreeBSD-specific things to do. Basically,\n> general tuning advice applies to FreeBSD as well as to most other OS.\n\nYes, FreeBSD 7.0-Release. Tried both the 4BSD and ULE schedulers and\ndidn't see much difference with this test.\nI do have the Battery for the 3ware and it is enabled. I'll do some\nbonnie++ benchmarks and make sure disk is near where it should be.\n\nShould turning off fsync make things roughly 8x-10x faster? Or is\nthat indicative of something not being correct or tuned quite right in\nthe rest of the system? 
I'll have to run in production with fsync on\nbut was just testing to see how much of an effect it had.\n\nThanks,\nAlan\n", "msg_date": "Mon, 3 Mar 2008 16:34:02 -0800", "msg_from": "\"alan bryan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "On Mon, 3 Mar 2008, alan bryan wrote:\n\n>> pgbench -c 100 -t 1000 testdb\n> tps = 558.013714 (excluding connections establishing)\n>\n> Just for testing, I tried turning off fsync and got:\n> tps = 4061.662041 (excluding connections establishing)\n\nThis is odd. ~500 is what I expect from this test when there is no write \ncache to accelerate fsync, while ~4000 is normal for your class of \nhardware when you have such a cache. Since you say your 3Ware card is \nsetup with a cache and a BBU, that's suspicious--you should be able to get \naround 4000 with fsync on. Any chance you have the card set to \nwrite-through instead of write-back? That's the only thing that comes to \nmind that would cause this.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 3 Mar 2008 20:11:42 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "On Mon, Mar 3, 2008 at 5:11 PM, Greg Smith <[email protected]> wrote:\n> On Mon, 3 Mar 2008, alan bryan wrote:\n>\n> >> pgbench -c 100 -t 1000 testdb\n>\n> > tps = 558.013714 (excluding connections establishing)\n> >\n> > Just for testing, I tried turning off fsync and got:\n>\n> > tps = 4061.662041 (excluding connections establishing)\n>\n> This is odd. ~500 is what I expect from this test when there is no write\n> cache to accelerate fsync, while ~4000 is normal for your class of\n> hardware when you have such a cache. Since you say your 3Ware card is\n> setup with a cache and a BBU, that's suspicious--you should be able to get\n> around 4000 with fsync on. Any chance you have the card set to\n> write-through instead of write-back? That's the only thing that comes to\n> mind that would cause this.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n\n\nAccording to 3dm2 the cache is on. I even tried setting The StorSave\npreference to \"Performance\" with no real benefit. There seems to be\nsomething really wrong with disk performance. Here's the results from\nbonnie:\n\nFile './Bonnie.2551', size: 104857600\nWriting with putc()...done\nRewriting...done\nWriting intelligently...done\nReading with getc()...done\nReading intelligently...done\nSeeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...\n -------Sequential Output-------- ---Sequential Input-- --Random--\n -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---\nMachine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU\n 100 9989 4.8 6739 1.0 18900 7.8 225973 98.5 1914662\n99.9 177210.7 259.7\n\nThis is on FreeBSD 7.0-Release. I tried ULE and 4BSD schedulers with\nno difference. Maybe I'll try FreeBSD 6.3 to see what that does?\n", "msg_date": "Tue, 4 Mar 2008 01:15:53 -0800", "msg_from": "\"alan bryan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "On Tue, 4 Mar 2008, alan bryan wrote:\n\n> There seems to be something really wrong with disk performance. Here's \n> the results from bonnie\n\nSo input speed is reasonable but write throughput is miserable--<10MB/s. 
\nI'd suggest taking this to one of the FreeBSD lists; this doesn't look \nlike a PostgreSQL problem.\n\n> This is on FreeBSD 7.0-Release. I tried ULE and 4BSD schedulers with\n> no difference. Maybe I'll try FreeBSD 6.3 to see what that does?\n\nThe other thing you might consider is booting with a Linux live CD/DVD \n(something like Ubuntu would work) and running bonnie++ from there to see \nwhat you get. Help to sort out whether this ia a server problem or an OS \none.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 4 Mar 2008 05:22:43 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "alan bryan wrote:\n\n> File './Bonnie.2551', size: 104857600\n> Writing with putc()...done\n> Rewriting...done\n> Writing intelligently...done\n> Reading with getc()...done\n> Reading intelligently...done\n> Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...\n> -------Sequential Output-------- ---Sequential Input-- --Random--\n> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---\n> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU\n> 100 9989 4.8 6739 1.0 18900 7.8 225973 98.5 1914662\n> 99.9 177210.7 259.7\n> \n> This is on FreeBSD 7.0-Release. I tried ULE and 4BSD schedulers with\n> no difference. Maybe I'll try FreeBSD 6.3 to see what that does?\n\nGenerally, you should set the \"size\" parameter to be twice the RAM\nyou've got (or use bonnie++ which will auto-size it), but anyway,\nsomething is definitely wrong with your drives, controller or the\ndriver. Switching schedulers won't help you, and trying different\nreleases will only help you if the problem is in the driver. Try asking\non the freebsd-performance @ freebsd.org list.\n\n", "msg_date": "Tue, 04 Mar 2008 11:54:25 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "Greg Smith wrote:\n> On Mon, 3 Mar 2008, alan bryan wrote:\n> \n>>> pgbench -c 100 -t 1000 testdb\n>> tps = 558.013714 (excluding connections establishing)\n>>\n>> Just for testing, I tried turning off fsync and got:\n>> tps = 4061.662041 (excluding connections establishing)\n> \n> This is odd. ~500 is what I expect from this test when there is no\n> write cache to accelerate fsync, while ~4000 is normal for your class of\n> hardware when you have such a cache. \n\nI'm curious about the math behind this - is ~4000 burst or sustained\nrate? For common BBU cache sizes (256M, 512M), filling that amount with\ndata is pretty trivial. When the cache is full, new data can enter the\ncache only at a rate at which old data is evacuated from the cache (to\nthe drive), which is at \"normal\", uncached disk drive speeds.\n\n", "msg_date": "Tue, 04 Mar 2008 15:07:57 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "On Tue, 4 Mar 2008, Ivan Voras wrote:\n> I'm curious about the math behind this - is ~4000 burst or sustained\n> rate? For common BBU cache sizes (256M, 512M), filling that amount with\n> data is pretty trivial. When the cache is full, new data can enter the\n> cache only at a rate at which old data is evacuated from the cache (to\n> the drive), which is at \"normal\", uncached disk drive speeds.\n\nShould be sustained rate. 
The reason is if you have no BBU cache, then \neach transaction needs to wait for the disc to rotate around to the bit \nwhere you want to write, even though each transaction is going to be \nwriting in approximately the same place each time. However, with a BBU \ncache, the system no longer needs to wait for the disc to rotate, and the \nwrites can be made from the cache to the disc in large groups of \nsequential writes, which is much faster. Several transactions worth can be \nwritten on each rotation instead of just one.\n\nMatthew\n\n-- \nPeople who love sausages, respect the law, and work with IT standards \nshouldn't watch any of them being made. -- Peter Gutmann\n", "msg_date": "Tue, 4 Mar 2008 14:14:33 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning on FreeBSD" }, { "msg_contents": "On Tue, 4 Mar 2008, Ivan Voras wrote:\n\n> I'm curious about the math behind this - is ~4000 burst or sustained\n> rate?\n\nAverage, which is not quite burst or sustained. No math behind it, just \nlooking at a few samples of pgbench data on similar hardware. A system \nlike this one is profiled at \nhttp://www.kaltenbrunner.cc/blog/index.php?/archives/21-8.3-vs.-8.2-a-simple-benchmark.html \nfor example.\n\n> For common BBU cache sizes (256M, 512M), filling that amount with data \n> is pretty trivial.\n\nI don't have any good numbers handy but I think the burst is >6000, you \nonly get that for a few seconds before all the caches fill and the rate \ndrops considerably.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 4 Mar 2008 11:25:25 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tuning on FreeBSD" } ]
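A quick way to confirm which of the settings discussed in this thread actually took effect after a restart is to ask the running server itself. The snippet below is a minimal sketch, assuming nothing beyond the standard pg_settings view in 8.2/8.3; the values mentioned in the comments are the ones suggested in the thread for this particular 8GB FreeBSD box, not general recommendations.

    SHOW fsync;                 -- should report 'on' for production use
    SHOW shared_buffers;        -- thread suggests about 2GB (25% of the 8GB of RAM)
    SHOW effective_cache_size;  -- roughly the size of the OS file system cache

    -- current value, unit, and where each setting came from
    SELECT name, setting, unit, source
      FROM pg_settings
     WHERE name IN ('fsync', 'shared_buffers', 'effective_cache_size',
                    'wal_buffers', 'checkpoint_segments', 'max_connections');

If source does not report 'configuration file' for a parameter that was edited in postgresql.conf, the running value is coming from somewhere else (a command-line flag or the built-in default) and the edit never took effect.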
[ { "msg_contents": "Hello Everyone,\n\nI had posted an issue previously that we've been unable to resolve. \nAn early mis-estimation in one or more subqueries causes the remainder \nof the query to choose nested loops instead of a more efficient method \nand runs very slowly (CPU Bound). I don't think there is any way to \n\"suggest\" to the planner it not do what it's doing, so we are starting \nto think about turning off nested loops entirely.\n\nHere is the history so far:\n\nhttp://archives.postgresql.org/pgsql-performance/2008-02/msg00205.php\n\nAt the suggestion of the list, we upgraded to 8.2.6 and are still \nexperiencing the same problem. I'm now installing 8.3 on my \nworkstation to see if it chooses a better plan, but it will take some \ntime to get it compiled, a db loaded, etc.\n\nWe have a number of very long running reports that will run in seconds \nif nested loops are turned off. The other alternative we are \nexploring is programmatically turning off nested loops just for the \nproblematic reports. But with the speedups we are seeing, others are \ngetting gun shy about having them on at all.\n\nSo, I've now been asked to ping the list as to whether turning off \nnested loops system wide is a bad idea, and why or why not.\n\nAny other thoughts or suggestions?\n\nThanks,\n\n-Chris\n", "msg_date": "Tue, 4 Mar 2008 09:42:49 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Ramifications of turning off Nested Loops for slow queries" }, { "msg_contents": ">>> On Tue, Mar 4, 2008 at 8:42 AM, in message\n<[email protected]>, Chris Kratz\n<[email protected]> wrote: \n \n> So, I've now been asked to ping the list as to whether turning off \n> nested loops system wide is a bad idea, and why or why not.\n \nIn our environment, the fastest plan for a lot of queries involve\nnested loops. Of course, it's possible that these never provide the\nfasted plan in your environment, but it seems very unlikely --\nyou're just not noticing the queries where it's doing fine.\n \n> Any other thoughts or suggestions?\n \nMake sure your effective_cache_size is properly configured.\n \nIncrease random_page_cost and/or decrease seq_page_cost.\nYou can play with the cost settings on a connection, using EXPLAIN\non the query, to see what plan you get with each configuration\nbefore putting it into the postgresql.conf file.\n \n-Kevin\n \n\n\n", "msg_date": "Tue, 04 Mar 2008 10:18:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ramifications of turning off Nested Loops for\n\tslow queries" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> On Tue, Mar 4, 2008 at 8:42 AM, in message\n> <[email protected]>, Chris Kratz\n> <[email protected]> wrote: \n>> So, I've now been asked to ping the list as to whether turning off \n>> nested loops system wide is a bad idea, and why or why not.\n \n> In our environment, the fastest plan for a lot of queries involve\n> nested loops. Of course, it's possible that these never provide the\n> fasted plan in your environment, but it seems very unlikely --\n> you're just not noticing the queries where it's doing fine.\n\nYeah, I seem to recall similar queries from other people who were\nconsidering the opposite, ie disabling the other join types :-(\n\nThe rule of thumb is that nestloop with an inner indexscan will beat\nanything else for pulling a few rows out of a large table. But on\nthe other hand it loses big for selecting lots of rows. 
I don't think\nthat a global disable in either direction would be a smart move, unless\nyou run only a very small number of query types and have checked them\nall.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Mar 2008 12:19:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ramifications of turning off Nested Loops for slow queries " }, { "msg_contents": "On 3/4/08, Kevin Grittner <[email protected]> wrote:\n>\n> >>> On Tue, Mar 4, 2008 at 8:42 AM, in message\n> > Any other thoughts or suggestions?\n>\n>\n> Make sure your effective_cache_size is properly configured.\n>\n> Increase random_page_cost and/or decrease seq_page_cost.\n> You can play with the cost settings on a connection, using EXPLAIN\n> on the query, to see what plan you get with each configuration\n> before putting it into the postgresql.conf file.\n>\n>\n> -Kevin\n\n\nThat was a good idea. I hadn't tried playing with those settings in a\nsession. This is a 8G box, and we've dedicated half of that (4G) to the\nfile system cache. So, 4G is what effective_cache_size is set to. Our\nseq_page_cost is set to 1 and our random_page_cost is set to 1.75 in the\npostgresql.conf.\n\nIn testing this one particular slow query in a session, I changed these\nsettings alternating in increments of 0.25. The random_page_cost up to 4\nand the seq_page_cost down to 0.25. This made perhaps a second difference,\nbut at the end, we were back to to the 37s. Doing a set enable_nestloop=off\nin the session reduced the runtime to 1.2s with the other settings back to\nour normal day to day settings.\n\nSo, for now I think we are going to have to modify the code to prepend the\nproblematic queries with this setting and hope the estimator is able to\nbetter estimate this particular query in 8.3.\n\nThanks for the suggestions,\n\n-Chris\n\nOn 3/4/08, Kevin Grittner <[email protected]> wrote:\n>>> On Tue, Mar 4, 2008 at  8:42 AM, in message> Any other thoughts or suggestions? Make sure your effective_cache_size is properly configured. Increase random_page_cost and/or decrease seq_page_cost.\n You can play with the cost settings on a connection, using EXPLAIN on the query, to see what plan you get with each configuration before putting it into the postgresql.conf file. -Kevin\nThat was a good idea.  I hadn't tried playing with those settings in a session.  This is a 8G box, and we've dedicated half of that (4G) to the file system cache.  So, 4G is what effective_cache_size is set to.  Our seq_page_cost is set to 1 and our random_page_cost is set to 1.75 in the postgresql.conf.\nIn testing this one particular slow query in a session, I changed these settings alternating in increments of 0.25.  The random_page_cost up to 4 and the seq_page_cost down to 0.25.  This made perhaps a second difference, but at the end, we were back to to the 37s.  
Doing a set enable_nestloop=off in the session reduced the runtime to 1.2s with the other settings back to our normal day to day settings.\nSo, for now I think we are going to have to modify the code to prepend the problematic queries with this setting and hope the estimator is able to better estimate this particular query in 8.3.\nThanks for the suggestions,-Chris", "msg_date": "Tue, 4 Mar 2008 13:13:58 -0500", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ramifications of turning off Nested Loops for slow queries" }, { "msg_contents": "On 3/4/08, Tom Lane <[email protected]> wrote:\n>\n> \"Kevin Grittner\" <[email protected]> writes:\n> > On Tue, Mar 4, 2008 at 8:42 AM, in message\n> > <[email protected]>, Chris Kratz\n> > <[email protected]> wrote:\n> >> So, I've now been asked to ping the list as to whether turning off\n> >> nested loops system wide is a bad idea, and why or why not.\n>\n> > In our environment, the fastest plan for a lot of queries involve\n> > nested loops. Of course, it's possible that these never provide the\n> > fasted plan in your environment, but it seems very unlikely --\n> > you're just not noticing the queries where it's doing fine.\n>\n>\n> Yeah, I seem to recall similar queries from other people who were\n> considering the opposite, ie disabling the other join types :-(\n>\n> The rule of thumb is that nestloop with an inner indexscan will beat\n> anything else for pulling a few rows out of a large table. But on\n> the other hand it loses big for selecting lots of rows. I don't think\n> that a global disable in either direction would be a smart move, unless\n> you run only a very small number of query types and have checked them\n> all.\n>\n> regards, tom lane\n>\n\nSo, if we can't find another way to solve the problem, probably our best bet\nis to turn off nested loops on particularly bad queries by prepending them\nw/ set enable_nested_loop=off? But, leave them on for the remainder of the\nsystem?\n\nDo you think it's worth testing on 8.3 to see if the estimator is able to\nmake a better estimate?\n\n-Chris\n\nOn 3/4/08, Tom Lane <[email protected]> wrote:\n\"Kevin Grittner\" <[email protected]> writes: > On Tue, Mar 4, 2008 at  8:42 AM, in message > <[email protected]>, Chris Kratz\n > <[email protected]> wrote: >> So, I've now been asked to ping the list as to whether turning off >> nested loops system wide is a bad idea, and why or why not.\n > In our environment, the fastest plan for a lot of queries involve > nested loops.  Of course, it's possible that these never provide the > fasted plan in your environment, but it seems very unlikely --\n > you're just not noticing the queries where it's doing fine. Yeah, I seem to recall similar queries from other people who were considering the opposite, ie disabling the other join types :-(\n The rule of thumb is that nestloop with an inner indexscan will beat anything else for pulling a few rows out of a large table.  But on the other hand it loses big for selecting lots of rows.  I don't think\n that a global disable in either direction would be a smart move, unless you run only a very small number of query types and have checked them all.                         regards, tom lane \nSo, if we can't find another way to solve the problem, probably our best bet is to turn off nested loops on particularly bad queries by prepending them w/ set enable_nested_loop=off?  
But, leave them on for the remainder of the system?\nDo you think it's worth testing on 8.3 to see if the estimator is able to make a better estimate?-Chris", "msg_date": "Tue, 4 Mar 2008 13:16:15 -0500", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ramifications of turning off Nested Loops for slow queries" } ]
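For the approach settled on above, disabling nested loops only for the problematic reports, the override can be scoped to a single transaction so the rest of the workload keeps the normal planner behaviour. This is only a sketch: the SELECT below is a trivial stand-in, since the real reporting query is not shown in the thread.

    BEGIN;
    SET LOCAL enable_nestloop = off;  -- reverts automatically at COMMIT/ROLLBACK
    -- the slow report would run here; placeholder query for illustration:
    SELECT count(*)
      FROM pg_class c
      JOIN pg_attribute a ON a.attrelid = c.oid;
    COMMIT;

    SHOW enable_nestloop;             -- still 'on' for everything else

A plain SET (without LOCAL) lasts for the rest of the session, which matches the "prepend the query" idea discussed above, but SET LOCAL limits the effect if the same connection is later reused for other queries.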
[ { "msg_contents": "Hi\n\nWe are thinking of running a PostgreSQL instance on a virtual host under\nXen.\n\nAny thoughts for/against running PostgreSQL on a virtual host would be\nmuch appreciated.\n\n-- \nRegards\nTheo\n\n", "msg_date": "Tue, 04 Mar 2008 17:43:00 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL performance on a virtual host" }, { "msg_contents": "We have very good experiences with openVZ as virtualizer.\nSince it's not a para virtualization like xen it's very fast. Almost \nas fast as the host.\n\nwww.openvz.org\n\nAm 04.03.2008 um 16:43 schrieb Theo Kramer:\n\n> Hi\n>\n> We are thinking of running a PostgreSQL instance on a virtual host \n> under\n> Xen.\n>\n> Any thoughts for/against running PostgreSQL on a virtual host would be\n> much appreciated.\n>\n> -- \n> Regards\n> Theo\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your Subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n\n", "msg_date": "Wed, 5 Mar 2008 09:54:35 +0100", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on a virtual host" }, { "msg_contents": "Hi,\n\nI've run it on xen. works OK. Course this is all predicated upon your \nexpectations. If you expect it to be as fast as a dedicated machine, \nyou will be dissapointed.\n\nDave\nOn 5-Mar-08, at 3:54 AM, Moritz Onken wrote:\n\n> We have very good experiences with openVZ as virtualizer.\n> Since it's not a para virtualization like xen it's very fast. Almost \n> as fast as the host.\n>\n> www.openvz.org\n>\n> Am 04.03.2008 um 16:43 schrieb Theo Kramer:\n>\n>> Hi\n>>\n>> We are thinking of running a PostgreSQL instance on a virtual host \n>> under\n>> Xen.\n>>\n>> Any thoughts for/against running PostgreSQL on a virtual host would \n>> be\n>> much appreciated.\n>>\n>> -- \n>> Regards\n>> Theo\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected] \n>> )\n>> To make changes to your Subscription:\n>> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n\n", "msg_date": "Wed, 5 Mar 2008 07:42:07 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on a virtual host" }, { "msg_contents": "Hello,\n\nWe had a bad experience with PostgreSQL running in OpenVZ (year and a\nhalf year ago): OpenVZ kernel killed postmaster with strange signals\nfrom time to time, failcounters of OpenVZ did not worked as expected\nin this moments, PostgreSQL fighted for the disk with applications in\nother virtual cells, no one from OpenVZ forums was able to help me\nwith these issues. So this experience was really dissapointing; since\nthen we use only dedicated systems without kernels patched for\nvirtualization.\n\n--\nRegards,\n Ivan\n\nOn Wed, Mar 5, 2008 at 11:54 AM, Moritz Onken <[email protected]> wrote:\n> We have very good experiences with openVZ as virtualizer.\n> Since it's not a para virtualization like xen it's very fast. 
Almost\n> as fast as the host.\n>\n> www.openvz.org\n>\n> Am 04.03.2008 um 16:43 schrieb Theo Kramer:\n>\n>\n>\n> > Hi\n> >\n> > We are thinking of running a PostgreSQL instance on a virtual host\n> > under\n> > Xen.\n> >\n> > Any thoughts for/against running PostgreSQL on a virtual host would be\n> > much appreciated.\n> >\n> > --\n> > Regards\n> > Theo\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected]\n> > )\n> > To make changes to your Subscription:\n> > http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n", "msg_date": "Wed, 5 Mar 2008 17:26:20 +0300", "msg_from": "\"Ivan Zolotukhin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on a virtual host" }, { "msg_contents": "In response to \"Ivan Zolotukhin\" <[email protected]>:\n> \n> We had a bad experience with PostgreSQL running in OpenVZ (year and a\n> half year ago): OpenVZ kernel killed postmaster with strange signals\n> from time to time, failcounters of OpenVZ did not worked as expected\n> in this moments, PostgreSQL fighted for the disk with applications in\n> other virtual cells, no one from OpenVZ forums was able to help me\n> with these issues. So this experience was really dissapointing; since\n> then we use only dedicated systems without kernels patched for\n> virtualization.\n\nIf your database is busy enough that it's stressing the hardware on a\nsingle machine, it's not going to do any better in a VM. Sounds to me like\nyou were already pushing the limits of the IO capability of that machine ...\nit's not OpenVZ's fault that it can't make more IO bandwidth available.\n\n> On Wed, Mar 5, 2008 at 11:54 AM, Moritz Onken <[email protected]> wrote:\n> > We have very good experiences with openVZ as virtualizer.\n> > Since it's not a para virtualization like xen it's very fast. Almost\n> > as fast as the host.\n> >\n> > www.openvz.org\n> >\n> > Am 04.03.2008 um 16:43 schrieb Theo Kramer:\n> >\n> >\n> >\n> > > Hi\n> > >\n> > > We are thinking of running a PostgreSQL instance on a virtual host\n> > > under\n> > > Xen.\n> > >\n> > > Any thoughts for/against running PostgreSQL on a virtual host would be\n> > > much appreciated.\n> > >\n> > > --\n> > > Regards\n> > > Theo\n> > >\n> > >\n> > > --\n> > > Sent via pgsql-performance mailing list ([email protected]\n> > > )\n> > > To make changes to your Subscription:\n> > > http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n> >\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. 
If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 5 Mar 2008 09:40:10 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on a virtual host" }, { "msg_contents": "Hello,\n\nOn Wed, Mar 5, 2008 at 5:40 PM, Bill Moran\n<[email protected]> wrote:\n> In response to \"Ivan Zolotukhin\" <[email protected]>:\n>\n> >\n> > We had a bad experience with PostgreSQL running in OpenVZ (year and a\n> > half year ago): OpenVZ kernel killed postmaster with strange signals\n> > from time to time, failcounters of OpenVZ did not worked as expected\n> > in this moments, PostgreSQL fighted for the disk with applications in\n> > other virtual cells, no one from OpenVZ forums was able to help me\n> > with these issues. So this experience was really dissapointing; since\n> > then we use only dedicated systems without kernels patched for\n> > virtualization.\n>\n> If your database is busy enough that it's stressing the hardware on a\n> single machine, it's not going to do any better in a VM. Sounds to me like\n> you were already pushing the limits of the IO capability of that machine ...\n> it's not OpenVZ's fault that it can't make more IO bandwidth available.\n\nThe problem actually was that PostgreSQL worked closely to some kernel\nlimits and kernel simply killed it sometimes without counting that in\ncorresponding failcounters. And there was no existing OpenVZ\ndocumentation to debug these issues which I think is unacceptable.\nWorkload (incl. IO load) was moderate, nothing extremal.\n\nI don't like when system kills my PostgreSQL without even telling me\nwhy. Apart from that I think that administrator should be able to\ndecide himself whether load is high enough to stop some process for it\nnot to interfere with other virtual cells. OpenVZ with its\ndocumentation and experts on dedicated forum did not provide such\ninstruments at that time. I would be satisfied with OpenVZ if I'd be\nable to tell the system not to kill my PostgreSQL whatever happens,\nbut I couldn't. So I simply switched to something more reliable.\n\n> > On Wed, Mar 5, 2008 at 11:54 AM, Moritz Onken <[email protected]> wrote:\n> > > We have very good experiences with openVZ as virtualizer.\n> > > Since it's not a para virtualization like xen it's very fast. 
Almost\n> > > as fast as the host.\n> > >\n> > > www.openvz.org\n> > >\n> > > Am 04.03.2008 um 16:43 schrieb Theo Kramer:\n> > >\n> > >\n> > >\n> > > > Hi\n> > > >\n> > > > We are thinking of running a PostgreSQL instance on a virtual host\n> > > > under\n> > > > Xen.\n> > > >\n> > > > Any thoughts for/against running PostgreSQL on a virtual host would be\n> > > > much appreciated.\n> > > >\n> > > > --\n> > > > Regards\n> > > > Theo\n> > > >\n> > > >\n> > > > --\n> > > > Sent via pgsql-performance mailing list ([email protected]\n> > > > )\n> > > > To make changes to your Subscription:\n> > > > http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n> > >\n> > >\n> > > --\n> > > Sent via pgsql-performance mailing list ([email protected])\n> > > To make changes to your subscription:\n> > > http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n> > >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n> ****************************************************************\n> IMPORTANT: This message contains confidential information and is\n> intended only for the individual named. If the reader of this\n> message is not an intended recipient (or the individual\n> responsible for the delivery of this message to an intended\n> recipient), please be advised that any re-use, dissemination,\n> distribution or copying of this message is prohibited. Please\n> notify the sender immediately by e-mail if you have received\n> this e-mail by mistake and delete this e-mail from your system.\n> E-mail transmission cannot be guaranteed to be secure or\n> error-free as information could be intercepted, corrupted, lost,\n> destroyed, arrive late or incomplete, or contain viruses. The\n> sender therefore does not accept liability for any errors or\n> omissions in the contents of this message, which arise as a\n> result of e-mail transmission.\n> ****************************************************************\n>\n", "msg_date": "Wed, 5 Mar 2008 17:56:11 +0300", "msg_from": "\"Ivan Zolotukhin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on a virtual host" } ]
[ { "msg_contents": "On Tuesday 04 March 2008, dforums <[email protected]> wrote:\n> Hello\n>\n>\n> We hace a Quad Xeon server, with 8GO of ram, sata II 750Go\n>\n>\n> I suppose the main problem is from database server settings.\n\nNo, the problem is your hard drive is too slow. One drive can only do maybe \n150 seeks per second.\n\nOh, and updates in PostgreSQL are expensive. But mostly I'd say it's your \ndrive.\n\n-- \nAlan\n", "msg_date": "Tue, 4 Mar 2008 13:03:58 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "\n\n\n\n\nHello\n\n\nWe hace a Quad Xeon server, with 8GO of ram, sata II 750Go\n\nAn postgresql database, of 10 Go\n\nI have several treatment every 2 minutes who select, insert, update\nthousand of data in a table. It take a lot of time (0.3300 ms per line)\njust to check if a string of 15 char is present, and decide to update\nit under few constraint\n\nI suppose the main problem is from database server settings.\n\nThis is my settings : \n\n\nmax_connections = 256\nshared_buffers = 1500                   # min 16 or max_connections*2,\n8KB each\ntemp_buffers = 500                      # min 100, 8KB each\nmax_prepared_transactions = 100 \n\nwork_mem = 22000                        # min 64, size in KB\nmaintenance_work_mem = 500000           # min 1024, size in KB\nmax_stack_depth = 8192 \n\n\nmax_fsm_pages = 100000                  # min max_fsm_relations*16, 6\nbytes each\nmax_fsm_relations = 5000  \n\n\nvacuum_cost_delay = 50                  # 0-1000 milliseconds\nvacuum_cost_page_hit = 1000             # 0-10000 credits\nvacuum_cost_page_miss = 1000            # 0-10000 credits\nvacuum_cost_page_dirty = 120            # 0-10000 credits\nvacuum_cost_limit = 2000                # 0-10000 credits\n\n# - Background writer -\n\nbgwriter_delay = 50                     # 10-10000 milliseconds between\nrounds\nbgwriter_lru_percent = 1.0              # 0-100% of LRU buffers\nscanned/round\nbgwriter_lru_maxpages = 25              # 0-1000 buffers max\nwritten/round\nbgwriter_all_percent = 0.333            # 0-100% of all buffers\nscanned/round\nbgwriter_all_maxpages = 50              # 0-1000 buffers max\nwritten/round\n\nwal_buffers = 16                        # min 4, 8KB each\ncommit_delay = 500                      # range 0-100000, in\nmicroseconds\ncommit_siblings = 50                    # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 50                # in logfile segments, min 1,\n16MB each\ncheckpoint_timeout = 1800               # range 30-3600, in seconds\ncheckpoint_warning = 180    \n\neffective_cache_size = 2048             # typically 8KB each\nrandom_page_cost = 3   \n\n\nShared memory set to :\necho /proc/sys/kernel/shmmax = 256000000\n\nCould you help  please...\n\ntx\n\n\nDavid\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 04 Mar 2008 22:21:22 +0000", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Optimisation help" }, { "msg_contents": "On Wed, Mar 05, 2008 at 12:15:25AM +0000, dforums wrote:\n> In regards of update, I have around 10000 updates while a laps of 10 minutes\n>\n> Is there a settings to optimise updates ?\n\nIf you can, batch them into a single transaction.\n\nIf you can, upgrade to 8.3. HOT might help you here.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 4 Mar 2008 23:54:45 +0100", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "On Tue, 4 Mar 2008, dforums wrote:\n\n> max_connections = 256\n> shared_buffers = 1500 # min 16 or max_connections*2, 8KB each\n> work_mem = 22000 # min 64, size in KB\n> effective_cache_size = 2048 # typically 8KB each\n\nWell, you're giving the main database server a whopping 1500*8K=12MB of \nspace to work with. Meanwhile you're giving each of the 256 clients up to \n22MB of work_mem, which means they can use 5.6GB total. This is quite \nbackwards.\n\nIncrease shared_buffers to something like 250000 (2GB), decrease work_mem \nto at most 10000 and probably lower, and raise effective_cache_size to \nsomething like 5GB=625000. Whatever data you've collected about \nperformance with your current settings is pretty much meaningless with \nonly giving 12MB of memory to shared_buffers and having a tiny setting for \neffective_cache_size.\n\nOh, and make sure you ANALYZE your tables regularly.\n\n> random_page_cost = 3\n\nAnd you shouldn't be playing with that until you've got the memory usage \nto something sane.\n\nAlso, you didn't mention what version of PostgreSQL you're using. You'll \nneed 8.1 or later to have any hope of using 8GB of RAM effectively on a \n4-core system.\n\n> But My most fear is that for now the database is only of 10 Go. But I \n> will have to increase it 10 times during the next six month I'm afraid \n> that these problems will increase.\n\nIt's very unlikely you will be able to get good performance on a 100GB \ndatabase with a single SATA drive. You should be able to get great \nperformance with the current size though.\n\n> In regards of update, I have around 10000 updates while a laps of 10 \n> minutes. Is there a settings to optimise updates ?\n\n10000 updates / 600 seconds = 17 updates/second. That's trivial; even a \nsingle boring drive can get 100/second. As someone already suggested your \nreal problem here is that you'll be hard pressed to handle the amount of \nseeking that goes into a larger database with only a single drive.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 4 Mar 2008 17:59:47 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "tX for your reply,\n\nI do not have more information on disk speed. I'll get it latter.\n\nBut My most fear is that for now the database is only of 10 Go.\n\nBut I will have to increase it 10 times during the next six month I'm \nafraid that these problems will increase.\n\nRegards\n\nDavid\n\nAlan Hodgson a �crit :\n> On Tuesday 04 March 2008, dforums <[email protected]> wrote:\n>> Hello\n>>\n>>\n>> We hace a Quad Xeon server, with 8GO of ram, sata II 750Go\n>>\n>>\n>> I suppose the main problem is from database server settings.\n> \n> No, the problem is your hard drive is too slow. One drive can only do maybe \n> 150 seeks per second.\n> \n> Oh, and updates in PostgreSQL are expensive. 
But mostly I'd say it's your \n> drive.\n> \n", "msg_date": "Wed, 05 Mar 2008 00:10:18 +0000", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "In regards of update, I have around 10000 updates while a laps of 10 minutes\n\nIs there a settings to optimise updates ?\n\nregards\n\ndavid\n\nAlan Hodgson a �crit :\n> On Tuesday 04 March 2008, dforums <[email protected]> wrote:\n>> Hello\n>>\n>>\n>> We hace a Quad Xeon server, with 8GO of ram, sata II 750Go\n>>\n>>\n>> I suppose the main problem is from database server settings.\n> \n> No, the problem is your hard drive is too slow. One drive can only do maybe \n> 150 seeks per second.\n> \n> Oh, and updates in PostgreSQL are expensive. But mostly I'd say it's your \n> drive.\n> \n", "msg_date": "Wed, 05 Mar 2008 00:15:25 +0000", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "\nOn Mar 4, 2008, at 6:54 PM, dforums wrote:\n\n> Hello,\n>\n> After controling the settings I so, that shared_buffers is \n> configurated at 1024 (the default), however, in my postgresql.conf I \n> set it to 250000, is it due to shared memory settings, should I \n> increase shmmax?\n\nDid you do a full restart of the db cluster? Changes to shared memory \nsettings require that.\n\nErik Jones\n\nDBA | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n\n", "msg_date": "Tue, 4 Mar 2008 18:19:32 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "Thanks i'm trying with this new settings. I gain only 3 second (2:40 vs \n 2:37 min) on a treatment of 1000 lines, with it's done every 2 minutes.\n\nFor the database version, i'm under postgresql 8.1.11. x64\n\nAs i'm in a procedure it seems that postgresql explain analyse doesn't \ngive details.\n\nI suppose that I have to fragment my procedure to see exactly where i'm \nwasting so much time.\n\nregards\n\ndavid\n\nGreg Smith a �crit :\n> On Tue, 4 Mar 2008, dforums wrote:\n> \n>> max_connections = 256\n>> shared_buffers = 1500 # min 16 or max_connections*2, \n>> 8KB each\n>> work_mem = 22000 # min 64, size in KB\n>> effective_cache_size = 2048 # typically 8KB each\n> \n> Well, you're giving the main database server a whopping 1500*8K=12MB of \n> space to work with. Meanwhile you're giving each of the 256 clients up \n> to 22MB of work_mem, which means they can use 5.6GB total. This is \n> quite backwards.\n> \n> Increase shared_buffers to something like 250000 (2GB), decrease \n> work_mem to at most 10000 and probably lower, and raise \n> effective_cache_size to something like 5GB=625000. Whatever data you've \n> collected about performance with your current settings is pretty much \n> meaningless with only giving 12MB of memory to shared_buffers and having \n> a tiny setting for effective_cache_size.\n> \n> Oh, and make sure you ANALYZE your tables regularly.\n> \n>> random_page_cost = 3\n> \n> And you shouldn't be playing with that until you've got the memory usage \n> to something sane.\n> \n> Also, you didn't mention what version of PostgreSQL you're using. \n> You'll need 8.1 or later to have any hope of using 8GB of RAM \n> effectively on a 4-core system.\n> \n>> But My most fear is that for now the database is only of 10 Go. 
But I \n>> will have to increase it 10 times during the next six month I'm afraid \n>> that these problems will increase.\n> \n> It's very unlikely you will be able to get good performance on a 100GB \n> database with a single SATA drive. You should be able to get great \n> performance with the current size though.\n> \n>> In regards of update, I have around 10000 updates while a laps of 10 \n>> minutes. Is there a settings to optimise updates ?\n> \n> 10000 updates / 600 seconds = 17 updates/second. That's trivial; even a \n> single boring drive can get 100/second. As someone already suggested \n> your real problem here is that you'll be hard pressed to handle the \n> amount of seeking that goes into a larger database with only a single \n> drive.\n> \n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your Subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance \n> \n> \n> \n", "msg_date": "Wed, 05 Mar 2008 00:37:44 +0000", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "Hello,\n\nAfter controling the settings I so, that shared_buffers is configurated \nat 1024 (the default), however, in my postgresql.conf I set it to \n250000, is it due to shared memory settings, should I increase shmmax?\n\nregards\n\ndavid\n\nGreg Smith a �crit :\n> On Tue, 4 Mar 2008, dforums wrote:\n> \n>> max_connections = 256\n>> shared_buffers = 1500 # min 16 or max_connections*2, \n>> 8KB each\n>> work_mem = 22000 # min 64, size in KB\n>> effective_cache_size = 2048 # typically 8KB each\n> \n> Well, you're giving the main database server a whopping 1500*8K=12MB of \n> space to work with. Meanwhile you're giving each of the 256 clients up \n> to 22MB of work_mem, which means they can use 5.6GB total. This is \n> quite backwards.\n> \n> Increase shared_buffers to something like 250000 (2GB), decrease \n> work_mem to at most 10000 and probably lower, and raise \n> effective_cache_size to something like 5GB=625000. Whatever data you've \n> collected about performance with your current settings is pretty much \n> meaningless with only giving 12MB of memory to shared_buffers and having \n> a tiny setting for effective_cache_size.\n> \n> Oh, and make sure you ANALYZE your tables regularly.\n> \n>> random_page_cost = 3\n> \n> And you shouldn't be playing with that until you've got the memory usage \n> to something sane.\n> \n> Also, you didn't mention what version of PostgreSQL you're using. \n> You'll need 8.1 or later to have any hope of using 8GB of RAM \n> effectively on a 4-core system.\n> \n>> But My most fear is that for now the database is only of 10 Go. But I \n>> will have to increase it 10 times during the next six month I'm afraid \n>> that these problems will increase.\n> \n> It's very unlikely you will be able to get good performance on a 100GB \n> database with a single SATA drive. You should be able to get great \n> performance with the current size though.\n> \n>> In regards of update, I have around 10000 updates while a laps of 10 \n>> minutes. Is there a settings to optimise updates ?\n> \n> 10000 updates / 600 seconds = 17 updates/second. That's trivial; even a \n> single boring drive can get 100/second. 
As someone already suggested \n> your real problem here is that you'll be hard pressed to handle the \n> amount of seeking that goes into a larger database with only a single \n> drive.\n> \n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your Subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance \n> \n> \n> \n", "msg_date": "Wed, 05 Mar 2008 00:54:51 +0000", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" }, { "msg_contents": "OK I found the cause, it was a default settings added on server start. \n(-B 1024) Grrrrrrrrr!!!!!\n\nNow it works really better I devide the full time per 2.\n\nI suppose I steal have to look deep in the procedure to see some hack, \nhas somebody suggest, I will try to buffer all updates in one.\n\nOne question, Could I optimise the treatment if I'm doing the select on \na view while updating the main table ????\n\n\nregards\n\nDavid\n\n\nErik Jones a �crit :\n> \n> On Mar 4, 2008, at 6:54 PM, dforums wrote:\n> \n>> Hello,\n>>\n>> After controling the settings I so, that shared_buffers is \n>> configurated at 1024 (the default), however, in my postgresql.conf I \n>> set it to 250000, is it due to shared memory settings, should I \n>> increase shmmax?\n> \n> Did you do a full restart of the db cluster? Changes to shared memory \n> settings require that.\n> \n> Erik Jones\n> \n> DBA | Emma�\n> [email protected]\n> 800.595.4401 or 615.292.5888\n> 615.292.0777 (fax)\n> \n> Emma helps organizations everywhere communicate & market in style.\n> Visit us online at http://www.myemma.com\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your Subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance \n> \n> \n> \n", "msg_date": "Wed, 05 Mar 2008 09:18:19 +0100", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimisation help" } ]
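The "buffer all updates in one" idea mentioned above can be taken a step further: if each two-minute batch is first loaded into a staging table, the per-line existence check and update collapse into set-based statements. The sketch below is purely illustrative, with made-up table and column names, since the real schema never appears in the thread.

    -- hypothetical staging table holding one incoming batch
    CREATE TEMP TABLE incoming (code varchar(15) PRIMARY KEY, payload text);

    -- hypothetical stand-in for the large table being updated
    CREATE TEMP TABLE target (code       varchar(15) PRIMARY KEY,
                              payload    text,
                              updated_at timestamptz);

    -- update every row whose 15-char code already exists, in one statement,
    -- instead of a SELECT-then-UPDATE round trip per line
    UPDATE target
       SET payload = incoming.payload,
           updated_at = now()
      FROM incoming
     WHERE target.code = incoming.code;

    -- then insert the codes that were not present yet, also in one pass
    INSERT INTO target (code, payload, updated_at)
    SELECT i.code, i.payload, now()
      FROM incoming i
     WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.code = i.code);

Whether or not the statements are rewritten this way, running each batch inside a single transaction (as suggested earlier in the thread) means one commit, and so one fsync, per batch rather than one per row, which matters a great deal on a single SATA drive.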
[ { "msg_contents": "\nDear Friends,\n I have a table with 50 lakhs records, the table has more then 10\nfields, i have primary key, i have select query with count(*) without any\ncondition, it takes 17 seconds.\n\n I have another one query which will do joins with other small tables, it\ntakes 47 seconds to give output, the result has 2 lakhs records. the\nindexing is not used. I have created one index with one field ( which i\nused in this query, the field value has duplicates also ).\n\n Now the cpu usage is 4.4%, and waiting for io process .\n\n i have checked the above in the top command.\n\n If i delete that index, then it works fine, now the cpu usage takes more\nthen 70%, and waiting shows less then 30%.\n\nCan Anyone explain, Why postgres behaves like this.\n\n\n-- \nView this message in context: http://www.nabble.com/postgresql-performance-tp15847165p15847165.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 5 Mar 2008 02:27:08 -0800 (PST)", "msg_from": "SPMLINGAM <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql performance" }, { "msg_contents": "On Wed, Mar 05, 2008 at 02:27:08AM -0800, SPMLINGAM wrote:\n> I have a table with 50 lakhs records, the table has more then 10\n> fields, i have primary key, i have select query with count(*) without any\n> condition, it takes 17 seconds.\n\nWithout knowing what a \"lakhs\" record is, it's pretty obvious that you\nhaven't vacuumed in a very long time. Run VACUUM FULL on your tables, then\ninstate regular (non-FULL) VACUUMs or enable autovacuum.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 5 Mar 2008 11:39:45 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance" }, { "msg_contents": "Hi,\n\nLe mercredi 05 mars 2008 à 11:39 +0100, Steinar H. Gunderson a écrit :\n\n> Without knowing what a \"lakhs\" record is, \n\nI had the same question... and Wikipedia gave me the answer : it is an\nIndian word meaning 10^5, often used in indian english.\n\nFranck\n\n\n", "msg_date": "Wed, 05 Mar 2008 11:52:26 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance" }, { "msg_contents": "> > Without knowing what a \"lakhs\" record is,\n>\n> I had the same question... and Wikipedia gave me the answer : it is an\n> Indian word meaning 10^5, often used in indian english.\n\nThank you (both OP and this post) for enlightening us with this word.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Wed, 5 Mar 2008 12:03:39 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance" }, { "msg_contents": "> -----Original Message-----\n> From: SPMLINGAM\n> Subject: [PERFORM] postgresql performance\n> \n> Dear Friends,\n> I have a table with 50 lakhs records, the table has more \n> then 10 fields, i have primary key, i have select query with \n> count(*) without any condition, it takes 17 seconds.\n\n17 seconds to scan 5 million records doesn't sound that bad to me.\nPostgresql does not store a count of records, and so it has to actually scan\nthe table to count all the records. This was a design choice because select\ncount(*) isn't usually used in a production system. 
\n\n\n> I have another one query which will do joins with other \n> small tables, it takes 47 seconds to give output, the result \n> has 2 lakhs records. the indexing is not used. I have \n> created one index with one field ( which i used in this \n> query, the field value has duplicates also ).\n\nYou should post which version of Postgresql you are using, your table\ndefinition, and the output of EXPLAIN ANALYSE run on your query. If you\nhave a lot of IO wait, you are most likely IO bound. When Postgresql is\nusing a lot of CPU it is likely performing a sort or hashing. Pulling a\nlarge number of rows out of an even larger table can be difficult to do\nextremely quickly, but if you post the EXPLAIN ANALYZE output we would know\nif things could be improved or not.\n\nDave\n \n\n", "msg_date": "Wed, 5 Mar 2008 08:09:39 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance" }, { "msg_contents": ">>> On Wed, Mar 5, 2008 at 4:39 AM, in message <[email protected]>,\n\"Steinar H. Gunderson\" <[email protected]> wrote: \n \n> it's pretty obvious that you\n> haven't vacuumed in a very long time. Run VACUUM FULL on your tables\n \nIf you use VACUUM FULL, you should probably throw in ANALYZE with\nit, and REINDEX, too. An alternative that is probably faster, but\nwhich requires that you have enough free space for a temporary\nadditional copy of the data, is to CLUSTER the bloated tables,\nwhich automatically takes care of the indexes, but requires a\nsubsequent ANALYZE.\n \n> regular (non-FULL) VACUUMs or enable autovacuum.\n \nAbsolutely!\n \n-Kevin\n \n\n\n", "msg_date": "Wed, 05 Mar 2008 09:46:05 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance" }, { "msg_contents": "In response to \"Dave Dutcher\" <[email protected]>:\n\n> > -----Original Message-----\n> > From: SPMLINGAM\n> > Subject: [PERFORM] postgresql performance\n> > \n> > Dear Friends,\n> > I have a table with 50 lakhs records, the table has more \n> > then 10 fields, i have primary key, i have select query with \n> > count(*) without any condition, it takes 17 seconds.\n> \n> 17 seconds to scan 5 million records doesn't sound that bad to me.\n> Postgresql does not store a count of records, and so it has to actually scan\n> the table to count all the records. This was a design choice because select\n> count(*) isn't usually used in a production system. \n\nNote that if you need a fast count of the number of rows in a large\ntable, there are known workarounds to get it. Such as creating triggers\nthat update a count column, or using explain to get a quick estimate of\nthe number of rows (if that's acceptable).\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. 
Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 5 Mar 2008 11:00:13 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql performance" } ]
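Of the count(*) workarounds mentioned above, the cheapest one to try is reading the row-count estimate kept for the planner (the same figure the EXPLAIN trick exposes) straight from the catalog. It is only an approximation and is only as fresh as the last ANALYZE or autovacuum run; the table name below is a placeholder, not one taken from the thread.

    -- fast, approximate row count taken from the planner's statistics
    SELECT reltuples::bigint AS estimated_rows
      FROM pg_class
     WHERE relname = 'mytable'      -- placeholder table name
       AND relkind = 'r';

The other workaround, a trigger-maintained counter table, gives an exact figure but adds overhead to every insert and delete, so it is usually reserved for tables where an exact live count is genuinely needed.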
[ { "msg_contents": "Below I have two almost identical queries. Strangely enough the one \nthat uses the index is slower ???\n\nexplain analyze select uid from user_profile where \nlower(firstname)='angie' and extract(year from age('2008-02-26 \n02:50:31.382', dob)) >= 18 and extract(year from age('2008-02-26 \n02:50:31.382', dob)) <= 68 and image1 is not null and profileprivacy=1 \nand isactive='t' order by name asc limit 250;\n QUERY \n PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=166423.90..166423.93 rows=11 width=17) (actual \ntime=1033.634..1034.137 rows=129 loops=1)\n -> Sort (cost=166423.90..166423.93 rows=11 width=17) (actual \ntime=1033.631..1033.811 rows=129 loops=1)\n Sort Key: name\n -> Seq Scan on user_profile (cost=0.00..166423.71 rows=11 \nwidth=17) (actual time=46.730..1032.994 rows=129 loops=1)\n Filter: ((lower((firstname)::text) = 'angie'::text) \nAND (date_part('year'::text, age('2008-02-26 02:50:31.382'::timestamp \nwithout time zone, dob)) >= 18::double precision) AND \n(date_part('year'::text, age('2008-02-26 02:50:31.382'::timestamp \nwithout time zone, dob)) <= 68::double precision) AND (image1 IS NOT \nNULL) AND (profileprivacy = 1) AND isactive)\n Total runtime: 1034.334 ms\n(6 rows)\n\njnj=# explain analyze select uid from user_profile where \nlower(firstname)='angie' and dob <= '1990-03-05 15:17:29.537' and dob \n >= '1940-03-05 15:17:29.537' and image1 is not null and \nprofileprivacy=1 and isactive='t' order by name asc limit 250;\n QUERY \n PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..113963.92 rows=250 width=17) (actual \ntime=230.326..4688.607 rows=129 loops=1)\n -> Index Scan using user_profile_name_key on user_profile \n(cost=0.00..460414.23 rows=1010 width=17) (actual \ntime=230.322..4688.174 rows=129 loops=1)\n Filter: ((lower((firstname)::text) = 'angie'::text) AND (dob \n<= '1990-03-05 15:17:29.537'::timestamp without time zone) AND (dob >= \n'1940-03-05 15:17:29.537'::timestamp without time zone) AND (image1 IS \nNOT NULL) AND (profileprivacy = 1) AND isactive)\n Total runtime: 4688.906 ms\n(4 rows)\n", "msg_date": "Wed, 5 Mar 2008 19:27:13 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Why the difference in plans ?" }, { "msg_contents": "Dave,\n\n> Below I have two almost identical queries. Strangely enough the one\n> that uses the index is slower ???\n\nMy first guess would be that records are highly correlated by DOB and not at \nall by name. However, it would help if you supplied both the index \ndefinitions and what changed between the two queries to cause the index to be \nused.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 6 Mar 2008 09:26:38 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the difference in plans ?" 
}, { "msg_contents": "\nOn 6-Mar-08, at 12:26 PM, Josh Berkus wrote:\n\n> Dave,\n>\n>> Below I have two almost identical queries. Strangely enough the one\n>> that uses the index is slower ???\n>\n> My first guess would be that records are highly correlated by DOB \n> and not at\n> all by name. However, it would help if you supplied both the index\n> definitions and what changed between the two queries to cause the \n> index to be\n> used.\n\nThe two queries were run 2 seconds apart, there were no changes \nbetween. I'll get the index definitions.\n\nDave\n>\n>\n> -- \n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n\n", "msg_date": "Thu, 6 Mar 2008 13:56:19 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why the difference in plans ?" }, { "msg_contents": "Dave Cramer wrote:\n> I have two almost identical queries. Strangely enough the one \n> that uses the index is slower ???\n\nThe index scan is being used so that it can retrieve the rows in the name order.\nIt expects that if it was to retrieve every row via the index, it would get about 1010 rows that matched the filter, and it knows it can stop after 250, so assuming the matching rows are evenly distributed it thinks it can stop after having read only a quarter of the rows.\n\nHowever only 129 rows matched. Consequently it had to read every row in the table anyway, seeking a fair bit as the read order was specified by the index rather than in sequential order, and it also had to read the index. These extra costs were much larger than reading the lot sequentially, and sorting 129 resulting rows.\n\nThe first query picked a sequential scan as it thought it was only going to get 11 results, so was expecting that the limit wasn't going to come into play, and that every row would have to be read anyway.\n\nRegards,\nStephen Denne.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Fri, 7 Mar 2008 11:10:49 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the difference in plans ?" }, { "msg_contents": "\nOn 6-Mar-08, at 5:10 PM, Stephen Denne wrote:\n\n> Dave Cramer wrote:\n>> I have two almost identical queries. Strangely enough the one\n>> that uses the index is slower ???\n>\n> The index scan is being used so that it can retrieve the rows in the \n> name order.\n> It expects that if it was to retrieve every row via the index, it \n> would get about 1010 rows that matched the filter, and it knows it \n> can stop after 250, so assuming the matching rows are evenly \n> distributed it thinks it can stop after having read only a quarter \n> of the rows.\n>\n> However only 129 rows matched. 
Consequently it had to read every row \n> in the table anyway, seeking a fair bit as the read order was \n> specified by the index rather than in sequential order, and it also \n> had to read the index. These extra costs were much larger than \n> reading the lot sequentially, and sorting 129 resulting rows.\n>\n> The first query picked a sequential scan as it thought it was only \n> going to get 11 results, so was expecting that the limit wasn't \n> going to come into play, and that every row would have to be read \n> anyway.\n>\nThe strange thing of course is that the data is exactly the same for \nboth runs, the tables have not been changed between runs, and I did \nthem right after another. Even more strange is that the seq scan is \nfaster than the index scan.\n\nDave\n> Regards,\n> Stephen Denne.\n>\n> Disclaimer:\n> At the Datamail Group we value team commitment, respect, \n> achievement, customer focus, and courage. This email with any \n> attachments is confidential and may be subject to legal privilege. \n> If it is not intended for you please advise by reply immediately, \n> destroy it and do not copy, disclose or use it in any way.\n>\n> __________________________________________________________________\n> This email has been scanned by the DMZGlobal Business Quality\n> Electronic Messaging Suite.\n> Please see http://www.dmzglobal.com/services/bqem.htm for details.\n> __________________________________________________________________\n>\n>\n\n", "msg_date": "Thu, 6 Mar 2008 21:16:02 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why the difference in plans ?" }, { "msg_contents": "> The strange thing of course is that the data is exactly the same for \n> both runs, the tables have not been changed between runs, and I did \n> them right after another. Even more strange is that the seq scan is \n> faster than the index scan.\n\nIt is not strange at all, since both queries read ALL the rows in your table, checking each and every row to see whether it matched your predicates.\n\nThe sequential scan read them in the order they are on the disk, meaning your disk didn't have to seek as much (assuming low file fragmentation).\n\nThe index scan again reads all the rows in your table, but reads them in the order they were in the index, which is probably quite different from the order that they are on the disk, so the disk had to seek a lot. In addition, it had to read the index.\n\nTaking some wild guesses about the distribution of your data, I'd hazard a guess that this specific query could be sped up a great deal by creating an index on lower(firstname).\n\nRegards,\nStephen.\n\nDisclaimer:\nAt the Datamail Group we value team commitment, respect, achievement, customer focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by reply immediately, destroy it and do not copy, disclose or use it in any way.\n\n__________________________________________________________________\n This email has been scanned by the DMZGlobal Business Quality \n Electronic Messaging Suite.\nPlease see http://www.dmzglobal.com/services/bqem.htm for details.\n__________________________________________________________________\n\n\n", "msg_date": "Fri, 7 Mar 2008 15:30:41 +1300", "msg_from": "\"Stephen Denne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the difference in plans ?" 
}, { "msg_contents": "Josh,\n\nOn 6-Mar-08, at 12:26 PM, Josh Berkus wrote:\n\n> Dave,\n>\n>> Below I have two almost identical queries. Strangely enough the one\n>> that uses the index is slower ???\n>\n> My first guess would be that records are highly correlated by DOB \n> and not at\n> all by name. However, it would help if you supplied both the index\n> definitions and what changed between the two queries to cause the \n> index to be\n> used.\n\nIndexes:\n \"user_profile_pkey\" PRIMARY KEY, btree (uid) CLUSTER\n \"user_profile_name_idx\" UNIQUE, btree (name varchar_pattern_ops)\n \"user_profile_name_key\" UNIQUE, btree (name)\n \"user_profile_uploadcode_key\" UNIQUE, btree (uploadcode)\n \"user_profile_active_idx\" btree (isactive)\n \"user_profile_areacode_index\" btree (areacode)\n \"user_profile_gender_idx\" btree (gender)\n\nand nothing changed between runs.\n\nDave\n>\n>\n> -- \n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n\n", "msg_date": "Fri, 7 Mar 2008 06:21:03 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why the difference in plans ?" }, { "msg_contents": "\nOn 6-Mar-08, at 9:30 PM, Stephen Denne wrote:\n\n>> The strange thing of course is that the data is exactly the same for\n>> both runs, the tables have not been changed between runs, and I did\n>> them right after another. Even more strange is that the seq scan is\n>> faster than the index scan.\n>\n> It is not strange at all, since both queries read ALL the rows in \n> your table, checking each and every row to see whether it matched \n> your predicates.\n>\n> The sequential scan read them in the order they are on the disk, \n> meaning your disk didn't have to seek as much (assuming low file \n> fragmentation).\n>\n> The index scan again reads all the rows in your table, but reads \n> them in the order they were in the index, which is probably quite \n> different from the order that they are on the disk, so the disk had \n> to seek a lot. In addition, it had to read the index.\n>\nOK, that makes sense.\n\nSo given that the predicates are essentially the same why would the \nplanner decide to use or not use the index ?\n\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 7 Mar 2008 06:21:56 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why the difference in plans ?" }, { "msg_contents": "Dave,\n> \"user_profile_pkey\" PRIMARY KEY, btree (uid) CLUSTER\n> \"user_profile_name_idx\" UNIQUE, btree (name varchar_pattern_ops)\n> \"user_profile_name_key\" UNIQUE, btree (name)\n> \"user_profile_uploadcode_key\" UNIQUE, btree (uploadcode)\n> \"user_profile_active_idx\" btree (isactive)\n> \"user_profile_areacode_index\" btree (areacode)\n> \"user_profile_gender_idx\" btree (gender)\n\nYou need to change one of the name indexes to a functional index on \nlower(firstname). 
That'll speed the query up considerably.\n\nI'm still puzzled as to why the index is being used at all in the 2nd \nquery, as it seems very unlikely to work out, but the above is the \npractical solution to your problem.\n\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 7 Mar 2008 17:23:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the difference in plans ?" } ]
[ { "msg_contents": "OI6PQO5P\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nOI6PQO5P-- With Best Regards,Petchimuthulingam S", "msg_date": "Thu, 6 Mar 2008 10:18:48 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?Confirma=E7=E3o_de_envio_/_Sending_con?=\n\t=?ISO-8859-1?Q?firmation_(captchaid:13266b1124bc)?=" } ]
[ { "msg_contents": "R9PKE431\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nR9PKE431-- With Best Regards,Petchimuthulingam S", "msg_date": "Thu, 6 Mar 2008 10:24:25 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?Confirma=E7=E3o_de_envio_/_Sending_con?=\n\t=?ISO-8859-1?Q?firmation_(captchaid:13266b203c28)?=" } ]
[ { "msg_contents": "HZ5DHKQJ\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nHZ5DHKQJ-- With Best Regards,Petchimuthulingam S", "msg_date": "Thu, 6 Mar 2008 10:38:49 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?Confirma=E7=E3o_de_envio_/_Sending_con?=\n\t=?ISO-8859-1?Q?firmation_(captchaid:13266b203e23)?=" } ]
[ { "msg_contents": "C5BK4513\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nC5BK4513-- With Best Regards,Petchimuthulingam S", "msg_date": "Thu, 6 Mar 2008 10:47:39 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?Confirma=E7=E3o_de_envio_/_Sending_con?=\n\t=?ISO-8859-1?Q?firmation_(captchaid:13266b20536d)?=" }, { "msg_contents": "petchimuthu lingam wrote:\n> C5BK4513\n\nAhh - you are sending this to the wrong address, these are not being \nsent by the postgres mailing list.\n\nCheck which address you are replying to next time...\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 06 Mar 2008 16:46:15 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: =?ISO-8859-1?Q?Confirma=E7=E3o_de_envio_/_?=\n\t=?ISO-8859-1?Q?Sending_confirmation_=28captchaid=3A13266b20536d=29?=" } ]
[ { "msg_contents": "count(*) tooks much time...\n\nbut with the where clause we can make this to use indexing,... what where\nclause we can use??\n\nAm using postgres 7.4 in Debian OS with 1 GB RAM,\n\nam having a table with nearly 50 lakh records,\n\nit has more than 15 columns, i want to count how many records are there, it\nis taking nearly 17 seconds to do that...\n\ni know that to get a approximate count we can use\n SELECT reltuples FROM pg_class where relname = TABLENAME;\n\nbut this give approximate count, and i require exact count...\n\ncount(*) tooks much time...but with the where clause we can make this to use indexing,... what where clause we can use??Am using postgres 7.4 in Debian OS with 1 GB RAM, am having a table with nearly 50 lakh records, \nit has more than 15 columns, i want to count how many records are there, it is taking nearly 17 seconds to do that...i know that to get a approximate count we can use         SELECT reltuples FROM pg_class where relname = TABLENAME;\nbut this give approximate count, and i require exact count...", "msg_date": "Thu, 6 Mar 2008 11:13:01 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "count * performance issue" }, { "msg_contents": "sathiya psql wrote:\n> count(*) tooks much time...\n> \n> but with the where clause we can make this to use indexing,... what \n> where clause we can use??\n> \n> Am using postgres 7.4 in Debian OS with 1 GB RAM,\n> \n> am having a table with nearly 50 lakh records,\n\nLooks suspiciously like a question asked yesterday:\n\nhttp://archives.postgresql.org/pgsql-performance/2008-03/msg00068.php\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Thu, 06 Mar 2008 16:48:19 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "am Thu, dem 06.03.2008, um 11:13:01 +0530 mailte sathiya psql folgendes:\n> count(*) tooks much time...\n> \n> but with the where clause we can make this to use indexing,... what where\n> clause we can use??\n\nAn index without a WHERE can't help to avoid a seq. scan.\n\n\n> \n> Am using postgres 7.4 in Debian OS with 1 GB RAM,\n\nPG 7.4 are very old... Recent versions are MUCH faster.\n\n\n\n> \n> am having a table with nearly 50 lakh records,\n> \n> it has more than 15 columns, i want to count how many records are there, it is\n> taking nearly 17 seconds to do that...\n> \n> i know that to get a approximate count we can use\n> SELECT reltuples FROM pg_class where relname = TABLENAME;\n> \n> but this give approximate count, and i require exact count...\n\nThere aren't a general solution. If you realy need the exact count of\ntuples than you can play with a TRIGGER and increase/decrease the\ntuple-count for this table in an extra table.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 6 Mar 2008 07:08:29 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, Mar 6, 2008 at 5:08 PM, A. 
Kretschmer <\[email protected]> wrote:>\n\n> > am having a table with nearly 50 lakh records,\n> >\n> > it has more than 15 columns, i want to count how many records are there,\n> it is\n> > taking nearly 17 seconds to do that...\n> >\n> > i know that to get a approximate count we can use\n> > SELECT reltuples FROM pg_class where relname = TABLENAME;\n> >\n> > but this give approximate count, and i require exact count...\n>\n> There aren't a general solution. If you realy need the exact count of\n> tuples than you can play with a TRIGGER and increase/decrease the\n> tuple-count for this table in an extra table.\n>\n>\n>\nOr do something like:\n\nANALYZE tablename;\nselect reltuple from pg_class where relname = 'tablename';\n\nThat will also return the total number of rows in a table and I guess might\nbe much faster then doing a count(*) but yes if trigger can be an option\nthat can be the easiest way to do it and fastest too.\n\n-- \nShoaib Mir\nFujitsu Australia Software Technology\nshoaibm[@]fast.fujitsu.com.au\n\nOn Thu, Mar 6, 2008 at 5:08 PM, A. Kretschmer <[email protected]> wrote:>\n\n> am having a table with nearly 50 lakh records,\n>\n> it has more than 15 columns, i want to count how many records are there, it is\n> taking nearly 17 seconds to do that...\n>\n> i know that to get a approximate count we can use\n>          SELECT reltuples FROM pg_class where relname = TABLENAME;\n>\n> but this give approximate count, and i require exact count...\n\nThere aren't a general solution. If you realy need the exact count of\ntuples than you can play with a TRIGGER and increase/decrease the\ntuple-count for this table in an extra table.\n\nOr do something like:ANALYZE tablename;select reltuple from pg_class where relname = 'tablename';That will also return the total number of rows in a table and I guess might be much faster then doing a count(*) but yes if trigger can be an option that can be the easiest way to do it and fastest too.\n-- Shoaib MirFujitsu Australia Software Technologyshoaibm[@]fast.fujitsu.com.au", "msg_date": "Thu, 6 Mar 2008 17:15:06 +1100", "msg_from": "\"Shoaib Mir\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "buy every time i need to put ANALYZE...\nthis takes the same time as count(*) takes, what is the use ??\n\nOn Thu, Mar 6, 2008 at 11:45 AM, Shoaib Mir <[email protected]> wrote:\n\n> On Thu, Mar 6, 2008 at 5:08 PM, A. Kretschmer <\n> [email protected]> wrote:>\n>\n> > > am having a table with nearly 50 lakh records,\n> > >\n> > > it has more than 15 columns, i want to count how many records are\n> > there, it is\n> > > taking nearly 17 seconds to do that...\n> > >\n> > > i know that to get a approximate count we can use\n> > > SELECT reltuples FROM pg_class where relname = TABLENAME;\n> > >\n> > > but this give approximate count, and i require exact count...\n> >\n> > There aren't a general solution. 
If you realy need the exact count of\n> > tuples than you can play with a TRIGGER and increase/decrease the\n> > tuple-count for this table in an extra table.\n> >\n> >\n> >\n> Or do something like:\n>\n> ANALYZE tablename;\n> select reltuple from pg_class where relname = 'tablename';\n>\n> That will also return the total number of rows in a table and I guess\n> might be much faster then doing a count(*) but yes if trigger can be an\n> option that can be the easiest way to do it and fastest too.\n>\n> --\n> Shoaib Mir\n> Fujitsu Australia Software Technology\n> shoaibm[@]fast.fujitsu.com.au\n\nbuy every time i need to put ANALYZE...this takes the same time as count(*) takes, what is the use ??On Thu, Mar 6, 2008 at 11:45 AM, Shoaib Mir <[email protected]> wrote:\nOn Thu, Mar 6, 2008 at 5:08 PM, A. Kretschmer <[email protected]> wrote:>\n\n\n> am having a table with nearly 50 lakh records,\n>\n> it has more than 15 columns, i want to count how many records are there, it is\n> taking nearly 17 seconds to do that...\n>\n> i know that to get a approximate count we can use\n>          SELECT reltuples FROM pg_class where relname = TABLENAME;\n>\n> but this give approximate count, and i require exact count...\n\nThere aren't a general solution. If you realy need the exact count of\ntuples than you can play with a TRIGGER and increase/decrease the\ntuple-count for this table in an extra table.\n\nOr do something like:ANALYZE tablename;select reltuple from pg_class where relname = 'tablename';That will also return the total number of rows in a table and I guess might be much faster then doing a count(*) but yes if trigger can be an option that can be the easiest way to do it and fastest too.\n\n-- Shoaib MirFujitsu Australia Software Technologyshoaibm[@]fast.fujitsu.com.au", "msg_date": "Thu, 6 Mar 2008 11:49:08 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, Mar 6, 2008 at 5:19 PM, sathiya psql <[email protected]> wrote:\n\n> buy every time i need to put ANALYZE...\n> this takes the same time as count(*) takes, what is the use ??\n>\n>\n>\nDont you have autovacuuming running in the background which is taking care\nof the analyze as well?\n\nIf not then hmm turn it on and doing manual analyze then shouldnt I guess\ntake much time!\n\nBut yes, I will say if its possible go with the trigger option as that might\nbe more helpful and a very fast way to do that.\n\n-- \nShoaib Mir\nFujitsu Australia Software Technology\nshoaibm[@]fast.fujitsu.com.au\n\nOn Thu, Mar 6, 2008 at 5:19 PM, sathiya psql <[email protected]> wrote:\nbuy every time i need to put ANALYZE...this takes the same time as count(*) takes, what is the use ??Dont you have autovacuuming running in the background which is taking care of the analyze as well?\nIf not then hmm turn it on and doing manual analyze then shouldnt I guess take much time!But yes, I will say if its possible go with the trigger option as that might be more helpful and a very fast way to do that.\n-- Shoaib MirFujitsu Australia Software Technologyshoaibm[@]fast.fujitsu.com.au", "msg_date": "Thu, 6 Mar 2008 17:26:07 +1100", "msg_from": "\"Shoaib Mir\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": ">\n> There aren't a general solution. 
If you realy need the exact count of\n> tuples than you can play with a TRIGGER and increase/decrease the\n> tuple-count for this table in an extra table.\n>\n\nOf course, this means accepting the cost of obtaining update locks on \nthe count table.\n\nThe original poster should understand that they can either get a fast \nestimated count, or they can get a slow accurate count (either slow in \nterms of select using count(*) or slow in terms of updates using \ntriggers and locking).\n\nOther systems have their own issues. An index scan may be faster than a \ntable scan for databases that can accurately determine counts using only \nthe index, but it's still a relatively slow operation, and people don't \nnormally need an accurate count for records in the range of 100,000+? :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\n  \n\n\nThere\naren't a general solution. If you realy need the exact count of\ntuples than you can play with a TRIGGER and increase/decrease the\ntuple-count for this table in an extra table.\n\n\n\n\nOf course, this means accepting the cost of obtaining update locks on\nthe count table.\n\nThe original poster should understand that they can either get a fast\nestimated count, or they can get a slow accurate count (either slow in\nterms of select using count(*) or slow in terms of updates using\ntriggers and locking).\n\nOther systems have their own issues. An index scan may be faster than a\ntable scan for databases that can accurately determine counts using\nonly the index, but it's still a relatively slow operation, and people\ndon't normally need an accurate count for records in the range of\n100,000+? :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Thu, 06 Mar 2008 01:26:46 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "will you please tell, what is autovacuuming... and wat it ll do... is there\nany good article in this....\n\nOn Thu, Mar 6, 2008 at 11:56 AM, Shoaib Mir <[email protected]> wrote:\n\n> On Thu, Mar 6, 2008 at 5:19 PM, sathiya psql <[email protected]>\n> wrote:\n>\n> > buy every time i need to put ANALYZE...\n> > this takes the same time as count(*) takes, what is the use ??\n> >\n> >\n> >\n> Dont you have autovacuuming running in the background which is taking care\n> of the analyze as well?\n>\n> If not then hmm turn it on and doing manual analyze then shouldnt I guess\n> take much time!\n>\n> But yes, I will say if its possible go with the trigger option as that\n> might be more helpful and a very fast way to do that.\n>\n>\n> --\n> Shoaib Mir\n> Fujitsu Australia Software Technology\n> shoaibm[@]fast.fujitsu.com.au\n>\n\nwill you please tell, what is autovacuuming... and wat it ll do... 
is there any good article in this....On Thu, Mar 6, 2008 at 11:56 AM, Shoaib Mir <[email protected]> wrote:\nOn Thu, Mar 6, 2008 at 5:19 PM, sathiya psql <[email protected]> wrote:\n\nbuy every time i need to put ANALYZE...this takes the same time as count(*) takes, what is the use ??Dont you have autovacuuming running in the background which is taking care of the analyze as well?\nIf not then hmm turn it on and doing manual analyze then shouldnt I guess take much time!But yes, I will say if its possible go with the trigger option as that might be more helpful and a very fast way to do that.\n\n-- Shoaib MirFujitsu Australia Software Technologyshoaibm[@]fast.fujitsu.com.au", "msg_date": "Thu, 6 Mar 2008 12:01:09 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, Mar 6, 2008 at 5:31 PM, sathiya psql <[email protected]> wrote:\n\n> will you please tell, what is autovacuuming... and wat it ll do... is\n> there any good article in this....\n>\n>\n>\nRead this -->\nhttp://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#AUTOVACUUM\n\n-- \nShoaib Mir\nFujitsu Australia Software Technology\nshoaibm[@]fast.fujitsu.com.au\n\nOn Thu, Mar 6, 2008 at 5:31 PM, sathiya psql <[email protected]> wrote:\nwill you please tell, what is autovacuuming... and wat it ll do... is there any good article in this....\nRead this --> http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#AUTOVACUUM\n-- Shoaib MirFujitsu Australia Software Technologyshoaibm[@]fast.fujitsu.com.au", "msg_date": "Thu, 6 Mar 2008 17:32:50 +1100", "msg_from": "\"Shoaib Mir\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "am Thu, dem 06.03.2008, um 1:26:46 -0500 mailte Mark Mielke folgendes:\n> \n> \n> There aren't a general solution. If you realy need the exact count of\n> tuples than you can play with a TRIGGER and increase/decrease the\n> tuple-count for this table in an extra table.\n> \n> \n> Of course, this means accepting the cost of obtaining update locks on the count\n> table.\n> \n> The original poster should understand that they can either get a fast estimated\n> count, or they can get a slow accurate count (either slow in terms of select\n> using count(*) or slow in terms of updates using triggers and locking).\n> \n> Other systems have their own issues. An index scan may be faster than a table\n> scan for databases that can accurately determine counts using only the index,\n\nNo. The current index-implementation contains no information about the\nrow-visibility within the current transaction. You need to scan the\nwhole data-table to obtain if the current row are visible within the\ncurrent transaction.\n\n\n> but it's still a relatively slow operation, and people don't normally need an\n> accurate count for records in the range of 100,000+? :-)\n\nright.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 6 Mar 2008 07:36:44 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "is there any way to explicitly force the postgres to use index scan\n\nOn Thu, Mar 6, 2008 at 12:06 PM, A. 
Kretschmer <\[email protected]> wrote:\n\n> am Thu, dem 06.03.2008, um 1:26:46 -0500 mailte Mark Mielke folgendes:\n> >\n> >\n> > There aren't a general solution. If you realy need the exact\n> count of\n> > tuples than you can play with a TRIGGER and increase/decrease\n> the\n> > tuple-count for this table in an extra table.\n> >\n> >\n> > Of course, this means accepting the cost of obtaining update locks on\n> the count\n> > table.\n> >\n> > The original poster should understand that they can either get a fast\n> estimated\n> > count, or they can get a slow accurate count (either slow in terms of\n> select\n> > using count(*) or slow in terms of updates using triggers and locking).\n> >\n> > Other systems have their own issues. An index scan may be faster than a\n> table\n> > scan for databases that can accurately determine counts using only the\n> index,\n>\n> No. The current index-implementation contains no information about the\n> row-visibility within the current transaction. You need to scan the\n> whole data-table to obtain if the current row are visible within the\n> current transaction.\n>\n>\n> > but it's still a relatively slow operation, and people don't normally\n> need an\n> > accurate count for records in the range of 100,000+? :-)\n>\n> right.\n>\n>\n> Andreas\n> --\n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n>\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n\nis there any way to explicitly force the postgres to use index scanOn Thu, Mar 6, 2008 at 12:06 PM, A. Kretschmer <[email protected]> wrote:\nam  Thu, dem 06.03.2008, um  1:26:46 -0500 mailte Mark Mielke folgendes:\n>\n>\n>         There aren't a general solution. If you realy need the exact count of\n>         tuples than you can play with a TRIGGER and increase/decrease the\n>         tuple-count for this table in an extra table.\n>\n>\n> Of course, this means accepting the cost of obtaining update locks on the count\n> table.\n>\n> The original poster should understand that they can either get a fast estimated\n> count, or they can get a slow accurate count (either slow in terms of select\n> using count(*) or slow in terms of updates using triggers and locking).\n>\n> Other systems have their own issues. An index scan may be faster than a table\n> scan for databases that can accurately determine counts using only the index,\n\nNo. The current index-implementation contains no information about the\nrow-visibility within the current transaction. You need to scan the\nwhole data-table to obtain if the current row are visible within the\ncurrent transaction.\n\n\n> but it's still a relatively slow operation, and people don't normally need an\n> accurate count for records in the range of 100,000+? 
:-)\n\nright.\n\n\nAndreas\n--\nAndreas Kretschmer\nKontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance", "msg_date": "Thu, 6 Mar 2008 12:13:17 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "---------- Forwarded message ----------\nFrom: sathiya psql <[email protected]>\nDate: Thu, Mar 6, 2008 at 12:17 PM\nSubject: Re: [PERFORM] count * performance issue\nTo: \"A. Kretschmer\" <[email protected]>\nCc: [email protected]\n\n\nTRIGGER i can use if i want the count of the whole table, but i require for\nsome of the rows with WHERE condition....\n\nso how to do that ???\n\n\nOn Thu, Mar 6, 2008 at 12:06 PM, A. Kretschmer <\[email protected]> wrote:\n\n> am Thu, dem 06.03.2008, um 1:26:46 -0500 mailte Mark Mielke folgendes:\n> >\n> >\n> > There aren't a general solution. If you realy need the exact\n> count of\n> > tuples than you can play with a TRIGGER and increase/decrease\n> the\n> > tuple-count for this table in an extra table.\n> >\n> >\n> > Of course, this means accepting the cost of obtaining update locks on\n> the count\n> > table.\n> >\n> > The original poster should understand that they can either get a fast\n> estimated\n> > count, or they can get a slow accurate count (either slow in terms of\n> select\n> > using count(*) or slow in terms of updates using triggers and locking).\n> >\n> > Other systems have their own issues. An index scan may be faster than a\n> table\n> > scan for databases that can accurately determine counts using only the\n> index,\n>\n> No. The current index-implementation contains no information about the\n> row-visibility within the current transaction. You need to scan the\n> whole data-table to obtain if the current row are visible within the\n> current transaction.\n>\n>\n> > but it's still a relatively slow operation, and people don't normally\n> need an\n> > accurate count for records in the range of 100,000+? :-)\n>\n> right.\n>\n>\n> Andreas\n> --\n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n>\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n\n---------- Forwarded message ----------From: sathiya psql <[email protected]>Date: Thu, Mar 6, 2008 at 12:17 PM\nSubject: Re: [PERFORM] count * performance issueTo: \"A. Kretschmer\" <[email protected]>Cc: [email protected]\nTRIGGER i can use if i want the count of the whole table, but i require for some of the rows with WHERE condition....so how to do that ???\nOn Thu, Mar 6, 2008 at 12:06 PM, A. Kretschmer <[email protected]> wrote:\nam  Thu, dem 06.03.2008, um  1:26:46 -0500 mailte Mark Mielke folgendes:\n>\n>\n>         There aren't a general solution. 
If you realy need the exact count of\n>         tuples than you can play with a TRIGGER and increase/decrease the\n>         tuple-count for this table in an extra table.\n>\n>\n> Of course, this means accepting the cost of obtaining update locks on the count\n> table.\n>\n> The original poster should understand that they can either get a fast estimated\n> count, or they can get a slow accurate count (either slow in terms of select\n> using count(*) or slow in terms of updates using triggers and locking).\n>\n> Other systems have their own issues. An index scan may be faster than a table\n> scan for databases that can accurately determine counts using only the index,\n\nNo. The current index-implementation contains no information about the\nrow-visibility within the current transaction. You need to scan the\nwhole data-table to obtain if the current row are visible within the\ncurrent transaction.\n\n\n> but it's still a relatively slow operation, and people don't normally need an\n> accurate count for records in the range of 100,000+? :-)\n\nright.\n\n\nAndreas\n--\nAndreas Kretschmer\nKontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance", "msg_date": "Thu, 6 Mar 2008 12:18:25 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: count * performance issue" }, { "msg_contents": "am Thu, dem 06.03.2008, um 12:13:17 +0530 mailte sathiya psql folgendes:\n> is there any way to explicitly force the postgres to use index scan\n\nNot realy, PG use a cost-based optimizer and use an INDEX if it make\nsense.\n\n\n> \n> On Thu, Mar 6, 2008 at 12:06 PM, A. Kretschmer <\n> [email protected]> wrote:\n\nplease, no silly top-posting with the complete quote below.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 6 Mar 2008 07:54:40 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "am Thu, dem 06.03.2008, um 12:17:55 +0530 mailte sathiya psql folgendes:\n> TRIGGER i can use if i want the count of the whole table, but i require for\n> some of the rows with WHERE condition....\n> \n> so how to do that ???\n\nOkay, in this case a TRIGGER are a bad idea. You can use an INDEX on\nthis row. Can you show us the output for a EXPLAIN ANALYSE SELECT\ncount(*) from <your_table> WHERE <your_row> = ... ?\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 6 Mar 2008 07:57:50 +0100", "msg_from": "\"A. 
Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=205756.95..205756.95 rows=1 width=0) (actual time=\n114675.042..114675.042 rows=1 loops=1)\n -> Seq Scan on call_log (cost=0.00..193224.16 rows=5013112 width=0)\n(actual time=11.754..91429.594 rows=5061619 loops=1)\n Filter: (call_id > 0)\n Total runtime: 114699.797 ms\n(4 rows)\n\n\nit is now taking 114 seconds, i think because of load in my system.... any\nway will you explain., what is this COST, actual time and other stuffs....\n\nOn Thu, Mar 6, 2008 at 12:27 PM, A. Kretschmer <\[email protected]> wrote:\n\n> am Thu, dem 06.03.2008, um 12:17:55 +0530 mailte sathiya psql folgendes:\n> > TRIGGER i can use if i want the count of the whole table, but i require\n> for\n> > some of the rows with WHERE condition....\n> >\n> > so how to do that ???\n>\n> Okay, in this case a TRIGGER are a bad idea. You can use an INDEX on\n> this row. Can you show us the output for a EXPLAIN ANALYSE SELECT\n> count(*) from <your_table> WHERE <your_row> = ... ?\n>\n>\n> Andreas\n> --\n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n>\n> http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance\n>\n\n                                                          QUERY PLAN------------------------------------------------------------------------------------------------------------------------------ Aggregate  (cost=205756.95..205756.95 rows=1 width=0) (actual time=114675.042..114675.042 rows=1 loops=1)\n   ->  Seq Scan on call_log  (cost=0.00..193224.16 rows=5013112 width=0) (actual time=11.754..91429.594 rows=5061619 loops=1)         Filter: (call_id > 0) Total runtime: 114699.797 ms(4 rows)\nit is now taking 114 seconds, i think because of load in my system.... any way will you explain., what is this COST, actual time and other stuffs....On Thu, Mar 6, 2008 at 12:27 PM, A. Kretschmer <[email protected]> wrote:\nam  Thu, dem 06.03.2008, um 12:17:55 +0530 mailte sathiya psql folgendes:\n> TRIGGER i can use if i want the count of the whole table, but i require for\n> some of the rows with WHERE condition....\n>\n> so how to do that ???\n\nOkay, in this case a TRIGGER are a bad idea. You can use an INDEX on\nthis row. Can you show us the output for a EXPLAIN ANALYSE SELECT\ncount(*) from <your_table> WHERE <your_row> = ... 
?\n\n\nAndreas\n--\nAndreas Kretschmer\nKontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance", "msg_date": "Thu, 6 Mar 2008 12:36:48 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "am Thu, dem 06.03.2008, um 12:36:48 +0530 mailte sathiya psql folgendes:\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=205756.95..205756.95 rows=1 width=0) (actual time=\n> 114675.042..114675.042 rows=1 loops=1)\n> -> Seq Scan on call_log (cost=0.00..193224.16 rows=5013112 width=0)\n> (actual time=11.754..91429.594 rows=5061619 loops=1)\n> Filter: (call_id > 0)\n> Total runtime: 114699.797 ms\n> (4 rows)\n\n'call_id > 0' are your where-condition? An INDEX can't help, all rows\nwith call_id > 0 are in the result, and i guess, that's all records in\nthe table.\n\n\n> \n> \n> it is now taking 114 seconds, i think because of load in my system.... any way\n> will you explain., what is this COST, actual time and other stuffs....\n\n\n08:16 < akretschmer> ??explain\n08:16 < rtfm_please> For information about explain\n08:16 < rtfm_please> see http://explain-analyze.info\n08:16 < rtfm_please> or http://www.depesz.com/index.php/2007/08/06/better-explain-analyze/\n08:16 < rtfm_please> or http://www.postgresql.org/docs/current/static/sql-explain.html\n\nand \n\nhttp://redivi.com/~bob/oscon2005_pgsql_pdf/OSCON_Explaining_Explain_Public.pdf\n\n\nRead this to learn more about explain.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 6 Mar 2008 08:18:15 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "\nOn 6-Mar-08, at 1:43 AM, sathiya psql wrote:\n\n> is there any way to explicitly force the postgres to use index scan\n>\n>\n\nIf you want to count all the rows in the table there is only one way \nto do it (without keeping track yourself with a trigger ); a seq scan.\n\nAn index will not help you.\n\nThe only thing that is going to help you is really fast disks, and \nmore memory, and you should consider moving to 8.3 for all the other \nperformance benefits.\n\nDave \n", "msg_date": "Thu, 6 Mar 2008 06:50:17 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Hi,\n\nOn 6-Mar-08, at 6:58 AM, sathiya psql wrote:\n\n> The only thing that is going to help you is really fast disks, and\n> more memory, and you should consider moving to 8.3 for all the other\n> performance benefits.\n> Is 8.3 is a stable version or what is the latest stable version of \n> postgres ??\n>\nYes it is the latest stable version.\n> moving my database from 7.4 to 8.3 will it do any harm ??\n>\nYou will have to test this yourself. There may be issues\n> what are all the advantages of moving from 7.4 to 8.3\n>\nEvery version of postgresql has improved performance, and robustness; \nso you will get better overall performance. 
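For the exact question that started the thread, the cheap route already quoted above is the planner's estimate rather than a real count; a sketch against the call_log table from the posted plan (the figure is approximate and only as fresh as the last ANALYZE or autovacuum pass):

ANALYZE call_log;

SELECT reltuples AS estimated_rows
FROM pg_class
WHERE relname = 'call_log';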
However I want to caution \nyou this is not a panacea. It will NOT solve your seq scan problem.\n\n\n> Dave\n>\n\n\nHi,On 6-Mar-08, at 6:58 AM, sathiya psql wrote:The only thing that is going to help you is really fast disks, and more memory, and you should consider moving to 8.3 for all the other performance benefits.Is 8.3 is a stable version or what is the latest stable version of postgres ??Yes it is the latest stable version.moving my database from 7.4 to 8.3 will it do any harm ??You will have to test this yourself. There may be issues what are all the advantages of moving from 7.4 to 8.3 Every version of postgresql has improved performance, and robustness; so you will get better overall performance. However I want to caution you this is not a panacea. It will NOT solve your seq scan problem. Dave", "msg_date": "Thu, 6 Mar 2008 07:13:57 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "> Yes it is the latest stable version.\n>\n\nis there any article saying the difference between this 7.3 and 8.4\n\nYes it is the latest stable version.\nis there any article saying the difference between this 7.3 and 8.4", "msg_date": "Thu, 6 Mar 2008 18:13:50 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Of course, the official documentation covers that information in its\nrelease notes\n\nhttp://www.postgresql.org/docs/8.3/static/release.html\n\nbest wishes\n\nHarald\n\nOn Thu, Mar 6, 2008 at 1:43 PM, sathiya psql <[email protected]> wrote:\n>\n>\n> >\n> >\n> >\n> > Yes it is the latest stable version.\n>\n> is there any article saying the difference between this 7.3 and 8.4\n>\n>\n>\n\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n", "msg_date": "Thu, 6 Mar 2008 13:48:25 +0100", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "am Thu, dem 06.03.2008, um 18:13:50 +0530 mailte sathiya psql folgendes:\n> \n> Yes it is the latest stable version.\n> \n> \n> is there any article saying the difference between this 7.3 and 8.4\n\nhttp://developer.postgresql.org/pgdocs/postgres/release.html\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 6 Mar 2008 13:49:06 +0100", "msg_from": "\"A. 
Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "sathiya psql escribi�:\n> > Yes it is the latest stable version.\n> \n> is there any article saying the difference between this 7.3 and 8.4\n\nhttp://www.postgresql.org/docs/8.3/static/release.html\n\nIn particular,\nhttp://www.postgresql.org/docs/8.3/static/release-8-3.html\nhttp://www.postgresql.org/docs/8.3/static/release-8-2.html\nhttp://www.postgresql.org/docs/8.3/static/release-8-1.html\nhttp://www.postgresql.org/docs/8.3/static/release-8-0.html\nwhich are all the major releases between 7.4 and 8.3.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 6 Mar 2008 09:49:08 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "A. Kretschmer wrote:\n> am Thu, dem 06.03.2008, um 12:17:55 +0530 mailte sathiya psql folgendes:\n> \n>> TRIGGER i can use if i want the count of the whole table, but i require for\n>> some of the rows with WHERE condition....\n>>\n>> so how to do that ???\n>> \n>\n> Okay, in this case a TRIGGER are a bad idea. You can use an INDEX on\n> this row. Can you show us the output for a EXPLAIN ANALYSE SELECT\n> count(*) from <your_table> WHERE <your_row> = ... ?\n> \n\nActually - in this case, TRIGGER can be a good idea. If your count table \ncan include the where information, then you no longer require an \neffective table-wide lock for updates.\n\nIn the past I have used sequential articles numbers within a topic for \nan online community. Each topic row had an article_count. To generate a \nnew article, I could update the article_count and use the number I had \ngenerated as the article number. To query the number of articles in a \nparticular topic, article_count was available. Given thousands of \ntopics, and 10s of thousands of articles, the system worked pretty good. \nNot in the millions range as the original poster, but I saw no reason \nwhy this wouldn't scale.\n\nFor the original poster: You might be desperate and looking for help \nfrom the only place you know to get it from, but some of your recent \nanswers have shown that you are either not reading the helpful responses \nprovided to you, or you are unwilling to do your own research. If that \ncontinues, I won't be posting to aid you.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nA. Kretschmer wrote:\n\nam Thu, dem 06.03.2008, um 12:17:55 +0530 mailte sathiya psql folgendes:\n \n\nTRIGGER i can use if i want the count of the whole table, but i require for\nsome of the rows with WHERE condition....\n\nso how to do that ???\n \n\n\nOkay, in this case a TRIGGER are a bad idea. You can use an INDEX on\nthis row. Can you show us the output for a EXPLAIN ANALYSE SELECT\ncount(*) from <your_table> WHERE <your_row> = ... ?\n \n\n\nActually - in this case, TRIGGER can be a good idea. If your count\ntable can include the where information, then you no longer require an\neffective table-wide lock for updates.\n\nIn the past I have used sequential articles numbers within a topic for\nan online community. Each topic row had an article_count. To generate a\nnew article, I could update the article_count and use the number I had\ngenerated as the article number. To query the number of articles in a\nparticular topic, article_count was available. 
Given thousands of\ntopics, and 10s of thousands of articles, the system worked pretty\ngood. Not in the millions range as the original poster, but I saw no\nreason why this wouldn't scale.\n\nFor the original poster: You might be desperate and looking for help\nfrom the only place you know to get it from, but some of your recent\nanswers have shown that you are either not reading the helpful\nresponses provided to you, or you are unwilling to do your own\nresearch. If that continues, I won't be posting to aid you.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Thu, 06 Mar 2008 08:10:35 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, 6 Mar 2008, sathiya psql wrote:\n\n> is there any article saying the difference between this 7.3 and 8.4\n\nI've collected a list of everything on this topic I've seen at \nhttp://www.postgresqldocs.org/index.php/Version_8.3_Changes\n\nThe Feature Matrix linked to there will be a quicker way to see what's \nhappened than sorting through the release notes.\n\nNone of these changes change the fact that getting an exact count in this \nsituation takes either a sequential scan or triggers.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 6 Mar 2008 08:18:17 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, 6 Mar 2008, sathiya psql wrote:\n\n> any way will you explain., what is this COST, actual time and other \n> stuffs....\n\nThere's a long list of links to tools and articles on this subject at \nhttp://www.postgresqldocs.org/index.php/Using_EXPLAIN\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 6 Mar 2008 08:27:14 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "In the 3 years I've been using Postgres, the problem of count() performance has come up more times than I can recall, and each time the answer is, \"It's a sequential scan -- redesign your application.\"\n\nMy question is: What do the other databases do that Postgres can't do, and why not?\n\nCount() on Oracle and MySQL is almost instantaneous, even for very large tables. So why can't Postgres do what they do?\n\nOn the one hand, I understand that Postgres has its architecture, and I understand the issue of row visibility, and so forth. On the other hand, my database is just sitting there, nothing going on, no connections except me, and... it takes FIFTY FIVE SECONDS to count 20 million rows, a query that either Oracle or MySQL would answer in a fraction of a second. It's hard for me to believe there isn't a better way.\n\nThis is a real problem. Countless people (including me) have spent significant effort rewriting applications because of this performance flaw in Postgres. Over and over, the response is, \"You don't really need to do that ... change your application.\" Well, sure, it's always possible to change the application, but that misses the point. To most of us users, count() seems like it should be a trivial operation. 
On other relational database systems, it is a trivial operation.\n\nThis is really a significant flaw on an otherwise excellent relational database system.\n\nMy rant for today...\nCraig\n", "msg_date": "Thu, 06 Mar 2008 07:28:50 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Craig James wrote:\n> This is a real problem. Countless people (including me) have\n> spent significant effort rewriting applications because of this\n> performance flaw in Postgres. Over and over, the response is,\n> \"You don't really need to do that ... change your application.\"\n> Well, sure, it's always possible to change the application, but\n> that misses the point. To most of us users, count() seems like\n> it should be a trivial operation. On other relational database\n> systems, it is a trivial operation.\n> \n> This is really a significant flaw on an otherwise excellent\n> relational database system.\n\nHave you read the TODO items related to this?\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 6 Mar 2008 10:33:25 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, Mar 06, 2008 at 07:28:50AM -0800, Craig James wrote:\n> Count() on Oracle and MySQL is almost instantaneous, even for very large \n> tables. So why can't Postgres do what they do?\n\nIn MySQL's case: Handle transactions. (Try COUNT(*) on an InnoDB table.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n", "msg_date": "Thu, 6 Mar 2008 16:36:39 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "In response to Craig James <[email protected]>:\n\n> In the 3 years I've been using Postgres, the problem of count() performance has come up more times than I can recall, and each time the answer is, \"It's a sequential scan -- redesign your application.\"\n> \n> My question is: What do the other databases do that Postgres can't do, and why not?\n> \n> Count() on Oracle and MySQL is almost instantaneous, even for very large tables. So why can't Postgres do what they do?\n\nI don't know about Oracle, but MySQL has this problem as well. Use\ninnodb tables and see how slow it is. The only reason myisam tables\ndon't have this problem is because they don't implement any of the\nfeatures that make the problem difficult to solve.\n\n> On the one hand, I understand that Postgres has its architecture, and I understand the issue of row visibility, and so forth. On the other hand, my database is just sitting there, nothing going on, no connections except me, and... it takes FIFTY FIVE SECONDS to count 20 million rows, a query that either Oracle or MySQL would answer in a fraction of a second. It's hard for me to believe there isn't a better way.\n\nThere's been discussion about putting visibility information in indexes.\nI don't know how far along that effort is, but I expect that will improve\ncount() performance significantly.\n\n> This is a real problem. Countless people (including me) have spent significant effort rewriting applications because of this performance flaw in Postgres. Over and over, the response is, \"You don't really need to do that ... 
change your application.\" Well, sure, it's always possible to change the application, but that misses the point. To most of us users, count() seems like it should be a trivial operation. On other relational database systems, it is a trivial operation.\n> \n> This is really a significant flaw on an otherwise excellent relational database system.\n\nNot really. It really is a design flaw in your application ... it doesn't\nmake relational sense to use the number of rows in a table for anything.\nJust because other people do it frequently doesn't make it right.\n\nThat being said, it's still a useful feature, and I don't hear anyone\ndenying that. As I said, google around a bit WRT to PG storing\nvisibility information in indexes, as I think that's the way this will\nbe improved.\n\n> My rant for today...\n\nFeel better now?\n\n-- \nBill Moran\n", "msg_date": "Thu, 6 Mar 2008 10:41:20 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, 6 Mar 2008, Steinar H. Gunderson wrote:\n\n> On Thu, Mar 06, 2008 at 07:28:50AM -0800, Craig James wrote:\n>> Count() on Oracle and MySQL is almost instantaneous, even for very large\n>> tables. So why can't Postgres do what they do?\n>\n> In MySQL's case: Handle transactions. (Try COUNT(*) on an InnoDB table.)\n\nExactly. There is a good discussion of this at \nhttp://www.mysqlperformanceblog.com/2007/04/10/count-vs-countcol/ and I \nfound the comments from Ken Jacobs were the most informative.\n\nIn short, if you want any reasonable database integrity you have to use \nInnoDB with MySQL, and once you make that choice it has the same problem. \nYou only get this accelerated significantly when using MyISAM, which can \ntell you an exact count of all the rows it hasn't corrupted yet.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 6 Mar 2008 10:49:33 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, Mar 6, 2008 at 3:49 PM, Greg Smith <[email protected]> wrote:\n>\n> You only get this accelerated significantly when using MyISAM, which can\n> tell you an exact count of all the rows it hasn't corrupted yet.\n\nPlease don't do that again. I'm going to have to spend the next hour\ncleaning coffee out of my laptop keyboard.\n\n:-)\n\n-- \nDave Page\nEnterpriseDB UK Ltd: http://www.enterprisedb.com\nPostgreSQL UK 2008 Conference: http://www.postgresql.org.uk\n", "msg_date": "Thu, 6 Mar 2008 16:05:59 +0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, 2008-03-06 at 07:28 -0800, Craig James wrote:\n...\n> My question is: What do the other databases do that Postgres can't do, and why not?\n> \n> Count() on Oracle and MySQL is almost instantaneous, even for very large tables. So why can't Postgres do what they do?\n...\n\nI can vouch that Oracle can still take linear time to perform a\ncount(*), at least in some cases.\n\nI have also seen count(*) fast in some cases too... my understanding is\nthat they maintain a list of \"interested transactions\" on a per-relation\nbasis. 
Perhaps they do an optimization based on the index size if there\nare no pending DML transactions?\n\n-- Mark\n\n\n", "msg_date": "Thu, 6 Mar 2008 08:16:31 -0800", "msg_from": "\"Mark Lewis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Thu, 06 Mar 2008 07:28:50 -0800\nCraig James <[email protected]> wrote:\n> In the 3 years I've been using Postgres, the problem of count() performance has come up more times than I can recall, and each time the answer is, \"It's a sequential scan -- redesign your application.\"\n> \n> My question is: What do the other databases do that Postgres can't do, and why not?\n> \n> Count() on Oracle and MySQL is almost instantaneous, even for very large tables. So why can't Postgres do what they do?\n\nIt's a tradeoff. The only way to get that information quickly is to\nmaintain it internally when you insert or delete a row. So when do you\nwant to take your hit. It sounds like Oracle has made this decision\nfor you. In PostgreSQL you can use triggers and rules to manage this\ninformation if you need it. You can even do stuff like track how many\nof each type of something you have. That's something you can't do if\nyour database engine has done a generic speedup for you. You would\nstill have to create your own table for something like that and then\nyou get the hit twice.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 6 Mar 2008 11:31:44 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Craig James <[email protected]> writes:\n> Count() on Oracle and MySQL is almost instantaneous, even for very large tables. So why can't Postgres do what they do?\n\nAFAIK the above claim is false for Oracle. They have the same\ntransactional issues we do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Mar 2008 12:29:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue " }, { "msg_contents": "Craig James wrote:\n>\n> My question is: What do the other databases do that Postgres can't do, \n> and why not?\n>\n> Count() on Oracle and MySQL is almost instantaneous, even for very \n> large tables. So why can't Postgres do what they do?\n>\n\nI think Mysql can only do that for the myisam engine - innodb and \nfalcon are similar to Postgres.\n\nI don't believe Oracle optimizes bare count(*) on a table either - tho \nit may be able to use a suitable index (if present) to get the answer \nquicker.\n\nregards\n\nMark\n", "msg_date": "Fri, 07 Mar 2008 15:40:45 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Tom Lane wrote:\n> Craig James <[email protected]> writes:\n>> Count() on Oracle and MySQL is almost instantaneous, even for very large tables. So why can't Postgres do what they do?\n> \n> AFAIK the above claim is false for Oracle. They have the same\n> transactional issues we do.\n\nMy experience doesn't match this claim. When I ported my application from Oracle to Postgres, this was the single biggest performance problem. count() in Oracle was always very fast. 
We're not talking about a 20% or 50% difference, we're talking about a small fraction of a second (Oracle) versus a minute (Postgres) -- something like two or three orders of magnitude.\n\nIt may be that Oracle has a way to detect when there's no transaction and use a faster method. If so, this was a clever optimization -- in my experience, that represents the vast majority of the times you want to use count(). It's not very useful to count the rows of a table that many apps are actively modifying since the result may change the moment your transaction completes. Most of the time when you use count(), it's because you're the only one modifying the table, so the count will be meaningful.\n\nCraig\n\n", "msg_date": "Thu, 06 Mar 2008 19:00:17 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Craig James wrote:\n> Tom Lane wrote:\n>> Craig James <[email protected]> writes:\n>>> Count() on Oracle and MySQL is almost instantaneous, even for very \n>>> large tables. So why can't Postgres do what they do?\n>>\n>> AFAIK the above claim is false for Oracle. They have the same\n>> transactional issues we do.\n>\n> My experience doesn't match this claim. When I ported my application \n> from Oracle to Postgres, this was the single biggest performance \n> problem. count() in Oracle was always very fast. We're not talking \n> about a 20% or 50% difference, we're talking about a small fraction of \n> a second (Oracle) versus a minute (Postgres) -- something like two or \n> three orders of magnitude.\n>\n> It may be that Oracle has a way to detect when there's no transaction \n> and use a faster method. If so, this was a clever optimization -- in \n> my experience, that represents the vast majority of the times you want \n> to use count(). It's not very useful to count the rows of a table \n> that many apps are actively modifying since the result may change the \n> moment your transaction completes. Most of the time when you use \n> count(), it's because you're the only one modifying the table, so the \n> count will be meaningful.\n>\n> Craig\n>\n>\n\nOracle will use a btree index on a not null set of columns to do a fast \nfull index scan, which can be an order of magnitude or faster compared \nto a table scan. Also, Oracle can use a bitmap index (in cases where a \nbitmap index isn't otherwise silly) for a bitmap fast index scan/bitmap \nconversion for similar dramatic results. \n\nFor \"large\" tables, Oracle is not going to be as fast as MyISAM tables \nin MySQL, even with these optimizations, since MyISAM doesn't have to \nscan even index pages to get a count(*) answer against the full table.\n\nPaul\n\n\n", "msg_date": "Thu, 06 Mar 2008 21:38:48 -0800", "msg_from": "paul rivers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Craig James wrote:\n> Tom Lane wrote:\n>> Craig James <[email protected]> writes:\n>>> Count() on Oracle and MySQL is almost instantaneous, even for very \n>>> large tables. So why can't Postgres do what they do?\n>>\n>> AFAIK the above claim is false for Oracle. They have the same\n>> transactional issues we do.\n>\n> My experience doesn't match this claim. When I ported my application \n> from Oracle to Postgres, this was the single biggest performance \n> problem. count() in Oracle was always very fast. 
We're not talking \n> about a 20% or 50% difference, we're talking about a small fraction of \n> a second (Oracle) versus a minute (Postgres) -- something like two or \n> three orders of magnitude.\n>\n\nTo convince yourself do this in Oracle:\n\nEXPLAIN PLAN FOR SELECT count(*) FROM table_without_any_indexes\n\nand you will see a full table scan. If you add (suitable) indexes you'll \nsee something like an index full fast scan.\n\n\nIn fact you can make count(*) *very* slow indeed in Oracle, by having an \nolder session try to count a table that a newer session is modifying and \ncommitting to. The older session's data for the count is reconstructed \nfrom the rollback segments - which is very expensive.\n\nregards\n\nMark\n\n\n", "msg_date": "Sat, 08 Mar 2008 12:51:19 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Tom,\n\n> > Count() on Oracle and MySQL is almost instantaneous, even for very\n> > large tables. So why can't Postgres do what they do?\n>\n> AFAIK the above claim is false for Oracle. They have the same\n> transactional issues we do.\n\nNope. Oracle's MVCC is implemented through rollback segments, rather than \nnon-overwriting the way ours is. So Oracle can just do a count(*) on the \nindex, then check the rollback segment for any concurrent \nupdate/delete/insert activity and adjust the count. This sucks if there's \na *lot* of concurrent activity, but in the usual case it's pretty fast.\n\nI've been thinking that when we apply the Dead Space Map we might be able \nto get a similar effect in PostgreSQL. That is, just do a count over the \nindex, and visit only the heap pages flagged in the DSM. Again, for a \nheavily updated table this wouldn't have any benefit, but for most cases \nit would be much faster.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 7 Mar 2008 17:27:35 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Tom,\n>>> Count() on Oracle and MySQL is almost instantaneous, even for very\n>>> large tables. So why can't Postgres do what they do?\n>> \n>> AFAIK the above claim is false for Oracle. They have the same\n>> transactional issues we do.\n\n> Nope. Oracle's MVCC is implemented through rollback segments, rather than \n> non-overwriting the way ours is. So Oracle can just do a count(*) on the \n> index, then check the rollback segment for any concurrent \n> update/delete/insert activity and adjust the count. This sucks if there's \n> a *lot* of concurrent activity, but in the usual case it's pretty fast.\n\nWell, scanning an index to get a count might be significantly faster\nthan scanning the main table, but it's hardly \"instantaneous\". It's\nstill going to take time proportional to the table size.\n\nUnless they keep a central counter of the number of index entries;\nwhich would have all the same serialization penalties we've talked\nabout before...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Mar 2008 20:38:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue " }, { "msg_contents": "On Fri, 7 Mar 2008, Tom Lane wrote:\n\n> Well, scanning an index to get a count might be significantly faster\n> than scanning the main table, but it's hardly \"instantaneous\". 
It's\n> still going to take time proportional to the table size.\n\nIf this is something that's happening regularly, you'd have to hope that \nmost of the index is already buffered in memory somewhere though, so now \nyou're talking a buffer/OS cache scan that doesn't touch disk much. \nShould be easier for that to be true because the index is smaller than the \ntable, right?\n\nI know when I'm playing with pgbench the primary key index on the big \naccounts table is 1/7 the size of the table, and when using that table \nheavily shared_buffers ends up being mostly filled with that index. The \nusage counts are so high on the index blocks relative to any section of \nthe table itself that they're very sticky in memory. And that's toy data; \non some of the webapps people want these accurate counts for the ratio of \nindex size to table data is even more exaggerated (think web forum).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 8 Mar 2008 00:08:39 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue " }, { "msg_contents": "Josh Berkus wrote:\n>>> Count() on Oracle and MySQL is almost instantaneous, even for very\n>>> large tables. So why can't Postgres do what they do?\n>>> \n>> AFAIK the above claim is false for Oracle. They have the same\n>> transactional issues we do.\n>> \n>\n> Nope. Oracle's MVCC is implemented through rollback segments, rather than \n> non-overwriting the way ours is. So Oracle can just do a count(*) on the \n> index, then check the rollback segment for any concurrent \n> update/delete/insert activity and adjust the count. This sucks if there's \n> a *lot* of concurrent activity, but in the usual case it's pretty fast\n\nI read the \"almost instantaneous\" against \"the above claim is false\" and \n\"Nope.\", and I am not sure from the above whether you are saying that \nOracle keeps an up-to-date count for the index (which might make it \ninstantaneous?), or whether you are saying it still has to scan the \nindex - which can take time if the index is large (therefore not \ninstantaneous).\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nJosh Berkus wrote:\n\n\n\nCount() on Oracle and MySQL is almost instantaneous, even for very\nlarge tables. So why can't Postgres do what they do?\n \n\nAFAIK the above claim is false for Oracle. They have the same\ntransactional issues we do.\n \n\n\nNope. Oracle's MVCC is implemented through rollback segments, rather than \nnon-overwriting the way ours is. So Oracle can just do a count(*) on the \nindex, then check the rollback segment for any concurrent \nupdate/delete/insert activity and adjust the count. 
This sucks if there's \na *lot* of concurrent activity, but in the usual case it's pretty fast\n\n\nI read the \"almost instantaneous\" against \"the above claim is false\"\nand \"Nope.\", and I am not sure from the above whether you are saying\nthat Oracle keeps an up-to-date count for the index (which might make\nit instantaneous?), or whether you are saying it still has to scan the\nindex - which can take time if the index is large (therefore not\ninstantaneous).\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Sat, 08 Mar 2008 00:31:42 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> I know when I'm playing with pgbench the primary key index on the big \n> accounts table is 1/7 the size of the table, and when using that table \n> heavily shared_buffers ends up being mostly filled with that index. The \n> usage counts are so high on the index blocks relative to any section of \n> the table itself that they're very sticky in memory. And that's toy data; \n> on some of the webapps people want these accurate counts for the ratio of \n> index size to table data is even more exaggerated (think web forum).\n\nRemember that our TOAST mechanism acts to limit the width of the\nmain-table row.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Mar 2008 01:13:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue " }, { "msg_contents": "Mark Mielke wrote:\n> Josh Berkus wrote:\n>>>> Count() on Oracle and MySQL is almost instantaneous, even for very\n>>>> large tables. So why can't Postgres do what they do?\n>>>> \n>>> AFAIK the above claim is false for Oracle. They have the same\n>>> transactional issues we do.\n>>> \n>>\n>> Nope. Oracle's MVCC is implemented through rollback segments, rather than \n>> non-overwriting the way ours is. So Oracle can just do a count(*) on the \n>> index, then check the rollback segment for any concurrent \n>> update/delete/insert activity and adjust the count. This sucks if there's \n>> a *lot* of concurrent activity, but in the usual case it's pretty fast\n> \n> I read the \"almost instantaneous\" against \"the above claim is false\" and \n> \"Nope.\", and I am not sure from the above whether you are saying that \n> Oracle keeps an up-to-date count for the index (which might make it \n> instantaneous?), or whether you are saying it still has to scan the \n> index - which can take time if the index is large (therefore not \n> instantaneous).\n> \n> Cheers,\n> mark\n> \n> -- \n> Mark Mielke <[email protected]>\n> \n\nOracle scans the index pages, if the b-tree index is on non-nullable \ncolumns, or if the bitmap index is on low-ish cardinality data. \nOtherwise, it table scans. MyISAM in MySQL would be an example where a \ncounter is kept.\n\n\n\n\n", "msg_date": "Fri, 07 Mar 2008 23:11:19 -0800", "msg_from": "paul rivers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On 6-3-2008 16:28 Craig James wrote:\n> On the one hand, I understand that Postgres has its architecture, and I \n> understand the issue of row visibility, and so forth. On the other \n> hand, my database is just sitting there, nothing going on, no \n> connections except me, and... 
it takes FIFTY FIVE SECONDS to count 20 \n> million rows, a query that either Oracle or MySQL would answer in a \n> fraction of a second. It's hard for me to believe there isn't a better \n> way.\n\nCan you explain to me how you'd fit this in a fraction of a second?\n\nmysql> select count(*) from messages;\n+----------+\n| count(*) |\n+----------+\n| 21908505 |\n+----------+\n1 row in set (8 min 35.09 sec)\n\nThis is a table containing the messages on forumtopics and is therefore \nrelatively large. The hardware is quite beefy for a forum however (4 \n3Ghz cores, 16GB, 14+1 disk raid5). This table has about 20GB of data.\n\nIf I use a table that contains about the same amount of records as the \nabove and was before this query probably much less present in the \ninnodb-buffer (but also less frequently touched by other queries), we \nsee this:\n\nmysql> select count(*) from messagesraw;\n+----------+\n| count(*) |\n+----------+\n| 21962804 |\n+----------+\n1 row in set (5 min 16.41 sec)\n\nThis table is about 12GB.\n\nIn both cases MySQL claimed to be 'Using index' with the PRIMARY index, \nwhich for those tables is more or less identical.\n\nApparently the time is still table-size related, not necessarily \ntuple-count related. As this shows:\n\nmysql> select count(*) from articlestats;\n+----------+\n| count(*) |\n+----------+\n| 34467246 |\n+----------+\n1 row in set (54.14 sec)\n\nthat table is only 2.4GB, but contains 57% more records, although this \nwas on another database on a system with somewhat different specs (8 \n2.6Ghz cores, 16GB, 7+7+1 raid50), used a non-primary index and I have \nno idea how well that index was in the system's cache prior to this query.\n\nRepeating it makes it do that query in 6.65 seconds, repeating the \n12GB-query doesn't make it any faster.\n\nAnyway, long story short: MySQL's table-count stuff also seems \ntable-size related. As soon as the index it uses fits in the cache or it \ndoesn't have to use the primary index, it might be a different story, \nbut when the table(index) is too large to fit, it is quite slow.\nActually, it doesn't appear to be much faster than Postgresql's (8.2) \ntable-based counts. If I use a much slower machine (2 2Ghz opterons, 8GB \nddr memory, 5+1 old 15k rpm scsi disks in raid5) with a 1GB, 13M record \ntable wich is similar to the above articlestats, it is able to return a \ncount(*) in 3 seconds after priming the cache.\n\nIf you saw instantaneous results with MySQL, you have either seen the \nquery-cache at work or where using myisam. Or perhaps with a fast \nsystem, you had small tuples with a nice index in a nicely primed cache.\n\nBest regards,\n\nArjen\n", "msg_date": "Sat, 08 Mar 2008 09:04:31 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> Well, scanning an index to get a count might be significantly faster\n> than scanning the main table, but it's hardly \"instantaneous\". It's\n> still going to take time proportional to the table size.\n\nHm, Mark's comment about bitmap indexes makes that not entirely true. A bitmap\nindex can do RLE compression which makes the relationship between the size of\nthe table and the time taken to scan the index more complex. 
In the degenerate\ncase where there are no concurrent updates (assuming you can determine that\nquickly) it might actually be constant time.\n\n> Unless they keep a central counter of the number of index entries;\n> which would have all the same serialization penalties we've talked\n> about before...\n\nBitmap indexes do in fact have concurrency issues -- arguably they're just a\nbaroque version of this central counter in this case.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Mon, 10 Mar 2008 15:16:08 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Gregory,\n\nI just joined this listserv and was happy to see this posting. I have a\n400GB table that I have indexed (building the index took 27 hours), and\nloading the table with 10 threads took 9 hours. I run queries on the data and get\nimmediate max and min as well as other aggregate functions very quickly,\nhowever a select count(*) of the table takes forever, usually nearly an hour\nor more.\n\nDo you have any tuning recommendations? We in our warehouse use the\ncount(*) as our verification of counts by day/month etc and in Netezza it's\nimmediate. I tried by adding oids, BUT the situation I learned was that\nadding the oids in the table adds a significant amount of space to the data\nAND the index.\n\nAs you may gather from this we are relatively new on Postgres.\n\nAny suggestions you can give me would be most helpful.\n\nCheers,\nJoe\n\nOn Mon, Mar 10, 2008 at 11:16 AM, Gregory Stark <[email protected]>\nwrote:\n\n> \"Tom Lane\" <[email protected]> writes:\n>\n> > Well, scanning an index to get a count might be significantly faster\n> > than scanning the main table, but it's hardly \"instantaneous\". It's\n> > still going to take time proportional to the table size.\n>\n> Hm, Mark's comment about bitmap indexes makes that not entirely true. A\n> bitmap\n> index can do RLE compression which makes the relationship between the size\n> of\n> the table and the time taken to scan the index more complex. In the\n> degenerate\n> case where there are no concurrent updates (assuming you can determine\n> that\n> quickly) it might actually be constant time.\n>\n> > Unless they keep a central counter of the number of index entries;\n> > which would have all the same serialization penalties we've talked\n> > about before...\n>\n> Bitmap indexes do in fact have concurrency issues -- arguably they're just\n> a\n> baroque version of this central counter in this case.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's Slony Replication support!\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nMirabili et Veritas\nJoe Mirabal\n
I tried by adding oids. BUT the situation I learned was that adding the oids in the table adds a significasnt amount of space to the data AND the index.\nAs you may gather from this we are relatively new on Postgres.Any suggestions you can give me would be most helpful.Cheers,Joe On Mon, Mar 10, 2008 at 11:16 AM, Gregory Stark <[email protected]> wrote:\n\"Tom Lane\" <[email protected]> writes:\n\n> Well, scanning an index to get a count might be significantly faster\n> than scanning the main table, but it's hardly \"instantaneous\".  It's\n> still going to take time proportional to the table size.\n\nHm, Mark's comment about bitmap indexes makes that not entirely true. A bitmap\nindex can do RLE compression which makes the relationship between the size of\nthe table and the time taken to scan the index more complex. In the degenerate\ncase where there are no concurrent updates (assuming you can determine that\nquickly) it might actually be constant time.\n\n> Unless they keep a central counter of the number of index entries;\n> which would have all the same serialization penalties we've talked\n> about before...\n\nBitmap indexes do in fact have concurrency issues -- arguably they're just a\nbaroque version of this central counter in this case.\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's Slony Replication support!\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Mirabili et VeritasJoe Mirabal", "msg_date": "Mon, 10 Mar 2008 16:54:23 -0400", "msg_from": "\"Joe Mirabal\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "In response to \"Joe Mirabal\" <[email protected]>:\n\n> Gregory,\n> \n> I just joined this listserv and was happy to see this posting. I have a\n> 400GB table that I have indexed (building the index took 27 hours) , Loading\n> the table with 10 threads took 9 hours. I run queries on the data nad get\n> immediate max and min as well as other aggrgate functions very quickly,\n> however a select count(*) of the table takes forever usually nearly an hour\n> or more.\n> \n> Do you have any tuning recommendations. We in our warehouse use the\n> count(*) as our verification of counts by day/month's etc and in Netezza its\n> immediate. I tried by adding oids. BUT the situation I learned was that\n> adding the oids in the table adds a significasnt amount of space to the data\n> AND the index.\n> \n> As you may gather from this we are relatively new on Postgres.\n> \n> Any suggestions you can give me would be most helpful.\n\nOne approach to this problem is to create triggers that keep track of\nthe total count whenever rows are added or deleted. This adds some\noverhead to the update process, but the correct row count is always\nquickly available.\n\nAnother is to use EXPLAIN to get an estimate of the # of rows from\nthe planner. 
This works well if an estimate is acceptable, but can't\nbe trusted for precise counts.\n\nSome searches through the archives should turn up details on these\nmethods.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Mon, 10 Mar 2008 17:28:26 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Mon, 10 Mar 2008, Bill Moran wrote:\n\n> Some searches through the archives should turn up details on these\n> methods.\n\nI've collected up what looked like the best resources on this topic into \nthe FAQ entry at http://www.postgresqldocs.org/index.php/Slow_Count\n\nGeneral Bits has already done two good summary articles here and I'd think \nwading through the archives directly shouldn't be necessary.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 10 Mar 2008 17:52:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Mon, Mar 10, 2008 at 1:54 PM, Joe Mirabal <[email protected]> wrote:\n> Gregory,\n>\n> I just joined this listserv and was happy to see this posting. I have a\n> 400GB table that I have indexed (building the index took 27 hours) , Loading\n> the table with 10 threads took 9 hours. I run queries on the data nad get\n> immediate max and min as well as other aggrgate functions very quickly,\n> however a select count(*) of the table takes forever usually nearly an hour\n> or more.\n>\n> Do you have any tuning recommendations. We in our warehouse use the\n> count(*) as our verification of counts by day/month's etc and in Netezza its\n> immediate. I tried by adding oids. BUT the situation I learned was that\n> adding the oids in the table adds a significasnt amount of space to the data\n> AND the index.\n\nYeah, this is a typical problem people run into with MVCC databases to\none extent or another. PostgreSQL has no native way to just make it\nfaster. However, if it's a table with wide rows, you can use a lookup\ntable to help a bit. Have a FK with cascading deletes from the master\ntable to a table that just holds the PK for it, and do count(*) on\nthat table.\n\nOtherwise, you have the trigger solution mentioned previously.\n\nAlso, if you only need an approximate count, then you can use the\nsystem tables to get that with something like\n\nselect reltuples from pg_class where relname='tablename';\n\nafter an analyze. It won't be 100% accurate, but it will be pretty\nclose most the time.\n", "msg_date": "Mon, 10 Mar 2008 15:21:42 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Mon, 10 Mar 2008, Joe Mirabal wrote:\n\n> I run queries on the data nad get immediate max and min as well as other \n> aggrgate functions very quickly, however a select count(*) of the table \n> takes forever usually nearly an hour or more.\n\nAre you sure the form of \"select count(*)\" you're using is actually \nutilizing the index to find a useful subset? 
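For instance (with made-up names, purely to illustrate the shape), something like\n\n  SELECT count(*) FROM facts WHERE load_date = '2008-03-01';\n\ncan use an index on load_date to count just a small slice of the table, while a bare\n\n  SELECT count(*) FROM facts;\n\nhas to visit every row no matter what indexes exist. 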
What do you get out of \nEXPLAIN ANALYZE on the query?\n\nIn order for indexes to be helpful a couple of things need to happen:\n1) They have to be structured correctly to be useful\n2) There needs to be large enough settings for shared_buffers and \neffective_cache_size that the database thinks it can use them efficiently\n3) The tables involved need to be ANALYZEd to keep their statistics up to \ndate.\n\nThe parameters to run a 400GB *table* are very different from the \ndefaults; if you want tuning suggestions you should post the non-default \nentries in your postgresql.conf file from what you've already adjusted \nalong with basic information about your server (PostgreSQL version, OS, \nmemory, disk setup).\n\n> We in our warehouse use the count(*) as our verification of counts by \n> day/month's etc\n\nIf you've got a database that size and you're doing that sort of thing on \nit, you really should be considering partitioning as well.\n\n --\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 10 Mar 2008 19:01:43 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Hi,\n\nI have been reading this conversation for a few days now and I just wanted\nto ask this. From the release notes, one of the new additions in 8.3 is\n(Allow col IS NULL to use an index (Teodor)).\n\nSorry, if I am missing something here, but shouldn't something like this\nallow us to get a (fast) accurate count ?\n\nSELECT COUNT(*) from table WHERE indexed_field IS NULL\n+\nSELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n\n*Robins Tharakan*\n\n---------- Forwarded message ----------\nFrom: Greg Smith <[email protected]>\nDate: Tue, Mar 11, 2008 at 4:31 AM\nSubject: Re: [PERFORM] count * performance issue\nTo: Joe Mirabal <[email protected]>\nCc: [email protected]\n\n\nOn Mon, 10 Mar 2008, Joe Mirabal wrote:\n\n> I run queries on the data nad get immediate max and min as well as other\n> aggrgate functions very quickly, however a select count(*) of the table\n> takes forever usually nearly an hour or more.\n\nAre you sure the form of \"select count(*)\" you're using is actually\nutilizing the index to find a useful subset? 
What do you get out of\nEXPLAIN ANALZYE on the query?\n\nIn order for indexes to be helpful a couple of things need to happen:\n1) They have to be structured correctly to be useful\n2) There needs to be large enough settings for shared_buffes and\neffective_cache_size that the database things it can use them efficiently\n3) The tables involved need to be ANALYZEd to keep their statistics up to\ndate.\n\nThe parameters to run a 400GB *table* are very different from the\ndefaults; if you want tuning suggestions you should post the non-default\nentries in your postgresql.conf file from what you've already adjusted\nalong with basic information about your server (PostgreSQL version, OS,\nmemory, disk setup).\n\n> We in our warehouse use the count(*) as our verification of counts by\n> day/month's etc\n\nIf you've got a database that size and you're doing that sort of thing on\nit, you really should be considering partitioning as well.\n\n --\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nHi,I have been reading this conversation for a few days now and I just wanted to ask this. From the release notes, one of the new additions in 8.3 is (Allow col IS NULL to use an index (Teodor)).\nSorry, if I am missing something here, but shouldn't something like this allow us to get a (fast) accurate count ?\nSELECT COUNT(*) from table WHERE indexed_field IS NULL\n+SELECT COUNT(*) from table WHERE indexed_field IS NOT NULLRobins Tharakan\n---------- Forwarded message ----------From: Greg Smith <[email protected]>Date: Tue, Mar 11, 2008 at 4:31 AM\nSubject: Re: [PERFORM] count * performance issueTo: Joe Mirabal <[email protected]>Cc: [email protected]\nOn Mon, 10 Mar 2008, Joe Mirabal wrote:\n\n> I run queries on the data nad get immediate max and min as well as other\n> aggrgate functions very quickly, however a select count(*) of the table\n> takes forever usually nearly an hour or more.\n\nAre you sure the form of \"select count(*)\" you're using is actually\nutilizing the index to find a useful subset?  
What do you get out of\nEXPLAIN ANALZYE on the query?\n\nIn order for indexes to be helpful a couple of things need to happen:\n1) They have to be structured correctly to be useful\n2) There needs to be large enough settings for shared_buffes and\neffective_cache_size that the database things it can use them efficiently\n3) The tables involved need to be ANALYZEd to keep their statistics up to\ndate.\n\nThe parameters to run a 400GB *table* are very different from the\ndefaults; if you want tuning suggestions you should post the non-default\nentries in your postgresql.conf file from what you've already adjusted\nalong with basic information about your server (PostgreSQL version, OS,\nmemory, disk setup).\n\n> We in our warehouse use the count(*) as our verification of counts by\n> day/month's etc\n\nIf you've got a database that size and you're doing that sort of thing on\nit, you really should be considering partitioning as well.\n\n  --\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 11 Mar 2008 08:27:05 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Robins Tharakan wrote:\n> Hi,\n>\n> I have been reading this conversation for a few days now and I just \n> wanted to ask this. From the release notes, one of the new additions \n> in 8.3 is (Allow col IS NULL to use an index (Teodor)).\n>\n> Sorry, if I am missing something here, but shouldn't something like \n> this allow us to get a (fast) accurate count ?\n>\n> SELECT COUNT(*) from table WHERE indexed_field IS NULL\n> +\n> SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n\nFor PostgreSQL: You still don't know whether the row is visible until \nyou check the row. That it's NULL or NOT NULL does not influence this truth.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nRobins Tharakan wrote:\nHi,\n\nI have been reading\nthis conversation for a few days now and I just wanted to ask this.\n>From the release notes, one of the new additions in\n8.3 is (Allow col IS NULL to use an index (Teodor)).\n\nSorry, if I am missing\nsomething here, but shouldn't something like this\nallow us to get a (fast) accurate count ?\n\nSELECT COUNT(*) from table WHERE\nindexed_field IS NULL\n+\nSELECT COUNT(*) from\ntable WHERE indexed_field IS NOT NULL\n\n\nFor PostgreSQL: You still don't know whether the row is visible until\nyou check the row. That it's NULL or NOT NULL does not influence this\ntruth.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Mon, 10 Mar 2008 23:01:51 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Tue, 11 Mar 2008 08:27:05 +0530\n\"Robins Tharakan\" <[email protected]> wrote:\n\n> SELECT COUNT(*) from table WHERE indexed_field IS NULL\n> +\n> SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n\nIf the selectivity is appropriate yes. However if you have 1 million\nrows, and 200k of those rows are null (or not null), it is still going\nto seqscan.\n\njoshua d. 
drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit", "msg_date": "Mon, 10 Mar 2008 20:08:44 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Mon, Mar 10, 2008 at 7:57 PM, Robins Tharakan <[email protected]> wrote:\n> Hi,\n>\n> I have been reading this conversation for a few days now and I just wanted\n> to ask this. From the release notes, one of the new additions in 8.3 is\n> (Allow col IS NULL to use an index (Teodor)).\n>\n> Sorry, if I am missing something here, but shouldn't something like this\n> allow us to get a (fast) accurate count ?\n>\n> SELECT COUNT(*) from table WHERE indexed_field IS NULL\n> +\n> SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n\nIt really depends on the distribution of the null / not nulls in the\ntable. If it's 50/50 there's no advantage to using the index, as you\nstill have to check visibility info in the table itself.\n\nOTOH, if NULL (or converserly not null) are rare, then yes, the index\ncan help. I.e. if 1% of the tuples are null, the select count(*) from\ntable where field is null can use the index efficiently.\n", "msg_date": "Mon, 10 Mar 2008 20:11:27 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "A Dimarts 11 Març 2008 04:11, Scott Marlowe va escriure:\n> On Mon, Mar 10, 2008 at 7:57 PM, Robins Tharakan <[email protected]> wrote:\n> > Hi,\n> >\n> > I have been reading this conversation for a few days now and I just\n> > wanted to ask this. From the release notes, one of the new additions in\n> > 8.3 is (Allow col IS NULL to use an index (Teodor)).\n> >\n> > Sorry, if I am missing something here, but shouldn't something like this\n> > allow us to get a (fast) accurate count ?\n> >\n> > SELECT COUNT(*) from table WHERE indexed_field IS NULL\n> > +\n> > SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n>\n> It really depends on the distribution of the null / not nulls in the\n> table. If it's 50/50 there's no advantage to using the index, as you\n> still have to check visibility info in the table itself.\n>\n> OTOH, if NULL (or converserly not null) are rare, then yes, the index\n> can help. I.e. if 1% of the tuples are null, the select count(*) from\n> table where field is null can use the index efficiently.\n\nBut you'll get a sequential scan with the NOT NULL case which will end up \ntaking more time. (Seq Scan + Index Scan > Seq Scan)\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. 
El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Tue, 11 Mar 2008 09:34:30 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "In response to \"Robins Tharakan\" <[email protected]>:\n\n> Hi,\n> \n> I have been reading this conversation for a few days now and I just wanted\n> to ask this. From the release notes, one of the new additions in 8.3 is\n> (Allow col IS NULL to use an index (Teodor)).\n> \n> Sorry, if I am missing something here, but shouldn't something like this\n> allow us to get a (fast) accurate count ?\n> \n> SELECT COUNT(*) from table WHERE indexed_field IS NULL\n> +\n> SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n\nFor certain, qualified definitions of \"fast\", sure.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. 
The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Tue, 11 Mar 2008 09:35:37 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Tue, 11 Mar 2008, Bill Moran wrote:\n\n> In response to \"Robins Tharakan\" <[email protected]>:\n>> Sorry, if I am missing something here, but shouldn't something like this\n>> allow us to get a (fast) accurate count ?\n>>\n>> SELECT COUNT(*) from table WHERE indexed_field IS NULL\n>> +\n>> SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n>\n> For certain, qualified definitions of \"fast\", sure.\n\nAnd certain, qualified definitions of \"accurate\" as well. Race condition?\n\nMatthew\n\n-- \n\"Television is a medium because it is neither rare nor well done.\" \n -- Fred Friendly\n", "msg_date": "Tue, 11 Mar 2008 13:57:03 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Hi,\n\nMatthew wrote:\n> On Tue, 11 Mar 2008, Bill Moran wrote:\n> \n>> In response to \"Robins Tharakan\" <[email protected]>:\n>>> Sorry, if I am missing something here, but shouldn't something like this\n>>> allow us to get a (fast) accurate count ?\n>>>\n>>> SELECT COUNT(*) from table WHERE indexed_field IS NULL\n>>> +\n>>> SELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n>>\n>> For certain, qualified definitions of \"fast\", sure.\n> \n> And certain, qualified definitions of \"accurate\" as well. Race condition?\n\nYou mean in a three-state-logic? null, not null and something different?\n\n;-)\n\nTino\n", "msg_date": "Tue, 11 Mar 2008 15:01:06 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Tue, 11 Mar 2008, Tino Wildenhain wrote:\n>> And certain, qualified definitions of \"accurate\" as well. Race condition?\n>\n> You mean in a three-state-logic? null, not null and something different?\n\nTrue, False, and FILE_NOT_FOUND.\n\nNo, actually I was referring to a race condition. So, you find the count \nof rows with IS NULL, then someone changes a row, then you find the count \nof rows with IS NOT NULL. Add the two together, and there may be rows that \nwere counted twice, or not at all.\n\nMatthew\n\n-- \nIt's one of those irregular verbs - \"I have an independent mind,\" \"You are\nan eccentric,\" \"He is round the twist.\"\n -- Bernard Woolly, Yes Prime Minister\n", "msg_date": "Tue, 11 Mar 2008 14:19:09 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "Matthew wrote:\n> No, actually I was referring to a race condition. So, you find the count \n> of rows with IS NULL, then someone changes a row, then you find the \n> count of rows with IS NOT NULL. Add the two together, and there may be \n> rows that were counted twice, or not at all.\n\nNot a problem if you use a serializable transaction, or if you do\n\nSELECT COUNT(*) from table WHERE indexed_field IS NULL\nUNION ALL\nSELECT COUNT(*) from table WHERE indexed_field IS NOT NULL\n\nas one statement.\n\nHowever, this makes no sense whatsoever. 
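(The serializable variant is just those same two counts inside one transaction, roughly:\n\nBEGIN ISOLATION LEVEL SERIALIZABLE;\nSELECT COUNT(*) from table WHERE indexed_field IS NULL;\nSELECT COUNT(*) from table WHERE indexed_field IS NOT NULL;\nCOMMIT;\n\nwith \"table\" and \"indexed_field\" again standing in for the real names, so that both\ncounts see the same snapshot.) 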
As both index scans (assuming \nthe planner even chooses an index scan for them, which seems highly \nunlikely) still have to visit each tuple in the heap. It's always going \nto be slower than a single \"SELECT COUNT(*) FROM table\" with a seq scan.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Mar 2008 14:31:18 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "On Tue, Mar 11, 2008 at 02:19:09PM +0000, Matthew wrote:\n> of rows with IS NULL, then someone changes a row, then you find the count \n> of rows with IS NOT NULL. Add the two together, and there may be rows that \n> were counted twice, or not at all.\n\nOnly if you count in READ COMMITTED.\n\nA\n\n", "msg_date": "Tue, 11 Mar 2008 10:34:39 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: count * performance issue" }, { "msg_contents": "I just received a new server and thought benchmarks would be interesting. I think this looks pretty good, but maybe there are some suggestions about the configuration file. This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers. An external process also uses a LOT of CPU power along with each query.\n\nThanks,\nCraig\n\n\nConfiguration:\n Dell 2950\n 8 CPU (Intel 2GHz Xeon)\n 8 GB memory\n Dell Perc 6i with battery-backed cache\n RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n\nEverything (OS, WAL and databases) are on the one RAID array.\n\nDiffs from original configuration:\n\nmax_connections = 1000\nshared_buffers = 400MB\nwork_mem = 256MB\nmax_fsm_pages = 1000000\nmax_fsm_relations = 5000\nwal_buffers = 256kB\neffective_cache_size = 4GB\n\nBonnie output (slightly reformatted)\n\n------------------------------------------------------------------------------\n\nDelete files in random order...done.\nVersion 1.03\n ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n 16G 64205 99 234252 38 112924 26 65275 98 293852 24 940.3 1\n\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\nfiles /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 12203 95 +++++ +++ 19469 94 12297 95 +++++ +++ 15578 82\n\nwww.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n\n------------------------------------------------------------------------------\n\n$ pgbench -c 10 -t 10000 -v test -U test\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 2786.377933 (including connections establishing)\ntps = 2787.888209 (excluding connections establishing)\n\n\n", "msg_date": "Wed, 12 Mar 2008 21:55:18 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Wed, Mar 12, 2008 at 9:55 PM, Craig James <[email protected]> wrote:\n> I just received a new server and thought benchmarks would be interesting. 
I think this looks pretty good, but maybe there are some suggestions about the configuration file. This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers. An external process also uses a LOT of CPU power along with each query.\n\nHave you been inserting each insert individually, or as part of a\nlarger transaction. Wrapping a few thousand up in a begin;end; pair\ncan really help. You can reasonably wrap 100k or more inserts into a\nsingle transaction. if any one insert fails the whole insert sequence\nfails.\n", "msg_date": "Wed, 12 Mar 2008 22:31:51 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Wed, 12 Mar 2008 21:55:18 -0700\r\nCraig James <[email protected]> wrote:\r\n\r\n\r\n> Diffs from original configuration:\r\n> \r\n> max_connections = 1000\r\n> shared_buffers = 400MB\r\n> work_mem = 256MB\r\n> max_fsm_pages = 1000000\r\n> max_fsm_relations = 5000\r\n> wal_buffers = 256kB\r\n> effective_cache_size = 4GB\r\n\r\nI didn't see which OS but I assume linux. I didn't see postgresql so I\r\nassume 8.3.\r\n\r\nwal_sync_method = open_sync\r\ncheckpoint_segments = 30\r\nshared_buffers = 2000MB\r\nasyncrhonous_commit = off (sp?)\r\n\r\nTry again.\r\n\r\nThanks this is useful stuff!\r\n\r\nJoshua D. Drake\r\n\r\n\r\n> \r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\n PostgreSQL political pundit | Mocker of Dolphins\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFH2MHsATb/zqfZUUQRAqqtAJsEa8RkJbpqY2FAYSrNVHhvTK/GBgCfYzYD\r\n9myRDV7AYXq+Iht7rIZVZcc=\r\n=PLpQ\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Wed, 12 Mar 2008 22:55:54 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "I recent just got a new server also from dell 2 weeks ago\nwent with more memory slower CPU, and smaller harddrives\n have not run pgbench \n\nDell PE 2950 III \n 2 Quad Core 1.866 Ghz \n 16 gigs of ram. \n 8 hard drives 73Gig 10k RPM SAS \n 2 drives in Mirrored for OS, Binaries, and WAL \n 6 in a raid 10 \n Dual Gig Ethernet \nOS Ubuntu 7.10\n-----------------------------------------------\n\nVersion 1.03 \n ------Sequential Output------ --Sequential Input- --Random- \n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- \nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP \nPriData 70000M 51030 90 107488 29 50666 10 38464 65 102931 9 268.2 0 \n \n ------Sequential Create------ --------Random Create-------- \n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- \n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ \nPriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16, \n+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\nthe difference in our results are interesting.\n\nWhat are the setting on the RAID card . 
I have the cache turned on with Read Ahead \n\n\n---- Message from mailto:[email protected] Craig James <[email protected]> at 03-12-2008 09:55:18 PM ------\n\nI just received a new server and thought benchmarks would be interesting. I think this looks pretty good, but maybe there are some suggestions about the configuration file. This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers. An external process also uses a LOT of CPU power along with each query.\n\nThanks,\nCraig\n\n\nConfiguration:\n Dell 2950\n 8 CPU (Intel 2GHz Xeon)\n 8 GB memory\n Dell Perc 6i with battery-backed cache\n RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n\nEverything (OS, WAL and databases) are on the one RAID array.\n\nDiffs from original configuration:\n\nmax_connections = 1000\nshared_buffers = 400MB\nwork_mem = 256MB\nmax_fsm_pages = 1000000\nmax_fsm_relations = 5000\nwal_buffers = 256kB\neffective_cache_size = 4GB\n\nBonnie output (slightly reformatted)\n\n------------------------------------------------------------------------------\n\nDelete files in random order...done.\nVersion 1.03\n ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n 16G 64205 99 234252 38 112924 26 65275 98 293852 24 940.3 1\n\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\nfiles /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 12203 95 +++++ +++ 19469 94 12297 95 +++++ +++ 15578 82\n\nwww.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n\n------------------------------------------------------------------------------\n\n$ pgbench -c 10 -t 10000 -v test -U test\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 2786.377933 (including connections establishing)\ntps = 2787.888209 (excluding connections establishing)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\nI recent just got a new server also from dell 2 weeks agowent with more memory slower CPU, and smaller harddrives have not run pgbench Dell PE 2950 III\n\n 2 Quad Core 1.866 Ghz \n\n 16 gigs of ram. 
\n\n 8 hard drives 73Gig 10k RPM SAS\n\n 2 drives in Mirrored for OS, Binaries, and WAL\n\n 6 in a raid 10\n\n Dual Gig Ethernet\nOS Ubuntu 7.10-----------------------------------------------Version 1.03                      ------Sequential Output------ --Sequential Input- --Random-                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n\nMachine Size        K/sec %CP  K/sec   %CP    K/sec %CP   K/sec %CP K/sec    %CP /sec %CP\n\nPriData 70000M  51030   90   107488    29      50666 10     38464 65     102931     9    268.2 0\n\n\n ------Sequential Create------ --------Random Create--------\n\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n\nPriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16,\n\n+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++the difference in our results are interesting.What are the setting on the RAID card .  I have the cache turned on with Read Ahead  ---- Message from Craig James <[email protected]> at 03-12-2008 09:55:18 PM ------I just received a new server and thought benchmarks would be interesting.  I think this looks pretty good, but maybe there are some suggestions about the configuration file.  This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers.  An external process also uses a LOT of CPU power along with each query.\n\nThanks,\nCraig\n\n\nConfiguration:\n  Dell 2950\n  8 CPU (Intel 2GHz Xeon)\n  8 GB memory\n  Dell Perc 6i with battery-backed cache\n  RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n\nEverything (OS, WAL and databases) are on the one RAID array.\n\nDiffs from original configuration:\n\nmax_connections = 1000\nshared_buffers = 400MB\nwork_mem = 256MB\nmax_fsm_pages = 1000000\nmax_fsm_relations = 5000\nwal_buffers = 256kB\neffective_cache_size = 4GB\n\nBonnie output (slightly reformatted)\n\n------------------------------------------------------------------------------\n\nDelete files in random order...done.\nVersion  1.03\n         ------Sequential Output------       --Sequential Input-      --Random-\n      -Per Chr-   --Block--    -Rewrite-     -Per Chr-   --Block--    --Seeks--\n Size K/sec %CP   K/sec  %CP   K/sec  %CP    K/sec %CP   K/sec  %CP    /sec %CP\n  16G 64205  99   234252  38   112924  26    65275  98   293852  24   940.3   1\n\n         ------Sequential Create------    --------Random Create--------\n      -Create--   --Read---   -Delete--   -Create--   --Read---   -Delete--\nfiles  /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP\n   16 12203  95   +++++ +++   19469  94   12297  95   +++++ +++   15578  82\n\nwww.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n\n------------------------------------------------------------------------------\n\n$ pgbench -c 10 -t 10000 -v test -U test\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 2786.377933 (including connections establishing)\ntps = 2787.888209 (excluding connections establishing)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to 
your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 13 Mar 2008 04:11:59 -0400", "msg_from": "\"Justin Graf\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "All,\nI am in the process of specing out a purchase for our production\nsystems, and am looking at the Dell 2950s as well. I am very interested\nto see where this thread goes, and what combinations work with different\napplication loading types. Our systems will have one pair of\nheartbeat-controlled, drbd mirrored servers running postgresql 8.3, with\na more write intensive, multiple writers and few readers application.\nThe other similarly configured pair will have lots of readers and few\nwriters. Our initial plan is RAID 10 for the database (four 300GB 15K\ndrives in an attached MD1000 box) and RAID 1 for the OS (pair of 73GB\ndrives internal to the 2950). PERC 6i for the internal drives (256MB\nbattery backed cache), PERC 6E for the external drives (512MB battery\nbacked cache). 8GB RAM, also dual Gig NICs for internet and\nheartbeat/drbd. Not sure which processor we're going with, or if 8GB\nmemory will be enough. Keep the benchmarks coming.\n\nDoug\n\nOn Thu, 2008-03-13 at 04:11 -0400, Justin Graf wrote:\n\n> I recent just got a new server also from dell 2 weeks ago\n> went with more memory slower CPU, and smaller harddrives\n> have not run pgbench \n> \n> Dell PE 2950 III \n> 2 Quad Core 1.866 Ghz \n> 16 gigs of ram. \n> 8 hard drives 73Gig 10k RPM SAS \n> 2 drives in Mirrored for OS, Binaries, and WAL \n> 6 in a raid 10 \n> Dual Gig Ethernet \n> OS Ubuntu 7.10\n> -----------------------------------------------\n> \n> Version 1.03 \n> ------Sequential Output------ --Sequential Input-\n> --Random- \n> -Per Chr- --Block-- -Rewrite- -Per Chr-\n> --Block-- --Seeks-- \n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> K/sec %CP /sec %CP \n> PriData 70000M 51030 90 107488 29 50666 10 38464 65\n> 102931 9 268.2 0 \n> \n> ------Sequential Create------ --------Random Create-------- \n> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- \n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ \n> PriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16, \n> +++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> \n> the difference in our results are interesting.\n> \n> What are the setting on the RAID card . I have the cache turned on\n> with Read Ahead \n> \n> \n> ---- Message from Craig James <[email protected]> at\n> 03-12-2008 09:55:18 PM ------\n> \n> I just received a new server and thought benchmarks would be\n> interesting. I think this looks pretty good, but maybe there\n> are some suggestions about the configuration file. This is a\n> web app, a mix of read/write, where writes tend to be \"insert\n> into ... (select ...)\" where the resulting insert is on the\n> order of 100 to 10K rows of two integers. 
An external process\n> also uses a LOT of CPU power along with each query.\n> \n> Thanks,\n> Craig\n> \n> \n> Configuration:\n> Dell 2950\n> 8 CPU (Intel 2GHz Xeon)\n> 8 GB memory\n> Dell Perc 6i with battery-backed cache\n> RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n> \n> Everything (OS, WAL and databases) are on the one RAID array.\n> \n> Diffs from original configuration:\n> \n> max_connections = 1000\n> shared_buffers = 400MB\n> work_mem = 256MB\n> max_fsm_pages = 1000000\n> max_fsm_relations = 5000\n> wal_buffers = 256kB\n> effective_cache_size = 4GB\n> \n> Bonnie output (slightly reformatted)\n> \n> ------------------------------------------------------------------------------\n> \n> Delete files in random order...done.\n> Version 1.03\n> ------Sequential Output------ --Sequential\n> Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr-\n> --Block-- --Seeks--\n> Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> K/sec %CP /sec %CP\n> 16G 64205 99 234252 38 112924 26 65275 98\n> 293852 24 940.3 1\n> \n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create--\n> --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %\n> CP /sec %CP\n> 16 12203 95 +++++ +++ 19469 94 12297 95 +++++\n> +++ 15578 82\n> \n> www.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n> \n> ------------------------------------------------------------------------------\n> \n> $ pgbench -c 10 -t 10000 -v test -U test\n> starting vacuum...end.\n> starting vacuum accounts...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 10000\n> number of transactions actually processed: 100000/100000\n> tps = 2786.377933 (including connections establishing)\n> tps = 2787.888209 (excluding connections establishing)\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n\n\n\n\n\n\nAll,\nI am in the process of specing out a purchase for our production systems, and am looking at the Dell 2950s as well. I am very interested to see where this thread goes, and what combinations work with different application loading types. Our systems will have one pair of heartbeat-controlled, drbd mirrored servers running postgresql 8.3, with a more write intensive, multiple writers and few readers application. The other similarly configured pair will have lots of readers and few writers. Our initial plan is RAID 10 for the database (four 300GB 15K drives in an attached MD1000 box) and RAID 1 for the OS (pair of 73GB drives internal to the 2950). PERC 6i for the internal drives (256MB battery backed cache), PERC 6E for the external drives (512MB battery backed cache). 8GB RAM, also dual Gig NICs for internet and heartbeat/drbd. Not sure which processor we're going with, or if 8GB memory will be enough. Keep the benchmarks coming.\n\nDoug\n\nOn Thu, 2008-03-13 at 04:11 -0400, Justin Graf wrote:\n\nI recent just got a new server also from dell 2 weeks ago\nwent with more memory slower CPU, and smaller harddrives\n have not run pgbench \n\nDell PE 2950 III \n2 Quad Core 1.866 Ghz \n16 gigs of ram. 
\n8 hard drives 73Gig 10k RPM SAS \n2 drives in Mirrored for OS, Binaries, and WAL \n6 in a raid 10 \nDual Gig Ethernet \nOS Ubuntu 7.10\n-----------------------------------------------\n\nVersion 1.03 \n                     ------Sequential Output------ --Sequential Input- --Random- \n                       -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- \nMachine Size        K/sec %CP  K/sec   %CP    K/sec %CP   K/sec %CP K/sec    %CP /sec %CP \nPriData 70000M  51030   90   107488    29      50666 10     38464 65     102931     9    268.2 0 \n\n------Sequential Create------ --------Random Create-------- \n-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- \nfiles /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ \nPriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16, \n+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\nthe difference in our results are interesting.\n\nWhat are the setting on the RAID card .  I have the cache turned on with Read Ahead  \n\n\n---- Message from Craig James <[email protected]> at 03-12-2008 09:55:18 PM ------\n\nI just received a new server and thought benchmarks would be interesting.  I think this looks pretty good, but maybe there are some suggestions about the configuration file.  This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers.  An external process also uses a LOT of CPU power along with each query.\n\nThanks,\nCraig\n\n\nConfiguration:\n  Dell 2950\n  8 CPU (Intel 2GHz Xeon)\n  8 GB memory\n  Dell Perc 6i with battery-backed cache\n  RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n\nEverything (OS, WAL and databases) are on the one RAID array.\n\nDiffs from original configuration:\n\nmax_connections = 1000\nshared_buffers = 400MB\nwork_mem = 256MB\nmax_fsm_pages = 1000000\nmax_fsm_relations = 5000\nwal_buffers = 256kB\neffective_cache_size = 4GB\n\nBonnie output (slightly reformatted)\n\n------------------------------------------------------------------------------\n\nDelete files in random order...done.\nVersion  1.03\n         ------Sequential Output------       --Sequential Input-      --Random-\n      -Per Chr-   --Block--    -Rewrite-     -Per Chr-   --Block--    --Seeks--\n Size K/sec %CP   K/sec  %CP   K/sec  %CP    K/sec %CP   K/sec  %CP    /sec %CP\n  16G 64205  99   234252  38   112924  26    65275  98   293852  24   940.3   1\n\n         ------Sequential Create------    --------Random Create--------\n      -Create--   --Read---   -Delete--   -Create--   --Read---   -Delete--\nfiles  /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP\n   16 12203  95   +++++ +++   19469  94   12297  95   +++++ +++   15578  82\n\nwww.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n\n------------------------------------------------------------------------------\n\n$ pgbench -c 10 -t 10000 -v test -U test\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 2786.377933 (including connections establishing)\ntps = 2787.888209 (excluding connections establishing)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make 
changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 13 Mar 2008 08:11:10 -0400", "msg_from": "Doug Knight <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Justin,\n\nThis may be a bit out of context, but did you run into any troubles \ngetting your Perc6i RAID controller to work under Ubuntu 7.1? I've \nheard there were issues with that.\n\nThanks,\nWill\n\n\nOn Mar 13, 2008, at 3:11 AM, Justin Graf wrote:\n\n> I recent just got a new server also from dell 2 weeks ago\n> went with more memory slower CPU, and smaller harddrives\n> have not run pgbench\n>\n> Dell PE 2950 III\n> 2 Quad Core 1.866 Ghz\n> 16 gigs of ram.\n> 8 hard drives 73Gig 10k RPM SAS\n> 2 drives in Mirrored for OS, Binaries, and WAL\n> 6 in a raid 10\n> Dual Gig Ethernet\n> OS Ubuntu 7.10\n> -----------------------------------------------\n>\n> Version 1.03\n> ------Sequential Output------ --Sequential \n> Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n> Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n> K/sec %CP /sec %CP\n> PriData 70000M 51030 90 107488 29 50666 10 38464 \n> 65 102931 9 268.2 0\n>\n> ------Sequential Create------ --------Random Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> PriData,70000M, \n> 51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16,\n> +++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n>\n> the difference in our results are interesting.\n>\n> What are the setting on the RAID card . I have the cache turned on \n> with Read Ahead\n>\n>\n> ---- Message from Craig James <[email protected]> at \n> 03-12-2008 09:55:18 PM ------\n> I just received a new server and thought benchmarks would be \n> interesting. I think this looks pretty good, but maybe there are \n> some suggestions about the configuration file. This is a web app, a \n> mix of read/write, where writes tend to be \"insert into ... \n> (select ...)\" where the resulting insert is on the order of 100 to \n> 10K rows of two integers. 
An external process also uses a LOT of \n> CPU power along with each query.\n>\n> Thanks,\n> Craig\n>\n>\n> Configuration:\n> Dell 2950\n> 8 CPU (Intel 2GHz Xeon)\n> 8 GB memory\n> Dell Perc 6i with battery-backed cache\n> RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n>\n> Everything (OS, WAL and databases) are on the one RAID array.\n>\n> Diffs from original configuration:\n>\n> max_connections = 1000\n> shared_buffers = 400MB\n> work_mem = 256MB\n> max_fsm_pages = 1000000\n> max_fsm_relations = 5000\n> wal_buffers = 256kB\n> effective_cache_size = 4GB\n>\n> Bonnie output (slightly reformatted)\n>\n> ------------------------------------------------------------------------------\n>\n> Delete files in random order...done.\n> Version 1.03\n> ------Sequential Output------ --Sequential \n> Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n> Block-- --Seeks--\n> Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec \n> %CP /sec %CP\n> 16G 64205 99 234252 38 112924 26 65275 98 293852 \n> 24 940.3 1\n>\n> ------Sequential Create------ --------Random \n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read--- - \n> Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP / \n> sec %CP\n> 16 12203 95 +++++ +++ 19469 94 12297 95 +++++ +++ \n> 15578 82\n>\n> www.xxx.com,16G, \n> 64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+ \n> ++++,+++,19469,94,12297,95,+++++,+++,15578,82\n>\n> ------------------------------------------------------------------------------\n>\n> $ pgbench -c 10 -t 10000 -v test -U test\n> starting vacuum...end.\n> starting vacuum accounts...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 10000\n> number of transactions actually processed: 100000/100000\n> tps = 2786.377933 (including connections establishing)\n> tps = 2787.888209 (excluding connections establishing)\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\nJustin,This may be a bit out of context, but did you run into any troubles getting your Perc6i RAID controller to work under Ubuntu 7.1? I've heard there were issues with that.Thanks,Will On Mar 13, 2008, at 3:11 AM, Justin Graf wrote:I recent just got a new server also from dell 2 weeks agowent with more memory slower CPU, and smaller harddrives have not run pgbench Dell PE 2950 III 2 Quad Core 1.866 Ghz 16 gigs of ram. 8 hard drives 73Gig 10k RPM SAS 2 drives in Mirrored for OS, Binaries, and WAL 6 in a raid 10 Dual Gig Ethernet OS Ubuntu 7.10-----------------------------------------------Version 1.03                      ------Sequential Output------ --Sequential Input- --Random-                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size        K/sec %CP  K/sec   %CP    K/sec %CP   K/sec %CP K/sec    %CP /sec %CP PriData 70000M  51030   90   107488    29      50666 10     38464 65     102931     9    268.2 0 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ PriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16, +++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++the difference in our results are interesting.What are the setting on the RAID card .  
I have the cache turned on with Read Ahead  ---- Message from Craig James <[email protected]> at 03-12-2008 09:55:18 PM ------I just received a new server and thought benchmarks would be interesting.  I think this looks pretty good, but maybe there are some suggestions about the configuration file.  This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers.  An external process also uses a LOT of CPU power along with each query.Thanks,CraigConfiguration:  Dell 2950  8 CPU (Intel 2GHz Xeon)  8 GB memory  Dell Perc 6i with battery-backed cache  RAID 10 of 8x 146GB SAS 10K 2.5\" disksEverything (OS, WAL and databases) are on the one RAID array.Diffs from original configuration:max_connections = 1000shared_buffers = 400MBwork_mem = 256MBmax_fsm_pages = 1000000max_fsm_relations = 5000wal_buffers = 256kBeffective_cache_size = 4GBBonnie output (slightly reformatted)------------------------------------------------------------------------------Delete files in random order...done.Version  1.03         ------Sequential Output------       --Sequential Input-      --Random-      -Per Chr-   --Block--    -Rewrite-     -Per Chr-   --Block--    --Seeks-- Size K/sec %CP   K/sec  %CP   K/sec  %CP    K/sec %CP   K/sec  %CP    /sec %CP  16G 64205  99   234252  38   112924  26    65275  98   293852  24   940.3   1         ------Sequential Create------    --------Random Create--------      -Create--   --Read---   -Delete--   -Create--   --Read---   -Delete--files  /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP   16 12203  95   +++++ +++   19469  94   12297  95   +++++ +++   15578  82www.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82------------------------------------------------------------------------------$ pgbench -c 10 -t 10000 -v test -U teststarting vacuum...end.starting vacuum accounts...end.transaction type: TPC-B (sort of)scaling factor: 1number of clients: 10number of transactions per client: 10000number of transactions actually processed: 100000/100000tps = 2786.377933 (including connections establishing)tps = 2787.888209 (excluding connections establishing)-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 13 Mar 2008 08:11:06 -0500", "msg_from": "Will Weaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "I did not run into one install problem, I read a thread about people having problems but the thread is over a year old now. \n\nI used the 7.1 gutsy amd64 server version \n\nI then installed gnome desktop because its not installed by default. \"i'm a windows admin i have to have my gui\"\n\nthen installed postgres 8.3 gutsy. \n\n it took about 3 hours to get the server setup.\n\n\n---- Message from mailto:[email protected] Will Weaver <[email protected]> at 03-13-2008 08:11:06 AM ------\n\nJustin,\n\n\nThis may be a bit out of context, but did you run into any troubles getting your Perc6i RAID controller to work under Ubuntu 7.1? 
I've heard there were issues with that.\n\n\nThanks,\nWill\n\n\n\n\n \n\n\nOn Mar 13, 2008, at 3:11 AM, Justin Graf wrote:\n\n\n\nI recent just got a new server also from dell 2 weeks ago\nwent with more memory slower CPU, and smaller harddrives\n have not run pgbench \n\nDell PE 2950 III \n2 Quad Core 1.866 Ghz \n16 gigs of ram. \n8 hard drives 73Gig 10k RPM SAS \n2 drives in Mirrored for OS, Binaries, and WAL \n6 in a raid 10 \nDual Gig Ethernet \nOS Ubuntu 7.10\n-----------------------------------------------\n\nVersion 1.03 \n ------Sequential Output------ --Sequential Input- --Random- \n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- \nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP \nPriData 70000M 51030 90 107488 29 50666 10 38464 65 102931 9 268.2 0 \n\n------Sequential Create------ --------Random Create-------- \n-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- \nfiles /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ \nPriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16, \n+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\nthe difference in our results are interesting.\n\nWhat are the setting on the RAID card . I have the cache turned on with Read Ahead \n\n\n---- Message from mailto:[email protected] Craig James <[email protected]> at 03-12-2008 09:55:18 PM ------\n\nI just received a new server and thought benchmarks would be interesting. I think this looks pretty good, but maybe there are some suggestions about the configuration file. This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers. 
An external process also uses a LOT of CPU power along with each query.\n\nThanks,\nCraig\n\n\nConfiguration:\n Dell 2950\n 8 CPU (Intel 2GHz Xeon)\n 8 GB memory\n Dell Perc 6i with battery-backed cache\n RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n\nEverything (OS, WAL and databases) are on the one RAID array.\n\nDiffs from original configuration:\n\nmax_connections = 1000\nshared_buffers = 400MB\nwork_mem = 256MB\nmax_fsm_pages = 1000000\nmax_fsm_relations = 5000\nwal_buffers = 256kB\neffective_cache_size = 4GB\n\nBonnie output (slightly reformatted)\n\n------------------------------------------------------------------------------\n\nDelete files in random order...done.\nVersion 1.03\n ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n 16G 64205 99 234252 38 112924 26 65275 98 293852 24 940.3 1\n\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\nfiles /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 12203 95 +++++ +++ 19469 94 12297 95 +++++ +++ 15578 82\n\nhttp://www.xxx.com www.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n\n------------------------------------------------------------------------------\n\n$ pgbench -c 10 -t 10000 -v test -U test\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 2786.377933 (including connections establishing)\ntps = 2787.888209 (excluding connections establishing)\n\n\n\n-- \nSent via pgsql-performance mailing list (mailto:[email protected] )\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance \n\n\n\n\n\n\n\n\nI did not run into one install problem,  I read a thread about people having problems but the thread is over a year old now. I used the 7.1 gutsy amd64 server version I then installed gnome desktop because its not installed by default.  \"i'm a windows admin i have to have my gui\"then installed postgres 8.3 gutsy.  it took about 3 hours to get the server setup.---- Message from Will Weaver <[email protected]> at 03-13-2008 08:11:06 AM ------Justin,This may be a bit out of context, but did you run into any troubles getting your Perc6i RAID controller to work under Ubuntu 7.1? I've heard there were issues with that.Thanks,Will On Mar 13, 2008, at 3:11 AM, Justin Graf wrote:I recent just got a new server also from dell 2 weeks agowent with more memory slower CPU, and smaller harddrives have not run pgbench Dell PE 2950 III 2 Quad Core 1.866 Ghz 16 gigs of ram. 
8 hard drives 73Gig 10k RPM SAS 2 drives in Mirrored for OS, Binaries, and WAL 6 in a raid 10 Dual Gig Ethernet OS Ubuntu 7.10-----------------------------------------------Version 1.03                      ------Sequential Output------ --Sequential Input- --Random-                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size        K/sec %CP  K/sec   %CP    K/sec %CP   K/sec %CP K/sec    %CP /sec %CP PriData 70000M  51030   90   107488    29      50666 10     38464 65     102931     9    268.2 0 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ PriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16, +++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++the difference in our results are interesting.What are the setting on the RAID card .  I have the cache turned on with Read Ahead  ---- Message from Craig James <[email protected]> at 03-12-2008 09:55:18 PM ------I just received a new server and thought benchmarks would be interesting.  I think this looks pretty good, but maybe there are some suggestions about the configuration file.  This is a web app, a mix of read/write, where writes tend to be \"insert into ... (select ...)\" where the resulting insert is on the order of 100 to 10K rows of two integers.  An external process also uses a LOT of CPU power along with each query.Thanks,CraigConfiguration:  Dell 2950  8 CPU (Intel 2GHz Xeon)  8 GB memory  Dell Perc 6i with battery-backed cache  RAID 10 of 8x 146GB SAS 10K 2.5\" disksEverything (OS, WAL and databases) are on the one RAID array.Diffs from original configuration:max_connections = 1000shared_buffers = 400MBwork_mem = 256MBmax_fsm_pages = 1000000max_fsm_relations = 5000wal_buffers = 256kBeffective_cache_size = 4GBBonnie output (slightly reformatted)------------------------------------------------------------------------------Delete files in random order...done.Version  1.03         ------Sequential Output------       --Sequential Input-      --Random-      -Per Chr-   --Block--    -Rewrite-     -Per Chr-   --Block--    --Seeks-- Size K/sec %CP   K/sec  %CP   K/sec  %CP    K/sec %CP   K/sec  %CP    /sec %CP  16G 64205  99   234252  38   112924  26    65275  98   293852  24   940.3   1         ------Sequential Create------    --------Random Create--------      -Create--   --Read---   -Delete--   -Create--   --Read---   -Delete--files  /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP    /sec %CP   16 12203  95   +++++ +++   19469  94   12297  95   +++++ +++   15578  82www.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82------------------------------------------------------------------------------$ pgbench -c 10 -t 10000 -v test -U teststarting vacuum...end.starting vacuum accounts...end.transaction type: TPC-B (sort of)scaling factor: 1number of clients: 10number of transactions per client: 10000number of transactions actually processed: 100000/100000tps = 2786.377933 (including connections establishing)tps = 2787.888209 (excluding connections establishing)-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 13 Mar 2008 09:55:55 -0400", "msg_from": "\"Justin Graf\" <[email protected]>", 
"msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Justin Graf wrote:\n> I recent just got a new server also from dell 2 weeks ago\n> went with more memory slower CPU, and smaller harddrives\n> have not run pgbench\n> \n> Dell PE 2950 III\n> 2 Quad Core 1.866 Ghz\n> 16 gigs of ram.\n> 8 hard drives 73Gig 10k RPM SAS\n> 2 drives in Mirrored for OS, Binaries, and WAL\n> 6 in a raid 10\n> Dual Gig Ethernet\n> OS Ubuntu 7.10\n> -----------------------------------------------\n> \n> Version 1.03\n> ------Sequential Output------ --Sequential Input- \n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n> K/sec %CP /sec %CP\n> PriData 70000M 51030 90 107488 29 50666 10 38464 65 \n> 102931 9 268.2 0\n> \n> ------Sequential Create------ --------Random Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> PriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16,\n> +++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> \n> the difference in our results are interesting.\n> \n> What are the setting on the RAID card . I have the cache turned on with \n> Read Ahead \n\nFirst, did you get the Perc 6i with battery-backed cache? Not all versions have it. I found this really confusing when trying to order -- we had to work pretty hard to figure out exactly what to order to be sure we got this feature. (Does anyone at Dell follow these discussions?)\n\nSecond, Dell ships a Linux driver with this hardware, and we installed it. I have no idea what the driver does, because I think you can run the system without it, but my guess is that without the Dell driver, it's using the Perc6 card in some \"normal\" mode that doesn't take advantage of its capabilities.\n\nWith a 6-disk RAID 10, you should get numbers at least in the same ballpark as my numbers.\n\nCraig\n\n> \n> \n> ---- Message from Craig James <[email protected]> \n> <mailto:[email protected]> at 03-12-2008 09:55:18 PM ------\n> \n> I just received a new server and thought benchmarks would be\n> interesting. I think this looks pretty good, but maybe there are\n> some suggestions about the configuration file. This is a web app, a\n> mix of read/write, where writes tend to be \"insert into ... (select\n> ...)\" where the resulting insert is on the order of 100 to 10K rows\n> of two integers. 
An external process also uses a LOT of CPU power\n> along with each query.\n> \n> Thanks,\n> Craig\n> \n> \n> Configuration:\n> Dell 2950\n> 8 CPU (Intel 2GHz Xeon)\n> 8 GB memory\n> Dell Perc 6i with battery-backed cache\n> RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n> \n> Everything (OS, WAL and databases) are on the one RAID array.\n> \n> Diffs from original configuration:\n> \n> max_connections = 1000\n> shared_buffers = 400MB\n> work_mem = 256MB\n> max_fsm_pages = 1000000\n> max_fsm_relations = 5000\n> wal_buffers = 256kB\n> effective_cache_size = 4GB\n> \n> Bonnie output (slightly reformatted)\n> \n> ------------------------------------------------------------------------------\n> \n> Delete files in random order...done.\n> Version 1.03\n> ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> 16G 64205 99 234252 38 112924 26 65275 98 293852 24\n> 940.3 1\n> \n> ------Sequential Create------ --------Random Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 12203 95 +++++ +++ 19469 94 12297 95 +++++ +++\n> 15578 82\n> \n> www.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n> \n> ------------------------------------------------------------------------------\n> \n> $ pgbench -c 10 -t 10000 -v test -U test\n> starting vacuum...end.\n> starting vacuum accounts...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 10\n> number of transactions per client: 10000\n> number of transactions actually processed: 100000/100000\n> tps = 2786.377933 (including connections establishing)\n> tps = 2787.888209 (excluding connections establishing)\n", "msg_date": "Thu, 13 Mar 2008 07:29:23 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Doug Knight wrote:\n> All,\n> I am in the process of specing out a purchase for our production \n> systems, and am looking at the Dell 2950s as well. I am very interested \n> to see where this thread goes, and what combinations work with different \n> application loading types. Our systems will have one pair of \n> heartbeat-controlled, drbd mirrored servers running postgresql 8.3, with \n> a more write intensive, multiple writers and few readers application. \n> The other similarly configured pair will have lots of readers and few \n> writers. Our initial plan is RAID 10 for the database (four 300GB 15K \n> drives in an attached MD1000 box) and RAID 1 for the OS (pair of 73GB \n> drives internal to the 2950). PERC 6i for the internal drives (256MB \n> battery backed cache), PERC 6E for the external drives (512MB battery \n> backed cache). 8GB RAM, also dual Gig NICs for internet and \n> heartbeat/drbd. Not sure which processor we're going with, or if 8GB \n> memory will be enough. Keep the benchmarks coming.\n\nWe considered this configuration too. But in the end, we decided that by going with the 146 GB 2.5\" drives, we could get 8 disks in the main box, and save the cost of the MD1000, which almost doubles the price of the system. 
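(For anyone checking the math on usable space: RAID 10 across 8 drives leaves you the capacity of 4 of them, and a nominal \"146 GB\" drive works out to about 136 GiB, so roughly 4 x 136 GiB usable.) 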
We end up with a 546 GB RAID assembly, more than enough for our needs.\n\nI think that 8 10K disks in a RAID 10 will be faster than 4 15K disks, and you only gain a little space (two 300GB versus four 146GB). So it seemed like we'd be paying more and getting less. With the battery-backed Perc 6i RAID, the advice seemed to be that the OS, WAL and Database could all share the disk without conflict, and I think the numbers will back that up. We're not in production yet, so only time will tell.\n\nCraig\n\n> \n> Doug\n> \n> On Thu, 2008-03-13 at 04:11 -0400, Justin Graf wrote:\n>> I recent just got a new server also from dell 2 weeks ago\n>> went with more memory slower CPU, and smaller harddrives\n>> have not run pgbench\n>>\n>> Dell PE 2950 III\n>> 2 Quad Core 1.866 Ghz\n>> 16 gigs of ram.\n>> 8 hard drives 73Gig 10k RPM SAS\n>> 2 drives in Mirrored for OS, Binaries, and WAL\n>> 6 in a raid 10\n>> Dual Gig Ethernet\n>> OS Ubuntu 7.10\n>> -----------------------------------------------\n>>\n>> Version 1.03\n>> ------Sequential Output------ --Sequential Input- \n>> --Random-\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- \n>> --Block-- --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n>> K/sec %CP /sec %CP\n>> PriData 70000M 51030 90 107488 29 50666 10 38464 \n>> 65 102931 9 268.2 0\n>>\n>> ------Sequential Create------ --------Random Create--------\n>> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n>> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n>> PriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16,\n>> +++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n>>\n>> the difference in our results are interesting.\n>>\n>> What are the setting on the RAID card . I have the cache turned on \n>> with Read Ahead \n>>\n>>\n>> ---- Message from Craig James <[email protected]> \n>> <mailto:[email protected]> at 03-12-2008 09:55:18 PM ------\n>>\n>> I just received a new server and thought benchmarks would be\n>> interesting. I think this looks pretty good, but maybe there are\n>> some suggestions about the configuration file. This is a web app,\n>> a mix of read/write, where writes tend to be \"insert into ...\n>> (select ...)\" where the resulting insert is on the order of 100 to\n>> 10K rows of two integers. 
An external process also uses a LOT of\n>> CPU power along with each query.\n>>\n>> Thanks,\n>> Craig\n>>\n>>\n>> Configuration:\n>> Dell 2950\n>> 8 CPU (Intel 2GHz Xeon)\n>> 8 GB memory\n>> Dell Perc 6i with battery-backed cache\n>> RAID 10 of 8x 146GB SAS 10K 2.5\" disks\n>>\n>> Everything (OS, WAL and databases) are on the one RAID array.\n>>\n>> Diffs from original configuration:\n>>\n>> max_connections = 1000\n>> shared_buffers = 400MB\n>> work_mem = 256MB\n>> max_fsm_pages = 1000000\n>> max_fsm_relations = 5000\n>> wal_buffers = 256kB\n>> effective_cache_size = 4GB\n>>\n>> Bonnie output (slightly reformatted)\n>>\n>> ------------------------------------------------------------------------------\n>>\n>> Delete files in random order...done.\n>> Version 1.03\n>> ------Sequential Output------ --Sequential Input-\n>> --Random-\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>> --Seeks--\n>> Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n>> %CP /sec %CP\n>> 16G 64205 99 234252 38 112924 26 65275 98 293852\n>> 24 940.3 1\n>>\n>> ------Sequential Create------ --------Random\n>> Create--------\n>> -Create-- --Read--- -Delete-- -Create-- --Read---\n>> -Delete--\n>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n>> /sec %CP\n>> 16 12203 95 +++++ +++ 19469 94 12297 95 +++++ +++\n>> 15578 82\n>>\n>> www.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82\n>>\n>> ------------------------------------------------------------------------------\n>>\n>> $ pgbench -c 10 -t 10000 -v test -U test\n>> starting vacuum...end.\n>> starting vacuum accounts...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> number of clients: 10\n>> number of transactions per client: 10000\n>> number of transactions actually processed: 100000/100000\n>> tps = 2786.377933 (including connections establishing)\n>> tps = 2787.888209 (excluding connections establishing)\n>>\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list\n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n\n", "msg_date": "Thu, 13 Mar 2008 07:36:47 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Joshua D. Drake wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> On Wed, 12 Mar 2008 21:55:18 -0700\n> Craig James <[email protected]> wrote:\n> \n> \n>> Diffs from original configuration:\n>>\n>> max_connections = 1000\n>> shared_buffers = 400MB\n>> work_mem = 256MB\n>> max_fsm_pages = 1000000\n>> max_fsm_relations = 5000\n>> wal_buffers = 256kB\n>> effective_cache_size = 4GB\n> \n> I didn't see which OS but I assume linux. I didn't see postgresql so I\n> assume 8.3.\n\nRight on both counts.\n\n> wal_sync_method = open_sync\n> checkpoint_segments = 30\n> shared_buffers = 2000MB\n> asyncrhonous_commit = off (sp?)\n> \n> Try again.\n\nNice improvement! 
About 25% increase in TPS:\n\n$ pgbench -c 10 -t 10000 -v test -U test\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 3423.636423 (including connections establishing)\ntps = 3425.957521 (excluding connections establishing)\n\nFor reference, here are the results before your suggested changes:\n\n$ pgbench -c 10 -t 10000 -v test -U test\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 2786.377933 (including connections establishing)\ntps = 2787.888209 (excluding connections establishing)\n\nThanks!\nCraig\n", "msg_date": "Thu, 13 Mar 2008 07:54:34 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Thu, 13 Mar 2008, Craig James wrote:\n\n>> wal_sync_method = open_sync\n\nThere was a bug report I haven't had a chance to investigate yet that \nsuggested some recent Linux versions have issues when using open_sync. \nI'd suggest popping that back to the default for now unless you have time \nto really do a long certification process that your system runs reliably \nwith it turned on.\n\nI suspect most of the improvement you saw from Joshua's recommendations \nwas from raising checkpoint_segments.\n\n> $ pgbench -c 10 -t 10000 -v test -U test\n> scaling factor: 1\n> number of clients: 10\n\nA scaling factor of 1 means you are operating on a positively trivial 16MB \ndatabase. It also means there's exactly one entry in a table that every \nclient updates on every transactions. You have 10 clients, and they're \nall fighting over access to it.\n\nIf you actually want something that approaches useful numbers here, you \nneed to at run 'pgbench -i -s 10' to get a scaling factor of 10 and a \n160MB database. Interesting results on this class of hardware are when \nyou set scaling to 100 or more (100=1.6GB database). See \nhttp://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm for \nsome examples of how that works, from a less powerful system than yours.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 13 Mar 2008 12:01:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Thu, 13 Mar 2008 12:01:50 -0400 (EDT)\r\nGreg Smith <[email protected]> wrote:\r\n\r\n> On Thu, 13 Mar 2008, Craig James wrote:\r\n> \r\n> >> wal_sync_method = open_sync\r\n> \r\n> There was a bug report I haven't had a chance to investigate yet that \r\n> suggested some recent Linux versions have issues when using\r\n> open_sync. I'd suggest popping that back to the default for now\r\n> unless you have time to really do a long certification process that\r\n> your system runs reliably with it turned on.\r\n\r\nWell the default would be ugly, that's fsync, fdatasync is probably a\r\nbetter choice in that case.\r\n\r\nSincerely,\r\n\r\nJoshua D. 
Drake\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\n PostgreSQL political pundit | Mocker of Dolphins\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFH2ZdMATb/zqfZUUQRArZvAJ9Ja3Jnj2WD3eSYWoAv0ps5TVlPCQCglIEK\r\nCAelb/M/BR+RJXhhEh7Iecw=\r\n=pq2P\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Thu, 13 Mar 2008 14:06:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Absolutely on the battery backup.\n\nI did not load the linux drivers from dell, it works so i figured i was not \ngoing to worry about it. This server is so oversized for its load its \nunreal. I have always gone way overboard on server specs and making sure \nits redundant.\n\nThe difference in our bonnie++ numbers is interesting, In some cases my \nsetup blows by yours and in other your's destroys mine.\n\nOn the raid setup. I'm a windows guy so i setup like its windows machine \nkeeping the OS/logs way separate from the DATA.\n\nI chose to use ext3 on these partition\n\n\n---- Message from Craig James <[email protected]> at 03-13-2008 \n07:29:23 AM ------ \n\n First, did you get the Perc 6i with battery-backed cache? Not all \nversions have it. I found this really confusing when trying to order -- we \nhad to work pretty hard to figure out exactly what to order to be sure we \ngot this feature. (Does anyone at Dell follow these discussions?)\n\n Second, Dell ships a Linux driver with this hardware, and we installed it. \nI have no idea what the driver does, because I think you can run the system \nwithout it, but my guess is that without the Dell driver, it's using the \nPerc6 card in some \"normal\" mode that doesn't take advantage of its \ncapabilities.\n\n With a 6-disk RAID 10, you should get numbers at least in the same \nballpark as my numbers.\n\n Craig\n\n\n\n\n\n", "msg_date": "Thu, 13 Mar 2008 16:09:26 -0500", "msg_from": "\"justin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Thu, 13 Mar 2008, Joshua D. Drake wrote:\n\n> Greg Smith <[email protected]> wrote:\n>>>> wal_sync_method = open_sync\n>>\n>> There was a bug report I haven't had a chance to investigate yet that\n>> suggested some recent Linux versions have issues when using\n>> open_sync. I'd suggest popping that back to the default for now\n>> unless you have time to really do a long certification process that\n>> your system runs reliably with it turned on.\n>\n> Well the default would be ugly, that's fsync, fdatasync is probably a\n> better choice in that case.\n\nI haven't found fdatasync to be significantly better in my tests on Linux \nbut I never went out of my way to try and quantify it. My understanding \nis that some of the write barrier implementation details on ext3 \nfilesystems make any sync call a relatively heavy operation but I haven't \npoked at the code yet to figure out why.\n\nThere are really some substantial gains for WAL-heavy loads under Linux \njust waiting for someone to dig into this more. 
For example, I have a \nlittle plan sitting here to allow opening the WAL files with noatime even \nif the rest of the filesystem can't be mounted that way, which would \ncollapse one of the big reasons to use a separate WAL disk.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 13 Mar 2008 17:27:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\n----- Original Message ----- \nFrom: \"Greg Smith\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, March 13, 2008 4:27 PM\nSubject: Re: [PERFORM] Benchmark: Dell/Perc 6, 8 disk RAID 10\n\n\n> On Thu, 13 Mar 2008, Joshua D. Drake wrote:\n>\n>> Greg Smith <[email protected]> wrote:\n>>>>> wal_sync_method = open_sync\n>>>\n>>> There was a bug report I haven't had a chance to investigate yet that\n>>> suggested some recent Linux versions have issues when using\n>>> open_sync. I'd suggest popping that back to the default for now\n>>> unless you have time to really do a long certification process that\n>>> your system runs reliably with it turned on.\n>>\n>> Well the default would be ugly, that's fsync, fdatasync is probably a\n>> better choice in that case.\n>\n> I haven't found fdatasync to be significantly better in my tests on Linux \n> but I never went out of my way to try and quantify it. My understanding \n> is that some of the write barrier implementation details on ext3 \n> filesystems make any sync call a relatively heavy operation but I haven't \n> poked at the code yet to figure out why.\n>\n> There are really some substantial gains for WAL-heavy loads under Linux \n> just waiting for someone to dig into this more. For example, I have a \n> little plan sitting here to allow opening the WAL files with noatime even \n> if the rest of the filesystem can't be mounted that way, which would \n> collapse one of the big reasons to use a separate WAL disk.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI'm ran pgbench from my laptop to the new server\n\nMy laptop is dual core with 2 gigs of ram and 1 gig enthernet connection to \nserver. so i don't think the network is going to be a problem in the test.\n\nWhen i look at the server memory its only consuming 463 megs. 
I have the \neffective cache set at 12 gigs and sharebuffer at 100megs and work mem set \nto 50megs\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 1\nnumber of transactions per client: 10\nnumber of transactions actually processed: 10/10\ntps = 20.618557 (including connections establishing)\ntps = 20.618557 (excluding connections establishing)\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10\nnumber of transactions actually processed: 100/100\ntps = 18.231541 (including connections establishing)\ntps = 18.231541 (excluding connections establishing)\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 19.116073 (including connections establishing)\ntps = 19.116073 (excluding connections establishing)\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 40000/40000\ntps = 20.368217 (including connections establishing)\ntps = 20.368217 (excluding connections establishing) \n\n", "msg_date": "Thu, 13 Mar 2008 17:53:04 -0500", "msg_from": "\"justin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Thu, Mar 13, 2008 at 4:53 PM, justin <[email protected]> wrote:\n>\n> I'm ran pgbench from my laptop to the new server\n>\n> My laptop is dual core with 2 gigs of ram and 1 gig enthernet connection to\n> server. so i don't think the network is going to be a problem in the test.\n>\n> When i look at the server memory its only consuming 463 megs. I have the\n> effective cache set at 12 gigs and sharebuffer at 100megs and work mem set\n> to 50megs\n\nYou do know that effective_cache_size is the size of the OS level\ncache. i.e. it won't show up in postgresql's memory usage. On a\nmachine with (I assume) 12 or more gigs or memory, you should have\nyour shared_buffers set to a much higher number than 100Meg. (unless\nyou're still running 7.4 but that's another story.)\n\npgbench will never use 50 megs of work_mem, as it's transactional and\nhitting single rows at a time, not sorting huge lists of rows. 
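\n\nJust as a rough starting point for a 16 gig box (these particular numbers are guesses on my part, not something tuned against your actual workload), postgresql.conf settings along these lines would be more typical:\n\nshared_buffers = 1GB\neffective_cache_size = 12GB\nwork_mem = 16MB\nmaintenance_work_mem = 256MB\ncheckpoint_segments = 30\n\n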
Having\nPostgreSQL use up all the memory is NOT necessarily your best bet.\nLetting the OS cache your data is quite likely a good choice here, so\nI'd keep your shared_buffers in the 500M to 2G range.\n\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 1\n>\n> number of transactions per client: 10\n> number of transactions actually processed: 10/10\n> tps = 20.618557 (including connections establishing)\n> tps = 20.618557 (excluding connections establishing)\n>\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n>\n> number of clients: 10\n> number of transactions per client: 10\n> number of transactions actually processed: 100/100\n> tps = 18.231541 (including connections establishing)\n> tps = 18.231541 (excluding connections establishing)\n>\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n>\n> number of clients: 10\n> number of transactions per client: 100\n> number of transactions actually processed: 1000/1000\n> tps = 19.116073 (including connections establishing)\n> tps = 19.116073 (excluding connections establishing)\n>\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n>\n> number of clients: 40\n> number of transactions per client: 1000\n> number of transactions actually processed: 40000/40000\n> tps = 20.368217 (including connections establishing)\n> tps = 20.368217 (excluding connections establishing)\n\nThose numbers are abysmal. I had a P-III-750 5 years ago that ran\nwell into the hundreds on a large scaling factor (1000 or so) pgbench\ndb with 100 or more concurrent connections all the way down to 10\nthreads. I.e. it never dropped below 200 or so during the testing.\nthis was with a Perc3 series LSI controller with LSI firmware and the\nmegaraid 2.0.x driver, which I believe is the basis for the current\nLSI drivers today.\n\nA few points. 10 or 100 total transactions is far too few\ntransactions to really get a good number. 1000 is about the minimum\nto run to get a good average, and running 10000 or so is about the\nminimum I shoot for. So your later tests are likely to be less noisy.\n They're all way too slow for a modern server, and point ot\nnon-optimal hardware. An untuned pgsql database should be able to get\nto or over 100 tps. I had a sparc-20 that could do 80 or so.\n\nDo you know if you're I/O bound or CPU bound?\n", "msg_date": "Fri, 14 Mar 2008 00:09:15 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Thu, Mar 13, 2008 at 3:09 PM, justin <[email protected]> wrote:\n\n> I chose to use ext3 on these partition\n\nYou should really consider another file system. ext3 has two flaws\nthat mean I can't really use it properly. A 2TB file system size\nlimit (at least on the servers I've tested) and it locks the whole\nfile system while deleting large files, which can take several seconds\nand stop ANYTHING from happening during that time. This means that\ndropping or truncating large tables in the middle of the day could\nhalt your database for seconds at a time. 
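\n\nIt's easy enough to see for yourself if you're curious; something like this (path and size are made up, use whatever fits on the volume) shows the stall on ext3:\n\ndd if=/dev/zero of=/mnt/data/bigfile bs=1M count=10000\nsync\ntime rm /mnt/data/bigfile\n\nWhile that rm is running, other writes to the same filesystem tend to sit and wait for it.\n\n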
This one misfeature means\nthat ext2/3 are unsuitable for running under a database.\n", "msg_date": "Fri, 14 Mar 2008 00:12:27 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Scott Marlowe wrote:\n> On Thu, Mar 13, 2008 at 3:09 PM, justin <[email protected]> wrote:\n> \n>> I chose to use ext3 on these partition\n> \n> You should really consider another file system. ext3 has two flaws\n> that mean I can't really use it properly. A 2TB file system size\n> limit (at least on the servers I've tested) and it locks the whole\n> file system while deleting large files, which can take several seconds\n> and stop ANYTHING from happening during that time. This means that\n> dropping or truncating large tables in the middle of the day could\n> halt your database for seconds at a time. This one misfeature means\n> that ext2/3 are unsuitable for running under a database.\n\nI cannot acknowledge or deny the last one, but the first one is not \ntrue. I have several volumes in the 4TB+ range on ext3 performing nicely.\n\nI can test the \"large file stuff\", but how large? .. several GB is not a \nproblem here.\n\nJesper\n-- \nJesper\n\n", "msg_date": "Fri, 14 Mar 2008 07:17:10 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Fri, Mar 14, 2008 at 12:17 AM, Jesper Krogh <[email protected]> wrote:\n>\n> Scott Marlowe wrote:\n> > On Thu, Mar 13, 2008 at 3:09 PM, justin <[email protected]> wrote:\n> >\n> >> I chose to use ext3 on these partition\n> >\n> > You should really consider another file system. ext3 has two flaws\n> > that mean I can't really use it properly. A 2TB file system size\n> > limit (at least on the servers I've tested) and it locks the whole\n> > file system while deleting large files, which can take several seconds\n> > and stop ANYTHING from happening during that time. This means that\n> > dropping or truncating large tables in the middle of the day could\n> > halt your database for seconds at a time. This one misfeature means\n> > that ext2/3 are unsuitable for running under a database.\n>\n> I cannot acknowledge or deny the last one, but the first one is not\n> true. I have several volumes in the 4TB+ range on ext3 performing nicely.\n>\n> I can test the \"large file stuff\", but how large? .. several GB is not a\n> problem here.\n\nIs this on a 64 bit or 32 bit machine? We had the problem with a 32\nbit linux box (not sure what flavor) just a few months ago. I would\nnot create a filesystem on a partition of 2+TB\n", "msg_date": "Fri, 14 Mar 2008 00:19:40 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Fri, Mar 14, 2008 at 12:19 AM, Scott Marlowe <[email protected]> wrote:\n>\n> On Fri, Mar 14, 2008 at 12:17 AM, Jesper Krogh <[email protected]> wrote:\n> >\n> > Scott Marlowe wrote:\n> > > On Thu, Mar 13, 2008 at 3:09 PM, justin <[email protected]> wrote:\n> > >\n> > >> I chose to use ext3 on these partition\n> > >\n> > > You should really consider another file system. ext3 has two flaws\n> > > that mean I can't really use it properly. 
A 2TB file system size\n> > > limit (at least on the servers I've tested) and it locks the whole\n> > > file system while deleting large files, which can take several seconds\n> > > and stop ANYTHING from happening during that time. This means that\n> > > dropping or truncating large tables in the middle of the day could\n> > > halt your database for seconds at a time. This one misfeature means\n> > > that ext2/3 are unsuitable for running under a database.\n> >\n> > I cannot acknowledge or deny the last one, but the first one is not\n> > true. I have several volumes in the 4TB+ range on ext3 performing nicely.\n> >\n> > I can test the \"large file stuff\", but how large? .. several GB is not a\n> > problem here.\n>\n> Is this on a 64 bit or 32 bit machine? We had the problem with a 32\n> bit linux box (not sure what flavor) just a few months ago. I would\n> not create a filesystem on a partition of 2+TB\n>\n\nOK, according to this it's 16TiB:\nhttp://en.wikipedia.org/wiki/Ext2\n\nso I'm not sure what problem we were having. It was a friend setting\nup the RAID and I'd already told him to use xfs but he really wanted\nto use ext3 because he was more familiar with it.\n", "msg_date": "Fri, 14 Mar 2008 00:23:28 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Scott Marlowe wrote:\n> On Fri, Mar 14, 2008 at 12:17 AM, Jesper Krogh <[email protected]> wrote:\n>> Scott Marlowe wrote:\n>> > On Thu, Mar 13, 2008 at 3:09 PM, justin <[email protected]> wrote:\n>> >\n>> >> I chose to use ext3 on these partition\n>> >\n>> > You should really consider another file system. ext3 has two flaws\n>> > that mean I can't really use it properly. A 2TB file system size\n>> > limit (at least on the servers I've tested) and it locks the whole\n>> > file system while deleting large files, which can take several seconds\n>> > and stop ANYTHING from happening during that time. This means that\n>> > dropping or truncating large tables in the middle of the day could\n>> > halt your database for seconds at a time. This one misfeature means\n>> > that ext2/3 are unsuitable for running under a database.\n>>\n>> I cannot acknowledge or deny the last one, but the first one is not\n>> true. I have several volumes in the 4TB+ range on ext3 performing nicely.\n>>\n>> I can test the \"large file stuff\", but how large? .. several GB is not a\n>> problem here.\n> \n> Is this on a 64 bit or 32 bit machine? We had the problem with a 32\n> bit linux box (not sure what flavor) just a few months ago. I would\n> not create a filesystem on a partition of 2+TB\n\nIt is on a 64 bit machine.. but ext3 doesnt have anything specifik in it \nas far as I know.. I have mountet filesystems created on 32 bit on 64 \nbit and the other way around. 
The filesystems are around years old.\n\nhttp://en.wikipedia.org/wiki/Ext3 => Limit seems to be 16TB currently \n(It might get down to something lower if you choose a small blocksize).\n\n-- \nJesper\n", "msg_date": "Fri, 14 Mar 2008 07:29:31 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Fri, 14 Mar 2008, Justin wrote:\n\n> I played with shared_buffer and never saw much of an improvement from\n> 100 all the way up to 800 megs moved the checkpoints from 3 to 30 and\n> still never saw no movement in the numbers.\n\nIncreasing shared_buffers normally improves performance as the size of the \ndatabase goes up, but since the pgbench workload is so simple the \noperating system will cache it pretty well even if you don't give the \nmemory directly to PostgreSQL. Also, on Windows large settings for \nshared_buffers don't work very well, you might as well keep it in the \n100MB range.\n\n> wal_sync_method=fsync\n\nYou might get a decent boost in resuls that write data (not the SELECT \nones) by changing\n\nwal_sync_method = open_datasync\n\nwhich is the default on Windows. The way you've got your RAID controller \nsetup, this is no more or less safe than using fsync.\n\n> i agree with you, those numbers are terrible i realized after posting i \n> had the option -C turned on if i read the option -C correctly it is \n> disconnecting and reconnecting between transactions. The way read -C \n> option creates the worst case.\n\nIn addition to being an odd testing mode, there's an outstanding bug in \nhow -C results are computed that someone submitted a fix for, but it \nhasn't been applied yet. I would suggest forgetting you ever ran that \ntest.\n\n> number of clients: 10\n> number of transactions per client: 10000\n> number of transactions actually processed: 100000/100000\n> tps = 1768.940935 (including connections establishing)\n\n> number of clients: 40\n> number of transactions per client: 10000\n> number of transactions actually processed: 400000/400000\n> tps = 567.149831 (including connections establishing)\n> tps = 568.648692 (excluding connections establishing)\n\nNote how the total number of transactions goes up here, because it's \nactually doing clients x requested transcations in total. The 40 client \ncase is actually doing 4X as many total operations. That also means you \ncan expect 4X as many checkpoints during that run. It's a longer run like \nthis second one that you might see some impact by increasing \ncheckpoint_segments.\n\nTo keep comparisons like this more fair, I like to keep the total \ntransactions constant and just divide that number by the number of clients \nto figure out what to set the -t parameter to. 400000 is a good medium \nlength test, so for that case you'd get\n\n-c 10 -t 40000\n-c 40 -t 10000\n\nas the two to compare.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 14 Mar 2008 04:19:25 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\n> Is this on a 64 bit or 32 bit machine? We had the problem with a 32\n> bit linux box (not sure what flavor) just a few months ago. I would\n> not create a filesystem on a partition of 2+TB\n> \nYes this machine is 64bit\n> You do know that effective_cache_size is the size of the OS level\n> cache. i.e. it won't show up in postgresql's memory usage. 
On a\n> machine with (I assume) 12 or more gigs or memory, you should have\n> your shared_buffers set to a much higher number than 100Meg. (unless\n> you're still running 7.4 but that's another story.)\n\n\nSorry for my ignorance of linux, i'm used to windows task manager or \nperformance monitor showing all the\nmemory usage. I\ndecided to move to Linux on the new server to get 64bit so still in the \nlearning curve with that\n\nI played with shared_buffer and never saw much of an improvement from\n100 all the way up to 800 megs moved the checkpoints from 3 to 30 and\nstill never saw no movement in the numbers.\n\ni agree with you, those numbers are terrible i realized after posting i\nhad the option -C turned on\nif i read the option -C correctly it is disconnecting and reconnecting\nbetween transactions. The way read -C option creates the worst case.\n\nThe raid controller setting is set to make sure it don't lie on fsync\n\nshared_buffers = 800megs\ntemp_buffers 204800\nwork_mem 256MB\nfsync_on\nwal_syns_method fysnc\n\n\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 10000 -v -h\n192.168.1.9 -U postgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 1768.940935 (including connections establishing)\ntps = 1783.230500 (excluding connections establishing)\n\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h\n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 567.149831 (including connections establishing)\ntps = 568.648692 (excluding connections establishing)\n\n--------------now with just Select --------------\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -S -c 10 -t 10000 -h\n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 16160.310278 (including connections establishing)\ntps = 17436.791630 (excluding connections establishing)\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -S -c 40 -t 10000 -h\n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 18338.529250 (including connections establishing)\ntps = 20031.048125 (excluding connections establishing)\n\n\n\n\n", "msg_date": "Fri, 14 Mar 2008 03:31:29 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Greg Smith wrote:\n> On Fri, 14 Mar 2008, Justin wrote:\n>\n>> I played with shared_buffer and never saw much of an improvement from\n>> 100 all the way up to 800 megs moved the checkpoints from 3 to 30 and\n>> still never saw no movement in the numbers.\n>\n> Increasing shared_buffers normally improves performance as the size of \n> the database goes up, but since the pgbench workload is so simple the \n> operating system will cache it pretty well even if you don't 
give the \n> memory directly to PostgreSQL. Also, on Windows large settings for \n> shared_buffers don't work very well, you might as well keep it in the \n> 100MB range.\n>\n>> wal_sync_method=fsync\n>\n> You might get a decent boost in resuls that write data (not the SELECT \n> ones) by changing\n>\n> wal_sync_method = open_datasync\n>\n> which is the default on Windows. The way you've got your RAID \n> controller setup, this is no more or less safe than using fsync.\nI moved the window server back to fsync a long time ago. Around here we \nare super paranoid about making sure the data does not become corrupt, \nperformance is secondary. The new server along with the old server is \nway over built for the load it will ever see. I will be making the old \nserver a slony replicator located in the manufacturing building.\n\nAlso **note* *tried setting the value open_datasync and get invalid \nparameter. instead i use open_sync\n>\n>> i agree with you, those numbers are terrible i realized after posting \n>> i had the option -C turned on if i read the option -C correctly it is \n>> disconnecting and reconnecting between transactions. The way read -C \n>> option creates the worst case.\n>\n> In addition to being an odd testing mode, there's an outstanding bug \n> in how -C results are computed that someone submitted a fix for, but \n> it hasn't been applied yet. I would suggest forgetting you ever ran \n> that test.\nWhy is the -C option odd?\n\n>\n> Note how the total number of transactions goes up here, because it's \n> actually doing clients x requested transcations in total. The 40 \n> client case is actually doing 4X as many total operations. That also \n> means you can expect 4X as many checkpoints during that run. It's a \n> longer run like this second one that you might see some impact by \n> increasing checkpoint_segments.\n>\n> To keep comparisons like this more fair, I like to keep the total \n> transactions constant and just divide that number by the number of \n> clients to figure out what to set the -t parameter to. 
400000 is a \n> good medium length test, so for that case you'd get\n>\n> -c 10 -t 40000\n> -c 40 -t 10000\n>\n> as the two to compare.\n>\n---- retested with fsync turned on -----\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h \n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 767.040279 (including connections establishing)\ntps = 767.707166 (excluding connections establishing)\n\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h \n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 648.988227 (including connections establishing)\ntps = 650.935720 (excluding connections establishing)\n\n\n-------open_sync------------\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h \n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 798.030461 (including connections establishing)\ntps = 798.752349 (excluding connections establishing)\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h \n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 613.879195 (including connections establishing)\ntps = 615.592023 (excluding connections establishing)\n\n\n\n\n\n\n\n\nGreg Smith wrote:\nOn Fri, 14 Mar 2008, Justin wrote:\n \n\nI played with shared_buffer and never saw\nmuch of an improvement from\n \n100 all the way up to 800 megs  moved the checkpoints from 3 to 30 and\n \nstill never saw no movement in the numbers.\n \n\n\nIncreasing shared_buffers normally improves performance as the size of\nthe database goes up, but since the pgbench workload is so simple the\noperating system will cache it pretty well even if you don't give the\nmemory directly to PostgreSQL.  Also, on Windows large settings for\nshared_buffers don't work very well, you might as well keep it in the\n100MB range.\n \n\nwal_sync_method=fsync\n \n\n\nYou might get a decent boost in resuls that write data (not the SELECT\nones) by changing\n \n\nwal_sync_method = open_datasync\n \n\nwhich is the default on Windows.  The way you've got your RAID\ncontroller setup, this is no more or less safe than using fsync.\n \n\nI moved the window server back to fsync a long time ago.  Around here\nwe are super paranoid about making sure the data does not become\ncorrupt, performance is secondary.  The new server along with the old\nserver is way over built for the load it will ever see.  I will be\nmaking the old server a slony replicator located in the manufacturing\nbuilding. \n\nAlso *note* tried setting the value open_datasync and get\ninvalid parameter.  
instead i use open_sync\n\ni agree with you, those numbers are terrible\ni realized after posting i had the option -C turned on if i read the\noption -C correctly it is disconnecting and reconnecting between\ntransactions. The way read -C option creates the worst case.\n \n\n\nIn addition to being an odd testing mode, there's an outstanding bug in\nhow -C results are computed that someone submitted a fix for, but it\nhasn't been applied yet.  I would suggest forgetting you ever ran that\ntest.\n \n\nWhy is the -C option odd?\n\n\nNote how the total number of transactions goes up here, because it's\nactually doing clients x requested transcations in total.  The 40\nclient case is actually doing 4X as many total operations.  That also\nmeans you can expect 4X as many checkpoints during that run.  It's a\nlonger run like this second one that you might see some impact by\nincreasing checkpoint_segments.\n \n\nTo keep comparisons like this more fair, I like to keep the total\ntransactions constant and just divide that number by the number of\nclients to figure out what to set the -t parameter to.  400000 is a\ngood medium length test, so for that case you'd get\n \n\n-c 10 -t 40000\n \n-c 40 -t 10000\n \n\nas the two to compare.\n \n\n\n---- retested with fsync turned on -----\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h\n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 767.040279 (including connections establishing)\ntps = 767.707166 (excluding connections establishing)\n\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h\n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 648.988227 (including connections establishing)\ntps = 650.935720 (excluding connections establishing)\n\n\n-------open_sync------------\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h\n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 798.030461 (including connections establishing)\ntps = 798.752349 (excluding connections establishing)\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h\n192.168.1.9 -U\npostgres empro\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 613.879195 (including connections establishing)\ntps = 615.592023 (excluding connections establishing)", "msg_date": "Fri, 14 Mar 2008 05:49:54 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Thu, Mar 13, 2008 at 05:27:09PM -0400, Greg Smith wrote:\n>I haven't found fdatasync to be significantly better in my tests on Linux \n>but I never went out of my way to try and 
quantify it. My understanding \n>is that some of the write barrier implementation details on ext3 \n>filesystems make any sync call a relatively heavy operation but I haven't \n>poked at the code yet to figure out why.\n\nWhich is why having the checkpoints on a seperate ext2 partition tends \nto be a nice win. (Even if its not on a seperate disk.)\n\nMike Stone\n", "msg_date": "Fri, 14 Mar 2008 11:16:02 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\nI decided to reformat the raid 10 into ext2 to see if there was any real \nbig difference in performance as some people have noted here is the \ntest results\n\nplease note the WAL files are still on the raid 0 set which is still in \next3 file system format. these test where run with the fsync as \nbefore. I made sure every thing was the same as with the first test.\n\nAs you can see there is a 3 to 3.5 times increase in performance \nnumbers just changing the file system\n\nWith -S option set there is not change in performance numbers\n\n-------First Run 10 clients------\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 2108.036891 (including connections establishing)\ntps = 2112.902970 (excluding connections establishing)\n\n-----Second Run 10 clients -----\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 2316.114949 (including connections establishing)\ntps = 2321.990410 (excluding connections establishing)\n\n\n-----First Run 40 clients --------\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 2675.585284 (including connections establishing)\ntps = 2706.707899 (excluding connections establishing)\n\n---Second Run ----\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 2600.560421 (including connections establishing)\ntps = 2629.952529 (excluding connections establishing)\n\n---- Select Only Option ------\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -S -c 10 -t 40000 -v -h \n192.168.1.9\n-U postgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: SELECT only\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 18181.818182 (including connections establishing)\ntps = 18550.294486 
(excluding connections establishing)\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -S -c 40 -t 10000 -v -h \n192.168.1.9\n-U postgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: SELECT only\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 18991.548761 (including connections establishing)\ntps = 20729.684909 (excluding connections establishing)\n\n\n\n", "msg_date": "Sun, 16 Mar 2008 01:19:37 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\nOn 16-Mar-08, at 2:19 AM, Justin wrote:\n\n>\n> I decided to reformat the raid 10 into ext2 to see if there was any \n> real big difference in performance as some people have noted here \n> is the test results\n>\n> please note the WAL files are still on the raid 0 set which is still \n> in ext3 file system format. these test where run with the fsync as \n> before. I made sure every thing was the same as with the first test.\n>\nThis is opposite to the way I run things. I use ext2 on the WAL and \next3 on the data. I'd also suggest RAID 10 on the WAL it is mostly \nwrite.\n\nDave\n\n", "msg_date": "Sun, 16 Mar 2008 07:25:41 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Dave Cramer wrote:\n> \n> On 16-Mar-08, at 2:19 AM, Justin wrote:\n> \n>>\n>> I decided to reformat the raid 10 into ext2 to see if there was any \n>> real big difference in performance as some people have noted here is \n>> the test results\n>>\n>> please note the WAL files are still on the raid 0 set which is still \n>> in ext3 file system format. these test where run with the fsync as \n>> before. I made sure every thing was the same as with the first test.\n>>\n> This is opposite to the way I run things. I use ext2 on the WAL and ext3 \n> on the data. I'd also suggest RAID 10 on the WAL it is mostly write.\n\nJust out of curiosity: Last time I did research, the word seemed to be that xfs was better than ext2 or ext3. Is that not true? Why use ext2/3 at all if xfs is faster for Postgres?\n\nCriag\n", "msg_date": "Sun, 16 Mar 2008 12:04:44 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Craig James wrote:\n> Dave Cramer wrote:\n>>\n>> On 16-Mar-08, at 2:19 AM, Justin wrote:\n>>\n>>>\n>>> I decided to reformat the raid 10 into ext2 to see if there was any \n>>> real big difference in performance as some people have noted here \n>>> is the test results\n>>>\n>>> please note the WAL files are still on the raid 0 set which is still \n>>> in ext3 file system format. these test where run with the fsync as \n>>> before. I made sure every thing was the same as with the first test.\n>>>\n>> This is opposite to the way I run things. I use ext2 on the WAL and \n>> ext3 on the data. I'd also suggest RAID 10 on the WAL it is mostly write.\n> \n> Just out of curiosity: Last time I did research, the word seemed to be \n> that xfs was better than ext2 or ext3. Is that not true? 
Why use \n> ext2/3 at all if xfs is faster for Postgres?\n> \n> Criag\n\nAnd let's see if I can write my own name ...\n\nCraig\n", "msg_date": "Sun, 16 Mar 2008 12:08:06 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\nOn 16-Mar-08, at 3:04 PM, Craig James wrote:\n\n> Dave Cramer wrote:\n>> On 16-Mar-08, at 2:19 AM, Justin wrote:\n>>>\n>>> I decided to reformat the raid 10 into ext2 to see if there was \n>>> any real big difference in performance as some people have noted \n>>> here is the test results\n>>>\n>>> please note the WAL files are still on the raid 0 set which is \n>>> still in ext3 file system format. these test where run with the \n>>> fsync as before. I made sure every thing was the same as with \n>>> the first test.\n>>>\n>> This is opposite to the way I run things. I use ext2 on the WAL and \n>> ext3 on the data. I'd also suggest RAID 10 on the WAL it is mostly \n>> write.\n>\n> Just out of curiosity: Last time I did research, the word seemed to \n> be that xfs was better than ext2 or ext3. Is that not true? Why \n> use ext2/3 at all if xfs is faster for Postgres?\n>\nI would like to see the evidence of this. I doubt that it would be \nfaster than ext2. There is no journaling on ext2.\n\nDave\n", "msg_date": "Sun, 16 Mar 2008 15:36:33 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Sun, Mar 16, 2008 at 1:36 PM, Dave Cramer <[email protected]> wrote:\n>\n> On 16-Mar-08, at 3:04 PM, Craig James wrote:\n> > Just out of curiosity: Last time I did research, the word seemed to\n> > be that xfs was better than ext2 or ext3. Is that not true? Why\n> > use ext2/3 at all if xfs is faster for Postgres?\n> >\n> I would like to see the evidence of this. I doubt that it would be\n> faster than ext2. There is no journaling on ext2.\n\nWell, if you're dropping a large table ext2/3 has that very long wait\nthing that can happen. Don't know how much battery backed cache would\nhelp.\n", "msg_date": "Sun, 16 Mar 2008 14:56:26 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Justin wrote:\n> OK i'm showing my ignorance of linux. On Ubuntu i can't seem to figure\n> out if XFS file system is installed, if not installed getting it\n> installed.\n\nThere are two parts to the file system, really. One is the kernel driver\nfor the file system. This is almost certainly available, as it will ship\nwith the kernel. It might be a module that is loaded on demand or it\nmight be compiled into the kernel its self.\n\nOn my Debian Etch system it's a module, xfs.ko, that can be loaded\nmanually with:\n\nmodprobe xfs\n\n... however, you should not need to do that, as it'll be autoloaded when\nyou try to mount an xfs volume.\n\nThe other part to the file system is the userspace tools for creating,\nchecking, resizing, etc the file system. An `apt-cache search xfs' shows\nthat these tools have the package name xfsprogs, at least on Debian.\n\nYou can install them with \"apt-get install xfsprogs\". 
If they're already\ninstalled no action will be taken.\n\nWhen xfsprogs is installed you can use mkfs.xfs (see: man mkfs.xfs) to\nformat a block device (say, a partition like /dev/sda1 or an LVM logical\nvolume like /dev/SOMELVMVG/somelvmlv) with the xfs file system.\n\nOnce the file system is formatted you can mount it manually with the\nmount command, eg:\n\nmkdir /mnt/tmp\nmount -t xfs /dev/sda1 /mnt/tmp\n\n... or have it mounted on boot using an fstab entry like:\n\n/dev/sda1 /path/to/desired/mountpoint xfs defaults 0 0\n\n--\nCraig Ringer\n", "msg_date": "Mon, 17 Mar 2008 16:14:03 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "OK i'm showing my ignorance of linux. On Ubuntu i can't seem to figure \nout if XFS file system is installed, if not installed getting it \ninstalled.\n\nI would like to see the difference between XFS and ext2 performance \nnumbers. \n\nany pointers would be nice. I 'm not going to reinstall the OS. Nor do \ni want to install some unstable library into the kernel. \n\nDave Cramer wrote:\n>\n> On 16-Mar-08, at 3:04 PM, Craig James wrote:\n>\n>> Dave Cramer wrote:\n>>> On 16-Mar-08, at 2:19 AM, Justin wrote:\n>>>>\n>>>> I decided to reformat the raid 10 into ext2 to see if there was any \n>>>> real big difference in performance as some people have noted here \n>>>> is the test results\n>>>>\n>>>> please note the WAL files are still on the raid 0 set which is \n>>>> still in ext3 file system format. these test where run with the \n>>>> fsync as before. I made sure every thing was the same as with the \n>>>> first test.\n>>>>\n>>> This is opposite to the way I run things. I use ext2 on the WAL and \n>>> ext3 on the data. I'd also suggest RAID 10 on the WAL it is mostly \n>>> write.\n>>\n>> Just out of curiosity: Last time I did research, the word seemed to \n>> be that xfs was better than ext2 or ext3. Is that not true? Why use \n>> ext2/3 at all if xfs is faster for Postgres?\n>>\n> I would like to see the evidence of this. I doubt that it would be \n> faster than ext2. There is no journaling on ext2.\n>\n> Dave\n>\n", "msg_date": "Mon, 17 Mar 2008 02:36:58 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On 17/03/2008, Justin <[email protected]> wrote:\n> OK i'm showing my ignorance of linux. On Ubuntu i can't seem to figure\n> out if XFS file system is installed, if not installed getting it\n> installed.\n...\n> any pointers would be nice. I 'm not going to reinstall the OS. Nor do\n> i want to install some unstable library into the kernel.\nIt's there. All you need to do is (I hope ;}) back-up the\npartition with the database files on it, reformat using mkfs.xfs\n(man mkfs.xfs for details), modify /etc/fstab to say xfs where\nit says ext2 for the database partition, restore the data and\nuse it...\n\n\nCheers,\nAndrej\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Mon, 17 Mar 2008 21:02:18 +1300", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Justin wrote:\n> OK i'm showing my ignorance of linux. On Ubuntu i can't seem to figure \n> out if XFS file system is installed, if not installed getting it \n> installed.\n\nHm? 
Installed/not installed? You can select that when you are preparing\nyour partitions. If you run the automated partitioner there is of course\nnot much choice but you can try the manual mode. Even after that you\ncan format individual partitions with XFS if you want. XFS is long since\nincluded in the recent linux kernels, also there is raiserfs if you feel\ndesperate (well in fact raiser fs is ok too but you should not use it\non flaky hardware). Both xfs and raiser are designed for journaling -\nit is believed that xfs performs better with large files and raiser\ngood with many small files (like Maildir for example).\n\nI'd suggest a test with your data and workload to be sure.\n\nRegards\nTino\n", "msg_date": "Mon, 17 Mar 2008 10:11:26 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Well every thing worked right up to the point where i tried to mount the \nfile system\n\nWarning: xfs_db: /dev/sdb1 contains a mounted file system\n\nfatal error -- couldn't initialize XFS library.\n\nthink i'm missing something???\n\nCraig Ringer wrote:\n> Justin wrote:\n> \n>> OK i'm showing my ignorance of linux. On Ubuntu i can't seem to figure\n>> out if XFS file system is installed, if not installed getting it\n>> installed.\n>> \n>\n> There are two parts to the file system, really. One is the kernel driver\n> for the file system. This is almost certainly available, as it will ship\n> with the kernel. It might be a module that is loaded on demand or it\n> might be compiled into the kernel its self.\n>\n> On my Debian Etch system it's a module, xfs.ko, that can be loaded\n> manually with:\n>\n> modprobe xfs\n>\n> ... however, you should not need to do that, as it'll be autoloaded when\n> you try to mount an xfs volume.\n>\n> The other part to the file system is the userspace tools for creating,\n> checking, resizing, etc the file system. An `apt-cache search xfs' shows\n> that these tools have the package name xfsprogs, at least on Debian.\n>\n> You can install them with \"apt-get install xfsprogs\". If they're already\n> installed no action will be taken.\n>\n> When xfsprogs is installed you can use mkfs.xfs (see: man mkfs.xfs) to\n> format a block device (say, a partition like /dev/sda1 or an LVM logical\n> volume like /dev/SOMELVMVG/somelvmlv) with the xfs file system.\n>\n> Once the file system is formatted you can mount it manually with the\n> mount command, eg:\n>\n> mkdir /mnt/tmp\n> mount -t xfs /dev/sda1 /mnt/tmp\n>\n> ... or have it mounted on boot using an fstab entry like:\n>\n> /dev/sda1 /path/to/desired/mountpoint xfs defaults 0 0\n>\n> --\n> Craig Ringer\n>\n> \n\n\n\n\n\n\nWell every thing worked right up to the point where i tried to mount\nthe file system \n\nWarning:  xfs_db: /dev/sdb1 contains a mounted file system\n\nfatal error -- couldn't initialize XFS library.\n\nthink i'm missing something???\n\nCraig Ringer wrote:\n\nJustin wrote:\n \n\nOK i'm showing my ignorance of linux. On Ubuntu i can't seem to figure\nout if XFS file system is installed, if not installed getting it\ninstalled.\n \n\n\nThere are two parts to the file system, really. One is the kernel driver\nfor the file system. This is almost certainly available, as it will ship\nwith the kernel. It might be a module that is loaded on demand or it\nmight be compiled into the kernel its self.\n\nOn my Debian Etch system it's a module, xfs.ko, that can be loaded\nmanually with:\n\nmodprobe xfs\n\n... 
however, you should not need to do that, as it'll be autoloaded when\nyou try to mount an xfs volume.\n\nThe other part to the file system is the userspace tools for creating,\nchecking, resizing, etc the file system. An `apt-cache search xfs' shows\nthat these tools have the package name xfsprogs, at least on Debian.\n\nYou can install them with \"apt-get install xfsprogs\". If they're already\ninstalled no action will be taken.\n\nWhen xfsprogs is installed you can use mkfs.xfs (see: man mkfs.xfs) to\nformat a block device (say, a partition like /dev/sda1 or an LVM logical\nvolume like /dev/SOMELVMVG/somelvmlv) with the xfs file system.\n\nOnce the file system is formatted you can mount it manually with the\nmount command, eg:\n\nmkdir /mnt/tmp\nmount -t xfs /dev/sda1 /mnt/tmp\n\n... or have it mounted on boot using an fstab entry like:\n\n/dev/sda1 /path/to/desired/mountpoint xfs defaults 0 0\n\n--\nCraig Ringer", "msg_date": "Mon, 17 Mar 2008 11:31:15 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\nOn 17-Mar-08, at 2:50 PM, Justin wrote:\n\n>\n>>\n>> Just out of curiosity: Last time I did research, the word seemed to \n>> be that xfs was better than ext2 or ext3. Is that not true? Why \n>> use ext2/3 at all if xfs is faster for Postgres?\n>>\n>> Criag\n>\n> Ext2 vs XFS on my setup there is difference in the performance \n> between the two file systems but its not OMG let switch. XFS did \n> better then Ext2 only one time, then Ext2 won out by small margin at \n> best was 6%. the other test ran at 3 to 4% better than XFS \n> performance.\n>\n> XFS has journaling so it should be safer. I think i may stick with \n> XFS as it has journaling\n>\n> One thing i think is clear don't use ext3 it just kills performance \n> by factors not small percents\n>\n> here is article i found on XFS http://linux-xfs.sgi.com/projects/xfs/papers/xfs_white/xfs_white_paper.html\n>\n> I hope this is helpful to people. I know the process has taught me \n> new things, and thanks to those that helped me out.\n>\n> Before i throw this sever into production any one else want \n> performance numbers.\n>\n> C:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h \n> 192.168.1.9 -U\n> postgres play\n> Password:\n> starting vacuum...end.\n> starting vacuum accounts...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 10\n> number of transactions per client: 40000\n> number of transactions actually processed: 400000/400000\n> tps = 2181.512770 (including connections establishing)\n> tps = 2187.107004 (excluding connections establishing)\n>\n\n\n2000 tps ??? do you have fsync turned off ?\n\nDave\n\n", "msg_date": "Mon, 17 Mar 2008 14:12:03 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Justin wrote:\n>> 2000 tps ??? do you have fsync turned off ?\n>>\n>> Dave\n>>\n> \n> No its turned on.\n\nUnless I'm seriously confused, something is wrong with these numbers. That's the sort of performance you expect from a good-sized RAID 10 six-disk array. With a single 7200 rpm SATA disk and XFS, I get 640 tps. 
There's no way you could 2000 tps from a single disk.\n\nCraig\n\n", "msg_date": "Mon, 17 Mar 2008 11:33:28 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\n>\n> Just out of curiosity: Last time I did research, the word seemed to be \n> that xfs was better than ext2 or ext3. Is that not true? Why use \n> ext2/3 at all if xfs is faster for Postgres?\n>\n> Criag\n\nExt2 vs XFS on my setup there is difference in the performance between \nthe two file systems but its not OMG let switch. XFS did better then \nExt2 only one time, then Ext2 won out by small margin at best was 6%. \nthe other test ran at 3 to 4% better than XFS performance.\n\n XFS has journaling so it should be safer. I think i may stick with XFS \nas it has journaling\n\nOne thing i think is clear don't use ext3 it just kills performance by \nfactors not small percents\n\nhere is article i found on XFS \nhttp://linux-xfs.sgi.com/projects/xfs/papers/xfs_white/xfs_white_paper.html\n\nI hope this is helpful to people. I know the process has taught me new \nthings, and thanks to those that helped me out.\n\nBefore i throw this sever into production any one else want performance \nnumbers.\n\n C:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 2181.512770 (including connections establishing)\ntps = 2187.107004 (excluding connections establishing)\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 10 -t 40000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 40000\nnumber of transactions actually processed: 400000/400000\ntps = 2248.365719 (including connections establishing)\ntps = 2254.308547 (excluding connections establishing)\n\n-----------Clients log increased to 40------------\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 2518.447629 (including connections establishing)\ntps = 2548.014141 (excluding connections establishing)\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>pgbench -c 40 -t 10000 -v -h \n192.168.1.9 -U\npostgres play\nPassword:\nstarting vacuum...end.\nstarting vacuum accounts...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 40\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 400000/400000\ntps = 2606.933139 (including connections establishing)\ntps = 2638.626859 (excluding connections establishing)\n\n", "msg_date": "Mon, 17 Mar 2008 13:50:36 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\n\n\n>>\n>>\n>\n>\n> 2000 tps ??? 
do you have fsync turned off ?\n>\n> Dave\n>\n\nNo its turned on.\n", "msg_date": "Mon, 17 Mar 2008 14:12:45 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "\n\nCraig James wrote:\n> Justin wrote:\n>>> 2000 tps ??? do you have fsync turned off ?\n>>>\n>>> Dave\n>>>\n>>\n>> No its turned on.\n>\n> Unless I'm seriously confused, something is wrong with these numbers. \n> That's the sort of performance you expect from a good-sized RAID 10 \n> six-disk array. With a single 7200 rpm SATA disk and XFS, I get 640 \n> tps. There's no way you could 2000 tps from a single disk.\n>\n> Craig\n>\n\nit is a RAID 10 controller with 6 SAS 10K 73 gig drives. The server \nis 3 weeks old now.\n\nit has 16 gigs of RAM\n2 quad core Xenon 1.88 Ghz processors\n2 gig Ethernet cards. \nRAID controller perc 6/i with battery backup 512meg cache, setup not lie \nabout fsync\n\nWAL is on a RAID 0 drive along with the OS\n", "msg_date": "Mon, 17 Mar 2008 14:38:19 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "Hi Justin,\n\nIl giorno 17/mar/08, alle ore 20:38, Justin ha scritto:\n> it is a RAID 10 controller with 6 SAS 10K 73 gig drives. The \n> server is 3 weeks old now.\n>\n> it has 16 gigs of RAM\n> 2 quad core Xenon 1.88 Ghz processors\n> 2 gig Ethernet cards. RAID controller perc 6/i with battery backup \n> 512meg cache, setup not lie about fsync\n>\n> WAL is on a RAID 0 drive along with the OS\n\nDid you try with a single raid 10 hosting DB + WAL? It gave me much \nbetter performances on similar hardware\nBye,\ne.\n\n", "msg_date": "Mon, 17 Mar 2008 21:58:31 +0100", "msg_from": "Enrico Sirola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Mon, Mar 17, 2008 at 2:58 PM, Enrico Sirola <[email protected]> wrote:\n> Hi Justin,\n>\n> Il giorno 17/mar/08, alle ore 20:38, Justin ha scritto:\n>\n> > it is a RAID 10 controller with 6 SAS 10K 73 gig drives. The\n> > server is 3 weeks old now.\n> >\n> > it has 16 gigs of RAM\n> > 2 quad core Xenon 1.88 Ghz processors\n> > 2 gig Ethernet cards. RAID controller perc 6/i with battery backup\n> > 512meg cache, setup not lie about fsync\n> >\n> > WAL is on a RAID 0 drive along with the OS\n>\n> Did you try with a single raid 10 hosting DB + WAL? 
It gave me much\n> better performances on similar hardware\n> Bye,\n\nNote that it can often be advantageous to have one big physical\npartition on RAID-10 and to then break it into logical partitions for\nthe computer, so that you have a partition with just ext2 for the WAL\nand since it has its own file system you usually get better\nperformance without having to actually hard partition out a separate\nRAID-1 or RAID-10 for WAL.\n", "msg_date": "Mon, 17 Mar 2008 15:57:20 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "[email protected] wrote:\n>\n> WAL is on a RAID 0 drive along with the OS\nIsn't that just as unsafe as having the whole lot on RAID0?\n\n\n", "msg_date": "Tue, 18 Mar 2008 07:08:58 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" }, { "msg_contents": "On Sun, Mar 16, 2008 at 12:04:44PM -0700, Craig James wrote:\n>Just out of curiosity: Last time I did research, the word seemed to be that \n>xfs was better than ext2 or ext3. Is that not true? Why use ext2/3 at all \n>if xfs is faster for Postgres?\n\nFor the WAL, the filesystem is largely irrelevant. (It's relatively \nsmall, the files are preallocated, the data is synced to disk so there's \nnot advantage from write buffering, etc.) The best filesystem is one \nthat does almost nothing and stays out of the way--ext2 is a good choice \nfor that. The data is a different story and a different filesystem is \nusually a better choice. (If for no other reason than to avoid long \nfsck times.)\n\nMike Stone\n", "msg_date": "Tue, 18 Mar 2008 07:04:21 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark: Dell/Perc 6, 8 disk RAID 10" } ]
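The benchmark thread above compares pgbench runs whose client and transaction counts both change between runs. As a minimal sketch of the fairer comparison suggested in it — total transactions held constant and split across the client count — the runs can be scripted as below; the host, database name and scale factor are the thread's placeholders, not values to copy blindly.

```sh
#!/bin/sh
# Sketch: compare client counts at a fixed total amount of work
# (400000 transactions in total, split across the client count).
# Host, database name and scale factor are placeholders taken from the thread.
DB=play
HOST=192.168.1.9
TOTAL=400000

pgbench -i -s 100 -h "$HOST" "$DB"          # build the pgbench test tables once

for CLIENTS in 10 40; do
    TRANS=$((TOTAL / CLIENTS))              # keep total work constant per run
    echo "== $CLIENTS clients x $TRANS transactions each =="
    pgbench    -c "$CLIENTS" -t "$TRANS" -h "$HOST" "$DB"   # TPC-B-like mix
    pgbench -S -c "$CLIENTS" -t "$TRANS" -h "$HOST" "$DB"   # read-only mix
done
```

Because the TPC-B-like mix commits on every transaction, results from runs like these tend to track the WAL device and wal_sync_method at least as much as the data-directory filesystem.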
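The same thread walks through moving the data directory to XFS on Debian/Ubuntu and keeping the WAL on a plain ext2 partition. Below is a condensed sketch of those steps, assuming root access and a placeholder device and mount point; the /dev/sdb1 name only echoes the error message quoted above and must be verified before formatting.

```sh
#!/bin/sh
# Sketch of the XFS setup described above (run as root, Debian/Ubuntu).
# /dev/sdb1 and /srv/pgdata are placeholders -- verify the device first:
# mkfs destroys its contents, and the tools refuse to run against a device
# that is still mounted (the cause of the error quoted in the thread).
apt-get install -y xfsprogs                 # userspace XFS tools (mkfs.xfs etc.)
umount /dev/sdb1 2>/dev/null || true        # must be unmounted before formatting
mkfs.xfs -f /dev/sdb1                       # -f overwrites an existing filesystem
mkdir -p /srv/pgdata
echo '/dev/sdb1  /srv/pgdata  xfs  defaults  0 0' >> /etc/fstab
mount /srv/pgdata                           # mounts via the fstab entry just added
# A small ext2 partition for pg_xlog (the WAL) follows the same pattern,
# using mkfs.ext2 and an ext2 line in /etc/fstab.
```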
[ { "msg_contents": "hi,\n\n is there any generalized format for the output for the output of the\nexplain command ?. If so please send that generalized format to me.\notherwise tell me how to parse the output of explain command to\nknow where the relation name occurs,where the conditions occurs,\nwhere the join conditions occur and so on. My goal is create a visual\nrepresentation of the expain plan. Can some body help me in this regard.\n Thanks & regards\n RAVIRAM KOLIPAKA\n M.TECH(CSE)-final year\n IIIT Hyderabad\n\nhi,      is there any generalized format for the output for the output of the explain command ?. If so please send that generalized format to me.otherwise tell me how to parse the output of explain command to \nknow where the relation name occurs,where the conditions occurs,where the join conditions occur and so on. My  goal is create a visual representation of the expain plan. Can some body help me in this regard.         Thanks & regards\n       RAVIRAM KOLIPAKA      M.TECH(CSE)-final year      IIIT Hyderabad", "msg_date": "Thu, 6 Mar 2008 11:40:05 +0530", "msg_from": "\"RaviRam Kolipaka\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql Explain command output" }, { "msg_contents": "On Thu, 6 Mar 2008, RaviRam Kolipaka wrote:\n\n> My goal is create a visual representation of the expain plan.\n\nThis problem has been solved already by code that's in pgadmin and you \nmight look at that source code for hints if you want to write your own \nimplementation. There's a good intro to using that at \nhttp://www.postgresonline.com/journal/index.php?/archives/27-Reading-PgAdmin-Graphical-Explain-Plans.html\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 6 Mar 2008 08:10:02 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql Explain command output" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Thu, 6 Mar 2008, RaviRam Kolipaka wrote:\n>> My goal is create a visual representation of the expain plan.\n\n> This problem has been solved already by code that's in pgadmin and you \n> might look at that source code for hints if you want to write your own \n> implementation.\n\nIt's been solved more than once actually --- Red Hat did a \"Visual\nExplain\" tool several years ago, which is unmaintained now but still\navailable for download (http://sources.redhat.com/rhdb/). I've heard\nthat EDB picked it up and is now maintaining their own fork, but I\ndon't know the status of that for sure. That code is in Java, if it\nmakes a difference to you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Mar 2008 08:31:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql Explain command output " }, { "msg_contents": "On Thu, 6 Mar 2008, Tom Lane wrote:\n\n> Red Hat did a \"Visual Explain\" tool several years ago, which is \n> unmaintained now but still available for download \n> (http://sources.redhat.com/rhdb/). I've heard that EDB picked it up and \n> is now maintaining their own fork, but I don't know the status of that \n> for sure.\n\nI know I wrote this down somewhere...ah ha, it was in the MySQL \ncomparision paper:\n\nVisual Explain, originally a RedHat component that has been kept current \nand improved by Enterprise DB, comes bundled with the EnterpriseDB \nAdvanced Server package. 
It can be built to run against other PostgreSQL \ninstallations using the source code to their Developer Studio package: \nhttp://www.enterprisedb.com/products/download.do\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 6 Mar 2008 08:43:43 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql Explain command output " }, { "msg_contents": "How do we know in the output of expain command table or constraint names\nso that while parsing each line of the output\n we can able to recognise them and build the pictorial representation. for\nexample if you consider the following output\n\n EXPLAIN select * from table1,table2 where roll=id_no;\n QUERY PLAN\n----------------------------------------------------------------------\n Hash Join (cost=29.12..59.31 rows=850 width=124)\n Hash Cond: (table2.id_no = table1.roll)\n -> Seq Scan on table2 (cost=0.00..18.50 rows=850 width=62)\n -> Hash (cost=18.50..18.50 rows=850 width=62)\n -> Seq Scan on table1 (cost=0.00..18.50 rows=850 width=62)\n\n we can know the table name , just by seeing any string after ON\nkeyword . Like that how to recognise the other constraints and so on.\n\nOn Thu, Mar 6, 2008 at 12:24 PM, Devi <[email protected]> wrote:\n\n> Hi,\n>\n> I suppose the format depends on the query you serve. The following links\n> will throw some light.\n>\n> http://www.postgresql.org/docs/8.1/static/sql-explain.html\n> http://www.postgresql.org/docs/7/static/c4888.htm\n>\n> Thanks\n> DEVI.G\n>\n> ----- Original Message -----\n> *From:* RaviRam Kolipaka <[email protected]>\n> *To:* [email protected]\n> *Sent:* Thursday, March 06, 2008 11:40 AM\n> *Subject:* [PERFORM] postgresql Explain command output\n>\n>\n> hi,\n>\n> is there any generalized format for the output for the output of the\n> explain command ?. If so please send that generalized format to me.\n> otherwise tell me how to parse the output of explain command to\n> know where the relation name occurs,where the conditions occurs,\n> where the join conditions occur and so on. My goal is create a visual\n> representation of the expain plan. Can some body help me in this regard.\n> Thanks & regards\n> RAVIRAM KOLIPAKA\n> M.TECH(CSE)-final year\n> IIIT Hyderabad\n>\n> ------------------------------\n>\n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.5.516 / Virus Database: 269.21.5/1314 - Release Date: 3/5/2008\n> 6:38 PM\n>\n>\n\n How do we know in the output of expain command table  or constraint names so that while parsing  each line of the output we can able to recognise them and build the pictorial representation. for example if you consider the following output\n                EXPLAIN select * from table1,table2 where roll=id_no;                              QUERY PLAN---------------------------------------------------------------------- Hash Join  (cost=29.12..59.31 rows=850 width=124)\n   Hash Cond: (table2.id_no = table1.roll)   ->  Seq Scan on table2  (cost=0.00..18.50 rows=850 width=62)   ->  Hash  (cost=18.50..18.50 rows=850 width=62)         ->  Seq Scan on table1  (cost=0.00..18.50 rows=850 width=62)\n         we can know the table name , just by seeing any string after ON keyword . Like that how to recognise the other constraints and so on.On Thu, Mar 6, 2008 at 12:24 PM, Devi <[email protected]> wrote:\n\n\nHi,\n \nI suppose the format depends on the query you \nserve.  
The following links will throw some light.\n \nhttp://www.postgresql.org/docs/8.1/static/sql-explain.html\nhttp://www.postgresql.org/docs/7/static/c4888.htm\n \nThanks\nDEVI.G\n\n----- Original Message ----- \n\nFrom:\nRaviRam Kolipaka \nTo: [email protected]\n\nSent: Thursday, March 06, 2008 11:40 \n AM\nSubject: [PERFORM] postgresql Explain \n command output\nhi,      is there any \n generalized format for the output for the output of the explain command ?. If \n so please send that generalized format to me.otherwise tell me how to \n parse the output of explain command to know where the relation name \n occurs,where the conditions occurs,where the join conditions occur and so \n on. My  goal is create a visual representation of the expain plan. \n Can some body help me in this regard.         \n Thanks & regards       RAVIRAM KOLIPAKA  \n     M.TECH(CSE)-final year      IIIT \n Hyderabad    \n\n\nNo virus found in this incoming message.Checked by AVG Free \n Edition. Version: 7.5.516 / Virus Database: 269.21.5/1314 - Release Date: \n 3/5/2008 6:38 PM", "msg_date": "Fri, 7 Mar 2008 18:29:06 +0530", "msg_from": "\"RaviRam Kolipaka\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql Explain command output" } ]
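For the parsing question in the thread above, releases newer than the ones being discussed make the job much easier: from PostgreSQL 9.0 onward, EXPLAIN can emit structured output instead of the text form. A small sketch, reusing the thread's example query and a placeholder database name:

```sh
# PostgreSQL 9.0 and later only (newer than the versions discussed above).
# The database name is a placeholder; the query is the thread's own example.
psql -d mydb -c "EXPLAIN (FORMAT JSON) SELECT * FROM table1, table2 WHERE roll = id_no;"
# Each plan node is returned as a JSON object with explicit "Node Type" and
# "Relation Name" fields, so no scraping of the text layout is needed.
```

On the 8.x releases the thread is about, the practical options remain the text-plan tools already mentioned (pgAdmin's graphical explain and the Visual Explain tool).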
[ { "msg_contents": "i had a table with 50 lakh record...\n\nit has a column called oid ( obviously all the tables will have this ), but\nwhile doing any operation it is getting slow because of the number of\nrecords...\n\nif i remove the oid column will i get any benefit, what are all the other\ndefault columns created without our knowledge..\n\ni had a table with 50 lakh record...it has a column called oid ( obviously all the tables will have this ), but while doing any operation it is getting slow because of the number of records...if i remove the oid column will i get any benefit, what are all the other default columns created without our knowledge..", "msg_date": "Thu, 6 Mar 2008 12:32:01 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "oid...any optimizations" }, { "msg_contents": "On Thu, 6 Mar 2008 12:32:01 +0530\n\"sathiya psql\" <[email protected]> wrote:\n\n> i had a table with 50 lakh record...\n> \n> it has a column called oid ( obviously all the tables will have\n> this ), but while doing any operation it is getting slow because of\n> the number of records...\n\nActually it isn't obvious as oids have been deprecated for years.\n\n> \n> if i remove the oid column will i get any benefit, what are all the\n> other default columns created without our knowledge..\n\nWhat version of ancient PostgreSQL are you running exactly?\n\nJoshua D. Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit", "msg_date": "Wed, 5 Mar 2008 23:06:50 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oid...any optimizations" }, { "msg_contents": ">\n> Actually it isn't obvious as oids have been deprecated for years.\n\n\nno in my version it is now also available....\n\n>\n>\n> What version of ancient PostgreSQL are you running exactly?\n\n\npostgresql 7.4\n\nActually it isn't obvious as oids have been deprecated for years.\nno in my version it is now also available.... \nWhat version of ancient PostgreSQL are you running exactly?postgresql 7.4", "msg_date": "Thu, 6 Mar 2008 12:43:57 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oid...any optimizations" }, { "msg_contents": "On Thu, 2008-03-06 at 12:32 +0530, sathiya psql wrote:\n> i had a table with 50 lakh record...\n> \n> it has a column called oid ( obviously all the tables will have\n> this ), but while doing any operation it is getting slow because of\n> the number of records...\n> \n> if i remove the oid column will i get any benefit, what are all the\n> other default columns created without our knowledge..\n\nProbably not\n\nAlso - do not remove oid if your sql operations require it. \n\nA 'create index x_oid_idx on table x (oid)' might help. \n\nAlso see EXPLAIN in the manual.\n\n-- \nRegards\nTheo\n\n", "msg_date": "Thu, 06 Mar 2008 09:21:36 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oid...any optimizations" }, { "msg_contents": "On Thu, 6 Mar 2008 12:43:57 +0530\n\"sathiya psql\" <[email protected]> wrote:\n\n> >\n> > Actually it isn't obvious as oids have been deprecated for years.\n> \n> \n> no in my version it is now also available....\n\nI didn't say they were gone. I said they are deprecated. 
You should not\nbe using them.\n\n> \n> >\n> >\n> > What version of ancient PostgreSQL are you running exactly?\n> \n> \n> postgresql 7.4\n\nThat is god awful ancient. Upgrade to something remotely new, like\n8.2.6.\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit", "msg_date": "Thu, 6 Mar 2008 08:28:49 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oid...any optimizations" } ]
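A minimal sketch of acting on the advice above — stop relying on the per-row OID — with placeholder database and table names. ALTER TABLE ... SET WITHOUT OIDS is present on the 8.x releases recommended in the thread, but it should be checked against the ALTER TABLE reference page for the exact server version in use.

```sh
# Placeholders throughout; verify SET WITHOUT OIDS exists on your version.
psql -d mydb <<'SQL'
-- Drop the hidden per-row OID from an existing table:
ALTER TABLE mytable SET WITHOUT OIDS;
-- New tables can simply be created without one:
-- CREATE TABLE mytable2 (id integer PRIMARY KEY, payload text) WITHOUT OIDS;
SQL
```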
[ { "msg_contents": "I have 50 lakh (5 million) records in my table.\n\nWhile counting, I am using that column in the where condition, which causes a\nproblem: the cpu is waiting for the device.\n\nDebian OS, postgresql 7.4, 50 lakh records.\n\nThe query is\n\nEXPLAIN ANALYZE select count(call_id) from call_log where call_id > 1;\n\nWatching top, the cpu is waiting for i/o. Without this call_id condition, if I do\n EXPLAIN ANALYZE select count(oid) from call_log where oid > 1;\nit executes in 21 seconds.", "msg_date": "Thu, 6 Mar 2008 13:46:13 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "index usage makes problem" } ]
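As a hedged illustration of the indexing advice from the previous thread, applied to the query above (the index name is made up):

  CREATE INDEX call_log_call_id_idx ON call_log (call_id);
  ANALYZE call_log;
  EXPLAIN ANALYZE SELECT count(call_id) FROM call_log WHERE call_id > 1;

Even with the index, count() on 7.4 still has to visit the heap, so a condition like call_id > 1 that matches nearly every row will stay I/O bound; the index starts to pay off only for selective ranges.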
[ { "msg_contents": "Hi all,\n\nI'm running the following query to match a supplied text string to an actual\nplace name which is recorded in a table with extra info like coordinates,\netc.\n\nSELECT ts_rank_cd(textsearchable_index_col , query, 32 /* rank/(rank+1) */)\nAS rank,*\nFROM gazetteer, to_tsquery('Gunbower|Island|Vic') query\nWHERE query @@ textsearchable_index_col\torder by rank desc, concise_ga desc,\nauda_alloc desc LIMIT 10\n\nWhen I run this I get the following top two results:\n\nPos\tRank\t\tName\nState\n1\t0.23769\tGunbower Island Primary School\tVic\t\n2\t0.23769\tGunbower Island\t\t\t\tVic\n\nThe textsearchable_index_col for each of these looks like this:\n\n'vic':6 '9999':5 'gunbow':1 'island':2 'school':4 'primari':3 'victoria':7\n'vic':4 '9999':3 'gunbow':1 'island':2 'victoria':5\n\nI'm new to this, but I can't figure out why the \"Gunbower Island Primary\nSchool\" is getting top place. How do I get the query to improve the ranking\nso that an exact match (like \"Gunbower|Island|Vic\") gets a higher position?\n\nThanks,\n\nbw\n\n\n\n\nNo virus found in this outgoing message.\nChecked by AVG Free Edition. \nVersion: 7.5.516 / Virus Database: 269.21.4/1309 - Release Date: 3/03/2008\n6:50 PM\n \n\n", "msg_date": "Fri, 7 Mar 2008 14:58:47 +1100", "msg_from": "\"b wragg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Improve Full text rank in a query " }, { "msg_contents": "\"b wragg\" <[email protected]> writes:\n> I'm new to this, but I can't figure out why the \"Gunbower Island Primary\n> School\" is getting top place. How do I get the query to improve the ranking\n> so that an exact match (like \"Gunbower|Island|Vic\") gets a higher position?\n\nI'm new at this too, but AFAICS these are both exact matches: they have\nthe same matching lexemes at the same positions, so the basic rank\ncalculation is going to come out exactly the same. Normalization option\n32 doesn't help (as the manual notes, it's purely cosmetic). So it's\nrandom chance which one comes out first.\n\nWhat I think you might want is one of the other normalization options,\nso that shorter documents are preferred. Either 1, 2, 8, or 16 would\ndo fine for this simple example --- which one you want depends on just\nhow heavily you want to favor shorter documents.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Mar 2008 00:40:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Full text rank in a query " }, { "msg_contents": "On Fri, 7 Mar 2008, b wragg wrote:\n\n> Hi all,\n>\n> I'm running the following query to match a supplied text string to an actual\n> place name which is recorded in a table with extra info like coordinates,\n> etc.\n>\n> SELECT ts_rank_cd(textsearchable_index_col , query, 32 /* rank/(rank+1) */)\n> AS rank,*\n> FROM gazetteer, to_tsquery('Gunbower|Island|Vic') query\n> WHERE query @@ textsearchable_index_col\torder by rank desc, concise_ga desc,\n> auda_alloc desc LIMIT 10\n>\n> When I run this I get the following top two results:\n>\n> Pos\tRank\t\tName\n> State\n> 1\t0.23769\tGunbower Island Primary School\tVic\n> 2\t0.23769\tGunbower Island\t\t\t\tVic\n>\n> The textsearchable_index_col for each of these looks like this:\n>\n> 'vic':6 '9999':5 'gunbow':1 'island':2 'school':4 'primari':3 'victoria':7\n> 'vic':4 '9999':3 'gunbow':1 'island':2 'victoria':5\n>\n> I'm new to this, but I can't figure out why the \"Gunbower Island Primary\n> School\" is getting top place. 
How do I get the query to improve the ranking\n> so that an exact match (like \"Gunbower|Island|Vic\") gets a higher position?\n\nyou can read documentation and use document length normalization flag,\nor write your own ranking function.\n\n>\n> Thanks,\n>\n> bw\n>\n>\n>\n>\n> No virus found in this outgoing message.\n> Checked by AVG Free Edition.\n> Version: 7.5.516 / Virus Database: 269.21.4/1309 - Release Date: 3/03/2008\n> 6:50 PM\n>\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Fri, 7 Mar 2008 11:44:18 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve Full text rank in a query " } ]
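One possible reading of the advice above, written against the poster's own query: OR a document-length normalization flag into the ts_rank_cd call so that the shorter "Gunbower Island" row outranks the school. Flag 2 (divide by document length) is used here purely as an example; 1, 8 or 16 could be substituted depending on how strongly shorter entries should be favoured.

  SELECT ts_rank_cd(textsearchable_index_col, query, 2|32) AS rank, *
  -- 2 = divide rank by document length, 32 = rank/(rank+1); flags are OR'd together
  FROM gazetteer, to_tsquery('Gunbower|Island|Vic') query
  WHERE query @@ textsearchable_index_col
  ORDER BY rank DESC, concise_ga DESC, auda_alloc DESC
  LIMIT 10;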
[ { "msg_contents": "Hi,\n\nI am using postgresql for an application. Daily I will get more\nthan 5,00,000 (500,000) records.\n\nI have partitioned the table by month.\n\nWhile generating reports, I join some other tables\nwith the large table.\n\nIt takes too much time to get the data, so I am planning to design a\nstar schema for last month's report, so that the report module will read\ndirectly from that pre-built (stale) data.\n\nPlease suggest any other way to generate reports from very\nlarge data.\n\nRegards,\nShilpa", "msg_date": "Fri, 7 Mar 2008 10:29:00 +0530", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "database design for large data." } ]
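A minimal sketch of the pre-aggregated reporting table described above. Every name here is hypothetical, since the post does not show the real schema; the idea is simply to roll last month's partition up once, then let the report module read the small summary instead of joining against the large detail table.

  CREATE TABLE report_monthly_summary (
      report_month date    NOT NULL,   -- hypothetical dimension/measure columns
      group_id     integer NOT NULL,
      record_cnt   bigint  NOT NULL,
      PRIMARY KEY (report_month, group_id)
  );

  -- refreshed by a scheduled job once the month closes
  INSERT INTO report_monthly_summary (report_month, group_id, record_cnt)
  SELECT date_trunc('month', created_on)::date, group_id, count(*)
  FROM detail_2008_02               -- last month's partition (name assumed)
  GROUP BY 1, 2;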
[ { "msg_contents": "I've came across this issue while writing report-like query for 2 not\nvery large tables. I've tried several methods to resolve this one (see\nbelow). But now I'm really stuck...\nPostgreSQL 8.3, default configuration\n\nThere are 2 tables (structure was simplified to show only problematic\nplace):\ncreate table c\n(\n id bigint primary key\n cdate date\n);\n\ncreate index c_cdate_idx on c (cdate);\n\ncreate table i\n(\n id bigint primary key,\n id_c bigint references c(id)\n);\n\nselect count(*) from c\n\ncount\n--------\n636 565\n\nselect count(*) from i\n\ncount\n--------\n4 646 145\n\nanalyze i;\nanalyze c;\n\nexplain analyze\nselect id\nfrom c\n join i on i.idc = c.id\nwhere c.cdate between '2007-02-01' and '2007-02-16'\n\nQUERY\nPLAN \n \n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------- \n\n\nMerge Join (cost=738.95..57864.63 rows=14479 width=8) (actual\ntime=13954.681..14358.731 rows=14583\nloops=1)\n Merge Cond: (i.idc =\nc.id) \n\n\n -> Index Scan using fki_i_c_fk on i (cost=0.00..194324.34\nrows=4646145 width=8) (actual time=17.254..12061.414 rows=1042599 loops=1)\n -> Sort (cost=738.94..756.88 rows=7178 width=8) (actual\ntime=53.942..75.013 rows=14583\nloops=1)\n Sort Key:\nc.id \n\n\n Sort Method: quicksort Memory:\n404kB \n\n\n -> Index Scan using c_cdate_idx on c (cost=0.00..279.21\nrows=7178 width=8) (actual time=23.595..41.470 rows=7064 loops=1)\n Index Cond: ((cdate >= '2007-02-01'::date) AND (cdate <=\n'2007-02-16'::date))\nTotal runtime: 14379.461\nms \n\n\n\nset enable_mergejoin to off;\nset enable_hashjoin to off;\n\nQUERY\nPLAN \n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------- \n\n\nNested Loop (cost=0.00..59833.70 rows=14479 width=8) (actual\ntime=0.129..153.038 rows=14583\nloops=1)\n -> Index Scan using c_cdate_idx on c (cost=0.00..279.21 rows=7178\nwidth=8) (actual time=0.091..14.468 rows=7064 loops=1)\n Index Cond: ((cdate >= '2007-02-01'::date) AND (cdate <=\n'2007-02-16'::date))\n -> Index Scan using fki_i_c_fk on i (cost=0.00..8.13 rows=13\nwidth=8) (actual time=0.007..0.011 rows=2 loops=7064)\n Index Cond: (i.idc =\nc.id) \n\n\nTotal runtime: 172.599 ms\n\nOk, the first problem is here:\n -> Index Scan using fki_i_c_fk on i (cost=0.00..8.13 rows=13\nwidth=8) (actual time=0.007..0.011 rows=2 loops=7064)\n\nI collected statistics for these tables at level 1000 for all columns.\n\nselect attname, null_frac, avg_width, n_distinct, correlation\nfrom pg_stats\nwhere tablename = 'i'\n\nattname null_frac avg_width n_distinct\ncorrelation\n---------- ------------------ ------------ -------------\n------------------\nid 0 8 -1 0,9999849796295166\nidc 0,7236369848251343 8 95583 0,999763011932373\n\nNice stats except of n_distinct for idc column.\n\nselect count(distinct idc)\nfrom i\n\ncount\n--------\n633 864\n\nOf course it is not correct solution but...\n\nupdate pg_statistic\nset stadistinct = 633864\nwhere starelid = ... 
and staattnum = ...\n\nReconnect and execute:\n\nexplain analyze\nselect id\nfrom c\n join i on i.idc = c.id\nwhere c.cdate between '2007-02-01' and '2007-02-16'\n\nQUERY\nPLAN \n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------- \n\n\nNested Loop (cost=0.00..57342.39 rows=14479 width=8) (actual\ntime=0.133..151.426 rows=14583\nloops=1)\n -> Index Scan using c_cdate_idx on c (cost=0.00..279.21 rows=7178\nwidth=8) (actual time=0.094..14.242 rows=7064 loops=1)\n Index Cond: ((cdate >= '2007-02-01'::date) AND (cdate <=\n'2007-02-16'::date))\n -> Index Scan using fki_i_c_fk on i (cost=0.00..7.92 rows=2 width=8)\n(actual time=0.007..0.011 rows=2 loops=7064)\n Index Cond: (i.idc =\nc.id) \n\n\nTotal runtime: 170.911\nms \n\n\n\nBut the reason of this issue is not the incorrect value of n_distinct.\nLet's expand dates interval in WHERE clause.\n\n\nexplain analyze\nselect id\nfrom c\n join i on i.idc = c.id\nwhere c.cdate between '2007-02-01' and '2007-02-19'\n\nQUERY\nPLAN \n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------- \n\n\nMerge Join (cost=831.16..57981.98 rows=16155 width=8) (actual\ntime=11691.156..12155.201 rows=16357\nloops=1)\n Merge Cond: (i.idc =\nc.id) \n\n\n -> Index Scan using fki_i_c_fk on i (cost=0.00..194324.34\nrows=4646145 width=8) (actual time=22.236..9928.489 rows=1044373 loops=1)\n -> Sort (cost=831.15..851.17 rows=8009 width=8) (actual\ntime=31.660..55.277 rows=16357\nloops=1)\n Sort Key:\nc.id \n\n\n Sort Method: quicksort Memory:\n438kB \n\n\n -> Index Scan using c_cdate_idx on c (cost=0.00..311.87\nrows=8009 width=8) (actual time=0.116..17.050 rows=7918 loops=1)\n Index Cond: ((cdate >= '2007-02-01'::date) AND (cdate <=\n'2007-02-19'::date))\nTotal runtime: 12178.678 ms\n\nset enable_mergejoin to off;\nset enable_hashjoin to off;\n\nQUERY\nPLAN \n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------- \n\n\nNested Loop (cost=0.00..63724.20 rows=16155 width=8) (actual\ntime=0.131..171.292 rows=16357\nloops=1)\n -> Index Scan using c_cdate_idx on c (cost=0.00..311.87 rows=8009\nwidth=8) (actual time=0.093..15.906 rows=7918 loops=1)\n Index Cond: ((cdate >= '2007-02-01'::date) AND (cdate <=\n'2007-02-19'::date))\n -> Index Scan using fki_i_c_fk on i (cost=0.00..7.89 rows=2 width=8)\n(actual time=0.007..0.011 rows=2 loops=7918)\n Index Cond: (i.idc =\nc.id) \n\n\nTotal runtime: 193.221 ms\n\nWhy nested loop is overestimated here (63000 estimated = 171 actual,\n58000 estimated = 12155 actual).\n\n", "msg_date": "Fri, 07 Mar 2008 13:50:40 +0800", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Nested loop vs merge join: inconsistencies between estimated and\n\tactual time" }, { "msg_contents": "Vlad Arkhipov <[email protected]> writes:\n> I've came across this issue while writing report-like query for 2 not\n> very large tables. I've tried several methods to resolve this one (see\n> below). But now I'm really stuck...\n\nIt looks like you are wishing to optimize for all-in-memory situations,\nin which case the traditional advice is to reduce random_page_cost to\nsomething close to 1. 
AFAICS all the rowcount estimates you're seeing\nare spot on, or as close to spot on as you could realistically hope for,\nand so the problem lies with the cost parameters. Fooling with the\nstatistics is not going to help if the rowcount estimates are already\ngood.\n\n(Note: the apparent undercounts you're seeing on indexscans on the outer\nside of a mergejoin seem to be because the mergejoin terminates early\ndue to limited range of the other input join key. The planner is\nexpecting this, as we can see because the predicted cost of the join is\nactually much less than the predicted cost of running the input\nindexscan to completion. The cost ratio is about consistent with the\nrowcount ratio, which makes me think it got these right too.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Mar 2008 01:35:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop vs merge join: inconsistencies between estimated and\n\tactual time" }, { "msg_contents": "Tom Lane writes:\n> Vlad Arkhipov <[email protected]> writes:\n> \n>> I've came across this issue while writing report-like query for 2 not\n>> very large tables. I've tried several methods to resolve this one (see\n>> below). But now I'm really stuck...\n>> \n>\n> It looks like you are wishing to optimize for all-in-memory situations,\n> in which case the traditional advice is to reduce random_page_cost to\n> something close to 1. AFAICS all the rowcount estimates you're seeing\n> are spot on, or as close to spot on as you could realistically hope for,\n> and so the problem lies with the cost parameters. Fooling with the\n> statistics is not going to help if the rowcount estimates are already\n> good.\n> \n\nI tried to change random_page_cost to 1.1 or something close to it and \nincrease/decrease effective_cache_size. But Postgres always prefer plan \nwith merge join.\n\n\n\n\n\n\nTom Lane writes:\n\nVlad Arkhipov <[email protected]> writes:\n \n\nI've came across this issue while writing report-like query for 2 not\nvery large tables. I've tried several methods to resolve this one (see\nbelow). But now I'm really stuck...\n \n\n\nIt looks like you are wishing to optimize for all-in-memory situations,\nin which case the traditional advice is to reduce random_page_cost to\nsomething close to 1. AFAICS all the rowcount estimates you're seeing\nare spot on, or as close to spot on as you could realistically hope for,\nand so the problem lies with the cost parameters. Fooling with the\nstatistics is not going to help if the rowcount estimates are already\ngood.\n \n\n\nI tried to change random_page_cost to 1.1 or something close to it and\nincrease/decrease effective_cache_size. But Postgres always prefer plan\nwith merge join.", "msg_date": "Tue, 11 Mar 2008 09:52:12 +0800", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested loop vs merge join: inconsistencies between\n\testimated and actual time" } ]
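A sketch of the tuning suggested above, using the tables from this thread. The numbers are illustrative only, and the ALTER TABLE form is just the supported way to reach the higher statistics target that the poster obtained by editing pg_statistic directly.

  -- session-level experiment for a mostly cached database (8.2/8.3 syntax)
  SET random_page_cost = 1.5;
  SET effective_cache_size = '5GB';
  EXPLAIN ANALYZE
  SELECT c.id
  FROM c JOIN i ON i.idc = c.id
  WHERE c.cdate BETWEEN '2007-02-01' AND '2007-02-16';

  -- raise the statistics target instead of updating pg_statistic by hand
  ALTER TABLE i ALTER COLUMN idc SET STATISTICS 1000;
  ANALYZE i;

If the nested-loop plan wins with these settings, the same values can be made permanent in postgresql.conf.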
[ { "msg_contents": "Hello,\n\ni have problem with following table...\n\ncreate table dataaction (\n id INT4 not null,\n log text,\n primary key (id)\n);\n\nIt is the table for storing results of long running jobs. The log attribute\ntakes approximately 5MB for one row (there is about 300 rows). My problem\nis, that table dataaction takes after restoring about 1,5G, but in few days\ngrows to 79G(Toast space)... Vacuum on the table doesn't finish.\n\nWhere can be the problem?? How to solve it, how to shrink toast space??\n\nAny help will be appretiated.\n\nKind regards,\n\nPavel Rotek\n\nHello,i have problem with following table...create table dataaction (   id INT4 not null,   log text,   primary key (id));It is the table for storing results of long running jobs. The log attribute takes approximately 5MB for one row (there is about 300 rows). My problem is, that table dataaction takes after restoring about 1,5G, but in few days grows to 79G(Toast space)... Vacuum on the table doesn't finish.\nWhere can be the problem?? How to solve it, how to shrink toast space??Any help will be appretiated.Kind regards,Pavel Rotek", "msg_date": "Fri, 7 Mar 2008 09:35:42 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "Toast space grows" }, { "msg_contents": "Pavel Rotek wrote:\n> Hello,\n> \n> i have problem with following table...\n> \n> create table dataaction (\n> id INT4 not null,\n> log text,\n> primary key (id)\n> );\n> \n> It is the table for storing results of long running jobs. The log attribute\n> takes approximately 5MB for one row (there is about 300 rows). My problem\n> is, that table dataaction takes after restoring about 1,5G, but in few days\n> grows to 79G(Toast space)...\n\n1. What is happening with this table - just inserts, lots of updates?\n\n2. What does SELECT sum(length(log)) FROM dataaction; show?\n\n > Vacuum on the table doesn't finish.\n\nA plain vacuum doesn't finish, or vacuum full doesn't finish?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 07 Mar 2008 09:13:06 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "we just restored it to free 70G :-(\n\nThere are inserts and few updates (but what do you mean with update??\ncommitted update??, because there are many updates of the log attribute in\ntrasaction, we do periodical flush during transaction), sum takes\napproximately 1,2G, and i mean vacuum full (but there is no lock on the\ntable when running vacuum full). I haven't try plain vacuum.\n\n2008/3/7, Richard Huxton <[email protected]>:\n>\n> Pavel Rotek wrote:\n> > Hello,\n> >\n> > i have problem with following table...\n> >\n> > create table dataaction (\n> > id INT4 not null,\n> > log text,\n> > primary key (id)\n> > );\n> >\n> > It is the table for storing results of long running jobs. The log\n> attribute\n> > takes approximately 5MB for one row (there is about 300 rows). My\n> problem\n> > is, that table dataaction takes after restoring about 1,5G, but in few\n> days\n> > grows to 79G(Toast space)...\n>\n>\n> 1. What is happening with this table - just inserts, lots of updates?\n>\n> 2. What does SELECT sum(length(log)) FROM dataaction; show?\n>\n>\n> > Vacuum on the table doesn't finish.\n>\n>\n> A plain vacuum doesn't finish, or vacuum full doesn't finish?\n>\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\nwe just restored it to free 70G :-(There are inserts and few updates (but what do you mean with update?? 
committed update??, because there are many updates of the log attribute in trasaction, we do periodical flush during transaction), sum takes approximately 1,2G, and i mean vacuum full (but there is no lock on the table when running vacuum full). I haven't try plain vacuum.\n2008/3/7, Richard Huxton <[email protected]>:\nPavel Rotek wrote: > Hello, > > i have problem with following table... > > create table dataaction ( >    id INT4 not null, >    log text, >    primary key (id) > );\n > > It is the table for storing results of long running jobs. The log attribute > takes approximately 5MB for one row (there is about 300 rows). My problem > is, that table dataaction takes after restoring about 1,5G, but in few days\n > grows to 79G(Toast space)... 1. What is happening with this table - just inserts, lots of updates? 2. What does SELECT sum(length(log)) FROM dataaction; show?   > Vacuum on the table doesn't finish.\n A plain vacuum doesn't finish, or vacuum full doesn't finish? --   Richard Huxton   Archonet Ltd", "msg_date": "Fri, 7 Mar 2008 10:29:33 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "In response to \"Pavel Rotek\" <[email protected]>:\n> \n> There are inserts and few updates (but what do you mean with update??\n\nHe means adding or changing data in the table.\n\n> committed update??, because there are many updates of the log attribute in\n> trasaction, we do periodical flush during transaction)\n\nAre you saying you have long-running transactions? How long does a\nsingle transaction take?\n\nTransactions prevent vacuum from being able to clean up. Long running\ntransactions tend to render vacuum ineffective.\n\n>, sum takes\n> approximately 1,2G, and i mean vacuum full (but there is no lock on the\n> table when running vacuum full). I haven't try plain vacuum.\n\nDon't do vacuum full on this table. Do frequent vacuums. The table will\nbloat some, but not 10x the required size, once you find a reasonable\nfrequency for vacuums. You might find it practical to manually vacuum\nthis table from your application after insert and update operations.\n\n> \n> 2008/3/7, Richard Huxton <[email protected]>:\n> >\n> > Pavel Rotek wrote:\n> > > Hello,\n> > >\n> > > i have problem with following table...\n> > >\n> > > create table dataaction (\n> > > id INT4 not null,\n> > > log text,\n> > > primary key (id)\n> > > );\n> > >\n> > > It is the table for storing results of long running jobs. The log\n> > attribute\n> > > takes approximately 5MB for one row (there is about 300 rows). My\n> > problem\n> > > is, that table dataaction takes after restoring about 1,5G, but in few\n> > days\n> > > grows to 79G(Toast space)...\n> >\n> >\n> > 1. What is happening with this table - just inserts, lots of updates?\n> >\n> > 2. What does SELECT sum(length(log)) FROM dataaction; show?\n> >\n> >\n> > > Vacuum on the table doesn't finish.\n> >\n> >\n> > A plain vacuum doesn't finish, or vacuum full doesn't finish?\n> >\n> >\n> > --\n> > Richard Huxton\n> > Archonet Ltd\n> >\n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. 
If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Fri, 7 Mar 2008 06:53:14 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "2008/3/7, Bill Moran <[email protected]>:\n>\n> In response to \"Pavel Rotek\" <[email protected]>:\n>\n> >\n> > There are inserts and few updates (but what do you mean with update??\n>\n>\n> He means adding or changing data in the table.\n\n\nI understand, but i don't have deep understanding of mechanism, that affects\ntoast space. I don't know if toast space is affected only with commited\ntransactions or also with uncommitted transactions. This is the reason why i\nasked.\n\n> committed update??, because there are many updates of the log attribute in\n> > trasaction, we do periodical flush during transaction)\n>\n>\n> Are you saying you have long-running transactions? How long does a\n> single transaction take?\n>\n> Transactions prevent vacuum from being able to clean up. Long running\n> transactions tend to render vacuum ineffective.\n\n\nNo i do not mean long running transactions... Update of log entry (update of\nrow in dataaction) is performed in series of short transactions, but during\nshort transaction there is a lot of change log value, flush, change log\nvalue ,flush ..... change log value, flush actions (flush means perform\nflush operation via JDBC driver). I'm not sure if this flush affects toast\nspace... Maybe this is the reason.\n\n>, sum takes\n> > approximately 1,2G, and i mean vacuum full (but there is no lock on the\n> > table when running vacuum full). I haven't try plain vacuum.\n>\n>\n> Don't do vacuum full on this table. Do frequent vacuums. The table will\n> bloat some, but not 10x the required size, once you find a reasonable\n> frequency for vacuums. You might find it practical to manually vacuum\n> this table from your application after insert and update operations.\n\n\nI perform autovacuum daily.\n\n>\n> > 2008/3/7, Richard Huxton <[email protected]>:\n> > >\n> > > Pavel Rotek wrote:\n> > > > Hello,\n> > > >\n> > > > i have problem with following table...\n> > > >\n> > > > create table dataaction (\n> > > > id INT4 not null,\n> > > > log text,\n> > > > primary key (id)\n> > > > );\n> > > >\n> > > > It is the table for storing results of long running jobs. The log\n> > > attribute\n> > > > takes approximately 5MB for one row (there is about 300 rows). My\n> > > problem\n> > > > is, that table dataaction takes after restoring about 1,5G, but in\n> few\n> > > days\n> > > > grows to 79G(Toast space)...\n> > >\n> > >\n> > > 1. What is happening with this table - just inserts, lots of updates?\n> > >\n> > > 2. 
What does SELECT sum(length(log)) FROM dataaction; show?\n> > >\n> > >\n> > > > Vacuum on the table doesn't finish.\n> > >\n> > >\n> > > A plain vacuum doesn't finish, or vacuum full doesn't finish?\n> > >\n> > >\n> > > --\n> > > Richard Huxton\n> > > Archonet Ltd\n> > >\n> >\n>\n>\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n> ****************************************************************\n> IMPORTANT: This message contains confidential information and is\n> intended only for the individual named. If the reader of this\n> message is not an intended recipient (or the individual\n> responsible for the delivery of this message to an intended\n> recipient), please be advised that any re-use, dissemination,\n> distribution or copying of this message is prohibited. Please\n> notify the sender immediately by e-mail if you have received\n> this e-mail by mistake and delete this e-mail from your system.\n> E-mail transmission cannot be guaranteed to be secure or\n> error-free as information could be intercepted, corrupted, lost,\n> destroyed, arrive late or incomplete, or contain viruses. The\n> sender therefore does not accept liability for any errors or\n> omissions in the contents of this message, which arise as a\n> result of e-mail transmission.\n> ****************************************************************\n>\n\n2008/3/7, Bill Moran <[email protected]>:\nIn response to \"Pavel Rotek\" <[email protected]>: > > There are inserts and few updates (but what do you mean with update?? He means adding or changing data in the table.\nI understand, but i don't have deep understanding of mechanism, that affects toast space. I don't know if toast space is affected only with commited transactions or also with uncommitted transactions. This is the reason why i asked.\n > committed update??, because there are many updates of the log attribute in\n > trasaction, we do periodical flush during transaction) Are you saying you have long-running transactions?  How long does a single transaction take? Transactions prevent vacuum from being able to clean up.  Long running\n transactions tend to render vacuum ineffective.No i do not mean long running transactions... Update of log entry (update of row in dataaction) is performed in series of short transactions, but during short transaction there is a lot of change log value, flush, change log value ,flush ..... change log value, flush actions (flush means perform flush operation via JDBC driver). I'm not sure if this flush affects toast space... Maybe this is the reason.\n >, sum takes > approximately 1,2G, and i mean vacuum full (but there is no lock on the\n > table when running vacuum full). I haven't try plain vacuum. Don't do vacuum full on this table.  Do frequent vacuums.  The table will bloat some, but not 10x the required size, once you find a reasonable\n frequency for vacuums.  You might find it practical to manually vacuum this table from your application after insert and update operations.I perform autovacuum daily.\n > > 2008/3/7, Richard Huxton <[email protected]>: > > > > Pavel Rotek wrote: > > > Hello, > > > > > > i have problem with following table...\n > > > > > > create table dataaction ( > > >    id INT4 not null, > > >    log text, > > >    primary key (id) > > > ); > > > > > > It is the table for storing results of long running jobs. The log\n > > attribute > > > takes approximately 5MB for one row (there is about 300 rows). 
My > > problem > > > is, that table dataaction takes after restoring about 1,5G, but in few\n > > days > > > grows to 79G(Toast space)... > > > > > > 1. What is happening with this table - just inserts, lots of updates? > > > > 2. What does SELECT sum(length(log)) FROM dataaction; show?\n > > > > > >   > Vacuum on the table doesn't finish. > > > > > > A plain vacuum doesn't finish, or vacuum full doesn't finish? > > > >\n > > -- > >    Richard Huxton > >    Archonet Ltd > > > -- Bill Moran Collaborative Fusion Inc. http://people.collaborativefusion.com/~wmoran/\n [email protected] Phone: 412-422-3463x4023 **************************************************************** IMPORTANT: This message contains confidential information and is\n intended only for the individual named. If the reader of this message is not an intended recipient (or the individual responsible for the delivery of this message to an intended recipient), please be advised that any re-use, dissemination,\n distribution or copying of this message is prohibited. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secure or\n error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a\n result of e-mail transmission. ****************************************************************", "msg_date": "Fri, 7 Mar 2008 13:59:46 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "Pavel Rotek escribi�:\n> 2008/3/7, Bill Moran <[email protected]>:\n\n> > Don't do vacuum full on this table. Do frequent vacuums. The table will\n> > bloat some, but not 10x the required size, once you find a reasonable\n> > frequency for vacuums. You might find it practical to manually vacuum\n> > this table from your application after insert and update operations.\n> \n> I perform autovacuum daily.\n\nSorry, this sentence makes no sense. Do you mean that you set\nautovacuum_naptime=1 day? If so, that's a bad idea -- you should let\nautovacuum run far more frequently.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 7 Mar 2008 10:19:25 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "\"Pavel Rotek\" <[email protected]> writes:\n> No i do not mean long running transactions... Update of log entry (update of\n> row in dataaction) is performed in series of short transactions, but during\n> short transaction there is a lot of change log value, flush, change log\n> value ,flush ..... change log value, flush actions (flush means perform\n> flush operation via JDBC driver). I'm not sure if this flush affects toast\n> space... Maybe this is the reason.\n\nYou mean that you build up the 5MB log entry by adding a few lines at a\ntime? That's going to consume horrid amounts of toast space, because\neach time you add a few lines, an entire new toasted field value is\ncreated.\n\nIf you have to do it that way, you'll need very frequent vacuums on this\ntable (not vacuum full, as noted already) to keep the toast space from\nbloating too much. 
And make sure you've got max_fsm_pages set high\nenough.\n\nIf you can restructure your code a bit, it might be better to accumulate\nlog values in a short-lived table and only store the final form of a log\nentry into the main table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Mar 2008 08:33:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows " }, { "msg_contents": "> \"Pavel Rotek\" <[email protected]> writes:\n>> No i do not mean long running transactions... Update of log entry (update of\n>> row in dataaction) is performed in series of short transactions, but during\n>> short transaction there is a lot of change log value, flush, change log\n>> value ,flush ..... change log value, flush actions (flush means perform\n>> flush operation via JDBC driver). I'm not sure if this flush affects toast\n>> space... Maybe this is the reason.\n\nOn Fri, 7 Mar 2008, Tom Lane wrote:\n> You mean that you build up the 5MB log entry by adding a few lines at a\n> time? That's going to consume horrid amounts of toast space, because\n> each time you add a few lines, an entire new toasted field value is\n> created.\n\nMoreover, what is the point of flushing data to Postgres without \ncommitting the transaction, if you're only going to overwrite the data \nlater. You don't get any level of protection for your data just by \nflushing it to Postgres - you have to commit the transaction for that to \nhappen. In my opinion, you should just be generating the log entry in \nmemory entirely, and then flushing it in a transaction commit when it's \nfinished, since you're obviously holding it all in memory all the time \nanyway.\n\n> If you have to do it that way, you'll need very frequent vacuums on this\n> table (not vacuum full, as noted already) to keep the toast space from\n> bloating too much. And make sure you've got max_fsm_pages set high\n> enough.\n\nAgreed, this is kind of the worst-case-scenario for table bloat.\n\nMatthew\n\n-- \nNow the reason people powdered their faces back then was to change the values\n\"s\" and \"n\" in this equation here. - Computer science lecturer\n", "msg_date": "Fri, 7 Mar 2008 13:56:48 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows " }, { "msg_contents": "2008/3/7, Tom Lane <[email protected]>:\n>\n> \"Pavel Rotek\" <[email protected]> writes:\n> > No i do not mean long running transactions... Update of log entry\n> (update of\n> > row in dataaction) is performed in series of short transactions, but\n> during\n> > short transaction there is a lot of change log value, flush, change log\n> > value ,flush ..... change log value, flush actions (flush means perform\n> > flush operation via JDBC driver). I'm not sure if this flush affects\n> toast\n> > space... Maybe this is the reason.\n>\n>\n> You mean that you build up the 5MB log entry by adding a few lines at a\n> time? That's going to consume horrid amounts of toast space, because\n> each time you add a few lines, an entire new toasted field value is\n> created.\n\n\nwell, this will be the main problem... But... do uncomitted trasactions\naffect toast space?\n\nIf you have to do it that way, you'll need very frequent vacuums on this\n> table (not vacuum full, as noted already) to keep the toast space from\n> bloating too much. And make sure you've got max_fsm_pages set high\n> enough.\n\n\ni'll set max_fsm_pages to 1 000 000. 
It should be enough and set\nautovacuum_naptime to 10 minutes. May it be?\n\nIf you can restructure your code a bit, it might be better to accumulate\n> log values in a short-lived table and only store the final form of a log\n> entry into the main table.\n\n\nI'll try to refactor the code... My application do following thing... long\nrunning jobs (for example long imports) are broken into series of short\ntransactions to store snapshot of current state of long running job. Short\ntransaction consist of\n(begin tx, load previous log, do business action, append new log, flush, do\nbusiness action, append new log, flush, ... do business action, append new\nlog, flush, commit tx). Is it enough to avoid multiple \"append new log,\nflush\" in one short transaction and keep log changes for short transaction\nin the buffer (only one update of log attribute at the end of transaction)?\n>From your answer probably not, but i ask for sure, it will be less work. Or\nstore logs for each one partial transaction and concat all at the end of\nlong running job??\n\n regards, tom lane\n>\n\n2008/3/7, Tom Lane <[email protected]>:\n\"Pavel Rotek\" <[email protected]> writes: > No i do not mean long running transactions... Update of log entry (update of > row in dataaction) is performed in series of short transactions, but during\n > short transaction there is a lot of change log value, flush, change log > value ,flush ..... change log value, flush actions (flush means perform > flush operation via JDBC driver). I'm not sure if this flush affects toast\n > space... Maybe this is the reason. You mean that you build up the 5MB log entry by adding a few lines at a time?  That's going to consume horrid amounts of toast space, because each time you add a few lines, an entire new toasted field value is\n created.well, this will be the main problem... But... do uncomitted trasactions affect toast space?\n If you have to do it that way, you'll need very frequent vacuums on this table (not vacuum full, as noted already) to keep the toast space from bloating too much.  And make sure you've got max_fsm_pages set high\n enough. i'll set max_fsm_pages to 1 000 000. It should be enough and set autovacuum_naptime to 10 minutes. May it be?\n If you can restructure your code a bit, it might be better to accumulate log values in a short-lived table and only store the final form of a log entry into the main table.I'll try to refactor the code... My application do following thing... long running jobs (for example long imports) are broken into series of short transactions to store snapshot of current state of long running job. Short transaction consist of \n(begin tx, load previous log, do business action, append new log, flush, do business action, append new log, flush, ... do business action, append new log, flush, commit tx). Is it enough to avoid multiple \"append new log, flush\" in one short transaction and keep log changes for short transaction in the buffer (only one update of log attribute at the end of transaction)? From your answer probably not, but i ask for sure, it will be less work. Or store logs for each one partial transaction and concat all at the end of long running job??\n                        regards, tom lane", "msg_date": "Fri, 7 Mar 2008 15:11:55 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "On Fri, 7 Mar 2008, Pavel Rotek wrote:\n> well, this will be the main problem... But... 
do uncomitted trasactions\n> affect toast space?\n\nI think the demonstrated answer to this is yes.\n\n> (begin tx, load previous log, do business action, append new log, flush, do\n> business action, append new log, flush, ... do business action, append new\n> log, flush, commit tx).\n\nIf all you're doing is appending to the end of the log, why don't you make \neach \"append\" a new row in a table. Instead of building massive rows, use \nthe database for what it was designed for, and have many smaller \nindependent rows.\n\nMatthew\n\n-- \n\"To err is human; to really louse things up requires root\n privileges.\" -- Alexander Pope, slightly paraphrased\n", "msg_date": "Fri, 7 Mar 2008 14:19:47 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "2008/3/7, Matthew <[email protected]>:\n>\n> > \"Pavel Rotek\" <[email protected]> writes:\n> >> No i do not mean long running transactions... Update of log entry\n> (update of\n> >> row in dataaction) is performed in series of short transactions, but\n> during\n> >> short transaction there is a lot of change log value, flush, change log\n> >> value ,flush ..... change log value, flush actions (flush means perform\n> >> flush operation via JDBC driver). I'm not sure if this flush affects\n> toast\n> >> space... Maybe this is the reason.\n>\n>\n> On Fri, 7 Mar 2008, Tom Lane wrote:\n> > You mean that you build up the 5MB log entry by adding a few lines at a\n> > time? That's going to consume horrid amounts of toast space, because\n> > each time you add a few lines, an entire new toasted field value is\n> > created.\n>\n>\n> Moreover, what is the point of flushing data to Postgres without\n> committing the transaction, if you're only going to overwrite the data\n> later. You don't get any level of protection for your data just by\n> flushing it to Postgres - you have to commit the transaction for that to\n> happen. In my opinion, you should just be generating the log entry in\n> memory entirely, and then flushing it in a transaction commit when it's\n> finished, since you're obviously holding it all in memory all the time\n> anyway.\n\n\nBecause i use kind of hibrid access to work with data in database (both\nhibernate and plain JDBC queries shares the same connection) and when i want\nto see data saved via hibernate in JDBC queries, i have to do flush of\nhibernate session... :-(\n\n> If you have to do it that way, you'll need very frequent vacuums on this\n> > table (not vacuum full, as noted already) to keep the toast space from\n> > bloating too much. And make sure you've got max_fsm_pages set high\n> > enough.\n>\n>\n> Agreed, this is kind of the worst-case-scenario for table bloat.\n>\n> Matthew\n>\n> --\n> Now the reason people powdered their faces back then was to change the\n> values\n> \"s\" and \"n\" in this equation here. - Computer science\n> lecturer\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2008/3/7, Matthew <[email protected]>:\n> \"Pavel Rotek\" <[email protected]> writes: >> No i do not mean long running transactions... Update of log entry (update of >> row in dataaction) is performed in series of short transactions, but during\n >> short transaction there is a lot of change log value, flush, change log >> value ,flush ..... 
change log value, flush actions (flush means perform >> flush operation via JDBC driver). I'm not sure if this flush affects toast\n >> space... Maybe this is the reason. On Fri, 7 Mar 2008, Tom Lane wrote: > You mean that you build up the 5MB log entry by adding a few lines at a > time?  That's going to consume horrid amounts of toast space, because\n > each time you add a few lines, an entire new toasted field value is > created. Moreover, what is the point of flushing data to Postgres without committing the transaction, if you're only going to overwrite the data\n later. You don't get any level of protection for your data just by flushing it to Postgres - you have to commit the transaction for that to happen. In my opinion, you should just be generating the log entry in\n memory entirely, and then flushing it in a transaction commit when it's finished, since you're obviously holding it all in memory all the time anyway.Because i use kind of hibrid access to work with data in database (both hibernate and plain JDBC queries shares the same connection) and when i want to see data saved via hibernate in JDBC queries, i have to do flush of hibernate session... :-(\n > If you have to do it that way, you'll need very frequent vacuums on this\n > table (not vacuum full, as noted already) to keep the toast space from > bloating too much.  And make sure you've got max_fsm_pages set high > enough. Agreed, this is kind of the worst-case-scenario for table bloat.\n Matthew -- Now the reason people powdered their faces back then was to change the values \"s\" and \"n\" in this equation here.                 - Computer science lecturer \n -- Sent via pgsql-performance mailing list ([email protected]) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 7 Mar 2008 15:22:17 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "2008/3/7, Matthew <[email protected]>:\n>\n> On Fri, 7 Mar 2008, Pavel Rotek wrote:\n> > well, this will be the main problem... But... do uncomitted trasactions\n> > affect toast space?\n>\n>\n> I think the demonstrated answer to this is yes.\n>\n>\n> > (begin tx, load previous log, do business action, append new log, flush,\n> do\n> > business action, append new log, flush, ... do business action, append\n> new\n> > log, flush, commit tx).\n>\n>\n> If all you're doing is appending to the end of the log, why don't you make\n> each \"append\" a new row in a table. Instead of building massive rows, use\n> the database for what it was designed for, and have many smaller\n> independent rows.\n>\n> Matthew\n\n\nBecause I modify existing application, where logic is already given :-(. If\nno other way exists, i'll have to do refactoring...\n\n--\n> \"To err is human; to really louse things up requires root\n> privileges.\" -- Alexander Pope, slightly paraphrased\n>\n\n2008/3/7, Matthew <[email protected]>:\nOn Fri, 7 Mar 2008, Pavel Rotek wrote: > well, this will be the main problem... But... do uncomitted trasactions > affect toast space? I think the demonstrated answer to this is yes. \n > (begin tx, load previous log, do business action, append new log, flush, do > business action, append new log, flush, ... do business action, append new > log, flush, commit tx). If all you're doing is appending to the end of the log, why don't you make\n each \"append\" a new row in a table. 
Instead of building massive rows, use the database for what it was designed for, and have many smaller independent rows. MatthewBecause I modify existing application, where logic is already given :-(. If no other way exists, i'll have to do refactoring...\n -- \"To err is human; to really louse things up requires root  privileges.\"                 -- Alexander Pope, slightly paraphrased", "msg_date": "Fri, 7 Mar 2008 15:27:47 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "\"Pavel Rotek\" <[email protected]> writes:\n> 2008/3/7, Tom Lane <[email protected]>:\n>> You mean that you build up the 5MB log entry by adding a few lines at a\n>> time? That's going to consume horrid amounts of toast space, because\n>> each time you add a few lines, an entire new toasted field value is\n>> created.\n\n> well, this will be the main problem... But... do uncomitted trasactions\n> affect toast space?\n\nSure. Where do you think the data goes? It's gotta be stored\nsomeplace. Every UPDATE operation that changes a toasted field will\nconsume space for a fresh copy of that field, whether it ever commits or\nnot. You need VACUUM to reclaim the space eaten by no-longer-accessible\ncopies.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Mar 2008 09:31:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows " }, { "msg_contents": "In response to \"Pavel Rotek\" <[email protected]>:\n\n> 2008/3/7, Tom Lane <[email protected]>:\n\n[snip]\n \n> > If you have to do it that way, you'll need very frequent vacuums on this\n> > table (not vacuum full, as noted already) to keep the toast space from\n> > bloating too much. And make sure you've got max_fsm_pages set high\n> > enough.\n> \n> i'll set max_fsm_pages to 1 000 000. It should be enough and set\n> autovacuum_naptime to 10 minutes. May it be?\n\nNo. Keep naptime at 1 minute. If it comes around and there's nothing\nto do, the overhead is minimal. If you set the naptime too high, it might\nhave too much to do on the next cycle and then it'll bog things down.\nAlso, it only checks 1 database per cycle, so setting it to 10 minutes\nmeans a _minimum_ of 40 minutes between checks (because you have a template0,\ntemplate1, postgres, and your database minimum)\n\nAlso, keep an eye on your database bloat to ensure the various\nautovacuum_*_scale_factor and related settings are appropriate.\nIt's been found that these are often not aggressive enough for\ngood maintenance. If you see bloat even with autovacuum running,\nreduce those values.\n\nPersonally, I'd recommend running a MRTG graph that graphs the size\nof this table so you can easily watch to see if your config tweaks\nare getting the job done or not. 
And remember that _some_ bloat is\nexpected and normal for operation.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 7 Mar 2008 09:34:53 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Toast space grows" }, { "msg_contents": "Thanks to all for time and valuable help,\n\nPavel Rotek\n\n2008/3/7, Bill Moran <[email protected]>:\n>\n> In response to \"Pavel Rotek\" <[email protected]>:\n>\n>\n> > 2008/3/7, Tom Lane <[email protected]>:\n>\n>\n> [snip]\n>\n>\n> > > If you have to do it that way, you'll need very frequent vacuums on\n> this\n> > > table (not vacuum full, as noted already) to keep the toast space from\n> > > bloating too much. And make sure you've got max_fsm_pages set high\n> > > enough.\n> >\n> > i'll set max_fsm_pages to 1 000 000. It should be enough and set\n> > autovacuum_naptime to 10 minutes. May it be?\n>\n>\n> No. Keep naptime at 1 minute. If it comes around and there's nothing\n> to do, the overhead is minimal. If you set the naptime too high, it might\n> have too much to do on the next cycle and then it'll bog things down.\n> Also, it only checks 1 database per cycle, so setting it to 10 minutes\n> means a _minimum_ of 40 minutes between checks (because you have a\n> template0,\n> template1, postgres, and your database minimum)\n>\n> Also, keep an eye on your database bloat to ensure the various\n> autovacuum_*_scale_factor and related settings are appropriate.\n> It's been found that these are often not aggressive enough for\n> good maintenance. If you see bloat even with autovacuum running,\n> reduce those values.\n>\n> Personally, I'd recommend running a MRTG graph that graphs the size\n> of this table so you can easily watch to see if your config tweaks\n> are getting the job done or not. And remember that _some_ bloat is\n> expected and normal for operation.\n>\n>\n> --\n>\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n\nThanks to all for time and valuable help,Pavel Rotek2008/3/7, Bill Moran <[email protected]>:\nIn response to \"Pavel Rotek\" <[email protected]>: > 2008/3/7, Tom Lane <[email protected]>: \n[snip] > > If you have to do it that way, you'll need very frequent vacuums on this > > table (not vacuum full, as noted already) to keep the toast space from > > bloating too much.  And make sure you've got max_fsm_pages set high\n > > enough. > > i'll set max_fsm_pages to 1 000 000. It should be enough and set > autovacuum_naptime to 10 minutes. May it be? No.  Keep naptime at 1 minute.  If it comes around and there's nothing\n to do, the overhead is minimal.  If you set the naptime too high, it might have too much to do on the next cycle and then it'll bog things down. Also, it only checks 1 database per cycle, so setting it to 10 minutes\n means a _minimum_ of 40 minutes between checks (because you have a template0, template1, postgres, and your database minimum) Also, keep an eye on your database bloat to ensure the various autovacuum_*_scale_factor and related settings are appropriate.\n It's been found that these are often not aggressive enough for good maintenance.  If you see bloat even with autovacuum running, reduce those values. 
Personally, I'd recommend running a MRTG graph that graphs the size\n of this table so you can easily watch to see if your config tweaks are getting the job done or not.  And remember that _some_ bloat is expected and normal for operation. -- Bill Moran Collaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/ [email protected] Phone: 412-422-3463x4023", "msg_date": "Fri, 7 Mar 2008 16:03:24 +0100", "msg_from": "\"Pavel Rotek\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Toast space grows" } ]
[ { "msg_contents": "In a select query i have used the join conditions, will it affect query\nperformance.\n\nExplicitly I didn't used the join command, Will it make any difference.\n\nMy Query is:\nSELECT test_log.test_id, test_log.test_id, test_log.test_id,\nuser_details.first_name, group_details.group_name, site_details.site_name,\ntest_projects.project_name, test_campaigns.campaign_name,\ntest_log.test_stime, test_log.test_duration, test_log.test_etime,\ntest_log.dialed_no, test_log.tester_id, test_log.voice_recorded,\ntest_log.screen_recorded, test_log.agent_id, test_log.group_id,\ntest_log.site_id, test_log.dtmf_values FROM\nuser_details,group_details,site_details,test_log,\nsv_agent_map,test_campaigns,test_projects WHERE\nsv_agent_map.sv_user_id='347' AND sv_agent_map.sv_group_id='13' AND\nsv_agent_map.sv_site_id='10' AND\nuser_details.user_id=sv_agent_map.agent_user_id and\ngroup_details.group_id=sv_agent_map.agent_group_id and\nsite_details.site_id=sv_agent_map.agent_site_id and\ntest_log.agent_id=sv_agent_map.agent_user_id and\ntest_log.campaign_id=test_campaigns.campaign_id and\ntest_projects.project_id=test_log.project_id ORDER BY test_log.test_id limit\n5000.\n\nThe test_log has 50 million records.\n\nThe postgres version is 7.4\n\nThe Explain Analysis Output is:\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=206342.52..206345.46 rows=1178 width=420) (actual time=\n42514.526..42525.443 rows=5000 loops=1)\n -> Sort (cost=206342.52..206345.46 rows=1178 width=420) (actual time=\n42514.517..42519.466 rows=5000 loops=1)\n Sort Key: test_log.test_id\n -> Hash Join (cost=10.22..206282.43 rows=1178 width=420) (actual\ntime=1.297..37852.353 rows=281603 loops=1)\n Hash Cond: (\"outer\".agent_id = \"inner\".user_id)\n -> Hash Join (cost=7.11..206256.15 rows=2278 width=361)\n(actual time=0.923..34630.591 rows=281603 loops=1)\n Hash Cond: (\"outer\".campaign_id = \"inner\".campaign_id)\n -> Hash Join (cost=5.77..206209.22 rows=2281\nwidth=272) (actual time=0.789..31832.361 rows=281603 loops=1)\n Hash Cond: (\"outer\".agent_group_id =\n\"inner\".group_id)\n -> Hash Join (cost=4.51..206153.11 rows=6407\nwidth=228) (actual time=0.656..28964.197 rows=281603 loops=1)\n Hash Cond: (\"outer\".project_id =\n\"inner\".project_id)\n -> Hash Join\n(cost=3.24..206055.70rows=6415 width=139) (actual time=\n0.461..26168.581 rows=281603 loops=1)\n Hash Cond: (\"outer\".agent_id =\n\"inner\".agent_user_id)\n -> Seq Scan on test_log (cost=\n0.00..180692.90 rows=5013690 width=83) (actual\ntime=0.005..18942.968rows=5061643 loops=1)\n -> Hash (cost=3.24..3.24 rows=1\nwidth=56) (actual time=0.362..0.362 rows=0 loops=1)\n -> Hash Join\n(cost=2.04..3.24rows=1 width=56) (actual time=\n0.256..0.325 rows=31 loops=1)\n Hash Cond:\n(\"outer\".site_id = \"inner\".agent_site_id)\n -> Seq Scan on\nsite_details (cost=0.00..1.13 rows=13 width=52) (actual\ntime=0.005..0.018rows=13 loops=1)\n -> Hash (cost=\n2.03..2.03 rows=1 width=12) (actual time=0.156..0.156 rows=0 loops=1)\n -> Seq Scan on\nsv_agent_map (cost=0.00..2.03 rows=1 width=12) (actual\ntime=0.031..0.113rows=31 loops=1)\n Filter:\n((sv_user_id = 347) AND (sv_group_id = 13) AND (sv_site_id = 10))\n -> Hash (cost=1.21..1.21 rows=21\nwidth=97) (actual time=0.145..0.145 rows=0 loops=1)\n -> Seq Scan on test_projects (cost=\n0.00..1.21 rows=21 width=97) (actual time=0.010..0.042 rows=21 loops=1)\n -> Hash (cost=1.21..1.21 
rows=21 width=52)\n(actual time=0.077..0.077 rows=0 loops=1)\n -> Seq Scan on group_details (cost=\n0.00..1.21 rows=21 width=52) (actual time=0.010..0.039 rows=21 loops=1)\n -> Hash (cost=1.27..1.27 rows=27 width=97) (actual\ntime=0.084..0.084 rows=0 loops=1)\n -> Seq Scan on test_campaigns\n(cost=0.00..1.27rows=27 width=97) (actual time=\n0.011..0.043 rows=27 loops=1)\n -> Hash (cost=2.89..2.89 rows=89 width=67) (actual time=\n0.245..0.245 rows=0 loops=1)\n -> Seq Scan on user_details (cost=0.00..2.89 rows=89\nwidth=67) (actual time=0.019..0.154 rows=89 loops=1)\n Total runtime: 42548.932 ms\n(30 rows)\n\n\n-- \nWith Best Regards,\nPetchimuthulingam S",
"msg_date": "Sat, 8 Mar 2008 09:37:01 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "join query performance" }, { "msg_contents": "petchimuthu lingam wrote:\n> \n> In a select query i have used the join conditions, will it affect query \n> performance.\n> \n> Explicitly I didn't used the join command, Will it make any difference.\n\nIt'll make sure you don't miss any join conditions between two tables so \nit'll be helpful in that respect, I didn't read your whole query but \nwith a 7 table join it's very easy to miss doing a match on one or more \ntables.\n\nIt shouldn't take more than a couple of minutes to rewrite your query \nand find out the answers..\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Mon, 10 Mar 2008 10:42:33 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join query performance" } ]
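For reference, a sketch of the same seven-table query rewritten with explicit JOIN ... ON syntax, which is what Chris suggests trying. Table and column names are taken from the original post, the select list is abbreviated, and the query is untested against the real schema; with the default join_collapse_limit the planner should consider the same join orders either way, the benefit being that a forgotten join condition becomes obvious.

SELECT tl.test_id, ud.first_name, gd.group_name, sd.site_name,
       tp.project_name, tc.campaign_name, tl.test_stime, tl.test_etime
FROM sv_agent_map sam
JOIN user_details   ud ON ud.user_id     = sam.agent_user_id
JOIN group_details  gd ON gd.group_id    = sam.agent_group_id
JOIN site_details   sd ON sd.site_id     = sam.agent_site_id
JOIN test_log       tl ON tl.agent_id    = sam.agent_user_id
JOIN test_campaigns tc ON tc.campaign_id = tl.campaign_id
JOIN test_projects  tp ON tp.project_id  = tl.project_id
WHERE sam.sv_user_id = 347
  AND sam.sv_group_id = 13
  AND sam.sv_site_id = 10
ORDER BY tl.test_id
LIMIT 5000;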
[ { "msg_contents": "VE4TQQBN\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nVE4TQQBN-- With Best Regards,Petchimuthulingam S", "msg_date": "Sat, 8 Mar 2008 09:46:06 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?Confirma=E7=E3o_de_envio_/_Sending_con?=\n\t=?ISO-8859-1?Q?firmation_(captchaid:13266b402bd3)?=" } ]
[ { "msg_contents": "VQQ7HE18\n\nOn Sat, Mar 8, 2008 at 9:50 AM, <[email protected]> wrote:\n\n> A mensagem de email enviada para [email protected] confirmação para ser entregue. Por favor, responda este e-mail\n> informando os caracteres que você vê na imagem abaixo.\n>\n> The email message sent to [email protected] requires a\n> confirmation to be delivered. Please, answer this email informing the\n> characters that you see in the image below.\n>\n> Não remova a próxima linha / Don't remove next line\n> captchakey:asbTI4NElzUkMwMTAxMTE\n>\n>\n>\n>\n>\n>\n>\n>\n\n\n-- \nWith Best Regards,\nPetchimuthulingam S", "msg_date": "Sat, 8 Mar 2008 10:03:00 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?Re:_Confirma=E7=E3o_de_envio_/_Sending_c?=\n\t=?ISO-8859-1?Q?onfirmation_(captchaid:13266b402f09)?=" }, { "msg_contents": "petchimuthu lingam escribi�:\n> VQQ7HE18\n\nPlease stop sending this nonsense. These \"sending confirmations\" are\nnot necessary -- they are sent by a clueless user whose identity we've\nas of yet unable to determine (otherwise we would have kicked him from\nthe list.)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Sat, 8 Mar 2008 11:17:50 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: =?iso-8859-1?Q?Confirma?= =?iso-8859-1?B?5+M=?=\n\t=?iso-8859-1?Q?o?= de envio / Sending confirmation\n\t(captchaid:13266b402f09)" } ]
[ { "msg_contents": "Hi!\n\nAs part of a data warehousing project, I need to pre-process data downloaded\nfrom an external source, in the form of several large flat files. This\npreprocessing entails checking the validity of various data items, and\ndiscarding those that fail to pass all the checks.\n\nCurrently, the code that performs the checks generates intermediate\ntemporary tables of those bits of data that are invalid in some way. (This\nsimplifies the process of generating various quality-control reports about\nthe incoming data).\n\nThe next step is to weed out the bad data from the main tables, and here's\nwhere I begin to get lost.\n\nTo be concrete, suppose I have a table T consisting of 20 million rows,\nkeyed on some column K. (There are no formal constrains on T at the moment,\nbut one could define column K as T's primary key.) Suppose also that I have\na second table B (for \"bad\") consisting of 20 thousand rows, and also keyed\non some column K. For each value of B.K there is exactly one row in T such\nthat T.K = B.K, and the task is to delete all these rows from T as\nefficiently as possible.\n\nMy naive approach would something like\n\nDELETE FROM T WHERE T.K IN ( SELECT K FROM B );\n\n...which, according to EXPLAIN, is a terrible idea, because it involves\nsequentially scanning all 20 million rows of T just to delete about only\n0.1% of them.\n\nIt seems to me better to sequentially scan B and rely on an index on T to\nzero-in the few rows in T that must be deleted.\n\nIs this strategy something that can be done with plain SQL (even if to do\nthis I must produce additional helper tables, indices, etc.), or must I\nwrite a stored procedure to implement it?\n\n\nTIA!\n\nKynn\n\nHi!As part of a data warehousing project, I need to pre-process data downloaded from an external source, in the form of several large flat files.  This preprocessing entails checking the validity of various data items, and discarding those that fail to pass all the checks.\nCurrently, the code that performs the checks generates intermediate temporary tables of those bits of data that are invalid in some way.  (This simplifies the process of generating various quality-control reports about the incoming data).\nThe next step is to weed out the bad data from the main tables, and here's where I begin to get lost.To be concrete, suppose I have a table T consisting of 20 million rows, keyed on some column K.  (There are no formal constrains on T at the moment, but one could define column K as T's primary key.)  Suppose also that I have a second table B (for \"bad\") consisting of 20 thousand rows, and also keyed on some column K.  
For each value of B.K there is exactly one row in T such that T.K = B.K, and the task is to delete all these rows from T as efficiently as possible.\nMy naive approach would something likeDELETE FROM T WHERE T.K IN ( SELECT K FROM B );\n...which, according to EXPLAIN, is a terrible idea, because it involves sequentially scanning all 20 million rows of T just to delete about only 0.1% of them.\nIt seems to me better to sequentially scan B and rely on an index on T to zero-in the few rows in T that must be deleted.Is this strategy something that can be done with plain SQL (even if to do this I must produce additional helper tables, indices, etc.), or must I write a stored procedure to implement it?\nTIA!Kynn", "msg_date": "Sat, 8 Mar 2008 12:31:30 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Joins and DELETE FROM" }, { "msg_contents": "Kynn Jones wrote:\n> Hi!\n> \n> As part of a data warehousing project, I need to pre-process data downloaded\n> from an external source, in the form of several large flat files. This\n> preprocessing entails checking the validity of various data items, and\n> discarding those that fail to pass all the checks.\n> \n> Currently, the code that performs the checks generates intermediate\n> temporary tables of those bits of data that are invalid in some way. (This\n> simplifies the process of generating various quality-control reports about\n> the incoming data).\n> \n> The next step is to weed out the bad data from the main tables, and here's\n> where I begin to get lost.\n> \n> To be concrete, suppose I have a table T consisting of 20 million rows,\n> keyed on some column K. (There are no formal constrains on T at the moment,\n> but one could define column K as T's primary key.) Suppose also that I have\n> a second table B (for \"bad\") consisting of 20 thousand rows, and also keyed\n> on some column K. For each value of B.K there is exactly one row in T such\n> that T.K = B.K, and the task is to delete all these rows from T as\n> efficiently as possible.\n> \n> My naive approach would something like\n> \n> DELETE FROM T WHERE T.K IN ( SELECT K FROM B );\n> \n> ...which, according to EXPLAIN, is a terrible idea, because it involves\n> sequentially scanning all 20 million rows of T just to delete about only\n> 0.1% of them.\n> \n> It seems to me better to sequentially scan B and rely on an index on T to\n> zero-in the few rows in T that must be deleted.\n> \n> Is this strategy something that can be done with plain SQL (even if to do\n> this I must produce additional helper tables, indices, etc.), or must I\n> write a stored procedure to implement it?\n\nThe planner knows how to produce such a plan, so it must thinking that \nit's not the fastest plan.\n\nHave you ANALYZEd the tables? You do have an index on T.K, right? What \ndoes EXPLAIN ANALYZE output look like? (you can do BEGIN; EXPLAIN \nANALYZE ...; ROLLBACK; if you don't want to actually delete the rows)\n\nThe sequential scan really could be the fastest way to do that. If those \n0.1% of the rows are scattered randomly across the table, an index scan \nmight end up fetching almost every page, but using random I/O which is \nmuch slower than a sequential read. For example, assuming you can fit \n100 rows on a page, deleting 0.1% of the rows would have to access ~ 10% \nof the pages. 
At that point, it can easily be cheaper to just seq scan it.\n\nYou can try to coerce the planner to choose the indexscan with \"set \nenable_seqscan=off\", to see how fast it actually is.\n\nYou could also write the query as DELETE FROM t USING b WHERE t.k = b.k, \nbut I doubt it makes much difference.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 08 Mar 2008 18:01:41 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and DELETE FROM" }, { "msg_contents": "On Sat, Mar 8, 2008 at 1:01 PM, Heikki Linnakangas <[email protected]>\nwrote:\n\n> Kynn Jones wrote:\n> > Hi!\n> >\n> > As part of a data warehousing project, I need to pre-process data\n> downloaded\n> > from an external source, in the form of several large flat files. This\n> > preprocessing entails checking the validity of various data items, and\n> > discarding those that fail to pass all the checks.\n> >\n> > Currently, the code that performs the checks generates intermediate\n> > temporary tables of those bits of data that are invalid in some way.\n> (This\n> > simplifies the process of generating various quality-control reports\n> about\n> > the incoming data).\n> >\n> > The next step is to weed out the bad data from the main tables, and\n> here's\n> > where I begin to get lost.\n> >\n> > To be concrete, suppose I have a table T consisting of 20 million rows,\n> > keyed on some column K. (There are no formal constrains on T at the\n> moment,\n> > but one could define column K as T's primary key.) Suppose also that I\n> have\n> > a second table B (for \"bad\") consisting of 20 thousand rows, and also\n> keyed\n> > on some column K. For each value of B.K there is exactly one row in T\n> such\n> > that T.K = B.K, and the task is to delete all these rows from T as\n> > efficiently as possible.\n> >\n> > My naive approach would something like\n> >\n> > DELETE FROM T WHERE T.K IN ( SELECT K FROM B );\n> >\n> > ...which, according to EXPLAIN, is a terrible idea, because it involves\n> > sequentially scanning all 20 million rows of T just to delete about only\n> > 0.1% of them.\n> >\n> > It seems to me better to sequentially scan B and rely on an index on T\n> to\n> > zero-in the few rows in T that must be deleted.\n> >\n> > Is this strategy something that can be done with plain SQL (even if to\n> do\n> > this I must produce additional helper tables, indices, etc.), or must I\n> > write a stored procedure to implement it?\n>\n> The planner knows how to produce such a plan, so it must thinking that\n> it's not the fastest plan.\n\n\nCurious.\n\n\n> Have you ANALYZEd the tables? You do have an index on T.K, right? What\n> does EXPLAIN ANALYZE output look like? (you can do BEGIN; EXPLAIN\n> ANALYZE ...; ROLLBACK; if you don't want to actually delete the rows)\n\n\nYes, all the tables have been vacuumed and analyzed, and there's an index on\nT.K (and on also on B.K for good measure).\n\n\n> You can try to coerce the planner to choose the indexscan with \"set\n> enable_seqscan=off\", to see how fast it actually is.\n\n\nThanks, that was a useful trick. I tried it on a simpler case: just the\nnatural join of T and B. (I also used smaller versions of the table, but\nwith a size ratio similar to the one in my hypothetical example.) 
Indeed,\nwhen I turn off sequential scans, the resulting query is over 2X faster.\n\nmy_db=> SET ENABLE_SEQSCAN TO ON;\nmy_db=> EXPLAIN ANALYZE SELECT * FROM T NATURAL JOIN B;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=7634.14..371997.64 rows=219784 width=13) (actual time=\n176.065..12041.486 rows=219784 loops=1)\n Hash Cond: (t.k = b.k)\n -> Seq Scan on t (cost=0.00..172035.56 rows=10509456 width=13) (actual\ntime=0.023..2379.407 rows=10509456 loops=1)\n -> Hash (cost=3598.84..3598.84 rows=219784 width=12) (actual time=\n171.868..171.868 rows=219784 loops=1)\n -> Seq Scan on b (cost=0.00..3598.84 rows=219784 width=12)\n(actual time=0.013..49.626 rows=219784 loops=1)\n Total runtime: 12064.966 ms\n(6 rows)\n\nmy_db=> SET ENABLE_SEQSCAN TO OFF;\nmy_db=> EXPLAIN ANALYZE SELECT * FROM T NATURAL JOIN B;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..423589.69 rows=219784 width=13) (actual time=\n0.114..5449.808 rows=219784 loops=1)\n Merge Cond: (t.k = b.k)\n -> Index Scan using idx__t on t (cost=0.00..386463.71 rows=10509456\nwidth=13) (actual time=0.059..3083.182 rows=10509414 loops=1)\n -> Index Scan using idx__b on b (cost=0.00..8105.04 rows=219784\nwidth=12) (actual time=0.044..69.659 rows=219784 loops=1)\n Total runtime: 5473.812 ms\n(5 rows)\n\n\nHonestly, I still have not learned to fully decipher the output of\nEXPLAN/EXPLAIN ANALYZE. (The PostgreSQL docs are generally superb, IMO, but\nthere's still a big hole on the subject of the query planner, including the\ninterpretation of these query plans.)\n\nSo it seems like turning off ENABLE_SEQSCAN is the way to go. I wonder how\nmuch faster the query would be if I could selectively turn of the sequential\nscan on T. (The one on B seems to me reasonable.)\n\nYou could also write the query as DELETE FROM t USING b WHERE t.k = b.k,\n\nbut I doubt it makes much difference.\n\n\nYou're right: no difference at all (same query plan).\n\nThanks again!\n\nKynn\n\nOn Sat, Mar 8, 2008 at 1:01 PM, Heikki Linnakangas <[email protected]> wrote:\nKynn Jones wrote:\n> Hi!\n>\n> As part of a data warehousing project, I need to pre-process data downloaded\n> from an external source, in the form of several large flat files.  This\n> preprocessing entails checking the validity of various data items, and\n> discarding those that fail to pass all the checks.\n>\n> Currently, the code that performs the checks generates intermediate\n> temporary tables of those bits of data that are invalid in some way.  (This\n> simplifies the process of generating various quality-control reports about\n> the incoming data).\n>\n> The next step is to weed out the bad data from the main tables, and here's\n> where I begin to get lost.\n>\n> To be concrete, suppose I have a table T consisting of 20 million rows,\n> keyed on some column K.  (There are no formal constrains on T at the moment,\n> but one could define column K as T's primary key.)  Suppose also that I have\n> a second table B (for \"bad\") consisting of 20 thousand rows, and also keyed\n> on some column K.  
For each value of B.K there is exactly one row in T such\n> that T.K = B.K, and the task is to delete all these rows from T as\n> efficiently as possible.\n>\n> My naive approach would something like\n>\n> DELETE FROM T WHERE T.K IN ( SELECT K FROM B );\n>\n> ...which, according to EXPLAIN, is a terrible idea, because it involves\n> sequentially scanning all 20 million rows of T just to delete about only\n> 0.1% of them.\n>\n> It seems to me better to sequentially scan B and rely on an index on T to\n> zero-in the few rows in T that must be deleted.\n>\n> Is this strategy something that can be done with plain SQL (even if to do\n> this I must produce additional helper tables, indices, etc.), or must I\n> write a stored procedure to implement it?\n\nThe planner knows how to produce such a plan, so it must thinking that\nit's not the fastest plan.Curious. \nHave you ANALYZEd the tables? You do have an index on T.K, right? What\ndoes EXPLAIN ANALYZE output look like? (you can do BEGIN; EXPLAIN\nANALYZE ...; ROLLBACK; if you don't want to actually delete the rows)Yes, all the tables have been vacuumed and analyzed, and there's an index on T.K (and on also on B.K for good measure). \n You can try to coerce the planner to choose the indexscan with \"set\nenable_seqscan=off\", to see how fast it actually is.Thanks, that was a useful trick.  I tried it on a simpler case: just the natural join of T and B.  (I also used smaller versions of the table, but with a size ratio similar to the one in my hypothetical example.)  Indeed, when I turn off sequential scans, the resulting query is over 2X faster. \nmy_db=> SET ENABLE_SEQSCAN TO ON;my_db=> EXPLAIN ANALYZE SELECT * FROM T NATURAL JOIN B;                                                       QUERY PLAN                                                       \n------------------------------------------------------------------------------------------------------------------------ Hash Join  (cost=7634.14..371997.64 rows=219784 width=13) (actual time=176.065..12041.486 rows=219784 loops=1)\n   Hash Cond: (t.k = b.k)   ->  Seq Scan on t  (cost=0.00..172035.56 rows=10509456 width=13) (actual time=0.023..2379.407 rows=10509456 loops=1)   ->  Hash  (cost=3598.84..3598.84 rows=219784 width=12) (actual time=171.868..171.868 rows=219784 loops=1)\n         ->  Seq Scan on b  (cost=0.00..3598.84 rows=219784 width=12) (actual time=0.013..49.626 rows=219784 loops=1) Total runtime: 12064.966 ms(6 rows)\nmy_db=> SET ENABLE_SEQSCAN TO OFF;my_db=> EXPLAIN ANALYZE SELECT * FROM T NATURAL JOIN B;                                                              QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------\n Merge Join  (cost=0.00..423589.69 rows=219784 width=13) (actual time=0.114..5449.808 rows=219784 loops=1)   Merge Cond: (t.k = b.k)   ->  Index Scan using idx__t on t  (cost=0.00..386463.71 rows=10509456 width=13) (actual time=0.059..3083.182 rows=10509414 loops=1)\n   ->  Index Scan using idx__b on b  (cost=0.00..8105.04 rows=219784 width=12) (actual time=0.044..69.659 rows=219784 loops=1) Total runtime: 5473.812 ms(5 rows)\nHonestly, I still have not learned to fully decipher the output of EXPLAN/EXPLAIN ANALYZE.  (The PostgreSQL docs are generally superb, IMO, but there's still a big hole on the subject of the query planner, including the interpretation of these query plans.)\nSo it seems like turning off ENABLE_SEQSCAN is the way to go.  
I wonder how much faster the query would be if I could selectively turn of the sequential scan on T.  (The one on B seems to me reasonable.)\n\nYou could also write the query as DELETE FROM t USING b WHERE t.k = b.k, \nbut I doubt it makes much difference.You're right: no difference at all (same query plan).\nThanks again!Kynn", "msg_date": "Sat, 8 Mar 2008 14:02:03 -0500", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Joins and DELETE FROM" }, { "msg_contents": "\"Kynn Jones\" <[email protected]> writes:\n> So it seems like turning off ENABLE_SEQSCAN is the way to go.\n\nTry reducing random_page_cost a bit instead. Also, have you got\neffective_cache_size set to something that's realistic for your\nmachine?\n\nOne problem with this test is that your smaller tables probably fit in\nmemory whereas the big ones may not, so it's not a given that your test\naccurately reflects how the real query will go down.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Mar 2008 15:08:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and DELETE FROM " }, { "msg_contents": "Kynn Jones wrote:\n> my_db=> SET ENABLE_SEQSCAN TO OFF;\n> my_db=> EXPLAIN ANALYZE SELECT * FROM T NATURAL JOIN B;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=0.00..423589.69 rows=219784 width=13) (actual time=\n> 0.114..5449.808 rows=219784 loops=1)\n> Merge Cond: (t.k = b.k)\n> -> Index Scan using idx__t on t (cost=0.00..386463.71 rows=10509456\n> width=13) (actual time=0.059..3083.182 rows=10509414 loops=1)\n> -> Index Scan using idx__b on b (cost=0.00..8105.04 rows=219784\n> width=12) (actual time=0.044..69.659 rows=219784 loops=1)\n> Total runtime: 5473.812 ms\n> (5 rows)\n\nThat's more like 2% of the rows, not 0.1%.\n\nNote that this still isn't the plan you were asking for, it's still \nscanning the whole index for t, not just looking up the keys from b. \nWhat you wanted is a nested loop join. You could try to turn \nenable_mergejoin=off as well if you want to coerce the planner even more...\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 08 Mar 2008 20:25:09 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins and DELETE FROM" }, { "msg_contents": "Thank you for your post. I finally spent some quality time with the query\nplanner section in the docs' server config chapter. Very instructive, even\nconsidering that most of it went over my head!\n\nOn Sat, Mar 8, 2008 at 4:08 PM, Tom Lane <[email protected]> wrote:\n\n...have you got effective_cache_size set to something that's realistic for\n> your machine?\n\n\nI guess not. It was the default value (128MB) on a machine with 4GB of RAM.\n It's not a dedicated server, though, so I'll set it to 1G for now.\n\nBut before doing so I need a clarification. The docs state that this\nparameter is used only for cost estimation, and has no effect on actual\nmemory allocations. I imagine that if other memory-related settings are not\nsomehow in line with it, it could lead to estimates that are out of touch\nwith reality. 
If this is correct what other memory-related parameters do I\nneed to adjust to ensure that both the planner's estimates and the actual\nexecution agree and fit well with the available memory?\n\nOne problem with this test is that your smaller tables probably fit in\n> memory whereas the big ones may not, so it's not a given that your test\n> accurately reflects how the real query will go down.\n>\n\nThat's a very helpful reminder. Thanks.\n\nKynn\n\nThank you for your post.  I finally spent some quality time with the query planner section in the docs' server config chapter.  Very instructive, even considering that most of it went over my head!\nOn Sat, Mar 8, 2008 at 4:08 PM, Tom Lane <[email protected]> wrote:\n...have you got effective_cache_size set to something that's realistic for your machine?I guess not.  It was the default value (128MB) on a machine with 4GB of RAM.  It's not a dedicated server, though, so I'll set it to 1G for now.\nBut before doing so I need a clarification.  The docs state that this parameter is used only for cost estimation, and has no effect on actual memory allocations.  I imagine that if other memory-related settings are not somehow in line with it, it could lead to estimates that are out of touch with reality.  If this is correct what other memory-related parameters do I need to adjust to ensure that both the planner's estimates and the actual execution agree and fit well with the available memory?\n\nOne problem with this test is that your smaller tables probably fit in\nmemory whereas the big ones may not, so it's not a given that your test\naccurately reflects how the real query will go down.That's a very helpful reminder.  Thanks.Kynn", "msg_date": "Tue, 11 Mar 2008 11:56:55 -0400", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Joins and DELETE FROM" } ]
[ { "msg_contents": "Hi!\n\n I'm testing an update on 66k rows on Postgresql, and it seems \nsomething is not right here.\n\n My server is a Quad-Xeon 3.2 Ghz with 2 GB RAM and a RAID 1 running \nFreeBSD 6.3 and PgSQL 8.3. My development machine is a PowerBook G4 \n1.67 Ghz with 2 GB RAM, OS X Leopard and PgSQL 8.3.\n\n I detected that an update in my application was runnning to slow. \nSo, I'm testing an update query with no conditions, just:\n\n UPDATE text_answer_mapping_ebt SET f1 = false;\n\n f1 is a boolean column, so it can't get much simpler than this. \nI've analysed and vaccumed several times, yet the results I get on the \nXeon are:\n\nEXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on text_answer_mapping_ebt (cost=0.00..13945.72 \nrows=265072 width=92) (actual time=21.123..1049.054 rows=66268 loops=1)\n Total runtime: 63235.363 ms\n(2 rows)\n\n On my powerbook, this runs on about 25 seconds.\n\n Also, when I do the same operation on a very similar-structured \ntable with less rows, I get *much* faster times:\n\nEXPLAIN ANALYZE UPDATE respondent_mapping_ebt SET f1 = false;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on respondent_mapping_ebt (cost=0.00..1779.03 rows=36003 \nwidth=68) (actual time=3.023..76.096 rows=12001 loops=1)\n Total runtime: 894.888 ms\n(2 rows)\n\n Of course that, less rows, less time, but how can 12k rows take \nless than one second, and 66k rows take more than one minute?\n\n I've read some stuff about PgSQL tuning, and played with the \nconfiguration files, but I keep getting the feeling that I'm doing \nthis in a blind way. I'm trying to guess the problem and avoid it. I'm \nsure there's a better way, but I can't seem to find it. My question \nis, how can I \"ask\" PgSQL what's happening? How can I avoid guessing, \nand be sure of what is causing this slowdown? Is some buffer too small \nfor this? Is this related to checkpoints?\n\n I would appreciate if someone could point me in the right \ndirection. Of course I don't need to say I'm relatively new to this \nkind of problems. :)\n\n Yours\n\nMiguel Arroz\n\nMiguel Arroz\nhttp://www.terminalapp.net\nhttp://www.ipragma.com", "msg_date": "Mon, 10 Mar 2008 02:21:39 +0000", "msg_from": "Miguel Arroz <[email protected]>", "msg_from_op": true, "msg_subject": "UPDATE 66k rows too slow" }, { "msg_contents": "Miguel Arroz <[email protected]> writes:\n> EXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on text_answer_mapping_ebt (cost=0.00..13945.72 \n> rows=265072 width=92) (actual time=21.123..1049.054 rows=66268 loops=1)\n> Total runtime: 63235.363 ms\n> (2 rows)\n\nHm, only one second to do the scan ...\n\nI'm thinking the extra time must be going into index updating or\nCHECK-constraint checking or some such overhead. 
Can we see the full\nschema definition of the table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Mar 2008 00:48:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UPDATE 66k rows too slow " }, { "msg_contents": "On Mon, 10 Mar 2008, Miguel Arroz wrote:\n\n> My question is, how can I \"ask\" PgSQL what's happening? How can I avoid \n> guessing, and be sure of what is causing this slowdown?\n\nThere are many pieces involved here, and any one or multiple of them could \nbe to blame. Someone may make a guess and get lucky about the cause, but \nthe only generic way to solve this sort of thing is to have a systematic \napproach that goes through the likely possible causes one by one until \nyou've discovered the source of the problem. Since as you say you're new \nto this, you've got the double task of learning that outline and then \nfinding out how to run each of the tests.\n\nFor your particular case, slow updates, I usually follow the following \nseries of tests. I happen to have articles on most of these sitting \naround because they're common issues:\n\n-Confirm disks are working as expected: \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\n-Look at differences between fsync commit behavior between the two \nsystems. It's often the case that when servers appear slower than \ndevelopment systems it's because the server is doing fsync properly, while \nthe development one is caching fsync in a way that is unsafe for database \nuse but much faster. \nhttp://www.postgresql.org/docs/8.3/static/wal-reliability.html is a brief \nintro to this while \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm goes \ninto extreme detail. The test_fsync section there is probably the most \nuseful one for your comparision.\n\n-Setup basic buffer memory parameters: \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm\n\n-VACUUM VERBOSE ANALYZE and make sure that's working properly. This \nrequires actually understanding the output from that command which is \n\"fun\" to figure out. A related topic is looking for index bloat which I \nhaven't found a good tutorial on yet.\n\n-Investigate whether checkpoints are to blame. Since you're running 8.3 \nyou can just turn on log_checkpoints and see how often they're showing up \nand get an idea how big the performance impact is. Increasing \ncheckpoint_segments is the usual first thing to do if this is the case.\n\n-Collect data with vmstat, iostat, and top to figure out what's happening \nduring the problem query\n\n-Look for application problems (not your issue here)\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 10 Mar 2008 01:10:21 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UPDATE 66k rows too slow" }, { "msg_contents": "Hi!\n\n I read and did many stuff you pointed me too. Raised shared buffers \nto 180 MB, and tried again. Same results.\n\n I deleted the DB, created a new one and generated new test data. I \nknow have 72k rows, and the same query finishes in... 9 seconds.\n\n I'm totally clueless. Anyway, two questions:\n\n 1) My working_mem is 2 MB. Does an UPDATE query like main depend on \nworking_mem?\n\n 2) I still feel this is all very trial-and-error. Change value, run \nquery, hope it solves the problem. Well, the DB itself knows what is \ndoing. Isn't there any way to make it tell us that? 
Like \"the working \nmem is too low\" or anything else. I know the problem is not the \ncheckpoints, at least nothing appears on the log related to that. But \nit irritates me to be in front of a such complex system and not being \nable to know what's going on.\n\n Yours\n\nMiguel Arroz\n\nOn 2008/03/10, at 05:10, Greg Smith wrote:\n\n> On Mon, 10 Mar 2008, Miguel Arroz wrote:\n>\n>> My question is, how can I \"ask\" PgSQL what's happening? How can I \n>> avoid guessing, and be sure of what is causing this slowdown?\n>\n> There are many pieces involved here, and any one or multiple of them \n> could be to blame. Someone may make a guess and get lucky about the \n> cause, but the only generic way to solve this sort of thing is to \n> have a systematic approach that goes through the likely possible \n> causes one by one until you've discovered the source of the \n> problem. Since as you say you're new to this, you've got the double \n> task of learning that outline and then finding out how to run each \n> of the tests.\n>\n> For your particular case, slow updates, I usually follow the \n> following series of tests. I happen to have articles on most of \n> these sitting around because they're common issues:\n>\n> -Confirm disks are working as expected: http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n>\n> -Look at differences between fsync commit behavior between the two \n> systems. It's often the case that when servers appear slower than \n> development systems it's because the server is doing fsync properly, \n> while the development one is caching fsync in a way that is unsafe \n> for database use but much faster. http://www.postgresql.org/docs/8.3/static/wal-reliability.html \n> is a brief intro to this while http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm \n> goes into extreme detail. The test_fsync section there is probably \n> the most useful one for your comparision.\n>\n> -Setup basic buffer memory parameters: http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm\n>\n> -VACUUM VERBOSE ANALYZE and make sure that's working properly. This \n> requires actually understanding the output from that command which \n> is \"fun\" to figure out. A related topic is looking for index bloat \n> which I haven't found a good tutorial on yet.\n>\n> -Investigate whether checkpoints are to blame. Since you're running \n> 8.3 you can just turn on log_checkpoints and see how often they're \n> showing up and get an idea how big the performance impact is. \n> Increasing checkpoint_segments is the usual first thing to do if \n> this is the case.\n>\n> -Collect data with vmstat, iostat, and top to figure out what's \n> happening during the problem query\n>\n> -Look for application problems (not your issue here)\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nMiguel Arroz\nhttp://www.terminalapp.net\nhttp://www.ipragma.com", "msg_date": "Mon, 10 Mar 2008 23:17:54 +0000", "msg_from": "Miguel Arroz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UPDATE 66k rows too slow" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Mon, 10 Mar 2008 23:17:54 +0000\r\nMiguel Arroz <[email protected]> wrote:\r\n\r\n> Hi!\r\n> \r\n> I read and did many stuff you pointed me too. 
Raised shared\r\n> buffers to 180 MB, and tried again. Same results.\r\n> \r\n> I deleted the DB, created a new one and generated new test data.\r\n> I know have 72k rows, and the same query finishes in... 9 seconds.\r\n> \r\n> I'm totally clueless. Anyway, two questions:\r\n> \r\n> 1) My working_mem is 2 MB. Does an UPDATE query like main depend\r\n> on working_mem?\r\n> \r\n> 2) I still feel this is all very trial-and-error. Change value,\r\n> run query, hope it solves the problem. Well, the DB itself knows what\r\n> is doing. Isn't there any way to make it tell us that? Like \"the\r\n> working mem is too low\" or anything else. I know the problem is not\r\n> the checkpoints, at least nothing appears on the log related to that.\r\n> But it irritates me to be in front of a such complex system and not\r\n> being able to know what's going on.\r\n\r\nWhat does iostat -k 1 tell you during the 9 seconds the query is\r\nrunning?\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\n PostgreSQL political pundit | Mocker of Dolphins\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFH1cK3ATb/zqfZUUQRAhllAJ9C9aL9o/4hzq9vZyRaY8J6DknP5QCePDfS\r\nBxJ/umrVArStUJgG3oFYsSE=\r\n=n0uC\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Mon, 10 Mar 2008 16:22:31 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UPDATE 66k rows too slow" }, { "msg_contents": "Hi!\n\n It now raised to 40 seconds... here goes the result of iostat:\n\niostat -K -c 40\n tty ad4 ad6 cpu\n tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n 1 78 32.86 34 1.08 0.70 0 0.00 13 0 1 0 86\n 0 180 6.00 4 0.02 0.00 0 0.00 0 0 0 0 100\n 1 63 39.74 62 2.40 0.00 0 0.00 17 0 1 0 82\n 0 60 18.69 815 14.87 0.00 0 0.00 20 0 2 0 79\n 0 60 56.17 293 16.06 0.00 0 0.00 41 0 5 0 53\n 0 60 55.74 396 21.53 0.00 0 0.00 39 0 10 0 51\n 0 60 42.24 357 14.71 0.00 0 0.00 10 0 2 0 88\n 0 60 42.92 354 14.82 0.00 0 0.00 12 0 7 1 80\n 0 60 38.51 368 13.82 0.00 0 0.00 14 0 6 0 80\n 0 60 43.83 326 13.94 0.00 0 0.00 4 0 1 0 95\n 0 60 33.30 395 12.83 0.00 0 0.00 11 0 3 0 86\n 0 60 41.36 395 15.94 0.00 0 0.00 4 0 3 0 93\n 0 60 21.97 684 14.68 0.00 0 0.00 10 0 2 0 88\n 0 60 72.44 297 20.99 0.00 0 0.00 42 0 9 0 48\n 0 60 38.18 453 16.87 0.00 0 0.00 23 0 8 1 68\n 0 60 35.15 365 12.52 0.00 0 0.00 1 0 1 0 97\n 0 60 44.40 396 17.15 0.00 0 0.00 17 0 6 0 77\n 0 60 43.99 341 14.64 0.00 0 0.00 4 0 2 0 93\n 0 60 33.53 440 14.39 0.00 0 0.00 10 0 5 0 85\n 0 60 31.22 345 10.51 0.00 0 0.00 0 0 2 0 97\n tty ad4 ad6 cpu\n tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n 0 60 33.48 449 14.66 0.00 0 0.00 11 0 3 0 86\n 0 180 16.85 599 9.87 0.00 0 0.00 1 0 1 0 98\n 0 60 55.37 455 24.58 0.00 0 0.00 25 0 4 1 69\n 0 60 49.83 376 18.28 0.00 0 0.00 18 0 5 1 76\n 0 60 29.86 363 10.58 0.00 0 0.00 3 0 0 1 96\n 0 60 36.21 365 12.90 0.00 0 0.00 12 0 3 1 84\n 0 60 33.13 353 11.41 0.00 0 0.00 2 0 2 0 96\n 0 60 39.47 345 13.28 0.00 0 0.00 16 0 3 0 80\n 0 60 40.48 363 14.34 0.00 0 0.00 8 0 2 0 89\n 0 60 30.91 397 11.97 0.00 0 0.00 5 0 2 0 93\n 0 60 18.21 604 10.75 0.00 0 0.00 5 0 2 0 93\n 0 60 48.65 359 17.04 0.00 0 0.00 20 0 6 0 74\n 0 60 32.91 375 12.04 0.00 0 0.00 10 0 4 0 86\n 0 60 35.81 339 11.84 0.00 0 0.00 3 0 2 0 96\n 0 60 33.38 394 12.83 0.00 0 0.00 11 0 4 0 85\n 0 60 34.40 313 10.51 0.00 
0 0.00 4 0 2 0 93\n 0 60 45.65 358 15.94 0.00 0 0.00 19 0 7 0 74\n 0 60 37.41 309 11.28 0.00 0 0.00 3 0 2 0 95\n 0 60 32.61 447 14.22 0.00 0 0.00 10 0 3 1 86\n 0 60 17.11 516 8.63 0.00 0 0.00 1 0 1 0 98\n\n There's surely a lot of disk activity going on. With this figures, \nI could have written some hundred gigabytes during the query \nexecution! Something is definitely not right here.\n\n Yours\n\nMiguel Arroz\n\nOn 2008/03/10, at 23:22, Joshua D. Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Mon, 10 Mar 2008 23:17:54 +0000\n> Miguel Arroz <[email protected]> wrote:\n>\n>> Hi!\n>>\n>> I read and did many stuff you pointed me too. Raised shared\n>> buffers to 180 MB, and tried again. Same results.\n>>\n>> I deleted the DB, created a new one and generated new test data.\n>> I know have 72k rows, and the same query finishes in... 9 seconds.\n>>\n>> I'm totally clueless. Anyway, two questions:\n>>\n>> 1) My working_mem is 2 MB. Does an UPDATE query like main depend\n>> on working_mem?\n>>\n>> 2) I still feel this is all very trial-and-error. Change value,\n>> run query, hope it solves the problem. Well, the DB itself knows what\n>> is doing. Isn't there any way to make it tell us that? Like \"the\n>> working mem is too low\" or anything else. I know the problem is not\n>> the checkpoints, at least nothing appears on the log related to that.\n>> But it irritates me to be in front of a such complex system and not\n>> being able to know what's going on.\n>\n> What does iostat -k 1 tell you during the 9 seconds the query is\n> running?\n>\n> Joshua D. Drake\n>\n>\n>\n> - --\n> The PostgreSQL Company since 1997: http://www.commandprompt.com/\n> PostgreSQL Community Conference: http://www.postgresqlconference.org/\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n> PostgreSQL political pundit | Mocker of Dolphins\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n>\n> iD8DBQFH1cK3ATb/zqfZUUQRAhllAJ9C9aL9o/4hzq9vZyRaY8J6DknP5QCePDfS\n> BxJ/umrVArStUJgG3oFYsSE=\n> =n0uC\n> -----END PGP SIGNATURE-----\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nMiguel Arroz\nhttp://www.terminalapp.net\nhttp://www.ipragma.com", "msg_date": "Mon, 10 Mar 2008 23:46:10 +0000", "msg_from": "Miguel Arroz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UPDATE 66k rows too slow" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Mon, 10 Mar 2008 23:46:10 +0000\r\nMiguel Arroz <[email protected]> wrote:\r\n tty ad4 ad6 cpu\r\n> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\r\n> 0 60 33.48 449 14.66 0.00 0 0.00 11 0 3 0 86\r\n> 0 180 16.85 599 9.87 0.00 0 0.00 1 0 1 0 98\r\n> 0 60 55.37 455 24.58 0.00 0 0.00 25 0 4 1 69\r\n> 0 60 49.83 376 18.28 0.00 0 0.00 18 0 5 1 76\r\n> 0 60 29.86 363 10.58 0.00 0 0.00 3 0 0 1 96\r\n> 0 60 36.21 365 12.90 0.00 0 0.00 12 0 3 1 84\r\n> 0 60 33.13 353 11.41 0.00 0 0.00 2 0 2 0 96\r\n> 0 60 39.47 345 13.28 0.00 0 0.00 16 0 3 0 80\r\n> 0 60 40.48 363 14.34 0.00 0 0.00 8 0 2 0 89\r\n> 0 60 30.91 397 11.97 0.00 0 0.00 5 0 2 0 93\r\n> 0 60 18.21 604 10.75 0.00 0 0.00 5 0 2 0 93\r\n> 0 60 48.65 359 17.04 0.00 0 0.00 20 0 6 0 74\r\n> 0 60 32.91 375 12.04 0.00 0 0.00 10 0 4 0 86\r\n> 0 60 35.81 339 11.84 0.00 0 0.00 3 0 2 0 96\r\n> 0 60 33.38 394 12.83 0.00 0 0.00 11 0 4 0 85\r\n> 0 60 34.40 313 10.51 0.00 0 0.00 4 0 2 0 93\r\n> 0 60 45.65 
358 15.94 0.00 0 0.00 19 0 7 0 74\r\n> 0 60 37.41 309 11.28 0.00 0 0.00 3 0 2 0 95\r\n> 0 60 32.61 447 14.22 0.00 0 0.00 10 0 3 1 86\r\n> 0 60 17.11 516 8.63 0.00 0 0.00 1 0 1 0 98\r\n> \r\n> There's surely a lot of disk activity going on. With this\r\n> figures, I could have written some hundred gigabytes during the\r\n> query execution! Something is definitely not right here.\r\n\r\n\r\nWell the above says you are getting ~ 10-15MB/s a second performance.\r\nWhat is the disk subsystem you have. Also note that the duration\r\nprobably went up because you didn't vacuum between tests.\r\n\r\nWhat version of PostgreSQL (I missed it).\r\n\r\nJoshua D. Drake \r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \r\nPostgreSQL Community Conference: http://www.postgresqlconference.org/\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\n PostgreSQL political pundit | Mocker of Dolphins\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFH1cq5ATb/zqfZUUQRAhVvAKCfQk4Mg6qLNQfc6uyiI2TBSbkThACeK/5k\r\nTgc9ltxoOvnTMzKG2hG/4LY=\r\n=Tm4N\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Mon, 10 Mar 2008 16:56:41 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UPDATE 66k rows too slow" }, { "msg_contents": "Hi!\n\n The disk subsystem will be a RAID 1, but for now it's just a single \n7200 rpm 160 GB SATA hard drive. The PgSQL version is 8.3, the latest \none.\n\n I have done some performance tests on the drive, and it handles \nabout 40 MB/s on sequential writes, so I'm assuming it's OK.\n\n Yours\n\nMiguel Arroz\n\nOn 2008/03/10, at 23:56, Joshua D. Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Mon, 10 Mar 2008 23:46:10 +0000\n> Miguel Arroz <[email protected]> wrote:\n> tty ad4 ad6 cpu\n>> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id\n>> 0 60 33.48 449 14.66 0.00 0 0.00 11 0 3 0 86\n>> 0 180 16.85 599 9.87 0.00 0 0.00 1 0 1 0 98\n>> 0 60 55.37 455 24.58 0.00 0 0.00 25 0 4 1 69\n>> 0 60 49.83 376 18.28 0.00 0 0.00 18 0 5 1 76\n>> 0 60 29.86 363 10.58 0.00 0 0.00 3 0 0 1 96\n>> 0 60 36.21 365 12.90 0.00 0 0.00 12 0 3 1 84\n>> 0 60 33.13 353 11.41 0.00 0 0.00 2 0 2 0 96\n>> 0 60 39.47 345 13.28 0.00 0 0.00 16 0 3 0 80\n>> 0 60 40.48 363 14.34 0.00 0 0.00 8 0 2 0 89\n>> 0 60 30.91 397 11.97 0.00 0 0.00 5 0 2 0 93\n>> 0 60 18.21 604 10.75 0.00 0 0.00 5 0 2 0 93\n>> 0 60 48.65 359 17.04 0.00 0 0.00 20 0 6 0 74\n>> 0 60 32.91 375 12.04 0.00 0 0.00 10 0 4 0 86\n>> 0 60 35.81 339 11.84 0.00 0 0.00 3 0 2 0 96\n>> 0 60 33.38 394 12.83 0.00 0 0.00 11 0 4 0 85\n>> 0 60 34.40 313 10.51 0.00 0 0.00 4 0 2 0 93\n>> 0 60 45.65 358 15.94 0.00 0 0.00 19 0 7 0 74\n>> 0 60 37.41 309 11.28 0.00 0 0.00 3 0 2 0 95\n>> 0 60 32.61 447 14.22 0.00 0 0.00 10 0 3 1 86\n>> 0 60 17.11 516 8.63 0.00 0 0.00 1 0 1 0 98\n>>\n>> There's surely a lot of disk activity going on. With this\n>> figures, I could have written some hundred gigabytes during the\n>> query execution! Something is definitely not right here.\n>\n>\n> Well the above says you are getting ~ 10-15MB/s a second performance.\n> What is the disk subsystem you have. Also note that the duration\n> probably went up because you didn't vacuum between tests.\n>\n> What version of PostgreSQL (I missed it).\n>\n> Joshua D. 
Drake\n>\n>\n>\n> - --\n> The PostgreSQL Company since 1997: http://www.commandprompt.com/\n> PostgreSQL Community Conference: http://www.postgresqlconference.org/\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n> PostgreSQL political pundit | Mocker of Dolphins\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n>\n> iD8DBQFH1cq5ATb/zqfZUUQRAhVvAKCfQk4Mg6qLNQfc6uyiI2TBSbkThACeK/5k\n> Tgc9ltxoOvnTMzKG2hG/4LY=\n> =Tm4N\n> -----END PGP SIGNATURE-----\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nMiguel Arroz\nhttp://www.terminalapp.net\nhttp://www.ipragma.com", "msg_date": "Tue, 11 Mar 2008 00:33:58 +0000", "msg_from": "Miguel Arroz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UPDATE 66k rows too slow" }, { "msg_contents": "On Mon, 10 Mar 2008, Miguel Arroz wrote:\n\n> I deleted the DB, created a new one and generated new test data. I know have \n> 72k rows, and the same query finishes in... 9 seconds.\n\nThis seems like more evidence that your problem here is related to dead \nrows (this is what Andrew suggested). If a fresh copy of the database \nruns fast but it quickly degrades as you run additional tests that do many \nupdates on it, that's a popular suspect.\n\nAre you familiar with dead rows? When you update something, the original \ncopy doesn't go away; it stays behind until VACUUM gets to cleaning it up. \nIf you update the same rows, say, 10 times you'll have 9 dead copies of \nevery row in the way of doing reports on the ones still alive.\n\nLet's go back to your original post a second:\n\nSeq Scan on text_answer_mapping_ebt (cost=0.00..13945.72 rows=265072 \nwidth=92) (actual time=21.123..1049.054 rows=66268 loops=1)\n\nThat shows the database estimating there are exactly 4 times your 66268 \nrows there (4X66268=265072). That sounds like one active copy of your \ndata and 3 dead ones left behind from earlier tests. In that case, it \nwould take much longer to do that full scan than when the database was \nfresh.\n\n> 1) My working_mem is 2 MB. Does an UPDATE query like main depend on \n> working_mem?\n\nNope. That's used for sorting and that sort of thing.\n\n> Well, the DB itself knows what is doing. Isn't there any way to make it \n> tell us that?\n\nWell, the database server itself has a lot of operating system and \nhardware components it relies on, and it has no idea how any of those are \nworking. So it's unreasonable to expect in every case the database has a \nclue what's going on.\n\nIn your case, I'm suspecting more strongly the report that will say \nsomething interesting here is the 4th item on the list I sent before, \nlooking at VACUUM VERBOSE ANALYZE output for a problem.\n\nHere's the educational exercise I'd suggest that might help you track down \nwhat's going on here:\n\n1) Recreate a fresh copy of the database. Run VACUUM VERBOSE ANALYZE and \nsave a copy of the output so you know what that looks like with no dead \nrows.\n2) Run your query with EXPLAIN ANALYZE and save that too. 
Should be fast.\n3) Do whatever testing it is you do that seems to result in the system \nrunning much slower\n4) Save the EXPLAIN ANALYZE output when you're reached slowness\n5) Run a VACUUM VERBOSE ANALYZE, save that for comparision to the earlier \n6) Run the EXPLAIN ANALYZE again to see if (5) did anything useful.\none\n7) Run VACUUM FULL VERBOSE and save that output\n8) Run the EXPLAIN ANALYZE again to see if (7) did anything useful.\n\nComparing the VACUUM reports and the EXPLAIN plans to see what changes \nalong the way should give you some good insight into what's happening \nhere. That is what you're asking for--asking the database to tell you \nwhat it's doing--but actually understanding that report takes a certain \namount of study.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 11 Mar 2008 00:14:32 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UPDATE 66k rows too slow" } ]
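Greg Smith's eight-step exercise above boils down to a short psql session. The sketch below is only an outline, reusing the text_answer_mapping_ebt table named earlier in the thread; the workload in step 3 is application-specific and is only marked with a comment.

    -- 1-2: baseline on a freshly loaded database
    VACUUM VERBOSE ANALYZE text_answer_mapping_ebt;
    EXPLAIN ANALYZE SELECT * FROM text_answer_mapping_ebt;

    -- 3-4: run the usual update-heavy tests here, then capture the slow plan
    EXPLAIN ANALYZE SELECT * FROM text_answer_mapping_ebt;

    -- 5-6: plain VACUUM marks dead rows reusable; see whether the plan improves
    VACUUM VERBOSE ANALYZE text_answer_mapping_ebt;
    EXPLAIN ANALYZE SELECT * FROM text_answer_mapping_ebt;

    -- 7-8: VACUUM FULL compacts the table and can return space to the OS
    VACUUM FULL VERBOSE text_answer_mapping_ebt;
    EXPLAIN ANALYZE SELECT * FROM text_answer_mapping_ebt;

    -- On 8.3 and later the dead-row count can also be read directly:
    SELECT n_live_tup, n_dead_tup
    FROM   pg_stat_user_tables
    WHERE  relname = 'text_answer_mapping_ebt';

Comparing the estimated row count in the plans against n_live_tup gives a quick sanity check on how much dead space the earlier tests left behind.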
[ { "msg_contents": "Miguel Arroz <[email protected]> wrote ..\n\n> I'm testing an update on 66k rows on Postgresql, and it seems \n> something is not right here.\n> \n> My server is a Quad-Xeon 3.2 Ghz with 2 GB RAM and a RAID 1 running\n> FreeBSD 6.3 and PgSQL 8.3. My development machine is a PowerBook G4 \n> 1.67 Ghz with 2 GB RAM, OS X Leopard and PgSQL 8.3.\n\n[62 seconds on server, 25 seconds on much weaker development machine]\n\nOK, my guess is that the server's tables are bloated beyond what regular VACUUM can fix. Try a VACUUM FULL VERBOSE or a re-CLUSTER if the tables are clustered.\n\nHope this helps.\n", "msg_date": "Sun, 9 Mar 2008 20:10:10 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: UPDATE 66k rows too slow" } ]
[ { "msg_contents": "Hi all\n\nI've just spent some time working with PostgreSQL 8.3 trying to get a 90\nminute job to run in a reasonable amount of time, and in the process\nI've come up with something that I thought others might find useful.\n\nAttached is a pair of PL/PgSQL functions that enable/disable the\ntriggers associated with a given foreign key constraint. They use the\nsystem catalogs to obtain all the required information about the\ninvolved tables. A fairly fast consistency check is performed before\nre-enabling the triggers.\n\nAs it turns out I don't need it after all, but I though that others\ndoing really large data imports might given messages like:\n\nhttp://archives.postgresql.org/pgsql-performance/2003-03/msg00157.php\n\n\n\nI wrote it because I was frustrated with the slow execution of the ALTER\nTABLE ... ADD CONSTRAINT ... FOREIGN KEY statements I was running to\nrebuild the foreign key constraints on some of my tables after some bulk\nimports. Leaving the constraints enabled was resulting in execution time\nthat increased for every record inserted, and rebuilding them after the\ninsert wasn't much faster.\n\nUnfortunately it turns out that the issue wasn't with the way ALTER\nTABLE ... ADD CONSTRAINT ... FOREIGN KEY was doing the check, as the\nintegrity check run by those functions is almost as slow as the ALTER\nTABLE in the context of the transaction they're run in - and both run in\n< 1 second outside of a transaction context or in a separate transaction.\n\nOh well, maybe the code will be useful to somebody anyway.\n\n--\nCraig Ringer", "msg_date": "Mon, 10 Mar 2008 18:12:52 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Utility functions for enabling/disabling fkey triggers" } ]
[ { "msg_contents": "Hi all\n\nI'm encountering an odd issue with a bulk import query using PostgreSQL\n8.3. After a 400,000 row import into a just-truncated table `booking', a\nsequential scan run on the table in the same transaction is incredibly\nslow, taking ~ 166738.047 ms. After a:\n\t`COMMIT; BEGIN;'\nthe same query runs in 712.615 ms, with almost all the time difference\nbeing in the sequential scan of the `booking' table [schema at end of post].\n\nThe table is populated by a complex pl/pgsql function that draws from\nseveral other tables to convert data from another app's format. After\nthat function runs, here's what happens if I do a simple sequential\nscan, then what happens after I commit and run it again:\n\ncraig=# explain analyze select * from booking;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Seq Scan on booking (cost=0.00..9871.60 rows=320160 width=139)\n (actual time=0.017..166644.697 rows=341481 loops=1)\n Total runtime: 166738.047 ms\n(2 rows)\n\ncraig=# commit; begin;\nCOMMIT\nBEGIN\ncraig=# explain analyze select * from booking;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Seq Scan on booking (cost=0.00..9871.60 rows=320160 width=139)\n (actual time=0.022..624.492 rows=341481 loops=1)\n Total runtime: 712.615 ms\n(2 rows)\n\n\nSET client_min_messages = 'debug5';\n\ndoes not shed any more light; the only extra output is (eg):\n\ncraig=# select count(distinct id) from booking;\nDEBUG: StartTransactionCommand\nDEBUG: CommitTransactionCommand\n count\n--------\n 341481\n(1 row)\n\n... which took 164558.961 ms to run, or about 2 tuples per second.\n[Table schema at end of post]. By comparison, after commit the same\nquery read about 500 tuples/second.\n\nThis issue appears to affect any query that results in a sequential scan\non the newly populated table, and also affects maintenance operations\nlike ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY that perform\nsequential scans. ANALYZE is also really slow. I'm not yet sure if index\nscans are affected.\n\nI'm not using WAL logging.\n\nIt doesn't matter whether I truncate the table before the import using a\nseparate transaction or the same one that does the import.\n\nI see essentially identical results, and runtimes, with other more\ncomplex queries, but it seems to boil down to extremely slow sequential\nscans.\n\nThe Linux 2.6.22 host these queries are running on runs PostgreSQL 8.3.\nIt has 4GB of RAM and shmmax set to 512MB. Tweaking the postgresql\nmemory parameters seems to make little difference, but the ones that I\nadjusted away from defaults to see if this was a resource issue are:\n\nshared_buffers = 32MB\ntemp_buffers = 128MB\nwork_mem = 128MB\nmaintenance_work_mem = 1024MB # min 1MB\n\n(There are relatively few clients to this database, but they work it hard).\n\nIs this huge speed difference in sequential scans expected behavior? Any\nidea what might be causing it?\n\nI'm presently working around it by just committing the transaction after\nthe bulk import - but there's lots more work to do after that and it\nleaves the database in a rather messy interim state.\n\n\n\n\nHere's the table's schema, pasted as a quote to stop Thunderbird\nmangling it. 
There are no rules on this table except those that might be\ncreated internally by postgresql.\n\n> craig=# \\d booking\n> Table \"public.booking\"\n> Column | Type | Modifiers\n> ------------------------+--------------------------+------------------------------------------------------\n> id | integer | not null default nextval('booking_id_seq'::regclass)\n> customer_id | integer | not null\n> edition_id | integer | not null\n> description | character varying(255) | not null\n> position | integer | not null\n> loading_applied | boolean | not null default false\n> loading_ratio | numeric(16,4) | not null\n> size_type | integer | not null\n> size_length | numeric(16,4) |\n> base_price | numeric(16,4) | not null\n> gst_factor | numeric(16,8) | not null default gst_factor()\n> page_number | integer |\n> invoiced | timestamp with time zone |\n> contract_id | integer |\n> old_customer_id | integer | not null\n> booked_time | timestamp with time zone | not null\n> booked_by | character varying(80) | not null\n> cancelled | boolean | not null default false\n> art_supplied | boolean | not null default false\n> repeat_group | integer |\n> notes | text |\n> originally_from_system | character(1) |\n> pe_booking_id | integer |\n> pe_customer_id | integer |\n> Indexes:\n> \"booking_pkey\" PRIMARY KEY, btree (id)\n> Check constraints:\n> \"base_price_nonnegative\" CHECK (base_price >= 0::numeric)\n> \"gst_factor_nonnegative_and_sane\" CHECK (gst_factor >= 0::numeric AND gst_factor < 9::numeric)\n> \"loading_ratio_sane\" CHECK (loading_ratio > 0::numeric AND loading_ratio < 9::numeric)\n> \"page_no_sane\" CHECK (page_number IS NULL OR page_number > 0 AND page_number <= 500)\n> \"size_length_nonnegative\" CHECK (size_length IS NULL OR size_length >= 0::numeric)\n> Foreign-key constraints:\n> \"booking_contract_id_fkey\" FOREIGN KEY (contract_id) REFERENCES contract(id)\n> \"booking_customer_id_fkey\" FOREIGN KEY (customer_id) REFERENCES customer(id)\n> \"booking_edition_id_fkey\" FOREIGN KEY (edition_id) REFERENCES edition(id) ON DELETE CASCADE\n> \"booking_old_customer_id_fkey\" FOREIGN KEY (old_customer_id) REFERENCES customer(id)\n> \"booking_position_fkey\" FOREIGN KEY (\"position\") REFERENCES booking_position(id)\n> \"booking_repeat_group_fkey\" FOREIGN KEY (repeat_group) REFERENCES booking_repeat(id) ON DELETE SET NULL\n> \"booking_size_type_fkey\" FOREIGN KEY (size_type) REFERENCES booking_size_type(id)\n> Triggers:\n> booking_after_insert_update AFTER INSERT ON booking FOR EACH ROW EXECUTE PROCEDURE booking_after_trigger()\n> booking_audit AFTER UPDATE ON booking FOR EACH ROW EXECUTE PROCEDURE booking_audit_trigger()\n> booking_before_insert BEFORE INSERT ON booking FOR EACH ROW EXECUTE PROCEDURE booking_before_trigger()\n\n--\nCraig Ringer\n", "msg_date": "Mon, 10 Mar 2008 18:55:59 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Very slow (2 tuples/second) sequential scan after bulk insert; speed\n\treturns to ~500 tuples/second after commit" }, { "msg_contents": "Craig Ringer wrote:\n> I'm encountering an odd issue with a bulk import query using PostgreSQL\n> 8.3. After a 400,000 row import into a just-truncated table `booking', a\n> sequential scan run on the table in the same transaction is incredibly\n> slow, taking ~ 166738.047 ms. 
After a:\n> \t`COMMIT; BEGIN;'\n> the same query runs in 712.615 ms, with almost all the time difference\n> being in the sequential scan of the `booking' table [schema at end of post].\n> \n> The table is populated by a complex pl/pgsql function that draws from\n> several other tables to convert data from another app's format. \n\nYou must be having an exception handler block in that pl/pgsql function, \nwhich implicitly creates a new subtransaction on each invocation of the \nexception handler block, so you end up with hundreds of thousands of \ncommitted subtransactions. For each row in the seq scan, the list of \nsubtransactions is scanned, to see if the transaction that inserted the \nrow is part of the current top-level transaction. That's fine for a \nhandful of subtransactions, but it gets really slow with large numbers \nof them, as you've seen. It's an O(n^2) operation, where n is the number \nof rows inserted, so you'll be in even more trouble if the number of \nrows increases.\n\nAs a work-around, avoid using exception handlers, or process more than 1 \nrow per function invocation. Or COMMIT the transaction, as you did.\n\nFor 8.4, it would be nice to improve that. I tested that on my laptop \nwith a similarly-sized table, inserting each row in a pl/pgsql function \nwith an exception handler, and I got very similar run times. According \nto oprofile, all the time is spent in TransactionIdIsInProgress. I think \nit would be pretty straightforward to store the committed subtransaction \nids in a sorted array, instead of a linked list, and binary search. Or \nto use a hash table. That should eliminate this problem, though there is \nstill other places as well where a large number of subtransactions will \nhurt performance.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 10 Mar 2008 11:01:32 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "Thanks for the extremely helpful response. I don't think I would've\nspotted that one in a hurry.\n\n> You must be having an exception handler block in that pl/pgsql \n> function, which implicitly creates a new subtransaction on each \n> invocation of the exception handler block, so you end up with hundreds \n> of thousands of committed subtransactions.\n\nAah - yes, there is. I didn't realize it'd have such an impact. I can\nwork around the need for it by explicitly checking the table constraints\nin the function - in which case an uncaught exception will terminate the\ntransaction, but should only arise when I've missed a constraint check.\n\n> For 8.4, it would be nice to improve that. I tested that on my laptop \n> with a similarly-sized table, inserting each row in a pl/pgsql \n> function with an exception handler, and I got very similar run times. \n> According to oprofile, all the time is spent in \n> TransactionIdIsInProgress. I think it would be pretty straightforward \n> to store the committed subtransaction ids in a sorted array, instead \n> of a linked list, and binary search. Or to use a hash table. That \n> should eliminate this problem, though there is still other places as \n> well where a large number of subtransactions will hurt performance.\n\nThat does sound interesting - and it would be nice to be able to use\nexception handlers this way without too huge a performance hit. 
In the\nend though it's something that can be designed around once you're aware\nof it - and I'm sure that other ways of storing that data have their own\ndifferent costs and downsides.\n\nWhat might also be nice, and simpler, would be a `notice', `log', or \neven `debug1' level warning telling the user they've reached an absurd \nnumber of subtransactions that'll cripple PostgreSQL's performance - say\n100,000. There's precedent for this in the checkpoint frequency warning\n8.3 produces if checkpoints are becoming too frequent - and like that\nwarning it could be configurable for big sites. If you think that's sane\nI might have a go at it - though I mostly work in C++ so the result\nprobably won't be too pretty initially.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 10 Mar 2008 20:29:01 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "Heikki Linnakangas wrote:\n> You must be having an exception handler block in that pl/pgsql \n> function, which implicitly creates a new subtransaction on each \n> invocation of the exception handler block, so you end up with hundreds \n> of thousands of committed subtransactions.\nI've just confirmed that that was indeed the issue, and coding around \nthe begin block dramatically cuts the runtimes of commands executed \nafter the big import function.\n\nThanks again!\n\n--\nCraig Ringer\n", "msg_date": "Mon, 10 Mar 2008 21:16:27 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n> For 8.4, it would be nice to improve that. I tested that on my laptop \n> with a similarly-sized table, inserting each row in a pl/pgsql function \n> with an exception handler, and I got very similar run times. According \n> to oprofile, all the time is spent in TransactionIdIsInProgress. I think \n> it would be pretty straightforward to store the committed subtransaction \n> ids in a sorted array, instead of a linked list, and binary search.\n\nI think the OP is not complaining about the time to run the transaction\nthat has all the subtransactions; he's complaining about the time to\nscan the table that it emitted. Presumably, each row in the table has a\ndifferent (sub)transaction ID and so we are thrashing the clog lookup\nmechanism. It only happens once because after that the XMIN_COMMITTED\nhint bits are set.\n\nThis probably ties into the recent discussions about eliminating the\nfixed-size allocations for SLRU buffers --- I suspect it would've run\nbetter if it could have scaled up the number of pg_clog pages held in\nmemory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Mar 2008 10:33:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "Tom Lane wrote:\n> \"Heikki Linnakangas\" <[email protected]> writes:\n>> For 8.4, it would be nice to improve that. I tested that on my laptop \n>> with a similarly-sized table, inserting each row in a pl/pgsql function \n>> with an exception handler, and I got very similar run times. 
According \n>> to oprofile, all the time is spent in TransactionIdIsInProgress. I think \n>> it would be pretty straightforward to store the committed subtransaction \n>> ids in a sorted array, instead of a linked list, and binary search.\n> \n> I think the OP is not complaining about the time to run the transaction\n> that has all the subtransactions; he's complaining about the time to\n> scan the table that it emitted.\n\nYes, but only in succeeding statements in the same transaction as the \nprocedure that creates all the subtransactions. Table scan times return \nto normal after that transaction commits.\n\n> Presumably, each row in the table has a\n> different (sub)transaction ID and so we are thrashing the clog lookup\n> mechanism. It only happens once because after that the XMIN_COMMITTED\n> hint bits are set.\n\nIt seems to happen with every statement run in the same transaction as, \nand after, the procedure with all the subtransactions. As soon as a \nCOMMIT is executed, operations return to normal speed. There's no \nsignificant delay on the first statement after COMMIT as compared to \nsubsequent statements, nor do successive statements before the COMMIT \nget faster.\n\nIn other words, if I repeatedly run one of the statements I used in \ntesting for my initial post, like:\n\nEXPLAIN ANALYZE SELECT * FROM booking;\n\n... after running the problem stored procedure, it takes just as long \nfor the second and third and so on runs as for the first.\n\nAs soon as I commit the transaction, the exact same statement returns to \nrunning in less than a second, and doesn't significantly change in \nruntime for subsequent executions.\n\nI'll bang out a couple of examples at work tomorrow to see what I land \nup with, since this is clearly something that can benefit from a neat \ntest case.\n\nIn any case, avoding the use of an exception block per record generated \nworked around the performance issues, so it's clearly something to do \nwith the vast numbers of subtransactions - as Heikki Linnakangas \nsuggested and tested.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 10 Mar 2008 23:48:39 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "Tom Lane wrote:\n> \"Heikki Linnakangas\" <[email protected]> writes:\n>> For 8.4, it would be nice to improve that. I tested that on my laptop \n>> with a similarly-sized table, inserting each row in a pl/pgsql function \n>> with an exception handler, and I got very similar run times. According \n>> to oprofile, all the time is spent in TransactionIdIsInProgress. I think \n>> it would be pretty straightforward to store the committed subtransaction \n>> ids in a sorted array, instead of a linked list, and binary search.\n> \n> I think the OP is not complaining about the time to run the transaction\n> that has all the subtransactions; he's complaining about the time to\n> scan the table that it emitted.\n\nIf you read the original post carefully, he complained that the seq scan \nwas slow when executed within the same transaction as populating the \ntable, and fast if he committed in between.\n\n> Presumably, each row in the table has a\n> different (sub)transaction ID and so we are thrashing the clog lookup\n> mechanism. 
It only happens once because after that the XMIN_COMMITTED\n> hint bits are set.\n> \n> This probably ties into the recent discussions about eliminating the\n> fixed-size allocations for SLRU buffers --- I suspect it would've run\n> better if it could have scaled up the number of pg_clog pages held in\n> memory.\n\nI doubt that makes any noticeable difference in this case. 300000 \ntransaction ids fit on < ~100 clog pages, and the xmins on heap pages \nare nicely in order.\n\nGetting rid of the fixed-size allocations would be nice for other \nreasons, of course.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 10 Mar 2008 14:53:32 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> It seems to happen with every statement run in the same transaction as, \n> and after, the procedure with all the subtransactions. As soon as a \n> COMMIT is executed, operations return to normal speed.\n\nAh. I misread your post as saying that it happened only once.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Mar 2008 11:02:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "Craig Ringer wrote:\n> I'll bang out a couple of examples at work tomorrow to see what I land \n> up with, since this is clearly something that can benefit from a neat \n> test case.\n\nHere's what I used to reproduce this:\n\npostgres=# BEGIN;\nBEGIN\npostgres=# CREATE TABLE foo (id int4,t text);CREATE TABLE\npostgres=# CREATE OR REPLACE FUNCTION insertfunc() RETURNS void LANGUAGE \nplpgsql AS $$\n begin\n INSERT INTO foo VALUES ( 1, repeat('a',110));\n exception when unique_violation THEN end;\n$$;\nCREATE FUNCTION\npostgres=# SELECT COUNT(insertfunc()) FROM generate_series(1,300000); \ncount\n--------\n 300000\n(1 row)\n\npostgres=# EXPLAIN ANALYZE SELECT COUNT(*) FROM foo; \n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=13595.93..13595.94 rows=1 width=0) (actual \ntime=239535.904..239535.906 rows=1 loops=1)\n -> Seq Scan on foo (cost=0.00..11948.34 rows=659034 width=0) \n(actual time=0.022..239133.898 rows=300000 loops=1)\n Total runtime: 239535.974 ms\n(3 rows)\n\n\nThe oprofile output is pretty damning:\n\nsamples % symbol name\n42148 99.7468 TransactionIdIsCurrentTransactionId\n\nIf you put a COMMIT right before \"EXPLAIN ANALYZE...\" it runs in < 1s.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 10 Mar 2008 15:03:14 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n> \n>> It seems to happen with every statement run in the same transaction as, \n>> and after, the procedure with all the subtransactions. As soon as a \n>> COMMIT is executed, operations return to normal speed.\n>> \n>\n> Ah. 
I misread your post as saying that it happened only once.\nNo worries - it's best to be sure.\n\nThanks for looking into it.\n\n--\nCraig Ringer\n\n", "msg_date": "Tue, 11 Mar 2008 00:10:00 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n> The oprofile output is pretty damning:\n\n> samples % symbol name\n> 42148 99.7468 TransactionIdIsCurrentTransactionId\n\nOh, I have no doubt that that could eat a lot of cycles inside the\noriginating transaction ;-). I just misread Craig's complaint as\nbeing about the cost of the first table scan *after* that transaction.\n\nGetting rid of the linked-list representation would be a win in a couple\nof ways --- we'd not need the bogus \"list of XIDs\" support in pg_list.h,\nand xactGetCommittedChildren would go away. OTOH AtSubCommit_childXids\nwould get more expensive.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Mar 2008 11:20:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "We experienced a similar degradation,\nwhen heavily using savepoints within a single transaction.\nHowever, we had not yet enough time to really investigate the issue.\nIt also was not directly reproducible using a (small) set of statements from a script.\nAs the overall scenario \"bulk loads with sub-transactions\" is close to the scenario we do run, it might come down to the same reason, so.\n\nThus take my vote for a solution that does not end up with \"don't use (sub-) transactions\".\n\nRegards,\nRainer\n\nCraig Ringer schrieb:\n> Thanks for the extremely helpful response. I don't think I would've\n> spotted that one in a hurry.\n> \n>> You must be having an exception handler block in that pl/pgsql\n>> function, which implicitly creates a new subtransaction on each\n>> invocation of the exception handler block, so you end up with hundreds\n>> of thousands of committed subtransactions.\n> \n> Aah - yes, there is. I didn't realize it'd have such an impact. I can\n> work around the need for it by explicitly checking the table constraints\n> in the function - in which case an uncaught exception will terminate the\n> transaction, but should only arise when I've missed a constraint check.\n> \n>> For 8.4, it would be nice to improve that. I tested that on my laptop\n>> with a similarly-sized table, inserting each row in a pl/pgsql\n>> function with an exception handler, and I got very similar run times.\n>> According to oprofile, all the time is spent in\n>> TransactionIdIsInProgress. I think it would be pretty straightforward\n>> to store the committed subtransaction ids in a sorted array, instead\n>> of a linked list, and binary search. Or to use a hash table. That\n>> should eliminate this problem, though there is still other places as\n>> well where a large number of subtransactions will hurt performance.\n> \n> That does sound interesting - and it would be nice to be able to use\n> exception handlers this way without too huge a performance hit. 
In the\n> end though it's something that can be designed around once you're aware\n> of it - and I'm sure that other ways of storing that data have their own\n> different costs and downsides.\n> \n> What might also be nice, and simpler, would be a `notice', `log', or\n> even `debug1' level warning telling the user they've reached an absurd\n> number of subtransactions that'll cripple PostgreSQL's performance - say\n> 100,000. There's precedent for this in the checkpoint frequency warning\n> 8.3 produces if checkpoints are becoming too frequent - and like that\n> warning it could be configurable for big sites. If you think that's sane\n> I might have a go at it - though I mostly work in C++ so the result\n> probably won't be too pretty initially.\n> \n> -- \n> Craig Ringer\n> \n\n-- \nRainer Pruy\nGesch�ftsf�hrer\n\nAcrys Consult GmbH & Co. KG\nUntermainkai 29-30, D-60329 Frankfurt\nTel: +49-69-244506-0 - Fax: +49-69-244506-50\nWeb: http://www.acrys.com - Email: [email protected]\nHandelsregister: Frankfurt am Main, HRA 31151\n", "msg_date": "Mon, 10 Mar 2008 17:55:43 +0100", "msg_from": "Rainer Pruy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "On Mon, 2008-03-10 at 11:01 +0000, Heikki Linnakangas wrote:\n> According \n> to oprofile, all the time is spent in TransactionIdIsInProgress. \n\nI recently submitted a patch to optimise this. Your comments would be\nwelcome on the patch.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com \n\n PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk\n\n", "msg_date": "Mon, 10 Mar 2008 22:00:27 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after\n\tbulk insert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "On Mon, Mar 10, 2008 at 4:31 PM, Heikki Linnakangas\n<[email protected]> wrote:\n> According\n> to oprofile, all the time is spent in TransactionIdIsInProgress. I think\n> it would be pretty straightforward to store the committed subtransaction\n> ids in a sorted array, instead of a linked list, and binary search.\n\nAssuming that in most of the cases, there will be many committed and few aborted\nsubtransactions, how about storing the list of *aborted* subtransactions ?\n\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Mar 2008 11:54:42 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan after bulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "Simon Riggs wrote:\n> On Mon, 2008-03-10 at 11:01 +0000, Heikki Linnakangas wrote:\n>> According \n>> to oprofile, all the time is spent in TransactionIdIsInProgress. \n> \n> I recently submitted a patch to optimise this. Your comments would be\n> welcome on the patch.\n\nYou mean this one:\n\nhttp://archives.postgresql.org/pgsql-patches/2008-02/msg00008.php\n\n? 
Unfortunately that patch won't help in this case.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Mar 2008 09:58:20 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow (2 tuples/second) sequential scan afterbulk insert;\n\tspeed returns to ~500 tuples/second after commit" }, { "msg_contents": "(moved to pgsql-patches, as there's a patch attached)\n\nTom Lane wrote:\n> Getting rid of the linked-list representation would be a win in a couple\n> of ways --- we'd not need the bogus \"list of XIDs\" support in pg_list.h,\n> and xactGetCommittedChildren would go away. OTOH AtSubCommit_childXids\n> would get more expensive.\n\nI couldn't let this case go, so I wrote a patch. I replaced the linked \nlist with an array that's enlarged at AtSubCommit_childXids when necessary.\n\nI couldn't measure any performance hit from the extra work of enlarging \nand memcpying the array. I tried two test cases:\n\n1. Insert one row per subtransaction, and commit the subtransaction \nafter each insert. This is like the OP's case.\n\n printf(\"CREATE TABLE foo (id int4);\\n\");\n printf(\"BEGIN;\\n\");\n for(i=1; i <= 100000; i++)\n {\n printf(\"SAVEPOINT sp%d;\\n\", i);\n printf(\"INSERT INTO foo VALUES (1);\\n\");\n printf(\"RELEASE SAVEPOINT sp%d;\\n\", i);\n }\n printf(\"COMMIT;\\n\");\n\n2. Insert one row per subtransaction, but leave the subtransaction in \nnot-committed state\n\n printf(\"CREATE TABLE foo (id int4, t text);\\n\");\n printf(\"BEGIN;\\n\");\n for(i=1; i <= 100000; i++)\n {\n printf(\"SAVEPOINT sp%d;\\n\", i);\n printf(\"INSERT INTO foo VALUES (1, 'f');\\n\");\n }\n printf(\"COMMIT;\\n\");\n\nTest case 1 is not bad, because we just keep appending to the parent \narray one at a time. Test case 2 might become slower, as the number of \nsubtransactions increases, as at the commit of each subtransaction you \nneed to enlarge the parent array and copy all the already-committed \nchildxids to it. However, you hit the limit on amount of shared mem \nrequired for holding the per-xid locks before that becomes a problem, \nand the performance becomes dominated by other things anyway (notably \nLockReassignCurrentOwner).\n\nI initially thought that using a single palloc'd array to hold all the \nXIDs would introduce a new limit on the number committed \nsubtransactions, thanks to MaxAllocSize, but that's not the case. \nWithout patch, we actually allocate an array like that anyway in \nxactGetCommittedChildren.\n\nElsewhere in our codebase where we use arrays that are enlarged as \nneeded, we keep track of the \"allocated\" size and the \"used\" size of the \narray separately, and only call repalloc when the array fills up, and \nrepalloc a larger than necessary array when it does. I chose to just \ncall repalloc every time instead, as repalloc is smart enough to fall \nout quickly if the chunk the allocation was made in is already larger \nthan the new size. There might be some gain avoiding the repeated \nrepalloc calls, but I doubt it's worth the code complexity, and calling \nrepalloc with a larger than necessary size can actually force it to \nunnecessarily allocate a new, larger chunk instead of reusing the old \none. 
Thoughts on that?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com", "msg_date": "Tue, 11 Mar 2008 12:34:07 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "On Tue, Mar 11, 2008 at 6:04 PM, Heikki Linnakangas\n<[email protected]> wrote:\n> (moved to pgsql-patches, as there's a patch attached)\n>\n>\n> I couldn't let this case go, so I wrote a patch. I replaced the linked\n> list with an array that's enlarged at AtSubCommit_childXids when necessary.\n>\n\nWe can replace the array of xids with an array of flags where i'th flag is\nset to true if the corresponding subtransaction is committed. This would\nmake lookup O(1) operation. Of course, the downside is when the array\nis sparse. Or we can switch to flag based representation once the array size\ngrows beyond a limit.\n\nThanks,\nPavan\n\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Mar 2008 18:36:33 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "Heikki Linnakangas wrote:\n\n> I couldn't let this case go, so I wrote a patch. I replaced the linked \n> list with an array that's enlarged at AtSubCommit_childXids when \n> necessary.\n\nDo you still need to palloc the return value from\nxactGetCommittedChildren? Perhaps you can save the palloc/memcpy/pfree\nand just return the pointer to the array already in memory?\n\nNot that it'll any much of a performance impact, but just for\ncleanliness :-)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 11 Mar 2008 10:29:04 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential\n\tscan after bulk insert; speed returns to ~500 tuples/second after\n\tcommit" }, { "msg_contents": "Alvaro Herrera wrote:\n> Heikki Linnakangas wrote:\n> \n>> I couldn't let this case go, so I wrote a patch. I replaced the linked \n>> list with an array that's enlarged at AtSubCommit_childXids when \n>> necessary.\n> \n> Do you still need to palloc the return value from\n> xactGetCommittedChildren? Perhaps you can save the palloc/memcpy/pfree\n> and just return the pointer to the array already in memory?\n\nYeah, good point. The callers just need to be modified not to pfree it.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Mar 2008 14:03:08 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequentialscan after bulk\n\tinsert; speed returns to ~500 tuples/second aftercommit" }, { "msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n> I initially thought that using a single palloc'd array to hold all the \n> XIDs would introduce a new limit on the number committed \n> subtransactions, thanks to MaxAllocSize, but that's not the case. 
\n> Without patch, we actually allocate an array like that anyway in \n> xactGetCommittedChildren.\n\nRight.\n\n> Elsewhere in our codebase where we use arrays that are enlarged as \n> needed, we keep track of the \"allocated\" size and the \"used\" size of the \n> array separately, and only call repalloc when the array fills up, and \n> repalloc a larger than necessary array when it does. I chose to just \n> call repalloc every time instead, as repalloc is smart enough to fall \n> out quickly if the chunk the allocation was made in is already larger \n> than the new size. There might be some gain avoiding the repeated \n> repalloc calls, but I doubt it's worth the code complexity, and calling \n> repalloc with a larger than necessary size can actually force it to \n> unnecessarily allocate a new, larger chunk instead of reusing the old \n> one. Thoughts on that?\n\nSeems like a pretty bad idea to me, as the behavior you're counting on\nonly applies to chunks up to 8K or thereabouts. In a situation where\nyou are subcommitting lots of XIDs one at a time, this is likely to have\nquite awful behavior (or at least, you're at the mercy of the local\nmalloc library as to how bad it is). I'd go with the same\ndouble-it-each-time-needed approach we use elsewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Mar 2008 17:06:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "Tom Lane wrote:\n> \"Heikki Linnakangas\" <[email protected]> writes:\n>> Elsewhere in our codebase where we use arrays that are enlarged as \n>> needed, we keep track of the \"allocated\" size and the \"used\" size of the \n>> array separately, and only call repalloc when the array fills up, and \n>> repalloc a larger than necessary array when it does. I chose to just \n>> call repalloc every time instead, as repalloc is smart enough to fall \n>> out quickly if the chunk the allocation was made in is already larger \n>> than the new size. There might be some gain avoiding the repeated \n>> repalloc calls, but I doubt it's worth the code complexity, and calling \n>> repalloc with a larger than necessary size can actually force it to \n>> unnecessarily allocate a new, larger chunk instead of reusing the old \n>> one. Thoughts on that?\n> \n> Seems like a pretty bad idea to me, as the behavior you're counting on\n> only applies to chunks up to 8K or thereabouts. \n\nOh, you're right. Though I'm sure libc realloc has all kinds of smarts \nas well, it does seem better to not rely too much on that.\n\n> In a situation where\n> you are subcommitting lots of XIDs one at a time, this is likely to have\n> quite awful behavior (or at least, you're at the mercy of the local\n> malloc library as to how bad it is). I'd go with the same\n> double-it-each-time-needed approach we use elsewhere.\n\nYep, patch attached. 
I also changed xactGetCommittedChildren to return \nthe original array instead of copying it, as Alvaro suggested.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com", "msg_date": "Wed, 12 Mar 2008 13:43:46 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "On Wed, Mar 12, 2008 at 7:13 PM, Heikki Linnakangas\n<[email protected]> wrote:\n>\n>\n> Yep, patch attached. I also changed xactGetCommittedChildren to return\n> the original array instead of copying it, as Alvaro suggested.\n>\n\nAny comments on the flag based approach I suggested earlier ? Am I\nmissing some normal scenario where it won't work well ?\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 12 Mar 2008 21:17:20 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "\"Pavan Deolasee\" <[email protected]> writes:\n> On Wed, Mar 12, 2008 at 7:13 PM, Heikki Linnakangas\n> <[email protected]> wrote:\n>> Yep, patch attached. I also changed xactGetCommittedChildren to return\n>> the original array instead of copying it, as Alvaro suggested.\n\n> Any comments on the flag based approach I suggested earlier ?\n\nI didn't like it; it seemed overly complicated (consider dealing with\nXID wraparound), and it would have problems with a slow transaction\ngenerating a sparse set of subtransaction XIDs. I think getting rid of\nthe linear search will be enough to fix the performance problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Mar 2008 11:57:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane <[email protected]> wrote:\n\n>\n> I didn't like it; it seemed overly complicated (consider dealing with\n> XID wraparound),\n\nWe are talking about subtransactions here. I don't think we support\nsubtransaction wrap-around, do we ?\n\n> and it would have problems with a slow transaction\n> generating a sparse set of subtransaction XIDs.\n\nI agree thats the worst case. But is that common ? Thats what I\nwas thinking when I proposed the alternate solution. I thought that can\nhappen only if most of the subtransactions abort, which again I thought\nis not a normal case. But frankly I am not sure if my assumption is correct.\n\n> I think getting rid of\n> the linear search will be enough to fix the performance problem.\n>\n\nI wonder if a skewed binary search would help more ? 
For example,\nif we know that the range of xids stored in the array is 1 to 1000 and\nif we are searching for a number closer to 1000, we can break the\narray into <large,small> parts instead of equal parts and then\nsearch.\n\nWell, may be I making simple things complicated ;-)\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 12 Mar 2008 22:32:37 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "Pavan Deolasee wrote:\n> On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane <[email protected]> wrote:\n> \n>> I didn't like it; it seemed overly complicated (consider dealing with\n>> XID wraparound),\n> \n> We are talking about subtransactions here. I don't think we support\n> subtransaction wrap-around, do we ?\n\nImagine that you start a transaction just before transaction \nwrap-around, so that the top level XID is 2^31-10. Then you start 20 \nsubtransactions. What XIDs will they get? Now how would you map those to \na bitmap?\n\nIt's certainly possible, you could index the bitmap by the index from \ntop transaction XID for example. But it does get a bit complicated.\n\n>> and it would have problems with a slow transaction\n>> generating a sparse set of subtransaction XIDs.\n> \n> I agree thats the worst case. But is that common ? Thats what I\n> was thinking when I proposed the alternate solution. I thought that can\n> happen only if most of the subtransactions abort, which again I thought\n> is not a normal case. But frankly I am not sure if my assumption is correct.\n\nIt's not that common to have hundreds of thousands of subtransactions to \nbegin with..\n\n>> I think getting rid of\n>> the linear search will be enough to fix the performance problem.\n> \n> I wonder if a skewed binary search would help more ? For example,\n> if we know that the range of xids stored in the array is 1 to 1000 and\n> if we are searching for a number closer to 1000, we can break the\n> array into <large,small> parts instead of equal parts and then\n> search.\n\nPossibly, but I doubt it's worth the trouble. The simple binary search \nsolved the performance problem well enough. In the test case of the OP, \nwith 300000 subtransactions, with the patch, there was no longer any \nmeasurable difference whether you ran the \"SELECT COUNT(*)\" in the same \ntransaction as the INSERTs or after a COMMIT.\n\n> Well, may be I making simple things complicated ;-)\n\nI think so :-).\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 12 Mar 2008 17:14:04 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "\"Pavan Deolasee\" <[email protected]> writes:\n> On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane <[email protected]> wrote:\n>> and it would have problems with a slow transaction\n>> generating a sparse set of subtransaction XIDs.\n\n> I agree thats the worst case. But is that common ? Thats what I\n> was thinking when I proposed the alternate solution. 
I thought that can\n> happen only if most of the subtransactions abort, which again I thought\n> is not a normal case.\n\nNo, I was thinking of the case where other sessions are chewing up XIDs\nwhile the lots-of-subtransactions transaction runs. If it's slow\nenough, there could be very large gaps between the XIDs it acquires for\nits subtransactions. So you'd have a situation where the exact same\ntransaction processing might or might not run out of memory depending\non what else happened meanwhile. Not a very pleasant property.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Mar 2008 13:22:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "On Wed, Mar 12, 2008 at 10:44 PM, Heikki Linnakangas\n<[email protected]> wrote:\n\n>\n> Imagine that you start a transaction just before transaction\n> wrap-around, so that the top level XID is 2^31-10. Then you start 20\n> subtransactions. What XIDs will they get? Now how would you map those to\n> a bitmap?\n>\n\nWait. Subtransaction ids are local to a transaction and always start from 1.\nSee this:\n\n /*\n * reinitialize within-transaction counters\n */\n s->subTransactionId = TopSubTransactionId;\n currentSubTransactionId = TopSubTransactionId;\n\n\n>\n> It's not that common to have hundreds of thousands of subtransactions to\n> begin with..\n\nTrue. But thats the case we are trying to solve here :-)\n\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 12 Mar 2008 22:56:22 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "Pavan Deolasee wrote:\n> Wait. Subtransaction ids are local to a transaction and always start from 1.\n> See this:\n> \n> /*\n> * reinitialize within-transaction counters\n> */\n> s->subTransactionId = TopSubTransactionId;\n> currentSubTransactionId = TopSubTransactionId;\n\nNo, we're not talking about SubTransactionIds. We're talking about real \nXIDs assigned to subtransactions. Subtransactions are assigned regular \nXIDs as well, just like top-level transactions.\n\nI can see where you were coming from with you suggestion now :-).\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 12 Mar 2008 17:33:47 +0000", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "On Wed, Mar 12, 2008 at 11:03 PM, Heikki Linnakangas\n<[email protected]> wrote:\n> Subtransactions are assigned regular\n> XIDs as well, just like top-level transactions.\n>\n\nAh, got it now. I never noticed this before.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 12 Mar 2008 23:18:50 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n> Yep, patch attached. 
I also changed xactGetCommittedChildren to return \n> the original array instead of copying it, as Alvaro suggested.\n\nApplied with minor corrections (mostly comment fixes, but there were\na couple of real mistakes).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Mar 2008 22:20:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk\n\tinsert; speed returns to ~500 tuples/second after commit" }, { "msg_contents": "\nThis has been applied by Tom.\n\n---------------------------------------------------------------------------\n\nHeikki Linnakangas wrote:\n> Tom Lane wrote:\n> > \"Heikki Linnakangas\" <[email protected]> writes:\n> >> Elsewhere in our codebase where we use arrays that are enlarged as \n> >> needed, we keep track of the \"allocated\" size and the \"used\" size of the \n> >> array separately, and only call repalloc when the array fills up, and \n> >> repalloc a larger than necessary array when it does. I chose to just \n> >> call repalloc every time instead, as repalloc is smart enough to fall \n> >> out quickly if the chunk the allocation was made in is already larger \n> >> than the new size. There might be some gain avoiding the repeated \n> >> repalloc calls, but I doubt it's worth the code complexity, and calling \n> >> repalloc with a larger than necessary size can actually force it to \n> >> unnecessarily allocate a new, larger chunk instead of reusing the old \n> >> one. Thoughts on that?\n> > \n> > Seems like a pretty bad idea to me, as the behavior you're counting on\n> > only applies to chunks up to 8K or thereabouts. \n> \n> Oh, you're right. Though I'm sure libc realloc has all kinds of smarts \n> as well, it does seem better to not rely too much on that.\n> \n> > In a situation where\n> > you are subcommitting lots of XIDs one at a time, this is likely to have\n> > quite awful behavior (or at least, you're at the mercy of the local\n> > malloc library as to how bad it is). I'd go with the same\n> > double-it-each-time-needed approach we use elsewhere.\n> \n> Yep, patch attached. I also changed xactGetCommittedChildren to return \n> the original array instead of copying it, as Alvaro suggested.\n> \n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n\n\n> \n> -- \n> Sent via pgsql-patches mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-patches\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sun, 16 Mar 2008 22:21:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow (2 tuples/second)\n\tsequential scan after bulk insert; speed returns to ~500\n\ttuples/second after commit" } ]
[ { "msg_contents": "I have one table with 30 fields, i have more then 60 million records, if i\nuse more no of indexes will it affect the insertion\nperformance? and select performance?\n\nShall i have more then one partial index for same field, ????\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nI have one table with 30 fields, i have more then 60 million records, if i use more no of indexes will it affect the insertion performance? and select performance?Shall i have more then one partial index for same field, ????\n-- With Best Regards,Petchimuthulingam S", "msg_date": "Tue, 11 Mar 2008 17:58:30 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "how many index can have????" }, { "msg_contents": "\nOn 11-Mar-08, at 8:28 AM, petchimuthu lingam wrote:\n\n> I have one table with 30 fields, i have more then 60 million \n> records, if i use more no of indexes will it affect the insertion\n> performance? and select performance?\n>\nYes, and yes, but without more information about what you are trying \nto do, those answers are useless.\n\n> Shall i have more then one partial index for same field, ????\n>\n> -- \n> With Best Regards,\n> Petchimuthulingam S\n\n", "msg_date": "Tue, 11 Mar 2008 09:46:08 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how many index can have????" }, { "msg_contents": "petchimuthu lingam wrote:\n> I have one table with 30 fields, i have more then 60 million records, if\n> i use more no of indexes will it affect the insertion\n> performance? and select performance?\n\nMaintaining an index has a cost. That means that every time a record\ncovered by an index is added, deleted, or modified the index must be\nupdated, slowing down the insert/delete*/update operation.\n\nOn the other hand, an index can dramatically speed up a query if:\n\n- The query only requires a small proportion of the rows in the table; and\n- The index matches the conditions in the query\n\nAn index won't help with most operations that affect most of the table\nanyway; in fact, it'll generally be slower than a sequential scan.\n\nMaybe you should tell the people here a little more about your problem.\nWhat queries are you running? What's your schema? What indexes do you\nhave? What is the EXPLAIN or EXPLAIN ANALYZE output from running your\nqueries?\n\nThere's plenty of advice on the net for using indexes in postgresql\neffectively, so I suggest you do some reading. I'm no expert, but from\nmy reading and the recent work I've been doing I've found some crucial\npoints to be:\n\n- Adding an index is a trade-off between the cost of maintaining the\nindex and the benefits the index provides for queries. Don't blindly add\nindexes; use EXPLAIN ANALYZE on your queries and find out what they\nactually need and what they do.\n\n- Test queries with and without indexes ( you can set options to the\nquery planner to control which methods it uses ) and see what difference\nthey make. Don't blindly add indexes.\n\n- Prefer to make indexes on highly differentiated data. If most of the\ndata is the same or split between just a couple of values an index might\nnot help much. Test and find out.\n\n- If the data is mostly one value, and you want to make searching for\nother values faster, consider a partial index. 
For example, if you have\na boolean column \"is_archived\" that's set to 't' in 99% of rows, but you\nregularly run queries that are restricted to the 1% of rows with\nis_archived = 'f' you will benefit from a partial index on is_archived\nthat's limited to \"WHERE (NOT is_archived)\" . See the postgresql\ndocumentation on partial indexes.\n\n- Partial indexes don't have to use the indexed value(s) for the\nrestriction clause, they can use other columns. For example:\n\nCREATE INDEX some_partial_index ON tablename(customer_created_date)\nWHERE (NOT customer_is_retired);\n\n... will make queries like:\n\nSELECT customer_created_date > DATE '2007-01-01'\nwWHERE NOT customer_is_retired;\n\n... much faster then they would've been without the partial index if the\nmajority of customers have customer_is_retired set to 't'.\n\n- A WHERE clause needs to exactly match the restrictions on a partial\nindex to use that index. I'm pretty sure the data types must be exactly\nthe same and so must the order of the checks.\n\n- You can use multi-column indexes for single-column filters under some\ncircumstances, but it's slow.\n\n> Shall i have more then one partial index for same field, ????\n\nMaybe, it depends on your query and what the distribution of the data in\nthe field is like.\n\n* OK, so PostgreSQL can cheat with MVCC on deletes, but that has its own\ncosts.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 12 Mar 2008 01:04:35 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how many index can have????" } ]
[ { "msg_contents": "how to find trigger names in my database ?\n\nusing psql 7.4\n\nthe following query shows system triggers, i want only to list the\ntriggers created by me\n\nselect relname, tgname, tgtype, proname, prosrc, tgisconstraint,\ntgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs,\ntgattr, tgargs from (pg_trigger join pg_class on tgrelid=pg_class.oid)\njoin pg_proc on (tgfoid=pg_proc.oid);\n", "msg_date": "Tue, 11 Mar 2008 19:08:33 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "list user created triggers" } ]
[ { "msg_contents": "thanks...\nafter this query also it is showing default triggers ( am very much worried\nthat how the system triggers are created using my username ....\n\nok i posted this in the group you specified..\n\nOn Tue, Mar 11, 2008 at 8:03 PM, Sergey Benner <[email protected]>\nwrote:\n\n> Again :) Try this query.\n>\n>\n>\n> select usename,relname, tgname, tgtype, proname, prosrc, tgisconstraint,\n> tgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs,\n> tgattr, tgargs from (pg_trigger join pg_class c on tgrelid=c.oid )\n> join pg_proc on (tgfoid=pg_proc.oid) join pg_user pu on\n> c.relowner=pu.usesysid where pu.usename='YOURUSERNAME';\n>\n>\n>\n> Please post such questions to the [email protected] list.\n>\n> Cheers,\n> Sergey\n>\n>\n\nthanks...after this query also it is showing default triggers ( am very much worried that how the system triggers are created using my username ....ok i posted this in the group you specified..\nOn Tue, Mar 11, 2008 at 8:03 PM, Sergey Benner <[email protected]> wrote:\nAgain :) Try this query.select usename,relname, tgname, tgtype, proname, prosrc, tgisconstraint,tgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs,\ntgattr, tgargs from (pg_trigger join pg_class c on tgrelid=c.oid )\n\njoin pg_proc on (tgfoid=pg_proc.oid) join pg_user pu on c.relowner=pu.usesysid where pu.usename='YOURUSERNAME'; Please post such questions to the [email protected] list.\nCheers, Sergey", "msg_date": "Wed, 12 Mar 2008 09:37:52 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: list user created triggers" }, { "msg_contents": ">\n> select usename,relname, tgname, tgtype, proname, prosrc, tgisconstraint,\n> > tgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs,\n> > tgattr, tgargs from (pg_trigger join pg_class c on tgrelid=c.oid )\n> > join pg_proc on (tgfoid=pg_proc.oid) join pg_user pu on\n> > c.relowner=pu.usesysid where pu.usename='YOURUSERNAME';\n> >\n> >\nyes i got the answer by this query..\n\nselect relname, tgname, tgtype, proname, prosrc, tgisconstraint,\ntgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs,\ntgattr, tgargs from (pg_trigger join pg_class on tgrelid=pg_class.oid)\njoin pg_proc on (tgfoid=pg_proc.oid) where tgname not ilike '%constraint%'\nand tgname not ilike 'pg%';\n\n\n---- the query which you had given shown the implicit triggers which is\ncreated for maintaining the constraints..\n\n-- so the query which i had given will show the user created triggers...\n\nTHANKS\n\n\nselect usename,relname, tgname, tgtype, proname, prosrc, tgisconstraint,tgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs,\ntgattr, tgargs from (pg_trigger join pg_class c on tgrelid=c.oid )\n\njoin pg_proc on (tgfoid=pg_proc.oid) join pg_user pu on c.relowner=pu.usesysid where pu.usename='YOURUSERNAME';yes i got the answer by this query..\nselect relname, tgname, tgtype, proname, prosrc, tgisconstraint,tgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs,tgattr, tgargs from (pg_trigger join pg_class on tgrelid=pg_class.oid)join pg_proc on (tgfoid=pg_proc.oid) where tgname not ilike '%constraint%' and tgname not ilike 'pg%';\n---- the query which you had given shown the implicit triggers which is created for maintaining the constraints..-- so the query which i had given will show the user created triggers...THANKS", "msg_date": "Wed, 12 Mar 2008 10:47:05 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": 
true, "msg_subject": "Re: list user created triggers" } ]
[ { "msg_contents": "Is there any article describing the migration database from postgresql 7.4to\n8.1\n\nIs there any article describing the migration database from postgresql 7.4 to 8.1", "msg_date": "Wed, 12 Mar 2008 10:57:09 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "migration of 7.4 to 8.1" }, { "msg_contents": "sathiya psql wrote:\n> Is there any article describing the migration database from postgresql 7.4to\n> 8.1\nThis might be a silly question, but ... why 8.1 ?\n\nIf you're doing a major upgrade, why not go straight to 8.3? It's been \nout long enough that there aren't any obvious nasty bugs, and there have \nbeen a fair few fixes and improvements since prior versions.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 12 Mar 2008 14:32:42 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": ">\n> This might be a silly question, but ... why 8.1 ?\n>\n> If you're doing a major upgrade, why not go straight to 8.3? It's been\n> out long enough that there aren't any obvious nasty bugs, and there have\n> been a fair few fixes and improvements since prior versions.\n>\nBecause am using Debian ETCH stable... i need to continue using this... In\nDebian ETCH stable 8.1 is only there....\n\nI will be installing my project in other machines where it is having DEBIAN\nETCH STABLE, i dont want to face any problems... so only...\n\nam i right ???\n\n>\n> --\n> Craig Ringer\n>\n\nThis might be a silly question, but ... why 8.1 ?\n\nIf you're doing a major upgrade, why not go straight to 8.3? It's been\nout long enough that there aren't any obvious nasty bugs, and there have\nbeen a fair few fixes and improvements since prior versions.\nBecause am using Debian ETCH stable... i need to continue using this... In Debian ETCH stable 8.1 is only there....I will be installing my project in other machines where it is having DEBIAN ETCH STABLE, i dont want to face any problems... so only...\nam i right ???\n--\nCraig Ringer", "msg_date": "Wed, 12 Mar 2008 11:54:08 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "sathiya psql wrote:\n> This might be a silly question, but ... why 8.1 ?\n> \n> If you're doing a major upgrade, why not go straight to 8.3? It's been\n> out long enough that there aren't any obvious nasty bugs, and there have\n> been a fair few fixes and improvements since prior versions.\n> \n> Because am using Debian ETCH stable... i need to continue using this... \n> In Debian ETCH stable 8.1 is only there....\n> \n> I will be installing my project in other machines where it is having \n> DEBIAN ETCH STABLE, i dont want to face any problems... so only...\n\nYou can get 8.3 from backports: http://www.backports.org/ - it's a \ndebian project to get more up to date versions for existing stable \nreleases (they package everything exactly the same way).\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Wed, 12 Mar 2008 17:28:27 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": ">\n> This might be a silly question, but ... why 8.1 ?\n\nhow it will be a silly question....\n\nI thought that some manual changes are required... so am asking this.... may\nbe argument for functions had changed.. 
or any other changes...\n\n\n>\n> --\n> Craig Ringer\n>\n\nThis might be a silly question, but ... why 8.1 ?how it will be a silly question....\nI thought that some manual changes are required... so am asking this.... may be argument for functions had changed.. or any other changes... \n\n--\nCraig Ringer", "msg_date": "Wed, 12 Mar 2008 11:59:27 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "sathiya psql wrote:\n>> This might be a silly question, but ... why 8.1 ?\n>>\n>> If you're doing a major upgrade, why not go straight to 8.3? It's been\n>> out long enough that there aren't any obvious nasty bugs, and there have\n>> been a fair few fixes and improvements since prior versions.\n>>\n> Because am using Debian ETCH stable... i need to continue using this... In\n> Debian ETCH stable 8.1 is only there....\n\nI use Etch on my servers. The magic of etch-backports from backports.org \nallows me to use 8.3 without messing with anything else. It works \nextremely well.\n\nAll I had to do to install 8.3 was add the etch-backports line to my \n/etc/apt/sources.list, run `apt-get update', then run:\n\napt-get -t etch-backports install postgresql-8.3\n\n... then migrate the data to it and remove the old packages.\n\nSee backports.org for a mirror list.\n\n> I will be installing my project in other machines where it is having DEBIAN\n> ETCH STABLE, i dont want to face any problems... so only...\n> \n> am i right ???\n\nPersonally I'd use 8.3 from backports.org, but it's of course up to you.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 12 Mar 2008 15:33:13 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "In the home page itself they were saying .... testing ... unstable....\n\nthen we should not use that for live.....\n\nso i prefer 8.1 .........\n\n>\n> You can get 8.3 from backports: http://www.backports.org/ - it's a\n> debian project to get more up to date versions for existing stable\n> releases (they package everything exactly the same way).\n>\n> --\n> Postgresql & php tutorials\n> http://www.designmagick.com/\n>\n\nIn the home page itself they were saying .... testing ... unstable....then we should not use that for live.....so i prefer 8.1 .........\n\nYou can get 8.3 from backports: http://www.backports.org/ - it's a\ndebian project to get more up to date versions for existing stable\nreleases (they package everything exactly the same way).\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/", "msg_date": "Wed, 12 Mar 2008 12:06:45 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "sathiya psql wrote:\n>> This might be a silly question, but ... why 8.1 ?\n>> \n>\n> how it will be a silly question....\n>\n> I thought that some manual changes are required... so am asking this.... may\n> be argument for functions had changed.. or any other changes...\nThere have been changes for sure... but I doubt going from 7.4 to 8.1 is \nmuch harder than going from 7.4 to 8.3 . I'm developing a database that \nstarted off in 8.1 just before 8.2 came out and is now running on 8.3 . \nI didn't find changing versions a big hassle and noticed big \nimprovements between versions.\n\nIn any case, I was suggesting that *I* might've been asknig a silly \nquestion by asking you why you wanted 8.1. 
You've covered that (you want \nto use the version packaged in your distro) so that's easy enough.\n\n--\nCraig Ringer\n\n", "msg_date": "Wed, 12 Mar 2008 15:36:49 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "On Wed, 12 Mar 2008, Craig Ringer wrote:\n\n> sathiya psql wrote:\n>>> This might be a silly question, but ... why 8.1 ?\n>>> \n>>> If you're doing a major upgrade, why not go straight to 8.3? It's been\n>>> out long enough that there aren't any obvious nasty bugs, and there have\n>>> been a fair few fixes and improvements since prior versions.\n>>> \n>> Because am using Debian ETCH stable... i need to continue using this... In\n>> Debian ETCH stable 8.1 is only there....\n>\n> I use Etch on my servers. The magic of etch-backports from backports.org \n> allows me to use 8.3 without messing with anything else. It works extremely \n> well.\n>\n> All I had to do to install 8.3 was add the etch-backports line to my \n> /etc/apt/sources.list, run `apt-get update', then run:\n>\n> apt-get -t etch-backports install postgresql-8.3\n>\n> ... then migrate the data to it and remove the old packages.\n>\n> See backports.org for a mirror list.\n>\n>> I will be installing my project in other machines where it is having DEBIAN\n>> ETCH STABLE, i dont want to face any problems... so only...\n>> \n>> am i right ???\n>\n> Personally I'd use 8.3 from backports.org, but it's of course up to you.\n\nconsider the fact that after 1 year Etch is 2 major releases out of date. \nby the time the next stable release is made you will be running on ancient \nversions unless you make use of backports.org (or compile it yourself, \nwhich is useually my choice).\n\nDavid Lang\n", "msg_date": "Tue, 11 Mar 2008 23:39:40 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "My question is that how to migrate my database to 7.4 to 8.1\n\n\nthat is not only dumping the db and extracting that in 8.1 ..\n\nIf i do that whether it will work without problem, or i have to do some\nmanual changes is my question...\n\nMy question is that how to migrate my database to 7.4 to 8.1that is not only dumping the db and extracting that in 8.1 ..If i do that whether it will work without problem, or i have to do some manual changes is my question...", "msg_date": "Wed, 12 Mar 2008 12:13:01 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "sathiya psql wrote:\n> My question is that how to migrate my database to 7.4 to 8.1\n> \n> \n> that is not only dumping the db and extracting that in 8.1 ..\n> \n> If i do that whether it will work without problem, or i have to do some\n> manual changes is my question...\n\nStart by reading the postgresql 8.0 and 8.1 release notes. 
See:\n\nhttp://www.postgresql.org/docs/current/static/release.html\n\nparticularly:\n\nhttp://www.postgresql.org/docs/current/static/release-8-0.html\nhttp://www.postgresql.org/docs/current/static/release-8-1.html\n\n--\nCraig Ringer\n\n\n", "msg_date": "Wed, 12 Mar 2008 15:49:13 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "On Wed, 12 Mar 2008, sathiya psql wrote:\n\n> My question is that how to migrate my database to 7.4 to 8.1\n>\n>\n> that is not only dumping the db and extracting that in 8.1 ..\n>\n> If i do that whether it will work without problem, or i have to do some\n> manual changes is my question...\n\nyou would need to look at the release notes for each version to see if you \nare using any of the features that were changed.\n\nbut in return for this hassle (which you will need to do eventually, you \nare just deciding when) you get to use the newer version with all the \nspeed improvements, not to mention the new features.\n\ngoing from 7.x to 8.0 is the most painful step. going on from 8.0 to 8.1 \nto 8.2 to 8.3 are relativly minor steps in comparison.\n\nDavid Lang\n", "msg_date": "Tue, 11 Mar 2008 23:50:30 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "On Wed, 12 Mar 2008, sathiya psql wrote:\n\n> In the home page itself they were saying .... testing ... unstable....\n\nyou are talking about the debian home page right?\n\n> then we should not use that for live.....\n>\n> so i prefer 8.1 .........\n\nDebian selected the version of Postgres for Etch about a year and a half \nago. At that point selecting 8.1 was a resonable choice.\n\nDebian has a policy that they will never change the version number of a \npackage in a stable release (they will backport many bugfixes, but not \nupgrade the version)\n\nAs a result 2 years from now when Postgres is on 8.5 stable (and looking \nat 8.6), Debian Etch will still be on 8.1\n\nit is of course your option to stick with 8.1, but before very long the \nanswer to most of your questions about postgres is going to be 'upgrade to \na resonably current version' (people running 7.4 and 8.0 are starting to \nget that answer now. 8.1 and 8.2 are recent enough that it's rare to get \nthat now, but in a year or two that will change)\n\nfor most utility software you just want it to work and don't really care \nabout new features over a couple of years (or the project has reached the \nstage where it just doesn't change much over a couple of years). In these \nsituations the Debian policy is great, you don't have to worry about new \nstuff messing you up.\n\nhowever some software has much faster development cycles. The kernel has a \nnew release about every two months, Postgres is aiming at a one year \ncycle, Apache has it's own release schedule. These packages are usually \npretty core to your operation and the improvments (and security fixes that \nare not possible to backport sanely) mean that you need to think very hard \nabout what version of them you are going to run. 
On my systems I bypass \nDebian directly for such packages and compile them myself, the \nbackports.org option allows you to avoid that hassle, and get versions \nthat are fairly well tested (like any new version, you need to do some \ntesting yourself), just wait a month or two after a new release hits \nbackports.org and you will be pretty safe.\n\nDavid Lang\n\n>>\n>> You can get 8.3 from backports: http://www.backports.org/ - it's a\n>> debian project to get more up to date versions for existing stable\n>> releases (they package everything exactly the same way).\n>>\n>> --\n>> Postgresql & php tutorials\n>> http://www.designmagick.com/\n>>\n>\n", "msg_date": "Tue, 11 Mar 2008 23:59:47 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "[email protected] wrote:\n> On Wed, 12 Mar 2008, sathiya psql wrote:\n> \n>> In the home page itself they were saying .... testing ... unstable....\n> \n> you are talking about the debian home page right?\n> \n>> then we should not use that for live.....\n>>\n>> so i prefer 8.1 .........\n> \n> Debian selected the version of Postgres for Etch about a year and a half \n> ago. At that point selecting 8.1 was a resonable choice.\n> \n> Debian has a policy that they will never change the version number of a \n> package in a stable release (they will backport many bugfixes, but not \n> upgrade the version)\n> \n> As a result 2 years from now when Postgres is on 8.5 stable (and looking \n> at 8.6), Debian Etch will still be on 8.1\n\nI like that with debian I can install multiple postgres versions and it \nhandles everything for me :) Changing the default port, config files are \nin different folders, different start up scripts.. all works very nicely :)\n\nWhich means you can have 8.1 installed and 8.3 installed at the same \ntime - both from packages, no compiling etc necessary - and can switch \nbetween them very easily.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Wed, 12 Mar 2008 18:05:52 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": ">\n> you are talking about the debian home page right?\n>\n> --- no am talking about backports home page..\n\nyou are talking about the debian home page right?\n--- no am talking about backports home page..", "msg_date": "Wed, 12 Mar 2008 12:52:30 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "On Wed, Mar 12, 2008 at 12:13:01PM +0530, sathiya psql wrote:\n> My question is that how to migrate my database to 7.4 to 8.1\n\n aptitude install postgresql-8.1\n pg_dropcluster 8.1 main\n pg_upgradecluster 7.4 main\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 12 Mar 2008 10:18:49 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" }, { "msg_contents": "\nOn Mar 12, 2008, at 2:43 AM, sathiya psql wrote:\n\n> My question is that how to migrate my database to 7.4 to 8.1\n>\n>\n> that is not only dumping the db and extracting that in 8.1 ..\n>\n> If i do that whether it will work without problem, or i have to do \n> some manual changes is my question...\n\nthe pg dump/restore should work without issue. 
you might not get all \nthe relations of which sequence is \"owned\" by which table, and similar \nfor foreign keys, but you may not miss that much.\n\nthen, review *every* SQL query you run and make sure it runs against \n8.1. things that will trap you are treating strings as integers in \ncertain cases. most everything else should just work. you will also \nwant to review the performance of each query.\n\nyou can find a detailed list of changes to the DB in the release notes \nfile that comes with every release.\n\nfinally, this is not really a performance issue, so perhaps the pg- \ngeneral list would have been better to ask your question.\n\n", "msg_date": "Wed, 12 Mar 2008 10:46:25 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: migration of 7.4 to 8.1" } ]
[ { "msg_contents": "Hello,\n\n(you could receive this message twice - I have some email issues sorry)\n\nI'm setting up a new DB with Centos 5 (amd64) + postgresql 8.3\ninstalled from the pgsql yum repository. This is a host dedicated to\npostgresql. From the benchmarks I found here and there on the web, and\nhaving digged a bit on the ML, it seems that I could expect something\nsimilar to 4000 tps with pgbench.\nThe host is a HP DL580 G5, 8GB of RAM with 8x146G 10k SAS disks and\n2x2.4GHz quad-code intel xeons, with a battery-backed P400/512MB\ncontroller.\n\nI tried two disk layout configurations:\n\n- raid 1 system\n- raid 1 for WAL\n- raid 10 (4 disks) for data\n\nand\n\n- raid 1 system\n- raid 10 (6 disks) WAL+data\n\nthe FS is ext3 (without lvm) for all the db-related partitions,\ncreated with the following options\n\n-b 4096 -E stride=32 -j -m 10 -T largefile\n\n(the array stripe size is 128k) and mounted with\nnoatime,data=writeback\n\npgbench reports circa 2000 tps with a number of clients ranging from 2\nto 12 and starts decreasing afterwards with the number of clients, for\nexample I get 1200 tps with 50 clients.\n\nThe relevant configuration items of the DB is like in the following\n(default values are not shown)\n\n------------------------------------\nmax_connections = 100\nshared_buffers = 2GB\ntemp_buffers = 128MB\nwork_mem = 8MB\nmaintenance_work_mem = 128MB\nmax_fsm_pages = 1000000\nbgwriter_delay = 200ms\nfsync = on\nsynchronous_commit = on\nwal_sync_method = fdatasync\nwal_buffers = 1024kB\ncommit_delay = 100\ncheckpoint_segments = 128\neffective_cache_size = 5GB\nconstraint_exclusion = on\n--------------------------------------\n\nI'm a bit worried about this 2k tps limit I hit regardless of the disk\nlayout, and also by the fact that it seems someone else reported to\nhave much better figures (4ktps) with similar hardware. Does anyone\nhave hints/suggestions/advices about what could be further optimized?\nThanks a lot for your help,\nEnrico\n\n\n", "msg_date": "Wed, 12 Mar 2008 16:25:29 +0100", "msg_from": "Enrico Sirola <[email protected]>", "msg_from_op": true, "msg_subject": "8.3 write performance" }, { "msg_contents": "Hi,\nI follow up myself: I was using pgbench with the wrong scale size. \nWith the configuration I posted before and scale=100 I Get the \nfollowing:\n\nsudo -u postgres pgbench -c 10 -t 10000 -s 100\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 4399.753895 (including connections establishing)\ntps = 4405.228901 (excluding connections establishing)\n\nsudo -u postgres pgbench -c 50 -t 10000 -s 100\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 50\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 500000/500000\ntps = 3208.532479 (including connections establishing)\ntps = 3211.816174 (excluding connections establishing)\n\nwhich I think is in line with the expectations.\nThanks,\ne.\n\n", "msg_date": "Fri, 14 Mar 2008 11:17:12 +0100", "msg_from": "Enrico Sirola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.3 write performance" } ]
[ { "msg_contents": "Hello, we plan to buy a dedicated server to host our database.\nHere is the proposal I was given (with a second identical server fro \nbackup using log shipping):\n=========================\nIBM X3650 (This is a 2U server, can hold 8 Drives)\n2 x QC Xeon E5450 (3.0GHz 12MB L2 1333MHz 80W)\n8 x 2GB RAM (16GB total)\n2.5\" SAS Hotswap\nServeRAID-8K SAS Controller\n8 x 73GB 15K 2.5\" SAS Drive\nCD/DVD Drive\nRemote Supervisor Adapter II Slimline\nRedundant Power\n4 Year, 24x7 2hour support/warranty\n\n=========================\n\nI would like specialists advices.\n\nIf you need additional details, please let me know.\n\nThanks in advance for your help\n\nThank you\n\nPascal\n", "msg_date": "Wed, 12 Mar 2008 19:58:51 +0100", "msg_from": "Pascal Cohen <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware question for a DB server" }, { "msg_contents": "What type of usage does it need to scale for? How many concurrent\nconnections? What size database? Data warehousing or OLTP-type\nworkloads? Ratio of reads/writes? Do you care about losing data?\n\nOne question that's likely going to be important depending on your\nanswers above is whether or not you're getting a battery-backed write\ncache for that ServeRAID-8K.\n\n-- Mark Lewis\n\nOn Wed, 2008-03-12 at 19:58 +0100, Pascal Cohen wrote:\n> Hello, we plan to buy a dedicated server to host our database.\n> Here is the proposal I was given (with a second identical server fro \n> backup using log shipping):\n> =========================\n> IBM X3650 (This is a 2U server, can hold 8 Drives)\n> 2 x QC Xeon E5450 (3.0GHz 12MB L2 1333MHz 80W)\n> 8 x 2GB RAM (16GB total)\n> 2.5\" SAS Hotswap\n> ServeRAID-8K SAS Controller\n> 8 x 73GB 15K 2.5\" SAS Drive\n> CD/DVD Drive\n> Remote Supervisor Adapter II Slimline\n> Redundant Power\n> 4 Year, 24x7 2hour support/warranty\n> \n> =========================\n> \n> I would like specialists advices.\n> \n> If you need additional details, please let me know.\n> \n> Thanks in advance for your help\n> \n> Thank you\n> \n> Pascal\n> \n", "msg_date": "Wed, 12 Mar 2008 12:23:41 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware question for a DB server" }, { "msg_contents": "Mark Lewis wrote:\n> What type of usage does it need to scale for? How many concurrent\n> connections? What size database? Data warehousing or OLTP-type\n> workloads? Ratio of reads/writes? Do you care about losing data?\n> \nI expected those questions but I was sure that I would forget or ignore \nsome ;)\n- This Database will be accessed by Web applications but also by an XMPP \nserver. This means that those are not complex requests but we may have a \nnumber of high parallel requests for small results. 
Ideally as many \nconnections as possible would be nice.\n- I am not sure but I would say from what I found thanks to Google is \nthat we are probably closer to an OLTP type workload (but I may be wrong)\n- Size of the DB: a few Gb but not yet more than the 16Gb.\n- It is a read mainly database (8/9 reads for 1 write) with potential \nbatch updates\n- We cannot afford (anymore) losing data.\n\n\n> One question that's likely going to be important depending on your\n> answers above is whether or not you're getting a battery-backed write\n> cache for that ServeRAID-8K.\n>\n> -- Mark Lewis\n>\n> On Wed, 2008-03-12 at 19:58 +0100, Pascal Cohen wrote:\n> \n>> Hello, we plan to buy a dedicated server to host our database.\n>> Here is the proposal I was given (with a second identical server fro \n>> backup using log shipping):\n>> =========================\n>> IBM X3650 (This is a 2U server, can hold 8 Drives)\n>> 2 x QC Xeon E5450 (3.0GHz 12MB L2 1333MHz 80W)\n>> 8 x 2GB RAM (16GB total)\n>> 2.5\" SAS Hotswap\n>> ServeRAID-8K SAS Controller\n>> 8 x 73GB 15K 2.5\" SAS Drive\n>> CD/DVD Drive\n>> Remote Supervisor Adapter II Slimline\n>> Redundant Power\n>> 4 Year, 24x7 2hour support/warranty\n>>\n>> =========================\n>>\n>> I would like specialists advices.\n>>\n>> If you need additional details, please let me know.\n>>\n>> Thanks in advance for your help\n>>\n>> Thank you\n>>\n>> Pascal\n>>\n>> \n>\n> \n\n", "msg_date": "Wed, 12 Mar 2008 20:42:57 +0100", "msg_from": "Pascal Cohen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware question for a DB server" }, { "msg_contents": "On Wed, 12 Mar 2008, Mark Lewis wrote:\n\n> One question that's likely going to be important depending on your\n> answers above is whether or not you're getting a battery-backed write\n> cache for that ServeRAID-8K.\n\nApparently there's a 8k-l and an regular 8-k; the l doesn't have the \ncache, so if this one is a regular 8-k it will have 256MB and a battery. \nSee http://www.redbooks.ibm.com/abstracts/TIPS0054.html?Open#ServeRAID-8k\n\n From Pascal's description of the application this system sounds like \noverkill whether or not there's a cache. For scaling to lots of small \nrequests, using things like using connection pooling may end up being more \nimportant than worring about the disk system (the database isn't big \nenough relative to RAM for that to be too important).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 12 Mar 2008 16:08:41 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware question for a DB server" }, { "msg_contents": "Greg Smith wrote:\n> On Wed, 12 Mar 2008, Mark Lewis wrote:\n>\n>> One question that's likely going to be important depending on your\n>> answers above is whether or not you're getting a battery-backed write\n>> cache for that ServeRAID-8K.\n>\n> Apparently there's a 8k-l and an regular 8-k; the l doesn't have the \n> cache, so if this one is a regular 8-k it will have 256MB and a \n> battery. See \n> http://www.redbooks.ibm.com/abstracts/TIPS0054.html?Open#ServeRAID-8k\nIt is the solution with RAM and battery.\n>\n>> From Pascal's description of the application this system sounds like \n> overkill whether or not there's a cache. 
For scaling to lots of small \n> requests, using things like using connection pooling may end up being \n> more important than worring about the disk system (the database isn't \n> big enough relative to RAM for that to be too important).\n>\nI agree with what you are saying. We are using Java with a pool of \nconnections to access the DB. Today our database is really small \ncompared to the RAM but it may evolve and even will probably grow (hope \nso which would be a good situation).\n\nThanks for your advices/remarks.\n", "msg_date": "Fri, 14 Mar 2008 20:24:25 +0100", "msg_from": "Pascal Cohen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware question for a DB server" }, { "msg_contents": "On Fri, Mar 14, 2008 at 1:24 PM, Pascal Cohen <[email protected]> wrote:\n> I agree with what you are saying. We are using Java with a pool of\n> connections to access the DB. Today our database is really small\n> compared to the RAM but it may evolve and even will probably grow (hope\n> so which would be a good situation).\n>\n\n\nKeep in mind that differential cost between a mediocre and a good RAID\ncontroller is often only a few hundred dollars. If that means you can\nscale to 10 or 100 times as many users, it's an investment worth\nmaking up front rather than later on.\n", "msg_date": "Fri, 14 Mar 2008 13:43:33 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware question for a DB server" } ]
[ { "msg_contents": "Hi all,\n\nJust upgraded to 8.2.5. \n\n \n\nGiven table t with columns a, b, c, d\n\n And index on t using btree (a,b)\n\n Is this indexable:\n\n Select * from t where a || b = '124cab' (or whatever)\n\n Assume a and b are defined as char(3)\n\n \n\nI have tried various op classes and so far have just gotten sequential scans\n\n \n\nThanks for your time\n\n \n\n \n\nMark Steben\n\nSenior Database Administrator\n@utoRevenueT \nA Dominion Enterprises Company\n480 Pleasant Street\nSuite B200\nLee, MA 01238\n413-243-4800 Home Office \n413-243-4809 Corporate Fax\n\nmsteben <blocked::mailto:[email protected]> @autorevenue.com\n\nVisit our new website at \n <blocked::http://www.autorevenue.com/> www.autorevenue.com\n\n \n\nIMPORTANT: The information contained in this e-mail message is confidential\nand is intended only for the named addressee(s). If the reader of this\ne-mail message is not the intended recipient (or the individual responsible\nfor the delivery of this e-mail message to the intended recipient), please\nbe advised that any re-use, dissemination, distribution or copying of this\ne-mail message is prohibited. If you have received this e-mail message in\nerror, please reply to the sender that you have received this e-mail message\nin error and then delete it. Thank you.\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi all,\nJust upgraded to 8.2.5.  \n \nGiven table t with columns a, b, c, d\n  And index on t using btree (a,b)\n   Is this indexable:\n  Select * from t where a || b = ‘124cab’  (or whatever)\n    Assume a and b are defined as char(3)\n \nI have tried various op classes and so far have just gotten sequential scans\n \nThanks for your time\n \n \n\nMark Steben\nSenior Database Administrator\n@utoRevenue™ \nA Dominion Enterprises\nCompany\n480 Pleasant Street\nSuite B200\nLee, MA\n01238\n413-243-4800 Home Office \n413-243-4809 Corporate Fax\n\[email protected]\n\nVisit our new website\nat \nwww.autorevenue.com\n\n\n\n \n\n\nIMPORTANT: The information contained in\nthis e-mail message is confidential and is intended only for the named\naddressee(s). If the reader of this e-mail message is not the intended\nrecipient (or the individual responsible for the delivery of this e-mail\nmessage to the intended recipient), please be advised that any re-use,\ndissemination, distribution or copying of this e-mail message is prohibited.\n If you have received this e-mail message in error, please reply to the\nsender that you have received this e-mail message in error and then delete it.\n Thank you.", "msg_date": "Wed, 12 Mar 2008 16:38:14 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "Are piped columns indexable" }, { "msg_contents": "On Wed, Mar 12, 2008 at 4:38 PM, Mark Steben <[email protected]> wrote:\n> Given table t with columns a, b, c, d\n>\n> And index on t using btree (a,b)\n>\n> Is this indexable:\n>\n> Select * from t where a || b = '124cab' (or whatever)\n>\n> Assume a and b are defined as char(3)\n> I have tried various op classes and so far have just gotten sequential scans\n\n\ncreate index t_idx on t((a || b));\n\n:-)\n\nmerlin\n", "msg_date": "Wed, 12 Mar 2008 16:55:26 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are piped columns indexable" } ]
[ { "msg_contents": "Hi\n\nI've been wondering about postgresql's handling of repeated subqueries \nin statements for a while, and thought I'd ask here.\n\nIf the exact same subquery appears in multiple places in a complex \nquery, it seems to be executed separately each time it appears. I'm \nwondering if there's any way, without moving over to PL/PgSQL, to \n\"collapse\" such multiple executions down to a single one.\n\nConsider this simplistic example, which while useless demonstrates the \nissue in a self contained way:\n\nexplain analyze select (select count(id) from booking) as x\nwhere (select count(id) from booking) > 100;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Result (cost=37023.85..37023.86 rows=1 width=0)\n (actual time=668.922..668.922 rows=1 loops=1)\n One-Time Filter: ($1 > 100)\n InitPlan\n -> Aggregate (cost=18511.92..18511.92 rows=1 width=4)\n (actual time=335.150..335.150 rows=1 loops=1)\n -> Seq Scan on booking (cost=0.00..17627.13 rows=353913\n width=4) (actual time=0.128..200.147 rows=353913 loops=1)\n -> Aggregate (cost=18511.92..18511.92 rows=1 width=4)\n (actual time=333.756..333.756 rows=1 loops=1)\n -> Seq Scan on booking (cost=0.00..17627.13 rows=353913\n width=4) (actual time=0.133..198.261 rows=353913 loops=1)\n Total runtime: 668.993 ms\n(8 rows)\n\n\nThe query:\n\n(select count(id) from booking)\n\nis executed twice, even though it's guaranteed by MVCC that the result \nwill be the same in both subqueries.\n\nIdeally you'd be able to say something like:\n\nselect (select count(id) from booking) as x\nwhere x > 100;\n\nI realize that in this case the query can be rewritten as:\n\nselect x.c from (select count(id) AS c from booking) as x\nwhere x.c > 100;\n\n\nbut in more complex queries introducing an additional FROM clause for a \n single value can be undesirable and/or ugly.\n\nIs there any way to get postgresql to detect such repeated query parts \nand evaluate them only once?\n\n--\nCraig Ringer\n", "msg_date": "Thu, 13 Mar 2008 12:37:53 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Repeated execution of identical subqueries" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> Is there any way to get postgresql to detect such repeated query parts \n> and evaluate them only once?\n\nNo, not at the moment. In principle the planner could look for such\nduplicates, but it'd be wasted cycles so much of the time that I'd be\nloath to do it.\n\nThere is work afoot to implement the SQL:2003 \"WITH\" syntax, which\nI think would offer a syntactic solution to your problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Mar 2008 01:13:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated execution of identical subqueries " }, { "msg_contents": "Tom Lane wrote:\n> No, not at the moment. In principle the planner could look for such\n> duplicates, but it'd be wasted cycles so much of the time that I'd be\n> loath to do it.\n> \nGood point - there are better places to spend time, and I imagine it'd \nbe an expensive thing to check too.\n> There is work afoot to implement the SQL:2003 \"WITH\" syntax, which\n> I think would offer a syntactic solution to your problem.\nYes, it would. 
In fact, I was thinking about the syntax seen in some \nfunctional languages - like Haskell's `where' clause - that defines a \nsubexpression available to all parts of the expression. If the SQL:2003 \nWITH expression is anything like that it'd be very handy indeed.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 13 Mar 2008 15:35:00 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Repeated execution of identical subqueries" }, { "msg_contents": "Craig Ringer wrote:\n> Tom Lane wrote:\n>> No, not at the moment. In principle the planner could look for such\n>> duplicates, but it'd be wasted cycles so much of the time that I'd be\n>> loath to do it.\n>> \n> Good point - there are better places to spend time, and I imagine it'd \n> be an expensive thing to check too.\n\nThe one very simple case that gets me every time is when a user-defined function is called.\n\ntest=> explain analyze select chm_mf(isosmiles) from version where chm_mf(isosmiles) like '%C20%' or chm_mf(isosmiles) like '%C21%';\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Seq Scan on version (cost=0.00..18.57 rows=29 width=43) (actual time=48.798..1180.538 rows=50 loops=1)\n Filter: ((chm_mf(isosmiles) ~~ '%C20%'::text) OR (chm_mf(isosmiles) ~~ '%C21%'::text))\n Total runtime: 1180.683 ms\n(3 rows)\n\nThis table only has 375 rows TOTAL, yet it takes over a second to answer this query: \"Find me molecules with either 20 or 21 carbon atoms in the molecular formula\". This is a somewhat contrived example, but we have others that really do get us, and we go to great lengths to avoid them. It requires us to avoid \"natural\" queries like the one above, and instead create convoluted application logic using temporary tables to hold the results of a function call, which we can then use in a query that uses the values more than once. Something like this:\n\ncreate temporary table foo(mf text);\nexplain analyze insert into foo (select chm_mf(isosmiles) from version);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Seq Scan on version (cost=0.00..15.69 rows=375 width=43) (actual time=1.829..566.233 rows=375 loops=1)\n Total runtime: 568.470 ms\n\nNow we can use this table in a complex query where we need the MF twice, and discard it, and it's STILL faster than the original \"natural\" SQL. Notice that it takes just half the time as the first query, which tells me chm_mf() was being called twice in the first example. This function is defined as:\n\n CREATE OR REPLACE FUNCTION chm_mf(text) RETURNS text\n AS '/usr/local/pgsql/lib/libchm.so', 'chm_mf'\n LANGUAGE 'C' STRICT IMMUTABLE;\n\nI can understand how in the general case, it is very hard to identify repeated subqueries. But it seems like an IMMUTABLE function shouldn't be called twice on the same column -- isn't that the whole point of IMMUTABLE?\n\nCraig\n", "msg_date": "Thu, 13 Mar 2008 07:06:03 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated execution of identical subqueries" } ]
[ { "msg_contents": "I have created partial index on a field with conditions, if field _a > 200\nand field_a < 300.\n\nI using the select query with condition as field_a in ( 100, 250, 289, ),\n\nWill it use the index.\n\n\n\n-- \nWith Best Regards,\nPetchimuthulingam S\n\nI have created partial index on a field with conditions, if   field _a > 200 and field_a  < 300. I using the select query with condition as field_a in ( 100, 250, 289,  ),Will it use the index.\n-- With Best Regards,Petchimuthulingam S", "msg_date": "Thu, 13 Mar 2008 12:51:54 +0530", "msg_from": "\"petchimuthu lingam\" <[email protected]>", "msg_from_op": true, "msg_subject": "partial index + select query performance" }, { "msg_contents": "am Thu, dem 13.03.2008, um 12:51:54 +0530 mailte petchimuthu lingam folgendes:\n> \n> I have created partial index on a field with conditions, if field _a > 200\n> and field_a < 300.\n> \n> I using the select query with condition as field_a in ( 100, 250, 289, ),\n> \n> Will it use the index.\n\nWhy do you ask us, ask your PG, simple type\n\nexplain select ... from ... where field_a in ( 100, 250, 289 )\n\n\nPG shows you the execution plan. But i guess, the index are not used,\nbecause the value 100 are not in the index.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 13 Mar 2008 08:33:46 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partial index + select query performance" } ]
[ { "msg_contents": "Hi chaps,\n\nI'm looking at switching out the perc5i (lsi megaraid) cards from our\nDell 2950s for something else as they're crap at raid 10.\n\nThing is I'm not entirely sure where to start, we're using 6 SAS\ndrives and also need a bbu cache. The perc5i has 256mb which I'm sure\nwould be fine for us.\n\nWe're using debian etch 64 bit.\n\nWhat cards have you lot had success with?\n\nAny tips would be appreciated.\n\n\n ___________________________________________________________ \nRise to the challenge for Sport Relief with Yahoo! For Good \n\nhttp://uk.promotions.yahoo.com/forgood/\n\n", "msg_date": "Thu, 13 Mar 2008 05:33:55 -0700 (PDT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Recomendations on raid controllers raid 1+0" }, { "msg_contents": "Glyn,\n\nOn Thu, Mar 13, 2008 at 1:33 PM, Glyn Astill <[email protected]> wrote:\n> I'm looking at switching out the perc5i (lsi megaraid) cards from our\n> Dell 2950s for something else as they're crap at raid 10.\n\nDo you have numbers? Perc 5/i cards perform quite well for us (we have\na 8 disks RAID 10 in a 2900 server with the traditional Perc 5/i).\n\n--\nGuillaume\n", "msg_date": "Thu, 13 Mar 2008 16:58:41 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recomendations on raid controllers raid 1+0" } ]
[ { "msg_contents": "Is there any tool to draw ER diagram from SQL schema file...\n\n\nno other groups are replying.....\n\nIs there any tool to draw ER diagram from SQL schema file...no other groups are replying.....", "msg_date": "Thu, 13 Mar 2008 18:54:18 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "ER diagram tool" }, { "msg_contents": "am Thu, dem 13.03.2008, um 18:54:18 +0530 mailte sathiya psql folgendes:\n> Is there any tool to draw ER diagram from SQL schema file...\n> \n\n14:31 < akretschmer> ??erd\n14:31 < rtfm_please> For information about erd\n14:31 < rtfm_please> see http://druid.sf.net/\n14:31 < rtfm_please> or http://schemaspy.sourceforge.net/\n14:31 < rtfm_please> or http://uml.sourceforge.net/index.php\n\nlast but not least: pencil and paper...\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Thu, 13 Mar 2008 14:32:50 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ER diagram tool" }, { "msg_contents": "On 14/03/2008, sathiya psql <[email protected]> wrote:\n> no other groups are replying.....\nYou waited for a *whole* *hour* before deciding to cross-post?\nWow.\n\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Fri, 14 Mar 2008 08:43:34 +1300", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ER diagram tool" }, { "msg_contents": "ERStudio\nToad Data Modeller\n\nAnd you might try searching sourceforge or freshmeat.\n\nsathiya psql wrote:\n> Is there any tool to draw ER diagram from SQL schema file...\n> \n> \n> no other groups are replying.....\n\n-- \nEasyflex diensten b.v.\nAcaciastraat 16\n4921 MA MADE\nT: 0162 - 690410\nF: 0162 - 690419\nE: [email protected]\nW: http://www.easyflex.nl\n", "msg_date": "Fri, 14 Mar 2008 11:20:52 +0100", "msg_from": "Jurgen Haan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ER diagram tool" }, { "msg_contents": ">\n> 14:31 < rtfm_please> For information about erd\n> 14:31 < rtfm_please> see http://druid.sf.net/\n> 14:31 < rtfm_please> or http://schemaspy.sourceforge.net/\n\n\nA very great Thanks.........\n\nSchemaSpy drawn ER diagram by referring my database...\nit done a very good job....\n\nThanks a lot GUYZZZZZZZZ.......\n\n\n> <http://schemaspy.sourceforge.net/>\n> 14:31 < rtfm_please> or http://uml.sourceforge.net/index.php\n>\n>\n\n14:31 < rtfm_please> For information about erd\n14:31 < rtfm_please> see http://druid.sf.net/\n14:31 < rtfm_please> or http://schemaspy.sourceforge.net/A very great Thanks.........SchemaSpy drawn ER diagram by referring my database...\nit done a very good job....Thanks a lot GUYZZZZZZZZ....... \n\n14:31 < rtfm_please> or http://uml.sourceforge.net/index.php", "msg_date": "Fri, 14 Mar 2008 17:05:08 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ER diagram tool" } ]
[ { "msg_contents": "Bonie++ benchmarks below.\n\nI believe the the Perc 5/i Raid 10 mode is actually a span of mirrors, rather than the expected stripe of mirrors we should expect from 1+0, and that this is the reason for the shitty performance.\n\n\nRAID 5\n======\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nway5a 32096M 55354 97 201375 59 109586 23 59934 97 427541 33 767.9 1\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\nway5a,32096M,55354,97,201375,59,109586,23,59934,97,427541,33,767.9,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\nRAID 10\n=======\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nWay5a 32096M 53479 99 131640 33 66718 10 58225 95 339287 25 699.1 1\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\nWay5a,32096M,53479,99,131640,33,66718,10,58225,95,339287,25,699.1,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\n\n----- Original Message ----\n> From: Guillaume Smet <[email protected]>\n> To: Glyn Astill <[email protected]>\n> Cc: [email protected]\n> Sent: Thursday, 13 March, 2008 3:58:41 PM\n> Subject: Re: [PERFORM] Recomendations on raid controllers raid 1+0\n> \n> Glyn,\n> \n> On Thu, Mar 13, 2008 at 1:33 PM, Glyn Astill wrote:\n> > I'm looking at switching out the perc5i (lsi megaraid) cards from our\n> > Dell 2950s for something else as they're crap at raid 10.\n> \n> Do you have numbers? Perc 5/i cards perform quite well for us (we have\n> a 8 disks RAID 10 in a 2900 server with the traditional Perc 5/i).\n> \n> --\n> Guillaume\n\n\n ___________________________________________________________ \nRise to the challenge for Sport Relief with Yahoo! 
For Good \n\nhttp://uk.promotions.yahoo.com/forgood/\n\n", "msg_date": "Thu, 13 Mar 2008 16:10:49 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recomendations on raid controllers raid 1+0" }, { "msg_contents": "Glyn Astill wrote:\n> Bonie++ benchmarks below.\n> \n> I believe the the Perc 5/i Raid 10 mode is actually a span of mirrors, rather than the expected stripe of mirrors we should expect from 1+0, and that this is the reason for the shitty performance.\n> \n\nCould you build three RAID-1 mirrors on the device, and then stripe that \nusing software RAID-0?\n\nThanks\nLeigh\n\n> \n> RAID 5\n> ======\n> Version 1.03 ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> way5a 32096M 55354 97 201375 59 109586 23 59934 97 427541 33 767.9 1\n> ------Sequential Create------ --------Random Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> way5a,32096M,55354,97,201375,59,109586,23,59934,97,427541,33,767.9,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> \n> RAID 10\n> =======\n> Version 1.03 ------Sequential Output------ --Sequential Input- --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> Way5a 32096M 53479 99 131640 33 66718 10 58225 95 339287 25 699.1 1\n> ------Sequential Create------ --------Random Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> Way5a,32096M,53479,99,131640,33,66718,10,58225,95,339287,25,699.1,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> \n> \n> ----- Original Message ----\n>> From: Guillaume Smet <[email protected]>\n>> To: Glyn Astill <[email protected]>\n>> Cc: [email protected]\n>> Sent: Thursday, 13 March, 2008 3:58:41 PM\n>> Subject: Re: [PERFORM] Recomendations on raid controllers raid 1+0\n>>\n>> Glyn,\n>>\n>> On Thu, Mar 13, 2008 at 1:33 PM, Glyn Astill wrote:\n>>> I'm looking at switching out the perc5i (lsi megaraid) cards from our\n>>> Dell 2950s for something else as they're crap at raid 10.\n>> Do you have numbers? Perc 5/i cards perform quite well for us (we have\n>> a 8 disks RAID 10 in a 2900 server with the traditional Perc 5/i).\n>>\n>> --\n>> Guillaume\n> \n> \n> ___________________________________________________________ \n> Rise to the challenge for Sport Relief with Yahoo! 
For Good \n> \n> http://uk.promotions.yahoo.com/forgood/\n> \n> \n", "msg_date": "Fri, 14 Mar 2008 10:14:46 +1100", "msg_from": "Leigh Dyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recomendations on raid controllers raid 1+0" }, { "msg_contents": "On Thu, Mar 13, 2008 at 5:14 PM, Leigh Dyer <[email protected]> wrote:\n> Glyn Astill wrote:\n> > Bonie++ benchmarks below.\n> >\n> > I believe the the Perc 5/i Raid 10 mode is actually a span of mirrors, rather than the expected stripe of mirrors we should expect from 1+0, and that this is the reason for the shitty performance.\n> >\n>\n> Could you build three RAID-1 mirrors on the device, and then stripe that\n> using software RAID-0?\n\nThat can be a useful option. You'd have to test it on your setup, but\nRAID-0 is dead simple, so letting the kernel handle the RAID-0 part is\npretty low cost. Especially if you've got lots of CPU power, like\nmany of today's servers.\n", "msg_date": "Thu, 13 Mar 2008 23:53:41 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recomendations on raid controllers raid 1+0" } ]
[ { "msg_contents": "Do CREATE TEMP TABLE table have any special treatment regarding eliding \nsync operations\nor deferring creation of disk files in the case where memory pressure \ndoes not require a spill?\n\nJames\n\n", "msg_date": "Thu, 13 Mar 2008 22:08:39 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": true, "msg_subject": "temp tables" }, { "msg_contents": "James Mansion <[email protected]> writes:\n> Do CREATE TEMP TABLE table have any special treatment regarding eliding \n> sync operations\n\nYes; neither fsync nor WAL-writing is done for temp tables.\n\n> or deferring creation of disk files in the case where memory pressure \n> does not require a spill?\n\nNo. The trouble with doing something like that is we might be forced to\nreport an out-of-disk-space failure at some quite unintuitive point,\nlike during a SELECT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Mar 2008 20:01:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: temp tables " } ]
[ { "msg_contents": "On a database (PostgreSQL 8.2.4 on 64-bit Linux 2.6.18 on 8 AMD Opterons)\r\nthat is under high load, I observe the following:\r\n\r\n- About 200 database sessions concurrently issue queries, most of them small,\r\n but I have found one that touches 38000 table and index blocks.\r\n- \"vmstat\" shows that CPU time is divided between \"idle\" and \"iowait\",\r\n with user and sys time practically zero.\r\n- the run queue is short, but the blocked queue (uninterruptible sleep) is around 10.\r\n- Many context switches are reported (over hundred per second).\r\n- \"sar\" says that the disk with the database is on 100% of its capacity.\r\n Storage is on a SAN box.\r\n\r\nQueries that normally take seconds at most require up to an hour to finish.\r\n\r\nI ran \"lsof -p\" on a backend running the big query mentioned above, and\r\nit does not use any temp files (work_mem = 20MB).\r\nThe query accesses only one table and its index.\r\n\r\nWhat puzzles me is the \"strace -tt\" output from that backend:\r\n\r\n13:44:58.263598 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:58.313448 semop(229382, 0x7fff482f6070, 1) = 0\r\n13:44:58.313567 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:58.442917 semop(229382, 0x7fff482f6070, 1) = 0\r\n13:44:58.443074 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:58.565313 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:58.682178 semop(229382, 0x7fff482f6070, 1) = 0\r\n13:44:58.682333 semop(393227, 0x7fff482f6480, 1) = 0\r\n13:44:58.807452 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:58.924425 semop(393227, 0x7fff482f6480, 1) = 0\r\n13:44:58.924727 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:59.045456 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:59.169011 semop(393227, 0x7fff482f6480, 1) = 0\r\n13:44:59.169226 semop(327689, 0x7fff482f64a0, 1) = 0\r\n[many more semops]\r\n13:44:59.602532 semop(327689, 0x7fff482f6070, 1) = 0\r\n13:44:59.602648 lseek(32, 120176640, SEEK_SET) = 120176640\r\n13:44:59.602742 read(32, \"{\\0\\0\\0xwv\\227\\1\\0\\0\\0\\320\\0\\350\\0\\0 \\3 \\237\\300\\1@\\236\\300\\1`\\235\\300\\1\"..., 8192) = 8192\r\n13:44:59.602825 semop(327689, 0x7fff482f64d0, 1) = 0\r\n13:44:59.602872 semop(393227, 0x7fff482f6080, 1) = 0\r\n13:44:59.602929 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:59.614559 semop(360458, 0x7fff482f6070, 1) = 0\r\n[many more semops]\r\n13:44:59.742103 semop(229382, 0x7fff482f64a0, 1) = 0\r\n13:44:59.742172 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:44:59.756526 select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)\r\n13:44:59.758096 semop(393227, 0x7fff482f6480, 1) = 0\r\n13:44:59.771655 semop(393227, 0x7fff482f6050, 1) = 0\r\n[hundreds of semops]\r\n13:45:14.339905 semop(393227, 0x7fff482f6050, 1) = 0\r\n13:45:14.466992 semop(360458, 0x7fff482f6070, 1) = 0\r\n13:45:14.467102 lseek(33, 332693504, SEEK_SET) = 332693504\r\n13:45:14.467138 read(33, \"{\\0\\0\\0\\210\\235\\351\\331\\1\\0\\0\\0\\204\\0010\\32\\360\\37\\3 \\340\\237 \\0\\320\\237 \\0\\300\\237 \\0\"..., 8192) = 8192\r\n13:45:14.599815 semop(163844, 0x7fff482f60a0, 1) = 0\r\n13:45:14.600006 lseek(32, 125034496, SEEK_SET) = 125034496\r\n13:45:14.600305 read(32, \"{\\0\\0\\0\\230\\257\\270\\227\\1\\0\\0\\0\\330\\0\\340\\0\\0 \\3 \\237\\300\\1@\\236\\300\\1`\\235\\300\\1\"..., 8192) = 8192\r\n13:45:14.600391 semop(163844, 0x7fff482f64d0, 1) = 0\r\n13:45:14.600519 semop(393227, 0x7fff482f6480, 1) = 0\r\n\r\nand so on. 
File 32 is the table, file 33 is the index.\r\n\r\nMany of the table and index blocks are probably already in shared memory\r\n(shared_buffers = 6GB) and don't have to be read from disk.\r\n\r\nMy questions:\r\n\r\nIs the long duration of the query caused by something else than I/O overload?\r\nWhat are the semops? Lightweight locks waiting for shared buffer?\r\nAre the lseek and read operations really that fast although the disk is on 100%?\r\n\r\nIs this normal behavior under overload or is something ill tuned?\r\n\r\nYours,\r\nLaurenz Albe\r\n", "msg_date": "Fri, 14 Mar 2008 14:50:05 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Lots of \"semop\" calls under load" }, { "msg_contents": "\"Albe Laurenz\" <[email protected]> writes:\n> On a database (PostgreSQL 8.2.4 on 64-bit Linux 2.6.18 on 8 AMD Opterons)\n> that is under high load, I observe the following:\n> ...\n> - \"vmstat\" shows that CPU time is divided between \"idle\" and \"iowait\",\n> with user and sys time practically zero.\n> - \"sar\" says that the disk with the database is on 100% of its capacity.\n\nIt sounds like you've simply saturated the disk's I/O bandwidth.\n(I've noticed that Linux isn't all that good about distinguishing \"idle\"\nfrom \"iowait\" --- more than likely you're really looking at 100% iowait.)\n\n> Storage is on a SAN box.\n\nWhat kind of SAN box? You're going to need something pretty beefy to\nkeep all those CPUs busy.\n\n> What puzzles me is the \"strace -tt\" output from that backend:\n\nSome low level of contention and consequent semops/context switches\nis to be expected. I don't think you need to worry if it's only\n100/sec. The sort of \"context swap storm\" behavior we've seen in\nthe past is in the tens of thousands of swaps/sec on hardware\nmuch weaker than what you have here --- if you were seeing one of\nthose I bet you'd be well above 100000 swaps/sec.\n\n> Are the lseek and read operations really that fast although the disk is on 100%?\n\nlseek is (should be) cheap ... it doesn't do any actual I/O. The\nread()s you're showing here were probably satisfied from kernel disk\ncache. If you look at a larger sample you'll find slower ones, I think.\nAnother thing to look for is slow writes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Mar 2008 12:34:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lots of \"semop\" calls under load " }, { "msg_contents": "Tom Lane wrote:\n>> On a database (PostgreSQL 8.2.4 on 64-bit Linux 2.6.18 on 8 AMD Opterons)\n>> that is under high load, I observe the following: ...\n>> - \"vmstat\" shows that CPU time is divided between \"idle\" and \"iowait\",\n>> with user and sys time practically zero.\n>> - \"sar\" says that the disk with the database is on 100% of its capacity.\n> \n> It sounds like you've simply saturated the disk's I/O bandwidth.\n> (I've noticed that Linux isn't all that good about distinguishing \"idle\"\n> from \"iowait\" --- more than likely you're really looking at \n> 100% iowait.)\n> \n>> Storage is on a SAN box.\n> \n> What kind of SAN box? You're going to need something pretty beefy to\n> keep all those CPUs busy.\n\nHP EVA 8100. 
Our storage people think that the observed I/O rate is not ok.\nThey mutter something about kernel disk cache configuration.\n\n>> What puzzles me is the \"strace -tt\" output from that backend:\n> \n> I don't think you need to worry [...]\n\nThanks for explaining the strace output.\n\nI am now more confident that the I/O overload is not the fault of PostgreSQL.\nMost execution plans look as good as they can be, so it's probably either\nthe I/O system or the application that's at fault.\n\nYours,\nLaurenz Albe\n", "msg_date": "Mon, 17 Mar 2008 10:11:48 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lots of \"semop\" calls under load " } ]
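For anyone who lands on this thread with the same symptoms, the conclusion reached here (storage saturated, the backends mostly waiting rather than working) can usually be confirmed from the shell before suspecting PostgreSQL. The commands below are a generic sketch, not output from Laurenz's system; the intervals and counts are arbitrary and iostat/sar require the sysstat package.

# Generic sketch for confirming an I/O-bound box (sysstat assumed installed).
iostat -x 5      # per-device await, avgqu-sz and %util; sustained %util near 100
                 # together with high await points at the storage itself
vmstat 5         # high "wa" (iowait) while "us" and "sy" stay near zero
sar -d -p 5 12   # the same per-device picture from sar

If the device is pegged while the CPUs stay idle, the semop calls and short lock waits in the strace output are a symptom, not the cause, exactly as Tom describes above.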