[
{
"msg_contents": "I've got a table with a few million rows, consisting of a single text\ncolumn. The average length is about 17 characters. For the sake of\nan experiment, I put a trigram index on that table. Unfortunately, %\nqueries without smallish LIMITs are ridiculously slow (they take\nlonger than an hour). A full table scan with a \"WHERE similarity(...)\n>= 0.4\" clause completes in just a couple of minutes. The queries\nonly select a few hundred rows, so an index scan has got a real chance\nto be faster than a sequential scan.\n\nAm I missing something? Or are trigrams just a poor match for my data\nset? Are the individual strings too long, maybe?\n\n(This is with PostgreSQL 8.2.0, BTW.)\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 15 Jan 2007 11:16:36 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_trgm performance"
},
{
"msg_contents": "On Mon, Jan 15, 2007 at 11:16:36AM +0100, Florian Weimer wrote:\n> Am I missing something? Or are trigrams just a poor match for my data\n> set? Are the individual strings too long, maybe?\n\nFWIW, I've seen the same results with 8.1.x.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 15 Jan 2007 13:58:34 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "Florian, Steinar,\n\nCould you try to see if the GIN implementation of pg_trgm is faster in\nyour cases?\n\nFlorian, instead of using WHERE similarity(...) > 0.4, you should use\nset_limit (SELECT set_limit(0.4);).\n\nI posted it on -patches and it is available here:\nhttp://people.openwide.fr/~gsmet/postgresql/pg_trgm_gin3.diff .\n\nThe patch is against HEAD but It applies on 8.2 without any problem.\n\nAfter applying this patch and installing pg_trgm.sql (you should\nuninstall pg_trgm before compiling using the old uninstall script),\nthen you can create:\nCREATE INDEX idx_table_word ON table USING gin(word gin_trgm_ops);\n\n17 characters is quite long so I'm not sure it will help you because\nit usually has to recheck a high number of rows due to the GIN\nimplementation but I'd like to know if it's faster or slower in this\ncase.\n\nIf your data are not private and you don't have the time to test it, I\ncan test it here without any problem.\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Sat, 24 Feb 2007 00:09:41 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "On Sat, Feb 24, 2007 at 12:09:41AM +0100, Guillaume Smet wrote:\n> Could you try to see if the GIN implementation of pg_trgm is faster in\n> your cases?\n\nI'm sorry, I can no longer remember where I needed pg_trgm. Simple testing of\nyour patch seems to indicate that the GiN version is about 65% _slower_ (18ms\nvs. 30ms) for a test data set I found lying around, but I remember that on\nthe data set I needed it, the GIST version was a lot slower than that (think\n3-400ms). The 18 vs. 30ms test is a random Amarok database, on 8.2.3\n(Debian).\n\nSorry I couldn't be of more help.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 24 Feb 2007 01:31:05 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "Hi Steinar,\n\nOn 2/24/07, Steinar H. Gunderson <[email protected]> wrote:\n> I'm sorry, I can no longer remember where I needed pg_trgm. Simple testing of\n> your patch seems to indicate that the GiN version is about 65% _slower_ (18ms\n> vs. 30ms) for a test data set I found lying around, but I remember that on\n> the data set I needed it, the GIST version was a lot slower than that (think\n> 3-400ms). The 18 vs. 30ms test is a random Amarok database, on 8.2.3\n> (Debian).\n\nCould you post EXPLAIN ANALYZE for both queries (after 2 or 3 runs)?\nAnd if you can provide EXPLAIN ANALYZE for a couple of searches\n(short length, medium length and long) in both cases, it could be nice\ntoo.\n\nThe GiN version is not selective enough currently compared to GiST. It\ngenerally finds the matching rows faster but it has a slower recheck\ncond so it's sometimes interesting (in my case) and sometimes not that\ninteresting (it seems to be your case).\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Sat, 24 Feb 2007 02:04:36 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "On Sat, Feb 24, 2007 at 02:04:36AM +0100, Guillaume Smet wrote:\n> Could you post EXPLAIN ANALYZE for both queries (after 2 or 3 runs)?\n\nGIST version, short:\n\namarok=# explain analyze select count(*) from tags where title % 'foo';\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=147.84..147.85 rows=1 width=0) (actual time=16.873..16.875 rows=1 loops=1)\n -> Bitmap Heap Scan on tags (cost=4.59..147.74 rows=41 width=0) (actual time=16.828..16.850 rows=7 loops=1)\n Recheck Cond: (title % 'foo'::text)\n -> Bitmap Index Scan on trgm_idx (cost=0.00..4.58 rows=41 width=0) (actual time=16.818..16.818 rows=7 loops=1)\n Index Cond: (title % 'foo'::text)\n Total runtime: 16.935 ms\n(6 rows)\n\nGiN version, short:\n\namarok=# explain analyze select count(*) from tags where title % 'foo';\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=151.89..151.90 rows=1 width=0) (actual time=30.197..30.199 rows=1 loops=1)\n -> Bitmap Heap Scan on tags (cost=8.64..151.79 rows=41 width=0) (actual time=5.555..30.157 rows=7 loops=1)\n Filter: (title % 'foo'::text)\n -> Bitmap Index Scan on trgm_idx (cost=0.00..8.63 rows=41 width=0) (actual time=2.857..2.857 rows=5555 loops=1)\n Index Cond: (title % 'foo'::text)\n Total runtime: 30.292 ms\n(6 rows)\n\n\nGIST version, medium:\n\namarok=# explain analyze select count(*) from tags where title % 'chestnuts roasting on an 0pen fire';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=147.84..147.85 rows=1 width=0) (actual time=216.149..216.151 rows=1 loops=1)\n -> Bitmap Heap Scan on tags (cost=4.59..147.74 rows=41 width=0) (actual time=216.135..216.137 rows=1 loops=1)\n Recheck Cond: (title % 'chestnuts roasting on an 0pen fire'::text)\n -> Bitmap Index Scan on trgm_idx (cost=0.00..4.58 rows=41 width=0) (actual time=216.124..216.124 rows=1 loops=1)\n Index Cond: (title % 'chestnuts roasting on an 0pen fire'::text)\n Total runtime: 216.214 ms\n(6 rows)\n\n\namarok=# explain analyze select count(*) from tags where title % 'chestnuts roasting on an 0pen fire';\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=151.89..151.90 rows=1 width=0) (actual time=156.310..156.312 rows=1 loops=1)\n -> Bitmap Heap Scan on tags (cost=8.64..151.79 rows=41 width=0) (actual time=156.205..156.299 rows=1 loops=1)\n Filter: (title % 'chestnuts roasting on an 0pen fire'::text)\n -> Bitmap Index Scan on trgm_idx (cost=0.00..8.63 rows=41 width=0) (actual time=155.748..155.748 rows=36 loops=1)\n Index Cond: (title % 'chestnuts roasting on an 0pen fire'::text)\n Total runtime: 156.376 ms\n(6 rows)\n\n\nGIST version, long:\n\namarok=# explain analyze select count(*) from tags where title % 'Donaueschingen (Peter Kruders Donaudampfschifffahrtsgesellschaftskapitänskajütenremix)'; \n;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=147.84..147.85 rows=1 width=0) (actual time=597.115..597.117 rows=1 loops=1)\n -> Bitmap Heap Scan on tags (cost=4.59..147.74 rows=41 width=0) (actual time=597.102..597.104 rows=1 loops=1)\n Recheck Cond: (title % 
'Donaueschingen (Peter Kruders Donaudampfschifffahrtsgesellschaftskapitänskajütenremix)'::text)\n -> Bitmap Index Scan on trgm_idx (cost=0.00..4.58 rows=41 width=0) (actual time=597.093..597.093 rows=1 loops=1)\n Index Cond: (title % 'Donaueschingen (Peter Kruders Donaudampfschifffahrtsgesellschaftskapitänskajütenremix)'::text)\n Total runtime: 597.173 ms\n(6 rows)\n\n\nGiN version, long:\n\namarok=# explain analyze select count(*) from tags where title % 'Donaueschingen (Peter Kruders Donaudampfschifffahrtsgesellschaftskapitänskajütenremix)'; \n;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=151.89..151.90 rows=1 width=0) (actual time=435.789..435.791 rows=1 loops=1)\n -> Bitmap Heap Scan on tags (cost=8.64..151.79 rows=41 width=0) (actual time=435.777..435.779 rows=1 loops=1)\n Filter: (title % 'Donaueschingen (Peter Kruders Donaudampfschifffahrtsgesellschaftskapitänskajütenremix)'::text)\n -> Bitmap Index Scan on trgm_idx (cost=0.00..8.63 rows=41 width=0) (actual time=435.729..435.729 rows=1 loops=1)\n Index Cond: (title % 'Donaueschingen (Peter Kruders Donaudampfschifffahrtsgesellschaftskapitänskajütenremix)'::text)\n Total runtime: 435.851 ms\n(6 rows)\n\n\nSo, the GiN version seems to be a bit faster for long queries, but it's still\ntoo slow -- in fact, _unindexed_ versions give 141ms, 342ms, 725ms for these\nthree queries, so for the longer queries, the gain is only about a factor\ntwo. (By the way, I would like to stress that this is not my personal music\ncollection! :-P)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 24 Feb 2007 11:07:37 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "On 2/24/07, Steinar H. Gunderson <[email protected]> wrote:\n\nThanks for your time.\n\n> GiN version, short:\n> -> Bitmap Heap Scan on tags (cost=8.64..151.79 rows=41 width=0) (actual time=5.555..30.157 rows=7 loops=1)\n> Filter: (title % 'foo'::text)\n> -> Bitmap Index Scan on trgm_idx (cost=0.00..8.63 rows=41 width=0) (actual time=2.857..2.857 rows=5555 loops=1)\n> Index Cond: (title % 'foo'::text)\n\nThis is currently the worst case in the gist - gin comparison because\nin the index scan, gin version doesn't have the length of the indexed\nstring. So it returns a lot of rows which have every trigram of your\nsearch string but has in fact a low similarity due to the length of\nthe indexed string (5555 rows -> 7 rows).\nIt cannot be fixed at the moment due to the way GIN indexes work.\n\n> So, the GiN version seems to be a bit faster for long queries, but it's still\n> too slow -- in fact, _unindexed_ versions give 141ms, 342ms, 725ms for these\n> three queries, so for the longer queries, the gain is only about a factor\n> two. (By the way, I would like to stress that this is not my personal music\n> collection! :-P)\n\nThe fact is that pg_trgm is designed to index words and not to index\nlong sentences. I'm not that surprised it's slow in your case.\n\nIt's also my case but following the instructions in README.pg_trgm I\ncreated a dictionary of words using tsearch2 (stat function) and I use\npg_trgm on this dictionary to find similar words in my dictionary.\n\nFor example, I rewrite the search:\nauberge cevenes\nas:\n(auberge | auberges | aubberge | auberg) & (ceven | cene | cevenol | cevennes)\nusing pg_trgm and my query can find Auberge des Cévennes (currently\nit's limited to the 4th most similar words but I can change it\neasily).\n\n--\nGuillaume\n",
"msg_date": "Mon, 26 Feb 2007 13:42:40 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "On Mon, 26 Feb 2007, Guillaume Smet wrote:\n\n> On 2/24/07, Steinar H. Gunderson <[email protected]> wrote:\n>\n> Thanks for your time.\n>\n>> GiN version, short:\n>> -> Bitmap Heap Scan on tags (cost=8.64..151.79 rows=41 width=0) \n>> (actual time=5.555..30.157 rows=7 loops=1)\n>> Filter: (title % 'foo'::text)\n>> -> Bitmap Index Scan on trgm_idx (cost=0.00..8.63 rows=41 \n>> width=0) (actual time=2.857..2.857 rows=5555 loops=1)\n>> Index Cond: (title % 'foo'::text)\n>\n> This is currently the worst case in the gist - gin comparison because\n> in the index scan, gin version doesn't have the length of the indexed\n> string. So it returns a lot of rows which have every trigram of your\n> search string but has in fact a low similarity due to the length of\n> the indexed string (5555 rows -> 7 rows).\n> It cannot be fixed at the moment due to the way GIN indexes work.\n>\n>> So, the GiN version seems to be a bit faster for long queries, but it's \n>> still\n>> too slow -- in fact, _unindexed_ versions give 141ms, 342ms, 725ms for \n>> these\n>> three queries, so for the longer queries, the gain is only about a factor\n>> two. (By the way, I would like to stress that this is not my personal music\n>> collection! :-P)\n>\n> The fact is that pg_trgm is designed to index words and not to index\n> long sentences. I'm not that surprised it's slow in your case.\n>\n> It's also my case but following the instructions in README.pg_trgm I\n> created a dictionary of words using tsearch2 (stat function) and I use\n> pg_trgm on this dictionary to find similar words in my dictionary.\n>\n> For example, I rewrite the search:\n> auberge cevenes\n> as:\n> (auberge | auberges | aubberge | auberg) & (ceven | cene | cevenol | \n> cevennes)\n> using pg_trgm and my query can find Auberge des C?vennes (currently\n> it's limited to the 4th most similar words but I can change it\n> easily).\n\nDid you rewrite query manually or use rewrite feature of tsearch2 ?\n\n>\n> --\n> Guillaume\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Mon, 26 Feb 2007 15:53:16 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "On 2/26/07, Oleg Bartunov <[email protected]> wrote:\n> Did you rewrite query manually or use rewrite feature of tsearch2 ?\n\nCurrently, it's manual. I perform a pg_trgm query for each word of the\nsearch words (a few stop words excluded) and I generate the ts_query\nwith the similar words instead of using the search words.\nIs there any benefit of using rewrite() in this case?\n\n--\nGuillaume\n",
"msg_date": "Mon, 26 Feb 2007 13:58:20 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
},
{
"msg_contents": "On Mon, 26 Feb 2007, Guillaume Smet wrote:\n\n> On 2/26/07, Oleg Bartunov <[email protected]> wrote:\n>> Did you rewrite query manually or use rewrite feature of tsearch2 ?\n>\n> Currently, it's manual. I perform a pg_trgm query for each word of the\n> search words (a few stop words excluded) and I generate the ts_query\n> with the similar words instead of using the search words.\n> Is there any benefit of using rewrite() in this case?\n\nnot really, just a matter of possible interesting architectural design.\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Mon, 26 Feb 2007 16:10:17 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm performance"
}
]
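The thread above boils down to a few steps: install the trigram opclasses, build the index, set the similarity threshold, and query with the % operator. Below is a minimal sketch of that setup, assuming a hypothetical table words(word text) standing in for the tables discussed; on the 8.2 release used in the thread the GIN opclass required Guillaume's patch, while from 8.4 onwards both opclasses ship with the pg_trgm contrib module and on 9.1+ it can be installed with CREATE EXTENSION.

    CREATE EXTENSION pg_trgm;  -- 9.1+; earlier releases run the contrib pg_trgm.sql script instead

    -- The two index flavours compared in the thread:
    CREATE INDEX words_word_trgm_gist ON words USING gist (word gist_trgm_ops);
    CREATE INDEX words_word_trgm_gin  ON words USING gin  (word gin_trgm_ops);

    -- The % operator uses the threshold set by set_limit(), so no separate
    -- "similarity(...) >= 0.4" filter is needed for the index to be usable:
    SELECT set_limit(0.4);
    SELECT word, similarity(word, 'foo') AS sml
    FROM words
    WHERE word % 'foo'
    ORDER BY sml DESC;

As Guillaume notes further down, pg_trgm is aimed at individual words rather than whole sentences or long strings, so for longer values the approach that scaled for him was to index a dictionary of distinct words and rewrite each search into a tsquery of the most similar dictionary words.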
[
{
"msg_contents": "Does anyone here have positive experiences to relate running\nfiberchannel cards on FreeBSD on AMD64? The last time I tried it was\nwith FreeBSD 4 about 2 years ago and none of the cards I tried could\ncross the 32bit memory barrier (since they were all actually 32bit\ncards despite plugging into a 64bit PCI bus).\n\nAndrew\n\n",
"msg_date": "15 Jan 2007 09:55:47 -0800",
"msg_from": "\"Andrew Hammond\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FiberChannel cards for FreeBSD on AMD64"
}
]
[
{
"msg_contents": "\n \nHi,\n\n\tCan anybody tell me how can I implement data Caching in the\nshared memory using PostgreSQL.\n\n\tFor one of the projects we are using Postgres version 8.0.3 and\nwere planning to support table partitioning in order to improve the DB\nquery/update performance but, it will be very much helpful if anybody\ncan tell me whether data cache is possible in Postgres.\n\nWe also, need to know if the DB view is kept in the DB Managers program\nmemory or in the shared memory of the processor. If it is in the shared\nmemory, we only need to know the shared memory address/name so that we\ncan access. If not in shared memory then we need know whether it is\npossible to make the postgress to store the DB view in shared memory.\n\nThanks in Advance,\nRam. \n\n\nThe information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. \n\nWARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email.\n \nwww.wipro.com\n",
"msg_date": "Tue, 16 Jan 2007 14:13:21 +0530",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Caching in PostgreSQL"
},
{
"msg_contents": "[email protected] wrote:\n> \n> Hi,\n> \n> \tCan anybody tell me how can I implement data Caching in the\n> shared memory using PostgreSQL.\n\nPostgreSQL, like most other DBMS, caches data pages in shared memory. \nWhat exactly are you trying to accomplish? Are you having a performance \nproblem?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 16 Jan 2007 09:33:40 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching in PostgreSQL"
},
{
"msg_contents": "[email protected] writes:\n> \tCan anybody tell me how can I implement data Caching in the\n> shared memory using PostgreSQL.\n\nPostgreSQL already does that.\n\nImplementing this functionality is rather tricky: Between version 7.4\nand now, it has seen *massive* change which has required a great deal\nof effort on the part of various developers.\n\nReimplementing it without a well-tested, well-thought-out alternative\nproposal isn't likely to happen...\n-- \nlet name=\"cbbrowne\" and tld=\"linuxfinances.info\" in name ^ \"@\" ^ tld;;\nhttp://linuxfinances.info/info/emacs.html\n\"Just because it's free doesn't mean you can afford it.\" -- Unknown\n",
"msg_date": "Tue, 16 Jan 2007 11:10:29 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching in PostgreSQL"
}
]
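Heikki's and Chris's replies make the point that the cache being asked for already exists: PostgreSQL keeps recently used data pages in its shared_buffers area and the operating system caches file blocks on top of that, so an application never needs to attach to that memory itself. A small way to watch the buffer cache at work is the block I/O statistics view sketched below; on recent releases these counters are collected by default, while on something as old as the 8.0.x mentioned above the stats_block_level setting has to be enabled first.

    -- Per table: pages found in shared buffers (heap_blks_hit) versus pages
    -- that had to be requested from the kernel or disk (heap_blks_read).
    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(100.0 * heap_blks_hit
                 / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS hit_pct
    FROM pg_statio_user_tables
    ORDER BY heap_blks_hit + heap_blks_read DESC
    LIMIT 10;

A low hit percentage on a heavily used table usually points at shared_buffers sizing or at the queries themselves, not at a need for a second cache bolted onto the application.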
[
{
"msg_contents": "\nHi Heikki Linnakangas,\n\n\tThanks for yoru kind response. \n\n\tWe were looking on how to improve the performance of our\napplication which is using PostgreSQL as backend. If postgreSQL is\nsupporting data page caching in the shared memory then we wanted to\ndesign our application to read/write using the shared memory rather than\naccessing the DB everytime so that, it will improve the performance of\nour system.\n\n\tIf this is possible can you please tell us how can we implement\nthe same. Any idea on the same is of very much help.\n\nThanks in Advance,\nRamachandra B.S.\n\n-----Original Message-----\nFrom: Heikki Linnakangas [mailto:[email protected]] On Behalf Of Heikki\nLinnakangas\nSent: Tuesday, January 16, 2007 3:04 PM\nTo: Ramachandra Bhaskaram (WT01 - IP-Multimedia Carrier & Ent Networks)\nCc: [email protected]\nSubject: Re: [PERFORM] Caching in PostgreSQL\n\[email protected] wrote:\n> \n> Hi,\n> \n> \tCan anybody tell me how can I implement data Caching in the\nshared \n> memory using PostgreSQL.\n\nPostgreSQL, like most other DBMS, caches data pages in shared memory. \nWhat exactly are you trying to accomplish? Are you having a performance\nproblem?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n\nThe information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. \n\nWARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email.\n \nwww.wipro.com\n",
"msg_date": "Tue, 16 Jan 2007 15:13:09 +0530",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Caching in PostgreSQL"
},
{
"msg_contents": "[email protected] wrote:\n> \tWe were looking on how to improve the performance of our\n> application which is using PostgreSQL as backend. If postgreSQL is\n> supporting data page caching in the shared memory then we wanted to\n> design our application to read/write using the shared memory rather than\n> accessing the DB everytime so that, it will improve the performance of\n> our system.\n\nThat's a bad idea. Just design your database schema with performance in \nmind, and use PostgreSQL normally with SQL queries. If you must, use a \ngeneral-purpose caching library in your application, instead of trying \nto peek into PostgreSQL internals.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 16 Jan 2007 10:13:18 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching in PostgreSQL"
},
{
"msg_contents": "I am using memcached (http://www.danga.com/memcached/) to cache Postgres\nADODB recordsets.\nIt's very efficient but has to be implemented in your own application.\n\n\nOn 1/16/07, Heikki Linnakangas <[email protected]> wrote:\n>\n> [email protected] wrote:\n> > We were looking on how to improve the performance of our\n> > application which is using PostgreSQL as backend. If postgreSQL is\n> > supporting data page caching in the shared memory then we wanted to\n> > design our application to read/write using the shared memory rather than\n> > accessing the DB everytime so that, it will improve the performance of\n> > our system.\n>\n> That's a bad idea. Just design your database schema with performance in\n> mind, and use PostgreSQL normally with SQL queries. If you must, use a\n> general-purpose caching library in your application, instead of trying\n> to peek into PostgreSQL internals.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n-- \nDavid LEVY aka Selenium\nZlio.com & Col.fr\nBlog : http://www.davidlevy.org\nZlioShop : http://shop.davidlevy.org\n\nI am using memcached (http://www.danga.com/memcached/) to cache Postgres ADODB recordsets.It's very efficient but has to be implemented in your own application.\nOn 1/16/07, Heikki Linnakangas <[email protected]> wrote:\[email protected] wrote:> We were looking on how to improve the performance of our> application which is using PostgreSQL as backend. If postgreSQL is\n> supporting data page caching in the shared memory then we wanted to> design our application to read/write using the shared memory rather than> accessing the DB everytime so that, it will improve the performance of\n> our system.That's a bad idea. Just design your database schema with performance inmind, and use PostgreSQL normally with SQL queries. If you must, use ageneral-purpose caching library in your application, instead of trying\nto peek into PostgreSQL internals.-- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives? http://archives.postgresql.org-- David LEVY aka Selenium\nZlio.com & Col.frBlog : http://www.davidlevy.orgZlioShop : http://shop.davidlevy.org",
"msg_date": "Tue, 16 Jan 2007 12:29:14 +0200",
"msg_from": "\"David Levy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Caching in PostgreSQL"
}
]
[
{
"msg_contents": "Hi,\n\nCan anybody help me out to get following info of all the tables in a\ndatabase.\n\ntable_len\ntuple_count\ntuple_len\ntuple_percent\ndead_tuple_count\ndead_tuple_len\ndead_tuple_percent\nfree_space\nfree_percent\n\nThanks\nGauri\n\nHi,\nCan anybody help me out to get following info of all the tables in a database.\n \ntable_len\ntuple_count\ntuple_len\ntuple_percent\ndead_tuple_count\ndead_tuple_len\ndead_tuple_percent\nfree_space\nfree_percent\n \nThanks \nGauri",
"msg_date": "Tue, 16 Jan 2007 19:18:17 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table Size"
},
{
"msg_contents": "Gauri Kanekar wrote:\n> Hi,\n> \n> Can anybody help me out to get following info of all the tables in a\n> database.\n\n1. Have you read up on the information schema and system catalogues?\nhttp://www.postgresql.org/docs/8.2/static/catalogs.html\nhttp://www.postgresql.org/docs/8.2/static/catalogs.html\n\n\n> table_len\n> tuple_count\n> tuple_len\n\n2. Not sure what the difference is between \"len\" and \"count\" here.\n\n> tuple_percent\n\n3. Or what this \"percent\" refers to.\n\n> dead_tuple_count\n> dead_tuple_len\n> dead_tuple_percent\n> free_space\n> free_percent\n\n4. You might find some of the stats tables useful too:\nhttp://www.postgresql.org/docs/8.2/static/monitoring-stats.html\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 16 Jan 2007 19:24:32 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Size"
},
{
"msg_contents": "Richard Huxton a �crit :\n> Gauri Kanekar wrote:\n>> Hi,\n>>\n>> Can anybody help me out to get following info of all the tables in a\n>> database.\n> \n> 1. Have you read up on the information schema and system catalogues?\n> http://www.postgresql.org/docs/8.2/static/catalogs.html\n> http://www.postgresql.org/docs/8.2/static/catalogs.html\n> \n> \n>> table_len\n>> tuple_count\n>> tuple_len\n> \n> 2. Not sure what the difference is between \"len\" and \"count\" here.\n> \n\ntuple_count is the number of live tuples. tuple_len is the length (in\nbytes) for all live tuples.\n\n>> tuple_percent\n> \n> 3. Or what this \"percent\" refers to.\n> \n\ntuple_percent is % of live tuple from all tuples in a table.\n\n>> dead_tuple_count\n>> dead_tuple_len\n>> dead_tuple_percent\n>> free_space\n>> free_percent\n> \n> 4. You might find some of the stats tables useful too:\n> http://www.postgresql.org/docs/8.2/static/monitoring-stats.html\n> \n\nActually, these columns refer to the pgstattuple contrib module. This\ncontrib module must be installed on the server (how you install it\ndepends on your distro). Then, you have to create the functions on you\ndatabase :\n psql -f /path/to/pgstattuple.sql your_database\n\nRight after that, you can query these columns :\n\ntest=> \\x\nExpanded display is on.\ntest=> SELECT * FROM pgstattuple('pg_catalog.pg_proc');\n-[ RECORD 1 ]------+-------\ntable_len | 458752\ntuple_count | 1470\ntuple_len | 438896\ntuple_percent | 95.67\ndead_tuple_count | 11\ndead_tuple_len | 3157\ndead_tuple_percent | 0.69\nfree_space | 8932\nfree_percent | 1.95\n\nExample from README.pgstattuple.\n\nRegards.\n\n\n-- \nGuillaume.\n<!-- http://abs.traduc.org/\n http://lfs.traduc.org/\n http://docs.postgresqlfr.org/ -->\n",
"msg_date": "Wed, 17 Jan 2007 07:37:25 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Size"
}
]
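Guillaume's example runs pgstattuple() against a single table, while the original question asked for those figures for every table in a database. Once the contrib functions are installed, a query along the following lines covers all ordinary tables in the public schema; treat it as a sketch, and note that pgstattuple reads every block of each table it inspects, so it is roughly as expensive as sequentially scanning everything it touches.

    SELECT c.relname,
           (pgstattuple(c.oid)).*
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'           -- ordinary tables only
      AND n.nspname = 'public'
    ORDER BY c.relname;

The expanded columns are exactly the ones listed in the question (table_len, tuple_count, tuple_len, tuple_percent, dead_tuple_count, dead_tuple_len, dead_tuple_percent, free_space, free_percent), since they come straight from pgstattuple's output record.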
[
{
"msg_contents": "Running PostgreSQL 8.2.1 on Win32. The query planner is choosing a seq\nscan over index scan even though index scan is faster (as shown by\ndisabling seqscan). Table is recently analyzed and row count estimates\nseem to be in the ballpark.\n\nAnother tidbit - I haven't done a \"vacuum full\" ever, although I \"vacuum\nanalyze\" regularly (and autovacuum). I recently noticed that the PG\ndata drive is 40% fragmented (NTFS). Could that be making the seqscan\nslower than it should be? Regardless of the fragmentations affect on\nperformance, is the query planner making a good decision here?\n\n\nSOME CONFIGURATION PARAMS\neffective_cache_size=1000MB\nrandom_page_cost=3\ndefault_statistics_target=50\nshared_buffers=400MB\ntemp_buffers=10MB\nwork_mem=10MB\ncheckpoint_segments=12\n\n\nQUERY\nselect merchant_dim_id, \n dcms_dim_id,\n sum(success) as num_success, \n sum(failed) as num_failed, \n count(*) as total_transactions,\n (sum(success) * 1.0 / count(*)) as success_rate \nfrom transaction_facts \nwhere transaction_date >= '2007-1-16' \nand transaction_date < '2007-1-16 15:20' \ngroup by merchant_dim_id, dcms_dim_id;\n\n\nEXPLAIN ANALYZE (enable_seqscan=true)\nHashAggregate (cost=339573.01..340089.89 rows=15904 width=16) (actual\ntime=140606.593..140650.573 rows=10549 loops=1)\n -> Seq Scan on transaction_facts (cost=0.00..333928.25 rows=322558\n width=16) (actual time=19917.957..140036.910 rows=347434 loops=1)\n Filter: ((transaction_date >= '2007-01-16 00:00:00'::timestamp\n without time zone) AND (transaction_date < '2007-01-16\n 15:20:00'::timestamp without time zone))\nTotal runtime: 140654.813 ms\n\n\nEXPLAIN ANALYZE (enable_seqscan=false)\nHashAggregate (cost=379141.53..379658.41 rows=15904 width=16) (actual\ntime=3720.838..3803.748 rows=10549 loops=1)\n -> Bitmap Heap Scan on transaction_facts (cost=84481.80..373496.76\n rows=322558 width=16) (actual time=244.568..3133.741 rows=347434\n loops=1)\n Recheck Cond: ((transaction_date >= '2007-01-16\n 00:00:00'::timestamp without time zone) AND (transaction_date <\n '2007-01-16 15:20:00'::timestamp without time zone))\n -> Bitmap Index Scan on transaction_facts_transaction_date_idx \n (cost=0.00..84401.16 rows=322558 width=0) (actual\n time=241.994..241.994 rows=347434 loops=1)\n Index Cond: ((transaction_date >= '2007-01-16\n 00:00:00'::timestamp without time zone) AND\n (transaction_date < '2007-01-16 15:20:00'::timestamp\n without time zone))\nTotal runtime: 3810.795 ms\n",
"msg_date": "Tue, 16 Jan 2007 16:23:00 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> Running PostgreSQL 8.2.1 on Win32. The query planner is choosing a seq\n> scan over index scan even though index scan is faster (as shown by\n> disabling seqscan). Table is recently analyzed and row count estimates\n> seem to be in the ballpark.\n\nTry reducing random_page_cost a bit. Keep in mind that you are probably\nmeasuring a fully-cached situation here, if you repeated the test case.\nIf your database fits into memory reasonably well then that's fine and\nyou want to optimize for that case ... but otherwise you may find\nyourself pessimizing the actual behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2007 16:39:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan "
},
{
"msg_contents": "Thanks Tom! Reducing random_page_cost to 2 did the trick for this\nquery. It now favors the index scan.\n\nEven if this is a cached situation, I wouldn't expect a difference of 3\nmin vs 3 seconds. \n\nEven if unrelated, do you think disk fragmentation would have negative\neffects? Is it worth trying to defragment the drive on a regular basis\nin Windows?\n\nJeremy Haile\n\n\nOn Tue, 16 Jan 2007 16:39:07 -0500, \"Tom Lane\" <[email protected]> said:\n> \"Jeremy Haile\" <[email protected]> writes:\n> > Running PostgreSQL 8.2.1 on Win32. The query planner is choosing a seq\n> > scan over index scan even though index scan is faster (as shown by\n> > disabling seqscan). Table is recently analyzed and row count estimates\n> > seem to be in the ballpark.\n> \n> Try reducing random_page_cost a bit. Keep in mind that you are probably\n> measuring a fully-cached situation here, if you repeated the test case.\n> If your database fits into memory reasonably well then that's fine and\n> you want to optimize for that case ... but otherwise you may find\n> yourself pessimizing the actual behavior.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Jan 2007 17:20:53 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "On 1/16/07, Jeremy Haile <[email protected]> wrote:\n>\n> Even if unrelated, do you think disk fragmentation would have negative\n> effects? Is it worth trying to defragment the drive on a regular basis\n> in Windows?\n>\n\nOut of curiosity, is this table heavily updated or deleted from? Perhaps\nthere is an unfavorable \"correlation\" between the btree and data? Can you\ndump the results of\n\nselect attname, null_frac, avg_width, n_distinct, correlation from pg_stats\nwhere tablename = 'transaction_facts'\n\n\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/16/07, Jeremy Haile <[email protected]> wrote:\nEven if unrelated, do you think disk fragmentation would have negativeeffects? Is it worth trying to defragment the drive on a regular basisin Windows?Out of curiosity, is this table heavily updated or deleted from? Perhaps there is an unfavorable \"correlation\" between the btree and data? Can you dump the results of\nselect attname, null_frac, avg_width, n_distinct, correlation from pg_stats where tablename = 'transaction_facts'-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Tue, 16 Jan 2007 17:44:53 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "Hey Chad,\n\nThe table is heavily inserted and deleted from. Recently I had done a\nvery large delete.\n\nHere is the results of the query you sent me: (sorry it's hard to read)\n\n\"dcms_dim_id\";0;4;755;-0.00676181\n\"transaction_fact_id\";0;4;-1;-0.194694\n\"failed\";0;4;2;0.964946\n\"van16\";0;23;145866;0.00978649\n\"vendor_response\";0.9942;43;9;0.166527\n\"transaction_id\";0;4;-1;-0.199583\n\"transaction_date\";0;8;172593;-0.194848\n\"serial_number\";0.0434667;16;53311;0.0713039\n\"merchant_dim_id\";0;4;105;0.299335\n\"comment\";0.0052;29;7885;0.0219167\n\"archived\";0;1;2;0.84623\n\"response_code\";0.9942;4;3;0.905409\n\"transaction_source\";0;4;2;0.983851\n\"location_dim_id\";0;4;86;0.985384\n\"success\";0;4;2;0.981072\n\nJust curious - what does that tell us?\n\nJeremy Haile\n\nOn Tue, 16 Jan 2007 17:44:53 -0500, \"Chad Wagner\"\n<[email protected]> said:\n> On 1/16/07, Jeremy Haile <[email protected]> wrote:\n> >\n> > Even if unrelated, do you think disk fragmentation would have negative\n> > effects? Is it worth trying to defragment the drive on a regular basis\n> > in Windows?\n> >\n> \n> Out of curiosity, is this table heavily updated or deleted from? Perhaps\n> there is an unfavorable \"correlation\" between the btree and data? Can\n> you\n> dump the results of\n> \n> select attname, null_frac, avg_width, n_distinct, correlation from\n> pg_stats\n> where tablename = 'transaction_facts'\n> \n> \n> \n> \n> -- \n> Chad\n> http://www.postgresqlforums.com/\n",
"msg_date": "Tue, 16 Jan 2007 21:58:59 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "On 1/16/07, Jeremy Haile <[email protected]> wrote:\n>\n> The table is heavily inserted and deleted from. Recently I had done a\n> very large delete.\n\n\nThat's what I suspected.\n\nHere is the results of the query you sent me: (sorry it's hard to read)\n>\n> \"transaction_date\";0;8;172593;-0.194848\n>\nJust curious - what does that tell us?\n\n\nBased on the explain plan you posted earlier we learned the optimizer\nbelieved the query would return 322558 rows (and it was reasonably accurate,\ntoo) for a relatively small time frame (15 hours and 20 minutes).\n\n-> Seq Scan on transaction_facts (cost=0.00..333928.25 rows=322558\n width=16) (actual time=19917.957..140036.910 rows=347434 loops=1)\n\nBased on the information you just posted, the average row length is 156\nbytes.\n\n347434 rows * 156 bytes = 52MB (reasonable it could be held in your shared\nbuffers, which makes Tom's suggestion very plausible, the index scan may not\nbe cheaper -- because it is all cached)\n\n\nThe estimation of cost when you are requesting a \"range\" of data will\ninvolve the \"correlation\" factor, the correlation is defined as:\n\n\"Statistical correlation between physical row ordering and logical ordering\nof the column values. This ranges from -1 to +1. When the value is near -1\nor +1, an index scan on the column will be estimated to be cheaper than when\nit is near zero, due to reduction of random access to the disk. (This column\nis NULL if the column data type does not have a < operator.)\"\n\nWhich means that as correlation approaches zero (which it is -0.19, I would\ncall that zero) it represents that the physical ordering of the data (in the\ndata files) is such that a range scan of the btree index would result in\nmany scatter reads (which are slow). So the optimizer considers whether a\n\"scattered\" read will be cheaper than a sequential scan based on a few\nfactors: a) correlation [for ranges] b) size of table c) estimated\ncardinality [what does it think you are going to get back]. Based on those\nfactors it may choose either access path (sequential or index).\n\nOne of the reasons why the sequential scan is slower is because the\noptimizer doesn't know the data you are requesting is sitting in the cache\n(and it is VERY unlikely you have the entire table in cache, unless it is a\nheavily used table or very small table, which it's probably not). So once\nit decides a sequential scan is the best plan, it starts chugging away at\nthe disk and pulling in most of the pages in the table and throws away the\npages that do not meet your criteria.\n\nThe index scan is quicker (may be bogus, as Tom suggested) because the it\nstarts chugging away at the index and finds that many of the pages you are\ninterested in are cached (but it didn't know, you just got lucky!).\n\nIn practice, once you start pulling in 15% or more of the table it is often\ncheaper just to read the entire table in, rather than scatter reads + double\nI/O. Remember that an index access means I have to read the index PLUS the\ntable from disk, and a sequential scan just throws away the index and reads\nthe table from disk.\n\n\nI would suggest running explain analyze after restarting the database (that\nmay not be even good enough, because a large portion of the data file may be\nin the OS's buffer cache, hrm -- reboot? unmount?) 
and see how cheap that\nindex access path is.\n\nOne thing to be careful of here is that you really need to consider what is\nthe primary use of the table, and what are the major queries you will be\nlaunching against it. But you could improve the correlation by rebuilding\nthe table ordered by the transaction_date column, but it may screw up other\nrange scans. Another option is partitioning. I wouldn't do any of this\nstuff, until you find out the last tweak you made still holds true, give it\na few days, perhaps test it after a clean reboot of the server.\n\nLet me know if any of this is inaccurate folks, as I certainly don't profess\nto be an expert on the internals of PostgreSQL, but I believe it is accurate\nbased on my prior experiences with other database products. :)\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n",
"msg_date": "Tue, 16 Jan 2007 23:52:39 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "On Tue, 2007-01-16 at 21:58 -0500, Jeremy Haile wrote:\n> Hey Chad,\n> \n> The table is heavily inserted and deleted from. Recently I had done a\n> very large delete.\n\nI still keep wondering if this table is bloated with dead tuples. Even\nif you vacuum often if there's a connection with an idle transaction,\nthe tuples can't be reclaimed and the table would continue to grow.\n\nAnyway, what does \n\nvacuum analyze tablename\n\nsay (where tablename is, of course, the name of the table we're looking\nat)? Pay particular attention to DETAIL statements.\n\nAssuming the table's NOT bloated, you may do well to increase the\neffective_cache_size, which doesn't allocate anything, but just tells\nthe query planner about how big your operating systems file cache is as\nregards postgresql. It's a bit of a course setting, i.e. you can make\nrather large changes to it without blowing things up. If you've got a\ncouple gigs on your machine, try setting it to something like 512MB or\nso.\n\nIf your table is bloating, and you don't have idle transactions hanging\nof the database, it could be that your fsm settings are too low.\n",
"msg_date": "Tue, 16 Jan 2007 23:46:07 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "> I still keep wondering if this table is bloated with dead tuples. Even\n> if you vacuum often if there's a connection with an idle transaction,\n> the tuples can't be reclaimed and the table would continue to grow.\n\nI used to vacuum once an hour, although I've switched it to autovacuum\nnow. It definitely could be bloated with dead tuples. I'll paste the\n\"vacuum analyze verbose\" output at the bottom of this e-mail. Would a\nvacuum full be a good idea?\n\n\n> Assuming the table's NOT bloated, you may do well to increase the\n> effective_cache_size, which doesn't allocate anything, \n<snip>\n> try setting it to something like 512MB or so.\n\nIt's currently set to 1000MB.\n\n\n> If your table is bloating, and you don't have idle transactions hanging\n> of the database, it could be that your fsm settings are too low.\n\nfsm is currently set to 2000000. Is there any harm in setting it too\nhigh? =)\n\nHere's the vacuum analyze verbose output:\n\nINFO: vacuuming \"public.transaction_facts\"\nINFO: scanned index \"transaction_facts_pkey\" to remove 759969 row\nversions\nDETAIL: CPU 7.20s/2.31u sec elapsed 315.31 sec.\nINFO: scanned index \"transaction_facts_dcms_dim_id_idx\" to remove\n759969 row versions\nDETAIL: CPU 1.29s/2.15u sec elapsed 146.98 sec.\nINFO: scanned index \"transaction_facts_merchant_dim_id_idx\" to remove\n759969 row versions\nDETAIL: CPU 1.10s/2.10u sec elapsed 126.09 sec.\nINFO: scanned index \"transaction_facts_transaction_date_idx\" to remove\n759969 row versions\nDETAIL: CPU 1.65s/2.40u sec elapsed 259.25 sec.\nINFO: scanned index \"transaction_facts_transaction_id_idx\" to remove\n759969 row versions\nDETAIL: CPU 7.48s/2.85u sec elapsed 371.98 sec.\nINFO: scanned index \"transaction_facts_product_date_idx\" to remove\n759969 row versions\nDETAIL: CPU 2.32s/2.10u sec elapsed 303.83 sec.\nINFO: scanned index \"transaction_facts_merchant_product_date_idx\" to\nremove 759969 row versions\nDETAIL: CPU 2.48s/2.31u sec elapsed 295.19 sec.\nINFO: scanned index \"transaction_facts_merchant_date_idx\" to remove\n759969 row versions\nDETAIL: CPU 8.10s/3.35u sec elapsed 398.73 sec.\nINFO: scanned index \"transaction_facts_success_idx\" to remove 759969\nrow versions\nDETAIL: CPU 5.01s/2.84u sec elapsed 192.73 sec.\nINFO: scanned index \"transaction_facts_failed_idx\" to remove 759969 row\nversions\nDETAIL: CPU 1.03s/1.90u sec elapsed 123.00 sec.\nINFO: scanned index \"transaction_facts_archived_idx\" to remove 759969\nrow versions\nDETAIL: CPU 1.03s/1.39u sec elapsed 104.42 sec.\nINFO: scanned index \"transaction_facts_response_code_idx\" to remove\n759969 row versions\nDETAIL: CPU 0.75s/2.17u sec elapsed 36.71 sec.\nINFO: scanned index \"transaction_facts_transaction_source_idx\" to\nremove 759969 row versions\nDETAIL: CPU 0.60s/1.75u sec elapsed 42.29 sec.\nINFO: scanned index \"transaction_facts_transaction_id_source_idx\" to\nremove 759969 row versions\nDETAIL: CPU 1.14s/1.84u sec elapsed 44.75 sec.\nINFO: \"transaction_facts\": removed 759969 row versions in 14360 pages\nDETAIL: CPU 0.57s/0.23u sec elapsed 45.28 sec.\nINFO: index \"transaction_facts_pkey\" now contains 2274280 row versions\nin 152872 pages\nDETAIL: 759969 index row versions were removed.\n134813 index pages have been deleted, 134813 are currently reusable.\nCPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: index \"transaction_facts_dcms_dim_id_idx\" now contains 2274280\nrow versions in 85725 pages\nDETAIL: 759323 index row versions were removed.\n75705 index pages have been deleted, 73721 
are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_merchant_dim_id_idx\" now contains\n2274280 row versions in 80023 pages\nDETAIL: 759969 index row versions were removed.\n71588 index pages have been deleted, 69210 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_transaction_date_idx\" now contains\n2274280 row versions in 144196 pages\nDETAIL: 759969 index row versions were removed.\n126451 index pages have been deleted, 126451 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_transaction_id_idx\" now contains 2274280\nrow versions in 150529 pages\nDETAIL: 759969 index row versions were removed.\n130649 index pages have been deleted, 130649 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_product_date_idx\" now contains 2274280\nrow versions in 202248 pages\nDETAIL: 759969 index row versions were removed.\n174652 index pages have been deleted, 174652 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_merchant_product_date_idx\" now contains\n2274280 row versions in 202997 pages\nDETAIL: 759969 index row versions were removed.\n175398 index pages have been deleted, 175398 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_merchant_date_idx\" now contains 2274280\nrow versions in 203561 pages\nDETAIL: 759969 index row versions were removed.\n175960 index pages have been deleted, 175960 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_success_idx\" now contains 2274280 row\nversions in 78237 pages\nDETAIL: 759969 index row versions were removed.\n70239 index pages have been deleted, 67652 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_failed_idx\" now contains 2274280 row\nversions in 78230 pages\nDETAIL: 759969 index row versions were removed.\n70231 index pages have been deleted, 67665 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_archived_idx\" now contains 2274280 row\nversions in 72943 pages\nDETAIL: 759969 index row versions were removed.\n64962 index pages have been deleted, 62363 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_response_code_idx\" now contains 2274280\nrow versions in 16918 pages\nDETAIL: 759969 index row versions were removed.\n8898 index pages have been deleted, 6314 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_transaction_source_idx\" now contains\n2274280 row versions in 14235 pages\nDETAIL: 759969 index row versions were removed.\n6234 index pages have been deleted, 3663 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"transaction_facts_transaction_id_source_idx\" now contains\n2274280 row versions in 18053 pages\nDETAIL: 759969 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"transaction_facts\": found 759969 removable, 2274280 nonremovable\nrow versions in 308142 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 15710471 unused item pointers.\n266986 pages contain useful free space.\n0 pages are entirely empty.\nCPU 58.00s/35.59u sec elapsed 3240.94 sec.\nINFO: analyzing \"public.transaction_facts\"\nINFO: 
\"transaction_facts\": scanned 15000 of 308142 pages, containing\n113476 live rows and 0 dead rows; 15000 rows in sample, 2331115\nestimated total rows\n",
"msg_date": "Wed, 17 Jan 2007 09:37:29 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "A good idea here will be to first do a VACUUM FULL and then keep the\nAutovacuum settings you want.\n\n-----------------\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 1/17/07, Jeremy Haile <[email protected]> wrote:\n>\n> > I still keep wondering if this table is bloated with dead tuples. Even\n> > if you vacuum often if there's a connection with an idle transaction,\n> > the tuples can't be reclaimed and the table would continue to grow.\n>\n> I used to vacuum once an hour, although I've switched it to autovacuum\n> now. It definitely could be bloated with dead tuples. I'll paste the\n> \"vacuum analyze verbose\" output at the bottom of this e-mail. Would a\n> vacuum full be a good idea?\n>\n>\n> > Assuming the table's NOT bloated, you may do well to increase the\n> > effective_cache_size, which doesn't allocate anything,\n> <snip>\n> > try setting it to something like 512MB or so.\n>\n> It's currently set to 1000MB.\n>\n>\n> > If your table is bloating, and you don't have idle transactions hanging\n> > of the database, it could be that your fsm settings are too low.\n>\n> fsm is currently set to 2000000. Is there any harm in setting it too\n> high? =)\n>\n> Here's the vacuum analyze verbose output:\n>\n> INFO: vacuuming \"public.transaction_facts\"\n> INFO: scanned index \"transaction_facts_pkey\" to remove 759969 row\n> versions\n> DETAIL: CPU 7.20s/2.31u sec elapsed 315.31 sec.\n> INFO: scanned index \"transaction_facts_dcms_dim_id_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 1.29s/2.15u sec elapsed 146.98 sec.\n> INFO: scanned index \"transaction_facts_merchant_dim_id_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 1.10s/2.10u sec elapsed 126.09 sec.\n> INFO: scanned index \"transaction_facts_transaction_date_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 1.65s/2.40u sec elapsed 259.25 sec.\n> INFO: scanned index \"transaction_facts_transaction_id_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 7.48s/2.85u sec elapsed 371.98 sec.\n> INFO: scanned index \"transaction_facts_product_date_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 2.32s/2.10u sec elapsed 303.83 sec.\n> INFO: scanned index \"transaction_facts_merchant_product_date_idx\" to\n> remove 759969 row versions\n> DETAIL: CPU 2.48s/2.31u sec elapsed 295.19 sec.\n> INFO: scanned index \"transaction_facts_merchant_date_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 8.10s/3.35u sec elapsed 398.73 sec.\n> INFO: scanned index \"transaction_facts_success_idx\" to remove 759969\n> row versions\n> DETAIL: CPU 5.01s/2.84u sec elapsed 192.73 sec.\n> INFO: scanned index \"transaction_facts_failed_idx\" to remove 759969 row\n> versions\n> DETAIL: CPU 1.03s/1.90u sec elapsed 123.00 sec.\n> INFO: scanned index \"transaction_facts_archived_idx\" to remove 759969\n> row versions\n> DETAIL: CPU 1.03s/1.39u sec elapsed 104.42 sec.\n> INFO: scanned index \"transaction_facts_response_code_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 0.75s/2.17u sec elapsed 36.71 sec.\n> INFO: scanned index \"transaction_facts_transaction_source_idx\" to\n> remove 759969 row versions\n> DETAIL: CPU 0.60s/1.75u sec elapsed 42.29 sec.\n> INFO: scanned index \"transaction_facts_transaction_id_source_idx\" to\n> remove 759969 row versions\n> DETAIL: CPU 1.14s/1.84u sec elapsed 44.75 sec.\n> INFO: \"transaction_facts\": removed 759969 row versions in 14360 pages\n> DETAIL: CPU 0.57s/0.23u sec elapsed 45.28 sec.\n> INFO: index \"transaction_facts_pkey\" now contains 2274280 row versions\n> in 
152872 pages\n> DETAIL: 759969 index row versions were removed.\n> 134813 index pages have been deleted, 134813 are currently reusable.\n> CPU 0.00s/0.01u sec elapsed 0.01 sec.\n> INFO: index \"transaction_facts_dcms_dim_id_idx\" now contains 2274280\n> row versions in 85725 pages\n> DETAIL: 759323 index row versions were removed.\n> 75705 index pages have been deleted, 73721 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_merchant_dim_id_idx\" now contains\n> 2274280 row versions in 80023 pages\n> DETAIL: 759969 index row versions were removed.\n> 71588 index pages have been deleted, 69210 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_date_idx\" now contains\n> 2274280 row versions in 144196 pages\n> DETAIL: 759969 index row versions were removed.\n> 126451 index pages have been deleted, 126451 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_id_idx\" now contains 2274280\n> row versions in 150529 pages\n> DETAIL: 759969 index row versions were removed.\n> 130649 index pages have been deleted, 130649 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_product_date_idx\" now contains 2274280\n> row versions in 202248 pages\n> DETAIL: 759969 index row versions were removed.\n> 174652 index pages have been deleted, 174652 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_merchant_product_date_idx\" now contains\n> 2274280 row versions in 202997 pages\n> DETAIL: 759969 index row versions were removed.\n> 175398 index pages have been deleted, 175398 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_merchant_date_idx\" now contains 2274280\n> row versions in 203561 pages\n> DETAIL: 759969 index row versions were removed.\n> 175960 index pages have been deleted, 175960 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_success_idx\" now contains 2274280 row\n> versions in 78237 pages\n> DETAIL: 759969 index row versions were removed.\n> 70239 index pages have been deleted, 67652 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_failed_idx\" now contains 2274280 row\n> versions in 78230 pages\n> DETAIL: 759969 index row versions were removed.\n> 70231 index pages have been deleted, 67665 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_archived_idx\" now contains 2274280 row\n> versions in 72943 pages\n> DETAIL: 759969 index row versions were removed.\n> 64962 index pages have been deleted, 62363 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_response_code_idx\" now contains 2274280\n> row versions in 16918 pages\n> DETAIL: 759969 index row versions were removed.\n> 8898 index pages have been deleted, 6314 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_source_idx\" now contains\n> 2274280 row versions in 14235 pages\n> DETAIL: 759969 index row versions were removed.\n> 6234 index pages have been deleted, 3663 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_id_source_idx\" now contains\n> 2274280 row versions in 18053 pages\n> DETAIL: 759969 index 
row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"transaction_facts\": found 759969 removable, 2274280 nonremovable\n> row versions in 308142 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 15710471 unused item pointers.\n> 266986 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 58.00s/35.59u sec elapsed 3240.94 sec.\n> INFO: analyzing \"public.transaction_facts\"\n> INFO: \"transaction_facts\": scanned 15000 of 308142 pages, containing\n> 113476 live rows and 0 dead rows; 15000 rows in sample, 2331115\n> estimated total rows\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n",
"msg_date": "Wed, 17 Jan 2007 19:53:22 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "Thanks for the great info Chad. I'm learning a lot from this thread!\n\n> 347434 rows * 156 bytes = 52MB (reasonable it could be held in your\n> shared buffers, which makes Tom's suggestion very plausible, the \n> index scan may not be cheaper -- because it is all cached)\n\nMaybe - I tried running the same query for an older time range that is\nless likely to be cached. The index scan took longer than my previous\nexample, but still only took 16 seconds, compared to the 87 seconds\nrequired to seqscan the table. When I can, I'll restart the machine and\nrun a comparison again to get a \"pure\" test.\n\n\n> One of the reasons why the sequential scan is slower is because the\n> optimizer doesn't know the data you are requesting is sitting in the\n> cache (and it is VERY unlikely you have the entire table in cache, \n> unless it is a heavily used table or very small table, which it's probably \n> not). \n\nThis is a large table (3 million rows). Rows are typically inserted in\ndate order, although large numbers of rows are deleted every night. \nBasically, this table contains a list of transactions in a rolling time\nwindow. So inserts happen throughout the day, and then a day's worth of\nold rows are deleted every night. The most accessed rows are going to\nbe today's rows, which is a small subset of the overall table. (maybe\n14%)\n\n> One thing to be careful of here is that you really need to consider what\n> is the primary use of the table, and what are the major queries you will be\n> launching against it. But you could improve the correlation by\n> rebuilding the table ordered by the transaction_date column, but it may screw up\n> other range scans. \n\nDate is almost always a criteria in scans of this table. As mentioned\nearlier, the table is naturally built in date order - so would\nrebuilding the table help? Is it possible that even though I'm\ninserting in date order, since I delete rows so often the physical\ncorrelation would get disrupted as disk pages are reused? Perhaps\nclustering on the transaction_date index and periodically running\n\"cluster\" would help? Does vacuum full help with this at all?\n\n> Another option is partitioning. I wouldn't do any of this\n> stuff, until you find out the last tweak you made still holds true, give\n> it a few days, perhaps test it after a clean reboot of the server.\n\nYeah - partitioning makes a lot of sense and I've thought about doing\nthis in the past. Although I do run queries that cross multiple days,\nmost of my queries only look at today's data, so the physical disk\norganization would likely be much better with a partitioned table setup.\n Also, since I usually delete old data one day at a time, I could simply\ndrop the old day's partition. This would make vacuuming much less of an\nissue. \n\nBut I won't be making any changes immediately, so I'll continue to run\ntests given your advice.\n\nThanks again,\nJeremy Haile\n\n\n",
"msg_date": "Wed, 17 Jan 2007 10:09:22 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
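One way to check the physical-ordering question Jeremy raises above is to look at the planner's correlation statistic after an ANALYZE. A small sketch, assuming the date column behind transaction_facts_transaction_date_idx is called transaction_date:

    -- assumes the date column is named transaction_date
    -- correlation is 1.0 when the heap is perfectly ordered on the column,
    -- and drifts toward 0 as delete/insert churn scatters the rows
    SELECT attname, correlation
    FROM pg_stats
    WHERE tablename = 'transaction_facts'
      AND attname = 'transaction_date';

If that number has drifted well below 1, clustering on the date index (discussed further down the thread) is likely to help the range scans.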
{
"msg_contents": "\nOn 17-Jan-07, at 9:37 AM, Jeremy Haile wrote:\n\n>> I still keep wondering if this table is bloated with dead tuples. \n>> Even\n>> if you vacuum often if there's a connection with an idle transaction,\n>> the tuples can't be reclaimed and the table would continue to grow.\n>\n> I used to vacuum once an hour, although I've switched it to autovacuum\n> now. It definitely could be bloated with dead tuples. I'll paste the\n> \"vacuum analyze verbose\" output at the bottom of this e-mail. Would a\n> vacuum full be a good idea?\n>\n>\n>> Assuming the table's NOT bloated, you may do well to increase the\n>> effective_cache_size, which doesn't allocate anything,\n> <snip>\n>> try setting it to something like 512MB or so.\n>\n> It's currently set to 1000MB.\n\nHow much memory does the box have\n\n>\n>\n>> If your table is bloating, and you don't have idle transactions \n>> hanging\n>> of the database, it could be that your fsm settings are too low.\n>\n> fsm is currently set to 2000000. Is there any harm in setting it too\n> high? =)\n\nYes, it takes up space\n>\n> Here's the vacuum analyze verbose output:\n>\n> INFO: vacuuming \"public.transaction_facts\"\n> INFO: scanned index \"transaction_facts_pkey\" to remove 759969 row\n> versions\n> DETAIL: CPU 7.20s/2.31u sec elapsed 315.31 sec.\n> INFO: scanned index \"transaction_facts_dcms_dim_id_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 1.29s/2.15u sec elapsed 146.98 sec.\n> INFO: scanned index \"transaction_facts_merchant_dim_id_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 1.10s/2.10u sec elapsed 126.09 sec.\n> INFO: scanned index \"transaction_facts_transaction_date_idx\" to \n> remove\n> 759969 row versions\n> DETAIL: CPU 1.65s/2.40u sec elapsed 259.25 sec.\n> INFO: scanned index \"transaction_facts_transaction_id_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 7.48s/2.85u sec elapsed 371.98 sec.\n> INFO: scanned index \"transaction_facts_product_date_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 2.32s/2.10u sec elapsed 303.83 sec.\n> INFO: scanned index \"transaction_facts_merchant_product_date_idx\" to\n> remove 759969 row versions\n> DETAIL: CPU 2.48s/2.31u sec elapsed 295.19 sec.\n> INFO: scanned index \"transaction_facts_merchant_date_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 8.10s/3.35u sec elapsed 398.73 sec.\n> INFO: scanned index \"transaction_facts_success_idx\" to remove 759969\n> row versions\n> DETAIL: CPU 5.01s/2.84u sec elapsed 192.73 sec.\n> INFO: scanned index \"transaction_facts_failed_idx\" to remove \n> 759969 row\n> versions\n> DETAIL: CPU 1.03s/1.90u sec elapsed 123.00 sec.\n> INFO: scanned index \"transaction_facts_archived_idx\" to remove 759969\n> row versions\n> DETAIL: CPU 1.03s/1.39u sec elapsed 104.42 sec.\n> INFO: scanned index \"transaction_facts_response_code_idx\" to remove\n> 759969 row versions\n> DETAIL: CPU 0.75s/2.17u sec elapsed 36.71 sec.\n> INFO: scanned index \"transaction_facts_transaction_source_idx\" to\n> remove 759969 row versions\n> DETAIL: CPU 0.60s/1.75u sec elapsed 42.29 sec.\n> INFO: scanned index \"transaction_facts_transaction_id_source_idx\" to\n> remove 759969 row versions\n> DETAIL: CPU 1.14s/1.84u sec elapsed 44.75 sec.\n> INFO: \"transaction_facts\": removed 759969 row versions in 14360 pages\n> DETAIL: CPU 0.57s/0.23u sec elapsed 45.28 sec.\n> INFO: index \"transaction_facts_pkey\" now contains 2274280 row \n> versions\n> in 152872 pages\n> DETAIL: 759969 index row versions were removed.\n> 134813 index pages have been 
deleted, 134813 are currently reusable.\n> CPU 0.00s/0.01u sec elapsed 0.01 sec.\n> INFO: index \"transaction_facts_dcms_dim_id_idx\" now contains 2274280\n> row versions in 85725 pages\n> DETAIL: 759323 index row versions were removed.\n> 75705 index pages have been deleted, 73721 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_merchant_dim_id_idx\" now contains\n> 2274280 row versions in 80023 pages\n> DETAIL: 759969 index row versions were removed.\n> 71588 index pages have been deleted, 69210 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_date_idx\" now contains\n> 2274280 row versions in 144196 pages\n> DETAIL: 759969 index row versions were removed.\n> 126451 index pages have been deleted, 126451 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_id_idx\" now contains \n> 2274280\n> row versions in 150529 pages\n> DETAIL: 759969 index row versions were removed.\n> 130649 index pages have been deleted, 130649 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_product_date_idx\" now contains 2274280\n> row versions in 202248 pages\n> DETAIL: 759969 index row versions were removed.\n> 174652 index pages have been deleted, 174652 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_merchant_product_date_idx\" now \n> contains\n> 2274280 row versions in 202997 pages\n> DETAIL: 759969 index row versions were removed.\n> 175398 index pages have been deleted, 175398 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_merchant_date_idx\" now contains \n> 2274280\n> row versions in 203561 pages\n> DETAIL: 759969 index row versions were removed.\n> 175960 index pages have been deleted, 175960 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_success_idx\" now contains 2274280 row\n> versions in 78237 pages\n> DETAIL: 759969 index row versions were removed.\n> 70239 index pages have been deleted, 67652 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_failed_idx\" now contains 2274280 row\n> versions in 78230 pages\n> DETAIL: 759969 index row versions were removed.\n> 70231 index pages have been deleted, 67665 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_archived_idx\" now contains 2274280 row\n> versions in 72943 pages\n> DETAIL: 759969 index row versions were removed.\n> 64962 index pages have been deleted, 62363 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_response_code_idx\" now contains \n> 2274280\n> row versions in 16918 pages\n> DETAIL: 759969 index row versions were removed.\n> 8898 index pages have been deleted, 6314 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_source_idx\" now contains\n> 2274280 row versions in 14235 pages\n> DETAIL: 759969 index row versions were removed.\n> 6234 index pages have been deleted, 3663 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"transaction_facts_transaction_id_source_idx\" now \n> contains\n> 2274280 row versions in 18053 pages\n> DETAIL: 759969 index row versions were removed.\n> 0 index pages have been deleted, 0 are 
currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"transaction_facts\": found 759969 removable, 2274280 \n> nonremovable\n> row versions in 308142 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 15710471 unused item pointers.\n> 266986 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 58.00s/35.59u sec elapsed 3240.94 sec.\n> INFO: analyzing \"public.transaction_facts\"\n> INFO: \"transaction_facts\": scanned 15000 of 308142 pages, containing\n> 113476 live rows and 0 dead rows; 15000 rows in sample, 2331115\n> estimated total rows\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Wed, 17 Jan 2007 10:14:47 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "> How much memory does the box have\n2GB\n\n> Yes, it takes up space\nWell, I upped max_fsm_pages to 2000000 because it vacuums were failing\nwith it set to 1500000. However, I'm now autovacuuming, which might be\nkeeping my fsm lower. I didn't realize that setting it too high had\nnegative effects, so I'll try to get a better handle on how large this\nneeds to be.\n",
"msg_date": "Wed, 17 Jan 2007 10:23:41 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "On Wed, 2007-01-17 at 08:37, Jeremy Haile wrote:\n> > I still keep wondering if this table is bloated with dead tuples. Even\n> > if you vacuum often if there's a connection with an idle transaction,\n> > the tuples can't be reclaimed and the table would continue to grow.\n> \n> I used to vacuum once an hour, although I've switched it to autovacuum\n> now. It definitely could be bloated with dead tuples. I'll paste the\n> \"vacuum analyze verbose\" output at the bottom of this e-mail. Would a\n> vacuum full be a good idea?\n> \n> \n> > Assuming the table's NOT bloated, you may do well to increase the\n> > effective_cache_size, which doesn't allocate anything, \n> <snip>\n> > try setting it to something like 512MB or so.\n> \n> It's currently set to 1000MB.\n\nSounds about right for a machine with 2G memory (as you mentioned\nelsewhere)\n\n> > If your table is bloating, and you don't have idle transactions hanging\n> > of the database, it could be that your fsm settings are too low.\n> \n> fsm is currently set to 2000000. Is there any harm in setting it too\n> high? =)\n\nAs long as you don't run out of memory, it can be pretty high. note\nthat it uses shared memory, so setting it too high can cause the db to\nfail to start.\n\n> INFO: scanned index \"transaction_facts_pkey\" to remove 759969 row\n> versions\n> DETAIL: CPU 7.20s/2.31u sec elapsed 315.31 sec.\n> INFO: scanned index \"transaction_facts_dcms_dim_id_idx\" to remove\n> 759969 row versions\n\nSNIP\n\n> DETAIL: 759969 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"transaction_facts\": found 759969 removable, 2274280 nonremovable\n> row versions in 308142 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 15710471 unused item pointers.\n> 266986 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 58.00s/35.59u sec elapsed 3240.94 sec.\n\nThat's about 32% dead rows. Might be worth scheduling a vacuum full,\nbut it's not like I was afraid it might be. It looks to me like you\ncould probably use a faster I/O subsystem in that machine though.\n\nIf the random page cost being lower fixes your issues, then I'd just run\nwith it lower for now. note that while lowering it may fix one query,\nit may break another. Tuning pgsql, like any database, is as much art\nas science... \n",
"msg_date": "Wed, 17 Jan 2007 10:19:06 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
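For reference, a sketch of how the bloat figure above can be double-checked, and how the random_page_cost experiment can be kept session-local; it assumes the contrib module pgstattuple is installed in this database:

    -- requires contrib/pgstattuple; dead_tuple_percent and free_percent
    -- come straight from a heap scan of the table
    SELECT table_len, dead_tuple_percent, free_percent
    FROM pgstattuple('transaction_facts');

    -- lower the planner constant for this connection only, then re-run the
    -- problem query with EXPLAIN ANALYZE before touching postgresql.conf
    SET random_page_cost = 2.0;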
{
"msg_contents": "> That's about 32% dead rows. Might be worth scheduling a vacuum full,\n> but it's not like I was afraid it might be. It looks to me like you\n> could probably use a faster I/O subsystem in that machine though.\n\nI'll try to schedule a full vacuum tonight. As far as I/O - it's using\nSAN over fiber. Not as fast as internal SCSI though...\n",
"msg_date": "Wed, 17 Jan 2007 11:28:01 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "On 1/17/07, Jeremy Haile <[email protected]> wrote:\n>\n> Maybe - I tried running the same query for an older time range that is\n> less likely to be cached. The index scan took longer than my previous\n> example, but still only took 16 seconds, compared to the 87 seconds\n> required to seqscan the table. When I can, I'll restart the machine and\n> run a comparison again to get a \"pure\" test.\n\n\nHeh, querying a different range of data was a better idea compared to\nrebooting.. doh.. So I think you reasonably established that an index scan\nfor unbuffered data would still be faster than a sequential scan.\n\nTo me this is one of those cases where the optimizer doesn't understand the\nclustering of the data, and it is being misled by the statistics and fixed\nparameters it has. If you have fast disks (I think a fiber SAN probably\ncounts here) then adjusting random_page_cost lower is reasonable, the lowest\nI have heard recommended is 2.0. It would be nice if the database could\nlearn to estimate these values, as newer versions of Oracle does.\n\nDate is almost always a criteria in scans of this table. As mentioned\n> earlier, the table is naturally built in date order - so would\n> rebuilding the table help? Is it possible that even though I'm\n> inserting in date order, since I delete rows so often the physical\n> correlation would get disrupted as disk pages are reused? Perhaps\n> clustering on the transaction_date index and periodically running\n> \"cluster\" would help? Does vacuum full help with this at all?\n\n\nYes, cluster would rebuild the table for you. I wouldn't do anything too\nintrusive, run with the random_page_cost lowered, perhaps vacuum full,\nreindex, and see what happens. If it degrades over time, then I would start\nlooking at partitioning or some other solution.\n\n\nYeah - partitioning makes a lot of sense and I've thought about doing\n> this in the past. Although I do run queries that cross multiple days,\n> most of my queries only look at today's data, so the physical disk\n> organization would likely be much better with a partitioned table setup.\n> Also, since I usually delete old data one day at a time, I could simply\n> drop the old day's partition. This would make vacuuming much less of an\n> issue.\n\n\nYep, my thoughts exactly. Partitioning support is PostgreSQL is there, but\nit needs a bit more of a tighter integration into the core (I shouldn't have\nto create a view, n tables, n rules, etc). Additionally, I have read that\nat some point when you have \"y\" partitions the performance degrades, haven't\nreally looked into it myself.\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/17/07, Jeremy Haile <[email protected]> wrote:\n\nMaybe - I tried running the same query for an older time range that isless likely to be cached. The index scan took longer than my previousexample, but still only took 16 seconds, compared to the 87 secondsrequired to seqscan the table. When I can, I'll restart the machine and\nrun a comparison again to get a \"pure\" test.Heh, querying a different range of data was a better idea compared to rebooting.. doh.. So I think you reasonably established that an index scan for unbuffered data would still be faster than a sequential scan.\nTo me this is one of those cases where the optimizer doesn't understand the clustering of the data, and it is being misled by the statistics and fixed parameters it has. 
If you have fast disks (I think a fiber SAN probably counts here) then adjusting random_page_cost lower is reasonable, the lowest I have heard recommended is \n2.0. It would be nice if the database could learn to estimate these values, as newer versions of Oracle does.\nDate is almost always a criteria in scans of this table. As mentionedearlier, the table is naturally built in date order - so would\nrebuilding the table help? Is it possible that even though I'minserting in date order, since I delete rows so often the physicalcorrelation would get disrupted as disk pages are reused? Perhapsclustering on the transaction_date index and periodically running\n\"cluster\" would help? Does vacuum full help with this at all?Yes, cluster would rebuild the table for you. I wouldn't do anything too intrusive, run with the random_page_cost lowered, perhaps vacuum full, reindex, and see what happens. If it degrades over time, then I would start looking at partitioning or some other solution.\n\nYeah - partitioning makes a lot of sense and I've thought about doingthis in the past. Although I do run queries that cross multiple days,most of my queries only look at today's data, so the physical disk\norganization would likely be much better with a partitioned table setup. Also, since I usually delete old data one day at a time, I could simplydrop the old day's partition. This would make vacuuming much less of an\nissue.Yep, my thoughts exactly. Partitioning support is PostgreSQL is there, but it needs a bit more of a tighter integration into the core (I shouldn't have to create a view, n tables, n rules, etc). Additionally, I have read that at some point when you have \"y\" partitions the performance degrades, haven't really looked into it myself.\n-- \nChadhttp://www.postgresqlforums.com/",
"msg_date": "Wed, 17 Jan 2007 12:26:50 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
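A sketch of the CLUSTER step suggested above, using the 8.1/8.2 form of the command and the date index name from the VACUUM output earlier in the thread; CLUSTER takes an exclusive lock and rewrites the heap, so it belongs in a maintenance window:

    CLUSTER transaction_facts_transaction_date_idx ON transaction_facts;
    ANALYZE transaction_facts;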
{
"msg_contents": "On Wed, 2007-01-17 at 10:28, Jeremy Haile wrote:\n> > That's about 32% dead rows. Might be worth scheduling a vacuum full,\n> > but it's not like I was afraid it might be. It looks to me like you\n> > could probably use a faster I/O subsystem in that machine though.\n> \n> I'll try to schedule a full vacuum tonight. As far as I/O - it's using\n> SAN over fiber. Not as fast as internal SCSI though...\n\nAlso, look at the thread going by about index bloat by 4x. You'll\nlikely want to reindex after a vacuum full since vacuum full doesn't\nreclaim space in indexes and in fact often bloats indexes.\n",
"msg_date": "Wed, 17 Jan 2007 12:35:11 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
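As a sketch, the combination being suggested here (both commands lock the table exclusively, so again this is maintenance-window work):

    VACUUM FULL VERBOSE transaction_facts;
    REINDEX TABLE transaction_facts;   -- VACUUM FULL shrinks the heap but not the indexes
    ANALYZE transaction_facts;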
{
"msg_contents": "> It would be nice if the database could\n> learn to estimate these values, as newer versions of Oracle does.\n\nThat would be really nice since it would take some of the guess work out\nof it.\n\n> Yes, cluster would rebuild the table for you. I wouldn't do anything too\n> intrusive, run with the random_page_cost lowered, perhaps vacuum full,\n> reindex, and see what happens. \n\nI'll try doing the vacuum full and reindex tonight since they require\nexclusive locks.\n\n> Yep, my thoughts exactly. Partitioning support is PostgreSQL is there,\n> but it needs a bit more of a tighter integration into the core (I shouldn't\n> have to create a view, n tables, n rules, etc). Additionally, I have read\n> that at some point when you have \"y\" partitions the performance degrades,\n> haven't really looked into it myself.\n\nYeah - I haven't setup partitioning in PostgreSQL before, although I've\nread quite a bit about it. I've talked about getting improved syntax\nfor partitioning in PostgreSQL. MySQL's syntax is much simpler and more\nintuitive compared to setting them up with Postgres - it would be nice\nif PostgreSQL adopted a similar syntax where partitions were first-class\ncitizens.\n",
"msg_date": "Wed, 17 Jan 2007 13:46:40 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
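For anyone following along, a minimal sketch of what the 8.1/8.2-style date partitioning being discussed looks like; the child-table name is made up, and a rule or trigger is still needed to route inserts to the right child:

    -- hypothetical child-table name; one child per day
    CREATE TABLE transaction_facts_20070117 (
        CHECK (transaction_date >= DATE '2007-01-17'
           AND transaction_date <  DATE '2007-01-18')
    ) INHERITS (transaction_facts);

    -- lets the planner skip children whose CHECK constraint rules them out
    SET constraint_exclusion = on;

    -- purging a day's data is then just:
    DROP TABLE transaction_facts_20070110;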
{
"msg_contents": "> Also, look at the thread going by about index bloat by 4x. You'll\n> likely want to reindex after a vacuum full since vacuum full doesn't\n> reclaim space in indexes and in fact often bloats indexes.\n\nThanks for the pointer. That thread might indeed apply to my situation.\n I'm going to reindex the the table tonight.\n\nJeremy Haile\n",
"msg_date": "Wed, 17 Jan 2007 13:55:07 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": ">> Assuming the table's NOT bloated, you may do well to increase the\n>> effective_cache_size, which doesn't allocate anything, \n> <snip>\n>> try setting it to something like 512MB or so.\n> \n> It's currently set to 1000MB.\n> \n> \n>> If your table is bloating, and you don't have idle transactions hanging\n>> of the database, it could be that your fsm settings are too low.\n> \n> fsm is currently set to 2000000. Is there any harm in setting it too\n> high? =)\n\nI generally recomend to use this - it's a nice list of the most\nimportant settings in postgresql.conf (with respect to performance),\nalong with a short explanation, and suggested values:\n\n http://www.powerpostgresql.com/PerfList\n\nI'm using it as a general guide when setting and tuning our servers.\n\nAnyway, as someone already pointed out, it's an art to choose the proper\nvalues - there's nothing like 'the only best values'.\n\ntomas\n",
"msg_date": "Wed, 17 Jan 2007 20:14:36 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "> That's about 32% dead rows. Might be worth scheduling a vacuum full,\n> but it's not like I was afraid it might be. It looks to me like you\n> could probably use a faster I/O subsystem in that machine though.\n> \n> If the random page cost being lower fixes your issues, then I'd just run\n> with it lower for now. note that while lowering it may fix one query,\n> it may break another. Tuning pgsql, like any database, is as much art\n> as science...\n\nA nice feature of postgresql is the ability to log the 'slow queries'\n(exceeding some time limit) - you can use it to compare the performance\nof various settings. We're using it to 'detect' stupid SQL etc.\n\nJust set it reasonably (the value depends on you), for example we used\nabout 500ms originally and after about two months of improvements we\nlowered it to about 100ms.\n\nYou can analyze the log by hand, but about a year ago I've written a\ntool to parse it and build a set of HTML reports with an overview and\ndetails about each query) along with graphs and examples of queries.\n\nYou can get it here: http://opensource.pearshealthcyber.cz/\n\nJust beware, it's written in PHP and it definitely is not perfect:\n\n (1) memory requirements (about 4x the size of the log)\n (2) not to fast (about 20mins of P4@3GHz for a 200MB log)\n (3) it requires a certain log format (see the page)\n\nI did some improvements to the script recently, but forgot to upload it.\nI'll do that tomorrow.\n\nTomas\n",
"msg_date": "Wed, 17 Jan 2007 20:32:37 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
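The 'slow query' logging Tomas describes is the log_min_duration_statement setting; a sketch of both ways to turn it on (the 250 ms threshold is just an example value):

    -- postgresql.conf, picked up on reload:
    --   log_min_duration_statement = 250    # log statements slower than 250 ms
    -- or, as a superuser, for a single session:
    SET log_min_duration_statement = 250;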
{
"msg_contents": "Interesting - I haven't seen that tool before. I'll have to check it\nout when I get a chance. Thanks!\n\n\nOn Wed, 17 Jan 2007 20:32:37 +0100, \"Tomas Vondra\" <[email protected]> said:\n> > That's about 32% dead rows. Might be worth scheduling a vacuum full,\n> > but it's not like I was afraid it might be. It looks to me like you\n> > could probably use a faster I/O subsystem in that machine though.\n> > \n> > If the random page cost being lower fixes your issues, then I'd just run\n> > with it lower for now. note that while lowering it may fix one query,\n> > it may break another. Tuning pgsql, like any database, is as much art\n> > as science...\n> \n> A nice feature of postgresql is the ability to log the 'slow queries'\n> (exceeding some time limit) - you can use it to compare the performance\n> of various settings. We're using it to 'detect' stupid SQL etc.\n> \n> Just set it reasonably (the value depends on you), for example we used\n> about 500ms originally and after about two months of improvements we\n> lowered it to about 100ms.\n> \n> You can analyze the log by hand, but about a year ago I've written a\n> tool to parse it and build a set of HTML reports with an overview and\n> details about each query) along with graphs and examples of queries.\n> \n> You can get it here: http://opensource.pearshealthcyber.cz/\n> \n> Just beware, it's written in PHP and it definitely is not perfect:\n> \n> (1) memory requirements (about 4x the size of the log)\n> (2) not to fast (about 20mins of P4@3GHz for a 200MB log)\n> (3) it requires a certain log format (see the page)\n> \n> I did some improvements to the script recently, but forgot to upload it.\n> I'll do that tomorrow.\n> \n> Tomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Wed, 17 Jan 2007 14:38:52 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG8.2.1 choosing slow seqscan over idx scan"
},
{
"msg_contents": "Hey there;\n\nI've been lurking on this list awhile, and I've been working with postgres \nfor a number of years so I'm not exactly new to this. But I'm still \nhaving trouble getting a good balance of settings and I'd like to see what \nother people think. We may also be willing to hire a contractor to help \ntackle this problem if anyone is interested.\n\nI've got an application here that runs large (in terms of length -- the \nqueries have a lot of conditions in them) queries that can potentially \nreturn millions of rows but on average probably return tens of thousands \nof rows. It's read only for most of the day, and pretty much all the \nqueries except one are really fast.\n\nHowever, each night we load data from a legacy cobol system into the SQL \nsystem and then we summarize that data to make the reports faster. This \nload process is intensely insert/update driven but also has a hefty \namount of selects as well. This load process is taking ever longer to \ncomplete.\n\n\nSO ... our goal here is to make this load process take less time. It \nseems the big part is building the big summary table; this big summary \ntable is currently 9 million rows big. Every night, we drop the table, \nre-create it, build the 9 million rows of data (we use COPY to put hte \ndata in when it's prepared, not INSERT), and then build the indexes on it \n-- of which there are many. Unfortunately this table gets queried \nin a lot of different ways and needs these indexes; also unfortunately, we \nhave operator class indexes to support both ASC and DESC sorting on \ncolumns so these are for all intents and purposes duplicate but required \nunder Postgres 8.1 (we've recently upgraded to Postgres 8.2, is this still \na requirement?)\n\nBuilding these indexes takes forever! It's a long grind through inserts \nand then building the indexes takes a hefty amount of time too. (about 9 \nhours). Now, the application is likely part at fault, and we're working \nto make it more efficient, but it has nothing to do with the index \nbuilding time. I'm wondering what we can do to make this better if \nanything; would it be better to leave the indexes on? It doesn't seem to \nbe. Would it be better to use INSERTs instead of copies? Doesn't seem to \nbe.\n\n\nAnyway -- ANYTHING we can do to make this go faster is appreciated :) \nHere's some vital statistics:\n\n- Machine is a 16 GB, 4 actual CPU dual-core opteron system using SCSI \ndiscs. The disc configuration seems to be a good one, it's the best of \nall the ones we've tested so far.\n\n- The load process itself takes about 6 gigs of memory, the rest is free \nfor postgres because this is basically all the machine does.\n\n- If this was your machine and situation, how would you lay out the emmory \nsettings? What would you set the FSM to? Would you leave teh bgwriter on \nor off? We've already got FSYNC off because \"data integrity\" doesn't \nmatter -- this stuff is religeously backed up and we've got no problem \nreinstalling it. Besides, in order for this machine to go down, data \nintegrity of the DB is the least of the worries :)\n\nDo wal_buffers/full_page_writes matter of FSYNC is off? If so, what \nsettings? What about checkpoints?\n\nAny finally, any ideas on planner constants? 
Here's what I'm using:\n\nseq_page_cost = 0.5 # measured on an arbitrary scale\nrandom_page_cost = 1.0 # same scale as above\ncpu_tuple_cost = 0.001 # same scale as above\ncpu_index_tuple_cost = 0.0001 # same scale as above\ncpu_operator_cost = 0.00025 # same scale as above\neffective_cache_size = 679006\n\nI really don't remember how I came up with that effective_cache_size \nnumber....\n\n\nAnyway... any advice would be appreciated :)\n\n\nSteve\n",
"msg_date": "Wed, 17 Jan 2007 15:41:45 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Configuration Advice"
},
{
"msg_contents": "> Any finally, any ideas on planner constants? Here's what I'm using:\n> \n> seq_page_cost = 0.5 # measured on an arbitrary scale\n> random_page_cost = 1.0 # same scale as above\n> cpu_tuple_cost = 0.001 # same scale as above\n> cpu_index_tuple_cost = 0.0001 # same scale as above\n> cpu_operator_cost = 0.00025 # same scale as above\n> effective_cache_size = 679006\n> \n> I really don't remember how I came up with that effective_cache_size\n> number....\n\nI don't have much experience with the way your application works, but:\n\n1) What is the size of the whole database? Does that fit in your memory?\n That's the first thing I'd like to know and I can't find it in your\n post.\n\n I'm missing several other important values too - namely\n\n shared_buffers\n max_fsm_pages\n work_mem\n maintenance_work_mem\n\n BTW, is the autovacuum daemon running? If yes, try to stop it during\n the import (and run ANALYZE after the import of data).\n\n2) What is the size of a disc page? Without that we can only guess what\n doest the effective_cache_size number means - in the usual case it's\n 8kB thus giving about 5.2 GiB of memory.\n\n As suggested in http://www.powerpostgresql.com/PerfList I'd increase\n that to about 1.400.000 which about 10.5 GiB (about 2/3 of RAM).\n\n Anyway - don't be afraid this breaks something. This is just an\n information for PostgreSQL how much memory the OS is probably using\n as a filesystem cache. PostgreSQL uses this to evaluate the\n probability that the page is in a cache.\n\n3) What is the value of maintenance_work_mem? This is a very important\n value for CREATE INDEX (and some other). The lower this value is,\n the slower the CREATE INDEX is. So try to increase the value as much\n as you can - this could / should improve the import performance\n considerably.\n\n But be careful - this does influence the amount of memmory allocated\n by PostgreSQL. Being in your position I wouldn't do this in the\n postgresql.conf - I'd do that in the connection used by the import\n using SET command, ie. something like\n\n SET maintenance_work_mem = 524288;\n CREATE INDEX ...\n CREATE INDEX ...\n CREATE INDEX ...\n CREATE INDEX ...\n\n for a 512 MiB of maintenance_work_mem. Maybe even a higher value\n could be used (1 GiB?). Just try to fiddle with this a little.\n\n4) Try to set up some performance monitoring - for example a 'dstat' is\n a nice way to do that. This way you can find yout where's the\n bottleneck (memory, I/O etc.)\n\nThat's basically all I can think of right now.\n\nTomas\n",
"msg_date": "Wed, 17 Jan 2007 22:32:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "> Building these indexes takes forever!\n\n> Anyway -- ANYTHING we can do to make this go faster is appreciated :) \n> Here's some vital statistics:\n\n> - Machine is a 16 GB, 4 actual CPU dual-core opteron system using SCSI \n> discs. The disc configuration seems to be a good one, it's the best of \n> all the ones we've tested so far.\n\nWhat are your shared_buffers, work_mem, and maintenance_work_mem settings?\n\nmaintenance_work_mem is used for CREATE INDEX, and with 16GB of memory \nin the machine, maintenance_work_mem should be set to at least 1GB in my \nopinion.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Wed, 17 Jan 2007 16:33:30 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On 1/17/07, Steve <[email protected]> wrote:\n>\n> However, each night we load data from a legacy cobol system into the SQL\n> system and then we summarize that data to make the reports faster. This\n> load process is intensely insert/update driven but also has a hefty\n> amount of selects as well. This load process is taking ever longer to\n> complete.\n\n\nHow many rows do you typically load each night? If it is say less than 10%\nof the total rows, then perhaps the suggestion in the next paragraph is\nreasonable.\n\nSO ... our goal here is to make this load process take less time. It\n> seems the big part is building the big summary table; this big summary\n> table is currently 9 million rows big. Every night, we drop the table,\n> re-create it, build the 9 million rows of data (we use COPY to put hte\n> data in when it's prepared, not INSERT), and then build the indexes on it\n\n\nPerhaps, placing a trigger on the source table and building a \"change log\"\nwould be useful. For example, you could scan the change log (looking for\ninsert, update, and deletes) and integrate those changes into your summary\ntable. Obviously if you are using complex aggregates it may not be possible\nto adjust the summary table, but if you are performing simple SUM's,\nCOUNT's, etc. then this is a workable solution.\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/17/07, Steve <[email protected]> wrote:\nHowever, each night we load data from a legacy cobol system into the SQLsystem and then we summarize that data to make the reports faster. Thisload process is intensely insert/update driven but also has a hefty\namount of selects as well. This load process is taking ever longer tocomplete.How many rows do you typically load each night? If it is say less than 10% of the total rows, then perhaps the suggestion in the next paragraph is reasonable.\nSO ... our goal here is to make this load process take less time. Itseems the big part is building the big summary table; this big summary\ntable is currently 9 million rows big. Every night, we drop the table,re-create it, build the 9 million rows of data (we use COPY to put htedata in when it's prepared, not INSERT), and then build the indexes on it\nPerhaps, placing a trigger on the source table and building a \"change log\" would be useful. For example, you could scan the change log (looking for insert, update, and deletes) and integrate those changes into your summary table. Obviously if you are using complex aggregates it may not be possible to adjust the summary table, but if you are performing simple SUM's, COUNT's, etc. then this is a workable solution.\n-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Wed, 17 Jan 2007 16:34:10 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
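A minimal sketch of the change-log idea, with made-up table and column names: it assumes a source table cobol_rows keyed by an integer id, and that the plpgsql language is already installed in the database.

    -- hypothetical change-log table for a hypothetical source table "cobol_rows"
    CREATE TABLE cobol_rows_changes (
        change_time timestamptz NOT NULL DEFAULT now(),
        op          char(1)     NOT NULL,   -- 'I', 'U' or 'D'
        row_id      integer     NOT NULL
    );

    CREATE OR REPLACE FUNCTION log_cobol_rows_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO cobol_rows_changes (op, row_id) VALUES ('D', OLD.id);
            RETURN OLD;
        ELSIF TG_OP = 'UPDATE' THEN
            INSERT INTO cobol_rows_changes (op, row_id) VALUES ('U', NEW.id);
        ELSE
            INSERT INTO cobol_rows_changes (op, row_id) VALUES ('I', NEW.id);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER cobol_rows_change_log
        AFTER INSERT OR UPDATE OR DELETE ON cobol_rows
        FOR EACH ROW EXECUTE PROCEDURE log_cobol_rows_change();

The nightly job can then read cobol_rows_changes and apply just those deltas to the summary table instead of rebuilding it from scratch.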
{
"msg_contents": "Steve wrote:\n> SO ... our goal here is to make this load process take less time. It \n> seems the big part is building the big summary table; this big summary \n> table is currently 9 million rows big. Every night, we drop the table, \n> re-create it, build the 9 million rows of data (we use COPY to put hte \n> data in when it's prepared, not INSERT), and then build the indexes on \n> it -- of which there are many. \n\nWould it be possible to just update the summary table, instead of \nrecreating it from scratch every night?\n\n> Unfortunately this table gets queried in \n> a lot of different ways and needs these indexes; also unfortunately, we \n> have operator class indexes to support both ASC and DESC sorting on \n> columns so these are for all intents and purposes duplicate but required \n> under Postgres 8.1 (we've recently upgraded to Postgres 8.2, is this \n> still a requirement?)\n\nI don't think this has changed in 8.2.\n\n> Building these indexes takes forever! It's a long grind through inserts \n> and then building the indexes takes a hefty amount of time too. (about \n> 9 hours). Now, the application is likely part at fault, and we're \n> working to make it more efficient, but it has nothing to do with the \n> index building time. I'm wondering what we can do to make this better \n> if anything; would it be better to leave the indexes on? It doesn't \n> seem to be. Would it be better to use INSERTs instead of copies? \n> Doesn't seem to be.\n\nWould it help if you created multiple indexes simultaneously? You have \nenough CPU to do it. Is the index creation CPU or I/O bound? 9 million \nrows should fit in 16 GB of memory, right?\n\n> - The load process itself takes about 6 gigs of memory, the rest is free \n> for postgres because this is basically all the machine does.\n\nCan you describe the load process in more detail? What's it doing with \nthe 6 gigs?\n\n> - If this was your machine and situation, how would you lay out the \n> emmory settings? What would you set the FSM to? \n\nFSM seems irrelevant here..\n\n> Do wal_buffers/full_page_writes matter of FSYNC is off? \n\nBetter turn off full_page_writes, since you can kiss goodbye to data \nintegrity anyway with fsync=off.\n\n> Anyway... any advice would be appreciated :)\n\nWhat's your maintenance_work_mem setting? It can make a big difference \nin sorting the data for indexes.\n\nIf you could post the schema including the indexes, people might have \nmore ideas...\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 17 Jan 2007 21:37:50 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
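On the 'multiple indexes simultaneously' point: a plain CREATE INDEX only takes a ShareLock, and ShareLocks do not conflict with each other, so several builds on the same table can run at once from separate connections. A sketch with made-up table and index names (just keep maintenance_work_mem times the number of sessions within RAM):

    -- hypothetical summary table and index names
    -- session 1
    SET maintenance_work_mem = 1048576;   -- 1 GB; the value is in kB
    CREATE INDEX summary_date_idx ON summary_table (summary_date);

    -- session 2, started at the same time over a second connection
    -- SET maintenance_work_mem = 1048576;
    -- CREATE INDEX summary_merchant_idx ON summary_table (merchant_id);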
{
"msg_contents": ">\n> 1) What is the size of the whole database? Does that fit in your memory?\n> That's the first thing I'd like to know and I can't find it in your\n> post.\n\nCurrent on-disk size is about 51 gig. I'm not sure if there's a different \nsize I should be looking at instead, but that's what du tells me the \ndirectory for the database in the \"base\" directory is sized at. So, no, \nit doesn't fit into memory all the way.\n\n> I'm missing several other important values too - namely\n>\n> shared_buffers\n> max_fsm_pages\n> work_mem\n> maintenance_work_mem\n>\n\nI didn't share these because they've been in flux :) I've been \nexperimenting with different values, but currently we're using:\n\n8GB shared_buffers\n100000 max_fsm_pages\n256MB work_mem\n6GB maintenance_work_mem\n\n> BTW, is the autovacuum daemon running? If yes, try to stop it during\n> the import (and run ANALYZE after the import of data).\n\nNo. all vacuums are done explicitly since the database doesn't change \nduring the day. The 'order of operations' is:\n\n- Load COBOL data into database (inserts/updates)\n- VACUUM COBOL data\n- Summarize COBOL data (inserts/updates with the big table using COPY)\n- VACUUM summary tables\n\nSo everything gets vacuumed as soon as it's updated.\n\n> 2) What is the size of a disc page? Without that we can only guess what\n> doest the effective_cache_size number means - in the usual case it's\n> 8kB thus giving about 5.2 GiB of memory.\n>\n\n \tI believe it's 8kB. I definitely haven't changed it :)\n\n> As suggested in http://www.powerpostgresql.com/PerfList I'd increase\n> that to about 1.400.000 which about 10.5 GiB (about 2/3 of RAM).\n>\n> Anyway - don't be afraid this breaks something. This is just an\n> information for PostgreSQL how much memory the OS is probably using\n> as a filesystem cache. PostgreSQL uses this to evaluate the\n> probability that the page is in a cache.\n\n\n \tOkay, I'll try the value you recommend. :)\n\n\n> 3) What is the value of maintenance_work_mem? This is a very important\n> value for CREATE INDEX (and some other). The lower this value is,\n> the slower the CREATE INDEX is. So try to increase the value as much\n> as you can - this could / should improve the import performance\n> considerably.\n>\n> But be careful - this does influence the amount of memmory allocated\n> by PostgreSQL. Being in your position I wouldn't do this in the\n> postgresql.conf - I'd do that in the connection used by the import\n> using SET command, ie. something like\n>\n> SET maintenance_work_mem = 524288;\n> CREATE INDEX ...\n> CREATE INDEX ...\n> CREATE INDEX ...\n> CREATE INDEX ...\n>\n> for a 512 MiB of maintenance_work_mem. Maybe even a higher value\n> could be used (1 GiB?). Just try to fiddle with this a little.\n\n \tIt's currently at 6GB in postgres.conf, though you have a good \npoint in that maybe that should be before the indexes are made to save \nroom. Things are certainly kinda tight in the config as is.\n\n> 4) Try to set up some performance monitoring - for example a 'dstat' is\n> a nice way to do that. This way you can find yout where's the\n> bottleneck (memory, I/O etc.)\n>\n> That's basically all I can think of right now.\n>\n\nThanks for the tips :)\n\n\nSteve\n",
"msg_date": "Wed, 17 Jan 2007 16:56:30 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "\n\nOn Wed, 17 Jan 2007, Benjamin Minshall wrote:\n\n>\n>> Building these indexes takes forever!\n>\n>> Anyway -- ANYTHING we can do to make this go faster is appreciated :) \n>> Here's some vital statistics:\n>\n>> - Machine is a 16 GB, 4 actual CPU dual-core opteron system using SCSI \n>> discs. The disc configuration seems to be a good one, it's the best of all \n>> the ones we've tested so far.\n>\n> What are your shared_buffers, work_mem, and maintenance_work_mem settings?\n>\n> maintenance_work_mem is used for CREATE INDEX, and with 16GB of memory in the \n> machine, maintenance_work_mem should be set to at least 1GB in my opinion.\n>\n\nshared_buffers = 8GB\nwork_mem = 256MB\nmaintenance_work_mem = 6GB\n\nSo that should be covered, unless I'm using too much memory and swapping. \nIt does look like it's swapping a little, but not too badly as far as I \ncan tell. I'm thinking of dialing back everything a bit, but I'm not \nreally sure what the heck to do :) It's all guessing for me right now.\n\n\nSteve\n",
"msg_date": "Wed, 17 Jan 2007 16:58:05 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "[email protected] (Steve) writes:\n> I'm wondering what we can do to make\n> this better if anything; would it be better to leave the indexes on?\n> It doesn't seem to be. \n\nDefinitely NOT. Generating an index via a bulk sort is a LOT faster\nthan loading data into an index one tuple at a time.\n\nWe saw a BIG increase in performance of Slony-I when, in version\n1.1.5, we added a modification that shuts off indexes during COPY and\nthen does a reindex. Conceivably, you might look at how Slony-I does\nthat, and try doing the same thing; it might well be faster than doing\na bunch of reindexes serially. (Or not...)\n\n> Would it be better to use INSERTs instead of copies? Doesn't seem\n> to be.\n\nI'd be mighty surprised.\n\n> - The load process itself takes about 6 gigs of memory, the rest is\n> free for postgres because this is basically all the machine does.\n\nThe settings you have do not seem conspicuously wrong in any way.\n\nThe one thought which comes to mind is that if you could turn this\ninto a *mostly* incremental change, that might help.\n\nThe thought:\n\n - Load the big chunk of data into a new table\n\n - Generate some minimal set of indices on the new table\n\n - Generate four queries that compare old to new:\n q1 - See which tuples are unchanged from yesterday to today\n q2 - See which tuples have been deleted from yesterday to today\n q3 - See which tuples have been added\n q4 - See which tuples have been modified\n\n If the \"unchanged\" set is extremely large, then you might see benefit\n to doing updates based on deleting the rows indicated by q2,\n inserting rows based on q3, and updating based on q4. \n\nIn principle, computing and applying those 4 queries might be quicker\nthan rebuilding from scratch.\n\nIn principle, applying q2, then q4, then vacuuming, then q3, ought to\nbe \"optimal.\"\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/linux.html\n\"A 'Cape Cod Salsa' just isn't right.\" -- Unknown\n",
"msg_date": "Wed, 17 Jan 2007 18:08:44 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
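A rough SQL sketch of the delete/update/insert sequence described above, assuming hypothetical summary_old and summary_new tables keyed on id with total and cnt payload columns; this illustrates the idea and is not code from the thread.

    -- q2: remove rows that disappeared since yesterday
    DELETE FROM summary_old
     WHERE NOT EXISTS (SELECT 1 FROM summary_new n
                        WHERE n.id = summary_old.id);

    -- q4: update rows whose payload changed
    UPDATE summary_old
       SET total = n.total, cnt = n.cnt
      FROM summary_new n
     WHERE n.id = summary_old.id
       AND (summary_old.total IS DISTINCT FROM n.total
            OR summary_old.cnt IS DISTINCT FROM n.cnt);

    VACUUM summary_old;

    -- q3: add brand-new rows
    INSERT INTO summary_old (id, total, cnt)
    SELECT n.id, n.total, n.cnt
      FROM summary_new n
     WHERE NOT EXISTS (SELECT 1 FROM summary_old o
                        WHERE o.id = n.id);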
{
"msg_contents": "On Wed, 2007-01-17 at 15:58, Steve wrote:\n> On Wed, 17 Jan 2007, Benjamin Minshall wrote:\n> \n> >\n> >> Building these indexes takes forever!\n> >\n> >> Anyway -- ANYTHING we can do to make this go faster is appreciated :) \n> >> Here's some vital statistics:\n> >\n> >> - Machine is a 16 GB, 4 actual CPU dual-core opteron system using SCSI \n> >> discs. The disc configuration seems to be a good one, it's the best of all \n> >> the ones we've tested so far.\n> >\n> > What are your shared_buffers, work_mem, and maintenance_work_mem settings?\n> >\n> > maintenance_work_mem is used for CREATE INDEX, and with 16GB of memory in the \n> > machine, maintenance_work_mem should be set to at least 1GB in my opinion.\n> >\n> \n> shared_buffers = 8GB\n> work_mem = 256MB\n> maintenance_work_mem = 6GB\n> \n> So that should be covered, unless I'm using too much memory and swapping. \n> It does look like it's swapping a little, but not too badly as far as I \n> can tell. I'm thinking of dialing back everything a bit, but I'm not \n> really sure what the heck to do :) It's all guessing for me right now.\n\nGenerally speaking, once you've gotten to the point of swapping, even a\nlittle, you've gone too far. A better approach is to pick some\nconservative number, like 10-25% of your ram for shared_buffers, and 1\ngig or so for maintenance work_mem, and then increase them while\nexercising the system, and measure the difference increasing them makes.\n\nIf going from 1G shared buffers to 2G shared buffers gets you a 10%\nincrease, then good. If going from 2G to 4G gets you a 1.2% increase,\nit's questionable. You should reach a point where throwing more\nshared_buffers stops helping before you start swapping. But you might\nnot.\n\nSame goes for maintenance work mem. Incremental changes, accompanied by\nreproduceable benchmarks / behaviour measurements are the way to\ndetermine the settings.\n\nNote that you can also vary those during different times of the day. \nyou can have maint_mem set to 1Gig during the day and crank it up to 8\ngig or something while loading data. Shared_buffers can't be changed\nwithout restarting the db though.\n",
"msg_date": "Wed, 17 Jan 2007 17:13:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
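As a concrete starting point for the incremental tuning approach above on a 16GB machine, something like the following could be used; the values are only illustrative and are meant to be adjusted by measurement, not taken as recommendations.

    # postgresql.conf -- conservative baseline, then benchmark upward
    shared_buffers = 2GB              # roughly 10-25% of RAM
    maintenance_work_mem = 1GB        # raise per-session for the nightly load
    work_mem = 64MB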
{
"msg_contents": "> How many rows do you typically load each night? If it is say less than 10%\n> of the total rows, then perhaps the suggestion in the next paragraph is\n> reasonable.\n\n \tHrm. It's very, very variable. I'd say it's more than 10% on \naverage, and it can actually be pretty close to 50-100% on certain days. \nOur data is based upon customer submissions, and normally it's a daily \nbasis kind of deal, but sometimes they'll resubmit their entire year on \ncertain deadlines to make sure it's all in. Now, we don't have to \noptimize for those deadlines, just the 'average daily load'. It's okay if \non those deadlines it takes forever, because that's understandable.\n\n \tHowever, I will look into this and see if I can figure out this \naverage value. This may be a valid idea, and I'll look some more at it.\n\n\nThanks!\n\nSteve\n\n> SO ... our goal here is to make this load process take less time. It\n>> seems the big part is building the big summary table; this big summary\n>> table is currently 9 million rows big. Every night, we drop the table,\n>> re-create it, build the 9 million rows of data (we use COPY to put hte\n>> data in when it's prepared, not INSERT), and then build the indexes on it\n>\n>\n> Perhaps, placing a trigger on the source table and building a \"change log\"\n> would be useful. For example, you could scan the change log (looking for\n> insert, update, and deletes) and integrate those changes into your summary\n> table. Obviously if you are using complex aggregates it may not be possible\n> to adjust the summary table, but if you are performing simple SUM's,\n> COUNT's, etc. then this is a workable solution.\n>\n>\n> -- \n> Chad\n> http://www.postgresqlforums.com/\n>\n",
"msg_date": "Wed, 17 Jan 2007 18:33:50 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "\nOn 17-Jan-07, at 3:41 PM, Steve wrote:\n\n> Hey there;\n>\n> I've been lurking on this list awhile, and I've been working with \n> postgres for a number of years so I'm not exactly new to this. But \n> I'm still having trouble getting a good balance of settings and I'd \n> like to see what other people think. We may also be willing to \n> hire a contractor to help tackle this problem if anyone is interested.\n>\n> I've got an application here that runs large (in terms of length -- \n> the queries have a lot of conditions in them) queries that can \n> potentially return millions of rows but on average probably return \n> tens of thousands of rows. It's read only for most of the day, and \n> pretty much all the queries except one are really fast.\n>\n> However, each night we load data from a legacy cobol system into \n> the SQL system and then we summarize that data to make the reports \n> faster. This load process is intensely insert/update driven but \n> also has a hefty amount of selects as well. This load process is \n> taking ever longer to complete.\n>\n>\n> SO ... our goal here is to make this load process take less time. \n> It seems the big part is building the big summary table; this big \n> summary table is currently 9 million rows big. Every night, we \n> drop the table, re-create it, build the 9 million rows of data (we \n> use COPY to put hte data in when it's prepared, not INSERT), and \n> then build the indexes on it -- of which there are many. \n> Unfortunately this table gets queried in a lot of different ways \n> and needs these indexes; also unfortunately, we have operator class \n> indexes to support both ASC and DESC sorting on columns so these \n> are for all intents and purposes duplicate but required under \n> Postgres 8.1 (we've recently upgraded to Postgres 8.2, is this \n> still a requirement?)\n>\n> Building these indexes takes forever! It's a long grind through \n> inserts and then building the indexes takes a hefty amount of time \n> too. (about 9 hours). Now, the application is likely part at \n> fault, and we're working to make it more efficient, but it has \n> nothing to do with the index building time. I'm wondering what we \n> can do to make this better if anything; would it be better to leave \n> the indexes on? It doesn't seem to be. Would it be better to use \n> INSERTs instead of copies? Doesn't seem to be.\n>\n>\n> Anyway -- ANYTHING we can do to make this go faster is \n> appreciated :) Here's some vital statistics:\n>\n> - Machine is a 16 GB, 4 actual CPU dual-core opteron system using \n> SCSI discs. The disc configuration seems to be a good one, it's \n> the best of all the ones we've tested so far.\n>\nThe basic problem here is simply writing all the data to disk. you \nare building 9M rows of data plus numerous index's. How much data are \nyou actually writing to the disk. Try looking at iostat while this is \ngoing on.\n\nMy guess is you are maxing out the disk write speed.\n> - The load process itself takes about 6 gigs of memory, the rest is \n> free for postgres because this is basically all the machine does.\n>\n> - If this was your machine and situation, how would you lay out the \n> emmory settings? What would you set the FSM to? Would you leave \n> teh bgwriter on or off? We've already got FSYNC off because \"data \n> integrity\" doesn't matter -- this stuff is religeously backed up \n> and we've got no problem reinstalling it. 
Besides, in order for \n> this machine to go down, data integrity of the DB is the least of \n> the worries :)\n>\n> Do wal_buffers/full_page_writes matter of FSYNC is off? If so, \n> what settings? What about checkpoints?\n>\nNot reallly, I'd have WAL buffers write to a ram disk\n> Any finally, any ideas on planner constants? Here's what I'm using:\n>\n> seq_page_cost = 0.5 # measured on an arbitrary \n> scale\n> random_page_cost = 1.0 # same scale as above\n> cpu_tuple_cost = 0.001 # same scale as above\n> cpu_index_tuple_cost = 0.0001 # same scale as above\n> cpu_operator_cost = 0.00025 # same scale as above\n> effective_cache_size = 679006\n>\n\nas a general rule make shared buffers about 25% of free mem, \neffective cache 75% but with a write intensive load like you have I \nthink the first thing to look at is write speed.\n> I really don't remember how I came up with that \n> effective_cache_size number....\n>\n>\n> Anyway... any advice would be appreciated :)\n>\n>\n> Steve\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Wed, 17 Jan 2007 18:43:44 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "> Would it be possible to just update the summary table, instead of recreating \n> it from scratch every night?\n\n \tHrm, I believe it's probably less work for the computer to do if \nit's rebuilt. Any number of rows may be changed during an update, not \nincluding additions, so I'd have to pull out what's changed and sync it \nwith what's in the summary table already. It'll be a lot more selects and \nprogram-side computation to save the big copy; it might work out, but I'd \nsay this would be my last ditch thing. :)\n\n>> Building these indexes takes forever! It's a long grind through inserts \n>> and then building the indexes takes a hefty amount of time too. (about 9 \n>> hours). Now, the application is likely part at fault, and we're working to \n>> make it more efficient, but it has nothing to do with the index building \n>> time. I'm wondering what we can do to make this better if anything; would \n>> it be better to leave the indexes on? It doesn't seem to be. Would it be \n>> better to use INSERTs instead of copies? Doesn't seem to be.\n>\n> Would it help if you created multiple indexes simultaneously? You have enough \n> CPU to do it. Is the index creation CPU or I/O bound? 9 million rows should \n> fit in 16 GB of memory, right?\n\n \tThis is a very very interesting idea. It looks like we're \nprobably not fully utilizing the machine for the index build, and this \ncould be the ticket for us. I'm going to go ahead and set up a test for \nthis and we'll see how it goes.\n\n> Can you describe the load process in more detail? What's it doing with the 6 \n> gigs?\n\n \tThere's two halves to the load process; the loader and the \nsummarizer. The loader is the part that takes 6 gigs; the summarizer only \ntakes a few hundred MEG.\n\n \tBasically we have these COBOL files that vary in size but \nare usually in the hundred's of MEG realm. These files contain new data \nOR updates to existing data. We load this data from the COBOL files in \nchunks, so that's not a place where we're burning a lot of memory.\n\n \tThe first thing we do is cache the list of COBOL ID codes that are \nalready in the DB; the COBOL ID codes are really long numeric strings, so \nwe use a sequenced integer primary key. The cache translates COBOL IDs to \nprimary keys, and this takes most of our memory nowadays. Our cache is \nfast, but it's kind of a memory hog. We're working on trimming that down, \nbut it's definitely faster than making a query for each COBOL ID.\n\n \tThe load is relatively fast and is considered \"acceptable\", and \nhas been relatively constant in speed. It's the summarizer that's brutal.\n\n \tThe summarizer produces 3 main summary tables and a few \nsecondaries that don't take much time to make. Two of them are smallish \nand not that big a deal, and the last one is the biggie that's 9 mil rows \nand growing. To produce the 9 mil row table, we query out the data in \ngroups, do our processing, and save that data to a series of text files \nthat are in blocks of 10,000 rows as I recall. We then copy each file \ninto the DB (there were some issues with copying in an entire 9 mil row \nfile in the past, which is why we don't use just one file -- those issues \nhave been fixed, but we didn't undo the change).\n\n> What's your maintenance_work_mem setting? It can make a big difference in \n> sorting the data for indexes.\n\n \t6 gigs currently. 
:)\n\n> If you could post the schema including the indexes, people might have more \n> ideas...\n\n \tI'll have to ask first, but I'll see if I can :)\n\nTalk to you later, and thanks for the info!\n\n\nSteve\n",
"msg_date": "Wed, 17 Jan 2007 18:55:20 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On 1/17/07, Steve <[email protected]> wrote:\n>\n> However, I will look into this and see if I can figure out this\n> average value. This may be a valid idea, and I'll look some more at it.\n\n\nIt must be, Oracle sells it pretty heavily as a data warehousing feature\n;). Oracle calls it a materialized view, and the basic premise is you have\na \"change\" log (called a materialized log by Oracle) and you have a job that\nruns through the change log and applies the changes to the materialized\nview.\n\nIf you are using aggregates, be careful and make sure you use simple forms\nof those aggregates. For example, if you are using an \"average\" function\nthen you should have two columns sum and count instead. Some aggregates are\ntoo complex and cannot be represented by this solution and you will find\nthat you can't update the summary tables, so definitely try to stay away\nfrom complex aggregates if you do not need them.\n\nHere is a link to a PL/pgSQL effort that tries to simulate materialized\nviews:\n\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nI don't know how complete it is, and it looks like there was a project\nstarted but has been abandoned for the last 3 years.\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/17/07, Steve <[email protected]> wrote:\n However, I will look into this and see if I can figure out thisaverage value. This may be a valid idea, and I'll look some more at it.It must be, Oracle sells it pretty heavily as a data warehousing feature ;). Oracle calls it a materialized view, and the basic premise is you have a \"change\" log (called a materialized log by Oracle) and you have a job that runs through the change log and applies the changes to the materialized view.\nIf you are using aggregates, be careful and make sure you use simple forms of those aggregates. For example, if you are using an \"average\" function then you should have two columns sum and count instead. Some aggregates are too complex and cannot be represented by this solution and you will find that you can't update the summary tables, so definitely try to stay away from complex aggregates if you do not need them.\nHere is a link to a PL/pgSQL effort that tries to simulate materialized views:http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\nI don't know how complete it is, and it looks like there was a project started but has been abandoned for the last 3 years.-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Wed, 17 Jan 2007 19:01:55 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
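A toy sketch of the change-log idea described above for a plain SUM/COUNT summary. Every table, column and function name here is made up for illustration, UPDATEs on the detail table are left out for brevity, and accounts that are new to the summary would still need a separate INSERT.

    CREATE TABLE detail_changes (op char(1), account_id int, amount numeric);

    CREATE OR REPLACE FUNCTION log_detail_change() RETURNS trigger AS $$
    BEGIN
      IF TG_OP = 'INSERT' THEN
        INSERT INTO detail_changes VALUES ('I', NEW.account_id, NEW.amount);
      ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO detail_changes VALUES ('D', OLD.account_id, OLD.amount);
      END IF;
      RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER detail_change_log
      AFTER INSERT OR DELETE ON detail
      FOR EACH ROW EXECUTE PROCEDURE log_detail_change();

    -- Periodically fold the log into the summary; sum and count are kept
    -- separately so an average can still be derived, as noted above.
    UPDATE summary SET
           total = total + d.delta_sum,
           cnt   = cnt   + d.delta_cnt
      FROM (SELECT account_id,
                   sum(CASE WHEN op = 'I' THEN amount ELSE -amount END) AS delta_sum,
                   sum(CASE WHEN op = 'I' THEN 1 ELSE -1 END)           AS delta_cnt
              FROM detail_changes
             GROUP BY account_id) d
     WHERE summary.account_id = d.account_id;

    -- Then clear the log (assumes the load job is the only writer at this point).
    TRUNCATE detail_changes;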
{
"msg_contents": "> Generally speaking, once you've gotten to the point of swapping, even a\n> little, you've gone too far. A better approach is to pick some\n> conservative number, like 10-25% of your ram for shared_buffers, and 1\n> gig or so for maintenance work_mem, and then increase them while\n> exercising the system, and measure the difference increasing them makes.\n>\n> If going from 1G shared buffers to 2G shared buffers gets you a 10%\n> increase, then good. If going from 2G to 4G gets you a 1.2% increase,\n> it's questionable. You should reach a point where throwing more\n> shared_buffers stops helping before you start swapping. But you might\n> not.\n>\n> Same goes for maintenance work mem. Incremental changes, accompanied by\n> reproduceable benchmarks / behaviour measurements are the way to\n> determine the settings.\n>\n> Note that you can also vary those during different times of the day.\n> you can have maint_mem set to 1Gig during the day and crank it up to 8\n> gig or something while loading data. Shared_buffers can't be changed\n> without restarting the db though.\n>\n\nI'm currently benchmarking various configuration adjustments. Problem is \nthese tests take a really long time because I have to run the load \nprocess... which is like a 9 hour deal. That's why I'm asking for advice \nhere, because there's a lot of variables here and it's really time costly \nto test :)\n\nI'm still working on the benchmarkings and by Friday I should have some \ninteresting statistics to work with and maybe help figure out what's going \non.\n\n\nThanks!\n\nSteve\n",
"msg_date": "Wed, 17 Jan 2007 19:27:22 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "> The thought:\n>\n> - Load the big chunk of data into a new table\n>\n> - Generate some minimal set of indices on the new table\n>\n> - Generate four queries that compare old to new:\n> q1 - See which tuples are unchanged from yesterday to today\n> q2 - See which tuples have been deleted from yesterday to today\n> q3 - See which tuples have been added\n> q4 - See which tuples have been modified\n>\n> If the \"unchanged\" set is extremely large, then you might see benefit\n> to doing updates based on deleting the rows indicated by q2,\n> inserting rows based on q3, and updating based on q4.\n>\n> In principle, computing and applying those 4 queries might be quicker\n> than rebuilding from scratch.\n>\n> In principle, applying q2, then q4, then vacuuming, then q3, ought to\n> be \"optimal.\"\n\n\n \tThis looks like an interesting idea, and I'm going to take a look \nat how feasible it'll be to impletement. I may be able to combine this \nwith Mr. Wagner's idea to make a much more efficient system overall. It's \ngoing to be a pretty big programming task, but I've a feeling this \nsummarizer thing may just need to be re-written with a smarter system \nlike this to get something faster.\n\n\nThanks!\n\nSteve\n",
"msg_date": "Wed, 17 Jan 2007 19:50:11 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On Wed, 2007-01-17 at 18:27, Steve wrote:\n> > Generally speaking, once you've gotten to the point of swapping, even a\n> > little, you've gone too far. A better approach is to pick some\n> > conservative number, like 10-25% of your ram for shared_buffers, and 1\n> > gig or so for maintenance work_mem, and then increase them while\n> > exercising the system, and measure the difference increasing them makes.\n> >\n> > If going from 1G shared buffers to 2G shared buffers gets you a 10%\n> > increase, then good. If going from 2G to 4G gets you a 1.2% increase,\n> > it's questionable. You should reach a point where throwing more\n> > shared_buffers stops helping before you start swapping. But you might\n> > not.\n> >\n> > Same goes for maintenance work mem. Incremental changes, accompanied by\n> > reproduceable benchmarks / behaviour measurements are the way to\n> > determine the settings.\n> >\n> > Note that you can also vary those during different times of the day.\n> > you can have maint_mem set to 1Gig during the day and crank it up to 8\n> > gig or something while loading data. Shared_buffers can't be changed\n> > without restarting the db though.\n> >\n> \n> I'm currently benchmarking various configuration adjustments. Problem is \n> these tests take a really long time because I have to run the load \n> process... which is like a 9 hour deal. That's why I'm asking for advice \n> here, because there's a lot of variables here and it's really time costly \n> to test :)\n> \n> I'm still working on the benchmarkings and by Friday I should have some \n> interesting statistics to work with and maybe help figure out what's going \n> on.\n\nYou can probably take a portion of what you're loading and make a\nbenchmark of the load process that is repeatable (same data, size,\netc...) each time, but only takes 30 minutes to an hour to run each\ntime. shortens your test iteration AND makes it reliably repeatable.\n",
"msg_date": "Thu, 18 Jan 2007 09:38:32 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On 1/17/07, Steve <[email protected]> wrote:\n> Hey there;\n> I've been lurking on this list awhile, and I've been working with postgres\n> for a number of years so I'm not exactly new to this. But I'm still\n> having trouble getting a good balance of settings and I'd like to see what\n> other people think. We may also be willing to hire a contractor to help\n> tackle this problem if anyone is interested.\n\nI happen to be something of a cobol->sql expert, if you are interested\nin some advice you can contact me off-list. I converted an enterprise\ncobol (in acucobol) app to Postgresql by plugging pg into the cobol\nsystem via custom c++ isam driver.\n\n> I've got an application here that runs large (in terms of length -- the\n> queries have a lot of conditions in them) queries that can potentially\n> return millions of rows but on average probably return tens of thousands\n> of rows. It's read only for most of the day, and pretty much all the\n> queries except one are really fast.\n\nIf it's just one query I think I'd focus on optimizing that query, not\n.conf settings. In my opinion .conf tuning (a few gotchas aside)\ndoesn't really get you all that much.\n\n> However, each night we load data from a legacy cobol system into the SQL\n> system and then we summarize that data to make the reports faster. This\n> load process is intensely insert/update driven but also has a hefty\n> amount of selects as well. This load process is taking ever longer to\n> complete.\n>\n>\n> SO ... our goal here is to make this load process take less time. It\n> seems the big part is building the big summary table; this big summary\n> table is currently 9 million rows big. Every night, we drop the table,\n> re-create it, build the 9 million rows of data (we use COPY to put hte\n> data in when it's prepared, not INSERT), and then build the indexes on it\n> -- of which there are many. Unfortunately this table gets queried\n> in a lot of different ways and needs these indexes; also unfortunately, we\n> have operator class indexes to support both ASC and DESC sorting on\n\nI have some very specific advice here. Check out row-wise comparison\nfeature introduced in 8.2.\n\n> columns so these are for all intents and purposes duplicate but required\n> under Postgres 8.1 (we've recently upgraded to Postgres 8.2, is this still\n> a requirement?)\n\n> Building these indexes takes forever! It's a long grind through inserts\n> and then building the indexes takes a hefty amount of time too. (about 9\n> hours). Now, the application is likely part at fault, and we're working\n> to make it more efficient, but it has nothing to do with the index\n> building time. I'm wondering what we can do to make this better if\n> anything; would it be better to leave the indexes on? It doesn't seem to\n> be. Would it be better to use INSERTs instead of copies? Doesn't seem to\n\nno.\n\nprobably any optimization strategies would focus on reducing the\namount of data you had to load.\n\nmerlin\n",
"msg_date": "Fri, 19 Jan 2007 14:16:35 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
}
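The 8.2 row-wise comparison feature mentioned above looks roughly like the sketch below; the table and column names are only an example of the pattern (resuming a scan after the last key seen), not anything from the poster's system.

    SELECT *
      FROM big_summary
     WHERE (account_id, report_date) > (12345, '2007-01-01')
     ORDER BY account_id, report_date
     LIMIT 1000;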
] |
[
{
"msg_contents": "Hi,\n \nWe are having 3 tables;\n1. persons <-- Base table and no data will be inserted in this table.\n2. Person1 <-- Inherited table from persons all data will be inserted in this table.\n3. PersonalInfo <-- which is storing all personal information of the persons and is having the foreign key relationship with the persons table.\n \nWhen we try to insert the data in the personalInfo table it is throwing the error stating the primary key does not contain the given value. But, if I try to select from the persons table it is showing the records from its inherited tables as well. Can anybody tell me what might be the problem here? Or else any help regarding the same will be of very much help.\n \nFollowing is the table structure;\n---------------------------------------------------------------------\nCreate Table persons (\nname varchar,\nage int,\ndob varchar,\nconstraint pKey primary key(name)\n);\n create table person1 ( ) inherits(persons);\n \nCreate table personalInfo (\nname varchar,\ncontact_id int,\ncontact_addr varchar,\nconstraint cKey primary key(contact_id),\nconstraint fKey foreign key(name) references persons(name)\n);\n--------------------------------------------------------------------------------------------------\n \nThanks In Advance, \nRamachandra B.S. \n\n\n\nThe information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. \n\nWARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email.\n \nwww.wipro.com\n\n\nHi,\n \nWe are having 3 tables;\n1. persons <-- Base table and no data \nwill be inserted in this table.\n2. Person1 <-- Inherited table from \npersons all data will be inserted in this table.\n3. PersonalInfo <-- which is \nstoring all personal information of the persons and is having the foreign key \nrelationship with the persons table.\n \nWhen we try to insert the data in the \npersonalInfo table it is throwing the error stating the primary \nkey does not contain the given value. But, if I try to select \nfrom the persons table it is showing the records from its inherited tables as \nwell. Can anybody tell me what might be the problem \nhere? Or else any help regarding the same will be of very much \nhelp.\n \nFollowing is the table \nstructure;\n---------------------------------------------------------------------\nCreate Table persons (\nname varchar,\nage int,\ndob varchar,\nconstraint pKey primary \nkey(name)\n);\n create table person1 ( ) \ninherits(persons);\n \nCreate table personalInfo (\nname varchar,\ncontact_id int,\ncontact_addr varchar,\nconstraint cKey primary \nkey(contact_id),\nconstraint fKey foreign key(name) \nreferences persons(name)\n);\n--------------------------------------------------------------------------------------------------\n \n\nThanks In Advance, \nRamachandra B.S.",
"msg_date": "Wed, 17 Jan 2007 16:12:32 +0530",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Table Inheritence and Partioning"
},
{
"msg_contents": "Currently foreign keys don't work that way. You will need your data to be in \ntable persons if you want the foreign key to work correctly. Otherwise you \ncould create your own trigger to ensure the foreign key restriction you need.\n\nI'm no expert at all and it seems there are non trivial issues that have to be \nsolved in order to make foreign keys behave in the way you'd expect.\n\nA Dimecres 17 Gener 2007 11:42, [email protected] va escriure:\n> Hi,\n>\n> We are having 3 tables;\n> 1. persons <-- Base table and no data will be inserted in this table.\n> 2. Person1 <-- Inherited table from persons all data will be inserted in\n> this table. 3. PersonalInfo <-- which is storing all personal information\n> of the persons and is having the foreign key relationship with the persons\n> table.\n>\n> When we try to insert the data in the personalInfo table it is throwing the\n> error stating the primary key does not contain the given value. But, if I\n> try to select from the persons table it is showing the records from its\n> inherited tables as well. Can anybody tell me what might be the problem\n> here? Or else any help regarding the same will be of very much help.\n>\n> Following is the table structure;\n> ---------------------------------------------------------------------\n> Create Table persons (\n> name varchar,\n> age int,\n> dob varchar,\n> constraint pKey primary key(name)\n> );\n> create table person1 ( ) inherits(persons);\n>\n> Create table personalInfo (\n> name varchar,\n> contact_id int,\n> contact_addr varchar,\n> constraint cKey primary key(contact_id),\n> constraint fKey foreign key(name) references persons(name)\n> );\n> ---------------------------------------------------------------------------\n>-----------------------\n>\n> Thanks In Advance,\n> Ramachandra B.S.\n>\n>\n>\n> The information contained in this electronic message and any attachments to\n> this message are intended for the exclusive use of the addressee(s) and may\n> contain proprietary, confidential or privileged information. If you are not\n> the intended recipient, you should not disseminate, distribute or copy this\n> e-mail. Please notify the sender immediately and destroy all copies of this\n> message and any attachments.\n>\n> WARNING: Computer viruses can be transmitted via email. The recipient\n> should check this email and any attachments for the presence of viruses.\n> The company accepts no liability for any damage caused by any virus\n> transmitted by this email.\n>\n> www.wipro.com\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. 
En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n",
"msg_date": "Wed, 17 Jan 2007 12:01:51 +0100",
"msg_from": "Albert Cervera Areny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Inheritence and Partioning"
},
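A rough sketch of the home-grown trigger suggested above, using the table definitions from the original post. It is not a full foreign key (it does nothing when parent rows are deleted or renamed), and the original FOREIGN KEY constraint would be dropped in favour of it.

    CREATE OR REPLACE FUNCTION check_person_exists() RETURNS trigger AS $$
    BEGIN
      -- A plain SELECT on persons also sees rows in its child tables.
      PERFORM 1 FROM persons WHERE name = NEW.name;
      IF NOT FOUND THEN
        RAISE EXCEPTION 'person "%" not found in persons or its children', NEW.name;
      END IF;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER personalinfo_name_check
      BEFORE INSERT OR UPDATE ON personalInfo
      FOR EACH ROW EXECUTE PROCEDURE check_person_exists();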
{
"msg_contents": "On Wed, 2007-01-17, [email protected] wrote\n\n> We are having 3 tables;\n> 1. persons <-- Base table and no data will be inserted in this table.\n> 2. Person1 <-- Inherited table from persons all data will be inserted\n> in this table.\n> 3. PersonalInfo <-- which is storing all personal information of the\n> persons and is having the foreign key relationship with the persons\n> table.\n> \n> When we try to insert the data in the personalInfo table it is\n> throwing the error stating the primary key does not contain the given\n> value. But, if I try to select from the persons table it is showing\n> the records from its inherited tables as well. Can anybody tell me\n> what might be the problem here? \n\nhttp://www.postgresql.org/docs/8.2/static/ddl-inherit.html\n\nanswers your questions, I believe.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Jan 2007 09:40:22 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Inheritence and Partioning"
}
] |
[
{
"msg_contents": "Hi. I sent this email to the list last week, but for some reason I never\nsaw it show up on the list. I apologize if it appears twice now.\n \nWe recently bought two Dell PowerEdge 2900 servers each with a 2.3 Ghz 5140\nXeon, 4 Gigs of RAM, 8 15k SAS drives, and a PERC 5/i raid controller with\n256 megs of battery backed cache. Our database is more of an OLTP type, and\neverything I've read says that 10 would be better, but I thought I would\ntest both 10 and 5. The other guys here wanted to run the Raid 5 with a hot\nspare, so the RAID 10 uses 8 disks and RAID 5 uses 7 plus the hot spare. We\nare running CentOS 4.4. I started testing with bonnie++ 1.03a (bonnie++ -s\n16g -x 3 -q ) and I got these numbers. These tests were done with a RAID\nstripe size of 64KB, no RAID controller read ahead, and write back caching\nenabled.\n \n \nRAID 10\nname file_size putc putc_cpu put_block put_block_cpu rewrite rewrite_cpu \nRaid10 16G 50546 94 149590 34 81799 17 \nRaid10 16G 50722 95 139080 31 82987 17 \nRaid10 16G 50526 94 148433 33 82278 17 \n\n getc getc_cpu get_block get_block_cpu seeks seeks_cpu num_files \n50678 84 236858 30 602.7 1 16 \n50845 85 240024 31 594.8 1 16 \n50921 85 240238 31 547.5 1 16 \n\n \nRaid 5 \nname file_size putc putc_cpu put_block put_block_cpu rewrite rewrite_cpu \nRaid5 16G 51253 95 176743 40 87493 19 \nRaid5 16G 51349 96 182828 41 89990 19 \nRaid5 16G 51627 96 183772 42 91088 20 \n\n getc getc_cpu get_block get_block_cpu seeks seeks_cpu num_files \n50750 83 232967 29 378.5 0 16 \n51387 84 237049 31 385.0 0 16 \n51241 84 236493 30 391.8 0 16 \n\n \nI was somewhat surprised that the RAID 5 was equal or better on almost\neverything. I assume this must be because it has 6 data disks as opposed to\n4 data disks. The one number that I find strange is the seeks/sec. Is\nthere any reason why a RAID 5 would not be able to seek as quickly as a RAID\n10? Or are the numbers from the Raid 10 bogus? I've also done some testing\nwith Postgres 8.2.1 and real world queries, and the two machines are\nbasically performing the same, but those seek numbers kinda bug me.\n\n\nThanks,\n\n\nDave Dutcher\n \n\n",
"msg_date": "Wed, 17 Jan 2007 09:57:16 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Raid 10 or Raid 5 on Dell PowerEdge"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nhow can I monitor the size of the transaction log files using SQL Statements?\n\n \n\nBest Regards\n\n \n\nSilvio\n\n \n\nSilvio Ziegelwanger\nResearch & Development\n\nFabalabs Software GmbH\nHonauerstraße 4\n4020 Linz\nAustria\nTel: [43] (732) 60 61 62\nFax: [43] (732) 60 61 62-609\nE-Mail: [email protected]\nwww.fabasoft.com <http://www.fabasoft.com/> \n\n <http://www.fabasoft.at/fabasofthomepage2006/fabasoftgruppe/veranstaltungen/fabasoftegovdays07.htm> \n\nFabasoft egovday 07\nTrends im E-Government, Innovationen für eine \nzukunftsorientierte Verwaltung und digitale Geschäftsprozesse.\n\n23. Januar 2007, Bern\n\n30. Januar 2007, Berlin\n\n6. Februar 2007, Bonn",
"msg_date": "Wed, 17 Jan 2007 17:58:03 +0100",
"msg_from": "\"Ziegelwanger, Silvio\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Monitoring Transaction Log size"
},
{
"msg_contents": "Ziegelwanger, Silvio wrote:\n> Hi,\n> \n> \n> \n> how can I monitor the size of the transaction log files using SQL Statements?\n\nYou can't. You would have to write a custom function to heck the size of\nthe xlog directory.\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> \n> \n> Best Regards\n> \n> \n> \n> Silvio\n> \n> \n> \n> Silvio Ziegelwanger\n> Research & Development\n> \n> Fabalabs Software GmbH\n> Honauerstra�e 4\n> 4020 Linz\n> Austria\n> Tel: [43] (732) 60 61 62\n> Fax: [43] (732) 60 61 62-609\n> E-Mail: [email protected]\n> www.fabasoft.com <http://www.fabasoft.com/> \n> \n> <http://www.fabasoft.at/fabasofthomepage2006/fabasoftgruppe/veranstaltungen/fabasoftegovdays07.htm> \n> \n> Fabasoft egovday 07\n> Trends im E-Government, Innovationen f�r eine \n> zukunftsorientierte Verwaltung und digitale Gesch�ftsprozesse.\n> \n> 23. Januar 2007, Bern\n> \n> 30. Januar 2007, Berlin\n> \n> 6. Februar 2007, Bonn\n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Wed, 17 Jan 2007 09:57:10 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Transaction Log size"
},
{
"msg_contents": "archive_timeout (came in ver 8.2) might help you with customizing the size\nfor log files.\n\n-----------------\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\n\n\nOn 1/17/07, Joshua D. Drake <[email protected]> wrote:\n>\n> Ziegelwanger, Silvio wrote:\n> > Hi,\n> >\n> >\n> >\n> > how can I monitor the size of the transaction log files using SQL\n> Statements?\n>\n> You can't. You would have to write a custom function to heck the size of\n> the xlog directory.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n> >\n> >\n> >\n> > Best Regards\n> >\n> >\n> >\n> > Silvio\n> >\n> >\n> >\n> > Silvio Ziegelwanger\n> > Research & Development\n> >\n> > Fabalabs Software GmbH\n> > Honauerstraße 4\n> > 4020 Linz\n> > Austria\n> > Tel: [43] (732) 60 61 62\n> > Fax: [43] (732) 60 61 62-609\n> > E-Mail: [email protected]\n> > www.fabasoft.com <http://www.fabasoft.com/>\n> >\n> > <\n> http://www.fabasoft.at/fabasofthomepage2006/fabasoftgruppe/veranstaltungen/fabasoftegovdays07.htm\n> >\n> >\n> > Fabasoft egovday 07\n> > Trends im E-Government, Innovationen für eine\n> > zukunftsorientierte Verwaltung und digitale Geschäftsprozesse.\n> >\n> > 23. Januar 2007, Bern\n> >\n> > 30. Januar 2007, Berlin\n> >\n> > 6. Februar 2007, Bonn\n> >\n> >\n> >\n> >\n>\n>\n> --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\narchive_timeout (came in ver 8.2) might help you with customizing the size for log files.-----------------Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 1/17/07, Joshua D. Drake <[email protected]> wrote:\nZiegelwanger, Silvio wrote:> Hi,>>>> how can I monitor the size of the transaction log files using SQL Statements?You can't. You would have to write a custom function to heck the size of\nthe xlog directory.Sincerely,Joshua D. Drake>>>> Best Regards>>>> Silvio>>>> Silvio Ziegelwanger> Research & Development\n>> Fabalabs Software GmbH> Honauerstraße 4> 4020 Linz> Austria> Tel: [43] (732) 60 61 62> Fax: [43] (732) 60 61 62-609> E-Mail: \[email protected]> www.fabasoft.com <http://www.fabasoft.com/>>> <\nhttp://www.fabasoft.at/fabasofthomepage2006/fabasoftgruppe/veranstaltungen/fabasoftegovdays07.htm>>> Fabasoft egovday 07> Trends im E-Government, Innovationen für eine> zukunftsorientierte Verwaltung und digitale Geschäftsprozesse.\n>> 23. Januar 2007, Bern>> 30. Januar 2007, Berlin>> 6. Februar 2007, Bonn>>>>-- === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donatePostgreSQL Replication: http://www.commandprompt.com/products/\n---------------------------(end of broadcast)---------------------------TIP 7: You can help support the PostgreSQL project by donating at \nhttp://www.postgresql.org/about/donate",
"msg_date": "Wed, 17 Jan 2007 23:03:18 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Transaction Log size"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Ziegelwanger, Silvio wrote:\n>> Hi,\n>>\n>> \n>>\n>> how can I monitor the size of the transaction log files using SQL Statements?\n> \n> You can't. You would have to write a custom function to heck the size of\n> the xlog directory.\n\nwel in recent versions of pg it should be pretty easy to do that from\nwithin SQL by using pg_ls_dir() and pg_stat_file().\n\nmaybe something(rough sketch) along the line of:\n\nselect sum((pg_stat_file('pg_xlog/' || file)).size) from\npg_ls_dir('pg_xlog') as file where file ~ '^[0-9A-F]';\n\nmight do the trick\n\n\nStefan\n",
"msg_date": "Wed, 17 Jan 2007 19:29:09 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Transaction Log size"
},
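If the one-off query above does the trick, it can be wrapped up for convenience. A small sketch follows; the function name is arbitrary, pg_ls_dir/pg_stat_file require superuser rights, and on later releases the directory is pg_wal rather than pg_xlog.

    CREATE OR REPLACE FUNCTION pg_xlog_size() RETURNS bigint AS $$
      SELECT sum((pg_stat_file('pg_xlog/' || file)).size)::bigint
        FROM pg_ls_dir('pg_xlog') AS file
       WHERE file ~ '^[0-9A-F]{24}$';
    $$ LANGUAGE sql;

    SELECT pg_size_pretty(pg_xlog_size());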
{
"msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n> Ziegelwanger, Silvio wrote:\n>> how can I monitor the size of the transaction log files using SQL Statements?\n\n> You can't. You would have to write a custom function to heck the size of\n> the xlog directory.\n\nPerhaps more to the point, why do you think you need to? pg_xlog should\nstay pretty level at approximately 2*checkpoint_segments xlog files\n(once it's ramped up to that size, which might take a heavy burst of\nactivity if checkpoint_segments is large).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Jan 2007 17:38:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Transaction Log size "
},
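For a rough sanity check of that steady-state figure from SQL (segments are 16MB each; this is only an approximation, not an exact formula):

    SELECT setting::int * 2 * 16 AS approx_pg_xlog_mb
      FROM pg_settings
     WHERE name = 'checkpoint_segments';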
{
"msg_contents": "On Wed, 2007-01-17 at 23:03 +0500, Shoaib Mir wrote:\n> archive_timeout (came in ver 8.2) might help you with customizing the\n> size for log files.\n\nI'm not sure that it will.\n\nIf anything it could produce more log files, which could lead to a\nbacklog if the archive_command isn't functioning for some reason.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jan 2007 09:55:59 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Transaction Log size"
},
{
"msg_contents": "Suggested in case he wants to do a log switch after certain amount of\ntime...\n\n-----------\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 1/18/07, Simon Riggs <[email protected]> wrote:\n>\n> On Wed, 2007-01-17 at 23:03 +0500, Shoaib Mir wrote:\n> > archive_timeout (came in ver 8.2) might help you with customizing the\n> > size for log files.\n>\n> I'm not sure that it will.\n>\n> If anything it could produce more log files, which could lead to a\n> backlog if the archive_command isn't functioning for some reason.\n>\n> --\n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>\n\nSuggested in case he wants to do a log switch after certain amount of time...-----------Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 1/18/07, Simon Riggs <[email protected]> wrote:\nOn Wed, 2007-01-17 at 23:03 +0500, Shoaib Mir wrote:> archive_timeout (came in ver 8.2) might help you with customizing the> size for log files.I'm not sure that it will.If anything it could produce more log files, which could lead to a\nbacklog if the archive_command isn't functioning for some reason.-- Simon Riggs EnterpriseDB http://www.enterprisedb.com",
"msg_date": "Thu, 18 Jan 2007 15:19:17 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring Transaction Log size"
}
] |
[
{
"msg_contents": "\nDoesn't sound like you want postgres at all.... Try mysql.\n\n\n\n-----Original Message-----\nFrom: \"Steve\" <[email protected]>\nTo: [email protected]\nSent: 1/17/2007 2:41 PM\nSubject: [PERFORM] Configuration Advice\n\nHey there;\n\nI've been lurking on this list awhile, and I've been working with postgres \nfor a number of years so I'm not exactly new to this. But I'm still \nhaving trouble getting a good balance of settings and I'd like to see what \nother people think. We may also be willing to hire a contractor to help \ntackle this problem if anyone is interested.\n\nI've got an application here that runs large (in terms of length -- the \nqueries have a lot of conditions in them) queries that can potentially \nreturn millions of rows but on average probably return tens of thousands \nof rows. It's read only for most of the day, and pretty much all the \nqueries except one are really fast.\n\nHowever, each night we load data from a legacy cobol system into the SQL \nsystem and then we summarize that data to make the reports faster. This \nload process is intensely insert/update driven but also has a hefty \namount of selects as well. This load process is taking ever longer to \ncomplete.\n\n\nSO ... our goal here is to make this load process take less time. It \nseems the big part is building the big summary table; this big summary \ntable is currently 9 million rows big. Every night, we drop the table, \nre-create it, build the 9 million rows of data (we use COPY to put hte \ndata in when it's prepared, not INSERT), and then build the indexes on it \n-- of which there are many. Unfortunately this table gets queried \nin a lot of different ways and needs these indexes; also unfortunately, we \nhave operator class indexes to support both ASC and DESC sorting on \ncolumns so these are for all intents and purposes duplicate but required \nunder Postgres 8.1 (we've recently upgraded to Postgres 8.2, is this still \na requirement?)\n\nBuilding these indexes takes forever! It's a long grind through inserts \nand then building the indexes takes a hefty amount of time too. (about 9 \nhours). Now, the application is likely part at fault, and we're working \nto make it more efficient, but it has nothing to do with the index \nbuilding time. I'm wondering what we can do to make this better if \nanything; would it be better to leave the indexes on? It doesn't seem to \nbe. Would it be better to use INSERTs instead of copies? Doesn't seem to \nbe.\n\n\nAnyway -- ANYTHING we can do to make this go faster is appreciated :) \nHere's some vital statistics:\n\n- Machine is a 16 GB, 4 actual CPU dual-core opteron system using SCSI \ndiscs. The disc configuration seems to be a good one, it's the best of \nall the ones we've tested so far.\n\n- The load process itself takes about 6 gigs of memory, the rest is free \nfor postgres because this is basically all the machine does.\n\n- If this was your machine and situation, how would you lay out the emmory \nsettings? What would you set the FSM to? Would you leave teh bgwriter on \nor off? We've already got FSYNC off because \"data integrity\" doesn't \nmatter -- this stuff is religeously backed up and we've got no problem \nreinstalling it. Besides, in order for this machine to go down, data \nintegrity of the DB is the least of the worries :)\n\nDo wal_buffers/full_page_writes matter of FSYNC is off? If so, what \nsettings? What about checkpoints?\n\nAny finally, any ideas on planner constants? 
Here's what I'm using:\n\nseq_page_cost = 0.5 # measured on an arbitrary scale\nrandom_page_cost = 1.0 # same scale as above\ncpu_tuple_cost = 0.001 # same scale as above\ncpu_index_tuple_cost = 0.0001 # same scale as above\ncpu_operator_cost = 0.00025 # same scale as above\neffective_cache_size = 679006\n\nI really don't remember how I came up with that effective_cache_size \nnumber....\n\n\nAnyway... any advice would be appreciated :)\n\n\nSteve\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Wed, 17 Jan 2007 15:24:02 -0600",
"msg_from": "Adam Rich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "Adam Rich wrote:\n> Doesn't sound like you want postgres at all.... Try mysql.\n\nCould you explain your reason for suggesting mysql? I'm simply curious \nwhy you would offer that as a solution.\n",
"msg_date": "Wed, 17 Jan 2007 13:29:23 -0800",
"msg_from": "Bricklen Anderson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "> From: \"Steve\" <[email protected]>\n> To: [email protected]\n> Sent: 1/17/2007 2:41 PM\n> Subject: [PERFORM] Configuration Advice\n> \n> SO ... our goal here is to make this load process take less time. It \n> seems the big part is building the big summary table; this big summary \n> table is currently 9 million rows big. Every night, we drop the table, \n> re-create it, build the 9 million rows of data (we use COPY to put hte \n> data in when it's prepared, not INSERT), and then build the indexes on it \n> -- of which there are many. Unfortunately this table gets queried \n> in a lot of different ways and needs these indexes; also unfortunately, we \n> have operator class indexes to support both ASC and DESC sorting on \n> columns so these are for all intents and purposes duplicate but required \n> under Postgres 8.1 (we've recently upgraded to Postgres 8.2, is this still \n> a requirement?)\n\nNote that you only need to have the ASC and DESC versions of opclasses when\nyou are going to use multicolumn indexes with some columns in ASC order and\nsome in DESC order. For columns used by themselves in an index, you don't\nneed to do this, no matter which order you are sorting on.\n",
"msg_date": "Wed, 17 Jan 2007 16:12:29 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
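To illustrate the point above with a hypothetical table t(a, b): a single-column index needs no special opclass for descending sorts, because it can simply be scanned backwards; only mixed-direction multicolumn sorts needed the custom reverse-sort opclasses on 8.1.

    CREATE INDEX t_a_idx ON t (a);

    -- Can use t_a_idx scanned backwards; no DESC opclass required:
    SELECT * FROM t ORDER BY a DESC;

    -- The mixed-direction case that motivated the custom opclasses:
    SELECT * FROM t ORDER BY a ASC, b DESC;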
{
"msg_contents": "Bricklen Anderson wrote:\n> Adam Rich wrote:\n>> Doesn't sound like you want postgres at all.... Try mysql.\n> \n> Could you explain your reason for suggesting mysql? I'm simply curious\n> why you would offer that as a solution.\n\nHe sound a little trollish to me. I would refer to the other actually\nhelpful posts on the topic.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Wed, 17 Jan 2007 14:14:59 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "\nSorry if this came off sounding trollish.... All databases have their\nstrengths & weaknesses, and I feel the responsible thing to do is\nexploit\nthose strengths where possible, rather than expend significant time and\neffort coaxing one database to do something it wasn't designed to.\nThere's just no such thing as \"one size fits all\".\n\nI have professional experience with MS-SQL, Oracle, MySQL, and Postgres.\nand the scenario described sounds more ideal for MySQL & MyISAM than \nanything else:\n\n1) No concerns for data loss (turning fsync & full_page_writes off)\nsince the data can be reloaded\n\n2) No need for MVCC or transactions, since the database is read-only\n\n3) No worries about lock contention\n\n4) Complex queries that might take advantage of the MySQL \"Query Cache\"\nsince the base data never changes\n\n5) Queries that might obtain data directly from indexes without having\nto touch tables (again, no need for MVCC)\n\nIf loading in the base data and creating the summary table is taking \na lot of time, using MySQL with MyISAM tables (and binary logging\ndisabled) should provide significant time savings, and it doesn't \nsound like there's any concerns for the downsides. \n\nYes, postgresql holds an edge over MySQL for heavy OLTP applications,\nI use it for that and I love it. But for the scenario the original \nposter is asking about, MySQL/MyISAM is ideal. \n\n\n\n\n-----Original Message-----\nFrom: Bricklen Anderson [mailto:[email protected]] \nSent: Wednesday, January 17, 2007 3:29 PM\nTo: Adam Rich\nCc: [email protected]\nSubject: Re: [PERFORM] Configuration Advice\n\n\nAdam Rich wrote:\n> Doesn't sound like you want postgres at all.... Try mysql.\n\nCould you explain your reason for suggesting mysql? I'm simply curious \nwhy you would offer that as a solution.\n\n",
"msg_date": "Wed, 17 Jan 2007 17:37:48 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "> Note that you only need to have the ASC and DESC versions of opclasses when\n> you are going to use multicolumn indexes with some columns in ASC order and\n> some in DESC order. For columns used by themselves in an index, you don't\n> need to do this, no matter which order you are sorting on.\n>\n\nYeah, I assumed the people 'in the know' on this kind of stuff would know \nthe details of why I have to have those, and therefore I wouldn't have to \ngo into detail as to why -- but you put your finger right on it. :) \nUnfortunately the customer this is for wants certain columns joined at the \nhip for querying and sorting, and this method was a performance godsend \nwhen we implemented it (with a C .so library, not using SQL in our \nopclasses or anything like that).\n\n\nSteve\n",
"msg_date": "Wed, 17 Jan 2007 18:58:52 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "This would probably also be another last ditch option. :) Our stuff is \ndesigned to pretty much work on any DB but there are postgres specific \nthings in there... not to mention ramp up time on MySQL. I mean, I know \nMySQL from a user point of view and in a very limited way \nadministratively, but I'd be back to square one on learning performance \nstuff :)\n\nAnyway -- I'll listen to what people have to say, and keep this in mind. \nIt would be an interesting test to take parts of the process and compare \nat least, if not converting the whole thing.\n\ntalk to you later,\n\nSteve\n\nOn Wed, 17 Jan 2007, Adam Rich wrote:\n\n>\n> Sorry if this came off sounding trollish.... All databases have their\n> strengths & weaknesses, and I feel the responsible thing to do is\n> exploit\n> those strengths where possible, rather than expend significant time and\n> effort coaxing one database to do something it wasn't designed to.\n> There's just no such thing as \"one size fits all\".\n>\n> I have professional experience with MS-SQL, Oracle, MySQL, and Postgres.\n> and the scenario described sounds more ideal for MySQL & MyISAM than\n> anything else:\n>\n> 1) No concerns for data loss (turning fsync & full_page_writes off)\n> since the data can be reloaded\n>\n> 2) No need for MVCC or transactions, since the database is read-only\n>\n> 3) No worries about lock contention\n>\n> 4) Complex queries that might take advantage of the MySQL \"Query Cache\"\n> since the base data never changes\n>\n> 5) Queries that might obtain data directly from indexes without having\n> to touch tables (again, no need for MVCC)\n>\n> If loading in the base data and creating the summary table is taking\n> a lot of time, using MySQL with MyISAM tables (and binary logging\n> disabled) should provide significant time savings, and it doesn't\n> sound like there's any concerns for the downsides.\n>\n> Yes, postgresql holds an edge over MySQL for heavy OLTP applications,\n> I use it for that and I love it. But for the scenario the original\n> poster is asking about, MySQL/MyISAM is ideal.\n>\n>\n>\n>\n> -----Original Message-----\n> From: Bricklen Anderson [mailto:[email protected]]\n> Sent: Wednesday, January 17, 2007 3:29 PM\n> To: Adam Rich\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Configuration Advice\n>\n>\n> Adam Rich wrote:\n>> Doesn't sound like you want postgres at all.... Try mysql.\n>\n> Could you explain your reason for suggesting mysql? I'm simply curious\n> why you would offer that as a solution.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n",
"msg_date": "Wed, 17 Jan 2007 19:32:34 -0500 (EST)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On 18-1-2007 0:37 Adam Rich wrote:\n> 4) Complex queries that might take advantage of the MySQL \"Query Cache\"\n> since the base data never changes\n\nHave you ever compared MySQL's performance with complex queries to \nPostgreSQL's? I once had a query which would operate on a recordlist and \nsee whether there were any gaps larger than 1 between consecutive \nprimary keys.\n\nNormally that information isn't very usefull, but this time it was. \nSince the data was in MySQL I tried several variations of queries in \nMySQL... After ten minutes or so I gave up waiting, but left my last \nversion running. In the mean time I dumped the data, reloaded the data \nin PostgreSQL and ran some testqueries there. I came up with a query \nthat took only 0.5 second on Postgres pretty soon. The query on MySQL \nstill wasn't finished...\nIn my experience it is (even with the 5.0 release) easier to get good \nperformance from complex queries in postgresql. And postgresql gives you \nmore usefull information on why a query takes a long time when using \nexplain (analyze). There are some draw backs too of course, but while we \nin our company use mysql I switched to postgresql for some readonly \ncomplex query stuff just for its performance...\n\nBesides that, mysql rewrites the entire table for most table-altering \nstatements you do (including indexes). For small tables that's no issue, \nbut if you somehow can't add all your indexes in a single statement to a \ntable you'll be waiting a long time more for new indexes than with \npostgresql. And that situation isn't so unusual if you think of a query \nwhich needs an index that isn't there yet. Apart from the fact that it \ndoesn't have functional indexes and such.\n\nLong story short: MySQL still isn't the best performer when looking at \nthe more complex queries. I've seen performance which made me assume it \ncan't optimise sequential scans (when it is forced to loop using a seq \nscan it appears to do a new seq scan for each round in the loop...) and \nvarious other cases PostgreSQL can execute much more efficiently.\n\nSo unless you run the same queries a lot of times and know of a way to \nget it fast enough the initial time, the query cache is not much of a help.\n\nBest regards,\n\nArjen\n",
"msg_date": "Thu, 18 Jan 2007 11:24:10 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On Thu, 2007-01-18 at 04:24, Arjen van der Meijden wrote:\n> On 18-1-2007 0:37 Adam Rich wrote:\n> > 4) Complex queries that might take advantage of the MySQL \"Query Cache\"\n> > since the base data never changes\n> \n> Have you ever compared MySQL's performance with complex queries to \n> PostgreSQL's? I once had a query which would operate on a recordlist and \n> see whether there were any gaps larger than 1 between consecutive \n> primary keys.\n> \n> Normally that information isn't very usefull, but this time it was. \n> Since the data was in MySQL I tried several variations of queries in \n> MySQL... After ten minutes or so I gave up waiting, but left my last \n> version running. In the mean time I dumped the data, reloaded the data \n> in PostgreSQL and ran some testqueries there. I came up with a query \n> that took only 0.5 second on Postgres pretty soon. The query on MySQL \n> still wasn't finished...\n\nI have had similar experiences in the past. Conversely, I've had\nsimilar things happen the other way around. The biggest difference? If\nI report something like that happening in postgresql, it's easier to get\na fix or workaround, and if it's a code bug, the fix is usually released\nas a patch within a day or two. With MySQL, if it's a common problem,\nthen I can find it on the internet with google, otherwise it might take\na while to get a good workaround / fix. And if it's a bug, it might\ntake much longer to get a working patch.\n\n> In my experience it is (even with the 5.0 release) easier to get good \n> performance from complex queries in postgresql.\n\nAgreed. For data warehousing / OLAP stuff, postgresql is generally\nbetter than mysql. \n\n> Besides that, mysql rewrites the entire table for most table-altering \n> statements you do (including indexes). For small tables that's no issue, \n> but if you somehow can't add all your indexes in a single statement to a \n> table you'll be waiting a long time more for new indexes than with \n> postgresql. And that situation isn't so unusual if you think of a query \n> which needs an index that isn't there yet. Apart from the fact that it \n> doesn't have functional indexes and such.\n\nNote that this applies to the myisam table type. innodb works quite\ndifferently. It is more like pgsql in behaviour, and is an mvcc storage\nengine. Like all storage engine, it's a collection of compromises. \nSome areas it's better than pgsql, some areas worse. Sadly, it lives\nunder the hood of a database that can do some pretty stupid things, like\nignore column level constraint definitions without telling you.\n\n> Long story short: MySQL still isn't the best performer when looking at \n> the more complex queries. \n\nagreed. And those are the queries that REALLY kick your ass. Or your\nserver's ass, I guess.\n",
"msg_date": "Thu, 18 Jan 2007 10:20:53 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On 18-1-2007 17:20 Scott Marlowe wrote:\n>> Besides that, mysql rewrites the entire table for most table-altering \n>> statements you do (including indexes). \n> \n> Note that this applies to the myisam table type. innodb works quite\n> differently. It is more like pgsql in behaviour, and is an mvcc storage\n\nAfaik this is not engine specific and also applies to InnoDB. Here is \nwhat the MySQL-manual sais about it:\n\"In most cases, ALTER TABLE works by making a temporary copy of the \noriginal table. The alteration is performed on the copy, and then the \noriginal table is deleted and the new one is renamed. While ALTER TABLE \n is executing, the original table is readable by other clients. Updates \nand writes to the table are stalled until the new table is ready, and \nthen are automatically redirected to the new table without any failed \nupdates.\"\n\nhttp://dev.mysql.com/doc/refman/5.0/en/alter-table.html\n\nIf it were myisam-only they sure would've mentioned that. Besides this \nis the behaviour we've seen on our site as well.\n\nSince 'create index' is also an alter table statement for mysql, this \nalso applies for adding indexes.\n\nBest regards,\n\nArjen\n\n",
"msg_date": "Thu, 18 Jan 2007 17:42:22 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
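By way of contrast with the ALTER TABLE behaviour described above, adding an
index in PostgreSQL never rewrites the table, and from 8.2 on it can even be
built without blocking writers. A minimal sketch; the index and column names
are made up for illustration:

-- Plain CREATE INDEX (any version) blocks writes but not reads and does not
-- rewrite the heap; CONCURRENTLY needs 8.2+ and cannot run inside a
-- transaction block.
CREATE INDEX CONCURRENTLY members_created_idx ON members (created);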
{
"msg_contents": "> I once had a query which would operate on a recordlist and \n> see whether there were any gaps larger than 1 between consecutive \n> primary keys.\n\nWould you mind sharing the query you described? I am attempting to do\nsomething similar now. \n",
"msg_date": "Thu, 18 Jan 2007 12:28:27 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
{
"msg_contents": "On 18-1-2007 18:28 Jeremy Haile wrote:\n>> I once had a query which would operate on a recordlist and \n>> see whether there were any gaps larger than 1 between consecutive \n>> primary keys.\n> \n> Would you mind sharing the query you described? I am attempting to do\n> something similar now. \n\n\nWell it was over a year ago, so I don't know what I did back then. But \nsince it was a query adjusted from what I did in MySQL there where no \nsubqueries involved, I think it was something like this:\nselect a.id, min(b.id)\n from\n members a\n join members b on a.id < b.id\n left join members c on a.id +1 = c.id\nwhere c.id IS NULL\ngroup by a.id;\n\nOr rewriting it to this one halves the execution time though:\n\nselect a.id, min(b.id)\n from\n members a\n left join members c on a.id +1 = c.id\n join members b on a.id < b.id\nwhere c.id IS NULL\ngroup by a.id;\n\nAlthough this query seems to be much faster with 150k records:\n\nselect aid, bid\nfrom\n(select a.id as aid, (select min(b.id) from members b where b.id > a.id) \nas bid\n from\n members a\ngroup by a.id) as foo\nwhere bid > aid+1;\n\nThe first one takes about 16 seconds on my system with PG 8.2, the \nsecond about 1.8 second. But back then the list was much shorter, so it \ncan have been the first one or a variant on that. On MySQL the first \ntakes much more than the 16 seconds PostgreSQL uses, and after editting \nthis e-mail it still isn't finished... The second one made EXPLAIN hang \nin my 5.0.32-bk, so I didn't try that for real.\n\nBest regards,\n\nArjen\n\nPS, In case any of the planner-hackers are reading, here are the plans \nof the first two queries, just to see if something can be done to \ndecrease the differences between them. The main differences seems to be \nthat groupaggregate vs the hashaggregate?\n\n GroupAggregate (cost=34144.16..35144.38 rows=50011 width=8) (actual \ntime=17653.401..23881.320 rows=71 loops=1)\n -> Sort (cost=34144.16..34269.19 rows=50011 width=8) (actual \ntime=17519.274..21423.128 rows=7210521 loops=1)\n Sort Key: a.id\n -> Nested Loop (cost=11011.41..30240.81 rows=50011 width=8) \n(actual time=184.412..10945.189 rows=7210521 loops=1)\n -> Hash Left Join (cost=11011.41..28739.98 rows=1 \nwidth=4) (actual time=184.384..1452.467 rows=72 loops=1)\n Hash Cond: ((a.id + 1) = c.id)\n Filter: (c.id IS NULL)\n -> Seq Scan on members a (cost=0.00..9903.33 \nrows=150033 width=4) (actual time=0.009..71.463 rows=150033 loops=1)\n -> Hash (cost=9903.33..9903.33 rows=150033 \nwidth=4) (actual time=146.040..146.040 rows=150033 loops=1)\n -> Seq Scan on members c \n(cost=0.00..9903.33 rows=150033 width=4) (actual time=0.002..77.066 \nrows=150033 loops=1)\n -> Index Scan using members_pkey on members b \n(cost=0.00..875.69 rows=50011 width=4) (actual time=0.025..78.971 \nrows=100146 loops=72)\n Index Cond: (a.id < b.id)\n Total runtime: 23882.511 ms\n(13 rows)\n\n HashAggregate (cost=30240.82..30240.83 rows=1 width=8) (actual \ntime=12870.440..12870.504 rows=71 loops=1)\n -> Nested Loop (cost=11011.41..30240.81 rows=1 width=8) (actual \ntime=168.658..9466.644 rows=7210521 loops=1)\n -> Hash Left Join (cost=11011.41..28739.98 rows=1 width=4) \n(actual time=168.630..865.690 rows=72 loops=1)\n Hash Cond: ((a.id + 1) = c.id)\n Filter: (c.id IS NULL)\n -> Seq Scan on members a (cost=0.00..9903.33 \nrows=150033 width=4) (actual time=0.012..70.612 rows=150033 loops=1)\n -> Hash (cost=9903.33..9903.33 rows=150033 width=4) \n(actual time=140.432..140.432 rows=150033 loops=1)\n -> Seq Scan on members 
c (cost=0.00..9903.33 \nrows=150033 width=4) (actual time=0.003..76.709 rows=150033 loops=1)\n -> Index Scan using members_pkey on members b \n(cost=0.00..875.69 rows=50011 width=4) (actual time=0.023..73.317 \nrows=100146 loops=72)\n Index Cond: (a.id < b.id)\n Total runtime: 12870.756 ms\n(11 rows)\n",
"msg_date": "Thu, 18 Jan 2007 22:10:12 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
},
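For readers on later releases: from PostgreSQL 8.4 on, a window function makes
the gap search above much simpler to express. A sketch against the same
members table (not an option on the 8.2-era servers discussed here):

SELECT prev_id + 1 AS gap_start, id - 1 AS gap_end
FROM (
    SELECT id, lag(id) OVER (ORDER BY id) AS prev_id
    FROM members
) AS t
WHERE id - prev_id > 1;   -- rows whose predecessor is more than 1 away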
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> PS, In case any of the planner-hackers are reading, here are the plans \n> of the first two queries, just to see if something can be done to \n> decrease the differences between them.\n\nIncrease work_mem? It's not taking the hash because it thinks it won't\nfit in memory ...\n\nThere is a bug here, I'd say: the rowcount estimate ought to be the same\neither way. Dunno why it's not, but will look --- I see the same\nmisbehavior with a toy table here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Jan 2007 17:11:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice "
},
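A minimal illustration of the suggestion, reusing the first query from the
previous message; 128MB is only an example value, and the unit-suffix syntax
needs 8.2 or later (earlier releases take the number in kilobytes):

SET work_mem = '128MB';   -- session-local, postgresql.conf is untouched
EXPLAIN ANALYZE
SELECT a.id, min(b.id)
FROM members a
JOIN members b ON a.id < b.id
LEFT JOIN members c ON a.id + 1 = c.id
WHERE c.id IS NULL
GROUP BY a.id;
RESET work_mem;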
{
"msg_contents": "On 18-1-2007 23:11 Tom Lane wrote:\n> Increase work_mem? It's not taking the hash because it thinks it won't\n> fit in memory ...\n\nWhen I increase it to 128MB in the session (arbitrarily selected \nrelatively large value) it indeed has the other plan.\n\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 19 Jan 2007 07:15:12 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Advice"
}
] |
[
{
"msg_contents": "Hello List,\n\nNot sure to which list I should post (gray lines, and all that), so \npoint me in the right direction if'n it's a problem.\n\nI am in the process of learning some of the art/science of benchmarking. \n Given novnov's recent post about the comparison of MS SQL vs \nPostgresQL, I felt it time to do a benchmark comparison of sorts for \nmyself . . . more for me and the benchmark learning process than the \nDB's, but I'm interested in DB's in general, so it's a good fit. (If I \nfind anything interesting/new, I will of course share the results.)\n\nGiven that, I don't know what I'm doing. :| It seems initially that to \ndo it properly, I have to pick some sort of focus. In other words, \nshall I benchmark from a standpoint of ACID compliance? Shall I \nbenchmark with functionality in mind? Ease of use/setup? Speed? The \nlatter seems to be done most widely/often, so I suspect it's the easiest \nstandpoint from which to work. Thus, for my initial foray into \nbenchmarking, I'll probably start there. (Unless of course, in any of \nyour wisdom, you can point me in a better direction.)\n\n From my less-than-one-month-of-Postgres-list-lurking, I think I need to \nbe aware of at /least/ these items for my benchmarks (in no particular \norder):\n\n* overall speed (obvious)\n\n* mitigating factors\n - DB fits entirely in memory or not (page faults)\n - DB size\n - DB versions\n\n* DB non-SELECT performance. A common point I see in comparisons of\n MySQL and PostgresQL is that MySQL is much faster. However, I rarely\n see anything other than comparison of SELECT.\n\n* Query complexity (e.g. criteria, {,inner,outer}-joins)\n ex.\tSELECT * FROM aTable; vs\n\tSELECT\n\t FUNC( var ),\n\t ...\n\t FROM\n\t tables\n\t WHERE\n\t x IN (<list>)\n\tOR y BETWEEN\n\t a\n\t AND b ...\n\n* Queries against tables/columns of varying data types. (BOOLEAN,\n SMALLINT, TEXT, VARCHAR, etc.)\n\n* Queries against tables with/out constraints\n\n* Queries against tables with/out triggers {post,pre}-{non,}SELECT\n\n* Transactions\n\n* Individual and common functions (common use, not necessarily common\n name, e.g. SUBSTRING/SUBSTR, MAX, COUNT, ORDER BY w/{,o} LIMIT).\n\n* Performance under load (e.g. 1, 10, 100 concurrent users),\n - need to delineate how DB's handle concurrent queries against the\n same tuples AND against different tuples/tables.\n\n* Access method (e.g. Thru C libs, via PHP/Postgres libs, apache/web,\n command line and stdin scripts)\n\n# I don't currently have access to a RAID setup, so this will all have\n to be on single hard drive for now. Perhaps later I can procure more\n hardware/situations with which to test.\n\nClearly, this is only a small portion of what I should be aware when I'm \nbenchmarking different DB's in terms of speed/performance, and already \nit's feeling daunting. Feel free to add any/all items about which I'm \nnot thinking.\n\nThe other thing: as I'm still a bit of a noob, all my use of the \nPostgres DB has been -- for the most part -- with the stock \nconfiguration. Since I'm planning to run these tests on the same \nhardware, I can pseudo-rule out hardware-related differences in the \nresults. However, I'm hoping that I can give my stats/assumptions to \nthe list and someone would give me a configuration file that would /most \nlikely/ be best? I can search the documentation/archives, but I'm \nhoping to get head start and tweak from there.\n\nAny and all advice would be /much/ appreciated!\n\nKevin\n",
"msg_date": "Wed, 17 Jan 2007 17:15:24 -0500",
"msg_from": "Kevin Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB benchmark and pg config file help"
},
{
"msg_contents": "On 1/17/07, Kevin Hunter <[email protected]> wrote:\n> Hello List,\n>\n> Not sure to which list I should post (gray lines, and all that), so\n> point me in the right direction if'n it's a problem.\n>\n> I am in the process of learning some of the art/science of benchmarking.\n> Given novnov's recent post about the comparison of MS SQL vs\n> PostgresQL, I felt it time to do a benchmark comparison of sorts for\n> myself . . . more for me and the benchmark learning process than the\n> DB's, but I'm interested in DB's in general, so it's a good fit. (If I\n> find anything interesting/new, I will of course share the results.)\n\nJust remember that all the major commercial databases have\nanti-benchmark clauses in their license agreements. So, if you decide\nto publish your results (especially in a formal benchmark), you can't\nmention the big boys by name. [yes this is cowardice]\n\nmerlin\n",
"msg_date": "Fri, 19 Jan 2007 08:45:13 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB benchmark and pg config file help"
},
{
"msg_contents": "On 19 Jan 2007 at 8:45a -0500, Merlin Moncure wrote:\n> On 1/17/07, Kevin Hunter [hunteke∈earlham.edu] wrote:\n>> I am in the process of learning some of the art/science of benchmarking. \n>> Given novnov's recent post about the comparison of MS SQL vs \n>> PostgresQL, I felt it time to do a benchmark comparison of sorts for \n>> myself . . . more for me and the benchmark learning process than the \n>> DB's, but I'm interested in DB's in general, so it's a good fit. (If I \n>> find anything interesting/new, I will of course share the results.)\n> \n> Just remember that all the major commercial databases have \n> anti-benchmark clauses in their license agreements. So, if you decide \n> to publish your results (especially in a formal benchmark), you can't \n> mention the big boys by name. [yes this is cowardice]\n\n\"Anti-benchmark clauses in the license agreements\"?!? Cowardice indeed! \n <wry_look>So, by implication, I should do my benchmarking with \n\"borrowed\" copies, right? No sale, no agreement . . . </wry_look>\n\nSeriously though, that would have bitten me. Thank you, I did not know \nthat. Does that mean that I can't publish the results outside of my \nwork/research/personal unit at all? Or do I just need to obscure about \nwhich DB I'm talking? (Like Vendor {1,2,3,...} Product).\n\nAppreciatively,\n\nKevin\n",
"msg_date": "Fri, 19 Jan 2007 09:05:35 -0500",
"msg_from": "Kevin Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB benchmark and pg config file help"
},
{
"msg_contents": "On Fri, Jan 19, 2007 at 09:05:35 -0500,\n Kevin Hunter <[email protected]> wrote:\n> \n> Seriously though, that would have bitten me. Thank you, I did not know \n> that. Does that mean that I can't publish the results outside of my \n> work/research/personal unit at all? Or do I just need to obscure about \n> which DB I'm talking? (Like Vendor {1,2,3,...} Product).\n\nCheck with your lawyer. Depending on where you are, those clauses may not even\nbe valid.\n",
"msg_date": "Fri, 19 Jan 2007 10:52:17 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB benchmark and pg config file help"
},
{
"msg_contents": "On 19 Jan 2007 at 10:56a -0600, Bruno Wolff III wrote:\n> On Fri, Jan 19, 2007 at 09:05:35 -0500,\n> Kevin Hunter <[email protected]> wrote:\n>> Seriously though, that would have bitten me. Thank you, I did not know \n>> that. Does that mean that I can't publish the results outside of my \n>> work/research/personal unit at all? Or do I just need to obscure about \n>> which DB I'm talking? (Like Vendor {1,2,3,...} Product).\n> \n> Check with your lawyer. Depending on where you are, those clauses may not even\n> be valid.\n\n<grins />\n\n/me = student => no money . . . lawyer? You /are/ my lawyers. ;)\n\nWell, sounds like America's legal system/red tape will at least slow my \nefforts against the non-open source DBs, until I get a chance to find \nout for sure.\n\nI really do appreciate the warnings/heads ups.\n\nKevin\n\nBTW: I'm currently located in Richmond, IN, USA. A pin for someone's \nmap. :)\n",
"msg_date": "Fri, 19 Jan 2007 16:40:55 -0500",
"msg_from": "Kevin Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB benchmark and pg config file help"
}
] |
[
{
"msg_contents": "Hi List,\n\nCan anybody suggest some comprehensive test for version change from 8.1.3 to\n8.2\n\n-- \nThanks in advance\nGauri\n\nHi List,\n \nCan anybody suggest some comprehensive test for version change from 8.1.3 to 8.2-- Thanks in advance\nGauri",
"msg_date": "Thu, 18 Jan 2007 10:13:26 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Version Change"
},
{
"msg_contents": "\nOn Jan 18, 2007, at 13:43 , Gauri Kanekar wrote:\n\n> Can anybody suggest some comprehensive test for version change from \n> 8.1.3 to 8.2\n\nhttp://www.postgresql.org/docs/8.2/interactive/release.html\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Thu, 18 Jan 2007 13:56:06 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version Change"
},
{
"msg_contents": "\nOn Jan 18, 2007, at 13:56 , Michael Glaesemann wrote:\n\n>\n> On Jan 18, 2007, at 13:43 , Gauri Kanekar wrote:\n>\n>> Can anybody suggest some comprehensive test for version change \n>> from 8.1.3 to 8.2\n>\n> http://www.postgresql.org/docs/8.2/interactive/release.html\n\nSorry, I misread your request as a list of version changes. You could \nparse the result of SELECT version(); to test what version the server \nis, if that's what you're asking.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Thu, 18 Jan 2007 13:59:46 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version Change"
},
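A concrete form of the suggestion above, for scripts that need to branch on
the server version; both statements work on 8.1 and 8.2:

SELECT version();       -- full banner, e.g. 'PostgreSQL 8.2.x on ...'
SHOW server_version;    -- just the version number, easier to parse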
{
"msg_contents": "Please reply to the list so that others may contribute and benefit \nfrom the discussion.\n\nOn Jan 18, 2007, at 14:19 , Gauri Kanekar wrote:\n\n> i want some comprehensive tests, to identify wheather shifiting \n> from 8.1.3 to 8.2 will be advantageous.\n\nI think it depends on your installation and use for PostgreSQL. \nPostgreSQL is used for many different types of projects which have \ndifferent needs. I don't think it would be possible to put together \nsome sort of comprehensive test suite that would answer your \nquestion. What I can recommend is benchmark common and performance \ncritical tasks for your current 8.1.3 installation and then compare \nthe results with those same benchmarks run against 8.2.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Thu, 18 Jan 2007 14:35:12 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Version Change"
}
] |
[
{
"msg_contents": "Hi List,\n\nCan anybody help me out with this -\n\nis autovacuum similar to vacuum full analyse verbose.\n\n-- \nRegards\nGauri\n\nHi List,\n \nCan anybody help me out with this - \n \nis autovacuum similar to vacuum full analyse verbose.\n \n-- RegardsGauri",
"msg_date": "Thu, 18 Jan 2007 18:54:05 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum v/s Autovacuum"
},
{
"msg_contents": "\nOn Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:\n\n> is autovacuum similar to vacuum full analyse verbose.\n\nhttp://www.postgresql.org/docs/8.2/interactive/routine- \nvacuuming.html#AUTOVACUUM\n\nApparently, no FULL, no VERBOSE (which is only really useful if you \nwant to see the results, not for routine maintenance).\n\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Thu, 18 Jan 2007 22:31:10 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum v/s Autovacuum"
},
{
"msg_contents": "Hi\n\nThanks.\n\nWe have autovacuum ON , but still postgres server warns to\nincreas max_fsm_pages value.\n\nDo autovacuum release space after it is over?\n\nso how can we tackle it.\n\n\n\n\nOn 1/18/07, Michael Glaesemann <[email protected]> wrote:\n>\n>\n> On Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:\n>\n> > is autovacuum similar to vacuum full analyse verbose.\n>\n> http://www.postgresql.org/docs/8.2/interactive/routine-\n> vacuuming.html#AUTOVACUUM\n>\n> Apparently, no FULL, no VERBOSE (which is only really useful if you\n> want to see the results, not for routine maintenance).\n>\n>\n> Michael Glaesemann\n> grzm seespotcode net\n>\n>\n>\n\n\n-- \nRegards\nGauri\n\nHi \n \nThanks.\n \nWe have autovacuum ON , but still postgres server warns to increas max_fsm_pages value.\n \nDo autovacuum release space after it is over?\n \nso how can we tackle it.\n \n \nOn 1/18/07, Michael Glaesemann <[email protected]> wrote:\nOn Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:> is autovacuum similar to vacuum full analyse verbose.\nhttp://www.postgresql.org/docs/8.2/interactive/routine-vacuuming.html#AUTOVACUUMApparently, no FULL, no VERBOSE (which is only really useful if you\nwant to see the results, not for routine maintenance).Michael Glaesemanngrzm seespotcode net-- RegardsGauri",
"msg_date": "Thu, 18 Jan 2007 19:16:04 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum v/s Autovacuum"
},
{
"msg_contents": "In response to \"Gauri Kanekar\" <[email protected]>:\n> On 1/18/07, Michael Glaesemann <[email protected]> wrote:\n> >\n> > On Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:\n> >\n> > > is autovacuum similar to vacuum full analyse verbose.\n> >\n> > http://www.postgresql.org/docs/8.2/interactive/routine-\n> > vacuuming.html#AUTOVACUUM\n> >\n> > Apparently, no FULL, no VERBOSE (which is only really useful if you\n> > want to see the results, not for routine maintenance).\n\n[please don't top-post]\n\nActually, you can raise the debugging level in PostgreSQL and get something\nsimilar to VERBOSE. The only problem is that it also increases the amount\nof logging that occurs with everything.\n\n> We have autovacuum ON , but still postgres server warns to\n> increas max_fsm_pages value.\n> \n> Do autovacuum release space after it is over?\n\nYes.\n\nIf you're still getting warnings about max_fsm_pages while autovac is\nrunning, you need to do one of two things:\n1) Increase max_fsm_pages\n2) Adjust autovacuum's settings so it vacuums more often.\n\nDepending on this, you may also need to temporarily adjust max_fsm_pages,\nthen manually vacuum -- you may then find that autovacuum can keep everything\nclean with lower settings of max_fsm_pages.\n\nOverall, the best settings for 1 and 2 depend on the nature of your\nworkload, and simulation and monitoring will be required to find the\nbest values. I feel that the docs on this are very good. If the\namount of data that changes between runs of autovacuum is greater\nthan max_fsm_pages, then vacuum will be unable to reclaim all the\nspace.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Thu, 18 Jan 2007 09:05:28 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum v/s Autovacuum"
},
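One way to see whether max_fsm_pages is actually big enough, rather than
waiting for the warnings, is a database-wide VACUUM VERBOSE: the last few
INFO lines of its output summarise how many free-space-map page slots and
relations are needed versus the configured max_fsm_pages / max_fsm_relations.
A sketch (run as superuser, ideally at a quiet time):

VACUUM VERBOSE;
-- If the reported number of page slots needed exceeds max_fsm_pages, raise
-- the setting (it only takes effect after a server restart) and/or make
-- autovacuum run more often, as described above.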
{
"msg_contents": "You will need to properly tune the thresholds for VACUUM and ANALYZE in case\nof autovacuuming process, so that you do not need to increase the\nmax_fsm_pages oftenly...\n\n-------------\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 1/18/07, Bill Moran <[email protected]> wrote:\n>\n> In response to \"Gauri Kanekar\" <[email protected]>:\n> > On 1/18/07, Michael Glaesemann <[email protected]> wrote:\n> > >\n> > > On Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:\n> > >\n> > > > is autovacuum similar to vacuum full analyse verbose.\n> > >\n> > > http://www.postgresql.org/docs/8.2/interactive/routine-\n> > > vacuuming.html#AUTOVACUUM\n> > >\n> > > Apparently, no FULL, no VERBOSE (which is only really useful if you\n> > > want to see the results, not for routine maintenance).\n>\n> [please don't top-post]\n>\n> Actually, you can raise the debugging level in PostgreSQL and get\n> something\n> similar to VERBOSE. The only problem is that it also increases the amount\n> of logging that occurs with everything.\n>\n> > We have autovacuum ON , but still postgres server warns to\n> > increas max_fsm_pages value.\n> >\n> > Do autovacuum release space after it is over?\n>\n> Yes.\n>\n> If you're still getting warnings about max_fsm_pages while autovac is\n> running, you need to do one of two things:\n> 1) Increase max_fsm_pages\n> 2) Adjust autovacuum's settings so it vacuums more often.\n>\n> Depending on this, you may also need to temporarily adjust max_fsm_pages,\n> then manually vacuum -- you may then find that autovacuum can keep\n> everything\n> clean with lower settings of max_fsm_pages.\n>\n> Overall, the best settings for 1 and 2 depend on the nature of your\n> workload, and simulation and monitoring will be required to find the\n> best values. I feel that the docs on this are very good. If the\n> amount of data that changes between runs of autovacuum is greater\n> than max_fsm_pages, then vacuum will be unable to reclaim all the\n> space.\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nYou will need to properly tune the thresholds for VACUUM and ANALYZE in case of autovacuuming process, so that you do not need to increase the max_fsm_pages oftenly...-------------Shoaib MirEnterpriseDB (\nwww.enterprisedb.com)On 1/18/07, Bill Moran <[email protected]> wrote:\nIn response to \"Gauri Kanekar\" <[email protected]\n>:> On 1/18/07, Michael Glaesemann <[email protected]> wrote:> >> > On Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:> >> > > is autovacuum similar to vacuum full analyse verbose.\n> >> > http://www.postgresql.org/docs/8.2/interactive/routine-> > vacuuming.html#AUTOVACUUM> >> > Apparently, no FULL, no VERBOSE (which is only really useful if you\n> > want to see the results, not for routine maintenance).[please don't top-post]Actually, you can raise the debugging level in PostgreSQL and get somethingsimilar to VERBOSE. 
The only problem is that it also increases the amount\nof logging that occurs with everything.> We have autovacuum ON , but still postgres server warns to> increas max_fsm_pages value.>> Do autovacuum release space after it is over?\nYes.If you're still getting warnings about max_fsm_pages while autovac isrunning, you need to do one of two things:1) Increase max_fsm_pages2) Adjust autovacuum's settings so it vacuums more often.\nDepending on this, you may also need to temporarily adjust max_fsm_pages,then manually vacuum -- you may then find that autovacuum can keep everythingclean with lower settings of max_fsm_pages.Overall, the best settings for 1 and 2 depend on the nature of your\nworkload, and simulation and monitoring will be required to find thebest values. I feel that the docs on this are very good. If theamount of data that changes between runs of autovacuum is greaterthan max_fsm_pages, then vacuum will be unable to reclaim all the\nspace.--Bill MoranCollaborative Fusion Inc.---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster",
"msg_date": "Thu, 18 Jan 2007 19:14:25 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum v/s Autovacuum"
}
] |
[
{
"msg_contents": "Some of my very large tables (10 million rows) need to be analyzed by\nautovacuum on a frequent basis. Rather than specifying this as a\npercentage of table size + base threshold, I wanted to specify it as an\nexplicit number of rows.\n\nI changed the table-specific settings so that the ANALYZE base threshold\nwas 5000 and the ANALYZE scale factor is 0. According to the documented\nformula: analyze threshold = analyze base threshold + analyze scale\nfactor * number of tuples, I assumed that this would cause the table to\nbe analyzed everytime 5000 tuples were inserted/updated/deleted.\n\nHowever, the tables have been updated with tens of thousands of inserts\nand the table has still not been analyzed (according to\npg_stat_user_tables). Does a scale factor of 0 cause the table to never\nbe analyzed? What am I doing wrong?\n\nI'm using PG 8.2.1.\n\nThanks,\nJeremy Haile\n",
"msg_date": "Thu, 18 Jan 2007 13:37:54 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autoanalyze settings with zero scale factor"
},
{
"msg_contents": "Jeremy Haile wrote:\n> I changed the table-specific settings so that the ANALYZE base threshold\n> was 5000 and the ANALYZE scale factor is 0. According to the documented\n> formula: analyze threshold = analyze base threshold + analyze scale\n> factor * number of tuples, I assumed that this would cause the table to\n> be analyzed everytime 5000 tuples were inserted/updated/deleted.\n\nThat is right, and exactly how the scaling factor / base value are \nsupposed to work, so this should be fine.\n\n> However, the tables have been updated with tens of thousands of inserts\n> and the table has still not been analyzed (according to\n> pg_stat_user_tables). Does a scale factor of 0 cause the table to never\n> be analyzed? What am I doing wrong? I'm using PG 8.2.1.\n\nNo a scaling factor of 0 shouldn't stop the table from being analyzed.\n\nUnless it's just a bug, my only guess is that autovacuum may be getting \nbusy at times (vacuuming large tables for example) and hasn't had a \nchance to even look at that table for a while, and by the time it gets \nto it, there have been tens of thousands of inserts. Does that sounds \nplausible?\n\nAlso, are other auto-vacuums and auto-analyzes showing up in the \npg_stats table? Maybe it's a stats system issue.\n",
"msg_date": "Thu, 18 Jan 2007 14:20:11 -0500",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autoanalyze settings with zero scale factor"
},
{
"msg_contents": "> Unless it's just a bug, my only guess is that autovacuum may be getting \n> busy at times (vacuuming large tables for example) and hasn't had a \n> chance to even look at that table for a while, and by the time it gets \n> to it, there have been tens of thousands of inserts. Does that sounds \n> plausible?\n\nPossible, but I think your next suggestion is more likely.\n\n> Also, are other auto-vacuums and auto-analyzes showing up in the \n> pg_stats table? Maybe it's a stats system issue.\n\nNo tables have been vacuumed or analyzed today. I had thought that this\nproblem was due to my pg_autovacuum changes, but perhaps not. I\nrestarted PostgreSQL (in production - yikes) About a minute after being\nrestarted, the autovac process fired up.\n\nWhat could get PG in a state where autovac isn't running? Is there\nanything I should watch to debug or monitor for this problem in the\nfuture? I wish I'd noticed whether or not the stats collector process\nwas running before I restarted.\n",
"msg_date": "Thu, 18 Jan 2007 15:21:47 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autoanalyze settings with zero scale factor"
},
{
"msg_contents": "Jeremy Haile wrote:\n>> Also, are other auto-vacuums and auto-analyzes showing up in the \n>> pg_stats table? Maybe it's a stats system issue.\n>> \n>\n> No tables have been vacuumed or analyzed today. I had thought that this\n> problem was due to my pg_autovacuum changes, but perhaps not. I\n> restarted PostgreSQL (in production - yikes) About a minute after being\n> restarted, the autovac process fired up.\n>\n> What could get PG in a state where autovac isn't running? Is there\n> anything I should watch to debug or monitor for this problem in the\n> future? I wish I'd noticed whether or not the stats collector process\n> was running before I restarted.\n\nFirst off you shouldn't need to restart PG. When it wasn't working did \nyou ever check the autovacuum_enabled setting? For example within psql: \n\"show autovacuum;\".\n\nI would venture to guess that autovacuum was disabled for some reason. \nPerhaps last time you started the server the stats settings weren't enabled?\n\n\n\n",
"msg_date": "Thu, 18 Jan 2007 15:40:07 -0500",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autoanalyze settings with zero scale factor"
},
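Putting the suggestions above together, a quick 8.2 check; the
last_autovacuum / last_autoanalyze columns are new in 8.2, and 'tbl' is a
placeholder table name:

-- Is the daemon enabled, and is row-level stats collection on?
-- (both must be on for autovacuum to do anything in 8.2)
SHOW autovacuum;
SHOW stats_row_level;

-- Has autovacuum/autoanalyze ever touched the table, and how much churn
-- has there been since?
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del,
       last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'tbl';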
{
"msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> No tables have been vacuumed or analyzed today. I had thought that this\n> problem was due to my pg_autovacuum changes, but perhaps not. I\n> restarted PostgreSQL (in production - yikes) About a minute after being\n> restarted, the autovac process fired up.\n\n> What could get PG in a state where autovac isn't running?\n\nUm, are you sure it wasn't? The autovac process is not an always-there\nthing, it quits after each pass and then the postmaster starts a new one\nawhile later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Jan 2007 16:30:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autoanalyze settings with zero scale factor "
},
{
"msg_contents": "Well - it hadn't run on any table in over 24 hours (according to\npg_stat_user_tables). My tables are constantly being inserted into and\ndeleted from, and the autovacuum settings are pretty aggressive. I also\nhad not seen the autovac process running in the past 24 hours. (although\nI wasn't watching it *all* the time)\n\nSo - as far as I could tell it wasn't running.\n\n\nOn Thu, 18 Jan 2007 16:30:17 -0500, \"Tom Lane\" <[email protected]> said:\n> \"Jeremy Haile\" <[email protected]> writes:\n> > No tables have been vacuumed or analyzed today. I had thought that this\n> > problem was due to my pg_autovacuum changes, but perhaps not. I\n> > restarted PostgreSQL (in production - yikes) About a minute after being\n> > restarted, the autovac process fired up.\n> \n> > What could get PG in a state where autovac isn't running?\n> \n> Um, are you sure it wasn't? The autovac process is not an always-there\n> thing, it quits after each pass and then the postmaster starts a new one\n> awhile later.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Jan 2007 16:53:21 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autoanalyze settings with zero scale factor"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a slow response of my PostgreSQL database 7.4 using this query below\non a table with 800000 rows:\n\nselect count(*)from tbl;\n\nPostgreSQL return result in 28 sec every time.\nalthough MS-SQL return result in 0.02 sec every time.\n\nMy server is a DELL PowerEdge 2600 with bi-processor Xeon at 3.2 Ghz\nwith 3GBytes RAM\n\n\nMy PostgreSQL Conf is\n*********************\nlog_connections = yes\nsyslog = 2\neffective_cache_size = 50000\nsort_mem = 10000\nmax_connections = 200\nshared_buffers = 3000\nvacuum_mem = 32000\nwal_buffers = 8\nmax_fsm_pages = 2000\nmax_fsm_relations = 100\n\nCan you tell me is there a way to enhence performance ?\n\nThank you\n\n\n\n\n\n+-----------------------------------------------------+\n| Laurent Manchon |\n| Email: [email protected] |\n+-----------------------------------------------------+\n\n\nHi,\n\nI have a slow response of my PostgreSQL database 7.4 using this query\nbelow\non a table with 800000 rows:\n\nselect count(*)from tbl;\n\nPostgreSQL return result in 28 sec every time.\nalthough MS-SQL return result in 0.02 sec every time.\n\nMy server is a DELL PowerEdge 2600 with bi-processor Xeon at 3.2 \nGhz\nwith 3GBytes RAM\n\n\nMy PostgreSQL Conf is\n*********************\nlog_connections = yes\nsyslog = 2\neffective_cache_size = 50000\nsort_mem = 10000\nmax_connections = 200\nshared_buffers = 3000\nvacuum_mem = 32000\nwal_buffers = 8\nmax_fsm_pages = 2000 \nmax_fsm_relations = 100 \n\nCan you tell me is there a way to enhence performance ?\n\nThank you\n\n\n\n\n\n+-----------------------------------------------------+\n| Laurent\nManchon \n|\n| Email:\[email protected] \n|\n+-----------------------------------------------------+",
"msg_date": "Tue, 23 Jan 2007 11:34:52 +0100",
"msg_from": "Laurent Manchon <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow result"
},
{
"msg_contents": "\nAm 23.01.2007 um 11:34 schrieb Laurent Manchon:\n\n> Hi,\n>\n> I have a slow response of my PostgreSQL database 7.4 using this \n> query below\n> on a table with 800000 rows:\n>\n> select count(*)from tbl;\n\ncount(*) is doing a full tablescan over all your 800000 rows. This is \na well known \"feature\"\nof postgres :-/\n\nSo enhancing the performance is currently only possible by having \nfaster disk drives.\n-- \nHeiko W.Rupp\n [email protected], http://www.dpunkt.de/buch/ \n3-89864-429-4.html\n\n\n\n",
"msg_date": "Tue, 23 Jan 2007 11:43:16 +0100",
"msg_from": "\"Heiko W.Rupp\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
{
"msg_contents": "am Tue, dem 23.01.2007, um 11:34:52 +0100 mailte Laurent Manchon folgendes:\n> Hi,\n> \n> I have a slow response of my PostgreSQL database 7.4 using this query below\n> on a table with 800000 rows:\n> \n> select count(*)from tbl;\n\nIf i remember correctly, i saw this question yesterday on an other\nlist...\n\n\nAnswer:\n\nBecause PG force a sequencial scan. You can read a lot about this in the\narchives. Here some links to explanations:\n\nhttp://www.pervasive-postgres.com/instantkb13/article.aspx?id=10117&cNode=0T1L6L\nhttp://sql-info.de/postgresql/postgres-gotchas.html#1_7\nhttp://www.varlena.com/GeneralBits/49.php\n\n\nHope that helps, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 23 Jan 2007 11:53:35 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
{
"msg_contents": "On Tue, Jan 23, 2007 at 11:34:52AM +0100, Laurent Manchon wrote:\n> I have a slow response of my PostgreSQL database 7.4 using this query below\n> on a table with 800000 rows:\n> \n> select count(*)from tbl;\n\nContrary to your expectations, this is _not_ a query you'd expect to be fast\nin Postgres. Try real queries from your application instead -- most likely,\nyou'll find them to be much master. (If not, come back with the query, the\nschema and the EXPLAIN ANALYZE output of your query, and you'll usually get\nhelp nailing down the issues. :-) )\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 23 Jan 2007 11:55:41 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
{
"msg_contents": "On Tue, Jan 23, 2007 at 11:55:41AM +0100, Steinar H. Gunderson wrote:\n> you'll find them to be much master.\n\ns/master/faster/\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 23 Jan 2007 11:58:04 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
{
"msg_contents": "In response to Laurent Manchon <[email protected]>:\n> \n> I have a slow response of my PostgreSQL database 7.4 using this query below\n> on a table with 800000 rows:\n> \n> select count(*)from tbl;\n> \n> PostgreSQL return result in 28 sec every time.\n> although MS-SQL return result in 0.02 sec every time.\n> \n> My server is a DELL PowerEdge 2600 with bi-processor Xeon at 3.2 Ghz\n> with 3GBytes RAM\n\nWhile there's truth in everything that's been said by others, the query\nshould not take _that_ long. I just tried a count(*) on a table with\n460,000 rows, and it took less than a second. count(*) in PostgreSQL\nis not likely to compare to most other RDBMS for the reasons others have\nstated, but counting 800,000 rows shouldn't take 28 seconds.\n\nThe standard question applies: have you vacuumed recently?\n\n> My PostgreSQL Conf is\n> *********************\n> log_connections = yes\n> syslog = 2\n> effective_cache_size = 50000\n> sort_mem = 10000\n> max_connections = 200\n> shared_buffers = 3000\n> vacuum_mem = 32000\n> wal_buffers = 8\n> max_fsm_pages = 2000\n> max_fsm_relations = 100\n> \n> Can you tell me is there a way to enhence performance ?\n\nOn our 4G machines, we use shared_buffers=240000 (which equates to about\n2G). The only reason I don't set it higher is that FreeBSD has a limit on\nshared memory of 2G.\n\nThe caveat here is that I'm running a mix of 8.1 and 8.2. There have been\nsignificant improvements in both the usage of shared memory, and the\noptimization of count(*) since 7.4, so the first suggestion I have is to\nupgrade your installation.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Tue, 23 Jan 2007 08:53:25 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
{
"msg_contents": "On Tue, Jan 23, 2007 at 11:34:52 +0100,\n Laurent Manchon <[email protected]> wrote:\n> Hi,\n> \n> I have a slow response of my PostgreSQL database 7.4 using this query below\n> on a table with 800000 rows:\n> \n> select count(*)from tbl;\n> \n> PostgreSQL return result in 28 sec every time.\n> although MS-SQL return result in 0.02 sec every time.\n\nBesides the other advice mentioned in this thread, check that you don't\nhave a lot of dead tuples in that table. 28 seconds seems a bit high\nfor even a sequential scan of 800000 tuples unless they are pretty large.\n",
"msg_date": "Tue, 23 Jan 2007 11:59:51 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
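To put numbers on the dead-tuple suspicion, comparing the table's on-disk
size with its estimated live row count is usually enough. A sketch, assuming
the default 8 kB block size and the table name from the original post:

ANALYZE tbl;   -- refresh the estimates first
SELECT relname,
       reltuples::bigint    AS estimated_rows,
       relpages,
       relpages * 8 / 1024  AS approx_size_mb
FROM pg_class
WHERE relname = 'tbl';
-- If approx_size_mb is far larger than 800000 modest rows should occupy,
-- the table is bloated with dead space; on 7.4, VACUUM FULL (or a
-- dump/reload) is the way to compact it.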
{
"msg_contents": "You can also try this one:\n\nANALYZE tablename;\nselect reltuples from pg_class where relname = 'tablename';\n\nWill also give almost the same results I guess...\n\n-------------\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 1/23/07, Bruno Wolff III <[email protected]> wrote:\n>\n> On Tue, Jan 23, 2007 at 11:34:52 +0100,\n> Laurent Manchon <[email protected]> wrote:\n> > Hi,\n> >\n> > I have a slow response of my PostgreSQL database 7.4 using this query\n> below\n> > on a table with 800000 rows:\n> >\n> > select count(*)from tbl;\n> >\n> > PostgreSQL return result in 28 sec every time.\n> > although MS-SQL return result in 0.02 sec every time.\n>\n> Besides the other advice mentioned in this thread, check that you don't\n> have a lot of dead tuples in that table. 28 seconds seems a bit high\n> for even a sequential scan of 800000 tuples unless they are pretty large.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\nYou can also try this one:ANALYZE tablename;select reltuples from pg_class where relname = 'tablename';Will also give almost the same results I guess...-------------Shoaib MirEnterpriseDB (\nwww.enterprisedb.com)On 1/23/07, Bruno Wolff III <[email protected]> wrote:\nOn Tue, Jan 23, 2007 at 11:34:52 +0100, Laurent Manchon <\[email protected]> wrote:> Hi,>> I have a slow response of my PostgreSQL database 7.4 using this query below> on a table with 800000 rows:>> select count(*)from tbl;\n>> PostgreSQL return result in 28 sec every time.> although MS-SQL return result in 0.02 sec every time.Besides the other advice mentioned in this thread, check that you don'thave a lot of dead tuples in that table. 28 seconds seems a bit high\nfor even a sequential scan of 800000 tuples unless they are pretty large.---------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ? \nhttp://www.postgresql.org/docs/faq",
"msg_date": "Wed, 24 Jan 2007 15:19:22 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a slow response of my PostgreSQL database 7.4 using this query below\non a table with 800000 rows:\n\nselect count(*)from tbl;\n\nPostgreSQL return result in 28 sec every time.\nalthough MS-SQL return result in 0.02 sec every time.\n\nMy server is a DELL PowerEdge 2600 with bi-processor Xeon at 3.2 Ghz\nwith 3GBytes RAM\n\n\nMy PostgreSQL Conf is\n*********************\nlog_connections = yes\nsyslog = 2\neffective_cache_size = 50000\nsort_mem = 10000\nmax_connections = 200\nshared_buffers = 3000\nvacuum_mem = 32000\nwal_buffers = 8\nmax_fsm_pages = 2000\nmax_fsm_relations = 100\n\nCan you tell me is there a way to enhence performance ?\n\nThank you\n\n\n\n\n\n+-----------------------------------------------------+\n| Laurent Manchon |\n| Email: [email protected] |\n+-----------------------------------------------------+\n\n\nHi,\n\nI have a slow response of my PostgreSQL database 7.4 using this query\nbelow\non a table with 800000 rows:\n\nselect count(*)from tbl;\n\nPostgreSQL return result in 28 sec every time.\nalthough MS-SQL return result in 0.02 sec every time.\n\nMy server is a DELL PowerEdge 2600 with bi-processor Xeon at 3.2 \nGhz\nwith 3GBytes RAM\n\n\nMy PostgreSQL Conf is\n*********************\nlog_connections = yes\nsyslog = 2\neffective_cache_size = 50000\nsort_mem = 10000\nmax_connections = 200\nshared_buffers = 3000\nvacuum_mem = 32000\nwal_buffers = 8\nmax_fsm_pages = 2000 \nmax_fsm_relations = 100 \n\nCan you tell me is there a way to enhence performance ?\n\nThank you\n\n\n\n\n\n+-----------------------------------------------------+\n| Laurent\nManchon \n|\n| Email:\[email protected] \n|\n+-----------------------------------------------------+",
"msg_date": "Tue, 23 Jan 2007 13:34:19 +0100",
"msg_from": "Laurent Manchon <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow result"
},
{
"msg_contents": "am Tue, dem 23.01.2007, um 13:34:19 +0100 mailte Laurent Manchon folgendes:\n> Hi,\n> \n> I have a slow response of my PostgreSQL database 7.4 using this query below\n> on a table with 800000 rows:\n> \n> select count(*)from tbl;\n\nPLEASE READ THE ANSWERS FOR YOUR OTHER MAILS.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 23 Jan 2007 13:43:19 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
{
"msg_contents": "At 07:34 AM 1/23/2007, Laurent Manchon wrote:\n>Hi,\n>\n>I have a slow response of my PostgreSQL database 7.4 using this query below\n>on a table with 800000 rows:\n\n1= Upgrade to the latest stable version of pg. That would be \n8.2.x You are very much in the Dark Ages pg version wise.\npg 8.x has significant IO enhancements. Especially compared to 7.4.\n\n>select count(*)from tbl;\n>\n>PostgreSQL return result in 28 sec every time.\n>although MS-SQL return result in 0.02 sec every time.\n\n2= pg actually counts how many rows there are in a table. MS-SQL \nlooks up a count value from a internal data table... ....which can be \nwrong in extraordinarily rare circumstances in a MVCC DBMS (which \nMS-SQL is !not!. MS-SQL uses the older hierarchical locking strategy \nfor data protection.)\nSince pg actually scans the table for the count, pg's count will \nalways be correct. No matter what.\n\nSince MS-SQL does not use MVCC, it does not have to worry about the \ncorner MVCC cases that pg does.\nOTOH, MVCC _greatly_ reduces the number of cases where one \ntransaction can block another compared to the locking strategy used in MS-SQL.\nThis means in real day to day operation, pg is very likely to handle \nOLTP loads and heavy loads better than MS-SQL will.\n\nIn addition, MS-SQL is a traditional Codd & Date table oriented \nDBMS. pg is an object oriented DBMS.\n\nTwo very different products with very different considerations and \ngoals (and initially designed at very different times historically.)\n\nCompare them under real loads using real queries if you are going to \ncompare them. Comparing pg and MS-SQL using \"fluff\" queries like \ncount(*) is both misleading and a waste of effort.\n\n\n>My server is a DELL PowerEdge 2600 with bi-processor Xeon at 3.2 Ghz\n>with 3GBytes RAM\n>\n>\n>My PostgreSQL Conf is\n>*********************\n>log_connections = yes\n>syslog = 2\n>effective_cache_size = 50000\n>sort_mem = 10000\n>max_connections = 200\n>shared_buffers = 3000\n>vacuum_mem = 32000\n>wal_buffers = 8\n>max_fsm_pages = 2000\n>max_fsm_relations = 100\n>\n>Can you tell me is there a way to enhence performance ?\nThere are extensive FAQs on what the above values should be for \npg. The lore is very different for pg 8.x vs pg 7.x\n\n>Thank you\nYou're welcome.\n\nRon Peacetree\n\n",
"msg_date": "Tue, 23 Jan 2007 09:43:35 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
},
{
"msg_contents": "Laurent Manchon wrote:\n> Hi,\n> \n> I have a slow response of my PostgreSQL database 7.4 using this query below\n> on a table with 800000 rows:\n> \n> select count(*)from tbl;\n> \n> PostgreSQL return result in 28 sec every time.\n\n\nCan you post the results of:\n\nanalyze verbose tbl;\nexplain analyze select count(*) from tbl;\n\nThe first will give us some info about how many pages tbl has (in 7.4 \nISTR it does not state the # of dead rows... but anyway), the second \nshould help us deduce why it is so slow.\n\nAlso as others have pointed out, later versions are quite a bit faster \nfor sequential scans...\n\nCheers\n\nMark\n",
"msg_date": "Wed, 24 Jan 2007 13:51:51 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow result"
}
] |
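A small aside on the point made in this thread that count(*) in PostgreSQL always scans the table: when an exact figure is not required, the planner's own statistics can be read instead. A minimal sketch, assuming the table is named tbl as in the original post; reltuples is only an estimate, refreshed by VACUUM/ANALYZE, so it can lag behind recent activity, and a join to pg_namespace would be needed if the table name is not unique across schemas.

-- Approximate row count from the statistics kept in pg_class
-- (cheap, but only as fresh as the last VACUUM/ANALYZE):
SELECT reltuples::bigint AS approximate_rows
FROM pg_class
WHERE relname = 'tbl';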
[
{
"msg_contents": "Does anyone have experience with using postgres for data warehousing?\nRight, I saw one post suggestion to use mysql for a mostly read-only\ndatabase ... but anyway, I think it's not a question to change the\ndatabase platform for this project, at least not today ;-)\n\nRalph Kimball seems to be some kind of guru on data warehousing, and\nin his books he's strongly recommending to have a date dimension -\nsimply a table describing all dates in the system, and having\nattributes for what day of the week it is, month, day of the month,\nweek number, bank holiday, anything special, etc. Well, it does make\nsense if adding lots of information there that cannot easily be pulled\nout from elsewhere - but as for now, I'm mostly only interessted in\ngrouping turnover/profit by weeks/months/quarters/years/weekdays. It\nseems so much bloated to store this information, my gut feeling tells it\nshould be better to generate them on the fly. Postgres even allows to\ncreate an index on an expression.\n\nThe question is ... I'm curious about what would yield the highest\nperformance - when choosing between:\n\n select extract(week from created), ...\n from some_table\n where ...\n group by extract(week from created), ...\n sort by extract(week from created), ...\n\nand:\n\n select date_dim.week_num, ...\n from some_table join date_dim ...\n where ...\n group by date_dim.week_num, ...\n sort by date_dim, week_num, ...\n\nThe date_dim table would eventually cover ~3 years of operation, that\nis less than 1000 rows.\n\n",
"msg_date": "Tue, 23 Jan 2007 13:49:37 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "extract(field from timestamp) vs date dimension"
},
{
"msg_contents": "On 1/23/07, Tobias Brox <[email protected]> wrote:\n>\n> Ralph Kimball seems to be some kind of guru on data warehousing, and\n> in his books he's strongly recommending to have a date dimension -\n> simply a table describing all dates in the system, and having\n\n\nI would tend to agree with this line of thought.\n\n\nout from elsewhere - but as for now, I'm mostly only interessted in\n> grouping turnover/profit by weeks/months/quarters/years/weekdays. It\n> seems so much bloated to store this information, my gut feeling tells it\n> should be better to generate them on the fly. Postgres even allows to\n> create an index on an expression.\n\n\nI guess go with your gut, but at some point the expressions are going to be\ntoo complicated to maintain, and inefficient.\n\nCalendar tables are very very common, because traditional date functions\nsimply can't define business logic (especially things like month end close,\nquarter end close, and year end close) that doesn't have any repeating\npatterns (every 4th friday, 1st monday in the quarter, etc). Sure you can\nstuff it into a function, but it just isn't as maintainable as a table.\n\n\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/23/07, Tobias Brox <[email protected]> wrote:\nRalph Kimball seems to be some kind of guru on data warehousing, andin his books he's strongly recommending to have a date dimension -simply a table describing all dates in the system, and having\nI would tend to agree with this line of thought. out from elsewhere - but as for now, I'm mostly only interessted in\ngrouping turnover/profit by weeks/months/quarters/years/weekdays. Itseems so much bloated to store this information, my gut feeling tells itshould be better to generate them on the fly. Postgres even allows to\ncreate an index on an expression.I guess go with your gut, but at some point the expressions are going to be too complicated to maintain, and inefficient.Calendar tables are very very common, because traditional date functions simply can't define business logic (especially things like month end close, quarter end close, and year end close) that doesn't have any repeating patterns (every 4th friday, 1st monday in the quarter, etc). Sure you can stuff it into a function, but it just isn't as maintainable as a table.\n-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Tue, 23 Jan 2007 08:24:34 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(field from timestamp) vs date dimension"
},
{
"msg_contents": "[Chad Wagner - Tue at 08:24:34AM -0500]\n> I guess go with your gut, but at some point the expressions are going to be\n> too complicated to maintain, and inefficient.\n\nThe layout of my system is quite flexible, so it should eventually be\nfairly trivial to throw in a date dimension at a later stage.\n\n> Calendar tables are very very common, because traditional date functions\n> simply can't define business logic (especially things like month end close,\n> quarter end close, and year end close) that doesn't have any repeating\n> patterns (every 4th friday, 1st monday in the quarter, etc). Sure you can\n> stuff it into a function, but it just isn't as maintainable as a table.\n\nSo far I haven't been bothered with anything more complex than \"clean\"\nweeks, months, quarters, etc.\n\nI suppose the strongest argument for introducing date dimensions already\nnow is that I probably will benefit from having conform and\nwell-designed dimensions when I will be introducing more data marts. As\nfor now I have only one fact table and some few dimensions in the\nsystem.\n\n",
"msg_date": "Tue, 23 Jan 2007 14:35:48 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: extract(field from timestamp) vs date dimension"
},
{
"msg_contents": "On 1/23/07, Tobias Brox <[email protected]> wrote:\n> Does anyone have experience with using postgres for data warehousing?\n> Right, I saw one post suggestion to use mysql for a mostly read-only\n> database ... but anyway, I think it's not a question to change the\n> database platform for this project, at least not today ;-)\n>\n> Ralph Kimball seems to be some kind of guru on data warehousing, and\n> in his books he's strongly recommending to have a date dimension -\n> simply a table describing all dates in the system, and having\n> attributes for what day of the week it is, month, day of the month,\n> week number, bank holiday, anything special, etc. Well, it does make\n> sense if adding lots of information there that cannot easily be pulled\n> out from elsewhere - but as for now, I'm mostly only interessted in\n> grouping turnover/profit by weeks/months/quarters/years/weekdays. It\n> seems so much bloated to store this information, my gut feeling tells it\n> should be better to generate them on the fly. Postgres even allows to\n> create an index on an expression.\n>\n> The question is ... I'm curious about what would yield the highest\n> performance - when choosing between:\n>\n> select extract(week from created), ...\n> from some_table\n> where ...\n> group by extract(week from created), ...\n> sort by extract(week from created), ...\n>\n> and:\n>\n> select date_dim.week_num, ...\n> from some_table join date_dim ...\n> where ...\n> group by date_dim.week_num, ...\n> sort by date_dim, week_num, ...\n>\n> The date_dim table would eventually cover ~3 years of operation, that\n> is less than 1000 rows.\n\n\nIn my opinion, I would make a date_dim table for this case. I would\nstrongly advice against making a date_id field, just use the date\nitself as the p-key (i wouldn't bother with RI links to the table\nthough).\n\nI would also however make a function and use this to make the record:\ncreate or replace function make_date_dim(in_date date) returns\ndate_dim as $$ [...]\n\nAnd make date_dim records this way:\ninsert into date_dim select * from make_dim('01/01/2001'::date);\n\n(or pre-insert with generate_series).\nnow you get the best of both worlds: you can join to the table for the\ngeneral case or index via function for special case indexes. for\nexample suppose you had to frequently count an account's sales by\nfiscal year quarter irrespective of year:\n\ncreate index q_sales_idx on account_sale(account_no,\n(make_dim(sale_date)).fiscal_quarter);\n\nalso you can use the function in place of a join if you want. In some\ncases the join may be better, though.\n\nmerlin\n",
"msg_date": "Tue, 23 Jan 2007 11:05:51 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(field from timestamp) vs date dimension"
},
{
"msg_contents": "Tobias Brox wrote:\n\n> \n> I suppose the strongest argument for introducing date dimensions already\n> now is that I probably will benefit from having conform and\n> well-designed dimensions when I will be introducing more data marts. As\n> for now I have only one fact table and some few dimensions in the\n> system.\n> \n\nAnother factors to consider is that end user tools (and end users) may \nfind a date/time dimension helpful.\n\nBest wishes\n\nMark\n",
"msg_date": "Wed, 24 Jan 2007 16:20:47 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extract(field from timestamp) vs date dimension"
}
] |
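A minimal sketch of the kind of date dimension discussed in this thread, along the lines Merlin suggests (the date itself as primary key, pre-populated with generate_series). The table name, column names, and date range are illustrative, and in practice the attribute list would be extended with the fiscal and holiday flags that cannot be derived from the date alone.

CREATE TABLE date_dim (
    the_date    date PRIMARY KEY,
    year        int NOT NULL,
    quarter     int NOT NULL,
    month       int NOT NULL,
    week_num    int NOT NULL,
    day_of_week int NOT NULL
);

-- Pre-populate roughly three years of dates (about 1100 rows):
INSERT INTO date_dim
SELECT d::date,
       extract(year    from d)::int,
       extract(quarter from d)::int,
       extract(month   from d)::int,
       extract(week    from d)::int,
       extract(dow     from d)::int
FROM generate_series('2005-01-01'::timestamp,
                     '2007-12-31'::timestamp,
                     '1 day') AS g(d);

Queries can then join on the_date and group by date_dim.week_num, or, as Merlin notes, an expression index can cover special cases without the join.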
[
{
"msg_contents": "Hello all,\n\n I have a setup in which four client machines access\na Postgres database (8.1.1) (on a Linux box). So,\nthere are connections from each machine to the\ndatabase; hence, the Linux box has about 2 postgres\nprocesses associated with each machine.\n\n I am using the JDBC driver\n(postgresql-8.1-404.jdbc3.jar) to talk to the\ndatabase. I am also using the Spring framework(1.2.2)\nand Hibernate (3.0.5) on top of JDBC. I use Apache's\nDBCP database connection pool (1.2.1).\n\n Now, there is one particular update that I make from\none of the client machines - this involves a\nreasonably large object graph (from the Java point of\nview). It deletes a bunch of rows (around 20 rows in\nall) in 4-5 tables and inserts another bunch into the\nsame tables.\n\n When I do this, I see a big spike in the CPU usage\nof postgres processes that are associated with ALL the\nclient machines, not just the one I executed the\ndelete/insert operation on. The spike seems to happen\na second or two AFTER the original update completes\nand last for a few seconds.\n\n Is it that this operation is forcibly clearing some\nclient cache on ALL the postgres processes? Why is\nthere such an interdependency? Can I set some\nparameter to turn this off?\n\nRegards and thanks,\nS.Aiylam\n\n\n\n\n \n____________________________________________________________________________________\nDo you Yahoo!?\nEveryone is raving about the all-new Yahoo! Mail beta.\nhttp://new.mail.yahoo.com\n",
"msg_date": "Tue, 23 Jan 2007 08:11:11 -0800 (PST)",
"msg_from": "Subramaniam Aiylam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres processes have a burst of CPU usage"
},
{
"msg_contents": "Subramaniam Aiylam wrote:\n> Now, there is one particular update that I make from\n> one of the client machines - this involves a\n> reasonably large object graph (from the Java point of\n> view). It deletes a bunch of rows (around 20 rows in\n> all) in 4-5 tables and inserts another bunch into the\n> same tables.\n> \n> When I do this, I see a big spike in the CPU usage\n> of postgres processes that are associated with ALL the\n> client machines, not just the one I executed the\n> delete/insert operation on. The spike seems to happen\n> a second or two AFTER the original update completes\n> and last for a few seconds.\n\nSo what are the other backends doing? They're not going to be using CPU \ncycles for nothing, they must be executing queries. Perhaps turn on \nstatement logging, and track process IDs.\n\nI can't think of any PostgreSQL caching that would be seriously affected \nby updating a few dozen rows. It might be that one of your java \nlibraries is clearing its cache though, causing it to issue more queries.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 24 Jan 2007 08:49:28 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres processes have a burst of CPU usage"
}
] |
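One quick way to act on the suggestion above of finding out what the other backends are doing is to snapshot pg_stat_activity during the spike. A sketch for the 8.1/8.2 column names (procpid, current_query); current_query is only populated when stats_command_string is enabled.

-- What is every backend running right now?
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;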
[
{
"msg_contents": "Hello,\n\nI discovered a query which is taking 70 seconds on 8.2.1 which used to take\nunder a second on 8.1.2. I was digging into what was causing it and I\nbelieve the problem is a view which the planner estimates will return 1 row\nwhen it actually returns 3500. When I join with the view, the planner ends\nup using a nested loop because it thinks the right branch will run once\ninstead of 3500 times. I've analyzed all the tables and played around with\nthe default_statistics_target, but still the planner estimates 1 row. I was\nwondering if anybody else has any other ideas? \n\nHere is the query the view is defined as:\n\nSELECT foo.fund_id, foo.owner_trader_id, foo.strategy_id, foo.cf_account_id,\nfoo.instrument_id, sum(foo.pos) AS pos, sum(foo.cost) AS cost\nFROM \n( \n\tSELECT om_position.fund_id, om_position.owner_trader_id,\nom_position.strategy_id, om_position.cf_account_id,\nom_position.instrument_id, om_position.pos, om_position.cost\n\tFROM om_position\n\tWHERE om_position.as_of_date = date(now())\n\tUNION ALL \n\tSELECT om_trade.fund_id, om_trade.owner_trader_id,\nom_trade.strategy_id, om_trade.cf_account_id, om_trade.instrument_id,\nom_trade.qty::numeric(22,9) AS pos, om_trade.cost\n\tFROM om_trade\n\tWHERE om_trade.process_state = 0 OR om_trade.process_state = 2\n) foo\nGROUP BY foo.fund_id, foo.owner_trader_id, foo.strategy_id,\nfoo.cf_account_id, foo.instrument_id;\n\n\n\nHere is explain analyze from both 8.1.2 and 8.2.1 with\ndefault_statistics_target=10 and tables freshly analyzed:\n\n\n\n\n8.1.2\nHashAggregate (cost=4760.33..4764.95 rows=308 width=168) (actual\ntime=56.873..71.293 rows=3569 loops=1)\n -> Append (cost=0.00..4675.85 rows=3072 width=54) (actual\ntime=0.037..38.261 rows=3715 loops=1)\n -> Index Scan using as_of_date_om_position_index on om_position\n(cost=0.00..4637.10 rows=3071 width=54) (actual time=0.031..14.722 rows=3559\nloops=1)\n Index Cond: (as_of_date = date(now()))\n -> Bitmap Heap Scan on om_trade (cost=4.01..8.03 rows=1 width=48)\n(actual time=0.118..0.917 rows=156 loops=1)\n Recheck Cond: ((process_state = 0) OR (process_state = 2))\n -> BitmapOr (cost=4.01..4.01 rows=1 width=0) (actual\ntime=0.079..0.079 rows=0 loops=1)\n -> Bitmap Index Scan on\nom_trade_partial_process_state_index (cost=0.00..2.00 rows=1 width=0)\n(actual time=0.060..0.060 rows=156 loops=1)\n Index Cond: (process_state = 0)\n -> Bitmap Index Scan on\nom_trade_partial_process_state_index (cost=0.00..2.00 rows=1 width=0)\n(actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (process_state = 2)\nTotal runtime: 82.398 ms\n\n8.2.1\nHashAggregate (cost=6912.51..6912.53 rows=1 width=200) (actual\ntime=19.005..24.137 rows=3569 loops=1)\n -> Append (cost=0.00..6406.73 rows=28902 width=200) (actual\ntime=0.037..11.569 rows=3715 loops=1)\n -> Index Scan using as_of_date_om_position_index on om_position\n(cost=0.00..4333.82 rows=2964 width=53) (actual time=0.035..4.884 rows=3559\nloops=1)\n Index Cond: (as_of_date = date(now()))\n -> Bitmap Heap Scan on om_trade (cost=464.40..1783.89 rows=25938\nwidth=49) (actual time=0.060..0.380 rows=156 loops=1)\n Recheck Cond: ((process_state = 0) OR (process_state = 2))\n -> BitmapOr (cost=464.40..464.40 rows=308 width=0) (actual\ntime=0.041..0.041 rows=0 loops=1)\n -> Bitmap Index Scan on\nom_trade_partial_process_state_index (cost=0.00..225.72 rows=154 width=0)\n(actual time=0.032..0.032 rows=156 loops=1)\n Index Cond: (process_state = 0)\n -> Bitmap Index Scan on\nom_trade_partial_process_state_index (cost=0.00..225.72 
rows=154 width=0)\n(actual time=0.003..0.003 rows=0 loops=1)\n Index Cond: (process_state = 2)\nTotal runtime: 27.193 ms\n\n\n\n\n\nHere is explain analyze from 8.2.1 with default_statistics_target=1000 and\ntables freshly analyzed:\n\n\n\n\nHashAggregate (cost=5344.36..5344.37 rows=1 width=200) (actual\ntime=18.826..23.950 rows=3569 loops=1)\n -> Append (cost=0.00..5280.01 rows=3677 width=200) (actual\ntime=0.031..11.606 rows=3715 loops=1)\n -> Index Scan using as_of_date_om_position_index on om_position\n(cost=0.00..5224.44 rows=3502 width=54) (actual time=0.029..4.903 rows=3559\nloops=1)\n Index Cond: (as_of_date = date(now()))\n -> Bitmap Heap Scan on om_trade (cost=9.91..18.79 rows=175\nwidth=49) (actual time=0.069..0.394 rows=156 loops=1)\n Recheck Cond: ((process_state = 0) OR (process_state = 2))\n -> BitmapOr (cost=9.91..9.91 rows=2 width=0) (actual\ntime=0.050..0.050 rows=0 loops=1)\n -> Bitmap Index Scan on\nom_trade_partial_process_state_index (cost=0.00..5.57 rows=2 width=0)\n(actual time=0.039..0.039 rows=156 loops=1)\n Index Cond: (process_state = 0)\n -> Bitmap Index Scan on\nom_trade_partial_process_state_index (cost=0.00..4.26 rows=1 width=0)\n(actual time=0.004..0.004 rows=0 loops=1)\n Index Cond: (process_state = 2)\nTotal runtime: 27.055 ms\n\n\n\nThanks,\n\n\nDave Dutcher\nTelluride Asset Management\n952.653.6411\n \n\n",
"msg_date": "Tue, 23 Jan 2007 16:08:47 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad Row Count Estimate on View with 8.2"
},
{
"msg_contents": "\"Dave Dutcher\" <[email protected]> writes:\n> I discovered a query which is taking 70 seconds on 8.2.1 which used to take\n> under a second on 8.1.2. I was digging into what was causing it and I\n> believe the problem is a view which the planner estimates will return 1 row\n> when it actually returns 3500.\n\nThis is evidently a failure of estimate_num_groups(). However, I do not\nsee any difference in that code between 8.1 and 8.2 branch tips. I do\nnotice a possibly-relevant change that was applied in 8.1.4:\n\n2006-05-02 00:34 tgl\n\n\t* src/backend/: optimizer/path/allpaths.c, utils/adt/selfuncs.c\n\t(REL8_1_STABLE): Avoid assuming that statistics for a parent\n\trelation reflect the properties of the union of its child relations\n\tas well. This might have been a good idea when it was originally\n\tcoded, but it's a fatally bad idea when inheritance is being used\n\tfor partitioning. It's better to have no stats at all than\n\tcompletely misleading stats. Per report from Mark Liberman.\n\t\n\tThe bug arguably exists all the way back, but I've only patched\n\tHEAD and 8.1 because we weren't particularly trying to support\n\tpartitioning before 8.1.\n\t\n\tEventually we ought to look at deriving union statistics instead of\n\tjust punting, but for now the drop kick looks good.\n\nI think this was only meant to apply to table inheritance situations,\nbut on reflection it might affect UNION queries too. The question is\nwhether the numbers it was using before really mean anything --- they\nseem to have been better-than-nothing in your particular case, but I'm\nnot sure that translates to a conclusion that we should depend on 'em.\n\nIn fact, since there isn't any \"parent relation\" in a UNION, I'm not\nsure that this patch actually changed your results ... but I'm not\nseeing what else would've ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 Jan 2007 02:44:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad Row Count Estimate on View with 8.2 "
},
{
"msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Tom Lane\n> \n> \n> In fact, since there isn't any \"parent relation\" in a UNION, I'm not\n> sure that this patch actually changed your results ... but I'm not\n> seeing what else would've ...\n> \n\nThanks for looking into it. I thought I might actually test if it was the\npatch you mentioned which changed my results, but I haven't had time.\nBecause you mentioned it was grouping on the results of a UNION ALL which\nwas throwing off the row estimate I changed my query from a UNION ALL/GROUP\nBY to a GROUP BY/FULL OUTER JOIN. The view runs a hair slower by itself,\nbut the better estimate of rows makes it work much better for joining with.\nIf anybody is curious, this is what I changed too:\n\nSELECT \ncoalesce(pos_set.fund_id, trade_set.fund_id) as fund_id,\ncoalesce(pos_set.owner_trader_id, trade_set.owner_trader_id) as\nowner_trader_id,\ncoalesce(pos_set.strategy_id, trade_set.strategy_id) as strategy_id,\ncoalesce(pos_set.cf_account_id, trade_set.cf_account_id) as cf_account_id,\ncoalesce(pos_set.instrument_id, trade_set.instrument_id) as instrument_id,\ncoalesce(pos_set.pos, 0) + coalesce(trade_set.pos, 0) as pos, \ncoalesce(pos_set.cost, 0) + coalesce(trade_set.cost, 0) as cost\nFROM\n(\nSELECT om_position.fund_id, om_position.owner_trader_id,\nom_position.strategy_id, om_position.cf_account_id,\nom_position.instrument_id, om_position.pos, om_position.cost\nFROM om_position\nWHERE om_position.as_of_date = date(now())\n) as pos_set\nfull outer join\n(\nSELECT om_trade.fund_id, om_trade.owner_trader_id, om_trade.strategy_id,\nom_trade.cf_account_id, om_trade.instrument_id,\nsum(om_trade.qty::numeric(22,9)) AS pos, sum(om_trade.cost) as cost\nFROM om_trade\nWHERE om_trade.process_state = 0 OR om_trade.process_state = 2\nGROUP BY om_trade.fund_id, om_trade.owner_trader_id, om_trade.strategy_id,\nom_trade.cf_account_id, om_trade.instrument_id\n) as trade_set \nON \npos_set.fund_id = trade_set.fund_id and pos_set.owner_trader_id =\ntrade_set.owner_trader_id and\npos_set.strategy_id = trade_set.strategy_id and pos_set.cf_account_id =\ntrade_set.cf_account_id and\npos_set.instrument_id = trade_set.instrument_id;\n\n\n\n",
"msg_date": "Sun, 28 Jan 2007 11:02:12 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad Row Count Estimate on View with 8.2 "
},
{
"msg_contents": "\"Dave Dutcher\" <[email protected]> writes:\n> Thanks for looking into it. I thought I might actually test if it was the\n> patch you mentioned which changed my results, but I haven't had time.\n> Because you mentioned it was grouping on the results of a UNION ALL which\n> was throwing off the row estimate I changed my query from a UNION ALL/GROUP\n> BY to a GROUP BY/FULL OUTER JOIN. The view runs a hair slower by itself,\n> but the better estimate of rows makes it work much better for joining with.\n\nI took another look and think I found the problem: 8.2's new code for\nflattening UNION ALL subqueries into \"append relations\" is failing to\ninitialize all the fields of the appendrel, which confuses\nestimate_num_groups (and perhaps other places). I think this will fix\nit for you.\n\n\t\t\tregards, tom lane\n\nIndex: allpaths.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/optimizer/path/allpaths.c,v\nretrieving revision 1.154\ndiff -c -r1.154 allpaths.c\n*** allpaths.c\t4 Oct 2006 00:29:53 -0000\t1.154\n--- allpaths.c\t28 Jan 2007 18:44:01 -0000\n***************\n*** 384,389 ****\n--- 384,395 ----\n \t}\n \n \t/*\n+ \t * Set \"raw tuples\" count equal to \"rows\" for the appendrel; needed\n+ \t * because some places assume rel->tuples is valid for any baserel.\n+ \t */\n+ \trel->tuples = rel->rows;\n+ \n+ \t/*\n \t * Finally, build Append path and install it as the only access path for\n \t * the parent rel.\t(Note: this is correct even if we have zero or one\n \t * live subpath due to constraint exclusion.)\n",
"msg_date": "Sun, 28 Jan 2007 13:47:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad Row Count Estimate on View with 8.2 "
},
{
"msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Tom Lane\n> \n> I took another look and think I found the problem: 8.2's new code for\n> flattening UNION ALL subqueries into \"append relations\" is failing to\n> initialize all the fields of the appendrel, which confuses\n> estimate_num_groups (and perhaps other places). I think this will fix\n> it for you.\n> \n\nI gave this a try on our test machine yesterday and it worked. The planner\nwas estimating that the group by on the union would return about 300 rows\nwhich is very similar to what 8.1.2 thought. Actually it returned about\n3000 rows, but still it is a good enough estimate to pick a plan which takes\n100ms instead of a plan which takes 100 seconds.\n\nThanks,\n\nDave\n\n\n",
"msg_date": "Tue, 30 Jan 2007 08:54:33 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad Row Count Estimate on View with 8.2 "
}
] |
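For anyone wanting to check whether their own build shows the misestimate discussed in this thread, a hypothetical stand-alone test (table and column names invented here) might look like the following; the interesting number is the estimated row count on the HashAggregate node versus the actual rows. Whether it reproduces the bug depends on the exact 8.2.x build, so treat it only as a sketch.

-- Two small tables with roughly 300 distinct values of k each:
CREATE TABLE t1 AS SELECT (i % 300) AS k FROM generate_series(1, 10000) AS i;
CREATE TABLE t2 AS SELECT (i % 300) AS k FROM generate_series(1, 10000) AS i;
ANALYZE t1;
ANALYZE t2;

-- Grouping over a flattened UNION ALL; on an affected 8.2.x build the
-- HashAggregate estimate can come out as 1 row instead of about 300:
EXPLAIN ANALYZE
SELECT k, count(*)
FROM (SELECT k FROM t1 UNION ALL SELECT k FROM t2) AS u
GROUP BY k;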
[
{
"msg_contents": "Hi,\n\nFor I can not find too much information about how to use vacuum, I want\nto ask some general information about the guideline of vacuum planning.\n\n1. How do we know if autovacuum is enough for my application, or should\n I setup a vacuum manually from cron for my application?\n\n2. How to set the GUC parameters for autovacuum?\nThere are two sets of parameters for autovacuum:\n - vacuum threshold and scale factor (500/0.2)\n - analyze threshold and scale factor(250/0.1)\nIs there any guideline to set these parameters? When does it need to\nchange the default values?\n \n3. How to tune cost-based delay vacuum?\nI had searched in performance list; it seems that most of the practices\nare based on experience / trial-and-error approach to meet the\nrequirement of disk utilization or CPU utilization. Is there any other\nguild line to set them?\n\nFor when autovacuum is turned on by default, if the parameters for\nvacuum have not been set well, it will make the system rather unstable.\nSo I just wonder if we should setup a section in the manual about the\ntips of vacuum, then many users can easily set the vacuum parameters for\ntheir system.\n\nBest Regards\nGaly Lee\nNTT OSS Center\n",
"msg_date": "Wed, 24 Jan 2007 14:37:44 +0900",
"msg_from": "Galy Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to plan for vacuum?"
},
{
"msg_contents": "Just have one example here:\n\nworkload: run pgbench in 365x24x7\ndatabase size: 100GB\n\nthe workload distribution:\n 06:00-24:00 100tps\n 00:00-06:00 20tps\n\nhow should we plan vacuum for this situation to get the highest performance?\n\nBest regards\nGaly\n\nGaly Lee wrote:\n> Hi,\n> \n> For I can not find too much information about how to use vacuum, I want\n> to ask some general information about the guideline of vacuum planning.\n> \n> 1. How do we know if autovacuum is enough for my application, or should\n> I setup a vacuum manually from cron for my application?\n> \n> 2. How to set the GUC parameters for autovacuum?\n> There are two sets of parameters for autovacuum:\n> - vacuum threshold and scale factor (500/0.2)\n> - analyze threshold and scale factor(250/0.1)\n> Is there any guideline to set these parameters? When does it need to\n> change the default values?\n> \n> 3. How to tune cost-based delay vacuum?\n> I had searched in performance list; it seems that most of the practices\n> are based on experience / trial-and-error approach to meet the\n> requirement of disk utilization or CPU utilization. Is there any other\n> guild line to set them?\n> \n> For when autovacuum is turned on by default, if the parameters for\n> vacuum have not been set well, it will make the system rather unstable.\n> So I just wonder if we should setup a section in the manual about the\n> tips of vacuum, then many users can easily set the vacuum parameters for\n> their system.\n",
"msg_date": "Wed, 24 Jan 2007 16:44:55 +0900",
"msg_from": "Galy Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to plan for vacuum?"
},
{
"msg_contents": "On Wed, Jan 24, 2007 at 02:37:44PM +0900, Galy Lee wrote:\n> 1. How do we know if autovacuum is enough for my application, or should\n> I setup a vacuum manually from cron for my application?\n\nGenerally I trust autovac unless there's some tables where it's critical\nthat they be vacuumed frequently, such as a queue table or a web session\ntable.\n\n> 2. How to set the GUC parameters for autovacuum?\n> There are two sets of parameters for autovacuum:\n> - vacuum threshold and scale factor (500/0.2)\n> ?$B!! - analyze threshold and scale factor(250/0.1)\n> Is there any guideline to set these parameters? When does it need to\n> change the default values?\n\nI find those are generally pretty good starting points; just bear in\nmind that it means 20% dead space.\n\n> 3. How to tune cost-based delay vacuum?\n> I had searched in performance list; it seems that most of the practices\n> are based on experience / trial-and-error approach to meet the\n> requirement of disk utilization or CPU utilization. Is there any other\n> guild line to set them?\n\nUnless you have a means for the database to monitor IO usage on it's\nown, I don't know that we have a choice...\n\nI'll generally start with a cost delay of 20ms and adjust based on IO\nutilization.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 24 Jan 2007 21:39:17 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to plan for vacuum?"
},
{
"msg_contents": "Jim C. Nasby wrote:\n\n> I'll generally start with a cost delay of 20ms and adjust based on IO\n> utilization.\n\nI've been considering set a default autovacuum cost delay to 10ms; does\nthis sound reasonable?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 25 Jan 2007 00:52:02 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to plan for vacuum?"
},
{
"msg_contents": "On Thu, Jan 25, 2007 at 12:52:02AM -0300, Alvaro Herrera wrote:\n> Jim C. Nasby wrote:\n> \n> > I'll generally start with a cost delay of 20ms and adjust based on IO\n> > utilization.\n> \n> I've been considering set a default autovacuum cost delay to 10ms; does\n> this sound reasonable?\n\nFor a lightly loaded system, sure. For a heavier load that might be too\nmuch, but of course that's very dependent on not only your hardware, but\nhow much you want vacuum to interfere with normal operations. Though,\nI'd say as a default it's probably better to be more aggressive rather\nthan less.\n\nAlso, it might be better to only set autovac_cost_delay by default;\npresumably if someone's running vacuum by hand they want it done pronto.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 24 Jan 2007 21:57:42 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to plan for vacuum?"
},
{
"msg_contents": "\n\nJim C. Nasby wrote:\n> On Wed, Jan 24, 2007 at 02:37:44PM +0900, Galy Lee wrote:\n>> 1. How do we know if autovacuum is enough for my application, or should\n>> I setup a vacuum manually from cron for my application?\n> \n> Generally I trust autovac unless there's some tables where it's critical\n> that they be vacuumed frequently, such as a queue table or a web session\n> table.\n\nSo how much can we trust autovac? I think at least the following cases \ncan not be covered by autovac now:\n - small but high update tables which are sensitive to garbage\n - very big tables which need a long time to be vacuumed.\n - when we need to adjust the the max_fsm_page\n\n>> 2. How to set the GUC parameters for autovacuum?\n>> There are two sets of parameters for autovacuum:\n>> - vacuum threshold and scale factor (500/0.2)\n>> ?$B!! - analyze threshold and scale factor(250/0.1)\n>> Is there any guideline to set these parameters? When does it need to\n>> change the default values?\n> \n> I find those are generally pretty good starting points; just bear in\n> mind that it means 20% dead space.\n\nso what is the principle to set them?\n - keep dead space lower than some disk limit\n - or keep the garbage rate lower than fillfactor\n or any other general principle?\n\n",
"msg_date": "Thu, 25 Jan 2007 19:29:20 +0900",
"msg_from": "Galy Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to plan for vacuum?"
},
{
"msg_contents": "\n\nJim C. Nasby wrote:\n> On Thu, Jan 25, 2007 at 12:52:02AM -0300, Alvaro Herrera wrote:\n>> Jim C. Nasby wrote:\n>>\n>>> I'll generally start with a cost delay of 20ms and adjust based on IO\n>>> utilization.\n>> I've been considering set a default autovacuum cost delay to 10ms; does\n>> this sound reasonable?\n\nThe problem in here is that we can not easily find a direct relation \nbetween\n Cost delay <-> CPU/IO utilization <--> real performance (response \ntime in peak hour)\n\nIt is very hard for any normal user to set this correctly. I think the \nexperience / trial-and-error approach is awful for the user, every DBA \nneed to be an expert of vacuum to keep the system stable. For vacuum is \nstill a big threat to the performance, a more intelligent way is needed.\n\nA lot of efforts have contributed to make vacuum to be a more \nlightweight operation, but I think we should still need more efforts on \nhow to make it can be used easily and safely.\n\nSo I have proposed the \"vacuum in time\" feature in previous; just let \nvacuum know how long can it runs, and then it will minimize the impact \nin the time span for you. Some argue that it should not have the \nmaintenance window assumption, but the most safely way is to run in the \nmaintenance window.\n\n\n\n\n",
"msg_date": "Thu, 25 Jan 2007 19:52:50 +0900",
"msg_from": "Galy Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] how to plan for vacuum?"
},
{
"msg_contents": "On Thu, Jan 25, 2007 at 07:29:20PM +0900, Galy Lee wrote:\n> so what is the principle to set them?\n> - keep dead space lower than some disk limit\n> - or keep the garbage rate lower than fillfactor\n> or any other general principle?\n\n\nHow do you measure \"dead space\" and \"garbage rate?\"\n\nI'm a newbe, I don't even know what these terms mean, but if I can measure\nthem, perhaps it will gel, and really if you can't measure the effect\nof a setting change, what have you got? I would hope any discussion on\nautovac parms would include some metric evaluation techniques. Thanks.\n",
"msg_date": "Thu, 25 Jan 2007 08:47:03 -0500",
"msg_from": "Ray Stell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to plan for vacuum?"
},
{
"msg_contents": "Please cc the list so others can reply as well...\n\nOn Thu, Jan 25, 2007 at 08:45:50AM +0100, Tomas Vondra wrote:\n> > On Wed, Jan 24, 2007 at 02:37:44PM +0900, Galy Lee wrote:\n> >> 1. How do we know if autovacuum is enough for my application, or should\n> >> I setup a vacuum manually from cron for my application?\n> > \n> > Generally I trust autovac unless there's some tables where it's critical\n> > that they be vacuumed frequently, such as a queue table or a web session\n> > table.\n> \n> You can tune thresholds and scale factors for that particular table\n> using pg_autovacuum. If you lower them appropriately, the vacuum will be\n> fired more often for that table - but don't lower them too much, just go\n> step by step until you reach values that are fine for you.\n\nThat doesn't work well if autovac gets tied up vacuuming a very large\ntable. Granted, when that happens there are considerations about the\nlong-running vacuum transaction (prior to 8.2), but in many systems\nyou'll still get some use out of other vacuums.\n-- \nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n",
"msg_date": "Thu, 25 Jan 2007 09:54:24 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to plan for vacuum?"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Jim C. Nasby wrote:\n> \n>> I'll generally start with a cost delay of 20ms and adjust based on IO\n>> utilization.\n> \n> I've been considering set a default autovacuum cost delay to 10ms; does\n> this sound reasonable?\n\nIt really depends on the system. Most of our systems run anywhere from\n10-25ms. I find that any more than that, Vacuum takes too long.\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Thu, 25 Jan 2007 08:04:49 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to plan for vacuum?"
},
{
"msg_contents": "On Thu, Jan 25, 2007 at 07:52:50PM +0900, Galy Lee wrote:\n> It is very hard for any normal user to set this correctly. I think the \n> experience / trial-and-error approach is awful for the user, every DBA \n> need to be an expert of vacuum to keep the system stable. For vacuum is \n> still a big threat to the performance, a more intelligent way is needed.\n\nAgreed.\n\n> So I have proposed the \"vacuum in time\" feature in previous; just let \n> vacuum know how long can it runs, and then it will minimize the impact \n> in the time span for you. Some argue that it should not have the \n> maintenance window assumption, but the most safely way is to run in the \n> maintenance window.\n\nMost systems I work on don't have a maintenance window. For those that\ndo, the window is at best once a day, and that's nowhere near often\nenough to be vacuuming any database I've run across. I'm not saying they\ndon't exist, but effort put into restricting vacuums to a maintenance\nwindow would serve very few people. It'd be much better to put effort\ninto things like piggyback vacuum.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 25 Jan 2007 10:22:47 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] how to plan for vacuum?"
},
{
"msg_contents": "On Thu, Jan 25, 2007 at 08:04:49AM -0800, Joshua D. Drake wrote:\n> \n> It really depends on the system. Most of our systems run anywhere from\n> 10-25ms. I find that any more than that, Vacuum takes too long.\n\n\nHow do you measure the impact of setting it to 12 as opposed to 15?\n",
"msg_date": "Thu, 25 Jan 2007 11:33:29 -0500",
"msg_from": "Ray Stell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to plan for vacuum?"
},
{
"msg_contents": "On Jan 25, 2007, at 10:33 AM, Ray Stell wrote:\n> On Thu, Jan 25, 2007 at 08:04:49AM -0800, Joshua D. Drake wrote:\n>>\n>> It really depends on the system. Most of our systems run anywhere \n>> from\n>> 10-25ms. I find that any more than that, Vacuum takes too long.\n>\n>\n> How do you measure the impact of setting it to 12 as opposed to 15?\n\nIf you've got a tool that will report disk utilization as a \npercentage it's very easy; I'll decrease the setting until I'm at \nabout 90% utilization with the system's normal workload (leaving some \nroom for spikes, etc). Sometimes I'll also tune the costs if reads \nvs. writes are a concern.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Fri, 26 Jan 2007 15:07:45 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] how to plan for vacuum?"
}
] |
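To make the cost-delay discussion above a little more concrete, here is a sketch of a manually scheduled (for example cron-driven) vacuum run that uses the cost-based delay settings; the 20ms value is the starting point suggested in the thread and the table name is illustrative. Per-table autovacuum thresholds in 8.1/8.2 would instead go into the pg_autovacuum catalog mentioned earlier in the thread.

-- Throttle this session's vacuum I/O, then vacuum and re-analyze:
SET vacuum_cost_delay = 20;    -- milliseconds to sleep once the cost limit is hit
SET vacuum_cost_limit = 200;   -- accumulated page cost that triggers the sleep
VACUUM ANALYZE big_table;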
[
{
"msg_contents": "Hi List,\n\nWhen auto vacuum is over the dead tuple are seems to get reduced, but\nphysical size of database do not decreases.\nWe are using Postgres 8.1.3 and this are the auto vacuum settings.\n\n autovacuum = on # enable autovacuum subprocess?\nautovacuum_naptime = 900 # time between autovacuum runs, in\nsecs\nautovacuum_vacuum_scale_factor = 0.1 # fraction of rel size before\nautovacuum_analyze_scale_factor = 0.05 # fraction of rel size before\nautovacuum_vacuum_cost_delay = 100 # default vacuum cost delay for\nHow do I make the physical size of the DB smaller without doing a full\nvacuum.\n\nCan anybody help me out in this.\n\n-- \nRegards\nGauri\n\nHi List,\n \nWhen auto vacuum is over the dead tuple are seems to get reduced, but physical size of database do not decreases.\nWe are using Postgres 8.1.3 and this are the auto vacuum settings.\n \n\nautovacuum = on # enable autovacuum subprocess?\nautovacuum_naptime = 900 # time between autovacuum runs, in secs\nautovacuum_vacuum_scale_factor = 0.1 # fraction of rel size beforeautovacuum_analyze_scale_factor = 0.05 # fraction of rel size beforeautovacuum_vacuum_cost_delay = 100 # default vacuum cost delay for\n\nHow do I make the physical size of the DB smaller without doing a full vacuum.\n \nCan anybody help me out in this.\n \n-- RegardsGauri",
"msg_date": "Wed, 24 Jan 2007 14:19:06 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto Vacuum Problem"
},
{
"msg_contents": "Gauri Kanekar wrote:\n> Hi List,\n> \n> When auto vacuum is over the dead tuple are seems to get reduced, but\n> physical size of database do not decreases.\n\nIt won't necessarily. An ordinary vacuum just keeps track of old rows \nthat can be cleared and re-used. A \"vacuum full\" is needed to compact \nthe table.\n\nIf you are running a normal vacuum often enough then the size of the \ndatabase should remain about the same (unless you insert more rows of \ncourse).\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 24 Jan 2007 08:54:05 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto Vacuum Problem"
},
{
"msg_contents": "Hi,\n\n> How do I make the physical size of the DB smaller without doing a \n> full vacuum.\n\nThis might turn out to be counterproductive, as subsequent inserts \nand updates will increase\nthe size of the physical database again, which might be expensive \ndepending on the underlying OS.\n-- \nHeiko W.Rupp\[email protected], http://www.dpunkt.de/buch/3-89864-429-4.html\n\n\n\n",
"msg_date": "Wed, 24 Jan 2007 10:12:43 +0100",
"msg_from": "\"Heiko W.Rupp\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto Vacuum Problem"
}
] |
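As a one-off follow-up to the explanation above: once a table has already bloated, routine (auto)vacuum will not shrink the files, and something along these lines, run in a quiet period because of the exclusive locks it takes, is the usual way to reclaim the space. The table name is illustrative.

-- Compact the table (rewrites it in place and can return space to the OS):
VACUUM FULL VERBOSE some_bloated_table;
-- VACUUM FULL on 8.1 tends to leave the indexes bloated, so rebuild them:
REINDEX TABLE some_bloated_table;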
[
{
"msg_contents": "\nHi!\n \nI'm planning to move from mysql to postgresql as I believe the latter\nperforms better when it comes to complex queries. The mysql database\nthat I'm running is about 150 GB in size, with 300 million rows in the\nlargest table. We do quite a lot of statistical analysis on the data\nwhich means heavy queries that run for days. Now that I've got two new\nservers with 32GB of ram I'm eager to switch to postgresql to improve\nperfomance. One database is to be an analysis server and the other an\nOLTP server feeding a web site with pages. \n \nI'm setting for Postgresql 8.1 as it is available as a package in Debian\nEtch AMD64.\n \nAs I'm new to postgresql I've googled to find some tips and found some\ninteresting links how configure and tune the database manager. Among\nothers I've found the PowerPostgresql pages with a performance checklist\nand annotated guide to postgresql.conf\n[http://www.powerpostgresql.com/]. And of course the postgresql site\nitself is a good way to start. RevSys have a short guide as well\n[http://www.revsys.com/writings/postgresql-performance.html]\n \nI just wonder if someone on this list have some tips from the real world\nhow to tune postgresql and what is to be avoided. AFAIK the following\nparameters seems important to adjust to start with are:\n\n-work_mem\n-maintenance_work_mem - 50% of the largest table?\n-shared_buffers - max value 50000\n-effective_cache_size - max 2/3 of available ram, ie 24GB on the\nhardware described above\n-shmmax - how large dare I set this value on dedicated postgres servers?\n-checkpoint_segments - this is crucial as one of the server is\ntransaction heavy\n-vacuum_cost_delay\n\nOf course some values can only be estimated after database has been feed\ndata and queries have been run in a production like manner. \n\nCheers\n// John\n\nPs. I sent to list before but the messages where withheld as I'm not \"a\nmember of any of the restrict_post groups\". This is perhaps due to the\nfact that we have changed email address a few weeks ago and there was a\nmismatch between addresses. So I apologize if any similar messages show\nup from me, just ignore them.\n",
"msg_date": "Fri, 26 Jan 2007 12:28:19 +0100",
"msg_from": "\"John Parnefjord\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning"
},
{
"msg_contents": "\nOn 26-Jan-07, at 6:28 AM, John Parnefjord wrote:\n\n>\n> Hi!\n>\n> I'm planning to move from mysql to postgresql as I believe the latter\n> performs better when it comes to complex queries. The mysql database\n> that I'm running is about 150 GB in size, with 300 million rows in the\n> largest table. We do quite a lot of statistical analysis on the data\n> which means heavy queries that run for days. Now that I've got two new\n> servers with 32GB of ram I'm eager to switch to postgresql to improve\n> perfomance. One database is to be an analysis server and the other an\n> OLTP server feeding a web site with pages.\n>\n> I'm setting for Postgresql 8.1 as it is available as a package in \n> Debian\n> Etch AMD64.\n>\n> As I'm new to postgresql I've googled to find some tips and found some\n> interesting links how configure and tune the database manager. Among\n> others I've found the PowerPostgresql pages with a performance \n> checklist\n> and annotated guide to postgresql.conf\n> [http://www.powerpostgresql.com/]. And of course the postgresql site\n> itself is a good way to start. RevSys have a short guide as well\n> [http://www.revsys.com/writings/postgresql-performance.html]\n>\n> I just wonder if someone on this list have some tips from the real \n> world\n> how to tune postgresql and what is to be avoided. AFAIK the following\n> parameters seems important to adjust to start with are:\n>\n> -work_mem\n> -maintenance_work_mem - 50% of the largest table?\nIsn't it possible for this to be larger than memory ?\n> -shared_buffers - max value 50000\nWhere does this shared buffers maximum come from ? It's wrong it \nshould be 1/4 of available memory (8G) to start and tuned from there\n\n> -effective_cache_size - max 2/3 of available ram, ie 24GB on the\n> hardware described above\n> -shmmax - how large dare I set this value on dedicated postgres \n> servers?\nas big as required by shared buffer setting above\n> -checkpoint_segments - this is crucial as one of the server is\n> transaction heavy\n> -vacuum_cost_delay\n>\n> Of course some values can only be estimated after database has been \n> feed\n> data and queries have been run in a production like manner.\n>\n> Cheers\n> // John\n>\n> Ps. I sent to list before but the messages where withheld as I'm \n> not \"a\n> member of any of the restrict_post groups\". This is perhaps due to the\n> fact that we have changed email address a few weeks ago and there \n> was a\n> mismatch between addresses. So I apologize if any similar messages \n> show\n> up from me, just ignore them.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Fri, 26 Jan 2007 09:07:21 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning"
},
{
"msg_contents": "Hello !\n\nAm Freitag 26 Januar 2007 12:28 schrieb John Parnefjord:\n> Hi!\n>\n> I'm planning to move from mysql to postgresql as I believe the latter\n> performs better when it comes to complex queries. The mysql database\n> that I'm running is about 150 GB in size, with 300 million rows in the\n> largest table. We do quite a lot of statistical analysis on the data\n> which means heavy queries that run for days. Now that I've got two new\n> servers with 32GB of ram I'm eager to switch to postgresql to improve\n> perfomance. One database is to be an analysis server and the other an\n> OLTP server feeding a web site with pages.\n>\n> I'm setting for Postgresql 8.1 as it is available as a package in Debian\n> Etch AMD64.\n>\n> As I'm new to postgresql I've googled to find some tips and found some\n> interesting links how configure and tune the database manager. Among\n> others I've found the PowerPostgresql pages with a performance checklist\n> and annotated guide to postgresql.conf\n> [http://www.powerpostgresql.com/]. And of course the postgresql site\n> itself is a good way to start. RevSys have a short guide as well\n> [http://www.revsys.com/writings/postgresql-performance.html]\n>\n> I just wonder if someone on this list have some tips from the real world\n> how to tune postgresql and what is to be avoided. AFAIK the following\n> parameters seems important to adjust to start with are:\n>\n> -work_mem\n> -maintenance_work_mem - 50% of the largest table?\n> -shared_buffers - max value 50000\n> -effective_cache_size - max 2/3 of available ram, ie 24GB on the\n\nDo you use a Opteron with a NUMA architecture ?\n\nYou could end up with switching pages between your memory nodes, which slowed \ndown heavily my server (Tyan 2895, 2 x 275 cpu, 8 GB)...\n\nTry first to use only one numa node for your cache.\n\n> hardware described above\n> -shmmax - how large dare I set this value on dedicated postgres servers?\n> -checkpoint_segments - this is crucial as one of the server is\n> transaction heavy\n> -vacuum_cost_delay\n>\n> Of course some values can only be estimated after database has been feed\n> data and queries have been run in a production like manner.\n>\n> Cheers\n> // John\n>\n> Ps. I sent to list before but the messages where withheld as I'm not \"a\n> member of any of the restrict_post groups\". This is perhaps due to the\n> fact that we have changed email address a few weeks ago and there was a\n> mismatch between addresses. So I apologize if any similar messages show\n> up from me, just ignore them.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Fri, 26 Jan 2007 16:17:20 +0100",
"msg_from": "Anton Rommerskirchen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning"
},
{
"msg_contents": "John,\n\n> -work_mem\n\nDepends on the number of concurrent queries you expect to run and what size \nsorts you expect them to do.\n\n> -maintenance_work_mem - 50% of the largest table?\n\nActually, in current code I've found that anything over 256mb doesn't actually \nget used.\n\n> -shared_buffers - max value 50000\n\nActually, I need to update that. On newer faster multi-core machines you may \nwant to allocate up to 1GB of shared buffers.\n\n> -effective_cache_size - max 2/3 of available ram, ie 24GB on the\n> hardware described above\n\nYes.\n\n> -shmmax - how large dare I set this value on dedicated postgres servers?\n\nSet it to 2GB and you'll be covered.\n\n> -checkpoint_segments - this is crucial as one of the server is\n> transaction heavy\n\nWell, it only helps you to raise this if you have a dedicated disk resource \nfor the xlog. Otherwise having more segments doesn't help you much.\n\n> -vacuum_cost_delay\n\nTry 200ms to start.\n\nAlso, set wal_buffers to 128.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Sun, 28 Jan 2007 15:24:24 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> -checkpoint_segments - this is crucial as one of the server is\n>> transaction heavy\n\n> Well, it only helps you to raise this if you have a dedicated disk resource \n> for the xlog. Otherwise having more segments doesn't help you much.\n\nAu contraire, large checkpoint_segments is important for write-intensive\nworkloads no matter what your disk layout is. If it's too small then\nyou'll checkpoint too often, and the resulting increase in page-image\nwrites will hurt. A lot.\n\nMy advice is to set checkpoint_warning to the same value as\ncheckpoint_timeout (typically 5 minutes or so) and then keep an eye on\nthe postmaster log for awhile. If you see more than a few \"checkpoints\nare occuring too frequently\" messages, you want to raise\ncheckpoint_segments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Jan 2007 18:38:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning "
},
{
"msg_contents": "At 06:24 PM 1/28/2007, Josh Berkus wrote:\n>John,\n>\n> > -work_mem\n>\n>Depends on the number of concurrent queries you expect to run and what size\n>sorts you expect them to do.\nEXPLAIN ANALYZE is your friend. It will tell you how much data each \nquery is manipulating and therefore how much memory each query will chew.\n\nThe next step is to figure out how many of each query will be running \nconcurrently.\nSumming those will tell you the maximum work_mem each kind of query \nwill be capable of using.\n\nIf you have a deep enough understanding of how your pg system is \nworking, then you can set work_mem on a per query basis to get the \nmost efficient use of the RAM in your system.\n\n\n> > -maintenance_work_mem - 50% of the largest table?\n>\n>Actually, in current code I've found that anything over 256mb \n>doesn't actually\n>get used.\nIs this considered a bug? When will this limit go away? Does \nwork_mem have a similar limit?\n\n\n> > -shared_buffers - max value 50000\n>\n>Actually, I need to update that. On newer faster multi-core \n>machines you may\n>want to allocate up to 1GB of shared buffers.\n>\n> > -effective_cache_size - max 2/3 of available ram, ie 24GB on the\n> > hardware described above\n>\n>Yes.\nWhy? \"max of 2/3 of available RAM\" sounds a bit \nhand-wavy. Especially with 32gb, 64GB, and 128GB systems available.\n\nIs there are hidden effective or hard limit here as well?\n\nFor a dedicated pg machine, I'd assume one would want to be very \naggressive about configuring the kernel, minimizing superfluous \nservices, and configuring memory use so that absolutely as much as \npossible is being used by pg and in the most intelligent way given \none's specific pg usage scenario.\n\n\n> > -shmmax - how large dare I set this value on dedicated postgres servers?\n>\n>Set it to 2GB and you'll be covered.\nI thought that on 32b systems the 2GB shmmax limit had been raised to 4GB?\nand that there essentially is no limit to shmmax on 64b systems?\n\nWhat are Oracle and EnterpriseDB recommending for shmmax these days?\n\n\nMy random thoughts,\nRon Peacetree \n\n",
"msg_date": "Mon, 29 Jan 2007 11:14:43 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning"
},
{
"msg_contents": "\n> What are Oracle and EnterpriseDB recommending for shmmax these days?\n\nAccording to Oracle \"set to a value half the size of physical memory\".\n[http://www.oracle.com/technology/tech/linux/validated-configurations/ht\nml/vc_dell6850-rhel4-cx500-1_1.html]\n\nI've been talking to an Oracle DBA and he said that they are setting\nthis to something between 70-80% on a dedicated database server. As long\nas one doesn't run other heavy processes and leave room for the OS.\n\nEnterpriseDB advocates: 250 KB + 8.2 KB * shared_buffers + 14.2 kB *\nmax_connections up to infinity\n[http://www.enterprisedb.com/documentation/kernel-resources.html]\n\n\n// John\n",
"msg_date": "Tue, 30 Jan 2007 11:05:03 +0100",
"msg_from": "\"John Parnefjord\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of John Parnefjord\n> Sent: Tuesday, January 30, 2007 2:05 AM\n> Subject: Re: [PERFORM] Tuning\n\n> EnterpriseDB advocates: 250 KB + 8.2 KB * shared_buffers + 14.2 kB *\n> max_connections up to infinity\n> [http://www.enterprisedb.com/documentation/kernel-resources.html]\n\n... + 8.1KB * wal_buffers + 6 * max_fsm_pages + 65 * max_fsm_relations.\nOkay, maybe getting pedantic; but if you're going to cite the ~256KB\nconst over head ... :-)\n\n",
"msg_date": "Wed, 31 Jan 2007 09:25:12 -0800",
"msg_from": "\"Mischa Sandberg\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning"
},
{
"msg_contents": "Tuners,\n\nallways be aware that results on Windows may be totally different!\n\nMy main customer is running PostgreSQL 8.1 on MINIMUM shared buffers\n\nmax_connections = 100 #\nshared_buffers = 200 # min 16 or max_connections*2, 8KB each\n\nI changed it to this value from the very low default 20000, and the system\nis responding better; especially after fixing the available memory setting\nwithin the planner.\n\n... frustrating part is, I could not replicate this behavious with pg_bench\n:(\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nPython: the only language with more web frameworks than keywords.\n\nTuners,allways be aware that results on Windows may be totally different!My main customer is running PostgreSQL 8.1 on MINIMUM shared buffersmax_connections = 100 # shared_buffers = 200 # min 16 or max_connections*2, 8KB each\nI changed it to this value from the very low default 20000, and the system is responding better; especially after fixing the available memory setting within the planner.... frustrating part is, I could not replicate this behavious with pg_bench :(\nHarald-- GHUM Harald Massa\npersuadere et programmareHarald Armin MassaReinsburgstraße 202b70197 Stuttgart0173/9409607fx 01212-5-13695179 -Python: the only language with more web frameworks than keywords.",
"msg_date": "Tue, 6 Feb 2007 11:21:41 +0100",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning"
}
] |
[
{
"msg_contents": "\nHi,\n\nI find various references in the list to this issue of queries\nbeing too slow because the planner miscalculates things and\ndecides to go for a sequenctial scan when an index is available\nand would lead to better performance.\n\nIs this still an issue with the latest version? I'm doing some\ntests right now, but I have version 7.4 (and not sure when I will\nbe able to spend the effort to move our system to 8.2).\n\nWhen I force it via \"set enable_seqscan to off\", the index scan\ntakes about 0.1 msec (as reported by explain analyze), whereas\nwith the default, it chooses a seq. scan, for a total execution\ntime around 10 msec!! (yes: 100 times slower!). The table has\n20 thousand records, and the WHERE part of the query uses one\nfield that is part of the primary key (as in, the primary key\nis the combination of field1,field2, and the query involves a\nwhere field1=1 and some_other_field=2). I don't think I'm doing\nsomething \"wrong\", and I find no reason not to expect the query\nplanner to choose an index scan.\n\nFor the time being, I'm using an explicit \"enable_seqscan off\"\nin the client code, before executing the select. But I wonder:\nIs this still an issue, or has it been solved in the latest\nversion?\n\nThanks,\n\nCarlos\n-- \n\n",
"msg_date": "Fri, 26 Jan 2007 21:18:42 -0500",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seqscan/Indexscan still a known issue?"
},
{
"msg_contents": "Carlos Moreno skrev:\n\n> When I force it via \"set enable_seqscan to off\", the index scan\n> takes about 0.1 msec (as reported by explain analyze), whereas\n >\n> For the time being, I'm using an explicit \"enable_seqscan off\"\n> in the client code, before executing the select. But I wonder:\n> Is this still an issue, or has it been solved in the latest\n> version?\n\nFor most queries it has never been an issue. Every once in a while there \nis a query that the planner makes a non-optimal plan for, but it's not \nthat common.\n\nIn general the optimizer has improved with every new version of pg.\n\nAlmost everyone I've talked to that has upgraded has got a faster \ndatabase tham before. It was like that for 7.4->8.0, for 8.0->8.1 and \nfor 8.1->8.2. So in your case going from 7.4->8.2 is most likely going \nto give a speedup (especially if you have some queries that isn't just \nsimple primary key lookups).\n\nIn your case it's hard to give any advice since you didn't share the \nEXPLAIN ANALYZE output with us. I'm pretty sure it's possible to tune pg \nso it makes the right choice even for this query of yours but without \nthe EXPLAIN ANALYZE output we would just be guessing anyway. If you want \nto share it then it might be helpful to show the plan both with and \nwithout seqscan enabled.\n\nHow often do you run VACUUM ANALYZE; on the database?\n\n/Dennis\n",
"msg_date": "Sat, 27 Jan 2007 08:26:13 +0100",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
},
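A short sketch of how to gather what Dennis asks for, using the column names from Carlos' mail and a placeholder table name (mytable); the same statement is explained with and without sequential scans allowed:

  VACUUM ANALYZE mytable;

  EXPLAIN ANALYZE SELECT * FROM mytable WHERE field1 = 1 AND some_other_field = 2;

  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT * FROM mytable WHERE field1 = 1 AND some_other_field = 2;
  SET enable_seqscan = on;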
{
"msg_contents": "Carlos Moreno wrote:\n>\n> Hi,\n>\n> I find various references in the list to this issue of queries\n> being too slow because the planner miscalculates things and\n> decides to go for a sequenctial scan when an index is available\n> and would lead to better performance.\n>\n> Is this still an issue with the latest version? I'm doing some\n> tests right now, but I have version 7.4 (and not sure when I will\n> be able to spend the effort to move our system to 8.2).\n>\n> When I force it via \"set enable_seqscan to off\", the index scan\n> takes about 0.1 msec (as reported by explain analyze), whereas\n> with the default, it chooses a seq. scan, for a total execution\n> time around 10 msec!! (yes: 100 times slower!). The table has\n> 20 thousand records, and the WHERE part of the query uses one\n> field that is part of the primary key (as in, the primary key\n> is the combination of field1,field2, and the query involves a\n> where field1=1 and some_other_field=2). I don't think I'm doing\n> something \"wrong\", and I find no reason not to expect the query\n> planner to choose an index scan.\n>\n> For the time being, I'm using an explicit \"enable_seqscan off\"\n> in the client code, before executing the select. But I wonder:\n> Is this still an issue, or has it been solved in the latest\n> version?\nPlease supply explain analyze for the query in both the index and \nsequence scan operation. We may be able to tell you why it's choosing \nthe wrong options. Guess 1 would be that your primary key is int8, but \ncan't be certain that is what's causing the problem.\n\nRegards\n\nRussell Smith\n>\n> Thanks,\n>\n> Carlos\n\n",
"msg_date": "Sat, 27 Jan 2007 18:35:23 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
},
{
"msg_contents": "On 27.01.2007, at 00:35, Russell Smith wrote:\n\n> Guess 1 would be that your primary key is int8, but can't be \n> certain that is what's causing the problem.\n\nWhy could that be a problem?\n\ncug\n\n",
"msg_date": "Sat, 27 Jan 2007 00:38:02 -0700",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
},
{
"msg_contents": "> \n> Hi,\n> \n> I find various references in the list to this issue of queries\n> being too slow because the planner miscalculates things and\n> decides to go for a sequenctial scan when an index is available\n> and would lead to better performance.\n> \n> Is this still an issue with the latest version? I'm doing some\n> tests right now, but I have version 7.4 (and not sure when I will\n> be able to spend the effort to move our system to 8.2).\n> \n> When I force it via \"set enable_seqscan to off\", the index scan\n> takes about 0.1 msec (as reported by explain analyze), whereas\n> with the default, it chooses a seq. scan, for a total execution\n> time around 10 msec!! (yes: 100 times slower!). The table has\n> 20 thousand records, and the WHERE part of the query uses one\n> field that is part of the primary key (as in, the primary key\n> is the combination of field1,field2, and the query involves a\n> where field1=1 and some_other_field=2). I don't think I'm doing\n> something \"wrong\", and I find no reason not to expect the query\n> planner to choose an index scan.\n\n1) I'm missing a very important part - information about the settings\n in postgresql.conf, especially effective cache size, random page\n cost, etc. What hw are you using (RAM size, disk speed etc.)?\n\n2) Another thing I'm missing is enough information about the table\n and the query itself. What is the execution plan used? Was the table\n modified / vacuumed / analyzed recently?\n\nWithout these information it's completely possible the postgresql is\nusing invalid values and thus generating suboptimal execution plan.\nThere are many cases when the sequential scan is better (faster, does\nless I/O etc.) than the index scan.\n\nFor example if the table has grown and was not analyzed recently, the\npostgresql may still believe it's small and thus uses the sequential\nscan. Or maybe the effective case size is set improperly (too low in\nthis case) thus the postgresql thinks just a small fraction of data is\ncached, which means a lot of scattered reads in case of the index -\nthat's slower than sequential reads.\n\nThere are many such cases - the common belief that index scan is always\nbetter than the sequential scan is incorrect. But most of these cases\ncan be identified using explain analyze output (which is missing in your\npost).\n\nThe data supplied by you are not a 'proof' the index scan is better than\nsequential scan in this case, as the data might be cached due to\nprevious queries.\n\nThe port to 8.x might help, as some of the settings in postgresql.conf\nuse different default values and the stats used by the planner might be\n'freshier' than those in the current database.\n\nMy recommendation:\n\n1) send us the execution plan, that is use the EXPLAIN ANALYZE and send\n us the output\n\n2) try to use ANALYZE on the table and run the queries again\n\n3) review the settings in postgresql - a nice starting point is here\n\n http://www.powerpostgresql.com/PerfList\n\n (Yes, it's for Pg 8.0 but the basics are the same).\n\nTomas\n",
"msg_date": "Sat, 27 Jan 2007 10:54:22 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
},
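A minimal sketch of checking the settings Tomas points at from a psql session; the value used in SET below is only an illustration, not a recommendation:

  SHOW effective_cache_size;
  SHOW random_page_cost;

  -- experiment with a lower random_page_cost for this session only, then re-run EXPLAIN ANALYZE
  SET random_page_cost = 2;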
{
"msg_contents": "Guido Neitzer wrote:\n> On 27.01.2007, at 00:35, Russell Smith wrote:\n>\n>> Guess 1 would be that your primary key is int8, but can't be certain \n>> that is what's causing the problem.\n>\n> Why could that be a problem?\nBefore 8.0, the planner would not choose an index scan if the types were \ndifferent int8_col = const, int8_col = 4.\n4 in this example is cast to int4. int8 != int4. So the planner will \nnot choose an index scan.\n\nRegards\n\nRussell Smith\n>\n> cug\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n\n",
"msg_date": "Sat, 27 Jan 2007 21:44:10 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
},
{
"msg_contents": "On Sat, 2007-01-27 at 21:44 +1100, Russell Smith wrote:\n> Guido Neitzer wrote:\n> > On 27.01.2007, at 00:35, Russell Smith wrote:\n> >\n> >> Guess 1 would be that your primary key is int8, but can't be certain \n> >> that is what's causing the problem.\n> >\n> > Why could that be a problem?\n> Before 8.0, the planner would not choose an index scan if the types were \n> different int8_col = const, int8_col = 4.\n> 4 in this example is cast to int4. int8 != int4. So the planner will \n> not choose an index scan.\n\nBut, in 7.4 setting enable_seqscan off would not make it use that index.\nFor the OP, the problem is likely either that the stats for the column\nare off, effective_cache_size is set too low, and / or random_page_cost\nis too high. there are other possibilities as well.\n\nFYI, I upgraded the server we use at work to scan a statistical db of\nour production performance, and the queries we run there, which take\nanywhere from a few seconds to 20-30 minutes, run much faster. About an\nhour after the upgrade I had a user ask what I'd done to the db to make\nit so much faster. The upgrade was 7.4 to 8.1 btw... still testing\n8.2, and it looks very good.\n",
"msg_date": "Sat, 27 Jan 2007 05:33:28 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
},
{
"msg_contents": "Tomas Vondra wrote:\n\n>>\n>>When I force it via \"set enable_seqscan to off\", the index scan\n>>takes about 0.1 msec (as reported by explain analyze), whereas\n>>with the default, it chooses a seq. scan, for a total execution\n>>time around 10 msec!! (yes: 100 times slower!). The table has\n>>20 thousand records, and the WHERE part of the query uses one\n>>field that is part of the primary key (as in, the primary key\n>>is the combination of field1,field2, and the query involves a\n>>where field1=1 and some_other_field=2). I don't think I'm doing\n>>something \"wrong\", and I find no reason not to expect the query\n>>planner to choose an index scan.\n>>\n>\n>1) I'm missing a very important part - information about the settings\n> in postgresql.conf, especially effective cache size, random page\n> cost, etc. What hw are you using (RAM size, disk speed etc.)?\n>\n\nshow all; responds with (I'm leaving only the ones I think could be\nthe relevant ones):\n\n client_encoding | SQL_ASCII\n commit_delay | 0\n commit_siblings | 5\n cpu_index_tuple_cost | 0.001\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n deadlock_timeout | 1000\n effective_cache_size | 1000\n enable_hashagg | on\n enable_hashjoin | on\n enable_indexscan | on\n enable_mergejoin | on\n enable_nestloop | on\n enable_seqscan | on\n enable_sort | on\n enable_tidscan | on\n from_collapse_limit | 8\n fsync | on\n geqo | on\n geqo_effort | 1\n geqo_generations | 0\n geqo_pool_size | 0\n geqo_selection_bias | 2\n geqo_threshold | 11\n join_collapse_limit | 8\n max_connections | 100\n max_expr_depth | 10000\n max_files_per_process | 1000\n max_fsm_pages | 20000\n max_fsm_relations | 1000\n max_locks_per_transaction | 64\n pre_auth_delay | 0\n random_page_cost | 4\n regex_flavor | advanced\n server_encoding | SQL_ASCII\n server_version | 7.4.5\n shared_buffers | 62000\n sort_mem | 1024\n statement_timeout | 0\n vacuum_mem | 8192\n virtual_host | unset\n wal_buffers | 8\n wal_debug | 0\n wal_sync_method | fdatasync\n\n\nAny obvious red flag on these?\n\nThe HW/SW is: Fedora Core 2 running on a P4 3GHz HT, with 1GB of\nRAM and 120GB SATA drive.\n\n>\n>2) Another thing I'm missing is enough information about the table\n> and the query itself. What is the execution plan used? Was the table\n> modified / vacuumed / analyzed recently?\n>\n\nI vacuum analyze the entire DB daily, via a cron entry (at 4AM).\n\nBut I think the problem is that this particular table had not been\nvacuum analyzed after having inserted the 20000 records (the\nquery planner was giving me seq. scan when the table had about\na dozen records --- and seq. scan was, indeed, 10 times faster;\nas a test, to make sure that the query planner would do the right\nthing when the amount of records was high, I inserted 20000\nrecords, and tried again --- now the seq. scan was 100 times\nslower, but it was still chosen (at that point was that I did a\nsearch through the archives and then posted the question).\n\nBut now, after reading the replies, I did a vacuum analyze for\nthis table, and now the query planner is choosing the Index\nscan.\n\n>Without these information it's completely possible the postgresql is\n>using invalid values and thus generating suboptimal execution plan.\n>There are many cases when the sequential scan is better (faster, does\n>less I/O etc.) 
than the index scan.\n>\n\nBut as the tests yesterday revealed, this was not the case\n(explain analyze was reporting execution times showing index\nscan 100 times faster!)\n\n>For example if the table has grown and was not analyzed recently\n>\n\nOk, now I'm quite sure that this is, indeed, the case (as\nyou can see from my description above)\n\n>postgresql may still believe it's small and thus uses the sequential\n>scan. Or maybe the effective case size is set improperly (too low in\n>this case) thus the postgresql thinks just a small fraction of data is\n>cached, which means a lot of scattered reads in case of the index -\n>that's slower than sequential reads.\n>\n\nBut these values are all defaults (I think I played with the shared\nbuffers size, following some guidelines I read in the PostgreSQL\ndocumentation), which is why I felt that I was not doing something\n\"wrong\" which would be at fault for making the query planner do\nthe wrong thing (well, nothing wrong in the query and the table\ndefinition --- there was indeed something wrong on my side).\n\n>There are many such cases - the common belief that index scan is always\n>better than the sequential scan is incorrect. \n>\n\nNo, you can rest assured that this was not the case with me! I\ndo understand that basic notion that, for example, the bubble\nsort is faster than most NlogN sort algorithms if you have an\narray of 3 or 4 elements (or depending on the nature of the\ndata, if they are always close to sorted, etc.)\n\nOne thing is that, the cases where seq. scan are faster tend\nto be when there aren't many records, and therefore, the\nexecution times are low anyway --- this seems like an argument\nin favor of being biased in favor of index scans; but yes, I\nguess if one has to do several thousands queries like those,\nthen the fact that the query takes 0.5 ms instead of 1 ms\ndoes count.\n\n>But most of these cases\n>can be identified using explain analyze output (which is missing in your\n>post).\n>\nI can't reproduce now teh seq. scan, and given that I seem to\nhave found the reason for the unexpected result, there probably\nis no point in showing you something that exhibits no problem\nto be debugged.\n\n>\n>The data supplied by you are not a 'proof' the index scan is better than\n>sequential scan in this case, as the data might be cached due to\n>previous queries.\n>\n\nNo, that was not the case (another notion that is quite clear\nin my mind :-)). I repeated ll queries no less than 10 times ---\nthen alternating with enable_seqscan on and off, etc.). The times\nI supplied seemed to be the \"asymptotic\" values, once things had\nbeen cached as much as they would.\n\n>\n>The port to 8.x might help\n>\nGiven this, and some of the other replies, I guess I'll move\nthis way up in my list of top-priority things to do.\n\n>\n>2) try to use ANALYZE on the table and run the queries again\n>\n\nBingo! :-)\n\n>\n>3) review the settings in postgresql - a nice starting point is here\n>\n> http://www.powerpostgresql.com/PerfList\n>\n\nI'll definitely take a look at this, so that I can figure thigns\nout better if something similar arises in the future.\n\nThanks,\n\nCarlos\n--\n\n",
"msg_date": "Sat, 27 Jan 2007 12:09:16 -0500",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
},
{
"msg_contents": "Carlos Moreno <[email protected]> writes:\n> But I think the problem is that this particular table had not been\n> vacuum analyzed after having inserted the 20000 records (the\n> query planner was giving me seq. scan when the table had about\n> a dozen records --- and seq. scan was, indeed, 10 times faster;\n> as a test, to make sure that the query planner would do the right\n> thing when the amount of records was high, I inserted 20000\n> records, and tried again --- now the seq. scan was 100 times\n> slower, but it was still chosen (at that point was that I did a\n> search through the archives and then posted the question).\n\n> But now, after reading the replies, I did a vacuum analyze for\n> this table, and now the query planner is choosing the Index\n> scan.\n\nOne reason you might consider updating is that newer versions check the\nphysical table size instead of unconditionally believing\npg_class.relpages/reltuples. Thus, they're much less likely to get\nfooled when a table has grown substantially since it was last vacuumed\nor analyzed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Jan 2007 12:28:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan/Indexscan still a known issue? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> One reason you might consider updating is that newer versions check the\n>\n>physical table size instead of unconditionally believing\n>pg_class.relpages/reltuples. Thus, they're much less likely to get\n>fooled when a table has grown substantially since it was last vacuumed\n>or analyzed.\n> \n>\n\nSounds good. Obviously, there seem to be plenty of reasons to\nupgrade, as pointed out in several of the previous replies; I\nwould not rank this one as one of the top reasons to upgrade,\nsince every time I've encountered this issue (planner selecting\nseq. scan when I'm convinced it should choose an index scan), I\ncan always get away with forcing it to use an index scan, even\nif it feels like the wrong solution.\n\nBut still, I guess what you point out comes as part of an array\nof improvements that will contribute to much better performance\nanyway!\n\nI'm sure I've said it countless times, but it feels again like\nthe right time to say it: thank you so much for all the help\nand all the effort the PG team has put in making this such a\ngreat product --- improvement after improvement!\n\nThanks,\n\nCarlos\n--\n\n",
"msg_date": "Sat, 27 Jan 2007 16:14:44 -0500",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seqscan/Indexscan still a known issue?"
}
] |
[
{
"msg_contents": "PostgreSQL version: 8.2.1\nOS: Windows Server 2003\n\nI have a relatively simple query where the planner chooses a \nsequential scan when using the IN operator but chooses an index scan \nwhen using logically equivalent multiple OR expressions. Here is the \ntable structure and the two versions of the query:\n\nCREATE TABLE pool_sample\n(\n id integer NOT NULL,\n state character varying(25) NOT NULL,\n not_pooled_reason character varying(25) NOT NULL,\n \"comment\" character varying(255),\n CONSTRAINT \"pk_poolSample_id\" PRIMARY KEY (id)\n)\nWITHOUT OIDS;\n\nCREATE INDEX \"idx_poolSample_state\"\n ON pool_sample\n USING btree\n (state);\n\n\nThe following query uses a sequential scan (i.e. filter) on the \n\"state\" column and takes about 5 seconds to execute (up to ~45 \nseconds with an \"empty\" cache):\nSELECT * FROM pool_sample ps\nWHERE ps.state IN ('PENDING_REPOOL_REVIEW', 'READY_FOR_REPOOL');\n\nThis version of the query uses an index scan on \"state\" and takes \nabout 50 milliseconds:\nSELECT * FROM pool_sample ps\nWHERE ps.state = 'PENDING_REPOOL_REVIEW' OR ps.state = \n'READY_FOR_REPOOL';\n\nThere are over 10 million rows in the pool_sample table and 518 rows \nmeet the given criteria. In the first query, the planner estimates \nthat nearly 10 million rows will be returned (i.e. almost all rows in \nthe table). In the second query, the planner estimates 6830 rows, \nwhich seems close enough for the purposes of planning.\n\nIf I explicitly cast the state column to text, the IN query uses an \nindex scan and performs just as well as the multiple OR version:\nSELECT * FROM pool_sample ps\nWHERE ps.state::text IN ('PENDING_REPOOL_REVIEW', 'READY_FOR_REPOOL');\n\nSo it would appear that the planner automatically casts the state \ncolumn to text within an OR expression but does not perform the cast \nin an IN expression.\n\nOur SQL is generated from an O/R mapper, so it's non-trivial (or at \nleast undesirable) to hand tune every query like this with an \nexplicit type cast. The only option I've come up with is to define \nthe state column as text in the first place, thus avoiding the need \nto cast. Would this work? Are there any other/better options?\n\nThanks,\n-Ryan\n\n\n\n\n",
"msg_date": "Sat, 27 Jan 2007 15:34:56 -0800",
"msg_from": "Ryan Holmes <[email protected]>",
"msg_from_op": true,
"msg_subject": "IN operator causes sequential scan (vs. multiple OR expressions) "
},
{
"msg_contents": "Ryan Holmes <[email protected]> writes:\n> I have a relatively simple query where the planner chooses a \n> sequential scan when using the IN operator but chooses an index scan \n> when using logically equivalent multiple OR expressions.\n\nEXPLAIN ANALYZE for both, please?\n\nIf you set enable_seqscan = off, does that force an indexscan, and if so\nwhat does EXPLAIN ANALYZE show in that case?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Jan 2007 18:53:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IN operator causes sequential scan (vs. multiple OR expressions) "
},
{
"msg_contents": "\nOn Jan 27, 2007, at 3:53 PM, Tom Lane wrote:\n\n> Ryan Holmes <[email protected]> writes:\n>> I have a relatively simple query where the planner chooses a\n>> sequential scan when using the IN operator but chooses an index scan\n>> when using logically equivalent multiple OR expressions.\n>\n> EXPLAIN ANALYZE for both, please?\n>\n> If you set enable_seqscan = off, does that force an indexscan, and \n> if so\n> what does EXPLAIN ANALYZE show in that case?\n>\n> \t\t\tregards, tom lane\n\nWow, I didn't expect such a quick response -- thank you!\nNote: I rebuilt the state column index and ran a VACUUM ANALYZE since \nmy original post, so the planner's \"rows\" estimate is now different \nthan the 6830 I mentioned. The planner estimate is actually *less* \naccurate now, but still in the ballpark relatively speaking.\n\nHere is the EXPLAIN ANALYZE for both queries with enable_seqscan = on :\n\nEXPLAIN ANALYZE SELECT * FROM pool_sample ps\nWHERE ps.state = 'PENDING_REPOOL_REVIEW' OR ps.state = \n'READY_FOR_REPOOL';\n\nBitmap Heap Scan on pool_sample ps (cost=985.51..61397.50 rows=50022 \nwidth=539) (actual time=13.560..39.377 rows=518 loops=1)\n Recheck Cond: (((state)::text = 'PENDING_REPOOL_REVIEW'::text) OR \n((state)::text = 'READY_FOR_REPOOL'::text))\n -> BitmapOr (cost=985.51..985.51 rows=50084 width=0) (actual \ntime=9.628..9.628 rows=0 loops=1)\n -> Bitmap Index Scan on \n\"idx_poolSample_state\" (cost=0.00..480.25 rows=25042 width=0) \n(actual time=0.062..0.062 rows=4 loops=1)\n Index Cond: ((state)::text = \n'PENDING_REPOOL_REVIEW'::text)\n -> Bitmap Index Scan on \n\"idx_poolSample_state\" (cost=0.00..480.25 rows=25042 width=0) \n(actual time=9.563..9.563 rows=514 loops=1)\n Index Cond: ((state)::text = 'READY_FOR_REPOOL'::text)\nTotal runtime: 39.722 ms\n\n\nEXPLAIN ANALYZE SELECT * FROM pool_sample ps\nWHERE ps.state IN ('PENDING_REPOOL_REVIEW', 'READY_FOR_REPOOL');\n\nSeq Scan on pool_sample ps (cost=0.00..331435.92 rows=9667461 \nwidth=539) (actual time=1060.472..47584.542 rows=518 loops=1)\n Filter: ((state)::text = ANY \n(('{PENDING_REPOOL_REVIEW,READY_FOR_REPOOL}'::character varying \n[])::text[]))\nTotal runtime: 47584.698 ms\n\n\n\nAnd now with enable_seqscan = off:\n\nEXPLAIN ANALYZE SELECT * FROM pool_sample ps\nWHERE ps.state = 'PENDING_REPOOL_REVIEW' OR ps.state = \n'READY_FOR_REPOOL';\n\nBitmap Heap Scan on pool_sample ps (cost=985.51..61397.50 rows=50022 \nwidth=539) (actual time=0.324..0.601 rows=518 loops=1)\n Recheck Cond: (((state)::text = 'PENDING_REPOOL_REVIEW'::text) OR \n((state)::text = 'READY_FOR_REPOOL'::text))\n -> BitmapOr (cost=985.51..985.51 rows=50084 width=0) (actual \ntime=0.287..0.287 rows=0 loops=1)\n -> Bitmap Index Scan on \n\"idx_poolSample_state\" (cost=0.00..480.25 rows=25042 width=0) \n(actual time=0.109..0.109 rows=4 loops=1)\n Index Cond: ((state)::text = \n'PENDING_REPOOL_REVIEW'::text)\n -> Bitmap Index Scan on \n\"idx_poolSample_state\" (cost=0.00..480.25 rows=25042 width=0) \n(actual time=0.176..0.176 rows=514 loops=1)\n Index Cond: ((state)::text = 'READY_FOR_REPOOL'::text)\nTotal runtime: 0.779 ms\n\n\nEXPLAIN ANALYZE SELECT * FROM pool_sample ps\nWHERE ps.state IN ('PENDING_REPOOL_REVIEW', 'READY_FOR_REPOOL');\n\nBitmap Heap Scan on pool_sample ps (cost=150808.51..467822.04 \nrows=9667461 width=539) (actual time=0.159..0.296 rows=518 loops=1)\n Recheck Cond: ((state)::text = ANY \n(('{PENDING_REPOOL_REVIEW,READY_FOR_REPOOL}'::character varying \n[])::text[]))\n -> Bitmap Index Scan on \n\"idx_poolSample_state\" 
(cost=0.00..148391.65 rows=9667461 width=0) \n(actual time=0.148..0.148 rows=518 loops=1)\n Index Cond: ((state)::text = ANY \n(('{PENDING_REPOOL_REVIEW,READY_FOR_REPOOL}'::character varying \n[])::text[]))\nTotal runtime: 0.445 ms\n\n\n\nSo, yes, disabling seqscan does force an index scan for the IN \nversion. My question now is, how do I get PostgreSQL to make the \n\"right\" decision without disabling seqscan?\n\nHere are the non-default resource usage and query tuning settings \nfrom postgresql.conf:\nshared_buffers = 512MB\nwork_mem = 6MB\nmaintenance_work_mem = 256MB\nrandom_page_cost = 3.0\neffective_cache_size = 1536MB\nfrom_collapse_limit = 12\njoin_collapse_limit = 12\n\nThe server has 4GB RAM, 2 X 2.4GHz Opteron dual core procs, 5 x 15k \nRPM disks in a RAID 5 array and runs Windows Server 2003 x64.\n\nThanks,\n-Ryan\n",
"msg_date": "Sat, 27 Jan 2007 17:34:12 -0800",
"msg_from": "Ryan Holmes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IN operator causes sequential scan (vs. multiple OR expressions) "
},
{
"msg_contents": "Ryan Holmes <[email protected]> writes:\n> So, yes, disabling seqscan does force an index scan for the IN \n> version. My question now is, how do I get PostgreSQL to make the \n> \"right\" decision without disabling seqscan?\n\nI pinged you before because in a trivial test case I got\nindexscans out of both text and varchar cases:\n\nregression=# create table foo (f1 text unique, f2 varchar(25) unique);\nNOTICE: CREATE TABLE / UNIQUE will create implicit index \"foo_f1_key\" for table \"foo\"\nNOTICE: CREATE TABLE / UNIQUE will create implicit index \"foo_f2_key\" for table \"foo\"\nCREATE TABLE\nregression=# explain select * from foo where f1 in ('foo', 'bar');\n QUERY PLAN \n-------------------------------------------------------------------------\n Bitmap Heap Scan on foo (cost=4.52..9.86 rows=2 width=61)\n Recheck Cond: (f1 = ANY ('{foo,bar}'::text[]))\n -> Bitmap Index Scan on foo_f1_key (cost=0.00..4.52 rows=2 width=0)\n Index Cond: (f1 = ANY ('{foo,bar}'::text[]))\n(4 rows)\n\nregression=# explain select * from foo where f2 in ('foo', 'bar');\n QUERY PLAN \n-------------------------------------------------------------------------------------\n Bitmap Heap Scan on foo (cost=6.59..17.27 rows=10 width=61)\n Recheck Cond: ((f2)::text = ANY (('{foo,bar}'::character varying[])::text[]))\n -> Bitmap Index Scan on foo_f2_key (cost=0.00..6.59 rows=10 width=0)\n Index Cond: ((f2)::text = ANY (('{foo,bar}'::character varying[])::text[]))\n(4 rows)\n\nBut on closer inspection the second case is not doing the right thing:\nnotice the rowcount estimate is 10, whereas it should be only 2 because\nof the unique index on f2. I poked into it and realized that in 8.2\nscalararraysel() fails to deal with binary-compatible datatype cases,\ninstead falling back to a not-very-bright generic estimate.\n\nI've committed a fix for 8.2.2, but in the meantime maybe you could\nchange your varchar column to text?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Jan 2007 20:56:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IN operator causes sequential scan (vs. multiple OR expressions) "
},
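A minimal sketch of the column-type change Tom suggests, using the table and column from Ryan's schema; note that on a table of this size the rewrite takes a while and holds an exclusive lock, and the dependent indexes are rebuilt as part of the ALTER:

  ALTER TABLE pool_sample ALTER COLUMN state TYPE text;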
{
"msg_contents": "\nOn Jan 27, 2007, at 5:56 PM, Tom Lane wrote:\n\n> I've committed a fix for 8.2.2, but in the meantime maybe you could\n> change your varchar column to text?\n>\n> \t\t\tregards, tom lane\nThank you for the help and the fix. We're just performance testing \nright now so minor data model changes are no problem.\n\nThanks,\n-Ryan\n\n",
"msg_date": "Sat, 27 Jan 2007 18:31:45 -0800",
"msg_from": "Ryan Holmes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IN operator causes sequential scan (vs. multiple OR expressions) "
}
] |
[
{
"msg_contents": "I have been researching how to improve my overall performance of\npostgres. I am a little confused on the reasoning for how work-mem is\nused in the postgresql.conf file. The way I understand the\ndocumentation is you define with work-mem how much memory you want to\nallocate per search. Couldn't you run out of memory? This approach\nseems kind of odd to me. How do you tell the system not to allocate too\nmuch memory if you all of the sudden got hit with a heavier number of\nqueries? \n\n \n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI have been researching how to improve my overall performance\nof postgres. I am a little confused on the reasoning for how work-mem is\nused in the postgresql.conf file. The way I understand the documentation\nis you define with work-mem how much memory you want to allocate per\nsearch. Couldn’t you run out of memory? This approach seems\nkind of odd to me. How do you tell the system not to allocate too much\nmemory if you all of the sudden got hit with a heavier number of queries? \n\n \n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu",
"msg_date": "Sun, 28 Jan 2007 14:21:39 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "work-mem"
},
{
"msg_contents": "work-mem tells the size of physical memory only, virtual memory is\nalways there off course in case you run out of available memory.\n\nI recommend you reading PostgreSQL internals for all this stuff:\nhttp://www.postgresql.org/docs/8.0/static/internals.html\n\n--Imad\nwww.EnterpriseDB.com\n\nOn 1/29/07, Campbell, Lance <[email protected]> wrote:\n>\n>\n>\n>\n> I have been researching how to improve my overall performance of postgres.\n> I am a little confused on the reasoning for how work-mem is used in the\n> postgresql.conf file. The way I understand the documentation is you define\n> with work-mem how much memory you want to allocate per search. Couldn't you\n> run out of memory? This approach seems kind of odd to me. How do you tell\n> the system not to allocate too much memory if you all of the sudden got hit\n> with a heavier number of queries?\n>\n>\n>\n>\n>\n> Thanks,\n>\n>\n>\n> Lance Campbell\n>\n> Project Manager/Software Architect\n>\n> Web Services at Public Affairs\n>\n> University of Illinois\n>\n> 217.333.0382\n>\n> http://webservices.uiuc.edu\n>\n>\n",
"msg_date": "Mon, 29 Jan 2007 01:26:21 +0500",
"msg_from": "imad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: work-mem"
},
{
"msg_contents": "\"Campbell, Lance\" <[email protected]> wrote:\n>\n> I have been researching how to improve my overall performance of\n> postgres. I am a little confused on the reasoning for how work_mem is\n> used in the postgresql.conf file. The way I understand the\n> documentation is you define with work_mem how much memory you want to\n> allocate per search. Couldn't you run out of memory? This approach\n> seems kind of odd to me. How do you tell the system not to allocate too\n> much memory if you all of the sudden got hit with a heavier number of\n> queries? \n\nwork_mem tells PostgreSQL how much memory to use for each sort/join.\nIf a sort/join exceeds that amount, PostgreSQL uses temp files on the\ndisk instead of memory to do the work.\n\nIf you want a query to complete, you've got to allow Postgres to finish\nit. The work_mem setting gives Postgres information on how to go about\ndoing that in the best way.\n\nIf you want to guarantee that individual processes can't suck up tons of\nmemory, use your operating system's ulimit or equivalent functionality.\nThat's one of the advantages of the forking model, it allows the operating\nsystem to enforce a certain amount of policy an a per-connection basis.\n\nHTH,\nBill\n",
"msg_date": "Sun, 28 Jan 2007 17:11:18 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: work_mem"
}
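To make the per-sort behaviour Bill describes concrete: worst case, memory use is roughly work_mem multiplied by the number of concurrent sort/hash operations, so one common compromise is a modest default in postgresql.conf plus a local override for known heavy statements. A sketch, with a hypothetical table name; SET LOCAL reverts automatically at the end of the transaction (the 'MB' syntax needs 8.2, older releases take the value in kB):

  BEGIN;
  SET LOCAL work_mem = '128MB';
  SELECT a, b FROM some_large_table ORDER BY a, b;
  COMMIT;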
] |
[
{
"msg_contents": "If I set work-mem at a particular amount of memory how do I answer the\nfollowing questions:\n\n \n\n1) How many of my queries were able to run inside the memory I\nallocated for work-mem?\n\n2) How many of my queries had to run from disk because work-mem\nwas not set high enough?\n\n3) If a query had to go to disk in order to be sorted or completed\nis there a way to identify how much memory it would have taken in order\nto run the query from memory?\n\n \n\nThanks for all of your help,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIf I set work-mem at a particular amount of memory how do I answer\nthe following questions:\n \n1) How many of\nmy queries were able to run inside the memory I allocated for work-mem?\n2) How many of\nmy queries had to run from disk because work-mem was not set high enough?\n3) If a query\nhad to go to disk in order to be sorted or completed is there a way to identify\nhow much memory it would have taken in order to run the query from memory?\n \nThanks for all of your help,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu",
"msg_date": "Sun, 28 Jan 2007 21:10:04 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "work-mem how do I identify the proper size"
},
{
"msg_contents": "In response to \"Campbell, Lance\" <[email protected]>:\n\n> If I set work-mem at a particular amount of memory how do I answer the\n> following questions:\n> \n> 1) How many of my queries were able to run inside the memory I\n> allocated for work-mem?\n> \n> 2) How many of my queries had to run from disk because work-mem\n> was not set high enough?\n> \n> 3) If a query had to go to disk in order to be sorted or completed\n> is there a way to identify how much memory it would have taken in order\n> to run the query from memory?\n\nI don't know of any good way to answer these questions on current versions.\n\nI have a patch in for 8.3 that logs the usage of temporary files, which\nhelps with some of this.\n\nIt'd be nice to have additional debug logging that tells you:\n1) when a sort/join operation uses disk instead of memory\n2) A higher level debugging that announces \"this query used temp files for\n some operations\".\n\n#1 would be nice for optimizing, but may involve a lot of overhead.\n\n#2 could (potentially) be enabled on production servers to flag queries\nthat need investigated, without generating a significant amount of logging\noverhead.\n\nHopefully I'll get some time to try to hack some stuff together for this\nsoon.\n\nA little bit of playing around shows that cost estimates for queries change\nradically when the system thinks it will be creating temp files (which\nmakes sense ...)\n\nNotice these two partial explains:\n -> Sort (cost=54477.32..55674.31 rows=478798 width=242)\n -> Sort (cost=283601.32..284798.31 rows=478798 width=242)\n\nThese are explains of the same query (a simple select * + order by on a\nnon-indexed column) The first one is taken with work_mem set at 512m,\nwhich would appear to be enough space to do the entire sort in memory.\nThe second is with work_mem set to 128k.\n\nMore interesting is that that actual runtime doesn't differ by nearly\nthat much: 3100ms vs 2200ms. (I've realized that my setting for\nrandom_page_cost is too damn high for this hardware -- thanks for\ncausing me to look at that ... :)\n\nAnyway -- hope that helps.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Mon, 29 Jan 2007 09:53:56 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: work-mem how do I identify the proper size"
}
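Bill's observation about the cost estimates can be reproduced without any patches by explaining the same sort under two work_mem settings (table and column names below are placeholders):

  SET work_mem = '512MB';
  EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY unindexed_col;

  SET work_mem = '128kB';
  EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY unindexed_col;
  -- the Sort node's estimated cost jumps once the planner expects to spill to disk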
] |
[
{
"msg_contents": "Hi all,\n\n I have postgresql 7.4.2 running on debian and I have the oddest\npostgresql behaviour I've ever seen.\n\nI do the following queries:\n\n\nespsm_asme=# select customer_app_config_id, customer_app_config_name\nfrom customer_app_config where customer_app_config_id = 5929 or\ncustomer_app_config_id = 11527 order by customer_app_config_id;\n\n\n customer_app_config_id | customer_app_config_name\n------------------------+--------------------------\n 5929 | INFO\n(1 row)\n\n\n I do the same query but changing the order of the or conditions:\n\n\nespsm_asme=# select customer_app_config_id, customer_app_config_name\nfrom customer_app_config where customer_app_config_id = 11527 or\ncustomer_app_config_id = 5929 order by customer_app_config_id;\n\n\n customer_app_config_id | customer_app_config_name\n------------------------+--------------------------\n 11527 | MOVIDOSERENA TONI 5523\n(1 row)\n\n\n\n As you can see, the configuration 5929 and 11527 both exists, but\nwhen I do the queries they don't appear.\n\n Here below you have the execution plans. Those queries use an index,\nI have done reindex table customer_app_config but nothing has changed.\n\nespsm_asme=# explain analyze select customer_app_config_id,\ncustomer_app_config_name from customer_app_config where\ncustomer_app_config_id = 11527 or customer_app_config_id = 5929 order by\ncustomer_app_config_id;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=10.28..10.29 rows=2 width=28) (actual time=0.252..0.253\nrows=1 loops=1)\n Sort Key: customer_app_config_id\n -> Index Scan using pk_cag_customer_application_id,\npk_cag_customer_application_id on customer_app_config (cost=0.00..10.27\nrows=2 width=28) (actual time=0.168..0.232 rows=1 loops=1)\n Index Cond: ((customer_app_config_id = 11527::numeric) OR\n(customer_app_config_id = 5929::numeric))\n Total runtime: 0.305 ms\n(5 rows)\n\nespsm_asme=# explain analyze select customer_app_config_id,\ncustomer_app_config_name from customer_app_config where\ncustomer_app_config_id = 5929 or customer_app_config_id = 11527 order by\ncustomer_app_config_id;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=10.28..10.29 rows=2 width=28) (actual time=0.063..0.064\nrows=1 loops=1)\n Sort Key: customer_app_config_id\n -> Index Scan using pk_cag_customer_application_id,\npk_cag_customer_application_id on customer_app_config (cost=0.00..10.27\nrows=2 width=28) (actual time=0.034..0.053 rows=1 loops=1)\n Index Cond: ((customer_app_config_id = 5929::numeric) OR\n(customer_app_config_id = 11527::numeric))\n Total runtime: 0.114 ms\n(5 rows)\n\n The table definition is the following:\n\nespsm_asme=# \\d customer_app_config\n Table \"public.customer_app_config\"\n Column | Type | Modifiers\n--------------------------+-----------------------+--------------------\n customer_app_config_id | numeric(10,0) | not null\n customer_app_config_name | character varying(32) | not null\n keyword | character varying(43) |\n application_id | numeric(10,0) | not null\n customer_id | numeric(10,0) | not null\n customer_app_contents_id | numeric(10,0) |\n number_access_id | numeric(10,0) |\n prefix | character varying(10) |\n separator | numeric(1,0) | default 0\n on_hold 
| numeric(1,0) | not null default 0\n with_toss | numeric(1,0) | not null default 0\n number_id | numeric(10,0) |\n param_separator_id | numeric(4,0) | default 1\n memory_timeout | integer |\n with_memory | numeric(1,0) | default 0\n session_enabled | numeric(1,0) | default 0\n session_timeout | integer |\n number | character varying(15) |\nIndexes:\n \"pk_cag_customer_application_id\" primary key, btree\n(customer_app_config_id)\n \"un_cag_kwordnumber\" unique, btree (keyword, number_id)\n \"idx_cappconfig_ccontentsid\" btree (customer_app_contents_id)\n \"idx_cappconfig_cusidappid\" btree (customer_id, application_id)\n \"idx_cappconfig_customerid\" btree (customer_id)\n \"idx_cappconfig_onhold\" btree (on_hold)\n \"idx_cappconfig_onholdkeyw\" btree (on_hold, keyword)\nRules:\n\n A lot of rules that I don't paste as matter of length.\n\n\n Do you have any idea about how I can fix this?\n\n-- \nArnau\n",
"msg_date": "Mon, 29 Jan 2007 13:21:56 +0100",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "[OT] Very strange postgresql behaviour"
},
{
"msg_contents": "In response to Arnau <[email protected]>:\n> \n> I have postgresql 7.4.2 running on debian and I have the oddest\n> postgresql behaviour I've ever seen.\n> \n> I do the following queries:\n> \n> \n> espsm_asme=# select customer_app_config_id, customer_app_config_name\n> from customer_app_config where customer_app_config_id = 5929 or\n> customer_app_config_id = 11527 order by customer_app_config_id;\n> \n> \n> customer_app_config_id | customer_app_config_name\n> ------------------------+--------------------------\n> 5929 | INFO\n> (1 row)\n> \n> \n> I do the same query but changing the order of the or conditions:\n> \n> \n> espsm_asme=# select customer_app_config_id, customer_app_config_name\n> from customer_app_config where customer_app_config_id = 11527 or\n> customer_app_config_id = 5929 order by customer_app_config_id;\n> \n> \n> customer_app_config_id | customer_app_config_name\n> ------------------------+--------------------------\n> 11527 | MOVIDOSERENA TONI 5523\n> (1 row)\n> \n> \n> \n> As you can see, the configuration 5929 and 11527 both exists, but\n> when I do the queries they don't appear.\n\n[snip]\n\nJust a guess, but perhaps your index is damaged. Have you tried\nREINDEXing?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Mon, 29 Jan 2007 08:27:06 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] Very strange postgresql behaviour"
},
{
"msg_contents": "Arnau wrote:\n> Hi all,\n> \n> I have postgresql 7.4.2 running on debian and I have the oddest\n> postgresql behaviour I've ever seen.\n\nYou should upgrade. The latest 7.2 release is 7.4.15\n\n> I do the following queries:\n> ...\n\nAt first glance this looks like a bug in PostgreSQL, but..\n\n> Rules:\n> \n> A lot of rules that I don't paste as matter of length.\n\nIs there any SELECT rules by chance that might explain this?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Mon, 29 Jan 2007 13:35:41 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] Very strange postgresql behaviour"
},
{
"msg_contents": "Hi Bill,\n\n> In response to Arnau <[email protected]>:\n>> I have postgresql 7.4.2 running on debian and I have the oddest\n>> postgresql behaviour I've ever seen.\n>>\n>> I do the following queries:\n>>\n>>\n>> espsm_asme=# select customer_app_config_id, customer_app_config_name\n>> from customer_app_config where customer_app_config_id = 5929 or\n>> customer_app_config_id = 11527 order by customer_app_config_id;\n>>\n>>\n>> customer_app_config_id | customer_app_config_name\n>> ------------------------+--------------------------\n>> 5929 | INFO\n>> (1 row)\n>>\n>>\n>> I do the same query but changing the order of the or conditions:\n>>\n>>\n>> espsm_asme=# select customer_app_config_id, customer_app_config_name\n>> from customer_app_config where customer_app_config_id = 11527 or\n>> customer_app_config_id = 5929 order by customer_app_config_id;\n>>\n>>\n>> customer_app_config_id | customer_app_config_name\n>> ------------------------+--------------------------\n>> 11527 | MOVIDOSERENA TONI 5523\n>> (1 row)\n>>\n>>\n>>\n>> As you can see, the configuration 5929 and 11527 both exists, but\n>> when I do the queries they don't appear.\n> \n> [snip]\n> \n> Just a guess, but perhaps your index is damaged. Have you tried\n> REINDEXing?\n> \n\n Yes, I have tried with:\n\n reindex table customer_app_config\n reindex index pk_cag_customer_application_id\n\nbut nothing changed. I also tried to drop the index:\n\nespsm_asme=# begin; drop index pk_cag_customer_application_id;\nBEGIN\nERROR: cannot drop index pk_cag_customer_application_id because \nconstraint pk_cag_customer_application_id on table customer_app_config \nrequires it\nHINT: You may drop constraint pk_cag_customer_application_id on table \ncustomer_app_config instead.\nespsm_asme=# rollback;\nROLLBACK\n\n But I can't remove the constraint as it's the primary key and there \nare foreign keys over it\n\n\n-- \nArnau\n",
"msg_date": "Mon, 29 Jan 2007 16:40:57 +0100",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [OT] Very strange postgresql behaviour"
},
{
"msg_contents": "Arnau wrote:\n\n> Hi Bill,\n>\n>> In response to Arnau <[email protected]>:\n>>\n>>> I have postgresql 7.4.2 running on debian and I have the oddest\n>>> postgresql behaviour I've ever seen.\n>>>\n>>> I do the following queries:\n>>>\n>>>\n>>> espsm_asme=# select customer_app_config_id, customer_app_config_name\n>>> from customer_app_config where customer_app_config_id = 5929 or\n>>> customer_app_config_id = 11527 order by customer_app_config_id;\n>>\n\nJust wild guessing: is there any chance that there may be some form of\n\"implicit\" limit modifier for the select statements on this table? Does \nthe\nbehaviour change if you add \"limit 2\" at the end of the query? Does it\nchange if you use customer_app_config_id in (5929, 11527) instead?\n\nAnother wild guess: if the data is somewhat corrupt, maybe a vacuum\nanalyze would detect it? Or perhaps try pg_dumping, to see if pg_dump\nat some point complains about mssing or corrupt data?\n\nHTH,\n\nCarlos\n--\n\n",
"msg_date": "Mon, 29 Jan 2007 14:47:46 -0500",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] Very strange postgresql behaviour"
}
] |
[
{
"msg_contents": "\n\nHello,\n\nI have an authorization table that associates 1 customer IP to a service \nIP to determine a TTL (used by a radius server).\n\ntable auth\n client varchar(15);\n service varchar(15);\n ttl int4;\n\n\nclient and service are both ip addr.\n\nThe number of distinct clients can be rather large (say around 4 million) \nand the number of distinct service around 1000.\n\ntable auth can contain between 10 M and 20 M lines.\n\nthere's a double index on ( client , service ).\n\nSince I would like to maximize the chance to have the whole table cached \nby the OS (linux), I'd like to reduce the size of the table by replacing \nthe varchar by another data type more suited to store ip addr.\n\nI could use PG internal inet/cidr type to store the ip addrs, which would \ntake 12 bytes per IP, thus gaining a few bytes per row.\n\nApart from gaining some bytes, would the btree index scan be faster with \nthis data type compared to plain varchar ?\n\n\nAlso, in my case, I don't need the mask provided by inet/cidr ; is there a \nway to store an IPV4 addr directly into an INT4 but using the same syntax \nas varchar or inet/cidr (that is I could use '192.12.18.1' for example), \nor should I create my own data type and develop the corresponding function \nto convert from a text input to an int4 storage ?\n\nThis would really reduce the size of the table, since it would need 3 int4 \nfor client/service/ttl and I guess index scan would be faster with int4 \ndata that with varchar(15) ?\n\nThanks for any input.\n\n\nNicolas\n",
"msg_date": "Mon, 29 Jan 2007 17:22:22 +0100 (CET)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "int4 vs varchar to store ip addr"
},
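For illustration only, two things that can be tried with nothing but built-in types and functions (the column name and literal address are taken from Nicolas' description, the rest is assumption): comparing on-disk sizes, and packing a dotted quad into a plain integer by hand. Note that the packing arithmetic exceeds the signed int4 range for addresses with a first octet >= 128, so storing the result in an int4 would need an offset or accepting negative values:

  SELECT pg_column_size('192.12.18.1'::varchar(15)) AS as_varchar,
         pg_column_size('192.12.18.1'::inet)        AS as_inet;

  SELECT split_part(ip, '.', 1)::bigint * 16777216
       + split_part(ip, '.', 2)::bigint * 65536
       + split_part(ip, '.', 3)::bigint * 256
       + split_part(ip, '.', 4)::bigint AS ip_as_number
  FROM (SELECT text '192.12.18.1' AS ip) AS s;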
{
"msg_contents": "* Pomarede Nicolas:\n\n> I could use PG internal inet/cidr type to store the ip addrs, which\n> would take 12 bytes per IP, thus gaining a few bytes per row.\n\nI thought it's down to 8 bytes in PostgreSQL 8.2, but I could be\nmistaken.\n\n> Apart from gaining some bytes, would the btree index scan be faster\n> with this data type compared to plain varchar ?\n\nIt will be faster because less I/O is involved.\n\nFor purposes like yours, there is a special ip4 type in a contributed\npackage which brings down the byte count to 4. I'm not sure if it's\nbeen ported to PostgreSQL 8.2 yet.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 29 Jan 2007 17:26:42 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 vs varchar to store ip addr"
},
{
"msg_contents": "Nicolas wrote:\n\n> I have an authorization table that associates 1 customer IP to a service \n> IP to determine a TTL (used by a radius server).\n> \n> table auth\n> client varchar(15);\n> service varchar(15);\n> ttl int4;\n> client and service are both ip addr.\n> \n> The number of distinct clients can be rather large (say around 4 \n> million) and the number of distinct service around 1000.\n> \n> there's a double index on ( client , service ).\n\nIt comes to mind another solution... I don't know if it is better or worse,\nbut you could give it a try.\nStore IP addresses as 4 distinct columns, like the following:\n\n CREATE TABLE auth (\n client_ip1 shortint,\n client_ip2 shortint,\n client_ip3 shortint,\n client_ip4 shortint,\n service varchar(15),\n ttl int4,\n );\n\nAnd then index by client_ip4/3/2/1, then service.\n\n CREATE INDEX auth_i1 ON auth (client_ip4, client_ip3, client_ip2, client_ip1);\n\nor:\n\n CREATE INDEX auth_i1 ON auth (client_ip4, client_ip3, client_ip2, client_ip1, service);\n\nI'm curious to know from pg internals experts if this could be a\nvalid idea or is totally non-sense.\n\nProbably the builtin ip4 type is better suited for these tasks?\n\n-- \nCosimo\n",
"msg_date": "Mon, 29 Jan 2007 17:44:13 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 vs varchar to store ip addr"
},
{
"msg_contents": "On Mon, 29 Jan 2007, Florian Weimer wrote:\n\n> * Pomarede Nicolas:\n>\n>> I could use PG internal inet/cidr type to store the ip addrs, which\n>> would take 12 bytes per IP, thus gaining a few bytes per row.\n>\n> I thought it's down to 8 bytes in PostgreSQL 8.2, but I could be\n> mistaken.\n>\n>> Apart from gaining some bytes, would the btree index scan be faster\n>> with this data type compared to plain varchar ?\n>\n> It will be faster because less I/O is involved.\n>\n> For purposes like yours, there is a special ip4 type in a contributed\n> package which brings down the byte count to 4. I'm not sure if it's\n> been ported to PostgreSQL 8.2 yet.\n\nYes thanks for this reference, ip4r package seems to be a nice addition to \npostgres for what I'd like to do. Does someone here have some real life \nexperience with it (regarding performance and stability) ?\n\nAlso, is it possible that this package functionalities' might be merged \ninto postgres one day, I think the benefit of using 4 bytes to store an \nipv4 addr could be really interesting for some case ?\n\nthanks,\n\n\n----------------\nNicolas Pomarede e-mail: [email protected]\n\n\"In a world without walls and fences, who needs windows and gates ?\"\n",
"msg_date": "Tue, 30 Jan 2007 11:54:05 +0100 (CET)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: int4 vs varchar to store ip addr"
},
{
"msg_contents": "On 1/30/07, Pomarede Nicolas <[email protected]> wrote:\n> On Mon, 29 Jan 2007, Florian Weimer wrote:\n>\n> > * Pomarede Nicolas:\n> >\n> >> I could use PG internal inet/cidr type to store the ip addrs, which\n> >> would take 12 bytes per IP, thus gaining a few bytes per row.\n> >\n> > I thought it's down to 8 bytes in PostgreSQL 8.2, but I could be\n> > mistaken.\n> >\n> >> Apart from gaining some bytes, would the btree index scan be faster\n> >> with this data type compared to plain varchar ?\n> >\n> > It will be faster because less I/O is involved.\n> >\n> > For purposes like yours, there is a special ip4 type in a contributed\n> > package which brings down the byte count to 4. I'm not sure if it's\n> > been ported to PostgreSQL 8.2 yet.\n>\n> Yes thanks for this reference, ip4r package seems to be a nice addition to\n> postgres for what I'd like to do. Does someone here have some real life\n> experience with it (regarding performance and stability) ?\n\nI'm using IP4 and have not had a problem with it in 8.2 (or 8.1) in\nterms of stability. As I designed my DB using it, I don't really have\nany comparisons to inet and/or varchar. One of the most useful things\nfor me is the ability to create a GIST index to support determination\nof range inclusion (i.e. 192.168.23.1 is in the 192.168/16 network\nrange), although it doesn't sound like this would be useful to you.\n\n-Mike\n",
"msg_date": "Thu, 1 Feb 2007 14:30:42 -0500",
"msg_from": "\"Michael Artz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4 vs varchar to store ip addr"
}
] |
[
{
"msg_contents": "Hi All,\n\nThanks to all in this support community. You are doing a great job! \nWith all the information/support from the communities and documentations, we successfuly upgraded to 8.1 from 7.3.2 on our production environment! It was a smooth switch over. \n\nJust wanted to say thanks to everyone on the community!\n\nThanks,\nSaranya Sivakumar\n\n \n---------------------------------\nExpecting? Get great news right away with email Auto-Check.\nTry the Yahoo! Mail Beta.\nHi All,Thanks to all in this support community. You are doing a great job! With all the information/support from the communities and documentations, we successfuly upgraded to 8.1 from 7.3.2 on our production environment! It was a smooth switch over. Just wanted to say thanks to everyone on the community!Thanks,Saranya Sivakumar\nExpecting? Get great news right away with email Auto-Check.Try the Yahoo! Mail Beta.",
"msg_date": "Mon, 29 Jan 2007 12:51:17 -0800 (PST)",
"msg_from": "Saranya Sivakumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thanks All!"
}
] |
[
{
"msg_contents": "I have partitioned a table based on period (e.g., cdate >= '2007-01-01'::date and cdate<=.2007-03-31':;date).\n\nNow, I am issuing query like cdate >= CURRENT_DATE - 1 and cdate <= CURRENT_DATE, it scans all the partitions. But if I do cdate >= '2007-01-01'::date and cdate<=.2007-03-31'::date it picks the correct partition. Also if I join the cdate field with another table, it does not pick the correct partition.\n\nI would like to know if it is possible to pick the correct partition using the above example.\n\nThanks\nAbu\n\n \n---------------------------------\nNeed Mail bonding?\nGo to the Yahoo! Mail Q&A for great tips from Yahoo! Answers users.\nI have partitioned a table based on period (e.g., cdate >= '2007-01-01'::date and cdate<=.2007-03-31':;date).Now, I am issuing query like cdate >= CURRENT_DATE - 1 and cdate <= CURRENT_DATE, it scans all the partitions. But if I do cdate >= '2007-01-01'::date and cdate<=.2007-03-31'::date it picks the correct partition. Also if I join the cdate field with another table, it does not pick the correct partition.I would like to know if it is possible to pick the correct partition using the above example.ThanksAbu\nNeed Mail bonding?Go to the Yahoo! Mail Q&A for great tips from Yahoo! Answers users.",
"msg_date": "Mon, 29 Jan 2007 22:42:20 -0800 (PST)",
"msg_from": "Abu Mushayeed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning"
},
{
"msg_contents": "Abu Mushayeed wrote:\n> I have partitioned a table based on period (e.g., cdate >= \n> '2007-01-01'::date and cdate<=.2007-03-31':;date).\n> \n> Now, I am issuing query like cdate >= CURRENT_DATE - 1 and cdate <= \n> CURRENT_DATE, it scans all the partitions. But if I do cdate >= \n> '2007-01-01'::date and cdate<=.2007-03-31'::date it picks the correct \n> partition. Also if I join the cdate field with another table, it does \n> not pick the correct partition.\n> \n> I would like to know if it is possible to pick the correct partition \n> using the above example.\n\nfrom http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html\n\n...\"For the same reason, \"stable\" functions such as CURRENT_DATE must be \navoided. Joining the partition key to a column of another table will not be \noptimized, either.\"...\n\n\nRigmor\n\n> \n> Thanks\n> Abu\n> \n> ------------------------------------------------------------------------\n> Need Mail bonding?\n> Go to the Yahoo! Mail Q&A \n> <http://answers.yahoo.com/dir/index;_ylc=X3oDMTFvbGNhMGE3BF9TAzM5NjU0NTEwOARfcwMzOTY1NDUxMDMEc2VjA21haWxfdGFnbGluZQRzbGsDbWFpbF90YWcx?link=ask&sid=396546091> \n> for great tips from Yahoo! Answers \n> <http://answers.yahoo.com/dir/index;_ylc=X3oDMTFvbGNhMGE3BF9TAzM5NjU0NTEwOARfcwMzOTY1NDUxMDMEc2VjA21haWxfdGFnbGluZQRzbGsDbWFpbF90YWcx?link=ask&sid=396546091> \n> users. !DSPAM:5,45beea6d287779832115503!\n\n\n-- \nRigmor Ukuhe\nFinestmedia Ltd | Software Development Team Manager\ngsm : (+372)56467729 | tel : (+372)6558043 | e-mail : [email protected]\n",
"msg_date": "Tue, 30 Jan 2007 12:21:38 +0200",
"msg_from": "Rigmor Ukuhe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning"
}
] |
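A rough sketch of the behaviour described in the thread above, with hypothetical table and partition names (events, events_2007q1); constraint exclusion only prunes partitions when the WHERE clause compares the partition key against plan-time constants:

    SET constraint_exclusion = on;

    CREATE TABLE events (cdate date, payload text);
    CREATE TABLE events_2007q1 (
        CHECK (cdate >= DATE '2007-01-01' AND cdate <= DATE '2007-03-31')
    ) INHERITS (events);

    -- Prunes to the matching partition: the literals can be compared to the
    -- CHECK constraints at plan time.
    EXPLAIN SELECT * FROM events
     WHERE cdate >= DATE '2007-01-01' AND cdate <= DATE '2007-03-31';

    -- Scans every partition: CURRENT_DATE is stable, not immutable, so the
    -- planner cannot use it for exclusion (8.1/8.2 behaviour).
    EXPLAIN SELECT * FROM events
     WHERE cdate >= CURRENT_DATE - 1 AND cdate <= CURRENT_DATE;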
[
{
"msg_contents": "Greetings!\n\nI have rather large table with about 5 millions of rows and a dozen of \ncolumns. Let's suppose that columns are named 'a', 'b', 'c' etc. I need \nto query distinct pairs of ('a';'b') from this table.\n\nI use following query:\n\nSELECT DISTINCT a, b FROM tbl;\n\nbut unfortunately, it takes forever to complete. Explaining gives me \ninformation that bottleneck is seqscan on 'tbl', which eats much time.\n\nCreating compound index on this table using following statement:\n\nCREATE INDEX tbl_a_b_idx ON tbl( a, b );\n\ngives no effect, postgres simply ignores it, at least according to the \nEXPLAIN output.\n\nIs there any way to somehow improve the performance of this operation? \nTable can not be changed.\n\n-- \nIgor Lobanov\nInternal Development Engineer\nSWsoft, Inc.\n\n",
"msg_date": "Tue, 30 Jan 2007 14:33:34 +0600",
"msg_from": "Igor Lobanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Querying distinct values from a large table"
},
{
"msg_contents": "Igor Lobanov wrote:\n> Greetings!\n> \n> I have rather large table with about 5 millions of rows and a dozen of \n> columns. Let's suppose that columns are named 'a', 'b', 'c' etc. I need \n> to query distinct pairs of ('a';'b') from this table.\n\n> Creating compound index on this table using following statement:\n> \n> CREATE INDEX tbl_a_b_idx ON tbl( a, b );\n> \n> gives no effect, postgres simply ignores it, at least according to the \n> EXPLAIN output.\n\nWhat version of PostgreSQL is it?\n\nHow many distinct values are you getting back from your 5 million rows? \nIf there are too many, an index isn't going to help.\n\nCan you share the EXPLAIN ANALYSE output? You might want to try \nincreasing work_mem for this one query to speed any sorting.\n\nHow often is the table updated? Clustering might buy you some \nimprovements (but not a huge amount I suspect).\n\n\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 30 Jan 2007 09:12:51 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "\n\nRichard Huxton wrote:\n>> I have rather large table with about 5 millions of rows and a dozen of \n>> columns. Let's suppose that columns are named 'a', 'b', 'c' etc. I \n>> need to query distinct pairs of ('a';'b') from this table.\n >\n> What version of PostgreSQL is it?\n\n8.1.4\n\n> How many distinct values are you getting back from your 5 million rows? \n> If there are too many, an index isn't going to help.\n\nNo more than 10,000.\n\n> Can you share the EXPLAIN ANALYSE output? You might want to try \n> increasing work_mem for this one query to speed any sorting.\n\nReal table and colum names are obfuscated because of NDA, sorry.\n\nexplain analyze select distinct a, b from tbl\n\nEXPLAIN ANALYZE output is:\n\n Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual \ntime=52719.868..56126.356 rows=5390 loops=1)\n -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual \ntime=52719.865..54919.989 rows=3378864 loops=1)\n Sort Key: a, b\n -> Seq Scan on tbl (cost=0.00..101216.41 rows=3375941 \nwidth=6) (actual time=16.643..20652.610 rows=3378864 loops=1)\n Total runtime: 57307.394 ms\n\n> How often is the table updated? Clustering might buy you some \n> improvements (but not a huge amount I suspect).\n\nIt is updated once per 3-5 seconds.\n\nAnd one more thing. I don't know if it helps, but column 'a' can have \nvalue from a limited set: 0, 1 or 2. Column 'b' is also an integer \n(foreign key, actually).\n\n-- \nIgor Lobanov\nInternal Development Engineer\nSWsoft, Inc.\n\n",
"msg_date": "Tue, 30 Jan 2007 15:33:11 +0600",
"msg_from": "Igor Lobanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Forgot to mention that our work_mem setting is 20480 Kb.\n\n> You might want to try \n> increasing work_mem for this one query to speed any sorting.\n\n-- \nIgor Lobanov\nInternal Development Engineer\nSWsoft, Inc.\n\n",
"msg_date": "Tue, 30 Jan 2007 15:36:12 +0600",
"msg_from": "Igor Lobanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Igor Lobanov wrote:\n> \n> \n> Richard Huxton wrote:\n>>> I have rather large table with about 5 millions of rows and a dozen \n>>> of columns. Let's suppose that columns are named 'a', 'b', 'c' etc. I \n>>> need to query distinct pairs of ('a';'b') from this table.\n> >\n>> What version of PostgreSQL is it?\n> \n> 8.1.4\n\nCurrent release is 8.1.6 - probably worth upgrading when you've got \ntime. It should be a simple matter of replacing the binaries but do \ncheck the release notes.\n\n>> How many distinct values are you getting back from your 5 million \n>> rows? If there are too many, an index isn't going to help.\n> \n> No more than 10,000.\n\nOK. Should be possible to do something then.\n\n>> Can you share the EXPLAIN ANALYSE output? You might want to try \n>> increasing work_mem for this one query to speed any sorting.\n> \n> Real table and colum names are obfuscated because of NDA, sorry.\n\nFair enough.\n\n> explain analyze select distinct a, b from tbl\n> \n> EXPLAIN ANALYZE output is:\n> \n> Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual \n> time=52719.868..56126.356 rows=5390 loops=1)\n> -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual \n> time=52719.865..54919.989 rows=3378864 loops=1)\n> Sort Key: a, b\n> -> Seq Scan on tbl (cost=0.00..101216.41 rows=3375941 \n> width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)\n> Total runtime: 57307.394 ms\n\nHmm - am I right in thinking (a,b) are the only two columns on this \ntable? That means you'll have a lot of rows per page and an index scan \ncould end up fetching lots of pages to check the rows are visible. Still \n- I'd expect it to be better than a seq scan.\n\nThe first thing to try is to put the index back and run \"SET \nenable_seqscan=off\" before the explain analyse. That should force it to \nuse the index. Then we'll see what costs it's expecting.\n\n>> How often is the table updated? Clustering might buy you some \n>> improvements (but not a huge amount I suspect).\n> \n> It is updated once per 3-5 seconds.\n\nOK - forget clustering then.\n\n> And one more thing. I don't know if it helps, but column 'a' can have \n> value from a limited set: 0, 1 or 2. Column 'b' is also an integer \n> (foreign key, actually).\n\nHmm - might be worth trying distinct on (b,a) with an index that way \naround - that would give you greater selectivity at the top-level of the \nbtree. Can you repeat the EXPLAIN ANALYSE with that too please.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 30 Jan 2007 10:02:33 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
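A sketch of the test Richard suggests above, using the thread's obfuscated names (the index name is arbitrary); disabling sequential scans is session-local and only meant for comparing the two EXPLAIN ANALYZE outputs:

    CREATE INDEX tbl_b_a_idx ON tbl (b, a);
    ANALYZE tbl;

    SET enable_seqscan = off;                      -- for this session only, to force the index plan
    EXPLAIN ANALYZE SELECT DISTINCT b, a FROM tbl;
    SET enable_seqscan = on;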
{
"msg_contents": "To be sure about the performance of index scan, try forcing the\nplanner to use it instead of seq scan. A way might be to force the\nplanner to use index scan on your table by using a dummy where clause.\nTry using a condition in your where clause which holds true for all\nrows.\n\n--Imad\nwww.EnterpriseDB.com\n\nOn 1/30/07, Richard Huxton <[email protected]> wrote:\n> Igor Lobanov wrote:\n> >\n> >\n> > Richard Huxton wrote:\n> >>> I have rather large table with about 5 millions of rows and a dozen\n> >>> of columns. Let's suppose that columns are named 'a', 'b', 'c' etc. I\n> >>> need to query distinct pairs of ('a';'b') from this table.\n> > >\n> >> What version of PostgreSQL is it?\n> >\n> > 8.1.4\n>\n> Current release is 8.1.6 - probably worth upgrading when you've got\n> time. It should be a simple matter of replacing the binaries but do\n> check the release notes.\n>\n> >> How many distinct values are you getting back from your 5 million\n> >> rows? If there are too many, an index isn't going to help.\n> >\n> > No more than 10,000.\n>\n> OK. Should be possible to do something then.\n>\n> >> Can you share the EXPLAIN ANALYSE output? You might want to try\n> >> increasing work_mem for this one query to speed any sorting.\n> >\n> > Real table and colum names are obfuscated because of NDA, sorry.\n>\n> Fair enough.\n>\n> > explain analyze select distinct a, b from tbl\n> >\n> > EXPLAIN ANALYZE output is:\n> >\n> > Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual\n> > time=52719.868..56126.356 rows=5390 loops=1)\n> > -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual\n> > time=52719.865..54919.989 rows=3378864 loops=1)\n> > Sort Key: a, b\n> > -> Seq Scan on tbl (cost=0.00..101216.41 rows=3375941\n> > width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)\n> > Total runtime: 57307.394 ms\n>\n> Hmm - am I right in thinking (a,b) are the only two columns on this\n> table? That means you'll have a lot of rows per page and an index scan\n> could end up fetching lots of pages to check the rows are visible. Still\n> - I'd expect it to be better than a seq scan.\n>\n> The first thing to try is to put the index back and run \"SET\n> enable_seqscan=off\" before the explain analyse. That should force it to\n> use the index. Then we'll see what costs it's expecting.\n>\n> >> How often is the table updated? Clustering might buy you some\n> >> improvements (but not a huge amount I suspect).\n> >\n> > It is updated once per 3-5 seconds.\n>\n> OK - forget clustering then.\n>\n> > And one more thing. I don't know if it helps, but column 'a' can have\n> > value from a limited set: 0, 1 or 2. Column 'b' is also an integer\n> > (foreign key, actually).\n>\n> Hmm - might be worth trying distinct on (b,a) with an index that way\n> around - that would give you greater selectivity at the top-level of the\n> btree. Can you repeat the EXPLAIN ANALYSE with that too please.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n",
"msg_date": "Tue, 30 Jan 2007 16:39:20 +0500",
"msg_from": "imad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "On Tue, 2007-01-30 at 15:33 +0600, Igor Lobanov wrote:\n\n> explain analyze select distinct a, b from tbl\n> \n> EXPLAIN ANALYZE output is:\n> \n> Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual \n> time=52719.868..56126.356 rows=5390 loops=1)\n> -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual \n> time=52719.865..54919.989 rows=3378864 loops=1)\n> Sort Key: a, b\n> -> Seq Scan on tbl (cost=0.00..101216.41 rows=3375941 \n> width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)\n> Total runtime: 57307.394 ms\n\nAll your time is in the sort, not in the SeqScan.\n\nIncrease your work_mem.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jan 2007 14:04:15 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
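A sketch of trying a larger sort memory for just this session, with an arbitrary value; 8.2 accepts unit suffixes, while 8.1 (which the poster runs) takes the number in kilobytes:

    SET work_mem = 262144;                          -- 8.1 syntax: value in kB (256 MB)
    -- SET work_mem = '256MB';                      -- 8.2 syntax with units
    EXPLAIN ANALYZE SELECT DISTINCT a, b FROM tbl;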
{
"msg_contents": "Simon Riggs wrote:\n> On Tue, 2007-01-30 at 15:33 +0600, Igor Lobanov wrote:\n> \n>> explain analyze select distinct a, b from tbl\n>>\n>> EXPLAIN ANALYZE output is:\n>>\n>> Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual \n>> time=52719.868..56126.356 rows=5390 loops=1)\n>> -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual \n>> time=52719.865..54919.989 rows=3378864 loops=1)\n>> Sort Key: a, b\n>> -> Seq Scan on tbl (cost=0.00..101216.41 rows=3375941 \n>> width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)\n>> Total runtime: 57307.394 ms\n> \n> All your time is in the sort, not in the SeqScan.\n> \n> Increase your work_mem.\n\nWell, even if the sort was instant it's only going to get him down to \n20secs.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 30 Jan 2007 14:11:48 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "On 1/30/07, Simon Riggs <[email protected]> wrote:\n>\n> > explain analyze select distinct a, b from tbl\n> >\n> > EXPLAIN ANALYZE output is:\n> >\n> > Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual\n> > time=52719.868..56126.356 rows=5390 loops=1)\n> > -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual\n> > time=52719.865..54919.989 rows=3378864 loops=1)\n> > Sort Key: a, b\n> > -> Seq Scan on tbl (cost=0.00..101216.41 rows=3375941\n> > width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)\n> > Total runtime: 57307.394 ms\n>\n> All your time is in the sort, not in the SeqScan.\n>\n> Increase your work_mem.\n>\n\nSounds like an opportunity to implement a \"Sort Unique\" (sort of like a\nhash, I guess), there is no need to push 3M rows through a sort algorithm to\nonly shave it down to 1848 unique records.\n\nI am assuming this optimization just isn't implemented in PostgreSQL?\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/30/07, Simon Riggs <[email protected]> wrote:\n> explain analyze select distinct a, b from tbl>> EXPLAIN ANALYZE output is:>> Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual> time=52719.868..56126.356 rows=5390 loops=1)\n> -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual> time=52719.865..54919.989 rows=3378864 loops=1)> Sort Key: a, b> -> Seq Scan on tbl (cost=0.00..101216.41\n rows=3375941> width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)> Total runtime: 57307.394 msAll your time is in the sort, not in the SeqScan.Increase your work_mem.\nSounds like an opportunity to implement a \"Sort Unique\" (sort of like a hash, I guess), there is no need to push 3M rows through a sort algorithm to only shave it down to 1848 unique records.I am assuming this optimization just isn't implemented in PostgreSQL?\n-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Tue, 30 Jan 2007 09:13:27 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Chad,\n\nOn 1/30/07 6:13 AM, \"Chad Wagner\" <[email protected]> wrote:\n\n> Sounds like an opportunity to implement a \"Sort Unique\" (sort of like a hash,\n> I guess), there is no need to push 3M rows through a sort algorithm to only\n> shave it down to 1848 unique records.\n> \n> I am assuming this optimization just isn't implemented in PostgreSQL?\n\nNot that it helps Igor, but we've implemented single pass sort/unique,\ngrouping and limit optimizations and it speeds things up to a single seqscan\nover the data, from 2-5 times faster than a typical external sort.\n\nI can't think of a way that indexing would help this situation given the\nrequired visibility check of each tuple.\n\n- Luke\n\n\n",
"msg_date": "Tue, 30 Jan 2007 06:56:57 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "On 1/30/07, Luke Lonergan <[email protected]> wrote:\n>\n> Not that it helps Igor, but we've implemented single pass sort/unique,\n> grouping and limit optimizations and it speeds things up to a single\n> seqscan\n> over the data, from 2-5 times faster than a typical external sort.\n\n\nWas that integrated back into PostgreSQL, or is that part of Greenplum's\noffering?\n\nI can't think of a way that indexing would help this situation given the\n> required visibility check of each tuple.\n>\n\nI agree, using indexes as a \"skinny\" table is a whole other feature that\nwould be nice.\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/30/07, Luke Lonergan <[email protected]> wrote:\nNot that it helps Igor, but we've implemented single pass sort/unique,grouping and limit optimizations and it speeds things up to a single seqscanover the data, from 2-5 times faster than a typical external sort.\nWas that integrated back into PostgreSQL, or is that part of Greenplum's offering? \nI can't think of a way that indexing would help this situation given therequired visibility check of each tuple.I agree, using indexes as a \"skinny\" table is a whole other feature that would be nice.\n-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Tue, 30 Jan 2007 10:03:03 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Chad,\n\nOn 1/30/07 7:03 AM, \"Chad Wagner\" <[email protected]> wrote:\n\n> On 1/30/07, Luke Lonergan <[email protected]> wrote:\n>> Not that it helps Igor, but we've implemented single pass sort/unique,\n>> grouping and limit optimizations and it speeds things up to a single seqscan\n>> over the data, from 2-5 times faster than a typical external sort.\n> \n> Was that integrated back into PostgreSQL, or is that part of Greenplum's\n> offering? \n\nNot yet, we will submit to PostgreSQL along with other executor node\nenhancements like hybrid hash agg (fixes the memory overflow problem with\nhash agg) and some other great sort work. These are all \"cooked\" and in the\nGreenplum DBMS, and have proven themselves significant on customer workloads\nwith tens of terabytes already.\n\nFor now it seems that the \"Group By\" trick Brian suggested in this thread\ncombined with lots of work_mem may speed things up for this case if HashAgg\nis chosen. Watch out for misestimates of stats though - hash agg may\noverallocate RAM in some cases.\n\n>> I can't think of a way that indexing would help this situation given the\n>> required visibility check of each tuple.\n> \n> I agree, using indexes as a \"skinny\" table is a whole other feature that would\n> be nice. \n\nYah - I like Hannu's ideas to make visibility less of a problem. We're\nthinking about this too.\n\n- Luke\n\n\n",
"msg_date": "Tue, 30 Jan 2007 08:34:03 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n\n> Chad,\n> \n> On 1/30/07 6:13 AM, \"Chad Wagner\" <[email protected]> wrote:\n> \n> > Sounds like an opportunity to implement a \"Sort Unique\" (sort of like a hash,\n> > I guess), there is no need to push 3M rows through a sort algorithm to only\n> > shave it down to 1848 unique records.\n> > \n> > I am assuming this optimization just isn't implemented in PostgreSQL?\n> \n> Not that it helps Igor, but we've implemented single pass sort/unique,\n> grouping and limit optimizations and it speeds things up to a single seqscan\n> over the data, from 2-5 times faster than a typical external sort.\n\nFwiw I also implemented this last September when I worked on the LIMIT/SORT\ncase. It's blocked on the same issue that that patch is: how do we communicate\nthe info that only unique records are needed down the plan to the sort node?\n\nNote that it's not just a boolean flag that we can include like the random\naccess flag: The Unique node could in theory have a different key than the\nsort node.\n\nComing at this fresh now a few months later I'm thinking instead of trying to\nput this intelligence into the individual nodes, an optimizer pass could run\nthrough the tree looking for Sort nodes that can have this bit tweaked.\n\nThat doesn't quite help the LIMIT/SORT case because the limit isn't calculated\nuntil run-time but perhaps it's possible to finesse that issue by providing\neither the Limit or Sort node with pointers to other nodes.\n\n(Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\nyour data distribution. It's not hard to come up with distributions where it's\n1000x as fast and others where there's no speed difference.)\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "30 Jan 2007 11:38:30 -0500",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "On Tue, Jan 30, 2007 at 14:33:34 +0600,\n Igor Lobanov <[email protected]> wrote:\n> Greetings!\n> \n> I have rather large table with about 5 millions of rows and a dozen of \n> columns. Let's suppose that columns are named 'a', 'b', 'c' etc. I need \n> to query distinct pairs of ('a';'b') from this table.\n> \n> Is there any way to somehow improve the performance of this operation? \n> Table can not be changed.\n\nDISTINCT currently can't use a hash aggregate plan and will use a sort.\nIf there aren't many distinct values, the hash aggregate plan will run much\nfaster. To get around this limitation, rewrite the query as a group by.\nSomething like:\nSELECT a, b FROM table GROUP BY a, b;\n",
"msg_date": "Tue, 30 Jan 2007 10:44:35 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
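A sketch of checking Bruno's rewrite, again with the thread's obfuscated names; whether the planner actually chooses a HashAggregate (one sequential scan, no large sort) depends on work_mem and the statistics:

    EXPLAIN ANALYZE SELECT a, b FROM tbl GROUP BY a, b;
    -- Hoped-for plan shape: HashAggregate over a single Seq Scan instead of Sort + Unique.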
{
"msg_contents": "Gregory Stark wrote:\n\n> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\n> your data distribution. It's not hard to come up with distributions where it's\n> 1000x as fast and others where there's no speed difference.)\n\nSo the figure is really \"1-1000x\"? I bet this one is more impressive in\nPHB terms.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 30 Jan 2007 14:04:43 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Alvaro,\n\nOn 1/30/07 9:04 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n\n>> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\n>> your data distribution. It's not hard to come up with distributions where\n>> it's\n>> 1000x as fast and others where there's no speed difference.)\n> \n> So the figure is really \"1-1000x\"? I bet this one is more impressive in\n> PHB terms.\n\nYou got me - I'll bite - what's PHB?\n\n- Luke\n\n\n",
"msg_date": "Tue, 30 Jan 2007 10:20:28 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "On Tue, Jan 30, 2007 at 10:20:28AM -0800, Luke Lonergan wrote:\n> You got me - I'll bite - what's PHB?\n\nUsually the Pointy-Haired Boss, a figure from Dilbert.\n\n http://en.wikipedia.org/wiki/Pointy_Haired_Boss\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 30 Jan 2007 22:01:04 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Alvaro,\n> \n> On 1/30/07 9:04 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n> \n> >> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\n> >> your data distribution. It's not hard to come up with distributions where\n> >> it's\n> >> 1000x as fast and others where there's no speed difference.)\n> > \n> > So the figure is really \"1-1000x\"? I bet this one is more impressive in\n> > PHB terms.\n> \n> You got me - I'll bite - what's PHB?\n\nPointy-Haired Boss, a term (AFAIK) popularized by the Dilbert comic strip.\n\nWhy, I just noticed there's even a Wikipedia entry:\nhttp://en.wikipedia.org/wiki/Pointy_Haired_Boss\nIt seems people enjoy writing hypothetically about their bosses.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 30 Jan 2007 18:03:03 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Luke,\n\n> You got me - I'll bite - what's PHB?\n\nPointy Haired Boss. It's a Dilbert reference.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Tue, 30 Jan 2007 15:00:42 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Argh!\n\n ### #######\n ##### ####################\n ##### #########################\n ### ###############\n \n #####\n ##########\n ############## ##########\n ##### ##### ############\n ### #################\n ## ###### ###\n # ###### ##\n # ###### #\n ## ##### ##\n ### ####### ##\n ##### ######## #### ####\n ################ ##########\n ############## ######\n #########\n \n ##\n ##\n ##\n ##\n ##\n ##\n ##\n ##\n ##\n ##\n \n ### ###\n ######## #########\n ################################\n ##########################\n ##################\n ##########\n\n\n\nOn 1/30/07 3:00 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n> \n> -- \n> --Josh\n> \n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n> \n\n\n",
"msg_date": "Tue, 30 Jan 2007 15:17:48 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Gregory Stark wrote:\n>> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\n>> your data distribution. It's not hard to come up with distributions where it's\n>> 1000x as fast and others where there's no speed difference.)\n\n> So the figure is really \"1-1000x\"? I bet this one is more impressive in\n> PHB terms.\n\nLuke has a bad habit of quoting numbers that are obviously derived from\nnarrow benchmarking scenarios as Universal Truths, rather than providing\nthe context they were derived in. I wish he'd stop doing that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jan 2007 00:55:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table "
},
{
"msg_contents": "Tom,\n\nOn 1/30/07 9:55 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n>> Gregory Stark wrote:\n>>> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\n>>> your data distribution. It's not hard to come up with distributions where\n>>> it's\n>>> 1000x as fast and others where there's no speed difference.)\n> \n>> So the figure is really \"1-1000x\"? I bet this one is more impressive in\n>> PHB terms.\n> \n> Luke has a bad habit of quoting numbers that are obviously derived from\n> narrow benchmarking scenarios as Universal Truths, rather than providing\n> the context they were derived in. I wish he'd stop doing that...\n\nIn this case I was referring to results obtained using grouping and distinct\noptimizations within sort where the benefit is from the use of a single pass\ninstead of the multiple merge passes for external sort followed by a UNIQUE\noperator. In this case, the benefit ranges from 2-5x in many examples as I\nmentioned: \"from 2-5 times faster than a typical external sort\". This is\nalso the same range of benefits we see for this optimization with a popular\ncommercial database. With the limit/sort optimization we have seen more\ndramatic results, but I think those are less typical cases.\n\nHere are some results for a 1GB table and a simple COUNT(DISTINCT) on a\ncolumn with 7 unique values from my dual CPU laptop running Greenplum DB (PG\n8.2.1 compatible) on both CPUs. Note that my laptop has 2GB of RAM so I have\nthe 1GB table loaded into OS I/O cache. The unmodified external sort spills\nthe sorted attribute to disk, but that takes little time. Note that the\nCOUNT(DISTINCT) plan embeds a sort as the transition function in the\naggregation node.\n\n=================================================================\n================= No Distinct Optimization in Sort ==============\n=================================================================\nlukelonergan=# select count(distinct l_shipmode) from lineitem;\n count \n-------\n 7\n(1 row)\n\nTime: 37832.308 ms\n\nlukelonergan=# explain analyze select count(distinct l_shipmode) from\nlineitem; \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----\n Aggregate (cost=159175.30..159175.31 rows=1 width=8)\n Total 1 rows with 40899 ms to end, start offset by 3.189 ms.\n -> Gather Motion 2:1 (slice2) (cost=159175.25..159175.28 rows=1\nwidth=8)\n recv: Total 2 rows with 39387 ms to first row, 40899 ms to end,\nstart offset by 3.191 ms.\n -> Aggregate (cost=159175.25..159175.26 rows=1 width=8)\n Avg 1.00 rows x 2 workers. Max 1 rows (seg0) with 39367 ms\nto end, start offset by 22 ms.\n -> Redistribute Motion 2:2 (slice1) (cost=0.00..151672.00\nrows=3001300 width=8)\n recv: Avg 3000607.50 rows x 2 workers. Max 3429492\nrows (seg1) with 0.362 ms to first row, 8643 ms to end, start offset by 23\nms.\n Hash Key: lineitem.l_shipmode\n -> Seq Scan on lineitem (cost=0.00..91646.00\nrows=3001300 width=8)\n Avg 3000607.50 rows x 2 workers. 
Max 3001300\nrows (seg0) with 0.049 ms to first row, 2813 ms to end, start offset by\n12.998 ms.\n Total runtime: 40903.321 ms\n(12 rows)\n\nTime: 40904.013 ms\n\n=================================================================\n================= With Distinct Optimization in Sort ==============\n=================================================================\nlukelonergan=# set mpp_sort_flags=1;\nSET\nTime: 1.425 ms\nlukelonergan=# select count(distinct l_shipmode) from lineitem;\n count \n-------\n 7\n(1 row)\n\nTime: 12846.466 ms\n\nlukelonergan=# explain analyze select count(distinct l_shipmode) from\nlineitem;\n \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----\n Aggregate (cost=159175.30..159175.31 rows=1 width=8)\n Total 1 rows with 13754 ms to end, start offset by 2.998 ms.\n -> Gather Motion 2:1 (slice2) (cost=159175.25..159175.28 rows=1\nwidth=8)\n recv: Total 2 rows with 13754 ms to end, start offset by 3.000 ms.\n -> Aggregate (cost=159175.25..159175.26 rows=1 width=8)\n Avg 1.00 rows x 2 workers. Max 1 rows (seg0) with 13734 ms\nto end, start offset by 23 ms.\n -> Redistribute Motion 2:2 (slice1) (cost=0.00..151672.00\nrows=3001300 width=8)\n recv: Avg 3000607.50 rows x 2 workers. Max 3429492\nrows (seg1) with 0.352 ms to first row, 10145 ms to end, start offset by 26\nms.\n Hash Key: lineitem.l_shipmode\n -> Seq Scan on lineitem (cost=0.00..91646.00\nrows=3001300 width=8)\n Avg 3000607.50 rows x 2 workers. Max 3001300\nrows (seg0) with 0.032 ms to first row, 4048 ms to end, start offset by\n13.037 ms.\n Total runtime: 13757.524 ms\n(12 rows)\n\nTime: 13758.182 ms\n\n================= Background Information ==============\nlukelonergan=# select count(*) from lineitem;\n count \n---------\n 6001215\n(1 row)\n\nTime: 1661.337 ms\nlukelonergan=# \\d lineitem\n Table \"public.lineitem\"\n Column | Type | Modifiers\n-----------------+-----------------------+-----------\n l_orderkey | integer | not null\n l_partkey | integer | not null\n l_suppkey | integer | not null\n l_linenumber | integer | not null\n l_quantity | double precision | not null\n l_extendedprice | double precision | not null\n l_discount | double precision | not null\n l_tax | double precision | not null\n l_returnflag | text | not null\n l_linestatus | text | not null\n l_shipdate | date | not null\n l_commitdate | date | not null\n l_receiptdate | date | not null\n l_shipinstruct | text | not null\n l_shipmode | text | not null\n l_comment | character varying(44) | not null\nDistributed by: (l_orderkey)\n\nlukelonergan=# select pg_relation_size(oid)/(1000.*1000.) as MB, relname as\nTable from pg_class order by MB desc limit 10;\n mb | table\n------------------------+--------------------------------\n 1009.4755840000000000 | lineitem\n 230.0559360000000000 | orders\n 146.7678720000000000 | partsupp\n 35.8072320000000000 | part\n 32.8908800000000000 | customer\n 1.9333120000000000 | supplier\n 1.1304960000000000 | pg_proc\n 0.88473600000000000000 | pg_proc_proname_args_nsp_index\n 0.81100800000000000000 | pg_attribute\n 0.81100800000000000000 | pg_depend\n\n\n- Luke \n\n\n\nRe: [PERFORM] Querying distinct values from a large table\n\n\nTom,\n\nOn 1/30/07 9:55 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n>> Gregory Stark wrote:\n>>> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\n>>> your data distribution. 
It's not hard to come up with distributions where \n>>> it's\n>>> 1000x as fast and others where there's no speed difference.)\n> \n>> So the figure is really \"1-1000x\"? I bet this one is more impressive in\n>> PHB terms.\n> \n> Luke has a bad habit of quoting numbers that are obviously derived from\n> narrow benchmarking scenarios as Universal Truths, rather than providing\n> the context they were derived in. I wish he'd stop doing that...\n\nIn this case I was referring to results obtained using grouping and distinct optimizations within sort where the benefit is from the use of a single pass instead of the multiple merge passes for external sort followed by a UNIQUE operator. In this case, the benefit ranges from 2-5x in many examples as I mentioned: \"from 2-5 times faster than a typical external sort\". This is also the same range of benefits we see for this optimization with a popular commercial database. With the limit/sort optimization we have seen more dramatic results, but I think those are less typical cases.\n\nHere are some results for a 1GB table and a simple COUNT(DISTINCT) on a column with 7 unique values from my dual CPU laptop running Greenplum DB (PG 8.2.1 compatible) on both CPUs. Note that my laptop has 2GB of RAM so I have the 1GB table loaded into OS I/O cache. The unmodified external sort spills the sorted attribute to disk, but that takes little time. Note that the COUNT(DISTINCT) plan embeds a sort as the transition function in the aggregation node.\n\n=================================================================\n================= No Distinct Optimization in Sort ==============\n=================================================================\nlukelonergan=# select count(distinct l_shipmode) from lineitem;\n count \n-------\n 7\n(1 row)\n\nTime: 37832.308 ms\n\nlukelonergan=# explain analyze select count(distinct l_shipmode) from lineitem; QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=159175.30..159175.31 rows=1 width=8)\n Total 1 rows with 40899 ms to end, start offset by 3.189 ms.\n -> Gather Motion 2:1 (slice2) (cost=159175.25..159175.28 rows=1 width=8)\n recv: Total 2 rows with 39387 ms to first row, 40899 ms to end, start offset by 3.191 ms.\n -> Aggregate (cost=159175.25..159175.26 rows=1 width=8)\n Avg 1.00 rows x 2 workers. Max 1 rows (seg0) with 39367 ms to end, start offset by 22 ms.\n -> Redistribute Motion 2:2 (slice1) (cost=0.00..151672.00 rows=3001300 width=8)\n recv: Avg 3000607.50 rows x 2 workers. Max 3429492 rows (seg1) with 0.362 ms to first row, 8643 ms to end, start offset by 23 ms.\n Hash Key: lineitem.l_shipmode\n -> Seq Scan on lineitem (cost=0.00..91646.00 rows=3001300 width=8)\n Avg 3000607.50 rows x 2 workers. 
Max 3001300 rows (seg0) with 0.049 ms to first row, 2813 ms to end, start offset by 12.998 ms.\n Total runtime: 40903.321 ms\n(12 rows)\n\nTime: 40904.013 ms\n\n=================================================================\n================= With Distinct Optimization in Sort ==============\n=================================================================\nlukelonergan=# set mpp_sort_flags=1;\nSET\nTime: 1.425 ms\nlukelonergan=# select count(distinct l_shipmode) from lineitem;\n count \n-------\n 7\n(1 row)\n\nTime: 12846.466 ms\n\nlukelonergan=# explain analyze select count(distinct l_shipmode) from lineitem;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=159175.30..159175.31 rows=1 width=8)\n Total 1 rows with 13754 ms to end, start offset by 2.998 ms.\n -> Gather Motion 2:1 (slice2) (cost=159175.25..159175.28 rows=1 width=8)\n recv: Total 2 rows with 13754 ms to end, start offset by 3.000 ms.\n -> Aggregate (cost=159175.25..159175.26 rows=1 width=8)\n Avg 1.00 rows x 2 workers. Max 1 rows (seg0) with 13734 ms to end, start offset by 23 ms.\n -> Redistribute Motion 2:2 (slice1) (cost=0.00..151672.00 rows=3001300 width=8)\n recv: Avg 3000607.50 rows x 2 workers. Max 3429492 rows (seg1) with 0.352 ms to first row, 10145 ms to end, start offset by 26 ms.\n Hash Key: lineitem.l_shipmode\n -> Seq Scan on lineitem (cost=0.00..91646.00 rows=3001300 width=8)\n Avg 3000607.50 rows x 2 workers. Max 3001300 rows (seg0) with 0.032 ms to first row, 4048 ms to end, start offset by 13.037 ms.\n Total runtime: 13757.524 ms\n(12 rows)\n\nTime: 13758.182 ms\n\n================= Background Information ==============\nlukelonergan=# select count(*) from lineitem;\n count \n---------\n 6001215\n(1 row)\n\nTime: 1661.337 ms\nlukelonergan=# \\d lineitem\n Table \"public.lineitem\"\n Column | Type | Modifiers \n-----------------+-----------------------+-----------\n l_orderkey | integer | not null\n l_partkey | integer | not null\n l_suppkey | integer | not null\n l_linenumber | integer | not null\n l_quantity | double precision | not null\n l_extendedprice | double precision | not null\n l_discount | double precision | not null\n l_tax | double precision | not null\n l_returnflag | text | not null\n l_linestatus | text | not null\n l_shipdate | date | not null\n l_commitdate | date | not null\n l_receiptdate | date | not null\n l_shipinstruct | text | not null\n l_shipmode | text | not null\n l_comment | character varying(44) | not null\nDistributed by: (l_orderkey)\n\nlukelonergan=# select pg_relation_size(oid)/(1000.*1000.) as MB, relname as Table from pg_class order by MB desc limit 10;\n mb | table \n------------------------+--------------------------------\n 1009.4755840000000000 | lineitem\n 230.0559360000000000 | orders\n 146.7678720000000000 | partsupp\n 35.8072320000000000 | part\n 32.8908800000000000 | customer\n 1.9333120000000000 | supplier\n 1.1304960000000000 | pg_proc\n 0.88473600000000000000 | pg_proc_proname_args_nsp_index\n 0.81100800000000000000 | pg_attribute\n 0.81100800000000000000 | pg_depend\n\n\n- Luke",
"msg_date": "Tue, 30 Jan 2007 22:58:53 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Alvaro Herrera <[email protected]> writes:\n> > Gregory Stark wrote:\n> >> (Incidentally I'm not sure where 2-5x comes from. It's entirely dependant on\n> >> your data distribution. It's not hard to come up with distributions where it's\n> >> 1000x as fast and others where there's no speed difference.)\n> \n> > So the figure is really \"1-1000x\"? I bet this one is more impressive in\n> > PHB terms.\n> \n> Luke has a bad habit of quoting numbers that are obviously derived from\n> narrow benchmarking scenarios as Universal Truths, rather than providing\n> the context they were derived in. I wish he'd stop doing that...\n\nIn fairness I may have exaggerated a bit there. There is a limit to how much\nof a speedup you can get in valid benchmarking situations. A single sequential\nscan is always going to be necessary so you're only saving the cost of writing\nout the temporary file and subsequent merge passes.\n\nIt's hard to generate lots of intermediate merge passes since there are only\nO(log(n)) of them. So to get 1000x speedup on a large I/O bound sort you would\nhave to be sorting something on order of 2^1000 records which is ridiculous.\nRealistically you should only be able to save 2-5 intermediate merge passes.\n\nOn the other there are some common situations where you could see atypical\nincreases. Consider joining a bunch of small tables to generate a large result\nset. The small tables are probably all in memory and the result set may only\nhave a small number of distinct values. If you throw out the duplicates early\nyou save *all* the I/O. If you have to do a disk sort it could be many orders\nslower.\n\nThis is actually not an uncommon coding idiom for MySQL programmers accustomed\nto fast DISTINCT working around the lack of subqueries and poor performance of\nIN and EXISTS. They often just join together all the tables in a big cross\njoin and then toss in a DISTINCT at the top to get rid of the duplicates.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "31 Jan 2007 08:24:41 -0500",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "I strongly encourage anyone who is interested in the general external \nsorting problem peruse Jim Gray's site:\nhttp://research.microsoft.com/barc/SortBenchmark/\n\nRon Peacetree\n\nAt 08:24 AM 1/31/2007, Gregory Stark wrote:\n>Tom Lane <[email protected]> writes:\n>\n> > Alvaro Herrera <[email protected]> writes:\n> > > Gregory Stark wrote:\n> > >> (Incidentally I'm not sure where 2-5x comes from. It's \n> entirely dependant on\n> > >> your data distribution. It's not hard to come up with \n> distributions where it's\n> > >> 1000x as fast and others where there's no speed difference.)\n> >\n> > > So the figure is really \"1-1000x\"? I bet this one is more impressive in\n> > > PHB terms.\n> >\n> > Luke has a bad habit of quoting numbers that are obviously derived from\n> > narrow benchmarking scenarios as Universal Truths, rather than providing\n> > the context they were derived in. I wish he'd stop doing that...\n>\n>In fairness I may have exaggerated a bit there. There is a limit to how much\n>of a speedup you can get in valid benchmarking situations. A single sequential\n>scan is always going to be necessary so you're only saving the cost of writing\n>out the temporary file and subsequent merge passes.\n>\n>It's hard to generate lots of intermediate merge passes since there are only\n>O(log(n)) of them. So to get 1000x speedup on a large I/O bound sort you would\n>have to be sorting something on order of 2^1000 records which is ridiculous.\n>Realistically you should only be able to save 2-5 intermediate merge passes.\n>\n>On the other there are some common situations where you could see atypical\n>increases. Consider joining a bunch of small tables to generate a large result\n>set. The small tables are probably all in memory and the result set may only\n>have a small number of distinct values. If you throw out the duplicates early\n>you save *all* the I/O. If you have to do a disk sort it could be many orders\n>slower.\n>\n>This is actually not an uncommon coding idiom for MySQL programmers accustomed\n>to fast DISTINCT working around the lack of subqueries and poor performance of\n>IN and EXISTS. They often just join together all the tables in a big cross\n>join and then toss in a DISTINCT at the top to get rid of the duplicates.\n>\n>--\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n",
"msg_date": "Wed, 31 Jan 2007 09:46:30 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table"
},
{
"msg_contents": "Gregory Stark <[email protected]> writes:\n> On the other there are some common situations where you could see\n> atypical increases. Consider joining a bunch of small tables to\n> generate a large result set. The small tables are probably all in\n> memory and the result set may only have a small number of distinct\n> values. If you throw out the duplicates early you save *all* the\n> I/O. If you have to do a disk sort it could be many orders slower.\n\nRight, we already have support for doing that well, in the form of\nhashed aggregation. What needs to happen is to get that to work for\nDISTINCT as well as GROUP BY. IIRC, DISTINCT is currently rather\nthoroughly intertwined with ORDER BY, and we'd have to figure out\nsome way to decouple them --- without breaking DISTINCT ON, which\nmakes it a lot harder :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jan 2007 10:15:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying distinct values from a large table "
}
] |
[
{
"msg_contents": "As I understand, there's no hashing for DISTINCT, but there is for GROUP BY. GROUP BY will choose between a hash and a sort (or maybe other options?) depending on the circumstances. So you can write\n\nSELECT a, b FROM tbl GROUP BY a,b\n\nand the sort/unique part of the query may run faster.\n\nBrian\n\n----- Original Message ----\nFrom: Chad Wagner <[email protected]>\nTo: Simon Riggs <[email protected]>\nCc: Igor Lobanov <[email protected]>; Richard Huxton <[email protected]>; [email protected]\nSent: Tuesday, 30 January, 2007 10:13:27 PM\nSubject: Re: [PERFORM] Querying distinct values from a large table\n\nOn 1/30/07, Simon Riggs <[email protected]> wrote:\n> explain analyze select distinct a, b from tbl\n>\n> EXPLAIN ANALYZE output is:\n>\n> Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual\n> time=52719.868..56126.356 rows=5390 loops=1)\n\n> -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual\n> time=52719.865..54919.989 rows=3378864 loops=1)\n> Sort Key: a, b\n> -> Seq Scan on tbl (cost=0.00..101216.41\n rows=3375941\n> width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)\n> Total runtime: 57307.394 ms\n\nAll your time is in the sort, not in the SeqScan.\n\nIncrease your work_mem.\n\n\n\nSounds like an opportunity to implement a \"Sort Unique\" (sort of like a hash, I guess), there is no need to push 3M rows through a sort algorithm to only shave it down to 1848 unique records.\n\nI am assuming this optimization just isn't implemented in PostgreSQL?\n\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\n\n\n\nAs I understand, there's no hashing for DISTINCT, but there is for GROUP BY. GROUP BY will choose between a hash and a sort (or maybe other options?) depending on the circumstances. So you can writeSELECT a, b FROM tbl GROUP BY a,band the sort/unique part of the query may run faster.Brian----- Original Message ----From: Chad Wagner <[email protected]>To: Simon Riggs <[email protected]>Cc: Igor Lobanov <[email protected]>; Richard Huxton <[email protected]>; [email protected]: Tuesday, 30 January, 2007 10:13:27 PMSubject: Re: [PERFORM]\n Querying distinct values from a large tableOn 1/30/07, Simon Riggs <[email protected]> wrote:\n> explain analyze select distinct a, b from tbl>> EXPLAIN ANALYZE output is:>> Unique (cost=500327.32..525646.88 rows=1848 width=6) (actual> time=52719.868..56126.356 rows=5390 loops=1)\n> -> Sort (cost=500327.32..508767.17 rows=3375941 width=6) (actual> time=52719.865..54919.989 rows=3378864 loops=1)> Sort Key: a, b> -> Seq Scan on tbl (cost=0.00..101216.41\n rows=3375941> width=6) (actual time=16.643..20652.610 rows=3378864 loops=1)> Total runtime: 57307.394 msAll your time is in the sort, not in the SeqScan.Increase your work_mem.\nSounds like an opportunity to implement a \"Sort Unique\" (sort of like a hash, I guess), there is no need to push 3M rows through a sort algorithm to only shave it down to 1848 unique records.I am assuming this optimization just isn't implemented in PostgreSQL?\n-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Tue, 30 Jan 2007 06:38:11 -0800 (PST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Querying distinct values from a large table"
}
] |
[
{
"msg_contents": "Check this:\n\nquery: Delete From ceroriesgo.salarios Where numero_patrono Not In (Select \nnumero_patrono From ceroriesgo.patronos)\n\nSeq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077 width=6)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980 \nwidth=25)\n\n\nThese query took a day to finish, how or who can improove better performance \nof my PostgreSQL.\n\n_________________________________________________________________\nCharla con tus amigos en l�nea mediante MSN Messenger: \nhttp://messenger.latam.msn.com/\n\n",
"msg_date": "Tue, 30 Jan 2007 15:09:23 -0600",
"msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow queries"
},
{
"msg_contents": "What indexes do those tables have? Any?\n\nSidar L�pez Cruz wrote:\n> Check this:\n>\n> query: Delete From ceroriesgo.salarios Where numero_patrono Not In \n> (Select numero_patrono From ceroriesgo.patronos)\n>\n> Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077 \n> width=6)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n> -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980 \n> width=25)\n>\n>\n> These query took a day to finish, how or who can improove better \n> performance of my PostgreSQL.\n>\n> _________________________________________________________________\n> Charla con tus amigos en l�nea mediante MSN Messenger: \n> http://messenger.latam.msn.com/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n-- \n\n*Edward Allen*\nSoftware Engineer\nBlack Duck Software, Inc.\n\[email protected] <mailto:[email protected]>\nT +1.781.891.5100 x133\nF +1.781.891.5145\nhttp://www.blackducksoftware.com\n\n",
"msg_date": "Tue, 30 Jan 2007 16:14:38 -0500",
"msg_from": "Ted Allen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries"
},
{
"msg_contents": "\n\n\n>From: Ted Allen <[email protected]>\n>To: Sidar L�pez Cruz <[email protected]>\n>CC: [email protected]\n>Subject: Re: [PERFORM] Very slow queries\n>Date: Tue, 30 Jan 2007 16:14:38 -0500\n>\n\n\n>What indexes do those tables have? Any?\n\nYes:\nTABLE ceroriesgo.patronos ADD CONSTRAINT patronos_pkey PRIMARY \nKEY(numero_patrono);\n\nINDEX salarios_numero_patrono_idx ON ceroriesgo.salarios\n USING btree (numero_patrono);\n\n\n\n>\n>Sidar L�pez Cruz wrote:\n>>Check this:\n>>\n>>query: Delete From ceroriesgo.salarios Where numero_patrono Not In (Select \n>>numero_patrono From ceroriesgo.patronos)\n>>\n>>Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077 \n>>width=6)\n>> Filter: (NOT (subplan))\n>> SubPlan\n>> -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n>> -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980 \n>>width=25)\n>>\n>>\n>>These query took a day to finish, how or who can improove better \n>>performance of my PostgreSQL.\n>>\n>>_________________________________________________________________\n>>Charla con tus amigos en l�nea mediante MSN Messenger: \n>>http://messenger.latam.msn.com/\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>\n>\n>--\n>\n>*Edward Allen*\n>Software Engineer\n>Black Duck Software, Inc.\n>\n>[email protected] <mailto:[email protected]>\n>T +1.781.891.5100 x133\n>F +1.781.891.5145\n>http://www.blackducksoftware.com\n>\n\n_________________________________________________________________\nCharla con tus amigos en l�nea mediante MSN Messenger: \nhttp://messenger.latam.msn.com/\n\n",
"msg_date": "Tue, 30 Jan 2007 15:19:41 -0600",
"msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow queries"
},
{
"msg_contents": "On 1/30/07, Sidar López Cruz <[email protected]> wrote:\n>\n> query: Delete From ceroriesgo.salarios Where numero_patrono Not In (Select\n> numero_patrono From ceroriesgo.patronos)\n>\n> Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077\n> width=6)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n> -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980\n> width=25)\n>\n\nHow many rows exist in salarios, but not in patronos? How many rows are\nthere in salarios?\n\nWhat does the explain look like for:\n\ndelete\n from ceroriesgo.salarios s\nwhere not exists (select 1\n from ceroriesgo.patronos\n where numero_patrono = s.numero_patrono);\n\nAlso, is this not a case for a foreign key with a cascade delete?\n\nhttp://www.postgresql.org/docs/8.2/static/ddl-constraints.html\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/30/07, Sidar López Cruz <[email protected]> wrote:\nquery: Delete From ceroriesgo.salarios Where numero_patrono Not In (Selectnumero_patrono From ceroriesgo.patronos)Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077 width=6) Filter: (NOT (subplan))\n SubPlan -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25) -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980width=25)How many rows exist in salarios, but not in patronos? How many rows are there in salarios?\nWhat does the explain look like for:delete from ceroriesgo.salarios swhere not exists (select 1 from ceroriesgo.patronos where numero_patrono = \ns.numero_patrono);Also, is this not a case for a foreign key with a cascade delete?http://www.postgresql.org/docs/8.2/static/ddl-constraints.html\n-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Tue, 30 Jan 2007 17:37:17 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries"
},
{
"msg_contents": "\n\n\n>From: \"Chad Wagner\" <[email protected]>\n>To: \"Sidar L�pez Cruz\" <[email protected]>\n>CC: [email protected]\n>Subject: Re: [PERFORM] Very slow queries\n>Date: Tue, 30 Jan 2007 17:37:17 -0500\n>\n>On 1/30/07, Sidar L�pez Cruz <[email protected]> wrote:\n>>\n>>query: Delete From ceroriesgo.salarios Where numero_patrono Not In (Select\n>>numero_patrono From ceroriesgo.patronos)\n>>\n>>Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077\n>>width=6)\n>> Filter: (NOT (subplan))\n>> SubPlan\n>> -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n>> -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980\n>>width=25)\n>>\n>\n>How many rows exist in salarios, but not in patronos? How many rows are\n>there in salarios?\n\nRows:\nPatronos: 1032980\nSalarios: 28480200\n\n>\n>What does the explain look like for:\n>\n>delete\n>from ceroriesgo.salarios s\n>where not exists (select 1\n> from ceroriesgo.patronos\n> where numero_patrono = s.numero_patrono);\n>\n>Also, is this not a case for a foreign key with a cascade delete?\n\nNo, this is not cascade delete case because I need to delete from salarios \nnot from patronos.\n\n\n>http://www.postgresql.org/docs/8.2/static/ddl-constraints.html\n>\n>\n>--\n>Chad\n>http://www.postgresqlforums.com/\n\n_________________________________________________________________\nCharla con tus amigos en l�nea mediante MSN Messenger: \nhttp://messenger.latam.msn.com/\n\n",
"msg_date": "Wed, 31 Jan 2007 08:19:41 -0600",
"msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow queries"
},
{
"msg_contents": "How many rows were delete last time you ran the query?\n\nChad's query looks good but here is another variation that may help. \n\nDelete From ceroriesgo.salarios Where numero_patrono In (Select\nceroriesgo.salarios.numero_patrono From ceroriesgo.salarios Left Join \nceroriesgo.patronos Using (numero_patrono) Where \nceroriesgo.patronos.numero_patrono Is Null)\n\nHope that Helps,\nTed\n\nSidar L�pez Cruz wrote:\n>\n>\n>\n>> From: \"Chad Wagner\" <[email protected]>\n>> To: \"Sidar L�pez Cruz\" <[email protected]>\n>> CC: [email protected]\n>> Subject: Re: [PERFORM] Very slow queries\n>> Date: Tue, 30 Jan 2007 17:37:17 -0500\n>>\n>> On 1/30/07, Sidar L�pez Cruz <[email protected]> wrote:\n>>>\n>>> query: Delete From ceroriesgo.salarios Where numero_patrono Not In \n>>> (Select\n>>> numero_patrono From ceroriesgo.patronos)\n>>>\n>>> Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077\n>>> width=6)\n>>> Filter: (NOT (subplan))\n>>> SubPlan\n>>> -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n>>> -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980\n>>> width=25)\n>>>\n>>\n>> How many rows exist in salarios, but not in patronos? How many rows are\n>> there in salarios?\n>\n> Rows:\n> Patronos: 1032980\n> Salarios: 28480200\n>\n>>\n>> What does the explain look like for:\n>>\n>> delete\n>> from ceroriesgo.salarios s\n>> where not exists (select 1\n>> from ceroriesgo.patronos\n>> where numero_patrono = s.numero_patrono);\n>>\n>> Also, is this not a case for a foreign key with a cascade delete?\n>\n> No, this is not cascade delete case because I need to delete from \n> salarios not from patronos.\n>\n>\n>> http://www.postgresql.org/docs/8.2/static/ddl-constraints.html\n>>\n>>\n>> -- \n>> Chad\n>> http://www.postgresqlforums.com/\n>\n> _________________________________________________________________\n> Charla con tus amigos en l�nea mediante MSN Messenger: \n> http://messenger.latam.msn.com/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n\n-- \n\n*Edward Allen*\nSoftware Engineer\nBlack Duck Software, Inc.\n\[email protected] <mailto:[email protected]>\nT +1.781.891.5100 x133\nF +1.781.891.5145\nhttp://www.blackducksoftware.com\n\n",
"msg_date": "Wed, 31 Jan 2007 09:32:43 -0500",
"msg_from": "Ted Allen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries"
},
{
"msg_contents": ">How many rows were delete last time you ran the query?\n\nI never delete any rows, the tables was inserted with copy command, then I \ncreate index and I need to delete these records on ceroriesgo.salarios to \ncreate the foreign key restriction on it.\n\n\n>\n>Chad's query looks good but here is another variation that may help.\n>\n>Delete From ceroriesgo.salarios Where numero_patrono In (Select\n>ceroriesgo.salarios.numero_patrono From ceroriesgo.salarios Left Join \n>ceroriesgo.patronos Using (numero_patrono) Where \n>ceroriesgo.patronos.numero_patrono Is Null)\n>\n>Hope that Helps,\n>Ted\n>\n>Sidar L�pez Cruz wrote:\n>>\n>>\n>>\n>>>From: \"Chad Wagner\" <[email protected]>\n>>>To: \"Sidar L�pez Cruz\" <[email protected]>\n>>>CC: [email protected]\n>>>Subject: Re: [PERFORM] Very slow queries\n>>>Date: Tue, 30 Jan 2007 17:37:17 -0500\n>>>\n>>>On 1/30/07, Sidar L�pez Cruz <[email protected]> wrote:\n>>>>\n>>>>query: Delete From ceroriesgo.salarios Where numero_patrono Not In \n>>>>(Select\n>>>>numero_patrono From ceroriesgo.patronos)\n>>>>\n>>>>Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077\n>>>>width=6)\n>>>> Filter: (NOT (subplan))\n>>>> SubPlan\n>>>> -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n>>>> -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980\n>>>>width=25)\n>>>>\n>>>\n>>>How many rows exist in salarios, but not in patronos? How many rows are\n>>>there in salarios?\n>>\n>>Rows:\n>>Patronos: 1032980\n>>Salarios: 28480200\n>>\n>>>\n>>>What does the explain look like for:\n>>>\n>>>delete\n>>>from ceroriesgo.salarios s\n>>>where not exists (select 1\n>>> from ceroriesgo.patronos\n>>> where numero_patrono = s.numero_patrono);\n>>>\n>>>Also, is this not a case for a foreign key with a cascade delete?\n>>\n>>No, this is not cascade delete case because I need to delete from salarios \n>>not from patronos.\n>>\n>>\n>>>http://www.postgresql.org/docs/8.2/static/ddl-constraints.html\n>>>\n>>>\n>>>--\n>>>Chad\n>>>http://www.postgresqlforums.com/\n>>\n>>_________________________________________________________________\n>>Charla con tus amigos en l�nea mediante MSN Messenger: \n>>http://messenger.latam.msn.com/\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: explain analyze is your friend\n>>\n>\n>\n>--\n>\n>*Edward Allen*\n>Software Engineer\n>Black Duck Software, Inc.\n>\n>[email protected] <mailto:[email protected]>\n>T +1.781.891.5100 x133\n>F +1.781.891.5145\n>http://www.blackducksoftware.com\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n_________________________________________________________________\nLas mejores tiendas, los precios mas bajos, entregas en todo el mundo, \nYupiMSN Compras: http://latam.msn.com/compras/\n\n",
"msg_date": "Wed, 31 Jan 2007 08:58:02 -0600",
"msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow queries"
},
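The end goal stated here is the foreign key from salarios to patronos. A minimal sketch of that step, once the orphaned rows have been removed with one of the deletes discussed above; the constraint name is only illustrative, while the table and column names are the ones quoted in this thread:

-- validates every existing row, so expect another full pass over the ~28 million salarios rows
ALTER TABLE ceroriesgo.salarios
  ADD CONSTRAINT salarios_numero_patrono_fkey
  FOREIGN KEY (numero_patrono)
  REFERENCES ceroriesgo.patronos (numero_patrono);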
{
"msg_contents": ">From: Ted Allen <[email protected]>\n>To: Sidar L�pez Cruz <[email protected]>\n>CC: [email protected], [email protected]\n>Subject: Re: [PERFORM] Very slow queries\n>Date: Wed, 31 Jan 2007 09:32:43 -0500\n>\n>How many rows were delete last time you ran the query?\n>\n>Chad's query looks good but here is another variation that may help.\n>\n>Delete From ceroriesgo.salarios Where numero_patrono In (Select\n>ceroriesgo.salarios.numero_patrono From ceroriesgo.salarios Left Join \n>ceroriesgo.patronos Using (numero_patrono) Where \n>ceroriesgo.patronos.numero_patrono Is Null)\n>\n\nExecuting these query take:\n\nQuery returned successfully: 290 rows affected, 2542387 ms execution time.\n\n\nI think that's too many time\n\n\n\n\n\n>Hope that Helps,\n>Ted\n>\n>Sidar L�pez Cruz wrote:\n>>\n>>\n>>\n>>>From: \"Chad Wagner\" <[email protected]>\n>>>To: \"Sidar L�pez Cruz\" <[email protected]>\n>>>CC: [email protected]\n>>>Subject: Re: [PERFORM] Very slow queries\n>>>Date: Tue, 30 Jan 2007 17:37:17 -0500\n>>>\n>>>On 1/30/07, Sidar L�pez Cruz <[email protected]> wrote:\n>>>>\n>>>>query: Delete From ceroriesgo.salarios Where numero_patrono Not In \n>>>>(Select\n>>>>numero_patrono From ceroriesgo.patronos)\n>>>>\n>>>>Seq Scan on salarios (cost=51021.78..298803854359.95 rows=14240077\n>>>>width=6)\n>>>> Filter: (NOT (subplan))\n>>>> SubPlan\n>>>> -> Materialize (cost=51021.78..69422.58 rows=1032980 width=25)\n>>>> -> Seq Scan on patronos (cost=0.00..41917.80 rows=1032980\n>>>>width=25)\n>>>>\n>>>\n>>>How many rows exist in salarios, but not in patronos? How many rows are\n>>>there in salarios?\n>>\n>>Rows:\n>>Patronos: 1032980\n>>Salarios: 28480200\n>>\n>>>\n>>>What does the explain look like for:\n>>>\n>>>delete\n>>>from ceroriesgo.salarios s\n>>>where not exists (select 1\n>>> from ceroriesgo.patronos\n>>> where numero_patrono = s.numero_patrono);\n>>>\n>>>Also, is this not a case for a foreign key with a cascade delete?\n>>\n>>No, this is not cascade delete case because I need to delete from salarios \n>>not from patronos.\n>>\n>>\n>>>http://www.postgresql.org/docs/8.2/static/ddl-constraints.html\n>>>\n>>>\n>>>--\n>>>Chad\n>>>http://www.postgresqlforums.com/\n>>\n>>_________________________________________________________________\n>>Charla con tus amigos en l�nea mediante MSN Messenger: \n>>http://messenger.latam.msn.com/\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: explain analyze is your friend\n>>\n>\n>\n>--\n>\n>*Edward Allen*\n>Software Engineer\n>Black Duck Software, Inc.\n>\n>[email protected] <mailto:[email protected]>\n>T +1.781.891.5100 x133\n>F +1.781.891.5145\n>http://www.blackducksoftware.com\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n_________________________________________________________________\nMSN Amor: busca tu � naranja http://latam.msn.com/amor/\n\n",
"msg_date": "Wed, 31 Jan 2007 09:28:05 -0600",
"msg_from": "=?iso-8859-1?B?U2lkYXIgTPNwZXogQ3J1eg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow queries"
},
{
"msg_contents": "On 1/31/07, Sidar López Cruz <[email protected]> wrote:\n>\n> Executing these query take:\n> Query returned successfully: 290 rows affected, 2542387 ms execution time.\n> I think that's too many time\n\n\nI would post the plans that you are getting, otherwise just mentioning the\nexecution time is not very helpful. Also, yet another syntax is the UPDATE\nfoo... FROM tab1, tab2... syntax.\n\nhttp://www.postgresql.org/docs/8.2/static/sql-update.html\n\n\nIn any case, I thought you mentioned this was a one off query?\n\n\n\n-- \nChad\nhttp://www.postgresqlforums.com/\n\nOn 1/31/07, Sidar López Cruz <[email protected]> wrote:\nExecuting these query take:Query returned successfully: 290 rows affected, 2542387 ms execution time.I think that's too many timeI would post the plans that you are getting, otherwise just mentioning the execution time is not very helpful. Also, yet another syntax is the UPDATE foo... FROM tab1, tab2... syntax.\nhttp://www.postgresql.org/docs/8.2/static/sql-update.htmlIn any case, I thought you mentioned this was a one off query? \n-- Chadhttp://www.postgresqlforums.com/",
"msg_date": "Wed, 31 Jan 2007 20:29:55 -0500",
"msg_from": "\"Chad Wagner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries"
}
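Plain EXPLAIN shows the plan without executing anything, while EXPLAIN ANALYZE actually runs the delete, so real timings are normally gathered inside a transaction that gets rolled back. A sketch using the LEFT JOIN variant Ted posted above:

-- plan only, returns immediately
EXPLAIN
DELETE FROM ceroriesgo.salarios
 WHERE numero_patrono IN (SELECT s.numero_patrono
                            FROM ceroriesgo.salarios s
                            LEFT JOIN ceroriesgo.patronos p USING (numero_patrono)
                           WHERE p.numero_patrono IS NULL);

-- for real timings: BEGIN; EXPLAIN ANALYZE <same statement>; ROLLBACK;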
] |
[
{
"msg_contents": "Following is one of the update query and it's explain plan which takes about 6 mins to execute. I am trying to find a way to execute it faster. The functions used in the update statement are if then else test and then return one value or the other.\n=====================================================\nupdate mdc_upc\n set total_curoh = ownedgoods(k.gmmid,k.divid, loc1_oh, loc2_oh,loc3_oh,loc120_oh, loc15_oh,chesh_curoh),\n total_curoo =orderedgoods(k.gmmid,k.divid, loc1_oo, loc2_oo,loc3_oo,loc120_oo, loc15_oo,chesh_curoo),\n total_oh = ownedgoods(k.gmmid,k.divid, 0, 0,loc3_oh,loc120_oh, loc15_oh,chesh_oh),\n total_oo =orderedgoods(k.gmmid,k.divid, 0, 0,loc3_oo,loc120_oo, loc15_oo,chesh_oo)\nfrom mdc_products p LEFT OUTER JOIN\nkst k on p.dvm_d = k.dept\nwhere p.keyp_products = mdc_upc.keyf_products;\n---------------------------------------------------------------------------\nHash Join (cost=48602.07..137331.77 rows=695899 width=391)\n Hash Cond: (\"outer\".keyf_products = \"inner\".keyp_products)\n -> Seq Scan on mdc_upc (cost=0.00..59153.99 rows=695899 width=383)\n -> Hash (cost=47274.60..47274.60 rows=530990 width=12)\n -> Hash Left Join (cost=43.85..47274.60 rows=530990 width=12)\n Hash Cond: (\"outer\".dvm_d = \"inner\".dept)\n -> Seq Scan on mdc_products p (cost=0.00..39265.90 rows=530990 width=8)\n -> Hash (cost=41.48..41.48 rows=948 width=12)\n -> Seq Scan on kst k (cost=0.00..41.48 rows=948 width=12)\n\n======================================================\nI have seen that the updates are very slow on our system. What parameter should I test in order to find out why is it slow during update.\n\nThanks\nAbu\n \n\n \n---------------------------------\nIt's here! Your new message!\nGet new email alerts with the free Yahoo! Toolbar.\nFollowing is one of the update query and it's explain plan which takes about 6 mins to execute. I am trying to find a way to execute it faster. The functions used in the update statement are if then else test and then return one value or the other.=====================================================update mdc_upc set total_curoh = ownedgoods(k.gmmid,k.divid, loc1_oh, loc2_oh,loc3_oh,loc120_oh, loc15_oh,chesh_curoh), total_curoo =orderedgoods(k.gmmid,k.divid, loc1_oo, loc2_oo,loc3_oo,loc120_oo, loc15_oo,chesh_curoo), total_oh = ownedgoods(k.gmmid,k.divid, 0, 0,loc3_oh,loc120_oh, loc15_oh,chesh_oh), total_oo =orderedgoods(k.gmmid,k.divid, 0, 0,loc3_oo,loc120_oo, loc15_oo,chesh_oo)from mdc_products p LEFT OUTER JOINkst k on p.dvm_d = k.deptwhere p.keyp_products =\n mdc_upc.keyf_products;---------------------------------------------------------------------------Hash Join (cost=48602.07..137331.77 rows=695899 width=391) Hash Cond: (\"outer\".keyf_products = \"inner\".keyp_products) -> Seq Scan on mdc_upc (cost=0.00..59153.99 rows=695899 width=383) -> Hash (cost=47274.60..47274.60 rows=530990 width=12) -> Hash Left Join (cost=43.85..47274.60 rows=530990 width=12) Hash Cond: (\"outer\".dvm_d = \"inner\".dept) -> Seq Scan on mdc_products p (cost=0.00..39265.90 rows=530990 width=8) -> Hash (cost=41.48..41.48 rows=948\n width=12) -> Seq Scan on kst k (cost=0.00..41.48 rows=948 width=12)======================================================I have seen that the updates are very slow on our system. What parameter should I test in order to find out why is it slow during update.ThanksAbu \nIt's here! Your new message!Get\n new email alerts with the free Yahoo! Toolbar.",
"msg_date": "Wed, 31 Jan 2007 16:57:38 -0800 (PST)",
"msg_from": "Abu Mushayeed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow update"
},
{
"msg_contents": "> Following is one of the update query and it's explain plan which takes\n> about 6 mins to execute. I am trying to find a way to execute it faster.\n> The functions used in the update statement are if then else test and\n> then return one value or the other.\n> =====================================================\n> update mdc_upc\n> set total_curoh = ownedgoods(k.gmmid,k.divid, loc1_oh,\n> loc2_oh,loc3_oh,loc120_oh, loc15_oh,chesh_curoh),\n> total_curoo =orderedgoods(k.gmmid,k.divid, loc1_oo,\n> loc2_oo,loc3_oo,loc120_oo, loc15_oo,chesh_curoo),\n> total_oh = ownedgoods(k.gmmid,k.divid, 0, 0,loc3_oh,loc120_oh,\n> loc15_oh,chesh_oh),\n> total_oo =orderedgoods(k.gmmid,k.divid, 0, 0,loc3_oo,loc120_oo,\n> loc15_oo,chesh_oo)\n> from mdc_products p LEFT OUTER JOIN\n> kst k on p.dvm_d = k.dept\n> where p.keyp_products = mdc_upc.keyf_products;\n>\n---------------------------------------------------------------------------\n> Hash Join (cost=48602.07..137331.77 rows=695899 width=391)\n> Hash Cond: (\"outer\".keyf_products = \"inner\".keyp_products)\n> -> Seq Scan on mdc_upc (cost=0.00..59153.99 rows=695899 width=383)\n> -> Hash (cost=47274.60..47274.60 rows=530990 width=12)\n> -> Hash Left Join (cost=43.85..47274.60 rows=530990 width=12)\n> Hash Cond: (\"outer\".dvm_d = \"inner\".dept)\n> -> Seq Scan on mdc_products p (cost=0.00..39265.90\n> rows=530990 width=8)\n> -> Hash (cost=41.48..41.48 rows=948 width=12)\n> -> Seq Scan on kst k (cost=0.00..41.48 rows=948\n> width=12)\n>\n> ======================================================\n> I have seen that the updates are very slow on our system. What parameter\n> should I test in order to find out why is it slow during update.\n>\n> Thanks\n> Abu\n\nObviously the update is slow because of sequential scans of the table,\nwith about 1GB od data to read - this may seem as 'not too much of data'\nbut there may be a lot of seeks, knocking the performance down. You can\nuse iowait / dstat to see how much time is spent waiting for the data.\n\nI have to admin I don't fully uderstand that query as I've never used\nthe UPDATE ... FROM query, but it seems to me you could set up some\nindexes to speed things up. I'd definitely start with indexes on the\ncolumns used in join conditions, namely\n\nCREATE INDEX mdc_products_keyp_idx ON mdc_products(keyp_products);\nCREATE INDEX mdc_upc_keyf_idx ON mdc_upc(keyf_products);\nCREATE INDEX mdc_products_dvm_idx ON mdc_products(dvm_d);\nCREATE INDEX kst_dept_idx ON kst(dept);\n\nbut this is just a guess as I really know NOTHING about those tables\n(structure, size, statistical features, etc.) Btw. don't forget to\nanalyze the tables.\n\nAnother 'uncertainty' is related to the functions used in your query,\nnamely ownedgoods() / orderedgoods(). If these procedures do something\nnontrivial (searches in tables, etc.) it might have severe impact on the\nquery - but the parser / optimizer knows nothing about these procedures\nso it can't optimize them.\n\nIf this won't help, we'll need some more information - for example what\ndoes the 'our system' mean - how much memory doest it have? What are the\nimportant settings in your postgresql.conf? Especially what is the value\nof effective_cache_size, work_mem, and some others (for example number\nof checkpoint segments as it seems like a write-intensive query).\n\nAnd last but not least info of the structure and size of the tables\n(columns, indexes, number of rows, number of occupied blocks, etc).\n\nTomas\n",
"msg_date": "Thu, 01 Feb 2007 02:32:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update"
}
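The server-side settings Tomas asks about can be read from any session without opening postgresql.conf; a quick sketch, where the values returned are simply whatever the server is currently running with:

SHOW shared_buffers;
SHOW effective_cache_size;
SHOW work_mem;
SHOW checkpoint_segments;
SELECT version();  -- the PostgreSQL version is useful context for the answers above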
] |
[
{
"msg_contents": "\nHello,\n\nI'm working on setting up replication with Slony, and will soon have a\nslave that a lot of SELECT traffic will be sent to (over 500k/day).\n\nThe primary query we need to run is somewhat complex, but seems to\ncomplete on average in well under a second.\n\nHowever, every so often (less in 1 in 10,000 queries) we'll see the\nquery take 2 or 3 minutes.\n\nIt's not clear why this is happening-- perhaps there is something else\ngoing on that is affecting this query.\n\nI'm considering the use of \"statement_timeout\" to limit the time of\nthis particular query, to suppress the rare \"run away\", and avoid tying\nup the processor for that additional time.\n\nI think it may be better to give up, quit spending cycles on it right\nthen, and return an \"oops, try again in a few minutes\" message instead.\n>From the data we have, seems like it has a strong chance of working again.\n\nIs anyone else using \"statement_timeout\" as part of an overall\nperformance plan?\n\n Mark\n\n",
"msg_date": "Thu, 01 Feb 2007 09:10:09 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using statement_timeout as a performance tool?"
}
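statement_timeout is set in milliseconds and can be applied per session, per query, or as a default for a role; when the limit is hit the statement is cancelled with an error the application can trap and turn into the "try again in a few minutes" message described above. A sketch only; the three-minute value and the role name are placeholders:

SET statement_timeout = 180000;   -- cap this session's statements at 3 minutes
-- run the occasionally-runaway SELECT here
RESET statement_timeout;

ALTER ROLE report_reader SET statement_timeout = 180000;  -- hypothetical read-only role used on the slave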
] |
[
{
"msg_contents": "I'm needing help determining the best all-around query for the\nfollowing situation. I have primary table that holds ip information\nand two other tables that hold event data for the specific IP in with\na one-to-many mapping between them, ie:\n\nCREATE TABLE ip_info (\n ip IP4,\n --other data\n);\n\nCREATE TABLE network_events (\n ip IP4 NOT NULL REFERENCES ip_info(ip),\n name VARCHAR,\n port INTEGER,\n --other data\n);\n\nCREATE TABLE host_events (\n ip IP4 NOT NULL REFERENCES ip_info(ip),\n name VARCHAR\n port INTEGER,\n --other data\n);\n\nThere is quite a bit of commonality between the network_events and\nhost_events schemas, but they do not currently share an ancestor.\nip_info has about 13 million rows, the network_events table has about\n30 million rows, and the host_events table has about 7 million rows.\nThere are indexes on all the rows.\n\nThe query that I would like to execute is to select all the rows of\nip_info that have either network or host events that meet some\ncriteria, i.e. name='blah'. I have 3 different possibilities that I\nhave thought of to execute this.\n\nFirst, 2 'ip IN (SELECT ...)' statements joined by an OR:\n\nSELECT * FROM ip_info\n WHERE ip IN (SELECT ip FROM network_events WHERE name='blah')\n OR ip IN (SELECT ip FROM host_events WHERE name='blah');\n\nNext, 1 'ip IN (SELECT ... UNION SELECT ...) statement:\n\nSELECT * FROM ip_info\n WHERE ip IN (SELECT ip FROM network_events WHERE name='blah'\n UNION\n SELECT ip FROM host_events WHERE name='blah');\n\nOr, finally, the UNION statment with DISTINCTs:\n\nSELECT * FROM ip_info\n WHERE ip IN (SELECT DISTINCT ip FROM network_events WHERE name='blah'\n UNION\n SELECT DISTINCT ip FROM host_events WHERE name='blah');\n\n From what I have read, the UNION statement does an implicit DISTINCT,\nbut I thought that doing it on each of the tables would result in\nslightly faster execution. Can you think of any other ways to\nimplement the previous query?\n\nI have explained/analyzed all the queries but, unfortunately, they are\non an isolated computer. The gist is that, for relatively\nlow-incidence values of name, the UNION performs better, but for\nqueries on a common name, the dual-subselect query performs better.\n\nThe explains look something like:\nDual-subselect:\nSeq scan on ip_info\n Filter: ... AND ((hashed_subplan) OR (hashed_subplan))\n Subplan\n -> Result\n -> Append\n -> various scans on host_events\n -> Result\n -> Append\n -> various scans on network_events\n\nUNION SELECT DISTINCT:\n\nNested Loop\n -> Unique\n -> Sort\n -> Append\n -> Unique\n -> Sort\n -> Result\n -> Append\n -> various scans on host_events\n -> Unique\n -> Sort\n -> Result\n -> Append\n -> various scans on network_events\n\nIf it would help to have more information, I could retype some of\nnumbers in the explain.\n\nAny ideas?\n\nThanks,\n-Mike\n",
"msg_date": "Thu, 1 Feb 2007 11:42:03 -0500",
"msg_from": "\"Michael Artz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Subselect query enhancement"
},
{
"msg_contents": "Michael Artz wrote:\n> I'm needing help determining the best all-around query for the\n> following situation. \n\nNot sure whether such a beast exists, but...\n\n > I have primary table that holds ip information\n> and two other tables that hold event data for the specific IP in with\n> a one-to-many mapping between them, ie:\n[snip]\n> There is quite a bit of commonality between the network_events and\n> host_events schemas, but they do not currently share an ancestor.\n> ip_info has about 13 million rows, the network_events table has about\n> 30 million rows, and the host_events table has about 7 million rows.\n> There are indexes on all the rows.\n\nWhat indexes though. Do you have (name,ip) on the two event tables?\n\nHow selective is \"name\" - are there many different values or just a few? \nIf lots, it might be worth increasing the statistics gathered on that \ncolumn (ALTER COLUMN ... SET STATISTICS).\nhttp://www.postgresql.org/docs/8.2/static/sql-altertable.html\n\n> The query that I would like to execute is to select all the rows of\n> ip_info that have either network or host events that meet some\n> criteria, i.e. name='blah'. I have 3 different possibilities that I\n> have thought of to execute this.\n> \n> First, 2 'ip IN (SELECT ...)' statements joined by an OR:\n> \n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT ip FROM network_events WHERE name='blah')\n> OR ip IN (SELECT ip FROM host_events WHERE name='blah');\n> \n> Next, 1 'ip IN (SELECT ... UNION SELECT ...) statement:\n> \n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT ip FROM network_events WHERE name='blah'\n> UNION\n> SELECT ip FROM host_events WHERE name='blah');\n> \n> Or, finally, the UNION statment with DISTINCTs:\n> \n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT DISTINCT ip FROM network_events WHERE name='blah'\n> UNION\n> SELECT DISTINCT ip FROM host_events WHERE name='blah');\n> \n> From what I have read, the UNION statement does an implicit DISTINCT,\n> but I thought that doing it on each of the tables would result in\n> slightly faster execution. Can you think of any other ways to\n> implement the previous query?\n\nYou're right about removing duplicates. Not sure whether the DISTINCTs \non the sub-selects are helping or hindering. It'll probably depend on \nyour hardware, config, number of rows etc.\n\nThe only other way I can think of for this query is to UNION two JOINs. \nMight interact well with the (name,ip) index I mentioned above.\n\n> I have explained/analyzed all the queries but, unfortunately, they are\n> on an isolated computer. The gist is that, for relatively\n> low-incidence values of name, the UNION performs better, but for\n> queries on a common name, the dual-subselect query performs better.\n\nDifficult to say much without seeing the full explain analyse. Did the \nrow estimates look reasonable?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 01 Feb 2007 17:23:47 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subselect query enhancement"
},
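In SQL form, the two suggestions above look roughly like the sketch below. The index names are only illustrative, and since the event tables turn out later in the thread to be partitioned by first octet, any new index has to be created on each child partition rather than just the parent:

-- finer-grained statistics on the filter column, then refresh the stats
ALTER TABLE network_events ALTER COLUMN name SET STATISTICS 100;
ALTER TABLE host_events ALTER COLUMN name SET STATISTICS 100;
ANALYZE network_events;
ANALYZE host_events;

-- composite index so matching ip values can be read straight off the index
CREATE INDEX network_events_name_ip_idx ON network_events (name, ip);
CREATE INDEX host_events_name_ip_idx ON host_events (name, ip);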
{
"msg_contents": "I've found that doing joins seems to produce better results on the big \ntables queries I use. This is not always the case though.\n\nHow about this option:\n\nSELECT distinct ip_info.* FROM ip_info RIGHT JOIN network_events USING \n(ip) RIGHT JOIN host_events USING (ip) WHERE \n(network_events.name='blah' OR host_events.name = 'blah') AND \nip_info.ip IS NOT NULL;\n\nThat gets rid of the sub-queries your using that look pretty costly.\n\nMichael Artz wrote:\n> I'm needing help determining the best all-around query for the\n> following situation. I have primary table that holds ip information\n> and two other tables that hold event data for the specific IP in with\n> a one-to-many mapping between them, ie:\n>\n> CREATE TABLE ip_info (\n> ip IP4,\n> --other data\n> );\n>\n> CREATE TABLE network_events (\n> ip IP4 NOT NULL REFERENCES ip_info(ip),\n> name VARCHAR,\n> port INTEGER,\n> --other data\n> );\n>\n> CREATE TABLE host_events (\n> ip IP4 NOT NULL REFERENCES ip_info(ip),\n> name VARCHAR\n> port INTEGER,\n> --other data\n> );\n>\n> There is quite a bit of commonality between the network_events and\n> host_events schemas, but they do not currently share an ancestor.\n> ip_info has about 13 million rows, the network_events table has about\n> 30 million rows, and the host_events table has about 7 million rows.\n> There are indexes on all the rows.\n>\n> The query that I would like to execute is to select all the rows of\n> ip_info that have either network or host events that meet some\n> criteria, i.e. name='blah'. I have 3 different possibilities that I\n> have thought of to execute this.\n>\n> First, 2 'ip IN (SELECT ...)' statements joined by an OR:\n>\n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT ip FROM network_events WHERE name='blah')\n> OR ip IN (SELECT ip FROM host_events WHERE name='blah');\n>\n> Next, 1 'ip IN (SELECT ... UNION SELECT ...) statement:\n>\n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT ip FROM network_events WHERE name='blah'\n> UNION\n> SELECT ip FROM host_events WHERE name='blah');\n>\n> Or, finally, the UNION statment with DISTINCTs:\n>\n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT DISTINCT ip FROM network_events WHERE name='blah'\n> UNION\n> SELECT DISTINCT ip FROM host_events WHERE name='blah');\n>\n>> From what I have read, the UNION statement does an implicit DISTINCT,\n> but I thought that doing it on each of the tables would result in\n> slightly faster execution. Can you think of any other ways to\n> implement the previous query?\n>\n> I have explained/analyzed all the queries but, unfortunately, they are\n> on an isolated computer. The gist is that, for relatively\n> low-incidence values of name, the UNION performs better, but for\n> queries on a common name, the dual-subselect query performs better.\n>\n> The explains look something like:\n> Dual-subselect:\n> Seq scan on ip_info\n> Filter: ... 
AND ((hashed_subplan) OR (hashed_subplan))\n> Subplan\n> -> Result\n> -> Append\n> -> various scans on host_events\n> -> Result\n> -> Append\n> -> various scans on network_events\n>\n> UNION SELECT DISTINCT:\n>\n> Nested Loop\n> -> Unique\n> -> Sort\n> -> Append\n> -> Unique\n> -> Sort\n> -> Result\n> -> Append\n> -> various scans on host_events\n> -> Unique\n> -> Sort\n> -> Result\n> -> Append\n> -> various scans on network_events\n>\n> If it would help to have more information, I could retype some of\n> numbers in the explain.\n>\n> Any ideas?\n>\n> Thanks,\n> -Mike\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n-- \n\n*Edward Allen*\nSoftware Engineer\nBlack Duck Software, Inc.\n\[email protected] <mailto:[email protected]>\nT +1.781.891.5100 x133\nF +1.781.891.5145\nhttp://www.blackducksoftware.com\n\n",
"msg_date": "Thu, 01 Feb 2007 12:28:51 -0500",
"msg_from": "Ted Allen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subselect query enhancement"
},
{
"msg_contents": "> How about this option:\n>\n> SELECT distinct ip_info.* FROM ip_info RIGHT JOIN network_events USING\n> (ip) RIGHT JOIN host_events USING (ip) WHERE\n> (network_events.name='blah' OR host_events.name = 'blah') AND\n> ip_info.ip IS NOT NULL;\n\nNah, that seems to be much much worse. The other queries usually\nreturn in 1-2 minutes, this one has been running for 30 minutes and\nhas still not returned\n\n-Mike\n",
"msg_date": "Thu, 1 Feb 2007 13:14:56 -0500",
"msg_from": "\"Michael Artz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subselect query enhancement"
},
{
"msg_contents": "On Thu, 1 Feb 2007 11:42:03 -0500\n\"Michael Artz\" <[email protected]> wrote:\n\n> I'm needing help determining the best all-around query for the\n> following situation. I have primary table that holds ip information\n> and two other tables that hold event data for the specific IP in with\n> a one-to-many mapping between them, ie:\n> \n> CREATE TABLE ip_info (\n> ip IP4,\n> --other data\n> );\n> \n> CREATE TABLE network_events (\n> ip IP4 NOT NULL REFERENCES ip_info(ip),\n> name VARCHAR,\n> port INTEGER,\n> --other data\n> );\n> \n> CREATE TABLE host_events (\n> ip IP4 NOT NULL REFERENCES ip_info(ip),\n> name VARCHAR\n> port INTEGER,\n> --other data\n> );\n\nIt would probably help to have an index on that column for all three\ntables, then I would wager using joins will be the speed winner. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Thu, 1 Feb 2007 12:20:45 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subselect query enhancement"
},
{
"msg_contents": "> > I have primary table that holds ip information\n> > and two other tables that hold event data for the specific IP in with\n> > a one-to-many mapping between them, ie:\n> [snip]\n> > There is quite a bit of commonality between the network_events and\n> > host_events schemas, but they do not currently share an ancestor.\n> > ip_info has about 13 million rows, the network_events table has about\n> > 30 million rows, and the host_events table has about 7 million rows.\n> > There are indexes on all the rows.\n>\n> What indexes though. Do you have (name,ip) on the two event tables?\n\nAll the columns are indexed individually. The tables are completely\nstatic, as I reload the whole DB with new data every day.\n\n> How selective is \"name\" - are there many different values or just a few?\n> If lots, it might be worth increasing the statistics gathered on that\n> column (ALTER COLUMN ... SET STATISTICS).\n> http://www.postgresql.org/docs/8.2/static/sql-altertable.html\n\nI guess that is the heart of my question. \"name\" is not very\nselective (there are only 20 or so choices) however other columns are\nfairly selective for certain cases, such as 'port'. When querying on\nand unusual port, the query is very fast, and the single UNIONed\nsubselect returns quickly. When 'port' is not very selective (like\nport = '80', which is roughly 1/2 of the rows in the DB), the dual\nsubselect query wins, hands-down.\n\nAnd I have altered the statistics via the config file:\n default_statistics_target = 100\nPerhaps this should be even higher for certain columns?\n\n> > The query that I would like to execute is to select all the rows of\n> > ip_info that have either network or host events that meet some\n> > criteria, i.e. name='blah'. I have 3 different possibilities that I\n> > have thought of to execute this.\n> >\n> > First, 2 'ip IN (SELECT ...)' statements joined by an OR:\n> >\n> > SELECT * FROM ip_info\n> > WHERE ip IN (SELECT ip FROM network_events WHERE name='blah')\n> > OR ip IN (SELECT ip FROM host_events WHERE name='blah');\n> >\n> > Next, 1 'ip IN (SELECT ... UNION SELECT ...) statement:\n> >\n> > SELECT * FROM ip_info\n> > WHERE ip IN (SELECT ip FROM network_events WHERE name='blah'\n> > UNION\n> > SELECT ip FROM host_events WHERE name='blah');\n> >\n> > Or, finally, the UNION statment with DISTINCTs:\n> >\n> > SELECT * FROM ip_info\n> > WHERE ip IN (SELECT DISTINCT ip FROM network_events WHERE name='blah'\n> > UNION\n> > SELECT DISTINCT ip FROM host_events WHERE name='blah');\n> >\n> > From what I have read, the UNION statement does an implicit DISTINCT,\n> > but I thought that doing it on each of the tables would result in\n> > slightly faster execution. Can you think of any other ways to\n> > implement the previous query?\n>\n> You're right about removing duplicates. Not sure whether the DISTINCTs\n> on the sub-selects are helping or hindering. It'll probably depend on\n> your hardware, config, number of rows etc.\n>\n> The only other way I can think of for this query is to UNION two JOINs.\n> Might interact well with the (name,ip) index I mentioned above.\n\nNah, that did very poorly.\n\n> > I have explained/analyzed all the queries but, unfortunately, they are\n> > on an isolated computer. The gist is that, for relatively\n> > low-incidence values of name, the UNION performs better, but for\n> > queries on a common name, the dual-subselect query performs better.\n>\n> Difficult to say much without seeing the full explain analyse. 
Did the\n> row estimates look reasonable?\n\nhmm, I think so, but I'm not that good in reading the outputs. I'll\nsee if I can retype some of the interesting bits of the explain\nanalyze.\n",
"msg_date": "Thu, 1 Feb 2007 13:27:32 -0500",
"msg_from": "\"Michael Artz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subselect query enhancement"
},
{
"msg_contents": "Michael Artz wrote:\n>> > I have primary table that holds ip information\n>> > and two other tables that hold event data for the specific IP in with\n>> > a one-to-many mapping between them, ie:\n>> [snip]\n>> > There is quite a bit of commonality between the network_events and\n>> > host_events schemas, but they do not currently share an ancestor.\n>> > ip_info has about 13 million rows, the network_events table has about\n>> > 30 million rows, and the host_events table has about 7 million rows.\n>> > There are indexes on all the rows.\n>>\n>> What indexes though. Do you have (name,ip) on the two event tables?\n> \n> All the columns are indexed individually. The tables are completely\n> static, as I reload the whole DB with new data every day.\n\nThe point of a (name,ip) index would be to let you read off ip numbers \nin order easily.\n\n>> How selective is \"name\" - are there many different values or just a few?\n>> If lots, it might be worth increasing the statistics gathered on that\n>> column (ALTER COLUMN ... SET STATISTICS).\n>> http://www.postgresql.org/docs/8.2/static/sql-altertable.html\n> \n> I guess that is the heart of my question. \"name\" is not very\n> selective (there are only 20 or so choices) however other columns are\n> fairly selective for certain cases, such as 'port'. When querying on\n> and unusual port, the query is very fast, and the single UNIONed\n> subselect returns quickly. When 'port' is not very selective (like\n> port = '80', which is roughly 1/2 of the rows in the DB), the dual\n> subselect query wins, hands-down.\n> \n> And I have altered the statistics via the config file:\n> default_statistics_target = 100\n> Perhaps this should be even higher for certain columns?\n\nYou're probably better off leaving it at 10 and upping it for the vital \ncolumns. 25 for names should be a good choice.\n\nYou could try partial indexes for those cases where you have \nparticularly common values of name/port:\n\nCREATE INDEX idx1 ON host_events (ip) WHERE port=80;\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 01 Feb 2007 19:02:40 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subselect query enhancement"
},
{
"msg_contents": "Here are some numbers for 3 different queries using a very selective\nquery (port = 2222). I'm thinking that, since the row estimates are\ndifferent from the actuals (2 vs 2000), that this particular port\ndidn't make it into the statistics ... is that true? Does this\nmatter? If this isn't enough data, I can type up some more.\n\nOne thing that I forgot to mention is that the network_events and\nhost_events tables are partitioned by first octet of the IP, so when I\nsay \"various scans of ...\" that means that there is a scan of each of\nthe partitions, the type determined by the planner and the statistics,\nI assume.\n\n** Dual subselects:\nSELECT * FROM ip_info\n WHERE ip IN (SELECT ip FROM network_events WHERE port = 2222)\n OR ip IN (SELECT ip FROM host_events WHERE port = 2222);\n\nSeq scan on ip_info (cost=2776..354575 rows=9312338 width=72) (actual\ntime=34..8238 rows=234 loops=1)\n Filter: ... AND ((hashed_subplan) OR (hashed_subplan))\n Subplan\n -> Result (cost=0..849 rows=459 width=4) (actual time=0.176..2.310\nrows=72 loops=1)\n -> Append (cost=0.00..849 rows=459 width=4) (actual\ntime=0.173..2.095 rows=72 loops=1)\n -> various scans on host_events\n -> Result (cost=0..1923 rows=856 width=4) (actual\ntime=0.072..24.614 rows=2393 loops=1)\n -> Append (cost=0..1923 rows=856 width=4) (actual time=0.069..27\nrows=2393 loops=1)\n -> various scans on network_events\n\n** Single subselect:\n\nSELECT * FROM ip_info\n WHERE ip IN (SELECT DISTINCT ip FROM network_events WHERE port = 2222\n UNION\n SELECT DISTINCT ip FROM host_events WHERE port = 2222);\n\nNested Loop (cost=2841..2856 rows=2 width=72) (actual time=55..106\nrows=2349 loops=1)\n -> Unique (cost=2841..2856 rows=2 width=72) (actual time=55..66\nrows=2349 loops=1)\n -> Sort (cost=2841..2841 rows=2 width=4) (actual time=55..58\nrows=2401 loops=1)\n -> Append (cost=1956..2841 rows=2 width=4) (actual time=29..50\nrows=2401 loops=1)\n -> Unique (cost=1956..1959 rows=2 width=4) (actual time=29..50\nrows=2401 loops=1)\n -> Sort\n -> Result\n -> Append\n -> various scans on network_events\n -> Unique (cost=869..871 rows=1 width=4) (actual time=2.9..3.3\nrows=70 loops=1)\n -> Sort\n -> Result\n -> Append\n -> various scans on host_events\n\n\n** The join:\n\nSELECT distinct ip_info.*\n FROM ip_info RIGHT JOIN network_events USING (ip)\n RIGHT JOIN host_events USING (ip)\n WHERE (network_events.port=2222 OR host_events.port=2222)\n\nUnique (cost=9238..9367 rows=1965 width=72) (actual time=61..61 rows=52 loops=1)\n -> Sort (cost=9238..9288 rows=1965 width=72) (actual time=61..61\nrows=63 loops=1)\n -> Hash Join (cost=850..9176 rows=1965 width=76) (actual\ntime=0..54 rows=2393 loops=1)\n -> Nested Loop Left Join (cost=0..8205 rows=856 width=76)\n(actual time=0..54 rows=2393 loops=1)\n -> Append\n -> various scans of network_events\n -> Index Scan of ip_info (cost=0..7 rows=1 width=72) (actual\ntime=0..0 rows=1 loops 2393)\n ->Hash (cost=849..849 rows=459 width=4) (actual time=0..2 rows=72 loops=1)\n -> Append\n ->various scans of host_events\n\n\nOn 2/1/07, Michael Artz <[email protected]> wrote:\n> > > I have primary table that holds ip information\n> > > and two other tables that hold event data for the specific IP in with\n> > > a one-to-many mapping between them, ie:\n> > [snip]\n> > > There is quite a bit of commonality between the network_events and\n> > > host_events schemas, but they do not currently share an ancestor.\n> > > ip_info has about 13 million rows, the network_events table has about\n> > > 30 million 
rows, and the host_events table has about 7 million rows.\n> > > There are indexes on all the rows.\n> >\n> > What indexes though. Do you have (name,ip) on the two event tables?\n>\n> All the columns are indexed individually. The tables are completely\n> static, as I reload the whole DB with new data every day.\n>\n> > How selective is \"name\" - are there many different values or just a few?\n> > If lots, it might be worth increasing the statistics gathered on that\n> > column (ALTER COLUMN ... SET STATISTICS).\n> > http://www.postgresql.org/docs/8.2/static/sql-altertable.html\n>\n> I guess that is the heart of my question. \"name\" is not very\n> selective (there are only 20 or so choices) however other columns are\n> fairly selective for certain cases, such as 'port'. When querying on\n> and unusual port, the query is very fast, and the single UNIONed\n> subselect returns quickly. When 'port' is not very selective (like\n> port = '80', which is roughly 1/2 of the rows in the DB), the dual\n> subselect query wins, hands-down.\n>\n> And I have altered the statistics via the config file:\n> default_statistics_target = 100\n> Perhaps this should be even higher for certain columns?\n>\n> > > The query that I would like to execute is to select all the rows of\n> > > ip_info that have either network or host events that meet some\n> > > criteria, i.e. name='blah'. I have 3 different possibilities that I\n> > > have thought of to execute this.\n> > >\n> > > First, 2 'ip IN (SELECT ...)' statements joined by an OR:\n> > >\n> > > SELECT * FROM ip_info\n> > > WHERE ip IN (SELECT ip FROM network_events WHERE name='blah')\n> > > OR ip IN (SELECT ip FROM host_events WHERE name='blah');\n> > >\n> > > Next, 1 'ip IN (SELECT ... UNION SELECT ...) statement:\n> > >\n> > > SELECT * FROM ip_info\n> > > WHERE ip IN (SELECT ip FROM network_events WHERE name='blah'\n> > > UNION\n> > > SELECT ip FROM host_events WHERE name='blah');\n> > >\n> > > Or, finally, the UNION statment with DISTINCTs:\n> > >\n> > > SELECT * FROM ip_info\n> > > WHERE ip IN (SELECT DISTINCT ip FROM network_events WHERE name='blah'\n> > > UNION\n> > > SELECT DISTINCT ip FROM host_events WHERE name='blah');\n> > >\n> > > From what I have read, the UNION statement does an implicit DISTINCT,\n> > > but I thought that doing it on each of the tables would result in\n> > > slightly faster execution. Can you think of any other ways to\n> > > implement the previous query?\n> >\n> > You're right about removing duplicates. Not sure whether the DISTINCTs\n> > on the sub-selects are helping or hindering. It'll probably depend on\n> > your hardware, config, number of rows etc.\n> >\n> > The only other way I can think of for this query is to UNION two JOINs.\n> > Might interact well with the (name,ip) index I mentioned above.\n>\n> Nah, that did very poorly.\n>\n> > > I have explained/analyzed all the queries but, unfortunately, they are\n> > > on an isolated computer. The gist is that, for relatively\n> > > low-incidence values of name, the UNION performs better, but for\n> > > queries on a common name, the dual-subselect query performs better.\n> >\n> > Difficult to say much without seeing the full explain analyse. Did the\n> > row estimates look reasonable?\n>\n> hmm, I think so, but I'm not that good in reading the outputs. I'll\n> see if I can retype some of the interesting bits of the explain\n> analyze.\n>\n",
"msg_date": "Thu, 1 Feb 2007 14:06:23 -0500",
"msg_from": "\"Michael Artz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subselect query enhancement"
},
{
"msg_contents": "\n>> How about this option:\n>>\n>> SELECT distinct ip_info.* FROM ip_info RIGHT JOIN network_events USING\n>> (ip) RIGHT JOIN host_events USING (ip) WHERE\n>> (network_events.name='blah' OR host_events.name = 'blah') AND\n>> ip_info.ip IS NOT NULL;\n\nMA> Nah, that seems to be much much worse. The other queries usually\nMA> return in 1-2 minutes, this one has been running for 30 minutes and\nMA> has still not returned\n\nI find that an OR involving two different fields (in this case even\ndifferent tables) is faster when replaced by the equivalent UNION. In this\ncase---\n\nSELECT distinct ip_info.* FROM ip_info RIGHT JOIN network_events USING\n(ip) WHERE\nnetwork_events.name='blah' AND ip_info.ip IS NOT NULL\nUNION\nSELECT distinct ip_info.* FROM ip_info RIGHT JOIN host_events USING (ip) WHERE\nhost_events.name = 'blah' AND ip_info.ip IS NOT NULL;\n\nMoreover, at least through 8.1, GROUP BY is faster than DISTINCT.\n\n\n\n",
"msg_date": "Thu, 1 Feb 2007 14:37:35 -0800",
"msg_from": "Andrew Lazarus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subselect query enhancement"
},
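One way to combine the ideas in this thread is to keep the IN (...) form but use GROUP BY instead of DISTINCT inside each branch, and UNION ALL instead of UNION, since IN disregards duplicates coming out of the subquery anyway. A sketch only; whether it actually beats the earlier variants depends on the data and, per Andrew's note, on the version:

SELECT *
  FROM ip_info
 WHERE ip IN (SELECT ip FROM network_events WHERE name = 'blah' GROUP BY ip
              UNION ALL
              SELECT ip FROM host_events WHERE name = 'blah' GROUP BY ip);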
{
"msg_contents": "Michael Artz wrote:\n> Here are some numbers for 3 different queries using a very selective\n> query (port = 2222). I'm thinking that, since the row estimates are\n> different from the actuals (2 vs 2000), that this particular port\n> didn't make it into the statistics ... is that true? Does this\n> matter? If this isn't enough data, I can type up some more.\n\nSELECT * FROM pg_stats WHERE tablename='foo';\nThis will show you frequency-stats on each column (as generated by \nanalyse). You're interested in n_distinct, most_common_vals, \nmost_common_freqs.\n\n> One thing that I forgot to mention is that the network_events and\n> host_events tables are partitioned by first octet of the IP, so when I\n> say \"various scans of ...\" that means that there is a scan of each of\n> the partitions, the type determined by the planner and the statistics,\n> I assume.\n\nSo you've got xxx_events tables partitioned by ip, but ip_info is one \ntable? Do you do a lot of scans across the bottom 3 bytes of the IP? If \nnot, I'm not clear what we're gaining from the partitioning.\n\n> ** Dual subselects:\n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT ip FROM network_events WHERE port = 2222)\n> OR ip IN (SELECT ip FROM host_events WHERE port = 2222);\n> \n> Seq scan on ip_info (cost=2776..354575 rows=9312338 width=72) (actual\n> time=34..8238 rows=234 loops=1)\n> Filter: ... AND ((hashed_subplan) OR (hashed_subplan))\n\nWell, the estimate here is rubbish - 9.3 million rows whereas we \nactually get 234. Now we know you're likely to get a lot of overlap, and \nthe planner might not realise that. Still - that looks very bad. Of \ncourse, because it's expecting so many rows a seq-scan of ip_info looks \nlike a good choice to it.\n\n> ** Single subselect:\n> \n> SELECT * FROM ip_info\n> WHERE ip IN (SELECT DISTINCT ip FROM network_events WHERE port = 2222\n> UNION\n> SELECT DISTINCT ip FROM host_events WHERE port = 2222);\n> \n> Nested Loop (cost=2841..2856 rows=2 width=72) (actual time=55..106\n> rows=2349 loops=1)\n\nThis is clearly a lot better, Not sure whether the DISTINCT in each \nsubquery works or not.\n\n> ** The join:\n> \n> SELECT distinct ip_info.*\n> FROM ip_info RIGHT JOIN network_events USING (ip)\n> RIGHT JOIN host_events USING (ip)\n> WHERE (network_events.port=2222 OR host_events.port=2222)\n> \n> Unique (cost=9238..9367 rows=1965 width=72) (actual time=61..61 rows=52 \n> loops=1)\n> -> Sort (cost=9238..9288 rows=1965 width=72) (actual time=61..61\n> rows=63 loops=1)\n\nOK, so what do the plans look like for port=80 or something larger like \nthat?\n\nThen try adding an index to the various host/network_events tables\nCREATE INDEX ... ON ... (ip) WHERE port=80;\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 02 Feb 2007 09:51:42 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subselect query enhancement"
}
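To check whether a value like port 2222 made it into the gathered statistics, the relevant pg_stats rows can be pulled directly; with the partitioned layout described earlier, the child table names would be substituted for the parent here:

SELECT tablename, attname, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
 WHERE tablename = 'network_events'
   AND attname IN ('name', 'port');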
] |
[
{
"msg_contents": "I'm looking to replace some old crusty hardware with some sparkling new \nhardware. In the process, I'm looking to move away from the previous \nmentality of having the Big Server for Everything to having a cluster of \nservers, each of which handles some discrete subset of data. But rackspace \nisn't inifinte, so I'm leaning towards cases that give me 8 drive bays. \nThis leaves me with an interesting problem of how to configure these \nlimited number of drives.\n\nI know that ideally I would have seperate spindles for WAL, indexes, and \ndata. But I also know that I must be able to survive a drive failure, and \nI want at least 1TB of space for my data. I suspect with so few drive \nbays, I won't be living in an ideal world.\n\nWith an even mix of reads and writes (or possibly more writes than reads), \nis it better to use RAID10 and have everything on the same partition, or \nto have data and indexes on a 6-drive RAID5 with WAL on its own RAID1?\n",
"msg_date": "Thu, 1 Feb 2007 16:41:35 -0800 (PST)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "drive configuration for a new server"
},
{
"msg_contents": "On Thu, 1 Feb 2007, Ben wrote:\n\n> I'm looking to replace some old crusty hardware with some sparkling new \n> hardware. In the process, I'm looking to move away from the previous \n> mentality of having the Big Server for Everything to having a cluster of \n> servers, each of which handles some discrete subset of data. But rackspace \n> isn't inifinte, so I'm leaning towards cases that give me 8 drive bays. This \n> leaves me with an interesting problem of how to configure these limited \n> number of drives.\n>\n> I know that ideally I would have seperate spindles for WAL, indexes, and \n> data. But I also know that I must be able to survive a drive failure, and I \n> want at least 1TB of space for my data. I suspect with so few drive bays, I \n> won't be living in an ideal world.\n>\n> With an even mix of reads and writes (or possibly more writes than reads), is \n> it better to use RAID10 and have everything on the same partition, or to have \n> data and indexes on a 6-drive RAID5 with WAL on its own RAID1?\n\nI'm surprised I haven't seen any responses to this, but maybe everyone's tired \nof the what to do with X drives question...perhaps we need a pgsql-perform \nFAQ?\n\nAt any rate, I just recently built a new PG server for a client which had 8 \nRaptors with an Areca 1160 controller that has the 1GB battery backed cache \ninstalled. We tested a few different configurations and decided on an 8 disk \nRAID10 with a separate WAL partition. The separate WAL partition was \nmarginally faster by a few percent.\n\nThe 8 disk RAID5 was actually a bit faster than the 8 disk RAID10 in overall \nthroughput with the Areca, but we opted for the RAID10 because of reliability \nreasons.\n\nThe moral of the story is to test each config with your workload and see what \nperforms the best. In our case, the battery backed write cache seemed to \nremove the need for a separate WAL disk, but someone elses workload might \nstill benefit from it.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Fri, 2 Feb 2007 10:16:23 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: drive configuration for a new server"
},
{
"msg_contents": "Thanks Jeff, this was exactly the kind of answer I was looking for.\n\nOn Fri, 2 Feb 2007, Jeff Frost wrote:\n\n> On Thu, 1 Feb 2007, Ben wrote:\n>\n>> I'm looking to replace some old crusty hardware with some sparkling new \n>> hardware. In the process, I'm looking to move away from the previous \n>> mentality of having the Big Server for Everything to having a cluster of \n>> servers, each of which handles some discrete subset of data. But rackspace \n>> isn't inifinte, so I'm leaning towards cases that give me 8 drive bays. \n>> This leaves me with an interesting problem of how to configure these \n>> limited number of drives.\n>> \n>> I know that ideally I would have seperate spindles for WAL, indexes, and \n>> data. But I also know that I must be able to survive a drive failure, and I \n>> want at least 1TB of space for my data. I suspect with so few drive bays, I \n>> won't be living in an ideal world.\n>> \n>> With an even mix of reads and writes (or possibly more writes than reads), \n>> is it better to use RAID10 and have everything on the same partition, or to \n>> have data and indexes on a 6-drive RAID5 with WAL on its own RAID1?\n>\n> I'm surprised I haven't seen any responses to this, but maybe everyone's \n> tired of the what to do with X drives question...perhaps we need a \n> pgsql-perform FAQ?\n>\n> At any rate, I just recently built a new PG server for a client which had 8 \n> Raptors with an Areca 1160 controller that has the 1GB battery backed cache \n> installed. We tested a few different configurations and decided on an 8 disk \n> RAID10 with a separate WAL partition. The separate WAL partition was \n> marginally faster by a few percent.\n>\n> The 8 disk RAID5 was actually a bit faster than the 8 disk RAID10 in overall \n> throughput with the Areca, but we opted for the RAID10 because of reliability \n> reasons.\n>\n> The moral of the story is to test each config with your workload and see what \n> performs the best. In our case, the battery backed write cache seemed to \n> remove the need for a separate WAL disk, but someone elses workload might \n> still benefit from it.\n>\n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n>\n",
"msg_date": "Fri, 2 Feb 2007 10:49:30 -0800 (PST)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: drive configuration for a new server"
}
] |
[
{
"msg_contents": "\nWhy should these queries have different plans?\n\n\ncreate table foo (a int PRIMARY KEY);\n\nQ1: explain select max(a) from foo\n\n> Result (cost=0.04..0.05 rows=1 width=0)\n> InitPlan\n> -> Limit (cost=0.00..0.04 rows=1 width=4)\n> -> Index Scan Backward using foo_pkey on foo\n> (cost=0.00..76.10 rows=2140 width=4)\n> Filter: (a IS NOT NULL)\n\nQ2: explain select max(a) from (select * from foo) as f\n\n> Aggregate (cost=36.75..36.76 rows=1 width=4)\n> -> Seq Scan on foo (cost=0.00..31.40 rows=2140 width=4)\n\n\nI need the lovely index scan, but my table is hidden behind a view, and\nall I get is the ugly sequential scan. Any ideas on how to convince the\noptimizer to unfold the subquery properly?\n\nBill\n",
"msg_date": "Thu, 01 Feb 2007 17:44:21 -0800",
"msg_from": "Bill Howe <[email protected]>",
"msg_from_op": true,
"msg_subject": "index scan through a subquery"
},
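A workaround that is often used while the planner cannot see through the subquery is to spell the aggregate as ORDER BY ... LIMIT 1, which should keep the backward index scan available even through a view; it matches max(a) here because the column is a NOT NULL primary key (max() ignores NULLs, and a DESC sort would otherwise put them first). A sketch:

SELECT a
  FROM (SELECT * FROM foo) AS f
 ORDER BY a DESC
 LIMIT 1;  -- equivalent to max(a) for this NOT NULL column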
{
"msg_contents": "Bill Howe <[email protected]> writes:\n> I need the lovely index scan, but my table is hidden behind a view, and\n> all I get is the ugly sequential scan. Any ideas on how to convince the\n> optimizer to unfold the subquery properly?\n\nYou should provide some context in this sort of gripe, like which PG\nversion you're using. But I'm going to guess that it's 8.2.x, because\n8.1.x gets it right :-(. Try the attached.\n\n\t\t\tregards, tom lane\n\nIndex: planagg.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/optimizer/plan/planagg.c,v\nretrieving revision 1.25\ndiff -c -r1.25 planagg.c\n*** planagg.c\t9 Jan 2007 02:14:13 -0000\t1.25\n--- planagg.c\t6 Feb 2007 06:30:23 -0000\n***************\n*** 70,75 ****\n--- 70,76 ----\n optimize_minmax_aggregates(PlannerInfo *root, List *tlist, Path *best_path)\n {\n \tQuery\t *parse = root->parse;\n+ \tFromExpr *jtnode;\n \tRangeTblRef *rtr;\n \tRangeTblEntry *rte;\n \tRelOptInfo *rel;\n***************\n*** 102,115 ****\n \t * We also restrict the query to reference exactly one table, since join\n \t * conditions can't be handled reasonably. (We could perhaps handle a\n \t * query containing cartesian-product joins, but it hardly seems worth the\n! \t * trouble.)\n \t */\n! \tAssert(parse->jointree != NULL && IsA(parse->jointree, FromExpr));\n! \tif (list_length(parse->jointree->fromlist) != 1)\n! \t\treturn NULL;\n! \trtr = (RangeTblRef *) linitial(parse->jointree->fromlist);\n! \tif (!IsA(rtr, RangeTblRef))\n \t\treturn NULL;\n \trte = rt_fetch(rtr->rtindex, parse->rtable);\n \tif (rte->rtekind != RTE_RELATION || rte->inh)\n \t\treturn NULL;\n--- 103,121 ----\n \t * We also restrict the query to reference exactly one table, since join\n \t * conditions can't be handled reasonably. (We could perhaps handle a\n \t * query containing cartesian-product joins, but it hardly seems worth the\n! \t * trouble.) However, the single real table could be buried in several\n! \t * levels of FromExpr.\n \t */\n! \tjtnode = parse->jointree;\n! \twhile (IsA(jtnode, FromExpr))\n! \t{\n! \t\tif (list_length(jtnode->fromlist) != 1)\n! \t\t\treturn NULL;\n! \t\tjtnode = linitial(jtnode->fromlist);\n! \t}\n! \tif (!IsA(jtnode, RangeTblRef))\n \t\treturn NULL;\n+ \trtr = (RangeTblRef *) jtnode;\n \trte = rt_fetch(rtr->rtindex, parse->rtable);\n \tif (rte->rtekind != RTE_RELATION || rte->inh)\n \t\treturn NULL;\n",
"msg_date": "Tue, 06 Feb 2007 01:52:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index scan through a subquery "
},
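For anyone stuck on an unpatched 8.2.x build, a possible workaround (not suggested in the thread, so treat it as an untested sketch) is to spell the aggregate as an ORDER BY/LIMIT query; that route does not go through the min/max aggregate optimizer at all, and for a NOT NULL column such as this primary key it returns the same value:

    SELECT a
    FROM (SELECT * FROM foo) AS f   -- stands in for the view hiding the table
    ORDER BY a DESC
    LIMIT 1;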
{
"msg_contents": "Tom Lane wrote:\n>> I need the lovely index scan, but my table is hidden behind a view, and\n>> all I get is the ugly sequential scan. Any ideas on how to convince the\n>> optimizer to unfold the subquery properly?\n> \n> You should provide some context in this sort of gripe, like which PG\n> version you're using. But I'm going to guess that it's 8.2.x, because\n> 8.1.x gets it right :-(. Try the attached.\n\nGood guess; I was indeed talking about the \"current release\" rather than\nthe \"previous release.\"\n\nAlso, apologies for the tone of my post: I was attempting to be jovial,\nbut in retrospect, I see how it reads as a \"gripe,\" which I guess\nevoked your frowny-face emoticon.\n\nThanks for the quick response, elegant fix, and ongoing excellent work!\n\nCheers,\nBill\n\n> Index: planagg.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/backend/optimizer/plan/planagg.c,v\n> retrieving revision 1.25\n> diff -c -r1.25 planagg.c\n> *** planagg.c\t9 Jan 2007 02:14:13 -0000\t1.25\n> --- planagg.c\t6 Feb 2007 06:30:23 -0000\n> ***************\n> *** 70,75 ****\n> --- 70,76 ----\n> optimize_minmax_aggregates(PlannerInfo *root, List *tlist, Path *best_path)\n> {\n> \tQuery\t *parse = root->parse;\n> + \tFromExpr *jtnode;\n> \tRangeTblRef *rtr;\n> \tRangeTblEntry *rte;\n> \tRelOptInfo *rel;\n> ***************\n> *** 102,115 ****\n> \t * We also restrict the query to reference exactly one table, since join\n> \t * conditions can't be handled reasonably. (We could perhaps handle a\n> \t * query containing cartesian-product joins, but it hardly seems worth the\n> ! \t * trouble.)\n> \t */\n> ! \tAssert(parse->jointree != NULL && IsA(parse->jointree, FromExpr));\n> ! \tif (list_length(parse->jointree->fromlist) != 1)\n> ! \t\treturn NULL;\n> ! \trtr = (RangeTblRef *) linitial(parse->jointree->fromlist);\n> ! \tif (!IsA(rtr, RangeTblRef))\n> \t\treturn NULL;\n> \trte = rt_fetch(rtr->rtindex, parse->rtable);\n> \tif (rte->rtekind != RTE_RELATION || rte->inh)\n> \t\treturn NULL;\n> --- 103,121 ----\n> \t * We also restrict the query to reference exactly one table, since join\n> \t * conditions can't be handled reasonably. (We could perhaps handle a\n> \t * query containing cartesian-product joins, but it hardly seems worth the\n> ! \t * trouble.) However, the single real table could be buried in several\n> ! \t * levels of FromExpr.\n> \t */\n> ! \tjtnode = parse->jointree;\n> ! \twhile (IsA(jtnode, FromExpr))\n> ! \t{\n> ! \t\tif (list_length(jtnode->fromlist) != 1)\n> ! \t\t\treturn NULL;\n> ! \t\tjtnode = linitial(jtnode->fromlist);\n> ! \t}\n> ! \tif (!IsA(jtnode, RangeTblRef))\n> \t\treturn NULL;\n> + \trtr = (RangeTblRef *) jtnode;\n> \trte = rt_fetch(rtr->rtindex, parse->rtable);\n> \tif (rte->rtekind != RTE_RELATION || rte->inh)\n> \t\treturn NULL;\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Tue, 06 Feb 2007 11:18:45 -0800",
"msg_from": "Bill Howe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index scan through a subquery"
}
] |
[
{
"msg_contents": "I am trying to do fairly simple joins on climate databases that \nshould return ~ 7 million rows of data. However, I'm getting an error \nmessage on a OS X (10.4 tiger server) machine that seems to imply \nthat I am running out of memory. The errors are:\n\npsql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)\npsql(15811) malloc: *** error: can't allocate region\npsql(15811) malloc: *** set a breakpoint in szone_error to debug\n\nThe query should return all data from all climate stations. In order \nto test the query I tried narrowing the SELECT statement to a return \ndata for a single station. This query worked (ie did not cause the \nmalloc errors) and returned the expected 200,000 or so rows. Since \nthis worked I don't think there is a problem with the join syntax.\n\nThis a a dual G5 box with 6 gigs of ram running postgresql 8.1. I \nhave not tired altering kernel resources (as described in http:// \nwww.postgresql.org/docs/8.1/interactive/kernel-resources.html#SHARED- \nMEMORY-PARAMETERS), or compiling for 64 bit. I'm just not sure what \nto try next. Does anyone have any suggestions?\n\nBest Regards,\n\nKirk\n\n",
"msg_date": "Fri, 2 Feb 2007 07:52:48 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "trouble with a join on OS X"
},
{
"msg_contents": "On Fri, Feb 02, 2007 at 07:52:48AM -0600, Kirk Wythers wrote:\n> psql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)\n> psql(15811) malloc: *** error: can't allocate region\n> psql(15811) malloc: *** set a breakpoint in szone_error to debug\n\nIt sounds like you are out of memory. Have you tried reducing work_mem?\nActually, what does your postgresql.conf look like with regard to memory\nsettings?\n\n> This a a dual G5 box with 6 gigs of ram running postgresql 8.1. I \n> have not tired altering kernel resources (as described in http:// \n> www.postgresql.org/docs/8.1/interactive/kernel-resources.html#SHARED- \n> MEMORY-PARAMETERS), or compiling for 64 bit. I'm just not sure what \n> to try next. Does anyone have any suggestions?\n\nCompiling for 64 bit might very well help you, but it sounds odd to use\nseveral gigabytes of RAM for a sort.\n\nCould you post EXPLAIN ANALYZE for the query with only one row, as well\nas your table schema?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 2 Feb 2007 15:41:18 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Kirk Wythers wrote:\n> I am trying to do fairly simple joins on climate databases that should \n> return ~ 7 million rows of data. However, I'm getting an error message \n> on a OS X (10.4 tiger server) machine that seems to imply that I am \n> running out of memory. The errors are:\n> \n> psql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)\nIs this actually in psql - the client code rather than the backend?\n\nCould it be that its allocating memory for its 7million result rows and \nrunning out of space for your user account?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 02 Feb 2007 14:45:20 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Richard Huxton wrote:\n> Kirk Wythers wrote:\n>> I am trying to do fairly simple joins on climate databases that should \n>> return ~ 7 million rows of data. However, I'm getting an error message \n>> on a OS X (10.4 tiger server) machine that seems to imply that I am \n>> running out of memory. The errors are:\n>>\n>> psql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)\n> Is this actually in psql - the client code rather than the backend?\n> \n> Could it be that its allocating memory for its 7million result rows and \n> running out of space for your user account?\n> \n\nHi,\n\nIf you look at the message carefully, it looks like (for me) that the \nclient is running out of memory. Can't allocate that 8,4MB :)\n\nRegards,\nAkos\n\n\n",
"msg_date": "Fri, 02 Feb 2007 15:50:46 +0100",
"msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]> writes:\n> Richard Huxton wrote:\n>> Kirk Wythers wrote:\n>>> I am trying to do fairly simple joins on climate databases that should \n>>> return ~ 7 million rows of data.\n\n> If you look at the message carefully, it looks like (for me) that the \n> client is running out of memory. Can't allocate that 8,4MB :)\n\nRight, the join result doesn't fit in the client's memory limit.\nThis is not too surprising, as the out-of-the-box ulimit settings\non Tiger appear to be\n\n$ ulimit -a\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) 6144\nfile size (blocks, -f) unlimited\nmax locked memory (kbytes, -l) unlimited\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 256\npipe size (512 bytes, -p) 1\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 266\nvirtual memory (kbytes, -v) unlimited\n$\n\n6 meg of memory isn't gonna hold 7 million rows ... so either raise\n\"ulimit -d\" (quite a lot) or else use a cursor to fetch the result\nin segments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Feb 2007 10:46:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "Thanks for the reply Steiner,\n\nOn Feb 2, 2007, at 8:41 AM, Steinar H. Gunderson wrote:\n\n> On Fri, Feb 02, 2007 at 07:52:48AM -0600, Kirk Wythers wrote:\n>> psql(15811) malloc: *** vm_allocate(size=8421376) failed (error \n>> code=3)\n>> psql(15811) malloc: *** error: can't allocate region\n>> psql(15811) malloc: *** set a breakpoint in szone_error to debug\n>\n> It sounds like you are out of memory. Have you tried reducing \n> work_mem?\n> Actually, what does your postgresql.conf look like with regard to \n> memory\n> settings?\n\nI have not altered postgresql.conf. I assume these are the defaults:\n\n# - Memory -\n\nshared_buffers = 300 # min 16 or \nmax_connections*2, 8KB each\n#temp_buffers = 1000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of \nshared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n#work_mem = 1024 # min 64, size in KB\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\nWhat about altering the sysctl values in /etc/rc to:\nsysctl -w kern.sysv.shmmax=167772160\nsysctl -w kern.sysv.shmmin=1\nsysctl -w kern.sysv.shmmni=32\nsysctl -w kern.sysv.shmseg=8\nsysctl -w kern.sysv.shmall=65536\n\nRIght now they are:\nsysctl -w kern.sysv.shmmax=4194304 kern.sysv.shmmin=1 \nkern.sysv.shmmni=32 kern.s\nysv.shmseg=8 kern.sysv.shmall=1024\n\n\n>\n>> This a a dual G5 box with 6 gigs of ram running postgresql 8.1. I\n>> have not tired altering kernel resources (as described in http://\n>> www.postgresql.org/docs/8.1/interactive/kernel-resources.html#SHARED-\n>> MEMORY-PARAMETERS), or compiling for 64 bit. I'm just not sure what\n>> to try next. Does anyone have any suggestions?\n>\n> Compiling for 64 bit might very well help you, but it sounds odd to \n> use\n> several gigabytes of RAM for a sort.\n>\n> Could you post EXPLAIN ANALYZE for the query with only one row, as \n> well\n> as your table schema?\n\nmet_data=# EXPLAIN ANALYSE SELECT sites.station_id, sites.longname, \nsites.lat, sites.lon, sites.thepoint_meter, weather.date, \nweather.year, weather.month, weather.day, weather.doy, \nweather.precip, weather.tmin, weather.tmax, weather.snowfall, \nweather.snowdepth, weather.tmean FROM sites LEFT OUTER JOIN weather \nON sites.station_id = weather.station_id WHERE weather.station_id = \n210018 AND weather.year = 1893 AND weather.doy = 365;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------\nNested Loop (cost=0.00..33426.63 rows=1 width=96) (actual \ntime=2.140..101.122 rows=1 loops=1)\n -> Index Scan using sites_pkey on sites (cost=0.00..5.25 rows=1 \nwidth=60) (actual time=0.106..0.111 rows=1 loops=1)\n Index Cond: (210018 = station_id)\n -> Index Scan using weather_pkey on weather \n(cost=0.00..33421.37 rows=1 width=40) (actual time=2.011..100.983 \nrows=1 loops=1)\n Index Cond: (station_id = 210018)\n Filter: ((\"year\" = 1893) AND (doy = 365))\nTotal runtime: 101.389 ms\n(7 rows)\n\nThe schema is public, but I'm not sure how to do an EXPAIN ANALYSE on \na schema.\n\n>\n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n\n",
"msg_date": "Fri, 2 Feb 2007 09:59:38 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "\nOn Feb 2, 2007, at 9:46 AM, Tom Lane wrote:\n\n> =?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]> writes:\n>> Richard Huxton wrote:\n>>> Kirk Wythers wrote:\n>>>> I am trying to do fairly simple joins on climate databases that \n>>>> should\n>>>> return ~ 7 million rows of data.\n>\n>> If you look at the message carefully, it looks like (for me) that the\n>> client is running out of memory. Can't allocate that 8,4MB :)\n>\n> Right, the join result doesn't fit in the client's memory limit.\n> This is not too surprising, as the out-of-the-box ulimit settings\n> on Tiger appear to be\n>\n> $ ulimit -a\n> core file size (blocks, -c) 0\n> data seg size (kbytes, -d) 6144\n> file size (blocks, -f) unlimited\n> max locked memory (kbytes, -l) unlimited\n> max memory size (kbytes, -m) unlimited\n> open files (-n) 256\n> pipe size (512 bytes, -p) 1\n> stack size (kbytes, -s) 8192\n> cpu time (seconds, -t) unlimited\n> max user processes (-u) 266\n> virtual memory (kbytes, -v) unlimited\n> $\n>\n> 6 meg of memory isn't gonna hold 7 million rows ... so either raise\n> \"ulimit -d\" (quite a lot) or else use a cursor to fetch the result\n> in segments.\n>\n\nThanks Tom... Any suggestions as to how much to raise ulimit -d? And \nhow to raise ulimit -d?\n",
"msg_date": "Fri, 2 Feb 2007 10:05:29 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:\n> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And \n> how to raise ulimit -d?\n\nTry multiplying it by 100 for a start:\n\n ulimit -d 614400\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 2 Feb 2007 17:09:03 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:\n>> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And \n>> how to raise ulimit -d?\n\n> Try multiplying it by 100 for a start:\n> ulimit -d 614400\n\nOr just \"ulimit -d unlimited\"\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Feb 2007 11:11:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "\nOn Feb 2, 2007, at 10:11 AM, Tom Lane wrote:\n\n> \"Steinar H. Gunderson\" <[email protected]> writes:\n>> On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:\n>>> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And\n>>> how to raise ulimit -d?\n>\n>> Try multiplying it by 100 for a start:\n>> ulimit -d 614400\n>\n> Or just \"ulimit -d unlimited\"\n\nThanks to everyone so far.\n\nHowever, setting ulimit to unlimited does not seem to solve the \nissue. Output from ulimit -a is:\n\ntruffula:~ kwythers$ ulimit -a\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nfile size (blocks, -f) unlimited\nmax locked memory (kbytes, -l) unlimited\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 256\npipe size (512 bytes, -p) 1\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 100\nvirtual memory (kbytes, -v) unlimited\n\nAlso, changes to kernel settings in /etc/rc include:\nsysctl -w kern.sysv.shmmax=167772160\nsysctl -w kern.sysv.shmmin=1\nsysctl -w kern.sysv.shmmni=32\nsysctl -w kern.sysv.shmseg=8\nsysctl -w kern.sysv.shmall=65536\n\nHowever, I'm still getting the memory error:\n\nmet_data=# SELECT sites.station_id, sites.longname, sites.lat, \nsites.lon, sites.thepoint_meter, weather.date, weather.year, \nweather.month, weather.day, weather.doy, weather.precip, \nweather.tmin, weather.tmax, weather.snowfall, weather.snowdepth, \nweather.tmean FROM sites LEFT OUTER JOIN weather ON sites.station_id \n= weather.station_id;\npsql(532) malloc: *** vm_allocate(size=8421376) failed (error code=3)\npsql(532) malloc: *** error: can't allocate region\npsql(532) malloc: *** set a breakpoint in szone_error to debug\nout of memory for query result\n\n\nAny other ideas out there?\n\n\n\n\n",
"msg_date": "Fri, 2 Feb 2007 11:54:20 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "Kirk Wythers <[email protected]> writes:\n> However, setting ulimit to unlimited does not seem to solve the \n> issue. Output from ulimit -a is:\n\nPossibly a silly question, but you are running the client code under the\nshell session that you adjusted ulimit for, yes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Feb 2007 12:59:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "At this point there are no silly questions. But I am running the \nquery under the shell session that I adjusted. I did discover that \nulimit -d only changes the shell session that you issue the command \nin. So I changed ulimit -d to unlimited, connected to the db with \npsql db_name, then ran the select command (all in the same shell).\n\n\nOn Feb 2, 2007, at 11:59 AM, Tom Lane wrote:\n\n> Kirk Wythers <[email protected]> writes:\n>> However, setting ulimit to unlimited does not seem to solve the\n>> issue. Output from ulimit -a is:\n>\n> Possibly a silly question, but you are running the client code \n> under the\n> shell session that you adjusted ulimit for, yes?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Fri, 2 Feb 2007 12:06:54 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "Kirk Wythers <[email protected]> writes:\n> However, setting ulimit to unlimited does not seem to solve the \n> issue.\n\nAfter some experimentation I'm left wondering exactly what ulimit's -d\noption is for on OS X, because it sure doesn't seem to be limiting\nprocess data size. (I should have been suspicious of a value as small\nas 6 meg, anyway.) I tried selecting a large unconstrained join on my own\nOS X machine, and what I saw (watching with \"top\") was that the psql\nprocess VSIZE went up to 1.75Gb before it failed with the same error as\nKirk got:\n\nregression=# select * from tenk1 a , tenk1 b;\npsql(16572) malloc: *** vm_allocate(size=8421376) failed (error code=3)\npsql(16572) malloc: *** error: can't allocate region\npsql(16572) malloc: *** set a breakpoint in szone_error to debug\n\nSince this is just a bog-standard Mini with 512M memory, it was pretty\nthoroughly on its knees by this point :-(. I'm not sure how to find out\nabout allocated swap space in OS X, but my bet is that the above message\nshould be understood as \"totally out of virtual memory\".\n\nMy suggestion is to use a cursor to retrieve the data in more\nmanageably-sized chunks than 7M rows. (If you don't want to mess with\nmanaging a cursor explicitly, as of 8.2 there's a psql variable\nFETCH_COUNT that can be set to make it happen behind the scenes.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Feb 2007 17:18:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X "
},
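If the explicit-cursor route is preferred over FETCH_COUNT, a minimal sketch looks like the following; it assumes the same sites/weather join discussed above, and the 50,000-row batch size is an arbitrary choice:

    BEGIN;
    DECLARE met_cur CURSOR FOR
        SELECT s.station_id, w.date, w.precip, w.tmin, w.tmax
        FROM sites s
        LEFT OUTER JOIN weather w ON s.station_id = w.station_id;
    -- repeat the FETCH until it returns no rows
    FETCH FORWARD 50000 FROM met_cur;
    CLOSE met_cur;
    COMMIT;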
{
"msg_contents": "Tom,\n\nOn 2/2/07 2:18 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> as of 8.2 there's a psql variable\n> FETCH_COUNT that can be set to make it happen behind the scenes.)\n\nFETCH_COUNT is a godsend and works beautifully for exactly this purpose.\n\nNow he's got to worry about how to page through 8GB of results in something\nless than geological time with the space bar ;-)\n\n- Luke\n\n\n",
"msg_date": "Fri, 02 Feb 2007 17:39:54 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Tom,\n> \n> On 2/2/07 2:18 PM, \"Tom Lane\" <[email protected]> wrote:\n> \n>> as of 8.2 there's a psql variable\n>> FETCH_COUNT that can be set to make it happen behind the scenes.)\n> \n> FETCH_COUNT is a godsend and works beautifully for exactly this purpose.\n> \n> Now he's got to worry about how to page through 8GB of results in something\n> less than geological time with the space bar ;-)\n\n\\o /tmp/really_big_cursor_return\n\n;)\n\nJoshua D. Drake\n\n> \n> - Luke\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Fri, 02 Feb 2007 17:48:38 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "> \\o /tmp/really_big_cursor_return\n> \n> ;)\n\nTough crowd :-D\n\n- Luke\n\n\n",
"msg_date": "Fri, 02 Feb 2007 17:53:03 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "On Feb 2, 2007, at 7:53 PM, Luke Lonergan wrote:\n\n> Tough crowd :-D\n\nNo kidding ;-)\n\n\nOn Feb 2, 2007, at 7:53 PM, Luke Lonergan wrote:Tough crowd :-D No kidding ;-)",
"msg_date": "Fri, 2 Feb 2007 20:01:39 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "\nOn Feb 2, 2007, at 7:39 PM, Luke Lonergan wrote:\n\n> Tom,\n>\n> On 2/2/07 2:18 PM, \"Tom Lane\" <[email protected]> wrote:\n>\n>> as of 8.2 there's a psql variable\n>> FETCH_COUNT that can be set to make it happen behind the scenes.)\n>\n> FETCH_COUNT is a godsend and works beautifully for exactly this \n> purpose.\n>\n> Now he's got to worry about how to page through 8GB of results in \n> something\n> less than geological time with the space bar ;-)\n\nI actually have no intention of paging through the results, but \nrather need to use the query to get the results into a new table with \nUPDATE, so that a GIS system can do some interpolations with subsets \nof the results.\n\n>\n> - Luke\n>\n>\n\n",
"msg_date": "Fri, 2 Feb 2007 20:03:54 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Luke Lonergan wrote:\n>> \\o /tmp/really_big_cursor_return\n>>\n>> ;)\n> \n> Tough crowd :-D\n\nYeah well Andrew probably would have said use sed and pipe it through\nawk to get the data you want.\n\nJoshua D. Drake\n\n> \n> - Luke\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Fri, 02 Feb 2007 18:05:21 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Luke Lonergan wrote:\n>>> \\o /tmp/really_big_cursor_return\n>>>\n>>> ;)\n>> Tough crowd :-D\n> \n> Yeah well Andrew probably would have said use sed and pipe it through\n> awk to get the data you want.\n\nChances are, if you're using awk, you shouldn't need sed. :)\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n",
"msg_date": "Fri, 02 Feb 2007 21:10:21 -0500",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Kirk Wythers <[email protected]> writes:\n> On Feb 2, 2007, at 7:39 PM, Luke Lonergan wrote:\n>> Now he's got to worry about how to page through 8GB of results in \n>> something less than geological time with the space bar ;-)\n\n> I actually have no intention of paging through the results, but \n> rather need to use the query to get the results into a new table with \n> UPDATE, so that a GIS system can do some interpolations with subsets \n> of the results.\n\nEr ... then why are you SELECTing the data at all? You can most likely\nget it done much faster if the data stays inside the database engine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Feb 2007 21:32:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "Geoffrey wrote:\n> Joshua D. Drake wrote:\n>> Luke Lonergan wrote:\n>>>> \\o /tmp/really_big_cursor_return\n>>>>\n>>>> ;)\n>>> Tough crowd :-D\n>>\n>> Yeah well Andrew probably would have said use sed and pipe it through\n>> awk to get the data you want.\n> \n> Chances are, if you're using awk, you shouldn't need sed. :)\n\nChances are.. if you are using awk or sed, you should use perl ;)\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Fri, 02 Feb 2007 18:32:26 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "\nOn Feb 2, 2007, at 8:32 PM, Tom Lane wrote:\n\n> Kirk Wythers <[email protected]> writes:\n>> On Feb 2, 2007, at 7:39 PM, Luke Lonergan wrote:\n>>> Now he's got to worry about how to page through 8GB of results in\n>>> something less than geological time with the space bar ;-)\n>\n>> I actually have no intention of paging through the results, but\n>> rather need to use the query to get the results into a new table with\n>> UPDATE, so that a GIS system can do some interpolations with subsets\n>> of the results.\n>\n> Er ... then why are you SELECTing the data at all? You can most \n> likely\n> get it done much faster if the data stays inside the database engine.\n>\n> \t\t\t\n\nThe new table needs to be filled with the results of the join. If \nthere is a way to do this without a SELECT, please share.\n",
"msg_date": "Fri, 2 Feb 2007 21:19:15 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X "
},
{
"msg_contents": "Kirk Wythers wrote:\n> \n> On Feb 2, 2007, at 8:32 PM, Tom Lane wrote:\n> \n>> Kirk Wythers <[email protected]> writes:\n>>> On Feb 2, 2007, at 7:39 PM, Luke Lonergan wrote:\n>>>> Now he's got to worry about how to page through 8GB of results in\n>>>> something less than geological time with the space bar ;-)\n>>\n>>> I actually have no intention of paging through the results, but\n>>> rather need to use the query to get the results into a new table with\n>>> UPDATE, so that a GIS system can do some interpolations with subsets\n>>> of the results.\n>>\n>> Er ... then why are you SELECTing the data at all? You can most likely\n>> get it done much faster if the data stays inside the database engine.\n>>\n>> \n> \n> The new table needs to be filled with the results of the join. If there\n> is a way to do this without a SELECT, please share.\n\n\nINSERT INTO foo SELECT * FROM BAR JOIN baz USING (id)\n\nJoshua D. Drake\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Fri, 02 Feb 2007 19:25:38 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Kirk Wythers <[email protected]> writes:\n> The new table needs to be filled with the results of the join. If \n> there is a way to do this without a SELECT, please share.\n\nIf it's an entirely new table, then you probably want to use INSERT\n... SELECT. If what you want is to update existing rows using a join,\nyou can use UPDATE ... FROM (not standard) or something involving a\nsub-select. You'd need to state your problem in some detail to get more\nhelp than that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Feb 2007 22:31:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X "
},
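A sketch of what keeping the join inside the server could look like for this data set; the target table name is made up, and CREATE TABLE ... AS is just one variant of the INSERT ... SELECT idea above:

    -- build the joined table entirely server-side, so the 7-million-row
    -- result never has to fit in the psql client
    CREATE TABLE sites_weather AS
    SELECT s.station_id, s.longname, s.lat, s.lon,
           w.date, w.precip, w.tmin, w.tmax, w.tmean
    FROM sites s
    LEFT OUTER JOIN weather w ON s.station_id = w.station_id;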
{
"msg_contents": "Tom Lane wrote:\n\n> Since this is just a bog-standard Mini with 512M memory, it was pretty\n> thoroughly on its knees by this point :-(. I'm not sure how to find out\n> about allocated swap space in OS X, but my bet is that the above message\n> should be understood as \"totally out of virtual memory\".\n\njust so you can look into it for your own curiosity ;-) - Mac OS X uses \nthe startup disk for VM storage. You can find the files in - /var/vm\n\nYou will find the swapfiles there, the size of the swapfiles \nprogressively get larger - swapfile0 and 1 are 64M then 2 is 128M, 3 is \n256M, 4 is 512M, 5 is 1G.... each is preallocated so it only gives you a \nrough idea of how much vm is being used. You would run out when your \nstartup disk is full, though most apps probably hit the wall at 4G of vm \nunless you have built a 64bit version.\n\nThe 4G (32bit) limit may be where you hit the out of memory errors (or \nis postgres get around that with it's caching?).\n\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Sun, 04 Feb 2007 02:29:49 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "\nOn Feb 3, 2007, at 9:59 AM, Shane Ambler wrote:\n\n>\n> just so you can look into it for your own curiosity ;-) - Mac OS X \n> uses the startup disk for VM storage. You can find the files in - / \n> var/vm\n>\n> You will find the swapfiles there, the size of the swapfiles \n> progressively get larger - swapfile0 and 1 are 64M then 2 is 128M, \n> 3 is 256M, 4 is 512M, 5 is 1G.... each is preallocated so it only \n> gives you a rough idea of how much vm is being used. You would run \n> out when your startup disk is full, though most apps probably hit \n> the wall at 4G of vm unless you have built a 64bit version.\n>\n> The 4G (32bit) limit may be where you hit the out of memory errors \n> (or is postgres get around that with it's caching?).\n\nAny idea if postgres on OS X can truely access more that 4 gigs if \nthe 64 bit version is built? I have tried building the 64 bit version \nof some other apps on OS X, and I have never been convinced that they \nbehaved as true 64 bit.\n\n>\n>\n>\n> -- \n>\n> Shane Ambler\n> [email protected]\n>\n> Get Sheeky @ http://Sheeky.Biz\n\n",
"msg_date": "Sun, 4 Feb 2007 22:29:52 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X"
},
{
"msg_contents": "Kirk Wythers wrote:\n\n>> The 4G (32bit) limit may be where you hit the out of memory errors (or \n>> is postgres get around that with it's caching?).\n> \n> Any idea if postgres on OS X can truely access more that 4 gigs if the \n> 64 bit version is built? I have tried building the 64 bit version of \n> some other apps on OS X, and I have never been convinced that they \n> behaved as true 64 bit.\n> \n\nI haven't tried myself\n\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Tue, 06 Feb 2007 04:20:11 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trouble with a join on OS X"
}
] |
[
{
"msg_contents": "Tom,\n\nI tried ulimit -d 614400, but the query ended with the same error. I \nthought then that the message:\npsql(21522) malloc: *** vm_allocate(size=8421376) failed (error code=3)\npsql(21522) malloc: *** error: can't allocate region\npsql(21522) malloc: *** set a breakpoint in szone_error to debug\nout of memory for query result\n\nwas telling me that I needed 841376 for the querry, so I tied bumping \nulimit -d up another 10 to 6244000. However, that attempt gave the \nerror:\ntruffula:~ kwythers$ ulimit -d 6144000\n-bash: ulimit: data seg size: cannot modify limit: Operation not \npermitted\n\nSo I tried re-setting ulimit -d back to 6144, which worked, but now I \ncan not seem to get ulimit -d to change again. It will not even allow \nulimit -d 614400 (even though that worked a second ago). This seems \nvery odd.\n\n\n\nOn Feb 2, 2007, at 10:11 AM, Tom Lane wrote:\n\n\n> \"Steinar H. Gunderson\" <[email protected]> writes:\n>\n>> On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:\n>>\n>>> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And\n>>> how to raise ulimit -d?\n>>>\n>\n>\n>> Try multiplying it by 100 for a start:\n>> ulimit -d 614400\n>>\n>\n> Or just \"ulimit -d unlimited\"\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n",
"msg_date": "Fri, 2 Feb 2007 10:53:04 -0600",
"msg_from": "Kirk Wythers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trouble with a join on OS X "
}
] |
[
{
"msg_contents": "Is there any way of monitoring and/or controlling disk buffer cache \nallocation and usage on Mac OS X? I'm thinking of two things: 1) being \nable to give PG a more accurate estimate of the size of the cache, and \n2) being able to more quickly flush the cache for testing the \nperformance of cold queries. It would be fantastic for testing to be \nable to signal both PG and the operating system to invalidate their caches.\n\n-Kevin Murphy\n\n",
"msg_date": "Fri, 02 Feb 2007 14:26:57 -0500",
"msg_from": "Kevin Murphy <[email protected]>",
"msg_from_op": true,
"msg_subject": "OT: Mac OS X disk buffer cache"
}
] |
[
{
"msg_contents": "\nHello,\n\nI'm using geo_distance() from contrib/earthdistance would like to find a\nway to spend up the geo distance calculation if possible. This is for a\nproximity search: \"Show me adoptable pets within 250 miles of this\nzipcode\".\n\nI'm researched a number of approaches to this, but none seem as workable\nas I would have hoped.\n\nI read this claim [1] that Jobster used a lookup table of pre-calculated\ndistances between zipcodes...it took about 100 million rows.\n\n1. http://bostonsteamer.livejournal.com/831325.html\n\nI'd like to avoid that, but I think there's a sound concept in there: we\nrepeatedly making a complex calculation with the same inputs, and the\noutputs are always the same.\n\nThe zipdy project [2] used some interesting approaches, both similar to\nthe large table idea. One variation involved a PL routine that would\nlook up the result in a cache table. If no result was found, it would\nwould compute the result and add it to the cache table. Besides\neventually creating millions of rows in the cache table, I tried this\ntechnique and found it was much slower than using geo_distance() without\na cache. Another variation in the zipdy distribution just uses several\nsmaller cache tables, like one for \"zipcodes 25 miles away\" and\n\"zipcodes 50 miles away\". Equally unattractive.\n\n2. http://www.cryptnet.net/fsp/zipdy/\n\nI looked at doing the calculation outside of PostgreSQL, and passing in\nthe resulting list of zipcodes in an explicit IN() list. This seem\npromising at first. Geo::Postalcode (Perl) could do the calculation in\n5ms, which seemed to beat PostgreSQL. For a small proximity, I think\nthat combination might have performed better. However, some places have\nclose to 5,000 zipcodes within 250 files. I tried putting /that/\nresulting list into an explicitly IN() clause, and it got much slower. :)\n\nI did find there are some possible optimizations that can be made to the\nHaversine algorithm itself. As this post pointed out [3], we could\npre-convert the lat/lon pair to radians, and also compute their sin()\nand cos() values. However, the person suggesting this approach provided\nno benchmarks to suggest it was worth it, and I have no evidence so far\nthat it matters either.\n\n3.\nhttp://www.voxclandestina.com/2006-09-01/optimizing-spatial-proximity-searches-in-sql/\n\nWhat strikes me to consider at this point are a couple of options:\n\nA. Explore a way add some memory caching or \"memoizing\" to\ngeo_distance() so it would hold on to frequently pre-computed values,\nbut without storing all the millions of possibilities.\n\nB. Look at an alternate implementation. I suspect that given a small\nenough radius and the relatively large size of zipcodes, a simpler\nrepresentation of the Earth's curvature could be used, with a sufficient\naccuracy. Perhaps a cylinder, or even a flat projection... We currently\nmax out at 250 miles. ( I just discussed this option with my wife, the\nmath teacher. :)\n\nAdvice from other people who have deployed high-performance proximity\nsearches with PostgreSQL would be appreciated!\n\n Mark\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 03 Feb 2007 14:00:26 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimizing a geo_distance() proximity query"
},
{
"msg_contents": "Mark,\n\nin astronomy we extensively use such kind of query, which we call\nradial search or conesearch. There are several algorithms which perform\nefficiently such query using spherical coordinates. \nSpecifically, we use our Q3C algorithm, see\nhttp://www.sai.msu.su/~megera/wiki/SkyPixelization for details,\nwhich was designed for PostgreSQL and is freely available.\n\nThe paper is http://lnfm1.sai.msu.ru/~math/docs/adass_proceedings2005.pdf\nWeb site - http://q3c.sf.net/\n\n\nOleg\n\nOn Sat, 3 Feb 2007, Mark Stosberg wrote:\n\n>\n> Hello,\n>\n> I'm using geo_distance() from contrib/earthdistance would like to find a\n> way to spend up the geo distance calculation if possible. This is for a\n> proximity search: \"Show me adoptable pets within 250 miles of this\n> zipcode\".\n>\n> I'm researched a number of approaches to this, but none seem as workable\n> as I would have hoped.\n>\n> I read this claim [1] that Jobster used a lookup table of pre-calculated\n> distances between zipcodes...it took about 100 million rows.\n>\n> 1. http://bostonsteamer.livejournal.com/831325.html\n>\n> I'd like to avoid that, but I think there's a sound concept in there: we\n> repeatedly making a complex calculation with the same inputs, and the\n> outputs are always the same.\n>\n> The zipdy project [2] used some interesting approaches, both similar to\n> the large table idea. One variation involved a PL routine that would\n> look up the result in a cache table. If no result was found, it would\n> would compute the result and add it to the cache table. Besides\n> eventually creating millions of rows in the cache table, I tried this\n> technique and found it was much slower than using geo_distance() without\n> a cache. Another variation in the zipdy distribution just uses several\n> smaller cache tables, like one for \"zipcodes 25 miles away\" and\n> \"zipcodes 50 miles away\". Equally unattractive.\n>\n> 2. http://www.cryptnet.net/fsp/zipdy/\n>\n> I looked at doing the calculation outside of PostgreSQL, and passing in\n> the resulting list of zipcodes in an explicit IN() list. This seem\n> promising at first. Geo::Postalcode (Perl) could do the calculation in\n> 5ms, which seemed to beat PostgreSQL. For a small proximity, I think\n> that combination might have performed better. However, some places have\n> close to 5,000 zipcodes within 250 files. I tried putting /that/\n> resulting list into an explicitly IN() clause, and it got much slower. :)\n>\n> I did find there are some possible optimizations that can be made to the\n> Haversine algorithm itself. As this post pointed out [3], we could\n> pre-convert the lat/lon pair to radians, and also compute their sin()\n> and cos() values. However, the person suggesting this approach provided\n> no benchmarks to suggest it was worth it, and I have no evidence so far\n> that it matters either.\n>\n> 3.\n> http://www.voxclandestina.com/2006-09-01/optimizing-spatial-proximity-searches-in-sql/\n>\n> What strikes me to consider at this point are a couple of options:\n>\n> A. Explore a way add some memory caching or \"memoizing\" to\n> geo_distance() so it would hold on to frequently pre-computed values,\n> but without storing all the millions of possibilities.\n>\n> B. Look at an alternate implementation. I suspect that given a small\n> enough radius and the relatively large size of zipcodes, a simpler\n> representation of the Earth's curvature could be used, with a sufficient\n> accuracy. Perhaps a cylinder, or even a flat projection... 
We currently\n> max out at 250 miles. ( I just discussed this option with my wife, the\n> math teacher. :)\n>\n> Advice from other people who have deployed high-performance proximity\n> searches with PostgreSQL would be appreciated!\n>\n> Mark\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Sat, 3 Feb 2007 22:11:51 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing a geo_distance() proximity query"
},
{
"msg_contents": "On Sat, Feb 03, 2007 at 14:00:26 -0500,\n Mark Stosberg <[email protected]> wrote:\n> \n> I'm using geo_distance() from contrib/earthdistance would like to find a\n> way to spend up the geo distance calculation if possible. This is for a\n> proximity search: \"Show me adoptable pets within 250 miles of this\n> zipcode\".\n\nIf you are using the \"cube\" based part of the earth distance package,\nthen you can use gist indexes to speed those searches up. There are\nfunctions for creating boxes that include all of the points some distance\nfrom a fixed point. This is lossy, so you need to recheck if you don't\nwant some points a bit farther away returned. Also you would need to\npick a point to be where the zip code is located, rather than using area\nbased zip codes. However, if you have actually addresses you could use the\ntiger database to locate them instead of just zip code locations.\n",
"msg_date": "Sat, 3 Feb 2007 23:16:44 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing a geo_distance() proximity query"
},
{
"msg_contents": "Bruno Wolff III wrote:\n> On Sat, Feb 03, 2007 at 14:00:26 -0500,\n> Mark Stosberg <[email protected]> wrote:\n>> I'm using geo_distance() from contrib/earthdistance would like to find a\n>> way to spend up the geo distance calculation if possible. This is for a\n>> proximity search: \"Show me adoptable pets within 250 miles of this\n>> zipcode\".\n> \n> If you are using the \"cube\" based part of the earth distance package,\n> then you can use gist indexes to speed those searches up. \n\nThanks for the tip. Any idea what kind of improvement I can expect to\nsee, compared to using geo_distance()?\n\n> There are functions for creating boxes that include all of the points some distance\n> from a fixed point. This is lossy, so you need to recheck if you don't\n> want some points a bit farther away returned. Also you would need to\n> pick a point to be where the zip code is located, rather than using area\n> based zip codes. \n\nThis is also interesting. Is this approach practical if I want to index\nwhat's near each of about 40,000 US zipcodes, or the approach mostly\nuseful if you there are just a small number of fixed points to address?\n\nI'm going to start installing the cube() and earth_distance() functions\ntoday and see where I can get with the approach.\n\n Mark\n",
"msg_date": "Mon, 05 Feb 2007 14:47:25 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing a geo_distance() proximity query"
},
{
"msg_contents": "On 2/5/07, Mark Stosberg <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> > On Sat, Feb 03, 2007 at 14:00:26 -0500,\n> > Mark Stosberg <[email protected]> wrote:\n> >> I'm using geo_distance() from contrib/earthdistance would like to find a\n> >> way to spend up the geo distance calculation if possible. This is for a\n> >> proximity search: \"Show me adoptable pets within 250 miles of this\n> >> zipcode\".\n> >\n> > If you are using the \"cube\" based part of the earth distance package,\n> > then you can use gist indexes to speed those searches up.\n>\n> Thanks for the tip. Any idea what kind of improvement I can expect to\n> see, compared to using geo_distance()?\n\na lot. be aware that gist takes longer to build than btree, but very\nfast to search. Index search and filter to box is basically an index\nlookup (fast!). for mostly static datasets that involve a lot of\nsearching, gist is ideal.\n\nkeep in mind that the cube based gist searches out a the smallest\nlat/lon 'square' projected onto the earth which covers your circular\nradius so you have to do extra processing if you want exact matches. (\nyou can speed this up to, by doing an 'inner box' search and not\nrecomputing distance to those points)\n\nmerlin\n",
"msg_date": "Mon, 5 Feb 2007 15:15:15 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing a geo_distance() proximity query"
},
{
"msg_contents": "On Mon, Feb 05, 2007 at 14:47:25 -0500,\n Mark Stosberg <[email protected]> wrote:\n> \n> This is also interesting. Is this approach practical if I want to index\n> what's near each of about 40,000 US zipcodes, or the approach mostly\n> useful if you there are just a small number of fixed points to address?\n\nI think the answer depends on what your data model is. If you treat each\nzip code as having a location at a single point, the earth distance stuff\nshould work. If you are trying to include the shape of each zip code in\nyour model and measure distances to the nearest point of zip codes, then\nyou will probably be better off using postgis.\n",
"msg_date": "Mon, 5 Feb 2007 14:22:09 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing a geo_distance() proximity query"
},
{
"msg_contents": "Merlin Moncure wrote:\n> On 2/5/07, Mark Stosberg <[email protected]> wrote:\n>> Bruno Wolff III wrote:\n>> > On Sat, Feb 03, 2007 at 14:00:26 -0500,\n>> > Mark Stosberg <[email protected]> wrote:\n>> >> I'm using geo_distance() from contrib/earthdistance would like to\n>> find a\n>> >> way to spend up the geo distance calculation if possible. This is\n>> for a\n>> >> proximity search: \"Show me adoptable pets within 250 miles of this\n>> >> zipcode\".\n>> >\n>> > If you are using the \"cube\" based part of the earth distance package,\n>> > then you can use gist indexes to speed those searches up.\n>>\n>> Thanks for the tip. Any idea what kind of improvement I can expect to\n>> see, compared to using geo_distance()?\n> \n> a lot. be aware that gist takes longer to build than btree, but very\n> fast to search. Index search and filter to box is basically an index\n> lookup (fast!). for mostly static datasets that involve a lot of\n> searching, gist is ideal.\n\nThe documentation in contrib/ didn't provide examples of how to create\nor the index or actually a the proximity search. Here's what I figured\nout to do:\n\nI added a new column as type 'cube':\n\n ALTER table zipcodes add column earth_coords cube;\n\nNext I converted the old lat/lon data I had stored in a 'point'\ntype to the new format:\n\n-- Make to get lat/lon in the right order for your data model!\n UPDATE zipcodes set earth_coords = ll_to_earth( lon_lat[1], lon_lat[0] );\n\nNow I added a GIST index on the field:\n\n CREATE index earth_coords_idx on zipcodes using gist (earth_coords);\n\nFinally, I was able to run a query, which I could see used the index (by\nchecking \"EXPLAIN ANALYZE ...\"\n\n select * from zipcodes where earth_box('(436198.322855334,\n4878562.8732218, 4085386.43843934)'::cube,16093.44) @ earth_coords;\n\nIt's also notable that the units used are meters, not miles like\ngeo_distance(). That's what the magic number of \"16093.44\" is-- 10 miles\nconverted to meters.\n\nWhen I benchmarked this query against the old geo_distance() variation,\nit was about 200 times faster (~100ms vs .5ms).\n\nHowever, my next step was to try a more \"real world\" query that involved\n a more complex where clause and a couple of table joins. So far, that\nresult is coming out /slower/ with the new approach, even though the\nindex is being used. I believe this may be cause of the additional\nresults found that are outside of the sphere, but inside the cube. This\ncauses additional rows that need processing in the joined tables.\n\nCould someone post an example of how to further refine this so the\nresults more closely match what geo_distance returns() ?\n\nAny other indexing or optimization tips would be appreciated.\n\n Mark\n",
"msg_date": "Mon, 05 Feb 2007 18:01:05 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing a geo_distance() proximity query (example and\n\tbenchmark)"
},
{
"msg_contents": "On Mon, Feb 05, 2007 at 18:01:05 -0500,\n Mark Stosberg <[email protected]> wrote:\n> \n> It's also notable that the units used are meters, not miles like\n> geo_distance(). That's what the magic number of \"16093.44\" is-- 10 miles\n> converted to meters.\n\nYou can change the earth() function in earthdistance.sql before running it\nto use some other unit other than meters:\n\n-- earth() returns the radius of the earth in meters. This is the only\n-- place you need to change things for the cube base distance functions\n-- in order to use different units (or a better value for the Earth's radius).\n\nCREATE OR REPLACE FUNCTION earth() RETURNS float8\nLANGUAGE 'sql' IMMUTABLE\nAS 'SELECT ''6378168''::float8';\n\n> However, my next step was to try a more \"real world\" query that involved\n> a more complex where clause and a couple of table joins. So far, that\n> result is coming out /slower/ with the new approach, even though the\n> index is being used. I believe this may be cause of the additional\n> results found that are outside of the sphere, but inside the cube. This\n> causes additional rows that need processing in the joined tables.\n\nThis is unlikely to be the cause. The ratio of the area of the cube to\nthe circle for small radii (compared to the radius of the earth, so that\nwe can consider thinsg flat) is 4/pi = 1.27 which shouldn't cause that\nmuch of a change.\nIt might be that you are getting a bad plan. The guess on the selectivity\nof the gist constraint may not be very good.\nSome people here may be able to tell you more if you show us explain\nanalyze output.\n",
"msg_date": "Mon, 5 Feb 2007 23:40:11 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing a geo_distance() proximity query (example and\n\tbenchmark)"
},
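As a concrete illustration of that note, redefining earth() before loading the rest of earthdistance switches the cube-based functions to statute miles; 3963.21 miles is simply the quoted 6378168-metre radius converted, not a figure from the thread:

    CREATE OR REPLACE FUNCTION earth() RETURNS float8
    LANGUAGE 'sql' IMMUTABLE
    AS 'SELECT ''3963.21''::float8';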
{
"msg_contents": "On 2/6/07, Mark Stosberg <[email protected]> wrote:\n> It's also notable that the units used are meters, not miles like\n> geo_distance(). That's what the magic number of \"16093.44\" is-- 10 miles\n> converted to meters.\n>\n> When I benchmarked this query against the old geo_distance() variation,\n> it was about 200 times faster (~100ms vs .5ms).\n>\n> However, my next step was to try a more \"real world\" query that involved\n> a more complex where clause and a couple of table joins. So far, that\n> result is coming out /slower/ with the new approach, even though the\n> index is being used. I believe this may be cause of the additional\n> results found that are outside of the sphere, but inside the cube. This\n> causes additional rows that need processing in the joined tables.\n>\n> Could someone post an example of how to further refine this so the\n> results more closely match what geo_distance returns() ?\n\nI agree with bruno...the extra time is probably not what you are\nthinking...please post explain analyze results, etc. However bruno's\nratio, while correct does not tell the whole story because you have to\nrecheck distance to every point in the returned set.\n\nThere is a small optimization you can make. The query you wrote\nautomatically excludes points within a certain box. you can also\ninclude points in the set which is the largest box that fits in the\ncircle:\n\nselect * from zipcodes\nwhere\nearth_box('(436198.322855334,\n4878562.8732218, 4085386.43843934)'::cube,inner_radius) @ earth_coords\nor\n(\nearth_box('(436198.322855334,\n4878562.8732218, 4085386.43843934)'::cube,16093.44) @ earth_coords\nand\ngeo_dist...\n);\n\nyou can also choose to omit the earth_coords column and calculate it\non the fly...there is no real performance hit for this but it does\nmake the sql a bit ugly.\n\nmerlin\n",
"msg_date": "Tue, 6 Feb 2007 18:28:29 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing a geo_distance() proximity query (example and\n\tbenchmark)"
},
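Putting the advice above together, one way to get results that match geo_distance() is to let the indexed earth_box() test prune to a bounding box and then recheck the survivors with the cube-based earth_distance(); the centre point and the 10-mile radius (16093.44 m) below are placeholder values, not figures from the thread:

    -- ll_to_earth() takes latitude first, then longitude
    SELECT z.*
    FROM zipcodes z
    WHERE earth_box(ll_to_earth(45.0, -93.0), 16093.44) @ z.earth_coords
      AND earth_distance(ll_to_earth(45.0, -93.0), z.earth_coords) <= 16093.44;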
{
"msg_contents": "Bruno Wolff III wrote:\n>\n> Some people here may be able to tell you more if you show us explain\n> analyze output.\n\nHere is my explain analyze output. Some brief context of what's going\non. The goal is to find \"Pets Near You\".\n\nWe join the pets table on the shelters table to get a zipcode, and then\njoin a shelters table with \"earth_distance\" to get the coordinates of\nthe zipcode. ( Is there any significant penalty for using a varchar vs\nan int for a joint? ).\n\nI've been investigating partial indexes for the pets table. It has about\n300,000 rows, but only about 10 are \"active\", and those are the ones we\nare care about. Queries are also frequently made on males vs females, dogs vs cats\nor specific ages, and those specific cases seem like possible candidates for partial indexes\nas well. I played with that approach some, but had trouble coming up with any thing that\nbenchmarked faster.\n\nI'm reading the explain analyze output correctly myself, nearly all of\nthe time spent is related to the 'pets' table, but I can't see what to\nabout it.\n\nHelp appreciated!\n\n Mark\n\nNested Loop (cost=11.82..29.90 rows=1 width=0) (actual time=37.601..1910.787 rows=628 loops=1)\n -> Nested Loop (cost=6.68..20.73 rows=1 width=24) (actual time=35.525..166.547 rows=1727 loops=1)\n -> Bitmap Heap Scan on pets (cost=6.68..14.71 rows=1 width=4) (actual time=35.427..125.594 rows=1727 loops=1)\n Recheck Cond: (((sex)::text = 'f'::text) AND (species_id = 1))\n Filter: ((pet_state)::text = 'available'::text)\n -> BitmapAnd (cost=6.68..6.68 rows=2 width=0) (actual time=33.398..33.398 rows=0 loops=1)\n -> Bitmap Index Scan on pets_sex_idx (cost=0.00..3.21 rows=347 width=0) (actual time=14.739..14.739 rows=35579 loops=1)\n Index Cond: ((sex)::text = 'f'::text)\n -> Bitmap Index Scan on pet_species_id_idx (cost=0.00..3.21 rows=347 width=0) (actual time=16.779..16.779 rows=48695 loops=1)\n Index Cond: (species_id = 1)\n -> Index Scan using shelters_pkey on shelters (cost=0.00..6.01 rows=1 width=28) (actual time=0.012..0.014 rows=1 loops=1727)\n Index Cond: (\"outer\".shelter_id = shelters.shelter_id)\n -> Bitmap Heap Scan on earth_distance (cost=5.14..9.15 rows=1 width=9) (actual time=0.984..0.984 rows=0 loops=1727)\n Recheck Cond: ((cube_enlarge(('(-2512840.11676572, 4646218.19036629, 3574817.21369166)'::cube)::cube, 160930.130863421::double precision, 3) @ earth_distance.earth_coords) AND\n((\"outer\".postal_code_for_joining)::text = (earth_distance.zipcode)::text))\n -> BitmapAnd (cost=5.14..5.14 rows=1 width=0) (actual time=0.978..0.978 rows=0 loops=1727)\n -> Bitmap Index Scan on earth_coords_idx (cost=0.00..2.15 rows=42 width=0) (actual time=0.951..0.951 rows=1223 loops=1727)\n Index Cond: (cube_enlarge(('(-2512840.11676572, 4646218.19036629, 3574817.21369166)'::cube)::cube, 160930.130863421::double precision, 3) @ earth_coords)\n -> Bitmap Index Scan on earth_distance_zipcode_idx (cost=0.00..2.74 rows=212 width=0) (actual time=0.015..0.015 rows=1 loops=1727)\n Index Cond: ((\"outer\".postal_code_for_joining)::text = (earth_distance.zipcode)::text)\n Total runtime: 1913.099 ms\n\n\n",
"msg_date": "Tue, 06 Feb 2007 09:39:54 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: explain analyze output for review (was: optimizing a\n\tgeo_distance()...)"
},
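One way to act on the partial-index idea raised in the message above would be something like the following; this is only a sketch using the column names visible in the plan (pet_state, species_id, sex), and which columns are worth including depends on the real query mix:

CREATE INDEX pets_available_idx
    ON pets (species_id, sex)
    WHERE pet_state = 'available';

ANALYZE pets;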
{
"msg_contents": "\nIf I'm reading this correctly, 89% of the query time is spent\ndoing an index scan of earth_coords_idx. Scanning pets is only\ntaking 6% of the total time.\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mark\nStosberg\nSent: Tuesday, February 06, 2007 8:40 AM\nTo: [email protected]\nSubject: Re: [PERFORM] explain analyze output for review (was:\noptimizing a geo_distance()...)\n\n\nBruno Wolff III wrote:\n>\n> Some people here may be able to tell you more if you show us explain\n> analyze output.\n\nHere is my explain analyze output. Some brief context of what's going\non. The goal is to find \"Pets Near You\".\n\nWe join the pets table on the shelters table to get a zipcode, and then\njoin a shelters table with \"earth_distance\" to get the coordinates of\nthe zipcode. ( Is there any significant penalty for using a varchar vs\nan int for a joint? ).\n\nI've been investigating partial indexes for the pets table. It has about\n300,000 rows, but only about 10 are \"active\", and those are the ones we\nare care about. Queries are also frequently made on males vs females,\ndogs vs cats\nor specific ages, and those specific cases seem like possible candidates\nfor partial indexes\nas well. I played with that approach some, but had trouble coming up\nwith any thing that\nbenchmarked faster.\n\nI'm reading the explain analyze output correctly myself, nearly all of\nthe time spent is related to the 'pets' table, but I can't see what to\nabout it.\n\nHelp appreciated!\n\n Mark\n\nNested Loop (cost=11.82..29.90 rows=1 width=0) (actual\ntime=37.601..1910.787 rows=628 loops=1)\n -> Nested Loop (cost=6.68..20.73 rows=1 width=24) (actual\ntime=35.525..166.547 rows=1727 loops=1)\n -> Bitmap Heap Scan on pets (cost=6.68..14.71 rows=1 width=4)\n(actual time=35.427..125.594 rows=1727 loops=1)\n Recheck Cond: (((sex)::text = 'f'::text) AND (species_id\n= 1))\n Filter: ((pet_state)::text = 'available'::text)\n -> BitmapAnd (cost=6.68..6.68 rows=2 width=0) (actual\ntime=33.398..33.398 rows=0 loops=1)\n -> Bitmap Index Scan on pets_sex_idx\n(cost=0.00..3.21 rows=347 width=0) (actual time=14.739..14.739\nrows=35579 loops=1)\n Index Cond: ((sex)::text = 'f'::text)\n -> Bitmap Index Scan on pet_species_id_idx\n(cost=0.00..3.21 rows=347 width=0) (actual time=16.779..16.779\nrows=48695 loops=1)\n Index Cond: (species_id = 1)\n -> Index Scan using shelters_pkey on shelters\n(cost=0.00..6.01 rows=1 width=28) (actual time=0.012..0.014 rows=1\nloops=1727)\n Index Cond: (\"outer\".shelter_id = shelters.shelter_id)\n -> Bitmap Heap Scan on earth_distance (cost=5.14..9.15 rows=1\nwidth=9) (actual time=0.984..0.984 rows=0 loops=1727)\n Recheck Cond: ((cube_enlarge(('(-2512840.11676572,\n4646218.19036629, 3574817.21369166)'::cube)::cube,\n160930.130863421::double precision, 3) @ earth_distance.earth_coords)\nAND\n((\"outer\".postal_code_for_joining)::text =\n(earth_distance.zipcode)::text))\n -> BitmapAnd (cost=5.14..5.14 rows=1 width=0) (actual\ntime=0.978..0.978 rows=0 loops=1727)\n -> Bitmap Index Scan on earth_coords_idx\n(cost=0.00..2.15 rows=42 width=0) (actual time=0.951..0.951 rows=1223\nloops=1727)\n Index Cond: (cube_enlarge(('(-2512840.11676572,\n4646218.19036629, 3574817.21369166)'::cube)::cube,\n160930.130863421::double precision, 3) @ earth_coords)\n -> Bitmap Index Scan on earth_distance_zipcode_idx\n(cost=0.00..2.74 rows=212 width=0) (actual time=0.015..0.015 rows=1\nloops=1727)\n Index Cond:\n((\"outer\".postal_code_for_joining)::text 
=\n(earth_distance.zipcode)::text)\n Total runtime: 1913.099 ms\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Tue, 6 Feb 2007 08:53:57 -0600",
"msg_from": "\"Adam Rich\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: explain analyze output for review (was: optimizing a\n\tgeo_distance()...)"
},
{
"msg_contents": "Mark Stosberg wrote:\n> \n> I'm reading the explain analyze output correctly myself, nearly all of\n> the time spent is related to the 'pets' table, but I can't see what to\n> about it.\n\nSomething about typing that message jarred by brain to think to try:\n\nVACUUM FULL pets;\nVACUUM ANALYZE pets;\n\nNow the new cube-based calculation benchmarks reliably faster. The old\nlat/lon systems now benchmarks at 250ms, while the the new cube-based\ncode bechmarks at 100ms, over a 50% savings!\n\nThat's good enough for me.\n\nHowever, I'm still interested advice on the other points I snuck into my\nlast message about joining with ints vs varchars and best use of partial\nindexes.\n\n Mark\n",
"msg_date": "Tue, 06 Feb 2007 09:54:59 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: explain analyze output: vacuuming made a big difference."
},
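Since stale planner statistics turned out to be the issue here, it may help to make sure autovacuum keeps them current. A minimal postgresql.conf sketch for 8.1/8.2 follows; the threshold values are illustrative assumptions, not settings recommended anywhere in this thread:

# row-level stats are required for autovacuum
stats_start_collector = on
stats_row_level = on
autovacuum = on
autovacuum_naptime = 60                # seconds between autovacuum checks (assumed value)
autovacuum_vacuum_scale_factor = 0.2   # fraction of a table changed before it is vacuumed (assumed)
autovacuum_analyze_scale_factor = 0.1  # fraction changed before it is re-analyzed (assumed)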
{
"msg_contents": "On Tue, Feb 06, 2007 at 09:39:54 -0500,\n Mark Stosberg <[email protected]> wrote:\n> \n> I've been investigating partial indexes for the pets table. It has about\n> 300,000 rows, but only about 10 are \"active\", and those are the ones we\n> are care about. Queries are also frequently made on males vs females, dogs vs cats\n\nIt probably won't pay to make partial indexes on sex or species (at least\nfor the popular ones), as you aren't likely to save enough by eliminating only\nhalf the cases to make up for maintaining another index. A partial index for\nactive rows probably does make sense.\n\n> or specific ages, and those specific cases seem like possible candidates for partial indexes\n> as well. I played with that approach some, but had trouble coming up with any thing that\n> benchmarked faster.\n",
"msg_date": "Tue, 6 Feb 2007 09:45:08 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: explain analyze output for review (was: optimizing a\n\tgeo_distance()...)"
},
{
"msg_contents": "Hello,\n\nI wanted to share something else I learned in my proximity search work.\n One my requirements is to order by the distance that matches are found\nfrom the center point.\n\nWhen did this using earth_distance(), the benefit of the earth_box()\ntechnique over the old geo_distance became minimal as I approached a\n250mi radius.\n\nSwitching to sorting by cube_distance() offered a huge benefit, allowing\nthe earth_distance() query to run in about 100ms vs 300ms for the\ngeo_distance() equivalent.\n\nI checked the results that cube_distance() produced versus\nearth_distance(). cube_distance() is always (not surprisingly) a little\nsmaller, but the difference seems only grows to about a mile for a 250\nmile radius. That's an acceptable margin of error for this application,\nand may be for others as well.\n\n Mark\n\n\n",
"msg_date": "Wed, 07 Feb 2007 16:30:16 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "tip: faster sorting for proximity queries by using cube_distance()"
}
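A sketch of the ordering described above, reusing the earth_distance table, earth_coords column, and a literal search center from earlier in the thread (402336 is an assumed 250 miles expressed in meters):

SELECT zipcode
FROM earth_distance
WHERE earth_box('(436198.322855334, 4878562.8732218, 4085386.43843934)'::cube, 402336) @ earth_coords
ORDER BY cube_distance('(436198.322855334, 4878562.8732218, 4085386.43843934)'::cube, earth_coords);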
] |
[
{
"msg_contents": "I have a pl/pgsql function that is inserting 200,000 records for\ntesting purposes. What is the expected time frame for this operation\non a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor is\na 2ghz cpu. So far I've been sitting here for about 2 million ms\nwaiting for it to complete, and I'm not sure how many inserts postgres\nis doing per second.\n\nregards,\nkaren\n\n",
"msg_date": "5 Feb 2007 16:35:59 -0800",
"msg_from": "\"Karen Hill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How long should it take to insert 200,000 records?"
},
{
"msg_contents": "\"Karen Hill\" <[email protected]> writes:\n> I have a pl/pgsql function that is inserting 200,000 records for\n> testing purposes. What is the expected time frame for this operation\n> on a pc with 1/2 a gig of ram and a 7200 RPM disk?\n\nI think you have omitted a bunch of relevant facts. Bare INSERT is\nreasonably quick:\n\nregression=# create table foo (f1 int);\nCREATE TABLE\nregression=# \\timing\nTiming is on.\nregression=# insert into foo select x from generate_series(1,200000) x;\nINSERT 0 200000\nTime: 5158.564 ms\nregression=# \n\n(this on a not-very-fast machine) but if you weigh it down with a ton\nof index updates, foreign key checks, etc, it could get slow ...\nalso you haven't mentioned what else that plpgsql function is doing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Feb 2007 00:33:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records? "
},
{
"msg_contents": "On Tue, 2007-02-06 at 01:35, Karen Hill wrote:\n> [snip] So far I've been sitting here for about 2 million ms\n> waiting for it to complete, and I'm not sure how many inserts postgres\n> is doing per second.\n\nOne way is to run analyze verbose on the target table and see how many\npages it has, and then do it again 1 minute later and check how many\npages it grew. Then multiply the page increase by the record per page\nratio you can get from the same analyze's output, and you'll get an\nestimated growth rate. Of course this will only work if you didn't have\nlots of free space in the table to start with... if you do have lots of\nfree space, you still can estimate the growth based on the analyze\nresults, but it will be more complicated.\n\n\nIn any case, it would be very nice to have more tools to attach to\nrunning queries and see how they are doing... starting with what exactly\nthey are doing (are they in RI checks maybe ?), the actual execution\nplan they are using, how much they've done from their work... it would\nhelp a lot debugging performance problems.\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Tue, 06 Feb 2007 10:33:56 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "On Mon, 2007-02-05 at 18:35, Karen Hill wrote:\n> I have a pl/pgsql function that is inserting 200,000 records for\n> testing purposes. What is the expected time frame for this operation\n> on a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor is\n> a 2ghz cpu. So far I've been sitting here for about 2 million ms\n> waiting for it to complete, and I'm not sure how many inserts postgres\n> is doing per second.\n\nThat really depends. Doing 200,000 inserts as individual transactions\nwill be fairly slow. Since PostgreSQL generally runs in autocommit\nmode, this means that if you didn't expressly begin a transaction, you\nare in fact inserting each row as a transaction. i.e. this:\n\nfor (i=0;i<200000;i++){\n insert into table abc values ('test',123);\n}\n\nIs functionally equivalent to:\n\nfor (i=0;i<200000;i++){\n begin;\n insert into table abc values ('test',123);\n commit;\n}\n\nHowever, you can add begin / end pairs outside the loop like this:\n\nbegin;\nfor (i=0;i<200000;i++){\n insert into table abc values ('test',123);\n}\ncommit;\n\nand it should run much faster.\n",
"msg_date": "Tue, 06 Feb 2007 10:14:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "On 2/6/07, Scott Marlowe <[email protected]> wrote:\n> On Mon, 2007-02-05 at 18:35, Karen Hill wrote:\n> > I have a pl/pgsql function that is inserting 200,000 records for\n> > testing purposes. What is the expected time frame for this operation\n> > on a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor is\n> > a 2ghz cpu. So far I've been sitting here for about 2 million ms\n> > waiting for it to complete, and I'm not sure how many inserts postgres\n> > is doing per second.\n>\n> That really depends. Doing 200,000 inserts as individual transactions\n> will be fairly slow. Since PostgreSQL generally runs in autocommit\n> mode, this means that if you didn't expressly begin a transaction, you\n> are in fact inserting each row as a transaction. i.e. this:\n\nI think OP is doing insertion inside a pl/pgsql loop...transaction is\nimplied here. For creating test data, generate_series or\ninsert...select is obviously the way to go. If that's unsuitable for\nsome reason, I would suggest RAISE NOTICE every n records so you can\nmonitor the progress and make sure something is not binding up in a\nlock or something like that. Be especially wary of degrading\nperformance during the process.\n\nAnother common problem with poor insert performance is a RI check to\nan un-indexed column. In-transaction insert performance should be\nbetween 1k and 10k records/second in normal situations, meaning if you\nhaven't inserted 1 million records inside of an hour something else is\ngoing on.\n\nGenerally, insertion performance from fastest to slowest is:\n* insert select generate_series...\n* insert select\n* copy\n* insert (),(),()[...] (at least 10 or preferably 100 insertions)\n* begin, prepare, n prepared inserts executed, commit\n* begin, n inserts, commit\n* plpgsql loop, single inserts\n* n inserts outside of transaction.\n\nThe order of which is faster might not be absolutely set in stone\n(copy might beat insert select for example), but the top 4 methods\nwill always be much faster than the bottom 4.\n\nmerlin\n",
"msg_date": "Tue, 6 Feb 2007 11:40:01 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
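For reference, minimal examples of the faster bulk-load styles in the list above; the table and column names are made up for illustration:

-- INSERT ... SELECT from generate_series
INSERT INTO abc (val) SELECT x FROM generate_series(1, 200000) x;

-- COPY from a server-readable file (or \copy from psql for a client-side file)
COPY abc (val) FROM '/tmp/abc.dat';

-- multi-row VALUES (available in 8.2)
INSERT INTO abc (val) VALUES (1), (2), (3);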
{
"msg_contents": "On Tue, 2007-02-06 at 10:40, Merlin Moncure wrote:\n> On 2/6/07, Scott Marlowe <[email protected]> wrote:\n> > On Mon, 2007-02-05 at 18:35, Karen Hill wrote:\n> > > I have a pl/pgsql function that is inserting 200,000 records for\n> > > testing purposes. What is the expected time frame for this operation\n> > > on a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor is\n> > > a 2ghz cpu. So far I've been sitting here for about 2 million ms\n> > > waiting for it to complete, and I'm not sure how many inserts postgres\n> > > is doing per second.\n> >\n> > That really depends. Doing 200,000 inserts as individual transactions\n> > will be fairly slow. Since PostgreSQL generally runs in autocommit\n> > mode, this means that if you didn't expressly begin a transaction, you\n> > are in fact inserting each row as a transaction. i.e. this:\n> \n> I think OP is doing insertion inside a pl/pgsql loop...transaction is\n> implied here. \n\nYeah, I noticed that about 10 seconds after hitting send... :)\n",
"msg_date": "Tue, 06 Feb 2007 10:55:53 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "On 2/6/07, Scott Marlowe <[email protected]> wrote:\n> On Tue, 2007-02-06 at 10:40, Merlin Moncure wrote:\n> > On 2/6/07, Scott Marlowe <[email protected]> wrote:\n> > > On Mon, 2007-02-05 at 18:35, Karen Hill wrote:\n> > > > I have a pl/pgsql function that is inserting 200,000 records for\n> > > > testing purposes. What is the expected time frame for this operation\n> > > > on a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor is\n> > > > a 2ghz cpu. So far I've been sitting here for about 2 million ms\n> > > > waiting for it to complete, and I'm not sure how many inserts postgres\n> > > > is doing per second.\n> > >\n> > > That really depends. Doing 200,000 inserts as individual transactions\n> > > will be fairly slow. Since PostgreSQL generally runs in autocommit\n> > > mode, this means that if you didn't expressly begin a transaction, you\n> > > are in fact inserting each row as a transaction. i.e. this:\n> >\n> > I think OP is doing insertion inside a pl/pgsql loop...transaction is\n> > implied here.\n>\n> Yeah, I noticed that about 10 seconds after hitting send... :)\n\nactually, I get the stupid award also because RI check to unindexed\ncolumn is not possible :) (this haunts deletes, not inserts).\n\nmerlin\n",
"msg_date": "Tue, 6 Feb 2007 12:01:00 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "On Tue, 2007-02-06 at 12:01 -0500, Merlin Moncure wrote:\n> On 2/6/07, Scott Marlowe <[email protected]> wrote:\n> > On Tue, 2007-02-06 at 10:40, Merlin Moncure wrote:\n> > > On 2/6/07, Scott Marlowe <[email protected]> wrote:\n> > > > On Mon, 2007-02-05 at 18:35, Karen Hill wrote:\n> > > > > I have a pl/pgsql function that is inserting 200,000 records for\n> > > > > testing purposes. What is the expected time frame for this operation\n> > > > > on a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor is\n> > > > > a 2ghz cpu. So far I've been sitting here for about 2 million ms\n> > > > > waiting for it to complete, and I'm not sure how many inserts postgres\n> > > > > is doing per second.\n> > > >\n> > > > That really depends. Doing 200,000 inserts as individual transactions\n> > > > will be fairly slow. Since PostgreSQL generally runs in autocommit\n> > > > mode, this means that if you didn't expressly begin a transaction, you\n> > > > are in fact inserting each row as a transaction. i.e. this:\n> > >\n> > > I think OP is doing insertion inside a pl/pgsql loop...transaction is\n> > > implied here.\n> >\n> > Yeah, I noticed that about 10 seconds after hitting send... :)\n> \n> actually, I get the stupid award also because RI check to unindexed\n> column is not possible :) (this haunts deletes, not inserts).\n\nSure it's possible:\n\nCREATE TABLE parent (col1 int4);\n-- insert many millions of rows into parent\nCREATE TABLE child (col1 int4 REFERENCES parent(col1));\n-- insert many millions of rows into child, very very slowly.\n\n\n- Mark Lewis\n\n\n",
"msg_date": "Tue, 06 Feb 2007 10:31:26 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "On Tue, Feb 06, 2007 at 10:31:26 -0800,\n Mark Lewis <[email protected]> wrote:\n> \n> Sure it's possible:\n> \n> CREATE TABLE parent (col1 int4);\n> -- insert many millions of rows into parent\n> CREATE TABLE child (col1 int4 REFERENCES parent(col1));\n> -- insert many millions of rows into child, very very slowly.\n\nI don't think Postgres allows this. You don't have to have an index in the\nchild table, but do in the parent table.\nQuote from http://developer.postgresql.org/pgdocs/postgres/sql-createtable.html:\nThe referenced columns must be the columns of a unique or primary key\nconstraint in the referenced table.\n",
"msg_date": "Tue, 6 Feb 2007 12:37:52 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
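A quick illustration of the rule Bruno quotes from the documentation; the error text shown is approximately what PostgreSQL reports:

CREATE TABLE parent (col1 int4);
CREATE TABLE child (col1 int4 REFERENCES parent(col1));
-- ERROR:  there is no unique constraint matching given keys for referenced table "parent"

ALTER TABLE parent ADD PRIMARY KEY (col1);                 -- adds the backing unique index
CREATE TABLE child (col1 int4 REFERENCES parent(col1));    -- now accepted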
{
"msg_contents": "On 2/6/07, Mark Lewis <[email protected]> wrote:\n> > actually, I get the stupid award also because RI check to unindexed\n> > column is not possible :) (this haunts deletes, not inserts).\n>\n> Sure it's possible:\n>\n> CREATE TABLE parent (col1 int4);\n> -- insert many millions of rows into parent\n> CREATE TABLE child (col1 int4 REFERENCES parent(col1));\n> -- insert many millions of rows into child, very very slowly.\n\nthe database will not allow you to create a RI link out unless the\nparent table has a primary key/unique constraint, which the database\nbacks with an index....and you can't even trick it afterwards by\ndropping the constraint.\n\nit's the other direction, when you cascade forwards when you can have\na problem. this is most common with a delete, but can also happen on\nan update of a table's primary key with child tables referencing it.\n\nmerlin\n",
"msg_date": "Tue, 6 Feb 2007 14:06:32 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "On Tue, 2007-02-06 at 14:06 -0500, Merlin Moncure wrote:\n> On 2/6/07, Mark Lewis <[email protected]> wrote:\n> > > actually, I get the stupid award also because RI check to unindexed\n> > > column is not possible :) (this haunts deletes, not inserts).\n> >\n> > Sure it's possible:\n> >\n> > CREATE TABLE parent (col1 int4);\n> > -- insert many millions of rows into parent\n> > CREATE TABLE child (col1 int4 REFERENCES parent(col1));\n> > -- insert many millions of rows into child, very very slowly.\n> \n> the database will not allow you to create a RI link out unless the\n> parent table has a primary key/unique constraint, which the database\n> backs with an index....and you can't even trick it afterwards by\n> dropping the constraint.\n> \n> it's the other direction, when you cascade forwards when you can have\n> a problem. this is most common with a delete, but can also happen on\n> an update of a table's primary key with child tables referencing it.\n> \n\nHmmm, should check my SQL before hitting send I guess. Well, at least\nyou no longer have to wear the stupid award, Merlin :)\n\n-- Mark Lewis\n",
"msg_date": "Tue, 06 Feb 2007 11:10:53 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "On Feb 5, 9:33 pm, [email protected] (Tom Lane) wrote:\n> \"Karen Hill\" <[email protected]> writes:\n> > I have a pl/pgsql function that is inserting 200,000 records for\n> > testing purposes. What is the expected time frame for this operation\n> > on a pc with 1/2 a gig of ram and a 7200 RPM disk?\n>\n> I think you have omitted a bunch of relevant facts. Bare INSERT is\n> reasonably quick:\n>\n> regression=# create table foo (f1 int);\n> CREATE TABLE\n> regression=# \\timing\n> Timing is on.\n> regression=# insert into foo select x from generate_series(1,200000) x;\n> INSERT 0 200000\n> Time: 5158.564 ms\n> regression=#\n>\n> (this on a not-very-fast machine) but if you weigh it down with a ton\n> of index updates, foreign key checks, etc, it could get slow ...\n> also you haven't mentioned what else that plpgsql function is doing.\n>\n\nThe postgres version is 8.2.1 on Windows. The pl/pgsql function is\ninserting to an updatable view (basically two tables).\n\nCREATE TABLE foo1\n(\n\n\n) ;\n\nCREATE TABLE foo2\n(\n\n);\n\nCREATE VIEW viewfoo AS\n(\n\n);\nCREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD\n(\n\n);\n\nCREATE OR REPLACE FUNCTION functionFoo() RETURNS VOID AS $$\nBEGIN\nFOR i in 1..200000 LOOP\nINSERT INTO viewfoo (x) VALUES (x);\nEND LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\n\n",
"msg_date": "6 Feb 2007 13:39:16 -0800",
"msg_from": "\"Karen Hill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "Karen Hill wrote:\n\n> \n> The postgres version is 8.2.1 on Windows. The pl/pgsql function is\n> inserting to an updatable view (basically two tables).\n> \n> CREATE TABLE foo1\n> (\n> \n> \n> ) ;\n> \n> CREATE TABLE foo2\n> (\n> \n> );\n> \n> CREATE VIEW viewfoo AS\n> (\n> \n> );\n> CREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD\n> (\n> \n> );\n> \n> CREATE OR REPLACE FUNCTION functionFoo() RETURNS VOID AS $$\n> BEGIN\n> FOR i in 1..200000 LOOP\n> INSERT INTO viewfoo (x) VALUES (x);\n> END LOOP;\n> END;\n> $$ LANGUAGE plpgsql;\n> \n\n\nSorry - but we probably need *still* more detail! - the definition of \nviewfoo is likely to be critical. For instance a simplified variant of \nyour setup does 200000 inserts in 5s on my PIII tualatin machine:\n\nCREATE TABLE foo1 (x INTEGER);\n\nCREATE VIEW viewfoo AS SELECT * FROM foo1;\n\nCREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD\n(\n INSERT INTO foo1 VALUES (new.x);\n)\n\nCREATE OR REPLACE FUNCTION functionFoo() RETURNS VOID AS $$\nBEGIN\n FOR i in 1..200000 LOOP\n INSERT INTO viewfoo (x) VALUES (i);\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\n\npostgres=# \\timing\npostgres=# SELECT functionFoo() ;\n functionfoo\n-------------\n\n(1 row)\n\nTime: 4659.477 ms\n\npostgres=# SELECT count(*) FROM viewfoo;\n count\n--------\n 200000\n(1 row)\n\nCheers\n\nMark\n",
"msg_date": "Wed, 07 Feb 2007 15:41:34 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records?"
},
{
"msg_contents": "\"Karen Hill\" <[email protected]> writes:\n> On Feb 5, 9:33 pm, [email protected] (Tom Lane) wrote:\n>> I think you have omitted a bunch of relevant facts.\n\n> The postgres version is 8.2.1 on Windows. The pl/pgsql function is\n> inserting to an updatable view (basically two tables).\n> [ sketch of schema ]\n\nI think the problem is probably buried in the parts you left out. Can\nyou show us the full schemas for those tables, as well as the rule\ndefinition? The plpgsql function itself can certainly go a lot faster\nthan what you indicated. On my slowest active machine:\n\nregression=# create table viewfoo(x int);\nCREATE TABLE\nregression=# CREATE OR REPLACE FUNCTION functionFoo() RETURNS VOID AS $$\nBEGIN\nFOR i in 1..200000 LOOP\nINSERT INTO viewfoo (x) VALUES (i);\nEND LOOP;\nEND;\n$$ LANGUAGE plpgsql;\nCREATE FUNCTION\nregression=# \\timing\nTiming is on.\nregression=# select functionFoo();\n functionfoo \n-------------\n \n(1 row)\n\nTime: 16939.667 ms\nregression=# \n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Feb 2007 22:07:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000 records? "
},
{
"msg_contents": "unless you specify otherwiise, every insert carries its own transaction\nbegin/commit. That's a lot of overhead for a single insert, no? Why\nnot use a single transaction for, say, each 1000 inserts? That would\nstrike a nice balance of security with efficiency.\n\npseudo code for the insert:\n\nBegin Transaction;\nFOR i in 1..200000 LOOP\n INSERT INTO viewfoo (x) VALUES (x);\n IF i % 1000 = 0 THEN\n Commit Transaction;\n Begin Transaction;\n END IF;\nEND LOOP;\nCommit Transaction;\nEnd\n\n\nThis approach should speed up things dramatically. \n\n\n \n>>> \"Karen Hill\" <[email protected]> 2/6/2007 2:39 PM >>>\nOn Feb 5, 9:33 pm, [email protected] (Tom Lane) wrote:\n> \"Karen Hill\" <[email protected]> writes:\n> > I have a pl/pgsql function that is inserting 200,000 records for\n> > testing purposes. What is the expected time frame for this\noperation\n> > on a pc with 1/2 a gig of ram and a 7200 RPM disk?\n>\n> I think you have omitted a bunch of relevant facts. Bare INSERT is\n> reasonably quick:\n>\n> regression=# create table foo (f1 int);\n> CREATE TABLE\n> regression=# \\timing\n> Timing is on.\n> regression=# insert into foo select x from generate_series(1,200000)\nx;\n> INSERT 0 200000\n> Time: 5158.564 ms\n> regression=#\n>\n> (this on a not-very-fast machine) but if you weigh it down with a\nton\n> of index updates, foreign key checks, etc, it could get slow ...\n> also you haven't mentioned what else that plpgsql function is doing.\n>\n\nThe postgres version is 8.2.1 on Windows. The pl/pgsql function is\ninserting to an updatable view (basically two tables).\n\nCREATE TABLE foo1\n(\n\n\n) ;\n\nCREATE TABLE foo2\n(\n\n);\n\nCREATE VIEW viewfoo AS\n(\n\n);\nCREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD\n(\n\n);\n\nCREATE OR REPLACE FUNCTION functionFoo() RETURNS VOID AS $$\nBEGIN\nFOR i in 1..200000 LOOP\nINSERT INTO viewfoo (x) VALUES (x);\nEND LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\n\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Wed, 14 Feb 2007 15:24:00 -0700",
"msg_from": "\"Lou O'Quin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How long should it take to insert 200,000\n\trecords?"
}
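As earlier replies in this thread note, a plpgsql function body runs inside a single transaction, so commit-every-N batching like the pseudo code above has to be driven from the client instead. A minimal sketch in plain SQL, assuming the viewfoo view from the quoted message:

BEGIN;
INSERT INTO viewfoo (x) SELECT g FROM generate_series(1, 1000) g;
COMMIT;

BEGIN;
INSERT INTO viewfoo (x) SELECT g FROM generate_series(1001, 2000) g;
COMMIT;

-- ... continue in 1000-row batches up to 200000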
] |
[
{
"msg_contents": "Aqua data studio has a graphical explain built into it. It supports most\nrdbms including postgres. Its what I use to performance tune DB2.\nhttp://www.aquafold.com/\n\n\nIndex ANDing would suit you here\n\nYou have 3 tables with 3 possible indexes and it sounds like the query\nis doing table scans where it needs to use indexes. \n\nIf your version of postgres does not support index anding another way\naround this is to create a view and then index the view (if indexing\nviews are possible in postgres)\n\nAnother possible solution is inserting your data into a single table and\nthen indexing that table. The initial cost is consuming however if you\nuse triggers on your parent tables to automatically insert data into the\nnew table it becomes almost hands free. \n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mark\nStosberg\nSent: Tuesday, February 06, 2007 8:40 AM\nTo: [email protected]\nSubject: Re: [PERFORM] explain analyze output for review (was:\noptimizing a geo_distance()...)\n\nBruno Wolff III wrote:\n>\n> Some people here may be able to tell you more if you show us explain\n> analyze output.\n\nHere is my explain analyze output. Some brief context of what's going\non. The goal is to find \"Pets Near You\".\n\nWe join the pets table on the shelters table to get a zipcode, and then\njoin a shelters table with \"earth_distance\" to get the coordinates of\nthe zipcode. ( Is there any significant penalty for using a varchar vs\nan int for a joint? ).\n\nI've been investigating partial indexes for the pets table. It has about\n300,000 rows, but only about 10 are \"active\", and those are the ones we\nare care about. Queries are also frequently made on males vs females,\ndogs vs cats\nor specific ages, and those specific cases seem like possible candidates\nfor partial indexes\nas well. 
I played with that approach some, but had trouble coming up\nwith any thing that\nbenchmarked faster.\n\nI'm reading the explain analyze output correctly myself, nearly all of\nthe time spent is related to the 'pets' table, but I can't see what to\nabout it.\n\nHelp appreciated!\n\n Mark\n\nNested Loop (cost=11.82..29.90 rows=1 width=0) (actual\ntime=37.601..1910.787 rows=628 loops=1)\n -> Nested Loop (cost=6.68..20.73 rows=1 width=24) (actual\ntime=35.525..166.547 rows=1727 loops=1)\n -> Bitmap Heap Scan on pets (cost=6.68..14.71 rows=1 width=4)\n(actual time=35.427..125.594 rows=1727 loops=1)\n Recheck Cond: (((sex)::text = 'f'::text) AND (species_id\n= 1))\n Filter: ((pet_state)::text = 'available'::text)\n -> BitmapAnd (cost=6.68..6.68 rows=2 width=0) (actual\ntime=33.398..33.398 rows=0 loops=1)\n -> Bitmap Index Scan on pets_sex_idx\n(cost=0.00..3.21 rows=347 width=0) (actual time=14.739..14.739\nrows=35579 loops=1)\n Index Cond: ((sex)::text = 'f'::text)\n -> Bitmap Index Scan on pet_species_id_idx\n(cost=0.00..3.21 rows=347 width=0) (actual time=16.779..16.779\nrows=48695 loops=1)\n Index Cond: (species_id = 1)\n -> Index Scan using shelters_pkey on shelters\n(cost=0.00..6.01 rows=1 width=28) (actual time=0.012..0.014 rows=1\nloops=1727)\n Index Cond: (\"outer\".shelter_id = shelters.shelter_id)\n -> Bitmap Heap Scan on earth_distance (cost=5.14..9.15 rows=1\nwidth=9) (actual time=0.984..0.984 rows=0 loops=1727)\n Recheck Cond: ((cube_enlarge(('(-2512840.11676572,\n4646218.19036629, 3574817.21369166)'::cube)::cube,\n160930.130863421::double precision, 3) @ earth_distance.earth_coords)\nAND\n((\"outer\".postal_code_for_joining)::text =\n(earth_distance.zipcode)::text))\n -> BitmapAnd (cost=5.14..5.14 rows=1 width=0) (actual\ntime=0.978..0.978 rows=0 loops=1727)\n -> Bitmap Index Scan on earth_coords_idx\n(cost=0.00..2.15 rows=42 width=0) (actual time=0.951..0.951 rows=1223\nloops=1727)\n Index Cond: (cube_enlarge(('(-2512840.11676572,\n4646218.19036629, 3574817.21369166)'::cube)::cube,\n160930.130863421::double precision, 3) @ earth_coords)\n -> Bitmap Index Scan on earth_distance_zipcode_idx\n(cost=0.00..2.74 rows=212 width=0) (actual time=0.015..0.015 rows=1\nloops=1727)\n Index Cond:\n((\"outer\".postal_code_for_joining)::text =\n(earth_distance.zipcode)::text)\n Total runtime: 1913.099 ms\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\nPRIVILEGED AND CONFIDENTIAL\nThis email transmission contains privileged and confidential information intended only for the use of the individual or entity named above. If the reader of the email is not the intended recipient or the employee or agent responsible for delivering it to the intended recipient, you are hereby notified that any use, dissemination or copying of this email transmission is strictly prohibited by the sender. If you have received this transmission in error, please delete the email and immediately notify the sender via the email return address or mailto:[email protected]. Thank you.\n\n\n\n\n",
"msg_date": "Tue, 6 Feb 2007 08:50:29 -0600",
"msg_from": "\"Hiltibidal, Robert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: explain analyze output for review (was:\n\toptimizing a geo_distance()...)"
}
] |
[
{
"msg_contents": "what is the size of that index?\n\nHave you considered breaking the index into components, ie more than one\nindex on the table?\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adam Rich\nSent: Tuesday, February 06, 2007 8:54 AM\nTo: 'Mark Stosberg'; [email protected]\nSubject: Re: [PERFORM] explain analyze output for review (was:\noptimizing a geo_distance()...)\n\n\nIf I'm reading this correctly, 89% of the query time is spent\ndoing an index scan of earth_coords_idx. Scanning pets is only\ntaking 6% of the total time.\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mark\nStosberg\nSent: Tuesday, February 06, 2007 8:40 AM\nTo: [email protected]\nSubject: Re: [PERFORM] explain analyze output for review (was:\noptimizing a geo_distance()...)\n\n\nBruno Wolff III wrote:\n>\n> Some people here may be able to tell you more if you show us explain\n> analyze output.\n\nHere is my explain analyze output. Some brief context of what's going\non. The goal is to find \"Pets Near You\".\n\nWe join the pets table on the shelters table to get a zipcode, and then\njoin a shelters table with \"earth_distance\" to get the coordinates of\nthe zipcode. ( Is there any significant penalty for using a varchar vs\nan int for a joint? ).\n\nI've been investigating partial indexes for the pets table. It has about\n300,000 rows, but only about 10 are \"active\", and those are the ones we\nare care about. Queries are also frequently made on males vs females,\ndogs vs cats\nor specific ages, and those specific cases seem like possible candidates\nfor partial indexes\nas well. I played with that approach some, but had trouble coming up\nwith any thing that\nbenchmarked faster.\n\nI'm reading the explain analyze output correctly myself, nearly all of\nthe time spent is related to the 'pets' table, but I can't see what to\nabout it.\n\nHelp appreciated!\n\n Mark\n\nNested Loop (cost=11.82..29.90 rows=1 width=0) (actual\ntime=37.601..1910.787 rows=628 loops=1)\n -> Nested Loop (cost=6.68..20.73 rows=1 width=24) (actual\ntime=35.525..166.547 rows=1727 loops=1)\n -> Bitmap Heap Scan on pets (cost=6.68..14.71 rows=1 width=4)\n(actual time=35.427..125.594 rows=1727 loops=1)\n Recheck Cond: (((sex)::text = 'f'::text) AND (species_id\n= 1))\n Filter: ((pet_state)::text = 'available'::text)\n -> BitmapAnd (cost=6.68..6.68 rows=2 width=0) (actual\ntime=33.398..33.398 rows=0 loops=1)\n -> Bitmap Index Scan on pets_sex_idx\n(cost=0.00..3.21 rows=347 width=0) (actual time=14.739..14.739\nrows=35579 loops=1)\n Index Cond: ((sex)::text = 'f'::text)\n -> Bitmap Index Scan on pet_species_id_idx\n(cost=0.00..3.21 rows=347 width=0) (actual time=16.779..16.779\nrows=48695 loops=1)\n Index Cond: (species_id = 1)\n -> Index Scan using shelters_pkey on shelters\n(cost=0.00..6.01 rows=1 width=28) (actual time=0.012..0.014 rows=1\nloops=1727)\n Index Cond: (\"outer\".shelter_id = shelters.shelter_id)\n -> Bitmap Heap Scan on earth_distance (cost=5.14..9.15 rows=1\nwidth=9) (actual time=0.984..0.984 rows=0 loops=1727)\n Recheck Cond: ((cube_enlarge(('(-2512840.11676572,\n4646218.19036629, 3574817.21369166)'::cube)::cube,\n160930.130863421::double precision, 3) @ earth_distance.earth_coords)\nAND\n((\"outer\".postal_code_for_joining)::text =\n(earth_distance.zipcode)::text))\n -> BitmapAnd (cost=5.14..5.14 rows=1 width=0) (actual\ntime=0.978..0.978 rows=0 loops=1727)\n -> Bitmap Index Scan on earth_coords_idx\n(cost=0.00..2.15 
rows=42 width=0) (actual time=0.951..0.951 rows=1223\nloops=1727)\n Index Cond: (cube_enlarge(('(-2512840.11676572,\n4646218.19036629, 3574817.21369166)'::cube)::cube,\n160930.130863421::double precision, 3) @ earth_coords)\n -> Bitmap Index Scan on earth_distance_zipcode_idx\n(cost=0.00..2.74 rows=212 width=0) (actual time=0.015..0.015 rows=1\nloops=1727)\n Index Cond:\n((\"outer\".postal_code_for_joining)::text =\n(earth_distance.zipcode)::text)\n Total runtime: 1913.099 ms\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\nPRIVILEGED AND CONFIDENTIAL\nThis email transmission contains privileged and confidential information intended only for the use of the individual or entity named above. If the reader of the email is not the intended recipient or the employee or agent responsible for delivering it to the intended recipient, you are hereby notified that any use, dissemination or copying of this email transmission is strictly prohibited by the sender. If you have received this transmission in error, please delete the email and immediately notify the sender via the email return address or mailto:[email protected]. Thank you.\n\n\n\n\n",
"msg_date": "Tue, 6 Feb 2007 09:04:54 -0600",
"msg_from": "\"Hiltibidal, Robert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: explain analyze output for review (was:\n\toptimizing a geo_distance()...)"
}
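To answer the size question directly, the index (and table) sizes can be read from the catalog; a sketch using functions available in 8.1 and later:

SELECT pg_size_pretty(pg_relation_size('earth_coords_idx'));
SELECT pg_size_pretty(pg_relation_size('earth_distance'));  -- the underlying table, for comparison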
] |
[
{
"msg_contents": "What is your row size?\n\nHave you checked to see what your current inserts per second are?\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Scott\nMarlowe\nSent: Tuesday, February 06, 2007 10:56 AM\nTo: Merlin Moncure\nCc: Karen Hill; [email protected]\nSubject: Re: [PERFORM] How long should it take to insert 200,000\nrecords?\n\nOn Tue, 2007-02-06 at 10:40, Merlin Moncure wrote:\n> On 2/6/07, Scott Marlowe <[email protected]> wrote:\n> > On Mon, 2007-02-05 at 18:35, Karen Hill wrote:\n> > > I have a pl/pgsql function that is inserting 200,000 records for\n> > > testing purposes. What is the expected time frame for this\noperation\n> > > on a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor\nis\n> > > a 2ghz cpu. So far I've been sitting here for about 2 million ms\n> > > waiting for it to complete, and I'm not sure how many inserts\npostgres\n> > > is doing per second.\n> >\n> > That really depends. Doing 200,000 inserts as individual\ntransactions\n> > will be fairly slow. Since PostgreSQL generally runs in autocommit\n> > mode, this means that if you didn't expressly begin a transaction,\nyou\n> > are in fact inserting each row as a transaction. i.e. this:\n> \n> I think OP is doing insertion inside a pl/pgsql loop...transaction is\n> implied here. \n\nYeah, I noticed that about 10 seconds after hitting send... :)\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\nPRIVILEGED AND CONFIDENTIAL\nThis email transmission contains privileged and confidential information intended only for the use of the individual or entity named above. If the reader of the email is not the intended recipient or the employee or agent responsible for delivering it to the intended recipient, you are hereby notified that any use, dissemination or copying of this email transmission is strictly prohibited by the sender. If you have received this transmission in error, please delete the email and immediately notify the sender via the email return address or mailto:[email protected]. Thank you.\n\n\n\n\n",
"msg_date": "Tue, 6 Feb 2007 11:04:46 -0600",
"msg_from": "\"Hiltibidal, Robert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How long should it take to insert 200,000\n records?"
}
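One way to measure the current insert rate (assuming stats_row_level is enabled) is to sample the cumulative insert counter twice and divide by the elapsed time; the table name here is a placeholder:

SELECT relname, n_tup_ins FROM pg_stat_all_tables WHERE relname = 'foo';
-- wait a known interval, run it again, and divide the difference by the seconds elapsed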
] |
[
{
"msg_contents": "Hi,\n\nWe are developing one website using ChessD in red hat environment. ChessD is\na open source which requires Postgres as its back end. When we tried to\ninstall ChessD we got the error\n\nMissing postgresql/libpq-fe.h, is libpq-dev installed?.\n\nBut this chess D is works fine in Debian and We need a libpq-dev package for\nredhat which is equivalent to debian.\n\nWe need you valuable feed back which will help us lot to sort out this set\nback.\n-- \nRegards,\n\nMuruga\nChess Development Team,\nSilicon Oyster Technologies,\nChennai-84.\n\nHi,\n\nWe are developing one website using ChessD in red hat environment.\nChessD is a open source which requires Postgres as its back end. When\nwe tried to install ChessD we got the error\n\nMissing postgresql/libpq-fe.h, is libpq-dev installed?.\n\nBut this chess D is works fine in Debian and We need a libpq-dev package for redhat which is equivalent to debian.\n\nWe need you valuable feed back which will help us lot to sort out this set back.\n-- Regards,MurugaChess Development Team,Silicon Oyster Technologies,Chennai-84.",
"msg_date": "Wed, 7 Feb 2007 19:45:02 +0530",
"msg_from": "\"Muruganantham M\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help Needed"
},
{
"msg_contents": "Muruganantham M wrote:\n> Hi,\n> \n> We are developing one website using ChessD in red hat environment. \n> ChessD is\n> a open source which requires Postgres as its back end. When we tried to\n> install ChessD we got the error\n> \n> Missing postgresql/libpq-fe.h, is libpq-dev installed?.\n\nHave you tried looking for some postgresql-client or postgresql-dev rpm \nfiles?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 07 Feb 2007 15:03:06 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help Needed"
}
] |
[
{
"msg_contents": "Greetings,\n\nSince upgrading to 8.2.3 yesterday, the stats collector process has had \nvery high CPU utilization; it is consuming roughly 80-90% of one CPU. \nThe server seems a lot more sluggish than it was before. Is this normal \noperation for 8.2 or something I should look into correcting?\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = false\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.",
"msg_date": "Thu, 08 Feb 2007 11:48:01 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "stats collector process high CPU utilization"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> Since upgrading to 8.2.3 yesterday, the stats collector process has had \n> very high CPU utilization; it is consuming roughly 80-90% of one CPU. \n> The server seems a lot more sluggish than it was before. Is this normal \n> operation for 8.2 or something I should look into correcting?\n\nWhat version did you update from, and what platform is this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Feb 2007 12:17:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> Benjamin Minshall <[email protected]> writes:\n>> Since upgrading to 8.2.3 yesterday, the stats collector process has had \n>> very high CPU utilization; it is consuming roughly 80-90% of one CPU. \n>> The server seems a lot more sluggish than it was before. Is this normal \n>> operation for 8.2 or something I should look into correcting?\n> \n> What version did you update from, and what platform is this?\n> \n> \t\t\tregards, tom lane\n\nI upgraded from 8.1.5. The system is a dual Xeon 2.4Ghz, 4Gb RAM \nrunning linux kernel 2.6 series.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Thu, 08 Feb 2007 12:25:26 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> Tom Lane wrote:\n>> Benjamin Minshall <[email protected]> writes:\n>>> Since upgrading to 8.2.3 yesterday, the stats collector process has had \n>>> very high CPU utilization; it is consuming roughly 80-90% of one CPU. \n>>> The server seems a lot more sluggish than it was before. Is this normal \n>>> operation for 8.2 or something I should look into correcting?\n\n>> What version did you update from, and what platform is this?\n\n> I upgraded from 8.1.5. The system is a dual Xeon 2.4Ghz, 4Gb RAM \n> running linux kernel 2.6 series.\n\nOK, I was trying to correlate it with post-8.2.0 patches but evidently\nthat's the wrong tree to bark up. No, this isn't an expected behavior.\nIs there anything unusual about your database (huge numbers of tables,\nor some such)? Can you gather some info about what it's doing?\nstrace'ing the stats collector might prove interesting, also if you have\nbuilt it with --enable-debug then oprofile results would be helpful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Feb 2007 13:46:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> OK, I was trying to correlate it with post-8.2.0 patches but evidently\n> that's the wrong tree to bark up. No, this isn't an expected behavior.\n\nI talked with a co-worker and discovered that we went from 8.1.5 to \n8.2.2, ran a few hours then went to 8.2.3 after the patch was released. \n I do not know if the high utilization was a problem during the few \nhours on 8.2.2.\n\n> Is there anything unusual about your database (huge numbers of tables,\n> or some such)?\n\nNothing unusual. I have a few databases of about 10GB each; the \nworkload is mostly inserts using COPY or parameterized INSERTS inside \ntransaction blocks.\n\n> Can you gather some info about what it's doing?\n> strace'ing the stats collector might prove interesting, also if you have\n> built it with --enable-debug then oprofile results would be helpful.\n\nI will gather some strace info later today when I have a chance to \nshutdown the server.\n\nThanks.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Thu, 08 Feb 2007 16:16:32 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> Tom Lane wrote:\n>> Can you gather some info about what it's doing?\n>> strace'ing the stats collector might prove interesting, also if you have\n>> built it with --enable-debug then oprofile results would be helpful.\n\n> I will gather some strace info later today when I have a chance to \n> shutdown the server.\n\nI don't see why you'd need to shut anything down. Just run\n\tstrace -p stats-process-ID\nfor a few seconds or minutes (enough to gather maybe a few thousand\nlines of output).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Feb 2007 16:24:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> Benjamin Minshall <[email protected]> writes:\n>> Tom Lane wrote:\n>>> Can you gather some info about what it's doing?\n>>> strace'ing the stats collector might prove interesting, also if you have\n>>> built it with --enable-debug then oprofile results would be helpful.\n> \n>> I will gather some strace info later today when I have a chance to \n>> shutdown the server.\n> \n> I don't see why you'd need to shut anything down. Just run\n> \tstrace -p stats-process-ID\n> for a few seconds or minutes (enough to gather maybe a few thousand\n> lines of output).\n> \n\nSeems the problem may be related to a huge global/pgstat.stat file. \nUnder 8.1.5 it was about 1 MB; now it's 90 MB in 8.2.3.\n\nI ran strace for 60 seconds:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 95.71 1.119004 48652 23 rename\n 4.29 0.050128 0 508599 write\n 0.00 0.000019 0 249 22 poll\n 0.00 0.000000 0 23 open\n 0.00 0.000000 0 23 close\n 0.00 0.000000 0 34 getppid\n 0.00 0.000000 0 23 munmap\n 0.00 0.000000 0 23 setitimer\n 0.00 0.000000 0 23 22 sigreturn\n 0.00 0.000000 0 23 mmap2\n 0.00 0.000000 0 23 fstat64\n 0.00 0.000000 0 216 recv\n------ ----------- ----------- --------- --------- ----------------\n100.00 1.169151 509282 44 total\n\nI attached an excerpt of the full strace with the many thousands of \nwrite calls filtered.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Thu, 08 Feb 2007 16:55:25 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> Seems the problem may be related to a huge global/pgstat.stat file. \n> Under 8.1.5 it was about 1 MB; now it's 90 MB in 8.2.3.\n\nYoi. We didn't do anything that would bloat that file if it were\nstoring the same information as before. What I'm betting is that it's\nstoring info on a whole lot more tables than before. Did you decide\nto start running autovacuum when you updated to 8.2? How many tables\nare visible in the pg_stats views?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Feb 2007 17:07:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> Benjamin Minshall <[email protected]> writes:\n>> Seems the problem may be related to a huge global/pgstat.stat file. \n>> Under 8.1.5 it was about 1 MB; now it's 90 MB in 8.2.3.\n> \n> Yoi. We didn't do anything that would bloat that file if it were\n> storing the same information as before. What I'm betting is that it's\n> storing info on a whole lot more tables than before.\n\nThe server is running on the same actual production data, schema and \nworkload as before.\n\n> Did you decide to start running autovacuum when you updated to 8.2?\n\nAutovacuum was on and functioning before the update.\n\n> How many tables are visible in the pg_stats views?\n\nThere are about 15 databases in the cluster each with around 90 tables. \n A count of pg_stats yields between 500 and 800 rows in each database.\n\nselect count(*) from (select distinct tablename from pg_stats) as i;\n count\n-------\n 92\n(1 row)\n\nselect count(*) from pg_stats;\n count\n-------\n 628\n(1 row)\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Thu, 08 Feb 2007 17:29:23 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> Tom Lane wrote:\n>> How many tables are visible in the pg_stats views?\n\n> There are about 15 databases in the cluster each with around 90 tables. \n> A count of pg_stats yields between 500 and 800 rows in each database.\n\nSorry, I was imprecise. The view \"pg_stats\" doesn't have anything to do\nwith the stats collector; what I was interested in was the contents of\nthe \"pg_stat_xxx\" and \"pg_statio_xxx\" views. It'd be enough to check\npg_stat_all_indexes and pg_stat_all_tables, probably. Also, do you have\nthe 8.1 installation still available to get the comparable counts there?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Feb 2007 21:04:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
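For anyone following along, the counts Tom asks about here can be pulled per database with something like:

SELECT count(*) FROM pg_stat_all_tables;
SELECT count(*) FROM pg_stat_all_indexes;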
{
"msg_contents": "> Benjamin Minshall <[email protected]> writes:\n>> Tom Lane wrote:\n>>> How many tables are visible in the pg_stats views?\n>\n>> There are about 15 databases in the cluster each with around 90 tables.\n>> A count of pg_stats yields between 500 and 800 rows in each database.\n>\n> Sorry, I was imprecise. The view \"pg_stats\" doesn't have anything to do\n> with the stats collector; what I was interested in was the contents of\n> the \"pg_stat_xxx\" and \"pg_statio_xxx\" views. It'd be enough to check\n> pg_stat_all_indexes and pg_stat_all_tables, probably. Also, do you have\n> the 8.1 installation still available to get the comparable counts there?\n>\n\nI checked all 15 databases on both 8.1 and 8.2; they were all quite\nconsistent:\n\npg_stat_all_indexes has about 315 rows per database\npg_stat_all_tables has about 260 rows per database\n\nThe pg_statio_* views match in count to the pg_stat_* views as well.\n\nWhile exploring this problem, I've noticed that one of the frequent insert\nprocesses creates a few temporary tables to do post-processing. Is it\npossible that the stats collector is getting bloated with stats from these\nshort-lived temporary tables? During periods of high activity it could be\ncreating temporary tables as often as two per second.\n",
"msg_date": "Thu, 8 Feb 2007 23:13:38 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "[email protected] writes:\n> While exploring this problem, I've noticed that one of the frequent insert\n> processes creates a few temporary tables to do post-processing. Is it\n> possible that the stats collector is getting bloated with stats from these\n> short-lived temporary tables? During periods of high activity it could be\n> creating temporary tables as often as two per second.\n\nHmmm ... that's an interesting point, but offhand I don't see why it'd\ncause more of a problem in 8.2 than 8.1. Alvaro, any thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Feb 2007 23:26:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> [email protected] writes:\n> > While exploring this problem, I've noticed that one of the frequent insert\n> > processes creates a few temporary tables to do post-processing. Is it\n> > possible that the stats collector is getting bloated with stats from these\n> > short-lived temporary tables? During periods of high activity it could be\n> > creating temporary tables as often as two per second.\n> \n> Hmmm ... that's an interesting point, but offhand I don't see why it'd\n> cause more of a problem in 8.2 than 8.1. Alvaro, any thoughts?\n\nNo idea. I do have a very crude piece of code to read a pgstat.stat\nfile and output some info about what it finds (table OIDs basically\nIIRC). Maybe it can be helpful to examine what's in the bloated stat\nfile.\n\nRegarding temp tables, I'd think that the pgstat entries should be\ngetting dropped at some point in both releases. Maybe there's a bug\npreventing that in 8.2?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 9 Feb 2007 11:24:59 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Regarding temp tables, I'd think that the pgstat entries should be\n> getting dropped at some point in both releases. Maybe there's a bug\n> preventing that in 8.2?\n\nHmmm ... I did rewrite the backend-side code for that just recently for\nperformance reasons ... could I have broken it? Anyone want to take a\nsecond look at\nhttp://archives.postgresql.org/pgsql-committers/2007-01/msg00171.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Feb 2007 10:08:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "I wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Regarding temp tables, I'd think that the pgstat entries should be\n>> getting dropped at some point in both releases. Maybe there's a bug\n>> preventing that in 8.2?\n\n> Hmmm ... I did rewrite the backend-side code for that just recently for\n> performance reasons ... could I have broken it?\n\nI did some testing with HEAD and verified that pgstat_vacuum_tabstat()\nstill seems to do what it's supposed to, so that theory falls down.\n\nAlvaro, could you send Benjamin your stat-file-dumper tool so we can\nget some more info? Alternatively, if Benjamin wants to send me a copy\nof his stats file (off-list), I'd be happy to take a look.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Feb 2007 10:55:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> I wrote:\n>> Alvaro Herrera <[email protected]> writes:\n>>> Regarding temp tables, I'd think that the pgstat entries should be\n>>> getting dropped at some point in both releases. Maybe there's a bug\n>>> preventing that in 8.2?\n> \n>> Hmmm ... I did rewrite the backend-side code for that just recently for\n>> performance reasons ... could I have broken it?\n> \n> I did some testing with HEAD and verified that pgstat_vacuum_tabstat()\n> still seems to do what it's supposed to, so that theory falls down.\n> \n> Alvaro, could you send Benjamin your stat-file-dumper tool so we can\n> get some more info?\n\n> Alternatively, if Benjamin wants to send me a copy\n> of his stats file (off-list), I'd be happy to take a look.\n> \n> \t\t\tregards, tom lane\n\nWhen I checked on the server this morning, the huge stats file has \nreturned to a normal size. I set up a script to track CPU usage and \nstats file size, and it appears to have decreased from 90MB down to \nabout 2MB over roughly 6 hours last night. The CPU usage of the stats \ncollector also decreased accordingly.\n\nThe application logs indicate that there was no variation in the \nworkload over this time period, however the file size started to \ndecrease soon after the nightly pg_dump backups completed. Coincidence \nperhaps?\n\nNonetheless, I would appreciate a copy of Alvaro's stat file tool just \nto see if anything stands out in the collected stats.\n\nThanks for your help, Tom.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Fri, 09 Feb 2007 11:27:07 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> When I checked on the server this morning, the huge stats file has \n> returned to a normal size. I set up a script to track CPU usage and \n> stats file size, and it appears to have decreased from 90MB down to \n> about 2MB over roughly 6 hours last night. The CPU usage of the stats \n> collector also decreased accordingly.\n\n> The application logs indicate that there was no variation in the \n> workload over this time period, however the file size started to \n> decrease soon after the nightly pg_dump backups completed. Coincidence \n> perhaps?\n\nWell, that's pretty interesting. What are your vacuuming arrangements\nfor this installation? Could the drop in file size have coincided with\nVACUUM operations? Because the ultimate backstop against bloated stats\nfiles is pgstat_vacuum_tabstat(), which is run by VACUUM and arranges to\nclean out any entries that shouldn't be there anymore.\n\nIt's sounding like what you had was just transient bloat, in which case\nit might be useful to inquire whether anything out-of-the-ordinary had\nbeen done to the database right before the excessive-CPU-usage problem\nstarted.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Feb 2007 11:33:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> Well, that's pretty interesting. What are your vacuuming arrangements\n> for this installation? Could the drop in file size have coincided with\n> VACUUM operations? Because the ultimate backstop against bloated stats\n> files is pgstat_vacuum_tabstat(), which is run by VACUUM and arranges to\n> clean out any entries that shouldn't be there anymore.\n\nVACUUM and ANALYZE are done by autovacuum only, no cron jobs. \nautovacuum_naptime is 30 seconds so it should make it to each database \nevery 10 minutes or so. Do you think that more aggressive vacuuming \nwould prevent future swelling of the stats file?\n\n> It's sounding like what you had was just transient bloat, in which case\n> it might be useful to inquire whether anything out-of-the-ordinary had\n> been done to the database right before the excessive-CPU-usage problem\n> started.\n\nI don't believe that there was any unusual activity on the server, but I \nhave set up some more detailed logging to hopefully identify a pattern \nif the problem resurfaces.\n\nThanks.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz",
"msg_date": "Fri, 09 Feb 2007 13:36:03 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Benjamin Minshall <[email protected]> writes:\n> Tom Lane wrote:\n>> It's sounding like what you had was just transient bloat, in which case\n>> it might be useful to inquire whether anything out-of-the-ordinary had\n>> been done to the database right before the excessive-CPU-usage problem\n>> started.\n\n> I don't believe that there was any unusual activity on the server, but I \n> have set up some more detailed logging to hopefully identify a pattern \n> if the problem resurfaces.\n\nA further report led us to realize that 8.2.x in fact has a nasty bug\nhere: the stats collector is supposed to dump its stats to a file at\nmost every 500 milliseconds, but the code was actually waiting only\n500 microseconds :-(. The larger the stats file, the more obvious\nthis problem gets.\n\nIf you want to patch this before 8.2.4, try this...\n\nIndex: pgstat.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\nretrieving revision 1.140.2.2\ndiff -c -r1.140.2.2 pgstat.c\n*** pgstat.c\t26 Jan 2007 20:07:01 -0000\t1.140.2.2\n--- pgstat.c\t1 Mar 2007 20:04:50 -0000\n***************\n*** 1689,1695 ****\n \t/* Preset the delay between status file writes */\n \tMemSet(&write_timeout, 0, sizeof(struct itimerval));\n \twrite_timeout.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n! \twrite_timeout.it_value.tv_usec = PGSTAT_STAT_INTERVAL % 1000;\n \n \t/*\n \t * Read in an existing statistics stats file or initialize the stats to\n--- 1689,1695 ----\n \t/* Preset the delay between status file writes */\n \tMemSet(&write_timeout, 0, sizeof(struct itimerval));\n \twrite_timeout.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n! \twrite_timeout.it_value.tv_usec = (PGSTAT_STAT_INTERVAL % 1000) * 1000;\n \n \t/*\n \t * Read in an existing statistics stats file or initialize the stats to\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Mar 2007 15:12:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> A further report led us to realize that 8.2.x in fact has a nasty bug\n> here: the stats collector is supposed to dump its stats to a file at\n> most every 500 milliseconds, but the code was actually waiting only\n> 500 microseconds :-(. The larger the stats file, the more obvious\n> this problem gets.\n> \n> If you want to patch this before 8.2.4, try this...\n> \n\nThanks for the follow-up on this issue, Tom. I was able to link the \noriginal huge stats file problem to some long(ish) running transactions \nwhich blocked VACUUM, but this patch will really help. Thanks.\n\n-Ben",
"msg_date": "Thu, 01 Mar 2007 15:41:19 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "On 3/1/07, Tom Lane <[email protected]> wrote:\n> Benjamin Minshall <[email protected]> writes:\n> > Tom Lane wrote:\n> >> It's sounding like what you had was just transient bloat, in which case\n> >> it might be useful to inquire whether anything out-of-the-ordinary had\n> >> been done to the database right before the excessive-CPU-usage problem\n> >> started.\n>\n> > I don't believe that there was any unusual activity on the server, but I\n> > have set up some more detailed logging to hopefully identify a pattern\n> > if the problem resurfaces.\n>\n> A further report led us to realize that 8.2.x in fact has a nasty bug\n> here: the stats collector is supposed to dump its stats to a file at\n> most every 500 milliseconds, but the code was actually waiting only\n> 500 microseconds :-(. The larger the stats file, the more obvious\n> this problem gets.\n\nI think this explains the trigger that was blowing up my FC4 box.\n\nmerlin\n",
"msg_date": "Thu, 1 Mar 2007 16:58:20 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> On 3/1/07, Tom Lane <[email protected]> wrote:\n>> A further report led us to realize that 8.2.x in fact has a nasty bug\n>> here: the stats collector is supposed to dump its stats to a file at\n>> most every 500 milliseconds, but the code was actually waiting only\n>> 500 microseconds :-(. The larger the stats file, the more obvious\n>> this problem gets.\n\n> I think this explains the trigger that was blowing up my FC4 box.\n\nI dug in the archives a bit and couldn't find the report you're\nreferring to?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Mar 2007 17:44:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "On 3/2/07, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > On 3/1/07, Tom Lane <[email protected]> wrote:\n> >> A further report led us to realize that 8.2.x in fact has a nasty bug\n> >> here: the stats collector is supposed to dump its stats to a file at\n> >> most every 500 milliseconds, but the code was actually waiting only\n> >> 500 microseconds :-(. The larger the stats file, the more obvious\n> >> this problem gets.\n>\n> > I think this explains the trigger that was blowing up my FC4 box.\n>\n> I dug in the archives a bit and couldn't find the report you're\n> referring to?\n\nI was referring to this:\nhttp://archives.postgresql.org/pgsql-hackers/2007-02/msg01418.php\n\nEven though the fundamental reason was obvious (and btw, I inherited\nthis server less than two months ago), I was still curious what was\nmaking 8.2 blow up a box that was handling a million tps/hour for over\na year. :-)\n\nmerlin\n",
"msg_date": "Fri, 2 Mar 2007 07:00:00 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> On 3/2/07, Tom Lane <[email protected]> wrote:\n>> \"Merlin Moncure\" <[email protected]> writes:\n>>> I think this explains the trigger that was blowing up my FC4 box.\n>> \n>> I dug in the archives a bit and couldn't find the report you're\n>> referring to?\n\n> I was referring to this:\n> http://archives.postgresql.org/pgsql-hackers/2007-02/msg01418.php\n\nOh, the kernel-panic thing. Hm, I wouldn't have thought that replacing\na file at a huge rate would induce a kernel panic ... but who knows?\nDo you want to try installing the one-liner patch and see if the panic\ngoes away?\n\nActually I was wondering a bit if that strange Windows error discussed\nearlier today could be triggered by this behavior:\nhttp://archives.postgresql.org/pgsql-general/2007-03/msg00000.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Mar 2007 20:35:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n>> On 3/2/07, Tom Lane <[email protected]> wrote:\n>>> \"Merlin Moncure\" <[email protected]> writes:\n>>>> I think this explains the trigger that was blowing up my FC4 box.\n>>> I dug in the archives a bit and couldn't find the report you're\n>>> referring to?\n> \n>> I was referring to this:\n>> http://archives.postgresql.org/pgsql-hackers/2007-02/msg01418.php\n> \n> Oh, the kernel-panic thing. Hm, I wouldn't have thought that replacing\n> a file at a huge rate would induce a kernel panic ... but who knows?\n> Do you want to try installing the one-liner patch and see if the panic\n> goes away?\n> \n> Actually I was wondering a bit if that strange Windows error discussed\n> earlier today could be triggered by this behavior:\n> http://archives.postgresql.org/pgsql-general/2007-03/msg00000.php\n\nI think that's very likely. If we're updaitng the file *that* often,\nwe're certainly doing something that's very unusual for the windows\nfilesystem, and possibly for the hardware as well :-)\n\n//Magnus\n",
"msg_date": "Fri, 02 Mar 2007 11:32:46 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "\nSorry, I introduced this bug.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Benjamin Minshall <[email protected]> writes:\n> > Tom Lane wrote:\n> >> It's sounding like what you had was just transient bloat, in which case\n> >> it might be useful to inquire whether anything out-of-the-ordinary had\n> >> been done to the database right before the excessive-CPU-usage problem\n> >> started.\n> \n> > I don't believe that there was any unusual activity on the server, but I \n> > have set up some more detailed logging to hopefully identify a pattern \n> > if the problem resurfaces.\n> \n> A further report led us to realize that 8.2.x in fact has a nasty bug\n> here: the stats collector is supposed to dump its stats to a file at\n> most every 500 milliseconds, but the code was actually waiting only\n> 500 microseconds :-(. The larger the stats file, the more obvious\n> this problem gets.\n> \n> If you want to patch this before 8.2.4, try this...\n> \n> Index: pgstat.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\n> retrieving revision 1.140.2.2\n> diff -c -r1.140.2.2 pgstat.c\n> *** pgstat.c\t26 Jan 2007 20:07:01 -0000\t1.140.2.2\n> --- pgstat.c\t1 Mar 2007 20:04:50 -0000\n> ***************\n> *** 1689,1695 ****\n> \t/* Preset the delay between status file writes */\n> \tMemSet(&write_timeout, 0, sizeof(struct itimerval));\n> \twrite_timeout.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n> ! \twrite_timeout.it_value.tv_usec = PGSTAT_STAT_INTERVAL % 1000;\n> \n> \t/*\n> \t * Read in an existing statistics stats file or initialize the stats to\n> --- 1689,1695 ----\n> \t/* Preset the delay between status file writes */\n> \tMemSet(&write_timeout, 0, sizeof(struct itimerval));\n> \twrite_timeout.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n> ! \twrite_timeout.it_value.tv_usec = (PGSTAT_STAT_INTERVAL % 1000) * 1000;\n> \n> \t/*\n> \t * Read in an existing statistics stats file or initialize the stats to\n> \n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Fri, 2 Mar 2007 18:06:39 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Sorry, I introduced this bug.\n\nTo the gallows with you! :) Don't feel bad, there were several hackers\nthat missed the math on that one.\n\nJoshua D. Drake\n\n\n\n> \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n>> Benjamin Minshall <[email protected]> writes:\n>>> Tom Lane wrote:\n>>>> It's sounding like what you had was just transient bloat, in which case\n>>>> it might be useful to inquire whether anything out-of-the-ordinary had\n>>>> been done to the database right before the excessive-CPU-usage problem\n>>>> started.\n>>> I don't believe that there was any unusual activity on the server, but I \n>>> have set up some more detailed logging to hopefully identify a pattern \n>>> if the problem resurfaces.\n>> A further report led us to realize that 8.2.x in fact has a nasty bug\n>> here: the stats collector is supposed to dump its stats to a file at\n>> most every 500 milliseconds, but the code was actually waiting only\n>> 500 microseconds :-(. The larger the stats file, the more obvious\n>> this problem gets.\n>>\n>> If you want to patch this before 8.2.4, try this...\n>>\n>> Index: pgstat.c\n>> ===================================================================\n>> RCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v\n>> retrieving revision 1.140.2.2\n>> diff -c -r1.140.2.2 pgstat.c\n>> *** pgstat.c\t26 Jan 2007 20:07:01 -0000\t1.140.2.2\n>> --- pgstat.c\t1 Mar 2007 20:04:50 -0000\n>> ***************\n>> *** 1689,1695 ****\n>> \t/* Preset the delay between status file writes */\n>> \tMemSet(&write_timeout, 0, sizeof(struct itimerval));\n>> \twrite_timeout.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n>> ! \twrite_timeout.it_value.tv_usec = PGSTAT_STAT_INTERVAL % 1000;\n>> \n>> \t/*\n>> \t * Read in an existing statistics stats file or initialize the stats to\n>> --- 1689,1695 ----\n>> \t/* Preset the delay between status file writes */\n>> \tMemSet(&write_timeout, 0, sizeof(struct itimerval));\n>> \twrite_timeout.it_value.tv_sec = PGSTAT_STAT_INTERVAL / 1000;\n>> ! \twrite_timeout.it_value.tv_usec = (PGSTAT_STAT_INTERVAL % 1000) * 1000;\n>> \n>> \t/*\n>> \t * Read in an existing statistics stats file or initialize the stats to\n>>\n>>\n>> \t\t\tregards, tom lane\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Fri, 02 Mar 2007 15:23:53 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats collector process high CPU utilization"
}
] |
[
{
"msg_contents": "[email protected] wrote:\n> Hi all,\n> \n> I'm fairly new to SQL, so this is probably a dumb way to form this\n> query, but I don't know another.\n> \n> I want to see the usernames of all the users who have logged on\n> today. \"users\" is my table of users with id's and username's.\n> \"session_stats\" is my table of user sessions where I store site\n> activity, and it has a user_id column.\n> \n> SELECT username FROM users WHERE id IN (SELECT DISTINCT user_id FROM\n> session_stats $dateClause AND user_id!=0)\n\nJeff,\n\nIt looks like you need a JOIN instead:\n\nSELECT username from users\n JOIN session_stats ON (users.id = session_stats.user_id)\n WHERE $dateClause AND user_id != 0);\n\nCheck that you also have indexes on both of those columns (check the\ndocs for \"CREATE INDEX\" for details. )\n\n Mark\n",
"msg_date": "Thu, 08 Feb 2007 12:11:25 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed up this query"
}
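A sketch of the rewrite suggested above, with DISTINCT added so users with several sessions today only appear once; the started_at column and the index names are assumptions standing in for the $dateClause placeholder:

    -- Indexes on the join and filter columns (names are made up).
    CREATE INDEX session_stats_user_id_idx    ON session_stats (user_id);
    CREATE INDEX session_stats_started_at_idx ON session_stats (started_at);

    -- DISTINCT keeps repeat sessions from duplicating usernames.
    SELECT DISTINCT u.username
      FROM users u
      JOIN session_stats s ON (u.id = s.user_id)
     WHERE s.started_at >= current_date   -- stands in for $dateClause
       AND s.user_id <> 0;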
] |
[
{
"msg_contents": "Hi,\n\nOne of our database server is getting some very high response times and I�m\nwondering what can be responsible for this issue.\n\nA strange behaviour in this server is the saturation number iostat is\ngiving, an average of 20% for only 140 wkB/s or can I consider them normal\nnumbers?\n\nIt is a Fedora Core release 4 (Stentz) box with 4 GB RAM and 2 GenuineIntel\nXEON CPU 3.20 GHz Cache: 1024 KB. In the disk subsystem we have two SCSI\ndrives in hardware RAID1.\n\n\nSome typical iostat -x data:\ncpu-moy: %user %nice %sys %iowait %idle\n 11,96 0,00 7,04 1,31 79,70\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0,00 30,44 0,81 3,43 14,52 270,97 7,26 135,48\n67,43 0,19 46,05 7,29 3,08\n\ncpu-moy: %user %nice %sys %iowait %idle\n 12,26 0,00 7,34 6,63 73,77\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0,00 32,26 0,00 4,41 0,00 293,39 0,00 146,69\n66,55 1,51 342,32 25,14 11,08\n\ncpu-moy: %user %nice %sys %iowait %idle\n 13,67 0,00 6,83 7,74 71,76\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0,00 31,99 0,00 3,82 0,00 286,52 0,00 143,26\n74,95 2,93 767,58 55,84 21,35\n\ncpu-moy: %user %nice %sys %iowait %idle\n 13,27 0,00 6,83 14,37 65,53\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0,00 31,73 0,00 4,02 0,00 285,94 0,00 142,97\n71,20 2,73 680,40 49,25 19,78\n\ncpu-moy: %user %nice %sys %iowait %idle\n 12,86 0,00 6,33 9,45 71,36\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0,00 30,92 0,00 3,41 0,00 274,70 0,00 137,35\n80,47 2,33 681,35 57,53 19,64\n\ncpu-moy: %user %nice %sys %iowait %idle\n 12,45 0,00 6,02 1,91 79,62\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0,00 33,00 0,60 4,23 6,44 297,79 3,22 148,89\n63,00 0,41 85,29 20,00 9,66\n\n\ndmesg data:\n....\n....\nSCSI subsystem initialized\nFusion MPT base driver 3.01.20\nCopyright (c) 1999-2004 LSI Logic Corporation\nACPI: PCI Interrupt 0000:02:04.0[A] -> GSI 42 (level, low) -> IRQ 185\nmptbase: Initiating ioc0 bringup\nioc0: 53C1030: Capabilities={Initiator,Target}\nFusion MPT SCSI Host driver 3.01.20\nscsi0 : ioc0: LSI53C1030, FwRev=01032300h, Ports=1, MaxQ=255, IRQ=185\ninput: AT Translated Set 2 keyboard on isa0060/serio0\ninput: AT Translated Set 2 keyboard on isa0060/serio0\nmegaraid cmm: 2.20.2.5 (Release Date: Fri Jan 21 00:01:03 EST 2005)\nmegaraid: 2.20.4.5 (Release Date: Thu Feb 03 12:27:22 EST 2005)\nmegaraid: probe new device 0x1000:0x1960:0x1028:0x0520: bus 2:slot 5:func 0\nACPI: PCI Interrupt 0000:02:05.0[A] -> GSI 37 (level, low) -> IRQ 193\nmegaraid: fw version:[351S] bios version:[1.10]\nscsi1 : LSI Logic MegaRAID driver\nscsi[1]: scanning scsi channel 0 [Phy 0] for non-raid devices\n Vendor: SDR Model: GEM318P Rev: 1\n Type: Processor ANSI SCSI revision: 02\nscsi[1]: scanning scsi channel 1 [virtual] for logical drives\n Vendor: MegaRAID Model: LD 0 RAID1 69G Rev: 351S\n Type: Direct-Access ANSI SCSI revision: 02\nSCSI device sda: 143114240 512-byte hdwr sectors (73274 MB)\nsda: asking for cache data failed\nsda: assuming drive cache: write through\nSCSI device sda: 143114240 512-byte hdwr sectors (73274 MB)\nsda: asking for cache data failed\nsda: assuming drive cache: write through\n sda: sda1 sda2 sda3 sda4 < sda5 >\nAttached scsi disk sda at scsi1, channel 1, id 0, lun 
0\nlibata version 1.10 loaded.\nata_piix version 1.03\nACPI: PCI Interrupt 0000:00:1f.2[A] -> GSI 18 (level, low) -> IRQ 177\nPCI: Setting latency timer of device 0000:00:1f.2 to 64\nata1: SATA max UDMA/133 cmd 0xBC98 ctl 0xBC92 bmdma 0xBC60 irq 177\nata2: SATA max UDMA/133 cmd 0xBC80 ctl 0xBC7A bmdma 0xBC68 irq 177\nata1: SATA port has no device.\nscsi2 : ata_piix\nata2: SATA port has no device.\nscsi3 : ata_piix\nisa bounce pool size: 16 pages\nkjournald starting. Commit interval 5 seconds\nEXT3-fs: mounted filesystem with ordered data mode.\nSELinux: Disabled at runtime.\nSELinux: Unregistering netfilter hooks\ncfq: depth 4 reached, tagging now on\nAttached scsi generic sg0 at scsi1, channel 0, id 6, lun 0, type 3\nAttached scsi generic sg1 at scsi1, channel 1, id 0, lun 0, type 0\nFloppy drive(s): fd0 is 1.44M\n...\n...\n...\n\nThank you in advance!\n\nReimer\n\nReimer\[email protected]\nOpenDB Serviços e Treinamentos PostgreSQL e DB2\nFone: 47 3327-0878 Cel: 47 9183-0547\nwww.opendb.com.br\n",
"msg_date": "Thu, 8 Feb 2007 15:19:32 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disk saturation"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have a quite big table, which is heavily used by our online clients.\nThe table has several indexes, but no other relation to other tables.\nWe have an import process, which can fill/update this table.\nThe full import takes 1 hour, and this is very long.\nWe are thinking of doing the full import in another table and then just \n\"swapping\" the two tables.\nWhat will happen to our indexes? What will happen to our current \ntransactions (only read) ? What will the user see? :)\nShould we recreate the indexes after the swap is done?\n\nBtw is there a good practice doing this kind of work?\n\nThanks in advance,\nAkos Gabriel\n",
"msg_date": "Fri, 09 Feb 2007 19:24:59 +0100",
"msg_from": "=?ISO-8859-2?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recreate big table"
},
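A rough sketch of the swap approach asked about above, assuming the table is called big_table and some_column is one of its indexed columns (both names are made up); indexes built on the new table follow it through the rename, and readers only see the new data once the renaming transaction commits:

    -- Build and load the replacement off to the side.
    CREATE TABLE big_table_new (LIKE big_table INCLUDING DEFAULTS);
    -- ... run the full import into big_table_new here ...
    CREATE INDEX big_table_new_some_column_idx ON big_table_new (some_column);
    ANALYZE big_table_new;

    -- Swap the tables; the renames take an exclusive lock, so readers block
    -- only briefly while the transaction commits.
    BEGIN;
    ALTER TABLE big_table     RENAME TO big_table_old;
    ALTER TABLE big_table_new RENAME TO big_table;
    COMMIT;

    DROP TABLE big_table_old;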
{
"msg_contents": "Gábriel,\n\nYou could use table inheritance, like table partitioning is explained in manual:\n\nhttp://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html\n\nKind regards,\n\nDaniel Cristian\n\nOn 2/9/07, Gábriel Ákos <[email protected]> wrote:\n> Hi,\n>\n> We have a quite big table, which is heavily used by our online clients.\n> The table has several indexes, but no other relation to other tables.\n> We have an import process, which can fill/update this table.\n> The full import takes 1 hour, and this is very long.\n> We are thinking of doing the full import in another table and then just\n> \"swapping\" the two tables.\n> What will happen to our indexes? What will happen to our current\n> transactions (only read) ? What will the user see? :)\n> Should we recreate the indexes after the swap is done?\n>\n> Btw is there a good practice doing this kind of work?\n>\n> Thanks in advance,\n> Akos Gabriel\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n\n-- \nDaniel Cristian Cruz\nAnalista de Sistemas\nEspecialista postgreSQL e Linux\nInstrutor Certificado Mandriva\n",
"msg_date": "Fri, 9 Feb 2007 17:10:30 -0200",
"msg_from": "\"Daniel Cristian Cruz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recreate big table"
}
] |
[
{
"msg_contents": "\nWith the help of some of this list, I was able to successfully set up\nand benchmark a cube-based replacement for geo_distance() calculations.\n\nOn a development box, the cube-based variations benchmarked consistently\nrunning in about 1/3 of the time of the gel_distance() equivalents.\n\nAfter setting up the same columns and indexes on a production\ndatabase, it's a different story. All the cube operations show\nthemselves to be about the same as, or noticeably slower than, the same\noperations done with geo_distance().\n\nI've stared at the EXPLAIN ANALYZE output as much I can to figure what's\ngone. Could you help?\n\nHere's the plan on the production server, which seems too slow. Below is the plan I get in\non the development server, which is much faster.\n\nI tried \"set enable_nestloop = off\", which did change the plan, but the performance.\n\nThe production DB has much more data in it, but I still expected comparable results relative\nto using geo_distance() calculations.\n\nThe production db gets a \"VACUUM ANALYZE\" every couple of hours now.\n\nThanks!\n\n Mark\n\n########\n\n Sort (cost=6617.03..6617.10 rows=27 width=32) (actual time=2482.915..2487.008 rows=1375 loops=1)\n Sort Key: (cube_distance($0, zipcodes.earth_coords) / 1609.344::double precision)\n InitPlan\n -> Index Scan using zipcodes_pkey on zipcodes (cost=0.00..3.01 rows=1 width=32) (actual time=0.034..0.038 rows=1 loops=1)\n Index Cond: ((zipcode)::text = '90210'::text)\n -> Index Scan using zipcodes_pkey on zipcodes (cost=0.00..3.01 rows=1 width=32) (actual time=0.435..0.438 rows=1 loops=1)\n Index Cond: ((zipcode)::text = '90210'::text)\n -> Nested Loop (cost=538.82..6610.36 rows=27 width=32) (actual time=44.660..2476.919 rows=1375 loops=1)\n -> Nested Loop (cost=2.15..572.14 rows=9 width=36) (actual time=4.877..39.037 rows=136 loops=1)\n -> Bitmap Heap Scan on zipcodes (cost=2.15..150.05 rows=42 width=41) (actual time=3.749..4.951 rows=240 loops=1)\n Recheck Cond: (cube_enlarge(($1)::cube, 16093.4357308298::double precision, 3) @ earth_coords)\n -> Bitmap Index Scan on zip_earth_coords_idx (cost=0.00..2.15 rows=42 width=0) (actual time=3.658..3.658 rows=240 loops=1)\n Index Cond: (cube_enlarge(($1)::cube, 16093.4357308298::double precision, 3) @ earth_coords)\n -> Index Scan using shelters_postal_code_for_joining_idx on shelters (cost=0.00..10.02 rows=2 width=12) (actual time=0.079..0.133 rows=1 loops=240)\n Index Cond: ((shelters.postal_code_for_joining)::text = (\"outer\".zipcode)::text)\n -> Bitmap Heap Scan on pets (cost=536.67..670.47 rows=34 width=4) (actual time=16.844..17.830 rows=10 loops=136)\n Recheck Cond: ((pets.shelter_id = \"outer\".shelter_id) AND ((pets.pet_state)::text = 'available'::text))\n Filter: (species_id = 1) Sort (cost=7004.53..7004.62 rows=39 width=32) (actual time=54.635..55.450 rows=475 loops=1)\n -> BitmapAnd (cost=536.67..536.67 rows=34 width=0) (actual time=16.621..16.621 rows=0 loops=136)\n -> Bitmap Index Scan on pets_shelter_id_idx (cost=0.00..3.92 rows=263 width=0) (actual time=0.184..0.184 rows=132 loops=136)\n Index Cond: (pets.shelter_id = \"outer\".shelter_id)\n -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..532.50 rows=39571 width=0) (actual time=26.922..26.922 rows=40390 loops=82)\n Index Cond: ((pet_state)::text = 'available'::text)\n Total runtime: 2492.852 ms\n\n\n########### Faster plan in development:\n\n Sort (cost=7004.53..7004.62 rows=39 width=32) (actual time=54.635..55.450 rows=475 loops=1)\n Sort Key: (cube_distance($0, 
earth_distance.earth_coords) / 1609.344::double precision)\n InitPlan\n -> Bitmap Heap Scan on earth_distance (cost=4.74..624.60 rows=212 width=32) (actual time=0.113..0.115 rows=1 loops=1)\n Recheck Cond: ((zipcode)::text = '90210'::text)\n -> Bitmap Index Scan on earth_distance_zipcode_idx (cost=0.00..4.74 rows=212 width=0) (actual time=0.101..0.101 rows=2 loops=1)\n Index Cond: ((zipcode)::text = '90210'::text)\n -> Bitmap Heap Scan on earth_distance (cost=4.74..624.60 rows=212 width=32) (actual time=0.205..0.208 rows=1 loops=1)\n Recheck Cond: ((zipcode)::text = '90210'::text)\n -> Bitmap Index Scan on earth_distance_zipcode_idx (cost=0.00..4.74 rows=212 width=0) (actual time=0.160..0.160 rows=2 loops=1)\n Index Cond: ((zipcode)::text = '90210'::text)\n -> Hash Join (cost=618.67..5754.30 rows=39 width=32) (actual time=13.499..52.924 rows=475 loops=1)\n Hash Cond: (\"outer\".shelter_id = \"inner\".shelter_id)\n -> Bitmap Heap Scan on pets (cost=44.85..5158.42 rows=4298 width=4) (actual time=4.278..34.192 rows=3843 loops=1)\n Recheck Cond: ((pet_state)::text = 'available'::text)\n Filter: (species_id = 1)\n -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..44.85 rows=6244 width=0) (actual time=3.623..3.623 rows=7257 loops=1)\n Index Cond: ((pet_state)::text = 'available'::text)\n -> Hash (cost=573.65..573.65 rows=66 width=36) (actual time=8.916..8.916 rows=102 loops=1)\n -> Nested Loop (cost=3.15..573.65 rows=66 width=36) (actual time=3.004..8.513 rows=102 loops=1)\n -> Bitmap Heap Scan on earth_distance (cost=3.15..152.36 rows=42 width=41) (actual time=2.751..3.432 rows=240 loops=1)\n Recheck Cond: (cube_enlarge(($1)::cube, 16093.4357308298::double precision, 3) @ earth_coords)\n -> Bitmap Index Scan on earth_coords_idx (cost=0.00..3.15 rows=42 width=0) (actual time=2.520..2.520 rows=480 loops=1)\n Index Cond: (cube_enlarge(($1)::cube, 16093.4357308298::double precision, 3) @ earth_coords)\n -> Index Scan using shelters_postal_code_for_joining_idx on shelters (cost=0.00..10.01 rows=2 width=12) (actual time=0.011..0.015 rows=0 loops=240)\n Index Cond: ((shelters.postal_code_for_joining)::text = (\"outer\".zipcode)::text)\n Total runtime: 58.038 ms\n",
"msg_date": "Fri, 09 Feb 2007 14:26:01 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "On 2/10/07, Mark Stosberg <[email protected]> wrote:\n>\n> With the help of some of this list, I was able to successfully set up\n> and benchmark a cube-based replacement for geo_distance() calculations.\n>\n> On a development box, the cube-based variations benchmarked consistently\n> running in about 1/3 of the time of the gel_distance() equivalents.\n>\n> After setting up the same columns and indexes on a production\n> database, it's a different story. All the cube operations show\n> themselves to be about the same as, or noticeably slower than, the same\n> operations done with geo_distance().\n>\n> I've stared at the EXPLAIN ANALYZE output as much I can to figure what's\n> gone. Could you help?\n>\n> Here's the plan on the production server, which seems too slow. Below is the plan I get in\n> on the development server, which is much faster.\n>\n> I tried \"set enable_nestloop = off\", which did change the plan, but the performance.\n>\n> The production DB has much more data in it, but I still expected comparable results relative\n> to using geo_distance() calculations.\n\nany objection to posting the query (any maybe tables, keys, indexes, etc)?\n\nmerlin\n",
"msg_date": "Sat, 10 Feb 2007 07:01:34 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "Merlin Moncure wrote:\n> On 2/10/07, Mark Stosberg <[email protected]> wrote:\n>>\n>> With the help of some of this list, I was able to successfully set up\n>> and benchmark a cube-based replacement for geo_distance() calculations.\n>>\n>> On a development box, the cube-based variations benchmarked consistently\n>> running in about 1/3 of the time of the gel_distance() equivalents.\n>>\n>> After setting up the same columns and indexes on a production\n>> database, it's a different story. All the cube operations show\n>> themselves to be about the same as, or noticeably slower than, the same\n>> operations done with geo_distance().\n>>\n>> I've stared at the EXPLAIN ANALYZE output as much I can to figure what's\n>> gone. Could you help?\n>>\n>> Here's the plan on the production server, which seems too slow. Below\n>> is the plan I get in\n>> on the development server, which is much faster.\n>>\n>> I tried \"set enable_nestloop = off\", which did change the plan, but\n>> the performance.\n>>\n>> The production DB has much more data in it, but I still expected\n>> comparable results relative\n>> to using geo_distance() calculations.\n>\n> any objection to posting the query (any maybe tables, keys, indexes, etc)?\n\nHere the basic query I'm using:\nSELECT\n -- 1609.344 is a constant for \"meters per mile\"\n cube_distance( (SELECT earth_coords from zipcodes WHERE zipcode =\n'90210') , earth_coords)/1609.344\n AS RADIUS\n FROM pets\n -- \"shelters_active\" is a view where \"shelter_state = 'active'\"\n JOIN shelters_active as shelters USING (shelter_id)\n -- The zipcode fields here are varchars\n JOIN zipcodes ON (\n shelters.postal_code_for_joining = zipcodes.zipcode )\n -- search for just 'dogs'\n WHERE species_id = 1\n AND pet_state='available'\n AND earth_box(\n (SELECT earth_coords from zipcodes WHERE zipcode = '90210') ,\n10*1609.344\n ) @ earth_coords\n ORDER BY RADIUS;\n\nAll the related columns are indexed:\n pets.species_id\n pets.shelter_id\n pets.pet_state\n\n shelters.shelter_id (pk)\n shelters.postal_code_for_joining\n shelters.active\n\n zipcodes.zipcode (pk)\n zipcodes.earth_coords\n\nThe pets table has about 300,000 rows, but only about 10% are\n\"available\". It sees regular updates and is \"vacuum analyzed\" every\ncouple of hours now. the rest of the tables get \"vacuum analyzed\nnightly\". The shelters table is about 99% \"shelter_state = active\".\nIt's updated infrequently.\n\nThe zipcodes table has about 40,000 rows in it and doesn't change.\n\nI tried a partial index on the pets table \"WHERE pet_state =\n'available'. I could see the index was used, but the performance was\nunaffected.\n\nThe \"EXPLAIN ANALYZE\" output is attached, to try to avoid mail-client\nwrapping. The query is running 10 times slower today than on Friday,\nperhaps because of server load, or because we are at the end of a VACUUM\ncycle.\n\nThanks for any help!\n\n Mark",
"msg_date": "Mon, 12 Feb 2007 11:11:19 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "On 2/12/07, Mark Stosberg <[email protected]> wrote:\n> Merlin Moncure wrote:\n> > On 2/10/07, Mark Stosberg <[email protected]> wrote:\n> >>\n> >> With the help of some of this list, I was able to successfully set up\n> >> and benchmark a cube-based replacement for geo_distance() calculations.\n> >>\n> >> On a development box, the cube-based variations benchmarked consistently\n> >> running in about 1/3 of the time of the gel_distance() equivalents.\n> >>\n> >> After setting up the same columns and indexes on a production\n> >> database, it's a different story. All the cube operations show\n> >> themselves to be about the same as, or noticeably slower than, the same\n> >> operations done with geo_distance().\n> >>\n> >> I've stared at the EXPLAIN ANALYZE output as much I can to figure what's\n> >> gone. Could you help?\n> >>\n> >> Here's the plan on the production server, which seems too slow. Below\n> >> is the plan I get in\n> >> on the development server, which is much faster.\n> >>\n> >> I tried \"set enable_nestloop = off\", which did change the plan, but\n> >> the performance.\n> >>\n> >> The production DB has much more data in it, but I still expected\n> >> comparable results relative\n> >> to using geo_distance() calculations.\n> >\n> > any objection to posting the query (any maybe tables, keys, indexes, etc)?\n>\n> Here the basic query I'm using:\n> SELECT\n> -- 1609.344 is a constant for \"meters per mile\"\n> cube_distance( (SELECT earth_coords from zipcodes WHERE zipcode =\n> '90210') , earth_coords)/1609.344\n> AS RADIUS\n> FROM pets\n> -- \"shelters_active\" is a view where \"shelter_state = 'active'\"\n> JOIN shelters_active as shelters USING (shelter_id)\n> -- The zipcode fields here are varchars\n> JOIN zipcodes ON (\n> shelters.postal_code_for_joining = zipcodes.zipcode )\n> -- search for just 'dogs'\n> WHERE species_id = 1\n> AND pet_state='available'\n> AND earth_box(\n> (SELECT earth_coords from zipcodes WHERE zipcode = '90210') ,\n> 10*1609.344\n> ) @ earth_coords\n> ORDER BY RADIUS;\n\nyour query looks a bit funky. here are the problems I see.\n\n* in your field list, you don't need to re-query the zipcode table.\n> cube_distance( (SELECT earth_coords from zipcodes WHERE zipcode =\n> '90210') , earth_coords)/1609.344 AS RADIUS\n\nbecomes\n\n cube_distance(pets.earth_coords, earth_coords ) / 1609.344 AS RADIUS\n\nalso, dont. re-refer to the zipcodes table in the join clause. you are\nalready joining to it:\n> AND earth_box(\n> (SELECT earth_coords from zipcodes WHERE zipcode = '90210') ,\n> 10*1609.344) @ earth_coords\n\nbecomes\n\n AND earth_box(zipcodes.earth_coords, 10*1609.344) ) @ pets.earth_coords\n\n* also, does pet_state have any other states than 'available' and '\nnot available'? if not, you should be using a boolean. if so, you can\nconsider a functional index to convert it to a booelan.\n\n* if you always look up pets by species, we can explore composite\nindex columns on species, available (especially using the above\nfunctional suggestion), etc. composite > partial (imo)\n\nthats just to start. play with it and see what comes up.\n\nmerlin\n",
"msg_date": "Mon, 12 Feb 2007 14:03:14 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "On 2/12/07, Merlin Moncure <[email protected]> wrote:\n> cube_distance(pets.earth_coords, earth_coords ) / 1609.344 AS RADIUS\n\nthis should read:\ncube_distance(pets.earth_coords, zipcodes.earth_coords ) / 1609.344 AS RADIUS\n\nmerlin\n",
"msg_date": "Mon, 12 Feb 2007 14:05:18 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "\nMerlin--\n\nThanks so much for your help. Some follow-ups are below.\n\nMerlin Moncure wrote:\n>\n>> Here the basic query I'm using:\n>> SELECT\n>> -- 1609.344 is a constant for \"meters per mile\"\n>> cube_distance( (SELECT earth_coords from zipcodes WHERE zipcode =\n>> '90210') , earth_coords)/1609.344\n>> AS RADIUS\n>> FROM pets\n>> -- \"shelters_active\" is a view where \"shelter_state = 'active'\"\n>> JOIN shelters_active as shelters USING (shelter_id)\n>> -- The zipcode fields here are varchars\n>> JOIN zipcodes ON (\n>> shelters.postal_code_for_joining = zipcodes.zipcode )\n>> -- search for just 'dogs'\n>> WHERE species_id = 1\n>> AND pet_state='available'\n>> AND earth_box(\n>> (SELECT earth_coords from zipcodes WHERE zipcode = '90210') ,\n>> 10*1609.344\n>> ) @ earth_coords\n>> ORDER BY RADIUS;\n>\n> your query looks a bit funky. here are the problems I see.\n>\n> * in your field list, you don't need to re-query the zipcode table.\n>> cube_distance( (SELECT earth_coords from zipcodes WHERE zipcode =\n>> '90210') , earth_coords)/1609.344 AS RADIUS\n>\n> becomes\n>\n> cube_distance(pets.earth_coords, earth_coords ) / 1609.344 AS RADIUS\n\nIt may not have been clear from the query, but only the 'zipcodes' table\nhas an 'earth_coords' column. Also, I think your refactoring means\nsomething different. My query expresses \"number of miles this pet is\nfrom 90210\", while I think the refactor expresses a distance between a\npet and another calculated value.\n\n> also, dont. re-refer to the zipcodes table in the join clause. you are\n> already joining to it:\n>> AND earth_box(\n>> (SELECT earth_coords from zipcodes WHERE zipcode = '90210') ,\n>> 10*1609.344) @ earth_coords\n>\n> becomes\n>\n> AND earth_box(zipcodes.earth_coords, 10*1609.344) ) @ pets.earth_coords\n\nI have the same question here as above-- I don't see how the new syntax\nincludes the logic of \"distance from the 90210 zipcode\".\n\n> * also, does pet_state have any other states than 'available' and '\n> not available'? if not, you should be using a boolean. if so, you can\n> consider a functional index to convert it to a booelan.\n\nYes, it has three states.\n\n> * if you always look up pets by species, we can explore composite\n> index columns on species, available (especially using the above\n> functional suggestion), etc. composite > partial (imo)\n\nWe nearly always search by species. Right now it's mostly dogs and some\ncats. I searched for references to composite index columns, and didn't\nfind much. Could you provide a direct reference to what you have in\nmind?\n\nAny other ideas appreciated!\n\n Mark\n",
"msg_date": "Mon, 12 Feb 2007 14:48:58 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "On 2/12/07, Mark Stosberg <[email protected]> wrote:\n> Merlin Moncure wrote:\n> >\n> >> Here the basic query I'm using:\n> >> SELECT\n> >> -- 1609.344 is a constant for \"meters per mile\"\n> >> cube_distance( (SELECT earth_coords from zipcodes WHERE zipcode =\n> >> '90210') , earth_coords)/1609.344\n> >> AS RADIUS\n> >> FROM pets\n> >> -- \"shelters_active\" is a view where \"shelter_state = 'active'\"\n> >> JOIN shelters_active as shelters USING (shelter_id)\n> >> -- The zipcode fields here are varchars\n> >> JOIN zipcodes ON (\n> >> shelters.postal_code_for_joining = zipcodes.zipcode )\n> >> -- search for just 'dogs'\n> >> WHERE species_id = 1\n> >> AND pet_state='available'\n> >> AND earth_box(\n> >> (SELECT earth_coords from zipcodes WHERE zipcode = '90210') ,\n> >> 10*1609.344\n> >> ) @ earth_coords\n> >> ORDER BY RADIUS;\n> >\n> It may not have been clear from the query, but only the 'zipcodes' table\n> has an 'earth_coords' column. Also, I think your refactoring means\n> something different. My query expresses \"number of miles this pet is\n> from 90210\", while I think the refactor expresses a distance between a\n> pet and another calculated value.\n\nmy mistake, i misunderstood what you were trying to do...can you try\nremoving the 'order by radius' and see if it helps? if not, we can try\nworking on this query some more. There is a better, faster way to do\nthis, I'm sure of it.\n\nmerlin\n",
"msg_date": "Tue, 13 Feb 2007 09:15:26 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "On 2/13/07, Merlin Moncure <[email protected]> wrote:\n> On 2/12/07, Mark Stosberg <[email protected]> wrote:\n> > Merlin Moncure wrote:\n> > >\n> > >> Here the basic query I'm using:\n> > >> SELECT\n> > >> -- 1609.344 is a constant for \"meters per mile\"\n> > >> cube_distance( (SELECT earth_coords from zipcodes WHERE zipcode =\n> > >> '90210') , earth_coords)/1609.344\n> > >> AS RADIUS\n> > >> FROM pets\n> > >> -- \"shelters_active\" is a view where \"shelter_state = 'active'\"\n> > >> JOIN shelters_active as shelters USING (shelter_id)\n> > >> -- The zipcode fields here are varchars\n> > >> JOIN zipcodes ON (\n> > >> shelters.postal_code_for_joining = zipcodes.zipcode )\n> > >> -- search for just 'dogs'\n> > >> WHERE species_id = 1\n> > >> AND pet_state='available'\n> > >> AND earth_box(\n> > >> (SELECT earth_coords from zipcodes WHERE zipcode = '90210') ,\n> > >> 10*1609.344\n> > >> ) @ earth_coords\n> > >> ORDER BY RADIUS;\n> > >\n> > It may not have been clear from the query, but only the 'zipcodes' table\n> > has an 'earth_coords' column. Also, I think your refactoring means\n> > something different. My query expresses \"number of miles this pet is\n> > from 90210\", while I think the refactor expresses a distance between a\n> > pet and another calculated value.\n>\n> my mistake, i misunderstood what you were trying to do...can you try\n> removing the 'order by radius' and see if it helps? if not, we can try\n> working on this query some more. There is a better, faster way to do\n> this, I'm sure of it.\n\ntry this:\n\nSELECT * FROM\n(\nSELECT\n earth_coords(q.earth_coords, s.earth_coords)/1609.344 as radius\n FROM pets\n JOIN shelters_active as shelters USING (shelter_id)\n JOIN zipcodes s ON shelters.postal_code_for_joining = zipcodes.zipcode\n JOIN zipcodes q ON q.zipcode = '90210'\n WHERE species_id = 1\n AND pet_state='available'\n AND earth_box(q.earth_coords, 10*1609.344) @ s.earth_coords\n) p order by radius\n\nmerlin\n",
"msg_date": "Tue, 13 Feb 2007 09:31:18 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "On Tue, Feb 13, 2007 at 09:31:18AM -0500, Merlin Moncure wrote:\n>\n> >my mistake, i misunderstood what you were trying to do...can you try\n> >removing the 'order by radius' and see if it helps? if not, we can try\n> >working on this query some more. There is a better, faster way to do\n> >this, I'm sure of it.\n\nMerlin,\n\nThanks again for your help. I did try without the \"order by\", and it\ndidn't make more difference. \n\n> try this:\n\nBased on your example, I was able to further refine the query to remove\nthe duplicate sub-selects that I had. However, this didn't seem to\nimprove performance. \n\nI'm still stuck with the same essential problem: On the development\nserver, where is less data (400 results returns vs 1300), the cube\nsearch is at least twice as fast, but on the production server, it is\nconsistently slower. \n\nSo, either the difference is one of scale, or I have some different\nconfiguration detail in production that is causing the issue. \n\nFor reference, here's two versions of the query. The first uses\nthe old geo_distance(), and the second one is the new cube query I'm \ntrying, inspired by your suggested refactoring.\n\nIt's not surprising to me that the queries run at different speeds\non different servers, but it /is/ surprising that their relative speeds\nreverse!\n\n\tMark\n\n-- Searching for all dogs within 10 miles of 90210 zipcode\nEXPLAIN ANALYZE\nSELECT\n zipcodes.lon_lat <@> center.lon_lat AS radius\n FROM (SELECT lon_lat FROM zipcodes WHERE zipcode = '90210') as center,\n pets\n JOIN shelters_active as shelters USING (shelter_id)\n JOIN zipcodes on (shelters.postal_code_for_joining = zipcodes.zipcode)\n WHERE species_id = 1\n AND pet_state='available'\n AND (zipcodes.lon_lat <@> center.lon_lat) < 10\n ORDER BY RADIUS;\n\n\nEXPLAIN ANALYZE\nSELECT\n cube_distance( center.earth_coords , zipcodes.earth_coords)/1609.344\n AS RADIUS\n FROM (SELECT\n earth_coords,\n earth_box( earth_coords , 10*1609.344 ) as center_box\n from zipcodes WHERE zipcode = '90210'\n ) AS center,\n pets\n JOIN shelters_active AS shelters USING (shelter_id)\n JOIN zipcodes ON ( shelters.postal_code_for_joining = zipcodes.zipcode )\n WHERE species_id = 1\n AND pet_state='available'\n AND center_box @ zipcodes.earth_coords\n ORDER BY RADIUS;\n\n",
"msg_date": "Tue, 13 Feb 2007 15:47:47 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
{
"msg_contents": "Mark Stosberg <[email protected]> writes:\n> For reference, here's two versions of the query. The first uses\n> the old geo_distance(), and the second one is the new cube query I'm \n> trying, inspired by your suggested refactoring.\n\nYou didn't show EXPLAIN ANALYZE output :-(\n\nLooking back in the thread, the last E.A. output I see is in your\nmessage of 2/12 11:11, and the striking thing there is that it seems all\nthe time is going into one indexscan:\n\n -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..562.50 rows=39571 width=0) (actual time=213.620..213.620 rows=195599 loops=82)\n Index Cond: ((pet_state)::text = 'available'::text)\n Total runtime: 17933.675 ms\n\n213.620 * 82 = 17516.840, so this step is all but 400msec of the run.\n\nThere are two things wrong here: first, that the estimated row count is\nonly 20% of actual; it should certainly not be that far off for such a\nsimple condition. I wonder if your vacuum/analyze procedures are\nactually working. Second, you mentioned somewhere along the line that\n'available' pets are about 10% of all the entries, which means that this\nindexscan is more than likely entirely counterproductive: it would be\ncheaper to ignore this index altogether.\n\nSuggestions:\n\n1. ANALYZE the table by hand, try the explain again and see if this\nrowcount estimate gets better. If so, you need to look into why your\nexisting procedures aren't keeping the stats up to date.\n\n2. If, with a more accurate rowcount estimate, the planner still wants\nto use this index, try discouraging it. Brute force would be to drop\nthe index. If there are less-common pet_states that are actually worth\nsearching for, maybe keep the index but change it to a partial index\nWHERE pet_state != 'available'.\n\nAlso, I don't see that you mentioned anywhere what PG version you are\nrunning, but if it's not the latest then an update might help. I recall\nhaving fixed a bug that made the planner too eager to AND on an index\nthat wouldn't actually help much ... which seems to fit this problem\ndescription pretty well.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Feb 2007 02:15:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cube operations slower than geo_distance() on production server "
},
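A minimal sketch of the partial-index idea suggested above, using the table and column names quoted in the thread (the new index name is an invented placeholder):

-- Hedged sketch, not the poster's actual DDL: replace the full index on
-- pet_state with a partial one that skips the very common 'available' value.
DROP INDEX pets_pet_state_idx;
CREATE INDEX pets_pet_state_rare_idx ON pets (pet_state)
    WHERE pet_state <> 'available';
ANALYZE pets;  -- refresh statistics so the planner sees the change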
{
"msg_contents": "On 2/14/07, Tom Lane <[email protected]> wrote:\n> There are two things wrong here: first, that the estimated row count is\n> only 20% of actual; it should certainly not be that far off for such a\n> simple condition. I wonder if your vacuum/analyze procedures are\n> actually working. Second, you mentioned somewhere along the line that\n> 'available' pets are about 10% of all the entries, which means that this\n> indexscan is more than likely entirely counterproductive: it would be\n> cheaper to ignore this index altogether.\n\nI think switching the index on pet_state to a composite on (pet_state,\nspecies_id) might help too.\n\nor even better:\n\ncreate function is_pet_available(text) returns bool as\n$$\n select $1='available';\n$$ language sql immutable;\n\ncreate index pets_available_species_idx on\npets(is_pet_available(pet_state), species_id);\n\nrefactor your query something similar to:\n\nSELECT * FROM\n(\nSELECT\n earth_coords(q.earth_coords, s.earth_coords)/1609.344 as radius\n FROM pets\n JOIN shelters_active as shelters USING (shelter_id)\n JOIN zipcodes s ON shelters.postal_code_for_joining = zipcodes.zipcode\n JOIN zipcodes q ON q.zipcode = '90210'\n WHERE\n is_pet_available(pet_state)\n AND species_id = 1\n AND earth_box(q.earth_coords, 10*1609.344) @ s.earth_coords\n) p order by radius\n\nmerlin\n",
"msg_date": "Wed, 14 Feb 2007 08:58:35 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cube operations slower than geo_distance() on production server"
},
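Two details in the sketch above look like small slips: earth_coords(...) is used as a function where cube_distance(...) is presumably intended, and the join condition references zipcodes.zipcode although that table is aliased as s. A corrected version, under those assumptions:

SELECT * FROM
(
 SELECT
   cube_distance(q.earth_coords, s.earth_coords)/1609.344 as radius
   FROM pets
   JOIN shelters_active as shelters USING (shelter_id)
   JOIN zipcodes s ON shelters.postal_code_for_joining = s.zipcode
   JOIN zipcodes q ON q.zipcode = '90210'
   WHERE
     is_pet_available(pet_state)
     AND species_id = 1
     AND earth_box(q.earth_coords, 10*1609.344) @ s.earth_coords
) p order by radius;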
{
"msg_contents": "Merlin Moncure wrote:\n> On 2/14/07, Tom Lane <[email protected]> wrote:\n>> There are two things wrong here: first, that the estimated row\n>> count is only 20% of actual; it should certainly not be that far\n>> off for such a simple condition. I wonder if your vacuum/analyze\n>> procedures are actually working. Second, you mentioned somewhere\n>> along the line that 'available' pets are about 10% of all the\n>> entries, which means that this indexscan is more than likely\n>> entirely counterproductive: it would be cheaper to ignore this\n>> index altogether.\n\nTom,\n\nThanks for the generosity of your time. We are using 8.1.3 currently. I \nhave read there are some performance improvements in 8.2, but we have \nnot started evaluating that yet.\n\nYour suggestion about the pet_state index was right on. I tried \n\"Analyze\" on it, but still got the same bad estimate. However, I then \nused \"reindex\" on that index, and that fixed the estimate accuracy, \nwhich made the query run faster! The cube search now benchmarks faster \nthan the old search in production, taking about 2/3s of the time of the \nold one.\n\nAny ideas why the manual REINDEX did something that \"analyze\" didn't? It \nmakes me wonder if there is other tuning like this to do.\n\nAttached is the EA output from the most recent run, after the \"re-index\".\n\n> I think switching the index on pet_state to a composite on (pet_state,\n> species_id) might help too.\n> \n> or even better:\n> \n> create function is_pet_available(text) returns bool as\n> $$\n> select $1='available';\n> $$ language sql immutable;\n> \n> create index pets_available_species_idx on\n> pets(is_pet_available(pet_state), species_id);\n\nMerlin,\n\nThanks for this suggestion. It is not an approach I had used before, and \nI was interested to try it. However, the new index didn't get chosen. \n(Perhaps I would need to drop the old one?) However, Tom's suggestions \ndid help. I'll follow up on that in just a moment.\n\n> \n> refactor your query something similar to:\n> \n> SELECT * FROM\n> (\n> SELECT\n> earth_coords(q.earth_coords, s.earth_coords)/1609.344 as radius\n> FROM pets\n> JOIN shelters_active as shelters USING (shelter_id)\n> JOIN zipcodes s ON shelters.postal_code_for_joining = zipcodes.zipcode\n> JOIN zipcodes q ON q.zipcode = '90210'\n> WHERE\n> is_pet_available(pet_state)\n> AND species_id = 1\n> AND earth_box(q.earth_coords, 10*1609.344) @ s.earth_coords\n> ) p order by radius\n> \n> merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend",
"msg_date": "Wed, 14 Feb 2007 11:28:38 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "reindex vs 'analyze' (was: Re: cube operations slower than\n\tgeo_distance() on production server)"
},
{
"msg_contents": "Mark Stosberg <[email protected]> writes:\n> Your suggestion about the pet_state index was right on. I tried \n> \"Analyze\" on it, but still got the same bad estimate. However, I then \n> used \"reindex\" on that index, and that fixed the estimate accuracy, \n> which made the query run faster!\n\nNo, the estimate is about the same, and so is the plan. The data seems\nto have changed though --- on Monday you had\n\n -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..562.50 rows=39571 width=0) (actual time=213.620..213.620 rows=195599 loops=82)\n Index Cond: ((pet_state)::text = 'available'::text)\n \nand now it's\n\n -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..285.02 rows=41149 width=0) (actual time=22.043..22.043 rows=40397 loops=82)\n Index Cond: ((pet_state)::text = 'available'::text)\n\nDon't tell me you got 155000 pets adopted out yesterday ... what\nhappened here?\n\n[ thinks... ] One possibility is that those were dead but\nnot-yet-vacuumed rows. What's your vacuuming policy on this table?\n(A bitmap-index-scan plan node will count dead rows as returned,\nunlike all other plan node types, since we haven't actually visited\nthe heap yet...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Feb 2007 13:07:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex vs 'analyze' (was: Re: cube operations slower than\n\tgeo_distance() on production server)"
},
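One quick way to test the dead-rows theory above on an 8.1-era server is to vacuum the table verbosely and read the counts it reports for the heap and each index (table name as used in the thread):

-- VACUUM VERBOSE prints how many dead row versions were removed and how
-- many could not yet be removed; ANALYZE refreshes the statistics as well.
VACUUM VERBOSE ANALYZE pets;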
{
"msg_contents": "On Wed, Feb 14, 2007 at 01:07:23PM -0500, Tom Lane wrote:\n> Mark Stosberg <[email protected]> writes:\n> > Your suggestion about the pet_state index was right on. I tried \n> > \"Analyze\" on it, but still got the same bad estimate. However, I then \n> > used \"reindex\" on that index, and that fixed the estimate accuracy, \n> > which made the query run faster!\n> \n> No, the estimate is about the same, and so is the plan. The data seems\n> to have changed though --- on Monday you had\n> \n> -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..562.50 rows=39571 width=0) (actual time=213.620..213.620 rows=195599 loops=82)\n> Index Cond: ((pet_state)::text = 'available'::text)\n> \n> and now it's\n> \n> -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..285.02 rows=41149 width=0) (actual time=22.043..22.043 rows=40397 loops=82)\n> Index Cond: ((pet_state)::text = 'available'::text)\n> \n> Don't tell me you got 155000 pets adopted out yesterday ... what\n> happened here?\n\nThat seemed be the difference that the \"reindex\" made. The number of\nrows in the table and the number marked \"available\" is roughly\nunchanged.\n\nselect count(*) from pets;\n--------\n304951\n (1 row)\n\nselect count(*) from pets where pet_state = 'available';\n-------\n39857\n\nIt appears just about 400 were marked as \"adopted\" yesterday. \n\n> [ thinks... ] One possibility is that those were dead but\n> not-yet-vacuumed rows. What's your vacuuming policy on this table?\n\nIt gets vacuum analyzed ery two hours throughout most of the day. Once\nNightly we vacuum analyze everything, but most of the time we just do\nthis table. \n\n> (A bitmap-index-scan plan node will count dead rows as returned,\n> unlike all other plan node types, since we haven't actually visited\n> the heap yet...)\n\nThanks again for your help, Tom.\n\n\tMark\n\n--\n . . . . . . . . . . . . . . . . . . . . . . . . . . . \n Mark Stosberg Principal Developer \n [email protected] Summersault, LLC \n 765-939-9301 ext 202 database driven websites\n . . . . . http://www.summersault.com/ . . . . . . . .\n",
"msg_date": "Wed, 14 Feb 2007 14:05:02 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: reindex vs 'analyze'"
},
{
"msg_contents": "Tom Lane wrote:\n> Mark Stosberg <[email protected]> writes:\n>> Your suggestion about the pet_state index was right on. I tried \n>> \"Analyze\" on it, but still got the same bad estimate. However, I then \n>> used \"reindex\" on that index, and that fixed the estimate accuracy, \n>> which made the query run faster!\n> \n> No, the estimate is about the same, and so is the plan. The data seems\n> to have changed though --- on Monday you had\n> \n> -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..562.50 rows=39571 width=0) (actual time=213.620..213.620 rows=195599 loops=82)\n> Index Cond: ((pet_state)::text = 'available'::text)\n> \n> and now it's\n> \n> -> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..285.02 rows=41149 width=0) (actual time=22.043..22.043 rows=40397 loops=82)\n> Index Cond: ((pet_state)::text = 'available'::text)\n> \n> Don't tell me you got 155000 pets adopted out yesterday ... what\n> happened here?\n> \n> [ thinks... ] One possibility is that those were dead but\n> not-yet-vacuumed rows. What's your vacuuming policy on this table?\n> (A bitmap-index-scan plan node will count dead rows as returned,\n> unlike all other plan node types, since we haven't actually visited\n> the heap yet...)\n\nToday I noticed a combination of related mistakes here.\n\n1. The Vacuum commands were being logged to a file that didn't exist.\nI'm mot sure if this prevented them being run. I had copied the cron\nentry for another machine, but neglected to create /var/log/pgsql:\n\nvacuumdb -z --table pets -d saveapet >> /var/log/pgsql/vacuum.log 2>&1\n\n###\n\nHowever, I again noticed that the row counts were horribly off on the\n'pet_state' index, and again used REINDEX to fix it. (Examples below).\nHowever, if the \"VACUUM ANALYZE\" wasn't actually run, that does seem\nlike it could have been related.\n\nI'll have to see how things are tomorrow after a full round of database\nvacuuming.\n\n Mark\n\n\n-> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..337.29\nrows=39226 width=0) (actual time=77.158. .77.158 rows=144956\nloops=81)\n Index Cond: ((pet_state)::text =\n'available'::text)\n Total runtime: 8327.261 ms\n\n\n-> Bitmap Index Scan on pets_pet_state_idx (cost=0.00..271.71\nrows=39347 width=0) (actual time=15.466..15.466 rows=40109 loops=81)\n Index Cond: ((pet_state)::text =\n'available'::text)\n Total runtime: 1404.124 ms\n",
"msg_date": "Fri, 16 Feb 2007 14:25:21 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: reindex vs 'analyze'"
}
] |
[
{
"msg_contents": "[email protected] wrote:\n> I have this function in my C#.NET app that goes out to find the\n> business units for each event and returns a string (for my report).\n> I'm finding that for larger reports it takes too long and times out.\n> \n> Does anyone know how I can speed this process up? Is this code very\n> tight or could it be cleaner? thanks in advance for your help, this\n> is a big issue used on several reports.\n\nPerhaps try \"EXPLAIN ANALYZE\" on this query, given a valid event ID:\n\n SELECT Distinct Companies.Name\n FROM Companies INNER JOIN\n ExpenseAllocations ON Companies.ID =\nExpenseAllocations.BusinessUnitID\n WHERE (ExpenseAllocations.EventID =\n@EventID)\n ORDER BY Companies.Name DESC\n\n\n#######\nDo the columns used in the join and WHERE clause have indexes?\n\nIt's also possible the optimization needs to happen at a different level. Perhaps you are frequently\nlooking up the same results in a large report, or throughout the day.\n\nIf this part doesn't need to be up-to-second fresh, perhaps your application could\ncache some of the results of this function, instead of repeatedly asking the database\nto recompute it.\n\n Mark\n",
"msg_date": "Fri, 09 Feb 2007 16:04:31 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone make this code tighter? Too slow, Please help!"
}
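If the EXPLAIN ANALYZE shows sequential scans on either table, a first step is to index the join and filter columns named in the query above. The statements below are only a sketch; the real column types and any existing indexes are not shown in the thread:

-- Hypothetical supporting indexes for the report query quoted above.
CREATE INDEX expenseallocations_event_bu_idx
    ON ExpenseAllocations (EventID, BusinessUnitID);
-- Companies.ID is very likely the primary key already; if not:
CREATE INDEX companies_id_idx ON Companies (ID);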
] |
[
{
"msg_contents": "All,\n I am looking to automate analyze table in my application.\n\nI have some insert only tables in my application which I need to analyze as data grows.\nSince the inserts are application controlled, I can choose to run analyze when I determine the\ndata has grown more than x% since last analyze.\n\nHowever since users can truncate the tables too, I need to be able to tell the numbers of rows\nin the table as perceived by the optimizer. \n\nI could not decipher a good way of telling the number of table rows from pg_stats/pg_statistics.\n\nDoes someone know of a way of telling what the optimizer believes the number of rows are ?\n\nThe tables in question have multi-column primary keys.\n\nRegards,\n\nVirag\n\n\n\n\n\n\nAll,\n I am looking to automate analyze \ntable in my application.\n \nI have some insert only tables in my application \nwhich I need to analyze as data grows.\nSince the inserts are application controlled, I can \nchoose to run analyze when I determine the\ndata has grown more than x% since last \nanalyze.\n \nHowever since users can truncate the tables too, I \nneed to be able to tell the numbers of rows\nin the table as perceived by the optimizer. \n\n \nI could not decipher a good way of telling the \nnumber of table rows from pg_stats/pg_statistics.\n \nDoes someone know of a way of telling what the \noptimizer believes the number of rows are ?\n \nThe tables in question have multi-column primary \nkeys.\n \nRegards,\n \nVirag",
"msg_date": "Fri, 9 Feb 2007 16:00:18 -0800",
"msg_from": "\"Virag Saksena\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is there an equivalent for Oracle's user_tables.num_rows"
},
{
"msg_contents": "\"Virag Saksena\" <[email protected]> writes:\n> Does someone know of a way of telling what the optimizer believes the =\n> number of rows are ?\n\nYou're looking in the wrong place; see pg_class.relpages and reltuples.\n\nBut note that in recent releases neither one is taken as gospel.\nInstead the planner uses the current physical table size in place of\nrelpages, and scales reltuples correspondingly. So neither steady\ngrowth nor truncation create a need for re-ANALYZE; at least not as long\nas the other statistics don't change too much.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Feb 2007 19:45:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there an equivalent for Oracle's user_tables.num_rows "
},
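To read those planner estimates directly, pg_class can be queried; the table name below is a placeholder:

-- relpages and reltuples are the stored estimates; recent releases rescale
-- reltuples against the table's current physical size at plan time.
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname = 'my_table';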
{
"msg_contents": "On Fri, 2007-02-09 at 19:45 -0500, Tom Lane wrote:\n> \"Virag Saksena\" <[email protected]> writes:\n> > Does someone know of a way of telling what the optimizer believes the =\n> > number of rows are ?\n> \n> You're looking in the wrong place; see pg_class.relpages and reltuples.\n> \n> But note that in recent releases neither one is taken as gospel.\n> Instead the planner uses the current physical table size in place of\n> relpages, and scales reltuples correspondingly. So neither steady\n> growth nor truncation create a need for re-ANALYZE; at least not as long\n> as the other statistics don't change too much.\n\nThat does work very well for Production systems, but not for\nDevelopment.\n\nIn 8.4, I'll be looking for a way to export Production system stats to a\nDev server that *acts* as if it really had 10^lots rows in it. That will\nalso help us support the optimiser when it is acting in extreme\nconditions that are not sensibly reproducible in reality by hackers. It\nwill also provide us with what-if capability for system expansion.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Feb 2007 01:06:37 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there an equivalent for Oracle'suser_tables.num_rows"
},
{
"msg_contents": "Thanks, that is exactly what I was looking for\n\nI know that number of rows may not be the best indicator, but it is a \nheuristic that can be tracked\neasily, causing analyze for the first x insert events, and then only doing \nit only when an insert event causes\ntotal rows to exceed y % of the optimizer perceived rows\n\nOther more accurate heuristics like relative distribution of columns would \nbe harder to track in the application,\nand I'd rather let the database do that by issuing the analyze\n\nRegards,\n\nVirag\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Virag Saksena\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, February 09, 2007 4:45 PM\nSubject: Re: [PERFORM] Is there an equivalent for Oracle's \nuser_tables.num_rows\n\n\n> \"Virag Saksena\" <[email protected]> writes:\n>> Does someone know of a way of telling what the optimizer believes the =\n>> number of rows are ?\n>\n> You're looking in the wrong place; see pg_class.relpages and reltuples.\n>\n> But note that in recent releases neither one is taken as gospel.\n> Instead the planner uses the current physical table size in place of\n> relpages, and scales reltuples correspondingly. So neither steady\n> growth nor truncation create a need for re-ANALYZE; at least not as long\n> as the other statistics don't change too much.\n>\n> regards, tom lane\n> \n\n",
"msg_date": "Fri, 9 Feb 2007 17:08:47 -0800",
"msg_from": "\"Virag Saksena\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is there an equivalent for Oracle's user_tables.num_rows "
}
] |
[
{
"msg_contents": "\nThere are 1.9M rows in ts_defects and indexes on b.ts_id (primary key)\nd.ts_biz_event_id and d.ts_occur_date. Both queries below return 0 \nrows. The 1st runs fast and the 2nd > 400x slower. The 2nd query\ndiffers from the 1st only by the addition of \"limit 1\".\n\nWhy the big difference in performance?\n\nThanks,\nBrian\n\n[bcox@athena jsp]$ time PGPASSWORD=**** psql -U admin -d cemdb -h \n192.168.1.30 -c 'select * from ts_defects d join ts_biz_events b on \nb.ts_id = d.ts_biz_event_id where b.ts_status=3 order by d.ts_occur_date \ndesc;'\n(column list deleted)\n-------+--------------+--------------+---------------+---------------------------+----------------+----------------+------------+------------------------+------------------+-----------------+---------------+------------------+---------------+-----------------+-------------------+---------------------+---------------------+---------------------+--------------------+----------------+--------------------+----------------------+--------------------+----------------------+-----------------------+----------------+----------------------+---------------+-----------------+------------------+--------------+----------------+-------+--------------+---------------+---------------------------+------------------+---------+--------------------+---------------+-----------------+------------------+---------------+----------------------+---------------------+--------------------+-----------+---------------------+----------+---------------+--------------+------------------+-------------+--------\n-----+--------------+--------------+----------------\n(0 rows)\n\n\nreal 0m0.022s\nuser 0m0.003s\nsys 0m0.003s\n\n\n[bcox@athena jsp]$ time PGPASSWORD=**** psql -U admin -d cemdb -h \n192.168.1.30 -c 'select * from ts_defects d join ts_biz_events b on \nb.ts_id = d.ts_biz_event_id where b.ts_status=3 order by d.ts_occur_date \ndesc limit 1;'\n(column list deleted)\n-------+--------------+--------------+---------------+---------------------------+----------------+----------------+------------+------------------------+------------------+-----------------+---------------+------------------+---------------+-----------------+-------------------+---------------------+---------------------+---------------------+--------------------+----------------+--------------------+----------------------+--------------------+----------------------+-----------------------+----------------+----------------------+---------------+-----------------+------------------+--------------+----------------+-------+--------------+---------------+---------------------------+------------------+---------+--------------------+---------------+-----------------+------------------+---------------+----------------------+---------------------+--------------------+-----------+---------------------+----------+---------------+--------------+------------------+-------------+--------\n-----+--------------+--------------+----------------\n(0 rows)\n\n\nreal 0m9.410s\nuser 0m0.005s\nsys 0m0.002s\n",
"msg_date": "Mon, 12 Feb 2007 13:36:45 -0800",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "limit + order by is slow if no rows in result set"
},
{
"msg_contents": "Brian Cox wrote:\n> \n> There are 1.9M rows in ts_defects and indexes on b.ts_id (primary key)\n> d.ts_biz_event_id and d.ts_occur_date. Both queries below return 0 \n> rows. The 1st runs fast and the 2nd > 400x slower. The 2nd query\n> differs from the 1st only by the addition of \"limit 1\".\n> \n> Why the big difference in performance?\n\nPlease run EXPLAIN ANALYZE on both queries, and send back the results. \nAlso, what indexes are there on the tables involved?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Mon, 12 Feb 2007 23:26:04 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit + order by is slow if no rows in result set"
}
] |
[
{
"msg_contents": "Hi Heikki,\n\nThanks for your response.\n\n> Please run EXPLAIN ANALYZE on both queries, and send back the results.\n\n[bcox@athena jsp]$ PGPASSWORD=quality psql -U admin -d cemdb -h \n192.168.1.30 -c 'explain analyze select * from ts_defects d join \nts_biz_events b on b.ts_id = d.ts_biz_event_id where b.ts_status=3 order \nby d.ts_occur_date desc;'\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=160400.01..160646.91 rows=98762 width=2715) (actual \ntime=0.303..0.303 rows=0 loops=1)\n Sort Key: d.ts_occur_date\n -> Hash Join (cost=33.20..82567.14 rows=98762 width=2715) (actual \ntime=0.218..0.218 rows=0 loops=1)\n Hash Cond: (\"outer\".ts_biz_event_id = \"inner\".ts_id)\n -> Seq Scan on ts_defects d (cost=0.00..71882.88 \nrows=1932688 width=1545) (actual time=0.022..0.022 rows=1 loops=1)\n -> Hash (cost=33.04..33.04 rows=65 width=1170) (actual \ntime=0.135..0.135 rows=0 loops=1)\n -> Bitmap Heap Scan on ts_biz_events b \n(cost=2.23..33.04 rows=65 width=1170) (actual time=0.132..0.132 rows=0 \nloops=1)\n Recheck Cond: (ts_status = 3)\n -> Bitmap Index Scan on ts_biz_events_statusindex \n (cost=0.00..2.23 rows=65 width=0) (actual time=0.054..0.054 rows=61 \nloops=1)\n Index Cond: (ts_status = 3)\n Total runtime: 0.586 ms\n(11 rows)\n\n[bcox@athena jsp]$ PGPASSWORD=quality psql -U admin -d cemdb -h \n192.168.1.30 -c 'explain analyze select * from ts_defects d join \nts_biz_events b on b.ts_id = d.ts_biz_event_id where b.ts_status=3 order \nby d.ts_occur_date desc limit 1;'\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..87.37 rows=1 width=2715) (actual \ntime=17999.482..17999.482 rows=0 loops=1)\n -> Nested Loop (cost=0.00..8628543.77 rows=98762 width=2715) \n(actual time=17999.476..17999.476 rows=0 loops=1)\n -> Index Scan Backward using ts_defects_dateindex on \nts_defects d (cost=0.00..227675.97 rows=1932688 width=1545) (actual \ntime=0.047..3814.923 rows=1932303 loops=1)\n -> Index Scan using ts_biz_events_pkey on ts_biz_events b \n(cost=0.00..4.33 rows=1 width=1170) (actual time=0.005..0.005 rows=0 \nloops=1932303)\n Index Cond: (b.ts_id = \"outer\".ts_biz_event_id)\n Filter: (ts_status = 3)\n Total runtime: 17999.751 ms\n(7 rows)\n\n> Also, what indexes are there on the tables involved?\n\nI tried to mention the relevant indexes in my original posting, but \nomitted one; here's a list of all indexes:\n\nts_defects: ts_id, ts_occur_date, ts_defect_def_id, ts_biz_event_id, \nts_trancomp_id, ts_transet_incarnation_id, ts_transet_id, \nts_tranunit_id, ts_user_incarnation_id, ts_user_id\n\nts_biz_events: ts_id, ts_defect_def_id, ts_status\n\nThanks,\nBrian\n",
"msg_date": "Mon, 12 Feb 2007 16:24:07 -0800",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: limit + order by is slow if no rows in result set"
},
{
"msg_contents": "Brian Cox <[email protected]> writes:\n>> Please run EXPLAIN ANALYZE on both queries, and send back the results.\n> [ results... ]\n\nThe reason the hash plan is fairly fast is that the hash join code has a\nspecial hack: if it reads the inner relation and finds it contains no\nrows, it knows there can be no join result rows, so it can fall out\nwithout reading the outer relation at all. This saves it from scanning \nthe large ts_defects table. (If you look close you'll see that it\nactually reads just the first row from ts_defects; this is because the\ninner relation isn't read until after we know the outer is nonempty,\nso as to try to win for the other case of empty outer and nonempty\ninner.)\n\nThe reason the nestloop/limit plan is not fast is that it has to scan\nthe inner relation (ts_biz_events) for each row of ts_defects, and there\nare lots of them. Even though each inner scan is just a fast index\nprobe, it adds up.\n\nThe reason the planner goes for the nestloop/limit plan is that it's\nexpecting that about 5% (98762/1932688) of the ts_defects rows will\nhave a match in ts_biz_events, and so it figures it'll only have to\nprobe ts_biz_events about 20 times before producing an output row,\nand the Limit only wants one row. So this looks a lot cheaper than\nthe hash plan --- especially since the latter is being costed without\nany assumption that the zero-inner-rows situation applies.\n\nThe bottom line is that the plans are being chosen on \"typical\" rather\nthan corner-case assumptions, and zero matching rows is a corner case\nthat happens to work real well for the hash plan and not well at all for\nthe nestloop plan. I'm not sure what we can do about that without\nmaking the performance worse for the case of not-quite-zero matching\nrows.\n\nYou might be able to get a better result if you increased the statistics\ntarget for ts_status --- it looks like the planner thinks there are many\nmore ts_status = 3 rows than there really are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Feb 2007 22:36:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit + order by is slow if no rows in result set "
}
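A sketch of the statistics-target suggestion at the end of that message; the value 200 is an arbitrary example (the default target in that era was 10):

ALTER TABLE ts_biz_events ALTER COLUMN ts_status SET STATISTICS 200;
ANALYZE ts_biz_events;  -- rebuild pg_statistic using the larger sample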
] |
[
{
"msg_contents": "Hi,\n\nI have used postgresql some years now, but only small databases and only \none database per instance and one user per database.\n\nNow we have a server reserved only for postgresql, and I'm wondering if it \nis better to set up:\n- only one instance and many databases or\n- many instances and only one database/instance or\n- one instance, one database and many users\n\nserver will have 8G memory and 2 processors.\n\nEarlier we have had problems with difficult queries, some query can take \n100% cpu and all other processes have slowed down.\n\nI have used oracle many years and in oracle it's better have one \ninstance and many users, but oracle can handle many difficult queries in \nsame time. no process (=query) can slow other queries as much as in postgesql.\n\nthere is no need think safety, maintenance, ... only pure performance!\n\nis one instance capable to use that 8G of memory? and share it with \ndifferent databases/users as needed?\nor will one big and difficult query take all memory and slow down whole \nserver?\n\nif there is 2 instances one query can't take all memory, but downside is \nthat if instance1 is inactive and instance2 is active, there will be much \nunused memory (reverved for instance1) and that can produce disk io when \ninstance2 reads lots of data and sorts it.\n\nhow you have set up your postgresql server?\n\nIsmo\n",
"msg_date": "Tue, 13 Feb 2007 10:55:28 +0200 (EET)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "many instances or many databases or many users?"
},
{
"msg_contents": "[email protected] wrote:\n> Now we have a server reserved only for postgresql, and I'm wondering if it \n> is better to set up:\n> - only one instance and many databases or\n> - many instances and only one database/instance or\n> - one instance, one database and many users\n\nIt depends. One instance should give you best overall throughput, \nbecause the OS can maximize the use of resources across all users.\n\nThere shouldn't be any difference between having one instance with many \ndatabases and one database and many users.\n\n> server will have 8G memory and 2 processors.\n> \n> Earlier we have had problems with difficult queries, some query can take \n> 100% cpu and all other processes have slowed down.\n\nHow much data do you have? If it all fits in memory, it's not going to \nmake much difference if you have one or more instances. If not, you \nmight be better off with many instances dividing the memory between \nthem, giving some level of fairness in the memory allocation.\n\nUnfortunately there's no way to stop one query from using 100% CPU \n(though on a 2 CPU server, it's only going to saturate 1 CPU). If you \nhave difficult queries like that, I'd suggest that you take a look at \nthe access plans to check if they could benefit from adding indexes or \nrewritten in a more efficient way.\n\n> is one instance capable to use that 8G of memory? and share it with \n> different databases/users as needed?\n> or will one big and difficult query take all memory and slow down whole \n> server?\n\nOne instance can use all of the 8G of memory. You should set your \nshared_buffers to maybe 1-2G. People have different opinions on what \nexactly is the best value; I'd suggest that you try with different \nvalues to see what gives you the best performance in your application.\n\n> if there is 2 instances one query can't take all memory, but downside is \n> that if instance1 is inactive and instance2 is active, there will be much \n> unused memory (reverved for instance1) and that can produce disk io when \n> instance2 reads lots of data and sorts it.\n\nYep.\n\n> how you have set up your postgresql server?\n\nI'd say it's more a question of isolation and administration than \nperformance. For example, do you want to be able to do filesystem-level \nbackups and restores one database at a time? Do you need to shut down \none database while keeping the rest of them running?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 13 Feb 2007 10:07:50 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: many instances or many databases or many users?"
}
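For reference, the memory settings discussed above can be inspected from SQL; the targets in the comments are only rough assumptions for an 8G machine, and changing shared_buffers on 8.x means editing postgresql.conf and restarting the server:

SHOW shared_buffers;        -- e.g. something in the 1-2G range, as suggested above
SHOW work_mem;              -- per-sort/per-hash memory used by each backend
SHOW effective_cache_size;  -- planner hint, often set to most of the OS cache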
] |
[
{
"msg_contents": "Hi all,\n\nI'm currently working on optimizing a couple of queries. While\nstudying the EXPLAIN ANALYZE output of a query, I found this Bitmap\nHeap Scan node:\n\n-> Bitmap Heap Scan on lieu l (cost=12.46..63.98 rows=53 width=94)\n(actual time=35.569..97.166 rows=78 loops=1)\n Recheck Cond: ('(4190964.86112204, 170209.656489245,\n4801644.52951672),(4194464.86111106, 173709.656478266,\n4805144.52950574)'::cube @ (ll_to_earth((wgslat)::double precision,\n(wgslon)::double precision))::cube)\n Filter: (parking AND (numlieu <> 0))\n -> BitmapAnd (cost=12.46..12.46 rows=26 width=0) (actual\ntime=32.902..32.902 rows=0 loops=1)\n -> Bitmap Index Scan on idx_lieu_earth (cost=0.00..3.38\nrows=106 width=0) (actual time=30.221..30.221 rows=5864 loops=1)\n Index Cond: ('(4190964.86112204, 170209.656489245,\n4801644.52951672),(4194464.86111106, 173709.656478266,\n4805144.52950574)'::cube @ (ll_to_earth((wgslat)::double precision,\n(wgslon)::double precision))::cube)\n -> Bitmap Index Scan on idx_lieu_parking (cost=0.00..8.83\nrows=26404 width=0) (actual time=0.839..0.839 rows=1095 loops=1)\n Index Cond: (parking = true)\n\nWhat surprises me is that \"parking\" is in the filter and not in the\nRecheck Cond whereas it's part of the second Bitmap Index Scan of the\nBitmap And node.\nAFAIK, BitmapAnd builds a bitmap of the pages returned by the two\nBitmap Index Scans so I supposed it should append both Index Cond in\nthe Recheck Cond.\n\nIs there a reason why the second Index Cond in the filter? Does it\nmake a difference in terms of performance (I suppose no but I'd like\nto have a confirmation)?\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Tue, 13 Feb 2007 17:32:58 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about Bitmap Heap Scan/BitmapAnd"
},
{
"msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> What surprises me is that \"parking\" is in the filter and not in the\n> Recheck Cond whereas it's part of the second Bitmap Index Scan of the\n> Bitmap And node.\n\nThat's probably because of this:\n\n /*\n * When dealing with special or lossy operators, we will at this point\n * have duplicate clauses in qpqual and bitmapqualorig. We may as well\n * drop 'em from bitmapqualorig, since there's no point in making the\n * tests twice.\n */\n bitmapqualorig = list_difference_ptr(bitmapqualorig, qpqual);\n\nWhat's not immediately clear is why the condition was in both lists to\nstart with. Perhaps idx_lieu_parking is a partial index with this as\nits WHERE condition?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Feb 2007 11:49:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd "
},
{
"msg_contents": "On 2/13/07, Tom Lane <[email protected]> wrote:\n> bitmapqualorig = list_difference_ptr(bitmapqualorig, qpqual);\n>\n> What's not immediately clear is why the condition was in both lists to\n> start with. Perhaps idx_lieu_parking is a partial index with this as\n> its WHERE condition?\n\nYes, it is: \"idx_lieu_parking\" btree (parking) WHERE parking = true .\nSorry for not pointing it immediatly.\nIf not, the index is not used at all (there are very few lines in lieu\nwith parking=true).\n\nSo the basic explanation is that it's in both lists due to the partial\nindex and only qpqual keeps the condition? I would have expected the\nopposite but it doesn't change anything I suppose?\n\nThanks for your answer.\n\n--\nGuillaume\n",
"msg_date": "Tue, 13 Feb 2007 18:17:41 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd"
},
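For readers following along, the partial index quoted above corresponds to DDL along these lines (reconstructed from the definition shown in the message, so only a sketch):

-- Only rows with parking = true are indexed, which is why the condition
-- appears both in the bitmap scan and in the recheck/filter handling.
CREATE INDEX idx_lieu_parking ON lieu (parking) WHERE parking = true;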
{
"msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> So the basic explanation is that it's in both lists due to the partial\n> index and only qpqual keeps the condition? I would have expected the\n> opposite but it doesn't change anything I suppose?\n\nIt gets the right answer, yes. I'm not sure if we could safely put the\ncondition into the recheck instead of the filter. The particular code\nI showed you has to go the direction it does, because a condition in the\nfilter has to be checked even if the bitmap is not lossy. I seem to\nrecall concluding that we had to recheck partial-index conditions even\nif the bitmap is not lossy, but I can't reconstruct my reasoning at the\nmoment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Feb 2007 12:51:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd "
},
{
"msg_contents": "Tom,\n\nOn 2/13/07, Tom Lane <[email protected]> wrote:\n> It gets the right answer, yes. I'm not sure if we could safely put the\n> condition into the recheck instead of the filter. The particular code\n> I showed you has to go the direction it does, because a condition in the\n> filter has to be checked even if the bitmap is not lossy. I seem to\n> recall concluding that we had to recheck partial-index conditions even\n> if the bitmap is not lossy, but I can't reconstruct my reasoning at the\n> moment.\n\nI'm still working on my proximity query, testing PostGIS now. I\nnoticed an issue with a gist index on a point which seems related to\nmy previous question.\n\nI have the following in my plan:\n-> Bitmap Heap Scan on lieu l (cost=13.37..1555.69 rows=844\nwidth=118) (actual time=3.672..39.497 rows=1509 loops=1)\n Filter: (((dfinvalidlieu IS NULL) OR (dfinvalidlieu >= now()))\nAND (wgslat IS NOT NULL) AND (wgslon IS NOT NULL) AND (wgslat <>\n41.89103400) AND (wgslon <> 12.49244400) AND (earthpoint &&\n'0103000020777F0000010000000500000000000040019B334100000020D1D8514100000040019B334100000040ADDE51410000006071B2334100000040ADDE51410000006071B2334100000020D1D8514100000040019B334100000020D1D85141'::geometry)\nAND (numlieu <> 49187))\n -> Bitmap Index Scan on idx_lieu_earthpoint (cost=0.00..13.37\nrows=1249 width=0) (actual time=2.844..2.844 rows=1510 loops=1)\n Index Cond: (earthpoint &&\n'0103000020777F0000010000000500000000000040019B334100000020D1D8514100000040019B334100000040ADDE51410000006071B2334100000040ADDE51410000006071B2334100000020D1D8514100000040019B334100000020D1D85141'::geometry)\n\nIs it normal I have no recheck cond and the index cond of Bitmap Index\nScan is in the filter? Is it also a consequence of the code you\npointed?\n\nThe index was created with:\ncreate index idx_lieu_earthpoint on lieu using gist(earthpoint\ngist_geometry_ops);\n\n--\nGuillaume\n",
"msg_date": "Thu, 15 Feb 2007 17:05:25 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd"
},
{
"msg_contents": "Guillaume Smet escribi�:\n\n> I'm still working on my proximity query, testing PostGIS now. I\n> noticed an issue with a gist index on a point which seems related to\n> my previous question.\n> \n> I have the following in my plan:\n> -> Bitmap Heap Scan on lieu l (cost=13.37..1555.69 rows=844\n> width=118) (actual time=3.672..39.497 rows=1509 loops=1)\n> Filter: (((dfinvalidlieu IS NULL) OR (dfinvalidlieu >= now()))\n> AND (wgslat IS NOT NULL) AND (wgslon IS NOT NULL) AND (wgslat <>\n> 41.89103400) AND (wgslon <> 12.49244400) AND (earthpoint &&\n> '0103000020777F0000010000000500000000000040019B334100000020D1D8514100000040019B334100000040ADDE51410000006071B2334100000040ADDE51410000006071B2334100000020D1D8514100000040019B334100000020D1D85141'::geometry)\n> AND (numlieu <> 49187))\n> -> Bitmap Index Scan on idx_lieu_earthpoint (cost=0.00..13.37\n> rows=1249 width=0) (actual time=2.844..2.844 rows=1510 loops=1)\n> Index Cond: (earthpoint &&\n> '0103000020777F0000010000000500000000000040019B334100000020D1D8514100000040019B334100000040ADDE51410000006071B2334100000040ADDE51410000006071B2334100000020D1D8514100000040019B334100000020D1D85141'::geometry)\n> \n> Is it normal I have no recheck cond and the index cond of Bitmap Index\n> Scan is in the filter? Is it also a consequence of the code you\n> pointed?\n\nIt is in the filter, is it not? Having a recheck would be redundant.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 15 Feb 2007 13:27:33 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Guillaume Smet escribi�:\n>> Is it normal I have no recheck cond and the index cond of Bitmap Index\n>> Scan is in the filter? Is it also a consequence of the code you\n>> pointed?\n\n> It is in the filter, is it not? Having a recheck would be redundant.\n\nYeah, but his question is why is it in the filter? I think that the\nanswer is probably \"because the index is lossy for this operator,\nso it has to be checked even if the bitmap didn't become lossy\".\nYou'd have to check the GIST opclass definition to be sure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Feb 2007 11:34:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd "
},
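One way to inspect the opclass definition from SQL rather than from the source: in the pre-8.4 catalogs in use here, the RECHECK flag is stored as pg_amop.amopreqcheck (column names are the ones from that era's catalogs; the opclass name below is the one used later in the thread):

SELECT oc.opcname, am.amopstrategy, am.amopopr::regoperator, am.amopreqcheck
  FROM pg_amop am
  JOIN pg_opclass oc ON am.amopclaid = oc.oid
 WHERE oc.opcname = 'gist_geometry_ops';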
{
"msg_contents": "On 2/15/07, Tom Lane <[email protected]> wrote:\n> I think that the\n> answer is probably \"because the index is lossy for this operator,\n> so it has to be checked even if the bitmap didn't become lossy\".\n> You'd have to check the GIST opclass definition to be sure.\n\nAny idea on what I have to look for (if it's of any interest for\nanyone, otherwise, I can live with your answer)?\n\nThanks.\n\n--\nGuillaume\n",
"msg_date": "Thu, 15 Feb 2007 19:09:15 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd"
},
{
"msg_contents": "On 2/15/07, Guillaume Smet <[email protected]> wrote:\n> On 2/15/07, Tom Lane <[email protected]> wrote:\n> > I think that the\n> > answer is probably \"because the index is lossy for this operator,\n> > so it has to be checked even if the bitmap didn't become lossy\".\n> > You'd have to check the GIST opclass definition to be sure.\n\nFYI I've taken a look at PostGIS source code and the index is lossy\nfor the operator &&:\nOPERATOR 3 &&\tRECHECK,\n\n(for every operator in the opclass to be exact)\n\n--\nGuillaume\n",
"msg_date": "Fri, 16 Feb 2007 01:32:42 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about Bitmap Heap Scan/BitmapAnd"
}
] |
[
{
"msg_contents": "Hi all,\n\nFollowing the work on Mark Stosberg on this list (thanks Mark!), I\noptimized our slow proximity queries by using cube, earthdistance\n(shipped with contrib) and a gist index. The result is globally very\ninteresting apart for a specific query and we'd like to be able to fix\nit too to be more consistent (it's currently faster with a basic\ndistance calculation based on acos, cos and so on but it's slow\nanyway).\n\nThe problem is that we have sometimes very few places near a given\nlocation (small city) and sometimes a lot of them (in Paris, Bruxelles\nand so on - it's the case we have here). The gist index I created\ndoesn't estimate the number of rows in the area very well.\n\nTable: lieu (100k rows) with wgslat and wgslon as numeric\nTable: lieugelieu (200k rows, 1k with codegelieu = 'PKG')\nIndex: \"idx_lieu_earth\" gist (ll_to_earth(wgslat::double precision,\nwgslon::double precision))\n\nThe simplified query is:\nSELECT DISTINCT l.numlieu, l.nomlieu, ROUND\n(earth_distance(ll_to_earth(48.85957600, 2.34860800),\nll_to_earth(l.wgslat, l.wgslon))) as dist\n\tFROM lieu l, lieugelieu lgl\n\tWHERE lgl.codegelieu = 'PKG' AND earth_box(ll_to_earth(48.85957600,\n2.34860800), 1750) @ ll_to_earth(l.wgslat, l.wgslon) AND lgl.numlieu =\nl.numlieu ORDER BY dist ASC LIMIT 2;\nIt's used to find the nearest car parks from a given location.\n\nThe plan is attached plan_earthdistance_nestedloop.txt. It uses a\nnested loop because the row estimate is pretty bad: (cost=0.00..3.38\nrows=106 width=0) (actual time=30.229..30.229 rows=5864 loops=1).\n\nIf I disable the nested loop, the plan is different and faster (see\nplan_earthdistance_hash.txt attached).\n\nIs there any way to improve this estimation? I tried to set the\nstatistics of wgslat and wgslon higher but it doesn't change anything\n(I don't know if the operator is designed to use the statistics).\n\nAny other idea to optimize this query is very welcome too.\n\n--\nGuillaume",
"msg_date": "Tue, 13 Feb 2007 18:09:19 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proximity query with GIST and row estimation"
},
{
"msg_contents": "You'll find that PostGIS does a pretty good job of selectivity \nestimation.\n\nP\n\nOn 13-Feb-07, at 9:09 AM, Guillaume Smet wrote:\n\n> Hi all,\n>\n> Following the work on Mark Stosberg on this list (thanks Mark!), I\n> optimized our slow proximity queries by using cube, earthdistance\n> (shipped with contrib) and a gist index. The result is globally very\n> interesting apart for a specific query and we'd like to be able to fix\n> it too to be more consistent (it's currently faster with a basic\n> distance calculation based on acos, cos and so on but it's slow\n> anyway).\n>\n> The problem is that we have sometimes very few places near a given\n> location (small city) and sometimes a lot of them (in Paris, Bruxelles\n> and so on - it's the case we have here). The gist index I created\n> doesn't estimate the number of rows in the area very well.\n>\n> Table: lieu (100k rows) with wgslat and wgslon as numeric\n> Table: lieugelieu (200k rows, 1k with codegelieu = 'PKG')\n> Index: \"idx_lieu_earth\" gist (ll_to_earth(wgslat::double precision,\n> wgslon::double precision))\n>\n> The simplified query is:\n> SELECT DISTINCT l.numlieu, l.nomlieu, ROUND\n> (earth_distance(ll_to_earth(48.85957600, 2.34860800),\n> ll_to_earth(l.wgslat, l.wgslon))) as dist\n> \tFROM lieu l, lieugelieu lgl\n> \tWHERE lgl.codegelieu = 'PKG' AND earth_box(ll_to_earth(48.85957600,\n> 2.34860800), 1750) @ ll_to_earth(l.wgslat, l.wgslon) AND lgl.numlieu =\n> l.numlieu ORDER BY dist ASC LIMIT 2;\n> It's used to find the nearest car parks from a given location.\n>\n> The plan is attached plan_earthdistance_nestedloop.txt. It uses a\n> nested loop because the row estimate is pretty bad: (cost=0.00..3.38\n> rows=106 width=0) (actual time=30.229..30.229 rows=5864 loops=1).\n>\n> If I disable the nested loop, the plan is different and faster (see\n> plan_earthdistance_hash.txt attached).\n>\n> Is there any way to improve this estimation? I tried to set the\n> statistics of wgslat and wgslon higher but it doesn't change anything\n> (I don't know if the operator is designed to use the statistics).\n>\n> Any other idea to optimize this query is very welcome too.\n>\n> --\n> Guillaume\n> <plan_earthdistance_nestedloop.txt>\n> <plan_earthdistance_hash.txt>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n",
"msg_date": "Tue, 13 Feb 2007 19:01:34 -0800",
"msg_from": "Paul Ramsey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proximity query with GIST and row estimation"
},
{
"msg_contents": "Paul,\n\nOn 2/14/07, Paul Ramsey <[email protected]> wrote:\n> You'll find that PostGIS does a pretty good job of selectivity\n> estimation.\n\nPostGIS is probably what I'm going to experiment in the future. The\nonly problem is that it's really big for a very basic need.\nWith my current method, I don't even have to create a new column: I\ncreate directly a functional index so it's really easy to use.\nUsing PostGIS requires to create a new column and triggers to maintain\nit and install PostGIS of course. That's why it was not my first\nchoice.\n\nThanks for your answer.\n\n--\nGuillaume\n",
"msg_date": "Wed, 14 Feb 2007 19:12:54 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proximity query with GIST and row estimation"
},
{
"msg_contents": "On 2/14/07, Paul Ramsey <[email protected]> wrote:\n> You'll find that PostGIS does a pretty good job of selectivity\n> estimation.\n\nSo I finally have a working PostGIS and I fixed the query to use PostGIS.\n\nThe use of PostGIS is slower than the previous cube/earthdistance\napproach (on a similar query and plan). But you're right, it does a\npretty good job to calculate the selectivity and the estimations are\nreally good.\nIt helps to select a good plan (or a bad one if the previous false\nnumbers led to a better plan which is my case for certain queries).\nI suppose it's normal to be slower as it's more precise. I don't know\nwhich approach is better in my case as I don't need the precision of\nPostGIS.\n\nFor the record, here is what I did:\nselect AddGeometryColumn('lieu','earthpoint',32631,'POINT',2);\nupdate lieu set earthpoint=Transform(SetSRID(MakePoint(wgslon,\nwgslat), 4327), 32631);\ncreate index idx_lieu_earthpoint on lieu using gist(earthpoint\ngist_geometry_ops);\n\nanalyze lieu;\n\nselect numlieu, nomlieu, wgslon, wgslat, astext(earthpoint) from lieu\nwhere earthpoint && Expand(Transform(SetSRID(MakePoint(12.49244400,\n41.89103400), 4326), 32631), 3000);\n\n(3000 is the distance in meters)\n\n--\nGuillaume\n",
"msg_date": "Thu, 15 Feb 2007 17:13:34 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proximity query with GIST and row estimation"
},
{
"msg_contents": "On 2/15/07, Guillaume Smet <[email protected]> wrote:\n> The use of PostGIS is slower than the previous cube/earthdistance\n> approach (on a similar query and plan).\n\nFor the record, here are new information about my proximity query work.\n\nThanks to Tom Lane, I found the reason of the performance drop. The\nproblem is that the gist index for operator && is lossy (declared as\nRECHECK in the op class).\nAFAICS, for the && operator it's done to prevent problems when SRIDs\nare not compatible: it forces the execution of the filter and so even\nwith a \"should be non lossy\" bitmap index scan, it throws an error as\nif we use a seqscan (Paul, correct me if I'm wrong) because it forces\nthe execution of the filter.\n\nAs I'm sure I won't have this problem (I will write a wrapper stored\nprocedure so that the end users won't see the SRID used), I created a\ndifferent opclass without the RECHECK clause:\nCREATE OPERATOR CLASS gist_geometry_ops_norecheck FOR TYPE geometry\nUSING gist AS\n OPERATOR 3 &&,\n FUNCTION 1 LWGEOM_gist_consistent (internal,\ngeometry, int4),\n FUNCTION 2 LWGEOM_gist_union (bytea, internal),\n FUNCTION 3 LWGEOM_gist_compress (internal),\n FUNCTION 4 LWGEOM_gist_decompress (internal),\n FUNCTION 5 LWGEOM_gist_penalty (internal,\ninternal, internal),\n FUNCTION 6 LWGEOM_gist_picksplit (internal, internal),\n FUNCTION 7 LWGEOM_gist_same (box2d, box2d, internal);\n\nUPDATE pg_opclass\n\tSET opckeytype = (SELECT oid FROM pg_type\n WHERE typname = 'box2d'\n AND typnamespace = (SELECT oid FROM pg_namespace\n WHERE nspname=current_schema()))\n\tWHERE opcname = 'gist_geometry_ops_norecheck'\n AND opcnamespace = (SELECT oid from pg_namespace\n WHERE nspname=current_schema());\n\nAs I use only the && operator, I put only this one.\n\nAnd I recreated my index using:\nCREATE INDEX idx_lieu_earthpoint ON lieu USING gist(earthpoint\ngist_geometry_ops_norecheck);\n\nIn the case presented before, the bitmap index scan is then non lossy\nand I have similar performances than with earthdistance method.\n\n--\nGuillaume\n",
"msg_date": "Fri, 16 Feb 2007 19:31:45 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proximity query with GIST and row estimation"
}
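The wrapper stored procedure mentioned above is not shown in the thread; a hypothetical sketch of what it could look like, reusing the earthpoint column and the Expand/Transform/SetSRID/MakePoint expression quoted earlier (the function name, its signature and the fixed 4326/32631 SRIDs are assumptions taken from the thread's examples):

-- Hypothetical wrapper so callers never have to deal with SRIDs directly.
CREATE OR REPLACE FUNCTION lieu_near(numeric, numeric, integer)  -- lon, lat, radius in metres
RETURNS SETOF lieu AS $$
    SELECT l.*
      FROM lieu l
     WHERE l.earthpoint && Expand(
               Transform(SetSRID(MakePoint($1, $2), 4326), 32631),
               $3);
$$ LANGUAGE sql STABLE;

-- Example call: places within 3 km of the point used in the thread.
SELECT numlieu, nomlieu FROM lieu_near(12.49244400, 41.89103400, 3000);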
] |
[
{
"msg_contents": "Postgres 8.1\n\nLinux Redhat AS 4.X\n\n4 processor box\n\nWe have 12+ schemas in 1 database. When I do a unix \"top\" command I\nnotice one postmaster process has 100% CPU usage. This process just\nstays at 100% to 99% CPU usage. There are other postmaster processes\nthat pop up. They use hardly no CPU or memory. They also disappear\nvery fast. I am wondering, is postgres only using one processor for\ndatabase queries? Is there something I need to do to tell postgres to\nuse more than one processor? Or does postgres only use up to one\nprocessor for any particular database? \n\n \n\nThanks for your help,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgres 8.1\nLinux Redhat AS 4.X\n4 processor box\nWe have 12+ schemas in 1 database. When I do a unix “top”\ncommand I notice one postmaster process has 100% CPU usage. This process just stays\nat 100% to 99% CPU usage. There are other postmaster processes that pop up. \nThey use hardly no CPU or memory. They also disappear very fast. I am\nwondering, is postgres only using one processor for database queries? Is there\nsomething I need to do to tell postgres to use more than one processor? Or\ndoes postgres only use up to one processor for any particular database? \n \nThanks for your help,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu",
"msg_date": "Tue, 13 Feb 2007 12:36:59 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CPU Usage"
},
{
"msg_contents": "On Tuesday 13 February 2007 10:36, \"Campbell, Lance\" <[email protected]> \nwrote:\n> We have 12+ schemas in 1 database. When I do a unix \"top\" command I\n> notice one postmaster process has 100% CPU usage. This process just\n> stays at 100% to 99% CPU usage. There are other postmaster processes\n> that pop up. They use hardly no CPU or memory. They also disappear\n> very fast. I am wondering, is postgres only using one processor for\n> database queries? Is there something I need to do to tell postgres\n> to use more than one processor? Or does postgres only use up to one\n> processor for any particular database?\n\nEach connection to the cluster gets a dedicated backend process, which \ncan only be scheduled on one processor at a time. So, in effect, a \nsingle query can only use one processor at a time, but any number of \nother backends can be simultaneously using the other CPUs. It does not \nmatter which database they are operating on.\n\n-- \n\"It is a besetting vice of democracies to substitute public opinion for\nlaw.\" - James Fenimore Cooper \n",
"msg_date": "Tue, 13 Feb 2007 10:48:25 -0800",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU Usage"
}
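To tie the busy PID shown by top back to a specific query, pg_stat_activity can be consulted. The column names below are the 8.1-era ones (procpid and current_query were later renamed pid and query), and current_query is only filled in when stats_command_string is enabled:

SELECT procpid, datname, usename, query_start, current_query
  FROM pg_stat_activity
 ORDER BY query_start;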
] |
[
{
"msg_contents": "Hi folks,\n\nI don't know if this is an SQL or PERFORMANCE list problem but I wanted to \ncheck here first. I've seen this discussed on the list before but I'm still \nnot sure of the solution. Maybe my query is just structured wrong.\n\nI recently visited an old project of mine that has a 'city', 'state,' \nand 'country' tables. The city data comes from multiple sources and totals \nabout 3 million rows. I decided to split the city table up based on the \nsource (world_city, us_city). This makes easier updating because the \nassigned feature id's from the two sources overlap in some cases making it \nimpossible to update as a single merged table.\n\nHowever, I decided to create a view to behave like the old 'city' table. The \nview is just a simple:\n\nSELECT [columns]\nFROM world_city\nUNION\nSELECT [columns]\nFROM us_city\n;\n\nSelecting from the view is very quick, but JOINing to the view is slow. About \n65 seconds to select a city. It doesn't matter wether it is joined to one \ntable or 6 like it is in my user_detail query - it is still slow. It has \nindexes on the city_id, state_id, country_id of each table in the view too. \nEverything has been 'VACUUM ANALYZE' ed.\n\nWhen using explain analyze from the view I get this:\n\ncmi=# explain analyze\ncmi-# select user_id, username, city_name\ncmi-# FROM m_user AS mu\ncmi-# left JOIN geo.city_vw AS ci ON (mu.city_id = ci.city_id)\ncmi-# ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=650146.58..751018.45 rows=10618 width=55) \n(actual time=53078.261..61269.190 rows=1 loops=1)\n Join Filter: (\"outer\".city_id = \"inner\".\"?column1?\")\n -> Seq Scan on m_user mu (cost=0.00..1.01 rows=1 width=27) (actual \ntime=0.010..0.022 rows=1 loops=1)\n -> Unique (cost=650146.58..703236.51 rows=2123597 width=62) (actual \ntime=49458.007..59635.140 rows=2122712 loops=1)\n -> Sort (cost=650146.58..655455.58 rows=2123597 width=62) (actual \ntime=49458.003..55405.965 rows=2122712 loops=1)\n Sort Key: city_id, state_id, country_id, cc1, rc, adm1, lat, \nlon, city_name\n -> Append (cost=0.00..73741.94 rows=2123597 width=62) (actual \ntime=18.835..13706.395 rows=2122712 loops=1)\n -> Seq Scan on us_city (cost=0.00..4873.09 rows=169409 \nwidth=62) (actual time=18.832..620.553 rows=169398 loops=1)\n -> Seq Scan on world_city (cost=0.00..47632.88 \nrows=1954188 width=61) (actual time=23.513..11193.341 rows=1953314 loops=1)\n Total runtime: 61455.471 ms\n(10 rows)\n\nTime: 61512.377 ms\n\nSo, a sequence scan on the tables in the view, won't use the index.\n\nThen do the same query by replacing the view with the real table:\n\ncmi=# explain analyze\ncmi-# select user_id, username, city_name\ncmi-# FROM m_user AS mu\ncmi-# left JOIN geo.world_city AS ci ON (mu.city_id = ci.city_id)\ncmi-# ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..4.04 rows=1 width=36) (actual \ntime=53.854..53.871 rows=1 loops=1)\n -> Seq Scan on m_user mu (cost=0.00..1.01 rows=1 width=27) (actual \ntime=0.010..0.016 rows=1 loops=1)\n -> Index Scan using world_city_pk on world_city ci (cost=0.00..3.01 \nrows=1 width=17) (actual time=53.825..53.833 rows=1 loops=1)\n Index Cond: (\"outer\".city_id = ci.city_id)\n Total runtime: 53.989 ms\n(5 rows)\n\nTime: 56.234 ms\n\n\nI'm not sure that a view 
on a UNION is the best idea but I don't know how to \ngo about keeping the tables from the data sources with the view (other than \nmodifying them with a source_id column). Any ideas on what is causing the \nperformance lag?\n\n\n",
"msg_date": "Tue, 13 Feb 2007 12:53:55 -0600",
"msg_from": "\"Chuck D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "JOIN to a VIEW makes a real slow query"
},
{
"msg_contents": "On 2/13/07, Chuck D. <[email protected]> wrote:\n> Hi folks,\n>\n> I don't know if this is an SQL or PERFORMANCE list problem but I wanted to\n> check here first. I've seen this discussed on the list before but I'm still\n> not sure of the solution. Maybe my query is just structured wrong.\n>\n> I recently visited an old project of mine that has a 'city', 'state,'\n> and 'country' tables. The city data comes from multiple sources and totals\n> about 3 million rows. I decided to split the city table up based on the\n> source (world_city, us_city). This makes easier updating because the\n> assigned feature id's from the two sources overlap in some cases making it\n> impossible to update as a single merged table.\n>\n> However, I decided to create a view to behave like the old 'city' table. The\n> view is just a simple:\n>\n> SELECT [columns]\n> FROM world_city\n> UNION\n> SELECT [columns]\n> FROM us_city\n> ;\n>\n> Selecting from the view is very quick, but JOINing to the view is slow. About\n> 65 seconds to select a city. It doesn't matter wether it is joined to one\n> table or 6 like it is in my user_detail query - it is still slow. It has\n> indexes on the city_id, state_id, country_id of each table in the view too.\n> Everything has been 'VACUUM ANALYZE' ed.\n\nuse 'union all' instead of union. union without all has an implied\nsort and duplicate removal step that has to be resolved, materializing\nthe view, before you can join to it.\n\nmerlin\n",
"msg_date": "Tue, 13 Feb 2007 14:16:54 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JOIN to a VIEW makes a real slow query"
},
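A minimal sketch of the rewrite Merlin suggests above, recreating the view in place. The column list is taken from the Sort Key shown in the earlier plan and is an assumption, as is the schema qualification of the two source tables:

    CREATE OR REPLACE VIEW geo.city_vw AS
    SELECT city_id, state_id, country_id, cc1, rc, adm1, lat, lon, city_name
    FROM geo.world_city
    UNION ALL   -- no implied DISTINCT, so no Sort/Unique step before the join
    SELECT city_id, state_id, country_id, cc1, rc, adm1, lat, lon, city_name
    FROM geo.us_city;

With UNION ALL the planner can simply append the two scans instead of sorting and de-duplicating roughly two million rows every time the view is joined.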
{
"msg_contents": "On Tuesday 13 February 2007 13:16, Merlin Moncure wrote:\n>\n> use 'union all' instead of union. union without all has an implied\n> sort and duplicate removal step that has to be resolved, materializing\n> the view, before you can join to it.\n>\n\nThanks for that Merlin, I forgot about using ALL. That does eliminate the \nUNIQUE, SORT and SORT lines from the EXPLAIN query. It also brings the query \ntime down from a whopping 65 seconds to 11 seconds. The two tables contain \nunique rows already so ALL would be required.\n\nIt is still using that sequence scan on the view after the APPEND for the \nus_city and world_city table. Any reason why the view won't use the indexes \nwhen it is JOINed to another table but it will when the view is queried \nwithout a JOIN? I should have mentioned this is v8.1.4.\n\nAlso, does anyone know why this line:\nJoin Filter: (\"outer\".city_id = \"inner\".\"?column1?\")\n... contains \"?column1?\" instead of the actual column name?\n\nThis is the result after UNION ALL on the view\n\ncmi=# explain analyze\ncmi-# select user_id, username, city_name\ncmi-# FROM m_user AS mu\ncmi-# LEFT JOIN geo.city_vw AS ci ON (mu.city_id = ci.city_id)\ncmi-# ;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..121523.88 rows=10618 width=55) (actual \ntime=2392.376..11061.117 rows=1 loops=1)\n Join Filter: (\"outer\".city_id = \"inner\".\"?column1?\")\n -> Seq Scan on m_user mu (cost=0.00..1.01 rows=1 width=27) (actual \ntime=0.025..0.028 rows=1 loops=1)\n -> Append (cost=0.00..73741.94 rows=2123597 width=62) (actual \ntime=16.120..9644.315 rows=2122712 loops=1)\n -> Seq Scan on us_city (cost=0.00..4873.09 rows=169409 width=62) \n(actual time=16.119..899.802 rows=169398 loops=1)\n -> Seq Scan on world_city (cost=0.00..47632.88 rows=1954188 \nwidth=61) (actual time=10.585..6949.946 rows=1953314 loops=1)\n Total runtime: 11061.441 ms\n(7 rows)\n\n",
"msg_date": "Tue, 13 Feb 2007 14:17:31 -0600",
"msg_from": "\"Chuck D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JOIN to a VIEW makes a real slow query"
},
{
"msg_contents": "\"Chuck D.\" <[email protected]> writes:\n> It is still using that sequence scan on the view after the APPEND for the \n> us_city and world_city table. Any reason why the view won't use the indexes \n> when it is JOINed to another table but it will when the view is queried \n> without a JOIN? I should have mentioned this is v8.1.4.\n\n8.1 isn't bright enough for that. Should work in 8.2 though.\n\n> Also, does anyone know why this line:\n> Join Filter: (\"outer\".city_id = \"inner\".\"?column1?\")\n> ... contains \"?column1?\" instead of the actual column name?\n\nEXPLAIN can't conveniently get access to the column name. That could\nprobably be improved if someone wanted to put enough effort into it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Feb 2007 15:51:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JOIN to a VIEW makes a real slow query "
},
{
"msg_contents": "\nOn Tuesday 13 February 2007 14:51, Tom Lane wrote:\n> \"Chuck D.\" <[email protected]> writes:\n> > It is still using that sequence scan on the view after the APPEND for the\n> > us_city and world_city table. Any reason why the view won't use the\n> > indexes when it is JOINed to another table but it will when the view is\n> > queried without a JOIN? I should have mentioned this is v8.1.4.\n>\n> 8.1 isn't bright enough for that. Should work in 8.2 though.\n\n>\n> \t\t\tregards, tom lane\n\nUpgraded to 8.2.3 in my spare time here - went from the packaged binary that \ncame with Ubuntu to compiling from source. Haven't tuned it yet, but what do \nyou think about this join on the view?\n\n\ncmi=# explain analyze\ncmi-# select user_id, username, city_name\ncmi-# FROM m_user AS mu\ncmi-# LEFT JOIN geo.city_vw AS ci ON (mu.city_id = ci.city_id)\ncmi-# ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..17.76 rows=10614 width=486) (actual \ntime=0.109..0.113 rows=1 loops=1)\n Join Filter: (mu.city_id = ci.city_id)\n -> Seq Scan on m_user mu (cost=0.00..1.01 rows=1 width=72) (actual \ntime=0.015..0.017 rows=1 loops=1)\n -> Append (cost=0.00..16.72 rows=2 width=422) (actual time=0.073..0.075 \nrows=1 loops=1)\n -> Index Scan using pk_us_city on us_city (cost=0.00..8.28 rows=1 \nwidth=222) (actual time=0.032..0.032 rows=0 loops=1)\n Index Cond: (mu.city_id = us_city.city_id)\n -> Index Scan using world_city_pk on world_city (cost=0.00..8.44 \nrows=1 width=422) (actual time=0.040..0.042 rows=1 loops=1)\n Index Cond: (mu.city_id = world_city.city_id)\n Total runtime: 0.359 ms\n(9 rows)\n\n\nFrom 65 seconds down to less than 1 ms. Pretty good huh? Nice call Tom. \n\nNow I'll have to find some time to do the production server before this app \ngoes up.\n\n\n",
"msg_date": "Wed, 14 Feb 2007 13:12:22 -0600",
"msg_from": "\"Chuck D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JOIN to a VIEW makes a real slow query"
}
] |
[
{
"msg_contents": "I am about to pull the trigger on a new machine after analyzing some\ndiscussions I posted here last year. I've been trying to spec out a reliable\nand powerfull enough machine where I won't have to replace it for some time.\nCurrently I've been using a dual Xeon 3.06ghz with 4GB of ram and utilizing a\nRAID 1+0 configuration over a total 6 SCSI disks asside from the OS partition.\nWe have about 10GB of data and will probably scale at about 1GB per month. We\ncurrently average about 200 queries per second and the 15 minute load average\nis about .30. I am running FreeBSD 6.1.\n\nAt the end of last year, I specced out a new machine to replace this box. At\nthat time, the quad core 2.66ghz were not available from my vendor and I was\nnot planning to go that route. Now that they are available, I am considering\nthe option. The main question here is whether FreeBSD 6.X and PostgreSQL 8.1\nwould be able to take advantage of the quad core and perform better than the\n3.0Ghz dual core. The reason I ask is due to some conflicting benchmarking\nresults I see posted on the spec.org website.\n\nHere is the full specification of the new box I hope to build and run FreeBSD\n6.X and PostgreSQL on:\n\n- SuperMicro Dual Xeon X7DBE+ motherboard\n + 2 x Quad Core X5355 2.66Ghz \n OR\n + 2 x Dual Core 5160 3.0Ghz \n\n- 8 x 1GB PC2-4200 fully buffered DIMM\n\n- LSI MegaRAID SAS 8408E w/BBU 256MB\n\n- 16 x 73GB SAS disk\n\nSo, question #1, to go dual core or quad core? Quad core in theory seems to\nscale the machine's processing potential by almost a factor of two.\n\nAnd lastly, up till now, I've only have experience configuring SCSI RAID\ncontrollers. I believe this LSI MegaRAID unit has a dual channel setup, but\nwhen it comes to SAS drives, I don't know what kind of flexibility this\nprovides. How should the disks be partitioned for maximum PostgreSQL\nperformance?\n\nI'm thinking about keeping it simple assuming that the hot spare can only be\nutilized one per channel leaving me only 14 disks to utilize.\n\n1 RAID1 partition using 2 disks total for the OS\n1 RAID1+0 using 12 disks total striping over 6.\n\nIf I am not able to utilize both channels to create a 12 disk RAID1+0 array,\nthen it might be better to create 2 seperate data partitions, one for\nWAL/pg_xlog and the rest for the data store.\n\nPlease comment on any issues you may see with this box and my assumptions.\nAlso any FreeBSD kernel issues or tweaks you could recommend.\n\nSincerely,\nKenji \n",
"msg_date": "Tue, 13 Feb 2007 11:46:10 -0800",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "quad or dual core Intel CPUs"
},
{
"msg_contents": "Hi Kenji,\n\nOn 13-2-2007 20:46 Kenji Morishige wrote:\n> Here is the full specification of the new box I hope to build and run FreeBSD\n> 6.X and PostgreSQL on:\n> \n> - SuperMicro Dual Xeon X7DBE+ motherboard\n> + 2 x Quad Core X5355 2.66Ghz \n> OR\n> + 2 x Dual Core 5160 3.0Ghz \n> \n> - 8 x 1GB PC2-4200 fully buffered DIMM\n> \n> - LSI MegaRAID SAS 8408E w/BBU 256MB\n> \n> - 16 x 73GB SAS disk\n\nIf this is in one of those 4U cases, make very, very sure it can \nproperly exhaust all the heat generated and has more than adequate power \nsupply. When going for a similar machine, we got a negative advice on \nsuch a set-up from a server vendor who built such machines themselves. \nDon't forget that the FB-dimms run pretty hot and they need sufficient \ncooling. As you can see on these pictures Fujitsu thought it necessary \nto add fan-ducts for the memory:\nhttp://tweakers.net/reviews/646/7\n\nOur own Dell systems have similar ducts. But a third-party server \nbuilder we tested did not include those, and the machine ran very hot (I \ncouldn't touch the bottom for more than a short time) in a \nnot-too-good-ventilated, but mostly empty, server rack. Although that \nwas a 2U machine, but it didn't include any disks. Currently we have had \ngood experience with our new Dell 1950 (2x 5160, PC5300 FBD) combined \nwith a Dell MD1000 SAS disk unit (15x 15k 36G disks) described in the \nsecond review linked below. HP offers similar options and there are \nprobably several other suppliers who can build something like that too. \nSeperate SAS-JBOD disk units are available from other suppliers as well.\n\n> So, question #1, to go dual core or quad core? Quad core in theory seems to\n> scale the machine's processing potential by almost a factor of two.\n\nI can partially answer that question, but than for linux + postgresql \n8.2. In that case, postgresql can take advantage of the extra core. See \nour review here:\nhttp://tweakers.net/reviews/661\n\nThis includes comparisons between the X5355 and 5160 with postgresql on \nthe seventh page, here: http://tweakers.net/reviews/661/7\n\nBut be aware that there can be substantial and unexpected differences on \nthis relatively new platform due to simply changing the OS, like we saw \nwhen going from linux 2.6.15 to 2.6.18, as you can see here:\nhttp://tweakers.net/reviews/657/2\n\nOur benchmark has relatively little writing and a smallish dataset (fits \nin 4GB of memory), so I don't know how much use these benchmarks are for \nyou. But the conclusion was that the extra processor power isn't fully \navailable, possibly because there is less memory bandwidth per processor \ncore and more communication overhead. Then again, in our test the dual \nquad core was faster than the dual dual core.\n\n> And lastly, up till now, I've only have experience configuring SCSI RAID\n> controllers. I believe this LSI MegaRAID unit has a dual channel setup, but\n> when it comes to SAS drives, I don't know what kind of flexibility this\n> provides. How should the disks be partitioned for maximum PostgreSQL\n> performance?\n> \n> I'm thinking about keeping it simple assuming that the hot spare can only be\n> utilized one per channel leaving me only 14 disks to utilize.\n\nIn that Dell 1950 we use the Dell PERC5/e SAS-controller for the \ndatabase, which is based on that same LSI controller, although it has 2 \nexternal sas connections (for 4 channels each). Afaik it supports global \nhot spares. 
But we use the full set of 15 disks as a 14+1 disk raid 5, \nso I haven't looked at that too well. For the OS we have a separate \nPERC5/i \"internal\" raid controller with two internal disks. My \ncolleague also tested several raid set-ups with that equipment, and we \nchose a raid5 for its slightly better read-performance. If you can make \nsomething of these Dutch pages, you can have a look at those results here:\nhttp://tweakers.net/benchdb/test/122\n\nPlay around with the form at the bottom of the page to see some \ncomparisons between several raid set-ups. The SAS configurations are of \ncourse the ones with the \"Fujitsu MAX3036RC 36GB\" disks and \"Dell PERC \n5/E\" controller.\n\n> If I am not able to utilize both channels to create a 12 disk RAID1+0 array,\n> then it might be better to create 2 seperate data partitions, one for\n> WAL/pg_xlog and the rest for the data store.\n\nI'm not too sure how you can connect your disks and controller to a \nSAS-expander (which you need to connect more than 8 disks to a \ncontroller). I believe it is possible to use a 24-port expander, \nallowing communication between the 16 disks and 8 ports of your \ncontroller. A SAS-expander normally comes with the enclosure/disk unit, \nbut I have no idea about the details. Our own testing was done using \njust a single 4-port connector, which can handle 1.2GB/sec (afaik this B \nis for bytes) and we believe that's sufficient for our 15 disks.\n\n> Please comment on any issues you may see with this box and my assumptions.\n> Also any FreeBSD kernel issues or tweaks you could recommend.\n\nHave a very good look at your heat production, exhaust and power \nsupply. It was one of the reasons we decided to use separate enclosures, \nseparating the processors/memory from the big disk array.\n\nBest regards and good luck,\n\nArjen van der Meijden\n",
"msg_date": "Tue, 13 Feb 2007 22:05:45 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quad or dual core Intel CPUs"
},
{
"msg_contents": "Kenji Morishige wrote:\n> \n> Please comment on any issues you may see with this box and my assumptions.\n> Also any FreeBSD kernel issues or tweaks you could recommend.\n> \n\nI would recommend posting to freebsd-hardware or freebsd-stable and \nasking if there are any gotchas with the X7DBE+ and 6.2 (for instance \nX7DBR-8+ suffers from an intermittent hang at boot... so can't hurt to ask!)\n\nCheers\n\nMark\n",
"msg_date": "Wed, 14 Feb 2007 10:28:51 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quad or dual core Intel CPUs"
},
{
"msg_contents": "Arjen van der Meijden wrote:\n\n> \n> But be aware that there can be substantial and unexpected differences on \n> this relatively new platform due to simply changing the OS, like we saw \n> when going from linux 2.6.15 to 2.6.18, as you can see here:\n> http://tweakers.net/reviews/657/2\n\n\nHaving upgraded to 2.6.18 fairly recently, I am *very* interested in \nwhat caused the throughput to drop in 2.6.18? I haven't done any \nbenchmarking on my system to know if it affected my usage pattern \nnegatively, but I am curious if anyone knows why this happened?\n\n-Dan\n",
"msg_date": "Tue, 13 Feb 2007 15:21:07 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quad or dual core Intel CPUs"
},
{
"msg_contents": "Dan,\n\nOn 2/13/07, Dan Harris <[email protected]> wrote:\n> Having upgraded to 2.6.18 fairly recently, I am *very* interested in\n> what caused the throughput to drop in 2.6.18? I haven't done any\n> benchmarking on my system to know if it affected my usage pattern\n> negatively, but I am curious if anyone knows why this happened?\n\nI think you misread the graph. PostgreSQL 8.2 seems to be\napproximately 20% faster with kernel 2.6.18 on the platforms tested\n(and using tweakers.net benchmark).\n\n--\nGuillaume\n",
"msg_date": "Tue, 13 Feb 2007 23:49:36 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quad or dual core Intel CPUs"
},
{
"msg_contents": "Dan Harris wrote:\n> Arjen van der Meijden wrote:\n> \n>> But be aware that there can be substantial and unexpected differences \n>> on this relatively new platform due to simply changing the OS, like we \n>> saw when going from linux 2.6.15 to 2.6.18, as you can see here:\n>> http://tweakers.net/reviews/657/2\n> \n> Having upgraded to 2.6.18 fairly recently, I am *very* interested in \n> what caused the throughput to drop in 2.6.18? I haven't done any \n> benchmarking on my system to know if it affected my usage pattern \n> negatively, but I am curious if anyone knows why this happened?\n\nI think you're reading the results backwards. PostgreSQL throughput \nincreased, not decreased, by the upgrade.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 13 Feb 2007 22:52:54 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quad or dual core Intel CPUs"
},
{
"msg_contents": "> I am about to pull the trigger on a new machine after analyzing some\n> discussions I posted here last year. I've been trying to spec out a reliable\n> and powerfull enough machine where I won't have to replace it for some time.\n> Currently I've been using a dual Xeon 3.06ghz with 4GB of ram and utilizing a\n> RAID 1+0 configuration over a total 6 SCSI disks asside from the OS partition.\n> We have about 10GB of data and will probably scale at about 1GB per month. We\n> currently average about 200 queries per second and the 15 minute load average\n> is about .30. I am running FreeBSD 6.1.\n>\n> At the end of last year, I specced out a new machine to replace this box. At\n> that time, the quad core 2.66ghz were not available from my vendor and I was\n> not planning to go that route. Now that they are available, I am considering\n> the option. The main question here is whether FreeBSD 6.X and PostgreSQL 8.1\n> would be able to take advantage of the quad core and perform better than the\n> 3.0Ghz dual core. The reason I ask is due to some conflicting benchmarking\n> results I see posted on the spec.org website.\n>\n> Here is the full specification of the new box I hope to build and run FreeBSD\n> 6.X and PostgreSQL on:\n>\n> - SuperMicro Dual Xeon X7DBE+ motherboard\n> + 2 x Quad Core X5355 2.66Ghz\n> OR\n> + 2 x Dual Core 5160 3.0Ghz\n>\n> - 8 x 1GB PC2-4200 fully buffered DIMM\n>\n> - LSI MegaRAID SAS 8408E w/BBU 256MB\n>\n> - 16 x 73GB SAS disk\n>\n> So, question #1, to go dual core or quad core? Quad core in theory seems to\n> scale the machine's processing potential by almost a factor of two.\n\nWe recently migrated from a four way opteron @ 2 GHz with 8 GB to a\nfour way woodcrest @ 3 GHz (HP DL380 G5) with 16 GB ram. I also\nupgraded FreeBSD from 6.0 to 6.2 and did a minor upgrade of postgresql\nfrom 7.4.9 to 7.4.12. The change was tremendous, the first few hours\nof after it went into production I had to doublecheck that our website\nworked, since the load was way below 1 whereas the load had been\nalmost 100 during peak.\n\nI don't have any financial ties to HP but building a server from\nscratch may not be worth it, rather than spending time assemling all\nthe different parts yourself I would suggest you get a server from one\nvendor who build a server according to your specs.\n\nThe DL380 (also) has a 256 MB bbc controller, the nic works flawlessly\nwith FreeBSD 6.2, all parts are well integrated, the frontbay can\naccomodate 8 146 GB SAS drives. This server is wellsuited as a\npostgresql-server.\n\nApprox. 200 reqest a sec. should be a problem unless the queries are heavy.\n\nregards\nClaus\n",
"msg_date": "Wed, 14 Feb 2007 10:19:52 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quad or dual core Intel CPUs"
},
{
"msg_contents": "Thanks Claus thats good news! \nI'm having a reputable vendor build the box and test it for me before\ndelivering. The bottom line of your message, did you mean 'should be not a\nproblem'? I wonder what the main reason for your improvement, your ram was\nincreased by a factor of 2, but 4 way opteron vs 4 way woodcrest performance\nmust not be that significant.\n\n-Kenji\n\n\n> We recently migrated from a four way opteron @ 2 GHz with 8 GB to a\n> four way woodcrest @ 3 GHz (HP DL380 G5) with 16 GB ram. I also\n> upgraded FreeBSD from 6.0 to 6.2 and did a minor upgrade of postgresql\n> from 7.4.9 to 7.4.12. The change was tremendous, the first few hours\n> of after it went into production I had to doublecheck that our website\n> worked, since the load was way below 1 whereas the load had been\n> almost 100 during peak.\n> \n> I don't have any financial ties to HP but building a server from\n> scratch may not be worth it, rather than spending time assemling all\n> the different parts yourself I would suggest you get a server from one\n> vendor who build a server according to your specs.\n> \n> The DL380 (also) has a 256 MB bbc controller, the nic works flawlessly\n> with FreeBSD 6.2, all parts are well integrated, the frontbay can\n> accomodate 8 146 GB SAS drives. This server is wellsuited as a\n> postgresql-server.\n> \n> Approx. 200 reqest a sec. should be a problem unless the queries are heavy.\n> \n> regards\n> Claus\n",
"msg_date": "Wed, 14 Feb 2007 09:25:01 -0800",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: quad or dual core Intel CPUs"
},
{
"msg_contents": ">> Approx. 200 reqest a sec. should be a problem unless the queries are heavy.\n>\n> Thanks Claus thats good news!\n> I'm having a reputable vendor build the box and test it for me before\n> delivering. The bottom line of your message, did you mean 'should be not a\n> problem'? I wonder what the main reason for your improvement, your ram was\n> increased by a factor of 2, but 4 way opteron vs 4 way woodcrest performance\n> must not be that significant.\n\nSorry, the line should read 'should *not* be a problem', pardon for\nthe confusion. So 200 queries/s should be fine, probably won't make\nthe server sweat.\n\nI'm not shure what attributed most to the decrease when the load went\nfrom approx. 100 during peak to less than 1! Since the db-server is\nsuch a vital part of our infrastructure, I was reluctant to upgrade\nit, while load was below 10. But in November and December - when we\nhave our most busy time - our website slowed to a crawl, thus phasing\na new server in was an easy decision.\n\nThe woodcrest is a better performer compared to the current opteron,\nthe ciss-disk-controller also has 256 MB cache compared to the 64 MB\nLSI-logic controller in the former db-server, FreeBSD 6.2 is also a\nbetter performer than 6.0, but I haven't done any benchmarking on the\nsame hardware.\n\nregards\nClaus\n",
"msg_date": "Wed, 14 Feb 2007 19:43:27 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: quad or dual core Intel CPUs"
}
] |
[
{
"msg_contents": "Hi,\n\nI am having trouble understanding why a seqscan is chosen for this query.\n\nIn practice the seqscan is very expensive, whereas the nested loop is usually quite fast, even with several hundred rows returned from meta_keywords_url.\n\nThe server is running version 8.1.3, and both tables were analyzed recently. meta_keywords contains around 25% dead rows, meta_keywords_url contains no dead rows.\n\nI have included the query written both as a subquery and as a join.\n\nThanks for any assistance!\nBrian\n\n\n\nlive=> explain select * from meta_keywords where url_id in (select url_id from meta_keywords_url where host = 'postgresql.org');\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=1755.79..545380.52 rows=9442 width=29)\n Hash Cond: (\"outer\".url_id = \"inner\".url_id)\n -> Seq Scan on meta_keywords (cost=0.00..507976.54 rows=7110754 width=29)\n -> Hash (cost=1754.35..1754.35 rows=576 width=4)\n -> Bitmap Heap Scan on meta_keywords_url (cost=11.02..1754.35 rows=576 width=4)\n Recheck Cond: ((host)::text = 'postgresql.org'::text)\n -> Bitmap Index Scan on meta_keywords_url_host_path (cost=0.00..11.02 rows=576 width=0)\n Index Cond: ((host)::text = 'postgresql.org'::text)\n(8 rows)\n\nlive=> set enable_seqscan=off;\nSET\nlive=> explain select * from meta_keywords where url_id in (select url_id from meta_keywords_url where host = 'postgresql.org');\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1755.79..3161748.83 rows=9442 width=29)\n -> HashAggregate (cost=1755.79..1761.55 rows=576 width=4)\n -> Bitmap Heap Scan on meta_keywords_url (cost=11.02..1754.35 rows=576 width=4)\n Recheck Cond: ((host)::text = 'postgresql.org'::text)\n -> Bitmap Index Scan on meta_keywords_url_host_path (cost=0.00..11.02 rows=576 width=0)\n Index Cond: ((host)::text = 'postgresql.org'::text)\n -> Index Scan using meta_keywords_url_id on meta_keywords (cost=0.00..5453.28 rows=2625 width=29)\n Index Cond: (meta_keywords.url_id = \"outer\".url_id)\n(8 rows)\n\nlive=> explain select * from meta_keywords join meta_keywords_url using (url_id) where host = 'postgresql.org'; QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Hash Join (cost=1758.52..543685.43 rows=9297 width=107)\n Hash Cond: (\"outer\".url_id = \"inner\".url_id)\n -> Seq Scan on meta_keywords (cost=0.00..506859.29 rows=6994929 width=28)\n -> Hash (cost=1757.08..1757.08 rows=577 width=83)\n -> Bitmap Heap Scan on meta_keywords_url (cost=11.02..1757.08 rows=577 width=83)\n Recheck Cond: ((host)::text = 'postgresql.org'::text)\n -> Bitmap Index Scan on meta_keywords_url_host_path (cost=0.00..11.02 rows=577 width=0)\n Index Cond: ((host)::text = 'postgresql.org'::text)\n(8 rows)\n\nlive=> set enable_seqscan=off;\nSET\nlive=> explain select * from meta_keywords join meta_keywords_url using (url_id) where host = 'postgresql.org';\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..3348211.21 rows=9297 width=107)\n -> Index Scan using meta_keywords_url_host_path on meta_keywords_url (cost=0.00..2230.24 rows=577 width=83)\n Index Cond: ((host)::text = 'postgresql.org'::text)\n -> Index Scan using meta_keywords_url_id on meta_keywords (cost=0.00..5765.81 rows=2649 width=28)\n Index Cond: 
(meta_keywords.url_id = \"outer\".url_id)\n(5 rows)\n\n\n\n\n",
"msg_date": "Wed, 14 Feb 2007 00:40:13 -0800 (PST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "An unwanted seqscan"
},
{
"msg_contents": "Brian Herlihy <[email protected]> writes:\n> I am having trouble understanding why a seqscan is chosen for this query.\n\nAs far as anyone can see from this output, the planner's decisions are\ncorrect: it prefers the plans with the smaller estimated cost. If you\nwant us to take an interest, provide some more context --- EXPLAIN\nANALYZE output for starters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Feb 2007 03:53:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An unwanted seqscan "
}
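For reference, the output Tom is asking for comes from prefixing the same statement with EXPLAIN ANALYZE, which actually executes the query and reports real row counts and timings next to the planner's estimates:

    EXPLAIN ANALYZE
    SELECT *
    FROM meta_keywords
    WHERE url_id IN (SELECT url_id
                     FROM meta_keywords_url
                     WHERE host = 'postgresql.org');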
] |
[
{
"msg_contents": "Hello All,\nI'm a performance engineer, quite interested in getting deep into the PGSQL\nperformance enhancement effort. In that regard, I have the following\nquestions :\n1. Is there a benchmarking setup, that I can access online?\n2. What benchmarks are we running , for performance numbers?\n3. What are the current issues, related to performance?\n4. Where can I start, with the PGSQL performance effort?\n\nThanks a lot,\nKrishna\n\nHello All, I'm a performance engineer, quite interested in getting deep into the PGSQL performance enhancement effort. In that regard, I have the following questions : 1. Is there a benchmarking setup, that I can access online?\n2. What benchmarks are we running , for performance numbers?3. What are the current issues, related to performance? 4. Where can I start, with the PGSQL performance effort?Thanks a lot, Krishna",
"msg_date": "Wed, 14 Feb 2007 15:30:09 +0530",
"msg_from": "\"Krishna Kumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmarking PGSQL?"
},
{
"msg_contents": "Have you tried pgbench yet?\n\n--\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 2/14/07, Krishna Kumar <[email protected]> wrote:\n>\n> Hello All,\n> I'm a performance engineer, quite interested in getting deep into the\n> PGSQL performance enhancement effort. In that regard, I have the following\n> questions :\n> 1. Is there a benchmarking setup, that I can access online?\n> 2. What benchmarks are we running , for performance numbers?\n> 3. What are the current issues, related to performance?\n> 4. Where can I start, with the PGSQL performance effort?\n>\n> Thanks a lot,\n> Krishna\n>\n\nHave you tried pgbench yet?--Shoaib MirEnterpriseDB (www.enterprisedb.com)On 2/14/07, Krishna Kumar\n <[email protected]> wrote:\nHello All, I'm a performance engineer, quite interested in getting deep into the PGSQL performance enhancement effort. In that regard, I have the following questions : 1. Is there a benchmarking setup, that I can access online?\n2. What benchmarks are we running , for performance numbers?3. What are the current issues, related to performance? 4. Where can I start, with the PGSQL performance effort?Thanks a lot, \nKrishna",
"msg_date": "Wed, 14 Feb 2007 15:17:45 +0500",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PGSQL?"
},
{
"msg_contents": "Here¹s one:\n\nInsert performance is limited to about 10-12 MB/s no matter how fast the\nunderlying I/O hardware. Bypassing the WAL (write ahead log) only boosts\nthis to perhaps 20 MB/s. We¹ve found that the biggest time consumer in the\nprofile is the collection of routines that ³convert to datum².\n\nYou can perform the test using any dataset, you might consider using the\nTPC-H benchmark kit with a data generator available at www.tpc.org. Just\ngenerate some data, load the schema, then perform some COPY statements,\nINSERT INTO SELECT FROM and CREATE TABLE AS SELECT.\n\n- Luke\n\n\nOn 2/14/07 2:00 AM, \"Krishna Kumar\" <[email protected]> wrote:\n\n> Hello All, \n> I'm a performance engineer, quite interested in getting deep into the PGSQL\n> performance enhancement effort. In that regard, I have the following questions\n> : \n> 1. Is there a benchmarking setup, that I can access online?\n> 2. What benchmarks are we running , for performance numbers?\n> 3. What are the current issues, related to performance?\n> 4. Where can I start, with the PGSQL performance effort?\n> \n> Thanks a lot, \n> Krishna \n> \n\n\n\n\n\nRe: [PERFORM] Benchmarking PGSQL?\n\n\nHere’s one:\n\nInsert performance is limited to about 10-12 MB/s no matter how fast the underlying I/O hardware. Bypassing the WAL (write ahead log) only boosts this to perhaps 20 MB/s. We’ve found that the biggest time consumer in the profile is the collection of routines that “convert to datum”.\n\nYou can perform the test using any dataset, you might consider using the TPC-H benchmark kit with a data generator available at www.tpc.org. Just generate some data, load the schema, then perform some COPY statements, INSERT INTO SELECT FROM and CREATE TABLE AS SELECT.\n\n- Luke\n\n\nOn 2/14/07 2:00 AM, \"Krishna Kumar\" <[email protected]> wrote:\n\nHello All, \nI'm a performance engineer, quite interested in getting deep into the PGSQL performance enhancement effort. In that regard, I have the following questions : \n1. Is there a benchmarking setup, that I can access online? \n2. What benchmarks are we running , for performance numbers?\n3. What are the current issues, related to performance? \n4. Where can I start, with the PGSQL performance effort?\n\nThanks a lot, \nKrishna",
"msg_date": "Wed, 14 Feb 2007 07:35:43 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PGSQL?"
},
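A rough sketch of the three load paths Luke mentions, against a hypothetical lineitem table filled from TPC-H dbgen output; the file path and the copy-table names are assumptions:

    -- bulk load a dbgen flat file (dbgen's trailing '|' may need stripping first)
    COPY lineitem FROM '/tmp/lineitem.tbl' WITH DELIMITER '|';

    -- insert-select path, exercising the executor and WAL
    CREATE TABLE lineitem_copy (LIKE lineitem);
    INSERT INTO lineitem_copy SELECT * FROM lineitem;

    -- create-table-as-select path
    CREATE TABLE lineitem_ctas AS SELECT * FROM lineitem;

Timing each statement (\timing in psql) against the known input size gives the MB/s figures being discussed.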
{
"msg_contents": "On 2/14/07, Luke Lonergan <[email protected]> wrote:\n>\n> Here's one:\n>\n> Insert performance is limited to about 10-12 MB/s no matter how fast the\n> underlying I/O hardware. Bypassing the WAL (write ahead log) only boosts\n> this to perhaps 20 MB/s. We've found that the biggest time consumer in the\n> profile is the collection of routines that \"convert to datum\".\n>\n> You can perform the test using any dataset, you might consider using the\n> TPC-H benchmark kit with a data generator available at www.tpc.org. Just\n> generate some data, load the schema, then perform some COPY statements,\n> INSERT INTO SELECT FROM and CREATE TABLE AS SELECT.\n\nI am curious what is your take on the maximum insert performance, in\nmb/sec of large bytea columns (toasted), and how much if any greenplum\nwas able to advance this over the baseline. I am asking on behalf of\nanother interested party. Interested in numbers broken down per core\non 8 core quad system and also aggreate.\n\nmerlin\n",
"msg_date": "Wed, 14 Feb 2007 11:20:53 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PGSQL?"
},
{
"msg_contents": "Hi Merlin,\n\nOn 2/14/07 8:20 AM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> I am curious what is your take on the maximum insert performance, in\n> mb/sec of large bytea columns (toasted), and how much if any greenplum\n> was able to advance this over the baseline. I am asking on behalf of\n> another interested party. Interested in numbers broken down per core\n> on 8 core quad system and also aggreate.\n\nOur approach is to attach a segment to each core, so we scale INSERT\nlinearly on number of cores. So the per core limit we live with is the\n10-20MB/s observed here. We'd like to improve that so that we get better\nperformance with smaller machines.\n\nWe have demonstrated insert performance of 670 MB/s, 2.4TB/hour for\nnon-toasted columns using 3 load machines against 120 cores. This rate was\nload machine limited.\n\nWRT toasted bytea columns we haven't done any real benchmarking of those.\nDo you have a canned benchmark we can run?\n\n- Luke \n\n\n",
"msg_date": "Wed, 14 Feb 2007 10:23:38 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PGSQL?"
},
{
"msg_contents": "Krisna,\n\n> I'm a performance engineer, quite interested in getting deep into the\n> PGSQL performance enhancement effort. In that regard, I have the\n> following questions :\n> 1. Is there a benchmarking setup, that I can access online?\n> 2. What benchmarks are we running , for performance numbers?\n> 3. What are the current issues, related to performance?\n> 4. Where can I start, with the PGSQL performance effort?\n\nHey, I work for Sun and we've been working on PostgreSQL & benchmarks. \nHopefully we will soon have a server which runs Spec benchmarks which the \ncommunity can legally use for testing (it's waiting on some setup issues).\n\nHelp we could use right now includes work on an open source TPCE-like \nworkload, being run by Rilson and Mark Wong. Another issue which could \nuse help is reducing our WAL transaction log volume; it's higher than any \nother database. Or you could work on multi-processor scalability; we are \nstill trying to identify the bottlenecks which are holding us back there.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Wed, 14 Feb 2007 15:42:55 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PGSQL?"
},
{
"msg_contents": "On 2/14/07, Luke Lonergan <[email protected]> wrote:\n> Hi Merlin,\n>\n> On 2/14/07 8:20 AM, \"Merlin Moncure\" <[email protected]> wrote:\n>\n> > I am curious what is your take on the maximum insert performance, in\n> > mb/sec of large bytea columns (toasted), and how much if any greenplum\n> > was able to advance this over the baseline. I am asking on behalf of\n> > another interested party. Interested in numbers broken down per core\n> > on 8 core quad system and also aggreate.\n>\n> Our approach is to attach a segment to each core, so we scale INSERT\n> linearly on number of cores. So the per core limit we live with is the\n> 10-20MB/s observed here. We'd like to improve that so that we get better\n> performance with smaller machines.\n>\n> We have demonstrated insert performance of 670 MB/s, 2.4TB/hour for\n> non-toasted columns using 3 load machines against 120 cores. This rate was\n> load machine limited.\n>\n> WRT toasted bytea columns we haven't done any real benchmarking of those.\n> Do you have a canned benchmark we can run?\n\nInterested in how fast you can insert binary objects (images, files,\netc). into the database as a file storage system. Ultimately the\ninsertions would all be done via libpq ExecPrepared/Params. A simple\nbenchmark such as insert a 1mb object via pg_bench over 20 or so\nconnections would be fine. Mostly interested in raw throughput per\ncore and especially interested if you can beat stock pg on the same\nhardware.\n\nmerlin\n",
"msg_date": "Thu, 15 Feb 2007 08:39:10 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PGSQL?"
},
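A SQL-level approximation of the test Merlin describes (a real benchmark would bind the value through libpq's prepared-statement calls); the table is hypothetical and the payload is generated server-side purely for illustration:

    CREATE TABLE file_store (id serial PRIMARY KEY, payload bytea);

    PREPARE ins_file (bytea) AS
        INSERT INTO file_store (payload) VALUES ($1);

    -- roughly 1 MB of dummy bytes; a client would send real file contents instead
    EXECUTE ins_file (decode(repeat('deadbeef', 262144), 'hex'));

Running many such inserts concurrently and dividing total bytes by elapsed time would give the per-core and aggregate MB/s numbers asked about.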
{
"msg_contents": "Thanks for the tips, Josh. Would you know of where I can find the TpC-E\neffort online? I've looked, and have only found references to the 'summer of\ncode' project that Riklas was doing/mentoring.\n\nAlso, I'm going to spend some time digging into the code, for the WAL log\nissue you mentioned. Let's see what I can come up with... (I must confess\nI'm not a Guru with Database internals, but I'll give this my best...)\n\nKrishna\n\nOn 2/15/07, Josh Berkus <[email protected]> wrote:\n>\n> Krisna,\n>\n> > I'm a performance engineer, quite interested in getting deep into the\n> > PGSQL performance enhancement effort. In that regard, I have the\n> > following questions :\n> > 1. Is there a benchmarking setup, that I can access online?\n> > 2. What benchmarks are we running , for performance numbers?\n> > 3. What are the current issues, related to performance?\n> > 4. Where can I start, with the PGSQL performance effort?\n>\n> Hey, I work for Sun and we've been working on PostgreSQL & benchmarks.\n> Hopefully we will soon have a server which runs Spec benchmarks which the\n> community can legally use for testing (it's waiting on some setup issues).\n>\n> Help we could use right now includes work on an open source TPCE-like\n> workload, being run by Rilson and Mark Wong. Another issue which could\n> use help is reducing our WAL transaction log volume; it's higher than any\n> other database. Or you could work on multi-processor scalability; we are\n> still trying to identify the bottlenecks which are holding us back there.\n>\n> --\n> --Josh\n>\n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n>\n\nThanks for the tips, Josh. Would you know of where I can find the TpC-E effort online? I've looked, and have only found references to the 'summer of code' project that Riklas was doing/mentoring. Also, I'm going to spend some time digging into the code, for the WAL log issue you mentioned. Let's see what I can come up with... (I must confess I'm not a Guru with Database internals, but I'll give this my best...)\nKrishna On 2/15/07, Josh Berkus <[email protected]> wrote:\nKrisna,> I'm a performance engineer, quite interested in getting deep into the> PGSQL performance enhancement effort. In that regard, I have the> following questions :> 1. Is there a benchmarking setup, that I can access online?\n> 2. What benchmarks are we running , for performance numbers?> 3. What are the current issues, related to performance?> 4. Where can I start, with the PGSQL performance effort?Hey, I work for Sun and we've been working on PostgreSQL & benchmarks.\nHopefully we will soon have a server which runs Spec benchmarks which thecommunity can legally use for testing (it's waiting on some setup issues).Help we could use right now includes work on an open source TPCE-like\nworkload, being run by Rilson and Mark Wong. Another issue which coulduse help is reducing our WAL transaction log volume; it's higher than anyother database. Or you could work on multi-processor scalability; we are\nstill trying to identify the bottlenecks which are holding us back there.----JoshJosh BerkusPostgreSQL @ SunSan Francisco",
"msg_date": "Fri, 16 Feb 2007 11:39:39 +0530",
"msg_from": "\"Krishna Kumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking PGSQL?"
}
] |
[
{
"msg_contents": "Hi Tom,\n\nSorry, I didn't ask the right question. I meant to ask \"Why does it estimate a smaller cost for the seqscan?\"\n\nWith some further staring I was able to find the bad estimate and fix it by increasing the relevant statistics target.\n\nThanks,\nBrian\n\n----- Original Message ----\nFrom: Tom Lane <[email protected]>\nTo: Brian Herlihy <[email protected]>\nCc: Postgresql Performance <[email protected]>\nSent: Wednesday, 14 February, 2007 4:53:54 PM\nSubject: Re: [PERFORM] An unwanted seqscan \n\nBrian Herlihy <[email protected]> writes:\n> I am having trouble understanding why a seqscan is chosen for this query.\n\nAs far as anyone can see from this output, the planner's decisions are\ncorrect: it prefers the plans with the smaller estimated cost. If you\nwant us to take an interest, provide some more context --- EXPLAIN\nANALYZE output for starters.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n\n",
"msg_date": "Wed, 14 Feb 2007 17:30:46 -0800 (PST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: An unwanted seqscan"
}
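A sketch of the kind of change Brian describes; the column and the target value here are assumptions, since the thread doesn't say which statistics target was raised:

    ALTER TABLE meta_keywords ALTER COLUMN url_id SET STATISTICS 500;
    ANALYZE meta_keywords;

The higher target makes ANALYZE keep a larger sample for that column, which is what corrects the bad row estimate he mentions.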
] |
[
{
"msg_contents": "Hi,\n\nI am using PostgreSQL for benchmarking and I study the following query: \n\nSELECT *\nFROM k10_1, k10_2, k10_3, k10_4, k10_5 \nWHERE k10_1.times4 = k10_2.times4\nAND k10_2.times4 = k10_3.times4\nAND k10_3.times4 = k10_4.times4\nAND k10_4.times4 = k10_5.times4\n\nThe used schema for all the tables is:\n\n Column | Type | Modifiers \n---------+---------------+-----------\n unique1 | integer | not null\n unique2 | integer | \n times4 | integer | \n times10 | integer | \n ten | integer | \n stringu | character(52) | \nIndexes:\n \"k10_*_pkey\" PRIMARY KEY, btree (unique1)\n\n\nEach table has 10000 tuples of 72 bytes each (constant). The field\ntimes4 in every table is valued in [0,2500), each value appearing\nexactly four times but in random order. It is easy to extract that the\nresult has exactly 2,560,000 tuples or approximate size 185 MB. The\ndatabase has been 'VACUUM FULL'-ed and is static.\n\nWhen I execute this query with EXPLAIN ANALYSE, the query is executed in\n10-20 sec and consumes only 8Mb of memory, depending to the machine (I\nhave tried it on P4-2.0GHz, P4-2.2GHz and Athlon 4200++ 64x2, all with 2\nGb RAM and Linux OS, Ubuntu Edgy or Fedora 6). However, when I execute\nexactly the same query normally and direct the output to /dev/null,\nPostgreSQL consumes all the available memory (RAM and swap), and the\nquery cannot be executed, as I receive the message 'Out of memory'. The\nsame thing happened to all the machines. I have tried to adjust working\nmemory and shared buffers but it still performed in the same way.\n\nSince this is not exactly an extreme query, as its input is 5 tables\nwith 10 thousands tuples and its output is 2.6 millions, it seems that a\nproblem exists in this case. I would like to pose the following\nquestions:\n\n1. Why PostgreSQL fails to execute the query? Is there any parameter\nthat specifies when the buffer manager tries to store intermediate and\nfinal results to the disc and how much of disk space it can occupy for\ntemporary results?\n\n2. How does EXPLAIN ANALYSE work? Does it create all the intermediate\nresults as in the normal execution? Does it call the print function at\nthe final result and direct the output to /dev/null or it doesn't call\nit at all? This is important as, if the cardinality of the final result\nis high, the print function callls impose a significant penalty on\nexecution time\n\n3. Is there any way of executing a query without materialising the final\nresult but only the intermediate results, if the query plans demand it? \n\nI hope you can enlighten me with these questions.\n\nKind regards,\nKonstantinos Krikellas\nPhD student, Database Group\nUniversity of Edinburgh\nEmail: [email protected]\nPnone number: +44 (0) 131 651 3769 \n\n\n\n\n\n\n\nHi,\n\nI am using PostgreSQL for benchmarking and I study the following query: \n\nSELECT *\nFROM k10_1, k10_2, k10_3, k10_4, k10_5 \nWHERE k10_1.times4 = k10_2.times4\nAND k10_2.times4 = k10_3.times4\nAND k10_3.times4 = k10_4.times4\nAND k10_4.times4 = k10_5.times4\n\nThe used schema for all the tables is:\n\n Column | Type | Modifiers \n---------+---------------+-----------\n unique1 | integer | not null\n unique2 | integer | \n times4 | integer | \n times10 | integer | \n ten | integer | \n stringu | character(52) | \nIndexes:\n \"k10_*_pkey\" PRIMARY KEY, btree (unique1)\n\n\nEach table has 10000 tuples of 72 bytes each (constant). The field times4 in every table is valued in [0,2500), each value appearing exactly four times but in random order. 
It is easy to extract that the result has exactly 2,560,000 tuples or approximate size 185 MB. The database has been 'VACUUM FULL'-ed and is static.\n\nWhen I execute this query with EXPLAIN ANALYSE, the query is executed in 10-20 sec and consumes only 8Mb of memory, depending to the machine (I have tried it on P4-2.0GHz, P4-2.2GHz and Athlon 4200++ 64x2, all with 2 Gb RAM and Linux OS, Ubuntu Edgy or Fedora 6). However, when I execute exactly the same query normally and direct the output to /dev/null, PostgreSQL consumes all the available memory (RAM and swap), and the query cannot be executed, as I receive the message 'Out of memory'. The same thing happened to all the machines. I have tried to adjust working memory and shared buffers but it still performed in the same way.\n\nSince this is not exactly an extreme query, as its input is 5 tables with 10 thousands tuples and its output is 2.6 millions, it seems that a problem exists in this case. I would like to pose the following questions:\n\n1. Why PostgreSQL fails to execute the query? Is there any parameter that specifies when the buffer manager tries to store intermediate and final results to the disc and how much of disk space it can occupy for temporary results?\n\n2. How does EXPLAIN ANALYSE work? Does it create all the intermediate results as in the normal execution? Does it call the print function at the final result and direct the output to /dev/null or it doesn't call it at all? This is important as, if the cardinality of the final result is high, the print function callls impose a significant penalty on execution time\n\n3. Is there any way of executing a query without materialising the final result but only the intermediate results, if the query plans demand it? \n\nI hope you can enlighten me with these questions.\n\nKind regards,\n\n\n\n\nKonstantinos Krikellas\nPhD student, Database Group\nUniversity of Edinburgh\nEmail: [email protected]\nPnone number: +44 (0) 131 651 3769",
"msg_date": "Thu, 15 Feb 2007 12:58:33 +0000",
"msg_from": "Konstantinos Krikellas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with joining queries."
},
{
"msg_contents": "Konstantinos Krikellas wrote:\n> Hi,\n> \n> I am using PostgreSQL for benchmarking and I study the following query: \n> \n> SELECT *\n> FROM k10_1, k10_2, k10_3, k10_4, k10_5 \n> WHERE k10_1.times4 = k10_2.times4\n> AND k10_2.times4 = k10_3.times4\n> AND k10_3.times4 = k10_4.times4\n> AND k10_4.times4 = k10_5.times4\n\n> Each table has 10000 tuples of 72 bytes each (constant). The field\n\nThat's 72 bytes plus about 24 for each row header. Plus 4 bytes for the \ntext length, and you can't assume each character is only one byte if \nyou're using UTF-8 or similar.\n\n> times4 in every table is valued in [0,2500), each value appearing\n> exactly four times but in random order. It is easy to extract that the\n> result has exactly 2,560,000 tuples or approximate size 185 MB. The\n> database has been 'VACUUM FULL'-ed and is static.\n> \n> When I execute this query with EXPLAIN ANALYSE, the query is executed in\n> 10-20 sec and consumes only 8Mb of memory, depending to the machine (I\n> have tried it on P4-2.0GHz, P4-2.2GHz and Athlon 4200++ 64x2, all with 2\n> Gb RAM and Linux OS, Ubuntu Edgy or Fedora 6). However, when I execute\n> exactly the same query normally and direct the output to /dev/null,\n> PostgreSQL consumes all the available memory (RAM and swap), and the\n> query cannot be executed, as I receive the message 'Out of memory'. The\n> same thing happened to all the machines. I have tried to adjust working\n> memory and shared buffers but it still performed in the same way.\n\nNot happening here (8.2.x, output redirected using \"\\o /dev/null\") - are \nyou sure it's not psql (or whatever client) that's using up your memory, \nas it tries to build the entire result set before sending it to \n/dev/null? Don't forget, you've got 5 copies of the columns so that \nwould be ~ 700MB.\n\nIf it is the backend, you'll need to give some of the tuning parameters \nyou're using, since it works here on my much smaller dev server (1GB RAM \nand plenty of other stuff using it).\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 15 Feb 2007 14:01:42 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with joining queries."
},
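For reference, the redirection Richard mentions is done from inside psql, here with the query from the thread; \o with no argument switches output back to stdout:

    \o /dev/null
    SELECT *
    FROM k10_1, k10_2, k10_3, k10_4, k10_5
    WHERE k10_1.times4 = k10_2.times4
      AND k10_2.times4 = k10_3.times4
      AND k10_3.times4 = k10_4.times4
      AND k10_4.times4 = k10_5.times4;
    \o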
{
"msg_contents": "> Not happening here (8.2.x, output redirected using \"\\o /dev/null\") - are \n> you sure it's not psql (or whatever client) that's using up your memory, \n> as it tries to build the entire result set before sending it to \n> /dev/null? Don't forget, you've got 5 copies of the columns so that \n> would be ~ 700MB.\n> \n> If it is the backend, you'll need to give some of the tuning parameters \n> you're using, since it works here on my much smaller dev server (1GB RAM \n> and plenty of other stuff using it).\n\n\nYou are absolutely right about the result size.\n\nI tried the '\\o /dev/null' and worked, I had not realised the client is\nbuffering the final result. \nInstead, I used to execute the command 'psql < query.sql > /dev/null',\nso the psql process consumed all the available memory.\n\nThank you very much for your immediate and felicitous response.\n\nKind regards, \nKonstantinos Krikellas\nPhD student, Database Group\nUniversity of Edinburgh\nEmail: [email protected]\nPnone number: +44 (0) 131 651 3769 \n\n\n\n\n\n\n\n\n\nNot happening here (8.2.x, output redirected using \"\\o /dev/null\") - are \nyou sure it's not psql (or whatever client) that's using up your memory, \nas it tries to build the entire result set before sending it to \n/dev/null? Don't forget, you've got 5 copies of the columns so that \nwould be ~ 700MB.\n\nIf it is the backend, you'll need to give some of the tuning parameters \nyou're using, since it works here on my much smaller dev server (1GB RAM \nand plenty of other stuff using it).\n\n\n\nYou are absolutely right about the result size.\n\nI tried the '\\o /dev/null' and worked, I had not realised the client is buffering the final result. \nInstead, I used to execute the command 'psql < query.sql > /dev/null', so the psql process consumed all the available memory.\n\nThank you very much for your immediate and felicitous response.\n\nKind regards, \n\n\n\n\nKonstantinos Krikellas\nPhD student, Database Group\nUniversity of Edinburgh\nEmail: [email protected]\nPnone number: +44 (0) 131 651 3769",
"msg_date": "Thu, 15 Feb 2007 14:28:27 +0000",
"msg_from": "Konstantinos Krikellas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem with joining queries."
}
] |
[
{
"msg_contents": "Hi\n\nI'm using posstgresql 8.1.4 on linux 2.6\nshared_buffers = tested with 3000 and 10000\ntemp_buffers = 1000\nwork_mem = 4096\neffective_cache_size = 65536\nrandom_page_cost = 2\n\nI have a query which I think is anormaly slow with � 'OR'\n\n\nselect count(*) from client_contact\nleft join client_company using(cli_id)\nwhere (cli_mail = '[email protected]') OR\n(lower(cli_nom) = 'xxxxxx' and zipcode = '10001');\n\nif I split this query in 2 query like this\n\nfirst\nselect count(*) from client_contact\nleft join client_company using(cli_id)\nwhere (cli_mail = '[email protected]')\n\nsecond\nselect count(*) from client_contact\nleft join client_company using(cli_id)\nwhere (lower(cli_nom) = 'xxxxxx' and zipcode = '10001');\n\neach query are under 100 ms\n\nWhy postgresql think scanning index on cli_nom and cli_mail is not a good thing\nwith the OR clause ?\n\n\nI hope you can help me understanding the problem\nregards,\n\n\n\nexplain analyse\nselect count(*) from client_contact\nleft join client_company using(cli_id)\nwhere (cli_mail = '[email protected]') OR\n(lower(cli_nom) = 'xxxxxx' and zipcode = '10001');\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=37523.98..37523.99 rows=1 width=0) (actual\ntime=3871.086..3871.087 rows=1 loops=1)\n -> Merge Left Join (cost=0.00..36719.10 rows=321952 width=0) (actual\ntime=3871.058..3871.058 rows=0 loops=1)\n Merge Cond: (\"outer\".cli_id = \"inner\".cli_id)\n Filter: (((\"outer\".cli_mail)::text = '[email protected]'::text) OR\n((lower((\"outer\".cli_nom)::text) = 'xxxxxx'::text) AND ((\"inner\".zipcode)::text\n= '10001'::text)))\n -> Index Scan using client_pkey on client_contact\n(cost=0.00..14801.29 rows=321952 width=38) (actual time=0.110..1130.134\nrows=321152 loops=1)\n -> Index Scan using client_company_cli_id_idx on client_company\n(cost=0.00..13891.30 rows=321114 width=12) (actual time=0.097..1171.905\nrows=321152 loops=1)\n Total runtime: 3871.443 ms\n\nexplain analyse\nselect count(*) from client_contact\nleft join client_company using(cli_id)\nwhere (cli_mail = '[email protected]')\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2750.11..2750.12 rows=1 width=0) (actual time=23.930..23.932\nrows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..2750.08 rows=11 width=0) (actual\ntime=23.918..23.918 rows=0 loops=1)\n -> Index Scan using email_client on client_contact\n(cost=0.00..2711.33 rows=11 width=4) (actual time=23.913..23.913 rows=0\nloops=1)\n Index Cond: ((cli_mail)::text = '[email protected]'::text)\n -> Index Scan using client_company_cli_id_idx on client_company\n(cost=0.00..3.51 rows=1 width=4) (never executed)\n Index Cond: (\"outer\".cli_id = client_company.cli_id)\n Total runtime: 24.018 ms\n\n\nexplain analyse\nselect count(*) from client_contact\nleft join client_company using(cli_id)\nwhere\n(lower(cli_nom) = 'xxxxxx' and zipcode = '10001');\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=107.18..107.19 rows=1 width=0) (actual time=84.935..84.936\nrows=1 loops=1)\n -> Nested Loop (cost=0.00..107.17 rows=1 width=0) (actual\ntime=84.928..84.928 rows=0 loops=1)\n -> Index Scan using 
client_contact_cli_nom_idx on client_contact\n(cost=0.00..40.19 rows=19 width=4) (actual time=84.832..84.835 rows=1 loops=1)\n Index Cond: (lower((cli_nom)::text) = 'xxxxxx'::text)\n -> Index Scan using client_company_cli_id_idx on client_company\n(cost=0.00..3.51 rows=1 width=4) (actual time=0.083..0.083 rows=0 loops=1)\n Index Cond: (\"outer\".cli_id = client_company.cli_id)\n Filter: ((zipcode)::text = '10001'::text)\n Total runtime: 85.013 ms\n",
"msg_date": "Thu, 15 Feb 2007 15:36:53 +0100",
"msg_from": "philippe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query with 'or' clause"
},
{
"msg_contents": "philippe wrote:\n> explain analyse\n> select count(*) from client_contact\n> left join client_company using(cli_id)\n> where (cli_mail = '[email protected]') OR\n> (lower(cli_nom) = 'xxxxxx' and zipcode = '10001');\n> \n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=37523.98..37523.99 rows=1 width=0) (actual\n> time=3871.086..3871.087 rows=1 loops=1)\n> -> Merge Left Join (cost=0.00..36719.10 rows=321952 width=0) (actual\n> time=3871.058..3871.058 rows=0 loops=1)\n\nThis is the root of the problem - it's expecting to match over 320000 \nrows rather than 0.\n\nI'm guessing there's a lot of correlation between cli_mail and cli_nom \n(you're expecting them to match the same clients) but the planner \ndoesn't know this.\n\nIf this is a common query, you could try an index on zipcode - that \nmight cut down the other side.\n\nHowever, I have to ask why you're using a left-join? Do you really have \nrows in client_contact without a matching cli_id in client_company?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 15 Feb 2007 15:44:33 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with 'or' clause"
},
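The advice above can be turned into a concrete rewrite. A minimal sketch, assuming the table and column names from philippe's post, that cli_id is the primary key of client_contact, and that every contact really has a matching client_company row (so the LEFT JOIN can become a plain JOIN): the OR is split into a UNION so each branch can use its own index, and the suggested index on zipcode narrows the second branch.

-- Index suggested in the reply; the name is illustrative.
CREATE INDEX client_company_zipcode_idx ON client_company (zipcode);

-- The OR rewritten as a UNION of two index-friendly branches; UNION removes
-- duplicates, so a contact matching both conditions is still counted once.
SELECT count(*) FROM (
    SELECT cli_id
      FROM client_contact
      JOIN client_company USING (cli_id)
     WHERE cli_mail = '[email protected]'
    UNION
    SELECT cli_id
      FROM client_contact
      JOIN client_company USING (cli_id)
     WHERE lower(cli_nom) = 'xxxxxx'
       AND zipcode = '10001'
) AS hits;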
{
"msg_contents": "Selon Richard Huxton <[email protected]>:\n\n> philippe wrote:\n> > explain analyse\n> > select count(*) from client_contact\n> > left join client_company using(cli_id)\n> > where (cli_mail = '[email protected]') OR\n> > (lower(cli_nom) = 'xxxxxx' and zipcode = '10001');\n> >\n> > QUERY PLAN\n> >\n>\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Aggregate (cost=37523.98..37523.99 rows=1 width=0) (actual\n> > time=3871.086..3871.087 rows=1 loops=1)\n> > -> Merge Left Join (cost=0.00..36719.10 rows=321952 width=0) (actual\n> > time=3871.058..3871.058 rows=0 loops=1)\n>\n> This is the root of the problem - it's expecting to match over 320000\n> rows rather than 0.\n>\n> I'm guessing there's a lot of correlation between cli_mail and cli_nom\n> (you're expecting them to match the same clients) but the planner\n> doesn't know this.\n>\n> If this is a common query, you could try an index on zipcode - that\n> might cut down the other side.\n>\n> However, I have to ask why you're using a left-join? Do you really have\n> rows in client_contact without a matching cli_id in client_company?\n>\n\nYou are right, I was focused on server perf and I should have analysed my query.\n\nQuery time is ok now.\n\nthanks you !!\n\n\n\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n\n",
"msg_date": "Thu, 15 Feb 2007 18:10:45 +0100",
"msg_from": "philippe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query with 'or' clause"
}
] |
[
{
"msg_contents": "Hello,\n\n\tI'm experiencing some very unusual speed problems while executing a \nparticular query. Appending an empty string to one of the fields in \nthe query speeds up the execution by a 1000 fold. According to the \nplanner this alternate query is much more costly too. Here are the \nEXPLAIN ANALYZE results of the two queries (the preferred one first, \nthe hack one second):\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--------------------------------\nSort (cost=1721.66..1721.66 rows=1 width=818) (actual \ntime=3163.430..3163.433 rows=4 loops=1)\n Sort Key: COALESCE(keys.word2, keys.word1)\n -> Nested Loop Left Join (cost=1685.65..1721.65 rows=1 \nwidth=818) (actual time=3161.971..3163.375 rows=4 loops=1)\n -> Hash Join (cost=1685.65..1716.71 rows=1 width=813) \n(actual time=3160.479..3160.784 rows=4 loops=1)\n Hash Cond: (\"outer\".pbx_id = \"inner\".pbx_id)\n -> Nested Loop (cost=1681.61..1712.64 rows=3 \nwidth=813) (actual time=3093.394..3160.398 rows=40 loops=1)\n -> Nested Loop (cost=1681.61..1700.79 rows=1 \nwidth=817) (actual time=3093.185..3158.959 rows=40 loops=1)\n -> Bitmap Heap Scan on keys \n(cost=1681.61..1697.63 rows=1 width=21) (actual \ntime=3092.935..3156.989 rows=40 loops=1)\n Recheck Cond: (((word0)::text = \n'2727'::text) AND ((feature)::text = 'ACD'::text))\n Filter: ((number = 0) AND (word1 IS \nNOT NULL))\n -> BitmapAnd \n(cost=1681.61..1681.61 rows=4 width=0) (actual \ntime=3090.806..3090.806 rows=0 loops=1)\n -> Bitmap Index Scan on \nkeys_word0_idx (cost=0.00..5.00 rows=573 width=0) (actual \ntime=2.295..2.295 rows=2616 loops=1)\n Index Cond: \n((word0)::text = '2727'::text)\n -> Bitmap Index Scan on \nkeys_feature_idx (cost=0.00..1676.36 rows=277530 width=0) (actual \ntime=3086.875..3086.875 rows=276590 loops=1)\n Index Cond: \n((feature)::text = 'ACD'::text)\n -> Index Scan using tnb_pkey on tnb \n(cost=0.00..3.15 rows=1 width=796) (actual time=0.043..0.045 rows=1 \nloops=40)\n Index Cond: (\"outer\".tnb_id = \ntnb.tnb_id)\n -> Index Scan using has_tn_tnb_id_idx on \nhas_tn (cost=0.00..11.81 rows=3 width=12) (actual time=0.028..0.032 \nrows=1 loops=40)\n Index Cond: (\"outer\".tnb_id = has_tn.tnb_id)\n -> Hash (cost=4.03..4.03 rows=1 width=4) (actual \ntime=0.257..0.257 rows=1 loops=1)\n -> Index Scan using request_to_pbx_ids_pkey on \nrequest_to_pbx_ids (cost=0.00..4.03 rows=1 width=4) (actual \ntime=0.217..0.221 rows=1 loops=1)\n Index Cond: (request_id = 206335)\n -> Index Scan using names_pkey on \n\"names\" (cost=0.00..4.91 rows=1 width=29) (actual time=0.059..0.059 \nrows=0 loops=4)\n Index Cond: ((\"outer\".pbx_id = \"names\".pbx_id) AND \n((\"outer\".primary_dn)::text = (\"names\".primary_dn)::text))\nTotal runtime: 3164.147 ms\n(25 rows)\n\n\n\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--------------------------\nSort (cost=12391.96..12391.96 rows=1 width=818) (actual \ntime=4.020..4.023 rows=4 loops=1)\n Sort Key: COALESCE(keys.word2, keys.word1)\n -> Nested Loop Left Join (cost=0.00..12391.95 rows=1 width=818) \n(actual time=2.796..3.969 rows=4 loops=1)\n -> Nested Loop (cost=0.00..12387.01 rows=1 width=813) \n(actual time=2.055..2.234 rows=4 loops=1)\n -> Nested Loop (cost=0.00..12383.85 rows=1 \nwidth=33) (actual time=2.026..2.138 rows=4 loops=1)\n -> Nested Loop 
(cost=0.00..7368.29 rows=591 \nwidth=12) (actual time=0.469..0.698 rows=73 loops=1)\n -> Index Scan using \nrequest_to_pbx_ids_pkey on request_to_pbx_ids (cost=0.00..4.03 \nrows=1 width=4) (actual time=0.215..0.217 rows=1 loops=1)\n Index Cond: (request_id = 206335)\n -> Index Scan using has_tn_pkey on \nhas_tn (cost=0.00..7319.43 rows=3586 width=12) (actual \ntime=0.241..0.401 rows=73 loops=1)\n Index Cond: (has_tn.pbx_id = \n\"outer\".pbx_id)\n -> Index Scan using keys_pkey on keys \n(cost=0.00..8.47 rows=1 width=21) (actual time=0.018..0.018 rows=0 \nloops=73)\n Index Cond: ((keys.tnb_id = \n\"outer\".tnb_id) AND (keys.number = 0))\n Filter: (((feature)::text = 'ACD'::text) \nAND (((word0)::text || ''::text) = '2727'::text) AND (word1 IS NOT \nNULL))\n -> Index Scan using tnb_pkey on tnb \n(cost=0.00..3.15 rows=1 width=796) (actual time=0.018..0.020 rows=1 \nloops=4)\n Index Cond: (\"outer\".tnb_id = tnb.tnb_id)\n -> Index Scan using names_pkey on \n\"names\" (cost=0.00..4.91 rows=1 width=29) (actual time=0.017..0.017 \nrows=0 loops=4)\n Index Cond: ((\"outer\".pbx_id = \"names\".pbx_id) AND \n((\"outer\".primary_dn)::text = (\"names\".primary_dn)::text))\nTotal runtime: 4.624 ms\n(18 rows)\n\nAs you can see the second query executes in about 4 ms as opposed to \n3000 ms.\nThe actual query is...\nSELECT name, CASE WHEN raw_tn ~ '\\n +SPV *\\n' THEN '✓' ELSE '' \nEND AS supervisor, COALESCE(word2, word1) AS agent_id, des, tn_string \n(loop,shelf,card,unit) AS tn, display_type(tnb_id) FROM keys JOIN \nhas_tn USING (tnb_id) JOIN tnb USING (tnb_id) JOIN request_to_pbx_ids \nUSING (pbx_id) LEFT JOIN names USING (pbx_id, primary_dn) WHERE \nrequest_id = 206335 AND number = 0 AND feature = 'ACD' AND word0 = \n'2727' AND word1 IS NOT NULL ORDER BY agent_id USING <;\n\nall that is done to obtain the second query is appending '' to word0 \nin the WHERE clause, so it becomes word0 || '' = '2727'\n\nThis word0 field is index and the planner returns what I consider to \nbe the correct plan. The plan for the second query has a much higher \ncost, but it actually runs much faster. Autovacuum is turned on and \nthe database is vacuumed every 4 hrs. 
and I just tried reindexing \nthe whole table since there is very sparse data in the 40 million row \ntable and the results remained the same.\n\nHere are some of the settings for postgres...\nautovacuum | \non | Starts the autovacuum \nsubprocess.\nautovacuum_analyze_scale_factor | \n0.2 | Number of tuple \ninserts, updates or deletes prior to analyze as a fract\nion of reltuples.\nautovacuum_analyze_threshold | \n500 | Minimum number of tuple \ninserts, updates or deletes prior to analyze.\nautovacuum_naptime | \n60 | Time to sleep between \nautovacuum runs, in seconds.\nautovacuum_vacuum_cost_delay | \n-1 | Vacuum cost delay in \nmilliseconds, for autovacuum.\nautovacuum_vacuum_cost_limit | \n-1 | Vacuum cost amount \navailable before napping, for autovacuum.\nautovacuum_vacuum_scale_factor | \n0.4 | Number of tuple updates \nor deletes prior to vacuum as a fraction of rel\ntuples.\nautovacuum_vacuum_threshold | \n1000 | Minimum number of tuple \nupdates or deletes prior to vacuum.\nbackslash_quote | \nsafe_encoding | Sets whether \"\\'\" is \nallowed in string literals.\nbgwriter_all_maxpages | \n5 | Background writer \nmaximum number of all pages to flush per round\nbgwriter_all_percent | \n0.333 | Background writer \npercentage of all buffers to flush per round\nbgwriter_delay | \n200 | Background writer sleep \ntime between rounds in milliseconds\nbgwriter_lru_maxpages | \n5 | Background writer \nmaximum number of LRU pages to flush per round\nbgwriter_lru_percent | \n1 | Background writer \npercentage of LRU buffers to flush per round\nblock_size | 8192\nconstraint_exclusion | \noff | Enables the planner to \nuse constraints to optimize queries.\ncpu_index_tuple_cost | \n0.001 | Sets the planner's \nestimate of processing cost for each index tuple (ro\nw) during index scan.\ncpu_operator_cost | \n0.0025 | Sets the planner's \nestimate of processing cost of each operator in WHER\nE.\ncpu_tuple_cost | 0.01\nfull_page_writes | \non | Writes full pages to \nWAL when first modified after a checkpoint.\ngeqo | \non | Enables genetic query \noptimization.\ngeqo_effort | \n5 | GEQO: effort is used to \nset the default for other GEQO parameters.\ngeqo_generations | \n0 | GEQO: number of \niterations of the algorithm.\ngeqo_pool_size | \n0 | GEQO: number of \nindividuals in the population.\ngeqo_selection_bias | \n2 | GEQO: selective \npressure within the population.\ngeqo_threshold | 12\njoin_collapse_limit | 8\nserver_encoding | \nSQL_ASCII | Sets the server \n(database) character set encoding.\nserver_version | \n8.1.4 | Shows the server version.\nshared_buffers | 400\nvacuum_cost_delay | \n0 | Vacuum cost delay in \nmilliseconds.\nvacuum_cost_limit | \n200 | Vacuum cost amount \navailable before napping.\nvacuum_cost_page_dirty | \n20 | Vacuum cost for a page \ndirtied by vacuum.\nvacuum_cost_page_hit | \n1 | Vacuum cost for a page \nfound in the buffer cache.\nvacuum_cost_page_miss | \n10 | Vacuum cost for a page \nnot found in the buffer cache.\nwal_buffers | \n8 | Sets the number of disk- \npage buffers in shared memory for WAL.\nwal_sync_method | \nfsync | Selects the method used \nfor forcing WAL updates out to disk.\nwork_mem | \n1024 | Sets the maximum memory \nto be used for query workspaces.\nzero_damaged_pages | off\n\n\nThanks in advance for any help you can offer on this problem.\n\n-Mike\n\n",
"msg_date": "Thu, 15 Feb 2007 12:00:02 -0500",
"msg_from": "Mike Gargano <[email protected]>",
"msg_from_op": true,
"msg_subject": "strange issue for certain queries"
}
] |
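One way to attack the misestimate in the report above without the || '' workaround is to give the planner better statistics on the skewed columns. A sketch, assuming the keys table from the post; the statistics target of 200 is only an illustrative value, and the final EXPLAIN shows the poster's own workaround reduced to a minimal, self-contained statement so the two plans can be compared.

-- Finer-grained statistics for the columns whose selectivity is being misjudged.
ALTER TABLE keys ALTER COLUMN feature SET STATISTICS 200;
ALTER TABLE keys ALTER COLUMN word0 SET STATISTICS 200;
ANALYZE keys;

-- The workaround from the post: appending an empty string keeps the planner
-- from using the index on word0 for this condition, turning it into a filter.
EXPLAIN ANALYZE
SELECT tnb_id
  FROM keys
 WHERE number = 0
   AND feature = 'ACD'
   AND word0 || '' = '2727'
   AND word1 IS NOT NULL;

Independently of the plan choice, the configuration shown (shared_buffers = 400, work_mem = 1024) is far below what a 40-million-row table usually wants and is worth revisiting on its own.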
[
{
"msg_contents": "Hi List,\n\nI want to run a Select Query on a table. But i dont want the query to pick a\nindex defined on that table.\n\nSo can i instruct the planner not to pick that index.\n\n-- \nRegards\nGauri\n\nHi List,\n \nI want to run a Select Query on a table. But i dont want the query to pick a index defined on that table.\n \nSo can i instruct the planner not to pick that index.\n \n-- RegardsGauri",
"msg_date": "Fri, 16 Feb 2007 18:26:51 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Not Picking Index"
},
{
"msg_contents": "On Fri, Feb 16, 2007 at 06:26:51PM +0530, Gauri Kanekar wrote:\n> I want to run a Select Query on a table. But i dont want the query to pick a\n> index defined on that table.\n> \n> So can i instruct the planner not to pick that index.\n\nWhy don't you want the planner to use the index? Is there a specific\nindex you want to ignore or do you want the planner to ignore all\nindexes? What problem are you trying to solve?\n\n-- \nMichael Fuhr\n",
"msg_date": "Fri, 16 Feb 2007 07:28:05 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not Picking Index"
},
{
"msg_contents": "I want the planner to ignore a specific index.\nI am testing some query output. For that purpose i dont want the index.\nI that possible to ignore a index by the planner.\n\nOn 2/16/07, Michael Fuhr <[email protected]> wrote:\n\n> On Fri, Feb 16, 2007 at 06:26:51PM +0530, Gauri Kanekar wrote:\n> > I want to run a Select Query on a table. But i dont want the query to\n> pick a\n> > index defined on that table.\n> >\n> > So can i instruct the planner not to pick that index.\n>\n> Why don't you want the planner to use the index? Is there a specific\n> index you want to ignore or do you want the planner to ignore all\n> indexes? What problem are you trying to solve?\n>\n> --\n> Michael Fuhr\n>\n\n\n\n-- \nRegards\nGauri\n\nI want the planner to ignore a specific index.\nI am testing some query output. For that purpose i dont want the index.\nI that possible to ignore a index by the planner.\n \nOn 2/16/07, Michael Fuhr <[email protected]> wrote:\n\nOn Fri, Feb 16, 2007 at 06:26:51PM +0530, Gauri Kanekar wrote:> I want to run a Select Query on a table. But i dont want the query to pick a\n> index defined on that table.>> So can i instruct the planner not to pick that index.Why don't you want the planner to use the index? Is there a specificindex you want to ignore or do you want the planner to ignore all\nindexes? What problem are you trying to solve?--Michael Fuhr-- RegardsGauri",
"msg_date": "Fri, 16 Feb 2007 20:01:16 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Not Picking Index"
},
{
"msg_contents": "Gauri Kanekar escribi�:\n> I want the planner to ignore a specific index.\n> I am testing some query output. For that purpose i dont want the index.\n> I that possible to ignore a index by the planner.\n\nSure:\n\nBEGIN\nDROP INDEX foo\nSELECT ....\nROLLBACK\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 16 Feb 2007 11:46:39 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not Picking Index"
},
{
"msg_contents": "\"Gauri Kanekar\" <[email protected]> writes:\n> I want the planner to ignore a specific index.\n> I am testing some query output. For that purpose i dont want the index.\n> I that possible to ignore a index by the planner.\n\n\tbegin;\n\tdrop index soandso;\n\texplain analyze ...;\n\trollback;\n\nNote the DROP INDEX will acquire exclusive lock on the table, so this\nmight not be the greatest thing to do in a production environment.\nIn PG 8.2 and up there is a sneakier way to do it that won't acquire\nany more lock than the statement-under-test does:\n\n\tbegin;\n\tupdate pg_index set indisvalid = false\n\t where indexrelid = 'soandso'::regclass;\n\texplain analyze ...;\n\trollback;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Feb 2007 09:53:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not Picking Index "
},
{
"msg_contents": "On Fri, 2007-02-16 at 20:01 +0530, Gauri Kanekar wrote:\n> \n> I want the planner to ignore a specific index.\n> I am testing some query output. For that purpose i dont want the\n> index.\n> I that possible to ignore a index by the planner.\n\nIf the indexed field is an intger, add 0 to it.\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n",
"msg_date": "Fri, 16 Feb 2007 13:27:46 -0500",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not Picking Index"
},
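A minimal, self-contained sketch of the trick Brad describes, using a made-up table: wrapping the indexed column in any computation (adding 0 to an integer, appending an empty string to a text value) keeps the planner from matching the plain index on that column.

-- Hypothetical table and index, for illustration only.
CREATE TABLE t (id integer PRIMARY KEY, val text);
CREATE INDEX t_val_idx ON t (val);

-- The plain predicates can use the indexes:
EXPLAIN SELECT * FROM t WHERE id = 42;
EXPLAIN SELECT * FROM t WHERE val = 'foo';

-- The same predicates rewritten so neither index applies:
EXPLAIN SELECT * FROM t WHERE id + 0 = 42;
EXPLAIN SELECT * FROM t WHERE val || '' = 'foo';

This also bears on the question raised in the next message: the planner has no statistics for the rewritten expression, so the row estimate for that condition falls back to a default.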
{
"msg_contents": "On Fri, Feb 16, 2007 at 01:27:46PM -0500, Brad Nicholson wrote:\n> If the indexed field is an intger, add 0 to it.\n\nWon't that also invalidate the statistics?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 16 Feb 2007 19:32:04 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not Picking Index"
},
{
"msg_contents": "> Note the DROP INDEX will acquire exclusive lock on the table, so this\n> might not be the greatest thing to do in a production environment.\n> In PG 8.2 and up there is a sneakier way to do it that won't acquire\n> any more lock than the statement-under-test does:\n> \n> \tbegin;\n> \tupdate pg_index set indisvalid = false\n> \t where indexrelid = 'soandso'::regclass;\n> \texplain analyze ...;\n> \trollback;\n\nthis really smacks of that four-letter word that starts with h... -- i\nam glad we have finally come around on the subject :-) \n\nseriously, this is a great technique and an enormous time saver during\nquery optimization. thanks for sharing!\n\ngeorge\n",
"msg_date": "Fri, 16 Feb 2007 10:33:22 -0800",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not Picking Index "
},
{
"msg_contents": "\"George Pavlov\" <[email protected]> writes:\n>> In PG 8.2 and up there is a sneakier way to do it that won't acquire\n>> any more lock than the statement-under-test does:\n>> \n>> begin;\n>> update pg_index set indisvalid = false\n>> where indexrelid = 'soandso'::regclass;\n>> explain analyze ...;\n>> rollback;\n\n> this really smacks of that four-letter word that starts with h... -- i\n> am glad we have finally come around on the subject :-) \n\nindisvalid isn't a hint; it was necessary to make CREATE INDEX CONCURRENTLY\nwork. But if you want to (mis?)use it as a hint, you can ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Feb 2007 14:27:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not Picking Index "
}
] |
[
{
"msg_contents": "\n> This is very similar to the problem I posted to this list \n> yesterday. Apparently, if you append an empty string to the column \n> data in your WHERE clause it will force the planer to treat it as a \n> filter and not an index cond. It's extremely ugly, but this method \n> doesn't seem to be anymore elegant.\n>\n> -Mike\n> On Feb 16, 2007, at 9:46 AM, Alvaro Herrera wrote:\n>\n>> Gauri Kanekar escribi�:\n>>> I want the planner to ignore a specific index.\n>>> I am testing some query output. For that purpose i dont want the \n>>> index.\n>>> I that possible to ignore a index by the planner.\n>>\n>> Sure:\n>>\n>> BEGIN\n>> DROP INDEX foo\n>> SELECT ....\n>> ROLLBACK\n>>\n>> -- \n>> Alvaro Herrera http:// \n>> www.CommandPrompt.com/\n>> The PostgreSQL Company - Command Prompt, Inc.\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n>\n\n",
"msg_date": "Fri, 16 Feb 2007 11:06:57 -0500",
"msg_from": "Mike Gargano <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Not Picking Index"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm running a web application using Zope that obtains all data\nfrom a PostgreSQL 7.4 database (Debian Sarge system with package\n7.4.7-6sarge4 on an \"older\" Sparc machine, equipped with 2GB\nmemory and two processors E250 server). Once I did some performance\ntuning and found out that\n\n max_connections = 256\n shared_buffers = 131072\n sort_mem = 65536\n\nwould help for a certain application (that is now not running any\nmore on this machine, but I left these parameters in\n/etc/postgresql/postgresql.conf untouched.\n\nMy web application was running fine for years without any problem\nand the performance was satisfying. Some months ago I added a\ntable containing 4500000 data rows (all other used tables are\nsmaller by order of magnitudes) so nothing very large and this\ntable is not directly accessed in the web application (just some\ngenereated caching tables updated once a day. Some functions\nand small tables were added as well, but there was a stable\ncore over several years.\n\nSince about two weeks the application became *drastically* slower\nand I urgently have to bring back the old performance. As I said\nI'm talking about functions accessing tables that did not increased\nover several years and should behave more or less the same.\n\nI wonder whether adding tables and functions could have an influence\non other untouched parts and how to find out what makes the things\nslow that worked for years reliable and satisfying. My first try\nwas to switch back to the default settings of the current Debian\npackage maintainers /etc/postgresql/postgresql.conf leaving the\nparameters above untouched but this did not changed anything.\n\nI'm quite clueless even how to explain the problem correctly and\nI'm hoping you will at least find information enouth to ask me\n\"the right questions\" to find out the information you need to\ntrack down the performance problems.\n\nKind regards and thanks for any help\n\n Andreas.\n",
"msg_date": "Mon, 19 Feb 2007 11:50:21 +0100 (CET)",
"msg_from": "Andreas Tille <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to debug performance problems"
},
{
"msg_contents": "Andreas Tille wrote:\n> My web application was running fine for years without any problem\n> and the performance was satisfying. Some months ago I added a\n> table containing 4500000 data rows ...\n> \n> Since about two weeks the application became *drastically* slower\n> and I urgently have to bring back the old performance. As I said\n> I'm talking about functions accessing tables that did not increased\n> over several years and should behave more or less the same.\n\nDon't assume that the big table you added is the source of the problem. It might be, but more likely it's something else entirely. You indicated that the problem didn't coincide with creating the large table.\n\nThere are a number of recurring themes on this discussion group:\n\n * A long-running transaction keeps vacuum from working.\n\n * A table grows just enough to pass a threshold in the\n planner and a drastically different plan is generated.\n \n * An index has become bloated and/or corrupted, and you\n need to run the REINDEX command.\n\nAnd several other common problems.\n\nThe first thing is to find out which query is taking a lot of time. I'm no expert, but there have been several explanations on this forum recently how to find your top time-consuming queries. Once you find them, then EXPLAIN ANALYZE should get you started \n\nCraig\n",
"msg_date": "Mon, 19 Feb 2007 10:02:46 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to debug performance problems"
},
{
"msg_contents": "On Mon, 2007-02-19 at 11:50 +0100, Andreas Tille wrote:\n> Hi,\n> \n> I'm running a web application using Zope that obtains all data\n> from a PostgreSQL 7.4 database (Debian Sarge system with package\n> 7.4.7-6sarge4 on an \"older\" Sparc machine, equipped with 2GB\n\nUpgrade to 8.2.3 if possible, or at least to 7.4.16.\n\nThis is a basic question, but do you VACUUM ANALYZE regularly? 7.4 is\nbefore autovacuum was integrated in the core. If you don't do this you\ncould have a lot of wasted space in your tables causing unneeded I/O,\nand the planner might be making bad plans.\n\n> memory and two processors E250 server). Once I did some performance\n> tuning and found out that\n> \n> max_connections = 256\n> shared_buffers = 131072\n> sort_mem = 65536\n> \n\nYou're allocating 50% of the physical memory to shared buffers. That's\nnot necessarily too much, but that's on the high side of the normal\nrange. \n\nDoes the total size of all of your tables and indexes add up to enough\nto exhaust your physical memory? Check to see if you have any\nexceptionally large tables or indexes. You can do that easily with\npg_relation_size('a_table_or_index') and pg_total_relation_size\n('a_table').\n\n> Since about two weeks the application became *drastically* slower\n> and I urgently have to bring back the old performance. As I said\n> I'm talking about functions accessing tables that did not increased\n> over several years and should behave more or less the same.\n> \n> I wonder whether adding tables and functions could have an influence\n> on other untouched parts and how to find out what makes the things\n> slow that worked for years reliable and satisfying. My first try\n\nYou need to provide queries, and also define \"slower\". Set\nlog_min_duration_statement to some positive value (I often start with\n1000) to try to catch the slow statements in the logs. Once you have\nfound the slow statements, do an EXPLAIN and an EXPLAIN ANALYZE on those\nstatements. That will tell you exactly what you need to know.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Mon, 19 Feb 2007 10:18:27 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to debug performance problems"
},
{
"msg_contents": "On Mon, 2007-02-19 at 12:18, Jeff Davis wrote:\n> On Mon, 2007-02-19 at 11:50 +0100, Andreas Tille wrote:\n> > Hi,\n> > \n> > I'm running a web application using Zope that obtains all data\n> > from a PostgreSQL 7.4 database (Debian Sarge system with package\n> > 7.4.7-6sarge4 on an \"older\" Sparc machine, equipped with 2GB\n> \n> Upgrade to 8.2.3 if possible, or at least to 7.4.16.\n\nWhat Jeff said ++\n\n> \n> This is a basic question, but do you VACUUM ANALYZE regularly? 7.4 is\n> before autovacuum was integrated in the core. If you don't do this you\n> could have a lot of wasted space in your tables causing unneeded I/O,\n> and the planner might be making bad plans.\n\nLook into vacuum full followed by reindex to fix the bloat. Then\nschedule regular vacuums (regular, not full).\n\n> > memory and two processors E250 server). Once I did some performance\n> > tuning and found out that\n> > \n> > max_connections = 256\n> > shared_buffers = 131072\n> > sort_mem = 65536\n> > \n> \n> You're allocating 50% of the physical memory to shared buffers. That's\n> not necessarily too much, but that's on the high side of the normal\n> range. \n\nFor 7.4 that's far too much. Very few installations running 7.4 will be\nfaster as you go past 10000 to 20000 buffers. 131072 is probably\nslowing the machine down instead of speeding it up, as the buffer cache\nalgo in 7.4 was not that good with large amounts of memory.\n\n",
"msg_date": "Mon, 19 Feb 2007 12:24:18 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to debug performance problems"
},
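A sketch of the cleanup Scott describes, with a placeholder table name. VACUUM FULL takes an exclusive lock and can run for a long time on a big table, so it belongs in a maintenance window; once the bloat is gone, plain VACUUM ANALYZE run regularly (or autovacuum on 8.1 and later) keeps it from coming back.

-- One-time repair of an already-bloated table (exclusive lock!):
VACUUM FULL VERBOSE some_big_table;
REINDEX TABLE some_big_table;

-- Routine maintenance afterwards (no exclusive lock, safe to run often):
VACUUM VERBOSE ANALYZE some_big_table;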
{
"msg_contents": "\nI'd like to have a toolbox prepared for when performance goes south.\nI'm clueless. Would someone mind providing some detail about how to\nmeasure these four items Craig listed:\n\n1. The first thing is to find out which query is taking a lot of time.\n\n2. A long-running transaction keeps vacuum from working.\n\n3. A table grows just enough to pass a threshold in the\n planner and a drastically different plan is generated.\n\n4. An index has become bloated and/or corrupted, and you\n need to run the REINDEX command.\n\nThx.\n\n\n\n\n\nOn Wed, Aug 30, 2006 at 11:45:06AM -0700, Jeff Frost wrote:\n> On Wed, 30 Aug 2006, Joe McClintock wrote:\n> \n> >I ran a vacuum, analyze and reindex on the database with no change in \n> >performance, query time was still 37+ sec, a little worse. On our test \n> >system I found that a db_dump from production and then restore brought the \n> >database back to full performance. So in desperation I shut down the \n> >production application, backed up the production database, rename the \n> >production db, create a new empty production db and restored the \n> >production backup to the empty db. After a successful db restore and \n> >restart of the web application, everything was then up and running like a \n> >top.\n> \n> Joe,\n> \n> I would guess that since the dump/restore yielded good performance once \n> again, a VACUUM FULL would have also fixed the problem. How are your FSM \n> settings in the conf file? Can you run VACUUM VERBOSE and send us the last \n> 10 or so lines of output?\n> \n> A good article on FSM settings can be found here:\n> \n> http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W\n> \n> You probably should consider setting up autovacuum and definitely should \n> upgrade to at least 8.0.8 if not 8.1.4 when you get the chance.\n> \n> When you loaded the new data did you delete or update old data or was it \n> just a straight insert?\n> \n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n\n\n\n\n\n--\n\nOn Mon, Feb 19, 2007 at 10:02:46AM -0800, Craig A. James wrote:\n> Andreas Tille wrote:\n> >My web application was running fine for years without any problem\n> >and the performance was satisfying. Some months ago I added a\n> >table containing 4500000 data rows ...\n> >\n> >Since about two weeks the application became *drastically* slower\n> >and I urgently have to bring back the old performance. As I said\n> >I'm talking about functions accessing tables that did not increased\n> >over several years and should behave more or less the same.\n> \n> Don't assume that the big table you added is the source of the problem. It \n> might be, but more likely it's something else entirely. 
You indicated that \n> the problem didn't coincide with creating the large table.\n> \n> There are a number of recurring themes on this discussion group:\n> \n> * A long-running transaction keeps vacuum from working.\n> \n> * A table grows just enough to pass a threshold in the\n> planner and a drastically different plan is generated.\n> \n> * An index has become bloated and/or corrupted, and you\n> need to run the REINDEX command.\n> \n> And several other common problems.\n> \n> The first thing is to find out which query is taking a lot of time. I'm no \n> expert, but there have been several explanations on this forum recently how \n> to find your top time-consuming queries. Once you find them, then EXPLAIN \n> ANALYZE should get you started \n> Craig\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n-- \nYou have no chance to survive make your time.\n",
"msg_date": "Wed, 21 Feb 2007 10:45:20 -0500",
"msg_from": "Ray Stell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to debug performance problems"
},
{
"msg_contents": "Ray,\n\n> I'd like to have a toolbox prepared for when performance goes south.\n> I'm clueless. Would someone mind providing some detail about how to\n> measure these four items Craig listed:\n\nI hope I didn't give the impression that these were the only thing to look at ... those four items just popped into my head, because they've come up repeatedly in this forum. There are surely more things that could be suspect; perhaps others could add to your list.\n\nYou can find the answers to each of the four topics I mentioned by looking through the archives of this list. It's a lot of work. It would be really nice if there was some full-time employee somewhere whose job was to monitor this group and pull out common themes that were put into a nice, tidy manual. But this is open-source development, and there is no such person, so you have to dig in and find it yourself.\n\nCraig\n",
"msg_date": "Wed, 21 Feb 2007 08:09:49 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to debug performance problems"
},
{
"msg_contents": "On Wed, Feb 21, 2007 at 08:09:49AM -0800, Craig A. James wrote:\n> I hope I didn't give the impression that these were the only thing to look \n> at ... those four items just popped into my head, because they've come up \n> repeatedly in this forum. There are surely more things that could be \n> suspect; perhaps others could add to your list.\n\nI'm only clueless about the details of pg, not db perf concepts. Really,\na mechanism to determine where the system is spending the response\ntime is key. As you pointed out, the added table may not be the issue.\nIn fact, if you can't measure where the db time is being spent\nyou will be lucky to fix a performance issue, since you don't really\nknow what resources need to be addressed. \n\n\n> so you have to dig in and find it yourself.\n\nthis afternoon, maybe.\n",
"msg_date": "Wed, 21 Feb 2007 11:40:49 -0500",
"msg_from": "Ray Stell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to debug performance problems"
},
{
"msg_contents": "Ray Stell wrote:\n> I'd like to have a toolbox prepared for when performance goes south.\n> I'm clueless. Would someone mind providing some detail about how to\n> measure these four items Craig listed:\n> \n> 1. The first thing is to find out which query is taking a lot of time.\n> \n> 2. A long-running transaction keeps vacuum from working.\n> \n> 3. A table grows just enough to pass a threshold in the\n> planner and a drastically different plan is generated.\n\nI just ran into a variation of this:\n\n3.5 A table grows so large so that VACUUMING it takes extremely long,\ninterfering with the general performance of the system.\n\nIn our case, we think the table had about 36 million rows when it hit\nthat threshold.\n\nI'm now working on a better solution for that table.\n\n Mark\n",
"msg_date": "Wed, 21 Feb 2007 12:49:12 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to debug performance problems"
}
] |
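For the variation Mark describes, where vacuuming one huge table drags the rest of the system down, the cost-based vacuum delay available since 8.0 is one way to trade vacuum speed for less I/O impact. A sketch with illustrative values; the table name is a placeholder and the right numbers depend entirely on the hardware.

-- Throttle this session's VACUUM so it sleeps periodically instead of
-- saturating the disks (0 disables the throttling and is the default).
SET vacuum_cost_delay = 20;    -- milliseconds to sleep each time the cost limit is hit
SET vacuum_cost_limit = 200;   -- accumulated page-cost budget between sleeps
VACUUM VERBOSE ANALYZE the_huge_table;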
[
{
"msg_contents": "PostgreSQL version: 8.1.6\nOS: Debian etch\n\nThe following query needs a lot of time because the query planner \nreorders the joins:\n\nselect m.message_idnr, v.headervalue, n.headername from dbmail_messages m\n join dbmail_headervalue v ON v.physmessage_id=m.physmessage_id\n join dbmail_headername n ON v.headername_id=n.id\n where m.mailbox_idnr = 1022 AND message_idnr BETWEEN 698928 AND 1496874\n and lower(n.headername) IN \n('from','to','cc','subject','date','message-id',\n 'priority','x-priority','references','newsgroups','in-reply-to',\n 'content-type','x-spam-status','x-spam-flag');\n\nIf I prevent the query planner from reordering the joins with 'set \njoin_collapse_limit=1;' then the same query is faster. At the end of the \nMail is the output of a explain analyze for both cases.\n\nThe statistics of the database are updated each night. Is there an error \n(in the statistical data) which is responsible for the reordering of the \njoins? And if not are there other alternatives for preventing join \nreordering?\n\nThanks\nReinhard\n\n\n\nExplain analyze with set join_collapse_limit=8:\n\n Merge Join (cost=388657.62..391332.20 rows=821 width=127) (actual \ntime=82677.950..89103.192 rows=2699 loops=1)\n Merge Cond: (\"outer\".physmessage_id = \"inner\".physmessage_id)\n -> Sort (cost=2901.03..2902.61 rows=632 width=16) (actual \ntime=247.238..247.578 rows=373 loops=1)\n Sort Key: m.physmessage_id\n -> Bitmap Heap Scan on dbmail_messages m (cost=9.16..2871.63 \nrows=632 width=16) (actual time=38.072..246.509 rows=373 loops=1)\n Recheck Cond: (mailbox_idnr = 1022)\n Filter: ((message_idnr >= 698928) AND (message_idnr <= \n1496874))\n -> Bitmap Index Scan on dbmail_messages_8 \n(cost=0.00..9.16 rows=902 width=0) (actual time=25.561..25.561 rows=615 \nloops=1)\n Index Cond: (mailbox_idnr = 1022)\n -> Sort (cost=385756.58..387089.35 rows=533108 width=127) (actual \ntime=80156.731..85760.186 rows=3278076 loops=1)\n Sort Key: v.physmessage_id\n -> Hash Join (cost=51.00..285787.17 rows=533108 width=127) \n(actual time=34.519..28260.855 rows=3370242 loops=1)\n Hash Cond: (\"outer\".headername_id = \"inner\".id)\n -> Seq Scan on dbmail_headervalue v \n(cost=0.00..241200.39 rows=7840939 width=115) (actual \ntime=0.006..16844.479 rows=7854485 loops=1)\n -> Hash (cost=50.72..50.72 rows=113 width=28) (actual \ntime=34.493..34.493 rows=35 loops=1)\n -> Bitmap Heap Scan on dbmail_headername n \n(cost=28.44..50.72 rows=113 width=28) (actual time=11.796..34.437 \nrows=35 loops=1)\n Recheck Cond: ((lower((headername)::text) = \n'from'::text) OR (lower((headername)::text) = 'to'::text) OR \n(lower((headername)::text) = 'cc'::text) OR (lower((headername)::text) = \n'subject'::text) OR (lower((headername)::text) = 'date'::text) OR \n(lower((headername)::text) = 'message-id'::text) OR \n(lower((headername)::text) = 'priority'::text) OR \n(lower((headername)::text) = 'x-priority'::text) OR \n(lower((headername)::text) = 'references'::text) OR \n(lower((headername)::text) = 'newsgroups'::text) OR \n(lower((headername)::text) = 'in-reply-to'::text) OR \n(lower((headername)::text) = 'content-type'::text) OR \n(lower((headername)::text) = 'x-spam-status'::text) OR (lower((hea\ndername)::text) = 'x-spam-flag'::text))\n -> BitmapOr (cost=28.44..28.44 rows=116 \nwidth=0) (actual time=11.786..11.786 rows=0 loops=1)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.037..0.037 rows=3 loops=1)\n Index Cond: \n(lower((headername)::text) = 
'from'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.013..0.013 rows=3 loops=1)\n Index Cond: \n(lower((headername)::text) = 'to'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.013..0.013 rows=3 loops=1)\n Index Cond: \n(lower((headername)::text) = 'cc'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.014..0.014 rows=3 loops=1)\n Index Cond: \n(lower((headername)::text) = 'subject'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.014..0.014 rows=3 loops=1)\n Index Cond: \n(lower((headername)::text) = 'date'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.019..0.019 rows=4 loops=1)\n Index Cond: \n(lower((headername)::text) = 'message-id'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.012..0.012 rows=2 loops=1)\n Index Cond: \n(lower((headername)::text) = 'priority'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.032..0.032 rows=4 loops=1)\n Index Cond: \n(lower((headername)::text) = 'x-priority'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.015..0.015 rows=1 loops=1)\n Index Cond: \n(lower((headername)::text) = 'references'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: \n(lower((headername)::text) = 'newsgroups'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.014..0.014 rows=3 loops=1)\n Index Cond: \n(lower((headername)::text) = 'in-reply-to'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.013..0.013 rows=1 loops=1)\n Index Cond: \n(lower((headername)::text) = 'content-type'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=11.549..11.549 rows=2 loops=1)\n Index Cond: \n(lower((headername)::text) = 'x-spam-status'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.019..0.019 rows=3 loops=1)\n Index Cond: \n(lower((headername)::text) = 'x-spam-flag'::text)\n Total runtime: 89277.937 ms\n(47 rows)\n\n\n\nExplain analyze with set join_collapse_limit=1:\n \n Hash Join (cost=51.00..1607155.00 rows=821 width=127) (actual \ntime=14.640..47.851 rows=2699 loops=1)\n Hash Cond: (\"outer\".headername_id = \"inner\".id)\n -> Nested Loop (cost=0.00..1607035.43 rows=12071 width=115) (actual \ntime=0.085..25.057 rows=7025 loops=1)\n -> Index Scan using dbmail_messages_mailbox_idx on \ndbmail_messages m (cost=0.00..3515.08 rows=632 width=16) (actual \ntime=0.064..1.070 rows=373 loops=1)\n Index Cond: (mailbox_idnr = 1022)\n Filter: ((message_idnr >= 698928) AND (message_idnr <= \n1496874))\n -> Index Scan using dbmail_headervalue_physmsg_id on \ndbmail_headervalue v (cost=0.00..2526.34 rows=870 width=115) (actual \ntime=0.010..0.035 rows=19 loops=373)\n Index Cond: (v.physmessage_id = \"outer\".physmessage_id)\n -> Hash (cost=50.72..50.72 rows=113 
width=28) (actual \ntime=14.540..14.540 rows=35 loops=1)\n -> Bitmap Heap Scan on dbmail_headername n (cost=28.44..50.72 \nrows=113 width=28) (actual time=14.429..14.492 rows=35 loops=1)\n Recheck Cond: ((lower((headername)::text) = 'from'::text) \nOR (lower((headername)::text) = 'to'::text) OR \n(lower((headername)::text) = 'cc'::text) OR (lower((headername)::text) = \n'subject'::text) OR (lower((headername)::text) = 'date'::text) OR \n(lower((headername)::text) = 'message-id'::text) OR \n(lower((headername)::text) = 'priority'::text) OR \n(lower((headername)::text) = 'x-priority'::text) OR \n(lower((headername)::text) = 'references'::text) OR \n(lower((headername)::text) = 'newsgroups'::text) OR \n(lower((headername)::text) = 'in-reply-to'::text) OR \n(lower((headername)::text) = 'content-type'::text) OR \n(lower((headername)::text) = 'x-spam-status'::text) OR \n(lower((headername)::text) = 'x-spam-flag'::text)) \n -> BitmapOr (cost=28.44..28.44 rows=116 width=0) \n(actual time=14.418..14.418 rows=0 loops=1)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=14.197..14.197 rows=3 loops=1)\n Index Cond: (lower((headername)::text) = \n'from'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.015..0.015 rows=3 loops=1)\n Index Cond: (lower((headername)::text) = \n'to'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.012..0.012 rows=3 loops=1)\n Index Cond: (lower((headername)::text) = \n'cc'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.013..0.013 rows=3 loops=1)\n Index Cond: (lower((headername)::text) = \n'subject'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.014..0.014 rows=3 loops=1)\n Index Cond: (lower((headername)::text) = \n'date'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.019..0.019 rows=4 loops=1)\n Index Cond: (lower((headername)::text) = \n'message-id'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.011..0.011 rows=2 loops=1)\n Index Cond: (lower((headername)::text) = \n'priority'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.031..0.031 rows=4 loops=1)\n Index Cond: (lower((headername)::text) = \n'x-priority'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.013..0.013 rows=1 loops=1)\n Index Cond: (lower((headername)::text) = \n'references'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (lower((headername)::text) = \n'newsgroups'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.014..0.014 rows=3 loops=1)\n Index Cond: (lower((headername)::text) = \n'in-reply-to'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.012..0.012 rows=1 loops=1)\n Index Cond: (lower((headername)::text) = \n'content-type'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) 
\n(actual time=0.028..0.028 rows=2 loops=1)\n Index Cond: (lower((headername)::text) = \n'x-spam-status'::text)\n -> Bitmap Index Scan on \ndbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n(actual time=0.018..0.018 rows=3 loops=1)\n Index Cond: (lower((headername)::text) = \n'x-spam-flag'::text)\n Total runtime: 49.634 ms\n(41 rows)\n\n-- \nReinhard Vicinus\nrjm business solutions GmbH \t\nSperlingweg 3, \n68623 Lampertheim\nTel. 06206 9513084\nFax 06206 910315\n--\n\n",
"msg_date": "Mon, 19 Feb 2007 18:03:22 +0100",
"msg_from": "Reinhard Vicinus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Optimization"
},
{
"msg_contents": "Reinhard Vicinus <[email protected]> writes:\n> PostgreSQL version: 8.1.6\n> The following query needs a lot of time because the query planner \n> reorders the joins:\n\nTry reducing random_page_cost, increasing effective_cache_size, and/or\nupdating to PG 8.2. Any of these are likely to make it like the\nnestloop plan better...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Feb 2007 23:54:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Optimization "
},
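A sketch of trying the suggestions above for a single session before touching postgresql.conf; the numbers are illustrative starting points, not recommendations. On 8.1, effective_cache_size is given in 8 kB pages (131072 pages is roughly 1 GB), and the header list in the query is abbreviated here.

SET random_page_cost = 2;            -- default is 4; lower values favour index scans
SET effective_cache_size = 131072;   -- roughly 1 GB expressed in 8 kB pages

EXPLAIN ANALYZE
SELECT m.message_idnr, v.headervalue, n.headername
  FROM dbmail_messages m
  JOIN dbmail_headervalue v ON v.physmessage_id = m.physmessage_id
  JOIN dbmail_headername n ON v.headername_id = n.id
 WHERE m.mailbox_idnr = 1022
   AND m.message_idnr BETWEEN 698928 AND 1496874
   AND lower(n.headername) IN ('from', 'to', 'cc', 'subject', 'date', 'message-id');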
{
"msg_contents": "It's not necessarily the join order that's an issue; it could also be\ndue to the merge join that it does in the first case. I've also run into\nsituations where the cost estimate for a merge join is way off the mark.\n\nRather than forcing the join order, you might try setting\nenable_mergejoin=false.\n\nOn Mon, Feb 19, 2007 at 06:03:22PM +0100, Reinhard Vicinus wrote:\n> PostgreSQL version: 8.1.6\n> OS: Debian etch\n> \n> The following query needs a lot of time because the query planner \n> reorders the joins:\n> \n> select m.message_idnr, v.headervalue, n.headername from dbmail_messages m\n> join dbmail_headervalue v ON v.physmessage_id=m.physmessage_id\n> join dbmail_headername n ON v.headername_id=n.id\n> where m.mailbox_idnr = 1022 AND message_idnr BETWEEN 698928 AND 1496874\n> and lower(n.headername) IN \n> ('from','to','cc','subject','date','message-id',\n> 'priority','x-priority','references','newsgroups','in-reply-to',\n> 'content-type','x-spam-status','x-spam-flag');\n> \n> If I prevent the query planner from reordering the joins with 'set \n> join_collapse_limit=1;' then the same query is faster. At the end of the \n> Mail is the output of a explain analyze for both cases.\n> \n> The statistics of the database are updated each night. Is there an error \n> (in the statistical data) which is responsible for the reordering of the \n> joins? And if not are there other alternatives for preventing join \n> reordering?\n> \n> Thanks\n> Reinhard\n> \n> \n> \n> Explain analyze with set join_collapse_limit=8:\n> \n> Merge Join (cost=388657.62..391332.20 rows=821 width=127) (actual \n> time=82677.950..89103.192 rows=2699 loops=1)\n> Merge Cond: (\"outer\".physmessage_id = \"inner\".physmessage_id)\n> -> Sort (cost=2901.03..2902.61 rows=632 width=16) (actual \n> time=247.238..247.578 rows=373 loops=1)\n> Sort Key: m.physmessage_id\n> -> Bitmap Heap Scan on dbmail_messages m (cost=9.16..2871.63 \n> rows=632 width=16) (actual time=38.072..246.509 rows=373 loops=1)\n> Recheck Cond: (mailbox_idnr = 1022)\n> Filter: ((message_idnr >= 698928) AND (message_idnr <= \n> 1496874))\n> -> Bitmap Index Scan on dbmail_messages_8 \n> (cost=0.00..9.16 rows=902 width=0) (actual time=25.561..25.561 rows=615 \n> loops=1)\n> Index Cond: (mailbox_idnr = 1022)\n> -> Sort (cost=385756.58..387089.35 rows=533108 width=127) (actual \n> time=80156.731..85760.186 rows=3278076 loops=1)\n> Sort Key: v.physmessage_id\n> -> Hash Join (cost=51.00..285787.17 rows=533108 width=127) \n> (actual time=34.519..28260.855 rows=3370242 loops=1)\n> Hash Cond: (\"outer\".headername_id = \"inner\".id)\n> -> Seq Scan on dbmail_headervalue v \n> (cost=0.00..241200.39 rows=7840939 width=115) (actual \n> time=0.006..16844.479 rows=7854485 loops=1)\n> -> Hash (cost=50.72..50.72 rows=113 width=28) (actual \n> time=34.493..34.493 rows=35 loops=1)\n> -> Bitmap Heap Scan on dbmail_headername n \n> (cost=28.44..50.72 rows=113 width=28) (actual time=11.796..34.437 \n> rows=35 loops=1)\n> Recheck Cond: ((lower((headername)::text) = \n> 'from'::text) OR (lower((headername)::text) = 'to'::text) OR \n> (lower((headername)::text) = 'cc'::text) OR (lower((headername)::text) = \n> 'subject'::text) OR (lower((headername)::text) = 'date'::text) OR \n> (lower((headername)::text) = 'message-id'::text) OR \n> (lower((headername)::text) = 'priority'::text) OR \n> (lower((headername)::text) = 'x-priority'::text) OR \n> (lower((headername)::text) = 'references'::text) OR \n> (lower((headername)::text) = 'newsgroups'::text) OR \n> 
(lower((headername)::text) = 'in-reply-to'::text) OR \n> (lower((headername)::text) = 'content-type'::text) OR \n> (lower((headername)::text) = 'x-spam-status'::text) OR (lower((hea\n> dername)::text) = 'x-spam-flag'::text))\n> -> BitmapOr (cost=28.44..28.44 rows=116 \n> width=0) (actual time=11.786..11.786 rows=0 loops=1)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.037..0.037 rows=3 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'from'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.013..0.013 rows=3 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'to'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.013..0.013 rows=3 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'cc'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.014..0.014 rows=3 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'subject'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.014..0.014 rows=3 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'date'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.019..0.019 rows=4 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'message-id'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.012..0.012 rows=2 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'priority'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.032..0.032 rows=4 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'x-priority'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.015..0.015 rows=1 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'references'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.009..0.009 rows=0 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'newsgroups'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.014..0.014 rows=3 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'in-reply-to'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.013..0.013 rows=1 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'content-type'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=11.549..11.549 rows=2 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'x-spam-status'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.019..0.019 rows=3 loops=1)\n> Index Cond: \n> (lower((headername)::text) = 'x-spam-flag'::text)\n> Total runtime: 89277.937 ms\n> (47 rows)\n> \n> \n> \n> Explain analyze with set join_collapse_limit=1:\n> \n> Hash Join (cost=51.00..1607155.00 rows=821 width=127) (actual \n> time=14.640..47.851 rows=2699 loops=1)\n> Hash Cond: (\"outer\".headername_id = 
\"inner\".id)\n> -> Nested Loop (cost=0.00..1607035.43 rows=12071 width=115) (actual \n> time=0.085..25.057 rows=7025 loops=1)\n> -> Index Scan using dbmail_messages_mailbox_idx on \n> dbmail_messages m (cost=0.00..3515.08 rows=632 width=16) (actual \n> time=0.064..1.070 rows=373 loops=1)\n> Index Cond: (mailbox_idnr = 1022)\n> Filter: ((message_idnr >= 698928) AND (message_idnr <= \n> 1496874))\n> -> Index Scan using dbmail_headervalue_physmsg_id on \n> dbmail_headervalue v (cost=0.00..2526.34 rows=870 width=115) (actual \n> time=0.010..0.035 rows=19 loops=373)\n> Index Cond: (v.physmessage_id = \"outer\".physmessage_id)\n> -> Hash (cost=50.72..50.72 rows=113 width=28) (actual \n> time=14.540..14.540 rows=35 loops=1)\n> -> Bitmap Heap Scan on dbmail_headername n (cost=28.44..50.72 \n> rows=113 width=28) (actual time=14.429..14.492 rows=35 loops=1)\n> Recheck Cond: ((lower((headername)::text) = 'from'::text) \n> OR (lower((headername)::text) = 'to'::text) OR \n> (lower((headername)::text) = 'cc'::text) OR (lower((headername)::text) = \n> 'subject'::text) OR (lower((headername)::text) = 'date'::text) OR \n> (lower((headername)::text) = 'message-id'::text) OR \n> (lower((headername)::text) = 'priority'::text) OR \n> (lower((headername)::text) = 'x-priority'::text) OR \n> (lower((headername)::text) = 'references'::text) OR \n> (lower((headername)::text) = 'newsgroups'::text) OR \n> (lower((headername)::text) = 'in-reply-to'::text) OR \n> (lower((headername)::text) = 'content-type'::text) OR \n> (lower((headername)::text) = 'x-spam-status'::text) OR \n> (lower((headername)::text) = 'x-spam-flag'::text)) \n> -> BitmapOr (cost=28.44..28.44 rows=116 width=0) \n> (actual time=14.418..14.418 rows=0 loops=1)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=14.197..14.197 rows=3 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'from'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.015..0.015 rows=3 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'to'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.012..0.012 rows=3 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'cc'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.013..0.013 rows=3 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'subject'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.014..0.014 rows=3 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'date'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.019..0.019 rows=4 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'message-id'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.011..0.011 rows=2 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'priority'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.031..0.031 rows=4 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'x-priority'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.013..0.013 rows=1 loops=1)\n> Index 
Cond: (lower((headername)::text) = \n> 'references'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.008..0.008 rows=0 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'newsgroups'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.014..0.014 rows=3 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'in-reply-to'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.012..0.012 rows=1 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'content-type'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.028..0.028 rows=2 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'x-spam-status'::text)\n> -> Bitmap Index Scan on \n> dbmail_headername_lower_headername (cost=0.00..2.03 rows=8 width=0) \n> (actual time=0.018..0.018 rows=3 loops=1)\n> Index Cond: (lower((headername)::text) = \n> 'x-spam-flag'::text)\n> Total runtime: 49.634 ms\n> (41 rows)\n> \n> -- \n> Reinhard Vicinus\n> rjm business solutions GmbH \t\n> Sperlingweg 3, \n> 68623 Lampertheim\n> Tel. 06206 9513084\n> Fax 06206 910315\n> --\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 20 Feb 2007 10:45:20 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Optimization"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm having a surprising performance problem with the following simple\n'highscore report'\n\nselect studentid, (select max(score) from\nstudentprofile prof where prof.studentid = students.studentid) from students;\n\nI have indexes on students(studentid) and studentprofile(studentid).\n\nRow counts: about 160 000 in each students and studentprofile.\nPostgres version:\npostgresql-8.1.8-1.fc5\npostgresql-server-8.1.8-1.fc5\n\nThis is a dual-processor 3Ghz 64bit box with 2 GB mem.\n\nRunning the query takes 99% CPU and 1% mem.\n\nI have the same data in MSSQL and there the query takes less than a\nminute. With postgres it seems to take several hours.\n\nIs there a way of making this faster?\n\nMarko\n",
"msg_date": "Tue, 20 Feb 2007 16:10:55 +0900",
"msg_from": "\"Marko Niinimaki\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow subselects"
},
{
"msg_contents": "\ntry:\n\nselect studentid,max(score) from studentprofile group by studentid;\n\nor if you want only those which exists in students\n\nselect s.studentid,max(p.score)\nfrom studentprofile p,students s\nwhere s.studentid=p.studentid\ngroup by s.studentid;\n\nif it takes longer than 1-2 seconds something is seriously wrong\n\nIsmo\n\nOn Tue, 20 Feb 2007, Marko Niinimaki wrote:\n\n> Hello,\n> \n> I'm having a surprising performance problem with the following simple\n> 'highscore report'\n> \n> select studentid, (select max(score) from\n> studentprofile prof where prof.studentid = students.studentid) from students;\n> \n> I have indexes on students(studentid) and studentprofile(studentid).\n> \n> Row counts: about 160 000 in each students and studentprofile.\n> Postgres version:\n> postgresql-8.1.8-1.fc5\n> postgresql-server-8.1.8-1.fc5\n> \n> This is a dual-processor 3Ghz 64bit box with 2 GB mem.\n> \n> Running the query takes 99% CPU and 1% mem.\n> \n> I have the same data in MSSQL and there the query takes less than a\n> minute. With postgres it seems to take several hours.\n> \n> Is there a way of making this faster?\n> \n> Marko\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n",
"msg_date": "Tue, 20 Feb 2007 09:21:45 +0200 (EET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: slow subselects"
},
{
"msg_contents": "\"Marko Niinimaki\" <[email protected]> writes:\n> I'm having a surprising performance problem with the following simple\n> 'highscore report'\n\n> select studentid, (select max(score) from\n> studentprofile prof where prof.studentid = students.studentid) from students;\n\n> I have indexes on students(studentid) and studentprofile(studentid).\n\nThe optimal index for this would be on studentprofile(studentid,score).\nA quick test says that PG 8.1 knows what to do with such an index ---\nwhat does EXPLAIN show for this query?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Feb 2007 02:43:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow subselects "
},
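A sketch of what Tom's suggestion looks like in practice, using the table and column names from the thread (untested against the poster's actual schema, so treat it as illustrative):

    -- Two-column index so max(score) per student can be answered from the index:
    CREATE INDEX studentprofile_student_score_idx
        ON studentprofile (studentid, score);
    ANALYZE studentprofile;

    -- With the index in place, each correlated max() becomes a small backward
    -- index scan rather than a pass over the whole table:
    EXPLAIN ANALYZE
    SELECT studentid,
           (SELECT max(score)
              FROM studentprofile prof
             WHERE prof.studentid = students.studentid)
      FROM students;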
{
"msg_contents": "Many thanks! Ismo's reply solved the problem, and Tom's reply solved\nanother one.\n\nMarko\n\nIsmo Tuononen wrote:\n> select studentid,max(score) from studentprofile group by studentid;\n\nOn 20/02/07, Tom Lane <[email protected]> wrote:\n\n> The optimal index for this would be on studentprofile(studentid,score).\n",
"msg_date": "Wed, 21 Feb 2007 14:34:35 +0900",
"msg_from": "\"Marko Niinimaki\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow subselects"
}
] |
[
{
"msg_contents": "I have problems with queries over tsearch index.\nI have a table of books, with 1200000 registers. I have created an GIST\nindex over the title and subtitle,\n\nCREATE INDEX \"idxts2_titsub_idx\" ON \"public\".\"libros\" USING gist\n(\"idxts2_titsub\");\n\nMy problems started when i execute my queries.\nFor example, i execute a simple query like this one:\nexplain analyze\n SELECT isbn, titulo\n FROM libros\n WHERE idxts2_titsub @@ to_tsquery('default_spanish',\nto_ascii('sevilla'))\n ORDER BY titulo\n LIMIT 10;\nThis query take more than 10 secods, and i think this is too much for such\nan easy query.\nEvery night, i execute a VACUUM ANALYZE over my data base.\n\nThe query plan of this query, is the next one:\nQUERY PLAN\nLimit (cost=4725.18..4725.20 rows=10 width=56) (actual time=\n17060.826..17061.078 rows=10 loops=1)\n -> Sort (cost=4725.18..4728.23 rows=1223 width=56) (actual time=\n17060.806..17060.874 rows=10 loops=1)\n Sort Key: titulo\n -> Bitmap Heap Scan on libros (cost=45.28..4662.46 rows=1223\nwidth=56) (actual time=10831.530..16957.667 rows=2542 loops=1)\n Filter: (idxts2_titsub @@ '''sevilla'''::tsquery)\n -> Bitmap Index Scan on idxts2_titsub_idx\n(cost=0.00..45.28rows=1223 width=0) (actual time=\n10830.051..10830.051 rows=2586 loops=1)\n Index Cond: (idxts2_titsub @@ '''sevilla'''::tsquery)\nTotal runtime: 17062.665 ms\n\nI have no idea what is happening. Why the Bitmap Index Scan and the Bitmap\nHeap Scan cost so much time?\n\nI have a 2GB RAM memory Server.\n\nThanks every body for your healp and sorry for my English\n\nI have problems with queries over tsearch index.I have a table of books, with 1200000 registers. I have created an GIST index over the title and subtitle, CREATE INDEX \"idxts2_titsub_idx\" ON \"public\".\"libros\" USING gist (\"idxts2_titsub\");\nMy problems started when i execute my queries. For example, i execute a simple query like this one:explain analyze SELECT isbn, titulo FROM libros WHERE idxts2_titsub @@ to_tsquery('default_spanish', to_ascii('sevilla'))\n ORDER BY titulo LIMIT 10;This query take more than 10 secods, and i think this is too much for such an easy query.Every night, i execute a VACUUM ANALYZE over my data base.The query plan of this query, is the next one:\nQUERY PLANLimit (cost=4725.18..4725.20 rows=10 width=56) (actual time=17060.826..17061.078 rows=10 loops=1) -> Sort (cost=4725.18..4728.23 rows=1223 width=56) (actual time=17060.806..17060.874 rows=10 loops=1)\n Sort Key: titulo -> Bitmap Heap Scan on libros (cost=45.28..4662.46 rows=1223 width=56) (actual time=10831.530..16957.667 rows=2542 loops=1) Filter: (idxts2_titsub @@ '''sevilla'''::tsquery)\n -> Bitmap Index Scan on idxts2_titsub_idx (cost=0.00..45.28 rows=1223 width=0) (actual time=10830.051..10830.051 rows=2586 loops=1) Index Cond: (idxts2_titsub @@ '''sevilla'''::tsquery)\nTotal runtime: 17062.665 msI have no idea what is happening. Why the Bitmap Index Scan and the Bitmap Heap Scan cost so much time?I have a 2GB RAM memory Server. Thanks every body for your healp and sorry for my English",
"msg_date": "Tue, 20 Feb 2007 11:41:54 +0100",
"msg_from": "\"Rafa Comino\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Having performance problems with TSearch2"
},
{
"msg_contents": "Use GIN index instead of GiST\n\n> I have a table of books, with 1200000 registers. I have created an GIST \n> index over the title and subtitle,\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/\n",
"msg_date": "Tue, 20 Feb 2007 14:15:27 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Having performance problems with TSearch2"
},
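On 8.2, Teodor's suggestion would look roughly like the following (the new index name is invented; GIN trades slower index builds and updates for much faster @@ searches, which suits a mostly-static book catalogue):

    DROP INDEX idxts2_titsub_idx;
    CREATE INDEX idxts2_titsub_gin ON libros USING gin (idxts2_titsub);
    ANALYZE libros;

    EXPLAIN ANALYZE
    SELECT isbn, titulo
      FROM libros
     WHERE idxts2_titsub @@ to_tsquery('default_spanish', to_ascii('sevilla'))
     ORDER BY titulo
     LIMIT 10;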
{
"msg_contents": "\n> \n> I have problems with queries over tsearch index.I have a table of books, with\n> 1200000 registers. I have created an GIST index over the title and subtitle,\n> CREATE INDEX \"idxts2_titsub_idx\" ON \"public\".\"libros\" USING gist\n(\"idxts2_titsub\");\n\nYour query didn't use index that you are created..\nAfter CREATE INDEX you mast ran VACUUM (FULL recomended) and REINDEX\nThe query is: select userid,msg,idxfti from _my_msg0 where idxfti @@\nto_tsquery('utf8_russian','хочу & трахаться');\n\n\n",
"msg_date": "Mon, 5 Mar 2007 12:59:59 +0000 (UTC)",
"msg_from": "Ares <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Having performance problems with TSearch2"
}
] |
[
{
"msg_contents": "I am updating from 7.4.5 to 8.2.3. I have noticed a significant\nslowdown in simple searches such as\n \"select filename from vnmr_data where seqfil = 'sems';\"\nThis returns 12 rows out of 1 million items in the table.\nOn 7.4.5, this takes about 1.5 seconds. On 8.2.3, it is taking\nabout 9 seconds.\n\nI have played with different values of:\nwork_mem, temp_buffers, shared_buffers and effective_cache_size\nand none of them make any difference.\n\nI am running on redhat Linux 4 64bit.\n\nAny ideas?\n\nGlenn\n",
"msg_date": "Tue, 20 Feb 2007 16:21:52 -0700",
"msg_from": "Glenn Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT performance problem"
},
{
"msg_contents": "On Tue, 20 Feb 2007, Glenn Sullivan wrote:\n\n> I am updating from 7.4.5 to 8.2.3. I have noticed a significant\n> slowdown in simple searches such as\n> \"select filename from vnmr_data where seqfil = 'sems';\"\n> This returns 12 rows out of 1 million items in the table.\n> On 7.4.5, this takes about 1.5 seconds. On 8.2.3, it is taking\n> about 9 seconds.\n>\n> I have played with different values of:\n> work_mem, temp_buffers, shared_buffers and effective_cache_size\n> and none of them make any difference.\n>\n> I am running on redhat Linux 4 64bit.\n\nGlenn,\n\nCan you forward us the explain analyze output from 7.4.5 and 8.2.3 for the \nquery in question?\n\nAlso, is the hardware the same between 7.4.5 and 8.2.3? If not, what is the \ndifference?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 20 Feb 2007 15:41:37 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT performance problem"
},
{
"msg_contents": "Did you run ANALYZE on your data after importing it into 8.2.3? Is there an\nindex on the seqfil column? If so, you should post the output of EXPLAIN\nANALYZE from both systems if possible.\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Glenn Sullivan\n> Sent: Tuesday, February 20, 2007 5:22 PM\n> To: [email protected]\n> Subject: [PERFORM] SELECT performance problem\n> \n> \n> I am updating from 7.4.5 to 8.2.3. I have noticed a significant\n> slowdown in simple searches such as\n> \"select filename from vnmr_data where seqfil = 'sems';\"\n> This returns 12 rows out of 1 million items in the table.\n> On 7.4.5, this takes about 1.5 seconds. On 8.2.3, it is taking\n> about 9 seconds.\n> \n> I have played with different values of:\n> work_mem, temp_buffers, shared_buffers and effective_cache_size\n> and none of them make any difference.\n> \n> I am running on redhat Linux 4 64bit.\n> \n> Any ideas?\n> \n> Glenn\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] \n> so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Tue, 20 Feb 2007 17:46:56 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT performance problem"
}
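A sketch of the diagnostics being asked for, using the table and column from Glenn's query; the index is only a guess at what might be missing after the migration:

    -- Refresh planner statistics after loading the data into 8.2.3:
    ANALYZE vnmr_data;

    -- A 12-rows-out-of-1M lookup wants an index on the filter column,
    -- if one does not already exist:
    CREATE INDEX vnmr_data_seqfil_idx ON vnmr_data (seqfil);

    -- Capture the plans to compare 7.4.5 against 8.2.3:
    EXPLAIN ANALYZE
    SELECT filename FROM vnmr_data WHERE seqfil = 'sems';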
] |
[
{
"msg_contents": "Hello, I've set up 2 identical machines, hp server 1ghz p3,\n768mb ram, 18gb scsi3 drive. On the first one I've installed\nDebian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both\nmachines I've installed Postgresql 8.2.3 from sources.\nNow the point :)) According to my tests postgres on Linux\nbox run much faster then on FreeBSD, here are my results:\n\n*** setting up **************************\ncreeate table foo as select x from generate_series(1,2500000) x;\nvacuum foo;\ncheckpoint;\n\\timing\n\n*****************************************\n\n*** BSD *********************************\nactual=# select count(*) from foo;\n count\n---------\n 2500000\n(1 row)\n\nTime: 1756.455 ms\nactual=# explain analyze select count(*) from foo;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=34554.20..34554.21 rows=1 width=0) (actual\ntime=12116.841..12116.843 rows=1 loops=1)\n -> Seq Scan on foo (cost=0.00..28304.20 rows=2500000 width=0)\n(actual time=9.276..6435.890 rows=2500000 loops=1)\n Total runtime: 12116.989 ms\n(3 rows)\n\nTime: 12117.803 ms\n\n******************************************\n\n\n*** LIN **********************************\nactual=# select count(*) from foo;\n count\n---------\n 2500000\n(1 row)\n\nTime: 1362,193 ms\nactual=# EXPLAIN ANALYZE\nactual-# select count(*) from foo;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=34554.20..34554.21 rows=1 width=0) (actual\ntime=4737.243..4737.244 rows=1 loops=1)\n -> Seq Scan on foo (cost=0.00..28304.20 rows=2500000 width=0)\n(actual time=0.058..2585.170 rows=2500000 loops=1)\n Total runtime: 4737.363 ms\n(3 rows)\n\nTime: 4738,367 ms\nactual=#\n******************************************\n\nJust a word about FS i've used:\nBSD:\n/dev/da0s1g on /usr/local/pgsql (ufs, local, noatime, soft-updates)\n\nLIN:\n/dev/sda7 on /usr/local/pgsql type xfs (rw,noatime)\n\n\nMy question is simple :) what's wrong with the FreeBSD BOX??\nWhat's the rule for computing gettimeofday() time ??\n\nThanks for any advices :))\n..and have a nice day!!\n\nJ.\n\n",
"msg_date": "Wed, 21 Feb 2007 10:57:26 +0100",
"msg_from": "=?ISO-8859-2?Q?Jacek_Zar=EAba?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres performance Linux vs FreeBSD"
},
{
"msg_contents": "On 2/21/07, Jacek Zaręba <[email protected]> wrote:\n> Hello, I've set up 2 identical machines, hp server 1ghz p3,\n> 768mb ram, 18gb scsi3 drive. On the first one I've installed\n> Debian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both\n> machines I've installed Postgresql 8.2.3 from sources.\n> Now the point :)) According to my tests postgres on Linux\n> box run much faster then on FreeBSD, here are my results:\n>\n> *** setting up **************************\n> creeate table foo as select x from generate_series(1,2500000) x;\n> vacuum foo;\n> checkpoint;\n> \\timing\n>\n> *****************************************\n>\n> *** BSD *********************************\n> actual=# select count(*) from foo;\n> count\n> ---------\n> 2500000\n> (1 row)\n>\n> Time: 1756.455 ms\n> actual=# explain analyze select count(*) from foo;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=34554.20..34554.21 rows=1 width=0) (actual\n> time=12116.841..12116.843 rows=1 loops=1)\n> -> Seq Scan on foo (cost=0.00..28304.20 rows=2500000 width=0)\n> (actual time=9.276..6435.890 rows=2500000 loops=1)\n> Total runtime: 12116.989 ms\n> (3 rows)\n>\n> Time: 12117.803 ms\n>\n> ******************************************\n>\n>\n> *** LIN **********************************\n> actual=# select count(*) from foo;\n> count\n> ---------\n> 2500000\n> (1 row)\n>\n> Time: 1362,193 ms\n> actual=# EXPLAIN ANALYZE\n> actual-# select count(*) from foo;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=34554.20..34554.21 rows=1 width=0) (actual\n> time=4737.243..4737.244 rows=1 loops=1)\n> -> Seq Scan on foo (cost=0.00..28304.20 rows=2500000 width=0)\n> (actual time=0.058..2585.170 rows=2500000 loops=1)\n> Total runtime: 4737.363 ms\n> (3 rows)\n>\n> Time: 4738,367 ms\n> actual=#\n> ******************************************\n>\n> Just a word about FS i've used:\n> BSD:\n> /dev/da0s1g on /usr/local/pgsql (ufs, local, noatime, soft-updates)\n>\n> LIN:\n> /dev/sda7 on /usr/local/pgsql type xfs (rw,noatime)\n>\n>\n> My question is simple :) what's wrong with the FreeBSD BOX??\n> What's the rule for computing gettimeofday() time ??\n\n'explain analyze' can't be reliably used to compare results from\ndifferent operating systems...1756ms v. 1362ms is a win for linux but\nnot a blowout and there might be other things going on...\n\nmerlin\n",
"msg_date": "Wed, 21 Feb 2007 10:21:05 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance Linux vs FreeBSD"
},
{
"msg_contents": "In response to \"Jacek Zaręba\" <[email protected]>:\n\n> Hello, I've set up 2 identical machines, hp server 1ghz p3,\n> 768mb ram, 18gb scsi3 drive. On the first one I've installed\n> Debian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both\n> machines I've installed Postgresql 8.2.3 from sources.\n> Now the point :)) According to my tests postgres on Linux\n> box run much faster then on FreeBSD, here are my results:\n> \n> *** setting up **************************\n> creeate table foo as select x from generate_series(1,2500000) x;\n> vacuum foo;\n> checkpoint;\n> \\timing\n> \n> *****************************************\n> \n> *** BSD *********************************\n> actual=# select count(*) from foo;\n> count\n> ---------\n> 2500000\n> (1 row)\n> \n> Time: 1756.455 ms\n> actual=# explain analyze select count(*) from foo;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=34554.20..34554.21 rows=1 width=0) (actual\n> time=12116.841..12116.843 rows=1 loops=1)\n> -> Seq Scan on foo (cost=0.00..28304.20 rows=2500000 width=0)\n> (actual time=9.276..6435.890 rows=2500000 loops=1)\n> Total runtime: 12116.989 ms\n> (3 rows)\n> \n> Time: 12117.803 ms\n> \n> ******************************************\n> \n> \n> *** LIN **********************************\n> actual=# select count(*) from foo;\n> count\n> ---------\n> 2500000\n> (1 row)\n> \n> Time: 1362,193 ms\n> actual=# EXPLAIN ANALYZE\n> actual-# select count(*) from foo;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=34554.20..34554.21 rows=1 width=0) (actual\n> time=4737.243..4737.244 rows=1 loops=1)\n> -> Seq Scan on foo (cost=0.00..28304.20 rows=2500000 width=0)\n> (actual time=0.058..2585.170 rows=2500000 loops=1)\n> Total runtime: 4737.363 ms\n> (3 rows)\n> \n> Time: 4738,367 ms\n> actual=#\n> ******************************************\n> \n> Just a word about FS i've used:\n> BSD:\n> /dev/da0s1g on /usr/local/pgsql (ufs, local, noatime, soft-updates)\n> \n> LIN:\n> /dev/sda7 on /usr/local/pgsql type xfs (rw,noatime)\n> \n> \n> My question is simple :) what's wrong with the FreeBSD BOX??\n> What's the rule for computing gettimeofday() time ??\n\nI can't speak to the gettimeofday() question, but I have a slew of comments\nregarding other parts of this email.\n\nThe first thing that I expect most people will comment on is your testing\nstrategy. You don't get a lot of details, but it seems as if you ran\n1 query on each server, 1 run on each. If you actually did more tests,\nyou should provide that information, otherwise, people will criticize your\ntesting strategy instead of looking at the problem.\n\nThe other side to this is that you haven't shown enough information about\nyour alleged problem to even start to investigate it.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Wed, 21 Feb 2007 10:23:51 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance Linux vs FreeBSD"
},
{
"msg_contents": "Le mercredi 21 février 2007 10:57, Jacek Zaręba a écrit :\n> Now the point :)) According to my tests postgres on Linux\n> box run much faster then on FreeBSD, here are my results:\n\nYou may want to compare some specific benchmark, as in bench with you \napplication queries. For this, you can consider Tsung and pgfouine softwares.\n http://tsung.erlang-projects.org/\n http://pgfouine.projects.postgresql.org/tsung.html\n\nRegards,\n-- \nDimitri Fontaine\n",
"msg_date": "Wed, 21 Feb 2007 20:27:00 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance Linux vs FreeBSD"
},
{
"msg_contents": "Jacek Zar�ba wrote:\n> Hello, I've set up 2 identical machines, hp server 1ghz p3,\n> 768mb ram, 18gb scsi3 drive. On the first one I've installed\n> Debian/GNU 4.0 Linux, on the second FreeBSD 6.2. On both\n> machines I've installed Postgresql 8.2.3 from sources.\n> Now the point :)) According to my tests postgres on Linux\n> box run much faster then on FreeBSD, here are my results:\n> \n\nWith respect to 'select count(*) from ...' being slower on FreeBSD, \nthere are a number of things to try to make FreeBSD faster for this sort \nof query. Two I'm currently using are:\n\n- setting sysctl vfs.read_max to 16 or 32\n- rebuilding the relevant filesystem with 32K blocks and 4K frags\n\nI have two (almost) identical systems - one running Gentoo, one running \nFreeBSD 6.2. With the indicated changes the FreeBSD system performs \npretty much the same as the Gentoo one.\n\nWith respect to the 'explain analyze' times, FreeBSD has a more accurate \nand more expensive gettimeofday call - which hammers its 'explain \nanalyze' times compared to Linux.\n\nCheers\n\nMark\n\n",
"msg_date": "Thu, 22 Feb 2007 10:12:42 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance Linux vs FreeBSD"
},
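As a rough illustration of Mark's two FreeBSD-side changes (device and mount point are copied from Jacek's mount output; newfs destroys the filesystem contents, so this is a sketch, not a recipe):

    # Larger read-ahead for sequential scans (persist it in /etc/sysctl.conf):
    sysctl vfs.read_max=32

    # Rebuild the PostgreSQL filesystem with 32K blocks / 4K fragments,
    # after dumping the data elsewhere -- newfs wipes the partition:
    umount /usr/local/pgsql
    newfs -U -b 32768 -f 4096 /dev/da0s1g
    mount /usr/local/pgsql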
{
"msg_contents": "On 2007-02-21, Mark Kirkwood <[email protected]> wrote:\n> With respect to 'select count(*) from ...' being slower on FreeBSD, \n> there are a number of things to try to make FreeBSD faster for this sort \n> of query. Two I'm currently using are:\n>\n> - setting sysctl vfs.read_max to 16 or 32\n> - rebuilding the relevant filesystem with 32K blocks and 4K frags\n\nBe aware that increasing the filesystem block size above 16k is _known_\nto tickle kernel bugs - there is a workaround involving increasing\nBKVASIZE, but this isn't a config parameter and therefore you have to\npatch the sources.\n\nThe symptom to look for is: largescale filesystem deadlocks with many\nprocesses (especially syncer) blocked in \"nbufkv\" state.\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n",
"msg_date": "Tue, 27 Feb 2007 17:56:25 -0000",
"msg_from": "Andrew - Supernews <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance Linux vs FreeBSD"
}
] |
[
{
"msg_contents": "Our application has a table that is only logged to, and infrequently\nused for reporting. There generally no deletes and updates.\n\nRecently, the shear size (an estimated 36 million rows) caused a serious\nproblem because it prevented a \"vacuum analyze\" on the whole database\nfrom finishing in a timely manner.\n\nAs I understand, a table with this usage pattern wouldn't need to be\nvacuumed anyway.\n\nI'm looking for general advice from people who have faced the same\nissue. I'm looking at a number of alternatives:\n\n1. Once a month, we could delete and archive old rows, for possible\nre-import later if we need to report on them. It would seem this would\nneed to be done as proper insert statements for re-importing. (Maybe\nthere is a solution for that with table partitioning? )\n\n2. We could find a way to exclude the table for vacuuming, and let it\ngrow even larger. Putting the table in it's own database would\naccomplish that, but it would nice to avoid the overhead of a second\ndatabase connection.\n\n3. Take a really different approach. Log in CSV format to text files\ninstead, And only import the date ranges we need \"on demand\" if a report\nis requested on the data.\n\nThanks for any tips.\n\n Mark\n",
"msg_date": "Wed, 21 Feb 2007 13:02:16 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to avoid vacuuming a huge logging table"
},
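For option 1, a constraint-exclusion partitioning sketch; the table and column names here are invented for illustration. Each month lives in its own child table, so an old month can be archived and dropped without ever vacuuming the live data:

    CREATE TABLE activity_log (
        logged_at  timestamptz NOT NULL,
        message    text
    );

    CREATE TABLE activity_log_2007_02 (
        CHECK (logged_at >= '2007-02-01' AND logged_at < '2007-03-01')
    ) INHERITS (activity_log);

    -- Route inserts for the current month to its partition:
    CREATE RULE activity_log_insert_2007_02 AS
        ON INSERT TO activity_log
        WHERE NEW.logged_at >= '2007-02-01' AND NEW.logged_at < '2007-03-01'
        DO INSTEAD INSERT INTO activity_log_2007_02 VALUES (NEW.*);

    -- With constraint_exclusion = on, reports over a date range only scan the
    -- matching children; archiving a month is COPY ... TO a file followed by
    -- DROP TABLE, which needs no vacuum at all.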
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\n> Take a really different approach. Log in CSV format to text files\n> instead, And only import the date ranges we need \"on demand\" if a report\n> is requested on the data.\n\nSeems like more work than a separate database to me. :)\n\n> 2. We could find a way to exclude the table for vacuuming, and let it\n> grow even larger. Putting the table in it's own database would\n> accomplish that, but it would nice to avoid the overhead of a second\n> database connection.\n\nSpecific exclusions is generally what I've done for similar problems in \nthe past. If you can live without the per-database summary at the end of \nthe vacuum, you can do something like this:\n\nSET search_path = 'pg_catalog';\nSELECT set_config('search_path',\n current_setting('search_path')||','||quote_ident(nspname),'false')\n FROM pg_namespace\n WHERE nspname <> 'pg_catalog'\n ORDER BY 1;\n\n\\t\n\\o pop\nSELECT 'vacuum verbose analyze '||quote_ident(relname)||';' \n FROM pg_class\n WHERE relkind = 'r'\n AND relname <> 'ginormous_table'\n ORDER BY 1;\n\\o\n\\i pop\n\nOr put any tables you don't want vacuumed by this script into their own schema:\n\n...\nSELECT 'vacuum verbose analyze '||quote_ident(relname)||';' \n FROM pg_class c, pg_namespace n\n WHERE relkind = 'r'\n AND relnamespace = n.oid\n AND nspname = 'novac'\n ORDER BY 1;\n...\n\nJust flip the equality operator, and you've got a way to vacuum just those \nexcluded tables, for example once a week during a slow time.\n\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200702211402\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFF3JeivJuQZxSWSsgRA7LZAKC7Sfz4XBTAfHuk1CpR+eBl7ixBIACeML8N\n1W2sLLI4HMtdyV4EOoh2XkY=\n=eTUi\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Wed, 21 Feb 2007 19:08:58 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid vacuuming a huge logging table"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\nA minor correction to my earlier post: I should have specified the \nschema as well in the vacuum command for tables with the same \nname in different schemas:\n\nSET search_path = 'pg_catalog';\nSELECT set_config('search_path',\n current_setting('search_path')||','||quote_ident(nspname),'false')\n FROM pg_namespace\n WHERE nspname <> 'pg_catalog'\n ORDER BY 1;\n\n\\t\n\\o pop\nSELECT 'vacuum verbose analyze '||quote_ident(nspname)||'.'||quote_ident(relname)||';' \n FROM pg_class c, pg_namespace n\n WHERE relkind = 'r'\n AND relnamespace = n.oid\n AND nspname = 'novac'\n ORDER BY 1;\n\\o\n\\i pop\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200702211652\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niD8DBQFF3L+XvJuQZxSWSsgRAwzeAKDz+YmLmm9K0of/ObjUux/P7fg7jwCfeSoK\nTfVGoSyThrdFjlGXWn1aEGI=\n=/jBZ\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Wed, 21 Feb 2007 21:58:33 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid vacuuming a huge logging table"
},
{
"msg_contents": "On Wed, 21 Feb 2007 21:58:33 -0000\n\"Greg Sabino Mullane\" <[email protected]> wrote:\n> SELECT 'vacuum verbose analyze '||quote_ident(nspname)||'.'||quote_ident(relname)||';' \n> FROM pg_class c, pg_namespace n\n> WHERE relkind = 'r'\n> AND relnamespace = n.oid\n> AND nspname = 'novac'\n> ORDER BY 1;\n\nI assume you meant \"AND nspname != 'novac'\"\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 21 Feb 2007 17:04:06 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid vacuuming a huge logging table"
}
] |
[
{
"msg_contents": "I have a new task of automating the export of a very complex Crystal \nReport. One thing I have learned in the last 36 hours is that the \nexport process to PDF is really, really, slooww..\n\nAnyway, that is none of your concern. But, I am thinking that I can \nsomehow utilize some of PG's strengths to work around the bottleneck in \nCrystal. The main problem seems to be that tens of thousands of rows of \ndata must be summarized in the report and calculations made. Based on \nmy recent experience, I'd say that this task would be better suited to \nPG than relying on Crystal Reports to do the summarizing.\n\nThe difficulty I'm having is that the data needed is from about 50 \ndifferent \"snapshots\" of counts over time. The queries are very simple, \nhowever I believe I am going to need to combine all of these queries \ninto a single function that runs all 50 and then returns just the \ncount(*) of each as a separate \"column\" in a single row.\n\nI have been Googling for hours and reading about PL/pgsql functions in \nthe PG docs and I have yet to find examples that returns multiple items \nin a single row. I have seen cases that return \"sets of\", but that \nappears to be returning multiple rows, not columns. Maybe this I'm \nbarking up the wrong tree?\n\nHere's the gist of what I need to do:\n\n1) query count of rows that occurred between 14 months ago and 12 months \nago for a given criteria, then count the rows that occurred between 2 \nmonths ago and current. Repeat for 50 different where clauses.\n\n2) return each count(*) as a \"column\" so that in the end I can say:\n\nselect count_everything( ending_date );\n\nand have it return to me:\n\ncount_a_lastyear count_a_last60 count_b_lastyear count_b_last60\n---------------- -------------- ---------------- --------------\n 100 150 200 250\n\nI'm not even sure if a function is what I'm after, maybe this can be \ndone in a view? I am embarrassed to ask something that seems like it \nshould be easy, but some key piece of knowledge is escaping me on this.\n\nI don't expect someone to write this for me, I just need a nudge in the \nright direction and maybe a URL or two to get me started.\n\nThank you for reading this far.\n\n-Dan\n",
"msg_date": "Wed, 21 Feb 2007 11:33:24 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "General advice on user functions"
},
{
"msg_contents": "Hi Dan,\n\tyou may take a look at the crosstab contrib module. There you can find a \nfunction that can convert your rows into columns. However, you can also use \nthe manual approach, as crosstab has its limitations too. \n\tYou can create a TYPE that has all the columns you need, you create a \nfunction that fills and returns this newly created TYPE. Of course the type \nwill have all those 50 fields defined, so it's boring, but should work. (Take \na look at \nhttp://www.postgresql.org/docs/8.2/interactive/sql-createtype.html).\n\nA Dimecres 21 Febrer 2007 19:33, Dan Harris va escriure:\n> I have a new task of automating the export of a very complex Crystal\n> Report. One thing I have learned in the last 36 hours is that the\n> export process to PDF is really, really, slooww..\n>\n> Anyway, that is none of your concern. But, I am thinking that I can\n> somehow utilize some of PG's strengths to work around the bottleneck in\n> Crystal. The main problem seems to be that tens of thousands of rows of\n> data must be summarized in the report and calculations made. Based on\n> my recent experience, I'd say that this task would be better suited to\n> PG than relying on Crystal Reports to do the summarizing.\n>\n> The difficulty I'm having is that the data needed is from about 50\n> different \"snapshots\" of counts over time. The queries are very simple,\n> however I believe I am going to need to combine all of these queries\n> into a single function that runs all 50 and then returns just the\n> count(*) of each as a separate \"column\" in a single row.\n>\n> I have been Googling for hours and reading about PL/pgsql functions in\n> the PG docs and I have yet to find examples that returns multiple items\n> in a single row. I have seen cases that return \"sets of\", but that\n> appears to be returning multiple rows, not columns. Maybe this I'm\n> barking up the wrong tree?\n>\n> Here's the gist of what I need to do:\n>\n> 1) query count of rows that occurred between 14 months ago and 12 months\n> ago for a given criteria, then count the rows that occurred between 2\n> months ago and current. Repeat for 50 different where clauses.\n>\n> 2) return each count(*) as a \"column\" so that in the end I can say:\n>\n> select count_everything( ending_date );\n>\n> and have it return to me:\n>\n> count_a_lastyear count_a_last60 count_b_lastyear count_b_last60\n> ---------------- -------------- ---------------- --------------\n> 100 150 200 250\n>\n> I'm not even sure if a function is what I'm after, maybe this can be\n> done in a view? I am embarrassed to ask something that seems like it\n> should be easy, but some key piece of knowledge is escaping me on this.\n>\n> I don't expect someone to write this for me, I just need a nudge in the\n> right direction and maybe a URL or two to get me started.\n>\n> Thank you for reading this far.\n>\n> -Dan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ 
",
"msg_date": "Wed, 21 Feb 2007 20:05:26 +0100",
"msg_from": "Albert Cervera Areny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General advice on user functions"
},
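A cut-down sketch of the composite-type route Albert describes, with only two of the fifty counters and made-up table and column names (events, happened_at, event_type):

    CREATE TYPE summary_counts AS (
        count_a_lastyear bigint,
        count_a_last60   bigint
    );

    CREATE OR REPLACE FUNCTION count_everything(ending_date date)
    RETURNS summary_counts AS $$
    DECLARE
        result summary_counts;
    BEGIN
        SELECT count(*) INTO result.count_a_lastyear
          FROM events
         WHERE event_type = 'a'
           AND happened_at BETWEEN ending_date - interval '14 months'
                               AND ending_date - interval '12 months';

        SELECT count(*) INTO result.count_a_last60
          FROM events
         WHERE event_type = 'a'
           AND happened_at >= ending_date - interval '2 months';

        RETURN result;
    END;
    $$ LANGUAGE plpgsql;

    -- SELECT * FROM count_everything('2007-02-21');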
{
"msg_contents": "On 2/21/07, Dan Harris <[email protected]> wrote:\n> I have a new task of automating the export of a very complex Crystal\n> Report. One thing I have learned in the last 36 hours is that the\n> export process to PDF is really, really, slooww..\n>\n> Anyway, that is none of your concern. But, I am thinking that I can\n> somehow utilize some of PG's strengths to work around the bottleneck in\n> Crystal. The main problem seems to be that tens of thousands of rows of\n> data must be summarized in the report and calculations made. Based on\n> my recent experience, I'd say that this task would be better suited to\n> PG than relying on Crystal Reports to do the summarizing.\n>\n> The difficulty I'm having is that the data needed is from about 50\n> different \"snapshots\" of counts over time. The queries are very simple,\n> however I believe I am going to need to combine all of these queries\n> into a single function that runs all 50 and then returns just the\n> count(*) of each as a separate \"column\" in a single row.\n>\n> I have been Googling for hours and reading about PL/pgsql functions in\n> the PG docs and I have yet to find examples that returns multiple items\n> in a single row. I have seen cases that return \"sets of\", but that\n> appears to be returning multiple rows, not columns. Maybe this I'm\n> barking up the wrong tree?\n>\n> Here's the gist of what I need to do:\n>\n> 1) query count of rows that occurred between 14 months ago and 12 months\n> ago for a given criteria, then count the rows that occurred between 2\n> months ago and current. Repeat for 50 different where clauses.\n>\n> 2) return each count(*) as a \"column\" so that in the end I can say:\n>\n> select count_everything( ending_date );\n>\n> and have it return to me:\n>\n> count_a_lastyear count_a_last60 count_b_lastyear count_b_last60\n> ---------------- -------------- ---------------- --------------\n> 100 150 200 250\n>\n> I'm not even sure if a function is what I'm after, maybe this can be\n> done in a view? I am embarrassed to ask something that seems like it\n> should be easy, but some key piece of knowledge is escaping me on this.\n\nthis could be be done in a view, a function, or a view function combo.\n you can select multiple counts at once like this:\n\nselect (select count(*) from foo) as foo, (select count(*) from bar) as bar;\n\nbut this may not be appropriate in some cases where something complex\nis going on. you may certainly return multiple columns from a single\ncall using one of two methods:\n\n* out parameters (8.1+)\n* custom type\n\nboth of which basically return a record instead of a scalar. any\nfunction call can be wrapped in a view which can be as simple as\n\ncreate view foo as select * from my_count_proc();\n\nthis is especially advised if you want to float input parameters over\na table and also filter the inputs via 'where'.\n\nmerlin\n",
"msg_date": "Wed, 21 Feb 2007 14:30:50 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General advice on user functions"
},
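The OUT-parameter flavour (8.1 and later) of the same idea, again with invented names, wrapped in a view as Merlin suggests:

    CREATE OR REPLACE FUNCTION count_everything(ending_date date,
                                                OUT count_a_lastyear bigint,
                                                OUT count_a_last60   bigint)
    AS $$
    BEGIN
        SELECT count(*) INTO count_a_lastyear
          FROM events
         WHERE event_type = 'a'
           AND happened_at BETWEEN ending_date - interval '14 months'
                               AND ending_date - interval '12 months';

        SELECT count(*) INTO count_a_last60
          FROM events
         WHERE event_type = 'a'
           AND happened_at >= ending_date - interval '2 months';

        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    CREATE VIEW report_counts AS
        SELECT * FROM count_everything(current_date);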
{
"msg_contents": "Thank you all for your ideas. I appreciate the quick response.\n\n-Dan\n",
"msg_date": "Wed, 21 Feb 2007 14:40:54 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General advice on user functions"
}
] |
[
{
"msg_contents": "\nWhen I upgraded a busy database system to PostgreSQL 8.1, I was excited\nabout AutoVacuum, and promptly enabled it, and turned off the daily\nvacuum process.\n\n(\nI set the following, as well as the option to enable auto vacuuming\nstats_start_collector = true\nstats_row_level = true\n)\n\nI could see in the logs that related activity was happening, but within\na few days, the performance became horrible, and enabling the regular\nvacuum fixed it.\n\nEventually autovacuum was completely disabled.\n\nWhat could have happened? Is 8.2 more likely to \"just work\" in the\nregard? Is the the table-specific tuning that I would have needed to do?\n\nI realize getting autovacuuming to work could be one way to exclude the\nlarge table I wrote about in a recent post.\n\n Mark\n",
"msg_date": "Wed, 21 Feb 2007 13:37:10 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto-Vacuum in 8.1 was ineffective for me. 8.2 may work better?"
},
{
"msg_contents": "Mark Stosberg wrote:\n> \n> When I upgraded a busy database system to PostgreSQL 8.1, I was excited\n> about AutoVacuum, and promptly enabled it, and turned off the daily\n> vacuum process.\n> \n> (\n> I set the following, as well as the option to enable auto vacuuming\n> stats_start_collector = true\n> stats_row_level = true\n> )\n> \n> I could see in the logs that related activity was happening, but within\n> a few days, the performance became horrible, and enabling the regular\n> vacuum fixed it.\n> \n> Eventually autovacuum was completely disabled.\n\nThis has been tracked down to a bug in 8.1's Windows port. See\nhttp://people.planetpostgresql.org/mha/index.php?/archives/134-8.1-on-win32-pgstat-and-autovacuum.html\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 21 Feb 2007 15:55:44 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto-Vacuum in 8.1 was ineffective for me. 8.2 may work better?"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Mark Stosberg wrote:\n>> When I upgraded a busy database system to PostgreSQL 8.1, I was excited\n>> about AutoVacuum, and promptly enabled it, and turned off the daily\n>> vacuum process.\n>>\n>> (\n>> I set the following, as well as the option to enable auto vacuuming\n>> stats_start_collector = true\n>> stats_row_level = true\n>> )\n>>\n>> I could see in the logs that related activity was happening, but within\n>> a few days, the performance became horrible, and enabling the regular\n>> vacuum fixed it.\n>>\n>> Eventually autovacuum was completely disabled.\n> \n> This has been tracked down to a bug in 8.1's Windows port. See\n> http://people.planetpostgresql.org/mha/index.php?/archives/134-8.1-on-win32-pgstat-and-autovacuum.html\n\nThanks for the response Alvaro. This would have been on FreeBSD.\n\nLet me ask the question a different way: Is simply setting the two\nvalues plus enabling autovacuuming generally enough, or is further\ntweaking common place?\n\nPerhaps I'll give it another tree when we upgrade to 8.2.\n\n Mark\n",
"msg_date": "Wed, 21 Feb 2007 15:28:34 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Auto-Vacuum in 8.1 was ineffective for me. 8.2 may work better?"
},
{
"msg_contents": "Mark Stosberg wrote:\n> Let me ask the question a different way: Is simply setting the two\n> values plus enabling autovacuuming generally enough, or is further\n> tweaking common place?\n\nNo, most people in addition to setting those two GUC settings also lower \nthe threshold values (there is a fair amount of discussion on this in \nthe lists) the defaults are not aggressive enough, so you tables \nprobably aren't getting vacuumed often enough to keep up with the load.\n\nSome work loads also require that you do cron based vacuuming of \nspecific highly active tables.\n\n> Perhaps I'll give it another tree when we upgrade to 8.2.\n\nAutovacuum is still somewhat new, and there were some significant \nimprovements in 8.2 so yes you should give it another try.\n",
"msg_date": "Wed, 21 Feb 2007 15:36:32 -0500",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto-Vacuum in 8.1 was ineffective for me. 8.2 may\n work better?"
},
{
"msg_contents": "Mark Stosberg wrote:\n> Alvaro Herrera wrote:\n> > Mark Stosberg wrote:\n> >> When I upgraded a busy database system to PostgreSQL 8.1, I was excited\n> >> about AutoVacuum, and promptly enabled it, and turned off the daily\n> >> vacuum process.\n> >>\n> >> (\n> >> I set the following, as well as the option to enable auto vacuuming\n> >> stats_start_collector = true\n> >> stats_row_level = true\n> >> )\n> >>\n> >> I could see in the logs that related activity was happening, but within\n> >> a few days, the performance became horrible, and enabling the regular\n> >> vacuum fixed it.\n> >>\n> >> Eventually autovacuum was completely disabled.\n\n> > This has been tracked down to a bug in 8.1's Windows port. See\n> > http://people.planetpostgresql.org/mha/index.php?/archives/134-8.1-on-win32-pgstat-and-autovacuum.html\n> \n> Thanks for the response Alvaro. This would have been on FreeBSD.\n\nOh, maybe I misread your OP :-) With \"completely disabled\" I thought\nyou meant it was \"frozen\", i.e., it ran, but did nothing.\n\n> Let me ask the question a different way: Is simply setting the two\n> values plus enabling autovacuuming generally enough, or is further\n> tweaking common place?\n\nI assume your FSM configuration is already good enough?\n\nWhat you should do is find out what tables are not getting vacuumed\nenough (e.g. by using contrib/pgstattuple repeteadly and seeing where is\ndead space increasing) and tweak the autovacuum settings to have them\nvacuumed more often. This is done by inserting appropriate tuples in\npg_autovacuum.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 21 Feb 2007 17:51:30 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto-Vacuum in 8.1 was ineffective for me. 8.2 may work better?"
},
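In 8.1 the per-table overrides live in the pg_autovacuum system catalog. A hedged example, with invented table names and thresholds (-1 means fall back to the server-wide default):

    -- Vacuum/analyze a hot table much more aggressively than the defaults:
    INSERT INTO pg_autovacuum
        (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
         anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
    VALUES ('hot_table'::regclass, true, 200, 0.05, 100, 0.025, -1, -1);

    -- Keep autovacuum away from the huge logging table entirely:
    INSERT INTO pg_autovacuum
        (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
         anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
    VALUES ('huge_log'::regclass, false, -1, -1, -1, -1, -1, -1);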
{
"msg_contents": "\nThanks to everyone for the feedback about vacuuming. It's been very\nuseful. The pointers to the pgstattuple and Pgfouine tools were also\nhelpful.\n\nI'm now considering the following plan for trying Autovacuuming again\nwith 8.1. I'd like any peer review you have to offer of the following:\n\n1. First, I'll move the settings to match the defaults in 8.2. The ones\nI noticed in particular were:\n\nautovacuum_vacuum_threshold changes: 1000 -> 500\nautovacuum_anayze_threshold changes: 500 -> 250\nautovacuum_scale_factor changes: .4 -> .2\nautovacuum_analyze_scale_factor changes .2 -> .1\n\n2. Try the vacuum cost delay feature, starting with a 20ms value:\n\nautovacuum_vacuum_cost_delay = 20\n\n3. Immediately add a row to pg_autovacuum for a huge logging table that\nwould be too slow to vacuum usually. We'll still vacuum it once a week\nfor good measure by cron.\n\n4. For good measure, I think I still keep the nightly cron entry that\ndoes a complete vacuum analyze (except for that large table...).\n\nSeem like a reasonable plan?\n\n Mark\n",
"msg_date": "Thu, 22 Feb 2007 16:53:52 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using the 8.2 autovacuum values with 8.1"
},
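Mark's steps 1 and 2 as they would read in an 8.1 postgresql.conf, with the parameter names spelled out in full (the values are the 8.2 defaults he is copying):

    autovacuum = on
    stats_start_collector = on
    stats_row_level = on
    autovacuum_vacuum_threshold = 500
    autovacuum_analyze_threshold = 250
    autovacuum_vacuum_scale_factor = 0.2
    autovacuum_analyze_scale_factor = 0.1
    autovacuum_vacuum_cost_delay = 20      # milliseconds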
{
"msg_contents": "On Thu, 2007-02-22 at 22:53, Mark Stosberg wrote:\n> Thanks to everyone for the feedback about vacuuming. It's been very\n> useful. The pointers to the pgstattuple and Pgfouine tools were also\n> helpful.\n> \n> I'm now considering the following plan for trying Autovacuuming again\n> with 8.1. I'd like any peer review you have to offer of the following:\n> \n> 1. First, I'll move the settings to match the defaults in 8.2. The ones\n> I noticed in particular were:\n> \n> autovacuum_vacuum_threshold changes: 1000 -> 500\n> autovacuum_anayze_threshold changes: 500 -> 250\n> autovacuum_scale_factor changes: .4 -> .2\n> autovacuum_analyze_scale_factor changes .2 -> .1\n> \n> 2. Try the vacuum cost delay feature, starting with a 20ms value:\n> \n> autovacuum_vacuum_cost_delay = 20\n> \n> 3. Immediately add a row to pg_autovacuum for a huge logging table that\n> would be too slow to vacuum usually. We'll still vacuum it once a week\n> for good measure by cron.\n> \n> 4. For good measure, I think I still keep the nightly cron entry that\n> does a complete vacuum analyze (except for that large table...).\n> \n> Seem like a reasonable plan?\n\nYou likely don't need the nightly full vacuum run... we also do here a\nnightly vacuum beside autovacuum, but not a full one, only for tables\nwhich are big enough that we don't want autovacuum to touch them in high\nbusiness time but they have enough change that we want a vacuum on them\nfrequent enough. I discover them by checking the stats, for example:\n\nSELECT \n c.relname,\n c.reltuples::bigint as rowcnt,\n pg_stat_get_tuples_inserted(c.oid) AS inserted, \n pg_stat_get_tuples_updated(c.oid) AS updated, \n pg_stat_get_tuples_deleted(c.oid) AS deleted\nFROM pg_class c\nWHERE c.relkind = 'r'::\"char\"\nGROUP BY c.oid, c.relname, c.reltuples\nHAVING pg_stat_get_tuples_updated(c.oid) +\npg_stat_get_tuples_deleted(c.oid) > 1000\nORDER BY pg_stat_get_tuples_updated(c.oid) +\npg_stat_get_tuples_deleted(c.oid) DESC;\n\n\nThe top tables in this list for which the (deleted + updated) / rowcnt \nis relatively small but still significant need your attention for\nnightly vacuum... the rest is handled just fine by autovacuum.\n\nOn the other end of the scale, if you have tables for which the\ndeletion/update rate is way higher then the row count, that's likely a\nhot-spot table which you probably need extra vacuuming during the day.\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Fri, 23 Feb 2007 10:13:31 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using the 8.2 autovacuum values with 8.1"
},
{
"msg_contents": "On Fri, Feb 23, 2007 at 10:13:31AM +0100, Csaba Nagy wrote:\n> You likely don't need the nightly full vacuum run... we also do here a\n> nightly vacuum beside autovacuum, but not a full one, only for tables\n> which are big enough that we don't want autovacuum to touch them in high\n> business time but they have enough change that we want a vacuum on them\n> frequent enough. I discover them by checking the stats, for example:\n\nSomething else I like doing is a periodic vacuumdb -av and capture the\noutput. It's a good way to keep an eye on FSM utilization. Once you've\ngot vacuuming under control you can probably just do that once a month\nor so.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 23 Feb 2007 11:37:36 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using the 8.2 autovacuum values with 8.1"
}
] |
[
{
"msg_contents": "Hi,\n\nI am trying to execute some queries in PostgreSQL that produce a large\nnumber of results and I do not want to display the output (redirect it\nto /dev/null). I have tried the psql client with \\o /dev/null option,\nJDBC and libpq functions, but all of them have to buffer totally the\nresult before redirecting it. Is there any way to disable result\nbuffering, either on the client or on the server side? Note that the\nEXPLAIN ANALYSE does not produce consistent estimations as it seems to\navoid calling the result tuple construction function. This is concluded\nby the fact that augmenting the projection width hardly changes the\nexecution time, which is incompatible with the supplementary field copy\ncost.\n\nThanks.\nKonstantinos Krikellas\n\n",
"msg_date": "Thu, 22 Feb 2007 11:02:50 +0000",
"msg_from": "Konstantinos Krikellas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disable result buffering to frontend clients"
},
{
"msg_contents": "Konstantinos Krikellas wrote:\n> Hi,\n> \n> I am trying to execute some queries in PostgreSQL that produce a large\n> number of results and I do not want to display the output (redirect it\n> to /dev/null). I have tried the psql client with \\o /dev/null option,\n> JDBC and libpq functions, but all of them have to buffer totally the\n> result before redirecting it. Is there any way to disable result\n> buffering, either on the client or on the server side? \n\nWell, you could use a cursor, but that could change the plan (I believe \nit favours plans that return the first result quickly).\n\nYou could have a function that used FOR-IN-EXECUTE to run a query for \nyou then just loop through the results, doing nothing. That would keep \neverything server-side.\n\nIf you really want to duplicate all the query costs except client-side \nbuffering, the simplest might be to just hack the libpq source to \ndiscard any query results rather than buffering them - shouldn't be too \ndifficult.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 22 Feb 2007 11:12:11 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disable result buffering to frontend clients"
}
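A minimal version of the server-side FOR-IN-EXECUTE approach Richard mentions, which runs the query and throws the rows away without any client-side buffering (the function name and test query are arbitrary):

    CREATE OR REPLACE FUNCTION discard_results(q text) RETURNS bigint AS $$
    DECLARE
        r record;
        n bigint := 0;
    BEGIN
        FOR r IN EXECUTE q LOOP
            n := n + 1;   -- touch the row, then discard it
        END LOOP;
        RETURN n;         -- row count, as a sanity check
    END;
    $$ LANGUAGE plpgsql;

    -- SELECT discard_results('SELECT * FROM some_big_table');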
] |
[
{
"msg_contents": "Hello,\n\nI experience significant performance issues with postgresql and updates.\nI have a table which contains ~1M rows.\nLayout:\nTOTO=# \\d versions_9d;\n Table �public.versions_9d�\n Colonne | Type | Modificateurs\n------------+------------------------+---------------\n hash | character(32) |\n date | integer | default 0\n diff | integer | default 0\n flag | integer | default 0\n size | bigint | default 0\n zip_size | bigint | default 0\n jds | integer | default 0\n scanned | integer | default 0\n dead | integer | default 0\n\nTest case:\nCreate a new DB and load a dump of the above database with 976009 rows, \nthen i perform updates on the whole table. I recorded the time taken \nfor each full update and the amount of extra disk space used. Each \nconsecutive update of the table is slower than the previous\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=2\"\nUPDATE 976009\nreal 0m41.542s\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=3\"\nUPDATE 976009\nreal 0m45.140s (+480M)\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=4\"\nUPDATE 976009\nreal 1m10.554s (+240M)\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=5\"\nUPDATE 976009\nreal 1m24.065s (+127M)\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=6\"\nUPDATE 976009\nreal 1m17.758s (+288M)\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=7\"\nUPDATE 976009\nreal 1m26.777s (+288M)\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=8\"\nUPDATE 976009\nreal 1m39.151s (+289M)\n\nThen i tried adding an index to the table on the column date (int) that \nstores unix timestamps.\nTOTO=# CREATE INDEX versions_index ON versions_9d (date);\n(-60M) disk space goes down on index creation\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=9\"\nUPDATE 976009\nreal 3m8.219s (+328M)\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=8\"\nUPDATE 976009\nreal 6m24.716s (+326M)\nbeebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=10\"\nUPDATE 976009\nreal 8m25.274s (+321M)\n\nAs a sanity check, i loaded mysql5 and tried the same database and \nupdates. With mysql, the update always lasts ~8s.\nThe conclusions I have come to is that update==insert+delete which seems \nvery heavy when index are present (and heavy disk wise on big tables). \nIs there a switch i can flip to optimise this?\n\nThanks in advance,\nGabriel Biberian\n",
"msg_date": "Thu, 22 Feb 2007 19:11:42 +0100",
"msg_from": "Gabriel Biberian <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow update on 1M rows (worse with indexes)"
},
{
"msg_contents": "On Thu, Feb 22, 2007 at 07:11:42PM +0100, Gabriel Biberian wrote:\n> Create a new DB and load a dump of the above database with 976009 rows, \n> then i perform updates on the whole table. I recorded the time taken \n> for each full update and the amount of extra disk space used. Each \n> consecutive update of the table is slower than the previous\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=2\"\n> UPDATE 976009\n> real 0m41.542s\n\nYou're creating a huge amount of dead rows by this kind of procedure. Try a\nVACUUM in-between, or enable autovacuum. (Adjusting your WAL and\ncheckpointing settings might help too.)\n\nApart from that, do you really have a scenario that requires updating _all_\nrows in your table regularly?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 22 Feb 2007 19:25:00 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow update on 1M rows (worse with indexes)"
},
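To make Steinar's suggestion concrete, a sketch using the table name from the original post (timings are not claimed): vacuuming between the bulk updates lets each pass reuse the space freed by the previous one instead of extending the table.

UPDATE versions_9d SET flag = 2;
VACUUM ANALYZE versions_9d;   -- reclaim the ~1M dead row versions before the next pass
UPDATE versions_9d SET flag = 3;
VACUUM ANALYZE versions_9d;

-- VACUUM VERBOSE shows how many dead row versions each pass left behind
VACUUM VERBOSE versions_9d;

On 8.1 and later, enabling autovacuum gets much the same effect for normal workloads, though it may not keep up with back-to-back whole-table updates.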
{
"msg_contents": "n i tried adding an index to the table on the column date (int) that\n> stores unix timestamps.\n> TOTO=# CREATE INDEX versions_index ON versions_9d (date);\n> (-60M) disk space goes down on index creation\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=9\"\n> UPDATE 976009\n> real 3m8.219s (+328M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=8\"\n> UPDATE 976009\n> real 6m24.716s (+326M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=10\"\n> UPDATE 976009\n> real 8m25.274s (+321M)\n> \n> As a sanity check, i loaded mysql5 and tried the same database and\n> updates. With mysql, the update always lasts ~8s.\n\nYes but with mysql did you use myisam or innodb?\n\n\n> The conclusions I have come to is that update==insert+delete which seems\n> very heavy when index are present (and heavy disk wise on big tables).\n> Is there a switch i can flip to optimise this?\n> \n> Thanks in advance,\n> Gabriel Biberian\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Thu, 22 Feb 2007 10:42:39 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow update on 1M rows (worse with indexes)"
},
{
"msg_contents": "\nhow about saying:\n\nlock table versions_9d in EXCLUSIVE mode;\nUPDATE versions_9d SET flag=2;\ncommit;\n\nIsmo\n\nOn Thu, 22 Feb 2007, Gabriel Biberian wrote:\n\n> Hello,\n> \n> I experience significant performance issues with postgresql and updates.\n> I have a table which contains ~1M rows.\n> Layout:\n> TOTO=# \\d versions_9d;\n> Table «public.versions_9d»\n> Colonne | Type | Modificateurs\n> ------------+------------------------+---------------\n> hash | character(32) |\n> date | integer | default 0\n> diff | integer | default 0\n> flag | integer | default 0\n> size | bigint | default 0\n> zip_size | bigint | default 0\n> jds | integer | default 0\n> scanned | integer | default 0\n> dead | integer | default 0\n> \n> Test case:\n> Create a new DB and load a dump of the above database with 976009 rows, then i\n> perform updates on the whole table. I recorded the time taken for each full\n> update and the amount of extra disk space used. Each consecutive update of\n> the table is slower than the previous\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=2\"\n> UPDATE 976009\n> real 0m41.542s\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=3\"\n> UPDATE 976009\n> real 0m45.140s (+480M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=4\"\n> UPDATE 976009\n> real 1m10.554s (+240M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=5\"\n> UPDATE 976009\n> real 1m24.065s (+127M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=6\"\n> UPDATE 976009\n> real 1m17.758s (+288M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=7\"\n> UPDATE 976009\n> real 1m26.777s (+288M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=8\"\n> UPDATE 976009\n> real 1m39.151s (+289M)\n> \n> Then i tried adding an index to the table on the column date (int) that stores\n> unix timestamps.\n> TOTO=# CREATE INDEX versions_index ON versions_9d (date);\n> (-60M) disk space goes down on index creation\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=9\"\n> UPDATE 976009\n> real 3m8.219s (+328M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=8\"\n> UPDATE 976009\n> real 6m24.716s (+326M)\n> beebox@evobrik01:~$ time psql TOTO -c \"UPDATE versions_9d SET flag=10\"\n> UPDATE 976009\n> real 8m25.274s (+321M)\n> \n> As a sanity check, i loaded mysql5 and tried the same database and updates.\n> With mysql, the update always lasts ~8s.\n> The conclusions I have come to is that update==insert+delete which seems very\n> heavy when index are present (and heavy disk wise on big tables). Is there a\n> switch i can flip to optimise this?\n> \n> Thanks in advance,\n> Gabriel Biberian\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> ",
"msg_date": "Fri, 23 Feb 2007 08:18:20 +0200 (EET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: slow update on 1M rows (worse with indexes)"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nHere is the situation. We are at postgres version 8.1.3.\n\n \n\nI have a table that gets many rows inserted, updated and then deleted,\nconsistently throughout the day. At any point in time this table should\nhave no more than 50 actual rows and many times a direct select against\nthis table produces no rows. This table also has a VACUUM FULL ANALYZE\nperformed against it about very 30 minutes. I noticed the vacuum was\ntaking a considerable amount of time for a table with a small number of\nactual rows. The output of the first vacuum full analyze verbose I\nperformed showed that this table had 3,699,704 dead row versions that\ncould not be removed. This number of dead rows that could not be\nreleased increased with each vacuum full that was performed. The output\nof the last vacuum full is shown below. \n \nThe only way I was able to get these dead row version removed was to\nperform a truncate on the table. I performed the truncate when the\ntable was empty and there was no activity (insert, updates, delete or\nvacuums, etc) being performed against this table. After the truncate I\nperformed another vacuum full analyze verbose. The vacuum was very fast\nand the output of the vacuum showed that there were no non-removable\nrows versions.\n \nSo my question is what makes a dead row nonremovable?\n \nMiscellaneous info about table\n \nAll inserts, updates and deletes to this table are performed within\nfunctions that get called when a row is inserted into another table.\n \nBelow is the output of a \"VACUUM FULL VERBOSE ANALYZE\nnc_persistent_host_temp;\" before the truncate.\n \nINFO: vacuuming \"public.nc_persistent_host_temp\"\nINFO: \"nc_persistent_host_temp\": found 0 removable, 4599704\nnonremovable row versions in 90171 pages\nDETAIL: 4599704 dead row versions cannot be removed yet.\nNonremovable row versions range from 132 to 184 bytes long.\nThere were 95884 unused item pointers.\nTotal free space (including removable row versions) is 7140772 bytes.\n61 pages are or will become empty, including 0 at the end of the table.\n9166 pages containing 2002868 free bytes are potential move\ndestinations.\nCPU 21.07s/45.15u sec elapsed 71.27 sec.\nINFO: \"nc_persistent_host_temp\": moved 0 row versions, truncated 90171\nto 90171 pages\nDETAIL: CPU 2.98s/2.20u sec elapsed 101.17 sec.\nINFO: vacuuming \"pg_toast.pg_toast_1036640\"\nINFO: \"pg_toast_1036640\": found 0 removable, 0 nonremovable row\nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_1036640_index\" now contains 0 row versions in 1\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.nc_persistent_host_temp\"\nINFO: \"nc_persistent_host_temp\": scanned 3000 of 90171 pages,\ncontaining 0 live rows and 152997 dead rows; 0 rows in sample, 0\nestimated total rows\nVACUUM\n \n \n\n Thanks,\n\n \n\nBarbara Cosentino\n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nHere is the situation. We are at postgres version 8.1.3.\n \nI have a table that gets many rows inserted, updated and then deleted, consistently throughout the day. 
At any point in time this table should have no more than 50 actual rows and many times a direct select against this table produces no rows. This table also has a VACUUM FULL ANALYZE performed against it about very 30 minutes. I noticed the vacuum was taking a considerable amount of time for a table with a small number of actual rows. The output of the first vacuum full analyze verbose I performed showed that this table had 3,699,704 dead row versions that could not be removed. This number of dead rows that could not be released increased with each vacuum full that was performed. The output of the last vacuum full is shown below. The only way I was able to get these dead row version removed was to perform a truncate on the table. I performed the truncate when the table was empty and there was no activity (insert, updates, delete or vacuums, etc) being performed against this table. After the truncate I performed another vacuum full analyze verbose. The vacuum was very fast and the output of the vacuum showed that there were no non-removable rows versions. So my question is what makes a dead row nonremovable? Miscellaneous info about table All inserts, updates and deletes to this table are performed within functions that get called when a row is inserted into another table. Below is the output of a “VACUUM FULL VERBOSE ANALYZE nc_persistent_host_temp;” before the truncate. INFO: vacuuming \"public.nc_persistent_host_temp\"INFO: \"nc_persistent_host_temp\": found 0 removable, 4599704 nonremovable row versions in 90171 pagesDETAIL: 4599704 dead row versions cannot be removed yet.Nonremovable row versions range from 132 to 184 bytes long.There were 95884 unused item pointers.Total free space (including removable row versions) is 7140772 bytes.61 pages are or will become empty, including 0 at the end of the table.9166 pages containing 2002868 free bytes are potential move destinations.CPU 21.07s/45.15u sec elapsed 71.27 sec.INFO: \"nc_persistent_host_temp\": moved 0 row versions, truncated 90171 to 90171 pagesDETAIL: CPU 2.98s/2.20u sec elapsed 101.17 sec.INFO: vacuuming \"pg_toast.pg_toast_1036640\"INFO: \"pg_toast_1036640\": found 0 removable, 0 nonremovable row versions in 0 pagesDETAIL: 0 dead row versions cannot be removed yet.Nonremovable row versions range from 0 to 0 bytes long.There were 0 unused item pointers.Total free space (including removable row versions) is 0 bytes.0 pages are or will become empty, including 0 at the end of the table.0 pages containing 0 free bytes are potential move destinations.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index \"pg_toast_1036640_index\" now contains 0 row versions in 1 pagesDETAIL: 0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: analyzing \"public.nc_persistent_host_temp\"INFO: \"nc_persistent_host_temp\": scanned 3000 of 90171 pages, containing 0 live rows and 152997 dead rows; 0 rows in sample, 0 estimated total rowsVACUUM \n Thanks,\n \nBarbara Cosentino",
"msg_date": "Thu, 22 Feb 2007 12:19:50 -0800",
"msg_from": "\"Barbara Cosentino\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum full very slow due to nonremovable dead rows...What makes the\n\tdead rows non-removable?"
},
{
"msg_contents": "On Thu, Feb 22, 2007 at 12:19:50PM -0800, Barbara Cosentino wrote:\n> I have a table that gets many rows inserted, updated and then deleted,\n> consistently throughout the day. At any point in time this table should\n> have no more than 50 actual rows and many times a direct select against\n> this table produces no rows. This table also has a VACUUM FULL ANALYZE\n> performed against it about very 30 minutes.\n\nYou should not usually need VACUUM FULL; doing so all the time will probably\n_decrease_ your performance.\n\n> I noticed the vacuum was taking a considerable amount of time for a table\n> with a small number of actual rows. The output of the first vacuum full\n> analyze verbose I performed showed that this table had 3,699,704 dead row\n> versions that could not be removed. This number of dead rows that could\n> not be released increased with each vacuum full that was performed. The\n> output of the last vacuum full is shown below. \n\nDo you have any long-running transactions going? Those are likely to make\nrows nonremovable. Look for idle workers in a transaction.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 22 Feb 2007 21:29:20 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum full very slow due to nonremovable dead rows...What makes\n\tthe dead rows non-removable?"
}
] |
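A quick way to look for the long-running or idle transactions Steinar suspects, which are what keep dead rows from being removed (VACUUM cannot reclaim row versions that might still be visible to the oldest open transaction). On 8.1 this needs stats_command_string = on, otherwise current_query is not populated; the 30-minute cutoff is arbitrary.

SELECT procpid, usename, backend_start, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
   OR query_start < now() - interval '30 minutes'
ORDER BY query_start;

Any backend that has been sitting '<IDLE> in transaction' since before the vacuum started is a prime suspect.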
[
{
"msg_contents": "Hi all,\nI'm using Postgresql 8.2.3 on a Windows XP system.\n\nI need to \nwrite and retrieve bytea data from a table.\nThe problem is that, while \ndata insertion is quite fast, bytea extraction is very slow.\nI'm trying \nto store a 250KB image into the bytea field.\nA simple select query on a \n36-row table takes more than one minute to execute.\n\nAny help would be \nvery appreciated\n\nThanks in advance\n\nMassimo\n\n",
"msg_date": "Fri, 23 Feb 2007 11:10:13 +0100 (GMT+01:00)",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow bytea data extraction"
},
{
"msg_contents": "[email protected] wrote:\n> Hi all,\n> I'm using Postgresql 8.2.3 on a Windows XP system.\n> \n> I need to \n> write and retrieve bytea data from a table.\n> The problem is that, while \n> data insertion is quite fast, bytea extraction is very slow.\n> I'm trying \n> to store a 250KB image into the bytea field.\n> A simple select query on a \n> 36-row table takes more than one minute to execute.\n\nWhere is the problem?\n\nIs it in executing the query (what does EXPLAIN ANALYSE show)?\nIs it in fetching/formatting the data (what does the equivalent COUNT(*) \nshow)?\nHow are you accessing the database: odbc,jdbc,other?\nDoes it do this with psql too?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 23 Feb 2007 10:46:14 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow bytea data extraction"
}
] |
[
{
"msg_contents": "Thanks for your reply,\n\n\n>Is it in executing the query (what does \nEXPLAIN ANALYSE show)?\n\nHere is the output of explain analyze SELECT * \nFROM \"FILE\"\n\n\"Seq Scan on \"FILE\" (cost=0.00..1.36 rows=36 width=235) \n(actual time=0.023..0.107 rows=36 loops=1)\"\n\n\n>How are you accessing \nthe database: odbc,jdbc,other?\n>Does it do this with psql too?\n\nThe \nproblem is the same when I access the db with jdbc, pgAdmin and even \npsql\n\n\nMassimo\n\n",
"msg_date": "Fri, 23 Feb 2007 13:19:56 +0100 (GMT+01:00)",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "R: Very slow bytea data extraction"
},
{
"msg_contents": "[email protected] wrote:\n> Thanks for your reply,\n> \n> \n>> Is it in executing the query (what does \n> EXPLAIN ANALYSE show)?\n> \n> Here is the output of explain analyze SELECT * \n> FROM \"FILE\"\n> \n> \"Seq Scan on \"FILE\" (cost=0.00..1.36 rows=36 width=235) \n> (actual time=0.023..0.107 rows=36 loops=1)\"\n\nIf you look at the \"actual time\" it's completing very quickly indeed. So \n- it must be something to do with either:\n1. Fetching/formatting the data\n2. Transferring the data to the client.\n\nWhat happens if you only select half the rows? Does the time to run the \nselect halve?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 23 Feb 2007 14:39:21 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: R: Very slow bytea data extraction"
},
{
"msg_contents": "On 2/23/07, [email protected] <[email protected]> wrote:\n> Thanks for your reply,\n>\n>\n> >Is it in executing the query (what does\n> EXPLAIN ANALYSE show)?\n>\n> Here is the output of explain analyze SELECT *\n> FROM \"FILE\"\n>\n> \"Seq Scan on \"FILE\" (cost=0.00..1.36 rows=36 width=235)\n> (actual time=0.023..0.107 rows=36 loops=1)\"\n>\n>\n> >How are you accessing\n> the database: odbc,jdbc,other?\n> >Does it do this with psql too?\n>\n> The\n> problem is the same when I access the db with jdbc, pgAdmin and even\n> psql\n\n\nare you getting the data from the local box or from a remote site?\nalso explain analyze is showing nothing slow but you did not post the\nenitre output. also, try the \\timing switch in psql.\n\nmerlin\n",
"msg_date": "Fri, 23 Feb 2007 11:35:42 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: R: Very slow bytea data extraction"
}
] |
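One way to act on Merlin's \timing suggestion and separate query time from output and transfer time; the bytea column is assumed to be called filedata here, so substitute the real name.

\timing
SELECT count(*) FROM "FILE";                      -- plan + scan only
SELECT sum(octet_length(filedata)) FROM "FILE";   -- reads every bytea value, returns one number
SELECT * FROM "FILE";                             -- full values, escaped and sent to the client

If the first two come back in milliseconds and only the last one is slow, the minute is being spent escaping the bytea output and pushing roughly 9MB across the connection, not in the query itself.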
[
{
"msg_contents": "I would like to get someone's recommendations on the best initial\nsettings for a dedicated PostgreSQL server. I do realize that there are\na lot of factors that influence how one should configure a database. I\nam just looking for a good starting point. Ideally I would like the\ndatabase to reside as much as possible in memory with no disk access.\nThe current database size of my 7.x version of PostgreSQL generates a 6\nGig file when doing a database dump.\n\n \n\nDedicated PostgreSQL 8.2 Server\n\nRedhat Linux 4.x AS 64 bit version (EM64T)\n\n4 Intel Xeon Processors\n\n20 Gig Memory\n\nCurrent PostgreSQL database is 6 Gig file when doing a database dump\n\n \n\n \n\n/etc/sysctl.conf file settings:\n\n \n\n# 11 Gig\n\nkernel.shmmax = 11811160064\n\n \n\nkernel.sem = 250 32000 100 128\n\nnet.ipv4.ip_local_port_range = 1024 65000\n\nnet.core.rmem_default = 262144 \n\nnet.core.rmem_max = 262144 \n\nnet.core.wmem_default = 262144\n\nnet.core.wmem_max = 262144 \n\n \n\n \n\npostgresql.conf file settings (if not listed then I used the defaults):\n\n \n\nmax_connections = 300\n\nshared_buffers = 10240MB\n\nwork_mem = 10MB\n\neffective_cache_size = 512MB\n\nmaintenance_work_mem = 100MB\n\n \n\n \n\nAny suggestions would be appreciated!\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI would like to get someone’s recommendations on the\nbest initial settings for a dedicated PostgreSQL server. I do realize\nthat there are a lot of factors that influence how one should configure a\ndatabase. I am just looking for a good starting point. Ideally I\nwould like the database to reside as much as possible in memory with no disk\naccess. The current database size of my 7.x version of PostgreSQL\ngenerates a 6 Gig file when doing a database dump.\n \nDedicated PostgreSQL 8.2 Server\nRedhat Linux 4.x AS 64 bit version (EM64T)\n4 Intel Xeon Processors\n20 Gig Memory\nCurrent PostgreSQL database is 6 Gig file when doing a\ndatabase dump\n \n \n/etc/sysctl.conf file settings:\n \n# 11 Gig\nkernel.shmmax = 11811160064\n \nkernel.sem = 250 32000 100 128\nnet.ipv4.ip_local_port_range = 1024 65000\nnet.core.rmem_default = 262144 \n\nnet.core.rmem_max =\n262144 \nnet.core.wmem_default = 262144\nnet.core.wmem_max = 262144 \n \n \npostgresql.conf file settings (if not listed then I used the\ndefaults):\n \nmax_connections = 300\nshared_buffers = 10240MB\nwork_mem = 10MB\neffective_cache_size = 512MB\nmaintenance_work_mem = 100MB\n \n \nAny suggestions would be appreciated!\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu",
"msg_date": "Fri, 23 Feb 2007 10:08:36 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recommended Initial Settings"
},
{
"msg_contents": "Campbell, Lance wrote:\n> I would like to get someone's recommendations on the best initial\n> settings for a dedicated PostgreSQL server. I do realize that there are\n> a lot of factors that influence how one should configure a database. I\n> am just looking for a good starting point. Ideally I would like the\n> database to reside as much as possible in memory with no disk access.\n> The current database size of my 7.x version of PostgreSQL generates a 6\n> Gig file when doing a database dump.\n\nYour operating-system should be doing the caching for you.\n\n> Dedicated PostgreSQL 8.2 Server\n> Redhat Linux 4.x AS 64 bit version (EM64T)\n> 4 Intel Xeon Processors\n\nIf these are older Xeons, check the mailing list archives for \"xeon \ncontext switch\".\n\n> 20 Gig Memory\n> Current PostgreSQL database is 6 Gig file when doing a database dump\n\nOK, so it's plausible the whole thing will fit in RAM (as a \nrule-of-thumb I assume headers, indexes etc. triple or quadruple the \nsize). To know better, check the actual disk-usage of $PGDATA.\n\n> /etc/sysctl.conf file settings:\n> \n> # 11 Gig\n> \n> kernel.shmmax = 11811160064\n\nHmm - that's a lot of shared RAM. See shared_buffers below.\n\n> kernel.sem = 250 32000 100 128\n> \n> net.ipv4.ip_local_port_range = 1024 65000\n> \n> net.core.rmem_default = 262144 \n> \n> net.core.rmem_max = 262144 \n> \n> net.core.wmem_default = 262144\n> \n> net.core.wmem_max = 262144 \n\n> postgresql.conf file settings (if not listed then I used the defaults):\n> \n> max_connections = 300\n\nHow many connections do you expect typically/peak? It doesn't cost much \nto have max_connections set high but your workload is the most important \nthing missing from your question.\n\n> shared_buffers = 10240MB\n\nFor 7.x that's probably way too big, but 8.x organises its buffers \nbetter. I'd still be tempted to start a 1 or 2GB and work up - see where \nit stops buying you an improvement.\n\n> work_mem = 10MB\n\nIf you have large queries, doing big sorts I'd increase this. Don't \nforget it's per-sort, so if you have got about 300 connections live at \nany one time that could be 300*10MB*N if they're all doing something \ncomplicated. If you only have one connection live, you can increase this \nquite substantially.\n\n> effective_cache_size = 512MB\n\nThis isn't setting PG's memory usage, it's telling PG how much data your \noperating-system is caching. Check \"free\" and see what it says. For you, \nI'd expect 10GB+.\n\n> maintenance_work_mem = 100MB\n\nThis is for admin-related tasks, so you could probably increase it.\n\nWorkload workload workload - we need to know what you're doing with it. \nOnce connection summarising the entire database will want larger numbers \nthan 100 connections running many small queries.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 23 Feb 2007 16:29:20 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended Initial Settings"
},
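For a concrete starting point of the kind Richard describes, a hedged postgresql.conf sketch for the 20 Gig dedicated box above; every number here is only a first guess to be revisited after watching the actual workload, not a recommendation.

max_connections = 300
shared_buffers = 2048MB          # start modest and increase only while it measurably helps
work_mem = 16MB                  # per sort/hash operation, so scale it against concurrent queries
maintenance_work_mem = 256MB     # vacuum and index builds
effective_cache_size = 12GB      # roughly what "free" reports as cached on a box like this

With shared_buffers in this range, the 11 Gig kernel.shmmax setting can also come down.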
{
"msg_contents": "Richard,\nThanks for your reply. \n\nYou said:\n\"Your operating-system should be doing the caching for you.\"\n\nMy understanding is that as long as Linux has memory available it will\ncache files. Then from your comment I get the impression that since\nLinux would be caching the data files for the postgres database it would\nbe redundant to have a large shared_buffers. Did I understand you\ncorrectly?\n\nThanks,\n\n\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Friday, February 23, 2007 10:29 AM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] Recommended Initial Settings\n\nCampbell, Lance wrote:\n> I would like to get someone's recommendations on the best initial\n> settings for a dedicated PostgreSQL server. I do realize that there\nare\n> a lot of factors that influence how one should configure a database.\nI\n> am just looking for a good starting point. Ideally I would like the\n> database to reside as much as possible in memory with no disk access.\n> The current database size of my 7.x version of PostgreSQL generates a\n6\n> Gig file when doing a database dump.\n\nYour operating-system should be doing the caching for you.\n\n> Dedicated PostgreSQL 8.2 Server\n> Redhat Linux 4.x AS 64 bit version (EM64T)\n> 4 Intel Xeon Processors\n\nIf these are older Xeons, check the mailing list archives for \"xeon \ncontext switch\".\n\n> 20 Gig Memory\n> Current PostgreSQL database is 6 Gig file when doing a database dump\n\nOK, so it's plausible the whole thing will fit in RAM (as a \nrule-of-thumb I assume headers, indexes etc. triple or quadruple the \nsize). To know better, check the actual disk-usage of $PGDATA.\n\n> /etc/sysctl.conf file settings:\n> \n> # 11 Gig\n> \n> kernel.shmmax = 11811160064\n\nHmm - that's a lot of shared RAM. See shared_buffers below.\n\n> kernel.sem = 250 32000 100 128\n> \n> net.ipv4.ip_local_port_range = 1024 65000\n> \n> net.core.rmem_default = 262144 \n> \n> net.core.rmem_max = 262144 \n> \n> net.core.wmem_default = 262144\n> \n> net.core.wmem_max = 262144 \n\n> postgresql.conf file settings (if not listed then I used the\ndefaults):\n> \n> max_connections = 300\n\nHow many connections do you expect typically/peak? It doesn't cost much \nto have max_connections set high but your workload is the most important\n\nthing missing from your question.\n\n> shared_buffers = 10240MB\n\nFor 7.x that's probably way too big, but 8.x organises its buffers \nbetter. I'd still be tempted to start a 1 or 2GB and work up - see where\n\nit stops buying you an improvement.\n\n> work_mem = 10MB\n\nIf you have large queries, doing big sorts I'd increase this. Don't \nforget it's per-sort, so if you have got about 300 connections live at \nany one time that could be 300*10MB*N if they're all doing something \ncomplicated. If you only have one connection live, you can increase this\n\nquite substantially.\n\n> effective_cache_size = 512MB\n\nThis isn't setting PG's memory usage, it's telling PG how much data your\n\noperating-system is caching. Check \"free\" and see what it says. For you,\n\nI'd expect 10GB+.\n\n> maintenance_work_mem = 100MB\n\nThis is for admin-related tasks, so you could probably increase it.\n\nWorkload workload workload - we need to know what you're doing with it. 
\nOnce connection summarising the entire database will want larger numbers\n\nthan 100 connections running many small queries.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 23 Feb 2007 11:14:44 -0600",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recommended Initial Settings"
},
{
"msg_contents": "Campbell, Lance wrote:\n> Richard,\n> Thanks for your reply. \n> \n> You said:\n> \"Your operating-system should be doing the caching for you.\"\n> \n> My understanding is that as long as Linux has memory available it will\n> cache files. Then from your comment I get the impression that since\n> Linux would be caching the data files for the postgres database it would\n> be redundant to have a large shared_buffers. Did I understand you\n> correctly?\n\nThat's right - PG works with the O.S. This means it *might* not be a big \nadvantage to have a large shared_buffers.\n\nOn older versions of PG, the buffer management code wasn't great with \nlarge shared_buffers values too.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 23 Feb 2007 17:19:00 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended Initial Settings"
},
{
"msg_contents": "If you're doing much updating at all you'll also want to bump up\ncheckpoint_segments. I like setting checkpoint_warning just a bit under\ncheckpoint_timeout as a way to monitor how often you're checkpointing\ndue to running out of segments.\n\nWith a large shared_buffers you'll likely need to make the bgwriter more\naggressive as well (increase the max_pages numbers), though how\nimportant that is depends on how much updating you're doing. If you see\nperiodic spikes in IO corresponding to checkpoints, that's an indication\nbgwriter isn't doing a good enough job.\n\nIf everything ends up in memory, it might be good to decrease\nrandom_page_cost to 1 or something close to it; though the database\nshould just rely on effective_cache to figure out that everything's in\nmemory.\n\nIf you're on pre-8.2, you'll want to cut all the autovacuum parameters\nin half, if you're using it.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 23 Feb 2007 11:43:56 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended Initial Settings"
},
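A sketch of the checkpoint and bgwriter settings Jim mentions, again in postgresql.conf form; the figures are illustrative only and assume a fairly write-heavy box.

checkpoint_segments = 32         # the default of 3 forces very frequent checkpoints under heavy writes
checkpoint_timeout = 5min
checkpoint_warning = 4min        # just under the timeout: any warning logged means segments ran out first
bgwriter_lru_maxpages = 100      # the 8.2 defaults for the bgwriter page limits are very conservative
bgwriter_all_maxpages = 100

If "checkpoints are occurring too frequently" still shows up in the log with these settings, raise checkpoint_segments further.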
{
"msg_contents": "In response to \"Campbell, Lance\" <[email protected]>:\n\n> Richard,\n> Thanks for your reply. \n> \n> You said:\n> \"Your operating-system should be doing the caching for you.\"\n> \n> My understanding is that as long as Linux has memory available it will\n> cache files. Then from your comment I get the impression that since\n> Linux would be caching the data files for the postgres database it would\n> be redundant to have a large shared_buffers. Did I understand you\n> correctly?\n\nKeep in mind that keeping the data in the kernel's buffer requires\nPostgres to make a syscall to read a file, which the kernel then realizes\nis cached in memory. The kernel then has to make that data available\nto the Postgres (userland) process.\n\nIf the data is in Postgres' buffers, Postgres can fetch it directly, thus\navoiding the overhead of the syscalls and the kernel activity. You still\nhave to make sysvshm calls, though.\n\nSo, it depends on which is able to manage the memory better. Is the\nkernel so much more efficient that it makes up for the overhead of the\nsyscalls? My understanding is that in recent versions of Postgres,\nthis is not the case, and large shared_buffers improve performance.\nI've yet to do any in-depth testing on this, though.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Fri, 23 Feb 2007 13:11:37 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended Initial Settings"
}
] |
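If you would rather measure than guess what actually ends up in shared_buffers, the contrib/pg_buffercache module (shipped with 8.1 and 8.2) exposes the buffer cache as a view; once installed in the database, a query along these lines shows which relations occupy the most buffers.

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = c.relfilenode
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

Comparing this against the size of the working set is a more direct way to decide whether a larger shared_buffers is buying anything.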
[
{
"msg_contents": "\n>If you look at the \"actual time\" it's completing very quickly indeed. \nSo \n>- it must be something to do with either:\n>1. Fetching/formatting \nthe data\n>>2. Transferring the data to the client.\n\nI do agree.\n\n>What \nhappens if you only select half the rows? Does the time to run the \n>select halve?\n\nYes, it does.\nUsing pgAdmin, the time to get all 36 \nrows is about 67500ms while it's 24235ms to get only 18 rows.\n\nMassimo\n\n",
"msg_date": "Fri, 23 Feb 2007 18:44:27 +0100 (GMT+01:00)",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow bytea data extraction"
},
{
"msg_contents": "[email protected] wrote:\n>> If you look at the \"actual time\" it's completing very quickly indeed. \n> So \n>> - it must be something to do with either:\n>> 1. Fetching/formatting \n> the data\n>>> 2. Transferring the data to the client.\n> \n> I do agree.\n> \n>> What \n> happens if you only select half the rows? Does the time to run the \n>> select halve?\n> \n> Yes, it does.\n> Using pgAdmin, the time to get all 36 \n> rows is about 67500ms while it's 24235ms to get only 18 rows.\n\nHmm - I've seen reports about the traffic-shaping module not being \ninstall/activated making large data transfers slow. That was on Windows \n2000 though. Might be worth searching the mail archives - I'm afraid I \nrun PG on Linux mostly, so can't say for sure.\n\nOne other thing I'd test. Make a small table with text columns of the \nsame size and see how fast it is to select from that. If it's just as \nslow then it's your network setup. If it's much faster then it's \nsomething to do with the bytea type.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 26 Feb 2007 08:56:33 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow bytea data extraction"
}
] |
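A throwaway version of the text-column test Richard suggests, sized to mimic the 36 rows of roughly 250KB from the original report; repeat('x', ...) compresses extremely well on disk, so this mostly isolates the output and transfer side rather than disk reads.

-- 36 rows of ~250 KB of plain text
CREATE TABLE bytea_speed_test (data text);
INSERT INTO bytea_speed_test
SELECT repeat('x', 250000) FROM generate_series(1, 36);

\timing
SELECT * FROM bytea_speed_test;   -- compare against SELECT * FROM "FILE"
DROP TABLE bytea_speed_test;

If the text version comes back quickly while the bytea table still takes a minute, the cost is in bytea output escaping; if both are slow, look at the network or the client instead.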
[
{
"msg_contents": ">are you getting the data from the local box or from a remote site?\n\nEverything is on the local box.\n\n>also explain analyze is showing \nnothing slow but you did not post the\n>enitre output. also, try the \n\\timing switch in psql.\n\nActually a line was missing: Total runtime: \n0.337 ms.\n\nMassimo\n\n",
"msg_date": "Fri, 23 Feb 2007 18:55:20 +0100 (GMT+01:00)",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow bytea data extraction"
}
] |
[
{
"msg_contents": "The postgresql.conf says that the maximum checkpoint_timeout is 1 hour.\nHowever, the following messages seem to suggest that it may be useful to\nset the value significantly higher to reduce unnecessary WAL volume:\n\nhttp://archives.postgresql.org/pgsql-hackers/2006-10/msg00527.php\nhttp://archives.postgresql.org/pgsql-hackers/2006-08/msg01190.php\n\nIs there a reason for the hour-long limit on checkpoint_timeout? Is\nthere a cost to doing so, aside from potentially longer recovery time?\n\nAs I understand it, the background writer keeps the I/O more balanced\nanyway, avoiding I/O spikes at checkpoint. \n\nI don't need the checkpoint time to be higher than 1 hour, but I'm\ntrying to understand the reasoning behind the limit and the implications\nof a longer checkpoint_timeout.\n\nThe docs here:\n\nhttp://www.postgresql.org/docs/current/static/wal-configuration.html\n\nsay that checkpoints cause extra disk I/O. Is there a good way to\nmeasure how much extra I/O (and WAL volume) is caused by the\ncheckpoints? Also, it would be good to know how much total I/O is caused\nby a checkpoint so that I know if bgwriter is doing it's job.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 23 Feb 2007 10:14:29 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "long checkpoint_timeout"
},
{
"msg_contents": "On Fri, Feb 23, 2007 at 10:14:29AM -0800, Jeff Davis wrote:\n> The postgresql.conf says that the maximum checkpoint_timeout is 1 hour.\n> However, the following messages seem to suggest that it may be useful to\n> set the value significantly higher to reduce unnecessary WAL volume:\n> \n> http://archives.postgresql.org/pgsql-hackers/2006-10/msg00527.php\n> http://archives.postgresql.org/pgsql-hackers/2006-08/msg01190.php\n> \n> Is there a reason for the hour-long limit on checkpoint_timeout? Is\n> there a cost to doing so, aside from potentially longer recovery time?\n> \n> As I understand it, the background writer keeps the I/O more balanced\n> anyway, avoiding I/O spikes at checkpoint. \n> \n> I don't need the checkpoint time to be higher than 1 hour, but I'm\n> trying to understand the reasoning behind the limit and the implications\n> of a longer checkpoint_timeout.\n> \n> The docs here:\n> \n> http://www.postgresql.org/docs/current/static/wal-configuration.html\n> \n> say that checkpoints cause extra disk I/O. Is there a good way to\n> measure how much extra I/O (and WAL volume) is caused by the\n> checkpoints? Also, it would be good to know how much total I/O is caused\n> by a checkpoint so that I know if bgwriter is doing it's job.\n\nThere's a patch someone just came up with that provides additional debug\ninfo about both bgwriter operation and checkpoints. I know it will at\nleast tell you how much was written out by a checkpoint.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 23 Feb 2007 14:02:15 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long checkpoint_timeout"
},
{
"msg_contents": "On Fri, 2007-02-23 at 14:02 -0600, Jim C. Nasby wrote:\n> > say that checkpoints cause extra disk I/O. Is there a good way to\n> > measure how much extra I/O (and WAL volume) is caused by the\n> > checkpoints? Also, it would be good to know how much total I/O is caused\n> > by a checkpoint so that I know if bgwriter is doing it's job.\n> \n> There's a patch someone just came up with that provides additional debug\n> info about both bgwriter operation and checkpoints. I know it will at\n> least tell you how much was written out by a checkpoint.\n\nExcellent, that would answer a lot of my questions. I did some brief\nsearching and nothing turned up. Do you have a link to the discussion or\nthe patch?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 23 Feb 2007 12:23:08 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long checkpoint_timeout"
},
{
"msg_contents": "On Fri, Feb 23, 2007 at 12:23:08PM -0800, Jeff Davis wrote:\n> On Fri, 2007-02-23 at 14:02 -0600, Jim C. Nasby wrote:\n> > > say that checkpoints cause extra disk I/O. Is there a good way to\n> > > measure how much extra I/O (and WAL volume) is caused by the\n> > > checkpoints? Also, it would be good to know how much total I/O is caused\n> > > by a checkpoint so that I know if bgwriter is doing it's job.\n> > \n> > There's a patch someone just came up with that provides additional debug\n> > info about both bgwriter operation and checkpoints. I know it will at\n> > least tell you how much was written out by a checkpoint.\n> \n> Excellent, that would answer a lot of my questions. I did some brief\n> searching and nothing turned up. Do you have a link to the discussion or\n> the patch?\n\nhttp://archives.postgresql.org/pgsql-hackers/2007-02/msg01083.php\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 23 Feb 2007 16:34:57 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long checkpoint_timeout"
}
] |
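For the "how much WAL volume" half of the question, 8.2 added pg_current_xlog_location(), so one crude measurement is to sample the WAL position before and after a test interval and compare; the interval and workload are whatever you want to measure.

SELECT now() AS sampled_at, pg_current_xlog_location();
-- run the workload, or simply wait out one checkpoint_timeout period
SELECT now() AS sampled_at, pg_current_xlog_location();

The two locations are hexadecimal segment/offset pairs; the difference between them is the amount of WAL generated in between, which can then be compared across different checkpoint_timeout settings.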
[
{
"msg_contents": "I recall a reference on the list indicating that newer Xeon processors \ndon't suffer from the context switching problem reported last year.\n\nIn searching the archives, I can't find any specific info indentifying \nwhich Xeon processors don't have this problem.\n\nAnyone point me to a reference?\n\nIs this in any way related to the version of Postgresql one is running? \n We're headed for 8, but have a bit of work before we can get there. \nWe are currently on 7.4.16.\n\nThanks for any info.\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n",
"msg_date": "Fri, 23 Feb 2007 14:05:57 -0500",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": true,
"msg_subject": "which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "> I recall a reference on the list indicating that newer Xeon processors\n> don't suffer from the context switching problem reported last year.\n>\n> In searching the archives, I can't find any specific info indentifying\n> which Xeon processors don't have this problem.\n>\n> Anyone point me to a reference?\n\nWe recently migrated to a woodcrest @ 3 GHz from a 2 Ghz opteron. The\nwoodcrest seems to be enjoying doing db-related work. I don't have\nnumbers other than load is much lower now.\n\n> Is this in any way related to the version of Postgresql one is running?\n> We're headed for 8, but have a bit of work before we can get there.\n> We are currently on 7.4.16.\n\nWe are at 7.4.14 which works fine atm.\n\nregards\nClaus\n",
"msg_date": "Fri, 23 Feb 2007 20:33:41 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "On Fri, Feb 23, 2007 at 02:05:57PM -0500, Geoffrey wrote:\n> In searching the archives, I can't find any specific info indentifying \n> which Xeon processors don't have this problem.\n\nAFAIK the cut-off point is at the Woodcrests. They are overall much better\nsuited to PostgreSQL than the older Xeons were.\n\nIt's slightly unfortunate that AMD and Intel cling to the Opteron and Xeon\nnames even though they're making significant architecture changes, but that's\nlife, I guess.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 23 Feb 2007 20:38:54 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Fri, Feb 23, 2007 at 02:05:57PM -0500, Geoffrey wrote:\n> > In searching the archives, I can't find any specific info indentifying \n> > which Xeon processors don't have this problem.\n> \n> AFAIK the cut-off point is at the Woodcrests. They are overall much better\n> suited to PostgreSQL than the older Xeons were.\n> \n> It's slightly unfortunate that AMD and Intel cling to the Opteron and Xeon\n> names even though they're making significant architecture changes, but that's\n> life, I guess.\n\nAFAIR Intel has been calling their server processors Xeon since Pentium\nPro's, at least.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 23 Feb 2007 16:53:18 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "On Fri, Feb 23, 2007 at 04:53:18PM -0300, Alvaro Herrera wrote:\n>> It's slightly unfortunate that AMD and Intel cling to the Opteron and Xeon\n>> names even though they're making significant architecture changes, but that's\n>> life, I guess.\n> AFAIR Intel has been calling their server processors Xeon since Pentium\n> Pro's, at least.\n\nYes, that was sort of my point. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 23 Feb 2007 21:01:01 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "Geoffrey,\n\n> I recall a reference on the list indicating that newer Xeon processors\n> don't suffer from the context switching problem reported last year.\n\nJust to be clear, it's a software problem which affects all architectures, \nincluding AMD and Sparc. It's just *worse* on the PIII and P4 generation \nXeons.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 23 Feb 2007 13:30:56 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "Josh Berkus wrote:\n> Geoffrey,\n> \n>> I recall a reference on the list indicating that newer Xeon processors\n>> don't suffer from the context switching problem reported last year.\n> \n> Just to be clear, it's a software problem which affects all architectures, \n> including AMD and Sparc. It's just *worse* on the PIII and P4 generation \n> Xeons.\n\nThanks, that's what I need to hear. They've since cut a deal for \nOperton based hardware, so the point is now moot.\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n",
"msg_date": "Fri, 23 Feb 2007 16:33:19 -0500",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: which Xeon processors don't have the context switching\n problem"
},
{
"msg_contents": "Josh Berkus wrote:\n> Geoffrey,\n> \n>> I recall a reference on the list indicating that newer Xeon processors\n>> don't suffer from the context switching problem reported last year.\n> \n> Just to be clear, it's a software problem which affects all architectures, \n> including AMD and Sparc. It's just *worse* on the PIII and P4 generation \n> Xeons.\n> \n\nAlso isn't it pretty much *not* a problem with current versions of\nPostgreSQL?\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Fri, 23 Feb 2007 13:33:29 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching\n problem"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Josh Berkus wrote:\n>> Geoffrey,\n>>\n>>> I recall a reference on the list indicating that newer Xeon processors\n>>> don't suffer from the context switching problem reported last year.\n>> Just to be clear, it's a software problem which affects all architectures, \n>> including AMD and Sparc. It's just *worse* on the PIII and P4 generation \n>> Xeons.\n>>\n> \n> Also isn't it pretty much *not* a problem with current versions of\n> PostgreSQL?\n\nAs I've heard. We're headed for 8 as soon as possible, but until we get \nour code ready, we're on 7.4.16.\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n",
"msg_date": "Fri, 23 Feb 2007 16:49:08 -0500",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: which Xeon processors don't have the context switching\n problem"
},
{
"msg_contents": "On 2/23/07, Joshua D. Drake <[email protected]> wrote:\n> Also isn't it pretty much *not* a problem with current versions of\n> PostgreSQL?\n\nWe had a really *big* scalability problem with a quad Xeon MP 2.2 and\nPostgreSQL 7.4. The problem is mostly gone since we upgraded to 8.1 a\nyear ago.\n\nWoodcrest seems to perform really well with PostgreSQL according to\nwhat I can read on the Internet so we will probably change the server\nfor a dual Woodcrest in a few months.\n\n--\nGuillaume\n",
"msg_date": "Fri, 23 Feb 2007 23:23:10 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "On 2/23/07, Geoffrey <[email protected]> wrote:\n> As I've heard. We're headed for 8 as soon as possible, but until we get\n> our code ready, we're on 7.4.16.\n\nYou should move to at least 8.1 and possibly 8.2. It's not a good idea\nto upgrade only to 8 IMHO.\n\n--\nGuillaume\n",
"msg_date": "Fri, 23 Feb 2007 23:25:04 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching problem"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Steinar H. Gunderson wrote:\n>> On Fri, Feb 23, 2007 at 02:05:57PM -0500, Geoffrey wrote:\n>>> In searching the archives, I can't find any specific info indentifying \n>>> which Xeon processors don't have this problem.\n>> AFAIK the cut-off point is at the Woodcrests. They are overall much better\n>> suited to PostgreSQL than the older Xeons were.\n>>\n>> It's slightly unfortunate that AMD and Intel cling to the Opteron and Xeon\n>> names even though they're making significant architecture changes, but that's\n>> life, I guess.\n> \n> AFAIR Intel has been calling their server processors Xeon since Pentium\n> Pro's, at least.\n> \nAlmost. Xeon was the new name for the \"Pro\" series. Instead of Pentium\nII Pro, we got Pentium II Xeon. The whole Pentium Pro line was a server\nline, which is why initial Pentium-II CPUs were significantly slower for\nserver apps than the much older ppro (which still runs pg at a\nreasonable speed if you have enough of them and a low budget, btw)\n\n//Magnus\n",
"msg_date": "Sat, 24 Feb 2007 01:56:22 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching\n problem"
},
{
"msg_contents": "Guillaume Smet wrote:\n> On 2/23/07, Geoffrey <[email protected]> wrote:\n>> As I've heard. We're headed for 8 as soon as possible, but until we get\n>> our code ready, we're on 7.4.16.\n> \n> You should move to at least 8.1 and possibly 8.2. It's not a good idea\n> to upgrade only to 8 IMHO.\n\nWhen I said 8, I meant whatever the latest greatest 8 is. Right now, \nthat looks like 8.2.3.\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n",
"msg_date": "Fri, 23 Feb 2007 20:28:31 -0500",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: which Xeon processors don't have the context switching\n problem"
},
{
"msg_contents": "Geoffrey wrote:\n> Guillaume Smet wrote:\n>> On 2/23/07, Geoffrey <[email protected]> wrote:\n>>> As I've heard. We're headed for 8 as soon as possible, but until we get\n>>> our code ready, we're on 7.4.16.\n>>\n>> You should move to at least 8.1 and possibly 8.2. It's not a good idea\n>> to upgrade only to 8 IMHO.\n> \n> When I said 8, I meant whatever the latest greatest 8 is. Right now,\n> that looks like 8.2.3.\n\nNo. The latest version of 8.2 is 8.2.3, there is also 8.1 which is at\n8.1.8 and 8.0 which is at 8.0.12.\n\nThey are all different *major* releases.\n\nIMO, nobody should be running anything less than 8.1.8.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Sat, 24 Feb 2007 12:50:08 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which Xeon processors don't have the context switching\n problem"
},
{
"msg_contents": "\nSay that I have a dual-core processor (AMD64), with, say, 2GB of memory\nto run PostgreSQL 8.2.3 on Fedora Core X.\n\nI have the option to put two hard disks (SATA2, most likely); I'm \nwondering\nwhat would be the optimal configuration from the point of view of \nperformance.\n\nI do have the option to configure it in RAID-0, but I'm sort of \nreluctant; I think\nthere's the possibility that having two filesystems that can be accessed \ntruly\nsimultaneously can be more beneficial. The question is: does PostgreSQL\nhave separate, independent areas that require storage such that performance\nwould be noticeably boosted if the multiple storage operations could be \ndone\nsimultaneously?\n\nNotice that even with RAID-0, the \"twice the performance\" may turn into\nan illusion --- if the system requires access from \"distant\" areas of \nthe disk\n(\"distant\" as in many tracks apart), then the back-and-forth travelling of\nthe heads would take precedence over the doubled access speed ... Though\nmaybe it depends on whether accesses are in small chunks (in which case\nthe cache of the hard disks could take precedence).\n\nComing back to the option of two independent disks --- the thing is: if it\nturns out that two independent disks are a better option, how should I\nconfigure the system and the mount points? And how would I configure\nPostgreSQL to take advantage of that?\n\nAdvice, anyone?\n\nThanks,\n\nCarlos\n--\n\n",
"msg_date": "Sat, 24 Feb 2007 22:39:09 -0500",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Two hard drives --- what to do with them?"
},
{
"msg_contents": "Carlos Moreno <[email protected]> writes:\n> The question is: does PostgreSQL have separate, independent areas that\n> require storage such that performance would be noticeably boosted if\n> the multiple storage operations could be done simultaneously?\n\nThe standard advice in this area is to put pg_xlog on a separate\nspindle; although that probably is only important for update-intensive\napplications. You did not tell us anything about your application...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Feb 2007 22:58:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them? "
},
{
"msg_contents": "On Feb 25, 2007, at 04:39 , Carlos Moreno wrote:\n\n> I do have the option to configure it in RAID-0, but I'm sort of \n> reluctant; I think\n> there's the possibility that having two filesystems that can be \n> accessed truly\n> simultaneously can be more beneficial. The question is: does \n> PostgreSQL\n> have separate, independent areas that require storage such that \n> performance\n> would be noticeably boosted if the multiple storage operations \n> could be done\n> simultaneously?\n\nPutting the WAL (aka pg_xlog) on a separate disk will take some load \noff your main database disk. See http://www.varlena.com/GeneralBits/ \nTidbits/perf.html for this.\n\nIt is also possible to put individual tables and/or indexes on \nseparate disks by using tablespaces: \"For example, an index which is \nvery heavily used can be placed on a very fast, highly available \ndisk, such as an expensive solid state device. At the same time a \ntable storing archived data which is rarely used or not performance \ncritical could be stored on a less expensive, slower disk \nsystem.\" (http://www.postgresql.org/docs/8.2/interactive/manage-ag- \ntablespaces.html)\n\nIn both cases, the performance benefits tend to be relative to the \namount of write activity you experience, and the latter solution \nassumes you know where the hotspots are. If you have two tables that \nsee continuous, intense write activity, for example, putting each on \na separate disk\n\nAlexander.\n",
"msg_date": "Sun, 25 Feb 2007 05:08:34 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
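A minimal sketch of the tablespace route Alexander describes, assuming the second drive is mounted at /mnt/disk2 and the directory is owned by the postgres user; the object names and paths are made up.

CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';

-- put a write-heavy table, or just its busiest index, on the second spindle
CREATE TABLE busy_log (id serial PRIMARY KEY, payload text) TABLESPACE disk2;
ALTER INDEX some_existing_index SET TABLESPACE disk2;

Moving pg_xlog itself, as Tom suggests, is normally done at the filesystem level (stop the server and replace the pg_xlog directory with a symlink to the other disk) rather than with a tablespace.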
{
"msg_contents": "A related question:\nIs it sufficient to disable write cache only on the disk where pg_xlog\nis located? Or should write cache be disabled on both disks?\n\nThanks\nPeter\n\nOn 2/25/07, Tom Lane <[email protected]> wrote:\n> Carlos Moreno <[email protected]> writes:\n> > The question is: does PostgreSQL have separate, independent areas that\n> > require storage such that performance would be noticeably boosted if\n> > the multiple storage operations could be done simultaneously?\n>\n> The standard advice in this area is to put pg_xlog on a separate\n> spindle; although that probably is only important for update-intensive\n> applications. You did not tell us anything about your application...\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n",
"msg_date": "Sun, 25 Feb 2007 23:11:01 +0100",
"msg_from": "\"Peter Kovacs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On Sun, 2007-02-25 at 23:11 +0100, Peter Kovacs wrote:\n> A related question:\n> Is it sufficient to disable write cache only on the disk where pg_xlog\n> is located? Or should write cache be disabled on both disks?\n> \n\nWhen PostgreSQL does a checkpoint, it thinks the data pages before the\ncheckpoint have successfully made it to disk. \n\nIf the write cache holds those data pages, and then loses them, there's\nno way for PostgreSQL to recover. So use a battery backed cache or turn\noff the write cache.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 26 Feb 2007 13:51:06 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On 2/26/07, Jeff Davis <[email protected]> wrote:\n> On Sun, 2007-02-25 at 23:11 +0100, Peter Kovacs wrote:\n> > A related question:\n> > Is it sufficient to disable write cache only on the disk where pg_xlog\n> > is located? Or should write cache be disabled on both disks?\n> >\n>\n> When PostgreSQL does a checkpoint, it thinks the data pages before the\n> checkpoint have successfully made it to disk.\n>\n> If the write cache holds those data pages, and then loses them, there's\n> no way for PostgreSQL to recover. So use a battery backed cache or turn\n> off the write cache.\n\nSorry for for not being familar with storage techonologies... Does\n\"battery\" here mean battery in the common sense of the word - some\nkind of independent power supply? Shouldn't the disk itself be backed\nby a battery? As should the entire storage subsystem?\n\nThanks\nPeter\n\n>\n> Regards,\n> Jeff Davis\n>\n>\n",
"msg_date": "Tue, 27 Feb 2007 01:11:24 +0100",
"msg_from": "\"Peter Kovacs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On Tue, 2007-02-27 at 01:11 +0100, Peter Kovacs wrote:\n> On 2/26/07, Jeff Davis <[email protected]> wrote:\n> > On Sun, 2007-02-25 at 23:11 +0100, Peter Kovacs wrote:\n> > > A related question:\n> > > Is it sufficient to disable write cache only on the disk where pg_xlog\n> > > is located? Or should write cache be disabled on both disks?\n> > >\n> >\n> > When PostgreSQL does a checkpoint, it thinks the data pages before the\n> > checkpoint have successfully made it to disk.\n> >\n> > If the write cache holds those data pages, and then loses them, there's\n> > no way for PostgreSQL to recover. So use a battery backed cache or turn\n> > off the write cache.\n> \n> Sorry for for not being familar with storage techonologies... Does\n> \"battery\" here mean battery in the common sense of the word - some\n> kind of independent power supply? Shouldn't the disk itself be backed\n> by a battery? As should the entire storage subsystem?\n> \n\nYes, a battery that can hold power to keep data alive in the write cache\nin case of power failure, etc., for a long enough time to recover and\ncommit the data to disk.\n\nSo, a write cache is OK (even for pg_xlog) if it is durable (i.e. on\npermanent storage or backed by enough power to make sure it gets there).\nHowever, if PostgreSQL has no way to know whether a write is durable or\nnot, it can't guarantee the data is safe.\n\nThe reason this becomes an issue is that many consumer-grade disks have\nwrite cache enabled by default and no way to make sure the cached data\nactually gets written. So, essentially, these disks \"lie\" and say they\nwrote the data, when in reality, it's in volatile memory. It's\nrecommended that you disable write cache on such a device.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 26 Feb 2007 16:25:14 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "Jeff Davis wrote:\n\n>> Sorry for for not being familar with storage techonologies... Does\n>> \"battery\" here mean battery in the common sense of the word - some\n>> kind of independent power supply? Shouldn't the disk itself be backed\n>> by a battery? As should the entire storage subsystem?\n>>\n> \n> Yes, a battery that can hold power to keep data alive in the write cache\n> in case of power failure, etc., for a long enough time to recover and\n> commit the data to disk.\n\nJust to expand a bit - the battery backup options are available on some \nraid cards - that is where you would be looking for it. I don't know of \nany hard drives that have it built in.\n\nOf cause another reason to have a UPS for the server - keep it running \nlong enough after the clients have gone down so that it can ensure \neverything is on disk and shuts down properly.\n\n> So, a write cache is OK (even for pg_xlog) if it is durable (i.e. on\n> permanent storage or backed by enough power to make sure it gets there).\n> However, if PostgreSQL has no way to know whether a write is durable or\n> not, it can't guarantee the data is safe.\n> \n> The reason this becomes an issue is that many consumer-grade disks have\n> write cache enabled by default and no way to make sure the cached data\n> actually gets written. So, essentially, these disks \"lie\" and say they\n> wrote the data, when in reality, it's in volatile memory. It's\n> recommended that you disable write cache on such a device.\n\n From all that I have heard this is another advantage of SCSI disks - \nthey honor these settings as you would expect - many IDE/SATA disks \noften say \"sure I'll disable the cache\" but continue to use it or don't \nretain the setting after restart.\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Tue, 27 Feb 2007 15:35:13 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On 2/27/07, Shane Ambler <[email protected]> wrote:\n> Jeff Davis wrote:\n>\n> >> Sorry for for not being familar with storage techonologies... Does\n> >> \"battery\" here mean battery in the common sense of the word - some\n> >> kind of independent power supply? Shouldn't the disk itself be backed\n> >> by a battery? As should the entire storage subsystem?\n> >>\n> >\n> > Yes, a battery that can hold power to keep data alive in the write cache\n> > in case of power failure, etc., for a long enough time to recover and\n> > commit the data to disk.\n>\n> Just to expand a bit - the battery backup options are available on some\n> raid cards - that is where you would be looking for it. I don't know of\n> any hard drives that have it built in.\n>\n> Of cause another reason to have a UPS for the server - keep it running\n> long enough after the clients have gone down so that it can ensure\n> everything is on disk and shuts down properly.\n>\n> > So, a write cache is OK (even for pg_xlog) if it is durable (i.e. on\n> > permanent storage or backed by enough power to make sure it gets there).\n> > However, if PostgreSQL has no way to know whether a write is durable or\n> > not, it can't guarantee the data is safe.\n> >\n> > The reason this becomes an issue is that many consumer-grade disks have\n> > write cache enabled by default and no way to make sure the cached data\n> > actually gets written. So, essentially, these disks \"lie\" and say they\n> > wrote the data, when in reality, it's in volatile memory. It's\n> > recommended that you disable write cache on such a device.\n>\n> From all that I have heard this is another advantage of SCSI disks -\n> they honor these settings as you would expect - many IDE/SATA disks\n> often say \"sure I'll disable the cache\" but continue to use it or don't\n> retain the setting after restart.\n\nAs far as I know, SCSI drives also have \"write cache\" which is turned\noff by default, but can be turned on (e.g. with the sdparm utility on\nLinux). The reason I am so much interested in how write cache is\ntypically used (on or off) is that I recently ran our benchmarks on a\nmachine with SCSI disks and those benchmarks with high commit ratio\nsuffered significantly compared to our previous results\n\"traditionally\" obtained on machines with IDE drives.\n\nI wonder if running a machine on a UPS + 1 hot standby internal PS is\nequivalent, in terms of data integrity, to using battery backed write\ncache. Instinctively, I'd think that UPS + 1 hot standby internal PS\nis better, since this setup also provides for the disk to actually\nwrite out the content of the cache -- as you pointed out.\n\nThanks\nPeter\n\n>\n>\n> --\n>\n> Shane Ambler\n> [email protected]\n>\n> Get Sheeky @ http://Sheeky.Biz\n>\n",
"msg_date": "Tue, 27 Feb 2007 09:27:52 +0100",
"msg_from": "\"Peter Kovacs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "Just remember that batteries (in both RAID cards and UPSes) wear out \nand will eventually have to be replaced. It depends how critical your \ndata is, but if you only have a UPS, you risk badness in the off \nchance that your power fails and you haven't replaced your UPS battery.\n\nOn Feb 27, 2007, at 12:27 AM, Peter Kovacs wrote:\n\n> I wonder if running a machine on a UPS + 1 hot standby internal PS is\n> equivalent, in terms of data integrity, to using battery backed write\n> cache. Instinctively, I'd think that UPS + 1 hot standby internal PS\n> is better, since this setup also provides for the disk to actually\n> write out the content of the cache -- as you pointed out.\n\n\nJust remember that batteries (in both RAID cards and UPSes) wear out and will eventually have to be replaced. It depends how critical your data is, but if you only have a UPS, you risk badness in the off chance that your power fails and you haven't replaced your UPS battery.On Feb 27, 2007, at 12:27 AM, Peter Kovacs wrote:I wonder if running a machine on a UPS + 1 hot standby internal PS is equivalent, in terms of data integrity, to using battery backed write cache. Instinctively, I'd think that UPS + 1 hot standby internal PS is better, since this setup also provides for the disk to actually write out the content of the cache -- as you pointed out.",
"msg_date": "Tue, 27 Feb 2007 07:57:15 -0800",
"msg_from": "Ben <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "Peter Kovacs wrote:\n\n>> > The reason this becomes an issue is that many consumer-grade disks have\n>> > write cache enabled by default and no way to make sure the cached data\n>> > actually gets written. So, essentially, these disks \"lie\" and say they\n>> > wrote the data, when in reality, it's in volatile memory. It's\n>> > recommended that you disable write cache on such a device.\n>>\n>> From all that I have heard this is another advantage of SCSI disks -\n>> they honor these settings as you would expect - many IDE/SATA disks\n>> often say \"sure I'll disable the cache\" but continue to use it or don't\n>> retain the setting after restart.\n> \n> As far as I know, SCSI drives also have \"write cache\" which is turned\n> off by default, but can be turned on (e.g. with the sdparm utility on\n> Linux). The reason I am so much interested in how write cache is\n> typically used (on or off) is that I recently ran our benchmarks on a\n> machine with SCSI disks and those benchmarks with high commit ratio\n> suffered significantly compared to our previous results\n> \"traditionally\" obtained on machines with IDE drives.\n\nMost likely - with write cache, when the drive gets the data it puts it \ninto cache and then says \"yep all done\" and you continue on as it puts \nit on the disk. But if the power goes out as it's doing that you got \ntrouble.\n\nThe difference between SCSI and IDE/SATA in this case is a lot if not \nall IDE/SATA drives tell you that the cache is disabled when you ask it \nto but they either don't actually disable it or they don't retain the \nsetting so you get caught later. SCSI disks can be trusted when you set \nthis option.\n\n> I wonder if running a machine on a UPS + 1 hot standby internal PS is\n> equivalent, in terms of data integrity, to using battery backed write\n> cache. Instinctively, I'd think that UPS + 1 hot standby internal PS\n> is better, since this setup also provides for the disk to actually\n> write out the content of the cache -- as you pointed out.\n> \n\nThis is covering two different scenarios.\nThe UPS maintains power in the event of a black out.\nThe hot standby internal PS maintains power when the first PS dies.\n\nIt is a good choice to have both as a PS dying will be just as bad as \nlosing power without a UPS and the UPS won't save you if the PS goes.\n\nA battery backed raid card sits in between these - as long as the \ndrive's write cache is off - the raid card will hold data that was sent \nto disk until it confirms it is written to disk. The battery backup will \neven hold that data until the machine is switched back on when it \ncompletes the writing to disk. That would cover you even if the PS goes.\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Wed, 28 Feb 2007 05:21:41 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On Tue, 2007-02-27 at 09:27 +0100, Peter Kovacs wrote:\n> I wonder if running a machine on a UPS + 1 hot standby internal PS is\n> equivalent, in terms of data integrity, to using battery backed write\n> cache. Instinctively, I'd think that UPS + 1 hot standby internal PS\n> is better, since this setup also provides for the disk to actually\n> write out the content of the cache -- as you pointed out.\n\nIt's all about the degree of safety. A battery-backed cache on a RAID\ncontroller sits below all of these points of failure:\n\n* External power\n* Power supply\n* Operating system\n\nand with proper system administration, can recover from any transient\nerrors in the above. Keep in mind that it can only recover from\ntransient failures: if you have a long blackout that outlasts your UPS\nand cache battery, you can still have data loss. Also, you need a very\nresponsive system administrator that can make sure that data gets to\ndisk in case of failure.\n\nLet's say you have a RAID system but you rely on the UPS to make sure\nthe data hits disk. Well, now if you have an OS crash (caused by another\npiece of hardware failing, perhaps), you've lost your data.\n\nIf you can afford it (in terms of dollars or performance hit) go with\nthe safe solution.\n\nAlso, put things in context. The chances of failure due to these kinds\nof things are fairly low. If it's more likely that someone spills coffee\non your server than the UPS fails, it doesn't make sense to spend huge\namounts of money on NVRAM (or something) to store your data. So identify\nthe highest-risk scenarios and prevent those first.\n\nAlso keep in mind what the cost of failure is: a few hundred bucks more\non a better RAID controller is probably a good value if it prevents a\nday of chaos and unhappy customers.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 27 Feb 2007 11:23:49 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On Tue, 2007-02-27 at 13:23, Jeff Davis wrote:\n> Also, put things in context. The chances of failure due to these kinds\n> of things are fairly low. If it's more likely that someone spills coffee\n> on your server than the UPS fails, it doesn't make sense to spend huge\n> amounts of money on NVRAM (or something) to store your data. So identify\n> the highest-risk scenarios and prevent those first.\n> \n> Also keep in mind what the cost of failure is: a few hundred bucks more\n> on a better RAID controller is probably a good value if it prevents a\n> day of chaos and unhappy customers.\n\nJust FYI, I can testify to the happiness a good battery backed caching\nRAID controller can bring. I had the only server that survived a\ncomplete power grid failure in the data center where I used to work. A\npiece of wire blew out a power conditioner, which killed the other power\nconditioner, all three UPSes and the switch to bring the diesel\ngenerator online.\n\nthe only problem the pgsql server had coming back up was that it had\nremote nfs mounts it used for file storage that weren't able to boot up\nfast enough so we just waited a few minutes and rebooted it.\n\nAll of our other database servers had to be restored from backup due to\nmassive data corruption because someone had decided that NFS mounts were\na good idea under databases.\n",
"msg_date": "Tue, 27 Feb 2007 13:38:56 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On Sun, Feb 25, 2007 at 23:11:01 +0100,\n Peter Kovacs <[email protected]> wrote:\n> A related question:\n> Is it sufficient to disable write cache only on the disk where pg_xlog\n> is located? Or should write cache be disabled on both disks?\n\nWith recent linux kernels you may also have the option to use write\nbarriers instead of disabling caching. You need to make sure all of\nyour stacked block devices will handle it and most versions of software\nraid (other than 1) won't. This won't be a lot faster, since at sync\npoints the OS needs to order a cache flush, but it does give the disks a chance\nto reorder some commands in between flushes.\n",
"msg_date": "Wed, 28 Feb 2007 21:28:46 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On Tue, Feb 27, 2007 at 15:35:13 +1030,\n Shane Ambler <[email protected]> wrote:\n> \n> From all that I have heard this is another advantage of SCSI disks - \n> they honor these settings as you would expect - many IDE/SATA disks \n> often say \"sure I'll disable the cache\" but continue to use it or don't \n> retain the setting after restart.\n\nIt is easy enough to tests if your disk lie about disabling the cache.\nI doubt that it is all that common for modern disks to do that.\n",
"msg_date": "Wed, 28 Feb 2007 21:31:12 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "On Wed, Feb 28, 2007 at 05:21:41 +1030,\n Shane Ambler <[email protected]> wrote:\n> \n> The difference between SCSI and IDE/SATA in this case is a lot if not \n> all IDE/SATA drives tell you that the cache is disabled when you ask it \n> to but they either don't actually disable it or they don't retain the \n> setting so you get caught later. SCSI disks can be trusted when you set \n> this option.\n\nI have some Western Digital Caviars and they don't lie about disabling\nwrite caching.\n",
"msg_date": "Wed, 28 Feb 2007 21:35:21 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two hard drives --- what to do with them?"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Geoffrey wrote:\n>> Guillaume Smet wrote:\n>>> On 2/23/07, Geoffrey <[email protected]> wrote:\n>>>> As I've heard. We're headed for 8 as soon as possible, but until we get\n>>>> our code ready, we're on 7.4.16.\n>>> You should move to at least 8.1 and possibly 8.2. It's not a good idea\n>>> to upgrade only to 8 IMHO.\n>> When I said 8, I meant whatever the latest greatest 8 is. Right now,\n>> that looks like 8.2.3.\n> \n> No. The latest version of 8.2 is 8.2.3, there is also 8.1 which is at\n> 8.1.8 and 8.0 which is at 8.0.12.\n> \n> They are all different *major* releases.\n\nYes I am aware of the various releases. My bad in that my reference to \n'8' was lazy and did not indicate the full release. Our intention is to \nmove to the latest 8.2.* when we are able.\n\n> IMO, nobody should be running anything less than 8.1.8.\n\nSame old thing, time and money. Too busy bailing the boat to patch it \nright now...\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n",
"msg_date": "Mon, 05 Mar 2007 08:23:41 -0500",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: which Xeon processors don't have the context switching\n problem"
}
] |
[
{
"msg_contents": "\nHello List,\n\nI'm using postgresql 7.4 on FC4.\n\nWhen i run following query on postgresql prompt, it gives error.\n\nindse96=# CREATE TEMP TABLE aggregate AS ( SELECT vc_vouchno\n,vc_srno,vc_type ,vc_vouchdt,trim(''),vc_descrip, ( CASE WHEN\n(vc_dr_cr='D') THEN vc_amount ELSE 0 END ) AS debit , (\nCASE WHEN (vc_dr_cr='C') THEN vc_amount ELSE 0 END) AS credit \nFROM voucher WHERE vc_vouchdt >= '2005-04-01' AND\nvc_vouchdt <= '2006-03-31' AND trim(vc_code)='100006016' ORDER BY\nvc_vouchdt,vc_vouchno );\nERROR: invalid page header in block 428 of relation \"pg_attribute\"\n\nI'm not able to understand why this error is come.\nplz. help me.\n\nThanks\nAshok\n\n--------------------------------------------------------------------\nmail2web.com Enhanced email for the mobile individual based on Microsoft®\nExchange - http://link.mail2web.com/Personal/EnhancedEmail\n\n\n",
"msg_date": "Mon, 26 Feb 2007 00:44:06 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "invalid page header in block 428 of relation \"pg_attribute\""
}
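The question above gets no reply in this excerpt, so purely as a hedged sketch of one commonly discussed last-resort approach to a corrupted catalog page (it permanently discards whatever was on the damaged page, so checking the hardware and restoring from a recent backup should come first, and the zero_damaged_pages setting is assumed to be available in this 7.4 installation):

    -- connect as a superuser, in a session dedicated to the repair
    SET zero_damaged_pages = on;      -- WARNING: pages that fail the header check are zeroed out
    VACUUM pg_catalog.pg_attribute;   -- forces every page of the catalog to be read
    SET zero_damaged_pages = off;
    REINDEX TABLE pg_attribute;       -- rebuild its indexes; may need a standalone backend on old releases

Rows that lived on the zeroed page are lost, which for pg_attribute can leave tables with missing column definitions, so a dump and restore of the affected database is usually advisable afterwards.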
] |
[
{
"msg_contents": "Hi List,\n\nI have a Query. So when i do explain analyse on it , it shows me many Hash\nJoins.\nSo is it possible to indicate the Query Planner not to consider Hash Join.\n\n-- \nRegards\nGauri\n\nHi List,\n \nI have a Query. So when i do explain analyse on it , it shows me many Hash Joins.\nSo is it possible to indicate the Query Planner not to consider Hash Join.\n-- RegardsGauri",
"msg_date": "Mon, 26 Feb 2007 17:19:05 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Planner"
},
{
"msg_contents": "On Mon, Feb 26, 2007 at 05:19:05PM +0530, Gauri Kanekar wrote:\n> I have a Query. So when i do explain analyse on it , it shows me many Hash\n> Joins.\n> So is it possible to indicate the Query Planner not to consider Hash Join.\n\nset enable_hashjoin = false;\n\nThis is very often the wrong solution, though. If hash join is indeed slower\nthan merge join, the planner should have guessed so, and it would be better\nto find out why the planner guessed wrong, and correct it somehow.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 26 Feb 2007 12:52:27 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Planner"
}
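A sketch of how that comparison is usually made before changing anything globally; the schema and query are made up, and the point is to compare the two EXPLAIN ANALYZE timings and then fix the underlying estimates rather than leave the setting off:

    -- hypothetical schema, only so the example stands alone
    CREATE TABLE customers (id integer PRIMARY KEY, name text);
    CREATE TABLE orders (id integer PRIMARY KEY, customer_id integer REFERENCES customers(id), total numeric);
    ANALYZE customers;
    ANALYZE orders;

    EXPLAIN ANALYZE
        SELECT c.name, o.total FROM orders o JOIN customers c ON c.id = o.customer_id;

    SET enable_hashjoin = off;   -- affects only the current session
    EXPLAIN ANALYZE
        SELECT c.name, o.total FROM orders o JOIN customers c ON c.id = o.customer_id;
    RESET enable_hashjoin;

If the second plan really is faster, the usual next step is to compare estimated and actual row counts in the EXPLAIN ANALYZE output and improve them (ANALYZE, a higher statistics target, more work_mem) instead of disabling hash joins permanently.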
] |
[
{
"msg_contents": "Hi List,\n\nMachine was down due to some hardware problem.\n\nAfter then when i issue this command /usr/local/pgsql/bin/psql -l\nits giving me the following error\n\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n\nCan anybody tell me what going wrong??\n\n-- \nRegards\nGauri\n\nHi List,\n \nMachine was down due to some hardware problem.\n \nAfter then when i issue this command /usr/local/pgsql/bin/psql -l\nits giving me the following error \n \npsql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"? \nCan anybody tell me what going wrong??-- RegardsGauri",
"msg_date": "Mon, 26 Feb 2007 18:52:19 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server Startup Error"
},
{
"msg_contents": "Gauri Kanekar wrote:\n> Hi List,\n> \n> Machine was down due to some hardware problem.\n> \n> After then when i issue this command /usr/local/pgsql/bin/psql -l\n> its giving me the following error\n> \n> psql: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n> \n> Can anybody tell me what going wrong??\n> \n> -- \n> Regards\n> Gauri\n\nPostgres is not running, start it and try again\n",
"msg_date": "Mon, 26 Feb 2007 10:27:05 -0300",
"msg_from": "Rodrigo Gonzalez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server Startup Error"
},
{
"msg_contents": "Thanks,\nBut how to start postgres server\n\n\nOn 2/26/07, Rodrigo Gonzalez <[email protected]> wrote:\n>\n> Gauri Kanekar wrote:\n> > Hi List,\n> >\n> > Machine was down due to some hardware problem.\n> >\n> > After then when i issue this command /usr/local/pgsql/bin/psql -l\n> > its giving me the following error\n> >\n> > psql: could not connect to server: No such file or directory\n> > Is the server running locally and accepting\n> > connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n> >\n> > Can anybody tell me what going wrong??\n> >\n> > --\n> > Regards\n> > Gauri\n>\n> Postgres is not running, start it and try again\n>\n\n\n\n-- \nRegards\nGauri\n\nThanks,\nBut how to start postgres server \nOn 2/26/07, Rodrigo Gonzalez <[email protected]> wrote:\nGauri Kanekar wrote:> Hi List,>> Machine was down due to some hardware problem.>\n> After then when i issue this command /usr/local/pgsql/bin/psql -l> its giving me the following error>> psql: could not connect to server: No such file or directory> Is the server running locally and accepting\n> connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?>> Can anybody tell me what going wrong??>> --> Regards> GauriPostgres is not running, start it and try again\n-- RegardsGauri",
"msg_date": "Mon, 26 Feb 2007 18:58:44 +0530",
"msg_from": "\"Gauri Kanekar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server Startup Error"
},
{
"msg_contents": "Gauri Kanekar wrote:\n> Hi List,\n> \n> Machine was down due to some hardware problem.\n> \n> After then when i issue this command /usr/local/pgsql/bin/psql -l\n> its giving me the following error\n> \n> psql: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n> \n> Can anybody tell me what going wrong??\n\nWell, it's either looking in the wrong place or the server isn't \nactually running.\n\n1. Do your startup scripts start PG?\n2. Is there a server process? \"ps auxw | grep postgres\"\n3. What do your logfiles say?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 26 Feb 2007 13:29:06 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server Startup Error"
},
{
"msg_contents": "Gauri Kanekar wrote:\n> Thanks,\n> But how to start postgres server\n> \n> \n> On 2/26/07, *Rodrigo Gonzalez* <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Gauri Kanekar wrote:\n> > Hi List,\n> >\n> > Machine was down due to some hardware problem.\n> >\n> > After then when i issue this command /usr/local/pgsql/bin/psql -l\n> > its giving me the following error\n> >\n> > psql: could not connect to server: No such file or directory\n> > Is the server running locally and accepting\n> > connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n> >\n> > Can anybody tell me what going wrong??\n> >\n> > --\n> > Regards\n> > Gauri\n> \n> Postgres is not running, start it and try again\n> \n> \n> \n> \n> -- \n> Regards\n> Gauri\n\nwhich OS?\n\ncompiled from source?\n\ndid you install from package?\n",
"msg_date": "Mon, 26 Feb 2007 10:35:34 -0300",
"msg_from": "Rodrigo Gonzalez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server Startup Error"
},
{
"msg_contents": "\nNote - try to cc: the mailing list, I don't always read this inbox\n\nGauri Kanekar wrote:\n> On 2/26/07, Richard Huxton <[email protected]> wrote:\n>>\n>> Gauri Kanekar wrote:\n>> > Hi List,\n>> >\n>> > Machine was down due to some hardware problem.\n>> >\n>> > After then when i issue this command /usr/local/pgsql/bin/psql -l\n>> > its giving me the following error\n>> >\n>> > psql: could not connect to server: No such file or directory\n>> > Is the server running locally and accepting\n>> > connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n>> >\n>> > Can anybody tell me what going wrong??\n>>\n>> Well, it's either looking in the wrong place or the server isn't\n>> actually running.\n>>\n>> 1. Do your startup scripts start PG?\n> \n> Yes\n\nOK - so we know it should have started, which means the logs should say \nsomething about our problem.\n\n> 2. Is there a server process? \"ps auxw | grep postgres\"\n> \n> This is the result given by the command\n> root 8907 0.0 0.1 37496 2640 ? Ss 03:47 0:00 sshd:\n> postgres [priv]\n> postgres 8910 0.0 0.0 37636 1684 ? S 03:47 0:00 sshd:\n> postgres@pts/1\n> postgres 8911 0.0 0.1 10152 2564 pts/1 Ss+ 03:47 0:00 -bash\n> root 9470 0.0 0.1 37500 2644 ? Ss 04:28 0:00 sshd:\n> postgres [priv]\n> postgres 9473 0.0 0.0 37640 1688 ? S 04:28 0:00 sshd:\n> postgres@pts/2\n> postgres 9474 0.0 0.1 10104 2412 pts/2 Ss 04:28 0:00 -bash\n> postgres 9724 0.0 0.0 3496 892 pts/2 R+ 04:44 0:00 ps auxw\n> postgres 9725 0.0 0.0 3868 784 pts/2 R+ 04:44 0:00 grep\n> postgres\n\nHmm - nothing there but \"ssh\" connections. So, it's not started, which \nis why psql is complaining.\n\n> 3. What do your logfiles say?\n> \n> \n> HINT: In a moment you should be able to reconnect to the database and\n> repeat your command.\n> LOG: database system was interrupted at 2007-02-23 20:14:24 IST\n> LOG: could not open file \"pg_xlog/00000001000000390000001A\" (log file 57,\n> segment 26): No such file or directory\n> LOG: invalid primary checkpoint record\n> LOG: could not open file \"pg_xlog/000000010000003900000017\" (log file 57,\n> segment 23): No such file or directory\n> LOG: invalid secondary checkpoint record\n> PANIC: could not locate a valid checkpoint record\n> LOG: startup process (PID 9057) was terminated by signal 6\n> LOG: aborting startup due to startup process failure\n> FATAL: pre-existing shared memory block (key 5432001, ID 1900546) is still\n> in use\n> HINT: If you're sure there are no old server processes still running,\n> remove the shared memory block with the command \"ipcclean\", \"ipcrm\", or \n> just\n> delete the file \"postmaster.pid\".\n\nOK - this last bit is the first thing to deal with. Find your \npostmaster.pid file and delete it. Your postmaster.pid file should be in \nyour data directory - try \"locate postmaster.pid\" or \"find /usr/local/ \n-name postmaster.pid\".\n\nThen restart postgresql (as root \"/etc/init.d/postgresql start\" or \nsimilar) and check the logs again.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 26 Feb 2007 14:03:19 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server Startup Error"
}
] |
[
{
"msg_contents": "Hi,\n\n I am in the process of cleaning up one of our big table, this table\nhas 187 million records and we need to delete around 100 million of them. \n\n I am deleting around 4-5 million of them daily in order to catchup\nwith vacuum and also with the archive logs space. So far I have deleted\naround 15million in past few days.\n\n max_fsm_pages value is set to 1200000. Vacuumdb runs once daily,\nhere is the output from last night's vacuum job\n\n \n=======================================================================================\n INFO: free space map: 999 relations, 798572 pages stored; 755424\ntotal pages needed\n DETAIL: Allocated FSM size: 1000 relations + 1200000 pages = 7096\nkB shared memory.\n VACUUM\n \n========================================================================================\n\n From the output it says 755424 total pages needed , this number\nkeeps growing daily even after vacuums are done daily. This was around\n350K pages before the delete process started.\n \n I am afraid that this number will reach the max_fsm_pages limit\nsoon and vacuums thereafter will never catch up .\n\n Can anyone please explain this behavior ? What should I do to catch\nup with vacuumdb daily ?\n\n Postgres Version : 8.0.2.\n Backup Mode: PITR.\n \n\nThanks!\nPallav\n",
"msg_date": "Mon, 26 Feb 2007 09:44:02 -0500",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuumdb - Max_FSM_Pages Problem."
},
{
"msg_contents": "On 26/02/07, Pallav Kalva <[email protected]> wrote:\n> Hi,\n>\n> I am in the process of cleaning up one of our big table, this table\n> has 187 million records and we need to delete around 100 million of them.\n>\n> I am deleting around 4-5 million of them daily in order to catchup\n> with vacuum and also with the archive logs space. So far I have deleted\n> around 15million in past few days.\n>\n> max_fsm_pages value is set to 1200000. Vacuumdb runs once daily,\n> here is the output from last night's vacuum job\n>\n>\n> =======================================================================================\n> INFO: free space map: 999 relations, 798572 pages stored; 755424\n> total pages needed\n> DETAIL: Allocated FSM size: 1000 relations + 1200000 pages = 7096\n> kB shared memory.\n> VACUUM\n>\n> ========================================================================================\n>\n> From the output it says 755424 total pages needed , this number\n> keeps growing daily even after vacuums are done daily. This was around\n> 350K pages before the delete process started.\n>\n> I am afraid that this number will reach the max_fsm_pages limit\n> soon and vacuums thereafter will never catch up .\n>\n> Can anyone please explain this behavior ? What should I do to catch\n> up with vacuumdb daily ?\n>\n\nVacuum adds to free pages to the fsm so that they can be reused. If\nyou don't fill up those free pages the fsm will fill up. Once the fsm\nis full no more pages can be added to the fsm. If you start writing to\nthe free pages via inserts when vacuum next runs more free pages will\nbe added that did not fit previously in the free space map due to it\nbeing full.\n\nIf you are really deleting that many records you may be better coping\nthose you want to a new table and dropping the old one. To actually\nrecover space you need to either run vacuum full or cluster.\n\nThis ought to be in the manual somewhere as this question gets asked\nabout once a week.\n\nPeter.\n",
"msg_date": "Mon, 26 Feb 2007 15:53:17 +0000",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuumdb - Max_FSM_Pages Problem."
},
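To make the numbers being discussed concrete, a small sketch of how this is usually watched (the figures below are the ones quoted in the thread): the closing lines of a database-wide VACUUM VERBOSE report pages stored versus pages needed, and max_fsm_pages, a postgresql.conf setting that only changes with a server restart, has to stay comfortably above the "needed" figure:

    VACUUM VERBOSE;
    -- INFO:  free space map: 999 relations, 798572 pages stored; 755424 total pages needed
    -- DETAIL:  Allocated FSM size: 1000 relations + 1200000 pages = 7096 kB shared memory.

    SHOW max_fsm_pages;   -- the currently allocated limit
    -- raising it means editing max_fsm_pages in postgresql.conf (say, to 2000000) and restarting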
{
"msg_contents": "At 10:53 AM 2/26/2007, Peter Childs wrote:\n>On 26/02/07, Pallav Kalva <[email protected]> wrote:\n>>Hi,\n>>\n>> I am in the process of cleaning up one of our big table, this table\n>>has 187 million records and we need to delete around 100 million of them.\n>>\n>> I am deleting around 4-5 million of them daily in order to catchup\n>>with vacuum and also with the archive logs space. So far I have deleted\n>>around 15million in past few days.\n>>\n>> max_fsm_pages value is set to 1200000. Vacuumdb runs once daily,\n>>here is the output from last night's vacuum job\n>>\n>>\n>>=======================================================================================\n>> INFO: free space map: 999 relations, 798572 pages stored; 755424\n>>total pages needed\n>> DETAIL: Allocated FSM size: 1000 relations + 1200000 pages = 7096\n>>kB shared memory.\n>> VACUUM\n>>\n>>========================================================================================\n>>\n>> From the output it says 755424 total pages needed , this number\n>>keeps growing daily even after vacuums are done daily. This was around\n>>350K pages before the delete process started.\n>>\n>> I am afraid that this number will reach the max_fsm_pages limit\n>>soon and vacuums thereafter will never catch up .\n>>\n>> Can anyone please explain this behavior ? What should I do to catch\n>>up with vacuumdb daily ?\n>\n>Vacuum adds to free pages to the fsm so that they can be reused. If\n>you don't fill up those free pages the fsm will fill up. Once the fsm\n>is full no more pages can be added to the fsm. If you start writing to\n>the free pages via inserts when vacuum next runs more free pages will\n>be added that did not fit previously in the free space map due to it\n>being full.\n>\n>If you are really deleting that many records you may be better coping\n>those you want to a new table and dropping the old one. To actually\n>recover space you need to either run vacuum full or cluster.\n>\n>This ought to be in the manual somewhere as this question gets asked\n>about once a week.\n>\n>Peter.\nIn fact ,\na= copying data to a new table and dropping the original table\nrather than\nb= updating the original table\nis a \"standard best DBA practice\" regardless of DB product.\n\nThe only thing that changes from DB product to DB product is the \nexact point where the copy is large enough to make \"copy, replace\" \nbetter than \"update in place\".\n\nRule of Thumb: No matter what DB product you are using, if it's more \nthan 1/2 of any table or more than 1/4 of any table that does not fit \ninto memory, it's usually better to copy replace rather then update in place.\n\n...and I completely agree that we should document this sort of \nIndustry Best Practice in a way that is easily usable by the pg community.\n\nCheers,\nRon \n\n",
"msg_date": "Mon, 26 Feb 2007 15:33:14 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuumdb - Max_FSM_Pages Problem."
}
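A hedged sketch of the copy-and-replace pattern Peter and Ron recommend, with hypothetical table, column and predicate names; on a live system the swap has to be coordinated with writers (or done in a maintenance window), and indexes, constraints, triggers and grants have to be recreated on the new table:

    BEGIN;
    -- keep only the rows that should survive (the predicate is made up)
    CREATE TABLE big_table_keep AS
        SELECT * FROM big_table WHERE created >= '2006-01-01';
    ALTER TABLE big_table RENAME TO big_table_old;
    ALTER TABLE big_table_keep RENAME TO big_table;
    COMMIT;

    -- recreate indexes, constraints and grants on the new big_table here, then:
    DROP TABLE big_table_old;
    ANALYZE big_table;

Dropping the old table returns its space to the operating system immediately, which a plain VACUUM generally cannot do; the trade-off is the exclusive locking during the swap and the extra disk needed while both copies exist.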
] |
[
{
"msg_contents": "hi!\n\nI've been having some serious performance issues with\npostgresql8.2/hibernate/jdbc due to postgres reusing bad cached query\nplans. It doesn't look at the parameter values and therefore does not\nuse any partial indexes.\n\nAfter trying to set prepareThreshold=0 in the connection string which\ndidnt work, even modifying the jdbc driver and forcing it to 0 and not\nworking I realized that it must be being ignored. After giving up\npretty much I tried a much older driver which doesn't use server\nprepared statements at all the problem has gone away and it is once\nagain using the partial indexes. How can I get this to work properly\non the new jdbc driver? I don't really like having to use a 2 year old\ndriver to get good performance as you can imagine :)\n\nCould someone point me to a jdbc src file where I could just disable\nserver-side prepared statements entirely?\n\n-- \nthanks, G\n",
"msg_date": "Mon, 26 Feb 2007 11:12:53 -0500",
"msg_from": "Gene <[email protected]>",
"msg_from_op": true,
"msg_subject": "does prepareThreshold work? forced to use old driver"
},
{
"msg_contents": "\n\nOn Mon, 26 Feb 2007, Gene wrote:\n\n> I've been having some serious performance issues with\n> postgresql8.2/hibernate/jdbc due to postgres reusing bad cached query\n> plans. It doesn't look at the parameter values and therefore does not\n> use any partial indexes.\n>\n> After trying to set prepareThreshold=0 in the connection string which\n> didnt work, even modifying the jdbc driver and forcing it to 0 and not\n> working I realized that it must be being ignored. After giving up\n> pretty much I tried a much older driver which doesn't use server\n> prepared statements at all the problem has gone away and it is once\n> again using the partial indexes. How can I get this to work properly\n> on the new jdbc driver? I don't really like having to use a 2 year old\n> driver to get good performance as you can imagine :)\n\nSomething must be going wrong in the setting to zero or your code may be \nsetting it to non-zero at some later point. I believe prepareThreshold=0 \nshould work. Do you have a test case showing it doesn't?\n\nKris Jurka\n\n",
"msg_date": "Mon, 26 Feb 2007 11:45:24 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does prepareThreshold work? forced to use old driver"
},
{
"msg_contents": "\nOn 26-Feb-07, at 11:12 AM, Gene wrote:\n\n> hi!\n>\n> I've been having some serious performance issues with\n> postgresql8.2/hibernate/jdbc due to postgres reusing bad cached query\n> plans. It doesn't look at the parameter values and therefore does not\n> use any partial indexes.\n>\n> After trying to set prepareThreshold=0 in the connection string which\n> didnt work, even modifying the jdbc driver and forcing it to 0 and not\n> working I realized that it must be being ignored. After giving up\n> pretty much I tried a much older driver which doesn't use server\n> prepared statements at all the problem has gone away and it is once\n> again using the partial indexes. How can I get this to work properly\n> on the new jdbc driver? I don't really like having to use a 2 year old\n> driver to get good performance as you can imagine :)\n>\n> Could someone point me to a jdbc src file where I could just disable\n> server-side prepared statements entirely?\n>\nyou can just add protocolVersion=2 to the url and it will not use \nprepared statements.\n\nsetting prepareThreshold=0 just tells it not to use named statements. \nIt will still use statements but won't cache them.\n\nAre you sure the problem is with cached statements ? There are issues \nwhere prepared statements won't use the index if you don't use the \ncorrect type.\n\nDave\n> -- \n> thanks, G\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n",
"msg_date": "Mon, 26 Feb 2007 12:28:37 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does prepareThreshold work? forced to use old driver"
},
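For what it's worth, the planner behaviour behind this can be reproduced in plain SQL, independently of the JDBC driver; the schema and names below are made up. A named server-side prepared statement is planned once for any parameter value, so it cannot use a partial index whose predicate depends on that parameter, while the same query with the value written in literally can:

    CREATE TABLE orders (id serial PRIMARY KEY, archived boolean NOT NULL, created timestamp NOT NULL);
    CREATE INDEX live_orders_idx ON orders (created) WHERE archived = false;   -- partial index
    INSERT INTO orders (archived, created)
        SELECT n % 100 <> 0, now() - n * interval '1 minute'
        FROM generate_series(1, 100000) AS n;
    ANALYZE orders;

    -- literal predicate: the planner can prove archived = false and consider the partial index
    EXPLAIN SELECT * FROM orders WHERE archived = false AND created > now() - interval '1 hour';

    -- named prepared statement: one generic plan for any $1, so the partial index is not usable
    PREPARE q(boolean, timestamp) AS SELECT * FROM orders WHERE archived = $1 AND created > $2;
    EXPLAIN EXECUTE q(false, now() - interval '1 hour');
    DEALLOCATE q;

That is consistent with the partial indexes coming back once the driver stops doing server-side prepares (protocolVersion=2).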
{
"msg_contents": "Thank you! setting the protocolVersion=2 works with the newer driver.\nI'm still puzzled as to why the prepareThreshold=0 doesn't force the\nreplan though.\n\nOn 2/26/07, Dave Cramer <[email protected]> wrote:\n>\n> On 26-Feb-07, at 11:12 AM, Gene wrote:\n>\n> > hi!\n> >\n> > I've been having some serious performance issues with\n> > postgresql8.2/hibernate/jdbc due to postgres reusing bad cached query\n> > plans. It doesn't look at the parameter values and therefore does not\n> > use any partial indexes.\n> >\n> > After trying to set prepareThreshold=0 in the connection string which\n> > didnt work, even modifying the jdbc driver and forcing it to 0 and not\n> > working I realized that it must be being ignored. After giving up\n> > pretty much I tried a much older driver which doesn't use server\n> > prepared statements at all the problem has gone away and it is once\n> > again using the partial indexes. How can I get this to work properly\n> > on the new jdbc driver? I don't really like having to use a 2 year old\n> > driver to get good performance as you can imagine :)\n> >\n> > Could someone point me to a jdbc src file where I could just disable\n> > server-side prepared statements entirely?\n> >\n> you can just add protocolVersion=2 to the url and it will not use\n> prepared statements.\n>\n> setting prepareThreshold=0 just tells it not to use named statements.\n> It will still use statements but won't cache them.\n>\n> Are you sure the problem is with cached statements ? There are issues\n> where prepared statements won't use the index if you don't use the\n> correct type.\n>\n> Dave\n> > --\n> > thanks, G\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n> >\n>\n>\n\n\n-- \nGene Hart\ncell: 443-604-2679\n",
"msg_date": "Mon, 26 Feb 2007 12:41:24 -0500",
"msg_from": "Gene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: does prepareThreshold work? forced to use old driver"
}
] |
[
{
"msg_contents": "Hi all,\n\n I am asking in this list because, at the end of the day, this is a \nperformance question.\n\n I am looking at writing a search engine of sorts for my database. I \nhave only ever written very simple search engines before which amounted \nto not much more that the query string being used with ILIKE on a pile \nof columns. This was pretty rudimentary and didn't offer anything like \nrelevance sorting and such (I'd sort by result name, age or whatnot).\n\n So I am hoping some of you guys and gals might be able to point me \ntowards some resources or offer some tips or gotcha's before I get \nstarted on this. I'd really like to come up with a more intelligent \nsearch engine that doesn't take two minutes to return results. :) I \nknow, in the end good indexes and underlying hardware will be important, \nbut a sane as possible query structure helps to start with.\n\n Thanks all!!\n\nMadison\n",
"msg_date": "Mon, 26 Feb 2007 11:29:14 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Madison Kelly wrote:\n> Hi all,\n> \n> I am asking in this list because, at the end of the day, this is a\n> performance question.\n> \n> I am looking at writing a search engine of sorts for my database. I\n> have only ever written very simple search engines before which amounted\n> to not much more that the query string being used with ILIKE on a pile\n> of columns. This was pretty rudimentary and didn't offer anything like\n> relevance sorting and such (I'd sort by result name, age or whatnot).\n> \n> So I am hoping some of you guys and gals might be able to point me\n> towards some resources or offer some tips or gotcha's before I get\n> started on this. I'd really like to come up with a more intelligent\n> search engine that doesn't take two minutes to return results. :) I\n> know, in the end good indexes and underlying hardware will be important,\n> but a sane as possible query structure helps to start with.\n\nSee search.postgresql.org, you can download all source from\ngborg.postgresql.org.\n\nJoshua D. Drake\n\n\n> \n> Thanks all!!\n> \n> Madison\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Mon, 26 Feb 2007 09:04:00 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Madison Kelly wrote:\n>> Hi all,\n>>\n>> I am asking in this list because, at the end of the day, this is a\n>> performance question.\n>>\n>> I am looking at writing a search engine of sorts for my database. I\n>> have only ever written very simple search engines before which amounted\n>> to not much more that the query string being used with ILIKE on a pile\n>> of columns. This was pretty rudimentary and didn't offer anything like\n>> relevance sorting and such (I'd sort by result name, age or whatnot).\n>>\n>> So I am hoping some of you guys and gals might be able to point me\n>> towards some resources or offer some tips or gotcha's before I get\n>> started on this. I'd really like to come up with a more intelligent\n>> search engine that doesn't take two minutes to return results. :) I\n>> know, in the end good indexes and underlying hardware will be important,\n>> but a sane as possible query structure helps to start with.\n> \n> See search.postgresql.org, you can download all source from\n> gborg.postgresql.org.\n\nJoshua,\n\nWhat's the name of the project referred to? There's nothing named\n\"search\" hosted on Gborg according to this project list:\n\nhttp://gborg.postgresql.org/project/projdisplaylist.php\n\nMadison,\n\nFor small data sets and simpler searches, the approach you have been\nusing can be appropriate. You may just want to use a small tool in a\nregular programming language to help build the query. I wrote such a\ntool for Perl:\n\nhttp://search.cpan.org/~markstos/SQL-KeywordSearch-1.11/lib/SQL/KeywordSearch.pm\n\nFor large or complex searches, a more specialized search system may be\nappropriate. I suspect that's kind of tool that Joshua is referencing.\n\n Mark\n",
"msg_date": "Mon, 26 Feb 2007 12:32:09 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "\n>>> So I am hoping some of you guys and gals might be able to point me\n>>> towards some resources or offer some tips or gotcha's before I get\n>>> started on this. I'd really like to come up with a more intelligent\n>>> search engine that doesn't take two minutes to return results. :) I\n>>> know, in the end good indexes and underlying hardware will be important,\n>>> but a sane as possible query structure helps to start with.\n>> See search.postgresql.org, you can download all source from\n>> gborg.postgresql.org.\n> \n> Joshua,\n> \n> What's the name of the project referred to? There's nothing named\n> \"search\" hosted on Gborg according to this project list:\n> \n> http://gborg.postgresql.org/project/projdisplaylist.php\n\nhttp://gborg.postgresql.org/project/pgweb/projdisplay.php\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Mon, 26 Feb 2007 09:41:37 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Mark Stosberg wrote:\n> Joshua D. Drake wrote:\n>> Madison Kelly wrote:\n>>> Hi all,\n>>>\n>>> I am asking in this list because, at the end of the day, this is a\n>>> performance question.\n>>>\n>>> I am looking at writing a search engine of sorts for my database. I\n>>> have only ever written very simple search engines before which amounted\n>>> to not much more that the query string being used with ILIKE on a pile\n>>> of columns. This was pretty rudimentary and didn't offer anything like\n>>> relevance sorting and such (I'd sort by result name, age or whatnot).\n>>>\n>>> So I am hoping some of you guys and gals might be able to point me\n>>> towards some resources or offer some tips or gotcha's before I get\n>>> started on this. I'd really like to come up with a more intelligent\n>>> search engine that doesn't take two minutes to return results. :) I\n>>> know, in the end good indexes and underlying hardware will be important,\n>>> but a sane as possible query structure helps to start with.\n>> See search.postgresql.org, you can download all source from\n>> gborg.postgresql.org.\n> \n> Joshua,\n> \n> What's the name of the project referred to? There's nothing named\n> \"search\" hosted on Gborg according to this project list:\n> \n> http://gborg.postgresql.org/project/projdisplaylist.php\n> \n> Madison,\n> \n> For small data sets and simpler searches, the approach you have been\n> using can be appropriate. You may just want to use a small tool in a\n> regular programming language to help build the query. I wrote such a\n> tool for Perl:\n> \n> http://search.cpan.org/~markstos/SQL-KeywordSearch-1.11/lib/SQL/KeywordSearch.pm\n> \n> For large or complex searches, a more specialized search system may be\n> appropriate. I suspect that's kind of tool that Joshua is referencing.\n> \n> Mark\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\nThanks Joshua and Mark!\n\n Joshua, I've been digging around the CVS (web) looking for the search \nengine code but so far have only found the reference (www.search) in \n'general.php' but can't locate the file. You wouldn't happen to have a \ndirect link would you?\n\n Mark, Thanks for a link to your module. I'll take a look at it's \nsource and see how you work your magic. :)\n\n I think the more direct question I was trying to get at is \"How do \nyou build a 'relavence' search engine? One where results are \nreturned/sorted by relevance of some sort?\". At this point, the best I \ncan think of, would be to perform multiple queries; first matching the \nwhole search term, then the search term starting a row, then ending a \nrow, then anywhere in a row and \"scoring\" the results based on which \nquery they came out on. This seems terribly cumbersome (and probably \nslow, indexes be damned) though. I'm hoping there is a better way! :)\n\nMadi\n",
"msg_date": "Mon, 26 Feb 2007 12:47:20 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "> Joshua, I've been digging around the CVS (web) looking for the search\n> engine code but so far have only found the reference (www.search) in\n> 'general.php' but can't locate the file. You wouldn't happen to have a\n> direct link would you?\n\nIt's all in module \"portal\". You will find the indexing stuff in\ntools/search, and the search interface in system/page/search.php.\n\n> I think the more direct question I was trying to get at is \"How do you\n> build a 'relavence' search engine? One where results are returned/sorted\n> by relevance of some sort?\". At this point, the best I can think of,\n> would be to perform multiple queries; first matching the whole search\n> term, then the search term starting a row, then ending a row, then\n> anywhere in a row and \"scoring\" the results based on which query they\n> came out on. This seems terribly cumbersome (and probably slow, indexes\n> be damned) though. I'm hoping there is a better way! :)\n\nThe tsearch2 ranking features are pretty good.\n\n//Magnus\n\n",
"msg_date": "Mon, 26 Feb 2007 18:55:18 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Madison Kelly wrote:\n>\n> I think the more direct question I was trying to get at is \"How do you\n> build a 'relavence' search engine? One where results are returned/sorted\n> by relevance of some sort?\". At this point, the best I can think of,\n> would be to perform multiple queries; first matching the whole search\n> term, then the search term starting a row, then ending a row, then\n> anywhere in a row and \"scoring\" the results based on which query they\n> came out on. This seems terribly cumbersome (and probably slow, indexes\n> be damned) though. I'm hoping there is a better way! :)\n\nMadison,\n\nI think your basic thinking is correct. However, the first \"select\" can\ndone \"offline\" -- sometime beforehand.\n\nFor example, you might create a table called \"keywords\" that includes\nthe list of words mined in the other tables, along with references to\nwhere the words are found, and how many times they are mentioned.\n\nThen, when someone actually searches, the search is primarily on the\n\"keywords\" table, which is now way to sort by \"rank\", since the table\ncontains how many times each keyword matches. The final result can be\nconstructed by using the details in the keywords table to pull up the\nactual records needed.\n\nMy expectation however is that there are enough details in the system,\nthat I would first look at trying a package like tsearch2 to help solve\nthe problem, before trying to write another system like this from scratch.\n\n Mark\n\n",
"msg_date": "Mon, 26 Feb 2007 13:15:07 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
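A rough sketch of the keyword-table idea described above, with entirely hypothetical table and column names; the keyword table is filled offline by whatever code mines the documents, and searches then rank by the stored counts:

    CREATE TABLE document (id integer PRIMARY KEY, title text, body text);
    CREATE TABLE keyword (
        word   text    NOT NULL,
        doc_id integer NOT NULL REFERENCES document(id),
        hits   integer NOT NULL,          -- occurrences of the word in that document
        PRIMARY KEY (word, doc_id)        -- the primary key index also serves lookups by word
    );

    -- rank documents matching any of the parsed search terms by total occurrences
    SELECT d.id, d.title, sum(k.hits) AS score
    FROM keyword k
    JOIN document d ON d.id = k.doc_id
    WHERE k.word IN ('search', 'engine')  -- the lowercased query words
    GROUP BY d.id, d.title
    ORDER BY score DESC
    LIMIT 20;

This only matches whole keywords, so any lowercasing or stemming has to happen both when the table is built and when the query is parsed, which is part of what tsearch2 already does.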
{
"msg_contents": "Mark Stosberg wrote:\n> Madison Kelly wrote:\n>> I think the more direct question I was trying to get at is \"How do you\n>> build a 'relavence' search engine? One where results are returned/sorted\n>> by relevance of some sort?\". At this point, the best I can think of,\n>> would be to perform multiple queries; first matching the whole search\n>> term, then the search term starting a row, then ending a row, then\n>> anywhere in a row and \"scoring\" the results based on which query they\n>> came out on. This seems terribly cumbersome (and probably slow, indexes\n>> be damned) though. I'm hoping there is a better way! :)\n> \n> Madison,\n> \n> I think your basic thinking is correct. However, the first \"select\" can\n> done \"offline\" -- sometime beforehand.\n> \n> For example, you might create a table called \"keywords\" that includes\n> the list of words mined in the other tables, along with references to\n> where the words are found, and how many times they are mentioned.\n> \n> Then, when someone actually searches, the search is primarily on the\n> \"keywords\" table, which is now way to sort by \"rank\", since the table\n> contains how many times each keyword matches. The final result can be\n> constructed by using the details in the keywords table to pull up the\n> actual records needed.\n> \n> My expectation however is that there are enough details in the system,\n> that I would first look at trying a package like tsearch2 to help solve\n> the problem, before trying to write another system like this from scratch.\n> \n> Mark\n\nNow see, this is exactly the kind of sagely advice I was hoping for! :)\n\nI'll look into tsearch2, and failing that for some reason, I love the \nkeyword table idea.\n\nThanks kindly!!\n\nMadi\n",
"msg_date": "Mon, 26 Feb 2007 13:59:45 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": ">\n> Now see, this is exactly the kind of sagely advice I was hoping for! :)\n> \n> I'll look into tsearch2, and failing that for some reason, I love the\n> keyword table idea.\n\nFor example keyword search code, you can try this package:\n\nhttp://downloads.sourceforge.net/cascade/cascade-devel-pieces-1.1.tgz?modtime=999556617&big_mirror=0\n\nThere is a \"keywords\" subdirectory with the Perl and SQL. I'm sure this\ncode is not ideal in a number of ways:\n\n1. It's from 2001.\n2. It doesn't actually function on it's own anymore. However, you can\nread the code and get ideas.\n3. I'm sure someone has a better looking/functioning example!\n\nAnyway, it's there if you want to take a look.\n\n Mark\n",
"msg_date": "Mon, 26 Feb 2007 14:46:18 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "On Mon, 2007-02-26 at 11:29 -0500, Madison Kelly wrote:\n> I am looking at writing a search engine of sorts for my database. I \n> have only ever written very simple search engines before which amounted \n> to not much more that the query string being used with ILIKE on a pile \n> of columns. This was pretty rudimentary and didn't offer anything like \n> relevance sorting and such (I'd sort by result name, age or whatnot).\n\nLook at Tsearch2:\n\nhttp://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/\n\nIt has a lot of features for searching, and can make use of powerful\nindexes to return search results very quickly. As someone already\nmentioned, it also has ranking features.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 26 Feb 2007 12:14:05 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
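A rough sketch of the tsearch2 usage being recommended here, assuming the 8.1/8.2-era contrib/tsearch2 module is installed and an invented documents(id, title, body) table; exact function names differ slightly in the later in-core full-text search:

```sql
-- Add a tsvector column, populate it, and index it.
ALTER TABLE documents ADD COLUMN fti tsvector;
UPDATE documents SET fti = to_tsvector(coalesce(title, '') || ' ' || coalesce(body, ''));
CREATE INDEX documents_fti_idx ON documents USING gist (fti);

-- Ranked search: @@ does the matching, rank() provides the relevance score.
SELECT id, title, rank(fti, q) AS score
FROM documents, to_tsquery('search & engine') q
WHERE fti @@ q
ORDER BY score DESC
LIMIT 20;
```

In practice a trigger would also be added to keep fti current as rows change.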
{
"msg_contents": "On Mon, 26 Feb 2007, Madison Kelly wrote:\n\n> Hi all,\n>\n> I'd really like to come up with a more intelligent search engine that doesn't \n> take two minutes to return results. :) I know, in the end good indexes and \n> underlying hardware will be important, but a sane as possible query structure \n> helps to start with.\n\nI'm not a programmer, so I can't comment on how good of an example this \nis, but I've been pretty happy with mnogosearch:\n\nhttp://www.mnogosearch.com/\n\nThe *nix versions are free. Looking at the db structure gave me a bit of \nan idea of what I'm guessing is the \"right way\" to search a huge amount of \ndocuments.\n\nCharles\n\n> Thanks all!!\n>\n> Madison\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n",
"msg_date": "Mon, 26 Feb 2007 16:24:12 -0500 (EST)",
"msg_from": "Charles Sprickman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "On Mon, Feb 26, 2007 at 04:24:12PM -0500, Charles Sprickman wrote:\n> On Mon, 26 Feb 2007, Madison Kelly wrote:\n> \n> >Hi all,\n> >\n> >I'd really like to come up with a more intelligent search engine that \n> >doesn't take two minutes to return results. :) I know, in the end good \n> >indexes and underlying hardware will be important, but a sane as possible \n> >query structure helps to start with.\n> \n> I'm not a programmer, so I can't comment on how good of an example this \n> is, but I've been pretty happy with mnogosearch:\n> \n> http://www.mnogosearch.com/\n> \n> The *nix versions are free. Looking at the db structure gave me a bit of \n> an idea of what I'm guessing is the \"right way\" to search a huge amount of \n> documents.\n\nJust as a datapoint, we did try to use mnogosearch for the\npostgresql.org website+archives search, and it fell over completely.\nIndexing took way too long, and we had search times several thousand\ntimes longer than with tsearch2.\n\nThat said, I'm sure there are cases when it works fine :-)\n\n//Magnus\n",
"msg_date": "Tue, 27 Feb 2007 14:15:58 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Magnus Hagander wrote:\n> \n> Just as a datapoint, we did try to use mnogosearch for the\n> postgresql.org website+archives search, and it fell over completely.\n> Indexing took way too long, and we had search times several thousand\n> times longer than with tsearch2.\n> \n> That said, I'm sure there are cases when it works fine :-)\n\nThere are - in fact before your time the site did use Mnogosearch. We\nmoved to our own port of ASPSeek when we outgrew Mnogo's capabilities,\nand then to your TSearch code when we outgrew ASPSeek.\n\nWhen we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop\npretending to be Google...\n\n/D\n",
"msg_date": "Tue, 27 Feb 2007 13:33:47 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Madison Kelly wrote:\n> Hi all,\n> \n> I am asking in this list because, at the end of the day, this is a \n> performance question.\n> \n> I am looking at writing a search engine of sorts for my database. I \n> have only ever written very simple search engines before which amounted \n> to not much more that the query string being used with ILIKE on a pile \n> of columns. This was pretty rudimentary and didn't offer anything like \n> relevance sorting and such (I'd sort by result name, age or whatnot).\n> \n> So I am hoping some of you guys and gals might be able to point me \n> towards some resources or offer some tips or gotcha's before I get \n> started on this. I'd really like to come up with a more intelligent \n> search engine that doesn't take two minutes to return results. :) I \n> know, in the end good indexes and underlying hardware will be important, \n> but a sane as possible query structure helps to start with.\n\nAs someone mentioned, tsearch2 is a good option.\n\n<plug> I wrote a small article about how to get it set up relatively \neasily: http://www.designmagick.com/article/27/ </plug>\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 28 Feb 2007 10:05:42 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "On Tue, 27 Feb 2007, Dave Page wrote:\n\n> Magnus Hagander wrote:\n>>\n>> Just as a datapoint, we did try to use mnogosearch for the\n>> postgresql.org website+archives search, and it fell over completely.\n>> Indexing took way too long, and we had search times several thousand\n>> times longer than with tsearch2.\n>>\n>> That said, I'm sure there are cases when it works fine :-)\n>\n> There are - in fact before your time the site did use Mnogosearch. We\n> moved to our own port of ASPSeek when we outgrew Mnogo's capabilities,\n> and then to your TSearch code when we outgrew ASPSeek.\n\nAt risk of pulling this way too far off topic, may I ask how many \ndocuments (mail messages) you were dealing with when things started to \nfall apart with mnogo? We're looking at it for a new project that will \nhopefully get bigger and bigger. We will be throwing groups of mailing \nlists into their own mnogo config/tables... If we should save ourselves \nthe pain and look at something more homebrew, then we'll start \ninvestigating \"Tsearch\".\n\nThanks,\n\nCharles\n\n> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop\n> pretending to be Google...\n>\n> /D\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n",
"msg_date": "Tue, 27 Feb 2007 18:36:11 -0500 (EST)",
"msg_from": "Charles Sprickman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "On Tue, Feb 27, 2007 at 06:36:11PM -0500, Charles Sprickman wrote:\n> On Tue, 27 Feb 2007, Dave Page wrote:\n> \n> >Magnus Hagander wrote:\n> >>\n> >>Just as a datapoint, we did try to use mnogosearch for the\n> >>postgresql.org website+archives search, and it fell over completely.\n> >>Indexing took way too long, and we had search times several thousand\n> >>times longer than with tsearch2.\n> >>\n> >>That said, I'm sure there are cases when it works fine :-)\n> >\n> >There are - in fact before your time the site did use Mnogosearch. We\n> >moved to our own port of ASPSeek when we outgrew Mnogo's capabilities,\n> >and then to your TSearch code when we outgrew ASPSeek.\n> \n> At risk of pulling this way too far off topic, may I ask how many \n> documents (mail messages) you were dealing with when things started to \n> fall apart with mnogo? We're looking at it for a new project that will \n> hopefully get bigger and bigger. We will be throwing groups of mailing \n> lists into their own mnogo config/tables... If we should save ourselves \n> the pain and look at something more homebrew, then we'll start \n> investigating \"Tsearch\".\n\nI don't know when it broke exactly, but I know we're currently doing\nabout 600,000 documents. AFAIK it started to fall apart pretty long\nbefore that.\n\n//Magnus\n",
"msg_date": "Wed, 28 Feb 2007 08:44:39 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Charles Sprickman wrote:\n> On Tue, 27 Feb 2007, Dave Page wrote:\n> \n>> Magnus Hagander wrote:\n>>>\n>>> Just as a datapoint, we did try to use mnogosearch for the\n>>> postgresql.org website+archives search, and it fell over completely.\n>>> Indexing took way too long, and we had search times several thousand\n>>> times longer than with tsearch2.\n>>>\n>>> That said, I'm sure there are cases when it works fine :-)\n>>\n>> There are - in fact before your time the site did use Mnogosearch. We\n>> moved to our own port of ASPSeek when we outgrew Mnogo's capabilities,\n>> and then to your TSearch code when we outgrew ASPSeek.\n> \n> At risk of pulling this way too far off topic, may I ask how many\n> documents (mail messages) you were dealing with when things started to\n> fall apart with mnogo? \n\nI honestly don't remember now, but it would have been in the tens or\nmaybe low hundreds of thousands. Don't get me wrong, I've built sites\nwhere Mnogo is still running fine and does a great job - it just doesn't\nscale well.\n\n> We're looking at it for a new project that will\n> hopefully get bigger and bigger. We will be throwing groups of mailing\n> lists into their own mnogo config/tables... If we should save ourselves\n> the pain and look at something more homebrew, then we'll start\n> investigating \"Tsearch\".\n\nWell put it this way, the PostgreSQL mailing list archives outgrew Mnogo\nyears ago and even ASPSeek was beginning to struggle when it got removed\na few months back.\n\nRegards, Dave\n",
"msg_date": "Wed, 28 Feb 2007 08:33:30 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "On Tue, Feb 27, 2007 at 01:33:47PM +0000, Dave Page wrote:\n> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop\n> pretending to be Google...\n\nJust for the record: Google has been known to sponsor sites in need with\nGoogle Minis and such earlier -- I don't know what their[1] policy is on the\nmatter, but if tsearch2 should at some point stop being usable for indexing\npostgresql.org, asking them might be worth a shot.\n\n[1] Technically \"our\", as I start working there in July. I do not speak for\n Google, etc., blah blah. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 28 Feb 2007 12:16:11 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Tue, Feb 27, 2007 at 01:33:47PM +0000, Dave Page wrote:\n>> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop\n>> pretending to be Google...\n> \n> Just for the record: Google has been known to sponsor sites in need with\n> Google Minis and such earlier -- I don't know what their[1] policy is on the\n> matter, but if tsearch2 should at some point stop being usable for indexing\n> postgresql.org, asking them might be worth a shot.\n\nI think if postgresql.org outgrows tsearch2 then the preferred solution\nwould be to improve tsearch2/postgresql, but thanks for the tip :-)\n\n> [1] Technically \"our\", as I start working there in July. \n\nCongratulations :-)\n\nRegards, Dave\n\n",
"msg_date": "Wed, 28 Feb 2007 11:39:55 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "On Wed, 28 Feb 2007, Dave Page wrote:\n\n> Steinar H. Gunderson wrote:\n>> On Tue, Feb 27, 2007 at 01:33:47PM +0000, Dave Page wrote:\n>>> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop\n>>> pretending to be Google...\n>>\n>> Just for the record: Google has been known to sponsor sites in need with\n>> Google Minis and such earlier -- I don't know what their[1] policy is on the\n>> matter, but if tsearch2 should at some point stop being usable for indexing\n>> postgresql.org, asking them might be worth a shot.\n>\n> I think if postgresql.org outgrows tsearch2 then the preferred solution\n> would be to improve tsearch2/postgresql, but thanks for the tip :-)\n\nGuys, current tsearch2 should works with millions of documents. Actually,\nthe performance killer is the necessity to consult heap to calculate rank\nwhich is unavoidably slow, since one need to read all records.\nSearch itself is incredibly fast ! If we find a way to store an additional \ninformation in index and workout visibility issue, full text search will \nbe damn fast.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Wed, 28 Feb 2007 15:35:18 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
},
{
"msg_contents": "Oleg Bartunov wrote:\n\n\n> Guys, current tsearch2 should works with millions of documents. \n...\n\n> Search itself is incredibly fast !\n\nOh, I know - you and Teodor have done a wonderful job.\n\nRegards, Dave.\n",
"msg_date": "Wed, 28 Feb 2007 12:40:14 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Writting a \"search engine\" for a pgsql DB"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm trying to make sense of the memory usage reported by 'top', compared\nto what \"pg_database_size\" shows. Here's one result:\n\nselect pg_size_pretty(pg_database_size('production'));\n pg_size_pretty\n----------------\n 6573 MB\n\nNow, looking at memory use with \"top\", there is a lot memory that isn't\nbeing used on the system:\n\n Mem: 470M Active, 2064M Inact\n\n( 3 Gigs RAM, total ).\n\nOverall performance is decent, so maybe there's no\nproblem. However, I wonder if we've under-allocated memory to\nPostgreSQL. (This is a dedicated FreeBSD DB server).\n\nSome memory settings include:\n\nshared_buffers = 8192 (we have 450 connections)\nmax_fsm_pages = 1250000 (we kept getting HINTs to bump it, so we did)\n\nMaybe we should be bumping up the \"sort_mem\" and \"vacuum_mem\" as well?\n\nI do sometimes see sorting and vacuuming as showing up as things I'd\nlike to run faster.\n\nThis list has been a great resource for performance tuning help, and I\ncontinue to appreciate your help. We've used PostgreSQL on every project\nwe've had a choice on for the last 10 years. (Has it been that long?!)\nWe've never regretted it once.\n\n Mark\n",
"msg_date": "Mon, 26 Feb 2007 11:52:09 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "low memory usage reported by 'top' indicates poor tuning?"
},
{
"msg_contents": "Mark Stosberg wrote:\n> Hello,\n> \n> I'm trying to make sense of the memory usage reported by 'top', compared\n> to what \"pg_database_size\" shows. Here's one result:'\n\n\nYou are missing the most important parts of the equation:\n\n1. What version of PostgreSQL.\n2. What operating system -- scratch , I see freebsd\n3. How big is your pg_dump in comparison to the pg_database_size()\n4. What type of raid do you have?\n5. What is your work_mem set to?\n6. What about effective_cache_size?\n7. Do you analyze? How often?\n\n> \n> select pg_size_pretty(pg_database_size('production'));\n> pg_size_pretty\n> ----------------\n> 6573 MB\n> \n> Now, looking at memory use with \"top\", there is a lot memory that isn't\n> being used on the system:\n> \n> Mem: 470M Active, 2064M Inact\n> \n> ( 3 Gigs RAM, total ).\n> \n> Overall performance is decent, so maybe there's no\n> problem. However, I wonder if we've under-allocated memory to\n> PostgreSQL. (This is a dedicated FreeBSD DB server).\n> \n> Some memory settings include:\n> \n> shared_buffers = 8192 (we have 450 connections)\n> max_fsm_pages = 1250000 (we kept getting HINTs to bump it, so we did)\n> \n> Maybe we should be bumping up the \"sort_mem\" and \"vacuum_mem\" as well?\n> \n> I do sometimes see sorting and vacuuming as showing up as things I'd\n> like to run faster.\n> \n> This list has been a great resource for performance tuning help, and I\n> continue to appreciate your help. We've used PostgreSQL on every project\n> we've had a choice on for the last 10 years. (Has it been that long?!)\n> We've never regretted it once.\n> \n> Mark\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Mon, 26 Feb 2007 09:08:03 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: low memory usage reported by 'top' indicates poor tuning?"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Mark Stosberg wrote:\n>> Hello,\n>>\n>> I'm trying to make sense of the memory usage reported by 'top', compared\n>> to what \"pg_database_size\" shows. Here's one result:'\n> \n> \n> You are missing the most important parts of the equation:\n\nThanks for your patience, Joshua. I'm new at performance tuning.\n\n> 1. What version of PostgreSQL.\n\nNow, 8.1. We are evaluating 8.2 currently and could potentially upgrade\nsoon.\n\n> 2. What operating system -- scratch , I see freebsd\n\n> 3. How big is your pg_dump in comparison to the pg_database_size()\n\nUsing the compressed, custom format: 360M. It was recently 1.2G\ndue to logging tables that were pruned recently. These tables are\nonly inserted into and are not otherwise accessed by the application.\n\n> 4. What type of raid do you have?\n\nRAID-1.\n\n> 5. What is your work_mem set to?\n\n1024 (left at the default)\n\n> 6. What about effective_cache_size?\n\n1000 (default)\n\nFor any other settings, it's probably the defaults, too.\n\n> 7. Do you analyze? How often?\n\nOnce, nightly. I'm currently learning and experience with autovacuuming\nto see if there is a more optimal arrangement of autovacuuming + nightly\ncron vacuuming.\n\nA test on Friday was failure: Autovacuuming brought the application to a\ncrawl, and with 8.1, I couldn't see what table it was stuck on. I had\nautovacuum_vacuum_cost_delay set to \"10\".\n\nThanks again for your experienced help.\n\n Mark\n\n>> select pg_size_pretty(pg_database_size('production'));\n>> pg_size_pretty\n>> ----------------\n>> 6573 MB\n>>\n>> Now, looking at memory use with \"top\", there is a lot memory that isn't\n>> being used on the system:\n>>\n>> Mem: 470M Active, 2064M Inact\n>>\n>> ( 3 Gigs RAM, total ).\n>>\n>> Overall performance is decent, so maybe there's no\n>> problem. However, I wonder if we've under-allocated memory to\n>> PostgreSQL. (This is a dedicated FreeBSD DB server).\n>>\n>> Some memory settings include:\n>>\n>> shared_buffers = 8192 (we have 450 connections)\n>> max_fsm_pages = 1250000 (we kept getting HINTs to bump it, so we did)\n>>\n>> Maybe we should be bumping up the \"sort_mem\" and \"vacuum_mem\" as well?\n>>\n>> I do sometimes see sorting and vacuuming as showing up as things I'd\n>> like to run faster.\n>>\n>> This list has been a great resource for performance tuning help, and I\n>> continue to appreciate your help. We've used PostgreSQL on every project\n>> we've had a choice on for the last 10 years. (Has it been that long?!)\n>> We've never regretted it once.\n>>\n>> Mark\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>>\n> \n> \n",
"msg_date": "Mon, 26 Feb 2007 12:26:23 -0500",
"msg_from": "Mark Stosberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: low memory usage reported by 'top' indicates poor tuning?"
}
] |
[
{
"msg_contents": "This is the second proposal for Dead Space Map (DSM).\nHere is the previous discussion:\nhttp://archives.postgresql.org/pgsql-hackers/2006-12/msg01188.php\n\nI'll post the next version of the Dead Space Map patch to -patches.\nI've implemented 2bits/page bitmap and new vacuum commands.\nMemory management and recovery features are not done yet.\n\nI think it's better to get DSM and HOT together. DSM is good at complex\nupdated cases but not at heavily updated cases. HOT has opposite aspects,\nas far as I can see. I think they can cover each other.\n\n\n2bits/page bitmap\n-----------------\n\nEach heap pages have 4 states for dead space map; HIGH, LOW, UNFROZEN and\nFROZEN. VACUUM uses the states to reduce the number of target pages.\n\n- HIGH : High priority to vacuum. Maybe many dead tuples in the page.\n- LOW : Low priority to vacuum Maybe few dead tuples in the page.\n- UNFROZEN : No dead tuples, but some unfrozen tuples in the page.\n- FROZEN : No dead nor unfrozen tuples in the page.\n\nIf we do UPDATE a tuple, the original page containing the tuple is marked\nas HIGH and the new page where the updated tuple is placed is marked as LOW.\nWhen we commit the transaction, the updated tuples needs only FREEZE.\nThat's why the after-page is marked as LOW. However, If we rollback, the\nafter-page should be vacuumed, so we should mark the page LOW, not UNFROZEN.\nWe don't know the transaction will commit or rollback at the UPDATE.\n\nIf we combine this with the HOT patch, pages with HOT tuples are probably\nmarked as UNFROZEN because we don't bother vacuuming HOT tuples. They can\nbe removed incrementally and doesn't require explicit vacuums.\n\nIn future work, we can do index-only-scan for tuples that is in UNFROZEN or\nFROZEN pages. (currently not implemented)\n\n\nVACUUM commands\n---------------\n\nVACUUM now only scans the pages that possibly have dead tuples.\nVACUUM ALL, a new syntax, behaves as the same as before.\n\n- VACUUM FULL : Not changed. scans all pages and compress them.\n- VACUUM ALL : Scans all pages; Do the same behavior as previous VACUUM.\n- VACUUM : Scans only HIGH pages usually, but also LOW and UNFROZEN\n pages on vacuums in the cases for preventing XID wraparound.\n\nThe commitment of oldest XID for VACUUM is not changed. There should not be\ntuples that XIDs are older than (Current XID - vacuum_freeze_min_age) after\nVACUUM. If the VACUUM can guarantee the commitment, it scans only HIGH pages.\nOtherwise, it scans HIGH, LOW and UNFROZEN pages for FREEZE.\n\n\nPerformance issues\n------------------\n\n* Enable/Disable DSM tracking per tables\n DSM requires more or less additional works. If we know specific tables\n where DSM does not work well, ex. heavily updated small tables, we can\n disable DSM for it. The syntax is:\n ALTER TABLE name SET (dsm=true/false);\n\n* Dead Space State Cache\n The DSM management module is guarded using one LWLock, DeadSpaceLock.\n Almost all accesses to DSM requires only shared lock, but the frequency\n of shared lock was very high (tied with BufMappingLock) in my research.\n To avoid the lock contention, I added a cache of dead space state in\n BufferDesc flags. 
Backends see the flags first, and avoid locking if no\n need to \n\n* Agressive freezing\n We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.\n This is for making FROZEN pages but not UNFROZEN pages as far as possible\n in order to reduce works in XID wraparound vacuums.\n\n\nMemory management\n-----------------\n\nIn current implementation, DSM allocates a bunch of memory at start up and\nwe cannot modify it in running. It's maybe enough because DSM consumes very\nlittle memory -- 32MB memory per 1TB database.\n\nThere are 3 parameters for FSM and DSM.\n\n - max_fsm_pages = 204800\n - max_fsm_relations = 1000 (= max_dsm_relations)\n - max_dsm_pages = 4096000\n\nI'm thinking to change them into 2 new paramaters. We will allocates memory\nfor DSM that can hold all of estimated_database_size, and for FSM 50% or\nsomething of the size. Is this reasonable?\n\n - estimated_max_relations = 1000\n - estimated_database_size = 4GB (= about max_fsm_pages * 8KB * 2)\n\n\nRecovery\n--------\n\nI've already have a recovery extension. However, it can recover DSM\nbut not FSM. Do we also need to restore FSM? If we don't, unreusable\npages might be left in heaps. Of cource it could be reused if another\ntuple in the page are updated, but VACUUM will not find those pages.\n\n\nComments and suggestions are really appreciated.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n",
"msg_date": "Tue, 27 Feb 2007 12:05:57 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dead Space Map version 2"
},
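For concreteness, this is roughly how the knobs proposed in the message above would be used from SQL if the patch were applied; none of this syntax exists in released PostgreSQL, and the table name is invented:

```sql
-- Proposed syntax only (from the patch described above), shown for illustration.
ALTER TABLE hot_counters SET (dsm = false);  -- opt a heavily updated small table out of DSM tracking

VACUUM hot_counters;      -- proposed: scan only pages the DSM marks as HIGH
VACUUM ALL hot_counters;  -- proposed: scan every page, like the pre-patch VACUUM
```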
{
"msg_contents": "On Tue, Feb 27, 2007 at 12:05:57PM +0900, ITAGAKI Takahiro wrote:\n> Each heap pages have 4 states for dead space map; HIGH, LOW, UNFROZEN and\n> FROZEN. VACUUM uses the states to reduce the number of target pages.\n> \n> - HIGH : High priority to vacuum. Maybe many dead tuples in the page.\n> - LOW : Low priority to vacuum Maybe few dead tuples in the page.\n> - UNFROZEN : No dead tuples, but some unfrozen tuples in the page.\n> - FROZEN : No dead nor unfrozen tuples in the page.\n> \n> If we do UPDATE a tuple, the original page containing the tuple is marked\n> as HIGH and the new page where the updated tuple is placed is marked as LOW.\n\nDon't you mean UNFROZEN?\n\n> When we commit the transaction, the updated tuples needs only FREEZE.\n> That's why the after-page is marked as LOW. However, If we rollback, the\n> after-page should be vacuumed, so we should mark the page LOW, not UNFROZEN.\n> We don't know the transaction will commit or rollback at the UPDATE.\n\nWhat makes it more important to mark the original page as HIGH instead\nof LOW, like the page with the new tuple? The description of the states\nindicates that there would likely be a lot more dead tuples in a HIGH\npage than in a LOW page.\n\nPerhaps it would be better to have the bgwriter take a look at how many\ndead tuples (or how much space the dead tuples account for) when it\nwrites a page out and adjust the DSM at that time.\n\n> * Agressive freezing\n> We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.\n> This is for making FROZEN pages but not UNFROZEN pages as far as possible\n> in order to reduce works in XID wraparound vacuums.\n\nDo you mean using OldestXmin instead of FreezeLimit?\n\nPerhaps it might be better to save that optimization for later...\n\n> In current implementation, DSM allocates a bunch of memory at start up and\n> we cannot modify it in running. It's maybe enough because DSM consumes very\n> little memory -- 32MB memory per 1TB database.\n> \n> There are 3 parameters for FSM and DSM.\n> \n> - max_fsm_pages = 204800\n> - max_fsm_relations = 1000 (= max_dsm_relations)\n> - max_dsm_pages = 4096000\n> \n> I'm thinking to change them into 2 new paramaters. We will allocates memory\n> for DSM that can hold all of estimated_database_size, and for FSM 50% or\n> something of the size. Is this reasonable?\n \nI don't think so, at least not until we get data from the field about\nwhat's typical. If the DSM is tracking every page in the cluster then\nI'd expect the FSM to be closer to 10% or 20% of that, anyway.\n\n> I've already have a recovery extension. However, it can recover DSM\n> but not FSM. Do we also need to restore FSM? If we don't, unreusable\n> pages might be left in heaps. Of cource it could be reused if another\n> tuple in the page are updated, but VACUUM will not find those pages.\n\nYes, DSM would make FSM recovery more important, but I thought it was\nrecoverable now? Or is that only on a clean shutdown?\n\nI suspect we don't need perfect recoverability... theoretically we could\njust commit the FSM after vacuum frees pages and leave it at that; if we\nrevert to that after a crash, backends will grab pages from the FSM only\nto find there's no more free space, at which point they could pull the\npage from the FSM and find another one. This would lead to degraded\nperformance for a while after a crash, but that might be a good\ntrade-off.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 26 Feb 2007 23:11:44 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Yes, DSM would make FSM recovery more important, but I thought it was\n> recoverable now? Or is that only on a clean shutdown?\n\nCurrently we throw away FSM during any non-clean restart. This is\nprobably overkill but I'm quite unclear what would be a safe\nalternative.\n\n> I suspect we don't need perfect recoverability...\n\nThe main problem with the levels proposed by Takahiro-san is that any\ntransition from FROZEN to not-FROZEN *must* be exactly recovered,\nbecause vacuum will never visit an allegedly frozen page at all. This\nappears to require WAL-logging DSM state changes, which is a pretty\nserious performance hit. I'd be happier if the DSM content could be\ntreated as just a hint. I think that means not trusting it for whether\na page is frozen to the extent of not needing vacuum even for\nwraparound. So I'm inclined to propose that there be only two states\n(hence only one DSM bit per page): page needs vacuum for space recovery,\nor not. Vacuum for XID wraparound would have to hit every page\nregardless.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Feb 2007 00:55:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2 "
},
{
"msg_contents": "On Tue, 2007-02-27 at 12:05 +0900, ITAGAKI Takahiro wrote:\n\n> I think it's better to get DSM and HOT together. DSM is good at\n> complex updated cases but not at heavily updated cases. HOT has\n> opposite aspects, as far as I can see. I think they can cover each\n> other.\n\nVery much agreed.\n\nI'll be attempting to watch for any conflicting low-level assumptions as\nwe progress towards deadline.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Feb 2007 07:49:01 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "On Tue, 2007-02-27 at 12:05 +0900, ITAGAKI Takahiro wrote:\n\n> If we combine this with the HOT patch, pages with HOT tuples are probably\n> marked as UNFROZEN because we don't bother vacuuming HOT tuples. They can\n> be removed incrementally and doesn't require explicit vacuums.\n\nPerhaps avoid DSM entries for HOT updates completely?\n\n> VACUUM commands\n> ---------------\n> \n> VACUUM now only scans the pages that possibly have dead tuples.\n> VACUUM ALL, a new syntax, behaves as the same as before.\n> \n> - VACUUM FULL : Not changed. scans all pages and compress them.\n> - VACUUM ALL : Scans all pages; Do the same behavior as previous VACUUM.\n> - VACUUM : Scans only HIGH pages usually, but also LOW and UNFROZEN\n> pages on vacuums in the cases for preventing XID wraparound.\n\nSounds good.\n\n> Performance issues\n> ------------------\n> \n> * Enable/Disable DSM tracking per tables\n> DSM requires more or less additional works. If we know specific tables\n> where DSM does not work well, ex. heavily updated small tables, we can\n> disable DSM for it. The syntax is:\n> ALTER TABLE name SET (dsm=true/false);\n\nHow about a dsm_tracking_limit GUC? (Better name please)\nThe number of pages in a table before we start tracking DSM entries for\nit. DSM only gives worthwhile benefits for larger tables anyway, so let\nthe user define what large means for them.\ndsm_tracking_limit = 1000 by default.\n\n> * Dead Space State Cache\n> The DSM management module is guarded using one LWLock, DeadSpaceLock.\n> Almost all accesses to DSM requires only shared lock, but the frequency\n> of shared lock was very high (tied with BufMappingLock) in my research.\n> To avoid the lock contention, I added a cache of dead space state in\n> BufferDesc flags. Backends see the flags first, and avoid locking if no\n> need to \n\nISTM there should be a point at which DSM is so full we don't bother to\nkeep track any longer, so we can drop that information. For example if\nuser runs UPDATE without a WHERE clause, there's no point in tracking\nwhole relation.\n\n> Memory management\n> -----------------\n> \n> In current implementation, DSM allocates a bunch of memory at start up and\n> we cannot modify it in running. It's maybe enough because DSM consumes very\n> little memory -- 32MB memory per 1TB database.\n\nThat sounds fine.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Feb 2007 08:11:37 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "On Tue, 2007-02-27 at 00:55 -0500, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Yes, DSM would make FSM recovery more important, but I thought it was\n> > recoverable now? Or is that only on a clean shutdown?\n> \n> Currently we throw away FSM during any non-clean restart. This is\n> probably overkill but I'm quite unclear what would be a safe\n> alternative.\n> \n> > I suspect we don't need perfect recoverability...\n> \n> The main problem with the levels proposed by Takahiro-san is that any\n> transition from FROZEN to not-FROZEN *must* be exactly recovered,\n> because vacuum will never visit an allegedly frozen page at all. This\n> appears to require WAL-logging DSM state changes, which is a pretty\n> serious performance hit. I'd be happier if the DSM content could be\n> treated as just a hint. I think that means not trusting it for whether\n> a page is frozen to the extent of not needing vacuum even for\n> wraparound. \n\nAgreed.\n\n> So I'm inclined to propose that there be only two states\n> (hence only one DSM bit per page): page needs vacuum for space recovery,\n> or not. Vacuum for XID wraparound would have to hit every page\n> regardless.\n\nI'm inclined to think: this close to deadline it would be more robust to\ngo with the simpler option. So, agreed to the one bit per page.\n\nWe can revisit the 2 bits/page idea easily for later releases. If the\nDSM is non-transactional, upgrading to a new format in the future should\nbe very easy.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Feb 2007 08:16:54 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> wrote:\n\n> > If we do UPDATE a tuple, the original page containing the tuple is marked\n> > as HIGH and the new page where the updated tuple is placed is marked as LOW.\n> \n> Don't you mean UNFROZEN?\n\nNo, the new tuples are marked as LOW. I intend to use UNFROZEN and FROZEN\npages as \"all tuples in the pages are visible to all transactions\" for\nindex-only-scan in the future.\n\n\n> What makes it more important to mark the original page as HIGH instead\n> of LOW, like the page with the new tuple? The description of the states\n> indicates that there would likely be a lot more dead tuples in a HIGH\n> page than in a LOW page.\n> \n> Perhaps it would be better to have the bgwriter take a look at how many\n> dead tuples (or how much space the dead tuples account for) when it\n> writes a page out and adjust the DSM at that time.\n\nYeah, I feel it is worth optimizable, too. One question is, how we treat\ndirty pages written by backends not by bgwriter? If we want to add some\nworks in bgwriter, do we also need to make bgwriter to write almost of\ndirty pages?\n\n\n> > * Agressive freezing\n> > We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.\n> \n> Do you mean using OldestXmin instead of FreezeLimit?\n\nYes, we will use OldestXmin as the threshold to freeze tuples in\ndirty pages or pages that have some dead tuples. Or, many UNFROZEN\npages still remain after vacuum and they will cost us in the next\nvacuum preventing XID wraparound.\n\n\n> > I'm thinking to change them into 2 new paramaters. We will allocates memory\n> > for DSM that can hold all of estimated_database_size, and for FSM 50% or\n> > something of the size. Is this reasonable?\n> \n> I don't think so, at least not until we get data from the field about\n> what's typical. If the DSM is tracking every page in the cluster then\n> I'd expect the FSM to be closer to 10% or 20% of that, anyway.\n\nI'd like to add some kind of logical flavors to max_fsm_pages\nand max_dsm_pages. For DSM, max_dsm_pages should represent the\nwhole database size. In the other hand, what meaning does\nmax_fsm_pages have? (estimated_updatable_size ?)\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Feb 2007 17:38:39 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "\nTom Lane <[email protected]> wrote:\n\n> Vacuum for XID wraparound would have to hit every page regardless.\n\nThere is one problem at this point. If we want to guarantee that there\nare no tuples that XIDs are older than pg_class.relfrozenxid, we must scan\nall pages for XID wraparound for every vacuums. So I used two thresholds\nfor treating XIDs, that is commented as follows. Do you have better ideas\nfor this point?\n\n/*\n * We use vacuum_freeze_min_age to determine whether a freeze scan is\n * needed, but half vacuum_freeze_min_age for the actual freeze limits\n * in order to prevent XID wraparound won't occur too frequently.\n */\n\n\nAlso, normal vacuums uses DSM and freeze-vacuum does not, so vacuums\nsometimes take longer time than usual. Doesn't the surprise bother us?\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Feb 2007 17:56:23 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dead Space Map version 2 "
},
{
"msg_contents": "\n\"Simon Riggs\" <[email protected]> wrote:\n\n> > If we combine this with the HOT patch, pages with HOT tuples are probably\n> > marked as UNFROZEN because we don't bother vacuuming HOT tuples. They can\n> > be removed incrementally and doesn't require explicit vacuums.\n> \n> Perhaps avoid DSM entries for HOT updates completely?\n\nYes, if we employ 1bit/page (worth vacuum or not).\nOr no if 2bits/page because HOT updates change page states to UNFROZEN.\n\n\n> > * Enable/Disable DSM tracking per tables\n> \n> How about a dsm_tracking_limit GUC? (Better name please)\n> The number of pages in a table before we start tracking DSM entries for\n> it. DSM only gives worthwhile benefits for larger tables anyway, so let\n> the user define what large means for them.\n> dsm_tracking_limit = 1000 by default.\n\nSound good. How about small_table_size = 8MB for the variable?\nI found that we've already have the value used for truncating\nthreshold for vacuum. (REL_TRUNCATE_MINIMUM = 1000 in vacuumlazy.c)\nI think they have the same purpose in treating of small tables\nand we can use the same variable in these places.\n\n\n> > * Dead Space State Cache\n> \n> ISTM there should be a point at which DSM is so full we don't bother to\n> keep track any longer, so we can drop that information. For example if\n> user runs UPDATE without a WHERE clause, there's no point in tracking\n> whole relation.\n\nIt's a bit difficult. We have to lock DSM *before* we see whether\nthe table is tracked or not. So we need to cache the tracked state\nin the relcache entry, but it requres some works to keep coherency\nbetween cached states and shared states.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Feb 2007 18:37:12 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "Tom Lane wrote:\n> The main problem with the levels proposed by Takahiro-san is that any\n> transition from FROZEN to not-FROZEN *must* be exactly recovered,\n> because vacuum will never visit an allegedly frozen page at all. This\n> appears to require WAL-logging DSM state changes, which is a pretty\n> serious performance hit.\n\nI doubt it would be a big performance hit. AFAICS, all the information \nneeded to recover the DSM is already written to WAL, so it wouldn't need \nany new WAL records.\n\n> I'd be happier if the DSM content could be\n> treated as just a hint. I think that means not trusting it for whether\n> a page is frozen to the extent of not needing vacuum even for\n> wraparound. So I'm inclined to propose that there be only two states\n> (hence only one DSM bit per page): page needs vacuum for space recovery,\n> or not. Vacuum for XID wraparound would have to hit every page\n> regardless.\n\nIf we don't have a frozen state, we can't use the DSM to implement \nindex-only scans. Index-only scans will obviously require a lot more \nwork than just the DSM, but I'd like to have a solution that enables it \nin the future.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 27 Feb 2007 09:51:09 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "On Tue, Feb 27, 2007 at 12:55:21AM -0500, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Yes, DSM would make FSM recovery more important, but I thought it was\n> > recoverable now? Or is that only on a clean shutdown?\n> \n> Currently we throw away FSM during any non-clean restart. This is\n> probably overkill but I'm quite unclear what would be a safe\n> alternative.\n\nMy thought would be to revert to a FSM that has pages marked as free\nthat no longer are. Could be done by writing the FSM out every time we\nadd pages to it. After an unclean restart backends would be getting\npages from the FSM that didn't have free space, in which case they'd\nneed to yank that page out of the FSM and request a new one. Granted,\nthis means extra IO until the FSM gets back to a realistic state, but I\nsuspect that's better than bloating tables out until the next vacuum.\nAnd it's ultimately less IO than re-vacuuming every table to rebuild the\nFSM.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 27 Feb 2007 10:58:26 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "On Tue, Feb 27, 2007 at 05:38:39PM +0900, ITAGAKI Takahiro wrote:\n> \"Jim C. Nasby\" <[email protected]> wrote:\n> \n> > > If we do UPDATE a tuple, the original page containing the tuple is marked\n> > > as HIGH and the new page where the updated tuple is placed is marked as LOW.\n> > \n> > Don't you mean UNFROZEN?\n> \n> No, the new tuples are marked as LOW. I intend to use UNFROZEN and FROZEN\n> pages as \"all tuples in the pages are visible to all transactions\" for\n> index-only-scan in the future.\n \nAhh, ok. Makes sense, though I tend to agree with others that it's\nbetter to leave that off for now, or at least do the initial patch\nwithout it.\n \n> > What makes it more important to mark the original page as HIGH instead\n> > of LOW, like the page with the new tuple? The description of the states\n> > indicates that there would likely be a lot more dead tuples in a HIGH\n> > page than in a LOW page.\n> > \n> > Perhaps it would be better to have the bgwriter take a look at how many\n> > dead tuples (or how much space the dead tuples account for) when it\n> > writes a page out and adjust the DSM at that time.\n> \n> Yeah, I feel it is worth optimizable, too. One question is, how we treat\n> dirty pages written by backends not by bgwriter? If we want to add some\n> works in bgwriter, do we also need to make bgwriter to write almost of\n> dirty pages?\n\nIMO yes, we want the bgwriter to be the only process that's normally\nwriting pages out. How close we are to that, I don't know...\n \n> > > * Agressive freezing\n> > > We will freeze tuples in dirty pages using OldestXmin but FreezeLimit.\n> > \n> > Do you mean using OldestXmin instead of FreezeLimit?\n> \n> Yes, we will use OldestXmin as the threshold to freeze tuples in\n> dirty pages or pages that have some dead tuples. Or, many UNFROZEN\n> pages still remain after vacuum and they will cost us in the next\n> vacuum preventing XID wraparound.\n\nAnother good idea. If it's not too invasive I'd love to see that as a\nstand-alone patch so that we know it can get in.\n \n> > > I'm thinking to change them into 2 new paramaters. We will allocates memory\n> > > for DSM that can hold all of estimated_database_size, and for FSM 50% or\n> > > something of the size. Is this reasonable?\n> > \n> > I don't think so, at least not until we get data from the field about\n> > what's typical. If the DSM is tracking every page in the cluster then\n> > I'd expect the FSM to be closer to 10% or 20% of that, anyway.\n> \n> I'd like to add some kind of logical flavors to max_fsm_pages\n> and max_dsm_pages. For DSM, max_dsm_pages should represent the\n> whole database size. In the other hand, what meaning does\n> max_fsm_pages have? (estimated_updatable_size ?)\n\nAt some point it might make sense to convert the FSM into a bitmap; that\nway everything just scales with database size.\n\nIn the meantime, I'm not sure if it makes sense to tie the FSM size to\nthe DSM size, since each FSM page requires 48x the storage of a DSM\npage. I think there's also a lot of cases where FSM size will not scale\nthe same was DSM size will, such as when there's historical data in the\ndatabase.\n\nThat raises another question... what happens when we run out of DSM\nspace?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 27 Feb 2007 11:06:13 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Tom Lane wrote:\n>> I'd be happier if the DSM content could be\n>> treated as just a hint.\n\n> If we don't have a frozen state, we can't use the DSM to implement \n> index-only scans.\n\nTo implement index-only scans, the DSM would have to be expected to\nprovide 100% reliable coverage, which will increase its cost and\ncomplexity by orders of magnitude. If you insist on that, I will bet\nyou lunch at a fine restaurant that it doesn't make it into 8.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Feb 2007 00:34:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2 "
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> wrote:\n\n> > I'd like to add some kind of logical flavors to max_fsm_pages\n> > and max_dsm_pages.\n> \n> In the meantime, I'm not sure if it makes sense to tie the FSM size to\n> the DSM size, since each FSM page requires 48x the storage of a DSM\n> page. I think there's also a lot of cases where FSM size will not scale\n> the same was DSM size will, such as when there's historical data in the\n> database.\n\nI see. We need separate variables for FSM and DSM.\n\nHere is a new proposal for replacements of variables at Free Space Map\nsection in postgresql.conf. Are these changes acceptable? If ok, I'd\nlike to rewrite codes using them.\n\n\n# - Space Management -\n\nmanaged_relations = 1000 # min 100, ~120 bytes each\nmanaged_freespaces = 2GB # 6 bytes of shared memory per 8KB\nmanaged_deadspaces = 8GB # 4KB of shared memory per 32MB\n\nmanaged_relations:\n Replacement of max_fsm_relations. It is also used by DSM.\n\nmanaged_freespaces:\n Replacement of max_fsm_pages. The meaning is not changed,\n but can be set in bytes.\n\nmanaged_deadspaces:\n A new parameter for DSM. It might be better to be scaled\n with whole database size.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Feb 2007 15:04:09 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> wrote:\n\n> At some point it might make sense to convert the FSM into a bitmap; that\n> way everything just scales with database size.\n\n> In the meantime, I'm not sure if it makes sense to tie the FSM size to\n> the DSM size, since each FSM page requires 48x the storage of a DSM\n> page. I think there's also a lot of cases where FSM size will not scale\n> the same was DSM size will, such as when there's historical data in the\n> database.\n\nBitmapped FSM is interesting. Maybe strict accuracy is not needed for FSM.\nIf we change FSM to use 2 bits/page bitmaps, it requires only 1/48 shared\nmemory by now. However, 6 bytes/page is small enough for normal use. We need\nto reconsider it if we would go into TB class heavily updated databases.\n\n\n> That raises another question... what happens when we run out of DSM\n> space?\n\nFirst, discard completely clean memory chunks in DSM. 'Clean' means all of\nthe tuples managed by the chunk are frozen. This is a lossless transition.\n\nSecond, discard tracked tables and its chunks that is least recently\nvacuumed. We can assume those tables have many dead tuples and almost\nfullscan will be required. We don't bother to keep tracking to such tables.\n\nMany optimizations should still remain at this point, but I'll make\na not-so-complex suggestions in the meantime.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Feb 2007 16:10:09 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> Tom Lane wrote:\n>>> I'd be happier if the DSM content could be\n>>> treated as just a hint.\n> \n>> If we don't have a frozen state, we can't use the DSM to implement \n>> index-only scans.\n> \n> To implement index-only scans, the DSM would have to be expected to\n> provide 100% reliable coverage, which will increase its cost and\n> complexity by orders of magnitude. If you insist on that, I will bet\n> you lunch at a fine restaurant that it doesn't make it into 8.3.\n\n:)\n\nWhile I understand that 100% reliable coverage is a significantly \nstronger guarantee, I don't see any particular problems in implementing \nthat. WAL logging isn't that hard.\n\nI won't insist, I'm not the one doing the programming after all. \nAnything is better than what we have now. However, I do hope that \nwhatever is implemented doesn't need a complete rewrite to make it 100% \nreliable in the future.\n\nThe basic wish I have is to not use a fixed size shared memory area like \nFSM for the DSM. I'd like it to use the shared buffers instead, which \nmakes the memory management and tuning easier. And it also makes it \neasier to get the WAL logging right, even if it's not done for 8.3 but \nadded later.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 28 Feb 2007 09:51:46 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "Hello, long time no see.\n\nThis topic looks interesting. I'm enrious of Itagaki-san and others.\nI can't do now what I want, due to other work that I don't want to do\n(isn't my boss seeing this?). I wish I could join the community some\nday and contribute to the development like the great experts here.\n# I can't wait to try Itagakis-san's latest patch for load distributed\ncheckpoint in my environment and report the result.\n# But I may not have enough time...\n\nLet me give some comment below.\n\n\nFrom: \"Heikki Linnakangas\" <[email protected]>\n> While I understand that 100% reliable coverage is a significantly\n> stronger guarantee, I don't see any particular problems in\nimplementing\n> that. WAL logging isn't that hard.\n>\n> I won't insist, I'm not the one doing the programming after all.\n> Anything is better than what we have now. However, I do hope that\n> whatever is implemented doesn't need a complete rewrite to make it\n100%\n> reliable in the future.\n>\n> The basic wish I have is to not use a fixed size shared memory area\nlike\n> FSM for the DSM. I'd like it to use the shared buffers instead,\nwhich\n> makes the memory management and tuning easier. And it also makes it\n> easier to get the WAL logging right, even if it's not done for 8.3\nbut\n> added later.\n>\n\nI hope for the same thing as Heikki-san. Though I'm relatively new to\nPostgreSQL source code, I don't think it is very difficult (at least\nfor experts here) to implement the reliable space management scheme,\nso I proposed the following before -- not separate memory area for\nFSM, but treating it the same way as data files in the shared buffers.\nThough Tom-san is worrying about performance, what makes the\nperformance degrade greatly? Additional WAL records for updating\nspace management structures are written sequentially in batch.\nAdditional dirty shared buffers are written efficiently by kernel (at\nleast now.) And PostgreSQL is released from the giant lwlock for FSM.\nSome performance degradation would surely result. However,\nreliability is more important because \"vacuum\" is almost the greatest\nconcern for real serious users (not for hobbists who enjoy\nperformance.) Can anybody say to users \"we are working hard, but our\nwork may not be reliable and sometimes fails. Can you see if our\nvacuuming effort failed and try this...?\"\n\nAnd I'm afraid that increasing the number of configuration parameters\nis unacceptable for users. It is merely the excuse of developers.\nPostgreSQL already has more than 100 parameters. Some of them, such\nas bgwriter_*, are difficult for normal users to understand. It's\nbest to use shared_buffers parameter and show how to set it in the\ndocument.\nAddressing the vacuum problem correctly is very important. I hope you\ndon't introduce new parameters for unfinished work and force users to\ncheck the manual to change the parameters in later versions, i.e.\n\"managed_* parameters are not supported from this release. Please use\nshared_buffers...\" Is it a \"must\" to release 8.3 by this summer? I\nthink that delaying the release a bit for correct (reliable) vacuum\nresolution is worth.\n\n\nFrom: \"Takayuki Tsunakawa\" <[email protected]>\n> Yes! I'm completely in favor of Itagaki-san. 
Separating the cache\nfor\n> FSM may produce a new configuration parameter like fsm_cache_size,\n> which the normal users would not desire (unless they like enjoying\n> difficult DBMS.)\n> I think that integrating the treatment of space management structure\n> and data area is good. That means, for example, implementing \"Free\n> Space Table\" described in section 14.2.2.1 of Jim Gray's book\n> \"Transaction Processing: Concepts and Techniques\", though it may\nhave\n> been discussed in PostgreSQL community far long ago (really?). Of\n> course, some refinements may be necessary to tune to PostgreSQL's\n> concept, say, creating one free space table file for each data file\nto\n> make the implementation easy. It would reduce the source code\nsolely\n> for FSM.\n>\n> In addition, it would provide the transactional space management.\nIf\n> I understand correctly, in the current implementation, updates to\nFSM\n> are lost when the server crashes, aren't they? The idea assumes\nthat\n> FSM will be rebuilt by vacuum because vacuum is inevitable. If\n> updates to space management area were made transactional, it might\n> provide the infrastructure for \"vacuumless PostgreSQL.\"\n\n\n",
"msg_date": "Thu, 1 Mar 2007 09:45:53 +0900",
"msg_from": "\"Takayuki Tsunakawa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
{
"msg_contents": "On Wed, Feb 28, 2007 at 04:10:09PM +0900, ITAGAKI Takahiro wrote:\n> \"Jim C. Nasby\" <[email protected]> wrote:\n> \n> > At some point it might make sense to convert the FSM into a bitmap; that\n> > way everything just scales with database size.\n> \n> > In the meantime, I'm not sure if it makes sense to tie the FSM size to\n> > the DSM size, since each FSM page requires 48x the storage of a DSM\n> > page. I think there's also a lot of cases where FSM size will not scale\n> > the same was DSM size will, such as when there's historical data in the\n> > database.\n> \n> Bitmapped FSM is interesting. Maybe strict accuracy is not needed for FSM.\n> If we change FSM to use 2 bits/page bitmaps, it requires only 1/48 shared\n> memory by now. However, 6 bytes/page is small enough for normal use. We need\n> to reconsider it if we would go into TB class heavily updated databases.\n> \n> \n> > That raises another question... what happens when we run out of DSM\n> > space?\n> \n> First, discard completely clean memory chunks in DSM. 'Clean' means all of\n> the tuples managed by the chunk are frozen. This is a lossless transition.\n> \n> Second, discard tracked tables and its chunks that is least recently\n> vacuumed. We can assume those tables have many dead tuples and almost\n> fullscan will be required. We don't bother to keep tracking to such tables.\n> \n> Many optimizations should still remain at this point, but I'll make\n> a not-so-complex suggestions in the meantime.\n\nActually, I have to agree with Heikki and Takayuki-san... I really like\nthe idea of managing DSM (and FSM for that matter) using shared_buffers.\nIf we do that, that means that we could probably back them to disk very\neasily.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 1 Mar 2007 09:45:00 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dead Space Map version 2"
},
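[Editor's sketch] To put rough numbers on the bitmap idea discussed in the message above, here is a small standalone C comparison of the two representations. The 8 kB page size and the 6-bytes-per-page FSM figure come from the thread; the 100 GB database size is just an arbitrary example, and none of this is PostgreSQL source code.

#include <stdint.h>
#include <stdio.h>

#define HEAP_PAGE_SIZE       8192u   /* 8 kB heap pages */
#define FSM_BYTES_PER_PAGE   6u      /* per-page cost quoted in the thread */
#define BITMAP_BITS_PER_PAGE 2u      /* proposed packed bitmap */

int
main(void)
{
    uint64_t    db_bytes = 100ull * 1024 * 1024 * 1024;    /* 100 GB example */
    uint64_t    npages = db_bytes / HEAP_PAGE_SIZE;
    uint64_t    fsm_bytes = npages * FSM_BYTES_PER_PAGE;
    uint64_t    bitmap_bytes = (npages * BITMAP_BITS_PER_PAGE + 7) / 8;

    printf("pages tracked:       %llu\n", (unsigned long long) npages);
    printf("6 bytes/page FSM:    %llu kB\n",
           (unsigned long long) (fsm_bytes / 1024));
    printf("2 bits/page bitmap:  %llu kB\n",
           (unsigned long long) (bitmap_bytes / 1024));
    return 0;
}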
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> wrote:\n\n> > > Perhaps it would be better to have the bgwriter take a look at how many\n> > > dead tuples (or how much space the dead tuples account for) when it\n> > > writes a page out and adjust the DSM at that time.\n> > \n> > Yeah, I feel it is worth optimizable, too. One question is, how we treat\n> > dirty pages written by backends not by bgwriter? If we want to add some\n> > works in bgwriter, do we also need to make bgwriter to write almost of\n> > dirty pages?\n> \n> IMO yes, we want the bgwriter to be the only process that's normally\n> writing pages out. How close we are to that, I don't know...\n\nI'm working on making the bgwriter to write almost of dirty pages. This is\nthe proposal for it using automatic adjustment of bgwriter_lru_maxpages.\n\nThe bgwriter_lru_maxpages value will be adjusted to the equal number of calls\nof StrategyGetBuffer() per cycle with some safety margins (x2 at present).\nThe counter are incremented per call and reset to zero at StrategySyncStart().\n\n\nThis patch alone is not so useful except for hiding hardly tunable parameters\nfrom users. However, it would be a first step of allow bgwriters to do some\nworks before writing dirty buffers.\n\n- [DSM] Pick out pages worth vaccuming and register them into DSM.\n- [HOT] Do a per page vacuum for HOT updated tuples. (Is it worth doing?)\n- [TODO Item] Shrink expired COLD updated tuples to just their headers.\n- Set commit hint bits to reduce subsequent writes of blocks.\n http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php\n\n\nI tested the attached patch on pgbench -s5 (80MB) with shared_buffers=32MB.\nI got an expected result as below. Over 75% of buffers are written by\nbgwriter. In addition , automatic adjusted bgwriter_lru_maxpages values\nwere much higher than the default value (5). It shows that the most suitable\nvalues greatly depends on workloads.\n\n benchmark | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages\n------------+------------+-----------+-------------+-----------------------\n default | 300tps | 100% | 77.5% | 120 pages/cycle\n with sleep | 150tps | 50% | 98.6% | 70 pages/cycle\n\n\nI hope that this patch will be a first step of the intelligent bgwriter.\nComments welcome.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Mar 2007 13:10:03 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Automatic adjustment of bgwriter_lru_maxpages (was: Dead Space Map\n\tversion 2)"
},
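[Editor's sketch] A toy model of the adjustment rule described in the message above: count buffer allocations during each bgwriter cycle and set the next cycle's LRU quota to that count times a x2 safety margin. The function names and the hard ceiling are invented for illustration; the actual patch hooks StrategyGetBuffer() and StrategySyncStart() inside the server.

#include <stdio.h>

#define SAFETY_MARGIN   2           /* x2, as in the proposal */
#define ABS_MAXPAGES    1000        /* arbitrary hard ceiling for the sketch */

static int  buf_alloc_count = 0;    /* incremented per buffer allocation */
static int  lru_maxpages = 5;       /* starts at the old default */

/* Called wherever a backend allocates (recycles) a shared buffer. */
static void
count_buffer_alloc(void)
{
    buf_alloc_count++;
}

/* Called once per bgwriter cycle, before the LRU scan. */
static void
adjust_lru_maxpages(void)
{
    int     target = buf_alloc_count * SAFETY_MARGIN;

    if (target > ABS_MAXPAGES)
        target = ABS_MAXPAGES;
    lru_maxpages = target;
    buf_alloc_count = 0;            /* reset for the next cycle */
}

int
main(void)
{
    int     cycle_demand[] = {10, 60, 300, 5, 0};

    for (int i = 0; i < 5; i++)
    {
        for (int j = 0; j < cycle_demand[i]; j++)
            count_buffer_alloc();
        adjust_lru_maxpages();
        printf("cycle %d: %d allocations -> lru_maxpages = %d\n",
               i, cycle_demand[i], lru_maxpages);
    }
    return 0;
}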
{
"msg_contents": "Sorry, I had a mistake in the patch I sent.\nThis is a fixed version.\n\nI wrote:\n\n> I'm working on making the bgwriter to write almost of dirty pages. This is\n> the proposal for it using automatic adjustment of bgwriter_lru_maxpages.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center",
"msg_date": "Mon, 05 Mar 2007 17:22:16 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> wrote:\n\n> > * Aggressive freezing\n> > we will use OldestXmin as the threshold to freeze tuples in\n> > dirty pages or pages that have some dead tuples. Or, many UNFROZEN\n> > pages still remain after vacuum and they will cost us in the next\n> > vacuum preventing XID wraparound.\n> \n> Another good idea. If it's not too invasive I'd love to see that as a\n> stand-alone patch so that we know it can get in.\n\nThis is a stand-alone patch for aggressive freezing. I'll propose\nto use OldestXmin instead of FreezeLimit as the freeze threshold\nin the circumstances below:\n\n- The page is already dirty.\n- There are another tuple to be frozen in the same page.\n- There are another dead tuples in the same page.\n Freezing is delayed until the heap vacuum phase.\n\nAnyway we create new dirty buffers and/or write WAL then, so additional\nfreezing is almost free. Keeping the number of unfrozen tuples low,\nwe can reduce the cost of next XID wraparound vacuum and piggyback\nmultiple freezing operations in the same page.\n\n\nThe following test shows differences of the number of unfrozen tuples\nwith or without the patch. Formerly, recently inserted tuples are not\nfrozen immediately (1). Even if there are some dead tuples in the same\npage, unfrozen live tuples are not frozen (2). With patch, the number\nafter first vacuum was already low (3), because the pages including recently\ninserted tuples were dirty and not written yet, so aggressive freeze was\nperformed for it. Moreover, if there are dead tuples in a page, other live\ntuples in the same page are also frozen (4).\n\n\n# CREATE CAST (xid AS integer) WITHOUT FUNCTION AS IMPLICIT;\n\n[without patch]\n$ ./pgbench -i -s1 (including vacuum)\n# SELECT count(*) FROM accounts WHERE xmin > 2; => 100000 (1)\n# UPDATE accounts SET aid = aid WHERE aid % 20 = 0; => UPDATE 5000\n# SELECT count(*) FROM accounts WHERE xmin > 2; => 100000\n# VACUUM accounts;\n# SELECT count(*) FROM accounts WHERE xmin > 2; => 100000 (2)\n\n[with patch]\n$ ./pgbench -i -s1 (including vacuum)\n# SELECT count(*) FROM accounts WHERE xmin > 2; => 2135 (3)\n# UPDATE accounts SET aid = aid WHERE aid % 20 = 0; => UPDATE 5000\n# SELECT count(*) FROM accounts WHERE xmin > 2; => 7028\n# VACUUM accounts;\n# SELECT count(*) FROM accounts WHERE xmin > 2; => 0 (4)\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center",
"msg_date": "Mon, 05 Mar 2007 19:14:34 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Aggressive freezing in lazy-vacuum"
},
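[Editor's sketch] A standalone illustration of the threshold choice proposed above: freeze against the recent OldestXmin when the page is being dirtied anyway, otherwise keep the conservative FreezeLimit. The struct and names are simplified stand-ins, not the actual PostgreSQL data structures.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

typedef struct PageState
{
    bool    already_dirty;      /* buffer is already dirty */
    bool    other_tuple_frozen; /* another tuple here crosses FreezeLimit */
    bool    has_dead_tuples;    /* heap vacuum will rewrite this page anyway */
} PageState;

static TransactionId
freeze_threshold_for_page(const PageState *page,
                          TransactionId freeze_limit,
                          TransactionId oldest_xmin)
{
    if (page->already_dirty ||
        page->other_tuple_frozen ||
        page->has_dead_tuples)
        return oldest_xmin;     /* aggressive: extra freezing is nearly free */

    return freeze_limit;        /* default, conservative threshold */
}

int
main(void)
{
    PageState   clean_page = {false, false, false};
    PageState   dirty_page = {true, false, false};
    TransactionId freeze_limit = 1000;      /* xid minus vacuum_freeze_min_age */
    TransactionId oldest_xmin = 90000;      /* much more recent */

    printf("clean page threshold: %u\n", (unsigned)
           freeze_threshold_for_page(&clean_page, freeze_limit, oldest_xmin));
    printf("dirty page threshold: %u\n", (unsigned)
           freeze_threshold_for_page(&dirty_page, freeze_limit, oldest_xmin));
    return 0;
}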
{
"msg_contents": "ITAGAKI Takahiro <[email protected]> writes:\n> This is a stand-alone patch for aggressive freezing. I'll propose\n> to use OldestXmin instead of FreezeLimit as the freeze threshold\n> in the circumstances below:\n\nI think it's a really bad idea to freeze that aggressively under any\ncircumstances except being told to (ie, VACUUM FREEZE). When you\nfreeze, you lose history information that might be needed later --- for\nforensic purposes if nothing else. You need to show a fairly amazing\nperformance gain to justify that, and I don't think you can.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Mar 2007 11:02:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum "
},
{
"msg_contents": "Tom Lane wrote:\n> ITAGAKI Takahiro <[email protected]> writes:\n>> This is a stand-alone patch for aggressive freezing. I'll propose\n>> to use OldestXmin instead of FreezeLimit as the freeze threshold\n>> in the circumstances below:\n> \n> I think it's a really bad idea to freeze that aggressively under any\n> circumstances except being told to (ie, VACUUM FREEZE). When you\n> freeze, you lose history information that might be needed later --- for\n> forensic purposes if nothing else. You need to show a fairly amazing\n> performance gain to justify that, and I don't think you can.\n\nThere could be a GUC vacuum_freeze_limit, and the actual FreezeLimit \nwould be calculated as\nGetOldestXmin() - vacuum_freeze_limit\n\nThe default for vacuum_freeze_limit would be MaxTransactionId/2, just\nas it is now.\n\ngreetings, Florian Pflug\n",
"msg_date": "Mon, 05 Mar 2007 20:26:58 +0100",
"msg_from": "\"Florian G. Pflug\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum"
},
{
"msg_contents": "Florian G. Pflug wrote:\n> There could be a GUC vacuum_freeze_limit, and the actual FreezeLimit \n> would be calculated as\n> GetOldestXmin() - vacuum_freeze_limit\n\nWe already have that. It's called vacuum_freeze_min_age, and the default \nis 100 million transactions.\n\nIIRC we added it late in the 8.2 release cycle when we changed the clog \ntruncation point to depend on freeze limit.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Mon, 05 Mar 2007 19:30:00 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Aggressive freezing in lazy-vacuum"
},
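[Editor's sketch] A simplified model of how a freeze cutoff can be derived from vacuum_freeze_min_age, as discussed above: subtract the GUC from the oldest visible xmin and clamp away from the reserved XIDs. This only illustrates the idea; it is not a copy of the logic in vacuum.c.

#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

#define FirstNormalTransactionId   ((TransactionId) 3)

static TransactionId
compute_freeze_limit(TransactionId oldest_xmin, uint32_t freeze_min_age)
{
    TransactionId limit;

    if (oldest_xmin > freeze_min_age)
        limit = oldest_xmin - freeze_min_age;
    else
        limit = FirstNormalTransactionId;   /* clamp near the start/wraparound */

    if (limit < FirstNormalTransactionId)
        limit = FirstNormalTransactionId;
    return limit;
}

int
main(void)
{
    /* default vacuum_freeze_min_age is 100 million transactions */
    printf("limit = %u\n", (unsigned) compute_freeze_limit(250000000u, 100000000u));
    printf("limit = %u\n", (unsigned) compute_freeze_limit(50000u, 100000000u));
    return 0;
}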
{
"msg_contents": "Heikki Linnakangas wrote:\n> Florian G. Pflug wrote:\n>> There could be a GUC vacuum_freeze_limit, and the actual FreezeLimit \n>> would be calculated as\n>> GetOldestXmin() - vacuum_freeze_limit\n> \n> We already have that. It's called vacuum_freeze_min_age, and the default \n> is 100 million transactions.\n> \n> IIRC we added it late in the 8.2 release cycle when we changed the clog \n> truncation point to depend on freeze limit.\n\nOk, that explains why I didn't find it when I checked the source - I\nchecked the 8.1 sources by accident ;-)\n\nAnyway, thanks for pointing that out ;-)\n\ngreetings, Florian Pflug\n",
"msg_date": "Mon, 05 Mar 2007 20:41:40 +0100",
"msg_from": "\"Florian G. Pflug\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Aggressive freezing in lazy-vacuum"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n\n> > This is a stand-alone patch for aggressive freezing. I'll propose\n> > to use OldestXmin instead of FreezeLimit as the freeze threshold\n> > in the circumstances below:\n> \n> I think it's a really bad idea to freeze that aggressively under any\n> circumstances except being told to (ie, VACUUM FREEZE). When you\n> freeze, you lose history information that might be needed later --- for\n> forensic purposes if nothing else.\n\nI don't think we can supply such a historical database functionality here,\nbecause we can guarantee it just only for INSERTed tuples even if we pay \nattention. We've already enabled autovacuum as default, so that we cannot\npredict when the next vacuum starts and recently UPDATEd and DELETEd tuples\nare removed at random times. Furthermore, HOT will also accelerate removing\nexpired tuples. Instead, we'd better to use WAL or something like audit\nlogs for keeping history information.\n\n\n> You need to show a fairly amazing\n> performance gain to justify that, and I don't think you can.\n\nThank you for your advice. I found that aggressive freezing for\nalready dirty pages made things worse, but for pages that contain\nother tuples being frozen or dead tuples was useful.\n\nI did an acceleration test for XID wraparound vacuum.\nI initialized the database with\n\n $ ./pgbench -i -s100\n # VACUUM FREEZE accounts;\n # SET vacuum_freeze_min_age = 6;\n\nand repeated the following queries.\n\n CHECKPOINT;\n UPDATE accounts SET aid=aid WHERE random() < 0.005;\n SELECT count(*) FROM accounts WHERE xmin > 2;\n VACUUM accounts;\n\nAfter the freeze threshold got at vacuum_freeze_min_age (run >= 3),\nthe VACUUM became faster with aggressive freezing. I think it came\nfrom piggybacking multiple freezing operations -- the number of\nunfrozen tuples were kept lower values.\n\n* Durations of VACUUM [sec]\nrun| HEAD | freeze\n---+--------+--------\n 1 | 5.8 | 8.2 \n 2 | 5.2 | 9.0 \n 3 | 118.2 | 102.0 \n 4 | 122.4 | 99.8 \n 5 | 121.0 | 79.8 \n 6 | 122.1 | 77.9 \n 7 | 123.8 | 115.5 \n---+--------+--------\navg| 121.5 | 95.0 \n3-7|\n\n* Numbers of unfrozen tuples\nrun| HEAD | freeze\n---+--------+--------\n 1 | 50081 | 50434 \n 2 | 99836 | 100072 \n 3 | 100047 | 86484 \n 4 | 100061 | 86524 \n 5 | 99766 | 87046 \n 6 | 99854 | 86824 \n 7 | 99502 | 86595 \n---+--------+--------\navg| 99846 | 86695\n3-7|\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 06 Mar 2007 19:03:03 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum "
},
{
"msg_contents": "\n\"ITAGAKI Takahiro\" <[email protected]> writes:\n\n> I don't think we can supply such a historical database functionality here,\n> because we can guarantee it just only for INSERTed tuples even if we pay \n> attention. We've already enabled autovacuum as default, so that we cannot\n> predict when the next vacuum starts and recently UPDATEd and DELETEd tuples\n> are removed at random times. Furthermore, HOT will also accelerate removing\n> expired tuples. Instead, we'd better to use WAL or something like audit\n> logs for keeping history information.\n\nWell comparing the data to WAL is precisely the kind of debugging that I think\nTom is concerned with.\n\nThe hoped for gain here is that vacuum finds fewer pages with tuples that\nexceed vacuum_freeze_min_age? That seems useful though vacuum is still going\nto have to read every page and I suspect most of the writes pertain to dead\ntuples, not freezing tuples.\n\nThis strikes me as something that will be more useful once we have the DSM\nespecially if it ends up including a frozen map. Once we have the DSM vacuum\nwill no longer be visiting every page, so it will be much easier for pages to\nget quite old and only be caught by a vacuum freeze. The less i/o that vacuum\nfreeze has to do the better. If we get a freeze map then agressive freezing\nwould help keep pages out of that map so they never need to be vacuumed just\nto freeze the tuples in them.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 06 Mar 2007 11:12:47 +0000",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum"
},
{
"msg_contents": "ITAGAKI Takahiro <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> I think it's a really bad idea to freeze that aggressively under any\n>> circumstances except being told to (ie, VACUUM FREEZE). When you\n>> freeze, you lose history information that might be needed later --- for\n>> forensic purposes if nothing else.\n\n> I don't think we can supply such a historical database functionality here,\n> because we can guarantee it just only for INSERTed tuples even if we pay \n> attention. We've already enabled autovacuum as default, so that we cannot\n> predict when the next vacuum starts and recently UPDATEd and DELETEd tuples\n> are removed at random times.\n\nI said nothing about expired tuples. The point of not freezing is to\npreserve information about the insertion time of live tuples. And your\ntest case is unconvincing, because no sane DBA would run with such a\nsmall value of vacuum_freeze_min_age.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Mar 2007 10:02:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum "
},
{
"msg_contents": "Gregory Stark <[email protected]> wrote:\n\n> The hoped for gain here is that vacuum finds fewer pages with tuples that\n> exceed vacuum_freeze_min_age? That seems useful though vacuum is still going\n> to have to read every page and I suspect most of the writes pertain to dead\n> tuples, not freezing tuples.\n\nYes. VACUUM makes dirty pages only for freezing exceeded tuples in\nparticular cases and I think we can reduce the writes by keeping the\nnumber of unfrozen tuples low.\n\nThere are three additional costs in FREEZE.\n 1. CPU cost for changing the xids of target tuples.\n 2. Writes cost for WAL entries of FREEZE (log_heap_freeze).\n 3. Writes cost for newly created dirty pages.\n\nI did additional freezing in the following two cases. We'll have created\ndirty buffers and WAL entries for required operations then, so that I think\nthe additional costs of 2 and 3 are ignorable, though 1 still affects us.\n\n| - There are another tuple to be frozen in the same page.\n| - There are another dead tuples in the same page.\n| Freezing is delayed until the heap vacuum phase.\n\n\n> This strikes me as something that will be more useful once we have the DSM\n> especially if it ends up including a frozen map. Once we have the DSM vacuum\n> will no longer be visiting every page, so it will be much easier for pages to\n> get quite old and only be caught by a vacuum freeze. The less i/o that vacuum\n> freeze has to do the better. If we get a freeze map then agressive freezing\n> would help keep pages out of that map so they never need to be vacuumed just\n> to freeze the tuples in them.\n\nYeah, I was planning to 2 bits/page DSM exactly for the purpose. One of the\nbits means to-be-vacuumed and another means to-be-frozen. It helps us avoid\nfull scanning of the pages for XID wraparound vacuums, but DSM should be more\nreliable and not lost any information. I made an attempt to accomplish it\nin DSM, but I understand the need to demonstrate it works as designed to you.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Mar 2007 10:56:04 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum"
},
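[Editor's sketch] A standalone model of the 2-bits-per-page dead space map described above: one bit marks a page as needing a normal vacuum, the other marks it as still holding unfrozen tuples, so an anti-wraparound vacuum could visit only the flagged pages instead of scanning the whole heap. All names are invented for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DSM_NEEDS_VACUUM  0x1u      /* page has dead tuples to reclaim */
#define DSM_HAS_UNFROZEN  0x2u      /* page still holds unfrozen live tuples */

typedef struct DeadSpaceMap
{
    uint32_t    npages;
    uint8_t    *bits;              /* 2 bits per heap page, 4 pages per byte */
} DeadSpaceMap;

static void
dsm_set(DeadSpaceMap *dsm, uint32_t page, unsigned flags)
{
    dsm->bits[page / 4] |= (flags & 0x3u) << ((page % 4) * 2);
}

static bool
dsm_test(const DeadSpaceMap *dsm, uint32_t page, unsigned flag)
{
    return ((dsm->bits[page / 4] >> ((page % 4) * 2)) & flag) != 0;
}

int
main(void)
{
    DeadSpaceMap    dsm;

    dsm.npages = 16;
    dsm.bits = calloc((dsm.npages + 3) / 4, 1);

    dsm_set(&dsm, 3, DSM_NEEDS_VACUUM);                     /* dead tuples only */
    dsm_set(&dsm, 7, DSM_NEEDS_VACUUM | DSM_HAS_UNFROZEN);  /* both */
    dsm_set(&dsm, 9, DSM_HAS_UNFROZEN);                     /* freeze work only */

    /* An anti-wraparound vacuum would only need to visit pages 7 and 9. */
    for (uint32_t p = 0; p < dsm.npages; p++)
        if (dsm_test(&dsm, p, DSM_HAS_UNFROZEN))
            printf("page %u needs an anti-wraparound visit\n", p);

    free(dsm.bits);
    return 0;
}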
{
"msg_contents": "\nTom Lane <[email protected]> wrote:\n\n> I said nothing about expired tuples. The point of not freezing is to\n> preserve information about the insertion time of live tuples.\n\nI don't know what good it will do -- for debugging?\nWhy don't you use CURRENT_TIMESTAMP?\n\n\n> And your\n> test case is unconvincing, because no sane DBA would run with such a\n> small value of vacuum_freeze_min_age.\n\nI intended to use the value for an accelerated test.\nThe penalties of freeze are divided for the long term in normal use,\nbut we surely suffer from them by bits.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Mar 2007 11:18:20 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum "
},
{
"msg_contents": "ITAGAKI Takahiro <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> I said nothing about expired tuples. The point of not freezing is to\n>> preserve information about the insertion time of live tuples.\n\n> I don't know what good it will do -- for debugging?\n\nExactly. As an example, I've been chasing offline a report from Merlin\nMoncure about duplicate entries in a unique index; I still don't know\nwhat exactly is going on there, but the availability of knowledge about\nwhich transactions inserted which entries has been really helpful. If\nwe had a system designed to freeze tuples as soon as possible, that info\nwould have been gone forever pretty soon after the problem happened.\n\nI don't say that this behavior can never be acceptable, but you need\nmuch more than a marginal performance improvement to convince me that\nit's worth the loss of forensic information.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Mar 2007 23:34:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggressive freezing in lazy-vacuum "
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches\n\nIt will be applied as soon as one of the PostgreSQL committers reviews\nand approves it.\n\n---------------------------------------------------------------------------\n\n\nITAGAKI Takahiro wrote:\n> Sorry, I had a mistake in the patch I sent.\n> This is a fixed version.\n> \n> I wrote:\n> \n> > I'm working on making the bgwriter to write almost of dirty pages. This is\n> > the proposal for it using automatic adjustment of bgwriter_lru_maxpages.\n> \n> Regards,\n> ---\n> ITAGAKI Takahiro\n> NTT Open Source Software Center\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Sat, 24 Mar 2007 21:44:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "ITAGAKI Takahiro wrote:\n> \"Jim C. Nasby\" <[email protected]> wrote:\n> \n>>>> Perhaps it would be better to have the bgwriter take a look at how many\n>>>> dead tuples (or how much space the dead tuples account for) when it\n>>>> writes a page out and adjust the DSM at that time.\n>>> Yeah, I feel it is worth optimizable, too. One question is, how we treat\n>>> dirty pages written by backends not by bgwriter? If we want to add some\n>>> works in bgwriter, do we also need to make bgwriter to write almost of\n>>> dirty pages?\n>> IMO yes, we want the bgwriter to be the only process that's normally\n>> writing pages out. How close we are to that, I don't know...\n> \n> I'm working on making the bgwriter to write almost of dirty pages. This is\n> the proposal for it using automatic adjustment of bgwriter_lru_maxpages.\n> \n> The bgwriter_lru_maxpages value will be adjusted to the equal number of calls\n> of StrategyGetBuffer() per cycle with some safety margins (x2 at present).\n> The counter are incremented per call and reset to zero at StrategySyncStart().\n> \n> \n> This patch alone is not so useful except for hiding hardly tunable parameters\n> from users. However, it would be a first step of allow bgwriters to do some\n> works before writing dirty buffers.\n> \n> - [DSM] Pick out pages worth vaccuming and register them into DSM.\n> - [HOT] Do a per page vacuum for HOT updated tuples. (Is it worth doing?)\n> - [TODO Item] Shrink expired COLD updated tuples to just their headers.\n> - Set commit hint bits to reduce subsequent writes of blocks.\n> http://archives.postgresql.org/pgsql-hackers/2007-01/msg01363.php\n> \n> \n> I tested the attached patch on pgbench -s5 (80MB) with shared_buffers=32MB.\n> I got an expected result as below. Over 75% of buffers are written by\n> bgwriter. In addition , automatic adjusted bgwriter_lru_maxpages values\n> were much higher than the default value (5). It shows that the most suitable\n> values greatly depends on workloads.\n> \n> benchmark | throughput | cpu-usage | by-bgwriter | bgwriter_lru_maxpages\n> ------------+------------+-----------+-------------+-----------------------\n> default | 300tps | 100% | 77.5% | 120 pages/cycle\n> with sleep | 150tps | 50% | 98.6% | 70 pages/cycle\n> \n> \n> I hope that this patch will be a first step of the intelligent bgwriter.\n> Comments welcome.\n\nThe general approach looks good to me. I'm queuing some benchmarks to \nsee how effective it is with a fairly constant workload.\n\nThis change in bgwriter.c looks fishy:\n\n*************** BackgroundWriterMain(void)\n*** 484,491 ****\n \t\t *\n \t\t * We absorb pending requests after each short sleep.\n \t\t */\n! \t\tif ((bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0) ||\n! \t\t\t(bgwriter_lru_percent > 0.0 && bgwriter_lru_maxpages > 0))\n \t\t\tudelay = BgWriterDelay * 1000L;\n \t\telse if (XLogArchiveTimeout > 0)\n \t\t\tudelay = 1000000L;\t/* One second */\n--- 484,490 ----\n \t\t *\n \t\t * We absorb pending requests after each short sleep.\n \t\t */\n! \t\tif (bgwriter_all_percent > 0.0 && bgwriter_all_maxpages > 0)\n \t\t\tudelay = BgWriterDelay * 1000L;\n \t\telse if (XLogArchiveTimeout > 0)\n \t\t\tudelay = 1000000L;\t/* One second */\n\nDoesn't that mean that bgwriter only runs every 1 or 10 seconds, \nregardless of bgwriter_delay, if bgwriter_all_* parameters are not set?\n\nThe algorithm used to update bgwriter_lru_maxpages needs some thought. 
\nCurrently, it's decreased by one when less clean pages were required by \nbackends than expected, and increased otherwise. Exponential smoothing \nor something similar seems like the natural choice to me.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 12 Apr 2007 11:57:17 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "Attached are two patches that try to recast the ideas of Itagaki \nTakahiro's auto bgwriter_lru_maxpages patch in the direction I think this \ncode needs to move. Epic-length commentary follows.\n\nThe original code came from before there was a pg_stat_bgwriter. The \nfirst patch (buf-alloc-stats) takes the two most interesting pieces of \ndata the original patch collected, the number of buffers allocated \nrecently and the number that the clients wrote out, and ties all that into \nthe new stats structure. With this patch applied, you can get a feel for \nthings like churn/turnover in the buffer pool that were very hard to \nquantify before. Also, it makes it easy to measure how well your \nbackground writer is doing at writing buffers so the clients don't have \nto. Applying this would complete one of my personal goals for the 8.3 \nrelease, which was having stats to track every type of buffer write.\n\nI split this out because I think it's very useful to have regardless of \nwhether the automatic tuning portion is accepted, and I think these \nsmaller patches make the review easier. The main thing I would recommend \nsomeone check is how am_bg_writer is (mis?)used here. I spliced some of \nthe debugging-only code from the original patch, and I can't tell if the \nresult is a robust enough approach to solving the problem of having every \nclient indirectly report their activity to the background writer. Other \nthan that, I think this code is ready for review and potentially \ncomitting.\n\nThe second patch (limit-lru) adds on top of that a constraint of the LRU \nwriter so that it doesn't do any more work than it has to. Note that I \nleft verbose debugging code in here because I'm much less confident this \npatch is complete.\n\nIt predicts upcoming buffer allocations using a 16-period weighted moving \naverage of recent activity, which you can think of as the last 3.2 seconds \nat the default interval. After testing a few systems that seemed a decent \ncompromise of smoothing in both directions. I found the 2X overallocation \nfudge factor of the original patch way too aggressive, and just pick the \nlarger of the most recent allocation amount or the smoothed value. The \nmain thing that throws off the allocation estimation is when you hit a \ncheckpoint, which can give a big spike after the background writer returns \nto BgBufferSync and notices all the buffers that were allocated during the \ncheckpoint write; the code then tries to find more buffers it can recycle \nthan it needs to. Since the checkpoint itself normally leaves a large \nwake of reusable buffers behind it, I didn't find this to be a serious \nproblem.\n\nThere's another communication issue here, which is that SyncOneBuffer \nneeds to return more information about the buffer than it currently does \nonce it gets it locked. The background writer needs to know more than \njust if it was written to tune itself. The original patch used a clever \ntrick for this which worked but I found confusing. I happen to have a \nbunch of other background writer tuning code I'm working on, and I had to \ncome up with a more robust way to communicate buffer internals back via \nthis channel. I used that code here, it's a bitmask setup similar to how \nflags like BM_DIRTY are used. It's overkill for solving this particular \nproblem, but I think the interface is clean and it helps support future \nenhancements in intelligent background writing.\n\nNow we get to the controversial part. 
The original patch removed the \nbgwriter_lru_maxpages parameter and updated the documentation accordingly. \nI didn't do that here. The reason is that after playing around in this \narea I'm not convinced yet I can satisfy all the tuning scenarios I'd like \nto be able to handle that way. I describe this patch as enforcing a \nconstraint instead; it allows you to set the LRU parameters much higher \nthan was reasonable before without having to be as concerned about the LRU \nwriter wasting resources.\n\nI already brought up some issues in this area on -hackers ( \nhttp://archives.postgresql.org/pgsql-hackers/2007-04/msg00781.php ) but my \nwork hasn't advanced as fast as I'd hoped. I wanted to submit what I've \nfinished anyway because I think any approach here is going to have cope \nwith the issues addressed in these two patches, and I'm happy now with how \nthey're solved here. It's only a one-line delete to disable the LRU \nlimiting behavior of the second patch, at which point it's strictly \ninternals code with no expected functional impact that alternate \napproaches might be built on.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD",
"msg_date": "Sat, 12 May 2007 21:32:29 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
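[Editor's sketch] A model of the allocation forecast described above: a roughly 16-period weighted moving average of buffers allocated per cycle, with the final estimate taken as the larger of the newest observation and the smoothed value. The exact weights used by the patch are not shown in the mail, so a simple 1/16-weight recurrence stands in for them here.

#include <stdio.h>

#define SMOOTHING_PERIODS   16      /* ~3.2 s at the 200 ms default delay */

static double   smoothed_alloc = 0.0;

static int
estimate_upcoming_allocs(int recent_alloc)
{
    smoothed_alloc += ((double) recent_alloc - smoothed_alloc)
                      / SMOOTHING_PERIODS;

    /* Never predict less than what just happened. */
    if ((double) recent_alloc > smoothed_alloc)
        return recent_alloc;
    return (int) smoothed_alloc;
}

int
main(void)
{
    int     per_cycle[] = {600, 900, 1200, 800, 0, 0, 700, 1100};

    for (int i = 0; i < 8; i++)
        printf("cycle %d: recent=%4d -> estimate=%4d\n",
               i, per_cycle[i], estimate_upcoming_allocs(per_cycle[i]));
    return 0;
}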
{
"msg_contents": "Greg Smith wrote:\n> The original code came from before there was a pg_stat_bgwriter. The \n> first patch (buf-alloc-stats) takes the two most interesting pieces of \n> data the original patch collected, the number of buffers allocated \n> recently and the number that the clients wrote out, and ties all that \n> into the new stats structure. With this patch applied, you can get a \n> feel for things like churn/turnover in the buffer pool that were very \n> hard to quantify before. Also, it makes it easy to measure how well \n> your background writer is doing at writing buffers so the clients don't \n> have to. Applying this would complete one of my personal goals for the \n> 8.3 release, which was having stats to track every type of buffer write.\n> \n> I split this out because I think it's very useful to have regardless of \n> whether the automatic tuning portion is accepted, and I think these \n> smaller patches make the review easier. The main thing I would \n> recommend someone check is how am_bg_writer is (mis?)used here. I \n> spliced some of the debugging-only code from the original patch, and I \n> can't tell if the result is a robust enough approach to solving the \n> problem of having every client indirectly report their activity to the \n> background writer. Other than that, I think this code is ready for \n> review and potentially comitting.\n\nThis looks good to me in principle. StrategyReportWrite increments \nnumClientWrites without holding the BufFreeListLock, that's a race \ncondition. The terminology needs some adjustment; clients don't write \nbuffers, backends do.\n\nSplitting the patch to two is a good idea.\n\n> The second patch (limit-lru) adds on top of that a constraint of the LRU \n> writer so that it doesn't do any more work than it has to. Note that I \n> left verbose debugging code in here because I'm much less confident this \n> patch is complete.\n> \n> It predicts upcoming buffer allocations using a 16-period weighted \n> moving average of recent activity, which you can think of as the last \n> 3.2 seconds at the default interval. After testing a few systems that \n> seemed a decent compromise of smoothing in both directions. I found the \n> 2X overallocation fudge factor of the original patch way too aggressive, \n> and just pick the larger of the most recent allocation amount or the \n> smoothed value. The main thing that throws off the allocation \n> estimation is when you hit a checkpoint, which can give a big spike \n> after the background writer returns to BgBufferSync and notices all the \n> buffers that were allocated during the checkpoint write; the code then \n> tries to find more buffers it can recycle than it needs to. Since the \n> checkpoint itself normally leaves a large wake of reusable buffers \n> behind it, I didn't find this to be a serious problem.\n\nCan you tell more about the tests you performed? That algorithm seems \ndecent, but I wonder why the simple fudge factor wasn't good enough? I \nwould've thought that a 2x or even bigger fudge factor would still be \nonly a tiny fraction of shared_buffers, and wouldn't really affect \nperformance.\n\nThe load distributed checkpoint patch should mitigate the checkpoint \nspike problem by continuing the LRU scan throughout the checkpoint.\n\n> There's another communication issue here, which is that SyncOneBuffer \n> needs to return more information about the buffer than it currently does \n> once it gets it locked. 
The background writer needs to know more than \n> just if it was written to tune itself. The original patch used a clever \n> trick for this which worked but I found confusing. I happen to have a \n> bunch of other background writer tuning code I'm working on, and I had \n> to come up with a more robust way to communicate buffer internals back \n> via this channel. I used that code here, it's a bitmask setup similar \n> to how flags like BM_DIRTY are used. It's overkill for solving this \n> particular problem, but I think the interface is clean and it helps \n> support future enhancements in intelligent background writing.\n\nUh, that looks pretty ugly to me. The normal way to return multiple \nvalues is to pass a pointer as an argument, though that can get ugly as \nwell if there's a lot of return values. What combinations of the flags \nare valid? Would an enum be better? Or how about moving the checks for \ndirty and pinned buffers from SyncOneBuffer to the callers?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sun, 13 May 2007 17:27:20 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "On Sun, 13 May 2007, Heikki Linnakangas wrote:\n\n> StrategyReportWrite increments numClientWrites without holding the \n> BufFreeListLock, that's a race condition. The terminology needs some \n> adjustment; clients don't write buffers, backends do.\n\nThat was another piece of debugging code I moved into the main path \nwithout thinking too hard about it, good catch. I have a \ndocumentation/naming patch I've started on that revises a lot of the \npg_stat_bgwriter names to be more consistant and easier to understand (as \nwell as re-ordering the view); the underlying code is still fluid enough \nthat I was trying to nail that down first.\n\n> That algorithm seems decent, but I wonder why the simple fudge factor \n> wasn't good enough? I would've thought that a 2x or even bigger fudge \n> factor would still be only a tiny fraction of shared_buffers, and \n> wouldn't really affect performance.\n\nI like the way the smoothing evens out the I/O rates. I saw occasional \nspots where the buffer allocations drop to 0 for a few intervals while \nother stuff is going on everybody is waiting for, and I didn't want all \nLRU cleanup come to halt just because there's a fraction of a second where \nnothing happened in the middle of a very busy period.\n\nAs for why not overestimate, if you get into a situation where the buffer \ncache is very dirty with much of the data being recently used (I normally \nsee this with bulk UPDATEs on indexed tables), you can end up scanning \nmany buffers for each one you find that can be written out. In this kind \nof situation, deciding that you actually need to write out twice as many \njust because you don't trust your estimate is very inefficient.\n\nI was able to simulate most of the bad behavior I look for with the \npgbench schema using \"update accounts set abalance=abalance+1;\". To throw \nsome sample numbers out, on my test server I was just doing final work on \nlast night, I was seeing peaks of about 600-1200 buffers allocated per \n200ms interval doing that simple UPDATE with shared_buffers=32768.\n\nLet's call it 2% of the pool. If 50% of the pool is either dirty or can't \nbe reused yet, that means I'll average having to scan 2%/50%=4% of the \npool to find enough buffers to reuse per interval. I wouldn't describe \nthat as a tiny fraction, and doubling it is not an insignificant load \nincrease. I'd like to be able to increase the LRU percentage scanned \nwithout being concerned that I'm wasting resources because of this \nsituation.\n\nThe fact that this problem exists is what got me digging into the \nbackground writer code in the first place, because it's way worse on my \nproduction server than this example suggests. The buffer cache is bigger, \nbut the ability of the server to dirty it under heavy load is far better. \nReturning to the theme discussed in the -hackers thread I referenced: \nyou can't try to make the background writer LRU do all the writes without \nexposing yourself to issues like this, because it doesn't touch the usage \ncounts. Therefore it's vulnerable to breakdowns if your buffer pool \nshifts toward dirty and non-reusable.\n\nHaving the background writer run amok when reusable buffers are rare can \nreally pull down the performance of the other backends (as well as delay \ncheckpoints), both in terms of CPU usage and locking issues. 
I don't feel \nit's a good idea to try and push it too hard unless some of these \nunderlying issues are fixed first; I'd rather err on the side of letting \nit do less rather than more than it has to.\n\n> The normal way to return multiple values is to pass a pointer as an \n> argument, though that can get ugly as well if there's a lot of return \n> values.\n\nI'm open to better suggestions, but after tinkering with this interface \nfor over a month now--including pointers and enums--this is the first \nimplementation I was happy with.\n\nThere are four things I eventually need returned here, to support the \nfully automatic BGW tuning. My 1st implementation passed in pointers, and \nin addition to being ugly I found consistantly checking for null pointers \nand data consistancy a drag, both from the coding and the overhead \nperspective.\n\n> What combinations of the flags are valid? Would an enum be better?\n\nAnd my 2nd generation code used an enum. There are five possible return \ncode states:\n\nCLEAN + REUSABLE + !WRITTEN\nCLEAN + !REUSABLE + !WRITTEN\n!CLEAN + !REUSABLE + WRITTEN (all-scan only)\n!CLEAN + !REUSABLE + !WRITTEN (rejected by skip)\n!CLEAN + REUSABLE + WRITTEN\n\n!CLEAN + REUSABLE + !WRITTEN isn't possible (all paths will write dirty \nreusable buffers)\n\nI found the enum-based code more confusing, both reading it and making \nsure it was correct when writing it, than the current form. Right now I \nhave lines like:\n\n if (buffer_state & BUF_REUSABLE)\n\nWith an enum this has to be something like\n\n if (buffer_state == BUF_CLEAN_REUSABLE || buffer_state == \nBUF_REUSABLE_WRITTEN)\n\nAnd that was a pain all around; I kept having to stare at the table above \nto make sure the code was correct. Also, in order to pass back full \nusage_count information I was back to either pointers or bitshifting \nanyway. While this particular patch doesn't need the usage count, the \nlater ones I'm working on do, and I'd like to get this interface complete \nwhile it's being tinkered with anyway.\n\n> Or how about moving the checks for dirty and pinned buffers from \n> SyncOneBuffer to the callers?\n\nThere are 3 callers to SyncOneBuffer, and almost all the code is shared \nbetween them. Trying to push even just the dirty/pinned stuff back into \nthe callers would end up being a cut and paste job that would duplicate \nmany lines. That's on top of the fact that the buffer is cleanly \nlocked/unlocked all in one section of code right now, and I didn't see how \nto move any parts of that to the callers without disrupting that clean \ninterface.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sun, 13 May 2007 14:54:14 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
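[Editor's sketch] A standalone illustration of the flag-bits return style discussed above for SyncOneBuffer(): independent bits for "was clean", "is reusable" and "was written", tested with bitwise AND rather than enumerating every valid combination. Flag names and values here are placeholders, not necessarily those used in the patch.

#include <stdio.h>

#define BUF_CLEAN       0x1     /* buffer needed no write */
#define BUF_REUSABLE    0x2     /* usage_count 0 and unpinned */
#define BUF_WRITTEN     0x4     /* buffer was flushed out */

static void
report(int buffer_state)
{
    /* Each property can be tested on its own. */
    if (buffer_state & BUF_REUSABLE)
        printf("  counts toward the LRU cleaning target\n");
    if (buffer_state & BUF_WRITTEN)
        printf("  counts against bgwriter_lru_maxpages\n");
    if (buffer_state & BUF_CLEAN)
        printf("  no I/O was needed\n");
}

int
main(void)
{
    /* Two of the combinations listed in the mail above. */
    int clean_reusable = BUF_CLEAN | BUF_REUSABLE;
    int dirty_reusable_written = BUF_REUSABLE | BUF_WRITTEN;

    printf("clean, reusable, not written:\n");
    report(clean_reusable);
    printf("dirty, reusable, written:\n");
    report(dirty_reusable_written);
    return 0;
}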
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n\n> The first patch (buf-alloc-stats) takes the two most interesting pieces of \n> data the original patch collected, the number of buffers allocated \n> recently and the number that the clients wrote out, and ties all that into \n> the new stats structure.\n\n> The second patch (limit-lru) adds on top of that a constraint of the LRU \n> writer so that it doesn't do any more work than it has to.\n\nBoth patches look good. \n\n> Now we get to the controversial part. The original patch removed the \n> bgwriter_lru_maxpages parameter and updated the documentation accordingly. \n> I didn't do that here. The reason is that after playing around in this \n> area I'm not convinced yet I can satisfy all the tuning scenarios I'd like \n> to be able to handle that way. I describe this patch as enforcing a \n> constraint instead; it allows you to set the LRU parameters much higher \n> than was reasonable before without having to be as concerned about the LRU \n> writer wasting resources.\n\nI'm agreeable to the limiters of resource usage by bgwriter.\nBTW, your patch will cut LRU writes short, but will not encourage to\ndo more works. So should set more aggressive values to bgwriter_lru_percent\nand bgwriter_lru_maxpages as defaults? My original motivation was to enlarge\nbgwriter_lru_maxpages automatically; the default bgwriter_lru_maxpages (=5)\nseemed to be too small.\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 14 May 2007 18:04:58 +0900",
"msg_from": "ITAGAKI Takahiro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "On Mon, 14 May 2007, ITAGAKI Takahiro wrote:\n\n> BTW, your patch will cut LRU writes short, but will not encourage to\n> do more works. So should set more aggressive values to bgwriter_lru_percent\n> and bgwriter_lru_maxpages as defaults?\n\nSetting a bigger default maximum is one possibility I was thinking about. \nSince the whole background writer setup is kind of complicated, the other \nthing I was working on is writing a guide on how to use the new \npg_stat_bgwriter information to figure out if you need to increase \nbgwriter_[all|lru]_pages (and the other parameters too). It makes it much \neasier to write that if you can say \"You can safely set \nbgwriter_lru_maxpages high because it only writes what it needs to based \non your usage\".\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 14 May 2007 09:34:21 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "Greg Smith wrote:\n> On Mon, 14 May 2007, ITAGAKI Takahiro wrote:\n> \n>> BTW, your patch will cut LRU writes short, but will not encourage to\n>> do more works. So should set more aggressive values to \n>> bgwriter_lru_percent\n>> and bgwriter_lru_maxpages as defaults?\n> \n> Setting a bigger default maximum is one possibility I was thinking \n> about. Since the whole background writer setup is kind of complicated, \n> the other thing I was working on is writing a guide on how to use the \n> new pg_stat_bgwriter information to figure out if you need to increase \n> bgwriter_[all|lru]_pages (and the other parameters too). It makes it \n> much easier to write that if you can say \"You can safely set \n> bgwriter_lru_maxpages high because it only writes what it needs to based \n> on your usage\".\n\nIf it's safe to set it high, let's default it to infinity.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Mon, 14 May 2007 14:41:25 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> Since the whole background writer setup is kind of complicated, the other \n> thing I was working on is writing a guide on how to use the new \n> pg_stat_bgwriter information to figure out if you need to increase \n> bgwriter_[all|lru]_pages (and the other parameters too). It makes it much \n> easier to write that if you can say \"You can safely set \n> bgwriter_lru_maxpages high because it only writes what it needs to based \n> on your usage\".\n\nIf you can write something like that, why do we need the parameter at all?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 May 2007 09:52:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages "
},
{
"msg_contents": "On Mon, 14 May 2007, Heikki Linnakangas wrote:\n\n> If it's safe to set it high, let's default it to infinity.\n\nThe maximum right now is 1000, and that would be a reasonable new default. \nYou really don't to write more than 1000 per interval anyway without \ntaking a break for checkpoints; the more writes you do at once, the higher \nthe chances are you'll have the whole thing stall because the OS makes you \nwait for a write (this is not a theoretical comment; I've watched it \nhappen when I try to get the BGW doing too much).\n\nIf someone has so much activity that they're allocating more than that \nduring a period, they should shrink the delay instead. The kinds of \nsystems where 1000 isn't high enough for bgwriter_lru_maxpages are going \nto be compelled to adjust these parameters anyway for good performance.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 14 May 2007 09:57:28 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "On Mon, 14 May 2007, Tom Lane wrote:\n\n> If you can write something like that, why do we need the parameter at all?\n\nCouple of reasons:\n\n-As I already mentioned in my last message, I think it's unwise to let the \nLRU writes go completely unbounded. I still think there should be a \nmaximum, and if there is one it should be tunable. You can get into \nsituations where the only way to get the LRU writer to work at all is to \nset the % to scan fairly high, but that exposes you to way more writes \nthan you might want per interval in situations where buffers to write are \neasy to find.\n\n-There is considerable coupling between how the LRU and the all background \nwriters work. There are workloads where the LRU writer is relatively \nineffective, and only the all one really works well. If there is a \nlimiter on the writes from the all writer, but not on the LRU, admins may \nnot be able to get the balance between the two they want. I know I \nwouldn't.\n\n-Just because I can advise what is generally the right move, that doesn't \nmean it's always the right one. Someone may notice that the maximum pages \nwritten limit is being nailed and not care.\n\nThe last system I really got deep into the background writer mechanics on, \nit could be very effective at improving performance and reducing \ncheckpoint spikes under low to medium loads. But under heavy load, it \njust got in the way of the individual backends running, which was \nabsolutely necessary in order to execute the LRU mechanics (usage_count--) \nso less important buffers could be kicked out. I would like people to \nstill be able to set a tuning such that the background writers were useful \nunder average loads, but didn't ever try to do too much. It's much more \ndifficult to do that if bgwriter_lru_maxpages goes away.\n\nI realized recently the task I should take on here is to run some more \nexperiments with the latest code and pass along suggested techniques for \nproducing/identifying the kind of problem conditions I've run into in the \npast; then we can see if other people can reproduce them. I got a new \n8-core server I need to thrash anyway and will try and do just that \nstarting tomorrow.\n\nFor all I know my concerns are strictly a rare edge case. But since the \nfinal adjustments to things like whether there is an upper limit or not \nare very small patches compared to what's already been done here, I sent \nin what I thought was ready to go because I didn't want to hold up \nreviewing the bulk of the code over some of these fine details.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 14 May 2007 23:19:23 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages "
},
{
"msg_contents": "On Mon, May 14, 2007 at 11:19:23PM -0400, Greg Smith wrote:\n> On Mon, 14 May 2007, Tom Lane wrote:\n> \n> >If you can write something like that, why do we need the parameter at all?\n> \n> Couple of reasons:\n> \n> -As I already mentioned in my last message, I think it's unwise to let the \n> LRU writes go completely unbounded. I still think there should be a \n> maximum, and if there is one it should be tunable. You can get into \n> situations where the only way to get the LRU writer to work at all is to \n> set the % to scan fairly high, but that exposes you to way more writes \n> than you might want per interval in situations where buffers to write are \n> easy to find.\n> \n> -There is considerable coupling between how the LRU and the all background \n> writers work. There are workloads where the LRU writer is relatively \n> ineffective, and only the all one really works well. If there is a \n> limiter on the writes from the all writer, but not on the LRU, admins may \n> not be able to get the balance between the two they want. I know I \n> wouldn't.\n> \n> -Just because I can advise what is generally the right move, that doesn't \n> mean it's always the right one. Someone may notice that the maximum pages \n> written limit is being nailed and not care.\n> \n> The last system I really got deep into the background writer mechanics on, \n> it could be very effective at improving performance and reducing \n> checkpoint spikes under low to medium loads. But under heavy load, it \n> just got in the way of the individual backends running, which was \n> absolutely necessary in order to execute the LRU mechanics (usage_count--) \n> so less important buffers could be kicked out. I would like people to \n> still be able to set a tuning such that the background writers were useful \n> under average loads, but didn't ever try to do too much. It's much more \n> difficult to do that if bgwriter_lru_maxpages goes away.\n> \n> I realized recently the task I should take on here is to run some more \n> experiments with the latest code and pass along suggested techniques for \n> producing/identifying the kind of problem conditions I've run into in the \n> past; then we can see if other people can reproduce them. I got a new \n> 8-core server I need to thrash anyway and will try and do just that \n> starting tomorrow.\n> \n> For all I know my concerns are strictly a rare edge case. But since the \n> final adjustments to things like whether there is an upper limit or not \n> are very small patches compared to what's already been done here, I sent \n> in what I thought was ready to go because I didn't want to hold up \n> reviewing the bulk of the code over some of these fine details.\n\nApologies for asking this on the wrong list, but it is at least the right\nthread.\n\nWhat is the current thinking on bg_writer setttings for systems such as \n4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?\n\n-dg\n\n-- \nDavid Gould [email protected]\nIf simplicity worked, the world would be overrun with insects.\n",
"msg_date": "Mon, 14 May 2007 21:55:16 -0700",
"msg_from": "daveg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "Greg Smith wrote:\n> I realized recently the task I should take on here is to run some more \n> experiments with the latest code and pass along suggested techniques for \n> producing/identifying the kind of problem conditions I've run into in \n> the past; then we can see if other people can reproduce them. I got a \n> new 8-core server I need to thrash anyway and will try and do just that \n> starting tomorrow.\n\nYes, please do that. I can't imagine a situation where a tunable maximum \nwould help, but you've clearly spent a lot more time experimenting with \nit than me.\n\nI have noticed that on a heavily (over)loaded system with fully \nsaturated I/O, bgwriter doesn't make any difference because all the \nbackends need to wait for writes anyway. But it doesn't hurt either.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 15 May 2007 13:45:03 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "Moving to -performance.\n\nOn Mon, May 14, 2007 at 09:55:16PM -0700, daveg wrote:\n> Apologies for asking this on the wrong list, but it is at least the right\n> thread.\n> \n> What is the current thinking on bg_writer setttings for systems such as \n> 4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?\n\nIt depends greatly on how much of your data tends to stay 'pinned' in\nshared_buffers between checkpoints. In a case where the same data tends\nto stay resident you're going to need to depend on the 'all' scan to\ndecrease the impact of checkpoints (though the load distributed\ncheckpoint patch will change that greatly).\n\nOther than that tuning bgwriter boils down to your IO capability as well\nas how often you're checkpointing.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 15 May 2007 19:08:07 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "On Tue, 15 May 2007, Jim C. Nasby wrote:\n\n> Moving to -performance.\n\nNo, really, moved to performance now.\n\n> On Mon, May 14, 2007 at 09:55:16PM -0700, daveg wrote:\n>> What is the current thinking on bg_writer setttings for systems such as\n>> 4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?\n\nFirst off, the primary purpose of both background writers are to keep the \nindividual client backends from stalling to wait for disk I/O. If you're \nrunning a batch workload, and there isn't a specific person waiting for a \nresponse, the background writer isn't as critical to worry about.\n\nAs Jim already said, tuning the background writer well really requires a \nlook at the usage profile of your buffer pool and some thinking about your \nI/O capacity just as much as it does your CPU/memory situation.\n\nFor the first part, I submitted a patch that updates the \ncontrib/pg_buffercache module to show the usage count information of your \nbuffer cache. The LRU writer only writes things with a usage_count of 0, \nso taking some snapshots of that data regularly will give you an idea \nwhether you can useful use it or whether you'd be better off making the \nall scan more aggressive. It's a simple patch that only effects a contrib \nmodule you can add and remove easily, I would characterize it as pretty \nsafe to apply even to a production system as long as you're doing the \ninitial tests off-hours. The patch is at\n\nhttp://archives.postgresql.org/pgsql-patches/2007-03/msg00555.php\n\nAnd the usual summary query I run after installing it in a database is:\n\nselect usagecount,count(*),isdirty from pg_buffercache group by \nisdirty,usagecount order by isdirty,usagecount;\n\nAs for the I/O side of things, I'd suggest you compute a worst-case \nscenario for how many disk writes will happen if every buffer the \nbackground writer comes across is dirty and base your settings on what \nyou're comfortable with there. Say you kept the default interval of 200ms \nbut increased the maximum pages value to 1000; each writer could \ntheoretically push 1000 x 8KB x 5/second = 40MB/s worth of data to disk. \nSince these are database writes that have to be interleaved with reads, \nthe sustainable rate here is not as high as you might think. You might \nget a useful performance boost just pushing the max numbers from the \ndefaults to up into the couple of hundred range--with the amount of RAM \nyou probably have decided to the buffer cache even the default small \npercentages will cover a lot of ground and might need to be increased. I \nlike 250 as a round number because it makes for at most an even 10MB a \nsecond flow out per writer. I wouldn't go too high on the max writes per \npass unless you're in a position to run some good tests to confirm you're \nnot actually making things worse.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 15 May 2007 23:02:44 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Automatic adjustment of bgwriter_lru_maxpages"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches\n\nIt will be applied as soon as one of the PostgreSQL committers reviews\nand approves it.\n\n---------------------------------------------------------------------------\n\n\nGreg Smith wrote:\n> Attached are two patches that try to recast the ideas of Itagaki \n> Takahiro's auto bgwriter_lru_maxpages patch in the direction I think this \n> code needs to move. Epic-length commentary follows.\n> \n> The original code came from before there was a pg_stat_bgwriter. The \n> first patch (buf-alloc-stats) takes the two most interesting pieces of \n> data the original patch collected, the number of buffers allocated \n> recently and the number that the clients wrote out, and ties all that into \n> the new stats structure. With this patch applied, you can get a feel for \n> things like churn/turnover in the buffer pool that were very hard to \n> quantify before. Also, it makes it easy to measure how well your \n> background writer is doing at writing buffers so the clients don't have \n> to. Applying this would complete one of my personal goals for the 8.3 \n> release, which was having stats to track every type of buffer write.\n> \n> I split this out because I think it's very useful to have regardless of \n> whether the automatic tuning portion is accepted, and I think these \n> smaller patches make the review easier. The main thing I would recommend \n> someone check is how am_bg_writer is (mis?)used here. I spliced some of \n> the debugging-only code from the original patch, and I can't tell if the \n> result is a robust enough approach to solving the problem of having every \n> client indirectly report their activity to the background writer. Other \n> than that, I think this code is ready for review and potentially \n> comitting.\n> \n> The second patch (limit-lru) adds on top of that a constraint of the LRU \n> writer so that it doesn't do any more work than it has to. Note that I \n> left verbose debugging code in here because I'm much less confident this \n> patch is complete.\n> \n> It predicts upcoming buffer allocations using a 16-period weighted moving \n> average of recent activity, which you can think of as the last 3.2 seconds \n> at the default interval. After testing a few systems that seemed a decent \n> compromise of smoothing in both directions. I found the 2X overallocation \n> fudge factor of the original patch way too aggressive, and just pick the \n> larger of the most recent allocation amount or the smoothed value. The \n> main thing that throws off the allocation estimation is when you hit a \n> checkpoint, which can give a big spike after the background writer returns \n> to BgBufferSync and notices all the buffers that were allocated during the \n> checkpoint write; the code then tries to find more buffers it can recycle \n> than it needs to. Since the checkpoint itself normally leaves a large \n> wake of reusable buffers behind it, I didn't find this to be a serious \n> problem.\n> \n> There's another communication issue here, which is that SyncOneBuffer \n> needs to return more information about the buffer than it currently does \n> once it gets it locked. The background writer needs to know more than \n> just if it was written to tune itself. The original patch used a clever \n> trick for this which worked but I found confusing. 
I happen to have a \n> bunch of other background writer tuning code I'm working on, and I had to \n> come up with a more robust way to communicate buffer internals back via \n> this channel. I used that code here, it's a bitmask setup similar to how \n> flags like BM_DIRTY are used. It's overkill for solving this particular \n> problem, but I think the interface is clean and it helps support future \n> enhancements in intelligent background writing.\n> \n> Now we get to the controversial part. The original patch removed the \n> bgwriter_lru_maxpages parameter and updated the documentation accordingly. \n> I didn't do that here. The reason is that after playing around in this \n> area I'm not convinced yet I can satisfy all the tuning scenarios I'd like \n> to be able to handle that way. I describe this patch as enforcing a \n> constraint instead; it allows you to set the LRU parameters much higher \n> than was reasonable before without having to be as concerned about the LRU \n> writer wasting resources.\n> \n> I already brought up some issues in this area on -hackers ( \n> http://archives.postgresql.org/pgsql-hackers/2007-04/msg00781.php ) but my \n> work hasn't advanced as fast as I'd hoped. I wanted to submit what I've \n> finished anyway because I think any approach here is going to have cope \n> with the issues addressed in these two patches, and I'm happy now with how \n> they're solved here. It's only a one-line delete to disable the LRU \n> limiting behavior of the second patch, at which point it's strictly \n> internals code with no expected functional impact that alternate \n> approaches might be built on.\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 17 May 2007 18:47:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automatic adjustment of bgwriter_lru_maxpages"
}
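Not the code from the patch itself, but a toy illustration of the estimation rule Greg describes above (smooth recent buffer-allocation counts and take the larger of the latest and smoothed values) might look like the following. The sample numbers are made up, and the exponentially weighted 1/16 update is just one plausible reading of the "16-period weighted moving average" wording, not something taken from the patch.

```c
/* Toy illustration only -- not the actual patch code.  Smooths invented
 * per-interval buffer-allocation counts with a 1/16-weighted update and
 * picks the larger of the latest and smoothed values as the estimate. */
#include <stdio.h>

int main(void)
{
    int recent[] = { 120, 80, 95, 400, 110, 90 };   /* invented allocation counts */
    double smoothed = 0.0;

    for (int i = 0; i < (int) (sizeof(recent) / sizeof(recent[0])); i++) {
        smoothed += (recent[i] - smoothed) / 16.0;  /* 16-sample weighting */
        int estimate = recent[i] > (int) smoothed ? recent[i] : (int) smoothed;
        printf("recent=%3d smoothed=%6.1f estimate=%3d\n",
               recent[i], smoothed, estimate);
    }
    return 0;
}
```

Note how the one-off spike (400) pushes the estimate up immediately but only nudges the smoothed value, which is the behaviour described for the post-checkpoint allocation burst.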
] |
[
{
"msg_contents": "We have been running Postgres on a 2U server with 2 disks configured in\nraid 1 for the os and logs and 4 disks configured in raid 10 for the\ndata. I have since been told raid 5 would have been a better option\ngiven our usage of Dell equipment and the way they handle raid 10. I\nhave just a few general questions about raid with respect to Postgres:\n\n[1] What is the performance penalty of software raid over hardware raid?\n Is it truly significant? We will be working with 100s of GB to 1-2 TB\nof data eventually.\n\n[2] How do people on this list monitor their hardware raid? Thus far we\nhave used Dell and the only way to easily monitor disk status is to use\ntheir openmanage application. Do other controllers offer easier means\nof monitoring individual disks in a raid configuration? It seems one\nadvantage software raid has is the ease of monitoring.\n\nI truly appreciate any assistance or input. As an additional question,\ndoes anyone have any strong recommendations for vendors that offer both\nconsulting/training and support? We are currently speaking with Command\nPrompt, EnterpriseDB, and Greenplum but I am certainly open to hearing\nany other recommendations.\n\nThanks,\n\nJoe\n",
"msg_date": "Tue, 27 Feb 2007 08:12:00 -0500",
"msg_from": "\"Joe Uhl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Opinions on Raid"
},
{
"msg_contents": "Joe Uhl wrote:\n> We have been running Postgres on a 2U server with 2 disks configured in\n> raid 1 for the os and logs and 4 disks configured in raid 10 for the\n> data. I have since been told raid 5 would have been a better option\n> given our usage of Dell equipment and the way they handle raid 10. I\n> have just a few general questions about raid with respect to Postgres:\n> \n> [1] What is the performance penalty of software raid over hardware raid?\n> Is it truly significant? We will be working with 100s of GB to 1-2 TB\n> of data eventually.\n\nthis depends a lot on the raidcontroller (whether it has or not BBWC for \nexample) - for some use-cases softwareraid is actually faster(especially \nfor seq-io tests).\n\n> \n> [2] How do people on this list monitor their hardware raid? Thus far we\n> have used Dell and the only way to easily monitor disk status is to use\n> their openmanage application. Do other controllers offer easier means\n> of monitoring individual disks in a raid configuration? It seems one\n> advantage software raid has is the ease of monitoring.\n\nwell the answer to that question depends on what you are using for your \nnetwork monitoring as a whole as well as your Platform of choice. If you \nuse say nagios and Linux it makes sense to use a nagios plugin (we do \nthat here with a unified check script that checks everything from \nLSI-MPT based raid cards, over IBMs ServeRAID, HPs Smartarray,LSI \nMegaRAID cards and also Linux/Solaris Software RAID).\nIf you are using another monitoring solution(OpenView, IBM \nDirectory,...) your solution might look different.\n\n\nStefan\n",
"msg_date": "Tue, 27 Feb 2007 14:56:08 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "At 08:12 AM 2/27/2007, Joe Uhl wrote:\n>We have been running Postgres on a 2U server with 2 disks configured in\n>raid 1 for the os and logs and 4 disks configured in raid 10 for the\n>data. I have since been told raid 5 would have been a better option\n>given our usage of Dell equipment and the way they handle raid 10. I\n>have just a few general questions about raid with respect to Postgres:\n>\n>[1] What is the performance penalty of software raid over hardware raid?\n> Is it truly significant? We will be working with 100s of GB to 1-2 TB\n>of data eventually.\nThe real CPU overhead when using SW RAID is when using any form of SW \nRAID that does XOR operations as part of writes (RAID 5, 6, 50, ..., \netc). At that point, you are essentially hammering on the CPU just \nas hard as you would on a dedicated RAID controller... ...and the \ndedicated RAID controller probably has custom HW helping it do this \nsort of thing more efficiently.\nThat being said, SW RAID 5 in this sort of scenario can be reasonable \nif you =dedicate= a CPU core to it. So in such a system, your \"n\" \ncore box is essentially a \"n-1\" core box because you have to lock a \ncore to doing nothing but RAID management.\nReligious wars aside, this actually can work well. You just have to \nunderstand and accept what needs to be done.\n\nSW RAID 1, or 10, or etc should not impose a great deal of CPU \noverhead, and often can be =faster= than a dedicated RAID controller.\n\nSW RAID 5 etc in usage scenarios involving far more reads than writes \nand light write loads can work quite well even if you don't dedicate \na core to RAID management, but you must be careful about workloads \nthat are, or that contain parts that are, examples of the first \nscenario I gave. If you have any doubts about whether you are doing \ntoo many writes, dedicate a core to RAID stuff as in the first scenario.\n\n\n>[2] How do people on this list monitor their hardware raid? Thus far we\n>have used Dell and the only way to easily monitor disk status is to use\n>their openmanage application. Do other controllers offer easier means\n>of monitoring individual disks in a raid configuration? It seems one\n>advantage software raid has is the ease of monitoring.\nMany RAID controller manufacturers and storage product companies \noffer reasonable monitoring / management tools.\n\n3ware AKA AMCC has a good reputation in this area for their cards.\nSo does Areca.\nI personally do not like Adaptec's SW for this purpose, but YMMV.\nLSI Logic has had both good and bad SW in this area over the years.\n\nDell, HP, IBM, etc's offerings in this area tend to be product line \nspecific. I'd insist on some sort of \"try before you buy\" if the \nease of use / quality of the SW matters to your overall purchase decision.\n\nThen there are the various CSSW and OSSW packages that contain this \nfunctionality or are dedicated to it. Go find some reputable reviews.\n(HEY LURKERS FROM Tweakers.net: ^^^ THAT\"S AN ARTICLE IDEA ;-) )\n\nCheers,\nRon \n\n",
"msg_date": "Tue, 27 Feb 2007 11:05:39 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "Hope you don't mind, Ron. This might be splitting hairs.\n\nOn Tue, Feb 27, 2007 at 11:05:39AM -0500, Ron wrote:\n> The real CPU overhead when using SW RAID is when using any form of SW \n> RAID that does XOR operations as part of writes (RAID 5, 6, 50, ..., \n> etc). At that point, you are essentially hammering on the CPU just \n> as hard as you would on a dedicated RAID controller... ...and the \n> dedicated RAID controller probably has custom HW helping it do this \n> sort of thing more efficiently.\n> That being said, SW RAID 5 in this sort of scenario can be reasonable \n> if you =dedicate= a CPU core to it. So in such a system, your \"n\" \n> core box is essentially a \"n-1\" core box because you have to lock a \n> core to doing nothing but RAID management.\n\nI have an issue with the above explanation. XOR is cheap. It's one of\nthe cheapest CPU instructions available. Even with high bandwidth, the\nCPU should always be able to XOR very fast.\n\nThis leads me to the belief that the RAID 5 problem has to do with\ngetting the data ready to XOR. With RAID 5, the L1/L2 cache is never\nlarge enoguh to hold multiple stripes of data under regular load, and\nthe system may not have the blocks in RAM. Reading from RAM to find the\nmissing blocks shows up as CPU load. Reading from disk to find the\nmissing blocks shows up as system load. Dedicating a core to RAID 5\nfocuses on the CPU - which I believe to be mostly idle waiting for a\nmemory read. Dedicating a core reduces the impact, but can't eliminate\nit, and the cost of a whole core to sit mostly idle waiting for memory\nreads is high. Also, any reads scheduled by this core will affect the\nbandwidth/latency for other cores.\n\nHardware RAID 5 solves this by using its own memory modules - like a\nvideo card using its own memory modules. The hardware RAID can read\nfrom its own memory or disk all day and not affect system performance.\nHopefully it has plenty of memory dedicated to holding the most\nfrequently required blocks.\n\n> SW RAID 5 etc in usage scenarios involving far more reads than writes \n> and light write loads can work quite well even if you don't dedicate \n> a core to RAID management, but you must be careful about workloads \n> that are, or that contain parts that are, examples of the first \n> scenario I gave. If you have any doubts about whether you are doing \n> too many writes, dedicate a core to RAID stuff as in the first scenario.\n\nI found software RAID 5 to suck such that I only use it for backups\nnow. It seemed that Linux didn't care to read-ahead or hold blocks in\nmemory for too long, and preferred to read and then write. It was awful.\nRAID 5 doesn't seem like a good option even with hardware RAID. They mask\nthe issues with it behind a black box (dedicated hardware). The issues\nstill exist.\n\nMost of my system is RAID 1+0 now. I have it broken up. Rarely read or\nwritten files (long term storage) in RAID 5, The main system data on\nRAID 1+0. The main system on RAID 1. A larger build partition on RAID\n0. For a crappy server in my basement, I've very happy with my\nsoftware RAID performance now. :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. 
|__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 27 Feb 2007 12:25:23 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "On Tue, 2007-02-27 at 07:12, Joe Uhl wrote:\n> We have been running Postgres on a 2U server with 2 disks configured in\n> raid 1 for the os and logs and 4 disks configured in raid 10 for the\n> data. I have since been told raid 5 would have been a better option\n> given our usage of Dell equipment and the way they handle raid 10.\n\nSome controllers do no layer RAID effectively. Generally speaking, the\ncheaper the controller, the worse it's gonna perform.\n\nAlso, some controllers are optimized more for RAID 5 than RAID 1 or 0.\n\nWhich controller does your Dell have, btw?\n\n> I\n> have just a few general questions about raid with respect to Postgres:\n> \n> [1] What is the performance penalty of software raid over hardware raid?\n> Is it truly significant? We will be working with 100s of GB to 1-2 TB\n> of data eventually.\n\nFor a mostly read system, the performance is generally pretty good. \nOlder linux kernels ran layered RAID pretty slowly. I.e. RAID 1+0 was\nno faster than RAID 1. The best performance software RAID I found in\nolder linux kernels (2.2, 2.4) was plain old RAID-1. RAID-5 was good at\nreading, but slow at writing.\n\n",
"msg_date": "Tue, 27 Feb 2007 11:56:07 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "Really appreciate all of the valuable input. The current server has the\nPerc4ei controller.\n\nThe impression I am taking from the responses is that we may be okay with\nsoftware raid, especially if raid 1 and 10 are what we intend to use.\n\nI think we can collect enough information from the archives of this list to\nhelp make decisions for the new machine(s), was just very interested in\nhearing feedback on software vs. hardware raid.\n\nWe will likely be using the 2.6.18 kernel.\n\nThanks for everyone's input,\n\nJoe\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, February 27, 2007 12:56 PM\nTo: Joe Uhl\nCc: [email protected]\nSubject: Re: [PERFORM] Opinions on Raid\n\nOn Tue, 2007-02-27 at 07:12, Joe Uhl wrote:\n> We have been running Postgres on a 2U server with 2 disks configured in\n> raid 1 for the os and logs and 4 disks configured in raid 10 for the\n> data. I have since been told raid 5 would have been a better option\n> given our usage of Dell equipment and the way they handle raid 10.\n\nSome controllers do no layer RAID effectively. Generally speaking, the\ncheaper the controller, the worse it's gonna perform.\n\nAlso, some controllers are optimized more for RAID 5 than RAID 1 or 0.\n\nWhich controller does your Dell have, btw?\n\n> I\n> have just a few general questions about raid with respect to Postgres:\n> \n> [1] What is the performance penalty of software raid over hardware raid?\n> Is it truly significant? We will be working with 100s of GB to 1-2 TB\n> of data eventually.\n\nFor a mostly read system, the performance is generally pretty good. \nOlder linux kernels ran layered RAID pretty slowly. I.e. RAID 1+0 was\nno faster than RAID 1. The best performance software RAID I found in\nolder linux kernels (2.2, 2.4) was plain old RAID-1. RAID-5 was good at\nreading, but slow at writing.\n\n\n",
"msg_date": "Tue, 27 Feb 2007 13:28:13 -0500",
"msg_from": "\"Joe Uhl\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "On Tue, 2007-02-27 at 12:28, Joe Uhl wrote:\n> Really appreciate all of the valuable input. The current server has the\n> Perc4ei controller.\n> \n> The impression I am taking from the responses is that we may be okay with\n> software raid, especially if raid 1 and 10 are what we intend to use.\n> \n> I think we can collect enough information from the archives of this list to\n> help make decisions for the new machine(s), was just very interested in\n> hearing feedback on software vs. hardware raid.\n> \n> We will likely be using the 2.6.18 kernel.\n\nWell, whatever you do, benchmark it with what you think will be your\ntypical load. you can do some simple initial tests to see if you're in\nthe ballpark with bonnie++, dd, etc... Then move on to real database\ntests after that.\n",
"msg_date": "Tue, 27 Feb 2007 13:12:25 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "Joe Uhl wrote:\n\n> [1] What is the performance penalty of software raid over hardware raid?\n> Is it truly significant? We will be working with 100s of GB to 1-2 TB\n> of data eventually.\n\nOne thing you should appreciate about hw vs sw raid is that with the former \nyou can battery-back it and enable controller write caching in order to \nmake disk write latency largely disappear. How much of a performance \ndifference that makes depends on what you're doing with it, of course.\n\nSee the current thread \"Two hard drives --- what to do with them?\" for some \ndiscussion of the virtues of battery-backed raid.\n\n> [2] How do people on this list monitor their hardware raid? Thus far we\n> have used Dell and the only way to easily monitor disk status is to use\n> their openmanage application. Do other controllers offer easier means\n> of monitoring individual disks in a raid configuration? It seems one\n> advantage software raid has is the ease of monitoring.\n\nPersonally I use nagios with nrpe for most of the monitoring, and write a \nlittle wrapper around the cli monitoring tool from the controller \nmanufacturer to grok whether it's in a good/degraded/bad state.\n\nDell PERC controllers I think are mostly just derivatives of Adaptec/LSI \ncontrollers, so you might be able to get a more convenient monitoring tool \nfrom one of them that might work. See if you can find your PERC version in \nhttp://pciids.sourceforge.net/pci.ids, or if you're using Linux then which \nhw raid module is loaded for it, to get an idea of which place to start \nlooking for that.\n\n- Geoff\n\n",
"msg_date": "Tue, 27 Feb 2007 15:42:36 -0800",
"msg_from": "Geoff Tolley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "On 28-2-2007 0:42 Geoff Tolley wrote:\n>> [2] How do people on this list monitor their hardware raid? Thus far we\n>> have used Dell and the only way to easily monitor disk status is to use\n>> their openmanage application. Do other controllers offer easier means\n>> of monitoring individual disks in a raid configuration? It seems one\n>> advantage software raid has is the ease of monitoring.\n\nRecent Dell raid-controllers are based on LSI chips, although they are \nnot exactly the same as similar LSI-controllers (anymore). Our Dell \nPerc5/e and 5/i work with the MegaCLI-tool from LSI. But that tool has \nreally limited documentation from LSI itself. Luckily Fujitsu-Siemens \noffers a nice PDF:\nhttp://manuals.fujitsu-siemens.com/serverbooks/content/manuals/english/mr-sas-sw-ug-en.pdf\n\nBesides that, there are several Dell linux resources popping up, \nincluding on their own site:\nhttp://linux.dell.com/\n\n> Personally I use nagios with nrpe for most of the monitoring, and write \n> a little wrapper around the cli monitoring tool from the controller \n> manufacturer to grok whether it's in a good/degraded/bad state.\n\nIf you have a MegaCLI-version, I'd like to see it, if possible? That \nwould definitely save us some reinventing the wheel :-)\n\n> Dell PERC controllers I think are mostly just derivatives of Adaptec/LSI \n> controllers, so you might be able to get a more convenient monitoring \n> tool from one of them that might work. See if you can find your PERC \n> version in http://pciids.sourceforge.net/pci.ids, or if you're using \n> Linux then which hw raid module is loaded for it, to get an idea of \n> which place to start looking for that.\n\nThe current ones are afaik all LSI-based. But at least the recent SAS \ncontrollers (5/i and 5/e) are.\n\nBest regards,\n\nArjen\n",
"msg_date": "Sat, 03 Mar 2007 12:30:16 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
},
{
"msg_contents": "On Sat, Mar 03, 2007 at 12:30:16PM +0100, Arjen van der Meijden wrote:\n> If you have a MegaCLI-version, I'd like to see it, if possible? That \n> would definitely save us some reinventing the wheel :-)\n\nA friend of mine just wrote\n\n MegaCli -AdpAllInfo -a0|egrep ' (Degraded|Offline|Critical Disks|Failed Disks)' | grep -v ': 0 $'\n\nwhich will output errors if there are any, and none otherwise. Or just add -q\nto the grep and check the return status.\n\n(Yes, simplistic, but often all you want to know is if all's OK or not...)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 6 Mar 2007 02:31:46 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opinions on Raid"
}
] |
[
{
"msg_contents": "Dear all,\n\nAfter many tests and doc reading, i finally try to get help from\nyou...\n\nHere is my problem. With some heavy insert into a simple BD (one\ntable, no indexes) i can't get better perf than 8000 inserts/sec. I'm\ntesting it using a simple C software which use libpq and which use:\n- Insert prepared statement (to avoid too many request parsing on the\nserver)\n- transaction of 100000 inserts\n\nMy server which has the following config:\n- 3G RAM\n- Pentium D - 64 bits, 3Ghz\n- database data on hardware raid 0 disks\n- x_log (WAL logs) on an other single hard drive\n\nThe server only use 30% of the CPU, 10% of disk access and not much\nRAM... So i'm wondering where could be the bottle neck and why i can't\nget better performance ?\nI really need to use inserts and i can't change it to use COPY...\n\nAny advice is welcome. Sorry in advance for my bad understanding of\ndatabase !\n\nThanks in advance.\n\nRegards,\n\n\nJoël.W\n\n",
"msg_date": "27 Feb 2007 09:10:21 -0800",
"msg_from": "\"hatman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert performance"
},
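For readers who want to reproduce the kind of client described above, a minimal libpq sketch of the same pattern (one server-side prepared INSERT, executed many times inside explicit transactions) might look like this. It is not the original poster's program: the connection string, the table "test_tbl", its "val" column and the batch size are invented placeholders.

```c
/* Minimal sketch (not the original poster's program) of batched, prepared
 * INSERTs over libpq.  "test_tbl", its "val" column, the dbname and the
 * batch size are all made-up placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Parse/plan the statement once on the server, then reuse it per row. */
    PGresult *res = PQprepare(conn, "ins",
                              "INSERT INTO test_tbl (val) VALUES ($1)", 1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    const int batch = 10000;                /* rows per transaction */
    char buf[32];
    const char *params[1] = { buf };

    for (int b = 0; b < 10; b++) {
        PQclear(PQexec(conn, "BEGIN"));
        for (int i = 0; i < batch; i++) {
            snprintf(buf, sizeof(buf), "%d", b * batch + i);
            res = PQexecPrepared(conn, "ins", 1, params, NULL, NULL, 0);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }
        PQclear(PQexec(conn, "COMMIT"));
    }

    PQfinish(conn);
    return 0;
}
```

Running several such clients, each on its own connection, is what lets more than one CPU core do useful work, which is the effect discussed later in the thread.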
{
"msg_contents": "hatman wrote:\n> Dear all,\n> \n> After many tests and doc reading, i finally try to get help from\n> you...\n> \n> Here is my problem. With some heavy insert into a simple BD (one\n> table, no indexes) i can't get better perf than 8000 inserts/sec. I'm\n> testing it using a simple C software which use libpq and which use:\n> - Insert prepared statement (to avoid too many request parsing on the\n> server)\n> - transaction of 100000 inserts\n\nAre each of the INSERTs in their own transaction?\n\nIf so, you'll be limited by the speed of the disk the WAL is running on.\n\nThat means you have two main options:\n1. Have multiple connections inserting simultaneously.\n2. Batch your inserts together, from 10 to 10,000 per transaction.\n\nAre either of those possible?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 06 Mar 2007 07:34:46 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "Hi Richard,\n\n> > \n> > Here is my problem. With some heavy insert into a simple BD (one\n> > table, no indexes) i can't get better perf than 8000 inserts/sec. I'm\n> > testing it using a simple C software which use libpq and which use:\n> > - Insert prepared statement (to avoid too many request parsing on the\n> > server)\n> > - transaction of 100000 inserts\n> \n> Are each of the INSERTs in their own transaction?\n> \n\nNo, as said above transactions are made of 100000 inserts...\n\n> If so, you'll be limited by the speed of the disk the WAL is running on.\n> \n> That means you have two main options:\n> 1. Have multiple connections inserting simultaneously.\n\nYes, you're right. That what i have been testing and what provide the\nbest performance ! I saw that postgresql frontend was using a lot of CPU\nand not both of them (i'm using a pentium D, dual core). To the opposit,\nthe postmaster process use not much resources. Using several client,\nboth CPU are used and i saw an increase of performance (about 18000\ninserts/sec).\n\nSo i think my bottle neck is more the CPU speed than the disk speed,\nwhat do you think ?\n\nI use 2 disks (raid 0) for the data and a single disk for pg_xlog.\n\n> 2. Batch your inserts together, from 10 to 10,000 per transaction.\n> \n\nYes, that's what i'm doing.\n\n\nThanks a lot for the advices !\n\n\nregards,\n\n\nJo�l\n\n\n\n",
"msg_date": "Tue, 06 Mar 2007 08:53:27 +0100",
"msg_from": "=?ISO-8859-1?Q?jo=EBl?= Winteregg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "jo�l Winteregg wrote:\n> Hi Richard,\n> \n>>> Here is my problem. With some heavy insert into a simple BD (one\n>>> table, no indexes) i can't get better perf than 8000 inserts/sec. I'm\n>>> testing it using a simple C software which use libpq and which use:\n>>> - Insert prepared statement (to avoid too many request parsing on the\n>>> server)\n>>> - transaction of 100000 inserts\n>> Are each of the INSERTs in their own transaction?\n>>\n> \n> No, as said above transactions are made of 100000 inserts...\n\nHmm - I read that as just meaning \"inserted 100000 rows\". You might find \nthat smaller batches provide peak performance.\n\n>> If so, you'll be limited by the speed of the disk the WAL is running on.\n>>\n>> That means you have two main options:\n>> 1. Have multiple connections inserting simultaneously.\n> \n> Yes, you're right. That what i have been testing and what provide the\n> best performance ! I saw that postgresql frontend was using a lot of CPU\n> and not both of them (i'm using a pentium D, dual core). To the opposit,\n> the postmaster process use not much resources. Using several client,\n> both CPU are used and i saw an increase of performance (about 18000\n> inserts/sec).\n> \n> So i think my bottle neck is more the CPU speed than the disk speed,\n> what do you think ?\n\nWell, I think it's fair to say it's not disk. Let's see - the original \nfigure was 8000 inserts/sec, which is 0.125ms per insert. That sounds \nplausible to me for a round-trip to process a simple command - are you \nrunning the client app on the same machine, or is it over the network?\n\nTwo other things to bear in mind:\n1. If you're running 8.2 you can have multiple sets of values in an INSERT\nhttp://www.postgresql.org/docs/8.2/static/sql-insert.html\n\n2. You can do a COPY from libpq - is it really not possible?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 06 Mar 2007 08:08:29 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
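The 8.2 multi-row VALUES form Richard points to can also be driven from libpq. A rough sketch follows; the table and column names and the literal rows are invented for illustration, and a real loader would build the VALUES list dynamically with proper escaping or parameters.

```c
/* Sketch of the 8.2+ multi-row VALUES syntax sent through libpq.
 * Table/column names and the literal rows are invented for illustration. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* One round trip to the server inserts several rows at once. */
    PGresult *res = PQexec(conn,
        "INSERT INTO test_tbl (id, val) VALUES "
        "(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}
```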
{
"msg_contents": "Hi and thanks for your quick answer :-)\n\n> > \n> >>> Here is my problem. With some heavy insert into a simple BD (one\n> >>> table, no indexes) i can't get better perf than 8000 inserts/sec. I'm\n> >>> testing it using a simple C software which use libpq and which use:\n> >>> - Insert prepared statement (to avoid too many request parsing on the\n> >>> server)\n> >>> - transaction of 100000 inserts\n> >> Are each of the INSERTs in their own transaction?\n> >>\n> > \n> > No, as said above transactions are made of 100000 inserts...\n> \n> Hmm - I read that as just meaning \"inserted 100000 rows\". You might find \n> that smaller batches provide peak performance.\n> \n\nAhh ok ;-) sorry for my bad english... (yeah, i have been testing\nseveral transaction size 10000, 20000 and 100000)\n\n\n> >> If so, you'll be limited by the speed of the disk the WAL is running on.\n> >>\n> >> That means you have two main options:\n> >> 1. Have multiple connections inserting simultaneously.\n> > \n> > Yes, you're right. That what i have been testing and what provide the\n> > best performance ! I saw that postgresql frontend was using a lot of CPU\n> > and not both of them (i'm using a pentium D, dual core). To the opposit,\n> > the postmaster process use not much resources. Using several client,\n> > both CPU are used and i saw an increase of performance (about 18000\n> > inserts/sec).\n> > \n> > So i think my bottle neck is more the CPU speed than the disk speed,\n> > what do you think ?\n> \n> Well, I think it's fair to say it's not disk. Let's see - the original \n> figure was 8000 inserts/sec, which is 0.125ms per insert. That sounds \n> plausible to me for a round-trip to process a simple command - are you \n> running the client app on the same machine, or is it over the network?\n\nI did both test. On the local machine (using UNIX sockets) i can reach\n18000 insert/sec with 10 clients and prepared statements. The same test\nusing clients on the remote machine provide me 13000 inserts/sec.\n\nNow, with multiple client (multi-threaded inserts) my both CPU are quite\nwell used (both arround 90%) so i maybe think that disk speeds are now\nmy bottleneck. What do you think ? or maybe i will need a better CPU ?\n\n> \n> Two other things to bear in mind:\n> 1. If you're running 8.2 you can have multiple sets of values in an INSERT\n> http://www.postgresql.org/docs/8.2/static/sql-insert.html\n> \n\nYeah, i'm running the 8.2.3 version ! i didn't know about multiple\ninserts sets ! Thanks for the tip ;-)\n\n> 2. You can do a COPY from libpq - is it really not possible?\n> \n\nNot really but i have been testing it and inserts are flying (about\n100000 inserts/sec) !!\n\n\n\n",
"msg_date": "Tue, 06 Mar 2007 10:19:08 +0100",
"msg_from": "=?ISO-8859-1?Q?jo=EBl?= Winteregg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "jo�l Winteregg wrote:\n> \n>>> No, as said above transactions are made of 100000 inserts...\n>> Hmm - I read that as just meaning \"inserted 100000 rows\". You might find \n>> that smaller batches provide peak performance.\n> \n> Ahh ok ;-) sorry for my bad english... (yeah, i have been testing\n> several transaction size 10000, 20000 and 100000)\n\nNot your bad English, my poor reading :-)\n\n>>>> If so, you'll be limited by the speed of the disk the WAL is running on.\n>>>>\n>>>> That means you have two main options:\n>>>> 1. Have multiple connections inserting simultaneously.\n>>> Yes, you're right. That what i have been testing and what provide the\n>>> best performance ! I saw that postgresql frontend was using a lot of CPU\n>>> and not both of them (i'm using a pentium D, dual core). To the opposit,\n>>> the postmaster process use not much resources. Using several client,\n>>> both CPU are used and i saw an increase of performance (about 18000\n>>> inserts/sec).\n>>>\n>>> So i think my bottle neck is more the CPU speed than the disk speed,\n>>> what do you think ?\n>> Well, I think it's fair to say it's not disk. Let's see - the original \n>> figure was 8000 inserts/sec, which is 0.125ms per insert. That sounds \n>> plausible to me for a round-trip to process a simple command - are you \n>> running the client app on the same machine, or is it over the network?\n> \n> I did both test. On the local machine (using UNIX sockets) i can reach\n> 18000 insert/sec with 10 clients and prepared statements. The same test\n> using clients on the remote machine provide me 13000 inserts/sec.\n\nOK, so we know what the overhead for network connections is.\n\n> Now, with multiple client (multi-threaded inserts) my both CPU are quite\n> well used (both arround 90%) so i maybe think that disk speeds are now\n> my bottleneck. What do you think ? or maybe i will need a better CPU ?\n> \n>> Two other things to bear in mind:\n>> 1. If you're running 8.2 you can have multiple sets of values in an INSERT\n>> http://www.postgresql.org/docs/8.2/static/sql-insert.html\n> \n> Yeah, i'm running the 8.2.3 version ! i didn't know about multiple\n> inserts sets ! Thanks for the tip ;-)\n\nAh-ha! Give it a go, it's designed for this sort of situation. Not sure \nit'll manage thousands of value clauses, but working up from 10 perhaps. \nI've not tested it for performance, so I'd be interesting in knowing how \nit compares to your other results.\n\n>> 2. You can do a COPY from libpq - is it really not possible?\n>>\n> \n> Not really but i have been testing it and inserts are flying (about\n> 100000 inserts/sec) !!\n\nWhat's the problem with the COPY? Could you COPY into one table then \ninsert from that to your target table?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 06 Mar 2007 09:43:45 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
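Richard's "COPY into a holding table, then INSERT ... SELECT" idea could be sketched with libpq's COPY API roughly as below. The table names, columns and row data are assumptions made up for this example, not details from the thread.

```c
/* Rough sketch of the holding-table approach: stream rows in with COPY,
 * then move them into the target table with INSERT ... SELECT.  The
 * tables, columns and row data are invented for this example. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PGresult *res = PQexec(conn, "COPY holding_tbl (id, val) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN) {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* Default COPY text format: tab-separated columns, newline-terminated rows. */
    PQputCopyData(conn, "1\tfoo\n", 6);
    PQputCopyData(conn, "2\tbar\n", 6);
    PQputCopyEnd(conn, NULL);            /* NULL: no error, end the COPY normally */
    PQclear(PQgetResult(conn));          /* result of the COPY command itself */

    /* Tidy/validate in the holding table if needed, then move the rows. */
    PQclear(PQexec(conn,
        "INSERT INTO target_tbl (id, val) SELECT id, val FROM holding_tbl"));
    PQclear(PQexec(conn, "TRUNCATE holding_tbl"));

    PQfinish(conn);
    return 0;
}
```

One point in its favour, given Andreas's remarks further down the thread: the final INSERT ... SELECT goes through the normal executor, so rules, triggers and foreign keys on the target table still apply even though the bulk of the data arrived via COPY.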
{
"msg_contents": "* Richard Huxton <[email protected]> [070306 12:22]:\n> >>2. You can do a COPY from libpq - is it really not possible?\n> >>\n> >Not really but i have been testing it and inserts are flying (about\n> >100000 inserts/sec) !!\n> \n> What's the problem with the COPY? Could you COPY into one table then insert from that to your target table?\nWell, there are some issues. First your client needs to support it.\nE.g. psycopg2 supports only some specific CSV formatting in it's\nmethods. (plus I had sometimes random psycopg2 crashes, but guarding against\nthese is cheap compared to the speedup from COPY versus INSERT)\nPlus you need to be sure that your data will apply cleanly (which in\nmy app was not the case), or you need to code a fallback that\nlocalizes the row that doesn't work.\n\nAnd the worst thing is, that it ignores RULES on the tables, which\nsucks if you use them ;) (e.g. table partitioning).\n\nAndreas\n",
"msg_date": "Tue, 6 Mar 2007 13:23:45 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "Andreas Kostyrka wrote:\n> * Richard Huxton <[email protected]> [070306 12:22]:\n>>>> 2. You can do a COPY from libpq - is it really not possible?\n>>>>\n>>> Not really but i have been testing it and inserts are flying (about\n>>> 100000 inserts/sec) !!\n>> What's the problem with the COPY? Could you COPY into one table then insert from that to your target table?\n> Well, there are some issues. First your client needs to support it.\n> E.g. psycopg2 supports only some specific CSV formatting in it's\n> methods. (plus I had sometimes random psycopg2 crashes, but guarding against\n> these is cheap compared to the speedup from COPY versus INSERT)\n> Plus you need to be sure that your data will apply cleanly (which in\n> my app was not the case), or you need to code a fallback that\n> localizes the row that doesn't work.\n> \n> And the worst thing is, that it ignores RULES on the tables, which\n> sucks if you use them ;) (e.g. table partitioning).\n\nAh, but two things deal with these issues:\n1. Joel is using libpq\n2. COPY into a holding table, tidy data and INSERT ... SELECT\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 06 Mar 2007 12:24:58 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "* Richard Huxton <[email protected]> [070306 13:47]:\n> Andreas Kostyrka wrote:\n> >* Richard Huxton <[email protected]> [070306 12:22]:\n> >>>>2. You can do a COPY from libpq - is it really not possible?\n> >>>>\n> >>>Not really but i have been testing it and inserts are flying (about\n> >>>100000 inserts/sec) !!\n> >>What's the problem with the COPY? Could you COPY into one table then insert from that to your target table?\n> >Well, there are some issues. First your client needs to support it.\n> >E.g. psycopg2 supports only some specific CSV formatting in it's\n> >methods. (plus I had sometimes random psycopg2 crashes, but guarding against\n> >these is cheap compared to the speedup from COPY versus INSERT)\n> >Plus you need to be sure that your data will apply cleanly (which in\n> >my app was not the case), or you need to code a fallback that\n> >localizes the row that doesn't work.\n> >And the worst thing is, that it ignores RULES on the tables, which\n> >sucks if you use them ;) (e.g. table partitioning).\n> \n> Ah, but two things deal with these issues:\n> 1. Joel is using libpq\n> 2. COPY into a holding table, tidy data and INSERT ... SELECT\n\nClearly COPY is the way for bulk loading data, BUT you asked, so I\nwanted to point out some problems and brittle points with COPY.\n\n(and the copy into the holding table doesn't solve completly the\nproblem with the dirty inconsistent data)\n\nAndreas\n",
"msg_date": "Tue, 6 Mar 2007 13:49:37 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "Hi Andreas,\n\nThanks for the info about COPY !!\n\nOn Mar 6, 1:23 pm, [email protected] (Andreas Kostyrka) wrote:\n> * Richard Huxton <[email protected]> [070306 12:22]:> >>2. You can do a COPY from libpq - is it really not possible?\n>\n> > >Not really but i have been testing it and inserts are flying (about\n> > >100000 inserts/sec) !!\n>\n> > What's the problem with the COPY? Could you COPY into one table then insert from that to your target table?\n>\n> Well, there are some issues. First your client needs to support it.\n> E.g. psycopg2 supports only some specific CSV formatting in it's\n> methods. (plus I had sometimes random psycopg2 crashes, but guarding against\n> these is cheap compared to the speedup from COPY versus INSERT)\n> Plus you need to be sure that your data will apply cleanly (which in\n> my app was not the case), or you need to code a fallback that\n> localizes the row that doesn't work.\n>\n> And the worst thing is, that it ignores RULES on the tables, which\n> sucks if you use them ;) (e.g. table partitioning).\n\nOk, but what about constraints (foreign keys and SERIAL id) using a\ncopy statement ? do we need to handle auto-generated id (SERIAL)\nmanually ?\n\nThanks for your feedback.\n\nRegards,\n\nJoël\n\n",
"msg_date": "6 Mar 2007 07:38:25 -0800",
"msg_from": "\"hatman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "Hi Richard,\n\n>\n> >>> No, as said above transactions are made of 100000 inserts...\n> >> Hmm - I read that as just meaning \"inserted 100000 rows\". You might find\n> >> that smaller batches provide peak performance.\n>\n> > Ahh ok ;-) sorry for my bad english... (yeah, i have been testing\n> > several transaction size 10000, 20000 and 100000)\n>\n> Not your bad English, my poor reading :-)\n>\n>\n>\n> >>>> If so, you'll be limited by the speed of the disk the WAL is running on.\n>\n> >>>> That means you have two main options:\n> >>>> 1. Have multiple connections inserting simultaneously.\n> >>> Yes, you're right. That what i have been testing and what provide the\n> >>> best performance ! I saw that postgresql frontend was using a lot of CPU\n> >>> and not both of them (i'm using a pentium D, dual core). To the opposit,\n> >>> the postmaster process use not much resources. Using several client,\n> >>> both CPU are used and i saw an increase of performance (about 18000\n> >>> inserts/sec).\n>\n> >>> So i think my bottle neck is more the CPU speed than the disk speed,\n> >>> what do you think ?\n> >> Well, I think it's fair to say it's not disk. Let's see - the original\n> >> figure was 8000 inserts/sec, which is 0.125ms per insert. That sounds\n> >> plausible to me for a round-trip to process a simple command - are you\n> >> running the client app on the same machine, or is it over the network?\n>\n> > I did both test. On the local machine (using UNIX sockets) i can reach\n> > 18000 insert/sec with 10 clients and prepared statements. The same test\n> > using clients on the remote machine provide me 13000 inserts/sec.\n>\n> OK, so we know what the overhead for network connections is.\n>\n> > Now, with multiple client (multi-threaded inserts) my both CPU are quite\n> > well used (both arround 90%) so i maybe think that disk speeds are now\n> > my bottleneck. What do you think ? or maybe i will need a better CPU ?\n>\n> >> Two other things to bear in mind:\n> >> 1. If you're running 8.2 you can have multiple sets of values in an INSERT\n> >>http://www.postgresql.org/docs/8.2/static/sql-insert.html\n>\n> > Yeah, i'm running the 8.2.3 version ! i didn't know about multiple\n> > inserts sets ! Thanks for the tip ;-)\n>\n\n\n> Ah-ha! Give it a go, it's designed for this sort of situation. Not sure\n> it'll manage thousands of value clauses, but working up from 10 perhaps.\n> I've not tested it for performance, so I'd be interesting in knowing how\n> it compares to your other results.\n\nYeah, as soon as possible i will give it a try ! Thanks for the\nfeedback ;-)\n\n>\n> >> 2. You can do a COPY from libpq - is it really not possible?\n>\n> > Not really but i have been testing it and inserts are flying (about\n> > 100000 inserts/sec) !!\n>\n> What's the problem with the COPY? Could you COPY into one table then\n> insert from that to your target table?\n\nThe main problem comes from our \"real time\" needs. We are getting\ninformation as a data flow from several application and we need to\nstore them in the DB without buffering them too much...\nI have been testing the COPY using several statement (i mean using\ncopy to add only a few rows to a specific table and then using it on\nan other table to add a few rows, etc...) and the perf are as bad as\nan insert !\nCOPY seems to be designed to add many many rows to the same table and\nnot a few rows to several tables... So that's my main problem.\n\nRegards,\n\nJoël\n\n",
"msg_date": "6 Mar 2007 07:46:36 -0800",
"msg_from": "\"hatman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "\n>>> 1. If you're running 8.2 you can have multiple sets of values in an \n>>> INSERT\n>>> http://www.postgresql.org/docs/8.2/static/sql-insert.html\n>>\n>>\n>> Yeah, i'm running the 8.2.3 version ! i didn't know about multiple\n>> inserts sets ! Thanks for the tip ;-)\n>\n\nNo kidding --- thanks for the tip from me as well !!!\n\nI didn't know this was possible (though I read in the docs that it is ANSI\nSQL standard), and I'm also having a similar situation.\n\nTwo related questions:\n\n1) What about atomicity? Is it strictly equivalent to having multiple \ninsert\nstatements inside a transaction? (I assume it should be)\n\n2) What about the issue with excessive locking for foreign keys when\ninside a transaction? Has that issue disappeared in 8.2? And if not,\nwould it affect similarly in the case of multiple-row inserts?\n\nIn case you have no clue what I'm referring to:\n\nSay that we have a table A, with one foreign key constraint to table\nB --- last time I checked, there was an issue that whenever inserting\nor updating table A (inside a transacion), postgres sets an exclusive\naccess lock on the referenced row on table B --- this is overkill, and\nthe correct thing to do would be to set a read-only lock (so that\nno-one else can *modify or remove* the referenced row while the\ntransaction has not been finished).\n\nThis caused unnecessary deadlock situations --- even though no-one\nis modifying table B (which is enough to guarantee that concurrent\ntransactions would be ok), a second transacion would fail to set the\nexclusive access lock, since someone already locked it.\n\nMy solution was to sort the insert statements by the referenced value\non table B.\n\n(I hope the above explanation clarifies what I'm trying to say)\n\nI wonder if I should still do the same if I go with a multiple-row\ninsert instead of multiple insert statements inside a transaction.\n\nThanks,\n\nCarlos\n--\n\n",
"msg_date": "Tue, 06 Mar 2007 10:55:41 -0500",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "I only know to answer your no. 2:\n> 2) What about the issue with excessive locking for foreign keys when\n> inside a transaction? Has that issue disappeared in 8.2? And if not,\n> would it affect similarly in the case of multiple-row inserts?\n\nThe exclusive lock is gone already starting with 8.0 IIRC, a\nnon-exclusive lock on the parent row is used instead. Thing is that this\nis still too strong ;-)\n\nThe proper lock would be one which only prevents modification of the\nparent key, other updates would be safe on the same row.\n\nIn any case, the current behavior is much better than what was before.\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Tue, 06 Mar 2007 17:02:24 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
},
{
"msg_contents": "Csaba Nagy wrote:\n\n>I only know to answer your no. 2:\n> \n>\n>>2) What about the issue with excessive locking for foreign keys when\n>>inside a transaction? Has that issue disappeared in 8.2? And if not,\n>>would it affect similarly in the case of multiple-row inserts?\n>> \n>>\n>\n>The exclusive lock is gone already starting with 8.0 IIRC, a\n>non-exclusive lock on the parent row is used instead. Thing is that this\n>is still too strong ;-)\n>\n>The proper lock would be one which only prevents modification of the\n>parent key, other updates would be safe on the same row.\n>\n>In any case, the current behavior is much better than what was before.\n> \n>\n\n*Much* better, I would say --- though you're still correct in that it is \nstill\nnot the right thing to do.\n\nIn particular, with the previous approach. there was a serious performance\nhit when concurrent transactions reference the same keys --- that is, after\nhaving taken measures to avoid deadlocks, some transactions would have\nto *wait* (for no good reason) until the other transaction is completed and\nthe exclusive-access lock is released. For high-traffic databases this \ncan be\na quite severe performance hit. I'm glad it has been fixed, even if only\npartially.\n\nThanks,\n\nCarlos\n--\n\n",
"msg_date": "Tue, 06 Mar 2007 12:54:44 -0500",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance"
}
] |
[
{
"msg_contents": "Thought I'd pass this along, since the Linux vs FreeBSD performance\nquestion comes up fairly regularly...\n\nBTW, I've already asked about benchmarking with PostgreSQL, so please\ndon't go over there making trouble. :)\n\n----- Forwarded message from Kris Kennaway <[email protected]> -----\n\nX-Spam-Checker-Version: SpamAssassin 3.1.6 (2006-10-03) on noel.decibel.org\nX-Spam-Level: \nX-Spam-Status: No, score=-0.9 required=5.0 tests=AWL,BAYES_50,\n\tFORGED_RCVD_HELO,SPF_PASS autolearn=no version=3.1.6\nDate: Sat, 24 Feb 2007 16:31:11 -0500\nFrom: Kris Kennaway <[email protected]>\nTo: [email protected], [email protected], [email protected]\nUser-Agent: Mutt/1.4.2.2i\nCc: \nSubject: Progress on scaling of FreeBSD on 8 CPU systems\nPrecedence: list\nErrors-To: [email protected]\n\nNow that the goals of the SMPng project are complete, for the past\nyear or more several of us have been working hard on profiling FreeBSD\nin various multiprocessor workloads, and looking for performance\nbottlenecks to be optimized.\n\nWe have recently made significant progress on optimizing for MySQL\nrunning on an 8-core amd64 system. The graph of results may be found\nhere:\n\n http://www.freebsd.org/~kris/scaling/scaling.png\n\nThis shows the graph of MySQL transactions/second performed by a\nmulti-threaded client workload against a local MySQL database with\nvarying numbers of client threads, with identically configured FreeBSD\nand Linux systems on the same machine.\n\nThe test was run on FreeBSD 7.0, with the latest version of the ULE\n2.0 scheduler, the libthr threading library, and an uncommitted patch\nfrom Jeff Roberson [1] that addresses poor scalability of file\ndescriptor locking (using a new sleepable mutex primitive); this patch\nis responsible for almost all of the performance and scaling\nimprovements measured. It also includes some other patches (collected\nin my kris-contention p4 branch) that have been shown to help\ncontention in MySQL workloads in the past (including a UNIX domain\nsocket locking pushdown patch from Robert Watson), but these were\nshown to only give small individual contributions, with a cumulative\neffect on the order of 5-10%.\n\nWith this configuration we are able to achieve performance that is\nconsistent with Linux at peak (the graph shows Linux 2% faster, but\nthis is commensurate with the margin of error coming from variance\nbetween runs, so more data is needed to distinguish them), with 8\nclient threads (=1 thread/CPU core), and significantly outperforms\nLinux at higher than peak loads, when running on the same hardware.\n\nSpecifically, beyond 8 client threads FreeBSD has only minor\nperformance degradation (an 8% drop from peak throughput at 8 clients\nto 20 clients), but Linux collapses immediately above 8 threads, and\nabove 14 threads asymptotes to essentially single-threaded levels. At\n20 clients FreeBSD outperforms Linux by a factor of 4.\n\nWe see this result as part of the payoff we are seeing from the hard\nwork of many developers over the past 7 years. In particular it is a\nsignificant validation of the SMP and locking strategies chosen for\nthe FreeBSD kernel in the post-FreeBSD 4.x world.\n\nMore configuration details and discussion about the benchmark may be\nfound here:\n\n http://people.freebsd.org/~kris/scaling/mysql.html\n\nKris\n\n\n\n----- End forwarded message -----\n\n-- \nJim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! 
www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 27 Feb 2007 12:26:22 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[[email protected]: Progress on scaling of FreeBSD on 8 CPU\n systems]"
},
{
"msg_contents": "\n> From: Kris Kennaway <[email protected]>\n\n> We have recently made significant progress on optimizing for MySQL\n> running on an 8-core amd64 system. The graph of results may be found\n> here:\n> \n> http://www.freebsd.org/~kris/scaling/scaling.png\n> \n> This shows the graph of MySQL transactions/second performed by a\n> multi-threaded client workload against a local MySQL database with\n> varying numbers of client threads, with identically configured FreeBSD\n> and Linux systems on the same machine.\n\nInteresting -- the MySQL/Linux graph is very similar to the graphs from\nthe .nl magazine posted last year. I think this suggests that the\n\"MySQL deficiency\" was rather a performance bug in Linux, not in MySQL\nitself ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 2 Mar 2007 12:01:29 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD on 8 CPU\n\tsystems]"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Interesting -- the MySQL/Linux graph is very similar to the graphs from\n> the .nl magazine posted last year. I think this suggests that the\n> \"MySQL deficiency\" was rather a performance bug in Linux, not in MySQL\n> itself ...\n\nThe latest benchmark we did was both with Solaris and Linux on the same \nbox, both showed such a drop. So I doubt its \"not in MySQL\", although it \nmight be possible to fix the load MySQL's usage pattern poses on a \nsystem, via the OS. And since MySQL 5.0.32 is less bad than 4.1.22 on \nthat system. We didn't have time to test 5.0.25 again, but .32 scaled \nbetter, so at least some of the scaling issues where actually fixed in \nMySQL itself.\n\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 02 Mar 2007 16:49:36 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "On Fri, 2007-03-02 at 09:01, Alvaro Herrera wrote:\n> > From: Kris Kennaway <[email protected]>\n> \n> > We have recently made significant progress on optimizing for MySQL\n> > running on an 8-core amd64 system. The graph of results may be found\n> > here:\n> > \n> > http://www.freebsd.org/~kris/scaling/scaling.png\n> > \n> > This shows the graph of MySQL transactions/second performed by a\n> > multi-threaded client workload against a local MySQL database with\n> > varying numbers of client threads, with identically configured FreeBSD\n> > and Linux systems on the same machine.\n> \n> Interesting -- the MySQL/Linux graph is very similar to the graphs from\n> the .nl magazine posted last year. I think this suggests that the\n> \"MySQL deficiency\" was rather a performance bug in Linux, not in MySQL\n> itself ...\n\nI rather think it's a combination of how MySQL does things and Linux not\nbeing optimized to handle that situation.\n\nIt may well be that the fixes to BSD have simply moved the point at\nwhich performance dives off quickly from 50 connections to 300 or\nsomething.\n\nI'd really like to see freebsd tested on the 32 thread Sun CPU that had\nsuch horrible performance with linux, and with many more threads to see\nif there's still a cliff there somewhere, and to see where postgresql's\ncliff would be as well. After all, the most interesting part of\nperformance graphs are the ones you see when the system is heading into\noverload.\n",
"msg_date": "Fri, 02 Mar 2007 10:35:01 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "And here is that latest benchmark we did, using a 8 dual core opteron \nSun Fire x4600. Unfortunately PostgreSQL seems to have some difficulties \nscaling over 8 cores, but not as bad as MySQL.\n\nhttp://tweakers.net/reviews/674\n\nBest regards,\n\nArjen\n\nArjen van der Meijden wrote:\n> Alvaro Herrera wrote:\n>> Interesting -- the MySQL/Linux graph is very similar to the graphs from\n>> the .nl magazine posted last year. I think this suggests that the\n>> \"MySQL deficiency\" was rather a performance bug in Linux, not in MySQL\n>> itself ...\n> \n> The latest benchmark we did was both with Solaris and Linux on the same \n> box, both showed such a drop. So I doubt its \"not in MySQL\", although it \n> might be possible to fix the load MySQL's usage pattern poses on a \n> system, via the OS. And since MySQL 5.0.32 is less bad than 4.1.22 on \n> that system. We didn't have time to test 5.0.25 again, but .32 scaled \n> better, so at least some of the scaling issues where actually fixed in \n> MySQL itself.\n> \n> Best regards,\n> \n> Arjen\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n",
"msg_date": "Mon, 05 Mar 2007 11:52:08 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "Arjen van der Meijden wrote:\n> And here is that latest benchmark we did, using a 8 dual core opteron \n> Sun Fire x4600. Unfortunately PostgreSQL seems to have some difficulties \n> scaling over 8 cores, but not as bad as MySQL.\n> \n> http://tweakers.net/reviews/674\n\nouch - do I read that right that even after tom's fixes for the \n\"regressions\" in 8.2.0 we are still 30% slower then the -HEAD checkout \nfrom the middle of the 8.2 development cycle ?\n\n\nStefan\n",
"msg_date": "Mon, 05 Mar 2007 12:44:52 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "Arjen van der Meijden wrote:\n> And here is that latest benchmark we did, using a 8 dual core opteron \n> Sun Fire x4600. Unfortunately PostgreSQL seems to have some difficulties \n> scaling over 8 cores, but not as bad as MySQL.\n> \n> http://tweakers.net/reviews/674\n\nHmm - interesting reading as always Arjen.\n\nThanks for the notice on this.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 05 Mar 2007 11:49:13 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "Stefan Kaltenbrunner wrote:\n> ouch - do I read that right that even after tom's fixes for the \n> \"regressions\" in 8.2.0 we are still 30% slower then the -HEAD checkout \n> from the middle of the 8.2 development cycle ?\n\nYes, and although I tested about 17 different cvs-checkouts, Tom and I \nweren't really able to figure out where \"it\" happened. So its a bit of a \nmystery why the performance is so much worse.\n\nBest regards,\n\nArjen\n",
"msg_date": "Mon, 05 Mar 2007 12:51:31 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "Arjen van der Meijden wrote:\n> Stefan Kaltenbrunner wrote:\n>> ouch - do I read that right that even after tom's fixes for the\n>> \"regressions\" in 8.2.0 we are still 30% slower then the -HEAD checkout\n>> from the middle of the 8.2 development cycle ?\n> \n> Yes, and although I tested about 17 different cvs-checkouts, Tom and I\n> weren't really able to figure out where \"it\" happened. So its a bit of a\n> mystery why the performance is so much worse.\n\ndouble ouch - losing that much in performance without an idea WHY it\nhappened is really unfortunate :-(\n\n\nStefan\n",
"msg_date": "Mon, 05 Mar 2007 21:17:07 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "Stefan Kaltenbrunner <[email protected]> writes:\n> Arjen van der Meijden wrote:\n>> Stefan Kaltenbrunner wrote:\n>>> ouch - do I read that right that even after tom's fixes for the\n>>> \"regressions\" in 8.2.0 we are still 30% slower then the -HEAD checkout\n>>> from the middle of the 8.2 development cycle ?\n>> \n>> Yes, and although I tested about 17 different cvs-checkouts, Tom and I\n>> weren't really able to figure out where \"it\" happened. So its a bit of a\n>> mystery why the performance is so much worse.\n\n> double ouch - losing that much in performance without an idea WHY it\n> happened is really unfortunate :-(\n\nKeep in mind that Arjen's test exercises some rather narrow scenarios;\nIIRC its performance is mostly determined by some complicated\nbitmap-indexscan cases. So that \"30% slower\" bit certainly doesn't\nrepresent an across-the-board figure. As best I can tell, the decisions\nthe planner happened to be making in late June were peculiarly nicely\nsuited to his test, but not so much for other cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Mar 2007 15:38:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD on 8 CPU\n\tsystems]"
},
{
"msg_contents": "On 5-3-2007 21:38 Tom Lane wrote:\n> Keep in mind that Arjen's test exercises some rather narrow scenarios;\n> IIRC its performance is mostly determined by some complicated\n> bitmap-indexscan cases. So that \"30% slower\" bit certainly doesn't\n> represent an across-the-board figure. As best I can tell, the decisions\n> the planner happened to be making in late June were peculiarly nicely\n> suited to his test, but not so much for other cases.\n\nTrue, its not written as a database-comparison-test, but as a \nplatform-comparison test. As I showed you back then, there where indeed \nquerytypes faster on the final version (I still have that database of \nexecuted queries on dev and 8.2 rc1), especially after your three \npatches. Still, its a pitty that both the general performance and \nscalability seem to be worse on these platforms.\n\nBest regards,\n\nArjen\n",
"msg_date": "Mon, 05 Mar 2007 21:50:58 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
},
{
"msg_contents": "Tom Lane wrote:\n> Stefan Kaltenbrunner <[email protected]> writes:\n>> Arjen van der Meijden wrote:\n>>> Stefan Kaltenbrunner wrote:\n>>>> ouch - do I read that right that even after tom's fixes for the\n>>>> \"regressions\" in 8.2.0 we are still 30% slower then the -HEAD checkout\n>>>> from the middle of the 8.2 development cycle ?\n>>> Yes, and although I tested about 17 different cvs-checkouts, Tom and I\n>>> weren't really able to figure out where \"it\" happened. So its a bit of a\n>>> mystery why the performance is so much worse.\n> \n>> double ouch - losing that much in performance without an idea WHY it\n>> happened is really unfortunate :-(\n> \n> Keep in mind that Arjen's test exercises some rather narrow scenarios;\n> IIRC its performance is mostly determined by some complicated\n> bitmap-indexscan cases. So that \"30% slower\" bit certainly doesn't\n> represent an across-the-board figure. As best I can tell, the decisions\n> the planner happened to be making in late June were peculiarly nicely\n> suited to his test, but not so much for other cases.\n\nunderstood - I was not trying to imply that we suffer a 30% performance\ndrop overall.\nBut still it means we know about a set of queries that we once could\nhandle faster than we can now ...\n\nStefan\n",
"msg_date": "Wed, 07 Mar 2007 19:48:44 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [[email protected]: Progress on scaling of FreeBSD\n\ton 8 CPU systems]"
}
] |