[ { "msg_contents": "Dan,\n\n> I'm doing some performance profiling with a simple two-table query:\n\nPlease send EXPLAIN ANALYZE for each query, and not just EXPLAIN. Thanks!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Nov 2004 11:14:43 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyzer is clueless" }, { "msg_contents": "Hello,\n\nHave you tried increasing the statistics target for orderdate and \nrerunning analyze?\n\nSincerely,\n\nJoshua D. Drake\n\n\nDavid Brown wrote:\n> I'm doing some performance profiling with a simple two-table query:\n> \n> SELECT L.\"ProductID\", sum(L.\"Amount\")\n> FROM \"drinv\" H\n> JOIN \"drinvln\" L ON L.\"OrderNo\" = H.\"OrderNo\"\n> WHERE\n> (\"OrderDate\" between '2003-01-01' AND '2003-04-30')\n> GROUP BY L.\"ProductID\"\n> \n> drinv and drinvln have about 100,000 and 3,500,000 rows respectively. Actual data size in the large table is 500-600MB. OrderNo is indexed in both tables, as is OrderDate.\n> \n> The environment is PGSQL 8 on Win2k with 512MB RAM (results are similar to 7.3 from Mammoth). I've tried tweaking various conf parameters, but apart from using up memory, nothing seems to have had a tangible effect - the Analyzer doesn't seem to take resources into account like some of the doco suggests.\n> \n> The date selection represents about 5% of the range. Here's the plan summaries:\n> \n> Three months (2003-01-01 to 2003-03-30) = 1 second\n> \n> HashAggregate (cost=119365.53..119368.74 rows=642 width=26)\n> -> Nested Loop (cost=0.00..118791.66 rows=114774 width=26)\n> -> Index Scan using \"drinv_OrderDate\" on drinv h (cost=0.00..200.27 rows=3142 width=8)\n> Index Cond: ((\"OrderDate\" >= '2003-01-01'::date) AND (\"OrderDate\" <= '2003-03-30'::date))\n> -> Index Scan using \"drinvln_OrderNo\" on drinvln l (cost=0.00..28.73 rows=721 width=34)\n> Index Cond: (l.\"OrderNo\" = \"outer\".\"OrderNo\")\n> \n> \n> Four months (2003-01-01 to 2003-04-30) = 60 seconds\n> \n> HashAggregate (cost=126110.53..126113.74 rows=642 width=26)\n> -> Hash Join (cost=277.55..125344.88 rows=153130 width=26)\n> Hash Cond: (\"outer\".\"OrderNo\" = \"inner\".\"OrderNo\")\n> -> Seq Scan on drinvln l (cost=0.00..106671.35 rows=3372935 width=34)\n> -> Hash (cost=267.07..267.07 rows=4192 width=8)\n> -> Index Scan using \"drinv_OrderDate\" on drinv h (cost=0.00..267.07 rows=4192 width=8)\n> Index Cond: ((\"OrderDate\" >= '2003-01-01'::date) AND (\"OrderDate\" <= '2003-04-30'::date))\n> \n> \n> Four months (2003-01-01 to 2003-04-30) with Seq_scan disabled = 75 seconds\n> \n> \n> HashAggregate (cost=130565.83..130569.04 rows=642 width=26)\n> -> Merge Join (cost=519.29..129800.18 rows=153130 width=26)\n> Merge Cond: (\"outer\".\"OrderNo\" = \"inner\".\"OrderNo\")\n> -> Sort (cost=519.29..529.77 rows=4192 width=8)\n> Sort Key: h.\"OrderNo\"\n> -> Index Scan using \"drinv_OrderDate\" on drinv h (cost=0.00..267.07 rows=4192 width=8)\n> Index Cond: ((\"OrderDate\" >= '2003-01-01'::date) AND (\"OrderDate\" <= '2003-04-30'::date))\n> -> Index Scan using \"drinvln_OrderNo\" on drinvln l (cost=0.00..119296.29 rows=3372935 width=34)\n> \n> Statistics were run on each table before query execution. 
The random page cost was lowered to 2, but as you can see, the estimated costs are wild anyway.\n> \n> As a comparison, MS SQL Server took less than 15 seconds, or 4 times faster.\n> \n> MySQL (InnoDB) took 2 seconds, which is 30 times faster.\n> \n> The query looks straightforward to me (it might be clearer with a subselect), so what on earth is wrong?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Wed, 17 Nov 2004 13:02:59 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyzer is clueless" }, { "msg_contents": "On Thu, 2004-11-18 at 02:08, David Brown wrote:\n> Statistics were run on each table before query execution. The random page cost was lowered to 2, but as you can see, the estimated costs are wild anyway.\n> \n> As a comparison, MS SQL Server took less than 15 seconds, or 4 times faster.\n> \n> MySQL (InnoDB) took 2 seconds, which is 30 times faster.\n> \n> The query looks straightforward to me (it might be clearer with a subselect), so what on earth is wrong?\n\nThe query is, as you say, straightforward.\n\nYou are clearly working with a query that is on the very edge of the\ndecision between using an index or not. \n\nThe main issue is that PostgreSQL's default histogram statistics setting\nis lower than other RDBMS. This means that it is less able to\ndiscriminate between cases such as yours that are close to the edge.\nThis is a trade-off between run-time of the ANALYZE command and the\nbenefit it produces. As Joshua suggests, increasing the statistics\ntarget for this table will likely allow the optimizer to correctly\ndetermine the selectivity of the index and take the right path.\n\nIf this is a general RDBMS comparison, you may wish to extend the\nsystem's default_statistics_target = 80 or at least > 10.\n\nTo improve this query, you may wish to extend the table's statistics\ntarget using:\n\nALTER TABLE \"drinv\"\n\tALTER COLUMN OrderDate SET STATISTICS 100;\n\nwhich should allow the planner to more accurately estimate statistics\nand thereby select an index, if appropriate.\n\nThe doco has recently been changed with regard to effective_cache_size;\nyou don't mention what beta release level you're using. That is the only\nplanner parameter that takes cache size into account, so any other\nchanges would certainly have zero effect on this *plan* though might\nstill benefit execution time.\n\nPlease post EXPLAIN ANALYZE output for any further questions.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 17 Nov 2004 22:32:48 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyzer is clueless" }, { "msg_contents": "On Wed, Nov 17, 2004 at 10:32:48PM +0000, Simon Riggs wrote:\n> The main issue is that PostgreSQL's default histogram statistics setting\n> is lower than other RDBMS. This means that it is less able to\n> discriminate between cases such as yours that are close to the edge.\n> This is a trade-off between run-time of the ANALYZE command and the\n> benefit it produces. 
As Joshua suggests, increasing the statistics\n> target for this table will likely allow the optimizer to correctly\n> determine the selectivity of the index and take the right path.\n\nIs there still a good reason to have the histogram stats so low? Should\nthe default be changed to more like 100 at this point?\n\nAlso, how extensively does the planner use n_distinct, null_frac,\nreltuples and the histogram to see what the odds are of finding a unique\nvalue or a low number of values? I've seen cases where it seems the\nplaner doesn't think it'll be getting a unique value or a small set of\nvalues even though stats indicates that it should be.\n\nOne final question... would there be interest in a process that would\ndynamically update the histogram settings for tables based on how\ndistinct/unique each field was?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Wed, 17 Nov 2004 18:20:09 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyzer is clueless" }, { "msg_contents": "Jim,\n\n> Is there still a good reason to have the histogram stats so low? Should\n> the default be changed to more like 100 at this point?\n\nLow overhead. This is actually a TODO for me for 8.1. I need to find some \ntest cases to set a differential level of histogram access for indexed \nfields, so like 10 for most fields but 100/150/200 for indexed fields.\n\nHowever, I got stalled on finding test cases and then ran out of time.\n\n> Also, how extensively does the planner use n_distinct, null_frac,\n> reltuples and the histogram to see what the odds are of finding a unique\n> value or a low number of values? I've seen cases where it seems the\n> planer doesn't think it'll be getting a unique value or a small set of\n> values even though stats indicates that it should be.\n>\n> One final question... would there be interest in a process that would\n> dynamically update the histogram settings for tables based on how\n> distinct/unique each field was?\n\nWell, the process by which the analyzer decides that a field is unique could \nprobably use some troubleshooting. And we always, always could use \nsuggestions/tests/help with the query planner.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Nov 2004 17:41:16 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyzer is clueless" }, { "msg_contents": ">> I've seen cases where it seems the\n>> planer doesn't think it'll be getting a unique value or a small set of\n>> values even though stats indicates that it should be.\n\nA test case exhibiting the problem would be helpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 2004 20:57:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyzer is clueless " }, { "msg_contents": "I'm doing some performance profiling with a simple two-table query:\n\nSELECT L.\"ProductID\", sum(L.\"Amount\")\nFROM \"drinv\" H\nJOIN \"drinvln\" L ON L.\"OrderNo\" = H.\"OrderNo\"\nWHERE\n(\"OrderDate\" between '2003-01-01' AND '2003-04-30')\nGROUP BY L.\"ProductID\"\n\ndrinv and drinvln have about 100,000 and 3,500,000 rows respectively. Actual data size in the large table is 500-600MB. 
OrderNo is indexed in both tables, as is OrderDate.\n\nThe environment is PGSQL 8 on Win2k with 512MB RAM (results are similar to 7.3 from Mammoth). I've tried tweaking various conf parameters, but apart from using up memory, nothing seems to have had a tangible effect - the Analyzer doesn't seem to take resources into account like some of the doco suggests.\n\nThe date selection represents about 5% of the range. Here's the plan summaries:\n\nThree months (2003-01-01 to 2003-03-30) = 1 second\n\nHashAggregate (cost=119365.53..119368.74 rows=642 width=26)\n -> Nested Loop (cost=0.00..118791.66 rows=114774 width=26)\n -> Index Scan using \"drinv_OrderDate\" on drinv h (cost=0.00..200.27 rows=3142 width=8)\n Index Cond: ((\"OrderDate\" >= '2003-01-01'::date) AND (\"OrderDate\" <= '2003-03-30'::date))\n -> Index Scan using \"drinvln_OrderNo\" on drinvln l (cost=0.00..28.73 rows=721 width=34)\n Index Cond: (l.\"OrderNo\" = \"outer\".\"OrderNo\")\n\n\nFour months (2003-01-01 to 2003-04-30) = 60 seconds\n\nHashAggregate (cost=126110.53..126113.74 rows=642 width=26)\n -> Hash Join (cost=277.55..125344.88 rows=153130 width=26)\n Hash Cond: (\"outer\".\"OrderNo\" = \"inner\".\"OrderNo\")\n -> Seq Scan on drinvln l (cost=0.00..106671.35 rows=3372935 width=34)\n -> Hash (cost=267.07..267.07 rows=4192 width=8)\n -> Index Scan using \"drinv_OrderDate\" on drinv h (cost=0.00..267.07 rows=4192 width=8)\n Index Cond: ((\"OrderDate\" >= '2003-01-01'::date) AND (\"OrderDate\" <= '2003-04-30'::date))\n\n\nFour months (2003-01-01 to 2003-04-30) with Seq_scan disabled = 75 seconds\n\n\nHashAggregate (cost=130565.83..130569.04 rows=642 width=26)\n -> Merge Join (cost=519.29..129800.18 rows=153130 width=26)\n Merge Cond: (\"outer\".\"OrderNo\" = \"inner\".\"OrderNo\")\n -> Sort (cost=519.29..529.77 rows=4192 width=8)\n Sort Key: h.\"OrderNo\"\n -> Index Scan using \"drinv_OrderDate\" on drinv h (cost=0.00..267.07 rows=4192 width=8)\n Index Cond: ((\"OrderDate\" >= '2003-01-01'::date) AND (\"OrderDate\" <= '2003-04-30'::date))\n -> Index Scan using \"drinvln_OrderNo\" on drinvln l (cost=0.00..119296.29 rows=3372935 width=34)\n\nStatistics were run on each table before query execution. The random page cost was lowered to 2, but as you can see, the estimated costs are wild anyway.\n\nAs a comparison, MS SQL Server took less than 15 seconds, or 4 times faster.\n\nMySQL (InnoDB) took 2 seconds, which is 30 times faster.\n\nThe query looks straightforward to me (it might be clearer with a subselect), so what on earth is wrong?\n", "msg_date": "Thu, 18 Nov 2004 02:08:33 +0000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Analyzer is clueless" } ]
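A concrete form of the statistics fix suggested above, using the table and column names from this thread. This is only a sketch: 100 is the target value Simon mentions rather than a tuned figure, and the query is the four-month case whose row estimate was off.

-- Give the planner a finer histogram on the column used in the range filter,
-- then rebuild that table's statistics.
ALTER TABLE "drinv" ALTER COLUMN "OrderDate" SET STATISTICS 100;
ANALYZE "drinv";

-- Re-check estimated vs. actual rows for the problem range.
EXPLAIN ANALYZE
SELECT L."ProductID", sum(L."Amount")
FROM "drinv" H
JOIN "drinvln" L ON L."OrderNo" = H."OrderNo"
WHERE "OrderDate" BETWEEN '2003-01-01' AND '2003-04-30'
GROUP BY L."ProductID";

If the estimate for the index scan on "drinv_OrderDate" moves close to the actual row count, the planner should be able to stay with the nested-loop plan for the four-month range as well; if not, raising default_statistics_target and re-analyzing the whole database is the broader version of the same change.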
[ { "msg_contents": "I understand that the sort_mem conf setting affects queries with order by, etc., and the doc mentions that it is used in create index. Does sort_mem affect the updating of indexes, i.e., can the sort_mem setting affect the performance of inserts?\n\n- DAP\n----------------------------------------------------------------------------------\nDavid Parker Tazz Networks (401) 709-5130\n \n", "msg_date": "Wed, 17 Nov 2004 16:31:31 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "sort_mem affect on inserts?" }, { "msg_contents": "David,\n\n> I understand that the sort_mem conf setting affects queries with order by,\n> etc., and the doc mentions that it is used in create index. Does sort_mem\n> affect the updating of indexes, i.e., can the sort_mem setting affect the\n> performance of inserts?\n\nOnly if the table has Foriegn Keys whose lookup might require a large sort. \nOtherwise, no.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Nov 2004 14:07:30 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sort_mem affect on inserts?" }, { "msg_contents": "On 11/17/2004 5:07 PM, Josh Berkus wrote:\n\n> David,\n> \n>> I understand that the sort_mem conf setting affects queries with order by,\n>> etc., and the doc mentions that it is used in create index. Does sort_mem\n>> affect the updating of indexes, i.e., can the sort_mem setting affect the\n>> performance of inserts?\n> \n> Only if the table has Foriegn Keys whose lookup might require a large sort. \n> Otherwise, no.\n> \n\nHmmm ... what type of foreign key lookup would that be? None of the RI \ngenerated queries has any order by clause.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Fri, 19 Nov 2004 09:34:14 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sort_mem affect on inserts?" }, { "msg_contents": "Jan,\n\n> Hmmm ... what type of foreign key lookup would that be? None of the RI\n> generated queries has any order by clause.\n\nI was under the impression that work_mem would be used for the index if there \nwas an index for the RI lookup. Wrong?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 19 Nov 2004 12:25:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sort_mem affect on inserts?" }, { "msg_contents": "Josh Berkus wrote:\n> I was under the impression that work_mem would be used for the index if there \n> was an index for the RI lookup. Wrong?\n\nYes -- work_mem is not used for doing index scans, whether for RI \nlookups or otherwise.\n\n-Neil\n", "msg_date": "Mon, 22 Nov 2004 01:51:58 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sort_mem affect on inserts?" } ]
[ { "msg_contents": "Hi,\n\nin March there was an interesting discussion on the list with the \nsubject \"postgres eating CPU on HP9000\".\n\nNow I'm the same problem on a Dell dual processor machine.\n\nAnybody know if there was a solution?\n\nThanks\n\nPiergiorgio\n", "msg_date": "Wed, 17 Nov 2004 23:13:52 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgres eating CPU" }, { "msg_contents": "\n> in March there was an interesting discussion on the list with the\n> subject \"postgres eating CPU on HP9000\".\n\nLink, please?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Nov 2004 14:27:13 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU" }, { "msg_contents": "Josh Berkus wrote:\n\n>>in March there was an interesting discussion on the list with the\n>>subject \"postgres eating CPU on HP9000\".\n>> \n>>\n>\n>Link, please?\n>\n> \n>\n\n http://archives.postgresql.org/pgsql-performance/2004-03/msg00380.php\n\n\n \n", "msg_date": "Wed, 17 Nov 2004 23:51:11 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres eating CPU" }, { "msg_contents": "\n> >>in March there was an interesting discussion on the list with the\n> >>subject \"postgres eating CPU on HP9000\".\n\nAha, this one. Yeah, I believe that they upgraded to 7.4 inorder to deal \nwith REINDEX issues.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 17 Nov 2004 15:23:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> in March there was an interesting discussion on the list with the\n> subject \"postgres eating CPU on HP9000\".\n> http://archives.postgresql.org/pgsql-performance/2004-03/msg00380.php\n\nReviewing that, the problem is most likely that (a) they didn't have\nmax_fsm_pages set high enough to cover the database, and (b) they were\nrunning 7.3.* which is prone to index bloat.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 2004 18:38:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU " }, { "msg_contents": "Josh Berkus wrote:\n\n>>in March there was an interesting discussion on the list with the\n>>subject \"postgres eating CPU on HP9000\".\n>>\n\nhttp://archives.postgresql.org/pgsql-performance/2004-03/msg00380.php\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Wed, 17 Nov 2004 19:30:36 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres eating CPU" } ]
[ { "msg_contents": "French encodings vs. Cyrillic encodings? Characters coming thru the mail in some encoding that don't get translated properly.\r\n\r\nHis name is Herve Piedvache, where the 2nd 'e' in Herve is an accented character. It must somehow do weird things to your terminal when it's trying to map that into the encoding which you use.\r\n\r\nMessages from you also come out in my mailer; lots of '1;2c1;2c' sequences (one - semi-colon - 2 - character-c and repeat)\r\n\r\ncheers,\r\n\r\n--Tim\r\n\r\n-----Original Message-----\r\nFrom: [email protected]\r\n[mailto:[email protected]]On Behalf Of Oleg\r\nBartunov\r\nSent: Thursday, November 18, 2004 11:34 AM\r\nTo: Herve Piedvache\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Tsearch2 really slower than ilike ?\r\n\r\n\r\n1;2c1;2c1;2cBlin !\r\n\r\nwhat's happenning with my terminal when I read messagess from this guy ?\r\nI don't even know how to call him - I see just Herv?\r\n\r\n \tOleg\r\n1;2c1;2c1;2c1;2c\r\n1;2cOn Thu, 18 Nov 2004, [iso-8859-15] Herv? Piedvache wrote:\r\n\r\n> Le Jeudi 18 Novembre 2004 10:37, Oleg Bartunov a ?crit :\r\n>> Have you run 'vacuum analyze' ?\r\n>\r\n> Yep every night VACUUM FULL VERBOSE ANALYZE; of all the database !\r\n>\r\n>> 1;2c1;2c1;2c\r\n>> 1;2c1;2c1;2cmy desktop is very simple PIII, 512 Mb RAM.\r\n>> 1;2c1;2c11;2c1;2c1;2c;2c Oleg1;2c1;2c1;2c\r\n>> 11;2c1;2c1;2c;2c1;2c1;2c\r\n>\r\n> YOU send strange caracters ! ;o)\r\n>\r\n>> 1;2c1;2c1;2cOn Thu, 18 Nov 2004, [iso-8859-15] Herv? Piedvache wrote:\r\n>>> Oleg,\r\n>>>\r\n>>> Le Mercredi 17 Novembre 2004 18:23, Oleg Bartunov a ?crit :\r\n>>>>> Sorry but when I do your request I get :\r\n>>>>> # select id_site from site where idx_site_name @@  'livejourn';\r\n>>>>> ERROR:  type \" \" d1;2c1;2c1;2c1;2coes not exist\r\n>>>>\r\n>>>> no idea :) btw, what version of postgresql and OS you're running.\r\n>>>> Could you try minimal test - check sql commands from tsearch2 sources,\r\n>>>> some basic queries from tsearch2 documentation, tutorials.\r\n>>>>\r\n>>>> btw, your query should looks like\r\n>>>> select id_site from site_rss where idx_site_name @@ 'livejourn';\r\n>>>> ^^^^^^^^\r\n>>>>\r\n>>>> How did you run your queries at all ? I mean your first message about\r\n>>>> poor tsearch2 performance.\r\n>>>\r\n>>> I don't know what happend yesterday ... it's running now ...\r\n>>>\r\n>>> You sent me :\r\n>>> zz=# explain analyze select id_site from site_rss where idx_site_name\r\n>>> @@  'livejourn';\r\n>>>                                                              QUERY PLAN\r\n>>> -------------------------------------------------------------------------\r\n>>> ---------------------------------------------------------- Index Scan\r\n>>> using ix_idx_site_name on site_rss  (cost=0.00..733.62 rows=184 width=4)\r\n>>> (actual time=0.339..39.183 rows=1737 loops=1)\r\n>>>     Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\r\n>>>     Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\r\n>>>   Total runtime: 40.997 ms\r\n>>> (4 rows)\r\n>>>\r\n>>>> It's really fast ! 
So, I don't understand your problem.\r\n>>>> I run query on my desktop machine, nothing special.\r\n>>>\r\n>>> I get this :\r\n>>> QUERY PLAN\r\n>>> -------------------------------------------------------------------------\r\n>>> ---------------------------------------------------------------- Index\r\n>>> Scan using ix_idx_site_name on site_rss s (cost=0.00..574.19 rows=187\r\n>>> width=24) (actual time=105.097..7157.277 rows=388 loops=1)\r\n>>> Index Cond: (idx_site_name @@ '\\'livejourn\\''::tsquery)\r\n>>> Filter: (idx_site_name @@ '\\'livejourn\\''::tsquery)\r\n>>> Total runtime: 7158.576 ms\r\n>>> (4 rows)\r\n>>>\r\n>>> With the ilike I get :\r\n>>> QUERY PLAN\r\n>>> -------------------------------------------------------------------------\r\n>>> ----------------------------------- Seq Scan on site_rss s\r\n>>> (cost=0.00..8360.23 rows=1 width=24) (actual time=8.195..879.440 rows=404\r\n>>> loops=1)\r\n>>> Filter: (site_name ~~* '%livejourn%'::text)\r\n>>> Total runtime: 882.600 ms\r\n>>> (3 rows)\r\n>>>\r\n>>> I don't know what is your desktop ... but I'm using PostgreSQL 7.4.6, on\r\n>>> Debian Woody with a PC Bi-PIII 933 Mhz and 1 Gb of memory ... the server\r\n>>> is dedicated to this database ... !!\r\n>>>\r\n>>> I have no idea !\r\n>>>\r\n>>> Regards,\r\n>>\r\n>> \tRegards,\r\n>> \t\tOleg\r\n>> _____________________________________________________________\r\n>> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\r\n>> Sternberg Astronomical Institute, Moscow University (Russia)\r\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\r\n>> phone: +007(095)939-16-83, +007(095)939-23-83\r\n>> ---------------------------(end of broadcast)---------------------------\r\n>> TIP 8: explain analyze is your friend\r\n>\r\n>\r\n\r\n \tRegards,\r\n \t\tOleg\r\n_____________________________________________________________\r\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\r\nSternberg Astronomical Institute, Moscow University (Russia)\r\nInternet: [email protected], http://www.sai.msu.su/~megera/\r\nphone: +007(095)939-16-83, +007(095)939-23-83\r\n---------------------------(end of broadcast)---------------------------\r\nTIP 6: Have you searched our list archives?\r\n\r\n http://archives.postgresql.org\r\n", "msg_date": "Thu, 18 Nov 2004 11:53:08 +0100", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 really slower than ilike ?" } ]
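Stripping out the terminal noise, the comparison being made in this thread comes down to the following steps, assuming idx_site_name is a tsvector column maintained by tsearch2 and that the GiST index from the plans already exists:

-- Refresh statistics so both plans are costed from current data.
VACUUM ANALYZE site_rss;

-- Full-text search through the tsearch2 index ...
EXPLAIN ANALYZE
SELECT id_site FROM site_rss WHERE idx_site_name @@ 'livejourn';

-- ... versus the sequential-scan ILIKE version.
EXPLAIN ANALYZE
SELECT id_site FROM site_rss WHERE site_name ILIKE '%livejourn%';

The open question in the thread is why the index scan that takes about 40 ms on Oleg's machine takes about 7 seconds on Hervé's, despite the nearly identical plans.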
[ { "msg_contents": "Hello All,\r\n\r\nI have a setup with a Dell Poweredge 2650 with Red Hat and Postgres 7.4.5 with a database with about 27GB of data. The table in question has about 35 million rows.\r\n\r\nI am running the following query:\r\n\r\nSELECT *\r\nFROM mb_fix_message\r\nWHERE msg_client_order_id IN (\r\n\tSELECT msg_client_order_id\r\n\tFROM mb_fix_message\r\n\tWHERE msg_log_time >= '2004-06-01'\r\n\t\tAND msg_log_time < '2004-06-01 13:30:00.000'\r\n\t\tAND msg_message_type IN ('D','G')\r\n\t\tAND mb_ord_type = '1'\r\n\t)\r\n\tAND msg_log_time > '2004-06-01'\r\n\tAND msg_log_time < '2004-06-01 23:59:59.999'\r\n\tAND msg_message_type = '8'\r\n\tAND (mb_raw_text LIKE '%39=1%' OR mb_raw_text LIKE '%39=2%');\r\n\r\nwith the following plan:\r\n\r\n QUERY PLAN\r\nNested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\r\n -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31 rows=2539 width=526)\r\n Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp without time zone))\r\n Filter: (((msg_message_type)::text = '8'::text) AND (((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~ '%39=2%'::text)))\r\n -> Index Scan using mfi_client_ordid on mb_fix_message (cost=0.00..445.56 rows=1 width=18)\r\n Index Cond: ((\"outer\".msg_client_order_id)::text = (mb_fix_message.msg_client_order_id)::text)\r\n Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without time zone) AND ((msg_message_type)::text = 'D'::text) OR ((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text = '1'::text))\r\n\r\nWhile running, this query produces 100% iowait usage on its processor and takes a ungodly amount of time (about an hour).\r\n\r\nThe postgres settings are as follows:\r\n\r\nshared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\r\nsort_mem = 262144 # min 64, size in KB\r\n\r\nAnd the /etc/sysctl.conf has:\r\nkernel.shmall = 274235392\r\nkernel.shmmax = 274235392\r\n\r\nThe system has 4GB of RAM.\r\n\r\nI am pretty sure of these settings, but only from my reading of the docs and others' recommendations online.\r\n\r\nThanks,\r\n\r\nAndrew Janian\r\nOMS Development\r\nScottrade Financial Services\r\n(314) 965-1555 x 1513\r\nCell: (314) 369-2083\r\n", "msg_date": "Thu, 18 Nov 2004 07:42:20 -0600", "msg_from": "\"Andrew Janian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query Performance and IOWait" }, { "msg_contents": "\"Andrew Janian\" <[email protected]> writes:\n> QUERY PLAN\n> Nested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n> -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31 rows=2539 width=526)\n> Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp without time zone))\n> Filter: (((msg_message_type)::text = '8'::text) AND (((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~ '%39=2%'::text)))\n> -> Index Scan using mfi_client_ordid on mb_fix_message (cost=0.00..445.56 rows=1 width=18)\n> Index Cond: ((\"outer\".msg_client_order_id)::text = (mb_fix_message.msg_client_order_id)::text)\n> Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without time zone) AND ((msg_message_type)::text = 'D'::text) OR ((msg_message_type)::text = 'G'::text)) AND 
((mb_ord_type)::text = '1'::text))\n\n> While running, this query produces 100% iowait usage on its processor and takes a ungodly amount of time (about an hour).\n\nThis plan looks fairly reasonable if the rowcount estimates are\naccurate. Have you ANALYZEd the table lately? You might need to\nbump up the statistics target for the msg_log_time column to improve\nthe quality of the estimates. It would be useful to see EXPLAIN\nANALYZE results too (yes I know it'll take you an hour to get them...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 2004 10:39:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance and IOWait " }, { "msg_contents": "Andrew,\n\nDell's aren't well known for their disk performance, apparently most of \nthe perc controllers sold with dell's are actually adaptec controllers. \nAlso apparently they do not come with the battery required to use the \nbattery backed up write cache ( In fact according to some Dell won't \neven sell the battery to you). Also Dell's monitoring software is quite \na memory hog.\n\nHave you looked at top ?, and also hdparm -Tt /dev/sd?\n\nDave\n\nAndrew Janian wrote:\n\n>Hello All,\n>\n>I have a setup with a Dell Poweredge 2650 with Red Hat and Postgres 7.4.5 with a database with about 27GB of data. The table in question has about 35 million rows.\n>\n>I am running the following query:\n>\n>SELECT *\n>FROM mb_fix_message\n>WHERE msg_client_order_id IN (\n>\tSELECT msg_client_order_id\n>\tFROM mb_fix_message\n>\tWHERE msg_log_time >= '2004-06-01'\n>\t\tAND msg_log_time < '2004-06-01 13:30:00.000'\n>\t\tAND msg_message_type IN ('D','G')\n>\t\tAND mb_ord_type = '1'\n>\t)\n>\tAND msg_log_time > '2004-06-01'\n>\tAND msg_log_time < '2004-06-01 23:59:59.999'\n>\tAND msg_message_type = '8'\n>\tAND (mb_raw_text LIKE '%39=1%' OR mb_raw_text LIKE '%39=2%');\n>\n>with the following plan:\n>\n> QUERY PLAN\n>Nested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n> -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31 rows=2539 width=526)\n> Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp without time zone))\n> Filter: (((msg_message_type)::text = '8'::text) AND (((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~ '%39=2%'::text)))\n> -> Index Scan using mfi_client_ordid on mb_fix_message (cost=0.00..445.56 rows=1 width=18)\n> Index Cond: ((\"outer\".msg_client_order_id)::text = (mb_fix_message.msg_client_order_id)::text)\n> Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without time zone) AND ((msg_message_type)::text = 'D'::text) OR ((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text = '1'::text))\n>\n>While running, this query produces 100% iowait usage on its processor and takes a ungodly amount of time (about an hour).\n>\n>The postgres settings are as follows:\n>\n>shared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\n>sort_mem = 262144 # min 64, size in KB\n>\n>And the /etc/sysctl.conf has:\n>kernel.shmall = 274235392\n>kernel.shmmax = 274235392\n>\n>The system has 4GB of RAM.\n>\n>I am pretty sure of these settings, but only from my reading of the docs and others' recommendations online.\n>\n>Thanks,\n>\n>Andrew Janian\n>OMS Development\n>Scottrade Financial Services\n>(314) 965-1555 x 1513\n>Cell: (314) 
369-2083\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Thu, 18 Nov 2004 12:14:00 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance and IOWait" }, { "msg_contents": "On Thu, 18 Nov 2004 12:14:00 -0500\nDave Cramer <[email protected]> wrote:\n\n> Andrew,\n> \n> Dell's aren't well known for their disk performance, apparently most\n> of the perc controllers sold with dell's are actually adaptec\n> controllers. Also apparently they do not come with the battery\n> required to use the battery backed up write cache ( In fact according\n> to some Dell won't even sell the battery to you). Also Dell's\n> monitoring software is quite a memory hog.\n> \n> Have you looked at top ?, and also hdparm -Tt /dev/sd?\n\n I haven't seen any PERC controllers that were really Adaptec ones,\n but I for one quit buying Dell RAID controllers several years ago\n because of poor Linux support and performance. \n\n On one machine (not a PostgreSQL server) we saw a 20% speed\n improvement by switching to software raid. \n\n If you have a test machine, I would suggest moving the data to a\n box without a RAID controller and see if you get better results. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 19 Nov 2004 15:13:37 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance and IOWait" } ]
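A sketch of the statistics change Tom suggests, using the column from the plan; 200 is an arbitrary target, and the query still needs to be re-run under EXPLAIN ANALYZE afterwards to see whether the row estimates (2539 for the outer index scan, 1 for the inner) were the real problem:

-- Give the planner a finer histogram on the timestamp used in both range filters.
ALTER TABLE mb_fix_message ALTER COLUMN msg_log_time SET STATISTICS 200;
ANALYZE mb_fix_message;

-- Then run EXPLAIN ANALYZE on the original query and compare estimated vs. actual
-- rows for the index scan on mfi_log_time.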
[ { "msg_contents": "Andrew,\n\nIt seems that you could combine the subquery's WHERE clause with the main\nquery's to produce a simpler query, i.e. one without a subquery.\n\nRick\n\n\n\n\n \n \"Andrew Janian\" \n <[email protected]> To: <[email protected]> \n Sent by: cc: \n pgsql-performance-owner@pos Subject: [PERFORM] Query Performance and IOWait \n tgresql.org \n \n \n 11/18/2004 08:42 AM \n \n \n\n\n\n\nHello All,\n\nI have a setup with a Dell Poweredge 2650 with Red Hat and Postgres 7.4.5\nwith a database with about 27GB of data. The table in question has about\n35 million rows.\n\nI am running the following query:\n\nSELECT *\nFROM mb_fix_message\nWHERE msg_client_order_id IN (\n SELECT msg_client_order_id\n FROM mb_fix_message\n WHERE msg_log_time >= '2004-06-01'\n AND msg_log_time < '2004-06-01 13:30:00.000'\n AND msg_message_type IN ('D','G')\n AND mb_ord_type = '1'\n )\n AND msg_log_time > '2004-06-01'\n AND msg_log_time < '2004-06-01 23:59:59.999'\n AND msg_message_type = '8'\n AND (mb_raw_text LIKE '%39=1%' OR mb_raw_text LIKE '%39=2%');\n\nwith the following plan:\n\nQUERY PLAN\nNested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31\nrows=2539 width=526)\n Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp\nwithout time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp\nwithout time zone))\n Filter: (((msg_message_type)::text = '8'::text) AND\n(((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~\n'%39=2%'::text)))\n -> Index Scan using mfi_client_ordid on mb_fix_message\n(cost=0.00..445.56 rows=1 width=18)\n Index Cond: ((\"outer\".msg_client_order_id)::text =\n(mb_fix_message.msg_client_order_id)::text)\n Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without\ntime zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without\ntime zone) AND ((msg_message_type)::text = 'D'::text) OR\n((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text =\n'1'::text))\n\nWhile running, this query produces 100% iowait usage on its processor and\ntakes a ungodly amount of time (about an hour).\n\nThe postgres settings are as follows:\n\nshared_buffers = 32768 # min 16, at least max_connections*2, 8KB\neach\nsort_mem = 262144 # min 64, size in KB\n\nAnd the /etc/sysctl.conf has:\nkernel.shmall = 274235392\nkernel.shmmax = 274235392\n\nThe system has 4GB of RAM.\n\nI am pretty sure of these settings, but only from my reading of the docs\nand others' recommendations online.\n\nThanks,\n\nAndrew Janian\nOMS Development\nScottrade Financial Services\n(314) 965-1555 x 1513\nCell: (314) 369-2083\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n\n", "msg_date": "Thu, 18 Nov 2004 08:56:41 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Query Performance and IOWait" } ]
[ { "msg_contents": "Actually, unfortunately, that won't work. The subquery gets a list of message IDs and then the outer query gets the responses to those messages.\n\nAlso, I dumped this data and imported it all to ms sql server and then ran it there. The query ran in 2s.\n\nAndrew\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]\nSent: Thursday, November 18, 2004 7:57 AM\nTo: Andrew Janian\nCc: [email protected];\[email protected]\nSubject: Re: [PERFORM] Query Performance and IOWait\n\n\nAndrew,\n\nIt seems that you could combine the subquery's WHERE clause with the main\nquery's to produce a simpler query, i.e. one without a subquery.\n\nRick\n\n\n\n\n \n \"Andrew Janian\" \n <[email protected]> To: <[email protected]> \n Sent by: cc: \n pgsql-performance-owner@pos Subject: [PERFORM] Query Performance and IOWait \n tgresql.org \n \n \n 11/18/2004 08:42 AM \n \n \n\n\n\n\nHello All,\n\nI have a setup with a Dell Poweredge 2650 with Red Hat and Postgres 7.4.5\nwith a database with about 27GB of data. The table in question has about\n35 million rows.\n\nI am running the following query:\n\nSELECT *\nFROM mb_fix_message\nWHERE msg_client_order_id IN (\n SELECT msg_client_order_id\n FROM mb_fix_message\n WHERE msg_log_time >= '2004-06-01'\n AND msg_log_time < '2004-06-01 13:30:00.000'\n AND msg_message_type IN ('D','G')\n AND mb_ord_type = '1'\n )\n AND msg_log_time > '2004-06-01'\n AND msg_log_time < '2004-06-01 23:59:59.999'\n AND msg_message_type = '8'\n AND (mb_raw_text LIKE '%39=1%' OR mb_raw_text LIKE '%39=2%');\n\nwith the following plan:\n\nQUERY PLAN\nNested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31\nrows=2539 width=526)\n Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp\nwithout time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp\nwithout time zone))\n Filter: (((msg_message_type)::text = '8'::text) AND\n(((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~\n'%39=2%'::text)))\n -> Index Scan using mfi_client_ordid on mb_fix_message\n(cost=0.00..445.56 rows=1 width=18)\n Index Cond: ((\"outer\".msg_client_order_id)::text =\n(mb_fix_message.msg_client_order_id)::text)\n Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without\ntime zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without\ntime zone) AND ((msg_message_type)::text = 'D'::text) OR\n((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text =\n'1'::text))\n\nWhile running, this query produces 100% iowait usage on its processor and\ntakes a ungodly amount of time (about an hour).\n\nThe postgres settings are as follows:\n\nshared_buffers = 32768 # min 16, at least max_connections*2, 8KB\neach\nsort_mem = 262144 # min 64, size in KB\n\nAnd the /etc/sysctl.conf has:\nkernel.shmall = 274235392\nkernel.shmmax = 274235392\n\nThe system has 4GB of RAM.\n\nI am pretty sure of these settings, but only from my reading of the docs\nand others' recommendations online.\n\nThanks,\n\nAndrew Janian\nOMS Development\nScottrade Financial Services\n(314) 965-1555 x 1513\nCell: (314) 369-2083\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n\n", "msg_date": "Thu, 18 Nov 2004 08:01:58 -0600", "msg_from": "\"Andrew Janian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance and IOWait" }, { "msg_contents": " \nAndrew,\n\nWhat version 
of Redhat are you running? We have found running Enterprise\nUpdate 3 kernel kills our Dell boxes with IOWait, both NFS and local disk\ntraffic. Update 2 kernel does not seem to have the issue, and we are in the\nprocess of trying Update 4 beta to see if it is better.\n\nWoody\n\niGLASS Networks\nwww.iglass.net\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andrew Janian\nSent: Thursday, November 18, 2004 9:02 AM\nTo: [email protected]\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Query Performance and IOWait\n\nActually, unfortunately, that won't work. The subquery gets a list of\nmessage IDs and then the outer query gets the responses to those messages.\n\nAlso, I dumped this data and imported it all to ms sql server and then ran\nit there. The query ran in 2s.\n\nAndrew\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]\nSent: Thursday, November 18, 2004 7:57 AM\nTo: Andrew Janian\nCc: [email protected];\[email protected]\nSubject: Re: [PERFORM] Query Performance and IOWait\n\n\nAndrew,\n\nIt seems that you could combine the subquery's WHERE clause with the main\nquery's to produce a simpler query, i.e. one without a subquery.\n\nRick\n\n\n\n\n \n\n \"Andrew Janian\"\n\n <[email protected]> To:\n<[email protected]>\n\n Sent by: cc:\n\n pgsql-performance-owner@pos Subject: [PERFORM]\nQuery Performance and IOWait \n tgresql.org\n\n \n\n \n\n 11/18/2004 08:42 AM\n\n \n\n \n\n\n\n\n\nHello All,\n\nI have a setup with a Dell Poweredge 2650 with Red Hat and Postgres 7.4.5\nwith a database with about 27GB of data. The table in question has about\n35 million rows.\n\nI am running the following query:\n\nSELECT *\nFROM mb_fix_message\nWHERE msg_client_order_id IN (\n SELECT msg_client_order_id\n FROM mb_fix_message\n WHERE msg_log_time >= '2004-06-01'\n AND msg_log_time < '2004-06-01 13:30:00.000'\n AND msg_message_type IN ('D','G')\n AND mb_ord_type = '1'\n )\n AND msg_log_time > '2004-06-01'\n AND msg_log_time < '2004-06-01 23:59:59.999'\n AND msg_message_type = '8'\n AND (mb_raw_text LIKE '%39=1%' OR mb_raw_text LIKE '%39=2%');\n\nwith the following plan:\n\nQUERY PLAN\nNested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31\nrows=2539 width=526)\n Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp without\ntime zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp without\ntime zone))\n Filter: (((msg_message_type)::text = '8'::text) AND\n(((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~\n'%39=2%'::text)))\n -> Index Scan using mfi_client_ordid on mb_fix_message\n(cost=0.00..445.56 rows=1 width=18)\n Index Cond: ((\"outer\".msg_client_order_id)::text =\n(mb_fix_message.msg_client_order_id)::text)\n Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without\ntime zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without time\nzone) AND ((msg_message_type)::text = 'D'::text) OR\n((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text =\n'1'::text))\n\nWhile running, this query produces 100% iowait usage on its processor and\ntakes a ungodly amount of time (about an hour).\n\nThe postgres settings are as follows:\n\nshared_buffers = 32768 # min 16, at least max_connections*2, 8KB\neach\nsort_mem = 262144 # min 64, size in KB\n\nAnd the /etc/sysctl.conf has:\nkernel.shmall = 274235392\nkernel.shmmax = 274235392\n\nThe system has 4GB of RAM.\n\nI am pretty 
sure of these settings, but only from my reading of the docs and\nothers' recommendations online.\n\nThanks,\n\nAndrew Janian\nOMS Development\nScottrade Financial Services\n(314) 965-1555 x 1513\nCell: (314) 369-2083\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n", "msg_date": "Thu, 18 Nov 2004 09:18:26 -0500", "msg_from": "\"Woody Woodring\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance and IOWait" }, { "msg_contents": "Woody,\n\n> What version of Redhat are you running?   We have found running Enterprise\n> Update 3 kernel kills our Dell boxes with IOWait, both NFS and local disk\n> traffic.  Update 2 kernel does not seem to have the issue, and we are in\n> the process of trying Update 4 beta to see if it is better.\n\nThis is interesting; do you have more to say about it? I've been having some \nmysterious issues with RHES that I've not been able to pin down.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 18 Nov 2004 10:33:54 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance and IOWait" }, { "msg_contents": " From our experience it is not just a postgres issue, but all IO with the\nUpdate 3 kernel.\n\nWe have a box with Update 3 that queries a remote postgres database(Running\nRH7.3, RH3 Update2) and writes to a file on an NFS server. The update 3\nbox does half the work with 2-3 times the load as our update 1 and 2 boxes.\nLooking at top the box is always above 90% IO Wait on the CPU. When we\ndowngrade the kernel to Update 2 it seems to fix the issue.\n\nWe several Update 3 boxes that run postgres locally and they all struggle\ncompared to the Update 2 boxes\n\nWe have tried the Fedora Core 3 with not much more success and we are going\nto try the Update 4 beta kernel next week to see if it is any better.\n\nThere are several threads on the Taroon mailing list discussing the issue.\n\nWoody\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Thursday, November 18, 2004 1:34 PM\nTo: [email protected]\nCc: Woody Woodring; 'Andrew Janian'\nSubject: Re: [PERFORM] Query Performance and IOWait\n\nWoody,\n\n> What version of Redhat are you running?   We have found running \n> Enterprise Update 3 kernel kills our Dell boxes with IOWait, both NFS \n> and local disk traffic.  Update 2 kernel does not seem to have the \n> issue, and we are in the process of trying Update 4 beta to see if it is\nbetter.\n\nThis is interesting; do you have more to say about it? I've been having\nsome \nmysterious issues with RHES that I've not been able to pin down.\n\n--\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n\n", "msg_date": "Thu, 18 Nov 2004 14:04:38 -0500", "msg_from": "\"Woody Woodring\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance and IOWait" } ]
[ { "msg_contents": "I have run ANALYZE right before running this query.\n\nI will run EXPLAIN ANALYZE when I can. I started running the query when I sent the first email and it is still running. Looke like it longer than an hour.\n\nI will post the results of EXPLAIN ANALYZE in a few hours when I get them.\n\nThanks for all your help,\n\nAndrew\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Thursday, November 18, 2004 9:40 AM\nTo: Andrew Janian\nCc: [email protected]\nSubject: Re: [PERFORM] Query Performance and IOWait \n\n\n\"Andrew Janian\" <[email protected]> writes:\n> QUERY PLAN\n> Nested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n> -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31 rows=2539 width=526)\n> Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp without time zone))\n> Filter: (((msg_message_type)::text = '8'::text) AND (((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~ '%39=2%'::text)))\n> -> Index Scan using mfi_client_ordid on mb_fix_message (cost=0.00..445.56 rows=1 width=18)\n> Index Cond: ((\"outer\".msg_client_order_id)::text = (mb_fix_message.msg_client_order_id)::text)\n> Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without time zone) AND ((msg_message_type)::text = 'D'::text) OR ((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text = '1'::text))\n\n> While running, this query produces 100% iowait usage on its processor and takes a ungodly amount of time (about an hour).\n\nThis plan looks fairly reasonable if the rowcount estimates are\naccurate. Have you ANALYZEd the table lately? You might need to\nbump up the statistics target for the msg_log_time column to improve\nthe quality of the estimates. It would be useful to see EXPLAIN\nANALYZE results too (yes I know it'll take you an hour to get them...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 2004 09:57:17 -0600", "msg_from": "\"Andrew Janian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance and IOWait " }, { "msg_contents": "Hello,\n\nWhat is your statistics target?\nWhat is your effective_cache_size?\n\nHave you tried running the query as a cursor?\n\nSincerely,\n\nJoshua D. Drake\n\n\n\nAndrew Janian wrote:\n> I have run ANALYZE right before running this query.\n> \n> I will run EXPLAIN ANALYZE when I can. I started running the query when I sent the first email and it is still running. 
Looke like it longer than an hour.\n> \n> I will post the results of EXPLAIN ANALYZE in a few hours when I get them.\n> \n> Thanks for all your help,\n> \n> Andrew\n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Thursday, November 18, 2004 9:40 AM\n> To: Andrew Janian\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Query Performance and IOWait \n> \n> \n> \"Andrew Janian\" <[email protected]> writes:\n> \n>> QUERY PLAN\n>>Nested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n>> -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31 rows=2539 width=526)\n>> Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp without time zone))\n>> Filter: (((msg_message_type)::text = '8'::text) AND (((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~ '%39=2%'::text)))\n>> -> Index Scan using mfi_client_ordid on mb_fix_message (cost=0.00..445.56 rows=1 width=18)\n>> Index Cond: ((\"outer\".msg_client_order_id)::text = (mb_fix_message.msg_client_order_id)::text)\n>> Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without time zone) AND ((msg_message_type)::text = 'D'::text) OR ((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text = '1'::text))\n> \n> \n>>While running, this query produces 100% iowait usage on its processor and takes a ungodly amount of time (about an hour).\n> \n> \n> This plan looks fairly reasonable if the rowcount estimates are\n> accurate. Have you ANALYZEd the table lately? You might need to\n> bump up the statistics target for the msg_log_time column to improve\n> the quality of the estimates. It would be useful to see EXPLAIN\n> ANALYZE results too (yes I know it'll take you an hour to get them...)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Thu, 18 Nov 2004 08:44:11 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance and IOWait" } ]
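For reference, the three things being asked here can be checked or tried as follows; the cursor name and the abbreviated query are placeholders, and on a 4GB machine effective_cache_size would normally be raised well above its default in postgresql.conf:

-- Current planner-related settings for this session.
SHOW default_statistics_target;
SHOW effective_cache_size;           -- measured in 8KB disk pages on 7.4/8.0

-- Running the query as a cursor, so rows come back in batches instead of the
-- whole result set being built at once.
BEGIN;
DECLARE fix_cur CURSOR FOR
    SELECT * FROM mb_fix_message
    WHERE msg_message_type = '8';    -- stand-in for the full query above
FETCH 100 FROM fix_cur;
CLOSE fix_cur;
COMMIT;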
[ { "msg_contents": "What I think is happening with the missing pg_statistic entries:\n\nThe install of our application involves a lot of data importing (via\nJDBC) in one large transaction, which can take up to 30 minutes. (I\nrealize I left out this key piece of info in my original post...)\n\nThe pg_autovacuum logic is relying on data from pg_stat_all_tables to\nmake the decision about running analyze. As far as I can tell, the data\nin this view gets updated outside of the transaction, because I saw the\nnumbers growing while I was importing. I saw pg_autovacuum log messages\nfor running analyze on several tables, but no statistics data showed up\nfor these, I assume because the actual data in the table wasn't yet\nvisible to pg_autovacuum because the import transaction had not finished\nyet.\n\nWhen the import finished, not all of the tables affected by the import\nwere re-visited because they had not bumped up over the threshold again,\neven though the analyze run for those tables had not generated any stats\nbecause of the still-open transaction.\n\nAm I making the correct assumptions about the way the various pieces\nwork? Does this scenario make sense?\n\nIt's easy enough for us to kick off a vacuum/analyze at the end of a\nlong import - but this \"mysterious\" behavior was bugging me!\n\nThanks.\n\n- DAP \n\n>-----Original Message-----\n>From: Matthew T. O'Connor [mailto:[email protected]] \n>Sent: Wednesday, November 17, 2004 2:02 PM\n>To: David Parker\n>Cc: Tom Lane; Jeff; Russell Smith; [email protected]\n>Subject: Re: [PERFORM] query plan question\n>\n>Well based on the autovacuum log that you attached, all of \n>those tables \n>are insert only (at least during the time period included in \n>the log. \n>Is that correct? If so, autovacuum will never do a vacuum \n>(unless required by xid wraparound issues) on those tables. \n>So this doesn't appear to be an autovacuum problem. I'm not \n>sure about the missing pg_statistic entries anyone else care \n>to field that one?\n>\n>Matthew\n>\n>\n>David Parker wrote:\n>\n>>Thanks. The tables I'm concerned with are named: 'schema', 'usage', \n>>'usageparameter', and 'flow'. It looks like autovacuum is performing\n>>analyzes:\n>>\n>>% grep \"Performing: \" logs/.db.tazz.vacuum.log\n>>[2004-11-17 12:05:58 PM] Performing: ANALYZE \n>>\"public\".\"scriptlibrary_library\"\n>>[2004-11-17 12:15:59 PM] Performing: ANALYZE \n>>\"public\".\"scriptlibraryparm\"\n>>[2004-11-17 12:15:59 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>>[2004-11-17 12:21:00 PM] Performing: ANALYZE \"public\".\"usageproperty\"\n>>[2004-11-17 12:21:00 PM] Performing: ANALYZE \"public\".\"route\"\n>>[2004-11-17 12:21:00 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>>[2004-11-17 12:21:00 PM] Performing: ANALYZE \n>>\"public\".\"scriptlibrary_library\"\n>>[2004-11-17 12:26:01 PM] Performing: ANALYZE \"public\".\"usage\"\n>>[2004-11-17 12:26:01 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>>[2004-11-17 12:31:04 PM] Performing: ANALYZE \"public\".\"usageproperty\"\n>>[2004-11-17 12:36:04 PM] Performing: ANALYZE \"public\".\"route\"\n>>[2004-11-17 12:36:04 PM] Performing: ANALYZE \"public\".\"service_usage\"\n>>[2004-11-17 12:36:04 PM] Performing: ANALYZE \"public\".\"usageparameter\"\n>>\n>>But when I run the following:\n>>\n>>select * from pg_statistic where starelid in (select oid from \n>pg_class \n>>where relname in\n>>('schema','usageparameter','flow','usage'))\n>>\n>>it returns no records. Shouldn't it? 
It doesn't appear to be doing a \n>>vacuum anywhere, which makes sense because none of these tables have \n>>over the default threshold of 1000. Are there statistics \n>which only get \n>>generated by vacuum?\n>>\n>>I've attached a gzip of the pg_autovacuum log file, with -d 3.\n>>\n>>Thanks again.\n>>\n>>- DAP\n>>\n>>\n>> \n>>\n>>>-----Original Message-----\n>>>From: Matthew T. O'Connor [mailto:[email protected]]\n>>>Sent: Wednesday, November 17, 2004 11:41 AM\n>>>To: David Parker\n>>>Cc: Tom Lane; Jeff; Russell Smith; [email protected]\n>>>Subject: Re: [PERFORM] query plan question\n>>>\n>>>David Parker wrote:\n>>>\n>>> \n>>>\n>>>>We're using postgresql 7.4.5. I've only recently put \n>pg_autovacuum in \n>>>>place as part of our installation, and I'm basically taking the \n>>>>defaults. I doubt it's a problem with autovacuum itself, but rather \n>>>>with my configuration of it. I have some reading to do, so\n>>>> \n>>>>\n>>>any pointers\n>>> \n>>>\n>>>>to existing autovacuum threads would be greatly appreciated!\n>>>>\n>>>> \n>>>>\n>>>Well the first thing to do is increase the verbosity of the \n>>>pg_autovacuum logging output. If you use -d2 or higher, \n>pg_autovacuum \n>>>will print out a lot of detail on what it thinks the thresholds are \n>>>and\n>>>why it is or isn't performing vacuums and analyzes. Attach \n>>>some of the\n>>>log and I'll take a look at it.\n>>>\n>>> \n>>>\n>\n>\n", "msg_date": "Thu, 18 Nov 2004 11:12:12 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan question" }, { "msg_contents": "\"David Parker\" <[email protected]> writes:\n> What I think is happening with the missing pg_statistic entries:\n> The install of our application involves a lot of data importing (via\n> JDBC) in one large transaction, which can take up to 30 minutes. (I\n> realize I left out this key piece of info in my original post...)\n\n> The pg_autovacuum logic is relying on data from pg_stat_all_tables to\n> make the decision about running analyze. As far as I can tell, the data\n> in this view gets updated outside of the transaction, because I saw the\n> numbers growing while I was importing. I saw pg_autovacuum log messages\n> for running analyze on several tables, but no statistics data showed up\n> for these, I assume because the actual data in the table wasn't yet\n> visible to pg_autovacuum because the import transaction had not finished\n> yet.\n\n> When the import finished, not all of the tables affected by the import\n> were re-visited because they had not bumped up over the threshold again,\n> even though the analyze run for those tables had not generated any stats\n> because of the still-open transaction.\n\nBingo. The per-table activity stats are sent to the collector whenever\nthe backend waits for a client command. Given a moderately long\ntransaction block doing updates, it's not hard at all to imagine that\nautovacuum would kick off vacuum and/or analyze while the updating\ntransaction is still in progress. The resulting operation is of course\na waste of time.\n\nIt'd be trivial to adjust postgres.c so that per-table stats are\nonly transmitted when we exit the transaction (basically move the\npgstat_report_tabstat call down a couple lines so it's not called if\nIsTransactionOrTransactionBlock).\n\nThis seems like a good change to me. 
Does anyone not like it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 2004 11:43:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Timing of pgstats updates" }, { "msg_contents": "On 11/18/2004 11:43 AM, Tom Lane wrote:\n> \"David Parker\" <[email protected]> writes:\n>> What I think is happening with the missing pg_statistic entries:\n>> The install of our application involves a lot of data importing (via\n>> JDBC) in one large transaction, which can take up to 30 minutes. (I\n>> realize I left out this key piece of info in my original post...)\n> \n>> The pg_autovacuum logic is relying on data from pg_stat_all_tables to\n>> make the decision about running analyze. As far as I can tell, the data\n>> in this view gets updated outside of the transaction, because I saw the\n>> numbers growing while I was importing. I saw pg_autovacuum log messages\n>> for running analyze on several tables, but no statistics data showed up\n>> for these, I assume because the actual data in the table wasn't yet\n>> visible to pg_autovacuum because the import transaction had not finished\n>> yet.\n> \n>> When the import finished, not all of the tables affected by the import\n>> were re-visited because they had not bumped up over the threshold again,\n>> even though the analyze run for those tables had not generated any stats\n>> because of the still-open transaction.\n> \n> Bingo. The per-table activity stats are sent to the collector whenever\n> the backend waits for a client command. Given a moderately long\n> transaction block doing updates, it's not hard at all to imagine that\n> autovacuum would kick off vacuum and/or analyze while the updating\n> transaction is still in progress. The resulting operation is of course\n> a waste of time.\n> \n> It'd be trivial to adjust postgres.c so that per-table stats are\n> only transmitted when we exit the transaction (basically move the\n> pgstat_report_tabstat call down a couple lines so it's not called if\n> IsTransactionOrTransactionBlock).\n> \n> This seems like a good change to me. Does anyone not like it?\n> \n> \t\t\tregards, tom lane\n\nSounds reasonable here.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Thu, 18 Nov 2004 14:29:31 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Timing of pgstats updates" } ]
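The workaround David describes (running ANALYZE only after the long import transaction commits) can be sketched as follows; this is an illustration, not code from the thread, though the table names are the ones he listed:

BEGIN;
-- the large JDBC import runs inside this transaction (up to 30 minutes)
COMMIT;

-- only now are the imported rows visible, so ANALYZE can actually
-- populate pg_statistic for them
ANALYZE "schema";
ANALYZE "usage";
ANALYZE usageparameter;
ANALYZE flow;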
[ { "msg_contents": "Can someone explain how the free space map deals with alternate database \nlocations?\n\nGiven that the free space map is global, and it is ostensibly managing \nfree disk space, how does it deal with tuples across disk locations ?\n\n\nDave\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Thu, 18 Nov 2004 12:09:08 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Interaction between Free Space Map an alternate location for a\n\tdatabase" }, { "msg_contents": "Dave,\n\n> Given that the free space map is global, and it is ostensibly managing\n> free disk space, how does it deal with tuples across disk locations ?\n\nAre you talking Tablespaces?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 18 Nov 2004 10:18:59 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interaction between Free Space Map an alternate location for a\n\tdatabase" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> Can someone explain how the free space map deals with alternate database \n> locations?\n\nIt doesn't really care. It identifies tables by database OID+table OID,\nand where they happen to sit physically doesn't matter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 Nov 2004 13:38:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interaction between Free Space Map an alternate location for a\n\tdatabase" }, { "msg_contents": "No, have a look at the create database command\n\nthere is a clause 'with location' that allows you to set up a separate \nlocation for the db\n\nDave\n\nJosh Berkus wrote:\n\n>Dave,\n>\n> \n>\n>>Given that the free space map is global, and it is ostensibly managing\n>>free disk space, how does it deal with tuples across disk locations ?\n>> \n>>\n>\n>Are you talking Tablespaces?\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n\n\n\n\n\n\nNo, have a look at the create database command\n\nthere is a clause 'with location' that allows you to set up a separate\nlocation for the db\n\nDave\n\nJosh Berkus wrote:\n\nDave,\n\n \n\nGiven that the free space map is global, and it is ostensibly managing\nfree disk space, how does it deal with tuples across disk locations ?\n \n\n\nAre you talking Tablespaces?\n\n \n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561", "msg_date": "Thu, 18 Nov 2004 13:42:49 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Interaction between Free Space Map an alternate location" }, { "msg_contents": "Ok, so the global part of the fsm is just that it is in shared memory. \nIf certain databases have more\nfree space they will simply take up more of the fsm. There is no cross \ndatabase movement of tuples.\n( I realized this when I tried to form my next question)\n\nDave\n\nTom Lane wrote:\n\n>Dave Cramer <[email protected]> writes:\n> \n>\n>>Can someone explain how the free space map deals with alternate database \n>>locations?\n>> \n>>\n>\n>It doesn't really care. 
It identifies tables by database OID+table OID,\n>and where they happen to sit physically doesn't matter.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n\n\n\n\n\n\nOk, so the global part of the fsm is just that it is in shared memory. \nIf certain databases have more \nfree space they will simply take up more of the fsm. There is no cross\ndatabase movement of tuples.\n( I realized this when I tried to form my next question)\n\nDave\n\nTom Lane wrote:\n\nDave Cramer <[email protected]> writes:\n \n\nCan someone explain how the free space map deals with alternate database \nlocations?\n \n\n\nIt doesn't really care. It identifies tables by database OID+table OID,\nand where they happen to sit physically doesn't matter.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n \n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561", "msg_date": "Thu, 18 Nov 2004 14:46:41 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Interaction between Free Space Map an alternate location" } ]
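For reference, a minimal sketch of the two pieces touched on above; the database name and path are invented, and the pre-8.0 alternate-location form assumes the directory was prepared with initlocation and that the server permits that location:

-- the CREATE DATABASE clause Dave refers to (pre-8.0 alternate locations)
CREATE DATABASE salesdb WITH LOCATION = '/mnt/disk2/pgdata';

-- the free space map itself is sized cluster-wide in postgresql.conf:
--   max_fsm_pages = 200000
--   max_fsm_relations = 1000
-- the current settings can be inspected with:
SHOW max_fsm_pages;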
[ { "msg_contents": "ALTER TABLE foo ALTER COLUMN bar SET STATISTICS n; .....\n\nI wonder what are the implications of using this statement,\nI know by using, say n=100, ANALYZE will take more time,\npg_statistics will be bigger, planner will take longer time,\non the other hand it will make better decisions... Etc, etc.\n\nI wonder however when it is most uselful to bump it up.\nPlease tell me what you think about it:\n\nIs bumping up statistics is only useful for indexed columns?\n\nWhen is it most useful/benefitial to bump them up:\n\n1) huge table with huge number of distinct values (_almost_\n unique ;))\n\n2) huge table with relatively equally distributed values\n (like each value is in between, say, 30-50 rows).\n\n3) huge table with unequally distributed values (some\n values are in 1-5 rows, some are in 1000-5000 rows).\n\n4) huge table with small number values (around ~100\n distinct values, equally or uneqally distributed).\n\n5) boolean column.\n\nI think SET STATISTICS 100 is very useful for case with\nunequally distributed values, but I wonder what about\nthe other cases. And as a side note -- what are the\nreasonable bounds for statistics (between 10 and 100?)\n\nWhat are the runtime implications of setting statistics\ntoo large -- how much can it affect queries?\n\nAnd finally -- how other RDBMS and RDBM-likes deal\nwith this issue? :)\n\n Regards,\n Dawid\n", "msg_date": "Fri, 19 Nov 2004 14:59:48 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": true, "msg_subject": "When to bump up statistics?" }, { "msg_contents": "[email protected] (Dawid Kuroczko) writes:\n> ALTER TABLE foo ALTER COLUMN bar SET STATISTICS n; .....\n>\n> I wonder what are the implications of using this statement,\n> I know by using, say n=100, ANALYZE will take more time,\n> pg_statistics will be bigger, planner will take longer time,\n> on the other hand it will make better decisions... Etc, etc.\n>\n> I wonder however when it is most uselful to bump it up.\n> Please tell me what you think about it:\n>\n> Is bumping up statistics is only useful for indexed columns?\n\nThe main decision changes that result from this would occur then...\n\n> When is it most useful/benefitial to bump them up:\n>\n> 1) huge table with huge number of distinct values (_almost_\n> unique ;))\n>\n> 2) huge table with relatively equally distributed values\n> (like each value is in between, say, 30-50 rows).\n>\n> 3) huge table with unequally distributed values (some\n> values are in 1-5 rows, some are in 1000-5000 rows).\n>\n> 4) huge table with small number values (around ~100\n> distinct values, equally or uneqally distributed).\n\nA hard and fast rule hasn't emerged, definitely not to distinguish\nprecisely between these cases.\n\nThere are two effects that come out of changing the numbers:\n\n 1. They increase the number of tuples examined.\n\n This would pointedly affect cases 3 and 4, increasing the\n likelihood that the statistics are more representative\n\n 2. 
They increase the number of samples that are kept, increasing the\n number of items recorded in the histogram.\n\n If you have on the order of 100 unique values (it would not be\n unusual for a company to have 100 \"main\" customers or suppliers),\n that allows there to be nearly a bin apiece, which makes\n estimates _way_ more representative both for common and less\n common cases amongst the \"top 100.\"\n\nBoth of those properties are useful for pretty much all of the above\ncases.\n\n> 5) boolean column.\n\nBoolean column would more or less indicate SET STATISTICS 2; the only\npoint to having more would be if there was one of the values that\nalmost never occurred so that you'd need to collect more stats to even\npick up instances of the \"rare\" case.\n\nA boolean column is seldom much use for indices anyways...\n\n> I think SET STATISTICS 100 is very useful for case with unequally\n> distributed values, but I wonder what about the other cases. And as\n> a side note -- what are the reasonable bounds for statistics\n> (between 10 and 100?)\n\nIf there are, say, 200 unique values, then increasing from 10 to 100\nwould seem likely to be useful in making the histogram MUCH more\nrepresentative...\n\n> What are the runtime implications of setting statistics too large --\n> how much can it affect queries?\n\nMore stats would mean a bit more time evaluating query plans, but the\nquality of the plans should be better.\n\n> And finally -- how other RDBMS and RDBM-likes deal with this issue? \n> :)\n\nFor Oracle and DB/2, the issues are not dissimilar. Oracle somewhat\nprefers the notion of collecting comprehensive statistics on the whole\ntable, which will be even more costly than PostgreSQL's sampling.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/linuxxian.html\nA VAX is virtually a computer, but not quite.\n", "msg_date": "Fri, 19 Nov 2004 15:23:15 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to bump up statistics?" }, { "msg_contents": "Dawid,\n\n> I wonder what are the implications of using this statement,\n> I know by using, say n=100, ANALYZE will take more time,\n> pg_statistics will be bigger, planner will take longer time,\n> on the other hand it will make better decisions... Etc, etc.\n\nYep. And pg_statistics will need to be vacuumed more often.\n\n> Is bumping up statistics is only useful for indexed columns?\n\nNo. It's potentially useful for any queried column.\n\n> 1) huge table with huge number of distinct values (_almost_\n> unique ;))\n\nYes.\n\n> 2) huge table with relatively equally distributed values\n> (like each value is in between, say, 30-50 rows).\n\nNot usually.\n\n> 3) huge table with unequally distributed values (some\n> values are in 1-5 rows, some are in 1000-5000 rows).\n\nYes.\n\n> 4) huge table with small number values (around ~100\n> distinct values, equally or uneqally distributed).\n\nNot usually, especially if they are equally distributed.\n\n> 5) boolean column.\n\nAlmost never, just as it is seldom useful to index a boolean column.\n\n> I think SET STATISTICS 100 is very useful for case with\n> unequally distributed values, but I wonder what about\n> the other cases. And as a side note -- what are the\n> reasonable bounds for statistics (between 10 and 100?)\n\nOh, no, I've used values up to 500 in production, and we've tested up to the \nmax on DBT-3. 
In my experience, if the default (10) isn't sufficient, you \noften have to go up to > 250 to get a different plan.\n\n> What are the runtime implications of setting statistics\n> too large -- how much can it affect queries?\n\nIt won't affect select queries. It will affect ANALYZE time (substantially \nin the aggregate) and maintenance on the pg_statistics table.\n\n> And finally -- how other RDBMS and RDBM-likes deal\n> with this issue? :)\n\nMost don't allow such fine-tuned adjustment. MSSQL, for example, allows only \nsetting it per-table or maybe even database-wide, and on that platform it \ndoesn't seem to have much effect on query plans. Oracle prefers to use \nHINTS, which are a brute-force method to manage query plans.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 19 Nov 2004 12:32:59 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to bump up statistics?" } ]
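A concrete sketch of the knobs discussed in this thread; the table and column names here are hypothetical:

-- widen the histogram for one column, then re-gather statistics
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 250;
ANALYZE orders;

-- inspect what the planner now knows about that column
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'orders' AND attname = 'customer_id';

-- the cluster-wide default lives in postgresql.conf as default_statistics_target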
[ { "msg_contents": "Hi,\n\nI have a query that when run on similar tables in 2 different databases \neither uses the index on the column (primary key) in the where clause or \ndoes a full table scan. The structure of the tables is the same, except \nthat the table where the index does not get used has an extra million \nrows (22mil vs 23mil).\n\nThe 2 boxes where these database run are very different (Sparc with scsi \ndisks and 2G RAM running Solaris 8 AND a PC with 128M RAM running and an \nIDE drive running Linux RH9 2.4.20-20.9). I am not sure why that would \nmake a difference, but maybe it does.\nAlso, according to our dba both tables have been analyzed about the same \ntime.\n\nAny pointers would be much appreciated.\n\n\nArshavir\n\n\n\nWORKS:\n\n=> explain analyze select num from document where num like 'EP1000000%';\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using document_pkey on document (cost=0.00..5.77 rows=1 width=14) (actual time=0.147..0.166 rows=2 loops=1)\n Index Cond: (((num)::text >= 'EP1000000'::character varying) AND ((num)::text < 'EP1000001'::character varying))\n Filter: ((num)::text ~~ 'EP1000000%'::text)\n Total runtime: 0.281 ms\n(4 rows) \n\n=> \\d document\n Table \"public.document\"\n Column | Type | Modifiers \n-----------+------------------------+-----------\n num | character varying(30) | not null\n titl | character varying(500) | \n isscntry | character varying(50) | \n issdate | date | \n filedate | date | \n appnum | character varying(20) | \n clnum | integer | \n exnum | integer | \n exmnr | character varying(300) | \n agent | character varying(300) | \n priodate | date | \n prionum | character varying(100) | \n priocntry | character varying(50) | \n legalstat | integer | \nIndexes:\n \"document_pkey\" primary key, btree (num)\nCheck constraints:\n \"document_legalstat\" CHECK (legalstat > 0 AND legalstat < 6)\n\n\n\nDOES NOT WORK:\n\nd5=> EXPLAIN ANALYZE select num from document where num like 'EP1000000%';\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Seq Scan on document (cost=0.00..804355.12 rows=1 width=14) (actual time=97.235..353286.781 rows=2 loops=1)\n Filter: ((num)::text ~~ 'EP1000000%'::text)\n Total runtime: 353286.907 ms\n(3 rows)\n \nd5=> \\d document\n Table \"public.document\"\n Column | Type | Modifiers \n-----------+------------------------+-----------\n num | character varying(30) | not null\n titl | character varying(500) | \n isscntry | character varying(50) | \n issdate | date | \n filedate | date | \n clnum | integer | \n exnum | integer | \n exmnr | character varying(300) | \n agent | character varying(300) | \n priodate | date | \n prionum | character varying(100) | \n priocntry | character varying(50) | \n legalstat | integer | \n appnum | character varying(20) | \nIndexes:\n \"document_pkey\" primary key, btree (num)\nCheck constraints:\n \"$1\" CHECK (legalstat > 0 AND legalstat < 6)\n\n", "msg_date": "Fri, 19 Nov 2004 14:18:55 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "index use" }, { "msg_contents": "On Fri, Nov 19, 2004 at 02:18:55PM -0500, Arshavir Grigorian wrote:\n> The 2 boxes where these database run are very different (Sparc with scsi \n> disks and 2G RAM running Solaris 8 AND a PC with 128M RAM running and an \n> IDE drive running Linux RH9 2.4.20-20.9). 
I am not sure why that would \n> make a difference, but maybe it does.\n\nAre you having different locales on your systems?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 19 Nov 2004 21:00:06 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index use" }, { "msg_contents": "Arshavir Grigorian <[email protected]> writes:\n> I have a query that when run on similar tables in 2 different databases \n> either uses the index on the column (primary key) in the where clause or \n> does a full table scan. The structure of the tables is the same, except \n> that the table where the index does not get used has an extra million \n> rows (22mil vs 23mil).\n\nI'd say you initialized the second database in a non-C locale. The\nplanner is clearly well aware that the seqscan is going to be expensive,\nso the explanation has to be that it does not have a usable index available.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Nov 2004 15:09:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index use " }, { "msg_contents": "Arshavir,\n\n> I have a query that when run on similar tables in 2 different databases\n> either uses the index on the column (primary key) in the where clause or\n> does a full table scan. The structure of the tables is the same, except\n> that the table where the index does not get used has an extra million\n> rows (22mil vs 23mil).\n\nAre both using the same version of PostgreSQL? If so, what version?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 19 Nov 2004 12:27:39 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index use" }, { "msg_contents": "On Fri, 19 Nov 2004, Arshavir Grigorian wrote:\n\n> Hi,\n>\n> I have a query that when run on similar tables in 2 different databases\n> either uses the index on the column (primary key) in the where clause or\n> does a full table scan. The structure of the tables is the same, except\n> that the table where the index does not get used has an extra million\n> rows (22mil vs 23mil).\n>\n> The 2 boxes where these database run are very different (Sparc with scsi\n> disks and 2G RAM running Solaris 8 AND a PC with 128M RAM running and an\n> IDE drive running Linux RH9 2.4.20-20.9). I am not sure why that would\n> make a difference, but maybe it does.\n\nIs the second server running in \"C\" locale or a different locale? The\noptimization for LIKE to use indexes involves either making an index with\na *_pattern_ops operator class or being in \"C\" locale.\n", "msg_date": "Fri, 19 Nov 2004 12:29:42 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index use" }, { "msg_contents": "Thanks for all the replies. It actually has to do with the locales. The \ndb where the index gets used is running on C vs the the other one that \nuses en_US.UTF-8. I guess the db with the wrong locale will need to be \nwaxed and recreated with correct locale settings. I wonder if there are \nany plans to make LIKE work with all locales.\n\nAgain, many thanks. You guys are great!\n\n\n\nArshavir\n\n", "msg_date": "Fri, 19 Nov 2004 15:39:51 -0500", "msg_from": "Arshavir Grigorian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index use" }, { "msg_contents": "Arshavir,\n\n> Thanks for all the replies. It actually has to do with the locales. 
The\n> db where the index gets used is running on C vs the the other one that\n> uses en_US.UTF-8. I guess the db with the wrong locale will need to be\n> waxed and recreated with correct locale settings. I wonder if there are\n> any plans to make LIKE work with all locales.\n\nI thought there were some fixes for this in 8.0, but I can't find anything in \nthe release notes.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 20 Nov 2004 11:40:53 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index use" } ]
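If re-initdb'ing in the C locale is not an option, the operator-class route Stephan mentions looks like this for the table above (the index name is invented):

-- lets LIKE 'EP1000000%' use an index even under en_US.UTF-8
CREATE INDEX document_num_pattern_idx ON document (num varchar_pattern_ops);

-- shows which locale the cluster was initialized with
SHOW lc_collate;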
[ { "msg_contents": "The data that we are accessing is via QLogic cards connected to an EMC Clarion. We have tried it on local SCSI disks with the same (bad) results.\r\n\r\nWhen the machine gets stuck in a 100% IOWAIT state it often crashes soon after that.\r\n\r\nThe disks are fine, have been replaced and checked.\r\n\r\nHere are my results from hdparm -Tt /dev/sda1 (which is the EMC disk array)\r\n/dev/sda1:\r\n Timing buffer-cache reads: 2976 MB in 2.00 seconds = 1488.00 MB/sec\r\n Timing buffered disk reads: 44 MB in 3.13 seconds = 14.06 MB/sec\r\n\r\n-----Original Message-----\r\nFrom: Dave Cramer [mailto:[email protected]]\r\nSent: Thursday, November 18, 2004 11:14 AM\r\nTo: Andrew Janian\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Query Performance and IOWait\r\n\r\n\r\nAndrew,\r\n\r\nDell's aren't well known for their disk performance, apparently most of \r\nthe perc controllers sold with dell's are actually adaptec controllers. \r\nAlso apparently they do not come with the battery required to use the \r\nbattery backed up write cache ( In fact according to some Dell won't \r\neven sell the battery to you). Also Dell's monitoring software is quite \r\na memory hog.\r\n\r\nHave you looked at top ?, and also hdparm -Tt /dev/sd?\r\n\r\nDave\r\n\r\nAndrew Janian wrote:\r\n\r\n>Hello All,\r\n>\r\n>I have a setup with a Dell Poweredge 2650 with Red Hat and Postgres 7.4.5 with a database with about 27GB of data. The table in question has about 35 million rows.\r\n>\r\n>I am running the following query:\r\n>\r\n>SELECT *\r\n>FROM mb_fix_message\r\n>WHERE msg_client_order_id IN (\r\n>\tSELECT msg_client_order_id\r\n>\tFROM mb_fix_message\r\n>\tWHERE msg_log_time >= '2004-06-01'\r\n>\t\tAND msg_log_time < '2004-06-01 13:30:00.000'\r\n>\t\tAND msg_message_type IN ('D','G')\r\n>\t\tAND mb_ord_type = '1'\r\n>\t)\r\n>\tAND msg_log_time > '2004-06-01'\r\n>\tAND msg_log_time < '2004-06-01 23:59:59.999'\r\n>\tAND msg_message_type = '8'\r\n>\tAND (mb_raw_text LIKE '%39=1%' OR mb_raw_text LIKE '%39=2%');\r\n>\r\n>with the following plan:\r\n>\r\n> QUERY PLAN\r\n>Nested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\r\n> -> Index Scan using mfi_log_time on mb_fix_message (cost=0.00..22231.31 rows=2539 width=526)\r\n> Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 23:59:59.999'::timestamp without time zone))\r\n> Filter: (((msg_message_type)::text = '8'::text) AND (((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~ '%39=2%'::text)))\r\n> -> Index Scan using mfi_client_ordid on mb_fix_message (cost=0.00..445.56 rows=1 width=18)\r\n> Index Cond: ((\"outer\".msg_client_order_id)::text = (mb_fix_message.msg_client_order_id)::text)\r\n> Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp without time zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp without time zone) AND ((msg_message_type)::text = 'D'::text) OR ((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text = '1'::text))\r\n>\r\n>While running, this query produces 100% iowait usage on its processor and takes a ungodly amount of time (about an hour).\r\n>\r\n>The postgres settings are as follows:\r\n>\r\n>shared_buffers = 32768 # min 16, at least max_connections*2, 8KB each\r\n>sort_mem = 262144 # min 64, size in KB\r\n>\r\n>And the /etc/sysctl.conf has:\r\n>kernel.shmall = 274235392\r\n>kernel.shmmax = 274235392\r\n>\r\n>The system has 4GB of RAM.\r\n>\r\n>I am pretty sure of these settings, but only from my reading of the docs 
and others' recommendations online.\r\n>\r\n>Thanks,\r\n>\r\n>Andrew Janian\r\n>OMS Development\r\n>Scottrade Financial Services\r\n>(314) 965-1555 x 1513\r\n>Cell: (314) 369-2083\r\n>\r\n>---------------------------(end of broadcast)---------------------------\r\n>TIP 7: don't forget to increase your free space map settings\r\n> \r\n>\r\n\r\n-- \r\nDave Cramer\r\nhttp://www.postgresintl.com\r\n519 939 0336\r\nICQ#14675561\r\n\r\n", "msg_date": "Fri, 19 Nov 2004 15:22:02 -0600", "msg_from": "\"Andrew Janian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance and IOWait" } ]
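One aside not raised in the thread itself: sort_mem is allocated per sort or hash operation in each backend (the value is in KB), so the posted 262144 (256 MB) can drive a 4 GB machine into swap once a few sorts run concurrently. It can be lowered for a single session to compare:

SET sort_mem = 32768;   -- 32 MB, for this session only
-- then re-run: EXPLAIN ANALYZE <the query above>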
[ { "msg_contents": "Hi All,\n \nI am new to Postgres.\n \nI have a query which does not use index scan unless I force postgres to use index scan. I dont want to force postgres, unless there is no way of optimizing this query.\n \nThe query :\n \nselect m.company_name,m.approved,cu.account_no,mbt.business_name,cda.country, \n(select count(*) from merchant_purchase mp left join data d on mp.data_id=d.id where mp.merchant_id=m.id and d.status=5) as Trans_count,\n(select sum(total * 0.01) from merchant_purchase mp left join data d on mp.data_id=d.id where mp.merchant_id=m.id and d.status=5) as Trans_amount,\n(select count(*) from merchant_purchase mp left join data d on mp.data_id=d.id where d.what=15 and d.status=5 and d.flags=7 and mp.merchant_id=m.id) as Reversal_count\nfrom merchant m \nleft join customer cu on cu.id=m.uid \nleft join customerdata cda on cda.uid=cu.id \nleft join merchant_business_types mbt on mbt.id=m.businesstype and\nexists (select distinct(merchant_id) from merchant_purchase where m.id=merchant_id);\n\n \nFirst Question: I know the way I have written the first two sub-selects is really bad, as they have the same conditions in the where clause. But I am not sure if there is a way to select two columns in a single sub-select query. When I tried to combine the two sub-select queries, I got an error saying that the sub-select can have only one column. Does anyone know any other efficient way of doing it?\n \nSecond Question: The query plan is as follows:\n \n QUERY PLAN \n \n-------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=901.98..17063.67 rows=619 width=88) (actual time=52.01..5168.09 rows=619 loops=1)\n Hash Cond: (\"outer\".businesstype = \"inner\".id)\n Join Filter: (subplan)\n -> Merge Join (cost=900.34..1276.04 rows=619 width=62) (actual time=37.00..97.58 rows=619 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".uid)\n -> Merge Join (cost=900.34..940.61 rows=619 width=52) (actual time=36.91..54.66 rows=619 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".uid)\n -> Sort (cost=795.45..810.32 rows=5949 width=17) (actual time=32.59..36.59 rows=5964 loops=1)\n Sort Key: cu.id\n -> Seq Scan on customer cu (cost=0.00..422.49 rows=5949 width=17) (actual time=0.02..15.69 rows=5964 loops=1)\n -> Sort (cost=104.89..106.44 rows=619 width=35) (actual time=4.27..5.10 rows=619 loops=1)\n Sort Key: m.uid\n -> Seq Scan on merchant m (cost=0.00..76.19 rows=619 width=35) (actual time=0.04..2.65 rows=619 loops=1)\n -> Index Scan using customerdata_uid_idx on customerdata cda (cost=0.00..311.85 rows=5914 width=10) (actual time=0.09..27.70 rows=5\n919 loops=1)\n -> Hash (cost=1.51..1.51 rows=51 width=26) (actual time=0.19..0.19 rows=0 loops=1)\n -> Seq Scan o n merchant_business_types mbt (cost=0.00..1.51 rows=51 width=26) (actual time=0.04..0.12 rows=51 loops=1)\n SubPlan\n -> Aggregate (cost=269.89..269.89 rows=1 width=12) (actual time=2.70..2.70 rows=1 loops=619)\n -> Nested Loop (cost=0.00..269.78 rows=44 width=12) (actual time=2.40..2.69 rows=4 loops=619)\n Filter: (\"inner\".status = 5)\n -> Seq Scan on merchant_purchase mp (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)\n Filter: (merchant_id = $0)\n -> Index Scan using data_pkey on data d (cost=0.00..3.91 rows=1 width=8) (actual time=0.01..0.01 rows=1 loops=3951)\n Index Cond: (\"outer\".data_id = d.id)\n -> Aggregate (cost=269.89..269.89 rows=1 width=16) (actual time=2.73..2.73 rows=1 loops=619)\n -> 
Nested Loop (cost=0.00..269.78 rows=44 width=16) (actual time=2.42..2.70 rows=4 loops=619)\n Filter: (\"inner\".status = 5)\n -> Seq Scan on merchant_purchase m p (cost=0.00..95.39 rows=44 width=8) (actual time=2.39..2.60 rows=6 loops=619)\n Filter: (merchant_id = $0)\n -> Index Scan using data_pkey on data d (cost=0.00..3.91 rows=1 width=8) (actual time=0.01..0.01 rows=1 loops=3951)\n Index Cond: (\"outer\".data_id = d.id)\n -> Aggregate (cost=270.12..270.12 rows=1 width=20) (actual time=2.72..2.72 rows=1 loops=619)\n -> Nested Loop (cost=0.00..270.00 rows=44 width=20) (actual time=2.63..2.72 rows=0 loops=619)\n Filter: ((\"inner\".what = 15) AND (\"inner\".status = 5) AND (\"inner\".flags = 7))\n -> Seq Scan on merchant_purchase mp (cost=0.00..95.39 rows=44 width=4) (actual time=2.40..2.62 rows=6 loops=619)\n Filter: (merchant_id = $0)\n -> Index Scan using data_pkey on data d (cost=0.00..3.91 rows=1 width=16) (actual time=0.01..0.01 rows=1 loops=3951)\n Index Cond: (\"outer\".data_id = d.id)\n -> Unique (cost=0.00..113.14 rows=4 width=4) (actual time=0.02..0.02 rows=0 loops=598)\n -> Index Scan using merchant_purchase_merchant_id_idx on merchant_purchase (cost=0.00..113.02 rows=44 width=4) (actual time=0.01.\n.0.01 rows=0 loops=598)\n Index Cond: ($0 = merchant_id)\n Total runtime: 5170.37 msec (5.170 sec)\n(42 rows)\n\n \nAs you can see, there are many sequential scans in the query plan. Postgres is not using the index defined, even though it leads to better performance(0.2 sec!! when i force index scan)\n \nIs there something wrong in my query that makes postgres use seq scan as opposed to index scan?? Any help would be really appreciated.\n \nThanks for you time and help!\nSaranya\n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Discover all that���s new in My Yahoo!\nHi All,\n \nI am new to Postgres.\n \nI have a query which does not use index scan unless I force postgres to use index scan. I dont want to force postgres, unless there is no way of optimizing this query.\n \nThe query :\n \nselect m.company_name,m.approved,cu.account_no,mbt.business_name,cda.country, \n(select count(*) from merchant_purchase mp left join data d on mp.data_id=d.id where mp.merchant_id=m.id and d.status=5) as Trans_count,\n(select sum(total * 0.01) from merchant_purchase mp left join data d on mp.data_id=d.id where mp.merchant_id=m.id and d.status=5) as Trans_amount,\n(select count(*) from merchant_purchase mp left join data d on mp.data_id=d.id where d.what=15 and d.status=5 and d.flags=7 and mp.merchant_id=m.id) as Reversal_count\nfrom merchant m \nleft join customer cu on cu.id=m.uid \nleft join customerdata cda on cda.uid=cu.id \nleft join merchant_business_types mbt on mbt.id=m.businesstype and\nexists (select distinct(merchant_id) from merchant_purchase where m.id=merchant_id);\n \nFirst Question: I know the way I have written the first two sub-selects is really bad, as they have the same conditions in the where clause. But I am not sure if there is a way to select two columns in a single sub-select query. When I tried to combine the two sub-select queries, I got an error saying that the sub-select can have only one column. 
Does anyone know any other efficient way of doing it?\n \nSecond Question: The query plan is as follows:\n \n QUERY PLAN                                                                              -------------------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=901.98..17063.67 rows=619 width=88) (actual time=52.01..5168.09 rows=619 loops=1)   Hash Cond: (\"outer\".businesstype = \"inner\".id)   Join Filter: (subplan)   ->  Merge Join  (cost=900.34..1276.04 rows=619 width=62) (actual time=37.00..97.58 rows=619\n loops=1)         Merge Cond: (\"outer\".id = \"inner\".uid)         ->  Merge Join  (cost=900.34..940.61 rows=619 width=52) (actual time=36.91..54.66 rows=619 loops=1)               Merge Cond: (\"outer\".id = \"inner\".uid)               ->  Sort  (cost=795.45..810.32 rows=5949 width=17) (actual time=32.59..36.59 rows=5964 loops=1)                     Sort Key: cu.id                     ->  Seq Scan on customer cu  (cost=0.00..422.49 rows=5949 width=17) (actual time=0.02..15.69 rows=5964\n loops=1)               ->  Sort  (cost=104.89..106.44 rows=619 width=35) (actual time=4.27..5.10 rows=619 loops=1)                     Sort Key: m.uid                     ->  Seq Scan on merchant m  (cost=0.00..76.19 rows=619 width=35) (actual time=0.04..2.65 rows=619 loops=1)         ->  Index Scan using customerdata_uid_idx on customerdata cda  (cost=0.00..311.85 rows=5914 width=10) (actual time=0.09..27.70 rows=5919 loops=1)   ->  Hash  (cost=1.51..1.51 rows=51 width=26) (actual time=0.19..0.19 rows=0 loops=1)         ->  Seq Scan o\n n\n merchant_business_types mbt  (cost=0.00..1.51 rows=51 width=26) (actual time=0.04..0.12 rows=51 loops=1)   SubPlan     ->  Aggregate  (cost=269.89..269.89 rows=1 width=12) (actual time=2.70..2.70 rows=1 loops=619)           ->  Nested Loop  (cost=0.00..269.78 rows=44 width=12) (actual time=2.40..2.69 rows=4 loops=619)                 Filter: (\"inner\".status = 5)                 ->  Seq Scan on merchant_purchase mp  (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)                       Filter: (merchant_id =\n $0)                 ->  Index Scan using data_pkey on data d  (cost=0.00..3.91 rows=1 width=8) (actual time=0.01..0.01 rows=1 loops=3951)                       Index Cond: (\"outer\".data_id = d.id)     ->  Aggregate  (cost=269.89..269.89 rows=1 width=16) (actual time=2.73..2.73 rows=1 loops=619)           ->  Nested Loop  (cost=0.00..269.78 rows=44 width=16) (actual time=2.42..2.70 rows=4 loops=619)                 Filter: (\"inner\".status = 5)                 ->  Seq Scan on merchant_purchase m\n p \n (cost=0.00..95.39 rows=44 width=8) (actual time=2.39..2.60 rows=6 loops=619)                       Filter: (merchant_id = $0)                 ->  Index Scan using data_pkey on data d  (cost=0.00..3.91 rows=1 width=8) (actual time=0.01..0.01 rows=1 loops=3951)                       Index Cond: (\"outer\".data_id = d.id)     ->  Aggregate  (cost=270.12..270.12 rows=1 width=20) (actual time=2.72..2.72 rows=1 loops=619)           ->  Nested Loop  (cost=0.00..270.00 rows=44 width=20) (actual time=2.63..2.72 rows=0\n loops=619)                 Filter: ((\"inner\".what = 15) AND (\"inner\".status = 5) AND (\"inner\".flags = 7))                 ->  Seq Scan on merchant_purchase mp  (cost=0.00..95.39 rows=44 width=4) (actual time=2.40..2.62 rows=6 loops=619)                       Filter: (merchant_id = $0)                 ->  Index Scan using data_pkey 
on data d  (cost=0.00..3.91 rows=1 width=16) (actual time=0.01..0.01 rows=1 loops=3951)                       Index Cond: (\"outer\".data_id =\n d.id)     ->  Unique  (cost=0.00..113.14 rows=4 width=4) (actual time=0.02..0.02 rows=0 loops=598)           ->  Index Scan using merchant_purchase_merchant_id_idx on merchant_purchase  (cost=0.00..113.02 rows=44 width=4) (actual time=0.01..0.01 rows=0 loops=598)                 Index Cond: ($0 = merchant_id) Total runtime: 5170.37 msec (5.170 sec)(42 rows)\n \nAs you can see, there are many sequential scans in the query plan. Postgres is not using the index defined, even though it leads to better performance(0.2 sec!! when i force index scan)\n \nIs there something wrong in my query that makes postgres use seq scan as opposed to index scan?? Any help would be really appreciated.\n \nThanks for you time and help!\nSaranya\nDo you Yahoo!? \nDiscover all that���s new in My Yahoo!", "msg_date": "Fri, 19 Nov 2004 14:31:22 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "help needed -- sequential scan problem" }, { "msg_contents": "sarlav kumar <[email protected]> writes:\n> I have a query which does not use index scan unless I force postgres to use index scan. I dont want to force postgres, unless there is no way of optimizing this query.\n\nThe major issue seems to be in the sub-selects:\n\n> -> Seq Scan on merchant_purchase mp (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)\n> Filter: (merchant_id = $0)\n\nwhere the estimated row count is a factor of 7 too high. If the\nestimated row count were even a little lower, it'd probably have gone\nfor an indexscan. You might get some results from increasing the\nstatistics target for merchant_purchase.merchant_id. If that doesn't\nhelp, I'd think about reducing random_page_cost a little bit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Nov 2004 00:19:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help needed -- sequential scan problem " }, { "msg_contents": "Hi Tom,\n \nThanks for the help, Tom.\n \n>The major issue seems to be in the sub-selects:\n\n> -> Seq Scan on merchant_purchase mp (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)\n> Filter: (merchant_id = $0)\n>where the estimated row count is a factor of 7 too high. If the\n>estimated row count were even a little lower, it'd probably have gone\n>for an indexscan.\n \nI understand that the sub-selects are taking up most of the time as they do a sequential scan on the tables. \n \n >You might get some results from increasing the\n>statistics target for merchant_purchase.merchant_id. \n \nDo I have to use vacuum analyze to update the statistics? If so, I have already tried that and it doesn't seem to help.\n \n>If that doesn't help, I'd think about reducing random_page_cost a little bit.\n \nI am sorry, I am not aware of what random_page_cost is, as I am new to Postgres. What does it signify and how do I reduce random_page_cost? \n\nThanks,\nSaranya\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nHi Tom,\n \nThanks for the help, Tom.\n \n>The major issue seems to be in the sub-selects:> -> Seq Scan on merchant_purchase mp (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)> Filter: (merchant_id = $0)>where the estimated row count is a factor of 7 too high. 
If the>estimated row count were even a little lower, it'd probably have gone>for an indexscan.\n \nI understand that the sub-selects are taking up most of the time as they do a sequential scan on the tables.  \n \n >You might get some results from increasing the>statistics target for merchant_purchase.merchant_id. \n \nDo I have to use vacuum analyze to update the statistics? If so, I have already tried that and it doesn't seem to help.\n \n>If that doesn't help, I'd think about reducing random_page_cost a little bit.\n \nI am sorry, I am not aware of what random_page_cost is, as I am new to Postgres. What does it signify and how do I reduce random_page_cost? Thanks,\nSaranya__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Mon, 22 Nov 2004 11:41:25 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help needed -- sequential scan problem " }, { "msg_contents": "Sarlav,\n\n> I am sorry, I am not aware of what random_page_cost is, as I am new to\n> Postgres. What does it signify and how do I reduce random_page_cost?\n\nIt's a parameter in your postgresql.conf file. After you test it, you will \nwant to change it there and reload the server (pg_ctl reload).\n\nHowever, you can test it on an individual connection:\nSET random_page_cost=2.5\n(the default is 4.0)\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 22 Nov 2004 11:57:38 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help needed -- sequential scan problem" }, { "msg_contents": "Hi Josh,\n \nCan you tell me in what way it affects performance? And How do I decide what value to set for the random_page_cost? Does it depend on any other factors?\n \nThanks,\nSaranya\n\nJosh Berkus <[email protected]> wrote:\nSarlav,\n\n> I am sorry, I am not aware of what random_page_cost is, as I am new to\n> Postgres. What does it signify and how do I reduce random_page_cost?\n\nIt's a parameter in your postgresql.conf file. After you test it, you will \nwant to change it there and reload the server (pg_ctl reload).\n\nHowever, you can test it on an individual connection:\nSET random_page_cost=2.5\n(the default is 4.0)\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nHi Josh,\n \nCan you tell me in what way it affects performance? And How do I decide what value to set for the random_page_cost? Does it depend on any  other factors?\n \nThanks,\nSaranyaJosh Berkus <[email protected]> wrote:\nSarlav,> I am sorry, I am not aware of what random_page_cost is, as I am new to> Postgres. What does it signify and how do I reduce random_page_cost?It's a parameter in your postgresql.conf file. After you test it, you will want to change it there and reload the server (pg_ctl reload).However, you can test it on an individual connection:SET random_page_cost=2.5(the default is 4.0)-- --JoshJosh BerkusAglio Database SolutionsSan Francisco__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Mon, 22 Nov 2004 11:58:01 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help needed -- sequential scan problem" }, { "msg_contents": "From: \"sarlav kumar\" <[email protected]>\n\n> [Tom:]\n> >You might get some results from increasing the\n> >statistics target for merchant_purchase.merchant_id.\n>\n> Do I have to use vacuum analyze to update the statistics? If so, I have\nalready tried that and it doesn't seem to help.\n\nalter table merchant_purchase alter column merchant_id set statistics 500;\nanalyze merchant_purchase;\n\n>\n> >If that doesn't help, I'd think about reducing random_page_cost a little\nbit.\n>\n> I am sorry, I am not aware of what random_page_cost is, as I am new to\nPostgres. What does it signify and how do I reduce random_page_cost?\n\nset random_page_cost = 3;\nexplain analyse <query>\n\nif it is an improvement, consider setting the value in your postgresql.conf,\nbut remember that this may affect other queries too.\n\ngnari\n\n\n\n", "msg_date": "Mon, 22 Nov 2004 20:01:42 -0000", "msg_from": "\"gnari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help needed -- sequential scan problem " } ]
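On Sarlav's first question (getting several aggregates without repeating the sub-select): one common rewrite, sketched from the columns in the original query and not taken from any reply above, is to aggregate once in a derived table and join it in:

SELECT m.id, m.company_name,
       agg.trans_count, agg.trans_amount, agg.reversal_count
FROM merchant m
LEFT JOIN (
    SELECT mp.merchant_id,
           count(*) AS trans_count,
           sum(total * 0.01) AS trans_amount,
           count(CASE WHEN d.what = 15 AND d.flags = 7 THEN 1 END) AS reversal_count
    FROM merchant_purchase mp
    JOIN data d ON d.id = mp.data_id
    WHERE d.status = 5
    GROUP BY mp.merchant_id
) AS agg ON agg.merchant_id = m.id;

The other joins (customer, customerdata, merchant_business_types) would be added back the same way as in the original query.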
[ { "msg_contents": "We are using 7.4.5 on Solaris 9. \n\nWe have a couple tables (holding information about network sessions, for instance) which don't need to persist beyond the life of the server, but while the server is running they are heavily hit, insert/update/delete.\n\nTemporary tables won't work for us because they are per-connection, and we are using a thread pool, and session data could be accessed from multiple connections.\n\nWould 8.0 tablespaces, with a tablespace placed on a RAM disk be a potential solution for this? I have used RAM disks for disk caches in the past, but I don't know if there are any special issues with defining a tablespace that way.\n\nThanks.\n\n- DAP\n----------------------------------------------------------------------------------\nDavid Parker Tazz Networks (401) 709-5130\n \n", "msg_date": "Fri, 19 Nov 2004 17:32:11 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "tablespace + RAM disk?" }, { "msg_contents": "David,\n\n> We have a couple tables (holding information about network sessions, for\n> instance) which don't need to persist beyond the life of the server, but\n> while the server is running they are heavily hit, insert/update/delete.\n\nSee the thread this last week on Memcached for a cheaper solution.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 19 Nov 2004 16:35:26 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace + RAM disk?" } ]
[ { "msg_contents": "Oh! I sort of started paying attention to that in the middle...and\ncouldn't make head or tail out of it. Will search back to the\nbeginning....\n\nThanks.\n\n- DAP\n\n>-----Original Message-----\n>From: Josh Berkus [mailto:[email protected]] \n>Sent: Friday, November 19, 2004 7:35 PM\n>To: [email protected]\n>Cc: David Parker\n>Subject: Re: [PERFORM] tablespace + RAM disk?\n>\n>David,\n>\n>> We have a couple tables (holding information about network sessions, \n>> for\n>> instance) which don't need to persist beyond the life of the server, \n>> but while the server is running they are heavily hit, \n>insert/update/delete.\n>\n>See the thread this last week on Memcached for a cheaper solution.\n>\n>--\n>--Josh\n>\n>Josh Berkus\n>Aglio Database Solutions\n>San Francisco\n>\n", "msg_date": "Fri, 19 Nov 2004 20:34:22 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tablespace + RAM disk?" } ]
[ { "msg_contents": "But, I'm also still interested in the answer to my question: is there\nany reason you could not put an 8.0 tablespace on a RAM disk? \n\nI can imagine doing it by having an initdb run at startup somehow, with\nthe idea that having a mix of tablespaces in a database would make this\nharder, but I haven't read enough about tablespaces yet. The problem\nwith trying to mix a RAM tablespace with a persistent tablespace would\nseem to be that you would have to recreate select data files at system\nstartup before you could start the database. That's why an initdb seems\ncleaner to me, but...I should stop talking and go read about tablespaces\nand memcached.\n\nI'd be interested to hear if anybody has tried this. And I will also\ncheck out memcached, too, of course. Thanks for the pointer.\n\n- DAP\n\n>-----Original Message-----\n>From: [email protected] \n>[mailto:[email protected]] On Behalf Of \n>David Parker\n>Sent: Friday, November 19, 2004 8:34 PM\n>To: [email protected]; [email protected]\n>Subject: Re: [PERFORM] tablespace + RAM disk?\n>\n>Oh! I sort of started paying attention to that in the \n>middle...and couldn't make head or tail out of it. Will search \n>back to the beginning....\n>\n>Thanks.\n>\n>- DAP\n>\n>>-----Original Message-----\n>>From: Josh Berkus [mailto:[email protected]]\n>>Sent: Friday, November 19, 2004 7:35 PM\n>>To: [email protected]\n>>Cc: David Parker\n>>Subject: Re: [PERFORM] tablespace + RAM disk?\n>>\n>>David,\n>>\n>>> We have a couple tables (holding information about network \n>sessions, \n>>> for\n>>> instance) which don't need to persist beyond the life of \n>the server, \n>>> but while the server is running they are heavily hit,\n>>insert/update/delete.\n>>\n>>See the thread this last week on Memcached for a cheaper solution.\n>>\n>>--\n>>--Josh\n>>\n>>Josh Berkus\n>>Aglio Database Solutions\n>>San Francisco\n>>\n>\n>---------------------------(end of \n>broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Fri, 19 Nov 2004 23:18:51 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tablespace + RAM disk?" }, { "msg_contents": "On Fri, 19 Nov 2004 23:18:51 -0500, David Parker\n<[email protected]> wrote:\n> But, I'm also still interested in the answer to my question: is there\n> any reason you could not put an 8.0 tablespace on a RAM disk?\n> \n> I can imagine doing it by having an initdb run at startup somehow, with\n> the idea that having a mix of tablespaces in a database would make this\n> harder, but I haven't read enough about tablespaces yet. The problem\n> with trying to mix a RAM tablespace with a persistent tablespace would\n> seem to be that you would have to recreate select data files at system\n> startup before you could start the database. That's why an initdb seems\n> cleaner to me, but...I should stop talking and go read about tablespaces\n> and memcached.\n\nI think there might be a problem with recovery after crash. I haven't tested\nit but I guess pgsql would complain that databases which existed before\ncrash (or even server reboot) no longer exist. And I see two options, either\nit would complain loudly and continue, or simply fail... Unless there would\nbe option to mark database/schema/table as non-PITR-logged (since data\nis expendable and can be easily recreated)... :)\n\nHaving tablespaces on RAM disks (like tmpfs), hmm, it could be useful,\nsay to put TEMPORARY tables there. 
Since they will be gone nonetheless,\nits a nice place for them.\n\nSide question: Do TEMPORARY tables operations end up in PITR log?\n\n Regards,\n Dawid\n\nPS: To pgmemchache I go!\n", "msg_date": "Sat, 20 Nov 2004 10:36:38 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace + RAM disk?" }, { "msg_contents": "David,\n\n> But, I'm also still interested in the answer to my question: is there\n> any reason you could not put an 8.0 tablespace on a RAM disk?\n\nSome people have *talked* about trying it, but nobody yet has reported back. \nI can see a few potential problems:\n\n1) The query planner would not be aware, and could not be made aware short of \nhacking the source, that one tablespace has different access speeds than the \nothers;\n\n2) After a crash, you might be unable to recover that tablespace, and PG would \nrefuse to bring the system back up without it.\n\nHowever, the best thing to do is to try it. Good luck, and do a write-up for \nus!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 20 Nov 2004 11:38:43 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace + RAM disk?" }, { "msg_contents": "Dawid Kuroczko wrote:\n> Side question: Do TEMPORARY tables operations end up in PITR log?\n\nNo.\n\n-Neil\n", "msg_date": "Mon, 22 Nov 2004 01:48:05 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace + RAM disk?" } ]
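For completeness, the 8.0 syntax under discussion; the mount point and table are hypothetical, and as noted above a crash or reboot that empties the RAM disk may leave the cluster unable to start until that directory is restored:

-- the directory must exist, be empty, and be owned by the postgres user
CREATE TABLESPACE ramspace LOCATION '/mnt/ramdisk/pg';

CREATE TABLE network_session (
    session_id integer PRIMARY KEY,
    payload    text
) TABLESPACE ramspace;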
[ { "msg_contents": "Hello,\n\nI have the following query plan:\n\nlogigis=# explain SELECT geom, ref_in_id as ref, nref_in_id as nref, st_name as name, substr(l_postcode,1,2) as postfirst, func_class as level FROM schabi.streets WHERE cd='ca' ORDER BY l_postcode;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------\n Sort (cost=2950123.42..2952466.07 rows=937059 width=290)\n Sort Key: l_postcode\n -> Index Scan using streets_name_idx on streets (cost=0.00..2857177.57 rows=937059 width=290)\n Index Cond: ((cd)::text = 'ca'::text)\n\n\nAnd I have, beside others, the following index:\n »streets_name_idx« btree (cd, l_postcode)\n\nAs the query plan shows, my postgresql 7.4 does fine on using the index\nfor the WHERE clause.\n\nBut as it fetches all the rows through the index, why doesn't it\nrecognize that, fetching this way, the rows are already sorted by\nl_postcode?\n\nAs I have a larger set of data, it nearly breaks down our developer\nmachine every time we do this, as it always creates a temporary copy of\nthe large amount of data to sort it (setting sort_mem higher makes it\nswap, setting it lower makes it thrashing disk directly).\n\nIs Postgresql 8 more intelligend in this case?\n\nThanks for your hints,\nMarkus \n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Sat, 20 Nov 2004 15:17:10 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Index usage for sorted query" }, { "msg_contents": "\nInstead of :\n\n> WHERE cd='ca' ORDER BY l_postcode;\n\nWrite :\n\n> WHERE cd='ca' ORDER BY cd, l_postcode;\n\nYou have a multicolumn index, so you should specify a multicolumn sort \nexactly the same as your index, and the planner will get it.\n", "msg_date": "Sat, 20 Nov 2004 17:12:43 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage for sorted query" }, { "msg_contents": "Markus Schaber <[email protected]> writes:\n> But as it fetches all the rows through the index, why doesn't it\n> recognize that, fetching this way, the rows are already sorted by\n> l_postcode?\n\nTell it to \"ORDER BY cd, l_postcode\".\n\n> Is Postgresql 8 more intelligend in this case?\n\nNo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Nov 2004 11:48:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage for sorted query " }, { "msg_contents": "Hi, Pierre-Frédéric,\n\nOn Sat, 20 Nov 2004 17:12:43 +0100\nPierre-Frédéric Caillaud <[email protected]> wrote:\n\n> > WHERE cd='ca' ORDER BY l_postcode;\n> \n> Write :\n> \n> > WHERE cd='ca' ORDER BY cd, l_postcode;\n> \n> You have a multicolumn index, so you should specify a multicolumn sort \n> exactly the same as your index, and the planner will get it.\n\nThanks, that seems to help.\n\nSeems weird to order by a column that is all the same value, but well,\nwhy not :-)\n\nThanks a lot,\nMarkus\n\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Mon, 22 Nov 2004 16:01:15 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage for sorted query" } ]
[ { "msg_contents": "Hello to everybody again,\n\n \n\nthought you didn't hear any news from me for a very long time, the news are\ngood :-)\n\nI'm still here and promoting PostgreSQL.\n\n \n\nI am involved in the developing of a big romanian project for the vets that\nwill put Linux & PostgreSQL on 3500 computers in the whole country, linked\ntogether with dial-up connections that will keep track of the animal\nmovements.\n\n \n\nThe central database (also PostgreSLQ) will hold billions of records with\nanimal events (births, movements, slaughter and so on) and my question is:\n\n \n\nIf I will choose to keep a mirror of every workstation database in a\nseparate schema in the central database that mean that I will have 3500\ndifferent schemas.\n\nIs there any limit or any barrier that could stop this kind of approach or\nmake things go slower?\n\n \n\nConstantin Teodorescu\n\nAncient PgAccess developer\n\n \n\nP.S. Please Cc: me at [email protected]\n\n \n\n\n\n\n\n\n\n\n\n\nHello to everybody again,\n \nthought you didn’t hear any news from me for a very\nlong time, the news are good J\nI’m still here and promoting PostgreSQL.\n \nI am involved in the developing of a big romanian project\nfor the vets that will put Linux & PostgreSQL on 3500 computers in the\nwhole country, linked together with dial-up connections that will keep track of\nthe animal movements.\n \nThe central database (also PostgreSLQ) will hold billions of\nrecords with animal events (births, movements, slaughter and so on) and my\nquestion is:\n \nIf I will choose to keep a mirror of every workstation\ndatabase in a separate schema in the central database that mean that I will\nhave 3500 different schemas.\nIs there any limit or any barrier that could stop this kind\nof approach or make things go slower?\n \nConstantin Teodorescu\nAncient PgAccess developer\n \nP.S. Please Cc: me at [email protected]", "msg_date": "Mon, 22 Nov 2004 09:44:04 +0200", "msg_from": "\"Constantin Teodorescu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Big number of schemas (3500) into a single database" }, { "msg_contents": "\"Constantin Teodorescu\" <[email protected]> writes:\n> If I will choose to keep a mirror of every workstation database in a\n> separate schema in the central database that mean that I will have 3500\n> different schemas.\n\n> Is there any limit or any barrier that could stop this kind of approach or\n> make things go slower?\n\nWould you need to put them all into \"search_path\" at once?\n\nI'm not sure what the scaling issues might be for long search_paths, but\nI wouldn't be surprised if it's bad. But as long as you don't do that,\nI don't believe there will be any problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Nov 2004 00:32:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big number of schemas (3500) into a single database " }, { "msg_contents": " --- Tom Lane <[email protected]> escribi�: \n> \"Constantin Teodorescu\" <[email protected]> writes:\n> > If I will choose to keep a mirror of every\n> > workstation database in a\n> > separate schema in the central database that mean\n> > that I will have 3500 different schemas.\n> \n> > Is there any limit or any barrier that could stop\n> > this kind of approach or make things go slower?\n> \n> Would you need to put them all into \"search_path\" at\n> once?\n> \n> I'm not sure what the scaling issues might be for\n> long search_paths, but I wouldn't be surprised if \n> it's bad. 
But as long as you don't do that,\n> I don't believe there will be any problems.\n> \n\nif i do a select with fully qualified table names it\nwill search in the search_path or it will go directly\nto the schema?\n\nJust for know.\n\nregards,\nJaime Casanova\n\n_________________________________________________________\nDo You Yahoo!?\nInformación de Estados Unidos y América Latina, en Yahoo! Noticias.\nVisítanos en http://noticias.espanol.yahoo.com\n", "msg_date": "Wed, 24 Nov 2004 11:11:56 -0600 (CST)", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Big number of schemas (3500) into a single database" } ]
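A small sketch of the point made above, with made-up schema names standing in for the per-workstation mirrors: a fully qualified reference such as schema.table is resolved directly in that schema and never consults search_path, so only unqualified names depend on how long the path is.

-- Hypothetical per-workstation schema; names are illustrative only.
CREATE SCHEMA ws_0001;
CREATE TABLE ws_0001.animal_events (event_id bigint PRIMARY KEY, event_date date);

-- Schema-qualified access bypasses search_path entirely:
SELECT count(*) FROM ws_0001.animal_events;

-- Unqualified names are looked up through search_path, which can stay short
-- even when the database holds thousands of schemas:
SET search_path TO ws_0001, public;
SELECT count(*) FROM animal_events;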
[ { "msg_contents": "Check the linux-dell list for more...The PERC3/Di cards are specifically\nAdaptec, not most. PERC4/DC is LSI Megaraid. Unless you buy the cheaper\nversion, most will come with battery.\n\n-anjan \n\n-----Original Message-----\nFrom: Andrew Janian [mailto:[email protected]] \nSent: Friday, November 19, 2004 4:22 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Query Performance and IOWait\n\nThe data that we are accessing is via QLogic cards connected to an EMC\nClarion. We have tried it on local SCSI disks with the same (bad)\nresults.\n\nWhen the machine gets stuck in a 100% IOWAIT state it often crashes soon\nafter that.\n\nThe disks are fine, have been replaced and checked.\n\nHere are my results from hdparm -Tt /dev/sda1 (which is the EMC disk\narray)\n/dev/sda1:\n Timing buffer-cache reads: 2976 MB in 2.00 seconds = 1488.00 MB/sec\n Timing buffered disk reads: 44 MB in 3.13 seconds = 14.06 MB/sec\n\n-----Original Message-----\nFrom: Dave Cramer [mailto:[email protected]]\nSent: Thursday, November 18, 2004 11:14 AM\nTo: Andrew Janian\nCc: [email protected]\nSubject: Re: [PERFORM] Query Performance and IOWait\n\n\nAndrew,\n\nDell's aren't well known for their disk performance, apparently most of \nthe perc controllers sold with dell's are actually adaptec controllers. \nAlso apparently they do not come with the battery required to use the \nbattery backed up write cache ( In fact according to some Dell won't \neven sell the battery to you). Also Dell's monitoring software is quite \na memory hog.\n\nHave you looked at top ?, and also hdparm -Tt /dev/sd?\n\nDave\n\nAndrew Janian wrote:\n\n>Hello All,\n>\n>I have a setup with a Dell Poweredge 2650 with Red Hat and Postgres\n7.4.5 with a database with about 27GB of data. The table in question\nhas about 35 million rows.\n>\n>I am running the following query:\n>\n>SELECT *\n>FROM mb_fix_message\n>WHERE msg_client_order_id IN (\n>\tSELECT msg_client_order_id\n>\tFROM mb_fix_message\n>\tWHERE msg_log_time >= '2004-06-01'\n>\t\tAND msg_log_time < '2004-06-01 13:30:00.000'\n>\t\tAND msg_message_type IN ('D','G')\n>\t\tAND mb_ord_type = '1'\n>\t)\n>\tAND msg_log_time > '2004-06-01'\n>\tAND msg_log_time < '2004-06-01 23:59:59.999'\n>\tAND msg_message_type = '8'\n>\tAND (mb_raw_text LIKE '%39=1%' OR mb_raw_text LIKE '%39=2%');\n>\n>with the following plan:\n>\n>\nQUERY PLAN\n>Nested Loop IN Join (cost=0.00..34047.29 rows=1 width=526)\n> -> Index Scan using mfi_log_time on mb_fix_message\n(cost=0.00..22231.31 rows=2539 width=526)\n> Index Cond: ((msg_log_time > '2004-06-01 00:00:00'::timestamp\nwithout time zone) AND (msg_log_time < '2004-06-01\n23:59:59.999'::timestamp without time zone))\n> Filter: (((msg_message_type)::text = '8'::text) AND\n(((mb_raw_text)::text ~~ '%39=1%'::text) OR ((mb_raw_text)::text ~~\n'%39=2%'::text)))\n> -> Index Scan using mfi_client_ordid on mb_fix_message\n(cost=0.00..445.56 rows=1 width=18)\n> Index Cond: ((\"outer\".msg_client_order_id)::text =\n(mb_fix_message.msg_client_order_id)::text)\n> Filter: ((msg_log_time >= '2004-06-01 00:00:00'::timestamp\nwithout time zone) AND (msg_log_time < '2004-06-01 13:30:00'::timestamp\nwithout time zone) AND ((msg_message_type)::text = 'D'::text) OR\n((msg_message_type)::text = 'G'::text)) AND ((mb_ord_type)::text =\n'1'::text))\n>\n>While running, this query produces 100% iowait usage on its processor\nand takes a ungodly amount of time (about an hour).\n>\n>The postgres settings are as follows:\n>\n>shared_buffers = 32768 # min 16, at least 
max_connections*2,\n8KB each\n>sort_mem = 262144 # min 64, size in KB\n>\n>And the /etc/sysctl.conf has:\n>kernel.shmall = 274235392\n>kernel.shmmax = 274235392\n>\n>The system has 4GB of RAM.\n>\n>I am pretty sure of these settings, but only from my reading of the\ndocs and others' recommendations online.\n>\n>Thanks,\n>\n>Andrew Janian\n>OMS Development\n>Scottrade Financial Services\n>(314) 965-1555 x 1513\n>Cell: (314) 369-2083\n>\n>---------------------------(end of\nbroadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Mon, 22 Nov 2004 10:20:14 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance and IOWait" } ]
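One thing worth trying with the query in this thread: on 7.4-era planners an IN (SELECT ...) over the same large table sometimes behaved better when written as an explicit join against the de-duplicated subquery, giving the planner a hash join option instead of repeated index probes. This is only a hedged sketch built from the column names in Andrew's posted query; it assumes msg_client_order_id is never NULL for the rows of interest and needs to be checked with EXPLAIN ANALYZE on the real data.

SELECT m.*
FROM mb_fix_message m
JOIN (SELECT DISTINCT msg_client_order_id
        FROM mb_fix_message
       WHERE msg_log_time >= '2004-06-01'
         AND msg_log_time <  '2004-06-01 13:30:00.000'
         AND msg_message_type IN ('D', 'G')
         AND mb_ord_type = '1') o
  ON o.msg_client_order_id = m.msg_client_order_id
WHERE m.msg_log_time >  '2004-06-01'
  AND m.msg_log_time <  '2004-06-01 23:59:59.999'
  AND m.msg_message_type = '8'
  AND (m.mb_raw_text LIKE '%39=1%' OR m.mb_raw_text LIKE '%39=2%');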
[ { "msg_contents": "Good day,\n\nI'm asking myself if there is a performance issue in using an integer\nof varchar(24) PRIMARY KEY in a product table.\n\nI've read that there is no speed issue in the query, but that the only\nperformance issue is the database size of copying the key in other\ntables that require it.\n\nMy product_id is copied in orders, jobs, and other specific tables.\n\nWhat is the common approach? Should I use directly the product_code as\nmy ID, or use a sequantial number for speed? (I did the same for the\ncompany_id, this is a 'serial' and not the shor name of the customer.\nI just don't know what is usually done.\n\nRight now I did the following:\nCREATE TABLE design.products (\nproduct_id serial PRIMARY KEY,\ncompany_id integer NOT NULL REFERENCES sales.companies ON\nUPDATE CASCADE,\nproduct_code varchar(24) NOT NULL,\n...\nCONSTRAINT product_code_already_used_for_this_company UNIQUE\n(company_id, product_code)\n);\n\nCREATE TABLE sales.companies (\ncompany_id integer PRIMARY KEY,\ncompany_name varchar(48) NOT NULL UNIQUE,\n...\n);\n\nThe company_id is also copied in many tables like product, contacts, etc.\n\nThank you very much for any good pointers on this 'already seen' issue.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Mon, 22 Nov 2004 15:32:40 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Data type to use for primary key" }, { "msg_contents": "\n> What is the common approach? Should I use directly the product_code as\n> my ID, or use a sequantial number for speed? (I did the same for the\n> company_id, this is a 'serial' and not the shor name of the customer.\n> I just don't know what is usually done.\n\n\tUse a serial :\n\t- you can change product_code for a product easily\n\t- you can pass around integers easier around, in web forms for instance, \nyou don't have to ask 'should I escape this string ?'\n\t- it's faster\n\t- it uses less space\n\t- if one day you must manage products from another source whose \nproduct_code overlap yours, you won't have problems\n\t- you can generate them with a serial uniquely and easily\n", "msg_date": "Tue, 23 Nov 2004 00:06:13 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "Mr Caillaud,\n\nMerci! Many points you bring were also my toughts. I was asking myself\nreally this was the way to go. I'm happy to see that my view of the\nproblem was good.\n\nEncore merci! (Thanks again!)\n\nOn Tue, 23 Nov 2004 00:06:13 +0100, Pierre-Frédéric Caillaud\n<[email protected]> wrote:\n> \n> > What is the common approach? Should I use directly the product_code as\n> > my ID, or use a sequantial number for speed? 
(I did the same for the\n> > company_id, this is a 'serial' and not the shor name of the customer.\n> > I just don't know what is usually done.\n> \n> Use a serial :\n> - you can change product_code for a product easily\n> - you can pass around integers easier around, in web forms for instance,\n> you don't have to ask 'should I escape this string ?'\n> - it's faster\n> - it uses less space\n> - if one day you must manage products from another source whose\n> product_code overlap yours, you won't have problems\n> - you can generate them with a serial uniquely and easily\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n\n\n-- \nAlexandre Leclerc\n", "msg_date": "Mon, 22 Nov 2004 18:26:00 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "Alexandre,\n\n> What is the common approach? Should I use directly the product_code as\n> my ID, or use a sequantial number for speed? (I did the same for the\n> company_id, this is a 'serial' and not the shor name of the customer.\n> I just don't know what is usually done.\n\nDon't use SERIAL just because it's there. Ideally, you *want* to use the \nproduct_code if you can. It's your natural key and a natural key is always \nsuperior to a surrogate key all other things being equal. \n\nUnfortunately, all other things are NOT equal. Here's the reasons why you'd \nuse a surrogate key (i.e. SERIAL):\n\n1) because the product code is a large text string (i.e. > 10bytes) and you \nwill have many millions of records, so having it as an FK in other tables \nwill add significantly to the footprint of the database;\n\n2) because product codes get blanket changes frequently, where thousands of \nthem pet re-mapped to new codes, and the ON CASCADE UPDATE slow performance \nwill kill your database;\n\n3) Because every other table in the database has a SERIAL key and consistency \nreduces errors;\n\n4) or because your interface programmers get annoyed with using different \ntypes of keys for different tables and multicolumn keys.\n\nIf none of the above is true (and I've had it not be, in some tables and some \ndatabases) then you want to stick with your \"natural key\", the product_code.\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 22 Nov 2004 16:54:56 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "On Mon, 2004-11-22 at 16:54 -0800, Josh Berkus wrote:\n> Alexandre,\n> \n> > What is the common approach? Should I use directly the product_code as\n> > my ID, or use a sequantial number for speed? (I did the same for the\n> > company_id, this is a 'serial' and not the shor name of the customer.\n> > I just don't know what is usually done.\n> \n> Don't use SERIAL just because it's there. Ideally, you *want* to use the \n> product_code if you can. It's your natural key and a natural key is always \n> superior to a surrogate key all other things being equal. \n\nIt would be nice if PostgreSQL had some form of transparent surrogate\nkeying in the background which would automatically run around and\nreplace your real data with SERIAL integers. 
It could use a lookup table\nfor conversions between the surrogate and real values so the user never\nknows that it's done, a bit like ENUM. Then we could all use the real\nvalues with no performance issues for 1) because it's an integer in the\nbackground, and 2) because a cascade only touches a single tuple in the\nlookup table.\n\n\n-- \n\n", "msg_date": "Mon, 22 Nov 2004 20:03:11 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "Rod,\n\n> It would be nice if PostgreSQL had some form of transparent surrogate\n> keying in the background which would automatically run around and\n> replace your real data with SERIAL integers. It could use a lookup table\n> for conversions between the surrogate and real values so the user never\n> knows that it's done, a bit like ENUM. Then we could all use the real\n> values with no performance issues for 1) because it's an integer in the\n> background, and 2) because a cascade only touches a single tuple in the\n> lookup table.\n\nSybase does this, and it's a feature I would dearly love to emulate. You can \njust refer to another table, without specifying the column, as an FK and it \nwill create an invisible hashed key. This is the type of functionality Codd \nwas advocating -- invisible, implementation-automated surrogate keys -- in \nthe mid 90's (don't have a paper citation at the moment).\n\nSo you'd just do:\n\ncreate table client_contacts (\n\tfname text not null,\n\tlname text not null,\n\tclient foriegn key clients,\n\tposition text,\n\tnotes text\n);\n\nand the \"client\" column would create an invisible hashed key that would drag \nin the relevant row from the clients table; thus a:\n\nselect * from client_contacts\n\nwould actually show the whole record from clients as well.\n\t\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 22 Nov 2004 22:00:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "\n>> It would be nice if PostgreSQL had some form of transparent surrogate\n>> keying in the background which would automatically run around and\n>> replace your real data with SERIAL integers. It could use a lookup table\n\n\tThere is still table inheritance, but it's not really the same.\n", "msg_date": "Tue, 23 Nov 2004 09:39:52 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "All,\n\tWell, you should still escape any strings you're getting from a web page so\nyou can ensure you're not subject to a SQL insert attack, even if you're\nexpecting integers.\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of\nPierre-Frᅵdᅵric Caillaud\nSent: Monday, November 22, 2004 3:06 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Data type to use for primary key\n\n\n\n> What is the common approach? Should I use directly the product_code as\n> my ID, or use a sequantial number for speed? 
(I did the same for the\n> company_id, this is a 'serial' and not the shor name of the customer.\n> I just don't know what is usually done.\n\n\tUse a serial :\n\t- you can change product_code for a product easily\n\t- you can pass around integers easier around, in web forms for instance,\nyou don't have to ask 'should I escape this string ?'\n\t- it's faster\n\t- it uses less space\n\t- if one day you must manage products from another source whose\nproduct_code overlap yours, you won't have problems\n\t- you can generate them with a serial uniquely and easily\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n", "msg_date": "Tue, 23 Nov 2004 06:59:42 -0800", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "On Mon, 22 Nov 2004 16:54:56 -0800, Josh Berkus <[email protected]> wrote:\n> Alexandre,\n> \n> > What is the common approach? Should I use directly the product_code as\n> > my ID, or use a sequantial number for speed? (I did the same for the\n> > company_id, this is a 'serial' and not the shor name of the customer.\n> > I just don't know what is usually done.\n> \n> Don't use SERIAL just because it's there. Ideally, you *want* to use the\n> product_code if you can. It's your natural key and a natural key is always\n> superior to a surrogate key all other things being equal.\n> \n> Unfortunately, all other things are NOT equal. Here's the reasons why you'd\n> use a surrogate key (i.e. SERIAL):\n> \n> 1) because the product code is a large text string (i.e. > 10bytes) and you\n> will have many millions of records, so having it as an FK in other tables\n> will add significantly to the footprint of the database;\n\nThanks for those tips. I'll print and keep them. So in my case, the\nproduct_code being varchar(24) is:\n4 bytes + string size (so possibly up to 24) = possible 28 bytes. I\ndid the good thing using a serial. For my shorter keys (4 bytes + up\nto 6 char) I will use the natural key.\n\nThis is interesting, because this is what I did right now.\n\nThe \"transparent surrogate keying\" proposal that is discussed bellow\nin the thread is a very good idea. It would be nice to see that. It\nwould be easier for the DB admin and the coder; the moment this is not\nslowing the system. 
: )\n\nBest regards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Tue, 23 Nov 2004 11:29:45 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "\n> All,\n> \tWell, you should still escape any strings you're getting from a web \n> page so\n> you can ensure you're not subject to a SQL insert attack, even if you're\n> expecting integers.\n> Thanks,\n> Peter Darley\n\n\tWell, your framework should do this for you :\n\n\t\"integer\" specified in your database object class description\n\t\"%d\" appears in in your generated queries (or you put it in your hand \nwritten queries)\n\t=> if the parameter is not an integer, an exception is thrown, then \ncatched, then an error page is displayed...\n\n\tOr, just casting to int should throw an exception...\n\n\tForms should be validated, but hidden parameters in links are OK imho to \ndisplay an error page if they are incorrect, after all, if the user edits \nthe get or post parameters, well...\n", "msg_date": "Tue, 23 Nov 2004 17:45:27 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "Alexandre Leclerc <[email protected]> writes:\n\n> Thanks for those tips. I'll print and keep them. So in my case, the\n> product_code being varchar(24) is:\n> 4 bytes + string size (so possibly up to 24) = possible 28 bytes. I\n> did the good thing using a serial. For my shorter keys (4 bytes + up\n> to 6 char) I will use the natural key.\n\nRealize that space usage is really only part of the issue.\n\nIf you ever have two records with the same natural key or a record whose\nnatural key has changed you'll be in for a world of hurt if you use the\nnatural key as the primary key in your database.\n\nBasically I never use natural keys except when they're arbitrarily chosen\nvalues defined by the application itself.\n\nSituations where I've used varchars instead of integer keys are things like:\n\n. Individual privileges grantable in a security system.\n (things like \"VIEWUSER\" \"EDITUSER\" privileges)\n\n. Reference tables for one letter codes used to indicate the type of object\n represented by the record.\n\nActually I see one interesting exception to my policy in my current database\nschema. And I don't think I would do this one differently given the choice\neither. The primary key of the postal code table is the postal code. (postal\ncodes are up here in the great white north like zip codes down there.)\n\nThis could hurt if they ever reuse an old previously retired postal code,\nwhich isn't an entirely impossible case. As far as I know it hasn't happened\nyet though. And it's just so much more convenient having the postal code handy\ninstead of having to join against another table to look it up.\n\n-- \ngreg\n\n", "msg_date": "24 Nov 2004 01:52:52 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> This could hurt if they ever reuse an old previously retired postal code,\n> which isn't an entirely impossible case. 
As far as I know it hasn't happened\n> yet though.\n\nOne would suppose that the guys who are in charge of this point at\nCanada Post consider the postal code to be their primary key, and\nare no more eager to reuse one than you are to see it reused.\n\nBasically this comes down to \"I'm going to use some externally supplied\nprimary key as my primary key. Do I trust the upstream DBA to know what\na primary key is?\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Nov 2004 02:16:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key " }, { "msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > This could hurt if they ever reuse an old previously retired postal code,\n> > which isn't an entirely impossible case. As far as I know it hasn't happened\n> > yet though.\n> \n> One would suppose that the guys who are in charge of this point at\n> Canada Post consider the postal code to be their primary key, and\n> are no more eager to reuse one than you are to see it reused.\n\nWell, they may eventually be forced to. For the same sort of hierarchic issue\nthat causes the \"shortage\" of IPv4 address space even though there's far less\nthan 4 billion hosts online.\n\nBut as far as I can see today the only postal codes that are being retired are\nrural areas that are being developed and have blocks of codes assigned instead\nof having a single broad code.\n\n> Basically this comes down to \"I'm going to use some externally supplied\n> primary key as my primary key. Do I trust the upstream DBA to know what\n> a primary key is?\"\n\nWell there's another issue here I think. Often people see something that looks\nunique and is clearly intended to be a primary key and think \"aha, nice\nprimary key\" but they miss a subtle distinction between what the external\nprimary key represents and what their data model is tracking.\n\nThe typical example is social security numbers. SSNs are a perfectly\nreasonable primary key -- as long as you're tracking Social Security accounts,\nnot people. Most people in the US have exactly one SS account, so people often\nthink it looks like a primary key for people. In fact not everyone has a\nSocial Security account (aliens who have never worked in the US, or for that\nmatter people who have never been in the US) and others have had multiple\nSocial Security accounts (victims of identity theft).\n\nAnother example that comes to mind is the local telephone company. When I\nchanged my phone number they created a new account without telling me, because\ntheir billing system's primary key for accounts is... the phone number. So all\nmy automated bill payments started disappearing into the black hole of the old\naccount and my new account went negative. I wonder what they do for customers\nwho buy services from them but don't have a telephone line.\n\n-- \ngreg\n\n", "msg_date": "24 Nov 2004 03:14:17 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data type to use for primary key" }, { "msg_contents": "On 24 Nov 2004 01:52:52 -0500, Greg Stark <[email protected]> wrote:\n> Alexandre Leclerc <[email protected]> writes:\n> \n> > Thanks for those tips. I'll print and keep them. So in my case, the\n> > product_code being varchar(24) is:\n> > 4 bytes + string size (so possibly up to 24) = possible 28 bytes. I\n> > did the good thing using a serial. 
For my shorter keys (4 bytes + up\n> > to 6 char) I will use the natural key.\n> \n> Realize that space usage is really only part of the issue.\n\nThank you for this additional information. This will help out in the\nfuture. In my situation it is a good thing to have an integer key where\nI decided to have one, even if I was obliged to add UNIQUE\nconstraints to some other columns. I think they call this a \"candidate\nkey\" and it's still 3NF (whatever; but only if my db is correctly\norganised)... I try to be logical and efficient for good performance.\nBut in the end, time (the db will get bigger) and good EXPLAIN\nANALYSE commands will help with fine tuning later! This will give me more\nexperience at that point.\n\n> Actually I see one interesting exception to my policy in my current database\n> schema. And I don't think I would do this one differently given the choice\n> either. The primary key of the postal code table is the postal code. (postal\n> codes are up here in the great white north like zip codes down there.)\n\n(I do understand this one, living in the province of Quebec. ;) And\nthe great white north has still not arrived at the end of November! - Still, not\nvery exceptional.)\n\nRegards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Wed, 24 Nov 2004 10:39:03 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data type to use for primary key" } ]
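The pattern this thread converges on can be summed up in a short, hypothetical schema: a serial surrogate key for joins and foreign keys, with the natural key kept enforceable as a UNIQUE (candidate key) constraint, much like the DDL Alexandre posted at the start of the thread.

CREATE TABLE companies (
    company_id   serial PRIMARY KEY,
    company_name varchar(48) NOT NULL UNIQUE       -- natural key stays unique
);

CREATE TABLE products (
    product_id   serial PRIMARY KEY,               -- 4-byte surrogate key
    company_id   integer NOT NULL REFERENCES companies ON UPDATE CASCADE,
    product_code varchar(24) NOT NULL,
    UNIQUE (company_id, product_code)              -- candidate key stays enforced
);

-- Child tables then carry only the small surrogate key:
CREATE TABLE order_lines (
    order_line_id serial PRIMARY KEY,
    product_id    integer NOT NULL REFERENCES products
);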
[ { "msg_contents": "I have the following view:\n\ncreate or replace view market.p_areas as\nselect a.*\nfrom _areas a\nwhere a.area in (\n select b.area\n from _bins b, _inventories i, _offers o, _pricemembers p\n where b.bin = i.bin and\n i.inventory = o.inventory and\n o.pricegroup = p.pricegroup and\n p.buyer in (\n select s.store\n from _stores s, _webusers w\n where w.webuser = getWebuser() and\n w.company = s.company\n union\n select s.store\n from _stores s, _companies c\n where s.company = c.company and\n c.companyid = 'DEFAULT'\n )\n);\n\nWhen I query the view without a where clause I get:\n\n explain analyze select * from p_areas;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1273.12..1276.31 rows=47 width=163) (actual time=438.739..439.574 rows=34 loops=1)\n Hash Cond: (\"outer\".area = \"inner\".area)\n -> Seq Scan on _areas a (cost=0.00..2.48 rows=48 width=163) (actual time=0.015..0.169 rows=48 loops=1)\n -> Hash (cost=1273.01..1273.01 rows=47 width=8) (actual time=438.532..438.532 rows=0 loops=1)\n -> HashAggregate (cost=1273.01..1273.01 rows=47 width=8) (actual time=438.286..438.395 rows=34 loops=1)\n -> Hash Join (cost=558.53..1266.68 rows=2532 width=8) (actual time=160.923..416.968 rows=5264 loops=1)\n Hash Cond: (\"outer\".bin = \"inner\".bin)\n -> Hash Join (cost=544.02..1207.86 rows=2531 width=8) (actual time=156.097..356.560 rows=5264 loops=1)\n Hash Cond: (\"outer\".inventory = \"inner\".inventory)\n -> Seq Scan on _inventories i (cost=0.00..265.96 rows=11396 width=16) (actual time=0.010..44.047 rows=11433 loops=1)\n -> Hash (cost=537.14..537.14 rows=2751 width=8) (actual time=155.891..155.891 rows=0 loops=1)\n -> Hash Join (cost=13.96..537.14 rows=2751 width=8) (actual time=11.967..136.598 rows=5264 loops=1)\n Hash Cond: (\"outer\".pricegroup = \"inner\".pricegroup)\n -> Seq Scan on _offers o (cost=0.00..379.24 rows=15524 width=16) (actual time=0.008..50.335 rows=15599 loops=1)\n -> Hash (cost=13.94..13.94 rows=9 width=8) (actual time=11.861..11.861 rows=0 loops=1)\n -> Hash IN Join (cost=8.74..13.94 rows=9 width=8) (actual time=10.801..11.801 rows=12 loops=1)\n Hash Cond: (\"outer\".buyer = \"inner\".store)\n -> Seq Scan on _pricemembers p (cost=0.00..4.07 rows=207 width=16) (actual time=0.011..0.548 rows=207 loops=1)\n -> Hash (cost=8.72..8.72 rows=8 width=8) (actual time=10.687..10.687 rows=0 loops=1)\n -> Subquery Scan \"IN_subquery\" (cost=8.60..8.72 rows=8 width=8) (actual time=10.645..10.654 rows=1 loops=1)\n -> Unique (cost=8.60..8.64 rows=8 width=8) (actual time=10.631..10.636 rows=1 loops=1)\n -> Sort (cost=8.60..8.62 rows=8 width=8) (actual time=10.625..10.627 rows=1 loops=1)\n Sort Key: store\n -> Append (cost=2.86..8.48 rows=8 width=8) (actual time=10.529..10.583 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=2.86..5.15 rows=5 width=8) (actual time=10.222..10.222 rows=0 loops=1)\n -> Hash Join (cost=2.86..5.10 rows=5 width=8) (actual time=10.214..10.214 rows=0 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.019..0.059 rows=13 loops=1)\n -> Hash (cost=2.85..2.85 rows=1 width=8) (actual time=10.031..10.031 rows=0 loops=1)\n -> Seq Scan on _webusers w (cost=0.00..2.85 rows=1 width=8) (actual time=10.023..10.023 rows=0 loops=1)\n Filter: (webuser = getwebuser())\n -> 
Subquery Scan \"*SELECT* 2\" (cost=1.08..3.33 rows=3 width=8) (actual time=0.298..0.349 rows=1 loops=1)\n -> Hash Join (cost=1.08..3.30 rows=3 width=8) (actual time=0.287..0.335 rows=1 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.008..0.048 rows=13 loops=1)\n -> Hash (cost=1.07..1.07 rows=1 width=8) (actual time=0.111..0.111 rows=0 loops=1)\n -> Seq Scan on _companies c (cost=0.00..1.07 rows=1 width=8) (actual time=0.059..0.080 rows=1 loops=1)\n Filter: ((companyid)::text = 'DEFAULT'::text)\n -> Hash (cost=13.01..13.01 rows=601 width=16) (actual time=4.735..4.735 rows=0 loops=1)\n -> Seq Scan on _bins b (cost=0.00..13.01 rows=601 width=16) (actual time=0.054..2.846 rows=601 loops=1)\n Total runtime: 441.685 ms\n(41 rows)\n\n---\n\nWhen I query the view with a simple filter, I get:\n\nexplain analyze select * from p_areas where deactive is null;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=8.60..524.28 rows=1 width=163) (actual time=1023.291..20025.620 rows=34 loops=1)\n Join Filter: (\"outer\".area = \"inner\".area)\n -> Seq Scan on _areas a (cost=0.00..2.48 rows=1 width=163) (actual time=0.037..0.804 rows=48 loops=1)\n Filter: (deactive IS NULL)\n -> Nested Loop (cost=8.60..25530.60 rows=2532 width=8) (actual time=0.345..402.775 rows=3408 loops=48)\n -> Nested Loop (cost=8.60..16893.61 rows=2531 width=8) (actual time=0.304..264.929 rows=3408 loops=48)\n -> Merge Join (cost=8.60..2912.00 rows=2751 width=8) (actual time=0.258..120.841 rows=3408 loops=48)\n Merge Cond: (\"outer\".pricegroup = \"inner\".pricegroup)\n -> Nested Loop IN Join (cost=8.60..1837.73 rows=9 width=8) (actual time=0.216..4.612 rows=8 loops=48)\n Join Filter: (\"outer\".buyer = \"inner\".store)\n -> Index Scan using i_pricemembers3 on _pricemembers p (cost=0.00..10.96 rows=207 width=16) (actual time=0.022..1.045 rows=138 loops=48)\n -> Subquery Scan \"IN_subquery\" (cost=8.60..8.72 rows=8 width=8) (actual time=0.011..0.017 rows=1 loops=6606)\n -> Unique (cost=8.60..8.64 rows=8 width=8) (actual time=0.006..0.010 rows=1 loops=6606)\n -> Sort (cost=8.60..8.62 rows=8 width=8) (actual time=0.003..0.004 rows=1 loops=6606)\n Sort Key: store\n -> Append (cost=2.86..8.48 rows=8 width=8) (actual time=7.667..7.757 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=2.86..5.15 rows=5 width=8) (actual time=7.362..7.362 rows=0 loops=1)\n -> Hash Join (cost=2.86..5.10 rows=5 width=8) (actual time=7.355..7.355 rows=0 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.013..0.054 rows=13 loops=1)\n -> Hash (cost=2.85..2.85 rows=1 width=8) (actual time=7.163..7.163 rows=0 loops=1)\n -> Seq Scan on _webusers w (cost=0.00..2.85 rows=1 width=8) (actual time=7.154..7.154 rows=0 loops=1)\n Filter: (webuser = getwebuser())\n -> Subquery Scan \"*SELECT* 2\" (cost=1.08..3.33 rows=3 width=8) (actual time=0.295..0.381 rows=1 loops=1)\n -> Hash Join (cost=1.08..3.30 rows=3 width=8) (actual time=0.286..0.368 rows=1 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.008..0.080 rows=13 loops=1)\n -> Hash (cost=1.07..1.07 rows=1 width=8) (actual time=0.116..0.116 rows=0 loops=1)\n -> Seq Scan on _companies c 
(cost=0.00..1.07 rows=1 width=8) (actual time=0.062..0.083 rows=1 loops=1)\n Filter: ((companyid)::text = 'DEFAULT'::text)\n -> Index Scan using i_offers4 on _offers o (cost=0.00..1007.93 rows=15524 width=16) (actual time=0.023..67.183 rows=10049 loops=48)\n -> Index Scan using i_inventories1 on _inventories i (cost=0.00..5.07 rows=1 width=16) (actual time=0.025..0.029 rows=1 loops=163561)\n Index Cond: (i.inventory = \"outer\".inventory)\n -> Index Scan using i_bins1 on _bins b (cost=0.00..3.40 rows=1 width=16) (actual time=0.021..0.026 rows=1 loops=163561)\n Index Cond: (b.bin = \"outer\".bin)\n Total runtime: 20027.414 ms\n(36 rows)\n\n---\n\nThat's a slow-down on execution time by a factor of 50, even\nthough the row count was the same: 34. In fact, it's MUCH\nfaster to do:\n\ncreate temporary table foo as\nselect * from p_areas;\n\nselect * from foo\nwhere deactive is null;\n\nThe database has been analyzed.\n\nAny tips would be greatly appreciated.\n\nMike Mascari\n\nP.S.: I turned off word-wrap in my mail client for this post.\nIs that the right thing to do for analyze output?\n\n\n", "msg_date": "Mon, 22 Nov 2004 16:38:03 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Slow execution time when querying view with WHERE clause" }, { "msg_contents": "Mike Mascari wrote:\n> I have the following view:\n> \n> create or replace view market.p_areas as\n> select a.*\n> from _areas a\n> where a.area in (\n> select b.area\n> from _bins b, _inventories i, _offers o, _pricemembers p\n> where b.bin = i.bin and\n> i.inventory = o.inventory and\n> o.pricegroup = p.pricegroup and\n> p.buyer in (\n> select s.store\n> from _stores s, _webusers w\n> where w.webuser = getWebuser() and\n> w.company = s.company\n> union\n> select s.store\n> from _stores s, _companies c\n> where s.company = c.company and\n> c.companyid = 'DEFAULT'\n> )\n> );\n\n...\n\nI failed to report the version:\n\nselect version();\n\nPostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC \ni686-pc-linux-gnu-gcc (GCC) 3.4.0 20040204 (prerelease)\n\nSorry.\n\nMike Mascari\n", "msg_date": "Mon, 22 Nov 2004 16:46:08 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow execution time when querying view with WHERE clause" }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> When I query the view with a simple filter, I get:\n\n> explain analyze select * from p_areas where deactive is null;\n\nThe problem seems to be here:\n\n> -> Seq Scan on _areas a (cost=0.00..2.48 rows=1 width=163) (actual time=0.037..0.804 rows=48 loops=1)\n> Filter: (deactive IS NULL)\n\nWhy is it so completely off about the selectivity of the IS NULL clause?\nAre you sure you ANALYZEd this table recently?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Nov 2004 12:54:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow execution time when querying view with WHERE clause " }, { "msg_contents": "Tom Lane wrote:\n> Mike Mascari <[email protected]> writes:\n> \n>>When I query the view with a simple filter, I get:\n> \n> \n>>explain analyze select * from p_areas where deactive is null;\n> \n> \n> The problem seems to be here:\n> \n> \n>> -> Seq Scan on _areas a (cost=0.00..2.48 rows=1 width=163) (actual time=0.037..0.804 rows=48 loops=1)\n>> Filter: (deactive IS NULL)\n> \n> \n> Why is it so completely off about the selectivity of the IS NULL clause?\n> Are you sure you ANALYZEd this table recently?\n\n\nYes. 
I just did:\n\n[estore@lexus] vacuum full analyze;\nVACUUM\n[estore@lexus] explain analyze select * from p_areas where deactive is null;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=8.62..512.47 rows=1 width=162) (actual time=1143.969..21811.417 rows=37 loops=1)\n Join Filter: (\"outer\".area = \"inner\".area)\n -> Seq Scan on _areas a (cost=0.00..2.49 rows=1 width=162) (actual time=0.037..1.673 rows=49 loops=1)\n Filter: (deactive IS NULL)\n -> Nested Loop (cost=8.62..25740.20 rows=2681 width=8) (actual time=1.172..429.501 rows=3566 loops=49)\n -> Nested Loop (cost=8.62..16674.93 rows=2680 width=8) (actual time=1.125..281.570 rows=3566 loops=49)\n -> Merge Join (cost=8.62..3012.72 rows=2778 width=8) (actual time=0.876..128.908 rows=3566 loops=49)\n Merge Cond: (\"outer\".pricegroup = \"inner\".pricegroup)\n -> Nested Loop IN Join (cost=8.62..1929.41 rows=9 width=8) (actual time=0.613..5.504 rows=9 loops=49)\n Join Filter: (\"outer\".buyer = \"inner\".store)\n -> Index Scan using i_pricemembers3 on _pricemembers p (cost=0.00..11.13 rows=217 width=16) (actual time=0.403..1.476 rows=142 loops=49)\n -> Subquery Scan \"IN_subquery\" (cost=8.62..8.74 rows=8 width=8) (actual time=0.013..0.019 rows=1 loops=6950)\n -> Unique (cost=8.62..8.66 rows=8 width=8) (actual time=0.007..0.010 rows=1 loops=6950)\n -> Sort (cost=8.62..8.64 rows=8 width=8) (actual time=0.003..0.004 rows=1 loops=6950)\n Sort Key: store\n -> Append (cost=2.87..8.50 rows=8 width=8) (actual time=8.394..8.446 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=2.87..5.17 rows=5 width=8) (actual time=8.112..8.112 rows=0 loops=1)\n -> Hash Join (cost=2.87..5.12 rows=5 width=8) (actual time=8.106..8.106 rows=0 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.014..0.052 rows=13 loops=1)\n -> Hash (cost=2.87..2.87 rows=1 width=8) (actual time=7.878..7.878 rows=0 loops=1)\n -> Seq Scan on _webusers w (cost=0.00..2.87 rows=1 width=8) (actual time=7.868..7.868 rows=0 loops=1)\n Filter: (webuser = getwebuser())\n -> Subquery Scan \"*SELECT* 2\" (cost=1.08..3.33 rows=3 width=8) (actual time=0.273..0.322 rows=1 loops=1)\n -> Hash Join (cost=1.08..3.30 rows=3 width=8) (actual time=0.263..0.308 rows=1 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.008..0.042 rows=13 loops=1)\n -> Hash (cost=1.07..1.07 rows=1 width=8) (actual time=0.093..0.093 rows=0 loops=1)\n -> Seq Scan on _companies c (cost=0.00..1.07 rows=1 width=8) (actual time=0.061..0.081 rows=1 loops=1)\n Filter: ((companyid)::text = 'DEFAULT'::text)\n -> Index Scan using i_offers4 on _offers o (cost=0.00..1014.76 rows=16298 width=16) (actual time=0.244..72.742 rows=10433 loops=49)\n -> Index Scan using i_inventories1 on _inventories i (cost=0.00..4.91 rows=1 width=16) (actual time=0.025..0.029 rows=1 loops=174715)\n Index Cond: (i.inventory = \"outer\".inventory)\n -> Index Scan using i_bins1 on _bins b (cost=0.00..3.37 rows=1 width=16) (actual time=0.022..0.027 rows=1 loops=174715)\n Index Cond: (b.bin = \"outer\".bin)\n Total runtime: 21813.209 ms\n\n_areas looks like:\n\n[estore@lexus] \\d _areas\n Table \"temporal._areas\"\n Column | Type | Modifiers\n--------------------+--------------------------+------------------------\n area 
| bigint | not null\n store | bigint | not null\n name | character varying(32) | not null\n description | character varying(64) | not null\n email | character varying(48) | not null\n phoneno | character varying(16) | not null\n requisition_device | bigint | not null\n inventory_device | bigint | not null\n receive_device | bigint | not null\n invoice_device | bigint | not null\n activation_device | bigint | not null\n active | timestamp with time zone | not null default now()\n deactive | timestamp with time zone |\nIndexes:\n \"i_areas1\" unique, btree (area)\n \"i_areas2\" unique, btree (store, name) WHERE (deactive IS NULL)\n \"i_areas3\" btree (store, name)\nTriggers:\n t_areas1 BEFORE INSERT OR DELETE OR UPDATE ON _areas FOR EACH ROW EXECUTE PROCEDURE ri_areas()\n\n\nNote that if I disable nestedloop plans, I get:\n\n[estore@lexus] explain analyze select * from p_areas where deactive is null;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1456.90..1457.15 rows=1 width=162) (actual time=423.273..424.156 rows=37 loops=1)\n Hash Cond: (\"outer\".area = \"inner\".area)\n -> HashAggregate (cost=1454.40..1454.40 rows=48 width=8) (actual time=422.192..422.334 rows=37 loops=1)\n -> Hash Join (cost=582.79..1447.70 rows=2681 width=8) (actual time=188.687..406.584 rows=5694 loops=1)\n Hash Cond: (\"outer\".bin = \"inner\".bin)\n -> Hash Join (cost=568.07..1386.07 rows=2680 width=8) (actual time=182.756..358.441 rows=5694 loops=1)\n Hash Cond: (\"outer\".inventory = \"inner\".inventory)\n -> Seq Scan on _inventories i (cost=0.00..280.04 rows=12004 width=16) (actual time=0.013..38.221 rows=12004 loops=1)\n -> Hash (cost=561.12..561.12 rows=2778 width=8) (actual time=182.543..182.543 rows=0 loops=1)\n -> Hash Join (cost=14.13..561.12 rows=2778 width=8) (actual time=9.854..160.963 rows=5694 loops=1)\n Hash Cond: (\"outer\".pricegroup = \"inner\".pricegroup)\n -> Seq Scan on _offers o (cost=0.00..396.98 rows=16298 width=16) (actual time=0.011..58.422 rows=16298 loops=1)\n -> Hash (cost=14.10..14.10 rows=9 width=8) (actual time=9.728..9.728 rows=0 loops=1)\n -> Hash IN Join (cost=8.76..14.10 rows=9 width=8) (actual time=8.616..9.657 rows=13 loops=1)\n Hash Cond: (\"outer\".buyer = \"inner\".store)\n -> Seq Scan on _pricemembers p (cost=0.00..4.17 rows=217 width=16) (actual time=0.011..0.565 rows=217 loops=1)\n -> Hash (cost=8.74..8.74 rows=8 width=8) (actual time=8.465..8.465 rows=0 loops=1)\n -> Subquery Scan \"IN_subquery\" (cost=8.62..8.74 rows=8 width=8) (actual time=8.446..8.455 rows=1 loops=1)\n -> Unique (cost=8.62..8.66 rows=8 width=8) (actual time=8.430..8.435 rows=1 loops=1)\n -> Sort (cost=8.62..8.64 rows=8 width=8) (actual time=8.424..8.426 rows=1 loops=1)\n Sort Key: store\n -> Append (cost=2.87..8.50 rows=8 width=8) (actual time=8.004..8.058 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=2.87..5.17 rows=5 width=8) (actual time=7.710..7.710 rows=0 loops=1)\n -> Hash Join (cost=2.87..5.12 rows=5 width=8) (actual time=7.701..7.701 rows=0 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.013..0.052 rows=13 loops=1)\n -> Hash (cost=2.87..2.87 rows=1 width=8) (actual time=7.486..7.486 rows=0 loops=1)\n -> Seq Scan on _webusers w (cost=0.00..2.87 rows=1 width=8) (actual time=7.478..7.478 rows=0 loops=1)\n Filter: 
(webuser = getwebuser())\n -> Subquery Scan \"*SELECT* 2\" (cost=1.08..3.33 rows=3 width=8) (actual time=0.284..0.336 rows=1 loops=1)\n -> Hash Join (cost=1.08..3.30 rows=3 width=8) (actual time=0.274..0.321 rows=1 loops=1)\n Hash Cond: (\"outer\".company = \"inner\".company)\n -> Seq Scan on _stores s (cost=0.00..2.13 rows=13 width=16) (actual time=0.008..0.046 rows=13 loops=1)\n -> Hash (cost=1.07..1.07 rows=1 width=8) (actual time=0.096..0.096 rows=0 loops=1)\n -> Seq Scan on _companies c (cost=0.00..1.07 rows=1 width=8) (actual time=0.064..0.083 rows=1 loops=1)\n Filter: ((companyid)::text = 'DEFAULT'::text)\n -> Hash (cost=13.18..13.18 rows=618 width=16) (actual time=5.849..5.849 rows=0 loops=1)\n -> Seq Scan on _bins b (cost=0.00..13.18 rows=618 width=16) (actual time=0.027..3.554 rows=618 loops=1)\n -> Hash (cost=2.49..2.49 rows=1 width=162) (actual time=0.960..0.960 rows=0 loops=1)\n -> Seq Scan on _areas a (cost=0.00..2.49 rows=1 width=162) (actual time=0.033..0.197 rows=49 loops=1)\n Filter: (deactive IS NULL)\n Total runtime: 427.390 ms\n\n\nThanks!\n\nMike Mascari\n\n\n\n", "msg_date": "Tue, 23 Nov 2004 21:04:15 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow execution time when querying view with WHERE clause" }, { "msg_contents": " --- Mike Mascari <[email protected]> escribi�: \n> Tom Lane wrote:\n> > Mike Mascari <[email protected]> writes:\n> > \n> >>When I query the view with a simple filter, I get:\n> > \n> > \n> >>explain analyze select * from p_areas where\n> deactive is null;\n> > \n> > \n> > The problem seems to be here:\n> > \n> > \n> >> -> Seq Scan on _areas a (cost=0.00..2.48\n> rows=1 width=163) (actual time=0.037..0.804 rows=48\n> loops=1)\n> >> Filter: (deactive IS NULL)\n> > \n> > \n> > Why is it so completely off about the selectivity\n> of the IS NULL clause?\n\nnull values are not indexable, is that your question?\nIf it is your question then create a partial index\nwith where deactive is null.\n\nregards,\nJaime Casanova\n\n_________________________________________________________\nDo You Yahoo!?\nInformaci�n de Estados Unidos y Am�rica Latina, en Yahoo! Noticias.\nVis�tanos en http://noticias.espanol.yahoo.com\n", "msg_date": "Tue, 23 Nov 2004 23:40:43 -0600 (CST)", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow execution time when querying view with WHERE clause" }, { "msg_contents": "Jaime Casanova <[email protected]> writes:\n> Tom Lane wrote:\n>> Why is it so completely off about the selectivity\n>> of the IS NULL clause?\n\n> null values are not indexable, is that your question?\n\nUh, no. The problem is that the IS NULL condition matched all 48 rows\nof the table, but the planner thought it would only match one row. This\nis definitely covered by the pg_stats statistics, and with only 48 live\nrows there couldn't possibly have been any sampling error, so what the\nheck went wrong there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Nov 2004 01:11:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow execution time when querying view with WHERE clause " }, { "msg_contents": "Mike Mascari <[email protected]> writes:\n> Tom Lane wrote:\n>> Why is it so completely off about the selectivity of the IS NULL clause?\n\n> I think this is a bug in ANALYZE not constructing statistics for columns \n> whose data is entirely NULL:\n\nUm ... doh ... 
analyze.c about line 1550:\n\n /* We can only compute valid stats if we found some non-null values. */\n if (nonnull_cnt > 0)\n ...\n\nThere's a bit of an epistemological issue here: if we didn't actually\nfind any nonnull values in our sample, is it legitimate to assume that\nthe column is entirely null? On the other hand, if we find only \"3\" in\nour sample we will happily assume the column contains only \"3\", so I\ndunno why we are discriminating against null. This seems like a case\nthat just hasn't come up before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Nov 2004 01:31:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow execution time when querying view with WHERE clause " }, { "msg_contents": "Tom Lane wrote:\n> Um ... doh ... analyze.c about line 1550:\n> \n> /* We can only compute valid stats if we found some non-null values. */\n> if (nonnull_cnt > 0)\n> ...\n> \n> There's a bit of an epistemological issue here: if we didn't actually\n> find any nonnull values in our sample, is it legitimate to assume that\n> the column is entirely null? On the other hand, if we find only \"3\" in\n> our sample we will happily assume the column contains only \"3\", so I\n> dunno why we are discriminating against null. This seems like a case\n> that just hasn't come up before.\n\nWill this discriminatory policy toward null end for 8.0?\n\nMike Mascari\n\n\n", "msg_date": "Wed, 24 Nov 2004 17:51:13 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow execution time when querying view with WHERE clause" } ]
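Two follow-ups a reader could try against the schema Mike posted; both are hedged suggestions rather than fixes confirmed in the thread. The first shows how to see whether ANALYZE stored anything at all for the column (per Tom's analyze.c observation, an entirely-NULL column may simply have no pg_stats row), and the second is the partial index Jaime suggested, matching the predicate of the existing i_areas2 index. Whether either changes the join plan still has to be verified with EXPLAIN ANALYZE.

-- Did ANALYZE record statistics for the all-NULL column?
SELECT null_frac, n_distinct
  FROM pg_stats
 WHERE schemaname = 'temporal'
   AND tablename  = '_areas'
   AND attname    = 'deactive';

-- Partial index covering the common "deactive IS NULL" filter:
CREATE INDEX i_areas_active ON _areas (area) WHERE deactive IS NULL;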
[ { "msg_contents": "Following is the promised writeup in performance related issues\ncomparing win32 with linux x86 and linux x86-64. Unfortunately, the 64\nbit portion of the test is not yet completed and won't be for a bit.\nHowever there are some telling things about the win32/linux comparison.\nIf you are considering deploying postgres in a win32 environment read\non...\n \nFirst a description of the client app:\nOur company develops an ERP/CRM written in cobol which we are porting to\nrun on PostgreSQL. Part of this porting effort was development of an\nISAM 'driver' for our app to allow it to store/fetch data from the\ndatabase in place of a traditional file system, which is complete.\n\nFor those of you unfamiliar with COBOL/ISAM, applications written with\nit have a 'one record at a time' mentality, such the application tends\nto spam the server with queries of the select * from t where k = k1\nvariety. Our driver creates stored procedures as necessary and uses\nExecParams wherever possible to cut down on server CPU time, which is a\nprecious resource in our case. Over time we plan to gradually redeploy\nour application logic to make better use of the sql server's server side\npower. Our application is very rarely i/o bound because we will make\nsure our server has enough memory so that the data will be rarely, if\never, *not* run from the cache.\n\nA good benchmark of our application performance is the time it takes to\nread the entire bill of materials for a product. This is a recursive\nread of about 2500 records in the typical case (2408 in the test case).\n\nTest platform:\nPentium 4 3.06 GHz/HT\n10k SATA Raptor\n1Gb memory\nWindows XP Pro SP2/Redhat Fedora 3 (64 bit results coming soon)\n\nBOM traversal for product ***** (1 user): \nwin32: runtime: 3.34 sec avg cpu load: 60%\nredhat: runtime: 3.46 sec avg cpu load: 20%\n\nWell, win32 wins this test. There is some variability in the results\nmeaning for a single user scenario there is basically no difference\nbetween win32 and linux in execution time. However the cpu load is much\nlower for linux which spells problems for win32 with multiple users:\n\nBOM traversal for product ***** (6 users):\nwin32: runtime (each): 7.29 sec avg cpu load: 100%\nredhat: runtime (each): 4.56 sec avg cpu load: 90%\n\nHere, the win32 problems with cpu load start to manifest. The cpu meter\nstays pegged at 100% while the redhat hand around 90%. The difference\nin times is telling.\n\nThe third and final test is what I call the query 'surprise' factor, IOW\nsurprise! your query takes forever! The test involves a combination of\nthe previous test with a query with a couple of joins that returns about\n15k records. On both redhat/win32, the query takes about .35 seconds to\nexecute on a unloaded server...remember that figure.\n\n\n\nItem List generation while 6 clients generating BOM for multiple\nproducts:\nRedhat: 2.5 seconds\nWin32: 155 seconds (!)\n\nHere the win32 server is showing real problems. Also, the query\nexecution time is really variable, in some cases not returning until the\n6 workhorse processes completed their tasks. 
The linux server by\ncontrast ran slower but never ran over 4 seconds after multiple runs.\n\nAlso, on the purely subjective side, the linux server 'feels' faster and\nconsiderably more responsive under load, even under much higher load.\n\nComments/Suggestions?\n\nMerlin\n\n\n\n", "msg_date": "Mon, 22 Nov 2004 17:07:05 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "scalability issues on win32 " }, { "msg_contents": "\n> Test platform:\n> Pentium 4 3.06 GHz/HT\n> 10k SATA Raptor\n> 1Gb memory\n> Windows XP Pro SP2/Redhat Fedora 3 (64 bit results coming soon)\n\n\tCould you please add information about...\n\t- filesystems ?\n\t- windows configured as \"network server\" or as \"desktop box\" ?\n\t- virtual memory\n\tIn my experience you MUST deactivate virtual memory on a Windows box to \navoid catastrophic competition between virtual memory and disk cache\n\t- respective pgsql configurations (buffers...) identical ?\n\t- explain analyze for the two, identical ?\n\t- client on same machine or via network (100Mb ? 1G ?)\n\t- size of the data set involved in query\n\t- first query time after boot (with nothing in the cache), and times for \nthe next disk-cached runs ?\n\t- are the N users doing the same query or exercising different parts of \nthe dataset ?\n\n\tYou don't do any writes in your test do you ? Just big SELECTs ?\n", "msg_date": "Tue, 23 Nov 2004 00:15:26 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalability issues on win32 " }, { "msg_contents": "\nThis was an intersting Win32/linux comparison. I expected Linux to scale\nbetter, but I was surprised how poorly XP scaled. It reinforces our\nperception that Win32 is for low traffic servers.\n\n---------------------------------------------------------------------------\n\nMerlin Moncure wrote:\n> Following is the promised writeup in performance related issues\n> comparing win32 with linux x86 and linux x86-64. Unfortunately, the 64\n> bit portion of the test is not yet completed and won't be for a bit.\n> However there are some telling things about the win32/linux comparison.\n> If you are considering deploying postgres in a win32 environment read\n> on...\n> \n> First a description of the client app:\n> Our company develops an ERP/CRM written in cobol which we are porting to\n> run on PostgreSQL. Part of this porting effort was development of an\n> ISAM 'driver' for our app to allow it to store/fetch data from the\n> database in place of a traditional file system, which is complete.\n> \n> For those of you unfamiliar with COBOL/ISAM, applications written with\n> it have a 'one record at a time' mentality, such the application tends\n> to spam the server with queries of the select * from t where k = k1\n> variety. Our driver creates stored procedures as necessary and uses\n> ExecParams wherever possible to cut down on server CPU time, which is a\n> precious resource in our case. Over time we plan to gradually redeploy\n> our application logic to make better use of the sql server's server side\n> power. Our application is very rarely i/o bound because we will make\n> sure our server has enough memory so that the data will be rarely, if\n> ever, *not* run from the cache.\n> \n> A good benchmark of our application performance is the time it takes to\n> read the entire bill of materials for a product. 
This is a recursive\n> read of about 2500 records in the typical case (2408 in the test case).\n> \n> Test platform:\n> Pentium 4 3.06 GHz/HT\n> 10k SATA Raptor\n> 1Gb memory\n> Windows XP Pro SP2/Redhat Fedora 3 (64 bit results coming soon)\n> \n> BOM traversal for product ***** (1 user): \n> win32: runtime: 3.34 sec avg cpu load: 60%\n> redhat: runtime: 3.46 sec avg cpu load: 20%\n> \n> Well, win32 wins this test. There is some variability in the results\n> meaning for a single user scenario there is basically no difference\n> between win32 and linux in execution time. However the cpu load is much\n> lower for linux which spells problems for win32 with multiple users:\n> \n> BOM traversal for product ***** (6 users):\n> win32: runtime (each): 7.29 sec avg cpu load: 100%\n> redhat: runtime (each): 4.56 sec avg cpu load: 90%\n> \n> Here, the win32 problems with cpu load start to manifest. The cpu meter\n> stays pegged at 100% while the redhat hand around 90%. The difference\n> in times is telling.\n> \n> The third and final test is what I call the query 'surprise' factor, IOW\n> surprise! your query takes forever! The test involves a combination of\n> the previous test with a query with a couple of joins that returns about\n> 15k records. On both redhat/win32, the query takes about .35 seconds to\n> execute on a unloaded server...remember that figure.\n> \n> \n> \n> Item List generation while 6 clients generating BOM for multiple\n> products:\n> Redhat: 2.5 seconds\n> Win32: 155 seconds (!)\n> \n> Here the win32 server is showing real problems. Also, the query\n> execution time is really variable, in some cases not returning until the\n> 6 workhorse processes completed their tasks. The linux server by\n> contrast ran slower but never ran over 4 seconds after multiple runs.\n> \n> Also, on the purely subjective side, the linux server 'feels' faster and\n> considerably more responsive under load, even under much higher load.\n> \n> Comments/Suggestions?\n> \n> Merlin\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 22 Nov 2004 21:26:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalability issues on win32" }, { "msg_contents": "Merlin Moncure schrieb:\n> Following is the promised writeup in performance related issues\n> comparing win32 with linux x86 and linux x86-64. Unfortunately, the 64\n> bit portion of the test is not yet completed and won't be for a bit.\n> However there are some telling things about the win32/linux comparison.\n> If you are considering deploying postgres in a win32 environment read\n> on...\n> \n> First a description of the client app:\n> Our company develops an ERP/CRM written in cobol which we are porting to\n> run on PostgreSQL. Part of this porting effort was development of an\n> ISAM 'driver' for our app to allow it to store/fetch data from the\n> database in place of a traditional file system, which is complete.\n> \n> For those of you unfamiliar with COBOL/ISAM, applications written with\n> it have a 'one record at a time' mentality, such the application tends\n> to spam the server with queries of the select * from t where k = k1\n> variety. 
Our driver creates stored procedures as necessary and uses\n> ExecParams wherever possible to cut down on server CPU time, which is a\n> precious resource in our case. Over time we plan to gradually redeploy\n> our application logic to make better use of the sql server's server side\n> power. Our application is very rarely i/o bound because we will make\n> sure our server has enough memory so that the data will be rarely, if\n> ever, *not* run from the cache.\n> \n> A good benchmark of our application performance is the time it takes to\n> read the entire bill of materials for a product. This is a recursive\n> read of about 2500 records in the typical case (2408 in the test case).\n\nI always knew that COBOL ultimativly looses, but it's always refreshing \nto get confirmation from time to time :)\n\n> Test platform:\n> Pentium 4 3.06 GHz/HT\n> 10k SATA Raptor\n> 1Gb memory\n> Windows XP Pro SP2/Redhat Fedora 3 (64 bit results coming soon)\n> \n> BOM traversal for product ***** (1 user): \n> win32: runtime: 3.34 sec avg cpu load: 60%\n> redhat: runtime: 3.46 sec avg cpu load: 20%\n\nWhere did you get the win32 \"avg cpu load\" number from? AFAIK there's no \ngetloadavg() for windows. At least I tried hard to find one, because I \nwant to add a comparable figure to cygwin core. emacs, coreutils, make \nand others would need desperately need it, not to speak of servers and \nreal-time apps.\nDid you read it from taskman, or did you come up with your self-written \nsolution? In taskman there's afaik no comparable figure. But there \nshould be some perfmon api, which would do the trick.\n\nOverview:\n http://www.wilsonmar.com/1perfmon.htm#TaskManager\n\n\"The load average (LA) is the average number of processes (the sum of \nthe run queue length and the number of jobs currently running) that are \nready to run, but are waiting for access to a busy CPU.\"\n\nAnd thanks for the overview!\n-- \nReini Urban\nhttp://xarch.tu-graz.ac.at/home/rurban/\n", "msg_date": "Tue, 23 Nov 2004 11:19:32 +0100", "msg_from": "Reini Urban <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalability issues on win32" } ]
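For readers unfamiliar with the access pattern being benchmarked in this thread: the ISAM driver issues one parameterized single-row lookup per record, so a single BOM traversal amounts to roughly 2,500 round trips of the 'select * from t where k = k1' form. A minimal sketch of that pattern as a server-side prepared statement follows; the table and column names (bom, part_no) are invented for illustration and are not from Merlin's schema, and ExecParams achieves much the same thing at the protocol level without an explicit PREPARE:

    -- Hypothetical one-record-at-a-time fetch; bom and part_no are made-up names.
    PREPARE fetch_bom_row (integer) AS
        SELECT * FROM bom WHERE part_no = $1;

    EXECUTE fetch_bom_row(12345);   -- repeated ~2,500 times for one BOM traversal
    DEALLOCATE fetch_bom_row;

Because the data is expected to stay in cache, the benchmark is dominated by per-query CPU and round-trip overhead rather than I/O, which is why the difference in CPU load between the two operating systems matters so much here.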
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf \n> Of Bruce Momjian\n> Sent: 23 November 2004 02:26\n> To: Merlin Moncure\n> Cc: [email protected]; PostgreSQL Win32 port list\n> Subject: Re: [pgsql-hackers-win32] scalability issues on win32\n> \n> \n> This was an intersting Win32/linux comparison. I expected \n> Linux to scale better, but I was surprised how poorly XP \n> scaled. It reinforces our perception that Win32 is for low \n> traffic servers.\n\nThat's a bit harsh given the lack of any further investigation so far\nisn't it? Win32 can run perfectly well with other DBMSs with hundreds of\nusers.\n\nAny chance you can profile your test runs Merlin?\n\nRegards, Dave.\n", "msg_date": "Tue, 23 Nov 2004 10:35:54 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalability issues on win32" }, { "msg_contents": "Dave Page wrote:\n> \n> \n> > -----Original Message-----\n> > From: [email protected] \n> > [mailto:[email protected]] On Behalf \n> > Of Bruce Momjian\n> > Sent: 23 November 2004 02:26\n> > To: Merlin Moncure\n> > Cc: [email protected]; PostgreSQL Win32 port list\n> > Subject: Re: [pgsql-hackers-win32] scalability issues on win32\n> > \n> > \n> > This was an intersting Win32/linux comparison. I expected \n> > Linux to scale better, but I was surprised how poorly XP \n> > scaled. It reinforces our perception that Win32 is for low \n> > traffic servers.\n> \n> That's a bit harsh given the lack of any further investigation so far\n> isn't it? Win32 can run perfectly well with other DBMSs with hundreds of\n> users.\n\nThe general opinion of server users is that you need 2-4 more Win32\nservers to do the same work as one Unix-like server. That and the\ndifficulty of automated administration and security problems is what is\npreventing Win32 from making greater inroads into the server\nmarketplace.\n\nOf course these are just generalizations.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 23 Nov 2004 10:06:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalability issues on win32" } ]
[ { "msg_contents": "Reini Urban wrote:\n> Merlin Moncure schrieb:\n> > A good benchmark of our application performance is the time it takes\nto\n> > read the entire bill of materials for a product. This is a\nrecursive\n> > read of about 2500 records in the typical case (2408 in the test\ncase).\n> \n> I always knew that COBOL ultimativly looses, but it's always\nrefreshing\n> to get confirmation from time to time :)\n\nHeh. It's important to make the distinction between COBOL, which is\njust a language, and ISAM, which is a data delivery system. You could,\nfor example, pair COBOL with SQL with good results, (in fact, we plan\nto). But yes, many legacy COBOL apps were written with assumptions\nabout the system architecture that are no longer valid.\n\n> Where did you get the win32 \"avg cpu load\" number from? AFAIK there's\nno\n> getloadavg() for windows. At least I tried hard to find one, because I\n> want to add a comparable figure to cygwin core. emacs, coreutils, make\n> and others would need desperately need it, not to speak of servers and\n> real-time apps.\n\nI just eyeballed it :-). So consider the load averages anecdotal,\nalthough they are quite stable. However it is quite striking that with\nthe same application code the win32 load average was 2-3 times higher.\n\nI also left out the dual processor results, because I did not have time\nto test them on linux. However, sadly the 2nd processor adds very\nlittle extras horsepower to the server. I'm hoping linux will be\nbetter.\n\nMerlin\n", "msg_date": "Tue, 23 Nov 2004 08:05:27 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalability issues on win32" } ]
[ { "msg_contents": "> > > This was an intersting Win32/linux comparison. I expected \n> Linux to \n> > > scale better, but I was surprised how poorly XP scaled. It \n> > > reinforces our perception that Win32 is for low traffic servers.\n> > \n> > That's a bit harsh given the lack of any further \n> investigation so far \n> > isn't it? Win32 can run perfectly well with other DBMSs \n> with hundreds \n> > of users.\n> \n> The general opinion of server users is that you need 2-4 more \n> Win32 servers to do the same work as one Unix-like server. \n> That and the difficulty of automated administration and \n> security problems is what is preventing Win32 from making \n> greater inroads into the server marketplace.\n> \n> Of course these are just generalizations.\n\nIs this for Postgresql Cygwin? You surely can't mean \"for all server\ntasks\" - if so, I would say that's *way* off. There is a difference, but\nit's more along the line of single-digit percentage in my experience -\nprovided you config your machines reasonably, of course.\n\n(In my experience, Win32 MSSQLServer often outperforms postgresql on\nLinux. Granted you can tweak postgresql up to higher speeds, but MS does\nmost of that tweaking automatically... Talking of tweaking a lot more\nspecific than just raising the memory limits from the installation\ndefault, of course)\n\nI do agree on the automated administration though... It's a major PITA.\n\n\n//Magnus\n", "msg_date": "Tue, 23 Nov 2004 17:25:50 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalability issues on win32" } ]
[ { "msg_contents": "> Is this for Postgresql Cygwin? You surely can't mean \"for all server\n> tasks\" - if so, I would say that's *way* off. There is a difference,\nbut\n> it's more along the line of single-digit percentage in my experience -\n> provided you config your machines reasonably, of course.\n> \n> (In my experience, Win32 MSSQLServer often outperforms postgresql on\n> Linux. Granted you can tweak postgresql up to higher speeds, but MS\ndoes\n> most of that tweaking automatically... Talking of tweaking a lot more\n> specific than just raising the memory limits from the installation\n> default, of course)\n\nI agree with Magnus. Specifically, I suspect there is some sort of\nresource contention going on that is driving up the cpu load when the\nqueries follow certain patterns. This resource contention could be\nhappening in the win32 port code (likely ipc), the mingw api, or inside\nthe o/s itself.\n\nOther servers, namely apache, sql server and a host of others do not\nhave this problem.\n\nMerlin\n", "msg_date": "Tue, 23 Nov 2004 12:06:47 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-hackers-win32] scalability issues on win32" } ]
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]] \n> Sent: 23 November 2004 15:06\n> To: Dave Page\n> Cc: Merlin Moncure; [email protected]; \n> PostgreSQL Win32 port list\n> Subject: Re: [pgsql-hackers-win32] scalability issues on win32\n> \n> The general opinion of server users is that you need 2-4 more \n> Win32 servers to do the same work as one Unix-like server. \n> That and the difficulty of automated administration and \n> security problems is what is preventing Win32 from making \n> greater inroads into the server marketplace.\n> \n> Of course these are just generalizations.\n\nI'd rather avoid an OS advocacy war here, but if I'm honest, with group\npolicy and other tools such as SUS, I find that my Windows servers are\nactually easier to administer than the Linux ones (I have about a 50-50\nmix at work). Perhaps that's because I favour Slackware though?\n\nAs for the 2-4 servers quote, I find that a little on the high side. I\nagree that generally you might expect a little more performance from an\nequivalent Linux system on the same hardware, but in my practical\nexperience the difference is far less than you suggest.\n\nRegards, Dave.\n", "msg_date": "Tue, 23 Nov 2004 20:12:21 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalability issues on win32" }, { "msg_contents": "Dave Page wrote:\n> \n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]] \n> > Sent: 23 November 2004 15:06\n> > To: Dave Page\n> > Cc: Merlin Moncure; [email protected]; \n> > PostgreSQL Win32 port list\n> > Subject: Re: [pgsql-hackers-win32] scalability issues on win32\n> > \n> > The general opinion of server users is that you need 2-4 more \n> > Win32 servers to do the same work as one Unix-like server. \n> > That and the difficulty of automated administration and \n> > security problems is what is preventing Win32 from making \n> > greater inroads into the server marketplace.\n> > \n> > Of course these are just generalizations.\n> \n> I'd rather avoid an OS advocacy war here, but if I'm honest, with group\n> policy and other tools such as SUS, I find that my Windows servers are\n> actually easier to administer than the Linux ones (I have about a 50-50\n> mix at work). Perhaps that's because I favour Slackware though?\n> \n> As for the 2-4 servers quote, I find that a little on the high side. I\n> agree that generally you might expect a little more performance from an\n> equivalent Linux system on the same hardware, but in my practical\n> experience the difference is far less than you suggest.\n\nI have never run the tests myself. I am just quoting what I have heard,\nand maybe that information is a few years old.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 23 Nov 2004 23:38:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalability issues on win32" } ]
[ { "msg_contents": "\nHi everyone,\n\nCan anyone please explain postgres' behavior on our index.\n\nI did the following query tests on our database:\n\n====================\ndb=# create index chatlogs_date_idx on chatlogs (date);\nCREATE\ndb=# explain select date from chatlogs where date>='11/23/04';\nNOTICE: QUERY PLAN:\n\nIndex Scan using chatlogs_date_idx on chatlogs (cost=0.00..144.11 rows=36\nwidth=4)\n\nEXPLAIN\ndb=# explain select date from chatlogs where date>='10/23/04';\nNOTICE: QUERY PLAN:\n\nSeq Scan on chatlogs (cost=0.00..23938.06 rows=253442 width=4)\n\nEXPLAIN====================\n\nDate's datatype is date. Its just odd that I just change the actual date of\nsearch and the index is not being used anymore.\n\n", "msg_date": "Wed, 24 Nov 2004 14:52:07 +0800", "msg_from": "\"BBI Edwin Punzalan\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Index usage" } ]
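The plain EXPLAIN output above only shows the planner's estimates: 36 rows for the '11/23/04' cutoff versus 253,442 for the '10/23/04' one. That difference is the whole story — a predicate expected to match a large share of the table is normally cheaper to satisfy with a sequential scan than with an index scan. Two quick checks make that concrete; this is only a diagnostic sketch reusing the table and column names from the message above, with ISO date literals to avoid any DateStyle ambiguity:

    -- EXPLAIN ANALYZE actually runs the query and reports real row counts
    -- and timings next to the estimates, so the two can be compared.
    EXPLAIN ANALYZE SELECT date FROM chatlogs WHERE date >= '2004-11-23';
    EXPLAIN ANALYZE SELECT date FROM chatlogs WHERE date >= '2004-10-23';

    -- How many rows does each cutoff really match, and out of how many?
    SELECT count(*) FROM chatlogs WHERE date >= '2004-11-23';
    SELECT count(*) FROM chatlogs WHERE date >= '2004-10-23';
    SELECT count(*) FROM chatlogs;

If the older cutoff really does select a sizable fraction of the table, the switch to a sequential scan is expected behaviour rather than a planner failure.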
[ { "msg_contents": "Well you just selected a whole lot more rows... What's the total number of rows in the table?\n\nIn general, what I remember from reading on the list, is that when there's no upper bound on a query like this, the planner is more likely to choose a seq. scan than an index scan.\nTry to give your query an upper bound like:\n\nselect date from chatlogs where date>='11/23/04' and date < '12/31/99';\n\nselect date from chatlogs where date>='10/23/04' and date < '12/31/99';\n\nThis should make it easier for the planner to give a proper estimate of the number of rows returned. If it doesn't help yet, please post 'explain analyze' output rather than 'explain' output, for it allows much better investigation into why the planner chooses what it chooses.\n\ncheers,\n\n--Tim\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of BBI Edwin Punzalan\nSent: Wednesday, November 24, 2004 7:52 AM\nTo: [email protected]\nSubject: [PERFORM] FW: Index usage\n\n\n\nHi everyone,\n\nCan anyone please explain postgres' behavior on our index.\n\nI did the following query tests on our database:\n\n====================\ndb=# create index chatlogs_date_idx on chatlogs (date);\nCREATE\ndb=# explain select date from chatlogs where date>='11/23/04';\nNOTICE: QUERY PLAN:\n\nIndex Scan using chatlogs_date_idx on chatlogs (cost=0.00..144.11 rows=36\nwidth=4)\n\nEXPLAIN\ndb=# explain select date from chatlogs where date>='10/23/04';\nNOTICE: QUERY PLAN:\n\nSeq Scan on chatlogs (cost=0.00..23938.06 rows=253442 width=4)\n\nEXPLAIN====================\n\nDate's datatype is date. Its just odd that I just change the actual date of\nsearch and the index is not being used anymore.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n", "msg_date": "Wed, 24 Nov 2004 08:34:41 +0100", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Index usage" } ]
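Beyond EXPLAIN ANALYZE, the system catalogs answer Tim's question about the total number of rows and show what ANALYZE has recorded for the column. A short diagnostic sketch, again reusing the table name from the thread:

    -- The planner's notion of table size, maintained by VACUUM/ANALYZE.
    SELECT relname, reltuples, relpages
    FROM   pg_class
    WHERE  relname = 'chatlogs';

    -- The per-column statistics ANALYZE gathered for the date column.
    SELECT n_distinct, most_common_vals, histogram_bounds
    FROM   pg_stats
    WHERE  tablename = 'chatlogs' AND attname = 'date';

With the default statistics target the histogram has only about ten buckets, so estimates for range predicates can be fairly coarse — which is what the SET STATISTICS suggestions later in this thread try to address.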
[ { "msg_contents": "\nThanks, Tim.\n\nI tried adding an upper limit and its still the same as follows:\n\n==============\ndb=# explain analyze select date from chatlogs where date>='11/24/04';\nNOTICE: QUERY PLAN:\n\nIndex Scan using chatlogs_date_idx on chatlogs (cost=0.00..145.72 rows=37\nwidth=4) (actual time=0.18..239.69 rows=10737 loops=1)\nTotal runtime: 246.22 msec\n\nEXPLAIN\ndb=# explain analyze select date from chatlogs where date>='11/23/04' and\ndate<'11/24/04';\nNOTICE: QUERY PLAN:\n\nSeq Scan on chatlogs (cost=0.00..24763.19 rows=9200 width=4) (actual\ntime=0.44..4447.01 rows=13029 loops=1)\nTotal runtime: 4455.56 msec\n\nEXPLAIN\ndb=# explain analyze select date from chatlogs where date>='11/23/04' and\ndate<'11/25/04';\nNOTICE: QUERY PLAN:\n\nSeq Scan on chatlogs (cost=0.00..24763.19 rows=9200 width=4) (actual\ntime=0.45..4268.00 rows=23787 loops=1)\nTotal runtime: 4282.81 msec\n==============\n\nHow come a query on the current date filter uses an index and the others\ndoes not? This makes indexing to speed up queries quite difficult.\n\n-----Original Message-----\nFrom: Leeuw van der, Tim [mailto:[email protected]] \nSent: Wednesday, November 24, 2004 3:35 PM\nTo: BBI Edwin Punzalan; [email protected]\nSubject: RE: [PERFORM] FW: Index usage\n\n\nWell you just selected a whole lot more rows... What's the total number of\nrows in the table?\n\nIn general, what I remember from reading on the list, is that when there's\nno upper bound on a query like this, the planner is more likely to choose a\nseq. scan than an index scan. Try to give your query an upper bound like:\n\nselect date from chatlogs where date>='11/23/04' and date < '12/31/99';\n\nselect date from chatlogs where date>='10/23/04' and date < '12/31/99';\n\nThis should make it easier for the planner to give a proper estimate of the\nnumber of rows returned. If it doesn't help yet, please post 'explain\nanalyze' output rather than 'explain' output, for it allows much better\ninvestigation into why the planner chooses what it chooses.\n\ncheers,\n\n--Tim\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of BBI Edwin\nPunzalan\nSent: Wednesday, November 24, 2004 7:52 AM\nTo: [email protected]\nSubject: [PERFORM] FW: Index usage\n\n\n\nHi everyone,\n\nCan anyone please explain postgres' behavior on our index.\n\nI did the following query tests on our database:\n\n====================\ndb=# create index chatlogs_date_idx on chatlogs (date);\nCREATE\ndb=# explain select date from chatlogs where date>='11/23/04';\nNOTICE: QUERY PLAN:\n\nIndex Scan using chatlogs_date_idx on chatlogs (cost=0.00..144.11 rows=36\nwidth=4)\n\nEXPLAIN\ndb=# explain select date from chatlogs where date>='10/23/04';\nNOTICE: QUERY PLAN:\n\nSeq Scan on chatlogs (cost=0.00..23938.06 rows=253442 width=4)\n\nEXPLAIN====================\n\nDate's datatype is date. 
Its just odd that I just change the actual date of\nsearch and the index is not being used anymore.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Wed, 24 Nov 2004 16:07:37 +0800", "msg_from": "\"BBI Edwin Punzalan\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: FW: Index usage" }, { "msg_contents": "From: \"BBI Edwin Punzalan\" <[email protected]>\n\n> db=# explain analyze select date from chatlogs where date>='11/23/04' and\n> date<'11/25/04';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on chatlogs (cost=0.00..24763.19 rows=9200 width=4) (actual\n> time=0.45..4268.00 rows=23787 loops=1)\n> Total runtime: 4282.81 msec\n> ==============\n> \n> How come a query on the current date filter uses an index and the others\n> does not? This makes indexing to speed up queries quite difficult.\n\nhave you ANALYZED the table lately ?\nwhat version postgres are you using ?\n\ngnari\n\n\n\n\n", "msg_date": "Wed, 24 Nov 2004 08:34:38 -0000", "msg_from": "\"gnari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "\nYes, the database is being vacuum-ed and analyzed on a daily basis.\n\nOur version is 7.2.1\n\n-----Original Message-----\nFrom: gnari [mailto:[email protected]] \nSent: Wednesday, November 24, 2004 4:35 PM\nTo: BBI Edwin Punzalan; [email protected]\nSubject: Re: [PERFORM] FW: Index usage\n\n\nFrom: \"BBI Edwin Punzalan\" <[email protected]>\n\n> db=# explain analyze select date from chatlogs where date>='11/23/04' \n> and date<'11/25/04';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on chatlogs (cost=0.00..24763.19 rows=9200 width=4) (actual \n> time=0.45..4268.00 rows=23787 loops=1) Total runtime: 4282.81 msec\n> ==============\n> \n> How come a query on the current date filter uses an index and the \n> others does not? This makes indexing to speed up queries quite \n> difficult.\n\nhave you ANALYZED the table lately ?\nwhat version postgres are you using ?\n\ngnari\n\n\n\n", "msg_date": "Wed, 24 Nov 2004 17:43:52 +0800", "msg_from": "\"BBI Edwin Punzalan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "BBI Edwin Punzalan wrote:\n> Thanks, Tim.\n> \n> I tried adding an upper limit and its still the same as follows:\n> \n> ==============\n> db=# explain analyze select date from chatlogs where date>='11/24/04';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using chatlogs_date_idx on chatlogs (cost=0.00..145.72 rows=37\n> width=4) (actual time=0.18..239.69 rows=10737 loops=1)\n> Total runtime: 246.22 msec\n> \n> EXPLAIN\n> db=# explain analyze select date from chatlogs where date>='11/23/04' and\n> date<'11/24/04';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on chatlogs (cost=0.00..24763.19 rows=9200 width=4) (actual\n> time=0.44..4447.01 rows=13029 loops=1)\n> Total runtime: 4455.56 msec\n\nWe have two issues here\n1. In the first example it only picks an index because it thinks it is \ngoing to get 37 rows, it actually gets 10737\n2. It's taking 4455ms to run a seq-scan but only 246ms to run an \nindex-scan over 10737 rows (and then fetch the rows too).\n\nQuestions:\n1. How many rows do you have in chatlogs?\n2. Is this the only problem you are experiencing, or just one from many?\n3. Have you tuned any configuration settings? e.g. 
as suggested in:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 24 Nov 2004 10:16:53 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: FW: Index usage" }, { "msg_contents": "\nHi.\n\n1) chatlogs rows increases every now and then (its in a live environment)\nand currently have 538,696 rows\n2) this is the only problem we experienced. So far, all our other indexes\nare being used correctly.\n3) I don't remember tuning any post-installation configuration of our\npostgreSQL except setting fsync to false.\n\nThanks for taking a look at our problem. :D\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Wednesday, November 24, 2004 6:17 PM\nTo: BBI Edwin Punzalan\nCc: [email protected]\nSubject: Re: FW: [PERFORM] FW: Index usage\n\n\nBBI Edwin Punzalan wrote:\n> Thanks, Tim.\n> \n> I tried adding an upper limit and its still the same as follows:\n> \n> ==============\n> db=# explain analyze select date from chatlogs where date>='11/24/04';\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using chatlogs_date_idx on chatlogs (cost=0.00..145.72 \n> rows=37\n> width=4) (actual time=0.18..239.69 rows=10737 loops=1)\n> Total runtime: 246.22 msec\n> \n> EXPLAIN\n> db=# explain analyze select date from chatlogs where date>='11/23/04' \n> and date<'11/24/04';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on chatlogs (cost=0.00..24763.19 rows=9200 width=4) (actual \n> time=0.44..4447.01 rows=13029 loops=1) Total runtime: 4455.56 msec\n\nWe have two issues here\n1. In the first example it only picks an index because it thinks it is \ngoing to get 37 rows, it actually gets 10737\n2. It's taking 4455ms to run a seq-scan but only 246ms to run an \nindex-scan over 10737 rows (and then fetch the rows too).\n\nQuestions:\n1. How many rows do you have in chatlogs?\n2. Is this the only problem you are experiencing, or just one from many? 3.\nHave you tuned any configuration settings? e.g. as suggested in:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Wed, 24 Nov 2004 18:39:52 +0800", "msg_from": "\"BBI Edwin Punzalan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: FW: Index usage" }, { "msg_contents": "BBI Edwin Punzalan wrote:\n> Hi.\n> \n> 1) chatlogs rows increases every now and then (its in a live environment)\n> and currently have 538,696 rows\n\nOK, so as a rule of thumb I'd say if you were fetching less than 5000 \nrows it's bound to use an index. If more than 50,000 always use a \nseqscan, otherwise it'll depend on configuration settings. It looks like \nyou settings are suggesting the cost of an index-scan vs seq-scan are \ngreater than they are.\n\n> 2) this is the only problem we experienced. So far, all our other indexes\n> are being used correctly.\n\nGood.\n\n> 3) I don't remember tuning any post-installation configuration of our\n> postgreSQL except setting fsync to false.\n\nSo long as you know why this can cause data loss. It won't affect this \nproblem.\n\nRead that performance article I linked to in the last message, it's \nwritten by two people who know what they're talking about. The standard \nconfiguration settings are designed to work on any machine, not provide \ngood performance. Work through the basics there and we can look at \nrandom_page_cost etc. 
if it's still causing you problems.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 24 Nov 2004 11:02:06 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: FW: Index usage" }, { "msg_contents": "From: \"BBI Edwin Punzalan\" <[email protected]>\n> \n> Yes, the database is being vacuum-ed and analyzed on a daily basis.\n> \n\nthen you should consider increating the statistics on the date column,\nas the estimates were a bit off in the plan\n\n> Our version is 7.2.1\n\nupgrade time ?\n\ngnari\n\n\n", "msg_date": "Wed, 24 Nov 2004 19:12:41 -0000", "msg_from": "\"gnari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "\nHi, what do you mean by increasing the statistics on the date column?\n\nWe never had any upgrade on it.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of gnari\nSent: Thursday, November 25, 2004 3:13 AM\nTo: BBI Edwin Punzalan; [email protected]\nSubject: Re: [PERFORM] FW: Index usage\n\n\nFrom: \"BBI Edwin Punzalan\" <[email protected]>\n> \n> Yes, the database is being vacuum-ed and analyzed on a daily basis.\n> \n\nthen you should consider increating the statistics on the date column, as\nthe estimates were a bit off in the plan\n\n> Our version is 7.2.1\n\nupgrade time ?\n\ngnari\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Wed, 1 Dec 2004 09:50:30 +0800", "msg_from": "\"BBI Edwin Punzalan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "From: \"BBI Edwin Punzalan\" <[email protected]>\n\n\n> \n> Hi, what do you mean by increasing the statistics on the date column?\n\nalter table chatlogs alter column date set statistics 300;\nanalyze chatlogs;\n\n> > > Our version is 7.2.1\n> > \n> > upgrade time ?\n> \n> We never had any upgrade on it.\n\n7.2 is a bit dated now that 8.0 is in beta\n\nif you want to stay with 7.2, you should at least upgrade\nto the latest point release (7.2.6 ?), as several serious bugs\nhave been fixed\n\ngnari\n\n\n", "msg_date": "Wed, 1 Dec 2004 02:07:52 -0000", "msg_from": "\"gnari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "\nThanks but whatever it does, it didn't work. 
:D\n\nDo you think upgrading will fix this problem?\n\n=========================\ndb=# alter table chatlogs alter column date set statistics 300;\nALTER\ndb=# analyze chatlogs;\nANALYZE\ndb=# explain analyze select * from chatlogs where date >= '12/1/04';\nNOTICE: QUERY PLAN:\n\nIndex Scan using chatlogs_type_idx on chatlogs (cost=0.00..6053.61\nrows=3357 width=212) (actual time=22.14..138.53 rows=1312\nloops=1)\nTotal runtime: 139.42 msec\n\nEXPLAIN\nmorphTv=# explain analyze select * from chatlogs where date >= '11/03/04';\nNOTICE: QUERY PLAN:\n\nSeq Scan on chatlogs (cost=0.00..27252.86 rows=271882 width=212) (actual\ntime=12.24..13419.36 rows=257137 loops=1)\nTotal runtime: 13573.70 msec\n\nEXPLAIN\n=========================\n\n\n\n-----Original Message-----\nFrom: gnari [mailto:[email protected]] \nSent: Wednesday, December 01, 2004 10:08 AM\nTo: BBI Edwin Punzalan; [email protected]\nSubject: Re: [PERFORM] FW: Index usage\n\n\nFrom: \"BBI Edwin Punzalan\" <[email protected]>\n\n\n> \n> Hi, what do you mean by increasing the statistics on the date column?\n\nalter table chatlogs alter column date set statistics 300; analyze chatlogs;\n\n> > > Our version is 7.2.1\n> > \n> > upgrade time ?\n> \n> We never had any upgrade on it.\n\n7.2 is a bit dated now that 8.0 is in beta\n\nif you want to stay with 7.2, you should at least upgrade\nto the latest point release (7.2.6 ?), as several serious bugs have been\nfixed\n\ngnari\n\n", "msg_date": "Wed, 1 Dec 2004 10:33:15 +0800", "msg_from": "\"BBI Edwin Punzalan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "If it's any help, i just ran this test on 7.4.6, my table has about 7000000 \nrows and the index is an integer.\n\nThe item id ranges from 1 to 20000.\n\nAs you can see from the following plans, the optimizer changed it's plan \ndepending on the value of the item id condition, and will use an index when \nit determines that the number of values that will be returned is a low % of \nthe total table size.\n\nThe item_id is an integer, but It looked like you are using a character \nfield to store date information. Also, the dates you entered in your test \ncase seem to be in the format DD/MM/YY which won't be amenable to useful \ncomparative searching (I didn't read any of the earlier posts so if that \nisn't the case, just ignore this). 
If this is the case, try storing the data \nin a date column and see what happens then.\n\nregards\nIain\n\ntest=# explain analyse select * from bigtable where item_id <= 1000;\n QUERY \nPLAN\n\n-------------------------------------------------------------------------------------------------------------------\n--------------------------------------------\n Index Scan using d_bigtable_idx2 on bigtable (cost=0.00..118753.57 \nrows=59553 width=80) (actual\ntime=0.069..704.401 rows=58102 loops=1)\n Index Cond: ((item_id)::integer <= 1000)\n Total runtime: 740.786 ms\n(3 rows)\n\n\ntest=# explain analyse select * from bigtable where item_id <= 100000000;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------\n---------------\n Seq Scan on d_hi_mise_item_uri (cost=0.00..194285.15 rows=7140589 \nwidth=80) (actual time=0.027..18599.032 rows=71\n14844 loops=1)\n Filter: ((item_id)::integer <= 100000000)\n Total runtime: 23024.986 ms\n\n----- Original Message ----- \nFrom: \"BBI Edwin Punzalan\" <[email protected]>\nTo: \"'gnari'\" <[email protected]>; <[email protected]>\nSent: Wednesday, December 01, 2004 11:33 AM\nSubject: Re: [PERFORM] FW: Index usage\n\n\n>\n> Thanks but whatever it does, it didn't work. :D\n>\n> Do you think upgrading will fix this problem?\n>\n> =========================\n> db=# alter table chatlogs alter column date set statistics 300;\n> ALTER\n> db=# analyze chatlogs;\n> ANALYZE\n> db=# explain analyze select * from chatlogs where date >= '12/1/04';\n> NOTICE: QUERY PLAN:\n>\n> Index Scan using chatlogs_type_idx on chatlogs (cost=0.00..6053.61\n> rows=3357 width=212) (actual time=22.14..138.53 rows=1312\n> loops=1)\n> Total runtime: 139.42 msec\n>\n> EXPLAIN\n> morphTv=# explain analyze select * from chatlogs where date >= '11/03/04';\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on chatlogs (cost=0.00..27252.86 rows=271882 width=212) (actual\n> time=12.24..13419.36 rows=257137 loops=1)\n> Total runtime: 13573.70 msec\n>\n> EXPLAIN\n> =========================\n>\n>\n>\n> -----Original Message-----\n> From: gnari [mailto:[email protected]]\n> Sent: Wednesday, December 01, 2004 10:08 AM\n> To: BBI Edwin Punzalan; [email protected]\n> Subject: Re: [PERFORM] FW: Index usage\n>\n>\n> From: \"BBI Edwin Punzalan\" <[email protected]>\n>\n>\n>>\n>> Hi, what do you mean by increasing the statistics on the date column?\n>\n> alter table chatlogs alter column date set statistics 300; analyze \n> chatlogs;\n>\n>> > > Our version is 7.2.1\n>> >\n>> > upgrade time ?\n>>\n>> We never had any upgrade on it.\n>\n> 7.2 is a bit dated now that 8.0 is in beta\n>\n> if you want to stay with 7.2, you should at least upgrade\n> to the latest point release (7.2.6 ?), as several serious bugs have been\n> fixed\n>\n> gnari\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected] \n\n", "msg_date": "Wed, 1 Dec 2004 13:00:24 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "\nHi. Thanks for your reply. The date column data type is date already. 
:D\n\n-----Original Message-----\nFrom: Iain [mailto:[email protected]] \nSent: Wednesday, December 01, 2004 12:00 PM\nTo: BBI Edwin Punzalan; 'gnari'; [email protected]\nSubject: Re: [PERFORM] FW: Index usage\n\n\nIf it's any help, i just ran this test on 7.4.6, my table has about 7000000 \nrows and the index is an integer.\n\nThe item id ranges from 1 to 20000.\n\nAs you can see from the following plans, the optimizer changed it's plan \ndepending on the value of the item id condition, and will use an index when \nit determines that the number of values that will be returned is a low % of \nthe total table size.\n\nThe item_id is an integer, but It looked like you are using a character \nfield to store date information. Also, the dates you entered in your test \ncase seem to be in the format DD/MM/YY which won't be amenable to useful \ncomparative searching (I didn't read any of the earlier posts so if that \nisn't the case, just ignore this). If this is the case, try storing the data\n\nin a date column and see what happens then.\n\nregards\nIain\n\ntest=# explain analyse select * from bigtable where item_id <= 1000;\n \nQUERY \nPLAN\n\n----------------------------------------------------------------------------\n---------------------------------------\n--------------------------------------------\n Index Scan using d_bigtable_idx2 on bigtable (cost=0.00..118753.57 \nrows=59553 width=80) (actual\ntime=0.069..704.401 rows=58102 loops=1)\n Index Cond: ((item_id)::integer <= 1000)\n Total runtime: 740.786 ms\n(3 rows)\n\n\ntest=# explain analyse select * from bigtable where item_id <= 100000000;\n QUERY PLAN\n\n----------------------------------------------------------------------------\n---------------------------------------\n---------------\n Seq Scan on d_hi_mise_item_uri (cost=0.00..194285.15 rows=7140589 \nwidth=80) (actual time=0.027..18599.032 rows=71\n14844 loops=1)\n Filter: ((item_id)::integer <= 100000000)\n Total runtime: 23024.986 ms\n\n----- Original Message ----- \nFrom: \"BBI Edwin Punzalan\" <[email protected]>\nTo: \"'gnari'\" <[email protected]>; <[email protected]>\nSent: Wednesday, December 01, 2004 11:33 AM\nSubject: Re: [PERFORM] FW: Index usage\n\n\n>\n> Thanks but whatever it does, it didn't work. 
:D\n>\n> Do you think upgrading will fix this problem?\n>\n> =========================\n> db=# alter table chatlogs alter column date set statistics 300; ALTER\n> db=# analyze chatlogs;\n> ANALYZE\n> db=# explain analyze select * from chatlogs where date >= '12/1/04';\n> NOTICE: QUERY PLAN:\n>\n> Index Scan using chatlogs_type_idx on chatlogs (cost=0.00..6053.61 \n> rows=3357 width=212) (actual time=22.14..138.53 rows=1312\n> loops=1)\n> Total runtime: 139.42 msec\n>\n> EXPLAIN\n> morphTv=# explain analyze select * from chatlogs where date >= \n> '11/03/04';\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on chatlogs (cost=0.00..27252.86 rows=271882 width=212) \n> (actual time=12.24..13419.36 rows=257137 loops=1) Total runtime: \n> 13573.70 msec\n>\n> EXPLAIN\n> =========================\n>\n>\n>\n> -----Original Message-----\n> From: gnari [mailto:[email protected]]\n> Sent: Wednesday, December 01, 2004 10:08 AM\n> To: BBI Edwin Punzalan; [email protected]\n> Subject: Re: [PERFORM] FW: Index usage\n>\n>\n> From: \"BBI Edwin Punzalan\" <[email protected]>\n>\n>\n>>\n>> Hi, what do you mean by increasing the statistics on the date column?\n>\n> alter table chatlogs alter column date set statistics 300; analyze\n> chatlogs;\n>\n>> > > Our version is 7.2.1\n>> >\n>> > upgrade time ?\n>>\n>> We never had any upgrade on it.\n>\n> 7.2 is a bit dated now that 8.0 is in beta\n>\n> if you want to stay with 7.2, you should at least upgrade\n> to the latest point release (7.2.6 ?), as several serious bugs have \n> been fixed\n>\n> gnari\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected] \n\n", "msg_date": "Wed, 1 Dec 2004 12:05:18 +0800", "msg_from": "\"BBI Edwin Punzalan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "Sorry, i can't check this easily as I don't have any date fields in my data \n(they all held has character strings - do as i say, not as i do) but maybe \nyou should cast or convert the string representation of the date to a date \nin the where clause. Postgres might be doing some implicit conversion but if \nit is, I'd expect it to use a YYYY-MM-DD format which is what I see here.\n\nSomething like ... WHERE date>= to_date('11/03/04','DD/MM/YY')\n\nregards\nIain\n----- Original Message ----- \nFrom: \"BBI Edwin Punzalan\" <[email protected]>\nTo: \"'Iain'\" <[email protected]>; \"'gnari'\" <[email protected]>; \n<[email protected]>\nSent: Wednesday, December 01, 2004 1:05 PM\nSubject: RE: [PERFORM] FW: Index usage\n\n\n>\n> Hi. Thanks for your reply. The date column data type is date already. :D\n>\n> -----Original Message-----\n> From: Iain [mailto:[email protected]]\n> Sent: Wednesday, December 01, 2004 12:00 PM\n> To: BBI Edwin Punzalan; 'gnari'; [email protected]\n> Subject: Re: [PERFORM] FW: Index usage\n>\n>\n> If it's any help, i just ran this test on 7.4.6, my table has about \n> 7000000\n> rows and the index is an integer.\n>\n> The item id ranges from 1 to 20000.\n>\n> As you can see from the following plans, the optimizer changed it's plan\n> depending on the value of the item id condition, and will use an index \n> when\n> it determines that the number of values that will be returned is a low % \n> of\n> the total table size.\n>\n> The item_id is an integer, but It looked like you are using a character\n> field to store date information. 
Also, the dates you entered in your test\n> case seem to be in the format DD/MM/YY which won't be amenable to useful\n> comparative searching (I didn't read any of the earlier posts so if that\n> isn't the case, just ignore this). If this is the case, try storing the \n> data\n>\n> in a date column and see what happens then.\n>\n> regards\n> Iain\n>\n> test=# explain analyse select * from bigtable where item_id <= 1000;\n>\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------\n> ---------------------------------------\n> --------------------------------------------\n> Index Scan using d_bigtable_idx2 on bigtable (cost=0.00..118753.57\n> rows=59553 width=80) (actual\n> time=0.069..704.401 rows=58102 loops=1)\n> Index Cond: ((item_id)::integer <= 1000)\n> Total runtime: 740.786 ms\n> (3 rows)\n>\n>\n> test=# explain analyse select * from bigtable where item_id <= 100000000;\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------\n> ---------------------------------------\n> ---------------\n> Seq Scan on d_hi_mise_item_uri (cost=0.00..194285.15 rows=7140589\n> width=80) (actual time=0.027..18599.032 rows=71\n> 14844 loops=1)\n> Filter: ((item_id)::integer <= 100000000)\n> Total runtime: 23024.986 ms\n>\n> ----- Original Message ----- \n> From: \"BBI Edwin Punzalan\" <[email protected]>\n> To: \"'gnari'\" <[email protected]>; <[email protected]>\n> Sent: Wednesday, December 01, 2004 11:33 AM\n> Subject: Re: [PERFORM] FW: Index usage\n>\n>\n>>\n>> Thanks but whatever it does, it didn't work. :D\n>>\n>> Do you think upgrading will fix this problem?\n>>\n>> =========================\n>> db=# alter table chatlogs alter column date set statistics 300; ALTER\n>> db=# analyze chatlogs;\n>> ANALYZE\n>> db=# explain analyze select * from chatlogs where date >= '12/1/04';\n>> NOTICE: QUERY PLAN:\n>>\n>> Index Scan using chatlogs_type_idx on chatlogs (cost=0.00..6053.61\n>> rows=3357 width=212) (actual time=22.14..138.53 rows=1312\n>> loops=1)\n>> Total runtime: 139.42 msec\n>>\n>> EXPLAIN\n>> morphTv=# explain analyze select * from chatlogs where date >=\n>> '11/03/04';\n>> NOTICE: QUERY PLAN:\n>>\n>> Seq Scan on chatlogs (cost=0.00..27252.86 rows=271882 width=212)\n>> (actual time=12.24..13419.36 rows=257137 loops=1) Total runtime:\n>> 13573.70 msec\n>>\n>> EXPLAIN\n>> =========================\n>>\n>>\n>>\n>> -----Original Message-----\n>> From: gnari [mailto:[email protected]]\n>> Sent: Wednesday, December 01, 2004 10:08 AM\n>> To: BBI Edwin Punzalan; [email protected]\n>> Subject: Re: [PERFORM] FW: Index usage\n>>\n>>\n>> From: \"BBI Edwin Punzalan\" <[email protected]>\n>>\n>>\n>>>\n>>> Hi, what do you mean by increasing the statistics on the date column?\n>>\n>> alter table chatlogs alter column date set statistics 300; analyze\n>> chatlogs;\n>>\n>>> > > Our version is 7.2.1\n>>> >\n>>> > upgrade time ?\n>>>\n>>> We never had any upgrade on it.\n>>\n>> 7.2 is a bit dated now that 8.0 is in beta\n>>\n>> if you want to stay with 7.2, you should at least upgrade\n>> to the latest point release (7.2.6 ?), as several serious bugs have\n>> been fixed\n>>\n>> gnari\n>>\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 1: subscribe and unsubscribe commands go to [email protected] \n\n", "msg_date": "Wed, 1 Dec 2004 13:18:50 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Index usage" }, { "msg_contents": "From: \"BBI Edwin 
Punzalan\" <[email protected]>\n\n\n> Thanks but whatever it does, it didn't work. :\n\n> Do you think upgrading will fix this problem?\n\nare you sure there is a problem here to solve ?\n\n> Seq Scan on chatlogs (cost=0.00..27252.86 rows=271882 width=212) (actual\n> time=12.24..13419.36 rows=257137 loops=1)\n\nyou see that the actual rowcount matches the estimate,\nso the planner is not being misled by wrong statistics.\nyou realize that an indexscan is not allways faster than\nsequential scan unless the number of rows are a small\npercentage of the total number of rows\n\ndid you try to add a 'order by date' clause to your query ?\n\ngnari\n\n\n\n", "msg_date": "Wed, 1 Dec 2004 08:24:59 -0000", "msg_from": "\"gnari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Index usage" } ]
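gnari's point is easy to verify: the wider filter matches roughly 257,000 of the table's 538,696 rows — about half — so the sequential scan is very likely the correct plan, and the quickest way to confirm that is to force the alternative and time it. A purely diagnostic sketch; enable_seqscan is a session-local planner switch and not something to leave turned off in production:

    -- Temporarily discourage the sequential scan so the planner picks the index.
    SET enable_seqscan TO off;
    EXPLAIN ANALYZE
    SELECT * FROM chatlogs
    WHERE  date >= '2004-11-03';   -- same predicate as above, ISO date literal
    SET enable_seqscan TO on;      -- restore the default afterwards

If the forced index scan comes back slower than the roughly 13.5-second sequential scan shown above, the planner was right all along, and the practical fix is to narrow the date range (or select only the columns actually needed) rather than to push the query onto the index.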
[ { "msg_contents": "\nHi,\n\nI have installed the dspam filter\n(http://www.nuclearelephant.com/projects/dspam) on our mail server\n(RedHat 7.3 Linux with sendmail 8.13 and procmail). I have ~300 users\nwith a quite low traffic of 4000 messages/day. So it's a quite common\nplatform/environment, nothing spectacular.\n\nFirst time(s) I tried the Postgres interface that was already installed\nfor other applications. Whenever I begin to train and/or filter\nmessages throug dspam the performance is incredibly bad. First messages\nare ok but soon the filter time begins to increase to about 30 seconds\nor more!\n\n...so I looked for some optimization both for the linux kernel and the\npostgres server. Nothing has work for me. I always have the same\nbehavior. For isolation purposes I started using another server just to\nhold the dspam database and nothing else. No matter what I do: postgres\ngets slower and slower with each new message fed or filtered. \n\nSeveral strategies have failed: newest RPMs from postgresql.org,\npg_autovacuum, etc.\n\nI finally tried the MySQL driver.\n\nI have started using this tool right now for dspam, so I am a newcomer\nin MySQL.\n\nThe result: after some preparation in configuring some parameters for\nmysqld (with the \"QuickStart\" Guide from mysql.com) all works fine!\n\nIt's incredible! the same servers, the same messages, the same dspam\ncompilation (well each one with the corresponding\n--with-storage-driver=*sql_drv). Postgres is getting worst than\n30s/message and MySQL process the same in less than a second.\n\nI can surrender the Postgres server by just corpus-feeding one single\nshort message to each user (it takes hours to process 300 users!).\n\nOn the other hand, MySQL only takes a few minutes to process the same\nbatch.\n\nI do not want to make flame over Postgres (I have always prefered it for\nits capabilities) but I am absolutely impressed by MySQL (I have seen\nthe light!)\n\nPlease, could anyone explain me this difference?\nIs Postgres that bad?\nIs MySQL that good?\nAm I the only one to have observed this behavior?\n\nTIA.\n\nCheers,\n\n----------------------------------------------------------------\nEvilio Jose del Rio Silvan Centre Mediterrani d'Investigacions\[email protected] Marines i Ambientals\n\"Microsoft sells you Windows, Linux gives you the whole house\" - Anonymous\n\n", "msg_date": "Wed, 24 Nov 2004 14:14:18 +0100", "msg_from": "Evilio del Rio <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres vs. MySQL" }, { "msg_contents": "On Wed, Nov 24, 2004 at 02:14:18PM +0100, Evilio del Rio wrote:\n> It's incredible! the same servers, the same messages, the same dspam\n> compilation (well each one with the corresponding\n> --with-storage-driver=*sql_drv). Postgres is getting worst than\n> 30s/message and MySQL process the same in less than a second.\n\nAFAIK dspam is heavily optimized for MySQL and not optimized for PostgreSQL\nat all; I believe there would be significant performance boosts available \nby \"fixing\" dspam.\n\nExample queries that are slow, as well as table schemas, would probably help\na lot in tracking down the problems.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 24 Nov 2004 15:16:15 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. 
MySQL" }, { "msg_contents": "Evilio del Rio wrote:\n\n> Please, could anyone explain me this difference?\n> Is Postgres that bad?\n> Is MySQL that good?\n> Am I the only one to have observed this behavior?\n\nDo you have any record of configuration, system hardware, usage \npatterns, queries executed?\n\nIf you can tell us what you tried (and why) then we might be able to \nhelp, otherwise there's not much information here.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 24 Nov 2004 14:27:06 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. MySQL" }, { "msg_contents": "\nAs for performance, lots of others will probably volunteer tips and \ntechniques. In my experience, properly written and tuned applications will \nshow only minor speed differences. I have seen several open-source apps \nthat \"support postgres\" but are not well tested on it. Query optimization \ncan cause orders of magnitude performance differences. It sounds maybe \ndspam is in this bucket?\n\n\n>\n> Please, could anyone explain me this difference?\n> Is Postgres that bad?\n> Is MySQL that good?\n> Am I the only one to have observed this behavior?\n\nI made a little chart about these about a year ago:\n\nhttp://www.tikipro.org/wiki/index.php?page=DatabaseComparison\n\nIf speed is what you need, and data integrity / safety is not, then MySQL \nmay be a good choice. (Aggregate statistics tables and other such \ncalculated denormalizations).\n\nIMHO, if all you need is dpsam running *now*, then I'd say MySQL might be \ngood choice. If you ever need to run a DB application where data integrity \nis mission critical, then postgres is the top of my list.\n\n\n[ \\ /\n[ >X< Christian Fowler | spider AT viovio.com\n[ / \\ http://www.viovio.com | http://www.tikipro.org\n", "msg_date": "Wed, 24 Nov 2004 09:57:52 -0500 (EST)", "msg_from": "Christian Fowler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. MySQL" }, { "msg_contents": "Evilio del Rio wrote:\n\n>Hi,\n>\n>I have installed the dspam filter\n>(http://www.nuclearelephant.com/projects/dspam) on our mail server\n>(RedHat 7.3 Linux with sendmail 8.13 and procmail). I have ~300 users\n>with a quite low traffic of 4000 messages/day. So it's a quite common\n>platform/environment, nothing spectacular.\n>\n> \n>\nThe problem is definately dspam. We have been through their code.\nThe new version is much, much better than the older one but I am sure\nthere is more work to be done.\n\nThe first version we installed suffered from a well known problem:\n\nIt would use smallint/bigint but would not cast or quote the\nwhere clauses and thus PostgreSQL would never use the indexes.\n\nIt was also missing several indexes on appropriate columns.\n\nWe offered some advice and we know that some of it was taken but\nwe don't know which.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Wed, 24 Nov 2004 07:57:00 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. MySQL" }, { "msg_contents": "On Wed, Nov 24, 2004 at 09:57:52AM -0500, Christian Fowler wrote:\n> As for performance, lots of others will probably volunteer tips and \n> techniques. 
In my experience, properly written and tuned applications will \n> show only minor speed differences. I have seen several open-source apps \n> that \"support postgres\" but are not well tested on it. Query optimization \n> can cause orders of magnitude performance differences.\n\nDefinitely. My favourite is Request Tracker (we use 2.x, although 3.x is the\nlatest version), which used something like 5-600 queries (all seqscans since\nthe database schema only had an ordinary index on the varchar fields in\nquestion, and the queries were automatically searching on LOWER(field) to\nemulate MySQL's case-insensitivity on varchar fields) for _every_ page shown.\nNeedless to say, the web interface was dog slow -- some index manipulation\nand a few bugfixes (they had some kind of cache layer which would eliminate\n98% of the queries, but for some reason was broken for non-MySQL databases)\nlater, and we were down to 3-4 index scans, a few orders of magnitude faster.\n:-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 24 Nov 2004 17:26:14 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. MySQL" }, { "msg_contents": "On Wed, Nov 24, 2004 at 02:14:18PM +0100, Evilio del Rio wrote:\n> I have installed the dspam filter\n> (http://www.nuclearelephant.com/projects/dspam) on our mail server\n> (RedHat 7.3 Linux with sendmail 8.13 and procmail). I have ~300 users\n> with a quite low traffic of 4000 messages/day. So it's a quite common\n> platform/environment, nothing spectacular.\n\nWe just had a case just like this on #postgresql. The (somewhat surprising)\nsolution was increasing the statistics target on the \"token\" column to\nsomething like 200, which makes the planner choose an index scan instead of a\nsequential scan.\n\nFor the people who did not follow the case: The culprit is a query like\n\n SELECT * FROM table WHERE token IN ('346369873476346', '4376376034', ...)\n\n(token is a numeric(20,0)) With one entry in the IN (), the cost of an index\nscan was estimated to 4.77; with ten entries, it was about 48, but with 574\nentries the estimated cost was 513565 (!!), making the planner prefer an\nindex scan to 574 consecutive index scans. Upping the statistics target made\nthe planner estimate the cost to about ~4000, and thus select the index scan,\nwhich was two orders of magnitude faster.\n\nBTW, this case was with PostgreSQL 7.4.6, not 7.3 as the poster here is\nreporting.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 25 Nov 2004 02:18:23 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. MySQL" }, { "msg_contents": "I did some work on RT wrt Postgres for a company and found that their \nwas lots of room for improvement\nparticularly if you are linking requests. The latest RT code hopefully \nhas fixes as a result of this work.\n\nDave\n\nSteinar H. Gunderson wrote:\n\n>On Wed, Nov 24, 2004 at 09:57:52AM -0500, Christian Fowler wrote:\n> \n>\n>>As for performance, lots of others will probably volunteer tips and \n>>techniques. In my experience, properly written and tuned applications will \n>>show only minor speed differences. I have seen several open-source apps \n>>that \"support postgres\" but are not well tested on it. Query optimization \n>>can cause orders of magnitude performance differences.\n>> \n>>\n>\n>Definitely. 
My favourite is Request Tracker (we use 2.x, although 3.x is the\n>latest version), which used something like 5-600 queries (all seqscans since\n>the database schema only had an ordinary index on the varchar fields in\n>question, and the queries were automatically searching on LOWER(field) to\n>emulate MySQL's case-insensitivity on varchar fields) for _every_ page shown.\n>Needless to say, the web interface was dog slow -- some index manipulation\n>and a few bugfixes (they had some kind of cache layer which would eliminate\n>98% of the queries, but for some reason was broken for non-MySQL databases)\n>later, and we were down to 3-4 index scans, a few orders of magnitude faster.\n>:-)\n>\n>/* Steinar */\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Thu, 25 Nov 2004 07:38:23 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. MySQL" }, { "msg_contents": "On Wed, 2004-11-24 at 14:14 +0100, Evilio del Rio wrote:\n> Hi,\n> \n> I have installed the dspam filter\n> (http://www.nuclearelephant.com/projects/dspam) on our mail server\n> (RedHat 7.3 Linux with sendmail 8.13 and procmail). I have ~300 users\n> with a quite low traffic of 4000 messages/day. So it's a quite common\n> platform/environment, nothing spectacular.\n\nI am using DSpam with PostgreSQL here. I have a daily job that cleans\nthe DSpam database up, as follows:\n\nDELETE FROM dspam_token_data\n WHERE (innocent_hits*2) + spam_hits < 5\n AND CURRENT_DATE - last_hit > 60;\n\nDELETE FROM dspam_token_data\n WHERE innocent_hits = 1\n AND CURRENT_DATE - last_hit > 30;\n\nDELETE FROM dspam_token_data\n WHERE CURRENT_DATE - last_hit > 180;\n\nDELETE FROM dspam_signature_data\n WHERE CURRENT_DATE - created_on > 14;\n\nVACUUM dspam_token_data;\n\nVACUUM dspam_signature_data;\n\n\n\nI also occasionally do a \"VACUUM FULL ANALYZE;\" on the database as well.\n\n\nIn all honesty though, I think that MySQL is better suited to DSpam than\nPostgreSQL is.\n\n\n> Please, could anyone explain me this difference?\n> Is Postgres that bad?\n> Is MySQL that good?\n> Am I the only one to have observed this behavior?\n\nI believe that what DSpam does that is not well-catered for in the way\nPostgreSQL operates, is that it does very frequent updates to rows in\n(eventually) quite large tables. In PostgreSQL the UPDATE will result\ninternally in a new record being written, with the old record being\nmarked as deleted. That old record won't be re-used until after a\nVACUUM has run, and this means that the on-disk tables will have a lot\nof dead rows in them quite quickly.\n\nThe reason that PostgreSQL operates this way, is a direct result of the\nway transactional support is implemented, and it may well change in a\nversion or two. 
It's got better over the last few versions, with things\nlike pg_autovacuum, but that approach still doesn't suit some types of\ndatabase updating.\n\nCheers,\n\t\t\t\t\tAndrew.\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n These PRESERVES should be FORCE-FED to PENTAGON OFFICIALS!!\n-------------------------------------------------------------------------", "msg_date": "Fri, 26 Nov 2004 14:37:12 +1300", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. DSpam" }, { "msg_contents": "On Fri, 2004-11-26 at 14:37 +1300, Andrew McMillan wrote:\n> In PostgreSQL the UPDATE will result\n> internally in a new record being written, with the old record being\n> marked as deleted. That old record won't be re-used until after a\n> VACUUM has run, and this means that the on-disk tables will have a lot\n> of dead rows in them quite quickly.\n\nNot necessarily: yes, you need a VACUUM to begin reusing the space\nconsumed by expired tuples, but that does not mean \"tables will have a\nlot of dead rows in them quite quickly\". VACUUM does not block\nconcurrent database activity, so you can run it as frequently as you'd\nlike (and as your database workload requires). There is a tradeoff\nbetween the space consumed by expired tuple versions and the I/O\nrequired to do a VACUUM -- it's up to the PG admin to decide what the\nright balance for their database is (pg_autovacuum et al. can help make\nthis decision).\n\n> The reason that PostgreSQL operates this way, is a direct result of the\n> way transactional support is implemented, and it may well change in a\n> version or two.\n\nI doubt it.\n\n-Neil\n\n\n", "msg_date": "Fri, 26 Nov 2004 14:25:25 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. DSpam" }, { "msg_contents": "On Wed, Nov 24, 2004 at 02:14:18PM +0100, Evilio del Rio wrote:\n> I have installed the dspam filter\n> (http://www.nuclearelephant.com/projects/dspam) on our mail server\n> (RedHat 7.3 Linux with sendmail 8.13 and procmail). I have ~300 users\n> with a quite low traffic of 4000 messages/day. So it's a quite common\n> platform/environment, nothing spectacular.\n> \n> First time(s) I tried the Postgres interface that was already installed\n> for other applications. Whenever I begin to train and/or filter\n> messages throug dspam the performance is incredibly bad. First messages\n> are ok but soon the filter time begins to increase to about 30 seconds\n> or more!\n> \n> ...so I looked for some optimization both for the linux kernel and the\n> postgres server. Nothing has work for me. I always have the same\n> behavior. For isolation purposes I started using another server just to\n> hold the dspam database and nothing else. No matter what I do: postgres\n> gets slower and slower with each new message fed or filtered. \n\nI know *somewhere* I recently read something indicating a critical\nconfiguration change for DSPAM + Postgres, but don't think I've seen it\nmentioned on this list. Possibly it is in the UPGRADING instructions\nfor 3.2.1, or in a README file there. 
At any rate, it mentioned that\nit was essential to make some change to the table layout used by previous\nversions of DSPAM, and then Postgres would run many times faster.\n\nUnfortunately I no longer have 3.2.1 installed on my system, so I can't\ntell you if it was in there or somewhere else.\n\n -- Clifton\n\n-- \n Clifton Royston -- [email protected] \n Tiki Technologies Lead Programmer/Software Architect\nDid you ever fly a kite in bed? Did you ever walk with ten cats on your head?\n Did you ever milk this kind of cow? Well we can do it. We know how.\nIf you never did, you should. These things are fun, and fun is good.\n -- Dr. Seuss\n", "msg_date": "Fri, 26 Nov 2004 10:35:31 -1000", "msg_from": "Clifton Royston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [dspam-users] Postgres vs. MySQL" }, { "msg_contents": "I posted about this a couple days ago on dspam-dev...\n\nI am using DSpam with PostgreSQL, and like you discovered the horrible\nperformance. The reason is because the default PostgreSQL query planner\nsettings determine that a sequence scan will be more efficient than an\nindex scan, which is wrong. To correct this behavior, adjust the query\nplanner settings for the appropriate table/column with this command:\n\nalter table \"dspam_token_data\" alter \"token\" set statistics 200; analyze;\n\nLet me know if it help you. It worked wonders for me.\n\n-- \nCasey Allen Shobe\[email protected]\n\nOn Fri, November 26, 2004 12:35 pm, Clifton Royston said:\n> On Wed, Nov 24, 2004 at 02:14:18PM +0100, Evilio del Rio wrote:\n>> I have installed the dspam filter\n>> (http://www.nuclearelephant.com/projects/dspam) on our mail server\n>> (RedHat 7.3 Linux with sendmail 8.13 and procmail). I have ~300 users\n>> with a quite low traffic of 4000 messages/day. So it's a quite common\n>> platform/environment, nothing spectacular.\n>>\n>> First time(s) I tried the Postgres interface that was already installed\n>> for other applications. Whenever I begin to train and/or filter\n>> messages throug dspam the performance is incredibly bad. First messages\n>> are ok but soon the filter time begins to increase to about 30 seconds\n>> or more!\n>>\n>> ...so I looked for some optimization both for the linux kernel and the\n>> postgres server. Nothing has work for me. I always have the same\n>> behavior. For isolation purposes I started using another server just to\n>> hold the dspam database and nothing else. No matter what I do: postgres\n>> gets slower and slower with each new message fed or filtered.\n>\n> I know *somewhere* I recently read something indicating a critical\n> configuration change for DSPAM + Postgres, but don't think I've seen it\n> mentioned on this list. Possibly it is in the UPGRADING instructions\n> for 3.2.1, or in a README file there. At any rate, it mentioned that\n> it was essential to make some change to the table layout used by previous\n> versions of DSPAM, and then Postgres would run many times faster.\n>\n> Unfortunately I no longer have 3.2.1 installed on my system, so I can't\n> tell you if it was in there or somewhere else.\n>\n> -- Clifton\n>\n> --\n> Clifton Royston -- [email protected]\n> Tiki Technologies Lead Programmer/Software Architect\n> Did you ever fly a kite in bed? Did you ever walk with ten cats on your\n> head?\n> Did you ever milk this kind of cow? Well we can do it. We know how.\n> If you never did, you should. 
These things are fun, and fun is good.\n> -- Dr.\n> Seuss\n>\n\n", "msg_date": "Fri, 26 Nov 2004 18:11:13 -0800 (PST)", "msg_from": "\"Casey Allen Shobe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [dspam-users] Postgres vs. MySQL" }, { "msg_contents": "Casey Allen Shobe wrote the following on 11/27/04 03:11 :\n\n>I posted about this a couple days ago on dspam-dev...\n>\n>I am using DSpam with PostgreSQL, and like you discovered the horrible\n>performance. The reason is because the default PostgreSQL query planner\n>settings determine that a sequence scan will be more efficient than an\n>index scan, which is wrong. To correct this behavior, adjust the query\n>planner settings for the appropriate table/column with this command:\n>\n>alter table \"dspam_token_data\" alter \"token\" set statistics 200; analyze;\n>\n>Let me know if it help you. It worked wonders for me.\n>\n> \n>\nIn tum mode, this could help too (I'm currently testing it) :\nCREATE INDEX id_token_data_sumhits ON dspam_token_data ((spam_hits + \ninnocent_hits));\n\nIndeed each UPDATE on dspam_token_data in TUM is done with :\nWHERE ... AND spam_hits + innocent_hits < 50\n", "msg_date": "Sat, 27 Nov 2004 11:14:30 +0100", "msg_from": "Lionel Bouton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [dspam-users] Postgres vs. MySQL" }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (\"Casey Allen Shobe\") wrote:\n> I posted about this a couple days ago on dspam-dev...\n>\n> I am using DSpam with PostgreSQL, and like you discovered the horrible\n> performance. The reason is because the default PostgreSQL query planner\n> settings determine that a sequence scan will be more efficient than an\n> index scan, which is wrong. To correct this behavior, adjust the query\n> planner settings for the appropriate table/column with this command:\n>\n> alter table \"dspam_token_data\" alter \"token\" set statistics 200; analyze;\n>\n> Let me know if it help you. It worked wonders for me.\n\nThat makes a great deal of sense; the number of tokens are likely to\nbe rather larger than 10, and are likely to be quite unevenly\ndistributed. That fits with the need you found to collect more\nstatistics on that column.\n\nOther cases where it seems plausible that it would be worthwhile to do\nthe same:\n\n alter table dspam_signature_data alter signature set statistics 200;\n alter table dspam_neural_data alter node set statistics 200;\n alter table dspam_neural_decisions alter signature set statistics 200;\n\nLionel's suggestion of having a functional index on dspam_token_data\n(innocent_hits + spam_hits) also seems likely to be helpful. Along\nwith that, it might prove necessary to alter stats on dspam_token_data\nthus:\n\n alter table dspam_token_data alter innocent_hits set statistics 200;\n alter table dspam_token_data alter spam_hits set statistics 200;\n\nNone of these changes are likely to make things materially worse; if\nthey do help, they'll help rather a lot.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"gmail.com\")\nhttp://www.ntlug.org/~cbbrowne/nonrdbms.html\nRules of the Evil Overlord #112. \"I will not rely entirely upon\n\"totally reliable\" spells that can be neutralized by relatively\ninconspicuous talismans.\" <http://www.eviloverlord.com/>\n", "msg_date": "Sat, 27 Nov 2004 13:43:15 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [dspam-users] Postgres vs. 
MySQL" }, { "msg_contents": "FWIW, those queries won't be able to use an index. A better WHERE clause\nwould be:\n\nAND last_hit < CURRENT_DATE - 60\n\nOn Fri, Nov 26, 2004 at 02:37:12PM +1300, Andrew McMillan wrote:\n> On Wed, 2004-11-24 at 14:14 +0100, Evilio del Rio wrote:\n> > Hi,\n> > \n> > I have installed the dspam filter\n> > (http://www.nuclearelephant.com/projects/dspam) on our mail server\n> > (RedHat 7.3 Linux with sendmail 8.13 and procmail). I have ~300 users\n> > with a quite low traffic of 4000 messages/day. So it's a quite common\n> > platform/environment, nothing spectacular.\n> \n> I am using DSpam with PostgreSQL here. I have a daily job that cleans\n> the DSpam database up, as follows:\n> \n> DELETE FROM dspam_token_data\n> WHERE (innocent_hits*2) + spam_hits < 5\n> AND CURRENT_DATE - last_hit > 60;\n> \n> DELETE FROM dspam_token_data\n> WHERE innocent_hits = 1\n> AND CURRENT_DATE - last_hit > 30;\n> \n> DELETE FROM dspam_token_data\n> WHERE CURRENT_DATE - last_hit > 180;\n> \n> DELETE FROM dspam_signature_data\n> WHERE CURRENT_DATE - created_on > 14;\n> \n> VACUUM dspam_token_data;\n> \n> VACUUM dspam_signature_data;\n> \n> \n> \n> I also occasionally do a \"VACUUM FULL ANALYZE;\" on the database as well.\n> \n> \n> In all honesty though, I think that MySQL is better suited to DSpam than\n> PostgreSQL is.\n> \n> \n> > Please, could anyone explain me this difference?\n> > Is Postgres that bad?\n> > Is MySQL that good?\n> > Am I the only one to have observed this behavior?\n> \n> I believe that what DSpam does that is not well-catered for in the way\n> PostgreSQL operates, is that it does very frequent updates to rows in\n> (eventually) quite large tables. In PostgreSQL the UPDATE will result\n> internally in a new record being written, with the old record being\n> marked as deleted. That old record won't be re-used until after a\n> VACUUM has run, and this means that the on-disk tables will have a lot\n> of dead rows in them quite quickly.\n> \n> The reason that PostgreSQL operates this way, is a direct result of the\n> way transactional support is implemented, and it may well change in a\n> version or two. It's got better over the last few versions, with things\n> like pg_autovacuum, but that approach still doesn't suit some types of\n> database updating.\n> \n> Cheers,\n> \t\t\t\t\tAndrew.\n> -------------------------------------------------------------------------\n> Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\n> WEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\n> DDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n> These PRESERVES should be FORCE-FED to PENTAGON OFFICIALS!!\n> -------------------------------------------------------------------------\n> \n\n\n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 29 Nov 2004 15:50:56 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres vs. DSpam" } ]
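
Pulling the thread's suggestions together, a maintenance pass over the stock DSPAM tables might look like the sketch below. It assumes the table and column names quoted above (dspam_token_data with token, spam_hits, innocent_hits and a date-typed last_hit; dspam_signature_data with created_on). The plain index on last_hit, and its name, is an assumption added here; the date comparisons are rewritten per Jim Nasby's note so an index can actually be used.

-- Finer-grained statistics on the hot column (Casey's suggestion), then refresh stats.
ALTER TABLE dspam_token_data ALTER COLUMN token SET STATISTICS 200;
ANALYZE dspam_token_data;

-- Expression index for TUM-mode updates filtering on spam_hits + innocent_hits < 50
-- (Lionel's suggestion).
CREATE INDEX id_token_data_sumhits
    ON dspam_token_data ((spam_hits + innocent_hits));

-- Assumed extra index so the purge below is not forced into full scans.
CREATE INDEX id_token_data_last_hit
    ON dspam_token_data (last_hit);

-- Andrew's purge, with the date tests written in indexable form.
DELETE FROM dspam_token_data
 WHERE (innocent_hits * 2) + spam_hits < 5
   AND last_hit < CURRENT_DATE - 60;

DELETE FROM dspam_token_data
 WHERE innocent_hits = 1
   AND last_hit < CURRENT_DATE - 30;

DELETE FROM dspam_token_data
 WHERE last_hit < CURRENT_DATE - 180;

DELETE FROM dspam_signature_data
 WHERE created_on < CURRENT_DATE - 14;

VACUUM ANALYZE dspam_token_data;
VACUUM ANALYZE dspam_signature_data;

Whether the deletes really use the new index depends on how much of the table qualifies; a purge that touches most rows is still better served by a sequential scan.
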
[ { "msg_contents": "I have a table with this index:\n\n create index ARTISTS_NAME on ARTISTS (\n lower(AR_NAME)\n );\n\nTe index is over a colum with this definition:\n\n AR_NAME VARCHAR(256) null,\n\nI want to optimize this query:\n\n select * from artists where lower(ar_name) like\nlower('a%') order by lower(ar_name) limit 20;\n\nI think the planner should use the index i have. But\nthe result of the explain command is:\n\n explain analyze select * from artists where\nlower(ar_name) like lower('a%') order by\nlower(ar_name) limit 20;\n\n \n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=20420.09..20420.14 rows=20 width=360)\n(actual time=2094.13..2094.19 rows=20 loops=1)\n -> Sort (cost=20420.09..20433.52 rows=5374\nwidth=360) (actual time=2094.13..2094.16 rows=21\nloops=1)\n Sort Key: lower((ar_name)::text)\n -> Index Scan using artists_name on artists \n(cost=0.00..19567.09 rows=5374 width=360) (actual\ntime=0.11..1391.97 rows=59047 loops=1)\n Index Cond: ((lower((ar_name)::text) >=\n'a'::text) AND (lower((ar_name)::text) < 'b'::text))\n Filter: (lower((ar_name)::text) ~~\n'a%'::text)\n Total runtime: 2098.62 msec\n(7 rows)\n\nThe \"ORDER BY\" clause is not using the index!. I don't\nknow why.\n\nI have the locale configured to C, and the index works\nwell with the \"like\" operator. \n\n�Could you help me? I am really lost. \n\n\n\t\t\n______________________________________________\nRenovamos el Correo Yahoo!: �100 MB GRATIS!\nNuevos servicios, m�s seguridad\nhttp://correo.yahoo.es\n", "msg_date": "Wed, 24 Nov 2004 18:36:59 +0100 (CET)", "msg_from": "sdfasdfas sdfasdfs <[email protected]>", "msg_from_op": true, "msg_subject": "\"Group By \" index usage" }, { "msg_contents": "sdfasdfas sdfasdfs <[email protected]> writes:\n> I have a table with this index:\n> create index ARTISTS_NAME on ARTISTS (\n> lower(AR_NAME)\n> );\n\n> Te index is over a colum with this definition:\n\n> AR_NAME VARCHAR(256) null,\n\n> I want to optimize this query:\n\n> select * from artists where lower(ar_name) like\n> lower('a%') order by lower(ar_name) limit 20;\n\n> I think the planner should use the index i have.\n\nUpdate to 7.4, or declare the column as TEXT instead of VARCHAR.\nOlder versions aren't very bright about situations involving\nimplicit coercions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Nov 2004 12:55:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Group By \" index usage " }, { "msg_contents": " Did you test with ILIKE instead of lower LIKE lower ?\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of sdfasdfas\nsdfasdfs\nSent: mercredi 24 novembre 2004 18:37\nTo: [email protected]\nSubject: [PERFORM] \"Group By \" index usage\n\nI have a table with this index:\n\n create index ARTISTS_NAME on ARTISTS (\n lower(AR_NAME)\n );\n\nTe index is over a colum with this definition:\n\n AR_NAME VARCHAR(256) null,\n\nI want to optimize this query:\n\n select * from artists where lower(ar_name) like\nlower('a%') order by lower(ar_name) limit 20;\n\nI think the planner should use the index i have. 
But the result of the\nexplain command is:\n\n explain analyze select * from artists where\nlower(ar_name) like lower('a%') order by\nlower(ar_name) limit 20;\n\n \n QUERY PLAN \n----------------------------------------------------------------------------\n---------------------------------------------------------------------\n Limit (cost=20420.09..20420.14 rows=20 width=360) (actual\ntime=2094.13..2094.19 rows=20 loops=1)\n -> Sort (cost=20420.09..20433.52 rows=5374\nwidth=360) (actual time=2094.13..2094.16 rows=21\nloops=1)\n Sort Key: lower((ar_name)::text)\n -> Index Scan using artists_name on artists\n(cost=0.00..19567.09 rows=5374 width=360) (actual\ntime=0.11..1391.97 rows=59047 loops=1)\n Index Cond: ((lower((ar_name)::text) >=\n'a'::text) AND (lower((ar_name)::text) < 'b'::text))\n Filter: (lower((ar_name)::text) ~~\n'a%'::text)\n Total runtime: 2098.62 msec\n(7 rows)\n\nThe \"ORDER BY\" clause is not using the index!. I don't know why.\n\nI have the locale configured to C, and the index works well with the \"like\"\noperator. \n\n¿Could you help me? I am really lost. \n\n\n\t\t\n______________________________________________\nRenovamos el Correo Yahoo!: ¡100 MB GRATIS!\nNuevos servicios, más seguridad\nhttp://correo.yahoo.es\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n", "msg_date": "Tue, 30 Nov 2004 17:04:58 +0100", "msg_from": "\"Alban Medici (NetCentrex)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Group By \" index usage" } ]
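
A minimal sketch of Tom Lane's suggestion, using the table from the question (illustrative only): on 7.3 the varchar column is implicitly coerced to text inside lower(), which keeps the planner from matching the expression to the index for the ORDER BY, so either move to 7.4+ or declare the column as text.

-- With the column declared text, lower(ar_name) matches the functional index
-- directly, so the LIKE filter and the ORDER BY ... LIMIT can both use it
-- (given the C locale mentioned above).
CREATE TABLE artists (
    ar_name text    -- text rather than varchar(256)
);

CREATE INDEX artists_name ON artists (lower(ar_name));

SELECT *
  FROM artists
 WHERE lower(ar_name) LIKE 'a%'
 ORDER BY lower(ar_name)
 LIMIT 20;

Whether the extra Sort node actually disappears still depends on the server version and locale; EXPLAIN ANALYZE on the upgraded setup will confirm it.
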
[ { "msg_contents": "We currently are utilizing postgresql on 2 servers with the following \nconfiguration:\n\n2 - 2.4 Ghz Xeon processors\n4GB ram\n4 36gb 10000rpm scsi drives configured for raid 10\n\nWe started out with one server and as we became IO bound we added the \nsecond. We are currently considering purchasing another identical server \nto go along with these. In addition to this we are also considering a scsi \nattached storage device in the 10 - 14 drive range configured for raid 10 \nin place of the onboard 4 drives we currently have. Daily about 30% of \nour data gets updated with about 2% new data. Our query load is about 60% \nreads and 40% writes currently. My question is what type of performance \ngains can I expect on average from swapping from 4 disk raid 10 to 14 disk \nraid 10? Could I expect to see 40 - 50% better throughput.\n\nThe servers listed above are the dell 2650's which have perc 3 \ncontrollers. I have seen on this list where they are know for not \nperforming well. So any suggestions for an attached scsi device would be \ngreatly appreciated. Also, any thoughts on fibre channel storage devices?\n\nThank You,\nBo Stewart\n\n\n", "msg_date": "Wed, 24 Nov 2004 11:58:06 -0600", "msg_from": "Bo Stewart <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware purchase question" }, { "msg_contents": "Bo,\n\n> 2 - 2.4 Ghz Xeon processors\n> 4GB ram\n> 4 36gb 10000rpm scsi drives configured for raid 10\n\nHopefully you've turned OFF hyperthreading?\n\n> gains can I expect on average from swapping from 4 disk raid 10 to 14 disk\n> raid 10? Could I expect to see 40 - 50% better throughput.\n\nThis is so dependant on application design that I can't possibly estimate. \nOne big gain area for you will be moving the database log (pg_xlog) to its \nown private disk resource (such as a raid-1 pair). In high-write \nenviroments, this can gain you 15% without changing anything else.\n\n> The servers listed above are the dell 2650's which have perc 3\n> controllers. I have seen on this list where they are know for not\n> performing well. So any suggestions for an attached scsi device would be\n> greatly appreciated. Also, any thoughts on fibre channel storage devices?\n\nThe 2650s don't perform well in a whole assortment of ways. This is why they \nare cheap.\n\nNetApps seem to be the current best in NAS/SAN storage, although many people \nlike EMC. Stay away from Apple's XRaid, which is not designed for \ndatabases.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 30 Nov 2004 16:26:42 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": ">>>>> \"BS\" == Bo Stewart <[email protected]> writes:\n\nBS> The servers listed above are the dell 2650's which have perc 3\nBS> controllers. I have seen on this list where they are know for not\nBS> performing well. So any suggestions for an attached scsi device would\nBS> be greatly appreciated. Also, any thoughts on fibre channel storage\nBS> devices?\n\nI have a 2450 and a 2650 both of which are totally sucking IO wise.\n\nThe 2650 has a PERC3 card (LSI based) and has one channel holding a\nmirrored pair for the pg_xlog and OS, and the other channel has 14\nU320 disks in a RAID5. If I'm lucky, I'll get 30MB/s out of the\ndisks. Normally it hovers at 5 or 6MB/s on the big RAID.\n\nI'm currently shopping for non-Dell hardware to replace it :-(\n\nHowever, I keep getting conflicting advice. 
My choices are along\nthese lines:\n\nDual Xeon 64bit with built-in 6-disk RAID10 or RAID5 (LSI RAID card)\nDual Opteron 64bit with built-in 6-disk RAID10 or RAID5 (LSI RAID card)\nDual Opteron 64bit with external RAID via fibre channel (eg, nstor)\n\nI'm sure any of these will whip the bottom off the Dell 2650, but\nwhich will be the fastest overall? No way to know without spending\nlots of money to test. :-(\n\nDell claims their new 2750 will be faster, but they've lost the battle\nalready, and won't commit to any performance numbers. Won't even give\nme a ballpark number.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 13 Dec 2004 11:33:47 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "> However, I keep getting conflicting advice. My choices are along\n> these lines:\n> \n> Dual Xeon 64bit with built-in 6-disk RAID10 or RAID5 (LSI RAID card)\n> Dual Opteron 64bit with built-in 6-disk RAID10 or RAID5 (LSI RAID card)\n> Dual Opteron 64bit with external RAID via fibre channel (eg, nstor)\n\nAn Opteron, properly tuned with PostgreSQL will always beat a Xeon\nin terms of raw cpu.\n\nRAID 10 will typically always outperform RAID 5 with the same HD config.\n\nFibre channel in general will always beat a normal (especially an LSI) raid.\n\nDell's suck for PostgreSQL.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n> \n> I'm sure any of these will whip the bottom off the Dell 2650, but\n> which will be the fastest overall? No way to know without spending\n> lots of money to test. :-(\n> \n> Dell claims their new 2750 will be faster, but they've lost the battle\n> already, and won't commit to any performance numbers. Won't even give\n> me a ballpark number.\n> \n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Mon, 13 Dec 2004 09:23:13 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "Vivek,\n\n> Dual Xeon 64bit with built-in 6-disk RAID10 or RAID5 (LSI RAID card)\n> Dual Opteron 64bit with built-in 6-disk RAID10 or RAID5 (LSI RAID card)\n> Dual Opteron 64bit with external RAID via fibre channel (eg, nstor)\n\nOpteron over Xeon, no question. Not only are the Opterons \nreal-world-faster, they are less severely affected by the CS bug.\n\n> I'm sure any of these will whip the bottom off the Dell 2650, but\n> which will be the fastest overall? No way to know without spending\n> lots of money to test. :-(\n\nThe SAN is going to be faster with a good SAN. That being said, I understand \nthat \"a good SAN\" is something like a $30,000 NetApp; the less expensive \nSANs/NASes don't seem to be more than an external drive enclosure with a raid \nchip (e.g. Apple XRaid). But we saw even a less expensive/slower EMC \nmachine improve performance just moving the pg_xlog off of the local PERC \nRAID 5 and onto the SAN. 
So this is probably a good way to go if you can \nafford it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 13 Dec 2004 10:31:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "Joshua D. Drake wrote:\n> \n> An Opteron, properly tuned with PostgreSQL will always beat a Xeon\n> in terms of raw cpu.\n> \n> RAID 10 will typically always outperform RAID 5 with the same HD config.\n> \n> Fibre channel in general will always beat a normal (especially an LSI) \n> raid.\n> \n> Dell's suck for PostgreSQL.\n\nDoes anyone have any OS recommendations/experiences for PostgreSQL on \nOpteron?\n\nThanks,\nAndrew\n", "msg_date": "Tue, 14 Dec 2004 09:33:55 +0000", "msg_from": "Andrew Hood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "On Mon, 13 Dec 2004 09:23:13 -0800, Joshua D. Drake\n<[email protected]> wrote:\n> \n> RAID 10 will typically always outperform RAID 5 with the same HD config.\n\nIsn't RAID10 just RAID5 mirrored? How does that speed up performance?\n Or am I missing something?\n\n-- Mitch\n", "msg_date": "Mon, 3 Jan 2005 14:20:41 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "Mitch Pirtle wrote:\n> On Mon, 13 Dec 2004 09:23:13 -0800, Joshua D. Drake\n> <[email protected]> wrote:\n> \n>>RAID 10 will typically always outperform RAID 5 with the same HD config.\n> \n> \n> Isn't RAID10 just RAID5 mirrored? How does that speed up performance?\n> Or am I missing something?\n> \n> -- Mitch\n\nHi Mitch,\n\n Nope, Raid 10 (one zero) is a mirror is stripes, no parity. with r10 \nyou get the benefit of a full mirror which means your system does not \nneed to calculate the XOR parity but you only get 50% disk usage. The \nmirror causes a slight write hit as the data needs to be split between \ntwo disk (or in this case, to striped pairs) but reads can be up to \ntwice as fast (theoretically). By adding the stripe you negate the write \nhit and actually gain write performance because half the data goes to \nmirror A, half to mirror B (same with reads, roughly).\n\n Raid 10 is a popular choice for software raid because of the reduced \noverhead. Raid 5 on the otherhand does require that a parity bit is \ncalculated for every N-1 disks. With r5 you get N-1 disk usage (you get \nthe combined capacity of 3 disks in a 4 disk r5 array) and still get the \nbenefit of striping across the disks so long as you have a dedicated \nraid asic that can do the XOR calculations. Without it, specially in a \nfailure state, the performance can collapse as the CPU performs all that \nextra math.\n\nhth\n\nMadison\n", "msg_date": "Mon, 03 Jan 2005 14:35:26 -0500", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "Madison Kelly wrote:\n> Nope, Raid 10 (one zero) is a mirror is stripes, no parity. with r10 \n\nWoops, that should be \"mirror of stripes\".\n\nBy the way, what you are thinking of is possible, it would be 51 (five \none; a raid 5 built on mirrors) or 15 (a mirror of raid 5 arrays). \nAlways be careful, 10 and 01 are also not the same. 
You want to think \ncarefully about what you want out of your array before building it.\n\nMadison\n", "msg_date": "Mon, 03 Jan 2005 15:19:04 -0500", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "\nMadison Kelly <[email protected]> writes:\n\n> Without it, specially in a failure state, the performance can collapse as\n> the CPU performs all that extra math.\n\nIt's really not the math that makes raid 5 hurt. It's that in order to\ncalculate the checksum block the raid controller needs to read in the existing\nchecksum block and write out the new version. So every write causes not just\none drive seeking and writing, but a second drive seeking and performing a\nread and a write.\n\nThe usual strategy for dealing with that is stuffing a huge nonvolatile cache\nin the controller so those reads are mostly cached and the extra writes don't\nsaturate the i/o throughput. But those kinds of controllers are expensive and\nnot an option for software raid.\n\n-- \ngreg\n\n", "msg_date": "03 Jan 2005 15:36:07 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "You are right, I now remember that setup was originally called \"RAID\n10 plus 1\", and I believe is was an incorrect statement from an\noverzealous salesman ;-)\n\nThanks for the clarification!\n\n- Mitch\n\nOn Mon, 03 Jan 2005 15:19:04 -0500, Madison Kelly <[email protected]> wrote:\n> Madison Kelly wrote:\n> > Nope, Raid 10 (one zero) is a mirror is stripes, no parity. with r10\n> \n> Woops, that should be \"mirror of stripes\".\n> \n> By the way, what you are thinking of is possible, it would be 51 (five\n> one; a raid 5 built on mirrors) or 15 (a mirror of raid 5 arrays).\n> Always be careful, 10 and 01 are also not the same. You want to think\n> carefully about what you want out of your array before building it.\n", "msg_date": "Mon, 3 Jan 2005 15:44:44 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" }, { "msg_contents": "...and on Mon, Jan 03, 2005 at 03:44:44PM -0500, Mitch Pirtle used the keyboard:\n>\n> You are right, I now remember that setup was originally called \"RAID\n> 10 plus 1\", and I believe is was an incorrect statement from an\n> overzealous salesman ;-)\n>\n\nJust an afterthought - that could well be the unfortunate consequence of\nsalesmen specializing in sales as an act rather than the goods they were\nselling - it might be that he/she was referring to the specifics of the\nconcrete configuration they were selling you (or trying to sell you),\nwhich should, in the case you were mentioning, probably be called \"a\nRAID10 array with a hotspare drive\" - that is, it would be preconfigured\nto, upon the failure of one of array members, detect the failed drive and\nautomatically replace it with one that has been sitting there all the time,\ndoing nothing but waiting for one of its active companions to fail.\n\nBut this already falls into the category that has, so far, probably\ncaused the vast majority of misunderstandings, failed investments and\ngrey hair in RAID, namely data safety, and I don't feel particularly\nqualified for getting into specifics of this at this moment, as it\nhappens to be 2AM, I had a couple of beers (my friend's birthday's due)\nand I'm dying to get some sleep. 
:)\n\nHTH, cheers,\n-- \n Grega Bremec\n gregab at p0f dot net", "msg_date": "Tue, 4 Jan 2005 02:21:10 +0100", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware purchase question" } ]
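
To put rough numbers behind Greg Stark's point, this is the textbook accounting for a single small random write, ignoring controller caches (illustrative only):

    RAID 10: write the block to both members of one mirrored pair      ->            2 writes
    RAID 5:  read the old data block and the old parity block, compute
             the new parity, write the new data and parity blocks      -> 2 reads + 2 writes

That read-modify-write cycle is exactly what a battery-backed controller cache is meant to hide.
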
[ { "msg_contents": "Hi ALL,\n\n\tIve been using postgres for 3 years and now we are having problems with its \nperformance.\n\n\tHere are some givens..\n\n\t\tWe have 260 subscription tables per Database. \n\t\tWe have 2 databases.\n\t\t\n\t\tOur main client has given us 250,000 mobile numbers to deactivate.\n\t\t\n--\n\t\tWe we are experiencing\n\t\t 91,000 mobile numbers to deactive it took a week to finish for 1 DB only \nthe second DB is still in the process of deactivating\n\n\tAlgorithm to deactivate:\n\t\twe loaded all subscription tables names into a table\n\t\twe loaded all mobile numbers to deactivate into a table\n\n\t\tSQL:\n\t\tupdate SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select \nmobile_num from LOADED_MOBILE_NUMBERS)\n\n\tthe script was made is \"C\"\n\nCOFIG FILE:\n# This is ARA nmimain\n\ntcpip_socket = true\nmax_connections = 150\nsuperuser_reserved_connections = 2\n\nport = 5433\nshared_buffers = 45600\nsort_mem = 40000\nmax_locks_per_transaction=128\n\n#fsync = true\n#wal_sync_method = fsync\n\n#\n# Locale settings\n#\n# (initialized by initdb -- may be changed)\nLC_MESSAGES = 'en_US.UTF-8'\nLC_MONETARY = 'en_US.UTF-8'\nLC_NUMERIC = 'en_US.UTF-8'\nLC_TIME = 'en_US.UTF-8'\n\n\n.. DB is being vaccumed every week\nmy box is running on a DUAL Xeon, 15K RPM with 2 G Mem.\n\nthat box is running 2 instances of PG DB.\n\n\n\nTIA,\n\t\t\n\n\n\t\t\n\n", "msg_date": "Thu, 25 Nov 2004 14:00:32 +0800", "msg_from": "JM <[email protected]>", "msg_from_op": true, "msg_subject": "HELP speed up my Postgres" }, { "msg_contents": "Dear JM ,\n\n\n\n> Ive been using postgres for 3 years and now we are having problems with its\n\nPostgrSQL version please\n-- \nWith Best Regards,\nVishal Kashyap.\nLead Software Developer,\nhttp://saihertz.com,\nhttp://vishalkashyap.tk\n", "msg_date": "Thu, 25 Nov 2004 11:42:18 +0530", "msg_from": "\"Vishal Kashyap @ [SaiHertz]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] HELP speed up my Postgres" }, { "msg_contents": "PG Version 7.3.4\n\nOn Thursday 25 November 2004 14:12, Vishal Kashyap @ [SaiHertz] wrote:\n> Dear JM ,\n>\n> > Ive been using postgres for 3 years and now we are having\n> > problems with its\n>\n> PostgrSQL version please\n\n", "msg_date": "Thu, 25 Nov 2004 14:29:48 +0800", "msg_from": "JM <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] HELP speed up my Postgres" }, { "msg_contents": "> SQL:\n> update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select\n> mobile_num from LOADED_MOBILE_NUMBERS)\n\nCould you try using UPDATE ... FROM (SELECT ....) AS .. style syntax?\n\nAbout 20 minutes ago, I changed a 8 minute update to an most instant by \ndoing that.\n\nregards\nIain \n\n", "msg_date": "Thu, 25 Nov 2004 15:40:57 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] HELP speed up my Postgres" }, { "msg_contents": "JM <[email protected]> writes:\n> PG Version 7.3.4\n\nAvoid the \"IN (subselect)\" construct then. 
7.4 is the first release\nthat can optimize that in any real sense.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Nov 2004 01:55:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] HELP speed up my Postgres " }, { "msg_contents": "> \t\tupdate SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select \n> mobile_num from LOADED_MOBILE_NUMBERS)\n\nChange to:\n\nupdate SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where exists (select 1 from \nLOADED_MOBILE_NUMBERS lmn where \nlmn.mobile_num=SUBSCRIPTIONTABLE.mobile_num);\n\nThat should run a lot faster.\n\nMake sure you have indexes on both mobile_num columns.\n\nChris\n", "msg_date": "Thu, 25 Nov 2004 15:06:27 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] HELP speed up my Postgres" }, { "msg_contents": "On Thu, 25 Nov 2004 14:00:32 +0800, JM <[email protected]> wrote:\n> \t\tupdate SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select \n> mobile_num from LOADED_MOBILE_NUMBERS)\n\ndoes loaded_mobile_numbers have a primary key or index on mobile_num?\nsame for subscriptiontable?\nhave you analyzed both tables?\nis mobile_num the same type in both tables?\n\nhow does this query compare?\n update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' \n from loaded_mobile_numbers\n where subscriptiontable.mobile_num = LOADED_MOBILE_NUMBERS.mobile_num\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Thu, 25 Nov 2004 18:08:30 +1100", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] HELP speed up my Postgres" }, { "msg_contents": "> SQL:\n> update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select\n> mobile_num from LOADED_MOBILE_NUMBERS)\n\nYou can try this:\n\nupdate SUBSCRIPTIONTABLE, LOADED_MOBILE_NUMBERS set \nSUBSCRIPTIONTABLE.ACTIVEFLAG='Y'\nwhere LOADED_MOBILE_NUMBERS.mobile_num=SUBSCRIPTIONTABLE.mobile_num\n\nAnatoly.\n\n\n", "msg_date": "Thu, 25 Nov 2004 14:21:00 +0500", "msg_from": "\"Anatoly Okishev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HELP speed up my Postgres" }, { "msg_contents": "Hi all,\n\n===================================\nCREATE FUNCTION trigger_test_func()\nRETURNS trigger\nAS '\n DECLARE\n cnt int4;\n \n BEGIN\n SELECT INTO cnt COUNT(*)\n FROM table_test\n WHERE ip = new.ip;\n\n IF cnt > 50 THEN\n -- THERE THE \"INSERT\" HAS TO BE STOPED\n END IF;\n\n RETURN new;\n END;'\nLANGUAGE 'plpgsql';\n\nCREATE TRIGGER trigger_test\nBEFORE INSERT\nON table_test\nFOR EACH ROW\nEXECUTE PROCEDURE trigger_test_func();\n===================================\n\nHow could i stop Inserting record into table by some condition?\n\nThanx!\n\n", "msg_date": "Thu, 25 Nov 2004 14:37:40 +0300", "msg_from": "\"ON.KG\" <[email protected]>", "msg_from_op": false, "msg_subject": "Trigger before insert" }, { "msg_contents": "ON.KG wrote:\n> \n> How could i stop Inserting record into table by some condition?\n\nRETURN null when using a before trigger. 
Or raise an exception to abort \nthe whole transaction.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 25 Nov 2004 12:54:46 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger before insert" }, { "msg_contents": "Hi!\n\n>> How could i stop Inserting record into table by some condition?\n\nRH> RETURN null when using a before trigger. Or raise an exception to abort\nRH> the whole transaction.\n\nThanx ;)\nRETURN NULL works so as i need\n\n", "msg_date": "Thu, 25 Nov 2004 18:02:40 +0300", "msg_from": "\"ON.KG\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger before insert" }, { "msg_contents": "it did.. thanks.. generally a weeks process turned out to be less than a \nday..\n\n\n\nOn Thursday 25 November 2004 15:06, Christopher Kings-Lynne wrote:\n> > \t\tupdate SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select\n> > mobile_num from LOADED_MOBILE_NUMBERS)\n>\n> Change to:\n>\n> update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where exists (select 1 from\n> LOADED_MOBILE_NUMBERS lmn where\n> lmn.mobile_num=SUBSCRIPTIONTABLE.mobile_num);\n>\n> That should run a lot faster.\n>\n> Make sure you have indexes on both mobile_num columns.\n>\n> Chris\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n-- \n\nJerome Macaranas\nSystems/Network Administrator\nGMA New Media, Inc.\nPhone: (632) 9254627 loc 202\nFax: (632) 9284553\nMobile: (632) 918-9336819\[email protected]\n\nSanity is the playground for the unimaginative.\n\n\nDISCLAIMER: This Message may contain confidential information intended only \nfor the use of the addressee named above. If you are not the intended \nrecipient of this message you are hereby notified that any use, \ndissemination, distribution or reproduction of this message is prohibited. If \nyou received this message in error please notify your Mail Administrator and \ndelete this message immediately. Any views expressed in this message are \nthose of the individual sender and may not necessarily reflect the views of \nGMA New Media, Inc.\n\n", "msg_date": "Fri, 26 Nov 2004 16:28:31 +0800", "msg_from": "Jerome Macaranas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] HELP speed up my Postgres" } ]
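
For reference, the two rewrites the thread converged on, spelled out against the table names used above. The index names are placeholders added here, and note that the comma-separated multi-table UPDATE quoted earlier is MySQL syntax; in PostgreSQL the second table belongs in a FROM clause.

CREATE INDEX subscriptiontable_mobile_num_idx
    ON subscriptiontable (mobile_num);
CREATE INDEX loaded_mobile_numbers_mobile_num_idx
    ON loaded_mobile_numbers (mobile_num);

-- EXISTS form: behaves well even on 7.3, where IN (subselect) is poorly optimized.
UPDATE subscriptiontable
   SET activeflag = 'Y'
 WHERE EXISTS (SELECT 1
                 FROM loaded_mobile_numbers lmn
                WHERE lmn.mobile_num = subscriptiontable.mobile_num);

-- UPDATE ... FROM form, as in Klint's suggestion.
UPDATE subscriptiontable
   SET activeflag = 'Y'
  FROM loaded_mobile_numbers
 WHERE subscriptiontable.mobile_num = loaded_mobile_numbers.mobile_num;

The trigger question folded into the same thread resolves as Richard describes; filled in, the function might read roughly as follows (a sketch, keeping the original's quoting style):

CREATE OR REPLACE FUNCTION trigger_test_func() RETURNS trigger AS '
DECLARE
    cnt int4;
BEGIN
    SELECT INTO cnt count(*) FROM table_test WHERE ip = new.ip;
    IF cnt > 50 THEN
        -- Returning NULL from a BEFORE trigger silently skips this INSERT;
        -- RAISE EXCEPTION here would abort the whole transaction instead.
        RETURN NULL;
    END IF;
    RETURN new;
END;
' LANGUAGE 'plpgsql';
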
[ { "msg_contents": "How much RAM can a single postgres backend use?\n\nI've just loaded a moderately sized dataset into postgres and was\napplying RI constraints to the tables (using pgadmin on windows). Part\nway though I noticed the (single) postgres backend had shot up to using\n300+ MB of my RAM!\n\nThe two tables are:\n\ncreate table reqt_dates\n(\n\treqt_date_id\tserial,\n\treqt_id \tinteger not null,\n\treqt_date\tdate not null,\n\tprimary key (reqt_date_id)\n) without oids;\n\nand\n\ncreate table booking_plan\n(\n\tbooking_plan_id\t\tserial,\n\treqt_date_id\t\tinteger not null,\n\tbooking_id \t\tinteger not null,\n\tbooking_date\t\tdate not null,\n\tdatetime_from\t\ttimestamp not null,\n\tdatetime_to\t\ttimestamp not null,\n\tprimary key (booking_plan_id)\n) without oids;\n\nand I was was trying to do:\n\nalter table booking_plan add\n\t foreign key\n\t(\n\t\treqt_date_id\n\t) references reqt_dates (\n\t\treqt_date_id\n\t) on delete cascade;\n\nSince I can't get an explain of what the alter table was doing I used this:\n\nselect count(*) from booking_plan,reqt_dates where\nbooking_plan.reqt_date_id = reqt_dates.reqt_date_id\n\nand sure enough this query caused the backend to use 300M RAM. The plan\nfor this was:\n\nQUERY PLAN\nAggregate (cost=37.00..37.00 rows=1 width=0) (actual\ntime=123968.000..123968.000 rows=1 loops=1)\n -> Hash Join (cost=15.50..36.50 rows=1000 width=0) (actual\ntime=10205.000..120683.000 rows=1657709 loops=1)\n Hash Cond: (\"outer\".reqt_date_id = \"inner\".reqt_date_id)\n -> Seq Scan on booking_plan (cost=0.00..15.00 rows=1000\nwidth=4) (actual time=10.000..4264.000 rows=1657709 loops=1)\n -> Hash (cost=15.00..15.00 rows=1000 width=4) (actual\ntime=10195.000..10195.000 rows=0 loops=1)\n -> Seq Scan on reqt_dates (cost=0.00..15.00 rows=1000\nwidth=4) (actual time=0.000..6607.000 rows=2142184 loops=1)\nTotal runtime: 124068.000 ms\n\nI then analysed the database. Note, there are no indexes at this stage\nexcept the primary keys.\n\nthe same query then gave:\nQUERY PLAN\nAggregate (cost=107213.17..107213.17 rows=1 width=0) (actual\ntime=57002.000..57002.000 rows=1 loops=1)\n -> Hash Join (cost=35887.01..106384.32 rows=1657709 width=0)\n(actual time=9774.000..54046.000 rows=1657709 loops=1)\n Hash Cond: (\"outer\".reqt_date_id = \"inner\".reqt_date_id)\n -> Seq Scan on booking_plan (cost=0.00..22103.55 rows=1657709\nwidth=4) (actual time=10.000..19648.000 rows=1657709 loops=1)\n -> Hash (cost=24355.92..24355.92 rows=2142184 width=4)\n(actual time=9674.000..9674.000 rows=0 loops=1)\n -> Seq Scan on reqt_dates (cost=0.00..24355.92\nrows=2142184 width=4) (actual time=0.000..4699.000 rows=2142184 loops=1)\nTotal runtime: 57002.000 ms\n\nThis is the same set of hash joins, BUT the backend only used 30M of\nprivate RAM.\n\nPlatform is Windows XP, Postgres 8.0 beta 5\n\nshared_buffers = 4000\nwork_mem = 8192\n\nAny explanations?\n\nThanks,\nGary.\n\n", "msg_date": "Thu, 25 Nov 2004 20:35:25 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres backend using huge amounts of ram" }, { "msg_contents": "Gary Doades wrote:\n> How much RAM can a single postgres backend use?\n> \n> I've just loaded a moderately sized dataset into postgres and was\n> applying RI constraints to the tables (using pgadmin on windows). Part\n> way though I noticed the (single) postgres backend had shot up to using\n> 300+ MB of my RAM!\n\nOops - guess that's why they call it a Beta. 
My first guess was a queue \nof pending foreign-key checks or triggers etc. but then you go on to say...\n\n> Since I can't get an explain of what the alter table was doing I used this:\n> \n> select count(*) from booking_plan,reqt_dates where\n> booking_plan.reqt_date_id = reqt_dates.reqt_date_id\n> \n> and sure enough this query caused the backend to use 300M RAM. The plan\n> for this was:\n[snip]\n> I then analysed the database. Note, there are no indexes at this stage\n> except the primary keys.\n> \n> the same query then gave:\n[snip]\n\n> This is the same set of hash joins, BUT the backend only used 30M of\n> private RAM.\n\nI'm guessing in the first case that the default estimate of 1000 rows in \na table means PG chooses to do the join in RAM. Once it knows there are \na lot of rows it can tell not to do so.\n\nHowever, I thought PG was supposed to spill to disk when the memory \nrequired exceeded config-file limits. If you could reproduce a simple \ntest case I'm sure someone would be interested in squashing this bug.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 26 Nov 2004 09:12:15 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres backend using huge amounts of ram" }, { "msg_contents": "Gary Doades <[email protected]> writes:\n> I've just loaded a moderately sized dataset into postgres and was\n> applying RI constraints to the tables (using pgadmin on windows). Part\n> way though I noticed the (single) postgres backend had shot up to using\n> 300+ MB of my RAM!\n\n> Since I can't get an explain of what the alter table was doing I used this:\n\n[ looks in code... ] The test query for an ALTER ADD FOREIGN KEY looks\nlike\n\n\t \tSELECT fk.keycols FROM ONLY relname fk\n\t \t LEFT OUTER JOIN ONLY pkrelname pk\n\t \t ON (pk.pkkeycol1=fk.keycol1 [AND ...])\n\t \t WHERE pk.pkkeycol1 IS NULL AND\n\t \t (fk.keycol1 IS NOT NULL [AND ...])\n\nIt's also worth noting that work_mem is temporarily set to\nmaintenance_work_mem, which you didn't tell us the value of:\n\n\t/*\n\t * Temporarily increase work_mem so that the check query can be\n\t * executed more efficiently. It seems okay to do this because the\n\t * query is simple enough to not use a multiple of work_mem, and one\n\t * typically would not have many large foreign-key validations\n\t * happening concurrently.\tSo this seems to meet the criteria for\n\t * being considered a \"maintenance\" operation, and accordingly we use\n\t * maintenance_work_mem.\n\t */\n\n> I then analysed the database. ...\n> This is the same set of hash joins, BUT the backend only used 30M of\n> private RAM.\n\nMy recollection is that hash join chooses hash table partitions partly\non the basis of the estimated number of input rows. Since the estimate\nwas way off, the actual table size got out of hand a bit :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Nov 2004 14:25:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres backend using huge amounts of ram " }, { "msg_contents": "Tom Lane wrote:\n> \n> It's also worth noting that work_mem is temporarily set to\n> maintenance_work_mem, which you didn't tell us the value of:\n> \nIt's left at the default. (16384).\n\nThis would be OK if that is all it used for this type of thing.\n\n> \n> \n> My recollection is that hash join chooses hash table partitions partly\n> on the basis of the estimated number of input rows. 
Since the estimate\n> was way off, the actual table size got out of hand a bit :-(\n\nA bit!!\n\nThe really worrying part is that a normal(ish) query also exhibited the \nsame behaviour. I'm concerned that if the stats drift enough for the \nestimate to be badly off, as in this case, a few backends each trying to \ngrab this much RAM will grind the server to a halt.\n\nIs this a fixable bug? To me it looks fairly high priority, since it can \neffectively take the server down.\n\nIf you need the test data, I could zip the two tables up and send them \nsomewhere.\n\nThanks,\nGary.\n", "msg_date": "Fri, 26 Nov 2004 19:42:50 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres backend using huge amounts of ram" } ]
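
The practical takeaway from the exchange above, written out as a sketch for a bulk-load script (table names as in the thread; the memory value is illustrative and, per Tom's excerpt, it is maintenance_work_mem rather than work_mem that governs the validation query):

-- After bulk-loading, refresh statistics so the FK-validation join is planned
-- with realistic row counts instead of the default estimates.
ANALYZE reqt_dates;
ANALYZE booking_plan;

-- Illustrative: bound the memory the check may use (value in kB on 8.0).
SET maintenance_work_mem = 65536;

ALTER TABLE booking_plan
  ADD FOREIGN KEY (reqt_date_id)
      REFERENCES reqt_dates (reqt_date_id) ON DELETE CASCADE;

An index on booking_plan.reqt_date_id does not help this particular check, but it will matter later whenever rows are deleted from reqt_dates and the cascade has to find the referencing rows.
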
[ { "msg_contents": "We have a network application in which many clients will be executing a mix of select/insert/update/deletes on a central postgres 7.4.5 database, running on Solaris 9 running on dual 2.3 ghz Xeons, with 2 gig of RAM and a RAID 10 disk. The test database is about 400 meg in size.\n\nWe have tuned the postgresql.conf parameters to the point where we are confident we have enough memory for shared buffers and for sorting. We are still tuning SQL statements, but we're pretty sure the big wins have been achieved.\n\nWe are maxing out on the backend with 30 postmaster processes, each taking up about 2.5-3% of the CPU. We have tested mounting the whole database in /tmp, hence in memory, and it has made no difference in performance, so it seems we are purely CPU bound at this point.\n\nAbout 70% of our time is spent in selects, and another 25% spent in inserts/updates of a single table (about 10% out of the selects % is against this table).\n\nNow, our application client is not doing nearly enough of it's own caching, so a lot the work the database is doing currently is redundant, and we are working on the client, but in the meantime we have to squeeze as much as we can from the backend.\n\nAfter that long intro, I have a couple of questions:\n\n1) Given that the data is all cached, what can we do to make sure that postgres is generating\nthe most efficient plans in this case? We have bumped up effective_cache_size, but it had no\neffect. Also, what would the most efficient plan for in-memory data look like? I mean, does one\nstill look for the normal stuff - index usage, etc., or are seqscans what we should be looking for?\nI've seen some stuff about updating statistics targets for specific tables, but I'm not sure I \nunderstand it, and don't know if something like that applies in this case. I can supply some specific plans, if that would help (this email is already too long...).\n\n2) We have SQL test environment where we just run the SQL statements executed by the clients (culled from the log file) in psql. In our test environment, the same set of SQL statements runs 4X faster that the times achieved in the test that generated our source log file. Obviously there was a bigger load on the machine in the full test, but I'm wondering if there are any particular diagnostics that I should be looking at to ferret out contention. I haven't seen anything that looked suspicious in pg_locks, but it's difficult to interpret that data when the database is under load (at least for someone of my limited experience).\n\nI suspect the ultimate answer to our problem will be:\n\n 1) aggressive client-side caching\n 2) SQL tuning\n 3) more backend hardware\n\nBut I would grateful to hear any tips/anecdotes/experiences that others might have from tuning similar applications.\n\nThanks!\n\n- DAP\n----------------------------------------------------------------------------------\nDavid Parker Tazz Networks (401) 709-5130\n \n", "msg_date": "Fri, 26 Nov 2004 12:13:32 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "time to stop tuning?" 
}, { "msg_contents": "On Fri, 2004-11-26 at 12:13 -0500, David Parker wrote:\n> \n> I suspect the ultimate answer to our problem will be:\n> \n> 1) aggressive client-side caching\n> 2) SQL tuning\n> 3) more backend hardware\n\n#0 might actually be using connection pooling and using cached query\nplans (PREPARE), disabling the statistics daemon, etc.\n\nFor the plans, send us EXPLAIN ANALYZE output for each of the common\nqueries.\n\nIf you can try it, I'd give a try at FreeBSD or a newer Linux on your\nsystem instead of Solaris. Older versions of Solaris had not received\nthe same amount of attention for Intel hardware as the BSDs and Linux\nhave and I would imagine (having not tested it recently) that this is\nstill true for 32bit Intel.\n\nAnother interesting test might be to limit the number of simultaneous\nconnections to 8 instead of 30 (client side connection retry) after\nclient side connection pooling via pgpool or similar has been installed.\n\nPlease report back with your findings.\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "Fri, 26 Nov 2004 13:29:19 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: time to stop tuning?" }, { "msg_contents": "\"David Parker\" <[email protected]> writes:\n> 1) Given that the data is all cached, what can we do to make sure that\n> postgres is generating the most efficient plans in this case? We have\n> bumped up effective_cache_size, but it had no effect.\n\nIf you're willing to bet on everything being in RAM all the time,\ndropping random_page_cost to 1 would be a theoretically sound thing\nto do. In any case you should look at reducing it considerably from\nthe default setting of 4.\n\nSomething that might also be interesting is to try increasing all the\ncpu_xxx cost factors, on the theory that since the unit of measurement\n(1 sequential page fetch) relates to an action involving no actual I/O,\nthe relative costs of invoking an operator, etc, should be rated higher\nthan when you expect actual I/O. I'm not real confident that this would\nmake things better --- you might find that any improvement would be\nswamped by the low accuracy with which we model CPU costs (such as the\nassumption that every operator costs the same to evaluate). But it's\nworth some experimentation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Nov 2004 14:04:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: time to stop tuning? " } ]
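
A sketch of the experiment Tom suggests, with illustrative values only; the right numbers are workload-dependent and worth trying per session with EXPLAIN ANALYZE before putting anything in postgresql.conf.

-- With everything cached, tell the planner a random page fetch costs little
-- more than a sequential one (default random_page_cost is 4).
SET random_page_cost = 1.5;

-- Optionally rate CPU work higher relative to the (now largely absent) I/O.
-- Defaults: cpu_tuple_cost 0.01, cpu_index_tuple_cost 0.001, cpu_operator_cost 0.0025.
SET cpu_tuple_cost = 0.02;
SET cpu_index_tuple_cost = 0.002;
SET cpu_operator_cost = 0.005;
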
[ { "msg_contents": "Hi. Thanks for responding.\n\nAs it happens, the client-side already has a connection pool. \n\nWe need statistics enabled so that autovacuum can run (without\nautovacuum running our updates begin to kill us pretty quickly).\n\nMoving off of Solaris 9 isn't an option, even for the purposes of\ncomparison, unfortunately.\n\nOn limiting the client side connections: we've been gradually pushing up\nthe client-side connection pool and threads, and have seen steady\nimprovement in our throughput up to the current barrier we have reached.\nI guess the idea would be that backing off on the connections would\nallow each operation to finish faster, but that hasn't been the observed\nbehavior so far. \n\nI've attached the plans for the 4 queries that represent ~35% of our\nload. These are run against the same dataset, but without any other\nload. Another big query basically requires a test to be runnning because\nthe data is transient, and I can't run that at the moment. The times for\nthe individual queries is really fine - it's just they are called 3\ntimes for every logical \"unit of work\" on the client side, so they are\ncalled thousands of times in a given test (hence the need for client\ncaching).\n\nThanks.\n\n- DAP\n\n>-----Original Message-----\n>From: Rod Taylor [mailto:[email protected]] \n>Sent: Friday, November 26, 2004 1:29 PM\n>To: David Parker\n>Cc: Postgresql Performance\n>Subject: Re: [PERFORM] time to stop tuning?\n>\n>On Fri, 2004-11-26 at 12:13 -0500, David Parker wrote:\n>> \n>> I suspect the ultimate answer to our problem will be:\n>> \n>> 1) aggressive client-side caching\n>> 2) SQL tuning\n>> 3) more backend hardware\n>\n>#0 might actually be using connection pooling and using cached \n>query plans (PREPARE), disabling the statistics daemon, etc.\n>\n>For the plans, send us EXPLAIN ANALYZE output for each of the \n>common queries.\n>\n>If you can try it, I'd give a try at FreeBSD or a newer Linux \n>on your system instead of Solaris. Older versions of Solaris \n>had not received the same amount of attention for Intel \n>hardware as the BSDs and Linux have and I would imagine \n>(having not tested it recently) that this is still true for \n>32bit Intel.\n>\n>Another interesting test might be to limit the number of \n>simultaneous connections to 8 instead of 30 (client side \n>connection retry) after client side connection pooling via \n>pgpool or similar has been installed.\n>\n>Please report back with your findings.\n>--\n>Rod Taylor <[email protected]>\n>\n>", "msg_date": "Fri, 26 Nov 2004 14:16:09 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: time to stop tuning?" }, { "msg_contents": "> On limiting the client side connections: we've been gradually pushing up\n> the client-side connection pool and threads, and have seen steady\n> improvement in our throughput up to the current barrier we have reached.\n\nVery well.. Sometimes more simultaneous workers helps, other times it\nhinders.\n\n> I've attached the plans for the 4 queries that represent ~35% of our\n> load. These are run against the same dataset, but without any other\n> load. 
Another big query basically requires a test to be runnning because\n\nThose aren't likely from your production system as there isn't any data\nin those tables and the queries took less than 1ms.\n\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "Fri, 26 Nov 2004 16:48:40 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: time to stop tuning?" } ]
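
One way to answer Rod's objection, that the attached plans came from empty tables, is to capture the slow statements on the loaded production system and re-run just those under EXPLAIN ANALYZE against the populated database. A hypothetical threshold for 7.4 (value in milliseconds):

# postgresql.conf: log any statement slower than 100 ms together with its duration
log_min_duration_statement = 100
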
[ { "msg_contents": "\nHi all,\n\nOn v7.4.5 I noticed downgrade in the planner, namely favoring\nsequential scan over index scan. The proof:\n\n create table a ( a integer);\n create index aidx on a(a);\n explain analyze select * from a where a = 0;\n -- Index Scan using aidx on a (cost=0.00..17.07 rows=5 width=4) (actual\n -- time=0.029..0.029 rows=0 loops=1)\n -- Index Cond: (a = 0)\n vacuum analyze;\n explain analyze select * from a where a = 0;\n -- Seq Scan on a (cost=0.00..0.00 rows=1 width=4) (actual time=0.009..0.009 \n -- rows=0 loops=1)\n -- Filter: (a = 0)\n\nI do realize that there might be reasons why this happens over an empty\ntable, but what is way worse that when the table starts actually to fill,\nthe seq scan is still there, and the index is simply not used. How\nthat could be so ...mmm... shortsighted, and what is more important, \nhow to avoid this? I hope the answer is not 'run vacuum analyze each 5 seconds'.\n\n-- \nSincerely,\n\tDmitry Karasik\n\n---\ncatpipe Systems ApS\n*BSD solutions, consulting, development\nwww.catpipe.net\n+45 7021 0050 \n\n", "msg_date": "30 Nov 2004 14:30:37 +0100", "msg_from": "Dmitry Karasik <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM ANALYZE downgrades performance" }, { "msg_contents": "On 30 Nov 2004 14:30:37 +0100, Dmitry Karasik <[email protected]> wrote:\n> \n> Hi all,\n> \n> On v7.4.5 I noticed downgrade in the planner, namely favoring\n> sequential scan over index scan. The proof:\n> \n> create table a ( a integer);\n> create index aidx on a(a);\n> explain analyze select * from a where a = 0;\n> -- Index Scan using aidx on a (cost=0.00..17.07 rows=5 width=4) (actual\n> -- time=0.029..0.029 rows=0 loops=1)\n> -- Index Cond: (a = 0)\n> vacuum analyze;\n> explain analyze select * from a where a = 0;\n> -- Seq Scan on a (cost=0.00..0.00 rows=1 width=4) (actual time=0.009..0.009\n> -- rows=0 loops=1)\n> -- Filter: (a = 0)\n\nLooks to me like the seq scan is a better plan. The \"actual time\" went down.\n\n> \n> I do realize that there might be reasons why this happens over an empty\n> table, but what is way worse that when the table starts actually to fill,\n> the seq scan is still there, and the index is simply not used. How\n> that could be so ...mmm... shortsighted, and what is more important,\n> how to avoid this? I hope the answer is not 'run vacuum analyze each 5 seconds'.\n> \n\nSee this thread\n(http://archives.postgresql.org/pgsql-hackers/2004-11/msg00985.php and\nhttp://archives.postgresql.org/pgsql-hackers/2004-11/msg01080.php) for\nan ongoing discussion of the issue.\n\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Tue, 30 Nov 2004 10:33:01 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE downgrades performance" }, { "msg_contents": "On 11/30/2004 7:30 AM Dmitry Karasik said::\n\n>Hi all,\n>\n>On v7.4.5 I noticed downgrade in the planner, namely favoring\n>sequential scan over index scan. 
The proof:\n>\n> create table a ( a integer);\n> create index aidx on a(a);\n> explain analyze select * from a where a = 0;\n> -- Index Scan using aidx on a (cost=0.00..17.07 rows=5 width=4) (actual\n> -- time=0.029..0.029 rows=0 loops=1)\n> -- Index Cond: (a = 0)\n> vacuum analyze;\n> explain analyze select * from a where a = 0;\n> -- Seq Scan on a (cost=0.00..0.00 rows=1 width=4) (actual time=0.009..0.009 \n> -- rows=0 loops=1)\n> -- Filter: (a = 0)\n>\n>I do realize that there might be reasons why this happens over an empty\n>table, but what is way worse that when the table starts actually to fill,\n>the seq scan is still there, and the index is simply not used. How\n>that could be so ...mmm... shortsighted, and what is more important, \n>how to avoid this? I hope the answer is not 'run vacuum analyze each 5 seconds'.\n>\n> \n>\nLook at the ACTUAL TIME. It dropped from 0.029ms (using the index \nscan) to 0.009ms (using a sequential scan.) \n\nIndex scans are not always faster, and the planner/optimizer knows \nthis. VACUUM ANALYZE is best run when a large proportion of data has \nbeen updated/loaded or in the off hours to refresh the statistics on \nlarge datasets.\n\n\n\n\n", "msg_date": "Tue, 30 Nov 2004 09:42:04 -0600", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE downgrades performance" }, { "msg_contents": "\tHi Thomas!\n\n Thomas> Look at the ACTUAL TIME. It dropped from 0.029ms (using the index\n Thomas> scan) to 0.009ms (using a sequential scan.)\n\n Thomas> Index scans are not always faster, and the planner/optimizer knows\n Thomas> this. VACUUM ANALYZE is best run when a large proportion of data\n Thomas> has been updated/loaded or in the off hours to refresh the\n Thomas> statistics on large datasets.\n\nWhile I agree that generally this is true, look how stupid this \nbehavior looks in this particular case: A developer creates a table\nand index, knowing that the table will be large and will be intensively \nused. An admin runs 'VACUUM ANALYZE' when table is occasionally empty, \nand next, say, 1 day, until another 'VACUUM ANALYZE' starts, the index \nis simply not used! Sure you don't suppose to run 'VACUUM ANALYZE' every \n5 minutes as a solution, right?\n\nI'm not sure if there's ever such thing like planner hints, such as,\n\"yes, we were switched from index back to seqscan, but this switch is\nonly valid until table has less than X records\", but it sounds as a \nreasonable solution. \n\nWell anyway, here's the scenario that cannot be fought neither by\nSQL programming nor by administrative guidelines, at least as I see\nit. And yes, I looked on the actual time, but somehow am not moved by\nhow fast postgresql can seqscan an empty table, really. I believe \nthere's something wrong if decisions based on a table when it is empty,\nare suddenly applied when it is full.\n\n-- \nSincerely,\n\tDmitry Karasik\n\n---\ncatpipe Systems ApS\n*BSD solutions, consulting, development\nwww.catpipe.net\n+45 7021 0050 \n\n", "msg_date": "02 Dec 2004 17:07:17 +0100", "msg_from": "Dmitry Karasik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM ANALYZE downgrades performance" }, { "msg_contents": "On Thu, 2004-12-02 at 17:07 +0100, Dmitry Karasik wrote:\n> \tHi Thomas!\n> \n> Thomas> Look at the ACTUAL TIME. It dropped from 0.029ms (using the index\n> Thomas> scan) to 0.009ms (using a sequential scan.)\n> \n> Thomas> Index scans are not always faster, and the planner/optimizer knows\n> Thomas> this. 
VACUUM ANALYZE is best run when a large proportion of data\n> Thomas> has been updated/loaded or in the off hours to refresh the\n> Thomas> statistics on large datasets.\n> \n> While I agree that generally this is true, look how stupid this \n> behavior looks in this particular case: A developer creates a table\n> and index, knowing that the table will be large and will be intensively \n> used. An admin runs 'VACUUM ANALYZE' when table is occasionally empty, \n> and next, say, 1 day, until another 'VACUUM ANALYZE' starts, the index \n> is simply not used! Sure you don't suppose to run 'VACUUM ANALYZE' every \n> 5 minutes as a solution, right?\n\nYou might want to try this on the next 8.0 beta to come out, or against\nCVS. Tom recently applied some changes which should mitigate this\nsituation.\n\n-- \n\n", "msg_date": "Thu, 02 Dec 2004 11:25:18 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE downgrades performance" }, { "msg_contents": "On Thu, Dec 02, 2004 at 05:07:17PM +0100, Dmitry Karasik wrote:\n> While I agree that generally this is true, look how stupid this \n> behavior looks in this particular case: A developer creates a table\n> and index, knowing that the table will be large and will be intensively \n> used. An admin runs 'VACUUM ANALYZE' when table is occasionally empty, \n> and next, say, 1 day, until another 'VACUUM ANALYZE' starts, the index \n> is simply not used! Sure you don't suppose to run 'VACUUM ANALYZE' every \n> 5 minutes as a solution, right?\n\nNo, you run autovacuum, which automatically re-analyzes at approximately the\nright time.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 2 Dec 2004 17:30:44 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE downgrades performance" }, { "msg_contents": "On Thursday 02 Dec 2004 9:37 pm, Dmitry Karasik wrote:\n> \tHi Thomas!\n>\n> Thomas> Look at the ACTUAL TIME. It dropped from 0.029ms (using the index\n> Thomas> scan) to 0.009ms (using a sequential scan.)\n>\n> Thomas> Index scans are not always faster, and the planner/optimizer knows\n> Thomas> this. VACUUM ANALYZE is best run when a large proportion of data\n> Thomas> has been updated/loaded or in the off hours to refresh the\n> Thomas> statistics on large datasets.\n>\n> While I agree that generally this is true, look how stupid this\n> behavior looks in this particular case: A developer creates a table\n> and index, knowing that the table will be large and will be intensively\n> used. An admin runs 'VACUUM ANALYZE' when table is occasionally empty,\n> and next, say, 1 day, until another 'VACUUM ANALYZE' starts, the index\n> is simply not used! Sure you don't suppose to run 'VACUUM ANALYZE' every\n> 5 minutes as a solution, right?\n\nWhy not? If the updates are frequent enough, that is *the* solution.\n\nBut you could always use autovacuum daemon in most case.\n\nHTH\n\n Shridhar\n", "msg_date": "Thu, 2 Dec 2004 22:08:57 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE downgrades performance" }, { "msg_contents": "\tHi Rod!\n\n Thomas> Index scans are not always faster, and the planner/optimizer knows\n Thomas> this. 
VACUUM ANALYZE is best run when a large proportion of data\n Thomas> has been updated/loaded or in the off hours to refresh the\n Thomas> statistics on large datasets.\n >> While I agree that generally this is true, look how stupid this\n >> behavior looks in this particular case: A developer creates a table and\n >> index, knowing that the table will be large and will be intensively\n >> used. An admin runs 'VACUUM ANALYZE' when table is occasionally empty,\n >> and next, say, 1 day, until another 'VACUUM ANALYZE' starts, the index\n >> is simply not used! Sure you don't suppose to run 'VACUUM ANALYZE'\n >> every 5 minutes as a solution, right?\n\n Rod> You might want to try this on the next 8.0 beta to come out, or\n Rod> against CVS. Tom recently applied some changes which should mitigate\n Rod> this situation.\n\nBut this would affect only VACUUM, and not ANALYZE, right?\n\n-- \nSincerely,\n\tDmitry Karasik\n\n---\ncatpipe Systems ApS\n*BSD solutions, consulting, development\nwww.catpipe.net\n+45 7021 0050 \n\n", "msg_date": "03 Dec 2004 11:42:13 +0100", "msg_from": "Dmitry Karasik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM ANALYZE downgrades performance" } ]
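The practical takeaway from this thread is that the statistics, not the index, go stale: once the table actually has data, a plain ANALYZE (far cheaper than VACUUM ANALYZE) is enough to bring the planner back to the index, and the autovacuum daemon can do the same thing automatically. A minimal sketch using the toy table from the first post:

    create table a (a integer);
    create index aidx on a (a);

    -- Statistics gathered while the table is empty make a seqscan look cheapest.
    vacuum analyze a;

    -- ... the application then loads a large number of rows ...

    -- Refresh only the statistics; this moves no tuples, so it is cheap enough
    -- to run right after a bulk load (or to leave to autovacuum).
    analyze a;

    explain analyze select * from a where a = 0;   -- should pick aidx again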
[ { "msg_contents": "hello~\ni'm curious about this situation.\n\nhere is my test.\nmy zipcode table has 47705 rows,\nand schema looks like this.\n\npgsql=# \\d zipcode\n\nTable \"public.zipcode\" Column | Type | Modifiers \n---------+-----------------------+----------- zipcode | character(7) | \nnot null sido | character varying(4) | not null gugun | character \nvarying(13) | not null dong | character varying(43) | not null bunji | \ncharacter varying(17) | not null seq | integer | not null Indexes: \n\"zipcode_pkey\" PRIMARY KEY, btree (seq)\n\nand I need seq scan so,\n\npgsql=# SET enable_indexscan TO OFF;\nSET\nTime: 0.534 ms\n\n\nnow test start!\nthe first row.\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq = '1';\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Seq Scan on zipcode (cost=0.00..1168.31 rows=1 width=207) (actual \ntime=0.029..88.099 rows=1 loops=1)\n Filter: (seq = 1)\n Total runtime: 88.187 ms\n(3 rows)\n\nTime: 89.392 ms pgsql=#\n\nthe first row with LIMIT\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq = '1' LIMIT 1; \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------ \nLimit (cost=0.00..1168.31 rows=1 width=207) (actual time=0.033..0.034 \nrows=1 loops=1) -> Seq Scan on zipcode (cost=0.00..1168.31 rows=1 \nwidth=207) (actual time=0.028..0.028 rows=1 loops=1) Filter: (seq = 1) \nTotal runtime: 0.111 ms (4 rows)\n\nTime: 1.302 ms pgsql=#\n\nthe last row,\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq = '47705'; QUERY \nPLAN \n------------------------------------------------------------------------------------------------------- \nSeq Scan on zipcode (cost=0.00..1168.31 rows=1 width=207) (actual \ntime=3.248..88.232 rows=1 loops=1) Filter: (seq = 47705) Total runtime: \n88.317 ms (3 rows)\n\nTime: 89.521 ms pgsql=#\n\nthe last row with LIMIT,\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq = '47705' LIMIT \n1; QUERY PLAN \n------------------------------------------------------------------------------------------------------------ \nLimit (cost=0.00..1168.31 rows=1 width=207) (actual time=3.254..3.254 \nrows=1 loops=1) -> Seq Scan on zipcode (cost=0.00..1168.31 rows=1 \nwidth=207) (actual time=3.248..3.248 rows=1 loops=1) Filter: (seq = \n47705) Total runtime: 3.343 ms (4 rows)\n\nTime: 4.583 ms pgsql=#\n\nWhen I using index scan, the result was almost same, that means, there \nwas no time difference, so i'll not mention about index scan.\n\nbut, sequence scan, as you see above result, there is big time \ndifference between using LIMIT and without using it. my question is, \nwhen we're searching with PK like SELECT * FROM table WHERE PK = 'xxx', \nwe already know there is only 1 row or not. so, pgsql should stop \nsearching when maching row was found, isn't it?\n\ni don't know exactly about mechanism how pgsql searching row its inside, \nso might be i'm thinking wrong way, any comments, advices, notes, \nanything will be appreciate to me!\n\n", "msg_date": "Wed, 01 Dec 2004 13:10:27 +0900", "msg_from": "=?UTF-8?B?7J6l7ZiE7ISx?= <[email protected]>", "msg_from_op": true, "msg_subject": "Using \"LIMIT\" is much faster even though, searching with PK." }, { "msg_contents": "=?UTF-8?B?7J6l7ZiE7ISx?= <[email protected]> writes:\n> but, sequence scan, as you see above result, there is big time \n> difference between using LIMIT and without using it.\n\nYou've got a table full of dead rows. 
Try VACUUM FULL ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Nov 2004 23:26:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using \"LIMIT\" is much faster even though, searching with PK. " }, { "msg_contents": "before test, I already executed VACUUM FULL.\nthis result show up after vacuum full.\n\n\nTom Lane 쓴 글:\n\n>=?UTF-8?B?7J6l7ZiE7ISx?= <[email protected]> writes:\n> \n>\n>>but, sequence scan, as you see above result, there is big time \n>>difference between using LIMIT and without using it.\n>> \n>>\n>\n>You've got a table full of dead rows. Try VACUUM FULL ...\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n\n\n\n\n\nbefore test, I already executed VACUUM FULL.\nthis result show up after vacuum full.\n\n\nTom Lane 쓴 글:\n\n=?UTF-8?B?7J6l7ZiE7ISx?= <[email protected]> writes:\n \n\nbut, sequence scan, as you see above result, there is big time \ndifference between using LIMIT and without using it.\n \n\n\nYou've got a table full of dead rows. Try VACUUM FULL ...\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 01 Dec 2004 13:38:40 +0900", "msg_from": "\"Hyun-Sung, Jang\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using \"LIMIT\" is much faster even though, searching" }, { "msg_contents": "Hyun-Sang,\n\n> before test, I already executed VACUUM FULL.\n> this result show up after vacuum full.\n\nReally? Your results really look like a bloated table. Can you run VACUUM \nFULL ANALYZE VERBOSE on the table and post the output?\n\n> When I using index scan, the result was almost same, that means, there\n> was no time difference, so i'll not mention about index scan.\n\nCan we see an index scan plan anyway? EXPLAIN ANALYZE?\n\nOh, and if this is a zip codes table, why are you using a sequence as the \nprimary key instead of just using the zip code? 
\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 30 Nov 2004 21:03:51 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using \"LIMIT\" is much faster even though, searching" }, { "msg_contents": "do you need all of verbose information??\nVACUUM FULL ANALYZE VERBOSE give me a lot of infomation,\nso i just cut zipcode parts.\n\n==start===============================================================================\nINFO: vacuuming \"public.zipcode\"\nINFO: \"zipcode\": found 0 removable, 47705 nonremovable row versions in \n572 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 76 to 136 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 27944 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n91 pages containing 8924 free bytes are potential move destinations.\nCPU 0.03s/0.00u sec elapsed 0.03 sec.\nINFO: index \"zipcode_pkey\" now contains 47705 row versions in 147 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.00 sec.\nINFO: \"zipcode\": moved 0 row versions, truncated 572 to 572 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.zipcode\"\nINFO: \"zipcode\": scanned 572 of 572 pages, containing 47705 live rows \nand 0 dead rows; 3000 rows in sample, 47705 estimated total rows\nINFO: free space map: 108 relations, 128 pages stored; 1760 total pages \nneeded\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB \nshared memory.\nVACUUM\npgsql=#\n==end===============================================================================\n\n\nUSING INDEX SCAN\n\n==start===============================================================================\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='1';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using zipcode_pkey on zipcode (cost=0.00..3.02 rows=1 \nwidth=55) (actual time=0.054..0.058 rows=1 loops=1)\n Index Cond: (seq = 1)\n Total runtime: 0.152 ms\n(3 rows)\n\npgsql=#\n\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='1' LIMIT 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.02 rows=1 width=55) (actual time=0.059..0.060 \nrows=1 loops=1)\n -> Index Scan using zipcode_pkey on zipcode (cost=0.00..3.02 rows=1 \nwidth=55) (actual time=0.054..0.054 rows=1 loops=1)\n Index Cond: (seq = 1)\n Total runtime: 0.158 ms\n(4 rows)\n\npgsql=#\n\n\nWHEN SELECT LAST ROW -----\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='47705';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using zipcode_pkey on zipcode (cost=0.00..3.02 rows=1 \nwidth=55) (actual time=0.054..0.059 rows=1 loops=1)\n Index Cond: (seq = 47705)\n Total runtime: 0.150 ms\n(3 rows)\n\npgsql=#\n\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='47705' LIMIT 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.02 rows=1 width=55) (actual time=0.057..0.057 \nrows=1 loops=1)\n -> Index Scan using zipcode_pkey on zipcode (cost=0.00..3.02 
rows=1 \nwidth=55) (actual time=0.052..0.052 rows=1 loops=1)\n Index Cond: (seq = 47705)\n Total runtime: 0.156 ms\n(4 rows)\n\npgsql=#\n==end===============================================================================\n\n\n\nUSING SEQUENCE SCAN\n\n==start===============================================================================\npgsql=# set enable_indexscan to off;\nSET\npgsql=#\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='1';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Seq Scan on zipcode (cost=0.00..1168.31 rows=1 width=55) (actual \ntime=0.032..109.934 rows=1 loops=1)\n Filter: (seq = 1)\n Total runtime: 110.021 ms\n(3 rows)\n\npgsql=#\n\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='1' LIMIT 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1168.31 rows=1 width=55) (actual time=0.035..0.035 \nrows=1 loops=1)\n -> Seq Scan on zipcode (cost=0.00..1168.31 rows=1 width=55) (actual \ntime=0.030..0.030 rows=1 loops=1)\n Filter: (seq = 1)\n Total runtime: 0.113 ms\n(4 rows)\n\npgsql=#\n\n\nWHEN SELECT LAST ROW -----\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='47705';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Seq Scan on zipcode (cost=0.00..1168.31 rows=1 width=55) (actual \ntime=4.048..110.232 rows=1 loops=1)\n Filter: (seq = 47705)\n Total runtime: 110.322 ms\n(3 rows)\n\npgsql=#\n\n\npgsql=# EXPLAIN ANALYZE select * from zipcode where seq='47705' LIMIT 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1168.31 rows=1 width=55) (actual time=4.038..4.038 \nrows=1 loops=1)\n -> Seq Scan on zipcode (cost=0.00..1168.31 rows=1 width=55) (actual \ntime=4.033..4.033 rows=1 loops=1)\n Filter: (seq = 47705)\n Total runtime: 4.125 ms\n(4 rows)\n\npgsql=#\n\n==end===============================================================================\n\n\nI just choose zipcode table for this test.\nnot only zipcode table but other table also give me same result.\n\nSELECT * FROM table_name WHERE PK = 'xxx'\n\nwas always slower than\n\nSELECT * FROM table_name WHERE PK = 'xxx' LIMIT 1\n\nwhen sequence scan .\n\ni think pgsql tring to find more than 1 row when query executed even if\nsearching condition is primary key.\n\n\nah, why i'm using sequence as PK instead of zip code is\nin korea, the small towns doesn't have it's own zipcode\nso they share other big city's.\nthat's why zip code can't be a primary key.\nactually, i'm not using sequence to find zipcode.\ni made it temporary for this test.\n\ni think there is nobody want to using sequence number to find zipcode,\ninstead of city name. :-)\n\n\nJosh Berkus 쓴 글:\n\n>Hyun-Sang,\n>\n> \n>\n>>before test, I already executed VACUUM FULL.\n>>this result show up after vacuum full.\n>> \n>>\n>\n>Really? Your results really look like a bloated table. Can you run VACUUM \n>FULL ANALYZE VERBOSE on the table and post the output?\n>\n> \n>\n>>When I using index scan, the result was almost same, that means, there\n>>was no time difference, so i'll not mention about index scan.\n>> \n>>\n>\n>Can we see an index scan plan anyway? EXPLAIN ANALYZE?\n>\n>Oh, and if this is a zip codes table, why are you using a sequence as the \n>primary key instead of just using the zip code? 
Can you run VACUUM \nFULL ANALYZE VERBOSE on the table and post the output?\n\n \n\nWhen I using index scan, the result was almost same, that means, there\nwas no time difference, so i'll not mention about index scan.\n \n\n\nCan we see an index scan plan anyway? EXPLAIN ANALYZE?\n\nOh, and if this is a zip codes table, why are you using a sequence as the \nprimary key instead of just using the zip code?", "msg_date": "Wed, 01 Dec 2004 15:03:31 +0900", "msg_from": "\"Hyun-Sung, Jang\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using \"LIMIT\" is much faster even though, searching" }, { "msg_contents": "On Wed, 2004-12-01 at 15:03 +0900, Hyun-Sung, Jang wrote:\n> \n> < lots of information about seq scan vs index scan >\n> \n\nHi,\n\nJust because it has an ID that is the largest in the set, does not mean\nit will be at the last position in the on-disk tables. And similarly,\nthe lowest numbered ID does not mean it will be at the beginning in the\non-disk structures.\n\nSo when you 'LIMIT 1' the sequential scan stops as soon as it has found\nthe first row that matches, but in the no LIMIT case with a sequential\nscan it will continue the scan to the end of the on-disk data.\n\nGiven that this column is unique, PostgreSQL could optimise this case\nand imply LIMIT 1 for all sequential scans on such criteria, but in the\nreal world the optimisation is usually going to come from an index - at\nleast it will for larger tables - since that's a component of how\nPostgreSQL is enforcing the unique constraint.\n\nRegards,\n\t\t\t\t\tAndrew McMillan.\n\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Chicken Little only has to be right once.\n-------------------------------------------------------------------------", "msg_date": "Wed, 01 Dec 2004 21:23:30 +1300", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using \"LIMIT\" is much faster even though, searching" }, { "msg_contents": "Hyun-Sung,\n\n> do you need all of verbose information??\n> VACUUM FULL ANALYZE VERBOSE give me a lot of infomation,\n> so i just cut zipcode parts.\n\nOh, sorry. I meant just \"VACUUM FULL ANALYZE VERBOSE zipcode\", not the whole \ndatabase. Should have been clearer.\n\n> ==start====================================================================\n>=========== INFO: vacuuming \"public.zipcode\"\n> INFO: \"zipcode\": found 0 removable, 47705 nonremovable row versions in\n> 572 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n\nOK, looks like you're clean.\n\n> I just choose zipcode table for this test.\n> not only zipcode table but other table also give me same result.\n>\n> SELECT * FROM table_name WHERE PK = 'xxx'\n>\n> was always slower than\n>\n> SELECT * FROM table_name WHERE PK = 'xxx' LIMIT 1\n>\n> when sequence scan .\n\nyeah? So? Stop using sequence scan! You've just demonstrated that, if you \ndon't force the planner to use sequence scan, things run at the same speed \nwith or without the LIMIT. 
So you're causing a problem by forcing the \nplanner into a bad plan.\n\nSee Andrew's explanation of why it works this way.\n\n> ah, why i'm using sequence as PK instead of zip code is\n> in korea, the small towns doesn't have it's own zipcode\n> so they share other big city's.\n> that's why zip code can't be a primary key.\n> actually, i'm not using sequence to find zipcode.\n> i made it temporary for this test.\n\nThat makes sense.\n\n--Josh\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 1 Dec 2004 12:25:42 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using \"LIMIT\" is much faster even though, searching" } ]
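Restating the thread's conclusion as a runnable check (a sketch against the poster's zipcode table, whose integer primary key is seq): the LIMIT only matters while the planner is forced away from the index, so the real fix is simply not to force it.

    -- Allow the planner to use the primary-key index again.
    RESET enable_indexscan;

    -- With the unique index available, both forms stop after the first
    -- matching row, so the LIMIT changes nothing measurable.
    EXPLAIN ANALYZE SELECT * FROM zipcode WHERE seq = 47705;
    EXPLAIN ANALYZE SELECT * FROM zipcode WHERE seq = 47705 LIMIT 1;

The enable_* settings are diagnostic switches; leaving enable_indexscan off penalizes every other query in the session, which is what produced the slow sequential scans shown above.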
[ { "msg_contents": "Hi!\n\nI am using PostgreSQL with a proprietary ERP software in Brazil. The \ndatabase have around 1.600 tables (each one with +/- 50 columns).\nMy problem now is the time that takes to restore a dump. My customer \ndatabase have arount 500mb (on the disk, not the dump file) and I am \nmaking the dump with pg_dump -Fc, my dumped file have 30mb. To make the \ndump, it's taking +/- 1,5 hours BUT to restore (using pg_restore ) it it \ntakes 4 - 5 hours!!!\n\nOur machine it's a Dell Server Power Edge 1600sc (Xeon 2,4Ghz, with 1GB \nmemory, 7200 RPM disk). I don't think that there is a machine problem \nbecause it's a server dedicated for the database and the cpu utilization \nduring the restore is around 30%.\n\nLooking on the lists arquives I found some messages about this and Tom \nLane was saying that then you have a lot of convertions the dump can \ndelay too much. 90% of the columns on my database are char columns and I \ndon't have large objects on the database. The restore is delaying too \nmuch because the conversion of the char columns ? How can I have a \nbetter performance on this restore?\n\nI need to find a solution for this because I am convincing customers \nthat are using SQL Server, DB2 and Oracle to change to PostgreSQL but \nthis customers have databases of 5GB!!! I am thinking that even with a \nbetter server, the restore will take 2 days!\n\nMy data:\nConectiva Linux 10 , Kernel 2.6.8\nPostgreSQL 7.4.6.\n\npostgresql.conf modified parameters (the other parameters are the default)\ntcpip_socket = true\nmax_connections = 30\nshared_buffers = 30000\nsort_mem = 4096 \nvacuum_mem = 8192\nmax_fsm_pages = 20000\nmax_fsm_relations = 1000\n\nRegards,\n\nRodrigo Carvalhaes\n", "msg_date": "Wed, 01 Dec 2004 09:16:58 -0200", "msg_from": "Rodrigo Carvalhaes <[email protected]>", "msg_from_op": true, "msg_subject": "pg_restore taking 4 hours!" }, { "msg_contents": "On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:\n> I need to find a solution for this because I am convincing customers\n> that are using SQL Server, DB2 and Oracle to change to PostgreSQL but\n> this customers have databases of 5GB!!! I am thinking that even with a\n> better server, the restore will take 2 days!\n>\n> My data:\n> Conectiva Linux 10 , Kernel 2.6.8\n> PostgreSQL 7.4.6.\n>\n> postgresql.conf modified parameters (the other parameters are the default)\n> tcpip_socket = true\n> max_connections = 30\n> shared_buffers = 30000\n> sort_mem = 4096\n> vacuum_mem = 8192\n> max_fsm_pages = 20000\n> max_fsm_relations = 1000\n\nCan you try bumping sort mem lot higher(basically whatever the machine can \nafford) so that index creation is faster? \n\nJust try setting sort mem for the restore session and see if it helps..\n\n Shridhar\n", "msg_date": "Wed, 1 Dec 2004 19:55:23 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pg_restore taking 4 hours!" }, { "msg_contents": "\n--- Shridhar Daithankar <__> wrote:\n\n> On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:\n> > I need to find a solution for this because I am convincing\n> customers\n> > that are using SQL Server, DB2 and Oracle to change to PostgreSQL\n> but\n> > this customers have databases of 5GB!!! 
I am thinking that even\n> with a\n> > better server, the restore will take 2 days!\n> >\n> > My data:\n> > Conectiva Linux 10 , Kernel 2.6.8\n> > PostgreSQL 7.4.6.\n> >\n> > postgresql.conf modified parameters (the other parameters are the\n> default)\n> > tcpip_socket = true\n> > max_connections = 30\n> > shared_buffers = 30000\n> > sort_mem = 4096\n> > vacuum_mem = 8192\n> > max_fsm_pages = 20000\n> > max_fsm_relations = 1000\n> \n> Can you try bumping sort mem lot higher(basically whatever the\n> machine can \n> afford) so that index creation is faster? \n> \n> Just try setting sort mem for the restore session and see if it\n> helps..\n> \n> Shridhar\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\nYes, indexing is probably the issue.\n\nYou can always ask them to report how long does it take to restore\ntheir M$-SQL, DB2 and Oracle from a scripting dump.\n\nI've been restoring DB2 for a looong time (on different architectures)\nand the main problem comes from indexing.\n\nAs an index is basically a dynamic structure that is created on the\nphysical data (the data stored on the table), what is normally saved is\nthe index DEFINITION, not the index itself, so this is recreated at\nrestore time.\n\nSome DB2 architectures (and M$-SQL, and Oracle, and Sybase, and others.\nothers) may have a backup tool that is capable of saving the index\ndata, but is almost never used, as the index space itself can grow well\nover the data size.\n\nI'll give one example: we have one DB2 on iSeries that runs around the\n70Gb of Data and Indexes. We do a full backup that occupies only 45Gb\nof Data and we do that in a little more than 1 hour because we only\nsave the index definitions. \n\nWe know for sure that this full backup takes something between 5 and 7\nhours because of the reindexing. I had this written down in the Restore\nProcedure Manual, so the user can't complain (they know that the\nprocedure will eventually restore the data and the full functionality).\n\nSo, make sure that your client knows of their restore times.\n\nOne small trick that can help you: \nFIRST restore the tables.\nTHEN restore the foreingn keys, the constraints and the triggers and\nprocedures.\nLAST restore the indexes and views.\nLATEST restore the security.\n\nThis way, if you have complicated views and indexes with a lot of info,\nthe procedure <<<may>>> be shorter.\n\nregards,\n\nR.\n", "msg_date": "Wed, 1 Dec 2004 07:19:17 -0800 (PST)", "msg_from": "\"Riccardo G. Facchini\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pg_restore taking 4 hours!" }, { "msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:\n>> I need to find a solution for this because I am convincing customers\n>> that are using SQL Server, DB2 and Oracle to change to PostgreSQL but\n>> this customers have databases of 5GB!!! I am thinking that even with a\n>> better server, the restore will take 2 days!\n\n> Can you try bumping sort mem lot higher(basically whatever the machine can \n> afford) so that index creation is faster? \n\nIt would be a good idea to bump up vacuum_mem as well. 
In current\nsources it's vacuum_mem (well actually maintenance_work_mem) that\ndetermines the speed of CREATE INDEX; I forget just how long that\nbehavior has been around, but 7.4.6 might do it too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Dec 2004 10:38:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pg_restore taking 4 hours! " }, { "msg_contents": "Rodrigo,\n\n> Our machine it's a Dell Server Power Edge 1600sc (Xeon 2,4Ghz, with 1GB\n> memory, 7200 RPM disk). I don't think that there is a machine problem\n> because it's a server dedicated for the database and the cpu utilization\n> during the restore is around 30%.\n\nIn addition to Tom and Shridhar's advice, a single IDE disk is simply going to \nmake restores slow. A 500MB data file copy on that disk, straight, would \ntake up to 15 min. If this is for your ISV application, you need to \nseriously re-think your hardware strategy; spending less on processors and \nmore on disks would be wise.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 1 Dec 2004 12:19:00 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pg_restore taking 4 hours!" }, { "msg_contents": "\nRodrigo Carvalhaes a �crit :\n\n> Hi!\n>\n> I am using PostgreSQL with a proprietary ERP software in Brazil. The \n> database have around 1.600 tables (each one with +/- 50 columns).\n> My problem now is the time that takes to restore a dump. My customer \n> database have arount 500mb (on the disk, not the dump file) and I am \n> making the dump with pg_dump -Fc, my dumped file have 30mb. To make \n> the dump, it's taking +/- 1,5 hours BUT to restore (using pg_restore ) \n> it it takes 4 - 5 hours!!!\n\nI have notice that fac and one way to improve the restore prefomances, \nis to avoid build indexes and checking the foreign key in the same step \nthan the restore.\nSo, as it is not possible to disable indexes and Foreign key, you have \nto drop them and recreate them once the restore step has finished. To do \nthat you should have a script to recreate the indexes and the Foreign \nKey afterward.\n\n>\n> Our machine it's a Dell Server Power Edge 1600sc (Xeon 2,4Ghz, with \n> 1GB memory, 7200 RPM disk). I don't think that there is a machine \n> problem because it's a server dedicated for the database and the cpu \n> utilization during the restore is around 30%.\n>\n> Looking on the lists arquives I found some messages about this and Tom \n> Lane was saying that then you have a lot of convertions the dump can \n> delay too much. 90% of the columns on my database are char columns and \n> I don't have large objects on the database. The restore is delaying \n> too much because the conversion of the char columns ? How can I have a \n> better performance on this restore?\n>\n> I need to find a solution for this because I am convincing customers \n> that are using SQL Server, DB2 and Oracle to change to PostgreSQL but \n> this customers have databases of 5GB!!! 
I am thinking that even with a \n> better server, the restore will take 2 days!\n>\n> My data:\n> Conectiva Linux 10 , Kernel 2.6.8\n> PostgreSQL 7.4.6.\n>\n> postgresql.conf modified parameters (the other parameters are the \n> default)\n> tcpip_socket = true\n> max_connections = 30\n> shared_buffers = 30000\n> sort_mem = 4096 vacuum_mem = 8192\n> max_fsm_pages = 20000\n> max_fsm_relations = 1000\n>\n> Regards,\n>\n> Rodrigo Carvalhaes\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Thu, 02 Dec 2004 16:46:12 +0100", "msg_from": "Thierry Missimilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": "Thierry Missimilly wrote:\n\n>\n> Rodrigo Carvalhaes a �crit :\n>\n>> Hi!\n>>\n>> I am using PostgreSQL with a proprietary ERP software in Brazil. The \n>> database have around 1.600 tables (each one with +/- 50 columns).\n>> My problem now is the time that takes to restore a dump. My customer \n>> database have arount 500mb (on the disk, not the dump file) and I am \n>> making the dump with pg_dump -Fc, my dumped file have 30mb. To make \n>> the dump, it's taking +/- 1,5 hours BUT to restore (using pg_restore \n>> ) it it takes 4 - 5 hours!!!\n>\n>\n> I have notice that fac and one way to improve the restore prefomances, \n> is to avoid build indexes and checking the foreign key in the same \n> step than the restore.\n> So, as it is not possible to disable indexes and Foreign key, you have \n> to drop them and recreate them once the restore step has finished. To \n> do that you should have a script to recreate the indexes and the \n> Foreign Key afterward.\n>\nThere are a couple of things you can do.\n\n1. Turn off Fsync for the restore\n2. Restore in three phases:\n\n 1. Schema without constraints or indexes\n 2. Restore data\n 3. Apply rest of schema with constraints and indexes\n\n3. Increase the number of transaction logs.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n>>\n>> Our machine it's a Dell Server Power Edge 1600sc (Xeon 2,4Ghz, with \n>> 1GB memory, 7200 RPM disk). I don't think that there is a machine \n>> problem because it's a server dedicated for the database and the cpu \n>> utilization during the restore is around 30%.\n>>\n>> Looking on the lists arquives I found some messages about this and \n>> Tom Lane was saying that then you have a lot of convertions the dump \n>> can delay too much. 90% of the columns on my database are char \n>> columns and I don't have large objects on the database. The restore \n>> is delaying too much because the conversion of the char columns ? How \n>> can I have a better performance on this restore?\n>>\n>> I need to find a solution for this because I am convincing customers \n>> that are using SQL Server, DB2 and Oracle to change to PostgreSQL but \n>> this customers have databases of 5GB!!! 
I am thinking that even with \n>> a better server, the restore will take 2 days!\n>>\n>> My data:\n>> Conectiva Linux 10 , Kernel 2.6.8\n>> PostgreSQL 7.4.6.\n>>\n>> postgresql.conf modified parameters (the other parameters are the \n>> default)\n>> tcpip_socket = true\n>> max_connections = 30\n>> shared_buffers = 30000\n>> sort_mem = 4096 vacuum_mem = 8192\n>> max_fsm_pages = 20000\n>> max_fsm_relations = 1000\n>>\n>> Regards,\n>>\n>> Rodrigo Carvalhaes\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 02 Dec 2004 08:53:00 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": "On Wed, 2004-12-01 at 09:16 -0200, Rodrigo Carvalhaes wrote:\n> \n> I am using PostgreSQL with a proprietary ERP software in Brazil. The \n> database have around 1.600 tables (each one with +/- 50 columns).\n\n...\n\n> max_fsm_pages = 20000\n> max_fsm_relations = 1000\n\nHi,\n\nI doubt that this will improve your pg_restore performance, but if you\nhave 1600 tables in the database then you very likely want to increase\nthe above two settings.\n\nIn general max_fsm_relations should be more than the total number of\ntables across all databases in a given installation. The best way to\nset these is to do a \"VACUUM VERBOSE\", which will print the appropriate\nminimum numbers at the end of the run, along with the current setting.\n\nCheers,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Never trust a computer you can't repair yourself.\n-------------------------------------------------------------------------", "msg_date": "Fri, 03 Dec 2004 06:58:15 +1300", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": "Hi !\n\nThanks for the lots of tips that I received on this matter.\n\nSome points:\n\n1. I bumped the sort_mem and vaccum_mem to 202800 (200mb each) and the \nperformance was quite the same , the total difference was 10 minutes\n2. I made the restore without the index and the total time was 3 hours \nso, I don't think that the botle neck is the index creation\n3. I changed my max_fsm_pages to 30000 and max_fsm_relations = 2000 as \nwas recommended on the vacuum analyze but I had no significante change \non the performance.\n4. I made the backup with pg_dump -Fc and -Ft . 
The performance of -Ft \nwas better (around 10%), maybe because the data it's already uncompressed.\n\nI am thinking that the key point on this delay is the converstions from \nchar fields because this database is full of char fields, see below one \nstructure of one table\n\nThere is something more that I can try to improve this performance?\n\nCheers (and thanks for all the oppinions)\n\nRodrigo Carvalhaes\n\ndadosadv=# \\d sb1010\n Table \n\"public.sb1010\"\n Column | Type \n| Modifiers\n\n------------+------------------+---------------------------------------------------------------------------------------------\n----------------\n b1_filial | character(2) | not null default ' '::bpchar\n b1_cod | character(15) | not null default ' '::bpchar\n b1_desc | character(30) | not null default \n' '::bpchar\n b1_tipo | character(2) | not null default ' '::bpchar\n b1_codite | character(27) | not null default \n' '::bpchar\n b1_um | character(2) | not null default ' '::bpchar\n b1_locpad | character(2) | not null default ' '::bpchar\n b1_grupo | character(4) | not null default ' '::bpchar\n b1_picm | double precision | not null default 0.0\n b1_ipi | double precision | not null default 0.0\n b1_posipi | character(10) | not null default ' '::bpchar\n b1_especie | double precision | not null default 0.0\n b1_ex_ncm | character(3) | not null default ' '::bpchar\n b1_ex_nbm | character(3) | not null default ' '::bpchar\n b1_aliqiss | double precision | not null default 0.0\n b1_codiss | character(8) | not null default ' '::bpchar\n b1_te | character(3) | not null default ' '::bpchar\n b1_ts | character(3) | not null default ' '::bpchar\n b1_picmret | double precision | not null default 0.0\n b1_picment | double precision | not null default 0.0\n b1_impzfrc | character(1) | not null default ' '::bpchar\n b1_bitmap | character(8) | not null default ' '::bpchar\n b1_segum | character(2) | not null default ' '::bpchar\n b1_conv | double precision | not null default 0.0\n b1_tipconv | character(1) | not null default ' '::bpchar\n b1_alter | character(15) | not null default ' '::bpchar\n b1_qe | double precision | not null default 0.0\n b1_prv1 | double precision | not null default 0.0\n b1_emin | double precision | not null default 0.0\n b1_custd | double precision | not null default 0.0\n b1_mcustd | character(1) | not null default ' '::bpchar\n b1_uprc | double precision | not null default 0.0\n b1_ucom | character(8) | not null default ' '::bpchar\n b1_peso | double precision | not null default 0.0\n b1_pesob | double precision | not null default 0.0\n b1_estseg | double precision | not null default 0.0\n b1_estfor | character(3) | not null default ' '::bpchar\n b1_forprz | character(3) | not null default ' '::bpchar\n b1_pe | double precision | not null default 0.0\n b1_tipe | character(1) | not null default ' '::bpchar\n b1_le | double precision | not null default 0.0\n b1_lm | double precision | not null default 0.0\n b1_conta | character(20) | not null default ' \n'::bpchar\n b1_cc | character(9) | not null default ' '::bpchar\n b1_toler | double precision | not null default 0.0\n b1_itemcc | character(9) | not null default ' '::bpchar\n b1_familia | character(1) | not null default ' '::bpchar\n b1_proc | character(6) | not null default ' '::bpchar\n b1_lojproc | character(2) | not null default ' '::bpchar\n b1_qb | double precision | not null default 0.0\n b1_apropri | character(1) | not null default ' '::bpchar\n b1_fantasm | character(1) | not null default ' '::bpchar\n b1_tipodec | 
character(1) | not null default ' '::bpchar\n b1_origem | character(2) | not null default ' '::bpchar\n b1_clasfis | character(2) | not null default ' '::bpchar\n b1_datref | character(8) | not null default ' '::bpchar\n b1_rastro | character(1) | not null default ' '::bpchar\n b1_urev | character(8) | not null default ' '::bpchar\n b1_foraest | character(1) | not null default ' '::bpchar\n b1_comis | double precision | not null default 0.0\n b1_mono | character(1) | not null default ' '::bpchar\n b1_mrp | character(1) | not null default ' '::bpchar\n b1_perinv | double precision | not null default 0.0\n b1_dtrefp1 | character(8) | not null default ' '::bpchar\n b1_grtrib | character(3) | not null default ' '::bpchar\n b1_notamin | double precision | not null default 0.0\n b1_prvalid | double precision | not null default 0.0\n b1_numcop | double precision | not null default 0.0\n b1_contsoc | character(1) | not null default ' '::bpchar\n b1_conini | character(8) | not null default ' '::bpchar\n b1_irrf | character(1) | not null default ' '::bpchar\n b1_codbar | character(15) | not null default ' '::bpchar\n b1_grade | character(1) | not null default ' '::bpchar\n b1_formlot | character(3) | not null default ' '::bpchar\n b1_localiz | character(1) | not null default ' '::bpchar\n b1_fpcod | character(2) | not null default ' '::bpchar\n b1_operpad | character(2) | not null default ' '::bpchar\n b1_contrat | character(1) | not null default ' '::bpchar\n b1_desc_p | character(6) | not null default ' '::bpchar\n b1_desc_gi | character(6) | not null default ' '::bpchar\n b1_desc_i | character(6) | not null default ' '::bpchar\n b1_vlrefus | double precision | not null default 0.0\n b1_import | character(1) | not null default ' '::bpchar\n b1_opc | character(80) | not null default '\n '::bpchar\n b1_anuente | character(1) | not null default ' '::bpchar\n b1_codobs | character(6) | not null default ' '::bpchar\n b1_sitprod | character(2) | not null default ' '::bpchar\n b1_fabric | character(20) | not null default ' \n'::bpchar\n b1_modelo | character(15) | not null default ' '::bpchar\n b1_setor | character(2) | not null default ' '::bpchar\n b1_balanca | character(1) | not null default ' '::bpchar\n b1_tecla | character(3) | not null default ' '::bpchar\n b1_prodpai | character(15) | not null default ' '::bpchar\n b1_tipocq | character(1) | not null default ' '::bpchar\n b1_solicit | character(1) | not null default ' '::bpchar\n b1_grupcom | character(6) | not null default ' '::bpchar\n b1_numcqpr | double precision | not null default 0.0\n b1_contcqp | double precision | not null default 0.0\n b1_revatu | character(3) | not null default ' '::bpchar\n b1_inss | character(1) | not null default ' '::bpchar\n b1_codemb | character(20) | not null default ' \n'::bpchar\n b1_especif | character(80) | not null default '\n '::bpchar\n b1_mat_pri | character(20) | not null default ' \n'::bpchar\n b1_redinss | double precision | not null default 0.0\n b1_nalncca | character(7) | not null default ' '::bpchar\n b1_aladi | character(3) | not null default ' '::bpchar\n b1_nalsh | character(8) | not null default ' '::bpchar\n b1_redirrf | double precision | not null default 0.0\n b1_tab_ipi | character(2) | not null default ' '::bpchar\n b1_grudes | character(3) | not null default ' '::bpchar\n b1_datasub | character(8) | not null default ' '::bpchar\n b1_pcsll | double precision | not null default 0.0\n b1_pcofins | double precision | not null default 0.0\n b1_ppis | double precision | not null default 0.0\n 
b1_mtbf | double precision | not null default 0.0\n b1_mttr | double precision | not null default 0.0\n b1_flagsug | character(1) | not null default ' '::bpchar\n b1_classve | character(1) | not null default ' '::bpchar\n b1_midia | character(1) | not null default ' '::bpchar\n b1_midia | character(1) | not null default ' '::bpchar\n b1_qtmidia | double precision | not null default 0.0\n b1_vlr_ipi | double precision | not null default 0.0\n b1_envobr | character(1) | not null default ' '::bpchar\n b1_qtdser | double precision | not null default 0.0\n b1_serie | character(20) | not null default ' \n'::bpchar\n b1_faixas | double precision | not null default 0.0\n b1_nropag | double precision | not null default 0.0\n b1_isbn | character(10) | not null default ' '::bpchar\n b1_titorig | character(50) | not null default \n' '::bpchar\n b1_lingua | character(20) | not null default ' \n'::bpchar\n b1_edicao | character(3) | not null default ' '::bpchar\n b1_obsisbn | character(40) | not null default \n' '::bpchar\n b1_clvl | character(9) | not null default ' '::bpchar\n b1_ativo | character(1) | not null default ' '::bpchar\n b1_pesbru | double precision | not null default 0.0\n b1_tipcar | character(6) | not null default ' '::bpchar\n b1_vlr_icm | double precision | not null default 0.0\n b1_vlrselo | double precision | not null default 0.0\n b1_codnor | character(3) | not null default ' '::bpchar\n b1_corpri | character(6) | not null default ' '::bpchar\n b1_corsec | character(6) | not null default ' '::bpchar\n b1_nicone | character(15) | not null default ' '::bpchar\n b1_atrib1 | character(6) | not null default ' '::bpchar\n b1_atrib2 | character(6) | not null default ' '::bpchar\n b1_atrib3 | character(6) | not null default ' '::bpchar\n b1_regseq | character(6) | not null default ' '::bpchar\n b1_ucalstd | character(8) | not null default ' '::bpchar\n b1_cpotenc | character(1) | not null default ' '::bpchar\n b1_potenci | double precision | not null default 0.0\n b1_qtdacum | double precision | not null default 0.0\n b1_qtdinic | double precision | not null default 0.0\n b1_requis | character(1) | not null default ' '::bpchar\n d_e_l_e_t_ | character(1) | not null default ' '::bpchar\n r_e_c_n_o_ | double precision | not null default 0.0\nIndexes:\n \"sb1010_pkey\" primary key, btree (r_e_c_n_o_)\n \"sb10101\" btree (b1_filial, b1_cod, r_e_c_n_o_, d_e_l_e_t_)\n \"sb10102\" btree (b1_filial, b1_tipo, b1_cod, r_e_c_n_o_, d_e_l_e_t_)\n \"sb10103\" btree (b1_filial, b1_desc, b1_cod, r_e_c_n_o_, d_e_l_e_t_)\n \"sb10104\" btree (b1_filial, b1_grupo, b1_cod, r_e_c_n_o_, d_e_l_e_t_)\n \"sb10105\" btree (b1_filial, b1_codbar, r_e_c_n_o_, d_e_l_e_t_)\n \"sb10106\" btree (b1_filial, b1_proc, r_e_c_n_o_, d_e_l_e_t_)\n\n\nShridhar Daithankar wrote:\n\n>On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:\n> \n>\n>>I need to find a solution for this because I am convincing customers\n>>that are using SQL Server, DB2 and Oracle to change to PostgreSQL but\n>>this customers have databases of 5GB!!! 
I am thinking that even with a\n>>better server, the restore will take 2 days!\n>>\n>>My data:\n>>Conectiva Linux 10 , Kernel 2.6.8\n>>PostgreSQL 7.4.6.\n>>\n>>postgresql.conf modified parameters (the other parameters are the default)\n>>tcpip_socket = true\n>>max_connections = 30\n>>shared_buffers = 30000\n>>sort_mem = 4096\n>>vacuum_mem = 8192\n>>max_fsm_pages = 20000\n>>max_fsm_relations = 1000\n>> \n>>\n>\n>Can you try bumping sort mem lot higher(basically whatever the machine can \n>afford) so that index creation is faster? \n>\n>Just try setting sort mem for the restore session and see if it helps..\n>\n> Shridhar\n>\n> \n>\n", "msg_date": "Sun, 05 Dec 2004 17:43:36 -0200", "msg_from": "Rodrigo Carvalhaes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": "On P, 2004-12-05 at 21:43, Rodrigo Carvalhaes wrote:\n> Hi !\n> \n> Thanks for the lots of tips that I received on this matter.\n> \n...\n> There is something more that I can try to improve this performance?\n\ncheck the speed of your ide drive. maybe tweak some params with\n/sbin/hdparm . Sometimes the defaults result in 2MB/sec r/w speeds\n(instead on(30-70 MB/sec)\n\n------------\nHannu\n\n", "msg_date": "Tue, 07 Dec 2004 13:08:38 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": ">>>>> \"RC\" == Rodrigo Carvalhaes <[email protected]> writes:\n\nRC> Hi!\nRC> I am using PostgreSQL with a proprietary ERP software in Brazil. The\nRC> database have around 1.600 tables (each one with +/- 50 columns).\nRC> My problem now is the time that takes to restore a dump. My customer\nRC> database have arount 500mb (on the disk, not the dump file) and I am\nRC> making the dump with pg_dump -Fc, my dumped file have 30mb. To make\nRC> the dump, it's taking +/- 1,5 hours BUT to restore (using pg_restore )\nRC> it it takes 4 - 5 hours!!!\n\nI regularly dump a db that is compressed at over 2Gb. Last time I did\na restore on the production box it took about 3 hours. Restoring it\ninto a development box with a SATA RAID0 config takes like 7 hours or\nso.\n\nThe biggest improvement in speed to restore time I have discovered is\nto increase the checkpoint segments. I bump mine to about 50. And\nmoving the pg_xlog to a separate physical disk helps a lot there, too.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 13 Dec 2004 11:37:28 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": "Vivek,\n\n> The biggest improvement in speed to restore time I have discovered is\n> to increase the checkpoint segments.  I bump mine to about 50.  And\n> moving the pg_xlog to a separate physical disk helps a lot there, too.\n\nDon't leave it at 50; if you have the space on your log array, bump it up to \n256.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 13 Dec 2004 10:25:54 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": "Vivek,\n\n> Do I need a correspondingly large checkpoint timeout then? 
Or does\n> that matter much?\n\nYes, you do.\n\n> And does this advice apply if the pg_xlog is on the same RAID partition\n> (mine currently is not, but perhaps will be in the future)\n\nNot as much, but it's still a good idea to serialize the load. With too few \nsegments, you get a pattern like:\n\nFill up segments\nWrite to database\nRecycle segments\nFill up segments\nWrite to database\nRecycle segments \netc.\n\nCompared to doing it in one long run of a single cycle, considerble efficiency \nis lost. With a proper 2-array setup, the segments become like a write \nbuffer for the database, and you want that buffer as large as you can afford \nin order to prevent buffer cycling from interrupting database writes.\n\nBTW, for members of the studio audience, checkpoint_segments of 256 is about \n8GB.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 13 Dec 2004 10:43:28 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours!" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Not as much, but it's still a good idea to serialize the load. With too few\n> segments, you get a pattern like:\n\n> Fill up segments\n> Write to database\n> Recycle segments\n> Fill up segments\n> Write to database\n> Recycle segments \n> etc.\n\nActually I think the problem is specifically that you get checkpoints\ntoo often if either checkpoint_timeout or checkpoint_segments is too\nsmall. A checkpoint is expensive both directly (the I/O it causes)\nand indirectly (because the first update of a particular data page\nafter a checkpoint causes the whole page to be logged in WAL). So\nkeeping them spread well apart is a Good Thing, as long as you\nunderstand that a wider checkpoint spacing implies a longer time to\nrecover if you do suffer a crash.\n\nI think 8.0's bgwriter will considerably reduce the direct cost of\na checkpoint (since not so many pages will be dirty when the checkpoint\nhappens) but it won't do a thing for the indirect cost.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Dec 2004 14:21:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours! " }, { "msg_contents": "Hi,\n\nSorry, I didn't catch the original message, so I'm not sure if the original \nposter mentioned the postgres version that he's using.\n\nI just thought that I'd contribute this observation.\n\nI have a DB that takes several hours to restore under 7,1 but completes in \naround 10 minutes on 7.4. The main reason for this is that by default the \n7.4 restore delays creation of PKs and indexes until after the data load, \nwhereas 7.1 doesn't.\n\nI noticed that 7.1 has a re-arrange option that reportedly delays the pks \nand indexes, so presumably this would have alleviated the problem.\n\nI also noticed that a dumpfile created under 7.1 took hours to restore using \n7.4 to load it as the order remained in the default of 7.1.\n\nI don'tknow when the default behaviour changed, but I get the feeling it may \nhave been with 7.4.\n\nHTH\nIain \n\n", "msg_date": "Tue, 14 Dec 2004 10:54:17 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore taking 4 hours! " } ]
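A minimal sketch of the restore-tuning settings suggested in this thread, with example values only; size them to the RAM and pg_xlog disk space actually available, and keep in mind the trade-off noted above that wider checkpoint spacing means longer crash recovery:

# postgresql.conf (7.4/8.0-era parameter names; reload or restart the server to apply)
checkpoint_segments = 64   # suggestions above range from 50 to 256; 256 is roughly 8GB of WAL
checkpoint_timeout = 1800  # seconds; keeps checkpoints far apart during the bulk load
sort_mem = 65536           # KB; large sort memory speeds up index re-creation after the load

-- sort_mem can also be raised for just the restoring session:
SET sort_mem = 65536;

Moving pg_xlog to its own spindle, as suggested above, helps independently of these settings.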
[ { "msg_contents": "Just as an update, We installed RHE Update4 beta kernel on a box and it\nseems to have solved our issues.\n\nWoody \n\n----------------------------------------\niGLASS Networks\n211-A S. Salem St\nApex NC 27502\n(919) 387-3550 x813\nwww.iglass.net\n", "msg_date": "Wed, 1 Dec 2004 08:29:55 -0500", "msg_from": "\"George Woodring\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance and IOWait" } ]
[ { "msg_contents": "I have a query that fetches information from a log, based on an indexed \ncolumn. The timestamp in the table is with time zone, and the server \ntime zone is not GMT. However, i want all of the timestamps for a \nparticular day in GMT. If i simply use a date constant, the index is \nused, but the incorrect rows are fetched, since the date is converted \nto a timestamp in the server's time zone. When i cast that date to a \nGMT date, the index is no longer used. Is there some better way to \nwrite the query so that the planner will use the index? I have \nsimplied the queries below to demonstrate the problem i'm having. \nThanks for any advice.\n\n\nSLOW:\nbasement=# select count(*) from redir_log\nbasement-# where redir_timestamp >= '10/14/2004'::timestamp without \ntime zone at time zone 'GMT';\n count\n-------\n 33696\n(1 row)\n\nbasement=# explain analyze\nbasement-#\tselect count(*) from redir_log\nbasement-#\twhere redir_timestamp >= '10/14/2004'::timestamp without \ntime zone at time zone 'GMT';\n\n Aggregate (cost=223093.00..223093.00 rows=1 width=0) (actual \ntime=5036.975..5036.976 rows=1 loops=1)\n -> Seq Scan on redir_log (cost=0.00..219868.95 rows=1289621 \nwidth=0) (actual time=4941.127..5006.133 rows=33696 loops=1)\n Filter: (redir_timestamp >= timezone('GMT'::text, '2004-10-14 \n00:00:00'::timestamp without time zone))\n Total runtime: 5037.023 ms\n\n\nFAST:\nbasement=# select count(*) from redir_log where redir_timestamp >= \n'10/14/2004';\n count\n-------\n 33072\n(1 row)\n\nbasement=# explain analyze select count(*) from redir_log where \nredir_timestamp >= '10/14/2004';\n \nQUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--\n Aggregate (cost=70479.79..70479.79 rows=1 width=0) (actual \ntime=84.771..84.772 rows=1 loops=1)\n -> Index Scan using redir_log_timestamp on redir_log \n(cost=0.00..70404.02 rows=30308 width=0) (actual time=0.022..55.337 \nrows=33072 loops=1)\n Index Cond: (redir_timestamp >= '2004-10-14 \n00:00:00-06'::timestamp with time zone)\n Total runtime: 84.823 ms\n(4 rows)\n\n\n--------------------------------------------\nMobyGames\nhttp://www.mobygames.com\nThe world's largest and most comprehensive�\ngaming database project", "msg_date": "Wed, 1 Dec 2004 09:46:43 -0700", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "query with timestamp not using index" }, { "msg_contents": "Brian Hirt wrote:\n> I have a query that fetches information from a log, based on an indexed \n> column. The timestamp in the table is with time zone, and the server \n> time zone is not GMT. However, i want all of the timestamps for a \n> particular day in GMT. If i simply use a date constant, the index is \n> used, but the incorrect rows are fetched, since the date is converted \n> to a timestamp in the server's time zone. When i cast that date to a \n> GMT date, the index is no longer used. Is there some better way to \n> write the query so that the planner will use the index? I have \n> simplied the queries below to demonstrate the problem i'm having. \n> Thanks for any advice.\n> \n> \n> SLOW:\n> basement=# select count(*) from redir_log\n> basement-# where redir_timestamp >= '10/14/2004'::timestamp without \n> time zone at time zone 'GMT';\n\nNot quite what's wanted. 
Try keeping things as a timestamp with timezone \n(you can add a timestamp to a date):\n\nSELECT count(*) FROM redir_log\nWHERE redir_timestamp BETWEEN '2004-10-14+00'::timestamptz AND \nCURRENT_TIMESTAMP;\n\nPutting two bounds on the range can also help index usage.\n\nIn actual fact, since you're comparing to a timestamp and not a date, \nI'd personally supply a valid timestamptz: '2004-10-14 00:00:00+00'\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 01 Dec 2004 17:38:28 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query with timestamp not using index" }, { "msg_contents": "Brian Hirt <[email protected]> writes:\n> select count(*) from redir_log\n> where redir_timestamp >= '10/14/2004'::timestamp without time zone at time zone 'GMT';\n\nThat seems like the hard way to express a timestamp constant. Why not\n\nselect count(*) from redir_log\nwhere redir_timestamp >= '10/14/2004 00:00 GMT';\n\n(FWIW, though, the AT TIME ZONE construct *should* have been collapsed\nto a constant; 8.0 fixes this.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Dec 2004 15:06:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query with timestamp not using index " }, { "msg_contents": "On Dec 1, 2004, at 1:06 PM, Tom Lane wrote:\n>\n> That seems like the hard way to express a timestamp constant. Why not\n>\n\nI realized after i sent this message that i might get this responese. \nI should have mentioned this was from within a stored pl/pgsql \nfunction, and the date wasn't a constant, but a variable. I was just \ntrying to simplify the example.\n\nit's more like:\n\ndeclare\n\tfoo_date date;\nbegin\n\tselect some_date into foo_date from some_table where something = \nsomething_else;\n\t\n\tselect blah from redir_log where redir_timestamp >= \nfoo_date::timestamp without time zone at time zone 'GMT';\n\tetc / etc / etc\nend;\n\n> select count(*) from redir_log\n> where redir_timestamp >= '10/14/2004 00:00 GMT';\n>\n> (FWIW, though, the AT TIME ZONE construct *should* have been collapsed\n> to a constant; 8.0 fixes this.)\n>\n> \t\t\tregards, tom lane\n>\n--------------------------------------------\nMobyGames\nhttp://www.mobygames.com\nThe world's largest and most comprehensive \ngaming database project", "msg_date": "Wed, 1 Dec 2004 15:09:04 -0700", "msg_from": "Brian Hirt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query with timestamp not using index " }, { "msg_contents": "Brian Hirt <[email protected]> writes:\n> it's more like:\n\n> declare\n> \tfoo_date date;\n> begin\n> \tselect some_date into foo_date from some_table where something = something_else;\n> \tselect blah from redir_log where redir_timestamp >= foo_date::timestamp without time zone at time zone 'GMT';\n> \tetc / etc / etc\n\nAh. In that case you're going to have trouble anyway with the planner\nhaving no clue what the effective value of the comparison expression is,\nbecause it'll certainly not be able to fold the plpgsql variable to a\nconstant. I agree with the other person who suggested faking it out\nby adding a dummy other-side-of-the-range constraint, perhaps\n\tAND redir_timestamp <= now()\n(or whatever upper bound is okay for your purposes). This should coax\nit into using an indexscan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Dec 2004 17:16:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query with timestamp not using index " } ]
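A short sketch combining the suggestions above, against the same redir_log table; the explicit timestamptz constant sidesteps the AT TIME ZONE expression that pre-8.0 planners do not fold to a constant, and now() is only one example of an acceptable upper bound for the plpgsql case:

-- literal date: supply a complete timestamptz constant
SELECT count(*)
FROM redir_log
WHERE redir_timestamp >= '2004-10-14 00:00 GMT';

-- fragment for inside the plpgsql function, where foo_date is the variable from the
-- example above: add a dummy upper bound so the planner can still pick the index
SELECT count(*)
FROM redir_log
WHERE redir_timestamp >= foo_date::timestamp AT TIME ZONE 'GMT'
  AND redir_timestamp <= now();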
[ { "msg_contents": "Folks,\n\nA lot of people have been having a devilish time with Dell hardware lately. \nIt seems like the quality control just isn't there on the Dell servers.\n\nThing is, some companies are required to use 1st-tier or at least 2nd-tier \nvendors for hardware; they won't home-build. For those people, what vendors \ndo others on this list recommend? What have been your good/bad experiences?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 1 Dec 2004 14:24:12 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Alternatives to Dell?" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> A lot of people have been having a devilish time with Dell hardware lately. \n> It seems like the quality control just isn't there on the Dell servers.\n\nWas the quality ever there with Dell?\n\n> Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n> vendors for hardware; they won't home-build. For those people, what vendors \n> do others on this list recommend? What have been your good/bad experiences?\n\nI use Supermicro and have liked them. They make motherboards and systems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 1 Dec 2004 17:25:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> A lot of people have been having a devilish time with Dell hardware\n> lately. It seems like the quality control just isn't there on the\n> Dell servers.\n> \n> Thing is, some companies are required to use 1st-tier or at least\n> 2nd-tier vendors for hardware; they won't home-build. For those\n> people, what vendors do others on this list recommend? What have\n> been your good/bad experiences?\n\nMy experience with Dell is they are not reliable as well.\n\nHalf way between the big guys and home built. I've had good success \nwith Monarch Computers. They've had ads in Linux Journal for a while \nand a couple of their boxes have been reviewed there. As a matter of \nfact, in the December issue, they did a review of a dual operton from \nMonarch.\n\nhttp://www.monarchcomputer.com/\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Wed, 01 Dec 2004 17:38:49 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "On Wed, 1 Dec 2004 14:24:12 -0800\nJosh Berkus <[email protected]> wrote:\n\n> Folks,\n> \n> A lot of people have been having a devilish time with Dell hardware\n> lately. It seems like the quality control just isn't there on the\n> Dell servers.\n\n I believe I had expressed some problems with Dell in the past, but\n it really isn't a quality control issue that I have seen. It is more\n of a Linux support issue. Lately I've been running into problems with\n getting particular parts of system working under Linux (raid cards, \n SATA drives, Ethernet cards) or I can get it working, but it\n performs badly ( PERC cards vs say a Mylex card ). \n\n I think it's more of a system design issue ( wrt Linux use ) rather\n than a quality issue. 
\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Wed, 1 Dec 2004 16:41:30 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "On Wed, 1 Dec 2004 14:24:12 -0800, Josh Berkus <[email protected]> wrote:\n> Folks,\n> \n> A lot of people have been having a devilish time with Dell hardware lately.\n> It seems like the quality control just isn't there on the Dell servers.\n\nWhich is a shame, because I *still* drool over a rack full of those\nfront bevels with the bright blue LEDs. :)\n\n> \n> Thing is, some companies are required to use 1st-tier or at least 2nd-tier\n> vendors for hardware; they won't home-build. For those people, what vendors\n> do others on this list recommend? What have been your good/bad experiences?\n\nI'm using an HP DL585 quad Opteron with 16G RAM as a development box. \nIt's working great. ;)\n\nSeriously though, I never really liked HP (or worse, Compaq) hardware\nbefore, but this box seems really well built, and I've yet to see a\n'D' in the S column in top with the SA-6404/256 RAID card.\n\nIf all goes as well as it has so far on this testbed I'll be deploying\non a Slony-1 clustered set of 3 of these bad boys with 32G RAM each. \nDollar-for-dollar, we're saving 90% (that's right, an order of\nmagnitude) going this route, PG with linux-amd64 on HP/Opterons, as\nopposed to the E20K monster that was originally spec'd out.\n\nMail me direct if you want the full spec list on this beast. And if\nthere is a ready-made benchmark anyone would like me to run, just drop\nme a note.\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Wed, 1 Dec 2004 17:48:26 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "> \n> Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n> vendors for hardware; they won't home-build. For those people, what vendors \n> do others on this list recommend? What have been your good/bad experiences?\n\nWell this is almost as bad as vi/emacs ;) but I have had good experience \nwith Compaq Proliant (now HP) servers. I have also \"heard\" good things \nabout IBM.\n\nIBM actually sells a reasonable costing Opteron server as well.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Wed, 01 Dec 2004 15:13:04 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Jeff,\n\n> I'm curious about the problem's you're seeing with Dell servers since\n> we're about to buy some 750s, 2850s and 1850s.\n\nThe problems I've been dealing with have been on the *650s. They're the ones \nyou name.\n\n> FYI ... the 750s, 1850s and 2850s use Intel chipsets (E7520 on 1850s\n> and 2850s, 7210 on 750s), Intel NICs, and come only with LSI Logic\n> RAID controllers. 
It looks like Dell has dropped the\n> Broadcom/ServerWorks and Adaptec junk.\n\nI don't know if Vivek is on this list; I think he just had a critical failure \nwith one of the new Dells with LSI.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 1 Dec 2004 15:35:23 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Off-list Re: Alternatives to Dell?" }, { "msg_contents": "\n\n\n\nI recommend IBM equipment, but in the spirit of caveat emptor I should let\nyou know I work for IBM... :-)\n\nSeriously, I've been using IBM laptops and desktops for about 5 years, even\nbefore I started working for them. They tend to be a little more expensive\nthan Dell, but I think they use beefier components and don't cut the specs\nquite as close as Dell does. IBM gear is designed more for industrial use\nthan home computing, which is reflected in the quality (and the price).\n\nIBM just released a new series of PowerPC-based servers that are\nspecifically designed to run Linux. They're at the higher end, but from\nwhat I understand, they provide much more bang for the buck than\nIntel-based servers.\n\nI hope this helps,\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nSenior IT Architect/Specialist | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n\n \n Josh Berkus \n <[email protected] \n m> To \n Sent by: [email protected] \n pgsql-performance cc \n -owner@postgresql \n .org Subject \n [PERFORM] Alternatives to Dell? \n \n 12/01/2004 05:24 \n PM \n \n \n Please respond to \n josh \n \n \n\n\n\n\nFolks,\n\nA lot of people have been having a devilish time with Dell hardware lately.\n\nIt seems like the quality control just isn't there on the Dell servers.\n\nThing is, some companies are required to use 1st-tier or at least 2nd-tier\nvendors for hardware; they won't home-build. For those people, what\nvendors\ndo others on this list recommend? What have been your good/bad\nexperiences?\n\n--\n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n", "msg_date": "Wed, 1 Dec 2004 19:17:05 -0500", "msg_from": "Steven Rosenstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "\n\nJosh Berkus wrote:\n> Jeff,\n> \n> \n>>I'm curious about the problem's you're seeing with Dell servers since\n>>we're about to buy some 750s, 2850s and 1850s.\n> \n> \n> The problems I've been dealing with have been on the *650s. They're the ones \n> you name.\n> \n> \n>>FYI ... the 750s, 1850s and 2850s use Intel chipsets (E7520 on 1850s\n>>and 2850s, 7210 on 750s), Intel NICs, and come only with LSI Logic\n>>RAID controllers. It looks like Dell has dropped the\n>>Broadcom/ServerWorks and Adaptec junk.\n> \n> \n> I don't know if Vivek is on this list; I think he just had a critical failure \n> with one of the new Dells with LSI.\n> \n\nOn this note about \"Adaptec junk\", I have a question regarding hardware \nas well. 
We tend to build a lot of servers in house (Supermicro based \nwith the Zero-channel raid). Does anyone have any anecdotal or empirical \ndata on using a ZCR card versus a full-blown RAID controller (adaptec or \nother)?? I am trying to build a medium-duty database server with 8G RAM, \n4x144GB U320 Scsi RAID 10, FreeBSD (5.3-stable or 4-stable) and was \nwondering about performance differences between ZCR and Adaptec versus \nother manufacturers' Full-RAID cards. (PCI-E)\n\nSven\n", "msg_date": "Wed, 01 Dec 2004 21:23:23 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Off-list Re: Alternatives to Dell?" }, { "msg_contents": "On Wed, 2004-12-01 at 14:24 -0800, Josh Berkus wrote:\n> Folks,\n> \n> A lot of people have been having a devilish time with Dell hardware lately. \n> It seems like the quality control just isn't there on the Dell servers.\n> \n> Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n> vendors for hardware; they won't home-build. For those people, what vendors \n> do others on this list recommend? What have been your good/bad experiences?\n\n\nWe use a bunch of HP ProLiant DL360 and DL380 without any problems at\nall. \n\n\n\nregards,\n\tRobin\n\n", "msg_date": "Thu, 02 Dec 2004 09:43:21 +0100", "msg_from": "Robin Ericsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "On Wed, Dec 01, 2004 at 05:25:03PM -0500, Bruce Momjian wrote:\n> I use Supermicro and have liked them. They make motherboards and systems.\n\nMany of their rack-based servers seem to be near-impossible to fit in a rack,\nthough. :-) (Many of their 4U servers are just desktop cases which you can\nturn on their sides and apply an extra kit onto, and into the rack it goes...\nafter a lot of pain. :-) )\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 2 Dec 2004 11:59:45 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Josh Berkus wrote:\n> Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n> vendors for hardware; they won't home-build. For those people, what vendors \n> do others on this list recommend? What have been your good/bad experiences?\n\nI've had very good experiences with IBM hardware, and found their sales \nand support to be responsive.\n\nJoe\n", "msg_date": "Thu, 02 Dec 2004 06:42:26 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "I've been at companies where we've had good experiences with Penguin \nComputing servers.\n\nhttp://www.penguincomputing.com/\n\nI always evaluate their offerings when considering server purchases or \nrecommendations.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\n", "msg_date": "Thu, 2 Dec 2004 10:47:22 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Consider Sun's new line of Opterons. They've been around for a couple of\nyears under the Newisys name. I'm using dozens of them for web servers\nand PG servers and so far both the v20z and v40z have been excellent\nperformers with solid reliability. 
The pricing was also competitive\nsince Sun is looking to break into the market.\n\n\n\nOn Wed, 2004-12-01 at 14:24 -0800, Josh Berkus wrote:\n> Folks,\n> \n> A lot of people have been having a devilish time with Dell hardware lately. \n> It seems like the quality control just isn't there on the Dell servers.\n> \n> Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n> vendors for hardware; they won't home-build. For those people, what vendors \n> do others on this list recommend? What have been your good/bad experiences?\n> \n\n", "msg_date": "Fri, 03 Dec 2004 07:19:37 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Cott Lang wrote:\n\n>Consider Sun's new line of Opterons. They've been around for a couple of\n>years under the Newisys name. I'm using dozens of them for web servers\n>and PG servers and so far both the v20z and v40z have been excellent\n>performers with solid reliability. The pricing was also competitive\n>since Sun is looking to break into the market.\n> \n>\nReally? I am not being sarcastic, but I found their prices pretty sad.\nDid you go direct or web purchase? I have thought about using them\nseveral times but....\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>\n>\n>On Wed, 2004-12-01 at 14:24 -0800, Josh Berkus wrote:\n> \n>\n>>Folks,\n>>\n>>A lot of people have been having a devilish time with Dell hardware lately. \n>>It seems like the quality control just isn't there on the Dell servers.\n>>\n>>Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n>>vendors for hardware; they won't home-build. For those people, what vendors \n>>do others on this list recommend? What have been your good/bad experiences?\n>>\n>> \n>>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Fri, 03 Dec 2004 06:30:26 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Most of mine I got through a Sun reseller. Some of mine I got off of\nEbay. You should be able to get them a lot cheaper than than retail web\npricing. :)\n\nHowever, even full retail seems like it was a hell of a lot cheaper for\na v40z than a DL585. :)\n\n\nOn Fri, 2004-12-03 at 06:30 -0800, Joshua D. Drake wrote:\n> Cott Lang wrote:\n> \n> >Consider Sun's new line of Opterons. They've been around for a couple of\n> >years under the Newisys name. I'm using dozens of them for web servers\n> >and PG servers and so far both the v20z and v40z have been excellent\n> >performers with solid reliability. The pricing was also competitive\n> >since Sun is looking to break into the market.\n> > \n> >\n> Really? I am not being sarcastic, but I found their prices pretty sad.\n> Did you go direct or web purchase? I have thought about using them\n> several times but....\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n> \n> \n> >\n> >\n> >On Wed, 2004-12-01 at 14:24 -0800, Josh Berkus wrote:\n> > \n> >\n> >>Folks,\n> >>\n> >>A lot of people have been having a devilish time with Dell hardware lately. 
\n> >>It seems like the quality control just isn't there on the Dell servers.\n> >>\n> >>Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n> >>vendors for hardware; they won't home-build. For those people, what vendors \n> >>do others on this list recommend? What have been your good/bad experiences?\n> >>\n> >> \n> >>\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 4: Don't 'kill -9' the postmaster\n> > \n> >\n> \n> \n\n", "msg_date": "Fri, 03 Dec 2004 07:33:39 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Cott Lang wrote:\n\n>Most of mine I got through a Sun reseller. Some of mine I got off of\n>Ebay. You should be able to get them a lot cheaper than than retail web\n>pricing. :)\n>\n>However, even full retail seems like it was a hell of a lot cheaper for\n>a v40z than a DL585. :)\n> \n>\nThat's true :) One of the reasons the compaq's are expensive\nis they supposedly use a quad board, even for the dual machine.\nWhich means a different opteron chip as well.\n\nI don't know this for a fact, it is just what one of their\n\"ahem\" sales guys told me.\n\nThe IBM machines are seem reasonable though.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>\n>On Fri, 2004-12-03 at 06:30 -0800, Joshua D. Drake wrote:\n> \n>\n>>Cott Lang wrote:\n>>\n>> \n>>\n>>>Consider Sun's new line of Opterons. They've been around for a couple of\n>>>years under the Newisys name. I'm using dozens of them for web servers\n>>>and PG servers and so far both the v20z and v40z have been excellent\n>>>performers with solid reliability. The pricing was also competitive\n>>>since Sun is looking to break into the market.\n>>> \n>>>\n>>> \n>>>\n>>Really? I am not being sarcastic, but I found their prices pretty sad.\n>>Did you go direct or web purchase? I have thought about using them\n>>several times but....\n>>\n>>Sincerely,\n>>\n>>Joshua D. Drake\n>>\n>>\n>>\n>> \n>>\n>>>On Wed, 2004-12-01 at 14:24 -0800, Josh Berkus wrote:\n>>> \n>>>\n>>> \n>>>\n>>>>Folks,\n>>>>\n>>>>A lot of people have been having a devilish time with Dell hardware lately. \n>>>>It seems like the quality control just isn't there on the Dell servers.\n>>>>\n>>>>Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n>>>>vendors for hardware; they won't home-build. For those people, what vendors \n>>>>do others on this list recommend? What have been your good/bad experiences?\n>>>>\n>>>> \n>>>>\n>>>> \n>>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Don't 'kill -9' the postmaster\n>>> \n>>>\n>>> \n>>>\n>> \n>>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Fri, 03 Dec 2004 06:38:50 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "We were originally heading towards an IBM deployment, but the 325 was\nall that was available at the time, and it only supported 12GB. Then\nwhen I heard they canceled their rumored quad processor 350, I feared\nIntel/AMD politics and IBM dropped from the running. 
:)\n\n(IBM now has the 326 that supports 16GB of RAM)\n\n\n\n\nOn Fri, 2004-12-03 at 06:38 -0800, Joshua D. Drake wrote:\n> Cott Lang wrote:\n> \n> >Most of mine I got through a Sun reseller. Some of mine I got off of\n> >Ebay. You should be able to get them a lot cheaper than than retail web\n> >pricing. :)\n> >\n> >However, even full retail seems like it was a hell of a lot cheaper for\n> >a v40z than a DL585. :)\n> > \n> >\n> That's true :) One of the reasons the compaq's are expensive\n> is they supposedly use a quad board, even for the dual machine.\n> Which means a different opteron chip as well.\n> \n> I don't know this for a fact, it is just what one of their\n> \"ahem\" sales guys told me.\n> \n> The IBM machines are seem reasonable though.\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n> \n> \n> >\n> >On Fri, 2004-12-03 at 06:30 -0800, Joshua D. Drake wrote:\n> > \n> >\n> >>Cott Lang wrote:\n> >>\n> >> \n> >>\n> >>>Consider Sun's new line of Opterons. They've been around for a couple of\n> >>>years under the Newisys name. I'm using dozens of them for web servers\n> >>>and PG servers and so far both the v20z and v40z have been excellent\n> >>>performers with solid reliability. The pricing was also competitive\n> >>>since Sun is looking to break into the market.\n> >>> \n> >>>\n> >>> \n> >>>\n> >>Really? I am not being sarcastic, but I found their prices pretty sad.\n> >>Did you go direct or web purchase? I have thought about using them\n> >>several times but....\n> >>\n> >>Sincerely,\n> >>\n> >>Joshua D. Drake\n> >>\n> >>\n> >>\n> >> \n> >>\n> >>>On Wed, 2004-12-01 at 14:24 -0800, Josh Berkus wrote:\n> >>> \n> >>>\n> >>> \n> >>>\n> >>>>Folks,\n> >>>>\n> >>>>A lot of people have been having a devilish time with Dell hardware lately. \n> >>>>It seems like the quality control just isn't there on the Dell servers.\n> >>>>\n> >>>>Thing is, some companies are required to use 1st-tier or at least 2nd-tier \n> >>>>vendors for hardware; they won't home-build. For those people, what vendors \n> >>>>do others on this list recommend? What have been your good/bad experiences?\n> >>>>\n> >>>> \n> >>>>\n> >>>> \n> >>>>\n> >>>---------------------------(end of broadcast)---------------------------\n> >>>TIP 4: Don't 'kill -9' the postmaster\n> >>> \n> >>>\n> >>> \n> >>>\n> >> \n> >>\n> \n> \n\n", "msg_date": "Fri, 03 Dec 2004 09:55:55 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "On Fri, 03 Dec 2004 06:38:50 -0800, Joshua D. Drake\n<[email protected]> wrote:\n> That's true :) One of the reasons the compaq's are expensive\n> is they supposedly use a quad board, even for the dual machine.\n> Which means a different opteron chip as well.\n\nI can confirm that. You have a choice of CPUs, but all the DL585s are\nexpandable to 4 procs if you get the 800 series Opterons. Each CPU\nsits on it's own daughter board that links up the HyperTransport\nbusses between all the others. Each CPU card has (I think...) 8 slots\nfor DIMMS, for a max of 64G.\n\n> \n> I don't know this for a fact, it is just what one of their\n> \"ahem\" sales guys told me.\n> \n\nAt least in that case they were being accurate. ;)\n\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Fri, 3 Dec 2004 20:53:07 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" 
}, { "msg_contents": "On Fri, 2004-12-03 at 20:53 -0500, Mike Rylander wrote:\n> On Fri, 03 Dec 2004 06:38:50 -0800, Joshua D. Drake\n> <[email protected]> wrote:\n> > That's true :) One of the reasons the compaq's are expensive\n> > is they supposedly use a quad board, even for the dual machine.\n> > Which means a different opteron chip as well.\n> \n> I can confirm that. You have a choice of CPUs, but all the DL585s are\n> expandable to 4 procs if you get the 800 series Opterons. Each CPU\n> sits on it's own daughter board that links up the HyperTransport\n> busses between all the others. Each CPU card has (I think...) 8 slots\n> for DIMMS, for a max of 64G.\n\nWhy would I want that giant beast when a 1U will do for dual\nopterons? :)\n\nThe V40zs have dual procs on the main board with a daughter board for\nthe other two procs. Each CPU has 4 DIMM slots. Sun has the daughter\nboards for an outrageous price, but you can buy white box Newisys\ndaughter boards for a lot less.\n\nThe 64GB of 2GB DIMMs I am jealous of, other than that, the DL585 is so\noutrageously priced I never considered it. \n\n", "msg_date": "Sat, 04 Dec 2004 10:22:24 -0700", "msg_from": "Cott Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "On Fri, Dec 03, 2004 at 07:19:37AM -0700, Cott Lang wrote:\n> Consider Sun's new line of Opterons. They've been around for a couple of\n\nI wouldn't buy a ray of sunshine from Sun in the middle of January at\nthe north pole, given the customer experience I had with them. They\nhad consistent failures in some critical hardware, and it was like\nasking them to donate a kidney when we tried to get the things fixed. \nFinally, they told us that they'd sell us the new line of hardware\ninstead. In other words, \"The last version was broken, but _this_\none works! We promise!\" We told them to take a long walk off a\nshort pier. Their service people sure _try_ hard in the field, but\nsome machines required three and four visits to fix. \n\nI also find the Sun Opteron offering to be way overpriced compared to\nthe competition.\n\nIn case it's not obvious, I don't speak for my employer.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Mon, 6 Dec 2004 13:57:38 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": ">>>>> \"FW\" == Frank Wiles <[email protected]> writes:\n\nFW> I believe I had expressed some problems with Dell in the past, but\nFW> it really isn't a quality control issue that I have seen. It is more\nFW> of a Linux support issue. Lately I've been running into problems with\n\nDitto that experience, but with FreeBSD.\n\nFW> getting particular parts of system working under Linux (raid cards, \nFW> SATA drives, Ethernet cards) or I can get it working, but it\nFW> performs badly ( PERC cards vs say a Mylex card ). \n\nDrivers for their devices are not problems, but performance is.\n\nTheir RAID cards are either Adaptec or LSI, but people who use the\n\"real\" branded versions of those cards always seem to get better\nperformance. Way better.\n\nI'm considering FreeBSD systems and a custom built configuration right\nnow. Very hard decision to make.\n\nFor desktops and web/office servers, I still like the Dells. 
Just not\nfor the DB servers anymore.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 13 Dec 2004 11:47:34 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\n\n>> FYI ... the 750s, 1850s and 2850s use Intel chipsets (E7520 on 1850s\n>> and 2850s, 7210 on 750s), Intel NICs, and come only with LSI Logic\n>> RAID controllers. It looks like Dell has dropped the\n>> Broadcom/ServerWorks and Adaptec junk.\n\nJB> I don't know if Vivek is on this list; I think he just had a\nJB> critical failure with one of the new Dells with LSI.\n\nI'm here, but time delayed :-)\n\nNo critical failures on the Dell, just performance failure. It can't\nkeep up. You'd think with a box like this:\n\n4GB RAM\nDual Xeon (32 bit)\nPERC3 (LSI based controller) dual channel\nchan0: RAID1 two disks for OS + pg_xlog\nchan1: RAID5 14 disks U320 18Gb\nFreeBSD 4.10\nPG 7.4.6\n\nI should get better than a sustained 6MB/s I/O throughput with peaks\nto 30MB/s and about 30% the tracks/sec others report with name-brand\nLSI controllers with Opteron systems.\n\nThe computer is wicked fast, but the I/O can't hold up, and I can't\nget a straight answer as to why.\n\nI'm no closer to solving the vendor problem than anyone else here.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Mon, 13 Dec 2004 12:19:24 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Off-list Re: Alternatives to Dell?" }, { "msg_contents": "Vivek Khera wrote:\n> >>>>> \"FW\" == Frank Wiles <[email protected]> writes:\n> \n> FW> I believe I had expressed some problems with Dell in the past, but\n> FW> it really isn't a quality control issue that I have seen. It is more\n> FW> of a Linux support issue. Lately I've been running into problems with\n> \n> Ditto that experience, but with FreeBSD.\n> \n> FW> getting particular parts of system working under Linux (raid cards, \n> FW> SATA drives, Ethernet cards) or I can get it working, but it\n> FW> performs badly ( PERC cards vs say a Mylex card ). \n> \n> Drivers for their devices are not problems, but performance is.\n> \n> Their RAID cards are either Adaptec or LSI, but people who use the\n> \"real\" branded versions of those cards always seem to get better\n> performance. Way better.\n> \n> I'm considering FreeBSD systems and a custom built configuration right\n> now. Very hard decision to make.\n> \n> For desktops and web/office servers, I still like the Dells. Just not\n> for the DB servers anymore.\n\nWay off topic, but Dell regularly advertises included hardware that is\n\"almost\" the same as the name brand hardware if purchased individually.\n\nMy brother bought a Dell and needed to upgrade his video driver and the\nDell tech said he has to use Dell's drivers rather than the\nmanufacturers driver because the video card isn't identical to the\nmanufacturers. Of course the manufacturer had an updated driver that\nfixed the problem while Dell had only the broken one. 
He upgraded the\ndriver anyway and it worked.\n\nDo you want to purchase hardware from a vendor that tries to shave every\ndollar off the hardware cost, even if compatibility or performance\nsuffers? I don't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 13 Dec 2004 13:20:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" } ]
[ { "msg_contents": "> Folks,\n> \n> A lot of people have been having a devilish time with Dell hardware\n> lately.\n> It seems like the quality control just isn't there on the Dell\nservers.\n> \n> Thing is, some companies are required to use 1st-tier or at least\n2nd-tier\n> vendors for hardware; they won't home-build. For those people, what\n> vendors\n> do others on this list recommend? What have been your good/bad\n> experiences?\n\nWell, there is always HP and (if money is no object) IBM or Sun.\n\nFor the budget or performance minded I'd suggest checking out SWT\nservers (http://www.swt.com) ...not sure what tier they fit into but\nthey can get you into a quad Opteron for under 10k$ US, about half what\nyou would pay for a comparable HP server (and Dell doesn't even offer\nOpteron).\n\nAlso, if choice of RAID controller is an option, I'd definitely suggest\n3ware. They are cheap, have excellent linux support (including open\nsource drivers), and have the options you'd expect form a decent raid\ncontroller including a BBU. I just picked up one of their escalade SATA\ncontrollers and am really impressed with it.\n\nI'd definitely suggest Opteron...cooler, faster, and 64 bit. Another\nreason not to go with Dell.\n\nMerlin\n", "msg_date": "Wed, 1 Dec 2004 17:43:10 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Merlin Moncure wrote:\n\n> For the budget or performance minded I'd suggest checking out SWT\n> servers (http://www.swt.com) ...not sure what tier they fit into but\n> they can get you into a quad Opteron for under 10k$ US, about half what\n> you would pay for a comparable HP server (and Dell doesn't even offer\n> Opteron).\n\nYou can do the same with Monarch Computers. A 4u quad opteron. You can \nalso pay a lot more, depends on the configuration. They have a very \nnice site for building a system as you want.\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Wed, 01 Dec 2004 22:56:44 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "On Wed, Dec 01, 2004 at 05:43:10PM -0500, Merlin Moncure wrote:\n> Also, if choice of RAID controller is an option, I'd definitely suggest\n> 3ware. They are cheap, have excellent linux support (including open\n> source drivers)\n\nThe drivers are open source, but the management tools are not. (This is quite\nimpractical for us running other distributions than Red Hat or SuSE, at least.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 2 Dec 2004 12:00:56 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Well, I've personally seen IBM's that were slower than Dell's, and \nDell's aren't particularly fast.\n\nI'm currently trying to find a name brand computer that is as fast as \nsomething I could build myself. 
So far the HP looks like the fastest, \nbut still not as fast as a machine built from scratch\nSuperMicro seems to be pretty good as Bruce mentioned.\n\nDave\n\nGeoffrey wrote:\n\n> Merlin Moncure wrote:\n>\n>> For the budget or performance minded I'd suggest checking out SWT\n>> servers (http://www.swt.com) ...not sure what tier they fit into but\n>> they can get you into a quad Opteron for under 10k$ US, about half what\n>> you would pay for a comparable HP server (and Dell doesn't even offer\n>> Opteron).\n>\n>\n> You can do the same with Monarch Computers. A 4u quad opteron. You \n> can also pay a lot more, depends on the configuration. They have a \n> very nice site for building a system as you want.\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Thu, 02 Dec 2004 07:06:26 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Dave Cramer wrote:\n> Well, I've personally seen IBM's that were slower than Dell's, and \n> Dell's aren't particularly fast.\n> \n> I'm currently trying to find a name brand computer that is as fast as \n> something I could build myself. So far the HP looks like the fastest, \n> but still not as fast as a machine built from scratch\n> SuperMicro seems to be pretty good as Bruce mentioned.\n\nI've been very impressed with the Monarch machines. They are well \nbuilt, with good quality components. They are meticulously assembled, \nwith special care taken with cable routing and such. Very quiet \nmachines as well and that's not easy with AMD processors.\n\nThese folks also specialize in Linux boxes and they preload Linux. You \nwon't find that with most of the large vendors. Plus, you can call and \nactually talk to one of the folks who's actually building the box. It's \nunlikely you'll get that kind of service from any of the big guys.\n\nAs far as Dell is concerned, I've heard nothing but problems from other \nfolks using their boxes, both servers and desktops. My personal \nexperience reflects the same.\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Thu, 02 Dec 2004 08:12:51 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" } ]
[ { "msg_contents": "Hi,\n\nI was reading a lot on the specs that was used by those who runs \npostgres. I was wondering is the a more structured method of \ndetermining what is the required hardware specs? The project that i am \ndoing can populate about few millions records a day (worst case).\n\nBased on what i read, this is what i guess\n\nRAM - the more the better\npostgresql.conf must be optimized\nCPU - not sure if adding more CPU will really help unless i start to \ncreate parrallel insert sessions.\nHard disks - ?? how do i actually check how much space this records take \non the hard drives?\noptimized queries is a must\nOS? linux? freebsd? solaris?\ncpu type? sun sparc? intel? amd?\nanything else?\n\nHasnul\n\n\n", "msg_date": "Thu, 02 Dec 2004 08:20:14 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "Recommended Specs" }, { "msg_contents": "Hasnul,\n\n> Hard disks - ?? how do i actually check how much space this records take\n> on the hard drives?\n> optimized queries is a must\n> OS? linux? freebsd? solaris?\n> cpu type? sun sparc? intel? amd?\n> anything else?\n\nThere have been treads discussing these as well. Work your way through the \narchives.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 1 Dec 2004 22:15:09 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended Specs" } ]
[ { "msg_contents": "Josh, Steve:\n\nI have also been looking at non-dell server vendors due to\nrecent concerns about the PERC RAID Controllers. That said,\nI believe IBM just shoots itself in the foot via its sales/pricing \npractices....\n\nPrice out a PE2850 w/ 8GB RAM and 6 18GB Drives on the\nDell website and you'll get a number in the $9-10K range. Talk\nto your sales rep and you can get a $1-2K discount(total $7-8K). \nThat seems fair and it wins alot of business.\n\nGo the IBM website, try to find a comparative x86 system and\nspec it out. The list pricing is in the $12-16K range. Yes, I know\nI could get a good discount if I developed a relationship with\nan IBM reseller here..and perhaps the end pricing would be\nin the $10-12K range....but the Dell way just seems alot more honest\nto me, and reasonable. The IBM gear doesn't seem that much better.\n\nAnd while I have concerns about some of the Dell\nhardware, none of the issues have really caused any issues for me or my clients\nhere yet.....(crossing fingers..)\n\nI just don't think IBM makes it easy for new customers to buy their equipment and\nif I went with them, I'd always have the lingering suspicion that I was paying too much.\n\nI really hope they change some day... Until then, I just see Dell winning more of the\nserver market share.\n\nRegards,\nMatt \n--- Original Message---\n To: [email protected]\n Cc: [email protected]\n From: Steven Rosenstein <[email protected]>\n Sent: 12/01/2004 4:17PM\n Subject: Re: [PERFORM] Alternatives to Dell?\n\n>> \n>> \n>> \n>> \n>> I recommend IBM equipment, but in the spirit of caveat emptor I should let\n>> you know I work for IBM... :-)\n>> \n>> Seriously, I've been using IBM laptops and desktops for about 5 years, even\n>> before I started working for them. They tend to be a little more expensive\n>> than Dell, but I think they use beefier components and don't cut the specs\n>> quite as close as Dell does. IBM gear is designed more for industrial use\n>> than home computing, which is reflected in the quality (and the price).\n>> \n>> IBM just released a new series of PowerPC-based servers that are\n>> specifically designed to run Linux. They're at the higher end, but from\n>> what I understand, they provide much more bang for the buck than\n>> Intel-based servers.\n>> \n>> I hope this helps,\n>> --- Steve\n>> ________________________________________________________________________\n>> ___________\n>> \n>> Steven Rosenstein\n>> Senior IT Architect/Specialist | IBM Virtual Server Administration\n>> Voice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\n>> Text Messaging: 6463456978 @ mobile.mycingular.com\n>> Email: srosenst @ us.ibm.com\n>> \n>> \"Learn from the mistakes of others because you can't live long enough to\n>> make them all yourself.\" -- Eleanor Roosevelt\n>> \n>> \n>> \n>> Josh Berkus\n>> <[email protected]\n>> m> To\n>> Sent by: [email protected]\n>> pgsql-performance cc\n>> -owner@postgresql\n>> .org Subject\n>> [PERFORM] Alternatives to Dell?\n>> \n>> 12/01/2004 05:24\n>> PM\n>> \n>> \n>> Please respond to\n>> josh\n>> \n>> \n>> \n>> \n>> \n>> \n>> Folks,\n>> \n>> A lot of people have been having a devilish time with Dell hardware lately.\n>> \n>> It seems like the quality control just isn't there on the Dell servers.\n>> \n>> Thing is, some companies are required to use 1st-tier or at least 2nd-tier\n>> vendors for hardware; they won't home-build. For those people, what\n>> vendors\n>> do others on this list recommend? 
What have been your good/bad\n>> experiences?\n>> \n>> --\n>> --Josh\n>> \n>> Josh Berkus\n>> Aglio Database Solutions\n>> San Francisco\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>> \n>> http://www.postgresql.org/docs/faqs/FAQ.html\n>> \n>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>> \n\n", "msg_date": "Wed, 01 Dec 2004 17:35:32 -0800", "msg_from": "\"Matthew Marlowe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Matthew Marlowe wrote:\n\n> I just don't think IBM makes it easy for new customers to buy their\n> equipment and if I went with them, I'd always have the lingering\n> suspicion that I was paying too much.\n> \n> I really hope they change some day... Until then, I just see Dell\n> winning more of the server market share.\n\nSomething to be said for the old saying, 'you get what you pay for.'\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Wed, 01 Dec 2004 20:58:28 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "I always say 'If you pay for quality it only hurts once', but then again I \ndon't equate high price with high quality ;-)\n\n----- Original Message ----- \nFrom: \"Geoffrey\" <[email protected]>\n\n> Something to be said for the old saying, 'you get what you pay for.'\n\n", "msg_date": "Thu, 2 Dec 2004 11:10:07 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": ">\n>Go the IBM website, try to find a comparative x86 system and\n>spec it out. The list pricing is in the $12-16K range. Yes, I know\n>I could get a good discount if I developed a relationship with\n>an IBM reseller here..and perhaps the end pricing would be\n>in the $10-12K range....but the Dell way just seems alot more honest\n>to me, and reasonable. The IBM gear doesn't seem that much better.\n> \n>\nIt is my experience that IBM will get within 5% of Dell if you\nprovide IBM with a written quote from Dell.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>And while I have concerns about some of the Dell\n>hardware, none of the issues have really caused any issues for me or my clients\n>here yet.....(crossing fingers..)\n>\n>I just don't think IBM makes it easy for new customers to buy their equipment and\n>if I went with them, I'd always have the lingering suspicion that I was paying too much.\n>\n>I really hope they change some day... Until then, I just see Dell winning more of the\n>server market share.\n>\n>Regards,\n>Matt \n>--- Original Message---\n> To: [email protected]\n> Cc: [email protected]\n> From: Steven Rosenstein <[email protected]>\n> Sent: 12/01/2004 4:17PM\n> Subject: Re: [PERFORM] Alternatives to Dell?\n>\n> \n>\n>>>\n>>>\n>>>I recommend IBM equipment, but in the spirit of caveat emptor I should let\n>>>you know I work for IBM... :-)\n>>>\n>>>Seriously, I've been using IBM laptops and desktops for about 5 years, even\n>>>before I started working for them. They tend to be a little more expensive\n>>>than Dell, but I think they use beefier components and don't cut the specs\n>>>quite as close as Dell does. 
IBM gear is designed more for industrial use\n>>>than home computing, which is reflected in the quality (and the price).\n>>>\n>>>IBM just released a new series of PowerPC-based servers that are\n>>>specifically designed to run Linux. They're at the higher end, but from\n>>>what I understand, they provide much more bang for the buck than\n>>>Intel-based servers.\n>>>\n>>>I hope this helps,\n>>>--- Steve\n>>>________________________________________________________________________\n>>>___________\n>>>\n>>>Steven Rosenstein\n>>>Senior IT Architect/Specialist | IBM Virtual Server Administration\n>>>Voice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\n>>>Text Messaging: 6463456978 @ mobile.mycingular.com\n>>>Email: srosenst @ us.ibm.com\n>>>\n>>>\"Learn from the mistakes of others because you can't live long enough to\n>>>make them all yourself.\" -- Eleanor Roosevelt\n>>>\n>>>\n>>>\n>>> Josh Berkus\n>>> <[email protected]\n>>> m> To\n>>> Sent by: [email protected]\n>>> pgsql-performance cc\n>>> -owner@postgresql\n>>> .org Subject\n>>> [PERFORM] Alternatives to Dell?\n>>>\n>>> 12/01/2004 05:24\n>>> PM\n>>>\n>>>\n>>> Please respond to\n>>> josh\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>Folks,\n>>>\n>>>A lot of people have been having a devilish time with Dell hardware lately.\n>>>\n>>>It seems like the quality control just isn't there on the Dell servers.\n>>>\n>>>Thing is, some companies are required to use 1st-tier or at least 2nd-tier\n>>>vendors for hardware; they won't home-build. For those people, what\n>>>vendors\n>>>do others on this list recommend? What have been your good/bad\n>>>experiences?\n>>>\n>>>--\n>>>--Josh\n>>>\n>>>Josh Berkus\n>>>Aglio Database Solutions\n>>>San Francisco\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 5: Have you checked our extensive FAQ?\n>>>\n>>> http://www.postgresql.org/docs/faqs/FAQ.html\n>>>\n>>>\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Don't 'kill -9' the postmaster\n>>>\n>>> \n>>>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Wed, 01 Dec 2004 18:15:28 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" }, { "msg_contents": "Iain wrote:\n> I always say 'If you pay for quality it only hurts once', but then again \n> I don't equate high price with high quality ;-)\n\nTrue, but if you do your research, you'll more likely to get high \nquality with high price then you are high quality with low price.\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Wed, 01 Dec 2004 21:21:37 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alternatives to Dell?" } ]
[ { "msg_contents": "Not in my experience for IBM, even for an order approaching 100k. The sales guy was rude, jumping on numbers, unable to talk about exactly what differentiates IBM from Dell (equivalent config) - other than the name and their 20K+ difference.\r\n \r\nWe use many Dell servers, no quality issue, but as someone pointed out earlier, linux support is not that great.\r\n \r\nOnly issue so far hardware wise is the PERC card on one of the machines, and i believe one should stay away from the adaptec versions of PERC.\r\n \r\n-anjan\r\n \r\n\r\n\t-----Original Message----- \r\n\tFrom: Joshua D. Drake [mailto:[email protected]] \r\n\tSent: Wed 12/1/2004 9:15 PM \r\n\tTo: Matthew Marlowe \r\n\tCc: Steven Rosenstein; [email protected]; [email protected] \r\n\tSubject: Re: [PERFORM] Alternatives to Dell?\r\n\t\r\n\t\r\n\r\n\r\n\t> \r\n\t>Go the IBM website, try to find a comparative x86 system and \r\n\t>spec it out. The list pricing is in the $12-16K range. Yes, I know \r\n\t>I could get a good discount if I developed a relationship with \r\n\t>an IBM reseller here..and perhaps the end pricing would be \r\n\t>in the $10-12K range....but the Dell way just seems alot more honest \r\n\t>to me, and reasonable. The IBM gear doesn't seem that much better. \r\n\t> \r\n\t> \r\n\tIt is my experience that IBM will get within 5% of Dell if you \r\n\tprovide IBM with a written quote from Dell. \r\n\r\n\tSincerely, \r\n\r\n\tJoshua D. Drake \r\n\r\n\r\n\r\n\t>And while I have concerns about some of the Dell \r\n\t>hardware, none of the issues have really caused any issues for me or my clients \r\n\t>here yet.....(crossing fingers..) \r\n\t> \r\n\t>I just don't think IBM makes it easy for new customers to buy their equipment and \r\n\t>if I went with them, I'd always have the lingering suspicion that I was paying too much. \r\n\t> \r\n\t>I really hope they change some day... Until then, I just see Dell winning more of the \r\n\t>server market share. \r\n\t> \r\n\t>Regards, \r\n\t>Matt \r\n\t>--- Original Message--- \r\n\t> To: [email protected] \r\n\t> Cc: [email protected] \r\n\t> From: Steven Rosenstein <[email protected]> \r\n\t> Sent: 12/01/2004 4:17PM \r\n\t> Subject: Re: [PERFORM] Alternatives to Dell? \r\n\t> \r\n\t> \r\n\t> \r\n\t>>> \r\n\t>>> \r\n\t>>>I recommend IBM equipment, but in the spirit of caveat emptor I should let \r\n\t>>>you know I work for IBM... :-) \r\n\t>>> \r\n\t>>>Seriously, I've been using IBM laptops and desktops for about 5 years, even \r\n\t>>>before I started working for them. They tend to be a little more expensive \r\n\t>>>than Dell, but I think they use beefier components and don't cut the specs \r\n\t>>>quite as close as Dell does. IBM gear is designed more for industrial use \r\n\t>>>than home computing, which is reflected in the quality (and the price). \r\n\t>>> \r\n\t>>>IBM just released a new series of PowerPC-based servers that are \r\n\t>>>specifically designed to run Linux. They're at the higher end, but from \r\n\t>>>what I understand, they provide much more bang for the buck than \r\n\t>>>Intel-based servers. 
\r\n\t>>> \r\n\t>>>I hope this helps, \r\n\t>>>--- Steve \r\n\t>>>________________________________________________________________________ \r\n\t>>>___________ \r\n\t>>> \r\n\t>>>Steven Rosenstein \r\n\t>>>Senior IT Architect/Specialist | IBM Virtual Server Administration \r\n\t>>>Voice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001 \r\n\t>>>Text Messaging: 6463456978 @ mobile.mycingular.com \r\n\t>>>Email: srosenst @ us.ibm.com \r\n\t>>> \r\n\t>>>\"Learn from the mistakes of others because you can't live long enough to \r\n\t>>>make them all yourself.\" -- Eleanor Roosevelt \r\n\t>>> \r\n\t>>> \r\n\t>>> \r\n\t>>> Josh Berkus \r\n\t>>> <[email protected] \r\n\t>>> m> To \r\n\t>>> Sent by: [email protected] \r\n\t>>> pgsql-performance cc \r\n\t>>> -owner@postgresql \r\n\t>>> .org Subject \r\n\t>>> [PERFORM] Alternatives to Dell? \r\n\t>>> \r\n\t>>> 12/01/2004 05:24 \r\n\t>>> PM \r\n\t>>> \r\n\t>>> \r\n\t>>> Please respond to \r\n\t>>> josh \r\n\t>>> \r\n\t>>> \r\n\t>>> \r\n\t>>> \r\n\t>>> \r\n\t>>> \r\n\t>>>Folks, \r\n\t>>> \r\n\t>>>A lot of people have been having a devilish time with Dell hardware lately. \r\n\t>>> \r\n\t>>>It seems like the quality control just isn't there on the Dell servers. \r\n\t>>> \r\n\t>>>Thing is, some companies are required to use 1st-tier or at least 2nd-tier \r\n\t>>>vendors for hardware; they won't home-build. For those people, what \r\n\t>>>vendors \r\n\t>>>do others on this list recommend? What have been your good/bad \r\n\t>>>experiences? \r\n\t>>> \r\n\t>>>-- \r\n\t>>>--Josh \r\n\t>>> \r\n\t>>>Josh Berkus \r\n\t>>>Aglio Database Solutions \r\n\t>>>San Francisco \r\n\t>>> \r\n\t>>>---------------------------(end of broadcast)--------------------------- \r\n\t>>>TIP 5: Have you checked our extensive FAQ? \r\n\t>>> \r\n\t>>> http://www.postgresql.org/docs/faqs/FAQ.html \r\n\t>>> \r\n\t>>> \r\n\t>>> \r\n\t>>>---------------------------(end of broadcast)--------------------------- \r\n\t>>>TIP 4: Don't 'kill -9' the postmaster \r\n\t>>> \r\n\t>>> \r\n\t>>> \r\n\t> \r\n\t> \r\n\t>---------------------------(end of broadcast)--------------------------- \r\n\t>TIP 2: you can get off all lists at once with the unregister command \r\n\t> (send \"unregister YourEmailAddressHere\" to [email protected]) \r\n\t> \r\n\t> \r\n\r\n\r\n\t-- \r\n\tCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC \r\n\tPostgresql support, programming shared hosting and dedicated hosting. \r\n\t+1-503-667-4564 - [email protected] - http://www.commandprompt.com \r\n\tPostgreSQL Replicator -- production quality replication for PostgreSQL \r\n\r\n", "msg_date": "Wed, 1 Dec 2004 22:48:24 -0500", "msg_from": "\"Anjan Dave\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternatives to Dell?" } ]
[ { "msg_contents": "Hi all,\n \nWhich is the best available PG replication tool in market now? \n \n>From searching on the internet, I found some resources on the following tools used for replication :\n \n\n postgres ���R \n Usogres\n eRServer/Rserv/Dbmirror \n PgReplicator \n Mammoth PostgreSQL Replicator \n Slony-I\n\nWhich one of these is a good option for replicating Postgres 7.3.2?\n\n \n\nThanks again,\n\nSaranya\n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Read only the mail you want - Yahoo! Mail SpamGuard.\nHi all,\n \nWhich is the best available PG replication tool in market now? \n \nFrom searching on the internet, I found some resources on the following tools used for replication :\n \n\npostgres ���R  \nUsogres\neRServer/Rserv/Dbmirror \nPgReplicator \nMammoth PostgreSQL Replicator  \nSlony-I\nWhich one of these is a good option for replicating Postgres 7.3.2?\n \nThanks again,\nSaranya\nDo you Yahoo!? \nRead only the mail you want - Yahoo! Mail SpamGuard.", "msg_date": "Thu, 2 Dec 2004 06:38:16 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "pg replication tools?" }, { "msg_contents": "sarlav kumar wrote:\n\n> Hi all,\n> \n> Which is the best available PG replication tool in market now?\n\nThere is no \"best\", there is only best for your situation. The two\nmost supported are:\n\n\n> * Mammoth PostgreSQL Replicator \n> * Slony-I\n>\n> Which one of these is a good option for replicating Postgres 7.3.2?\n>\n\nMammoth PostgreSQL Replicator will automatically upgrade you to 7.3.8 \nwhich you should be running anyway.\n\nI believe Slony will work with 7.3.2.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n>\n> Thanks again,\n>\n> Saranya\n>\n> ------------------------------------------------------------------------\n> Do you Yahoo!?\n> Read only the mail you want - Yahoo! Mail SpamGuard \n> <http://us.rd.yahoo.com/mail_us/taglines/spamguard/*http://promotions.yahoo.com/new_mail/static/protection.html>. \n\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 02 Dec 2004 08:50:03 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg replication tools?" }, { "msg_contents": "On December 2, 2004 06:38 am, sarlav kumar wrote:\n> Hi all,\n>\n> Which is the best available PG replication tool in market now?\n>\n> From searching on the internet, I found some resources on the following\n> tools used for replication :\n>\n>\n> postgres –R\n> Usogres\n> eRServer/Rserv/Dbmirror\n> PgReplicator\n> Mammoth PostgreSQL Replicator\n> Slony-I\n>\n> Which one of these is a good option for replicating Postgres 7.3.2?\n\nWhat are your needs in a replication solution? for example, do you need to be \nable to replicate a running install with out needing to initdb, do you want \nto be able to replicate across versions; do you require sync or will async \nwork, etc..\n\n>\n>\n>\n> Thanks again,\n>\n> Saranya\n>\n>\n> ---------------------------------\n> Do you Yahoo!?\n> Read only the mail you want - Yahoo! 
Mail SpamGuard.\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Thu, 2 Dec 2004 08:58:47 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg replication tools?" }, { "msg_contents": "Clinging to sanity, [email protected] (\"Joshua D. Drake\") mumbled into her beard:\n> sarlav kumar wrote:\n>\n>> Hi all,\n>> Which is the best available PG replication tool in market now?\n>\n> There is no \"best\", there is only best for your situation. The two\n> most supported are:\n>\n>\n>> * Mammoth PostgreSQL Replicator * Slony-I\n>>\n>> Which one of these is a good option for replicating Postgres 7.3.2?\n>>\n>\n> Mammoth PostgreSQL Replicator will automatically upgrade you to\n> 7.3.8 which you should be running anyway.\n>\n> I believe Slony will work with 7.3.2.\n\nNo, it won't.\n\nI believe there was something about namespace support that did not\nstabilize until PostgreSQL 7.3.3, and Slony-I works with that version,\nat the earliest.\n\nAnd you're quite right; \"best\" is a slippery metric. Like many\nthings, it may be altered by perspective.\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www3.sympatico.ca/cbbrowne/postgresql.html\nI can see clearly now, the brain is gone... \n", "msg_date": "Thu, 02 Dec 2004 21:39:02 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg replication tools?" }, { "msg_contents": "Go for Slony its best thing to start with.\n\n\nOn Thu, 2 Dec 2004 06:38:16 -0800 (PST), sarlav kumar <[email protected]> wrote:\n> \n> Hi all, \n> \n> Which is the best available PG replication tool in market now? \n> \n> From searching on the internet, I found some resources on the following\n> tools used for replication : \n> \n> postgres –R \n> Usogres \n> eRServer/Rserv/Dbmirror \n> PgReplicator \n> Mammoth PostgreSQL Replicator \n> Slony-I \n> \n> Which one of these is a good option for replicating Postgres 7.3.2? \n> \n> \n> \n> Thanks again, \n> \n> Saranya\n> \n> ________________________________\n> Do you Yahoo!?\n> Read only the mail you want - Yahoo! Mail SpamGuard. \n> \n> \n\n\n-- \nWith Best Regards,\nVishal Kashyap.\nLead Software Developer,\nhttp://saihertz.com,\nhttp://vishalkashyap.tk\n", "msg_date": "Fri, 3 Dec 2004 10:46:31 +0530", "msg_from": "\"Vishal Kashyap @ [SaiHertz]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg replication tools?" } ]
[ { "msg_contents": "Hi,\n\nBefore writing this mail, I'd researched a little about this topic, and\ngot some opinions from guys like Fabien Pascal, who argues that logical\ndesign should be separated from physical design, and other sources. As\nthis is not fact, I'm writing to you guys, that make things work in real\nworld.\n\nWe started our first big (for our company standards) project always\nthinking in normalization. But once we imported legacy data into the DB,\nthings got harder.\n\nOne example is the clients status. A client might be active, inactive or\npending (for many reasons). We store all the status a client have since\nit is in the system. To check what is the actual status of a client, we\nget the last status from this historical status table. This take a\nconsiderable time, so our best results were achieved building a\nfunction that checks the status and indexing this function. The problem\nis that indexed functions mus bu immutable, so as you can figure, if the\nstatus change after the creation of the index, the retunr of the\nfunction is still the same.\n\nWhat do you suggest for situations like this? Should I add a field to\nclients table and store its actual status, keep storing data in the\nhistorical table an control its changes with a trigger?\n\nThere are other situations that are making things difficult to us. For\nexample, one query must return the total amount a client bought in the\nlast 90 days. It's taking too long, when we must know it for many\nclients, many times. So should I create summarization tables to store\nthis kind of stuff, update it with a trigger in daily basis (for\nexample), and solve this problem with one join?\n\nOur database is not that big. The larger table has about 7.000.000 rows.\nAbout 50.000 clients, half of them active. All that I'd point out above\nuses indexes for queries, etc. But even with this it's not been fast\nenough. We have a Dell server for this (I know, the Dell issue), a Dual\nXeon 2.8, SCSI HD, 1 GB mem. Do we need better hardware for our system?\n\n-- \n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+\n\n", "msg_date": "Thu, 02 Dec 2004 15:05:55 -0200", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Normalization or Performance" }, { "msg_contents": "Alvaro Nunes Melo wrote:\n> Hi,\n> \n> Before writing this mail, I'd researched a little about this topic,\n> and got some opinions from guys like Fabien Pascal, who argues that\n> logical design should be separated from physical design, and other\n> sources. As this is not fact, I'm writing to you guys, that make\n> things work in real world.\n\nI believe he's right. Or at least that you should only compromise your \nlogical design once it becomes absolutely necessary due to physical \nlimitations.\n\n> We started our first big (for our company standards) project always \n> thinking in normalization. But once we imported legacy data into the\n> DB, things got harder.\n> \n> One example is the clients status. A client might be active, inactive\n> or pending (for many reasons). We store all the status a client have\n> since it is in the system. 
To check what is the actual status of a\n> client, we get the last status from this historical status table.\n> This take a considerable time, so our best results were achieved\n> building a function that checks the status and indexing this\n> function. The problem is that indexed functions mus bu immutable, so\n> as you can figure, if the status change after the creation of the\n> index, the retunr of the function is still the same.\n> \n> What do you suggest for situations like this? Should I add a field to\n> clients table and store its actual status, keep storing data in the \n> historical table an control its changes with a trigger?\n\nTrigger + history table is a common solution, it's easy to implement and \nthere's nothing non-relational about it as a solution.\n\n> There are other situations that are making things difficult to us.\n> For example, one query must return the total amount a client bought\n> in the last 90 days. It's taking too long, when we must know it for\n> many clients, many times. So should I create summarization tables to\n> store this kind of stuff, update it with a trigger in daily basis\n> (for example), and solve this problem with one join?\n\nOne solution I use for this sort of thing is a summary table grouped by \ndate, and accurate until the start of today. Then, I check the summary \ntable and the \"live\" table for todays information and sum those.\n\n> Our database is not that big. The larger table has about 7.000.000\n> rows. About 50.000 clients, half of them active. All that I'd point\n> out above uses indexes for queries, etc. But even with this it's not\n> been fast enough. We have a Dell server for this (I know, the Dell\n> issue), a Dual Xeon 2.8, SCSI HD, 1 GB mem. Do we need better\n> hardware for our system?\n\nSwap one of your processors for more RAM and disks, perhaps.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 02 Dec 2004 18:34:56 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Normalization or Performance" }, { "msg_contents": "On Thu, Dec 02, 2004 at 03:05:55PM -0200, Alvaro Nunes Melo wrote:\n> Hi,\n> \n> Before writing this mail, I'd researched a little about this topic, and\n> got some opinions from guys like Fabien Pascal, who argues that logical\n> design should be separated from physical design, and other sources. As\n> this is not fact, I'm writing to you guys, that make things work in real\n> world.\n> \n> We started our first big (for our company standards) project always\n> thinking in normalization. But once we imported legacy data into the DB,\n> things got harder.\n> \n> One example is the clients status. A client might be active, inactive or\n> pending (for many reasons). We store all the status a client have since\n> it is in the system. To check what is the actual status of a client, we\n> get the last status from this historical status table. This take a\n> considerable time, so our best results were achieved building a\n> function that checks the status and indexing this function. The problem\n> is that indexed functions mus bu immutable, so as you can figure, if the\n> status change after the creation of the index, the retunr of the\n> function is still the same.\n> \n> What do you suggest for situations like this? Should I add a field to\n> clients table and store its actual status, keep storing data in the\n> historical table an control its changes with a trigger?\n \nIt seems you shouldn't have to resort to this. 
SELECT status FROM\nclient_status WHERE client_id = blah ORDER BY status_date DESC LIMIT 1\nshould be pretty fast given an index on client_id, status_date (which\nshould be able to be unique).\n\n> There are other situations that are making things difficult to us. For\n> example, one query must return the total amount a client bought in the\n> last 90 days. It's taking too long, when we must know it for many\n> clients, many times. So should I create summarization tables to store\n> this kind of stuff, update it with a trigger in daily basis (for\n> example), and solve this problem with one join?\n \nThis sounds like a more likely candidate for a summary table, though you\nmight not want to use a trigger. Unless you need absolutely up-to-date\ninformation it seems like a nightly process to update the totals would\nbe better and more efficient.\n\n> Our database is not that big. The larger table has about 7.000.000 rows.\n> About 50.000 clients, half of them active. All that I'd point out above\n> uses indexes for queries, etc. But even with this it's not been fast\n> enough. We have a Dell server for this (I know, the Dell issue), a Dual\n> Xeon 2.8, SCSI HD, 1 GB mem. Do we need better hardware for our system?\n\nIs all this on a single HD? That's going to be a huge bottleneck. You'll\nbe much better off with a mirrored partition for your WAL files and\neither raid5 or raid10 for the database itself. You'd probably be better\noff with more memory as well. If you're going to buy a new box instead\nof upgrade your existing one, I'd recommend going with an Opteron\nbecause of it's much better memory bandwidth.\n\nFor reference, stats.distributed.net is a dual Opteron 244 1.8GHz with\n4G ram, a 200G mirror for WAL and the system files and a 6x200G RAID10\nfor the database (all SATA drives). The largest table 120M rows and\n825,000 8k pages. I can scan 1/5th of that table via an index scan in\nabout a minute. (The schema can be found at\nhttp://minilink.org/cvs.distributed.net/l3.sql.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 2 Dec 2004 17:03:48 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Normalization or Performance" }, { "msg_contents": "Hi,\n\nwithout knowing much about your system, it seems to me that the current \nstatus of a client should be represented by a status code on the client \nrecord. History is the list of *past* status codes. The full history, \nincluding the current status of a client would be obtained using a union.\n\nI had a situation which might have some parallels with yours. nthis case the \ntables represented orders and receivals. The status of an order item was \nheld on a code in the receival itemss table (an order item can be received \nmany times). This is seems to me to be not normalized as the status was \nactually the status of the order item, not the receival. The receival just \ncaused the status of the order item to change. This arrangement required \nridiculously complex sql and resulted in poor performance. 
Moving the status \ncode to the order item and implementing a simple trigger on the receival \nitems table cleaned things up significantly.\n\nTo put it simply, if the current status of an order item is a simple \nattribute of the order item, then it should be in the order item table. The \nsame might be said for your client.\n\nThis is just my personal opinion though and I'm always open to alternative \nopinions, as I think you are.\n\nregards\nIain\n----- Original Message ----- \nFrom: \"Alvaro Nunes Melo\" <[email protected]>\nTo: \"Pgsql-Performance\" <[email protected]>\nSent: Friday, December 03, 2004 2:05 AM\nSubject: [PERFORM] Normalization or Performance\n\n\n> Hi,\n>\n> Before writing this mail, I'd researched a little about this topic, and\n> got some opinions from guys like Fabien Pascal, who argues that logical\n> design should be separated from physical design, and other sources. As\n> this is not fact, I'm writing to you guys, that make things work in real\n> world.\n>\n> We started our first big (for our company standards) project always\n> thinking in normalization. But once we imported legacy data into the DB,\n> things got harder.\n>\n> One example is the clients status. A client might be active, inactive or\n> pending (for many reasons). We store all the status a client have since\n> it is in the system. To check what is the actual status of a client, we\n> get the last status from this historical status table. This take a\n> considerable time, so our best results were achieved building a\n> function that checks the status and indexing this function. The problem\n> is that indexed functions mus bu immutable, so as you can figure, if the\n> status change after the creation of the index, the retunr of the\n> function is still the same.\n>\n> What do you suggest for situations like this? Should I add a field to\n> clients table and store its actual status, keep storing data in the\n> historical table an control its changes with a trigger?\n>\n> There are other situations that are making things difficult to us. For\n> example, one query must return the total amount a client bought in the\n> last 90 days. It's taking too long, when we must know it for many\n> clients, many times. So should I create summarization tables to store\n> this kind of stuff, update it with a trigger in daily basis (for\n> example), and solve this problem with one join?\n>\n> Our database is not that big. The larger table has about 7.000.000 rows.\n> About 50.000 clients, half of them active. All that I'd point out above\n> uses indexes for queries, etc. But even with this it's not been fast\n> enough. We have a Dell server for this (I know, the Dell issue), a Dual\n> Xeon 2.8, SCSI HD, 1 GB mem. Do we need better hardware for our system?\n>\n> -- \n> +---------------------------------------------------+\n> | Alvaro Nunes Melo Atua Sistemas de Informacao |\n> | [email protected] www.atua.com.br |\n> | UIN - 42722678 (54) 327-1044 |\n> +---------------------------------------------------+\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend \n\n", "msg_date": "Fri, 3 Dec 2004 10:44:32 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Normalization or Performance" } ]
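To make the current-status-column suggestion in this thread concrete, here is a minimal sketch. Every name in it (clients, client_status, cur_status, status_date) is invented for illustration rather than taken from the poster's schema, and the quoted function body is the 7.4/8.0-era plpgsql spelling:

CREATE TABLE clients (
    client_id  integer PRIMARY KEY,
    cur_status varchar(16)             -- denormalised copy of the latest status
);

CREATE TABLE client_status (
    client_id   integer NOT NULL REFERENCES clients,
    status      varchar(16) NOT NULL,
    status_date timestamp   NOT NULL DEFAULT now(),
    PRIMARY KEY (client_id, status_date)
);

CREATE OR REPLACE FUNCTION sync_cur_status() RETURNS trigger AS '
BEGIN
    -- each new history row becomes the current status
    UPDATE clients
       SET cur_status = NEW.status
     WHERE client_id  = NEW.client_id;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER client_status_sync
    AFTER INSERT ON client_status
    FOR EACH ROW EXECUTE PROCEDURE sync_cur_status();

If the extra column is not wanted, the alternative mentioned in the thread is simply an index on (client_id, status_date) and a lookup with ORDER BY status_date DESC LIMIT 1, which the primary key above already covers.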
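For the last-90-days totals, the daily summary table suggested in the thread could be as small as the following; orders, order_date, amount and sales_summary_90d are again made-up names, and the totals only need to be exact up to the start of today, with today's live rows added on top as described above:

CREATE TABLE sales_summary_90d (
    client_id integer PRIMARY KEY,
    total     numeric NOT NULL
);

-- refreshed nightly, e.g. from cron
BEGIN;
DELETE FROM sales_summary_90d;
INSERT INTO sales_summary_90d (client_id, total)
    SELECT client_id, sum(amount)
      FROM orders
     WHERE order_date > current_date - 90
     GROUP BY client_id;
COMMIT;

After that, the per-client question becomes a single primary-key lookup plus, at most, a cheap scan over today's orders.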
[ { "msg_contents": "Hi guys,\n\n I have 2 big databases on the same system. They are logically not \nconnected, separate.\n\n I want to keep them separate. Do you think it is better to use the same \nPostgreSQL server using a different location (on different disks) for each \none of them, or a separate PostgreSQL server for each of them, listening to \ndifferent ports and with data dirs residing on two different disks?\n\n I have a dual Pentium 4 1Ghz with 1.5 GB RAM and 6 36GB SCSI disks (160 \nUltra) on a 3 RAID 1 pairs configuration.\n\n Thank you (sorry I am in a hurry and did not have time to look properly \nin the mailing list archive - please forgive me).\n\nCiao,\n-Gabriele\n--\nGabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check \nmaintainer\nCurrent Location: Prato, Toscana, Italia\[email protected] | http://www.prato.linux.it/~gbartolini | ICQ#129221447\n > \"Leave every hope, ye who enter!\", Dante Alighieri, Divine Comedy, The \nInferno\n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.289 / Virus Database: 265.4.3 - Release Date: 26/11/2004\n\n\n", "msg_date": "Thu, 02 Dec 2004 18:16:31 +0100", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": true, "msg_subject": "Different location or different instance" } ]
[ { "msg_contents": "> On Wed, Dec 01, 2004 at 05:43:10PM -0500, Merlin Moncure wrote:\n> > Also, if choice of RAID controller is an option, I'd definitely\nsuggest\n> > 3ware. They are cheap, have excellent linux support (including open\n> > source drivers)\n> \n> The drivers are open source, but the management tools are not. (This\nis\n> quite\n> impractical for us running other distributions than Red Hat or SuSE,\nat\n> least.)\n\nAh, good point. FWIW, 3ware also supports FreeBSD. It is hard to\nunderstand why they don't open source their utilities...\n\nMerlin\n", "msg_date": "Thu, 2 Dec 2004 13:39:32 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Alternatives to Dell?" } ]
[ { "msg_contents": "Hi ,\n\n I have a table in my production database which has 500k rows and \nfrom the pg_class it shows the number of \"relpages\" of\naround 750K for this table, the same table copied to a test database \nshows \"relpages\" as 35k. I run vacuumdb on the whole\ndatabase (not on the table individually but the whole database) daily. \nI think because of this most of queries are slowing down which used to \nrun much faster before.\n Is there any way to fix this problem ?\n\nThanks!\nPallav\n\n", "msg_date": "Thu, 02 Dec 2004 14:11:46 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Poor Performance on a table " }, { "msg_contents": "On Thu, 02 Dec 2004 14:11:46 -0500\nPallav Kalva <[email protected]> wrote:\n\n> Hi ,\n> \n> I have a table in my production database which has 500k rows and \n> from the pg_class it shows the number of \"relpages\" of\n> around 750K for this table, the same table copied to a test database \n> shows \"relpages\" as 35k. I run vacuumdb on the whole\n> database (not on the table individually but the whole database) daily.\n> \n> I think because of this most of queries are slowing down which used to\n> \n> run much faster before.\n> Is there any way to fix this problem ?\n\n Try a VACUUM FULL, this will clean up unused space. You might also\n want to adjust your free space map so that you don't have to do FULL\n vacuums as often ( or at all ). It is controlled by max_fsm_pages\n and max_fsm_relations. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 2 Dec 2004 13:20:14 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on a table" }, { "msg_contents": "Hi Frank,\n\n Thanks! for the quick reply, here are my current default fsm setting .\n max_fsm_pages = 20000 and max_fsm_relations = 1000\n What are the appropriates settings for these parameters ? are there \nany guidlines ? postgres docs doesnt give much information on setting \nthese values.\n\nThanks!\nPallav\n\nFrank Wiles wrote:\n\n>On Thu, 02 Dec 2004 14:11:46 -0500\n>Pallav Kalva <[email protected]> wrote:\n>\n> \n>\n>>Hi ,\n>>\n>> I have a table in my production database which has 500k rows and \n>>from the pg_class it shows the number of \"relpages\" of\n>>around 750K for this table, the same table copied to a test database \n>>shows \"relpages\" as 35k. I run vacuumdb on the whole\n>>database (not on the table individually but the whole database) daily.\n>> \n>>I think because of this most of queries are slowing down which used to\n>>\n>>run much faster before.\n>> Is there any way to fix this problem ?\n>> \n>>\n>\n> Try a VACUUM FULL, this will clean up unused space. You might also\n> want to adjust your free space map so that you don't have to do FULL\n> vacuums as often ( or at all ). It is controlled by max_fsm_pages\n> and max_fsm_relations. 
\n>\n> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> \n>\n\n\n", "msg_date": "Thu, 02 Dec 2004 14:32:53 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Performance on a table" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> I have a table in my production database which has 500k rows and \n> from the pg_class it shows the number of \"relpages\" of\n> around 750K for this table, the same table copied to a test database \n> shows \"relpages\" as 35k. I run vacuumdb on the whole\n> database (not on the table individually but the whole database) daily. \n\nYou're obviously suffering serious table bloat :-(. Depending on how\nheavy the update traffic on that table is, it might be that once-a-day\nvacuum is simply not often enough. Another likely problem is that you\nneed to increase the FSM settings (how big is your whole database?)\n\n> Is there any way to fix this problem ?\n\nVACUUM FULL will fix the immediate problem. You might well find CLUSTER\nto be a faster alternative, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Dec 2004 14:36:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on a table " }, { "msg_contents": "On Thu, 02 Dec 2004 14:32:53 -0500\nPallav Kalva <[email protected]> wrote:\n\n> Hi Frank,\n> \n> Thanks! for the quick reply, here are my current default fsm\n> setting .\n> max_fsm_pages = 20000 and max_fsm_relations = 1000\n> What are the appropriates settings for these parameters ? are there\n> \n> any guidlines ? postgres docs doesnt give much information on setting \n> these values.\n\n There really aren't any guidelines on these because it really depends\n on your data and how you use the database. If you insert/update 99%\n of the time and only delete 1% of the time, the defaults are probably\n perfect for you. Probably up to a 80% insert/update, 20% delete\n ratio.\n\n If however you're constantly deleting entries from your database, I\n would suggest slowly raising those values in step with each other \n over the course a few weeks and see where you're at. It is really\n a matter of trial an error. \n\n With my databases, I can afford to do VACUUM FULLs fairly often\n so I typically don't need to increase my fsm values. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 2 Dec 2004 13:38:09 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on a table" }, { "msg_contents": "Tom Lane wrote:\n\n>Pallav Kalva <[email protected]> writes:\n> \n>\n>> I have a table in my production database which has 500k rows and \n>>from the pg_class it shows the number of \"relpages\" of\n>>around 750K for this table, the same table copied to a test database \n>>shows \"relpages\" as 35k. I run vacuumdb on the whole\n>>database (not on the table individually but the whole database) daily. \n>> \n>>\n>\n>You're obviously suffering serious table bloat :-(. 
Depending on how\n>heavy the update traffic on that table is, it might be that once-a-day\n>vacuum is simply not often enough. Another likely problem is that you\n>need to increase the FSM settings (how big is your whole database?)\n>\n Yes, you are right this table is heavily updated, the whole database \nsize is of 1.5 gigs, right now i have default fsm settings how much \nshould i increase max_fsm_pages and max_fsm_relations to ?\n\n>\n> \n>\n>> Is there any way to fix this problem ?\n>> \n>>\n>\n>VACUUM FULL will fix the immediate problem. You might well find CLUSTER\n>to be a faster alternative, though.\n> \n>\nI am hesitant to do vacuum full on the table because it is one of the \ncrucial table in our application and we cant afford to have exclusive \nlock on this table for long time. we can afford not to have writes and \nupdates but we need atleast reads on this table .\nHow does CLUSTER benefit me ? excuse me, i am new to this feature.\n\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n", "msg_date": "Thu, 02 Dec 2004 14:54:19 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Performance on a table" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> Tom Lane wrote:\n>> Another likely problem is that you\n>> need to increase the FSM settings (how big is your whole database?)\n>> \n> Yes, you are right this table is heavily updated, the whole database \n> size is of 1.5 gigs, right now i have default fsm settings how much \n> should i increase max_fsm_pages and max_fsm_relations to ?\n\nA lot --- factor of 10 at least. Try \"vacuum verbose\" and look at the\nlast couple lines of output.\n\n>> VACUUM FULL will fix the immediate problem. You might well find CLUSTER\n>> to be a faster alternative, though.\n\n> How does CLUSTER benefit me ?\n\nIt'll probably take less time to rebuild the table. VACUUM FULL is\nreally optimized for the case of moving a relatively small fraction\nof the table around, but it sounds like you need a complete rebuild.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Dec 2004 15:10:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on a table " }, { "msg_contents": "Pallav,\n\n> Yes, you are right this table is heavily updated, the whole database\n> size is of 1.5 gigs, right now i have default fsm settings how much\n> should i increase max_fsm_pages and max_fsm_relations to ?\n\n1) fix the table (see below)\n2) run the system for another day\n3) run VACUUM FULL ANALYZE VERBOSE\n4) if you're running 7.4 or better, at the end you'll see a total of FSM pages \nneeded. If you're running something earlier, you'll need to get out a \ncalculator and do the math yourself.\n\nOf course, if you're getting heavy update/delete activity, vacuuming more \noften might be wise. Post the output of the above command if you have \nquestions.\n\n> I am hesitant to do vacuum full on the table because it is one of the\n> crucial table in our application and we cant afford to have exclusive\n> lock on this table for long time. we can afford not to have writes and\n> updates but we need atleast reads on this table .\n\nYou're going to have to do at least one or the table will just keep getting \nworse. Schedule it for 3am. 
Once you've set FSM correctly, and are \nvacuuming with the right frequency, the need to run VACUUM FULL will go away.\n\nOh, and it's likely that any indexes on the table need to be REINDEXed.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 2 Dec 2004 21:57:00 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on a table" } ]
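Pulling the advice in this thread together into one concrete maintenance pass (the table and index names below are placeholders for the poster's real ones):

-- the last lines of output report how large the free space map needs to be
VACUUM VERBOSE;

-- rewrite the bloated table compactly, in index order (7.4/8.0 spelling);
-- takes an exclusive lock for the duration, but also rebuilds the indexes
CLUSTER bloated_table_pkey ON bloated_table;
ANALYZE bloated_table;

-- if VACUUM FULL is used instead of CLUSTER, the indexes stay bloated
-- and need a separate rebuild:
-- REINDEX TABLE bloated_table;

With a table that has grown from ~35k to ~750k pages, the CLUSTER route is usually much quicker than VACUUM FULL, which is what prompts the suggestion above.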
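On the free space map itself: the default max_fsm_pages = 20000 cannot track a table that has bloated to ~750k pages, so space freed by the daily vacuum is simply lost again. Rough starting values for postgresql.conf, to be corrected against the totals VACUUM VERBOSE prints, might be:

max_fsm_pages = 200000       # must exceed the pages-with-free-space total reported by VACUUM VERBOSE
max_fsm_relations = 1000     # number of tables and indexes the map can track

Both settings take effect only after a postmaster restart.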
[ { "msg_contents": "> > This was an intersting Win32/linux comparison. I expected\n> > Linux to scale better, but I was surprised how poorly XP\n> > scaled. It reinforces our perception that Win32 is for low\n> > traffic servers.\n> \n> That's a bit harsh given the lack of any further investigation so far\n> isn't it? Win32 can run perfectly well with other DBMSs with hundreds\nof\n> users.\n> \n> Any chance you can profile your test runs Merlin?\n> \n\nOk, I am starting to strongly suspect the statistics collector of\nvarious kinds of malfeasance. Right now I am running with the stats\ncollector off and I am getting much better scalability and much more\ndeterministic query times...that is, even when under moderate to heavy\nload query running times are proportional to the number of users on the\nsystem...the system is purring like a kitten right now with over a 100\nusers logged in.\n\nThis coupled with the fact that I was getting random restarts with the\ncollector process makes me think that there is some kind of issue with\nthe ipc between the collector and the backends that is blocking and/or\nis being improperly handled after failure.\n\nI was running with statement level stats on under scenarios with\nextremely high levels of query activity where the server might be\nprocessing 500-1500 queries/second or more spread out over multiple\nbackends.\n\nI'll look into this issue more over next several days. I'll dip back\ninto the code and see if I can come up with a better\nanswer...unfortunately I can't run the stats collector on until I can\nschedule another load test.\n\nMerlin\n", "msg_date": "Thu, 2 Dec 2004 14:21:53 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] scalability issues on win32" }, { "msg_contents": "Merlin Moncure wrote:\n> > > This was an intersting Win32/linux comparison. I expected\n> > > Linux to scale better, but I was surprised how poorly XP\n> > > scaled. It reinforces our perception that Win32 is for low\n> > > traffic servers.\n> > \n> > That's a bit harsh given the lack of any further investigation so far\n> > isn't it? Win32 can run perfectly well with other DBMSs with hundreds\n> of\n> > users.\n> > \n> > Any chance you can profile your test runs Merlin?\n> > \n> \n> Ok, I am starting to strongly suspect the statistics collector of\n> various kinds of malfeasance. Right now I am running with the stats\n> collector off and I am getting much better scalability and much more\n> deterministic query times...that is, even when under moderate to heavy\n> load query running times are proportional to the number of users on the\n> system...the system is purring like a kitten right now with over a 100\n> users logged in.\n> \n> This coupled with the fact that I was getting random restarts with the\n> collector process makes me think that there is some kind of issue with\n> the ipc between the collector and the backends that is blocking and/or\n> is being improperly handled after failure.\n> \n> I was running with statement level stats on under scenarios with\n> extremely high levels of query activity where the server might be\n> processing 500-1500 queries/second or more spread out over multiple\n> backends.\n> \n> I'll look into this issue more over next several days. I'll dip back\n> into the code and see if I can come up with a better\n> answer...unfortunately I can't run the stats collector on until I can\n> schedule another load test.\n\nOK, the big problem is that we are nearing RC1. 
We would like some\nfeedback on this as soon as possible. A major Win32 cleanup for this\ncould delay the 8.0 release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 2 Dec 2004 14:28:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] scalability issues on win32" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Merlin Moncure wrote:\n>> Ok, I am starting to strongly suspect the statistics collector of\n>> various kinds of malfeasance.\n\n> OK, the big problem is that we are nearing RC1. We would like some\n> feedback on this as soon as possible. A major Win32 cleanup for this\n> could delay the 8.0 release.\n\nI would say that it shouldn't delay the release --- worst case, we say\n\"the collector doesn't work very well under Win32 yet\". It's probably\nnot the only part of the system we'll find needs work under Win32.\n\nThis is moot if Merlin can find some simple fixable bug, but I'm worried\nthat doing anything significant might require major work.\n\nBTW, what about the issue we just identified with piperead() failing to\nset errno on Windows? That would certainly account for the \"random\ncollector restarts\" complaint ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Dec 2004 14:42:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] scalability issues on win32 " } ]
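For anyone trying to reproduce Merlin's results, the collector behaviour he is toggling lives in a few postgresql.conf settings (these are the 8.0-era names); the "statement level stats" he mentions maps most closely to stats_command_string:

stats_start_collector = true      # master switch for the collector process
stats_command_string  = false     # report each backend's currently running query
stats_block_level     = false     # per-block access counters
stats_row_level       = false     # per-row access counters

Turning these off one at a time under the same load is a cheap way to see which part is responsible before digging into the pipe/IPC code.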
[ { "msg_contents": "Hi\n\nI see this article about DB2\nhttp://www-106.ibm.com/developerworks/db2/library/techarticle/dm \n-0411rielau/?ca=dgr-lnxw06SQL-Speed\n\nThe listing 2 example:\n1 SELECT D_TAX, D_NEXT_O_ID\n2 INTO :dist_tax , :next_o_id\n3 FROM OLD TABLE ( UPDATE DISTRICT\n4 SET D_NEXT_O_ID = D_NEXT_O_ID + 1\n5 WHERE D_W_ID = :w_id\n6 AND D_ID = :d_id\n7 ) AS OT\n\nI am not a expert in Rule System.\nBut I ad a look to\nhttp://www.postgresql.org/docs/7.4/static/rules-update.html\nAnd it seems possible in PostgreSQL to build non standard SQL query to \ndo thing like listing 2.\n\nI would like to know from an \"expert\" of PostgreSQL if such query is \nreally a new stuff to DB2 as the artcile states ? or if PostgreSQL has \nalready the same type of power ?\n\nCordialement,\nJean-G�rard Pailloncy\n", "msg_date": "Fri, 3 Dec 2004 21:38:49 +0100", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "DB2 feature" }, { "msg_contents": "Jean-Gerard,\n\n> The listing 2 example:\n> 1  SELECT D_TAX, D_NEXT_O_ID\n> 2     INTO :dist_tax , :next_o_id\n> 3     FROM OLD TABLE ( UPDATE DISTRICT\n> 4                       SET  D_NEXT_O_ID = D_NEXT_O_ID + 1\n> 5                       WHERE D_W_ID = :w_id\n> 6                         AND D_ID = :d_id\n> 7                     ) AS OT\n\nA lot of this is non-standard SQL, so I can't really tell what DB2 is doing \nhere. Can you explain it?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 3 Dec 2004 13:02:46 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB2 feature" }, { "msg_contents": "Clinging to sanity, [email protected] (Pailloncy Jean-G�rard) mumbled into her beard:\n> I see this article about DB2\n> http://www-106.ibm.com/developerworks/db2/library/techarticle/dm\n> -0411rielau/?ca=dgr-lnxw06SQL-Speed\n>\n> The listing 2 example:\n> 1 SELECT D_TAX, D_NEXT_O_ID\n> 2 INTO :dist_tax , :next_o_id\n> 3 FROM OLD TABLE ( UPDATE DISTRICT\n> 4 SET D_NEXT_O_ID = D_NEXT_O_ID + 1\n> 5 WHERE D_W_ID = :w_id\n> 6 AND D_ID = :d_id\n> 7 ) AS OT\n>\n> I am not a expert in Rule System.\n> But I ad a look to\n> http://www.postgresql.org/docs/7.4/static/rules-update.html\n> And it seems possible in PostgreSQL to build non standard SQL query to\n> do thing like listing 2.\n>\n> I would like to know from an \"expert\" of PostgreSQL if such query is\n> really a new stuff to DB2 as the artcile states ? or if PostgreSQL has\n> already the same type of power ?\n\nThis feature (which evidently was derived from something in Sybase,\nwhich Microsoft therefore brought into their version of SQL Server)\nallows the Gentle User to do a mass update on a table (what's\nparenthesized), and then do some manipulations on the rows that ware\naffected by that mass update, where OLD TABLE returns the _former_\nstate of rows that were updated/deleted, and NEW TABLE would return\nthe _new_ state of rows that were inserted/updated.\n\nIt would be possible to do something analagous using rules, but the\nimplementation would look VERY different from this.\n\nIn effect, you would have to add, temporarily, a rule that does the\nthing akin to \"select d_tax, d_next_o_id into some table\" for the\nthree cases:\n\n 1. on insert, do something with NEW.D_TAX, NEW.D_NEXT_O_ID\n to correspond to the insert case;\n\n 2. on update, do something with NEW.D_TAX, NEW.D_NEXT_O_ID to\n correspond with an update, doing something with the NEW values;\n\n 3. 
on update, do something with OLD.D_TAX, OLD.D_NEXT_O_ID to\n correspond with an update, doing something with the OLD values;\n\n 4. on delete, do something with OLD.D_TAX, OLD.D_NEXT_O_ID...\n \nYou'd create the a rule to do things row-by-row.\n\nThe efficiency of this actually ought to be pretty good; such rules\nwould be tightly firing over and over each time a row was affected by\nthe query, and since the data being processed would be in cache, it\nwould be eminently quickly accessible.\n\nBut, compared to the DB2 approach, it involves creating and dropping\nrules on the fly...\n-- \n\"cbbrowne\",\"@\",\"acm.org\"\nhttp://www.ntlug.org/~cbbrowne/linuxdistributions.html\nSigns of a Klingon Programmer - 11. \"By filing this bug report you\nhave challenged the honor of my family. Prepare to die!\"\n", "msg_date": "Fri, 03 Dec 2004 17:27:14 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB2 feature" }, { "msg_contents": ">> The listing 2 example:\n>> 1 �SELECT D_TAX, D_NEXT_O_ID\n>> 2 � � INTO :dist_tax , :next_o_id\n>> 3 � � FROM OLD TABLE ( UPDATE DISTRICT\n>> 4 � � � � � � � � � � � SET �D_NEXT_O_ID = D_NEXT_O_ID + 1\n>> 5 � � � � � � � � � � � WHERE D_W_ID = :w_id\n>> 6 � � � � � � � � � � � � AND D_ID = :d_id\n>> 7 � � � � � � � � � � ) AS OT\n>\n> A lot of this is non-standard SQL, so I can't really tell what DB2 is \n> doing\n> here. Can you explain it?\n\nQuote from the article at:\nhttp://www-106.ibm.com/developerworks/db2/library/techarticle/dm \n-0411rielau/?ca=dgr-lnxw06SQL-Speed\n> First, DB2 deals with the DISTRICT table. Data needs to be returned \n> and an update needs to be performed. Conventional wisdom states that \n> this requires 2 SQL statements, and that the UPDATE ought to be done \n> prior to the SELECT; otherwise deadlocks may occur as concurrency \n> increases.\n>\n> DB2 however supports a new SQL feature which is in the process of \n> being standardized. This feature allows access to what is known as \n> transition tables in triggers. The OLD TABLE transition table holds \n> the original state of the affected rows before they are processed by \n> the UPDATE or DELETE statement. The NEW TABLE transition table holds \n> the affected rows immediately after an INSERT or UPDATE was processed. \n> That is the state prior to when AFTER triggers fire. Users with a \n> Microsoft or Sybase background may know these tables by the names \n> DELETED and INSERTED.\n\nSo, if I understand they use only ONE query to get the UPDATE and the \nSELECT of the old value.\n\nCordialement,\nJean-G�rard Pailloncy\n\n", "msg_date": "Fri, 3 Dec 2004 23:48:10 +0100", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DB2 feature" } ]
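Short of creating rules on the fly, the usual PostgreSQL way to get this particular DB2 behaviour is a small plpgsql function that locks the row, captures the old values, and then applies the update in one round trip. This is only a sketch against the TPC-C-style DISTRICT table from the article (column types in the call below are guesses); PostgreSQL 7.4/8.0 has no single-statement equivalent of OLD TABLE:

CREATE OR REPLACE FUNCTION take_next_o_id(integer, integer)
RETURNS record AS '
DECLARE
    ot record;
BEGIN
    SELECT d_tax, d_next_o_id INTO ot
      FROM district
     WHERE d_w_id = $1 AND d_id = $2
       FOR UPDATE;          -- lock the row so the read and the update act atomically

    UPDATE district
       SET d_next_o_id = d_next_o_id + 1
     WHERE d_w_id = $1 AND d_id = $2;

    RETURN ot;              -- the pre-update D_TAX and D_NEXT_O_ID
END;
' LANGUAGE plpgsql;

It has to be called with a column definition list, e.g. SELECT * FROM take_next_o_id(1, 1) AS t(d_tax numeric, d_next_o_id integer), because the function is declared RETURNS record; declaring a named composite type avoids that. Either way this is two statements inside one function call, not the single self-referencing statement DB2 runs.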
[ { "msg_contents": "Hi Folks,\n\n\tI have two queries that are of the form :\nselect ... from ... where ... in (list1) AND ... in (list2). The two\nqueries differ only in the size of list2 by 1, but their performances\nare quite different. Query2 runs much faster than Query1. The queries\nare:\n\nQuery 1:\nSELECT svm,pmodel_id,pseq_id FROM paprospect2 WHERE pseq_id in\n(8880,10507,10600,10605,10724,10852 ...) AND pmodel_id in \n(4757,8221,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0);\n\nQuery 2:\nSELECT svm,pmodel_id,pseq_id FROM paprospect2 WHERE pseq_id in\n(8880,10507,10600,10605,10724,10852 ...) AND pmodel_id in \n(4757,8221,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0);\n\n=>Notice the extra zero at the end of query2. The size of list1 is 800\nand size of list2 is 49 in case of query1 and 50 in case of query2\n\nThe Query Plans are:\n\nQUERY PLAN 1:\n\nIndex Scan using paprospect2_search1, paprospect2_search1,\npaprospect2_search1, paprospect2_search1, paprospect2_search1,\npaprospect2_search1, paprospect2_search1, paprospect2_search1 ..........\n(cost=0.00..10959914.42 rows=45760 width=12)\n\n Index Cond: ((pmodel_id = 4757) OR (pmodel_id = 8221) OR (pmodel_id =\n0) OR (pmodel_id = 0) OR (pmodel_id = 0) OR (pmodel_id = 0) OR\n(pmodel_id = 0) OR (pmodel_id = 0) OR ...)\n\n Filter: ((pseq_id = 0) OR (pseq_id = 8880) OR (pseq_id = 10507) OR\n(pseq_id = 10600) OR (pseq_id = 10605) OR ...)\n\nQUERY PLAN 2:\n\nIndex Scan using\npaprospect2_pseq_id_params_id,paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id .......\n(cost=0.00..11050741.64 rows=46520 width=12)\n\n Index Cond: ((pseq_id = 0) OR (pseq_id = 8880) OR (pseq_id = 10507)\nOR (pseq_id = 10600) OR (pseq_id = 10605) OR (pseq_id = 10724) OR\n(pseq_id = 10852) OR (pseq_id = 10905) OR (pseq_id = 10945) OR (pseq_id\n= 10964)....)\n\nFilter: ((pmodel_id = 4757) OR (pmodel_id = 8221) OR (pmodel_id = 0) OR\n(pmodel_id = 0) OR (pmodel_id = 0) OR ...)\n\n=> Notice that the Index, Index Cond. and Filter are different in the\ntwo plans.\nIn short the query plans and performance are quite different although\nthe queries are similar. Can you please explain the difference in\nperformance? Thank you,\n\n-Kiran\n\n", "msg_date": "Fri, 03 Dec 2004 14:12:26 -0800", "msg_from": "Kiran Mukhyala <[email protected]>", "msg_from_op": true, "msg_subject": "Performance difference in similar queries" } ]
[ { "msg_contents": "Hi Folks,\n\n I have two queries that are of the form :\nselect ... from ... where ... in (list1) AND ... in (list2). The two\nqueries differ only in the size of list2 by 1, but their performances\nare quite different. Query2 runs much faster than Query1. The queries\nare:\n\nQuery 1:\nSELECT svm,pmodel_id,pseq_id FROM paprospect2 WHERE pseq_id in\n(8880,10507,10600,10605,10724,10852 ...) AND pmodel_id in \n(4757,8221,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0);\n\nQuery 2:\nSELECT svm,pmodel_id,pseq_id FROM paprospect2 WHERE pseq_id in\n(8880,10507,10600,10605,10724,10852 ...) AND pmodel_id in \n(4757,8221,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0);\n\n=>Notice the extra zero at the end of query2. The size of list1 is 800\nand size of list2 is 49 in case of query1 and 50 in case of query2\n\nThe Query Plans are:\n\nQUERY PLAN 1:\n\nIndex Scan using paprospect2_search1, paprospect2_search1,\npaprospect2_search1, paprospect2_search1, paprospect2_search1,\npaprospect2_search1, paprospect2_search1, paprospect2_search1 ..........\n(cost=0.00..10959914.42 rows=45760 width=12)\n\n Index Cond: ((pmodel_id = 4757) OR (pmodel_id = 8221) OR (pmodel_id =\n0) OR (pmodel_id = 0) OR (pmodel_id = 0) OR (pmodel_id = 0) OR\n(pmodel_id = 0) OR (pmodel_id = 0) OR ...)\n\n Filter: ((pseq_id = 0) OR (pseq_id = 8880) OR (pseq_id = 10507) OR\n(pseq_id = 10600) OR (pseq_id = 10605) OR ...)\n\nQUERY PLAN 2:\n\nIndex Scan using\npaprospect2_pseq_id_params_id,paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id,\npaprospect2_pseq_id_params_id, paprospect2_pseq_id_params_id .......\n(cost=0.00..11050741.64 rows=46520 width=12)\n\n Index Cond: ((pseq_id = 0) OR (pseq_id = 8880) OR (pseq_id = 10507)\nOR (pseq_id = 10600) OR (pseq_id = 10605) OR (pseq_id = 10724) OR\n(pseq_id = 10852) OR (pseq_id = 10905) OR (pseq_id = 10945) OR (pseq_id\n= 10964)....)\n\nFilter: ((pmodel_id = 4757) OR (pmodel_id = 8221) OR (pmodel_id = 0) OR\n(pmodel_id = 0) OR (pmodel_id = 0) OR ...)\n\n=> Notice that the Index, Index Cond. and Filter are different in the\ntwo plans.\nIn short the query plans and performance are quite different although\nthe queries are similar. Can you please explain the difference in\nperformance? Thank you,\n\n-Kiran\n\n\n\n\n", "msg_date": "Fri, 03 Dec 2004 14:31:11 -0800", "msg_from": "Kiran Mukhyala <[email protected]>", "msg_from_op": true, "msg_subject": "Performance difference in similar queries" } ]
[ { "msg_contents": "(Originally asked in [General], realized that it would probably be \nbetter asked in [Perform]:\n\nI am curious as to how much overhead building a dynamic query in a\ntrigger adds to the process. The example:\n\nHave a list of subcontractors, each of which gets unique pricing. There \nis a total of roughly 100,000 items available and some 100 \nsubcontractors. The 2 design choices would be 100 tables (one for each \nsub) at 100,000 rows or 1 table with 10,000,000 rows.\n\nChoice 1:\ntable has item number (indexed) and price\n\nChoice 2:\ntable has subcontractor id, item number, and price; index on\n(subcontractorid, item number).\n\nTable of orders would have a trigger to insert line item cost:\n-----------------------------------\nTrigger Choice 1:\nSelect into thetable lookupprice from subcontractors where\nsubcontractorid = NEW.subcontractorid;\n\nthequery := ''Select price from '' || thetable.lookupprice || '' where\nitemnumber = '' || NEW.itemnumber;\n\nFOR therow IN EXECUTE thequery LOOP\n\tNEW.itemcost := therow.price;\nEND LOOP;\nRETURN NEW;\n-----------------------------------\nTrigger Choice 2:\nSelect into thetable lookupprice from subcontractors where\nsubcontractorid = NEW.subcontractorid;\n\nSelect into therow price from mastertable where subcontractorid =\nNEW.subcontractorid and itemnumber = NEW.itemnumber;\n\nNEW.itemcost := therow.price;\nRETURN NEW;\n-----------------------------------\n\nDoing a select from the command line, the mastertable method (with id\nand partno index) is faster than looking up a single item in a named\ntable (with partno index). At what point would Trigger Choice 2 fall\nbehind performance with Trigger Choice 1 (if ever)? Is there a way to\nanalyze the performance of dynamic queries? If I had only 10\nsubcontractors or if I had 1000 subcontractors, at what point is the\noverhead of building/executing a dynamic query negated by the amount of\ntime to look up both the subid and part number in one massive table?\n\nThanks,\n\nSven\n", "msg_date": "Fri, 03 Dec 2004 22:46:45 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": true, "msg_subject": "Overhead of dynamic query in trigger" } ]
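Broadly speaking, PL/pgSQL prepares and caches the plan for a static SELECT the first time the function runs, while EXECUTE has to build and plan the query string on every row, so Trigger Choice 1 pays per-row planning overhead on top of the lookup itself. The simplest way to answer the question empirically is to time a batch of inserts against each trigger variant with \timing in psql. A rough sketch of the static variant, collapsed into a single SELECT ... INTO, follows; only the table and column names come from the message above, the function, trigger and target-table names are made up for illustration:

    CREATE OR REPLACE FUNCTION set_itemcost() RETURNS trigger AS $$
    BEGIN
        -- one indexed lookup against the combined price table (Choice 2)
        SELECT price INTO NEW.itemcost
          FROM mastertable
         WHERE subcontractorid = NEW.subcontractorid
           AND itemnumber = NEW.itemnumber;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_itemcost
        BEFORE INSERT ON orders
        FOR EACH ROW EXECUTE PROCEDURE set_itemcost();

    -- then, in psql:
    --   \timing
    --   insert a few thousand test rows with each trigger variant installed
    --   and compare the elapsed times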
[ { "msg_contents": "Hi !\n\nI need to insert 500.000 records on a table frequently. It´s a bulk \ninsertion from my applicatoin.\nI am with a very poor performance. PostgreSQL insert very fast until the \ntuple 200.000 and after it the insertion starts to be really slow.\nI am seeing on the log and there is a lot of transaction logs, something \nlike :\n\n2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000012\"\n2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000013\"\n2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000011\"\n2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000015\"\n2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000014\"\n2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000016\"\n2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000017\"\n2004-12-04 11:24:10 LOG: recycled transaction log file \"0000000600000018\"\n\nHow can I configure PostgreSQL to have a better performance on this bulk \ninsertions ? I already increased the memory values.\n\nMy data:\nConectiva linux kernel 2.6.9\nPostgreSQL 7.4.6 - 1,5gb memory\nmax_connections = 30\nshared_buffers = 30000\nsort_mem = 32768\nvacuum_mem = 32768\nmax_fsm_pages = 30000\nmax_fsm_relations = 1500\n\nThe other configurations are default.\n\n\nCheers,\n\nRodrigo Carvalhaes \n\n\n", "msg_date": "Sat, 04 Dec 2004 11:39:39 -0200", "msg_from": "Grupos <[email protected]>", "msg_from_op": true, "msg_subject": "Improve BULK insertion" }, { "msg_contents": "In the last exciting episode, [email protected] (Grupos) wrote:\n> Hi !\n>\n> I need to insert 500.000 records on a table frequently. It�s a bulk\n> insertion from my applicatoin.\n> I am with a very poor performance. PostgreSQL insert very fast until\n> the tuple 200.000 and after it the insertion starts to be really slow.\n> I am seeing on the log and there is a lot of transaction logs,\n> something like :\n>\n> 2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000012\"\n> 2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000013\"\n> 2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000011\"\n> 2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000015\"\n> 2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000014\"\n> 2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000016\"\n> 2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000017\"\n> 2004-12-04 11:24:10 LOG: recycled transaction log file \"0000000600000018\"\n\nIt is entirely normal for there to be a lot of transaction log file\nrecycling when bulk inserts are taking place; that goes through a lot\nof transaction logs.\n\n> How can I configure PostgreSQL to have a better performance on this\n> bulk insertions ? I already increased the memory values.\n\nMemory is, as likely as not, NOT the issue.\n\nTwo questions:\n\n 1. How are you doing the inserts? Via INSERT statements? Or\n via COPY statements? What sort of transaction grouping\n is involved?\n\n COPY is way faster than INSERT, and grouping plenty of updates\n into a single transaction is generally a \"win.\"\n\n 2. What is the schema like? Does the table have a foreign key\n constraint? 
Does it have a bunch of indices?\n\n If there should eventually be lots of indices, it tends to be\n faster to create the table with none/minimal indices, and add\n indexes afterwards, as long as your \"load\" process can be trusted\n to not break \"unique\" constraints...\n\n If there is some secondary table with a foreign key constraint,\n and _that_ table is growing, it is possible that a sequential\n scan is being used to search the secondary table where, if you\n did an ANALYZE on that table, an index scan would be preferred\n once it grew to larger size...\n\nThere isn't a particular reason for PostgreSQL to \"hit a wall\" upon\nseeing 200K records; I and coworkers routinely load database dumps\nthat have millions of (sometimes pretty fat) records, and they don't\n\"choke.\" That's true whether talking about loading things onto my\n(somewhat wimpy) desktop PC, or a SMP Xeon system with a small RAID\narray, or higher end stuff involving high end SMP and EMC disk arrays.\nThe latter obviously being orders of magnitude faster than desktop\nequipment :-).\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/unix.html\nRules of the Evil Overlord #207. \"Employees will have conjugal visit\ntrailers which they may use provided they call in a replacement and\nsign out on the timesheet. Given this, anyone caught making out in a\ncloset while leaving their station unmonitored will be shot.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Sat, 04 Dec 2004 09:48:15 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve BULK insertion" }, { "msg_contents": "Rodrigo,\n\n> I need to insert 500.000 records on a table frequently. It´s a bulk\n> insertion from my applicatoin.\n> I am with a very poor performance. PostgreSQL insert very fast until the\n> tuple 200.000 and after it the insertion starts to be really slow.\n> I am seeing on the log and there is a lot of transaction logs, something\n\nIn addition to what Chris Browne asked:\nWhat's your transaction log setup? Are your database transaction logs on a \nseperate disk resource? What is checkpoint_segments and checkpoint_timeout \nset to?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 4 Dec 2004 11:14:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve BULK insertion" }, { "msg_contents": "Hi!\n\n1. I am doing the inserts using pg_restore. The dump was created using \npg_dump and the standard format (copy statements)\n2. See below the table schema. There are only 7 indexes. \n3. 
My transaction log configuration are : checkpoint_segments = 3 and \ncheckpoint_timeout = 300 and my transaction logs are on the same disk .\n\nI know that I can increase the performance separating the transaction \nlogs and making a RAID 5 array BUT I am really curious about WHY this \nperformance is so poor and HOW can I try to improve on this actual \nmachine because actualy this inserts are taking around 90 minutes!!!\n\nCheers!\n\nRodrigo\n\ndadosadv=# \\d si2010\n Table \"public.si2010\"\n Column | Type | Modifiers\n------------+------------------+---------------------------------------------------------------------\n i2_filial | character(2) | not null default ' '::bpchar\n i2_num | character(10) | not null default ' '::bpchar\n i2_linha | character(2) | not null default ' '::bpchar\n i2_data | character(8) | not null default ' '::bpchar\n i2_dc | character(1) | not null default ' '::bpchar\n i2_debito | character(20) | not null default ' \n'::bpchar\n i2_dcd | character(1) | not null default ' '::bpchar\n i2_credito | character(20) | not null default ' \n'::bpchar\n i2_dcc | character(1) | not null default ' '::bpchar\n i2_moedas | character(5) | not null default ' '::bpchar\n i2_valor | double precision | not null default 0.0\n i2_hp | character(3) | not null default ' '::bpchar\n i2_hist | character(40) | not null default \n' '::bpchar\n i2_ccd | character(9) | not null default ' '::bpchar\n i2_ccc | character(9) | not null default ' '::bpchar\n i2_ativdeb | character(6) | not null default ' '::bpchar\n i2_ativcrd | character(6) | not null default ' '::bpchar\n i2_vlmoed2 | double precision | not null default 0.0\n i2_vlmoed3 | double precision | not null default 0.0\n i2_vlmoed4 | double precision | not null default 0.0\n i2_vlmoed5 | double precision | not null default 0.0\n i2_dtvenc | character(8) | not null default ' '::bpchar\n i2_criter | character(4) | not null default ' '::bpchar\n i2_rotina | character(8) | not null default ' '::bpchar\n i2_periodo | character(6) | not null default ' '::bpchar\n i2_listado | character(1) | not null default ' '::bpchar\n i2_origem | character(40) | not null default \n' '::bpchar\n i2_permat | character(4) | not null default ' '::bpchar\n i2_filorig | character(2) | not null default ' '::bpchar\n i2_intercp | character(1) | not null default ' '::bpchar\n i2_identcp | character(12) | not null default ' '::bpchar\n i2_lote | character(4) | not null default ' '::bpchar\n i2_doc | character(6) | not null default ' '::bpchar\n i2_emporig | character(2) | not null default ' '::bpchar\n i2_lp | character(3) | not null default ' '::bpchar\n i2_itemd | character(9) | not null default ' '::bpchar\n i2_itemc | character(9) | not null default ' '::bpchar\n i2_prelan | character(1) | not null default ' '::bpchar\n i2_tipo | character(2) | not null default ' '::bpchar\n i2_dcc | character(1) | not null default ' '::bpchar\n i2_moedas | character(5) | not null default ' '::bpchar\n i2_valor | double precision | not null default 0.0\n i2_hp | character(3) | not null default ' '::bpchar\n i2_hist | character(40) | not null default \n' '::bpchar\n i2_ccd | character(9) | not null default ' '::bpchar\n i2_ccc | character(9) | not null default ' '::bpchar\n i2_ativdeb | character(6) | not null default ' '::bpchar\n i2_ativcrd | character(6) | not null default ' '::bpchar\n i2_vlmoed2 | double precision | not null default 0.0\n i2_vlmoed3 | double precision | not null default 0.0\n i2_vlmoed4 | double precision | not null default 0.0\n i2_vlmoed5 | double 
precision | not null default 0.0\n i2_dtvenc | character(8) | not null default ' '::bpchar\n i2_criter | character(4) | not null default ' '::bpchar\n i2_rotina | character(8) | not null default ' '::bpchar\n i2_periodo | character(6) | not null default ' '::bpchar\n i2_listado | character(1) | not null default ' '::bpchar\n i2_origem | character(40) | not null default \n' '::bpchar\n i2_permat | character(4) | not null default ' '::bpchar\n i2_filorig | character(2) | not null default ' '::bpchar\n i2_intercp | character(1) | not null default ' '::bpchar\n i2_identcp | character(12) | not null default ' '::bpchar\n i2_lote | character(4) | not null default ' '::bpchar\n i2_doc | character(6) | not null default ' '::bpchar\n i2_emporig | character(2) | not null default ' '::bpchar\n i2_lp | character(3) | not null default ' '::bpchar\n i2_itemd | character(9) | not null default ' '::bpchar\n i2_itemc | character(9) | not null default ' '::bpchar\n i2_prelan | character(1) | not null default ' '::bpchar\n i2_tipo | character(2) | not null default ' '::bpchar\n d_e_l_e_t_ | character(1) | not null default ' '::bpchar\n r_e_c_n_o_ | double precision | not null default 0.0\nIndexes:\n \"si2010_pkey\" primary key, btree (r_e_c_n_o_)\n \"si20101\" btree (i2_filial, i2_num, i2_linha, i2_periodo, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20102\" btree (i2_filial, i2_periodo, i2_num, i2_linha, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20103\" btree (i2_filial, i2_data, i2_num, i2_linha, r_e_c_n_o_, \nd_e_l_e_t_)\n \"si20104\" btree (i2_filial, i2_debito, i2_data, i2_num, i2_linha, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20105\" btree (i2_filial, i2_credito, i2_data, i2_num, i2_linha, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20106\" btree (i2_filial, i2_doc, i2_periodo, r_e_c_n_o_, d_e_l_e_t_)\n \"si20107\" btree (i2_filial, i2_origem, r_e_c_n_o_, d_e_l_e_t_)\n\n\nChristopher Browne wrote:\n\n>In the last exciting episode, [email protected] (Grupos) wrote:\n> \n>\n>>Hi !\n>>\n>>I need to insert 500.000 records on a table frequently. It�s a bulk\n>>insertion from my applicatoin.\n>>I am with a very poor performance. PostgreSQL insert very fast until\n>>the tuple 200.000 and after it the insertion starts to be really slow.\n>>I am seeing on the log and there is a lot of transaction logs,\n>>something like :\n>>\n>>2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000012\"\n>>2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000013\"\n>>2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000011\"\n>>2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000015\"\n>>2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000014\"\n>>2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000016\"\n>>2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000017\"\n>>2004-12-04 11:24:10 LOG: recycled transaction log file \"0000000600000018\"\n>> \n>>\n>\n>It is entirely normal for there to be a lot of transaction log file\n>recycling when bulk inserts are taking place; that goes through a lot\n>of transaction logs.\n>\n> \n>\n>>How can I configure PostgreSQL to have a better performance on this\n>>bulk insertions ? I already increased the memory values.\n>> \n>>\n>\n>Memory is, as likely as not, NOT the issue.\n>\n>Two questions:\n>\n> 1. How are you doing the inserts? Via INSERT statements? Or\n> via COPY statements? 
What sort of transaction grouping\n> is involved?\n>\n> COPY is way faster than INSERT, and grouping plenty of updates\n> into a single transaction is generally a \"win.\"\n>\n> 2. What is the schema like? Does the table have a foreign key\n> constraint? Does it have a bunch of indices?\n>\n> If there should eventually be lots of indices, it tends to be\n> faster to create the table with none/minimal indices, and add\n> indexes afterwards, as long as your \"load\" process can be trusted\n> to not break \"unique\" constraints...\n>\n> If there is some secondary table with a foreign key constraint,\n> and _that_ table is growing, it is possible that a sequential\n> scan is being used to search the secondary table where, if you\n> did an ANALYZE on that table, an index scan would be preferred\n> once it grew to larger size...\n>\n>There isn't a particular reason for PostgreSQL to \"hit a wall\" upon\n>seeing 200K records; I and coworkers routinely load database dumps\n>that have millions of (sometimes pretty fat) records, and they don't\n>\"choke.\" That's true whether talking about loading things onto my\n>(somewhat wimpy) desktop PC, or a SMP Xeon system with a small RAID\n>array, or higher end stuff involving high end SMP and EMC disk arrays.\n>The latter obviously being orders of magnitude faster than desktop\n>equipment :-).\n> \n>\n", "msg_date": "Sun, 05 Dec 2004 17:52:03 -0200", "msg_from": "Rodrigo Carvalhaes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve BULK insertion" }, { "msg_contents": "Rodrigo,\n\n> 3. My transaction log configuration are : checkpoint_segments = 3  and\n> checkpoint_timeout = 300 and my transaction logs are on the same disk .\n\nWell, you need to move your transaction logs to another disk, and increase \nthem to a large number ... like 128, which is about 1GB (you'll need that \nmuch disk space). Also, increase the checkpoint_timeout to minimize \ncheckpointing during the load; like, 1500.\n\n> I know that I can increase the performance separating the transaction\n> logs and making a RAID 5 array \n\nActually, RAID5, unless you're using > 5 disks, would make things slower. \nSpeeding writes up through RAID would require at least 6 drives, and probably \nRAID 1+0.\n\n> BUT I am really curious about WHY this \n> performance is so poor and HOW can I try to improve on this actual\n> machine because actualy this inserts are taking around 90 minutes!!!\n\nAre you doing INSERTS and not COPY? If so, are you batching them in \ntransactions?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 5 Dec 2004 15:19:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve BULK insertion" } ]
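To make the advice in this thread concrete, here is a rough sketch of the load-window setup being discussed, using the checkpoint values Josh suggests and the index definitions shown above; treat it as a starting point to test rather than a recipe (checkpoint settings take effect after a reload or restart):

    # postgresql.conf, for the duration of the load
    checkpoint_segments = 128
    checkpoint_timeout  = 1500

    -- before the restore: keep the primary key, drop the secondary indexes
    DROP INDEX si20101, si20102, si20103, si20104, si20105, si20106, si20107;

    -- ... run pg_restore / COPY ...

    -- afterwards: rebuild the indexes, e.g.
    CREATE INDEX si20101 ON si2010
        (i2_filial, i2_num, i2_linha, i2_periodo, r_e_c_n_o_, d_e_l_e_t_);
    -- (repeat for si20102 .. si20107), then refresh the statistics
    ANALYZE si2010;

Moving pg_xlog to a separate disk, as suggested above, helps independently of these settings because the WAL writes stop competing with the table and index writes for the same spindle.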
[ { "msg_contents": "I do mass inserts daily into PG. I drop the all indexes except my primary key and then use the COPY FROM command. This usually takes less than 30 seconds. I spend more time waiting for indexes to recreate.\n\nPatrick Hatcher\nMacys.Com\n\[email protected] wrote: -----\n\nTo: [email protected]\nFrom: Christopher Browne <[email protected]>\nSent by: [email protected]\nDate: 2004-12-04 06:48AM\nSubject: Re: [PERFORM] Improve BULK insertion\n\nIn the last exciting episode, [email protected] (Grupos) wrote:\n> Hi !\n>\n> I need to insert 500.000 records on a table frequently. It´s a bulk\n> insertion from my applicatoin.\n> I am with a very poor performance. PostgreSQL insert very fast until\n> the tuple 200.000 and after it the insertion starts to be really slow.\n> I am seeing on the log and there is a lot of transaction logs,\n> something like :\n>\n> 2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000012\"\n> 2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000013\"\n> 2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000011\"\n> 2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000015\"\n> 2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000014\"\n> 2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000016\"\n> 2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000017\"\n> 2004-12-04 11:24:10 LOG: recycled transaction log file \"0000000600000018\"\n\nIt is entirely normal for there to be a lot of transaction log file\nrecycling when bulk inserts are taking place; that goes through a lot\nof transaction logs.\n\n> How can I configure PostgreSQL to have a better performance on this\n> bulk insertions ? I already increased the memory values.\n\nMemory is, as likely as not, NOT the issue.\n\nTwo questions:\n\n 1. How are you doing the inserts? Via INSERT statements? Or\n via COPY statements? What sort of transaction grouping\n is involved?\n\n COPY is way faster than INSERT, and grouping plenty of updates\n into a single transaction is generally a \"win.\"\n\n 2. What is the schema like? Does the table have a foreign key\n constraint? Does it have a bunch of indices?\n\n If there should eventually be lots of indices, it tends to be\n faster to create the table with none/minimal indices, and add\n indexes afterwards, as long as your \"load\" process can be trusted\n to not break \"unique\" constraints...\n\n If there is some secondary table with a foreign key constraint,\n and _that_ table is growing, it is possible that a sequential\n scan is being used to search the secondary table where, if you\n did an ANALYZE on that table, an index scan would be preferred\n once it grew to larger size...\n\nThere isn't a particular reason for PostgreSQL to \"hit a wall\" upon\nseeing 200K records; I and coworkers routinely load database dumps\nthat have millions of (sometimes pretty fat) records, and they don't\n\"choke.\" That's true whether talking about loading things onto my\n(somewhat wimpy) desktop PC, or a SMP Xeon system with a small RAID\narray, or higher end stuff involving high end SMP and EMC disk arrays.\nThe latter obviously being orders of magnitude faster than desktop\nequipment :-).\n--\n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/unix.html\nRules of the Evil Overlord #207. \"Employees will have conjugal visit\ntrailers which they may use provided they call in a replacement and\nsign out on the timesheet. Given this, anyone caught making out in a\ncloset while leaving their station unmonitored will be shot.\"\n<http://www.eviloverlord.com/>\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings", "msg_date": "Sat, 4 Dec 2004 06:47:17 -0800", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve BULK insertion" } ]
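For the COPY-versus-INSERT point quoted above, the difference in shape is roughly the following; the column values, r_e_c_n_o_ keys and file path are placeholders chosen only to satisfy the si2010 definition shown earlier, and a COPY against a server-side file is what pg_dump's default (COPY-based) output boils down to:

    -- slow: one transaction per row, so every INSERT pays its own commit
    INSERT INTO si2010 (i2_filial, i2_num, r_e_c_n_o_) VALUES ('01', '0000000001', 1);
    INSERT INTO si2010 (i2_filial, i2_num, r_e_c_n_o_) VALUES ('01', '0000000002', 2);

    -- better: batch many rows inside one transaction
    BEGIN;
    INSERT INTO si2010 (i2_filial, i2_num, r_e_c_n_o_) VALUES ('01', '0000000003', 3);
    -- ... thousands more rows ...
    COMMIT;

    -- best: a single COPY of the whole data set
    COPY si2010 FROM '/tmp/si2010.dat';   -- server-side file; the path is assumed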
[ { "msg_contents": "Hi All,\n\nThanks for the information on replication tools!!\n \nNow, I have a question regarding locking tables and updating tables that have a relationship to the locked table.\n\nI opened up two pgsql windows logged in using same userid.\nLet's say I lock a table \"customerdata\" on one window.\nbegin;\nlock table customerdata;\n\nThen in the other window,I want to make an update to table \"customer\".\nbegin;\nupdate customer set status=0 where id=111;\n\nThe relation ship between the two tables is as follows\ncustomerdata.uid is FK on customer.id. There are no triggers that will try to update customerdata table when the above update statement is issued.\n\nMy problem is the update does not continue unless the lock on customerdata is released. Is it because the lock statement does a lock on all related tables? Is it possible to lock only the particular table we want to lock and not the related tables?\n\nAny help would be appreciated. Thanks in advance.\n\nThanks,\nSaranya\n\n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Yahoo! Mail - Easier than ever with enhanced search. Learn more.\nHi All,Thanks for the information on replication tools!!\n \nNow, I have a question regarding locking tables and updating tables that have a relationship to the locked table.I opened up two pgsql windows logged in using same userid.Let's say I lock a table \"customerdata\" on one window.begin;lock table customerdata;Then in the other window,I want to make an update to table \"customer\".begin;update customer set status=0 where id=111;The relation ship between the two tables is as followscustomerdata.uid is FK on customer.id. There are no triggers that will try to update customerdata table when the above update statement is issued.My problem is the update does not continue unless the lock on customerdata is released. Is it because the lock statement does a lock on all related tables? Is it possible to lock only the particular table we want to lock and not the related tables?Any help would be appreciated. Thanks in advance.Thanks,Saranya\nDo you Yahoo!? \nYahoo! Mail - Easier than ever with enhanced search. Learn more.", "msg_date": "Sat, 4 Dec 2004 08:56:41 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "lock problem" }, { "msg_contents": "On Sat, 4 Dec 2004, sarlav kumar wrote:\n\n> Thanks for the information on replication tools!!\n> Now, I have a question regarding locking tables and updating tables\n> that have a relationship to the locked table.\n>\n> I opened up two pgsql windows logged in using same userid.\n> Let's say I lock a table \"customerdata\" on one window.\n> begin;\n> lock table customerdata;\n>\n> Then in the other window,I want to make an update to table \"customer\".\n> begin;\n> update customer set status=0 where id=111;\n>\n> The relation ship between the two tables is as follows\n> customerdata.uid is FK on customer.id. There are no triggers that will\n> try to update customerdata table when the above update statement is\n> issued.\n>\n> My problem is the update does not continue unless the lock on\n> customerdata is released. Is it because the lock statement does a lock\n> on all related tables? Is it possible to lock only the particular table\n> we want to lock and not the related tables?\n\nThe no action foreign key triggers grab a Row Share on the referencing\ntable which conflicts with the Exclusive lock that LOCK TABLE takes by\ndefault. 
Depending on what you're trying to prevent, you may be able to\nask lock table for a lesser lock (see the list and descriptions here:\nhttp://www.postgresql.org/docs/7.4/static/explicit-locking.html#LOCKING-TABLES\n).\n", "msg_date": "Sat, 4 Dec 2004 09:34:31 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lock problem" } ]
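To make Stephan's suggestion concrete: the foreign key check fired by the UPDATE on customer only needs a ROW SHARE lock on customerdata, and ROW SHARE conflicts with EXCLUSIVE and ACCESS EXCLUSIVE (the LOCK TABLE default) but not with the weaker table locks. So if the goal is only to keep other sessions from writing to customerdata, something like the sketch below lets the customer update proceed; whether SHARE ROW EXCLUSIVE is the right mode depends on what the original lock was meant to protect against:

    BEGIN;
    -- blocks other writers to customerdata, but not the ROW SHARE lock
    -- taken by the FK check on the customer update
    LOCK TABLE customerdata IN SHARE ROW EXCLUSIVE MODE;
    -- ... work on customerdata ...
    COMMIT;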
[ { "msg_contents": "We're working with a Postgres database that includes a fairly large table\n(100M rows, increasing at around 2M per day).\n\nIn some cases we've seen some increased performance in tests by splitting\nthe table into several smaller tables. Both 'UNION ALL' views, and the\nsuperclass/subclass scheme work well at pruning down the set of rows a query\nuses, but they seem to introduce a large performance hit to the time to\nprocess each row (~50% for superclass/subclass, and ~150% for union views).\n\nIs this to be expected? Or is this a problem with our test setup?\n\nI've listed details on our tests at the end of this message. The results\nare similar with our larger tables; the overhead appears to be per record\nreturned from the subquery/subclass; it's not a constant overhead per query.\nOur production instance is running 7.4.2, but the results are the same on\n8.0.\n\nFor reference, I tested with this setup (for the superclass/subclass\npartitioning scheme):\n\n CREATE TABLE super_foo ( partition NUMERIC, bar NUMERIC );\n ANALYZE super_foo ;\n\n CREATE TABLE sub_foo1 () INHERITS ( super_foo );\n INSERT INTO sub_foo1 VALUES ( 1, 1 );\n -- repeat insert until sub_foo1 has 1,000,000 rows\n CREATE INDEX idx_subfoo1_partition ON sub_foo1 ( partition );\n ANALYZE sub_foo1 ;\n\n CREATE TABLE sub_foo2 () INHERITS ( super_foo );\n INSERT INTO sub_foo2 VALUES ( 2, 1 );\n -- repeat insert until sub_foo2 has 1,000,000 rows\n CREATE INDEX idx_subfoo2_partition ON sub_foo2 ( partition );\n ANALYZE sub_foo2 ;\n\nand this setup for the union all scheme:\n\n CREATE TABLE union_foo1 ( bar NUMERIC );\n INSERT INTO union_foo1 VALUES ( 1 ) ;\n -- repeat insert until union_foo1 has 1,000,000 rows\n ANALYZE union_foo1 ;\n\n CREATE TABLE union_foo2 ( bar NUMERIC );\n INSERT INTO union_foo2 VALUES ( 1 ) ;\n -- repeat insert until union_foo2 has 1,000,000 rows\n ANALYZE union_foo2 ;\n\n CREATE VIEW union_foo AS\n SELECT 1 AS partition, * FROM union_foo1\n UNION ALL\n SELECT 2 AS partition, * FROM union_foo2 ;\n\nThe partition pruning works marvelously:\n\n EXPLAIN SELECT SUM(bar) FROM super_foo WHERE partition = 2 ;\n QUERY PLAN\n ---------------------------------------------------------------------------\n----------------------------------\n Aggregate (cost=21899.02..21899.02 rows=1 width=32)\n -> Append (cost=0.00..19399.01 rows=1000002 width=32)\n -> Seq Scan on super_foo (cost=0.00..0.00 rows=1 width=32)\n Filter: (partition = 2::numeric)\n -> Index Scan using idx_subfoo1_partition on sub_foo1 super_foo\n(cost=0.00..2.01 rows=1 width=10)\n Index Cond: (partition = 2::numeric)\n -> Seq Scan on sub_foo2 super_foo (cost=0.00..19397.00\nrows=1000000 width=10)\n Filter: (partition = 2::numeric)\n\nand\n\n EXPLAIN SELECT SUM(bar) FROM union_foo WHERE partition = 2 ;\n QUERY PLAN\n ---------------------------------------------------------------------------\n----------------------\n Aggregate (cost=75819.15..75819.15 rows=1 width=32)\n -> Subquery Scan union_foo (cost=0.00..70818.60 rows=2000220 width=32)\n -> Append (cost=0.00..50816.40 rows=2000220 width=10)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..25408.20\nrows=1000110 width=10)\n -> Result (cost=0.00..15407.10 rows=1000110 width=10)\n One-Time Filter: false\n -> Seq Scan on union_foo1 (cost=0.00..15407.10\nrows=1000110 width=10)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..25408.20\nrows=1000110 width=10)\n -> Seq Scan on union_foo2 (cost=0.00..15407.10\nrows=1000110 width=10)\n\n\nBut you can see a fair amount of overhead, espcially in 
the case of the\nunion view:\n\n SELECT SUM(bar) FROM sub_foo1 UNION ALL SELECT SUM(bar) FROM sub_foo2 ;\n Time: 2291.637 ms\n\n SELECT SUM(bar) FROM union_foo1 UNION ALL SELECT SUM(bar) FROM union_foo2\n;\n Time: 2248.225 ms\n\n SELECT SUM(bar) FROM super_foo ;\n Time: 3329.953 ms\n\n SELECT SUM(bar) FROM union_foo ;\n Time: 5267.742 ms\n\n\n SELECT SUM(bar) FROM sub_foo2 ;\n Time: 1124.496 ms\n\n SELECT SUM(bar) FROM union_foo2 ;\n Time: 1090.616 ms\n\n SELECT SUM(bar) FROM super_foo WHERE partition = 2 ;\n Time: 2137.767 ms\n\n SELECT SUM(bar) FROM union_foo WHERE partition = 2 ;\n Time: 2774.887 ms\n\n", "msg_date": "Sat, 4 Dec 2004 18:45:44 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioned table performance" }, { "msg_contents": "Stacy,\n\nThanks for the stats!\n\n> In some cases we've seen some increased performance in tests by splitting\n> the table into several smaller tables.  Both 'UNION ALL' views, and the\n> superclass/subclass scheme work well at pruning down the set of rows a\n> query uses, but they seem to introduce a large performance hit to the time\n> to process each row (~50% for superclass/subclass, and ~150% for union\n> views).\n\nThis seems reasonable, actually, given your test. Really, what you should be \ncomparing it against is not against selecting from an individual partition, \nbut selecting from the whole business as one large table. \n\nI also suspect that wider rows results in less overhead proportionally; note \nthat your test contains *only* the indexed rows. I should soon have a test \nto prove this, hopefully.\n\nHowever, I would be interested in seeing EXPLAIN ANALYZE from your tests \nrather than just EXPLAIN.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 5 Dec 2004 15:06:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "Thanks for the quick reply, Josh. Here are some more, with wider tables and\n'EXPLAIN ANALYZE' output. These tests use the same basic structure as\nbefore, but with 20 data columns rather than just the one:\n\n CREATE TABLE one_big_foo (\n partition INTEGER,\n bar1 INTEGER,\n ...\n bar20 INTEGER\n )\n\nEach set of test tables holds 1,000,000 tuples with a partition value of\n'1', and 1,000,000 with a partition value of '2'. The bar* columns are all\nset to non-null values. The 'one_big_foo' table stores all 2M rows in one\ntable. 
'super_foo' and 'union_foo' split the data into two tables, and use\ninheritance and union views (respectively) to tie them together, as\ndescribed in my previous message.\n\nQuery timings and 'EXPLAIN ANALYZE' results for full table scans and for\npartition scans follow:\n\n\nvod=# SELECT COUNT(*), MAX(bar1) FROM one_big_foo ;\nTime: 3695.274 ms\n\nvod=# SELECT COUNT(*), MAX(bar1) FROM super_foo ;\nTime: 4641.992 ms\n\nvod=# SELECT COUNT(*), MAX(bar1) FROM union_foo ;\nTime: 16035.025 ms\n\n\nvod=# SELECT COUNT(*), MAX(bar1) FROM one_big_foo WHERE partition = 1 ;\nTime: 4395.274 ms\n\nvod=# SELECT COUNT(*), MAX(bar1) FROM super_foo WHERE partition = 1 ;\nTime: 3050.920 ms\n\nvod=# SELECT COUNT(*), MAX(bar1) FROM union_foo WHERE partition = 1 ;\nTime: 7468.664 ms\n\n\n\n\nvod=# EXPLAIN ANALYZE SELECT COUNT(*), MAX(bar1) FROM one_big_foo ;\n QUERY PLAN\n----------------------------------------------------------------------------\n---------------------------------------------------\n Aggregate (cost=61747.92..61747.92 rows=1 width=4) (actual\ntime=18412.471..18412.474 rows=1 loops=1)\n -> Seq Scan on one_big_foo (cost=0.00..51747.61 rows=2000061\nwidth=4) (actual time=0.097..10079.192 rows=2000000 loops=1)\n Total runtime: 18412.597 ms\n(3 rows)\n\nTime: 18413.919 ms\n\n\nvod=# EXPLAIN ANALYZE SELECT COUNT(*), MAX(bar1) FROM super_foo ;\n QUERY\nPLAN\n----------------------------------------------------------------------------\n---------------------------------------------------------------\n Aggregate (cost=61749.87..61749.87 rows=1 width=4) (actual\ntime=30267.913..30267.916 rows=1 loops=1)\n -> Append (cost=0.00..51749.24 rows=2000125 width=4) (actual\ntime=0.127..22830.610 rows=2000000 loops=1)\n -> Seq Scan on super_foo (cost=0.00..0.00 rows=1 width=4)\n(actual time=0.005..0.005 rows=0 loops=1)\n -> Seq Scan on sub_foo1 super_foo (cost=0.00..25874.62\nrows=1000062 width=4) (actual time=0.113..5808.899 rows=1000000 loops=1)\n -> Seq Scan on sub_foo2 super_foo (cost=0.00..25874.62\nrows=1000062 width=4) (actual time=0.075..5829.095 rows=1000000 loops=1)\n Total runtime: 30268.061 ms\n(6 rows)\n\nTime: 30303.271 ms\n\n\nvod=# EXPLAIN ANALYZE SELECT COUNT(*), MAX(bar1) FROM union_foo ;\n QUERY\nPLAN\n----------------------------------------------------------------------------\n--------------------------------------------------------------------\n Aggregate (cost=98573.40..98573.40 rows=1 width=4) (actual\ntime=62542.849..62542.852 rows=1 loops=1)\n -> Subquery Scan union_foo (cost=0.00..88573.20 rows=2000040\nwidth=4) (actual time=0.130..55536.040 rows=2000000 loops=1)\n -> Append (cost=0.00..68572.80 rows=2000040 width=80) (actual\ntime=0.122..43210.763 rows=2000000 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..34286.40\nrows=1000020 width=80) (actual time=0.118..16312.708 rows=1000000\nloops=1) -> Seq Scan on union_foo1\n(cost=0.00..24286.20 rows=1000020 width=80) (actual time=0.107..7763.460\nrows=1000000 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..34286.40\nrows=1000020 width=80) (actual time=0.116..16610.387 rows=1000000\nloops=1) -> Seq Scan on union_foo2\n(cost=0.00..24286.20 rows=1000020 width=80) (actual time=0.095..7549.522\nrows=1000000 loops=1)\n Total runtime: 62543.098 ms\n\n\n\nvod=# EXPLAIN ANALYZE SELECT COUNT(*), MAX(bar1) FROM one_big_foo WHERE\npartition = 1 ;\nQUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------------\n Aggregate (cost=61711.25..61711.25 rows=1 width=4) 
(actual\ntime=11592.135..11592.139 rows=1 loops=1)\n -> Seq Scan on one_big_foo (cost=0.00..56747.76 rows=992697\nwidth=4) (actual time=0.106..7627.170 rows=1000000 loops=1)\n Filter: (partition = 1::numeric)\n Total runtime: 11592.264 ms\n(4 rows)\n\nTime: 11593.749 ms\n\nvod=# EXPLAIN ANALYZE SELECT COUNT(*), MAX(bar1) FROM super_foo WHERE\npartition = 1 ;\n\nQUERY PLAN\n----------------------------------------------------------------------------\n---------------------------------------------------------------------------\n Aggregate (cost=33377.11..33377.11 rows=1 width=4) (actual\ntime=15670.309..15670.312 rows=1 loops=1)\n -> Append (cost=0.00..28376.79 rows=1000064 width=4) (actual\ntime=6.699..12072.483 rows=1000000 loops=1)\n -> Seq Scan on super_foo (cost=0.00..0.00 rows=1 width=4)\n(actual time=0.005..0.005 rows=0 loops=1)\n Filter: (partition = 1::numeric)\n -> Seq Scan on sub_foo1 super_foo (cost=0.00..28374.78\nrows=1000062 width=4) (actual time=0.106..6688.812 rows=1000000 loops=1)\n Filter: (partition = 1::numeric)\n -> Index Scan using idx_sub_foo2_partition on sub_foo2\nsuper_foo (cost=0.00..2.01 rows=1 width=4) (actual time=0.221..0.221\nrows=0 loops=1)\n Index Cond: (partition = 1::numeric)\n Total runtime: 15670.463 ms\n(9 rows)\n\nTime: 15672.235 ms\n\nvod=# EXPLAIN ANALYZE SELECT COUNT(*), MAX(bar1) FROM union_foo WHERE\npartition = 1 ;\n QUERY\nPLAN\n----------------------------------------------------------------------------\n--------------------------------------------------------------------\n Aggregate (cost=98573.40..98573.40 rows=1 width=4) (actual\ntime=31897.629..31897.632 rows=1 loops=1)\n -> Subquery Scan union_foo (cost=0.00..88573.20 rows=2000040\nwidth=4) (actual time=0.134..28323.692 rows=1000000 loops=1)\n -> Append (cost=0.00..68572.80 rows=2000040 width=80) (actual\ntime=0.125..21969.522 rows=1000000 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..34286.40\nrows=1000020 width=80) (actual time=0.120..16867.005 rows=1000000\nloops=1)\n -> Seq Scan on union_foo1 (cost=0.00..24286.20\nrows=1000020 width=80) (actual time=0.108..8017.931 rows=1000000\nloops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..34286.40\nrows=1000020 width=80) (actual time=0.011..0.011 rows=0 loops=1)\n -> Result (cost=0.00..24286.20 rows=1000020\nwidth=80) (actual time=0.004..0.004 rows=0 loops=1)\n One-Time Filter: false\n -> Seq Scan on union_foo2\n(cost=0.00..24286.20 rows=1000020 width=80) (never executed)\n Total runtime: 31897.897 ms\n(10 rows)\n\nTime: 31900.204 ms\n\n\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: <[email protected]>\nCc: \"Stacy White\" <[email protected]>\nSent: Sunday, December 05, 2004 3:06 PM\nSubject: Re: [PERFORM] Partitioned table performance\n\n\nStacy,\n\nThanks for the stats!\n\n> In some cases we've seen some increased performance in tests by splitting\n> the table into several smaller tables. Both 'UNION ALL' views, and the\n> superclass/subclass scheme work well at pruning down the set of rows a\n> query uses, but they seem to introduce a large performance hit to the time\n> to process each row (~50% for superclass/subclass, and ~150% for union\n> views).\n\nThis seems reasonable, actually, given your test. Really, what you should\nbe\ncomparing it against is not against selecting from an individual partition,\nbut selecting from the whole business as one large table.\n\nI also suspect that wider rows results in less overhead proportionally; note\nthat your test contains *only* the indexed rows. 
I should soon have a test\nto prove this, hopefully.\n\nHowever, I would be interested in seeing EXPLAIN ANALYZE from your tests\nrather than just EXPLAIN.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Tue, 7 Dec 2004 20:32:49 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "Stacy,\n\n> Each set of test tables holds 1,000,000 tuples with a partition value of\n> '1', and 1,000,000 with a partition value of '2'.  The bar* columns are all\n> set to non-null values.  The 'one_big_foo' table stores all 2M rows in one\n> table.  'super_foo' and 'union_foo' split the data into two tables, and use\n> inheritance and union views (respectively) to tie them together, as\n> described in my previous message.\n>\n> Query timings and 'EXPLAIN ANALYZE' results for full table scans and for\n> partition scans follow:\n\nHmmm .... interesting. I think you've demonstrated that pseudo-partitioning \ndoesn't pay for having only 2 partitions. Examine this:\n\n         ->  Index Scan using idx_sub_foo2_partition on sub_foo2\nsuper_foo  (cost=0.00..2.01 rows=1 width=4) (actual time=0.221..0.221\nrows=0 loops=1)\n               Index Cond: (partition = 1::numeric)\n Total runtime: 15670.463 ms\n\nAs you see, even though the aggregate operation requires a seq scan, the \nplanner is still able to scan, and discard, sub_foo2, using its index in 0.2 \nseconds. Unfortunately, super_foo still needs to contend with:\n\n   ->  Append  (cost=0.00..28376.79 rows=1000064 width=4) (actual\ntime=6.699..12072.483 rows=1000000 loops=1)\n\nRight there, in the Append, you lose 6 seconds. This means that \npseudo-partitioning via inheritance will become a speed gain once you can \n\"make up\" that 6 seconds by being able to discard more partitions. If you \nwant, do a test with 6 partitions instead of 2 and let us know how it comes \nout.\n\nAlso, keep in mind that there are other reasons to do pseudo-partitioning than \nyour example. Data write performance, expiring partitions, and vacuum are \nbig reasons that can motivate partitioning even in cases when selects are \nslower.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Dec 2004 21:52:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "Josh,\n\nYou're absolutely correct that the overhead becomes less significant as the\npartitioning prunes more rows. I can even see a two-partition table being\nuseful in some situations (e.g., a table divided into a relatively small\n\"recent data\" partition and a much larger \"historical data\" partition). The\nbreak-even point is when your partitioning scheme prunes 20% of the rows\n(assuming you're using the inheritance based scheme).\n\nThanks again for the reply. So it sounds like the answer to my original\nquestion is that it's expected that the pseudo-partitioning would introduce\na fairly significant amount of overhead. Correct?\n\n\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: <[email protected]>\nCc: \"Stacy White\" <[email protected]>\nSent: Friday, December 10, 2004 9:52 PM\nSubject: Re: [PERFORM] Partitioned table performance\n\n\nStacy,\n\n> Each set of test tables holds 1,000,000 tuples with a partition value of\n> '1', and 1,000,000 with a partition value of '2'. The bar* columns are all\n> set to non-null values. 
The 'one_big_foo' table stores all 2M rows in one\n> table. 'super_foo' and 'union_foo' split the data into two tables, and use\n> inheritance and union views (respectively) to tie them together, as\n> described in my previous message.\n>\n> Query timings and 'EXPLAIN ANALYZE' results for full table scans and for\n> partition scans follow:\n\nHmmm .... interesting. I think you've demonstrated that\npseudo-partitioning\ndoesn't pay for having only 2 partitions. Examine this:\n\n-> Index Scan using idx_sub_foo2_partition on sub_foo2\nsuper_foo (cost=0.00..2.01 rows=1 width=4) (actual time=0.221..0.221\nrows=0 loops=1)\nIndex Cond: (partition = 1::numeric)\nTotal runtime: 15670.463 ms\n\nAs you see, even though the aggregate operation requires a seq scan, the\nplanner is still able to scan, and discard, sub_foo2, using its index in 0.2\nseconds. Unfortunately, super_foo still needs to contend with:\n\n-> Append (cost=0.00..28376.79 rows=1000064 width=4) (actual\ntime=6.699..12072.483 rows=1000000 loops=1)\n\nRight there, in the Append, you lose 6 seconds. This means that\npseudo-partitioning via inheritance will become a speed gain once you can\n\"make up\" that 6 seconds by being able to discard more partitions. If you\nwant, do a test with 6 partitions instead of 2 and let us know how it comes\nout.\n\nAlso, keep in mind that there are other reasons to do pseudo-partitioning\nthan\nyour example. Data write performance, expiring partitions, and vacuum are\nbig reasons that can motivate partitioning even in cases when selects are\nslower.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Tue, 14 Dec 2004 22:09:12 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "Stacy,\n\n> Thanks again for the reply.  So it sounds like the answer to my original\n> question is that it's expected that the pseudo-partitioning would introduce\n> a fairly significant amount of overhead.  Correct?\n\nCorrect. For that matter, Oracle table partitioning introduces significant \noverhead, from what I've seen. I don't think there's a way not to.\n\nGenerally, I counsel people that they only want to consider \npseudo-partitioning if they have one axis on the table which is used in 90% \nor more of the queries against that table.\n\nWhat would improve the situation significantly, and the utility of \npseudo-partitioning, is the ability to have a single index span multiple \npartitions. This would allow you to have a segmented index for the \npartitioned axis, yet still use an unsegmented index for the other columns. \nHowever, there's a *lot* of work to do to make that happen.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 15 Dec 2004 10:25:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n\n> Stacy,\n> \n> > Thanks again for the reply. �So it sounds like the answer to my original\n> > question is that it's expected that the pseudo-partitioning would introduce\n> > a fairly significant amount of overhead. �Correct?\n> \n> Correct. 
For that matter, Oracle table partitioning introduces significant \n> overhead, from what I've seen. I don't think there's a way not to.\n\nWell Oracle has lots of partitioning intelligence pushed up to the planner to\navoid overhead.\n\nIf you have a query with something like \"WHERE date = '2004-01-01'\" and date\nis your partition key (even if it's a range) then Oracle will figure out which\npartition it will need at planning time.\n\nEven if your query is something like \"WHERE date = ?\" then Oracle will still\nrecognize that it will only need a single partition at planning time, though\nit has to decide which partition at execution time.\n\nWe didn't notice any run-time performance degradation when we went to\npartitioned tables. Maybe we were so blinded by the joy they brought us on the\nmaintenance side though. I don't think we specifically checked for run-time\nconsequences.\n\nBut I'm a bit puzzled. Why would Append have any significant cost? It's just\ntaking the tuples from one plan node and returning them until they run out,\nthen taking the tuples from another plan node. It should have no i/o cost and\nhardly any cpu cost. Where is the time going?\n\n> What would improve the situation significantly, and the utility of \n> pseudo-partitioning, is the ability to have a single index span multiple \n> partitions. This would allow you to have a segmented index for the \n> partitioned axis, yet still use an unsegmented index for the other columns. \n> However, there's a *lot* of work to do to make that happen.\n\nIn my experience \"global indexes\" defeat the whole purpose of having the\npartitions. They make dropping and adding partitions expensive which was\nalways the reason we wanted to partition something anyways.\n\nIt is handy having a higher level interface to deal with partitioned tables.\nYou can create a single \"local\" or \"segmented\" index and not have to manually\ndeal with all the partitions as separate tables. But that's just syntactic\nsugar.\n\n-- \ngreg\n\n", "msg_date": "15 Dec 2004 14:34:26 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "Greg,\n\n> Well Oracle has lots of partitioning intelligence pushed up to the planner\n> to avoid overhead.\n>\n> If you have a query with something like \"WHERE date = '2004-01-01'\" and\n> date is your partition key (even if it's a range) then Oracle will figure\n> out which partition it will need at planning time.\n\nHmmm ... well, we're looking at making a spec for Postgres Table Partitioning. \nMaybe you could help?\n\n> But I'm a bit puzzled. Why would Append have any significant cost? It's\n> just taking the tuples from one plan node and returning them until they run\n> out, then taking the tuples from another plan node. It should have no i/o\n> cost and hardly any cpu cost. Where is the time going?\n\nBeats me. Tom?\n\n> In my experience \"global indexes\" defeat the whole purpose of having the\n> partitions. They make dropping and adding partitions expensive which was\n> always the reason we wanted to partition something anyways.\n\nHmmm. Possibly, I was just thinking about the cost to partitioned tables \nwhen you do a selection *not* on the partitioned axis. Also that currently \nwe can't enforce UNIQUE constraints across partitions.\n\nBut maybe reducing the cost of Append is the answer to this.\n\n> It is handy having a higher level interface to deal with partitioned\n> tables. 
You can create a single \"local\" or \"segmented\" index and not have\n> to manually deal with all the partitions as separate tables. But that's\n> just syntactic sugar.\n\nRight, and the easy part.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 15 Dec 2004 11:56:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "\nJosh Berkus <[email protected]> writes:\n\n> > But I'm a bit puzzled. Why would Append have any significant cost? It's\n> > just taking the tuples from one plan node and returning them until they run\n> > out, then taking the tuples from another plan node. It should have no i/o\n> > cost and hardly any cpu cost. Where is the time going?\n> \n> Beats me. Tom?\n> \n> > In my experience \"global indexes\" defeat the whole purpose of having the\n> > partitions. They make dropping and adding partitions expensive which was\n> > always the reason we wanted to partition something anyways.\n> \n> Hmmm. Possibly, I was just thinking about the cost to partitioned tables \n> when you do a selection *not* on the partitioned axis. Also that currently \n> we can't enforce UNIQUE constraints across partitions.\n\nLike I said though, we found \"global indexes\" defeated the whole purpose. That\nmeant no global UNIQUE constraints for us when we went to partitioned tables.\nIt gave the DBAs the willies but it really wasn't a big deal.\n\nYou can still do unique local indexes on a specific partition. So as long as\nyour partition key is in the primary key you can have a trustworthy primary\nkey.\n\nAnd even if not, you usually find you're only loading data into only one\npartition. In most applications it's pretty hard to get a record from two\ndifferent partitions with conflicting IDs and not hard to check for. You could\neasily put a constraint saying that all PO numbers in the new fiscal year have\nto be greater than the last PO number from last year, for example.\n\n> But maybe reducing the cost of Append is the answer to this.\n\nThe problem with global indexes is that adding or removing an entire partition\nbecomes a large job. [Actually with Postgres MVCC I suppose removing might\nnot. But cleaning up would eventually be a large job, and the point remains\nfor adding a partition.]\n\nIdeally adding and removing a partition should be a O(1) operation. No data\nmodification at all, purely catalog changes.\n\n> > It is handy having a higher level interface to deal with partitioned\n> > tables. You can create a single \"local\" or \"segmented\" index and not have\n> > to manually deal with all the partitions as separate tables. But that's\n> > just syntactic sugar.\n> \n> Right, and the easy part.\n\nI think the hard part lies in the optimizer actually. The semantics of the\noperations to manipulate partitions might be tricky to get right but the\ncoding should be straightforward. Having the optimizer be able to recognize\nwhen it can prune partitions will be a lot of work.\n\n-- \ngreg\n\n", "msg_date": "15 Dec 2004 17:53:19 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> But I'm a bit puzzled. Why would Append have any significant cost? It's just\n> taking the tuples from one plan node and returning them until they run out,\n> then taking the tuples from another plan node. 
It should have no i/o cost and\n> hardly any cpu cost. Where is the time going?\n\nAs best I can tell by profiling, the cost of the Append node per se is\nindeed negligible --- no more than a couple percent of the runtime in\nCVS tip for a test case similar to Stacy White's example.\n\nIt looks bad in EXPLAIN ANALYZE, but you have to realize that passing\nthe tuples up through the Append node doubles the instrumentation\noverhead of EXPLAIN ANALYZE, which is pretty sizable already. (If you\nturn on \\timing in psql and try the query itself vs. EXPLAIN ANALYZE,\nthe actual elapsed time is about double, at least for me.)\n\nThe other effect, which I hadn't expected, is that the seqscans\nthemselves actually slow down. I get\n\nregression=# explain analyze SELECT COUNT(*), MAX(bar1) FROM super_foo ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=16414.32..16414.32 rows=1 width=4) (actual time=32313.980..32313.988 rows=1 loops=1)\n -> Append (cost=0.00..13631.54 rows=556555 width=4) (actual time=0.232..21848.401 rows=524289 loops=1)\n -> Seq Scan on super_foo (cost=0.00..0.00 rows=1 width=4) (actual time=0.020..0.020 rows=0 loops=1)\n -> Seq Scan on sub_foo1 super_foo (cost=0.00..6815.77 rows=278277 width=4) (actual time=0.187..6926.395 rows=262144 loops=1)\n -> Seq Scan on sub_foo2 super_foo (cost=0.00..6815.77 rows=278277 width=4) (actual time=0.168..7026.953 rows=262145 loops=1)\n Total runtime: 32314.993 ms\n(6 rows)\n\nregression=# explain analyze SELECT COUNT(*), MAX(bar1) FROM sub_foo1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=8207.16..8207.16 rows=1 width=4) (actual time=9850.420..9850.428 rows=1 loops=1)\n -> Seq Scan on sub_foo1 (cost=0.00..6815.77 rows=278277 width=4) (actual time=0.202..4642.401 rows=262144 loops=1)\n Total runtime: 9851.423 ms\n(3 rows)\n\nNotice the actual times for the sub_foo1 seqscans. That increase (when\ncounted for both input tables) almost exactly accounts for the\ndifference in non-EXPLAIN ANALYZE runtime.\n\nAfter digging around, I find that the reason for the difference is that\nthe optimization to avoid a projection step (ExecProject) isn't applied\nfor scans of inheritance unions:\n\n\t/*\n\t * Can't do it with inheritance cases either (mainly because Append\n\t * doesn't project).\n\t */\n\tif (rel->reloptkind != RELOPT_BASEREL)\n\t\treturn false;\n\nSo if you were to try the example in a pre-7.4 PG, which didn't have\nthat optimization, you'd probably find that the speeds were just about\nthe same. (I'm too lazy to verify this though.)\n\nI looked briefly at what it would take to cover this case, and decided\nthat it's a nontrivial change, so it's too late to do something about it\nfor 8.0. I think it's probably possible to fix it though, at least for\ncases where the child tables have rowtypes identical to the parent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Dec 2004 20:30:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance " }, { "msg_contents": "Sorry for the late reply, so I included the whole thread. Should this be\na TODO?\n\nOn Wed, Dec 15, 2004 at 08:30:08PM -0500, Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > But I'm a bit puzzled. Why would Append have any significant cost? 
It's just\n> > taking the tuples from one plan node and returning them until they run out,\n> > then taking the tuples from another plan node. It should have no i/o cost and\n> > hardly any cpu cost. Where is the time going?\n> \n> As best I can tell by profiling, the cost of the Append node per se is\n> indeed negligible --- no more than a couple percent of the runtime in\n> CVS tip for a test case similar to Stacy White's example.\n> \n> It looks bad in EXPLAIN ANALYZE, but you have to realize that passing\n> the tuples up through the Append node doubles the instrumentation\n> overhead of EXPLAIN ANALYZE, which is pretty sizable already. (If you\n> turn on \\timing in psql and try the query itself vs. EXPLAIN ANALYZE,\n> the actual elapsed time is about double, at least for me.)\n> \n> The other effect, which I hadn't expected, is that the seqscans\n> themselves actually slow down. I get\n> \n> regression=# explain analyze SELECT COUNT(*), MAX(bar1) FROM super_foo ;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=16414.32..16414.32 rows=1 width=4) (actual time=32313.980..32313.988 rows=1 loops=1)\n> -> Append (cost=0.00..13631.54 rows=556555 width=4) (actual time=0.232..21848.401 rows=524289 loops=1)\n> -> Seq Scan on super_foo (cost=0.00..0.00 rows=1 width=4) (actual time=0.020..0.020 rows=0 loops=1)\n> -> Seq Scan on sub_foo1 super_foo (cost=0.00..6815.77 rows=278277 width=4) (actual time=0.187..6926.395 rows=262144 loops=1)\n> -> Seq Scan on sub_foo2 super_foo (cost=0.00..6815.77 rows=278277 width=4) (actual time=0.168..7026.953 rows=262145 loops=1)\n> Total runtime: 32314.993 ms\n> (6 rows)\n> \n> regression=# explain analyze SELECT COUNT(*), MAX(bar1) FROM sub_foo1;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=8207.16..8207.16 rows=1 width=4) (actual time=9850.420..9850.428 rows=1 loops=1)\n> -> Seq Scan on sub_foo1 (cost=0.00..6815.77 rows=278277 width=4) (actual time=0.202..4642.401 rows=262144 loops=1)\n> Total runtime: 9851.423 ms\n> (3 rows)\n> \n> Notice the actual times for the sub_foo1 seqscans. That increase (when\n> counted for both input tables) almost exactly accounts for the\n> difference in non-EXPLAIN ANALYZE runtime.\n> \n> After digging around, I find that the reason for the difference is that\n> the optimization to avoid a projection step (ExecProject) isn't applied\n> for scans of inheritance unions:\n> \n> \t/*\n> \t * Can't do it with inheritance cases either (mainly because Append\n> \t * doesn't project).\n> \t */\n> \tif (rel->reloptkind != RELOPT_BASEREL)\n> \t\treturn false;\n> \n> So if you were to try the example in a pre-7.4 PG, which didn't have\n> that optimization, you'd probably find that the speeds were just about\n> the same. (I'm too lazy to verify this though.)\n> \n> I looked briefly at what it would take to cover this case, and decided\n> that it's a nontrivial change, so it's too late to do something about it\n> for 8.0. 
I think it's probably possible to fix it though, at least for\n> cases where the child tables have rowtypes identical to the parent.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 21 Dec 2004 16:56:43 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "On Wed, Dec 15, 2004 at 11:56:40AM -0800, Josh Berkus wrote:\n> Greg,\n> \n> > Well Oracle has lots of partitioning intelligence pushed up to the planner\n> > to avoid overhead.\n> >\n> > If you have a query with something like \"WHERE date = '2004-01-01'\" and\n> > date is your partition key (even if it's a range) then Oracle will figure\n> > out which partition it will need at planning time.\n> \n> Hmmm ... well, we're looking at making a spec for Postgres Table Partitioning. \n> Maybe you could help?\n\nThis is something I've been thinking about doing for\nhttp://stats.distributed.net; is there a formal project for this\nsomewhere?\n\nOn a different note, has anyone looked at the savings you get by\nommitting the partition field from the child tables? ISTM that the\nsavings would be substantial for narrow tables. Of course that most\nlikely means doing a union view instead of inheritence, but I'm guessing\nhere. The table I'm thinking of partitioning is quite narrow (see\nbelow), so I suspect that dropping project_id out would result in a\nsubstantial savings (there's basically nothing that ever queries across\nthe whole table). 
With the data distribution, I suspect just breaking\nproject ID's 205, 5, and 25 into partitioned tables that didn't contain\nproject_id would save about 450M (4bytes * 95% * 130M).\n\n(the table has ~130M rows)\n\n Table \"public.email_contrib\"\n Column | Type | Modifiers \n------------+---------+-----------\n project_id | integer | not null\n id | integer | not null\n date | date | not null\n team_id | integer | \n work_units | bigint | not null\nIndexes:\n \"email_contrib_pkey\" primary key, btree (project_id, id, date)\n \"email_contrib__pk24\" btree (id, date) WHERE (project_id = 24)\n \"email_contrib__pk25\" btree (id, date) WHERE (project_id = 25)\n \"email_contrib__pk8\" btree (id, date) WHERE (project_id = 8)\n \"email_contrib__project_date\" btree (project_id, date)\nForeign-key constraints:\n \"fk_email_contrib__id\" FOREIGN KEY (id) REFERENCES stats_participant(id) ON UPDATE CASCADE\n \"fk_email_contrib__team_id\" FOREIGN KEY (team_id) REFERENCES stats_team(team) ON UPDATE CASCADE\n\nstats=# select * from pg_stats where tablename='email_contrib' and\nattname='project_id';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation \n ------------+---------------+------------+-----------+-----------+------------+-------------------+---------------------------------------------------------+------------------+-------------\n public | email_contrib | project_id | 0 | 4 | 6 | {205,5,25,8,24,3} | {0.461133,0.4455,0.0444333,0.0418667,0.0049,0.00216667} | | 0.703936\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 21 Dec 2004 17:11:55 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned table performance" }, { "msg_contents": "The discussion seems to have diverged a little, so I don't feel too bad\nabout making some semi-off-topic comments.\n\nFrom: \"Greg Stark\" <[email protected]>\n> Like I said though, we found \"global indexes\" defeated the whole purpose.\n\nFirst semi-off-topic comment: I think this depends on the index, the data,\nand the goal of the partitioning. We use partitioning on one of our Oracle\nprojects for performance rather than managability. In this case, a global\nindex on a non-partitioned field can be helpful.\n\nImagine an 'orders' table with 100 partitions on week. Products have a\nshort life cycle, and are typically around for only a week or two. A query\nlike 'SELECT * FROM orders WHERE product_no = ?' forces a lookup on 100\ndifferent local indexes, but only one global index.\n\n\nSecond-semi-off-topic comment: Josh mentioned that Oracle's partitioning\nintroduces it's own overhead, so I re-ran my earlier benchmarks on one of\nour Oracle machines.\n\nI believe Oracle's licensing agreement prohibits me from posting any\nbenchmarks, so all I'll say is that Postgres' inheritance partitioning\nimplementation is _very_ low overhead, and even union views are competitive.\n\n", "msg_date": "Tue, 21 Dec 2004 17:41:39 -0800", "msg_from": "\"Stacy White\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned table performance" } ]
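For readers who want to reproduce the Append-node comparison discussed in the thread above, here is a minimal sketch of an inheritance-based setup along the lines of the super_foo/sub_foo test case. The partition column name, the CHECK constraints and the row counts are assumptions for illustration only (the thread shows just the EXPLAIN ANALYZE output), and generate_series() requires 8.0 or later:

-- Parent table plus two child tables created via inheritance; the
-- partition_id column and its CHECK constraints are hypothetical.
CREATE TABLE super_foo (partition_id integer, bar1 integer);
CREATE TABLE sub_foo1 (CHECK (partition_id = 1)) INHERITS (super_foo);
CREATE TABLE sub_foo2 (CHECK (partition_id = 2)) INHERITS (super_foo);

-- Load enough rows into the children to make the per-tuple overhead
-- visible, then refresh the planner statistics.
INSERT INTO sub_foo1 SELECT 1, g FROM generate_series(1, 262144) AS g;
INSERT INTO sub_foo2 SELECT 2, g FROM generate_series(1, 262144) AS g;
ANALYZE sub_foo1;
ANALYZE sub_foo2;

-- Scanning the parent goes through an Append node over all children,
-- while scanning one child directly does not -- the comparison made in
-- the plans quoted above.
EXPLAIN ANALYZE SELECT count(*), max(bar1) FROM super_foo;
EXPLAIN ANALYZE SELECT count(*), max(bar1) FROM sub_foo1;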
[ { "msg_contents": "Rodrigo --\n\nYou should definitely drop the indexes and any other FK constraints before loading and then rebuild them. Check your logs and see if there are warnings about checkpoint intervals -- only 3 logs seems like it might be small; if you have the disk space I would definitely consider raising the number. If you haven't already posted your config settings you might do so -- this seems very slow. I regularly use COPY to load or unload data sets in the 200k-900k range and they don't take 90 minutes, even on slower hardware (and usually only a few minutes on our production servers; rebuilding the indexes usually takes longer. \n\nThis unloading a 300k+ row data set on a dell linux box with not very good disks and 1 gig of RAM:\n\nStarting copy of parcel staging table parcels_12031 at Thu Dec 2 01:13:52 2004\nDone with staging table copy at Thu Dec 2 01:15:16 2004\n...\nStarting compression of parcel file at Thu Dec 2 01:15:22 2004\ngzip: /tmp/parcels_12031.unl.gz already exists; do you wish to overwrite (y or n\n)? y\nDone with compression of parcel file at Thu Dec 2 01:17:23 2004\n...\n\nAnd loading them on a rather faster server:\n\nStarting unzip of parcels at Thu Dec 2 01:29:15 2004\nFinished with unzip at Thu Dec 2 01:29:22 2004\n...\nTarget db detail table updated at Thu Dec 2 01:29:29 2004\nDropping indexes\nDropping fk constraint on tracking id\nDropping indexes\nDone dropping indexes on target parcels table at Thu Dec 2 01:29:30 2004\nNOTICE: drop cascades to table f12031.parcel_pins\nNOTICE: drop cascades to table f12031.parcel_addresses\nNOTICE: drop cascades to table f12031.parcel_owner_fti\nNOTICE: drop cascades to table f12031.parcel_owners\nRemoving old parcels entries starting at Thu Dec 2 01:29:30 2004\nDone deleting schema and parcels for track_id 10163541 at Thu Dec 2 01:33:04 2004\nStarting load of parcels at Thu Dec 2 01:33:04 2004\nDone copying data into parcels at Thu Dec 2 01:35:18 2004\nDeleting old v_detail reference for track_id 10163541\nDone with delete of old v_detail reference\nStarting creation of foreign key constraint at Thu Dec 2 01:39:43 2004\nDone with creation of foreign key constraint at Thu Dec 2 01:42:14 2004\nStarting spatial index create at Thu Dec 2 01:42:14 2004\nDone creating spatial index at Thu Dec 2 01:55:04 2004\nStarting stats on geometry column now\nDone doing stats for spatial index at Thu Dec 2 02:03:47 2004\nStarting index on PIN now\nDone creating pin index at Thu Dec 2 02:09:36 2004\nStarting index on tracking id now\nDone creating trid index at Thu Dec 2 02:12:35 2004\nStarting centroid index now\nDone creating centroid index at Thu Dec 2 02:24:11 2004\nStarting stats on centroid column\nDone doing stats for spatial index at Thu Dec 2 02:29:55 2004\nDoing City/Street Index on parcels table ...Done creating city/street index at Thu Dec 2 02:42:41 2004 with result <-1>\nCommitting changes\n\nSo this took about 70 minutes to delete 200000+ rows from a table with about 5 million rows, load a new set and reindex them (and do some statistics for spatial geometry). If the table had only this data the indexing would have been *much* faster. These are moderate size columns -- about 2 dozen columns and some spatial data (polygon and point). Both servers have rather more log files than your setup, but I am not familiar enough with postgres to know how much of an impact that alone will have. 
The comment about it slowing down part way through a load makes me suspect indexing issues, somehow (not from postgres experience but it rings a bell with other DBs); if you explicitly drop the indexes first and then load does it show the same performance behavior ?\n\nIf you are doing the data read from, the database write and the WAL logging all on single disk drive, then I would guess that that is your bottleneck. If you use vmstat and/or top or the like, is your I/O pegged ?\n\nHTH\n\nGreg WIlliamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\tRodrigo Carvalhaes [mailto:[email protected]]\nSent:\tSun 12/5/2004 11:52 AM\nTo:\tChristopher Browne\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Improve BULK insertion\nHi!\n\n1. I am doing the inserts using pg_restore. The dump was created using \npg_dump and the standard format (copy statements)\n2. See below the table schema. There are only 7 indexes. \n3. My transaction log configuration are : checkpoint_segments = 3 and \ncheckpoint_timeout = 300 and my transaction logs are on the same disk .\n\nI know that I can increase the performance separating the transaction \nlogs and making a RAID 5 array BUT I am really curious about WHY this \nperformance is so poor and HOW can I try to improve on this actual \nmachine because actualy this inserts are taking around 90 minutes!!!\n\nCheers!\n\nRodrigo\n\ndadosadv=# \\d si2010\n Table \"public.si2010\"\n Column | Type | Modifiers\n------------+------------------+---------------------------------------------------------------------\n i2_filial | character(2) | not null default ' '::bpchar\n i2_num | character(10) | not null default ' '::bpchar\n i2_linha | character(2) | not null default ' '::bpchar\n i2_data | character(8) | not null default ' '::bpchar\n i2_dc | character(1) | not null default ' '::bpchar\n i2_debito | character(20) | not null default ' \n'::bpchar\n i2_dcd | character(1) | not null default ' '::bpchar\n i2_credito | character(20) | not null default ' \n'::bpchar\n i2_dcc | character(1) | not null default ' '::bpchar\n i2_moedas | character(5) | not null default ' '::bpchar\n i2_valor | double precision | not null default 0.0\n i2_hp | character(3) | not null default ' '::bpchar\n i2_hist | character(40) | not null default \n' '::bpchar\n i2_ccd | character(9) | not null default ' '::bpchar\n i2_ccc | character(9) | not null default ' '::bpchar\n i2_ativdeb | character(6) | not null default ' '::bpchar\n i2_ativcrd | character(6) | not null default ' '::bpchar\n i2_vlmoed2 | double precision | not null default 0.0\n i2_vlmoed3 | double precision | not null default 0.0\n i2_vlmoed4 | double precision | not null default 0.0\n i2_vlmoed5 | double precision | not null default 0.0\n i2_dtvenc | character(8) | not null default ' '::bpchar\n i2_criter | character(4) | not null default ' '::bpchar\n i2_rotina | character(8) | not null default ' '::bpchar\n i2_periodo | character(6) | not null default ' '::bpchar\n i2_listado | character(1) | not null default ' '::bpchar\n i2_origem | character(40) | not null default \n' '::bpchar\n i2_permat | character(4) | not null default ' '::bpchar\n i2_filorig | character(2) | not null default ' '::bpchar\n i2_intercp | character(1) | not null default ' '::bpchar\n i2_identcp | character(12) | not null default ' '::bpchar\n i2_lote | character(4) | not null default ' '::bpchar\n i2_doc | character(6) | not null default ' '::bpchar\n i2_emporig | character(2) | not null default ' '::bpchar\n i2_lp | character(3) | not null default 
' '::bpchar\n i2_itemd | character(9) | not null default ' '::bpchar\n i2_itemc | character(9) | not null default ' '::bpchar\n i2_prelan | character(1) | not null default ' '::bpchar\n i2_tipo | character(2) | not null default ' '::bpchar\n i2_dcc | character(1) | not null default ' '::bpchar\n i2_moedas | character(5) | not null default ' '::bpchar\n i2_valor | double precision | not null default 0.0\n i2_hp | character(3) | not null default ' '::bpchar\n i2_hist | character(40) | not null default \n' '::bpchar\n i2_ccd | character(9) | not null default ' '::bpchar\n i2_ccc | character(9) | not null default ' '::bpchar\n i2_ativdeb | character(6) | not null default ' '::bpchar\n i2_ativcrd | character(6) | not null default ' '::bpchar\n i2_vlmoed2 | double precision | not null default 0.0\n i2_vlmoed3 | double precision | not null default 0.0\n i2_vlmoed4 | double precision | not null default 0.0\n i2_vlmoed5 | double precision | not null default 0.0\n i2_dtvenc | character(8) | not null default ' '::bpchar\n i2_criter | character(4) | not null default ' '::bpchar\n i2_rotina | character(8) | not null default ' '::bpchar\n i2_periodo | character(6) | not null default ' '::bpchar\n i2_listado | character(1) | not null default ' '::bpchar\n i2_origem | character(40) | not null default \n' '::bpchar\n i2_permat | character(4) | not null default ' '::bpchar\n i2_filorig | character(2) | not null default ' '::bpchar\n i2_intercp | character(1) | not null default ' '::bpchar\n i2_identcp | character(12) | not null default ' '::bpchar\n i2_lote | character(4) | not null default ' '::bpchar\n i2_doc | character(6) | not null default ' '::bpchar\n i2_emporig | character(2) | not null default ' '::bpchar\n i2_lp | character(3) | not null default ' '::bpchar\n i2_itemd | character(9) | not null default ' '::bpchar\n i2_itemc | character(9) | not null default ' '::bpchar\n i2_prelan | character(1) | not null default ' '::bpchar\n i2_tipo | character(2) | not null default ' '::bpchar\n d_e_l_e_t_ | character(1) | not null default ' '::bpchar\n r_e_c_n_o_ | double precision | not null default 0.0\nIndexes:\n \"si2010_pkey\" primary key, btree (r_e_c_n_o_)\n \"si20101\" btree (i2_filial, i2_num, i2_linha, i2_periodo, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20102\" btree (i2_filial, i2_periodo, i2_num, i2_linha, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20103\" btree (i2_filial, i2_data, i2_num, i2_linha, r_e_c_n_o_, \nd_e_l_e_t_)\n \"si20104\" btree (i2_filial, i2_debito, i2_data, i2_num, i2_linha, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20105\" btree (i2_filial, i2_credito, i2_data, i2_num, i2_linha, \nr_e_c_n_o_, d_e_l_e_t_)\n \"si20106\" btree (i2_filial, i2_doc, i2_periodo, r_e_c_n_o_, d_e_l_e_t_)\n \"si20107\" btree (i2_filial, i2_origem, r_e_c_n_o_, d_e_l_e_t_)\n\n\nChristopher Browne wrote:\n\n>In the last exciting episode, [email protected] (Grupos) wrote:\n> \n>\n>>Hi !\n>>\n>>I need to insert 500.000 records on a table frequently. It´s a bulk\n>>insertion from my applicatoin.\n>>I am with a very poor performance. 
PostgreSQL insert very fast until\n>>the tuple 200.000 and after it the insertion starts to be really slow.\n>>I am seeing on the log and there is a lot of transaction logs,\n>>something like :\n>>\n>>2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000012\"\n>>2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000013\"\n>>2004-12-04 11:08:59 LOG: recycled transaction log file \"0000000600000011\"\n>>2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000015\"\n>>2004-12-04 11:14:04 LOG: recycled transaction log file \"0000000600000014\"\n>>2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000016\"\n>>2004-12-04 11:19:08 LOG: recycled transaction log file \"0000000600000017\"\n>>2004-12-04 11:24:10 LOG: recycled transaction log file \"0000000600000018\"\n>> \n>>\n>\n>It is entirely normal for there to be a lot of transaction log file\n>recycling when bulk inserts are taking place; that goes through a lot\n>of transaction logs.\n>\n> \n>\n>>How can I configure PostgreSQL to have a better performance on this\n>>bulk insertions ? I already increased the memory values.\n>> \n>>\n>\n>Memory is, as likely as not, NOT the issue.\n>\n>Two questions:\n>\n> 1. How are you doing the inserts? Via INSERT statements? Or\n> via COPY statements? What sort of transaction grouping\n> is involved?\n>\n> COPY is way faster than INSERT, and grouping plenty of updates\n> into a single transaction is generally a \"win.\"\n>\n> 2. What is the schema like? Does the table have a foreign key\n> constraint? Does it have a bunch of indices?\n>\n> If there should eventually be lots of indices, it tends to be\n> faster to create the table with none/minimal indices, and add\n> indexes afterwards, as long as your \"load\" process can be trusted\n> to not break \"unique\" constraints...\n>\n> If there is some secondary table with a foreign key constraint,\n> and _that_ table is growing, it is possible that a sequential\n> scan is being used to search the secondary table where, if you\n> did an ANALYZE on that table, an index scan would be preferred\n> once it grew to larger size...\n>\n>There isn't a particular reason for PostgreSQL to \"hit a wall\" upon\n>seeing 200K records; I and coworkers routinely load database dumps\n>that have millions of (sometimes pretty fat) records, and they don't\n>\"choke.\" That's true whether talking about loading things onto my\n>(somewhat wimpy) desktop PC, or a SMP Xeon system with a small RAID\n>array, or higher end stuff involving high end SMP and EMC disk arrays.\n>The latter obviously being orders of magnitude faster than desktop\n>equipment :-).\n> \n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n", "msg_date": "Sun, 5 Dec 2004 14:48:28 -0800", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve BULK insertion" } ]
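As a concrete illustration of the drop-indexes / load / rebuild approach recommended in the thread above, here is a rough sketch against the si2010 table. The COPY file path is a placeholder, only two of the seven secondary indexes are spelled out (the remaining column lists are in the \d output quoted earlier), and whether the primary key can also be dropped depends on trusting the load to keep r_e_c_n_o_ unique:

-- Drop the secondary indexes so the bulk load does not have to maintain
-- them row by row.
DROP INDEX si20101;
DROP INDEX si20102;
DROP INDEX si20103;
DROP INDEX si20104;
DROP INDEX si20105;
DROP INDEX si20106;
DROP INDEX si20107;

-- One COPY inside a single transaction is much faster than individual
-- INSERT statements; the path below is a placeholder.
BEGIN;
COPY si2010 FROM '/tmp/si2010.copy';
COMMIT;

-- Rebuild the indexes once the data is in place, then refresh statistics.
CREATE INDEX si20101 ON si2010 (i2_filial, i2_num, i2_linha, i2_periodo, r_e_c_n_o_, d_e_l_e_t_);
CREATE INDEX si20102 ON si2010 (i2_filial, i2_periodo, i2_num, i2_linha, r_e_c_n_o_, d_e_l_e_t_);
-- ... si20103 through si20107 follow the same pattern ...
ANALYZE si2010;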
[ { "msg_contents": "Hi all!\nHas anyone done any performance benchmarking of postgresql 7.4 vs 8.0? \nAre there any scenarios where 8.0 can be expected to be faster? I\nwould love to get my hands on any numbers that someone might have.\nAlso does anyone know how long it will take for a stable release of\n8.0 to come? Given the loads of additional features in 8.0, I can't\nwait to use it in production. :-)\n\nthanks a lot everyone!!!\nps\n", "msg_date": "Mon, 6 Dec 2004 14:58:46 +0530", "msg_from": "Postgres Learner <[email protected]>", "msg_from_op": true, "msg_subject": "8.0 vs. 7.4 benchmarks" }, { "msg_contents": "Hi all!\nI posted this on pgsql-performance but got no reply, so here it is:\nthanks!\nps\n\n\n---------- Forwarded message ----------\nFrom: Postgres Learner <[email protected]>\nDate: Mon, 6 Dec 2004 14:58:46 +0530\nSubject: 8.0 vs. 7.4 benchmarks\nTo: [email protected]\n\n\nHi all!\nHas anyone done any performance benchmarking of postgresql 7.4 vs 8.0?\nAre there any scenarios where 8.0 can be expected to be faster? \nI would love to get my hands on any numbers that someone might have.\n\nAlso does anyone know how long it will take for a stable release of\n8.0 to come (any estimates would be good) ? \nGiven the loads of additional features in 8.0, I can't\nwait to use it in production. :-)\n\nthanks a lot everyone!!!\nps\n", "msg_date": "Tue, 7 Dec 2004 14:47:58 +0530", "msg_from": "Postgres Learner <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: 8.0 vs. 7.4 benchmarks" }, { "msg_contents": "On Tue, Dec 07, 2004 at 14:47:58 +0530,\n Postgres Learner <[email protected]> wrote:\n> \n> Has anyone done any performance benchmarking of postgresql 7.4 vs 8.0?\n> Are there any scenarios where 8.0 can be expected to be faster? \n\nHave you read the release notes?\n\n> I would love to get my hands on any numbers that someone might have.\n> \n> Also does anyone know how long it will take for a stable release of\n> 8.0 to come (any estimates would be good) ? \n\nThe last target date I saw mentioned was 2004-12-15. If a second release\ncandidate is needed, I don't know if that date will be met.\n", "msg_date": "Tue, 7 Dec 2004 08:52:33 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.0 vs. 7.4 benchmarks" }, { "msg_contents": ">>I would love to get my hands on any numbers that someone might have.\n>>\n>>Also does anyone know how long it will take for a stable release of\n>>8.0 to come (any estimates would be good) ? \n>> \n>>\n>\n>The last target date I saw mentioned was 2004-12-15. If a second release\n>candidate is needed, I don't know if that date will be met.\n> \n>\nIt should also be noted that putting any .0 release into\nproduction right away is typically a bad idea. This is not\na reflection on PostgreSQL but a reflection on software in general.\n\nIMHO 8.0 means, hey all you external developers -- time to test\nwith your applications and report bugs.\n\n8.1 means, alright we got some wide reports -- fixed a few mistakes\nand now were ready.\n\nSincerely,\n\nJoshua D. 
Drake\n\n\n\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 07 Dec 2004 08:43:03 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.0 vs. 7.4 benchmarks" }, { "msg_contents": "On Tue, Dec 07, 2004 at 08:43:03 -0800,\n \"Joshua D. Drake\" <[email protected]> wrote:\n> \n> IMHO 8.0 means, hey all you external developers -- time to test\n> with your applications and report bugs.\n> \n> 8.1 means, alright we got some wide reports -- fixed a few mistakes\n> and now were ready.\n\nThat should probably be 8.0.1. That is what the next release will be named.\nTypically there is a *.*.1 release not too long after the *.* release. My memory\nis that this has been around 2-3 months for the last serveral *.*\nreleases.\n\n8.1 will be an important release as it should include integrated autovacuum,\nsome tools for handling PITR recoveries and other changes related to lessons\nlearned from the several major feature additions in 8.0. I will be surprised\nif 8.1 is released before next fall.\n\nWe did have a thread about *.* releases about a month ago and the data seemed\nto suggest that the *.* releases tended to be better than the latest version\nof the previous *.* release. (I think the main problem is that some fixes\nwere not being back ported because they are too extensive to be safely\nback ported.) So with 8.0, it might be a good idea to hold off for a little\nbit to see if anything major was missed during beta, but that it might be\ndesirable to upgrade to 8.0 without waiting for 8.0.1 if there aren't any\nmajor problems reported within a few weeks of the release.\n", "msg_date": "Tue, 7 Dec 2004 11:43:55 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.0 vs. 7.4 benchmarks" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Tue, Dec 07, 2004 at 08:43:03 -0800,\n> \"Joshua D. Drake\" <[email protected]> wrote:\n> \n>>IMHO 8.0 means, hey all you external developers -- time to test\n>>with your applications and report bugs.\n>>\n>>8.1 means, alright we got some wide reports -- fixed a few mistakes\n>>and now were ready.\n> \n> \n> That should probably be 8.0.1. That is what the next release will be named.\n\nYour right that was my bad.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> We did have a thread about *.* releases about a month ago and the data seemed\n> to suggest that the *.* releases tended to be better than the latest version\n> of the previous *.* release. (I think the main problem is that some fixes\n> were not being back ported because they are too extensive to be safely\n> back ported.) 
So with 8.0, it might be a good idea to hold off for a little\n> bit to see if anything major was missed during beta, but that it might be\n> desirable to upgrade to 8.0 without waiting for 8.0.1 if there aren't any\n> major problems reported within a few weeks of the release.\n\n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Tue, 07 Dec 2004 10:35:03 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.0 vs. 7.4 benchmarks" } ]
[ { "msg_contents": "Hi,\n\nThis kind of long email.\n\n After searching the mailing list, have not found good answer\n for TableSpace. So, I try to post this question.\n\nMy question\nQuestion :\n 1 Which option from below scenario will be good in term of performance and\nfuture scalability?\n 2. Is it Option B1 below the right approach?\n 3. Is progresql will have problems if I have 7000 tablespace?\n\n------------------------------------------------------------------- \nEnvironment :\n - Windows 2003\n - Postgresql 8.0 beta 5\n\nScenario :\nOriginal Design:\n Total Tables 40:\n - 20 tables are main tables responsible for the others 20 tables\n - the others 20 tables are specific for each department.\n - from these 20 tables(departments)\n there are 4-5 tables that will contain approx 20 millions records\n (these tables will be hit every times access to the website).\n\nRefering to 20 tables which can be partition\nA. All departments tables is put into 20 tables.\n some querying of 20 millions records.\n\nB. For each department create tablespace. (Which means, if there\n are 7000 departments, there will be 7000 tablespace each contains\n 20 tables).\n\n\nQuestion : Which option will be good in term of performance\n and future scalability?\n\nA1. Use A option,\n As tables become huge. partition the tables which hits often\n and has large size file(usually when it bigger than 2-3 GB size)\n into separate tablespace.\n\n Problems in A1 approach :\n 1. query take very long. It might be resolved\n - indexing, better written pgsql statement.\n\n Advantage : total files are small. around 1000 in one directory\n\n\nB1. Use B option,\n Creating 7000 TableSpace for Departments\n - One Department has one tablespace\n - Each Department has 20 tables\n\n Advantage :\n - each table is small and query is very fast.\n - scalability. As the sites grows, contents grows. will\n not effect future scalability as much as A1.\n in A1 the query already max out for performance partition.\n in B1 the query has not max out yet because the data is\n already distribute across thousands of tables\n\n Disadvantage:\n - total numbers of files is huge.\n (after creating 7000 tablespace, and start\n table automatic generator to create 20 tables\n for each 7000 tablespace.\n After running the 1500th tablespace.\n Each TableSpace has : 35 files\n Surprisingly the default table space already has 20000 files)\n - Need to use dynamic table name query. (this is ok,\n since there are not very complex sql statement logic)\n\nI am trying to choose option B1, as it is good for future scability.\n\nQuestion :\n 1. Is it B1 the right approach?\n 2. Is progresql will have problems if I have 7000 tablespace?\n\n\nThank you,\nRosny\n\nnote: previously posted on cygwin. but I think it is more appropriate for\nthis group\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 6 Dec 2004 05:52:53 -0800", "msg_from": "\"Rosny\" <[email protected]>", "msg_from_op": true, "msg_subject": "TableSpace Design issues on Postgres 8.0 beta 5" }, { "msg_contents": "Additional update:\nAfter spending time in searching more option.\nBoth Postgresql and MySQL Claims\n1. Postgresql:\nhttp://www.postgresql.org/users-lounge/limitations.html\n2.MySQL\n(Drawback creating large numbers of tables)\nhttp://forums.devshed.com/t27778/s.html\n\nThis might sound like crazy ideas. I just try to experiment several option\nwhich are easy todo before developing).\n\nAfter running auto generate tables 12 tables each for each of 1600\nTablespaces\nHere are some statistics\n1. 
For every 12 tables created in each Tablespace will create 35 files in\ntablespace directory\n2. For every 12 tables for each 1 tables space will creates additional\napprox 17 additional new(eventhough it has it's own tablespace files for\ndefault database directory\n3. The above screnario creating 19200 tables in database\n4. It take approximately 2 hours for pgAdmin III when starting just to load\nall tables info when first starting.\n Good news(it did not hang, it able to handle all 19200 tables once\nloaded and easily browse all tables info)\n\nI have not try to actually query the database from the web.\nSeems like the whole approach of \"large number of tables\" is not good.\n(As College Database Design course said....:):).\n\nBy creating 12 tables for each of 7000 Tablespace. Now files in folder is\nnot as huge as when everything in one TableSpace.\n\n\nAny opinion is welcome...\n\n\n\n\"Rosny\" <[email protected]> wrote in message\nnews:[email protected]...\n> Hi,\n>\n> This kind of long email.\n>\n> After searching the mailing list, have not found good answer\n> for TableSpace. So, I try to post this question.\n>\n> My question\n> Question :\n> 1 Which option from below scenario will be good in term of performance\nand\n> future scalability?\n> 2. Is it Option B1 below the right approach?\n> 3. Is progresql will have problems if I have 7000 tablespace?\n>\n> ------------------------------------------------------------------- \n> Environment :\n> - Windows 2003\n> - Postgresql 8.0 beta 5\n>\n> Scenario :\n> Original Design:\n> Total Tables 40:\n> - 20 tables are main tables responsible for the others 20 tables\n> - the others 20 tables are specific for each department.\n> - from these 20 tables(departments)\n> there are 4-5 tables that will contain approx 20 millions records\n> (these tables will be hit every times access to the website).\n>\n> Refering to 20 tables which can be partition\n> A. All departments tables is put into 20 tables.\n> some querying of 20 millions records.\n>\n> B. For each department create tablespace. (Which means, if there\n> are 7000 departments, there will be 7000 tablespace each contains\n> 20 tables).\n>\n>\n> Question : Which option will be good in term of performance\n> and future scalability?\n>\n> A1. Use A option,\n> As tables become huge. partition the tables which hits often\n> and has large size file(usually when it bigger than 2-3 GB size)\n> into separate tablespace.\n>\n> Problems in A1 approach :\n> 1. query take very long. It might be resolved\n> - indexing, better written pgsql statement.\n>\n> Advantage : total files are small. around 1000 in one directory\n>\n>\n> B1. Use B option,\n> Creating 7000 TableSpace for Departments\n> - One Department has one tablespace\n> - Each Department has 20 tables\n>\n> Advantage :\n> - each table is small and query is very fast.\n> - scalability. As the sites grows, contents grows. will\n> not effect future scalability as much as A1.\n> in A1 the query already max out for performance partition.\n> in B1 the query has not max out yet because the data is\n> already distribute across thousands of tables\n>\n> Disadvantage:\n> - total numbers of files is huge.\n> (after creating 7000 tablespace, and start\n> table automatic generator to create 20 tables\n> for each 7000 tablespace.\n> After running the 1500th tablespace.\n> Each TableSpace has : 35 files\n> Surprisingly the default table space already has 20000 files)\n> - Need to use dynamic table name query. 
(this is ok,\n> since there are not very complex sql statement logic)\n>\n> I am trying to choose option B1, as it is good for future scability.\n>\n> Question :\n> 1. Is it B1 the right approach?\n> 2. Is progresql will have problems if I have 7000 tablespace?\n>\n>\n> Thank you,\n> Rosny\n>\n> note: previously posted on cygwin. but I think it is more appropriate for\n> this group\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\n\n", "msg_date": "Mon, 6 Dec 2004 12:28:54 -0800", "msg_from": "\"Rosny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TableSpace Design issues on Postgres 8.0 beta 5" }, { "msg_contents": "\"Rosny\" <[email protected]> writes:\n> B. For each department create tablespace. (Which means, if there\n> are 7000 departments, there will be 7000 tablespace each contains\n> 20 tables).\n\nIf your system has seven thousand separate logical filesystems attached\nto it, there might be some value in having seven thousand tablespaces.\nBut I will bet a great deal that it does not and there isn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Dec 2004 23:44:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TableSpace Design issues on Postgres 8.0 beta 5 " }, { "msg_contents": "Thanks for the response.\n\nI just start to get a feel of where TableSpace will be used. You are right I\ndo not have 7000 logical filesystems.\nI am assuming using TableSpace as organization of files in folders in\nWindows 2003 Environment. So, each\nTableSpace will represent one folder(directory) in a drive. But this doesnot\nwork to well for my design.\n\nSince pgAdmin III GUI take 2 hours just to load approximately 14000 tables.\nI am not using TableSpace approach anymore. I am using multiple database\napproach.\n\nAnyway, thanks for the response. Today testing by taking several option to\nthe limit. I kind of having some ideas\nfor future scability.\n\nRosny\n\n\n\"Tom Lane\" <[email protected]> wrote in message\nnews:[email protected]...\n> \"Rosny\" <[email protected]> writes:\n> > B. For each department create tablespace. (Which means, if there\n> > are 7000 departments, there will be 7000 tablespace each contains\n> > 20 tables).\n>\n> If your system has seven thousand separate logical filesystems attached\n> to it, there might be some value in having seven thousand tablespaces.\n> But I will bet a great deal that it does not and there isn't.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\n", "msg_date": "Mon, 6 Dec 2004 23:58:06 -0800", "msg_from": "\"Rosny\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TableSpace Design issues on Postgres 8.0 beta 5" } ]
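To make Tom Lane's point concrete, here is a minimal 8.0 sketch of what a tablespace actually is: a named filesystem location, created once per disk or volume rather than once per department. The directory path, tablespace name and table definition are placeholders for illustration (the directory must already exist and be writable by the server account), and the usual alternative to 7000 per-department table sets is one shared set of tables keyed by a department column:

-- One tablespace per physical location, not per department.
CREATE TABLESPACE big_volume LOCATION 'E:/pgdata/big_volume';

-- A single shared table keyed by department; the columns here are
-- purely illustrative.
CREATE TABLE dept_orders (
    dept_id   integer NOT NULL,
    order_no  integer NOT NULL,
    amount    numeric,
    -- the leading dept_id column keeps per-department lookups indexed
    PRIMARY KEY (dept_id, order_no)
) TABLESPACE big_volume;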
[ { "msg_contents": "Postgresql is the backbone of our spam filtering system. Currently the \nperformance is OK. Wanted to know if someone could give this config a \nquick run down and see if there is anything we can adjust here to smooth \nout the performance. The IO Wait Times are outrageous, at times the load \nwill spike up to the 70 - 90 range.\n\nHardware:\nQuad Opteron 2Ghz\nTyan Quad Opteron Board\n16GB DDR Ram\nEmulex LightPulse LP1050\nEMC Clarion Fiber Array running Raid5\n-----------------------------------------\nSoftware:\nRedHat Linux AS\nPostgresql 7.4.6\n-----------------------------------------\nDetail:\npg_xlog is stored on a local 10k RPM SCSI drive.\nThe rest of the database is stored on the Fiber Array.\n\nCurrently the database is at a size of 87.6Gig. A Vacuum Analyze runs \nevery night and has been taking 4 or 5 hours to complete. Everything \nseems to run fine for a while, then at some point the load goes through \nthe roof and the iowait % also goes way up. It will recover after a \nlittle bit and then do the same thing all over again. When this happens \naccess to the web based user interface slows way down for our customers. \nAny input for improvements to this config would be appreciated, Thanks.\n\n------------------------------------------\n\n------------------------------------------\nVacuum Output:\n\nINFO: analyzing \"pg_catalog.pg_listener\"\nINFO: \"pg_listener\": 0 pages, 0 rows sampled, 0 estimated total rows\nINFO: free space map: 79 relations, 1948399 pages stored; 5306160 total \npages needed\nDETAIL: Allocated FSM size: 500 relations + 2000000 pages = 11769 kB \nshared memory.\nVACUUM\n--------------------------------------------\n\n<--config-->\n\ntcpip_socket = true\nmax_connections = 800\n#superuser_reserved_connections = 2\nport = 5432\n#port = 9999\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#virtual_host = '' # what interface to listen on; defaults \nto any\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 16000\nsort_mem = 16384\nvacuum_mem = 3200000\n\n# - Free Space Map -\n\nmax_fsm_pages = 2000000\nmax_fsm_relations = 500\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 100 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or \nopen_datasync\nwal_buffers = 64 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 50 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 60 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 10 # range 1-1000\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - 
Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\nenable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 50000 # typically 8KB each\nrandom_page_cost = 20 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\ngeqo = true\ngeqo_threshold = 11\ngeqo_effort = 1\ngeqo_generations = 0\ngeqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\ngeqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\nsyslog = 2 # range 0-2; 0=stdout; 1=both; 2=syslog\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n# - When to Log -\n\nclient_min_messages = error # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\nlog_min_messages = error # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, \nfatal,\n # panic\n\nlog_error_verbosity = terse # terse, default, or verbose messages\n\n#log_min_error_statement = panic # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, \ndebug1,\n # info, notice, warning, error, \npanic(off)\n\n#log_min_duration_statement = -1 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. 
Zero prints all queries.\n # Minus-one disables.\n\n#silent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\ndebug_print_parse = false\ndebug_print_rewritten = false\ndebug_print_plan = false\ndebug_pretty_print = false\nlog_connections = false\nlog_duration = false\nlog_pid = false\nlog_statement = false\nlog_timestamp = true\nlog_hostname = true\nlog_source_port = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\nlog_parser_stats = false\nlog_planner_stats = false\nlog_executor_stats = false\nlog_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment \nsetting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'en_US.UTF-8' # locale for system error \nmessage strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\nmax_locks_per_transaction = 200 # min 10, ~260*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n<--config-->\n\n\nThanks\n-- \n---------------------------------------------------------------\nBryan Vest\nComNet Inc.\nbright.net Network Administration/Network Operations\n(888)-618-4638\[email protected] Pager: [email protected]\n---------------------------------------------------------------\n", "msg_date": "Mon, 06 Dec 2004 09:40:18 -0500", "msg_from": "Bryan Vest <[email protected]>", "msg_from_op": true, "msg_subject": "Config review" }, { "msg_contents": "\nBryan Vest <[email protected]> writes:\n\n> Currently the database is at a size of 87.6Gig. A Vacuum Analyze runs every\n> night and has been taking 4 or 5 hours to complete. 
Everything seems to run\n> fine for a while, then at some point the load goes through the roof and the\n> iowait % also goes way up. It will recover after a little bit and then do the\n> same thing all over again. When this happens access to the web based user\n> interface slows way down for our customers. Any input for improvements to this\n> config would be appreciated, Thanks.\n\nWhile others have pointed out problems with the config I don't think any of\nthem explains this irregular behaviour. From what you're describing the\nresponse time is ok most of the time except for these particular bursts?\n\nDo they occur at regular intervals? Is it possible it's just the\ncheckpointing? Can you see which volumes the i/o traffic is on? Is it on the\nlocal transaction log files or is it on the data files? Does the write i/o\nspike upwards or is it just a storm of read i/o? Also, incidentally, Is it\npossible you have a cron job running vacuum and don't realize it?\n\nIf it happens at irregular intervals then it could be a single bad query\nthat's causing the problem. One bad query would cause a sequential scan of\nyour 87G and potentially push out a lot of data from the cache. I imagine this\nmight also be especially bad with the shared_buffers being out of whack.\n\nYou might start by checking the easiest thing first, set\nlog_min_duration_statement to something high and slowly lower it until it's\nprinting a handful of queries during the heaviest period.\n\nYou could also look for a pgsql_tmp directory that indicate a disk sort is\nhappening, which would mean some query is trying to sort a lot of data. You\nmight have to lower sort_mem to a conservative value before you could see that\nthough.\n\nThe pgsql_tmp directory appears (and disappears?) as needed, it's something\nlike this:\n\nbash-2.05b# ls /var/lib/postgres/data/base/17150/pgsql_tmp\npgsql_tmp22184.0\n\n-- \ngreg\n\n", "msg_date": "07 Dec 2004 03:03:08 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config review" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Bryan Vest <[email protected]> writes:\n>> Currently the database is at a size of 87.6Gig. A Vacuum Analyze runs every\n>> night and has been taking 4 or 5 hours to complete. Everything seems to run\n>> fine for a while, then at some point the load goes through the roof and the\n>> iowait % also goes way up. It will recover after a little bit and then do the\n>> same thing all over again.\n\n> While others have pointed out problems with the config I don't think any of\n> them explains this irregular behaviour.\n\nAs Greg says, it might be checkpoints or a background query. If it\nactually is the vacuum itself causing the variation in load, the theory\nthat comes to mind is that the performance tanks when the vacuum run\nswitches from find-dead-tuples to clean-indexes mode; clean-indexes is\nusually a lot more I/O intensive.\n\nISTM it actually doesn't matter much which of these explanations is\ncorrect, because all three imply the same thing: not enough disk I/O\nbandwidth. The disk is near saturation already and any increase in\ndemand drives response time over the knee of the curve.\n\nIf you are using a RAID configuration it might just be that you need\nto adjust the configuration (IIRC, there are some RAID setups that\nare not very write-friendly). 
Otherwise you may have little alternative\nbut to buy faster disks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Dec 2004 10:07:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config review " }, { "msg_contents": "On Tue, Dec 07, 2004 at 10:07:54AM -0500, Tom Lane wrote:\n> If you are using a RAID configuration it might just be that you need\n> to adjust the configuration (IIRC, there are some RAID setups that\n> are not very write-friendly). Otherwise you may have little alternative\n> but to buy faster disks.\n\nIt might be that altering the Clariion array from RAID 5 to RAID 1+0\nwould make a difference; but I'd be very surprised to learn that you\ncould get that array to go a whole lot faster.\n\nOne thing that might also be worth investigating is whether\nperformance actually goes up by moveing the WAL into the array. \nWe've had some remarkably good experiences with our recently-acquired\nEMC.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nWhen my information changes, I alter my conclusions. What do you do sir?\n\t\t--attr. John Maynard Keynes\n", "msg_date": "Tue, 7 Dec 2004 10:19:26 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config review" } ]
[ { "msg_contents": "\nPostgresql is the backbone of our spam filtering system. Currently the \nperformance is OK. Wanted to know if someone could give this config a \nquick run down and see if there is anything we can adjust here to smooth \nout the performance. The IO Wait Times are outrageous, at times the load \nwill spike up to the 70 - 90 range.\n\nHardware:\nQuad Opteron 2Ghz\nTyan Quad Opteron Board\n16GB DDR Ram\nEmulex LightPulse LP1050\nEMC Clarion Fiber Array running Raid5\n-----------------------------------------\nSoftware:\nRedHat Linux AS\nPostgresql 7.4.6\n-----------------------------------------\nDetail:\npg_xlog is stored on a local 10k RPM SCSI drive.\nThe rest of the database is stored on the Fiber Array.\n\nCurrently the database is at a size of 87.6Gig. A Vacuum Analyze runs \nevery night and has been taking 4 or 5 hours to complete. Everything \nseems to run fine for a while, then at some point the load goes through \nthe roof and the iowait % also goes way up. It will recover after a \nlittle bit and then do the same thing all over again. When this happens \naccess to the web based user interface slows way down for our customers. \nAny input for improvements to this config would be appreciated, Thanks.\n\n------------------------------------------\n\n------------------------------------------\nVacuum Output:\n\nINFO: analyzing \"pg_catalog.pg_listener\"\nINFO: \"pg_listener\": 0 pages, 0 rows sampled, 0 estimated total rows\nINFO: free space map: 79 relations, 1948399 pages stored; 5306160 total \npages needed\nDETAIL: Allocated FSM size: 500 relations + 2000000 pages = 11769 kB \nshared memory.\nVACUUM\n--------------------------------------------\n\n<--config-->\n\ntcpip_socket = true\nmax_connections = 800\n#superuser_reserved_connections = 2\nport = 5432\n#port = 9999\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#virtual_host = '' # what interface to listen on; defaults \nto any\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 16000\nsort_mem = 16384\nvacuum_mem = 3200000\n\n# - Free Space Map -\n\nmax_fsm_pages = 2000000\nmax_fsm_relations = 500\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 100 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or \nopen_datasync\nwal_buffers = 64 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 50 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 60 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 10 # range 1-1000\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - 
Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\nenable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 50000 # typically 8KB each\nrandom_page_cost = 20 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\ngeqo = true\ngeqo_threshold = 11\ngeqo_effort = 1\ngeqo_generations = 0\ngeqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\ngeqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\nsyslog = 2 # range 0-2; 0=stdout; 1=both; 2=syslog\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n# - When to Log -\n\nclient_min_messages = error # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\nlog_min_messages = error # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, \nfatal,\n # panic\n\nlog_error_verbosity = terse # terse, default, or verbose messages\n\n#log_min_error_statement = panic # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, \ndebug1,\n # info, notice, warning, error, \npanic(off)\n\n#log_min_duration_statement = -1 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. 
Zero prints all queries.\n # Minus-one disables.\n\n#silent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\ndebug_print_parse = false\ndebug_print_rewritten = false\ndebug_print_plan = false\ndebug_pretty_print = false\nlog_connections = false\nlog_duration = false\nlog_pid = false\nlog_statement = false\nlog_timestamp = true\nlog_hostname = true\nlog_source_port = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\nlog_parser_stats = false\nlog_planner_stats = false\nlog_executor_stats = false\nlog_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment \nsetting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'en_US.UTF-8' # locale for system error \nmessage strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\nmax_locks_per_transaction = 200 # min 10, ~260*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n<--config-->\n\n\nThanks\n-- \n---------------------------------------------------------------\nBryan Vest\nComNet Inc.\nbright.net Network Administration/Network Operations\n(888)-618-4638\[email protected] Pager: [email protected]\n---------------------------------------------------------------\n", "msg_date": "Mon, 06 Dec 2004 09:43:42 -0500", "msg_from": "Bryan <[email protected]>", "msg_from_op": true, "msg_subject": "Config Check" }, { "msg_contents": "\n\n\tAccording to these lines you should set max_fsm_pages to at the very \nleast 5306160\n\tYou have a humongous amount of RAM, you could set it to 10000000\n\n> INFO: free space map: 79 relations, 1948399 pages stored; 5306160 total \n> pages needed\n> DETAIL: Allocated FSM size: 500 relations + 
2000000 pages = 11769 kB \n> shared memory.\n\n\n", "msg_date": "Mon, 06 Dec 2004 16:18:00 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config Check" }, { "msg_contents": "Bryan <[email protected]> writes:\n> vacuum_mem = 3200000\n\nYikes. You do realize that's measured in kilobytes? Try backing it off\nto something saner, like half a gig or less.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Dec 2004 10:56:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config Check " }, { "msg_contents": "Hi Bryan,\n\nJust wondering, i ran vacuumdb but didn't get the information that you \nget about the free space even when i set the verbose option. How did \nyou get that?\n\nThanks,\n\nHasnul\n\n\nBryan wrote:\n\n>\n> Postgresql is the backbone of our spam filtering system. Currently the \n> performance is OK. Wanted to know if someone could give this config a \n> quick run down and see if there is anything we can adjust here to \n> smooth out the performance. The IO Wait Times are outrageous, at times \n> the load will spike up to the 70 - 90 range.\n>\n> Hardware:\n> Quad Opteron 2Ghz\n> Tyan Quad Opteron Board\n> 16GB DDR Ram\n> Emulex LightPulse LP1050\n> EMC Clarion Fiber Array running Raid5\n> -----------------------------------------\n> Software:\n> RedHat Linux AS\n> Postgresql 7.4.6\n> -----------------------------------------\n> Detail:\n> pg_xlog is stored on a local 10k RPM SCSI drive.\n> The rest of the database is stored on the Fiber Array.\n>\n> Currently the database is at a size of 87.6Gig. A Vacuum Analyze runs \n> every night and has been taking 4 or 5 hours to complete. Everything \n> seems to run fine for a while, then at some point the load goes \n> through the roof and the iowait % also goes way up. It will recover \n> after a little bit and then do the same thing all over again. When \n> this happens access to the web based user interface slows way down for \n> our customers. 
Any input for improvements to this config would be \n> appreciated, Thanks.\n>\n> ------------------------------------------\n>\n> ------------------------------------------\n> Vacuum Output:\n>\n> INFO: analyzing \"pg_catalog.pg_listener\"\n> INFO: \"pg_listener\": 0 pages, 0 rows sampled, 0 estimated total rows\n> INFO: free space map: 79 relations, 1948399 pages stored; 5306160 \n> total pages needed\n> DETAIL: Allocated FSM size: 500 relations + 2000000 pages = 11769 kB \n> shared memory.\n> VACUUM\n> --------------------------------------------\n>\n> <--config-->\n>\n> tcpip_socket = true\n> max_connections = 800\n> #superuser_reserved_connections = 2\n> port = 5432\n> #port = 9999\n> #unix_socket_directory = ''\n> #unix_socket_group = ''\n> #unix_socket_permissions = 0777 # octal\n> #virtual_host = '' # what interface to listen on; \n> defaults to any\n> #rendezvous_name = '' # defaults to the computer name\n>\n> # - Security & Authentication -\n>\n> #authentication_timeout = 60 # 1-600, in seconds\n> #ssl = false\n> #password_encryption = true\n> #krb_server_keyfile = ''\n> #db_user_namespace = false\n>\n>\n> #--------------------------------------------------------------------------- \n>\n> # RESOURCE USAGE (except WAL)\n> #--------------------------------------------------------------------------- \n>\n>\n> # - Memory -\n>\n> shared_buffers = 16000\n> sort_mem = 16384\n> vacuum_mem = 3200000\n>\n> # - Free Space Map -\n>\n> max_fsm_pages = 2000000\n> max_fsm_relations = 500\n>\n> # - Kernel Resource Usage -\n>\n> max_files_per_process = 100 # min 25\n> #preload_libraries = ''\n>\n>\n> #--------------------------------------------------------------------------- \n>\n> # WRITE AHEAD LOG\n> #--------------------------------------------------------------------------- \n>\n>\n> # - Settings -\n>\n> fsync = true # turns forced synchronization on or off\n> #wal_sync_method = fsync # the default varies across platforms:\n> # fsync, fdatasync, open_sync, or \n> open_datasync\n> wal_buffers = 64 # min 4, 8KB each\n>\n> # - Checkpoints -\n>\n> checkpoint_segments = 50 # in logfile segments, min 1, 16MB each\n> #checkpoint_timeout = 60 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # 0 is off, in seconds\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 10 # range 1-1000\n>\n>\n> #--------------------------------------------------------------------------- \n>\n> # QUERY TUNING\n> #--------------------------------------------------------------------------- \n>\n>\n> # - Planner Method Enabling -\n>\n> #enable_hashagg = true\n> #enable_hashjoin = true\n> enable_indexscan = true\n> #enable_mergejoin = true\n> #enable_nestloop = true\n> #enable_seqscan = true\n> #enable_sort = true\n> #enable_tidscan = true\n>\n> # - Planner Cost Constants -\n>\n> effective_cache_size = 50000 # typically 8KB each\n> random_page_cost = 20 # units are one sequential page fetch \n> cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n>\n> # - Genetic Query Optimizer -\n>\n> geqo = true\n> geqo_threshold = 11\n> geqo_effort = 1\n> geqo_generations = 0\n> geqo_pool_size = 0 # default based on tables in statement,\n> # range 128-1024\n> geqo_selection_bias = 2.0 # range 1.5-2.0\n>\n> # - Other Planner Options -\n>\n> #default_statistics_target = 10 # range 1-1000\n> #from_collapse_limit = 8\n> #join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n>\n>\n> 
#--------------------------------------------------------------------------- \n>\n> # ERROR REPORTING AND LOGGING\n> #--------------------------------------------------------------------------- \n>\n>\n> # - Syslog -\n>\n> syslog = 2 # range 0-2; 0=stdout; 1=both; 2=syslog\n> #syslog_facility = 'LOCAL0'\n> #syslog_ident = 'postgres'\n>\n> # - When to Log -\n>\n> client_min_messages = error # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, \n> debug1,\n> # log, info, notice, warning, error\n>\n> log_min_messages = error # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, \n> debug1,\n> # info, notice, warning, error, log, \n> fatal,\n> # panic\n>\n> log_error_verbosity = terse # terse, default, or verbose messages\n>\n> #log_min_error_statement = panic # Values in order of increasing \n> severity:\n> # debug5, debug4, debug3, debug2, \n> debug1,\n> # info, notice, warning, error, \n> panic(off)\n>\n> #log_min_duration_statement = -1 # Log all statements whose\n> # execution time exceeds the value, in\n> # milliseconds. Zero prints all \n> queries.\n> # Minus-one disables.\n>\n> #silent_mode = false # DO NOT USE without Syslog!\n>\n> # - What to Log -\n>\n> debug_print_parse = false\n> debug_print_rewritten = false\n> debug_print_plan = false\n> debug_pretty_print = false\n> log_connections = false\n> log_duration = false\n> log_pid = false\n> log_statement = false\n> log_timestamp = true\n> log_hostname = true\n> log_source_port = false\n>\n>\n> #--------------------------------------------------------------------------- \n>\n> # RUNTIME STATISTICS\n> #--------------------------------------------------------------------------- \n>\n>\n> # - Statistics Monitoring -\n>\n> log_parser_stats = false\n> log_planner_stats = false\n> log_executor_stats = false\n> log_statement_stats = false\n>\n> # - Query/Index Statistics Collector -\n>\n> #stats_start_collector = true\n> #stats_command_string = false\n> #stats_block_level = false\n> #stats_row_level = false\n> #stats_reset_on_server_start = true\n>\n>\n> #--------------------------------------------------------------------------- \n>\n> # CLIENT CONNECTION DEFAULTS\n> #--------------------------------------------------------------------------- \n>\n>\n> # - Statement Behavior -\n>\n> #search_path = '$user,public' # schema names\n> #check_function_bodies = true\n> #default_transaction_isolation = 'read committed'\n> #default_transaction_read_only = false\n> #statement_timeout = 0 # 0 is disabled, in milliseconds\n>\n> # - Locale and Formatting -\n>\n> #datestyle = 'iso, mdy'\n> #timezone = unknown # actually, defaults to TZ environment \n> setting\n> #australian_timezones = false\n> #extra_float_digits = 0 # min -15, max 2\n> #client_encoding = sql_ascii # actually, defaults to database encoding\n>\n> # These settings are initialized by initdb -- they may be changed\n> lc_messages = 'en_US.UTF-8' # locale for system error \n> message strings\n> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n> lc_time = 'en_US.UTF-8' # locale for time formatting\n>\n> # - Other Defaults -\n>\n> #explain_pretty_print = true\n> #dynamic_library_path = '$libdir'\n> #max_expr_depth = 10000 # min 10\n>\n>\n> #--------------------------------------------------------------------------- \n>\n> # LOCK MANAGEMENT\n> #--------------------------------------------------------------------------- \n>\n>\n> #deadlock_timeout = 1000 # in milliseconds\n> 
max_locks_per_transaction = 200 # min 10, ~260*max_connections bytes each\n>\n>\n> #--------------------------------------------------------------------------- \n>\n> # VERSION/PLATFORM COMPATIBILITY\n> #--------------------------------------------------------------------------- \n>\n>\n> # - Previous Postgres Versions -\n>\n> #add_missing_from = true\n> #regex_flavor = advanced # advanced, extended, or basic\n> #sql_inheritance = true\n>\n> # - Other Platforms & Clients -\n>\n> #transform_null_equals = false\n>\n> <--config-->\n>\n>\n> Thanks\n\n\n", "msg_date": "Tue, 07 Dec 2004 10:40:41 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config Check" }, { "msg_contents": "Hasnul Fadhly bin Hasan wrote:\n> Hi Bryan,\n> \n> Just wondering, i ran vacuumdb but didn't get the information that you \n> get about the free space even when i set the verbose option. How did \n> you get that?\n> \n> Thanks,\n> \n> Hasnul\n\n\nI believe it is\nVACUUM FULL ANALYZE VERBOSE;\n\nAt the very end you will get a listing like\n\nINFO: free space map: 167 relations, 423 pages stored; 2912 total pages \nneeded\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 186 kB \nshared memory.\n\n(Yes, mine is done on a very static table.)\n\nJohn\n=:->", "msg_date": "Mon, 06 Dec 2004 20:54:42 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config Check" }, { "msg_contents": "Hasnul Fadhly bin Hasan <[email protected]> writes:\n> Just wondering, i ran vacuumdb but didn't get the information that you \n> get about the free space even when i set the verbose option. How did \n> you get that?\n\nPG version? IIRC 7.4 was the first to include that info in the VACUUM\nVERBOSE output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Dec 2004 00:03:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Config Check " } ]
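Drawing the thread's advice together: VACUUM VERBOSE reported roughly 5.3 million pages needed against a 2 million page free space map, and vacuum_mem was set to about 3GB worth of kilobytes. A minimal sketch of the postgresql.conf adjustments being suggested -- the numbers are illustrative only and should be re-derived from your own VACUUM VERBOSE output rather than copied:

# Sketch only -- values follow the advice in the thread above, not a recipe.
max_fsm_pages = 6000000        # comfortably above the ~5.3M "total pages needed"
max_fsm_relations = 1000       # well above the 79 relations reported
vacuum_mem = 524288            # in kB, i.e. 512MB instead of ~3GB

After a restart, the DETAIL line at the end of VACUUM VERBOSE shows whether the new allocation now covers the pages the free space map needs to track.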
[ { "msg_contents": "Hi Everybody,\n\n I have a performance problem with this query , it takes lot of time \non the production database. is there a way to improve it ? i do vacuumdb \non this database and do anlyze on the users table separately daily\n\n\nselect userID, fname, lname, email, phone, dateEntered, dateCanceled,\ndateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as searches\nfrom Users u\nwhere 1=1 AND exists (select userID\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.accountnumber = '12345678')\n AND exists (select userID\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.routingNumber = '12345678')\norder by UserID desc\nlimit 500\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Limit (cost=0.00..12752.61 rows=500 width=120)\n -> Index Scan Backward using users_pkey on users u \n(cost=0.00..2460462.79 rows=96469 width=120)\n Filter: ((subplan) AND (subplan))\n SubPlan\n -> Index Scan using bankaccount_pkey on bankaccount ba \n(cost=0.00..3.07 rows=1 width=0)\n Index Cond: (bankaccountid = $1)\n Filter: (routingnumber = '12345678'::text)\n -> Index Scan using bankaccount_pkey on bankaccount ba \n(cost=0.00..3.07 rows=1 width=0)\n Index Cond: (bankaccountid = $1)\n Filter: (accountnumber = '12345678'::text)\n\nI tried changing it but it still takes lot of time\n\n\nselect userID, fname, lname, email, phone, dateEntered, dateCanceled,\ndateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as searches\nfrom Users u\nwhere 1=1 AND exists (select userID\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.accountnumber = '12345678'\n and ba.routingNumber = '12345678')\norder by UserID desc\nlimit 500\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3309.62 rows=500 width=120)\n -> Index Scan Backward using users_pkey on users u \n(cost=0.00..1277101.86 rows=192938 width=120)\n Filter: (subplan)\n SubPlan\n -> Index Scan using bankaccount_pkey on bankaccount ba \n(cost=0.00..3.07 rows=1 width=0)\n Index Cond: (bankaccountid = $1)\n Filter: ((accountnumber = '12345678'::text) AND \n(routingnumber = '12345678'::text))\n\n\n the users_pkey index on the primary key userid is on users table. it \nseems to be using index but it still takes lot of time.\n here is the output from the pg_class for the users and bankaccount \ntable . Table doesnt have lot of records but this query take anywhere \nfrom 3 to 5 min to run which is really bad for us. 
Can we improve the \nperformance on this query ?\n \nrelname | relpages | reltuples\n---------+----------+-----------\n users | 39967 | 385875\nbankaccount | 242 | 16453\n\n\nThanks!\nPallav\n\n\n\n\n", "msg_date": "Mon, 06 Dec 2004 10:28:23 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Poor Query " }, { "msg_contents": "\n\tHow many rows do the following queries return :\n\nselect userID\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.accountnumber = '12345678'\n\nselect userID\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.routingNumber = '12345678'\n\n\tCan you post EXPLAIN ANALYZE for these two queries ?\n\tRegards.\n", "msg_date": "Mon, 06 Dec 2004 16:48:05 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Query " }, { "msg_contents": "Pierre-Frᅵdᅵric Caillaud wrote:\n\n>\n> How many rows do the following queries return :\n>\n> select userID\n> from bankaccount ba\n> where ba.bankaccountID = u.bankaccountID\n> and ba.accountnumber = '12345678'\n>\n> select userID\n> from bankaccount ba\n> where ba.bankaccountID = u.bankaccountID\n> and ba.routingNumber = '12345678'\n>\n> Can you post EXPLAIN ANALYZE for these two queries ?\n> Regards.\n>\nThanks! for the quick reply. It should usually return just one account \nfor that user so its only one record. Actually userid column doesnt \nexist on bankaccount table it exists only on the user table and it is \njoined with bankaccountid column, if i run this query separately i \nwouldnt able to run it .\n\n", "msg_date": "Mon, 06 Dec 2004 10:52:51 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Query" }, { "msg_contents": "\n\n\tJust wanted to know the selectivity of the accountnumber and \nroutingNumber columns.\n\tI shoulda written :\n\n>> How many rows do the following queries return :\n\tOne or few at most, or a lot ?\n>>\n>> select userID\n>> from bankaccount\n>> WHERE accountnumber = '12345678'\n>>\n>> select userID\n>> from bankaccount\n>> WHERE routingNumber = '12345678'\n>>\n>> Can you post EXPLAIN ANALYZE for these two queries ?\n>> Regards.\n>>\n> Thanks! for the quick reply. It should usually return just one account \n> for that user so its only one record. Actually userid column doesnt \n> exist on bankaccount table it exists only on the user table and it is \n> joined with bankaccountid column, if i run this query separately i \n> wouldnt able to run it .\n>\n>\n\n\n\n", "msg_date": "Mon, 06 Dec 2004 17:16:10 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Query" }, { "msg_contents": "Pierre-Frᅵdᅵric Caillaud wrote:\n\n>\n>\n> Just wanted to know the selectivity of the accountnumber and \n> routingNumber columns.\n> I shoulda written :\n>\n>>> How many rows do the following queries return :\n>>\n> One or few at most, or a lot ? \n\n\nJust One, user can i have only one bankaccount.\n\n>\n>>>\n>>> select userID\n>>> from bankaccount\n>>> WHERE accountnumber = '12345678'\n>>>\n>>> select userID\n>>> from bankaccount\n>>> WHERE routingNumber = '12345678'\n>>>\n>>> Can you post EXPLAIN ANALYZE for these two queries ?\n>>> Regards.\n>>>\n>> Thanks! for the quick reply. It should usually return just one \n>> account for that user so its only one record. 
Actually userid column \n>> doesnt exist on bankaccount table it exists only on the user table \n>> and it is joined with bankaccountid column, if i run this query \n>> separately i wouldnt able to run it .\n>>\n>>\n>\n>\n>\n\n\n", "msg_date": "Mon, 06 Dec 2004 11:37:47 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Query" }, { "msg_contents": "\n\n> Just One, user can i have only one bankaccount.\n\n\tAh well, in that case :\n\tThis is your query :\n\nselect userID, fname, lname, email, phone, dateEntered, dateCanceled,\ndateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as searches\n from Users u\nwhere 1=1 AND exists (select userID\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.accountnumber = '12345678')\nAND exists (select userID\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.routingNumber = '12345678')\norder by UserID desc\nlimit 500\n\n\tWhat it does is scan all users, and for each user, test if it has the \naccountnumber or the routingNumber you seek. You're reversing the problem \n: you should first look for accountnumber and routingNumber, THEN look for \nthe user :\n\n\nSELECT * FROM Users WHERE bankaccountID IN\n(SELECT bankaccountID FROM bankaccount WHERE accountnumber = '12345678' \nOR/AND routingNumber = '12345678')\n\nor :\n\nSELECT * FROM Users WHERE userID IN\n(SELECT userID FROM bankaccount WHERE accountnumber = '12345678' OR/AND \nroutingNumber = '12345678')\n\nThere is something very strange in your query, it seems that bankaccount \nand Users both have a UserID column and a bankaccountID column. Is this \nnormal ? It looks denormalized to me...\n", "msg_date": "Mon, 06 Dec 2004 18:34:22 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Query" }, { "msg_contents": "Pierre-Frᅵdᅵric Caillaud wrote:\n\n>\n>\n>> Just One, user can i have only one bankaccount.\n>\n>\n> Ah well, in that case :\n> This is your query :\n>\n> select userID, fname, lname, email, phone, dateEntered, dateCanceled,\n> dateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as \n> searches\n> from Users u\n> where 1=1 AND exists (select userID\n> from bankaccount ba\n> where ba.bankaccountID = u.bankaccountID\n> and ba.accountnumber = '12345678')\n> AND exists (select userID\n> from bankaccount ba\n> where ba.bankaccountID = u.bankaccountID\n> and ba.routingNumber = '12345678')\n> order by UserID desc\n> limit 500\n>\n> What it does is scan all users, and for each user, test if it has \n> the accountnumber or the routingNumber you seek. You're reversing the \n> problem : you should first look for accountnumber and routingNumber, \n> THEN look for the user :\n>\n>\n> SELECT * FROM Users WHERE bankaccountID IN\n> (SELECT bankaccountID FROM bankaccount WHERE accountnumber = \n> '12345678' OR/AND routingNumber = '12345678')\n>\n> or :\n>\n> SELECT * FROM Users WHERE userID IN\n> (SELECT userID FROM bankaccount WHERE accountnumber = '12345678' \n> OR/AND routingNumber = '12345678')\n>\n> There is something very strange in your query, it seems that \n> bankaccount and Users both have a UserID column and a bankaccountID \n> column. Is this normal ? 
It looks denormalized to me...\n>\nUserid column is only in users table not in bankaccounts table , based \non your suggestion i made changes to the query and here are the explain \nplans :\n\n\nselect userID, fname, lname, email, phone, dateEntered, dateCanceled,\ndateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as searches\nfrom Users u\nwhere bankaccountid in (select bankaccountid\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.accountnumber = '12345678')\n AND bankaccountid in (select bankaccountid\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.routingNumber = '12345678')\norder by UserID desc\nlimit 500\n\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..6642.59 rows=500 width=121) (actual \ntime=40180.116..93650.837 rows=1 loops=1)\n -> Index Scan Backward using users_pkey on users u \n(cost=0.00..1087936.69 rows=81891 width=121) (actual \ntime=40180.112..93650.829 rows=1 loops=1)\n Filter: ((subplan) AND (subplan))\n SubPlan\n -> Index Scan using bankaccount_pkey on bankaccount ba \n(cost=0.00..3.08 rows=1 width=4) (actual time=0.019..0.019 rows=0 loops=3)\n Index Cond: (bankaccountid = $0)\n Filter: (routingnumber = '12345678'::text)\n -> Index Scan using bankaccount_pkey on bankaccount ba \n(cost=0.00..3.08 rows=1 width=4) (actual time=0.004..0.004 rows=0 \nloops=385914)\n Index Cond: (bankaccountid = $0)\n Filter: (accountnumber = '12345678'::text)\n Total runtime: 93684.307 ms\n\n\n\nselect userID, fname, lname, email, phone, dateEntered, dateCanceled,\ndateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as searches\nfrom Users u\nwhere bankaccountid in (select bankaccountid\n from bankaccount ba\n where ba.bankaccountID = u.bankaccountID\n and ba.accountnumber = '12345678'\n and ba.routingNumber = '12345678')\norder by UserID desc\nlimit 500\n\n\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1777.53 rows=500 width=121) (actual \ntime=18479.669..63584.437 rows=1 loops=1)\n -> Index Scan Backward using users_pkey on users u \n(cost=0.00..582250.93 rows=163781 width=121) (actual \ntime=18479.663..63584.428 rows=1 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using bankaccount_pkey on bankaccount ba \n(cost=0.00..3.09 rows=1 width=4) (actual time=0.004..0.004 rows=0 \nloops=385914)\n Index Cond: (bankaccountid = $0)\n Filter: ((accountnumber = '12345678'::text) AND \n(routingnumber = '12345678'::text))\n Total runtime: 63596.222 ms\n\nWhat's wierd is even though there is a index on bankaccountid table it \ndoesnt use that index, it uses the index on the userid table and the \nexecution time is little better but it still takes over a minute to \nexecute .\n\n\n", "msg_date": "Mon, 06 Dec 2004 13:00:27 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Query" }, { "msg_contents": "\n\tYour suffering comes from the \"where ba.bankaccountID = u.bankaccountID\" \nin the subselect. It means postgres has to run the subselect once for each \nrow in Users. You want the subselect to run only once, and return one (or \nmore?) 
bankaccountid's, then fetch the users from Users.\n\n\tJust remove the \"where ba.bankaccountID = u.bankaccountID\" !\n\n> select userID, fname, lname, email, phone, dateEntered, dateCanceled,\n> dateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as \n> searches\n> from Users u\n> where bankaccountid in (select bankaccountid\n> from bankaccount ba\n> where ba.bankaccountID = u.bankaccountID\n> and ba.accountnumber = '12345678'\n> and ba.routingNumber = '12345678')\n> order by UserID desc\n> limit 500\n\nNew version :\n\n select userID, fname, lname, email, phone, dateEntered, dateCanceled,\n dateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as\n searches\n from Users u\n where bankaccountid in (select bankaccountid\n from bankaccount ba\n WHERE ba.accountnumber = '12345678'\n and ba.routingNumber = '12345678')\n\nYou could also do this :\n\n select u.* from Users u, bankaccount ba\n\twhere u.bankaccountid = ba.bankaccountid\n\tand ba.accountnumber = '12345678'\n and ba.routingNumber = '12345678')\n\n\n\n", "msg_date": "Mon, 06 Dec 2004 21:18:28 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Query" }, { "msg_contents": "Pierre-Frᅵdᅵric Caillaud wrote:\n\n>\n> Your suffering comes from the \"where ba.bankaccountID = \n> u.bankaccountID\" in the subselect. It means postgres has to run the \n> subselect once for each row in Users. You want the subselect to run \n> only once, and return one (or more?) bankaccountid's, then fetch the \n> users from Users.\n>\n> Just remove the \"where ba.bankaccountID = u.bankaccountID\" !\n>\n>> select userID, fname, lname, email, phone, dateEntered, dateCanceled,\n>> dateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as \n>> searches\n>> from Users u\n>> where bankaccountid in (select bankaccountid\n>> from bankaccount ba\n>> where ba.bankaccountID = u.bankaccountID\n>> and ba.accountnumber = '12345678'\n>> and ba.routingNumber = '12345678')\n>> order by UserID desc\n>> limit 500\n>\n>\n> New version :\n>\n> select userID, fname, lname, email, phone, dateEntered, dateCanceled,\n> dateSuspended, billAmount, billDate, dateBilled, datePaid, '?' as\n> searches\n> from Users u\n> where bankaccountid in (select bankaccountid\n> from bankaccount ba\n> WHERE ba.accountnumber = '12345678'\n> and ba.routingNumber = '12345678')\n>\n> You could also do this :\n>\n> select u.* from Users u, bankaccount ba\n> where u.bankaccountid = ba.bankaccountid\n> and ba.accountnumber = '12345678'\n> and ba.routingNumber = '12345678')\n>\n>\n>\nThanks! a lot that was it , it is way much better now.\n\n", "msg_date": "Mon, 06 Dec 2004 15:44:04 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Query" } ]
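A possible follow-up to the rewrite that fixed the query above: since the lookup now starts from bankaccount by accountnumber and routingNumber rather than by its primary key, an index on those columns may help as the table grows. This is a sketch only -- the index name is invented here, and whether it pays off depends on how selective those columns really are:

CREATE INDEX bankaccount_acct_routing_idx
    ON bankaccount (accountnumber, routingnumber);

SELECT u.*
  FROM users u
  JOIN bankaccount ba ON ba.bankaccountid = u.bankaccountid
 WHERE ba.accountnumber = '12345678'
   AND ba.routingnumber = '12345678'
 ORDER BY u.userid DESC
 LIMIT 500;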
[ { "msg_contents": "Folks,\n\nI'm wondering if people have had success with 7.4 and 8.0 using specific \ncompile optimizations not provided by the default PG install. Since -O2 \nand others have been built into config, I've not been doing much myself. \n\nWhat are other people's experiences in this area? Do you have any stats to \nback it up?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 6 Dec 2004 09:26:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Processor optimization compile options?" } ]
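For what it's worth, "specific compile optimizations" here just means flags handed to the source build; configure picks up CFLAGS from the environment. A minimal sketch -- the flags and install prefix are purely illustrative, not a recommendation, and any benefit would have to be measured against your own workload:

env CFLAGS='-O3 -funroll-loops' ./configure --prefix=/usr/local/pgsql
make
make install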
[ { "msg_contents": "Hello everyone!\n\nSince our current Postgres server, a quad Xeon system, finally can't keep up with our \nload anymore we're ready to take the next step.\n\nSo the question is: Has anyone experiences with running Postgres on systems with \nmore than 4 processors in a production environment? Which systems and \narchitectures are you using (e.g. IBM xseries, IBM pseries, HP Proliant, Sun Fire, 8-\nway Opteron)? How about conflicts between Postgres' shared memory approach and \nthe NUMA architecture of most multi-processor machines?\n\nMaybe it's time to switch to Oracle or DB2, but before I give up on Postgres, I wanted \nto hear some other opinions.\n\nThanks for any hints and suggestions.\n\nBest regards,\nStephan Vogler\nCipSoft GmbH\n", "msg_date": "Mon, 06 Dec 2004 23:18:14 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "scaling beyond 4 processors" }, { "msg_contents": "\nOn Dec 6, 2004, at 5:18 PM, [email protected] wrote:\n\n> Hello everyone!\n>\n> Since our current Postgres server, a quad Xeon system, finally can't \n> keep up with our\n> load anymore we're ready to take the next step.\n>\n\nI'm assuming you've already done as much query tweaking as possible.\n\nand are you sure you are CPU bound and not IO bound?\n(Symptoms of IO bound are low cpu usage, high load average, poor \nperformance. Many processes in \"D\" state)\n\n> So the question is: Has anyone experiences with running Postgres on \n> systems with\n> more than 4 processors in a production environment? Which systems and\n\nHave you also considered a replicated approach?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Mon, 6 Dec 2004 21:52:26 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling beyond 4 processors" }, { "msg_contents": "In the last exciting episode, [email protected] wrote:\n> Hello everyone!\n>\n> Since our current Postgres server, a quad Xeon system, finally can't\n> keep up with our load anymore we're ready to take the next step.\n>\n> So the question is: Has anyone experiences with running Postgres on\n> systems with more than 4 processors in a production environment? \n> Which systems and architectures are you using (e.g. IBM xseries, IBM\n> pseries, HP Proliant, Sun Fire, 8- way Opteron)? How about conflicts\n> between Postgres' shared memory approach and the NUMA architecture\n> of most multi-processor machines?\n\nThe perhaps odd thing is that just about any alternative to quad-Xeon\nis likely to be _way_ better. There are some context switching\nproblems that lead to it being remarkably poorer than you'd expect.\nThrow in less-than ideal performance of the PAE memory addressing\nsystem and it seems oddly crippled overall.\n\nWe've been getting pretty good results with IBM pSeries systems;\nthey're expensive, but definitely very fast.\n\nPreliminary results with Opterons are also looking very promising.\nOne process seemed about 25x as fast on a 4-way 8GB Opteron as it was\non a 4-way 8GB Xeon, albeit with enough differences to make the\ncomparison dangerous.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). 
wm('cbbrowne','gmail.com').\nhttp://www.ntlug.org/~cbbrowne/sgml.html\nThe IETF motto: \"Rough consensus *and* working code.\"\n", "msg_date": "Mon, 06 Dec 2004 23:16:01 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling beyond 4 processors" }, { "msg_contents": "Volger,\n\n> Since our current Postgres server, a quad Xeon system, finally can't keep\n> up with our load anymore we're ready to take the next step.\n\nThere are a lot of reasons this could be happening; Quad Xeon is a problematic \nplatform, the more so if you're on Dell hardware.\n\nI've run PostgreSQL on 8-ways, and I know there are a few Sunfire users around \nthe community (16-way). There are definitely specific performance issues \naround specific query loads on multi-way systems; search this list archives \nfor \"Context Switch Bug\".\n\nI will echo others in saying that moving to Opteron on premium hardware should \njump you at least 2x on performance, and it's a lot cheaper than DB2 or \nOracle.\n\nAnd, of course, if you really want help from this list, you'll post more \nspecific problems.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 7 Dec 2004 09:49:20 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scaling beyond 4 processors" } ]
[ { "msg_contents": "Hi guys,\n\nWhy would an INSERT ever be really slow? This is what I see a lot of in \nour site logs:\n\nDec 5 15:57:48 marshall postgres[19599]: [3-1] LOG: duration: \n13265.492 ms statement: INSERT INTO users_sessions (sid, cobrand_id, \nuid) VALUES ('145982ac39e1d09fec99cc8a606155e7', '1', '0')\n\n13 seconds to insert a single row!\n\nIt seems to happen at random times during the day. That sessions table \nis heavily updated and inserted, and has pg_autovacuum running vacuum \nanalyze and analyze on it every few minutes I think.\n\nWe don't run any exclusive lock stuff on it.\n\nSo what lock or concurrency issue could cause a single-row insert to \ntake 13 seconds? Could vacuum analyze be doing it?\n\nThanks,\n\nChris\n", "msg_date": "Wed, 08 Dec 2004 10:42:19 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": true, "msg_subject": "Slow insert" }, { "msg_contents": "On Wed, Dec 08, 2004 at 10:42:19AM +0800, Christopher Kings-Lynne wrote:\n> Why would an INSERT ever be really slow? This is what I see a lot of in \n> our site logs:\n> \n> Dec 5 15:57:48 marshall postgres[19599]: [3-1] LOG: duration: \n> 13265.492 ms statement: INSERT INTO users_sessions (sid, cobrand_id, \n> uid) VALUES ('145982ac39e1d09fec99cc8a606155e7', '1', '0')\n> \n> 13 seconds to insert a single row!\n\nDo you have a foreign key or other check which could be really slow?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 11 Dec 2004 02:45:28 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow insert" } ]
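One way to narrow a stall like this down is to look at pg_locks while an INSERT is actually hanging; ungranted entries show what the backend is waiting for. A diagnostic sketch only, not a fix:

SELECT pid, relation::regclass AS relation, mode, granted
  FROM pg_locks
 WHERE NOT granted;

If nothing shows up as blocked, the delay is more likely I/O pressure (for example a checkpoint or a concurrent vacuum) than a lock.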
[ { "msg_contents": "I tried to subscribe to pgsql-performance, but there seems to be\nsomething wrong with the majordomo, so I'm sending to general too,\nwhere I'm already subscribed.\n\nMy problem is this, using PostgreSQL 7.4.6:\n\n\nI have a table that looks like this:\n\n Table \"public.cjm_object\"\n Column | Type | Modifiers\n-----------+-------------------+-----------\n timestamp | bigint | not null\n jobid | bigint | not null\n objectid | bigint | not null\n class | integer | not null\n field | character varying | not null\n data | bytea |\nIndexes:\n \"cjm_object_pkey\" primary key, btree (\"timestamp\", jobid, objectid, \"class\", field)\n \"idx_cjm_object1\" btree (objectid, \"class\", field)\n\n\nThe table has 283465 rows, and the column combination\n(objectid,class,field) can occur several times.\n\nDoing a search with all columns in the pkey works, it uses the index:\n\ndb=# explain analyze select * from cjm_object where timestamp=1102497954815296 and jobid=9 and objectid=4534 and class=12 and field='paroid';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Index Scan using cjm_object_pkey on cjm_object (cost=0.00..32.75 rows=1 width=54) (actual time=0.169..0.172 rows=1 loops=1)\n Index Cond: (\"timestamp\" = 1102497954815296::bigint)\n Filter: ((jobid = 9) AND (objectid = 4534) AND (\"class\" = 12) AND ((field)::text = 'paroid'::text))\n Total runtime: 0.381 ms\n(4 rows)\n\n\n\nBut when doing a search with objectid, class and field, it doesn't use\nthe idx_cjm_object1 index. \ndb=# explain analyze select * from cjm_object where objectid=4534 and class=12 and field='paroid';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Seq Scan on cjm_object (cost=0.00..7987.83 rows=2 width=54) (actual time=21.660..475.664 rows=1 loops=1)\n Filter: ((objectid = 4534) AND (\"class\" = 12) AND ((field)::text = 'paroid'::text))\n Total runtime: 475.815 ms\n(3 rows)\n\n\nI have tried to set enable_seqscan to false, but it gives the same\nresult, except that the estimated cost is higher.\n\nI have also done a vacuum full analyze, and I have reindexed the\ndatabase, the table and the index. 
I have dropped the index and\nrecreated it, but it still gives the same result.\n\nPlease, could someone give me a clue to this?\n\n\nTomas\n", "msg_date": "10 Dec 2004 12:40:50 +0100", "msg_from": "[email protected] (Tomas =?iso-8859-1?q?Sk=E4re?=)", "msg_from_op": true, "msg_subject": "Query is not using index when it should" }, { "msg_contents": "On Fri, 10 Dec 2004, Tomas [iso-8859-1] Sk�re wrote:\n\n> I have a table that looks like this:\n>\n> Table \"public.cjm_object\"\n> Column | Type | Modifiers\n> -----------+-------------------+-----------\n> timestamp | bigint | not null\n> jobid | bigint | not null\n> objectid | bigint | not null\n> class | integer | not null\n> field | character varying | not null\n\nIn 7.4.x and earlier, you need to cast the value you're comparing to into\na bigint in order to make sure the indexes are used (in your timestamp\ncase it appears to work because the value doesn't fit in a plain integer).\n8.0 should handle this better.\n\n> But when doing a search with objectid, class and field, it doesn't use\n> the idx_cjm_object1 index.\n> db=# explain analyze select * from cjm_object where objectid=4534 and class=12 and field='paroid';\n\nUsing one of\n objectid=4534::bigint\n objectid='4534'\n objectid=CAST(4534 as bigint)\nrather than objectid=4534 should make this indexable in 7.4.x.\n", "msg_date": "Fri, 10 Dec 2004 18:28:38 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is not using index when it should" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n\n> On Fri, 10 Dec 2004, Tomas [iso-8859-1] Sk�re wrote:\n> \n> > I have a table that looks like this:\n> >\n> > Table \"public.cjm_object\"\n> > Column | Type | Modifiers\n> > -----------+-------------------+-----------\n> > timestamp | bigint | not null\n> > jobid | bigint | not null\n> > objectid | bigint | not null\n> > class | integer | not null\n> > field | character varying | not null\n> \n> In 7.4.x and earlier, you need to cast the value you're comparing to into\n> a bigint in order to make sure the indexes are used (in your timestamp\n> case it appears to work because the value doesn't fit in a plain integer).\n> 8.0 should handle this better.\n\nThanks, casting worked well for that query. Now, could someone please\nhelp me to get this query faster? With the 283465 rows, it takes far\ntoo long time, I think. This is on a 2GHz Celeron running Linux 2.6. \nshared_buffers=1000, sort_mem=1024. 
\n\nselect c.* from cjm_object c\n inner join\n (select max(timestamp) as timestamp,objectid,field from cjm_object\n group by objectid,field) t\n using(timestamp,objectid,field)\n where 1=1 and data is not null\n order by objectid,field;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=145511.85..150759.75 rows=1 width=54) (actual time=17036.147..20968.811 rows=208246 loops=1)\n Merge Cond: ((\"outer\".objectid = \"inner\".objectid) AND (\"outer\".\"?column7?\" = \"inner\".\"?column4?\") AND (\"outer\".\"timestamp\" = \"inner\".\"timestamp\"))\n -> Sort (cost=47007.75..47611.06 rows=241324 width=54) (actual time=5113.099..5586.094 rows=236710 loops=1)\n Sort Key: c.objectid, (c.field)::text, c.\"timestamp\"\n -> Seq Scan on cjm_object c (cost=0.00..5862.65 rows=241324 width=54) (actual time=0.129..1788.125 rows=236710 loops=1)\n Filter: (data IS NOT NULL)\n -> Sort (cost=98504.09..99212.75 rows=283465 width=48) (actual time=11922.081..12427.683 rows=255001 loops=1)\n Sort Key: t.objectid, (t.field)::text, t.\"timestamp\"\n -> Subquery Scan t (cost=45534.39..51912.35 rows=283465 width=48) (actual time=5484.943..9289.061 rows=255001 loops=1)\n -> GroupAggregate (cost=45534.39..49077.70 rows=283465 width=25) (actual time=5484.925..8178.531 rows=255001 loops=1)\n -> Sort (cost=45534.39..46243.05 rows=283465 width=25) (actual time=5484.285..6324.067 rows=283465 loops=1)\n Sort Key: objectid, field\n -> Seq Scan on cjm_object (cost=0.00..5862.65 rows=283465 width=25) (actual time=0.124..852.749 rows=283465 loops=1)\n Total runtime: 21161.144 ms\n\n\nQuick explanation of the query:\n\nEach row in the table is a field, which is part of an object. Ex:\n\ntimestamp objectid field data\n 1 1 name test\n 1 1 type something\n 1 2 name test2\n 1 2 type whatever\n\nTimestamp is when the entry was inserted in the databas. When updating\na single field for an object, a new line with the new value is added,\ndata set to NULL if the field is deleted. So the above content could\nnow be:\n\ntimestamp objectid field data\n 1 1 name test\n 1 1 type something\n 1 2 name test2\n 1 2 type whatever\n 2 1 name newname\n 2 1 type <NULL>\n\nNow, the query picks out the highest timestamp for each\n(objectid,field) and then selects all columns for each match,\nfiltering out NULL data and ordering per objectid.\n\nIs there any way to make this query faster? I've tried rewriting it,\nputting the subquery as EXISTS condition, but it doesn't make it\nfaster. I've tried to create different indices, but they don't seem to\nbe used in this query.\n\n\nGreetings,\n\nTomas\n\n\n", "msg_date": "11 Dec 2004 15:17:13 +0100", "msg_from": "[email protected] (Tomas =?iso-8859-1?q?Sk=E4re?=)", "msg_from_op": true, "msg_subject": "Re: Query is not using index when it should" }, { "msg_contents": "On Sat, Dec 11, 2004 at 03:17:13PM +0100, Tomas Sk�re wrote:\n> select c.* from cjm_object c\n> inner join\n> (select max(timestamp) as timestamp,objectid,field from cjm_object\n> group by objectid,field) t\n> using(timestamp,objectid,field)\n> where 1=1 and data is not null\n> order by objectid,field;\n\nUsually, SELECT max(field) FROM table is better written in PostgreSQL as\nSELECT field FROM table ORDER field DESC LIMIT 1.\n\nI don't see the point of \"where 1=1\", though...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 11 Dec 2004 15:32:13 +0100", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Query is not using index when it should" }, { "msg_contents": "On Sat, Dec 11, 2004 at 03:32:13PM +0100, Steinar H. Gunderson wrote:\n> On Sat, Dec 11, 2004 at 03:17:13PM +0100, Tomas Sk�re wrote:\n> > select c.* from cjm_object c\n> > inner join\n> > (select max(timestamp) as timestamp,objectid,field from cjm_object\n> > group by objectid,field) t\n> > using(timestamp,objectid,field)\n> > where 1=1 and data is not null\n> > order by objectid,field;\n> \n> Usually, SELECT max(field) FROM table is better written in PostgreSQL as\n> SELECT field FROM table ORDER field DESC LIMIT 1.\n> \n> I don't see the point of \"where 1=1\", though...\n\nI've seen that in generated queries. The generating program uses\n\"WHERE 1=1\" to simplify the addition of other conditions: instead\nof checking if it needs to add a WHERE and putting ANDs in the right\nplaces, it simply appends subsequent conditions with \" AND condition\".\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Sat, 11 Dec 2004 09:25:39 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Query is not using index when it should" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n\n> On Sat, Dec 11, 2004 at 03:17:13PM +0100, Tomas Sk�re wrote:\n> > select c.* from cjm_object c\n> > inner join\n> > (select max(timestamp) as timestamp,objectid,field from cjm_object\n> > group by objectid,field) t\n> > using(timestamp,objectid,field)\n> > where 1=1 and data is not null\n> > order by objectid,field;\n> \n> Usually, SELECT max(field) FROM table is better written in PostgreSQL as\n> SELECT field FROM table ORDER field DESC LIMIT 1.\n\nWell, my subquery doesn't return just one row, but one for each\nobjectid,field combination in the table. I could rewrite it to\nsomething like this:\n\nselect c.* from cjm_object c\nwhere exists (select timestamp from cjm_object t\n where c.objectid=t.objectid\n and c.field=t.field\n order by timestamp desc limit 1)\nand data is not null\norder by objectid;\n\nBut that seems to be even slower, even if it can use an index scan in\nthe subquery. Also it doesn't give the same result set, but I haven't\nlooked into what's wrong yet.\n\n> I don't see the point of \"where 1=1\", though...\n\nIt's just because the actual query is generated by a program, and it's\neasier to always have \"where 1=1\" and then add optional conditions\nwith \"and ...\".\n\n\nTomas\n", "msg_date": "12 Dec 2004 09:32:25 +0100", "msg_from": "[email protected] (Tomas =?iso-8859-1?q?Sk=E4re?=)", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Query is not using index when it should" }, { "msg_contents": "Tomas,\n\n> I tried to subscribe to pgsql-performance, but there seems to be\n> something wrong with the majordomo, so I'm sending to general too,\n> where I'm already subscribed.\n\nWell, I got your e-mail, so obviously you're subscribed to Performance.\n\n> But when doing a search with objectid, class and field, it doesn't use\n> the idx_cjm_object1 index.\n> db=# explain analyze select * from cjm_object where objectid=4534 and\n> class=12 and field='paroid'; QUERY PLAN\n\nTry:\n\nexplain analyze select * from cjm_object where objectid=4534::BIGINT and\nclass=12 and field='paroid';\n\nSometimes the planner needs a little extra help on BIGINT fields. 
This \nproblem is fixed in 8.0.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 12 Dec 2004 18:48:27 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is not using index when it should" } ]
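For the slower "newest row per (objectid, field)" query discussed earlier in this thread, a PostgreSQL-specific DISTINCT ON rewrite is another form worth trying; this is a sketch only, and it keeps the original behaviour of dropping keys whose newest row has NULL data:

SELECT *
  FROM (SELECT DISTINCT ON (objectid, field) *
          FROM cjm_object
         ORDER BY objectid, field, "timestamp" DESC) latest
 WHERE data IS NOT NULL
 ORDER BY objectid, field;

Whether it beats the join against the GROUP BY subquery depends on the data, so it is worth comparing both with EXPLAIN ANALYZE.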
[ { "msg_contents": "First off, WOO-HOO! The lists are back and I can finally get my PG\nfix!!! Now, on to the business at hand...\n\nI have four query plans below, the first two help explain my question,\nand the last two are about a speculative alternative. The first query\nuse a subselects that are generated from a middleware layer and are\nthen joined to create the final result set. In order to keep total\nexecution time down, the middleware imposes a LIMIT clause on each.\n\nI'm using the fact that Postgres can elevate a subselect-join to a\nsimple join when there are no aggregates involved and I think I\nremember there has been some work recently on elevating subselects\nthat contain a LIMIT, so I went back and ran the plans without the\nLIMITs to see what would happen. Well, the limit killed the subselect\nelevation. I'm not too worried about the relative execution times\nsince it's very fast, but more curious about the plan that was chosen.\n\nIt seems that the planner knows that the results from subselect 'b'\nwill contain just one row due to the fact that the index it is\nscanning is unique. Would it not make sense to discard the LIMIT\nclause on that subselect? That would result in the third plan, which\nhas better performance than the generated query, and is guaranteed to\nreturn the same results since the index in use is unique. Also,\nwouldn't it make sense for subselect 'a' to be elevated sans LIMIT\njust to see if there is a unique index it might be able to use?\n\nI realize this is a rather specialized case and not really great form.\n But because PG can, in some cases, elevate subselects, writing\nmiddleware to join results becomes pretty easy. Just a matter of\ndefining result sets independently, and creating a simple wrapper to\njoin them.\n\nIn any case, I will probably end up just detecting the subselect\ncondition in the middleware and drop the limit when there are some\nWHERE clauses on the inner query. 
I just thought I'd bring up a\npossible optimization for the future, and was curious what the gurus\nmight have to say!\n\n\n\n-- Version info and queries in question.\n\noils4=# select version();\n \nversion\n---------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.0.0beta4 on x86_64-unknown-linux-gnu, compiled by GCC\ngcc (GCC) 3.3.4 20040623 (Gentoo Linux 3.3.4-r1, ssp-3.3.2-2,\npie-8.7.6)\n(1 row)\n\n\n-- query 1: the query generated by middleware\n\noils4=# EXPLAIN ANALYZE select a.record, b.control from (select * from\nbiblio.record where id = 100000 limit 1000) b, (select * from\nbiblio.metarecord_field_entry limit 1000) a where a.source = b.id;\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3.68..44.49 rows=5 width=40) (actual\ntime=2.066..2.066 rows=0 loops=1)\n Hash Cond: (\"outer\".source = \"inner\".id)\n -> Subquery Scan a (cost=0.00..35.75 rows=1000 width=16) (actual\ntime=0.005..1.295 rows=1000 loops=1)\n -> Limit (cost=0.00..25.75 rows=1000 width=87) (actual\ntime=0.003..0.641 rows=1000 loops=1)\n -> Seq Scan on metarecord_field_entry \n(cost=0.00..43379.75 rows=1684575 width=87) (actual time=0.003..0.435\nrows=1000 loops=1)\n -> Hash (cost=3.68..3.68 rows=1 width=40) (actual\ntime=0.039..0.039 rows=0 loops=1)\n -> Subquery Scan b (cost=0.00..3.68 rows=1 width=40)\n(actual time=0.031..0.033 rows=1 loops=1)\n -> Limit (cost=0.00..3.67 rows=1 width=1070) (actual\ntime=0.029..0.030 rows=1 loops=1)\n -> Index Scan using biblio_record_pkey on record\n (cost=0.00..3.67 rows=1 width=1070) (actual time=0.027..0.028 rows=1\nloops=1)\n Index Cond: (id = 100000)\n Total runtime: 2.171 ms\n(11 rows)\n\n\n-- query 2: the fast query, no limit allows elevation of subselects\n\noils4=# EXPLAIN ANALYZE select a.record, b.control from (select * from\nbiblio.record where id = 100000) b, (select * from\nbiblio.metarecord_field_entry) a where a.source = b.id;\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..19.95 rows=9 width=22) (actual\ntime=0.043..0.055 rows=7 loops=1)\n -> Index Scan using biblio_record_pkey on record (cost=0.00..3.67\nrows=1 width=22) (actual time=0.025..0.026 rows=1 loops=1)\n Index Cond: (id = 100000)\n -> Index Scan using metarecord_field_entry_source_idx on\nmetarecord_field_entry (cost=0.00..16.19 rows=9 width=16) (actual\ntime=0.011..0.018 rows=7 loops=1)\n Index Cond: (source = 100000)\n Total runtime: 0.101 ms\n(6 rows)\n\n\n\n-- query 3: if we were to drop the limit, since we're using a unique index\n\noils4=# EXPLAIN ANALYZE select a.record, b.control from (select * from\nbiblio.record where id = 100000) b, (select * from\nbiblio.metarecord_field_entry limit 1000) a where a.source = b.id;\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..41.97 rows=5 width=22) (actual\ntime=1.169..1.169 rows=0 loops=1)\n -> Index Scan using biblio_record_pkey on record (cost=0.00..3.67\nrows=1 width=22) (actual time=0.036..0.038 rows=1 loops=1)\n Index Cond: (id = 100000)\n -> Subquery Scan a (cost=0.00..38.25 rows=5 width=16) (actual\ntime=1.126..1.126 
rows=0 loops=1)\n Filter: (source = 100000)\n -> Limit (cost=0.00..25.75 rows=1000 width=87) (actual\ntime=0.005..0.673 rows=1000 loops=1)\n -> Seq Scan on metarecord_field_entry \n(cost=0.00..43379.75 rows=1684575 width=87) (actual time=0.004..0.424\nrows=1000 loops=1)\n Total runtime: 1.243 ms\n(8 rows)\n\n\n\n-- query 4: what I would like the seqscan in query 3 to become...\n\noils4=# EXPLAIN ANALYZE select * from biblio.metarecord_field_entry\nwhere source = 100000 limit 1000;\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..16.19 rows=9 width=87) (actual time=0.026..0.035\nrows=7 loops=1)\n -> Index Scan using metarecord_field_entry_source_idx on\nmetarecord_field_entry (cost=0.00..16.19 rows=9 width=87) (actual\ntime=0.025..0.032 rows=7 loops=1)\n Index Cond: (source = 100000)\n Total runtime: 0.069 ms\n(4 rows)\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Fri, 10 Dec 2004 13:40:02 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": true, "msg_subject": "LIMIT causes SEQSCAN in subselect" }, { "msg_contents": "Mike,\n\n> I'm using the fact that Postgres can elevate a subselect-join to a\n> simple join when there are no aggregates involved and I think I\n> remember there has been some work recently on elevating subselects\n> that contain a LIMIT, so I went back and ran the plans without the\n> LIMITs to see what would happen. Well, the limit killed the subselect\n> elevation. \n\nActually, this makes sense. A LIMIT requires the data to be ordered first, \nand then cut based on the order; it prevents collapsing the subselect into \nthe main query. Some sort of materializing is necessary, even in cases like \nyours where the limit is inherently meaningless because you've neglected to \nuse an ORDER BY.\n\nThe fact that the estimator knows that the LIMIT is pointless because there \nare less rows in the subselect than the LIMIT will return is not something we \nwant to count on; sometimes the estimator has innaccurate information. The \nUNIQUE index makes this more certain, except that I'm not sure that the \nplanner distinguishes between actual UNIQUE indexes and columns which are \nestimated unique (per the pg_stats). And I think you can see in your case \nthat there's quite a difference between a column we're CERTAIN is unique, \nversus a column we THINK is unique.\n\n> I realize this is a rather specialized case and not really great form.\n\nExactly. You've grasped the main issue: that this has not been optimized \nbecause it's bizarre and not very sensible query writing. Someday we'll get \naround to optimizing the really wierd queries, but there's still a lot of \nwork to be done on the common ones (like count(*) ...).\n\nKeep in mind that the only reason we support LIMIT inside subqueries in the \nfirst place is a workaround to slow aggregates, and a way to do RANK. 
It's \ncertainly not SQL-standard.\n\n> Just a matter of \n> defining result sets independently, and creating a simple wrapper to\n> join them.\n\nWell, if you think so, you know where to submit patches ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 10 Dec 2004 21:40:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT causes SEQSCAN in subselect" }, { "msg_contents": "On Fri, 10 Dec 2004 21:40:18 -0800, Josh Berkus <[email protected]> wrote:\n> Mike,\n> The fact that the estimator knows that the LIMIT is pointless because there\n> are less rows in the subselect than the LIMIT will return is not something we\n> want to count on; sometimes the estimator has innaccurate information. The\n> UNIQUE index makes this more certain, except that I'm not sure that the\n> planner distinguishes between actual UNIQUE indexes and columns which are\n> estimated unique (per the pg_stats). And I think you can see in your case\n> that there's quite a difference between a column we're CERTAIN is unique,\n> versus a column we THINK is unique.\n\nAbsolutely. At first I was going to ask if perhaps using the stats to\ndiscard the LIMIT would be possible, but since the stats are only\nguidelines I dropped that. The stats are just so tempting!\n\n> \n> > I realize this is a rather specialized case and not really great form.\n> \n> Exactly. You've grasped the main issue: that this has not been optimized\n> because it's bizarre and not very sensible query writing. Someday we'll get\n> around to optimizing the really wierd queries, but there's still a lot of\n> work to be done on the common ones (like count(*) ...).\n\nAbsolutely. And if I can help out with the common cases to gain some\nKarmic currency I will. ;) After thinking about it some more, I don't\nthink those queries we really all that wacky though. The problem with\nthe example is that the generated query is very simple, and real-world\nqueries that would be used in the subselect would be much more\ncomplex, and row estimation would be untrustworthy without a UNIQUE\nindex.\n\n> \n> Keep in mind that the only reason we support LIMIT inside subqueries in the\n> first place is a workaround to slow aggregates, and a way to do RANK. It's\n> certainly not SQL-standard.\n> \n\nNo it's not, but then nobody ever accused the authors of the SQL spec\nof being omniscient... I' cant think of another way to get, say, a\n'top 10' list from a subselect, or use a paging iterator (LIMIT ..\nOFFSET ..) as the seed for an outer query. Well, other than an SRF of\ncourse.\n\n> > Just a matter of\n> > defining result sets independently, and creating a simple wrapper to\n> > join them.\n> \n> Well, if you think so, you know where to submit patches ...\n> \n\nWell, I do, but I was talking about it being 'easy' in the middleware.\n Just let PG handle optimizing the subselects.\n\nFor example, you have a pile of predefined SELECTS that don't know\nthey are related and are used for simple lookups. You tell the SQL\ngenerator thingy that it should use two of those, queries A and B,\nthat they are related on x, and that you want to see the 'id' from A\nand the 'value' from B. Instead of having to preplan every possible\ncombination of JOINS the SQL generator will toss the preplanned ones\ninto subselects and join them in the outer query instead of having to\nrip them apart and calculate the join syntax. And yes, I know that\nview will take care of most of that for me... 
:)\n\nThanks for all your comments. Pretty much what I expected, but I\nthought I'd raise a use case. I'll just have to give the query\nbuilder more smarts.\n\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Sat, 11 Dec 2004 09:37:15 -0500", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT causes SEQSCAN in subselect" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> The fact that the estimator knows that the LIMIT is pointless because there \n> are less rows in the subselect than the LIMIT will return is not something we\n> want to count on; sometimes the estimator has innaccurate information.\n\nHowever, when the estimator is basing that estimate on the existence of\na unique index for the column, the estimate could be trusted. There are\na couple of reasons that we don't perform that optimization at present,\nthough:\n\n1. If the finished query plan doesn't actually *use* the index in\nquestion, then dropping the index would not directly invalidate the\nquery plan, but nonetheless the query would be broken. You could\nsubsequently get silently-wrong answers.\n\n2. For the particular point at hand, there's an implementation problem,\nwhich is that decisions about whether to flatten subqueries are taken\nbefore we do any rowcount estimation. So even if we discarded the LIMIT\nclause once we realized it was redundant, it'd be too late to get the\noptimal overall plan.\n\nPoint #1 is something I would like to fix whenever we get around to\nimplementing proper invalidation of cached plans. There would need to\nbe a way to list \"indirect\" as well as direct dependencies of a plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 11 Dec 2004 13:06:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT causes SEQSCAN in subselect " }, { "msg_contents": "\n> The fact that the estimator knows that the LIMIT is pointless because \n> there\n> are less rows in the subselect than the LIMIT will return is not \n> something we\n> want to count on; sometimes the estimator has innaccurate information. \n> The\n> UNIQUE index makes this more certain, except that I'm not sure that the\n> planner distinguishes between actual UNIQUE indexes and columns which are\n> estimated unique (per the pg_stats). And I think you can see in your \n> case\n> that there's quite a difference between a column we're CERTAIN is unique,\n> versus a column we THINK is unique.\n\n\tI think a UNIQUE constraint can permit several 'different' NULL values... \nbetter say \"UNIQUE NOT NULL\" ?\n\t\n", "msg_date": "Fri, 24 Dec 2004 02:47:40 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT causes SEQSCAN in subselect" } ]
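A minimal way to reproduce the behaviour discussed in this thread, assuming only a single indexed integer column; the table and column names below are invented for the illustration and are not taken from the posts above:

    -- hypothetical setup
    CREATE TABLE entry_demo (source integer NOT NULL, payload text);
    CREATE INDEX entry_demo_source_idx ON entry_demo (source);

    -- Without a LIMIT the subselect can be pulled up into the outer query,
    -- so the WHERE clause reaches the index on source:
    EXPLAIN ANALYZE
    SELECT * FROM (SELECT * FROM entry_demo) AS sub WHERE source = 100000;

    -- With a LIMIT the subselect has to be evaluated first; the filter can
    -- no longer be pushed below it, so the inner query becomes a seq scan:
    EXPLAIN ANALYZE
    SELECT * FROM (SELECT * FROM entry_demo LIMIT 1000) AS sub WHERE source = 100000;

As the replies point out, this is expected: the LIMIT fixes which rows the subselect produces, so the planner is not free to collapse it into the outer query.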
[ { "msg_contents": "Hi,\n\nI have two similar tables in a database, one stores persons and the\nother stores telephones. They have a similar number of records (around\n70.000), but a indexed search on the persons' table is way faster than\nin the telephones' table. I'm sending the explains atacched, and I\nbelieve that the problem can be in the fact the the explain extimates a\nworng number of rows in the telefones' explain. I'm sending the explains\natacched, and the table and columns' names are in Portuguese, but if it\nmakes easier for you guys I can translate them in my next posts.\n\nThe in dex in the telephone table is multicolumn, I'd tried to drop it\nand create a single-column index, but the results were quite the same.\n\nThanks, \n\n-- \n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+", "msg_date": "Sat, 11 Dec 2004 15:01:24 -0200", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Very different index usage on similar tables" } ]
[ { "msg_contents": "I have a question regarding a serious performance hit taken when using a \nLIMIT clause. I am using version 7.4.6 on FreeBSD 4.10-STABLE with 2GB \nof memory. The table in question contains some 25 million rows with a \nbigserial primary key, orderdate index and a referrer index. The 2 \nselect statements are as follow:\n\nA) select storelocation,order_number from custacct where referrer = 1365 \n and orderdate between '2004-12-07' and '2004-12-07 12:00:00' order by \ncustacctid;\n\nB) select storelocation,order_number from custacct where referrer = 1365 \n and orderdate between '2004-12-07' and '2004-12-07 12:00:00' order by \ncustacctid limit 10;\n\nSo the only difference is the use of the Limit, which, in theory, should \nbe quicker after custacctid is ordered.\n\nNow the analyze results:\n\nA) explain select storelocation,order_number from custacct where \nreferrer = 1365 and orderdate between '2004-12-07' and '2004-12-07 \n12:00:00' order by custacctid;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=904420.55..904468.11 rows=19025 width=44)\n Sort Key: custacctid\n -> Index Scan using orderdate_idx on custacct \n(cost=0.00..903068.29 rows=19025 width=44)\n Index Cond: ((orderdate >= '2004-12-07 00:00:00'::timestamp \nwithout time zone) AND (orderdate <= '2004-12-07 12:00:00'::timestamp \nwithout time zone))\n Filter: (referrer = 1365)\n(5 rows)\n\n************************\n\nB) explain select storelocation,order_number from custacct where \nreferrer = 1365 and orderdate between '2004-12-07' and '2004-12-07 \n12:00:00' order by custacctid limit 10;\n \n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..33796.50 rows=10 width=44)\n -> Index Scan using custacct2_pkey on custacct \n(cost=0.00..64297840.86 rows=19025 width=44)\n Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07 \n00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07 \n12:00:00'::timestamp without time zone))\n(3 rows)\n\n*******************\n\nNotice the huge cost difference in the two plans: 904468 in the one \nwithout LIMIT versus 64297840.86 for the index scan on custacct index. 
\nWhy would the planner switch from using the orderdate index to the \ncustacct index (which is a BIGSERIAL, btw)?\n\nI can change that behavior (and speed up the resultant query) by using \nthe following subquery:\n\nexplain select foo.storelocation, foo.order_number from (select \nstorelocation,order_number from custacct where referrer = 1365 and \norderdate between '2004-12-07' and '2004-12-07 12:00:00' order by \ncustacctid) as foo limit 10;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=904420.55..904420.67 rows=10 width=100)\n -> Subquery Scan foo (cost=904420.55..904658.36 rows=19025 width=100)\n -> Sort (cost=904420.55..904468.11 rows=19025 width=44)\n Sort Key: custacctid\n -> Index Scan using orderdate_idx on custacct \n(cost=0.00..903068.29 rows=19025 width=44)\n Index Cond: ((orderdate >= '2004-12-07 \n00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07 \n12:00:00'::timestamp without time zone))\n Filter: (referrer = 1365)\n(7 rows)\n\nAs a side note, when running query A, the query takes 1772.523 ms, when \nrunning the subselect version to get the limit, it takes 1415.615 ms. \nRunning option B (with the other index being scanned) takes several \nminutes (close to 10 minutes!). What am I missing about how the planner \nviews the LIMIT statement?\n\nSven\n", "msg_date": "Mon, 13 Dec 2004 01:13:43 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": true, "msg_subject": "Using LIMIT changes index used by planner" }, { "msg_contents": "On Mon, 2004-12-13 at 01:13 -0500, Sven Willenberger wrote:\n> I have a question regarding a serious performance hit taken when using a \n> LIMIT clause. I am using version 7.4.6 on FreeBSD 4.10-STABLE with 2GB \n> of memory. The table in question contains some 25 million rows with a \n> bigserial primary key, orderdate index and a referrer index. The 2 \n> select statements are as follow:\n\nIt's an interesting question, but to be able to get answers from this\nlist you will need to provide \"EXPLAIN ANALYZE ...\" rather than just\n\"EXPLAIN ...\".\n\nAFAICS the bad plan on LIMIT is because it optimistically thinks the\nodds are around the 0.00 end, rather than the 64297840.86 end, and\nindeed that is what the \"Limit ...\" estimate is showing. A bad plan (in\nyour case) is encouraged here by the combination of \"LIMIT\" and \"ORDER\nBY\".\n\nFor real background on this, and calculated recommendations, we'd need\nthat more detailed output though.\n\nAs a quick hack, it's possible that you could improve things by\nincreasing the samples on relevant columns with some judicious \"ALTER\nTABLE ... ALTER COLUMN ... SET STATISTICS ...\" commands.\n\nCheers,\n\t\t\t\t\tAndrew McMillan.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Planning an election? 
Call us!\n-------------------------------------------------------------------------", "msg_date": "Mon, 13 Dec 2004 22:56:29 +1300", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LIMIT changes index used by planner" }, { "msg_contents": "\n\nAndrew McMillan wrote:\n> On Mon, 2004-12-13 at 01:13 -0500, Sven Willenberger wrote:\n> \n>>I have a question regarding a serious performance hit taken when using a \n>>LIMIT clause. I am using version 7.4.6 on FreeBSD 4.10-STABLE with 2GB \n>>of memory. The table in question contains some 25 million rows with a \n>>bigserial primary key, orderdate index and a referrer index. The 2 \n>>select statements are as follow:\n> \n> \n> It's an interesting question, but to be able to get answers from this\n> list you will need to provide \"EXPLAIN ANALYZE ...\" rather than just\n> \"EXPLAIN ...\".\n> \n\nA) Query without limit clause:\nexplain analyze select storelocation,order_number from custacct where \nreferrer = 1365 and orderdate between '2004-12-07' and '2004-12-07 \n12:00:00' order by custacctid;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1226485.32..1226538.78 rows=21382 width=43) (actual \ntime=30340.322..30426.274 rows=21432 loops=1)\n Sort Key: custacctid\n -> Index Scan using orderdate_idx on custacct \n(cost=0.00..1224947.52 rows=21382 width=43) (actual \ntime=159.218..30196.686 rows=21432 loops=1)\n Index Cond: ((orderdate >= '2004-12-07 00:00:00'::timestamp \nwithout time zone) AND (orderdate <= '2004-12-07 12:00:00'::timestamp \nwithout time zone))\n Filter: (referrer = 1365)\n Total runtime: 30529.151 ms\n(6 rows)\n\n************************************\n\nA2) Same query run again, to see effect of caching:\nexplain analyze select storelocation,order_number from custacct where \nreferrer = 1365 and orderdate between '2004-12-07' and '2004-12-07 \n12:00:00' order by custacctid;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1226485.32..1226538.78 rows=21382 width=43) (actual \ntime=1402.410..1488.395 rows=21432 loops=1)\n Sort Key: custacctid\n -> Index Scan using orderdate_idx on custacct \n(cost=0.00..1224947.52 rows=21382 width=43) (actual time=0.736..1259.964 \nrows=21432 loops=1)\n Index Cond: ((orderdate >= '2004-12-07 00:00:00'::timestamp \nwithout time zone) AND (orderdate <= '2004-12-07 12:00:00'::timestamp \nwithout time zone))\n Filter: (referrer = 1365)\n Total runtime: 1590.675 ms\n(6 rows)\n\n***********************************\n\nB) Query run with LIMIT\n\nexplain analyze select storelocation,order_number from custacct where \nreferrer = 1365 and orderdate between '2004-12-07' and '2004-12-07 \n12:00:00' order by custacctid limit 10;\n \n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..43065.76 rows=10 width=43) (actual \ntime=1306957.216..1307072.111 rows=10 loops=1)\n -> Index Scan using custacct2_pkey on custacct \n(cost=0.00..92083209.38 rows=21382 width=43) (actual \ntime=1306957.205..1307072.017 rows=10 loops=1)\n Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07 \n00:00:00'::timestamp 
without time zone) AND (orderdate <= '2004-12-07 \n12:00:00'::timestamp without time zone))\n Total runtime: 1307072.231 ms\n(4 rows)\n\n************************************\n\nC) Query using the subselect variation\n\nexplain analyze select foo.storelocation, foo.order_number from (select \nstorelocation,order_number from custacct where referrer = 1365 and \norderdate between '2004-12-07' and '2004-12-07 12:00:00' order by \ncustacctid) as foo limit 10;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1226485.32..1226485.45 rows=10 width=100) (actual \ntime=1413.829..1414.024 rows=10 loops=1)\n -> Subquery Scan foo (cost=1226485.32..1226752.60 rows=21382 \nwidth=100) (actual time=1413.818..1413.933 rows=10 loops=1)\n -> Sort (cost=1226485.32..1226538.78 rows=21382 width=43) \n(actual time=1413.798..1413.834 rows=10 loops=1)\n Sort Key: custacctid\n -> Index Scan using orderdate_idx on custacct \n(cost=0.00..1224947.52 rows=21382 width=43) (actual time=0.740..1272.380 \nrows=21432 loops=1)\n Index Cond: ((orderdate >= '2004-12-07 \n00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07 \n12:00:00'::timestamp without time zone))\n Filter: (referrer = 1365)\n Total runtime: 1418.964 ms\n(8 rows)\n\n\nThanks,\nSven\n", "msg_date": "Mon, 13 Dec 2004 17:06:40 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LIMIT changes index used by planner" }, { "msg_contents": "Sven Willenberger <[email protected]> writes:\n> explain analyze select storelocation,order_number from custacct where \n> referrer = 1365 and orderdate between '2004-12-07' and '2004-12-07 \n> 12:00:00' order by custacctid limit 10;\n \n> QUERY PLAN \n\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..43065.76 rows=10 width=43) (actual \n> time=1306957.216..1307072.111 rows=10 loops=1)\n> -> Index Scan using custacct2_pkey on custacct \n> (cost=0.00..92083209.38 rows=21382 width=43) (actual \n> time=1306957.205..1307072.017 rows=10 loops=1)\n> Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07 \n> 00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07 \n> 12:00:00'::timestamp without time zone))\n> Total runtime: 1307072.231 ms\n> (4 rows)\n\nI think this is the well-known issue of lack of cross-column correlation\nstatistics. The planner is well aware that this indexscan will be\nhorridly expensive if run to completion --- but it's assuming that\nstopping after 10 rows, or 10/21382 of the total scan, will cost only\nabout 10/21382 as much as the whole scan would. This amounts to\nassuming that the rows matching the filter condition are randomly\ndistributed among all the rows taken in custacctid order. I suspect\nthat your test case actually has a great deal of correlation between\ncustacctid and referrer/orderdate, such that the indexscan in custacctid\norder ends up fetching many more rows that fail the filter condition\nthan random chance would suggest, before it finally comes across 10 that\npass the filter.\n\nThere isn't any near-term fix in the wind for this, since storing\ncross-column statistics is an expensive proposition that we haven't\ndecided how to handle. 
Your workaround with separating the ORDER BY\nfrom the LIMIT is a good one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Dec 2004 17:43:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LIMIT changes index used by planner " }, { "msg_contents": "On Mon, 2004-12-13 at 17:43 -0500, Tom Lane wrote:\n> Sven Willenberger <[email protected]> writes:\n> > explain analyze select storelocation,order_number from custacct where \n> > referrer = 1365 and orderdate between '2004-12-07' and '2004-12-07 \n> > 12:00:00' order by custacctid limit 10;\n> \n> > QUERY PLAN \n> \n> > -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.00..43065.76 rows=10 width=43) (actual \n> > time=1306957.216..1307072.111 rows=10 loops=1)\n> > -> Index Scan using custacct2_pkey on custacct \n> > (cost=0.00..92083209.38 rows=21382 width=43) (actual \n> > time=1306957.205..1307072.017 rows=10 loops=1)\n> > Filter: ((referrer = 1365) AND (orderdate >= '2004-12-07 \n> > 00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-07 \n> > 12:00:00'::timestamp without time zone))\n> > Total runtime: 1307072.231 ms\n> > (4 rows)\n> \n> I think this is the well-known issue of lack of cross-column correlation\n> statistics. The planner is well aware that this indexscan will be\n> horridly expensive if run to completion --- \n> <snip>\n> There isn't any near-term fix in the wind for this, since storing\n> cross-column statistics is an expensive proposition that we haven't\n> decided how to handle. Your workaround with separating the ORDER BY\n> from the LIMIT is a good one.\n> \n\nYou are correct in that there is a high degree of correlation between\nthe custacctid (which is a serial key) and the orderdate as the orders\ngenerally get entered in the order that they arrive. I will go with the\nworkaround subselect query plan then.\n\nOn a related note, is there a way (other than set enable_seqscan=off) to\ngive a hint to the planner that it is cheaper to use and index scan\nversus seq scan? Using the \"workaround\" query on any time period greater\nthan 12 hours results in the planner using a seq scan. 
Disabling the seq\nscan and running the query on a full day period for example shows:\n\nexplain analyze select foo.storelocaion, foo.order_number from (select\nstorelocation,order_number from custacct where referrer = 1365 and\nordertdate between '2004-12-09' and '2004-12-10' order by custacctid) as\nfoo limit 10 offset 100;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2661326.22..2661326.35 rows=10 width=100) (actual\ntime=28446.605..28446.796 rows=10 loops=1)\n -> Subquery Scan foo (cost=2661324.97..2661866.19 rows=43297\nwidth=100) (actual time=28444.916..28446.298 rows=110 loops=1)\n -> Sort (cost=2661324.97..2661433.22 rows=43297 width=41)\n(actual time=28444.895..28445.334 rows=110 loops=1)\n Sort Key: custacctid\n -> Index Scan using orderdate_idx on custacct\n(cost=0.00..2657990.68 rows=43297 width=41) (actual\ntime=4.432..28145.212 rows=44333 loops=1)\n Index Cond: ((orderdate >= '2004-12-09\n00:00:00'::timestamp without time zone) AND (orderdate <= '2004-12-10\n00:00:00'::timestamp without time zone))\n Filter: (referrer = 1365)\n Total runtime: 28456.893 ms\n(8 rows)\n\n\nIf I interpret the above correctly, the planner guestimates a cost of\n2661326 but the actual cost is much less (assuming time is equivalent to\ncost). Would the set statistics command be of any benefit here in\n\"training\" the planner?\n\nSven\n\n\n\n", "msg_date": "Tue, 14 Dec 2004 13:28:52 -0500", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LIMIT changes index used by planner" }, { "msg_contents": "Sven Willenberger <[email protected]> writes:\n> On a related note, is there a way (other than set enable_seqscan=off) to\n> give a hint to the planner that it is cheaper to use and index scan\n> versus seq scan?\n\nThere are basically two things you can do. One: if the planner's\nrowcount estimates are badly off, you can try increasing the stats\ntargets for relevant columns in hopes of making the estimates better.\nA too-large rowcount estimate will improperly bias the decision towards\nseqscan. Two: if the rowcounts are in the right ballpark but the\nestimated costs have nothing to do with reality, you can try tuning\nthe planner's cost parameters to make the model match local reality\na bit better. random_page_cost is the grossest knob here;\neffective_cache_size is also worth looking at. See the\npgsql-performance archives for more discussion.\n\n> -> Index Scan using orderdate_idx on custacct\n> (cost=0.00..2657990.68 rows=43297 width=41) (actual\n> time=4.432..28145.212 rows=44333 loops=1)\n\nIn this case there's already a pretty good match between actual and\nestimated rowcount, so increasing the stats targets isn't likely to\nimprove the plan choice; especially since a more accurate estimate would\nshift the costs in the \"wrong\" direction anyway. 
Look to the cost\nparameters, instead.\n\nStandard disclaimer: don't twiddle the cost parameters on the basis\nof only one test case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Dec 2004 14:35:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LIMIT changes index used by planner " }, { "msg_contents": "On Mon, 13 Dec 2004 17:43:07 -0500, Tom Lane <[email protected]> wrote:\n\n> Sven Willenberger <[email protected]> writes:\n>> explain analyze select storelocation,order_number from custacct where\n>> referrer = 1365 and orderdate between '2004-12-07' and '2004-12-07\n>> 12:00:00' order by custacctid limit 10;\n\n\twhy not create an index on referrer, orderdate ?\n", "msg_date": "Fri, 24 Dec 2004 03:05:59 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LIMIT changes index used by planner " } ]
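The index suggested in the last reply can be written out as below. This is only a sketch against the column names quoted in the thread; whether the planner actually prefers the new index over custacct2_pkey still has to be verified with EXPLAIN ANALYZE on the real data:

    -- composite index matching both filter columns of the query
    CREATE INDEX custacct_referrer_orderdate_idx
        ON custacct (referrer, orderdate);

    -- the original query, unchanged; with the index above the filter columns
    -- can be satisfied directly instead of walking the primary key in id order
    SELECT storelocation, order_number
    FROM custacct
    WHERE referrer = 1365
      AND orderdate BETWEEN '2004-12-07' AND '2004-12-07 12:00:00'
    ORDER BY custacctid
    LIMIT 10;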
[ { "msg_contents": "Hi All,\n \nI have a question regarding multiple inserts.\nThe following function inserts for each country found in country table, values into merchant_buyer_country.\n \n----------------------------------------------------------------------------------------------------------------------------------------\n CSQLStatement st( sql );\n CSQLStatement st1( sql );\n SQLINTEGER rows;\n long num_codes = 0;\n rows = st.Select( \"SELECT * FROM merchant_buyer_country where merchant_id = %lu \",merchant_id );\n if ( rows )\n return 0;\n char code[4];\n rows = st.Select( \"SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL\" );\n SQLBindCol( st.hstmt, 1, SQL_C_CHAR, code, sizeof(code), 0 );\n long i;\n for (i = 0; i < rows; i++ )\n {\n st.Fetch();\n st1.Command(\"INSERT INTO merchant_buyer_country (merchant_id,country,enabled,group_id) VALUES(%lu ,'%s', true, %lu )\", merchant_id,\n code,group_id);\n }\n st.CloseCursor();\n st1.CloseCursor();\n return 1;\n----------------------------------------------------------------------------------------------------------------------------------------\n\nOn looking at the log file, I saw separate inserts being performed, and each insert takes about 1 second. \n \ninsert into merchant_buyer_country (merchant_id,country,enabled,group_id) values(1203,'IN','true',1);\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id) values(1203,'US','true',1);\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id) values (1203,'AR','true',1);\ninsert into merchant_buyer_country (merchant_id,country,enabled,group_id) values(1203,'AZ','true',1);\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id) values (1203,'BG','true',1);\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id) values(1203,'SP','true',1);\n.....\n \n\n\n\n\n\nThere are more than 100 countries and this takes a lot of time for the inserts to complete. \nIs there a way to write the INSERT as follows?\n \nINSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, \n(SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n \nI tried this, but I get the following problem:\nERROR: More than one tuple returned by a subselect used as an expression.\n \nI know there is a way to this, but I am not sure where I am going wrong. Can someone please help me figure this out.\n \nThanks,\nSaranya\n \n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Meet the all-new My Yahoo! ��� Try it today! 
\nHi All,\n \nI have a question regarding multiple inserts.\nThe following function inserts for each country found in country table, values into merchant_buyer_country.\n \n\n----------------------------------------------------------------------------------------------------------------------------------------\n        CSQLStatement st( sql );        CSQLStatement st1( sql );        SQLINTEGER rows;        long num_codes = 0;\n        rows = st.Select( \"SELECT * FROM merchant_buyer_country where merchant_id = %lu \",merchant_id );\n        if  ( rows )                return 0;\n    char code[4];        rows = st.Select( \"SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL\" );    SQLBindCol( st.hstmt, 1, SQL_C_CHAR, code, sizeof(code), 0 );   long i;\n   for (i = 0; i < rows; i++ )   {\n          st.Fetch();          st1.Command(\"INSERT INTO merchant_buyer_country (merchant_id,country,enabled,group_id)  VALUES(%lu ,'%s', true, %lu )\", merchant_id, code,group_id);   }\n        st.CloseCursor();    st1.CloseCursor();\n        return 1;----------------------------------------------------------------------------------------------------------------------------------------\nOn looking at the log file, I saw separate inserts being performed, and each insert takes about 1 second. \n \ninsert into merchant_buyer_country (merchant_id,country,enabled,group_id) values(1203,'IN','true',1);\n\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id)  values(1203,'US','true',1);\n\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id) values (1203,'AR','true',1);\n\ninsert into merchant_buyer_country (merchant_id,country,enabled,group_id) values(1203,'AZ','true',1);\n\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id) values (1203,'BG','true',1);\n\ninsert into merchant_buyer_country merchant_id,country,enabled,group_id) values(1203,'SP','true',1);\n.....\n \nThere are more than 100 countries and this takes a lot of time for the inserts to complete. \nIs there a way to write the INSERT as follows?\n \nINSERT into merchant_buyer_country (merchant_id,country,enabled,group_id)  values (1203, \n(SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n \nI tried this, but I get the following problem:\nERROR:  More than one tuple returned by a subselect used as an expression.\n \nI know there is a way to this, but I am not sure where I am going wrong. Can someone please help me figure this out.\n \nThanks,\nSaranya\n \nDo you Yahoo!? \nMeet the all-new My Yahoo! 
��� Try it today!", "msg_date": "Mon, 13 Dec 2004 08:28:39 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT question" }, { "msg_contents": "sarlav kumar <[email protected]> writes:\n> Is there a way to write the INSERT as follows?\n \n> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, \n> (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n \n> I tried this, but I get the following problem:\n> ERROR: More than one tuple returned by a subselect used as an expression.\n\nINSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) \n SELECT 1203, code, 'true', 1 FROM country\n WHERE send IS NOT NULL OR receive IS NOT NULL;\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Dec 2004 11:48:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT question " }, { "msg_contents": "On Mon, Dec 13, 2004 at 08:28:39 -0800,\n sarlav kumar <[email protected]> wrote:\n> \n> Is there a way to write the INSERT as follows?\n> \n> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, \n> (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n> \n\nYou have to use a SELECT instead of the VAlues clause. Something like the\nfollowing should work:\nINSERT INTO merchant_buyer_country (merchant_id, country, enabled, group_id)\n SELECT 1203, code, TRUE, 1 FROM country\n WHERE send IS NOT NULL OR receive IS NOT NULL\n;\n", "msg_date": "Mon, 13 Dec 2004 10:49:52 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT question" }, { "msg_contents": "Thanks!! that worked!:)\n\nTom Lane <[email protected]> wrote:sarlav kumar writes:\n> Is there a way to write the INSERT as follows?\n\n> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, \n> (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n\n> I tried this, but I get the following problem:\n> ERROR: More than one tuple returned by a subselect used as an expression.\n\nINSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) \nSELECT 1203, code, 'true', 1 FROM country\nWHERE send IS NOT NULL OR receive IS NOT NULL;\n\nregards, tom lane\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nThanks!! that worked!:)Tom Lane <[email protected]> wrote:\nsarlav kumar writes:> Is there a way to write the INSERT as follows?> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, > (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);> I tried this, but I get the following problem:> ERROR: More than one tuple returned by a subselect used as an expression.INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) SELECT 1203, code, 'true', 1 FROM countryWHERE send IS NOT NULL OR receive IS NOT NULL;regards, tom lane__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Mon, 13 Dec 2004 10:31:13 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INSERT question " }, { "msg_contents": "Thanks guys!! 
that worked!:)\n\nMichael Adler <[email protected]> wrote: \nOn Mon, Dec 13, 2004 at 08:28:39AM -0800, sarlav kumar wrote:\n> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, \n> (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n> \n> I tried this, but I get the following problem:\n> ERROR: More than one tuple returned by a subselect used as an expression.\n\nINSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) \nSELECT 1203, code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL;\n\n-Mike Adler\n\nBruno Wolff III <[email protected]> wrote:\nOn Mon, Dec 13, 2004 at 08:28:39 -0800,\nsarlav kumar wrote:\n> \n> Is there a way to write the INSERT as follows?\n> \n> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, \n> (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n> \n\nYou have to use a SELECT instead of the VAlues clause. Something like the\nfollowing should work:\nINSERT INTO merchant_buyer_country (merchant_id, country, enabled, group_id)\nSELECT 1203, code, TRUE, 1 FROM country\nWHERE send IS NOT NULL OR receive IS NOT NULL\n;\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\t\t\n---------------------------------\nDo you Yahoo!?\n All your favorites on one personal page ��� Try My Yahoo!\nThanks guys!! that worked!:)\nMichael Adler <[email protected]> wrote: \n\nOn Mon, Dec 13, 2004 at 08:28:39AM -0800, sarlav kumar wrote:> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, > (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);> > I tried this, but I get the following problem:> ERROR: More than one tuple returned by a subselect used as an expression.INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) SELECT 1203, code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL;-Mike AdlerBruno Wolff III <[email protected]> wrote:\nOn Mon, Dec 13, 2004 at 08:28:39 -0800,sarlav kumar wrote:> > Is there a way to write the INSERT as follows?> > INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, > (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);> You have to use a SELECT instead of the VAlues clause. Something like thefollowing should work:INSERT INTO merchant_buyer_country (merchant_id, country, enabled, group_id)SELECT 1203, code, TRUE, 1 FROM countryWHERE send IS NOT NULL OR receive IS NOT NULL;---------------------------(end of broadcast)---------------------------TIP 1: subscribe and unsubscribe commands go to [email protected]\nDo you Yahoo!? 
\nAll your favorites on one personal page ��� Try My Yahoo!", "msg_date": "Mon, 13 Dec 2004 10:33:08 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INSERT question" }, { "msg_contents": "On Mon, 13 Dec 2004 08:28:39 -0800 (PST)\nsarlav kumar <[email protected]> threw this fish to the penguins:\n\n> Hi All,\n> \n> I have a question regarding multiple inserts.\n> The following function inserts for each country found in country table, values into merchant_buyer_country.\n> \n> -----------------------------------------------------------------------------------------------------------------------------------------\n> CSQLStatement st( sql );\n> CSQLStatement st1( sql );\n> SQLINTEGER rows;\n> long num_codes = 0;\n> rows = st.Select( \"SELECT * FROM merchant_buyer_country where merchant_id = %lu \",merchant_id );\n> if ( rows )\n> return 0;\n> char code[4];\n> rows = st.Select( \"SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL\" );\n> SQLBindCol( st.hstmt, 1, SQL_C_CHAR, code, sizeof(code), 0 );\n> long i;\n> for (i = 0; i < rows; i++ )\n> {\n> st.Fetch();\n> st1.Command(\"INSERT INTO merchant_buyer_country (merchant_id,country,enabled,group_id) VALUES(%lu ,'%s', true, %lu )\", merchant_id,\n> code,group_id);\n> }\n> st.CloseCursor();\n> st1.CloseCursor();\n> return 1;\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> \n> On looking at the log file, I saw separate inserts being performed, and each insert takes about 1 second. \n> \n> insert into merchant_buyer_country (merchant_id,country,enabled,group_id) values(1203,'IN','true',1);\n> insert into merchant_buyer_country merchant_id,country,enabled,group_id) values(1203,'US','true',1);\n> insert into merchant_buyer_country merchant_id,country,enabled,group_id) values (1203,'AR','true',1);\n> insert into merchant_buyer_country (merchant_id,country,enabled,group_id) values(1203,'AZ','true',1);\n> insert into merchant_buyer_country merchant_id,country,enabled,group_id) values (1203,'BG','true',1);\n> insert into merchant_buyer_country merchant_id,country,enabled,group_id) values(1203,'SP','true',1);\n> .....\n> \n> \n> \n> \n> \n> \n> There are more than 100 countries and this takes a lot of time for the inserts to complete. \n> Is there a way to write the INSERT as follows?\n> \n> INSERT into merchant_buyer_country (merchant_id,country,enabled,group_id) values (1203, \n> (SELECT code FROM country WHERE send IS NOT NULL OR receive IS NOT NULL), 'true',1);\n> \n> I tried this, but I get the following problem:\n> ERROR: More than one tuple returned by a subselect used as an expression.\n> \n> I know there is a way to this, but I am not sure where I am going wrong. Can someone please help me figure this out.\n\nTry:\n\ninsert into merchant_buyer_country select 1203,code,true,1 from country where send is not null or receive is not null;\n\n-- George Young\n-- \n\"Are the gods not just?\" \"Oh no, child.\nWhat would become of us if they were?\" (CSL)\n", "msg_date": "Wed, 29 Dec 2004 13:58:58 -0500", "msg_from": "george young <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] INSERT question" } ]
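The pattern given in the answers above, as one self-contained script that can be run to see the effect; the _demo table names are made up so it does not touch the real schema:

    -- throwaway copies of the two tables involved
    CREATE TEMP TABLE country_demo (code char(2), send integer, receive integer);
    CREATE TEMP TABLE merchant_buyer_country_demo (
        merchant_id bigint, country char(2), enabled boolean, group_id bigint);

    INSERT INTO country_demo VALUES ('IN', 1, 1);
    INSERT INTO country_demo VALUES ('US', 1, NULL);
    INSERT INTO country_demo VALUES ('AR', NULL, NULL);

    -- one INSERT ... SELECT replaces the client-side loop of single-row INSERTs;
    -- only rows with send or receive set are copied ('AR' is skipped)
    INSERT INTO merchant_buyer_country_demo (merchant_id, country, enabled, group_id)
    SELECT 1203, code, TRUE, 1
    FROM country_demo
    WHERE send IS NOT NULL OR receive IS NOT NULL;

    SELECT * FROM merchant_buyer_country_demo;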
[ { "msg_contents": "Hi,\n\nI know that it's not very polite thing re-send a question, but I don't\nhave any idea about why can this be happening. I have two almost\nidentical tables, with equivalent indexes, but their performances are\nvery different. In this case, I'm sending the queries, explains,\ntables'structures and record counts. I think this is the place where I\ncan most probably get help about performance issues.\n\nThanks in advance,\n\n-- \n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+", "msg_date": "Mon, 13 Dec 2004 15:17:49 -0200", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Similar tables, different indexes performance" }, { "msg_contents": "On Mon, Dec 13, 2004 at 15:17:49 -0200,\n Alvaro Nunes Melo <[email protected]> wrote:\n> db=> SELECT COUNT(*) FROM titulo WHERE cd_pessoa = 1;\n> count \n> -------\n> 220\n> (1 record)\n> \n> Time: 48,762 ms\n> db=> SELECT COUNT(*) FROM movimento WHERE cd_pessoa = 1;\n> count \n> -------\n> 221\n> (1 record)\n> \n> Time: 1158,463 ms\n\nI suspect you have a lot of dead tuples in those tables.\nHave you vacuumed them recently?\nWas there enough FSM space when you did so?\n\nYou might try doing VACUUM FULL on each table now and see if that\nfixes the problem.\n", "msg_date": "Mon, 13 Dec 2004 12:03:03 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Similar tables, different indexes performance" }, { "msg_contents": "Em Seg, 2004-12-13 �s 16:03, Bruno Wolff III escreveu:\n> On Mon, Dec 13, 2004 at 15:17:49 -0200,\n> Alvaro Nunes Melo <[email protected]> wrote:\n> > db=> SELECT COUNT(*) FROM titulo WHERE cd_pessoa = 1;\n> > count \n> > -------\n> > 220\n> > (1 record)\n> > \n> > Time: 48,762 ms\n> > db=> SELECT COUNT(*) FROM movimento WHERE cd_pessoa = 1;\n> > count \n> > -------\n> > 221\n> > (1 record)\n> > \n> > Time: 1158,463 ms\n> \n> I suspect you have a lot of dead tuples in those tables.\n> Have you vacuumed them recently?\n> Was there enough FSM space when you did so?\n> \n> You might try doing VACUUM FULL on each table now and see if that\n> fixes the problem.\nThe table had not too many tuples delete, but I runned a VACUUM FULL\nVERBOSE ANALYZE and the query's cost and execution time are stil the\nsame. 
The output was:\nINFO: vacuuming \"public.movimento\"\nINFO: \"movimento\": found 13 removable, 347355 nonremovable row versions\nin 3251 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 68 to 74 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 131440 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n90 pages containing 14824 free bytes are potential move destinations.\nCPU 0.06s/0.03u sec elapsed 0.81 sec.\nINFO: index \"idx_movimento_cd_pessoa\" now contains 347355 row versions\nin 764 pages\nDETAIL: 13 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.02u sec elapsed 0.18 sec.\nINFO: index \"pk_movimento\" now contains 347355 row versions in 764\npages\nDETAIL: 13 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.02u sec elapsed 0.39 sec.\nINFO: index \"idx_movimento_cd_pessoa_id_tipo\" now contains 347355 row\nversions in 956 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.03u sec elapsed 0.27 sec.\nINFO: \"movimento\": moved 9 row versions, truncated 3251 to 3250 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.37 sec.\nINFO: index \"idx_movimento_cd_pessoa\" now contains 347355 row versions\nin 764 pages\nDETAIL: 9 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.02u sec elapsed 0.08 sec.\nINFO: index \"pk_movimento\" now contains 347355 row versions in 764\npages\nDETAIL: 9 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.02u sec elapsed 0.04 sec.\nINFO: index \"idx_movimento_cd_pessoa_id_tipo\" now contains 347355 row\nversions in 956 pages\nDETAIL: 9 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.02u sec elapsed 0.07 sec.\nINFO: vacuuming \"pg_toast.pg_toast_31462037\"\nINFO: \"pg_toast_31462037\": found 0 removable, 0 nonremovable row\nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_31462037_index\" now contains 0 row versions in 1\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: analyzing \"public.movimento\"\nINFO: \"movimento\": 3250 pages, 3000 rows sampled, 347170 estimated\ntotal rows\n\n\n-- \n+---------------------------------------------------+\n| Alvaro Nunes Melo Atua Sistemas de Informacao |\n| [email protected] www.atua.com.br |\n| UIN - 42722678 (54) 327-1044 |\n+---------------------------------------------------+\n\n", "msg_date": "Mon, 13 Dec 2004 17:32:02 -0200", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Similar tables, different indexes performance" }, { "msg_contents": "On Mon, Dec 13, 2004 at 17:32:02 -0200,\n Alvaro Nunes Melo <[email protected]> wrote:\n> Em Seg, 2004-12-13 �s 16:03, Bruno Wolff III escreveu:\n> > On Mon, Dec 13, 2004 at 15:17:49 -0200,\n> > Alvaro Nunes Melo 
<[email protected]> wrote:\n> > > db=> SELECT COUNT(*) FROM titulo WHERE cd_pessoa = 1;\n> > > count \n> > > -------\n> > > 220\n> > > (1 record)\n> > > \n> > > Time: 48,762 ms\n> > > db=> SELECT COUNT(*) FROM movimento WHERE cd_pessoa = 1;\n> > > count \n> > > -------\n> > > 221\n> > > (1 record)\n> > > \n> > > Time: 1158,463 ms\n> > \n> > I suspect you have a lot of dead tuples in those tables.\n> > Have you vacuumed them recently?\n> > Was there enough FSM space when you did so?\n> > \n> > You might try doing VACUUM FULL on each table now and see if that\n> > fixes the problem.\n> The table had not too many tuples delete, but I runned a VACUUM FULL\n> VERBOSE ANALYZE and the query's cost and execution time are stil the\n> same. The output was:\n> INFO: vacuuming \"public.movimento\"\n> INFO: \"movimento\": found 13 removable, 347355 nonremovable row versions\n> in 3251 pages\n\nIf the table really has 300K rows, then something else is wrong. One likely\ncandidate is if cd_pessoa is int8 there is a quirk in postgres (which is\nfixed in 8.0) where comparing that column to an int4 constant won't use\nan index scan. This can be worked around by either casting the constant\n(e.g. 1::int8) or quoting it (e.g. '1') to delay fixing the type so that\nit will be taken to be an int8 constant.\n", "msg_date": "Mon, 13 Dec 2004 23:22:50 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Similar tables, different indexes performance" } ]
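If the int8-versus-int4 quirk described in the last reply is indeed the cause, the workaround looks like this on the table from the thread (only the constant changes; this applies to releases before 8.0):

    -- may ignore the index when cd_pessoa is int8, because the bare 1 is taken as int4
    SELECT COUNT(*) FROM movimento WHERE cd_pessoa = 1;

    -- either form lets the pre-8.0 planner match the constant to an int8 column
    SELECT COUNT(*) FROM movimento WHERE cd_pessoa = 1::int8;
    SELECT COUNT(*) FROM movimento WHERE cd_pessoa = '1';

    -- confirm the plan change
    EXPLAIN ANALYZE SELECT COUNT(*) FROM movimento WHERE cd_pessoa = '1';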
[ { "msg_contents": "Hi,\n\nI am not sure if this is the place to ask this question, but since the \nquestion is trying to improve the performance.. i guess i am not that \nfar off.\n\nMy question is if there is a query design that would query multiple \nserver simultaneously.. would that improve the performance?\n\nTo make it clear.. let's say we have 3 db servers. 1 server is just \ndesigned to take the queries while the other 2 server is the ones that \nactually\nholds the data. let's say we have a query of 'select * from \ncustomer_data' and we change it to\nselect * from\n(\ndblink('db1','select * from customer_data where timestamp between \ntimestamp \\'01-01-2004\\' and timestamp \\'06-30-2004\\'')\nunion\ndblink('db2','select * from customer_data where timestamp between \ntimestamp \\'01-07-2004\\' and timestamp \\'12-31-2004\\'')\n)\n\nWould the subquery above be done simultaneously by postgres before doing \nthe end query? or would it just execute one at a time?\n\nIf it does execute simultaneously.. it's possible to create code to \nconvert normal queries to distributed queries and requesting data from \nmultiple\ndatabase to improve performance. This would be advantageous for large \namount of data.\n\nThanks,\n\nHasnul\n\n\n", "msg_date": "Tue, 14 Dec 2004 09:44:56 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "Trying to create multi db query in one large queries" }, { "msg_contents": "Hasnul,\n\n> My question is if there is a query design that would query multiple\n> server simultaneously.. would that improve the performance?\n\nNot without a vast amounts of infrastructure coding. You're basically \ntalking about what Oracle has spent the last 3 years and $100 million working \non.\n\nWould be nice, though.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 13 Dec 2004 23:18:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to create multi db query in one large queries" }, { "msg_contents": "The world rejoiced as [email protected] (Josh Berkus) wrote:\n> Hasnul,\n>\n>> My question is if there is a query design that would query multiple\n>> server simultaneously.. would that improve the performance?\n>\n> Not without a vast amounts of infrastructure coding. You're\n> basically talking about what Oracle has spent the last 3 years and\n> $100 million working on.\n\nI recall a presentation from folks from Empress Software\n<http://www.empress.com/> back in about '94 or '95 offering this very\nfeature as part of the \"base functionality\" of their product.\n\nI'm not sure it's quite fair to assess things as \"more or less\npreposterous\" simply because they prove ludicrously expensive to\ndevelop on a particular platform that happens to be targeted by even\nmore ludicrous quantities of development dollars...\n\nOn the other hand, it seems unlikely that \"improved performance\" would\nbe one of the merits of this approach...\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://www3.sympatico.ca/cbbrowne/\nRules of the Evil Overlord #92. \"If I ever talk to the hero on the\nphone, I will not taunt him. Instead I will say that his dogged\nperseverance has given me new insight on the futility of my evil ways\nand that if he leaves me alone for a few months of quiet contemplation\nI will likely return to the path of righteousness. (Heroes are\nincredibly gullible in this regard.) 
<http://www.eviloverlord.com/>\n", "msg_date": "Wed, 15 Dec 2004 23:22:15 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to create multi db query in one large queries" } ]
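For reference, a syntactically complete form of the two-server query sketched in the first post: dblink() returning records needs a column definition list, and the connection strings and column types here are placeholders rather than a real schema. As the follow-up thread notes, the two dblink() calls are still executed one after the other, not in parallel:

    SELECT * FROM dblink('dbname=db1',
           'SELECT id, ts, amount FROM customer_data
             WHERE ts BETWEEN ''2004-01-01'' AND ''2004-06-30''')
           AS t1(id integer, ts timestamp, amount numeric)
    UNION ALL
    SELECT * FROM dblink('dbname=db2',
           'SELECT id, ts, amount FROM customer_data
             WHERE ts BETWEEN ''2004-07-01'' AND ''2004-12-31''')
           AS t2(id integer, ts timestamp, amount numeric);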
[ { "msg_contents": "Hello,\n\nMy experience with dblink() is that each dblink() is executed serially, in\npart I would guess, due to the plan for the query. To have each query run\nin parallel you would need to execute both dblink()'s simultaneously saving\neach result into a table. I'm not sure if the same table could be\nspecified. Would depend on the constaint's I suppose.\n\n#!/bin/sh\n# Query 1\npsql -d mydb -c \"select * into mytable from dblink('db1','select * from\ncustomer_data where timestamp between timestamp \\'01-01-2004\\' and timestamp\n\\'06-30-2004\\'') as t1(c1 int, c2 text, ...);\" & PID1=$!\n# Query 2\npsql -d mydb -c \"select * into mytable from dblink('db2','select * from\ncustomer_data where timestamp between timestamp \\'01-07-2004\\' and timestamp\n\\'12-31-2004\\'') as t2(c1 int, c2 text, ...);\" & PID2=$!\n# wait\nwait $PID1\nwait $PID2\n# Do more on mydb.mytable\n...\n\nSomething like that so no guaranties. I do remember testing with this a\nwhile back and it is useful for JOIN's.\n\nGreg\n\n\n-----Original Message-----\nFrom: Hasnul Fadhly bin Hasan\nTo: [email protected]\nSent: 12/13/04 8:44 PM\nSubject: [PERFORM] Trying to create multi db query in one large queries\n\nHi,\n\nI am not sure if this is the place to ask this question, but since the \nquestion is trying to improve the performance.. i guess i am not that \nfar off.\n\nMy question is if there is a query design that would query multiple \nserver simultaneously.. would that improve the performance?\n\nTo make it clear.. let's say we have 3 db servers. 1 server is just \ndesigned to take the queries while the other 2 server is the ones that \nactually\nholds the data. let's say we have a query of 'select * from \ncustomer_data' and we change it to\nselect * from\n(\ndblink('db1','select * from customer_data where timestamp between \ntimestamp \\'01-01-2004\\' and timestamp \\'06-30-2004\\'')\nunion\ndblink('db2','select * from customer_data where timestamp between \ntimestamp \\'01-07-2004\\' and timestamp \\'12-31-2004\\'')\n)\n\nWould the subquery above be done simultaneously by postgres before doing\n\nthe end query? or would it just execute one at a time?\n\nIf it does execute simultaneously.. it's possible to create code to \nconvert normal queries to distributed queries and requesting data from \nmultiple\ndatabase to improve performance. This would be advantageous for large \namount of data.\n\nThanks,\n\nHasnul\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Mon, 13 Dec 2004 23:07:27 -0500", "msg_from": "\"Spiegelberg, Greg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trying to create multi db query in one large querie" }, { "msg_contents": "Spiegelberg, Greg wrote:\n> \n> My experience with dblink() is that each dblink() is executed serially\n\nCorrect.\n\nIf you really want to do multiple queries simultaneously, you would need \nto write a function very similar to dblink_record, but using asynchonous \nlibpq calls to both remote hosts. See:\n http://www.postgresql.org/docs/current/static/libpq-async.html\n\nHTH,\n\nJoe\n", "msg_date": "Mon, 13 Dec 2004 22:59:37 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to create multi db query in one large querie" } ]
[ { "msg_contents": "Are there any tricks to speeding up pg_dump aside from doing them from a\nreplicated machine?\n\nI'm using -Fc with no compression.\n\n-- \n\n", "msg_date": "Tue, 14 Dec 2004 12:36:46 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Speeding up pg_dump" }, { "msg_contents": "On Tue, 2004-12-14 at 17:36, Rod Taylor wrote:\n> Are there any tricks to speeding up pg_dump aside from doing them from a\n> replicated machine?\n> \n> I'm using -Fc with no compression.\n\nRun a separate pg_dump for larger tables and run them concurrently so\nyou use more cpu and disk resources.\n\nThe lower compression levels are fast and nearly as good (in my testing)\nas full compression. Using compression tends to use up the CPU that\nwould otherwise be wasted since the pg_dump is disk intensive, and then\nsaves further I/O by reducing the output file size.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 14 Dec 2004 23:11:27 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up pg_dump" } ]
[ { "msg_contents": "Hi all, \n \nCan someone please help me optimize this query? Is there a better way to write this query? I am generating a report of transactions ordered by time and with details of the sender and receiver etc.\n \nSELECT distinct a.time::date ||'<br>'||substring(a.time::time::text,1,8) as Time,\nCASE WHEN a.what = 0 THEN 'Money Transfer' WHEN a.what = 15 THEN 'Purchase' WHEN a.what = 26 THEN 'Merchant Streamline' WHEN a.what = 13 THEN 'Reversal' END as Transaction_Type ,\nc1.account_no as SenderAccount, c2.account_no as RecieverAccount, \nb.country as SenderCountry, d.country as RecieverCountry,\nb.firstname as SenderFirstName, b.lastname as SenderLastName, \nd.firstname as ReceiverFirstName, d.lastname as ReceiverLastName, \na.status as status,\n(select sum(td.amount * 0.01) from transaction_data td where td.data_id = a2.id and td.dir = 1 and td.uid = a.target_uid) as ReversedAmount,\n(select sum(td.amount * 0.01) from transaction_data td where td.data_id = a2.id and td.dir = 0 and td.uid = a.uid ) as DepositedAmount, a.flags, (a.amount * 0.01) as Amount,\n(a.fee * 0.01) as Fee \nFROM data a, customerdata b, customerdata d, customer c1, customer c2, participant p, data a2 \nWHERE p.id = a.partner_id AND (a.uid = c1.id) AND (a.target_uid = c2.id) and c1.id=b.uid and c2.id=d.uid\nand a.confirmation is not null AND (a2.ref_id = a.id) and \n((a2.what = 13) or (a2.what = 17) ) ORDER BY time desc ;\n \n \n QUERY PLAN \n \n-------------------------------------------------------------------------------------------------------------\n Unique (cost=2978.27..2981.54 rows=8 width=150) (actual time=502.29..506.75 rows=382 loops=1)\n -> Sort (cost=2978.27..2978.46 rows=77 width=150) (actual time=502.29..502.61 rows=461 loops=1)\n Sort Key: ((((a.\"time\")::date)::text || '<br>'::text) || \"substring\"(((a.\"time\")::time without time zone)::text, 1, 8)), CASE WHEN (a\n.what = 0) THEN 'Money Transfer'::text WHEN (a.what = 15) THEN 'Purchase'::text WHEN (a.what = 26) THEN 'Merchant Streamline'::text WHEN (a.wh\nat = 13) THEN 'Reversal'::text ELSE NULL::text END, c1.account_no, c2.account_no, b.country, d.country, b.firstname, b.lastname, d.firstname, \nd.lastname, a.status, (subplan), (subplan), a.flags, ((a.amount)::numeric * 0.01), ((a.fee)::numeric * 0.01)\n -> Hash Join (cost=2687.00..2975.86 rows=77 width=150) (actual time=423.91..493.48 rows=461 loops=1)\n Hash Cond: (\"outer\".partner_id = \"inner\".id)\n -> Nested Loop (cost=2494.67..2781.99 rows=77 width=146) (actual time=413.19..441.61 rows=472 loops=1)\n -> Merge Join (cost=2494.67..2526.04 rows=77 width=116) (actual time=413.09..429.86 rows=472 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".ref_id)\n -> Sort (cost=1443.39..1458.57 rows=6069 width=108) (actual time=370.14..377.72 rows=5604 loops=1)\n Sort Key: a.id\n -> Hash Join (cost=203.50..1062.01 rows=6069 width=108) (actual time=20.35..335.44 rows=5604 loops=1)\n Hash Cond: (\"outer\".uid = \"inner\".id)\n -> Merge Join (cost=0.00..676.43 rows=6069 width=91) (actual time=0.42..255.33 rows=5611 loops=1)\n Merge Cond: (\"outer\".target_uid = \"inner\".uid)\n -> Merge Join (cost=0.00..1224.05 rows=6069 width=61) (actual time=0.34..156.74 rows=5611 loops\n=1)\n Merge Cond: (\"outer\".target_uid = \"inner\".id)\n -> Index Scan using data_target_uid on data a (cost=0.00..2263.05 rows=6069 width=44) (ac\ntual time=0.23..63.87 rows=5630 loops=1)\n Filter: (confirmation IS NOT NULL)\n -> Index Scan using customer_pkey on customer c2 (cost=0.00..631.03 rows=6120 width=17) 
(\nactual time=0.05..50.97 rows=10862 loops=1)\n -> Index Scan using customerdata_uid_idx on customerdata d (cost=0.00..312.36 rows=6085 width=3\n0) (actual time=0.06..48.95 rows=10822 loops=1)\n -> Hash (cost=188.20..188.20 rows=6120 width=17) (actual time=19.81..19.81 rows=0 loops=1)\n -> Seq Scan on customer c1 (cost=0.00..188.20 rows=6120 width=17) (actual time=0.03..12.30 rows\n=6157 loops=1)\n -> Sort (cost=1051.28..1052.52 rows=497 width=8) (actual time=42.05..42.51 rows=542 loops=1)\n Sort Key: a2.ref_id\n -> Seq Scan on data a2 (cost=0.00..1029.00 rows=497 width=8) (actual time=0.21..41.14 rows=545 loops=1)\n Filter: ((what = 13) OR (what = 17))\n -> Index Scan using customerdata_uid_idx on customerdata b (cost=0.00..3.31 rows=1 width=30) (actual time=0.01..0.01 ro\nws=1 loops=472)\n Index Cond: (b.uid = \"outer\".uid)\n -> Hash (cost=192.26..192.26 rows=26 width=4) (actual time=10.50..10.50 rows=0 loops=1)\n -> Seq Scan on participant p (cost=0.00..192.26 rows=26 width=4) (actual time=10.42..10.46 rows=26 loops=1)\n SubPlan\n -> Aggregate (cost=6.08..6.08 rows=1 width=4) (actual time=0.03..0.03 rows=1 loops=461)\n -> Index Scan using td_data_id_idx on transaction_data td (cost=0.00..6.08 rows=1 width=4) (actual time=0.02..0.02 ro\nws=1 loops=461)\n Index Cond: (data_id = $0)\n Filter: ((dir = 1) AND (uid = $1))\n -> Aggregate (cost=6.08..6.08 rows=1 width=4) (actual time=0.02..0.02 rows=1 loops=461)\n -> Index Scan using td_data_id_idx on transaction_data td (cost=0.00..6.08 rows=1 width=4) (actual time=0.01..0.01 ro\nws=1 loops=461)\n Index Cond: (data_id = $0)\n Filter: ((dir = 0) AND (uid = $2))\n Total runtime: 508.27 msec\n(40 rows)\nTime: 528.13 ms\n\nPlease help me out.\nThanks in advance!\nSaranya\n \n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Yahoo! Mail - Find what you need with new enhanced search. Learn more.\nHi all, \n \nCan someone please help me optimize this query? Is there a better way to write this query? 
I am generating a report of transactions ordered by time and with details of the sender and receiver etc.", "msg_date": "Tue, 14 Dec 2004 13:34:07 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Query Optimization" }, { "msg_contents": "sarlav kumar wrote:\n> Hi all,\n> \n> Can someone please help me optimize this query? Is there a better way to \n> write this query? 
I am generating a report of transactions ordered by \n> time and with details of the sender and receiver etc.\n> \n> SELECT distinct a.time::date ||'<br>'||substring(a.time::time::text,1,8) \n> as Time,\n> CASE WHEN a.what = 0 THEN 'Money Transfer' WHEN a.what = 15 THEN \n> 'Purchase' WHEN a.what = 26 THEN 'Merchant Streamline' WHEN a.what = 13 \n> THEN 'Reversal' END as Transaction_Type ,\n> c1.account_no as SenderAccount, c2.account_no as RecieverAccount,\n> b.country as SenderCountry, d.country as RecieverCountry,\n> b.firstname as SenderFirstName, b.lastname as SenderLastName,\n> d.firstname as ReceiverFirstName, d.lastname as ReceiverLastName,\n> a.status as status,\n> (select sum(td.amount * 0.01) from transaction_data td where td.data_id \n> = a2.id and td.dir = 1 and td.uid = a.target_uid) as ReversedAmount,\n> (select sum(td.amount * 0.01) from transaction_data td where td.data_id \n> = a2.id and td.dir = 0 and td.uid = a.uid ) as DepositedAmount, a.flags, \n> (a.amount * 0.01) as Amount,\n> (a.fee * 0.01) as Fee\n> FROM data a, customerdata b, customerdata d, customer c1, customer c2 , \n> participant p, data a2\n> WHERE p.id = a.partner_id AND (a.uid = c1.id) AND (a.target_uid = c2.id) \n> and c1.id=b.uid and c2.id=d.uid\n> and a.confirmation is not null AND (a2.ref_id = a.id) and\n> ((a2.what = 13) or (a2.what = 17) ) ORDER BY time desc ;\n(query plan followed)\n\nThe expensive operation is the UNIQUE. Are you sure, in terms of \nbusiness logic, that this is necessary? Is it actually possible to have \nduplicate transactions at the exact same time, and if so, would you \nreally want to eliminate them?\n\nAs an aside, I prefer to have numeric constants like the 'what' field in \na small lookup table of two columns (what_code, what_description); it's \neasier to extend and to document.", "msg_date": "Wed, 15 Dec 2004 10:05:49 -0800", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query Optimization" } ]
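A sketch of the lookup-table idea mentioned in the reply above, using the codes from the original CASE expression; the table and column names here (transaction_type, what_code, what_description) are made up for illustration.

    -- Hypothetical lookup table replacing the hard-coded CASE on a.what
    CREATE TABLE transaction_type (
        what_code        integer PRIMARY KEY,
        what_description text NOT NULL
    );
    INSERT INTO transaction_type VALUES (0,  'Money Transfer');
    INSERT INTO transaction_type VALUES (13, 'Reversal');
    INSERT INTO transaction_type VALUES (15, 'Purchase');
    INSERT INTO transaction_type VALUES (26, 'Merchant Streamline');

    -- The report query could then join this table and select
    -- transaction_type.what_description instead of the CASE expression.

Adding a new transaction code then means inserting one row instead of editing every query that decodes a.what.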
[ { "msg_contents": "Hi All,\n \nI would like to write the output of the \\d command on all tables in a database to an output file. There are more than 200 tables in the database. I am aware of \\o command to write the output to a file. But, it will be tough to do the \\d for each table manually and write the output to a file. Is there a command/ way in which I can achieve this without having to do it for each table?\nAny help in this regard would be really appreciated.\n \nThanks,\nSaranya\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nHi All,\n \nI would like to write the output of the \\d command on all tables in a database to an output file. There are more than 200 tables in the database. I am aware of \\o command to write the output to a file. But, it will be tough to do the \\d for each table manually and write the output to a file. Is there a command/ way in which I can achieve this without having to do it for each table?\nAny help in this regard would be really appreciated.\n \nThanks,\nSaranya__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Wed, 15 Dec 2004 06:38:22 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "\\d output to a file" }, { "msg_contents": "am 15.12.2004, um 6:38:22 -0800 mailte sarlav kumar folgendes:\n> Hi All,\n> \n> I would like to write the output of the \\d command on all tables in a database to an output file. There are more than 200 tables in the database. I am aware of \\o command to write the output to a file. But, it will be tough to do the \\d for each table manually and write the output to a file. Is there a command/ way in which I can achieve this without having to do it for each table?\n> Any help in this regard would be really appreciated.\n\nYou can write a little shell-script to list all tables via \\d and parse\nthe output to generate for each table a '\\d table'.\n\n\nAndreas\n-- \nAndreas Kretschmer (Kontakt: siehe Header)\n Tel. NL Heynitz: 035242/47212\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe === \n", "msg_date": "Wed, 15 Dec 2004 16:04:08 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [despammed] \\d output to a file" }, { "msg_contents": "On Wed, 15 Dec 2004 06:38:22 -0800 (PST), sarlav kumar\n<[email protected]> wrote:\n> Hi All, \n> \n> I would like to write the output of the \\d command on all tables in a\n> database to an output file. There are more than 200 tables in the database.\n> I am aware of \\o command to write the output to a file. But, it will be\n> tough to do the \\d for each table manually and write the output to a file.\n> Is there a command/ way in which I can achieve this without having to do it\n> for each table? \n> Any help in this regard would be really appreciated. \n> \n> Thanks, \n> Saranya\n> \n> \n\nTry something like:\n\npsql -c \"\\d *\" >listing.txt\n", "msg_date": "Wed, 15 Dec 2004 15:12:17 +0000", "msg_from": "Gary Cowell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] \\d output to a file" }, { "msg_contents": "sarlav kumar wrote:\n> Hi All,\n> \n> I would like to write the output of the \\d command on all tables in a\n> database to an output file. There are more than 200 tables in the\n> database. 
I am aware of \\o command to write the output to a file.\n> But, it will be tough to do the \\d for each table manually and write\n> the output to a file. Is there a command/ way in which I can achieve\n> this without having to do it for each table? Any help in this regard\n> would be really appreciated.\n\nWhat is the OS? On any UNIX variant you can do:\n\necho '\\d' | psql > outputfile\n\nBut this will get you the system tables as well I think.\n\nAlternately you could do something like:\n\nfor table in $(<listoftables); do\n\techo '\\d' | psql\ndone > outputfile\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Wed, 15 Dec 2004 10:17:10 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \\d output to a file" }, { "msg_contents": "Andreas Kretschmer wrote:\n> am 15.12.2004, um 6:38:22 -0800 mailte sarlav kumar folgendes:\n> \n>>Hi All,\n>> \n>>I would like to write the output of the \\d command on all tables in a database to an output file. There are more than 200 tables in the database. I am aware of \\o command to write the output to a file. But, it will be tough to do the \\d for each table manually and write the output to a file. Is there a command/ way in which I can achieve this without having to do it for each table?\n>>Any help in this regard would be really appreciated.\n> \n> \n> You can write a little shell-script to list all tables via \\d and parse\n> the output to generate for each table a '\\d table'.\n\nOr:\n\nfor table in $(<filethatcontainsalistofthetables); do\n\n\techo \"\\d $table\" | psql $DATABASE > ${table}.out\ndone\n\n-- \nUntil later, Geoffrey\n", "msg_date": "Wed, 15 Dec 2004 10:23:54 -0500", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [despammed] \\d output to a file" }, { "msg_contents": "...and on Wed, Dec 15, 2004 at 06:38:22AM -0800, sarlav kumar used the keyboard:\n> Hi All,\n> \n> I would like to write the output of the \\d command on all tables in a database to an output file. There are more than 200 tables in the database. I am aware of \\o command to write the output to a file. But, it will be tough to do the \\d for each table manually and write the output to a file. Is there a command/ way in which I can achieve this without having to do it for each table?\n> Any help in this regard would be really appreciated.\n> \n\nHello Sarlav.\n\nYou don't say which platform you're doing this on. If it's Windows, someone\nelse will have to advise you; if it's a UNIX-like platform though, the\nfollowing simple shell script should be helpful in achieving what you want:\n\n---CUT-HERE---\n#!/bin/bash\nif [ -z \"$1\" ]; then\n echo \"Please specify a database to query.\"\n exit 1\nfi\nDATABASE=$1\nMYTABLES=\"`echo '\\t\\a\\dt' | psql -q ${DATABASE} | cut -f 2 -d '|'`\"\n\nfor table in ${MYTABLES}; do\n echo '\\d '${table}\ndone | psql ${DATABASE}\n---CUT-HERE---\n\nYou can store this script into a file called, for example, describe.sh and\ninvoke it like so:\n\n $ ./describe.sh mydatabase > description.txt\n\nIt should then do what you want.\n\nShould you have additional arguments to specify to psql, such as a host,\na username, a password and so on, it is easy to modify the script to do\nthat. 
Just supply those arguments in places where the \"psql\" command is\nused.\n\nHope this helped,\n-- \n Grega Bremec\n gregab at p0f dot net", "msg_date": "Wed, 15 Dec 2004 16:35:58 +0100", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] \\d output to a file" }, { "msg_contents": "am Wed, dem 15.12.2004, um 10:23:54 -0500 mailte Geoffrey folgendes:\n> >>I would like to write the output of the \\d command on all tables in a \n> >>database to an output file. There are more than 200 tables in the \n> >>database. I am aware of \\o command to write the output to a file. But, it \n> >>will be tough to do the \\d for each table manually and write the output \n> >>to a file. Is there a command/ way in which I can achieve this without \n> >>having to do it for each table?\n> >\n> >You can write a little shell-script to list all tables via \\d and parse\n> >the output to generate for each table a '\\d table'.\n> \n> Or:\n> \n> for table in $(<filethatcontainsalistofthetables); do\n\nYes, but you need the file called 'filethatcontainsalistofthetables' ;-)\n\necho \"\\d\" | psql test_db | awk 'BEGIN{FS=\"|\"}{if($3 ~ \"Tabelle\") {print \"\\d\" $2}}' | psql test_db\n\nIt works, if the database named 'test_db' and if the output from \\d in\nthe 3th row is 'Tabelle'.\n\n\nAndreas\n-- \nDiese Message wurde erstellt mit freundlicher Unterst�tzung eines freilau-\nfenden Pinguins aus artgerechter Freilandhaltung. Er ist garantiert frei\nvon Micro$oft'schen Viren. (#97922 http://counter.li.org) GPG 7F4584DA\nWas, Sie wissen nicht, wo Kaufbach ist? Hier: N 51.05082�, E 13.56889� ;-)\n", "msg_date": "Wed, 15 Dec 2004 17:21:57 +0100", "msg_from": "Kretschmer Andreas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [despammed] \\d output to a file" }, { "msg_contents": "Geoffrey <[email protected]> writes:\n> sarlav kumar wrote:\n>> I would like to write the output of the \\d command on all tables in a\n>> database to an output file.\n\n> What is the OS? 
On any UNIX variant you can do:\n> echo '\\d' | psql > outputfile\n\nOr use \\o:\n\nregression=# \\o zzz1\nregression=# \\d\nregression=# \\o\nregression=# \\d\n List of relations\n Schema | Name | Type | Owner\n--------+---------------+-------+----------\n public | pg_ts_cfg | table | postgres\n public | pg_ts_cfgmap | table | postgres\n public | pg_ts_dict | table | postgres\n public | pg_ts_parser | table | postgres\n public | t_test | table | postgres\n public | test_tsvector | table | postgres\n(6 rows)\n\nregression=# \\q\n$ cat zzz1\n List of relations\n Schema | Name | Type | Owner\n--------+---------------+-------+----------\n public | pg_ts_cfg | table | postgres\n public | pg_ts_cfgmap | table | postgres\n public | pg_ts_dict | table | postgres\n public | pg_ts_parser | table | postgres\n public | t_test | table | postgres\n public | test_tsvector | table | postgres\n(6 rows)\n\n$\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Dec 2004 11:50:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \\d output to a file " }, { "msg_contents": "am Wed, dem 15.12.2004, um 11:50:58 -0500 mailte Tom Lane folgendes:\n> Geoffrey <[email protected]> writes:\n> > sarlav kumar wrote:\n> >> I would like to write the output of the \\d command on all tables in a\n> >> database to an output file.\n\nI remember: '\\d command on all tables'\n\n\nAnd you wrote:\n\n> \n> regression=# \\q\n> $ cat zzz1\n> List of relations\n> Schema | Name | Type | Owner\n> --------+---------------+-------+----------\n> public | pg_ts_cfg | table | postgres\n> public | pg_ts_cfgmap | table | postgres\n\nSorry, but i think, this isn't the correct answer...\n\n\nAndreas, leaning PostgreSQL and english...\n-- \nDiese Message wurde erstellt mit freundlicher Unterst�tzung eines freilau-\nfenden Pinguins aus artgerechter Freilandhaltung. Er ist garantiert frei\nvon Micro$oft'schen Viren. (#97922 http://counter.li.org) GPG 7F4584DA\nWas, Sie wissen nicht, wo Kaufbach ist? Hier: N 51.05082�, E 13.56889� ;-)\n", "msg_date": "Wed, 15 Dec 2004 19:04:17 +0100", "msg_from": "Kretschmer Andreas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \\d output to a file" }, { "msg_contents": "On Wed, 2004-12-15 at 11:50 -0500, Tom Lane wrote:\n> Geoffrey <[email protected]> writes:\n> > sarlav kumar wrote:\n> >> I would like to write the output of the \\d command on all tables in a\n> >> database to an output file.\n> \n> > What is the OS? On any UNIX variant you can do:\n> > echo '\\d' | psql > outputfile\n> \n> Or use \\o:\n> \n> regression=# \\o zzz1\n> regression=# \\d\nor:\n =# \\d *\nto get all tables as th OP wanted\n\n> regression=# \\o\n\ngnari\n\n\n", "msg_date": "Wed, 15 Dec 2004 18:23:43 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] \\d output to a file" } ]
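Another way to drive the per-table loop discussed above, without parsing \d output, is to read the table names from the pg_tables view; a rough sketch (mydatabase is a placeholder, and any connection options would need to be added to each psql call):

    #!/bin/sh
    DB=mydatabase
    # -A = unaligned output, -t = tuples only: one bare table name per line
    for table in $(psql -At -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'" $DB); do
        psql -c "\d $table" $DB
    done > description.txt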
[ { "msg_contents": "I have written a program that parses a syslog file, reading all the postgres\ntransactions. I would like to know if there is a way for postgres to log\nalso the specific database the sql statement originated from. \n\nThe only options available in the postgresql.conf are:\n#log_connections = false\n#log_duration = false\n#log_pid = false\n#log_statement = false\n#log_timestamp = false\n#log_hostname = false\n#log_source_port = false\n\nIs this possible? Or is there a smart work around.\n\nRegards,\n\tTheo\n\n\n\n\n\n______________________________________________________________________\nThis email, including attachments, is intended only for the addressee\nand may be confidential, privileged and subject to copyright. If you\nhave received this email in error, please advise the sender and delete\nit. If you are not the intended recipient of this email, you must not\nuse, copy or disclose its content to anyone. You must not copy or \ncommunicate to others content that is confidential or subject to \ncopyright, unless you have the consent of the content owner.\n\n\n\n\nindentifying the database in a Postgres log file.\n\n\nI have written a program that parses a syslog file, reading all the postgres transactions. I would like to know if there is a way for postgres to log also the specific database the sql statement originated from. \nThe only options available in the postgresql.conf are:\n#log_connections = false\n#log_duration = false\n#log_pid = false\n#log_statement = false\n#log_timestamp = false\n#log_hostname = false\n#log_source_port = false\n\nIs this possible? Or is there a smart work around.\n\nRegards,\n        Theo", "msg_date": "Thu, 16 Dec 2004 10:48:06 +1100", "msg_from": "Theo Galanakis <[email protected]>", "msg_from_op": true, "msg_subject": "indentifying the database in a Postgres log file." }, { "msg_contents": "Theo Galanakis wrote:\n> \n> I have written a program that parses a syslog file, reading all the postgres\n> transactions. I would like to know if there is a way for postgres to log\n> also the specific database the sql statement originated from. \n> \n> The only options available in the postgresql.conf are:\n> #log_connections = false\n> #log_duration = false\n> #log_pid = false\n> #log_statement = false\n> #log_timestamp = false\n> #log_hostname = false\n> #log_source_port = false\n> \n> Is this possible? Or is there a smart work around.\n\nIn pre-8.0 the only way to do it is to log connections, grab the\ndatabase from there, and add the pid to join all log rows back to the\nserver row. In 8.0 we have log_line_prefix that can display all\ninformation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 15 Dec 2004 19:09:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indentifying the database in a Postgres log file." } ]
[ { "msg_contents": "I'm trying to port our application from MS-SQL to Postgres. We have\nimplemented all of our rather complicated application security in the\ndatabase. The query that follows takes a half of a second or less on\nMS-SQL server and around 5 seconds on Postgres. My concern is that this\ndata set is rather \"small\" by our applications standards. It is not\nunusual for the da_answer table to have 2-4 million records. I'm\nworried that if this very small data set is taking 5 seconds, then a\n\"regular sized\" data set will take far too long.\n\nI originally thought the NOT EXISTS on the\n\"da_data_restrict_except_open\" table was killing performance, but the\nquery took the exact same amount of time after I deleted all rows from\nthis table. Note that the hard-coded 999999999.0, and 4000 parameters,\nas well as the parameter to svp_getparentproviders are the three\nvariables that change from one run of this query to the next.\n\nI'm using Postgres 7.4.5 as packaged in Debian. shared_buffers is set\nto 57344 and sort_mem=4096.\n\nThe machine has an AMD 1.8+ and ` gig of RAM. Here are some relevant\nperformance statistics:\nrichard:/usr/share/cups/model# cat /proc/sys/kernel/shmmax\n536870912\nrichard:/usr/share/cups/model# cat /proc/sys/kernel/shmall\n536870912\nrichard:/home/richard# hdparm -tT /dev/hda\n Timing cached reads: 1112 MB in 2.00 seconds = 556.00 MB/sec\n Timing buffered disk reads: 176 MB in 3.02 seconds = 58.28 MB/sec\n\nI have included an EXPLAIN ANALYZE, relevant table counts, and relevant\nindexing information. If anyone has any suggestions on how to improve\nperformance.... TIA!\n\nSELECT tab.answer_id, client_id, question_id, recordset_id,\ndate_effective, virt_field_name\nFROM\n(\n SELECT a.uid AS answer_id, a.client_id, a.question_id, recordset_id,\ndate_effective\n FROM da_answer a\n WHERE a.date_effective <= 9999999999.0\n AND a.inactive != 1\n AND\n (\n 5000 = 4000 \n OR\n (EXISTS (SELECT * FROM svp_getparentproviderids(1) WHERE\nsvp_getparentproviderids = a.provider_id))\n )\n UNION\n SELECT a.uid AS answer_id, a.client_id, a.question_id, recordset_id,\ndate_effective\n FROM da_answer a,\n ( \n SELECT main_id \n FROM da_data_restrict\n WHERE type_id = 2 \n AND (provider_id IN (SELECT * FROM svp_getparentproviderids(1)))\n \n UNION\n \n SELECT sa.uid AS main_id \n FROM da_answer sa\n JOIN da_data_restrict_except_closed dr ON dr.main_id =\nsa.uid AND dr.type_id = 2 AND dr.except_provider_id = 1\n WHERE (restricted = 1) \n AND (restricted_closed_except = 1) \n AND sa.covered_by_roi = 1\n UNION\n SELECT sa.uid AS main_id \n FROM da_answer sa\n WHERE (restricted = 0) \n AND (restricted_open_except = 1) \n AND (NOT EXISTS (SELECT dr.main_id FROM\nda_data_restrict_except_open dr WHERE (dr.main_id = sa.uid) AND\n(dr.type_id = 2) AND (dr.except_provider_id in (select * from\nsvp_getparentproviderids(1)))))\n AND sa.covered_by_roi = 1\n UNION\n SELECT sa.uid AS main_id FROM da_answer sa WHERE (restricted\n= 0) AND (restricted_open_except = 0)\n AND sa.covered_by_roi = 1\n ) sec\n WHERE a.covered_by_roi = 1\n AND a.date_effective <= 9999999999.0\n AND a.inactive != 1\n AND a.uid = sec.main_id\n AND 5000 > 4000\n) tab, da_question q\nWHERE tab.question_id = q.uid AND (min_access_level <= 4000 OR\nmin_access_level IS NULL)\n\nTable counts from relevant tables\nda_question 1095\nda_answer 21117\nda_question 1095\nda_data_restrict_except_closed 3087\nda_data_restrict_except_open 13391\nsvp_getparentproviderids(1) 1\n\nRelevant Index\ncreate index 
in_da_data_restrict_provider_id on\nda_data_restrict(provider_id);\ncreate index in_da_data_restrict_main_id on da_data_restrict(main_id);\ncreate index in_da_data_restrict_type_id on da_data_restrict(type_id);\ncreate index in_da_data_restrict_client_id on\nda_data_restrict(client_id);\ncreate index in_da_dr_type_provider on\nda_data_restrict(type_id,provider_id);\n\ncreate index in_da_data_rec_provider_id ON\nda_data_restrict_except_closed(provider_id);\ncreate index in_da_data_rec_type_id ON\nda_data_restrict_except_closed(type_id);\ncreate index in_da_data_rec_main_id ON\nda_data_restrict_except_closed(main_id);\ncreate index in_da_data_rec_except_provider_id ON\nda_data_restrict_except_closed(except_provider_id);\n\ncreate index in_da_data_reo_provider_id ON\nda_data_restrict_except_open(provider_id);\ncreate index in_da_data_reo_type_id ON\nda_data_restrict_except_open(type_id);\ncreate index in_da_data_reo_main_id ON\nda_data_restrict_except_open(main_id);\ncreate index in_da_data_reo_except_provider_id ON\nda_data_restrict_except_open(except_provider_id);\n\ncreate index in_da_answer_client_id ON da_answer(client_id);\ncreate index in_da_answer_provider_id ON da_answer(provider_id);\ncreate index in_da_answer_question_id ON da_answer(question_id);\ncreate index in_da_answer_recordset_id ON da_answer(recordset_id);\ncreate index in_da_answer_restricted ON da_answer(restricted);\ncreate index in_da_answer_restricted_open_except ON\nda_answer(restricted_open_except);\ncreate index in_da_answer_restricted_closed_except ON\nda_answer(restricted_closed_except);\ncreate index in_da_answer_date_effective ON da_answer(date_effective);\ncreate index in_da_answer_inactive ON da_answer(inactive);\ncreate index in_da_answer_covered_by_roi ON da_answer(covered_by_roi);\n\ncreate index in_da_ed_inactive_roi ON da_answer(date_effective,inactive,\ncovered_by_roi);\n\ncreate index in_da_question_mal ON da_question(min_access_level);", "msg_date": "Thu, 16 Dec 2004 10:11:07 -0600", "msg_from": "Richard Rowell <[email protected]>", "msg_from_op": true, "msg_subject": "Improve performance of query" }, { "msg_contents": "The first thing to check... Did you do a recent VACUUM ANALYZE? This \nupdates all the statistics. There are a number of places where it says \n\"rows=1000\" which is usually the \"I have no idea, let me guess 1000\". \nAlso, there are a number of places where the estimates are pretty far \noff. For instance:\n\nRichard Rowell wrote:\n\n>-> Subquery Scan \"*SELECT* 1\" (cost=0.00..64034.15 rows=10540 width=24) (actual time=279.089..4419.371 rows=161 loops=1)\n> \n>\nestimating 10,000 when only 161 is a little bit different.\n\n> -> Seq Scan on da_answer a (cost=0.00..63928.75 rows=10540 width=24) (actual time=279.080..4418.808 rows=161 loops=1)\n> Filter: ((date_effective <= 9999999999::double precision) AND (inactive <> 1) AND (subplan))\n> \n>\nThough this could be a lack of cross-column statistics. If 2 columns are \ncorrelated, the planner isn't as accurate as it could be. Also, \ndate_effective <= 9999999999 doesn't seem very restrictive, could you \nuse a between statement? (date between 0 and 9999999). 
I know for \ntimestamps usually giving a between is better than a single sided query.\n\nThis one was underestimated.\n\n>-> Subquery Scan \"*SELECT* 2\" (cost=988627.58..989175.52 rows=2799 width=24) (actual time=290.730..417.720 rows=7556 loops=1)\n> -> Hash Join (cost=988627.58..989147.53 rows=2799 width=24) (actual time=290.722..395.739 rows=7556 loops=1)\n> Hash Cond: (\"outer\".main_id = \"inner\".uid)\n> \n>\nThis is one of the ones that looks like it didn't have any ideas. It \ncould be because of the function. You might consider adding a function \nindex, though I think there are some caveats there.\n\n>-> Function Scan on svp_getparentproviderids (cost=0.00..12.50 rows=1000 width=4) (actual time=0.473..0.474 rows=1 loops=1)\n> \n>\nAnother very poor estimation. It might be a need to increase the \nstatistics for this column (ALTER TABLE, ALTER COLUMN, SET STATISTICS). \nIIRC, compared with other db's postgres defaults to a much lower \nstatistics value. Try changing it from 10 (?) to 100 or so. There was a \ndiscussion that every column with an index should use higher statistics.\n\n>-> Index Scan using in_da_dr_type_provider on da_data_restrict (cost=0.00..145.50 rows=46 width=8) (actual time=0.041..26.627 rows=7280 loops=1)\n> \n>\nI'm not a great optimizer, these are just some first things to look at. \nYour sort mem seems pretty low to me (considering you have 1GB of RAM). \nPerhaps you could bump that up to 40MB instead of 4MB. Also, if you run \nthis query twice in a row, is it still slow? (Sometimes it takes a bit \nof work to get the right indexes loaded into ram, but then it is faster.)\n\nJust some guesses,\nJohn\n=:->", "msg_date": "Thu, 16 Dec 2004 10:59:05 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve performance of query" }, { "msg_contents": "* Richard Rowell ([email protected]) wrote:\n> I have included an EXPLAIN ANALYZE, relevant table counts, and relevant\n> indexing information. If anyone has any suggestions on how to improve\n> performance.... TIA!\n\nJust a thought- do the UNION's actually have to be union's or would\nhaving them be 'UNION ALL's work?\n\n\tStephen", "msg_date": "Thu, 16 Dec 2004 12:02:20 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve performance of query" }, { "msg_contents": "Richard Rowell wrote:\n> I'm trying to port our application from MS-SQL to Postgres. We have\n> implemented all of our rather complicated application security in the\n> database. The query that follows takes a half of a second or less on\n> MS-SQL server and around 5 seconds on Postgres. My concern is that this\n> data set is rather \"small\" by our applications standards. It is not\n> unusual for the da_answer table to have 2-4 million records. I'm\n> worried that if this very small data set is taking 5 seconds, then a\n> \"regular sized\" data set will take far too long.\n> \n> I originally thought the NOT EXISTS on the\n> \"da_data_restrict_except_open\" table was killing performance, but the\n> query took the exact same amount of time after I deleted all rows from\n> this table. Note that the hard-coded 999999999.0, and 4000 parameters,\n> as well as the parameter to svp_getparentproviders are the three\n> variables that change from one run of this query to the next.\n> \n> I'm using Postgres 7.4.5 as packaged in Debian. 
shared_buffers is set\n> to 57344 and sort_mem=4096.\n\nThat shared_buffers value sounds too large for 1GB RAM - rewind to 10000 \nsay. Also make sure you've read the \"performance tuning\" article at:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n> I have included an EXPLAIN ANALYZE, relevant table counts, and relevant\n> indexing information. If anyone has any suggestions on how to improve\n> performance.... TIA!\n\nI think it's the function call(s).\n\n> SELECT tab.answer_id, client_id, question_id, recordset_id,\n> date_effective, virt_field_name\n> FROM\n> (\n> SELECT a.uid AS answer_id, a.client_id, a.question_id, recordset_id,\n> date_effective\n> FROM da_answer a\n> WHERE a.date_effective <= 9999999999.0\n> AND a.inactive != 1\n> AND\n> (\n> 5000 = 4000 \n> OR\n> (EXISTS (SELECT * FROM svp_getparentproviderids(1) WHERE\n> svp_getparentproviderids = a.provider_id))\n> )\n...\n>SubPlan\n> -> Function Scan on svp_getparentproviderids (cost=0.00..15.00 rows=5 width=4) (actual time=0.203..0.203 rows=0 loops=21089)\n> Filter: (svp_getparentproviderids = $1)\n\nHere it's running 21,089 loops around your function. Each one isn't \ncosting much, but it's the total that's killing you I think. It might be \npossible to mark the function STABLE or such, depending on what it does \n- see http://www.postgresql.org/docs/7.4/static/sql-createfunction.html\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 16 Dec 2004 17:15:29 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve performance of query" }, { "msg_contents": "Richard Rowell <[email protected]> writes:\n> I'm trying to port our application from MS-SQL to Postgres. We have\n> implemented all of our rather complicated application security in the\n> database. The query that follows takes a half of a second or less on\n> MS-SQL server and around 5 seconds on Postgres.\n\nThe EXPLAIN shows that most of the time is going into repeated\nexecutions of svp_getparentproviderids() in the first UNION arm:\n\n> -> Seq Scan on da_answer a (cost=0.00..63928.75 rows=10540 width=24) (actual time=279.080..4418.808 rows=161 loops=1)\n> Filter: ((date_effective <= 9999999999::double precision) AND (inactive <> 1) AND (subplan))\n> SubPlan\n> -> Function Scan on svp_getparentproviderids (cost=0.00..15.00 rows=5 width=4) (actual time=0.203..0.203 rows=0 loops=21089)\n> Filter: (svp_getparentproviderids = $1)\n\nI'd suggest replacing the EXISTS coding by IN:\n\t(EXISTS (SELECT * FROM svp_getparentproviderids(1) WHERE svp_getparentproviderids = a.provider_id))\nto\n\t(a.provider_id IN (SELECT * FROM svp_getparentproviderids(1)))\nThe latter form is likely to be significantly faster in PG 7.4.\n\nIt's also possible that the speed loss compared to MSSQL is really\ninside the svp_getparentproviderids function; you should look into\nthat rather than assuming this query per se is at fault.\n\nAlso, do you actually need UNION as opposed to UNION ALL? The\nduplicate-elimination behavior of UNION is a bit expensive if not\nneeded. It looks from the EXPLAIN output that some of the unions\naren't actually eliminating any rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Dec 2004 12:19:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve performance of query " }, { "msg_contents": "The first thing to check... Did you do a recent VACUUM ANALYZE? This\nupdates all the statistics. 
There are a number of places where it says\n\"rows=1000\" which is usually the \"I have no idea, let me guess 1000\".\nAlso, there are a number of places where the estimates are pretty far\noff. For instance:\n\nRichard Rowell wrote:\n\n>-> Subquery Scan \"*SELECT* 1\" (cost=0.00..64034.15 rows=10540 width=24) (actual time=279.089..4419.371 rows=161 loops=1)\n> \n>\nestimating 10,000 when only 161 is a little bit different.\n\n> -> Seq Scan on da_answer a (cost=0.00..63928.75 rows=10540 width=24) (actual time=279.080..4418.808 rows=161 loops=1)\n> Filter: ((date_effective <= 9999999999::double precision) AND (inactive <> 1) AND (subplan))\n> \n>\nThough this could be a lack of cross-column statistics. If 2 columns are\ncorrelated, the planner isn't as accurate as it could be. Also,\ndate_effective <= 9999999999 doesn't seem very restrictive, could you\nuse a between statement? (date between 0 and 9999999). I know for\ntimestamps usually giving a between is better than a single sided query.\n\nThis one was underestimated.\n\n>-> Subquery Scan \"*SELECT* 2\" (cost=988627.58..989175.52 rows=2799 width=24) (actual time=290.730..417.720 rows=7556 loops=1)\n> -> Hash Join (cost=988627.58..989147.53 rows=2799 width=24) (actual time=290.722..395.739 rows=7556 loops=1)\n> Hash Cond: (\"outer\".main_id = \"inner\".uid)\n> \n>\nThis is one of the ones that looks like it didn't have any ideas. It\ncould be because of the function. You might consider adding a function\nindex, though I think there are some caveats there.\n\n>-> Function Scan on svp_getparentproviderids (cost=0.00..12.50 rows=1000 width=4) (actual time=0.473..0.474 rows=1 loops=1)\n> \n>\nAnother very poor estimation. It might be a need to increase the\nstatistics for this column (ALTER TABLE, ALTER COLUMN, SET STATISTICS).\nIIRC, compared with other db's postgres defaults to a much lower\nstatistics value. Try changing it from 10 (?) to 100 or so. There was a\ndiscussion that every column with an index should use higher statistics.\n\n>-> Index Scan using in_da_dr_type_provider on da_data_restrict (cost=0.00..145.50 rows=46 width=8) (actual time=0.041..26.627 rows=7280 loops=1)\n> \n>\nI'm not a great optimizer, these are just some first things to look at.\nYour sort mem seems pretty low to me (considering you have 1GB of RAM).\nPerhaps you could bump that up to 40MB instead of 4MB. Also, if you run\nthis query twice in a row, is it still slow? (Sometimes it takes a bit\nof work to get the right indexes loaded into ram, but then it is faster.)\n\nJust some guesses,\nJohn\n=:->", "msg_date": "Thu, 16 Dec 2004 11:24:26 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve performance of query" } ]
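A sketch of the first rewrite suggested above (the correlated EXISTS replaced by IN), applied to the first UNION arm of the original query, with the constant 5000 = 4000 test dropped for clarity; the other arms would be changed the same way, and UNION ALL could replace UNION wherever the arms cannot return duplicate rows.

    -- EXISTS -> IN rewrite of the first arm (the IN form is planned much better in 7.4)
    SELECT a.uid AS answer_id, a.client_id, a.question_id, recordset_id, date_effective
    FROM   da_answer a
    WHERE  a.date_effective <= 9999999999.0
      AND  a.inactive != 1
      AND  a.provider_id IN (SELECT * FROM svp_getparentproviderids(1));

If the function itself turns out to be the real cost, recreating svp_getparentproviderids with the STABLE volatility marker (where its behaviour allows) is the other change suggested in the thread.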
[ { "msg_contents": "I have a table 'Alias' with 541162 rows. It's created as follows:\n\nCREATE TABLE alias\n(\n id int4 NOT NULL,\n person_id int4 NOT NULL,\n last_name varchar(30),\n first_name varchar(30),\n middle_name varchar(30),\n questioned_identity_flag varchar,\n CONSTRAINT alias_pkey PRIMARY KEY (id)\n) \n\nAfter populating the data, (I can provide a data file if necessary)\n I created 2 indexes as follows:\nCREATE INDEX \"PX_Alias\" ON alias USING btree (id);\nALTER TABLE alias CLUSTER ON \"PX_Alias\";\nCREATE INDEX \"IX_Alias_Last_Name\" ON alias USING btree (last_name);\nVACUUM FULL ANALYSE Alias\n\nThen I run a query:\nSELECT * FROM Alias WHERE last_name = 'ANDERSON' \nThis results in a seqscan, rather than an index scan:\n {SEQSCAN\n :startup_cost 0.00 \n :total_cost 11970.53 \n :plan_rows 3608 \n :plan_width 41 \n :targetlist (\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 1 \n :restype 23 \n :restypmod -1 \n :resname id \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 1 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 1 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 2 \n :restype 23 \n :restypmod -1 \n :resname person_id \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 2 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 2 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 2\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 3 \n :restype 1043 \n :restypmod 34 \n :resname last_name \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 3 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 3 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 3\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 4 \n :restype 1043 \n :restypmod 34 \n :resname first_name \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 4 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 4 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 4\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 5 \n :restype 1043 \n :restypmod 34 \n :resname middle_name \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 5 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 5 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 5\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 6 \n :restype 1043 \n :restypmod -1 \n :resname questioned_identity_flag \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 6 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 6 \n :vartype 1043 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 6\n }\n }\n )\n :qual (\n {OPEXPR \n :opno 98 \n :opfuncid 67 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 3 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 3\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 0\n }\n {CONST \n :consttype 25 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 12 [ 12 0 0 0 65 78 68 69 82 83 79 78 ]\n }\n )\n }\n )\n :lefttree <> \n :righttree <> \n :initPlan <> \n :extParam (b)\n :allParam (b)\n :nParamExec 0 \n :scanrelid 1\n }\n\nSeq Scan on alias (cost=0.00..11970.53 rows=3608 width=41) (actual\ntime=0.000..2103.000 rows=4443 loops=1)\n Filter: ((last_name)::text = 'ANDERSON'::text)\nTotal runtime: 2153.000 ms\n\n\nIf 
I:\nSET enable_seqscan TO off;\n\nThen the query takes about 300 milliseconds, and uses the index scan. \nIt seems that the cost estimate is slightly higher for the index scan,\nbut in reality, it is much faster:\n\n\n {INDEXSCAN \n :startup_cost 0.00 \n :total_cost 12148.18 \n :plan_rows 3608 \n :plan_width 41 \n :targetlist (\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 1 \n :restype 23 \n :restypmod -1 \n :resname id \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 1 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 1 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 2 \n :restype 23 \n :restypmod -1 \n :resname person_id \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 2 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 2 \n :vartype 23 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 2\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 3 \n :restype 1043 \n :restypmod 34 \n :resname last_name \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 3 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 3 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 3\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 4 \n :restype 1043 \n :restypmod 34 \n :resname first_name \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 4 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 4 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 4\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 5 \n :restype 1043 \n :restypmod 34 \n :resname middle_name \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 5 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 5 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 5\n }\n }\n {TARGETENTRY\n :resdom\n {RESDOM\n :resno 6 \n :restype 1043 \n :restypmod -1 \n :resname questioned_identity_flag \n :ressortgroupref 0 \n :resorigtbl 2780815 \n :resorigcol 6 \n :resjunk false\n }\n :expr\n {VAR \n :varno 1 \n :varattno 6 \n :vartype 1043 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 6\n }\n }\n )\n :qual <> \n :lefttree <> \n :righttree <> \n :initPlan <> \n :extParam (b)\n :allParam (b)\n :nParamExec 0 \n :scanrelid 1 \n :indxid (o 5117678)\n :indxqual ((\n {OPEXPR \n :opno 98 \n :opfuncid 67 \n :opresulttype 16 \n :opretset false \n :args (\n {VAR \n :varno 1 \n :varattno 1 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 3\n }\n {CONST \n :consttype 25 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 12 [ 12 0 0 0 65 78 68 69 82 83 79 78 ]\n }\n )\n }\n ))\n :indxqualorig ((\n {OPEXPR \n :opno 98 \n :opfuncid 67 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 3 \n :vartype 1043 \n :vartypmod 34 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 3\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 0\n }\n {CONST \n :consttype 25 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 12 [ 12 0 0 0 65 78 68 69 82 83 79 78 ]\n }\n )\n }\n ))\n :indxstrategy ((i 3))\n :indxsubtype ((o 0))\n :indxlossy ((i 0))\n :indxorderdir 1\n }\n\nIndex Scan using \"IX_Alias_Last_Name\" on alias (cost=0.00..12148.18\nrows=3608 width=41) (actual time=0.000..200.000 rows=4443 loops=1)\n Index Cond: ((last_name)::text = 'ANDERSON'::text)\nTotal runtime: 
220.000 ms\n\nDropping the index and cluster on the id doesn't make any difference.\n\nAccording to the pg_stats table, 'ANDERSON' is one of the most\nfrequent values; howerver, querying by another 'JACKSON', will use the\nindex scan.\n\nAny hints on what to do to make PostgreSQL use the index? This seems\nlike a fairly simple case, isn't it? (I'm using 8.0-rc1 on windows.)\n", "msg_date": "Thu, 16 Dec 2004 11:08:20 -0600", "msg_from": "Jon Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Seqscan rather than Index" }, { "msg_contents": "Jon Anderson <[email protected]> writes:\n> Any hints on what to do to make PostgreSQL use the index?\n\nYou might want to reduce random_page_cost a little.\n\nKeep in mind that your test case is small enough to fit in RAM and is\nprobably not reflective of what will happen with larger tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Dec 2004 12:31:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index " } ]
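Two quick experiments: the random_page_cost reduction suggested above, and the usual statistics-target knob (not mentioned in this thread, but a common next step), using the table from the report. Both can be tried in a single session before changing anything globally.

    -- 1. A slightly lower random_page_cost, for this session only
    SET random_page_cost = 2;
    EXPLAIN ANALYZE SELECT * FROM alias WHERE last_name = 'ANDERSON';

    -- 2. Finer-grained statistics on the filtered column, then re-test
    ALTER TABLE alias ALTER COLUMN last_name SET STATISTICS 250;
    ANALYZE alias;
    EXPLAIN ANALYZE SELECT * FROM alias WHERE last_name = 'ANDERSON';

The value 250 is just an example of a larger-than-default statistics target; as noted in the reply, the small test table fits in RAM, so settings tuned on it may not carry over to larger tables.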
[ { "msg_contents": "Greetings,\n\nWhy does the append resulting from a inheritance take longer than one \nresulting from UNION ALL?\n\nsummary:\nAppend resulting from inheritance:\n-> Append (cost=0.00..17.43 rows=2 width=72) (actual \ntime=3.876..245.320 rows=28 loops=1)\nAppend resulting from UNION ALL:\n-> Append (cost=0.00..17.45 rows=2 width=72) (actual \ntime=3.730..81.465 rows=28 loops=1)\n\nin the case below both f_f_all_base and for_f_all_new are clustered on \nthe index based (group_id, group_forum_id) they were vacuum analyzed \nbefore the test below.\n\nperftestdb=# \\d f_f_all_base\n Table \"public.f_f_all_base\"\n Column | Type | Modifiers\n----------------+----------+---------------------------\n msg_id | integer | not null\n group_id | integer | default 0\n group_forum_id | integer | not null default 0\n subject | text | not null default ''::text\n date | integer | not null default 0\n user_name | text | not null default ''::text\n all_tidx | tsvector | not null\nIndexes:\n \"forftiallb_pk_1102715767\" primary key, btree (msg_id)\n \"fftiallbgfid_1102715649\" btree (group_forum_id)\n \"fftiallbgrgfid_1102715649\" btree (group_id, group_forum_id)\n\nperftestdb=# \\d for_f_all_new\n Table \"public.for_f_all_new\"\n Column | Type | Modifiers\n----------------+----------+---------------------------\n msg_id | integer | not null\n group_id | integer | default 0\n group_forum_id | integer | not null default 0\n subject | text | not null default ''::text\n date | integer | not null default 0\n user_name | text | not null default ''::text\n all_tidx | tsvector | not null\nIndexes:\n \"forfallnew_pk_ts\" primary key, btree (msg_id)\n \"forfallnewgrgfid\" btree (group_id, group_forum_id)\n \"forfallnewgrid\" btree (group_forum_id)\nInherits: f_f_all_base\n\nperftestdb=# explain analyze (SELECT f_f_all_base.msg_id, \nf_f_all_base.subject, f_f_all_base.date, f_f_all_base.user_name, '' as \nfromemail FROM f_f_all_base WHERE (all_tidx @@ to_tsquery('MMcache') ) \nAND f_f_all_base.group_id = 78745) ORDER BY msg_id DESC LIMIT 26 OFFSET \n0;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-----------------------\n Limit (cost=17.44..17.44 rows=2 width=72) (actual \ntime=245.726..245.827 rows=26 loops=1)\n -> Sort (cost=17.44..17.44 rows=2 width=72) (actual \ntime=245.719..245.755 rows=26 loops=1)\n Sort Key: public.f_f_all_base.msg_id\n -> Result (cost=0.00..17.43 rows=2 width=72) (actual \ntime=3.885..245.564 rows=28 loops=1)\n -> Append (cost=0.00..17.43 rows=2 width=72) (actual \ntime=3.876..245.320 rows=28 loops=1)\n -> Index Scan using fftiallbgrgfid_1102715649 on \nf_f_all_base (cost=0.00..3.52 rows=1 width=51) (actual \ntime=3.871..244.356 rows=28 loops=1)\n Index Cond: (group_id = 78745)\n Filter: (all_tidx @@ '\\'mmcach\\''::tsquery)\n -> Index Scan using forfallnewgrgfid on \nfor_f_all_new f_f_all_base (cost=0.00..13.91 rows=1 width=72) (actual \ntime=0.816..0.816 rows=0 loops=1)\n Index Cond: (group_id = 78745)\n Filter: (all_tidx @@ '\\'mmcach\\''::tsquery)\n Total runtime: 246.022 ms\n(12 rows)\n\nperftestdb=# explain analyze (SELECT f_f_all_base.msg_id, \nf_f_all_base.subject, f_f_all_base.date, f_f_all_base.user_name, '' as \nfromemail FROM ONLY f_f_all_base WHERE (all_tidx @@ \nto_tsquery('MMcache') ) AND f_f_all_base.group_id = 78745) UNION ALL \n(SELECT f_f_all_new.msg_id, f_f_all_new.subject, f_f_all_new.date, \nf_f_all_new.user_name, '' as fromemail 
FROM for_f_all_new f_f_all_new \nWHERE (all_tidx @@ to_tsquery('MMcache') ) AND f_f_all_new.group_id = \n78745) ORDER BY msg_id DESC LIMIT 26 OFFSET 0;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n----------------------\n Limit (cost=17.46..17.46 rows=2 width=72) (actual time=81.703..81.833 \nrows=26 loops=1)\n -> Sort (cost=17.46..17.46 rows=2 width=72) (actual \ntime=81.695..81.737 rows=26 loops=1)\n Sort Key: msg_id\n -> Append (cost=0.00..17.45 rows=2 width=72) (actual \ntime=3.730..81.465 rows=28 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..3.53 rows=1 \nwidth=51) (actual time=3.726..80.213 rows=28 loops=1)\n -> Index Scan using fftiallbgrgfid_1102715649 on \nf_f_all_base (cost=0.00..3.52 rows=1 width=51) (actual \ntime=3.714..79.996 rows=28 loops=1)\n Index Cond: (group_id = 78745)\n Filter: (all_tidx @@ '\\'mmcach\\''::tsquery)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..13.92 rows=1 \nwidth=72) (actual time=1.146..1.146 rows=0 loops=1)\n -> Index Scan using forfallnewgrgfid on \nfor_f_all_new f_f_all_new (cost=0.00..13.91 rows=1 width=72) (actual \ntime=1.135..1.135 rows=0 loops=1)\n Index Cond: (group_id = 78745)\n Filter: (all_tidx @@ '\\'mmcach\\''::tsquery)\n Total runtime: 82.108 ms\n(13 rows)\n--\nAdi Alurkar (DBA sf.NET) <[email protected]>\n1024D/79730470 A491 5724 74DE 956D 06CB D844 6DF1 B972 7973 0470\n\n", "msg_date": "Thu, 16 Dec 2004 12:06:46 -0800", "msg_from": "Adi Alurkar <[email protected]>", "msg_from_op": true, "msg_subject": "UNION ALL vs INHERITANCE" }, { "msg_contents": "Adi Alurkar <[email protected]> writes:\n> Why does the append resulting from a inheritance take longer than one \n> resulting from UNION ALL?\n\nThe index scan is where the time difference is:\n\n> -> Index Scan using fftiallbgrgfid_1102715649 on \n> f_f_all_base (cost=0.00..3.52 rows=1 width=51) (actual \n> time=3.871..244.356 rows=28 loops=1)\n> Index Cond: (group_id = 78745)\n> Filter: (all_tidx @@ '\\'mmcach\\''::tsquery)\n\n> -> Index Scan using fftiallbgrgfid_1102715649 on \n> f_f_all_base (cost=0.00..3.52 rows=1 width=51) (actual \n> time=3.714..79.996 rows=28 loops=1)\n> Index Cond: (group_id = 78745)\n> Filter: (all_tidx @@ '\\'mmcach\\''::tsquery)\n\nOne would have to suppose this is a caching effect, ie, the data is\nalready in RAM on the second try and doesn't have to be read from disk\nagain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Dec 2004 17:13:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION ALL vs INHERITANCE " } ]
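A simple way to test the caching explanation above is to run the identical statement twice in the same session (or to swap the order of the two tests); if the second run of the inheritance form drops to roughly the UNION ALL timing, the gap was a warm-cache effect rather than a difference between the plans.

    -- Run the inheritance form twice in a row and compare the two timings
    EXPLAIN ANALYZE
    SELECT f_f_all_base.msg_id, f_f_all_base.subject, f_f_all_base.date,
           f_f_all_base.user_name, '' AS fromemail
    FROM   f_f_all_base
    WHERE  all_tidx @@ to_tsquery('MMcache')
      AND  f_f_all_base.group_id = 78745
    ORDER BY msg_id DESC LIMIT 26 OFFSET 0;
    -- ...then immediately execute the same EXPLAIN ANALYZE again.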
[ { "msg_contents": "> You might want to reduce random_page_cost a little.\n\n> Keep in mind that your test case is small enough to fit in RAM and is\n> probably not reflective of what will happen with larger tables.\n\nI am also running 8.0 rc1 for Windows. Despite many hours spent tweaking various planner cost constants, I found little effect on cost estimates. Even reducing random_page_cost from 4.0 to 0.1 had negligible impact and failed to significantly influence the planner.\n\nIncreasing the statistics target for the last_name column to 250 or so *may* help, at least if you're only selecting one name at a time. That's the standard advice around here and the only thing I've found useful. Half the threads in this forum are about under-utilized indexes. It would be great if someone could admit the planner is broken and talk about actually fixing it!\n\nI'm unconvinced that the planner only favours sequential scans as table size decreases. In my experience so far, larger tables have the same problem only it's more noticeable.\n\nThe issue hits PostgreSQL harder than others because of its awful sequential scan speed, which is two to five times slower than other DBMS. The archives show there has been talk for years about this, but it seems, no solution. The obvious thing to consider is the block size, but people have tried increasing this in the past with only marginal success.\n\nRegards\n\nDavid\n", "msg_date": "Fri, 17 Dec 2004 11:18:36 +0000", "msg_from": "David Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "David Brown wrote:\n>> You might want to reduce random_page_cost a little.\n> \n> \n>> Keep in mind that your test case is small enough to fit in RAM and\n>> is probably not reflective of what will happen with larger tables.\n> \n> \n> I am also running 8.0 rc1 for Windows. Despite many hours spent\n> tweaking various planner cost constants, I found little effect on\n> cost estimates. Even reducing random_page_cost from 4.0 to 0.1 had\n> negligible impact and failed to significantly influence the planner.\n\nI'm not sure setting random_page_cost below 1.0 makes much sense.\n\n> Increasing the statistics target for the last_name column to 250 or\n> so *may* help, at least if you're only selecting one name at a time.\n\nNot going to do anything in this case. The planner is roughly right \nabout how many rows will be returned, it's just not expecting everything \nto be in RAM.\n\n> That's the standard advice around here and the only thing I've found\n> useful. Half the threads in this forum are about under-utilized\n> indexes. It would be great if someone could admit the planner is\n> broken and talk about actually fixing it!\n\nNot sure I agree here - when the stats are accurate, you can get the \nplanner to make near-optimal choices most of the time. Is there any \nparticular pattern you've seen?\n\n> I'm unconvinced that the planner only favours sequential scans as\n> table size decreases. In my experience so far, larger tables have the\n> same problem only it's more noticeable.\n\nHmm - assuming your statistics are good, this would suggest the other \ncost settings just aren't right for your hardware.\n\n> The issue hits PostgreSQL harder than others because of its awful\n> sequential scan speed, which is two to five times slower than other\n> DBMS. The archives show there has been talk for years about this, but\n> it seems, no solution. 
The obvious thing to consider is the block\n> size, but people have tried increasing this in the past with only\n> marginal success.\n\nMust admit this puzzles me. Are you saying you can't saturate your disk \nI/O? Or are you saying other DBMS store records in 0.5 to 0.2 times less \nspace than PG?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 17 Dec 2004 13:03:50 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n\n> Not going to do anything in this case. The planner is roughly right about how\n> many rows will be returned, it's just not expecting everything to be in RAM.\n\nThat doesn't make sense or else it would switch to the index at\nrandom_page_cost = 1.0. If it was still using a sequential scan at\nrandom_page_cost < 1 then perhaps he had some problem with his query like\nmismatched data types that forced it to use a full scan.\n\n> > That's the standard advice around here and the only thing I've found\n> > useful. Half the threads in this forum are about under-utilized\n> > indexes. It would be great if someone could admit the planner is\n> > broken and talk about actually fixing it!\n> \n> Not sure I agree here - when the stats are accurate, you can get the planner to\n> make near-optimal choices most of the time. Is there any particular pattern\n> you've seen?\n\nThe most common cause I've seen here is that Postgres makes very pessimistic\nassumptions about selectivity when it doesn't know better. Every other\ndatabase I've tested assumes 'col > ?' is about 5% selectivity . Postgres\nassumes 33%.\n\nPostgres is also more pessimistic about the efficiency of index scans. It's\nwilling to use a sequential scan down to well below 5% selectivity when other\ndatabases use the more traditional rule of thumb of 10%.\n\nIn combination these effects do seem to cause an _awful_ lot of complaints.\n\n\n> > The issue hits PostgreSQL harder than others because of its awful\n> > sequential scan speed, which is two to five times slower than other\n> > DBMS. The archives show there has been talk for years about this, but\n> > it seems, no solution. The obvious thing to consider is the block\n> > size, but people have tried increasing this in the past with only\n> > marginal success.\n> \n> Must admit this puzzles me. Are you saying you can't saturate your disk I/O? Or\n> are you saying other DBMS store records in 0.5 to 0.2 times less space than PG?\n\nI don't know what he's talking about either. Perhaps he's thinking of people\nwho haven't been running vacuum enough?\n\n-- \ngreg\n\n", "msg_date": "17 Dec 2004 10:47:57 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Postgres is also more pessimistic about the efficiency of index scans. It's\n> willing to use a sequential scan down to well below 5% selectivity when other\n> databases use the more traditional rule of thumb of 10%.\n\nHowever, other databases are probably basing their analysis on a\ndifferent execution model. 
Since we have to visit both heap and index\nin all cases, we do indeed have a larger penalty for index use.\n\nI've looked pretty closely at the cost model for index access, believe me.\nIt's not pessimistic; if anything it is undercharging for index access.\n(For one thing it treats the index's internal fetches as sequential\naccess, when in reality they are probably random.)\n\nI think the one effect that's not being modeled is amortization of index\nfetches across successive queries. The cost model is pretty much based\non the assumption that each query starts from ground zero, whereas in\nreality a heavily used index will certainly have all its upper levels in\nRAM, and if it's not too large the leaf pages might all be cached too.\nI wouldn't want to switch the planner over to making that assumption\nexclusively, but we could talk about having a cost parameter that dials\nthe assumption up or down.\n\nAwhile back I tried rewriting btcostestimate to charge zero for\naccessing the metapage and the upper index levels, but charge\nrandom_page_cost for fetching leaf pages. For small indexes this came\nout with worse (larger) numbers than we have now, which is not the\ndirection we want to go in :-(. So I think that we have to somehow\nhonestly model caching of index pages across queries.\n\nOf course, to be completely fair such a modification should account for\ncaching of heap pages as well, so it would also bring down the estimates\nfor seqscans. But I'd be willing to accept a model that considers only\ncaching of index pages as a zero-order approximation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Dec 2004 12:44:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > Postgres is also more pessimistic about the efficiency of index scans. It's\n> > willing to use a sequential scan down to well below 5% selectivity when other\n> > databases use the more traditional rule of thumb of 10%.\n> \n> However, other databases are probably basing their analysis on a\n> different execution model. Since we have to visit both heap and index\n> in all cases, we do indeed have a larger penalty for index use.\n\nIt's only in special cases that other databases do not have to look at the\nheap. For simple queries like \"select * from x where foo > ?\" they still have\nto look at the heap. I never looked into how much of a bonus Oracle gives for\nthe index-only case, I'm not sure it even takes it into account.\n\n> I've looked pretty closely at the cost model for index access, believe me.\n> It's not pessimistic; if anything it is undercharging for index access.\n\nI think there's another effect here beyond the physical arithmetic. There's a\nkind of teleological reasoning that goes something like \"If the user created\nthe index chances are it's because he wanted it to be used\".\n\nI guess that argues more for more aggressive selectivity estimates than for\nbiased index costing though. If I'm doing \"where foo > ?\" then if there's an\nindex on foo I probably put it there for a reason and want it to be used even\nif postgres doesn't really have a clue how selective the query will be.\n\n> I think the one effect that's not being modeled is amortization of index\n> fetches across successive queries. 
\n\nAnd across multiple fetches in a single query, such as with a nested loop.\n\nIt seems like the effective_cache_size parameter should be having some\ninfluence here.\n\n-- \ngreg\n\n", "msg_date": "17 Dec 2004 13:24:33 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> I think the one effect that's not being modeled is amortization of index\n>> fetches across successive queries. \n\n> And across multiple fetches in a single query, such as with a nested loop.\n\nRight, that's effectively the same problem. You could imagine making a\nspecial-purpose solution for nestloop queries but I think the issue is\nmore general than that.\n\n> It seems like the effective_cache_size parameter should be having some\n> influence here.\n\nBut it doesn't :-(. e_c_s is currently only used to estimate\namortization of repeated heap-page fetches within a single indexscan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Dec 2004 13:36:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index " }, { "msg_contents": "On Fri, Dec 17, 2004 at 10:47:57AM -0500, Greg Stark wrote:\n>> Must admit this puzzles me. Are you saying you can't saturate your disk I/O? Or\n>> are you saying other DBMS store records in 0.5 to 0.2 times less space than PG?\n> I don't know what he's talking about either. Perhaps he's thinking of people\n> who haven't been running vacuum enough?\n\nI'm a bit unsure -- should counting ~3 million rows (no OIDs, PG 7.4,\neverything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 17 Dec 2004 22:56:27 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "On Fri, Dec 17, 2004 at 10:56:27PM +0100, Steinar H. Gunderson wrote:\n> I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG 7.4,\n> everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?\n\n(I realize I was a bit unclear here. This is a completely separate case, not\nrelated to the original poster -- I was just wondering if what I'm seeing is\nnormal or not.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 17 Dec 2004 23:09:07 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "On Fri, 17 Dec 2004 23:09:07 +0100\n\"Steinar H. Gunderson\" <[email protected]> wrote:\n\n> On Fri, Dec 17, 2004 at 10:56:27PM +0100, Steinar H. Gunderson wrote:\n> > I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG\n> > 7.4, everything in cache, 32-byte rows) take ~3500ms on an Athlon 64\n> > 2800+?\n> \n> (I realize I was a bit unclear here. This is a completely separate\n> case, not related to the original poster -- I was just wondering if\n> what I'm seeing is normal or not.)\n\n It depends more on your disk IO than the processor. Counting isn't\n processor intensive, but reading through the entire table on disk \n is. I've also seen a huge difference between select count(*) and \n select count(1) in older versions, haven't tried it on a recent\n version however. 
\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 17 Dec 2004 17:02:29 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "On Fri, Dec 17, 2004 at 05:02:29PM -0600, Frank Wiles wrote:\n> It depends more on your disk IO than the processor. Counting isn't\n> processor intensive, but reading through the entire table on disk \n> is. I've also seen a huge difference between select count(*) and \n> select count(1) in older versions, haven't tried it on a recent\n> version however. \n\nLike I said, all in cache, so no disk IO. count(*) and count(1) give me\nidentical results. (BTW, I don't think this is a count problem, it's a\n\"sequential scan\" problem -- I'm just trying to find out if this is natural\nor not, ie. if this is just something I have to expect in a relational\ndatabase, even with no I/O.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 18 Dec 2004 00:55:48 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "Frank Wiles <[email protected]> writes:\n> I've also seen a huge difference between select count(*) and \n> select count(1) in older versions,\n\nThat must have been before my time, ie, pre-6.4 or so. There is\ncertainly zero difference now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Dec 2004 23:37:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index " }, { "msg_contents": "On Fri, Dec 17, 2004 at 22:56:27 +0100,\n \"Steinar H. Gunderson\" <[email protected]> wrote:\n> \n> I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG 7.4,\n> everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?\n\nIt doesn't seem totally out of wack. You will be limited by the memory\nbandwidth and it looks like you get something on the order of a few\nhundred references to memory per row. That may be a little high, but\nit doesn't seem ridiculously high.\n", "msg_date": "Fri, 17 Dec 2004 22:39:18 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "On Fri, Dec 17, 2004 at 10:39:18PM -0600, Bruno Wolff III wrote:\n> It doesn't seem totally out of wack. You will be limited by the memory\n> bandwidth and it looks like you get something on the order of a few\n> hundred references to memory per row. That may be a little high, but\n> it doesn't seem ridiculously high.\n\nI just tested 8.0.0rc1 -- I got a _50%_ speedup on this operation...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 18 Dec 2004 14:45:40 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" }, { "msg_contents": "On Fri, 17 Dec 2004 23:37:37 -0500\nTom Lane <[email protected]> wrote:\n\n> Frank Wiles <[email protected]> writes:\n> > I've also seen a huge difference between select count(*) and \n> > select count(1) in older versions,\n> \n> That must have been before my time, ie, pre-6.4 or so. There is\n> certainly zero difference now.\n\n Yeah now that I think about it that sounds about the right time\n frame I last benchmarked it. 
\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Mon, 20 Dec 2004 13:40:59 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan rather than Index" } ]
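The concrete advice in this thread boils down to a handful of statements. A minimal sketch, using a hypothetical table people with a last_name column (substitute the real schema), and keeping the cost experiments session-local:

  ALTER TABLE people ALTER COLUMN last_name SET STATISTICS 250;
  ANALYZE people;

  SET random_page_cost = 2;     -- per-session experiment; keep it >= 1
  EXPLAIN ANALYZE SELECT * FROM people WHERE last_name = 'Smith';

  SET enable_seqscan = off;     -- compare against the plan the optimizer avoided
  EXPLAIN ANALYZE SELECT * FROM people WHERE last_name = 'Smith';
  RESET enable_seqscan;

If the index plan wins clearly with enable_seqscan off, it is the statistics or the cost settings that need adjusting, not the query.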
[ { "msg_contents": "I have a table with an tsearch2 full text index on PG 7.4.2. And a \nquery against the index is really slow.\nI try to do a \"VACUUM FULL VERBOSE ANALYZE pkpoai.metadata\" and I got \nan error.\nI monitor memory usage with top, and pg backend uses more and more \nmemory and hits the limit of 1GB of RAM use.\n\nWhat can I do ?\n\nCordialement,\nJean-Gérard Pailloncy\n\n# top (just before the error)\n PID UID PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND\n20461 503 -5 0 765M 824M sleep biowai 4:26 33.20% postgres\n\n# VACUUM FULL VERBOSE ANALYZE pkpoai.metadata;\nINFO: vacuuming \"pkpoai.metadata\"\nINFO: \"metadata\": found 167405 removable, 3133397 nonremovable row \nversions in 344179 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 168 to 2032 bytes long.\nThere were 13368 unused item pointers.\nTotal free space (including removable row versions) is 174825268 bytes.\n9362 pages are or will become empty, including 0 at the end of the \ntable.\n150433 pages containing 166581084 free bytes are potential move \ndestinations.\nCPU 6.28s/1.42u sec elapsed 51.87 sec.\nINFO: index \"metadata_pkey\" now contains 3133397 row versions in 10501 \npages\nDETAIL: 88443 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.39s/1.35u sec elapsed 26.12 sec.\nINFO: index \"metadata_archive_key\" now contains 3133397 row versions \nin 45268 pages\nDETAIL: 88443 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.44s/1.65u sec elapsed 355.32 sec.\nINFO: index \"metadata_oai_identifier\" now contains 3133397 row \nversions in 36336 pages\nDETAIL: 88443 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.67s/1.69u sec elapsed 258.86 sec.\nINFO: index \"test_metadata_all\" now contains 3133397 row versions in \n97707 pages\nDETAIL: 88442 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.88s/3.98u sec elapsed 230.70 sec.\nERROR: out of memory\nDETAIL: Failed on request of size 168.\n\n\nEXPLAIN SELECT id, title, author, add_authors, identifier, date FROM \npkpoai.metadata WHERE to_tsvector('default_english', \ncoalesce(author,'') ||' '|| coalesce(affiliation,'') ||' '|| \ncoalesce(add_authors,'') ||' '|| coalesce(add_affiliations,'') ||' '|| \ncoalesce(title,'') ||' '|| coalesce(abstract,'') ||' '|| \ncoalesce(discipline,'') ||' '|| coalesce(topic,'') ||' '|| \ncoalesce(publisher,'') ||' '|| coalesce(contributors,'') ||' '|| \ncoalesce(approach,'') ||' '|| coalesce(format,'') ||' '|| \ncoalesce(source,'') ||' '|| coalesce(language,'') ||' '|| \ncoalesce(relation,'') ||' '|| coalesce(coverage,'') ) @@ \nto_tsquery('default_english','pailloncy') LIMIT 100\n\nLimit (cost=0.00..310.80 rows=100 width=176)\n -> Index Scan using test_metadata_all on metadata \n(cost=0.00..9706.34 rows=3123 width=176)\n Index Cond: (to_tsvector('default_english'::text, \n((((((((((((((((((((((((((((((COALESCE(author, ''::text) || ' '::text) \n|| COALESCE(affiliation, ''::text)) || ' '::text) || \nCOALESCE(add_authors, ''::text)) || ' '::text) || \nCOALESCE(add_affiliations, ''::text)) || ' '::text) || COALESCE(title, \n''::text)) || ' '::text) || COALESCE(abstract, ''::text)) || ' '::text) \n|| COALESCE(discipline, ''::text)) || ' '::text) || COALESCE(topic, \n''::text)) || ' '::text) || COALESCE(publisher, ''::text)) || ' \n'::text) || COALESCE(contributors, ''::text)) || ' 
'::text) || \nCOALESCE(approach, ''::text)) || ' '::text) || COALESCE(format, \n''::text)) || ' '::text) || COALESCE(source, ''::text)) || ' '::text) \n|| (COALESCE(\"language\", ''::character varying))::text) || ' '::text) \n|| COALESCE(relation, ''::text)) || ' '::text) || COALESCE(coverage, \n''::text))) @@ '\\'paillonci\\''::tsquery)\n Filter: (to_tsvector('default_english'::text, \n((((((((((((((((((((((((((((((COALESCE(author, ''::text) || ' '::text) \n|| COALESCE(affiliation, ''::text)) || ' '::text) || \nCOALESCE(add_authors, ''::text)) || ' '::text) || \nCOALESCE(add_affiliations, ''::text)) || ' '::text) || COALESCE(title, \n''::text)) || ' '::text) || COALESCE(abstract, ''::text)) || ' '::text) \n|| COALESCE(discipline, ''::text)) || ' '::text) || COALESCE(topic, \n''::text)) || ' '::text) || COALESCE(publisher, ''::text)) || ' \n'::text) || COALESCE(contributors, ''::text)) || ' '::text) || \nCOALESCE(approach, ''::text)) || ' '::text) || COALESCE(format, \n''::text)) || ' '::text) || COALESCE(source, ''::text)) || ' '::text) \n|| (COALESCE(\"language\", ''::character varying))::text) || ' '::text) \n|| COALESCE(relation, ''::text)) || ' '::text) || COALESCE(coverage, \n''::text))) @@ '\\'paillonci\\''::tsquery)\nTotal runtime: 148.567 ms\n\n\nEXPLAIN ANALYZE SELECT id, title, author, add_authors, identifier, date \nFROM pkpoai.metadata WHERE to_tsvector('default_english', \ncoalesce(author,'') ||' '|| coalesce(affiliation,'') ||' '|| \ncoalesce(add_authors,'') ||' '|| coalesce(add_affiliations,'') ||' '|| \ncoalesce(title,'') ||' '|| coalesce(abstract,'') ||' '|| \ncoalesce(discipline,'') ||' '|| coalesce(topic,'') ||' '|| \ncoalesce(publisher,'') ||' '|| coalesce(contributors,'') ||' '|| \ncoalesce(approach,'') ||' '|| coalesce(format,'') ||' '|| \ncoalesce(source,'') ||' '|| coalesce(language,'') ||' '|| \ncoalesce(relation,'') ||' '|| coalesce(coverage,'') ) @@ \nto_tsquery('default_english','pailloncy') LIMIT 100\n\nLimit (cost=0.00..310.80 rows=100 width=176) (actual \ntime=168751.929..168751.929 rows=0 loops=1)\n -> Index Scan using test_metadata_all on metadata \n(cost=0.00..9706.34 rows=3123 width=176) (actual \ntime=168751.921..168751.921 rows=0 loops=1)\n Index Cond: (to_tsvector('default_english'::text, \n((((((((((((((((((((((((((((((COALESCE(author, ''::text) || ' '::text) \n|| COALESCE(affiliation, ''::text)) || ' '::text) || \nCOALESCE(add_authors, ''::text)) || ' '::text) || \nCOALESCE(add_affiliations, ''::text)) || ' '::text) || COALESCE(title, \n''::text)) || ' '::text) || COALESCE(abstract, ''::text)) || ' '::text) \n|| COALESCE(discipline, ''::text)) || ' '::text) || COALESCE(topic, \n''::text)) || ' '::text) || COALESCE(publisher, ''::text)) || ' \n'::text) || COALESCE(contributors, ''::text)) || ' '::text) || \nCOALESCE(approach, ''::text)) || ' '::text) || COALESCE(format, \n''::text)) || ' '::text) || COALESCE(source, ''::text)) || ' '::text) \n|| (COALESCE(\"language\", ''::character varying))::text) || ' '::text) \n|| COALESCE(relation, ''::text)) || ' '::text) || COALESCE(coverage, \n''::text))) @@ '\\'paillonci\\''::tsquery)\n Filter: (to_tsvector('default_english'::text, \n((((((((((((((((((((((((((((((COALESCE(author, ''::text) || ' '::text) \n|| COALESCE(affiliation, ''::text)) || ' '::text) || \nCOALESCE(add_authors, ''::text)) || ' '::text) || \nCOALESCE(add_affiliations, ''::text)) || ' '::text) || COALESCE(title, \n''::text)) || ' '::text) || COALESCE(abstract, ''::text)) || ' '::text) \n|| COALESCE(discipline, ''::text)) || ' '::text) || 
COALESCE(topic, \n''::text)) || ' '::text) || COALESCE(publisher, ''::text)) || ' \n'::text) || COALESCE(contributors, ''::text)) || ' '::text) || \nCOALESCE(approach, ''::text)) || ' '::text) || COALESCE(format, \n''::text)) || ' '::text) || COALESCE(source, ''::text)) || ' '::text) \n|| (COALESCE(\"language\", ''::character varying))::text) || ' '::text) \n|| COALESCE(relation, ''::text)) || ' '::text) || COALESCE(coverage, \n''::text))) @@ '\\'paillonci\\''::tsquery)\nTotal runtime: 168752.362 ms\n\nInformation from phpPgAdmin 3.5.1\nPostgreSQL seems to suffer from the TOAST.\nSequential Index Enregistrements\nScan Read Scan Fetch INSERT UPDATE DELETE\n 0 0 2 19080 0 0 0\n\nI/O Performance\nHeap Index TOAST TOAST Index\nDisk Buffer % Disk Buffer % Disk Buffer % Disk Buffer \n%\n17157 1953 (10%) 46945 66047 (58%) 11781 7177 (38%) 2089 44853 \n(96%)\n\nPerformance Index\nIndex Scan Read Fetch\nmetadata_archive_key 0 0 0\nmetadata_oai_identifier 0 0 0\nmetadata_pkey 0 0 0\ntest_metadata_all 2 19080 19080\n\nI/O Performance Index\nIndex Disk Buffer %\nmetadata_archive_key 0 0 (0%)\nmetadata_oai_identifie 0 0 (0%)\nmetadata_pkey 0 0 (0%)\ntest_metadata_all 46945 66047 (58%)\n\n\nStructure of the Table pkpoai.metatda\nI use only text field because I import data from the web and I do not \nknow an upper limit of the fields.\nid integer NOT NULL \nnextval('pkpoai.metadata_id_seq'::text)\narchive integer NOT NULL 0\noai_identifier character varying(255) NOT NULL\nidentifier text NOT NULL\ndatestamp timestamp without time zone NOT NULL\nauthor text NOT NULL\nemail text NOT NULL\naffiliation text NOT NULL\nadd_authors text NOT NULL\nadd_emails text NOT NULL\nadd_affiliations text NOT NULL\ntitle text NOT NULL\nabstract text NOT NULL\ndiscipline text NOT NULL\ntopic text NOT NULL\npublisher text NOT NULL\ncontributors text NOT NULL\ndate character varying(255)\ntype text NOT NULL\napproach text NOT NULL\nformat text NOT NULL\nsource text NOT NULL\nlanguage character varying(255) NOT NULL\nrelation text NOT NULL\ncoverage text NOT NULL\nrights text NOT NULL\n", "msg_date": "Fri, 17 Dec 2004 18:58:21 +0100", "msg_from": "Pailloncy Jean-Gerard <[email protected]>", "msg_from_op": true, "msg_subject": "Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "Jean-Gerard,\n\n> I have a table with an tsearch2 full text index on PG 7.4.2. And a\n> query against the index is really slow.\n> I try to do a \"VACUUM FULL VERBOSE ANALYZE pkpoai.metadata\" and I got\n> an error.\n> I monitor memory usage with top, and pg backend uses more and more\n> memory and hits the limit of 1GB of RAM use.\n\nWhat is your VACUUM_MEM set to in postgresql.conf?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 17 Dec 2004 10:15:54 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": ">> I have a table with an tsearch2 full text index on PG 7.4.2. 
And a\n>> query against the index is really slow.\n>> I try to do a \"VACUUM FULL VERBOSE ANALYZE pkpoai.metadata\" and I got\n>> an error.\n>> I monitor memory usage with top, and pg backend uses more and more\n>> memory and hits the limit of 1GB of RAM use.\n>\n> What is your VACUUM_MEM set to in postgresql.conf?\nvacuum_mem = 131072\nI have 1 GB of RAM.\nThere was only one running backend.\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Fri, 17 Dec 2004 19:25:29 +0100", "msg_from": "Pailloncy Jean-Gerard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "The classic output from top (during all other index vacuum):\n PID UID PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND\n20461 503 14 0 13M 75M sleep semwai 5:27 2.05% postgres\n\nWhen backend hits the tsearch2 index, SIZE/RES grows until it reachs \n1GB, where I got the error.\n PID UID PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND\n20461 503 -5 0 765M 824M sleep biowai 4:26 33.20% postgres\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Fri, 17 Dec 2004 19:32:24 +0100", "msg_from": "Pailloncy Jean-Gerard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "Jean-Gerard,\n\n> The classic output from top (during all other index vacuum):\n> PID UID PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND\n> 20461 503 14 0 13M 75M sleep semwai 5:27 2.05% postgres\n>\n> When backend hits the tsearch2 index, SIZE/RES grows until it reachs\n> 1GB, where I got the error.\n> PID UID PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND\n> 20461 503 -5 0 765M 824M sleep biowai 4:26 33.20% postgres\n\nOK, next thing to try is upgrading to 7.4.7. Since you have 7.4.2, this \nshould be a straightforward binary replacement.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 17 Dec 2004 10:47:49 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Jean-Gerard,\n>> When backend hits the tsearch2 index, SIZE/RES grows until it reachs\n>> 1GB, where I got the error.\n>> PID UID PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND\n>> 20461 503 -5 0 765M 824M sleep biowai 4:26 33.20% postgres\n\n> OK, next thing to try is upgrading to 7.4.7. Since you have 7.4.2, this \n> should be a straightforward binary replacement.\n\nThis looks like it must be a memory leak in the gist indexing code\n(either gist itself or tsearch2). I don't see any post-release fixes in\nthe 7.4 branch that look like they fixed any such thing :-(, so it's\nprobably still there in 7.4.7, and likely 8.0 too.\n\nJean-Gerard, can you put together a self-contained test case? I suspect\nit need only look like \"put some data in a table, make a tsearch2 index,\ndelete half the rows in the table, VACUUM FULL\". But I don't have time\nto try to cons up a test case right now, and especially not to figure\nout what to do to duplicate your problem if it doesn't happen on the\nfirst try.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Dec 2004 14:46:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "Tom,\n\n> Jean-Gerard, can you put together a self-contained test case?  
I suspect\n> it need only look like \"put some data in a table, make a tsearch2 index,\n> delete half the rows in the table, VACUUM FULL\".  But I don't have time\n> to try to cons up a test case right now, and especially not to figure\n> out what to do to duplicate your problem if it doesn't happen on the\n> first try.\n\nMight be hard. I have 2 databases with Tsearch2 on 7.4, and haven't seen any \nsuch problem. Including one that blows away about 3000 rows a day.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 17 Dec 2004 11:52:15 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> Jean-Gerard, can you put together a self-contained test case? I suspect\n>> it need only look like \"put some data in a table, make a tsearch2 index,\n>> delete half the rows in the table, VACUUM FULL\". But I don't have time\n>> to try to cons up a test case right now, and especially not to figure\n>> out what to do to duplicate your problem if it doesn't happen on the\n>> first try.\n\n> Might be hard. I have 2 databases with Tsearch2 on 7.4, and haven't seen any\n> such problem. Including one that blows away about 3000 rows a day.\n\nYeah, I'm sure there is some particular thing Jean-Gerard is doing that\nis triggering the problem. He can probably boil his existing table down\nto a test case faster than we can guess what the trigger condition is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Dec 2004 14:59:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory) " }, { "msg_contents": "Bruno Wolff III <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> This looks like it must be a memory leak in the gist indexing code\n>> (either gist itself or tsearch2). I don't see any post-release fixes in\n>> the 7.4 branch that look like they fixed any such thing :-(, so it's\n>> probably still there in 7.4.7, and likely 8.0 too.\n\n> Shouldn't that be 7.4.6?\n\nRight ... I copied Josh's mistake without thinking about it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Dec 2004 16:17:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory) " }, { "msg_contents": "On Fri, Dec 17, 2004 at 14:46:57 -0500,\n Tom Lane <[email protected]> wrote:\n> \n> This looks like it must be a memory leak in the gist indexing code\n> (either gist itself or tsearch2). I don't see any post-release fixes in\n> the 7.4 branch that look like they fixed any such thing :-(, so it's\n> probably still there in 7.4.7, and likely 8.0 too.\n\nShouldn't that be 7.4.6? 
I am expecting there to be an eventual 7.4.7\nbecause of some post 7.4.6 fixes that have gone in, but I haven't seen\nany other indications that this has already happened.\n", "msg_date": "Fri, 17 Dec 2004 15:23:45 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "Update to my case:\nI drop and recreate the index and there was no problem this time.\nStrange...\n\n# DROP INDEX pkpoai.test_metadata_all;\nDROP INDEX\n# VACUUM FULL VERBOSE ANALYZE pkpoai.metadata;\nINFO: vacuuming \"pkpoai.metadata\"\nINFO: \"metadata\": found 167381 removable, 3133397 nonremovable row \nversions in 344179 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 168 to 2032 bytes long.\nThere were 13392 unused item pointers.\nTotal free space (including removable row versions) is 174825268 bytes.\n9362 pages are or will become empty, including 0 at the end of the \ntable.\n150433 pages containing 166581084 free bytes are potential move \ndestinations.\nCPU 7.07s/1.50u sec elapsed 209.46 sec.\nINFO: index \"metadata_pkey\" now contains 3133397 row versions in 10501 \npages\nDETAIL: 88246 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.68s/1.21u sec elapsed 81.89 sec.\nINFO: index \"metadata_archive_key\" now contains 3133397 row versions \nin 45268 pages\nDETAIL: 88246 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.28s/1.66u sec elapsed 364.19 sec.\nINFO: index \"metadata_oai_identifier\" now contains 3133397 row \nversions in 36336 pages\nDETAIL: 88246 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.85s/1.81u sec elapsed 260.82 sec.\nINFO: \"metadata\": moved 188118 row versions, truncated 344179 to \n327345 pages\nDETAIL: CPU 9.21s/108.65u sec elapsed 1890.56 sec.\nINFO: index \"metadata_pkey\" now contains 3133397 row versions in 10633 \npages\nDETAIL: 188118 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.64s/0.60u sec elapsed 52.24 sec.\nINFO: index \"metadata_archive_key\" now contains 3133397 row versions \nin 45597 pages\nDETAIL: 188118 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.40s/1.12u sec elapsed 359.17 sec.\nINFO: index \"metadata_oai_identifier\" now contains 3133397 row \nversions in 36624 pages\nDETAIL: 188118 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.82s/0.97u sec elapsed 277.56 sec.\nINFO: vacuuming \"pg_toast.pg_toast_27007136\"\nINFO: \"pg_toast_27007136\": found 1894 removable, 134515 nonremovable \nrow versions in 25921 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 37 to 2034 bytes long.\nThere were 460 unused item pointers.\nTotal free space (including removable row versions) is 17460524 bytes.\n217 pages are or will become empty, including 0 at the end of the table.\n22612 pages containing 17416360 free bytes are potential move \ndestinations.\nCPU 0.51s/0.10u sec elapsed 16.05 sec.\nINFO: index \"pg_toast_27007136_index\" now contains 134515 row versions \nin 561 pages\nDETAIL: 1894 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.01u sec elapsed 1.22 sec.\nINFO: \"pg_toast_27007136\": moved 1806 
row versions, truncated 25921 to \n25554 pages\nDETAIL: CPU 0.03s/0.21u sec elapsed 9.83 sec.\nINFO: index \"pg_toast_27007136_index\" now contains 134515 row versions \nin 569 pages\nDETAIL: 1806 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: analyzing \"pkpoai.metadata\"\nINFO: \"metadata\": 327345 pages, 90000 rows sampled, 3620548 estimated \ntotal rows\nVACUUM\n# CREATE INDEX test_metadata_all ON pkpoai.metadata USING gist \n(to_tsvector('default_english', coalesce(author,'') ||' '|| \ncoalesce(affiliation,'') ||' '|| coalesce(add_authors,'') ||' '|| \ncoalesce(add_affiliations,'') ||' '|| coalesce(title,'') ||' '|| \ncoalesce(abstract,'') ||' '|| coalesce(discipline,'') ||' '|| \ncoalesce(topic,'') ||' '|| coalesce(publisher,'') ||' '|| \ncoalesce(contributors,'') ||' '|| coalesce(approach,'') ||' '|| \ncoalesce(format,'') ||' '|| coalesce(source,'') ||' '|| \ncoalesce(language,'') ||' '|| coalesce(relation,'') ||' '|| \ncoalesce(coverage,'') ));\nNOTICE: word is too long\nNOTICE: word is too long\nNOTICE: word is too long\nCREATE INDEX\n# VACUUM FULL VERBOSE ANALYZE pkpoai.metadata;INFO: vacuuming \n\"pkpoai.metadata\"INFO: \"metadata\": found 0 removable, 3133397 \nnonremovable row versions in 327345 pagesDETAIL: 0 dead row versions \ncannot be removed yet.Nonremovable row versions range from 168 to 2032 \nbytes long.There were 29889 unused item pointers.Total free space \n(including removable row versions) is 37861356 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n93935 pages containing 28461956 free bytes are potential move \ndestinations.\nCPU 5.81s/1.09u sec elapsed 56.18 sec.\nINFO: index \"metadata_pkey\" now contains 3133397 row versions in 10633 \npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.53s/0.94u sec elapsed 20.25 sec.\nINFO: index \"metadata_archive_key\" now contains 3133397 row versions \nin 45597 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.46s/1.35u sec elapsed 338.74 sec.\nINFO: index \"metadata_oai_identifier\" now contains 3133397 row \nversions in 36624 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.78s/1.33u sec elapsed 237.07 sec.\nINFO: index \"test_metadata_all\" now contains 3133397 row versions in \n93136 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.65s/3.47u sec elapsed 167.03 sec.\nINFO: \"metadata\": moved 0 row versions, truncated 327345 to 327345 \npages\nDETAIL: CPU 0.35s/0.41u sec elapsed 82.11 sec.\nINFO: vacuuming \"pg_toast.pg_toast_27007136\"\nINFO: \"pg_toast_27007136\": found 0 removable, 134515 nonremovable row \nversions in 25554 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 37 to 2034 bytes long.\nThere were 665 unused item pointers.\nTotal free space (including removable row versions) is 14468156 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n22041 pages containing 14421368 free bytes are potential move \ndestinations.\nCPU 0.52s/0.03u sec elapsed 16.14 sec.\nINFO: index \"pg_toast_27007136_index\" now contains 134515 row versions \nin 569 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 
0.01s/0.04u sec elapsed 0.54 sec.\nINFO: \"pg_toast_27007136\": moved 0 row versions, truncated 25554 to \n25554 pages\nDETAIL: CPU 0.00s/0.03u sec elapsed 2.56 sec.\nINFO: analyzing \"pkpoai.metadata\"\nINFO: \"metadata\": 327345 pages, 90000 rows sampled, 3620548 estimated \ntotal rows\nVACUUM\n\nCordialement,\nJean-Gérard Pailloncy\n\n", "msg_date": "Fri, 17 Dec 2004 22:59:07 +0100", "msg_from": "Pailloncy Jean-Gerard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory)" }, { "msg_contents": "Pailloncy Jean-Gerard <[email protected]> writes:\n> Update to my case:\n> I drop and recreate the index and there was no problem this time.\n> Strange...\n\nWell, that time there wasn't actually any work for VACUUM FULL to do.\nI think the bloat is probably driven by having to move a lot of rows\nin order to shrink the table. That means creating and deleting a lot\nof index entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Dec 2004 17:20:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error in VACUUM FULL VERBOSE ANALYZE (not enough memory) " }, { "msg_contents": "I think I have a test case for 7.4.2\n\nSo I have a 3 millions of rows table \"metadata\" with a tsearch2 index.\nI had memory leak in \"vacuum full verbose analyze\"\n\nI drop the index, run \"vacuum full verbose analyze\", recreate the index \nand re-run \"vacuum full verbose analyze\".\n\nThe I run my script to insert near 15000 of rows, and run \"vacuum full \nverbose analyze\".\n\nThe backend starts with res=4Mb of ram.\nAnd grows before the first output line to res=69Mb.\nand runs staying at res=69Mb.\nThen after writing INFO: index \"metadata_oai_identifier\" and before \nINFO: index \"test_metadata_all\" which is the tsearch2 index, the \nmemory usage grows to size=742Mb res=804Mb. 
(Hopefully I have 1 GB of \nRAM, with 1 GB of swap).\nThe usage stay at res=804MB until INFO: \"pg_toast_27007136\": found, \nthen drop back to res=69Mb.\nWhen INFO: \"pg_toast_27007136\": moved memory usage grows to res=200MB.\nAnd did not drop back even after vacuum finished.\n\nCordialement,\nJean-Gérard Pailloncy\n\n# vacuum full verbose analyze pkpoai.metadata;\nINFO: vacuuming \"pkpoai.metadata\"\nINFO: \"metadata\": found 15466 removable, 3141229 nonremovable row \nversions in 330201 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 168 to 2032 bytes long.\nThere were 29868 unused item pointers.\nTotal free space (including removable row versions) is 54151896 bytes.\n496 pages are or will become empty, including 0 at the end of the table.\n98834 pages containing 44826736 free bytes are potential move \ndestinations.\nCPU 6.10s/1.03u sec elapsed 69.36 sec.\nINFO: index \"metadata_pkey\" now contains 3141229 row versions in 10666 \npages\nDETAIL: 15466 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.56s/0.92u sec elapsed 40.45 sec.\nINFO: index \"metadata_archive_key\" now contains 3141229 row versions \nin 45733 pages\nDETAIL: 15466 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.36s/1.44u sec elapsed 362.57 sec.\nINFO: index \"metadata_oai_identifier\" now contains 3141229 row \nversions in 36736 pages\nDETAIL: 15466 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.04s/1.25u sec elapsed 244.82 sec.\nINFO: index \"test_metadata_all\" now contains 3141229 row versions in \n93922 pages\nDETAIL: 15466 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.81s/3.76u sec elapsed 196.50 sec.\nINFO: \"metadata\": moved 14151 row versions, truncated 330201 to 328285 \npages\nDETAIL: CPU 2.65s/59.67u sec elapsed 251.01 sec.\nINFO: index \"metadata_pkey\" now contains 3141229 row versions in 10686 \npages\nDETAIL: 14151 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.68s/0.29u sec elapsed 67.42 sec.\nINFO: index \"metadata_archive_key\" now contains 3141229 row versions \nin 45774 pages\nDETAIL: 14151 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.28s/0.54u sec elapsed 347.82 sec.\nINFO: index \"metadata_oai_identifier\" now contains 3141229 row \nversions in 36784 pages\nDETAIL: 14151 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 2.02s/0.39u sec elapsed 248.27 sec.\nINFO: index \"test_metadata_all\" now contains 3141229 row versions in \n94458 pages\nDETAIL: 14151 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 1.76s/2.93u sec elapsed 173.22 sec.\nINFO: vacuuming \"pg_toast.pg_toast_27007136\"\nINFO: \"pg_toast_27007136\": found 5790 removable, 135159 nonremovable \nrow versions in 26847 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 37 to 2034 bytes long.\nThere were 665 unused item pointers.\nTotal free space (including removable row versions) is 24067284 bytes.\n559 pages are or will become empty, including 0 at the end of the table.\n23791 pages containing 24026432 free bytes are potential move \ndestinations.\nCPU 0.54s/0.12u sec elapsed 19.60 sec.\nINFO: index \"pg_toast_27007136_index\" now 
contains 135159 row versions \nin 593 pages\nDETAIL: 5790 index row versions were removed.\n3 index pages have been deleted, 3 are currently reusable.\nCPU 0.01s/0.03u sec elapsed 0.77 sec.\nINFO: \"pg_toast_27007136\": moved 5733 row versions, truncated 26847 to \n25695 pages\nDETAIL: CPU 0.13s/0.34u sec elapsed 15.25 sec.\nINFO: index \"pg_toast_27007136_index\" now contains 135159 row versions \nin 611 pages\nDETAIL: 5733 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: analyzing \"pkpoai.metadata\"\nINFO: \"metadata\": 328285 pages, 90000 rows sampled, 3631229 estimated \ntotal rows\nVACUUM\n\n", "msg_date": "Thu, 23 Dec 2004 00:42:27 +0100", "msg_from": "Pailloncy Jean-Gerard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory leak tsearch2 VACUUM FULL VERBOSE ANALYZE" }, { "msg_contents": "Pailloncy Jean-Gerard <[email protected]> writes:\n> I think I have a test case for 7.4.2\n\nCan you send me the test data (off-list)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Dec 2004 19:01:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory leak tsearch2 VACUUM FULL VERBOSE ANALYZE " }, { "msg_contents": "Pailloncy Jean-Gerard <[email protected]> writes:\n> I think I have a test case for 7.4.2\n\nTry the attached patch.\n\nIt looked to me like there were some smaller leaks going on during COPY\nand CREATE INDEX, which I will look into later --- but this seems to be\nthe problem for VACUUM FULL.\n\n\t\t\tregards, tom lane\n\nIndex: vacuum.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.263\ndiff -c -r1.263 vacuum.c\n*** vacuum.c\t2 Oct 2003 23:19:44 -0000\t1.263\n--- vacuum.c\t23 Dec 2004 22:37:57 -0000\n***************\n*** 2041,2046 ****\n--- 2041,2047 ----\n \t\t\t\t\t\tExecStoreTuple(&newtup, slot, InvalidBuffer, false);\n \t\t\t\t\t\tExecInsertIndexTuples(slot, &(newtup.t_self),\n \t\t\t\t\t\t\t\t\t\t\t estate, true);\n+ \t\t\t\t\t\tResetPerTupleExprContext(estate);\n \t\t\t\t\t}\n \n \t\t\t\t\tWriteBuffer(cur_buffer);\n***************\n*** 2174,2179 ****\n--- 2175,2181 ----\n \t\t\t{\n \t\t\t\tExecStoreTuple(&newtup, slot, InvalidBuffer, false);\n \t\t\t\tExecInsertIndexTuples(slot, &(newtup.t_self), estate, true);\n+ \t\t\t\tResetPerTupleExprContext(estate);\n \t\t\t}\n \t\t}\t\t\t\t\t\t/* walk along page */\n \n", "msg_date": "Thu, 23 Dec 2004 17:44:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory leak tsearch2 VACUUM FULL VERBOSE ANALYZE " } ]
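For anyone wanting to reproduce the leak, a hypothetical cut-down script shaped like Tom's suggested test case could look as follows. It assumes the tsearch2 contrib module is installed and reuses the 'default_english' configuration from this thread; generate_series() needs 8.0 (on 7.4, substitute any other bulk-load method):

  CREATE TABLE leak_test (id serial PRIMARY KEY, body text NOT NULL);
  INSERT INTO leak_test (body)
    SELECT 'filler text number ' || i::text
    FROM generate_series(1, 500000) AS s(i);
  CREATE INDEX leak_test_fti ON leak_test
    USING gist (to_tsvector('default_english', body));
  DELETE FROM leak_test WHERE id % 2 = 0;  -- leave holes so VACUUM FULL must move rows
  VACUUM FULL VERBOSE ANALYZE leak_test;   -- watch the backend's memory during this step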
[ { "msg_contents": "Hi,\n\nI have data that I am taking from 2 tables, pulling out specific columns and inserting into one table.\n\nIs it more efficient to do:\na) insert into x\n select z from y;\n insert into x\n select z from a;\n\nb) insert into x\n select z from y\n union all\n select z from a;\n\nI have run both through explain.\na) 650ms\nb) 741.57ms\n\nAccording to the planner option a, select z from y takes 545.93 ms\nUnder option b select z from y takes 553.34 ms\n\nShouldn't the time predicted for the select z from y be the same?\n\nI would believe b would be more efficient as the inserts could be done in a batch rather than individual transactions but the planner doesn't recognize that. When I run option a through the planner I have to highlight each insert separately since the planner stops executing after the first ; it comes across.\n\nMike\n", "msg_date": "Fri, 17 Dec 2004 12:52:01 -0600", "msg_from": "\"Mike G.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Which is more efficient?" }, { "msg_contents": "A long time ago, in a galaxy far, far away, [email protected] (\"Mike G.\") wrote:\n> Hi,\n>\n> I have data that I am taking from 2 tables, pulling out specific columns and inserting into one table.\n>\n> Is it more efficient to do:\n> a) insert into x\n> select z from y;\n> insert into x\n> select z from a;\n>\n> b) insert into x\n> select z from y\n> union all\n> select z from a;\n>\n> I have run both through explain.\n> a) 650ms\n> b) 741.57ms\n>\n> According to the planner option a, select z from y takes 545.93 ms\n> Under option b select z from y takes 553.34 ms\n>\n> Shouldn't the time predicted for the select z from y be the same?\n\nNo, these are approximations. They can't be expected to be identical,\nand as you can see there's no material difference, as 545.93 only\ndiffers from 553.34 by 1.34%.\n\nThe point of EXPLAIN is to show the query _plans_ so you can evaluate\nhow sane they seem. They're pretty well identical, so EXPLAIN's doing\nwhat might be expected.\n\n> I would believe b would be more efficient as the inserts could be\n> done in a batch rather than individual transactions but the planner\n> doesn't recognize that. When I run option a through the planner I\n> have to highlight each insert separately since the planner stops\n> executing after the first ; it comes across.\n\nThe case where there would be a _material_ difference would be where\nthere were hardly any rows in either of the tables you're adding in,\nand in that case, query planning becomes a significant cost, at which\npoint simpler is probably better.\n\nIf you do the queries in separate transactions, there's some addition\nof cost of COMMIT involved, but if they can be kept in a single\ntransaction, the approaches oughtn't be materially different in cost,\nand that's what you're finding.\n-- \nselect 'cbbrowne' || '@' || 'gmail.com';\nhttp://www3.sympatico.ca/cbbrowne/x.html\nMICROS~1: Where do you want to go today? Linux: Been there, done\nthat.\n", "msg_date": "Fri, 17 Dec 2004 21:55:27 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which is more efficient?" } ]
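If the concern with option (a) is only the cost of committing twice, the two inserts can share one transaction, using the thread's own table names:

  BEGIN;
  INSERT INTO x SELECT z FROM y;
  INSERT INTO x SELECT z FROM a;
  COMMIT;

As the reply notes, at this data size any remaining difference between this and the UNION ALL form is unlikely to be measurable.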
[ { "msg_contents": "Hi All,\nI notice that most statistics Postgres provides are per table or per process.\nIs it possible to monitor CPU time and IO per transaction? If not, are there\nany commercial capacity planning tools available?\nThanks!\n", "msg_date": "Fri, 17 Dec 2004 13:10:46 -0800 (PST)", "msg_from": "Stan Y <[email protected]>", "msg_from_op": true, "msg_subject": "Monitor CPU time per transaction?" } ]
[ { "msg_contents": "\nAny advice for settings for extremely IO constrained systems?\n\nA demo I've set up for sales seems to be spending much of it's time in \ndisk wait states.\n\n\nThe particular system I'm working with is:\n Ext3 on Debian inside Microsoft VirtualPC on NTFS\n on WindowsXP on laptops of our sales team.\nSomewhat surprisingly, CPU performance is close to native; but disk IO \nis much worse - probably orders of magnitude worse - since there are\nso many layers of filesystems involved. Unfortunately, no, I don't \nthink the sales guys will upgrade to BSD. :)\n\nThe database is too large to fit entirely in memory (3GB of spatial data\nusing PostGIS); and has relative large updates (people can add \"layers\" \nconsisting of perhaps 10000 points, lines, and polygons out of a million\nor so possibilities - they do this by doing 10K inserts into tables with \npostgis geometry columns).\n\n\nSteps I've already done:\n\n * Gave virtual PC as much memory as possible (1/2 gig)\n\n * Tuned postgresql.conf; setting\n increased effective_cache_size to 10000\n (tested a few values with this workload)\n reduced cpu_index_tuple_cost to 0.0005\n (encourages indexes which may reduce disk hits)\n decreased random_page_cost to 2\n (seems the fragmented NTFS means many sequential\n access are probably a random access anyway)\n increased work_mem to 15000\n (sorting on disk was very VERY amazingly slow)\n increased shared_buffers to 3000\n (guess)\n\n * Tuned ext3 (yeah, I'll try JFS or XFS next)\n Journal_data_writeback == minimize journaling?\n commit=600,noatime in fstab\n * tuned the VM\n echo 60000 > /proc/sys/vm/dirty_expire_centisecs\n echo 70 > /proc/sys/vm/dirty_ratio\n\nIt seems for this workload, the two biggest benefits were\n \"commit=600\" and writeback for ext3\nand\n \"echo 60000 > /proc/sys/vm/dirty_expire_centisecs\"\n\nIf I understand right, this combination says that dirty pages can sit in \nmemory far longer than the defaults -- and I guess this delays my bad IO\ntimes to the point in the salesguys presentation when he's playing with \npowerpoint:).\n\nMuch of this tuning was guesswork; but it did make the demo go from\n\"unacceptable\" to \"reasonable\". Were any of my guesses particularly\nbad, and may be doing more harm than good?\n\nAny more ideas on how to deal with a pathologically slow IO system?\n\n Ron\n\n", "msg_date": "Fri, 17 Dec 2004 23:51:12 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Tips for a system with _extremely_ slow IO?" }, { "msg_contents": "On Fri, Dec 17, 2004 at 11:51:12PM -0800, Ron Mayer wrote:\n> Any advice for settings for extremely IO constrained systems?\n> \n> A demo I've set up for sales seems to be spending much of it's time in \n> disk wait states.\n> \n> \n> The particular system I'm working with is:\n> Ext3 on Debian inside Microsoft VirtualPC on NTFS\n> on WindowsXP on laptops of our sales team.\n> Somewhat surprisingly, CPU performance is close to native; but disk IO \n> is much worse - probably orders of magnitude worse - since there are\n> so many layers of filesystems involved. Unfortunately, no, I don't \n> think the sales guys will upgrade to BSD. 
:)\n> \n> The database is too large to fit entirely in memory (3GB of spatial data\n> using PostGIS); and has relative large updates (people can add \"layers\" \n> consisting of perhaps 10000 points, lines, and polygons out of a million\n> or so possibilities - they do this by doing 10K inserts into tables with \n> postgis geometry columns).\n\nI've found VirtualPC to be somewhat slower than VMWare for some things (and\nfaster for others) and less friendly to a Linux guest OS. Try an identical\nbuild running inside VMWare.\n\nCan you run the VM using a native disk partition, rather than one emulated\nby a big NTFS file?\n\nEven if your application needs to run under Linux, can you run the\ndatabase directly on XP (8.0RC2 hot off the presses...) and connect to\nit from the Linux VM?\n\nCheers,\n Steve\n", "msg_date": "Tue, 21 Dec 2004 06:28:18 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tips for a system with _extremely_ slow IO?" }, { "msg_contents": "On Fri, 2004-12-17 at 23:51 -0800, Ron Mayer wrote:\n> Any advice for settings for extremely IO constrained systems?\n> \n> A demo I've set up for sales seems to be spending much of it's time in \n> disk wait states.\n> \n> \n> The particular system I'm working with is:\n> Ext3 on Debian inside Microsoft VirtualPC on NTFS\n> on WindowsXP on laptops of our sales team.\n\nAs this is only for demo purposes, you might consider turning fsync off,\nalthough I have no idea if it would have any effect on your setup.\n\ngnari\n\n\n", "msg_date": "Tue, 21 Dec 2004 18:29:56 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tips for a system with _extremely_ slow IO?" } ]
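Collected in one place, the postgresql.conf side of the tuning described in this thread looks roughly like the sketch below. The values are the ones Ron reported trying, plus Ragnar's demo-only fsync suggestion; treat them as starting points for a pathologically slow VirtualPC disk, not as general recommendations:

  # postgresql.conf sketch for the IO-starved demo machine
  shared_buffers = 3000
  work_mem = 15000                # called sort_mem before 8.0
  effective_cache_size = 10000
  random_page_cost = 2
  cpu_index_tuple_cost = 0.0005
  fsync = false                   # demo only: unsafe if the VM crashes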
[ { "msg_contents": "Under postgres 7.3 logging is incredibly slow!\n\nI have applied the following settings:\n\nsyslog = 2\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n \n log_connections = true \nlog_duration = true \nlog_pid = true \nlog_statement = true \nlog_timestamp = true \n \nThis severely impacted the performance of our production system, a search\npage which took 1-3 seconds now takes over 30, is this normal?\n \nI need to get some performance indicators from our production db, however I\ncant turn on logging with such performance degradation.\n \nTheo\n", "msg_date": "Mon, 20 Dec 2004 15:17:11 +1100", "msg_from": "Theo Galanakis <[email protected]>", "msg_from_op": true, "msg_subject": "PG Logging is Slow" }, { "msg_contents": "Theo Galanakis wrote:\n> Under postgres 7.3 logging is incredibly slow!\n> \n> I have applied the following settings:\n> \n> syslog = 2\n> syslog_facility = 'LOCAL0'\n> syslog_ident = 'postgres'\n> \n> log_connections = true \n> log_duration = true \n> log_pid = true \n> log_statement = true \n> log_timestamp = true \n> \n> This severely impacted the performance of our production system, a search\n> page which took 1-3 seconds now takes over 30, is this normal?\n> \n> I need to get some performance indicators from our production db, however I\n> cant turn on logging with such performance degradation.\n\nLinux syslog has this bad behavior of fsync'ing all log writes. See the\nsyslog manual page for a way to turn off the fsync.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 19 Dec 2004 23:31:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG Logging is Slow" }, { "msg_contents": "...and on Mon, Dec 20, 2004 at 03:17:11PM +1100, Theo Galanakis used the keyboard:\n> Under postgres 7.3 logging is incredibly slow!\n> \n> I have applied the following settings:\n> \n> syslog = 2\n> syslog_facility = 'LOCAL0'\n> syslog_ident = 'postgres'\n> \n> log_connections = true \n> log_duration = true \n> log_pid = true \n> log_statement = true \n> log_timestamp = true \n> \n> This severely impacted the performance of our production system, a search\n> page which took 1-3 seconds now takes over 30, is this normal?\n> \n> I need to get some performance indicators from our production db, however I\n> cant turn on logging with such performance degradation.\n> \n\nHi Theo,\n\nOne thing you should be sure about is that whichever logfile you have\nconfigured for the local0 facility is being written to asynchronously.\nSynchronous logging is REALLY expensive.\n\nIf you're using the standard syslogd, you can achieve that by prefixing\nthe filename in syslogd.conf with a dash. For example,\n\n local0.*\t\t/var/log/postgresql.log\n\nwould become\n\n local0.*\t\t-/var/log/postgresql.log\n\nOne other option would be to turn off syslog logging completely and let\npostmaster take care of the log on its own, which may or may not be\npossible for you, depending on the policy in effect (remote logging, etc.).\n\nHope this helped,\n-- \n Grega Bremec\n gregab at p0f dot net", "msg_date": "Mon, 20 Dec 2004 05:48:35 +0100", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG Logging is Slow" }, { "msg_contents": "\nOn Mon, Dec 20, 2004 at 03:17:11PM +1100, Theo Galanakis wrote:\n> Under postgres 7.3 logging is incredibly slow!\n> \n> I have applied the following settings:\n> \n> syslog = 2\n> syslog_facility = 'LOCAL0'\n> syslog_ident = 'postgres'\n> \n> log_connections = true \n> log_duration = true \n> log_pid = true \n> log_statement = true \n> log_timestamp = true \n> \n> This severely impacted the performance of our production system, a search\n> page which took 1-3 seconds now takes over 30, is this normal?\n> \n> I need to get some performance indicators from our production db, however I\n> cant turn on logging with such performance degradation.\n\n\nI've experienced this problem many times due to hanging dns\nlookups. /etc/resolv.conf may point to a nonexistent\nnameserver. Comment it out and restart syslogd. Or use a syslog\nimplementation that allows you to disable dns lookups. Or just give\nthe nameserver a kick.\n\n -Mike Adler\n", "msg_date": "Mon, 20 Dec 2004 09:07:51 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG Logging is Slow" } ]
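One detail worth spelling out from the advice above: the dash in syslog.conf only takes effect once syslogd rereads its configuration, so the edit has to be followed by a reload. On a typical Red Hat-style installation (paths may differ) that looks roughly like:

    # /etc/syslog.conf -- leading dash = do not fsync after every line
    local0.*    -/var/log/postgresql.log

    # then make syslogd reread its configuration
    /etc/init.d/syslog restart
    # or: kill -HUP `cat /var/run/syslogd.pid`

If things are still slow after that, the stale-nameserver check in /etc/resolv.conf mentioned above is cheap to rule out.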
[ { "msg_contents": "Starting from 7.4.1 on P4 and FreeBSD 5.x (exclude 5.0 - gcc in this \nedition have optimization error)\nI use next configure command\n----------------------------------\n./configure --prefix=/opt/postgres-7.4.1 --with-pgport=5432 \n --with-pam --enable-syslog --enable-depend \n\t\t'CFLAGS= -O3 -pipe -mfpmath=sse -msse2 -msse \n\t\t-mmmx -march=pentium4 -mcpu=pentium4'\n---------------------------------\nw/o any problem.\nAs I remember improvement as always task depended and\nhave 30-100%.\n\nBest regards,\n Alexander Kirpa\n\n", "msg_date": "Mon, 20 Dec 2004 06:45:11 +0200", "msg_from": "\"Alexander Kirpa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Processor optimization compile options?" } ]
[ { "msg_contents": "Hi All,\n \nThanks to everyone for helping with my previous questions. \n \nI have a test database running on Postgres 7.3.2.\n \n version \n-------------------------------------------------------------\n PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC 2.96\n\nI have another server where a newer version of postgres that came with the Fedora Core 3 package installed.\n \nversion \n-------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 7.4.6 on i386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6)\n \nI would like to do a pg_dump on the test database, and restore it in the new database on Postgres 7.4.6. I would like to know if there would be any problem due to the postgres version/OS change. If so, could someone tell me what precautions I can take to avoid any problems?\n \nThanks in advance,\nSaranya\n \n \n \n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Yahoo! Mail - Find what you need with new enhanced search. Learn more.\nHi All,\n \nThanks to everyone for helping with my previous questions. \n \nI have a test database running on Postgres 7.3.2.\n \n version                           ------------------------------------------------------------- PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC 2.96\nI have another server where a newer version of postgres that came with the Fedora Core 3 package installed.\n \nversion                                                         ------------------------------------------------------------------------------------------------------------------------- PostgreSQL 7.4.6 on i386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6)\n \nI would like to do a pg_dump on the test database, and restore it in the new database on Postgres 7.4.6. I would like to know if there would be any problem due to the postgres version/OS change. If so, could someone tell me what precautions I can take to avoid any problems?\n \nThanks in advance,\nSaranya\n \n \n \nDo you Yahoo!? \nYahoo! Mail - Find what you need with new enhanced search. Learn more.", "msg_date": "Mon, 20 Dec 2004 06:40:34 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres version change - pg_dump" }, { "msg_contents": "am 20.12.2004, um 6:40:34 -0800 mailte sarlav kumar folgendes:\n> I would like to do a pg_dump on the test database, and restore it in\n> the new database on Postgres 7.4.6. I would like to know if there\n> would be any problem due to the postgres version/OS change. If so,\n\nNo. This is the usual way to upgrade the database.\n\n\n> could someone tell me what precautions I can take to avoid any\n> problems?\n\nYou can hold the old database ;-)\n\n\nRegards,\n-- \nAndreas Kretschmer (Kontakt: siehe Header)\n Tel. NL Heynitz: 035242/47212\nGnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net\n === Schollglas Unternehmensgruppe === \n", "msg_date": "Mon, 20 Dec 2004 15:54:16 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [despammed] Postgres version change - pg_dump" }, { "msg_contents": "On Mon, Dec 20, 2004 at 06:40:34 -0800,\n sarlav kumar <[email protected]> wrote:\n> \n> I would like to do a pg_dump on the test database, and restore it in the new database on Postgres 7.4.6. I would like to know if there would be any problem due to the postgres version/OS change. 
If so, could someone tell me what precautions I can take to avoid any problems?\n\nYou should use the 7.4.6 version of pg_dump to dump the old database. Note\nyou still need to be running the 7.3.2 server for the old database.\npg_dump will be just acting like a client connecting over the network\nand will work with older versions of the server.\n", "msg_date": "Mon, 20 Dec 2004 09:47:49 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres version change - pg_dump" }, { "msg_contents": "sarlav kumar wrote, On 2004-12-20 15:40:\n> I would like to do a pg_dump on the test database, and restore it in\n> the new database on Postgres 7.4.6. I would like to know if there\n> would be any problem due to the postgres version/OS change. If so,\n> could someone tell me what precautions I can take to avoid any\n> problems?\n\nApart from using the pg_dump from 7.4.6 (see Bruno's answer), you should\ntake care to use the same locale in the new database cluster. I have had\nproblems in the past with unique constraints that could not be restored\ndue to different locale settings. See here:\n\n http://www.spinics.net/lists/pgsql/msg05363.html\n\nIn my case it was not enough to create the database with a different\nencoding, I had to re-initdb the whole cluster :-/\n\n\ncheers,\nstefan\n", "msg_date": "Mon, 20 Dec 2004 18:07:47 +0100", "msg_from": "Stefan Weiss <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres version change - pg_dump" }, { "msg_contents": "Hi,\n \nI think I miscommunicated something. I am doing a pg_dump from Postgres 7.3.2. I am restoring it on Postgres 7.4.6 on Fedora Core 3 on a different server.\n \nI tried doing the dump and restoring it on the new DB. I did not have any problem with the UNIQUE contraint so far. But I got an error message saying table \"session\" does not exist, though it does exist in the old database(from where I did the dump). \n \nI also got another error saying \"user abc does not exist\". On the old DB I have different set of users, with different privileges granted to each of them on the tables. I guess I need to create these set of users in the new DB before doing the dump. Am I right?\n\nI am new to postgres administration. So I am not sure what you mean by \"same locale in the new database cluster\". Could you please explain or point me to a source where I can learn from?\n \nThanks a lot for the help,\nSaranya\n \n\n \n\n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Jazz up your holiday email with celebrity designs. Learn more.\n\nHi,\n \nI think I miscommunicated something. I am doing a pg_dump from Postgres 7.3.2. I am restoring it on Postgres 7.4.6 on Fedora Core 3 on a different server.\n \nI tried doing the dump and restoring it on the new DB. I did not have any problem with the UNIQUE contraint so far. But I got an error message saying table \"session\" does not exist, though it does exist in the old database(from where I did the dump). \n \nI also got another error saying \"user abc does not exist\". On the old DB I have different set of users, with different privileges granted to each of them on the tables. I guess I need to create these set of users in the new DB before doing the dump. Am I right?\nI am new to postgres administration. So I am not sure what you mean by \"same locale in the new database cluster\". Could you please explain or point me to a source where I can learn from?\n \nThanks a lot for the help,\nSaranya\n \n \nDo you Yahoo!? 
\nJazz up your holiday email with celebrity designs. Learn more.", "msg_date": "Mon, 20 Dec 2004 09:30:33 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres version change - pg_dump" }, { "msg_contents": "Hi,\n \n>From what I understand, I need to execute the pg_dump command from the new server( so that it will use the 7.4.6 version), but connect to the old DB. Am I right?\n \nThanks,\nSaranya\n\nBruno Wolff III <[email protected]> wrote:\nOn Mon, Dec 20, 2004 at 06:40:34 -0800,\nsarlav kumar wrote:\n> \n> I would like to do a pg_dump on the test database, and restore it in the new database on Postgres 7.4.6. I would like to know if there would be any problem due to the postgres version/OS change. If so, could someone tell me what precautions I can take to avoid any problems?\n\nYou should use the 7.4.6 version of pg_dump to dump the old database. Note\nyou still need to be running the 7.3.2 server for the old database.\npg_dump will be just acting like a client connecting over the network\nand will work with older versions of the server.\n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Jazz up your holiday email with celebrity designs. Learn more.\nHi,\n \nFrom what I understand, I need to execute the pg_dump command from the new server( so that it will use the 7.4.6 version), but connect to the old DB. Am I right?\n \nThanks,\nSaranyaBruno Wolff III <[email protected]> wrote:\nOn Mon, Dec 20, 2004 at 06:40:34 -0800,sarlav kumar wrote:> > I would like to do a pg_dump on the test database, and restore it in the new database on Postgres 7.4.6. I would like to know if there would be any problem due to the postgres version/OS change. If so, could someone tell me what precautions I can take to avoid any problems?You should use the 7.4.6 version of pg_dump to dump the old database. Noteyou still need to be running the 7.3.2 server for the old database.pg_dump will be just acting like a client connecting over the networkand will work with older versions of the server.\nDo you Yahoo!? \nJazz up your holiday email with celebrity designs. Learn more.", "msg_date": "Mon, 20 Dec 2004 09:34:06 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres version change - pg_dump" }, { "msg_contents": "am Mon, dem 20.12.2004, um 9:34:06 -0800 mailte sarlav kumar folgendes:\n> Hi,\n> \n> From what I understand, I need to execute the pg_dump command from the new\n> server( so that it will use the 7.4.6 version), but connect to the old DB. Am I\n> right?\n\nYes. Call from the new server pg_dump with the credentials for the old\nserver, in other words, use the new version of pg_dump to generate a\ndump from the old server.\n\n> \n> Thanks,\n> Saranya\n> \n> Bruno Wolff III <[email protected]> wrote:\n\nPlease, read http://www.netmeister.org/news/learn2quote.html\n\n\nRegards, Andreas\n-- \nDiese Message wurde erstellt mit freundlicher Unterst�tzung eines freilau-\nfenden Pinguins aus artgerechter Freilandhaltung. Er ist garantiert frei\nvon Micro$oft'schen Viren. (#97922 http://counter.li.org) GPG 7F4584DA\nWas, Sie wissen nicht, wo Kaufbach ist? 
Hier: N 51.05082�, E 13.56889� ;-)\n", "msg_date": "Mon, 20 Dec 2004 19:27:29 +0100", "msg_from": "Kretschmer Andreas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres version change - pg_dump" }, { "msg_contents": "Hi Sarlav,\n\n> From what I understand, I need to execute the pg_dump command from the \n> new server( so that it will use the 7.4.6 version), but connect to the \n> old DB. Am I right?\n\nBasically.\n\nThe truth is Sarlav, that any pg_dump version before the new 8.0 version \nis likely to have errors restoring. You should restore the dump like this:\n\npsql -f dump.sql database\n\nAnd then when you get errors, you will see the line number of the error. \n Then you can edit the dump to fix it.\n\nChris\n", "msg_date": "Tue, 21 Dec 2004 13:58:14 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres version change - pg_dump" }, { "msg_contents": "As others have already said, use the newer version of pg_dump and it should \ngo ok.\n\nI had lots of problems restoring 7.1 dumps into 7.4 database, but it goes \nsmoothly if I use the 7.4 version of pg_dump.\n\nAssuming you have 2 servers, the old one and a new one, call pg_dump from \nyour new server as follows:\n\npg_dump --username=postgres --host=192.168.x,x <other options>\n\nand use the IP address of the old server for the --host parameter.\n\nYou may need to edit the pg_hba.conf file on the old server to allow the \nconnection from the new server.\n\nThis is pretty convenient as you don't even have to copy the dump file from \nthe old server.\n\nI was thinking you could set up a backup server in this way. On a busy \nsystem, it may take a load of the main server so that running backups with \nusers online shouldn't be a problem. That's in theory anyway.\n\nregards\nIain\n\n ----- Original Message ----- \n From: sarlav kumar\n To: pgsqlnovice ; pgsqlperform\n Sent: Monday, December 20, 2004 11:40 PM\n Subject: [PERFORM] Postgres version change - pg_dump\n\n\n Hi All,\n\n Thanks to everyone for helping with my previous questions.\n\n I have a test database running on Postgres 7.3.2.\n\n version\n -------------------------------------------------------------\n PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC 2.96\n\n I have another server where a newer version of postgres that came with the \nFedora Core 3 package installed.\n\n version\n -------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 7.4.6 on i386-redhat-linux-gnu, compiled by GCC \ni386-redhat-linux-gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6)\n\n I would like to do a pg_dump on the test database, and restore it in the \nnew database on Postgres 7.4.6. I would like to know if there would be any \nproblem due to the postgres version/OS change. If so, could someone tell me \nwhat precautions I can take to avoid any problems?\n\n Thanks in advance,\n Saranya\n\n\n\n\n\n------------------------------------------------------------------------------\n Do you Yahoo!?\n Yahoo! Mail - Find what you need with new enhanced search. Learn more. 
\n\n\n\n\n\n\n\nAs others have already said, use the newer \nversion of pg_dump and it should go ok.\n \nI had lots of problems restoring 7.1 dumps \ninto 7.4 database, but it goes smoothly if I use the 7.4 version of \npg_dump.\n \nAssuming you have 2 servers, the old one \nand a new one, call pg_dump from your new server as follows:\n \npg_dump --username=postgres \n--host=192.168.x,x  <other options>\n \nand use the IP address of the old server \nfor the --host parameter.\n \nYou may need to edit the pg_hba.conf file \non the old server to allow the connection from the new server.\n \nThis is pretty convenient as you don't \neven have to copy the dump file from the old server. \n \nI was thinking you could set up a \nbackup server in this way. On a busy system, it may take a load of the main \nserver so that running backups with users online shouldn't be a problem. That's \nin theory anyway. \n \nregards\nIain\n \n\n----- Original Message ----- \nFrom:\nsarlav kumar\n\nTo: pgsqlnovice ; pgsqlperform \nSent: Monday, December 20, 2004 \n 11:40 PM\nSubject: [PERFORM] Postgres \n version change - pg_dump\n\nHi All,\n \nThanks to everyone for helping with my previous questions. \n \nI have a test database running on Postgres 7.3.2.\n \n version                           \n ------------------------------------------------------------- PostgreSQL \n 7.3.2 on i686-pc-linux-gnu, compiled by GCC 2.96\nI have another server where a newer version of postgres that came with \n the Fedora Core 3 package installed.\n \nversion                                                         \n ------------------------------------------------------------------------------------------------------------------------- PostgreSQL \n 7.4.6 on i386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) \n 3.4.2 20041017 (Red Hat 3.4.2-6)\n \nI would like to do a pg_dump on the test database, and restore it in the \n new database on Postgres 7.4.6. I would like to know if there would be any \n problem due to the postgres version/OS change. If so, could someone tell me \n what precautions I can take to avoid any problems?\n \nThanks in advance,\nSaranya\n \n \n \n\n\n Do you Yahoo!?Yahoo! Mail - Find what you need with new enhanced search. \n Learn \n more.", "msg_date": "Tue, 21 Dec 2004 16:09:55 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres version change - pg_dump" } ]
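To make the procedure discussed in this thread concrete, here is a sketch run entirely from the new (7.4.6) machine; the address, user and database name are placeholders, and the old server's pg_hba.conf must allow the connection. The pg_dumpall -g step dumps users and groups, which addresses the "user abc does not exist" errors mentioned earlier:

    # global objects (users, groups) first, using the newer client tools
    pg_dumpall -g -h 192.168.x.x -U postgres > globals.sql
    psql -f globals.sql template1

    # then the database itself: the new pg_dump talking to the old 7.3 server
    pg_dump -h 192.168.x.x -U postgres test > test.sql
    createdb test
    psql -f test.sql test 2> restore-errors.log

Restoring with psql -f, as suggested above, leaves the failing line numbers in restore-errors.log for hand-editing, and Stefan's locale/encoding caveat still applies when the new cluster is initdb'ed.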
[ { "msg_contents": "Hi All,\n \nI installed slony1.0.5 and tried the example replication of pgbench database. That seemed to work. Now I need to replicate a DB running on a different server. slony1.0.5 is installed on the Fedora core 3 machine where Postgres 7.4.6 is installed. I have to replicate the 'test' database installed on a different machine using Postgres 7.3.2.\n \nIn the instructions to replicate the pgbench example, there is script file to create the initial configuration for the master-slave setup of the pgbench database. Is this the script file that has to be modified accordingly, to replicate my 'test' DB. And ofcourse, the shell variables have to be changed to indicate the correct location of the master and slave DBs. Am I right?\n \nAlso, in the script, the following lines are used to create sets of tables:\n# Slony-I organizes tables into sets. The smallest unit a node can\n # subscribe is a set. The following commands create one set containing\n # all 4 pgbench tables. The master or origin of the set is node 1.\n\t#--\n\tcreate set (id=1, origin=1, comment='All pgbench tables');\n\tset add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');\n\tset add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');\n\tset add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');\n\tset add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);\n\n\t#--\n\nCan this be skipped? I have over 200 tables, and I am not sure if I have to list each of them in the \"set add table\" part of the scripts file. \n \nDo I need to change any of the other scripts file in the example?\n \nThanks in advance,\nSaranya\n \n \n \n \n \n \n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Send a seasonal email greeting and help others. Do good.\nHi All,\n \nI installed slony1.0.5 and tried the example replication of pgbench database. That seemed to work. Now I need to replicate a DB running on a different server. slony1.0.5 is installed on the Fedora core 3 machine where Postgres 7.4.6 is installed. I have to replicate the 'test' database installed on a different machine using Postgres 7.3.2.\n \nIn the instructions to replicate the pgbench example, there is script file to create the initial configuration for the master-slave setup of the pgbench database. Is this the script file that has to be modified accordingly, to replicate my 'test' DB. And ofcourse, the shell variables have to be changed to indicate the correct location of the master and slave DBs. Am I right?\n \nAlso, in the script, the following lines are used to create sets of tables:\n# Slony-I organizes tables into sets.  The smallest unit a node can    # subscribe is a set.  The following commands create one set containing    # all 4 pgbench tables.  The master or origin of the set is node 1.\t#--\tcreate set (id=1, origin=1, comment='All pgbench tables');\tset add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');\tset add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');\tset add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');\tset add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);\t#--\nCan this be skipped? 
I have over 200 tables, and I am not sure if I have to list each of them in the \"set add table\" part of the scripts file. \n \nDo I need to change any of the other scripts file in the example?\n \nThanks in advance,\nSaranya\n \n \n \n \n \n \nDo you Yahoo!? \nSend a seasonal email greeting and help others. Do good.", "msg_date": "Mon, 20 Dec 2004 14:25:26 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "slony replication" }, { "msg_contents": "[email protected] (sarlav kumar) writes:\n> I installed slony1.0.5 and tried the example replication of pgbench\n> database. That seemed to work. Now I�need to�replicate a DB running\n> on a different server. slony1.0.5 is installed on the Fedora core 3\n> machine where Postgres 7.4.6 is installed. I have to replicate the\n> 'test' database installed on a different machine using Postgres\n> 7.3.2.\n\nSlony-I does not support versions of PostgreSQL earlier than 7.3.3.\nAs 7.3.2 is earlier than 7.3.3, I wouldn't expect that to work.\n\n> In the instructions to replicate the pgbench example, there is\n> script file to create the initial configuration for the master-slave\n> setup of�the pgbench database. Is this the script file that has to\n> be modified accordingly, to replicate my 'test' DB. And ofcourse,\n> the shell variables have to be changed to indicate the correct\n> location of the master and slave DBs. Am I right?\n\nYes, that would be right.\n\n> Also, in the script, the following lines are used to create sets of tables:\n>\n> # Slony-I organizes tables into sets.� The smallest unit a node can\n> ��� # subscribe is a set.� The following commands create one set containing\n> ��� # all 4 pgbench tables.� The master or origin of the set is node 1.\n> #--\n> create set (id=1, origin=1, comment='All pgbench tables');\n> set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');\n> set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');\n> set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');\n> set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key =\n> serial);\n> #--\n>\n> Can this be skipped? I have over 200 tables, and I am not sure if I\n> have to list each of them in the \"set add table\" part of the scripts\n> file.\n\nNo, you cannot \"skip\" this. You _must_ submit slonik requests to add\neach and every table that you wish to replicate to the replication\nset.\n\nIf there are 220 tables, you'll need something rather close to 220\n\"set add table\" requests.\n\n> Do I need to change any of the other scripts file in the example?\n\nMaybe, depending on what you're trying to do.\n-- \n\"cbbrowne\",\"@\",\"ca.afilias.info\"\n<http://linuxdatabases.info/info/slony.html>\nChristopher Browne\n(416) 673-4124 (land)\n", "msg_date": "Mon, 20 Dec 2004 18:31:04 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slony replication" }, { "msg_contents": "I didn't see any responses to this, but given it is off topic for both groups \nthat wouldn't surprise me. In the future please direct these questions to the \nslony project mailing lists. \n\nOn Monday 20 December 2004 17:25, sarlav kumar wrote:\n> Hi All,\n>\n> I installed slony1.0.5 and tried the example replication of pgbench\n> database. That seemed to work. 
Now I need to replicate a DB running on a\n> different server. slony1.0.5 is installed on the Fedora core 3 machine\n> where Postgres 7.4.6 is installed. I have to replicate the 'test' database\n> installed on a different machine using Postgres 7.3.2.\n>\n> In the instructions to replicate the pgbench example, there is script file\n> to create the initial configuration for the master-slave setup of the\n> pgbench database. Is this the script file that has to be modified\n> accordingly, to replicate my 'test' DB. And ofcourse, the shell variables\n> have to be changed to indicate the correct location of the master and slave\n> DBs. Am I right?\n>\n\nMore or less. The scripts provided are just examples, but you can modify them \nto suite your einvironment rather than write your own. \n\n> Also, in the script, the following lines are used to create sets of tables:\n> # Slony-I organizes tables into sets. The smallest unit a node can\n> # subscribe is a set. The following commands create one set containing\n> # all 4 pgbench tables. The master or origin of the set is node 1.\n> #--\n> create set (id=1, origin=1, comment='All pgbench tables');\n> set add table (set id=1, origin=1, id=1, fully qualified name =\n> 'public.accounts', comment='accounts table'); set add table (set id=1,\n> origin=1, id=2, fully qualified name = 'public.branches', comment='branches\n> table'); set add table (set id=1, origin=1, id=3, fully qualified name =\n> 'public.tellers', comment='tellers table'); set add table (set id=1,\n> origin=1, id=4, fully qualified name = 'public.history', comment='history\n> table', key = serial);\n>\n> #--\n>\n> Can this be skipped? I have over 200 tables, and I am not sure if I have to\n> list each of them in the \"set add table\" part of the scripts file.\n>\n\nnope, you have to do them all, and dont forget the sequences. easiest way i \nfound was to generate the list programatically around a select * from \npg_class with appropriate where clause to get just the desired tables. \n\n> Do I need to change any of the other scripts file in the example?\n>\n\nChances are yes, since those scripts were written for the example scenario \nprovided, and your environment is sure to be different. Again, post to the \nslony mailing lists if you need more help. \n\n-- \nRobert Treat\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Tue, 28 Dec 2004 22:57:28 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] slony replication" } ]
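Robert's suggestion of generating the "set add table" lines from the catalog can be sketched as follows. This is only an illustration: it assumes the 200-odd tables live in the public schema and all have primary keys (tables without one need an explicit key = ... clause), sequences still need their own "set add sequence" lines, and the helper sequence name is made up:

    -- run in the origin database; emits one slonik line per table
    CREATE SEQUENCE slony_tab_id;

    SELECT 'set add table (set id=1, origin=1, id=' || nextval('slony_tab_id')::text
           || ', fully qualified name = ''' || n.nspname || '.' || c.relname
           || ''', comment=''' || c.relname || ' table'');'
    FROM pg_class c
         JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'public'
      AND c.relkind = 'r'
    ORDER BY c.relname;

    DROP SEQUENCE slony_tab_id;

Running it with psql -At and pasting the output into the slonik script avoids typing a couple of hundred lines by hand.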
[ { "msg_contents": "Thank-you Grega,\n\n\tI ended up using the pg_ctl -l parameter to write the output to a\nspecified file. Much quicker to do so.\n\n\tI tried the -/var/log/postgresql.log option however I noticed no\nperformance improvement. May be the fact that we use redhad linux and\nsyslog, I'm no sys-admin, so I'm not sure if there is a difference between\nsyslogd and syslog.\n\nTheo\n\n-----Original Message-----\nFrom: Grega Bremec [mailto:[email protected]] \nSent: Monday, 20 December 2004 3:49 PM\nTo: Theo Galanakis\nCc: [email protected]\nSubject: Re: [PERFORM] PG Logging is Slow\n\n\n...and on Mon, Dec 20, 2004 at 03:17:11PM +1100, Theo Galanakis used the\nkeyboard:\n> Under postgres 7.3 logging is incredibly slow!\n> \n> I have applied the following settings:\n> \n> syslog = 2\n> syslog_facility = 'LOCAL0'\n> syslog_ident = 'postgres'\n> \n> log_connections = true\n> log_duration = true \n> log_pid = true \n> log_statement = true \n> log_timestamp = true \n> \n> This severely impacted the performance of our production system, a \n> search page which took 1-3 seconds now takes over 30, is this normal?\n> \n> I need to get some performance indicators from our production db, \n> however I cant turn on logging with such performance degradation.\n> \n\nHi Theo,\n\nOne thing you should be sure about is that whichever logfile you have\nconfigured for the local0 facility is being written to asynchronously.\nSynchronous logging is REALLY expensive.\n\nIf you're using the standard syslogd, you can achieve that by prefixing the\nfilename in syslogd.conf with a dash. For example,\n\n local0.*\t\t/var/log/postgresql.log\n\nwould become\n\n local0.*\t\t-/var/log/postgresql.log\n\nOne other option would be to turn off syslog logging completely and let\npostmaster take care of the log on its own, which may or may not be possible\nfor you, depending on the policy in effect (remote logging, etc.).\n\nHope this helped,\n-- \n Grega Bremec\n gregab at p0f dot net\n\n\n______________________________________________________________________\nThis email, including attachments, is intended only for the addressee\nand may be confidential, privileged and subject to copyright. If you\nhave received this email in error, please advise the sender and delete\nit. If you are not the intended recipient of this email, you must not\nuse, copy or disclose its content to anyone. You must not copy or \ncommunicate to others content that is confidential or subject to \ncopyright, unless you have the consent of the content owner.\n\n\n\n\nRE: [PERFORM] PG Logging is Slow\n\n\nThank-you Grega,\n\n        I ended up using the pg_ctl -l parameter to write the output to a specified file. Much quicker to do so.\n\n        I tried the -/var/log/postgresql.log option however I noticed no performance improvement. 
May be the fact that we use redhad linux and syslog, I'm no sys-admin, so I'm not sure if there is a difference between syslogd and syslog.\nTheo\n\n-----Original Message-----\nFrom: Grega Bremec [mailto:[email protected]] \nSent: Monday, 20 December 2004 3:49 PM\nTo: Theo Galanakis\nCc: [email protected]\nSubject: Re: [PERFORM] PG Logging is Slow\n\n\n...and on Mon, Dec 20, 2004 at 03:17:11PM +1100, Theo Galanakis used the keyboard:\n> Under postgres 7.3 logging is incredibly slow!\n> \n> I have applied the following settings:\n> \n> syslog = 2\n> syslog_facility = 'LOCAL0'\n> syslog_ident = 'postgres'\n>  \n>  log_connections =  true\n> log_duration =  true \n> log_pid =  true \n> log_statement =  true \n> log_timestamp =  true \n>  \n> This severely impacted the performance of our production system, a \n> search page which took 1-3 seconds now takes over 30, is this normal?\n>  \n> I need to get some performance indicators from our production db, \n> however I cant turn on logging with such performance degradation.\n>  \n\nHi Theo,\n\nOne thing you should be sure about is that whichever logfile you have configured for the local0 facility is being written to asynchronously. Synchronous logging is REALLY expensive.\nIf you're using the standard syslogd, you can achieve that by prefixing the filename in syslogd.conf with a dash. For example,\n    local0.*            /var/log/postgresql.log\n\nwould become\n\n    local0.*            -/var/log/postgresql.log\n\nOne other option would be to turn off syslog logging completely and let postmaster take care of the log on its own, which may or may not be possible for you, depending on the policy in effect (remote logging, etc.).\nHope this helped,\n-- \n    Grega Bremec\n    gregab at p0f dot net", "msg_date": "Tue, 21 Dec 2004 09:40:35 +1100", "msg_from": "Theo Galanakis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG Logging is Slow" }, { "msg_contents": "Theo,\n\n > \tI tried the -/var/log/postgresql.log option however I noticed no\n > performance improvement. May be the fact that we use redhad linux and\n > syslog, I'm no sys-admin, so I'm not sure if there is a difference \nbetween\n > syslogd and syslog.\n\nDid you restart syslogd (that's the server process implementing the \nsyslog (= system log) service) after you changed its configuration?\n\nIn order to do so, try running\n\n/etc/init.d/syslog restart\n\nas root from a commandline.\n\nHTH\n\nAlex\n", "msg_date": "Tue, 21 Dec 2004 10:09:06 +1100", "msg_from": "Alexander Borkowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG Logging is Slow" } ]
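For the record, the stderr-based route Theo ended up with can be sketched like this for 7.3/7.4; the paths are illustrative. With syslog out of the loop, log writes become ordinary buffered appends to the file handed to pg_ctl:

    # postgresql.conf: send log output to the postmaster's stderr only
    syslog = 0
    log_connections = true
    log_duration = true
    log_statement = true
    log_timestamp = true

    # start the postmaster with stderr captured in a file
    pg_ctl -D /var/lib/pgsql/data -l /var/log/pgsql/postgresql.log start

The trade-off is rotation: the postmaster keeps that file open, so rotating it needs a copytruncate-style tool or a restart, whereas the syslog route leaves rotation to the system's usual log handling.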
[ { "msg_contents": "\nHello. \nI tried performance test of version 8.0.0 beta4 by osdl-dbt-1.\nThe result is that throughput of version 8 fell to about 70 percent \ncompared with V7.4.6.\nTest result is below. (measurement was repeated 3 times)\n\n-------------------------------------------------------\nHardware spec\n CPU   Pentium � 531.986 MHz\n Memory  125660 Kb\n-------------------------------------------------------\nSoftware version\n OS Linux Kernel:2.4.21-4.EL\n Distribution: Red Hat Enterprise Linux AS release 3\n-------------------------------------------------------\nResult(transaction per second)\n first second third\n 7.4.6   6.8 6.8 5.9\n 8.0 beta4 4.3 4.6  4.4\n-------------------------------------------------------\nParameter of DBT1,PostgreSQL\n Database Size (items): 1000\n Database Size (customers): 10\n number of cache: 1\n number of connections between cache and database: 10\n number of application server: 1\n number of connections between server and database: 20\n number of drivers: 1\n eus/driver: 100\n rampuprate/driver: 60\n duration/driver: 900\n thinktime/driver: 1.6\n Put WAL on different driver: 0\n Put pgsql_tmp on different driver: 0\n database parameters: -i -c listen_addresses='*'\n shmmax: 33554432\n-------------------------------------------------------\n\n- Both version 8 and version 7 are performed under the same condition.\n- Tuning of adjustment of a parameter was not carried out at all.\n- The server and client process are performed in the same machine.\n\nIs there the weak point on the performance in version 8.0.0 ? \n\nAny help would greatly appreciated.\n\n kondou\n", "msg_date": "Tue, 21 Dec 2004 10:42:10 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Question of performance of version 8" }, { "msg_contents": "[email protected] writes:\n> I tried performance test of version 8.0.0 beta4 by osdl-dbt-1.\n> The result is that throughput of version 8 fell to about 70 percent \n> compared with V7.4.6.\n\nbeta4 is a little bit back ...\n\nI don't have dbt1 at hand, but I tried pg_bench on PG 7.4.6 against 8.0rc2\njust now. For the test case\n\n\tpgbench -i -s 10 bench\n\tpgbench -c 10 -t 10000 bench\n\non 7.4.6 I get:\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 65.714249 (including connections establishing)\ntps = 65.716363 (excluding connections establishing)\n\non 8.0rc2 I get:\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 107.379742 (including connections establishing)\ntps = 107.385301 (excluding connections establishing)\n\nThis is on a Fedora Core 3 machine, generic three-year-old PC with cheap\njunk IDE drive that lies about write completion (but I'm running fsync\noff so that hardly matters ;-)). And I didn't change any of the default\npostgresql.conf settings, just started the postmaster with -F in both\ncases. So I wouldn't claim that it's very representative of real-world\nperformance on real-world server hardware. 
But I'm not seeing a serious\nfalloff from 7.4 to 8.0 here --- more the other way 'round.\n\nPlease try dbt1 with 8.0rc2, and let us know if you still see a problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Dec 2004 22:54:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question of performance of version 8 " } ]
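The original report notes that no tuning was done and the machine has only about 128MB of RAM, so both servers were running with conservative defaults. If the test is repeated with 8.0rc2 as suggested, giving both versions the same modest settings keeps the comparison apples-to-apples; the values below are only rough guesses for a 128MB host, not dbt-1-specific recommendations:

    # identical postgresql.conf settings for the 7.4.6 and 8.0 runs
    shared_buffers = 2000          # about 16MB
    effective_cache_size = 8000    # about 64MB assumed in the OS page cache
    checkpoint_segments = 6        # fewer checkpoints during the 900s steady state
    wal_buffers = 16

together with a VACUUM ANALYZE before each measured run on both versions.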
[ { "msg_contents": "I have recently transfered a big database on my master node of a 4 node openSSI Cluster... The system is working fine but sometimes, I get following errors:\n\nhttp://192.168.1.100/cgi-bin/search.py\n File \"/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py\", line 3067, in execute, referer: http://192.168.1.100/cgi-bin/search.py\n self.conn.conn.query('ROLLBACK WORK'), referer: http://192.168.1.100/cgi-bin/search.py\nlibpq.ProgrammingError: no connection to the server, referer: http://192.168.1.100/cgi-bin/search.py\n, referer: http://192.168.1.100/cgi-bin/search.py\n\nThis error comes while insertion of data takes place...\nIs Postgres successfull on Cluster??? Will that give me performance enhancement in any way??? Please help...\n\n-- \nThanks and Regards,\nGSS\n\nI have recently transfered a big database on my master node of a 4 node openSSI Cluster... The system is working fine but sometimes, I get following errors:\n\nhttp://192.168.1.100/cgi-bin/search.py\n  File \"/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py\", line 3067, in execute, referer: http://192.168.1.100/cgi-bin/search.py\n    self.conn.conn.query('ROLLBACK WORK'), referer: http://192.168.1.100/cgi-bin/search.py\nlibpq.ProgrammingError: no connection to the server, referer: http://192.168.1.100/cgi-bin/search.py\n, referer: http://192.168.1.100/cgi-bin/search.py\n\nThis error comes while insertion of data takes place...\nIs Postgres successfull on Cluster??? Will that give me performance enhancement in any way??? Please help...\n\n-- \nThanks and Regards,\nGSS", "msg_date": "21 Dec 2004 05:02:56 -0000", "msg_from": "\"Gurpreet Sachdeva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres on Linux Cluster!" }, { "msg_contents": "Gurpreet Sachdeva wrote:\n> I have recently transfered a big database on my master node of a 4\n> node openSSI Cluster... The system is working fine but sometimes, I\n> get following errors:\n> \n> http://192.168.1.100/cgi-bin/search.py File\n> \"/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py\", line 3067, in\n> execute, referer: http://192.168.1.100/cgi-bin/search.py \n> self.conn.conn.query('ROLLBACK WORK'), referer:\n> http://192.168.1.100/cgi-bin/search.py libpq.ProgrammingError: no\n> connection to the server, referer:\n> http://192.168.1.100/cgi-bin/search.py , referer:\n> http://192.168.1.100/cgi-bin/search.py\n\nAt a wild guess, this happens when a CGI process is migrated to another \nnode without migrating the accompanying connection (however you'd do that).\n\n> This error comes while insertion of data takes place... Is Postgres\n> successfull on Cluster??? Will that give me performance enhancement\n> in any way??? Please help...\n\nProbably not, and almost certainly not.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 21 Dec 2004 11:21:07 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres on Linux Cluster!" } ]
[ { "msg_contents": "Thank you for your attention.\nI will try again with new postgres release and\n examine access method of sql with explain command.\n kondo\n", "msg_date": "Tue, 21 Dec 2004 14:04:08 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Question of performance of version 8" } ]